Understanding Ultra-Dense Cellular Networks: Multi-slope Path Loss Models and Analysis

22 Jan 2015

Existing cellular network analyses, and even simulations, typically use the standard path loss model in which received power decays as 1/d^x over a distance d, with a path loss exponent x. This model leads to tractable analysis of downlink cellular network performance when base stations are distributed according to a Poisson point process. However, it is widely known that this standard path loss model is quite idealized, and that in most scenarios the path loss exponent x is itself a function of d. This becomes particularly important as networks grow denser, since the path loss exponents governing many nearby transmissions (and their interference) may be small, even less than two in some cases. Such low path loss exponents have significant implications for the gains from network densification, including in millimeter wave spectrum.
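To make the contrast concrete, the short Python sketch below compares the standard single-slope model with a simple dual-slope variant whose exponent changes at a break distance. The exponents (2 and 4) and the 50 m break distance are illustrative assumptions, not values taken from the paper.

    import numpy as np

    def single_slope(d, alpha=4.0):
        # Standard model: received power falls off as 1/d^alpha at every distance.
        return np.asarray(d, dtype=float) ** -alpha

    def dual_slope(d, alpha0=2.0, alpha1=4.0, Rc=50.0):
        # Illustrative dual-slope model: a small near-field exponent alpha0
        # inside the break distance Rc, a larger far-field exponent alpha1
        # beyond it, matched at Rc so the function stays continuous.
        d = np.asarray(d, dtype=float)
        near = d ** -alpha0
        far = Rc ** (alpha1 - alpha0) * d ** -alpha1
        return np.where(d <= Rc, near, far)

    distances = [10.0, 50.0, 200.0]
    print(single_slope(distances))  # decays quickly at every distance
    print(dual_slope(distances))    # decays slowly up close, quickly far away

Up close, the dual-slope function falls off much more slowly than the single-slope one, and in an ultra-dense deployment it is precisely these short links (and short-range interferers) that dominate.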

In this work, WNCG Postdoctoral Associate Xinchen Zhang and Prof. Jeffrey Andrews introduced novel analytical techniques for multi-slope path loss models, in which different distance ranges are subject to different path loss exponents. The team focused on the dual-slope path loss function, deriving the coverage probability (relative to an SINR or SIR target) and the potential throughput. The exact mathematical results show that the SIR monotonically decreases with network density and that, as a consequence, network coverage is maximized at some finite density. The WNCG researchers also observed surprising asymptotic behavior under ultra-densification, i.e., as the base station density approaches infinity: in some cases, the per-node coverage probability and/or rate approach zero.
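A rough way to see this densification effect numerically is a Monte Carlo experiment: drop base stations from a Poisson point process, apply a dual-slope path loss, and estimate the SIR coverage probability of a typical user at the origin. The sketch below does this under simplifying assumptions (nearest-base-station association, Rayleigh fading, interference-limited operation, and the same illustrative parameters as above); it is only a numerical illustration of the trend, not the paper's analytical derivation.

    import numpy as np

    rng = np.random.default_rng(0)

    def dual_slope(d, alpha0=2.0, alpha1=4.0, Rc=50.0):
        # Same illustrative dual-slope path loss as in the earlier sketch.
        d = np.asarray(d, dtype=float)
        return np.where(d <= Rc, d ** -alpha0,
                        Rc ** (alpha1 - alpha0) * d ** -alpha1)

    def sir_coverage(density, theta_db=0.0, radius=2000.0, trials=500):
        # Estimate P[SIR > theta] for a user at the origin, with base stations
        # drawn from a PPP of the given density (BS per square meter) on a disk.
        theta = 10 ** (theta_db / 10.0)
        covered = 0
        for _ in range(trials):
            n = rng.poisson(density * np.pi * radius ** 2)
            if n == 0:
                continue  # no base station in the window: count as not covered
            r = radius * np.sqrt(rng.random(n))              # distances of PPP points
            power = rng.exponential(size=n) * dual_slope(r)  # Rayleigh fading
            signal = power[np.argmin(r)]                     # serve from nearest BS
            interference = power.sum() - signal
            if signal > theta * interference:                # interference-limited SIR
                covered += 1
        return covered / trials

    # Sweeping the density shows SIR coverage degrading as the network densifies,
    # consistent with the monotonic SIR decrease described above.
    for density in (1e-6, 1e-5, 1e-4, 1e-3):
        print(density, sir_coverage(density))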

For more information, read the full paper HERE.