Machine Learning to Manage Cellular Network Faults and Improve Voice-Over-LTE Service
We present two examples of using machine learning to improve end-user quality of experience (QoE) in cellular networks operating today. In particular, we demonstrate how to automate the clearing of operational faults in outdoor networks and the compensation of signal impairments in indoor networks for voice-over-LTE (VoLTE) service. Our proposed methods are compatible with 3GPP LTE Release 8 and higher.
Cellular standards rely on feedback from the user equipment (UE) to drive the base station's retransmission and link adaptation decisions. This feedback is typically sent periodically, which drains UE batteries. Retransmissions cause voice-frame duplication in VoLTE and high latency in data services, both of which degrade end-user QoE.
WNCG Professor Brian L. Evans and PhD student Faris B. Mismar have published a reinforcement learning framework that operates at different layers of the wireless protocol stack without overburdening the UE.
By exploiting semi-persistent scheduling (SPS) for packetized voice in indoor environments, a downlink power control (PC) algorithm based on Q-learning is proposed. SPS periods range from 10 ms to 640 ms. The PC algorithm, which resides in the base station, ensures that the target effective downlink signal-to-interference-plus-noise ratio (SINR) is met despite both operational network faults and signal impairments. Combined with link adaptation, the improved SINR enables higher-order modulation and coding. The algorithm operates in an indoor environment, where the channel coherence time is long enough for measurement collection. It works toward the target SINR by increasing or decreasing the radio link power, observing the radio environment and rewarding actions that move the link closer to the target. Performance is measured through experimental mean opinion scores and call retainability, and the UE does not need to send additional feedback reports to the base station.
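The power control loop described above can be illustrated with a minimal tabular Q-learning sketch. All numeric values (target SINR, power step, learning rate, the toy channel model) are illustrative assumptions, not parameters from the paper:

```python
import random

# Illustrative sketch of Q-learning power control toward a target SINR.
# States: quantized effective downlink SINR (dB).
# Actions: lower / hold / raise transmit power by a 1 dB step (assumed).
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # assumed hyperparameters
TARGET_SINR_DB = 15.0                   # assumed target
ACTIONS = [-1.0, 0.0, +1.0]             # power step in dB

Q = {}  # (state, action_index) -> learned value

def quantize(sinr_db):
    return int(round(sinr_db))

def choose_action(state):
    # Epsilon-greedy exploration over the three power steps.
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q.get((state, a), 0.0))

def reward(sinr_db):
    # Positive reward once the effective SINR meets the target,
    # otherwise a penalty growing with the shortfall.
    return 1.0 if sinr_db >= TARGET_SINR_DB else -abs(TARGET_SINR_DB - sinr_db) / 10.0

def update(state, action, r, next_state):
    # Standard one-step Q-learning update.
    best_next = max(Q.get((next_state, a), 0.0) for a in range(len(ACTIONS)))
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (r + GAMMA * best_next - old)

def train(episodes=200, steps=50):
    # Toy indoor channel: SINR tracks the power step plus small noise,
    # standing in for a slowly varying (long-coherence-time) channel.
    for _ in range(episodes):
        sinr = 5.0
        state = quantize(sinr)
        for _ in range(steps):
            a = choose_action(state)
            sinr = min(30.0, max(-5.0, sinr + ACTIONS[a] + random.gauss(0, 0.2)))
            next_state = quantize(sinr)
            update(state, a, reward(sinr), next_state)
            state = next_state

random.seed(0)
train()
```

After training, the greedy policy from a low-SINR state prefers raising power, which is the behavior the reward function was designed to encourage; the actual algorithm learns this at the base station without extra UE feedback.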
Network faults reduce downlink data rates by lowering SINR. While power control can improve data rates, it may not be feasible outdoors, where the time-varying channel makes channel state information difficult to obtain. A deep Q-learning based algorithm is therefore proposed to learn a near-optimal mapping between network faults and alarm-resolving actions in an outdoor cluster. As the number of faults and resolving actions in a network grows, deep Q-learning helps enable self-healing functionality in cellular networks. This can improve data-service performance as measured by the SINR distribution and the user and cell downlink throughputs.
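The fault-to-action mapping can be sketched with a tabular Q-learning stand-in for the deep Q-network. The fault and action names, the one-to-one fault/remedy pairing, and the reward shape are all hypothetical; the paper replaces the lookup table with a neural network so the mapping scales as faults and actions multiply:

```python
import random

# Illustrative sketch: learn which resolving action clears each alarm type.
# Fault/action names and the CORRECT mapping are assumptions for this toy.
FAULTS = ["antenna_tilt", "tx_power_fault", "feeder_fault", "neighbor_missing"]
ACTIONS = ["adjust_tilt", "reset_power", "swap_feeder", "add_neighbor"]
CORRECT = {0: 0, 1: 1, 2: 2, 3: 3}  # assumed fault -> resolving action

ALPHA, EPSILON = 0.1, 0.2           # assumed hyperparameters
Q = [[0.0] * len(ACTIONS) for _ in FAULTS]

def reward(fault, action):
    # +1 if the action clears the alarm (SINR recovers), else -1.
    return 1.0 if CORRECT[fault] == action else -1.0

random.seed(7)
for _ in range(3000):
    fault = random.randrange(len(FAULTS))
    if random.random() < EPSILON:
        action = random.randrange(len(ACTIONS))
    else:
        action = max(range(len(ACTIONS)), key=lambda a: Q[fault][a])
    r = reward(fault, action)
    # Each alarm episode ends once a resolving action is applied,
    # so the update has no bootstrapped next-state term.
    Q[fault][action] += ALPHA * (r - Q[fault][action])

# Greedy policy: the learned alarm-resolution mapping.
policy = {FAULTS[f]: ACTIONS[max(range(len(ACTIONS)), key=lambda a: Q[f][a])]
          for f in range(len(FAULTS))}
```

Trial-and-error interaction alone recovers the fault-to-remedy mapping; in the deep version, a network generalizes this across many more alarm types and network states than a table could hold.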
F. B. Mismar and B. L. Evans, "Q-Learning Algorithm for VoLTE Closed-Loop Power Control in Indoor Small Cells," Proc. Asilomar Conf. on Signals, Systems, and Computers, Oct. 28-31, 2018. arXiv:1707.03269.
F. B. Mismar and B. L. Evans, "Deep Q-Learning for Self-Organizing Networks Fault Management and Radio Performance Improvement," Proc. Asilomar Conf. on Signals, Systems, and Computers, Oct. 28-31, 2018. arXiv:1707.02329.