WNCG - Wireless Networking and Communications Group - Optimization
http://wncg.org/tags/optimization
User Association in Heterogeneous Networks
http://wncg.org/research/briefs/user-association-heterogeneous-networks
<div class="field field-name-field-publish-date field-type-datetime field-label-hidden"><div class="field-items"><div class="field-item even"><span class="date-display-single">Wednesday, January 1, 2014</span></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"> <p>In dense heterogeneous cellular networks, mobile devices such as smart phones can potentially associate with several different base stations. Which one should they choose? WNCG Profs. Andrews and Caramanis, in collaboration with lead researcher Qiaoyang Ye and Principal Engineer Mazin Al-Shalash from WNCG affiliate Huawei, have been working to characterize optimal user associations, which can be extremely complex to determine, and to develop simple techniques that approach them. Their work has drawn attention from several major players in the 3GPP standards in addition to Huawei, including Qualcomm, Nokia Siemens Networks, Alcatel-Lucent, and NTT Docomo, who have each built on their ground-breaking results. </p>
<p>First some background. To meet crushing data traffic demands, cellular networks are evolving into ever-denser and irregular heterogeneous networks, especially through the proliferation of small cells (e.g., picocells and femtocells). Due to the disparate transmit powers of different base stations, “natural” user association metrics like SINR or RSSI can lead to major load imbalances and under-utilized small cells, with the macrocell remaining a major bottleneck. A critical missing piece in the conventional association metrics is the load, which reflects how resources are allocated and thus determines long-term rates. In general, finding an optimal load-aware user association is a combinatorial optimization problem with exponential complexity. Meanwhile, any practically useful solution must be lightweight and efficient, and ideally solvable in a distributed way.</p>
<p>In two recent papers, Prof. Jeff Andrews, Prof. Constantine Caramanis, Qiaoyang Ye, and Mazin Al-Shalash, along with Beiyu Rong and Yudong Chen, have used tools from convex optimization to address the association problem, devising easily computable upper bounds on optimal network performance, and then devising extremely efficient distributed algorithms that are provably near-optimal. In addition, they compared these algorithms against the extremely simple approach advocated by 3GPP known as “biasing”, or cell range expansion, whereby small cell received powers are artificially inflated by a certain amount, for example 10 dB, relative to the macrocells, so that more mobile users associate with them.</p>
<p>The first paper provides a low-complexity distributed algorithm that converges to a near-optimal solution. We found that the gap between the rate-optimized association and the range expansion approach can actually be very small, if the bias is chosen carefully. This is somewhat surprising, and was the first result in the literature to show that simple optimized biasing (where all BSs in the network of a certain class use the exact same value) is in fact quite close to a globally optimal association policy.</p>
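<p>To make the biasing rule concrete, here is a minimal sketch of cell range expansion: each user associates with the station maximizing its biased received power, with a single bias value shared by all small cells. All numbers (powers, cell counts, the 10 dB bias) are illustrative assumptions, not values from the papers.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_macro, n_pico = 5, 1, 3
bias_db = 10.0  # one bias value for every small cell, per the biasing scheme

# Received powers in dBm (illustrative random values, not measured data)
rx_power_dbm = rng.uniform(-110, -70, size=(n_users, n_macro + n_pico))

# Bias vector: 0 dB for macrocells, +10 dB for every picocell
bias = np.concatenate([np.zeros(n_macro), np.full(n_pico, bias_db)])

# Each user picks the base station with the largest biased received power
association = np.argmax(rx_power_dbm + bias, axis=1)
print(association)  # indices >= n_macro correspond to picocells
```

<p>Because the bias only inflates small-cell powers, it can only shift users from the macrocell toward picocells, never the reverse; tuning the single scalar <code>bias_db</code> is what the paper shows to be nearly as good as the globally optimal association.</p>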
<p>Since users offloaded to small cells suffer strong interference from macro base stations, muting the macrocells for a certain fraction of resources reduces this interference, at the cost of turning off the most congested base stations. Is this a good tradeoff? In the second paper, we considered this question, and found that the answer is generally “yes”. In particular, under a typical small cell deployment of, say, 6 picocells per macrocell, the macrocell should mute itself roughly half of the time. This increases the edge rate substantially, in part by allowing more aggressive biasing since the interference is reduced.</p>
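<p>The muting tradeoff can be sketched with a back-of-the-envelope rate model: an offloaded pico-edge user sees a low SINR while the macro transmits and a high SINR while it is muted, so its long-term spectral efficiency is a weighted mix of the two. The SNR, interference, and load numbers below are hypothetical, and the sketch only covers the pico side (it omits the resources the macro users give up).</p>

```python
import numpy as np

def pico_edge_rate(eta, snr=10.0, inr=20.0, pico_load=4):
    """Long-term rate (bps/Hz) of a pico-edge user when the macro is
    muted on a fraction eta of resources; pico_load users share the cell."""
    sinr_on = snr / (1.0 + inr)   # macro transmitting: strong interference
    sinr_off = snr                # macro muted: interference removed
    spectral_eff = (1 - eta) * np.log2(1 + sinr_on) + eta * np.log2(1 + sinr_off)
    return spectral_eff / pico_load

for eta in (0.0, 0.25, 0.5, 0.75):
    print(f"eta={eta:.2f}: {pico_edge_rate(eta):.3f} bps/Hz per user")
```

<p>With these illustrative numbers, the pico-edge rate grows steadily with the muting fraction, which is the per-user side of the tradeoff the paper resolves; the full optimization also accounts for the macro users who lose those resources.</p>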
<p> </p>
<ul><li>Paper 1. <a href="http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6497017&tag=1">http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6497017&tag=1</a></li>
<li>Paper 2. <a href="http://arxiv.org/pdf/1305.5585v1.pdf">http://arxiv.org/pdf/1305.5585v1.pdf</a></li>
</ul><p> </p>
<p>This work was also partially supported by the National Science Foundation, and the Defense Threat Reduction Agency (DTRA).</p>
</div></div></div><div class="field field-name-field-related-faculty field-type-node-reference field-label-inline clearfix"><div class="field-label">Related Faculty: </div><div class="field-items"><div class="field-item even"><a href="/people/faculty/jeffrey-andrews">Jeffrey Andrews</a></div><div class="field-item odd"><a href="/people/faculty/constantine-caramanis">Constantine Caramanis</a></div></div></div><div class="field field-name-field-related-students field-type-node-reference field-label-inline clearfix"><div class="field-label">Related Researchers: </div><div class="field-items"><div class="field-item even"><a href="/people/students/qiaoyang-ye">Qiaoyang Ye</a></div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-inline clearfix"><div class="field-label">Keywords: </div><div class="field-items"><div class="field-item even"><a href="/tags/heterogeneous-networks">heterogeneous networks</a>, <a href="/tags/optimization">Optimization</a>, <a href="/tags/stochastic-geometry">stochastic geometry</a></div></div></div>
Memory-Limited Learning
http://wncg.org/research/briefs/memory-limited-learning
<div class="field field-name-field-publish-date field-type-datetime field-label-hidden"><div class="field-items"><div class="field-item even"><span class="date-display-single">Monday, March 3, 2014</span></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"> <p>WNCG Prof. Constantine Caramanis, along with Ph.D. student Ioannis Mitliagkas and MSR Bangalore researcher Dr. Prateek Jain, has obtained the first-ever linear-memory algorithm for Principal Component Analysis. Their algorithm is efficient to implement, needs to see each data point only once, and works even in the setting of many missing entries.</p>
<div title="Page 1">
<p>Principal component analysis is a fundamental tool for dimensionality reduction, clustering, classification, and many more learning tasks. It is a basic preprocessing step for learning, recognition, and estimation procedures. The core computational element of PCA is performing a (partial) singular value decomposition, and much work over the last half century has focused on efficient algorithms and hence on computational complexity. The recent focus on understanding high-dimensional data (e.g., video or image data, medical or DNA data), where the dimensionality of the data scales together with the number of available sample points, has led to an exploration of the sample complexity of covariance estimation. What has not been considered is the memory complexity of PCA algorithms. The only algorithms with known performance guarantees thus far require O(p<sup>2</sup>) memory in p dimensions. This can be prohibitive for modern high-dimensional applications.</p>
<p>This work fills precisely this need. We develop an algorithm with O(p) memory requirement (the best possible) and with performance matching state-of-the-art memory-intensive algorithms. Moreover, in follow-up work, we also develop an algorithm that works even when each data point has suffered a vast number of deletions or erasures. </p>
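<p>The O(p) memory idea can be sketched with a block streaming power method in the spirit of the paper: maintain only a p &times; k iterate, update it with each sample exactly once, and never form the p &times; p covariance. The block size, initialization, and toy spiked-covariance data below are illustrative simplifications, not the paper's precise scheme.</p>

```python
import numpy as np

def streaming_pca(stream, p, k, block_size):
    """One-pass block power method using only O(p*k) memory."""
    rng = np.random.default_rng(0)
    q, _ = np.linalg.qr(rng.standard_normal((p, k)))  # random orthonormal start
    s = np.zeros((p, k))
    for i, x in enumerate(stream, start=1):
        s += np.outer(x, x @ q)       # rank-one update x (x^T q): no p x p matrix
        if i % block_size == 0:
            q, _ = np.linalg.qr(s)    # re-orthonormalize at each block boundary
            s = np.zeros((p, k))
    return q

# Toy stream: one dominant direction u plus small isotropic noise
rng = np.random.default_rng(1)
p, k = 50, 1
u = np.zeros(p)
u[0] = 1.0
stream = (3.0 * rng.standard_normal() * u + 0.1 * rng.standard_normal(p)
          for _ in range(2000))
q = streaming_pca(stream, p, k, block_size=200)
print(abs(q[:, 0] @ u))  # close to 1 when the top direction is recovered
```

<p>Note that the stream is a generator, so each sample is touched once and discarded, matching the single-pass requirement; the working set is just the p &times; k matrices <code>q</code> and <code>s</code>.</p>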
<ul><li>Paper 1: <a href="http://users.ece.utexas.edu/~cmcaram/pubs/Streaming-PCA.pdf">Memory-Limited Streaming PCA</a></li>
<li>Paper 2: <a href="https://webspace.utexas.edu/im4454/www/kdd2014long.pdf">Streaming PCA with Many Missing Entries</a></li>
</ul><p>This research was partially funded by the National Science Foundation (NSF) and the Defense Threat Reduction Agency (DTRA).</p>
</div>
</div></div></div><div class="field field-name-field-related-faculty field-type-node-reference field-label-inline clearfix"><div class="field-label">Related Faculty: </div><div class="field-items"><div class="field-item even"><a href="/people/faculty/constantine-caramanis">Constantine Caramanis</a></div></div></div><div class="field field-name-field-related-students field-type-node-reference field-label-inline clearfix"><div class="field-label">Related Researchers: </div><div class="field-items"><div class="field-item even"><a href="/people/students/ioannis-mitliagkas">Ioannis Mitliagkas</a></div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-inline clearfix"><div class="field-label">Keywords: </div><div class="field-items"><div class="field-item even"><a href="/tags/statistics">Statistics</a>, <a href="/tags/machine-learning">Machine Learning</a>, <a href="/tags/optimization">Optimization</a></div></div></div>
Mixed Regression: Disentangling Mixed Data
http://wncg.org/research/briefs/mixed-regression-disentangling-mixed-data
<div class="field field-name-field-publish-date field-type-datetime field-label-hidden"><div class="field-items"><div class="field-item even"><span class="date-display-single">Friday, February 7, 2014</span></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"> <p>In two recent papers, Caramanis, Chen, Sanghavi and Yi obtain the best known statistical and computational complexity bounds for mixed regression. </p>
<p>Mixture models carry much explanatory power, and are natural modeling tools: rather than asking for a single model to explain all observations, they treat observed data as a superposition of simple statistical processes. Due to the wide applicability and naturalness of this modeling approach, their popularity extends across many application areas and domains, including health-care, object recognition, and natural language processing. Yet the inherently combinatorial nature of the mixture -- the assumption that one subset of data comes from one model, and another subset from another -- presents significant algorithmic challenges in learning. The core of the challenge is that clustering and fitting must be performed simultaneously. </p>
<p>In two recent papers, WNCG faculty Constantine Caramanis and Sujay Sanghavi, in collaboration with Xinyang Yi and Yudong Chen, provide efficient algorithms that give the best known statistical and computational complexity bounds for this problem. In the first paper, we use alternating minimization, essentially showing that the EM algorithm has fast convergence. In the second, we use convex optimization techniques to derive an efficient algorithm for mixed regression; we also obtain minimax optimal rates.</p>
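<p>The alternating-minimization idea can be illustrated on a toy two-component mixed linear regression: alternately assign each sample to the model that currently fits it better, then refit each model by least squares on its cluster. For illustration we initialize near the true parameters; the paper's contribution includes a provably good initialization and the convergence analysis, which this sketch does not reproduce.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 5
beta1, beta2 = np.ones(d), -np.ones(d)          # ground-truth models
X = rng.standard_normal((n, d))
labels = rng.integers(0, 2, size=n)             # hidden mixture labels
y = np.where(labels == 0, X @ beta1, X @ beta2) + 0.01 * rng.standard_normal(n)

# Initialize near the truth for illustration (the paper handles initialization)
b1 = beta1 + 0.3 * rng.standard_normal(d)
b2 = beta2 + 0.3 * rng.standard_normal(d)

for _ in range(10):
    # Assignment step: which current model explains each sample better?
    z = np.abs(y - X @ b1) > np.abs(y - X @ b2)   # True -> model 2 fits better
    # Refit step: ordinary least squares within each cluster
    if (~z).sum() >= d:
        b1 = np.linalg.lstsq(X[~z], y[~z], rcond=None)[0]
    if z.sum() >= d:
        b2 = np.linalg.lstsq(X[z], y[z], rcond=None)[0]

print(np.linalg.norm(b1 - beta1), np.linalg.norm(b2 - beta2))  # both small
```

<p>The two alternating steps make the "clustering and fitting must be performed simultaneously" difficulty concrete: each step is easy on its own, and the analysis in the paper shows the alternation converges fast from a suitable starting point.</p>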
<ul><li>Paper 1. <a href="http://arxiv.org/pdf/1310.3745v1.pdf">http://arxiv.org/pdf/1310.3745v1.pdf</a></li>
<li>Paper 2. <a href="http://arxiv.org/pdf/1312.7006.pdf">http://arxiv.org/pdf/1312.7006.pdf</a></li>
</ul><p>This research was partially funded by the National Science Foundation (NSF) and the Defense Threat Reduction Agency (DTRA).</p>
</div></div></div><div class="field field-name-field-related-faculty field-type-node-reference field-label-inline clearfix"><div class="field-label">Related Faculty: </div><div class="field-items"><div class="field-item even"><a href="/people/faculty/constantine-caramanis">Constantine Caramanis</a></div><div class="field-item odd"><a href="/people/faculty/sujay-sanghavi">Sujay Sanghavi</a></div></div></div><div class="field field-name-field-related-students field-type-node-reference field-label-inline clearfix"><div class="field-label">Related Researchers: </div><div class="field-items"><div class="field-item even"><a href="/people/students/xinyang-yi">Xinyang Yi</a></div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-inline clearfix"><div class="field-label">Keywords: </div><div class="field-items"><div class="field-item even"><a href="/tags/machine-learning">Machine Learning</a>, <a href="/tags/optimization">Optimization</a>, <a href="/tags/statistics">Statistics</a></div></div></div>