WNCG - Wireless Networking and Communications Group - Optimization
https://wncg.org/tags/optimization
Aryan Mokhtari Receives NSF Grant to Research Optimization Algorithms for Large-Scale Learning
https://wncg.org/news/aryan-mokhtari-receives-nsf-grant-research-optimization-algorithms-large-scale-learning
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"> <p>WNCG <a href="https://sites.utexas.edu/mokhtari/">professor Aryan Mokhtari</a> has received a grant from the National Science Foundation (NSF) to study Computationally Efficient Second-Order Optimization Algorithms for Large-Scale Learning. The project “lays out an agenda to develop a class of memory efficient, computationally affordable, and distributed friendly second-order methods for solving modern machine learning problems.”</p>
<p> “Current optimization algorithms for large-scale machine learning are inefficient at times since these methods operate using only first-order information (gradient) of the objective function. This project aims to develop a class of fast and efficient second-order methods that exploit the curvature information of the objective function to accelerate convergence in ill-conditioned settings. The research encompasses three different thrusts: (I) Developing memory efficient incremental quasi-Newton methods with provably fast convergence guarantees; (II) Improving the computational complexity of second-order adaptive sample size algorithms by leveraging quasi-Newton approximation techniques; and (III) Designing distributed second-order methods that outperform first-order algorithms both in terms of overall complexity (in convex settings) and in terms of quality of solution (in non-convex settings).”</p>
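<p>To make the second-order idea concrete, here is a hedged sketch (a textbook BFGS quasi-Newton iteration, not the project's proposed methods): curvature is estimated purely from gradient differences, so the Hessian is never formed, and convergence on ill-conditioned problems is much faster than for gradient descent.</p>

```python
import numpy as np

# Illustrative BFGS quasi-Newton iteration (a generic sketch, not the
# project's algorithms). The inverse-Hessian estimate H is built from
# gradient differences alone; no Hessian is ever computed.

def bfgs(grad, x0, iters=50, tol=1e-10):
    p = x0.size
    H = np.eye(p)                      # inverse-Hessian approximation
    x, g = x0.astype(float), grad(x0)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                     # quasi-Newton search direction
        Ad = grad(x + d) - grad(x)     # equals A @ d when grad is linear
        t = -(g @ d) / (d @ Ad)        # exact line search for a quadratic
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                 # curvature condition; else skip update
            rho = 1.0 / sy
            V = np.eye(p) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Ill-conditioned quadratic f(x) = 0.5 x^T A x - b^T x, where plain
# gradient descent crawls; the quasi-Newton method adapts to curvature.
A = np.diag([1.0, 100.0])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)
x_hat = bfgs(lambda x: A @ x - b, np.zeros(2))
```

<p>The research agenda above targets exactly the weaknesses of this textbook version: its O(p<sup>2</sup>) memory for H, its per-iteration cost, and its poor fit to distributed settings.</p>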
<p>Aryan Mokhtari is an Assistant Professor in the Department of Electrical and Computer Engineering at UT Austin and a member of the Wireless Networking and Communications Group. His research interests include the areas of optimization, machine learning, and artificial intelligence. His current research focuses on the theory and applications of convex and non-convex optimization in large-scale machine learning and data science problems.</p>
</div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above"><div class="field-label">Keywords: </div><div class="field-items"><div class="field-item even"><a href="/tags/nsf-grant">NSF Grant</a></div><div class="field-item odd"><a href="/tags/wncg-faculty">WNCG faculty</a></div><div class="field-item even"><a href="/tags/optimization">Optimization</a></div><div class="field-item odd"><a href="/tags/machine-learning">Machine Learning</a></div></div></div><div class="field field-name-field-publish-date field-type-datetime field-label-above"><div class="field-label">Publish Date: </div><div class="field-items"><div class="field-item even"><span class="date-display-single">Tuesday, September 29, 2020</span></div></div></div><div class="field field-name-field-image field-type-image field-label-above"><div class="field-label">Key Image: </div><div class="field-items"><div class="field-item even"><img src="https://wncg.org/sites/default/files/aryan_mokhtari.jpg" width="1200" height="801" /></div></div></div><div class="field field-name-field-related-faculty field-type-node-reference field-label-above"><div class="field-label">Related Faculty: </div><div class="field-items"><div class="field-item even"><a href="/people/faculty/aryan-mokhtari">Aryan Mokhtari</a></div></div></div><div class="field field-name-field-feature field-type-list-boolean field-label-above"><div class="field-label">Feature: </div><div class="field-items"><div class="field-item even">No</div></div></div>
User Association in Heterogeneous Networks
https://wncg.org/research/briefs/user-association-heterogeneous-networks
<div class="field field-name-field-publish-date field-type-datetime field-label-hidden"><div class="field-items"><div class="field-item even"><span class="date-display-single">Wednesday, January 1, 2014</span></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"> <p>In dense heterogeneous cellular networks, mobile devices such as smart phones can potentially associate with several different base stations. Which one should they choose? WNCG Profs. Andrews and Caramanis, in collaboration with lead researcher Qiaoyang Ye and Principal Engineer Mazin Al-Shalash from WNCG affiliate Huawei have been working towards characterizing optimal user associations, and simple techniques to approach these optimal associations, which can be extremely complex to determine. Their work has drawn attention from several major players in the 3GPP standards in addition to Huawei, including Qualcomm, Nokia Siemens Networks, Alcatel-Lucent, and NTT Docomo, who have each built on their ground-breaking results. </p>
<p>First, some background. To meet crushing data traffic demands, cellular networks are evolving into ever-denser and more irregular heterogeneous networks, especially through the proliferation of small cells (e.g., picocells and femtocells). Due to the disparate transmit powers of different base stations, “natural” user association metrics like SINR or RSSI can lead to major load imbalances and under-utilized small cells, with the macrocell remaining a major bottleneck. A critical missing piece in the conventional association metrics is the load, which provides a view of resource allocation and thus affects long-term rates. In general, finding an optimal load-aware user association is a combinatorial optimization problem with exponential complexity. Meanwhile, any practically useful solution must be lightweight and efficient, and ideally solvable in a distributed way.</p>
<p>In two recent papers, Prof. Jeff Andrews, Prof. Constantine Caramanis, Qiaoyang Ye, and Mazin Al-Shalash, along with Beiyu Rong and Yudong Chen, have used tools from convex optimization to address the association problem, devising easily computable upper bounds on optimal network performance, and then devising extremely efficient distributed algorithms that are provably near-optimal. They also compared against this baseline the extremely simple approach advocated by 3GPP known as “biasing,” or cell range expansion, whereby small cell received powers are artificially biased upward by a certain amount (for example, 10 dB) relative to the macrocells, so that more mobile users associate with them.</p>
<p>The first paper provides a low-complexity distributed algorithm that converges to a near-optimal solution. We found that the gap between the rate-optimized association and the range expansion approach can actually be very small if the bias is chosen carefully. This is somewhat surprising, and was the first result in the literature to show that simple optimized biasing (where all BSs of a given class use the exact same value) is in fact quite close to a globally optimal association policy.</p>
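<p>The biasing rule itself is easy to state. Below is a toy sketch (the power and bias values are invented for illustration, and the papers' optimization machinery for choosing the bias is omitted): each user simply joins the base station maximizing received power plus a per-tier bias.</p>

```python
import numpy as np

# Toy illustration of cell range expansion ("biasing"). The received
# powers and bias values are made up for the demo, not from the papers.

def associate(rx_power_dbm, bias_db):
    """Each user joins the BS maximizing biased received power.

    rx_power_dbm: (n_users, n_bs) array of received powers in dBm.
    bias_db: (n_bs,) per-BS bias in dB (same value within a BS class).
    """
    return np.argmax(rx_power_dbm + bias_db, axis=1)

# One user; BS 0 is a macrocell, BS 1 a nearby picocell. The macro is
# heard 6 dB louder, so unbiased association picks the macro; a 10 dB
# pico bias offloads the user to the small cell instead.
rx = np.array([[-80.0, -86.0]])
unbiased = associate(rx, np.array([0.0, 0.0]))
biased = associate(rx, np.array([0.0, 10.0]))
```

<p>The surprising finding above is that a single well-chosen bias value per BS class brings this trivial rule close to the globally optimal, load-aware association.</p>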
<p>Since users offloaded to small cells suffer strong interference from macro base stations, muting the macrocells for a certain fraction of resources reduces this interference, at the cost of turning off the most congested base stations. Is this a good tradeoff? In the second paper, we considered this question and found that the answer is generally “yes.” In particular, under a typical small cell deployment of, say, 6 picocells per macrocell, the macrocell should mute itself roughly half of the time. This increases the edge rate substantially, in part by allowing more aggressive biasing since the interference is reduced.</p>
<ul>
<li>Paper 1. <a href="http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6497017&tag=1">http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6497017&tag=1</a></li>
<li>Paper 2. <a href="http://arxiv.org/pdf/1305.5585v1.pdf">http://arxiv.org/pdf/1305.5585v1.pdf</a></li>
</ul>
<p>This work was also partially supported by the National Science Foundation, and the Defense Threat Reduction Agency (DTRA).</p>
</div></div></div><div class="field field-name-field-related-faculty field-type-node-reference field-label-inline clearfix"><div class="field-label">Related Faculty: </div><div class="field-items"><div class="field-item even"><a href="/people/faculty/jeffrey-andrews">Jeffrey Andrews</a></div><div class="field-item odd"><a href="/people/faculty/constantine-caramanis">Constantine Caramanis</a></div></div></div><div class="field field-name-field-related-students field-type-node-reference field-label-inline clearfix"><div class="field-label">Related Researchers: </div><div class="field-items"><div class="field-item even"><a href="/people/students/qiaoyang-ye">Qiaoyang Ye</a></div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-inline clearfix"><div class="field-label">Keywords: </div><div class="field-items"><div class="field-item even"><a href="/tags/heterogeneous-networks">heterogeneous networks</a>, <a href="/tags/optimization">Optimization</a>, <a href="/tags/stochastic-geometry">stochastic geometry</a></div></div></div>
Memory-Limited Learning
https://wncg.org/research/briefs/memory-limited-learning
<div class="field field-name-field-publish-date field-type-datetime field-label-hidden"><div class="field-items"><div class="field-item even"><span class="date-display-single">Monday, March 3, 2014</span></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"> <p>WNCG Prof. Constantine Caramanis along with Ph.D. student Ioannis Mitliagkas and MSR Bangalore researcher Dr. Prateek Jain, have obtained the first-ever linear-memory algorithm for Principal Component Analysis. Their algorithm is efficient to implement, needs to see each data point only once, and works even in the setting of many missing entries.</p>
<div title="Page 1">
<p>Principal component analysis (PCA) is a fundamental tool for dimensionality reduction, clustering, classification, and many other learning tasks. It is a basic preprocessing step for learning, recognition, and estimation procedures. The core computational element of PCA is performing a (partial) singular value decomposition, and much work over the last half century has focused on efficient algorithms and hence on computational complexity. The recent focus on understanding high-dimensional data (e.g., video or image data, medical or DNA data), where the dimensionality of the data scales together with the number of available sample points, has led to an exploration of the sample complexity of covariance estimation. What has not been considered is the memory complexity of PCA algorithms. The only algorithms with known performance guarantees thus far require O(p<sup>2</sup>) memory in p dimensions. This can be prohibitive for modern high-dimensional applications.</p>
<p>This work fills precisely this need. We develop an algorithm with O(p) memory requirement (the best possible) and with performance matching state-of-the-art memory-intensive algorithms. Moreover, in follow-up work, we also develop an algorithm that works even when each data point has suffered a vast number of deletions or erasures.</p>
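<p>A simplified single-component sketch conveys how O(p) memory is possible (this is loosely in the spirit of the block power method; block sizes, the multi-component case, and the guarantees in the papers differ): the p x p covariance is never formed, because each streaming sample x contributes only the p-dimensional product x(x<sup>T</sup>q).</p>

```python
import numpy as np

# Hedged sketch of a block streaming power method for the top principal
# component. Memory is O(p): the covariance matrix is never formed, and
# each data point is seen exactly once.

def streaming_top_pc(stream, p, block_size=200):
    rng = np.random.default_rng(0)
    q = rng.standard_normal(p)
    q /= np.linalg.norm(q)
    acc, seen = np.zeros(p), 0
    for x in stream:                 # one pass over the data
        acc += x * (x @ q)           # implicit covariance-vector product
        seen += 1
        if seen == block_size:       # end of block: one power-method step
            q = acc / np.linalg.norm(acc)
            acc, seen = np.zeros(p), 0
    return q

# Spiked model: each sample is a signal along v plus isotropic noise.
rng = np.random.default_rng(1)
p, n = 50, 20_000
v = np.zeros(p)
v[0] = 1.0
data = (rng.standard_normal(n)[:, None] * 3.0) * v \
       + 0.5 * rng.standard_normal((n, p))
q_hat = streaming_top_pc(iter(data), p)
alignment = abs(q_hat @ v)           # near 1 when recovery succeeds
```

<p>The blocking is what tames the noise: averaging x(x<sup>T</sup>q) over a block approximates a clean covariance-vector product before each normalization step.</p>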
<ul>
<li>Paper 1: <a href="http://users.ece.utexas.edu/~cmcaram/pubs/Streaming-PCA.pdf">Memory-Limited Streaming PCA</a></li>
<li>Paper 2: <a href="https://webspace.utexas.edu/im4454/www/kdd2014long.pdf">Streaming PCA with Many Missing Entries</a></li>
</ul>
<p>This research was partially funded by the National Science Foundation (NSF) and the Defense Threat Reduction Agency (DTRA).</p>
</div>
</div></div></div><div class="field field-name-field-related-faculty field-type-node-reference field-label-inline clearfix"><div class="field-label">Related Faculty: </div><div class="field-items"><div class="field-item even"><a href="/people/faculty/constantine-caramanis">Constantine Caramanis</a></div></div></div><div class="field field-name-field-related-students field-type-node-reference field-label-inline clearfix"><div class="field-label">Related Researchers: </div><div class="field-items"><div class="field-item even"><a href="/people/students/ioannis-mitliagkas">Ioannis Mitliagkas</a></div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-inline clearfix"><div class="field-label">Keywords: </div><div class="field-items"><div class="field-item even"><a href="/tags/statistics">Statistics</a>, <a href="/tags/machine-learning">Machine Learning</a>, <a href="/tags/optimization">Optimization</a></div></div></div>
Mixed Regression: Disentangling Mixed Data
https://wncg.org/research/briefs/mixed-regression-disentangling-mixed-data
<div class="field field-name-field-publish-date field-type-datetime field-label-hidden"><div class="field-items"><div class="field-item even"><span class="date-display-single">Friday, February 7, 2014</span></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"> <p>In two recent papers, Caramanis, Chen, Sanghavi and Yi obtain the best known statistical and computational complexity bounds for mixed regression. </p>
<p>Mixture models carry much explanatory power and are natural modeling tools: rather than asking a single model to explain all observations, they treat observed data as a superposition of simple statistical processes. Due to the wide applicability and naturalness of this modeling approach, their popularity extends across many application areas and domains, including health care, object recognition, and natural language processing. Yet the inherently combinatorial nature of the mixture -- the assumption that one subset of the data comes from one model, and another subset from another -- presents significant algorithmic challenges in learning. The core of the challenge is that clustering and fitting must be performed simultaneously.</p>
<p>In two recent papers, WNCG faculty Constantine Caramanis and Sujay Sanghavi, in collaboration with Xinyang Yi and Yudong Chen, provide efficient algorithms that give the best known statistical and computational complexity bounds for this problem. In the first paper, we use alternating minimization, essentially showing that the EM algorithm has fast convergence. In the second, we use convex optimization techniques to derive an efficient algorithm for mixed regression; we also obtain minimax optimal rates.</p>
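<p>The alternating-minimization idea can be sketched for two-component mixed linear regression (a simplified hard-assignment demo with an initialization placed near the truth; the papers' exact algorithms, initialization schemes, and guarantees differ): alternate between assigning each sample to its better-fitting regressor and refitting each regressor by least squares on its cluster.</p>

```python
import numpy as np

# Simplified alternating-minimization (hard-EM) sketch for two-component
# mixed linear regression; a demo of the idea, not the papers' methods.

def alt_min(X, y, b1, b2, iters=30):
    for _ in range(iters):
        # Cluster step: assign each sample to the better-fitting line.
        mask = (y - X @ b1) ** 2 <= (y - X @ b2) ** 2
        if mask.all() or (~mask).all():    # degenerate split; stop early
            break
        # Fit step: ordinary least squares within each cluster.
        b1 = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
        b2 = np.linalg.lstsq(X[~mask], y[~mask], rcond=None)[0]
    return b1, b2

# Noiseless toy mixture: each response comes from one of two regressors.
rng = np.random.default_rng(1)
n, p = 400, 5
X = rng.standard_normal((n, p))
beta_a, beta_b = np.ones(p), -np.ones(p)
labels = rng.random(n) < 0.5
y = np.where(labels, X @ beta_a, X @ beta_b)

# Initialize near the truth; the papers analyze when such a good
# initialization (and hence fast local convergence of EM) is guaranteed.
b1_0 = beta_a + 0.2 * rng.standard_normal(p)
b2_0 = beta_b + 0.2 * rng.standard_normal(p)
b1, b2 = alt_min(X, y, b1_0, b2_0)
residual = np.minimum((y - X @ b1) ** 2, (y - X @ b2) ** 2).sum()
```

<p>On noiseless data with a good start, the misassignment rate shrinks at every round, so the two least-squares fits contract to the true regressors and the per-sample best-fit residual drops to zero.</p>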
<ul>
<li>Paper 1. <a href="http://arxiv.org/pdf/1310.3745v1.pdf">http://arxiv.org/pdf/1310.3745v1.pdf</a></li>
<li>Paper 2. <a href="http://arxiv.org/pdf/1312.7006.pdf">http://arxiv.org/pdf/1312.7006.pdf</a></li>
</ul>
<p>This research was partially funded by the National Science Foundation (NSF) and the Defense Threat Reduction Agency (DTRA).</p>
</div></div></div><div class="field field-name-field-related-faculty field-type-node-reference field-label-inline clearfix"><div class="field-label">Related Faculty: </div><div class="field-items"><div class="field-item even"><a href="/people/faculty/constantine-caramanis">Constantine Caramanis</a></div><div class="field-item odd"><a href="/people/faculty/sujay-sanghavi">Sujay Sanghavi</a></div></div></div><div class="field field-name-field-related-students field-type-node-reference field-label-inline clearfix"><div class="field-label">Related Researchers: </div><div class="field-items"><div class="field-item even"><a href="/people/students/xinyang-yi">Xinyang Yi</a></div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-inline clearfix"><div class="field-label">Keywords: </div><div class="field-items"><div class="field-item even"><a href="/tags/machine-learning">Machine Learning</a>, <a href="/tags/optimization">Optimization</a>, <a href="/tags/statistics">Statistics</a></div></div></div>