In the matrix completion problem, one sees a few entries of an (approximately) low-rank matrix and hopes to (approximately) recover the entire matrix. This problem has received considerable attention recently, partly due to its applicability to recommender systems. It is known that, by solving a convex program, the original matrix can be recovered with a number of observations that is linear in n. However, for current analyses of faster algorithms (with runtime linear in n), the number of samples additionally depends at least quadratically on the *condition number* of the matrix. In this work
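A minimal sketch of one such fast, non-convex heuristic is singular value projection: take a gradient step on the observed entries, then project back onto rank-r matrices via a truncated SVD. The sizes, step size, and sampling rate below are illustrative assumptions, not taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 60, 2  # illustrative sizes, not from the abstract

# Ground-truth rank-r matrix and a random ~50% observation mask
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.5
p = mask.mean()  # empirical sampling rate

# Singular value projection: gradient step on the observed entries,
# then project back onto the rank-r matrices with a truncated SVD
X = np.zeros((n, n))
for _ in range(100):
    G = X + (1.0 / p) * mask * (M - X)
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

Each iteration costs one SVD, so runtime per step is low; analyses of this kind of iteration are where the condition-number dependence mentioned above typically enters.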
IEEE GLOBECOM is one of two flagship conferences of the IEEE Communications Society (ComSoc), together with the IEEE ICC. Each year the conference hosts over 1,000 peer-reviewed technical papers and a cutting-edge industry program. The conference meets in North America and attracts roughly 2,000 leading scientists, researchers and industry practitioners from around the world. This year, Dr. Robert Heath and Dr.
Personalized models often revolve around per-user parameters quantifying, say, an individual's interest in a certain product category or susceptibility to a certain type of advertisement, even after known features of the product and the person have been taken into account. Social networks offer an appealing way to make inferences about such parameters, the intuition being that one's parameter is "close" to that of one's friends. We look at this basic scenario from two angles.
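The "close to one's friends" intuition can be sketched as graph Laplacian regularization: pull each user's noisy estimate toward the average of their friends. Everything below (the graph, the observations, the regularization weight) is a hypothetical illustration, not the model studied in the abstract.

```python
import numpy as np

# Hypothetical friendship graph on 4 users (a cycle) and noisy
# per-user parameter observations y
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n, lam = 4, 1.0
y = np.array([1.0, 0.8, 1.2, 1.0])

# Graph Laplacian L = D - A
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
Lap = np.diag(A.sum(axis=1)) - A

# Minimize ||theta - y||^2 + lam * theta^T Lap theta; the penalty term
# is sum over edges of (theta_i - theta_j)^2, so friends are pulled
# together. The minimizer has a closed form:
theta = np.linalg.solve(np.eye(n) + lam * Lap, y)
```

The solve shrinks the components of y that vary across edges while leaving the network-wide average untouched, which is exactly the "parameters of friends are close" prior.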
In high speed network measurement, such as in core Internet routers, there may only be a few nanoseconds available per packet for measurement purposes. Our objective is to accurately measure, in such an environment, the flow size distribution, that is, the number of packets that belong to the same data transfer (think TCP connection). The flow size distribution is an important metric for traffic management, profiling, and security purposes.
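As a toy illustration of the metric itself (not of how to compute it at router speeds), the flow size distribution of a packet trace can be obtained by counting packets per flow and then counting flows per size. The flow keys below are hypothetical stand-ins for, e.g., the TCP 5-tuple.

```python
from collections import Counter

# Each packet is tagged with its flow key; "f1".."f4" are hypothetical
# stand-ins for the TCP 5-tuple identifying a data transfer
packets = ["f1", "f2", "f1", "f3", "f1", "f2", "f4"]

# Flow sizes: packets per flow -> f1:3, f2:2, f3:1, f4:1
flow_sizes = Counter(packets)

# Flow size distribution: how many flows have each size -> {3:1, 2:1, 1:2}
size_dist = Counter(flow_sizes.values())
```

The exact per-flow counting shown here is precisely what a few nanoseconds per packet do not permit, which is why approximate measurement schemes are needed.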
This work studies the problem of sequentially recovering a sparse vector x_t and a vector l_t from a low-dimensional subspace from knowledge of their sum m_t = x_t + l_t. If the primary goal is to recover the low-dimensional subspace where the l_t's lie, then the problem is one of online or recursive robust principal components analysis (PCA). An example of where such a problem might arise is in separating a sparse foreground and a slowly changing dense background in a surveillance video.
Several general-purpose deterministic global optimization algorithms have been developed for mixed-integer nonlinear optimization problems over the past two decades. Central to the efficiency of such methods is their ability to construct sharp convex relaxations. Current global solvers rely on factorable programming techniques to iteratively decompose nonconvex factorable functions, until each intermediate expression can be outer-approximated by a convex feasible set. While it is easy to automate, this factorable programming technique often leads to weak relaxations.
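The standard factorable building block, the McCormick envelope of a bilinear intermediate w = x*y over a box, illustrates both the decomposition and why the resulting relaxations can be weak. The functions below are a self-contained sketch, not any particular solver's implementation.

```python
import numpy as np

# McCormick envelope of the bilinear intermediate w = x*y on the box
# [xL, xU] x [yL, yU] -- the convex feasible set a factorable solver
# uses to outer-approximate this intermediate expression.
def mccormick_under(x, y, xL, xU, yL, yU):
    """Convex (piecewise-linear) underestimator of x*y on the box."""
    return np.maximum(xL * y + x * yL - xL * yL,
                      xU * y + x * yU - xU * yU)

def mccormick_over(x, y, xL, xU, yL, yU):
    """Concave (piecewise-linear) overestimator of x*y on the box."""
    return np.minimum(xU * y + x * yL - xU * yL,
                      xL * y + x * yU - xL * yU)

# On [-1, 1]^2 the envelope is valid everywhere but loose away from the
# corners: at the origin it sandwiches x*y = 0 between -1 and 1.
print(mccormick_under(0.0, 0.0, -1.0, 1.0, -1.0, 1.0))  # -1.0
print(mccormick_over(0.0, 0.0, -1.0, 1.0, -1.0, 1.0))   #  1.0
```

That gap of 2 at the origin, accumulated across many intermediate expressions, is the weakness the last sentence of the abstract refers to.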