Two Recent Results in Random Matrix Theory and Their Applications in Semi-Supervised Learning and Wireless Interference Management

Event Status
Scheduled
In this talk, I will present two recent results in random matrix theory. In the first part of the talk, I will present a result that leads to an analysis of the required number of labeled examples (also known as the label complexity) of graph-based methods for semi-supervised learning. Graph-based methods have been quite successful at semi-supervised learning because they take into account the underlying geometry of the data. For a common distance-based similarity graph constructed from a pool of unlabeled data points drawn from an underlying probability distribution in the feature space, we give an asymptotic bound on the expected fraction of nodes that must be labeled to perfectly learn smooth decision boundaries. To derive the result, we analyze the spectral properties of the graph Laplacian as well as the class membership of the nodes of the graph. Our result strengthens the notion that learning with distance-based similarity graphs is akin to solving the low-density separation problem on a finite sample.

The second part of the talk concerns a condition for rank loss in an ensemble of structured random matrices in which each row of each matrix is scaled by a random number. This problem has an application in designing linear schemes for wireless interference networks with no channel state information at the transmitters: each matrix represents a beamforming matrix, and the random row scaling arises from the unknown channel coefficients. We use the obtained result to present conditions for the feasibility of interference alignment in these networks and to understand the optimal coding schemes for arbitrary network topologies.
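To make the first part concrete, here is a minimal sketch (my own illustration, not the talk's exact construction) of building a distance-based similarity graph from unlabeled sample points and forming its unnormalized graph Laplacian L = D - W with NumPy. The epsilon-neighborhood rule, the function name, and the choice of epsilon are assumptions for illustration only:

```python
import numpy as np

def similarity_graph_laplacian(points, epsilon):
    """Build the unnormalized Laplacian L = D - W of an
    epsilon-neighborhood similarity graph (an illustrative sketch;
    the talk's exact graph construction may differ)."""
    # Pairwise Euclidean distances between sample points.
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Connect pairs of distinct points closer than epsilon (no self-loops).
    W = ((dists < epsilon) & (dists > 0)).astype(float)
    D = np.diag(W.sum(axis=1))
    return D - W

rng = np.random.default_rng(0)
pts = rng.uniform(size=(50, 2))      # unlabeled sample in the feature space
L = similarity_graph_laplacian(pts, 0.3)
eigvals = np.linalg.eigvalsh(L)      # spectrum of the graph Laplacian
# The smallest Laplacian eigenvalue is always 0 (up to numerical error).
print(eigvals[0])
```

The spectrum computed at the end is the kind of object the talk's analysis works with: the low end of the Laplacian spectrum encodes the cluster structure of the sample, which is what ties label complexity to low-density separation.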
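The rank-loss phenomenon in the second part can be illustrated with a toy example (again my own sketch, not the talk's model): stacking two beamforming matrices side by side loses rank when their columns align, while generic per-row channel scalings break the alignment and restore full rank. The specific matrix V and the scaling vectors below are assumptions chosen for illustration:

```python
import numpy as np

def scaled_stack(V1, V2, h1, h2):
    """Place two beamforming matrices side by side after scaling each row
    by its channel coefficient (unknown at the transmitters)."""
    return np.hstack([np.diag(h1) @ V1, np.diag(h2) @ V2])

# Toy example: two transmitters reuse the same 4x2 beamforming matrix.
V = np.array([[1., 0.],
              [0., 1.],
              [1., 1.],
              [1., -1.]])

# With identical (unit) row scalings the stacked columns align: rank 2.
aligned = scaled_stack(V, V, np.ones(4), np.ones(4))
# With one generic realization of distinct row scalings, full rank 4.
generic = scaled_stack(V, V, np.ones(4), np.array([1., 2., 3., 4.]))

print(np.linalg.matrix_rank(aligned), np.linalg.matrix_rank(generic))
# → 2 4
```

Whether the random row scalings generically preserve or destroy rank depends on the structure of the matrices, which is exactly the kind of condition the talk's second result characterizes.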
Date and Time
Feb. 20, 2015, All Day