This talk will deal with the notions of adaptive and non-adaptive information in the context of statistical learning and inference. Suppose that we have a collection of models (e.g., signals, systems, representations, etc.) denoted by X and a collection of measurement actions (e.g., samples, probes, queries, experiments, etc.) denoted by Y. A particular model x in X best describes the problem at hand and is measured as follows. Each measurement action, y in Y, generates an observation y(x) that is a function of the unknown model; this function may be deterministic or stochastic. The goal is to identify x from a set of measurements y_1(x),...,y_n(x), where y_i in Y, i=1,...,n. If the measurement actions y_1,...,y_n are chosen deterministically or randomly without knowledge of x, then the measurement process is non-adaptive. However, if y_i is selected in a way that depends on the previous measurements y_1(x),...,y_{i-1}(x), then the process is adaptive. Adaptive information is clearly more flexible, since an adaptive process can always disregard previously collected data and thereby mimic any non-adaptive scheme. Its advantage is that it can sequentially focus measurements or sensing actions on distinguishing the elements of X that are most consistent with previously collected data, and this can lead to significantly more reliable decisions. The idea of adaptive information gathering is commonplace (e.g., humans and animals excel at it), but outside of simple parametric settings little is known about the fundamental limits and capabilities of such systems. The key question of interest here is identifying situations in which adaptive information is significantly more effective than non-adaptive information. The answer depends on the interrelationship between the model space X and the measurement space Y. The talk will cover the general problem and examine several illustrative examples from signal processing, statistics, and machine learning.
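To make the distinction concrete, the following is a minimal sketch (not taken from the talk) of a toy instance of this setup: the model class X is the set of possible "step locations" x in {0,...,N-1}, each measurement action queries an index i and returns y_i(x) = 1 if i >= x and 0 otherwise, and the budget is n queries. The names N, x_true, query, nonadaptive, and adaptive, and the specific numbers, are illustrative assumptions; the code simply shows how fixed queries and data-dependent queries shrink the set of consistent models at very different rates.

import random

N = 1024                        # size of the hypothetical model class X
x_true = random.randrange(N)    # the unknown model x (illustrative)

def query(i, x=x_true):
    # Measurement action y_i applied to the model x (deterministic here).
    return 1 if i >= x else 0

def nonadaptive(n):
    # Non-adaptive: all n query locations are fixed before any data is seen.
    locations = sorted(random.sample(range(N), n))
    obs = [query(i) for i in locations]
    # Models consistent with the data lie between the largest index that
    # returned 0 and the smallest index that returned 1.
    lo = max((i + 1 for i, o in zip(locations, obs) if o == 0), default=0)
    hi = min((i for i, o in zip(locations, obs) if o == 1), default=N - 1)
    return lo, hi

def adaptive(n):
    # Adaptive: each query location depends on previous observations
    # (binary search over the remaining consistent models).
    lo, hi = 0, N - 1
    for _ in range(n):
        mid = (lo + hi) // 2
        if query(mid):          # observation 1 means x <= mid
            hi = mid
        else:                   # observation 0 means x > mid
            lo = mid + 1
    return lo, hi

if __name__ == "__main__":
    n = 10                      # budget of log2(N) measurements
    print("true model:", x_true)
    print("non-adaptive consistent set:", nonadaptive(n))  # typically ~N/n wide
    print("adaptive consistent set:   ", adaptive(n))      # pinned down exactly

With n = 10 queries, the adaptive procedure identifies x exactly, while the fixed query locations usually leave on the order of N/n candidate models; in noisy or higher-dimensional versions of such problems the comparison is far less obvious, which is the subject of the talk.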
Adaptive Information
Event Status: Scheduled
Event Details
Date and Time: April 6, 2012, All Day