
Statistical Laboratory

We present an implicit regularization scheme for gradient descent methods
applied to unpenalized least squares regression to solve the problem of
reconstructing a sparse signal from an underdetermined system of linear
measurements under the restricted isometry assumption. For a given
parameterization yielding a non-convex optimization problem, we show that
prescribed choices of initialization, step size and stopping time yield a
statistically and computationally optimal algorithm that achieves the minimax
rate with the same cost required to read the data up to poly-logarithmic
factors. Beyond minimax optimality, we show that our algorithm adapts to
instance difficulty and yields a dimension-independent rate when the
signal-to-noise ratio is high enough. We validate our findings with numerical
experiments and compare our algorithm against explicit $\ell_1$ penalization.
Going from hard instances to easy ones, our algorithm is seen to undergo a
phase transition, eventually matching least squares with an oracle knowledge of
the true support.
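The abstract does not spell out the parameterization, so the following is a minimal sketch assuming the Hadamard-product form w = u⊙u − v⊙v commonly used for implicit sparse regularization via gradient descent; the initialization scale `alpha`, step size `eta`, and stopping time `T` below are illustrative placeholders, not the prescribed choices from the talk.

```python
import numpy as np

# Sketch of implicit regularization for sparse recovery (hedged: assumes the
# parameterization w = u*u - v*v; alpha, eta, T are illustrative choices).
rng = np.random.default_rng(0)
n, d, k = 100, 400, 5                    # measurements, dimension, sparsity
X = rng.standard_normal((n, d)) / np.sqrt(n)   # columns roughly unit-norm
w_star = np.zeros(d)
w_star[:k] = 1.0                         # k-sparse ground-truth signal
y = X @ w_star + 0.01 * rng.standard_normal(n)

alpha, eta, T = 1e-6, 0.1, 3000          # small init, step size, stopping time
u = alpha * np.ones(d)
v = alpha * np.ones(d)
for _ in range(T):
    r = X @ (u * u - v * v) - y          # residual of the least-squares loss
    g = X.T @ r                          # gradient w.r.t. w = u*u - v*v
    u -= eta * 2 * u * g                 # chain rule through u -> u*u
    v += eta * 2 * v * g                 # chain rule through v -> -v*v
w_hat = u * u - v * v

err = np.linalg.norm(w_hat - w_star)
print(f"recovery error: {err:.4f}")
```

The small initialization keeps off-support coordinates near zero for exponentially long (the implicit regularization), while on-support coordinates grow multiplicatively until the residual is fit; stopping before the noise is fit plays the role of the penalty in explicit $\ell_1$ methods.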

(based on joint work with Patrick Rebeschini and Tomas Vaskevicius)


Further information

Time:

May 29th 2020, 14:00 to 15:00

Venue:

https://zoom.us/j/95022384263?pwd=N3Z6elB2Vy9Jajd6azlCNjFHQVlKdz09

Speaker:

Varun Kanade, University of Oxford

Series:

Statistics