Presented by:
Ramji Venkataramanan
Date:
Tuesday 24th July 2018 - 09:45 to 10:30
Venue:
INI Seminar Room 1
Abstract:
In many statistical inference problems, we wish to bound the
performance of any possible estimator. This can be seen as a converse result,
in a standard information-theoretic sense. A standard approach in the
statistical literature is based on Fano’s inequality, which typically gives a
weak converse. We adapt these arguments by replacing Fano's inequality with more
recent information-theoretic ideas, based on the work of Polyanskiy, Poor and Verdu.
This gives tighter lower bounds that can be easily computed and are
asymptotically sharp. We illustrate our technique in three applications:
density estimation, active learning of a binary classifier, and compressed
sensing, obtaining tighter risk lower bounds in each case.
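For context, the Fano-based lower bound mentioned above typically takes the following standard form (a generic statement, not the refined bound of the paper; the packing set, separation delta, and auxiliary index V are notation assumed here for illustration):

\[
\inf_{\hat\theta}\,\sup_{\theta\in\Theta}\;
\mathbb{E}_{\theta}\!\bigl[d(\hat\theta,\theta)\bigr]
\;\ge\;
\delta\left(1-\frac{I(V;X)+\log 2}{\log M}\right),
\]

% Here \{\theta_1,\dots,\theta_M\}\subset\Theta is a 2\delta-separated packing
% in the (semi)metric d, V is uniform on \{1,\dots,M\}, and X \mid V=j \sim P_{\theta_j}.
% The additive \log 2 term and loose bounds on I(V;X) are what typically make
% this a weak converse, which the talk's approach aims to sharpen.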
(joint with Oliver Johnson, see doi:10.1214/18-EJS14)