Presented by:
Shahar Mendelson
Date:
Friday 19th January 2018 - 11:00 to 11:45
Venue:
INI Seminar Room 1
Abstract:
The question of prediction is of central importance in Statistical Learning Theory. The optimal tradeoff between accuracy and confidence, and the identity of a procedure that attains it, have been studied extensively over the years.
In this talk I present some of the ideas used in the recent solution of this problem: a procedure $\hat{f}$ that attains the best possible accuracy/confidence tradeoff for a triplet $(F,X,Y)$ consisting of an arbitrary class $F$, an unknown distribution $X$ and an unknown target $Y$.
Specifically, I explain why, under minimal assumptions on $(F,X,Y)$, there is a natural critical level $r^*$, depending on the triplet and on the sample size $N$, such that for any accuracy parameter $r>r^*$ one has $E\big((\hat{f}(X)-Y)^2 \mid D\big) \leq \inf_{f \in F} E(f(X)-Y)^2+r^2$ with probability at least $1-2\exp(-cN \min\{1,r^2/\sigma^2\})$,
where $D$ denotes the given sample and $\sigma^2=\inf_{f \in F} E(f(X)-Y)^2$.
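The abstract does not spell out the procedure $\hat{f}$ itself. In this line of work (for instance, Lugosi and Mendelson's median-of-means tournaments), a key ingredient behind such exponential confidence levels is median-of-means estimation of the squared loss, which stays reliable even when the losses are heavy-tailed and the empirical mean is not. The sketch below is a minimal illustration of that ingredient only, not the procedure from the talk; the sample size, the Student-t distributions, the block count, and the candidate function are hypothetical choices made for the example.

```python
import numpy as np

def median_of_means(values, num_blocks):
    """Median-of-means estimate of the mean of `values`: split the
    sample into blocks, average within each block, and return the
    median of the block means. Far more stable than the plain mean
    when the data are heavy-tailed."""
    blocks = np.array_split(np.asarray(values), num_blocks)
    return np.median([block.mean() for block in blocks])

# Illustration: estimating the squared loss E(f(X)-Y)^2 of one
# candidate f from a heavy-tailed sample. With Student-t data of
# this degree of freedom the losses have a finite mean but very
# heavy tails, so the empirical mean fluctuates wildly across runs.
rng = np.random.default_rng(0)
N = 10_000
X = rng.standard_t(df=2.5, size=N)            # heavy-tailed design
Y = 2.0 * X + rng.standard_t(df=2.5, size=N)  # heavy-tailed target
f = lambda x: 1.5 * x                         # hypothetical candidate in F

losses = (f(X) - Y) ** 2
print("empirical mean loss :", losses.mean())
print("median-of-means loss:", median_of_means(losses, num_blocks=50))
```

Rerunning this with different seeds shows the empirical mean jumping around while the median-of-means estimate stays comparatively stable; that robustness to heavy tails is what makes confidence levels of the form $1-2\exp(-cN\min\{1,r^2/\sigma^2\})$ attainable under minimal moment assumptions.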