Norm Estimation, Precision Sampling, and Rademacher Type
Seminar Room 1, Newton Institute
We describe new algorithms for computing norms of a vector in a stream, which can be seen as linear embeddings into certain low-dimensional "computational spaces". In particular, we show that in a variety of settings, simple algorithms follow from the application of a single probabilistic technique, called Precision Sampling. These settings include classic scenarios such as l_p norms for p\in(0,2] and p>2, as well as mixed norms l_p(l_q). In the latter case, we show how (a natural generalization of) the Rademacher type yields essentially tight bounds for all regimes p,q>0.
Precision Sampling itself addresses the problem of estimating a sum of reals \sum a_i from weak estimates of each a_i\in[0,1]. More precisely, one chooses in advance "precisions" w_i>=1 for each i and obtains an estimate of a_i within additive error 1/w_i. The core question is: what is the best trade-off between the quality of the approximation to \sum a_i and the total precision, \sum_i w_i?
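As a toy illustration of this trade-off (a sketch of the general idea, not the paper's exact estimator), one natural instantiation draws u_i uniformly from (0,1], requests precision w_i = 1/u_i, and counts the indices whose noisy estimate clears a constant multiple of u_i; the function name, the threshold constant, and the uniform noise model are our assumptions for this simulation.

```python
import random

def precision_sampling_estimate(a, rng, threshold=2.0):
    """Estimate sum(a) from weak estimates of each a_i in [0, 1].

    Hypothetical sketch of the Precision Sampling idea: draw u_i
    uniform in (0, 1], request precision w_i = 1/u_i (i.e. an
    estimate of a_i within additive error u_i), and count the
    indices whose noisy estimate exceeds threshold * u_i.
    """
    count = 0
    total_precision = 0.0
    for a_i in a:
        u = rng.random() or 1e-12           # u_i ~ Uniform(0, 1]
        total_precision += 1.0 / u          # charge w_i = 1/u_i
        noisy = a_i + rng.uniform(-u, u)    # estimate within additive 1/w_i
        if noisy > threshold * u:           # a_i > 3*u_i is always counted,
            count += 1                      # a_i <= u_i is never counted
    # Hence E[count] lies in [sum(a)/3, sum(a)]: a constant-factor
    # estimate of the sum, obtained from only weak per-item estimates.
    return count, total_precision

rng = random.Random(0)
a = [rng.random() for _ in range(2000)]
est, prec = precision_sampling_estimate(a, rng)
```

The total precision \sum_i 1/u_i paid here is large in expectation; the actual Precision Sampling lemma achieves much better trade-offs by a more careful choice of precisions and estimator than this naive simulation.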
Joint work with Robert Krauthgamer and Krzysztof Onak.