We consider the properties of a large class of learning algorithms defined in terms of classical regularization operators for ill-posed problems. This class includes regularized least-squares, the Landweber method, $\nu$-methods, and truncated singular value decomposition on hypothesis spaces of vector-valued functions defined in terms of suitable reproducing kernels. In particular, universal consistency, minimax rates, and statistical adaptation of these methods will be discussed.
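To make the flavor of these algorithms concrete, the following sketch (not from the paper; all names and parameters are illustrative assumptions) shows two of the regularization schemes mentioned above, truncated singular value decomposition and regularized least-squares (Tikhonov), applied to a scalar-valued kernel regression problem. Both act on the spectrum of the kernel matrix: TSVD inverts only eigenvalues above a cut-off, while Tikhonov shifts every eigenvalue by a regularization parameter.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def tsvd_fit(K, y, threshold):
    # Truncated SVD regularization: invert only eigenvalues above `threshold`,
    # discarding the unstable small-eigenvalue directions.
    # K is symmetric PSD, so its eigendecomposition plays the role of the SVD.
    evals, evecs = np.linalg.eigh(K)
    inv = np.where(evals > threshold, 1.0 / np.maximum(evals, threshold), 0.0)
    # Coefficients c such that the estimator is f(x) = sum_i c_i k(x, x_i).
    return evecs @ (inv * (evecs.T @ y))

def tikhonov_fit(K, y, lam):
    # Regularized least-squares (Tikhonov): c = (K + n*lam*I)^{-1} y.
    n = K.shape[0]
    return np.linalg.solve(K + n * lam * np.eye(n), y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(40, 1))
    y = np.sin(3.0 * X[:, 0])                  # smooth noiseless target
    K = gaussian_kernel(X, X, sigma=0.5)
    resid_tsvd = np.abs(K @ tsvd_fit(K, y, threshold=1e-8) - y).max()
    resid_tik = np.abs(K @ tikhonov_fit(K, y, lam=1e-6) - y).max()
    print(resid_tsvd, resid_tik)               # both fits track the data closely
```

Both estimators can be viewed as applying a filter function to the spectrum of the kernel matrix; Landweber iteration and the $\nu$-methods discussed in the paper correspond to different filter choices with different qualification.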