Regularized estimation of the covariance matrix via the Cholesky decomposition of the inverse has become popular in high-dimensional settings with ordered variables, because the Cholesky factor of the inverse has a natural regression interpretation and can be regularized using regression methods. However, sparsity introduced in the inverse does not usually yield a sparse estimator of the covariance matrix itself. Other methods that regularize the covariance matrix directly, such as banding, do not guarantee positive definiteness of the estimator, which is automatic with the Cholesky decomposition. Here we show that the Cholesky factor of the covariance matrix itself also has a natural regression interpretation, and therefore methods introduced for the Cholesky factor of the inverse (banding, adaptive banding, lasso) can be applied to the covariance matrix itself to obtain a sparse estimator with guaranteed positive definiteness. We study the properties of this type of estimator and compare it to other methods that impose sparsity on the covariance matrix directly, such as banding and thresholding.
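The core construction described above can be sketched in a few lines: band the Cholesky factor of the sample covariance matrix, then recompose it. Because the product of a lower-triangular matrix with positive diagonal and its transpose is always positive definite, the resulting estimator is both banded and positive definite by construction. The bandwidth k, the AR(1)-style simulated data, and all variable names below are illustrative assumptions, not part of the original text:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, k = 10, 200, 2  # dimension, sample size, assumed bandwidth

# Simulate ordered variables with an AR(1)-type covariance,
# a setting where banding is a sensible form of sparsity.
true_cov = np.array([[0.7 ** abs(i - j) for j in range(p)] for i in range(p)])
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

S = np.cov(X, rowvar=False)   # sample covariance (positive definite since n > p)
L = np.linalg.cholesky(S)     # lower-triangular Cholesky factor of S itself

# Band the Cholesky factor: keep entries within k sub-diagonals, zero the rest.
idx = np.arange(p)
mask = np.abs(np.subtract.outer(idx, idx)) <= k
L_banded = np.where(mask, L, 0.0)

# Recompose: sigma_hat is banded (bandwidth k) and positive definite,
# because L_banded keeps the strictly positive diagonal of L.
sigma_hat = L_banded @ L_banded.T
```

Adaptive banding or a lasso penalty would replace the hard banding mask with a row-wise regularized regression on the preceding variables, but the recomposition step, and hence the positive-definiteness guarantee, is the same.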