
Hyperparameter selection for high dimensional sparse learning : application to neuroimaging

Abstract: Due to their non-invasiveness and excellent time resolution, magneto- and electroencephalography (M/EEG) have emerged as tools of choice to monitor brain activity. Reconstructing brain signals from M/EEG measurements can be cast as a high-dimensional, ill-posed inverse problem. Typical estimators of brain signals involve challenging optimization problems, composed of the sum of a data-fidelity term and a sparsity-promoting term. Because their regularization hyperparameters are notoriously hard to tune, sparsity-based estimators are currently not widely used by practitioners. The goal of this thesis is to provide a simple, fast, and automatic way to calibrate sparse linear models. We first study some properties of coordinate descent: model identification, local linear convergence, and acceleration. Relying on Anderson extrapolation schemes, we propose an effective way to speed up coordinate descent, in theory and in practice. We then explore a statistical approach to setting the regularization parameter of Lasso-type problems. A closed-form formula can be derived for the optimal regularization parameter of L1-penalized linear regressions. Unfortunately, it relies on the true noise level, which is unknown in practice. To remove this dependency, one can resort to estimators whose regularization parameter does not depend on the noise level. However, they require solving challenging "nonsmooth + nonsmooth" optimization problems. We show that partial smoothing preserves their statistical properties, and we propose an application to M/EEG source localization. Finally, we investigate hyperparameter optimization, encompassing held-out and cross-validation hyperparameter selection. This requires tackling bilevel optimization problems with nonsmooth inner problems, which are canonically solved using zeroth-order techniques such as grid search or random search. We present an efficient technique to solve these challenging bilevel optimization problems using first-order methods.
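The coordinate-descent and Anderson-extrapolation ideas above can be sketched as follows. This is a minimal illustration, not the thesis' implementation: `lasso_cd` is a textbook cyclic coordinate-descent solver for the Lasso, and `anderson_extrapolate` applies a regularized Anderson extrapolation formula to a window of successive iterates; the function names and the small ridge term `1e-10` are choices made here.

```python
import numpy as np

def soft_thresh(x, t):
    """Soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for min_w 0.5 * ||y - X w||^2 + lam * ||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    r = y.copy()                      # residual y - X w, kept up to date
    lips = (X ** 2).sum(axis=0)       # coordinate-wise Lipschitz constants
    for _ in range(n_iter):
        for j in range(p):
            if lips[j] == 0.0:
                continue
            old = w[j]
            w[j] = soft_thresh(old + X[:, j] @ r / lips[j], lam / lips[j])
            if w[j] != old:
                r -= (w[j] - old) * X[:, j]   # O(n) residual update
    return w

def anderson_extrapolate(W):
    """Anderson extrapolation of a window of iterates W (shape (K+1, p)):
    find affine weights c (sum c = 1) minimizing the combined difference
    ||sum_i c_i (W[i+1] - W[i])||, then return sum_i c_i * W[i+1]."""
    U = np.diff(W, axis=0)                      # successive differences
    M = U @ U.T + 1e-10 * np.eye(U.shape[0])    # small ridge for stability
    z = np.linalg.solve(M, np.ones(U.shape[0]))
    c = z / z.sum()
    return c @ W[1:]
```

In practice one would store the iterate after every few full coordinate-descent passes, extrapolate, and restart from the extrapolated point when it decreases the objective.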
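For intuition on the noise dependence of the regularization parameter, here is a minimal sketch assuming the classical universal scaling lam = sigma * sqrt(2 log(p) / n) for an L1-penalized regression with n samples, p features, and noise standard deviation sigma. This textbook choice only illustrates the dependence; it is not claimed to be the exact closed-form formula derived in the thesis.

```python
import numpy as np

def universal_lambda(n, p, sigma):
    """Classical universal regularization level for the Lasso:
    lam = sigma * sqrt(2 * log(p) / n).
    It grows only logarithmically with the dimension p, but is
    proportional to the noise level sigma -- which is unknown in
    practice, exactly the difficulty discussed in the abstract."""
    return sigma * np.sqrt(2.0 * np.log(p) / n)
```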
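The zeroth-order baseline — grid search of the regularization parameter on a held-out set — can be sketched as follows. This is a minimal self-contained illustration, assuming a proximal-gradient (ISTA) inner solver rather than the thesis' algorithms; `ista_lasso` and `holdout_lambda` are names chosen here.

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=200):
    """Proximal gradient (ISTA) for min_w 0.5 * ||y - X w||^2 + lam * ||w||_1."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = w + X.T @ (y - X @ w) / L      # gradient step
        w = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # prox step
    return w

def holdout_lambda(X_tr, y_tr, X_val, y_val, n_lambdas=30):
    """Zeroth-order hyperparameter selection: solve the Lasso on a
    logarithmic grid of lam values and keep the one with the smallest
    held-out prediction error.  Every grid point costs a full solve,
    which is the expense first-order bilevel methods aim to avoid."""
    lam_max = np.abs(X_tr.T @ y_tr).max()  # smallest lam yielding w = 0
    best_lam, best_err = lam_max, np.inf
    for lam in lam_max * np.geomspace(1.0, 1e-3, n_lambdas):
        w = ista_lasso(X_tr, y_tr, lam)
        err = np.mean((y_val - X_val @ w) ** 2)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam
```

First-order bilevel methods instead differentiate the held-out error with respect to lam through the inner solver, turning the search over hyperparameters into a gradient descent.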
Submitted on: Monday, October 11, 2021 - 3:19:11 PM
Last modification on: Saturday, June 25, 2022 - 8:28:53 PM
Long-term archiving on: Wednesday, January 12, 2022 - 8:06:53 PM


Version validated by the jury (STAR)


  • HAL Id: tel-03373531, version 1


Quentin Bertrand. Hyperparameter selection for high dimensional sparse learning : application to neuroimaging. Statistics [math.ST]. Université Paris-Saclay, 2021. English. ⟨NNT : 2021UPASG054⟩. ⟨tel-03373531⟩


