Title: Learning the optimal Tikhonov regularizer for inverse problems
Authors: Giovanni S. Alberti, Matti Lassas, Ernesto De Vito, Luca Ratti, Matteo Santacesaria
Editors: Marc'Aurelio Ranzato, Alina Beygelzimer, Yann Dauphin, Percy S. Liang, Jenn Wortman Vaughan
Published: 2021 (deposited 2023-03-03)
Type: Conference contribution, open access
Language: English
Subject: Computer and information sciences
ORCID: 0000-0001-7948-0577; 0000-0003-2043-3156
Citation: Alberti, G. S., Lassas, M., De Vito, E., Ratti, L. & Santacesaria, M. (2021). Learning the optimal Tikhonov regularizer for inverse problems. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang & J. Wortman Vaughan (Eds.), Advances in Neural Information Processing Systems 34 (35th Conference on Neural Information Processing Systems, NeurIPS 2021, Virtual, 6 December 2021), pp. 25205-25216. Neural Information Processing Systems Foundation.
DOI: https://doi.org/10.48550/arXiv.2106.06513
Handle: http://hdl.handle.net/10138/355456

Abstract: In this work, we consider the linear inverse problem y = Ax + ε, where A: X → Y is a known linear operator between the separable Hilbert spaces X and Y, x is a random variable in X, and ε is a zero-mean random process in Y. This setting covers several inverse problems in imaging, including denoising, deblurring, and X-ray tomography. Within the classical framework of regularization, we focus on the case where the regularization functional is not given a priori but is learned from data. Our first result is a characterization of the optimal generalized Tikhonov regularizer with respect to the mean squared error. We find that it is completely independent of the forward operator A and depends only on the mean and covariance of x. We then consider the problem of learning the regularizer from a finite training set in two different frameworks: a supervised one, based on samples of both x and y, and an unsupervised one, based only on samples of x. In both cases we prove generalization bounds under weak assumptions on the distributions of x and ε, including the case of sub-Gaussian variables. Our bounds hold in infinite-dimensional spaces, thereby showing that finer and finer discretizations do not make this learning problem harder. The results are validated through numerical simulations.
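To make the abstract's main claim concrete, here is a minimal finite-dimensional NumPy sketch (not the authors' code) of the characterization it describes: the MSE-optimal generalized Tikhonov regularizer does not involve the forward operator A and is determined by the mean μ and covariance Σ of x alone. The dimensions, the Gaussian model for x and ε, and all names (`tikhonov`, `mu_hat`, `Sigma_hat`) are illustrative assumptions of mine, not details from the paper.

```python
# Sketch, assuming finite dimensions, Gaussian x, and white Gaussian noise:
# the regularized reconstruction uses only estimates of mean(x) and cov(x),
# obtained here in the "unsupervised" way, from samples of x alone.
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 30, 20, 5000           # dim of x, dim of y (< n: ill-posed), samples

# Ground-truth statistics of x and a generic forward operator A.
mu = rng.normal(size=n)
L = rng.normal(size=(n, n)) / np.sqrt(n)
Sigma = L @ L.T + 0.1 * np.eye(n)
A = rng.normal(size=(m, n)) / np.sqrt(n)
sigma = 0.05                     # noise level

# Unsupervised learning: estimate mu and Sigma from a training set of x's.
X = rng.multivariate_normal(mu, Sigma, size=N)
mu_hat = X.mean(axis=0)
Sigma_hat = np.cov(X, rowvar=False)

def tikhonov(y, A, mu, Sigma, sigma):
    """Generalized Tikhonov reconstruction
       argmin_x ||Ax - y||^2 + sigma^2 ||Sigma^{-1/2} (x - mu)||^2,
    solved via its normal equations."""
    lhs = A.T @ A + sigma**2 * np.linalg.inv(Sigma)
    rhs = A.T @ y + sigma**2 * np.linalg.solve(Sigma, mu)
    return np.linalg.solve(lhs, rhs)

# Test on a fresh draw: noisy data y = Ax + eps.
x = rng.multivariate_normal(mu, Sigma)
y = A @ x + sigma * rng.normal(size=m)
x_learned = tikhonov(y, A, mu_hat, Sigma_hat, sigma)
x_plain = np.linalg.lstsq(A, y, rcond=None)[0]   # unregularized baseline

print("error, learned regularizer:", np.linalg.norm(x_learned - x))
print("error, least squares      :", np.linalg.norm(x_plain - x))
```

Note that A enters only the data-fidelity term, never the regularizer ||Σ^{-1/2}(x − μ)||², which is the point of the paper's characterization; the supervised variant of the learning problem would instead fit the estimator from paired samples (x, y).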