Provably Convergent Data-Driven Convex-Nonconvex Regularization


An emerging paradigm for solving inverse problems is to use deep learning to learn a regularizer from data. This leads to high-quality results, but often at the cost of provable guarantees. In this work, we show how well-posedness and convergent regularization arise within the convex-nonconvex (CNC) framework for inverse problems. We introduce a novel input weakly convex neural network (IWCNN) construction to adapt the method of learned adversarial regularization to the CNC framework. Empirically, we show that our method overcomes numerical issues of previous adversarial methods.
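As an illustrative aside, one simple way to obtain a weakly convex function (not necessarily the paper's IWCNN construction, which differs) is to subtract a quadratic from an input convex neural network (ICNN): if $\phi$ is convex, then $\phi(x) - \tfrac{\rho}{2}\|x\|^2$ is $\rho$-weakly convex by definition. A minimal NumPy sketch, with all weights and the network shape chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ICNN in 2 dimensions (illustrative, not the paper's architecture):
# nonnegative weights on the hidden path and a convex nondecreasing activation
# (ReLU) make the output convex in the input x.
W0 = rng.normal(size=(8, 2))
b0 = rng.normal(size=8)
Wz = np.abs(rng.normal(size=(1, 8)))  # nonnegative -> convexity preserved
Wx = rng.normal(size=(1, 2))          # direct affine path, no sign constraint

def icnn(x):
    """Convex in x: nonnegative combination of convex terms plus an affine term."""
    z = np.maximum(W0 @ x + b0, 0.0)
    return float(Wz @ z + Wx @ x)

RHO = 0.5  # weak-convexity parameter (assumed value)

def weakly_convex_reg(x):
    """rho-weakly convex: adding (RHO/2)||x||^2 back recovers the convex icnn."""
    return icnn(x) - 0.5 * RHO * float(x @ x)
```

By construction, `weakly_convex_reg(x) + (RHO/2)*||x||^2 == icnn(x)`, which is convex, so the midpoint convexity inequality holds for the recovered function at any pair of points.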

NeurIPS 2023 Workshop on Deep Learning and Inverse Problems
Zakhar Shumaylov
Mathematics PhD student at University of Cambridge

My research interests span all of applied mathematics.