Inverse Problems and Machine Learning

Ozan Öktem

Lecture 1: Introduction to machine learning and some of its underlying mathematics

The aim of this lecture is to introduce basic notions from machine learning. We quickly focus on supervised learning in the purely data-driven setting, i.e., where there are no physics-driven mechanistic models for how the training data are generated. We then survey some of the results and open problems associated with developing a mathematical and computational theory for deep learning in this setting.

Lecture 2: Machine learning in the context of inverse problems: Learning priors and post-processing

The focus here is on applying machine learning to solve ill-posed inverse problems, i.e., to recover an operator that maps the data to the signal. The starting point is a very brief survey of current regularization schemes, emphasizing their ability to account for a priori information. Next, we outline the challenges associated with using machine learning to solve ill-posed inverse problems, followed by a survey of early attempts that use it as a post-processing step. We conclude by outlining the limitations of this latter approach.

Lecture 3: Learned iterative schemes

This lecture introduces specific deep neural networks for solving ill-posed inverse problems that account for the a priori information contained in a forward model. We outline the current approaches, point to open problems, and conclude by showing examples of their performance in tomographic image reconstruction.


Linear and Nonlinear Inverse Problems in Imaging with Practical Applications

Samuli Siltanen

Lecture 1: Introduction to X-ray tomography

  • What is an X-ray image? The Beer-Lambert law

  • Slice imaging and history of CT. Inverse problem of tomography

  • Are you a natural tomographer?

  • Filtered back-projection and the Radon transform

  • Applications
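To make the Beer-Lambert law concrete, here is a minimal numerical sketch (Python/NumPy, with illustrative values of our own choosing): the measured intensity is I = I0 · exp(-∫ μ ds), and the log-transformed measurement is the line integral of the attenuation coefficient, i.e., the Radon-transform datum that filtered back-projection inverts.

```python
import numpy as np

# Beer-Lambert law: an X-ray beam with incident intensity I0 traversing
# material with attenuation coefficient mu is measured with intensity
#   I = I0 * exp(-integral of mu along the ray).
# All values below are illustrative.
I0 = 1.0                        # incident intensity
mu = np.array([0.2, 0.5, 0.3])  # attenuation in each pixel along the ray (1/cm)
ds = 0.1                        # path length through each pixel (cm)

I = I0 * np.exp(-np.sum(mu * ds))   # measured intensity

# The log-transformed measurement recovers the line integral of mu:
line_integral = -np.log(I / I0)     # equals np.sum(mu * ds) = 0.1
```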

Lecture 2: Pixel-based imaging and matrix models

  • Why matrices for tomography instead of filtered back-projection?

  • Singular value decomposition and ill-posedness

  • Naive and regularized reconstructions for 12×12 pixel tomography

  • Review of basic regularization methods: Truncated SVD, Tikhonov regularization, total variation regularization, wavelet sparsity

  • A learning-based approach
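The regularization methods above can be illustrated on a toy ill-conditioned system; the following is a hedged sketch in which an (arbitrarily chosen) Vandermonde matrix stands in for a tomography matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-conditioned forward model A and noisy data m = A x + noise.
A = np.vander(np.linspace(0, 1, 8), 8)   # Vandermonde: notoriously ill-conditioned
x_true = rng.standard_normal(8)
m = A @ x_true + 1e-6 * rng.standard_normal(8)

U, s, Vt = np.linalg.svd(A)

# Truncated SVD: invert only the k largest singular values.
k = 5
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ m) / s[:k])

# Tikhonov: x = argmin ||A x - m||^2 + alpha ||x||^2, obtained by damping
# each SVD component with the filter factor s_i^2 / (s_i^2 + alpha).
alpha = 1e-6
filt = s**2 / (s**2 + alpha)
x_tik = Vt.T @ (filt * (U.T @ m) / s)
```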

Lecture 3:

  • Limited angle data and the Helsinki Tomography Challenge 2022

  • Nonlinear imaging: passive gamma emission tomography of spent nuclear fuel

  • More applications

Exercise/Lab classes:

Siiri Rautio, Salla Latva-Äijö, Elli Karvonen and Elena Morotti will help with the sessions.

  • Exercises on Monday: simple tomographic matrix models.

  • Lab class on Tuesday: working with open datasets from Helsinki.

Modern Techniques of Large Scale Optimization for Data Science

Jacek Gondzio

Lecture 1: Interior Point Methods (IPMs) for LP

- Motivation, logarithmic barrier function, central path, neighbourhoods,

- path-following method, convergence proof, complexity of the algorithm,

- practical implementation issues.
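The logarithmic barrier and central path can be illustrated on the simplest possible LP, min c^T x over x ≥ 0 with c > 0 (a toy sketch of our own; a real IPM takes primal-dual steps, uses line searches, and drives mu to zero):

```python
import numpy as np

# Logarithmic barrier for the toy LP  min c^T x  s.t.  x >= 0  (c > 0):
#   f_mu(x) = c^T x - mu * sum(log x_i),   x > 0.
# Its minimizer, with components x_i(mu) = mu / c_i, is the central-path
# point; as mu -> 0 it approaches the LP solution x = 0.
c = np.array([2.0, 1.0])
mu = 1.0
x = np.full(2, 0.4)          # strictly feasible starting point

for _ in range(50):
    grad = c - mu / x        # gradient of f_mu
    hess = mu / x**2         # (diagonal) Hessian of f_mu
    x = x - grad / hess      # pure Newton step (a real IPM adds safeguards)

# x is now (numerically) the central-path point x(mu) = mu / c.
```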

Lecture 2: Interior Point Methods for QP, (convex) NLP, SOCP and SDP

- Quadratic Programming (QP) problems, primal-dual pair of QPs,

- Nonlinear (convex) inequality constraints,

- Second-Order Cone Programming,

- Semidefinite Programming,

- Newton method, logarithmic barrier function, self-concordant barriers.

Lecture 3:

- Sparse Approximations with IPMs:

  Modern applications of optimization that require the selection of a 'sparse' solution, originating from computational statistics, signal and image processing, compressed sensing, machine learning, and discrete optimal transport, to mention just a few.

- Alternating Direction Method of Multipliers (ADMM).
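To give a flavour of ADMM for sparse approximation, here is a standard lasso sketch (our own illustrative example, not the lecture's code): split the smooth least-squares term from the l_1 term, alternate the two easy subproblems, and update a scaled dual variable.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for min_x 0.5 ||A x - b||^2 + lam ||x||_1."""
    n = A.shape[1]
    z = np.zeros(n)          # split variable
    u = np.zeros(n)          # scaled dual variable
    Atb = A.T @ b
    # Factor once: every x-update solves (A^T A + rho I) x = A^T b + rho (z - u).
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        z = soft(x + u, lam / rho)   # z-update: proximal step
        u = u + x - z                # dual update on the consensus x = z
    return z

# Illustrative sparse-recovery problem (all numbers are our own choices).
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
x_hat = admm_lasso(A, b, lam=0.1)
```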

Exercise/Lab classes:

Filippo Zanetti and Margherita Porcelli will help with the sessions.

Exercise on Monday: 

IPMs:  From LP to QP 

Conjugate Gradients for positive definite linear systems 
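As a warm-up for the Conjugate Gradient part of the exercise, a textbook CG implementation (a minimal sketch, not the course's reference code) might look like:

```python
import numpy as np

def cg(A, b, tol=1e-10):
    """Conjugate Gradients for A x = b, with A symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(len(b)):  # at most n steps in exact arithmetic
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Illustrative SPD system (our own toy data).
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20.0 * np.eye(20)   # symmetric positive definite by construction
b = rng.standard_normal(20)
x = cg(A, b)
```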

Exercise on Tuesday: 

Examples of IPMs in action: 

Material-separating regularizer for multi-energy X-ray tomography 

Semidefinite Programming: Matrix Completion

Smooth and non-smooth optimisation for imaging applications

Luca Calatroni

Lecture 1: Basics of convex analysis and basic algorithms 

In this lecture we revise the basic notions of smoothness (Lipschitz continuity, Gâteaux/Fréchet differentiability, ...) and convex analysis (subdifferentials and subgradients, Fenchel conjugation, strong convexity, ...) that are required for defining two basic algorithms for solving convex optimisation problems: gradient descent (GD) and proximal gradient descent (PGD).
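The two algorithms can be sketched side by side on a toy least-squares problem (our own illustrative data; the l_1 penalty for PGD is an arbitrary choice):

```python
import numpy as np

# Toy problem: smooth term f(x) = 0.5 ||A x - b||^2, whose gradient
# A^T (A x - b) is Lipschitz with constant L = ||A||_2^2; for PGD we add
# the nonsmooth term lam * ||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((15, 5))
b = rng.standard_normal(15)
L = np.linalg.norm(A, 2) ** 2
lam = 0.1

def grad(x):
    return A.T @ (A @ x - b)

def prox_l1(v, t):
    # Proximal map of t * ||.||_1: soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x_gd = np.zeros(5)
x_pgd = np.zeros(5)
for _ in range(500):
    x_gd = x_gd - grad(x_gd) / L                        # GD with step 1/L
    x_pgd = prox_l1(x_pgd - grad(x_pgd) / L, lam / L)   # PGD (ISTA) step
```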

Lecture 2: Acceleration strategies

In this lecture we describe acceleration strategies for improving the convergence of GD (Nesterov acceleration) and of PGD (FISTA), and show how strong convexity can be exploited explicitly to define even faster acceleration schemes.
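The FISTA recipe is an ISTA step taken at an extrapolated point, with the classical momentum sequence t_k. A minimal sketch on a toy lasso problem (all data are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((15, 5))
b = rng.standard_normal(15)
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth gradient
lam = 0.1

def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(5)
y = np.zeros(5)                 # extrapolated point
t = 1.0
for _ in range(2000):
    x_new = prox_l1(y - A.T @ (A @ y - b) / L, lam / L)   # ISTA step at y
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0      # momentum sequence
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)         # extrapolation
    x, t = x_new, t_new
```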

Lecture 3: Sparsity-based problems: from convex to non-convex approaches

In this lecture we focus on how sparsity appears in applications. We comment on the use of l_1 vs. l_0 optimisation-based approaches. For the latter, we review iterative hard thresholding and greedy algorithms, and introduce the notion of continuous relaxations, for both constrained and unconstrained l_0 problems. Applications to microscopy image analysis will be shown.
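Iterative hard thresholding replaces the soft-thresholding step of ISTA with the projection onto s-sparse vectors. A minimal sketch on a noiseless toy problem (our own illustrative data):

```python
import numpy as np

def hard_threshold(v, s):
    """H_s: keep the s largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

def iht(A, b, s, iters=200):
    """Iterative hard thresholding for min 0.5||Ax - b||^2 s.t. ||x||_0 <= s."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x - step * A.T @ (A @ x - b), s)
    return x

# Illustrative noiseless sparse recovery (our own toy data).
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 15))
x_true = np.zeros(15)
x_true[[3, 11]] = [2.0, -1.0]
b = A @ x_true
x_hat = iht(A, b, s=2)
```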

Lab: ISTA and FISTA for sparse image reconstruction 

In this lab we will consider convex optimisation algorithms for solving problems with an l_2 data term and a sparsity-promoting term, for image reconstruction problems such as molecule localisation (for undersampled, blurred and noisy data, with an l_1 penalty). We will compare ISTA with its accelerated version (FISTA) and, for a slightly modified model, with its strongly convex variant.

Exercise class: Proof of convergence of FISTA in the convex/strongly convex setting

Proof of convergence of FISTA in function values, together with its strongly convex variant (with explicit knowledge of the strong convexity parameter).
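For reference, the statements to be proven can be written in standard notation (F = f + g with f convex and L-smooth, g convex); the constants below follow the usual Beck-Teboulle-type bounds and should be checked against the lecture notes:

```latex
% FISTA in function values (convex case):
F(x_k) - F(x^\star) \;\le\; \frac{2L\,\|x_0 - x^\star\|^2}{(k+1)^2}.
% Strongly convex variant, with known strong convexity parameter \mu and
% constant momentum (\sqrt{L}-\sqrt{\mu})/(\sqrt{L}+\sqrt{\mu}):
F(x_k) - F(x^\star) \;\le\; \Bigl(1 - \sqrt{\tfrac{\mu}{L}}\Bigr)^{k}
  \Bigl(F(x_0) - F(x^\star) + \tfrac{\mu}{2}\|x_0 - x^\star\|^2\Bigr).
```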