Abstract: In many scientific and medical settings, we cannot directly observe images of interest, such as a person’s internal organs, the microscopic structure of materials or cells, or distant stars and galaxies. Instead, we use MRI scanners, microscopes, and telescopes to collect indirect data that require sophisticated algorithms to form an image. Historically, these methods have relied on mathematical models of simple image structures to improve the quality and resolution of the resulting images. More recent efforts harness vast collections of images to train computers to learn more complex models of image structure, yielding more accurate and higher-resolution images than ever before. These new methods have led to a renaissance in computational imaging and to new insights into designing neural networks and other machine learning models in a principled manner, jointly leveraging both training data and physical models of how imaging data are collected. In this course, we will cover some exciting new directions in this emerging area (a brief code sketch after the topic list illustrates the physics-plus-learning recipe these methods share), such as
(a) plug-and-play methods;
(b) variational networks and deep unrolling;
(c) deep equilibrium models;
(d) learning regularization functionals;
(e) scalable and mini-batch OPML;
(f) diffusion models;
(g) self-supervised OPML approaches;
(h) robustness and domain adaptation.
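
To give a flavor of how these methods combine a physics-based forward model with a learned image prior, here is a minimal plug-and-play proximal-gradient sketch. It is an illustration only, not course material: the Gaussian-blur forward model, the box-filter "denoiser" standing in for a trained neural network, and names such as `pnp_pgd` are assumptions introduced for this example.

```python
# Minimal plug-and-play proximal-gradient sketch (illustrative assumptions:
# the forward model A is a Gaussian blur, and a hand-crafted box filter
# stands in for a trained neural denoiser).
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def forward(x, sigma=2.0):
    """Physics model A: Gaussian blur standing in for an imaging system."""
    return gaussian_filter(x, sigma)

def adjoint(y, sigma=2.0):
    """Adjoint A^T; a Gaussian blur is (approximately) self-adjoint."""
    return gaussian_filter(y, sigma)

def denoise(x):
    """Placeholder image prior: a box filter; in practice, a learned denoiser."""
    return uniform_filter(x, size=3)

def pnp_pgd(y, step=1.0, iters=50):
    """Plug-and-play proximal gradient: data-consistency step + denoiser step."""
    x = adjoint(y)                      # crude initialization from the data
    for _ in range(iters):
        grad = adjoint(forward(x) - y)  # gradient of 0.5 * ||A x - y||^2
        x = denoise(x - step * grad)    # denoiser replaces the proximal map
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_true = np.zeros((64, 64))
    x_true[24:40, 24:40] = 1.0                                       # test image
    y = forward(x_true) + 0.01 * rng.standard_normal(x_true.shape)   # noisy data
    x_hat = pnp_pgd(y)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Swapping the box filter for a trained denoising network, or unrolling a fixed number of these iterations and training them end to end, leads to the plug-and-play and deep-unrolling methods listed above.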
Here is the notebook to complete for the exam.