Darker, smaller, faster: Machine learning regularizers for imaging at the extremes
by
The notion of data-driven regularizers for ill-posed imaging has been around since at least the invention of learned sparse codes, or “dictionaries,” by Olshausen and Field in 1996. As Gregor and LeCun pointed out in 2010, learning the code through a deep neural network narrows the regularizer on demand, improving resilience to noise and incomplete measurements.
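To make the idea concrete, the following is a minimal sketch, not the speaker's code, of sparse coding as a regularizer: classical ISTA with a fixed dictionary D, followed by the LISTA-style unrolled update of Gregor and LeCun in which the matrices and threshold become learned parameters (here they are simply initialized to their ISTA values as placeholders; the dictionary and problem sizes are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(v, theta):
    """Proximal operator of the l1 norm (the sparsity regularizer)."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista(y, D, lam=0.1, n_iter=200):
    """Classical ISTA: minimize 0.5*||y - D x||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (D.T @ (D @ x - y)) / L, lam / L)
    return x

def lista_step(y, x, W_e, S, theta):
    """One unrolled LISTA layer; W_e, S, theta would be trained end-to-end."""
    return soft_threshold(W_e @ y + S @ x, theta)

# Toy example (hypothetical sizes): recover a sparse code from a noisy measurement.
m, n = 64, 128
D = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0
y = D @ x_true + 0.01 * rng.standard_normal(m)

x_hat = ista(y, D)
print("ISTA support estimate:", sorted(np.argsort(-np.abs(x_hat))[:5]))

# LISTA-style unrolling, initialized at the ISTA values; in LISTA these
# parameters are learned from example (y, x) pairs instead of derived from D.
L = np.linalg.norm(D, 2) ** 2
W_e = D.T / L
S = np.eye(n) - (D.T @ D) / L
theta = 0.1 / L
x_l = np.zeros(n)
for _ in range(20):
    x_l = lista_step(y, x_l, W_e, S, theta)
print("LISTA-init support estimate:", sorted(np.argsort(-np.abs(x_l))[:5]))
```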
For the past nine years, my group has been working on imaging under extreme conditions, which include: low photon incidence, down to a single photon per pixel; strong attenuation and scattering, as occurs to coherent x-rays (generated by a synchrotron) when they propagate through complex objects such as integrated circuits; and, more recently, phenomena faster than the camera frame rate and with motion ranges that are a fraction of a pixel, i.e., severely undersampled in both the space and time domains. I will present and critique the methods we used to make progress on these difficult problems, followed by a more general outlook on how machine learning-inspired methods can be useful in the physical sciences and engineering.
Seminar Room, INPP building, NCSR Demokritos
Videoconference via https://us02web.zoom.us/j/82802459812