The Taub Faculty of Computer Science Events and Talks
Sunday, 10.10.2021, 11:00
The removal of noise from an image is a fundamental task in image processing. The statistical distribution of the noise depends on the measuring technique and on the nature of the captured image, and should be taken into account when tackling a denoising problem. In some applications, such as night vision, astronomy and fluorescence microscopy, images are acquired under low-light conditions and the image sensor counts only a small number of photons per pixel. Under these circumstances the noise can be modeled by the Poisson distribution and is called Poisson noise or shot noise. Recovering clean images in these applications is essential, and effective Poisson-denoising algorithms are therefore required.
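As a minimal illustration of the low-light imaging model described above, the sketch below simulates shot noise by drawing each pixel from a Poisson distribution whose mean is the expected photon count (the function name and peak-count parameter are illustrative, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_shot_noise(clean, peak):
    """Simulate shot noise: scale a [0, 1] image so its maximum
    expected photon count is `peak`, then draw Poisson counts."""
    scaled = clean * peak        # expected photon count per pixel
    noisy = rng.poisson(scaled)  # each pixel ~ Poisson(scaled)
    return noisy.astype(np.float64)

# A flat gray patch with an expected count of 2 photons per pixel
clean = np.full((64, 64), 0.5)
noisy = add_shot_noise(clean, peak=4.0)

# For Poisson noise the variance equals the mean, so the relative
# noise grows as the photon count drops (the low-light regime).
print(noisy.mean(), noisy.var())  # both close to 2 for this patch
```

Because the variance equals the mean, the signal-to-noise ratio scales as the square root of the photon count, which is why denoising becomes hardest at very low counts.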
Several methods can address this problem; the most common approximates the Poisson distribution of the captured image with a Gaussian one, which is reasonable for a high enough photon count at the sensor. In this approach, applying a variance-stabilizing transform (VST) such as the Anscombe transform to the image yields a signal with approximately Gaussian noise. Any of the many available Gaussian denoising algorithms can then be used to denoise the transformed image, and an inverse Anscombe transform is finally applied to obtain the denoised result. While this method is widely used and works well for higher-intensity image signals, in the very low photon-count case the Gaussian approximation of the noise no longer holds, and a different approach should be considered. For example, using the Plug-and-Play Prior method, Poisson denoising can be performed iteratively with a Gaussian denoiser across the entire image intensity range. Alternatively, a denoising algorithm that directly addresses the Poisson noise can be used.
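The VST pipeline above can be sketched in a few lines. This is an assumed minimal implementation: the standard Anscombe formula 2√(x + 3/8) is used, a Gaussian blur stands in for a real Gaussian denoiser such as BM3D, and the simple algebraic inverse is used rather than the exact unbiased inverse that is preferable at low counts:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def anscombe(x):
    """Forward Anscombe VST: maps Poisson counts to data with
    approximately unit-variance Gaussian noise (valid for
    moderate-to-high counts)."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse of the forward transform."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def denoise_poisson_vst(noisy_counts, sigma=2.0):
    # 1) stabilize the variance, 2) apply any Gaussian denoiser
    #    (a blur is only a placeholder here), 3) invert the VST
    stabilized = anscombe(noisy_counts)
    denoised = gaussian_filter(stabilized, sigma=sigma)
    return inverse_anscombe(denoised)
```

Swapping the placeholder blur for a strong Gaussian denoiser is exactly what makes this pipeline attractive: the Poisson-specific part is confined to the two transforms.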
In recent years, deep learning has achieved state-of-the-art results in many computer vision tasks, including image denoising. Nevertheless, the interpretability and explainability of deep learning models are still lacking, and their way of solving complex tasks is unclear. One possible approach to these problems is to incorporate classical image processing principles and algorithms into the models' architectures.
In this work we propose several deep learning models for Poisson denoising, built on classical image processing principles: sparse representation, self-similarity and multiscale analysis. The action of these models can be explained theoretically. We compare their Poisson-denoising performance with that of common classical algorithms and other deep learning models, and show that some of them achieve competitive results.