Modern digital cameras have very powerful noise reduction algorithms implemented in their signal processors. All of them have one thing in common: the algorithm needs to decide what is noise and what is image content. Let's have a look at the concept.
We will describe how a simple noise reduction algorithm works. It is definitely not the best one, but the concept is very clear, and most implementations in digital cameras are based on this idea:
1) divide the signal into noise and image content
2) reduce the noise
3) put the image together again
The algorithm shown here is also known as “coring”. Its computational needs are so low that it has been used for years, even for real-time noise reduction in video processing.
We start with an image that shows noise we want to reduce (1). An easy way to reduce noise is to apply a low-pass filter. The result (2) is a blurry version of the starting image. So we have successfully reduced the noise, but we do not have a good-looking image, as we have also made it blurry. The aim is to reduce the noise and NOT make the image appear blurry.
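As a rough illustration, here is a minimal sketch of this first step in Python. The synthetic test image, the Gaussian filter, and the sigma value are assumptions made for the example; a real camera might use a different low-pass filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic stand-in for image (1): a smooth gradient plus Gaussian noise.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))
noisy = clean + rng.normal(scale=0.05, size=clean.shape)   # image (1): noisy input

# Image (2): low-pass filtered version -- less noisy, but blurry.
low_pass = gaussian_filter(noisy, sigma=2.0)
```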
The difference between these two images (1 and 2) is a high-pass version of the image (3). It contains the information that was lost in the low-pass filtering. So the original image is the sum of the low-pass and the high-pass image (1 = 2 + 3).
In the high-pass image (3), edges and high-contrast structures get high values, while low-contrast structures get low values. We now make the assumption that the small values represent noise and the higher values represent high-frequency image content. So we apply a threshold operation and force all low values to 0, while higher values are left untouched.
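The threshold operation itself is only a few lines. A sketch of a hard threshold (the function name and the idea of passing the threshold as a parameter are illustrative; in practice the threshold would depend on the measured noise level):

```python
import numpy as np

def core(high_pass, threshold):
    # Hard "coring": small high-pass values are assumed to be noise and set to 0,
    # larger values are assumed to be edges/texture and kept unchanged.
    return np.where(np.abs(high_pass) < threshold, 0.0, high_pass)
```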
The result of this operation (4) is then added to the low-pass image (2). The resulting image (5) has the noise reduced (as in 2), but is not blurry: we have preserved edges and strong structures and at the same time reduced the noise.
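Putting the steps together, a compact sketch of the whole pipeline could look like this. The Gaussian low-pass, the hard threshold, the parameter values, and the function name are illustrative assumptions, not the exact choices a camera manufacturer would make:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def coring_denoise(image, sigma=2.0, threshold=0.05):
    """Simple 'coring' noise reduction sketch: (1) = input, (5) = return value."""
    low_pass = gaussian_filter(image, sigma=sigma)       # image (2): blurry, less noisy
    high_pass = image - low_pass                         # image (3), since (1) = (2) + (3)
    cored = np.where(np.abs(high_pass) < threshold,      # image (4): small values are
                     0.0, high_pass)                     #            treated as noise
    return low_pass + cored                              # image (5): denoised, not blurry
```

Applied to the noisy gradient from the first sketch, `coring_denoise(noisy)` should smooth the flat areas while leaving the strong transitions largely untouched.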
Shown here is only an intensity image; noise reduction for color images is more complex.
This simple algorithm can be modified in a lot of different ways:
• differentiate into more bands than just “low-pass” and “high-pass”
• use wavelet decomposition to make a more detailed differentiation between frequency bands
• make a “soft” threshold operation to avoid artifacts (see the sketch after this list)
• implement noise reduction differently for intensity and color noise
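For the “soft” threshold mentioned above, one common option is a shrinkage function that pulls small values towards zero instead of cutting them off abruptly, which avoids the hard transition at the cut-off. A sketch (function name and parameters are again illustrative):

```python
import numpy as np

def soft_core(high_pass, threshold):
    # Soft threshold ("shrinkage"): values below the threshold become 0, larger
    # values are reduced by the threshold, so there is no abrupt jump at the cut-off.
    return np.sign(high_pass) * np.maximum(np.abs(high_pass) - threshold, 0.0)
```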
What you see if you have a close look at the resulting image: we have lost image information in low-contrast fine details. This phenomenon is also known as “texture loss” and is a very important part of image quality analysis. The iQ-Analyzer uses different algorithms to describe this; for more details, have a look at these links:
ANALYSIS SOFTWARE
iQ-Analyzer
CONFERENCE PAPERS
» Differences of digital camera resolution metrology to describe noise reduction artifacts
» Interaction of image noise, spatial resolution, and low contrast fine detail preservation in digital image processing
» Noise Reduction vs. Spatial Resolution