Image Quality Factors


Contents
  1. Introduction
  2. The problem with noise
  3. Noise types
    1. Temporal noise (random noise)
    2. Spatial noise (pattern noise or non-uniformity noise)
    3. Color noise vs. Intensity noise
  4. Noise sources
    1. Photon-shot noise
    2. Read noise
    3. Improving noise
  5. How to measure noise?
    1. ISO 15739 SNR
    2. Measuring Noise using the dead leaves pattern
  6. Noise standards (that we use)
    1. ISO 15739 Visual noise
    2. EMVA 1288 Non-Uniformities
  7. Conclusion
  8. References


Introduction

Noise in an image is defined as the presence of artifacts that do not originate from the original scene content. Generally speaking, noise is a statistical variation of a measurement created by a random process. In imaging, noise appears as an artifact with a grainy structure that covers the image.

Noise can have different forms and appearances within an image and is, in most cases, an unwanted or disturbing artifact that reduces the subjective image quality.

Image 1: An example of how noise can greatly affect the image quality.

The problem with noise

Noise is a byproduct of irregular signal fluctuations that accompany a transmitted signal. What’s important to understand here is that these fluctuations are not part of the signal; instead, they obscure the intended target.

Thus, one of the most crucial tasks in imaging is creating a strong signal with a minimum amount of noise beside it. Unfortunately, this often proves to be a significant challenge, particularly in low-light situations where the signal is already weak. The first step when dealing with image noise is to identify the type of noise you’re dealing with.

Noise types

In digital imaging, we encounter various types of noise, but unfortunately the terms and definitions differ slightly across standards and publications, making it difficult to agree on a single set of terms. In this article, we cover the most widely used definitions and solutions.

Image 2: Temporal noise (left) vs. spatial noise (right).

Temporal noise (random noise)

Temporal noise is almost always completely random and results from variations in the process of generating a digital value from a single pixel by converting incoming photons into electrons. In addition, the number of photons that hit a single pixel during the exposure time will also vary; this variation is known as photon shot noise.

If we observe the same single pixel in an image that has been captured multiple times, we will see this pixel fluctuate between the various images as demonstrated in image 2. Even though the scene from the images doesn’t change, we will still see variation in the digital value we receive from this particular pixel.
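This frame-to-frame fluctuation is easy to reproduce in a simulation. The sketch below is an illustrative assumption, not a measurement of a real sensor: it models one pixel’s photon count over 50 captures of an unchanging scene as a Poisson process.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: a constant mean of 100 photons per pixel per exposure.
mean_photons = 100

# Capture the "same" pixel in 50 consecutive frames: photon arrival is a
# Poisson process, so the digital value fluctuates from frame to frame
# even though the scene does not change.
samples = rng.poisson(mean_photons, size=50)

print(samples.min(), samples.max())    # values scatter around 100
print(samples.std(ddof=1))             # roughly sqrt(100) = 10
```

Even though the scene is perfectly static, no two captures of this pixel agree exactly.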

Spatial noise (pattern noise or non-uniformity noise)

Spatial noise is caused by variations between the individual pixels and is therefore not random. This noise type is often referred to as “non-uniformity,” since the term noise itself implies a random process. EMVA 1288 uses the term “non-uniformity,” while ISO 15739 uses “fixed pattern noise.”

Pixels that are positioned next to each other on the sensor will display differences in their digital values even if they image the same object. Each pixel shows slightly different behavior, resulting in slightly different digital values.

Keep in mind that variations between pixels can also be caused by temporal noise; the various forms of spatial noise can only be observed once the temporal noise is minimized. This is normally done by averaging hundreds of images to suppress the random component. The remaining (averaged) image will then only show the spatial noise.

One form of spatial noise is Pixel Response Non-Uniformity (PRNU), a slight variation in the sensitivity of each pixel. Dark Signal Non-Uniformity (DSNU) is another form: a slight variation between pixels in their dark signal, i.e., the signal generated in the absence of light.
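The averaging procedure can be sketched as follows. The sensor model (64×64 pixels, ~1% PRNU, a small DSNU offset, a mean signal of 1000 photons) is an illustrative assumption, not data from a real device:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sensor: per-pixel gain variation (PRNU, ~1%) and a
# per-pixel dark offset (DSNU) -- both fixed across frames.
shape = (64, 64)
prnu = 1.0 + 0.01 * rng.standard_normal(shape)   # fixed gain map
dsnu = 2.0 * rng.standard_normal(shape)          # fixed dark offset (DN)

def capture(mean_photons=1000):
    """One exposure: shot noise (temporal) on top of the fixed pattern."""
    photons = rng.poisson(mean_photons, size=shape)
    return prnu * photons + dsnu

# A single frame mixes temporal and spatial noise; averaging many frames
# suppresses the random component by ~1/sqrt(N) and leaves the pattern.
n_frames = 400
avg = np.mean([capture() for _ in range(n_frames)], axis=0)

# The averaged frame is close to the pure spatial (PRNU/DSNU) pattern.
pattern = prnu * 1000 + dsnu
residual = np.abs(avg - pattern).mean()
print(residual)   # small vs. single-frame shot noise (~sqrt(1000) = 32 DN)
```

After averaging 400 frames, the remaining deviation from the fixed pattern is roughly 20 times smaller than in a single capture, so the non-uniformities dominate what is left.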

Image 3: Averaging images together to reduce the presence of noise.

Color noise vs. Intensity noise

In the previous section, we only analyzed the behavior of single pixels and their neighbors on monochrome sensors. These sensors show noise as a variation in intensity only. Color sensors, however, display both intensity noise and color noise.

Color noise is created and amplified during the generation of color information. Essentially, a single pixel captures light only for a certain band of the spectrum (e.g., red, green, or blue). This is true for nearly all color sensor types.

Color noise is attributed to a process known as demosaicing, in which the missing color information is interpolated from neighboring pixels so that red, green, and blue values are obtained for each pixel. As a result, the noise of an individual pixel affects the color information of its neighbors, and during interpolation the noise in these pixels smears out.
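As a minimal sketch of this smearing, the snippet below stands in for bilinear demosaicing with a simple 3×3 box average (an assumption for illustration; real demosaicing algorithms are more sophisticated) and shows how a single noisy pixel leaks into its neighbors:

```python
import numpy as np

# A 5x5 patch with one strong noise spike at the center pixel.
noise = np.zeros((5, 5))
noise[2, 2] = 9.0

# 3x3 box average as a stand-in for the interpolation step of demosaicing.
smeared = np.zeros_like(noise)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        smeared += np.roll(np.roll(noise, dy, axis=0), dx, axis=1)
smeared /= 9.0

print(smeared[2, 2])   # 1.0 -- the spike itself is reduced...
print(smeared[1, 2])   # 1.0 -- ...but now appears in the neighbors too
```

The single spike is no longer confined to one pixel: interpolation has spread its noise over a 3×3 neighborhood, which is exactly the smearing effect described above.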

The typical color noise scenario in imaging is strong noise in the blue channel and lower noise in the green and red channels. Due to demosaicing, the strong noise in the blue channel will also affect the other channels.

It’s important to note that the human observer is much more sensitive to intensity noise than to color noise, but nevertheless, strong color noise can still be disturbing to the overall image quality.

Noise sources

There are many different sources of noise in an image. To simplify, we can differentiate between two main noise sources: photon shot noise and read noise.

Photon-shot noise

Photon shot noise refers to the noise of the light itself. If we imagine light as a flow of photons, we see that this flow is not perfectly constant over time. As a comparison, picture an instrument that measures rain on a small surface. In heavy rain, we can accurately report an average number of drops per time interval. A very light rain, however, yields only a few drops per interval, so the measurement varies strongly from one interval to the next.

The same idea from the rain example applies to photon shot noise. For photon shot noise, the signal-to-noise ratio (SNR) equals the square root of the signal. Put simply, the more photons we have, the better the SNR, and vice versa.
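This square-root relationship follows directly from Poisson statistics and is easy to verify numerically; the photon counts below are simulated, not measured:

```python
import numpy as np

rng = np.random.default_rng(3)

# For pure photon shot noise, the number of collected photons is Poisson
# distributed: its standard deviation equals sqrt(mean), so
# SNR = mean / sqrt(mean) = sqrt(mean).
snrs = {}
for mean_photons in (100, 10_000):
    samples = rng.poisson(mean_photons, size=200_000)
    snrs[mean_photons] = samples.mean() / samples.std()
    print(mean_photons, round(snrs[mean_photons], 1))   # ~10 and ~100
```

A hundred times more photons yields only ten times the SNR, which is why low-light imaging is so much harder than daylight imaging.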

Image 4: The more signal we have the better the SNR.1

Read noise

Read noise summarizes multiple noise sources within the sensor’s readout process. In many cases this noise is constant, so the lower the signal (the fewer photons), the worse the SNR.

When we plot the SNR vs. the number of photons per pixel per exposure, we can differentiate the SNR into two regions:

Read-noise limited: occurs at low photon counts, where the read noise dominates and the measured SNR is significantly lower than the SNR that photon shot noise alone would allow.

Photon-shot-noise limited: occurs at high photon counts, where the measured SNR is only slightly below the SNR that photon shot noise alone would allow.
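A minimal sketch of these two regions, assuming a hypothetical read noise of 5 electrons and a quantum efficiency of 1:

```python
import numpy as np

def snr(photons, read_noise_e=5.0):
    """SNR of a pixel with photon shot noise plus a constant read noise.

    Sketch model (quantum efficiency assumed 1, read noise in electrons):
    variance = photons (shot noise) + read_noise^2 (read noise).
    """
    return photons / np.sqrt(photons + read_noise_e ** 2)

photons = np.array([1, 10, 100, 10_000])
print(np.round(snr(photons), 2))   # low counts: read-noise limited;
                                   # high counts: SNR approaches sqrt(photons)
```

At 1 photon the SNR is far below the shot-noise limit of 1, while at 10,000 photons it is within a fraction of a percent of sqrt(10,000) = 100.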

Improving noise

A perfect camera (free of read noise and quantization noise) is not noise-free, but instead will still show the photon shot noise. Keep in mind, however, that the lower the number of photons per pixel per exposure, the worse the SNR.

To improve noise on the sensor level, you need to reduce the read noise and increase the number of photons per pixel per exposure: larger pixels collect more photons, and a longer exposure time captures more photons. The maximum exposure time is limited by the application, as longer exposures introduce motion blur. Options on the sensor level are thus limited, but in many cases signal processing can reduce the noise using image enhancement and noise reduction algorithms.
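One simple illustration of noise reduction through signal processing is frame averaging: averaging N frames of a static scene reduces the temporal noise by roughly √N. The scene and photon counts below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative assumption: a static scene with a mean of 50 photons per
# pixel per exposure, captured 16 times.
frames = rng.poisson(50, size=(16, 10_000))   # 16 frames, 10,000 pixels each

# SNR of a single frame vs. the average of all 16 frames:
snr_single = frames[0].mean() / frames[0].std()
avg = frames.mean(axis=0)
snr_avg = avg.mean() / avg.std()

# Averaging 16 frames improves the SNR by about sqrt(16) = 4.
print(round(snr_single, 1), round(snr_avg, 1))
```

The trade-off is the same as with longer exposures: any motion between frames will blur the averaged result.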

How to measure noise?

ISO 15739 SNR

ISO standard 15739 describes a procedure to measure and report the SNR of a camera.2 The procedure uses a test target based on ISO 14524, which the camera under test reproduces under controlled conditions.3 Example targets are shown below:

Image 5: The TE269 OECF test chart for measuring noise.
Image 6: The TE270X test chart for measuring noise.

The SNR is calculated for every patch of the test target, providing SNR as a function of luminance. To report a single number, the following steps are used:

  1. Extract L_ref from the OECF
  2. Calculate L_SNR
  3. Interpolate and report the SNR for L_SNR

The reference luminance (L_ref) is the lowest luminance that leads to a digital value of 245 in one of the three channels. The value of 245 is valid for 8-bit sRGB images.

If we have the L_ref, then we can easily calculate the L_SNR as:

L_SNR = 0.13 × L_ref

Where does the 0.13, or 13%, come from? The idea is to measure on an 18% gray card, but the exposure control of a camera would not map an 18% reflectance to 18% of the maximum digital value, as it needs some “headroom” for highlights in the image. ISO 15739 assumes a headroom of 140%, and 18% / 140% ≈ 13%.

Essentially, when the L_SNR is known, the corresponding SNR value is interpolated from the SNR curve.
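The three steps above can be sketched as follows; the patch luminances, OECF values, and SNR curve are hypothetical measurement data, not values from a real camera:

```python
import numpy as np

# Hypothetical measurement data for one channel: patch luminances (cd/m^2),
# the camera's OECF (mean digital values), and the measured SNR per patch.
luminance = np.array([1, 2, 5, 10, 20, 50, 100, 200, 500])
digital   = np.array([8, 15, 35, 60, 100, 160, 210, 245, 255])
snr       = np.array([2, 4, 8, 12, 18, 26, 33, 38, 40])

# Step 1: L_ref is the lowest luminance reaching a digital value of 245.
l_ref = np.interp(245, digital, luminance)

# Step 2: L_SNR = 0.13 * L_ref (18% gray with 140% highlight headroom).
l_snr = 0.13 * l_ref

# Step 3: interpolate the SNR curve at L_SNR and report that value.
reported_snr = np.interp(l_snr, luminance, snr)
print(round(l_ref), round(l_snr, 1), round(reported_snr, 1))  # 200 26.0 19.6
```

Note that `np.interp` requires monotonically increasing sample points, which both the OECF and the luminance series satisfy here.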

Measuring Noise using the dead leaves pattern

Another option for measuring noise is the dead leaves pattern.4 This pattern consists of randomly placed circles of varying diameter and color, giving a non-uniform pattern. Measuring noise on a non-uniform pattern provides a better estimate of the true noise performance and ultimately correlates better with the user experience.

Image 7: An example of the dead leaves test pattern.

Noise standards (that we use)

ISO 15739 Visual noise

While the SNR is a well-established metric for describing the performance of a sensor, it is not the best choice for describing how much noise a human observer actually sees.

ISO 15739 uses a metric called visual noise, which correlates much better with the human perception of noise than the SNR.

The visual noise (VN) value is simple to understand: the higher the value, the more noise an observer will see. The major difference between SNR and VN is that VN weighs the noise according to its visibility; noise that cannot be seen is not taken into account for the measurement.

How do we know what noise can/cannot be seen by a human observer?

We can model the response of the human visual system to spatial frequencies. Essentially, the contrast sensitivity function (CSF), together with an assumption about the viewing conditions, allows us to calculate the importance of different parts of the noise spectrum. In the image below, image 1x has most of its noise at high spatial frequencies, where the CSF response is low. Image 4x has most of its noise at lower spatial frequencies, where it is easily visible according to the CSF. Thus, image 4x gets a much higher VN value than 1x.

Image 8: 1x, 2x, and 4x noise. The 4x noise has a much higher visual noise value than the others.
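A rough sketch of this frequency weighting, using the Mannos–Sakrison CSF approximation as a stand-in for the model specified in the standard (the actual ISO 15739 visual noise computation involves color-opponent channels and a defined viewing condition):

```python
import numpy as np

def csf(f):
    """Mannos-Sakrison approximation of the contrast sensitivity function
    (f in cycles per degree) -- one of several published CSF models,
    used here only for illustration."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

# Compare two noise bands of equal energy: coarse noise at low spatial
# frequencies (like the 4x image) vs. fine noise at high spatial
# frequencies (like the 1x image).
freqs = np.linspace(0.1, 30.0, 300)   # cycles per degree
weights = csf(freqs) ** 2             # CSF-weighted noise power

vis_coarse = weights[freqs < 5].mean()    # coarse-grained noise
vis_fine = weights[freqs > 20].mean()     # fine-grained noise
print(vis_coarse > vis_fine)   # True: coarse noise is weighted as more visible
```

The same noise energy thus contributes very differently to the VN value depending on where it sits in the spectrum, which is exactly why the 4x image above scores worse than the 1x image.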

EMVA 1288 Non-Uniformities

The EMVA (European Machine Vision Association) 1288 was established to help the buyer choose the best camera or sensor that fits their requirements.5 The standard describes a proper test setup, test procedure, algorithms, and reporting for thoroughly testing the noise in a camera system.

A test following these guidelines yields noise values for the following: Dark Signal Non-Uniformity (DSNU), Signal-to-Noise Ratio (SNR), Pixel Response Non-Uniformity (PRNU), dark noise, and dark current.


Conclusion

Noise is a byproduct of irregular signal fluctuations that accompany a transmitted signal; these fluctuations are not part of the signal and instead obscure the intended target. As a result, one of the most crucial tasks in imaging is creating a strong signal with a minimum amount of noise beside it.

We have outlined several methods for measuring the impact of noise. In our test lab, we follow ISO 15739 and report the visual noise value, as it is a much more accurate representation of the human perception of noise in an image. We also often use the dead leaves chart to better correlate our testing with real-world scenes.

It is most important to find a solution that covers the specifications of your devices under test.