Signal-to-Noise Ratio (SNR) in Hyperspectral Cameras
By Dr. Rand Swanson, CEO - December 8, 2023
Figure 1: Mineral classification maps created from NASA AVIRIS hyperspectral data. The right-hand map (after an instrument upgrade) shows a 4X improvement in SNR compared to the left-hand map. Image courtesy of USGS [1].
Signal-to-noise ratio (SNR), a well-known metric for understanding data
quality, is the magnitude of a signal divided by the magnitude of the noise in
the signal. The SNR can be calculated by taking multiple identical measurements,
calculating their mean value, \(\overline{M(\lambda)}\), and dividing the mean by the standard deviation, \(\sigma(\lambda)\):
\[SNR (\lambda) = {\overline{M(\lambda)}\over \sigma(\lambda)}\]
SNR is written as a function
of wavelength because both the signal and the noise vary between spectral
channels.
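As a concrete illustration of this definition, the minimal sketch below (Python with NumPy) computes the per-band SNR from the mean and standard deviation of repeated frames of a stable, uniform target. The synthetic data stack and its values are illustrative assumptions, not output from any particular camera.

```python
import numpy as np

# Hypothetical stack of repeated measurements of a stable, uniform target:
# shape (n_repeats, n_bands), one spectrum per repeat (synthetic stand-in data).
measurements = np.random.normal(loc=1000.0, scale=30.0, size=(200, 300))

mean_signal = measurements.mean(axis=0)       # M(lambda), per band
noise = measurements.std(axis=0, ddof=1)      # sigma(lambda), per band
snr = mean_signal / noise                     # SNR(lambda)

print(snr.shape)    # (300,) -- one SNR value per spectral channel
print(snr.mean())   # ~33 for this synthetic example (1000 / 30)
```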
In practice, SNR is rarely determined using the multiple measurements approach because there are too many real-world variables at play. It’s useful to understand what these variables are and how they impact SNR.
Signal Collected by a Hyperspectral Imager
The signal collected by a hyperspectral imager, \(\Phi(\lambda)\), in units of Joules, is approximated using the following formula:
\[\Phi(\lambda) = {{\pi L(\lambda)A_D \varepsilon(\lambda)(\Delta\lambda)(\Delta t)} \over{4(f/\#)^2+1}} \]
\(\boldsymbol{L(\lambda)}\) is the at-sensor spectral radiance at wavelength \(\lambda\) in units of W/(m² sr nm). This value indicates the brightness of the light coming into the imager. Signal, and by extension SNR, increases with brighter illumination. Generally, illumination changes with wavelength. For example, Figure 2 shows the signal from a Lambertian object with perfect reflectivity when illuminated by the sun in typical atmospheric conditions. Note that illumination becomes weaker at both short (~400 nm) and long wavelengths, and thus the signal (and SNR) typically degrade at short and long wavelengths. The visible wavelength range is approximately 400-700 nm.
\(\boldsymbol{A_D}\) is the detector area (m²) for a channel, often the pixel area on the camera. A large pixel area increases your signal. Pixel binning (combining the signal from adjacent pixels into a single value) effectively increases the pixel area. Since it’s complicated and expensive to integrate new cameras into an existing hyperspectral imager, this is not considered an adjustable parameter for most users.
\(\boldsymbol{\Delta\lambda}\) is the optical bandwidth spread out across the detector area (i.e., pixel). This parameter is also determined by the instrument design, so it’s not considered an adjustable parameter.
Figure 2: Signal from a Lambertian object of perfect reflectivity when illuminated by the sun in typical atmospheric conditions. Visible wavelengths are indicated in blue.
\(\boldsymbol{\varepsilon(\lambda)}\) is the optical system efficiency; the overall efficiency depends upon the optical throughput of the lenses, the diffraction grating efficiency, and the detector quantum efficiency. Grating and detector efficiency values change significantly with wavelength, which impacts SNR (Figure 3). \(\varepsilon(\lambda)\) is a product of the efficiencies listed. Improving efficiency requires changing components inside the hyperspectral imager, so this is not considered an adjustable parameter for most users, either.
Figure 3: Diffraction grating and detector efficiency as a function of wavelength for the Pika L.
\(\boldsymbol{\Delta t}\) is the integration time, otherwise known as shutter speed, in seconds. This is one of the easiest parameters to adjust. The maximum integration time is the reciprocal of the frame rate (1/frame rate). To increase signal (and thus, SNR), you can decrease the frame rate and lengthen the integration time.
\(\boldsymbol{(f/\#)}\) is the imaging lens f-number, which is a measure of the instrument’s aperture. For maximum signal (and highest SNR) you should set the f-number on the objective lens to the f-number of the instrument. Setting the objective lens f-number to a lower value than the instrument f-number can lead to excess stray light, which will degrade your results. If a deeper depth of field is important, set the objective lens to a higher f-number.
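To make the signal formula concrete, here is a minimal sketch that evaluates \(\Phi(\lambda)\) and converts it to photoelectrons. Every numerical value is an illustrative assumption, not the specification of any particular instrument.

```python
import numpy as np

h = 6.626e-34   # Planck's constant (J*s)
c = 2.998e8     # speed of light (m/s)

wavelength = 550e-9         # lambda, m (green light, for illustration)
radiance = 0.1              # L(lambda), W/(m^2 sr nm) -- assumed at-sensor radiance
pixel_area = (5.86e-6)**2   # A_D, m^2 -- assumed pixel pitch of 5.86 um
efficiency = 0.4            # epsilon(lambda) -- assumed overall optical efficiency
bandwidth = 2.0             # delta lambda, nm per spectral channel (assumed)
integration_time = 5e-3     # delta t, s
f_number = 2.4              # objective lens f-number (assumed)

# Collected energy per channel (Joules), from the formula above
phi = (np.pi * radiance * pixel_area * efficiency * bandwidth * integration_time) / (4 * f_number**2 + 1)

# Convert energy to photoelectrons by dividing by the photon energy hc/lambda
electrons = phi * wavelength / (h * c)

print(phi, electrons)   # roughly 1.8e-15 J and ~5000 electrons for these assumed values
```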
Total Noise
There are many sources of noise in a hyperspectral camera, too many to adequately address in a blog
post. Fortunately, an abbreviated treatment is sufficient for most
purposes.
For this discussion we assume
that:
- The detectors are “photon detectors” (e.g., conventional silicon CCD and CMOS cameras, InGaAs cameras, and MCT and InSb cameras), which means that shot noise is important (and often the dominant noise source).
- The various noise sources are uncorrelated within a pixel and from pixel to pixel. For example, dark current noise does not correlate with read noise within a pixel, and the dark current noise in one pixel is independent of the dark current noise in any other pixel.
The second assumption is important because uncorrelated noise sources do
not add linearly, they add in quadrature, as shown below. Total mean noise, \(N_T\), from multiple uncorrelated noise sources (\(N_x\) in units of electrons) is given by:
\[N_T = \sqrt{N_1^2 + N_2^2 + N_3^2 + \dots} \]
This means that you only need to identify the largest noise sources to
adequately approximate the total noise. For example, in the equation below,
there are two noise sources, one of which is 30 percent the value of the other.
The total noise (1.04) is only slightly larger than the largest noise source
(1.0).
\[ \begin{align} N_T & = \sqrt{N_1^2 +{(0.3N_1)}^2} \\[1ex]
& = (1.04)N_1 \\
\end{align} \]
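A quick numerical check of this quadrature behavior, using arbitrary illustrative values:

```python
import numpy as np

# Three uncorrelated noise sources, in electrons (illustrative values)
noise_sources = np.array([10.0, 3.0, 1.0])

total = np.sqrt(np.sum(noise_sources ** 2))
print(total)                         # ~10.49 electrons
print(total / noise_sources.max())   # ~1.05 -- barely above the largest single source
```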
Types of Noise
- Shot Noise: Shot noise is caused by statistical fluctuations in the signal itself. The signal collected by a hyperspectral imager in units of electrons is given by \(\Phi(\lambda){{\lambda}\over{hc}}\), which is the signal collected (in units of energy) divided by the amount of energy per photon. Here \(h\) is Planck’s constant and \(c\) is the speed of light. The shot noise is equal to the square root of this signal in electrons.
- Dark Current Noise: The dark current is often provided by the detector vendor. It adds to the desired signal an amount equal to the dark current times the integration time, \((i_{Dark})\Delta t\), where \(i_{Dark}\) is the dark current in units of electrons per second and \(\Delta t\) is the integration time. The dark current itself is subtracted from the recorded signal; it is not noise, just background that should be removed. The dark current noise is the fluctuation in the dark current. Much like shot noise, the dark current noise is equal to the square root of the dark current contribution, \(\sqrt{(i_{Dark})\Delta t}\).
- Read Noise: This noise source is typically provided by camera vendors. It’s a single contribution to the noise associated with every electronic read of the pixel.
- Digitization (or Quantization) Noise: Most cameras used with hyperspectral imagers provide a digital output, usually between 8 and 16 bits. With a range of signal inputs associated with each digitization value, the digitization process causes uncertainty that leads to digitization noise. Digitization noise, \(N_{Dig}\), is given approximately by: \[N_{Dig} = {{(Full\,Well)}\over{2^B \sqrt{12}}} \] where \((Full\,Well)\) is the well depth of the detector in electrons and B is the number of bits. This equation assumes the analog-to-digital converter is adjusted so the maximum output corresponds to the full well depth (i.e., the camera is not in high gain mode). Stated another way, this is the number of electrons associated with each digitization step divided by the square root of twelve.
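As a rough illustration of the relative size of these terms, the sketch below evaluates each one for an assumed set of camera parameters. None of these values describe a specific camera; they are placeholders to show how the terms are computed.

```python
import numpy as np

signal_electrons = 20000.0   # photoelectrons collected in the channel (assumed)
dark_current = 200.0         # i_Dark, electrons per second (assumed)
integration_time = 5e-3      # delta t, seconds
read_noise = 6.0             # N_Read, electrons rms (assumed)
full_well = 30000.0          # full-well depth, electrons (assumed)
bits = 12                    # digitizer bits (assumed)

shot_noise = np.sqrt(signal_electrons)                  # square root of the signal itself
dark_noise = np.sqrt(dark_current * integration_time)   # square root of dark-current electrons
dig_noise = full_well / (2 ** bits * np.sqrt(12))       # digitization step divided by sqrt(12)

# Uncorrelated sources add in quadrature
total_noise = np.sqrt(shot_noise**2 + dark_noise**2 + read_noise**2 + dig_noise**2)
print(shot_noise, dark_noise, read_noise, dig_noise, total_noise)
# With these assumptions, shot noise (~141 e-) dominates the total (~142 e-).
```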
Other noise sources exist and
become significant in unique cases, like thermal noise, but this list is usually
sufficient for an accurate approximation of SNR.
Approximating Signal-to-Noise Ratio (SNR)
Using the noise sources discussed above, the SNR as a function of wavelength, \(SNR (\lambda)\), is calculated using the following
equation: \[SNR (\lambda) = {\Phi(\lambda){{\lambda}\over{hc}} \over {\sqrt{\left[\Phi(\lambda){{\lambda}\over{hc}}\right] + [(i_{Dark})\Delta t] + [({N_{Read}})^2] + \left({Full\,Well}\over{2^B\sqrt{12}}\right)^2 } } } \]
where \(N_{Read}\) is the read noise and
all other parameters have been defined above.
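A minimal sketch of this SNR model, with all parameter defaults chosen as illustrative assumptions (the function and argument names are hypothetical, not part of any camera API), might look like the following:

```python
import numpy as np

H = 6.626e-34   # Planck's constant (J*s)
C = 2.998e8     # speed of light (m/s)

def modeled_snr(wavelength_m, signal_joules, dark_current=200.0,
                integration_time=5e-3, read_noise=6.0,
                full_well=30000.0, bits=12):
    """Approximate SNR(lambda) from the equation above. Inputs may be scalars or arrays."""
    electrons = signal_joules * wavelength_m / (H * C)   # signal in photoelectrons
    shot_var = electrons                                 # shot-noise variance
    dark_var = dark_current * integration_time           # dark-current variance
    read_var = read_noise ** 2                           # read-noise variance
    dig_var = (full_well / (2 ** bits * np.sqrt(12))) ** 2   # digitization-noise variance
    return electrons / np.sqrt(shot_var + dark_var + read_var + dig_var)

# Example: 2e-15 J of collected energy at 550 nm
print(modeled_snr(550e-9, 2e-15))   # roughly 74 for these assumed values
```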
Before going further,
it is helpful to look at a few additional important concepts.
- Shot Noise Limit: Because the shot noise increases with signal, it becomes the dominant noise source in cases where one has a large signal. In this situation, the SNR is well-approximated by the square root of the number of electrons collected by the sensor. Since the largest signal that can be collected by a pixel in units of electrons is the full-well depth, the maximum possible SNR is approximately the square root of the full-well depth. This is why optimal performance is achieved by operating close to detector saturation.
For silicon cameras, pixel full-well depths are typically a few tens of thousands of electrons, which means the highest possible SNRs for a single pixel are in the low hundreds. InGaAs and MCT cameras often have full-well depths of around one million electrons, resulting in maximum SNRs of around one thousand.
- Binning: The spectral resolution of hyperspectral imagers is often better than required, but the SNR may be lower than desired. As such, it is often advantageous to bin spectral channels to improve the SNR. However, some care is needed to take full advantage of binning (a short numerical sketch follows below).
In principle, the SNR achieved by binning is given by: \[SNR (\lambda_{BN}) = {\sum_1^{BN}\Phi(\lambda_i){{\lambda_i}\over{hc}} \over \sqrt{\sum_1^{BN}\Phi(\lambda_i){{\lambda_i}\over{hc}} + BN\left( [(i_{Dark})\Delta t] + [({N_{Read}})^2] + \left({Full\,Well}\over{2^B\sqrt{12}}\right)^2 \right) } } \] where \(BN\) is the number of channels binned and \( \lambda_{BN} \) is the wavelength label for the binned channels (typically the central wavelength of the binned spectral range). All noise sources are assumed to be uncorrelated. For channels that are sufficiently close in wavelength, one would expect their signal values to be approximately the same. Consequently, \[\sum_1^{BN}\Phi(\lambda_i){{\lambda_i}\over{hc}} \cong BN\left( \Phi(\lambda_{BN}){{\lambda_{BN}}\over{hc}} \right) \] With this approximation, and in the shot-noise limit where the other noise terms can be neglected, \[SNR (\lambda_{BN}) \cong \sqrt {BN} \left ( \sqrt { \Phi (\lambda_{BN}) { { \lambda_{BN} } \over {hc} } } \right) \] Thus, the SNR increases by approximately the square root of the number of channels binned, and the maximum SNR with binning occurs when operating near saturation (from the Shot Noise Limit section above): \[SNR_{Max} \approx \sqrt {BN} \sqrt {(Full\,Well)} \]
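A short numerical sketch of the \(\sqrt{BN}\) improvement, assuming uncorrelated noise and approximately equal signal in the binned channels (all values are illustrative):

```python
import numpy as np

electrons_per_channel = 5000.0   # photoelectrons in one spectral channel (assumed)
other_noise_var = 50.0           # dark + read + digitization variance per channel, e^2 (assumed)
BN = 4                           # number of spectral channels binned

# Single-channel SNR
snr_single = electrons_per_channel / np.sqrt(electrons_per_channel + other_noise_var)

# Binned SNR: the signal and every noise variance scale by BN
snr_binned = (BN * electrons_per_channel) / np.sqrt(BN * (electrons_per_channel + other_noise_var))

print(snr_single, snr_binned, snr_binned / snr_single)   # ratio ~ sqrt(BN) = 2
```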
Comparing Measured and Modeled Signal-to-Noise Ratio (SNR)
Our experimental
SNR was determined by recording multiple measurements of a calibrated
integrating sphere with a Resonon Pika XC2 hyperspectral camera. Using a
central spatial channel, we calculated the SNR for each band by dividing the
mean by the standard deviation.
The SNR
was modeled using the approach described above, with the integration time
adjusted to match the setting used for the measurements and using the known
radiance values from the integrating sphere. Plots of the SNR, measured and
modeled, are shown in Figure 4.
Figure 4: Measured and modeled SNR for the Pika XC2. Image courtesy of David Allen, NIST.
You can see the modeled SNRs of all Resonon's hyperspectral cameras here.
Dr. Rand Swanson, CEO of Resonon
He considers himself fortunate to collaborate with the talented and dynamic Resonon team and to focus his efforts on hyperspectral imaging—a technology that is simple in concept, complex in execution, and applicable to a vast array of uses.
References
1. Swayze, G. A., R. N. Clark, A. F. H. Goetz, T. G. Chrien, and N. S. Gorelick (2003), Effects of spectrometer band pass, sampling, and signal-to-noise ratio on spectral identification using the Tetracorder algorithm, J. Geophys. Res., 108(E9), 5105, doi:10.1029/2002JE001975.