Modulation Transfer Function and Image Sampling Analyzer
Explanation
Enter your telescope and imaging parameters using the sliders and/or numeric entry fields. Telescope parameters can be entered either as aperture and focal ratio or as focal length and focal ratio, by selecting the input mode. Telescopes are normally specified by aperture and focal ratio, while camera lenses are normally specified by focal length and focal ratio. The third parameter, focal length or aperture, will be computed automatically. In Aperture & Focal Ratio mode, for example, the focal length will be calculated automatically by multiplying the aperture by the focal ratio. For focal ratio, be sure to enter the final focal ratio of your system, including any reducers or extenders in use.
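As a quick illustration of how the third parameter is derived (the values below are hypothetical examples, not defaults of the analyzer):

```python
# Hypothetical example: a 200 mm aperture telescope operating at f/8.
aperture_mm = 200.0
focal_ratio = 8.0

# Aperture & Focal Ratio mode derives the focal length:
focal_length_mm = aperture_mm * focal_ratio        # 1600 mm

# Focal Length & Focal Ratio mode derives the aperture instead:
aperture_check = focal_length_mm / focal_ratio     # 200 mm
```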
Enter your imaging site’s typical seeing, the imaging wavelength of interest, and your camera’s native pixel size. You can also select one of the drizzle or binning modes if you intend to use these in your acquisition or processing workflow. Keep the Pixel Size parameter at the native pixel size: it will be multiplied or divided correctly when computing results if drizzle or binning is selected.
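For example, with a hypothetical 3.76 µm sensor, the effective pixel size used in the calculations changes roughly as follows when a binning or drizzle mode is selected:

```python
native_pixel_um = 3.76                  # hypothetical native pixel size

binned_2x2_um = native_pixel_um * 2     # 7.52 µm effective pixel when binning 2x2
drizzle_2x_um = native_pixel_um / 2     # 1.88 µm effective pixel when drizzling 2x
```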
Using this information, key values such as the plate scale, stellar full-width at half maximum (FWHM), and the ultimate diffraction limit of your system will be computed, and the graph of your system’s Modulation Transfer Function will be updated. You can readily see whether diffraction, seeing, or the camera’s pixels are the main source of loss of contrast (blurring). Most imaging systems with a focal length greater than one meter, at typical imaging sites, are limited by seeing. At lower focal lengths, or at sites (or on nights) with excellent seeing, blurring by the camera’s pixels becomes a significant source of loss of contrast. In exceptional cases, the diffraction limit of the optics is dominant. By moving the sliders around and experimenting with various values of seeing, aperture, focal ratio, etc., you can become more familiar with how these parameters affect the achievable resolution, and which ones matter most in various scenarios.
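For reference, the plate scale and a first-order, seeing-only star FWHM can be sketched as follows. The analyzer also folds in diffraction and pixel blur, so its reported figures will differ somewhat; the inputs here are hypothetical:

```python
# Hypothetical inputs: 1600 mm focal length, 3.76 µm pixels, 2.5" seeing FWHM.
focal_length_mm = 1600.0
pixel_um = 3.76
seeing_arcsec = 2.5

# Plate scale: the angle of sky covered by each pixel, in arcseconds per pixel.
plate_scale = 206.265 * pixel_um / focal_length_mm    # ~0.48 "/px

# First-order star FWHM in pixels from seeing alone.
seeing_fwhm_px = seeing_arcsec / plate_scale           # ~5.2 px
```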
Anything less than 100% contrast represents blurred detail. This contrast, whether it has been lost by diffraction, seeing, or pixel sampling, can be at least partially restored by deconvolution.
Understanding the results
Spatial frequency is an important concept in understanding the detail present in an image. It is basically a measure of how rapidly the intensity of light changes over distance across the camera sensor or, more relevant for our purposes, over angle on the sky. Large-scale variations, such as the overall change in brightness across the span of a nebula or galaxy, are represented by low spatial frequencies. Fine-scale details like stars, small structures in galaxies and nebulas, etc., are represented by high spatial frequencies. It is the contrast at these high spatial frequencies which gives an image its appearance of sharpness or clarity, and lets us see all of the detail that makes astronomical objects so interesting.
Spatial frequency can be measured in two ways: in the image space, referring to the physical size of features in the image projected onto the camera sensor, or in the object space, referring to angular distances on the sky. The latter is more useful and intuitive for our purposes here, and is measured in units of cycles per arcsecond. A cycle is a full alternation in an image feature from bright to dark and back again. The more rapidly light varies versus angle on the sky in a particular part of an image, the higher the spatial frequency.
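To relate the two measures, note that one arcsecond of sky spans the focal length divided by 206,265 on the sensor. A small sketch, using a hypothetical 1600 mm focal length:

```python
focal_length_mm = 1600.0                      # hypothetical value

# One arcsecond of sky spans this many millimetres on the sensor:
mm_per_arcsec = focal_length_mm / 206265.0    # ~0.0078 mm

# A pattern of 50 cycles/mm in image space then corresponds to:
cycles_per_mm = 50.0
cycles_per_arcsec = cycles_per_mm * mm_per_arcsec    # ~0.39 cycles/arcsec
```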
The Modulation Transfer Function (MTF) is a depiction of how much contrast is lost versus spatial frequency for a given system. A graph of the MTF always starts at 1.0 (100% contrast reproduction) at a spatial frequency of zero, which is basically saying that light passes through the instrument. As the spatial frequency increases, and significant details become more closely spaced, the optics have more and more difficulty reproducing those variations in light intensity. Contrast drops until finally reaching zero, meaning no detail is preserved at or above that scale.
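Concretely, the contrast (modulation) of a bright/dark pattern can be defined as in the sketch below, and the MTF at a given spatial frequency is simply the ratio of recorded contrast to original contrast; the numbers are illustrative only:

```python
def contrast(i_max, i_min):
    """Michelson contrast of a bright/dark intensity pattern."""
    return (i_max - i_min) / (i_max + i_min)

original = contrast(1.0, 0.0)      # 100% contrast entering the system
recorded = contrast(0.8, 0.2)      # 60% contrast after blurring
mtf_value = recorded / original    # 0.6 at this spatial frequency
```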
A fundamental factor that limits the achievable resolution for any optical system, imposed by the wave nature of light itself, is the diffraction limit. An image of a tiny spot of light (such as a star) produced by an optical system will never be smaller than a certain size on the camera, no matter how perfect the optics, due to how light behaves. Thinking of the physical size of this spot on the camera sensor, the only thing that can make it smaller is reducing the focal ratio of the optics. Thinking instead of what the size of this spot means in terms of capturing fine detail on the sky, the only thing that can improve this is increasing the aperture of the instrument.
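These two statements can be put into numbers. In the sketch below (a hypothetical 200 mm f/8 system imaging at 550 nm), the physical Airy disk diameter on the sensor depends only on the focal ratio, while the angular resolution by the Rayleigh criterion depends only on the aperture:

```python
# Hypothetical system: 200 mm aperture, f/8, imaging at 550 nm.
aperture_mm = 200.0
focal_ratio = 8.0
wavelength_mm = 550e-6              # 550 nm expressed in millimetres

# Physical Airy disk diameter on the sensor (set by focal ratio alone):
airy_diameter_um = 2.44 * wavelength_mm * focal_ratio * 1000.0    # ~10.7 µm

# Angular resolution by the Rayleigh criterion (set by aperture alone):
rayleigh_arcsec = 206265.0 * 1.22 * wavelength_mm / aperture_mm   # ~0.69"
```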
Note that the term “diffraction limit” is frequently used to mean two different things. One is the ultimate limit of angular resolution achievable by an instrument with a given aperture. The other meaning is related to this, but expresses the contrast loss as spatial frequency increases toward this ultimate limit. Both of these are expressed in the MTF graph. The Diffraction Limit curve represents the contrast loss across spatial frequency of a “perfect” optical system. The point on the far right at which this curve reaches zero is the ultimate diffraction limit — the highest spatial frequency or, in other words, the minimum angular separation that the system can resolve.
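For an unobstructed circular aperture, the Diffraction Limit curve and its cutoff frequency have a standard closed form, sketched below; the analyzer’s own curve may additionally account for factors such as a central obstruction. The values here are hypothetical:

```python
import math

def diffraction_mtf(nu, nu_cutoff):
    """Diffraction-limited MTF of an unobstructed circular aperture."""
    x = nu / nu_cutoff
    if x >= 1.0:
        return 0.0                  # no contrast at or above the cutoff
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

# Object-space cutoff for a hypothetical 200 mm aperture at 550 nm:
aperture_mm, wavelength_mm = 200.0, 550e-6
nu_cutoff = aperture_mm / wavelength_mm / 206265.0    # ~1.76 cycles/arcsec

for nu in (0.0, 0.5, 1.0, 1.5):
    print(f"{nu:4.1f} cycles/arcsec -> MTF {diffraction_mtf(nu, nu_cutoff):.2f}")
```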
To designers of telescopes and lenses, the dashed Diffraction Limit curve represents perfection: if they do their job well, the performance of their designs will approach this curve across the entire field of view. A truly perfect optical system, however, one that faithfully reproduces contrast at all spatial frequencies, would be a straight line across the top of the graph: 100% contrast at all spatial frequencies. This is not fully achievable, of course, but can be approached with good optics, good seeing, good acquisition equipment and skills, sufficient exposure time, and modern deconvolution tools such as BlurXTerminator. Deconvolution restores contrast in an image, often raising it above the diffraction limit in the MTF curve, as if the image had been taken with a much larger instrument or under much better seeing conditions.
You will notice that with certain input parameters, a portion of the overall MTF plot will turn red. This indicates the range of spatial frequencies over which undersampling is occurring. Undersampling happens when the camera’s pixels are too large to capture fine-scale (high spatial frequency) detail that could otherwise be captured with smaller pixels. By moving the Seeing and/or Pixel Size sliders, you can experiment and see what it would take to properly sample the image produced by your optical setup. At very short focal lengths, it is nearly impossible to avoid undersampling.
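A rough way to see where undersampling begins is to compare the sensor’s Nyquist frequency with the highest spatial frequency the optics can deliver. This is only an approximation of the check the analyzer performs, and in practice seeing usually reduces contrast well before the diffraction cutoff; the values continue the hypothetical examples above:

```python
plate_scale_arcsec = 0.48                         # hypothetical, arcsec per pixel
nu_nyquist = 1.0 / (2.0 * plate_scale_arcsec)     # ~1.04 cycles/arcsec
nu_cutoff = 1.76                                  # optical cutoff from the earlier sketch

if nu_nyquist < nu_cutoff:
    print(f"Undersampled from {nu_nyquist:.2f} to {nu_cutoff:.2f} cycles/arcsec")
```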
Undersampling is not necessarily a disaster: it simply means that there is finer resolution available from your optics and imaging conditions than can be captured by your camera. Often a sacrifice is being made: the field of view is wider at shorter focal lengths, allowing large objects to be imaged in a single frame, but the ability of the camera to capture the finest details is lost.
Drizzle is a technique that can be used in many cases to recover some of this lost resolution. If your native camera resolution undersamples the image produced by your optics, as shown by the red part of the MTF curve, you can select a drizzle factor to see an approximation of how much more detail can be captured using this technique. Note that the recovered detail usually has very low contrast. Deconvolution can restore this contrast and fully realize the potential of the drizzle technique.
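As a rough sketch of why drizzle helps: shrinking the effective pixel raises the sampling (Nyquist) limit, so higher spatial frequencies can be recorded. Hypothetical values again:

```python
plate_scale_native = 0.48                     # arcsec per pixel, hypothetical
drizzle_factor = 2

plate_scale_drizzled = plate_scale_native / drizzle_factor     # 0.24 "/px
nu_nyquist_drizzled = 1.0 / (2.0 * plate_scale_drizzled)       # ~2.1 cycles/arcsec
```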
The opposite of undersampling is oversampling, which is to say that the camera’s pixels are so small relative to the features projected onto the sensor by the optics that more than enough pixels are capturing the available detail. There is nothing wrong with oversampling: no detail is being lost as it is with undersampling. It does, however, result in image file sizes that are larger than necessary, and some image processing tools are not designed to handle excessive amounts of oversampling. While it is better to be slightly oversampled than slightly undersampled, if that choice exists, excessively oversampled images can safely be “down-sampled” or “binned” in software or hardware. A Total FWHM value above 8 pixels is excessively oversampled. With older CCD cameras, binning in the camera during image readout actually improves the signal-to-noise ratio of the image, but with most modern CMOS cameras, binning or “integer resampling” can be done in software.
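One way to apply the 8-pixel rule of thumb is sketched below. The quadrature combination of seeing and diffraction is only an approximation of how a Total FWHM might be computed, not necessarily the analyzer’s exact method, and the inputs are hypothetical:

```python
import math

seeing_fwhm_arcsec = 2.5
aperture_mm, wavelength_mm = 200.0, 550e-6
plate_scale = 0.12                 # hypothetical small-pixel, long-focal-length setup

# Approximate diffraction FWHM (~1.02 lambda/D) in arcseconds:
diff_fwhm_arcsec = 206265.0 * 1.02 * wavelength_mm / aperture_mm    # ~0.58"

total_fwhm_px = math.hypot(seeing_fwhm_arcsec, diff_fwhm_arcsec) / plate_scale

if total_fwhm_px > 8.0:
    print(f"Total FWHM {total_fwhm_px:.1f} px: consider binning or downsampling")
```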
If you want to evaluate whether or not this is appropriate for your system, click the “Bin 2×2” mode above. If no portion of the overall MTF curve turns red, then binning or integer resampling can be done with no loss of detail. Your image processing will run about four times faster, you’ll use four times less disk space to store your images, and image processing algorithm developers will be pleased.