Good images of planets push the bounds of most telescopes' resolution. Planetary imagers want detailed images of bright objects. Deep sky imagers are more concerned with capturing photons from dim objects, and some DSOs require large fields of view. Photographing those targets may require trading resolution for better sensitivity or a wider field of view.
Planetary images benefit from the highest resolution that your telescope and sky conditions allow. Camera sensors sample images onto a discrete grid of pixels. Under-sampling a planetary image with a sensor whose pixel pitch is too large to capture the telescope's resolution loses detail that cannot be recovered in post-processing. Understanding how your telescope's resolution relates to your image sensor's pixel size will guide you in camera and lens selection. The match depends on the telescope's diffraction-limited resolution, its focal ratio, and the sensor's pixel pitch.
These are not hard to calculate. First you need to know the diffraction-limited resolution of your telescope. The wave nature of light passing through an aperture causes an interference pattern of darker and lighter bands. Telescope resolution is measured as the smallest angular separation of two stars that can be seen with a telescope of a given circular aperture. There are two standard ways to calculate this: the Rayleigh criterion (theoretical) and the Dawes limit (empirical), which give similar results. Let's use the Rayleigh criterion because it is easier to remember:
angular_resolution_in_radians = 1.22 * wavelength_of_light / telescope_aperture
We need a wavelength for visible light. Human vision responds to light with wavelengths between 0.38 and 0.74 micrometers; a representative middle value is 0.56 micrometers, in the green. Multiplying 1.22 by 0.56 micrometers gives:
angular_resolution_in_radians = 0.68 / telescope_aperture_in_micrometers
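As a quick sanity check, here is a minimal Python sketch of this formula; the 100 mm aperture is only an illustrative value:

import math

aperture_um = 100 * 1000                      # illustrative 100 mm aperture, expressed in micrometers
resolution_rad = 0.68 / aperture_um           # Rayleigh criterion at 0.56 um green light
resolution_arcsec = math.degrees(resolution_rad) * 3600
print(f"{resolution_arcsec:.1f} arcseconds")  # about 1.4 arcseconds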
Next we need the distance that the telescope's angular resolution corresponds to on a camera sensor at prime focus. We use the small angle approximation, sin(a) ≈ a with a in radians:
d = angular_resolution_in_radians * focal_length
Substituting:
d = 0.68 * focal_length / telescope_aperture with all distances in micrometers
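A minimal sketch of this step in Python, with a focal length and aperture chosen only to illustrate an f/13 system:

focal_length_mm = 1300                        # illustrative focal length
aperture_mm = 100                             # illustrative aperture, giving f/13
d_um = 0.68 * focal_length_mm / aperture_mm   # the mm units cancel; 0.68 is in micrometers
print(f"diffraction-limited spacing at the sensor: {d_um:.1f} um")  # about 8.8 um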
Next, we need to include the effect of digital sampling. A digital camera samples the image at the pitch of the camera sensor. Because the image is sampled on a discrete grid, the Sampling Theorem applies: the sampling rate must be at least twice the highest spatial frequency in the image, which here is the diffraction-limited resolution of the telescope. In other words, a digital image must have pixels (photosensor cells) to capture both the closest resolvable individual stars and the space between them. Sampling at this spatial frequency is called critical or Nyquist sampling, and sampling at this pitch will capture all of the information in a resolution-limited image.
critical_sampling_pixel_pitch_in_microns = 0.68/2 * focal_length/aperture
The ratio focal_length/aperture is the focal ratio of the telescope.
critical_sampling_pixel_pitch_in_microns = 0.34 * focal_ratio
Since 0.34 is close to 1/3, divide the focal ratio by 3 to get the camera sensor pitch in microns for the best resolution that your telescope is capable of.
For example: with the f/13 Questar, a camera with 13/3 or 4.3 micron pixels will capture the highest resolution images that this scope is capable of. An f/4.9 telescope like the RedCat 51 requires a 1.6 micron sensor pitch.
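Here is a minimal Python sketch of this rule of thumb, using the two focal ratios above; the printed values use the 0.34 factor, so they differ slightly from the divide-by-3 approximation:

def critical_pixel_pitch_um(focal_ratio):
    """Pixel pitch in micrometers that Nyquist-samples a diffraction-limited
    image at 0.56 um green light: 0.34 * focal ratio, roughly focal ratio / 3."""
    return 0.34 * focal_ratio

for name, f_ratio in [("Questar at f/13", 13), ("RedCat 51 at f/4.9", 4.9)]:
    print(f"{name}: {critical_pixel_pitch_um(f_ratio):.1f} micron pixels")
# Questar at f/13: 4.4 micron pixels
# RedCat 51 at f/4.9: 1.7 micron pixels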
Astrophotographers have tools that can match the camera sensor pitch to the telescope's resolution over a limited range. Barlow lenses and focal reducers increase or decrease the effective focal length, and therefore the focal ratio, of a telescope. If you are stacking under-sampled planetary images that have position drift or dithering, you can recover resolution by drizzle stacking onto a finer pixel grid.
Sensitivity and signal-to-noise ratio in a camera are a complicated topic. They are much more important for imaging dim DSOs than planets, so I won't discuss them here.
Planetary imagers often want to capture more than just a close-up of the planet. Images with multiple targets, like Jupiter or Saturn with their moons, or planetary conjunctions, require wider fields of view.
Field of view is another calculation that is easy to remember and perform in your head. Using radians, this is just:
FOV_in_radians = sensor_dimension / focal_length
Geometry gives us 2 pi radians in a circle, so there are 360/(2 pi), or about 57.3, degrees in a radian:
FOV_in_degrees = 57.3 * sensor_dimension / focal_length
For example a full frame sensor is about 36 mm wide. With a full frame camera the W.O. RedCat 250/51 mm telescope will capture a field about 57.3 * 36 / 250 or 8.25 degrees wide.
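The same calculation as a small Python sketch, using the sensor width and focal length from the example above:

def fov_degrees(sensor_dimension_mm, focal_length_mm):
    """Field of view in degrees across one sensor dimension at prime focus,
    using the small angle approximation (57.3 degrees per radian)."""
    return 57.3 * sensor_dimension_mm / focal_length_mm

print(f"{fov_degrees(36, 250):.2f} degrees")  # full frame width on the RedCat 51: 8.25 degrees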
Content created: 2015-05-18 and last modified: 2020-07-07