Planetary Image Workflow

a work in progress ...

Astrophotographers usually use dedicated video cameras for good planetary images. DSLR and mirrorless cameras are much more portable and self-contained; they are usually used for nightscape or deep sky images. With the right features, they can also take good video images of planets. Our agenda is:

  • define terms
  • calculate the most important number we need - a telescope's diffraction-limited resolution in microns at prime focus
  • look at the camera features needed for images of the planets
  • walk through my complete planetary imaging workflow - this is specific to the Questar, Sony camera, and image processing tools that I use

The universe of cameras offers almost every conceivable combination of features and overlapping acronyms to go with all of them. To keep this note simple I am going to stick to the two most common classes of modern cameras used by astrophotographers.

  • still camera - refers to a removable lens, large sensor (APS-C or larger) still image camera with video capabilities. Although they usually offer remote control and live view features, they are self-contained and don't require any additional equipment for framing, focusing, or storing images.
  • video camera - refers to a removable lens video camera. Both dedicated planetary imaging cameras and webcams are included. They have smaller sensors and pixels than the cameras described above and no capability to frame, focus, or save video in the camera itself. An external computer or recorder is required.

Use video for planetary image stacking

You need lots of raw images to stack to get good images of planets. The more images you have, the higher you can set your stacking quality cutoff. Because planets are relatively bright, you don't need the longer exposures and wider field of view possible with still image cameras. With video you can take lots of images without the overhead and wear of the mechanical shutter in a still image camera. Video can add noise and artifacts from the rolling shutter and compression, but these are not usually a problem because the planets are relatively bright and move slowly.
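
As a rough rule of thumb (assuming mostly uncorrelated noise), the signal to noise ratio of a stack grows as the square root of the number of frames kept. A small sketch, with the capture length and quality cutoff as assumed example values:

    import math

    # Back-of-envelope SNR gain from stacking video frames. Noise is
    # assumed uncorrelated, so SNR improves as sqrt(N).
    frames_captured = 30 * 120          # 2 minutes of 30 fps video
    keep_fraction = 0.20                # a quality cutoff from grading
    n = int(frames_captured * keep_fraction)
    print(f"Stacking {n} frames gives ~{math.sqrt(n):.0f}x SNR improvement")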

Most planetary astrophotographers use dedicated video cameras that require a computer to control them and to save the video. This is not ideal for a lightweight kit. I've found that the right DSLR or mirrorless camera can do a great job imaging with the Questar, without the additional weight and bulk of a laptop.

Using a DSLR or Mirrorless Camera

Planets require lots of magnification. It is important that the camera doesn't throw away the resolution that your telescope is capable of. Large sensor still cameras also have large pixels to increase their light sensitivity. This isn't important for bright planets. By using a Barlow lens to magnify the image, you can match the diffraction-limited resolution of small telescopes to typical sensor element sizes of about 4-6 microns. If you have a large aperture telescope you may need a dedicated planetary video camera. A dedicated video camera with small pixels can take advantage of the high resolution of a big scope at the cost of reduced FOV.
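
Here is the calculation promised in the agenda, as a minimal sketch. The Questar figures (about 89 mm aperture, roughly 1300 mm focal length at prime focus) and the 550 nm wavelength are my assumed values; substitute your own telescope's numbers.

    import math

    # Diffraction-limited resolution at prime focus, in microns.
    aperture_mm = 89.0          # Questar 3.5" (assumed)
    focal_length_mm = 1300.0    # prime focus (assumed, roughly)
    wavelength_nm = 550.0       # green light, middle of the visible band

    # Rayleigh criterion: theta = 1.22 * lambda / D (radians)
    theta_rad = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    spot_um = theta_rad * focal_length_mm * 1e3   # linear size at prime focus
    nyquist_um = spot_um / 2                      # 2 pixels per spot

    print(f"Angular resolution ~{theta_rad * 206265:.1f} arcsec")
    print(f"Spot at prime focus ~{spot_um:.1f} um; "
          f"Nyquist pixel ~{nyquist_um:.1f} um")
    # A Barlow scales the spot by its magnification factor:
    for barlow in (2.0, 3.0):
        print(f"{barlow:.0f}x Barlow -> Nyquist pixel "
              f"~{spot_um * barlow / 2:.1f} um")

For the Questar this works out to a spot of roughly 10 microns at prime focus, or a Nyquist pixel of about 5 microns - right in the 4-6 micron range of typical large-sensor pixels.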

Still image cameras have many more pixels than needed for video. To keep the field of view (crop factor) similar to that for still images, the image is down-sampled. Even with HD (1920x1080, 2 Mega-pixel) video, down-sampled video from still cameras throws away too much resolution for good planetary images.
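
A quick back-of-envelope makes the loss concrete; the 24 MP APS-C sensor geometry here is an assumed example:

    # Full-sensor HD video from a 24 MP APS-C sensor (assumed 6000 x 4000
    # pixels, ~23.5 mm sensor width).
    sensor_w_px, sensor_w_mm = 6000, 23.5
    video_w_px = 1920
    pitch_um = sensor_w_mm / sensor_w_px * 1000   # ~3.9 um native pixels
    downsample = sensor_w_px / video_w_px         # ~3.1x linear
    effective_um = pitch_um * downsample          # ~12 um effective pixels
    print(f"Each HD pixel averages ~{downsample:.1f} sensor pixels across,")
    print(f"for an effective pixel of ~{effective_um:.0f} um - far coarser")
    print("than the ~5 um Nyquist pixel computed above.")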

Some manufacturers (Canon) provide a cropped video mode that takes a narrow FOV (often 640 x 480) from the middle of the sensor at full native resolution. Cropped or not, still cameras haven't provided video resolutions better than HD because of I/O bandwidth limitations in getting data off the sensor at 30 or 60 frames per second. Still cameras' larger pixels also mean a longer focal length telescope is needed, and with cropped video they have no FOV advantage over dedicated video cameras.

UHD 4k video to the rescue

The advent of 4k Ultra High Definition video (3840 × 2160 or 8 Mega-pixel) offers new opportunities for planetary imaging with large sensor cameras like DSLRs. A 1:1 native resolution cropped video mode is required. The Sony a6300 does this with 4k 30 fps video (but not in the 24 fps 4k mode). The high I/O bandwidth required for 4k video (the a6300 records XAVC S at up to 100 Mb/s) opens other possibilities as well. For example, the a6300 also offers a full 24 MP resolution continuous shooting mode with a fully electronic rolling silent shutter at 3 fps - effectively 6k x 4k (24 Mega-pixel) slow frame rate video.
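
A hedged back-of-envelope (12-bit sensor readout is my assumption) shows what the sensor and card each have to sustain in 4k 30 fps mode:

    # Sensor readout vs. recorded bitrate for 4k 30 fps video.
    w, h, fps, bits_per_px = 3840, 2160, 30, 12   # 12-bit readout assumed
    raw_MBps = w * h * fps * bits_per_px / 8 / 1e6
    xavcs_MBps = 100 / 8     # XAVC S "100M" mode writes 100 Mb/s to the card
    print(f"Raw readout ~{raw_MBps:.0f} MB/s, "
          f"recorded ~{xavcs_MBps:.1f} MB/s after compression")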

Shooting planets in 4k video

Here's my process for shooting planets in 4k on the Sony a6300:

  • Adjust your optical train for the FOV and resolution you want to capture. Any APS-C (>16 MP) or full frame (>32 MP) camera will be at or just below Nyquist sampling for the Dawes limit on the Questar 3.5" (see the sketch after this list). This suggests that the 24 MP a6300 captures the full resolution of the Questar at prime focus. In real life, I find that I get better final resolution using Barlow lenses when I use deconvolution in post processing. If you need a wider FOV for a target like Jupiter with its moons strung out, shoot at prime focus.
  • Center your target and focus using magnified focusing and focus peaking. You can use a Bahtinov mask, but it isn't necessary.
  • Fine tune your exposure using test still image shots. On the a6300, although the video frame rate is fixed at 30 fps, your exposure can be as long as 1/4 sec. Keep your ISO high enough to keep the exposure time shorter than 1/30 sec to take full advantage of the 30 fps video.
  • If you need the widest possible FOV (e.g. for the full lunar disk), shoot full 24 MP stills instead of video. Use the continuous shooting silent electronic shutter mode at 3 fps. The Sony infrared remote does not support this; use the Smart Remote camera app with Raw + JPEG quality and the Sony smart-phone app over WiFi.
  • If you are shooting 4k video, make sure it is 100M 30 fps and not 24 fps video mode. Then shoot a couple of minutes of video using the video record button on the Sony IR remote.
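
Here is the sampling check referenced in the first bullet - a minimal sketch, with the sensor widths and Questar figures as assumed values:

    # Pixel pitch vs. Nyquist for the Dawes limit at Questar prime focus.
    # Sensor geometries and Questar figures (89 mm aperture, ~1300 mm
    # focal length) are assumptions; adjust for your own gear.
    dawes_arcsec = 116 / 89                     # ~1.3 arcsec for 89 mm
    focal_length_mm = 1300.0
    spot_um = dawes_arcsec / 206265 * focal_length_mm * 1000   # ~8 um
    nyquist_um = spot_um / 2                                   # ~4 um

    for name, width_mm, width_px in [("APS-C 16 MP", 23.5, 4912),
                                     ("APS-C 24 MP", 23.5, 6000),
                                     ("Full frame 36 MP", 35.9, 7360)]:
        pitch_um = width_mm / width_px * 1000
        print(f"{name}: {pitch_um:.1f} um pitch vs "
              f"{nyquist_um:.1f} um Nyquist pixel")

All three pitches land near the roughly 4 micron Nyquist pixel, which is why these cameras sample at or just below the Questar's Dawes limit at prime focus.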

Shooting wide FOV with stills

If you need the widest possible FOV and resolution, you will want to stack still images. The object of the game is to get as many images as possible, as quickly as possible. With Jupiter you are limited to about 3 minutes before the rotation of the planet makes it necessary to de-rotate your images (see the sketch below). Do all of the standard astrophotography setup things: electronic first curtain shutter (or silent shutter if you have it) on, long exposure NR off, white balance auto or daylight, raw image format, manual focus peaking with auto magnification... Use continuous shooting mode if you can and shoot a couple hundred images; otherwise use an intervalometer in "free run" mode. If you don't have a silent electronic shutter this can be hard on your camera.
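
Where does the ~3 minute figure come from? A rough sanity check, with Jupiter's angular size near opposition and its rotation period as assumed round numbers:

    import math

    # Rotational smear vs. resolvable detail at the center of Jupiter's disk.
    jupiter_diam_km = 142_984            # equatorial diameter
    rotation_period_s = 9.9 * 3600       # about 9.9 hours
    angular_diam_arcsec = 45.0           # near opposition (assumed)
    dawes_arcsec = 116 / 89              # Dawes limit for an 89 mm Questar

    resolvable_km = jupiter_diam_km * dawes_arcsec / angular_diam_arcsec
    surface_speed_kms = math.pi * jupiter_diam_km / rotation_period_s
    smear_km = surface_speed_kms * 180   # drift over a 3 minute capture

    print(f"Resolvable feature ~{resolvable_km:.0f} km, "
          f"3 minute smear ~{smear_km:.0f} km")
    # The smear is roughly half a resolution element, so ~3 minutes is
    # about as long as you can go without de-rotation.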

Compared to video you get raw data without any compression artifacts, and you get a wider field of view. I love the wide field of view because you can capture all of the visible moons of Jupiter and Saturn, as well as planetary conjunctions (these all need HDR multiple exposure stacking …). These are things that you can't capture with dedicated planetary video cameras because of their small sensors and small FOV.

In short, use video if you can; if not, use an electronic silent shutter if you can. I've found that with good seeing, I can get enough still frames with the old, slow NEX-5N to get good planetary images. For me this is in the range of 100-300 images (versus the thousands that one gets with video). A camera like the a6300, with 4k cropped video and 3 fps electronic silent shutter continuous shooting, makes shooting lots of images for stacking much easier.

Post-processing workflow

Importing images

If I shoot still images, I normally convert from raw format to TIFF as my first step. I use CaptureOne to do this conversion, as well as a rough crop to cut down on the overhead of image data that I don't plan to use. Sony's XAVC S video compression is a high profile H.264 format in an MP4 container. The video tools on OS X that I use handle it fine as-is. I haven't tried the S-Log2 and S-Log3 profiles, which are supposed to capture about 14 stops of dynamic range in video.

Software overview

I'm learning to use PixInsight, which runs on all the major platforms, though its stacking seems oriented to DSOs. It does a great job with background extraction and has powerful image processing algorithms that can all be masked to apply only where they will improve the image. These all seem to have more options than anyone can absorb at first glance. So far I'm using PixInsight only where I can't get the results I want from the workflow described here.

I use Lynkeos (freeware) and Nebulosity (inexpensive) for stacking and first pass sharpening, and Photoshop for post processing. There is now a beta of GIMP out that handles 16 bit images, so I plan to switch from Photoshop to it or some other competitor. I don't like Adobe's indentured servitude for life plan :-).

Aligning images in Lynkeos

Before images can be stacked they must be aligned and graded for quality. I always do a first pass alignment and image grading with Lynkeos. The flow is left to right in the interface: preview, align, grade, and sharpen. You can save work in progress.

Lynkeos handles large images well. You can load them from the menu or just drag and drop them. The same goes for video, although there is a delay while it imports the thousands of frames. It handles the high profile 4k MP4 video that the Sony cameras put out well. If you delete outright bad images in preview mode with video, work from beginning to end: Lynkeos seems to have to run through the video to get to a random frame, so random access can be slow with thousands of 4k video frames. Frames with big dust spots on the planet, or where the tripod got bumped, can fool the automatic grading, so it's worth a pre-screen to eliminate images that will cause problems.

Lynkeos also has a limited capability to work with flat frames and dark frames (Nebulosity is more flexible). I don't use flats and darks with planetary images, so I will skip this.

Pay attention to how much your subject drifts in the frame from first to last image; you will need that in the next step, alignment. In alignment you draw a square box and Lynkeos will quickly align all frames to what is in the box. The only trick is that the box needs to be big enough to account for any drift in the subject location. It will not work well if the alignment box crops the subject in some frames.

Grading images in Lynkeos

After images are aligned in Lynkeos, the next step is analyze (grading). Again you draw a square box around the area of interest. This time it can be nice and tight because the images are aligned. The output is a single "Quality" number for each frame, and there is a slider that you can use to select the images above a quality cutoff. I examine high and low quality images and look at the spread in quality values to decide where to set it. I usually range between about 8 images selected and the top 20% or so when there are lots of high quality frames. If I think I'm going to want to use Nebulosity's higher quality alignment, I tag the sharpest image file and then the other top quality files, so that I don't waste time processing files that I'm unlikely to use. OS X has a handy color tagging feature in the Finder that I use for this.

At this point you can move on to the stacking step. This time you sweep out a rectangle which will be your stacked image crop. I usually get the best results with the sigma reject mode (usually about 1.5 sigma), which rejects statistically outlying pixels in a second pass (see the sketch below). It is a bit slower, but it gets rid of sensor dust and bad pixels if you have a little drift in the location of your image on the sensor (slightly less than perfect telescope alignment is your friend here). If the stack is good you are ready to recover some resolution. If not, go to Nebulosity and use its batch non-stellar alignment.
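
For the curious, here is a minimal numpy sketch of what a sigma-reject stack does. This illustrates the principle, not Lynkeos' actual implementation:

    import numpy as np

    # Sigma-reject stacking: pixels more than k sigma from the per-pixel
    # mean are excluded in a second pass. `frames` is a
    # (n_frames, height, width) array of aligned images.
    def sigma_reject_stack(frames: np.ndarray, k: float = 1.5) -> np.ndarray:
        mean = frames.mean(axis=0)                 # pass 1: plain average
        std = frames.std(axis=0)
        keep = np.abs(frames - mean) <= k * std    # reject outlying pixels
        counts = keep.sum(axis=0)
        total = (frames * keep).sum(axis=0)        # pass 2: average survivors
        # Where every frame was rejected, fall back to the plain mean
        return np.where(counts > 0, total / np.maximum(counts, 1), mean)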

Stacking in Nebulosity

I also use Nebulosity for stacking planetary images and full disk lunar images, which can give very poor results with Lynkeos stacking. Nebulosity has a very high quality non-stellar automatic alignment function that is also very slow. I use it for full disk lunar images and planetary close ups. It has the ability to do an automatic 8 parameter affine transform on each frame that can normalize out seeing distortions. I've found this essential for stacking full disk lunar images because of the seeing variations over the large target. Yes, the Moon is bright and relatively clear - why stack? Stacking increases the signal to noise ratio of the image, and high S/N stacked images hold up to deconvolution sharpening much better.

With Nebulosity's non-stellar automatic alignment it is essential that the images to be stacked be cropped as close as possible to the final target. Even then, alignment can take on the order of an hour per frame! (Nebulosity's deep sky image stacking, which uses stars for alignment, is also very good and much faster.) The results with the Moon and planets can be spectacular, but because of the time cost I only use it where it is likely to pay off.

Nebulosity's workflow is a bit different from other apps, which may hide intermediate steps and images behind the user interface. It labels intermediate files by adding tags to the file name. This takes a bit of getting used to, but it has the great advantage of making it easy to mix and match steps and external tools in the workflow.

If your images are not already fairly closely cropped to the subject, Nebulosity has a batch cropping tool. Don't try to automatically align a tiny Jupiter in a full frame DSLR image unless you have weeks of time :-). From the menu in Nebulosity, select Batch / Automatic Alignment (non-stellar). Select save individual images. From there you select either Rigid (4 parameter) for planets, or Affine (8 parameter) for large scenes like the full lunar disk. Next select your highest quality image as the Master image. After "OK", select all the images that you want to align. The size of the images has a dramatic effect on automatic alignment speed. My full disk Moon images used to take about 20 minutes per frame when they were cropped from 16 MP NEX-5N images. Moving up to the 24 MP a6300 increased my alignment time to about an hour per frame. Now is the time to sleep, or go to work, or read that long Russian novel…

Once Nebulosity has produced a set of aligned images in FITS format, use Batch / Align and Combine Images to stack them. Select "Save Stack" and Alignment Method "none" (they are already aligned). I usually use Stacking Function "Std. Dev. filter (1.5)" - similar to Lynkeos. This will take just a few minutes. Again the result is a FITS file, but Batch / Batch Conversion / FITS to TIFF will get you back to a TIFF. If your images show atmospheric dispersion color fringes - red on one side and blue on the other - Nebulosity will let you separate your color image into three monochrome images. You can use the Nebulosity automatic alignment to align the colors to each other. These can then be recombined into a color-aligned image in a tool like Photoshop. You can also do a cut-and-try color alignment in Lynkeos.
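
If you prefer to script that color re-alignment, here is a sketch using scikit-image's phase cross-correlation. The file name "stack.tif" is hypothetical, a 16-bit TIFF is assumed, and upsample_factor controls the sub-pixel accuracy:

    import numpy as np
    from scipy.ndimage import shift
    from skimage import io
    from skimage.registration import phase_cross_correlation

    # Register red and blue against green to remove dispersion fringes.
    img = io.imread("stack.tif").astype(np.float64)   # hypothetical name
    r, g, b = img[..., 0], img[..., 1], img[..., 2]

    # upsample_factor=20 gives 1/20 pixel registration accuracy
    r_shift, _, _ = phase_cross_correlation(g, r, upsample_factor=20)
    b_shift, _, _ = phase_cross_correlation(g, b, upsample_factor=20)

    aligned = np.dstack([shift(r, r_shift), g, shift(b, b_shift)])
    io.imsave("stack_aligned.tif",
              np.clip(aligned, 0, 65535).astype(np.uint16))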

Deconvolution sharpening in Lynkeos

Back in Lynkeos, with my stacked image either from Lynkeos or from opening the TIFF of the Nebulosity stack, we're ready to recover some resolution. Lynkeos has the typical unsharp mask tool. I recommend that you never use it: it makes the image less sharp and introduces artifacts that fool your brain's sharpness "happiness" sense into thinking that the image is sharp.

Deconvolution should be done while the data is still linear, before any stretching. Real resolution can be recovered by deconvolving the point spread function of the imaging system out of the image. Fortunately computers are fast now and approximations to the point spread function work well. There are algorithms to do this in either the spatial or frequency domain.
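
As an illustration of the principle (not Lynkeos' actual implementation), here is a minimal Richardson-Lucy deconvolution sketch with scikit-image, assuming a hypothetical 16-bit TIFF named "stack.tif" and a Gaussian stand-in for the true point spread function:

    import numpy as np
    from skimage import io, restoration

    # Load the stacked, still-linear image and scale it to [0, 1]
    image = io.imread("stack.tif").astype(np.float64)   # hypothetical name
    image /= image.max()

    # Small Gaussian PSF; sigma plays the role of Lynkeos' "Radius" setting
    sigma = 2.5
    r = np.arange(-7, 8)
    xx, yy = np.meshgrid(r, r)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    psf /= psf.sum()

    # 30 iterations is a conservative start; more iterations sharpen
    # further but amplify noise and edge ringing ("onion skinning")
    if image.ndim == 3:                   # color: deconvolve each channel
        deconvolved = np.dstack(
            [restoration.richardson_lucy(image[..., c], psf, 30)
             for c in range(image.shape[-1])])
    else:
        deconvolved = restoration.richardson_lucy(image, psf, 30)

    io.imsave("stack_sharp.tif",
              (np.clip(deconvolved, 0, 1) * 65535).astype(np.uint16))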

Lynkeos has two algorithms, "Deconvolution" and "Wavelet", that work well. Deconvolution has only 2 settings, so it's a bit less intimidating. With the Questar and Sony cameras I find that the default Radius of 2.5 and a Threshold of 0.5 is a good starting point. The first pass will take a couple of minutes, but subsequent adjustments are much faster.

The things to watch out for in the image include onion skinning or ringing: light and dark bands near the edge of the planet. These can be minimized by masking away the edge in a tool that works with masks, like PixInsight; Lynkeos only works with the whole image. Also watch out for clipping, where detail is lost in brightly lit areas.

As you work, you will see resolution and contrast improve. This is real detail being recovered, unlike the artifacts created by unsharp mask. Generally it's best to use a light hand; further sharpening can be done later with a tool that supports masking to confine it to the regions where it is needed. Sometimes I use the wavelet tool as well. It has a lot more knobs to turn and different characteristics. I generally leave the 1st wavelet alone and use values between 1.1 and 1.3 for the others. You can make a lot of improvement in the resolution of your images in Lynkeos without over-sharpening or visible artifacts.

Photoshop and PixInsight curve stretching and exposure adjustment

The final part of my workflow is dynamic range management: getting all of the subtle image detail into a brightness range that can be seen by eye. Both Photoshop and PixInsight are great tools for pulling detail out of your image data. Their models are very different - stacked layers versus sequential transformations. With both, I've found it very important to learn how to generate feathered masks that select specific exposure zones of the image, to control where specific algorithms take effect. I end this note here; there are many tutorials available on using these tools in this part of the workflow.
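
To show what I mean by a feathered exposure-zone mask, here is a small sketch working on the luminance for simplicity; the file name "planet.tif", the threshold, and the feather radius are illustrative assumptions:

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage import io

    # Load a 16-bit TIFF (hypothetical name) and take its luminance
    img = io.imread("planet.tif").astype(np.float64) / 65535.0
    lum = img.mean(axis=-1) if img.ndim == 3 else img

    bright = (lum > 0.5).astype(np.float64)   # hard select the highlights
    mask = gaussian_filter(bright, sigma=15)  # feather the boundary

    # Example: apply a gamma stretch only where the mask is strong, so the
    # adjustment fades out smoothly instead of leaving a hard seam
    stretched = np.clip(lum, 0, 1) ** 0.7
    result = mask * stretched + (1 - mask) * lum
    io.imsave("planet_stretched.tif",
              (np.clip(result, 0, 1) * 65535).astype(np.uint16))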

Content created: 2015-05-18
