The Pinwheel Galaxy is the third-largest galaxy outside the Milky Way visible in the sky: a spectacular spiral, almost face-on, 22 million light-years away. After my success with upscaled processing of M51, I take a closer look at processing M101 using these techniques and compare the results to a Hubble Space Telescope image scaled to the same pixel scale. These images were made with a 2" aperture telescope and just over 2 hours of data!
The cropped and scaled image above is linked to the full-size image.
Imaged starting 2022-04-29 07:31 UT. I stacked and processed 45 three-minute exposures taken with a William Optics RedCat 250/51mm telescope, Baader UV/IR cut filter, ZWO ASI 533 MC camera, and SkyWatcher AZ-EQ5 mount, with a ZWO ASI EAF focuser and a guide camera. All were controlled with a ZWO ASIAir Plus controller. Processed in PixInsight, StarXTerminator, Topaz DeNoise, and Photoshop.
A 1:1 scale 640x480 crop of this data, alternately processed at the original pixel scale, shows softer detail than the scaled-down crop of the final image above.
A 1:1 scale 640x480 crop of this data, alternately processed at the original pixel scale and then scaled up 2x in Photoshop after all processing. This scaled image is just too soft, with no additional detail revealed.
A 1:1 scale 640x480 crop of this data, alternately processed at the original pixel scale and then scaled up 2x in Topaz Gigapixel AI after all processing. Gigapixel AI does a slightly better job scaling up the finished 1x image than the default Photoshop algorithm, but still doesn't produce a worthwhile result.
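As a minimal sketch of why late upscaling can't add detail, here is the crudest possible 2x resample, simple pixel replication, in Python. Photoshop's bicubic and Gigapixel AI's machine-learning resampling are far more sophisticated, but every post-processing upscale derives each output pixel from the already-processed 1x data, so no real detail can appear:

```python
# Sketch only: 2x upscale by pixel replication. Real resamplers
# (bicubic, AI-based) interpolate more smoothly, but all of them
# work from the finished 1x pixels, so late scaling reveals nothing new.
def upscale_2x(img):
    out = []
    for row in img:
        wide = [p for p in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate each row
    return out

print(upscale_2x([[1, 2]]))  # -> [[1, 1, 2, 2], [1, 1, 2, 2]]
```

Scaling *before* the non-linear processing, by contrast, gives the sharpening and denoising steps a finer grid to work on, which is the point of the workflow described next.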
Next we look at a close-up of the image produced by moving the upscaling to the beginning of non-linear processing. This is the same process I used in the earlier M51 image, and it was used to produce the full-size image linked at the top of this page.
A 1:1 scale 640x480 crop of the image processed at 2x image scale. The 2x scaling was done after the stacked image was separated into star and nebula layers: StarXTerminator was used to split the star layer from a nebulosity layer. Next, a linear MultiScale DeNoise was applied. The results were stretched non-linearly in PixInsight and exported in TIFF format for further processing.
The scaling of the stretched nebula layer was done using Topaz Gigapixel AI. The star layer was scaled in Photoshop for simplicity. The scaled nebula layer was then processed with Topaz DeNoise AI to sharpen it and reduce noise. The processed nebula and star layers were then recomposited in Photoshop. The star layer was exposure-stretched to minimize the star-size bloat caused by scaling. The nebula layer was processed to enhance detail, color, and luminosity contrast, identically to the processing of the unscaled comparison image.
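The recompositing step can be sketched in Python. A common way to recombine a star layer with a starless nebula layer is a "screen" blend, which brightens without clipping; whether Screen was the exact Photoshop blend mode used here is my assumption, so treat this as illustrative (pixel values normalized to 0..1):

```python
# Sketch, assuming a "screen" blend for star/nebula recompositing:
# screen(a, b) = 1 - (1 - a) * (1 - b), which never darkens either layer.
def screen_blend(stars, nebula):
    return [[1.0 - (1.0 - s) * (1.0 - n) for s, n in zip(srow, nrow)]
            for srow, nrow in zip(stars, nebula)]

stars  = [[0.0, 0.8], [0.2, 1.0]]   # tiny hypothetical 2x2 layers
nebula = [[0.5, 0.5], [0.5, 0.5]]
print(screen_blend(stars, nebula))  # -> [[0.5, 0.9], [0.6, 1.0]]
```

Note that where the star layer is black (0.0) the nebula shows through unchanged, and a saturated star (1.0) stays saturated, which is why the separated layers can be stretched independently before recombining.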
My golden reference for quality: a 1:1 scale 640x480 crop of a Hubble Space Telescope image scaled to the 2x pixel scale of my image. This image represents an ideal result for a telescope with resolution matching mine. Original image credit: NASA, ESA, CXC, SSC, and STScI.
The superior image is clearly the scaled-down one from the Hubble. Comparing my images with the 2x scaling performed early and late, the image scaled near the start of non-linear processing shows detail more clearly. Compared to the Hubble image, my best image shows three deficiencies:
Can I produce an image as good as the Hubble with a 2" telescope? Of course not! The original Hubble image has a pixel scale of 0.267 arcsec/pixel, 14 times the resolution of the 1:1 images matching my pixel scale of 3.76 arcsec/pixel. Can software with machine-learning-based processing to layer, upscale, sharpen, and denoise images be incorporated into a DSO workflow to produce visibly better images? Yes! I believe that similar benefits are possible for images made with larger-aperture scopes.
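The resolution comparison above is simple arithmetic on the two pixel scales:

```python
# Checking the resolution ratio quoted above.
# (Plate scale in arcsec/pixel = 206.265 * pixel_size_um / focal_length_mm,
# the standard formula; here we just compare the two quoted scales.)
my_scale = 3.76    # arcsec/pixel (this image)
hst_scale = 0.267  # arcsec/pixel (original Hubble image)
print(round(my_scale / hst_scale, 1))  # -> 14.1, i.e. ~14x finer resolution
```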
Content created: 2022-05-14
By submitting a comment, you agree that: it may be included here in whole or part, attributed to you, and its content is subject to the site wide Creative Commons licensing.