This article in the starry-night.nl: Astro Pixel Processor series about working with APP gives an in-depth explanation of data calibration with APP. The workflow uses data from an Astro Pixel Processor Beta group member to illustrate how to get correct data calibration using APP.
- Object: M51, the Whirlpool galaxy
- OTA: Takahashi TOA-130
- Canon EOS 600D (mod)
- Lights: 5 x 90 seconds, ISO 1600
- Darks: 5 x 90 seconds, ISO 1600
- Bias: 5 x 0 seconds, ISO 1600
Step 1) Create a Bad Pixel Map
We first create a Bad Pixel Map. Since we only have darks, we can only correct hot pixels. Cold pixels can be corrected using flats, but most modern cameras hardly have any cold pixels at all, so this is no significant problem.
|You can click on the images in this article; this is recommended to view them full screen.|
Be aware: put some effort into creating a good bad pixel map, since this Bad Pixel Map or BPM can be used for a very long time. You only need to make a good BPM once per year, or even less often. I have been using a BPM for my Nikon D5100 BCF mod (firmware hacked: true dark current patch) for over 3 years now with high efficiency, and still have no need to make a new one. A good BPM can be created using about 20-50 darks with exposure times well over 5 minutes, at the ISO/gain of the camera that you are most likely to use in your imaging sessions. In addition, 20-50 flats with exposures of about 1 second or longer should be fine; the ISO value isn't very important here. Remember: the more frames you use for the BPM, the better your bad pixel correction will be for a long time to come 😉
We first load the dark frames with 1) LOAD and push the dark button to select the darks. Then we go to 2) Calibrate and scroll down to the Bad Pixel Map options. We select create bad pixel map and set hot pixel kappa to 1. If you have more darks (>20), the kappa can usually be set to 2 with good results. Click on create calibration masters. After the creation of the Bad Pixel Map, we investigate it by zooming in on it, and we check the FITS header for statistics. The amount of hot pixels, i.e. pixels that seem to show non-linear behaviour, is 0.825% with the hot kappa 1 setting. This is a very normal value and probably good for this small data set.
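To make the kappa setting concrete, here is a minimal numpy sketch of the kappa-thresholding idea behind hot pixel detection: a pixel is flagged when its average dark value sits more than kappa standard deviations above the overall level. The function name and details are illustrative only, not APP's actual implementation.

```python
import numpy as np

def build_bad_pixel_map(darks, kappa=3.0):
    """Flag hot pixels: pixels whose mean dark value lies more than
    kappa standard deviations above the frame-wide mean level."""
    stack = np.stack(darks).astype(np.float64)   # shape (n_frames, H, W)
    per_pixel_mean = stack.mean(axis=0)          # averaging suppresses shot noise
    mu = per_pixel_mean.mean()
    sigma = per_pixel_mean.std()
    return per_pixel_mean > mu + kappa * sigma   # boolean map: True = hot

# Synthetic demo: 5 darks with two injected hot pixels.
rng = np.random.default_rng(0)
darks = [rng.normal(100.0, 2.0, (64, 64)) for _ in range(5)]
for d in darks:
    d[10, 10] += 500.0   # hot pixel
    d[20, 30] += 800.0   # hot pixel
bpm = build_bad_pixel_map(darks, kappa=3.0)
```

A lower kappa flags more pixels; with only a few darks, the per-pixel averages are noisier, which is why a conservative kappa of 1 can be appropriate for a small set, while kappa 2 works once you have more than 20 darks.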
Step 2) Create a Master Bias
We first deselect the create bad pixel map option. Then we go to 1) LOAD and push the clear button. The file list is now completely emptied. We proceed by loading the bias frames by clicking on the bias button. At 2) Calibrate, we use median integration for the bias master integration, without outlier rejection. If you have more than 20 bias frames (which you'll normally have), select average and use normal or winsorized sigma clipping for outlier rejection with a conservative setting, like 1 iteration with kappa 3. (Don't use Linear Fit clipping anywhere in the calibration process!) Click on create calibration masters.
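For intuition, here is a hedged numpy sketch of sigma-clipped average integration: per pixel, values that deviate more than kappa standard deviations from the pixel's mean are masked out before averaging. This is a simplified stand-in for APP's normal/winsorized clipping, not its actual code.

```python
import numpy as np

def sigma_clipped_average(stack, kappa=3.0, iterations=1):
    """Average integration with simple per-pixel sigma clipping:
    values beyond kappa*std of the pixel's mean are rejected."""
    data = np.ma.masked_array(stack.astype(np.float64))
    for _ in range(iterations):
        mu = data.mean(axis=0)
        sigma = data.std(axis=0)
        data = np.ma.masked_where(np.abs(data - mu) > kappa * sigma, data)
    return data.mean(axis=0).filled(0.0)

# 25 synthetic frames, one cosmic-ray style outlier in a single pixel.
rng = np.random.default_rng(1)
stack = rng.normal(100.0, 3.0, (25, 32, 32))
stack[7, 5, 5] = 5000.0
master = sigma_clipped_average(stack, kappa=3.0)
```

With 25 frames the outlier is rejected and the averaged pixel stays near the true level; with only 5 frames the per-pixel statistics are too weak for reliable rejection, which is why plain median integration is the safer choice for small sets.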
After APP is finished, check the statistics of the master bias. You'll notice that the more bias frames you use, the lower the Multi Resolution Support Gaussian Noise (MRS noise) estimate becomes. So with more frames, the master bias will become less noisy. Your aim should be to create a master bias with very little noise, so that means: use plenty of bias frames. I usually use 100-250 frames. The explanation being: the noise in the master bias will always be injected into your light frames during calibration, so try to reduce this injection to a minimum by using a lot of bias frames.
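The 1/sqrt(N) behaviour behind this advice can be demonstrated with a small numpy sketch (the read noise value is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
read_noise = 8.0   # illustrative read noise, in arbitrary units

def master_bias_noise(n_frames):
    """Residual noise in an averaged master bias built from n_frames frames."""
    frames = rng.normal(0.0, read_noise, (n_frames, 128, 128))
    return frames.mean(axis=0).std()

few = master_bias_noise(5)
many = master_bias_noise(125)
# Averaging N frames cuts the noise by roughly sqrt(N):
# 125 frames vs 5 frames -> about sqrt(25) = 5x less noise.
```

So going from 5 to 125 bias frames reduces the noise injected into your lights by roughly a factor of 5.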
Step 3) Create a Master Dark
We can now choose between 2 ways to properly calibrate the light frames with a master dark:
- calibrate all darks with the master bias, so we subtract the bias pedestal from the darks. This means that both the master dark and the master bias should be used in the calibration of the lights
- don’t calibrate the darks with the master bias, since the bias is already part of your darks. This means we’ll only use the master dark in the calibration of the light frames, so the master bias actually becomes redundant. (This is the recommended method for camera sensors that have significant fixed pattern noise in the dark current, where dark current calibration is really needed)
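The two methods above are algebraically equivalent, which a small numpy sketch makes explicit (the bias, dark current and signal values here are synthetic and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
bias = np.full((16, 16), 500.0)                 # constant bias pedestal
dark_current = rng.uniform(0.0, 20.0, (16, 16)) # fixed-pattern dark current
signal = rng.uniform(0.0, 1000.0, (16, 16))     # the sky signal we want

raw_dark = bias + dark_current             # what the camera records in a dark
raw_light = bias + dark_current + signal   # what it records in a light

# Method 1: bias-calibrate the dark, then subtract both masters from the light
master_dark_no_bias = raw_dark - bias
cal_method1 = raw_light - bias - master_dark_no_bias

# Method 2: subtract the raw master dark only (it already contains the bias)
cal_method2 = raw_light - raw_dark

# Both recover exactly the same signal.
```

Either way the bias and dark current are removed; method 2 simply does it in one subtraction.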
I’ll choose method 2). So I remove the bias frames and master bias from the file list by clicking on clear in 1) LOAD. Then I load the darks by clicking on the dark button. At 2) Calibrate, I choose median integration and no outlier rejection, for the same reasons as in the previous bias integration. Click on create calibration masters, and check the master dark visually as well as the statistics in the FITS header.
Step 4) Calibrate the light frames and remove possible chromatic aberration
In step 3) I chose to do dark calibration without bias calibration of the dark frames, so I only need the master dark (which contains the bias) for dark and bias calibration at the same time.
I clear the file list again and load the lights with the light button. Then I load the master dark and our Bad Pixel Map as well, using any button to load frames (APP will detect these master frames). I visually inspect the uncalibrated and calibrated images using the linear and l-calibrated image viewer options. Clearly, the bad pixel map is doing a very good job, since the calibrated frame hardly shows any hot pixels.
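As an illustration of what bad-pixel correction does conceptually, here is a minimal numpy sketch that replaces each flagged pixel with the median of its 3x3 neighbourhood; APP's actual repair strategy may well differ in its details.

```python
import numpy as np

def apply_bpm(frame, bpm):
    """Repair bad pixels: replace each flagged pixel with the median of
    its 3x3 neighbourhood, excluding the flagged pixel itself."""
    out = frame.astype(np.float64).copy()
    h, w = frame.shape
    for y, x in zip(*np.nonzero(bpm)):
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        patch = frame[y0:y1, x0:x1].astype(np.float64).ravel()
        # drop the bad pixel's own value before taking the median
        neighbours = np.delete(patch, (y - y0) * (x1 - x0) + (x - x0))
        out[y, x] = np.median(neighbours)
    return out

light = np.full((8, 8), 50.0)
light[3, 3] = 9000.0            # a hot pixel that survived dark subtraction
bpm = light > 1000.0            # illustrative flag; in practice this comes from the BPM
repaired = apply_bpm(light, bpm)
```

Because the replacement value comes from the immediate neighbourhood, the repair is invisible in smooth regions, which is why a calibrated frame with a good BPM shows hardly any hot pixels.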
The data of Hans has no visible chromatic aberration thanks to his fantastic refractor, the Takahashi TOA-130. If your data does have coloured cyan/magenta/red/blue edges on the stars, indicating chromatic aberration, APP can remove this to a large degree (both lateral and transversal CA, thanks to a dedicated registration model for CA correction) using the align channels option, part of the options to save the calibrated frames. In this case I don’t use this option, since I can’t visually detect any chromatic aberration.
Data calibration is now finished.
Step 5) Analyse stars & registration
So with the 5 lights loaded together with the master dark and the bad pixel map, we proceed with 3) star analysis. We use the default settings, so we change nothing. At 4) register we also use the default settings. We then click on start registration; now both star analysis and registration are performed. Let’s check the results after registration in the file list panel, since we encounter some output that needs our attention.
First, look at the #stars & star density column. In frames 2 and 3, the number of detected stars is much lower than in the other frames. This is a direct and clear indication that these frames are bad compared to the other three. The star density value corrects the #stars value to make it independent of image scale and Field of View, hence the name star density. This is most useful when you want to integrate frames with different image scales and/or Fields of View: since star density is an image scale & Field of View independent metric, it is much better suited for comparing the quality of your frames in that situation.
Secondly, look at the Full Width Half Maximum, or FWHM, value of the stars in the frames. We see a similar pattern: frames 2 & 3 have strongly elongated stars when compared to the other frames. The reported FWHM is the average FWHM of all detected stars. The min FWHM value is the minimum cross section of this average star profile in the frame, whereas the max FWHM is the maximum cross section of this average star profile. If the image has perfectly round stars, the min and max FWHM values should be the same. If the max value is twice the min value, the star is twice as big in the direction of the maximum cross section as in the direction of the minimum cross section.
FWHM is also reported relatively instead of absolutely, so image scale independently, like the star density. This way APP can directly compare the quality of frames with different image scales and Fields of View.
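To illustrate how the min and max FWHM capture star elongation, here is a hedged numpy sketch that estimates both cross sections from intensity-weighted second moments, assuming a roughly elliptical Gaussian star profile. This is an illustration of the concept, not APP's star analysis code.

```python
import numpy as np

def fwhm_min_max(star):
    """Estimate the min/max FWHM cross sections of a star image from its
    intensity-weighted second moments (elliptical Gaussian assumption)."""
    h, w = star.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    total = star.sum()
    cy, cx = (star * yy).sum() / total, (star * xx).sum() / total
    myy = (star * (yy - cy) ** 2).sum() / total
    mxx = (star * (xx - cx) ** 2).sum() / total
    mxy = (star * (yy - cy) * (xx - cx)).sum() / total
    # Eigenvalues of the moment matrix give sigma^2 along the two principal axes.
    tr, det = myy + mxx, myy * mxx - mxy ** 2
    lam_max = tr / 2 + np.sqrt(tr ** 2 / 4 - det)
    lam_min = tr / 2 - np.sqrt(tr ** 2 / 4 - det)
    to_fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0))   # FWHM ≈ 2.355 * sigma
    return to_fwhm * np.sqrt(lam_min), to_fwhm * np.sqrt(lam_max)

# Elongated synthetic star: sigma 1.5 px along x, 3.0 px along y.
yy, xx = np.mgrid[0:41, 0:41].astype(np.float64)
star = np.exp(-((xx - 20) ** 2 / (2 * 1.5 ** 2) + (yy - 20) ** 2 / (2 * 3.0 ** 2)))
fmin, fmax = fwhm_min_max(star)
```

For this star the max FWHM is twice the min FWHM, exactly the kind of 2:1 elongation described above.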
Thirdly, the registration RMS of the frames that were registered to the reference frame, frame 5, also shows a problem, which simply can’t be corrected due to the strongly elongated stars. Frames 2 and 3 are hard to register since they contain elongated stars, possibly the result of bad tracking during the exposure. Frame 2 has a registration error of 0.58 pixels for 120 star pairs between frame 2 and its reference frame. This is not good, but in this case acceptable. Frame 3 has an error of 2.38 pixels. We can’t accept this, so we will not use this frame later on in the integration.
Finally, the quality of the frames is expressed as a number, which is the result of a formula combining star density, star shape, star size and noise (the noise will be calculated in the next step). Frames 2 and 3 are again worse than the other frames.
Step 6) Image normalization
We use the default settings: we normalize with add + scale and biweight midvariance as scale estimator, and we enable background neutralization. We click on normalize lights and check the analytical results in the file list window. Basically, we find that the 5 frames are almost identical in background, dispersion, SNR and noise. The background, dispersion and noise values are always reported normalized to the range of 0 to 1. So if you were planning to integrate frames of both 16-bit and 32-bit data, this is no problem for APP: normalization is always done in the 0-1 data range (32-bit floats).
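The "add + scale" idea can be sketched in a few lines of numpy: an additive term matches the background level and a multiplicative term matches the dispersion. For simplicity this sketch uses the mean and plain standard deviation; APP uses more robust estimators (biweight midvariance for the scale), so treat this purely as an illustration.

```python
import numpy as np

def normalize_add_scale(frame, reference):
    """Match a frame to the reference with a multiplicative scale term
    (dispersion) and an additive term (background level)."""
    scale = reference.std() / frame.std()
    return (frame - frame.mean()) * scale + reference.mean()

rng = np.random.default_rng(4)
reference = rng.normal(100.0, 10.0, (64, 64))
frame = reference * 2.5 + 40.0     # same sky, different gain and background
norm = normalize_add_scale(frame, reference)
# norm now matches the reference in both background and dispersion
```

After normalization, frames taken under different gain or sky-background conditions live on a common scale, which is what makes the subsequent outlier rejection and integration statistically meaningful.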
Notice that the quality score has now been adjusted, since the MRS Gaussian noise in the frames is taken into account in the quality calculation as well. The picture stays the same, however: frames 2 and 3 are not very good when compared to the other frames.
Step 7) Image integration
We decided earlier that we definitely don’t want to use frame 3 in the data integration. I use the “lights to stack” slider to deselect this frame from stacking.
We are now integrating only 4 frames, so this time we again choose median integration without outlier rejection. I set the composition mode to full, to get a field of view for the stack that includes all pixels of all frames. I enable MBB at 5%, no LNC. I don’t create output maps. Data interpolation is set to Lanczos-3 with NUOS, since we have very few frames. Other settings are left at their defaults. We click on stack. When APP is finished, we inspect the stack/data integration and the statistics in the FITS header. No doubt, 4 x 90 seconds is a very short integration time for this object. If we were to combine a lot more light frames, this should prove to be a very nice image 😉 The colors are already very good as well.
Step 8) Post-processing of the stack
I’ll now do a short post-processing of the stack of Hans’ data. First I do a light pollution correction, then background calibration, and finally a crop of the field of view. In the end, I stretch the data with some added saturation. I now start to notice that flat calibration is really recommended: I see some dust donuts appearing when I stretch the data a bit more aggressively.
I think that if we were to combine more data and add good flat calibration into the mix, this will become a very nice picture sooner rather than later! For 4×90 seconds this is already very nice, especially considering the colors in M51, the Whirlpool galaxy.
Please feel free to comment on or make remarks about this article, or ask your question in the APP Group.
Copyright © Mabula Haverkamp (author & developer APP) & © starry-night.nl 2017