Astro Pixel Processor

Thoughts about multiplicative+additive+nonlinear normalization

  • #15166

    Wang
    Participant
    posts: 17

    Hi,

    We talked about this in the previous thread; I think it’s better to open a new thread for this topic. The method I am going to suggest was once used in my old image-processing work, so it should work to a certain degree.

    The idea is to identify overlapping regions between two registered frames. Suppose the pixel brightness in one frame is I1(x,y), and the other is I2(x,y). For all x, y, we can find a function to describe the relation between I1 and I2:

    I2 = a + b*I1 + c*I1^2 + d*I1^3 + …

    If there are enough pixels (nx*ny), it is possible to fit the polynomial parameters a, b, c, d,….

    1. For b, c, d, … all equal to 0 and a non-zero a, it is a simple additive normalization.
    2. For a, c, d, … equal to 0 and a non-zero b, it is a multiplicative normalization.
    3. For non-zero a and b (all others 0), it is multiplicative + additive.
    4. Finally, by allowing c, d, and even higher-order terms, it can normalize two nonlinear images (a small fitting sketch follows this list).
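
    A minimal sketch of this fit, assuming NumPy and that the overlap has already been cut out of both registered frames as same-shaped arrays (the function names here are just for illustration):

        import numpy as np

        def fit_normalization(I1, I2, degree=3):
            """Fit I2 = a + b*I1 + c*I1^2 + ... over the overlap pixels.

            Returns coefficients [a, b, c, d, ...] in ascending order.
            """
            # np.polynomial.polynomial.polyfit returns low-to-high coefficients
            return np.polynomial.polynomial.polyfit(I1.ravel(), I2.ravel(), degree)

        def apply_normalization(I1, coeffs):
            """Map frame 1 onto frame 2's brightness scale."""
            return np.polynomial.polynomial.polyval(I1, coeffs)

    With degree=1 this covers cases 1–3 (a and b); raising the degree enables the nonlinear case 4.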

    A few potential issues:

    1. Noise. This is easy. We can first smooth I1 and I2 before fitting the polynomial parameters. However, the smoothing has to avoid stars. From the registration stage, there must be info about the locations of stars, so it should be possible to mask the stars and then smooth the images (see the smoothing sketch after this list).
    2. Stars. When there are some nebulae in the overlapping region, it is easy to normalize the two images: just mask the stars and smooth the images before the polynomial fitting. But what if there are no nebulae at all and the background is purely noise? Then we have to use stars to get parameter b. The best way to normalize two images with only stars is aperture photometry on the stars (a photometry sketch also follows this list). This may not be difficult in itself; however, it is not easy to incorporate it into the above polynomial-fitting method. For extremely high-quality images, we could simply throw all pixels into the polynomial fit without worrying about stars or smoothing out the noise. Unfortunately, amateur images are usually not good enough for such a simple method to work.
    3. Finally, this method can normalize two images, but I don’t know how to deal with multiple images in a big mosaic.
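
    For point 1, a minimal sketch of smoothing that avoids stars, assuming a boolean star mask is available from registration (this normalized-convolution trick is one way to do it, not the only one):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def masked_smooth(img, star_mask, size=9):
            """Boxcar-smooth img while ignoring pixels flagged in star_mask."""
            valid = (~star_mask).astype(float)
            filled = np.where(star_mask, 0.0, img)
            num = uniform_filter(filled, size)
            den = uniform_filter(valid, size)
            with np.errstate(invalid="ignore"):
                return num / den  # NaN where a window held only star pixels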
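
    And for point 2, a hedged sketch of estimating b from aperture photometry of stars matched between the two frames, assuming photutils (the helper name is illustrative):

        import numpy as np
        from photutils.aperture import CircularAperture, aperture_photometry

        def multiplicative_term(I1, I2, star_xy, radius=5.0):
            """Estimate b as a robust ratio of per-star aperture fluxes."""
            # Real use would subtract a local background around each star first
            apertures = CircularAperture(star_xy, r=radius)
            f1 = aperture_photometry(I1, apertures)["aperture_sum"]
            f2 = aperture_photometry(I2, apertures)["aperture_sum"]
            return np.median(np.asarray(f2) / np.asarray(f1))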

    Hope these are useful in some ways.

    Cheers,

    Wei-Hao

    #15246

    Haverkamp
    Participant
    posts: 648

    Hi Wei-Hao (@whwang),

    Thank you for explaining your thoughts and writing down this concept.

    “The idea is to identify overlapping regions between two registered frames.”

    I have this covered already, in both the advanced normalisation mode and LNC algorithms.

    “Suppose the pixel brightness in one frame is I1(x,y), and the other is I2(x,y). For all x, y, we can find a function to describe the relation between I1 and I2:

    I2 = a + b*I1 + c*I1^2 + d*I1^3 + …

    If there are enough pixels (nx*ny), it is possible to fit the polynomial parameters a, b, c, d,….”

    I understand the concept and I think I can use this.

    My LNC algorithm currently only corrects local brightness differences additively, but it already uses a bivariate polynomial model with constraints to keep the solution stable (the degree of LNC is the degree of the bivariate polynomial). So for an 8th-degree LNC you have lots of parameters for each individual frame.
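
    For readers who haven’t met the term: a bare-bones illustration of fitting an additive bivariate-polynomial surface to sampled brightness offsets, assuming NumPy, and leaving out the stability constraints and multi-frame coupling that LNC actually applies:

        import numpy as np

        def fit_additive_surface(x, y, delta, degree=4):
            """Least-squares fit of delta ~ sum_ij c[i, j] * x**i * y**j."""
            # Design matrix with all terms up to `degree` in each coordinate
            A = np.polynomial.polynomial.polyvander2d(x, y, [degree, degree])
            coeffs, *_ = np.linalg.lstsq(A, delta, rcond=None)
            return coeffs.reshape(degree + 1, degree + 1)

        def evaluate_surface(x, y, c):
            return np.polynomial.polynomial.polyval2d(x, y, c)

    This tensor-product form has (degree+1)² coefficients per frame; whichever exact form LNC uses, an 8th-degree fit clearly carries many parameters, which is why the constraints matter.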

    The constraints make sure that the solution that comes out for all frames together is as flat as possible and that all overlap areas are “connected” to each other.

    I think your idea would be most suited to implement as an upgrade of my LNC algorithms, since these can already handle the problem of combining all frames with each other directly (your point 3).

    The upgrade would be to use both additive and multiplicative terms in the LNC calculation, so it would be able to handle non-linear frames as well when you select a higher degree. I would need to expand my current bivariate polynomial model to do this.

    Currently my LNC algorithms are linear least-squares solutions with constraints, so they have an exact solution. Expanded with your suggestion, the algorithms wouldn’t be linear anymore (the normal equations would no longer give a closed-form solution). I’ll most definitely need a trust-region Levenberg-Marquardt approach, I think, to be able to solve it quickly and robustly.
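
    To make the nonlinearity concrete, a toy sketch assuming SciPy: two frames get per-frame gains and offsets, and a scale-fixing constraint (g1*g2 = 1, invented here purely for illustration) makes the residuals nonlinear in the parameters, which is where a trust-region solver comes in:

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)
        signal = rng.uniform(0.1, 0.9, 500)          # "true" overlap brightness
        I1 = 0.9 * signal + 0.01 + rng.normal(0, 0.003, signal.size)
        I2 = 1.2 * signal - 0.02 + rng.normal(0, 0.003, signal.size)

        def residuals(p):
            g1, o1, g2, o2 = p
            match = (g1 * I1 + o1) - (g2 * I2 + o2)  # frames must agree on overlap
            gauge = np.array([g1 * g2 - 1.0])        # fixes the overall scale
            return np.concatenate([match, gauge])

        sol = least_squares(residuals, x0=[1.0, 0.0, 1.0, 0.0], method="trf")
        print(sol.x)                                 # fitted gains and offsets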

    About the sample points: using all pixels of all frames gives way too many sample points, I think. Currently I divide each frame into really small blocks (for instance 32×32 = 1024 pixels; in LNC this size is determined dynamically) for which I determine a robust, sigma-clipped median to represent a sample point for that area. So that gives you one sample point for a lot of pixels, and noise is not a factor anymore.
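
    A small sketch of that block sampling, assuming astropy’s sigma-clipped statistics (the fixed 32-pixel block here is illustrative; LNC sizes its blocks dynamically):

        import numpy as np
        from astropy.stats import sigma_clipped_stats

        def block_samples(frame, block=32):
            """One robust sample point (x, y, median) per block."""
            h, w = frame.shape
            samples = []
            for by in range(0, h - block + 1, block):
                for bx in range(0, w - block + 1, block):
                    tile = frame[by:by + block, bx:bx + block]
                    _, med, _ = sigma_clipped_stats(tile, sigma=3.0)
                    samples.append((bx + block / 2, by + block / 2, med))
            return np.array(samples)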

    But to correct for the dispersion of those blocks as well (the multiplicative part), noise would still be an issue.

    From my star analysis routines I can easily exclude the stars when applying noise reduction to all other pixels.

    To get the same star intensities out of the new LNC algorithm, I probably need to use an alternative method for the dispersion of each block after noise reduction. Normally I use the standard deviation, the MAD, or the biweight midvariance, and possibly only on one side of the central value (which we would need to use here). Maybe I should use a dispersion metric that sits somewhat further away from the central value… I’ll think about this.
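
    As one possible metric along those lines (purely an illustration, not what APP currently does): a scaled MAD computed only on the bright side of the block median, so that star flux above the background still drives the estimate:

        import numpy as np

        def upper_side_dispersion(tile):
            """Scaled MAD of only the pixels above the block median."""
            med = np.median(tile)
            upper = tile[tile > med]
            # 1.4826 rescales MAD to sigma for Gaussian data
            return 1.4826 * np.median(np.abs(upper - med))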

    So I think that I can expand my LNC algorithms to solve all 3 of your points 😉 It would be a really great improvement, and I don’t think it would take too much of my time to make. So it’s on my to-do list.
