Question

Given N x-ray images with different exposure doses, I must combine them into a single image that condenses the information from the N source images. If my research is right, this problem falls in the HDRI category.

My first approach is a weighted average. For starters, I'll work with just two frames.

Let A be the first image, the one with the lowest exposure, which is therefore set to weigh more in order to highlight details. Let B be the second, overexposed image, C the resulting image and M the maximum possible pixel value. Then, for each pixel i:

w[i] = A[i] / M

C[i] = w[i] * A[i] + (1 - w[i]) * B[i]
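
Expressed as code, a minimal NumPy/OpenCV sketch of this blend could look like the following (the file names, the 16-bit depth and the value of M are assumptions):

import numpy as np
import cv2

# Load the two frames as floating point (file names and bit depth assumed).
A = cv2.imread("low_exposure.png", cv2.IMREAD_UNCHANGED).astype(np.float64)   # lowest exposure
B = cv2.imread("over_exposure.png", cv2.IMREAD_UNCHANGED).astype(np.float64)  # overexposed

M = 65535.0  # maximum possible pixel value (16-bit data assumed)

w = A / M                     # per-pixel weight taken from the low-exposure frame
C = w * A + (1.0 - w) * B     # weighted average of the two frames

cv2.imwrite("combined.png", np.clip(C, 0, M).astype(np.uint16))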

An example result of applying this idea:

[image: the two source frames (A and B) and the weighted-average result C]

Notice how the result (third image) nicely captures the information from both source images.

The problem is that the second image has discontinuities around the object edges (this is unavoidable in overexposed images), and those carry over into the result. Looking closer...

[image: close-up of the edge discontinuities carried over from the overexposed frame]

The best-reputed HDR software seems to be Photomatix, so I fooled around with it, but no matter how I tweaked it, the discontinuities always appeared in the result.

I think that I should somehow ignore the edges of the second image, but I must do it in a "smooth" way. I tried using a simple threshold, but the result looks even worse.

What do you suggest? (only open source libraries welcome)

Solution

The problem here is that each image has a different exposure dose associated with it. Any HDR algorithm must take this into account.

I asked the people who created the x-ray images, and the exposure dose for the second image is approximately 4.2 times that of the first one. I had been giving wrong EV values to Photomatix because I didn't know that EV is expressed in stops, 1 stop meaning twice the reference exposure. With 0 EV assigned to the first image and +2.1 EV to the second, the discontinuities were gone and all the information was kept.
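
In other words, the EV offset is just the base-2 logarithm of the dose ratio (using the 4.2 figure from above):

EV_B - EV_A = log2(4.2) ≈ 2.07 ≈ +2.1 stops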

The next problem was that I had no idea how Photomatix did this, so I tried doing the same thing with Luminance HDR (aka qtpfsgui), which is open source.

To sum it up, the exposure-bracketed images must be fed to an HDR creation algorithm, which merges them into a single HDR image. Basically, that's a floating-point image which contains the information from all the source images. There are many algorithms to do this; Luminance HDR calls this the HDR creation model and offers two of them: Debevec and Robertson.
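
For illustration, OpenCV (also open source) ships implementations of both merging methods; this is not Luminance's code, just a minimal sketch in which everything apart from the 4.2 dose ratio is an assumption:

import numpy as np
import cv2

# Relative exposures; only the ratio matters (second frame got ~4.2x the dose).
times = np.array([1.0, 4.2], dtype=np.float32)

# OpenCV's merge functions expect 8-bit input frames (file names assumed).
imgs = [cv2.imread("low_exposure.png"), cv2.imread("over_exposure.png")]

# Debevec merging; cv2.createMergeRobertson() is used the same way.
merge = cv2.createMergeDebevec()
hdr = merge.process(imgs, times)  # 32-bit floating-point HDR image

cv2.imwrite("merged.hdr", hdr)    # Radiance .hdr keeps the float data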

However, an HDR image cannot be displayed directly on a conventional display (i.e. a monitor), so it needs to be converted to a "normal" (LDR) image while keeping as much of the information as possible. This is called tone mapping, and there are also various algorithms available for it; Luminance calls these Tonemap Operators and offers several, so the most suitable one can be picked. The Pattanaik operator worked great for these images.
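
The tone-mapping step looks similar in code; a sketch using OpenCV again (Pattanaik itself is not in OpenCV's core module, so Reinhard is used here purely as a stand-in):

import numpy as np
import cv2

# Load the floating-point HDR image produced by the merging step above.
hdr = cv2.imread("merged.hdr", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)

# Reinhard tone mapping; Drago and Mantiuk operators are also available.
tonemap = cv2.createTonemapReinhard(gamma=2.2)
ldr = tonemap.process(hdr)  # floats roughly in [0, 1]

cv2.imwrite("tonemapped.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))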

So now I'm reading Luminance's code in order to understand it and make my own implementation.

Licensed under: CC-BY-SA with attribution