Question

I am working on a structure-from-motion application in which I am tracking a number of markers placed on an object in order to determine the object's rigid structure.

The app essentially uses standard Levenberg-Marquardt optimization over multiple camera views, minimising the differences between the expected marker positions and the marker positions observed in 2D in each view.

For each marker point and each view, the following residual is minimised:

double diff = calculatedXY[index] - observedXY[index];

Where the calculatedXY value depends on a number of unknown parameters that the optimization must recover, and observedXY is the marker's observed 2D position. In total I have (marker points × views) residual functions like the one above that I am aiming to minimise.
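
To make the structure concrete, here is a simplified sketch of how I lay out the residual vector (Params and project() are placeholders standing in for my actual parameter block and camera projection, not real code from the app):

#include <vector>

struct Point2 { double x, y; };
struct Params { /* unknown camera and structure parameters */ };
Point2 project(const Params& params, size_t view, size_t marker); // placeholder

std::vector<double> computeResiduals(
    const std::vector<std::vector<Point2>>& observed, // [view][marker]
    const Params& params) {
    std::vector<double> residuals;
    for (size_t view = 0; view < observed.size(); ++view) {
        for (size_t marker = 0; marker < observed[view].size(); ++marker) {
            // Project the marker into this view using the current estimate.
            Point2 calc = project(params, view, marker);
            residuals.push_back(calc.x - observed[view][marker].x);
            residuals.push_back(calc.y - observed[view][marker].y);
        }
    }
    return residuals; // 2 * views * markers entries when all markers are visible
}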

I have coded up a simulation in which the camera sees all the marker points, but I am wondering how to handle the cases where, at run time, points are not visible due to lighting, occlusion, or simply falling outside the camera view. In the real application I will be using a webcam to view the object, so it is likely that not all markers will be visible at once, and depending on how robust my computer vision algorithm is, I might not be able to detect a marker all the time.

I thought of setting the diff value to 0 for any marker point that could not be observed (so its squared difference contributes nothing to the sum). Could this skew the results, however?

Another thing I noticed is that the algorithm performs worse when presented with too many views: it is more likely to settle on a bad solution. Is this a common problem with bundle adjustment, due to the increased likelihood of hitting a local minimum as the number of views grows?

Solution

It is common practice to simply leave out the terms corresponding to missing markers, i.e. don't try to minimise calculatedXY - observedXY if there is no observedXY measurement. There is no need to set anything to zero; you shouldn't even be considering that term in the first place. Just skip it (in your code, setting the error to zero happens to be equivalent, but skipping is cleaner), as in the sketch below.
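
In code, that just means checking a visibility flag before emitting the residual pair. A minimal sketch (Params and project() are again placeholders for your own parameter block and projection):

#include <vector>

struct Point2 { double x, y; };
struct Observation { Point2 xy; bool visible; };
struct Params { /* unknown camera and structure parameters */ };
Point2 project(const Params& params, size_t view, size_t marker); // placeholder

std::vector<double> computeResiduals(
    const std::vector<std::vector<Observation>>& observed, // [view][marker]
    const Params& params) {
    std::vector<double> residuals;
    for (size_t view = 0; view < observed.size(); ++view) {
        for (size_t marker = 0; marker < observed[view].size(); ++marker) {
            if (!observed[view][marker].visible)
                continue; // missing marker: contribute no term at all
            Point2 calc = project(params, view, marker);
            residuals.push_back(calc.x - observed[view][marker].xy.x);
            residuals.push_back(calc.y - observed[view][marker].xy.y);
        }
    }
    // The residual vector (and the matching Jacobian rows) simply shrink;
    // Levenberg-Marquardt does not care that different views contribute
    // different numbers of terms.
    return residuals;
}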

Bundle adjustment can fail badly if you simply throw a large number of observations at it all at once. Build your solution up incrementally: solve with a few views first, then keep adding views, using each solution to initialise the next.
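
Something along these lines (solveLM() is a placeholder for your Levenberg-Marquardt solve over the currently active views, refining params in place; this assumes at least two views):

#include <vector>

struct Params { /* unknown camera and structure parameters */ };
struct View { /* one camera view and its marker observations */ };
void solveLM(const std::vector<View>& views, Params& params); // placeholder

void solveIncrementally(const std::vector<View>& allViews, Params& params) {
    // Start from a small, well-conditioned subset (here the first two views).
    std::vector<View> active(allViews.begin(), allViews.begin() + 2);
    solveLM(active, params);
    // Add one view at a time, re-optimising from the previous estimate so
    // each solve starts close to a good minimum.
    for (size_t v = 2; v < allViews.size(); ++v) {
        active.push_back(allViews[v]);
        solveLM(active, params);
    }
}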

You might also want to try a 'robust' approach: instead of plain least squares, wrap each residual in a robust loss function. This lets the optimisation survive even if a handful of observations are incorrect. You can still do this in a Levenberg-Marquardt framework; you just need to incorporate the derivative of the loss function into the Jacobian, as sketched below.
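
For example, with the Huber loss (quadratic near zero, linear in the tails) a common way to fold the loss derivative into Levenberg-Marquardt is iteratively reweighted least squares: scale each residual and its Jacobian row by sqrt(w), where w = rho'(r) / r. A sketch:

#include <cmath>

// Huber loss rho(r): behaves like 0.5*r^2 for small residuals, grows only
// linearly beyond |r| = delta, so outliers have bounded influence.
double huberLoss(double r, double delta) {
    double a = std::abs(r);
    return a <= delta ? 0.5 * r * r : delta * (a - 0.5 * delta);
}

// IRLS weight w = rho'(r) / r: 1 inside the quadratic region, delta/|r|
// outside it. Multiply the residual and its Jacobian row by sqrt(w)
// before each Levenberg-Marquardt step.
double huberWeight(double r, double delta) {
    double a = std::abs(r);
    return a <= delta ? 1.0 : delta / a;
}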
