Question

I'm still implementing a perspective projection for my augmented reality application. I've already asked some questions about the viewport calculation and other camera-related topics, which Aldream explained in this thread

However, I don't get any useful values at the moment, and I think the problem lies in my calculation of the Cartesian coordinate space.

I have tried several ways to transform latitude, longitude and altitude into a Cartesian coordinate space, but none of them seems to work properly. Currently I'm using ECEF (earth-centered, earth-fixed), but I have also tried other approaches, such as a combination of the haversine formula and trigonometry (calculating x and y from the distance and the bearing between two points).
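
For reference, the ECEF conversion I mean is essentially the standard geodetic-to-ECEF formula; this is only a minimal sketch, assuming the WGS-84 ellipsoid, with illustrative names:

import math

# WGS-84 ellipsoid constants
A = 6378137.0              # semi-major axis [m]
F = 1 / 298.257223563      # flattening
E2 = F * (2 - F)           # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert latitude/longitude (degrees) and altitude (m) to ECEF x, y, z (m)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius of curvature
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + alt_m) * math.sin(lat)
    return x, y, z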

So my question is:

How does the Cartesian coordinate space affect my perspective projection? Where do I have to "compensate" for my units (when I'm using meters or centimeters, for example)?

Let's say I'm using ECEF; then I get values in meters, so, for example, my camera is at (0, 0, 2 m height) and my point is at (10, 10, 0). Now I can easily use the function mentioned on Wikipedia and afterwards apply the conversion of dx, dy, dz explained in my other thread (mentioned above). What I still don't get: how does this projection "know" what the units of my coordinate system are? I think this is the mistake I'm currently making: I don't handle the units of my coordinate system and therefore cannot get any good values from my projection.

When I use a coordinate system with centimeters as the unit, all the values from my perspective projection increase. Where do I have to "resolve" this unit problem? Do I have to convert my camera width and camera height from pixels to meters? Do I have to convert the coordinate system to pixels? Which coordinate system should be used to handle this situation? I hope you can understand my problem.

Edit: I solved it myself. I changed my coordinate system from ECEF to my own system (using haversine and bearing and then calculating x, y, z), and now I get good values! :)
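
Roughly, the new conversion looks like this (a minimal sketch of the haversine-plus-bearing idea, assuming a spherical earth of radius 6371 km; function names and the east/north/up convention are only illustrative):

import math

R = 6371000.0  # mean earth radius [m], spherical approximation

def haversine_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial bearing in radians from point 1 to point 2 (0 = north, clockwise)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.atan2(y, x)

def to_local_xyz(cam_lat, cam_lon, cam_alt, lat, lon, alt):
    """Express a point in a local Cartesian frame centered on the camera (meters)."""
    d = haversine_distance(cam_lat, cam_lon, lat, lon)
    b = initial_bearing(cam_lat, cam_lon, lat, lon)
    return d * math.sin(b), d * math.cos(b), alt - cam_alt   # east, north, up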


Solution

I'll try another way to explain it here then. :)

The short answer is: the unit of your Cartesian positions doesn't matter, as long as you keep it homogeneous, i.e. as long as you apply the same unit both to your scene and to your camera.

For the longer answer, let's go back to the formula you used...

Projection to screen

b_x = (d_x * s_x) / (d_z * r_x) * r_z
b_y = (d_y * s_y) / (d_z * r_y) * r_z

With:

  • d the relative Cartesian coordinates (the position of the point relative to the camera)
  • s the size of your printable surface, in pixels
  • r the size of your "sensor" / recording surface (i.e. r_x and r_y the size of the sensor and r_z its focal length)
  • b the resulting position on your printable surface, in pixels

... and do a pseudo dimensional analysis. We have:

[PIXEL] = (([LENGTH] x [PIXEL]) / ([LENGTH] * [LENGTH])) * [LENGTH]

Whatever you use as the unit for LENGTH, it cancels out, i.e. only the proportion is kept.

Ex:

[PIXEL] = (([MilliMeter] x [PIXEL]) / ([MilliMeter] * [MilliMeter])) * [MilliMeter]
        = (([Meter/1000] x [PIXEL]) / ([Meter/1000] * [Meter/1000])) * [Meter/1000]
        = (1/1000 * 1/1000) / (1/1000 * 1/1000) * (([Meter] x [PIXEL]) / ([Meter] * [Meter])) * [Meter]
        = (([Meter] x [PIXEL]) / ([Meter] * [Meter])) * [Meter]

Back to my explanations on your other thread:

If we use those notations to express b_x:

b_x = (d_x * s_x) / (d_z * r_x) * r_z
    = (d_x * w) / (d_z * 2 * f * tan(α)) * f
    = (d_x * w) / (d_z * 2 * tan(α)) // with w in px

Whether you use (d_x, d_y, d_z) = (X, Y, Z) or (d_x, d_y, d_z) = (1000*X, 1000*Y, 1000*Z), the ratio d_x / d_z won't change.
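
A quick numeric check of that claim (just a sketch; the screen width, half field of view and coordinates below are made up):

import math

def project_x(d, w_px, alpha):
    """b_x = (d_x * w) / (d_z * 2 * tan(alpha)), with w in pixels."""
    dx, _, dz = d
    return (dx * w_px) / (dz * 2 * math.tan(alpha))

w_px = 640                           # screen width in pixels
alpha = math.radians(30)             # half horizontal field of view

d_m  = (10.0, 10.0, 50.0)            # relative position in meters
d_mm = tuple(v * 1000 for v in d_m)  # the same position in millimeters

print(project_x(d_m, w_px, alpha))   # ~110.85
print(project_x(d_mm, w_px, alpha))  # identical: only the ratio d_x / d_z matters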


Now, as for the reasons behind your problem: you should check whether you apply the correct unit to the position of your camera / its distance to the scene as well. Also check your α, or the unit of the focal length, depending on which one you use.

I think the latter suggestion is the most likely. It is easy to forget to also apply the right unit to the characteristics of your camera.
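
For instance (a small sketch with made-up numbers): α follows from the sensor width and the focal length, and those two must be expressed in the same unit:

import math

def half_fov(sensor_width, focal_length):
    """Half horizontal field of view; both arguments must share the same unit."""
    return math.atan((sensor_width / 2) / focal_length)

# a 36 mm wide sensor with an 18 mm focal length gives alpha = 45 degrees
print(math.degrees(half_fov(36.0, 18.0)))    # 45.0
# mixing units (meters for the sensor, millimeters for the focal length) goes unnoticed
print(math.degrees(half_fov(0.036, 18.0)))   # ~0.057 degrees: the kind of error to look for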

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow