Question

I'm a bit rusty here.

I have a vector (camDirectionX, camDirectionY, camDirectionZ) that represents my camera's direction of view, and a point (camX, camY, camZ) that is my camera's position.

Then I have an object placed at (objectX, objectY, objectZ).

How can I calculate, from the camera's point of view, the azimuth and elevation of the object?


Solution

The first thing I would do, to simplify the problem, is transform the coordinate space so the camera is at (0, 0, 0) and pointing straight down one of the axes (so the direction is say (0, 0, 1)). Translating so the camera is at (0, 0, 0) is pretty trivial, so I won't go into that. Rotating so that the camera direction is (0, 0, 1) is a little trickier...

One way of doing it is to construct the full orthonormal basis of the camera, then stick that in a rotation matrix and apply it. The "orthonormal basis" of the camera is a fancy way of saying the three vectors that point forward, up, and right from the camera. They should all be at 90 degrees to each other (which is what the ortho bit means), and they should all be of length 1 (which is what the normal bit means).

You can get these vectors with a bit of cross-product trickery: the cross product of two vectors is perpendicular (at 90 degrees) to both.
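
In case it helps to see them written out, here's a minimal sketch of those two operations in plain Python (the helper names are mine; numpy provides equivalents as np.cross and np.linalg.norm):

import math

def cross(a, b):
    # perpendicular to both a and b (standard right-handed cross product)
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalise(v):
    # scale v to length 1
    length = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / length, v[1] / length, v[2] / length)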

To get the right-facing vector, we can just cross-product the camera direction vector with (0, 1, 0) (a vector pointing straight up). You'll need to normalise the vector you get out of the cross-product.

To get the up vector of the camera, cross the right-facing vector we just calculated with the camera direction vector (in that order, so the result points up rather than down). Assuming both input vectors are normalised, this shouldn't need normalising.

We now have the orthonormal basis of the camera. If we stick these vectors into the rows of a 3x3 matrix, we get a rotation matrix that will transform our coordinate space so the camera is pointing straight down one of the axes (which one depends on the order you stick the vectors in).
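
If it's easier to see as an actual matrix, here's a small numpy sketch of that step (the function name to_camera_space is mine):

import numpy as np

def to_camera_space(obj, right, up, fwd):
    # rows of the rotation matrix are the camera's basis vectors;
    # multiplying by it gives (dot(obj, right), dot(obj, up), dot(obj, fwd))
    R = np.array([right, up, fwd])
    return R @ obj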

It's now fairly easy to calculate the azimuth and elevation of the object.

To get the azimuth, just do an atan2 on the x/z coordinates of the object.

To get the elevation, project the object coordinates onto the x/z plane (just set the y coordinate to 0), then do:

acos(dot(normalise(object coordinates), normalise(projected coordinates)))

This will always give a positive angle -- you probably want to negate it if the object's y coordinate is less than 0.
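
If you'd rather skip the acos and the sign fix, an equivalent one-step formulation (my variation, not part of the original recipe) is to feed the height and the horizontal distance straight into atan2; it returns a signed elevation directly and behaves sensibly when the object is directly overhead:

import math

def elevation_of(obj):
    # signed angle between obj and the x/z plane
    return math.atan2(obj[1], math.hypot(obj[0], obj[2]))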

In Python (with numpy), the code for all of this will look something like:

import numpy as np

def normalise(v):
    return v / np.linalg.norm(v)

# inputs, as named in the question
fwd = np.array([camDirectionX, camDirectionY, camDirectionZ], dtype=float)
cam = np.array([camX, camY, camZ], dtype=float)
obj = np.array([objectX, objectY, objectZ], dtype=float)

# if fwd is already normalised you can skip this
fwd = normalise(fwd)

# translate so the camera is at (0, 0, 0)
obj -= cam

# calculate the orthonormal basis of the camera
right = normalise(np.cross(fwd, [0.0, 1.0, 0.0]))
up = np.cross(right, fwd)

# rotate so the camera is pointing straight down the z axis
# (this is essentially a matrix multiplication, with the basis
# vectors as the rows of the matrix)
obj = np.array([np.dot(obj, right), np.dot(obj, up), np.dot(obj, fwd)])

azimuth = np.arctan2(obj[0], obj[2])

# project onto the x/z plane by zeroing the y coordinate
proj = np.array([obj[0], 0.0, obj[2]])
elevation = np.arccos(np.dot(normalise(obj), normalise(proj)))
if obj[1] < 0:
    elevation = -elevation
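
As a quick sanity check: with the camera at the origin looking straight down the positive z axis and the object at (0, 1, 1), this gives azimuth = 0 and elevation = pi/4 (45 degrees), which is what you'd expect.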

One thing to watch out for: the cross product of your original camera direction with (0, 1, 0) will return a zero-length vector when the camera is facing straight up or straight down. To fully define the orientation of the camera, I've assumed it's always level (no roll), but that assumption doesn't mean anything when the camera faces straight up or down -- you need another rule for that case.
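
One common rule (a suggestion on my part, not part of the recipe above) is to fall back to a different helper vector whenever the camera direction is nearly parallel to (0, 1, 0):

import numpy as np

def camera_right(fwd, eps=1e-6):
    # assumes fwd is already normalised
    up_hint = np.array([0.0, 1.0, 0.0])
    if abs(np.dot(fwd, up_hint)) > 1.0 - eps:
        # fwd is (nearly) straight up or down; use the z axis instead
        up_hint = np.array([0.0, 0.0, 1.0])
    r = np.cross(fwd, up_hint)
    return r / np.linalg.norm(r)

Which fallback you pick is arbitrary; the point is just to pick one consistently so the basis is always well defined.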
