Question

I am building an augmented reality application and I have the yaw, pitch, and roll for the camera. I want to start placing objects in the 3D environment. I want to make it so that when the user clicks, a 3D point pops up right where the camera is pointed (center of the 2D screen) and when the user moves, the point moves accordingly in 3D space. The camera does not change position, only orientation. Is there a proper way to recover the 3D location of this point? We can assume that all points are equidistant from the camera location.

I am able to accomplish this independently for two axes (OpenGL default orientation). This works for changes in the vertical axis:

x = -sin(pitch)
y = cos(pitch)
z = 0

This also works for changes in the horizontal axis:

x = 0
y = -sin(yaw)
z = cos(yaw)

I was thinking that I should just combine them into:

x = -sin(pitch)
y = sin(yaw) * cos(pitch) 
z = cos(yaw)

and that seems to be close, but not exactly correct. Any suggestions would be greatly appreciated!

Solution

It sounds like you just want to convert from a rotation vector (pitch, yaw, roll) to a rotation matrix. The conversion can be seen in the Wikipedia article on rotation matrices. The idea is that once you have constructed your matrix, transforming any point is simply:

final_pos = rot_mat*initial_pos

where final_pos and initial_pos are 3x1 vectors and rot_mat is a 3x3 matrix.
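As a rough sketch of that idea in Python with numpy: build the rotation matrix from the three Euler angles and apply it to the camera's initial forward vector. Note the axis assignments here (yaw about Y, pitch about X, roll about Z, composed as yaw·pitch·roll) are an assumption; Euler-angle conventions vary, so you may need to reorder the axes or multiplication to match your camera setup.

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """3x3 rotation matrix from Euler angles in radians.

    Assumed convention (adjust to match your camera):
    yaw about Y, pitch about X, roll about Z, composed as
    R = R_yaw @ R_pitch @ R_roll.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    R_yaw = np.array([[ cy, 0.0,  sy],
                      [0.0, 1.0, 0.0],
                      [-sy, 0.0,  cy]])
    R_pitch = np.array([[1.0, 0.0, 0.0],
                        [0.0,  cp, -sp],
                        [0.0,  sp,  cp]])
    R_roll = np.array([[ cr, -sr, 0.0],
                       [ sr,  cr, 0.0],
                       [0.0, 0.0, 1.0]])
    return R_yaw @ R_pitch @ R_roll

# The OpenGL default camera looks down -Z, so rotate that
# forward vector; the result is the unit direction the camera
# now points, which you can scale by your fixed distance.
initial_pos = np.array([0.0, 0.0, -1.0])
final_pos = rotation_matrix(0.3, 0.1, 0.0) @ initial_pos
```

Since the matrix is a pure rotation, the result stays unit length, which fits the stated assumption that all placed points are equidistant from the camera.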

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow