I'll try to give an overview of the algorithm used to solve this kind of problem. First, we need to know the physical characteristics of the camera, that is, the focal length and the physical size of the images it takes, together with the size in pixels. By the "real" size of the image I mean the physical size of the sensor (or, perhaps easier to imagine, the size of the negative of a classical film camera). Example values for a typical medium format camera for aerial mapping would be a 50mm focal length and 9000*6800 pixels with 6 micron pixel size, giving an image size of ~54x41mm.
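To make those numbers concrete, the physical sensor size follows directly from the pixel count and the pixel pitch (the values below are the example figures from above):

```python
# Physical sensor size from pixel count and pixel pitch.
# Example values: 9000 x 6800 pixels at 6 micron pitch.
pixels_x, pixels_y = 9000, 6800
pixel_size_m = 6e-6  # 6 microns

sensor_width_m = pixels_x * pixel_size_m   # ~0.054 m
sensor_height_m = pixels_y * pixel_size_m  # ~0.0408 m

print(f"{sensor_width_m * 1000:.1f} x {sensor_height_m * 1000:.1f} mm")  # → 54.0 x 40.8 mm
```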
The algorithm to compute the position of one pixel on the ground is (adapted to use an LSR system, one might do it with geographic coordinates as well):
public void ImageToGround(Camera sensor, double posx, double posy, double posz,
                          double dDeltaX, double dDeltaY,
                          Matrix4D rotationMatrixItg,
                          double groundheight, out double resultx, out double resulty)
{
    // The rotation matrix is right-handed, with x pointing in flight direction,
    // y to the right and z down.
    // The image coordinate system has x increasing to the right and y to the
    // bottom (y = 0 is the front in flight direction).
    Vector3D imagePointRelativeToFocalPoint = new Vector3D(
        dDeltaX,
        dDeltaY,
        -sensor.MetricFocalLength);
    // Transform from the image to the camera coordinate system and rotate.
    // The rotation matrix contains the transformation from the image
    // coordinate system to the ground coordinate system.
    Vector3D imagePointRotated = rotationMatrixItg * imagePointRelativeToFocalPoint;
    // Create a horizontal plane at groundheight, pointing upwards (z still points down).
    Plane plane = new Plane(new Vector3D(0, 0, -1), new Vector3D(0, 0, -groundheight));
    // And a ray, starting at the rotated image point shifted by the camera position
    // (posz is the height above sea level, negated because z points down).
    // Its direction is opposite to the vector above, i.e. from the image point
    // through the focal point towards the ground.
    Ray start = new Ray(
        new Vector3D(posx + imagePointRotated.X,
                     posy + imagePointRotated.Y,
                     imagePointRotated.Z - posz),
        -(new Vector3D(imagePointRotated.X, imagePointRotated.Y, imagePointRotated.Z)));
    // Find the point where the ray intersects the plane.
    IntersectionPair p = start.Intersects(plane);
    if (p.NumIntersections < 1)
    {
        resultx = 0;
        resulty = 0;
        return;
    }
    resultx = p.Intersection1.X;
    resulty = p.Intersection1.Y;
}
with:

- posx, posy, posz: position of the image center (posz is the height above sea level)
- dDeltaX, dDeltaY: position (in meters) of the pixel on the focal plane
- rotationMatrixItg: image-to-ground rotation matrix, created from yaw, pitch and roll of the image
- groundheight: elevation of the ground
- resultx, resulty: resulting position on the ground

I have simplified the algorithm, so you might need to adjust it to meet your needs.
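For reference, here is a minimal, self-contained Python sketch of the same ray/plane intersection for flat ground. The yaw-pitch-roll convention in `rotation_itg` (right-handed, x forward, y right, z down, applied roll-pitch-yaw) is an assumption on my part and must be matched to however your attitude data is actually defined:

```python
import math

def rotation_itg(yaw, pitch, roll):
    """Image-to-ground rotation from yaw, pitch, roll in radians.
    Assumed convention: right-handed, x forward, y right, z down,
    R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def image_to_ground(pos, delta_x, delta_y, focal_length, rot, ground_height):
    """Project one image point onto a horizontal plane at ground_height.
    pos: camera position (x, y, height above sea level); z axis points down.
    delta_x, delta_y: pixel position on the focal plane, in meters.
    Returns (x, y) on the ground, or None if the ray misses the plane."""
    # Image point relative to the focal point (z down, hence -focal_length).
    v = (delta_x, delta_y, -focal_length)
    # Rotate into the ground coordinate system.
    d = tuple(sum(rot[i][j] * v[j] for j in range(3)) for i in range(3))
    # Ray from the rotated image point, direction opposite to the rotated
    # vector (from the image point through the focal point to the ground).
    start = (pos[0] + d[0], pos[1] + d[1], d[2] - pos[2])
    direction = (-d[0], -d[1], -d[2])
    plane_z = -ground_height  # z points down
    if abs(direction[2]) < 1e-12:
        return None  # ray is parallel to the ground plane
    t = (plane_z - start[2]) / direction[2]
    if t < 0:
        return None  # intersection would be behind the camera
    return (start[0] + t * direction[0], start[1] + t * direction[1])

# A nadir camera at 1000 m: a pixel 1 cm right of the image center with a
# 50 mm lens lands roughly 200 m from the footprint center (on the opposite
# side, because the ray passes through the focal point).
print(image_to_ground((0.0, 0.0, 1000.0), 0.01, 0.0, 0.05,
                      rotation_itg(0.0, 0.0, 0.0), 0.0))
```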
The problem gets more complex when the terrain is not flat. If the whole image needs to be projected to the ground, one usually goes the inverse way (ground to image), because that is easier to interpolate and can be done in parallel.
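The inverse direction can be sketched like this (same assumed z-down conventions as above; the function name and parameters are illustrative, not a fixed API). For each ground cell of the output you would look up its elevation, project it into the image and interpolate the pixel value there:

```python
def ground_to_image(pos, gx, gy, ground_height, focal_length, rot):
    """Project a ground point back into the image (inverse direction).
    pos: camera position (x, y, height above sea level); z axis points down.
    rot: image-to-ground rotation matrix; its transpose maps ground to image.
    Returns (dx, dy) in meters on the focal plane, or None if the point
    is not in front of the camera."""
    # Vector from the focal point to the ground point, in ground coordinates
    # (camera z = -pos[2], ground z = -ground_height, because z points down).
    g = (gx - pos[0], gy - pos[1], pos[2] - ground_height)
    # Rotate into the image system with the transposed (= inverse) matrix.
    c = tuple(sum(rot[j][i] * g[j] for j in range(3)) for i in range(3))
    if c[2] <= 0.0:
        return None  # point is not below/in front of the camera
    # Scale the ray so it hits the image plane at z = -focal_length.
    s = -focal_length / c[2]
    return (s * c[0], s * c[1])

# Round trip of the forward example: the ground point 200 m to the "left"
# maps back to roughly 1 cm right of the image center.
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(ground_to_image((0.0, 0.0, 1000.0), -200.0, 0.0, 0.0, 0.05, identity))
```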
I don't know exactly what you mean by "virtual" images; those, too, are created by a projection, so there exist theoretical image parameters that can be used in the same way.