Question

I am working on a 3D game that is being ported to Android, and I want to handle touch events in the game's 3D world. I need a point in 3D space, right on the near clipping plane, but all I can get from the Android display is a 2D coordinate. Is there any way to map these (x, y) coordinates to (x, y, z) coordinates in 3D space?

EDIT

Well, I am working on a racing game, and I want to insert items on the course depending on where I click. I have this function:

void racing_mouse_cb(int button, int state, int x, int y) { /* (x, y) are display coordinates */
    set_ill_fish(get_player_data(local_player())->view);
}

but for now I am inserting items in front of the player at a fixed distance:

void set_ill_fish(view_t view) {
    item_locs[num_items].ray.pt.x = view.plyr_pos.x;
    item_locs[num_items].ray.pt.z = view.plyr_pos.z - 5;
    item_locs[num_items].ray.pt.y = find_y_coord(view.plyr_pos.x,
            view.plyr_pos.z - 5) + 0.2;
    item_locs[num_items].ray.vec = make_vector(0, 1, 0);
    /* ... */
}

But I am clueless about how to derive these coordinates from a point on the display surface.


Solution

To remap 2D display coordinates (display_x, display_y) to 3D object coordinates (x, y, z) you need to know:

  1. the depth display_z of the pixel at (display_x, display_y) — for a point on the near clipping plane this is the near end of the depth range (0 with OpenGL's default glDepthRange)
  2. the transformation T that maps perspective-divided clip coordinates (clip_x / clip_w, clip_y / clip_w, clip_z / clip_w) to display coordinates (the viewport transformation)
  3. the transformation M that maps object coordinates to clip space coordinates (usually a combination of the modelview and projection matrices)

The display coordinates are computed as follows

M.transform(x, y, z, 1) --> (clip_x, clip_y, clip_z, clip_w)

T.transform(clip_x / clip_w, clip_y / clip_w, clip_z / clip_w) --> (display_x, display_y, display_z)

M.transform is an invertible matrix multiplication and T.transform is any invertible transformation.

You can recover (x,y,z) from (display_x, display_y, display_z) as follows

T.inverse_transform(display_x, display_y, display_z) --> (a, b, c)

M.inverse_transform(a, b, c, 1) --> (X, Y, Z, W)

(X/W, Y/W, Z/W) --> (x, y, z)

The following gives intuition on why the above computation leads to the right solution

T.inverse_transform(display_x, display_y, display_z) --> (clip_x / clip_w, clip_y / clip_w, clip_z / clip_w)

(clip_x / clip_w, clip_y / clip_w, clip_z / clip_w, clip_w / clip_w) == (clip_x, clip_y, clip_z, clip_w) / clip_w

M.inverse_transform((clip_x, clip_y, clip_z, clip_w) / clip_w) == M.inverse_transform(clip_x, clip_y, clip_z, clip_w) / clip_w

M.inverse_transform(clip_x, clip_y, clip_z, clip_w) / clip_w --> (x, y, z, 1) / clip_w

(x, y, z, 1) / clip_w == (x / clip_w, y / clip_w, z / clip_w, 1 / clip_w)

(x / clip_w, y / clip_w, z / clip_w, 1 / clip_w) == (X, Y, Z, W)

The above used the following matrix (M) vector (v) scalar (a == 1 / clip_w) property:

M * (a * v) == a * (M * v)
Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow