Question

I'm starting with DirectX (and SharpDX, therefore programming only in C#/HLSL) and am trying to build my own camera class. It should be rotatable and allow forward and backward movement as well as "sideways" movement (the classic first-person movement often mapped to A and D, plus up and down in my case). To make debugging easier, model and world space are the same in my case, perspective projection is not yet implemented, and neither is rotating the camera; my camera is supposed to look along the positive Z-axis (into the screen). My test model is a simple quad with 1.f width and height, z = 0, centered on the screen.
For ease of use I found DirectX's Matrix.LookAtLH() and am using it to build my matrix for the transformation from world to view coordinates, based on the position of my camera in world coordinates, an up vector (for now - without rotation - always the positive Y-axis) and a target point in world coordinates.
My (vertex) shader uses a simple multiplication with this matrix:
output.position = mul(position, worldToView);
The LookAt-Matrix is calculated like this:
Matrix.LookAtLH(vec3(0, 0, -1), vec3(0, 0, 0.5f), vec3(0, 1, 0))
resulting in this picture:
Camera at (0, 0, -1) looking at (0, 0, 0.5f)
Now I want to move my camera to the right, in this case by adding 1.f to its X-coordinate. My expected result is the same quad as before, just moved a little to the left.
I construct a new LookAt matrix, moving the eye coordinates and the target coordinates by the same vector:
Matrix.LookAtLH(vec3(1, 0, -1), vec3(1, 0, 0.5f), vec3(0, 1, 0))
Resulting in this:
Camera at (1, 0, -1) looking at (1, 0, 0.5f)

This gets more extreme the more I move the camera, but the center of the quad always stays at the center of the screen. Is this related to a misconception on my part about Matrix.LookAtLH?

Solution

When you are using D3DX functions you have to transpose your matrices before sending them to your shaders.
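
For example, with SharpDX and Direct3D 11 this can look like the following minimal sketch (the device context and constant buffer are assumed to come from your usual setup code; the essential call is Matrix.Transpose):

using SharpDX;
using SharpDX.Direct3D11;

static void UploadViewMatrix(DeviceContext context, Buffer constantBuffer,
                             Vector3 eye, Vector3 target)
{
    // Build the view matrix exactly as in the question.
    Matrix worldToView = Matrix.LookAtLH(eye, target, Vector3.UnitY);

    // SharpDX/D3DX matrices are stored row-major, while HLSL constant buffers
    // default to column-major packing, so transpose before uploading.
    Matrix shaderMatrix = Matrix.Transpose(worldToView);
    context.UpdateSubresource(ref shaderMatrix, constantBuffer);
}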

A more in-depth explanation, quoted from here:

In linear algebra, vectors and matrices are multiplied using the standard matrix multiplication algorithm. Thus there are a few rules concerning the order of operations and "shape" of the matrices involved. Mathematicians usually treat vectors as matrices containing a single column of elements, with a translation multiplication looking something like this:

[ 1, 0, 0, tx]   [ x]
[ 0, 1, 0, ty] * [ y]
[ 0, 0, 1, tz]   [ z]
[ 0, 0, 0,  1]   [ 1]

First note that matrix multiplication produces a result of a specific row/column configuration according to this simple rule:

AxB * BxC = AxC.

In other words, a matrix with A rows and B columns multiplied by a matrix with B rows and C columns produces a matrix with A rows and C columns. For the multiplication to be defined at all, the inner dimension B must match on both sides. In this case, we have 4x4 * 4x1, which produces a 4x1, i.e. another column vector. If we changed the order of multiplication, it would be 4x1 * 4x4, which would be illegal.

However, computer scientists often treat vectors as a matrix with a single row. There are several reasons for this, often because a single row represents a single linear chunk of memory, i.e. a one-dimensional array, since arrays are typically addressed as array[row][column]. In order to avoid using two-dimensional arrays in code, people simply use "row vectors" instead. Thus, in order to achieve the desired result using matrix multiplication, we swap the order to be 1x4 * 4x4 = 1x4, or vector * matrix:

[ x, y, z, 1] * [  1,  0,  0, 0]
                [  0,  1,  0, 0]
                [  0,  0,  1, 0]
                [ tx, ty, tz, 1]

Notice how the tx, ty, tz elements of the translation matrix had to be moved from the last column to the last row in order to preserve the proper result of the multiplication: the matrix has been transposed.
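
You can see this row-vector convention directly in SharpDX, the library used in the question. A small illustrative sketch (assuming the SharpDX.Mathematics types Matrix.Translation and Vector4.Transform):

using SharpDX;

// Matrix.Translation places tx, ty, tz in the bottom row, and
// Vector4.Transform multiplies as "row vector * matrix".
Matrix translation = Matrix.Translation(2, 3, 4);
Vector4 point = new Vector4(1, 1, 1, 1);
Vector4 moved = Vector4.Transform(point, translation);
// moved is (3, 4, 5, 1): the original point shifted by (2, 3, 4).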

When using column vectors, the typical transform order of operations is P * V * W * v, because the column vector must come last to produce the proper result. Remember, matrix multiplication is associative, not commutative, so in order to achieve the appropriate result of a vector transformed by the world matrix, then transformed into view space, then transformed into homogeneous screen space, we must multiply in that order. This gives us (using associativity) P * (V * (W * v)), thus working from the inner parentheses to the outer ones: world transformation first, view next, projection last.

If we use row vectors, then the multiplication is as follows: v * W * V * P. Using associativity, we realize it is simply the same order of operations: ((v * W) * V) * P. Or world first, then view, then projection.

Both forms of multiplication are equally valid, and the DX library chooses to use the latter because it matches memory layout patterns, and it allows you to read your transformation order from left to right.
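
With SharpDX that left-to-right order is just the * operator on the matrices. A short sketch (the projection parameters here are arbitrary, since the question does not use a projection yet):

using System;
using SharpDX;

Matrix world = Matrix.Identity;  // model space == world space, as in the question
Matrix view  = Matrix.LookAtLH(new Vector3(0, 0, -1), new Vector3(0, 0, 0.5f), Vector3.UnitY);
Matrix proj  = Matrix.PerspectiveFovLH((float)Math.PI / 4, 16f / 9f, 0.1f, 100f);

// Reads left to right: world, then view, then projection (v * W * V * P).
Matrix worldViewProj = world * view * proj;
// As noted above, transpose worldViewProj before writing it into the constant buffer.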

HLSL supports BOTH orders of operations. The "*" operator performs a simple element-by-element scaling; it does not perform matrix multiplication. That is done with the mul() intrinsic. If you pass a 4-element vector as the first parameter to mul(), it is treated as a row vector, so you must supply matrices that have been multiplied in the proper order and in the proper row/column format. This is the default behavior when passing in matrices from the DX libraries via the DX effect parameters. If you supply the 4-element vector as the second parameter to mul(), it is treated as a column vector, and you must provide properly formed and multiplied matrices for column vectors.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow