Question

This is my camera class

public class Camera
{
    public Matrix View { get; private set; }
    public Matrix Projection { get; private set; }
    public Viewport Viewport { get; private set; }

    public Camera(Viewport viewport, Vector3 position, Vector3 lookAt)
    {
        this.Viewport = viewport;
        this.Update(position, lookAt);
    }

    public void Update(Vector3 position, Vector3 lookAt)
    {
        this.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, this.Viewport.AspectRatio, 1, 500);
        this.View = Matrix.CreateLookAt(position, lookAt, Vector3.Up);
    }
}

I have created a Camera for the left eye and right eye. Everything is working, and the Oculus is displaying a separate image in each eye. The problem is that the view through the Oculus is blurry, and my brain doesn't "merge" the two views into one image. I am trying to offset the image in the right eye very slightly, but I can't find the correct positioning for the two eyes. Can anyone help?

Was it helpful?

Solution

The double vision effect of the two images not merging doesn't have anything to do with the modelview matrix. The separation of the modelview matrix makes the two images slightly different so that they give a sense of depth, but without the separation, the view through the Oculus should ideally be very much like staring at a non-3D monitor.

The reason you're seeing double is that you're centering the two images in the middle of each viewport (I render each eye view individually, so I have one viewport covering the left eye and another covering the right eye). If the view of what is directly ahead of the player is centered in each eye's viewport, the viewer will always see double. This is because each lens is not centered over the middle of its half of the display panel; instead, the lenses are offset toward the center.

In order to properly display content on the Rift you have to account for this offset. There are a number of ways to do this. The best way is to alter your projection matrix so that the center of the frustum is no longer looking directly ahead, but is instead off to one side. This is the method used in the SDK samples.
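To make the off-center idea concrete, here is the frustum math in a short Python sketch (the function names are mine, not from any SDK). It builds an asymmetric perspective frustum whose horizontal center is shifted sideways; in XNA you would feed the same left/right/bottom/top values to `Matrix.CreatePerspectiveOffCenter` instead of `Matrix.CreatePerspectiveFieldOfView`.

```python
import math

def perspective_off_center(left, right, bottom, top, near, far):
    # Column-vector, OpenGL-style off-center perspective matrix
    # (a 4x4 as nested row-major lists).
    return [
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ]

def eye_projection(fov_y, aspect, near, far, center_shift):
    # Asymmetric frustum whose horizontal center is shifted by
    # `center_shift` (in near-plane units, positive = shifted right).
    # With center_shift = 0 this reduces to an ordinary symmetric frustum.
    top = near * math.tan(fov_y / 2)
    half_w = top * aspect
    left = -half_w + center_shift
    right = half_w + center_shift
    return perspective_off_center(left, right, -top, top, near, far)
```

With a zero shift the [0][2] entry is zero (symmetric frustum); a nonzero shift moves the frustum center off-axis, which is exactly the "no longer looking directly ahead" effect described above.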

Another method is to leave the projection matrix alone and instead push the rendered view for each eye inward, toward the center of the screen. The reason this isn't the best approach is that it crops the interior edges of each image as they get pushed outside the viewport. That isn't really a visual problem, but it is a performance one: you've already spent part of your performance budget rendering those now-cropped pixels. It also means that part of the screen on the outer edges of the display won't be rendered at all, which reduces the overall field of view of the rendered scene.

OTHER TIPS

I take it you have your distortion shader in place and working, because without it you won't see a properly converging image in your Oculus Rift.

You need to determine what real-world measurement one of your game world units corresponds to. That is something only you, as the author of your graphics engine, can tell. Game world objects can give clues here. Let's say your game has cars, and one of them is 5.0 game world units long. That would indicate that 1 game world unit corresponds to 1 real-world meter, since a length of 5 m is a plausible value for a (bigger) car.

What you now have to do is offset the left and right views by the player's (or your own) IPD (interpupillary distance), half of it per eye. As outlined in the SDK documentation, the human IPD can be anywhere between 54 mm and 72 mm, with the most frequent value at 62 mm. If 1.0 world units ~ 1 meter, then 1 mm ~ 1.0 / 1000.0 = 0.001 game world units. So if your IPD is 62 mm, you need to shift each eye's view sideways by half that: vec3 (0.031, 0.0, 0.0) for the left eye and vec3 (-0.031, 0.0, 0.0) for the right eye.
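A sketch of that camera-separation step in plain Python vector math (the helper names are mine), using the conventional half-IPD offset per eye. In XNA you would apply the same offsets to the position and lookAt you pass to `Matrix.CreateLookAt`:

```python
def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def stereo_eyes(position, look_at, up, ipd):
    # Offset the camera sideways along its "right" vector by +/- half
    # the IPD. The look-at target is shifted by the same amount so the
    # two eye views stay parallel rather than converging (toeing in).
    forward = normalize(sub(look_at, position))
    right = normalize(cross(forward, up))
    half = ipd / 2.0
    def shifted(sign):
        off = [right[i] * half * sign for i in range(3)]
        return ([position[i] + off[i] for i in range(3)],
                [look_at[i] + off[i] for i in range(3)])
    return shifted(-1), shifted(+1)   # (left eye, right eye)
```

For a camera at the origin looking down -Z with a 62 mm IPD (and 1 unit = 1 m), this places the left eye at x = -0.031 and the right eye at x = +0.031.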

Note that you also need to apply a shift to your projection matrix. You do that by multiplying your projection matrix with an offset matrix whose translation is the projection offset delivered by the Rift SDK. That is explained in the SDK documentation as well. Make sure to multiply the matrices in the right order. ;)
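A sketch of that multiplication in Python. The display numbers below are illustrative DK1-era values; the real ones come from the SDK's HMDInfo for your device, and the offset formula follows the DK1 SDK documentation:

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

# Illustrative DK1-style display values (meters). Take the real numbers
# from the SDK's HMDInfo for your device:
h_screen_size   = 0.14976   # physical width of the display panel
lens_separation = 0.0635    # distance between the two lens centers

# How far each eye's projection center sits from the middle of its half
# of the screen, as a fraction of the half-viewport width:
projection_center_offset = 1.0 - 2.0 * lens_separation / h_screen_size

# Column-vector convention: the shift multiplies on the left, i.e. it is
# applied after the projection, in clip space. With XNA's row-vector
# convention the multiplication order flips.
# `projection` stands for your per-eye perspective matrix; an identity
# placeholder keeps this sketch self-contained.
projection = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
left_proj  = matmul(translation( projection_center_offset, 0, 0), projection)
right_proj = matmul(translation(-projection_center_offset, 0, 0), projection)
```

Note that the left and right eyes get opposite signs, shifting each projection center toward the nose, and that with these example values the offset comes out to roughly 0.15 of the half-viewport width.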

Here is a working tutorial with sample code and a sample XNA 4.0 project (in the comments):

XNA Oculus Rift Tutorial

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow