Question

I'm trying to create a graphics engine to display point cloud data (in first person, for now). My idea is to precalculate individual views from different points in the space being displayed and map each of them onto a sphere. Is it possible to interpolate that data to determine the view from any point in the space?

I apologise for my English and my poor explanation, but I can't figure out another way to explain it. If you don't understand my question, I'll be happy to reformulate it.

EDIT:

I'll try to explain it with an example:

Image 1: First viewpoint

Image 2: Second viewpoint

In these images we can see two different views of the pumpkin (imagine that in both cases we have a sphere map of the full 360 view). In the first case we have a distant view of the pumpkin and we can see its surroundings; also imagine that there is a chest right behind the character (we'd have a detailed view of the chest if we looked behind us).

So, first view: the surroundings and a low-detail image of the pumpkin, plus a detailed view of the chest but without its surroundings.

In the second view we have the exact opposite: a detailed view of the pumpkin and a low-detail, general view of the chest (which is still behind us).

The idea would be to combine the data from both views to calculate every view between them. So moving towards the pumpkin would mean stretching the points of the first image and filling the gaps with data from the second one (forget all the other elements, just the pumpkin). At the same time, we would compress the image of the chest and fill in its surroundings with data from the general view in the second one.

What I would like is an algorithm that dictates that stretching, compressing and combining of pixels (not only forwards and backwards, but also diagonally, and using more than two sphere maps). I know it's fairly complicated; I hope I expressed myself well enough this time.
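
To make it concrete, here is roughly the operation I have in mind, as a minimal sketch. It assumes each stored point of a view also keeps its distance from the capture position (which the point cloud gives me anyway), that the 360 view is laid out as an equirectangular image, and the function names are just placeholders:

    import numpy as np

    def panorama_to_points(rgb, depth, cam_pos):
        """Turn one stored 360 view (equirectangular RGB + per-pixel distance)
        back into world-space coloured points."""
        H, W, _ = rgb.shape
        lon = (np.arange(W) / W - 0.5) * 2 * np.pi        # -pi .. pi
        lat = (np.arange(H) / H - 0.5) * np.pi            # -pi/2 .. pi/2
        lon, lat = np.meshgrid(lon, lat)
        dirs = np.stack([np.cos(lat) * np.sin(lon),
                         np.sin(lat),
                         np.cos(lat) * np.cos(lon)], axis=-1)
        pts = cam_pos + dirs * depth[..., None]
        return pts.reshape(-1, 3), rgb.reshape(-1, 3)

    def reproject(points, colors, new_pos, H, W):
        """Splat the points into an equirectangular image seen from new_pos,
        keeping only the nearest point per pixel (a simple z-buffer)."""
        rel = points - new_pos
        dist = np.linalg.norm(rel, axis=-1)
        lon = np.arctan2(rel[:, 0], rel[:, 2])
        lat = np.arcsin(np.clip(rel[:, 1] / np.maximum(dist, 1e-9), -1.0, 1.0))
        u = np.clip(((lon / (2 * np.pi) + 0.5) * W).astype(int), 0, W - 1)
        v = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)

        order = np.argsort(dist)                      # nearest points first
        pix = (v * W + u)[order]
        keep = np.unique(pix, return_index=True)[1]   # first (= nearest) hit per pixel

        img = np.zeros((H * W, 3), dtype=colors.dtype)
        zbuf = np.full(H * W, np.inf)
        img[pix[keep]] = colors[order][keep]
        zbuf[pix[keep]] = dist[order][keep]
        return img.reshape(H, W, 3), zbuf.reshape(H, W)

    def combine_views(view_a, view_b, new_pos, H, W):
        """view_a and view_b are (rgb, depth, capture_position) tuples.
        Wherever one view has no sample, or only a farther one, the other fills it."""
        img_a, z_a = reproject(*panorama_to_points(*view_a), new_pos, H, W)
        img_b, z_b = reproject(*panorama_to_points(*view_b), new_pos, H, W)
        take_b = z_b < z_a
        return np.where(take_b[..., None], img_b, img_a)

The "stretching" is what happens when the reprojected points of the nearer view spread apart on screen, and the gaps they leave are filled by whichever other view still has a closer sample for that pixel.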

EDIT:

(I'm using the word "view" a lot and I think that's part of the problem, so here is the definition of what I mean by "view": a matrix of coloured points, where each point corresponds to a pixel on the screen. The screen only displays part of the matrix at a time (the matrix would be the 360 sphere and the display a fraction of that sphere). A view is the matrix of all the possible points you can see by rotating the camera without moving its position.)
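
In code terms, the screen showing a fraction of that matrix works roughly like this (a minimal sketch, assuming the matrix is stored as an equirectangular image; the resolution, field of view and function name are arbitrary choices of mine):

    import numpy as np

    def render_from_view(view, yaw, pitch, fov_deg=90.0, out_w=640, out_h=480):
        """Display a fraction of a 360 'view' matrix for a given camera rotation.
        view is an H x W x 3 equirectangular image captured at one fixed position."""
        H, W, _ = view.shape
        f = (out_w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)   # focal length in pixels

        # One ray direction per screen pixel, camera looking down +z
        xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                             np.arange(out_h) - out_h / 2.0)
        dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
        dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

        # Apply the camera rotation (yaw around y, then pitch around x)
        cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
        dirs = dirs @ (Ry @ Rx).T

        # Direction -> longitude/latitude -> cell of the stored matrix
        lon = np.arctan2(dirs[..., 0], dirs[..., 2])
        lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))
        u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
        v = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
        return view[v, u]

Rotating the camera only changes yaw and pitch, so one stored matrix answers every rotation from that position; the problem is what happens when the position itself changes.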

Okay, it seems that people still don't understand the concept. The idea is to display environments in as much detail as possible by "precooking" the maximum amount of data before displaying it in real time. I'll deal with the preprocessing and compression of that data myself for now; I'm not asking about that. The most "precooked" model would be to store the 360 view at every point in the displayed space (if the character moves at, for example, 50 points per frame, then store a view every 50 points; the point is to precalculate the lighting and shading and to filter out the points that won't be seen, so they aren't processed for nothing). Basically, to calculate every possible screenshot (in a totally static environment). But of course, that's just ridiculous: even if you could compress that data a lot, it would still be too much.
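
To give an idea of the scale of that (purely illustrative numbers, not measurements of my data):

    # Rough storage estimate for the "store everything" model
    positions = 1000 * 1000        # e.g. a 1000 x 1000 grid, one capture every 50 points
    pano_pixels = 4096 * 2048      # one 360-degree sphere map
    bytes_per_pixel = 3 + 2        # RGB colour plus a 16-bit depth value
    total_bytes = positions * pano_pixels * bytes_per_pixel
    print(total_bytes / 1e12, "TB")   # ~42 TB, before any compression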

The alternative is to store only some strategic views, less frequently. If we stored all the possible views, most of the points would be repeated from frame to frame, and the change in the on-screen position of the points is also mathematically regular. What I'm asking for is exactly that: an algorithm to determine the position of each point in the view based on a few strategic viewpoints; how to use and combine data from strategic views at different positions to calculate the view at any place.
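
For the "strategic viewpoints" part, what I picture is picking the stored views closest to the current camera position and weighting them by distance before merging their reprojected points, something like this sketch (the parameters are made up):

    import numpy as np

    def nearest_views(cam_pos, view_positions, k=2):
        """Pick the k precalculated viewpoints closest to the camera and give
        each an inverse-distance weight, so nearer (more detailed) views dominate."""
        d = np.linalg.norm(view_positions - cam_pos, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / np.maximum(d[idx], 1e-6)
        return idx, w / w.sum()

Those weights would then bias the merge of the reprojected views from the earlier sketch, instead of a hard "nearest sample wins" rule.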

No correct solution

Licensed under: CC-BY-SA with attribution