Question

When rotating a scene in a 3D modeling interface, which parts of the task is the CPU responsible for, and which parts does the GPU take on? (moving mesh vertices, shading, keeping track of UV coords - perhaps offsetting them - lighting the triangles, and rendering transparency correctly)

What rendering mode is normally used by such modeling programs (realtime) - immediate or retained?


Solution

First and foremost, the GPU is responsible for putting points, lines and triangles on the screen. However, this involves a certain amount of calculation.

The usual pipeline is that for each vertex (which is a combination of attributes that usually includes, but is not limited to, position, normal, texture coordinates and so on), the vertex position is transformed from model-local space into normalized device coordinates. In most implementations this is a three-stage process:

  1. transformation from model-local space into view (eye) space – eye-space coordinates are later reused for things like illumination calculations
  2. transformation from view space to clip space, also called projection; this determines which part of the view space will later be visible in the viewport, and it is also where the kind of projection (perspective or orthographic) is applied
  3. mapping into normalized device coordinates by coordinate homogenization, i.e. the perspective divide (this latter step is what actually creates the perspective if a perspective projection is used).

The above calculations are normally carried out by the GPU.
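
To make those three stages concrete, here is a minimal plain-C++ sketch of what the GPU effectively evaluates per vertex. The `Vec4`/`Mat4` types and the `toNDC` helper are made up for illustration (column-major matrices, column-vector convention), not any particular API:

```cpp
#include <array>
#include <cstdio>

struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<float, 16>;   // column-major 4x4 matrix

// Multiply a column-major 4x4 matrix with a column vector: p' = M * p
Vec4 mul(const Mat4& a, const Vec4& v) {
    return { a[0]*v.x + a[4]*v.y + a[8]*v.z  + a[12]*v.w,
             a[1]*v.x + a[5]*v.y + a[9]*v.z  + a[13]*v.w,
             a[2]*v.x + a[6]*v.y + a[10]*v.z + a[14]*v.w,
             a[3]*v.x + a[7]*v.y + a[11]*v.z + a[15]*v.w };
}

// Roughly what a vertex shader does for every vertex:
Vec4 toNDC(const Mat4& modelView, const Mat4& projection, Vec4 localPos) {
    Vec4 eye  = mul(modelView, localPos);   // 1. model-local space -> eye space
    Vec4 clip = mul(projection, eye);       // 2. eye space -> clip space (projection)
    return { clip.x / clip.w,               // 3. homogenization (perspective divide)
             clip.y / clip.w,               //    -> normalized device coordinates
             clip.z / clip.w, 1.0f };
}

int main() {
    Mat4 identity = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    Vec4 ndc = toNDC(identity, identity, {0.5f, 0.25f, -1.0f, 1.0f});
    std::printf("NDC: %f %f %f\n", ndc.x, ndc.y, ndc.z);
}
```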

When rotating a scene in a 3D modeling interface, which parts of the task is the CPU responsible for, and which parts does the GPU take on?

Well, that depends on what kind of rotation you mean. If you mean an alteration of the viewport, then nothing in the scene's input data is actually changed; the only thing that gets altered is a parameter used in the first transformation step. This parameter is normally a 4×4 matrix. When rotating the viewport, a new modelview transformation matrix is calculated on the CPU. This matrix is then passed to the GPU and the whole scene is redrawn.
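
A hedged sketch of that CPU-side work: the application only rebuilds the 4×4 matrix (here a simple rotation about the Y axis) and hands the 16 floats to the GPU before redrawing. The `rotationY` helper and the column-major layout are assumptions for illustration, not any specific program's code:

```cpp
#include <array>
#include <cmath>
#include <cstdio>

using Mat4 = std::array<float, 16>;   // column-major, column-vector convention

// CPU side: build a new modelview rotation (an orbit around the Y axis)
// from, say, the accumulated mouse-drag angle.
Mat4 rotationY(float angleRadians) {
    const float c = std::cos(angleRadians);
    const float s = std::sin(angleRadians);
    Mat4 m = { c, 0,-s, 0,
               0, 1, 0, 0,
               s, 0, c, 0,
               0, 0, 0, 1 };
    return m;
}

int main() {
    // These 16 floats would then be uploaded to the GPU (e.g. as a shader
    // uniform) and the unchanged scene geometry simply redrawn.
    Mat4 mv = rotationY(0.3f);
    std::printf("first column: %f %f %f %f\n", mv[0], mv[1], mv[2], mv[3]);
}
```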

If, however, a model is actually modified in the modeller, then those calculations are usually carried out on the CPU.
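
By contrast, an actual edit touches the vertex data itself on the CPU, after which the modified buffer has to be sent to the GPU again. A rough sketch with made-up structures (not any modeller's real internals):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical CPU-side mesh data, for illustration only.
struct Vertex { float px, py, pz;  float nx, ny, nz;  float u, v; };

struct Mesh {
    std::vector<Vertex> vertices;
    bool gpuBufferDirty = false;   // tells the renderer to re-upload the data
};

// Editing operation, e.g. dragging the selected vertices along Y, done on the CPU:
void translateSelectionY(Mesh& mesh, const std::vector<std::size_t>& selection, float dy) {
    for (std::size_t i : selection)
        mesh.vertices[i].py += dy;
    mesh.gpuBufferDirty = true;    // modified buffer gets copied to the GPU
}                                  // before the next redraw

int main() {
    Mesh mesh;
    mesh.vertices = { {0,0,0, 0,1,0, 0,0}, {1,0,0, 0,1,0, 1,0} };
    translateSelectionY(mesh, {0, 1}, 0.5f);   // drag both vertices up by 0.5
}
```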

(moving mesh vertices, shading, keeping track of UV coords - perhaps offsetting them - lighting the triangles, and rendering transparency correctly)

In an online (realtime) renderer this is normally done mostly by the GPU, but certain parts may be precalculated by the CPU.
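
For instance, the "lighting the triangles" part usually boils down to per-vertex or per-fragment math evaluated on the GPU. The following is only the Lambert diffuse term written in plain C++ to show the kind of calculation involved; real shaders are written in GLSL/HLSL and do far more than this sketch:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// Lambert diffuse factor, as evaluated per fragment (or per vertex) on the GPU,
// using the eye-space normal and light direction from the earlier stages.
float lambert(Vec3 normal, Vec3 lightDir) {
    float ndotl = dot(normalize(normal), normalize(lightDir));
    return ndotl > 0.0f ? ndotl : 0.0f;
}

int main() {
    float d = lambert({0, 1, 0}, {0.5f, 1.0f, 0.25f});
    std::printf("diffuse factor: %f\n", d);
}
```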

It's impossible to make a definitive statement, because how the workload is shared depends on the actual application.

OTHER TIPS

This question is really, I mean REALLY, vague. Generally, anything goes, because it really depends on the program and its makers. Older programs were mostly all CPU, since GPUs were either non-existent or too weak to handle massive scenes. Today, however, GPUs are powerful enough to handle massive scenes, so a program's creators can offer various solutions. It's usually an abstracted system where you have your data and your view, and they let you specify what you want within the editing viewport: realtime/immediate, preprocessed, high or low detail. Programs usually sacrifice accuracy for speed so you can edit with ease.

For example, 3ds Max uses rendering devices, viewport handlers and renderers. Renderers handle the production-quality output, but today they are not limited to the CPU, since they can take advantage of the GPU (think OpenCL or CUDA) while maintaining quality and lowering rendering times. Secondly, anyone can write plugins to implement a viewport renderer the way they want it, be it CPU, GPU or a mixed renderer. So, abstraction being common in modelling tools, the scene information is fed into a viewport renderer which is usually very similar to a game engine's renderer. If you think about it, because it's a tool with a UI and various other systems to handle, they try to offload the CPU as much as possible by doing as much rendering work as possible on the GPU; editing takes place in memory on the "general" data structure, and that is what gets displayed to you. I'm not sure, but I think they may also use the graphics API (be it OpenGL or DirectX) to do the picking (selection).

The rendering mode is, as I would describe it, "on-demand": it usually renders when it needs to. So in a scene where realtime preview is off, it only renders if you modify something or move the camera, and as soon as you do something that needs constant updating, it will do exactly that.
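
A hedged sketch of such an "on-demand" loop, using a simple dirty flag (the function names are placeholders, not a real windowing or scene API):

```cpp
#include <atomic>

std::atomic<bool> sceneDirty{true};          // start dirty so the first frame draws

void onUserEdit()   { sceneDirty = true; }   // vertex moved, material changed, ...
void onCameraMove() { sceneDirty = true; }   // viewport rotated / panned / zoomed

void redrawScene() { /* re-render the retained scene data on the GPU */ }

void frame() {
    if (!sceneDirty)
        return;          // nothing changed: skip the redraw entirely
    redrawScene();
    sceneDirty = false;
}

int main() {
    frame();             // first frame: scene is dirty, so it redraws
    frame();             // nothing changed since: redraw is skipped
    onCameraMove();      // viewport rotated
    frame();             // redraws again
}
```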

On top of all this there are hybrid methods, where users want production-like quality, which even today is difficult on GPUs, so they settle for a fast, watered-down version of a real renderer to get as close to production quality as possible. One simple example is 3ds Max's 'Realistic' viewport: it does ambient occlusion; it's not "realtime", but it is fast enough to actually be useful. In more advanced cases they build special extension cards to handle fast raytracing, to get fast, good-quality graphics. But even in these cases the main idea is the same: they store the editable data in a generic internal format and feed it into some sort of renderer that outputs something; not necessarily the same as what you would get from a very high quality offline renderer, but it still gives a good outline of what the result will look like.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow