Question

I have designed a program which, basically, cuts a geometrical shape into many small triangles (in a "left canvas"), applies some simple mathematical transformation to the bunch of triangles, and redraws them in their new configuration. See screen capture below.

[screen capture: the shape on the left canvas cut into triangles, and the transformed triangles redrawn on the right]

In order to draw these triangles, I use QPainter::drawPolygon. Each triangle on the right corresponds to a triangle on the left, so I know what color I want to use to draw it.

So far, fine. Even if I draw many more triangles than this (when I use much smaller triangles to cut the shape), this is fast enough.

I've added a feature to my program: I can draw triangles extracted from a picture instead of plain triangles: see following screen capture.

[screen capture: the same triangles, each filled with the corresponding piece of a picture]

The problem is that the way I do this is too slow. Here's how I do it:

  1. I run through all the triangles
  2. For each triangle, I compute the coordinates of each pixel that will be displayed.
  3. For each one of these pixels, I compute the coordinates of the corresponding pixel on the picture (this is an easy mathematical operation), and I retrieve that pixel's color.
  4. I use QPainter::setPen(QColor) and QPainter::drawPoint(QPoint) to draw the pixel.
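(For reference, steps 2–3 boil down to a barycentric mapping between the two triangles. A minimal Qt-free sketch, with helper names of my own choosing:)

```cpp
#include <cmath>

struct Pt { double x, y; };

// Barycentric coordinates (u, v, w) of p with respect to triangle (a, b, c):
// p = u*a + v*b + w*c, with u + v + w = 1.
inline void barycentric(Pt p, Pt a, Pt b, Pt c,
                        double &u, double &v, double &w) {
    double d = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    u = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / d;
    v = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / d;
    w = 1.0 - u - v;
}

// Step 3: map a destination-canvas pixel back to its position in the source
// picture, by reusing the weights the pixel has inside the destination triangle.
inline Pt mapToSource(Pt p, const Pt dst[3], const Pt src[3]) {
    double u, v, w;
    barycentric(p, dst[0], dst[1], dst[2], u, v, w);
    return { u * src[0].x + v * src[1].x + w * src[2].x,
             u * src[0].y + v * src[1].y + w * src[2].y };
}
```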

I am new to Qt programming and I know nothing about graphics, so this is what I could come up with. The problem is that it is unacceptably slow: the paintEvent of each canvas takes about 0.15 s, compared to 0.01 s in the case of plain triangles.

I ran a profiler to try to understand what's going on, and I noticed that, in the canvas widget's paintEvent,

  1. 58% of the time is spent in QPainter::drawPoint
  2. 27% of the time is spent in QPainter::setPen

It seems that QPainter::drawPoint is far too complicated and slow: I just want it to print a pixel of a given color, that's it.

I may have found a solution to my problem: store a QImage (as a member variable of my canvas widget) representing everything I want the canvas to display, fill it pixel by pixel in my paintEvent, and then draw it all at once at the end with QPainter::drawImage. I have a hunch that this will be much faster. But before I rewrite my code all over again, I'd like to know whether that's really what I want to do.

I hope I didn't bore you to death! Many thanks in advance for your insights.

Solution

Non-OpenGL solution:

Use an RGB buffer for the destination image. Work through your first three steps as before. Once you have found a pixel's position and color, set it in this buffer. Then use

QImage::QImage ( uchar * data, int width, int height, Format format )

to construct the image from that buffer. This is close to the solution you proposed yourself, and it will be much faster than what you currently have.
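As a rough, Qt-free sketch of such a buffer (the RgbBuffer name and layout are mine): using one 32-bit pixel per point matches QImage::Format_RGB32 (0xffRRGGBB) and sidesteps the 32-bit scanline alignment that the QImage(uchar*, ...) constructor requires.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One 32-bit pixel per point, 0xffRRGGBB, rows packed top to bottom.
struct RgbBuffer {
    int width, height;
    std::vector<uint32_t> data;
    RgbBuffer(int w, int h)
        : width(w), height(h), data(std::size_t(w) * h, 0xff000000u) {}
    void setPixel(int x, int y, uint8_t r, uint8_t g, uint8_t b) {
        data[std::size_t(y) * width + x] =
            0xff000000u | (uint32_t(r) << 16) | (uint32_t(g) << 8) | b;
    }
};

// In paintEvent (Qt calls shown as comments, since this sketch is Qt-free):
//   RgbBuffer buf(w, h);
//   ... your steps 1-3 as before, but write with buf.setPixel(x, y, r, g, b) ...
//   QImage img(reinterpret_cast<uchar*>(buf.data.data()), w, h,
//              QImage::Format_RGB32);
//   painter.drawImage(0, 0, img);
```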

OTHER TIPS

OpenGL does image (texture) coordinate mapping really well. You probably want to use some form of OpenGL. Qt has some binding to OpenGL that can help you out.

One way to do this would be to use a class inheriting from QGLWidget instead of the QGraphicsScene/QGraphicsView combo. Unfortunately, the learning curve for OpenGL starts out a little steep. However, it will be very fast because it will happen directly on the graphics card which is optimized for just this kind of operation.
You'll load the image with QGLWidget::bindTexture().
You'll associate points in the image with your triangle mesh and send them all to your graphics card. In the legacy version of OpenGL (which is easier to use than the newer API in my opinion), it would look something like this:

glEnable(GL_TEXTURE_2D);

glBegin(GL_TRIANGLES);
for (int ii = 0; ii < triangle.size(); ++ii) {
  for (int jj = 0; jj < 3; ++jj) {
    glTexCoord2d(triangle[ii].tex[jj][0], triangle[ii].tex[jj][1]);
    glVertex2d(triangle[ii].point[jj][0], triangle[ii].point[jj][1]);
  }
}
glEnd();

Where triangle is some data structure that you've made holding the triangle vertices and associated mappings into the image. The graphics card will handle the pixel interpolation for you.

Another option, apart from OpenGL, is OpenCL, which may be easier for you to pick up. You just have to memory-map the input/output bitmaps to the graphics card, write a small kernel in C that handles one triangle, and then queue one kernel execution per triangle. This can run as much as 100x faster than a single CPU core.

There is a Qt wrapper for the OpenCL host api here:

http://doc.qt.digia.com/opencl-snapshot/index.html

A different approach is to leverage the clipping and transformations already implemented efficiently in Qt's raster paint engine. As long as the transformation between the two triangles can be expressed as a 3x3 augmented transformation matrix, you only need to set it on the target painter and then draw the entire source image; it will be clipped and transformed to fill the target triangle. If profiling shows an advantage, you can also draw just the bounding rect of the source triangle instead of the whole image.
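Since three point correspondences determine an affine map, the six matrix coefficients can be solved for directly. A sketch with my own helper names (in Qt you would then build a QTransform from the coefficients, call QPainter::setTransform, and clip to the destination triangle with setClipPath):

```cpp
struct P { double x, y; };

// x' = a*x + b*y + c,  y' = d*x + e*y + f
struct Affine { double a, b, c, d, e, f; };

static double det3(double a1, double a2, double a3,
                   double b1, double b2, double b3,
                   double c1, double c2, double c3) {
    return a1 * (b2 * c3 - b3 * c2)
         - a2 * (b1 * c3 - b3 * c1)
         + a3 * (b1 * c2 - b2 * c1);
}

// Affine map taking s[i] to d[i] for i = 0..2, solved via Cramer's rule.
// In Qt this would become QTransform(t.a, t.d, t.b, t.e, t.c, t.f).
static Affine triangleToTriangle(const P s[3], const P d[3]) {
    double D = det3(s[0].x, s[0].y, 1, s[1].x, s[1].y, 1, s[2].x, s[2].y, 1);
    Affine t;
    t.a = det3(d[0].x, s[0].y, 1, d[1].x, s[1].y, 1, d[2].x, s[2].y, 1) / D;
    t.b = det3(s[0].x, d[0].x, 1, s[1].x, d[1].x, 1, s[2].x, d[2].x, 1) / D;
    t.c = det3(s[0].x, s[0].y, d[0].x,
               s[1].x, s[1].y, d[1].x,
               s[2].x, s[2].y, d[2].x) / D;
    t.d = det3(d[0].y, s[0].y, 1, d[1].y, s[1].y, 1, d[2].y, s[2].y, 1) / D;
    t.e = det3(s[0].x, d[0].y, 1, s[1].x, d[1].y, 1, s[2].x, d[2].y, 1) / D;
    t.f = det3(s[0].x, s[0].y, d[0].y,
               s[1].x, s[1].y, d[1].y,
               s[2].x, s[2].y, d[2].y) / D;
    return t;
}
```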

This can be parallelized so that you process as many triangles in parallel as there are CPU cores.
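A minimal sketch of that partitioning with plain std::thread (the helper name is mine; this is only safe if each triangle writes to disjoint pixels, or each worker gets its own buffer):

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Run perTriangle(i) for every i in [0, count), split into one
// contiguous chunk of indices per hardware thread.
template <typename Fn>
void forEachTriangleParallel(std::size_t count, Fn perTriangle) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (count + n - 1) / n;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = std::min(count, begin + chunk);
        if (begin >= end) break;
        workers.emplace_back([=] {
            for (std::size_t i = begin; i < end; ++i) perTriangle(i);
        });
    }
    for (auto &w : workers) w.join();
}
```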

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow