QPainter very slow on QGLWidget even in simple example?



dpkay
24th January 2010, 02:38
As far as I understand it, QPainter should be a convenient way to draw simple shapes in my GL widget without the hassle of specifying individual vertices, and without any significant loss of performance. However, in a simple rectangle test, native OpenGL still seems to be faster by a factor of about 30. Am I doing something wrong?

Here's the code of my QGLWidget's paintEvent(). The vector testpos currently contains 10,000 points. I would expect no OpenGL state changes between the drawRect() calls.



void GLWidget::paintEvent(QPaintEvent *event)
{
    makeCurrent();
    qglClearColor(Qt::black);
    setupViewport(width(), height());
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // draw rectangles using native OpenGL code
    // normalized coordinates
    QTime stopwatch;
    stopwatch.start();
    glColor4f(0.0f, 0.0f, 1.0f, 1.0f);
    foreach (const QPointF *pt, testpos)
    {
        glBegin(GL_LINE_LOOP);
        glVertex2f(pt->x(), pt->y());
        glVertex2f(pt->x() + 0.03, pt->y());
        glVertex2f(pt->x() + 0.03, pt->y() + 0.03);
        glVertex2f(pt->x(), pt->y() + 0.03);
        glEnd();
    }
    qDebug() << "elapsed msec native gl: " << stopwatch.elapsed();

    // draw rectangles using QPainter
    // screen space coordinates
    stopwatch.start();
    QPainter p(this);
    p.setPen(Qt::blue);
    foreach (const QPointF *pt, testpos)
    {
        p.drawRect(pt->x() * 500, pt->y() * 500, 20, 20);
    }
    p.end();
    qDebug() << "elapsed msec painter: " << stopwatch.elapsed();
}


The output I get is:
elapsed msec native gl: 2
elapsed msec painter: 65
elapsed msec native gl: 2
elapsed msec painter: 76
...

Omitting one or the other drawing method doesn't change anything, and it's not a measurement error: the delay scales with the number of rectangles, so the overhead has to be in the drawRect() calls themselves. What could be wrong here?

Best wishes,
- Dominik

wysota
24th January 2010, 11:15
I don't think anything is wrong. Qt has to translate QPainter calls into OpenGL routines, which takes time, and the final GL calls it issues may differ from yours. It's also hard to call this a fair test, since the two loops draw different objects and figures. Even the mere multiplication of a floating-point number by an integer takes time, and if you do it many times it starts to become significant. Furthermore, a single pass over each loop is not enough for a benchmark, because of QTime's limited precision. Repeat each loop 1000 times, measure the total, then divide the result by 1000, and maybe you'll obtain something closer to the truth.
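
For illustration, a rough sketch of that measurement could look like the following. It assumes the same GLWidget members as in the first post (testpos, setupViewport) and keeps QTime from the original; the run count of 1000 follows the suggestion above, and the 15-pixel rectangle size (0.03 scaled by 500) is just one way to make both loops draw the same figures — neither value is from the original code.

void GLWidget::paintEvent(QPaintEvent *event)
{
    Q_UNUSED(event);

    makeCurrent();
    qglClearColor(Qt::black);
    setupViewport(width(), height());
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // repetition count; totals are divided by this so QTime's
    // millisecond resolution stops dominating the result
    const int runs = 1000;

    // native GL, repeated `runs` times
    QTime stopwatch;
    stopwatch.start();
    for (int run = 0; run < runs; ++run)
    {
        glColor4f(0.0f, 0.0f, 1.0f, 1.0f);
        foreach (const QPointF *pt, testpos)
        {
            glBegin(GL_LINE_LOOP);
            glVertex2f(pt->x(), pt->y());
            glVertex2f(pt->x() + 0.03f, pt->y());
            glVertex2f(pt->x() + 0.03f, pt->y() + 0.03f);
            glVertex2f(pt->x(), pt->y() + 0.03f);
            glEnd();
        }
    }
    qDebug() << "avg msec native gl:" << stopwatch.elapsed() / double(runs);

    // QPainter, repeated `runs` times, drawing rectangles of the same
    // size as the GL loop (0.03 * 500 = 15 pixels)
    stopwatch.start();
    QPainter p(this);
    p.setPen(Qt::blue);
    for (int run = 0; run < runs; ++run)
    {
        foreach (const QPointF *pt, testpos)
        {
            const QPointF s = *pt * 500.0;   // scale each point once
            p.drawRect(QRectF(s.x(), s.y(), 15.0, 15.0));
        }
    }
    p.end();
    qDebug() << "avg msec painter:" << stopwatch.elapsed() / double(runs);
}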