
View Full Version : Images captured using QGLWidget::grabFrameBuffer aren't equal on different platforms



jillian
3rd March 2010, 01:26
I wrote a unit test which compares the contents of a frame buffer to a control image that was captured previously. When I run the unit test on my development platform, the test succeeds -- but when I transfer the unit test and the control image to another platform, the newly captured frame doesn't quite match the control image. Both platforms have the same screen resolution and color depth. What else do I have to ensure is the same on both machines before I can expect the images to compare successfully? Or, what other method can I use to compare images that is more robust?

Here is a code snippet:


//grab the frame buffer from the QGLWidget
QImage image = m_ui.m_imageWidget->grabFrameBuffer();

//Read in the control image -- was previously saved using QImage::save with png format
QImage control_image("./images/controlGeoImage.png");

//Compare the test image with the expected control image
QCOMPARE(image, control_image); //fails if control_image is from a different platform


If I look at the control image and the newly captured test image from a different platform, I can see the differences, but don't know how to avoid the differences so that I can develop the unit tests on one platform, and run them (for nightly testing) on another. See the test image and control image below:

http://www.volcanovillage.net/testImages/controlGeoImage.png

http://www.volcanovillage.net/testImages/testGeoImage.png

The bottom image, which is the newly captured test image, shows a bit of pink on the edges between the dark blue and light blue sections.

chaoticbob
3rd March 2010, 22:27
By platform do you mean OS or hardware?

Also, are you running the same hardware with the same version of the drivers?

jillian
3rd March 2010, 22:43
I'm running the same OS on different hardware. Today I tried turning off hardware acceleration in OpenGL and produced a new control image that way. To the naked eye, the new control image from the development platform and the test image from the test platform now look exactly the same (ie, both look like the bottom image above). But they still don't compare successfully -- so I'm trying to figure out what's different now.

So now that I'm using software rendering in OpenGL, do I still need to worry about whether my drivers are the same (are we talking drivers for the graphics card)?

barnabyr
4th March 2010, 07:27
What about having a separate control image per platform?

jillian
4th March 2010, 17:29
A separate control image per platform will get to be messy, as there are many unit tests in the making... I'd like an elegant and robust solution to this problem. But yeah, if I can't figure it out, separate control images would certainly work -- I'm just holding off for something better...
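If I do fall back to per-platform control images, a small helper could keep the file bookkeeping manageable. Here's a plain C++ sketch of the idea (platformControlPath and the tag strings are names I made up for illustration, not anything from Qt):

```cpp
#include <string>

// Hypothetical helper: build a per-platform control-image path by inserting
// a platform tag before the file extension, e.g.
// "./images/controlGeoImage.png" -> "./images/controlGeoImage.linux.png".
std::string platformControlPath(const std::string& basePath,
                                const std::string& platformTag)
{
    const std::string::size_type dot = basePath.rfind('.');
    if (dot == std::string::npos)
        return basePath + "." + platformTag;  // no extension: just append the tag
    return basePath.substr(0, dot) + "." + platformTag + basePath.substr(dot);
}
```

Each test would then load its control image through this helper, with the tag chosen at build or run time for the machine the nightly tests run on.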

jillian
5th March 2010, 04:38
Well, I compared the new control and test images that I produced with OpenGL software rendering only, pretty closely. I used an application called ImageMagick to create an image showing the differences. Here it is:

http://www.volcanovillage.net/testImages/difference.png

When I looked at the size of the differences, I was surprised to see that almost all pixels that were different were greater by exactly 16 in one or more channel (red, green, or blue) of the test image. So I used ImageMagick again to create an image showing thresholded differences, with the threshold set to 16. Here it is:

http://www.volcanovillage.net/testImages/diff_thresh.png

Percentage-wise, the differences are pretty small, once you allow for that "fuzz factor" of 16.
So now I'm comparing my test and control images using this forgiving comparison function, which I modeled on a function called "fuzzyComparePixels" that I found elsewhere on the Qt site. For the particular case I've been discussing, I call this function with an rPixelTolerance of 16 and an rImageTolerance of 0.5 percent:



///Compare two images. Ignores the alpha channel.
///Must be same size and format.
///@param rImage1 first image (order does not matter)
///@param rImage2 second image
///@param rPixelTolerance tolerable difference in a color channel
///@param rImageTolerance tolerable percentage of pixels above pixel tolerance
///@return true if the images compare favorably, false otherwise

const bool
CompareImages(QImage& rImage1,
              QImage& rImage2,
              const uint32 rPixelTolerance,
              const float32 rImageTolerance)
{
    bool success = false;
    if (rImage1.width() == rImage2.width() &&
        rImage1.height() == rImage2.height() &&
        rImage1.depth() == rImage2.depth() &&
        rImage1.format() == rImage2.format())
    {
        int32 failed_pixel_count = 0;

        for (int32 y = 0; y < rImage1.height(); y++)
        {
            for (int32 x = 0; x < rImage1.width(); x++)
            {
                QRgb pixel_1 = rImage1.pixel(x, y);
                QRgb pixel_2 = rImage2.pixel(x, y);

                uint32 redFuzz = qAbs(qRed(pixel_1) - qRed(pixel_2));
                uint32 greenFuzz = qAbs(qGreen(pixel_1) - qGreen(pixel_2));
                uint32 blueFuzz = qAbs(qBlue(pixel_1) - qBlue(pixel_2));

                if (redFuzz > rPixelTolerance ||
                    greenFuzz > rPixelTolerance ||
                    blueFuzz > rPixelTolerance)
                {
                    failed_pixel_count++;
                }
            }
        }
        float32 percent_failed =
            failed_pixel_count * 100.0 / (rImage1.width() * rImage1.height());
        qDebug("failed_pixel_count = %d, percent_failed = %f",
               failed_pixel_count, percent_failed);
        if (percent_failed <= rImageTolerance)
        {
            success = true;
        }
    }
    return success;
}
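For anyone who wants to sanity-check the per-pixel test outside of Qt, here is the same channel logic as a plain C++ sketch operating on packed 0xAARRGGBB values (the layout that qRed/qGreen/qBlue decode); pixelWithinTolerance is a name made up for this sketch:

```cpp
#include <cstdint>
#include <cstdlib>

// Sketch of the per-pixel check inside CompareImages, on packed 0xAARRGGBB
// pixels. Returns true when every colour channel differs by no more than
// tolerance. The alpha byte is never extracted, so it is ignored, just as
// in CompareImages above.
bool pixelWithinTolerance(std::uint32_t p1, std::uint32_t p2,
                          std::uint32_t tolerance)
{
    const int redFuzz   = std::abs(int((p1 >> 16) & 0xFF) - int((p2 >> 16) & 0xFF));
    const int greenFuzz = std::abs(int((p1 >> 8)  & 0xFF) - int((p2 >> 8)  & 0xFF));
    const int blueFuzz  = std::abs(int(p1 & 0xFF) - int(p2 & 0xFF));
    return redFuzz   <= int(tolerance) &&
           greenFuzz <= int(tolerance) &&
           blueFuzz  <= int(tolerance);
}
```

With a tolerance of 16, a pair of pixels whose blue channels differ by exactly 16 still passes, which is what makes the comparison forgive the systematic offset I described above.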


I ignore the alpha channel because the alpha channels of my control and test images are totally different: one has all "0"s and the other all "255"s. I don't know why. I call QGLWidget::grabFrameBuffer(bool withAlpha = false) without an argument, letting it default to false. Perhaps the alpha channel is initialized differently on the two platforms?
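If the alpha mismatch ever needs to be removed rather than ignored, one option is to force both images fully opaque before comparing. A minimal sketch on a single packed 0xAARRGGBB pixel (forceOpaque is a made-up name; for a whole QImage, converting with QImage::convertToFormat(QImage::Format_RGB32) has the equivalent effect, since that format stores every pixel as 0xffRRGGBB):

```cpp
#include <cstdint>

// Force the alpha byte of a packed 0xAARRGGBB pixel to 255, leaving the
// RGB channels untouched. Applying this to both images removes the
// all-0 vs all-255 alpha disagreement before an exact comparison.
inline std::uint32_t forceOpaque(std::uint32_t argb)
{
    return argb | 0xFF000000u;
}
```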

Another interesting observation is that test and control images with no rotation involved (unlike the 10 degree rotation in the images above) seem to be much more alike. Perhaps the two platforms use different smoothing techniques? I don't know. I checked the versions of the OpenGL and PNG libraries, and they were the same on both platforms.

So I feel that I can use the forgiving image comparison technique shown above and still faithfully report unit test success or failure -- but I'd like to understand what causes these image differences. The images all have a depth of 32 bits, and the control images are stored in PNG format. If you have an idea of what is causing the differences, please respond.

^NyAw^
5th March 2010, 11:54
Hi,

Are the two systems identical? Do they have the same video configuration (antialiasing, ...)? I think that "grabFrameBuffer" returns the displayed image, so if your graphics card applied an antialiasing filter, the images will be different.