Well, I compared fairly closely the new control and test images that I produced with software-only OpenGL rendering. I used an application called ImageMagick to create an image showing the differences. Here it is:
When I looked at the size of the differences, I was surprised to see that almost all of the differing pixels were greater by exactly 16 in one or more channels (red, green, or blue) of the test image. So I used ImageMagick again to create an image of the thresholded differences, with the threshold set to 16. Here it is:
Percentage-wise, the differences are pretty small, once you allow for that "fuzz factor" of 16.
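For reference, the two ImageMagick steps described above can be reproduced roughly as follows (file names are placeholders, and the exact options may vary between ImageMagick versions; a channel tolerance of 16 out of 255 is about 6.3% fuzz):

```shell
# Difference image: per-pixel absolute difference of the two renders.
composite -compose difference control.png test.png difference.png

# Thresholded comparison: count pixels that differ by more than ~16
# in any channel ("AE" = absolute error, i.e. number of differing pixels).
compare -metric AE -fuzz 6.3% control.png test.png thresholded.png
```

The `-metric AE` count divided by the total pixel count gives the same "percent failed" figure that the comparison function below computes.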
So now I'm comparing my test and control images with the forgiving comparison function below, which I modeled on a function called "fuzzyComparePixels" that I found elsewhere on the Qt site. For the particular case I've been discussing, I call it with an rPixelTolerance of 16 and an rImageTolerance of 0.5 percent:
///Compare two images. Ignores the alpha channel.
///Must be the same size and format.
///@param rImage1 first image (order does not matter)
///@param rImage2 second image
///@param rPixelTolerance tolerable difference in a color channel
///@param rImageTolerance tolerable percentage of pixels above pixel tolerance
///@return true if the images compare favorably, false otherwise
bool
CompareImages(const QImage& rImage1,
              const QImage& rImage2,
              const uint32 rPixelTolerance,
              const float32 rImageTolerance)
{
    bool success = false;
    if (rImage1.width() == rImage2.width() &&
        rImage1.height() == rImage2.height() &&
        rImage1.depth() == rImage2.depth() &&
        rImage1.format() == rImage2.format())
    {
        int32 failed_pixel_count = 0;
        for (int32 y = 0; y < rImage1.height(); y++)
        {
            for (int32 x = 0; x < rImage1.width(); x++)
            {
                QRgb pixel_1 = rImage1.pixel(x, y);
                QRgb pixel_2 = rImage2.pixel(x, y);
                uint32 redFuzz = qAbs(qRed(pixel_1) - qRed(pixel_2));
                uint32 greenFuzz = qAbs(qGreen(pixel_1) - qGreen(pixel_2));
                uint32 blueFuzz = qAbs(qBlue(pixel_1) - qBlue(pixel_2));
                //A pixel fails if any channel differs by more than the tolerance.
                if (redFuzz > rPixelTolerance ||
                    greenFuzz > rPixelTolerance ||
                    blueFuzz > rPixelTolerance)
                {
                    failed_pixel_count++;
                }
            }
        }
        float32 percent_failed =
            failed_pixel_count * 100.0 / (rImage1.width() * rImage1.height());
        qDebug("failed_pixel_count = %d, percent_failed = %f",
               failed_pixel_count, percent_failed);
        if (percent_failed <= rImageTolerance)
        {
            success = true;
        }
    }
    return success;
}
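To make the pass/fail arithmetic concrete without pulling in Qt, here is a hypothetical standalone analogue of the function above, operating on raw 8-bit RGB buffers (the function and variable names are mine, not from the original code). Note that because the test uses strict `>`, a channel difference of exactly 16 passes with a tolerance of 16:

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Standalone sketch of the fuzzy comparison: pixels are 8-bit RGB
// triples stored in two equal-length buffers.
bool fuzzyCompareBuffers(const std::vector<uint8_t>& a,
                         const std::vector<uint8_t>& b,
                         int pixelTolerance,
                         double imageTolerancePercent)
{
    if (a.size() != b.size() || a.size() % 3 != 0 || a.empty())
        return false;
    const size_t pixelCount = a.size() / 3;
    size_t failed = 0;
    for (size_t i = 0; i < a.size(); i += 3)
    {
        // A pixel fails if any channel differs by more than the tolerance.
        if (std::abs(a[i]     - b[i])     > pixelTolerance ||
            std::abs(a[i + 1] - b[i + 1]) > pixelTolerance ||
            std::abs(a[i + 2] - b[i + 2]) > pixelTolerance)
        {
            ++failed;
        }
    }
    const double percentFailed = failed * 100.0 / pixelCount;
    return percentFailed <= imageTolerancePercent;
}
```

With two pixels where one red channel differs by exactly 16, a pixel tolerance of 16 reports a match, while a tolerance of 15 marks 50% of the pixels as failed.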
I ignore the alpha channel because the alpha channels of my control and test images are completely different: one is all 0s and the other is all 255s. I don't know why this is. I call GLWidget::grabFrameBuffer() without an argument, letting withAlpha default to false. Perhaps the alpha channel initializes differently on the two platforms?
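Another way to keep the mismatched alpha channels from ever mattering (not what the code above does) would be to force every pixel's alpha byte to a constant before comparing; in Qt the same effect should be achievable with QImage::convertToFormat(QImage::Format_RGB32), which stores pixels as 0xffRRGGBB. A minimal sketch on raw 32-bit ARGB words:

```cpp
#include <cstdint>
#include <vector>

// Force the alpha byte of every 0xAARRGGBB pixel to 0xFF so that a
// bitwise or channel-wise comparison can never fail on alpha alone.
void maskAlpha(std::vector<uint32_t>& pixels)
{
    for (uint32_t& p : pixels)
        p |= 0xFF000000u;
}
```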
Another interesting observation is that the test and control images rendered with no rotation (unlike the 10-degree rotation in the images above) are much more alike. Perhaps the two platforms use different smoothing techniques? I don't know. I checked the versions of the OpenGL and png libraries, and they were the same on both platforms.
So I feel that I can use the forgiving image-comparison technique shown above and still faithfully report unit test success or failure -- but I'd like to understand what causes these image differences. The images all have an image depth of 32 bits, and the control images are stored in PNG format. If you have an idea of what is causing the differences, please respond.