Thanks!

I looked at the Qwt implementation, and I think I am doing something very similar, except:
Qwt creates the image on the fly from the data, with the interpolation done on the data itself. What I do instead is create the QImage from the data first: for instance, if I have a 1000x800 data set, I first create QImage( 1000, 800, Format_ARGB32 ).

Then when I zoom in, I extract the exposed sub-image, say at QRect( 400, 500, 60, 70 ). This sub-image has to be mapped to the screen, say at QSize( 700, 600 ), so I use bilinear interpolation of the 4 surrounding corner pixels to compute the QRgb values in between.

I split each pixel's QRgb into its qRed, qGreen, qBlue, and qAlpha components, interpolate each component separately, and merge them back with qRgba( r, g, b, a ). However, the image quality is still poor.
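For reference, here is a sketch of what I mean by per-channel bilinear interpolation. To keep it self-contained I use plain packed 32-bit ARGB ints in the same 0xAARRGGBB layout as Qt's QRgb, with hand-written channel helpers standing in for qAlpha/qRed/qGreen/qBlue and qRgba:

```cpp
#include <cstdint>

// 32-bit ARGB pixel, same layout as Qt's QRgb (0xAARRGGBB).
// alphaOf/redOf/... stand in for Qt's qAlpha/qRed/qGreen/qBlue.
using Argb = std::uint32_t;

static int alphaOf(Argb c) { return (c >> 24) & 0xff; }
static int redOf(Argb c)   { return (c >> 16) & 0xff; }
static int greenOf(Argb c) { return (c >> 8)  & 0xff; }
static int blueOf(Argb c)  { return c & 0xff; }

static Argb makeArgb(int a, int r, int g, int b) {
    // Equivalent of qRgba( r, g, b, a ).
    return (Argb(a) << 24) | (Argb(r) << 16) | (Argb(g) << 8) | Argb(b);
}

// Bilinear blend of the four corner pixels c00, c10, c01, c11 at
// fractional position (fx, fy) in [0,1], done per channel.
Argb bilinearArgb(Argb c00, Argb c10, Argb c01, Argb c11,
                  double fx, double fy)
{
    auto lerp2 = [fx, fy](int v00, int v10, int v01, int v11) {
        double top = v00 + fx * (v10 - v00);  // blend along x, top row
        double bot = v01 + fx * (v11 - v01);  // blend along x, bottom row
        return int(top + fy * (bot - top) + 0.5);  // blend along y, round
    };
    return makeArgb(
        lerp2(alphaOf(c00), alphaOf(c10), alphaOf(c01), alphaOf(c11)),
        lerp2(redOf(c00),   redOf(c10),   redOf(c01),   redOf(c11)),
        lerp2(greenOf(c00), greenOf(c10), greenOf(c01), greenOf(c11)),
        lerp2(blueOf(c00),  blueOf(c10),  blueOf(c01),  blueOf(c11)));
}
```

At the midpoint of a black/white checker of corners this produces mid-grey 0xff808080, which is the mathematically correct per-channel blend.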

If I instead interpolate the raw data first, just like Qwt does, using the same interpolation method, and then create the image from the interpolated data, the result looks good.
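The data-first variant looks like this. The color map here is a hypothetical two-stop linear map (blue at vmin, red at vmax), a stand-in for whatever map the real code uses (Qwt's is QwtLinearColorMap); the point is only the order of operations, interpolate value then map to color:

```cpp
#include <cstdint>
#include <algorithm>

using Argb = std::uint32_t;  // 0xAARRGGBB, same layout as QRgb

// Bilinear interpolation done on the raw data values.
double bilinearData(double v00, double v10, double v01, double v11,
                    double fx, double fy)
{
    double top = v00 + fx * (v10 - v00);
    double bot = v01 + fx * (v11 - v01);
    return top + fy * (bot - top);
}

// Hypothetical two-stop linear color map: blue at vmin, red at vmax.
Argb mapToColor(double v, double vmin, double vmax)
{
    double t = std::clamp((v - vmin) / (vmax - vmin), 0.0, 1.0);
    int r = int(255 * t + 0.5);
    int b = int(255 * (1.0 - t) + 0.5);
    return 0xff000000u | (Argb(r) << 16) | Argb(b);
}
```

So each screen pixel is computed as mapToColor(bilinearData(...), vmin, vmax) rather than blending already-mapped colors. With a multi-stop or non-linear map the two orders give visibly different results, because a straight line between two mapped colors generally does not stay on the map's color path.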

So what am I doing wrong? I thought QColor's r/g/b/a components could be linearly interpolated?

(I can't use Qwt for a number of reasons.)