I have scientific data in the form of x vs. y with a color gradient that assigns a color to each (x, y) pair (the z axis). The problem is that the aspect ratio is extreme: there might be 100K points (or more) in the x dimension, but only 400 points in the y dimension. The data matrix is sparse, so most of the z values are zero. The z values are "spiky" - if you scan along the x axis, there are sharp, roughly Lorentzian spikes spanning 10 or so points, separated by hundreds of zeros. The spikes might extend for 3 or 4 points in the y dimension.
I need to implement a 2D display that has the ability to display the entire matrix in a "normal" sized window, say 1000 x 1000 pixels. This will generally require expanding the y range so that 400 points fill a 1000 point range. That's not hard and can be implemented with interpolation.
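To be concrete about the y stretch, this is roughly what I mean - a minimal sketch only, where stretchColumn is just an illustrative name and I'm assuming one x column arrives as a contiguous vector of floats:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Stretch one y column (e.g. 400 samples) onto pixelRows screen rows by
    // linear interpolation between the two nearest source samples.
    std::vector<float> stretchColumn(const std::vector<float>& ySamples, int pixelRows)
    {
        std::vector<float> out(pixelRows);
        const double scale = double(ySamples.size() - 1) / double(pixelRows - 1);
        for (int row = 0; row < pixelRows; ++row) {
            const double pos  = row * scale;                          // fractional source index
            const int    i0   = int(std::floor(pos));
            const int    i1   = std::min<int>(i0 + 1, int(ySamples.size()) - 1);
            const double frac = pos - i0;
            out[row] = float((1.0 - frac) * ySamples[i0] + frac * ySamples[i1]);
        }
        return out;
    }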
What about the x axis? Given that maybe 100 data points will map to the same x pixel, what is the best choice? The maximum z value in the range? The average? It would be confusing to the user if, as the plot was zoomed in, peaks suddenly started appearing where there were none at a lower zoom level.
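For illustration, the max option is what I have in mind below (reduceMax is a made-up name; z points at one y line of values across the x range):

    #include <algorithm>

    // One screen column, one y line: reduce the source x range [xFirst, xLast]
    // that lands under that pixel. Taking the max keeps peaks visible at every
    // zoom level; averaging would dilute a 10-point spike spread over ~100
    // source points by roughly a factor of 10.
    float reduceMax(const float* z, int xFirst, int xLast)
    {
        float m = 0.0f;                        // implicit zeros in the sparse data
        for (int x = xFirst; x <= xLast; ++x)
            m = std::max(m, z[x]);
        return m;
    }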
Zooming has to be fast. Resampling a section of a 400 x 100K point matrix into a 1000 x 1000 pixel image could take forever if every pixel has to be recomputed on each zoom. The data is sparse, so I really don't want to explode it into a full 400 x 100K matrix, most of which would be zero.
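What I'm currently imagining is keeping only the nonzero samples and scattering them into the visible pixel grid in one pass, so the cost scales with the number of spikes rather than with 400 x 100K. A rough sketch, where Sample and rasterize are hypothetical names and I'm ignoring sub-pixel footprints:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Sample { int x; int y; float z; };     // nonzero entries only

    // Fill a width x height pixel buffer from the nonzero samples that fall
    // inside the visible data range, keeping the max per pixel. Cost is
    // O(number of nonzero samples), independent of the dense matrix size.
    void rasterize(const std::vector<Sample>& samples,
                   int xMin, int xMax, int yMin, int yMax,    // visible data range
                   int width, int height, std::vector<float>& pixels)
    {
        pixels.assign(std::size_t(width) * height, 0.0f);     // background = 0
        const double sx = double(width)  / double(xMax - xMin + 1);
        const double sy = double(height) / double(yMax - yMin + 1);
        for (const Sample& s : samples) {
            if (s.x < xMin || s.x > xMax || s.y < yMin || s.y > yMax)
                continue;                                     // outside the current view
            const int px = int((s.x - xMin) * sx);
            const int py = int((s.y - yMin) * sy);
            float& dst = pixels[std::size_t(py) * width + px];
            dst = std::max(dst, s.z);                         // max-decimation
        }
    }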
Finally, what is the best way to implement this for display? Using a QImage? Or implementing it as a texture in OpenGL? Are there GPU-side OpenGL mechanisms to create a texture using the full data matrix size (400 x 100K) that will allow me to map and zoom without huge CPU-side overhead? I could constrain the z-axis gradient to 0 - 255, so a matrix of that size would only be 40 MB.
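For reference, the QImage route I'm picturing is an 8-bit indexed image with the gradient as a color table, something like the sketch below (it assumes the 0 - 255 pixel buffer comes from a rasterize step like the one above, and the black-to-red ramp is just a stand-in for the real gradient):

    #include <QColor>
    #include <QImage>
    #include <QVector>
    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Wrap an 8-bit pixel buffer in an indexed QImage; the color table acts
    // as the z gradient.
    QImage makeImage(const std::vector<uint8_t>& pixels, int width, int height)
    {
        QImage img(width, height, QImage::Format_Indexed8);
        QVector<QRgb> table;
        for (int i = 0; i < 256; ++i)
            table.append(qRgb(i, 0, 0));                      // placeholder ramp
        img.setColorTable(table);
        for (int y = 0; y < height; ++y)
            std::copy_n(&pixels[std::size_t(y) * width], width, img.scanLine(y));
        return img;
    }

An OpenGL texture built from the same 8-bit buffer (uploaded once with glTexImage2D, say, and redrawn as a quad) would presumably make panning and zooming nearly free on the CPU side, but that is exactly the part I'd like advice on.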
I would appreciate any advice.