View Full Version : QGraphicsScene item population



Royce
12th February 2017, 17:33
I'd like to graph a sound file. I saw QGraphicsScene and thought it might be perfect, but it seems like sound files have too many elements. In debug mode, the item-load code just takes a long, long, long time and pegs one CPU. In release mode it slams my computer so hard I have to power down to recover (no mouse response, and no or significantly delayed Ctrl-Alt-Del response).

My test file is a single-channel WAV, 16-bit, 44.1 kHz, about 6 minutes long; that's roughly 15 million samples (a 30 MB file). I had hoped to draw lines between consecutive samples and let the user zoom in and out and scroll left and right. But it would seem the item-load cycle is just too intensive to actually pile in 15 million line segments. Does that sound right, or is there something obviously wrong with the way I'm loading? Is there some faster way? Do I just need to load and unload items so that there are only a thousand or so in the scene at a time and handle scrolling manually? (That doesn't seem like the spirit of the class.)

If I'm going to handle scrolling and such manually, should I be looking at QRasterWindow instead?



MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    ui->setupUi(this);

    connect(&audio_decoder, &QAudioDecoder::bufferReady, this, &MainWindow::on_audio_decoded);
    connect(&audio_decoder, static_cast<void(QAudioDecoder::*)(QAudioDecoder::Error)>(&QAudioDecoder::error),
            this, &MainWindow::on_audio_decode_error);
    connect(&audio_decoder, &QAudioDecoder::stateChanged, this, &MainWindow::on_audio_decoder_stateChanged);
}

MainWindow::~MainWindow()
{
    delete ui;
}

void MainWindow::on_actionLoad_Sound_File_triggered()
{
    QString fileName = QFileDialog::getOpenFileName(this, "Open a sound file", QString(), "Sound Files (*.mp3 *.wav *.ogg)");
    if (!fileName.isEmpty())
    {
        if (QFile(fileName).exists())
        {
            samples.clear();
            audio_decoder.setSourceFilename(fileName);
            audio_decoder.start();
        }
    }
}


void MainWindow::on_audio_decoded()
{
    while (audio_decoder.bufferAvailable())
    {
        QAudioBuffer audio_buffer = audio_decoder.read();
        QAudioFormat audio_format = audio_buffer.format();

        if (samples.empty())
        {
            for (int chan_idx = 0; chan_idx < audio_format.channelCount(); ++chan_idx)
            {
                samples.push_back(QVector<qreal>());
            }

            gs = new QGraphicsScene();
        }

        switch (audio_format.sampleType())
        {
        case QAudioFormat::SignedInt:
            switch (audio_format.sampleSize())
            {
            case 8:
                NormalizeIntSamples<qint8>(audio_buffer.frameCount(), audio_format.channelCount(), audio_buffer.constData<qint8>(), (QSysInfo::Endian)(audio_format.byteOrder()), samples);
                break;
            case 16:
                NormalizeIntSamples<qint16>(audio_buffer.frameCount(), audio_format.channelCount(), audio_buffer.constData<qint16>(), (QSysInfo::Endian)(audio_format.byteOrder()), samples);
                break;
            case 32:
                NormalizeIntSamples<qint32>(audio_buffer.frameCount(), audio_format.channelCount(), audio_buffer.constData<qint32>(), (QSysInfo::Endian)(audio_format.byteOrder()), samples);
                break;
            }
            break;
        case QAudioFormat::UnSignedInt:
            switch (audio_format.sampleSize())
            {
            case 8:
                NormalizeIntSamples<quint8>(audio_buffer.frameCount(), audio_format.channelCount(), audio_buffer.constData<quint8>(), (QSysInfo::Endian)(audio_format.byteOrder()), samples);
                break;
            case 16:
                NormalizeIntSamples<quint16>(audio_buffer.frameCount(), audio_format.channelCount(), audio_buffer.constData<quint16>(), (QSysInfo::Endian)(audio_format.byteOrder()), samples);
                break;
            case 32:
                NormalizeIntSamples<quint32>(audio_buffer.frameCount(), audio_format.channelCount(), audio_buffer.constData<quint32>(), (QSysInfo::Endian)(audio_format.byteOrder()), samples);
                break;
            }
            break;
        case QAudioFormat::Float:
            switch (audio_format.sampleSize())
            {
            case 64:
                NormalizeFloatSamples<double>(audio_buffer.frameCount(), audio_format.channelCount(), audio_buffer.constData<double>(), samples);
                break;
            case 32:
                NormalizeFloatSamples<float>(audio_buffer.frameCount(), audio_format.channelCount(), audio_buffer.constData<float>(), samples);
                break;
            }
            break;
        case QAudioFormat::Unknown:
            break;
        }
    }
}

void MainWindow::on_audio_decode_error(QAudioDecoder::Error error)
{
    QString decode_error = audio_decoder.errorString();
    QMessageBox::warning(this, "Audio Decode Error", decode_error);
}

void MainWindow::DrawSoundLines()
{
    qDebug() << "Drawing.";
    QPen pen;
    pen.setWidth(0);

    for (int sample_idx = 1; sample_idx < samples[0].size(); ++sample_idx)
    {
        QLineF line(sample_idx, samples[0][sample_idx], sample_idx - 1, samples[0][sample_idx - 1]);
        gs->addLine(line, pen);
    }

    std::stringstream msg;
    msg << "Drew " << samples[0].size() << " samples";
    QMessageBox::information(this, "Audio Decoded", msg.str().c_str());

    ui->graphicsView->setScene(gs);
}

void MainWindow::on_audio_decoder_stateChanged(QAudioDecoder::State state)
{
    if (state == QAudioDecoder::StoppedState)
    {
        //std::async(&MainWindow::DrawSoundLines, this);
        DrawSoundLines();
    }
}



class MainWindow : public QMainWindow
{
    Q_OBJECT

public:
    explicit MainWindow(QWidget *parent = 0);
    ~MainWindow();

private slots:
    void on_actionLoad_Sound_File_triggered();

    void on_audio_decoded();
    void on_audio_decode_error(QAudioDecoder::Error error);
    void on_audio_decoder_stateChanged(QAudioDecoder::State state);

private:
    Ui::MainWindow *ui;

    template<typename T> void NormalizeIntSamples(const int frame_count, const int channel_count, const T* sample_data, QSysInfo::Endian sample_byte_order, QVector<QVector<qreal> >& samples)
    {
        for (int frame_idx = 0; frame_idx < frame_count; ++frame_idx)
        {
            for (int channel_idx = 0; channel_idx < channel_count; ++channel_idx)
            {
                T sample = 0;
                switch (sample_byte_order)
                {
                case QSysInfo::BigEndian:
                    sample = qFromBigEndian<T>(sample_data[frame_idx*channel_count + channel_idx]);
                    break; // without this break, the case fell through and the value was overwritten below
                case QSysInfo::LittleEndian:
                    sample = qFromLittleEndian<T>(sample_data[frame_idx*channel_count + channel_idx]);
                    break;
                }
                samples[channel_idx].push_back((qreal)sample);
            }
        }
    }


    template<typename T> void NormalizeFloatSamples(const int frame_count, const int channel_count, const T* sample_data, QVector<QVector<qreal> >& samples)
    {
        for (int frame_idx = 0; frame_idx < frame_count; ++frame_idx)
        {
            for (int channel_idx = 0; channel_idx < channel_count; ++channel_idx)
            {
                samples[channel_idx].push_back((qreal)(sample_data[frame_idx*channel_count + channel_idx]));
            }
        }
    }

    void DrawSoundLines();

    QAudioDecoder audio_decoder;
    QVector<QVector<qreal> > samples;

    QGraphicsScene *gs;
};

d_stranz
14th February 2017, 03:07
It looks to me like:

1 - in line 48 of the first code window, you will be creating a new scene every time this method is called and samples is empty. Create the scene -once- in the main window constructor and just use it. If you need to refill it with different samples, then delete the current objects from the scene and create new graphics items. Likewise, set the scene in the view in the constructor, not each time you draw lines. I also don't see where you delete the old lines - it looks like the draw lines method simply keeps adding more segments.

2 - try using QGraphicsPathItem instead of 15 million individual line segments.

3 - in our plotting code, we also plot scientific data that can contain a million or more points. We optimize by only creating non-redundant segments. That is, we keep track of the x and y -pixel- coordinates of each segment of the line. If the x coordinate is the same as the previous x coordinate, then we don't add a new segment, we just update the current segment's y minimum (y0) and y maximum (y1) if needed. When x finally changes, we output two segments that go from (xLast, yLast) to (xNew, y0) and (xNew, y0) to (xNew, y1). xNew becomes xLast, the last y value before x changes becomes yLast, and we start again for the next set of x, y, values. If the change in x value is only one pixel, then we don't output two segments, we just output the single (xNew, y0) to (xNew, y1) segment. So if your view is 1000 pixels wide, you'll have a path with at most 1000 segments.

When the user zooms in, we discard the old path and create a new one that only includes the zoomed x range.

Admittedly, this optimization ties the scene to a single view, since the calculation is done on the pixel coordinates of the view. On the other hand, if there are only a thousand or so segments in the path, displaying the same data in two different views is not all that wasteful compared to the performance gain.
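To make the coalescing idea in item #3 concrete, here is a rough, self-contained sketch (plain C++, no Qt). The names `ColumnSegment` and `reduceToColumns` are made up for illustration; our real code works on pixel coordinates inside the paint routine, but the core reduction is the same: one min/max vertical segment per pixel column instead of one segment per sample.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// One vertical segment per pixel column: the minimum and maximum sample
// value that landed in that column.
struct ColumnSegment {
    int   x;
    float yMin;
    float yMax;
};

// Reduce `samples` (one value per x step) to at most `viewWidth` vertical
// segments. Drawing one min-max segment per column looks the same on screen
// as drawing every sample-to-sample segment, but caps the segment count at
// the view width in pixels.
std::vector<ColumnSegment> reduceToColumns(const std::vector<float>& samples,
                                           int viewWidth)
{
    std::vector<ColumnSegment> columns;
    if (samples.empty() || viewWidth <= 0)
        return columns;

    const double samplesPerPixel =
        static_cast<double>(samples.size()) / viewWidth;

    for (std::size_t i = 0; i < samples.size(); ++i) {
        const int x = static_cast<int>(i / samplesPerPixel);
        if (columns.empty() || columns.back().x != x) {
            // First sample that falls into this pixel column.
            columns.push_back({x, samples[i], samples[i]});
        } else {
            // Same column: just widen its y range.
            columns.back().yMin = std::min(columns.back().yMin, samples[i]);
            columns.back().yMax = std::max(columns.back().yMax, samples[i]);
        }
    }
    return columns;
}
```

With 15 million samples and a view 1000 pixels wide this yields at most 1000 segments; on zoom, rerun the reduction over just the visible sample range.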

Royce
20th February 2017, 02:42
I tried item #2. It was way faster to add the sample lines to that object, and even adding the object to the scene was fast. But when I added the scene to the view it tanked again.

I think I will end up doing something like item #3, but with QRasterWindow or QOpenGLWindow. If I'm going to fuss with pixel-level stuff, it feels more natural to me to deal with a class plainly targeted at pixel operations. I suspect there may be a way to keep an image much larger than what the screen displays and have scrolling control which part of the image is shown.
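One way to think about the manual-scrolling part: keep the full sample array in memory and, on every scroll or zoom, compute which slice of it is visible before rendering anything. A hypothetical helper (the name `visibleSampleRange` and its parameters are invented for this sketch, not any Qt API):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>

// Given the total sample count, a zoom factor expressed as samples per
// pixel, and the horizontal scroll offset in pixels, return the
// [first, last) sample range that should be rendered into a viewport
// `viewWidth` pixels wide. Both ends are clamped to the sample count.
std::pair<std::size_t, std::size_t>
visibleSampleRange(std::size_t totalSamples, double samplesPerPixel,
                   std::size_t scrollPixels, int viewWidth)
{
    const std::size_t first = std::min(
        totalSamples,
        static_cast<std::size_t>(scrollPixels * samplesPerPixel));
    const std::size_t last = std::min(
        totalSamples,
        first + static_cast<std::size_t>(viewWidth * samplesPerPixel));
    return {first, last};
}
```

Only the returned slice ever needs to be turned into pixels, so the cost per repaint depends on the viewport and zoom level, not on the 15 million samples in the file.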

Anyway, thanks for the help!

d_stranz
20th February 2017, 20:14
But when I added the scene to the view it tanked again.

Probably because the visualization of the QGraphicsPathItem still involves mapping those millions of segments into graphics calls. It doesn't look like that class is optimized to eliminate redundant operations.

You could look into how QGraphicsPathItem does its painting. That is most certainly at the pixel level, so you may be able to derive from this class and implement the optimizations there prior to jumping into the OpenGL blender.

You could also look at how level of detail is implemented in the Qt 40000 chips example (http://doc.qt.io/qt-5/qtwidgets-graphicsview-chip-example.html).