Here I am again. These are the answers you asked for:
1) QPtrVector<QByteArray> buffersRing;
2) It's text data made of records, each record delimited by \0, so I first have to find the delimiting \0 while the data is still in QByteArray form, and then I can convert the chunk to QString:
// find next chunk, about 5% of total time
endIdx = (startIdx < (int)ba.size()) ? ba.find(0, startIdx) : -1;
// deep copy here by fromAscii(), about 4% of total time
// (the endIdx == -1 partial-record case is handled by the omitted logic)
const QString chunk(QString::fromAscii(&ba[startIdx], endIdx - startIdx));
Where ba is one of the buffers pointed to by buffersRing. I do this outside of the fast path, on a timer set to 500 ms. There is some logic, omitted here, to handle half lines / half records and so on.
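For the record, the half-record handling I omitted can be sketched without Qt at all. This is only an illustration of the idea (the function name and the std::string types are my own, not from the actual code): split on \0 and carry any unterminated tail over to the next read.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Split a buffer of '\0'-delimited records; a trailing partial record
// (no terminating '\0' yet) is left in `carry` for the next read.
std::vector<std::string> splitRecords(const std::string& incoming,
                                      std::string& carry)
{
    std::vector<std::string> records;
    std::string buf = carry + incoming;
    std::string::size_type start = 0;
    for (;;) {
        std::string::size_type end = buf.find('\0', start);
        if (end == std::string::npos)
            break;                             // no complete record left
        records.push_back(buf.substr(start, end - start));
        start = end + 1;
    }
    carry = buf.substr(start);                 // keep the partial tail
    return records;
}
```

The real code does the same search with QByteArray::find() and converts each complete record with QString::fromAscii(), as shown above.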
3) Times get only slightly better if I first read all of the data and then process it, but the GUI stays blank until loading ends, so the user's _feeling_ is one of more slowness.
Now what I have found:
QProcess::readyReadStdout() is emitted at a furious pace, always less than 10 ms apart, very often less than 5 ms. And the data read by proc.readStdout() each time is very small, 5-10 KB, so you can imagine how many calls are needed to read 30 MB of data!
So I have disconnected the signal from the slot and used a timer set to 50 ms to call the on_readyReadStdout() slot manually. Now the data chunks are quite a bit bigger, about 100-200 KB, and the frequency is lower. Total time improved by about 5%, though it's still much higher than when data_producer writes directly to a file.
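What the 50 ms timer buys is essentially read coalescing: many tiny chunks are accumulated and handed to the parser in one batch. A Qt-free sketch of that batching idea (class and method names are hypothetical, just to show the shape):

```cpp
#include <cassert>
#include <string>

// Accumulates small incoming chunks and hands them off in one batch
// when flush() is called (e.g. from a periodic timer), instead of
// paying the per-call overhead for every tiny chunk as it arrives.
class ChunkCoalescer {
public:
    void append(const std::string& chunk) { pending_ += chunk; }

    // Returns everything buffered so far and clears the buffer.
    std::string flush() {
        std::string batch;
        batch.swap(pending_);
        return batch;
    }

    std::size_t pendingBytes() const { return pending_.size(); }

private:
    std::string pending_;
};
```

In the actual Qt code the "append" happens implicitly inside QProcess's internal buffer, and the timer slot plays the role of flush() by calling proc.readStdout() once per tick.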
Setting the timer to a bigger interval (100-200 ms) does not give better results.
I can perform other tests if you want.
Thanks
Marco