Qt MySQL big data insertion



^NyAw^
4th July 2013, 10:47
Hi,

I'm developing an application that stores images in a MySQL database. It takes images from a camera, inspects them, and saves the "detected" zones as small images in the database. To speed up the process it writes the raw camera buffer (not JPG, BMP, ...) together with the image width and height, so the images can be displayed when needed. The camera images are 8000x3500 px grayscale (one byte per pixel) and a "detection" image can be 1000x3500 px, which is about 3.3 MB.
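Roughly, the storage step looks like this (a simplified sketch, not the real code; the table and column names are made up):

#include <QtSql>

// One row per detection: the raw 8-bit pixels plus the dimensions needed
// to rebuild an image later. "detections" is a placeholder table name.
bool storeDetection(QSqlDatabase &db, const uchar *buf, int w, int h)
{
    QSqlQuery q(db);
    q.prepare("INSERT INTO detections (width, height, pixels) "
              "VALUES (?, ?, ?)");
    q.addBindValue(w);
    q.addBindValue(h);
    // fromRawData avoids copying the camera buffer; the buffer must stay
    // alive until exec() has run.
    q.addBindValue(QByteArray::fromRawData(
                       reinterpret_cast<const char *>(buf), w * h));
    return q.exec();
}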

The problem I'm getting is the execution speed of the SQL insertion.
I'm currently using a multi-row INSERT inside a transaction.
Inserting 42 "detection" images takes anywhere from 350 ms to 2800 ms across runs.
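By "multi-row INSERT" I mean one prepared statement with several VALUES tuples, wrapped in a transaction. Cut down to three rows it looks something like this (a sketch; the vectors stand in for my real data):

#include <QtSql>

// Simplified to three rows; the real code builds 42 tuples.
bool insertBatch(QSqlDatabase &db,
                 const QVector<int> &widths,
                 const QVector<int> &heights,
                 const QVector<QByteArray> &blobs)
{
    if (!db.transaction())
        return false;
    QSqlQuery q(db);
    q.prepare("INSERT INTO detections (width, height, pixels) "
              "VALUES (?, ?, ?), (?, ?, ?), (?, ?, ?)");
    for (int i = 0; i < 3; ++i) {
        q.addBindValue(widths[i]);
        q.addBindValue(heights[i]);
        q.addBindValue(blobs[i]);   // one raw-image QByteArray per row
    }
    if (q.exec() && db.commit())
        return true;
    db.rollback();
    return false;
}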

Have you any idea how to improve this?

Thanks,

ChrisW67
4th July 2013, 10:53
You are writing 42 x 3.3 MB (about 140 MB) plus the 28 MB original image: roughly 160 MB in total. I think taking less than 3 seconds to transfer that to the RDBMS is not bad, especially if there is a network involved.

^NyAw^
4th July 2013, 11:24
Hi,

The original image is not saved, only the "detected" images. The database is on the same computer, a Windows XP machine, so it will use shared memory to transfer the data, right?

Thanks,

ChrisW67
4th July 2013, 22:18
It might use any of a number of ways to transfer the data from client to server. If the server is started with the --shared-memory option and the client is using the equivalent of the --protocol=MEMORY option then yes, shared memory should be used. It is entirely possible you are talking via the loopback interface.
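From the Qt side that would look something like the snippet below. This is untested: the underlying MySQL C API option is MYSQL_OPT_PROTOCOL, but check whether the QMYSQL plugin of your Qt version accepts it in setConnectOptions(); "inspection" and the credentials are placeholders.

#include <QtSql>
#include <QtDebug>

// The server must have been started with --shared-memory for this to work.
QSqlDatabase db = QSqlDatabase::addDatabase("QMYSQL");
db.setHostName("localhost");        // shared memory only works for local clients
db.setDatabaseName("inspection");   // placeholder schema name
db.setUserName("user");
db.setPassword("password");
db.setConnectOptions("MYSQL_OPT_PROTOCOL=MEMORY");
if (!db.open())
    qDebug() << db.lastError().text();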

You might also see if performing the set of inserts inside a single database transaction has an impact. You get one commit cycle rather than 42.

I have never used a MySQL server on XP, so I cannot vouch for its performance.

^NyAw^
5th July 2013, 12:01
Hi,



It might use any of a number of ways to transfer the data from client to server. If the server is started with the --shared-memory option and the client is using the equivalent of the --protocol=MEMORY option then yes, shared memory should be used

I have tried starting the server with "--shared-memory" and using setConnectOptions("--protocol=MEMORY"), and it still takes too much time.



You might also see if performing the set of inserts inside a single database transaction has an impact. You get one commit cycle rather than 42.

I'm already using a transaction: the multi-row insertion together with the transaction() and commit() calls.

Using the debugger I can see that the time is spent in "qsql_mysql.cpp", in the method "bool QMYSQLResult::exec()", at the line "r = mysql_stmt_execute(d->stmt);".

I have also tried converting the images to JPG to reduce the size of the data, but that needs much more CPU time than storing the raw data.

Maybe scaling the images to half size and storing the scaling factor in the database would be a good solution, accepting that the stored images would not be as good quality as the originals.
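A rough, untested sketch of that idea, assuming the 8-bit grayscale buffers described above (in Qt 4 that means Format_Indexed8 with a gray palette); the scale factor of 2 would be stored next to the width and height:

#include <QtGui>

// buf points at an 8-bit grayscale camera buffer of w x h pixels.
QByteArray halfSizeBlob(const uchar *buf, int w, int h)
{
    QImage full(buf, w, h, w /* bytes per line */, QImage::Format_Indexed8);
    QVector<QRgb> gray(256);
    for (int i = 0; i < 256; ++i)
        gray[i] = qRgb(i, i, i);
    full.setColorTable(gray);

    // Fast (nearest-neighbour) scaling keeps the 8-bit format; smooth
    // scaling can convert to 32-bit internally, so check half.format()
    // if you switch to Qt::SmoothTransformation.
    QImage half = full.scaled(w / 2, h / 2);

    // QImage pads scanlines to 4-byte boundaries; copy row by row if
    // (w / 2) is not a multiple of 4.
    return QByteArray(reinterpret_cast<const char *>(half.constBits()),
                      half.byteCount());
}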

Thanks.