PDA

View Full Version : QVector or Database



baray98
7th December 2007, 22:08
Guys,

I have an app that decompresses a data file and plots the data, and it's really slow.

Here's the given:
the compressed file is about 100 MB, which is about 130,000 records
the uncompressed file would be about 400 MB
my plotter can browse the whole uncompressed file, 200 records at a time (maximum)

Here's what I did:
decompress the input file, then save the records in a vector (QVector)
connect my plotter using an index into the vector (this is very slow)

I guess my other option is to use a database: uncompress all the data, save it into the database, then connect my plotter. Do you think this will improve my browsing speed?

Is there any other good way of handling this stress of my life?

baray98

jacek
7th December 2007, 22:20
What kind of data do you have? What do you mean exactly by "browsing"?

wysota
7th December 2007, 22:25
No, I don't think using a database will cause any speed improvement.

You need to find the bottleneck in your application and think about how to remove it. You should certainly implement some caching. QGraphicsView is a good candidate as a replacement for your current plotting component.
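The caching idea can be sketched like this, using only the standard library so the sketch is self-contained. CompressedRecord, Record, and decompress() are hypothetical stand-ins for the poster's actual types, and std::vector stands in for QVector; the point is to decompress a record only when the plotter first asks for it, and remember the result instead of decompressing everything up front:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-ins for the poster's real types.
struct CompressedRecord { std::string payload; };
struct Record { std::string data; };

// Placeholder "decompression" so the sketch runs; the real app would
// call its actual decoder here.
Record decompress(const CompressedRecord& c) { return Record{c.payload}; }

// Decompress lazily: keep only the records the plotter has asked for,
// rather than holding all 130,000 decompressed records in memory.
class RecordCache {
public:
    explicit RecordCache(const std::vector<CompressedRecord>& source)
        : source_(source) {}

    const Record& at(std::size_t index) {
        auto it = cache_.find(index);
        if (it == cache_.end())  // first access: decompress and remember
            it = cache_.emplace(index, decompress(source_[index])).first;
        return it->second;
    }

    std::size_t cachedCount() const { return cache_.size(); }

private:
    const std::vector<CompressedRecord>& source_;
    std::map<std::size_t, Record> cache_;  // a real cache would also evict
                                           // entries far from the view
};
```

A real implementation would bound the cache (for example, evict records far from the currently browsed window), but even this unbounded version avoids paying the full 400 MB decompression cost before the first plot appears.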

baray98
8th December 2007, 19:53
I implemented my browsing mechanism as below:

a vector of records as the source (vectorRec)
input from the user: the starting record (startRec)
input from the user: the number of records to show from the starting record (recToShow)

I have a loop like this:


for (int i = 0; i < recToShow; ++i)
{
    record = vectorRec.at(i + startRec); // get a record from the source vector
    record.decompress();                 // decompress it
    plotData(record);                    // hand the decompressed record to the plotter
}

I noticed that if my vector has a smaller count, say 30,000 records, I browse faster than with 60,000 records. Of course I kept the number of records to show constant in these tests.

It seems that the bottleneck is retrieving records from my vector. If I have to implement some simple caching, I might just split the records into vectors of 10,000 each. What do you think, guys? Will it help if I split it up?

baray98

wysota
8th December 2007, 20:24
It seems that the bottleneck would be the retrieving of records from my vectors
Don't guess. Check it using a profiler.
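Before reaching for a full profiler, a quick sanity check is to time the suspect steps directly. A minimal sketch using std::chrono (modern C++; at the time of the thread, QTime::elapsed() would play the same role), where the callable passed in is whatever step is under suspicion, such as the vector lookup or the per-record decompression:

```cpp
#include <chrono>

// Rough-and-ready timing of one suspect section, as a sanity check
// before a full profiler run. 'work' is whatever step is under
// suspicion, e.g. the vector lookup or the per-record decompression.
template <typename Fn>
long long elapsedMicroseconds(Fn work) {
    auto start = std::chrono::steady_clock::now();
    work();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(stop - start)
        .count();
}
```

Timing the lookup and the decompression separately would immediately show which of the two grows with the vector size.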

baray98
9th December 2007, 08:45
I tried using gprof. I am very new to this profiler stuff, got the "undefined reference to mcount" problem, and was stuck; I don't know how to compile with profiling on.

Here is what I did:

Since I am using GNU tools, I added the -pg option to CXXFLAGS in my makefile. Are there some more settings that I missed? Or how can I get rid of the mcount problem?

baray98

jpn
9th December 2007, 09:19
Wiki: Profiling
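The "undefined reference to mcount" error above usually means -pg reached the compile step but not the link step; gprof instrumentation needs the flag in both. With qmake, a sketch of the fix (assuming the project builds from a .pro file):

```
# In the .pro file; re-run qmake and rebuild from clean afterwards
QMAKE_CXXFLAGS += -pg
QMAKE_LFLAGS   += -pg
```

Editing CXXFLAGS in the generated makefile by hand has the same compile-only problem and is overwritten the next time qmake runs, which is why the flags belong in the project file.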

Thomas
4th January 2008, 22:08
I noticed that if my vector has a smaller count, say 30,000 records, I browse faster than with 60,000 records. Of course I kept the number of records to show constant in these tests.

It seems that the bottleneck is retrieving records from my vector. If I have to implement some simple caching, I might just split the records into vectors of 10,000 each. What do you think, guys? Will it help if I split it up?

baray98

A shot in the dark: I would think that you have a memory problem. Please check the amount of data you want to cache. If it is more than the amount of physical memory in your computer, the OS will probably start swapping out inactive pages, which can take a noticeable amount of time.
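The figures from the first post already hint at the scale. A back-of-the-envelope sketch (the per-record size is derived from the stated totals, not measured):

```cpp
// Back-of-the-envelope estimate of the decompressed working set, from
// the figures in the thread: ~400 MB of uncompressed data spread over
// ~130,000 records.
const long long kUncompressedBytes = 400LL * 1024 * 1024;  // ~400 MB
const long long kRecordCount = 130000;
const long long kBytesPerRecord = kUncompressedBytes / kRecordCount;  // ~3.2 KB
```

Holding all of that decompressed, plus the 100 MB compressed source, would have been a large fraction of the RAM in a typical 2007-era machine, so swapping is a plausible explanation for browsing slowing down as the vector grows.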