
How big is big for a QTable (2 million rows is too big)?



tuan
27th January 2007, 15:39
I am having a problem trying to print a vector of 2,000,000 rows x 4 columns of dates and floats and display it in a QTable. Around row 400,000 the application dumps core with the following error message:

std::bad_alloc()

Bizarrely enough :

myTable->setNumRows(2000000);
myTable->setNumCols(4);

is fine,

but the program cores in the middle of this loop :


for(int line = 0; line < 3000000; ++line)
{
    ....
    myTable->setText(line, 0, "dummy string");
    ....
}

I have 2 GB of RAM + 4 GB of swap, so I do not think that I am running out of memory.

Does anybody have an idea ?

Thanks.

Tuan.

tuan
27th January 2007, 15:42
Sorry for the typo, of course, one should read
for(int line=0; line < 2000000; ++line)

One more detail: when I just qWarning() the data, everything works fine; it only dumps core when I call setText(), so the problem really is localized to filling the table.

Thanks.

jacek
27th January 2007, 15:55
Can we see the backtrace?

http://doc.trolltech.com/3.3/qtable.html#notes-on-large-tables

tuan
27th January 2007, 22:07
Code :


void Form1::populateTable()
{
    myTable->setNumRows(2000000);
    myTable->setNumCols(3);
    for (int i = 0; i < 2000000; ++i)
    {
        for (int j = 0; j < 3; ++j)
        {
            myTable->setText(i, j, QString::number(i));
            qWarning("Line = " + QString::number(i) + " Col = " + QString::number(j));
        }
    }
}

Backtrace :

#0 0x289aeecb in kill () from /lib/libc.so.6
#1 0x288d4236 in raise () from /lib/libpthread.so.2
#2 0x289adb78 in abort () from /lib/libc.so.6
#3 0x2887e3f4 in __gnu_cxx::__verbose_terminate_handler () from /usr/local/lib/gcc-4.1.2/libstdc++.so.6
#4 0x2887c15c in __cxxabiv1::__terminate () from /usr/local/lib/gcc-4.1.2/libstdc++.so.6
#5 0x2887c1a4 in std::terminate () from /usr/local/lib/gcc-4.1.2/libstdc++.so.6
#6 0x2887c2ac in __cxa_throw () from /usr/local/lib/gcc-4.1.2/libstdc++.so.6
#7 0x2887c72a in operator new () from /usr/local/lib/gcc-4.1.2/libstdc++.so.6
#8 0x28585372 in QString::setLength () from /usr/X11R6/lib/libqt-mt.so.3
#9 0x28585718 in QString::grow () from /usr/X11R6/lib/libqt-mt.so.3
#10 0x28586e33 in QString::operator+= () from /usr/X11R6/lib/libqt-mt.so.3
#11 0x0804da65 in operator+ (s1=@0xbfbfe074, s2=0x804e778 " Col = ") at qstring.h:1046
#12 0x0804d700 in Form1::populateTable (this=0xbfbfe8c8) at form1.ui.h:23
#13 0x0804e3e7 in Form1::qt_invoke (this=0xbfbfe8c8, _id=60, _o=0xbfbfe140) at .moc/moc_form1.cpp:84
#14 0x282e16a8 in QObject::activate_signal () from /usr/X11R6/lib/libqt-mt.so.3
#15 0x282e1d19 in QObject::activate_signal () from /usr/X11R6/lib/libqt-mt.so.3
#16 0x285e8a50 in QButton::clicked () from /usr/X11R6/lib/libqt-mt.so.3
#17 0x2836d95e in QButton::mouseReleaseEvent () from /usr/X11R6/lib/libqt-mt.so.3
#18 0x283142bb in QWidget::event () from /usr/X11R6/lib/libqt-mt.so.3
#19 0x28285549 in QApplication::internalNotify () from /usr/X11R6/lib/libqt-mt.so.3
#20 0x28285805 in QApplication::notify () from /usr/X11R6/lib/libqt-mt.so.3
#21 0x28226f23 in QETWidget::translateMouseEvent () from /usr/X11R6/lib/libqt-mt.so.3
#22 0x28225ea3 in QApplication::x11ProcessEvent () from /usr/X11R6/lib/libqt-mt.so.3
#23 0x282375f0 in QEventLoop::processEvents () from /usr/X11R6/lib/libqt-mt.so.3
#24 0x2829993f in QEventLoop::enterLoop () from /usr/X11R6/lib/libqt-mt.so.3
#25 0x28299898 in QEventLoop::exec () from /usr/X11R6/lib/libqt-mt.so.3
#26 0x28284884 in QApplication::exec () from /usr/X11R6/lib/libqt-mt.so.3
#27 0x0804cd9d in main (argc=Cannot access memory at address 0x0
) at main.cpp:10

I'm confused. Thanks for your help.

wysota
27th January 2007, 22:40
I have 2 GB of RAM + 4 GB of swap, so I do not think that I am running out of memory.
This is not so obvious. First of all, on a 32-bit system you can't have more than 4 GB of memory "active" at once (more simply cannot be addressed), so the hard upper limit on resources for the application is 4 GB; more swap won't help. Furthermore, you can have per-process limits set on your system. Obviously operator new throws a bad_alloc exception, which means the allocator can't allocate the memory the process requires. Why this happens is of course a completely different question. Probably QTable allocates some extra memory when growing, which exceeds some limit and causes the exception to be thrown.
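You could, for instance, catch the exception around the fill loop to see exactly where the allocator gives up. A minimal sketch using the names from the first post (untested):

#include <new>  // std::bad_alloc

// Hypothetical: report the bad_alloc instead of letting it terminate the
// program, so you can see at which row the allocation fails.
int line = 0;
try {
    for ( ; line < 2000000; ++line )
        myTable->setText( line, 0, "dummy string" );
} catch ( const std::bad_alloc & ) {
    qWarning( "Allocation failed around row %d", line );
}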

jacek
27th January 2007, 23:17
Could you comment out that qWarning() and try valgrind --tool=massif (http://valgrind.org/docs/manual/ms-manual.html) --- this should tell you how much memory your program really uses.

Which Qt version do you use exactly?

tuan
28th January 2007, 09:43
Thanks for your time and advice....

I'm using Qt 3.3.6 and gcc 4.1 on FreeBSD 6.2

Here is the output of Valgrind. It is still gibberish to me at this point (I'm a Valgrind newbie).

$ valgrind --tool=massif /home/tuan/data/c++/test/huge_table/huge
==19668== Massif, a space profiler for x86-linux.
==19668== Copyright (C) 2003, Nicholas Nethercote
==19668== Using valgrind-2.1.2.CVS, a program supervision framework for x86-linux.
==19668== Copyright (C) 2000-2004, and GNU GPL'd, by Julian Seward.
==19668== For more details, rerun with: -v
==19668==
--19668-- INTERNAL ERROR: Valgrind received a signal 11 (SIGSEGV) - exiting
--19668-- si_code=0xC Fault EIP: 0xB8028EE0 (); Faulting address: 0xBFCEBFFC

valgrind: the `impossible' happened:
Killed by fatal signal
Basic block ctr is approximately 793900000
==19668== at 0xB802D8DD: (within /usr/local/lib/valgrind/stage2)
==19668== by 0xB802D8DC: (within /usr/local/lib/valgrind/stage2)
==19668== by 0xB802D8F4: vgPlain_core_panic (in /usr/local/lib/valgrind/stage2)
==19668== by 0xB8034360: (within /usr/local/lib/valgrind/stage2)

sched status:

Thread 1: status = Runnable, associated_mx = 0x0, associated_cv = 0x0
==19668== at 0x810390AF: operator new[](unsigned) (in /usr/local/lib/valgrind/vgpreload_massif.so)
==19668== by 0x815401C8: QString::QString(QChar const*, unsigned) (in /usr/X11R6/lib/libqt-mt.so.3)
==19668== by 0x8152DA95: qulltoa(unsigned long long, int, QLocalePrivate const&) (in /usr/X11R6/lib/libqt-mt.so.3)
==19668== by 0x8152E1AD: QLocalePrivate::longLongToString(long long, int, int, int, unsigned) const (in /usr/X11R6/lib/libqt-mt.so.3)

jacek
28th January 2007, 11:09
--19668-- INTERNAL ERROR: Valgrind received a signal 11 (SIGSEGV) - exiting
Too bad, Valgrind got killed. Reduce the number of rows to 100000 and try again, or better yet, prepare a minimal compilable example so we can test it too.

There are three possibilities:
- QTable eats a lot of space,
- you suffer from memory fragmentation,
- there's a memory leak somewhere.

tuan
28th January 2007, 16:50
Here is the code to be compiled :

On my 2 GB machine it works fine for numRows = 750000, but for numRows = 850000 I get the following message:

terminate called after throwing an instance of 'std::bad_alloc'
what(): St9bad_alloc

Jacek, I tried valgrind --tool=massif mycode; however, it cored. I think this might be due to the Valgrind port under FreeBSD, which somehow is not very stable.

Thanks a lot for your help.



#include <qapplication.h>
#include <qtable.h>


const int numRows = 850000;
const int numCols = 3;

int main( int argc, char **argv )
{
    QApplication app( argc, argv );

    QTable table( numRows, numCols );

    for ( int i = 0; i < numRows; ++i )
    {
        for ( int j = 0; j < numCols; ++j )
        {
            table.setText( i, j, QString::number(i) );
        }
    }

    app.setMainWidget( &table );
    table.show();
    return app.exec();
}

wysota
28th January 2007, 17:13
I managed to run the application with 850k rows.

According to top it occupied ~0.6 GB of memory. I have 512 MB of physical RAM and 1.2 GB of swap on my system (top reported about 53% memory usage, 428 MB virt + 224 MB res).

With 850k rows and 3 columns that gives 2,550,000 cells plus headers, so the ~600 MB works out to an average of roughly 235 bytes per cell. Of course you should subtract all the other objects from the above figures (~6 MB is occupied by the libraries themselves).

My guess is that you have some per-process limits set on your system.

What does "ulimit -a" return for you?

jacek
28th January 2007, 19:12
Jacek, I tried valgrind --tool=massif mycode; however, it cored. I think this might be due to the Valgrind port under FreeBSD, which somehow is not very stable.
Luckily it worked on my system. It seems there are two problems: QTableItems take a lot of space, and there is a problem with memory fragmentation --- note the size of the "heap-admin" strips.

I've managed to reduce the memory usage a bit by removing all of those null pixmaps that were created for each table item, but it probably won't be enough for you. Better try not to use QTableItems at all, as described here: http://doc.trolltech.com/3.3/qtable.html#notes-on-large-tables.


int main( int argc, char **argv )
{
    QApplication app( argc, argv );

    QPixmap p;
    QTable table( numRows, numCols );

    for ( int i = 0; i < numRows; ++i )
    {
        for ( int j = 0; j < numCols; ++j )
        {
            QTableItem *item = new QTableItem( &table, QTableItem::OnTyping,
                                               QString::number(i), p );
            table.setItem( i, j, item );
        }
    }

    app.setMainWidget( &table );
    table.show();

    return app.exec();
}

jacek
28th January 2007, 19:25
You can get even better results with:
for ( int i = 0; i < numRows; ++i )
{
    QString s( QString::number( i ) );
    for ( int j = 0; j < numCols; ++j )
    {
        QTableItem *item = new QTableItem( &table, QTableItem::OnTyping, s, p );
        table.setItem( i, j, item );
    }
}
But heap administration will still cost you a lot, because you create a lot of small objects.
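To avoid the per-cell objects entirely, the "notes on large tables" section linked above describes keeping the data outside the table and serving it on demand. A rough, untested sketch of that approach (the BigTable class and the cellContent() helper are made up here; check the exact virtual signatures against qtable.h):

#include <qtable.h>
#include <qpainter.h>

class BigTable : public QTable
{
public:
    BigTable( int rows, int cols, QWidget *parent = 0 )
        : QTable( rows, cols, parent ) {}

    // No per-cell storage: turn the item-management functions into no-ops.
    void resizeData( int ) {}
    QTableItem *item( int, int ) const { return 0; }
    void setItem( int, int, QTableItem * ) {}
    void clearCell( int, int ) {}

    // The table asks for cell text on demand instead of storing it.
    QString text( int row, int col ) const { return cellContent( row, col ); }

protected:
    void paintCell( QPainter *p, int row, int col, const QRect &cr,
                    bool selected, const QColorGroup &cg )
    {
        // Let QTable draw the background and grid, then draw our own text.
        QTable::paintCell( p, row, col, cr, selected, cg );
        p->setPen( selected ? cg.highlightedText() : cg.text() );
        p->drawText( 2, 0, cr.width() - 4, cr.height(),
                     Qt::AlignLeft | Qt::AlignVCenter, cellContent( row, col ) );
    }

private:
    QString cellContent( int row, int /*col*/ ) const
    {
        // In the real program this would index into the 2,000,000-row vector.
        return QString::number( row );
    }
};

With something like this you never call setText() at all, so the QTableItems (and their null pixmaps) simply never exist. The doc section lists a few more virtuals (takeItem(), insertWidget(), cellWidget(), clearCellWidget()) to reimplement if you need cell widgets or in-place editing.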

tuan
28th January 2007, 20:47
Thanks a bunch to Jacek and Wysota. I will try to follow your advice tomorrow. If it still does not work, I will probably try to display my vector in slices of 100K lines (the table displays the contents of a 2-million-line vector).
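A rough sketch of that slicing idea (the showSlice() helper, the sliceSize constant and the data vector are hypothetical; untested):

#include <qtable.h>
#include <qstring.h>
#include <vector>

const int sliceSize = 100000;

// Fill the table with one 100K-row window of the big vector at a time;
// a spin box or next/previous buttons would adjust the offset.
void showSlice( QTable *table, const std::vector<double> &data, int offset )
{
    int rows = QMIN( sliceSize, int( data.size() ) - offset );
    table->setNumRows( rows );
    table->setNumCols( 1 );
    for ( int i = 0; i < rows; ++i )
        table->setText( i, 0, QString::number( data[ offset + i ] ) );
}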

Best regards and thank you both for your time and expertise.

Tuan.

tuan
28th January 2007, 20:49
And to answer Wysota:

$ ulimit -a
cpu time (seconds, -t) unlimited
file size (512-blocks, -f) unlimited
data seg size (kbytes, -d) 524288
stack size (kbytes, -s) 65536
core file size (512-blocks, -c) unlimited
max memory size (kbytes, -m) unlimited
locked memory (kbytes, -l) unlimited
max user processes (-u) 5547
open files (-n) 11095
virtual mem size (kbytes, -v) unlimited
sbsize (bytes, -b) unlimited

wysota
28th January 2007, 23:16
data seg size (kbytes, -d) 524288
stack size (kbytes, -s) 65536

You have a 512 MB limit on the data segment size and a 64 MB limit on the stack size. The former causes the exception to be thrown. You have to stay below that limit or ask your sysadmin to remove it. Of course the first solution is much better :) The same thing causes Valgrind to collapse.
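For what it's worth, the limit can also be checked from inside the program with getrlimit() from <sys/resource.h>. A small sketch (it should report, in bytes, the same 512 MB soft limit that the ulimit output above shows):

#include <sys/resource.h>
#include <cstdio>

int main()
{
    struct rlimit rl;
    // RLIMIT_DATA is the data segment limit, the one operator new runs into here.
    if ( getrlimit( RLIMIT_DATA, &rl ) == 0 )
        std::printf( "data segment limit: soft %lu, hard %lu bytes\n",
                     (unsigned long) rl.rlim_cur, (unsigned long) rl.rlim_max );
    return 0;
}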

tuan
29th January 2007, 08:59
Thanks Wysota! Now I have tried to do something like this:

$ ulimit -s unlimited
ulimit: bad limit: Operation not permitted

Can you please tell me how to set the heap size to unlimited (I'm the administrator of my machine)? I've searched the net without really finding relevant information on that. Thanks a lot.

wysota
29th January 2007, 10:11
$ ulimit -s unlimited
ulimit: bad limit: Operation not permitted

You have to be root to do that. If any user could alter the limit then there wouldn't be much sense in setting that limit in the first place.

I suggest you follow Jacek's hints though; you'll gain more by reducing the memory requirements of your application than by lifting resource limits.