I can confirm the observation - I am also moving data from a SQL Server DB via ODBC to a Postgres database.

The following code executes fairly quickly:

Qt Code:
  QSqlDatabase odbcsql = QSqlDatabase::addDatabase("QODBC", "odbcsql");

  ...

  QSqlDatabase odbcsql = QSqlDatabase::database("odbcsql");

  // read
  QSqlTableModel tabModelSource(nullptr, odbcsql);
  tabModelSource.setTable(sTableSource);
  if (!tabModelSource.select()) {
      qCritical() << tabModelSource.lastError();
      return;
  }

  while (tabModelSource.canFetchMore()) {
      tabModelSource.fetchMore();
  }
  // note: rowCount() only reflects rows fetched so far, so query it after the loop
  int r = tabModelSource.rowCount();
  qDebug() << "Total number of rows =" << r;

This code takes less than a second for 160,000 entries - far too many records to transfer in one second over my slow internet connection. So apparently the data is not yet transferred and cached in the calls to fetchMore().

Now, when I start to access the data:

Qt Code:
  qDebug() << "Caching table data";
  std::vector<QList<QVariant>> data;
  for (int i = 0; i < r; ++i) {
      QSqlRecord rec(tabModelSource.record(i)); // <--- very slow access
      QList<QVariant> vals;
      for (int j = 0; j < rec.count(); ++j)
          vals.append(rec.value(j));
      data.push_back(vals);
  }

Every time I access a record, the data crawls over my internet connection at about 30...56 kB/s (and my network really isn't that slow!).

Conclusion:

1. The ODBC driver doesn't do any caching; each record access seems to trigger its own round trip
2. The ODBC driver is therefore basically unusable for real-life data sets in Qt's model/view classes

Questions:

Does anyone know how to tell the ODBC driver to retrieve the data in larger chunks, instead of value by value?
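One thing that might be worth a try (untested on my side, and the driver may simply ignore it) is raising the ODBC network packet size via the connect options that the QODBC plugin passes through to the driver; it has to be set before the connection is opened:

```cpp
// Speculative: SQL_ATTR_PACKET_SIZE is one of the connect options the QODBC
// plugin forwards to the ODBC driver; whether it helps depends on driver/DBMS.
QSqlDatabase odbcsql = QSqlDatabase::addDatabase("QODBC", "odbcsql");
odbcsql.setConnectOptions("SQL_ATTR_PACKET_SIZE=32768"); // before open()
```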

-Andreas