Hi caduel,
thanks for your suggestions -- memory mapping the file would be an option that could circumvent the problem. However, we have a large existing codebase based on QDataStream, QFile and so on for loading these large files, and we would like to keep using it.
The function call
fcntl(file.handle(), F_NOCACHE, 1);
successfully disables caching, but why does read performance suffer that much -- is Qt doing something else behind the scenes?
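For context, the Qt-side read path that shows the slowdown looks roughly like this (a minimal sketch only -- the readUncached() helper, the 64 MB chunk size and the bare-bones error handling are placeholders, the real loading code is more involved):

#include <QFile>
#include <QDataStream>
#include <fcntl.h>
#include <vector>

// Sketch: stream a large file through QFile/QDataStream with caching disabled.
void readUncached(const QString &src)
{
    QFile file(src);
    if (!file.open(QIODevice::ReadOnly))
        return;
    fcntl(file.handle(), F_NOCACHE, 1);               // same call as above, macOS-specific

    QDataStream in(&file);
    std::vector<char> chunk(1024u * 1024u * 64u);     // 64 MB per read (illustrative)
    while (!in.atEnd())
        in.readRawData(chunk.data(), static_cast<int>(chunk.size()));
}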
When using standard C calls for reading a large file, caching can be disabled while retaining the original high read performance:
int fd = open(src.toStdString().c_str(), O_RDONLY);   // requires <fcntl.h>
fcntl(fd, F_GLOBAL_NOCACHE, 1);                       // macOS: global variant of F_NOCACHE for this file
size_t bufferSize = 1024ul*1024ul*2000ul;             // ~2 GB read buffer
char* buffer = new char[bufferSize];
read(fd, buffer, bufferSize);                         // requires <unistd.h>; return value ignored here for brevity
close(fd);
delete[] buffer;
Any ideas why disabling caching slows down Qt but not the original C calls?