hi, I tried reading a 200 MB text file with the memblock solution above. It goes fast.
Then I tried to read the file and put it into a vector; this goes very slow (I never saw the copy finish!):

vector<string> text;
copy( it_file, eos, inserter( text, text.begin() ) );
I tried this "copy" operation because I also need to sort the lines of the file. My idea was to put the lines in a vector and then call sort() on it, but the reading is far too slow and I need speed.
I'm thinking of implementing my own sort directly on memblock, but maybe it won't be as fast as sorting a vector... What do you suggest, please?
Regards
What is the format of the file you want to read and what operations do you need to perform?
hi, my file contains lines of text; at the moment I'd like to order the lines of the file (line1 < line2).
Regards
First of all, there's a small application called sort that will do that for you, but I'm not sure if it's available on Windows.
If you still want to do it yourself, it sounds like a job for merge sort. You can save some time by starting with the biggest blocks that fit into your RAM and sorting them using quicksort or a similar algorithm.
If you know the maximum line length, or at least its approximate value, you can create a std::vector< std::string > of some constant size and preallocate space in every string (you can do that using the two-parameter version of std::vector's constructor).
I'm confused; I know the maximum length of a line (1 KB) but I cannot know the file size in advance... When I say vector<string> I mean that every element of the vector is a line of the file.
Are you saying to do something like vector<string> line (10, ""); ?? (If so, I have allocated 10 lines.. and I don't know how many lines the file could have.)
Regards
Luckily it isn't required for merge sort.
Yes. To be exact, something like the code below, where MAX_LINE_LEN = 1024 and NLINES is around 256k. Remember that the point is to avoid reallocations and swapping.

std::string temp;
temp.reserve( MAX_LINE_LEN );
std::vector< std::string > lines( NLINES, temp );

// or, if std::string tries to be smart:
std::vector< std::string > lines( NLINES );
for( std::vector< std::string >::iterator line = lines.begin(); line != lines.end(); ++line )
    line->reserve( MAX_LINE_LEN );
This way you can read 256k lines into preallocated space, which should be fast enough. Then you can sort that vector using std::sort, dump everything into a temporary file and read another set of lines. And so on until the end of the file. When you are finished you can free the memory and continue with the merge sort algorithm.
No, that's the first step, which allows you to save some time.
Instead of starting the merge sort algorithm with blockSize = 1, you can start it with blockSize = 256000, but you'll need a file that contains sorted blocks of 256000 lines. That's where you are going to use the vector and sort().
1. f := [A, B].
2. fileId := 0.
3. while not end of file X:
   - vector := read 256000 lines from file X,
   - sort vector,
   - write vector to file f[fileId],
   - fileId := 1 - fileId.
4. blockSize := 256000.
5. go to 3 from post #23.
Merge sort is one of the basic algorithms that every programmer should know. If you need more explanations, see Algorithms + Data Structures = Programs by Niklaus Wirth.
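Just to illustrate the idea, a rough C++ sketch of that first pass could look like the code below. The file names, the NLINES value and the main() wrapper are only placeholders of mine, and error handling is left out:

#include <algorithm>
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

int main()
{
    const std::size_t NLINES = 256000;              // lines per block (placeholder value)

    std::ifstream in( "X.txt" );                    // the big unsorted file
    std::ofstream outA( "A.txt" ), outB( "B.txt" ); // the two temporary files
    std::ofstream *out[2] = { &outA, &outB };
    int fileId = 0;

    std::vector< std::string > lines;
    lines.reserve( NLINES );

    std::string line;
    while( in )
    {
        // read up to NLINES lines into memory
        lines.clear();
        while( lines.size() < NLINES && std::getline( in, line ) )
            lines.push_back( line );
        if( lines.empty() )
            break;

        // sort the block in memory and append it to the current temporary file
        std::sort( lines.begin(), lines.end() );
        for( std::size_t i = 0; i < lines.size(); ++i )
            *out[fileId] << lines[i] << '\n';

        fileId = 1 - fileId;                        // alternate between A and B
    }
    return 0;
}

After this pass A and B already contain sorted blocks of 256000 lines, so the merge phase can start with blockSize = 256000 instead of 1.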
sorry, last question: does your solution rely on many (file_size / block_size) temporary files? Because I only understood that now..
Regards
sorry, but I'm starting to think that what I don't understand isn't the merge sort itself...
You said that I can do it with 2 files, A and B. So I work with 256000 lines: I sort them and put them in file A; then I read another 256000 lines, sort(), and put them in file B. Now I use merge sort to merge the two files and write them into the first 512000 lines of file X (256000 * 2 = 512000); now the first 512000 lines of X are sorted! But the file isn't finished. Here begins what I don't understand... (I'm thinking this:) rewind X, A, B. From line 512001 of X: take a block of 256000 lines, sort, copy to A; take the next block of 256000 lines of X, sort, copy to B; merge A and B and overwrite lines 512001 to 1024000; take another block of 256000 lines of X starting from line 1024001: there are only 120000 lines left! Sort them. Restart with blockSize = 256000 * 2 = 512000: copy lines 1 to 512000 to A (they're sorted); copy lines 512001 to 1024000 to B (sorted too); merge A and B and write 1024000 lines to X; take the lines from 1024001 to the end: copy the 1024000 sorted lines to A and the 120000 lines to B; merge A and B and write to X. Stop. Is that it? It seems slow to me...
Regards
This doesn't make much sense... In most cases (for users who have dropped the text-based interface) the OS/desktop/background tasks occupy about half of the available memory (sometimes more). Thus it is highly recommended not to load a whole file that is bigger than about 40% of your available memory (even though it is possible). In such cases the best way is to read pieces of the file, perform some tasks on them and then discard them before loading new pieces... It can be a little tricky to sort lines of text this way, but it's still doable.
OK! But I still don't understand this:
1. I don't know the number of lines (I only know the file's size, using the instructions below). So I cannot preallocate space for them. I do know the maximum length of one line, though, so I can reserve that size for a string line.

ifile.seekg( 0, ios::end );
size = ifile.tellg();
Also, I only have to sort line by line (not the words inside a line and then line by line).
2. After that, in your opinion, can I use

memblock = new char [size];
ifile.read( memblock, size );

to do all the work (I still think it will be faster)??
Regards
EDIT: OK. Like the other user, I couldn't understand the trick of working on blocks of 256k.
Suppose you have file X and you want to sort it. Using merge sort you would do it this way:
1. blockSize := 1.
2. while not end of file X:
   - copy blockSize lines to file A,
   - if no more lines in file X:
     - END: file A is sorted.
   - copy blockSize lines to file B.
3. rewind files X, A and B.
4. while not end of file A and B:
   - merge two blocks from A and B and write them to X.
5. rewind files X, A and B.
6. blockSize := 2 * blockSize.
7. go to 2.
For example, let X= 42, 24, 67, 45, 79, 33, 76, 85, 29, 59, 26, 92, 30, 56, 81, 27.
Iteration 1:
A = 24, 45, 33, 85, 59, 92, 56, 27
B = 42, 67, 79, 76, 29, 26, 30, 81
X = 24, 42, 45, 67, 33, 79, 76, 85, 29, 59, 26, 92, 30, 56, 27, 81
Now X consists of 2-element sorted blocks.
Iteration 2 (this time blockSize is 2):
A = 24, 42, 33, 79, 29, 59, 30, 56
B = 45, 67, 76, 85, 26, 92, 27, 81
X = 24, 42, 45, 67, 33, 76, 79, 85, 26, 29, 59, 92, 27, 30, 56, 81
Now 4-element blocks are sorted.
Iteration 3 (blockSize = 4):
A = 24, 42, 45, 67, 26, 29, 59, 92
B = 33, 76, 79, 85, 27, 30, 56, 81
X = 24, 33, 42, 45, 67, 76, 79, 85, 26, 27, 29, 30, 56, 59, 81, 92
Final iteration:
A = 24, 33, 42, 45, 67, 76, 79, 85
B = 26, 27, 29, 30, 56, 59, 81, 92
X = 24, 26, 27, 29, 30, 33, 42, 45, 56, 59, 67, 76, 79, 81, 85, 92
Note that:
- you don't have to read whole blocks to merge them,
- you don't have to start with blockSize = 1 (you can use some more efficient algorithm to sort blocks in memory, dump them into a file and then continue with merge sort),
- when the size of the input file isn't a power of 2, you will have a block that is shorter than blockSize.
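If it helps to see the merge step in code, here is a rough sketch of merging one pair of sorted blocks (at most blockSize lines from A and from B) back into X. The function names nextLine and mergeBlocks are my own and only meant as an illustration:

#include <cstddef>
#include <fstream>
#include <string>

// Read the next line of the current block from 'in'; returns false when the
// block (at most blockSize lines) or the file itself is exhausted.
static bool nextLine( std::ifstream &in, std::string &line,
                      std::size_t &count, std::size_t blockSize )
{
    if( count >= blockSize )
        return false;
    if( !std::getline( in, line ) )
        return false;
    ++count;
    return true;
}

// Merge one sorted block from 'a' and one sorted block from 'b' into 'x'.
// Either block may be shorter than blockSize (or empty) near the end of the file.
void mergeBlocks( std::ifstream &a, std::ifstream &b, std::ofstream &x,
                  std::size_t blockSize )
{
    std::string lineA, lineB;
    std::size_t readA = 0, readB = 0;
    bool haveA = nextLine( a, lineA, readA, blockSize );
    bool haveB = nextLine( b, lineB, readB, blockSize );

    while( haveA || haveB )
    {
        // pick the smaller of the two current lines (or whatever is left)
        if( haveA && ( !haveB || lineA <= lineB ) )
        {
            x << lineA << '\n';
            haveA = nextLine( a, lineA, readA, blockSize );
        }
        else
        {
            x << lineB << '\n';
            haveB = nextLine( b, lineB, readB, blockSize );
        }
    }
}

The outer loop from the algorithm above would call mergeBlocks() repeatedly until both A and B are exhausted, then rewind the files, double blockSize and repeat.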
The thing becomes interesting....
There's still one point I haven't got clear: how do I put the lines of file X into the vector of lines?
When I speak of "speed" I mean that I know read a file line by line isn't so efficent..(!?)Qt Code:
while( getline (ifile, temp, '\n') ) { lines[l] = temp; l++; }To copy to clipboard, switch view to plain text mode
Then you said to work with 3 files: but is file access and working on files really that efficient? (Maybe it would be better to use 2 vectors instead of 2 files?)
thanks
Regards
I would use operator>>.
If you read a 2 GB file into 512 MB of RAM, over 3/4 of it will land back on the hard drive in a swap partition/file. If you divide that file into several blocks, sort them in memory and finally merge them, it should be faster than sorting a vector of which 3/4 is in swap.
If you write each sorted part of the file into a separate temporary file and then merge all of those files at once (not two at a time as in classical merge sort), you will need only one merge operation. This means that you will have to read and write every line from/to the hard drive only twice.
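A minimal sketch of that single multi-way merge, assuming the sorted chunks were written to files called chunk0.txt, chunk1.txt, ... (the file names and the std::priority_queue approach are my own choices for the example, not something the method requires):

#include <cstddef>
#include <fstream>
#include <functional>
#include <queue>
#include <string>
#include <utility>
#include <vector>

int main()
{
    // names of the pre-sorted temporary files (placeholders)
    const char *chunkNames[] = { "chunk0.txt", "chunk1.txt", "chunk2.txt" };
    const std::size_t nChunks = sizeof( chunkNames ) / sizeof( chunkNames[0] );

    // min-heap of (current line, index of the chunk it came from)
    typedef std::pair< std::string, std::size_t > Entry;
    std::priority_queue< Entry, std::vector< Entry >, std::greater< Entry > > heap;

    std::vector< std::ifstream* > chunks;
    for( std::size_t i = 0; i < nChunks; ++i )
    {
        chunks.push_back( new std::ifstream( chunkNames[i] ) );
        std::string line;
        if( std::getline( *chunks[i], line ) )
            heap.push( Entry( line, i ) );
    }

    std::ofstream out( "sorted.txt" );
    while( !heap.empty() )
    {
        // write out the smallest current line and refill from the same chunk
        Entry smallest = heap.top();
        heap.pop();
        out << smallest.first << '\n';

        std::string line;
        if( std::getline( *chunks[smallest.second], line ) )
            heap.push( Entry( line, smallest.second ) );
    }

    for( std::size_t i = 0; i < nChunks; ++i )
        delete chunks[i];
    return 0;
}

Each input line is written once into its sorted chunk and once into the final output, which is where the "only twice" figure comes from.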