First step: read all of the data into memory, creating one list/vector per column.
Second step: write the data to the output files one after the other. In this case you need a list of file names, but you only ever have one file open at a time.
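
A minimal sketch of that two-step approach in C++, assuming whitespace-separated values, a fixed column count, and made-up file names (input.txt, col0.txt, col1.txt, ...):

```cpp
#include <cstddef>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

int main() {
    // Step 1: read everything into memory, one vector per column.
    // The column count and the file names are assumptions for the example.
    const std::size_t numColumns = 15;
    std::vector<std::vector<std::string>> columns(numColumns);

    std::ifstream in("input.txt");
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream row(line);
        std::string value;
        for (std::size_t c = 0; c < numColumns && row >> value; ++c)
            columns[c].push_back(value);
    }

    // Step 2: write one output file per column; only one file is open at a time.
    for (std::size_t c = 0; c < numColumns; ++c) {
        std::ofstream out("col" + std::to_string(c) + ".txt");
        for (const std::string& value : columns[c])
            out << value << '\n';
    }
    return 0;
}
```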
This will work as long as the file fits into memory, and a 15 x 50000 entry table probably does (750,000 values at a few bytes each is only a handful of megabytes). If the files get too big to fit into memory, there are at least two more options:

1 - Open the file using memory mapping and let the OS take care of managing the memory. If the data is stored in the file in row x column order, then you might want to return to your original approach of keeping 10 - 15 output files open at once so the memory-mapped reads stay sequential and efficient (see the first sketch after this list).

2 - Don't load the file into memory at all, but read through it 10 - 15 times, once for each column (see the second sketch after this list).
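
For option 1, here is a rough, POSIX-only sketch of the memory-mapping side (on Windows you would use CreateFileMapping/MapViewOfFile instead). The file name is an assumption, error handling is minimal, and the actual row/column parsing is left as a comment:

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main() {
    // Map the whole file read-only and let the OS page it in and out.
    // "input.txt" is an assumed file name.
    int fd = open("input.txt", O_RDONLY);
    if (fd < 0) return 1;

    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return 1; }

    void* mapped = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (mapped == MAP_FAILED) { close(fd); return 1; }

    const char* data = static_cast<const char*>(mapped);
    const char* end  = data + st.st_size;

    // ... scan [data, end), split each row into columns, and append each
    //     value to its column's output file here ...
    (void)data; (void)end;

    munmap(mapped, st.st_size);
    close(fd);
    return 0;
}
```

For option 2, here is a sketch of the same conversion with essentially no memory footprint, re-reading the (assumed) input.txt once per column:

```cpp
#include <cstddef>
#include <fstream>
#include <sstream>
#include <string>

int main() {
    const std::size_t numColumns = 15;   // assumed column count

    // One full pass over the input per column; only the current line
    // and value are ever held in memory.
    for (std::size_t c = 0; c < numColumns; ++c) {
        std::ifstream in("input.txt");
        std::ofstream out("col" + std::to_string(c) + ".txt");

        std::string line;
        while (std::getline(in, line)) {
            std::istringstream row(line);
            std::string value;
            // Skip ahead to column c of this row (rows shorter than c
            // are not handled in this sketch).
            for (std::size_t i = 0; i <= c && row >> value; ++i) {}
            out << value << '\n';
        }
    }
    return 0;
}
```

The multi-pass version trades I/O for memory: it reads the input 10 - 15 times, so it is only worth it when the file genuinely cannot fit in RAM.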

Which option is best really depends on how often you need to do this conversion. If it is something you run once per batch of data, it hardly matters whether it takes 1 second or 10 seconds. If it is something you will run constantly (e.g. you are trying to keep up with real-time data acquisition), then you want it to be as fast as possible.