
View Full Version : GUI Freezes unless there's incoming data



crisp
25th January 2009, 22:03
Hi all

I'm programming a basic IRC client, and have an IRC api/library, the Qt Designer generated interface file, and a separate module in the middle to link the two into some kind of functionality.

In the IRC library, I have a member function that checks a socket I've set up for any incoming data:



def recvLoop(self):
    try:
        data = self.irc.recv(4096)
        if data.find('PING') != -1:
            self.irc.send('PONG ' + data.split()[1] + '\r\n')
        if len(data) > 0:
            incomingBuffer.put(data)
            #print data
            #time.sleep(1)
    except:
        print 'Error: Possible interrupt.'
Then in the main module (mainwindow.py) that uses the IRC library, I have the following:



@pyqtSignature("")
def on_connectButton_clicked(self):
    self.displayBrowser.append('Connecting...')
    client = crisp_irc.IRC()
    client.newConnection('irc.oftc.net', 6667, 'crispycream', 'crisp', 'crisp')
    client.join('#crasp')
    client.channelSend('#crasp', 'blah')
    client.set_recv_function(crisp_irc.my_receive)

    while 1:
        client.recvLoop()
        QApplication.processEvents()
        if crisp_irc.incomingBuffer.qsize() > 0:
            print crisp_irc.incomingBuffer.qsize()
            item = crisp_irc.incomingBuffer.get()
            #print item
            self.displayBrowser.append(item)
            crisp_irc.incomingBuffer.task_done()
        QApplication.processEvents()

...

if __name__ == "__main__":
    import sys

    app = QApplication(sys.argv)
    form = CrispIRCMainWindow()

    form.show()
    app.exec_()

So if there's data, the IRC lib puts it into a (global, for now) buffer, and mainwindow.py checks whether that buffer is empty. If it isn't, it appends the contents to the main display window.

The QApplication.processEvents() calls have solved half of my problem. The remaining problem is that the GUI stays frozen unless the buffer has data in it, at which point it updates everything as it should, then goes straight back to a frozen, crash-like state. The client stays connected throughout and still receives messages etc., hence it updates whenever there's something to show.

How do I get rid of this behaviour (I've tried sleeping for a second etc, no dice)? Is there a better way to pass data along to the GUI from the library I've made? Many many thanks in advance!!

EDIT: Maybe there's a way to detect when there's data to be received, and only execute the function when there is something coming down the socket? Kind of data-driven, rather than a constant infinite loop that keeps polling things, which is possibly the source of my problem?
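For what it's worth, Python's standard library can do exactly this kind of data-driven check with select; a minimal sketch (the socketpair stands in for the real IRC socket, and has_incoming_data is a name made up for illustration):

```python
import select
import socket

def has_incoming_data(sock, timeout=0.0):
    """Return True if the socket has data waiting, without blocking."""
    readable, _, _ = select.select([sock], [], [], timeout)
    return bool(readable)

# Illustration with a local socket pair (stands in for the IRC socket):
a, b = socket.socketpair()
print(has_incoming_data(b))                 # False: nothing sent yet
a.sendall(b'PING :server\r\n')
print(has_incoming_data(b, timeout=1.0))    # True: data is now waiting
```

With a zero timeout the call returns immediately, so it can be placed in a timer slot or a polling loop without ever freezing the GUI.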

Cruz
25th January 2009, 23:49
Your gui freezes because you are keeping the gui thread busy with an infinite loop in an event handler. It only works because you step the gui thread forward sometimes with a processEvents().

A much nicer way would be to fork out a parallel process that handles the IRC communication and feeds the data into a buffer. The gui thread should only read from that buffer without performing any unnecessary calculations and show the data on the screen.

Or at least make sure that you are using non-blocking network communication, so that the processEvents() calls are reached regularly, even if there is no data.
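In Python, non-blocking mode means recv() raises instead of hanging, so a poll can return immediately when there's nothing to read. A sketch under Python 3 (where the exception is BlockingIOError; in the Python 2 of this thread it was socket.error with EWOULDBLOCK):

```python
import socket
import time

a, b = socket.socketpair()
b.setblocking(False)            # recv() now never hangs

try:
    chunk = b.recv(4096)        # nothing queued yet...
except BlockingIOError:         # ...so this raises instead of freezing
    chunk = b''                 # treat "no data" as empty and carry on

a.sendall(b'PING :server\r\n')  # once data has arrived...
time.sleep(0.1)                 # (give the local pair a moment)
msg = b.recv(4096)              # ...the same call succeeds immediately
print(msg)                      # the PING line we sent
```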

crisp
25th January 2009, 23:53
Thanks for the reply. I did have a version of the library that hands new connections out to another separate class as a thread, using the Threading module. I discarded this some time ago; do you recommend going back to this model? Could you give me a starting point for spawning a new parallel process, if that's not what you meant?

Thanks once again, this is much appreciated!

wysota
26th January 2009, 01:25
I would recommend against using threads. Instead I would opt for setting up a timer to fire periodically (like every 250ms) and in its slot or event handler check if there is anything to be read from the socket. If your library uses blocking calls, you might want to use QSocketNotifier instead of the timer. It will tell you when something arrives at the socket and then you can call the reading routine without blocking the event loop.

crisp
26th January 2009, 04:13
Well thanks for the advice guys, I'm still really stuck though. I'm so tired and frustrated now that I think I'll leave it until tomorrow, fresh eyes and all that. I've tried implementing a timer (used QTimer) and it didn't seem to help the situation, so tomorrow I'll try QSocketNotifier. If any good samaritans feel like looking over my (sloppy) code, I've posted it here: http://driversgb.com/Qt/crispirc.zip - if I can get over this hurdle then the path is clear for me to bang out a load of code! Cheers guys.

wysota
26th January 2009, 08:45
At worst, get rid of the IRC library. The IRC protocol is text-based; it's very easy to parse and handle without additional libraries.

Cruz
26th January 2009, 11:40
@crisp
Yes, I think using the threaded version of the IRC library is the easiest solution for you.

@wysota
This topic already came up once and we didn't quite finish it. I don't understand your aversion to threads. With any approach based on the processEvents() call you take control away from the Qt framework and provide your own mechanism to process the gui events at your own pace. With a timer based approach you can at least return from the event handler immediately, but you still set your own pace to step a process forward. With a separate thread, however, the optimized mechanisms of the operating system will take care of driving your process. In the case of crisp, for example, the separate thread can wait in a blocking recvfrom() call and use no resources at all. New data will be handled exactly when it arrives, not too early or too late as is usually the case with a timer-based approach. Parallel processes also lead to a better utilization of multicore cpus, which are becoming the standard as of now.

wysota
26th January 2009, 12:22
I don't understand your aversion to threads.
I'm not against using threads in general, I like threads. I'm against using threads where they provide more problems than they solve or when they are not needed at all. And in general I am against using processEvents(). See my article at Qt Quarterly for more info.


With a separate thread, however, the optimized mechanisms of the operating system will take care of driving your process.
This is completely not true, it would be slower than using a single thread :) And even if it was, try gracefully stopping a thread that is sleeping on read() or write().


In the case of crisp, for example, the separate thread can wait in a blocking recvfrom() call and use no resources at all.

This is exactly the same as when using other approaches. The event loop spins anyway, so there is nothing wrong with periodically calling non-blocking select() (which happens under the hood) to see if anything arrived at the socket.

Threads use up resources by the way...


New data will be handled exactly when it arrives, not too early or too late as is usually the case with a timer-based approach.
Sure, the sky will fall on your head if you see an IRC message 10ms later. What, you are using an LCD screen with a refresh rate of 60Hz? Then you will have a bigger delay from your LCD than from using the timer...


Parallel processes also lead to a better utilization of multicore cpus, which are becoming the standard as of now.

This is meaningless in this particular situation as both threads couldn't run at once anyway because at some point you have to synchronize them after reading from the socket because you want to display something on the screen.

People often use threads as an escape from having to write proper event driven code. This is fine as long as the use case is simple. But at some point in time your thread will become more and more complex and you will have more and more problems with synchronizing threads, completely discarding all the advantages of using a thread in the first place.

The title of this thread, "GUI freezes unless there is incoming data", perfectly illustrates one of the synchronization issues that can arise.

By the way, most low latency applications try to avoid using blocking calls in favour of non-blocking ones. In Qt we should always try to avoid making synchronous calls and using threads is not always a good option for that.

Cruz
26th January 2009, 13:06
This is completely not true, it would be slower than using a single thread :) And even if it was, try gracefully stopping a thread that is sleeping on read() or write().


I don't understand this point (why is it slower?). Can you explain it briefly or point me to a link? As for the gracefully stopping, if you need to do that you are most likely doing something wrong and probably used a thread where you shouldn't have. Same goes if you feel the need to synchronize. In my ten years of programming I have never encountered a single case where a thread needed to be synchronized or influenced in any way.




Sure, the sky will fall on your head if you see an IRC message 10ms later. What, you are using an LCD screen with a refresh rate of 60Hz? Then you will have a bigger delay from your LCD than from using the timer...


Well no, nothing bad will happen. You can make this simple case work with any approach and there will be no noticeable difference to the user. It just doesn't "feel" right, because events occur when the timer says so and not when they actually happen.




This is meaningless in this particular situation as both threads couldn't run at once anyway because at some point you have to synchronize them after reading from the socket because you want to display something on the screen.


I don't see why they can't. Or I don't understand what you mean by needing to synchronize. The gui thread reads data periodically from a buffer and displays it on the screen, whatever happens to be in the buffer at that time. And a worker thread runs in the background and refreshes the buffer whenever new data arrives from the socket. These are two perfectly parallel processes with no need for synchronization. With a large amount of data and the requirement for absolute fail-safety (which is not the case in this little IRC example) you might want to use a double buffering technique to make your data buffer thread-safe, but that doesn't imply synchronizing the timings of the two threads in any way.

I actually never looked this up (but stress tested it on a million examples): flipping a pointer is an atomic operation right?




People often use threads as an escape from having to write proper event driven code.


Amen to that. I think we both agree that people tend to abuse concepts, be it threads or not. So I think it all comes down to the synchronization detail.

wysota
26th January 2009, 14:59
I don't understand this point (why is it slower?). Can you explain it briefly or point me to a link?
Because of resource allocation in the kernel and high possibility of preempting the process while sitting in read(), write() or similar system call which is followed by context switch that also takes time.


As for the gracefully stopping, if you need to do that you are most likely doing something wrong and probably used a thread where you shouldn't have.
This is exactly the case here - i.e. I'm using IRC (hence the reading thread is blocked on read()) and I want to close the application.


Same goes if you feel the need to synchronize. In my ten years of programming I have never encountered a single case where a thread needed to be synchronized or influenced in any way.
Ok, then please write an application using Qt that reads text from a socket in a worker thread and displays it on a widget. Just to make your life easier and help you avoid solutions I will call "incorrect" - QCoreApplication::postEvent() and QCoreApplication::sendEvent() cause synchronization if run across threads (sorry, no thread-safe atomic queues implemented yet), and signals and slots across threads are implemented by posting or sending an event.


Well no, nothing bad will happen. You can make this simple case work with any approach and there will be no noticeable difference to the user. It just doesn't "feel" right, because events occur when the timer says so and not when they actually happen.
Events occur when some timer fires, sees that they occurred and notifies you. So there is a timer involved either way, and handling many "events" at once can often be much more efficient than handling one at a time (like when inserting data into Qt models). If you are running a semi-busy loop anyway, running a timer won't make things worse, as the semi-busy loop will be woken much more often than the timer fires, and the timer can only fire when the loop is awake and running, since timers are delivered using events.


I don't see why they can't. Or I don't understand what you mean by needing to synchronize. The gui thread reads data periodically from a buffer and displays it on the screen, whatever happens to be in the buffer at that time. And a worker thread runs in the background and refreshes the buffer whenever new data arrives from the socket. These are two perfectly parallel processes with no need for synchronization.
Sorry, you just failed your exam from the multithreading course - you need to synchronize the buffer or else you'll have a race condition.


With a large amount of data and the requirement for absolute fail-safety (which is not the case in this little IRC example) you might want to use a double buffering technique to make your data buffer thread-safe, but that doesn't imply synchronizing the timings of the two threads in any way.
You can use as many buffers as you want, but at some point you will have to free a cell in the buffer while at the same time another thread might want to append data to the buffer, which is a straight path to a segfault or a deadlock.


I actually never looked this up (but stress tested it on a million examples): flipping a pointer is an atomic operation right?
No, it is not guaranteed to be, although most architectures ensure it, but this is irrelevant here - it is not enough to provide thread safety. Consider a situation where there is a linked list with one item in it, and at the same time one thread wants to append an item to the end of the list while the other wants to remove the item from the list - either you will lose both items (when the removing thread is preempted while doing the operation and, before it wakes up again, the other one has managed to complete its task) or you will get a crash (when the adding thread is interrupted and the removing one manages to delete the item in question before the other thread is woken).

Cruz
26th January 2009, 16:53
This is exactly the case here - i.e. I'm using IRC (hence the reading thread is blocked on read()) and I want to close the application.

What happens if you don't use a thread, not even a gui, you just write a plain old sequential command line application that prints IRC input to stdout. And then you close it with CTRL-C while it's hanging in a read(). Isn't that the same situation?


Ok, then please write an application using Qt that reads text from a socket in a worker thread and displays in on a widget.

Actually I did that with a similar technique to the one I described. A worker thread reads data from a serial connection and notifies an object using signals and slots that new data has arrived. The data is passed as an argument to the slot. The object builds a widget from the data and addWidget()s it to the gui. You wouldn't call it valid, because it uses signals and slots, but nevertheless it has worked great so far. So what exactly is wrong with this approach?

Besides that, in what I was suggesting as a solution for the IRC example the gui thread is only _reading_ from the buffer. How can a segfault occur like that? The worst thing that can happen is that you read corrupt half old half new data. And against that you can do this:



dataBuffer1[];
dataBuffer2[];
*readPointer = dataBuffer1;
*writePointer = dataBuffer2;
*tempPointer;

worker_thread()
{
    forever()
    {
        // retrieve data from somewhere
        data = readDataFromSocket();

        // write data into the current write buffer
        writeData(writePointer, data);

        // flip pointers
        tempPointer = readPointer;
        readPointer = writePointer; // this needs to be atomic
        writePointer = tempPointer;
    }
}

main_thread()
{
    *copyPointer;

    forever()
    {
        // obtain a copy of the read pointer
        copyPointer = readPointer; // this needs to be atomic

        // read the data
        data = readData(copyPointer);

        /* Even if the worker thread flips the readPointer while we are reading here,
         * it doesn't matter because we have a copy. */
    }
}
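For readers following along, here is a runnable Python analogue of that flip. Note that it guards the swap with a lock rather than relying on the swap being atomic, so it sidesteps the atomicity question; the names loosely mirror the pseudocode above and the lock is an addition:

```python
import threading

data_buffer1 = []
data_buffer2 = []
buffers = {"read": data_buffer1, "write": data_buffer2}
flip_lock = threading.Lock()       # stands in for the "atomic" flip

def worker_write(item):
    buffers["write"].append(item)  # fill the current write buffer
    with flip_lock:                # flip the read/write pointers as one unit
        buffers["read"], buffers["write"] = buffers["write"], buffers["read"]

def main_read():
    with flip_lock:                # take a stable copy of the read pointer
        snapshot = buffers["read"]
    return list(snapshot)

worker_write("hello")
print(main_read())                 # ['hello']
```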



No, it is not bound to be although most architectures assure that, but this is irrelevant here...

I think it's very relevant. If flipping a pointer is atomic, then the code above is safe up to the point where the processes run in such a weird asynchronous way, that the worker thread is executed twice while the main thread is still reading, so that it starts to write into the buffer the main thread is reading from.

All theory aside, I stress tested this approach on Linux and on Windows using C and Java, because I really wanted to know. I had a worker thread writing to the buffer at 100Hz, and I spawned first one and then 100 reading threads reading the buffer over and over again with randomized sleeps, some of them at maximum capacity. I recorded how many times the writing of the worker thread and the reading of the main thread overlapped. It was 0. Without the double buffering it occurred about 1 out of 1000 times; with it, no accidents happened in 1 million read and write operations (over an hour of testing).

If flipping a buffer is not atomic, a segfault can occur because you could get "half" a pointer.

And for the case where two threads need to write to the same buffer, which I have yet to encounter, there are mutexes. What's wrong with them?

crisp
26th January 2009, 18:34
Some interesting points here guys, even if the odd one is over my head. Time to start whacking at this again now. To clarify: if I have the IRC library using a timer to check the socket every X ms and write to the buffer when there's data, and then a timer in the GUI part that checks the buffer every X ms, should that work to unfreeze the GUI? This with the understanding that the data will only be as recent as the last buffer check (but it being IRC, that doesn't matter even if it were a once-a-second timer). Regardless of the pros and cons of this approach, would it work in general? Or should I not bother with a buffer and have the IRC library still on a timer checking the socket, but then sending a signal with the data as a parameter to the GUI? This timer method strikes me as potentially the easiest of the suggested methods.

wysota
26th January 2009, 19:39
What happens if you don't use a thread, not even a gui, you just write a plain old sequential command line application that prints IRC input to stdout. And then you close it with CTRL-C while it's hanging in a read(). Isn't that the same situation?
No, CTRL+C kills the application. In theory you can emit a signal that will break read() or write() with EINTR but let's say it has its drawbacks.



Actually I did that with a similar technique to the one I described. A worker thread reads data from a serial connection and notifies an object using signals and slots that new data has arrived. The data is passed as an argument to the slot. The object builds a widget from the data and addWidget()s it to the gui. You wouldn't call it valid, because it uses signals and slots, but nevertheless it has worked great so far. So what exactly is wrong with this approach?
Without synchronisation it is prone to race conditions. The fact that it worked for you doesn't mean it will always work. Anyway, if you used signals and slots then you did indeed do synchronisation since, as I already wrote, signals and slots across threads require synchronisation of the event loop of the destination thread. So both threads couldn't have been executing at the same time. Simple as that. There is no noticeable slowdown, but the fact remains.


Besides that, in what I was suggesting as a solution for the IRC example the gui thread is only _reading_ from the buffer. How can a segfault occur like that?
Because something else may be writing to the buffer at the same time and writing needs exclusive rights. See QReadWriteLock - many readers can access the object at the same time, but when a writer wants it, everyone else has to go (readers included).


The worst thing that can happen is that you read corrupt half old half new data.
The best thing that can happen is that the application will be aborted immediately. In the worst case you will blow up the world.

Your code is incomplete. You probably want to do something with the original "readPointer" after the whole operation, for instance delete it. Then the other thread, which still has a copy of the pointer pointing to the same address, will try to access it and kaboom!


I think it's very relevant. If flipping a pointer is atomic, then the code above is safe up to the point where the processes run in such a weird asynchronous way, that the worker thread is executed twice while the main thread is still reading, so that it starts to write into the buffer the main thread is reading from.
If it were as simple as that, implementing thread-safe containers would be trivial. Unfortunately this is not the case. Once you do something with the memory originally pointed to by "readPointer", you enter a race condition, regardless of the variable name currently pointing to the memory area. In your example you have two data cells, and as long as you make sure both threads never access the same cell at once, you are safe. But here you assume that "read" and "write" operations will come sequentially (i.e. synchronously), which is not the case. Semaphores were invented for exactly this reason.

Qt has atomic reference counting with shared data. This makes it re-entrant, but not thread-safe and you still have to protect it when sharing the same object across multiple threads regardless of flipping pointers and atomic reference counting.


All theory aside, I stress tested this approach on Linux and on Windows using C and Java, because I really wanted to know.
You can stress test as much as you want, but it won't prove the code to be correct; you need a mathematical proof for that. If there is a single case where the code fails, it is wrong. It is thus much easier to prove code invalid than valid, but an inability to prove it invalid doesn't prove it valid.


If flipping a buffer is not atomic, a segfault can occur because you could get "half" a pointer.
It is not atomic. You tested it on an architecture where it is atomic (it's atomic on probably all or almost all modern platforms) but it doesn't prove it is atomic everywhere, especially if the pointer size is larger than the word size of the processing unit, so by nature writing an integer value (of arbitrary size) into memory is not atomic.


And for the case where two threads need to write to the same buffer, which I have yet to encounter, there are mutexes. What's wrong with them?
Nothing (besides the fact that real mutexes are quite slow), but they stop all but one of the threads, which is what I said in my very first post and what you argued against when you claimed they can run in parallel.
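To make the point concrete, here is a minimal sketch of a mutex-guarded shared buffer in Python (threading.Lock is the mutex; while one thread holds it, the other waits, so the two never run the guarded sections in parallel):

```python
import threading

shared = []                       # buffer touched by two threads
lock = threading.Lock()           # the mutex

def producer(items):
    for item in items:
        with lock:                # writer takes the lock...
            shared.append(item)   # ...so no reader sees a half-written state

def consumer(n, out):
    taken = 0
    while taken < n:
        with lock:                # reader takes the same lock
            if shared:
                out.append(shared.pop(0))
                taken += 1

results = []
t1 = threading.Thread(target=producer, args=(list(range(100)),))
t2 = threading.Thread(target=consumer, args=(100, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(results))               # 100: nothing lost, nothing duplicated
```

The cost wysota describes is visible in the structure: whenever one thread is inside a `with lock:` block, the other is stopped at the lock.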


Some interesting points here guys, even if the odd one is over my head.
Sorry for being offtopic, I just love flame wars ;)


To clarify: if I have the IRC library using a timer to check the socket every X ms and write to the buffer when there's data, and then a timer in the GUI part that checks the buffer every X ms, should that work to unfreeze the GUI?
Yes, although it'd be best if you managed to use QSocketNotifier with the library or switch it into non-blocking mode.


This with the understanding that the data will only be as recent as the last buffer check (but it being IRC, that doesn't matter even if it were a once-a-second timer).
Rather the other way round - it will be at most as old as the last timer timeout but as you said it doesn't really matter. I'd say that one second is a long interval, I would make it shorter. I often use adaptive timer timeouts - for instance I start at half a second. If during the next timeout there is anything to read, I reduce the next timeout (i.e. halve it). If there is nothing to read I increase (i.e. double) it. At the same time I assure it is always kept within some bounds (for instance between 100ms and 1000ms). This makes the application less active when there is nothing to do and more active when it becomes busy.
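That adaptive scheme is easy to state as a pure function; a sketch with the bounds wysota mentions (the function name is illustrative):

```python
def next_interval(current_ms, had_data, lo=100, hi=1000):
    """Adaptive poll interval: halve it when data arrived (busy),
    double it when idle, always clamped to [lo, hi] milliseconds."""
    nxt = current_ms // 2 if had_data else current_ms * 2
    return max(lo, min(hi, nxt))

interval = 500                             # start at half a second
interval = next_interval(interval, True)   # traffic -> 250 ms
interval = next_interval(interval, False)  # quiet   -> 500 ms
interval = next_interval(interval, False)  # quiet   -> 1000 ms (capped)
print(interval)                            # 1000
```

Each timer timeout would compute its successor this way and restart the timer with the new value, so the application polls fast under load and backs off when idle.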


Regardless of the pros and cons of this approach, would it work in general?
Yes although of course other approaches (threads included) are also fine. They all have their pros and cons.


Or, should I not bother with a buffer and have the IRC library still on a timer checking the socket, but then sending a signal with the data as a parameter to the GUI?
What you do with the data is irrelevant just make sure not to stall the event handler for too long. If you emit a signal within the same thread, it will be delivered synchronously so it won't make any difference if you call some method directly or through a signal.


This timer method strikes me as potentially the easiest of the suggested methods.

All methods are potentially equal, but using a timer might be simplest if your application grows in complexity. Timers will make your application (not the system!) more active (eating more CPU) than e.g. the threaded approach, as the thread would be sleeping most of the time (assuming average traffic on IRC, of course).

Cruz
29th January 2009, 10:31
Hello I'm back! :)

I'm sorry about being so persistent. I'm very interested in this topic and it's important for me to fully understand it.



Because something else may be writing to the buffer at the same time and writing needs exclusive rights.


Is this generally the linux way? Or more like a rule of thumb that you should always stick to?

You criticized my approach in two specific points. One:



In your example you have two data cells, and as long as you make sure both threads never access the same cell at once, you are safe. But here you assume that "read" and "write" operations will come sequentially (i.e. synchronously), which is not the case.


As long as one thread is writing into one buffer and the other thread is reading from the other there is no problem. So the problematic case is when the main thread takes a copy of the readPointer and starts reading buffer1. It gets interrupted and the worker thread is running so fast that it writes buffer2 full, flips the read and write pointer and starts writing into buffer1, while the main thread is still reading it. Then it could happen that both threads try to access the very same memory address, right? And that results in a segmentation fault?

And if this is true, then how come I couldn't make this error appear artificially with my stress tests?

Two:



Your code is incomplete. You probably want to do something with the original "readPointer" after the whole operation, for instance delete it.


Actually no. Either the program runs forever and I would stop it with CTRL-C (I trust whatever it does, it's something I don't know). Or the threads will have their jobs done at some point and then I can safely free all memory from an overlaying main control.

If flipping a pointer is safe on most modern systems, then why not rely on it? We don't carry especially big towels with us to wipe off dinosaur spit, because dinosaurs are extinct. When we know we are going to a place where dinosaurs spit at us, then we can pack an extra towel. But it would be overkill to always carry a heavy towel which we never use.




The best thing that can happen is that the application will be aborted immediately. In the worst case you will blow up the world.


If all C/C++ programmers, who don't exactly know what they are doing, had that kind of power, the universe would have been destroyed a long time ago. Therefore it's either not possible, or we don't exist.

wysota
29th January 2009, 11:37
Is this generally the linux way? Or more like a rule of thumb that you should always stick to?
A rule of thumb. Reading doesn't modify the data so there are no race conditions. Writing modifies data so doing anything else at the same time might end up in accessing part of the old data and part of the new data.


As long as one thread is writing into one buffer and the other thread is reading from the other there is no problem.
Let's consider this example. Thread A wants to read two values from the buffers and thread B wants to write two values to the buffer. Let's assume A goes first:

- A starts reading from cell 1 and is interrupted
- B starts writing to cell 2
- B finishes writing and is interrupted
- A finishes reading
- A starts reading from cell 2 and is interrupted
- B starts writing to cell 1 and is interrupted
- A finishes reading from cell 2 and exits
- B finishes writing to cell 1 and exits

So everything is fine. But the order might be like this:


- A starts reading from cell 1
- A is interrupted
- B starts writing to cell 2
- B finishes writing to cell 2
- B starts writing to cell 1
- B finishes writing to cell 1
- A continues reading from cell 1 => but the old data was already overwritten by new data!

This is a less (or maybe more?) problematic variant as you "only" lose data. In the other version where you delete buffers after operating on them, you'll get a segfault.


Then it could happen that both threads try to access the very same memory address, right?
Yes, but that's not the only problem. One thread might try to access a memory address that has already been freed by the other thread.


And that results in a segmentation fault?
This variant results in the loss of data.


And if this is true, then how come I couldn't make this error appear artificially with my stress tests?
You were lucky or you were using a uni-processor machine :)



Actually no. Either the program runs forever and I would stop it with CTRL-C (I trust whatever it does, it's something I don't know). Or the threads will have their jobs done at some point and then I can safely free all memory from an overlaying main control.
This is an assumption based on a particular case, not on a general case so it's false by definition.


If flipping a pointer is safe on most modern systems, then why not rely on it?
most!=all


We don't carry especially big towels with us to wipe off dinosaur spit, because dinosaurs are extinct.
No, but modern Windows still keeps 8+3 names although most computers don't use DOS anymore. Besides, this is academic talk, as atomic int operations don't guarantee thread safety. You need operations such as test-and-set for that. Look here:


tempPointer = readPointer;
readPointer = writePointer; // this needs to be atomic
What happens if the other thread substitutes the readPointer variable with another object between lines 1 and 2 of the above code? The fact that one substitution is atomic doesn't mean that two substitutions are atomic.


When we know we are going to a place where dinosaurs spit at us, then we can pack an extra towel. But it would be overkill to always carry a heavy towel which we never use.

The fact that you never saw air doesn't mean it's not there. The fact that you didn't drown when being in the water doesn't mean you can breathe under water and you needn't take a life jacket with you. The fact that something didn't happen in a particular case doesn't mean it will never happen. We can argue here forever and come to no conclusions but you can also take a book and learn something from it to see if others have the same opinion as you.


If all C/C++ programmers, who don't exactly know what they are doing, had that kind of power, the universe would have been destroyed a long time ago. Therefore it's either not possible, or we don't exist.

Please don't hire yourself in a nuclear power plant or as a rocket specialist. I'd like to be alive for a few more years :)

crisp
7th February 2009, 06:33
Hi again - thought I'd offer a quick update. I had to leave this for a week or so due to other commitments, so tonight has been my first night back on it. I got rid of the infinite loops which were stopping Qt's event loop from running, and had a similar problem - it turned out to be due to blocking sockets. I couldn't get non-blocking sockets to work in Python in the couple of hours of playing I've given it (I realise select is one direction I should be looking in); there's actually a surprising lack of information about this. Another option open to me, I guess, is threads - but I didn't want the trade-off of added complexity, so for now I've avoided that route.

Anyway, to cut a long story short, I've set up a QTimer that calls a socket- and buffer-checking function every 50ms, and have set the sockets to have very low timeouts after the initial connection to a server. It's a nasty hack, and I'll come back to it, but at least for the moment it allows me to move on and solves the problem I was having - no more GUI lockups, and instant incoming data retrieval. I can only imagine the grimaces you experienced guys must be making at this solution... but anyway, I'd just like to say thanks for all of your help. No doubt I'll be back!
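For anyone curious, the low-timeout part of that hack looks roughly like this in Python (a local socketpair stands in for the IRC connection, and poll_once is a made-up name for what a 50ms QTimer slot would call):

```python
import socket

a, b = socket.socketpair()
b.settimeout(0.05)          # recv() gives up after 50 ms instead of blocking

def poll_once(sock):
    """Roughly what a QTimer slot firing every 50 ms would do."""
    try:
        return sock.recv(4096)
    except socket.timeout:
        return b''          # nothing arrived; return so the GUI stays alive

print(poll_once(b))         # b'' after at most ~50 ms, no freeze
a.sendall(b'hello')
print(poll_once(b))         # b'hello'
```

The 50ms ceiling per call is why this unfreezes the GUI, and also why it is a hack: each empty poll still stalls the event loop for up to that long.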

wysota
7th February 2009, 10:21
Have you tried using QSocketNotifier?

crisp
7th February 2009, 18:56
Not yet - I have looked into it; the method I've used simply happens to be the one that worked before I tried QSocketNotifier. I am aware that my 'solution' is far from elegant, and I'll come back to improving it (probably with QSocketNotifier, actually) at a later date. Just want to move on from this part for now!