
does disconnect cancel pending messages / a way to send one final message



mortoray
16th September 2010, 10:05
I am using signals to communicate the status of objects between threads. One thread responds to network events and tells the main GUI thread that something has changed. At some point the object is "finalized" -- it will never emit another signal. I'd like to disconnect all handlers at that time to reduce the load on Qt's dispatch mechanism*. The question is how to do this correctly.

If I call "disconnect" will it clear all pending signals, or will it first allow those signals to be sent and then disconnect?

I first thought to just send one additional signal called "doDisconnect", but since threads are involved I have no real way of guaranteeing that all previous signals are handled before any thread gets this signal.

Does anybody know of an easy way to do this? I'm trying to avoid a timed disconnect (like 10s after the last signal), but if nothing else works I do have that option.


*I am creating a lot of objects that have signal activity for a short part of their life. That is, they are created, send 10-20 signals, and then exist for the duration of the application without ever sending another signal.
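
Roughly the shape of these objects (a sketch only; NetworkItem, statusChanged() and finalize() are made-up names to illustrate the question, not my real code):

// Sketch: the object lives in the network thread; its signals reach the GUI
// thread through (automatic) queued connections.
#include <QObject>

class NetworkItem : public QObject
{
    Q_OBJECT
public:
    // Called from the network thread once the item will never change again.
    void finalize(int finalStatus)
    {
        emit statusChanged(finalStatus); // the last signal this object emits
        // The question: is it safe to disconnect here, or could queued
        // cross-thread deliveries of earlier emissions be lost?
        disconnect();                    // drops every connection from this object's signals
    }

signals:
    void statusChanged(int status);
};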

wysota
16th September 2010, 16:20
If I call "disconnect" will it clear all pending signals, or will it first allow those signals to be sent and then disconnect?
If you emit signals across threads, the slot is called with a delay relative to the signal emission, so if you disconnect a signal from a slot, the signals that have already been emitted will still get processed. Note that if you do that within one thread, slots are called synchronously, so each emitted signal will be processed too (before you actually disconnect() the signal).
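
To illustrate (a contrived example with made-up names): force a queued connection within a single thread and you get the same ordering as in the cross-thread case - the slot call is posted to the event loop at emission time, so a later disconnect() does not retract it.

#include <QCoreApplication>
#include <QDebug>
#include <QObject>
#include <QTimer>

class Sender : public QObject
{
    Q_OBJECT
public:
    void send(int n) { emit ping(n); }
signals:
    void ping(int n);
};

class Receiver : public QObject
{
    Q_OBJECT
public slots:
    void onPing(int n) { qDebug() << "received" << n; }
};

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    Sender sender;
    Receiver receiver;

    // Queued even within one thread, to mimic a cross-thread connection.
    QObject::connect(&sender, &Sender::ping, &receiver, &Receiver::onPing,
                     Qt::QueuedConnection);

    sender.send(1);   // the slot call is posted to the event loop, not run yet
    QObject::disconnect(&sender, &Sender::ping, &receiver, &Receiver::onPing);
    sender.send(2);   // no connection any more, never delivered

    // If the already-posted call survives the disconnect, as described above,
    // this prints "received 1" and nothing else before quitting.
    QTimer::singleShot(0, &app, &QCoreApplication::quit);
    return app.exec();
}

#include "main.moc" // needed when the classes above live in this .cpp (qmake/moc)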

By the way, think about whether you really need threads for networking.

mortoray
16th September 2010, 16:38
If you emit signals across threads, the slot is called with a delay relative to the signal emission, so if you disconnect a signal from a slot, the signals that have already been emitted will still get processed.

Is this guaranteed anywhere in the documentation? I would really like to take advantage of that, but I don't want to get burnt by some future upgrade of Qt that changes its internals...

...though thinking about it I'm not sure this is possible. Consider the case of object deletion. If you do "emit something" then delete the object, that signal should not be sent since "sender()" won't resolve to a valid object. Or is this a different case again?



By the way, think about whether you really need threads for networking.

It takes the load off the main messaging thread and allows me to use multiple cores. It is also the only way to ensure that the network buffers are cleared promptly without interference.

wysota
16th September 2010, 17:20
Is this guaranteed anywhere in the documentation?
What gets executed is decided at the moment of signal emission. That will not change; it's a very logical behaviour.


If you do "emit somthing" then delete the object, that signal should not be sent since "sender()" won't resolve to a valid object.
That's one of the reasons you shouldn't rely on sender(), and also a reason you shouldn't be deleting QObjects in random places. Especially since this can happen within a single thread too, and that leads straight to a crash.


It takes the load off the main messaging thread and allows me to use multiple cores.
Does it? What do those threads do besides waiting for client input? Do you use a blocking or a non-blocking networking API?


It is also the only way to ensure that the network buffers are cleared promptly without interference.
No, not really.

mortoray
16th September 2010, 17:49
That's one of the reasons you shouldn't rely on sender(), and also a reason you shouldn't be deleting QObjects in random places. Especially since this can happen within a single thread too, and that leads straight to a crash.

I'll use a delayed deletion then, since there doesn't appear to be a safe, guaranteed way to emit one last message.
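
Something like this, I suppose (the same made-up NetworkItem shape as in my first post; still just a sketch):

#include <QObject>

class NetworkItem : public QObject
{
    Q_OBJECT
public:
    void finalize(int finalStatus)
    {
        emit statusChanged(finalStatus); // the last signal this object ever emits
        deleteLater();                   // deferred destruction through the event loop,
                                         // instead of "delete" right after the emit
    }

signals:
    void statusChanged(int status);
};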


Does it? What do those threads do besides waiting for client input? Do you use a blocking or a non-blocking networking API?

I'm using QTcpSocket and its signals, so it is completely async/non-blocking. Encoding and decoding of messages takes a non-trivial amount of time, as does the logical processing of those messages. The drawing in the GUI, under load, can take quite a bit of time. Altogether, putting networking/processing in a different thread allows the actual GUI thread more time to operate.

TCP servers can also be fickle. If you don't retrieve the data quickly enough from the client buffers the server may decide to disconnect you, or it could simply throttle its throughput rate. In some cases the client side OS will disconnect you (the internal buffers are only so big). Under load the GUI thread could become overloaded and leave big intervals between clearing those network buffers. Putting that in a separate thread ensures the buffers are always cleared at a good rate.
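
The shape of it is roughly this (a sketch with made-up names, not the actual code):

#include <QByteArray>
#include <QObject>
#include <QTcpSocket>

class NetworkWorker : public QObject
{
    Q_OBJECT
public slots:
    void start()
    {
        m_socket = new QTcpSocket(this);
        connect(m_socket, &QTcpSocket::readyRead,
                this, &NetworkWorker::onReadyRead);
        m_socket->connectToHost("example.com", 4242); // placeholder endpoint
    }

signals:
    // Carries an already-decoded message to the GUI thread (queued, because
    // the receiver lives in another thread).
    void messageDecoded(const QByteArray &payload);

private slots:
    void onReadyRead()
    {
        // Drain the OS buffer promptly and decode here, in the worker thread.
        const QByteArray raw = m_socket->readAll();
        emit messageDecoded(raw); // real framing/decoding omitted in this sketch
    }

private:
    QTcpSocket *m_socket = nullptr;
};

// Usage from the GUI thread (also a sketch):
//   QThread *thread = new QThread;
//   NetworkWorker *worker = new NetworkWorker;
//   worker->moveToThread(thread);
//   QObject::connect(thread, &QThread::started, worker, &NetworkWorker::start);
//   thread->start();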

wysota
16th September 2010, 20:45
I'm using QTcpSocket and its signals, so it is completely async/non-blocking.
So most of the time your threads spin idle loops. You are not using all your cores, you are wasting all your cores.


Encoding and decoding of messages takes a non-trivial amount of time.
So do that in threads, but what is the point of having threads just to receive data from the network, which happens quite rarely?


Altogether, putting networking/processing in a different thread allows the actual GUI thread more time to operate.
How many connections at most do you serve concurrently? How many cores do you have in your machine? Does your OS have to context-switch between threads only for the thread to see it doesn't have anything to do?


If you don't retrieve the data quickly enough from the client buffers the server may decide to disconnect you, or it could simply throttle its throughput rate. In some cases the client side OS will disconnect you (the internal buffers are only so big).
Sure. You get disconnected after waiting 10ms before fetching the data ;)


Putting that in a separate thread ensures the buffers are always cleared at a good rate.
Really? Spawn some (= the number of cores you have) additional threads and make them do "while(1);". You'll see your other threads getting delayed even though it's another thread that puts a strain on the CPU.

What you are doing is the wrong approach. Sorry to be so blunt, but it is true. Using threads for fetching data from the network yields no benefits and is a very common mistake based on the false axiom (yes, axiom - without actually proving it!) that networking is better done in threads. It's better to offload real work to threads and make them actually do something useful instead of having tens of empty event loops spinning around in vain.
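
For the record, roughly what I mean (just a sketch with made-up names, assuming the expensive part is the decoding): keep the socket where it is and hand only the decoding to the global thread pool.

#include <QByteArray>
#include <QFutureWatcher>
#include <QObject>
#include <QString>
#include <QtConcurrent/QtConcurrent>

// Hypothetical CPU-heavy decode step; runs in a pool thread.
static QString decodeMessage(const QByteArray &raw)
{
    return QString::fromUtf8(raw.toHex()); // stand-in for real parsing work
}

class Dispatcher : public QObject
{
    Q_OBJECT
public:
    using QObject::QObject;

    // Called in the GUI thread whenever a complete frame has been read.
    void process(const QByteArray &raw)
    {
        QFutureWatcher<QString> *watcher = new QFutureWatcher<QString>(this);
        connect(watcher, &QFutureWatcher<QString>::finished, this,
                [this, watcher]() {
            emit messageReady(watcher->result()); // delivered in the GUI thread
            watcher->deleteLater();
        });
        watcher->setFuture(QtConcurrent::run(decodeMessage, raw));
    }

signals:
    void messageReady(const QString &decoded);
};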