Hi all

I am just starting my first Qt project and I am having some problems deciding on the best architecture, and I hope you all might provide some nice ideas. The explanation below will be a bit long, so bear with me. I am building a phone application and it consists of 3 layers. I want each layer to be as independent from the others as possible. These are the parts:

-------------
GUI
-------------
|
-------------
Middleware
-------------
|
-------------
Access
-------------

If we start from the bottom, the Access part is a C dll that handles everything related to signalling and audio. It is a pure C dll communicating with primitives like strings, ints and callbacks. This part has nothing to do with Qt and I have no questions here. In the next layer, which I call the Middleware (dependent only on QtCore), I want to hide as much intelligence as possible. I want the GUI to basically only respond to user input and signals from the Middleware. So I would rather make the Middleware more complex to keep the GUI part very simple.

From the "Access" C dll to the Middleware I translate C callbacks into Qt signals, and this is very clean I think, so I am happy with that part.

My main problem right now is the interface between the GUI and the Middleware (both Qt based). My gut tells me that I want this interface to be purely signal/slot based and not use any direct function calls from the GUI to the Middleware API at all. Does this make sense? I want the Middleware to be a separate dll as well, so I want a clear and simple interface between Middleware and GUI, just like between the Middleware and Access parts. I would prefer a single .h interface file for all the Middleware features, even if that means propagating some signals an extra step inside the Middleware dll.

So these are my basic ideas. Now, while starting with this, I quite soon ran into one specific problem, and even though I hope that some of you have interesting input on the design above, this is a more specific question:

When I am making an outgoing voice call I use a function in the "Access" dll, something like this:
int Call(int* callId, char* callee); // Make a call to callee; the assigned callId is written back through the pointer

The Access dll will set up the call and assign the id of the call through the pointer that the Middleware dll provides. This is all great, since the call is synchronous by default; it is just a normal C function call.

The problem arises if we look from the top down. The user pushes a button in the GUI which triggers a request to the Middleware to start a call. The GUI must somehow be able to relate to the specific call later on, to know when the call has been hung up etc. The easy approach here would probably be to let the GUI call be a synchronous function call all the way down to the Access dll via the Middleware. This way the GUI would get the callId directly when the function returns. For an incoming call the Access dll would provide the callId in the callback, and the Middleware dll could propagate this all the way up to the GUI. This way everyone knows the current callId and the GUI can connect that to different events related to any active calls (there can be more than one call active at the same time).

My concerns are that:
1. The GUI needs to map the callId to different objects and then send some signals or make function calls internally.
2. The "make call" handling is synchronous and would not use the signal/slot approach: the GUI calls a normal function in the Middleware, which calls a C function in the Access dll.
3. It does not feel very object oriented.

So I continued to think, and the best I have come up with so far is to let the GUI pass a handler object to the Middleware when making the call. The signal to make a call in the Middleware would then look like this:

void makeCall(QObject *callHandler, const QString &callee); // Tell the Middleware which GUI object will take care of all call related signals

By doing this, my intention is that the Middleware could connect the signals related to this specific voice call directly to the object that is interested in them. So the callHandler object in the GUI would have to support, for instance, a slot like:
void onCallEnded(); //The other end hung up so we need to update the GUI accordingly

The Middleware would then manage the mapping from the low-level callId to immediately emitting a signal to the correct callHandler. My concerns about this are:
* I am passing a QObject pointer from the GUI to the Middleware, and then the Middleware connects some signals directly to this object. I can of course define the exact slots that must be supported, or maybe require the GUI to subclass a specific class type that is passed instead of the QObject, but I am not sure if it is good design to hide the signal/slot connections inside my Middleware dll like this. Is it OK to do it like this, or can I get in trouble if the QObject lives in another thread or something weird like that? Should I define some sort of interface so the user knows how to design the callHandler object?

For an incoming call the Middleware dll would have to request a callHandler object from the GUI and then map the callId to this object. This should, however, not be a major issue if the design above is OK.

The upside of this approach is that the GUI does not have to be aware of the callId at all. The signals needed will be sent directly to the callHandler in the GUI, but the signal/slot connections must be made internally in the Middleware, so that the Middleware can connect the proper call (based on callId) to the correct callHandler object in the GUI.

So this is a very long post. It all boils down to the following:
1. I want the GUI very simple and only reactive to different signals and user input. I do not want the GUI to parse return or error codes from synchronous calls; it should receive signals for all situations from the Middleware.
2. I want the interface between the Qt based GUI and Middleware projects to be as simple as possible. Preferably a single interface file.
3. In particular, I am not sure how to best make the connections between the GUI and the Middleware projects. Specifically, how do I connect the signals from the Middleware to the GUI? Is it OK to let the Middleware connect the signals it emits itself once it knows the GUI object, or should the GUI part always connect the signals emitted from the lower layer (Middleware)?
4. Most of the functionality I want maps very well to the signals and slots approach, but the specific case of making calls confuses me, since I do not know how to manage the returned callId when the call is set up with a signal instead of a direct function call.

I hope this makes some sense. I am very open to ideas, since I am a bit confused about how to build an architecture with signals and slots that will be flexible and consistent.

Thanks for reading this far :-)