Recommended structure of Qt-based software



Kryzon
27th February 2015, 01:52
Hello. I'm trying to understand how to structure the classes \ framework in my Qt program in a way that guarantees speed \ readability \ maintainability etc.

The biggest conflict right now is deciding whether, upon user interaction, the Qt widgets take action on the program themselves, or report that interaction to some "business logic" class responsible for reacting to it.

To clarify, consider an art program.
When the user presses the mouse on a "canvas" widget, does the widget begin the drawing operation itself (by calling methods from a custom pixmap painting class), or does it report to a higher-level "core" or "application" object that takes care of handling the painting?
• In the first case: the widget can control the painting functionality (it has access to the painting functionality, it knows of the methods and of the object responsible for this).
• In the second case: the widget can only report the user input and will do so with signals (it doesn't know anything about the painting functionality), and the higher-level object(s) will react to that. The painting functionality is encapsulated in these higher-level objects.
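For concreteness, here is a minimal sketch of the two alternatives. The class and method names (CanvasA, CanvasB, PixmapPainter, beginStroke, pressedAt) are all invented for illustration:

#include <QWidget>
#include <QMouseEvent>

class CanvasA : public QWidget // case 1: the widget acts on the program itself
{
    Q_OBJECT
protected:
    void mousePressEvent(QMouseEvent *event) override
    {
        m_painter.beginStroke(event->pos()); // it knows the painting object and its methods
        update();
    }
private:
    PixmapPainter m_painter; // hypothetical custom pixmap painting class, defined elsewhere
};

class CanvasB : public QWidget // case 2: the widget only reports the input
{
    Q_OBJECT
signals:
    void pressedAt(const QPoint &pos); // it knows nothing about painting
protected:
    void mousePressEvent(QMouseEvent *event) override
    {
        emit pressedAt(event->pos());
    }
};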

What is the usual production-level approach to this?

d_stranz
27th February 2015, 15:50
The biggest conflict right now is deciding whether, upon user interaction, the Qt widgets take action on the program themselves, or report that interaction to some "business logic" class responsible for reacting to it.


I always design from the perspective that UI widgets are responsible only for what they do, and that telling them what data to use, and deciding what happens after they do their thing, is the role of the business logic.

Sometimes this seems hard to achieve in practice, but if done carefully, it can result in much more reusable lower-level components.

Take your drawing example. You can design it so that the canvas takes responsibility for what happens when the user starts to draw. The canvas handles mouse and keyboard operations internally. On the other hand, you can design it so that the canvas is mostly a dumb output window. By using event filters (or better, in my opinion, signals) the canvas can say, "Hey, mouse click at x,y" and let the business logic decide what to do with it.
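In code, the "dumb output window" style amounts to wiring like this (DrawingController and its slot are made-up names, not from any real project):

// Somewhere in application setup: the business logic subscribes to the canvas.
CanvasB *canvas = new CanvasB;
DrawingController *controller = new DrawingController; // hypothetical business-logic class

QObject::connect(canvas, &CanvasB::pressedAt,
                 controller, &DrawingController::onCanvasPressed);

// DrawingController::onCanvasPressed(const QPoint &pos) then decides what a
// press means in the current mode: start a stroke, begin a selection, etc.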

I have done it this latter way with a scientific data plotting widget I have written. The widget itself is a dumb plotter. It sends signals when the user clicks, moves, drags, etc. In doing it like this, I have been able to use the business logic to implement "modes" - zoom mode, pan mode, select mode (and various refinements - like restricting panning only to horizontal or vertical directions). The plotting widget knows very little about this. The end result of a zoom or pan operation is to tell the plot widget, "Set your new world viewport to this QRectF". The plot widget has no clue whether this was the result of a zoom, pan, or anything else.
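A rough sketch of that division of labour, with invented names (this is not the actual widget described above):

#include <QObject>
#include <QWidget>
#include <QRectF>

// The plot widget is a dumb output window: it reports raw gestures and
// accepts a new world viewport, without knowing why the viewport changed.
class PlotWidget : public QWidget
{
    Q_OBJECT
public slots:
    void setViewport(const QRectF &world) { m_world = world; update(); }
signals:
    void dragged(const QPointF &from, const QPointF &to);
private:
    QRectF m_world;
};

// A zoom "mode" living entirely in business logic.
class ZoomMode : public QObject
{
    Q_OBJECT
public:
    explicit ZoomMode(PlotWidget *plot) : m_plot(plot)
    {
        connect(plot, &PlotWidget::dragged, this, &ZoomMode::onDragged);
    }
private slots:
    void onDragged(const QPointF &from, const QPointF &to)
    {
        // The plot has no clue this rectangle came from a zoom gesture.
        m_plot->setViewport(QRectF(from, to).normalized());
    }
private:
    PlotWidget *m_plot;
};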

I have actually taken this one step further. The thing that handles the mouse operations on the canvas is called an "interactor". It intercepts the signals from the canvas and implements the state model for what occurs in what mode. It creates rubber band widgets, changes the mouse cursor, etc. It is configurable by the higher level app logic - "allow zooming", "allow horizontal panning only", etc. Where there is something the app might need to know ("rect selected") it sends a signal the app can handle.
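An interactor along those lines might be shaped like this; every name here is a guess at the design, not the real class:

#include <QObject>
#include <QPoint>
#include <QRectF>
#include <QRubberBand>

class Interactor : public QObject
{
    Q_OBJECT
public:
    enum Mode { None, Zoom, PanHorizontal, PanBoth, Select };

    void setMode(Mode mode) { m_mode = mode; } // configured by the higher-level app logic

signals:
    void rectSelected(const QRectF &rect); // the app handles only what it needs to know

public slots:
    // Connected to the canvas's raw input signals; these implement the
    // state model for the current mode (rubber bands, cursor changes, ...).
    void onPressed(const QPoint &pos);
    void onMoved(const QPoint &pos);
    void onReleased(const QPoint &pos);

private:
    Mode m_mode = None;
    QRubberBand *m_rubberBand = nullptr; // created on demand during a drag
};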

By this simple interface, I can use the same interactor class on completely different widgets. I have one widget that is Qwt-based, and a different one that is Graphics Scene / View based. They both use instances of the same interactor class.

You might also think about how Qt implements the Model / View and Graphics Scene / View architectures. There is a clear distinction in responsibility between the business logic of how the model or scene items are stored and manipulated and the view logic for displaying them. Even though item views (like QTableView) support editing and selection operations, what happens (if anything) as a result of the edit or selection is left up to business logic.
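For example, with Qt's standard item views the wiring looks like this (appLogic and its slot are hypothetical; the view and selection-model API is standard Qt):

// Fragment from application setup. The model owns and manipulates the data.
QTableView *view = new QTableView;
view->setModel(model);

// The view reports selections; what happens next is up to business logic.
QObject::connect(view->selectionModel(), &QItemSelectionModel::selectionChanged,
                 appLogic, &AppLogic::onSelectionChanged);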

qtoptus
27th February 2015, 17:59
There's the model-view-controller pattern, which decouples business logic and data from the interface; essentially a back-end / front-end split. However, avoid overdoing it, in the sense of giving the business logic more responsibilities than it should handle. Use layers of responsibility with dependency inversion to keep the layers decoupled and independent from each other.

Kryzon
28th February 2015, 05:35
Thank you for the insights.

The principle seems to be that the user will interact with a certain widget, and this widget will request that a controller object modify the application state \ data (known as the 'model').
The widget may inform the controller directly by calling the controller methods or indirectly by emitting signals, but in any case the controller will modify the model.
The widgets will usually 'observe' the model for changes (in Qt, by means of signals) so that they can update their presentation based on the data.
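In Qt terms that observation step might be wired up like this (Document and its imageChanged signal are invented names):

// The controller changed the model; the model announces it; the widget
// only refreshes its presentation from the new data.
QObject::connect(document, &Document::imageChanged,
                 canvas, [canvas]() { canvas->update(); });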

If a widget informs a controller directly by calling its methods then the widget needs to know about the interface of the controller. I think this is useful in situations where you don't want the overhead of the signal-slot system (like a real-time interactive widget, such as a painting canvas).
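A direct-call version might look like this (the Canvas class, the m_controller member, and continueStroke are all hypothetical):

// The widget knows the controller's interface and calls it directly,
// bypassing the signal-slot system for high-frequency events.
void Canvas::mouseMoveEvent(QMouseEvent *event)
{
    if (m_controller) // injected at construction
        m_controller->continueStroke(event->pos());
}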

The architecture used in the "View" widgets of Qt is Model/View, which is different from Model-View-Controller in that it doesn't have a separate element as the controller. It instead uses a delegate (http://doc.qt.io/qt-5/model-view-programming.html#delegate-classes) that, as I understand it, is subordinate to the widget and is responsible for editing the model. So there's some choice in what architecture to use when designing your application, plus stylistic choices.
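The classic illustration is a spin-box delegate, along the lines of the one in the Model/View documentation; trimmed down, the part that edits the model is:

#include <QStyledItemDelegate>
#include <QSpinBox>

class SpinBoxDelegate : public QStyledItemDelegate
{
public:
    QWidget *createEditor(QWidget *parent, const QStyleOptionViewItem &,
                          const QModelIndex &) const override
    {
        return new QSpinBox(parent);
    }

    void setModelData(QWidget *editor, QAbstractItemModel *model,
                      const QModelIndex &index) const override
    {
        QSpinBox *spinBox = static_cast<QSpinBox *>(editor);
        spinBox->interpretText();
        // The delegate, not the view, writes the edited value into the model.
        model->setData(index, spinBox->value(), Qt::EditRole);
    }
};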

I'm still studying the articles from the documentation (Model/View Programming (http://doc.qt.io/qt-5/model-view-programming.html) and Model/View Tutorial (http://doc.qt.io/qt-5/modelview.html)) to fully understand it, but that's the theory that I was after.

d_stranz
28th February 2015, 16:56
I think this is useful in situations where you don't want the overhead of the signal-slot system (like a real-time interactive widget, such as a painting canvas).

If your users are fast enough that they can draw more quickly than a signal / slot can execute, then you have some very fast users indeed. A typical user response time is on the order of 30 - 50 ms; that's why video played at that frame interval (roughly 20 - 30 frames per second) is perceived as a seamless and lifelike scene. Whatever processing is done in response to a user action will almost certainly take longer than the time required for the event loop to process a signal / slot event.

From what I have read, Smalltalk has a true MVC architecture. Qt and Microsoft's Foundation Classes (MFC), to give two examples, combine the Controller with the View, in that the View handles input events. My "interactor" class tries to put a little space between the Model and View by using a third party to map raw mouse events into something meaningful to the Model.

But I think you are on the right track with your thinking.