User Interface Ontology as a way to guide Qt GUI design



Nathaniel Christen
9th June 2011, 18:23
Hi: I read a very interesting article by Witold Wysota, one of the administrators of this site, called "Semantic Model of Application User Interfaces". As someone interested in automatically generating GUIs from a data model, and also in building an ontology of the user-interface domain (that is, an ontology OF user interfaces, not user interfaces TO ontologies in other domains, e.g. medicine), I was pleased to see specific ideas on this subject proposed by a Qt expert. The journal "Information Theories and Applications" published a proposal for a UI ontology to be hosted on the internet back in 2003, but does not seem to have followed up on it.

I am a former student at the University at Buffalo, NY, which houses the (US) National Center for Ontological Research and has done extensive work on formal systems for ontological engineering, and also on the theory of partitions, which I believe could be very useful in analyzing how an Application User Interface can best be subdivided so as to incorporate the structure of the Application Data Model in a user-friendly way. There are many possible UI controls, both for single data fields and for aggregators like notebooks, dialog boxes, wizards, etc. If heuristics existed for how to map data structure onto UI structure, it might be possible to autogenerate useful GUIs on top of a data model, even if a programmer then needs to tweak the results.

I would like to develop a User Interface Ontology which catalogs different kinds of UI components (as well as things like User-Generated Events, etc.), including information such as the relative complexity of different UI controls, what UI control types they can contain or be contained inside, and so on, as a way to guide the mapping from data-model partitions onto UI controls.
I have already developed C++ code generators which can build a lot of the boilerplate code for Qt (and wxWidgets) apps, though only if the UI details are explicitly set beforehand (for myself, using a Lisp-like DSL). I'd like to extend this with a kind of decision engine which can infer the best UI controls to represent a given data component, and a UI ontology could provide parameters guiding this decision process. Therefore, I'd like to know whether the ideas in Wysota's article are purely conceptual, or whether there are actual Qt-related tools which incorporate them, e.g. source-code analyzers for Qt apps. "Another important field of use is creation and usage of semantics-enabled code generators. Having an abstract model of a user interface makes it possible to implement tools that would transform such definition into source code for different arbitrary development environments" -- I agree; do these tools exist at all?
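To make the "decision engine" idea concrete, here is a minimal sketch in plain C++ of the kind of rule such an engine might apply to a single data field. All names here (FieldInfo, pickControl, the concept strings, the thresholds) are invented for illustration -- they do not come from any existing tool discussed in this thread.

```cpp
#include <string>

// Hypothetical descriptor for one field of a data model.
struct FieldInfo {
    std::string conceptName;   // e.g. "date", "choice", "free-text"
    int choiceCount = 0;       // number of alternatives, if an enumeration
};

// A toy heuristic: pick a widget-type name for a single data field.
// Real ontology-driven selection would consult class/property data instead
// of hardcoded branches.
std::string pickControl(const FieldInfo& f) {
    if (f.conceptName == "date") return "QDateEdit";
    if (f.conceptName == "choice")
        return f.choiceCount <= 4 ? "QRadioButton group" : "QComboBox";
    if (f.conceptName == "free-text") return "QTextEdit";
    return "QLineEdit";        // fallback for unrecognized concepts
}
```

The point of a UI ontology in this picture is to replace the hardcoded branches with queryable knowledge, so that adding a new widget type means adding an ontology entry rather than editing the engine.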

wysota
9th June 2011, 21:18
Hello, I'm glad to see there is someone interested in my work.

I am just publishing a paper (in http://www.springer.com/series/7092) on semi-automatic porting of user interfaces based on an ontological model. The tools involved are of a purely Semantic Web nature, nothing Qt-specific, and certainly nothing that can be used out of the box. I have performed an experiment in which I extracted an ontology of the user interface from an MFC application and used the concept of ontology alignment to port the user interface to Qt. There are many pitfalls involved and the results are not astonishing, but I believe I am on the right path.

As for the GUI ontology itself, in its current state I have about 50 classes and about 30 properties modelling the behaviour of widget-based user interfaces. Unfortunately the Semantic Web still lacks the proper tools to express everything I'd like to, so much time is spent working around problems with OWL 2.0 and Protege as the working environment (especially as I don't feel all that comfortable with it). Part of my approach involves mapping data types (or actually data concepts) to the user interface elements that would handle them, which doesn't seem far from what you are looking for. I also intend to allow for generation of skeleton code based on a model extracted using semantic tools. Unfortunately there is no working code yet, as this is a side path of my PhD, but I believe that once the knowledge is there, the actual code generation for Qt is trivial: the formality of the model (and thus its non-ambiguity) helps significantly in this regard.

I would be very happy to share my experiences and also listen to comments and suggestions regarding the subject.

dpatel
10th June 2011, 07:07
Hi: I read a very interesting article by Witold Wysota, who is one of the administrators of this site, called "Semantic Model of Application User Interfaces".

Hi Nathaniel Christen, can you post a link to the article you are talking about? I am also interested.

Nathaniel Christen
13th June 2011, 19:59
Hi: Based on your saying that "Part of my approach involves mapping data types (or actually data concepts) to user interface elements that would handle them", it sounds as if you are working on ideas similar to some experiments described by a pair of engineers at RedWhale Software in California (http://www.google.com/url?sa=t&source=web&cd=1&ved=0CDsQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.35.8463%26rep%3Drep1%26type%3Dpdf&rct=j&q=Adaptation%20in%20Automated%20User-Interface%20Design&ei=JFP2TaSQCuLw0gG8koHxDA&usg=AFQjCNEbGSUkSzV-16II6VGaCXr_8NigHA&cad=rja) (that's a long link, but I can't find a shorter address for it). Their article is called "Adaptation in Automated User-Interface Design", but as far as I can tell they deal only with "atomic" data elements, or at least small-scale ones, rather than addressing how to automate the grouping of smaller components into larger ones using notebooks, wizards, etc. My guess is that you've actually made more progress than they have in terms of the sheer size of your ontology, as measured by the number of classes and properties.

I recognize that mapping "data concepts" to user interface components -- the article I just cited calls them "interactors", which I think is a useful generic term -- is only one part of a "user interface ontology", but I think it is an important part. I also recognize that actually listing all the various possibilities and permutations could be a daunting task. However, I'd like to point out that, at least in an informal sense, this kind of mapping is an intrinsic part of documenting new widget types or new datatypes. For example, suppose someone designs a new Qt widget and wants to publish the code. They'd have to explain how to use it, which would include explaining the kind of data that can be inserted into its various fields -- and whether there is a single special datatype that can initialize the entire widget, or special datatypes for its internal components. Conversely, given a special kind of datatype, an interactor can be built to display it in a modular way -- dropped into an application, ideally in isolation from other components in the GUI.

So a "user interface ontology" could perhaps be "seeded", so to speak, just by extending this natural documentation process. In other words, perhaps there could be a place on the web where someone who has designed (and might make publically available) some new Qt widget could provide information in some specific format about the kind of data it can receive, alongside details like I mentioned in my original post, e.g.: undo/redo, serialization and persistence, resizing, realtime updating -- whether these concepts make sense for the new interactor and, if so, what further details can be provided, for example, whether an interactor's data can be serialized as XML, as boost_archive, as SQL, etc. If people started filling out forms along these lines, it would be interesting to see what patterns emerged and we might start to get a sense of a bonafide "user interface ontology" emerging.

I'd be happy to set up such a form and database backend for it as a kind of demo piece, if that seems like an interesting project. Maybe I'd do it anyhow, but your experience actually building such an ontology might give you insights into the kind of data that should be in such a form (ditto for anyone else who might be following this thread).

One last comment: the power of "automated UI design" should not be underestimated, even if UI design will never be fully automated. With the explosion of the internet and web sites, I'm a little frustrated with the lack of software to manage so much of this material. Most small or medium scale web sites I know of do not have software specifically designed for them, which causes their maintainers endless headaches. Web sites whose specific task is to aggregate other sites -- e.g. local chambers of commerce or locally designed tourism sites -- face the extra hardship of managing disparate data sources, none of which have software designed with such collaborations in mind. Moreover, more and more applications are designed as "web apps" rather than real software -- placing sometimes unnecessary demands on servers and web traffic, with web pages doing lots of AJAX transfers, running slowly, and wasting energy -- the internet apparently contributes 2% of all greenhouse gas emissions, comparable to the aviation industry. The problem seems to be that building real desktop software (e.g. with Qt) is so time-consuming compared to building web sites that software engineers cannot keep up. But semi-automating the software development process could change all that.

Maybe the connection is too subtle to catch on, but I'd like to pitch semiautomated software development as a form of green technology!

Added after 18 minutes:

One more thing: a link to Witold Wysota's original article was requested: http://blog.wysota.eu.org/wp-content/uploads/2009/09/ISAT2009.pdf

Two other articles worth checking out: http://sci-gems.math.bas.bg:8080/jspui/bitstream/10525/918/1/ijita10-1-p13.pdf
and
http://www.google.com/url?sa=t&source=web&cd=1&ved=0CDsQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.35.8463%26rep%3Drep1%26type%3Dpdf&rct=j&q=Adaptation%20in%20Automated%20User-Interface%20Design&ei=JFP2TaSQCuLw0gG8koHxDA&usg=AFQjCNEbGSUkSzV-16II6VGaCXr_8NigHA&cad=rja

I'm sure there are others. If anyone knows of others, I'd like to compile a list of resources related to user interface ontology and automated (or semiautomated) user interface design, so I'd appreciate suggestions. Thanks.

wysota
13th June 2011, 20:05
Hi Nathaniel Christen, can you post a link to the article you are talking about? I am also interested.

My papers are available here: http://blog.wysota.eu.org/index.php/bibliography/. I believe this is the article we're talking about: http://blog.wysota.eu.org/wp-content/uploads/2009/09/ISAT2009.pdf


Hi: Based on your saying that "Part of my approach involves mapping data types (or actually data concepts) to user interface elements that would handle them", it sounds as if you are working on ideas similar to some experiments described by a pair of engineers at RedWhale Software in California (http://www.google.com/url?sa=t&source=web&cd=1&ved=0CDsQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.35.8463%26rep%3Drep1%26type%3Dpdf&rct=j&q=Adaptation%20in%20Automated%20User-Interface%20Design&ei=JFP2TaSQCuLw0gG8koHxDA&usg=AFQjCNEbGSUkSzV-16II6VGaCXr_8NigHA&cad=rja) (that's a long link, but I can't find a shorter address for it). Their article is called "Adaptation in Automated User-Interface Design", but as far as I can tell they deal only with "atomic" data elements, or at least small-scale ones, rather than addressing how to automate the grouping of smaller components into larger ones using notebooks, wizards, etc. My guess is that you've actually made more progress than they have in terms of the sheer size of your ontology, as measured by the number of classes and properties.
I just had a quick look at the article. I think that my approach is a bit different (although the general idea of associating some types with some elements handling those types seems similar). The paper you mention seems to focus on data primitives (such as integers, strings, etc.), while I aim at a more high-level view of the domain, using data-type concepts such as "file", "latitude", etc. Of course, once you start analyzing those terms you reach the mentioned primitives (latitude is really either a set of two integers and a real, or just a single real value).


For example, suppose someone designs a new Qt widget and wants to publish the code. They'd have to explain how to use it, which would include explaining the kind of data that can be inserted into its various fields
I tend not to focus on individual fields. For me a widget serves some dedicated purpose, and you can always give a name to the dataset processed by the element. The fact that it might be a composition of other entities is meaningless; in my opinion that's one of the points of using ontologies (to one person 'latitude' is a set of two ints and a real, to another it's just a real, so let's simply use the term 'latitude' and not dwell on the details).


So a "user interface ontology" could perhaps be "seeded", so to speak, just by extending this natural documentation process.
Having an ontological description of a widget could be useful, indeed. For instance, you could have a search engine that would find you widgets/controls compliant with the set of terms you feed it. It's an extension of the 'tags' mechanism, where individual tags could be related to each other, and being able to find those relationships could yield more interesting results. Another possibility is that an agent-driven environment could negotiate the user interface based on an ontological description. I think that's quite strongly related to the automated UI generation you speak of.


In other words, perhaps there could be a place on the web where someone who has designed (and might make publically available) some new Qt widget could provide information in some specific format about the kind of data it can receive,
I could even imagine this as a registry of controls that doesn't specify the toolkit they are meant for. It could make it possible to find "replacements" for elements from one technology in another framework.


alongside details like I mentioned in my original post, e.g.: undo/redo, serialization and persistence, resizing, realtime updating -- whether these concepts make sense for the new interactor and, if so, what further details can be provided,
I'm not so sure about such details, but in general, yes -- it even fits my model to have a class of "undoable objects" (as a subclass of objects able to perform 'undo' and 'redo' actions), the same way I have "clickable" objects.

Here is a subset of classes from my ontology. The part related to data handling is focused around the ValueInput class.

(attachment: diagram of ontology classes)

Nathaniel Christen
15th June 2011, 18:43
I'm pleased to hear your reaction to the RedWhale paper, because I had the same impression but thought perhaps I was not reading carefully enough. I agree that matching atomic data elements to small-scale widgets like QLabel is a small piece of the larger puzzle. If a decision engine, say, needs to design a widget to display some C++ object, it should start by choosing a large-scale container like a frame, notebook (e.g. QTabWidget), "scrollwindow", etc. On the other hand, as the engine "walks down" the object it needs to fit smaller widgets into larger ones, and eventually it may need to choose a different top-level container. I think the top-down and the bottom-up approaches complement each other. Certain heuristics, such as measuring "object complexity", could help the decision engine choose the best top-level container to begin with. On the other hand, fine-grained details about an object's fields can help the engine reuse existing designs.
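As a toy illustration of the "object complexity" heuristic described above, a decision engine might estimate complexity as a field count and pick a top-level container accordingly. The thresholds and the choice of container types below are invented purely for this sketch:

```cpp
#include <string>

// Invented heuristic: map a crude complexity measure (number of fields in
// the C++ object to be displayed) to a top-level container choice. A real
// engine would revise this choice while "walking down" the object, as the
// paragraph above describes.
std::string pickContainer(int fieldCount) {
    if (fieldCount <= 4)  return "QGroupBox";  // simple framed group
    if (fieldCount <= 16) return "QTabWidget"; // notebook with pages
    return "QWizard";                          // step-by-step for large objects
}
```

The interplay between this top-down guess and the bottom-up fitting of child widgets is exactly where the engine may need to backtrack and choose a different container.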

Also, you point out that "a widget serves some dedicated purpose and you can always give a name to the dataset processed by the element...that it might be a composition of other entities is meaningless...that's one of the points of using ontologies", which is well-taken, but there still needs to be some way of relating the ontology to source code. If a C++ class, for example, models some concept described by an ontology, there are different ways of actually expressing this intention. Maybe "Concept C++" will standardize that, but for now we can use multiple inheritance and include some "concept class" among our base classes; we can use a member typedef; we can use some "concept object" as a data member; we can use a namespace-scoped type traits struct; etc. One difference between a dataset which conceptually includes, say, concepts Y and Z, and a dataset which is modeled by an actual class X, is whether a widget designed to represent that dataset has a constructor which takes, say, a const X&, or whether the constructor takes a const Y& and a const Z&. Given that C++ lacks Java or C# reflection, we have our choice of strategies regarding how to embed "ontological" information about C++ classes so that this information is available at runtime and could be used, say, to autogenerate a viewer for some object while an application is running.
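One of the strategies listed above -- the namespace-scoped type traits struct -- could be sketched as follows. The class, the concept name, and the trait are all hypothetical; the point is only that the annotation lives outside the annotated class:

```cpp
#include <string>

// Hypothetical domain class with no knowledge of any ontology.
struct GeoPoint {
    double latitude  = 0.0;
    double longitude = 0.0;
};

// Primary template: by default a type carries no ontological annotation.
template <typename T>
struct OntologyConcept {
    static constexpr const char* name = "unknown";
};

// Specialization: declare, non-intrusively, that GeoPoint models the
// "geographic coordinate" concept.
template <>
struct OntologyConcept<GeoPoint> {
    static constexpr const char* name = "geographic coordinate";
};

// Generic code (e.g. a viewer generator) can query the annotation.
template <typename T>
std::string conceptOf(const T&) {
    return OntologyConcept<T>::name;
}
```

Because the trait is resolved at compile time, this approach costs nothing at runtime, but it also means the mapping is invisible to tools that only see compiled binaries -- one reason an external metadata registry might be preferable.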

Nathaniel Christen
16th June 2011, 16:48
My last post read a bit inelegantly, so let me try to improve on it using the portion of the ontology you linked above -- BTW, great work ... I especially liked how you recognized conceptual differences that would not have occurred to me; for example, distinguishing "decorative object" from "display object". I think in conceptualizing the GUI domain I was attending to behavior and not paying enough attention to purpose. I do believe that behaviors (e.g. persisting object state) should be part of an ontology, but often in the form of "cross-cutting concerns", as AOP calls them, rather than subclasses.

OK, then, suppose I want to use this ontology for a Qt application. In general, I don't think that classes should model ontologies through their inheritance trees; the two systems should be orthogonal. Perhaps the different categories in this ontology could be encapsulated into objects of their own, with, say, a name (e.g. "datetime input"), pointers to one or more parent categories as appropriate, etc. Maybe there could be a static method, or some kind of "ontology query" object, with methods which take a Qt widget and return a pointer to the category it fits into. Having said that, suppose some runtime engine is trying to find a widget to display some value or collection of values: when querying the set of widgets available for dynamically adding to a running application, this information would have to include details about how each widget is initialized, which is not always clear from the ontological category itself. For example, if all we know about a widget is that it displays datetimes, we don't know whether it can be initialized by strings, a number representing milliseconds from the epoch, boost::date_time values, etc.
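The "category objects orthogonal to the class hierarchy" idea might look like this minimal sketch. The category names and the single-parent restriction are simplifications made for the example; a real ontology mirror would allow multiple parents:

```cpp
#include <string>

// A runtime mirror of an ontology category, deliberately kept separate
// from the C++ inheritance tree of the widgets themselves.
struct UiCategory {
    std::string name;          // e.g. "datetime input"
    const UiCategory* parent;  // single parent, for simplicity of the sketch
};

// Does `cat` fall under `ancestor`, directly or transitively?
bool isA(const UiCategory* cat, const UiCategory* ancestor) {
    for (; cat != nullptr; cat = cat->parent)
        if (cat == ancestor) return true;
    return false;
}

// A tiny fragment of such a hierarchy (names illustrative):
const UiCategory valueInput{"value input", nullptr};
const UiCategory datetimeInput{"datetime input", &valueInput};
```

An "ontology query" facility could then map a concrete widget (e.g. a QDateTimeEdit) to &datetimeInput, and a decision engine could ask isA(category, &valueInput) without the widget classes knowing anything about the ontology.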

wysota
16th June 2011, 18:06
On the other hand, as the engine "walks down" the object it needs to fit smaller widgets into larger ones, and eventually it may need to choose a different top-level container.
I never thought of it this way, but it seems right.


Also, you point out that "a widget serves some dedicated purpose and you can always give a name to the dataset processed by the element...that it might be a composition of other entities is meaningless...that's one of the points of using ontologies", which is well-taken, but there still needs to be some way of relating the ontology to source code.
The thing is that different frameworks or languages might implement the same concept in totally different ways. If you design an ontology with a particular implementation in mind (e.g. C++, Qt, etc.), then the resulting model will be too tightly bound to that methodology and impossible to express in a different framework.
As part of my latest (not yet published) paper I've been studying a few approaches to modelling UIs in a portable way, and in some of them this tight coupling was evident.


If a C++ class, for example, models some concept described by an ontology, there are different ways of actually expressing this intention.
Exactly. And the other way round (as I have mentioned above).


but for now we can use multiple inheritance and include some "concept class" among our base classes; we can use a member typedef; we can use some "concept object" as a data member; we can use a namespace-scoped type traits struct; etc.
I wouldn't do that. If anything, then a qualified comment, or something like what Qt has -- Q_CLASSINFO -- would best fit this purpose. This is meta-data and shouldn't influence the implementation in any way.
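In Qt, Q_CLASSINFO attaches key/value pairs to a class's meta-object without touching its behavior. A toolkit-neutral analogue of the same idea -- metadata kept entirely outside the annotated class -- could be sketched in plain C++ as an external registry keyed by class name. The function names and keys below are invented for the example:

```cpp
#include <map>
#include <string>

// External key/value meta-data per class name, kept out of the classes
// themselves so that annotating a class never changes its implementation.
std::map<std::string, std::map<std::string, std::string>>& classInfoRegistry() {
    static std::map<std::string, std::map<std::string, std::string>> reg;
    return reg;
}

void setClassInfo(const std::string& cls, const std::string& key,
                  const std::string& value) {
    classInfoRegistry()[cls][key] = value;
}

// Returns the stored value, or "" if the class or key is unknown.
std::string classInfo(const std::string& cls, const std::string& key) {
    auto c = classInfoRegistry().find(cls);
    if (c == classInfoRegistry().end()) return "";
    auto k = c->second.find(key);
    return k == c->second.end() ? "" : k->second;
}
```

With Qt itself one would instead write Q_CLASSINFO("dataConcept", "color") inside a Q_OBJECT class and read it back through the meta-object system; the registry above just makes the "meta-data, not implementation" principle visible without a Qt dependency.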


Given that C++ lacks Java or C# reflection, we have our choice of strategies re how to embed "ontological" information about C++ classes so that this information is available at runtime and could be used, say, to autogenerate a viewer for some object while an applicaton is running.
I've been pursuing this approach as well. We've developed a concept of "views", where different stakeholders look at the same data from different perspectives and thus the data should be displayed differently for each of them. Some of the concepts ended up here: http://www.springerlink.com/content/p282445q12566807/ (the paper is also available from my website).


In general, I don't think that classes should model ontologies through their inheritance trees;
Definitely not. One has to remember that the notion of "class" in object-oriented programming has (almost) nothing to do with the notion of "class" in the Semantic Web. This is even more evident when talking about subclasses, where in SW if C2 is a subclass of C1, then every instance of C2 is also an instance of C1 (and nothing more).

One of the issues I've encountered is that there is no tool that makes it possible to decide upon the implementation of a concept using only a Semantic Web approach; it requires some hardcoded "domain knowledge" to make such a decision. Sometimes a single concept is implemented by a single object, sometimes a single object implements several concepts, and sometimes a single concept requires more than one object (in a particular technology).