Developing an interactive application that must run on several devices, and perhaps even support access via several modalities, can be a tedious and cumbersome task. The heterogeneous landscape of available devices — equipped with different operating systems, requiring applications to be implemented in different programming languages and with different user interface toolkits, and demanding adherence to different interaction patterns and style guides — usually means implementing the same software more than once, and raises the cost of maintaining these different realizations of the same application afterwards as well.
Model-driven design and development of user interfaces (MDDUI), which has been practiced in Human-Computer Interaction since the late 1980s, is a promising approach to supporting software developers who build interactive applications. MDDUI models the different aspects of the user interface at certain levels of abstraction and offers a declarative way of modelling multi-platform user interfaces. Each model is refined into more specific ones: the more abstract models can be shared for the development of several platforms and modalities, while the more concrete ones address the capabilities of specific platforms. MDDUI is concerned with two basic problems:
- The identification of suitable sets of model abstractions and their relations for analyzing, designing and evaluating interactive systems.
- The specification of software engineering processes that change the focus from (manual) implementation to the tool-driven design of models.
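The refinement chain described above — from a shared abstract model down to platform-specific realizations — can be sketched in a few lines. This is an illustrative toy, not code from any of the systems cited below; the class and function names are my own, and real MDDUI tools express the mapping rules declaratively rather than in program code.

```python
# Toy sketch of the MDDUI refinement chain: one task model and one
# abstract UI model are shared; only the last step is platform-specific.
from dataclasses import dataclass

@dataclass
class Task:
    """Task model: describes what the user wants to do, toolkit-free."""
    name: str

@dataclass
class AbstractWidget:
    """Abstract UI model: an interaction kind, still independent of any platform."""
    kind: str    # e.g. "trigger" for an action, "input" for data entry
    label: str

def task_to_abstract(task: Task) -> AbstractWidget:
    # Simplistic mapping rule; real tools derive this from declarative models.
    return AbstractWidget(kind="trigger", label=task.name)

def abstract_to_concrete(widget: AbstractWidget, platform: str) -> str:
    # Concrete UI: the only platform-specific step in the chain.
    if platform == "html":
        return f"<button>{widget.label}</button>"
    if platform == "voice":
        return f'Say "{widget.label}" to continue.'
    raise ValueError(f"unknown platform: {platform}")

widget = task_to_abstract(Task("Submit order"))
print(abstract_to_concrete(widget, "html"))   # <button>Submit order</button>
print(abstract_to_concrete(widget, "voice"))  # Say "Submit order" to continue.
```

The point of the sketch is the sharing: adding a third platform means adding one branch to the final step, while the task and abstract models stay untouched.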
One often-mentioned disadvantage of MDDUI approaches is the long, often very abstract and complex process: developers must learn new languages and concepts (such as task and dialogue models, abstract and concrete user interfaces) and follow time-consuming processes and tool chains before first results can be inspected. These results are often not the expected ones, since even for experienced developers it is hard to mentally bridge the gap between manipulations at a high level of abstraction and the concrete outcome for a whole range of different devices and modalities.
My research interests in MDDUI focus on bridging the gap between the abstraction of models and concrete realizations of user interfaces, as well as closing the technical gap between what is designed and what is realized by an implementation. I am therefore researching methods that offer intermediary steps, allowing prototypes to be derived as early and as often as possible. Furthermore, I am interested in runtime environments that can directly execute the design models, and in collaborative design environments explicitly targeted at supporting interaction designers in addition to software developers.
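The idea of directly executing design models can be made concrete with a minimal interpreter: a dialogue model kept as declarative data and walked by a small runtime loop. This is a hedged sketch under my own assumptions — the model format and names are invented for illustration and are not taken from the MASP or any other cited system.

```python
# A declarative dialogue model as plain data: each state names its prompt
# and its successor. Because the interpreter reads this structure at
# runtime, edits to the model take effect in the running system.
dialogue_model = {
    "start":   {"prompt": "Enter address", "next": "confirm"},
    "confirm": {"prompt": "Confirm order", "next": "done"},
    "done":    {"prompt": "Finished",      "next": None},
}

def run(model, state="start"):
    """Interpret the model directly, collecting the prompts in order."""
    prompts = []
    while state is not None:
        node = model[state]
        prompts.append(node["prompt"])
        state = node["next"]
    return prompts

print(run(dialogue_model))  # ['Enter address', 'Confirm order', 'Finished']
```

Since no code is generated from the model, there is no design-time/run-time gap to resynchronize: changing a `"next"` entry in the data immediately changes the behavior of the interpreter, which is the property an executable-model runtime exploits for prototyping and personalization.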
Publications
- S. Feuerstack, “A Method for the User-centered and Model-based Development of Interactive Applications,” PhD Thesis, 2008.
[Bibtex]
@PHDTHESIS{Feuerstack2008a,
author = {Sebastian Feuerstack},
title = {A Method for the User-centered and Model-based Development of Interactive
Applications},
school = {Technische Universität Berlin},
year = {2008},
abstract = {Nowadays, internet access is available in nearly every situation,
but applications fail to adapt to the user's situation. Applications
are still developed for a certain device and a specific context-of-use
and adaption happens on the user's end-devices without taking into
account the applications' semantics. This results in applications
that cannot be conveniently accessed on a device that the application's
designer has not explicitly considered. Currently most of the internet
services cannot be comfortably accessed by today's mobile device's
browsers that are often only able to scale down images or offer enhanced
scrolling capabilities. Further context-of-use adaptations or support
for additional modalities have only been realized for very specific
applications like for instance in-car navigation systems.
This dissertation research focuses on creating a method for interactive
software design that implements a strong user-centric perspective
and requires testing steps in all phases of the development process
enabling designers to concentrate on the end-users' requirements and to identify
how the users' way of thinking can be considered as the basic way
of controlling the application. The method involves the creation
of various models, starting with very abstract models and continuously
substantiating abstract into more concrete models. All of the models
are specified in a declarative manner and can be directly interpreted
to form the interactive application at run-time. Therefore all design
semantics can be accessed and manipulated to realize context-of-use
adaptations at run-time. By the abstract-to-detail modeling approach
that includes deriving one model to form the basis of the next, more
concrete model, consistency can be achieved if an application needs
to be designed to support different platforms and modalities. The
method is supported by tools for all steps of the development process
to support the designer in specifying and deriving the declarative
models. Additionally all tools support the generation of prototypes
and examples enabling the developer testing intermediary results
during the design process and to consider users' feedback as often
and early as possible.
The method is complemented by an associated run-time architecture,
the Multi-Access Service Platform that implements a model-agent concept
to make all of the design models alive to bridge the actual gap between
design-time and run-time of an interactive model-based system. Since
the run-time-architecture is able to synchronize changes between
the different models it can be used on the one hand to manipulate
a running system in order to prototype and test new features and
on the other hand it enables to personalize a running system instance
to a certain user's requirements. The architecture considers the
tools of the method and enables the designer to deploy changes of
the models directly into the running system.
In the past a lot of user interface description languages (UIDL) have
been proposed to specify an application on different model abstraction
levels. As of now, most of these proposals have not received broad
acceptance in the research community, where most research groups implement
their own UIDLs. Both, the method and the run-time architecture abstract
from a specific UIDL but pay attention to the actual types of abstraction
levels that have been identified during the state of the art analysis.
Instead of proposing yet another UIDL, the work concentrates on identifying
and realizing missing aspects of existing UIDLs (1), in enhancing
well-accepted approaches (2), and by introducing alternative approaches
that have not been proposed so far (3). Regarding (1) the work describes
an approach for a layout model that specifies the user interface
layout by statements containing interpretations of the other design
models and can be designed and tested by using an interactive tool.
Specifying a task model has been widely accepted to implement a user-centric
development process. Thus, we enhance the ConcurTaskTree notation
to support run-time interpretation and explicit domain model annotations
to support our work (2). Finally, the work proposes a different way
of specifying an abstract user interface model (3) that is based
on a derivation based on a domain model instead of using a task model
as the initial derivation source.},
competencecenter = {Human Machine Interaction},
file = {:Feuerstack2008a.pdf:PDF},
offisdivision = {Verkehr / Human Centered Design},
owner = {blumendorf},
timestamp = {2008.07.27}
}
- S. Feuerstack, M. Blumendorf, and S. Albayrak, “Bridging the Gap between Model and Design of User Interfaces,” in Informatik für Menschen, 2006, pp. 131-137.
[Bibtex]
@INPROCEEDINGS{Feuerstack2006,
author = {Sebastian Feuerstack and Marco Blumendorf and Sahin Albayrak},
title = {Bridging the Gap between Model and Design of User Interfaces},
booktitle = {Informatik für Menschen},
year = {2006},
editor = {Christian Hochberger and Rüdiger Liskowsky},
volume = {P-94},
number = {2},
series = {GI-Edition - Lecture Notes in Informatics},
pages = {131-137},
month = {October},
publisher = {Bonner Köllen Verlag},
abstract = {The creation of user interfaces usually involves various people in
different roles and several tools that are designed to support each
specific role. In this paper we propose a tool for rapid prototyping
that allows all parties involved to directly interact with the system
under development. The tool is based on task tree development and
integrates the system designer, the user interface designer, the
usability expert, and the user interface developer in a common process.
The final system is derived from two sources, the task model specified
by the system architect and the final user interface specified by
the user interface developer and designer. Aggregating the runtime
system and the design tools into one complete integrated system is
our approach to bridge the gap between the user interface designer
working on system mock-ups and the actual developers implementing
the system.},
competencecenter = {Human Machine Interaction},
file = {Feuerstack2006.pdf:Feuerstack2006.pdf:PDF},
offisdivision = {Verkehr / Human Centered Design},
owner = {sfeu},
timestamp = {2006.10.12}
}
- S. Feuerstack, M. Blumendorf, M. Kern, M. Kruppa, M. Quade, M. Runge, and S. Albayrak, “Automated Usability Evaluation during Model-Based Interactive System Development,” in HCSE-TAMODIA ’08: Proceedings of the 2nd Conference on Human-Centered Software Engineering and 7th International Workshop on Task Models and Diagrams, Berlin, Heidelberg, 2008, pp. 134–141.
[Bibtex]
@INPROCEEDINGS{Feuerstack2008b,
author = {Sebastian Feuerstack and Marco Blumendorf and Maximilian Kern and
Michael Kruppa and Michael Quade and Mathias Runge and Sahin Albayrak},
title = {Automated Usability Evaluation during Model-Based Interactive System
Development},
booktitle = {HCSE-TAMODIA '08: Proceedings of the 2nd Conference on Human-Centered
Software Engineering and 7th International Workshop on Task Models
and Diagrams},
year = {2008},
pages = {134--141},
address = {Berlin, Heidelberg},
publisher = {Springer-Verlag},
abstract = {In this paper we describe an approach to efficiently evaluate the
usability of an interactive application that has been realized to
support various platforms and modalities. Therefore we combine our
Multi-Access Service Platform (MASP), a model-based runtime environment
to offer multimodal user interfaces with the MeMo workbench which
is a tool supporting an automated usability analysis. Instead of
deriving a system model by reverse-engineering or annotating screenshots
for the automated usability analysis, we use the semantics of the
runtime models of the MASP. This allows us to reduce the evaluation
effort by automating parts of the testing process for various combinations
of platforms and user groups that should be addressed by the application.
Furthermore, by testing the application at runtime, the usability
evaluation can also consider system dynamics and information that
are unavailable at design time.},
competencecenter = {Human Machine Interaction},
doi = {10.1007/978-3-540-85992-5_12},
file = {Feuerstack2008b.pdf:Feuerstack2008b.pdf:PDF},
isbn = {978-3-540-85991-8},
keywords = {model-based user interface development, automated usability evaluation},
location = {Pisa, Italy},
offisdivision = {Verkehr / Human Centered Design},
owner = {sfeu},
timestamp = {2008.08.08}
}
- G. Lehmann, M. Blumendorf, S. Feuerstack, and S. Albayrak, “Utilizing Dynamic Executable Models for User Interface Development,” in Interactive Systems – Design, Specification, and Verification, 2008.
[Bibtex]
@INPROCEEDINGS{Lehmann2008a,
author = {Grzegorz Lehmann and Marco Blumendorf and Sebastian Feuerstack and
Sahin Albayrak},
title = {Utilizing Dynamic Executable Models for User Interface Development},
booktitle = {Interactive Systems - Design, Specification, and Verification},
year = {2008},
editor = {T. C. Nicholas Graham and Philippe Palanque},
publisher = {Springer-Verlag Gmbh},
abstract = {In this demonstration we present the Multi Access Service Platform
(MASP), a model-based runtime architecture for user interface development
based on the idea of dynamic executable models. Such models are self-contained
and complete as they contain the static structure, the dynamic state
information as well as the execution logic. Utilizing dynamic executable
models allows us to implement a rapid prototyping approach and provide
mechanisms for the extension of the UI modeling language of the MASP.},
competencecenter = {Human Machine Interaction},
file = {Lehmann2008a.pdf:Lehmann2008a.pdf:PDF},
keywords = {human-computer interaction, model-based user interfaces, runtime interpretation},
offisdivision = {Verkehr / Human Centered Design},
owner = {blumendorf},
timestamp = {2008.05.15}
}