Modelling Multimodal Interaction
Written by Sebastian Feuerstack
October 4th, 2015

Current human-computer interaction based on the classical desktop metaphor, implemented by a screen, mouse, and keyboard, demands the full attention of the user’s eyes and hands. While this kind of interaction has proven efficient and is still the main way of working with a computer, it is disadvantageous in many situations, especially those in which you need your hands or eyes for other things, for instance when driving a car, cooking, cleaning, playing with your kids, and during most other activities of daily life.

Multimodal interaction, which in principle can draw on all human senses, offers two main benefits. On the one hand, it can make interaction more intuitive and easier to learn. On the other hand, it can substitute for senses of the user that are currently occupied by actions other than interacting with the computer. Even more importantly, it can completely replace one way of interacting (e.g. a graphically driven one) by combining other modalities (e.g. speech, gestures, and tactile output), for instance for visually impaired people.

Multimodal interaction requires specialized devices to capture input and address the human senses, algorithms for multimodal fusion and fission, and design processes and frameworks for implementing such systems. In my research I am especially interested in prototyping multimodal interaction: constructing design methods and notations that support the developer, and coupling these design models to a runtime environment that interprets them directly, which enables rapid prototyping and bridges the gap between what is designed and what gets implemented.
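
To give a rough idea of what the fusion step involves, here is a minimal, purely illustrative Python sketch that merges a spoken command with a pointing gesture arriving within a short time window. All names in it (InputEvent, FusionEngine, and so on) are hypothetical placeholders of mine and are not part of the Multi-Access Service Platform or any other system referenced in the publications below.

# Illustrative time-window-based multimodal fusion: a spoken action and a
# pointing gesture that occur close together are combined into one command.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class InputEvent:
    modality: str      # e.g. "speech" or "gesture"
    payload: str       # recognized utterance or id of the pointed-at object
    timestamp: float   # seconds


@dataclass
class FusedCommand:
    action: str
    target: str


class FusionEngine:
    """Fuses a spoken action with a pointing gesture that arrives
    within a short time window of it."""

    def __init__(self, window_s: float = 1.5):
        self.window_s = window_s
        self.pending: List[InputEvent] = []

    def push(self, event: InputEvent) -> Optional[FusedCommand]:
        # Drop events that have fallen out of the fusion window.
        self.pending = [e for e in self.pending
                        if event.timestamp - e.timestamp <= self.window_s]
        self.pending.append(event)

        speech = next((e for e in self.pending if e.modality == "speech"), None)
        gesture = next((e for e in self.pending if e.modality == "gesture"), None)
        if speech and gesture:
            self.pending.clear()
            return FusedCommand(action=speech.payload, target=gesture.payload)
        return None  # wait for the complementary modality


if __name__ == "__main__":
    engine = FusionEngine()
    engine.push(InputEvent("speech", "turn on", timestamp=10.0))
    print(engine.push(InputEvent("gesture", "kitchen_lamp", timestamp=10.8)))
    # -> FusedCommand(action='turn on', target='kitchen_lamp')

A real runtime would of course also cover fission (distributing output across modalities) and be driven by the task models described in the publications below; this snippet only illustrates the basic idea of combining complementary input events.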

Publications

  • [PDF] M. Blumendorf, S. Feuerstack, and S. Albayrak, “Multimodal User Interfaces for Smart Environments: The Multi-Access Service Platform,” in Proceedings of the working conference on Advanced visual interfaces, 2008.
    [Bibtex]
    @INPROCEEDINGS{Blumendorf2008a,
    author = {Marco Blumendorf and Sebastian Feuerstack and Sahin Albayrak},
    title = {Multimodal User Interfaces for Smart Environments: The Multi-Access
    Service Platform},
    booktitle = {Proceedings of the working conference on Advanced visual interfaces},
    year = {2008},
    editor = {Paolo Bottoni and Stefano Levialdi},
    publisher = {ACM},
    note = {Proceedings of the working conference on Advanced visual interfaces
    2008},
    abstract = {User interface modeling is a well accepted approach to handle increasing
    user interface complexity. The approach presented in this paper utilizes
    user interface models at runtime to provide a basis for user interface
    distribution and synchronization. Task and domain model synchronize
    workflow and dynamic content across devices and modalities. A cooking
    assistant serves as example application to demonstrate multimodality
    and distribution. Additionally a debugger allows the inspection of
    the underlying user interface models at runtime.},
    competencecenter = {Human Machine Interaction},
    file = {Blumendorf2008a.pdf:Blumendorf2008a.pdf:PDF},
    offisdivision = {Verkehr / Human Centered Design},
    owner = {blumendorf},
    timestamp = {2008.03.07}
    }
  • [PDF] M. Blumendorf, S. Feuerstack, and S. Albayrak, “Multimodal Smart Home User Interfaces,” in Intelligent User Interfaces for Ambient Assisted Living: Proceedings of the First International Workshop IUI4AAL 2008, 2008.
    [Bibtex]
    @INPROCEEDINGS{Blumendorf2008,
    author = {Marco Blumendorf and Sebastian Feuerstack and Sahin Albayrak},
    title = {Multimodal Smart Home User Interfaces},
    booktitle = {Intelligent User Interfaces for Ambient Assisted Living: Proceedings
    of the First International Workshop IUI4AAL 2008},
    year = {2008},
    editor = {Kizito Mukasa and Andreas Holzinger and Arthur Karshmer},
    publisher = {IRB Verlag},
    abstract = {Interacting with smart devices and smart homes becomes increasingly
    complex but also more and more important. Computing systems can help
    the aging society to live autonomously in their own home. However,
    interacting with such systems is a challenge not only for older people.
    Multimodal user interfaces and new interaction paradigms can help
    addressing these problems, but are not yet mature enough to be of
    direct use. In this paper we describe our work in the area of smart
    home environments and multimodal user interaction. We present the
    Ambient Assisted Living Testbed set up at the Technical University
    of Berlin and the Multi-Access Service Platform, allowing multimodal
    interaction in this smart environment with adaptivity, session management,
    migration, distribution and multimodality as key features for future
    services.},
    competencecenter = {Human Machine Interaction},
    file = {Blumendorf2008.pdf:Blumendorf2008.pdf:PDF},
    offisdivision = {Verkehr / Human Centered Design},
    owner = {blumendorf},
    timestamp = {2008.03.07}
    }
  • [PDF] M. Blumendorf, S. Feuerstack, and S. Albayrak, “Event-based Synchronization of Model-Based Multimodal User Interfaces,” in MDDAUI ’06 – Model Driven Development of Advanced User Interfaces 2006, 2006.
    [Bibtex]
    @INPROCEEDINGS{Blumendorf2006,
    author = {Marco Blumendorf and Sebastian Feuerstack and Sahin Albayrak},
    title = {Event-based Synchronization of Model-Based Multimodal User Interfaces},
    booktitle = {MDDAUI '06 - Model Driven Development of Advanced User Interfaces
    2006},
    year = {2006},
    editor = {Andreas Pleuss and Jan Van den Bergh and Heinrich Hussmann and Stefan
    Sauer and Alexander Boedcher},
    month = {November},
    publisher = {CEUR-WS.org},
    note = {Proceedings of the MoDELS'06 Workshop on Model Driven Development
    of Advanced User Interfaces},
    abstract = {in his daily life, moving interaction with computers from a single
    system to a complex, distributed environment. User interfaces available
    in this environment need to adapt to the specifics of the various
    available devices and are distributed across several devices at the
    same time. A problem arising with distributed user interfaces is
    the required synchronization of the different parts. In this paper
    we present an approach allowing the event-based synchronization of
    distributed user interfaces based on a multi-level user interface
    model. We also describe a runtime system we created, allowing the
    execution of model-based user interface descriptions and the distribution
    of user interfaces across various devices and modalities using channels
    established between the system and the end devices.},
    competencecenter = {Human Machine Interaction},
    file = {Blumendorf2006.pdf:Blumendorf2006.pdf:PDF},
    keywords = {Multimodal interaction, user interface model, distributed user interfaces,
    synchronization, ubiquitous computing, smart environments},
    offisdivision = {Verkehr / Human Centered Design},
    owner = {sfeu},
    timestamp = {2006.10.12}
    }
  • [PDF] S. Feuerstack, M. Blumendorf, and S. Albayrak, “Bridging the Gap between Model and Design of User Interfaces,” in Informatik für Menschen, 2006, pp. 131-137.
    [Bibtex]
    @INPROCEEDINGS{Feuerstack2006,
    author = {Sebastian Feuerstack and Marco Blumendorf and Sahin Albayrak},
    title = {Bridging the Gap between Model and Design of User Interfaces},
    booktitle = {Informatik für Menschen},
    year = {2006},
editor = {Christian Hochberger and Rüdiger Liskowsky},
    volume = {P-94},
    number = {2},
    series = {GI-Edition - Lecture Notes in Informatics},
    pages = {131-137},
    month = {October},
    publisher = {Bonner Köllen Verlag},
    abstract = {The creation of user interfaces usually involves various people in
    different roles and several tools that are designed to support each
    specific role. In this paper we propose a tool for rapid prototyping
    that allows all parties involved to directly interact with the system
    under development. The tool is based on task tree development and
    integrates the system designer, the user interface designer, the
    usability expert, and the user interface developer in a common process.
    The final system is derived from two sources, the task model specified
    by the system architect and the final user interface specified by
    the user interface developer and designer. Aggregating the runtime
    system and the design tools into one complete integrated system is
    our approach to bridge the gap between the user interface designer
    working on system mock-ups and the actual developers implementing
    the system.},
    competencecenter = {Human Machine Interaction},
    file = {Feuerstack2006.pdf:Feuerstack2006.pdf:PDF},
    offisdivision = {Verkehr / Human Centered Design},
    owner = {sfeu},
    timestamp = {2006.10.12}
    }
  • [PDF] S. Feuerstack, M. Blumendorf, and S. Albayrak, “Prototyping of Multimodal Interactions for Smart Environments based on Task Models,” in Constructing Ambient Intelligence: AmI 2007 Workshops Darmstadt, 2007.
    [Bibtex]
    @INPROCEEDINGS{Feuerstack2007,
    author = {Sebastian Feuerstack and Marco Blumendorf and Sahin Albayrak},
    title = {Prototyping of Multimodal Interactions for Smart Environments based
    on Task Models},
    booktitle = {Constructing Ambient Intelligence: AmI 2007 Workshops Darmstadt},
    year = {2007},
    abstract = {Smart environments offer interconnected sensors, devices, and appliances
    that can be considered for interaction to substantially extend the
    potentially available modality mix. This promises a more natural
    and situation aware human computer interaction. Technical challenges
    and differences in interaction principles for distinct modalities
    restrict multimodal systems to specialized systems supporting specific
    situations only. To overcome these limitations enabling an easier
    integration of new modalities supporting interaction in smart environments,
    we propose a task-based notation that can be interpreted at runtime.
    The notation supports evolutionary prototyping of new interaction
    styles for already existing interactive systems. We eliminate the
    gap between design- and runtime, since support for additional modalities
    can be prototyped at runtime to an already existing interactive system.},
    competencecenter = {Human Machine Interaction},
    file = {Feuerstack2007.pdf:Feuerstack2007.pdf:PDF},
    offisdivision = {Verkehr / Human Centered Design},
    owner = {blumendorf},
    timestamp = {2007.10.27}
    }
  • [PDF] A. Rieger, R. Cisse, S. Feuerstack, J. Wohltorf, and S. Albayrak, “An Agent-Based Architecture for Ubiquitous Multimodal User Interfaces,” in International Conference on Active Media Technology, Takamatsu, Kagawa, Japan, 2005.
    [Bibtex]
    @INPROCEEDINGS{Rieger2005,
    author = {Andreas Rieger and Richard Cisse and Sebastian Feuerstack and Jens
    Wohltorf and Sahin Albayrak},
    title = {An Agent-Based Architecture for Ubiquitous Multimodal User Interfaces},
    booktitle = {International Conference on Active Media Technology},
    year = {2005},
    address = {Takamatsu, Kagawa, Japan},
    competencecenter = {Human Machine Interaction},
    file = {Rieger2005.pdf:Rieger2005.pdf:PDF},
    offisdivision = {Verkehr / Human Centered Design},
    owner = {sfeu},
    timestamp = {2006.08.28}
    }