Adaptive Layouting of User Interfaces
Written by Sebastian Feuerstack
October 4th, 2015

Today, information systems can be accessed at any time, in any place and in any situation. The majority of people on the streets own a smartphone that gives them mobile access to the internet. The screens around us have grown continuously in recent years: from 12-inch up to 24-inch computer screens at a work desk, to huge projections or multi-screen displays in control centres, up to 60-inch ultrathin LCD displays supporting high-resolution television at home. Furthermore, more and more of these displays are becoming interactive, enabling single- or multi-touch control based on finger- or pen-based interaction, gesture recognition or tangible control. They are used by a single user (like a cell phone), by several people in turn (e.g. public displays), or by multiple people at once (in a control centre).

In the future, interaction will be performed through interfaces that follow users across all their devices, adapt automatically to their current context, and take their preferences into account. Interactions can even span several devices by distributing the user interfaces of one or several users across multiple devices.

Such scenarios require flexible and robust (re-)layouting mechanisms for the user interface, which must consider the underlying tasks and concepts of an interactive application in order to generate a consistent layout presentation for all states and distributions of the user interface. My research aims to identify possible layout adaptations based on the user’s context and on the modalities and interaction devices in use.
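One common way to make such re-layouting concrete is to model the interface as a containment tree whose nodes carry order, orientation and relative size information; re-layouting for a new display then just means re-evaluating the tree for the new dimensions. The following is a minimal Python sketch of that idea (the `Box` structure, its attribute names and the weight-based splitting rule are illustrative assumptions, not the actual model described in my papers):

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    """A node in a toy layout model: containment, order, orientation, size.

    `weight` expresses the node's relative share of its parent's space;
    children are laid out in list order along the parent's orientation.
    """
    name: str
    orientation: str = "horizontal"   # or "vertical"
    weight: float = 1.0
    children: list = field(default_factory=list)

def layout(box, x, y, width, height, result=None):
    """Recursively assign a rectangle (x, y, w, h) to every box, splitting
    the parent's space among its children proportionally to their weights."""
    if result is None:
        result = {}
    result[box.name] = (x, y, width, height)
    total = sum(c.weight for c in box.children) or 1.0
    offset = 0.0
    for child in box.children:
        share = child.weight / total
        if box.orientation == "horizontal":
            layout(child, x + offset, y, width * share, height, result)
            offset += width * share
        else:
            layout(child, x, y + offset, width, height * share, result)
            offset += height * share
    return result
```

Adapting the same interface to a different screen is then a matter of calling `layout` again with the new dimensions; the containment, order and proportions stay consistent across all resulting presentations.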

The broad range of possible user interface distributions and the diversity of available interaction devices make it impossible to completely specify every potential context-of-use scenario during application design. I am therefore investigating ways of modelling such layout adaptations more efficiently at design-time, using interactive tools that take advantage of already existing design information and relate it to layout generation. Furthermore, I try to identify runtime layout algorithms that are capable of calculating adaptations to interaction devices and contexts that were unknown at design-time.
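A simple runtime adaptation of the kind discussed above is resizing elements to visually highlight whatever matters in the current situation, and only then mapping the result onto the concrete device. The sketch below illustrates this with two small Python functions; the element names, the boost factor and the weight representation are made up for the example and are not taken from the actual system:

```python
def adapt_weights(base_weights, relevant, boost=2.0):
    """Context adaptation sketch: enlarge the elements relevant to the
    current situation by boosting their relative weight."""
    return {name: w * boost if name in relevant else w
            for name, w in base_weights.items()}

def distribute(total_pixels, weights):
    """Map relative weights onto a concrete device at runtime by splitting
    one axis of `total_pixels` proportionally among the elements."""
    total = sum(weights.values())
    return {name: round(total_pixels * w / total)
            for name, w in weights.items()}
```

Because the pixel distribution is computed only when the target display is known, the same weighted model can serve devices whose resolutions were never anticipated at design-time.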


  • [PDF] S. Feuerstack, M. Blumendorf, V. Schwartze, and S. Albayrak, “Model-based Layout Generation,” in Proceedings of the working conference on Advanced visual interfaces, 2008.
    Abstract: Offering user interfaces for interactive applications that are flexible enough to be adapted to various context-of-use scenarios, such as supporting different display sizes or addressing various input styles, requires an adaptive layout. We describe an approach for layout derivation that is embedded in a model-based user interface generation process. Through an interactive and tool-supported process we can efficiently create a layout model that is composed of interpretations of the other design models and is consistent with the application design. By shifting the decision about which interpretations are relevant to support a specific context-of-use scenario from design-time to run-time, we can flexibly adapt the layout to consider new device capabilities, user demands and user interface distributions. We present our run-time environment, which is able to evaluate the relevant layout model information into constraints as they are required and to reassemble the user interface parts according to the updated containment, order, orientation and size information of the layout model. Finally, we present the results of an evaluation we performed to test the design- and run-time efficiency of our model-based layouting approach.
  • [PDF] V. Schwartze, S. Feuerstack, and S. Albayrak, “Behavior-sensitive User Interfaces for Smart Environments,” in HCII 2009 – User Modeling, 2009.
    Abstract: In smart environments, interactive assistants can support the user’s daily life by being ubiquitously available through any interaction device that is connected to the network. Focusing on graphical interaction, user interfaces are required to be flexible enough to be adapted to the actual context of the user. In this paper we describe an approach which enables flexible user interface adaptations based on the current context of use (e.g. by changing the size of elements to visually highlight the important elements used in a specific situation). In a case study of the “4-star Cooking assistant” application we prove the capability of our system to dynamically adapt a graphical interface to the current context of use.