Studierstube Augmented Reality Project

APRIL - Augmented Presentation and Interaction Authoring Language

Introduction

APRIL is a high-level descriptive language for authoring presentations in augmented reality (AR). In addition to virtual objects and multimedia content, AR presentations include artifacts of the real world or the user's real-world environment as important parts of the presentation's context and content.

To describe the configuration and content of such a presentation, various aspects have to be taken into account. First of all, the context of the presentation in terms of environment, available input, processing and output devices has to be known to the presentation system. Since multiple presentations can share such a context description, it should be accessible centrally, for example as a configuration file on a specific machine or network.

The presentation itself is described on multiple levels of abstraction. On the top level, we need an abstract representation of what should be "going on" during the presentation, which we call the presentation's story. A story might be implemented in several ways: one could imagine text-only and virtual reality versions of the same story, presented to the user depending on available input and output devices. We need some kind of media management to define the media objects that should be used to render the story and make it visible to the user.

During the presentation, these media objects do not necessarily remain static; they might change some of their properties. The reaction of an object to time and/or events is called its behavior, and can also be expressed in APRIL. Depending on the environment, even real objects can have behaviors, for example if a robot is remote-controlled or computer-controlled lighting is available to show and hide objects of the real world. As mentioned, these behaviors can be triggered by points in time or by events, usually originating in some kind of user interaction. Describing possible user interaction (depending again, of course, on the available tools and devices in the environment) is another aspect of APRIL.

Together, these components result in a quite powerful system to describe AR environments and presentations. However, it is impossible to cover all these aspects in every detail in a single specification. Therefore, APRIL makes use of existing standards wherever possible, either by including parts of them directly or by adopting the concepts of established, specialized description languages. Technically, APRIL is an XML dialect described in an XML schema or, alternatively, a DTD.
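To illustrate how these aspects come together in one document, the following is a rough sketch of what a top-level presentation file could look like. The element names (presentation, setup, cast, story, behaviors, interactions) are assumptions made for this sketch and do not reproduce the actual APRIL schema.

  <!-- Illustrative skeleton of an APRIL-style presentation file.
       Element names are assumptions, not the actual APRIL schema. -->
  <presentation>
    <setup href="lab-setup.xml"/>          <!-- shared hardware description -->
    <cast>         <!-- media objects and their alternatives -->        </cast>
    <story>        <!-- simplified UML statechart, imported from XMI --> </story>
    <behaviors>    <!-- per-state animation commands -->                </behaviors>
    <interactions> <!-- mapping of transitions to input devices -->     </interactions>
  </presentation>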

Components

Hardware description

The APRIL hardware description layer is usually shared by multiple presentations and is therefore stored in a separate file that can be included in the presentation file. It covers the available displays for rendering the presentation and all the input devices that can be used for user interaction. The description of input devices is based on OpenTracker, a flexible framework for tracking device abstraction. OpenTracker elements describing the basic tracking setup can be loaded from a separate configuration file or included inline in the hardware description.
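A minimal sketch of such a shared hardware description is shown below. The display element and its attributes, as well as the way the OpenTracker configuration is embedded or referenced, are assumptions for illustration only.

  <!-- Illustrative hardware description shared across presentations.
       Element and attribute names are assumptions; only the use of
       OpenTracker for input devices follows the text above. -->
  <setup>
    <display id="hmd" type="see-through" width="800" height="600"/>
    <tracking>
      <!-- OpenTracker configuration included inline ... -->
      <OpenTracker>
        <!-- tracking source elements would go here -->
      </OpenTracker>
      <!-- ... or loaded from a separate file instead: -->
      <!-- <OpenTracker href="tracking.xml"/> -->
    </tracking>
  </setup>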

Story Authoring

For an abstract representation of the flow of the presentation we use UML statecharts. The story is modelled in any UML authoring tool and can then be exported to XMI, the official standard for serializing UML models. This XMI description is then simplified and included in the presentation file to represent the presentation's story.

In our usage, states of the statechart represent behaviors, whereas transitions represent interactions. Both are represented in the story as human-readable tokens that convey their meaning for the story, hiding details about the interaction tools or story objects used (these are added in the interaction and behavior layers, respectively). The story diagram can be augmented with HTML annotations, which may include images and multimedia objects to illustrate what should happen (Fig. 1).

Fig. 1: An example story with annotations.
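In a simplified, included form, such a story might look roughly like the sketch below: states carry behavior tokens and transitions carry interaction tokens, with no reference yet to concrete devices or media. The element names are assumptions for illustration, not the simplified XMI actually produced.

  <!-- Illustrative simplified story: state names are behavior tokens,
       transition events are interaction tokens. Names are assumptions. -->
  <story initial="welcomeVisitor">
    <state name="welcomeVisitor">
      <transition event="visitorApproaches" target="explainExhibit"/>
    </state>
    <state name="explainExhibit">
      <transition event="visitorLeaves" target="welcomeVisitor"/>
    </state>
  </story>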

This abstract model of the presentation's story gives us a document that can be used very early in the development process as an artifact for discussion, brainstorming and refinement within the development team and with outside collaborators, without any content having to be developed at this stage. It serves as a guideline and map throughout designing, implementing and testing the presentation.

Media management

To render the presentation, it has to be brought to life by actual content: the real objects of the environment, three-dimensional virtual objects, sound, images and textual annotations, to name only a few possibilities. Depending on the hardware platform and the environment, different content may be used to render the same object. For example, if speakers are available, we could have a narrator explaining something to the user; otherwise we might omit this information or render it as a textual annotation in the scene.

In the media management layer, media objects in well-known formats can be loaded into the presentation and alternatives can be specified. Depending on availability and the platform's capabilities, the playable content with the highest priority is then used for running the presentation.
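Continuing the narrator example above, a media object with prioritized alternatives might be sketched as follows. The object, alternative, sound and text elements are illustrative assumptions; the document does not specify the actual syntax.

  <!-- Illustrative media object with prioritized alternatives; the
       highest-priority alternative playable on the current setup is used.
       Element names are assumptions. -->
  <object id="narrator">
    <alternative priority="1">
      <sound src="narration.wav"/>            <!-- used if speakers are available -->
    </alternative>
    <alternative priority="2">
      <text>Welcome to the exhibition ...</text>  <!-- textual fallback in the scene -->
    </alternative>
  </object>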

Behavior mapping

Behaviors describe the reaction of story objects to time and events. To describe the behaviors of story objects, we borrow concepts and syntax from SMIL Animation, a standard for animating media objects in two-dimensional presentations. We extended SMIL Animation to work with three-dimensional content and to integrate nicely into our story concept.

For each state in the story's statechart, a list of animation commands can be scheduled for each story object. Properties of story objects (e.g. position, rotation, color) can be animated over time, set to a specified value or connected to another object's property. In addition, special commands can be executed (and are guaranteed to execute) on entering or leaving a state, thereby leaving the object in a well-defined state during transitions.
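A sketch of such a per-state behavior specification, in the spirit of SMIL Animation extended to 3D properties, is given below. The behavior, onEnter, onExit, set, animate and connect elements are assumptions for illustration, not the actual APRIL vocabulary.

  <!-- Illustrative behaviors for one story object in one state.
       Element names are assumptions, loosely modelled on SMIL Animation. -->
  <behavior state="explainExhibit" object="pointerArrow">
    <onEnter>
      <set attribute="visible" to="true"/>                 <!-- guaranteed on entering -->
    </onEnter>
    <animate attribute="position" from="0 0 0" to="0 0.5 0" dur="2s"/>
    <connect attribute="orientation" fromObject="visitorHead" fromAttribute="orientation"/>
    <onExit>
      <set attribute="visible" to="false"/>                <!-- guaranteed on leaving -->
    </onExit>
  </behavior>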

Interaction mapping

Transitions between states in the story are interpreted as interactions. In most cases these are interactions of a human user with the story, but they can also be automatically triggered events.
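The interaction layer then binds the abstract transition tokens of the story to concrete input sources from the hardware description, or to automatic triggers. The following sketch uses assumed element names; the timeout element illustrates an automatically triggered event.

  <!-- Illustrative interaction mapping: the abstract transition token
       "visitorApproaches" is bound to a concrete input device declared
       in the hardware description. Element names are assumptions. -->
  <interaction transition="visitorApproaches">
    <source device="proximitySensor" event="triggered"/>
    <timeout after="30s"/>   <!-- fires the same transition automatically -->
  </interaction>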

Documentation

Specification Documents

Authoring workflow

To come...

Applications

To come...

Related Work

To come...

Further Information

Contact: Florian Ledermann ledermann@ims.tuwien.ac.at

APRIL was developed within the Virtual Showcase project.
