
Will they use it? Will it be useful? In-Situ Evaluation of a Tactile Car Finder.

When we develop new technology, we want to know if it will have the potential to be successful in the real world.

This is not trivial! People may sincerely enjoy our technology when we expose them to it in a lab or field study. They may perform better than with previous solutions at the tasks we ask them to complete as part of the study.

However, once they leave our lab, they may never encounter the need to use it in their daily routines again. Or the utility we demonstrate in our studies may not be evident in the contexts where the technology is actually deployed.

In our work, we made use of Google Play to answer these questions in a novel way. We wanted to study whether haptic feedback can make people less distracted from their environment when they use their phone for pedestrian navigation in daily life. We developed a car finder application for Android phones with a simple haptic interface: whenever the user points the phone in the direction of the car, the phone vibrates.
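To give a rough idea of how such a pointing cue could work on Android, here is a minimal sketch (not the actual code of our app): it compares the compass heading with the bearing to the stored car position and vibrates when they roughly match. The class name, the sensor handling, and the 15° tolerance are illustrative assumptions.

import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.location.Location;
import android.os.Vibrator;

// Hypothetical sketch: vibrate while the phone points towards the parked car.
// The car location is assumed to have been stored when the user parked.
public class CarPointingCue implements SensorEventListener {

    private static final float TOLERANCE_DEG = 15f;   // assumed pointing tolerance

    private final Vibrator vibrator;
    private final Location carLocation;
    private Location currentLocation;                 // updated by a location listener elsewhere

    public CarPointingCue(Context context, Location carLocation) {
        this.vibrator = (Vibrator) context.getSystemService(Context.VIBRATOR_SERVICE);
        this.carLocation = carLocation;
    }

    public void onLocationChanged(Location location) {
        this.currentLocation = location;
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ORIENTATION || currentLocation == null)
            return;
        float azimuth = event.values[0];                        // device heading in degrees
        float bearing = currentLocation.bearingTo(carLocation); // direction to the car
        if (Math.abs(normalize(bearing - azimuth)) < TOLERANCE_DEG)
            vibrator.vibrate(100);                              // short pulse while on target
    }

    private static float normalize(float angle) {
        while (angle > 180f) angle -= 360f;
        while (angle < -180f) angle += 360f;
        return angle;
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
    }
}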

The data provides evidence that about half of the users use the vibration feedback. When vibration feedback is enabled, users turn off the display and stow away the device more often. They also look less at the display. Hence, when using vibration feedback, users are less distracted.

Our work shows that app distribution channels, such as Google Play or Apple's App Store, can serve as a cheap way of bringing a user study into people's daily lives instead of bringing people into the lab. Compared to the results of a lab study, these findings have high external validity, i.e. we can be more confident that they generalize to a large number of users and usage situations.

This work will be presented at NordiCHI ’12: The 7th Nordic Conference on Human-Computer Interaction, which takes place in Copenhagen in October 2012. The paper is available here (pdf).

Thanks to http://www.v3.co.uk/ for summarising this work so nicely in their article Buzzing app helps smartphone dudes locate their car.


Tacticycle: Supporting Exploratory Bicycle Trips

Navigation systems have become a common tool for most of us. They conveniently guide us from A to B along the fastest or shortest route. Thanks to these devices, we no longer fear getting lost when travelling through unfamiliar terrain.

However, what if you are a cyclist and your goal is an excursion rather than reaching a certain destination, and all you want is to stay oriented and possibly learn about interesting spots nearby? In that case, using a navigation system becomes more challenging. One has to look up the addresses of interesting points and enter them as (intermediate) destinations. Sometimes the navigation system might not even know all the small paths, so we end up checking the map frequently, which is dangerous when done on the move.

The Tacticycle is a research prototype of a navigation system specifically targeted at tourists on bicycle trips. Relying on a minimal set of navigation cues, it helps riders stay oriented while supporting spontaneous navigation and exploration at the same time. It offers three novel features:

  1. First, it displays all POIs around the user explicitly on a map. A double-tap quickly selects a POI as travel destination. Thus, no searching for addresses is required.
  2. Second, the system relies on a tactile user interface, i.e. it provides navigation support via vibration. Thus, the rider does not have to look at the display while riding.
  3. Third, the Tacticycle does not deliver turn-by-turn instructions. Instead, the vibration feedback just indicates the direction of the selected POI “as the crow flies”. This allows the travelers to find their own route.
The direction “as the crow flies” to the selected POI is encoded in the relative vibration of the two actuators in the handlebars. If, for example, the POI is about 20° to the right, the vibration in the right handlebar is a little stronger.
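As a rough illustration of this encoding (the exact mapping in the prototype may differ), the following sketch distributes an overall vibration intensity between the left and right actuator based on the bearing offset; the linear weighting and the 0 to 255 intensity range are assumptions.

// Hypothetical sketch of the relative intensity mapping: the further the
// selected POI lies to one side, the stronger that handlebar vibrates.
public class HandlebarIntensity {

    /**
     * @param offsetDeg bearing of the POI relative to the riding direction,
     *                  from -180 to 180; negative = left, positive = right
     * @return intensities for the left and right actuator, each 0 to 255
     */
    public static int[] intensities(float offsetDeg) {
        // map -90..+90 degrees onto a 0..1 "rightness" factor (assumed linear weighting)
        float rightness = Math.max(0f, Math.min(1f, 0.5f + offsetDeg / 180f));
        int right = Math.round(255 * rightness);
        int left = 255 - right;
        return new int[] { left, right };
    }

    public static void main(String[] args) {
        int[] i = intensities(20f);  // POI about 20 degrees to the right
        System.out.println("left=" + i[0] + ", right=" + i[1]);  // right is slightly stronger
    }
}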

In cooperation with a bike rental service, we rented the Tacticycle prototype to tourists who took it on their actual excursions. The results show that they always felt oriented and encouraged to playfully explore the island, providing a rich yet relaxed travel experience. On the basis of these findings, we argue that providing only minimal navigation cues can very well support exploratory trips.

This work has been presented at MobileHCI ’12, ACM SIGCHI’s International Conference on Human-Computer Interaction with Mobile Devices and Services, which took place in September 2012 in San Francisco. The paper is available here (pdf).


PocketMenu: Non-Visual Menus for Touch Screen Devices

It’s a chilly Sunday afternoon and you are out for a walk, listening to music from your MP3 player, and you want to select the next song. How do you do that?

A few years ago, you probably didn’t even take the MP3 player out of your pocket. You just used your fingers to feel for the shape of the “next” button and pressed it.

Today, we no longer own dedicated MP3 players but use our smartphones instead. And since most input on modern smartphones is done via large touch screen displays, you need to take the phone out of your pocket, unlock the screen, and visually locate the button to press it.

The PocketMenu addresses this problem by providing haptic and auditory feedback that allows in-pocket input. It combines, in a novel way, clever ideas from previous research on touch screen interaction for users with sensory and motor impairments.

All menu items are laid out along the screen bezel, so the bezel serves as a haptic guide for the finger. Additional speech and vibration output allow users to identify the items and obtain more information. Watch the video to see exactly how the interaction works.
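To make the idea more concrete, here is a simplified sketch of such a bezel-aligned menu on Android, assuming items stacked along one screen edge, a short vibration when the finger crosses an item boundary, and speech output for the item name. The item names and the selection-on-lift behaviour are illustrative assumptions, not the actual PocketMenu implementation.

import android.content.Context;
import android.os.Vibrator;
import android.speech.tts.TextToSpeech;
import android.view.MotionEvent;
import android.view.View;

// Hypothetical sketch of a bezel-aligned menu: sliding the finger along the
// screen edge highlights items, with vibration and speech as feedback.
public class BezelMenuView extends View {

    private final String[] items = { "Play/Pause", "Next", "Previous", "Volume" };  // assumed items
    private final Vibrator vibrator;
    private final TextToSpeech tts;
    private int currentItem = -1;

    public BezelMenuView(Context context, TextToSpeech tts) {
        super(context);
        this.vibrator = (Vibrator) context.getSystemService(Context.VIBRATOR_SERVICE);
        this.tts = tts;
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        // the physical bezel guides the finger, so only the vertical position matters
        int item = (int) (event.getY() / getHeight() * items.length);
        item = Math.max(0, Math.min(items.length - 1, item));
        if (item != currentItem) {
            currentItem = item;
            vibrator.vibrate(40);                                    // mark the item boundary
            tts.speak(items[item], TextToSpeech.QUEUE_FLUSH, null);  // announce the item
        }
        if (event.getAction() == MotionEvent.ACTION_UP) {
            select(items[currentItem]);                              // lifting the finger selects
        }
        return true;
    }

    private void select(String item) {
        // hypothetical hook: trigger the corresponding MP3 player command here
    }
}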

In a field experiment, we compared the PocketMenu concept with the state-of-the-art VoiceOver concept that ships with the iPhone. The participants had to control an MP3 player while walking down a road with the device in their pocket. The PocketMenu outperformed VoiceOver in terms of completion time, selection errors, and subjective usability.

This work will be presented at MobileHCI ’12, ACM SIGCHI’s International Conference on Human-Computer Interaction with Mobile Devices and Services, which takes place in September 2012 in San Francisco. The paper is available here (pdf).


TouchOver Map


Touch screens and interactive maps are commonplace nowadays. We find both in modern smartphones and location-based services. In the HaptiMap project (FP7-ICT-224675), we aim at making maps and location-based services accessible. Thus, my colleagues Benjamin Poppinga, Charlotte Magnusson, Kirsten Rassmus-Gröhn, and I investigated how to make maps on touch screens accessible for visually impaired users.

We developed TouchOver Map, a simple prototype aimed at investigating the feasibility of speech and vibration feedback. It allows users to explore a map non-visually – currently the street network, to be exact. Our approach is dead simple: as long as the user touches a street, the phone vibrates and the street name is spoken.
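A minimal sketch of this interaction loop, assuming the map view reports touch positions and the street geometry is available in screen coordinates; the hit-test tolerance and the class names are made up for illustration and simplify what the prototype actually does.

import java.util.List;

import android.graphics.PointF;
import android.os.Vibrator;
import android.speech.tts.TextToSpeech;

// Hypothetical sketch: while the finger rests on a street, vibrate and speak
// the street name; stop vibrating as soon as the finger leaves the street.
public class StreetFeedback {

    public static class Street {
        final String name;
        final List<PointF> points;   // street geometry in screen coordinates
        public Street(String name, List<PointF> points) {
            this.name = name;
            this.points = points;
        }
    }

    private static final float TOLERANCE_PX = 20f;   // assumed hit-test tolerance

    private final Vibrator vibrator;
    private final TextToSpeech tts;
    private final List<Street> streets;
    private Street lastStreet;

    public StreetFeedback(Vibrator vibrator, TextToSpeech tts, List<Street> streets) {
        this.vibrator = vibrator;
        this.tts = tts;
        this.streets = streets;
    }

    /** Call from the map view's onTouchEvent with the current touch position. */
    public void onFingerAt(float x, float y) {
        Street hit = null;
        for (Street street : streets) {
            if (distanceTo(x, y, street.points) < TOLERANCE_PX) {
                hit = street;
                break;
            }
        }
        if (hit != null && hit != lastStreet) {
            vibrator.vibrate(new long[] { 0, 50, 50 }, 0);         // vibrate while on a street
            tts.speak(hit.name, TextToSpeech.QUEUE_FLUSH, null);   // speak the street name
        } else if (hit == null) {
            vibrator.cancel();
        }
        lastStreet = hit;
    }

    private static float distanceTo(float x, float y, List<PointF> points) {
        float min = Float.MAX_VALUE;
        for (PointF p : points)
            min = Math.min(min, PointF.length(x - p.x, y - p.y));
        return min;   // simplified: distance to the nearest vertex only
    }
}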

To evaluate how well people can understand street layouts with TouchOver Map, we conducted a user study. Eight sighted participants explored the map while the device was covered by an empty box.

While the participants explored the map, they were asked to reproduce it on a piece of paper. Although the results are far from perfect, the participants were able to reproduce the streets and their relationships.

Obviously, our study has a number of limitations. There were only a few testers, and none of them was blind. TouchOver Map also displayed only streets and no other geographic features. Finally, the non-visual rendering could clearly benefit from fine-tuning and clever filtering of the features to display. Nevertheless, our pilot study shows that it is possible to convey geographical features via touch screen devices by making them “visible” through speech and vibration.

TouchOver Map is a collaboration between Certec, the Division of Rehabilitation Engineering Research in the Department of Design Sciences, Faculty of Engineering, Lund University, and the Intelligent User Interfaces Group of the OFFIS Institute for Information Technology, Oldenburg, Germany. It was published as a Work-in-Progress at the 13th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’11).

The paper can be downloaded from here.



In Situ Field Studies using the Android Market

Recently, researchers have started to investigate using app distribution channels, such as Apple’s App Store or Google’s Android Market, to bring the research to the users instead of bringing the users into the lab.

My colleague Niels, for example, used this approach to study how people interact with the touch screens of mobile phones. But instead of collecting touch events in a boring, repetitive task, he developed a game where users have to burst bubbles by touching them. And instead of conducting this study in the sterile environment of a lab, he published the game on the Android Market for free, where it was installed and used by hundreds of thousands of users. So, while these users were enjoying the game, they generated millions of touch events. And unlike in traditional lab studies, this data was collected from all over the world and from many different contexts of use. The results of this study were reported at MobileHCI ’11 and were received enthusiastically.

Since my work is on pedestrian navigation systems and conveying navigation instructions via vibration feedback, lab studies are oftentimes not sufficient. Instead, we have to go out and conduct our experiments in the field, e.g. by having people navigate through a busy city center.

So, if we can bring lab studies “into the wild” can we do the same with field experiments?

My colleague Benjamin and I started addressing this question in 2010. We developed a consumer-grade pedestrian navigation application called PocketNavigator and released it on the Android Market for free. Then we developed algorithms that allow us to infer the specific usage patterns we were interested in, for example whether users follow the given navigation instructions or not. We also developed a system that allows the PocketNavigator to collect these usage patterns along with relevant context parameters and send them to one of our servers. As a side note, the collected data does not contain personally identifiable information, so it does not allow us to identify, locate, or contact users.
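To illustrate what such logging can look like (the actual PocketNavigator code differs, and the field names and upload endpoint below are made up), here is a sketch of recording usage events with a random study ID and coarse context only, i.e. without any personally identifiable information.

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

import android.content.SharedPreferences;

// Hypothetical sketch of anonymous usage logging: events carry a random study
// id and coarse context only, never the user's identity or exact position.
public class UsageLogger {

    private final String studyId;                  // random id generated at first start
    private final List<JSONObject> queue = new ArrayList<JSONObject>();

    public UsageLogger(SharedPreferences prefs) {
        String id = prefs.getString("study_id", null);
        if (id == null) {
            id = UUID.randomUUID().toString();
            prefs.edit().putString("study_id", id).commit();
        }
        this.studyId = id;
    }

    /** e.g. logEvent("vibration_enabled", "walking") */
    public synchronized void logEvent(String name, String context) {
        try {
            JSONObject event = new JSONObject();
            event.put("study_id", studyId);
            event.put("event", name);
            event.put("context", context);
            event.put("time", System.currentTimeMillis());
            queue.add(event);
        } catch (JSONException e) {
            // ignore malformed events in this sketch
        }
    }

    /** Upload the queued events from a background thread; the URL is made up. */
    public synchronized void flush() throws IOException {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://example.org/study/upload").openConnection();
        conn.setDoOutput(true);
        OutputStream out = conn.getOutputStream();
        out.write(new JSONArray(queue).toString().getBytes("UTF-8"));
        out.close();
        if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
            queue.clear();
        }
        conn.disconnect();
    }
}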

With this setup we conducted a quasi-experiment. Since my research is about the effect of vibration feedback on navigation performance and the user’s level of distraction, we compared the usage patterns of situations where the vibration feedback was turned on versus turned off. Our results show that the vibration feedback was used in 29.9% of the trips, with no effect on navigation performance. However, we found evidence that users interacted less with the touch screen, looked less often at the display, and turned off the screen more often. Hence, we believe that users were less distracted.

The full report of this work was accepted to the prestigious ACM SIGCHI Conference on Human Factors in Computing Systems (CHI ’12) and was presented in May 2012 in Austin, Texas.

The paper can be downloaded from here.


PocketNavigator Video on Youtube

Working day and night, turning researchers into actors, and mastering the use of iMovie, we present our video on the PocketNavigator family.

The video shows three demonstrators:

  • The Tacticycle is a bicycle navigation system for tourists, which uses vibrating handlebars to provide directions.
  • The PocketNavigator is an OSM-based pedestrian navigation system that uses vibration patterns to tell the user which direction to go.
  • The Virtual Observer is a research tool that allows collecting usage data (GPS tracks, images, experience sampling questions) and playing it back in order to study the in-situ usage of the above (and other) applications.

The work presented here is part of the EU-funded HaptiMap research project (FP7-ICT-224675), which aims at making maps and location-based services more accessible. The PocketNavigator is one of the project’s outcomes, developed at the Intelligent User Interfaces Group of the OFFIS Institute for Information Technology, Oldenburg, Germany.

The PocketNavigator is available for free on the Android Market: https://market.android.com/details?id=org.haptimap.offis.pocketnavigator


Ambient Visualisation of Social Network Activity

Social networks, such as Facebook or Twitter, are an important factor in the communication between individuals of the so-called digital natives generation. More and more often, they are used to exchange short bursts of thoughts or comments as a means of staying connected with each other.

The instant communication enabled by these social networks has, however, created a form of peer-group pressure to constantly check for updates. For example, has an informal get-together been announced, or has somebody requested to become your friend? This emerging pressure can make people return to the computer more often than they want. This is why we find colleagues regularly looking for new status updates in meetings, or see friends at parties who cannot resist checking their Facebook accounts.

One solution to this is notifying users when something important has happened. Mobile phones, as personal, ubiquitous, and always connected devices, lend themselves as a platform, since they are carried by the user most of the time. Thus, it is no surprise that our phones now not only notify us about incoming text messages, but do the same for Twitter @mentions, Facebook messages, or friend requests. However, these notifications may go unnoticed, too. So instead of checking our Facebook and Twitter accounts, we keep looking at our mobile phones for notification items.

With AmbiTweet, we investigate conveying social network status via ambient displays. We use a live wallpaper showing beautiful blue water. The wallpaper can be connected to a Twitter account and visualizes the level of activity in an ambient way. The higher the level of activity on the Twitter account, the brighter and busier the water becomes. This can be perceived even in the periphery of the field of vision. Thus, users can become aware of important activity without needing to focus their eyes on the phone.
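As a small sketch of the underlying mapping (the actual wallpaper rendering is more involved, and the decay constant and scaling below are assumptions), the level of activity can be reduced to a single busyness value that drives the brightness and wave speed of the water.

// Hypothetical sketch of the AmbiTweet mapping: recent Twitter activity is
// turned into a 0..1 "busyness" level that drives the water rendering.
public class ActivityLevel {

    private static final double DECAY_PER_MINUTE = 0.9;   // assumed: old activity fades out
    private double level = 0.0;                            // 0 = calm, 1 = very busy

    /** Call once per minute with the number of new tweets or mentions seen. */
    public void update(int newTweets) {
        level = Math.min(1.0, level * DECAY_PER_MINUTE + Math.min(1.0, newTweets / 10.0));
    }

    /** Brightness of the water, e.g. usable as the V component of an HSV colour. */
    public float brightness() {
        return (float) (0.3 + 0.7 * level);   // never fully dark, never blinding
    }

    /** Speed factor for the wave animation of the live wallpaper. */
    public float waveSpeed() {
        return (float) (1.0 + 3.0 * level);
    }
}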

Ambient displays, in general, have the advantage that they convey information in a continuous but unobtrusive way. They exploit the fact that the brain can process information pre-attentively, i.e. without generating apparent cognitive load. AmbiTweet therefore allows users to concentrate on a primary task while remaining aware of the level of activity on a social network account.


Don’t Ask Users What They Want!

As Human-Computer Interaction is receiving more and more recognition as a research field in Germany, my colleagues and I, as members of the Intelligent User Interfaces Group at OFFIS, are more often approached when it comes to developing novel user interfaces or finding innovative solutions for industry partners.

Usually, the idea is that we conduct interviews with (potential) end users and ask them what they want in order to come up with innovative ideas. While this may work on some occasions, I believe that this naïve approach misses a few important points. I will try to elaborate my view in the following:

Most people cannot think outside the box

With his famous quote “If I had asked customers what they wanted, they would have said ‘a faster horse’”, Henry Ford wanted to say that customers cannot clearly express their needs. What people actually want is to get from A to B quickly and cheaply, with little maintenance and overhead. Most people are not able to imagine a car if all they know are horses and carriages.

Instead try understanding their problems

Roger L. Cauvin points out that instead of asking users what they want, it is more important to focus on their problems by asking the right questions and interpreting the answers carefully. Sometimes, he argues, it may even be better to ask no questions at all and just observe and listen to your users.

Go into detail and then envision the perfect solution

Frankie Johnson suggests doing that by going into detail and asking what people dislike about current practices (e.g. horses are time-consuming, smelly, and may at times act unpredictably). He believes that talking about people’s dislikes and then asking what a perfect carriage would look like would have resulted in the answer “a carriage without a horse”.

Serendipity & Being Prepared

In addition, I believe that serendipity – the faculty of making fortunate discoveries by accident – plays another important role. A famous example is the serendipitous discovery of penicillin by Sir Alexander Fleming, who, returning from holiday, found bacteria cultures that had been killed by a Penicillium contamination of the dishes. However, “by accident” may be misleading. Most people without the scientific background of Alexander Fleming would not have recognized the importance of that observation. Thus, I believe it is important to “go pregnant” with a problem and keep your eyes open for things that might fit that problem.

Get out there!

Helen Walters gives a few tips for increasing the chance of serendipitous findings, including getting outside the office, building prototypes instead of just talking about the idea, and exploring instead of executing. Further, it is important not to expect results too soon, which is really a tragic insight given that most work today is driven by deadlines and milestones.


Tactile Compass presented at ICMI ’11

This week I had the chance to attend the International Conference on Multimodal Interaction. It took place in Alicante, Spain, which even at this time of the year (mid-November) can be warm and sunny!

I presented the results from a user study of our Tactile Compass. The basic idea of the Tactile Compass is that vibration patterns tell you in which direction to go, so you can use it as a navigation system without ever having to look at the mobile device.
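As a rough illustration of this idea (not the exact encoding used in the study), the angular offset to the next waypoint can be mapped onto a small set of vibration patterns, for example:

// Hypothetical sketch: the angle towards the next waypoint, relative to the
// walking direction, selects one of a few vibration patterns. The concrete
// patterns and sector sizes are illustrative assumptions.
public class TactileCompassSketch {

    private static final long[] AHEAD  = { 0, 100 };            // single short pulse
    private static final long[] RIGHT  = { 0, 100, 100, 300 };  // short-long
    private static final long[] LEFT   = { 0, 300, 100, 100 };  // long-short
    private static final long[] BEHIND = { 0, 300, 100, 300 };  // two long pulses

    /** @param offsetDeg angle to the waypoint relative to the walking direction */
    public static long[] patternFor(float offsetDeg) {
        float a = offsetDeg;
        while (a > 180f) a -= 360f;
        while (a < -180f) a += 360f;
        if (Math.abs(a) < 30f)  return AHEAD;
        if (Math.abs(a) > 150f) return BEHIND;
        return a > 0 ? RIGHT : LEFT;
    }

    // usage on Android: vibrator.vibrate(patternFor(offset), -1);
}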

In this study, we asked 21 participants to follow three routes through the city centre of Oldenburg. In random order they were equipped with the Tactile Compass, a common visual navigation system, or both. In brief, we found that

  • with the visual system, the participants walked fastest, which suggests that they had to think the least;
  • with the tactile system, the participants were least distracted and paid the most attention to the environment;
  • with the combination of both systems, the participants made the fewest navigation errors.

For the details and our conclusions please refer to the full paper.


OpenAL4Android

In the comments to my post on OpenAL on Android, some visitors asked me to provide high-level examples of how to use OpenAL.

In this post you will find a lightweight Android Java library, consisting of only four classes, that allows you to create complex 3D sound scenes. An additional Hello World example building upon this library shows how to create a scene with three different sound sources.

OpenAL4Android Library

Download the library from http://pielot.org/wp-content/uploads/2011/11/OpenAL4Android.zip. The library contains the following classes:

  • OpenAlBridge: contains all the native methods used to communicate with the native OpenAL implementation
  • SoundEnv: manages the sound scene; for example, it allows registering new sounds and moving the virtual listener around
  • Buffer: a buffer is one sound file loaded into the RAM of the device. A buffer itself cannot be played.
  • Source: a source turns a buffer into an actually sounding object. The source allows changing the parameters of the sound, such as its position in 3D space, the playback volume, or the pitch. Each source has one buffer, but one buffer can be used by several sources.

If you turn it into an Android library project, you can use it in several projects at the same time. In Eclipse, go to Properties -> Android and make sure that the check box “Is Library” is checked.

The following Hello World example shows how to use the library.

HelloOpenAL4Android

HelloOpenAL4Android is a demo application illustrating how to use OpenAL4Android. The complete code + Eclipse project files can be downloaded here.

Create a new Android project targeting at least Android 1.6. Visit the project properties and add OpenAL4Android as a library project (Project -> Android -> Library). The following code shows how to create a complex 3D scene.

To run without errors, the program requires two sound files named “lake.wav” and “park.wav” in the project’s assets folder. If the folder does not exist, just create it on the top level of the project, next to src, res, … .

package org.pielot.helloopenal;

import org.pielot.openal.Buffer;
import org.pielot.openal.SoundEnv;
import org.pielot.openal.Source;

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

/**
 * This tutorial shows how to use the OpenAL4Android library. It creates a small
 * scene with two lakes (water) and one park (birdsong).
 * @author Martin Pielot
 */
public class HelloOpenAL4AndroidActivity extends Activity {

    private final static String    TAG    = "HelloOpenAL4Android";

    private SoundEnv            env;

    private Source                lake1;
    private Source                lake2;
    private Source                park1;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Log.i(TAG, "onCreate()");

        this.setContentView(R.layout.main);

        try {
            /* First we obtain the instance of the sound environment. */
            this.env = SoundEnv.getInstance(this);

            /*
             * Now we load the sounds into the memory that we want to play
             * later. Each sound has to be buffered once only. To add new sound
             * copy them into the assets folder of the Android project.
             * Currently only mono .wav files are supported.
             */
            Buffer lake = env.addBuffer("lake");
            Buffer park = env.addBuffer("park");

            /*
             * To actually play a sound and place it somewhere in the sound
             * environment, we have to create sources. Each source has its own
             * parameters, such as 3D position or pitch. Several sources can
             * share a single buffer.
             */
            this.lake1 = env.addSource(lake);
            this.lake2 = env.addSource(lake);
            this.park1 = env.addSource(park);

            // Now we spread the sounds throughout the sound room.
            this.lake1.setPosition(0, 0, -10);
            this.lake2.setPosition(-6, 0, 4);
            this.park1.setPosition(6, 0, -12);

            // and change the pitch of the second lake.
            this.lake2.setPitch(1.1f);

            /*
             * These sounds are perceived from the perspective of a virtual
             * listener. Initially the position of this listener is 0,0,0. The
             * position and the orientation of the virtual listener can be
             * adjusted via the SoundEnv class.
             */
            this.env.setListenerOrientation(20);
        } catch (Exception e) {
            Log.e(TAG, "could not initialise OpenAL4Android", e);
        }
    }

    @Override
    public void onResume() {
        super.onResume();
        Log.i(TAG, "onResume()");

        /*
         * Start playing all sources. 'true' as parameter specifies that the
         * sounds shall be played as a loop.
         */
        this.lake1.play(true);
        this.lake2.play(true);
        this.park1.play(true);
    }

    @Override
    public void onPause() {
        super.onPause();
        Log.i(TAG, "onPause()");

        // Stop all sounds
        this.lake1.stop();
        this.lake2.stop();
        this.park1.stop();

    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        Log.i(TAG, "onDestroy()");

        // Be nice with the system and release all resources
        this.env.stopAllSources();
        this.env.release();
    }

    @Override
    public void onLowMemory() {
        super.onLowMemory();
        this.env.onLowMemory();
    }
}