When Attention is not Scarce – Detecting Boredom from Mobile Phone Usage (UbiComp ’15)

[Looking for the app that recommends a URL of your choice when it thinks you are bored? Click here]

In times of information overload, attention has become a limiting factor in the way we consume information. Hence, researchers have suggested treating attention as a scarce resource and coined the phrase attention economy. Given that attention is also what pays the bills of many free internet services through ads, some even speak of the Attention War. Soon, this war may extend to our mobile devices, where, already today, apps try to engage you through proactive push notifications.


Yet, attention is not always scarce. When we are bored, attention is abundant, and we often turn to our phones to kill time. So, wouldn’t it be great if more services sought your attention when you are bored and left you alone when you are busy?

Since mobile phones are often used to kill time, we — that’s Tilman Dingler from the hciLab of the University of Stuttgart, and Jose San Pedro Wandelmer, Nuria Oliver, and me from Telefonica’s scientific group — saw an opportunity in detecting those moments automatically. If phones knew when their users are killing time, maybe they could suggest that they make better use of the moment.

To identify which usage patterns are indicative of boredom, we logged the phone usage of 54 volunteers for 2 weeks. At the same time, we asked them to frequently report how bored they felt. We found that patterns around the recency of communication activity, context, demographics, and phone usage intensity were related to boredom.


These patterns allowed us to create a model that predicts when a person is more bored than usual with an AUC ROC of 74.5%. It achieves a precision of over 62% when its sensitivity is tuned to detect 50% of the boredom episodes.


While this is far from perfect, we put its effectiveness to the test in a follow-up study: we created an app (available on Google Play, more info here) that, at random times, created notifications suggesting to read a news article.


When predicted to be bored, participants opened those articles in over 20% of the cases and kept reading the article for more than 30 seconds in 15% of the cases. In contrast, when they were not predicted to be bored, they opened the article in only 8% of the cases and kept reading it for more than 30 seconds in only 4% of the cases.
Statistical analysis shows that the prediction accounts for a significant share of the observed increase.

While we certainly don’t believe that recommending Buzzfeed articles will cure people’s boredom, at least not for the majority of them, the study provides evidence that the prediction works.

Now, how can mobile phones better serve their users when they can detect phases of boredom? We see four application scenarios:

  • Engage users with relevant contents to mitigate boredom,
  • Shield users from non-important interruptions when not bored,
  • Propose useful but not necessarily boredom-curing activities, such as clearing a backlog of To Do’s or revisiting vocabulary lists, and
  • Suggest to stop killing time with the phone and embrace boredom, as it is essential to creative processes and self-reflection.

Related to this work, in a follow-up study we also showed that mobile phones can predict boredom proneness, the predisposition to experience boredom.

The work was presented in September 2015 at the ACM International Joint Conference on Pervasive and Ubiquitous Computing in Osaka, Japan, where it received a best-paper award.

When Attention is not Scarce – Detecting Boredom from Mobile Phone Usage.
Martin Pielot, Tilman Dingler, Jose San Pedro, and Nuria Oliver
UbiComp ’15: ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015.


Exporting RandomForest Models to Java Source Code

This post shares a tiny toolkit to export WEKA-generated Random Forest models into light-weight, self-contained Java source code for, e.g., Android.

It came out of my need to include Random Forest models into Android apps.

Previously, I used Weka for Android. However, I did not find a way to export a Random Forest model such that my apps could load it reliably across devices, so the apps had to compute the model on each start, which can take minutes.

androidrf solves the problem in a simple way: a Python script parses the console output of WEKA when training a RandomForest model with the ‘printTrees’ option enabled. Then, it creates a single Java source file implementing those trees with simple if-then statements.
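
To give an idea of the output, each generated tree boils down to nested if-then statements over your features. The snippet below is a made-up illustration; the feature names, thresholds, and method shape are not actual generated output:

// Made-up illustration of the shape of one generated tree:
public class ExampleTree {

    /** Votes for a class label based on two hypothetical features. */
    static double tree0(double screenOnCount, double minutesSinceLastCall) {
        if (screenOnCount < 12.5) {
            if (minutesSinceLastCall < 5.0) {
                return 0; // vote for class 0
            }
            return 1;     // vote for class 1
        }
        return 1;         // vote for class 1
    }
}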

The library ships with three additional Java classes that allow you to run and test the generated classifiers.

The code is available on Github under the MIT Licence: androidrf

How to use it

(for people who are familiar with WEKA):

Load your data set into WEKA, choose RandomForest as the classifier, and enable the ‘printTrees’ option for your RandomForest classifier. Hint: limit the depth of the trees with the ‘maxDepth’ option, because otherwise the resulting source files may become huge.


Save the output of the results buffer into a .txt file. Ideally, save it into the ‘data’ folder of the androidrf project.


Open a terminal, enter the ‘data’ folder of the androidrf project, and execute
python to_java_source.py -M filename (without .txt).


A class named FilenameRandomForest should appear in androidrf/src/org/pielot/rf.


All you need to do is to copy the Java class together with the three pre-existing Java classes (Prediction, Evaluation, RandomForest) into your project. It should compile without error.


The features have been added as fields to the generated classifier. Hence, to specify the features, simply populate those fields. Then call runClassifiers(List predictions) to obtain a Prediction with the details of the prediction (predicted class, certainty, …).
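
To illustrate, here is a minimal usage sketch. The generated class name, the feature fields, and the exact return behaviour of runClassifiers are assumptions based on the description above, not the library’s verbatim API:

import java.util.ArrayList;
import java.util.List;

public class BoredomClassifierDemo {

    /** Runs the generated forest on one observation (names are made up). */
    public static Prediction classifyCurrentMoment() {
        // 'MyDataRandomForest' stands for the generated FilenameRandomForest
        // class; the fields mirror the attributes of your WEKA data set.
        MyDataRandomForest forest = new MyDataRandomForest();
        forest.screenOnCount = 14;     // populate the feature fields ...
        forest.hourOfDay = 21;         // ... with the current observation

        List<Prediction> predictions = new ArrayList<Prediction>();
        // As described above, runClassifiers returns a Prediction with the
        // predicted class and its certainty.
        return forest.runClassifiers(predictions);
    }
}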

Voila! You have a light-weight, portable, working Random Forest model.

 


How the Phone’s Vibration Alarm can help to Save Battery

Not sure how long my hero’s battery will last with GPS on and my phone vibrating every second to indicate if on right track!?!

– This and similar concerns have frequently been expressed when I presented the PocketNavigator – a navigation system guiding pedestrians by vibration patterns instead of spoken turning instructions.

To quantify how much battery power is actually lost to constantly repeated vibration pulses, I tested the battery consumption of two different patterns in comparison to a non-vibrating phone.

In brief, in my setup, the vibration cost less than 5% of the battery life. For comparison: leaving the screen on will drain the phone’s battery in 2-3 hours. Consequently, instead of draining the battery quickly, vibration can even help to save battery if it allows users to leave the screen turned off.

Test Configuration

The apparatus created heartbeat-like vibration patterns, i.e. patterns consisting of two pulses followed by a long pause. The apparatus was run three times. Each run used a different pulse length: 30 ms, 60 ms, and 0 ms (no vibration, as baseline).
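
For reference, such a heartbeat-like pattern can be produced with Android’s Vibrator service. The sketch below mirrors the 30 ms condition; the pause lengths are assumptions, not the exact values used in the test:

import android.content.Context;
import android.os.Vibrator;

public class HeartbeatVibration {

    /** Starts a repeating two-pulse "heartbeat" pattern with 30 ms pulses. */
    public static void start(Context context) {
        Vibrator vibrator =
                (Vibrator) context.getSystemService(Context.VIBRATOR_SERVICE);
        // Pattern: initial delay, vibrate, pause, vibrate, long pause (ms).
        long[] heartbeat = { 0, 30, 150, 30, 800 };
        // Repeat index 0 loops the pattern until vibrator.cancel() is called.
        vibrator.vibrate(heartbeat, 0);
    }
}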

Results

The following diagrams show the remaining battery as it changed while the app was running.



The battery lasted

  • 24.71 hours for 0 ms pulse length (baseline)
  • 23.48 hours for 30 ms pulse lengths = 95.0 % of the baseline, and
  • 23.48 hours for 60 ms pulse lengths = 95.0 % of the baseline.

To account for the fact that the battery was never charged to exactly 100% when the trials commenced, we also calculated trend lines (see the diagrams; Excel’s linear approximation was used), which changes the estimates to

  • 24.18 hours for 0 ms pulse length (baseline)
  • 23.28 hours for 30 ms pulse lengths = 96.3 % of the baseline, and
  • 23.60 hours for 60 ms pulse lengths = 97.6 % of the baseline.

Discussion

Battery life was around 24 hours in all cases, sufficient for normal use. Constant vibration reduced battery life by 2.4 – 5.0%. Increasing the vibration length from 30 to 60 ms per pulse had no measurable effect on battery life. For comparison, when the screen is constantly kept on, the battery drains within about 2-3 hours.

Hence, the additional battery loss is justifiable considering that we gain the ability to continuously communicate information to the user. When using short vibration pulses, designers do not even have to consider the effect of the pulses’ lengths on battery life.

Take Away

This data shows that the impact of having the phone constantly emit vibration pulses is not very high.

This means that as a means of constantly conveying information, e.g. in a navigation system that is supposed to convey information all the time, vibration has a much lower impact on battery life than the screen, which empties the battery in a few hours. On a Nexus One, vibration can convey information continuously for almost 24 hours, enough for the typical smartphone user, who has gotten used to charging the phone every night.


App Store Studies: How to Ask for Consent?

App Stores, such as Apple’s App Store or Google Play, provide researchers the opportunity to conduct experiments with a large number of participants. If we collect data during these experiments, it may be necessary to ask for the users’ consent beforehand. The way we ask for the users’ consent can be crucial, because nowadays people are very sensitive to data collection and potential privacy violations.

We conducted a study suggesting that a simple “Yes-No” form is the best choice for researchers.

Tested Consent Forms

We (most of the credit goes to Niels Henze for conducting the study) tested four different approaches to asking for consent to collect non-personal data. All consent forms contained the following text:
By playing this game you participate in a study that investigates the touch performance on mobile phones. While you play we measure how you touch but we DON’T transmit personalized data. By playing you actively contribute to my PhD thesis.

Checkbox Unchecked

The first tested consent form showed an unchecked checkbox next to a text reading “Send anonymous feedback”. In order to participate in the study, a user had to tick the checkbox and then press the “Okay” button.

Checkbox Checked

The second consent form is the same as the previous one, except that the checkbox is pre-checked. To participate in the study, the user merely has to click the “Okay” button.

Yes/No Button

The third consent form features two buttons reading “Okay” and “Nope”. To participate, the user has to click “Okay”. Clicking “Nope” ends the app immediately.

Okay Button

The fourth consent form only contains a single “Okay” button. By clicking “Okay”, the user participates in the study. To avoid participation, the user has to end the app through the phone’s “home” or “return” buttons.

Study

These consent forms were integrated into a game called Poke the Rabbit! by Niels Henze. At first start, the application randomly selected one of the four consent forms. If the user agreed to participate in the study, the app transmitted the type of the consent form to a server.

Results

We collected data from 3,934 installations. The diagram below shows the conversion rate. The conversion rate was estimated by dividing the number of participants per form by 983.5 (assuming perfect randomisation, i.e. each consent form was presented in 25% of the installations).

Conversion rate per consent form. The x-axis shows the type of consent form. The y-axis shows the estimated fraction of users that participated in the study after download.

We were surprised by the high conversion rates. Only the consent form with the unchecked checkbox resulted in a markedly lower conversion rate.

Conclusions – use Yes/No Buttons

We suggest using the consent form with Yes-No buttons. The consent form with the pre-checked checkbox may be considered unethical, since the user may not have read the text and was not forced to consider unchecking the checkbox. The consent form with only the “Okay” button may be considered unethical, too, because users may not be aware that they can avoid data collection by using the phone’s hardware buttons. The “Yes-No” form, in contrast, forces users to think about their choice and offers a clear way to avoid participating in the study.

Yes-No buttons are ethically safe and resulted in the second highest conversion rate.
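
For illustration, such a Yes/No consent form can be built with a standard Android dialog. The sketch below is a minimal approximation; the wording and the way the app is ended are assumptions, not the exact code used in the study:

import android.app.Activity;
import android.app.AlertDialog;
import android.content.DialogInterface;

public class ConsentDialog {

    /** Shows a simple Yes/No consent form; declining closes the activity. */
    public static void show(final Activity activity) {
        new AlertDialog.Builder(activity)
            .setMessage("By playing this game you participate in a study ...")
            .setCancelable(false)
            .setPositiveButton("Okay", new DialogInterface.OnClickListener() {
                public void onClick(DialogInterface dialog, int which) {
                    // user consented: enable data collection here
                }
            })
            .setNegativeButton("Nope", new DialogInterface.OnClickListener() {
                public void onClick(DialogInterface dialog, int which) {
                    activity.finish(); // end the app immediately
                }
            })
            .show();
    }
}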

Would you suggest otherwise? We are not at all saying that this is definite! Please share your opinion (comments or mail)!

More Information

This work has been published in the position paper App Stores – How to Ask Users for their Consent? The paper was presented at the ETHICS, LOGS and VIDEOTAPE Ethics in Large Scale Trials & User Generated Content Workshop. It took place at CHI ’11: ACM CHI Conference on Human Factors in Computing Systems, which was held in May 2011 in Vancouver, Canada.

Acknowledgements

The authors are grateful to the European Commission, which has co-funded the IP HaptiMap (FP7-ICT-224675) and the NoE INTERMEDIA (FP6-IST-038419).

 


Will they use it? Will it be useful? In-Situ Evaluation of a Tactile Car Finder.

When we develop new technology, we want to know if it will have the potential to be successful in the real world.

This is not trivial! People may sincerely enjoy our technology when we expose them to it in a lab or field study. They may perform better than with previous solutions at the tasks that we ask them to fulfill as part of the study.

However, once they leave our lab, they may never again encounter the need to use it in their daily routines. Or the utility we prove in our studies may not be evident in the contexts where the technology is actually deployed.

In our work, we made use of Google Play to answer these questions in a novel way. We wanted to study whether haptic feedback can make people less distracted from the environment when they use their phone for pedestrian navigation in daily life. We developed a car finder application for Android phones with a simple haptic interface: whenever the user points the phone in the direction of the car, the phone vibrates.
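
The core of this interface can be sketched in a few lines: compare the phone’s compass heading with the bearing towards the parked car and vibrate when the two roughly match. The tolerance and pulse length below are assumptions for illustration, not the values used in the actual app:

import android.location.Location;
import android.os.Vibrator;

public class CarFinderFeedback {

    private static final float TOLERANCE_DEGREES = 15f; // assumed tolerance

    /** Vibrates briefly whenever the phone points towards the parked car. */
    public static void onCompassUpdate(float azimuthDegrees,
                                       Location current,
                                       Location parkedCar,
                                       Vibrator vibrator) {
        float bearingToCar = current.bearingTo(parkedCar); // -180..180 degrees
        float diff = Math.abs(normalize(azimuthDegrees - bearingToCar));
        if (diff < TOLERANCE_DEGREES) {
            vibrator.vibrate(50); // short pulse: "you are pointing at the car"
        }
    }

    /** Maps an angle to the range -180..180 degrees. */
    private static float normalize(float degrees) {
        float d = degrees % 360;
        if (d > 180) d -= 360;
        if (d < -180) d += 360;
        return d;
    }
}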

The data provides evidence that about half of the users use the vibration feedback. When vibration feedback is enabled, users turn off the display and stow away the device more often. They also look less at the display. Hence, when using vibration feedback, users are less distracted.

Our work shows that app distribution channels, such as Google Play or the iOS App Store, can serve as a cheap way of bringing a user study into people’s daily lives instead of bringing people into the lab. Compared to the results of a lab study, these findings have high external validity, i.e. we can be more confident that our findings generalize to a large number of users and usage situations.

This work will be presented at NordiCHI ’12: The 7th Nordic Conference on Human-Computer Interaction, which takes place in Copenhagen in October 2012. The paper is available here (pdf).

Thanks to http://www.v3.co.uk/ for summarising this work so nicely in their article Buzzing app helps smartphone dudes locate their car.


Ambient Visualisation of Social Network Activity

Social networks, such as Facebook or Twitter, are an important factor in the communication between individuals of the so-called digital natives generation. More and more often, they are used to exchange short bursts of thoughts or comments as a means of staying connected with each other.

The instant communication enabled by those social networks has, however, created a form of peer-group pressure to constantly check for updates. For example, has an informal get-together been announced, or has somebody requested to become your friend? This emerging pressure can make people return to the computer more often than they want. This is why we find colleagues regularly looking for new status updates in meetings, and at parties we see more and more friends who cannot resist checking their Facebook accounts.

One solution is to notify users when something important has happened. Mobile phones, as personal, ubiquitous, and always-connected devices, lend themselves as a platform, since they are carried by the user most of the time. Thus, it is no surprise that our phones now not only notify us about incoming short messages, but do the same for Twitter @mentions, Facebook messages, or friend requests. However, these notifications may go unnoticed, too. Thus, instead of checking our Facebook & Twitter accounts, we keep looking at our mobile phones for notification items.

With AmbiTweet, we investigate conveying social network status through ambient displays. We use a live wallpaper showing beautiful blue water. The wallpaper can be connected to a Twitter account and visualizes the level of activity in an ambient way. The higher the level of activity on this Twitter account, the brighter and busier the water becomes. This can be perceived even in the periphery of the field of vision. Thus, users can become aware of important activity without needing to focus their eyes on the phone.
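
For illustration, the underlying idea boils down to a mapping from recent account activity to the colour of the rendered water. The sketch below is illustrative only; the function and scaling are assumptions, not AmbiTweet’s actual code:

import android.graphics.Color;

public class ActivityToWaterColor {

    /**
     * Maps the number of tweets seen within the last hour to a water colour.
     * Low activity yields a dark, calm blue; high activity a bright blue.
     */
    public static int waterColor(int tweetsLastHour, int maxTweets) {
        float level = Math.min(1f, tweetsLastHour / (float) maxTweets);
        int green = (int) (40 + 120 * level);   // 40 .. 160
        int blue  = (int) (90 + 165 * level);   // 90 .. 255
        return Color.rgb(0, green, blue);
    }
}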

Ambient displays, in general, have the advantage that they convey information in a continuous but unobtrusive way. They exploit the fact that the brain can process information pre-attentively, i.e. without generating apparent cognitive load. AmbiTweet therefore allows concentrating on a primary task while remaining aware of the level of activity on a social network account.


OpenAL4Android

In the comments to my post on OpenAL on Android some visitors asked to provide some high-level examples of how to use OpenAL.

In this post you will find a light-weight Android Java library, consisting of four classes only, that allows you to create complex 3D sound scenes. An additional Hello World example building upon this library will show how to create a scene with three different sound sources.

OpenAL4Android Library

Download the library from http://pielot.org/wp-content/uploads/2011/11/OpenAL4Android.zip. The library contains the following classes:

  • OpenAlBridge: this class contains all the native methods used to communicate with the native OpenAL implementation
  • SoundEnv: this class allows you to manage the sound scene. For example, it allows registering new sounds and moving the virtual listener around
  • Buffer: a buffer is one sound file loaded into the RAM of the device. A buffer itself cannot be played.
  • Source: a source turns a buffer into an actually sounding object. The source allows changing the parameters of the sound, such as its position in 3D space, the playback volume, or the pitch. Each source has one buffer, but one buffer can be used by several sources.

If you turn it into an Android library, you can use it in several projects at the same time. Go to Properties -> Android and make sure that the check box “Is Library” is checked.

The following Hello World example shows how to use the library.

HelloOpenAL4Android

HelloOpenAL4Android is a demo application illustrating how to use OpenAL4Android. The complete code + Eclipse project files can be downloaded here.

Create a new Android project. Use at least Android 1.6. Visit the project properties and add OpenAL4Android as a library project (project -> android -> library). The following code shows how to create a complex 3D scene.

To run without errors, the program requires two sound files named “lake.wav” and “park.wav” in the project’s assets folder. If the folder does not exist, just create it on the top level of the project, next to src, res, … .

package org.pielot.helloopenal;

import org.pielot.openal.Buffer;
import org.pielot.openal.SoundEnv;
import org.pielot.openal.Source;

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

/**
 * This tutorial shows how to use the OpenAL4Android library. It creates a small
 * scene with two lakes (water) and one park (bird chanting).
 * @author Martin Pielot
 */
public class HelloOpenAL4AndroidActivity extends Activity {

    private final static String    TAG    = "HelloOpenAL4Android";

    private SoundEnv            env;

    private Source                lake1;
    private Source                lake2;
    private Source                park1;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Log.i(TAG, "onCreate()");

        this.setContentView(R.layout.main);

        try {
            /* First we obtain the instance of the sound environment. */
            this.env = SoundEnv.getInstance(this);

            /*
             * Now we load the sounds into the memory that we want to play
             * later. Each sound has to be buffered once only. To add new sound
             * copy them into the assets folder of the Android project.
             * Currently only mono .wav files are supported.
             */
            Buffer lake = env.addBuffer("lake");
            Buffer park = env.addBuffer("park");

            /*
             * To actually play a sound and place it somewhere in the sound
             * environment, we have to create sources. Each source has its own
             * parameters, such as 3D position or pitch. Several sources can
             * share a single buffer.
             */
            this.lake1 = env.addSource(lake);
            this.lake2 = env.addSource(lake);
            this.park1 = env.addSource(park);

            // Now we spread the sounds throughout the sound room.
            this.lake1.setPosition(0, 0, -10);
            this.lake2.setPosition(-6, 0, 4);
            this.park1.setPosition(6, 0, -12);

            // and change the pitch of the second lake.
            this.lake2.setPitch(1.1f);

            /*
             * These sounds are perceived from the perspective of a virtual
             * listener. Initially the position of this listener is 0,0,0. The
             * position and the orientation of the virtual listener can be
             * adjusted via the SoundEnv class.
             */
            this.env.setListenerOrientation(20);
        } catch (Exception e) {
            Log.e(TAG, "could not initialise OpenAL4Android", e);
        }
    }

    @Override
    public void onResume() {
        super.onResume();
        Log.i(TAG, "onResume()");

        /*
         * Start playing all sources. 'true' as parameter specifies that the
         * sounds shall be played as a loop.
         */
        this.lake1.play(true);
        this.lake2.play(true);
        this.park1.play(true);
    }

    @Override
    public void onPause() {
        super.onPause();
        Log.i(TAG, "onPause()");

        // Stop all sounds
        this.lake1.stop();
        this.lake2.stop();
        this.park1.stop();

    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        Log.i(TAG, "onDestroy()");

        // Be nice with the system and release all resources
        this.env.stopAllSources();
        this.env.release();
    }

    @Override
    public void onLowMemory() {
        this.env.onLowMemory();
    }
}

A Tactile Compass for Eyes-free Pedestrian Navigation

The idea came up when I was heading back to the hotel from a conference dinner at MobileHCI 2008 in Amsterdam. I had no sense of orientation. The only guide I had was a map on my Nokia phone. Since I was not familiar with Amsterdam, the route led me right through the busy areas of the city center.

The day before, a cyclist had stolen a mobile phone right out of the hand of another conference attendee. Knowing that made me quite afraid something similar could happen to me too. Without the phone I would have been completely lost.

Here, serendipity hit. Since my research group was already working on tactile displays for navigation and orientation, I wondered whether it was possible to create a navigation system for mobile phones that guided by vibration only, so it could be left in the pocket.

Back at OFFIS, we quickly tested a few prototypes, including a hot/cold metaphor and a compass metaphor. The compass metaphor prevailed. The design was to encode the direction the user should be heading (forward, left, right, backwards) in different vibration patterns. Our test participants liked that design most. Later, we tested the vibration compass design in a forest and found that it can replace navigation with a map.
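
In rough terms, the compass metaphor quantizes the angle between the user’s heading and the direction of the next waypoint into a handful of vibration patterns. The sector boundaries and timings in the sketch below are assumptions for illustration, not the patterns the PocketNavigator actually uses:

public class TactileCompass {

    /**
     * Returns a vibration pattern (delay/on/off milliseconds) for the signed
     * deviation between heading and target bearing: ahead, left, right, back.
     */
    public static long[] patternFor(float deviationDegrees) {
        float d = Math.abs(deviationDegrees);
        if (d < 30) {
            return new long[] { 0, 60 };                      // ahead: one short pulse
        } else if (d < 150) {
            return deviationDegrees < 0
                    ? new long[] { 0, 60, 100, 60 }           // turn left: two pulses
                    : new long[] { 0, 60, 100, 60, 100, 60 }; // turn right: three pulses
        } else {
            return new long[] { 0, 400 };                     // behind: one long pulse
        }
    }
}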

The development and the studies were presented at the 13th IFIP TC13 Conference on Human-Computer Interaction (INTERACT) in Lisbon, Portugal, in September 2011. The article is available here.

If you own an Android phone you can try this vibration compass by downloading our PocketNavigator navigation application for free from the Android market.

 


Android User Hate Parade

Stupid! – Garbage – Hate it!!!!!!!

…. these are some of the comments you get when publishing apps in the Android Market for free. This can be really frustrating for developers. Here are some of the worst examples I have encountered in my life as an Android developer:

The “it does not work -> 1 star” faction

Examples:

  • “poor! doesnt work on my G1 keeps force closing! Uninstalled!”
  • “Keeps forcing close fix it and it is horrible. Hate it!!!!!!!!! One star and I am uninstalling this stupid thing”

Guys, I can understand your anger, but please, just send the developer a mail, describe the error and the circumstances where it occurred in as much detail as possible, and give the developer a chance to learn about it and fix it!

The “my obscure feature is not present -> 1 star” faction

Examples

  • “No “dk” map [Ingen dk kort]”
  • “Not in Russian [?? ?? ??????]”

Yes, you may want to have the app in Russian, Urdu, and Aramaic, but as a hobby developer one has limited resources. Please respect that many apps are developed on a very tight or even non-existent budget. Why not just be glad  to have that many apps for free?

The “I hate your app -> 1 star” faction

Examples

  • “Stupid!”
  • “Garbage”
  • “Slow and no instructions”

These guys would probably even rate the app with 0 stars if that was possible. The comment “no instructions” was even wrong at the time of writing, since there was a manual accessible from the main menu.

*update*

The “Weird -> 1 star” faction (proposed by Niels)

Examples

  • “Has swear word on end button if you don’t do well. Not something I want my kid playing.”
  • “My mom is crying. Uninstalled”
  • “Stupid and offincive to my pet rabbit bayleigh”.

Conclusions

If you download, rate, and comment on apps, please be nice to your developers. Many of us are people like you and me, spending our free time working on our apps. Please accept that you cannot get perfect solutions in no time. Rather, help us improve our apps and appreciate that we deliver them for free!


OpenAL on Android

Although this comes slightly late, with 3D audio support announced for Android 2.3, this tutorial shows how to compile OpenAL for Android, so you can provide 3D sound in your apps on 2.2 and below. The code has successfully been tested with the Nexus One (2.2) and the G1 (1.6).

Update: the resulting project is available for download as a single .zip file. To run the example, create a directory called wav on your device’s SD card and put a sound file called lake.wav into it.

Update: some people reported latency issues. It can possibly be fixed in the OpenAL source. See last paragraph for a possible solution.

Update: if you are using a NativeActivity, and the app crashes on device = alcOpenDevice( NULL ); please take a look at Garen’s fix to the getEnv() method: http://pielot.org/2010/12/14/openal-on-android/#comment-1160

 

Preparation

Understand how to compile NDK resources

This tutorial requires working with the Android NDK. We will have to compile OpenAL from source into a native Shared Object and then build a Java Native Interface to work with it. The techniques I use to work with the NDK (on Windows) have been described in a previous tutorial. You might want to take a look to understand what exactly I am doing.

Remember to PRESS F5 after you have COMPILED the SHARED OBJECT. Otherwise, Eclipse will not use the new .so.

Create HelloOpenAL Project

Create a normal Android SDK Project.

package org.pielot.helloopenal;

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

public class HelloOpenAL extends Activity {
 /** Called when the activity is first created. */
 @Override
 public void onCreate(Bundle savedInstanceState) {
 super.onCreate(savedInstanceState);
 setContentView(R.layout.main);
 }

 private native int play(String filename);
}

Compile OpenAL for Android

The first step will be to compile OpenAL for Android. The goal will be to produce a Shared Object library libopenal.so that can be loaded into Android apps.

Download patched source of OpenAL

 

Thanks to Martins Mozeiko and Chris Robinson, a version of OpenAL exists that has been adapted to the Android platform. Go to http://repo.or.cz/w/openal-soft/android.git and download the latest version of the patched OpenAL source code. For this tutorial I used the version that can be downloaded here.

Extract it into the project folder and rename the top folder of the downloaded source from ‘android’ to ‘openal’.

Create config.h

To compile OpenAL, a file called config.h is needed. Copy it from <PROJECT_HOME>/openal/android/jni to <PROJECT_HOME>/openal/include.

Create Android.mk

To tell the NDK compiler what files to compile, we now need to create Android.mk in <PROJECT_HOME>/jni. The file should contain:

TARGET_PLATFORM := android-3
ROOT_PATH := $(call my-dir)

########################################################################################################
include $(CLEAR_VARS)

LOCAL_MODULE     := openal
LOCAL_ARM_MODE   := arm
LOCAL_PATH       := $(ROOT_PATH)
LOCAL_C_INCLUDES := $(LOCAL_PATH) $(LOCAL_PATH)/../openal/include $(LOCAL_PATH)/../openal/OpenAL32/Include
LOCAL_SRC_FILES  := ../openal/OpenAL32/alAuxEffectSlot.c \
 ../openal/OpenAL32/alBuffer.c        \
 ../openal/OpenAL32/alDatabuffer.c    \
 ../openal/OpenAL32/alEffect.c        \
 ../openal/OpenAL32/alError.c         \
 ../openal/OpenAL32/alExtension.c     \
 ../openal/OpenAL32/alFilter.c        \
 ../openal/OpenAL32/alListener.c      \
 ../openal/OpenAL32/alSource.c        \
 ../openal/OpenAL32/alState.c         \
 ../openal/OpenAL32/alThunk.c         \
 ../openal/Alc/ALc.c                  \
 ../openal/Alc/alcConfig.c            \
 ../openal/Alc/alcEcho.c              \
 ../openal/Alc/alcModulator.c         \
 ../openal/Alc/alcReverb.c            \
 ../openal/Alc/alcRing.c              \
 ../openal/Alc/alcThread.c            \
 ../openal/Alc/ALu.c                  \
 ../openal/Alc/android.c              \
 ../openal/Alc/bs2b.c                 \
 ../openal/Alc/null.c                 \

LOCAL_CFLAGS     := -DAL_BUILD_LIBRARY -DAL_ALEXT_PROTOTYPES
LOCAL_LDLIBS     := -llog -Wl,-s

include $(BUILD_SHARED_LIBRARY)

########################################################################################################

Compile OpenAL

Now compile the source code using the NDK. I used a technique described in another tutorial on using cygwin with the Android NDK on Windows. I created a batch file make.bat in the project’s directory containing:

@echo on

@set BASHPATH="C:\cygwin\bin\bash"
@set PROJECTDIR="/cygdrive/d/dev/workspace-android/helloopenal"
@set NDKDIR="/cygdrive/d/dev/SDKs/android-ndk-r4b/ndk-build"

%BASHPATH% --login -c "cd %PROJECTDIR% && %NDKDIR%"

@pause

Save the file and execute it. If there is no error you have just compiled the OpenAL library into a Shared Object! You can find it in <PROJECT_HOME>/libs/armeabi. Now let’s see how we can make use of it.

The Native Interface

The next step is to create a Java Native Interface that allows us to access the OpenAL Shared Object.

Define Native Interface in Activity

Extend the HelloOpenAL Activity so that it looks like this:

package org.pielot.helloopenal;

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

public class HelloOpenAL extends Activity {
 /** Called when the activity is first created. */
 @Override
 public void onCreate(Bundle savedInstanceState) {
 super.onCreate(savedInstanceState);
 setContentView(R.layout.main);

 System.loadLibrary("openal");
 System.loadLibrary("openaltest");
 int ret = play("/sdcard/wav/lake.wav");
 Log.i("HelloOpenAL", ""+ret);
 }

 private native int play(String filename);
}

Implement Native Interface

In <PROJECT_HOME> execute

javah.exe -classpath bin -d jni org.pielot.helloopenal.HelloOpenAL

to create the c header for the native function. Now <PROJECT_HOME>/jni should contain the file org_pielot_helloopenal_HelloOpenAL.h. Create org_pielot_helloopenal_HelloOpenAL.c in <PROJECT_HOME>/jni and fill it with

#include "org_pielot_helloopenal_HelloOpenAL.h"

JNIEXPORT jint JNICALL Java_org_pielot_helloopenal_HelloOpenAL_play
 (JNIEnv * env, jobject obj, jstring filename) {
 return 0;
}

Compile Native Interface

Add new library to Android.mk

########################################################################################################

include $(CLEAR_VARS)

LOCAL_MODULE     := openaltest
LOCAL_ARM_MODE   := arm
LOCAL_PATH       := $(ROOT_PATH)
LOCAL_C_INCLUDES := $(LOCAL_PATH)/../openal/include
LOCAL_SRC_FILES  := org_pielot_helloopenal_HelloOpenAL.c     \

LOCAL_LDLIBS     := -llog -Wl,-s

LOCAL_SHARED_LIBRARIES := libopenal

include $(BUILD_SHARED_LIBRARY)

########################################################################################################

Now compile again; the make.bat created above will do. You should now have two libraries in <PROJECT_HOME>/libs/armeabi/, namely libopenal.so and libopenaltest.so (the name follows the LOCAL_MODULE entry).

If you want, you can run the app now. It should not crash and should print ‘HelloOpenAL   0’ into the log.

Testing OpenAL

Now we have two libraries, one containing OpenAL and the other a native interface. We will now fill the latter with life to demonstrate the use of OpenAL.

Initialize and Release Audio Components

To do so, open org_pielot_helloopenal_HelloOpenAL.c and extend the existing code as follows:

#include "org_pielot_helloopenal_HelloOpenAL.h"

#include <stdio.h>
#include <stddef.h>
#include <string.h>
#include <AL/al.h>
#include <AL/alc.h>

JNIEXPORT jint JNICALL Java_org_pielot_helloopenal_HelloOpenAL_play
 (JNIEnv * env, jobject obj, jstring filename) {

 // Global Variables
 ALCdevice* device = 0;
 ALCcontext* context = 0;
 const ALint context_attribs[] = { ALC_FREQUENCY, 22050, 0 };

 // Initialization
 device = alcOpenDevice(0);
 context = alcCreateContext(device, context_attribs);
 alcMakeContextCurrent(context);

 // More code to come here ...

 // Cleaning up
 alcMakeContextCurrent(0);
 alcDestroyContext(context);
 alcCloseDevice(device);

 return 0;
}

This code acquires the audio resources and releases them again. You should be able to compile the code and execute the HelloOpenAL app. However, nothing will happen yet, as we still have to load and play a sound.

Methods for Loading Audio Data

Now we need to load audio data. Unfortunately, OpenAL does not come with functions for loading audio data. There used to be the very popular ALUT toolkit, but it is no longer part of OpenAL. We therefore need to provide our own methods to load .wav files.

The following code snippets have been posted by Gorax at www.gamedev.net. They consist of one struct and two methods to load .wav data and buffer it in memory.

Add the following code to org_pielot_helloopenal_HelloOpenAL.c, above JNIEXPORT jint JNICALL Java_org_pielot_helloopenal_HelloOpenAL_play:

typedef struct {
 char  riff[4];//'RIFF'
 unsigned int riffSize;
 char  wave[4];//'WAVE'
 char  fmt[4];//'fmt '
 unsigned int fmtSize;
 unsigned short format;
 unsigned short channels;
 unsigned int samplesPerSec;
 unsigned int bytesPerSec;
 unsigned short blockAlign;
 unsigned short bitsPerSample;
 char  data[4];//'data'
 unsigned int dataSize;
}BasicWAVEHeader;

//WARNING: This Doesn't Check To See If These Pointers Are Valid
char* readWAV(char* filename,BasicWAVEHeader* header){
 char* buffer = 0;
 FILE* file = fopen(filename,"rb");
 if (!file) {
 return 0;
 }

 if (fread(header,sizeof(BasicWAVEHeader),1,file)){
 if (!(//these things *must* be valid with this basic header
 memcmp("RIFF",header->riff,4) ||
 memcmp("WAVE",header->wave,4) ||
 memcmp("fmt ",header->fmt,4)  ||
 memcmp("data",header->data,4)
 )){

 buffer = (char*)malloc(header->dataSize);
 if (buffer){
 if (fread(buffer,header->dataSize,1,file)){
 fclose(file);
 return buffer;
 }
 free(buffer);
 }
 }
 }
 fclose(file);
 return 0;
}

ALuint createBufferFromWave(char* data,BasicWAVEHeader header){

 ALuint buffer = 0;
 ALuint format = 0;
 switch (header.bitsPerSample){
 case 8:
 format = (header.channels == 1) ? AL_FORMAT_MONO8 : AL_FORMAT_STEREO8;
 break;
 case 16:
 format = (header.channels == 1) ? AL_FORMAT_MONO16 : AL_FORMAT_STEREO16;
 break;
 default:
 return 0;
 }

 alGenBuffers(1,&buffer);
 alBufferData(buffer,format,data,header.dataSize,header.samplesPerSec);
 return buffer;
}


Load Audio Data into Buffer

In the method Java_org_pielot_helloopenal_HelloOpenAL_play, locate the comment

// More code to come here ...

and replace it with:

// Create audio buffer
 ALuint buffer;
 const char* fnameptr = (*env)->GetStringUTFChars(env, filename, NULL);
 BasicWAVEHeader header;
 char* data = readWAV(fnameptr,&header);
 if (data){
 //Now We've Got A Wave In Memory, Time To Turn It Into A Usable Buffer
 buffer = createBufferFromWave(data,header);
 if (!buffer){
 free(data);
 return -1;
 }

 } else {
 return -1;
 }

 // TODO turn buffer into playing source

 // Release audio buffer
 alDeleteBuffers(1, &buffer);

This piece of code tries to load PCM .wav audio data from the passed filename. The audio data is loaded into an OpenAL buffer. The buffer itself is merely the cached audio data but cannot be played. It therefore has to be attached to a sound source.

Create a playing source from the buffer

In the same method, locate the comment

// TODO turn buffer into playing source

and replace it with:

 // Create source from buffer and play it
 ALuint source = 0;
 alGenSources(1, &source );
 alSourcei(source, AL_BUFFER, buffer);

 // Play source
 alSourcePlay(source);

 int        sourceState = AL_PLAYING;
 do {
 alGetSourcei(source, AL_SOURCE_STATE, &sourceState);
 } while(sourceState == AL_PLAYING);

 // Release source
 alDeleteSources(1, &source);

This piece of code creates a sound source from the buffer and plays it once.

Test on the Device

Compile the native code again by using make.bat. It should compile without errors. If you are using Eclipse, select the project HelloOpenAL in the Package Explorer and press F5. Otherwise, Eclipse might not be aware that the Shared Objects were updated.

Next, go to your device’s SD card and add a .wav file. I created a folder called “wav” and put a mono .wav file called “lake.wav” into this folder. Make sure it matches the filename you pass to play(String filename) in the HelloOpenAL activity.

Now it is time for the big test! Once you start the app, the .wav file should be played once.

This has been tested on the Nexus One and the G1/HTC Dream.

Solving Latency Issues

Some people seem to have experienced a 0.5 sec lag between triggering the sound and the actual playback. In the comments, aegisdino suggested the following solution:

In alcOpenDevice() of ALc.c source,
“device->NumUpdates” seems to apply the lag issue.
In normal cases, device->NumUpdates will be 4, then I can feel about 0.5sec lag.
But when I fix it to 1, the lag disappeared.

I did not test the solution, but as NumUpdates was 1 in my version of ALc.c it could be the solution.
