Too Tense for Candy Crush – How Emotions Affect What Kind of Distractions We Welcome

Push notifications are increasingly being used to engage mobile device users with app content. News organizations deliver breaking-news notifications, social platforms inform about new content, games inform about status updates, and so on, all with the goal of making the user engage with the service.

In this research, we – Kostadin Kushlev (University of Virginia), Bruno Cardoso (KU Leuven), and I – explored to what extent users’ current affect, that is, how they are feeling, impacts user engagement. To this end, we analyzed data from a study conducted by Telefónica Research in which the participants (N = 337) downloaded a custom-developed app that delivered notifications.

After attending to a notification (N = 32,704), participants reported how they felt in a mini questionnaire. Besides asking how the participants felt, the questionnaire also offered them the option to voluntarily engage with further content. Participants were not aware that our main interest was in observing their interaction with said content; they believed that it was mainly there as a courtesy to make their participation in the study more fun.

Participants always had two choices: a mentally demanding task and a simple, diverting one. The tasks in these groups were chosen from a list of four options each. The mentally demanding options included browsing trending games on Google Play, reading the Wikipedia article of the day, filling out a personality questionnaire, or playing a thinking game. The simple and diverting options included watching a trending video, reading fun facts, playing an action game, and watching trending gif images.

The results show a clear impact of affect on the choice of the content:

  • When feeling good, people tend to avoid mentally demanding tasks. Hence, proactive recommendations for content that requires mental effort should target moments of neutral or even negative valence.
  • When tense, people tend to avoid diverting tasks. Thus, people who want to reduce task-induced stress might want to rely on external timers to schedule regular breaks with fun activities.
  • When energetic, people tend to avoid suggestions for further distraction altogether. Hence, proactive recommendations should target moments of low energetic arousal, such as moments of boredom.

These findings show that the current emotional state affects the kind of content users choose to engage with. Future “smart” devices should not only be technologically smart, but also psychologically smart. They should strive to understand how users feel in order to engage them with the most appropriate content at the most opportune of times.

The work will be presented at ACM MobileHCI ’17, the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services, which will take place in September 2017 in Vienna, Austria.

Citation:

Too Tense for Candy Crush: Affect Influences User Engagement With Proactively Suggested Content.
Kostadin Kushlev, Bruno Cardoso, Martin Pielot.
MobileHCI ’17: ACM International Conference on Human-Computer Interaction with Mobile Devices and Services, 2017.


When Attention is not Scarce – Detecting Boredom from Mobile Phone Usage (UbiComp ’15)

In times of information overload, attention has become a limiting factor in the way we consume information. Hence, researchers have suggested treating attention as a scarce resource and coined the phrase attention economy. Given that attention is also what pays the bills of many free internet services through ads, some even speak of the Attention War. Soon, this war may extend to our mobile devices, where already today apps try to engage you through proactive push notifications.


Yet, attention is not always scarce. When bored, attention is abundant, and people often turn to their phones to kill time. So, wouldn’t it be great if services sought your attention when you are bored and left you alone when you are busy?

Since mobile phones are often used to kill time, we (that’s Tilman Dingler from the hciLab of the University of Stuttgart, and Jose San Pedro Wandelmer, Nuria Oliver, and me from Telefonica’s scientific group) saw an opportunity in detecting those moments automatically. If phones knew when their users were killing time, maybe they could suggest better uses of the moment.

To identify which usage patterns are indicative of boredom, we logged the phone usage of 54 volunteers for two weeks. At the same time, we asked them to frequently report how bored they felt. We found that patterns around the recency of communication activity, context, demographics, and phone usage intensity were related to boredom.

[Screenshot: experience-sampling questionnaire]

These patterns allowed us to create a model that predicts when a person is more bored than usual with an AUC-ROC of 74.5%. It achieves a precision of over 62% when its sensitivity is tuned to detect 50% of the boredom episodes.
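To illustrate what tuning the sensitivity means in practice, here is a small self-contained sketch (with made-up scores and labels, not our study data): the score threshold is lowered until at least half of the true boredom episodes are detected, and the precision is then read off at that threshold.

```java
import java.util.Arrays;

// Illustrative sketch: tune a score threshold so that recall reaches a
// target sensitivity, then report the precision at that threshold.
// Scores and labels are made up for illustration only.
public class ThresholdTuning {

    // Returns the highest threshold at which recall >= targetRecall.
    public static double tuneThreshold(double[] scores, boolean[] bored, double targetRecall) {
        double[] sorted = scores.clone();
        Arrays.sort(sorted);
        // Try thresholds from high to low; the first one that reaches
        // the target sensitivity is the most precise choice.
        for (int i = sorted.length - 1; i >= 0; i--) {
            if (recall(scores, bored, sorted[i]) >= targetRecall) {
                return sorted[i];
            }
        }
        return sorted[0];
    }

    public static double recall(double[] scores, boolean[] bored, double threshold) {
        int truePos = 0, actualPos = 0;
        for (int i = 0; i < scores.length; i++) {
            if (bored[i]) {
                actualPos++;
                if (scores[i] >= threshold) truePos++;
            }
        }
        return actualPos == 0 ? 0 : (double) truePos / actualPos;
    }

    public static double precision(double[] scores, boolean[] bored, double threshold) {
        int truePos = 0, predictedPos = 0;
        for (int i = 0; i < scores.length; i++) {
            if (scores[i] >= threshold) {
                predictedPos++;
                if (bored[i]) truePos++;
            }
        }
        return predictedPos == 0 ? 0 : (double) truePos / predictedPos;
    }
}
```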

[Diagram: precision-recall curve of the boredom model]

While this is far from perfect, we demonstrated its effectiveness in a follow-up study: we created an app (available on Google Play, more info here) that, at random times, created notifications suggesting news articles to read.

[Screenshot: the news-article notification]

When predicted to be bored, participants opened those articles in over 20% of the cases and kept reading the article for more than 30 seconds in 15% of the cases. In contrast, when they were not bored, they opened the article in only 8% of the cases and kept reading it for more than 30 seconds in only 4% of the cases.
[Diagram: engagement with suggested articles, bored vs. not bored]
Statistical analysis shows that the prediction accounts for a significant share of the observed increase.

While we certainly don’t believe that recommending Buzzfeed articles will cure people’s boredom, at least not for the majority of them, the study provides evidence that the prediction works.

So how can mobile phones serve their users better when they can detect phases of boredom? We see four application scenarios:

  • Engage users with relevant contents to mitigate boredom,
  • Shield users from non-important interruptions when not bored,
  • Propose useful but not necessarily boredom-curing activities, such as clearing a backlog of To Do’s or revisiting vocabulary lists, and
  • Suggest to stop killing time with the phone and embrace boredom, as it is essential to creative processes and self-reflection.

Related to this work, in a follow-up study we also showed that mobile phones can predict boredom proneness, the predisposition to experience boredom.

The work was presented in September 2015 at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’15) in Osaka, Japan, where it received a best-paper award.

When Attention is not Scarce – Detecting Boredom from Mobile Phone Usage.
Martin Pielot, Tilman Dingler, Jose San Pedro, and Nuria Oliver
UbiComp ’15: ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015.


Exporting RandomForest Models to Java Source Code

This post shares a tiny toolkit that exports WEKA-generated Random Forest models into lightweight, self-contained Java source code, e.g., for Android.

It came out of my need to include Random Forest models into Android apps.

Previously, I used Weka for Android. However, I did not find a way to export a Random Forest model such that my apps could load it reliably across devices, so the apps had to recompute the model on each start — which can take minutes.

androidrf solves the problem in a simple way: a Python script parses the console output that WEKA produces when training a RandomForest model with the printTrees option enabled. It then creates a single Java source file implementing those trees with simple if-then statements.
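To give an idea of what the generated file looks like, here is a hand-written sketch of a single exported tree (the feature names and thresholds are invented; the actual output depends on your training data):

```java
// Sketch of what a single exported tree looks like: each WEKA decision
// node becomes an if-then statement over the feature fields.
// Feature names and thresholds here are invented for illustration.
public class SampleTreeSketch {

    // Feature fields are populated by the caller before classification.
    public double screenOnTime;
    public double notificationCount;

    // Returns the index of the predicted class (e.g. 0 = no, 1 = yes).
    public int classifyTree0() {
        if (screenOnTime < 42.5) {
            if (notificationCount < 3.5) {
                return 0;
            } else {
                return 1;
            }
        } else {
            return 1;
        }
    }
}
```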

The library ships with three additional Java classes that allow running and testing the generated classifiers.

The code is available on GitHub under the MIT License: androidrf

How to use it

(for people who are familiar with WEKA):

Load your data set into WEKA, choose RandomForest as the classifier, and enable the ‘printTrees’ option. Hint: limit the depth of the trees with the ‘maxDepth’ option, because otherwise the resulting source files may become huge.


Save the output of the results buffer into a .txt file. Ideally, save it into the ‘data’ folder of the androidrf project.


Open a terminal, enter the ‘data’ folder of the androidrf project, and execute
python to_java_source.py -M filename (without .txt).


A class with the name FilenameRandomForest should appear in androidrf/src/org/pielot/rf


All you need to do is copy the generated Java class together with the three pre-existing Java classes (Prediction, Evaluation, RandomForest) into your project. It should compile without errors.


The features are added as fields to the generated classifier. Hence, in order to specify the features, simply populate those fields. Then call runClassifiers(List predictions) to obtain a Prediction with the details of the prediction (predicted class, certainty, …).
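Conceptually, the generated forest then combines its trees by majority voting. The following self-contained sketch illustrates that idea; it mimics, but is not, the actual androidrf classes, and its trees and features are invented:

```java
// Simplified sketch of how an exported forest combines its trees: each
// tree votes for a class, the majority wins, and the vote share serves
// as a crude certainty estimate. This mirrors the idea only; it is not
// the actual code shipped with androidrf.
public class ForestSketch {

    // Feature fields, populated by the caller (as in the generated code).
    public double screenOnTime;
    public double notificationCount;

    private int tree0() { return screenOnTime < 42.5 ? 0 : 1; }
    private int tree1() { return notificationCount < 3.5 ? 0 : 1; }
    private int tree2() { return screenOnTime + notificationCount < 50.0 ? 0 : 1; }

    private int[] votes() {
        int[] votes = new int[2];
        votes[tree0()]++;
        votes[tree1()]++;
        votes[tree2()]++;
        return votes;
    }

    // Majority vote over all trees; returns the winning class index.
    public int classify() {
        int[] v = votes();
        return v[1] > v[0] ? 1 : 0;
    }

    // Fraction of trees that voted for the winning class.
    public double certainty() {
        return votes()[classify()] / 3.0;
    }
}
```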

Voila! You have a light-weight, portable, working Random Forest model.

 


How the Phone’s Vibration Alarm Can Help to Save Battery

Not sure how long my hero’s battery will last with GPS on and my phone vibrating every second to indicate if on right track!?!

Concerns like this one have frequently been expressed when I presented the PocketNavigator, a navigation system that guides pedestrians with vibration patterns instead of spoken turn instructions.

To quantify how much battery power is actually lost to constantly repeated vibration pulses, I tested the battery consumption of two different patterns in comparison to a non-vibrating phone.

In brief, in my setup, the vibration cost less than 5% of the battery life. As a comparison: leaving the screen on will drain the phone’s battery in 2-3 hours. Consequently, instead of draining the battery fast, vibration can even help to save battery if it allows users to leave the screen turned off.

Test Configuration

The apparatus created heartbeat-like vibration patterns, i.e. patterns consisting of two pulses followed by a long pause. The apparatus was run three times. Each run used a different pulse length: 30 ms, 60 ms, or 0 ms (no vibration, as baseline).
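For reference, such a heartbeat pattern maps naturally onto the off/on timing arrays used by Android’s Vibrator.vibrate(long[] pattern, int repeat). The sketch below builds such an array and computes its duty cycle, i.e. the fraction of time the motor actually runs; the gap and pause durations are assumptions for illustration, not the exact timings of my apparatus.

```java
// Sketch: heartbeat-like vibration pattern as used with Android's
// Vibrator.vibrate(long[] pattern, int repeat). The array alternates
// off/on durations in milliseconds, starting with an initial "off"
// delay. Gap and pause lengths below are assumptions for illustration.
public class HeartbeatPattern {

    // Builds {pause, pulse, gapBetweenPulses, pulse}; passing this to
    // vibrate(pattern, 0) on Android would repeat it as a heartbeat.
    public static long[] build(long pulseMs, long gapMs, long pauseMs) {
        return new long[] { pauseMs, pulseMs, gapMs, pulseMs };
    }

    // Fraction of one pattern cycle during which the motor vibrates.
    public static double dutyCycle(long[] pattern) {
        long on = 0, total = 0;
        for (int i = 0; i < pattern.length; i++) {
            total += pattern[i];
            if (i % 2 == 1) on += pattern[i]; // odd indices are "on"
        }
        return (double) on / total;
    }
}
```

With a 30 ms pulse, a 200 ms gap, and a 1000 ms pause, the motor runs less than 5% of the time, which is consistent with the small battery cost measured below.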

Results

The following diagrams show the remaining battery as it changed while the app was running.



The battery lasted

  • 24.71 hours for 0 ms pulse length (baseline)
  • 23.48 hours for 30 ms pulse lengths = 95.0 % of the baseline, and
  • 23.48 hours for 60 ms pulse lengths = 95.0 % of the baseline.

Because the battery was never charged to exactly 100% when the trials commenced, we also calculated trend lines using linear approximation (see diagrams; Excel’s linear approximation was used), which changes the predictions to

  • 24.18 hours for 0 ms pulse length (baseline)
  • 23.28 hours for 30 ms pulse lengths = 96.3 % of the baseline, and
  • 23.60 hours for 60 ms pulse lengths = 97.6 % of the baseline.

Discussion

Battery life was around 24 hours in all cases, sufficient for normal use. Constant vibration reduced battery life by 2.4-5.0%. Increasing the vibration length from 30 to 60 ms per pulse had no effect on battery life. As a comparison, when the screen is constantly kept on, the battery drains within about 2-3 hours.

Hence, the additional battery loss is justifiable considering that we gain the ability to continuously communicate information to the user. When using short vibration pulses, designers do not even have to consider the effect of the pulses’ lengths on battery life.

Take Away

This data shows that the impact of having the phone constantly emit vibration pulses is low.

As a means of continuously conveying information, e.g. in a navigation system that has to convey information all the time, vibration has a much lower impact on battery life than the screen, which empties the battery in a few hours. On a Nexus One, vibration can continuously convey information for almost 24 hours, enough for the typical smartphone user who is used to charging the phone every night.


App Store Studies: How to Ask for Consent?

App Stores, such as Apple’s App Store or Google Play, provide researchers the opportunity to conduct experiments with a large number of participants. If we collect data during these experiments, it may be necessary to ask for the users’ consent beforehand. The way we ask for the users’ consent can be crucial, because nowadays people are very sensitive to data collection and potential privacy violations.

We conducted a study suggesting that a simple “Yes-No” form is the best choice for researchers.

Tested Consent Forms

We (most of the credit goes to Niels Henze for conducting the study) tested four different approaches to asking users for consent to collect non-personal data. All consent forms contained the following text:
By playing this game you participate in a study that investigates touch performance on mobile phones. While you play, we measure how you touch, but we DON’T transmit personalized data. By playing you actively contribute to my PhD thesis.

Checkbox Unchecked

The first tested consent form showed an unchecked checkbox next to a text reading “Send anonymous feedback”. In order to participate in the study, a user had to tick the checkbox and then press the “Okay” button.

Checkbox Checked

The second consent form was the same as the previous one, except that the checkbox was pre-checked. To participate in the study, the user merely had to click the “Okay” button.

Yes/No Button

The third consent form features two buttons reading “Okay” and “Nope”. To participate, the user has to click “Okay”. Clicking “Nope” ends the app immediately.

Okay Button

The fourth consent form only contains a single “Okay” button. By clicking “Okay”, the user participates in the study. To avoid participation, the user has to exit the app through the phone’s “home” or “return” buttons.

Study

These consent forms were integrated into a game called Poke the Rabbit! by Niels Henze. At first start, the application randomly selected one of the four consent forms. If the user agreed to participate in the study, the app transmitted the type of the consent form to a server.

Results

We collected data from 3,934 installations. The diagram below shows the conversion rate. The conversion rate was estimated by dividing the number of participants per form by 983.5, a quarter of all installations (we assume perfect randomisation, i.e. each consent form was presented in 25% of the installations).

Conversion rate per consent form. The x-axis shows the type of consent form. The y-axis shows the estimated fraction of users that participated in the study after download.
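The estimate itself is just one division; the sketch below reproduces it. The per-form participant count used here is a hypothetical value, since the actual counts are shown only in the diagram.

```java
// Sketch of the conversion-rate estimate: assuming each of the four
// consent forms was shown in a quarter of the installations, the
// expected number of presentations per form is total / 4
// (3934 / 4 = 983.5 in our study). The participant counts used in the
// test are hypothetical.
public class ConversionRate {

    public static double estimate(int participantsForForm, int totalInstallations, int numForms) {
        double presentationsPerForm = (double) totalInstallations / numForms;
        return participantsForForm / presentationsPerForm;
    }
}
```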

We were surprised by the high conversion rates. Only the consent form with the unchecked checkbox yielded a markedly lower conversion rate.

Conclusions – use Yes/No Buttons

We suggest using the consent form with Yes/No buttons. The consent form with the pre-checked checkbox may be considered unethical, since the user may not have read the text and was not forced to consider unchecking the checkbox. The consent form with only an “Okay” button may be considered unethical, too, because users may not be aware that they can avoid data collection by using the phone’s hardware buttons. The “Yes-No” form, in contrast, forces users to think about their choice and offers a clear way to avoid participating in the study.

Yes-No buttons are ethically safe and resulted in the second highest conversion rate.

Would you suggest otherwise? We are not at all saying that this is definitive! Please share your opinion (comments or mail)!

More Information

This work has been published in the position paper App Stores – How to Ask Users for their Consent? The paper was presented at the ETHICS, LOGS and VIDEOTAPE Ethics in Large Scale Trials & User Generated Content Workshop. It took place at CHI ’11: ACM CHI Conference on Human Factors in Computing Systems, which was held in May 2011 in Vancouver, Canada.

Acknowledgements

The authors are grateful to the European Commission, which has co-funded the IP HaptiMap (FP7-ICT-224675) and the NoE INTERMEDIA (FP6-IST-038419).

 


Will they use it? Will it be useful? In-Situ Evaluation of a Tactile Car Finder.

When we develop new technology, we want to know if it will have the potential to be successful in the real world.

This is not trivial! People may sincerely enjoy our technology when we expose them to it in a lab or field study. They may perform better at the tasks we ask them to fulfill as part of the study than with previous solutions.

However, once they leave our lab, they may never again encounter the need to use it in their daily routines. Or the utility we prove in our studies may not be evident in the contexts where the technology is actually deployed.

In our work, we made use of Google Play to answer these questions in a novel way. We wanted to study whether haptic feedback can make people less distracted from the environment when they use their phone for pedestrian navigation in daily life. We developed a car finder application for Android phones with a simple haptic interface: whenever the user points in the direction of the car, the phone vibrates.
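The core of such an interface is a simple bearing comparison: vibrate whenever the angle between the direction the phone points at and the bearing to the car falls below some tolerance. A minimal sketch (the 15-degree tolerance is an assumption, not necessarily what our app used):

```java
// Sketch of the "point at the car" check: vibrate when the phone's
// compass heading is within a tolerance of the bearing to the parked
// car. Angles are in degrees; the 15-degree tolerance is an assumption.
public class CarFinderSketch {

    // Smallest absolute difference between two angles, in [0, 180].
    public static double angularDiff(double headingDeg, double bearingDeg) {
        double diff = Math.abs(headingDeg - bearingDeg) % 360.0;
        return diff > 180.0 ? 360.0 - diff : diff;
    }

    public static boolean shouldVibrate(double headingDeg, double bearingDeg) {
        return angularDiff(headingDeg, bearingDeg) <= 15.0;
    }
}
```

On the phone, the heading would come from the compass sensor and the bearing from the stored GPS position of the car.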

The data provides evidence that about half of the users use the vibration feedback. When vibration feedback is enabled, users turn off the display and stow away the device more often. They also look at the display less. Hence, when using vibration feedback, users are less distracted.

Our work shows that app distribution channels, such as Google Play or Apple’s App Store, can serve as a cheap way of bringing a user study into people’s daily lives instead of bringing people into the lab. Compared to the results of a lab study, these findings have higher external validity, i.e. we can be more confident that our findings generalize to a large number of users and usage situations.

This work will be presented at NordiCHI ’12: The 7th Nordic Conference on Human-Computer Interaction, which takes place in Copenhagen in October 2012. The paper is available here (pdf).

Thanks to http://www.v3.co.uk/ for summarising this work so nicely in their article Buzzing app helps smartphone dudes locate their car.


Ambient Visualisation of Social Network Activity

Social networks, such as Facebook or Twitter, are an important factor in the communication between individuals of the so-called digital-natives generation. More and more often, they are used to exchange short bursts of thoughts or comments as a means of staying connected with each other.

The instant communication enabled by those social networks has, however, created a form of peer-group pressure to constantly check for updates. For example, has an informal get-together been announced, or has somebody requested to become your friend? This emerging pressure can make people return to the computer more often than they want. This is why we find colleagues regularly looking for new status updates in meetings, and why, at parties, we see more and more friends who cannot resist checking their Facebook accounts.

One solution is to notify users when something important happens. Mobile phones, as personal, ubiquitous, and always-connected devices, lend themselves as a platform, since they are carried with the user most of the time. Thus, it is no surprise that our phones now not only notify about incoming short messages, but do the same for Twitter @mentions, Facebook messages, or friend requests. However, these notifications may go unnoticed, too. Thus, instead of checking our Facebook and Twitter accounts, we keep looking at our mobile phones for notification items.

With AmbiTweet, we investigate conveying social network status through ambient displays. We use a live wallpaper showing beautiful blue water. The wallpaper can be connected to a Twitter account and visualizes the level of activity in an ambient way. The higher the level of activity on the Twitter account, the brighter and busier the water becomes. This can be perceived even in the periphery of the field of vision. Thus, users can become aware of important activity without needing to focus their eyes on the phone.
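The mapping from account activity to the water’s appearance can be as simple as a clamped linear function; the sketch below illustrates the idea (the rate bounds are invented, not AmbiTweet’s actual parameters):

```java
// Sketch: map the level of Twitter activity (e.g. tweets per hour) to
// a brightness value in [0, 1] that could drive the water rendering.
// The minimum and maximum rates are invented for illustration.
public class AmbientMapping {

    public static double brightness(double tweetsPerHour, double minRate, double maxRate) {
        if (tweetsPerHour <= minRate) return 0.0; // calm, dark water
        if (tweetsPerHour >= maxRate) return 1.0; // bright, busy water
        return (tweetsPerHour - minRate) / (maxRate - minRate);
    }
}
```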

Ambient displays, in general, have the advantage that they convey information in a continuous but unobtrusive way. They exploit the fact that the brain can process information pre-attentively, i.e. without generating apparent cognitive load. AmbiTweet therefore allows concentrating on a primary task while remaining aware of the level of activity on a social network account.

Share this:
Share

OpenAL4Android

In the comments to my post on OpenAL on Android, some visitors asked me to provide high-level examples of how to use OpenAL.

In this post you will find a lightweight Android Java library, consisting of only four classes, that allows you to create complex 3D sound scenes. An additional Hello World example building upon this library shows how to create a scene with three different sound sources.

OpenAL4Android Library

Download the library from http://pielot.org/wp-content/uploads/2011/11/OpenAL4Android.zip. The library contains the following classes:

  • OpenAlBridge: this class contains all the native methods used to communicate with the native OpenAL implementation.
  • SoundEnv: this class manages the sound scene; it allows, for example, registering new sounds and moving the virtual listener around.
  • Buffer: a buffer is one sound file loaded into the RAM of the device. A buffer itself cannot be played.
  • Source: a source turns a buffer into an actually sounding object. The source allows changing the parameters of the sound, such as its position in 3D space, the playback volume, or the pitch. Each source has one buffer, but one buffer can be used by several sources.

If you turn it into an Android library, you can use it in several projects at the same time. Go to Properties -> Android and make sure that the check box “Is Library” is checked.

The following Hello World example shows how to use the library.

HelloOpenAL4Android

HelloOpenAL4Android is a demo application illustrating how to use OpenAL4Android. The complete code + Eclipse project files can be downloaded here.

Create a new Android project, targeting at least Android 1.6. Visit the project properties and add OpenAL4Android as a library project (project -> android -> library). The following code shows how to create a complex 3D scene.

To run without errors, the program requires two sound files named “lake.wav” and “park.wav” in the project’s assets folder. If the folder does not exist, just create it on the top level of the project, next to src, res, … .

package org.pielot.helloopenal;

import org.pielot.openal.Buffer;
import org.pielot.openal.SoundEnv;
import org.pielot.openal.Source;

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

/**
 * This tutorial shows how to use the OpenAL4Android library. It creates a small
 * scene with two lakes (water) and one park (bird chanting).
 * @author Martin Pielot
 */
public class HelloOpenAL4AndroidActivity extends Activity {

    private final static String    TAG    = "HelloOpenAL4Android";

    private SoundEnv            env;

    private Source                lake1;
    private Source                lake2;
    private Source                park1;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Log.i(TAG, "onCreate()");

        this.setContentView(R.layout.main);

        try {
            /* First we obtain the instance of the sound environment. */
            this.env = SoundEnv.getInstance(this);

            /*
             * Now we load the sounds into the memory that we want to play
             * later. Each sound has to be buffered once only. To add new sound
             * copy them into the assets folder of the Android project.
             * Currently only mono .wav files are supported.
             */
            Buffer lake = env.addBuffer("lake");
            Buffer park = env.addBuffer("park");

            /*
             * To actually play a sound and place it somewhere in the sound
             * environment, we have to create sources. Each source has its own
             * parameters, such as 3D position or pitch. Several sources can
             * share a single buffer.
             */
            this.lake1 = env.addSource(lake);
            this.lake2 = env.addSource(lake);
            this.park1 = env.addSource(park);

            // Now we spread the sounds throughout the sound room.
            this.lake1.setPosition(0, 0, -10);
            this.lake2.setPosition(-6, 0, 4);
            this.park1.setPosition(6, 0, -12);

            // and change the pitch of the second lake.
            this.lake2.setPitch(1.1f);

            /*
             * These sounds are perceived from the perspective of a virtual
             * listener. Initially the position of this listener is 0,0,0. The
             * position and the orientation of the virtual listener can be
             * adjusted via the SoundEnv class.
             */
            this.env.setListenerOrientation(20);
        } catch (Exception e) {
            Log.e(TAG, "could not initialise OpenAL4Android", e);
        }
    }

    @Override
    public void onResume() {
        super.onResume();
        Log.i(TAG, "onResume()");

        /*
         * Start playing all sources. 'true' as parameter specifies that the
         * sounds shall be played as a loop.
         */
        this.lake1.play(true);
        this.lake2.play(true);
        this.park1.play(true);
    }

    @Override
    public void onPause() {
        super.onPause();
        Log.i(TAG, "onPause()");

        // Stop all sounds
        this.lake1.stop();
        this.lake2.stop();
        this.park1.stop();

    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        Log.i(TAG, "onDestroy()");

        // Be nice with the system and release all resources
        this.env.stopAllSources();
        this.env.release();
    }

    @Override
    public void onLowMemory() {
        this.env.onLowMemory();
    }
}

A Tactile Compass for Eyes-free Pedestrian Navigation

The idea came up when I was heading back to the hotel from a conference dinner at MobileHCI 2008 in Amsterdam. I had no sense of orientation. The only guide I had was a map on my Nokia phone. Not being familiar with Amsterdam, the route led me right through the busy areas of the city center.

The day before, a cyclist had stolen a mobile phone right out of the hand of another conference attendee. Knowing that made me quite afraid something similar could happen to me too. Without the phone I would have been completely lost.

Here, serendipity hit. Since my research group was already working on tactile displays for navigation and orientation, I wondered whether it was possible to create a navigation system for mobile phones that guided by vibration only, so it could be left in the pocket.

Back at OFFIS, we quickly tested a few prototypes, including a hot/cold metaphor and a compass metaphor. The compass metaphor prevailed. The design was to encode the direction the user should be heading (forward, left, right, backwards) in different vibration patterns. Our test participants liked that design most. Later, we tested the vibration compass design in a forest and found that it can replace navigation with a map.
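At its core, the compass metaphor quantizes the angle between the user’s heading and the route direction into a handful of directions, each with its own vibration pattern. A sketch of that quantization (the 45-degree sector boundaries are an assumption, not necessarily our exact design):

```java
// Sketch of the tactile compass: quantize the relative bearing to the
// next waypoint into four directions, each of which would be rendered
// as its own vibration pattern. The 45-degree sector boundaries are an
// assumption for illustration.
public class TactileCompassSketch {

    public enum Direction { FORWARD, RIGHT, BACKWARD, LEFT }

    // relativeBearingDeg: angle between the user's heading and the
    // route direction; any value is normalized to [0, 360).
    public static Direction quantize(double relativeBearingDeg) {
        double a = ((relativeBearingDeg % 360.0) + 360.0) % 360.0;
        if (a < 45.0 || a >= 315.0) return Direction.FORWARD;
        if (a < 135.0) return Direction.RIGHT;
        if (a < 225.0) return Direction.BACKWARD;
        return Direction.LEFT;
    }
}
```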

The development and the studies were presented at the 13th IFIP TC13 Conference on Human-Computer Interaction (INTERACT) in Lisbon, Portugal, in September 2011. The article is available here.

If you own an Android phone, you can try this vibration compass by downloading our PocketNavigator navigation application for free from the Android Market.

 


Android User Hate Parade

Stupid! – Garbage – Hate it!!!!!!!

… these are some of the comments one receives when publishing apps in the Android Market for free. This can be really frustrating for developers. Here are some of the worst examples I have encountered in my life as an Android developer:

The “it does not work -> 1 star” faction

Examples:

  • “poor! doesnt work on my G1 keeps force closing! Uninstalled!”
  • “Keeps forcing close fix it and it is horrible. Hate it!!!!!!!!! One star and I am uninstalling this stupid thing”

Guys, I can understand your anger, but please just send the developer an email, describe the error and the circumstances under which it occurred in as much detail as possible, and give the developer a chance to learn about it and fix it!

The “my obscure feature is not present -> 1 star” faction

Examples

  • “No “dk” map [Ingen dk kort]”
  • “Not in Russian”

Yes, you may want the app in Russian, Urdu, and Aramaic, but as a hobby developer one has limited resources. Please respect that many apps are developed on a very tight or even non-existent budget. Why not just be glad to have so many apps for free?

The “I hate your app -> 1 star” faction

Examples

  • “Stupid!”
  • “Garbage”
  • “Slow and no instructions”

These guys would probably rate the app with 0 stars if that were possible. The comment “no instructions” was even wrong at the time of writing, since a manual was accessible from the main menu.

*update*

The “Weird -> 1 star” faction (proposed by Niels).

Example

  • “Has swear word on end button if you don’t do well. Not something I want my kid playing.”
  • “My mom is crying. Uninstalled”
  • “Stupid and offincive to my pet rabbit bayleigh”.

Conclusions

If you download, rate, and comment on apps, please be nice to the developers. Many of us are people like you and me, spending our free time working on our apps. Please accept that you cannot get perfect solutions in no time. Rather, help us improve our apps and appreciate that we deliver them for free!

Share this:
Share