Use 18pt Font Size for Readers with Dyslexia

Dyslexia: a common reading disability

Dyslexia is a neurological reading disability that impairs a person's ability to read and write. In the media, we often hear about dyslexia as a gift in the context of famous people, such as Steve Jobs. In reality, however, depending on the language, a significant share of the population is affected by dyslexia, e.g. 10 to 17.5% in the US. For most of them, dyslexia is not a gift: the most common way of identifying dyslexia in children is poor performance in our reading-centric education system.

Can the right presentation parameters improve reading?

The good news is that reading increasingly takes place on electronic displays, where we can adapt the presentation of text to make it easier to read for people with dyslexia. Therefore, led by Luz, we (Luz Rello, Martin Pielot, Mari-Carmen Marcos, and Roberto Carlini) set out to find optimal values for the two simplest presentation parameters: font size and line spacing.

Eye-tracking study exploring font size and line spacing

The study was conducted by Luz Rello at the Universitat Pompeu Fabra (UPF) in Barcelona, Spain. 28 people (15 female, 13 male), aged 14-38, with a confirmed diagnosis of dyslexia took part in the study. They were asked to read Wikipedia articles that were presented with different font sizes and line spacings. The study used eye tracking and questionnaires to measure readability and comprehension.

The experiment compared:

  • Font sizes: 10, 12, 14, 18, 22, and 26 pt.
  • Line spacings: 0.8, 1.0, 1.4, and 1.8.

Findings

To make a long story short, line spacing did not have much of an impact. Only the 1.8 line spacing led to worse comprehension compared to the 0.8 line spacing.

Regarding font size, however, the results were surprising. When we look for optimal font sizes on the web, we either find soft recommendations, such as "allow to adjust", or values around 12pt / 14pt.

Our results, however, provide strong evidence that for people with dyslexia, the readability and comprehensibility of a text increase with font size, with an optimum around 18pt.

In particular, we found that:

  • Objective readability, as indicated by the fixation duration recorded with the eye tracker, steadily increased up to 18pt.
  • The subjective readability was highest for 18pt and 22pt.
  • The subjective comprehensibility was highest for the three largest fonts: 18pt, 22pt, 26pt.

Conclusions: use 18pt font size for your website

Hence, when designing a website that should be friendly to readers with dyslexia (remember, 10-17.5% of the population!), use large fonts. Since there was no further improvement at larger font sizes, 18pt hits the sweet spot.

Complete report

The complete scientific report can be found below.

Luz Rello, Martin Pielot, Mari-Carmen Marcos and Roberto Carlini.
Size Matters (Spacing not): 18 Points for a Dyslexic-friendly Wikipedia.
W4A ’13: 10th International Cross-Disciplinary Conference on Web Accessibility, 2013.

This work was published at the 10th International Cross-Disciplinary Conference on Web Accessibility, held 13-15th May 2013 in Rio de Janeiro, Brazil.

 


How the Phone’s Vibration Alarm can help to Save Battery

Not sure how long my hero’s battery will last with GPS on and my phone vibrating every second to indicate if on right track!?!

– This and similar concerns have frequently been expressed when I presented the PocketNavigator – a navigation system guiding pedestrians by vibration patterns instead of spoken turning instructions.

To quantify how much battery power is actually lost to constantly repeated vibration pulses, I tested the battery consumption of two different patterns in comparison to a non-vibrating phone.

In brief, in my setup, the vibration cost less than 5% of the battery life. For comparison: leaving the screen on drains the phone's battery in 2-3 hours. Consequently, instead of draining the battery fast, vibration can even help to save battery if it allows users to leave the screen turned off.

Test Configuration

The apparatus created heartbeat-like vibration patterns, i.e. patterns consisting of two pulses followed by a long pause. The apparatus was run three times. Each run used a different pulse length, i.e. 30 ms, 60 ms, and 0 ms (no vibration as baseline).
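To illustrate what the apparatus does, here is a minimal sketch of such a heartbeat-like pattern using Android's Vibrator API. Only the pulse lengths (0, 30, and 60 ms) come from the test setup; the 150 ms gap between the two pulses and the 1,000 ms pause are assumptions made for this example.

```java
import android.content.Context;
import android.os.Vibrator;

public class HeartbeatVibration {

    /** Repeats two short pulses followed by a long pause until stop() is called. */
    public static void start(Context context, long pulseMs) {
        Vibrator vibrator = (Vibrator) context.getSystemService(Context.VIBRATOR_SERVICE);
        // Pattern format: {initial delay, vibrate, pause, vibrate, pause, ...} in milliseconds.
        // The 150 ms gap and the 1000 ms pause are illustrative; pulseMs was 30 or 60 ms in the test.
        long[] pattern = {0, pulseMs, 150, pulseMs, 1000};
        vibrator.vibrate(pattern, 0); // repeat the pattern from index 0
    }

    public static void stop(Context context) {
        ((Vibrator) context.getSystemService(Context.VIBRATOR_SERVICE)).cancel();
    }
}
```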

Results

The following diagrams show the remaining battery as it changed while the app was running.



The battery lasted

  • 24.71 hours for 0 ms pulse length (baseline)
  • 23.48 hours for 30 ms pulse lengths = 95.0 % of the baseline, and
  • 23.48 hours for 60 ms pulse lengths = 95.0 % of the baseline.

Using linear approximation to account for the fact that the battery was never charged to 100% when the trials commenced, we also calculated trend lines (see the diagrams; Excel's linear approximation), which change the predictions to the following (a sketch of the extrapolation follows the list):

  • 24.18 hours for 0 ms pulse length (baseline)
  • 23.28 hours for 30 ms pulse lengths = 96.3 % of the baseline, and
  • 23.60 hours for 60 ms pulse lengths = 97.6 % of the baseline.
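The extrapolation itself is just an ordinary least-squares line fit over the recorded (time, battery level) samples, with the battery lifetime read off where the fitted line crosses 0%. The sketch below is an assumed re-implementation of that trend line, not the original analysis spreadsheet.

```java
/** Least-squares trend line over battery samples; predicts when the level reaches 0 %. */
public class BatteryTrend {

    /** hours[i] is the elapsed time of sample i, level[i] the remaining battery in percent. */
    public static double predictedLifetimeHours(double[] hours, double[] level) {
        int n = hours.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += hours[i];
            sy += level[i];
            sxx += hours[i] * hours[i];
            sxy += hours[i] * level[i];
        }
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx); // percent per hour, negative
        double intercept = (sy - slope * sx) / n;                 // extrapolated level at t = 0
        return -intercept / slope;                                // hour at which the line hits 0 %
    }
}
```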

Discussion

Battery life in all cases was around 24 hours, sufficient for normal use. Constant vibration reduced battery life by 2.4-5.0%. Increasing the vibration length from 30 to 60 ms per pulse had no effect on battery life. For comparison, when the screen is constantly kept on, the battery drains within about 2-3 hours.

Hence, the additional battery loss is justifiable when considering that we gain the ability to continuously communicate information to the user. When using short vibration pulses, designers do not even have to consider the effect of the pulses' lengths on battery life.

Take Away

This data shows that the impact of having the phone emit vibration pulses constantly is not very high.

This means that, as a means of constantly conveying information, e.g. in a navigation system that is supposed to convey information all the time, vibration has a much lower impact on battery life than the screen, which empties the battery within a few hours. On a Nexus One, vibration can convey information continuously for almost 24 hours, enough for the typical smartphone user who is used to charging the phone every night.


NordiCHI 2012

NordiCHI is the biennial Nordic Conference on Human-Computer Interaction. Though a Scandinavian country traditionally hosts it, the conference attracts designers and researchers from all over the world. This year, 2012, it took place at the IT University of Copenhagen, Denmark.

NordiCHI Venue | IT University of Copenhagen (picture courtesy of Heiko Müller)

My colleague Heiko and I found this year's NordiCHI to be an excellent forum for exchanging ideas in a super-friendly environment. We received plenty of valuable feedback on our ongoing research on using ambient light to remind office workers about upcoming tasks. In addition, there were plenty of interesting talks in up to four parallel sessions.

Highlights

Edward Cutrell et al. investigated the question of "how bad is good enough?" with respect to the quality of mobile videos. This work addresses the problem of mobile video consumption in areas where data connections are highly expensive. They therefore explored which level of quality is still acceptable to low-income mobile-phone users in urban India. The results provide evidence that these users will accept a significant loss of quality in order to save money.

Interesting insights were that in these areas of the world, people need to "count" their bytes, which is hardly supported by today's phones and applications. Also, Ed Cutrell suggested that the acceptance of low-quality videos depends on the user's expectations. This may be relevant in developed countries, too, e.g. when people try to go online at mass events.
Anne Oeldorf-Hirsch, Jonathan Donner, Edward Cutrell : How Bad is Good Enough? Exploring Mobile Video Quality Trade-offs for Bandwidth-Constrained Consumers.

Charlotte Magnusson et al. presented Context Cards, a novel, light-weight way of raising developers' awareness of the different contexts in which mobile applications can be used. Each card shows a different setting, such as a young mother with a kid in one hand and the phone in the other while pushing a baby buggy. They distributed the Context Cards at the Mobile World Congress, the "world's premier mobile industry event", and received very positive feedback from attendees. Context Cards are free to use and can be accessed in printable form from here.
Charlotte Magnusson, Andreas Larsson, Anders Warell, Håkan Eftring, Per-Olof Hedvall : Bringing the mobile context into industrial design and development.

Thomas Visser, in his talk "I Heard You Were on Facebook", explored the creation of awareness systems, i.e. systems that provide a subtle sense of what is going on in one's social network. They developed an awareness system that allows recording short sound bites from daily life and sharing them via Facebook. On the basis of a study with three groups of four persons each, they conclude that sharing sound bites increases the perceived social awareness of the group members.
Stefan Veen, Thomas Visser, and David V. Keyson : “I Heard You Were on Facebook” – Linking Awareness Systems to Online Social Networking.

Ole Sejer Iversen et al. presented insights from participatory design with mentally disabled users, conducted for a local art museum. They pointed out that we cannot simply bring our own set of values to the table; rather, values emerge and require mediation in participatory design with diverse user groups.
Ole Sejer Iversen, Tuck W. Leong : Values-led Participatory Design – Mediating the Emergence of Values.

 


App Store Studies : How to Ask for Consent?

App Stores, such as Apple’s App Store or Google Play, provide researchers the opportunity to conduct experiments with a large number of participants. If we collect data during these experiments, it may be necessary to ask for the users’ consent beforehand. The way we ask for the users’ consent can be crucial, because nowadays people are very sensitive to data collection and potential privacy violations.

We conducted a study suggesting that a simple “Yes-No” form is the best choice for researchers.

Tested Consent Forms

We (most of the credit goes to Niels Henze for conducting the study) tested four different approaches to asking for consent to collect non-personal data. All consent forms contain the following text:
By playing this game you participate in a study that investigates the touch performance on mobile phones. While you play we measure how you touch but we DON'T transmit personalized data. By playing you actively contribute to my PhD thesis.

Checkbox Unchecked

The first tested consent form showed an unchecked checkbox next to a text reading "Send anonymous feedback". In order to participate in the study, a user had to tick the checkbox and then press the "Okay" button.

Checkbox Checked

The second consent form is the same as the previous one, except that the checkbox is pre-checked. To participate in the study, the user merely has to click the "Okay" button.

Yes/No Button

The third consent form provides two buttons reading "Okay" and "Nope". To participate, the user has to click "Okay". Clicking "Nope" ends the app immediately.

Okay Button

The fourth consent form only contains a single "Okay" button. By clicking "Okay", the user participates in the study. To avoid participation, the user has to end the app through the phone's "home" or "return" buttons.

Study

These consent forms were integrated into a game called Poke the Rabbit! by Niels Henze. At first start, the application randomly selected one of the four consent forms. If the user agreed to participate in the study, the app transmitted the type of consent form to a server.

Results

We collected data from 3,934 installations. The diagram below shows the conversion rate. The conversion rate was estimated by dividing the number of participants per form by 983.5, i.e. 3,934 / 4 (we assume perfect randomisation, i.e. each consent form was presented in 25% of the installations).

Conversion rate per consent form. The x-axis shows the type of consent form. The y-axis shows the estimated fraction of users that participated in the study after download.

We were surprised by the high conversion rates. Only the consent form with the unchecked checkbox yielded a notably low conversion rate.

Conclusions – use Yes/No Buttons

We suggest using the consent form with Yes/No buttons. The consent form with the pre-checked checkbox may be considered unethical, since the user may not have read the text and was not forced to consider unchecking the checkbox. The consent form with only an "Okay" button may be considered unethical, too, because users may not be aware that they can avoid data collection by using the phone's hardware buttons. The "Yes/No" form, in contrast, forces users to think about their choice and offers a clear way to avoid participating in the study.

Yes-No buttons are ethically safe and resulted in the second highest conversion rate.
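For illustration, here is a minimal sketch of such a Yes/No consent form built with a standard Android dialog; the class name, the shortened message, and the onConsent callback are made up for this example and are not the code of the original app.

```java
import android.app.Activity;
import android.app.AlertDialog;
import android.content.DialogInterface;

public class ConsentDialog {

    /** Shows a Yes/No consent form; logging starts only after the user taps "Okay". */
    public static void show(final Activity activity, final Runnable onConsent) {
        new AlertDialog.Builder(activity)
                .setMessage("By playing this game you participate in a study that investigates "
                        + "touch performance on mobile phones. We DON'T transmit personalized data.")
                .setCancelable(false)
                .setPositiveButton("Okay", new DialogInterface.OnClickListener() {
                    public void onClick(DialogInterface dialog, int which) {
                        onConsent.run(); // explicit consent given, start collecting data
                    }
                })
                .setNegativeButton("Nope", new DialogInterface.OnClickListener() {
                    public void onClick(DialogInterface dialog, int which) {
                        activity.finish(); // "Nope" ends the app immediately
                    }
                })
                .show();
    }
}
```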

Would you suggest otherwise? We are not at all saying that this is definitive! Please share your opinion (comments or mail)!

More Information

This work has been published in the position paper App Stores – How to Ask Users for their Consent? The paper was presented at the ETHICS, LOGS and VIDEOTAPE Ethics in Large Scale Trials & User Generated Content Workshop. It took place at CHI ’11: ACM CHI Conference on Human Factors in Computing Systems, which was held in May 2011 in Vancouver, Canada.

Acknowledgements

The authors are grateful to the European Commission, which has co-funded the IP HaptiMap (FP7-ICT-224675) and the NoE INTERMEDIA (FP6-IST-038419).

 


Will they use it? Will it be useful? In-Situ Evaluation of a Tactile Car Finder.

When we develop new technology, we want to know if it will have the potential to be successful in the real world.

This is not trivial! People may sincerely enjoy our technology when we expose them to it in a lab or field study. They may perform better than with previous solutions at the tasks we ask them to complete as part of the study.

However, once they leave our lab, they may never again encounter the need to use it in their daily routines. Or the utility we prove in our studies may not be evident in the contexts where the technology is actually deployed.

In our work, we made use of Google Play to answer these questions in a novel way. We wanted to study whether haptic feedback can make people less distracted from their environment when they use their phone for pedestrian navigation in daily life. We developed a car finder application for Android phones with a simple haptic interface: whenever the user points in the direction of the car, the phone vibrates.
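The interface logic boils down to comparing the phone's compass heading with the bearing from the current position to the parked car. The sketch below illustrates this idea; the 15° tolerance and the 50 ms pulse are assumptions for the example, not the values used in the released app.

```java
import android.location.Location;
import android.os.Vibrator;

public class CarFinderFeedback {

    private static final float TOLERANCE_DEGREES = 15f; // assumed "pointing at the car" window

    /** Call whenever a new compass azimuth (in degrees, 0 = north) is available. */
    public static void onAzimuthChanged(float azimuth, Location phone, Location car,
                                        Vibrator vibrator) {
        float bearingToCar = phone.bearingTo(car);            // bearing to the car, -180..180
        float diff = Math.abs(normalizeDegrees(azimuth - bearingToCar));
        if (diff < TOLERANCE_DEGREES) {
            vibrator.vibrate(50);                             // user points towards the car
        }
    }

    private static float normalizeDegrees(float degrees) {
        while (degrees > 180f) degrees -= 360f;
        while (degrees < -180f) degrees += 360f;
        return degrees;
    }
}
```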

The data provides evidence that about half of the users use the vibration feedback. When vibration feedback is enabled, users turn off the display and stow away the device more often. They also look less at the display. Hence, when using vibration feedback, users are less distracted.

Our work shows that app distribution channels, such as Google Play or the iOS Store, can serve as a cheap way of bringing a user study into the daily life of people instead of bringing people into the lab. Compared to the results of a lab study, these findings have high external validity, i.e. we can be sure that our findings can be generalized to a large number of users and usage situations.

This work will be presented at NordiCHI ’12: The 7th Nordic Conference on Human-Computer Interaction, which takes place in Copenhagen in October 2012. The paper is available here (pdf).

Thanks to http://www.v3.co.uk/ for summarising this work so nicely in their article Buzzing app helps smartphone dudes locate their car.


Tacticycle: Supporting Exploratory Bicycle Trips

Navigation systems have become a common tool for most of us. They conveniently guide us from A to B along the fastest or shortest route. Thanks to these devices, we no longer fear getting lost when traveling through unfamiliar terrain.

However, what if you are a cyclist whose goal is an excursion rather than reaching a certain destination, and all you want is to stay oriented and possibly learn about interesting spots nearby? In that case, the use of a navigation system becomes more challenging. One has to look up the addresses of interesting points and enter them as (intermediate) destinations. Sometimes the navigation system might not even know all the small paths, so we end up checking the map frequently, which is dangerous when done on the move.

The Tacticycle is a research prototype of a navigation system that is specifically targeted at tourists on bicycle trips. Relying on a minimal set of navigation cues, it helps riders stay oriented while supporting spontaneous navigation and exploration at the same time. It offers three novel features:

  1. First, it displays all POIs around the user explicitly on a map. A double-tap quickly selects a POI as travel destination. Thus, no searching for addresses is required.
  2. Second, the system relies on a tactile user interface, i.e. it provides navigation support via vibration. Thus, the rider does not have to look at the display while riding.
  3. Third, the Tacticycle does not deliver turn-by-turn instructions. Instead, the vibration feedback just indicates the direction of the selected POI “as the crow flies”. This allows the travelers to find their own route.
The direction "as the crow flies" of the selected POI is encoded in the relative vibration of the two actuators in the handle bars. In this picture, the POI is about 20° to the right, so the vibration in the right handle bar is a little stronger.
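A minimal sketch of how such a relative encoding could be computed is shown below; the linear weighting over the frontal ±90° is an assumption made for this example, as the prototype's exact mapping is not described here.

```java
public class TacticycleMapping {

    /**
     * angleToPoi: direction of the selected POI relative to the riding direction,
     * in degrees (negative = left, positive = right).
     * Returns vibration intensities in 0..1 for the {left, right} handle-bar actuator.
     */
    public static float[] actuatorIntensities(float angleToPoi) {
        float clamped = Math.max(-90f, Math.min(90f, angleToPoi)); // restrict to the frontal half-plane
        float balance = (clamped + 90f) / 180f;                    // 0 = fully left, 1 = fully right
        // Example: a POI about 20 degrees to the right yields roughly 0.39 left vs. 0.61 right,
        // i.e. the right handle bar vibrates a little stronger.
        return new float[] {1f - balance, balance};
    }
}
```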

In cooperation with a bike rental service, we rented the Tacticycle prototype to tourists, who took it on their actual excursions. The results show that they always felt oriented and were encouraged to playfully explore the island, providing a rich, yet relaxed travel experience. On the basis of these findings, we argue that providing only minimal navigation cues can very well support exploratory trips.

This work has been presented at MobileHCI ’12, ACM SIGCHI’s International Conference on Human-Computer Interaction with Mobile Devices and Services, which took place in September 2012 in San Francisco. The paper is available here (pdf).


PocketMenu: Non Visual Menus for Touch Screen Devices

It’s a chilly Sunday afternoon and you are out for a walk, listening to music from your MP3 player, and you want to select the next song. How do you do that?

A few years ago, you probably didn't even take the MP3 player out of your pocket. You just used your fingers to feel for the shape of the "next" button and pressed it.

Today, we don't own dedicated MP3 players anymore but use our smartphones. And since most input on modern smartphones is done via large touch screens, you need to take the phone out of your pocket, unlock the screen, and spot the button visually to press it.

The PocketMenu addresses this problem by providing haptic and auditory feedback to allow in-pocket input. It combines, in a novel way, clever ideas from previous research on touch screen interaction for people with sensory and motor impairments.

All menu items are laid out along the screen bezel. The bezel therefore serves as a haptic guide for the finger. Additional speech and vibration output allows the user to identify the items and obtain more information. Watch the video to see exactly how the interaction works.
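As a rough sketch of the bezel idea, the snippet below maps a touch position along one screen edge to a menu item; the item labels, the bezel width, and the left-edge placement are assumptions for this example rather than details of the actual PocketMenu implementation.

```java
public class BezelMenu {

    private final String[] items = {"Play/Pause", "Next", "Previous", "Volume up", "Volume down"};

    /** Returns the index of the item under a touch at (x, y), or -1 if the finger left the bezel. */
    public int itemAt(float x, float y, int screenWidthPx, int screenHeightPx) {
        float bezelWidthPx = 0.15f * screenWidthPx;   // assume the items sit along the left edge
        if (x > bezelWidthPx) {
            return -1;                                // finger moved away from the haptic guide
        }
        float itemHeightPx = screenHeightPx / (float) items.length;
        int index = (int) (y / itemHeightPx);
        return Math.min(index, items.length - 1);     // announce items[index] via speech/vibration
    }
}
```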

In a field experiment, we compared the PocketMenu concept with the state-of-the-art VoiceOver interface that ships with the iPhone. The participants had to control an MP3 player while walking down a road with the device in their pocket. The PocketMenu outperformed VoiceOver in terms of completion time, selection errors, and subjective usability.

This work will be presented at MobileHCI ’12, ACM SIGCHI’s International Conference on Human-Computer Interaction with Mobile Devices and Services, which takes place in September 2012 in San Francisco. The paper is available here (pdf).


In Situ Field Studies using the Android Market

Recently, researchers have started to investigate using app distribution channels, such as Apple's App Store or Google's Android Market, to bring the research to the users instead of bringing the users into the lab.

My colleague Niels, for example, used this approach to study how people interact with the touch screens of mobile phones. But instead of collecting touch events in a boring, repetitive task, he developed a game where users have to burst bubbles by touching them. And instead of conducting this study in the sterile environment of a lab, he published the game on the Android Market for free, so it was installed and used by hundreds of thousands of users. While these users were enjoying the game, they generated millions of touch events. And unlike in traditional lab studies, this data was collected from all over the world and from many different contexts of use. The results of this study were reported at MobileHCI '11 and were received enthusiastically.

Since my work is on pedestrian navigation systems and conveying navigation instructions via vibration feedback, lab studies are oftentimes not sufficient. Instead, we have to go out and conduct our experiments in the field, e.g. by having people navigate through a busy city center.

So, if we can bring lab studies “into the wild” can we do the same with field experiments?

My colleague Benjamin and I started addressing this question in 2010. We developed a consumer-grade pedestrian navigation application called PocketNavigator and released it on the Android Market for free. Then, we developed algorithms that allow us to infer specific usage patterns we were interested in. For example, these algorithms allow us to infer whether users follow the given navigation instructions or not. We also developed a system that allows the PocketNavigator to collect these usage patterns along with relevant context parameters and send them to one of our servers. On a side note, the collected data does not contain personally identifiable information, so it does not allow us to identify, locate, or contact users.
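As an example of what such an inference can look like, the sketch below flags a trip as "following the route" when most GPS fixes stay close to the suggested path. The 25 m threshold and the 80% cut-off are assumptions made for this illustration, not the parameters of the algorithms used in the study.

```java
import android.location.Location;
import java.util.List;

public class RouteFollowingHeuristic {

    /** A trip counts as "following the route" if at least 80 % of the fixes are within 25 m of it. */
    public static boolean followsRoute(List<Location> track, List<Location> route) {
        int onRoute = 0;
        for (Location fix : track) {
            if (distanceToRoute(fix, route) < 25f) {
                onRoute++;
            }
        }
        return onRoute >= 0.8 * track.size();
    }

    /** Crude distance: nearest route waypoint (a real implementation would use segment distance). */
    private static float distanceToRoute(Location fix, List<Location> route) {
        float min = Float.MAX_VALUE;
        for (Location waypoint : route) {
            min = Math.min(min, fix.distanceTo(waypoint));
        }
        return min;
    }
}
```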

With this setup we conducted a quasi-experiment. Since my research is about the effect of vibration feedback on navigation performance and the user’s level of distraction, we compared the usage patterns of situations where the vibration feedback was turned on versus turned off. Our results show that the vibration feedback was used in 29.9 % of the trips with no effect on the navigation performance. However, we found evidence that users interacted less with the touch screen, looked less often at the display, and turned off the screen more often. Hence, we believe that users were less distracted.

The full report of this work was accepted to the prestigious ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '12) and was presented in May 2012 in Austin, Texas.

The paper can be downloaded from here.


PocketNavigator Video on Youtube

Working day and night, turning researchers into actors, and mastering the use of iMovie, we present our video on the PocketNavigator family.

The video shows three demonstrators:

  • The Tacticycle is a bicycle navigation system for tourists, which uses vibrating handle bars to provide directions.
  • The PocketNavigator is an OSM-based pedestrian navigation system that uses vibration patterns to tell the user which direction to go.
  • The Virtual Observer is a research tool that allows collecting usage data (GPS tracks, images, experience sampling questions) and playing it back in order to study in-situ usage of the above (and other) applications.

The work presented here is part of the EU-funded HaptiMap research project (FP7-ICT-224675), which aims at making maps and location-based services more accessible. The PocketNavigator is one of the project's outcomes, developed at the Intelligent User Interfaces Group of the OFFIS Institute for Information Technology, Oldenburg, Germany.

The PocketNavigator is available for free on the Android Market: https://market.android.com/details?id=org.haptimap.offis.pocketnavigator
