The evaluation of mobile, multimodal, accessible applications with traditional observation techniques (e.g. video recording) is challenging, mainly because of three related problems: the context problem (users behave differently in different contexts), the invisibility of multimodal feedback (e.g. vibration patterns cannot be seen on video), and the context-sensitivity of these applications (e.g. whether a user is currently scanning or not).


As part of HaptiMap we investigated these problems further and are developing our own observation framework: the Virtual Observer. This framework is based on the well-known logging observation method. Logging describes the process of recording arbitrary information, which is subsequently saved to a file. There is no ultimate, general-purpose logging framework; instead, the to-be-logged events usually have to be identified and implemented by the developers themselves. The Virtual Observer is designed for location-based applications and therefore comes with a pre-defined selection of logged events (e.g. latitude, longitude, speed, etc.). We also investigated more specific events that are of interest to designers and experimenters of location-based applications, e.g. disorientation, navigation errors, and how the device is held in the user's hand.
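
To illustrate the general idea of such an event logger, here is a minimal sketch in Java. It is not the actual Virtual Observer API; the class name ContextLogger, the method names, and the timestamp;event;value record format are hypothetical.

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    // Minimal, hypothetical sketch of a logging observation framework;
    // the real Virtual Observer API may differ.
    public class ContextLogger {

        private final PrintWriter out;

        public ContextLogger(String logFile) throws IOException {
            // Append timestamped records to a plain-text log file.
            this.out = new PrintWriter(new FileWriter(logFile, true), true);
        }

        // Record one named event together with its value,
        // e.g. log("latitude", "53.38") or log("speed", "1.4").
        public void log(String event, String value) {
            out.printf("%d;%s;%s%n", System.currentTimeMillis(), event, value);
        }

        public void close() {
            out.close();
        }
    }

A location-based application would call such a logger from its sensor callbacks, e.g. logging latitude and longitude whenever a new GPS fix arrives.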

 

For developers' and experimenters' convenience we are also developing a tool, named ContextPlayer, which displays the recorded context information and lets the experimenter interact with it in a convenient way. Besides our logged values, the ContextPlayer offers support for displaying images from the Microsoft SenseCam, a tiny camera worn around the neck.
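
As a rough idea of what such a playback tool does, the following sketch steps through records in the hypothetical timestamp;event;value format used in the logger sketch above. The real ContextPlayer is a graphical tool, and its file format may differ.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // Hypothetical replay of logged context records; the real
    // ContextPlayer is a GUI tool and may use a different format.
    public class LogReplay {

        public static void main(String[] args) throws IOException {
            try (BufferedReader in = new BufferedReader(new FileReader("observer.log"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    // Each record: timestamp;event;value (see logger sketch).
                    String[] parts = line.split(";", 3);
                    long timestamp = Long.parseLong(parts[0]);
                    System.out.printf("%tT  %s = %s%n", timestamp, parts[1], parts[2]);
                }
            }
        }
    }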

 

[Screenshot of the Virtual Observer]

With respect to the three identified problems in observing mobile applications, the Virtual Observer and the ContextPlayer help as follows. To address the context problem, the Virtual Observer provides a wealth of additional context information. Thus, the experimenter's subjective perception of the context can be replaced by objective sensor measurements, which makes the separation of cause and effect more accurate and reliable. The Virtual Observer also addresses the invisibility of multimodal feedback and the context-sensitivity of the application. It records, for example, whether tactile feedback is enabled and whether the device is held parallel to the ground (designed exemplarily for the PocketNavigator, one of the HaptiMap demonstrators). Thus, it can be accurately determined whether the scanning mode or the pocket mode is active. In addition, unfiltered compass values are logged. Together with the exact user location and the next waypoint, the exact multimodal feedback (e.g. the tactile feedback) displayed to the user in this situation can be reconstructed, as sketched below.
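
The following sketch illustrates how the logged values could be combined to reconstruct the direction cue after the fact. It is not part of the toolkit: the bearing computation is the standard great-circle initial bearing formula, and the class and method names are hypothetical.

    // Hypothetical post-hoc reconstruction of the direction cue from
    // logged values; not part of the HaptiMap toolkit itself.
    public class FeedbackReconstruction {

        // Standard initial great-circle bearing from the logged user
        // position to the next waypoint, in degrees (0 = north).
        static double bearingTo(double lat1, double lon1,
                                double lat2, double lon2) {
            double phi1 = Math.toRadians(lat1);
            double phi2 = Math.toRadians(lat2);
            double dLon = Math.toRadians(lon2 - lon1);
            double y = Math.sin(dLon) * Math.cos(phi2);
            double x = Math.cos(phi1) * Math.sin(phi2)
                     - Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLon);
            return (Math.toDegrees(Math.atan2(y, x)) + 360.0) % 360.0;
        }

        // Signed deviation in (-180, 180] between the logged (unfiltered)
        // compass heading and the bearing to the next waypoint. From this
        // deviation the tactile pattern shown to the user can be derived.
        static double deviation(double compassHeading, double bearing) {
            double d = (bearing - compassHeading) % 360.0;
            if (d > 180.0)   d -= 360.0;
            if (d <= -180.0) d += 360.0;
            return d;
        }

        public static void main(String[] args) {
            // Example with made-up logged values.
            double bearing = bearingTo(53.07, 8.80, 53.08, 8.81);
            double dev = deviation(45.0, bearing);
            System.out.printf("deviation to waypoint: %.1f degrees%n", dev);
        }
    }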


A native C implementation of the Virtual Observer logging framework is part of the HaptiMap toolkit mantle. An Android wrapper can be found in the toolkit crust. A more sophisticated Java/Android-only version of the Virtual Observer will be added to the toolkit soon. At that point, the ContextPlayer will also be released as a side project to the actual toolkit.


Link: https://haptimap.ee.qub.ac.uk/svn/haptimap/trunk/crust/android/HaptiMapToolkit/src/org/haptimap/hcimodules/VirtualObserver.java