Periodic Reporting for period 1 - AI-CU (Automated Improvement of Continuous User interfaces)
Reporting period: 2018-06-01 to 2019-11-30
As mobile devices become smaller and smaller, traditional symbolic user interfaces, based on typed commands or discrete selections in menus, become more difficult to use. One solution is to use interfaces based on actions that are extended in time, such as gestures, swipes or patterns of taps (we will refer to these as continuous user interfaces).
There are several problems with using such continuous actions. An important one is how to design an optimal set of actions for a given user interface technology (a touch screen, motion sensors or a microphone, for instance). A set of actions is optimal if the actions are easy to learn, easy to perform, and unlikely to be confused with one another. A related problem is that different sets of actions may be optimal at different stages of use. For a beginning user, it is most important that actions are easy to remember. For an expert user, it is more important that actions can be performed quickly and comfortably. It is difficult to design actions that satisfy both requirements, as actions that are easy to remember tend to be more elaborate than the kind of actions an expert user would prefer (an analogue is the difference between the handwriting of a primary school child and that of an adult).
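To make these criteria concrete, the sketch below shows one way the three requirements could be combined into a single numeric score for a candidate action set. This is a minimal illustration and not the project's actual code: the functions learn_cost, perform_cost and distance, the weights, and the toy string-based actions are all hypothetical.

```python
from itertools import combinations

def action_set_score(actions, learn_cost, perform_cost, distance,
                     w_learn=1.0, w_perform=1.0, w_confuse=1.0):
    """Score a candidate action set: cheaper to learn and to perform,
    and harder to confuse, means a higher score."""
    learn = sum(learn_cost(a) for a in actions) / len(actions)
    perform = sum(perform_cost(a) for a in actions) / len(actions)
    # Confusability: the smaller the minimum pairwise distance,
    # the more likely two actions are to be mistaken for each other.
    min_dist = min(distance(a, b) for a, b in combinations(actions, 2))
    return w_confuse * min_dist - w_learn * learn - w_perform * perform

# Toy usage, with strings standing in for actions (all hypothetical):
acts = ["swipe-up", "swipe-down", "tap-tap"]
print(action_set_score(
    acts,
    learn_cost=len,                       # longer action = harder to learn
    perform_cost=lambda a: a.count("-"),  # dummy effort measure
    distance=lambda a, b: sum(x != y for x, y in zip(a, b)),
))
```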
The technique for automating the design of continuous user interfaces is called iterated learning. In this paradigm, users are trained with a set of user interface actions and are then asked to reproduce these from memory. Their reproductions are used as training examples for the next user. This process is repeated a number of times (typically 8–10). Professor de Boer has found that, if care is taken to ensure that the distinctions between actions are maintained, this procedure results in user interface actions that users learn more easily.
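The core loop of this paradigm is simple enough to sketch directly. The sketch below assumes two hypothetical helpers that are not the project's API: train_and_recall(), which trains one user on an action set and returns their reproductions from memory, and enforce_distinctions(), which adjusts or filters reproductions so that actions remain distinguishable.

```python
def iterated_learning(initial_actions, train_and_recall,
                      enforce_distinctions, generations=10):
    """Pass an action set through a chain of users: each user's
    recalled reproductions become the next user's training set."""
    actions = initial_actions
    for _ in range(generations):  # typically 8-10 iterations
        # One "generation": train a user on the current actions and
        # collect what they reproduce from memory.
        reproductions = train_and_recall(actions)
        # Keep actions distinguishable, so that the set does not
        # collapse onto a few easily remembered but confusable forms.
        actions = enforce_distinctions(reproductions)
    return actions
```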
The AI-CU project has produced a proof-of-concept software tool that carries out this automated design process. At the moment it applies to swipes on touch screens, but it can readily be adapted to any continuous input device.
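The device independence comes from representing each action as a time series: a swipe is simply a sequence of (x, y) touch points, and any continuous input device that yields a time series (accelerometer traces, audio features and so on) can plug into the same pipeline. The sketch below illustrates this with a standard dynamic-time-warping distance between two toy swipes; the choice of DTW is our illustrative assumption, not necessarily what the AI-CU tool uses.

```python
import math

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two point sequences,
    tolerant of swipes performed at different speeds."""
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])   # Euclidean point distance
            cost[i][j] = d + min(cost[i - 1][j],      # skip a point in a
                                 cost[i][j - 1],      # skip a point in b
                                 cost[i - 1][j - 1])  # match the points
    return cost[n][m]

# Two toy swipes: a horizontal stroke and a slight diagonal.
swipe_a = [(0, 0), (1, 0), (2, 0), (3, 0)]
swipe_b = [(0, 0), (1, 1), (2, 1), (3, 2)]
print(dtw_distance(swipe_a, swipe_b))
```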
The project has also investigated the market for these techniques and found that they are most promising for interactive multimedia, automotive applications, robotics, devices for extreme environments, and devices for people with disabilities, such as intelligent wheelchairs.