Revolutionising the User Experience with a Sense of Agency

Posted 19 December 2017

What’s the single greatest problem with user interface design? If you’ve ever pressed an elevator button repeatedly or opened two copies of the same programme simultaneously (because nothing seemed to happen the first time you tried), then you’ve experienced it. This problem is the sense of agency – or in those examples, the lack of it. Failure to provide a strong sense of agency in user interfaces is such a common flaw that it’s easily overlooked, but it undoubtedly costs billions of dollars in wasted time, user complaints and product returns – and may even cause damage and injury.

What is a sense of agency? It’s simply the feeling that our actions produce an obvious effect. More broadly speaking, it’s a feeling of being in control of our environment – or at least, clearly having an influence on it.

The sense of agency is usually present in the natural world. Maybe you’re reading this article on a phone or a tablet, or scrolling through it with a mouse, or even reading it on paper. In any case, your hand is gripping a physical object – a phone, a mouse, or a magazine. At this moment, your hand feels that object; you can intuitively sense its weight and texture, and easily move it around. Compare that familiar sensation to fumbling to pick up an object in a video game. Even with the best VR headsets, the sense of agency in a virtual world is far weaker, because objects seem unreal, with no presence or texture. Virtual objects are no more solid than air.

As technology becomes more sophisticated, progressing from physical interfaces to virtual interfaces, the sense of agency becomes weaker. It’s only then that we appreciate how important it is. For example, an automatic door that unexpectedly doesn’t open as we approach, or an elevator button or power button on a phone or computer that, frustratingly, doesn’t produce an immediate response, so we press it harder or press it repeatedly.

Humans evolved in the natural world of physical objects, and our minds and senses are fine-tuned by evolution to handle that environment. This is the essential source of numerous serious problems with virtual environments – a category that includes everything from full VR setups to desktop and mobile user interfaces. Those virtual environments and user interfaces superficially mimic the appearance of the natural world, without offering the full range of sensory feedback and response that our brains have evolved to handle, over millions of years. The result is unintuitive user interfaces and dissatisfied users.

Of course, no user will ever complain to a vendor that “your product lacks a sense of agency”. They’ll just feel frustrated, say the product is confusing and hard to use, or maybe complain that it seems slow and doesn’t react reliably to their inputs. Then they’ll give it a one-star review online – or even RMA the product because they pressed an unresponsive button so hard that it broke.

Why a sense of agency is important

Years of scientific research and testing have emphasised the critical importance of the sense of agency in providing a good user experience. Users “strongly desire the sense that they are in charge of the system and that the system responds to their actions… for every operator action, there should be some system feedback,” wrote Ben Shneiderman, who pioneered basic guidelines for human–computer interaction in his influential ‘Eight Golden Rules of Interface Design’ at the University of Maryland in the 1980s.

In some situations, simply providing a stronger sense of agency could even be enough to make users ignore other problems with a user interface. That’s because actions that provide a strong sense of agency also feel more satisfying. For example, research has shown that when we feel we’re in control, we’re less aware of delays in response. Researchers discovered this phenomenon as they sought methods of quantifying the sense of agency. In 2002, researcher Patrick Haggard and others showed that our perception of time appears to speed up when we feel we are in control of our actions. The shift in perceived time can be significant, shortening or lengthening perceived intervals by 30 to 50 percent in some cases. For the user, time actually seems to be compressed as their sense of agency becomes stronger.

This research was formalised as the intentional binding effect – a quantitative measure of how strongly we feel our intentional actions are connected to their outcomes. Intentional binding strength is based on two components, action binding and outcome binding, each of which measures how much our internal perception of time differs from objective reality. The specialised scientific use of the word ‘binding’ may seem confusing – it refers to how closely, in terms of milliseconds, our perception of an event matches reality.

Action binding is the perceived time difference for our action, and outcome binding is a similar measurement for the outcome of that action. For example, when we take the action of pressing a button, we generally perceive that the action occurred 10–30 ms later than objective measurement shows – this shift in time is referred to as the action binding.
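To make the arithmetic concrete, the two binding components and the resulting time compression can be sketched as a simple calculation. The timestamps below are made-up illustrative values, not data from any particular study:

```python
def intentional_binding(action_actual, action_perceived,
                        outcome_actual, outcome_perceived):
    """Compute binding components in milliseconds.

    Action binding: how much later the action is perceived than it
    actually occurred (a shift toward the outcome).
    Outcome binding: how much earlier the outcome is perceived than it
    actually occurred (a shift toward the action).
    """
    action_binding = action_perceived - action_actual
    outcome_binding = outcome_actual - outcome_perceived
    actual_interval = outcome_actual - action_actual
    perceived_interval = outcome_perceived - action_perceived
    compression = actual_interval - perceived_interval  # total binding
    return action_binding, outcome_binding, compression

# Illustrative numbers: button press at t=0 ms, tone at t=250 ms.
# The press is felt 20 ms late, the tone 60 ms early.
ab, ob, total = intentional_binding(0, 20, 250, 190)
print(ab, ob, total)  # 20 60 80
```

The two shifts pull action and outcome together in subjective time, so the total compression is simply their sum – here the 250 ms interval feels like 170 ms.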

For designers and engineers, a useful and practical lesson to learn from this phenomenon is that people will tend to feel that any user interface that makes them feel in control is better than one where their sense of agency is less clear. In fact, most users will simply feel that the product itself is better and more responsive.

Many designers will already be familiar with the concept that users will feel bored and impatient waiting ten seconds for a programme to do something, but will hardly notice the passage of time if they spend that same ten seconds clicking on some buttons to achieve the same result. This paradox appears to be related to the user’s sense of agency and the time compression effect discovered by researchers.

To achieve the goal of an apparently more responsive interface and a stronger sense of agency, we could try to speed up a product’s interface by conventional means – but that requires faster, more powerful hardware and carefully re-designed software. Alternatively, we could produce the same benefits simply by reducing the perceived interval between action and outcome: by strengthening the user’s sense of agency.

Sense of touch: the missing element needed to create a strong sense of agency

Building on this earlier work on intentional binding and time perception, more recent research has shown how our sense of agency becomes stronger or weaker depending on which of our senses are used to make an action. For example, researchers found that users have a weaker sense of agency using a voice interface compared to a physical (haptic) input. The input is only one side of the interaction: the feedback provided by the interface can also alter the intentional binding phenomenon. A recent study found that the perceived action–outcome time interval was shorter when haptic feedback was provided rather than visual feedback. Importantly, the findings show that haptic feedback provides a stronger sense of agency than visual feedback. This gives designers and developers an opportunity to harness the sense of touch in order to achieve a stronger sense of agency.

Perhaps this should not be a surprise: the sense of touch is one of the most primal senses – one of the first to evolve. The skin is by far the largest sensory organ of the human body, with an average surface area of almost 2 square metres – approximately the area of a double bed – and it is packed with about five million specialised touch-sensitive receptors. Those receptors are more densely packed in the areas where they are needed most; for example, there are approximately 3,000 touch receptors in each fingertip.

So although commercial products have focussed on sound and vision for decades, in fact touch-based haptic feedback can create a much stronger sense of agency.

“We found that gesture-based systems exhibited significant[ly] higher intentional binding when the input action was accompanied by haptic or auditory outcomes, compared with visual outcome… Our work also suggests that audio and haptic feedback in gesture-based touchless interactions are a good candidate for increasing users’ sense of being in control and feeling of interacting with a more responsive system,” wrote University of Sussex researcher Patricia Ivette Cornelio Martinez and her colleagues in their 2017 paper, “Agency in Mid-air Interfaces”, from which the charts in this section were derived.

In addition, research suggests that adding haptic feedback may provide users with more information, but without distracting them. “It is known that audio and haptic feedback releases the visual channel to focus on additional tasks; this interplay is suitable for driving scenarios, for example,” Martinez writes.

Early researchers swiftly noticed that just adding the sense of touch or force feedback had a remarkable effect on the user’s sense of agency. The change made simulated environments seem far more real, and made users feel they were a part of those environments. “There’s a funny thing that happens when you provide feedback to the user,” Dr. Mark Cutkosky of Stanford University told the New York Times. “Suddenly, it no longer feels like, ‘I’m here with my glove and I’m controlling that robot hand over there’. Suddenly you feel like, ‘that’s my hand over there, it’s an extension of me’.”

Multiple research projects have shown how a stronger sense of agency, created by harnessing the sense of touch as an additional sensory input channel, makes user interfaces easier to use and reduces errors. Improvements are seen even when the haptic technology is a primitive buzzer-type device, and performance improves as haptic resolution is increased. “The addition of tactile feedback to the touchscreen significantly improved finger-based text entry, bringing it close to the performance of a real physical keyboard. A second experiment showed that higher specification tactile actuators could improve performance even further,” researchers at the University of Glasgow wrote in 2009, in a test of touchscreen keyboard interfaces enhanced with simple haptic components.

In fact, practical experience with real-world products has shown that haptic feedback can offer numerous benefits. These include: reinforcing the sense of agency and the sense of reality; allowing faster and more accurate control when there is no real physical contact, or when physical contact provides limited useful sensory input (such as on a smooth touch screen); and strengthening feedback to the user when other sensory feedback is limited, weak or confusing (such as in a noisy environment, or a situation in which vision is obscured or the user’s attention is focussed elsewhere).

How haptic technologies are being used on the market today

For most of us today, our first experience of artificial haptic feedback is in mobile phones and tablets. The ubiquitous tiny vibration motor built into every mobile phone provides a surprisingly wide variety of sensations by simply spinning a small off-centre weight at different speeds. The basic vibration effect serves to provide notifications, alerts, confirmations and warnings. Even with this low-cost component, one of the key unique advantages of haptic technology is demonstrated by the fact that a notification can be sensed in an environment – such as a mobile phone in a pocket on a noisy train – in which the user would probably not notice visual or audible signals.
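The link between rotation speed and felt intensity follows from basic mechanics: an eccentric rotating mass produces a peak force of m·r·ω², so doubling the motor speed roughly quadruples the vibration strength. A minimal sketch, using illustrative coin-motor values rather than any specific device’s specifications:

```python
import math

def erm_force(mass_kg, offset_m, rpm):
    """Peak vibration force (newtons) from an eccentric rotating mass:
    F = m * r * omega^2, where omega is angular speed in rad/s."""
    omega = rpm * 2 * math.pi / 60  # convert rev/min to rad/s
    return mass_kg * offset_m * omega ** 2

# Illustrative values: 0.5 g eccentric mass, 1 mm offset from the axis
for rpm in (6000, 9000, 12000):
    print(rpm, round(erm_force(0.0005, 0.001, rpm), 3))
```

Because force grows with the square of speed, a single cheap motor can span a usefully wide range of sensations just by varying its drive voltage or duty cycle.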

Game controllers, virtual reality and augmented reality systems also incorporate similar vibration components. The more sophisticated entertainment setups typically use more than one haptic device, in order to provide a wider variety of effects. In a game, haptic feedback could be used to strengthen the sense of agency when a player presses a button or punches an enemy, and to increase the sense of immersion by simulating the unpleasant buzz of an electric shock or the jolt of a vehicle hitting a bump.

VR and AR obviously have numerous applications in simulation and training, not just in entertainment. Adding the sense of touch can make simulations more realistic and more effective, and thereby shorten costly training sessions. Indeed, the ability of haptics to add an additional, intuitive, sensory channel to sound and vision makes it extremely valuable in professional applications.

The aerospace industry has decades of experience and knowledge of haptics and similar technologies – probably more than any other industry – for both simulator training and operational flight. In large aircraft, force feedback is a standard feature of key flight controls. The stick shaker stall warning is a good example of the multiple benefits of haptics. This device physically shakes flight controls to simulate the violent shaking that occurs in smaller and older aircraft when dangerously low speeds make the aircraft’s flight surfaces unstable and prone to stalling. The stick shaker acts as an additional sensory input channel that is certain to get the pilot’s attention, and the feature also leverages the pilot’s existing training and experience – because almost all pilots learn to fly in much smaller aircraft – so pilots can instinctively understand the warning and react quickly, without needing extensive re-training.

The medical profession also uses these technologies extensively in training and in the real world. Diagnostic and surgical procedures depend very much on the sense of touch – especially modern, minimally-invasive keyhole surgery and microsurgery. Surgeons may have very restricted vision of a sensitive area of the body, or no vision at all, so must often feel their way to detect a problem, or to find the correct position to make an incision or implant a medical device. From the simplest injection to the most complex brain or heart surgery, the sense of touch plays a key role. Haptic feedback allows more precise surgery, shortening the time required, increasing the chances of success, and reducing the affected area to help the patient recover faster.

Rather like surgery, operators of industrial equipment often work in constricted environments with limited vision. Once again, haptic feedback can help them work faster and more precisely, and avoid accidents.

What’s wrong with haptics?

While the widespread applications of haptics listed above make it seem as if haptics is a solved problem and haptic devices are already everywhere, there are actually serious drawbacks to current haptics technology that make it expensive, difficult and, in many cases, impossible to implement. In fact, haptic feedback is clearly far less common than visual and audio output, and where it is available, it is often primitive, using components such as simple vibration motors.

Haptics’ most obvious problem is that it requires physical contact. To put it simply: to perceive the feeling of touch, you obviously need to be touching something: a video game controller, or a joystick that controls the position of a microscopic scalpel or tool, and so on. Generally speaking, the real-world interface will not closely match the virtual object in shape or texture. A video game controller is not a gun or a ball, a joystick is not a scalpel or a spanner. In addition, haptic devices present a wide variety of other problems: they may be bulky and unwieldy, and they may block the user’s view of the display or environment they are trying to control.

Next generation contactless haptic technology arrives

This serious problem, that you require a real physical object if you want to simulate the feeling of a physical object, seems insurmountable. But, in fact, technology already exists that can provide invisible, contactless haptic feedback over a range of up to a metre. In addition to working at a distance, the technology can produce a much wider range of effects than the vibration motors currently used. It can create the sensation of moving objects, like a ball held in the fingers, a virtual pushbutton or dial, flowing water, a strong breeze, or even bubbles bursting against the skin. Even materials with a variety of textures can be simulated.

This contactless haptic feedback technology, developed by scientists and engineers at UK-based Ultrahaptics, is currently available in development kit form.

“Magical” was the word that one writer for the US based Institute of Electrical and Electronics Engineers’ magazine, IEEE Spectrum, used to describe the Ultrahaptics technology. Another IEEE writer described the experience: “I pushed my hand forward, palm down… My real-life hand interrupted a virtual stream of bubbles that I felt gently popping against my palm. Wow… [the technology] allows the user to feel distinct textures, not just the general sense that something is virtually there.”

“The demo involved a ball and a block… I wasn’t expecting much. But, to my surprise, not only did I feel the ball in my hand when I picked it up, but when I turned my hand palm up to cradle the ball, I still felt it sitting in my palm,” the writer said, in an enthusiastic description of an early VR demo.

The technology behind contactless tactile feedback

Ultrahaptics’ solution is based on sophisticated software and mathematical processing, combined with mostly simple, low-cost hardware: an array of dozens of ultrasound transducers, arranged in a flat, square or rectangular grid. The transducers do not move; each simply emits an inaudible ultrasound signal.

By varying the output of each transducer, a sensation of physical force can be created at points in the air where the ultrasonic sound waves intersect. The waves interfere with each other and – precisely at those points, a few millimetres wide – they produce a stronger and much lower frequency signal that stimulates the skin’s tactile mechanoreceptors just as a physical object would.

Sophisticated, proprietary signal processing algorithms can move and precisely position these areas of virtual force by rapidly adjusting the output of each transducer, even creating the feel of a textured surface. Ultrahaptics refers to these points of acoustic radiation force as “control points”. A variety of processing hardware, based on an ARM-based CPU and an FPGA, is used to run the system and perform complex frequency calculations in real time. Development kits connect to a host PC via USB.
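The general phased-array principle (as distinct from Ultrahaptics’ proprietary algorithms) can be sketched simply: each transducer is driven with a phase offset that compensates for its distance to the desired focal point, so that all the waves arrive there in phase and reinforce each other. The grid geometry and 40 kHz frequency below are illustrative assumptions:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C
FREQ = 40_000.0         # Hz, a common ultrasonic transducer frequency

def phase_delays(transducer_positions, focal_point):
    """Phase offset (radians) for each transducer so that all waves
    arrive in phase at the focal point, creating a focus of acoustic
    radiation force there."""
    wavelength = SPEED_OF_SOUND / FREQ  # about 8.6 mm at 40 kHz
    fx, fy, fz = focal_point
    delays = []
    for (x, y, z) in transducer_positions:
        dist = math.sqrt((fx - x) ** 2 + (fy - y) ** 2 + (fz - z) ** 2)
        # Phase that compensates the travel distance, wrapped to 2*pi
        delays.append((2 * math.pi * dist / wavelength) % (2 * math.pi))
    return delays

# A 2x2 corner of a flat grid with 10 mm pitch, focused 200 mm above
# the centre of the four transducers
grid = [(0, 0, 0), (0.01, 0, 0), (0, 0.01, 0), (0.01, 0.01, 0)]
print(phase_delays(grid, (0.005, 0.005, 0.2)))
```

Moving a control point is then just a matter of recomputing these offsets for a new focal position, fast enough that the point appears to glide smoothly through the air.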

In current development kits, a simple, low-cost, Leap Motion, infra-red camera-based sensor tracks the position of the user’s hand in three dimensions. This motion sensor allows the ultrasound array to keep the force steadily focussed on the surface of the skin, so users can actually pick up and move virtual objects. The sensors can also recognise gestures as part of a control or user interface system. Ultrahaptics’ versatile technology platform allows developers to work with other hardware and software, for example by substituting another 3D motion sensor for the Leap Motion device and calibrating its output with a user-editable XML file.
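The tracking-to-focus loop described above can be sketched in outline. This is a hypothetical illustration only: `read_hand_position` is a stand-in stub for a real 3D hand tracker, not the Leap Motion API, and the smoothing step is one simple way to stabilise a jittery sensor reading:

```python
def read_hand_position():
    """Stub standing in for a real hand tracker; returns a fixed
    palm position in metres above the centre of the array."""
    return (0.0, 0.0, 0.25)

def smooth(prev, new, alpha=0.5):
    """Exponential smoothing to damp sensor jitter before
    re-targeting the ultrasound focus."""
    return tuple(alpha * n + (1 - alpha) * p for p, n in zip(prev, new))

focus = (0.0, 0.0, 0.2)          # initial focal point
for _ in range(10):              # a real system would run at sensor rate
    palm = read_hand_position()
    focus = smooth(focus, palm)  # steer the focus toward the palm
print(focus)
```

In a real system the updated focus would feed straight into the phase calculations for the transducer array, keeping the force locked onto the moving hand.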

For developers, the Ultrahaptics SDK API supports C++ and C#, and the company also provides an integration package for the Unity game engine.

The potential applications for touch-free touch feedback

Now that we can invisibly project haptic feedback at a distance to create tactile sensations in free space without any physical connection, a whole world of new applications instantly opens up – plus many more which have yet to be imagined.

The world’s biggest supplier of car components, Germany-based Bosch, has demonstrated a car including an entertainment system with virtual mid-air controls that are easy to feel and adjust without the danger of looking away from the road – made possible by Ultrahaptics’ technology. Jaguar Land Rover, Harman and other companies are working on similar contactless touch-enabled entertainment and control systems for vehicles. As well as the promise of improved safety, another potential benefit of the invisible ultrasound-based technology is that it cannot be easily masked by noise or vehicle movement, unlike current devices that rely on audio or physically-transmitted haptic feedback.

Revitalising their products with contactless virtual touch gives these companies an opportunity to stand out from competitors in a crowded market. For manufacturers and designers, the technology’s novelty value alone can become a selling point, even before its other benefits are taken into account.

The Ultrahaptics technology seems to have great potential in augmented reality, because traditional, physical haptic feedback devices tend to obscure the user’s view of the world – that’s not a problem for ultrasound. Nike and Dell are collaborating with Ultrahaptics on an augmented reality design system. This system enables designers to see, feel and precisely manipulate any virtual object, bringing the sense of touch to virtual design.

Other prototypes show how Ultrahaptics’ technology can be used as a control system for home appliances. For example, Ultrahaptics has demonstrated a contactless kitchen, in which cooking heat is controlled by virtual mid-air controls that can easily be felt and manipulated. The technology could eventually be used to create a warning force field around hot areas, to help prevent children and adults from accidentally touching them.

The new haptic technology promises to create applications that would be difficult or impossible without it. “Mid-air haptic feedback represents a good means for private communication in cases where audio cannot be played, allowing the user to still experience agency,” Patricia Ivette Cornelio Martinez and her colleagues wrote in their 2017 paper, “Agency in Mid-air Interfaces”. For example, in a hospital ward, a doctor could easily adjust medical device settings without audible feedback and without disturbing sleeping patients. In a darkened movie theatre, maybe a user could interact with their phone, even with the screen off and without sound.

The kitchen and hospital applications suggest yet another benefit of the technology: by avoiding contact, users can avoid spreading dirt and germs. A 2014 hygiene study by the University of Toronto revealed that almost two thirds of hospital elevator buttons were colonised by bacteria – in fact they were measurably dirtier than toilet surfaces.

That study didn’t investigate whether the buttons became even dirtier as impatient users pressed them repeatedly (because unclear feedback had reduced their sense of agency), but in any case, a touchless haptic interface could keep buttons, control panels, screens and other surfaces cleaner and safer in hospitals, other working environments, on public transport and in homes – reducing cleaning costs and avoiding the spread of infection.


Contactless haptic tech greatly strengthens sense of agency, enhancing existing applications for haptic technology, increasing user satisfaction and safety, and even making completely new applications and products possible. In very competitive consumer electronics markets, filled with commodity products, companies can use exciting new technology like invisible contactless haptic feedback to make their products stand out.
