14 Sep 2021

The UX choice for AI-human interaction

In September 2021, the CW User Experience SIG hosted a fascinating session exploring the nuances of human reactions to artificial intelligence agents, featuring Iulia Ionescu, a Senior Lecturer and Course Leader at the University of the Arts London, and two members of the BBC R&D team – Tristan Ferne and Libby Miller.

View the recording online

Tristan’s session explored the many ways to explain AI and make it more understandable to users. This is a vitally important task, as AI agents are becoming ever more ubiquitous in our lives, covering fields ranging from personal assistants to insurance pricing. AI-based systems often do not flag to the user when AI is being used, and even if they did, AI is a hard concept to explain; even the programmers may not understand how an AI agent came to a decision. Yet the agents are also fallible, with the potential to incorporate biases and errors from their training data.

Making AI more accessible feels like the right thing to do, but it is unclear how this can be achieved. For example, what would you explain to a general user – the way AI works in general? How a particular AI system works? How a particular system uses AI? And where would you provide this information – in the code, for the developers? In the user interface or the context around a service? Or on general educational channels – the BBC have plenty of these available to them.

The BBC has used them to introduce a “Machine’s Guide to Birdwatching”, which makes a start at explaining how image classification works. It details the mental models the AI agent uses to produce a result, such as analysing colour, size and shape. It explains the level of certainty an AI agent might attach to a result, and highlights the pixels within the image that the agent recognised and based its decision on. The guide also explains the importance of training data in enabling an agent to make accurate decisions. It is a good educational tool for starting to increase the public's general understanding of AI.
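For readers who want to see those two ideas – a certainty score and a map of the pixels behind a decision – in concrete terms, the minimal sketch below shows one common way they are produced. It is not the BBC guide’s implementation; it assumes an off-the-shelf pretrained classifier from torchvision and a local photo named "bird.jpg", both of which are illustrative choices.

```python
# Minimal sketch: a classifier's confidence score plus a simple gradient-based
# saliency map showing which pixels most influenced the prediction.
# Assumes torch/torchvision are installed and "bird.jpg" exists locally.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("bird.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)  # track gradients back to the input pixels

logits = model(image)
probs = F.softmax(logits, dim=1)          # turn raw scores into probabilities
confidence, label = probs.max(dim=1)      # the agent's "level of certainty"
print(f"Predicted class {label.item()} with {confidence.item():.0%} certainty")

# Saliency: the gradient of the winning score with respect to the input
# highlights the pixels the decision was most sensitive to.
logits[0, label.item()].backward()
saliency = image.grad.abs().squeeze(0).max(dim=0).values  # 224x224 heat map
```

The saliency map here is the simplest possible attribution method; the general point it illustrates is that the classifier’s certainty and its pixel-level evidence can both be surfaced to a user, which is exactly what the guide tries to explain in plain language.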

The level of understanding that a user has of artificial intelligence played a role in the research Iulia has been conducting at the University of the Arts London. Her PhD is on the design of anthropomorphic AI agents, looking at human perceptions and conceptualisations of AI. Part of her research was based on the Three-Factor Model of Psychological Anthropomorphism, which theorises that the extent of anthropomorphic thought is influenced by three factors: ‘elicited agent knowledge’ (the amount of prior knowledge held about an object), ‘effectance motivation’ (the incentive to act effectively and seek meaning) and ‘sociality motivation’ (our desire to establish social connections with others). Based on this theory, the expectation would be that the more we understand about AI – the greater our ‘elicited agent knowledge’ – the less likely we are to anthropomorphise an AI system.

However, throughout her research she found that the more users understood how an AI agent worked, the more likely they were to personify it. Later research showed that users were more than capable of stereotyping an agent, assuming greater or lesser levels of knowledge based on the accent used by the AI agent (in this case a modified personal assistant). This leads us to the Media Equation theory – the finding that people tend to treat computers and media as if they were people, and respond to them with social cues. These natural social responses should give AI agent designers clear scripts to pay attention to.

Yet Libby Miller disagrees that making an AI agent more human is something to aspire to. She has experimented at home, as a hobby, with creating different AI systems based on off-the-shelf software, producing effective results in automatic text generation and voice replication. But she is conscious that these simple AIs are little more than “syntactic zombies”, with no understanding of the meaning of what they are responding to. In her view even commercial AIs lack common sense and an understanding of their place in the world, and can be easily confused. Giving these commercial AIs human-like qualities is a sleight of hand to make the systems more impressive – they are being given a degree of sentience and individuality that they don’t deserve.
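The “syntactic zombie” point is easy to make concrete. The toy sketch below is not Libby’s actual setup (her experiments used off-the-shelf software that the talk did not detail); it is a word-level Markov chain, which strings together statistically plausible word sequences from its training text while having no grasp of what any of it means.

```python
# Toy illustration of a "syntactic zombie": a word-level Markov chain that
# generates fluent-looking text purely from word co-occurrence statistics.
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that followed it in the training text."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=20):
    """Walk the chain, picking a random observed successor at each step."""
    word, output = start, [start]
    for _ in range(length):
        successors = chain.get(word)
        if not successors:
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

training_text = "the robin eats seeds the robin sings the wren sings loudly"
chain = build_chain(training_text)
print(generate(chain, "the"))
# e.g. "the robin sings the wren sings loudly": plausible syntax, no meaning
```

The output can look superficially sensible, but the program has no model of robins, wrens or birdsong; it only knows which words tended to follow which. That gap between surface fluency and understanding is the one Libby cautions against papering over with human-like interfaces.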

Libby believes that we need to find better metaphors to explain AI capabilities more honestly, and to use these to inspire imaginative interfaces. If anything, user interfaces to AI need more friction in general, not less. Human-like interfaces are misleading and raise expectations beyond the point at which they can be met.

You can watch the full recording of this thought-provoking session online now. It is available to everyone for two months and will then be visible only to CW members. Find out more about becoming a CW Member here.