AI was used to judge the success of John Lewis’s Elton John 2018 Christmas advert. Illustration: Lynsey Irvine/The Observer

AI can read your emotions. Should it?


Advertisers, tech giants and border forces are using face tracking software to monitor our moods – whether we like it or not

It is early July, almost 30C outside, but Mihkel Jäätma is thinking about Christmas. In a co-working space in Soho, the 39-year-old founder and CEO of Realeyes, an “emotion AI” startup which uses eye-tracking and facial expression to analyse mood, scrolls through a list of 20 festive ads from 2018. He settles on The Boy and the Piano, the offering from John Lewis that tells the life story of Elton John backwards, from megastardom to the gift of a piano from his parents as a child, accompanied by his timeless heartstring-puller Your Song. The ad was well received, but Jäätma is clearly unconvinced.

He hits play, and the ad starts, but this time two lines – one grey (negative reactions), the other red (positive) – are traced across the action. These follow the second-by-second responses of a 200-person sample audience who watched the ad and allowed Realeyes to record them through the camera of their computer or smartphone. Realeyes then used its AI technology to analyse each individual’s facial expression and body language. The company did this for all 20 Christmas ads on Jäätma’s list, watching 4,000 people in total, and rated each commercial for attention, emotion and sentiment before giving it a mark out of 10.
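Realeyes does not disclose how it turns frame-by-frame readings into those two traces and a final mark. The Python sketch below is purely illustrative: simulated panel data stands in for the output of a facial-coding model, and the rescaling used for the score out of 10 is invented.

```python
import numpy as np

# Illustrative only: simulated per-second emotion probabilities for a
# 140-second ad (The Boy and the Piano ran 2min 20sec) across a
# 200-viewer panel. Real values would come from a facial-coding model.
rng = np.random.default_rng(seed=0)
n_viewers, n_seconds = 200, 140
positive = rng.beta(2, 5, size=(n_viewers, n_seconds))  # e.g. happiness
negative = rng.beta(2, 8, size=(n_viewers, n_seconds))  # e.g. disgust, confusion

# The red and grey lines: the panel's average reaction at each second.
red_line = positive.mean(axis=0)
grey_line = negative.mean(axis=0)

# A crude, invented overall mark out of 10: mean net positivity,
# rescaled to the trace's own range.
net = red_line - grey_line
score = 10 * (net.mean() - net.min()) / (net.max() - net.min())
print(f"trace length: {net.size}s, overall mark: {score:.1f}/10")
```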

What is wrong with The Boy and the Piano, then? “So these are the metrics we measure: are you happy? Confused? Sad? Disgusted?” explains Jäätma, as the video plays. “If you look at the grey line, the negative emotions, you see that the UK audience is not that excited about the Elton John parts. The negativity goes up, people are tired about this promotion of the celebrity, they have had enough of this Elton John stuff.” Only when Elton as a child makes an appearance is there a spike of red. “Now when it goes into family and kids and it’s not the celebrity any more,” Jäätma goes on, “that’s where the positivity goes up.”

John Lewis’s Christmas 2018 advert with Elton John. Photograph: John Lewis & Partners/PA

Realeyes assigned the John Lewis commercial a score of 7.8 out of 10, placing it 12th out of the 20 Christmas ads they tested. It was rated fifth out of the retailer’s eight Christmas commercials since 2011. “So we concluded that this ad… the kid was the star,” says Jäätma. “It would have been better without Elton John almost. Or at least shorter.” The ad ran at 2min 20sec – “way too long. You can only put that on TV, no one’s going to watch that online.”

This is clearly a big deal for John Lewis: their Christmas campaign is by far the most important of the year. But what difference does it make to us? Quite a lot, potentially, argues Jäätma. Realeyes was founded in 2007 with the goal of teaching computers how we feel. “There’s obviously this whole big debate about: is AI going to come and kill us or is it going to come and help us?” says Jäätma, an Estonian who started Realeyes with two friends while doing an MBA at the University of Oxford. “Right now, AI is like this big brain that is very intelligent and knows everything, but what we say is that it misses a heart. What we want to build into the AI is emotional understanding and consciousness. So the better we make it, the more likelihood that all this AI stuff will actually be good for humanity and not the other way around.”

Jäätma and his partners – another Estonian, Martin Salo, and an Azerbaijani, Elnar Hajiyev, who work on the technical side – decided initially to focus on advertising because, they felt, it would be the quickest way to grow and make money. And, over a decade, they have built a database they believe is the richest and most sophisticated in the world: 420m frames of people watching video on their devices (all have given their permission to be captured). Each image is then labelled by seven different people according to the American psychologist Paul Ekman’s list of basic emotions: anger, disgust, fear, happiness, sadness and surprise. Realeyes has gone to some lengths to keep this training process culturally sensitive, having Europeans tag Europeans and Asians tag Asians, but for Jäätma the beauty of Ekman’s theory, which dates from the early 1970s, is that the basic emotions are expressed the same way whether the subject is from Japan or Madagascar.
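The article doesn’t say how Realeyes reconciles its seven annotators when they disagree. A common approach in dataset construction is majority voting, sketched below; the four-vote threshold and the discarding of ambiguous frames are assumptions for illustration, not Realeyes’s documented procedure.

```python
from collections import Counter
from typing import Optional

# Ekman's six basic emotions, as used to label the frames.
EKMAN = {"anger", "disgust", "fear", "happiness", "sadness", "surprise"}

def consensus_label(annotations: list[str], min_votes: int = 4) -> Optional[str]:
    """Reduce seven annotators' tags for one frame to a single training
    label. The majority-vote rule and threshold are illustrative
    assumptions only."""
    assert len(annotations) == 7 and all(a in EKMAN for a in annotations)
    label, votes = Counter(annotations).most_common(1)[0]
    return label if votes >= min_votes else None  # drop ambiguous frames

print(consensus_label(["happiness"] * 5 + ["surprise"] * 2))            # -> happiness
print(consensus_label(["anger"] * 3 + ["fear"] * 2 + ["sadness"] * 2))  # -> None
```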

Realeyes software in operation. Photograph: Realeyes

Realeyes is doing well. It now has a staff of 78 – including scientists and engineers in Budapest, with smaller sales and customer support teams in London and New York – and clients that include Coca-Cola and Mars. It has raised nearly £30m from investors, including £9.5m in the latest round, much of it from NTT, the Japanese telecoms company. And while its focus at the moment is advertising and entertainment, its aspirations are clearly much greater: education and healthcare are on Jäätma’s list of “86 new projects” (he’s serious). In fact, Realeyes believes that it won’t be long before emotion AI is a feature of almost every area of our lives.

“AI is going to be the next industrial revolution,” Jäätma predicts. “So it’s going to have a huge impact on all industries. And emotion AI is going to be a core part of all AI, so we want to be at the forefront of when that happens and to have the right impact on that as well.”

“The right impact” is a pertinent phrase. There are obvious advantages to our machines being more sensitive to our needs. It should mean a better, more intuitive service, less time wasted. Also, probably a billion things we haven’t thought of yet. One of Jäätma’s dreams is that soon emotion AI will be able to alert us when, for example, we are experiencing depression and encourage us to seek help.

“Even today we spend more time with our phone than with our mothers or any other family member, so they just know and see you more than anything else,” he says. “And the technology is also getting to a place where you can get these facial and vocal reactions, which can be very early cues if you are too stressed for too long. If you can pick those signals up early, you can do something about it.”

But emotion AI raises concerns, too. Do we really want our emotions to be machine-readable? How can we know that this data will be used in a way that will benefit citizens? Would we be happy for our employers to profile us at work, and perhaps make judgments on our stress management and overall competence? What about insurance companies using data accrued on our bodies and emotional state?

Prof Andrew McStay, author of Emotional AI: The Rise of Empathic Media, cautions against going down a “Black Mirror avenue”, referring to the sci-fi TV series created by Charlie Brooker. “It’s quite easy to do the dystopian stuff,” he says, “and I always urge: rather than saying it’s really good or really bad, it’s about specifics.”

For McStay, though, emotion AI is dangerous when it affects “future life opportunities” and he is especially concerned about its impact on the workplace. “What we are talking about is 360-degree surveillance,” he says. “Who benefits from that? I certainly don’t think it’s the mass of people within an organisation, ie its workers. We have this suite of technologies but then we also have this suite of financial opportunities. And it’s whether these financial opportunities are a little bit too lucrative for people to be ethically minded. Follow the money, as it were.”

McStay also worries about the adoption of Ekman’s basic emotions. A recent report in the journal Psychological Science in the Public Interest was damning about the idea that it was possible to accurately interpret emotions simply by analysing a person’s face. One example: you might scowl when you’re angry, but also when you are concentrating or have a headache. “I think you should be very, very sceptical of the Ekman-based approach,” McStay warns. “From the point of view of ad testing, it doesn’t really matter too much because it’s not terribly important. But when you start thinking about it in the context of where life decisions are made about a person, I find that very problematic.”

Dr Brent Mittelstadt, a philosopher specialising in data ethics at the Oxford Internet Institute, finds that he has come across “more troubling examples than positive ones” with regard to emotion AI. “What if emotion AI is used to evaluate the honesty of an individual, or whether they pose a threat?” he says. “With regard to the former, I’m thinking of the iBorderCtrl system, currently in a pilot phase on the Hungarian, Greek and Latvian borders, which is used to evaluate whether an individual is giving truthful responses to questions at immigration control. As the relative risk of an application goes up, so too should our demands for accuracy and transparency. In the case of iBorderCtrl, the problem is that individuals deemed to be lying are not given their test results or information about the functionality and accuracy of the system. Without these, it is very difficult to contest the decision of an automated system.”

Despite these concerns, emotion AI looks set to become part of our lives, whether we like it or not. In fact, it already is. If you stand near the Eros statue in Piccadilly Circus looking at the vast advertising screens across the road (AKA the Piccadilly Lights) you will be recorded by two cameras (above Gap) that are the property of Land Securities Group, or LandSec. “When they identify a face, the technology works out an approximate age, sex, mood – based on whether they think you are frowning or laughing – and notes some characteristics such as whether you wear glasses or whether you have a beard,” LandSec’s website explains.

Piccadilly Circus. LandSec has two facial detection cameras above the Gap sign. Photograph: Matt Crossick/PA

Why do they do this? “Based on the information,” the website continues, “we can display advertising on Piccadilly Lights that the audience are more likely to find relevant and interesting. We can also measure whether people are interested in the advertising by understanding how long they are staring towards the camera.”
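LandSec has not published how its detections translate into ad choices. As a loose sketch of the kind of logic its website describes – estimate attributes per face, show whichever ad best matches the current crowd, and measure interest via gaze time – here is a hypothetical rule in Python. Every name, rule and ad below is invented.

```python
from dataclasses import dataclass

@dataclass
class DetectedViewer:
    # The approximate attributes LandSec says the cameras work out.
    age: int
    mood: str             # e.g. "frowning" or "laughing"
    glasses: bool
    beard: bool
    dwell_seconds: float  # how long the face stayed oriented at the screen

# Invented ad inventory with invented targeting rules.
ADS = [
    {"name": "razor", "wants": lambda v: v.beard},
    {"name": "sunglasses", "wants": lambda v: not v.glasses},
    {"name": "comedy show", "wants": lambda v: v.mood == "frowning"},
]

def pick_ad(viewers: list[DetectedViewer]) -> str:
    """Show whichever ad's target condition matches the most viewers."""
    return max(ADS, key=lambda ad: sum(ad["wants"](v) for v in viewers))["name"]

def engagement(viewers: list[DetectedViewer]) -> float:
    """The 'interest' measure the website describes: average gaze time."""
    return sum(v.dwell_seconds for v in viewers) / len(viewers)

crowd = [DetectedViewer(30, "frowning", glasses=False, beard=True, dwell_seconds=4.2),
         DetectedViewer(25, "laughing", glasses=False, beard=False, dwell_seconds=1.0)]
print(pick_ad(crowd), engagement(crowd))  # -> sunglasses 2.6
```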

There are also clear advantages to emotion AI being utilised in retail – and again, it already is. Cameras track you as you move through shops, seeing how you react to different products and what you make of in-store advertising displays. Have you given your consent for this exchange? By entering the store, perhaps you have, tacitly. Realeyes ran an experiment with Mothercare in which it found that customers who were greeted at the door with a smile tended to spend more.

Maybe this doesn’t bother you. In November 2015, Prof McStay asked 2,000 UK citizens whether they would be happy to be subject to any kind of emotion detection. He found that more than half (50.6%) were “not OK” with it; just over 30% did not mind so long as the data collected did not make them personally identifiable (the policy at Piccadilly Lights; LandSec says it does not collect or store data). There was greater tolerance among young people (only 31.2% had concerns) and most suspicion among the over-65s.

McStay notes that emotion AI is now attracting the attention, and funding, of the technology giants. For a long time, it was mainly worked on by startups, largely he thinks because “the Amazons, the Facebooks, the Googles” were put off by “the creepiness factor”. That, however, has now changed: Apple bought the emotion AI startup Emotient in 2016, while Facebook is developing its own emotion-based products. In May, Amazon announced that it had improved its smart assistant Alexa by using AI to detect human emotion. How much we are “aware or in control of” sensitive information we are sharing in these contexts, Mittelstadt argues, is a deeply troubling grey area.

“Most companies and organisations agree that we are going to see more of this stuff in the early 2020s,” says McStay. “I would guess around 2021, 2022. That sounds quite fanciful in a sense, but if you begin from the premise that there’s personal, economic and organisational value in understanding human emotion, then there’s a certain inevitability about these technologies.”
