Is there an ‘Uncanny Valley’ of advertising AI?
As AI becomes increasingly ubiquitous, improving rapidly year on year, there comes a point at which its proximity to 'the real thing' gives people an eerie feeling – this point is called the uncanny valley.
The uncanny valley is the hypothesis that human replicas which appear almost, but not quite, human elicit feelings of eeriness and revulsion.
The idea was first applied to robots: as robots become more humanised, they become more appealing – until they reach the point of looking nearly human, but not quite.
There are several theories that attempt to explain why this freaks us out. Some suggest it conflicts with our sense of human norms, or confuses our perceptual cues so that we don't know how to categorise what we're seeing. Others suggest it threatens our sense of individuality, distinctiveness or human identity.
As it is a knee-jerk reaction, the most likely explanations are those pointing to non-conscious, hard-wired processes for avoiding pathogens or selecting a mate – an uncanny robot is unconsciously perceived as a human in poor health.
Either way, it appears that we get along nicely with these creations until they violate our human, interpersonal processes.
AI in Chatbots
The same is potentially true of AI. Getting immediate answers from a bot is great because it’s convenient, fast and accurate 24x7.
But when I discover I have been tricked into thinking an AI bot is a person, or when I discover that my AI has made a decision on my behalf, without consultation, that I disagree with, the uncanny valley appears. People don't like losing control, and they don't like being tricked.
Recent research by Washburn University psychologist Gregory Preuss and colleagues suggests that, to differing degrees, people blame themselves for being duped. They tell themselves that they should have known better than to be fooled.
As our AI helpers move beyond answering FAQs or checking a status, and we develop a more engaged relationship with them, the opportunity for negative reactions grows; the more we trust AI, the more violated we feel when it lets us down.
AI in advertising
In marketing, as more and more of the information we are served is driven by algorithms and AI, can it know too much?
UK digital ad spending is about to hit £10 billion, with Google and Facebook taking 60% of the revenue. Google AdWords is testing different reporting messages for when people object to ads, including an "Ad knew too much" option.
Report this ad – default options
Report this ad – testing options
This is a response to people who believe that some advertising AI has hit the creepy stage of the uncanny valley.
As people become more aware of AI in advertising, the uncanny valley curve will flatten out and become less pronounced.
We also start to blame AI for apparent coincidences when they could simply be a frequency illusion: you may not have consciously processed a particular ad before, but now that it is on your mind, you start to notice it more.
As in many industries, AI is making a big impact on the way we run advertising campaigns (Lab uses Kenshoo) and on the way people perceive advertising.
As consumers, we tolerate AI when it sits within expected norms, but when it approaches an apparently human level of targeting, we quickly become very uncomfortable.
In the same way, we tolerate humans having traffic accidents, but we would not be as forgiving of an AI having the same accident.