7th December 2021

Does AI Have a Positioning Problem?

The term artificial intelligence (AI) was coined by American computer scientist Prof. John McCarthy in 1955, who said, ‘Our ultimate objective is to make programs that learn from their experience as effectively as humans do.’ In the 66 years since, AI has only recently come into its own, particularly in the past decade, as the technology has caught up with the theory. In 2016, Amazon, Apple, DeepMind, Google, IBM and Microsoft formed the ‘Partnership on AI’ to set societal and ethical best practices for artificial intelligence research.

So, where is AI today?

AI in the 20s

IBM’s definition of AI is, ‘Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.’ The reality, widely accepted, is that true AI doesn’t exist in business yet. What does exist are AI applications, which learn from interactions and make recommendations based on those learnings, namely:

  • speech recognition (such as Siri and Alexa),
  • virtual agents (Slack, FB Messenger),
  • recommendation engines,
  • and self-driving tech.

The uptake of these applications* grew during the pandemic, as lockdowns forced people to become more connected.

AI has made inroads into enterprise technology, streamlining workflows and the like, and while there is big ambition for the adoption of AI, the reality is that its uptake among global companies is not where it should be, according to Forbes Insights. A KPMG report (2020) suggests that there is an AI ‘trust gap’, which has ‘prevailed amid a lack of quality data and an ensuing reluctance to hand critical business decisions over to machines’. This lack of data is perceived as a barrier to adoption. Similarly, the 2021 McKinsey AI report found no increase in AI adoption.

What’s the problem with AI?

As a branding agency working in the technology sector, we have seen many AI-driven propositions in our time: interesting and cool stuff that isn’t accelerating as fast as perhaps it should. It seems to us that AI has a bit of a positioning problem in the following three areas:

  1. Language – exclusive and not very clear.
  2. Perception – fear and distrust.
  3. Clarity – what is the problem being solved?

1. Language

AI is sometimes positioned as a silver bullet or a panacea, but that doesn’t make it real for end-users who aren’t clear about how it works or what problem it’s solving. The word ‘artificial’ doesn’t evoke trust or security, and the term has not changed since its inception in the 50s. There are many definitions of what AI is, which only adds to the confusion, and on top of that, there are subfields (machine learning and deep learning) often used in conjunction with AI, which muddies the waters even further.

AI washing in marketing

This isn’t helped by the many products and services that claim to use AI but simply don’t, a marketing practice known as ‘AI washing’. We see much of this kind of activity in FMCG, where a fast turnaround is often driven by ‘new and improved’ messaging in a bid to shift more products.

For example, in 2019, an ‘AI’ toothbrush was launched, claiming to track brushing and supply feedback; it was sold on the marvels of new AI tech. Does the brush decide the brushing technique based on what it has learned? No, it doesn’t. This type of activity can confuse consumers about the reality and capabilities of AI.

Marketers may be guilty of feeding the mistrust when they launch products that appear to be AI-enabled and future-forward, but which in reality are simply tech-enabled devices. The power of language therefore matters, and so does how brands use it to hit cues of connection and understanding with consumers. If we examine other categories, such as the car industry and self-driving cars, we can see genuine strides in the AI arena, where the tech is looking at and reading situations to make decisions. One of the big players in this sector is Tesla with its Autopilot technology.

Tesla’s use of ‘Autopilot’ is interesting. It’s a word we all feel familiar and even safe with (from our experience of flying), but in this context, Tesla cars always require a driver. Following well-documented Autopilot crashes, the need for clarity about what the tech actually does is more important than ever. The brand has been warned about this: a German court found that Tesla’s ad copy misled consumers and banned the car company from using the terms ‘full potential for autonomous driving’ and ‘Autopilot inclusive’ in its advertising materials.

In conclusion, clear language is paramount for safety and understanding, because when consumers read about crashes and misinformation, fear and distrust set in.

2. Perception: fear and distrust

A global survey by Statista revealed that only 23% of UK respondents said they trust AI. There are many reasons for this, but some of the distrust is fuelled by what people read about AI and their perception of what it is doing or is going to do.

People trust mainstream media to supply true and faithful narratives; it’s where they often source their information as ‘fact’, and it therefore shapes their perception. But the media is also guilty of sensationalism, so while people are unsure about AI, they are acutely aware of when it goes wrong, with headlines about Tesla’s driver-assisted cars being involved in fatal accidents, a Microsoft chatbot spewing racism, or smart speakers caught recording and analysing private conversations. All of this has a negative impact on trust.

AI has been the subject matter of many fantastical movies, which always seem to take a fatalistic point of view: think of Terminator, Transcendence or A.I., where the human is always being persecuted. In the real world, it doesn’t help either when industry leaders such as Elon Musk publicly fuel the misguided fear that AI will quickly evolve from being a benefit to human society to taking it over. He is quoted as saying, ‘mark my words… AI is far more dangerous than nukes.’ The irony of the Tesla owner’s remark hasn’t gone unnoticed.

There are some very cool and interesting developments in AI and its subfields, but they are being undermined by sensational press stories and pop culture, which shape a skewed narrative that damages both the perception and the understanding of AI. We need to challenge this by being clear about what AI is, what it does and how it helps us... and we should celebrate those stories.

3. Clarity: what is the problem being solved?

Are humans being augmented or are they being replaced? There’s a big difference between the two. Given that AI is unproven as a replacement for people, we think it would be wiser in the near term to position AI as an aid rather than a replacement, i.e. as something that helps humans do a job better.

Currently, there are too many unknowns, so defining a clear narrative on AI and the implications and limitations of adoption will increase the chance of success in the long term. We need to talk about AI as an aid, not as a decision-maker or as something that takes control away from us. We need to market AI as something that enhances our choices and decisions; this could help drive adoption and ultimately move us to a place where AI could become fully autonomous in a way that is more acceptable to the masses. Marketing should focus on addressing consumer pain points rather than the technology itself, such as ‘park assist’, a useful tool for a specific problem.

Moving forward

To reposition AI, we need to rethink the terminology and how we talk about it, to make it accessible and transparent. The solution centres on a people-first approach to AI-enabled tech, with a clearer definition of what it is and what it isn’t.

Let’s make it feel more real, not fantastical: it’s about using smarter technology to speed things up, enhance performance and drive better outcomes.

...

*Smart speaker ownership in the UK stands at 38%, up from 23% in 2018, with the pandemic cited as a purchase driver. (UK Smart Speaker Consumer Adoption Report 2021, VoiceBot, 2021)

John Galpin is a Co-Founder at Design by Structure.

This article was first published in Top Business Tech.
