The ‘Mass Effect’ effect: why you shouldn’t oversell the capabilities of AI

Discussing the difference between actual AI and high-tech probability algorithms.

Artificial Intelligence was the top trend on the minds of UX experts in 2019, according to our latest State of UX survey, just as it seems to be on the tip of every client's tongue across the industry.

Certainly, computers that can think for themselves and do our jobs for us would be a miraculous time-saver. Thinking computers, however, do not yet exist.

When we talk about AI, we really just mean very advanced computers that we program to fix things without us having to click ‘OK’ first.

Essentially, you might have a program that runs through your user journey in the live environment every five minutes. It might hit a 404 error caused by a missing file, then search for a new version of the file and set up a redirect without needing any help. Handy, but we’re not exactly in HAL 9000 territory just yet.
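The kind of "self-healing" check described above is really just ordinary automation. Here's a minimal sketch of the idea; the function names (`check_page`, `find_replacement`) and the data structures are purely illustrative, not any real monitoring tool's API:

```python
def find_replacement(missing_path, site_index):
    """Search the site's file index for a file with the same name elsewhere."""
    filename = missing_path.rsplit("/", 1)[-1]
    for path in site_index:
        if path.endswith("/" + filename):
            return path
    return None

def check_page(path, live_paths, site_index, redirects):
    """Simulate one pass of the journey checker: on a '404', add a redirect."""
    if path in live_paths:
        return "200 OK"
    replacement = find_replacement(path, site_index)
    if replacement:
        # The "fix" is just a lookup plus a rule update - no thinking involved.
        redirects[path] = replacement
        return f"301 -> {replacement}"
    return "404 Not Found"
```

For example, if `/docs/guide.pdf` has moved to `/archive/guide.pdf`, one pass of `check_page` would record the redirect automatically. Useful, but it's rule-following, not intelligence.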

In his classic sci-fi comedy series, The Hitchhiker’s Guide to the Galaxy, Douglas Adams wrote of an ancient alien race who invented an enormous super-computer to answer the Great Question of Life, the Universe and Everything. After thousands of years of computations, the computer churned out the answer “42.” They then had to invent an even larger computer to work out what the question was, so that the answer made sense.

This perfectly exemplifies the difference between a mega-powerful computer that can calculate the most probable answer to a question without even knowing what the question is, and a true AI, which would know that the answer is far easier to find with a more specific question – and would start there.

Your Google Assistant might sound real enough that you feel the urge to thank her for her help, but in reality, she’s just converting your voice into text, putting it into Google and reading out the result – it’s hardly Skynet.
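That three-step pipeline – transcribe, search, speak – can be sketched in a few lines. To be clear, these are hypothetical placeholder functions, not Google's actual APIs; the point is only how mechanical the sequence is:

```python
def assistant_reply(audio, speech_to_text, web_search, text_to_speech):
    """A toy model of a voice assistant: three plumbing steps, no thinking."""
    query = speech_to_text(audio)    # 1. convert your voice into text
    result = web_search(query)       # 2. run the text through a search engine
    return text_to_speech(result)    # 3. read the top result back to you
```

Swap in real speech and search services and you have something that sounds conversational, yet at no point does anything in the pipeline understand the question.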

This might sound like I’m splitting hairs, but the difference between true AI and high-tech probability algorithms is something you need to make exceptionally clear to your clients before you start talking to them about the benefits of AI. To explain why, let me pass on a warning about AI from an infamous video game.

The Mass Effect series is widely considered to be one of the best video game franchises ever made; in fact, many have described it as their generation’s Star Wars. The story begins in the far future, where mankind has expanded out into space and joined a loose ‘United Nations’ of alien races working together to investigate and salvage the advanced technology left behind by ancient alien races.

You play the part of Commander Shepard – a high-tech, cyborg super-soldier and veteran of a war against AI robots, given command of an advanced stealth spaceship. Shepard could be male or female, look however you wanted them to look, and have your choice of back story and abilities to use in the game’s third-person shooting sections.

The series has a magnificent, complex plot and back story; what really appealed to Mass Effect players, however, were the choices available, particularly the romantic ones.

You see, by choosing from options within conversations you have with your crew and the other various characters and aliens you meet in your travels, you can influence the story’s outcome.

For example, the first game sees your party split up on an adventure. Two of your group find their way into danger at the same time and it is up to you to choose which to rescue – the other dies.

Likewise, each game sees you given the option of picking a member of your crew to pursue romantically, including aliens and sometimes members of the same gender. Make moral choices and you gain a reputation for being an intergalactic good guy, able to talk enemy ships out of attacking you. Make bad choices and you begin to look like a Dark Lord of the Sith, and even your own crew might start to distrust you.

As the series progressed through its trilogy, each game actually accessed your old save files from the previous instalments and carried your choices forward. This meant that the members of your crew who died stayed dead, and your old girlfriend from the last game might not be happy to run into you and your new boyfriend.

This kind of drama saw people become deeply emotionally invested in the characters, relationships and events that had occurred in their own version of the game. Of course, in reality, this was all down to a very simple option tree. You chose option A at the first choice, option B at the second choice, option A at the third and option B at the last – therefore you saw ending 2 of 7 possible endings.
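In code terms, an option tree like that is nothing more than a lookup table: a sequence of choices is a key, and the value is one of a fixed set of pre-written endings. This toy version (the choices and ending numbers are hypothetical, not the game's real logic) shows how small the machinery really is:

```python
# Each path through the option tree maps to one pre-written ending.
ENDINGS = {
    ("A", "B", "A", "B"): 2,   # the example play-through from the text
    ("A", "A", "A", "A"): 1,
    ("B", "B", "B", "B"): 7,
    # ...the remaining combinations map onto the other endings
}

def ending_for(choices):
    """Look up which pre-written ending this sequence of choices produces."""
    return ENDINGS.get(tuple(choices))
```

However organic it feels to the player, every "unique" story is just one key in a finite table that the writers filled in ahead of time.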

The game was so successful at making this organic, however, that players felt like they were experiencing a personalised version of the story that was unique to them. That was what propelled the game into the history books, but it’s also what eventually alienated almost the entire fan base.

You see, when Mass Effect 3 brought the series to its ultimate conclusion, each individual player was expecting to see all their choices culminate in a personally tailored ending customised to them. This was, of course, impossible. No game console or PC had the storage to contain every possible outcome to every combination of choices across all three games, amounting to over a hundred hours of story. As such, the finale of the Mass Effect trilogy was one of three possible endings, based entirely on the outcome of one, final choice.

The players were incensed. Twitter was up in arms. The forums and sites were filled with complaints. The game developers received hundreds of cupcakes in one of three flavours as a peaceful protest at their choice. They even had to release a free update to the game that expanded the ending with extra scenes to flesh out the story, cutting heavily into their profits.

Ultimately, once the dust settled, the company tried to release a new spin-off Mass Effect game. Lacklustre sales ensured the series would not continue further.

So, what’s the moral of the story? Simple: hell hath no fury like a customer scorned.

The Mass Effect team spent three games persuading gamers that their game offered a truly interactive experience – the opportunity to live in a story and really affect its outcome with their choices – something no mere video game is yet capable of.

When the ending revealed the man behind the curtain – that the seeming interactivity of their experience was simply a pleasant illusion – the players were not just disappointed, but furious.

Back in the real world, with such an exciting prospect as our first steps towards true artificial intelligence, it’s easy to get carried away. In the middle of a sales pitch, that excitement can easily translate into false or exaggerated promises.

Even if it doesn’t, you might be describing an algorithm that offers a nice little user experience, while your client might hear “AI” and start thinking about the car from Knight Rider.

You could then build them the most advanced chatbot prototype ever conceived by humanity, but the first time it says “sorry, I can’t help you with that, let me pass you to an agent”, they will feel like they’ve been conned.

When it comes to AI, under-promise and over-deliver, or your client might be just as upset as Mass Effect players were.
