Indigo Set to Provide the Perfect Automated Customer Service

From its roots in the 1950s, artificial intelligence has developed in stages, sometimes progressing rapidly and at other times stagnating. Currently, development is so rapid that many modern thinkers are becoming concerned that bots might ultimately take over. Of course, that’s not going to happen, at least not in the foreseeable future. What we are seeing are considerable improvements in what AI can achieve and in how we can collaborate with it to improve the way we do things.

In the case of Indigo, our goal is to develop it further to provide the perfect automated customer service. Here we look at where AI is today, focusing on human-bot collaboration, the human-bot interface and its challenges (including the uncanny valley phenomenon), and what we expect to see in the future. We are eager to receive your feedback, so please let us know what you think and where you would like to see Indigo go.

The Turing test continues to elude AI

In 1950, Alan Turing devised a test for computer intelligence, now called the Turing test. The goal is for a computer to fool an interrogator into believing it is human; the machine can lie and use any diversionary tactic it can muster. To pass, it must fool a significant proportion of interrogators. To date, no AI program has passed the basic, undiluted Turing test.

Nothing to worry about

Recently, AI has achieved some breakthrough goals. It has beaten the world chess champion at chess; it has beaten the world Go champion at Go (after teaching itself to play). It can detect some cancers as reliably as, or more reliably than, medical professionals; it can translate languages almost as well as professional translators; and it has racked up a multitude of other impressive achievements. However, AI tends to be very good at specific tasks but far less impressive at human-like general intelligence. Artificial General Intelligence (AGI) is the holy grail for AI developers, and according to certain soothsayers, such as Nick Bostrom (Superintelligence: Paths, Dangers, Strategies), the late Stephen Hawking (“Artificial intelligence could end mankind”), Bill Gates (who is very concerned that we will lose control of AI), Elon Musk and others, it poses a mortal danger to humankind.

While AI is without doubt playing an increasingly important role in business, science, medicine, entertainment and transport, and sparks the imagination of writers and filmmakers (Ex Machina, for example), the dangers claimed for it appear hyperbolic. Yes, AI is changing the way we live, work and play; it is reshaping the job market, creating more jobs than it destroys, and moving society in a host of different ways. But an existential threat? We don’t think so. Humans themselves are already too good at that.

Collaborative AI

So, while computers cannot yet fool humans into believing they are anything other than machines, what AI can do increasingly well is collaborate with people. In other words, humans and AI agents (or robots) are getting very good at working together to achieve shared goals. AI systems can assist humans with many things, including complex decision-making, medical diagnosis, art creation and, in the case of Indigo, running a reception desk. We will return to that later.

However, a critical element in collaborative working is trust. If we are to accept what an AI tells us and act on it, we must first trust it. Unfortunately, trust rarely comes naturally to people. If we find it difficult to trust other people, how can we learn to trust a seemingly intelligent machine, especially one built on opaque neural networks that cannot explain its own reasoning?

There is an irony here: the more closely a bot resembles a human without entirely succeeding, the less we tend to trust it. This phenomenon is known as the “uncanny valley”, and it is worth exploring in some detail.

The Uncanny Valley

Do we genuinely want our bots to resemble human beings? Have you ever experienced a strange, creepy feeling when encountering something that is almost but not quite human? That odd feeling is called the “uncanny valley” – a negative emotional response to objects that appear almost human. The sentiment ranges from slight unease to revulsion, and the more closely the object resembles a human without completely doing so, the more distasteful we tend to find it. Our emotional response intensifies, and the uncanny valley grows deeper.

The uncanny valley syndrome has been extensively researched since Masahiro Mori first identified it in 1970 and called it “bukimi no tani genshō”, subsequently translated as “uncanny valley” by Jasia Reichardt. Mori explained:

“I have noticed that, in climbing toward the goal of making robots appear like a human, our affinity for them increases until we come to a valley which I call the uncanny valley.”

The effect also applies to animals, avatars, prosthetic limbs and, as we shall explore, to chatbots. It is best illustrated by the graph below, which plots familiarity against human likeness.

We feel minimal affinity with an industrial robot and view it as a machine. As a robot becomes more humanoid, our empathy for it increases progressively, until it becomes too human-like while still not quite human – shown in the plot as zombie-like. At this stage we fall into the uncanny valley of unease, distrust and revulsion.

The uncanny valley and chatbots

The uncanny valley is never far away for anyone developing a human-computer interactive system. We design chatbots to be as human-like as possible in their interactions, but problems arise when the human’s expectations are not met. For example, when we chat with a voice-based system, we usually expect the computer side to be a good conversationalist. Unfortunately, while the best chatbots may begin conversations promisingly, once they reveal their lack of fully human intelligence they can soon become irritating. When we expect too much of them, they let us down.
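
To make this concrete, below is a minimal, hypothetical Python sketch of one way a chatbot can manage expectations: answer only when its understanding is confident, and otherwise admit its limits. The keyword classifier, the threshold and the canned answers are illustrative assumptions for the example, not Indigo’s actual implementation.

    # Illustrative sketch only: the keyword matcher, threshold and canned
    # answers below are toy assumptions, not Indigo's actual implementation.

    FALLBACK_THRESHOLD = 0.6  # below this confidence, the bot admits its limits

    CANNED_ANSWERS = {
        "greeting": "Hello! Welcome to the building. How can I help?",
        "directions": "The meeting rooms are on the second floor, to your left.",
    }

    def classify_intent(message: str) -> tuple[str, float]:
        """Toy intent classifier: keyword matching with a crude confidence score."""
        text = message.lower()
        if "hello" in text or "hi" in text:
            return "greeting", 0.9
        if "meeting room" in text:
            return "directions", 0.7
        return "unknown", 0.2

    def respond(message: str) -> str:
        """Answer confidently when understanding is good; decline gracefully otherwise."""
        intent, confidence = classify_intent(message)
        if confidence >= FALLBACK_THRESHOLD:
            return CANNED_ANSWERS[intent]
        # Admitting uncertainty keeps expectations realistic rather than letting
        # the bot bluff its way towards the uncanny valley.
        return ("I'm not sure I understood that. Could you rephrase it, "
                "or shall I ask a member of staff to help?")

    print(respond("Hi there!"))                     # high confidence: canned greeting
    print(respond("What is the meaning of life?"))  # low confidence: graceful fallback

A bot that declines gracefully in this way sets realistic expectations from the start, which matters more than feigning perfect understanding.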

Thus, managing expectations is crucial. For instance, Indigo will engage visitors in conversation, share jokes with them, and carry out personalised conversations, welcoming people by name and drawing on previously gathered information about them, such as a recent holiday. Such interactions give Indigo a personality, and we are continually extending its abilities in this direction. However, to avoid falling into the uncanny valley, we must handle each improvement with care (a short sketch of such a personalised greeting follows the list below). For instance:

  • Agency and experience

    Two fundamental dimensions of mind are agency (the ability to plan and act) and experience (the ability to sense and feel).

    The long-standing view has been that a bot’s agency tends to avoid the uncanny valley, while experience creates unease. Once a bot demonstrates apparent social intelligence – for instance, once it seems to recognise emotions and show powers of social cognition (the complex mental abilities involved in perceiving, processing, interpreting and responding to social stimuli) – it can elicit powerful feelings of eeriness.

    However, recent research into the uncanny valley of chatbots suggests that eeriness can arise from perceiving a machine that seems to possess a mind of its own. This applies to both a bot’s agency and its experience: the more autonomous the bot appears, the more likely it is to generate eeriness.

  • Uncertainty – is it human, or is it a bot?

    Another problem arises when service providers attempt to pass off a chatbot as human. The temptation to do so grows as bots become more sophisticated and customers can no longer easily tell whether they are interacting with a person or a bot. However, this practice treads perilously close to the uncanny valley.

    In reality, though, humans are rarely fooled into mistaking a bot for a human. On the contrary, people tend to be happy to converse with a friendly bot, fully aware that they are communicating with a non-human entity.

  • Personal and pleasant chatbots win the day

    User experience is enhanced when chatbots connect at a more personal level. However, when bots fail to achieve this, they evoke negative feelings.

  • Conversational skills

    Good conversational skills increase the desire and willingness to interact with the bot. In addition, the ability to provide accurate and relevant information is crucial.
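
As promised above, here is a minimal, hypothetical Python sketch of the kind of personalised, identity-transparent greeting described in the list. The Visitor record and the stored details are assumptions for the example, not Indigo’s actual data model.

    # Illustrative sketch only: the Visitor record and stored details are
    # assumptions for the example, not Indigo's actual data model.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Visitor:
        name: str
        last_topic: Optional[str] = None  # e.g. a holiday mentioned on a previous visit

    def greet(visitor: Optional[Visitor]) -> str:
        """Personalise when the visitor is known, stay generic when not,
        and always be transparent about being a bot."""
        if visitor is None:
            return "Hello, I'm Indigo, the automated receptionist. How can I help?"
        greeting = f"Welcome back, {visitor.name}! It's Indigo, your robot receptionist."
        if visitor.last_topic:
            greeting += f" How was the {visitor.last_topic}?"
        return greeting

    print(greet(Visitor(name="Sam", last_topic="holiday in Portugal")))
    print(greet(None))

Keeping the bot’s identity explicit in the greeting reflects the transparency point above: visitors enjoy a personal touch, but they should never have to wonder whether they are talking to a machine.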

Conclusion

Indigo is getting smarter all the time. It can already engage visitors in meaningful and entertaining conversations and carry out many different tasks, such as scheduling meetings and providing critical business information. While some people may worry that Indigo takes away the personal touch of a traditional receptionist, we believe its advantages significantly outweigh this concern. Our work, and that of other developers of intelligent agents, shows that such agents can provide service while maintaining high levels of customer satisfaction.

Indigo is highly capable of the personal touch. It has excellent conversational skills, demonstrates high levels of experience and agency, and is entirely transparent regarding its identity.

While the uncanny valley appears to be a real phenomenon, there is no evidence that Indigo will ever enter it. Instead, we believe Indigo is now set to provide the perfect automated front-of-house customer service.
