Wyser
Beauty and the Beast: Building trust in Artificial Intelligence

There is no doubt that there is beauty in Artificial Intelligence (AI): people are attracted to it because they believe it has superhuman powers, powers that will lead to greater efficiencies and better outcomes. But there is also a beast in AI. It extracts meaning from data and can make suggestions that, although effective, are difficult for humans to understand, and this can lead to fear.

As we explore and find more uses for Artificial Intelligence, we at Wyser also consider the human implications of this technology. We anticipated that some users might be resistant to, or even opposed to, computers making decisions that were previously made by humans, but we have found that most are simply intrigued and want to know how it works.

By developing explainable AI, we enabled users to follow the decision-making process and make sense of the recommendations, which they were then happy to follow. Over time, users began to trust the systems we have built and their ability to interpret situations correctly and make fair, equitable decisions.

The challenge with machine learning is therefore working out not only how the system has reached its decision, but also how to demonstrate that to users in a meaningful way that builds trust. This is especially important when the model's outcomes diverge significantly from what users expect.
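
As a simple illustration of what an explainable decision can look like, the sketch below walks the decision path of a small decision tree and turns each test into a plain-language reason. It is a minimal example with invented feature names and data, not a description of our production models.

    # Illustrative only: one common way to surface a model's reasoning to users.
    # The feature names and data below are invented for this example.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    feature_names = ["days_since_contact", "documents_complete", "prior_decisions"]
    X = np.array([[3, 1, 0], [40, 0, 2], [12, 1, 1], [55, 0, 0]])
    y = np.array([1, 0, 1, 0])  # 1 = approve, 0 = refer for review

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    def explain(sample):
        """Walk the decision path and return human-readable reasons."""
        reasons = []
        for node in model.decision_path(sample.reshape(1, -1)).indices:
            feat = model.tree_.feature[node]
            if feat < 0:  # leaf node, no test applied here
                continue
            threshold = model.tree_.threshold[node]
            op = "<=" if sample[feat] <= threshold else ">"
            reasons.append(f"{feature_names[feat]} = {sample[feat]} ({op} {threshold:.1f})")
        return reasons

    sample = np.array([30, 0, 1])
    print("Decision:", "approve" if model.predict(sample.reshape(1, -1))[0] else "refer")
    for reason in explain(sample):
        print(" -", reason)

Presenting the reasons alongside the decision, rather than the raw model output alone, is what lets a user judge whether the recommendation makes sense.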

We have many years of experience working with AI, wrangling data to ensure the AI works efficiently and as intended. Through our analysis, we can uncover anomalies in the training data and address them to improve the accuracy of the models. This analysis is particularly important when working with data that has been entered manually.
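
The checks involved can be as simple as validating ranges and spotting duplicates before a model ever sees the data. The sketch below is purely illustrative; the column names and rules are invented for the example.

    # A minimal sketch of the kind of checks that catch anomalies in manually
    # entered data before training. Column names and rules are invented here.
    import pandas as pd

    records = pd.DataFrame({
        "case_id": [101, 102, 102, 104],
        "age":     [34, 290, 41, -5],   # 290 and -5 look like data-entry errors
        "outcome": ["granted", "refused", "refused", "granted"],
    })

    issues = []
    if records["case_id"].duplicated().any():
        issues.append("duplicate case_id values")
    out_of_range = records[(records["age"] < 0) | (records["age"] > 120)]
    if not out_of_range.empty:
        issues.append(f"{len(out_of_range)} age values outside the range 0-120")

    print(issues if issues else "no issues found")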

We’ve taken this one step further, because we think it is important not only to explain why a decision has been made, but also to allow users to challenge that decision. A model is only as good as the data used to train it. So, whilst we use multiple models to reduce bias, we accept that there will always be exceptions and times when we need human intervention: a human, or humans, in the feedback loop who calibrate the models. We believe that is the key to the success of our implementations.
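
To illustrate the idea of humans in the loop, the sketch below combines the scores of several models and refers a case for human review when the models disagree or the combined score is borderline. The models, thresholds, and case features here are assumptions made for the example, not our implementation.

    # An illustrative sketch of referring low-agreement cases to a human reviewer.
    # The models, thresholds, and case features are assumptions for this example.
    from statistics import mean

    class ThresholdModel:
        """Stand-in for a trained model: scores a case between 0 and 1."""
        def __init__(self, weight):
            self.weight = weight

        def score(self, case):
            return min(1.0, case["evidence_strength"] * self.weight)

    def decide(case, models, review_queue, min_agreement=0.8):
        scores = [m.score(case) for m in models]
        avg = mean(scores)
        agreement = 1 - (max(scores) - min(scores))
        # Escalate when the models disagree or the combined score is borderline.
        if agreement < min_agreement or 0.4 < avg < 0.6:
            review_queue.append(case)  # the human outcome can later recalibrate the models
            return "referred for human review"
        return "approve" if avg >= 0.5 else "decline"

    review_queue = []
    models = [ThresholdModel(0.9), ThresholdModel(1.1), ThresholdModel(1.0)]
    print(decide({"evidence_strength": 0.55}, models, review_queue))  # borderline -> referred
    print(decide({"evidence_strength": 0.90}, models, review_queue))  # clear -> approve

The cases that land in the review queue are exactly the ones where a human decision adds the most value, and those decisions can then feed back into retraining.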

We always remember that we are using AI to provide a service. As Lou Downe writes in her excellent book, Good Services – How to Design Services That Work: “A good service clearly explains why a decision has been made. When a decision is made within a service, it should be obvious to a user why this decision has been made and clearly communicated at the point it’s made. A user should also be given a route to contest this if they need to”.

If you want to know more about how our AI works and how we have improved transfer learning between implementations, please contact us.