Computers may solve the problems, but will we understand the answers?

From climate change to healthcare, computer algorithms are being developed to help solve some of the world’s biggest problems. But will we understand the answers? An article published yesterday presents this challenge: if we’re using computers to solve some of the key questions in life, then we need to be able to understand the answers.

Data visualisation: understanding the problem

Analytics techniques can help us better visualise key outcomes by representing large volumes of data. We can use this data to produce simulations that show the impacts of climate change. These tools help us understand and relate to the potential consequences of extreme weather patterns on our day-to-day environment. By helping us see more clearly what is going on, such visualisations can spur us into action.
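
As a minimal sketch of what such a visualisation might look like, the snippet below plots a global temperature anomaly series with matplotlib. The data, the trend and the noise are illustrative assumptions generated for the example, not real measurements.

```python
# A minimal sketch of the kind of visualisation described above: plotting a
# synthetic, illustrative global temperature anomaly series with matplotlib.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1950, 2021)
# Hypothetical anomaly data: a gentle warming trend plus year-to-year noise.
rng = np.random.default_rng(0)
anomaly = 0.018 * (years - 1950) + rng.normal(0, 0.1, years.size)

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(years, anomaly, color="tab:red")
ax.axhline(0, color="grey", linewidth=0.8)
ax.set_xlabel("Year")
ax.set_ylabel("Temperature anomaly (°C)")
ax.set_title("Illustrative global temperature anomaly")
plt.tight_layout()
plt.show()
```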

Similarly, when it comes to our health, visualising the data helps to further our understanding. By centralising our health and lifestyle data into a single app, we can construct a more rounded view of our own health. Informed by this richer, data-driven understanding, we can work with medical professionals to develop more personalised treatments that suit our individual needs.
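
As a toy illustration of what “centralising” might mean in practice, the sketch below merges two hypothetical data sources (step counts from a fitness tracker and resting heart rate readings) by date using pandas. The column names and values are invented for the example.

```python
# A toy sketch of "centralising" health and lifestyle data: merging records
# from two hypothetical sources (a fitness tracker and heart rate readings).
import pandas as pd

steps = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03"]),
    "daily_steps": [8200, 4100, 10500],
})
heart = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-02", "2024-01-03"]),
    "resting_hr": [62, 64],
})

# An outer join keeps days where only one source reported anything.
combined = steps.merge(heart, on="date", how="outer")
print(combined)
```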

AI that goes beyond our understanding

But what if the same app that showed you all your health data could analyse that data and automatically tell you what treatment plan to follow? AI-powered diagnosis and treatment based on the data collected is a growing possibility.

Returning to the case of climate change, perhaps AI could recommend steps to tackle the growing climate emergency. For example, it could show us how to optimise supply chains to minimise carbon emissions.
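
To make the supply-chain idea concrete, here is a hedged sketch of the kind of optimisation involved: a tiny linear programme, solved with scipy, that chooses shipment quantities from two factories to two warehouses so that total CO2 emissions are minimised. Every route, capacity and emission figure is an illustrative assumption.

```python
# A tiny linear programme: assign shipments from two factories to two
# warehouses so that total CO2 emissions are minimised. All numbers are
# illustrative assumptions, not real supply-chain data.
from scipy.optimize import linprog

# kg CO2 per unit shipped on each route:
# factory A -> warehouse 1, A -> 2, B -> 1, B -> 2
emissions = [2.0, 5.0, 4.0, 1.5]

# Each warehouse must receive its demand exactly (equality constraints).
A_eq = [
    [1, 0, 1, 0],  # units arriving at warehouse 1
    [0, 1, 0, 1],  # units arriving at warehouse 2
]
demand = [30, 40]

# Each factory can ship at most its capacity (inequality constraints).
A_ub = [
    [1, 1, 0, 0],  # units leaving factory A
    [0, 0, 1, 1],  # units leaving factory B
]
capacity = [50, 45]

result = linprog(emissions, A_ub=A_ub, b_ub=capacity,
                 A_eq=A_eq, b_eq=demand, bounds=(0, None))
print(result.x)    # shipment plan per route
print(result.fun)  # minimum total emissions
```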

These powerful algorithms might tell us how to solve some of the biggest global problems. But if the solution is a black box, how can we be confident that this is truly the best course of action to take? Surely it’s important to understand how the algorithm arrived at the solution and what trade-offs are being made.

Algorithmic visualisation: probing the black box

In the same way that visualisation tools enable us to understand and relate to these problems, perhaps they could also help us to understand the answers that AI presents.

A promising example of this is a deep learning model that Google deployed at two hospitals. The neural network used feature learning on electronic health record data to predict a range of health outcomes, such as a patient’s medical condition and their risk of death. What’s most impressive is that, when making a prediction for a patient, the model highlighted which parts of the data it had used to reach its decision. Certain antibiotics, test results or other features from the electronic medical record were flagged as important in the algorithm’s decision-making process.
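
As a simplified sketch of the underlying idea, the snippet below trains a model on synthetic patient records and then asks which features drove its predictions. It uses scikit-learn’s permutation importance as a stand-in for the attribution technique in the study; the feature names, data and outcome rule are invented for illustration.

```python
# A simplified sketch: train a model on synthetic patient records, then flag
# which features mattered most to its decisions. Permutation importance is a
# stand-in here for the attribution method used in the real study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
n = 500
features = ["age", "antibiotic_given", "white_cell_count", "creatinine"]
X = np.column_stack([
    rng.integers(20, 90, n),   # age in years
    rng.integers(0, 2, n),     # antibiotic prescribed (0/1)
    rng.normal(7, 3, n),       # white cell count
    rng.normal(90, 25, n),     # creatinine level
])
# Toy outcome that mostly depends on white cell count and age.
y = ((X[:, 2] > 9) & (X[:, 0] > 60)).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
importance = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank the parts of the record by how much the model relied on them.
for name, score in sorted(zip(features, importance.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```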

When doctors can understand the basis of a model’s prediction, they have the opportunity to judge whether the algorithm’s conclusion seems clinically appropriate. In some cases the model may come up with a surprising result, and algorithmic visualisation would give insight into the drivers behind the automated recommendation. This could help the scientific community to probe further and potentially even gain a new understanding of certain medical conditions.

We need to understand the answers

Making sure we have systems in place to understand the answers to the problems that computers are solving is critical. We need strong algorithmic visualisation systems to help us see inside the ‘black box’. Gaining insight into the computer’s solution is so important that these systems should be embedded into the design of AI from its inception.

A true data scientist needs to be able to understand the answers. After all, isn’t science about furthering human understanding of everything around us?