
AI, big data and clinical trials

The MIT Technology Review reported on a new research collaboration between DeepMind, Google's machine learning division, and Moorfields Eye Hospital in London, a renowned specialist centre for eye diseases and injuries.

According to the article by Jamie Condliffe, DeepMind's artificial intelligence software will work its way through over a million eye scans, analysing the common patterns in cases of visual degeneration. The software can detect fine detail that humans cannot see, and it works much faster too. Eventually, DeepMind's software will learn to spot early signs of sight loss and catch at-risk patients while there is still time to help protect their vision.

This isn’t Google’s first foray into healthcare research and clinical trials. Their Connectivity Bridge and wearable health sensor are just a couple of other examples.

The emerging realm where technology, big data and clinical research meet is starting to produce some of the most exciting and innovative shifts in how we’ll conduct medical research in future. But it’s also raising new issues around data privacy and informed consent that are catching researchers and the wider biosciences industry off guard.

We've already seen examples where our enthusiasm for the potential of new technology has led researchers to overlook data privacy risks and concerns. DeepMind's work with the Royal Free Hospital in London, on an app called Streams that helps healthcare professionals (HCPs) detect acute kidney injury, drew criticism that patients were not properly informed about what data would be shared with Google and how it would be used. Similarly, NHS England's care.data programme was delayed several times over concerns about data protection and opt-out options.

Tech giants like Google and Apple move fast, much faster than the heavily regulated world of clinical research is used to. New technology and big data could revolutionise healthcare and how we do research, so how do clinical researchers and regulatory authorities avoid standing in the way of progress while ensuring patient welfare and informed consent are not compromised?

Some of the key questions that need to be answered are:

- How do we guarantee permanent anonymisation of data in a world where we can't predict how future innovations might change the nature of the protections we put in place now? (The sketch after this list illustrates one reason this is so hard.)

- As we get better at detecting early signs of health risks among people who have let researchers use their data, do we have a responsibility to feed that information back to patients? If so, how do we reconcile that with the need for anonymisation and data privacy?
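
To make the first question concrete: a dataset stripped of names can sometimes be re-identified by linking it with other data on quasi-identifiers such as date of birth and postcode. The Python sketch below is purely illustrative, using made-up records and hypothetical field names, and shows the principle of such a linkage attack rather than any real system or dataset.

```python
# Illustrative only: hypothetical records showing how a de-identified dataset
# can be re-identified by linking quasi-identifiers (date of birth + postcode)
# against a second dataset that still contains names.

# A de-identified research extract: names removed, quasi-identifiers retained.
research_extract = [
    {"dob": "1954-03-02", "postcode": "NW3 2QG", "diagnosis": "AMD"},
    {"dob": "1971-11-15", "postcode": "EC1V 2PD", "diagnosis": "glaucoma"},
]

# A separate, name-bearing dataset (e.g. a public register).
public_register = [
    {"name": "A. Patient", "dob": "1954-03-02", "postcode": "NW3 2QG"},
    {"name": "B. Someone", "dob": "1980-06-01", "postcode": "SW1A 1AA"},
]

def link_records(extract, register):
    """Join the two datasets on the shared quasi-identifiers."""
    index = {(p["dob"], p["postcode"]): p["name"] for p in register}
    for row in extract:
        name = index.get((row["dob"], row["postcode"]))
        if name is not None:
            # A match re-attaches an identity to a supposedly anonymous record.
            yield {"name": name, **row}

for match in link_records(research_extract, public_register):
    print(match)  # e.g. the first extract record, now linked to "A. Patient"
```

The point is not the code but the principle: the strength of any anonymisation depends on what other data exists, or may exist in future, to link against.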

The answers to these questions are not simple. We may need to re-evaluate how we work with research participants, to ensure they can benefit from and have some ownership over research findings that are increasingly detailed and personalised.

As we journey towards realising the full benefits of artificial intelligence and big data for healthcare research, two things are certain:

- More than ever, we need to maintain and build on the trust between researchers and their participants.

- The capacity of clinical trial regulatory authorities to evolve in a timely way will be truly tested.