Regulating Medical AI


There has been a lot written about AI and how it will change the world. While Go-playing AI, killer robots and job losses are important in a general sense, we are interested in AI in medicine.

There has been a lot written on this as well – using AI to detect skin cancers and read chest X-rays – although much of it falls prey to the tendency to see medicine as a technical problem (and hence amenable to technical fixes, even flawed ones), rather than as a complex socio-technological enterprise with all the problems that entails.

In the Computational Oncology group, we work on developing and using medical “AI” – covering a mixture of machine learning, image analysis, near-patient data capture and other approaches. I use the term “AI” in a loose sense – some of it is applications of cutting-edge AI, and some of it is fairly simple rules-based work. On top of that, I am involved in wider discussions about medical AI through the Royal College of Radiologists (RCR) and within the hospital where I work.

There are lots of problems in developing medical AI, particularly in access to suitably labelled training data and in deployment of systems in clinical practice. However, one of the major problems is regulation, and it’s one we need to discuss. There is an argument about whether AI should be regulated in general (see Andrew Ng’s argument here), but medicine is already regulated, and it seems very unlikely that medical AI will go unregulated.

Existing regulatory systems for medicines and medical devices assume that medicines can be produced and quality assured, and that if drugs are given in the same doses and the same formulations, they produce a reliable dose in patients. Medical devices are classified based on their level of potential harm (three levels by the FDA; four by the EU – overview of the differences and similarities here), but all must provide evidence that they function as expected.

Despite complaints, these approaches work for the regulation of hardware. Changes to hardware tend to be slow, and it is easy to “freeze” products in a certain state. That doesn’t mean we always know how they will work – drugs can have variable effects in different people; products may malfunction – but the product itself doesn’t change. However, when we think about the regulation of new medical AI, many of the devices are software-based, and can change as often as you update the code; in fact, they can update themselves, given some learning function.

This makes regulation difficult: How do we regulate a device built on sand? Which version of the software are we regulating?

The regulators are aware that this is a problem – we have new regulations from the FDA and new guidance from the European Commission. Most of these are based on the distinction between software used as a medical device – software that makes a diagnosis, or suggests a treatment – and software used for “screening” or advice, although the FDA are also looking at the idea of certifying the process of software development, rather than just the product. While such an approach might speed up the development/approval cycle, it also risks locking device development into big tech companies, who are likely to be the only ones who can afford the costs and timespans of medical device development. That largely removes one of the features we have seen in the software industry – the ability of nimble new companies to disrupt established markets. So, the approvals process for medical AI might lead us to a more staid market than we might expect from all the buzz around AI. This isn’t necessarily a bad thing – as Frances Oldham Kelsey proved – but it runs counter to a lot of the hype about medical AI.

But I think that this approach – regulate AI like hardware – is flawed, for two very different reasons:

1. Medical AI isn’t new, and those who herald a new dawn of medical AI are either disingenuous or ignorant.

2. Regulating medical AI comes down to regulating the users – doctors won’t use devices that aren’t CE marked. But what happens when the users are patients, not doctors?

We will explore what these mean in future posts.

Categories: AI
