Use of AI in Healthcare #WHFEvents

Artificial intelligence in healthcare – opportunities, risks and implementation #WHFEvents

The use of AI in dentistry has been explored in some depth in Dental Review, from the design of clear aligners to CAD used in restoration technology, to predicting the outcome of perio treatment. But what is on the horizon for general medical healthcare? We attended a forum on that very subject...

Jim Shannon MP, Vice Chair, All-Party Parliamentary Group on Health in all Policies, welcomed attendees and speakers to the Westminster Health Forum policy conference on the use of artificial intelligence in healthcare.

The speakers would go on to cover the subjects of AI opportunities, support, implementation, ethics, safeguarding, data, and engagement with clinicians and patients.

Mr Shannon highlighted the skill mix of his experts, and promised that they would challenge their audience with their insights. COVID-19, he said, has driven AI development in the health sector at an incredible pace, but what might that mean for the surgery of the future?

Dr Nicole Mather, Non-Executive Director, Health Research Authority and Executive Partner, IBM (above right, pictured with Dr Angeliki Kerasidou who will be speaking later) started the morning by talking about the current situation of AI in healthcare.

What is AI, she asked, and what needs to be done when using it? AI is the technology that allows machines to mimic what humans can do, including speech recognition (more of which in a later presentation), natural language processing, and pattern recognition and analysis. It can include a problem-solving application that makes decisions based on complex rules or if/then logic.

Machine learning is a subset of AI applications that learns by itself. It can reprogramme itself as it digests more data, performing specific tasks with ever greater accuracy. Beyond that is deep learning, a subset of machine learning that can teach itself to perform a specific task, again with ever greater accuracy, but in this model without human intervention, or with unlabelled data.
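To make the distinction concrete, here is a minimal, purely illustrative sketch (not taken from Dr Mather's presentation) contrasting a rule-based if/then application with a model that learns its own decision threshold from labelled data; the clinical scenario, field names and figures are all hypothetical.

```python
# Illustrative sketch only: a hand-written rule versus a threshold learned from data.
# The clinical scenario, field names and numbers are hypothetical.

def rule_based_referral(blood_pressure: float) -> bool:
    # Classic rule-based AI: a decision taken on a fixed, expert-written if/then rule.
    return blood_pressure >= 140.0

def learn_threshold(readings: list[float], referred: list[bool]) -> float:
    # Minimal "machine learning": choose the cutoff that best reproduces
    # clinicians' past referral decisions in the labelled training data.
    def errors(cutoff: float) -> int:
        return sum((r >= cutoff) != label for r, label in zip(readings, referred))
    return min(sorted(set(readings)), key=errors)

# Toy training data: past readings and whether a clinician referred the patient.
past_readings = [118.0, 126.0, 133.0, 141.0, 152.0, 160.0]
past_referrals = [False, False, False, True, True, True]

learned_cutoff = learn_threshold(past_readings, past_referrals)
print(rule_based_referral(150.0))   # True, by the hand-written rule
print(150.0 >= learned_cutoff)      # True, by the rule learned from the data
```

The difference is that the learned cutoff shifts as more labelled data is digested, whereas the hand-written rule only changes when a human rewrites it; deep learning extends the same idea to models that extract their own features, even from unlabelled data.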

AI must be explainable in order to be trusted. Dr Mather posited the need to embed ethical principles into AI applications, built on trust and transparency, so that large-scale implementation can be adopted with fairness and accountability. Only through explainability can human users comprehend and trust the output of artificial intelligence and use it to accelerate science.

A constant theme running through the morning was that AI must work for everyone: it should reflect the diversity of the population, its data capture and usage should be simple and accessible, and its outcomes should rest on thoughtful, well-designed frameworks built around regulation, relevant data and technology. The quality of the data going into an AI model is crucial to the value of the outcome; the result is only as good as the material used.

The COVID crisis has driven home the importance of data quality in analysing and researching the virus responsible for the pandemic at a molecular level, and in assessing the effectiveness of medications trialled, leading to the incredibly fast roll-out of effective vaccinations and the long-awaited easing of societal restrictions.

AI’s burgeoning problem-solving ability has led to the discovery and design of new molecules and the generation of new anti-microbials. It has also led to the recognition and diagnosis of early-stage, life-altering conditions including Parkinson’s disease, prostate cancer, diabetes and Alzheimer’s disease. In the NHS alone, over 40 healthcare innovations have resulted from AI research.

AI has streamlined the passage of innovation from the lab through trials to the clinic, accelerating the healthcare process and transforming patients’ lives, while protecting their confidential information. Any data collated to create a profile of a disease and its treatment has been anonymised; only the information pertinent to building an effective treatment model will be used.

However, developers have had to learn to be pragmatic about data collection, being aware that busy – and sometimes stressed – clinicians might not have data recording at the front of their minds during the long hours needed to treat patients, even when data-driven innovation has been central to finding solutions to complex and urgent challenges, as we have seen so recently.

As a way of dealing with data collection and supporting innovation, the Accelerated Access Collaborative (AAC) and the Care Quality Commission (CQC) have worked in partnership with providers and stakeholders to develop a set of six evidence-based principles that are crucial if providers are to be effective at adopting innovation. These are:

• Develop and deploy innovations with the people who will use them
• Develop a culture where innovation can happen
• Support people
• Adopt the best ideas and share learning
• Focus on outcomes and impact
• Be flexible when managing change

And yet, without patient consent to data use, innovation can’t be driven forward. Patients need to understand that any information used in an AI study will only involve specifically relevant data, excluding details such as a patient’s name, address or any other personal details through which they might be identified.
A single person’s medical data will be collated and analysed alongside that of hundreds of others in order to build the foundations of an effective informational architecture, leading towards effective treatment for all.
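As a minimal, purely illustrative sketch (not an actual NHS pipeline), the principle of using only specifically relevant data might look like this: direct identifiers are stripped before a record joins the pooled research dataset. The field names are hypothetical, and real anonymisation involves far more than dropping fields, since re-identification risk must also be managed.

```python
# Illustrative sketch only: keep the study-relevant fields and drop direct identifiers
# before a record is pooled for analysis. Field names are hypothetical.

STUDY_FIELDS = {"age_band", "diagnosis_code", "treatment", "outcome"}

def anonymise(record: dict) -> dict:
    # Name, address and other identifying details never enter the research dataset.
    return {key: value for key, value in record.items() if key in STUDY_FIELDS}

patient_record = {
    "name": "Jane Doe",
    "address": "1 High Street",
    "nhs_number": "000 000 0000",
    "age_band": "60-69",
    "diagnosis_code": "H35.3",   # age-related macular degeneration
    "treatment": "anti-VEGF",
    "outcome": "stable",
}

research_record = anonymise(patient_record)
print(research_record)   # identifiers removed, study-relevant fields retained
```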

Johan Ordish, Deputy Director, Medical Device Software and Digital Health, Medicines and Healthcare Products Regulatory Agency, discussed the latest developments and future of AI as a medical device.

Johan first stressed that his presentation did not represent government policy, then opened his talk with the opinion that medical devices are inevitably going digital, and that the MHRA wants to see the very best of digital design working alongside optimum, intelligent healthcare.

At the core of digital technology is sophisticated software and, in fact, software can be a medical device in itself (Software as a Medical Device, SaMD) or a component of a more complex, multi-layered AI as a Medical Device innovation (AIaMD). There are novel risks associated with SaMD, not least the speed with which it can be deployed, and the fact that, as a dynamic device, it might change.

The change happening in smart software is nonlinear in its potential effects; in other words, what it is might not predict what it does. AI offers an element of surprise that must be allowed for. It might also be susceptible to malign bugs and cyber-attacks; cyber security is barely keeping up with hackers trying to steal healthcare information or turn AI against its users.

There is no set definition of an AIaMD as yet, although there are international efforts to standardise one. The latest intelligence suggests that the AIaMD market is strongly oriented towards triage and diagnosis. AIaMD both interprets clinical evidence and adapts over time, meaning it can produce novel or unexpected outcomes and raise new challenges.

If the AIaMD model is uninterpretable, we enter the algorithmic ‘black box’ environment. This is the difference between a data modelling culture, where the pathways are identifiable, and an algorithmic modelling culture, which works according to its own rules and can be difficult to link back to clinical or scientific data, and therefore to validate. It might also outstrip the human factor in the AI/human team.

The gold standard for any research is human, but AI is constantly retraining itself. If the human model is based on static data, it can become stale. AI is always learning, and perhaps the future contains a dynamic analytical AI model that accelerates beyond our human ‘gold standard’ and takes us somewhere new, and very exciting.

Dr Nathalie Moreno, a Partner at Addleshaw Goddard, addressed the matter of assessing health data sharing, security concerns and governance in the development of AI.

The tricky issue with AI is data sharing and protection. While the data might not contain specific personal information, the patient needs to clearly understand the logic behind the technology and the purpose behind the medical profiling in order to meet GDPR requirements and give informed consent for its use.

There are documented cases of cyber-attacks against medical organisations and medical devices. It is important to ensure that patients and operators can’t be held under the threat of possible harm from an infected AIaMD. It is also important to recognise the importance of confidentiality with regard to anonymised research data. Some researchers believe that anonymous data can be shared freely. They are mistaken.

Even anonymised research data is subject to confidentiality and falls within the scope of the GDPR. We must also wait to see what impact EU guidance will have on the UK during 2021 and post-Brexit.

First Dr Angeliki Kerasidou, Nuffield Department of Population Health and Reuben College, University of Oxford, and then Professor Alastair Denniston, Consultant Ophthalmologist, Research and Innovation, University Hospitals Birmingham NHSFT, and more (see below) took up the thread of safeguarding the use of AI in healthcare.

Dr Kerasidou asked how we can build trust in AI in healthcare. She described AI as calculative rationality without the interactive, emotional core of the human heart. AI can analyse data faster than we can, but can you make healthcare professionals trust it and use it? Can you force patients to trust and accept healthcare professionals who depend on, and use, AI?

Paraphrasing Annette Baier in 1986, Dr Kerasidou asked, ‘Trust me? Why should I, until I have reason to?’ She then reasoned that AI is a novel technology rapidly entering healthcare, with great potential to improve care at both an individual and a population level. It can assist cash-strapped healthcare systems and improve access to healthcare for all. Learning to trust it calls for proven predictability of outcomes and open communication between professional and patient.

Professor Alastair Denniston (top) who is also Deputy Director, Birmingham Health Partners Centre for Regulatory Science and Innovation; and a Member of the Regulatory Horizons Council (UK), concluded the session.

Amongst his other job titles Professor Denniston describes himself as one of the gatekeepers tasked with safeguarding the use of AI in healthcare, or what he called the human/machine decision interface. In other words, here is an exciting new innovation, shiny and just out of its box. Can you use it to devise something useful for deployment in the NHS? Is it safe and confidential?

He provided two examples of AI working in the health service:

Example A. He wakes up one morning with blurred vision in his left eye. He uses his smart-phone to contact his local healthcare centre, where AI identifies him by voice recognition and asks triage questions in order to direct him to the correct healthcare professional. He talks to a specialist ophthalmologist, who calls him in for a retinal examination.

The AI in the retinography scanner diagnoses the early stages of macular degeneration, the most common cause of blindness in the elderly. Treatment can start that same afternoon, plus he downloads an app through which he can be constantly assessed for his condition. His healthcare has been successfully fast-tracked by AI.

Example B. He wakes up one morning with blurred vision in his left eye. He uses his smart-phone to call his local healthcare centre. Unfortunately, the voice recognition algorithm can’t recognise his accent, and he gets an ‘error’ message. Because there is no human fallback service he has to keep trying to get past the AI. After three days he gives up and visits A&E.

He is now on completely the wrong track and caught up in a bottleneck situation that will delay treatment. This is because the system has relied on AI rather than investing in people, and the AI is suffering from data poverty, or what Prof. Denniston describes as the digital ‘does have’ and the digital ‘does not have’ divide. But what does that mean?

Example B happens because the AI has not been provided with a diverse enough sample of demographic speech patterns. It needs sufficient data to be able to analyse speech across a number of groups based on age, gender and ethnicity, avoiding digital bias. Otherwise, the system is like a sat-nav that only works in certain select areas – it must be relevant everywhere.
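A minimal sketch of what such a check might look like is given below; it is illustrative only (not from Prof. Denniston's presentation), the group labels and the 25% floor are hypothetical, and real bias auditing goes well beyond counting samples.

```python
# Illustrative sketch only: check how each demographic group is represented in the
# speech-training data before the voice-recognition model is trained.

from collections import Counter

def representation(samples: list[dict], attribute: str) -> dict[str, float]:
    # Share of training samples contributed by each group (e.g. accent, age band).
    counts = Counter(sample[attribute] for sample in samples)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Toy training set: heavily skewed towards one accent.
training_samples = [
    {"speaker_id": 1, "accent": "Received Pronunciation"},
    {"speaker_id": 2, "accent": "Received Pronunciation"},
    {"speaker_id": 3, "accent": "Received Pronunciation"},
    {"speaker_id": 4, "accent": "Ulster"},
    {"speaker_id": 5, "accent": "West Midlands"},
]

shares = representation(training_samples, "accent")
under_represented = [group for group, share in shares.items() if share < 0.25]
print(shares)
print("Collect more samples for:", under_represented)   # ['Ulster', 'West Midlands']
```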

He added a final warning. The danger is that, just because something is new and exciting, people might choose to ignore traditional methods such as randomised clinical trials of new medications and instead focus on molecular analysis. We still need techniques and processes that maintain multiple layers of cross-referencing to prove effectiveness, together with third-party validation.