Position statement

03 September 2018

Artificial intelligence (AI) in health

AI explained

AI is often misunderstood as a single concept, when in reality its methods and applications vary, and it will soon play a role in many areas of our lives, not just health. It can take the form of a diagnostic or prognostic tool or a service-planning tool, or it can support self-care and prescribing, among other activities.

There is a significant opportunity for new technology to support the role of doctors and to provide solutions to existing problems, but careful consideration must be given to the safety, societal, legal, educational and ethical implications it presents.

Key recommendations

1. Incentivise industry to address real-world challenges in healthcare, such as prescribing errors, drug adherence and the use of, and resistance to, antibiotics.

In healthcare particularly, it is essential to ensure that new technology is safe and effective. However, this is increasingly difficult in an age where technology can quickly outstrip the regulatory environment. Many clinicians are unclear where responsibility lies for advice given by AI products, what the legal implications are, and on what basis an AI product has reached its conclusions. Support should be given for both doctors and patients to engage with AI development and testing in a meaningful way.

2. The RCP should support regulators, NHS England and NHS Digital in adapting to a changing environment and in developing guidance, principles and appropriate methods for evaluating AI, drawing on clinical and patient input where possible and supporting the dissemination of assessment results.

3. The RCP should support all clinicians to critically appraise new technology, asking questions and engaging in discussion about the accuracy, efficacy, impact and evidence base of its advice. This will ensure that, no matter how technology progresses, doctors can apply core professional principles in their approach and have the confidence to agree or disagree with the recommendations made by AI technology.

Trust in new technology can be built through greater transparency and discussion. Industry should work with clinicians to explain how new technologies that use AI are developed, particularly the type of data used to develop the product, which must reflect the diversity of the population the product is intended to serve so that it does not disadvantage some groups.

4. Industry should take a transparent approach to explaining the evidence base for new technology, ensuring that testing uses real-life data diverse enough to represent the intended population. Where possible, findings should be submitted for peer-reviewed publication.

Technology such as AI can support doctors in strengthening the patient-doctor relationship, allowing them to focus on eliciting patient preferences, shared decision making and taking a more holistic view of care. Context and conversation, particularly around difficult decisions, are a crucial part of determining treatment plans; they can be supported, but not replaced, by technology.

5. New technology should support physicians in taking a more person-centred approach to care.

High-quality data is needed to ensure that the conclusions drawn from AI are valid and safe. This applies in particular to clinical data recorded in electronic records by health professionals. Clinical headings in the record need to be standardised so that data from different organisations can be integrated without its meaning being changed.

6. Industry and healthcare managers should ensure that electronic record systems adhere to existing information standards. Provider organisations should invest in appropriate training to develop knowledge and change the culture around record keeping. The RCP should support the development of standards for recording clinical data where gaps exist, and should support clinicians in keeping high-quality records to recognised standards.