This POSTnote summarises the ethical implications and regulatory considerations for deploying Artificial Intelligence (AI) in mental healthcare.
POSTnotes are research briefings for UK Parliamentarians on specific topics.
In recent years the number of AI tools available for mental healthcare and wellbeing purposes has increased. This builds on a burgeoning digital health sector in which more than 20,000 wellbeing apps are reportedly available on app stores. These apps are distinct both from AI tools purpose-built for NHS use and from unintended uses of companion chatbot apps, which were never designed for mental health purposes. All the cases of severe harm identified through this research arose from unintended uses of general companion chatbot apps, but there are ethical considerations around the use of all AI tools in mental healthcare.
Public sector responses are underway to improve data availability and to strengthen evidence generation and deployment. Multiple government agencies in the UK and globally are also collaborating to address the ethical challenges. This builds on considerable existing regulation and guidance (examples are outlined in the POSTnote).
Piers was interviewed and provided external peer review during the development of this POSTnote.