Digitising the Mental Health Act


Are we facing the app-ification and platformisation of coercion in mental health services?

Piers Gooding (book chapter)

Routledge (2023)

(Excerpt)

Socio-technical systems such as video conferencing, digital care work platforms and electronic health records are taking an increasing role in mental health-related law, particularly since the COVID-19 pandemic. Reflecting on these experiments can help navigate an increasingly digital future for mental health services and the laws that govern them. This chapter looks to England and Wales, where an explicit policy aim to ‘digitise the Mental Health Act’ has heralded three key developments: (1) remote medical assessments of persons facing involuntary intervention, (2) the remote operation of tribunals that authorise involuntary interventions, and (3) the rise of digital platforms for setting up Mental Health Act assessments. The chapter argues that although courts appear responsive to the issues posed by the first two developments, there appears to be less obvious oversight of the digital platforms used to set up mental health crisis work. The chapter considers legal issues raised by ‘digitising mental health legislation’ and draws on a political economy perspective to reflect on the role of the private sector in emerging configurations of digitised health and social services. It recommends attention to safeguards both in the procurement and commissioning of private sector services concerning mental health crisis work and in the proliferation of digital platforms in health and social care services.

Introduction

In 2019, an industry publication called the Medical Futurist (2019) imagined a near future:

[P]atients might go to the hospital with a broken arm and leave the facility with a cast and a note with a compulsory psychiatry session due to flagged suicide risk. That’s what some scientists aim for with their A.I. system developed to catch depressive behavior early on and help reduce the emergence of severe mental illnesses.

This is one imagined future for mental health services. Others have rejected a forecast of expanded risk predictions and coercive intervention (see e.g. McQuillan, 2018), instead promoting cooperative support relationships augmented by selective use of data-driven technology (see e.g. Bossewitch, 2016; Cosgrove et al., 2020).

These contested futures are beginning to appear in the mainstream—often in the pages not of mental health journals but financial news. Consider Elon Musk’s claim that his ‘AI-brain-chips company could “solve” autism and schizophrenia’ (Hamilton, 2019) or the MIT Technology Review description of a mobile app ‘that can tell you’re depressed before you know it yourself’ (Metz, 2018). The latter app, called Mindstrong, was developed with funding from Jeff Bezos’ capital firm (Murtha, 2018), and its inaugural director and co-founder, Thomas Insel, joined the company after leaving a role at Google, where he had pursued a ‘Big Data’ approach to mental health (Reardon, 2017). Insel had previously been the director of the U.S. National Institute of Mental Health (NIMH, 2017). Between 2009 and 2015, NIMH disbursed US$445 million to projects concerned with ‘technology-enhanced mental health interventions’.

Despite the attention-grabbing claims of the Medical Futurist, and indeed of Musk, many such proposals are the stuff of speculative fiction. Often, there is scant evidence for the technical feasibility of their claims, except in the most exploratory of terms. Even claims about the Mindstrong app are promissory, and like many algorithmic and data-driven proposals in mental healthcare, they lack a robust evidence base; its website supports the product with reference to a single study with a total sample of 27 people (Corbyn, 2021; Dagum, 2018). This point is important because even critical responses to new technologies, such as analyses of their legal and ethical downsides, risk amplifying the sensational claims behind them. This inadvertent hype can exaggerate the technical feasibility of a proposal and risks promoting a ‘distorted picture of science’s potential’ (Horgan, 2021). An even greater risk, according to historian of computational technology David Brock (2019), is that ‘wishful worries’ about speculative futures can distract us from the ‘actual agonies’ of technology-use today.

Indeed, for the purposes of this chapter, there are several forms of digital data-driven technologies that are being used today, and they are starting to reshape processes of mental health-related law in some countries. These technologies may not be as sensational as ‘AI brain-chips’ or forms of algorithmic pre-vision, but these more mundane technical systems – in particular, videoconference software, online platforms for managing healthcare labour, and electronic health records – have significant legal ramifications for people facing involuntary psychiatric intervention. This chapter will turn to these sociotechnical systems1 as a way to reflect on the digital futures of mental health-related law.

Specifically, the chapter will focus on the explicit policy aim to ‘digitise mental health legislation’ in England and Wales, with reference to the Mental Health Act 1983 (England and Wales) (MHA). I will look at three developments: (1) the rise of digital platforms to coordinate MHA assessments of those facing involuntary intervention, (2) the use of videocall technology to make remote medical assessments to authorise intervention under the Act, and (3) remote video hearings of mental health tribunals. Some of these developments are not unique to England and Wales, and brief comparative points will be made with other jurisdictions. Yet, the chapter will focus on England and Wales given its explicit policy aim, unlike elsewhere, to ‘digitise mental health legislation’ (HM Government, 2018, p.213).

Beyond an account of the evolving law, policy and practice, the final section will reflect on these experiments and look to the levers in policy and law that can help govern them responsibly. I will consider rights-based concerns raised by ‘digitising’ mental health legislation, and draw on a political economy perspective to reflect on the role of the private sector in emerging configurations of digitised health and social services. I will argue that although courts appear responsive to the impact of videocall platforms on the direct application of the MHA – specifically for remote medical assessments and Tribunal hearings – there appear to be less obvious means to monitor and respond to the use of digital platforms in administering MHA-related crisis work, and in the broader ‘platformisation’ of health and social services (Faulkner-Gurstein and Wyatt, 2021). Even as there remain open questions about the role of videocall technologies in hearings and assessments (for example, concerning access to justice for those subject to MHA orders, or the experimental use of remote medical assessments for people under involuntary community treatment orders in jurisdictions like Scotland), more pressing questions arise regarding the checks and balances in place for the use of private digital platforms in facilitating the setup, by mental health practitioners, of compulsory treatment and hospital detention.

(This is an excerpt only)


  1. ‘Sociotechnical systems’ refers to a ‘system involving the interaction of hard systems and human beings, in ways that either cannot be separated or are thought to be inappropriate to separate’ (“socio-technical system,” n.d.). It offers a concept for analysing the dynamic interaction of people and technology, rather than focusing narrowly on the technology itself. The concept originated as an extension of Sociotechnical Theory, which emerged from organisational development as an approach to organising complex work; it offers language for describing, analysing and designing organisations, and has become a widely used concept in studies of human–computer interaction (Baxter and Sommerville, 2011). For the purposes of this paper it is used to refer to the digital systems under discussion, and the people and social structures that interact with them.

Piers Gooding, Research Lead (2021–23). Piers is a Senior Research Fellow at the University of Melbourne Law School. He is a socio-legal scholar whose research focuses on the law and politics of disability and mental health. Piers has acted as a board member and advisor in a range of local, national and international bodies working on the rights of disabled people, and has advised policy-makers at national and international levels.