Digital Futures in Mind


Reflecting on Technological Experiments in Mental Health & Crisis Support

Jonah Bossewitch, Lydia X.Z. Brown, Piers Gooding, Leah Harris, James Horton, Simon Katterl, Keris Myrick, Kelechi Ubozoh and Alberto Vasquez Encalada (monograph)

University of Melbourne (2022)

(Excerpt)

Urgent public attention is needed to make sense of the expanding use of algorithmic and data-driven technologies in the mental health context. On the one hand, well-designed digital technologies that offer high degrees of public involvement can be used to promote good mental health and crisis support in communities. They can be employed safely, reliably and in a trustworthy way, including to help build relationships, allocate resources and promote human flourishing.1

On the other hand, there is clear potential for harm. The list of ‘data harms’ in the mental health context, meaning instances in which people are left worse off than they would have been had the data activity not occurred, is growing longer.2 Examples in this report include the hacking of psychotherapeutic records and the extortion of victims, algorithmic hiring programs that discriminate against people with histories of mental healthcare, and criminal justice and border agencies weaponising data concerning mental health against individuals. Issues also arise not only where technologies are misused or faulty, but where technologies like biometric monitoring or surveillance work as intended, and where the very process of ‘datafying’ and digitising individuals’ behaviour – observing, recording and logging them to an excessive degree – carries inherent harm.

Public debate is needed to scrutinise these developments. Critical attention must be given to current trends in thought about technology and mental health, including the values such technologies embody, the people driving them and their diverse visions for the future. Some trends – for example, the idea that ‘objective digital biomarkers’ in a person’s smartphone data can identify ‘silent’ signs of pathology, or the entry of Big Tech into mental health service provision – have the potential to create major changes not only to health and social services but to the very way we experience ourselves and our world. This possibility is also complicated by the spread of ‘fake and deeply flawed’ or ‘snake oil’ AI,3 and the tendency in the technology sector – and indeed in the mental health sciences4 – to over-claim and under-deliver.

Meredith Whittaker and colleagues at the AI Now research institute observe that disability and mental health have been largely omitted from discussions about AI bias and algorithmic accountability.5 This report brings them to the fore. It is written to promote basic standards of algorithmic and technological transparency and auditing, but also takes the opportunity to ask more fundamental questions, such as whether algorithmic and digital systems should be used at all in some circumstances—and if so, who gets to govern them.6 These issues are particularly important given the COVID-19 pandemic, which has accelerated the digitisation of physical and mental health services worldwide,7 and driven more of our lives online.


  1. Claudia Lang, ‘Craving to Be Heard but Not Seen – Chatbots, Care and the Encoded Global Psyche’, Somatosphere (13 April 2021) <http://somatosphere.net/2021/chatbots.html/>. Lang describes the potential for tech to ‘weave together code and poetry, emotions and programming, despair and reconciliation, isolation and relatedness in human-techno worlds.’  

  2. Joanna Redden, Jessica Brand and Vanesa Terzieva, ‘Data Harm Record – Data Justice Lab’, Data Justice Lab (August 2020) <https://datajusticelab.org/data-harm-record/>.  

  3. Frederike Kaltheuner (ed), Fake AI (Meatspace Press, 2021) <https://fakeaibook.com/> (accessed 7/12/2021).  

  4. Anne Harrington, Mind Fixers: Psychiatry’s Troubled Search for the Biology of Mental Illness (W. W. Norton & Company, 2019).  

  5. Meredith Whittaker et al, Disability, Bias, and AI (AI Now, November 2019) 8.  

  6. Frank Pasquale, ‘The Second Wave of Algorithmic Accountability’, Law and Political Economy (25 November 2019) <https://lpeblog.org/2019/11/25/the-second-wave-of-algorithmic-accountability/>.  

  7. John Torous et al, ‘Digital Mental Health and COVID-19: Using Technology Today to Accelerate the Curve on Access and Quality Tomorrow’ (2020) 7(3) JMIR Mental Health e18848.