Illusions of intelligence, connection and reality
Authors
Looi, Jeffrey C.L.
Allison, Stephen
Bastiampillai, Tarun
Reutens, Sharon
Looi, Richard C.H.
Abstract
For people with mental illnesses that impair reality testing, such as psychosis, severe depression and bipolar disorder, Artificial Intelligence (AI) Large-Language Models (LLMs) may represent threats to mental health. LLMs are unable to detect delusional beliefs, may encourage and validate delusions and cognitive distortions, miss opportunities to reinforce reality-based thinking, and exacerbate risks of self-harm and harm to others. Psychiatrists need to understand these risks of LLMs for people with severe mental illnesses, and educate patients and carers on avoiding these potential harms. Risk assessments need to be informed by an awareness of the inputs that patients receive from LLMs.
Source
Australasian Psychiatry
Entity type
Publication