Health experts at several universities leading artificial intelligence research have warned that AI not only distorts users' thinking by feeding them misleading information, but also induces a sense of psychological confusion.
Several recently published studies have found that AI can alter a user's perception of reality through a “feedback loop” between the chat platform and the psychiatric patient, reinforcing any delusional beliefs the patient may hold.
A team from the University of Oxford and London College stated in a research paper that has not yet been published: “While some users talk about psychological benefits of using artificial intelligence, disturbing cases are emerging, including reports of suicides, violence, and delusional thoughts linked to romantic relationships in which the user falls in love with the chat platform.”
The researchers warn that the “rapid adoption of chat platforms as personal social companions” remains understudied.
Another study, conducted by researchers at King’s College London and New York University, documented 17 cases of psychosis diagnosed after interaction with chat platforms such as ChatGPT and Copilot.
The second team added: “Artificial intelligence may reflect, validate, or amplify delusional or exaggerated content, especially in users already vulnerable to psychosis, in part because models are designed to increase interaction with the user.”
According to the scientific journal Nature, psychosis can include “hallucinations, delusions, and false beliefs… This condition can result from mental disorders such as schizophrenia, bipolar disorder (a psychological disorder that causes bouts of depression and other bouts of abnormal euphoria), severe stress, and drug abuse.”
A different study published recently found that chat platforms appear to encourage users who discuss suicide with them to act on those thoughts.
AI chat platforms have become notorious for “hallucinations”, in which they give inaccurate or fabricated answers to users’ queries and claims, and more recent research indicates that this trait cannot be fully eliminated from them.