How generative AI is affecting people’s minds

Stanford University researchers recently tested some of the better-known AI tools from companies like OpenAI and Character.AI, evaluating their performance in simulated therapy.

The researchers discovered that when they imitated a person with suicidal intentions, the tools failed to recognize what was happening and instead helped the person plan their own death.

According to Nicholas Haber, an assistant professor at the Stanford Graduate School of Education and senior author of the new study, “AI systems are being used as companions, thought-partners, confidants, coaches, and therapists.” This is happening at scale, not just in niche applications.

AI is becoming more and more prevalent in people’s daily lives and is being used in scientific research in fields as diverse as climate change and cancer. There is also some discussion about how it might lead to humanity’s demise.

A major question remains as to how this technology will begin to affect people’s minds as it is used for an ever-wider range of purposes. Because frequent human interaction with AI is such a recent phenomenon, researchers have not had enough time to study how it might be affecting human psychology in general. However, its potential impact worries psychologists a great deal.

One alarming instance of how this is unfolding can be seen on the popular community platform Reddit. According to 404 Media, some users have recently been banned from an AI-focused subreddit because they had started to believe that AI is godlike, or that it is making them godlike.

According to Johannes Eichstaedt, an assistant professor of psychology at Stanford University, “This looks like someone with issues with cognitive functioning or delusional tendencies associated with mania or schizophrenia interacting with large language models.” People with schizophrenia may make absurd claims about the world, and these LLMs are a little too sycophantic, he said. The result is “confirming” interactions between large language models and psychopathology.

These AI tools have been programmed to be more likely to agree with the user, because their creators want people to enjoy using them and to keep using them. The tools try to come across as friendly and affirming, even while correcting some factual errors. If the person using the tool is spiraling or going down a rabbit hole, this can be dangerous.

Regan Gurung, a social psychologist at Oregon State University, says, “It can fuel thoughts that are not accurate or not grounded in reality.” The problem with AI, these large language models that mimic human speech, is that they are reinforcing: they give users what the program predicts should come next. That is where things start to get problematic.

Much like social media, AI may also make matters worse for people with common mental health conditions such as depression or anxiety. As AI becomes more and more integrated into different facets of our lives, this may become even clearer.

According to Stephen Aguilar, an associate professor of education at the University of Southern California, “if you’re coming into an interaction with mental health concerns, then those concerns might actually be accelerated.”

Need for more study

How might AI affect memory or learning, for example? A student who uses AI to write every assignment will likely learn less than one who does not. But even sparing use of AI may reduce how much information people retain, and relying on it for daily tasks may make people less aware of what they are doing in the moment.

According to Aguilar, “What we’re seeing is there’s the possibility that people can become cognitively lazy.” If you ask a question and receive an answer, your next step should be to interrogate that answer, but that additional step is frequently skipped. The result is an atrophy of critical thinking.

Many people navigate their hometowns or cities using Google Maps. Compared with when they had to pay close attention to their route, many have found they are now less aware of where they are going and how to get there. People who lean on AI as frequently may encounter similar issues.

Experts studying these effects say more research is required. According to Eichstaedt, psychology experts should begin this kind of research now, so that people can be prepared and address problems as they arise, before AI starts causing harm in unanticipated ways. People should also be taught what AI can and cannot do well.

Source: Aljazeera
