Vicky Pattison warns of deepfake abuse as ChatGPT plans to launch adult content

Vicky Pattison, the star of Strictly Come Dancing, expressed her concerns about the rise of AI-enabled sexual abuse as OpenAI revealed plans to introduce an “adult mode” feature for people over 18 years old.

In a bid to compete with Grok, Elon Musk’s explicit chatbot, ChatGPT will allow users to create porn. Grok already lets users create “companions”, sexualized avatars that can appear in naked and sexual photos and videos.

ChatGPT is owned by OpenAI, which announced a new erotica feature for the chatbot in October. The spread of sexualized AI content has made consent a growing source of concern.

Pattison, who delved into the world of fake pornographic videos for her Channel 4 documentary Vicky Pattison: My Deepfake Sex Tape, says she is “deeply concerned about the direction this technology is moving in.”

OpenAI boss Sam Altman announced in October that users will have to verify they are over the age of 18 to access the chatbot’s “erotica” generation tools.

He stated on X: “We are going to be able to safely relax the restrictions in most cases now that we have been able to mitigate the serious mental health issues and have new tools,” adding that OpenAI will allow even more erotica for verified adults as part of its “treat adult users like adults” philosophy.

Altman didn’t give details on what this might look like, but given that ChatGPT can carry out conversations and generate images, the erotica could be both text-based and visual. OpenAI also owns Sora, the viral AI video generator that created fake short videos of Jake Paul and TikToker HSTikkyTokky so realistic they fooled many online.

As part of her Channel 4 programme, the Geordie Shore alum took the extreme step of uploading an AI-created deepfake sex tape of herself to highlight the dangers.

“Having spent the last year immersed in the realities of deepfake porn and AI-enabled sexual abuse, I’m deeply concerned about the direction this technology is moving in,” she told The Mirror. “The issue isn’t consensual adult content, it’s that powerful AI tools are being rolled out before we’ve properly addressed consent, safeguards, and misuse. Women are already disproportionately harmed by deepfake abuse, and this risks making that worse.

“I’m concerned about the message that these tools are sending about intimacy,” Pattison said. When AI is designed to flirt and act on demand, it can skew perceptions of how women should be treated, especially for younger users who are already navigating complicated questions of consent.

Until lawmakers and tech companies catch up, every new sexualized AI tool risks reinforcing dangerous narratives rather than protecting people.

Fidji Simo, OpenAI’s CEO of Applications, said earlier this month that she expects “adult mode” to debut in the first quarter of 2026, once the company has improved its age-prediction technology. According to The Verge, OpenAI is already testing that model in some countries to see whether it can identify teenagers without “mis-identifying adults.”

Altman has said there will be some controls, and that no one will be shown erotica unless they ask for it. However, AI policy researcher Tolga Bilge has criticized Altman for backtracking on an earlier interview in which he cited OpenAI’s decision not to build a “sexbot avatar” as an example of the company holding back.

The debate over ChatGPT’s upcoming adult content comes against a troubling backdrop of existing AI-generated sexual abuse.

In February, the UK Home Office released a report as the UK became the first country in the world to create new AI sexual abuse offences. It found AI tools are being used to generate child sexual abuse images in a number of sickening ways, such as “nudifying” real-life images of children or stitching the faces of other children onto existing child sexual abuse images.

The report warned that innocent survivors of traumatic abuse are being “re-victimized,” with children’s voices frequently used in this odious material.

“Perpetrators are also using fake images, including streamed live images, to intimidate children and force victims into even more horrifying abuse.” As more children are groomed and abused online, perpetrators are using AI tools to help conceal their identities.

But AI-generated sexual content has already seeped into classrooms across the world. In Spain last year, 15 boys were sentenced to a year’s probation after being convicted of using AI to produce fake naked images of their female schoolmates. Most of the girls affected were aged 14 and the youngest was 11. The images were shared on WhatsApp groups.

In Australia, 50 school students reported that “mutilated” and “incredibly graphic” images of them had been faked and shared. One mother said her daughter was so horrified by the images that she vomited.

In the US, more than 30 female students at a school in New Jersey found deepfake sexual images had been shared among their male classmates on Snapchat.

One in ten secondary school teachers in the UK were aware of students’ attempts to create “deepfake, sexually explicit videos” at their institutions during the previous academic year, according to a poll conducted by Teacher Tapp for the Guardian.

OpenAI is reviewing its age verification system to comply with the Online Safety Act, which is enforced by Ofcom. Under the Act, user-to-user and search services must take appropriate steps to reduce the likelihood of users encountering “illegal content,” including child sexual abuse material, terrorism, encouraging or assisting suicide, and other incitement to break the law.

Websites that host pornographic content must also use highly effective age assurance to prevent children from accessing it.

Source: Mirror
