Are tech companies using your private data to train AI models?

As leading technology companies rush to release and improve artificial intelligence (AI) products, users in the United States have limited control over how much of their personal information is extracted from their data to train AI tools.

Meta (which owns Facebook, Instagram, Threads and WhatsApp), Google and LinkedIn have all rolled out AI app features that have the capacity to draw on users' public profiles or emails. Google and LinkedIn provide users with ways to opt out of AI features, whereas Meta's AI tool doesn't offer a way for them to say "no, thanks."

A November 8 Instagram post claimed that "99% of Gmail users have no idea that Gmail just flipped a dangerous switch on October 10, 2025."

Posts warned that the platforms' AI rollouts make users' most sensitive data available for tech companies to harvest. "Every conversation, every photo, every voice message, fed into AI and used for profit," a November 9 X video about Meta said.

Technology companies are rarely fully transparent when it comes to the user data they collect and what they use it for, Krystyna Sikora, a research analyst for the Alliance for Securing Democracy at the German Marshall Fund, told PolitiFact.

Unsurprisingly, this lack of transparency can lead to a lot of misinformation, Sikora said, which in turn can fuel fear-mongering and the spread of false claims.

The best way for people to learn about and protect their privacy rights is to read the terms and conditions, which frequently specify how data will be used and whether it will be shared with third parties, Sikora said. The US doesn't have a comprehensive federal law on data privacy for technology companies.

Here is what we found about how each platform's AI handles your data:

Meta

Social media claim: Starting on December 16, Meta will begin reading your DMs, photos, videos and voice messages, feeding them into AI and using them for profit. – November 9 X post with 1.6 million views as of November 19.

The facts: Meta announced a new policy that takes effect on December 16, but that alone does not mean its AI tool will receive your direct messages, photos and voice messages. The policy relates to how Meta will customise users' advertisements and content based on their interactions with Meta AI.

For example, if a user interacts with Meta’s AI chatbot about hiking, Meta might start showing that person recommendations for hiking groups or hiking boots.

Your data is still being used for AI purposes, though. Meta doesn't train its AI on private messages in Instagram, WhatsApp or Messenger, but it does collect user content shared in "public" mode. This can include photos, posts, comments and reels. According to Meta, the system is designed to prevent these conversations from being turned into ads if a user's Meta AI conversations involve religious views, sexual orientation, or racial or ethnic origin. Meta says the AI tool will use the microphone only if users grant it permission, such as when they ask Meta AI questions using its voice feature.

There is a caveat: the tech company says its AI might use information about people who don't have Meta accounts if that information appears in other users' public posts. For example, a Meta user might mention a non-user in a public image caption, and that mention could be used to train the AI.

Can you change your mind? No. If you use Meta platforms in these ways – interacting with the chatbot or making some of your posts public – your data might be used by Meta AI. Meta AI cannot be deactivated in Instagram, Facebook or Threads. WhatsApp users can deactivate the option to talk with Meta AI in their chats, but only per chat, meaning you must turn it off in each chat's advanced privacy settings.

The X post incorrectly advised people to fill out an opt-out form. However, the form Meta provides only lets users report an AI answer that contains a person's personal information.

David Evan Harris, who teaches AI ethics at the University of California, Berkeley, told PolitiFact that because the US has no federal regulations about privacy and AI training, people have no standardised legal right to opt out of AI training in the way that people in countries such as Switzerland, the United Kingdom and South Korea do.

Even when social media platforms offer opt-out options to US users, the relevant settings are often difficult to find, Harris said.

A Meta spokesperson said that deleting your Meta accounts does not prevent Meta AI from using data you previously made publicly available.

Google

Social media claim: "Did you know Google just gave its AI access to read every email in your Gmail – even your attachments?" – November 8 Instagram post with more than 146,000 likes as of November 19.

The facts: Google makes a number of products that interact with private information in a variety of ways. On November 5, Google announced that Gemini Deep Research, its AI research product, can connect to users' Gmail, Drive and Chat accounts. But, as Forbes reported, users must first give the tool permission.

To allow Gemini Deep Research to access personal information across products, users can choose which data sources it may use, including Google Search, Gmail, Drive and Google Chat.

Google also gathers people’s data in other ways:

  • Through searches and prompts in Gemini apps, including the Gemini mobile app, Gemini in Chrome or Gemini in another web browser
  • Through any photos or videos a user uploads to Gemini
  • Through interactions with apps such as Spotify and YouTube, if users give permission
  • Through messaging and phone call apps, including call logs and message logs, if users give permission

A Google spokesperson told PolitiFact that the company doesn't train its AI on data from registered users under the age of 13.

When smart features are activated in Gmail and Google Workspace settings – they are enabled automatically in the US – Google can access users' data to help them draft emails or suggest Google Calendar events. With optional paid subscriptions, users can access additional AI features, including in-app Gemini summaries.

Users can turn off Gmail's smart features in settings, but doing so does not affect the Gemini app, which can still be downloaded or accessed in a browser.

A lawsuit filed in California accuses Gemini of spying on users' private communications. The lawsuit says an October policy change gave Gemini default access to private content such as emails and attachments in people's Gmail, Chat and Meet. Before October, users had to manually enable Gemini to access that private content; now, they must go to their privacy settings to disable it. The lawsuit contends that the updated Google policy amounts to unapproved wiretapping and the recording of confidential communications without permission, in violation of California's 1967 Invasion of Privacy Act.

Can you change your mind? People who don't want their conversations used to train Google's AI can use "temporary" chats or chat without logging into their Gemini accounts. A Google spokesperson said doing so prevents Gemini from saving a person's chat history. Otherwise, opting out of Google's AI in Gmail, Drive and Meet requires turning off smart features in settings.

LinkedIn

Social media claim: "LinkedIn will begin using your data to train AI starting November 3." – November 2 Instagram post with more than 18,000 likes as of November 19.

The facts: LinkedIn, owned by Microsoft, announced on its website that starting November 3, it will use some US members' data to train content-generating AI models.

Details from people’s profiles and public posts are among the data the AI gathers.

According to LinkedIn, the training does not use members' private messages.

LinkedIn also said that, separately from the AI training, Microsoft began receiving information about LinkedIn members – such as profile information, feed activity and ad engagement – as of November 3 in order to target users with personalised ads.

Can you change your mind? Yes. Autumn Cobb, a LinkedIn spokesperson, confirmed to PolitiFact that members can opt out if they don't want their content used for AI training. They can also opt out of personalised, targeted advertisements.

To stop your data from being used for training, go to your data privacy settings, select "Data for Generative AI Improvement" and uncheck the option that reads "Use my data for training content creation AI models."

Source: Aljazeera
