
Incognito — March 2024: Generative AI

March 1, 2024

Welcome to the March 2024 issue of Incognito, the monthly newsletter from DeleteMe that keeps you posted on all things privacy and security.

Here’s what we’re talking about this month: 

  • Generative AI. Gen AI tools like ChatGPT, Microsoft Bing, and Lensa AI may be fun and useful for some tasks, but they come with a ton of privacy and security risks. 
  • Recommended reads, including “Reddit Sells User-Generated Content to AI Company.”
  • Q&A: I got a scam email from my own email. Have I been hacked?

Last month, OpenAI (best known for ChatGPT) unveiled Sora, a text-to-video AI model. While Sora looks impressive, it also raises security and privacy concerns, even though it hasn’t yet been released to the general public. 

For the moment, the spotlight is primarily on how Sora could aid misinformation campaigns and crypto scams. But what about someone using this technology to replicate how you look or speak? Or extend an already existing video of you? 

Oh, wait. That’s already possible. 

Before we dive into how bad actors use gen AI models, though, let’s talk about the privacy risks of gen AI in general – and how you can use these tools safely (or as safely as possible). 

Use At Your Own Risk

“Should I worry about privacy if I use ChatGPT like a therapist?” asked a Reddit user last year. 

The answers ranged from:

“If ChatGPT helps you, keep using it; it’s a small risk to take” to: “Your data is going to be sold and resold and end up on data brokers.” 

Had the user gone straight to ChatGPT with this question, they would have received the following answer: “Yes, ChatGPT can pose privacy concerns, depending on what information is shared and how it’s handled.”

ChatGPT’s answer applies to all sorts of generative AI use cases, not just therapy – and not just ChatGPT, either. 

Whether you use generative AI models to vent, summarize documents, or turn yourself into a historical character, the data you share typically becomes part of the service’s LLM training data, and there is no knowing where it will end up or who will see it. 

We’ve already seen a bug expose (some) ChatGPT users’ conversation histories; security researchers have proved that OpenAI’s custom chatbots (GPTs) can be manipulated to leak training information and other private data; and both OpenAI and Microsoft say human AI trainers can monitor user conversations – which may or may not be a big deal (insider threats are as big a risk as ever). 

At Least Follow These 3 Best Practices 

Ultimately, if you’re going to use generative AI apps, there are a few rules to follow:

  • Read the app’s privacy policy. What data protection measures does the app have in place? Who might it share your data with? Make sure you know the answers to these questions before you use it. For reference, here’s ChatGPT’s privacy policy.
  • Don’t share any sensitive information in your conversations. OpenAI and Google’s Bard explicitly warn against doing so. But note that even taking precautions does not guarantee privacy. Researchers have found that gen AI models can infer personal information about people from totally innocuous conversations. Moreover, “even if you use a chatbot through an anonymous account or a VPN, the content you provide over time could reveal enough information to be identified or tracked down,” said Ron Moscona, a partner at the law firm Dorsey & Whitney, in a Guardian article.
  • Make the most out of gen AI apps’ privacy and security controls. For example, ChatGPT lets you turn off chat history and export your data. 

Too Late? 

Not using generative AI apps doesn’t automatically mean your personal data is safe. 

Most generative AI models are trained on publicly available data, which, according to Scientific American, can include public LinkedIn profiles, company sites, personal web pages, online forums like Reddit, online marketplaces, government web pages, news outlets, voter registration databases, and more. 

Basically, your personal information is already being used to train AI models, whether you’re aware of this or not. 

There are tools like Glaze, which can make images unreadable to AI, but they’re not future-proof. To avoid having your information used to train AI without your consent, your best bet is to stop posting on the internet. 

It’s Not You, It’s Them 

Speaking of other people using your data, if a professional or business you’re dealing with uses gen AI, they could inadvertently compromise your privacy (for example, a lawyer uploading confidential client data into a gen AI platform). On the bright side, more companies are restricting their use of gen AI. 

Even More Risks

Cybercriminals also love gen AI tools and are increasingly using them for: 

Phishing emails/texts. With gen AI, bad actors can write fraudulent emails and texts in any language and even personalize them at scale, without the typical spelling and grammar mistakes that often give phishing scams away. Going forward, watch out for out-of-date and factually incorrect information instead. 

Voice scams. Thanks to gen AI, cybercriminals only need three seconds of someone’s voice to clone it and use it in a scam. Be wary of calls from your “family” or “friends” who are in trouble. 

Deepfake videos. You might have heard about the finance worker who was tricked out of $25 million. The attack started with an email, which the worker suspected was a phishing attempt. However, after attending a video call with people he believed to be the company’s chief financial officer and a few other members of staff (in reality, deepfake recreations), he was thoroughly taken in by the scam. 

Less Is More

One thing you can do to reduce the likelihood that criminals will use your personal information for gen-AI-enabled attacks like those above is to shrink your digital footprint. The less data (text, photos, audio recordings, videos, etc.) there is about you online, the less likely you are to be impersonated in an email, a voice scam, or a deepfake video. 

We’d Love to Hear Your Privacy Stories, Advice and Requests

Do you have any privacy stories you’d like to share or ideas on what you’d like to see in Incognito going forward? 

Don’t keep them private!

We’d really love to hear from you this year. Drop me a line at laura.martisiute@joindeleteme.com.  

I’m also keen to hear any feedback you have about this newsletter.

Recommended Reads

Our recent favorites to keep you up to date about digital privacy. 

Reddit Sells User-Generated Content to AI Company

Reddit has reportedly signed a contract with an AI company (according to Reuters, Google), allowing it to train its models on user-generated content from the site. The deal is apparently worth around $60 million annually. This comes after Reddit said it would block Google’s and Bing’s search crawlers if gen AI companies didn’t pay for its data. 

Security Glitch at Wyze Leaves Users Seeing Into Other Users’ Homes

A security glitch at the smart camera maker Wyze meant that about 13,000 users could see thumbnails from other users’ cameras, and about 1,500 users tapped on these thumbnails, enlarging them (some users could see actual footage, too). Following the incident, Wyze added a new verification layer for viewing thumbnails and videos. 

1 In 2 Americans Would Trade Their Email for a Discount

A recent survey by NordVPN found that many Americans would share their personally identifiable information (PII) in exchange for discounts and gifts. 87% of respondents said they’d give away at least one piece of PII to get a bargain. 54% said they’d share their email, 51% would disclose their full name, and 12% would reveal their employer’s info. 

DuckDuckGo Has a New Sync Feature

DuckDuckGo has introduced a new end-to-end encrypted “Sync & Backup” feature to its browser. The new feature lets users privately sync and access favorites, bookmarks, and passwords across devices. It’s available on the latest version of the DuckDuckGo browser for Windows, iOS, macOS, and Android devices and comes with recovery tools. 

You Asked, We Answered

Here are some of the questions our readers asked us last month.

Q: Does the “Tell websites not to sell or share my data” setting in Firefox actually do anything for privacy? 

A: Great question! 

This Firefox setting enables the Global Privacy Control mechanism, which automatically asks websites you visit not to share or sell information about your browsing activities on that website. 
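
Under the hood, enabling the setting makes your browser attach a “Sec-GPC: 1” header to the requests it sends (the signal is also exposed to page scripts as navigator.globalPrivacyControl). For the technically curious, here’s a minimal sketch in Python (using Flask) of how a website that honors GPC might detect the signal on its server; the route and responses are purely illustrative, not any real company’s implementation:

```python
# Minimal, hypothetical sketch of server-side GPC detection.
# Per the GPC proposal, opted-in browsers send the "Sec-GPC: 1" header.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    if request.headers.get("Sec-GPC") == "1":
        # A site that honors GPC would treat this visitor as opted out
        # of data sale/sharing (e.g., skip loading third-party trackers).
        return "GPC signal received: your opt-out preference is honored."
    return "No GPC signal received."

if __name__ == "__main__":
    app.run()
```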

As to whether that does anything for your privacy… 

Well, that depends on the websites you visit. Some websites take into account your request not to share/sell your information. Others don’t. 

Here’s a list of companies that support and honor the GPC. Websites that honor the GPC also usually say so in their privacy policy. 

Some companies only honor the GPC if you’re in a specific state/country. That’s because certain laws, like the California Consumer Privacy Act (CCPA) in California and the General Data Protection Regulation (GDPR) in Europe, give people the right to opt out of the sale or sharing of their personal data. 

For example, the exercise equipment and media company Peloton says in its privacy policy that it “honor[s] certain technologies broadcasting an Opt-Out Preference Signal, such as the Global Privacy Control (GPC) depending on the state you are in.” 

Sadly, many more companies completely ignore the GPC. 

Still, it doesn’t hurt to enable this privacy setting. If you use Firefox and don’t have it turned on, go to “Settings,” then “Privacy & Security,” scroll down to “Website Privacy Preferences,” and enable “Tell websites not to sell or share my data.” 

Q: I got a scam email from my own email. Have I been hacked?

A: Maybe, but probably not.  

To see if your email account was hacked, go to your “Sent” folder and see if there are any emails there you don’t recognize. Also, check your login history. 

If you don’t see anything suspicious, it’s more likely that your email address was spoofed, i.e., the scammer forged their email address to look like yours. 
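
If you want to check for yourself, most email providers let you view or download the original message (usually as a .eml file). Here’s a rough sketch using Python’s standard email library to pull out the headers that typically give spoofing away; the filename “suspicious.eml” is just a placeholder for wherever you saved the message:

```python
# Rough sketch: inspect a downloaded message's authentication headers.
from email import policy
from email.parser import BytesParser

# "suspicious.eml" is a placeholder filename for the raw message.
with open("suspicious.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

print("From:        ", msg["From"])
print("Return-Path: ", msg["Return-Path"])  # often mismatched in spoofed mail
for header in ("Authentication-Results", "Received-SPF"):
    for value in msg.get_all(header, []):
        print(f"{header}: {value}")
# "spf=fail", "dkim=fail", or "dmarc=fail" in the output are strong signs
# the sender address was forged, not that your account was compromised.
```

A failed SPF, DKIM, or DMARC check means the message didn’t actually come from your account’s mail servers, which points to spoofing rather than a hack.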

There’s no real risk to you here, but the scammer could email your friends and family pretending to be you (if they know who your friends and family are). 

In any case, situations like these are a good reminder to use a strong password (that’s not reused anywhere else) and multi-factor authentication. 

Back to You

We’d love to hear your thoughts about all things data privacy.

Get in touch with us. We love getting emails from our readers (or tweet us @DeleteMe).

Don’t forget to share! If you know someone who might enjoy learning more about data privacy, feel free to forward them this newsletter. If you’d like to subscribe to the newsletter, use this link.

Let us know. Are there any specific data privacy topics you’d like us to explore in the upcoming issues of Incognito? 

That’s it for this issue of Incognito. Stay safe, and we’ll see you in your inbox next month. 

Laura Martisiute is DeleteMe’s content marketing specialist. Her job is to help DeleteMe communicate vital privacy information to the people that need it.

Don’t have the time?

DeleteMe is our premium privacy service that removes you from more than 30 data brokers like Whitepages, Spokeo, and BeenVerified, plus many more.

Save 10% on DeleteMe when you use the code BLOG10.

Hundreds of companies collect and sell your private data online. DeleteMe removes it for you.

Our privacy advisors: 

  • Continuously find and remove your sensitive data online
  • Stop companies from selling your data – all year long
  • Have removed 35M+ records of personal data from the web

