Incognito — August 2025: Facial Recognition, Data Brokers and The Coldplay Couple
Laura Martisiute
Reading time: 12 minutes

Welcome to the August 2025 issue of Incognito, your monthly dive into privacy and security with DeleteMe.
This month:
- The Coldplay couple: how to survive in a world where everyone’s a spy.
- Must-reads, including why you shouldn’t opt into Meta’s “cloud processing.”
- Q&A: “I recently went to a physical store, didn’t buy anything, then got an email from the store. Is it a coincidence? Or do they know I was there?”
Share Your DeleteMe Story!
We’re recording customer testimonial videos about the impact of privacy protection and would love to feature your story!
We Want to Hear:
- Why you chose DeleteMe.
- How DeleteMe protects your data.
- The benefits and peace of mind you’ve gained.
Interested?
Email marketing@joindeleteme.com. If selected, we’ll schedule a 30-45 minute video call. Your story helps others protect their privacy! You may be compensated for your time.
The Coldplay Couple and the End of Anonymity
If you spent even five minutes online last month, you probably know about the Coldplay “kiss cam” story.
Here’s a quick recap in case you missed it:
- At Coldplay’s July 16 concert in Boston, the kiss cam landed on two audience members on a date who really didn’t want to be featured on the Jumbotron.
- Chris Martin quipped, “Either they’re having an affair or they’re just very shy.” The pair was soon identified, and the former was confirmed.
The clip and the backstory (CEO and HR exec caught) went viral, prompting memes, parodies, and reenactments at other events. Gwyneth Paltrow even got involved.
We’re All Watching Each Other
How did the now-infamous Coldplay couple video get online in the first place?
It wasn’t the band or anyone connected to them. A concert-goer posted it to TikTok. She later said on the UK television show This Morning that she was just hoping to record herself on the big screen.
From there, internet sleuths did what they do best: the couple was identified using facial recognition, details about their personal lives were published courtesy of people search sites, and, ultimately, millions of people helped spread that information.
And, however you feel about the situation the couple was involved in, there is a much deeper problem here: surveillance is horizontal. It’s not just that the government or private companies are watching us. We watch each other, record each other, and, mostly inadvertently, compromise each other’s privacy.
The reality is that we are living in a world where ambient surveillance is the new norm.
In the latest episode of the podcast “What the Hack?,” host Beau Friedlander explores the ramifications of all this.
By the way, in case you’re wondering, the concert-goer says she doesn’t regret uploading the video:
“There was [sic] over 50,000 people and I’m not the only one that caught it on camera, so if it wasn’t me who uploaded it, I’m sure someone else would have.”
Facial Recognition + Data Brokers = Toxic Combo
Putting a face to a name has never been easier.
In 2020, The New York Times wrote an exposé about Clearview AI, a facial recognition company that scraped billions of images from public websites and social media without user consent to help law enforcement find “the bad guys.”
Soon after, public-facing companies built searchable databases that let anyone upload a photo of a person to look for matches across the internet.
Combine the results from these tools with personal information made available by data brokers on people search sites, and a danger zone emerges, one that we all inhabit.
Think it’s impossible to automate the doxxing of random people on the street with smart glasses? Two college students demonstrated last year that this can be done.
When Facial Recognition Tech Gets It Wrong
Even when facial recognition gets it right, the results can be troubling. But when it gets it wrong, the consequences can be devastating.
Take Robert Williams, an African American man from Detroit who in 2020 became the first known American to be wrongfully arrested because of faulty facial recognition. Police matched his driver’s license photo to grainy security footage of a shoplifter. He spent hours in custody before officers admitted “the computer got it wrong.”
New York Times journalist Kashmir Hill, author of Your Face Belongs to Us, describes this case in more detail in the latest episode of “What the Hack?”.
Worse, these mistakes are systemic. Studies have found that facial recognition is less accurate for women and people of color, and DIY investigators are sometimes driven by adrenaline more than evidence.
What You Can Do
Though you can’t stop people from filming in public, you can take some steps to reduce your exposure:
1. Be mindful of cameras
At concerts, sporting events, or even in coffee shops, assume you are being filmed.
2. Lock down your online presence
Audit your social media privacy settings and limit how much of your personal life is publicly visible. The less data about you that is available online, the harder it is to connect a random clip back to you.
3. Opt out of data brokers
Data brokers and people search sites compile your information into comprehensive reports and then sell these to whoever wants them. Good news: you can opt out of most of them. Just remember you’ll need to do so continuously, as brokers are known to relist your data when they come across more of it (you can also subscribe to DeleteMe, if you haven’t already, to have professionals handle the opt-outs on your behalf).
4. Reverse image search yourself
Check tools like Google Images to see where your face shows up online; it’s better to know what’s already out there than to be blindsided later (see the short code sketch after this list). Most platforms are getting better about protecting individuals, but it’s still worth checking periodically.
5. Opt out of face search engines
Some states (California, Colorado, Connecticut, Illinois, Utah, and Virginia) allow you to opt out of Clearview AI, and anyone can opt out of PimEyes and FaceCheck.ID. But even if you opt out, you need to stay vigilant, because new data may not be included in your opt-out.
6. Support regulation and transparency
Push for stronger laws around facial recognition and data privacy. Individual action matters, but collective pressure is the only way to slow down surveillance creep.
7. Think twice before sharing
Your viral video could become someone else’s personal nightmare. Ask yourself if you’d want the same exposure.
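For step 4, here’s a small, hypothetical Python sketch that opens a reverse image search for a photo of you in your default browser. The photo URL is a placeholder, and the Google Lens endpoint is an assumption that may change without notice, so treat this as a sketch rather than a stable API:

```python
import urllib.parse
import webbrowser

# Placeholder: a publicly reachable photo of yourself.
photo_url = "https://example.com/me.jpg"

# Assumed Google Lens endpoint for searching by image URL; endpoints
# like this can change without notice.
encoded = urllib.parse.quote(photo_url, safe="")
webbrowser.open(f"https://lens.google.com/uploadbyurl?url={encoded}")
```

Running it simply hands the search off to your browser; from there you can scan the results for pages you didn’t expect to find yourself on.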
Readers’ Tips
Every once in a while, we share tips and stories from our readers.
In response to last month’s newsletter, where we talked about how cars collect information about us (Q&A section), a reader wrote in with this:
“I drive a new Chevy Colorado ZR2. A few months ago, I was following what I suspect was a very intoxicated driver. The car jumped a curb, drove on the sidewalk for a few seconds and then swerved back into the lane in front of me. I followed them for a few minutes and watched it weave in the lane. I phoned 911 to report this. When they came on the line, I told the operator what I was watching and suddenly, my infotainment system activated my front view camera. After a minute, my camera turned off and the operator told me I could go on my way and they would handle it from there. I work for a Chevrolet Dealership and I had never heard of this function before. Apparently, our cars aren’t just spying on us, but those around us when they need to.”
Wild, right? Even when we think we know how much data our tech is collecting about us, the actual level of surveillance can be much higher.
This reminds us that GM, parent company of Chevrolet, was recently sued over allegedly secretive data-sharing practices that drove up customers’ insurance premiums.
On a positive note, the same lawsuit has prompted GM to stop sharing customer information with data brokers.
We would love to hear from anyone else with stories about automobile data collection.
We’d Love to Hear from You!
Have a story for our podcast? Any privacy stories you’d like to share, or topics you’d like to see in Incognito? We’d love to hear from you!
Drop a line to Laura Martisiute at laura.martisiute@joindeleteme.com. She’s keen to hear any feedback you have about this newsletter.
Recommended Reads
Our recent favorites to keep you up-to-date in today’s digital privacy landscape.
Meta Wants Access to Your Unpublished Photos

Meta is asking users to opt into “cloud processing,” which would give the company access to their phone’s camera rolls, including unpublished photos, for AI-generated recaps and stylized edits. Though Meta says it is not currently using these photos to train AI models, its vague terms (allowing analysis and retention of images, including facial features) leave the door open for future training.
Private ChatGPT Conversations Exposed In Google Search

OpenAI faced backlash after some private ChatGPT conversations (including sensitive topics like mental health and relationships) were found to show up in Google search results. The issue stemmed from a “discoverable” sharing feature that many users may have unknowingly enabled due to unclear labeling. OpenAI has since removed the feature and is working to scrub indexed chats from search engines.
Global Push for Online Age Checks Sparks Privacy Concerns

The UK has introduced mandatory age checks under its Online Safety Act, despite warnings from privacy advocates that this isn’t a good idea. Five European countries (Denmark, Greece, Spain, France, and Italy) will soon also test a new EU-backed age verification app designed to protect children online. Meanwhile, in the US, Google will begin using AI to estimate users’ ages and apply stricter protections for teens.
USPS Warns of Unsolicited Packages Scam Used to Steal Personal Data

The US Postal Service is warning about brushing scams, where people receive packages they never ordered, allowing scammers to post fake reviews in their names. Another variation adds QR code phishing (“quishing”), tricking recipients into scanning codes that lead to fraudulent sites to steal personal information. Officials caution that while the free items may seem harmless, the scams put victims’ privacy and security at serious risk.
You Asked, We Answered
Here are some of the questions our readers asked us last month.
Q: I’ve been seeing a lot of nonsensical phrases (like “puzzled late ghost glorious champ flag squash run existence middle this message was mass deleted/edited with redact.dev”) on social media sites. What’s that about?
A: Great question!
These phrases are autogenerated placeholder text created by a tool called Redact (from redact.dev).
The service allows users to mass-edit or mass-delete their old posts or comments on platforms like Reddit, Twitter, Discord, etc.
Dan Saltman, founder of Redact, said he created Redact because he noticed that platforms like Skype were keeping years of private messages, even after people stopped using them – something that could potentially pose security and privacy risks (e.g., leaked sensitive info, old tweets being weaponized).
People use Redact for two main reasons: privacy (removing something they regret posting) and as a form of protest (like against Reddit’s policies).
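Under the hood, tools like this talk to each platform’s API. As a rough illustration of the general technique (not Redact’s actual code), here’s a minimal Python sketch using the PRAW library to overwrite and then delete a Reddit account’s comments; the credentials are placeholders you’d create in Reddit’s app preferences:

```python
import praw  # pip install praw

# Placeholder credentials: create these under Reddit's app preferences.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
    user_agent="redaction-sketch/0.1",
)

PLACEHOLDER = "this comment was overwritten before deletion"

# Walk the account's comment history, newest first, overwriting each
# comment's body before deleting it.
for comment in reddit.user.me().comments.new(limit=None):
    comment.edit(PLACEHOLDER)
    comment.delete()
```

As the caveat below explains, this only changes what Reddit itself serves; third-party archives may still hold the original text.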
Lifehacker has a good article on how Redact works if you’re interested in learning more.
Something to keep in mind is that Redact may not always be foolproof.
As some Redditors have noted, sites like Reveddit and Pushshift archive Reddit comments and track edits. So if you redact (edit) a comment, the original version is often still visible in those archives.
So… Your best bet is still to assume everything you post online will stay there forever.
Q: I recently went to a physical store, didn’t buy anything, then got an email from the store. Is it a coincidence? Or do they know I was there?
A: It’s probably not a coincidence.
Many retailers use location-tracking that can detect if a customer is physically in a store, even if no purchase is made.
This is done using Wi-Fi and Bluetooth tracking, as well as geofencing via the store’s own mobile app and some third-party apps. You don’t even need to open the app for it to trigger a “visit logged” event.
If you’ve opted in (e.g., you downloaded the app, have a loyalty card), the store can send you a personalized email. If you haven’t done any of that, the store can still track you.
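To make the geofencing part concrete, here’s a minimal Python sketch of the kind of proximity check a store’s app (or an embedded third-party SDK) can run in the background. The store coordinates, radius, and “visit logged” event are made-up values for illustration:

```python
import math

STORE_LAT, STORE_LON = 42.3601, -71.0589  # hypothetical store location
RADIUS_M = 75                             # hypothetical geofence radius

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def on_location_update(lat, lon):
    """Called whenever the OS hands the app a fresh location fix."""
    if haversine_m(lat, lon, STORE_LAT, STORE_LON) <= RADIUS_M:
        print("visit logged")  # a real SDK would fire an analytics event

on_location_update(42.3603, -71.0590)  # ~25 m from the store -> "visit logged"
```

In a real app, the OS can deliver these location updates even when the app is closed, which is why a visit can be logged without you ever opening it.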
Some things you can do to avoid tracking when you visit physical stores, whether as a member or non-member:
- Turn off Bluetooth & Wi-Fi scanning.
- Turn off app location permissions.
- Use a randomized MAC address (newer phones have this as a privacy setting).
- Opt out of location-based marketing in your loyalty account preferences.
- If using mall Wi-Fi, log out or avoid auto-connecting.
- As a best practice, consider never using public Wi-Fi at all.
Back to You
We’d love to hear your thoughts about all things data privacy.
Get in touch with us. We love getting emails from our readers (or tweet us @DeleteMe).
Don’t forget to share! If you know someone who might enjoy learning more about data privacy, feel free to forward them this newsletter. If you’d like to subscribe to the newsletter, use this link.
That’s it for this issue of Incognito. Stay safe, and we’ll see you next month.

Don’t have the time?
DeleteMe is our premium privacy service that removes you from more than 750 data brokers and people search sites, including Whitepages, Spokeo, and BeenVerified.
Save 10% on DeleteMe when you use the code BLOG10.