When the Stranger Isn’t a Stranger
Beau Friedlander
Reading time: 9 minutes
What Social Media Is Telling Us About Privacy that We’re Not Teaching Our Kids
We talked with cybercrime expert Paul Raffile about sextortion this week on “What the Hack?” You won’t want to miss this. Within 60 seconds of sending a faked flirty photo, Jordan Stevens—a UK pop artist participating in a documentary experiment—was confronted by a countdown timer and a man’s voice threatening to destroy his life unless he paid £200.
While the photo wasn’t real, the threat was. This is sextortion, and it happened to Stevens on Instagram after his account sat live for just one week. During that week, eleven suspicious accounts made contact. He didn’t have to do anything—no friend requests were sent, no sexually suggestive messages. He just had to exist on the platform as a young male and the scammers found him.
That speed should terrify every parent, but it reveals something more fundamental about our privacy problem: we’ve been having the wrong conversation about stranger danger for twenty years.
The Privacy Paradox
Paul Raffile is an internationally recognized sextortion expert who participated in the documentary and in the experiment with Stevens. He states the problem bluntly: “Instagram is the largest directory of teens for scammers and predators out there.”
Raffile rejects the “social connection” framing to show what we’re actually dealing with. These platforms aren’t just places where bad things can happen; they are themselves the infrastructure enabling industrial-scale targeting: the algorithm as accomplice. And the personally identifiable information (PII) that makes it work isn’t hidden in some dark web database. It’s sitting in plain sight on your kid’s Instagram followers list.
When a sextortion scammer screenshots a victim’s entire contact list—which Instagram allows them to do after a friend request is accepted—they’re not just gathering blackmail material. They’re conducting reconnaissance using data the platform has deliberately made accessible. They can search that list for the victim’s last name, instantly identifying family members. They don’t need to visit people-search sites or pay for background checks. Meta has done the work for them.
The privacy vulnerability isn’t a bug. It’s a feature that should be disabled.
Stranger Danger Update
Remember the original stranger danger talk? Don’t get in cars with people you don’t know. Don’t give out your phone number. If someone approaches you, run to a trusted adult.
Here’s the social media version: Be careful online. Don’t talk to strangers. Never send nudes.
The “never” part survives, but there’s no “what to do when” part anymore. And that omission is literally killing children. At least 53 teenagers have died by suicide in the past three years as a direct result of sextortion schemes. The number of reported cases has nearly doubled, from 28,000 to an expected 50,000 this year, and those are just the cases involving minors in the United States. Adult cases aren’t even being tracked publicly.
The fundamental problem isn’t that teenagers are naive. It’s that the threat model has changed entirely while our safety conversations have remained static. Stranger danger assumed physical proximity. It assumed you’d recognize the moment of risk—a car pulling up, an adult approaching. It assumed you had agency in the encounter.
Sextortion operates differently. The danger arrives through infrastructure your teenager has been told is safe, even necessary for social participation. It arrives looking exactly like what Instagram is supposed to deliver: an attractive peer showing romantic interest. And when the threat materializes, it doesn’t come from “strangers” in any traditional sense—it comes from someone who already knows your kid’s friends, their school, their family structure.
The bottom line: the stranger isn’t strange anymore. They’ve been normalized by the platform.
AI can’t shake your hand, and neither can a scammer in Nigeria. But then comes the harder part, the conversation that makes parents uncomfortable: “Open that safe space where if something bad does happen, they feel comfortable to come to you with no judgment, no questions asked,” Raffile said.
The true privacy discussion we need isn’t about hiding information, but about providing a safe space for disclosure once that information is compromised. The scammer’s power depends entirely on shame; the moment the secret is shared, their leverage vanishes.
One in six males experiences sexual abuse before age 18. That’s not a stranger danger statistic; that’s an everyone-knows-someone-who-knows-someone reality. Which means most adults reading this either experienced something similar or know someone who did. And yet we persist in treating these conversations as exceptional rather than necessary, as warnings rather than contingency plans.
The second half of the safety talk sounds like this: If you do send something you regret, tell someone immediately. Never pay to have the problem go away. That’s like chumming for sharks at the beach where you swim. The scammer will come at you hard for a few days with different accounts, different platforms, different threats. And then they’ll move on, because you’re not worth their time. Block them everywhere. Don’t engage. It’s not sexy advice. It doesn’t prevent the crime. But it prevents the suicide.
The PII Problem Nobody’s Solving
When the documentary crew—using Stevens’ phone number from the barbershop—added the scammer’s contact information, Snapchat helpfully suggested they connect. The barber who had just blackmailed someone on Instagram was immediately identifiable on a completely different platform, linked by the phone number Instagram had helped make relevant and Snapchat had decided to surface.
This is the PII problem in miniature: information collected for one purpose, shared across platforms, weaponized by bad actors, and enabled by design choices that prioritize engagement over safety.
Raffile notes that scammers routinely Google their victims, finding school information, family members’ names and occupations, addresses—“amazing leverage for the blackmailers,” though it’s mostly psychological terrorism. They rarely follow through on threats to distribute material (less than 10% of cases), because doing so costs them their leverage and gets their accounts banned. But teenagers don’t know that. They just know someone has detailed information about their life and is threatening to destroy it.
The privacy violation isn’t the blackmail. The privacy violation is that all this information was collectable in the first place.
Meta has known this for years. Internal documents reviewed by Reuters revealed that up to 10% of Meta’s revenue comes from scam advertisements—meaning the company is directly profiting from the infrastructure that enables these crimes. When Meta announced teen accounts would be private by default and would hide followers lists from accounts detected as potential sextortionists, it sounded like progress. But as Raffile points out, “If they think that there’s a chance of them being involved in child sextortion, why are they not just banning those accounts originally?”
And besides, scammers are already circumventing the protection by adding multiple people from the same friend group and location, making their fake accounts appear legitimate. Because criminals are good at criming. They understand how automated systems work, and they know that all you need to do is add a ghost to the machine, something unguessable and outside the expected pattern, and you’re invisible.
What Legislation Can’t Reach
The temptation is to call for better laws, stricter platform accountability, international cooperation with Nigerian law enforcement. And all of that matters. But legislation can’t reach the fundamental problem: we’ve allowed enterprise to build social infrastructure for minors that requires them to publish comprehensive dossiers about themselves to participate in peer culture, and we’ve allowed those same private companies to monetize that requirement while disclaiming responsibility for the consequences.
Raffile won’t call the resulting deadly fallout “negligent homicide” directly, but he thinks prosecutors should. When a platform creates the conditions for industrial-scale targeting, profits from ads placed by scammers, and responds to a doubling of cases by implementing easily circumvented protections, at what point does the suicide that follows become predictable and therefore perhaps not even “negligent” homicide?
Here’s what you can do:
Talk. Keep the lines of communication open. Tell funny stories about yourself. Make a fool of yourself. Your child needs to know that if they send something to a stranger—whether it’s a sext or a compromising photo—they can tell you about it. No big deal.
Scream. The scammer’s only leverage is shame. The cycle of coercion is broken the minute there’s no secret.
Never pay. You’re just chumming for sharks at the beach where you swim.
Remove your PII. Actively work to remove your personal information from people-search sites and data brokers. Consider asking your extended family to do the same. Not because it will stop determined criminals, but because it removes one more tool from their reconnaissance toolkit.
Have the second conversation. Not just “never send nudes” but “here’s what to do when you do.” Because the when isn’t hypothetical—it’s happening 50,000 times a year to minors alone, and those are just the reported cases.
The stranger danger we need to teach isn’t about strangers anymore. It’s about infrastructure designed to make strangers feel like friends, and friends feel like they could become strangers who know everything about you.
One week. Eleven contacts. Sixty seconds from photo to threat. The directory is open. The scammers are searching. And we’re still telling kids not to talk to strangers?