Big Breaches and Click Bots to AI and Beyond

When a young engineer uncovered a sizeable click fraud operation at Google, he discovered a bigger problem: the perverse ingenuity that drives online fraud and scams. "Big Breaches" author Neil Daswani joined us to talk click farms, data breaches, AI exploits, and the big picture of cybercrime today.

Episode 225


Beau: What would you do if you worked at Google and realized a digital crime ring was being used to defraud advertisers? 

Neil Daswani: Google’s CFO, George Ray, he asked, so you’re telling me it’s two weeks before the end of the quarter and there’s over a hundred thousand machines that could potentially click on Google’s ads and defraud Google’s advertisers. How much revenue? Should we not take credit for it? 

Beau: But what if it doesn’t stop with Google? 

Neil: In the Equifax breach, approximately half the Social Security numbers in the United States got stolen, but just last year, an organization called National Public Data got breached and all Social Security numbers of all Americans got stolen in that breach.

Beau: Your money, your identity, your digital fingerprint, your social media likes and dislikes, all for sale, and now there’s a new player in the mix. Yep. AI is getting really good at fooling people. 

Neil: Being able to tell the difference between what's fake and what's real, it was hard enough before; it's even harder now.

Beau: Today, the alchemy that transformed an early encounter with digital crime into a leading cybersecurity expert's lifelong mission to document and understand the biggest breaches, the security risks that made them possible, and why AI might be writing the scariest chapter yet.

I’m Beau Friedlander and this is “What the Hack?” the show that asks, ‘In a world where your data is everywhere, how do you stay safe online?’

Beau: Neil Daswani, welcome to the show.

Neil: Thank you, Beau, for having me.

Beau: Now, it’s great to have you here and uh, I’m really excited to talk about your book, big Breaches, cybersecurity Lessons for Everyone. And you do have a co-author, Moudy Elbayadi 

Neil: Moudy and I were tied together at the hip at LifeLock. I was the CISO there; he was the CIO. We fought many battles together, and so from all those battles and all our learnings, we thought it would be good to publish what we learned.

B-Roll: We’re joined today by Neil Dewani. Uh, among other things, he, data breaches happen literally all the time. Etc

Beau: Big Breaches is an important book by any measure. When I told a friend of mine in InfoSec, that's information security, about this interview, he was like, oh, Daswani's book Big Breaches should be required reading for everyone. And he meant it. At first blush, I guess that seems extreme, but there is an awareness gap in the world when it comes to cybersecurity. In my opinion, it should be taught, along with privacy hygiene, as a basic life skill in high school. But that aside for the time being, I wanna know how you got into this field, Neil. What makes somebody decide to become a CISO, a Chief Information Security Officer?

Neil: I was studying computer science as a graduate student at Stanford. And I think I have a bit of a healthy paranoia, kind of like the paranoia that Andy Grove, the legendary CEO, talks about in his book Only the Paranoid Survive. Given that natural characteristic, interest, and curiosity, it just seemed like a good marriage of my personal traits and my computer science interests.

Beau: Wherever there’s money access or data online, someone’s already trying to find a way to take it, guaranteed.

Neil: And I think my career in cybersecurity really got kicked off after I joined Google and was one of the first security engineers to focus on the problem of click fraud. To define click fraud, I'd just say that whenever somebody clicks on an ad on the web or a mobile device, money changes hands and goes from an advertiser to somebody else.

And so if you’re a cyber criminal. There are a whole bunch of motivations that you can employ to try to make money off of this kind of ecosystem. 

Beau: So gimme an example of what one of those schemes would be, and what kind of money we're talking about.

Neil: As a cybercriminal, you can spin up a website and lease out the ad space to an advertising network.

And when the ads get clicked on, you get a majority of the revenue from that ad click from the advertiser. But what you can do is hire a whole bunch of bots to click on those ads, to generate artificial clicks. And then, as the cybercriminal, you get to benefit from that revenue that comes in.
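To make the economics of the scheme Neil describes concrete, here's a back-of-the-envelope sketch in Python. Every number in it (cost per click, revenue share, click volume) is a made-up assumption for illustration, not a figure from the episode.

    # Back-of-the-envelope economics of the scheme described above.
    # Every number here is a hypothetical assumption, not a figure from the episode.
    cost_per_click = 0.50        # what the advertiser pays per click (assumed)
    publisher_share = 0.68       # the publisher's cut of that payment (assumed)
    bot_clicks_per_day = 20_000  # artificial clicks a botnet generates (assumed)

    daily_fraud_revenue = cost_per_click * publisher_share * bot_clicks_per_day
    print(f"Fraudulent publisher revenue per day: ${daily_fraud_revenue:,.2f}")
    # ~$6,800 a day at these assumptions: tiny per click, meaningful at scale.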

Beau: And was that your job, to figure out if the attack was happening? Did you identify what these were? What do you call them, click farms? Were they bots? What did you call 'em, and did you figure it out?

Neil: Up until May 2006, there were tons of click farms, where people in developing countries were getting hired to click on ads to conduct click fraud.

Beau: Oh, so actual people just click, click, click, clicking.

Neil: Yeah, they would hire actual people in Russia, India, wherever. But if you wanna get even more efficient with your attack, what you do is write software, malicious software or malware, to click on the ads for you instead of hiring humans to do it.

And what happened in May of 2006 was that there was a piece of malware, named Clickbot.A, that was getting distributed. It got out onto a few hundred machines on the internet, and it wasn't clicking on any of Google's ads at the time. I happened to be running that particular sample in a personal lab at home.

And what happened in mid-June 2006 is that the malware got distributed to over a hundred thousand machines. And at around the same time, the firewall in my lab reported to me that it had clicked on a Google ad.
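In the spirit of Neil's home-lab setup, here's a minimal Python sketch of that kind of alert: scan outbound HTTP logs from a machine running a malware sample and flag anything that looks like an ad click. The log format and URL patterns are illustrative assumptions, not real ad-network endpoints or Neil's actual tooling.

    import re

    # Hypothetical sandbox rule: alert if a machine running a malware
    # sample ever requests a URL that looks like an ad click. The patterns
    # and log lines below are invented for illustration.
    AD_CLICK_PATTERNS = [
        re.compile(r"adclick", re.IGNORECASE),
        re.compile(r"/aclk\?", re.IGNORECASE),
        re.compile(r"doubleclick\.net", re.IGNORECASE),
    ]

    def scan_outbound_log(lines):
        """Yield log lines from the sandboxed host that look like ad clicks."""
        for line in lines:
            if any(p.search(line) for p in AD_CLICK_PATTERNS):
                yield line

    sample_log = [
        "GET http://update.example-c2.net/cmd HTTP/1.1",
        "GET http://ads.example.com/pagead/aclk?sa=L&ai=xyz HTTP/1.1",
    ]
    for hit in scan_outbound_log(sample_log):
        print("ALERT: sandboxed sample clicked an ad:", hit)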

Beau: That’s weird. And that is very weird that, that, that happened. It shouldn’t be doing that, right? 

Neil: Yeah. So malware can have all kinds of motivations. What I suspect was happening is that the cybercriminal was testing. When the malware was only on a few hundred machines, it was clicking on ads of other, less defended ad networks. They were seeing how well their attack might work, and they were testing out their infrastructure.

And then of course, once they saw things working, they were like, okay, let's target the big ad network. Let's target Google.

Beau: Think of it like a counterfeiting ring, sort of, right? Only instead of printing fake money, they're printing fake clicks. They're not literally printing them, but they're generating millions of tiny digital bills, as it were, that look like the real thing to the system.

Neil: Google’s CFO, George Rays, he had, you know, previously said that click fraud could be the biggest threat to Google’s business model. And he was exactly right and when that particular incident had occurred, it’s two weeks before the end of the quarter. There’s over a hundred thousand machines that could potentially click on Google’s ads and defraud Google’s advertisers.

He asked exactly the right question.

Beau: On the surface, clicks are tiny fractions of a cent, but Google's whole business is built on scale. Advertisers pay for clicks and expect those clicks to be real customers. If criminals can fake clicks and funnel those tiny payouts into their own pockets, the immediate result is stolen ad dollars, and the even bigger worry is loss of trust.

Advertisers only keep spending if the traffic and metrics actually mean something, right? If it converts, that's what it's called, converting. So, you know, you listen to an ad and you're like, I want that. In my case, mushroom coffee. (I didn't name a mushroom coffee, but if you wanna send me a whole case of mushroom coffee, I'm all ears.) Anyway.

Advertisers pull budgets, right? Partners demand refunds. Regulators start sniffing around, as they should, and the company, you know, company X, Y, Z, has to spend millions on detection and remediation, all while trying to prove its numbers are honest, which they may not be. Maybe they are.

Who knows? It’s a problem set. It’s an, it’s a, it’s a problem set that doesn’t need to exist, but it does. In short, it’s not just fraud, it’s an attack on the product Google sells–trust. Well, Google doesn’t scream trust to a lot of people. I mean, they did have the slogan “Don’t be evil,” and they had that because, well, one would assume there were certain tendencies, but at any rate, trust 

Neil: Google’s CFO, he asked how much revenue should we not take credit for? I was, you know, honest, honest engineer. I said, I don’t know. But I committed to my management that if you gimme two weeks, I’ll figure it out. So we called the Code Yellow, which gave me the opportunity to tag anybody that I needed at the company and. Tell ’em, Hey, whatever you’re working on right now, could you please put that on hold? I need help with this. 

Beau: Yeah. Yeah. 

Neil: And so there were a few dozen engineers across the company that we mobilized in all kinds of ways. And after two weeks of getting like only four hours of sleep a night, we got to the answer. The answer was that the amount of fraud taking place was only $50,000 against Google.

So that's round-off error for a company like Google. The real lesson was, hey, we're a big company, why are we relying on Neil's lab at home to tell us about this sort of stuff? Google did have a bunch of click fraud defenses already in place, but what we realized is that we needed more. And so that incident was really the launching pad for my career in cybersecurity.

Beau: Do these click farms, or not click farms anymore, but bots, botnets, can they actually rack up a lot more fraudulent clicks?

Neil: Yes, absolutely. There is potential for attackers to make a lot more money from click fraud.

Beau: So in general, if you were to generalize, we're talking about, well, north of six figures, maybe seven figures in fraud.

Neil: Absolutely. Cybercriminals regularly monetize at those levels using a variety of attacks. Click fraud can be one of them.

Beau: It's interesting, 'cause I don't think of click fraud as being that huge, because of tonnage, you know? Nowadays the ads are sold at auction, in big blocks, and it's not that expensive; you can buy things for micro-pennies. But I get that it can rack up.

Neil: I think that Google’s approach to this was to not try to. Battle it with mechanics or try to find every possible piece of malware that could do this. Instead, they took the approach of we want to make click fraud economically unprofitable. And that’s the reason that probably we don’t hear as much about click fraud as we did many years ago.

Beau: I know from having set things up in Google myself in the past that what they did was, you have to be you to do it. Now it's a lot harder to get in there as a criminal, and again, the way in, if someone's doing it, is by using somebody else's personal information. There's an element of identity theft going on.

Neil: Yes, that’s right. So they have relied a lot more on vetting the identities behind the people that lease out ad space to websites, the advertisers. In fact, it would very often be, you know, malicious actors from Russia, from from China, from other geographies. That would be conducting these attacks. And so, so that is also, you’re exactly right why identity is important.

When somebody can steal an identity, then it gives them a launching pad to launch a whole bunch of other types of attacks as well. 

Beau: Which is why, like, I'm wondering: what is your view now of the glut of personal information that's out there? You were at LifeLock, so you're aware of the peril.

You know exactly how it is used. In your view, has it gotten better or worse since you were at LifeLock?

Neil: So at LifeLock, I was glad to have the opportunity to help protect the identities of the several million LifeLock members. I was also responsible for securing a database, used by one of LifeLock's subsidiaries called ID Analytics, that had the credit histories of all 300-plus million Americans.

That’s the database that kept me up the most at night. We would, we would use that to detect fraud rings proactively, and help protect both our members as well as other people too. So there’s been, there has been a lot of progress. Uh, that’s, that’s been made. There’s still a lot of attacks that that happen because in the Equifax breach, approximately half the Social Security numbers of the country got stolen.

But just last year an organization called National Public Data got breached and all Social Security numbers of all Americans got stolen in that breach. 

Beau: I was at Cyberscout before I was with DeleteMe, and the high-touch identity theft resolution is one part that's really important, because if you are the victim of identity theft, you're gonna spend hours, and I mean double or even triple digit hours, sorting it out, depending on how far into your life and your finances and your property they've gotten.

Now, in addition to that, some things are gonna be hard to get back, and that's where there is insurance for this now, right, Neil? There are some places that will actually help you become whole again.

Neil: Yeah. So the progression of identity theft protection services started with the assumption that information about you may already have been stolen.

Beau: Because once your information is out there, your name, your number, your life, your email, your shares, your clicks, it exists as a file, as a dossier. And it's not an account that you can change a password on, right? You can't. It's just out there.

Neil: And in any cyber attack, you wanna prevent as much as you can. But preventative defenses may not be a hundred percent, right? So you wanna be able to detect when the preventative measures have failed, you wanna be able to contain the attack once it occurs, and you wanna be able to recover from it as much as possible. And as it relates to identity theft protection, it is important to be able to detect when your information is getting abused.

But one other important part is the insurance part of it. And so identity theft protection services typically started by giving what were called service guarantees, so that if your identity gets stolen, a member services agent will help make all the calls that need to be made and reach out to all the different organizations that need to be contacted to recover from that.

And initially, identity theft protection companies would give some amount of service guarantee, like, say, a million-dollar service guarantee, to pay for the lawyers and attorneys and all the people that follow up and help. But one great advancement in the field, first rolled out by LifeLock, was to not just have $1 million in service guarantees. On their higher-end plan, what you'd get is stolen funds reimbursement insurance. So if, working together with LifeLock, you were not able to get the money back, then, up to say a million dollars, they would work with an insurance company under the hood to pay you back dollar for dollar for whatever funds got stolen.

Beau: Protection is catching up, but so are the threat actors. When we come back: how AI makes really convincing liars out of ordinary criminals.

Ad break

Beau: Okay, Neil, let’s talk about romance games. Because unlike breaches or hacks, money and information is willingly given in that scenario, and that money cannot be sucked back by anybody and it can’t be returned because at the end of the day, you, if you are the victim of a romance scam, you have believed that you’re sending the money to somebody who, you’re in a relationship who’s asked for it and you willingly gave it to them, and the fact that you were bamboozled is, is not legally a sustainable position.

Anytime a chunk of change is moving, you know, changing hands, you need to be extremely careful, asking once, twice, three times: Is this you? Am I supposed to be sending this money to you? Can you confirm that you are blank with blank? And calling back and saying, did you just get a call about this confirmation? Making sure you're calling the real number. All of that matters, because we're now entering a world where AI is being used in the commission of various crimes.

Now, let’s start there. Everyone’s talking about Sora right now, a Open AI’s new program where you can create a persona and then make videos based on that persona. It seems to me that’s a hop, skip, and a jump away from a deep fake generator that is very user-friendly.

Neil: I do think that over the long term, there are many benefits that are gonna come out of the use of AI and generative AI of all kinds. So AI is a dual-use technology. It can be used for good, it can be used for bad. I think one way to frame this is that prior to the release of ChatGPT, built on GPT-3.5, back in November 2022, many users would have trouble telling the difference between what's real and what's fake with phishing messages, for instance. It was hard enough back then. And one of the challenges that's come with generative AI is that it's able to generate much, much better phishing attacks, malware attacks, et cetera.

So being able to tell the difference between what's fake and what's real, it was hard enough before; it's even harder now. And so I think that when deepfakes get used as part of wire fraud scams, romance scams, et cetera, we need to put more focus and more vigilance on that, at least until there are more scalable defenses.

You know, one recommendation might be, if you're doing a large dollar transaction, if you're doing a wire over a certain amount, meet in person with the person you're transferring money to. If there is an internal sponsor at a company that's buying something from a supplier, and there's an in-person relationship where you can meet, look at that person's ID, and authenticate them in a very strong way, that would be much harder for a cybercriminal to fake remotely and online.

Beau: AI can’t shake your hand. The pretexting and the phishing emails now are better. And they’re better because people are using generative equipment, uh, software to create these messages. And, and they sound great. They sound exactly the way they’re supposed to sound. Even when they’re hallucinating. Even, I don’t care, even if they get it a little bit wrong, try talking to me at six in the morning. I get things a little wrong too. You know, that’s one part of it, but the thing that keeps me up at night, when you were talking earlier, you said, you know, ID analytics kept you up at night because of the sheer amount of data there that that could cause so much harm. The thing that keeps me up at night now is like, what if every single AI prompt I ever put into the three different AI software companies I use right now got leaked. I guess it wouldn’t be that big a deal ’cause it’s for work and you’d be like, oh, I didn’t realize he was that dumb. But, uh, but the fact is, whereas I use it for work, I think more about children who are using it as a friend and their, their spill in their guts to this, this chat that seems like it’s there and has empathy and is, is interested and caring and it’s, and it’s not—what it’s doing is it’s gathering information. 

Neil: So I’ll tell you that the level of responsibility and accountability that I felt I had for that database with all the credit histories, uh, you know, definitely, definitely kept me out at night. And I guess one of the things I should mention is that a lot of the research for the big Breaches book I had done before I had even started at LifeLock because I had, I had said to myself that, you know, if we, God forbid, got breached for some other reason, that, you know, some other organization has already gotten breached, well that’s not good.

You know, if there were some nation-state that targeted us, and some zero-day attack was used, and it was super sophisticated, I mean, that's a different story. But I took that responsibility very seriously.

With regards to what you're saying around AI, and to bridge things there: thus far, there have been breaches of AI systems. For instance, in March 2023, OpenAI had two breaches that took place. One was due to a software bug in a database they were using called Redis. There was a concurrency bug in which third parties could see the initial prompts, conversation titles, and responses that were going in.

So the kind of concerns that you mentioned are exactly on the mark. And they had to shut down all of ChatGPT for more than five hours in order to address that issue before they were able to bring things back online.
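The class of bug Neil is describing can be illustrated with a toy model: a shared connection returns responses strictly in order, so if one client abandons a request without reading the reply, the next client receives data meant for someone else. This Python sketch is a simplified stand-in for that failure mode, not OpenAI's or Redis's actual code.

    import queue

    # Toy model of the bug class behind the March 2023 incident: a shared
    # connection returns responses strictly in order, so an abandoned
    # request leaves a stale reply that the next caller mistakenly receives.
    class SharedCacheConnection:
        def __init__(self):
            self._responses = queue.Queue()

        def send(self, user):
            # Pretend the server answers with that user's conversation titles.
            self._responses.put(f"conversation titles for {user}")

        def recv(self):
            return self._responses.get()

    conn = SharedCacheConnection()
    conn.send("alice")
    # alice's request is cancelled before her reply is read, so her response
    # is never drained from the shared connection...
    conn.send("bob")
    print("bob receives:", conn.recv())  # -> alice's data: a cross-user leak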

Beau: Five hours. That’s how long one of the biggest AI systems in the world went dark after a leak. Five hours. Five hours to patch, reboot and pretend nothing happened. Nothing to see here. I mean, in another industry that had, uh grounded flights for days, but we don’t have a TSA for this stuff, right? 

Neil: Yes. 

Beau: We don’t have the TSA, but we’ve had the events and five hours is not a long time to shut something down that just did something bad.

And what do I mean by bad? A breach is bad. We can agree: breach bad. So if we had that TSA, I'm gonna go out on a limb and say they might be shut down for a week. If it really worked, they'd have to prove it; they'd have to get the black box and tell us exactly what happened. (I know that's not TSA, but stick with me; we're still in aviation security.) You know, if they had to go get that black box and analyze it and figure out what happened, it would be a good while before they let the Airbus take off again. That's how it is in the airline industry. Why doesn't tech have the same thing?

Five hours? Gimme a break. If there's a real thing like that happening, I'm gonna sit here with my arms crossed and say: show me it's not gonna happen again.

Neil: By the way, back in 2021, when the Colonial Pipeline got breached, it was down for an entire week. Indeed. A Russian cybercriminal gang did that attack, and Colonial wasn't able to ensure even the safety of the people who worked on the pipeline.

I mean, they had to shut down for a week, and then they paid the ransomware attackers. So I agree with you: five hours, not a big deal. One week of an oil pipeline being down, that's when ransomware was deemed a US national security threat. So it's been good that there's been recognition of how important these things are.

Yeah. But, you know, like you mentioned, there is no equivalent of a TSA for this.

Beau: But I guarantee you, five years ago, ten years ago, you and I could have been sitting having lunch somewhere, and we could have spun up half the things that have happened since then and said, this could happen. So let's talk about the low-hanging fruit of the "this could happen," and it's AI. I think there still could be, and I know you think it too, a big AI breach. What are we looking at? Are they gonna shut down for five hours?

Neil: I do think it is really important to raise awareness and share knowledge about the additional attack surface that comes with AI.

I mean, that's one of the reasons that, in the work that I do with Stanford Online and the Advanced Cybersecurity Program there, we just rolled out a course on AI security that specifically focuses on teaching folks how to address hallucinations, how to address prompt injections, jailbreaks, and adversarial examples, how to avoid abuses from deepfakes, and this sort of thing.

And that’s why I think education’s really important, but I do think that that is just a first step. 

Beau: It’s serious. People really had their days ruined. Right. And there’s a certain point where the welfare of users needs to take precedence over the march of progress. And so how do you–as a cybersecurity professional who probably would like to get it right every single time, but knows that your whole business is predicated on the fact that that’s impossible–how do you thread that needle right now, Neil? What’s the what? How do we thread that needle? I. 

Neil: I think we need to put more funding into AI security, in addition to traditional cybersecurity. If you speak to people like Vinod Khosla, one of the titan investors of Silicon Valley, he has said that he believes we should overfund AI safety as a society, so long as it doesn't stop progress, because there are risks on both sides.

Beau: But is that like a government thing? Or would that be a third-party business? Or would that be AI companies self-governing?

Neil: Yeah. So that’s not a government thing, that’s, that’s more of Vinod Khosla as a visionary investor giving guidance to what he thinks should happen so that we can leverage both the benefits of AI, but also mitigate the risks. 

Beau: I understand it’s guidance, but what would it look like if it were in practice, not guidance. Would it be a third party business or is it something that, you know, ask the government to regulate it, which I think everyone will say a resound, everyone will set it up and go, no, no, no, no, that’s not what we mean. That’s not what we mean. So what, what does it look like? What does it, what does an actualization of AI security look like right now? 

Neil: So I think that if we look at the EU AI Act, the AI act that came up in Europe, what they do is, for every application of AI, they break it down into different levels of risk categories. For the highest risk, where there could be harm to humans and their lives and such things, there's more regulation, whereas for systems that are low risk, there's less regulation. That's one model. There's a super interesting discussion taking place in this country around: should we regulate, should we not? How much should we regulate, how much should we not? And one analogy we could keep in mind is that of the German Autobahn.

I think a lot of people are worried about making sure that the US wins the AI game overall, right? And so we wanna go fast. But if we look at the German Autobahn, it's one of the most famous, fastest highways in the world. And it's not because there are no rules; it's because the lanes are well engineered, there are guardrails, the drivers are disciplined, and because of all of that, cars can move faster on that highway than on other highways. So one hypothesis is that if we regulate right, it will actually allow us to move faster, instead of having to hit the brakes every time some obstacle comes up. And I think, for instance, Facebook learned this. They initially had the mantra of "move fast and break things," but Mark Zuckerberg himself, after there were a bunch of data breaches, changed that to "move fast with stable infrastructure."

Beau: It doesn’t, doesn’t have the same ring to it, but from a cybersecurity point of view, it does feel a little safer.

I mean, you know, when I think about things moving fast, especially cars, I think about dummy tests. Right. And I don’t wanna be the dummy. I don’t. And so what are some tips for me–I don’t want to be a crash dummy–for listeners to this podcast who want to move fast? Who want to get where they’re going quickly, but they don’t wanna be a crash dummy.

They want to, they wanna be safe, but they don’t wanna be. Sorry. Um, what are some rules for the road for regular people when it comes to cybersecurity? ’cause we know at the enterprise level there’s a lot of stuff, but for listeners to the show, there’s some CISOs in the crowd, but there’s also a lot of people who are just interested and want to learn how to do stuff better.

Neil: Got it. Okay. So basically, for your average consumer, if you're using AI, what should you do? I think you should be very careful about who you're trusting with your data, which AI services you use. For instance, if you're using Google for a whole bunch of services, whether it be Gmail or the rest, and you trust Google, then maybe that's a reason to use Gemini, because you already trust them and they already have a whole bunch of your data anyway. And because they've been using AI...

Beau: Might as well keep it all in the same place. 

Neil: Yeah, yeah. Right. That's the reason. There are many newer AI companies. If I think about Anthropic, for instance, one of the interesting things about Anthropic is that the two founders, Dario Amodei and his sister Daniela, left OpenAI to start Anthropic, in part because they wanted to develop a safer, more secure AI model.

So I think it is important to do research on the different AI models and services and, based on that, figure out what's the right one for you. Who do you feel you can trust, and why?

Beau: Neil Daswani, thanks so much for being on "What the Hack?" The book is Big Breaches: Cybersecurity Lessons for Everyone. If you want to go deeper into the stories we touched on, Big Breaches is a really good place to start. Thanks again, Neil.

And now it’s time for the Tinfoil Swan. You’re paranoid takeaway to keep you safe on and offline. PII, personally identifiable information. It’s not just data. It’s the raw material, the literal weapon used by scammers and hackers to target you and is often sitting online for the taking thanks to people search sites, your name, your addresses, your phone numbers, your digital fingerprint, what you like based on what you post on social media data brokers package. All of it. The raw material of a fraud scape that costs people billions every year.

Your vulnerability is worth a fortune to all the wrong people.

Here’s what you can do about it. Find your leaks and treat them like an active breach. Go to haveibeenpwned.com. That’s HAVE I BEEN poned PWNED.com. Plug in every email you’ve ever used. If you get a hit a criminal owns those credentials, put your phone number in two again that’s out there. The only defense change what you can change.

Change any password you find on haveibeenpwned immediately, for sure, and if you reused it anywhere, change it there too. A single leaked login is the key to unlocking your whole life. And if you're not good at this stuff, and a lot of people aren't, let's just be honest: you either are or you're not, and you know the answer to that question.
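For the technically inclined, the same kind of check can be scripted against Have I Been Pwned's free k-anonymity password API: only the first five characters of your password's SHA-1 hash ever leave your machine. (The email lookup described above happens on the website; the breached-account API requires a paid key, so this sketch checks passwords instead.) A minimal Python sketch:

    import hashlib
    import urllib.request

    # Minimal sketch using Have I Been Pwned's free k-anonymity endpoint:
    # only the first five hex characters of the SHA-1 hash are sent.
    def pwned_count(password: str) -> int:
        """Return how many times a password appears in known breaches."""
        digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        req = urllib.request.Request(
            f"https://api.pwnedpasswords.com/range/{prefix}",
            headers={"User-Agent": "tinfoil-swan-sketch"},  # courtesy header
        )
        with urllib.request.urlopen(req) as resp:
            for line in resp.read().decode().splitlines():
                candidate, _, count = line.partition(":")
                if candidate == suffix:
                    return int(count)
        return 0

    if __name__ == "__main__":
        # Never paste a password you actually use into example scripts.
        print(pwned_count("password123"))  # known-bad password; large count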

Second, destroy the commodity. You can scrub the building blocks of your actionable PII: your addresses, your phone numbers, your email addresses, family, old roommates. Yes, all of that can be figured out with people search sites. With a service like DeleteMe, you can get rid of some of that stuff. You can start chipping away at what's out there. Chosen by The New York Times' Wirecutter as best in class for personal information removal, it's human-assisted automated removals, and the removals don't stop needing to be done; that's why it's a subscription. Your PII is like a weedy garden. You can pull the weeds, that's your stuff, right? But they will pop up again and again, so you gotta stay on top of it.

Now, the bottom line is: the less PII that exists about you in the ecosystem, the harder it is for a scammer to create a pretext to scam you. That's your job. The paranoia isn't about being unhackable, because no one is unhackable. I got hacked today, by our head of security. Thanks, Reuben. It's about becoming harder to hit. Be a good steward of your privacy. Be paranoid. Be safe, and see you next week. Thanks for listening.

"What the Hack?" is produced by Beau Friedlander, that's me, and Andrew Steven, who also edits the show. "What the Hack?" is brought to you by DeleteMe. DeleteMe makes it quick, easy, and safe to remove your personal data online, and it was recently named the number one pick by The New York Times' Wirecutter for personal information removal.

You can learn more about DeleteMe if you go to joindeleteme.com/wth. That's joindeleteme.com/wth, and if you sign up there on that landing page, you will get a 20% discount. I kid you not, a 20% discount. So yes, color me phishing, but it's worth it.
