
This Week on What the Hack: AI Regulation

Join Ben Winters, director of AI and Privacy at the Consumer Federation of America, for a front-row seat to the chaotic world of AI regulation where dirty data, aggressive lobbying, and public confusion drive a “winner takes all” vibe. From shady therapy bots to data-fueled discrimination, Ben reveals the urgent need to create guardrails for AI before it jumps the tracks.

Episode 208


Ep. 208: “Will AI Write Its Own Laws?”

“What the Hack?” is DeleteMe’s true cybercrime podcast, hosted by Beau Friedlander.

[00:00:00] Beau: It started with a promise that AI could solve problems faster, smarter, and more objectively than any human ever could.

[00:00:08] Ben: When there is very limited or no controls on the way you have data about what credit card transactions you’re making, what websites you are going to, what the content of your emails are, all of those things are being sold often for the purposes of advertisements or in other cases more nefarious things.

[00:00:23] Beau: This week, we dig into what happens when regulation stays largely aspirational, oversight is virtually nonexistent, and the technology gets deeper grooves in our collective psyche by the minute.

[00:00:33] Ben: But if you’re not allowed to do any sort of common sense regulation, then those things will not only continue to happen, it’ll actually get worse.

[00:00:41] Beau: Because beyond the headlines and promises is the bad moon of a Mars-or-bust, move-fast-and-break-things mentality.

[00:00:46] Ben: The data is dirty, and the way they are implementing the system is irresponsible and unaccountable.

[00:00:52] Beau: I’m Beau Friedlander and this is What the Hack, the show that asks, in a world where your data is everywhere, how do you stay safe online?

[00:01:05] Beau: Ben Winters, thank you so much for joining us today. Ben is the director of AI and Privacy for Consumer Federation of America down in Washington, DC. Good afternoon, Ben.

[00:01:15] Ben: Good afternoon.

[00:01:15] Beau: It’s the day after the battle. Thanks so much for making some time to talk.

[00:01:19] Ben: Yeah, it has been a very long few days after a long month or two.

[00:01:25] Beau: Okay, so last week the Senate voted 99 to one in favor of 86’ing the ban on AI regulation at the state level from Trump’s, quote, “Big, Beautiful Bill.” Unquote. A lot of experts have warned this would gut basic consumer protections at the exact moment AI is starting to cause real harm, and that’s not according to me. That’s according to a recent MIT study about the effect AI has on brain engagement. Now, Ben, what concerns you most about a move like this?

[00:01:46] Ben: I mean, I would say there are two main things I’m concerned about. One is the political part of it, right? That this moratorium idea, “you can’t do anything to control what we do,” is gonna take hold, is gonna be given validity, is going to actually happen and continue to be treated as legitimate. And the second thing stems from that. There are a lot of harms going on right now that state laws, or potentially federal laws, could address. But if you’re not allowed to address them, that is the biggest worry, right? Whether that is scammers using generative AI systems to pretend to be your child, or to pretend to be a celebrity advertising some bad crypto investment or a bad supplement or something like that. There are concerns about the use of automated systems in hiring, housing, credit. And then there are also mental health and manipulation concerns about the way generative AI, as well as non-generative AI, is being integrated into all of it. So that second concern is really that if we’re not allowed to do any sort of common sense regulation, then those things will not only continue to happen, they’ll actually get worse. People will be able to act with impunity, with no concern about having the law enforced against ’em. We’d see more deepfakes, more scams, we’d see more accidents with things like Waymo or self-driving Teslas, and a lack of ability for governments to protect their people.

[00:03:16] Beau: Exactly. And what makes this even more alarming is that it’s not just about blocking future regulation, it’s also about stopping states from enforcing existing laws, laws they’ve already passed.

[00:03:26] Ben: That’s right. Yeah. So one of the important parts to understand about this AI moratorium effort is that the federal government was trying to restrict states from regulating AI, or from enforcing laws about AI they had already passed, without offering, “Here’s how we want to regulate it at the federal level.” Oftentimes when the federal government prevents states from doing something, it says, “We’re regulating it here, we’re creating a national standard, and you can’t deviate from that.” That happens occasionally, but what happened here is unprecedented: “We’re not doing anything, and you are not able to do anything either.” There have been laws passed throughout the country on things like deepfakes, the use of therapy and mental health chatbots, and AI speech in elections. And many others are being actively considered in statehouses around the country, while the federal government is failing to do anything to protect consumers.

[00:04:19] Beau: What we saw was a clear signal. The special interests with billions on the line are pulling out all the stops. That’s why they pushed so hard for a 10-year moratorium, even though they lost. If it had passed, it would’ve tied the hands of lawmakers and stripped away protections for consumers and creators alike for way too long. It’s been called anti-democratic and anti-accountability, and critics said it would lead to incalculable damage to all Americans. Earlier you mentioned that some kids today are turning to AI as a kind of therapist, a friend, confiding extremely personal things, and we know not all AI systems are built the same. There is a difference between, say, ChatGPT and some sketchier LLMs, but what they all seem to be doing is racing ahead, trying to get as much done as possible before the law can catch up. Is that a fair thing to say?

[00:05:13] Ben: Yeah, that is accurate. I think the policies vary service to service around how they can collect and use your data—whether that be data about the device you’re using or the content of the messages that you’re sending to the chatbots. For the most part, companies are saying they can do whatever they want with those inputs. They can train and improve their model. They can use it for advertising. They can do a lot of different things, but it really depends, company to company, offering to offering, how the system is moderated and what the policies around it are. There are the big offerings like a ChatGPT or a Claude, as well as a Grok. But then there are things that are character-based, like Character.AI, where there’d be something that says, “I am your therapist. I’m a licensed therapist, and everything you say here is confidential.” In reality, that is not the case. There are clear statements to the contrary in the privacy policy. So it’s misleading to consumers, especially for companion services or mental health services, and it’s exploitative in general, because they’re taking your data and doing whatever they want with it. And this connects to the lack of data privacy laws around the country as well.

[00:06:25] Beau: Right, and that is where we’re going. Back in May, you wrote a piece with Kara Williams from the Electronic Privacy Information Center in the run-up to this “Big, Beautiful Bill.” The first thing that jumps out at me is how vulnerable Americans are to AI-driven tech, and the fact that most Americans don’t understand what that looks like, because it’s not just ChatGPT. Can you talk a bit about what AI means in the big picture to somebody who’s an expert in privacy right now, as these technologies grow?

[00:07:00] Ben: Yeah, I mean, when we’re talking about a ban on all AI regulation or automated systems regulation, we’re talking about so many different things. We are talking about something like ChatGPT or an image generator, but we’re also talking about the algorithms that determine whether you get a line of credit, or whether you get the next interview in a job process. There are algorithms that are used for personalized pricing. There are algorithms used throughout the criminal legal cycle, and there are all sorts of AI systems, automated systems, used everywhere, across industries that are pretty much uniformly opaque and unaccountable. So whether it is the generative AI that a lot of people are talking about, or the AI that orders what you see on your social media feed or on a Netflix feed, all of those things could reasonably be called AI. Same thing goes for self-driving cars; that is AI too. So when people try to ban regulation of this whole swath of technologies, it is sort of nonsensical. Each topic, each application has different challenges and different harms, and should have a different regulatory framework.

[00:08:09] Beau: A lot of the focus right now is on generative AI, large language models that write essays, answer questions, and so on, but you’re also talking about an older form of AI: algorithmic personalization. That’s the system that in the best case scenario customizes our online experiences using personal data and worst case scenario manipulates us via social media, retail sites, as well as news and political content. These algorithms can actually shape what people believe to be true. So how is something that powerful still unregulated in the United States, and what efforts, if any, have been made to change that?

[00:08:44] Ben: So in order to fuel that algorithmically driven information ecosystem, including the ads you get served, the content you get served on the services you’re using, and the increasing problem of what’s called AI slop—AI-generated text and photos and videos that are just there to try to get clicks—there are a few different elements. The first element is the privacy problem, right? When there are very limited or no controls on data about what credit card transactions you’re making, what websites you are going to, what the content of your emails is, all of those things, including your location through your phone, are being sold, often for the purposes of advertisements. In other cases we’ve seen examples of data brokers selling live geolocation data about people visiting, say, a Planned Parenthood to a third-party advertiser, which then uses it to target anti-abortion rhetoric. The same thing has been seen with the sale of sensitive information in other cases. There are very few limits on collecting any type of data we generate through our phone, through our card, through our location, through any other devices, our smart watches, whatever it is. For the most part, there are no meaningful restrictions on either the primary uses or the secondary uses of that data. Primary uses refers to the data you think you are giving for that specific reason, right? You’re giving credit card information to buy that car. You’re not giving that credit card information to then buy something else, or to be saved. That same logic goes for, say, filling out a form to get a quote for moving services: you fill out your phone number and email, and then you get thousands of phone calls from contractors because they were able to sell off your data immediately. That is a secondary use. They’re also able to use that data and sell it for other purposes.

[00:10:50] Beau: Is the data that you mentioned earlier, including geolocation, when someone goes somewhere, which most ISPs will know because your phone’s changing locations, is that data being sold to train AI as well? Or is everything fair game right now?

[00:11:09] Ben: Yeah, I mean, I’m not familiar with a specific, at least commercially available, tool built off of location data; I’m not exactly sure what it would be. But there are lots of pieces of location data that are used to model out people’s habits, even if it’s just spending habits. If you go to McDonald’s after going to Target, they are more likely to sell you an ad while you’re at Target to go to the nearest McDonald’s. There are no restrictions on what you can do with this data in most U.S. states right now.

[00:11:40] Beau: That’s what I was getting at: there’s a lot of raw data out there, and we’re providing it without thinking much about it on a daily basis. It exists. It’s what I’ve clicked on, what phone call I made, where I was, what I bought. I do the things I do because I’m a living, breathing person in this society just trying to mind my own business. Meanwhile, that data is available to anyone who has an idea and wants to develop a killer app. That is kind of the Wild West problem we’ve been facing: we have companies using machine learning in its various forms to maximize whatever it is they’re building or trying to serve. And they’re trying to get to market as fast as possible, so it’s not getting tested. It’s not like there’s, God forbid, a government agency that actually has to say, “Now hold on.” There’s no USDA of data practices.

[00:12:57] Ben: No, there’s not. You know, there have been a lot of ideas over the years of an FDA for algorithms: you would have to go through some pre-clearance process, show that you are developing the tool responsibly, that you’ve thought out how it should and shouldn’t be used, and that you’ve tested to see A) that it works, that it actually does what you say it does, and B) especially when it’s making decisions about people, like in credit, employment, and housing, that it’s not having a disparate impact. There are lots of different proposals out there to try to get that. The closest thing we have right now is the Federal Trade Commission, which is tasked with doing sort of everything on commerce. The FTC over the last bunch of years has done a lot of work on privacy and AI, but they’re not set up for this, and they’re not resourced enough to be able to do it for everybody. There was a law put forth by Senator Gillibrand several years ago that would’ve made a data protection agency part of a privacy law. It would have a dedicated group of regulators focusing on protecting people from data abuse. But that has not passed.

[00:14:03] Beau: Now one of the things that drives me a little crazy, and I’m not gonna exaggerate and say it drives me nuts ’cause it doesn’t, but I’m pretty annoyed about this: what I take to be a cabal of kids. I know they’re not kids, but they behave like children. They control the game ’cause they own it. And as a result, there is no transparency.

[00:14:24] Ben: Yeah, I mean, it’s extraordinarily frustrating and damaging that there is no market incentive for these companies to be transparent about, “This is how we build the system. These are the decisions we make about what we allow to come out of the system.” All of these key decisions that companies are making… a lot of times they try to make it sound like they have no control over their AI systems, but they make all of the decisions about how it’s built and how it operates. When there is any sort of effort to create transparency around these AI systems, there becomes this full-court press from the tech lobby of, “We cannot do that. We have competitive reasons why we can’t share the methods we used, or the sources of data we used to train it. We can’t share the decisions we made, or what weights different parts of the model carry.” They’re relying on trade secrets law. This is the same thing that keeps the KFC recipe or the Coca-Cola recipe secret. And it basically relies on the ability of companies to say, “We have a competitive interest in keeping this information private, and without it, our competitive value would drop.” We see this levied by tech companies all the time, and I think it’s really overused. There might be certain circumstances where, “We figured out this one little different way to crack this model, and that’s what makes our company profitable and our product better.” But for the most part, these companies are doing the same things. They’re making these decisions about how to train the model and how to roll it out, but then relying on things like trade secrets, legally, and right now mostly in the court of public opinion and through lobbying: “We can’t be pressured to disclose anything, because we’re gonna have competition from other companies and other countries,” and things like that. It becomes very vague fear-mongering.

[00:16:17] Beau: One of the issues that the “Big, Beautiful Bill” brought up about AI was that state efforts to regulate AI were “burdensome.” Now, burdensome to whom? I’m just talking about the states that might actually have rules about this, but the burdensome quality here should be, I mean… Aren’t we thinking about it wrong? Shouldn’t the burden be on the people who are making these killer apps?

[00:16:52] Ben: Yeah, I mean, a hundred percent. I think there are two big elements of compliance that companies are really upset about, and that are behind the effort to wipe out all of the different state regulations. One thing they’re saying is that there are 50 different states, and the figure they threw out there was that over a thousand bills were introduced just in the first five months of 2025, and that it would create such a patchwork that it would be impossible to comply with even if you wanted to, even if you were being the most responsible. One response is that the number is significantly lower: of the bills introduced, it’s more like a few hundred that were actually substantively about AI. Second of all, those are bills that were being introduced, not passed. That’s just basic legislative process. Bills are introduced all the time and not passed. There are messaging bills, there are all sorts of things. And that’s also just the democratic process, right? Throwing out different ideas of how to address a problem. The companies, unsurprisingly, don’t feel those ideas would be beneficial to them, and they would cost extra headache and extra money. But just because they don’t wanna do that doesn’t mean they shouldn’t have to. The second part of it, as you mentioned, is the cost of figuring out what exactly they have to do under the different patchwork of state laws. Significant numbers get thrown out claiming tech regulation would be ruinous for companies of all sizes. As you mentioned, there’s a trade-off. Without the safety provisions and guardrails that regulation would require, the harm from the company side, in money, in PR, in cleanup, in figuring out how to help people, is really bad. Potentially worse than the compliance costs. More important is the harm that it causes, right?
If it hurts people, it has really negative impacts: creating deepfakes of people, leading to scams, leading to people losing opportunities. The simple point is the trade-off. The tech lobby likes to speak in one voice about all tech regulation affecting every entity, but the majority of these bills introduced at the state level are specific to only some companies and what they’re doing. It gets back to the beginning of our conversation, where we were talking about how AI refers to different things. A lot of those laws are only about deepfakes, self-driving cars, or ad delivery. It’s not like everyone has to comply with everything. And so it’s an additional point in how disingenuous their corporate burden argument is.

[00:19:21] Beau: Well, disingenuousness aside, because at the end of the day, someone can be the best actor on earth and still be a bad actor. And the thing that would actually make it harder for these bad actors to operate would be one law. If you have, let’s go back to consumer products, a lipstick that’s illegal in the European Union because it contains an ingredient that’s banned in the EU, a lot of companies, big companies, are not going to make an EU version and then a rest-of-the-world version. They will make the EU version. Why not just make a product that passes muster with the most stringent privacy guardrails?

[00:20:19] Ben: Yeah, this is something that we’ve seen with car emissions. They call it the California effect for a reason. If California passes a law on emissions, carmakers are not gonna make a different car for California than for the rest of the country. Same thing goes for privacy laws, right? I’m using privacy laws rather than AI laws only because they’ve been around a lot longer and there are more of them. In reality, companies will comply with the most stringent version of the restriction, and by default, you are complying with the rest. So you don’t need a federal standard. You go to the highest common denominator, which is one of the benefits of having 50 different states protecting their own citizens.

[00:20:53] Beau: Right, and that’s part of the strength of state-level regulation. But politically, this debate is also shaped by how different groups feel targeted or prioritized by these systems. Can we talk a bit about how concerns over bias, especially from conservatives, have influenced the pushback against AI and big tech?

[00:21:15] Ben: The biggest driver of Republican senators, from Ted Cruz to Josh Hawley, being against big tech in the last bunch of years has been this concern and framing that, “Our speech gets silenced on these platforms. It gets deprioritized, it gets shadow-banned,” whatever it is. They’re complaining about the impact on their own ability to go viral, and that of the people associated with them. Other groups would have the same complaints about groups they’re part of.

[00:21:43] Beau: All right, setting that aside, one area where there is surprising agreement across the aisle is around children’s safety online. Senator Marsha Blackburn, for example, has been pushing KOSA, the Kids Online Safety Act, which I actually support. If that were to pass, do you think it could open the door to a broader federal privacy law that protects everyone, not just children?

[00:22:16] Ben: That’s a good question. I’m not sure it necessarily leads to a broader comprehensive privacy law for everybody. We have had COPPA, the Children’s Online Privacy Protection Act, for decades now, and that has not necessarily led to protections for everyone else. I think KOSA would have both positive and negative impacts on online speech and on the online experience for kids. I don’t necessarily think it would be the start of a domino effect of a bunch of other strong tech legislation being passed.

[00:22:50] Beau: So blue sky, what is the next move? What is the next victory that we want to achieve for those of us who are concerned about online privacy?

[00:23:00] Ben: Yeah, I mean, we want to pass laws in states, and ideally at the federal level, that have data minimization as the standard for what companies can collect and how they can use it. It’s a very simple concept: you can only collect, keep, and use the information needed for the service that a consumer is requesting. That would limit the secondary sale, or the use of a tool for something the consumer would never imagine it being used for. And the second big bucket that is essential in a privacy law is a private right of action. That is the ability for the person who is harmed to sue when they’re harmed, not just relying on their state’s Attorney General or the Federal Trade Commission. There are too many cases, and the incentives are a little bit off, for an individual case of harm to be the subject of an AG complaint. So those are the two biggest aspects of a privacy law that we are looking for, in addition to bright-line rules: you cannot collect the sensitive data of people under 16, and you can’t collect or sell people’s precise geolocation data. Those two things just passed in an amendment to the Oregon privacy law like three weeks ago, and they’re in Maryland’s law going into effect later this year. That is the standard as well. So the goal is getting more of those really strong laws, to the point where a federal comprehensive data privacy law, if we ever get one, would be required to meet that standard.

[00:24:26] Beau: So I think I already hear the coming war, right? Because it’s a non-starter to try and legislate the right to sue. You can go for it, but I can’t imagine the cannons of legal firepower big tech would point at that. And I wouldn’t wanna see everything else killed on the altar of the consumer’s right to sue. I’m not saying the consumer doesn’t have a right to sue, but the Consumer Financial Protection Bureau, the CFPB, which has recently been hobbled, could point in the right direction for what enforcement of privacy issues could look like in the United States: a government agency responsible for taking these complaints, figuring out where there’s real harm, and addressing it. What are the chances of us seeing something like that? I know there have been some attempts in Congress.

[00:25:32] Ben: So, the private right of action is really the gold standard, but it would not be a hill to die on. There are a lot of different ways that enforcement could be done better than it is right now. One is a dedicated federal agency focusing on privacy and consumer protection. The CFPB has been doing an incredible job intervening when people are getting screwed by banks, mortgage lenders, et cetera. It would be great to have a dedicated team of people doing that on privacy and AI. There have been some proposals for a data protection agency. Data protection authorities exist in every EU state and in a lot of countries around the world. It’s a very common thing that we don’t have. There’s even a privacy regulator in California. You could have a standalone dedicated entity, or you could set aside real money and create a division of something like the FTC focused exclusively on data privacy: “Here’s the money to hire 50 lawyers and investigators and technologists to do that.” The resources are really what’s required in these bills, both state and federal. If there’s not money to enforce it, it won’t get enforced. So there are a few different ways to do that, but it’s a really critical element of any helpful law.

[00:26:43] Beau: Right. And even when the framework does exist, it all comes down to whether there’s real funding and power behind it. But then with AI specifically, we’ve seen these companies not just resisting enforcement, they’re actively trying to write the rules or just block them. So how do you see that playing out in this federal moratorium push?

[00:27:12] Ben: Yeah, I mean, I think on AI especially, right, and we’ve sort of gone back and forth talking about AI and privacy because they are so interrelated. But on AI especially, this moratorium is an example of how hostile the companies are to any sort of meaningful regulation. And again, it is just so galling and shocking that there was this, “You can’t regulate at the state level and we are also not going to.” So just that sort of boldness and the fact that it was picked up, I think is a meaningful thing in and of itself. Moving forward, I think this is going to be an approach that’s taken by Senator Cruz, Representative Ulti and others in potentially proposing federal AI laws that nominally regulate it in a very weak way, but includes something like this moratorium provision. I think it’s very clear that AI companies are scared of playing by the rules. We’ve seen this in copyright, environmental regulations for data center creation, and in privacy and AI regulation. I think they are susceptible to being defeated, but they are very much on the offensive right now.

[00:28:18] Beau: Now a 10-year moratorium is a remarkable thing to ask for when you consider the fact that so much can change in 10 weeks in this particular emerging sector. So is that part of the thinking here?

[00:28:37] Ben: Yeah, I think the 10-year part of it is really just an absurd starting point for a negotiation. Their hope, I think, is that you put out 10 years and then come down. And we saw this on Saturday night, when Cruz and Blackburn thought they had a deal and brought it down to five years. Think about the state of different technologies in 2019, and especially in 2015: they were very different than they are now. They still needed to be controlled then. But part of it is that you go with the bold ask and hope that banning all states from regulating AI for five years, or three years, can look like a reasonable compromise, when in fact it’s just as absurd. So I think that’s why they started out with that sort of ridiculous number.

[00:29:18] Beau: All right Ben, before we wrap up, I wanna zoom out for a second. We’ve covered a lot of territory from regulation to enforcement to the broader power dynamics at play, but I’m curious about your own path here. So what drew you to this work around AI and privacy?

[00:29:48] Ben: I got involved from being aware of different algorithms used in the criminal justice system. There were some interesting articles in 2016, when I was in law school, about actuarial risk assessment tools used by courts to determine whether someone should be on bail or get parole, or whatever it is, fill in the blank. And there were all sorts of interesting studies about how these tools actually had a disproportionate impact on Black people: they were getting higher scores for the same answers as white people. Same with predictive policing tools and facial recognition tools used in the criminal justice system. I was really introduced to the accountability problems, the lack of transparency, and the lack of ability, or willingness, to study or disclose anything about the systems. I got involved with a nonprofit and fell in love with the ability to do advocacy around that. And so many of the themes are the same with everything we’re talking about today: a lack of willingness to be transparent; racial and gender-based discrimination; a need to evaluate the impact of what’s being done, but a lack of willingness to do it; and government agencies that are either not well positioned or not willing to do the enforcement to bridge that gap. So over time, especially as AI was used in more fields around the country, I just widened my scope to focus on all of AI. It is something that is constantly changing and always interesting, at least.

[00:31:14] Beau: So in the beginning, what piqued your interest was crime and punishment. Was the data skewed or the systems that processed it?

[00:31:21] Ben: So, the data they were collecting and building a model off of was often policing data, which we know, for a lot of different reasons, is what a lot of scholars call dirty data. If this data is tinged by a racist policing ecosystem, then the data is going to make it look like one type of person is significantly more likely to commit a crime. That made me really think about what data is. When we’re talking about data, there is a choice being made: we’re saying, “Oh, we’re gonna take the arrest data.” That’s not necessarily crime data, right? It’s a thin distinction, but it’s the data of what the cops did. So inherently that’s going to be tinged with a lot of issues, and they’re building the model off of that. And then in the way the systems are built, especially with the lack of transparency, it’s hard for anybody in the process to say, “Wait, it does not make sense to include this as a factor in determining whether someone is a risk of re-offending, or a risk to fail to appear.” Same thing goes for predictive policing systems that try to predict where the next crime will likely happen. The data is dirty, and the way they are implementing the system is irresponsible and unaccountable.

[00:32:32] Beau: So does the way AI gets information track along the same lines as law enforcement when it comes to dirty data?

[00:32:54] Ben: I do think there are concerns about whether the data used to build or feed AI systems is rightfully obtained. This is where we get into copyright and privacy issues for generative AI: is the data about your browsing history, fed into an ad system, an acceptable type of data to build a system off of, or to inform a decision on? One really quick case that I can share is the Kurbo case from the FTC, where there were massive violations of the Children’s Online Privacy Protection Act by a kids’ weight loss app. Essentially, they collected a lot of information about kids under 13 and did not do, you know, the proper checks to get parental verification. And then they built an AI system using that data from those kids. And the FTC required them to go back and delete everything that is the fruit of the poisonous tree. Right? You were not legally able to collect that data in the first place, so you shouldn’t be able to benefit from the systems that data helped build. And that is dirty data, right? It’s the fruit of the poisonous tree. If you shouldn’t have been able to collect or use that data for that purpose in the first place, then you shouldn’t be able to use the resulting system.

[00:34:14] Beau: And that gets to the heart of the privacy vacuum in this country, which is that companies are collecting data of all stripes on all manner of things, and then reselling it in ways that are not at all transparent, and we don’t know how it’s being used. It’s a different kind of dirty.

[00:34:34] Ben: Yeah.

[00:34:36] Beau: Doing us dirty. Given how you got into this work, seeing firsthand how bad data and broken systems can cause real harm, I’m curious how that shapes the way you think about, well, I don’t know. Let’s do a hypothetical. You’re waiting for an elevator, it opens up, you get in, and you’re standing next to the CEO of one of these major LLM companies. Big AI guy. You’ve got a minute. Or no, whatever, you’ve got three minutes. You have some time. In that elevator speech, what are you gonna say to make the world a little better before those doors open again?

[00:35:06] Ben: What I would say is that there has to be more responsibility taken by the companies putting out AI systems. Especially when you are talking about a big company like yours: you have an impact on how things like OpenAI get rolled out, you have an impact on how military contracts get carried out, and you have a lot of control over people’s everyday data, especially since a lot of these people have to use your products for work. That is something you should take seriously, and not an invitation to test new ways to score somebody, surveil somebody, and squeeze every little bit out of every data point you can get someone to generate. The second really big thing is that you should be more proactive and open to working with regulators and legislators around the world. They do not have to be your adversary. If you have to do one little thing that constrains your ability to do whatever you want, it does not have to be the end of the world. You can still be a billionaire, you can still be a massive company, and you can help model actually responsible business practices and responsible technology. You have the ability to walk the walk you are talking about—responsible AI and American leadership in AI—but right now you just want to be able to do whatever you want. Take more seriously the responsibility that comes with people’s information, and model a better world.

[00:36:35] Beau: The way that we approach these conversations matters because yes, there’s a lot of resistance on the business side, but there are human beings on the other side with children and with their own privacy concerns as well, and so I would appeal to their better angels in that regard, and also point out, not only should you do all these things, but if you do these things, you’ll have more credibility in the marketplace. And when you have more credibility in the marketplace, there are going to be people who want to be your customers because you’re walking the walk and talking the talk, and not just privacy-washing everything.

[00:37:18] Ben: Yeah, that’s right.

[00:37:26] Beau: Thank you so much for joining us today, and thank you for your insights about AI, privacy, and the future of data regulation in this country. If you want to learn more about what Ben does, head over to the Consumer Federation of America at consumerfed.org, that is consumerfed.org, where you will find articles he’s written in the past and other information that impacts us all. Ben, thanks again.

[00:37:37] Ben: Thanks for having me. Hopefully I can come back and celebrate some great federal laws we get, or a bunch of state laws.

[00:37:44] Beau: Thanks, Ben. Okay, now it’s time for the Tin-Foil Swan, our paranoid takeaway to keep you safe on and sometimes offline. And yeah, this one freaked me out. Today we’re talking about something called a tap trap, which uses a technique known as tapjacking. Now, what is tapjacking? Tapjacking is when a malicious app tricks you into tapping on something you can’t see by placing an invisible interface, it’s like a screen, not a screen because you’re on a screen, underneath what you think you’re interacting with. It’s a form of UI (user interface) deception that lets attackers hijack your taps to trigger real system actions without your knowledge. So here’s how it works. You download a harmless-looking game, like one where you tap to squash bugs, or spiders if you don’t like spiders. But under the surface, every tap you make is actually hitting a system prompt. There’s a button, you know, like “Do you want to erase your phone? Tap here.” That sort of thing: grant full permissions, enable screen recording, factory-reset this device. You think you’re playing a game, and you are, but it’s designed to make you tap very specific parts of your phone, giving control away one tap at a time.
So how do you protect yourself? Only download apps from trusted sources like Google Play, and avoid third-party sites. Be cautious of tap-heavy games or apps that seem overly simplistic. Check app permissions: if something like a tapping game wants access to your camera, microphone, or system settings, bye-bye! Enable Google Play Protect on Android to scan for harmful apps. Keep your devices updated, or your device, I only have one. Security patches help block known exploits. Avoid clicking on app links sent via text, social media, or email. “Hey, this game’s really cool. I’ll click here.” Don’t do it. Trust your instincts. If something feels sketchy, yeah, it probably is.
So right now, these attacks are only known to affect Android devices, and we’re not singling Android out; similar tactics could appear elsewhere, including on your iPhone. Probably not your little flip phone, but admit it, none of you have one. If you use a smartphone, stay alert. And that is your Tin-Foil Swan this week. Stay safe. What the Hack is brought to you by DeleteMe. DeleteMe makes it quick, easy, and safe to remove your personal data online, and was recently named the #1 pick by the New York Times’ Wirecutter for personal information removal. You can learn more if you go to joindeleteme.com/wth. That’s joindeleteme.com/wth. Stay safe out there.

