
Privacy for Sale, Questioning Consent & AI Regulation: Feb 2023 Newsletter

February 22, 2023

Privacy for Sale: ‘Pay-for-Data’ Arrangements for a Post-Cookie World

Will consumers trade personal information for direct compensation? Forrester notes that new B2C startups, anticipating a post-cookie world, are increasingly building out zero-party data collection, including direct ‘pay-for-data’ arrangements in which customers are remunerated for voluntarily sharing specific kinds of personal information with brands. New apps featuring these arrangements include Caden, TIKI, Tapestri, and Invisibly, each offering a range of benefits, from free access to paywalled media content to direct cash rewards for access to users’ mobile-device data.
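
How might such an arrangement look in practice? None of the apps above documents a public schema, so the following is a minimal, purely hypothetical Python sketch (every field name is an assumption). Unlike cookie-based tracking, a pay-for-data record has to capture what the user explicitly agreed to share, for how long, and what they were paid for it:

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    # Hypothetical sketch of a zero-party data-sharing record.
    # Field names and structure are illustrative assumptions only;
    # none of the apps mentioned above publishes an actual schema.

    @dataclass
    class PayForDataConsent:
        user_id: str
        data_categories: list[str]       # e.g. ["location", "purchase_history"]
        compensation_usd_cents: int      # direct cash reward per sharing period
        granted_at: datetime = field(default_factory=datetime.utcnow)
        expires_after: timedelta = timedelta(days=30)
        revoked: bool = False

        def is_active(self, now: datetime) -> bool:
            """Consent counts only while unexpired and not revoked."""
            return not self.revoked and now < self.granted_at + self.expires_after

    consent = PayForDataConsent(
        user_id="u-123",
        data_categories=["mobile_device_usage"],
        compensation_usd_cents=500,  # a $5.00 cash reward
    )
    print(consent.is_active(datetime.utcnow()))

The point of the is_active check is that consent in this model is scoped and expiring rather than a one-time blanket grant.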

Our Take

First-party loyalty programs, many of which provide rewards and incentives in exchange for greater access to customer information, are nothing new. But both B2C and B2B loyalty/rewards programs are increasingly being targeted by the FTC and by California’s Attorney General’s office (newly empowered by the CCPA) for abusive business practices and a lack of transparency with end users. They have also been a popular target for hackers and data thieves, who see them as easy-to-exploit, one-stop shops for troves of personal information, as well as for cash-equivalent forms of data (like loyalty ‘points’ or coupon codes) that can be sold on dark-web exchanges. Whether these new business models will survive scrutiny in a regulatory environment demanding greater transparency remains to be seen.


Questioning Consent: Is ‘Informed Consent’ Broken Online?

UPenn’s Annenberg School for Communication has released a research report arguing that “consent” has become a broken concept in the online economy.

“Genuine opt-out and opt-in consent requires that people have knowledge about commercial data-extraction practices as well as a belief they can do something about them…discoveries from our survey paint a picture of an unschooled and admittedly incapable society that rejects the internet industry’s insistence that people will accept tradeoffs for benefits and despairs of its inability to predictably control its digital life.”

The report argues that even recently enacted state privacy frameworks (California’s among others) are insufficient because they presume consumers are able to make informed, rational choices about data sharing.

Our Take

The piece makes a case for “a paradigm shift in information-economy law and corporate practice,” which, while compelling as theory, has little likelihood of influencing lawmakers or business leaders, who, even when sympathetic, are constrained by the legal frameworks we already have. They have challenge enough making incremental improvements to those frameworks, as ongoing debates around federal data privacy bills like the ADPPA have shown.


Are Regulators Ready for the AI Arms Race?

With Microsoft’s recent investment in OpenAI and rapid integration of ChatGPT technology into its Bing search engine (followed by Google’s me-too launch of Bard), lawmakers are coming to grips with the reality that the technology is quickly moving beyond the scope of existing regulatory frameworks. While the primary concerns at the moment are national security and the potential spread of ‘disinformation’, data-privacy risks are also coming to the forefront: Amazon has warned its employees against using these tools after discovering confidential company information appearing in ChatGPT results.

Our Take

It’s unlikely DC will make any immediate changes to the current self-regulatory environment around AI tools until concrete examples of harm from the technology emerge. But those examples may not be far off: the boom in government-services fraud during the COVID pandemic has prompted many in Congress to pay greater attention to how new technologies enable information abuse at scale, and stricter identity-authentication rules may eventually require AI tools to disclose themselves as such to third parties.
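
As a purely speculative illustration of that last point (no such disclosure standard exists today, and every name below, including the spec identifier, is invented for the sketch), a ‘disclose themselves as such’ requirement could amount to a machine-readable envelope around generated content:

    import json

    # Purely hypothetical sketch: no AI-disclosure standard exists today.
    # Shows one way a generative service could tag its output so that
    # third parties can programmatically detect machine authorship.

    def wrap_ai_output(text: str, model_name: str) -> str:
        """Attach a machine-readable AI-disclosure envelope to generated text."""
        return json.dumps({
            "content": text,
            "disclosure": {
                "ai_generated": True,                       # the disclosure itself
                "model": model_name,                        # e.g. "example-llm-1"
                "spec": "hypothetical-ai-disclosure/0.1",   # assumed identifier
            },
        })

    print(wrap_ai_output("Here is your answer...", "example-llm-1"))

A downstream party could then check for the disclosure flag programmatically rather than trying to infer machine authorship from the text itself.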


3 Personal Information Insights to Help Companies Protect Themselves Against Hackers

How does personal information removal help companies protect themselves against hackers? To answer this question, DeleteMe’s CEO Rob Shavell sat down with ethical hacker Rachel Tobac in a recent webinar. 

The full recorded version of this webinar is available for download. If you are responsible for, or concerned about, your organization’s cybersecurity (or executive safety) in 2023, it’s unmissable.



