In the March 2023 edition of our business privacy newsletter, you’ll find our take on:
‘Double Extortion’ ransomware tactics are not especially new. Company data is first exfiltrated and then encrypted, allowing criminals to demand payment to restore data access, to refrain from publicly dumping the information, or both.
What is new is that, with ransomware payouts declining, criminal gangs now often skip encryption altogether, focusing instead on stealing sensitive personal data and selectively leaking it to embarrass target companies and expose them to litigation risk.
Similar incidents have occurred before: in 2021, attackers leaked selected personal details about Washington, DC police officers and informants in advance to try to compel rapid payout. As companies get better at technically protecting their systems from encryption, direct blackmail is likely to become more common; recent IT surveys indicate that more than a third of breaches are now followed by extortion of customers.
Personal data for over 56,000 federal workers, including House representatives, staffers, and Senate members, was stolen in a breach of DC Health Link services in early March. Investigators warned that the information included the Social Security numbers of some Capitol employees, and the FBI has already arrested people believed to be connected to the public sale of the data.
Meanwhile, security firm Cyble estimates that more than 74 million U.S. telecom customers have already had their data leaked on the dark web in 2023. T-Mobile, AT&T, and Verizon all acknowledged significant data losses in 2023, even as the White House recently appealed to the industry to take more robust measures against growing cybersecurity risks.
Now that Congress has joined the rest of America in the daily reality of PII risk, we hope it might finally motivate some of its members to advance stronger privacy regulations that limit data-breach fallout.
On March 20th, the world’s most-popular chatbot, OpenAI’s ChatGPT, suffered its first data-breach incident, exposing names, email addresses, payment-related information, and chat history of ~1% of its users during a 9-hour window. Since its launch in November 2022, ChatGPT has become one of the fastest-growing consumer apps in history, hitting 100 million unique monthly users in January alone.
While the scope of the breach is small, it highlights potential privacy risks associated with the rapid adoption of any new technology and serves as a warning for future implementation of AI tech at scale without strong user safeguards built in from the outset.
On March 23rd, Utah’s governor signed legislation imposing strict restrictions on minors’ use of social media. A growing number of states have similar ‘child-protection’ legislation currently in progress, some of it close to passage.
Most of these laws share a common requirement for online services—from social media like Instagram and Twitter to adult-content sites—to impose stricter processes for positively identifying users and verifying their age. No law specifies how online providers are supposed to accomplish this, and technical hurdles for doing so remain significant.
While branded as ‘Child Privacy’ laws, the framework would require millions of adults to share official identification documents or other sensitive personal information with a variety of new third parties. And, as the French data protection authority (CNIL) has pointed out, all current age-verification technologies are both easily exploitable and vastly increase user data risks. We share the Electronic Frontier Foundation’s view that the proposed ‘solution’ in this case is worse than the problem.
DeleteMe is built for organizations that want to reduce risks ranging from executive-targeted threats to cybersecurity vulnerabilities.