Scheduled to become enforceable by mid-2024, the California Age-Appropriate Design Code Act (AADC) is modeled after the UK's Age Appropriate Design Code and would apply to businesses that provide "an online service, product, or feature likely to be accessed by a child." Unlike other US children's online privacy laws (such as COPPA, which covers children under 13), it defines children as anyone under 18. Social networks, gaming sites, internet-enabled toys, voice assistants, and digital learning tools for schools are likely to face the most compliance scrutiny. Among other things, covered businesses will be required to:
- Establish the age of users with a reasonable level of certainty, and automatically configure all default privacy settings to their highest level for children using services;
- Undertake a Data Protection Impact Assessment (DPIA) for their products, determining risks associated with child access to the business’s products;
- Design and develop products in a manner that prioritizes “privacy, safety, and wellbeing of children over commercial interests”.
Earlier in September, Ireland's Data Protection Commission fined Meta roughly $400 million for violating GDPR data protection rules in its handling of children's data on Instagram.
Kids' privacy has long been an attractive topic for politicians: they can be seen as "doing something" without tackling the broader problems of online data privacy. Key problems with this bill include its vague scope and definitions. Worse, the age-verification requirements may obligate companies to gather intrusive information on adult users, potentially making internet anonymity impossible.
Phishing of company employees has long been the norm, but cyberattacks are increasingly targeting workers in more tactical and personal ways: while they work remotely, through family members, and on less-secure personal devices. Twilio and Cloudflare made headlines last month when similar attacks targeted employees' and their family members' home phone numbers.
Cybersecurity researchers have spent the last two years warning of weaknesses in simple multi-factor authentication processes. As we noted in a recent blog post about the incident, the main takeaway is this: given how easily credentials can be captured through "man-in-the-middle" MFA-spoofing attacks, often all an attacker needs to breach an organization is workers' personal cell phone numbers and a convincing spoof of employer communications. While broader use of hardware authentication and FIDO2-compliant processes will help mitigate the risk, protecting employees' publicly exposed PII should be part of every organization's workforce security program.
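To illustrate why FIDO2-style authentication resists this kind of attack where one-time codes do not, here is a deliberately simplified sketch (hypothetical helper names, not a real WebAuthn or FIDO2 library): a one-time code can be relayed by a phishing proxy, while an origin-bound signature minted for the fake site will not verify for the real one.

```python
# Simplified model of MFA relay phishing vs. FIDO2-style origin binding.
# All names here are illustrative, not a real authentication API.
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # shared between user device and server

def totp_like_code() -> str:
    # Simplified one-time code: anyone who sees it can replay it
    # within the same time window.
    return hmac.new(SECRET, b"time-window", hashlib.sha256).hexdigest()[:6]

def fido2_like_assertion(origin: str, challenge: bytes) -> bytes:
    # Simplified authenticator: the signature covers the site origin,
    # so a response produced for a phishing domain will not verify
    # against the legitimate one.
    return hmac.new(SECRET, origin.encode() + challenge, hashlib.sha256).digest()

challenge = secrets.token_bytes(16)

# Phishing proxy relays the victim's one-time code to the real site:
# the relayed code is accepted, so the attack succeeds.
phished_code = totp_like_code()
assert phished_code == totp_like_code()

# Phishing proxy relays an assertion minted for the fake origin:
# it does not match what the real origin expects, so the attack fails.
phished_sig = fido2_like_assertion("https://evil.example", challenge)
real_sig = fido2_like_assertion("https://login.example", challenge)
assert phished_sig != real_sig
```

The point of the sketch is the structural difference: the one-time code is valid regardless of where the user typed it, while the origin-bound response is only meaningful for the site that requested it.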
The FTC recently filed suit against Kochava, a location data broker, and members of Congress have called for investigations of other companies (like Fog Data Science) that provide location-surveillance services to local law enforcement. The FCC also recently completed an information-gathering exercise with US carriers to determine how location data is collected and commercialized.
Location data, like facial recognition, is one of the most widely collected and easily abused forms of personal data, and growing public awareness, driven in particular by the overturning of Roe v. Wade, is prompting policymakers and agencies to start treating these categories of PII more seriously. While we have low expectations that a comprehensive data-privacy law (like ADPPA) will pass this year, a targeted law specifically regulating how businesses share consumer location data could be politically easier to achieve, and a more immediately beneficial step forward.
Check out our log of where DeleteMe has been featured in the news in September, including interviews and quotes where we discuss privacy, cybersecurity, our solution, and everything in between.
Come see us at Booth 102 at IAPP Privacy. Security. Risk