How AI Will Affect Privacy in 2025
Neil DuPaul
Reading time: 6 minutes
In our recent blog post series for Data Privacy Day, we’ve covered topics such as the different types of privacy services and data brokers. Now we want to share our predictions for how AI will affect the privacy and safety of your personal data in the coming year.
AI-driven cybercrime will grow significantly
Since ChatGPT brought AI into the mainstream, AI-driven fraud has become top of mind for many companies and consumers. A recent article from ZDNET showed that many experts believe the AI cybercrime wave has only just begun.
Whereas phishing emails were once easy to spot thanks to spelling mistakes or poor grammar, ChatGPT can now generate error-free copy in seconds. As a result, cybercriminals are automating the entire phishing process and sending thousands of messages around the clock.
Deepfakes pose a similar problem. While many experts worried about image-based deepfakes in the early days of the AI boom, a more pressing concern may be voice deepfakes.
In late 2023, an IT company fell victim to a deepfake that mimicked the voice of one of its employees. The resulting breach hit dozens of the company’s cloud customers. Early last year, another deepfake scam resulted in a $25 million payout to fraudsters.
In the absence of comprehensive federal guidelines, this crime wave will no doubt continue to grow.
AI agents will share data on your behalf, with disastrous consequences
Agentic AI refers to tools that can act on your behalf, performing tasks such as booking flights, managing accounts, or even negotiating contracts. For example, OpenAI’s new Operator tool can search the web and finalize purchases or travel arrangements. While these features promise convenience, they also introduce significant risks – both for individual consumers and for businesses.
For consumers, using AI tools like Operator often involves granting access to sensitive personal information, including financial credentials, travel preferences, and other private details. These tools need to accept privacy policies and process transactions autonomously, which can result in inadvertent data sharing or misuse. A poorly designed or misused AI agent could expose users to financial fraud or privacy breaches, even with safeguards in place.
For businesses, the stakes are even higher. Agentic AI tools embedded in enterprise software could handle sensitive corporate data, including employee credentials, customer information, and financial records. If these tools lack sufficient oversight, they may inadvertently expose proprietary information or create vulnerabilities that cybercriminals could exploit.
According to Gartner, by 2028, 33 percent of enterprise software will incorporate agentic AI. This widespread adoption could amplify risks in organizational cybersecurity and employee privacy, particularly if companies do not prioritize human oversight and robust policies.
The integration of agentic AI into everyday workflows means both personal privacy and enterprise security are at risk. Businesses and individuals must remain vigilant and establish clear boundaries around what these tools are allowed to access and automate.
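To make the idea of "clear boundaries" concrete, here is a minimal, purely illustrative sketch of an allowlist gate for agent actions. It is not tied to any real agent framework – the action names, the AgentAction structure, and the gate_action helper are all hypothetical – but it shows the general pattern of denying or escalating anything an agent has not been explicitly permitted to do.

```python
# Hypothetical sketch: gate an AI agent's proposed actions against an explicit allowlist.
# Action names and the AgentAction structure are illustrative, not from any real framework.
from dataclasses import dataclass

ALLOWED_ACTIONS = {"search_flights", "read_calendar"}      # explicitly permitted
BLOCKED_ACTIONS = {"submit_payment", "share_credentials"}  # always require a human

@dataclass
class AgentAction:
    name: str
    details: dict

def gate_action(action: AgentAction) -> str:
    """Return 'allow', 'deny', or 'needs_human_review' for a proposed agent action."""
    if action.name in BLOCKED_ACTIONS:
        return "deny"
    if action.name in ALLOWED_ACTIONS:
        return "allow"
    # Anything not explicitly listed gets escalated to a person.
    return "needs_human_review"

# Example: an agent proposing a purchase is stopped before it can act.
print(gate_action(AgentAction("submit_payment", {"amount_usd": 2500})))  # -> "deny"
```

The design choice that matters here is the default: unknown actions fall through to human review rather than being allowed, which keeps the burden of proof on the automation rather than on the person overseeing it.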
Employees sharing data with AI tools will significantly increase business risk
Even apart from AI agents, employees are already sharing too much information with AI tools. For companies that invest in the enterprise version of ChatGPT, the situation is somewhat safer: by default, ChatGPT does not train on their data, meaning it won’t use employee input to learn and improve its responses. However, smaller businesses whose employees use individual accounts should know that the AI will train on everything employees type in unless a specific setting is turned off.
If that training data is exposed through a hack or breach – or even through a cybercriminal simply chatting with the AI tool after stealing access credentials – companies shouldn’t be surprised if their proprietary information ends up in the hands of fraudsters, scammers, and ransomware gangs.
This year, companies will have to create organizational policies that cover what employees can share with AI tools. However, if the past is anything to go by, such policies will lag far behind the actual risks, and even large companies will not be immune to AI oversharing this year.
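As a purely illustrative sketch of the kind of technical control such a policy might pair with, the snippet below redacts a few obvious sensitive patterns from text before it is sent to any external AI tool. The patterns and the redact helper are hypothetical and nowhere near exhaustive – a real deployment would rely on proper data loss prevention tooling – but it shows the basic idea of screening prompts before they leave the company.

```python
# Hypothetical sketch: redact obvious sensitive patterns before text leaves the company.
# The regexes below are illustrative only and far from exhaustive.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "Summarize this ticket from jane.doe@example.com, API key sk-abcdef1234567890ABCD."
print(redact(prompt))
# -> "Summarize this ticket from [REDACTED EMAIL], API key [REDACTED API_KEY]."
```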
AI tools from Big Tech will train on your personal data
One thing that has become very clear over the past few years is that new AI tools like Claude, Google’s Gemini, and ChatGPT are data-hungry. They scrape the internet for data, and while many providers may attempt to prevent personal information from being caught up in training data, they have not always been successful.
Experts have exposed Gemini for training on personal data, and Elon Musk’s X has put policies in place that allow it to train AI models freely on user information. Unfortunately, these incidents demonstrate what consumers should already know: Big Tech will not respect your personal information in the coming year, and there’s a high chance these companies will sell your data to data brokers or use it to train AI tools.
For consumers, the best option at the moment is to opt out of sharing personal information. Be aware of the privacy policies for companies like X, and if your data is freely available online, opt out or use a service like DeleteMe to protect your information from getting caught up in web scraping.
AI regulations will face serious challenges under the current administration
Trump has already repealed Biden’s executive order addressing AI risks, leaving us back in a Wild West of data privacy when it comes to these technologies. It seems unlikely that this year will bring regulation that sufficiently addresses the problem.
In fact, with the current administration’s tech and business-friendly policies often aiming to reduce regulatory oversight and red tape, the problem will very likely worsen throughout the next few years.
Consumers must protect themselves
AI comes with a lot of risks for data privacy, but the benefits in efficiency and productivity that AI brings to businesses mean that its adoption at this point is inevitable. Unfortunately, the government and Big Tech have little protection to offer. This means it is up to businesses and consumers alone to protect themselves from AI risks this Data Privacy Day and throughout the coming year.
There are some services that allow you to opt out of AI tools using your data for training or remove your information from data brokers so it doesn’t get swept into web scraping for AI tools. And as long as companies stay aware of AI risks and monitor carefully, they can prevent employees from accidentally exposing too much data. For now, this seems to be the most we can hope for when it comes to AI and data privacy.