AI browsers leak personal data
Researchers in the UK and Italy tested ten popular AI-powered web browser assistants, including OpenAI's ChatGPT, Microsoft's Copilot, and Merlin AI, and found widespread collection and sharing of sensitive user data. The team browsed both public websites and private portals, such as a university health service, then asked the assistants targeted questions to see whether previously viewed information persisted. By decrypting traffic exchanged between the browsers, their servers, and third-party trackers, the researchers observed that most assistants transmitted full page content and user input to remote servers, sometimes including medical records, online banking details, and national identification numbers.
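The detection side of this methodology can be sketched in a few lines. The following is a minimal, hypothetical Python example, not the researchers' actual tooling: it assumes decrypted outbound request bodies are available as strings, and scans each payload for patterns resembling sensitive identifiers before flagging the transmission.

```python
import re

# Hypothetical detection patterns; a real audit would use locale-specific
# formats and checksum validation to limit false positives.
SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_request_body(body: str) -> list[str]:
    """Return the categories of sensitive data found in a decrypted payload."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(body)]

# A captured POST body that echoes full page content to a remote server:
leaked = scan_request_body("page_text=Your SSN 123-45-6789 was saved&uid=a@b.com")
# leaked == ["us_ssn", "email"]
```

Run against an intercepted traffic capture, a scan like this makes it straightforward to count how many assistants forward page content containing identifiers they should never see.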
Several browser extensions continued to record activity even when users navigated to private areas. Some tools forwarded user prompts and identifying data, such as IP addresses, to analytics services like Google Analytics, enabling potential cross‑site profiling and ad targeting. The study found that certain assistants built persistent profiles—storing chat histories and inferring attributes like age, gender, income, and interests—to personalise responses across sessions. In one case, a popular extension captured a user’s social security number entered on a tax website.
The researchers flagged potential breaches of US privacy laws protecting health information, such as HIPAA, and concluded that the observed practices likely contravene the EU General Data Protection Regulation (GDPR). Privacy policies for several services disclose broad data collection—names, contact details, credentials, transaction and payment data, and content from user prompts—used to personalise services, improve products, and handle legal requests. Even when policies state that EU and UK users' data are afforded rights, some providers store data outside the region, raising compliance and jurisdictional concerns.
The findings underscore significant transparency and data-protection gaps in AI browser assistants and highlight the need for stronger safeguards, clearer user consent mechanisms, and stricter oversight to ensure compliance with GDPR requirements such as data minimisation, purpose limitation, lawful basis for processing, and robust security measures. Data protection officers, regulators, and organisations deploying these tools should reassess risk, review supplier contracts, and implement technical controls to prevent unauthorised transmission of sensitive personal data.
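One such technical control, sketched here as a hypothetical Python filter rather than any method described in the study, is outbound redaction: stripping identifiers that match sensitive patterns from text before it leaves the organisation's boundary.

```python
import re

# Hypothetical redaction rules; production deployments would need
# locale-aware patterns and allow-lists to limit false positives.
REDACTION_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b\d{13,19}\b"), "[REDACTED-CARD]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[REDACTED-EMAIL]"),
]

def redact(text: str) -> str:
    """Apply each rule in turn, replacing matches before transmission."""
    for pattern, placeholder in REDACTION_RULES:
        text = pattern.sub(placeholder, text)
    return text

safe = redact("Contact jane@example.com, SSN 078-05-1120")
# "Contact [REDACTED-EMAIL], SSN [REDACTED-SSN]"
```

A filter of this kind addresses only one GDPR obligation (security of processing); data minimisation and lawful basis still require the assistant not to collect the page content in the first place.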