Spanish Data Protection Authority Issues Guidance on Agentic AI
The Spanish Data Protection Authority (AEPD) published an extensive guidance, addressing the privacy and data protection challenges posed by agentic AI systems under the EU General Data Protection Regulation (GDPR). The guidance targets companies that process personal data using AI systems capable of operating autonomously and adapting their behavior to achieve goals. It explains the concept of agentic AI, highlights privacy risks, and offers practical recommendations to help organizations comply with GDPR requirements when deploying such technologies.
Agentic AI systems use large language models to complete tasks by adapting to changing circumstances and learning from experience. These systems can operate autonomously, perceive their environment, take actions beyond generating text, anticipate needs, plan sequences of actions, and adapt over time. The AEPD provides examples such as AI managing an employee’s business trip by autonomously handling bookings, currency exchange, and transport arrangements. Such systems often rely on complex chains of reasoning and multiple data sources, including third-party services, which increase the complexity of data processing and risk management.
The AEPD warns about significant privacy risks linked to agentic AI, including lack of accountability, poor data access management, and inadequate oversight of the AI’s decision-making processes. Other concerns include risk of unauthorized data leaks through the system’s memory, prompt injection attacks that manipulate AI behavior, and disruptions caused by reliance on external services. These risks highlight the need for organizations to carefully manage AI systems to prevent unauthorized processing and ensure robust security and compliance with GDPR principles such as data minimization and transparency.
To mitigate these risks, the AEPD recommends establishing strong governance frameworks tailored to the organization, continuous monitoring and evaluation of AI operations, strict data minimization policies, effective control over AI memory and data retention, and meaningful human oversight at all stages. Organizations should involve data protection officers and ensure staff are trained and empowered to manage AI risks effectively. The guidance serves as a crucial resource for companies using agentic AI, especially those operating in or targeting the EU market, to align their AI practices with GDPR obligations and protect individuals’ privacy rights.