With the expansion of AI, particularly large language models like ChatGPT, reliance on legitimate interests as a lawful basis has increased. However, a gap remains in how organisations demonstrate this lawful basis, leading to low trust and confidence among both businesses and regulators.
To address this issue, the IAF initiated a project to develop a normative framework based on multi-dimensional balancing, aiming to build capability in the business community and to create greater confidence among regulators. The IAF developed a directory of rights, interests, stakeholders, and consequences, and uses colors, symbols, and a mathematical model to represent and balance these factors.
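The IAF has not published the internals of its mathematical model, so the sketch below is purely illustrative: one plausible way to express a multi-dimensional balancing is as a weighted sum of expected impacts, scoring benefits to the controller and third parties against risks to data subjects. All factor names, scales, and weights here are hypothetical assumptions, not the IAF's actual methodology.

```python
# Illustrative sketch only: the IAF has not published its model's exact
# factors, weights, or scale. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Factor:
    name: str          # e.g. "fraud prevention", "chilling effect"
    stakeholder: str   # whose right or interest is affected
    kind: str          # "benefit" (supports processing) or "risk" (weighs against it)
    severity: int      # 1 (negligible) .. 5 (severe) -- hypothetical scale
    likelihood: float  # 0.0 .. 1.0, chance the consequence materialises

def balance(factors: list[Factor]) -> float:
    """Return a net score: positive favours the legitimate interest,
    negative favours the rights and freedoms of affected stakeholders."""
    score = 0.0
    for f in factors:
        weight = f.severity * f.likelihood  # expected impact of this factor
        score += weight if f.kind == "benefit" else -weight
    return score

factors = [
    Factor("improved model safety", "society", "benefit", 4, 0.8),
    Factor("service quality for users", "customers", "benefit", 3, 0.9),
    Factor("re-identification of training data", "data subjects", "risk", 5, 0.2),
    Factor("unexpected secondary use", "data subjects", "risk", 3, 0.5),
]

net = balance(factors)
print(f"Net balance: {net:+.2f} ->",
      "interest may prevail" if net > 0 else "rights likely prevail",
      "(subject to mitigations and review)")
```

In practice such a score would only summarise, not replace, the qualitative reasoning each factor requires; the IAF's use of colors and symbols suggests the model is as much a communication device as a calculation.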
While data protection authorities have begun issuing guidance on applying legitimate interests to AI processing, they have not provided instructions on how to conduct the balancing itself. The IAF’s draft model legitimate interest assessment (Draft Model LIA) offers a framework that businesses and regulators can use to demonstrate multi-dimensional balancing of legitimate interests in AI processing. The IAF believes that the balancing requirements in the Draft Model LIA are similar to those needed for Data Protection Impact Assessments (DPIAs), for the Fundamental Rights Impact Assessments (FRIAs) mandated by the EU AI Act for high-risk systems, and under various U.S. state privacy laws and emerging AI regulations.
The IAF hopes that the Draft Model LIA will be widely adopted, both to support the development of AI regulatory guidance throughout Europe and to demonstrate the utility of legitimate interest as a lawful basis for AI scenarios in jurisdictions that are developing new data protection and privacy laws or revising existing ones.