EU Commission Publishes Proposal for an AI Liability Directive
On September 28, 2022, the European Commission published its long-promised proposal for an AI Liability Directive.
The purpose of this document is to educate policymakers about transparency in the context of AI systems and offer suggested policy approaches.
The AI RMF is intended for voluntary use to address risks in the design, development, use, and evaluation of AI products, services, and systems.
The Playbook includes suggested actions, references, and documentation guidance for stakeholders to achieve the outcomes for “Map” and “Govern”.
The European Commission has managed to postpone discussions on the Council of Europe’s treaty on Artificial Intelligence.
Apart from its undemocratic nature, there are many reasons why biometric mass surveillance is problematic for human rights and football fans’ rights.
Efforts to outlaw the use of AI cameras to scan and identify people’s faces in public spaces are gaining traction.
The French Supervisory Authority (CNIL) has recently tested two tools, IBEX and Algocate, that could help its auditors understand how an AI system functions.
A new partial compromise on the AI Act further elaborates on the concept of the ‘extra layer’ that would qualify an AI as high-risk.
The Swedish Privacy Agency (IMY) is now starting a pilot project with regulatory testing activities.
Google’s algorithms wrongly flagged photos taken by two fathers as child abuse imagery, leading police to investigate the innocent men.
Lawmakers spearheading discussions on the AI Act pitched a compromise on the obligations for high-risk AI systems, along with a consolidated version of the previous text.