
The finalization of the European Union’s Artificial Intelligence Act (AI Act) ushers in a new paradigm in technological regulation. On March 13, 2024, the European Parliament adopted the AI Act, the world’s first comprehensive horizontal legal framework for AI.1 This risk-based framework has one primary directive: to ensure that AI systems placed on the European Union (“EU”) market are inherently safe and aligned with the Union’s bedrock principles of human rights. The Act establishes a strict liability regime that apportions liability for all infractions, however trivial the error may seem, especially errors within the data sets that feed AI algorithms.2

Who is impacted?3

The scope of the AI Act encompasses a broad range of entities, including both public sector bodies and private sector companies, that place their AI systems on the EU market or whose AI systems affect individuals within the EU. This extends to entities outside of the EU that engage in business activities within its borders. Responsibility for adherence to the AI Act therefore falls on both the creators of AI technologies and those who integrate these AI systems for use in the EU.

Exceptions are in place for initial prototyping and development before an AI system is placed on the market, as well as for uses related to military functions or national security interests.

Risk-Based Categorization: A Closer Look

Central to the AI Act (AIA) is the categorization of AI systems into four principal risk categories:

  1. Unacceptable Risk (Prohibited AI): AI systems under this classification are prohibited due to their potential to contravene personal freedoms and societal values. Examples include indiscriminate surveillance technologies and manipulative AI that exploits the vulnerabilities of individuals.4 There are four such classes of unacceptable-risk AI systems. They include an AI system that: (1) uses subliminal techniques which “materially distort a person’s behavior in a manner that causes or is likely to cause that person or another person physical or psychological harm” (Article 5(1)(a));5 (2) “exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm” (Article 5(1)(b));6 (3) provides social scoring by evaluating or classifying “the trustworthiness of natural persons over a certain period based on their social behavior or known or predicted personal or personality characteristics, with the social score” (Article 5(1)(c));7 (4) runs “‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement” (Article 5(2)).8
  2. High Risk (Strictly Regulated AI): AI applications with significant potential impacts require compliance with stringent requirements before deployment. This includes systems related to critical infrastructure, employment, law enforcement, and others that directly influence individual rights and societal operations.9 Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections).10
  3. Limited Risk (Transparency Obligations AI): AI systems that interact with users must disclose their artificial nature, ensuring transparency and allowing users to make informed decisions.11
  4. Minimal Risk (Unrestricted AI): This category allows for free deployment of AI systems that are considered to pose negligible risk to rights or safety, such as AI-enabled video games or spam filters.12

Each tier stipulates a corresponding level of regulatory scrutiny, scaling with the potential impact of the AI system in question.
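
To make the four-tier scheme concrete, the categories and their regulatory consequences can be modeled as a simple lookup. This is a minimal illustrative sketch: the tier names come from the Act, but the one-line obligation summaries and example systems in the comments are our own paraphrases, not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk categories."""
    UNACCEPTABLE = "prohibited"           # e.g., social scoring, manipulative AI
    HIGH = "strictly regulated"           # e.g., hiring tools, critical infrastructure
    LIMITED = "transparency obligations"  # e.g., chatbots that must disclose they are AI
    MINIMAL = "unrestricted"              # e.g., spam filters, AI in video games

# Illustrative, non-authoritative summary of what each tier demands before EU deployment.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "May not be placed on the EU market at all.",
    RiskTier.HIGH: "Pre-market risk assessment, logging, documentation, human oversight.",
    RiskTier.LIMITED: "Must disclose to users that they are interacting with an AI.",
    RiskTier.MINIMAL: "No AI Act-specific obligations.",
}

for tier in RiskTier:
    print(f"{tier.name:>12}: {OBLIGATIONS[tier]}")
```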


Exemptions for Law Enforcement on Biometric Systems

In general, law enforcement agencies are not permitted to use real-time biometric identification (RBI) systems. Exceptions exist only in narrowly defined circumstances: “real-time” RBI may be deployed when stringent conditions are met, such as limits on its duration and geographic scope, along with mandatory prior authorization from a judicial or administrative body. Instances that may warrant such an exception include a targeted search for a missing person or the prevention of an imminent terrorist threat. The retrospective use of RBI systems (“post-remote RBI”) is classified as high risk and requires a judicial order linked to a specific criminal act.

The High-Risk Arena: Navigating the Regulatory Seas13

High-risk AI systems face a gauntlet of pre-market conditions aimed at mitigating risks and ensuring compliance. This necessitates a thorough risk assessment and robust mitigation systems, including:

  • High-Quality Data Sets: To curtail risks and discriminatory outcomes, high-quality data sets are imperative, ensuring the AI’s decisions are as unbiased and accurate as possible.
  • Traceability Logging: Detailed activity logs are mandated to trace the AI’s decision-making process, a safeguard against opaque algorithmic functioning (see the sketch after this list).
  • Comprehensive Documentation: Entities must furnish exhaustive documentation detailing the AI system’s functionality and compliance measures for regulatory assessment.
  • Deployment Transparency: Clear and comprehensive information must be made available to deployers and end-users, explicating the AI system’s operational parameters and purpose.
  • Human Oversight: To ensure that AI does not operate in a vacuum, appropriate human oversight is mandated, integrating a human-in-the-loop framework to intercept potential risks.
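
The traceability requirement above is, in practice, an audit-trail obligation. The sketch below shows one minimal way a deployer might log each decision of a high-risk system; the JSON record format, field names, and the hiring-screening example are our assumptions about what would make a decision reconstructible, not a format the Act prescribes.

```python
import json
import logging
from datetime import datetime, timezone

# One JSON record per AI decision, appended to a local log file (illustrative format).
logging.basicConfig(filename="ai_decisions.log", level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

def log_decision(model_version: str, inputs: dict, output: str, overseer: str) -> None:
    """Record enough context that the decision can later be reconstructed and reviewed."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced the decision
        "inputs": inputs,                # the features the model actually saw
        "output": output,                # the decision that was made
        "human_overseer": overseer,      # who was in the loop (human oversight)
    }))

# Hypothetical usage in a hiring-screening tool, a high-risk use under the Act:
log_decision("screening-model-2.3", {"years_experience": 7, "role": "analyst"},
             "advance_to_interview", overseer="hr_reviewer_42")
```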


Limited and Minimal Risk AI: Fostering Innovation While Ensuring Transparency14

For AI systems categorized under limited risk, the Act stipulates specific transparency obligations without imposing onerous compliance requirements. This ensures users know when they are interacting with an AI, fostering an environment in which they retain agency in those interactions.15
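
As a toy illustration of what meeting this disclosure duty might look like in a chatbot deployment, the snippet below prepends a notice to the first reply of each session. The wording of the notice and the wrapper function are our own assumptions; the Act requires disclosure but does not dictate an implementation.

```python
AI_DISCLOSURE = "Notice: you are interacting with an AI system, not a human."  # assumed wording

def reply_with_disclosure(generate_reply, user_message: str, first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn so users can make an informed choice."""
    reply = generate_reply(user_message)
    return f"{AI_DISCLOSURE}\n{reply}" if first_turn else reply

# Hypothetical stand-in for a real model call:
print(reply_with_disclosure(lambda msg: f"(echo) {msg}", "Hello!", first_turn=True))
```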

Conversely, minimal or no-risk AI enjoys a laissez-faire stance, facilitating innovation and user engagement without the burden of heavy regulatory compliance, provided basic human oversight and monitoring are in place.

General Purpose AI

General-purpose AI systems can operate as standalone high-risk systems or as components of other high-risk systems. Given their unique characteristics, their providers must cooperate closely along the AI value chain with those who deploy high-risk AI systems; this cooperation is vital to meeting the compliance standards set by the Regulation. Providers must also engage with the relevant regulatory authorities to ensure adherence to the obligations the Regulation mandates, except where the Regulation specifies otherwise.

What are the penalties?16

  • For violations involving the use of prohibited AI: Fines can reach up to €35 million or 7% of the global annual revenue, whichever is greater.
  • For most other infractions: Penalties may go up to €15 million or 3% of the global annual revenue, whichever is greater.
  • For providing false information: fines can reach up to €7.5 million or 1.5% of the global annual revenue, whichever is greater.
  • For smaller businesses, each cap is the lower of the two figures, while larger enterprises face the higher figure (the arithmetic is sketched after the next paragraph).

In addition to these fines, the EU’s General Data Protection Regulation (GDPR) mandates notifications for automated decision-making. The European Parliament has indicated that under the GDPR, individuals must be informed when their data is used in AI training. With the AI Act introducing its own notice and transparency obligations, non-compliance could lead to compounded penalties: under the GDPR, breaches of transparency obligations can attract fines of up to €20 million or 4% of the company’s global annual revenue for the preceding financial year, whichever is greater.
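
All of these caps share one pattern: a fixed euro amount or a percentage of worldwide annual revenue, whichever is greater. A minimal sketch of that arithmetic, using the figures quoted above and a hypothetical company with €2 billion in global annual revenue:

```python
def applicable_cap(revenue_eur: float, fixed_eur: float, pct: float) -> float:
    """Return the fine cap: a fixed amount or a share of global revenue, whichever is greater."""
    return max(fixed_eur, pct * revenue_eur)

revenue = 2_000_000_000  # hypothetical global annual revenue of EUR 2 billion

print(applicable_cap(revenue, 35_000_000, 0.07))   # prohibited AI        -> 140,000,000.0
print(applicable_cap(revenue, 15_000_000, 0.03))   # most other breaches  -> 60,000,000.0
print(applicable_cap(revenue, 7_500_000, 0.015))   # false information    -> 30,000,000.0
print(applicable_cap(revenue, 20_000_000, 0.04))   # GDPR transparency    -> 80,000,000.0
```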


Prohibitions and Protections: The AIA’s Ethical Compass17

The AIA classifies certain AI deployments, such as real-time biometric identification, within the realm of unacceptable risk, effectively banning them due to their profound implications for civil liberties. This includes systems that use biometrics to categorize individuals by sensitive attributes, as well as the indiscriminate scraping of facial images from online sources or CCTV footage to build facial recognition databases. The use of emotion detection technology in workplaces and schools, social scoring systems, predictive policing that relies solely on personal profiling or trait assessment, and AI designed to manipulate human behavior or exploit vulnerabilities are all strictly prohibited to safeguard citizens’ rights. The Act likewise prohibits manipulative AI that could cause physical or psychological harm, reflecting the Union’s prioritization of ethical considerations in AI development.

Civil rights proponents have advocated for a blanket ban on targeted biometric identification by law enforcement and immigration authorities, citing risks of unjustified intrusions on personal freedom. Moreover, the AIA addresses the deceptive potential of deepfake technology, mandating disclosure when content has been synthetically generated or manipulated, further enshrining the value of transparency within the digital milieu.

Conclusion: Balancing Act between Innovation and Regulation

The Artificial Intelligence Act stands as a legislative monolith, poised to sculpt the future of AI in the European Union and abroad.18 By implementing a comprehensive risk-based framework, it endeavors to protect individuals’ rights while providing a structured landscape for AI development. The Act’s stringent regulations for high-risk AI, coupled with lighter touch approaches for lower-risk categories, strive to balance the scales between safeguarding fundamental rights and fostering technological innovation.

As AI continues to evolve, the AIA’s limitations and strengths will be tested. The Act’s success will hinge on its ability to adapt to rapid advancements in AI while maintaining its core mission of protecting citizens and upholding the values of the Union. This legislation stands not only as a regulatory framework for the present but also as a prophetic blueprint for the ethical stewardship of AI globally.


  1. Artificial Intelligence Act: MEPs adopt landmark law, European Parliament (Mar. 13, 2024), https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law. ↩︎
  2. The AI Act Explorer, EU Artificial Intelligence Act, https://artificialintelligenceact.eu/ai-act-explorer/. ↩︎
  3. Art. 2 Scope, EU AI Act, https://www.euaiact.com/article/2. ↩︎
  4. Article 5, Artificial Intelligence Act, https://artificialintelligenceact.com/title-ii/article-5/. ↩︎
  5. Id. ↩︎
  6. Id. ↩︎
  7. Id. ↩︎
  8. Id. ↩︎
  9. Article 6, Artificial Intelligence Act, https://artificialintelligenceact.com/title-iii/chapter-1/article-6/. ↩︎
  10. Artificial Intelligence Act: MEPs adopt landmark law, European Parliament (Mar. 13, 2024), https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law. ↩︎
  11. Regulatory framework proposal on artificial intelligence, Shaping Europe’s digital future (Mar. 24, 2023), https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai. ↩︎
  12. EU AI Act: first regulation on artificial intelligence, European Parliament (June 8, 2023), https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence. ↩︎
  13. High-level summary of the AI Act, EU Artificial Intelligence Act, https://artificialintelligenceact.eu/high-level-summary/. ↩︎
  14. AI Act, Shaping Europe’s digital future, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (accessed Apr. 25, 2024). ↩︎
  15. Id. ↩︎
  16. Art. 71 Penalties, EU AI Act, https://www.euaiact.com/article/71. ↩︎
  17. Artificial Intelligence Act: MEPs adopt landmark law, European Parliament (Mar. 13, 2024), https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law. ↩︎
  18. The European Union and the United States strengthen their cooperation in the area of Future Network Systems, Shaping Europe’s digital future (Apr. 16, 2024), https://digital-strategy.ec.europa.eu/en/news/european-union-and-united-states-strengthen-their-cooperation-area-future-network-systems. ↩︎


Quote of the week

Civilization is the progress toward a society of privacy. The savage’s whole existence is public, ruled by the laws of his tribe. Civilization is the process of setting man free from men.

~ Ayn Rand