In August 2024, the European Union’s Artificial Intelligence Act (EU AI Act) came into force, marking a groundbreaking moment in the regulation of artificial intelligence (AI). Unlike earlier frameworks such as the General Data Protection Regulation (GDPR), the Act does not primarily aim to confer individual rights; instead, it imposes strict controls on providers of AI systems, categorizing AI applications according to their potential to cause harm. This approach is unprecedented, reflecting the EU’s proactive stance in addressing the challenges and risks posed by AI technologies. Let us survey the key aspects of the EU AI Act: how it categorizes AI applications, its regulatory mechanisms, and its potential global implications.
What Is the EU AI Regulation About?
The EU AI Act represents the world’s first comprehensive attempt to regulate AI technologies systematically. Unlike the GDPR, which focuses on individual data rights, the AI Act is designed to manage the providers and developers of AI systems. This shift in focus reflects the unique challenges posed by AI—technologies that can have serious and widespread effects on societies and economies.
The primary goals of the AI Act are to:
- Ensure Safety and Fundamental Rights: The Act aims to protect citizens from harmful applications of AI that could undermine their safety, privacy, or fundamental rights.
- Foster Innovation: By providing clear rules, the EU seeks to create a predictable and trustworthy environment for businesses to develop and deploy AI technologies.
- Set Global Standards: As a leader in technology regulation, the EU aspires to shape global norms and inspire similar legislative efforts worldwide.
Time will tell how well the rules hold up and whether they will need revision. Let us look at how the Act classifies AI tools by the level of risk they may pose to society.
Categorization of AI Applications
Central to the EU AI Act is the categorization of AI applications into four classes, based on their potential for harm; a short code sketch after the list illustrates the tiering. This framework determines the level of regulation required for different AI systems:
- Unacceptable Risk: These applications are outright banned due to their potential for severe societal harm. Examples include:
- Social Scoring Systems: Inspired by the dystopian concept of ranking individuals based on their behavior, often associated with China’s social credit system.
- Public Facial Recognition: Real-time biometric identification in public spaces, which raises significant concerns around privacy and surveillance.
- High-Risk Applications: These systems require stringent oversight, including mandatory risk assessments, documentation, and compliance measures. High-risk applications include:
- AI systems used in critical infrastructure (e.g., transportation safety).
- Applications in education (e.g., grading systems) or employment (e.g., automated hiring tools).
- Medical and legal decision-making tools.
- Limited Risk: These systems are subject to transparency requirements. For example, chatbots and recommendation systems must disclose their AI nature to users.
- Minimal Risk: Most AI applications fall into this category, requiring no specific regulatory measures. Examples include AI systems embedded in video games or spam filters.
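To make the tiering concrete, here is a minimal Python sketch of how a compliance team might encode the four classes as an internal lookup. The use-case names and the mapping itself are illustrative assumptions, not an official list from the Act:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict oversight required
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Illustrative mapping from use cases to tiers; the keys and the
# assignments are simplified examples, not the Act's official annexes.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "public_facial_recognition": RiskTier.UNACCEPTABLE,
    "automated_hiring": RiskTier.HIGH,
    "exam_grading": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def required_controls(use_case: str) -> str:
    """Return a rough summary of the obligations for a given use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return {
        RiskTier.UNACCEPTABLE: "prohibited: may not be placed on the EU market",
        RiskTier.HIGH: "risk assessment, documentation, human oversight, monitoring",
        RiskTier.LIMITED: "transparency: users must be told they interact with AI",
        RiskTier.MINIMAL: "no specific obligations under the Act",
    }[tier]

print(required_controls("automated_hiring"))
# -> risk assessment, documentation, human oversight, monitoring
```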
Class 1: Unacceptable Risk Applications
The category of “unacceptable risk” reflects the EU’s commitment to safeguarding fundamental rights. Applications that fall under this category are not merely regulated but are entirely prohibited.
Banned Applications
- Facial Recognition in Public Spaces: The Act explicitly bans the use of facial recognition technology in public areas, except under narrowly defined circumstances such as preventing serious crimes. The risks of pervasive surveillance and the chilling effects on freedom of expression and assembly underpin this prohibition.
- Social Scoring Systems: Inspired by the concerns surrounding social credit systems, the Act prohibits practices that judge individuals’ trustworthiness or behavior to assign scores that could affect their access to services or opportunities.
Ethical Considerations
The prohibition of such applications underscores the EU’s commitment to ethical AI. The legislation prioritizes human dignity, autonomy, and freedom over the potential benefits of such technologies, setting a precedent for other jurisdictions.
Class 2: High-Risk Applications
High-risk applications are central to the EU AI Act’s framework. These systems are not banned but are subject to strict regulatory oversight to ensure they do not harm individuals or society.
According to the Regulation, manufacturers of high-risk systems must test them thoroughly before placing them on the market, importers and downstream retailers must ensure that the systems comply with the law, and users must monitor their use. Final decision-making and supervisory authority remains with a human being, and the Regulation grants the addressees of such systems’ decisions rights of objection, information, and appeal. High-risk systems include, in particular, AI systems that are components of products subject to the EU Product Safety Regulation, as well as AI systems in one of the following eight areas:
- Recognition and classification based on biometric features
- Operation of critical infrastructure
- Education and vocational training
- Labor, personnel management, and access to self-employment
- Basic provision of public or private services
- Law enforcement
- Migration, asylum, and border control
- Administration of justice and democratic processes
Take lending, for example: under the AI Act, banks may not let the machine alone decide on a customer’s creditworthiness. A human must check the score calculated by the machine and take responsibility for approving or rejecting the loan.
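One way to picture this human-in-the-loop requirement in code is the minimal sketch below; the field names, scoring scale, and record format are invented for illustration, not drawn from the Act or any real banking system:

```python
from dataclasses import dataclass

@dataclass
class CreditAssessment:
    applicant_id: str
    machine_score: float  # model output, advisory only
    recommendation: str   # "approve" or "reject", suggested by the model

def final_decision(assessment: CreditAssessment,
                   reviewer: str,
                   reviewer_approves: bool) -> dict:
    """A human reviewer, not the model, issues the binding decision.

    The machine score is recorded for audit purposes, but the outcome
    is set only from the reviewer's explicit choice.
    """
    return {
        "applicant_id": assessment.applicant_id,
        "machine_score": assessment.machine_score,
        "model_recommendation": assessment.recommendation,
        "decided_by": reviewer,  # the accountable human
        "outcome": "approved" if reviewer_approves else "rejected",
    }

# Usage: the model suggests rejection, but the loan officer decides.
assessment = CreditAssessment("A-1042", machine_score=0.41, recommendation="reject")
print(final_decision(assessment, reviewer="loan_officer_7", reviewer_approves=True))
```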
High-risk AI systems are those whose use does not automatically harm health, safety, fundamental rights, the environment, democracy or the rule of law, but which are susceptible to deliberate or negligent misuse. They are typically used to manage critical infrastructure, material or non-material resources or personnel.
Class 3: Transparency Risk
The EU legislator categorizes AI as moderately risky, or at least non-transparent, if it does not conflict with fundamental rights but leaves users in the dark about the nature and sources of the service. This applies to chatbots, but above all to so-called generative AI, i.e., programs that generate synthetic text, images, or videos (e.g., deepfakes). Under the law, such applications must identify themselves as machines, label their outputs as artificial, document their training data and its sources, protect the copyrights of the sources, and prevent the generation of illegal content.
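As a rough illustration of what such labeling could look like in practice, here is a small sketch that wraps generated text with a disclosure notice and provenance metadata; the schema and disclosure wording are assumptions made for this example, since the Act defines obligations rather than a concrete format:

```python
from datetime import datetime, timezone

def label_generated_text(text: str, model_name: str) -> dict:
    """Wrap model output in a record that discloses its AI origin.

    The disclosure wording and metadata fields are invented for this
    example; the Act specifies obligations, not a wire format.
    """
    return {
        "content": text,
        "disclosure": "This text was generated by an AI system.",
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,  # machine-readable flag for downstream tools
    }

labeled = label_generated_text("Our refund policy is ...", model_name="support-bot-v2")
print(labeled["disclosure"])
```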
Class 4: Low Risk
No restrictions apply to simple AI systems such as spam filters or AI features embedded in video games.
Compliance Requirements
Providers of high-risk AI systems must meet rigorous requirements, including the following (a monitoring sketch after the list illustrates the last point):
- Risk Management: Developers must conduct comprehensive risk assessments to identify and mitigate potential harms.
- Data Governance: High standards of data quality, bias mitigation, and data security are mandatory.
- Transparency and Accountability: Providers must maintain detailed documentation of their systems, including algorithms and data sources, for auditing purposes.
- Post-Market Monitoring: Continuous monitoring of deployed systems to ensure they operate as intended and do not pose unforeseen risks.
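To give post-market monitoring some shape, here is a minimal sketch of a hook that watches deployed model scores for drift; the class name, thresholds, and drift heuristic are all illustrative assumptions, not anything prescribed by the Act:

```python
import random
import statistics
from collections import deque

class PostMarketMonitor:
    """Minimal post-market monitoring hook.

    Keeps a rolling window of model scores and flags drift when the
    window's mean strays too far from the baseline measured at
    deployment time. Thresholds and logic are illustrative only.
    """

    def __init__(self, baseline_mean: float, tolerance: float = 0.1,
                 window: int = 100):
        self.baseline_mean = baseline_mean
        self.tolerance = tolerance
        self.scores: deque = deque(maxlen=window)

    def record(self, score: float) -> None:
        self.scores.append(score)

    def drift_alert(self) -> bool:
        """True once a full window of scores has drifted past tolerance."""
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough data yet
        return abs(statistics.mean(self.scores) - self.baseline_mean) > self.tolerance

# Simulated production feed whose scores have drifted upward.
monitor = PostMarketMonitor(baseline_mean=0.55)
for _ in range(500):
    monitor.record(random.gauss(0.70, 0.05))
print("drift detected:", monitor.drift_alert())  # -> drift detected: True
```

In a real deployment, an alert like this would feed the provider’s incident-reporting and corrective-action process rather than just printing to the console.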
Key Sectors Impacted
The regulation of high-risk applications is particularly impactful in sectors such as:
- Healthcare: AI systems used in diagnostics, treatment planning, and drug development must ensure patient safety and minimize biases.
- Employment: Automated hiring tools must demonstrate fairness and prevent discriminatory practices.
- Law Enforcement: AI tools used in crime prevention or risk assessment must adhere to high standards of accuracy and accountability.
Encouraging Innovation Through Clear Guidelines
While the AI Act imposes stringent requirements, it also seeks to foster innovation by providing clear rules and support for AI developers and providers.
Sandboxing and Innovation Support
The Act includes provisions for “sandboxing”—creating controlled environments where AI systems can be tested and refined under regulatory guidance. This approach enables businesses to experiment and innovate while ensuring compliance with safety and ethical standards.
Harmonized Rules
By establishing uniform regulations across the EU, the AI Act reduces fragmentation and legal uncertainty. This harmonization benefits businesses by creating a single market with consistent rules, reducing compliance costs and barriers to entry.
Support for SMEs
Recognizing the challenges faced by small and medium-sized enterprises (SMEs), the Act provides targeted support, including simplified compliance procedures and access to resources and expertise.
Global Implications: Setting a Precedent
The EU AI Act’s significance extends beyond Europe, influencing global norms and inspiring similar legislative efforts worldwide.
A Template for Global Regulation
The Act’s comprehensive framework provides a model for other jurisdictions seeking to regulate AI. Its emphasis on categorization, risk-based regulation, and ethical considerations is likely to shape global discussions on AI governance.
Competitive Advantage
By setting high standards, the EU positions itself as a leader in ethical AI, potentially giving European companies a competitive edge in global markets where trust and accountability are increasingly valued.
Challenges and Criticism
While the Act is groundbreaking, it has also faced criticism:
- Compliance Costs: The stringent requirements may disproportionately burden smaller companies, potentially stifling innovation.
- Global Competitiveness: Some critics argue that the Act’s strict rules could disadvantage European companies in the global AI race, particularly against competitors in less-regulated environments.
- Enforcement Challenges: Ensuring compliance across diverse industries and technologies is a complex task that requires significant resources and expertise.
In summary, the EU AI Act represents a bold and pioneering approach to regulating artificial intelligence. By focusing on providers of AI systems and categorizing applications based on their potential for harm, the Act aims to strike a balance between innovation and protection. Its prohibition of unacceptable-risk applications, such as public facial recognition and social scoring, underscores the EU’s commitment to ethical AI. At the same time, its stringent requirements for high-risk applications ensure accountability and safety without stifling progress.
As the first comprehensive AI regulation, the EU AI Act sets a global benchmark, influencing the development of similar frameworks worldwide. While challenges remain, including enforcement and potential impacts on innovation, the Act’s emphasis on safety, transparency, and fundamental rights positions the EU as a leader in the governance of emerging technologies. As AI continues to evolve, the EU AI Act provides a critical foundation for ensuring that these powerful tools are developed and deployed responsibly, for the benefit of all.
