Artificial intelligence (AI) is profoundly transforming economic, social, and technological sectors. However, its rapid development raises crucial questions about ethics, security, and respect for fundamental rights. In response, the European Union has adopted the AI Act, the world’s first comprehensive legal framework regulating AI on the basis of a risk-based approach. In force since August 2024, the regulation is being rolled out gradually until 2027, with key deadlines to be met starting in 2025.
The AI Act does not merely impose constraints: it offers businesses a unique opportunity to structure their AI practices in a way that is ethical, secure, and competitive. The text also establishes a governance framework that combines European coordination with national implementation. For professionals, understanding what is at stake is essential to anticipate the changes ahead and turn these obligations into levers for responsible innovation.
Who is affected by the AI Act, and which systems are regulated?
The AI Act applies to any organization, whether European or international, that develops, deploys, or uses AI systems in the European market. This includes companies based outside the EU if their solutions are intended for users or customers in Europe. This approach reflects the European Union’s desire to impose its standards globally, similar to the GDPR for data protection.
A classification based on four levels of risk
To adapt obligations to real-world stakes, the AI Act classifies AI systems into four categories, based on their potential impact on fundamental rights, security, and citizens’ health. This classification allows regulations to be targeted where they are most needed, without stifling innovation in low-impact areas.
Systems posing an unacceptable risk are banned outright. They cover practices deemed too dangerous for fundamental rights or safety, such as subliminal manipulation, which seeks to influence behavior without individuals’ awareness, or social scoring systems, which evaluate and rank citizens based on their behavior. Real-time biometric identification in public spaces is also prohibited, except in strictly regulated cases (such as counter-terrorism).
Systems with high risk are subject to strict obligations, as they concern sensitive areas such as recruitment, education, healthcare, or critical infrastructure. For example, an algorithm used to evaluate candidates during a recruitment process, or to diagnose diseases, must comply with enhanced requirements regarding transparency, traceability, and human oversight. These systems are detailed in Annex III of the AI Act, which lists critical use cases such as biometrics, employment, or essential public services.
Systems with limited risk, such as chatbots or content generation tools, must comply with transparency rules. Users must be clearly informed that they are interacting with AI to avoid confusion or manipulation. This category aims to regulate the most common uses of AI without over-regulating them.
Finally, most AI systems, such as spam filters or video games, fall under minimal risk. They are not subject to any specific obligations beyond existing laws, preserving a space for innovation in low-risk areas.
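For teams starting an internal compliance register, this four-tier logic can be sketched in a few lines of code. The Python example below is a deliberately simplified illustration: the tier names and the sample mappings are ours, and the real classification depends on Articles 5 and 6 and Annex III of the regulation, not on a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the AI Act (illustrative labels)."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # strict obligations (Annex III use cases)
    LIMITED = "limited"            # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"            # no AI-Act-specific obligations

# Example mapping of use cases to tiers, for illustration only.
EXAMPLE_TIERS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative tier for a known use case, defaulting to minimal."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)

if __name__ == "__main__":
    for use_case, tier in EXAMPLE_TIERS.items():
        print(f"{use_case}: {tier.value}")
```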
What are the obligations for high-risk AI systems?
Companies that develop or use AI systems classified as high-risk must comply with a set of strict requirements designed to ensure safety, transparency, and respect for fundamental rights. These obligations, detailed in Chapter III of the AI Act, cover the entire lifecycle of the system, from design to market withdrawal.
Among the main requirements is continuous risk management, which involves regularly assessing potential dangers and implementing measures to mitigate them. The data used to train, validate, and test the system must be relevant, representative, and as free of bias as possible, to avoid discriminatory or erroneous outcomes.
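By way of illustration, here is a minimal sketch of the kind of representativeness check a team might run on its training data before a formal assessment. The field name and the 10% threshold are arbitrary choices made for the example; the AI Act does not prescribe a particular metric.

```python
from collections import Counter

def representation_report(records, attribute, min_share=0.10):
    """Report each group's share of the data and flag under-represented groups.

    The 10% threshold is an arbitrary illustration, not a value taken from
    the AI Act or any harmonized standard.
    """
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {
        group: {"share": round(n / total, 2), "under_represented": n / total < min_share}
        for group, n in counts.items()
    }

# Toy dataset with an obvious imbalance between urban and rural records
sample = [{"region": "urban"}] * 92 + [{"region": "rural"}] * 8
print(representation_report(sample, "region"))
# {'urban': {'share': 0.92, 'under_represented': False},
#  'rural': {'share': 0.08, 'under_represented': True}}
```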
Comprehensive technical documentation must be prepared before the system is placed on the market. It details the system’s operation, its limitations, the security measures implemented, and the results of risk assessments. It serves as a reference for regulatory authorities and demonstrates the system’s compliance.
Transparency towards users is also a key obligation. Companies must clearly inform users about the system’s capabilities and limitations, as well as the decisions it can make. This requirement aims to build trust and allow users to understand, and potentially challenge, automated decisions.
Human oversight must always be possible, whether to supervise, correct, or cancel decisions made by the system. This requirement prevents excessive automation and ensures that humans retain final decision-making authority in critical processes.
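As an illustration of this human-in-the-loop principle, the sketch below routes borderline or adverse automated decisions to a human reviewer. The thresholds and field names are invented for the example; the AI Act requires effective human oversight but does not prescribe any particular mechanism.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str          # e.g. "shortlist" / "reject"
    confidence: float     # model score between 0 and 1
    needs_human_review: bool

def decide(score: float, threshold: float = 0.5, review_band: float = 0.15) -> Decision:
    """Route borderline or negative outcomes to a human reviewer.

    The 0.5 threshold and 0.15 review band are illustrative values only.
    """
    outcome = "shortlist" if score >= threshold else "reject"
    borderline = abs(score - threshold) < review_band
    needs_review = borderline or outcome == "reject"
    return Decision(outcome, score, needs_review)

print(decide(0.42))
# Decision(outcome='reject', confidence=0.42, needs_human_review=True)
```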
Systems must also be robust and secure, with protection against cyberattacks and malfunctions. This robustness is particularly important for systems used in sensitive areas such as healthcare or critical infrastructure.
Finally, high-risk systems must be registered in a dedicated European database and subject to post-market monitoring so that any issues can be detected and corrected quickly. Some Annex III systems may be exempt if they meet the specific conditions set out in Article 6(3) of the AI Act.
Who oversees the enforcement of the AI Act?
The enforcement of the AI Act relies on a two-tier governance system, combining central coordination at the European level and local implementation by Member States. This approach ensures consistent application of the regulation while accounting for national specificities.
At the European level: centralized coordination
The European AI Office, operational since February 2024, plays a central role in coordinating and supervising general-purpose AI models. It is responsible for drafting delegated acts, managing the European database of high-risk AI systems, and ensuring uniform application of the regulation.
The European AI Board brings together representatives from each Member State. Its role is to harmonize practices among countries and ensure the consistent application of the AI Act.
A consultative forum, composed of experts and stakeholders, provides technical and strategic advice to European institutions. This forum integrates field feedback and adapts guidelines to the realities faced by businesses and users.
At the national level: designated authorities in each country
Each EU Member State designates its own authorities to monitor AI system compliance and sanction non-compliance. These authorities vary by country but share a common mission: ensuring compliance with legal obligations and supporting businesses in their compliance efforts.
In France, for example, the DGCCRF (Directorate General for Competition Policy, Consumer Affairs, and Fraud Control) and the CNIL (National Commission on Informatics and Liberty) are the main competent authorities. The DGCCRF monitors the compliance of AI systems with legal obligations, while the CNIL focuses on data protection and fundamental rights.
Key deadlines to remember
The AI Act is being implemented gradually, with specific deadlines to meet:
- February 2, 2025: enforcement of bans on unacceptable-risk systems and rules on AI literacy.
- August 2, 2025: establishment of national authorities and application of rules for general-purpose AI models.
- August 2, 2026: general application of the regulation, except for systems linked to harmonized legislation.
- August 2, 2027: full application of the AI Act, including for high-risk systems embedded in products covered by EU harmonized legislation.
Harmonized standards: how to prove compliance?
To facilitate compliance, the European Union has introduced the concept of harmonized standards. These standards, published in the Official Journal of the EU, offer a presumption of conformity to companies that apply them. In other words, if a company complies with these standards, its AI systems will be considered compliant with the AI Act, unless proven otherwise.
Seven standards in preparation to cover all key aspects
Currently, seven European standards are being drafted to support the implementation of the AI Act. They will address various themes, such as:
- Risk management, to assess and mitigate potential dangers throughout the system’s lifecycle.
- Data quality, to ensure that data used to train AI models is relevant, representative, and free of bias.
- Cybersecurity, to protect systems against cyberattacks and malfunctions.
- Transparency, to clearly inform users about the capabilities and limitations of AI systems.
These standards will provide companies with precise guidelines for complying with the AI Act’s requirements while clarifying regulators’ expectations. They will also facilitate access to the CE marking, which certifies a product’s compliance with European requirements.
Two paths to obtain CE marking
Companies have two options to obtain CE marking and prove the compliance of their AI systems:
- Apply harmonized standards: this is the simplest and most reliable approach, as compliance with these standards confers a presumption of conformity.
- Demonstrate compliance through other means: this path is more complex, as it requires relying on internal specifications or alternative standards. However, it can be useful for companies whose products are not covered by existing harmonized standards.

What are the risks of non-compliance?
Non-compliance with the obligations imposed by the AI Act exposes companies to severe financial penalties, among the highest ever introduced for a technological regulation. These fines aim to deter non-compliant practices and ensure respect for fundamental rights and citizen safety.
Companies using banned systems (those deemed unacceptable risk) face fines of up to 35 million euros or 7% of their global turnover, whichever is higher. These sanctions target the most dangerous practices, such as subliminal manipulation or social scoring, which are deemed incompatible with European values.
For high-risk systems, non-compliance with obligations can result in fines of up to 15 million euros or 3% of global turnover, whichever is higher. These penalties apply to companies that fail to meet requirements regarding risk management, transparency, or human oversight.
Finally, providing incorrect, incomplete, or misleading information to authorities or users can cost up to 7.5 million euros or 1% of global turnover, whichever is higher. This sanction aims to ensure the reliability of the information communicated and to prevent circumvention of the rules.
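Because each ceiling is expressed as a fixed amount or a percentage of global turnover, whichever is higher, the effective exposure depends on the company’s size. The short sketch below simply computes these upper bounds for a hypothetical turnover; the actual fine is set by the competent authority within these ceilings.

```python
def maximum_fine(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine: the higher of a fixed cap and a share of turnover."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# A hypothetical company with 2 billion euros of global annual turnover
turnover = 2_000_000_000
print(maximum_fine(35_000_000, 0.07, turnover))  # prohibited practices -> 140,000,000.0
print(maximum_fine(15_000_000, 0.03, turnover))  # high-risk obligations -> 60,000,000.0
print(maximum_fine(7_500_000, 0.01, turnover))   # misleading information -> 20,000,000.0
```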
Beyond fines, non-compliant companies face legal risks, such as lawsuits for non-compliance with fundamental rights, as well as reputational risks, with a loss of trust from customers, partners, and regulators. Non-compliance can also lead to operational risks, such as forced product withdrawals or the interruption of strategic projects.
How to prepare for the AI Act?
To anticipate the implementation of the AI Act and achieve compliance, companies must adopt a proactive and structured approach. Here are the key steps to follow:
1. Map the AI systems used or developed
The first step is to identify all AI systems used or developed within the company and classify them according to their risk level (unacceptable, high, limited, or minimal). This mapping helps prioritize actions and allocate the necessary resources for compliance.
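A simple way to make this mapping actionable is to keep a structured inventory and work through it by risk level. The sketch below is purely illustrative: the field names and the priority order are our own conventions, not terms imposed by the AI Act.

```python
# Illustrative inventory of AI systems, ordered so that the systems
# carrying the heaviest obligations are handled first.
RISK_PRIORITY = {"unacceptable": 0, "high": 1, "limited": 2, "minimal": 3}

inventory = [
    {"name": "CV screening tool", "role": "deployer", "risk": "high"},
    {"name": "support chatbot", "role": "provider", "risk": "limited"},
    {"name": "spam filter", "role": "deployer", "risk": "minimal"},
]

for system in sorted(inventory, key=lambda s: RISK_PRIORITY[s["risk"]]):
    print(f'{system["name"]} ({system["role"]}): {system["risk"]} risk')
```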
2. Assess risks and document processes
Once the systems are identified, it is essential to assess the risks associated with each and document compliance processes. This involves verifying that the data used is relevant and free of bias, that the systems are robust and secure, and that users are informed transparently.
3. Train teams on the stakes of the AI Act
Training teams is a key element in ensuring effective compliance. Employees must understand the implications of the AI Act for their daily work, whether in data management, transparency, or human oversight. This training helps create a corporate culture aligned with the principles of responsible AI.
4. Monitor the evolution of standards and directives
Finally, companies must stay informed about the latest regulatory updates and adapt their practices accordingly. This includes tracking harmonized standards in development, as well as recommendations from European and national authorities. Active monitoring helps maintain compliance and seize opportunities offered by a clear and stable regulatory framework.
Conclusion: the AI Act, an opportunity for responsible artificial intelligence
The AI Act represents much more than a mere regulatory constraint: it is an ambitious framework for developing artificial intelligence that is safe, transparent, and aligned with European values. By preparing now, companies can not only avoid sanctions but also strengthen trust with their customers and partners, while contributing to a more responsible digital ecosystem.