The European Union’s risk-based AI Act came into force on Thursday, 1 August 2024, and will be implemented in stages through mid-2026. Within six months, bans on several types of AI use in specific scenarios, such as law enforcement’s use of remote biometrics in public spaces, are due to take effect.
The AI Act classifies AI applications into tiers according to their potential risk. Under this approach, most AI applications are considered “low-risk” and will not be regulated at all.
The “limited risk” tier covers AI technologies such as chatbots and tools that can be used to create deepfakes. These must meet transparency requirements so that users are not misled.
“High-risk” AI applications include biometric processing and facial recognition, AI-powered medical software, and the use of AI in areas such as education and employment. Such systems must be registered in an EU database, and their developers must comply with risk and quality management requirements.
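The tiered structure above amounts to a lookup from risk tier to obligations. The tier names and obligations in this sketch come from the article; the helper function and the example lookup are purely illustrative, not legal guidance.

```python
# Rough summary of the AI Act's risk tiers as described above.
# Tier names and obligations follow the article; everything else
# is illustrative only, not legal guidance.

TIER_OBLIGATIONS = {
    "low": "not regulated under the Act",
    "limited": "transparency requirements so users are not misled",
    "high": "registration in an EU database plus risk and quality management",
}

def obligations_for(tier: str) -> str:
    """Look up the obligations associated with a risk tier."""
    return TIER_OBLIGATIONS[tier]

print(obligations_for("high"))
# registration in an EU database plus risk and quality management
```

A real compliance exercise would, of course, start from a legal classification of each system rather than a hard-coded dictionary.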
Penalties and Requirements for AI Developers
The AI Act provides for a multi-tiered system of penalties: fines of up to 7% of global annual turnover for using prohibited AI applications, up to 3% for violating other obligations, and up to 1.5% for providing regulators with false information.
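As a back-of-the-envelope illustration of the turnover-based caps above, the maximum fine for each violation class can be computed directly. The percentages come from the article; the Act also sets fixed-amount caps, which this sketch ignores, and the example turnover figure is invented.

```python
# Maximum turnover-based fines under the AI Act, per the percentages above.
# The Act's fixed-amount alternatives are omitted; these are caps, not
# actual fines, and the example turnover is invented.

PENALTY_RATES = {
    "prohibited_ai_use": 0.07,    # up to 7% of global annual turnover
    "other_obligations": 0.03,    # up to 3%
    "false_information": 0.015,   # up to 1.5%
}

def max_fine(global_annual_turnover: float, violation: str) -> float:
    """Return the cap on turnover-based fines for a violation class."""
    return global_annual_turnover * PENALTY_RATES[violation]

# A company with EUR 2 billion in global annual turnover would face a cap
# of roughly EUR 140 million for deploying a prohibited AI application.
cap = max_fine(2_000_000_000, "prohibited_ai_use")
```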
A separate section of the new law concerns developers of so-called general-purpose AI (GPAI). The EU has adopted a risk-based approach here as well, with transparency as the main requirement for GPAI developers. Only a handful of the most powerful models are expected to be required to carry out risk assessment and mitigation measures.
Specific recommendations for GPAI developers have not yet been drawn up, since there is no track record of applying the new law, notes NIX Solutions. The AI Office, the body responsible for strategic oversight and for building the AI ecosystem, has launched a consultation and called on developers to take part. The code of practice is due to be completed by April 2025.
A “Playbook on the AI Act” released by OpenAI late last month says the company expects to “work closely with the EU AI Authority and other relevant bodies as we implement the new law in the coming months,” including by producing white papers and other guidance for GPAI model providers and developers.
“If your organisation is trying to determine how to comply with the AI Act, you should first try to classify any AI systems by scope. Identify which GPAI and other AI systems you use, determine how they are classified, and consider what obligations arise from your use cases,” the playbook says. “These issues can be complex, so you should consult with legal counsel.”
As the implementation of the AI Act progresses, we’ll keep you updated on any significant changes or clarifications that may affect AI developers and users in the European Union.