The Czech Republic wants the Commission to evaluate how best to adapt the obligations of the AI Act to general purpose AI, according to the latest compromise text seen by EURACTIV. Other aspects covered include law enforcement, transparency, innovation and governance.
The compromise, circulated on Friday (23 September), completes the third revision of the AI Act, a landmark proposal to regulate Artificial Intelligence using a risk-based approach. The document will be discussed at a Telecom Working Party meeting on 29 September.
General purpose AI systems
How to approach general purpose AI has been a much-debated topic. These systems, such as large language models, can be adapted to perform various tasks, meaning the provider might not be aware of the final use of its system.
The question is whether general purpose AI systems should be subject to the regulation's obligations if they can be used in or integrated into high-risk applications. During the discussions in the EU Council, several countries lamented the lack of any evaluation of what the direct application of these obligations might imply in terms of technical feasibility and market developments.
The Czech Presidency proposed that the European Commission adapt the relevant obligations via implementing acts within 18 months of the regulation's entry into force, after carrying out a public consultation and an impact assessment on how best to account for the specific nature of such technology.
However, for the Presidency, these future obligations for general purpose AI systems should not apply to SMEs, as long as they are not partners or linked to larger companies.
Moreover, the EU executive could adopt additional implementing acts detailing how providers of general purpose systems used in high-risk AI must comply with the examination procedure.
In cases where providers do not envisage any high-risk application for their general purpose system, they would be relieved of the related requirements. If the providers become aware of any misuse, the compromise mandates that they take measures proportionate to the seriousness of the associated risks.
The compromise reduced the Commission’s discretion to adopt common technical specifications for high-risk and general-purpose AI systems.
Law enforcement

A series of provisions has been included in favour of law enforcement authorities.
The Czechs proposed extending the registration in the public database from providers of high-risk systems to all public bodies using such AI, with the notable exception of law enforcement, border control, migration or asylum authorities.
Moreover, the obligation to report serious incidents to the provider of a high-risk system, or to provide information for post-market monitoring, would not apply to sensitive operational data related to law enforcement activities.
Similarly, the market surveillance authority would not have to reveal sensitive information when informing its peers and the Commission that a high-risk system has been deployed without conformity assessment via the emergency procedure.
The article mandating confidentiality to all entities involved in applying the AI regulation has been extended to protect criminal and administrative proceedings and the integrity of information classified under EU or national law.
Regarding the testing of new AI systems in real-world conditions, law enforcement authorities have been exempted from the obligation to obtain the subject's informed consent, on the condition that the testing does not negatively affect the subject.
Transparency

If an AI system is meant for human interaction, the person must be made aware that they are interacting with a machine unless it is obvious “from the point of view of a reasonable natural person who is reasonably well-informed, observant and circumspect.”
The same obligations apply to biometric categorisation and emotion recognition AI systems, the only exception in all these cases being law enforcement investigations. Even then, the disguise must be “subject to appropriate safeguards for the rights and freedoms of third parties.”
Innovation

The list of actors from the AI ecosystem involved in the regulatory sandboxes has been broadened to include “relevant stakeholder and civil society organisations.”
Regarding the support activities that member states will have to put in place, Prague is pitching to extend the training on applying the AI rulebook, originally meant for SMEs and start-ups, to local authorities as well.
Governance

Within the European Artificial Intelligence Board, which will gather all the EU’s competent national authorities, the Czechs propose setting up two subgroups that would provide a platform for cooperation among market surveillance authorities.
Wording has been added that would empower the Commission to carry out market evaluations related to identifying specific questions that would require urgent coordination among market surveillance authorities.
Penalties

For Prague, when setting the penalties, EU countries should consider the principle of proportionality for non-professional users.
The compromise specifies which violations would entail an administrative fine of €20 million or 4% of a company’s annual turnover. These include breaches of the obligations regarding high-risk system providers, importers, distributors, and users, as well as the requirements for notified bodies and legal representatives.
For SMEs and start-ups, the percentage has been lowered from 3% to 2% of annual turnover.
[Edited by Nathalie Weatherald]