Czech Presidency puts forward narrower classification of high-risk systems

A new partial compromise on the AI Act, seen by EURACTIV on Friday (16 September), further elaborates on the concept of the ‘extra layer’ that would qualify an AI system as high-risk only if it has a major impact on decision-making.

The AI Act is a landmark proposal to regulate Artificial Intelligence in the EU following a risk-based approach. The high-risk category is therefore a key part of the regulation, as it covers the systems with the strongest impact on human safety and fundamental rights.

On Friday, the Czech Presidency of the EU Council circulated the new compromise, which attempts to address the outstanding concerns related to the categorisation of high-risk systems and the related obligations for AI providers.

The text focuses on the first 30 articles of the proposal and also covers the definition of AI, the scope of the regulation, and the prohibited AI applications. The document will be the basis for a technical discussion at the Telecom Working Party meeting on 29 September.

High-risk systems’ classification

In July, the Czech presidency proposed adding an extra layer to determine whether an AI system entails high risks, namely the condition that the system would have to play a major role in shaping the final decision.

The central idea is to create more legal certainty and prevent AI applications that are “purely accessory” to decision-making from falling under the scope. The presidency wants the European Commission to define the concept of purely accessory via an implementing act within one year of the regulation’s entry into force.

The principle that a system that takes decisions without human review will be considered high-risk has been removed because “not all AI systems that are automated are necessarily high-risk, and because such a provision could be prone to circumvention by putting a human in the middle”.

In addition, the text states that when the EU executive updates the list of high-risk applications, it will have to consider the potential benefit the AI can have for individuals or society at large instead of just the potential for harm.

The presidency did not change the high-risk categories listed under Annex III, but it introduced significant rewording. In addition, the text now explicitly states that the conditions for the Commission to take applications out of the high-risk list are cumulative.

High-risk systems’ requirements

In the risk management section, the presidency modified the wording to rule out identifying the risks related to high-risk systems through testing, as this practice should only be used to verify or validate mitigating measures.

The changes also give the competent national authority more leeway to assess which technical documentation is necessary for SMEs providing high-risk systems.

Regarding human review, the draft regulation requires at least two persons to oversee high-risk systems. However, the Czechs are proposing an exception to this so-called ‘four-eyes principle’ for AI applications in the area of border control, where EU or national law allows it.

As regards financial institutions, the compromise states that the quality management system they would have to put in place for high-risk use cases can be integrated with the one already in place to comply with existing sectoral legislation, to avoid duplication.

Similarly, financial authorities would have market surveillance powers under the AI regulation, including carrying out ex-post surveillance activities that can be integrated into the existing supervisory mechanism of the EU’s financial services legislation.

Definition

The Czech presidency kept most of its previous changes to the definition of Artificial Intelligence but deleted the reference to the fact that AI must follow ‘human-defined’ objectives, as it was deemed “not essential”.

The text now specifies that an AI system’s lifecycle would end if it is withdrawn by a market surveillance authority or if it undergoes substantial modification, in which case it would have to be considered a new system.

The compromise also introduced a distinction between the user and the person controlling the system, who might not necessarily be the same person affected by the AI.

The Czechs also added to the definition of machine learning that such systems are capable not only of learning but also of inferring data.

Moreover, the previously added concept of autonomy of an AI system has been described as “the degree to which such a system functions without external influence.”

Scope

Prague introduced a more direct exclusion of research and development activities related to AI, “including also in relation to the exception for national security, defence and military purposes,” the explanatory part reads.

The critical part of the text on general-purpose AI was left for the next compromise.

Prohibited practices

The part on prohibited practices, a sensitive issue for the European Parliament, is not proving controversial among member states, which did not request major modifications.

At the same time, the text’s preamble further defines the concept of AI-enabled manipulative techniques as the use of stimuli that are “beyond human perception or other subliminal techniques that subvert or impair person’s autonomy […] for example in cases of machine-brain interfaces or virtual reality.”


[Edited by Zoran Radosavljevic]
