insAIde #44 [AI Act special series #4]: Requirements and obligations for high-risk AI systems
Continuing our examination of the AI Act and its risk-based approach: after introducing the prohibited practices listed in Article 5 of the regulation, it is time to move on to high-risk uses.
The class of high-risk applications is the most consequential one, since whether or not a company's system falls into this category determines the compliance burden it will bear.
It is no coincidence that it was the subject of heated debate among the co-legislators, with many changes compared to the Commission's initial proposal. The crux of the matter is simple: on the one hand, there was pressure not to expand the list of high-risk uses too far, so as to spare companies this additional burden; on the other, the Parliament in particular did not want certain applications to escape due guarantees and safeguards for the sole purpose of avoiding more controls.
What are high-risk applications
High-risk applications are, therefore, those AI systems used as safety components of a product (toys, medical devices, cars, aircraft, etc.) or those falling under the uses listed in Annex III. Annex III classifies as high-risk AI for remote biometric identification (but not real-time remote biometric identification, which is prohibited, subject to the exceptions provided for in Article 5), AI for emotion recognition, and AI used for biometric categorisation based on sensitive or protected attributes or characteristics.
Also high-risk are AI systems used in critical infrastructure (gas, electricity and water supply) and those used to assess eligibility for essential public and private services, such as access to care or credit, or to calculate the premium of a life or health insurance policy. Systems used to prioritise emergency calls to the police, hospitals or fire brigade complete the list.
In education, the Parliament made several changes, classifying as high-risk any AI used to determine admission to a school of any level, to decide whether a student passes an exam, or to detect whether a student has behaved inappropriately during an exam. It is worth recalling the Italian case, sanctioned by the Garante (the Italian data protection authority) a few years ago, of a university that, during the Covid period, had used a system that automatically detected whether a student was behaving inappropriately during exams, without providing adequate safeguards in the event of an error.
The European Parliament also reworded the provisions in the employment sphere, classifying as high-risk AI systems used to screen and evaluate CVs, as well as those used to evaluate employees, including for promotions, dismissals or the assignment of tasks.
With regard to law enforcement and the administration of justice, where permitted by national or European law, law enforcement agencies will be able to use systems to assess the likelihood of a person becoming the victim of a crime; they will also be able to use polygraphs and tools to assess the reliability of the evidence in their possession. They may use systems to assess the likelihood of a person committing a crime or re-offending, provided that the assessment is not based solely on profiling, a practice prohibited by Article 5. Such systems may also be used by border control authorities to manage migration flows, to assess entry applications and to identify migrants.
AI systems can also be used to assist (not replace, fortunately) judges in analysing case law and the law to be applied.
The fact that all these use cases are considered high-risk reflects the balance struck between the willingness to use new technologies, even in high-risk environments, and the protection of fundamental rights.
The Article 6 exemption and the FRIA
In this regard, the Parliament has, in our opinion, made a wise move by adding new paragraphs to Article 6 compared to the Commission's original text.
If an application listed in Annex III does not pose a significant risk to fundamental rights, health or safety, and does not materially influence the final decision that would have been taken by a person, it may avoid being classified as high-risk. These are cases in which AI is used for ancillary tasks or to improve the outcome of an action performed by a human. To benefit from the exemption, however, the provider will have to document its assessment showing that the system is not high-risk.
The big point scored by the Parliament in the negotiations was the introduction of the Fundamental Rights Impact Assessment (FRIA). Originally conceived by the Parliament for all high-risk applications, the compromise makes it mandatory for public bodies and for private entities providing public services, such as banks. Before the system is put into use, the likely consequences for the people at risk must be identified, together with the measures to be adopted in terms of human oversight and internal organisation.
To ease the burden on companies that have already carried out a data protection impact assessment (DPIA), the FRIA will simply supplement it; in addition, the AI Office will have to provide a template to facilitate compliance. An important request, which however survives only in the preamble of the regulation, is that companies involve stakeholders (trade associations, consumer protection associations, etc.) to better understand the real risks at stake.
What obligations for providers
Providers of high-risk AI are therefore required to have a risk management system in place throughout the AI lifecycle. Risks to health, safety and fundamental rights must first be identified; for those that cannot be eliminated, mitigation measures will have to be provided. Deployers will have to be informed and, where appropriate, trained in the use of the system. In risk management, particular attention must be paid to children and vulnerable persons.
With regard to training data, adequate datasets must be chosen that are as representative as possible of the people potentially affected by negative consequences.
The use of sensitive data is permitted only where strictly necessary to detect and correct bias, and where the same results cannot be achieved with synthetic or anonymised data. Such data must be carefully managed and protected, may not be transferred to third parties and, once the purpose has been achieved, must be deleted.
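For readers who want to see what such a representativeness check might look like in practice, here is a minimal sketch in Python. The column names, reference shares and tolerance threshold are entirely hypothetical illustrations: the AI Act sets the goal (representative, bias-aware datasets), not a formula.

```python
import pandas as pd

# Hypothetical reference shares for a protected attribute (e.g. from census
# data). The groups, shares and 20% tolerance are illustrative assumptions;
# the AI Act does not prescribe specific values.
REFERENCE_SHARES = {"18-35": 0.30, "36-55": 0.35, "56+": 0.35}
TOLERANCE = 0.20  # flag groups under-represented by more than 20% (relative)

def underrepresented_groups(df: pd.DataFrame, column: str) -> list[str]:
    """Return the groups whose share in the dataset falls short of the reference."""
    observed = df[column].value_counts(normalize=True)
    return [
        group
        for group, expected in REFERENCE_SHARES.items()
        if observed.get(group, 0.0) < expected * (1 - TOLERANCE)
    ]

# Toy dataset skewed towards younger people.
training_data = pd.DataFrame(
    {"age_band": ["18-35"] * 70 + ["36-55"] * 25 + ["56+"] * 5}
)
print(underrepresented_groups(training_data, "age_band"))  # ['36-55', '56+']
```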
Providers will have to keep the necessary technical documentation, with a simplified regime for start-ups. The system will have to record logs, so that the source of any problems can be traced. Information provided to deployers will have to be comprehensible, so that they understand how to use the providers' AI systems correctly.
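Similarly, on the logging obligation, here is a minimal sketch of what automatic record-keeping around an AI system's outputs might look like. The event fields and the model interface are assumptions made for illustration; the regulation requires traceability, not a particular format.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Append-only audit log; in production this would go to tamper-evident storage.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)
logger = logging.getLogger("ai_system.audit")

def logged_prediction(model, features: dict):
    """Run a prediction and record a structured, timestamped audit event."""
    result = model.predict(features)  # hypothetical model interface
    event = {
        "event_id": str(uuid.uuid4()),                      # unique reference
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": getattr(model, "version", "unknown"),
        "input": features,                                  # what was asked
        "output": result,                                   # what was answered
    }
    logger.info(json.dumps(event, default=str))             # one event per line
    return result
```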
Conclusions
There is no doubt that the AI Act makes various compliance demands on providers and developers, which may seem exorbitant today. But let us remember that these demands serve to protect us all. Existing requirements in electronics, hydraulics, medicine, nuclear energy and automobiles have not stopped the world from innovating in those fields. For AI the issue is even more delicate, because the outputs are in many cases still unpredictable, so it is all the more crucial to keep track of how a result was reached, in order to investigate possible malfunctions.
Like any new challenge, it may seem difficult at first, as it was to some extent with the GDPR, but once the standards and good practices are established, we will look back with a smile.
⏰ EVENTS
Today Gabriele Franco will hold a seminar on AI regulation as part of the Digital Lab of the Philosophy and Digital Transformation degree course at the University of Udine.
Tomorrow, Vincenzo Tiani will be among the speakers of the open panel organised by the Universiteit van Amsterdam on the AI Act.
The other speakers are:
- Gabriele Mazzini (EU Commission) – Introduction to the AI Act
- Dr. Gemma Newlands (Oxford Internet Institute) – Current Perspectives on (Generative) AI and Work
- Koen Hindrix, Stefan Schlobach, Guszti Eiben, Christine Moser.
More info: https://lnkd.in/eebeBe3y
⏰ That's all for us, see you insAIde, in two weeks, at 08:00.
Rocco Panetta, Federico Sartore, Vincenzo Tiani, Davide Montanaro, Gabriele Franco