insAIde #43 [AI Act special series #3]: the risk-based approach and prohibited AI practices
Day zero of the AI Act is drawing ever closer. In recent weeks, the European institutions have been completing the final formalities before formal approval and subsequent publication in the Official Journal. From that moment on, the various timelines for the application of the new regulation will be triggered, and companies and public bodies will have to act promptly in view of a new, long, and complex compliance season.
The first provisions to take effect will be those on prohibited AI practices. It is precisely to this set of provisions that this third in-depth study on the AI Act is devoted (after the first one on the framework of the new legislation and the second on the relationship between rules and innovation). First, however, it is necessary to give an account of one of the structural elements of the AI Act, we could say a part of its soul: the risk-based approach.
Underlying the risk-based approach
Using the authentic interpretation provided by the European legislator, the consistency and ultimate meaning of this regulatory approach are summarised in recital 26 of the AI Act: 'In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain unacceptable AI practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.'
In other words, it is a question of calibrating regulatory obligations so that, as the risk coefficient ('the combination of the probability of an occurrence of harm and the severity of that harm', using the definition of risk in Article 3 of the regulation) increases, so do the regulatory requirements, in the form of recommendations, requirements, obligations, and even prohibitions. This is not a new concept: data protection legislation, and the GDPR in particular, has already experimented with this approach and applied it in various forms.
Certainly, this is not the only possible methodology to regulate the development and use of artificial intelligence systems, but it is the one that has been chosen by the European Union and that many other countries are already emulating, thus manifesting - once again - the well-known Brussels effect. And it is here that we can identify a further key to understanding the risk-based approach.
The effects of the risk-based approach
The choices made by the European legislator in defining this dynamic and hierarchical balance between levels of risk and related obligations and requirements represent the concretization, in a legal text, of the system of ethical principles, fundamental rights, and constitutional values that characterize the European community. This is particularly evident when looking at the list of prohibited AI systems, to which we shall return in the next section.
Thus, on the one hand, it can be said that the European legislator, by setting this approach as the foundation of its regulatory construction, has erected the AI Act on the value system of the Union's own constitutional tradition. On the other hand, this translates into the possibility that this approach, and hence the set of values that constitute its raison d'être, will become the international model to strive for in the legal and ethical regulation of artificial intelligence.
In other words, with the AI Act (and similarly to what has already happened with the GDPR) the Brussels effect will not only concern the choice of regulating a phenomenon and the legislative technique used but also, above all, a vision of what role the rights and fundamental values of the individual should play in relations with technology.
Prohibited AI practices
As mentioned above, the value dimension that permeates the risk-based approach of the AI Act emerges clearly in the list of prohibited AI systems. These are practices that the European legislator outlaws because, in the words of recital 28, 'Such practices are particularly harmful and abusive and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and fundamental rights enshrined in the Charter, including the right to non-discrimination, to data protection and to privacy and the rights of the child.'
These prohibited practices are listed in detail in Article 5 of the AI Act and include, for instance, AI-based cognitive-behavioral manipulation techniques, emotion recognition systems in certain contexts, social scoring, predictive policing algorithms, and the scraping of facial images from the internet for the creation of databases. The scope of each prohibition, and of its exceptions, is delimited by the provision itself, with the help of the related recitals, in an attempt to define each ban precisely.
The list in Article 5 is the fruit of the work and compromises reached by the European institutions involved in the process of approving the regulation. In this sense, the case of the use of real-time remote biometric identification systems in publicly accessible spaces is emblematic. This was perhaps the banned AI practice that was discussed the most, both between institutions and within civil society. In the end, the EU legislator imposed a general prohibition, allowing the use of these systems only in specific circumstances and after authorization by the competent authority, with the possibility of derogating from the latter requirement in cases of urgency, provided, however, that authorization is obtained within 24 hours.
Conclusions
As mentioned, the rules on prohibited AI systems will be the first to apply, six months after the AI Act comes into force. This is therefore the real test case for the new regulation and for the risk-based approach that is one of its cornerstones.
With these prohibitions in force, the vision of an AI that is anthropocentric and at the service of humankind, one that respects and supports people's fundamental rights and freedoms, will find its first and decisive affirmation. This will serve to protect our value system while also offering a model for many other countries to follow.
⏰ That's all for us, see you insAIde, next Wednesday, at 08:00.
Rocco Panetta , Federico Sartore , Vincenzo Tiani, LL.M. , Davide Montanaro , Gabriele Franco