insAIde #42 [AI Act special series #2]: it was urgent to regulate Artificial Intelligence
Will regulation be a brake on innovation? The same doubts were raised about earlier regulations that were later copied worldwide
To regulate or not to regulate: that, Shakespeare might have said today, in the age of artificial intelligence, is the dilemma.
A dilemma widely discussed and debated, first by Stefano Rodotà and later by Giovanni Buttarelli, in their writings, and answered (with a clear preference for regulation) through their institutional action and advocacy.
Europe has been much criticised in recent years for choosing to regulate something still in the making. The United States above all, but also many European commentators of a more libertarian bent, have sharply criticised this choice, fearing at times the risk of stopping the unstoppable, innovation, and at times that of drying up the continent's financial markets to the advantage of those across the Channel or overseas.
Fomenting fear of new regulations: history repeats itself
With the GDPR, six years ago, much the same thing happened. Not only did the economy not collapse, and Big Tech is still here, but the only 'inconvenience' encountered so far is that some services arrived in Europe a month or so later than in the United States, and they arrived largely transformed and improved. See the cases of OpenAI's ChatGPT and the Replika app, both of which came back much improved after the block and the prescriptions imposed by the Italian Data Protection Authority in 2023 for non-compliance with the GDPR.
Even within the EU, France and Germany in particular were reluctant until a few months ago, so much so that they risked blowing up the negotiating table on the AI Act, after two years of work, in order to protect their national interests and their own companies.
Added to these protectionist moves is the noise of many commentators who, out of ignorance or bad faith, muddy the waters by exploiting their own notoriety and authority. The other day, for example, I heard a distinguished economist say that the AI Act will make life harder for companies that want to use AI to increase their productivity, which will damage GDP. Statements like this only spread fear among the thousands of SMEs and start-ups that, lacking the resources to follow closely what happens in Brussels, limit themselves to reading and listening to the media and influencers of the moment.
Let it be clear: the AI Act is not perfect, nor is the GDPR, nor is any law, because every regulation is the result of a political compromise and, when it comes to Europe, of a negotiation involving a great many actors.
AI, why regulate something you don't yet know well? The EU is not alone
There is one thing, however: Europe does not do things at random. Unlike other great powers, it does not chase the emergency of the moment, and its legislative process, however cumbersome and imperfect it may seem, is not improvised. Many have been saying for some time, and even more so since the vote on 13 March, that perhaps it was too soon and that we should have waited for the technologies to reach the market. That is partly true: regulating something you do not yet fully know is not easy, but that does not mean it should not be done. Then again, it is also already too late, because AI is already among us. The consequences of the slowness in regulating social media, after all, are there for all to see, with risks on the democratic front as well.
Moreover, since China is often cited as another power that invests heavily in the development of AI and less in laws, it should be remembered that even in the East the new rules are anything but soft.
The whole study that led to the AI Act
How, then, to regulate something that one does not know well? With study and consultation. The AI Act is the culmination of a journey that began in 2018 with the creation, by the European Commission, of the High Level Expert Group, which included, among others, the Italians Luciano Floridi, Stefano Quintarelli, Andrea Renda and Francesca Rossi. Also on the basis of that working group's input, the Commission, led by other Italians such as Roberto Viola and Lucilla Sioli, then published a white paper in 2020 and the proposal for a regulation in 2021.
Alongside this, and indeed from the start, came numerous meetings with stakeholders of all kinds, from industry to civil society, from Big Tech to start-ups. When the Council had already closed its version, ChatGPT appeared on the market in November 2022, later followed by Google's Bard (now Gemini), and everything changed. New questions and doubts arose, to which the European Parliament had to find answers, first on its own and then in trilogue with the governments and the Commission.
The balance between firm rules and flexibility, with the human being at the centre
Should we stop, then, in the face of the unknown? Far from it. The task is to find the right balance between the need to regulate with certain standards and the need to remain flexible enough not to have to start all over again every time a new technology reaches the market.
It is therefore difficult to understand those who criticise the rules for being imprecise while, at the same time, warning against over-regulating something that does not yet exist. The risk-based approach serves precisely to strike the right, and difficult, balance between these extremes.
It is also worth welcoming the amendment that Parliament adopted at the very beginning of Article 1, in order to put the human being back at the centre: "The objective of this Regulation is to improve the functioning of the internal market and to promote the adoption of human-centred and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety and fundamental rights enshrined in the Charter of Fundamental Rights, including democracy, the rule of law and the protection of the environment, from the harmful effects of artificial intelligence, and to support innovation."
This is a reminder that AI is at the service of humans, and not vice versa. And that means, first and foremost, respect for fundamental rights: rights which, with the amendment defended by Parliament, entered the regulation with Article 27.
The Fundamental Rights Impact Assessment (FRIA) for high-risk AI systems was in fact strongly championed by Parliament, against the wishes of the governments, who saw it as a new burden. It was argued that there was no need for it, since the DPIA (data protection impact assessment) of the GDPR already existed. But the FRIA and the DPIA are similar yet not identical instruments, with the DPIA serving as a support for drafting the FRIA. To keep the FRIA in the text, Parliament had the support of civil society and academia, with appeals gathering over a hundred signatures in a matter of hours.
The challenges of governance
The question of who will be in charge of governance at European and national level remains open. In Italy, having set aside the natural choice, the creation of an independent super-authority, a Data and AI Authority, built on the structure of the current Garante with grafts from Agcm, Agcom, Agid and Acn, we will instead move towards a new governmental structure, more agile and based on the Agid and Acn model. The guarantee, in both cases, lies in the quality of the staff already working in these institutions, which, as is often the case in our country, makes the difference even compared with authorities and agencies in other EU states.
Conclusions
So we can say without a doubt that yes, AI needed to be regulated without waiting ten years. The way training is done with data today, and the ways in which AI is already being adopted in the public sector (including by law enforcement authorities) and in the private sector, suggest that it is better to start this revolution right, avoiding the risk of being unable to fix it later because too much has been invested in it. We can do it well, without halting progress.
This contribution was originally published in Italian on Agenda Digitale.
⏰ EVENTS
Rocco Panetta, Vincenzo Tiani and Federico Sartore will be in Washington next week for the IAPP Global Privacy Summit. Are you coming?
⏰ That's all for us, see you insAIde, next Tuesday, at 08:00.
Rocco Panetta, Federico Sartore, Vincenzo Tiani, LL.M., Davide Montanaro, Gabriele Franco