The path towards the adoption of the new rules on artificial intelligence continues. The European Union has now done its part, with the publication of the AI Act in the Official Journal on 12 July and its entry into force on 1 August, and the ball is now firmly in the court of the Member States. Although a regulation, by its very nature, does not require national legislative instruments for transposition, as EU directives do, some choices are still left to the Member States, including the designation of the competent national authorities.
The Italian government addressed this with the draft bill (DDL) on ‘Provisions and delegation of powers to the government on artificial intelligence’, adopted at the end of April and now making its way towards approval as law. The DDL recalls, among other things, the principles to which an anthropocentric development of AI must adhere, and lays down provisions for sectors such as healthcare, labour, the intellectual professions, public administration, judicial activity and national cybersecurity.
Giorgia Meloni's government, led on this issue by the Undersecretary for Technological Innovation Alessio Butti, has designated the Agency for Digital Italy (AgID), headed by Mario Nobile, and the National Cybersecurity Agency (ACN), headed by Bruno Frattasi, as the authorities responsible for overseeing the AI Act. It must be acknowledged that this government, like many of its predecessors, immediately took up the challenge of remaining competitive in the technology sector, keeping artificial intelligence, the digital economy and data in general at the centre of the debate.
The need to move forward at a fast pace
Italy can so far count few excellences in the sector and lags behind its French cousins, who have been demonstrating for years their ambition to become Europe's Silicon Valley. The funds announced by the Meloni government seem to point in that direction, even though they are only the first step on a road that is still long.
The collaboration between the Government and the Garante (Italian DPA)
It is no mystery that many operators saw in the Italian Data Protection Authority the natural regulator, also with a view to simplification and a ‘one-stop shop’: it has thirty years of experience in the data economy and the protection of fundamental rights and, no less important, a track record of handling the world's first cases on artificial intelligence, such as Replika and OpenAI. Having dealt with those cases professionally, we have seen the quality of the work of the Garante's team, recognised internationally from Washington to Sydney and taken as a model of virtuous AI regulation. The Government's decision to proceed differently has nonetheless not prevented the loyal cooperation expected of two such important institutions. We welcome this collaboration with interest, and we hope it will soon make us forget the times when, regardless of political colour, the Garante was unfairly attacked simply for applying the law.
The ChatGPT and Replika blocks: cases that set the standard
To cite the most recent case of friction between politics and the Authority, the temporary blocking of ChatGPT and Replika in the spring of 2023 led many to cry scandal, fearing a block on innovation in Italy and a new obscurantism.
Instead, in record time, after only a month, ChatGPT was back in operation, enriched by the changes OpenAI made in concert with the Garante and crystallised in a measure that set the international standard, so much so that it has become the global ‘golden rule’ inspiring the construction of sustainable generative AI models that are also attentive to people's rights and interests. And all this a year before the final approval of the AI Act. This is what Italian excellence can do: anticipating the times and demonstrating that balancing opposing interests and rights is always possible, and that everyone gains, first and foremost humanity.
The Garante's opinion on the DDL AI (draft bill)
The Government is well aware of the importance of personal data protection, so much so that the independence and role of the Garante are repeatedly referred to in the DDL. As required by law, the Government requested the Garante's opinion on the text. The opinion, published on 2 August, should be read in a positive light, apart from a few suggestions aimed at reinforcing some of the choices made in the European regulation itself or in the wake of the case law of the Garante and the European supervisory authorities in recent years.
The need for greater attention to the protection of minors
Among these is undoubtedly the need for greater attention to the protection of minors. While the DDL reaffirms that the consent of a person over the age of 14 is to be considered valid, the Garante stresses the need to ‘ensure adequate age verification systems, to avoid the otherwise easy circumvention of the provided age threshold for the provision of consent’. Age verification (or ‘age gating’) is an issue born with social networks, owing to the massive unauthorised presence of underage users, and it still struggles to find a universally shared and effective technological solution; it must nonetheless be taken into consideration, given the greater impact that artificial intelligence systems can have on minors.
Healthcare: a preference for synthetic or anonymous data
In the health sector, the Garante recommends expressing a preference for synthetic or anonymous data, to reduce the risks of data leaks or misuse in a sector as sensitive as health. This is a still partly unexplored frontier: we first dealt with synthetic data in 2018, when we worked with the European Commission on the MyHealthMyData project, a forerunner of many strands of work on the use and reuse, including altruistic reuse, of data in the health sector.
Better coordination between competent authorities
Last but not least, with a view to better coordination between the competent authorities and given the centrality of personal data protection in the digital economy, the Garante suggests to ‘provide for the participation of the Garante in the coordination committee referred to in Article 18(2)’ and that ‘AgID and ACN transmit to the Garante the acts of the proceedings in relation to which profiles likely to be relevant in terms of data protection emerge, also requesting the opinion of the Authority with respect to cases, under their examination, involving data protection aspects. The Garante will, for its part, provide information on profiles falling within the competence of AgID or ACN that may emerge in the handling of its own proceedings’.
We believe the Garante's suggestion should be seen as a concrete opportunity to lay the groundwork for rethinking how the various independent authorities and agencies will have to work together in the future to enforce AI rules in the digital world. We have long argued that the legislator should reform the system of authorities, starting precisely with the Italian DPA, which should be transformed into a Data Authority: in the data economy everything is now inevitably intertwined and, especially in the light of the many new European regulations in the sector (DSA, DMA, DGA, Data Act, EHDS), it is unthinkable to work well where the boundaries between one authority and another are unclear.
Towards a new convergence
It is therefore necessary to work towards a new convergence, for the good of industry, the country and its citizens, and the recent moves by the Government and the Data Protection Authority, although still at an early stage, seem to be moving fruitfully in this direction.
⏰ That's all for us, see you insAIde, next time!
Rocco Panetta, Federico Sartore, Vincenzo Tiani, LL.M., Davide Montanaro, Gabriele Franco