The new Italian law on AI: practical tips to comply with the regulation
The professionals of Andersen’s Technology Industry Group analysed the most relevant changes introduced by the new Italian law on AI with respect to employment, intellectual professions, privacy and data security, liability pursuant to Legislative Decree 231/2001 and copyright, providing operational advice to ensure effective legal compliance and to strengthen internal governance.
Law No. 132 of 23 September 2025, which entered into force on 10 October 2025, is the first national AI law to complement the EU AI Act in promoting the responsible use of artificial intelligence, while defining a system of principles, governance and specific measures to mitigate the risks of artificial intelligence. Consistent with the objectives of EU legislation, the national law adopts a human-centric approach, aimed at promoting the fair, transparent and responsible use of AI, while ensuring vigilance over economic and social risks, as well as over the impact on fundamental rights.
The provisions introduced concern, on the one hand, the use of AI in particularly sensitive sectors (health, employment, intellectual professions, scientific research, justice and public administration) and, on the other, industrial policy and innovation measures, including significant innovations in the field of liability and copyright.
Law no. 132/2025 on employment and intellectual professions
The Law establishes that AI can only be used respecting transparency, proportionality, security, protection of personal data, confidentiality, accuracy, non-discrimination, gender equality and sustainability.
When the employer uses AI systems for recruitment, personnel management, task assignment or monitoring, it must inform employees through a notice that clearly explains: which aspects of the employment relationship are involved, what the system is for, how it works, what data it uses, how it is supervised, how accurate it is and whether it can produce discriminatory outcomes. This information must also be communicated to the company trade union representatives.
On the intellectual professions side, the Law provides that AI may be used only as a support tool and cannot replace the professional’s judgment. Professionals who use artificial intelligence must explain to the client, in simple and understandable language, which tools they use.
Artificial Intelligence and data protection: how to strengthen governance
The new Italian law on AI, in Article 4, takes up the principles of transparency, minimization, privacy by design and by default already contained in the GDPR, placing them at the basis of a responsible development of artificial intelligence.
In addition to identifying a suitable legal basis for the processing of personal data through AI systems (and for the training of AI models), the company that intends to deploy or develop an AI system must comply with the principle of proportionality in relation to the sector in which the data is used.
The GDPR also provides for specific rules where the AI system can make decisions in an automated way (e.g. credit scoring systems). In this case, subject to the exceptions provided for by law (performance of a contract, legal obligation or the user’s explicit consent), the data subject has the right not to be subject to a decision based solely on automated processing. If one of the above exceptions applies, the company must instead guarantee the data subject the right to obtain human intervention, to express their point of view and to contest the decision.
The company that intends to use or develop an AI system will therefore have to design the data processing carried out through the AI system in the same way as any other processing. This mainly involves carrying out a thorough risk assessment and setting up internal procedures to regulate the use of AI.
Artificial Intelligence, Legislative Decree 231/2001 and new cybercrimes: the framework after the AI Act and Law 132/2025
The evolution of Artificial Intelligence is directly affecting the liability profiles of companies, requiring a reinterpretation of the 231 Models in the light of recent regulatory changes, both at European and national level.
Law no. 132/2025 updated the catalogue of cybercrimes relevant for the purposes of Legislative Decree no. 231/2001, strengthening the sanctioning response with respect to conduct such as abusive access, damage to computer systems, digital fraud, illegal interception of communications and data manipulation, even when carried out through automated tools or AI algorithms. In this context, the ungoverned use of Artificial Intelligence can become a real risk factor in light of the Legislative Decree 231/2001.
This therefore requires companies to update risk mapping, integrate internal cybersecurity procedures, and include training sessions on cyber risk and the conscious use of AI.
AI and copyright: the protection of authors and works in light of Law No. 132/2025
The use of AI systems in the lives and work of professionals and companies raises some issues related to copyright:
- Who is the author of works created using AI systems?
- Can an author prevent their work from being used to train AI systems?
The answers of the Italian law on AI in the field of copyright
Article 25 of the Italian AI Law confirmed that works made with the aid of AI systems are also protected by copyright law “as long as they are the result of the intellectual work of the [human] author”. In other words, the use of AI systems is allowed but, for the purposes of protection, the human intellectual contribution must be pre-eminent (thus any hypothesis of ownership of copyright by AI systems must be excluded).
On the second issue, the Italian AI Law limited itself to recognizing the lawfulness of the reproduction and extraction (text and data mining) of “works or other subject-matter contained on the web or in databases to which one has legitimate access, for the purpose of extracting text and data through AI models and systems, even generative”, provided that this does not occur for commercial purposes.
However, the author’s right to prevent the extraction and use of their work for the training of AI systems is safeguarded through the exercise of the so-called opt-out.
For greater protection, it is certainly advisable to introduce express opt-out clauses in the licensing agreements for one’s works.
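Beyond contractual clauses, the opt-out can also be expressed in machine-readable form. A common approach, assuming the rightsholder controls the website hosting the works, is to disallow known AI training crawlers in the site’s robots.txt. The crawler names below are real (GPTBot is OpenAI’s training crawler, CCBot is Common Crawl’s), but the list is purely illustrative and not exhaustive, and compliance is voluntary on the crawler’s side:

```
# Illustrative robots.txt fragment expressing a TDM opt-out.
# Blocks two well-known AI training crawlers from the entire site;
# other crawlers would need their own User-agent entries.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Because such technical signals are not binding on every operator, they are best used alongside, not instead of, express opt-out clauses in licensing agreements.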
- Read the full analysis (PDF, 3.68 MB)