The Impact of AI on the Workplace

COMPLIANCE & RISK MANAGEMENT / May 2026

In this edition of the Compliance & Risk Management Newsletter, professionals from Andersen’s 231/Privacy and Employment & Labor Service Lines explore the impact of AI on the workplace. Their aim is to highlight the increasingly critical need for companies to adopt specific technical and organizational measures capable of mitigating the risks associated with the use of AI, thereby strengthening their governance and resilience.

Supervisory Body and AI: increasingly integrated and stringent controls

The growing adoption of artificial intelligence systems is reshaping internal control frameworks, requiring an increasingly integrated and structured approach between corporate functions and the Supervisory Body. Within the scope of Legislative Decree 231/2001, AI represents both an opportunity and a source of new risks, prompting companies to promptly adapt their organizational safeguards.

First and foremost, companies are required to map the processes in which AI is used, assessing the related risks (such as algorithmic bias, lack of transparency, and data management issues) and integrating them into the internal control system and the 231 Model. This entails the adoption of dedicated policies, the clear allocation of responsibilities, and the implementation of continuous monitoring tools, including data analytics solutions.

In this context, the Supervisory Body plays a synergistic role by verifying the adequacy of the safeguards and their effective implementation, while also fostering a cultural shift toward the responsible management of emerging technologies.

The updates introduced by Law 132/2025 on workplace health and safety

On 10 October 2025, Law No. 132/2025 came into force, supplementing the European AI Act at national level and introducing specific obligations in the field of workplace health and safety. The law defines the general principles for the use of AI in the workplace: intelligent systems must be safe, reliable and transparent, used with respect for human dignity and the protection of personal data, and aimed at improving working conditions and the quality of performance.

A central role is also assigned to the information provided to workers, who must be aware of the use of AI systems, especially for decision-making or monitoring purposes that affect significant aspects of the employment relationship.

Finally, Law 132/2025 provides for the establishment of a National Observatory on Artificial Intelligence, tasked with monitoring, analysing and supporting the informed adoption of these technologies, pending the publication of operational guidelines to assist businesses and professionals in the practical application of the new provisions.

The risks arising from the use of AI systems in the workplace

The European AI Act, supplemented at national level by Law No. 132 of 23 September 2025, and the ministerial guidelines already identify key critical issues.

The first risk is so-called ‘algorithmic discrimination’. AI systems used for recruitment may develop biases based on gender, age or origin if not properly trained. Article 5 of the AI Act prohibits biometric categorisation based on inferences regarding race, political opinions or trade union membership. Article 8 of the Workers’ Statute prohibits inquiries into political, religious or trade union views, a prohibition whose scope must now be understood to extend to automated systems as well.

Invasive surveillance is the second risk. Constant monitoring of productivity can cause stress, violate privacy and place workers under continuous pressure. The legislation provides for transparency and information obligations and requires human supervision at all times. The AI Act also prohibits behavioural manipulation systems using subliminal techniques. Finally, to mitigate these risks, companies must map and classify their AI systems, ensuring transparency and human oversight.