AI Act: the future is now!
On July 12, 2024, the Official Journal of the European Union published the Regulation laying down harmonised rules on artificial intelligence, the so-called AI Act. The purpose of the Regulation is to improve the functioning of the internal market and to promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, and the fundamental rights enshrined in the Charter of Fundamental Rights of the European Union.
Professionals from Andersen’s Industry Group Technology, coordinated by partners Francesco Marconi and Paola Finetto, have written an in-depth analysis that examines the main points contained in the AI Act, its critical issues, and its scope of application.
What is an artificial intelligence system
Article 3(1) of the AI Act defines an artificial intelligence system as a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This definition is aligned with the OECD Recommendation on AI, as updated on May 3, 2024.
Objectives of the AI Act
The AI Act aims to improve the functioning of the internal market and promote the dissemination of human-centric and reliable artificial intelligence. The Regulation effectively serves to ensure a high level of protection of health, safety, and fundamental rights, including democracy, the rule of law, and environmental protection. It supports innovation and research and development activities, with particular attention to SMEs and startups.
Who the AI Regulation is addressed to
The new rules apply to providers, deployers, importers, and distributors of AI systems and AI products, as well as to the persons affected by them.
However, they do not apply to AI systems and products developed and put into service exclusively for scientific research and development, or exclusively for military, defence, or national security purposes.
What the AI Act states
The AI Act adopts a risk-based approach, classifying AI systems into four risk categories:
- Unacceptable risk: prohibited AI practices (Article 5)
- High risk: AI systems subject to strict requirements (Articles 6 to 49)
- Limited risk: transparency obligations for providers and deployers of certain AI systems (Article 50)
- Minimal risk: systems subject to no specific obligations
In addition, the Regulation lays down dedicated rules for general-purpose AI models (Articles 51 to 56).
The Regulation also emphasizes the importance of AI literacy: the skills and knowledge needed to make informed, aware decisions about AI systems and to understand how they function, along with their opportunities and risks.
Sanctions for violating the AI Act Regulation
Member States will have to lay down rules on penalties for infringements of the Regulation. The administrative fines include:
- Up to EUR 35,000,000 or 7% of total worldwide annual turnover, whichever is higher, for engaging in prohibited AI practices (Article 99.3)
- Up to EUR 15,000,000 or 3% of total worldwide annual turnover, whichever is higher, for non-compliance with other provisions of the Regulation (Article 99.4)
- Up to EUR 7,500,000 or 1% of total worldwide annual turnover, whichever is higher, for supplying incorrect, incomplete, or misleading information to the competent authorities (Article 99.5)
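The "whichever is higher" rule means each ceiling is the maximum of a fixed amount and a percentage of worldwide annual turnover. A minimal illustrative sketch (the function name and the example turnover figure are hypothetical, not from the Regulation):

```python
# Illustrative sketch of the Article 99 fine ceilings: the maximum fine is
# the HIGHER of a fixed amount and a percentage of the offender's total
# worldwide annual turnover for the preceding financial year.

def fine_ceiling_eur(annual_turnover_eur: float,
                     fixed_cap_eur: float,
                     turnover_pct: float) -> float:
    """Return the maximum possible fine for a given tier."""
    return max(fixed_cap_eur, annual_turnover_eur * turnover_pct)

# Hypothetical company with EUR 2 billion worldwide turnover,
# prohibited-practices tier (EUR 35M or 7%):
ceiling = fine_ceiling_eur(2_000_000_000, 35_000_000, 0.07)
print(ceiling)  # 140000000.0 — 7% of turnover exceeds the EUR 35M fixed cap
```

For smaller undertakings whose turnover percentage falls below the fixed amount, the fixed amount is the ceiling instead.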
Entry into force of the AI Act
The Regulation entered into force on August 1, 2024, and its general rules will apply from August 2, 2026. The application is staggered: the prohibitions apply from February 2, 2025, the rules on general-purpose AI models from August 2, 2025, and additional obligations for certain high-risk systems from August 2, 2027.
- Read the full insight (PDF, 287.29 KB)