What European businesses need to know about Italy’s AI Act implementation
Italy has become one of the first EU member states to go beyond the baseline European AI Act with its own national legislation. Law 132/2025, published in the Gazzetta Ufficiale in September 2025, does not replace the EU AI Act; it adds a distinctly Italian layer on top, including something no other major EU country has introduced yet: criminal sanctions for serious AI misuse.
If you run a business that develops, deploys, or even just uses AI tools, this law changes the compliance landscape significantly. And since Italy often sets precedents that other member states follow, every European SMB should pay attention.
Critical sectors under the microscope
The EU AI Act already classifies certain AI applications as high-risk, but Italy’s implementation puts particular emphasis on sectors that matter most to its economy and legal traditions.
Healthcare receives special treatment. Any AI system involved in diagnosis, treatment planning, or patient management must operate under direct human oversight by a qualified medical professional. The law makes it explicit: a doctor’s professional liability does not shrink because an algorithm helped with the decision.
Employment and labor is another area where Italy goes further than the EU baseline. Businesses using AI for recruitment, performance evaluation, task allocation, or workforce monitoring must inform employees and consult worker representatives. This aligns with Italy’s traditionally strong labor protections, and it applies even to off-the-shelf HR software that uses algorithmic decision-making.
Public administration faces strict transparency rules when AI supports decisions affecting citizens’ rights — from tax assessments to permit approvals. And the law extends protections specifically to minors and vulnerable persons interacting with AI systems.
Beyond these Italian additions, the full EU high-risk list still applies: biometric identification, critical infrastructure management (energy, transport, water, digital), education and vocational training, credit scoring and insurance, law enforcement, migration and border control, and administration of justice.
The penalty structure: where criminal law enters the picture
Here is what makes Italy’s approach genuinely different. The EU AI Act imposes administrative fines only. Italy added criminal liability, making it one of very few member states to attach prison time to AI violations.
Administrative fines (EU-wide)
The fine structure follows three tiers based on the severity of the violation:
- Prohibited AI practices (such as social scoring or unauthorized real-time biometric surveillance): up to EUR 35 million or 7% of global annual turnover, whichever is higher.
- High-risk system non-compliance (missing documentation, no conformity assessment, inadequate human oversight): up to EUR 15 million or 3% of turnover.
- Providing incorrect information to authorities: up to EUR 7.5 million or 1.5% of turnover.
There is a built-in proportionality safeguard for SMEs and startups: the lower of the two figures (flat amount versus turnover percentage) applies. A company with EUR 2 million in annual revenue would face a maximum of EUR 60,000 (3% of turnover) for a high-risk violation: still substantial, but not existential.
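The tier logic above can be sketched as a small calculation. The flat caps and percentages are the ones listed in this article; the `sme` flag applies the "lower of the two" proportionality rule. This is an illustrative sketch, not legal advice.

```python
# Maximum administrative fine caps under the three-tier structure.
# Values taken from the tiers described above (illustrative only).
TIERS = {
    "prohibited": (35_000_000, 0.07),      # EUR 35M or 7% of turnover
    "high_risk": (15_000_000, 0.03),       # EUR 15M or 3% of turnover
    "incorrect_info": (7_500_000, 0.015),  # EUR 7.5M or 1.5% of turnover
}

def max_fine(tier: str, annual_turnover_eur: float, sme: bool = False) -> float:
    """Return the maximum fine cap for a violation tier.

    Large companies: whichever of flat cap or turnover share is HIGHER.
    SMEs/startups: whichever is LOWER (proportionality safeguard).
    """
    flat_cap, pct = TIERS[tier]
    turnover_share = annual_turnover_eur * pct
    return min(flat_cap, turnover_share) if sme else max(flat_cap, turnover_share)

# An SME with EUR 2M annual turnover:
print(max_fine("prohibited", 2_000_000, sme=True))  # 140000.0
print(max_fine("high_risk", 2_000_000, sme=True))   # 60000.0
```

Note how the same rule that makes a EUR 35 million cap theoretical for an SME makes the percentage cap the binding one.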
Criminal sanctions (Italy-specific)
Law 132/2025 introduces imprisonment of one to five years for the most serious AI-related offenses. These include unlawful use of AI that causes harm to health, safety, or fundamental rights, manipulative or exploitative AI targeting vulnerable individuals, and deploying prohibited systems without authorization.
The law also introduces aggravating factors when AI is used as a tool to commit existing crimes such as fraud, identity theft, or defamation. If you use a deepfake to defraud someone, the AI component makes the sentencing worse, not better.
Criminal enforcement goes through Italy’s ordinary courts, not the administrative authorities that handle fines — meaning investigations, prosecutions, and trials with full judicial process.
Compliance timeline: what to do and when
The deadlines follow the EU AI Act’s phased rollout, with Italy’s national provisions already in force:
Already active (February 2025): Prohibitions on unacceptable-risk AI systems apply. AI literacy obligations under Article 4 of the EU AI Act require that staff operating or overseeing AI systems receive adequate training. Italy's criminal sanctions took effect when Law 132/2025 entered into force in October 2025.
August 2025: Rules on general-purpose AI models kick in. National governance structures must be fully operational; under Italy's law, AgID acts as the notifying authority and ACN as the supervisory and market surveillance authority, with a cybersecurity coordination role.
August 2026: The bulk of high-risk AI system obligations become enforceable — conformity assessments, quality management systems, technical documentation, registration in the EU database, and mandatory human oversight.
August 2027: Extended deadline for high-risk AI already regulated under existing EU sectoral laws (medical devices, machinery, automotive).
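For planning purposes, the phased rollout above can be treated as data. A minimal sketch, using the deadline dates from this timeline, that lists which obligation sets are already enforceable on a given day:

```python
from datetime import date

# Phased deadlines from the EU AI Act rollout described above.
DEADLINES = [
    (date(2025, 2, 2), "prohibitions on unacceptable-risk AI + AI literacy"),
    (date(2025, 8, 2), "general-purpose AI rules + national governance"),
    (date(2026, 8, 2), "bulk of high-risk system obligations"),
    (date(2027, 8, 2), "high-risk AI under existing EU sectoral laws"),
]

def active_obligations(today: date) -> list[str]:
    """Return the obligation sets whose deadline has already passed."""
    return [label for deadline, label in DEADLINES if today >= deadline]

print(active_obligations(date(2026, 1, 1)))
# ['prohibitions on unacceptable-risk AI + AI literacy',
#  'general-purpose AI rules + national governance']
```

A check like this in a compliance tracker makes the August 2026 cliff visible well before it arrives.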
Practical steps for SMBs
Compliance does not require a legal department the size of a multinational’s. Start with these concrete actions.
Audit your AI inventory. List every AI-powered tool your business uses, from chatbots on your website to automated invoice processing. Classify each by risk level: unacceptable, high, limited, or minimal. Most off-the-shelf business tools fall into the minimal or limited categories, but HR screening software and credit assessment tools likely qualify as high-risk.
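The audit step lends itself to a simple structured record. A minimal sketch, with hypothetical tool names, that keeps the inventory in one place and flags the entries needing high-risk paperwork:

```python
from dataclasses import dataclass

# The four risk levels from the EU AI Act classification.
RISK_LEVELS = ("unacceptable", "high", "limited", "minimal")

@dataclass
class AITool:
    name: str
    purpose: str
    risk: str  # one of RISK_LEVELS

    def __post_init__(self):
        if self.risk not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {self.risk}")

# Hypothetical SMB inventory, one entry per AI-powered tool in use.
inventory = [
    AITool("website chatbot", "customer support", "limited"),
    AITool("invoice OCR", "automated invoice processing", "minimal"),
    AITool("CV screener", "recruitment shortlisting", "high"),
]

# High-risk entries need documentation, conformity assessment, oversight.
high_risk = [t.name for t in inventory if t.risk == "high"]
print(high_risk)  # ['CV screener']
```

Even a spreadsheet with these three columns achieves the same goal; the point is that every tool gets an explicit classification on record.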
Address transparency first. If you deploy chatbots, AI-generated content, or emotion recognition systems, you must disclose this to users. This is a quick, low-cost win that applies regardless of risk classification.
For high-risk systems, build documentation now. Technical documentation, conformity assessments, and human oversight protocols take time to develop. Starting early avoids the August 2026 rush and lets you iterate.
Train your team. AI literacy is not optional — it is a legal requirement since February 2025. Staff who interact with AI systems need to understand what the technology does, where its limitations are, and when to override it.
Leverage SME-specific provisions. Both the EU AI Act and Italy’s law include measures designed to ease the burden on smaller companies: access to regulatory sandboxes supervised by AgID, reduced fees for conformity assessments, and dedicated guidance channels.
Why this matters beyond Italy
Italy’s decision to introduce criminal sanctions sends a signal. Other member states are watching to see whether this approach deters AI misuse more effectively than fines alone. If it works — or if a high-profile prosecution makes headlines — expect other countries to follow.
For any business operating across EU borders, the strictest national implementation effectively becomes your compliance baseline. If you sell into Italy, Italian law applies to your AI systems in that market. Planning your compliance around the most demanding interpretation saves you from retroactive scrambling when other countries adopt similar measures.
The combination of EU-wide administrative fines and Italian criminal liability creates a two-track enforcement system that business owners need to understand. The administrative track catches negligence and procedural failures. The criminal track targets intentional misuse and reckless disregard for safety. Knowing where the line falls — and staying well on the right side of it — is no longer optional for any European business deploying artificial intelligence.
Need support on this topic? Contact us for a free consultation — let’s assess your company’s situation together.
Stay updated every week on cybersecurity, AI and technology for SMBs: subscribe to our newsletter.