
21 October 2025

How to prepare for the AI Act: compliance obligations and strategies for large enterprises

The new European regulation on artificial intelligence presents complex compliance challenges. Mashfrog helps companies tackle them with targeted solutions for AI system alignment and governance.


Artificial intelligence has become a cornerstone of business models and large-scale operational processes. However, the evolving European regulatory framework marks a decisive shift: with the AI Act, the European Union introduces binding rules for the use, development, and distribution of AI systems. This new landscape requires large companies not only to comply, but to proactively develop strategies for governance, risk mitigation, and transparency.

In this context, Mashfrog Group offers its proven expertise to support companies through every stage of the compliance process: technology audits, risk assessments, preparation of regulatory documentation, definition of internal policies, employee training, and the development of ethical governance models. This article provides an in-depth look at the AI Act, the real risks it poses for businesses, and the practical steps required to achieve solid and proactive compliance.

The European regulatory framework for AI

The AI Act (EU Regulation 2024/1689) is the world’s first attempt to establish a comprehensive and binding legal framework for artificial intelligence. Adopted in 2024, the regulation will become fully applicable in 2026, although certain provisions, such as the ban on prohibited, unacceptable-risk practices and the requirement for staff AI literacy, already apply as of 2025.
Its goal is to ensure that AI systems are safe, transparent, ethical, and under human oversight, while also fostering innovation and strengthening Europe’s competitiveness.

The guiding principle of the AI Act is the risk-based approach, which classifies artificial intelligence systems into four main levels, each with specific regulatory obligations.

  1. Unacceptable risk

    This category includes AI systems considered a threat to fundamental rights and democracy. These applications are strictly prohibited within the European Union. Examples include government social scoring, behavior manipulation through subliminal techniques, and real-time biometric surveillance in public spaces—except in rare cases authorized by judicial authorities. Their use is deemed incompatible with core European values and is subject to severe penalties.
     
  2. High risk

    This is the most heavily regulated category and includes AI systems used in sensitive areas such as healthcare, justice, finance, critical infrastructure, employment, and education. These systems are not banned, but they must comply with a wide range of obligations: risk management, technical documentation, transparency, traceability, human oversight, and post-deployment monitoring. Examples include algorithms for personnel selection, automated medical diagnosis systems, or tools that support judicial decision-making. Companies are required to conduct impact assessments and, in many cases, obtain certification from notified bodies.
     
  3. Limited risk

    These systems do not pose significant risks to fundamental rights but may produce content or interactions that are not immediately recognizable as artificial. For this reason, they are subject to transparency obligations: users must be clearly informed when they are interacting with an AI system. This category includes chatbots, automated content generation systems (such as images, videos, or text), and software that analyzes emotions through facial or voice recognition. The goal is to ensure user awareness and prevent deception.
     
  4. Minimal risk

    Lastly, there are low-impact AI systems that do not pose risks to safety, rights, or transparency. In these cases, the AI Act does not impose binding obligations but encourages the voluntary adoption of best practices. This category includes spam filters, recommendation systems for e-commerce or entertainment, machine translation tools, and logistics optimization software. While exempt from formal requirements, companies are still advised to maintain a basic level of oversight and control.

This four-level framework makes the AI Act flexible yet impactful, adjusting the degree of regulation to the potential risk posed by the system. Additionally, the regulation has extraterritorial reach: non-European companies must also comply if they intend to market or distribute AI systems within the EU, including appointing a local legal representative.
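
To make the classification concrete, the sketch below (in Python, purely illustrative: the tier labels and obligation lists are simplified assumptions, not the regulation's exact wording) shows how an internal inventory might tag each system with its risk tier and the duties that follow from it.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices, e.g. social scoring
    HIGH = "high"                   # sensitive domains: health, justice, hiring, finance
    LIMITED = "limited"             # transparency duties, e.g. chatbots
    MINIMAL = "minimal"             # voluntary best practices, e.g. spam filters

# Simplified obligation map: illustrative only, not the regulation's full text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy in the EU"],
    RiskTier.HIGH: ["risk management", "technical documentation", "human oversight",
                    "traceability and logging", "post-market monitoring"],
    RiskTier.LIMITED: ["inform users they are interacting with AI",
                       "label AI-generated content"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

def obligations_for(system_name: str, tier: RiskTier) -> list[str]:
    """Return the (simplified) obligations attached to a system's risk tier."""
    return [f"{system_name}: {item}" for item in OBLIGATIONS[tier]]

if __name__ == "__main__":
    for line in obligations_for("CV-screening model", RiskTier.HIGH):
        print(line)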

For general-purpose AI models, such as large language or generative models, the EU has introduced a voluntary Code of Practice that anticipates stricter future requirements.
Here too, transparency and responsible risk management are key to ensuring the ethical use of artificial intelligence.

Legal, operational, and reputational risks for large enterprises

The integration of artificial intelligence into business processes brings a range of complex and interconnected risks that large enterprises cannot afford to overlook. From a legal standpoint, the implications of the AI Act are significant: the regulation allows for fines of up to €35 million or 7% of global annual turnover for the most serious violations, such as the use of prohibited practices or failure to comply with high-risk system requirements.
Even less severe infractions, such as lack of transparency or inadequate documentation, can result in substantial penalties, especially when compounded by additional national obligations, such as those introduced in Italy by Law 132/2025, which extended the corporate liability framework of Legislative Decree 231/2001 (the “231 Model”) and established new organizational responsibilities.

On the operational front, the use of AI in critical areas—ranging from recruitment and credit approval to healthcare—exposes companies to potential systemic errors. If an algorithm produces discriminatory decisions or relies on biased data, the consequences can go far beyond individual cases, undermining process efficiency and threatening organizational stability. This is further compounded by the technical complexity of managing increasingly sophisticated models, the risk of cybersecurity vulnerabilities, and the challenge of ensuring effective performance monitoring in real-world environments.

Lastly, there is a growing reputational risk. In a context where stakeholder trust is crucial, a single incident of algorithmic bias or rights violation can cause lasting damage to a company’s image. The media and public opinion are increasingly sensitive to these issues, and companies that fail to adopt high ethical and regulatory standards may find themselves exposed to criticism, boycotts, or loss of competitiveness.

Principles and compliance requirements for high-risk systems

At the heart of the AI Act is a set of specific obligations aimed at those who develop or use artificial intelligence systems classified as “high-risk.” These systems, being deployed in particularly sensitive areas, must meet strict requirements spanning from initial design to deployment and ongoing monitoring.

Companies are required to implement a structured risk management process, including an AI impact assessment capable of identifying potential issues before the system is placed on the market or used operationally. This assessment cannot be a mere theoretical exercise—it must translate into concrete mitigation measures, both technical and organizational.

Another key pillar is technical documentation: every system must be accompanied by a detailed file describing its architecture, datasets used, algorithmic logic, testing procedures, implemented security measures, and expected performance. This is not merely an archive, but a critical tool to ensure system traceability and transparency.
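
As a rough illustration of how such a file can be kept in a structured, easily updated form (the field names below are our own assumptions, not a template prescribed by the AI Act), a simple machine-readable record might look like this:

import json
from dataclasses import dataclass, field, asdict

@dataclass
class TechnicalFile:
    """Illustrative structure for a high-risk system's technical file.
    Field names are assumptions for this sketch, not the AI Act's official template."""
    system_name: str
    intended_purpose: str
    architecture: str                                         # model type, components, dependencies
    datasets: list[str] = field(default_factory=list)         # data sources and provenance
    testing_procedures: list[str] = field(default_factory=list)
    security_measures: list[str] = field(default_factory=list)
    expected_performance: dict = field(default_factory=dict)  # metrics and thresholds
    version: str = "0.1.0"

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

doc = TechnicalFile(
    system_name="credit-scoring-model",
    intended_purpose="Support, not replace, human credit decisions",
    architecture="Gradient-boosted trees behind a REST scoring service",
    datasets=["internal loan history 2018-2024 (pseudonymised)"],
    testing_procedures=["bias audit by protected attribute", "stress test on edge cases"],
    security_measures=["access control on training data", "signed entries in a model registry"],
    expected_performance={"AUC": 0.85, "max_disparate_impact": 0.8},
)
print(doc.to_json())

Keeping such a record versioned alongside the system’s code makes it easier to demonstrate traceability when a notified body or supervisory authority asks for evidence.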

Human oversight plays an equally central role. AI cannot operate fully autonomously in critical domains—human operators must be able to understand, intervene in, and, if necessary, override automated decisions. For generative models—such as those that produce text, images, or videos—the regulation also mandates clear disclosure of the artificial nature of the content and the adoption of technical measures to distinguish it from human-generated content.
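
A minimal human-in-the-loop pattern, sketched below with an assumed confidence threshold and routing rule that the regulation does not itself prescribe, lets the model propose an outcome while low-confidence cases require explicit human sign-off, and a reviewer can always override the automated decision:

from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str          # model's proposed decision
    confidence: float     # model's self-reported confidence, 0..1
    needs_review: bool    # True when a human must confirm or override

# Assumption for the sketch: cases under this confidence go to a human reviewer.
REVIEW_THRESHOLD = 0.80

def propose(outcome: str, confidence: float) -> Decision:
    """Wrap a model output so that uncertain cases are flagged for human review."""
    return Decision(outcome, confidence, needs_review=confidence < REVIEW_THRESHOLD)

def finalize(decision: Decision, human_override: str | None = None) -> str:
    """A human reviewer can always replace the automated outcome."""
    if human_override is not None:
        return human_override
    if decision.needs_review:
        raise ValueError("Low-confidence decision requires explicit human sign-off")
    return decision.outcome

print(finalize(propose("approve", 0.95)))                           # automated path
print(finalize(propose("reject", 0.60), human_override="approve"))  # human override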

Finally, the regulation requires independent verification of compliance. In many cases, the involvement of notified bodies will be necessary to certify adherence to the required standards. Even after deployment, companies must ensure continuous performance monitoring, collect user feedback, and respond swiftly to any unforeseen issues.

Implementation roadmap for large enterprises

To successfully navigate the transition required by the AI Act, large enterprises must adopt a structured, multidisciplinary approach that encompasses technological, organizational, legal, and cultural aspects.

The first step is to carry out a complete mapping of all artificial intelligence systems currently in use, identifying for each one its purpose, application context, and risk level according to the criteria set by the European regulation. This initial audit helps identify any gaps in compliance and plan the necessary corrective actions.
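
As a simplified sketch of that gap analysis (reusing the illustrative tier and obligation labels from the classification above; the system names and controls are placeholders), each inventoried system can be compared against the controls its risk tier requires:

# Illustrative gap analysis: compare the controls already in place with the
# (simplified) obligations attached to each system's risk tier.
REQUIRED = {
    "high": {"risk management", "technical documentation", "human oversight",
             "traceability", "post-market monitoring"},
    "limited": {"user disclosure", "content labelling"},
    "minimal": set(),
}

inventory = [
    {"system": "CV screening", "context": "HR recruiting", "tier": "high",
     "controls": {"technical documentation", "human oversight"}},
    {"system": "support chatbot", "context": "customer care", "tier": "limited",
     "controls": {"user disclosure"}},
]

for entry in inventory:
    gaps = REQUIRED[entry["tier"]] - entry["controls"]
    status = "compliant" if not gaps else f"gaps: {sorted(gaps)}"
    print(f"{entry['system']} ({entry['tier']}): {status}")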

At the same time, it is essential to establish dedicated AI governance. This involves creating an internal committee with cross-functional expertise—from legal and cybersecurity to IT and compliance—responsible for defining policies, overseeing projects, and ensuring alignment between business objectives and regulatory requirements. Some companies choose to appoint an AI Compliance Officer to coordinate and monitor these efforts.

The design of AI systems must incorporate compliance principles from the outset. This means embedding transparency, accountability, and oversight throughout the model’s lifecycle by adopting rigorous testing practices, version tracking, controlled environment simulations, and cross-validation. Once the systems are ready, organizations must go through the formal compliance process, which in some cases includes certification by third-party bodies. A complete and continuously updatable technical file must be prepared—serving as the system’s “identity card” and demonstrating adherence to the required standards.

But compliance doesn’t end at deployment. Companies must implement continuous monitoring, using tools capable of detecting anomalies, collecting reports, tracking performance, and updating corrective measures. At the same time, it is crucial to invest in employee training, fostering a corporate culture centered on ethics and responsibility in the use of AI.
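
A simple illustration of what such monitoring can look like in practice, with thresholds and metric names chosen purely as assumptions for the sketch, is a periodic check that compares live performance against the values declared in the technical file and raises an alert when drift exceeds tolerance:

# Illustrative post-deployment check: flag drift against documented baselines.
# Baseline values, tolerances, and metric names are assumptions for the sketch.
BASELINE = {"accuracy": 0.90, "positive_rate_gap": 0.05}   # declared in the technical file
TOLERANCE = {"accuracy": 0.03, "positive_rate_gap": 0.02}  # accepted drift per metric

def check_drift(live_metrics: dict[str, float]) -> list[str]:
    """Return human-readable alerts for metrics drifting beyond tolerance."""
    alerts = []
    for name, baseline in BASELINE.items():
        drift = abs(live_metrics.get(name, baseline) - baseline)
        if drift > TOLERANCE[name]:
            alerts.append(f"{name}: drift {drift:.3f} exceeds tolerance {TOLERANCE[name]}")
    return alerts

# Example: weekly metrics collected from production logs.
alerts = check_drift({"accuracy": 0.84, "positive_rate_gap": 0.06})
print(alerts or "within tolerance")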

The final step in the roadmap is a component often overlooked: continuous updates. Regulations evolve, as do technologies. Companies must establish mechanisms to stay informed about regulatory developments, actively engage in the ongoing dialogue, and promptly adapt their processes. Only in this way can compliance become a lasting competitive advantage.

Conclusion

In a rapidly evolving regulatory landscape, compliance with the AI Act is not merely a box-ticking exercise—it is a strategic lever to strengthen business resilience, customer trust, and technological leadership. Large enterprises that successfully integrate governance, operational policies, and internal culture will find that regulatory requirements serve as a catalyst to enhance the reliability and sustainability of their AI applications.

Mashfrog Group is the ideal partner to support businesses through this complex journey. From the initial audit and risk assessment to the preparation of the technical file, certification, training, and regulatory updates, Mashfrog provides guidance backed by technical, legal, and ethical expertise.
In our article "AI Act: Challenges and Opportunities of the New European Regulation", we explore additional implications and case studies, offering insights into how regulatory obligations can be transformed into a competitive advantage.