June 12, 2025

Guide to the AI Act for Businesses

Requirements, deadlines, and tools to stay compliant with the new European regulation

The AI Act is the new European regulation on artificial intelligence, the first global law designed to ensure AI systems are safe, transparent, and respectful of fundamental rights. It adopts a risk-based approach, banning unacceptable systems and imposing strict requirements on high-risk ones, with penalties of up to €35 million or 7% of global turnover.
This framework directly affects those who develop or implement Virtual Assistants. In this article, we’ll explore the implications of the regulation, analyze the obligations tied to different risk levels, provide an operational checklist, and show how our platform supports companies in achieving compliance.

EU AI Act: Key Dates to Mark on Your Calendar

The AI Act’s implementation schedule includes new obligations rolling out roughly every six months, directly impacting those who develop or use Virtual Assistants. From February 2, 2025, two core measures take effect, a total ban on systems classified as unacceptable risk and a requirement to ensure AI literacy among staff. This requirement goes beyond an internal training course; companies must demonstrate that those designing, integrating, or overseeing AI possess appropriate knowledge of risks and mitigation strategies.

August 2025 – Governance and General-Purpose AI

The second deadline, set for August 2, 2025, is crucial for those leveraging large language models. Providers must produce technical documentation, register with the European AI Office, and submit models to regular safety testing. Those integrating these models into their Virtual Assistants must verify provider compliance, track the versions used, and update risk assessments whenever the model changes.

August 2026 – Full Enforcement for High-Risk Systems

A year later, from August 2, 2026, full obligations will apply to systems classified as high risk, impact assessments, data lifecycle management, human oversight, detailed logging, and incident reporting. National supervisory authorities will be operational and authorized to carry out inspections and impose sanctions.

AI Act Risk Category Map

The AI Act classifies AI systems into four risk categories, each with different rules.

Unacceptable AI Risk

This scenario includes AI uses deemed a serious threat to safety or rights, such as social scoring systems or algorithms that manipulate human behavior in harmful ways. These applications are banned from the European market.

High AI Risk

AI systems that can have a significant impact on health, safety, or fundamental rights, such as those used for decision-making in the medical, financial, employment, or public security sectors, can only be placed on the market if they meet strict design and control requirements. In most cases, compliance is verified internally by the provider, without the involvement of external bodies. Only certain specific categories require independent third-party assessments, as outlined in Annex III of the AI Act.

Limited AI Risk

Systems posing minimal risk to users and subject only to transparency obligations. This classification includes most virtual assistants used in customer service or support. Users must be clearly informed that they’re interacting with an AI, not a human. Generative systems that produce text or images must also signal their artificial nature, e.g., through disclaimers or watermarks, especially when used in public information contexts.

Minimal or No AI Risk

This level covers most everyday AI applications with no significant risk, such as spam filters, basic recommendation engines, or AI in video games. These applications are not subject to specific obligations, but good practices, like ethical assessments and basic cybersecurity, are still recommended to maintain a responsible approach to AI.
For business decision-makers, it is essential to map their use cases of conversational AI against these categories to understand the specific rules that apply.

Figure: Visual categorization of the risk levels introduced by the AI Act

AI Act Compliance Checklist

Once you’ve identified your virtual assistant’s risk profile, ensure it meets the AI Act’s compliance requirements.

Data Lineage and Documentation

Maintain complete documentation of the AI system. It’s mandatory to track details on training data and its lineage (origin and quality), as well as development and testing methodologies, and the model’s purpose and functionality. For advanced systems, the AI Act requires full technical documentation, reports, model cards highlighting known biases and limitations, and clear user manuals.

Human-in-the-Loop (Human Oversight)

Ensure proper human involvement in the AI assistant’s lifecycle and usage. For higher-risk assistants, this includes human oversight measures such as monitoring conversations, manual intervention options for critical decisions, and escalation procedures when interactions exceed defined boundaries. The AI Act mandates human supervision for high-risk systems to prevent or mitigate undesirable outcomes. Even for limited-risk scenarios, offering a human fallback (e.g., speaking to a live agent on request) and periodically reviewing chat logs to identify issues is considered best practice.

Bias Testing and Mitigation

Implement continuous risk and bias assessment before and after deployment. During development, run rigorous tests on data quality and model behavior to spot discriminatory biases or systematic errors. The AI Act requires training data to be high-quality, representative, and error-free to minimize the risk of discriminatory outcomes. For instance, an HR support assistant must not reinforce gender or ethnic biases. Diverse dataset testing helps identify bias, which can then be corrected by balancing data, refining prompts, or restricting certain responses.
Also test system robustness, how it handles incorrect or malicious inputs, hallucination rates, inaccurate answers, and apply corrections to improve accuracy.
Lastly, schedule regular post-deployment reviews. The AI Act encourages continuous system monitoring to ensure long-term compliance and safety. This requirement might include annual performance audits and targeted checks after major model updates.

Logging and Traceability

Activate comprehensive logging of all interactions and decisions made by the conversational agent, and store logs securely for audits. Traceability is a core AI Act principle, particularly to ensure accountability. High-risk system providers must record events throughout the AI lifecycle to trace how specific outputs were generated. Even for lower-risk assistants, detailed logs (timestamps, queries, responses, human interventions) help reconstruct system behavior, identify anomalies or misuse, and demonstrate due diligence during investigations.

Security and Privacy

Don’t overlook complementary compliance aspects like cybersecurity and data privacy. If the system processes personal data, it must comply with GDPR and data minimization principles. The AI Act aligns with laws like GDPR, promoting a holistic compliance approach. Robustness also includes defenses against malicious inputs that could alter AI behavior.

The indigo.ai Approach to the European Union AI Act

Complying with the AI Act isn’t just about ticking boxes. It’s a continuous journey that integrates technology, processes, and organizational culture. Even though our solution is not classified as “high risk,” our platform and features are designed to meet, and often exceed, the AI Act’s core principles.

1. Transparency by Design (Art. 50)

No matter where a Virtual Assistant is deployed, website, mobile app, or voice, we provide built-in tools to ensure full user clarity. Informative banners, welcome messages, and pre-configured prompts clearly indicate that the user is interacting with an AI system. These features are easily activated with just a few clicks, requiring no additional development.
This easiness helps businesses meet transparency requirements while building trust. Communicating clearly from the first interaction avoids confusion and enhances perceived reliability and fairness.

2. Protection Against Misuse: Anti-Jailbreak Detection (Art. 5)

We understand how prompt injection or jailbreak techniques can compromise a conversational model’s behavior. That’s why we’ve developed an advanced Anti-Jailbreak Detection tool that identifies and blocks suspicious or harmful inputs in real time, preventing the chatbot from generating responses that breach company policies or violate regulations.
This functionality reduces the risk of AI manipulation and misuse, protects brand reputation, and supports compliance with Article 5, which prohibits unacceptable use.

3. Integrated Human Oversight (Art. 4 and 26)

Human-in-the-loop is a core principle of our conversational model. That’s why we enable automatic handoff to a human agent when conversations go beyond predefined parameters, real-time monitoring from centralized dashboards, and escalation rules for low, medium, or high-criticality scenarios.
This approach allows businesses to demonstrate active system control while ensuring uninterrupted service, even in sensitive situations.

4. Data Traceability and Security (AI Act and GDPR)

Every interaction handled by our system is logged with timestamps, metadata, and immutable records, ready for internal audits or regulatory requests. The platform includes secret management tools and fine-grained access controls, maintaining data security even in highly regulated environments.
This infrastructure is designed to natively support data governance, eliminating the need for external tools.

What Remains the Deployer’s Responsibility

Using reliable technology is not enough for full AI Act compliance. Responsible system management is also essential. A responsible approach  includes clearly informing users they’re interacting with AI, avoiding prohibited uses such as manipulating cognitive vulnerabilities or real-time biometric profiling, and ensuring all involved personnel are properly trained, with active human oversight on critical decisions and continuous performance and risk monitoring.
Our platform makes transparency, traceability, and security core components of its architecture, not add-ons. Our goal is to help businesses embrace the AI Act not as a burden, but as an opportunity to build trust, governance, and long-term value.

Complying with the AI Act is more than a regulatory requirement, it’s an opportunity to strengthen governance, trust, and competitiveness. Companies that integrate transparency, oversight, and traceability into their Virtual Assistants today are laying the foundation for a more robust and sustainable digital transformation.

FAQs

Do I need to comply with the AI Act if I only use a virtual assistant for customer care?

Yes. If the assistant interacts directly with users, you must meet at least the transparency obligations. You must clearly inform users they are speaking with AI and monitoring responses to avoid errors or bias.

What if I use a third-party generative AI model?

As the integrator, you’re responsible for verifying the provider’s compliance. You must document the model, track the versions used, and regularly update risk assessments, especially after each model update.

Is indigo.ai’s platform ready to support these requirements?

Yes. Our platform offers built-in tools to ensure transparency, oversight, traceability, and security, helping companies adapt to the AI Act quickly and confidently.

Sign up for our newsletter
Don't take our word for it
Try indigo.ai, in a few minutes, without installing anything and see if it lives up to our promises
Try a Demo
Non crederci sulla parola
This is some text inside of a div block. This is some text inside of a div block. This is some text inside of a div block. This is some text inside of a div block.

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse varius enim in eros elementum tristique. Duis cursus, mi quis viverra ornare, eros dolor interdum nulla, ut commodo diam libero vitae erat. Aenean faucibus nibh et justo cursus id rutrum lorem imperdiet. Nunc ut sem vitae risus tristique posuere.