EU AI Act Statement

Last updated: 26 April 2026 · Regulation (EU) 2024/1689

Our Position on the EU AI Act

The EU AI Act (Regulation 2024/1689) entered into force on 1 August 2024 and applies progressively through 2027. Cortex AI welcomes this regulatory framework as it aligns with our founding principle: AI should be transparent, accountable, and trustworthy. We are actively preparing for full compliance ahead of each applicable deadline.

1. Our Role Under the AI Act

Cortex AI operates primarily as a provider (developing AI systems for clients) and deployer (operating AI systems on behalf of clients) as defined in Article 3 of the AI Act. In some engagements, we also act as an importer or distributor of third-party AI components.

2. Risk Classification of Our Systems

We classify all AI systems we build according to the AI Act's risk tiers:

Unacceptable Risk

We do not build and will not build systems in this category (e.g., social scoring, real-time remote biometric identification in publicly accessible spaces, subliminal manipulation).

High Risk (Annex III)

We apply full conformity assessment procedures for any high-risk systems (e.g., HR screening tools, credit scoring). These include technical documentation, human oversight mechanisms, accuracy/robustness testing, and registration in the EU database.

Limited Risk

Most of our client solutions (chatbots, document assistants, automation tools) fall here. We apply transparency obligations: users are always informed they are interacting with an AI system.

Minimal Risk

Analytics dashboards, recommendation engines, and similar tools. No specific obligations beyond our standard quality practices.

3. General-Purpose AI (GPAI) Models

We use GPAI models (Mistral AI, OpenAI, Anthropic) as components within our solutions. We comply with the obligations applicable to downstream providers and deployers that integrate GPAI models, including transparency to end users and appropriate use restrictions. We favour EU-native models (Mistral AI) wherever technically feasible.

4. Human Oversight

All AI systems we deploy include meaningful human oversight mechanisms. We do not build fully autonomous decision-making systems that affect individuals without human review capability. Our standard implementation includes audit logs, override controls, and confidence thresholds that trigger human escalation.
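As an illustration, the escalation pattern described above (audit logs, override controls, and a confidence threshold that routes low-confidence decisions to a human) can be sketched as follows. This is a minimal, hypothetical example: the names, threshold value, and data fields are illustrative assumptions, not Cortex AI's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative threshold: decisions below this confidence are escalated to a human.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Decision:
    subject_id: str
    outcome: str
    confidence: float
    needs_human_review: bool = False
    audit_log: list = field(default_factory=list)

def automated_decision(subject_id: str, outcome: str, confidence: float) -> Decision:
    """Record every automated decision and flag low-confidence ones for human review."""
    decision = Decision(subject_id, outcome, confidence)
    decision.needs_human_review = confidence < CONFIDENCE_THRESHOLD
    decision.audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "outcome": outcome,
        "confidence": confidence,
        "escalated": decision.needs_human_review,
    })
    return decision

def human_override(decision: Decision, new_outcome: str, reviewer: str) -> Decision:
    """A reviewer can always replace the automated outcome; the override is logged."""
    decision.audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "outcome": new_outcome,
        "reviewer": reviewer,
        "escalated": False,
    })
    decision.outcome = new_outcome
    decision.needs_human_review = False
    return decision
```

In this sketch the audit trail is append-only, so both the automated decision and any subsequent human override remain traceable for review.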

5. Compliance Timeline

  • February 2025: Prohibited AI practices provisions applied — already compliant.
  • August 2025: GPAI model obligations — in preparation.
  • August 2026: High-risk AI system obligations (Annex III) — conformity assessment procedures being developed.
  • August 2027: Full Act application — target date for complete compliance framework.

6. Contact

For AI Act compliance questions or to discuss how our solutions are classified: [email protected]
