EU AI Act · Regulation 2024/1689 · 104 days to 2 Aug

A living operating record for the AI Act.

Inventory every AI system, map its obligations to the exact articles, and assign control owners. Evidence auto-marks stale at 90 days. Close a signed monthly review and you have the record an auditor expects.

  • From €124/mo billed yearly
  • 14-day free trial
  • No credit card

System brief

Audit-ready

Customer Support Copilot

OpenAI · gpt-4o · Live chat triage · EU customers

82 / 100 · Readiness
Classification

High-risk

Art. 6(3) · Annex III.4

Obligations

11 mapped

9 covered · 2 open

Evidence

18 items

2 stale · last reviewed 2h ago

Next action

Sign off monthly review

Due in 6 days

Free · no signup

Is your AI system in scope of the EU AI Act?

Run a 60-second readiness check against Articles 5, 6, and 50 and the Annex III high-risk list. Based on the public text of Regulation (EU) 2024/1689 — informational guidance, not legal advice.

The five-step check

  1. Describe the system
  2. Pick deployment areas · Annex III
  3. Who signs off? · Art. 6(3) · 14
  4. Quick Art. 5 / 50 triggers
  5. Optional · email a copy
  • Maps selected sectors to the operative Annex III point and cites the article triggered.
  • Returns specific operational next steps — which articles apply, what evidence to gather, and what to escalate.
  • If the shallow result suggests high-risk, the full /assessment walks every Article 5 / Annex I / Annex III branch.
Your result includes
  • Risk class
  • Articles triggered
  • Operational next steps

Who is on the receiving end, what data goes in, what comes out. Plain language is fine. Minimum 30 characters.


Where is the system deployed?

Annex III

Pick every area the system operates in. The first eight map onto an operative Annex III high-risk point; the rest are common contexts outside Annex III.

Who signs off on the output?

Art. 6(3) · 14 · 50

Several AI-Act questions hinge on who acts on the output — the Art. 6(3) high-risk exceptions, Art. 14 human oversight once a system is high-risk, and Art. 50 transparency when people interact with the system directly.

Quick triggers — does either apply?

Art. 5 · 50

Either one alone determines the outcome. Tick whichever is true; leave both unchecked if neither applies.

Optional

We keep this only to email you the memo and to let you claim the result into an Attevera account. We will not sell it or add you to a newsletter. See Privacy.

5 runs per day. Guidance only — not legal advice.

The spine

One chain. Five steps. Nothing skipped.

01/05

Register

Inventory every AI system with vendor, purpose, data, and deployer context.

14 systems

02/05

Classify

Decision tree across Art. 5, Art. 6, Annex III, and Art. 50. Rationale stored.

LIMITED · Art. 50

03/05

Obligations

Map the articles that apply, broken into controls with named owners.

Art. 9–15 · 14 mapped

04/05

Evidence

Capture proof, link it to the control it supports. Stale after 90 days.

22 items · 2 stale

05/05

Audit

Every classification, assignment, and sign-off. Append-only ledger.

1,247 entries

The artifact

The System Readiness Packet

System readiness packet · Rev. 012

Customer Support Copilot

OpenAI GPT-4o · Deployer obligations · EU live support

Risk tier

LIMITED

Art. 50

Obligations

14 / 16 mapped · ready

Evidence

22 items · 2 stale

Sign-off

04 Apr · by R. Choi

Obligation map · Articles applied

Art. 9 · Risk mgmt · Art. 10 · Data gov · Art. 13 · Transparency · Art. 14 · Human oversight · Art. 15 · Accuracy · +9 more

System profile

Ready

Customer Support Copilot · OpenAI GPT-4o · deployed to live support queue since Q1.

Owner: Rina Choi · Scope: EU customers

Risk classification

Limited · reviewed

Classified Limited risk — chatbot with transparency obligations under Art. 50.

Reviewed by legal · 12 days ago

Obligation coverage

14 mapped · 2 open

Art. 13 transparency notice and Art. 14 oversight log still missing evidence.

Art. 13 · notice text · Art. 14 · oversight log

Evidence state

22 items · 2 stale

Bias test report exceeded 90-day freshness window. Review or replace.

Bias test · 94 days old

Generated documents

3 generated · 1 placeholder

System description has a placeholder for incident contact. Fix before export.

Audit trail

1,247 entries · append-only

Every classification change, assignment, and sign-off preserved with actor + timestamp.

12 days since sign-off · Rina Choi · 04 Apr 2026

2 material changes since · packet marked stale

3 open evidence requests · owners notified

Definition

What Attevera is. What it isn't.

Four promises · four refusals

Attevera is · We are not
§01

An AI Act specialist.

Built for Regulation (EU) 2024/1689. Art. 5 screenings, Art. 6(3) exceptions, Annex III mapping, Art. 50 transparency limbs, the Art. 73 15-day incident clock, and Art. 4 literacy.

Not SOC 2 with an AI bolt-on.

§02

An operating record.

Your AI systems, their obligations, controls, evidence, and sign-off state. When a system changes, the packet marks stale and the affected articles flag for re-review.

Not another document generator.

§03

A monthly rhythm, not a file.

Fresh readiness review every month. Evidence marks stale at 90 days. Article 73 sets a 15-day countdown the moment a serious incident is logged. Open work lands on the dashboard.

Not a one-shot audit export.

§04

Evidence you can prove.

Source, owner, linked control, external URL, date, stale state. Reviewed in one click. Append-only audit ledger.

Not a GRC spreadsheet.

For developers · Agent-populated register

Your codebase already knows your AI systems. Let your AI do the register.

Open Claude, Cursor, or Codex in your repo and point it at our OpenAPI spec. The agent scans for model calls and POSTs each one into your register with vendor, model, purpose, and deployer role. You confirm risk tier, assign control owners, attach evidence, and sign off. An afternoon, not a quarter.

Operations: 28 · Resources: 9 · Auth: Bearer

agent brief → api → response

you → claude / cursor / codex

Scan src/ for model calls to OpenAI, Anthropic, Bedrock, Cohere, HF.
For each, POST /api/v1/systems with name, vendor, model,
purpose inferred from surrounding code, deployer_role=deployer.
Skip dev-only mocks and tests. Stop and ask when unclear.

agent → POST /api/v1/systems

curl https://app.attevera.com/api/v1/systems \
  -H 'Authorization: Bearer att_live_…' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "Customer Support Copilot",
    "vendor": "OpenAI",
    "model": "gpt-4o",
    "purpose": "Live chat triage · EU customers",
    "deployer_role": "deployer"
  }'

201 · response

{
  "id": "sys_01HX7Q…",
  "name": "Customer Support Copilot",
  "classification": "unclassified",
  "next_action": "Run classification"
}
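A minimal sketch of the scan step from the agent brief above, in Python. The vendor names and payload fields (`name`, `vendor`, `model`, `purpose`, `deployer_role`) come from this page; the regex heuristic and the placeholder `purpose` are illustrative assumptions, not the actual agent behavior, and a real run would infer purpose from the surrounding code before POSTing.

```python
import re

# Illustrative vendor-detection patterns — a real agent reads the code,
# not just import lines.
VENDOR_PATTERNS = {
    "OpenAI": r"\bimport openai\b|\bfrom openai\b",
    "Anthropic": r"\bimport anthropic\b|\bfrom anthropic\b",
}
# Matches model="gpt-4o"-style keyword arguments.
MODEL_PATTERN = r"model\s*=\s*[\"']([\w.\-]+)[\"']"

def scan_source(text: str) -> list[dict]:
    """Build one register payload per detected vendor/model pair."""
    payloads = []
    for vendor, pattern in VENDOR_PATTERNS.items():
        if not re.search(pattern, text):
            continue
        for model in sorted(set(re.findall(MODEL_PATTERN, text))):
            payloads.append({
                "name": f"{vendor} {model} integration",
                "vendor": vendor,
                "model": model,
                "purpose": "TODO: infer from surrounding code",
                "deployer_role": "deployer",
            })
    return payloads

sample = '''
from openai import OpenAI
client = OpenAI()
resp = client.chat.completions.create(model="gpt-4o", messages=[])
'''
print(scan_source(sample))
```

Each returned dict is the body for one `POST /api/v1/systems` call, as in the curl example above; dev-only mocks and tests would be filtered out before posting.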

2 Aug 2026 · high-risk applicability

Ready when the audit is.

Start with one system. Walk it through the spine. Come back every month to a record that auto-flags stale evidence and a packet your auditor will recognize.

  • 14 days, full product
  • No card · cancel any time
  • Import + export both directions