System brief
Audit-ready · Customer Support Copilot
OpenAI · gpt-4o · Live chat triage · EU customers
High-risk
Art. 6(3) · Annex III.4
11 mapped
9 covered · 2 open
18 items
2 stale · last reviewed 2h ago
Sign off monthly review
Due in 6 days
Inventory every AI system, map its obligations to the exact articles, and assign control owners. Evidence auto-marks stale at 90 days. Sign off the monthly review and you have the record an auditor expects.
Free · no signup
Run a 60-second readiness check against Articles 5, 6, and 50 and the Annex III high-risk list. Based on the public text of Regulation (EU) 2024/1689 — informational guidance, not legal advice.
The five-step check
The spine
Each step feeds the next. Evidence auto-marks stale at 90 days, monthly reviews surface on schedule, and Article 73 starts a 15-day countdown the moment a serious incident is logged.
Inventory every AI system with vendor, purpose, data, and deployer context.
14 systems
Decision tree across Art. 5, Art. 6, Annex III, and Art. 50. Rationale stored.
LIMITED · Art. 50
Map the articles that apply, broken into controls with named owners.
Art. 9–15 · 14 mapped
Capture proof, link it to the control it supports. Stale after 90 days.
22 items · 2 stale
Every classification, assignment, and sign-off. Append-only ledger.
1,247 entries
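The classification step above walks a decision tree across Art. 5, Art. 6, Annex III, and Art. 50. A minimal sketch of that walk — an illustrative simplification, not the product's actual logic; the boolean inputs stand in for the real article checks:

```python
def classify(prohibited_practice: bool,
             annex_iii_use_case: bool,
             art_6_3_exception: bool,
             interacts_with_humans: bool) -> str:
    """Illustrative risk-tier walk: Art. 5 -> Art. 6 / Annex III -> Art. 50."""
    if prohibited_practice:                 # Art. 5: banned outright
        return "PROHIBITED"
    if annex_iii_use_case and not art_6_3_exception:
        return "HIGH · Art. 6 / Annex III"
    if interacts_with_humans:               # chatbots and similar: transparency tier
        return "LIMITED · Art. 50"
    return "MINIMAL"

# A support chatbot on Annex III.4 territory that qualifies for the
# Art. 6(3) exception still carries Art. 50 transparency duties:
print(classify(False, True, True, True))  # LIMITED · Art. 50
```

The rationale behind each branch is what gets stored alongside the result.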
The artifact
One system, one packet. Classification, obligations, controls, evidence, records, and sign-off — assembled in one place, exportable in one click.
System readiness packet · Rev. 012
OpenAI GPT-4o · Deployer obligations · EU live support
Risk tier
LIMITED
Art. 50
Obligations
14 / 16
mapped · ready
Evidence
22
items · 2 stale
Sign-off
04 APR
by R. Choi
Obligation map · Articles applied
System profile
Ready
Customer Support Copilot · OpenAI GPT-4o · deployed to live support queue since Q1.
Risk classification
Limited · reviewed
Classified Limited risk — chatbot with transparency obligations under Art. 50.
Obligation coverage
14 mapped · 2 open
Art. 13 transparency notice and Art. 14 oversight log still missing evidence.
Evidence state
22 items · 2 stale
Bias test report exceeded 90-day freshness window. Review or replace.
Generated documents
3 generated · 1 placeholder
System description has a placeholder for incident contact. Fix before export.
Audit trail
1,247 entries · append-only
Every classification change, assignment, and sign-off preserved with actor + timestamp.
Definition
Four promises · four refusals
An AI Act specialist.
Built for Regulation (EU) 2024/1689. Art. 5 screenings, Art. 6(3) exceptions, Annex III mapping, Art. 50 transparency limbs, the Art. 73 15-day incident clock, and Art. 4 literacy.
Not SOC 2 with an AI bolt-on.
An operating record.
Your AI systems, their obligations, controls, evidence, and sign-off state. When a system changes, the packet marks stale and the affected articles flag for re-review.
Not another document generator.
A monthly rhythm, not a file.
Fresh readiness review every month. Evidence marks stale at 90 days. Article 73 sets a 15-day countdown the moment a serious incident is logged. Open work lands on the dashboard.
Not a one-shot audit export.
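The two clocks in that rhythm are plain date arithmetic: 90 days to evidence staleness, 15 days on the Art. 73 serious-incident window. A minimal sketch (function names are illustrative):

```python
from datetime import date, timedelta

EVIDENCE_FRESH_DAYS = 90    # evidence marks stale at 90 days
INCIDENT_REPORT_DAYS = 15   # Art. 73 serious-incident window

def is_stale(captured_on: date, today: date) -> bool:
    """True once an evidence item is past its freshness window."""
    return (today - captured_on).days > EVIDENCE_FRESH_DAYS

def incident_deadline(logged: date) -> date:
    """Reporting deadline, counted from the day the incident is logged."""
    return logged + timedelta(days=INCIDENT_REPORT_DAYS)

print(is_stale(date(2026, 1, 5), date(2026, 4, 20)))  # True — 105 days old
print(incident_deadline(date(2026, 8, 2)))            # 2026-08-17
```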
Evidence you can prove.
Source, owner, linked control, external URL, date, stale state. Reviewed in one click. Append-only audit ledger.
Not a GRC spreadsheet.
For developers · Agent-populated register
Open Claude, Cursor, or Codex in your repo and point it at our OpenAPI spec. The agent scans for model calls and POSTs each one into your register with vendor, model, purpose, and deployer role. You confirm risk tier, assign control owners, attach evidence, and sign off. An afternoon, not a quarter.
agent brief → api → response
you → claude / cursor / codex
Scan src/ for model calls to OpenAI, Anthropic, Bedrock, Cohere, HF. For each, POST /api/v1/systems with name, vendor, model, purpose inferred from surrounding code, deployer_role=deployer. Skip dev-only mocks and tests. Stop and ask when unclear.
agent → POST /api/v1/systems
curl https://app.attevera.com/api/v1/systems \
-H 'Authorization: Bearer att_live_…' \
-H 'Content-Type: application/json' \
-d '{
"name": "Customer Support Copilot",
"vendor": "OpenAI",
"model": "gpt-4o",
"purpose": "Live chat triage · EU customers",
"deployer_role": "deployer"
}'
201 · response
{
"id": "sys_01HX7Q…",
"name": "Customer Support Copilot",
"classification": "unclassified",
"next_action": "Run classification"
}
2 Aug 2026 · high-risk applicability
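The agent workflow above reduces to a few lines: scan source for vendor SDK calls, build one register payload per hit, then POST each to `/api/v1/systems`. The vendor fingerprints and inferred purpose below are illustrative stand-ins; a real agent reads the surrounding code before it writes anything:

```python
import re

# Crude vendor fingerprints — a real agent inspects imports and call sites.
VENDOR_PATTERNS = {
    "OpenAI": r"openai\.|OpenAI\(",
    "Anthropic": r"anthropic\.|Anthropic\(",
    "AWS Bedrock": r"bedrock",
    "Cohere": r"cohere\.",
}

def scan(source: str, name: str, purpose: str) -> list[dict]:
    """Return one register payload per vendor detected in `source`."""
    payloads = []
    for vendor, pattern in VENDOR_PATTERNS.items():
        if re.search(pattern, source):
            payloads.append({
                "name": name,
                "vendor": vendor,
                "purpose": purpose,
                "deployer_role": "deployer",
            })
    return payloads

snippet = "client = OpenAI(); client.chat.completions.create(model='gpt-4o')"
for payload in scan(snippet, "Customer Support Copilot", "Live chat triage"):
    print(payload["vendor"])  # OpenAI
    # The agent would now POST `payload` to
    # https://app.attevera.com/api/v1/systems with its bearer token.
```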
Start with one system. Walk it through the spine. Come back every month to a record that auto-flags stale evidence and a packet your auditor will recognize.