For B2B SaaS selling AI into Europe

Answer procurement's AI governance questionnaire without slowing the deal.

A living record of your AI systems, article-mapped obligations, and the evidence behind them. Exportable as a System Readiness Packet the customer's legal team can read without a call back to you.

  • From €124/mo billed yearly
  • 14-day free trial
  • No credit card

The trigger

It isn't Brussels. It's procurement.

The first AI Act enforcement moment most SaaS teams actually feel isn't a letter from a regulator. It's a 30-to-60 question AI governance addendum that lands during security review for a deal you were about to close.

“Which articles of the EU AI Act apply to your system?” “What's your risk classification?” “Who is the named control owner for Article 14 human oversight?” “Can you provide evidence the system was reviewed in the last 90 days?” Legal forwards it to compliance. Compliance forwards it to engineering. Engineering sends back a Notion link. The deal stalls.

Attevera is the record those questions assume you already have.

What the packet contains

One system, one packet, six parts.

AI system register

Every AI system in your product — model, vendor, data subjects, EU exposure, owner, deployment scope.

Art. 26 · deployer record

Obligation map

Each system's obligations mapped to the exact articles that apply, with plain-language rationale.

Art. 5 · 6(3) · 9–15 · 50

Control ownership

Named humans — legal, product, engineering — own each obligation. No diffuse accountability.

Art. 14 · oversight

Evidence with review dates

Linked controls, source URLs, acknowledged reviewers, 90-day staleness flags.

Art. 12 · 17 · record-keeping

Append-only audit trail

Every classification, assignment, evidence upload, and sign-off preserved with actor and timestamp.

Art. 12 · logging

Signed System Readiness Packet

One exportable PDF per system — the artifact you hand to procurement, counsel, or a regulator.

Deployer-facing output

Against the DIY version

What you'd otherwise stitch together.

AI system list in a Notion page that nobody updates after quarter-end

Register that's the source, not a derivative

Obligation mapping in a Google Doc a consultant wrote six months ago

Article mapping updated when systems change — not when the doc gets opened

Evidence in three Drives, two Slack threads, and one person's inbox

Evidence with links, review dates, and staleness flags in one place

A PDF generated once for a deal, already stale by the next deal

A packet that is flagged stale automatically when a system changes

Answering the same 40 governance questions from scratch for every customer

Export, send, move on

Honest scope

What this does and doesn't cover.

In scope

  • Deployer obligations for AI systems you put into service in your own product (Art. 26).
  • Provider obligations for narrow-scope AI systems you build, fine-tune, or place on the market under your brand.
  • Article 50 transparency — chatbot disclosure, AI-generated content labeling, emotion-recognition notices.
  • Article 73 serious-incident 15-day reporting workflow with evidence capture.
  • Article 4 AI literacy — staff training records.

Out of scope

  • GPAI provider obligations (Art. 51–56) — if you train frontier models, Attevera is not your tool.
  • Annex I sector conformity — medical devices, aviation, rail, toy safety, and other embedded-product regimes.
  • SOC 2 / ISO 27001 evidence collection. Run Attevera alongside Vanta or Drata, not in place of them.
  • Legal advice. Your counsel signs off. Attevera helps you produce the record they're reviewing.

Pricing

Starter at €124/mo billed yearly. Growth for most product teams.

Starter covers up to 5 AI systems and 3 team members — enough for a single product with one or two AI features. Growth at €332/mo billed yearly covers up to 25 systems and 10 members, which is where most AI-heavy SaaS teams settle. No procurement dance, no demo gate — sign up, import your systems, export a packet the same day.

Questions teams ask

Before you commit an answer to procurement.

Am I a provider or a deployer?
Most B2B SaaS teams are deployers of their own AI system — you put the AI into service under your own name in your product. You cross into provider obligations if you substantially modify a third-party model, train or fine-tune your own, or place an AI system on the market under your brand that meaningfully differs from what you sourced. In practice: deployer for LLM-wrapper features, provider only for models you train yourself.
Do I need a conformity assessment?
Usually no. Conformity assessment is required for high-risk systems under Annex III or Annex I. If your AI is a chatbot, copilot, summarizer, or ranking feature outside those lists, you are typically in the Limited or Minimal tier — your main obligation is Article 50 transparency plus whatever your deployer contracts with downstream customers create.
Is this SOC 2 for AI?
No. Attevera is an AI Act specialist — Articles 5, 6(3), 9–15, 50, and 73 are encoded natively. It sits alongside a SOC 2 or ISO 27001 program, not instead of one. If your buyer wants SOC 2, use Vanta or Drata. If your buyer wants an AI governance answer, Attevera is the record behind it.
Will your packet satisfy my customer's legal team?
The packet gives legal what they usually ask for: system description, risk classification with citations, mapped obligations, named control owners, evidence with review dates, and a signed monthly review. Whether that closes the question for any specific counsel is their call. We help you produce the record; we do not replace your customer's legal review.
What about Colorado AI Act, NIST AI RMF, ISO 42001?
The register, ownership, evidence ledger, and monthly review cadence satisfy the structural requirements of all four frameworks, the EU AI Act included. The EU AI Act article mapping is the most specific; the others inherit the same underlying record.

Next step

Start with one system. Export a packet. See what your legal team says.