
AI Governance

We use AI to accelerate threat modeling. Here's exactly how—and who bears which responsibilities.

EU AI Act Risk Classification

The Ansvar AI threat modeling platform is classified as minimal/limited risk under the EU AI Act.

  • Does not perform Annex III high-risk functions (biometrics, credit scoring, etc.)
  • Does not profile natural persons—processes technical architecture only
  • Preparatory task only—human experts validate all outputs
  • No impact on fundamental rights or freedoms

Deployment Models & EU AI Act Responsibilities

We offer two deployment models. The EU AI Act "deployer" classification—and associated obligations—differs between them.

SaaS (Ansvar-Managed)

Ansvar is Deployer

We manage the complete infrastructure including LLM provider credentials.

Ansvar's EU AI Act Obligations:

  • Transparency obligations (informing clients of AI use)
  • Human oversight implementation
  • Appropriate use and monitoring

Client-Hosted (BYOLLM)

Client is Deployer

You host our workflow orchestration software and bring your own LLM provider and API credentials (a configuration sketch follows the lists below).

Ansvar Provides:

  • Orchestration software (VM + Docker)
  • Remote workflow execution
  • Expert report delivery

Client's EU AI Act Obligations:

  • LLM provider relationship
  • Deployer obligations under AI Act
  • Data processing with LLM provider
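As an illustration of the BYOLLM split, the sketch below shows how orchestration software can construct an LLM client purely from credentials the client injects at runtime. This is a minimal sketch, not our production code; the LLM_PROVIDER and LLM_API_KEY variable names are assumptions for the example.

    import os

    # Minimal sketch: the client injects their own credentials into the
    # container environment; the variable names here are hypothetical.
    LLM_PROVIDER = os.environ["LLM_PROVIDER"]  # e.g. "anthropic" or "openai"
    LLM_API_KEY = os.environ["LLM_API_KEY"]    # client-held secret, read at runtime

    def build_llm_client():
        """Construct an LLM client from client-supplied credentials only."""
        if LLM_PROVIDER == "anthropic":
            from anthropic import Anthropic
            return Anthropic(api_key=LLM_API_KEY)
        if LLM_PROVIDER == "openai":
            from openai import OpenAI
            return OpenAI(api_key=LLM_API_KEY)
        raise ValueError(f"Unsupported provider: {LLM_PROVIDER}")

Because the key is read from the client's own environment at runtime, it never needs to be shared with Ansvar, which is what makes the client the deployer in this model.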

Per EU AI Act Recital 13: providers of general-purpose software tools that are not AI systems themselves are not subject to AI-specific obligations.

Human Oversight

Regardless of deployment model, all AI outputs require human validation (a sketch of the review gate follows this list):

  • AI systems do not make autonomous security decisions
  • All outputs reviewed before client delivery
  • Personnel trained on capabilities and limitations
  • Authority to override, correct, or reject AI outputs
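A minimal sketch of such a review gate is below; the types and function names are illustrative, not our internal tooling. The point it demonstrates: nothing reaches a client report unless a human has explicitly approved it.

    from dataclasses import dataclass
    from enum import Enum

    class ReviewStatus(Enum):
        PENDING = "pending"
        APPROVED = "approved"
        REJECTED = "rejected"

    @dataclass
    class Finding:
        """An AI-generated threat-model finding awaiting human review."""
        text: str
        status: ReviewStatus = ReviewStatus.PENDING

    def review(finding: Finding, approve: bool, correction: str | None = None) -> Finding:
        """A human reviewer approves, corrects, or rejects a finding."""
        if correction is not None:
            finding.text = correction  # reviewer overrides the AI output
        finding.status = ReviewStatus.APPROVED if approve else ReviewStatus.REJECTED
        return finding

    def deliverable(findings: list[Finding]) -> list[Finding]:
        """Only human-approved findings ever reach the client report."""
        return [f for f in findings if f.status is ReviewStatus.APPROVED]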

Data Handling

Strict controls on what data AI systems process:

Processed

  • System descriptions
  • Architecture documentation
  • Technical specifications

Never Processed

  • Personally Identifiable Information (PII)
  • Individual profiling data
  • Production system access

Documentation containing PII is rejected and returned for sanitisation.
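To make the rejection rule concrete, a pre-screen along the following lines can block a document before any AI system sees it. This is a simplified sketch assuming a pattern-based screen; the patterns shown are illustrative and not a description of our actual process.

    import re

    # Illustrative patterns only; a production filter is more thorough.
    PII_PATTERNS = [
        re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),          # email addresses
        re.compile(r"\b\d{3}[-. ]?\d{2}[-. ]?\d{4}\b"),  # SSN-like numbers
        re.compile(r"\+?\d[\d ()-]{7,}\d"),              # phone-like numbers
    ]

    def prescreen(document: str) -> None:
        """Reject documentation that appears to contain PII before it
        reaches any AI system; the client sanitises and resubmits."""
        for pattern in PII_PATTERNS:
            if pattern.search(document):
                raise ValueError("Possible PII detected; returned for sanitisation")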

Approved AI Providers

  • Anthropic Claude: GPAI Provider
  • OpenAI GPT: GPAI Provider
  • Azure OpenAI: Enterprise Option

All providers assessed for EU AI Act compliance, data processing practices, and security controls. Full subprocessor details in our Data Processing Agreement.

Our AIMS Journey

We're building an AI Management System aligned with ISO/IEC 42001 to formalise our governance:

Risk Framework

Systematic assessment aligned with EU AI Act categories

Governance Structure

Clear roles and accountability for AI decisions

Continuous Monitoring

Ongoing review of AI outputs and compliance

Documentation

Audit-ready records supporting ISO 27001 alignment

Building AI-Powered Products?

If your organisation is deploying AI systems under the EU AI Act, understanding your threat landscape is critical. We help identify risks specific to AI-powered architectures—from model manipulation to data poisoning to prompt injection.

Explore threat modeling for AI systems

Questions about our AI governance? team@ansvar.eu

This page summarises our internal AI Policy. Full policy available upon request.