AI Governance Policy Template

Template Version 1.3 | 8 Core Sections | UK Governance Lens | 24h Incident Reporting
Clear Ownership · Safe AI Use · Board-Readable

1. Policy Purpose & Scope

An AI Governance Policy is a formal organisational document that sets the rules, responsibilities, and controls for how AI is used across the business.

This policy applies to all employees, contractors, and third-party suppliers who use, develop, procure, or oversee AI systems on behalf of [Organisation Name].

In Scope

General AI assistants, machine learning tools, automated decision support, AI-powered software, APIs, internal AI tools, third-party AI services.

Out of Scope

Basic macros, scheduled rules, and standard rule-based software that does not learn, infer, generate, or adapt using AI techniques.

2. Core Governance Principles

[Organisation Name] commits to using AI in line with the following core governance principles.

Accountability

All AI use is traceable to a named individual or team.

Transparency

Staff, clients, and users are informed when AI is used in interactions or decisions that affect them.

Data Minimisation

Only the minimum data needed for the stated purpose is used.

Human Oversight

AI recommendations are inputs, not final decisions.

Fairness

Bias risks are assessed and adverse outcomes reviewed.

Security

AI tools must meet the same security standards as other systems.

3. Approved AI Tools & Authorisation Process

Only AI tools that have completed the organisational approval process may be used for business purposes. Use of an unapproved AI tool for work is a policy breach.

| Tool Category | Examples | Approval Level Required | Data Classification Restriction |
|---|---|---|---|
| General AI Assistants | ChatGPT, Microsoft Copilot, Claude | IT Manager + DPO | Public or internal data only. No personal, confidential, or commercially sensitive data unless explicitly approved. |
| HR & Recruitment AI | CV screening tools, interview scoring | HR Director + DPO + Legal | Strict minimisation. Bias assessment required before deployment. |
| Customer-Facing AI | Chatbots, automated email, recommendation engines | Operations Director + DPO + Board | Human escalation path mandatory. UK GDPR Article 22 and transparency review required. |
| Financial AI | Fraud detection, forecasting, expense automation | Finance Director + IT + DPO | Audit trail required for all AI-informed decisions. |
| Content Creation AI | Copywriting, image generation, translation | Department Head | No client, confidential, or commercially sensitive information unless formally approved. |

To request approval for a new AI tool, submit an AI Tool Approval Request to your nominated governance owner. A Data Protection Impact Assessment (DPIA) is required where personal data is processed at scale or the AI use case creates material risk to individuals.

5. Governance Structure & Accountability

| Role | Responsibilities | Escalation Path |
|---|---|---|
| AI Steering Committee | Strategic oversight, approval of high-risk AI deployments, quarterly performance and risk review | Reports to Board |
| Data Protection Officer | GDPR compliance, DPIAs, data rights, and regulatory liaison where needed | ICO where required |
| IT / Information Security | Security assessment, access control, vendor review, and AI-related incident response | CTO / Operations Director |
| AI Champions | Department liaison, staff support, issue reporting, and practical AI feedback | Department Head |
| Line Managers | Ensure staff use approved AI tools and follow policy | Department Head |
| All Staff | Use approved tools only, report incidents, complete training, and follow data rules | Line Manager / AI Champion |

6. AI Incident Response

An AI incident is any event where an AI system causes harm, creates a significant error, is used outside policy boundaries, or creates a data protection concern. All incidents must be reported within 24 hours of discovery.

Severity 1. Critical

Financial loss, data breach, unlawful automated decision, or major reputational damage. Immediate escalation to DPO, senior leadership, and legal.

Severity 2. Significant

Incorrect business decision, unapproved tool used with client data, or suspected bias issue. Report to AI Champion and IT within 24 hours.

Severity 3. Minor

Inaccurate output caught before use, near-miss, or policy clarification issue. Log and report within 48 hours.
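For organisations that log incidents in a ticketing or workflow system, the severity rules above can be encoded so reporting deadlines and escalation routes are applied consistently. The sketch below is illustrative only: the severity labels, escalation targets, and reporting windows come from this policy, while the data structure and function names are assumptions invented for the example, not part of the template.

```python
from datetime import timedelta

# Illustrative sketch: severity levels and reporting deadlines per this
# policy. The routing structure itself is a hypothetical example.
SEVERITY_RULES = {
    1: {"label": "Critical",
        "report_within": timedelta(hours=0),  # immediate escalation
        "escalate_to": ["DPO", "Senior Leadership", "Legal"]},
    2: {"label": "Significant",
        "report_within": timedelta(hours=24),
        "escalate_to": ["AI Champion", "IT"]},
    3: {"label": "Minor",
        "report_within": timedelta(hours=48),
        "escalate_to": ["AI Champion"]},
}

def reporting_deadline(severity: int, discovered_at):
    """Return the latest permitted reporting time for an incident,
    measured from the moment of discovery."""
    return discovered_at + SEVERITY_RULES[severity]["report_within"]
```

Encoding the rules in one place means the 24-hour and 48-hour windows are enforced uniformly rather than left to individual judgement.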

7. Policy Compliance & Consequences

Compliance with this policy is mandatory. The response to a breach depends on its seriousness, the data involved, and whether the action was deliberate.

Serious breach example: entering client personal data, confidential information, or commercially sensitive material into a non-approved AI service.

| Breach Type | Example | Consequence |
|---|---|---|
| Minor | Using an approved tool outside its permitted data scope | Guidance, retraining, added oversight |
| Significant | Using an unapproved AI tool for work purposes | Formal warning, mandatory retraining |
| Serious | Entering personal or client data into a non-approved tool | Disciplinary action and possible regulatory review |
| Critical | Deliberate misuse to bypass controls | Termination and legal escalation where applicable |

8. Policy Review & Document Control

This policy should be reviewed at least annually, or sooner where regulation, organisational AI use, or technology changes materially.

Regulatory Trigger

New ICO guidance, EU AI Act changes, or sector-specific AI rules.

Incident Trigger

Any Severity 1 or Severity 2 incident revealing a policy gap.

Technology Trigger

New AI capability, new vendor class, or AI moving into a new business function.

| Version | Date | Changes |
|---|---|---|
| 1.0 | January 2024 | Initial policy published |
| 1.2 | August 2024 | Added EU AI Act references, updated DPIA triggers, added AI Champions role |
| 1.3 | February 2026 | Updated approved tool categories, incident severity wording, and content creation AI section |

Need Help Implementing AI Governance?

AI-Si can adapt this policy template for your organisation and help implement the full governance framework, from policy drafting and tool approval through to staff training, incident handling, and audit trail setup.
