RISK AND COMPLIANCE

AI Risk Register: A Simple Structure Leaders Can Maintain


Most AI risk registers die within three months of creation. They start as a spreadsheet someone fills in after a governance meeting, then get filed, forgotten, and quietly ignored. When the auditors arrive, or when something goes wrong, the document is either missing or six versions out of date.

The reason is not that people do not care about AI risk. It is that the register was built to satisfy a process, not to actually manage risk. I have seen registers with forty-three columns, colour-coded probability matrices, and no named owner for a single item. Nobody updates what nobody owns.

This article gives you a structure that works in practice. Tight enough to be useful. Flexible enough to survive contact with your actual organisation.

What an AI Risk Register Is — and What It Is Not

A risk register is a live document that records specific risks from your AI activity, who is responsible for each one, what controls are in place, and what triggers an escalation.

It is not a policy document. It does not explain your ethics position, your acceptable use principles, or your vendor selection criteria. Those live elsewhere. The register records the actual risks you are carrying right now, in the tools you are actually using.

Keep those two things separate. Conflating them produces documents that are too broad to act on and too vague to audit.

Six Risk Categories for AI

Group every entry under one of these six categories. They cover the full range of AI-specific exposure without creating unnecessary complexity.

THE SIX CATEGORIES

  1. Data risk — personal data, training data quality, data sovereignty, DSAR exposure
  2. Model risk — hallucination, bias, output accuracy, model drift over time
  3. Vendor risk — dependency, lock-in, contractual gaps, third-party security posture
  4. Operational risk — process failure, staff capability gaps, over-reliance on automation
  5. Legal and compliance risk — UK GDPR, ICO obligations, sector regulations, IP ownership
  6. Reputational risk — public-facing AI errors, deepfakes, client trust, media exposure

You do not need a separate register for each category. One document, six labels. If a risk sits across two categories, pick the dominant one and note the secondary in the description field.

The Seven Fields Every Entry Needs

Strip the register back to seven fields. Anything beyond this becomes a burden rather than a tool.

THE SEVEN FIELDS

  • Risk ID: Sequential reference. AI-001, AI-002. Simple and permanent.
  • Description: One plain-English sentence. Describe the specific harm that could occur, not a vague category.
  • Category: One of the six above. Data, Model, Vendor, Operational, Legal/Compliance, Reputational.
  • Score (L × I): Likelihood 1–5 multiplied by Impact 1–5. A score of 1–9 is low, 10–16 is medium, 17–25 is high.
  • Current controls: What is in place right now. Be honest: “none” is a valid and useful answer.
  • Owner: A named individual, not a team or department. If nobody will own it, escalate it to the board immediately.
  • Review date: Quarterly at minimum. High-scoring risks should be reviewed monthly.

Optional eighth field: escalation trigger. Define the specific condition that moves this risk from monitored to incident. For example: “Triggers escalation if more than two staff report AI output errors in the same week.” Without a trigger, risks sit in the register indefinitely without action.
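The seven fields plus the optional trigger map naturally onto a small data structure. Here is a minimal sketch in Python; the class name, helper names, and the example entry are hypothetical, but the fields and the 1–9 / 10–16 / 17–25 scoring bands mirror the table above.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskEntry:
    risk_id: str           # sequential and permanent, e.g. "AI-001"
    description: str       # one plain-English sentence naming the specific harm
    category: str          # Data, Model, Vendor, Operational, Legal/Compliance, or Reputational
    likelihood: int        # 1-5
    impact: int            # 1-5
    current_controls: str  # honest statement; "none" is valid
    owner: str             # a named individual, never a team
    review_date: date      # quarterly at minimum; monthly if high-scoring
    escalation_trigger: str = ""  # optional: the condition that turns monitoring into an incident

    @property
    def score(self) -> int:
        # Likelihood multiplied by Impact, as in the Score (L x I) field
        return self.likelihood * self.impact

    @property
    def band(self) -> str:
        # 1-9 low, 10-16 medium, 17-25 high
        if self.score >= 17:
            return "high"
        if self.score >= 10:
            return "medium"
        return "low"

# A hypothetical worked entry:
entry = RiskEntry(
    risk_id="AI-001",
    description="Staff paste client personal data into a public chatbot",
    category="Data",
    likelihood=4,
    impact=4,
    current_controls="none",
    owner="Jane Smith (COO)",
    review_date=date(2025, 9, 1),
    escalation_trigger="More than two staff report AI output errors in the same week",
)
```

The entry above scores 16, which lands in the medium band; nudge either likelihood or impact up one point and it becomes high. Whether you hold this in a spreadsheet or in code matters far less than keeping the fields this tight.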

What to Put In It First

Start with the AI tools your organisation is actively using, not the tools you think you might adopt in future. Speculative risks clutter the register and divert attention from the real ones.

  • List every AI tool currently in use across the business — including tools staff are using without formal approval
  • For each tool, identify what data it processes and who in your organisation has access
  • Check whether each vendor’s terms allow your data to be used for model training
  • Identify which business-critical processes would fail if the tool became unavailable tomorrow
  • Note any tools where no contract exists, no DPA has been signed, or no security assessment has been completed

Those five checks will generate the first ten to fifteen entries in your register. They will almost certainly include at least one high-scoring risk you were not formally aware of.

Keeping It Alive

A register that is not maintained is worse than no register. It creates false assurance.

Set three rules from the outset. First, every new AI tool or use case must generate at least one register entry before it goes into production. The procurement or approval step should include this explicitly. Second, any AI incident — however minor — triggers a review of the relevant entry. Third, ownership of the register sits with a named person at senior level, not with IT or the AI team alone.

Watch for this failure pattern: The register has entries but every owner is listed as a team rather than an individual. “IT Department” cannot be called at 9pm when something goes wrong. Neither can “Operations”. Name a person, and make sure they know they are named.

The Minimum Viable Risk Register

If your organisation is at the start of its AI governance journey, here is the minimum I would accept before advising a client to proceed with any AI deployment at scale.

MINIMUM REGISTER ENTRIES AT LAUNCH

  • ✓ One entry per active AI tool covering data handling risk
  • ✓ One entry covering vendor dependency and continuity
  • ✓ One entry covering staff capability and over-reliance
  • ✓ One entry covering UK GDPR obligations and ICO notification thresholds
  • ✓ One entry for any public-facing AI output (chatbots, automated communications)
  • ✓ A named register owner and a scheduled quarterly review date

Six entries. That is a register. It is not comprehensive governance — but it is a live document with owners and dates, which puts you ahead of the majority of UK SMEs and many councils I assess.

Connecting It to Your Wider Governance

The risk register does not stand alone. It should feed into your AI Governance Policy, your incident response process, and your board reporting cycle. High-scoring risks should appear on board-level risk dashboards. Incident closures should update the register within five working days.
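Pulling high-scoring risks onto a board dashboard can be as simple as a filter over the register. A minimal sketch, assuming each entry is a record with likelihood, impact, and a review date; the function name and the overdue check are my own additions, not a standard.

```python
from datetime import date

def board_dashboard(register: list[dict], today: date) -> list[dict]:
    """Return entries the board should see: high-scoring risks (score >= 17)
    plus any entry whose review date has slipped past today."""
    flagged = []
    for entry in register:
        score = entry["likelihood"] * entry["impact"]
        overdue = entry["review_date"] < today
        if score >= 17 or overdue:
            flagged.append({**entry, "score": score, "overdue": overdue})
    # Highest scores first, so the worst exposure tops the dashboard
    return sorted(flagged, key=lambda e: e["score"], reverse=True)
```

A slipped review date is treated as board-reportable on its own, whatever the score: a register nobody is reviewing is exactly the false assurance described earlier.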

If you do not yet have an AI Governance Policy or an AI Incident Response process, build those in parallel. The register without governance context is a list. With governance context, it becomes a management tool.

READY TO START?

I can build your AI risk register with you.

A 90-minute session produces a populated, owned, and board-ready register for your organisation. No templates to guess at. No consultancy jargon.

Talk to Simon
