The UK AI Security Institute (AISI): What It Is, Who Runs It, and Why It Matters


Most UK business leaders have heard the acronym. Few know what the organisation actually does, who runs it, or why the government renamed it. That gap matters — because the work happening inside AISI is quietly shaping the standards your vendors, auditors, and regulators will use to assess your AI activity.

This article gives you the facts, the context, and the practical implications for any UK organisation deploying AI.

It Was Renamed — and That Tells You Something

For its first fifteen months, the organisation was called the UK AI Safety Institute. On 14 February 2025, the Department for Science, Innovation and Technology (DSIT) announced it would be renamed the AI Security Institute, keeping the same abbreviation: AISI.

The rename was deliberate. The shift from “Safety” to “Security” signals a narrowing of focus. The institute moved away from broad ethical AI concerns — bias, fairness, freedom of expression — and concentrated on threats with direct national security implications: weapons misuse, cyberattacks, and large-scale criminal exploitation of AI.

That is not a criticism of the decision. It is a governance signal. If AISI is now focused on security-grade threats, organisations need to manage the ethical and fairness dimensions of their AI themselves — because no government body is primarily responsible for it.

What AISI Actually Is

AISI is a directorate of DSIT. It is not an independent regulator, and it has no enforcement powers over your organisation. What it does is conduct and fund research into the capabilities and risks of advanced AI, then use those findings to inform policymakers and shape how AI is developed globally.

Its stated mission is to equip governments with a scientific understanding of the risks posed by advanced AI, and to develop and test mitigations for those risks.

Practically, that means three things:

  1. Pre-deployment model testing. AISI tests leading AI systems before they are released publicly. It has direct access agreements with major AI developers to evaluate models for dangerous capabilities.
  2. Research and grants. The institute runs in-house research teams and distributes more than £15 million in research grants to external academics and organisations studying AI risk.
  3. Policy influence. Findings feed into UK government AI policy, international governance frameworks, and the regulatory baseline that sectors like financial services and public sector bodies are expected to meet.

Who Runs It

The Director is Adam Beaumont, previously the Chief AI Officer at GCHQ. His background is signals intelligence and national security — which reflects the institute’s current focus precisely.

The Chief Technology Officer is Jade Leung, who also serves as the Prime Minister’s AI Advisor. Leung previously led the Governance team at OpenAI. That combination — advising the PM while running the technical direction of AISI — gives the institute significant influence over UK AI policy in practice.

The Chief Scientist is Geoffrey Irving and the Research Director is Chris Summerfield. Between them, they have led teams at OpenAI, Google DeepMind, and the University of Oxford. This is not a civil service research shop. It is a technically serious operation with direct links into the frontier AI labs it evaluates.

The Funding Position

AISI has £66 million in annual funding confirmed for the current financial year, with long-term resourcing commitments beyond that. For a government research directorate, that is a substantial operational budget. It funds the pre-deployment evaluation programme, the grants, and a staff base drawn from some of the most technically capable people working in AI globally.

What this means for vendors: Any major AI company wanting market access in the UK, or seeking credibility with UK public sector clients, now has a strong incentive to cooperate with AISI testing. The companies that do not cooperate will face questions they cannot easily answer during procurement.

Why This Matters for UK Organisations

AISI does not regulate you directly. But its work has three practical consequences for any UK business or council deploying AI.

First, vendor scrutiny will increase. If your AI vendor has not engaged with AISI evaluation processes for their frontier models, that is a legitimate question to ask in procurement. It is not a disqualifier on its own, but it is a signal about how seriously the vendor takes independent safety and security assessment.

Second, the governance baseline is rising. AISI research feeds into ISO standards, sector-specific regulatory guidance, and the ICO’s expectations around automated decision-making. What will be expected of you in eighteen months is further from today’s practice than most organisations realise, and the window to close that gap is shorter than it looks.

Third, the threat taxonomy AISI uses is becoming standard. AISI focuses on misuse for weapons development, cyberattacks, and large-scale fraud. If your AI tools could plausibly assist with any of these — even indirectly — your risk register and acceptable use policy need to address that explicitly.

What the Rename Did Not Change

Removing “Safety” from the name did not make the ethical questions disappear. Bias in hiring algorithms, discriminatory outputs in public-facing services, and lack of explainability in automated decisions are still live issues under UK GDPR and the Equality Act. They are simply no longer AISI’s primary focus.

That makes your internal governance more important, not less. The organisation responsible for your AI ethics is now you.

What to Do With This

  • Check whether your key AI vendors have participated in any AISI evaluation or published third-party safety assessments
  • Review your AI risk register to confirm it includes the threat categories AISI focuses on: misuse for harmful content generation, cyberattack enablement, and fraud facilitation
  • Ensure your AI Governance Policy is not waiting for regulation to catch up — AISI output is already informing what auditors and public sector procurement teams will ask
  • If you operate in financial services, healthcare, or local government, treat AISI publications as early signals of where your sector regulator is heading
  • Do not assume that because AISI does not regulate you, its work is irrelevant — the organisations that track it now will be better prepared when the regulatory baseline firms up

AISI is not the only body shaping AI governance in the UK. But it is the one with the most direct access to frontier models, the most technically capable staff, and the closest relationship to the government officials writing future AI policy. Understanding it clearly is not optional for anyone taking AI governance seriously.

STAY AHEAD OF THE STANDARD

I help boards understand what governance changes are coming — before they arrive.

Whether you need a governance review, a risk register, or a board briefing on the UK regulatory direction, I can get you ready.

Talk to Simon
