The UK AI Security Institute (AISI) is the world’s first state-backed organisation dedicated to understanding and mitigating the risks of advanced AI. It sits within the UK government’s Department for Science, Innovation and Technology. Its stated mission is to build the world’s leading understanding of advanced AI risks and deliver that intelligence to governments so they can keep the public safe.

It is not a think tank. It is not an advisory panel. It is a functioning research operation with the authority of government and the structure of a technology organisation.

How It Started

In 2023, then-Prime Minister Rishi Sunak expressed his intention to make the UK the intellectual and geographical home of global AI safety regulation, and unveiled plans for an AI Safety Summit.

That summit took place at Bletchley Park in November 2023. It was a pivotal moment. World leaders and major AI companies agreed that advanced AI posed real risks and that independent testing was essential. At that summit, the United Kingdom and the United States both created their own AI Safety Institutes. The UK’s evolved from the earlier Frontier AI Taskforce, originally launched as the Foundation Model Taskforce.

Sunak secured commitments from OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and Anthropic’s Dario Amodei to provide pre-release access to their frontier AI models for safety evaluation. That was a significant move. For the first time, a government body had direct access to the most advanced AI systems before public release.

In February 2025, the organisation was renamed. The UK rebranded its AI Safety Institute as the “AI Security Institute,” with Technology Secretary Peter Kyle describing the change as “the logical next step” in the UK’s approach to responsible AI development.

The new name reflects a sharper focus on the risks that matter most to national security.

Who Is Behind It

The organisation was designed to operate like a startup inside government, combining state authority with private sector expertise and speed.

The leadership team is serious. Interim Director Adam Beaumont was previously GCHQ’s Chief AI Officer. Chief Technology Officer Jade Leung also serves as the Prime Minister’s AI Advisor and previously led the Governance team at OpenAI. Chief Scientist Geoffrey Irving and Research Director Chris Summerfield have collectively led teams at OpenAI, Google DeepMind, and the University of Oxford. Chair Ian Hogarth brings experience as a leading tech investor and entrepreneur.

The advisory board includes national security and machine learning leaders, among them Yoshua Bengio, one of the most cited researchers in the field. Over 100 technical staff bring experience from leading industry, academic, and nonprofit labs.

This is not a committee of generalists. It is a specialist operation at the intersection of AI research, national security, and government policy.

What It Actually Does

The AISI runs evaluations on frontier AI models before they are released to the public. The technical team has now tested more than 30 of the world’s most advanced models. AI developers continue to work with the institute because its evaluations are rigorous, reproducible and grounded in real-world risk.
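The institute has also open-sourced Inspect, its framework for building and running these evaluations. For a rough flavour of what an automated test looks like, here is a minimal sketch using the inspect_ai Python package; the toy question, the exact-match scorer, and the example model name are illustrative assumptions, not an actual AISI evaluation.

```python
# Minimal sketch of an evaluation built on AISI's open-source Inspect
# framework (pip install inspect-ai). The single sample and exact-match
# scorer are toy illustrations; real suites are far larger and threat-focused.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import exact
from inspect_ai.solver import generate

@task
def toy_capability_check():
    return Task(
        # One hand-written question; real evaluations load thousands of samples.
        dataset=[
            Sample(
                input="What is 17 * 24? Reply with just the number.",
                target="408",
            )
        ],
        solver=[generate()],  # ask the model under test for a completion
        scorer=exact(),       # grade by exact match against the target
    )

# Run from the command line against the model under test, e.g.:
#   inspect eval toy_eval.py --model openai/gpt-4o
```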

The work covers specific threat categories. The team focuses on serious AI risks with security implications, including how AI can be used to develop chemical and biological weapons, carry out cyber-attacks, or enable crimes such as fraud and child sexual abuse.

End-to-end biosecurity red-teaming exercises with OpenAI and Anthropic revealed dozens of vulnerabilities, including new universal jailbreaks.

In 2025, the institute published its first paper in the journal Science. It was a large-scale study with over 76,000 participants, exploring the levers of AI-enabled persuasion. That is peer-reviewed, published research at the highest level.

The institute also launched significant funding programmes. These included a £15m Alignment Project, described as one of the largest global alignment research efforts; an £8m Systemic Safety Grants programme; and a £5m Challenge Fund aimed at rapid advances on urgent questions.

The Governance Shift That Matters

The name change from “Safety” to “Security” was not cosmetic. The government was explicit that the institute would not focus on ethical issues such as algorithmic bias or freedom of speech, but on the most serious risks the technology poses to security.

The institute will partner across government, including with the Defence Science and Technology Laboratory, the Laboratory for AI Security Research, and the national security community, building on the expertise of the National Cyber Security Centre. It also launched a new criminal misuse team, working jointly with the Home Office to conduct research on crime and security issues that threaten British citizens.

This is AI governance operating at the national security level, not at the level of corporate ethics statements.

Why This Matters for UK Businesses

If you are a UK business deploying AI, the AISI’s work directly affects your operating environment. The institute shapes policy, informs regulation, and sets the technical baseline for what safe AI deployment looks like in the UK.

The risks it identifies today become the governance requirements and liability frameworks of tomorrow.

Businesses that engage with this work now, understand its outputs, and build their AI governance strategies around its findings will be positioned ahead of the compliance curve. Those that ignore it will face it unprepared.

The AISI exists because AI poses real, measurable risks to national security, public safety, and institutional integrity. Understanding what it does, and why, is not optional for any serious board.


This blog was produced by AI-Si.com. Simon Steggles is a Fractional AI Director advising SMEs, councils, and executive teams across the Midlands on AI strategy, governance, and risk. If your board needs to understand what AI means for your organisation, get in touch.
