WHY YOUR ORGANISATION NEEDS THIS
Without a policy, one member of staff could create a serious compliance incident — and you’d have no defence.
Staff are already using AI tools, whether you know about it or not: ChatGPT, Copilot, Gemini. They are inputting client data, drafting contracts, and responding to complaints. Without a formal policy, your organisation has no documented rules to enforce, no audit trail, and a far weaker defence under UK GDPR.
Staff AI Acceptable Use Policy
This policy sets out the rules and expectations for all staff using artificial intelligence tools as part of their work. It is designed to be issued alongside AI literacy training and should be read before using any AI tool for work purposes.
Template Notice
Adapt this template to reference your specific approved AI tools, HR policies, and organisational context. Obtain sign-off from HR and your DPO before issuing to staff.
- Customise approved tool list
- Reference your HR policy
- HR & DPO sign-off required
1. Why This Policy Exists
AI tools offer significant productivity and quality benefits for your role. They also carry real risks if used without appropriate care — including data protection breaches, copyright infringement, reputational damage, and poor decision-making based on inaccurate AI outputs.
This policy does not exist to restrict your use of AI. It exists to ensure you use AI tools in ways that protect you, your colleagues, our clients, and the organisation — and that you get the most out of these tools safely and effectively.
What This Policy Covers
- What AI tools can be used for
- What is strictly prohibited
- Data you must never enter into AI
- How to verify AI outputs
- Reporting concerns & incidents
- Staff acknowledgement requirements
2. What AI Tools Can Be Used For
The following uses of approved AI tools are permitted without requiring additional authorisation:
- Drafting and editing documents using non-confidential information
- Summarising publicly available or internal-only documents
- Generating ideas, outlines, and first drafts for review
- Formatting, proofreading, and improving existing text
- Answering general knowledge questions for your own learning
- Creating meeting summaries from notes you have already anonymised
- Generating templates and frameworks for internal use
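The anonymisation step above (for meeting notes and similar text) can be sketched as a simple redaction pass. This is an illustrative sketch only: the patterns and placeholder labels are assumptions, it will miss personal names and many identifier formats, and it is no substitute for a DPO-approved anonymisation process.

```python
import re

# Illustrative redaction sketch; patterns are assumptions, not an
# approved anonymisation tool. Regexes cannot catch personal names or
# every identifier format, so a human check is still required.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_PHONE": re.compile(r"\b0\d{2,4}[ ]?\d{3,4}[ ]?\d{3,4}\b"),
    "NI_NUMBER": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace each match with a [LABEL] placeholder before AI use."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

notes = "Action: email jane.doe@example.com or call 020 7946 0958."
print(redact(notes))  # Action: email [EMAIL] or call [UK_PHONE].
```

Even after a pass like this, review the text manually: only a human reader can spot context that still identifies an individual.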
What Is Strictly Prohibited
The following actions are prohibited and may result in disciplinary action:
- Entering client names, contact details, or personal data into any AI tool
- Sharing confidential organisational strategies or financial data
- Using AI to make final HR decisions (recruitment, performance, dismissal)
- Publishing AI-generated content without human review and approval
- Presenting AI-generated work as your own expert opinion without verification
- Using unapproved AI tools for any work purpose
- Using AI to create deceptive, misleading, or harmful content
3. What You Must Never Enter Into AI Tools
The following categories of information must never be entered into AI tools unless the tool has been specifically approved for that data type and a Data Protection Impact Assessment (DPIA) has been completed:
Personal Data
Names, addresses, email addresses, phone numbers, national insurance numbers, dates of birth, medical information, or any information that could identify a living individual.
Client Information
Client company names, contacts, contract details, commercial terms, pricing, strategic plans, or any information provided in confidence by clients.
Financial Data
Bank account details, payroll information, financial forecasts, unreported trading results, or commercially sensitive financial information.
Legal Matters
Details of ongoing litigation, legal advice received, settlement terms, regulatory investigations, or commercially sensitive legal strategy.
Staff Information
Employee performance records, disciplinary matters, salary information, health conditions, or any HR data about named individuals.
Passwords & Access Credentials
System passwords, API keys, access tokens, security codes, or any authentication credentials for organisational systems.
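To reduce accidental credential leakage, some teams add a pre-submission check to internal tooling that sits between staff and an AI tool. The sketch below is a hedged illustration: the patterns are assumptions covering a few common secret shapes, not an exhaustive or organisation-approved scanner.

```python
import re

# Hedged sketch of a pre-submission secret check. Patterns are
# illustrative assumptions covering a few common credential shapes;
# they are not exhaustive and not an approved scanning tool.
SECRET_PATTERNS = [
    # key = value style assignments for common credential words
    re.compile(r"(?i)\b(api[_-]?key|token|password|secret)\s*[:=]\s*\S+"),
    # AWS access key ID shape
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # JWT shape: three base64url segments separated by dots
    re.compile(r"\beyJ[\w-]+\.[\w-]+\.[\w-]+\b"),
]

def contains_secret(text: str) -> bool:
    """Return True if the text looks like it contains a credential."""
    return any(p.search(text) for p in SECRET_PATTERNS)

print(contains_secret("Summarise: api_key = sk-live-abc123"))   # True
print(contains_secret("Minutes from Tuesday's planning call"))  # False
```

A check like this catches obvious mistakes; it does not replace the rule itself, which covers credentials in any form.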
4. Verifying AI Outputs
AI tools can produce plausible-sounding information that is factually incorrect. This is known as “hallucination.” You are responsible for verifying any AI-generated content before using it.
Always Verify
Statistics, research findings, legal or regulatory requirements, technical specifications, dates, and any factual claims in AI-generated content.
Do Not Use Without Review
AI-generated legal or medical advice, financial calculations, compliance statements, or any content that will be published externally or presented to clients.
Document Your Verification
For regulated activities, document that AI was used and that outputs were verified by a qualified professional. Keep records as you would for any other professional process.
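For the documentation step above, a minimal machine-readable record can make later audits easier. The field names below are assumptions for illustration, not a prescribed format; adapt them to your existing record-keeping.

```python
import json
from datetime import date

# Illustrative verification record; field names are assumptions,
# not a prescribed or regulator-mandated format.
record = {
    "date": date(2025, 1, 15).isoformat(),
    "tool": "name of the approved AI tool used",
    "use": "first draft of a client-facing summary",
    "verified_by": "qualified professional (name and role)",
    "checks": [
        "statistics traced to original sources",
        "regulatory references confirmed against current guidance",
    ],
    "outcome": "approved for external use",
}
print(json.dumps(record, indent=2))
```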
5. Reporting AI Concerns & Incidents
If you encounter any of the following, you must report it to your AI Champion or line manager immediately:
| Situation | Who to Contact | Timescale |
|---|---|---|
| You accidentally entered personal or confidential data into an AI tool | Your manager + Data Protection Officer | Immediately (within 1 hour) |
| An AI tool produced output that you believe is biased or discriminatory | Your AI Champion + HR | Within 24 hours |
| AI-generated content was used in a client deliverable without proper verification | Your manager | Within 24 hours |
| A colleague is using an unapproved AI tool or using AI in a prohibited way | Your AI Champion or manager | Within 48 hours |
| You are unsure whether a planned use of AI is permitted under this policy | Your AI Champion | Before proceeding |
6. Staff Acknowledgement
All staff with access to approved AI tools are required to sign an acknowledgement confirming that they have read, understood, and will comply with this policy. The acknowledgement should be renewed annually and after any material policy update.
- Required for all staff with AI tool access
- Must be renewed annually
- Renewed after any material policy update
- Records retained by HR
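The annual renewal rule can be tracked with a simple date check in whatever system HR uses. A minimal sketch, assuming a 365-day window stands in for the policy's "annually":

```python
from datetime import date, timedelta

# Minimal sketch: flags acknowledgements signed more than 12 months
# ago as due for renewal. The 365-day window is an assumption standing
# in for the policy's "annually"; material policy updates also trigger
# renewal and are not modelled here.
def renewal_due(signed_on: date, today: date) -> bool:
    return today - signed_on > timedelta(days=365)

print(renewal_due(date(2023, 1, 10), date(2024, 6, 1)))  # True
print(renewal_due(date(2024, 5, 1), date(2024, 6, 1)))   # False
```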
Sign Below
“I confirm that I have read and understood the [Organisation Name] Staff AI Acceptable Use Policy. I understand my responsibilities regarding the safe, ethical, and compliant use of AI tools in my work. I agree to comply with this policy and to report any concerns, incidents, or breaches in accordance with the reporting requirements set out above.”
HOW TO IMPLEMENT THIS POLICY
5 Steps to Successful Policy Implementation
1. Customise: Replace [Organisation Name] throughout. Review each section against your existing IT and HR policies, and add any sector-specific requirements (e.g. NHS DSP Toolkit, council procurement rules).
2. Legal review: The policy intersects with employment law, data protection, and IT security, so have all three reviewed before issuing. Allow 5 working days for the review cycle.
3. Communicate: Send the policy alongside a plain-English briefing note and host a 30-minute all-hands Q&A. Every employee must acknowledge receipt in writing (email confirmation is sufficient).
4. Train: A policy without training is just paper. Issue it alongside AI literacy training covering what staff need to know to use AI safely, compliantly, and productively, with staff-level, champion-level, and board briefings as appropriate.
5. Review: AI evolves fast. Set a calendar reminder to review this policy every 12 months, or immediately after any AI-related incident, and update the version number and redistribution date each time.