AI and Automation Policy

An AI and automation policy governs how organisations deploy artificial intelligence tools, machine learning services, and automated decision systems. The policy establishes boundaries between permitted and prohibited uses, defines data handling requirements when using AI services, and creates accountability for automated decisions affecting beneficiaries, staff, and operations. Organisations adapt this framework based on their risk tolerance, operational context, and the sensitivity of data they handle.

Artificial intelligence
Systems that perform tasks requiring human-like reasoning, learning, or decision-making, including large language models, machine learning classifiers, computer vision systems, and predictive analytics tools.
Generative AI
AI systems that create new content including text, images, code, audio, or video based on training data and user prompts. Examples include ChatGPT, Claude, Midjourney, and GitHub Copilot.
Automation
Rule-based systems that execute predefined actions without human intervention, including robotic process automation, workflow engines, scheduled scripts, and triggered integrations.
Automated decision system
Any system that makes or substantially informs decisions affecting individuals without case-by-case human review, including eligibility determinations, risk scoring, and resource allocation algorithms.
AI provider
Third-party organisation offering AI capabilities through APIs, platforms, or embedded features, including both dedicated AI vendors and AI features within broader software products.

Policy scope

This policy applies to all AI and automation tools used for organisational purposes, regardless of whether the organisation procures, builds, or accesses them through free tiers. Coverage extends to dedicated AI platforms, AI features embedded within existing software, browser extensions with AI capabilities, and personal AI tools used for work purposes. A staff member drafting work communications with a free AI chatbot is covered by this policy just as fully as a user of an enterprise AI platform.

The policy binds all authorised users including employees, contractors, volunteers, and partner staff with access to organisational systems or data. Geographic location does not alter applicability; field staff in any location hold identical obligations to headquarters personnel. The policy applies from the point of access provisioning until access termination, with certain obligations regarding confidentiality persisting beyond the employment relationship.

Automation systems within scope include robotic process automation platforms, integration tools connecting multiple systems, scheduled data processing jobs, and any automated workflows that execute without real-time human oversight. Simple conveniences like email filters and calendar automation fall outside scope unless they process sensitive data or make consequential decisions.

Approved uses

AI and automation tools serve legitimate organisational purposes when deployed appropriately. Approved uses enhance productivity, improve service quality, and extend organisational capacity without creating unacceptable risks to data protection, accuracy, or accountability.

General productivity applications

Staff may use approved AI tools for drafting and editing written content, summarising documents, translating materials, generating ideas, and answering general knowledge questions. These applications accelerate routine work while keeping humans accountable for outputs. A programme officer drafting a donor report may use AI to generate initial text, restructure paragraphs, and check grammar, provided they review and take responsibility for the final content.

Code generation and technical assistance constitute approved uses for staff with relevant responsibilities. Developers may use AI coding assistants to generate code snippets, explain error messages, suggest optimisations, and document functions. Generated code requires review before deployment; AI assistants accelerate development without replacing developer judgement about security, correctness, and appropriateness.

Research and analysis applications include using AI to synthesise information from multiple sources, identify patterns in qualitative data, and generate preliminary analyses. Staff must verify AI-generated insights against primary sources and apply professional judgement before acting on AI-assisted analysis. AI serves as a research accelerator, not a replacement for domain expertise.

Conditional uses requiring approval

Certain AI applications require explicit approval before deployment due to elevated risk profiles. Approval authority rests with the designated AI governance function, which evaluates proposed uses against risk criteria and may impose conditions on approved deployments.

Processing personal data through AI systems requires approval regardless of the AI application. The approval process examines data protection implications, assesses provider data handling practices, and determines whether a data protection impact assessment is required. Approval conditions specify permitted data categories, retention limitations, and required safeguards.

Customer-facing or beneficiary-facing AI deployments require approval before launch. Chatbots, automated response systems, and AI-generated communications reaching external parties undergo review for accuracy, appropriateness, and disclosure compliance. Approval conditions address quality monitoring, escalation paths to human agents, and transparency requirements.

AI applications informing decisions about individuals require approval and enhanced oversight. Using AI to assess beneficiary eligibility, prioritise cases, allocate resources, or evaluate staff performance triggers additional requirements including bias assessment, appeal mechanisms, and ongoing monitoring. Such applications never operate as fully automated decision systems; human review remains mandatory for consequential determinations.

| Use category | Approval requirement | Conditions |
| --- | --- | --- |
| General productivity (drafting, summarising, translation) | Pre-approved for listed tools | No sensitive data; human review of outputs |
| Code generation | Pre-approved for listed tools | Security review before deployment; no credentials in prompts |
| Internal data analysis | Department head approval | Anonymised data preferred; no protection-sensitive data |
| Personal data processing | AI governance approval | DPIA may be required; specified safeguards |
| Beneficiary-facing applications | AI governance approval | Quality monitoring; human escalation; disclosure |
| Decision support for individuals | AI governance approval | Bias assessment; appeal mechanism; human review mandatory |
| Procurement over £10,000 | AI governance approval | Vendor assessment; contract review |

Prohibited uses

Certain AI applications are prohibited regardless of potential benefits, available safeguards, or approval authority. These prohibitions reflect non-negotiable boundaries protecting beneficiaries, staff, data subjects, and organisational integrity.

Fully automated consequential decisions

AI systems must not make final decisions affecting individual rights, benefits, employment, or wellbeing without meaningful human review. A system may inform, recommend, or flag cases for attention, but a human must review and authorise consequential determinations. This prohibition applies regardless of the AI system’s accuracy claims; accountability requires human judgement in decisions affecting people’s lives.

Prohibited examples include automatic rejection of beneficiary applications, algorithmic termination of staff employment, and unsupervised allocation of humanitarian assistance. Permitted alternatives involve AI pre-screening with human review of all rejections, AI-assisted performance analysis with human employment decisions, and AI-optimised allocation recommendations with human approval.

Biometric and surveillance applications

AI-powered facial recognition, emotion detection, and behavioural analysis are prohibited except where explicitly mandated by law enforcement with proper legal authority. The organisation does not deploy such technologies for access control, performance monitoring, or beneficiary identification. This prohibition extends to third-party services embedding such capabilities and to procurement of systems with latent surveillance features.

Biometric data collection for AI training purposes is prohibited. Staff and beneficiary biometric information must not feed machine learning systems regardless of anonymisation claims. The prohibition encompasses facial images, voice recordings, fingerprints, and behavioural patterns.

Deceptive applications

AI must not generate content designed to deceive recipients about its origin or nature. Deepfakes, synthetic media impersonating real individuals, and AI-generated content falsely attributed to human authors violate this policy. Legitimate AI-assisted content creation remains permitted when appropriately disclosed; prohibition targets intentional deception rather than AI augmentation of human work.

Social engineering attacks using AI-generated content are prohibited regardless of purported justification. Security testing involving AI-generated phishing or manipulation requires explicit approval through security governance channels with appropriate ethical safeguards.

High-risk prohibited applications

| Application | Prohibition rationale | Alternatives |
| --- | --- | --- |
| Fully automated eligibility decisions | Accountability requires human judgement | AI recommendation with human review |
| Facial recognition for identification | Privacy and surveillance risks | Traditional identity verification |
| Emotion detection and sentiment analysis of individuals | Unreliable technology; dignity concerns | Direct communication and feedback mechanisms |
| Social media monitoring of staff | Privacy violation; chilling effect | Clear performance management processes |
| Predictive profiling of beneficiaries | Discrimination risk; dignity concerns | Needs-based assessment with human judgement |
| AI-generated evidence or documentation | Integrity of records | Human-authored documentation with AI editing assistance |
| Autonomous weapons or targeting systems | Ethical prohibition | Not applicable |

Data handling requirements

AI tools process data through mechanisms that differ fundamentally from traditional software. Understanding these mechanisms enables appropriate data handling decisions. Most AI services transmit user inputs to remote servers for processing, with varying retention, training, and sharing practices. Data shared with AI services may persist beyond the immediate interaction, inform model improvements, or become accessible to provider personnel.

Data classification for AI processing

Data classification determines which information may be shared with AI services under what conditions. The classification scheme aligns with broader organisational data classification while addressing AI-specific risks including training data incorporation and prompt injection vulnerabilities.

| Classification | AI processing permitted | Conditions |
| --- | --- | --- |
| Public | Yes | No restrictions |
| Internal | Yes, approved tools only | Enterprise agreements with data protection terms |
| Confidential | Limited, with approval | Specified tools only; no training on data; audit logging |
| Restricted | No | Prohibited for AI processing |
| Protection-sensitive | No | Prohibited for AI processing; no exceptions |
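
Where classification checks are automated, for example in an internal AI gateway, the table above can be encoded directly. The following Python sketch is illustrative only: the classification labels match this policy, but the tool names and the `check_ai_processing` helper are hypothetical, not approved tooling.

```python
# Illustrative encoding of the classification table above.
# Tool names and this helper are hypothetical examples.

APPROVED_ENTERPRISE_TOOLS = {"enterprise-llm"}           # placeholder names
CONFIDENTIAL_APPROVED_TOOLS = {"enterprise-llm-logged"}  # tools with audit logging

def check_ai_processing(classification: str, tool: str) -> tuple[bool, str]:
    """Return (permitted, reason) for sending data of a classification to a tool."""
    if classification == "public":
        return True, "No restrictions"
    if classification == "internal":
        if tool in APPROVED_ENTERPRISE_TOOLS:
            return True, "Approved tool under enterprise agreement"
        return False, "Internal data requires an approved enterprise tool"
    if classification == "confidential":
        if tool in CONFIDENTIAL_APPROVED_TOOLS:
            return True, "Permitted with approval; audit logging required"
        return False, "Confidential data requires explicit approval and a specified tool"
    # Restricted and protection-sensitive data are never permitted.
    return False, f"{classification} data is prohibited for AI processing"
```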

Public information carries no restrictions on AI processing. Published reports, website content, and materials intended for public distribution may be processed through any AI service without special precautions.

Internal information may be processed through approved AI tools operating under enterprise agreements that prohibit training on customer data. Staff may use such tools to analyse internal documents, draft internal communications, and process operational data. Unapproved tools, free tiers of commercial services, and open-source models without documented data handling practices remain prohibited for internal data.

Confidential information requires explicit approval for AI processing and may only use specified tools with enhanced protections. Processing requires audit logging, and outputs must be reviewed for inadvertent disclosure before sharing. Examples include strategic plans, financial projections, and unpublished research.

Restricted and protection-sensitive data must never be processed through AI services regardless of provider assurances. This absolute prohibition recognises that AI data handling practices remain opaque and that re-identification risks in AI contexts are poorly understood. Personal data of vulnerable individuals, safeguarding case information, and data subject to legal privilege fall within this prohibition.

Prompt hygiene

Information entered into AI systems through prompts faces exposure risks distinct from data stored in traditional systems. Prompt content may be logged, reviewed by provider staff, used for model training, or leaked through model outputs to other users. Treating prompts as potentially public communications guides appropriate use.

Staff must not include credentials, API keys, or authentication tokens in AI prompts. A developer seeking help with an authentication error must sanitise code examples before sharing. Staff must not include personal data in prompts except through approved channels with appropriate safeguards. A staff member may ask an AI to help draft a letter but must not include the recipient’s actual personal details in the prompt.
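
Sanitisation can be partly automated before a snippet is pasted into a prompt. The sketch below is a minimal illustration using two common secret patterns; the patterns and the `scrub_prompt` helper are assumptions, not a complete or approved redaction tool.

```python
import re

# Minimal, illustrative secret patterns; a real redaction tool would need far
# broader coverage (provider-specific key formats, personal data, and so on).
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password|secret)\s*[=:]\s*\S+"),
    re.compile(r"Bearer\s+[A-Za-z0-9\-._~+/]+=*"),
]

def scrub_prompt(text: str) -> str:
    """Replace likely credentials with a placeholder before sharing with an AI tool."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(scrub_prompt('headers = {"Authorization": "Bearer abc123.def456"}'))
# headers = {"Authorization": "[REDACTED]"}
```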

Confidential business information requires careful handling in prompts. When processing confidential documents through approved AI tools, staff should use summaries rather than full text where possible, remove identifying details not essential to the task, and verify that the specific tool is approved for confidential data processing.

Provider data practices

AI provider data handling practices vary substantially and change over time. The organisation evaluates provider practices during procurement and monitors for changes that affect risk profiles. Key considerations include data retention periods, training data usage, human review of prompts, data location, and subprocessor arrangements.

Enterprise AI agreements include data protection terms prohibiting training on organisational data and limiting retention. Free tiers and consumer versions of AI services lack these protections and remain prohibited for any non-public data. Staff must verify they are accessing enterprise instances of approved tools rather than consumer versions with different data handling.

Procurement and vetting

AI procurement follows standard technology procurement processes with additional evaluation criteria addressing AI-specific risks. The vetting framework applies to dedicated AI procurements, AI features in broader software purchases, and free tools proposed for organisational use.

Vetting criteria

Vetting examines AI tools across technical, ethical, legal, and operational dimensions. Evaluation depth scales with risk: low-risk productivity tools require lighter assessment than high-risk decision support systems.

| Criterion | Assessment questions | Evidence sources |
| --- | --- | --- |
| Data handling | Where is data processed? How long retained? Used for training? Accessible to provider staff? | Privacy policy; DPA; technical documentation |
| Security | Encryption in transit and at rest? Access controls? Security certifications? Vulnerability management? | Security documentation; certifications; penetration test results |
| Accuracy and reliability | What accuracy levels demonstrated? On what populations? How are errors handled? | Technical papers; benchmark results; customer references |
| Bias and fairness | What bias testing performed? On what populations? What mitigation measures? | Model cards; audit reports; fairness assessments |
| Transparency | Is model behaviour explainable? Can decisions be audited? Are limitations documented? | Technical documentation; explainability features |
| Vendor stability | Company financial stability? Acquisition risk? Contingency for service termination? | Financial statements; market position; exit provisions |
| Regulatory compliance | GDPR compliance? AI Act readiness? Sector-specific requirements? | Compliance documentation; certifications; legal review |

Approved tool list

The organisation maintains a list of AI tools approved for use, specifying permitted use cases and any conditions for each tool. The list distinguishes between tools approved for general use and those requiring additional authorisation for specific applications.

Staff must select from approved tools for organisational AI use. Requests to add tools to the approved list follow standard procurement processes with AI-specific vetting. Emergency use of unapproved tools requires written approval from the AI governance function and triggers expedited vetting.

Shadow AI, meaning use of unapproved AI tools for work purposes, violates this policy regardless of whether data exposure occurs. The proliferation of AI capabilities embedded in consumer tools creates ongoing shadow AI risk; staff must verify approval status before using AI features in any tool.

Ethical considerations

AI deployment carries ethical implications extending beyond legal compliance. The organisation commits to ethical AI use that respects human dignity, avoids harm, and maintains accountability. Ethical considerations inform both policy boundaries and case-by-case decisions about AI deployment.

Bias and fairness

AI systems reflect and can amplify biases present in training data and design choices. Systems trained on historical data may perpetuate historical discrimination. Systems designed without diverse input may perform poorly for underrepresented populations. These risks require active assessment and mitigation rather than assumptions of algorithmic neutrality.

AI applications affecting individuals require bias assessment before deployment. Assessment examines potential for discriminatory outcomes across protected characteristics and vulnerable populations relevant to the organisation’s work. Where bias risks exist, mitigation measures must be implemented and monitored. Bias risks that cannot be mitigated preclude deployment regardless of other benefits.

Ongoing monitoring tracks AI system performance across population subgroups. Outcome disparities trigger investigation and remediation. Systems demonstrating persistent bias despite mitigation face withdrawal from service.
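
One simple monitoring approach is to compare outcome rates across subgroups and flag divergence beyond a tolerance. The sketch below is a hypothetical illustration: the data shape, helper names, and 0.1 threshold are assumptions, and real fairness monitoring would use more robust statistical tests.

```python
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (subgroup, approved) pairs. Returns approval rate per subgroup."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(rates: dict[str, float], tolerance: float = 0.1) -> bool:
    """Flag for investigation when subgroup rates diverge beyond the tolerance."""
    return max(rates.values()) - min(rates.values()) > tolerance

rates = approval_rates([("A", True), ("A", True), ("B", True), ("B", False)])
print(rates, flag_disparity(rates))  # {'A': 1.0, 'B': 0.5} True
```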

Human dignity

AI deployment must respect the dignity of individuals affected by AI-assisted decisions and interactions. Dignity considerations include autonomy, privacy, transparency, and the right to human contact. Efficiency gains from AI do not justify treating people as mere data subjects or denying them meaningful human engagement.

Beneficiaries and service users retain the right to human interaction for consequential matters regardless of AI availability. AI may handle routine enquiries and initial triage, but escalation paths to human staff must exist and be communicated. Individuals may request human review of any AI-assisted determination affecting them.

Staff dignity requires protection from invasive AI monitoring and from AI systems that undermine professional autonomy. Performance monitoring must not rely on AI analysis of behaviour, communications, or emotional state. AI recommendations must support rather than supplant professional judgement.

Environmental impact

AI systems, particularly large language models, consume substantial computational resources with associated environmental costs. A single large model training run can produce carbon emissions equivalent to hundreds of transatlantic flights. Inference operations create ongoing energy demands that scale with usage.

Environmental considerations factor into AI procurement and deployment decisions. Preference goes to efficient models adequate for the task over maximum-capability models. Unnecessary AI usage wastes resources; staff should apply AI where it adds genuine value rather than habitually routing all tasks through AI systems.

Transparency and disclosure

Transparency obligations require disclosure of AI use in specified contexts. Disclosure enables informed consent, maintains trust, and supports accountability. The appropriate disclosure mechanism varies by context and audience.

External disclosure requirements

Communications with beneficiaries generated substantially by AI require disclosure. A chatbot must identify itself as automated; an AI-drafted letter must note AI assistance. Disclosure need not be prominent for routine communications but must be clear and accessible.

Published content created with substantial AI assistance requires acknowledgement. Reports, research outputs, and public communications disclose AI contributions in authorship notes or methodology sections. Minor AI editing assistance (grammar checking, translation) does not require disclosure.

Automated decision systems affecting individuals require transparency about the role of automation and the availability of human review. Beneficiaries subject to AI-assisted eligibility determinations receive explanation of how AI contributes to decisions and how to request human review.

| Context | Disclosure requirement | Mechanism |
| --- | --- | --- |
| Chatbots and automated responders | Immediate disclosure of automated nature | Opening message or persistent indicator |
| AI-drafted correspondence | Note AI assistance | Footer or signature block |
| Published content with substantial AI generation | Acknowledge AI contribution | Authorship note or methodology section |
| Automated decision support | Explain AI role and human review availability | Decision notification and appeals information |
| Marketing and fundraising | Disclose AI-generated content | Appropriate placement in materials |
| AI-generated images or media | Identify as AI-generated | Caption or metadata |

Internal disclosure

Staff using AI assistance for significant deliverables should disclose AI contribution to supervisors and collaborators. This supports quality assurance, manages expectations about capabilities, and maintains honest attribution. Disclosure is particularly important for technical work, analysis, and recommendations where AI involvement affects reliability assessment.

AI use in hiring, performance evaluation, and other employment processes requires disclosure to affected staff. Candidates and employees must know when AI tools assess their applications or performance, what role AI plays in decisions, and how to raise concerns.

Automation governance

Automation systems outside the AI category require governance proportionate to their complexity and impact. Simple workflow automation carries different risks than sophisticated robotic process automation handling sensitive operations. The governance framework scales requirements to automation characteristics.

Automation categories

Automation ranges from simple triggered actions to complex multi-system orchestrations. Category assignment determines governance requirements including approval, documentation, testing, and monitoring standards.

Category one automation encompasses simple, single-system automations with limited scope. Email rules, calendar automation, and basic form processing fall within this category. Users may implement category one automation without formal approval, though they remain responsible for appropriate configuration.

Category two automation involves cross-system integrations and workflow automation affecting business processes. Integration platform workflows, automated reporting, and scheduled data processing require department head approval, documentation of logic and data flows, and testing before deployment.

Category three automation includes robotic process automation, AI-augmented automation, and any automation affecting sensitive data or consequential decisions. These require IT governance approval, formal development and testing processes, and ongoing monitoring. Production deployment follows change management procedures.
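
Where category assignment is scripted, the three tiers above can be expressed directly. The following sketch is an illustrative reading of the category definitions; the attribute names are hypothetical.

```python
def automation_category(cross_system: bool, sensitive_data: bool,
                        consequential: bool, uses_rpa_or_ai: bool) -> int:
    """Assign a governance category per the policy's three tiers (illustrative)."""
    if uses_rpa_or_ai or sensitive_data or consequential:
        return 3  # IT governance approval, formal testing, ongoing monitoring
    if cross_system:
        return 2  # Department head approval, documentation, pre-deployment testing
    return 1      # Single-system convenience automation, no formal approval

print(automation_category(cross_system=True, sensitive_data=False,
                          consequential=False, uses_rpa_or_ai=False))  # 2
```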

Documentation requirements

All category two and three automations require documentation enabling maintenance, troubleshooting, and transfer of responsibility. Documentation includes purpose and business justification, data inputs and outputs, logic and decision rules, exception handling, and responsible owner.

Documentation must remain current; changes to automation require corresponding documentation updates. Undocumented automations identified through audit receive remediation deadlines; those still undocumented after the specified timeframe face deactivation.
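
A structured record makes the required fields auditable. The sketch below is one possible shape, not a mandated schema; the field names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class AutomationRecord:
    """Illustrative documentation record covering the fields this policy requires."""
    name: str
    purpose: str                  # purpose and business justification
    owner: str                    # responsible owner
    category: int                 # governance category (2 or 3)
    inputs: list[str] = field(default_factory=list)   # data inputs
    outputs: list[str] = field(default_factory=list)  # data outputs
    logic_summary: str = ""       # logic and decision rules
    exception_handling: str = ""  # what happens on failure

record = AutomationRecord(
    name="monthly-donor-report",
    purpose="Generate and distribute monthly donor summary",
    owner="finance-team",
    category=2,
    inputs=["CRM export"],
    outputs=["PDF report to shared drive"],
)
```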

Monitoring and maintenance

Automated systems require monitoring appropriate to their criticality. At minimum, automation owners review execution logs monthly to identify errors, unexpected behaviours, and drift from intended operation. Category three automations require real-time monitoring with alerting for failures and anomalies.
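
For the monthly log review, even a small script can surface error rates worth investigating. This sketch assumes a simple line-based log with an ERROR marker; the log path, format, and 5% threshold are assumptions, not prescribed values.

```python
def error_rate(log_lines: list[str]) -> float:
    """Fraction of log lines recording an error (assumes one execution per line)."""
    if not log_lines:
        return 0.0
    errors = sum(1 for line in log_lines if "ERROR" in line)
    return errors / len(log_lines)

with open("automation.log") as f:  # hypothetical log path
    rate = error_rate(f.readlines())
if rate > 0.05:  # illustrative threshold for investigation
    print(f"Error rate {rate:.1%} exceeds threshold; investigate before next run")
```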

Automation maintenance responsibilities include monitoring, error investigation, updating for environment changes, and eventual decommissioning. Ownership must be assigned and maintained; automations without assigned owners face review and potential deactivation. Staff transitions require explicit automation handover.

Testing standards

Automation testing requirements scale with category and risk. Category one automation requires informal verification that the automation performs as expected. Category two requires documented testing covering normal operation, edge cases, and error handling. Category three requires formal test plans, user acceptance testing, and regression testing for changes.

Test environments should mirror production configuration to the extent practical. Testing with production data requires data protection safeguards; synthetic or anonymised data is preferred where adequate for testing purposes.
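
Synthetic records can often be generated with the standard library alone, keeping production data out of test environments. The record shape below is hypothetical, chosen only to illustrate realistic structure without real personal data.

```python
import random
import uuid

def synthetic_beneficiary() -> dict:
    """Generate a fake record with realistic shape but no real personal data."""
    return {
        "id": str(uuid.uuid4()),
        "age": random.randint(18, 90),
        "region": random.choice(["north", "south", "east", "west"]),
        "enrolled": random.random() < 0.6,
    }

test_data = [synthetic_beneficiary() for _ in range(100)]
```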

Policy review

AI and automation capabilities evolve rapidly, outpacing traditional policy review cycles. This policy undergoes annual review with additional reviews triggered by significant developments in AI capabilities, regulations, or organisational AI adoption.

The AI governance function monitors developments affecting policy relevance. Emerging capabilities, new risks, regulatory changes, and incident learnings inform policy updates. Staff may propose policy amendments through the governance function at any time.

Approved tool lists require more frequent updates than the policy itself. Tool additions, removals, and condition changes may occur outside full policy review cycles following appropriate vetting and approval.
