Build a Responsible AI Policy
Create a tailored policy that ensures your AI adoption is ethical, transparent and aligned with best practice.
Welcome to the Responsible AI Policy Builder!
This tool is designed to help you develop a customised responsible AI policy for your organisation by answering a few guided questions and following the instructions on this page.
As you work through the sections, you’ll choose values, principles and compliance options that reflect your needs. Use the information on this page to create and refine your own policy in a separate document; the tool will guide you, but the final policy is yours to write and share.
What is a Responsible AI Policy?
This policy sets out how an organisation will use Artificial Intelligence in a safe, fair, and accountable way. It does three things:
Explains purpose
by outlining why AI is being used in the business and what goals it supports.
Sets boundaries
around what users of AI are allowed to do and what practices are prohibited.
Clarifies responsibilities
by identifying who is accountable for decisions, oversight and maintenance.
How to use this tool
This tool is made up of two types of content: standard components, which outline core areas of Responsible AI, and custom components, which help align the policy template to your business.
- Standard components can be directly inserted into your policy.
- Custom components offer you a choice to insert different approaches into your policy.
Let's help you write your policy...
Part 1: Introduction to the policy
This policy outlines a set of guidelines, principles and practices that you will use to manage the approach to AI in your organisation. We want you to make sure that your use of AI is safe, fair and transparent.
Using the policy, you will:
- Follow the law by helping the organisation stay in line with legislation and regulations.
- Remain accountable by outlining what will happen if something goes wrong. In that case, the policy helps establish what happened and how to fix it.
Part 2: Our values for AI
Our values shape how and why we use AI, ensuring every idea and application is rooted in purpose. They guide our decisions, define the way we work, and keep us accountable in building responsible AI. We don’t just talk about values — we live them and embed them in everything we do.
Now it’s your turn: choose 3 values for your organisation from the groupings below and add them to your new policy document.
Ideate with PURPOSE
We explore new AI ideas with clear intent. Every concept or solution we pursue should address real-world challenges and bring value to people, businesses and communities.
Act with RESPONSIBILITY
We own the impact of our AI use. Whether it’s something successful or a mistake, we stay accountable and ready to improve.
Build with INTEGRITY
We commit to honest, ethical AI development and use of AI. From data collection to deployment, we are transparent and accountable at every stage of our AI lifecycle.
Innovate with TRUST
We design and use AI that earns trust; by being reliable, understandable and respectful of its impact on others.
Use AI with HUMANS in mind
We use AI to support and empower people, not replace or harm them. Humans are at the core of how we use AI.
Part 3: Our principles for AI
While our values inspire our motivations, our principles translate those motivations into actionable standards: fundamental rules that provide a consistent framework for decision-making and behaviour. They give structure and direction to our ethical and professional conduct.
Our principles are the guiding rules that help inform our decision-making. We rely on our principles to:
- Provide a framework for making decisions and using AI.
- Provide direction for innovation projects and development workstreams.
- Provide alignment and consistency in our approach to the development, training, support and collaboration for AI.
Time for you to choose your principles...
You can choose from three sets of principles, select the one below that best fits your organisation and add it to your new policy document.
FAST Principles
Fair, Accountable, Sustainable and Transparent.
Fair
Fairness means AI must treat people equally and without bias. This includes making sure that the data used to train AI does not discriminate against certain groups when we are using AI. Fair AI helps us to build trust and avoids causing harm by being inclusive and respectful to everyone.
How will we use this principle?
- We will ensure that our datasets are inclusive and representative by implementing strong data governance processes and correcting imbalances that could lead to unfair outcomes.
- We will promote equity in access to AI systems by removing unreasonable technical, economic and language barriers that exclude users.
- We will provide clear feedback and grievance mechanisms so people can report perceived unfairness or harm from AI decisions, so we can take corrective actions.
- We will assess fairness at every decision point by explicitly asking “Who benefits and who might be harmed?” and will make this reflection a routine part of AI design and deployment, adoption, and governance.
- We will teach people to recognise and challenge bias by offering materials that help people to spot and address bias in data, algorithms, and outputs.
Accountable
Accountability means that humans, not machines, are responsible for what AI does. People must be able to explain how AI decisions and tasks are completed, and we must take responsibility when things go wrong. Accountable teams build trust and help us to follow rules and laws.
How will we use this principle?
- We will assign clear human oversight for every AI use case and system to ensure that qualified individuals are responsible for each stage of AI design, use, and outcomes.
- We will maintain complete audit trails to enable traceable decisions, data changes, and AI use across the AI lifecycle.
- We will define roles and responsibilities within the organisation to clarify who is accountable and who is informed throughout AI development, adoption, and use.
- We will train our team in ethical and legal responsibility to equip them to make informed, accountable choices when building or using AI technologies.
- We will implement explainability protocols to ensure that users and regulators can understand the rationale and reasoning behind our use of AI, including AI-generated decisions, designs, and procurement.
Sustainable
Sustainability focuses on AI’s impact on society and organisations, and on the long-term reliability of AI. AI should not just work today; it should benefit organisations and people over time, use resources wisely, and support an inclusive future. It ensures that AI programmes remain useful and maintainable over time.
How will we use this principle?
- We will assess the long-term societal impacts of our use of AI to ensure that our use of technology contributes positively to the economy, environment, and public good.
- We will include diverse stakeholders in AI design and procurement processes to ensure that the long-term needs of impacted communities are considered and addressed.
- We will design, maintain, procure, and adopt AI systems for long-term performance by establishing plans for continuous monitoring, review, and AI management.
- We will implement responsible decommissioning procedures to safely retire or replace AI systems or processes that are out-of-date, harmful, or no longer needed.
- We will monitor and manage technical debt to prevent future breakdowns or inefficiencies in our AI systems, processes, and infrastructure.
Transparent
Transparency means clearly showing how AI works, what it is for and how it’s used. It must be easy to understand what data is being used, how the AI makes decisions or completes tasks, and when it is being used. People trust AI when they can understand it. Transparency helps people make informed decisions and ensures that AI isn’t being used in secret or harmful ways.
How will we use this principle?
- We will notify users when AI is involved or being used by providing clear notices in accessible and understandable formats.
- We will offer user-friendly explanations of AI processes that are tailored to knowledge levels and the needs of various stakeholders.
- We will maintain a registry of our AI systems to provide oversight, enable internal governance, and support public or regulatory transparency where appropriate.
- We will document and record the AI lifecycle by keeping visible records of data sources, design decisions, model training, tuning, and how AI systems are procured, adopted, and used.
- We will make documentation a standard, not a suggestion by requiring artefacts like data sheets, assessments, and decision logs as part of every use case and governance process for AI.
RAFT Principles
Reliable, Accountable, Fair and Trustworthy.
Reliable
Reliable AI works the way it is supposed to – safely and consistently over time. We do not treat AI as a perfect solution, and we remain aware of its ability to make mistakes or change behaviour without warning. Reliable AI is AI we can depend on.
How do we use this principle?
- We will monitor AI behaviour over time through continuous performance tracking to detect degradation, drift, or unintended changes in outputs.
- We will acknowledge and communicate AI limitations by being transparent about what AI can and cannot do, and how confident it is in its outputs.
- We will implement fallback options when AI fails so that critical processes don’t collapse when the AI encounters uncertainty, error, or novel conditions.
- We will avoid blind automation and enforce human-in-the-loop controls in domains where safety, ethics, or unpredictability demand careful oversight.
- We will define and test AI systems for consistent performance by validating outputs across a range of conditions, use cases, and real-world scenarios.
Accountable
Accountability means that humans, not machines, are responsible for what AI does. People must be able to explain how AI decisions and tasks are completed, and we must take responsibility when things go wrong. Accountable teams build trust and help us to follow rules and laws.
How will we use this principle?
- We will assign clear human oversight for every AI use case and system to ensure that qualified individuals are responsible for each stage of AI design, use, and outcomes.
- We will maintain complete audit trails to enable traceable decisions, data changes, and AI use across the AI lifecycle.
- We will define roles and responsibilities within the organisation to clarify who is accountable and who is informed throughout AI development, adoption, and use.
- We will train our team in ethical and legal responsibility to equip them to make informed, accountable choices when building or using AI technologies.
- We will implement explainability protocols to ensure that users and regulators can understand the rationale and reasoning behind our use of AI, including AI-generated decisions, designs, and procurement.
Fair
Fairness means AI must treat people equally and without bias. This includes making sure that the data used to train AI does not discriminate against certain groups when we are using AI. Fair AI helps us to build trust and avoids causing harm by being inclusive and respectful to everyone.
How will we use this principle?
- We will ensure that our datasets are inclusive and representative by implementing strong data governance processes and correcting imbalances that could lead to unfair outcomes.
- We will promote equity in access to AI systems by removing unreasonable technical, economic and language barriers that exclude users.
- We will provide clear feedback and grievance mechanisms so people can report perceived unfairness or harm from AI decisions, so we can take corrective actions.
- We will assess fairness at every decision point by explicitly asking “Who benefits and who might be harmed?” and will make this reflection a routine part of AI design and deployment, adoption, and governance.
- We will teach people to recognise and challenge bias by offering materials that help people to spot and address bias in data, algorithms, and outputs.
Trustworthy
Trustworthy AI earns people’s confidence by being safe, fair, and easy to understand. Trust is built over time when AI systems behave as expected, deliver consistent results, and respect people’s rights.
How do we use this principle?
- We will design and use AI which earns trust through consistent outcomes by ensuring that people can rely on AI to behave as expected, within known use cases and boundaries.
- We will communicate clearly about how and why AI is used so that people understand its role in decisions and can make informed judgements about it.
- We will act quickly and transparently when trust is broken by investigating issues, communicating with stakeholders, and correcting failures that lead to loss of confidence.
- We will build trust factors into our use of AI by being transparent, explainable, fair, and demonstrating our overall confidence in AI systems and use cases.
- We will foster internal cultures of trustworthiness by empowering teams to raise concerns and prioritise ethical use of AI.
SOAP Principles
Social, Open, Accountable and Protected
Social
Social responsibility means that AI should help people and support positive changes in society. It should not harm communities, spread false information, or increase inequality. It helps us consider the effects of AI on communities, organisations, and people.
How do we use this principle?
- We will assess the social impact of our AI use and decisions by asking whether AI supports community wellbeing, reduces harm, and benefits users.
- We will build safeguards against harmful content and misinformation by testing for and mitigating risks where appropriate, such as AI-generated falsehoods, abuse, or manipulation.
- We will educate team members and users on the social risks and responsibilities of AI by offering supportive materials that explore ethical AI, misinformation risks, and how AI can shape behaviour.
- We will review social outcomes, not just technical ones by tracking impacts on trust, equity, misinformation, and accessibility as part of our AI management.
Open
Openness means sharing ideas, tools, and our use of AI with others. We do not hide where AI is helping us to deliver or where it is augmenting our work. It helps us to learn, work together, and improve our use of AI safely. Being open helps us to be innovative and fair.
How do we use this principle?
- We will clearly communicate when and how AI is used by disclosing when AI is used, influences decisions, augments human work, or generates content.
- We will promote open collaboration and knowledge sharing by contributing tools, learnings, or documentation to communities where appropriate.
- We will foster a culture of open discussion about AI risks and benefits by encouraging teams to raise concerns, ask questions, and explore the potential unintended consequences of AI openly.
- We will be open about our AI goals and motivations by sharing why we are using AI in each context, what we hope to achieve, and how we plan to monitor impact.
- We will document our AI use and systems clearly and accessibly by creating explainable summaries of how they work, what data they use, and their intended use and limitations.
Accountable
Accountability means that humans, not machines, are responsible for what AI does. People must be able to explain how AI decisions and tasks are completed, and we must take responsibility when things go wrong. Accountable teams build trust and help us to follow rules and laws.
How will we use this principle?
- We will assign clear human oversight for every AI use case and system to ensure that qualified individuals are responsible for each stage of AI design, use, and outcomes.
- We will maintain complete audit trails to enable traceable decisions, data changes, and AI use across the AI lifecycle.
- We will define roles and responsibilities within the organisation to clarify who is accountable and who is informed throughout AI development, adoption, and use.
- We will train our team in ethical and legal responsibility to equip them to make informed, accountable choices when building or using AI technologies.
- We will implement explainability protocols to ensure that users and regulators can understand the rationale and reasoning behind our use of AI, including AI-generated decisions, designs, and procurement.
Protected
Protection means keeping data, users and systems safe. This includes protecting people’s privacy, reducing cyber risks, and making sure that AI is not misused or harmful to people or organisations.
How do we use this principle?
- We will design AI systems and use cases with privacy and security from the start by embedding privacy and security design practices throughout the AI lifecycle.
- We will limit access to AI systems and data by using strong access controls, role-based permissions, and clear audit trails to prevent unauthorised use of AI.
- We will train team members on AI-specific safety and privacy risks by providing access to useful materials and support.
- We will monitor AI systems and use cases for emerging risks after deployment by establishing alerting, logging, and response protocols to catch harmful behaviour, drift or breaches.
- We will detect, prevent, and address the misuse of AI systems by identifying how AI might be used poorly or maliciously, and building guardrails to reduce these risks.
Part 4: Our Commitment to Comply with Legislation and Regulations
We are committed to the ethical, lawful and responsible development, deployment, procurement and use of AI. We will operate in accordance with the most comprehensive legislative and regulatory frameworks which are locally applicable.
We will align all relevant AI workstreams and processes with industry best practices and adhere to applicable laws and regulations. We will ensure that our AI-related activities comply with:
- UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018, particularly relating to data processing, profiling, automated decision-making, and data subjects.
- Equality legislation, including but not limited to the Equality Act 2010 and the Northern Ireland Act 1998, ensuring that the use of AI and AI systems do not result in discriminatory outcomes based on protected characteristics (e.g., age, gender, religion, political opinion, disability, race, or sexual orientation).
- The Human Rights Act 1998 and the principles of the European Convention on Human Rights, particularly regarding the right to privacy, freedom from discrimination, and protections against decisions made without a clear, consistent, or justifiable rationale.
- The EU AI Act provisions, where applicable due to cross-border activities, services to EU-based clients, or data involving EU citizens.
- The Information Commissioner’s Office (ICO) guidance on AI, data ethics, and explainability.
AI Legislation
We recognise the unique position of Northern Ireland, which is influenced by the United Kingdom’s principles for AI and by the Windsor Framework, which maintains the applicability of certain EU law in specific contexts. As such, we will ensure that our AI system development, AI adoption, and AI use are reviewed for compliance with EU AI legislation.
We will not engage in, support, or permit any AI use cases that are prohibited under recognised AI legislation, including but not limited to the EU AI Act. In accordance with this, we will not use:
- AI systems that use subliminal techniques or purposefully manipulative methods to distort human behaviour and impair decision-making, resulting in significant harm.
- AI systems that exploit vulnerabilities due to age, disability, or specific social or economic situations to cause significant harm.
- AI systems that evaluate or classify individuals based on their social behaviour or personal characteristics, leading to unjustified or disproportionate treatment.
- AI systems used solely for predicting criminal offences based on profiling or assessing personality traits.
- AI systems that create or expand facial recognition databases through untargeted scraping of images from the internet or CCTV footage.
- AI systems designed to infer emotions in workplaces and education settings, unless used for medical or safety reasons.
- AI systems that categorise individuals based on biometric data to infer sensitive characteristics such as race, political opinions, or sexual orientation.
- AI systems that perform real-time biometric identification.
GDPR and Data Protection
The UK GDPR and the Data Protection Act 2018 set out the legal framework for processing personal data, ensuring that individuals’ rights are respected and protected.
We will uphold the principles and obligations of the UK GDPR and the Data Protection Act 2018 across all systems and processes. We will:
- Establish a lawful basis for processing: Ensure that all personal data processed by AI systems is supported by a lawful basis under Article 6 (e.g., consent, contract, legitimate interest), and, where applicable, a condition under Article 9 for special category data.
- Conduct Data Protection Impact Assessments (DPIAs): Where appropriate, we will identify and assess risks associated with AI use, especially where processing is likely to result in a high risk to individuals (e.g., automated profiling, large-scale data processing operations). DPIAs will be completed before any high-risk AI deployment.
- Ensure data minimisation and purpose limitation: Use only data necessary to achieve clearly defined and legitimate objectives. Data must not be repurposed in a way that conflicts with the original purpose without a valid lawful basis.
- Provide meaningful transparency: Inform individuals about how and why AI systems process their data, particularly when AI is used for profiling or automated decision-making. Explanations must be concise, understandable and accessible.
- Enable data subject rights: Implement mechanisms for individuals to exercise their rights, including:
- The right to be informed about the use of AI.
- The right of access to their data and processing logic.
- The right to object to profiling or automated decisions.
- The right to human intervention and to contest decisions made solely by automated means.
- Record and demonstrate compliance: Maintain internal documentation of processing activities, lawful bases, and technical safeguards in line with accountability principles.
Equality Legislation & Human Rights
We recognise that AI has the potential to impact people’s lives. It is essential that AI is designed and deployed in ways that respect human dignity, uphold individual rights and prevent discriminatory or unjust outcomes.
We acknowledge our obligations under Equality and Human Rights legislation, and will embed equality, human rights and fairness into the core of our AI use and governance model by committing to the following actions:
- Design and use AI for non-discrimination: Ensure that AI systems are designed and used to avoid discriminatory outcomes or the reinforcement of social, racial, gender, or economic biases. This includes:
- Using diverse and representative datasets.
- Avoiding proxies that may indirectly encode bias (e.g., postcode as proxy for ethnicity or socio-economic status).
- Ensure explainability and contestability: AI decisions and processes must be:
- Transparent: Capable of being explained in human-understandable terms.
- Reviewable: Open to appeal or human challenge.
- Documented: With clear audit trails that record how decisions were made and on what basis.
- Monitor for disparate impact: We will proactively monitor for disparate impact regarding our use of AI (i.e., outcomes where one group is disproportionately disadvantaged) and take corrective actions where such effects are identified.
- Enable human oversight: All AI systems with the potential to affect rights or protected groups will be subject to human oversight throughout their lifecycle. Where appropriate this may include decision-making authority, redress mechanisms, and system audits.
Part 5: Privacy and Data Security for AI
We place a high priority on the protection of personal and sensitive data used in the development and operation of AI systems. Our approach draws from recognised best practices in data privacy and information security, including principles reflected in standards such as the ISO/IEC 27001 (Information Security Management) and ISO/IEC 27701 (Privacy Information Management), while adapting to the specific needs of our organisation and the regulatory environment in which we operate.
Data minimisation and purpose limitation
We are committed to collecting and using only the data necessary for legitimate, clearly defined purposes. Data will not be used for unrelated objectives unless there are lawful grounds and appropriate consents.
- We commit to defining our reasons for using AI and for using any specific data which we use in or for our AI use cases. We commit to being able to explain these reasons and our data requirements in plain and accessible language.
- As we create an AI registry, we will commit to keeping this information stored alongside respective entries.
- We will maintain clear data governance frameworks within projects to document purposes, data flows, and retention periods, in line with data protection principles.
Data anonymisation and de-identification
- Where practical, personal data should be anonymised, pseudonymised, or de-identified before further processing or sharing. These techniques help reduce the risk of identification while maintaining utility for AI development.
- Responsibility for data provisioning will typically rest with the data originator or any collaborating partner. Where internal teams perform data anonymisation, recognised methods will be used, and their effectiveness will be periodically evaluated in light of emerging risks, such as AI-driven re-identification.
- We encourage the adoption of differential privacy or synthetic data generation where anonymisation alone is insufficient for risk mitigation.
Personal data and designated safe spaces
Where personal data must be accessed or used by team members during AI development or use, this will occur within a clearly defined ‘safe space’.
- A safe space refers to an isolated or controlled environment in which the data is not used for model training, inference, or unintended secondary purposes beyond the agreed scope of use.
- This includes safeguards to prevent inadvertent data leakage, retention beyond necessity, or integration into broader datasets.
Our process of anonymisation is supported by risk assessment and documented evaluations of re-identification likelihood, particularly when data is to be shared externally or used for model training.
- For a table which outlines our standard scale for assessing the potential for re-identification from datasets, please refer to Appendix 2.
If you have an existing privacy or data security policy, consider inserting the following language into the section for privacy and data security:
The use of AI within the organisation is governed by the organisation’s existing privacy and data security policies. The privacy and data security provisions in this policy are intended to complement and augment those existing policies. In the event of any inconsistency or conflict between this policy and other applicable privacy or data security policies, the governance framework that provides the higher standard of protection with respect to the use of AI shall take precedence.
The determination of the “more appropriate” framework shall be guided by the nature of the data involved, the risk posed by the AI application, and the objective of ensuring the highest reasonable degree of privacy, security, and responsible AI use.
Part 6: How we handle data
We retain data only for as long as necessary to fulfil the purposes for which it was collected, to deliver services, to support our reasons for using AI, or to comply with legal and regulatory obligations, resolve disputes and enforce agreements.
Retention periods may vary depending on several key factors, including:
- User expectations and consent: Where possible, we consider data subjects’ preferences regarding the duration of data retention, particularly where explicit consent has been provided for continued use or early deletion has been requested.
- Sensitivity of the data: Highly sensitive data, such as personal, confidential, or commercially critical information, may be subject to shorter retention periods and stricter deletion or access protocols.
- Availability of controls: Where technically feasible, we implement automated or manual data retention controls to ensure information is not retained longer than necessary.
- Legal and contractual obligations: Data may need to be retained to comply with applicable legislation, industry regulations, or contractual commitments with third parties.
- Business and operational requirements: Data may also be retained for a defined period if needed to maintain system integrity, support audits, ensure reproducibility of results and systems, or maintain business continuity.
We commit to review the data we hold to ensure that it remains accurate, relevant, and necessary. When data is no longer required, we will securely delete, anonymise, or otherwise dispose of it using methods that maintain confidentiality and prevent unauthorised access.
We do not retain data beyond the lifespan of the organisation unless required by law. If the organisation ceases operations or is dissolved, all retained data will be securely and permanently deleted, unless a legal obligation requires otherwise.
In cases where the organisation merges with or is succeeded by another entity, data may be transferred to the successor only if:
- Our successor continues the original purposes for which the data was collected; and
- Our successor agrees to uphold equivalent or stronger data protection standards; and
- Affected individuals or organisations are informed of the transfer, and their data rights are respected.
Where data has been anonymised or aggregated such that it can no longer be linked to an individual or specific project, it may be retained indefinitely for purposes such as research, analysis, system improvement, and commercialisation.
For a table which outlines standard data retention periods, please refer to Appendix 1.
If you have existing policies or guidelines which address data usage for AI, consider inserting the following pop-out language into the section for privacy and data security:
These data usage and retention provisions are intended to supplement existing organisational policies that govern the use of data with AI, data protection, and information governance. They do not replace or override any current frameworks already in effect.
- In the event of any conflict or inconsistency between this policy and other applicable organisational policies and procedures, the governance framework that offers the highest level of protection for the responsible and ethical use of AI shall take precedence.
- This approach ensures alignment with the organisation’s commitment to accountability, transparency, and the protection of individuals, systems, and data in the deployment and use of AI technologies.
Part 7: Our reasons for using AI
As part of its commitment to responsible AI, the organisation will adopt AI systems, whether developed in-house or procured from third parties, to enhance operations, decision-making, and service delivery. These systems are used in clearly defined contexts, always with respect for individual rights, organisational values, and regulatory obligations.
The following provisions outline the legitimate purposes for which AI systems and data may be used within the organisation. Each purpose reflects a distinct operational or strategic objective and is accompanied by actionable safeguards to ensure responsible implementation. Whether the AI is used to automate tasks, drive insights, influence behaviour, or support predictions, its use must remain transparent, accountable, and aligned with the other parts of this policy.
These provisions apply to AI models developed internally and to those which are acquired through third party partnerships, vendors, or platforms.
Now it's your turn....
Select all the reasons below that apply to how your organisation will use AI when writing your policy.
To Train AI
- To develop, fine-tune, or adapt AI models using data in order to improve their performance, accuracy, and relevance for intended tasks.
- We may collect and use data to train or fine-tune AI systems, ensuring alignment with defined use cases, ethical standards, and applicable compliance frameworks.
Take action using AI
- To enable AI systems to autonomously or semi-autonomously initiate or support operations activities based on data-driven outputs.
- We may deploy or use AI systems to execute or assist with actions based on predefined parameters, with oversight mechanisms in place to monitor for unintended outcomes or risks.
Influence people
- To inform, guide, or influence human behaviour, choices, or perceptions through personalised content, recommendations, or automated decision support.
- Where AI is used to influence individuals, we will implement transparency, consent, and fairness safeguards to uphold autonomy, mitigate bias, and avoid manipulative practices.
Improve efficiency
- To streamline processes, reduce redundancy, optimise resource allocation and automate repetitive tasks through AI-driven solutions.
- Where we use AI to enhance operational efficiency, we will maintain oversight to ensure outcomes are continuously assessed for accuracy, fairness, and alignment with business goals and AI objectives.
Unlock insights
- To identify patterns, trends or relationships within data that would otherwise be difficult, burdensome or impossible to detect manually.
- We may use AI to derive insights from data, with processes in place to validate outputs and protect sensitive or personally identifiable information.
Predict something
- To anticipate future events, behaviours or needs based on historical and real-time data using predictive AI models.
- We may use predictive AI systems to forecast events and improve planning and responsiveness; these systems will include mechanisms to audit AI performance and mitigate unintended impacts.
Make informed decisions
- To support human or automated decision-making by providing relevant, contextual, and timely data-driven recommendations or risk assessments.
- Where we use AI to enhance decision-making capabilities, we will ensure human oversight and accountability are retained, especially where decisions carry ethical, legal, or significant operational implications.
Part 9: Our lines of responsibility for AI
We commit to creating and maintaining clear lines of responsibility for using AI. This helps us work responsibly and ethically. Each person or group involved in a project will understand their role, what they are responsible for, and who to contact if an issue arises.
We will organise our lines of responsibility into the following structure:
Level 1 – Primary points of contact
This is the main team working directly with AI on a particular use case or project. They will manage the day-to-day work with AI and are the first people to speak to about a project or use case.
We will make sure that people at this level:
- Are responsible for carrying out AI tasks
- Communicate regularly with teams, partners or users (where appropriate)
- Are available to solve common or routine problems
- Help support the AI operations every day
Level 2 – Escalation points of contact
A designated senior contact for cases where problems cannot be handled by the core team. This person will have the authority to make important or strategic decisions.
This point of escalation contact will:
- Address more serious or complex challenges
- Help resolve disagreements or risks that could affect the project
- Support alignment with the wider organisation and users
If you would like to add other contacts to your accountability structures, consider adding the following wording to your policy:
Level 1.5 – Other primary points of contact - supervisor
This level provides additional supervision and support to the main point of contact. It is used where multiple people are looking after a project. This role must be assigned to check that the project is on the right path.
We will make sure that this person:
- Makes sure the project follows ethical, legal, and technical rules
- Supports where challenges are raised
- Ensures the project matches organisation AI values and AI purposes
Part 10: Updating and maintaining this policy
This policy will be reviewed and updated on a regular basis to ensure it remains aligned with the latest developments in Responsible AI practices, ethical standards, and the evolving impact of AI in real-world contexts.
Reviews will consider emerging regulatory requirements, advances in technology, societal expectations, and organisational learnings from AI development, deployment, education, and adoption.
Updates to this policy will be made to reinforce ethical standards, promote responsible innovation, and mitigate any identified risks associated with AI systems in use.
Appendices
Appendix 1 - Our retention periods for data and AI
Appendix 2 - Addressing re-identification of data in AI
Appendix 3 - Core Actions & Promises for AI
Appendix 4 - Our Don'ts for AI
Appendix 5 - Policy dictionary for AI
Commitment and Disclaimer
The Responsible AI Hub provides resources for learning, examples of AI in action and support for responsible AI. Use of these materials does not create any legal obligations or liability with the AICC.