Build a Responsible AI Policy

Create a tailored policy that ensures your AI adoption is ethical, transparent and aligned with best practice.

Welcome to the Responsible AI Policy Builder!

This tool is designed to help you develop a customised responsible AI policy for your organisation by answering a few guided questions and following the instructions on this page.

As you work through the sections, you’ll choose values, principles and compliance options that reflect your needs. Use the information on this page to create and refine your own policy in a separate document. The tool will guide you, but the final policy is yours to write and share.

What is a Responsible AI Policy?

This policy sets out how an organisation will use Artificial Intelligence in a safe, fair, and accountable way. It does three things:

Explains purpose

by outlining why AI is being used in the business and what goals it supports.

Sets boundaries

around what users of AI are allowed to do and what practices are prohibited.

Clarifies responsibilities

by identifying who is accountable for decisions, oversight and maintenance.

How to use this tool

This tool is made up of two types of content: standard components, which outline core areas of Responsible AI, and custom components, which help align the policy template to your business.

  • Standard components can be directly inserted into your policy.
  • Custom components offer you a choice to insert different approaches into your policy.

Let's help you write your policy...

Part 1: Introduction to the policy

This policy outlines a set of guidelines, principles and practices that you will use to manage the approach to AI in your organisation. It helps you make sure that your use of AI is safe, fair and transparent.

Using the policy, you will:

  • Follow the law by helping the organisation stay in line with legislation and regulations.
  • Remain accountable by outlining what will happen if something goes wrong, so the policy helps you establish what happened and how to fix it.

Part 2: Our values for AI

Our values shape how and why we use AI, ensuring every idea and application is rooted in purpose. They guide our decisions, define the way we work, and keep us accountable in building responsible AI. We don’t just talk about values; we live them and embed them in everything we do.

Now it’s your turn: choose 3 values for your organisation from the groupings below and add them to your new policy document.

Ideate with PURPOSE

We explore new AI ideas with clear intent. Every concept or solution we pursue should address real-world challenges and bring value to people, business and communities.

Act with RESPONSIBILITY

We own the impact of our AI use. Whether it’s something successful or a mistake, we stay accountable and ready to improve.

Build with INTEGRITY

We commit to honest, ethical development and use of AI. From data collection to deployment, we are transparent and accountable at every stage of our AI lifecycle.

Innovate with TRUST

We design and use AI that earns trust by being reliable, understandable and respectful of its impact on others.

Use AI with HUMANS in mind

We use AI to support and empower people, not replace or harm them. Humans are at the core of how we use AI.

Part 3: Our principles for AI

While our values inspire our motivations, our principles translate these into actionable standards: fundamental rules which provide a consistent framework for decision-making and behaviour. They give structure and direction to our ethical and professional conduct.

Our principles are the guiding rules that help inform our decision-making. We rely on our principles to:

  • Provide a framework for making decisions and using AI.
  • Provide direction for innovation projects and development workstreams.
  • Provide alignment and consistency in our approach to the development, training, support and collaboration for AI.

Time for you to choose your principles...

You can choose from three sets of principles. Select the one below that best fits your organisation and add it to your new policy document.

FAST Principles

Fair, Accountable, Sustainable and Transparent.

Fair

Fairness means AI must treat people equally and without bias.

Accountable

Accountability means that humans, not machines, are responsible for what AI does.

Sustainable

Sustainability focuses on AI’s impact on society and the organisation, and on the long-term reliability of AI.

Transparent

Transparency means clearly showing how AI works, what it is for and how it’s used.

RAFT Principles

Reliable, Accountable, Fair and Trustworthy.

Reliable

Reliable AI works the way it is supposed to – safely and consistently over time.

Accountable

Accountability means that humans, not machines, are responsible for what AI does.

Fair

Fairness means AI must treat people equally and without bias.

Trustworthy

Trustworthy AI earns people’s confidence by being safe, fair, and easy to understand.

SOAP Principles

Social, Open, Accountable and Protected

Social

Social responsibility means that AI should help people and support positive changes in society.

Open

Openness means sharing ideas, tools, and our use of AI with others.

Accountable

Accountability means that humans, not machines, are responsible for what AI does.

Protected

Protection means keeping data, users and systems safe.

Part 4: Our Commitment to Comply with Legislation and Regulations

We are committed to the ethical, lawful and responsible development, deployment, procurement and use of AI. We will operate in accordance with the most comprehensive legislative and regulatory frameworks which are locally applicable.

We will align all relevant AI workstreams and processes with industry best practices and adhere to applicable laws and regulations. We will ensure that our AI-related activities comply with:

AI Legislation

We recognise the unique position of Northern Ireland, which is influenced by the United Kingdom’s principles for AI and by the Windsor Framework, which maintains the applicability of certain EU law in specific contexts. As such, we will ensure that our AI system development, adoption and use is reviewed for compliance with EU AI legislation.

We will not engage in, support, or permit any AI use cases that are prohibited under recognised AI legislation, including but not limited to the EU AI Act. In accordance with this, we will not use:

GDPR and Data Protection

The UK GDPR and the Data Protection Act 2018 set out the legal framework for processing personal data, ensuring that individuals’ rights are respected and protected.

We will uphold the principles and obligations of the UK GDPR and the Data Protection Act 2018 across all systems and processes. We will:

Equality Legislation & Human Rights

We recognise that AI has the potential to impact people’s lives. It is essential that AI is designed and deployed in ways that respect human dignity, uphold individual rights and prevent discriminatory or unjust outcomes.

We acknowledge our obligations under Equality and Human Rights legislation, and will embed equality, human rights and fairness into the core of our AI use and governance model by committing to the following actions:

Part 5: Privacy and Data Security for AI

We place a high priority on the protection of personal and sensitive data used in the development and operation of AI systems. Our approach draws from recognised best practices in data privacy and information security, including principles reflected in standards such as the ISO/IEC 27001 (Information Security Management) and ISO/IEC 27701 (Privacy Information Management), while adapting to the specific needs of our organisation and the regulatory environment in which we operate.

Data minimisation and purpose limitation

We are committed to collecting and using only the data necessary for legitimate, clearly defined purposes. Data will not be used for unrelated objectives unless there are lawful grounds and appropriate consents.

Data anonymisation and de-identification

Personal data and designated safe spaces

Where personal data must be accessed or used by team members during AI development or use, this will occur within a clearly defined ‘safe space’.

Our process of anonymisation is supported by risk assessment and documented evaluations of re-identification likelihood, particularly when data is to be shared externally or used for model training.
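As an illustration only, one common way to document re-identification likelihood is a k-anonymity measurement: the size of the smallest group of records that share the same combination of quasi-identifiers. The sketch below is hypothetical; the field names, records and threshold are invented examples, not part of this policy.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the k-anonymity level of a dataset: the size of the
    smallest group of records sharing the same quasi-identifier values.
    A higher k means a lower re-identification likelihood."""
    groups = Counter(
        tuple(record[q] for q in quasi_identifiers) for record in records
    )
    return min(groups.values())

# Hypothetical anonymised dataset: age band and postcode district are
# the quasi-identifiers an attacker might link against other data.
records = [
    {"age_band": "30-39", "postcode": "BT1", "outcome": "A"},
    {"age_band": "30-39", "postcode": "BT1", "outcome": "B"},
    {"age_band": "40-49", "postcode": "BT2", "outcome": "A"},
    {"age_band": "40-49", "postcode": "BT2", "outcome": "C"},
]

k = k_anonymity(records, ["age_band", "postcode"])
print(k)  # 2: every quasi-identifier combination is shared by at least 2 records
```

A documented evaluation might record this k value alongside the chosen threshold before data is shared externally or used for model training.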

If you have an existing privacy or data security policy, consider inserting the following language into the section for privacy and data security:

The use of AI within the organisation is governed by the organisation’s existing privacy and data security policies. The privacy and data security provisions in this policy are intended to complement and augment those existing policies. In the event of any inconsistency or conflict between this policy and other applicable privacy or data security policies, the framework that provides the higher standard of protection with respect to the use of AI shall take precedence.

The determination of the more protective framework shall be guided by the nature of the data involved, the risk posed by the AI application, and the objective of ensuring the highest reasonable degree of privacy, security, and responsible AI use.

Part 6: How we handle data

We retain data only for as long as necessary to fulfil the purposes for which it was collected, to deliver services, to support our reasons for using AI, or to comply with legal and regulatory obligations, resolve disputes and enforce agreements.

Retention periods may vary depending on several key factors, including:

We commit to review the data we hold to ensure that it remains accurate, relevant, and necessary. When data is no longer required, we will securely delete, anonymise, or otherwise dispose of it using methods that maintain confidentiality and prevent unauthorised access.

We do not retain data beyond the lifespan of the organisation, unless required by law. If the organisation ceases operations or is dissolved, all retained data will be securely and permanently deleted, unless a legal obligation requires otherwise.

In cases where the organisation merges with or is succeeded by another entity, data may be transferred to the successor only if:

Where data has been anonymised or aggregated such that it can no longer be linked to an individual or specific project, it may be retained indefinitely for purposes such as research, analysis, system improvement, and commercialisation.

For a table which outlines standard data retention periods, please refer to Appendix 1.
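As a hypothetical illustration of how a retention schedule can be operationalised, the sketch below maps data categories to retention periods and computes when disposal falls due. The categories and periods shown are invented examples, not the values in Appendix 1.

```python
from datetime import date, timedelta

# Hypothetical retention periods by data category, in days.
# Real periods belong in Appendix 1 and depend on legal obligations.
RETENTION_PERIODS = {
    "ai_training_data": 365 * 3,
    "model_output_logs": 365,
    "support_tickets": 365 * 2,
}

def disposal_date(category, collected_on):
    """Return the date on which data in this category falls due for
    secure deletion, anonymisation, or other disposal."""
    return collected_on + timedelta(days=RETENTION_PERIODS[category])

def disposal_due(category, collected_on, today):
    """True if the retention period has elapsed and disposal is due."""
    return today >= disposal_date(category, collected_on)

print(disposal_date("model_output_logs", date(2024, 1, 1)))  # 2024-12-31
```

A periodic review job built on such a schedule would flag records where `disposal_due` is true, supporting the commitment above to delete, anonymise, or otherwise dispose of data that is no longer required.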

The data usage and retention provisions are intended to supplement existing organisational policies that govern the use of data with AI, data protection, and information governance. They do not replace or override any current frameworks already in effect.

If you have existing policies or guidelines which address data usage for AI, consider inserting the following pop-out language into this section:

The data usage and retention provisions are intended to supplement existing organisational policies that govern the use of data with AI, data protection, and information governance. They do not replace or override any current frameworks already in effect.

Part 7: Our reasons for using AI

As part of its commitment to responsible AI, the organisation will adopt AI systems, whether developed in-house or procured from third parties, to enhance operations, decision-making, and service delivery. These systems are used in clearly defined contexts, always with respect for individual rights, organisation values, and regulatory obligations.

The following provisions outline the legitimate purposes for which AI systems and data may be used within the organisation. Each purpose reflects a distinct operational or strategic objective and is accompanied by actionable safeguards to ensure responsible implementation. Whether the AI is used to automate tasks, drive insights, influence behaviour, or support predictions, its use must remain transparent, accountable, and aligned with the other parts of this policy.

These provisions apply to AI models developed internally and to those acquired through third-party partnerships, vendors, or platforms.

Now it's your turn...

Select all the reasons below that apply to how your organisation will use AI when writing your policy.

To Train AI

To develop, fine-tune, or adapt AI models using data in order to improve their performance, accuracy, and relevance for intended tasks.

Take action using AI

To enable AI systems to autonomously or semi-autonomously initiate or support operational activities based on data-driven outputs.

Influence people

To inform, guide, or influence human behaviour, choices, or perceptions.

Improve efficiency

To streamline processes, reduce redundancy, optimise resource allocation and automate repetitive tasks through AI-driven solutions.

Unlock insights

To identify patterns, trends or relationships within data that would otherwise be difficult, burdensome or impossible to detect manually.

Predict something

To anticipate future events, behaviours or needs based on historical and real-time data using predictive AI models.

Make informed decisions

To support human or automated decision-making by providing relevant, contextual, and timely data-driven recommendations or risk assessments.

Part 9: Our lines of responsibility for AI

We commit to creating and maintaining clear lines of responsibility for using AI. This helps us work responsibly and ethically. Each person or group involved in a project will understand their role, what they are responsible for, and who to contact if an issue arises.

We will organise our lines of responsibility into the following structure:

Level 1 – Primary points of contact

This is the main team working directly with AI on a particular use case or project. They will manage the day-to-day work with AI and are the first people to speak to about a project or use case.

We will make sure that people at this level:

Level 2 – Escalation points of contact

A designated senior contact for cases where problems cannot be handled by the core team. This person will have the authority to make important or strategic decisions.

This point of escalation contact will:

If you would like to add other contacts to your accountability structures, consider adding the following wording to your policy:

Level 1.5 – Other primary points of contact - supervisor

This level provides additional supervision and support to the main point of contact. It is used where multiple people are looking after a project. This role must be assigned to check that the project is on the right path.

We will make sure that this person:

Part 10: Updating and maintaining this policy

This policy will be reviewed and updated on a regular basis to ensure it remains aligned with the latest developments in Responsible AI practices, ethical standards, and the evolving impact of AI in real-world contexts.

Reviews will consider emerging regulatory requirements, advances in technology, societal expectations, and organisational learnings from AI development, deployment, education, and adoption.

Updates to this Policy will be made to reinforce ethical permissibility, promote responsible innovation, and mitigate any identified risks associated with AI systems in use.

Appendices

Appendix 1 - Our retention periods for data and AI

Appendix 2 - Addressing re-identification of data in AI

Appendix 3 - Core Actions & Promises for AI

Appendix 4 - Our Don'ts for AI

Appendix 5 - Policy dictionary for AI

Commitment and Disclaimer
The Responsible AI Hub provides resources for learning, examples of AI in action and support for responsible AI. Use of these materials does not create any legal obligations or liability with the AICC.