Responsible AI Policy

The AICC’s Responsible AI Policy outlines our approach to AI development, adoption, advisory guidance, research & development, and ethical permissibility for AI in Northern Ireland.

Part 1: Our Values

  1. The AICC Team is united by a shared commitment to harnessing AI’s potential for good, solving real-world problems, and building a future where technology empowers people, businesses and communities.
  2. We believe our responsible AI values help define the way we work with AI. We aim to live our values and integrate them into our approach as we work across Northern Ireland.
  3. At the AICC, we:
    • Ideate with CARE
      • When we have new ideas for using AI or engage in new projects with SMEs, we do it carefully. We think about how these ideas might impact organisations and people, and consider not only what AI can do, but what it should do, and what’s the best way to do it.
    • Build with INTEGRITY
      • Whether it’s a new proof of concept or a relationship, we aim to develop with honesty, transparency, and accountability. We are committed to truthfulness in data usage, fairness in development, and clarity in our decision-making and guidance.
    • Innovate with TRUST
      • Innovation is only sustainable if it earns and maintains trust. We aim to build relationships, engage in SME projects, and design solutions that can be relied upon, both for performance and for alignment with ethical standards.

Part 2: Our Principles

  1. Principles are the guiding rules that form the foundation of our decision-making and approach at the AICC. We rely on our AI Principles to:
    • Provide a framework for making decisions and providing guidance to SMEs, industry, other bodies or persons in Northern Ireland.
    • Provide direction for innovation projects and development workstreams in the AICC Transformer Programme.
    • Provide alignment and consistency in the AICC approach to AI development, training, support and collaboration.
    • Provide advice on trustworthy, ethical and responsible AI across AI development, adoption and policy areas.
  2. Our principles are how we carry our responsible AI values into everything we do. We don’t believe that policies are just words on a page; we live our values and integrate them into our ways of working using these principles:
    • Principle 1: Fairness
      • Fairness means that AI is unbiased, equitable and non-harmful. AI should treat all people fairly, without favouring or excluding any group based on gender, race, disability, age, or other protected characteristics under equality legislation and regulations.
      • How we use this principle:
        • In decision-making: We always consider who could be helped or harmed, asking: “Is this fair to everyone?”
        • For innovation: We test our approach when building proofs of concept, aiming to find and resolve bias. We include diverse voices when designing potential AI solutions.
        • In training & education: We teach people how to identify bias in data, models, algorithms and outputs.
        • In policy advice: We recommend fairness reviews, bias checks, harm assessments, and inclusive practices (a minimal sketch of one such bias check appears at the end of this Part).
    • Principle 2: Accountability
      • Accountability helps ensure that we interact with AI development and with people in a way that is auditable, responsible, and offers redress. At the AICC we emphasise that humans are always responsible for AI actions, and we must be able to explain, review, and correct those actions. Clear responsibility helps manage risk, protect rights, and improve trust in AI.
      • How we use this principle:
        • In decision-making: We assign responsibility amongst our trusted team of experts, who ensure explainability and reasoning are demonstrated for each decision we make.
        • For innovation: We track who created, tested and approved approaches for each proof of concept we are involved in.
        • In training & education: We educate on the benefits of record keeping. We commit to sharing how AI can be monitored and audited for the purposes of adoption, tracking AI project progress, and measuring success.
        • In policy advice: We recommend frameworks for governance, redress and the safe use of AI.
    • Principle 3: Sustainability
      • Sustainability focuses on AI’s impact on society and organisations, and on the long-term reliability of AI. AI should not just work today – it should benefit organisations and people over time, use resources wisely, and support an inclusive future. Sustainability also ensures that AI programmes remain useful and maintainable as they evolve.
      • How we use this principle:
        • In decision-making: We consider the impact of AI, asking: “Will this be good for an organisation or people in the long run?”
        • For innovation: We advise on the right tool for the project, avoiding overuse and over-complicated solutions. We design AI for reuse and maintenance, avoiding black-box AI systems that cannot be understood.
        • In training & education: We encourage understandable resources and AI content and promote long-term thinking for AI.
        • In policy advice: We promote social and organisational responsibility in the AI journey. We advise on frameworks which offer continued governance and AI oversight.
    • Principle 4: Transparency
      • Transparency means clearly showing how AI works, what it is for, and how it’s used. It must be easy to understand what data is being used, how it’s making decisions or completing tasks, and when it’s being used. People trust AI when they can understand it. Transparency helps make informed decisions and ensures that AI isn’t being used in secret or harmful ways.
      • How we use this principle:
        • In decision-making: We consider how AI works, asking: “Can we explain what this AI does and why it does it?”
        • For innovation: We build systems for proof of concept which are easy to understand and accompanied by clear documentation. We also endorse explainable solutions, and focus on visibility into the entire AI process, not just a single moment in time.
        • In training & education: We encourage documentation and communication around the use and purpose of AI.
        • In policy advice: We recommend explainable AI, open data use policies, and clear communications.
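
As a concrete illustration of the bias checks recommended under our Fairness principle, the sketch below computes a demographic parity difference – the gap in positive-outcome rates between groups. The column names, sample data, and 10% threshold are illustrative assumptions, not an AICC-prescribed method.

```python
# Minimal sketch of one possible bias check: demographic parity difference.
# Column names ("group", "approved") and the threshold are illustrative.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest gap in positive-outcome rates between any two groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Example: flag for review if outcome rates differ by more than 10 percentage points.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})
gap = demographic_parity_difference(decisions, "group", "approved")
if gap > 0.10:  # threshold is illustrative; set per project and legal context
    print(f"Potential bias: outcome-rate gap of {gap:.0%} between groups")
```

A check like this is only a starting point; fairness reviews also require qualitative judgement about which groups, outcomes, and thresholds matter in context.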

Part 3: Compliance with Legislation & Regulation

  1. The AICC is committed to the ethical, lawful and responsible development, deployment, and use of Artificial Intelligence (AI). The AICC will operate in accordance with the most comprehensive legislative and regulatory frameworks which are locally applicable.
  2. We will align all relevant AI workstreams and processes with the EU’s AI Act. This includes:
    • Adhering to risk classifications within the Act.
    • Ensuring transparency, human oversight, and technical robustness.
    • Respecting fundamental rights, democratic values, and the rule of law.
  3. We recognise the unique position of Northern Ireland, which is influenced by the United Kingdom’s principles for AI and by the Windsor Framework, which maintains the applicability of certain EU law in specific contexts. As such, at the AICC:
    • AI systems developed, shared, or used in Northern Ireland will be reviewed for compliance with EU regulations.
    • We will maintain clear procedures for navigating dual regulatory obligations where they arise.
  4. We will not engage in, support, or permit any AI use cases that are prohibited under the EU AI Act, including but not limited to:
    • AI systems that use subliminal techniques or purposefully manipulative methods to distort human behaviour and impair decision-making, resulting in significant harm.
    • AI systems that exploit vulnerabilities due to age, disability, or specific social or economic situations to cause significant harm.
    • AI systems that evaluate or classify individuals based on their social behaviour or personal characteristics, leading to unjustified or disproportionate treatment.
    • AI systems used solely for predicting criminal offences based on profiling or assessing personality traits.
    • AI systems that create or expand facial recognition databases through untargeted scraping of images from the internet or CCTV footage.
    • AI systems designed to infer emotions in workplace and education settings, unless used for medical or safety reasons.
    • AI systems that categorise individuals based on biometric data to infer sensitive characteristics such as race, political opinions, or sexual orientation.
    • AI systems that perform real-time remote biometric identification in publicly accessible spaces for law enforcement purposes.
  5. As per this Responsible AI Policy, projects within the AICC will undergo internal legal and ethical assessments.
  6. Any system approaching high-risk or restricted use will be subject to escalation, audit and review within the AICC.
    • If an organisation is pursuing a prohibited use case, the AICC may:
      • Terminate the collaboration with the organisation under the Transformer Programme; or
      • Change the scope of the collaboration to focus on bringing the project into compliance with the EU AI Act.
  7. The AICC recognises that the use of AI intersects with sector-specific regulatory frameworks. Under the Transformer Programme, the AICC will ensure that AI systems comply not only with general AI laws, but also with industry-specific regulations, including but not limited to:
    • Healthcare: Compliance with GDPR, medical device regulations, and clinical safety standards when AI is used for diagnostics, treatment planning, or patient management.
    • Finance: Adherence to anti-money laundering (AML), algorithmic trading, consumer protection, and transparency rules set by financial regulators (e.g., FCA, EBA).
    • Employment & HR: Compliance with fair hiring practices, anti-discrimination laws, and transparency in AI-driven decision-making processes affecting workers.
    • Education: Respect for academic integrity, data protection for minors, and equitable access when deploying AI in learning environments.
    • Law Enforcement: Alignment with human rights standards, due process, and proper authorisation when deploying surveillance or decision-support tools.
  8. All AI projects will be evaluated for compliance with relevant industry codes, standards, and best practices, with domain-specific oversight where appropriate.

Part 4: The Transformer Programme

  1. The AI Collaboration Centre’s Transformer Programme is a comprehensive 20-day journey designed to empower businesses with the tools and expertise needed to leverage artificial intelligence effectively.
  2. Through a blend of hands-on sessions and expert guidance, participants will progress from understanding the fundamentals of AI to building and integrating tailored AI models into their operations.
  3. Our commitments in the AICC Transformer Programme:
    • We will identify lines of responsibility for both internal purposes and external engagement, including highlighting the avenues for escalation or redress.
    • We will conduct our work efforts in line with the values and principles of the AICC Responsible AI Policy.
    • We will not retain any shared organisational data beyond what is necessary for the completion of the engagement.
    • We will provide in-depth reports on our activities before concluding any SME engagement as part of the Transformer Programme.
    • We will use simple language and avoid jargon or technical complexities to ensure understanding during projects and at the point of handover.
  4. We will complete Core Assessments throughout the Transformer Programme to evaluate and advise on organisation and project compliance and governance. Where necessary this includes, but is not limited to:
    • Creating Data Fact Sheets: A summary of the data which is used in an AI project/system. The Data Fact Sheet aims to help people understand the impact of data on a project and its data management needs (a hypothetical sketch follows this list).
    • Completing Harm Assessments: A summary of the types of harm and risk associated with the use of AI. This assessment identifies real-world harms as well as less obvious, unseen harms.
    • Checking an Organisation’s Policies & Governance: A review to make sure rules, policies, and processes in an organisation are established and clear. It helps find problems, reduce risk, and make sure everything is set up for success. Considers 6 areas: Values & Purpose, Vision, Policies, Guidance, People, Responsibility.
    • Assessing AI Projects: Review of an AI project to check if it is well planned, useful, and safe. It looks at the goals, data, tools, and risks to make sure the project is working the right way and brings value to the organisation. Considers 6 areas: General Information, Legislation, Fairness, Accountability, Sustainability, Transparency.
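
As an illustration of what a Data Fact Sheet might capture, the sketch below models one as a small data structure. The field names and defaults are hypothetical; the AICC does not prescribe a specific schema.

```python
# Hypothetical sketch of a Data Fact Sheet captured in code.
# Field names are illustrative, not an AICC-prescribed schema.
from dataclasses import dataclass, field

@dataclass
class DataFactSheet:
    dataset_name: str
    source: str                       # where the data originated
    purpose: str                      # the agreed project purpose it supports
    contains_personal_data: bool
    known_limitations: list[str] = field(default_factory=list)
    retention_note: str = "Deleted at end of engagement unless otherwise agreed"

sheet = DataFactSheet(
    dataset_name="customer_enquiries_2024",
    source="SME CRM export",
    purpose="Triage classification proof of concept",
    contains_personal_data=True,
    known_limitations=["English-language records only", "No records before 2022"],
)
print(sheet)
```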
  5. The list of activities outlined above is non-exhaustive. Where required and identified as necessary, the AICC team will provide additional support. This may include, but is not limited to:
    • Navigation of Specific Regulations: Review and interpretation of relevant sector-specific, national, or international regulations to ensure AI projects / systems align with legal and ethical standards.
    • Navigation of Legislation in Northern Ireland: Identify and apply Northern Ireland-specific legal and ethical frameworks that govern the development and deployment of AI technologies.
    • Navigation of Northern Ireland and other Geographies: Compare and contrast AI-related legislation across Northern Ireland and other relevant jurisdictions to maintain compliance across regions.
    • In-depth analysis of specific use cases: Conduct evaluations of individual AI projects / systems to assess risks, benefits, and regulatory implications in context.
  6. The AICC will consider your AI project / system during your organisation’s onboarding (see Considerations for Engagement with AICC in this policy); however, if the AICC finds unacceptable risk following the ethical assessments during the active engagement, the AICC may need to re-evaluate its engagement under any agreement or Statement of Work.

Section 1: Governance – Lines of Responsibility in AICC

  1. We will establish and communicate the following lines of responsibility for all engagements with the AICC Transformer Programme.
  2. Defining the Levels of Responsibility
    • Level 1 – Primary Points of Contact: This level includes the direct delivery team responsible for the execution of the project. These are the first points of engagement and contact for operational activities, day-to-day communications, and implementation. They handle routine matters and are on-hand to manage most issues internally.
    • Level 2 – Core Points of Contact: This level provides governance, oversight, and support to the primary team. They are responsible for ensuring compliance, maintaining quality standards, and handling escalation that cannot be resolved at Level 1. They also serve as liaison points to ensure the project aligns with broader objectives and policies.
    • Level 3 – Escalation Level 1 - Points of Contact: This level is brought in only when issues cannot be resolved by the core team. It typically involves more strategic oversight and decision-making authority. Individuals at this level may engage with advisory bodies or act as intermediaries to executive leadership.
    • Level 4 – Escalation Level 2 - Points of Contact: This level is reserved for serious or unresolved matters that require institutional or ethical scrutiny. Engagement is rare and only necessary when issues have significant implications for policy, ethics, or institutional compliance.
  3. Defining the Roles and Responsibilities in the AICC Transformer Programme
    • All relevant personnel in the outlined roles are available via AICC.CO and will be highlighted in the communications for the Transformer Programme.
    • If a team member is, or becomes, unavailable, the AICC will communicate and adapt the lines of responsibility, where possible, to limit the impact on programme continuity.

Section 2: Governance – Communication of AI in Projects

  1. We believe that AI projects should include consistent, clear and transparent communications. Our communication should enhance the operational progress of our Transformer Programme. We will rely on effective communication to:
    • Explain our approach and decision making: Foster communication practices that support transparency around our combined or independent decision-making for your AI project.
    • Support alignment with ethical and legal standards: Use communication to reinforce how AI development and deployment align with values, industry norms, and applicable regulations.
    • Communicate risks, limitations and issues clearly: Ensure that known limitations, uncertainties, and potential impacts of AI systems are communicated in language appropriate for all audiences, including non-technical stakeholders.
    • Encourage open dialogue and feedback loops: Promote a culture where concerns, suggestions, and differing perspectives on the AI project are welcomed and acted upon throughout the Transformer Programme.
    • Build shared understanding: Ensure that everyone involved – from technical teams to leadership – has access to clear, consistent information about AI projects, methods, risks, outcomes, and use.
  2. The AICC will assign responsibilities for communication amongst its lines of responsibility, including communication with all organisation individuals and groups, and AICC project team members that are involved in the AI project lifecycle or those that may be impacted by its progress.
  3. During your engagement with the AICC in the Transformer Programme, there are two core phases of communication and engagement:
    • Phase 1: Focuses on the communication which takes place before beginning the active engagement (i.e., project activities). This Phase focuses on getting you prepared for engagement.
      • As per the AICC Lines of Responsibility, the main persons responsible for communication would be the Deputy Director of Business Engagement and the Deputy Director of AI Technology.
    • Phase 2: Focuses on the active engagement (i.e., the project activities) to be completed to realise the purpose of the AI project / system.
      • As per the AICC Lines of Responsibility, the main persons responsible for communication would be the Project Lead and identified members of the Project Team.

Section 3: Outcomes of the Transformer Programme

  1. The Transformer Programme works with you to help realise your AI ambitions, and will work to produce reports, insights, and, in some cases, technical outputs such as a Proof of Concept or Pilot.
  2. At the AICC we:
    • don’t deliver AI solutions as production-ready: We do not build or claim to provide full-scale, production-grade AI systems; our work stops at the exploration, prototyping, and pilot phases.
    • don’t present Proofs of Concept or Pilots as deployable solutions: We never position experimental models or prototypes as final products; they are intended only for validation and learning.
    • don’t own or assume responsibility for post-pilot deployment: We do not oversee or take accountability for operationalising AI beyond the agreed pilot or experimental scope.
    • don’t proceed without a clear exit or transition strategy: We do not initiate pilots without defining how they will be concluded, handed over, or decommissioned responsibly.
    • don’t retain ongoing responsibility for deployed models: We do not offer long-term support, monitoring, or model lifecycle management after the pilot phase; this is the domain of operational teams in your organisation.

Part 5: Considerations for Engagement with AICC

  1. The Partnership Agreement outlines the terms and conditions under which the AICC agrees to work with SMEs. It details responsibilities and terms for the engagement and confirms the SME’s position on the AICC Transformer Programme.
  2. Engagement with SMEs in the Transformer Programme includes multiple assessments of Ethics & Governance in AI work. However, the AICC may not engage with SMEs for the Transformer Programme if:
    • the company or organisation is using technology in ways that undermine democracy, civil rights, or health & well-being; or
    • the company or organisation is demonstrating a culture of secrecy and/or closedness which would impact our collaboration; or
    • the company or organisation is aiming to profit from addiction, misinformation, or the exploitation of vulnerable populations; or
    • the company or organisation is engaged in fraud, embezzlement, bribery, or unethical lobbying which undermines the trust that sustains business and society; or
    • the company or organisation demonstrates an unwillingness to be open and collaborative as they progress with their use of AI and the expertise of the AICC.
  3. The list above is not exhaustive. These considerations are made before a project may progress into an active engagement within the Transformer Programme; however, they may not always act as a barrier to engagement.

Part 6: High-Risk Areas & Corresponding Projects

  1. SME projects in the Transformer Programme may be considered ‘High Risk’. These projects will be continuously monitored throughout the engagement due to their sensitive nature.
  2. Projects which are identified as ‘High Risk’ may be subject to additional action or termination of the Partnership Agreement under the Transformer Programme.
  3. Projects which are classified as ‘High Risk’ are not prevented from engagement in the Transformer Programme, but their classification may impact the collaboration scope between the AICC and the SME.
  4. ‘High Risk’ projects will be identified before the kick-off of the Transformer Programme engagement. These include:
    • Biometric Identification Systems: A type of artificial intelligence system that uses biometric data – such as facial features, fingerprints, iris patterns, or voice recognition – to automatically identify or verify the identity of individuals. These are particularly sensitive due to their potential impact on fundamental rights and civil liberties, including privacy, data protection, and non-discrimination.
      • Example: Real-time facial recognition in public spaces by law enforcement.
      • Context: Biometric identification systems pose a high risk to individual privacy due to the deeply personal nature of the data they collect – such as fingerprints, facial features, iris patterns, or voiceprints – which are unique and unchangeable. Unlike passwords or PINs, biometric traits cannot be easily revoked or reset if compromised, making individuals perpetually vulnerable to identity theft and misuse.
    • AI for Military or Defence Purposes: AI used in defence contexts (e.g., autonomous weapons, surveillance, decision support) raises ethical concerns around human control, accountability, proliferation, and potential escalation of conflicts.
      • Example: Autonomous drones used for target selection and engagement.
      • Context: Without human oversight, these systems could make life-and-death decisions based on flawed data or bias, raising ethical and legal accountability issues.
    • AI Systems for Education & Training: AI tools that assess students, personalise learning, or make decisions about educational paths can reinforce bias, misjudge individual needs, and affect long-term opportunities.
      • Example: AI scoring systems for standardised testing or automated grading.
      • Context: These tools may misinterpret diverse learning styles or non-standard responses, disadvantaging students from underrepresented or non-traditional backgrounds.
    • AI used for Medical Purposes: AI in diagnostics, treatment recommendations, or healthcare resource allocation must be accurate, explainable, and bias-free due to its direct impact on patient health & safety.
      • Example: AI diagnostic tools analysing X-rays or MRI scans.
      • Context: An undetected bias in training data can lead to misdiagnoses for certain demographics, compromising patient outcomes and trust in healthcare systems.
    • AI for Public Services: AI systems in areas such as welfare, law enforcement, or social support may lead to systemic bias, exclusion, and a lack of recourse for affected individuals.
      • Example: AI-based eligibility assessment for unemployment benefits.
      • Context: Errors or opaque logic may unjustly deny people access to essential support, with limited human appeal processes and disproportionate effects on vulnerable populations.
    • AI for Workplace Hiring: AI used in recruitment, performance evaluation, worker management, or workplace monitoring risks discrimination, data misuse, and lack of transparency in employment decisions.
      • Example: AI resume screening software that filters candidates based on keywords or predicted job performance.
      • Context: These systems can reinforce existing biases in hiring data, excluding qualified applicants from marginalised groups without explanation or recourse.
    • AI used for Gambling or Gaming Purposes: AI in gaming and gambling can amplify addictive behaviours, manipulate user engagement, and target vulnerable individuals with little accountability.
      • Example: AI-driven personalisation in online casino platforms.
      • Context: By learning user behaviour, AI can increase engagement in harmful ways, exploiting addictive tendencies and resulting in significant financial and psychological harm.
    • AI impacting Child Safety on Digital & Online Platforms: AI moderation, content recommendations, and interaction analysis must be rigorously controlled to prevent exploitation, exposure to harmful content, and data misuse involving minors.
      • Example: Content recommendation algorithms on platforms like YouTube or TikTok.
      • Context: These can expose children to inappropriate content, radicalisation, or online predators, often without adequate parental controls or age verification.
    • AI used by Police or Similar Agencies: Predictive policing, surveillance, and risk profiling using AI can institutionalise bias, violate privacy, and erode public trust if not governed by clear legal frameworks.
      • Example: Predictive policing systems that forecast crime “hot spots”.
      • Context: These systems often disproportionately target low-income or racially diverse communities, perpetuating historical biases and increasing surveillance in already over-policed areas.
    • AI used in Autonomous Systems: AI operating with minimal human intervention (e.g., drones, vehicles, robots) introduces safety, accountability, and control risks, particularly in dynamic or public environments.
      • Example: Self-driving cars navigating urban environments.
      • Context: A malfunction or incorrect decision can cause traffic accidents, with questions around liability, ethical decision-making, and fallback protocols remaining unresolved.
    • AI for Judicial or Political Purposes: AI influencing legal decisions, case analysis, or political campaigns can threaten due process, fairness, and democratic integrity if not transparent and equitable.
      • Example: AI tools predicting recidivism rates to inform sentencing or parole.
      • Context: These tools may rely on historical data that reflect systemic bias, potentially leading to harsher penalties for certain groups and undermining judicial fairness.
    • AI to Manage & Operate Critical Infrastructure: AI in energy, transport, water, and communication systems must be robust and resilient to avoid catastrophic failures and maintain public safety and continuity.
      • Example: AI controlling electrical grid demand and supply balancing.
      • Context: A malfunction or cyberattack could trigger widespread blackouts, economic disruption, or public safety hazards if not properly secured and supervised.
    • AI used for Immigration or Border Activities: AI in migration, asylum, or border control can affect fundamental rights, risk unjust profiling, and lack the transparency needed for fair human oversight.
      • Example: AI tools assessing visa or asylum applications based on predicted integration success.
      • Context: These systems may encode cultural bias or use unverifiable metrics, impacting people’s fundamental rights and access to fair immigration processes.
    • General Purpose AI with Risk: Versatile AI models capable of multiple tasks (e.g., LLMs) pose unpredictable risks when applied beyond their intended scope, including misinformation, bias, and malicious use.
      • Example: Large Language Models generating news articles or legal documents.
      • Context: If misused, these systems can produce convincing misinformation, fake legal advice, or impersonations, with broad societal implications for trust and truth.

Part 7: Privacy and Data Security

  1. At the AICC, we place a high priority on the protection of personal and sensitive data used in the development and operation of artificial intelligence systems. Our approach draws from recognised best practices in data privacy and information security, including principles reflected in standards such as ISO/IEC 27001 (Information Security Management) and ISO/IEC 27701 (Privacy Information Management), while adapting to the specific needs of our organisation and the regulatory environment in which the AICC and its Transformer Programme participants operate.
  2. Data Minimisation and Purpose Limitation
    • When engaging with the Transformer Programme, the AICC Team will focus on collecting and using only the data necessary for the legitimate purposes defined by agreement.
    • The AICC does not use shared data for unrelated objectives unless clear, lawful grounds and appropriate consents are established.
  3. Data Anonymisation and De-Identification
    • Where practical, personal data should be subject to anonymisation or de-identification techniques to reduce risks of identification before being shared with the AICC.
    • Where appropriate the responsibility for data provision will lie with the SME collaborating with the AICC during the Transformer Programme, not the AICC itself.
    • If data is to be anonymised internally by members of the AICC team, it will be done using recognised methodologies and periodically evaluated for effectiveness in light of emerging technologies.
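
As a minimal illustration of one recognised de-identification step, the sketch below replaces a direct identifier with a salted hash (pseudonymisation). Pseudonymised data generally remains personal data under the GDPR, so full anonymisation typically requires further techniques such as generalisation or aggregation; the record fields here are illustrative.

```python
# Minimal sketch: replace a direct identifier with a salted hash
# (pseudonymisation, not full anonymisation). Fields are illustrative.
import hashlib
import secrets

salt = secrets.token_bytes(16)  # keep secret and stored separately from the data

def pseudonymise(identifier: str) -> str:
    return hashlib.sha256(salt + identifier.encode("utf-8")).hexdigest()

record = {"email": "jane@example.com", "enquiry": "pricing question"}
record["email"] = pseudonymise(record["email"])
print(record)  # the enquiry survives; the direct identifier does not
```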
  4. Consent & Transparency
    • The AICC is committed to providing SME organisations and individuals with clear, accessible information on how their data is used in AI applications.
    • Such information shall always be outlined in clear and understandable language and included in handover documentation related to the Transformer Programme.
    • Where required, the AICC will endorse informed, opt-in consent, and ensure mechanisms are in place to support user rights regarding their data.
    • Opt-in consent is a data protection and privacy principle in which individuals are given a clear, affirmative choice to allow the collection, processing, or use of their personal data within artificial intelligence. It requires that no personal data is processed unless the individual has explicitly agreed to it through a clear, informed action – such as checking a box, clicking an “I agree” button, or signing a digital or paper form.
      • In the context of Responsible AI, and engagement on the Transformer Programme with the AICC, we endorse opt-in consent for the reasons below (a hypothetical sketch of recording such consent follows this list):
        • Building Trust: It respects individual autonomy and fosters transparency by ensuring users are fully aware of how their data will be used in AI systems.
        • Preventing Assumed or Implied Consent: It prevents data from being used under vague or passive assumptions – such as pre-ticked boxes or bundled consent – ensuring more ethical data collection.
        • Meeting Legal Standards: Many data protection laws, including the General Data Protection Regulation (GDPR), emphasise the importance of opt-in models for collecting data, especially sensitive personal information or data collected for purposes beyond what is strictly necessary.
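
A hypothetical sketch of how opt-in consent of this kind might be recorded and checked, defaulting to no processing unless a matching, explicit opt-in record exists. The field names and helper are illustrative, not a prescribed design.

```python
# Hypothetical sketch of opt-in consent records: nothing is processed unless
# an explicit affirmative action was captured for that specific purpose.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    subject_id: str
    purpose: str          # the specific use the person agreed to
    action: str           # e.g. "checked box", "clicked I-agree", "signed form"
    timestamp: datetime

def may_process(consents: list[ConsentRecord], subject_id: str, purpose: str) -> bool:
    """Default is no: process only with a matching, explicit opt-in record."""
    return any(c.subject_id == subject_id and c.purpose == purpose for c in consents)

consents = [ConsentRecord("user-42", "model training", "clicked I-agree",
                          datetime.now(timezone.utc))]
assert may_process(consents, "user-42", "model training")
assert not may_process(consents, "user-42", "marketing")  # no bundled consent
```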
  5. Secure Data Handling
    • Our data handling practices reflect key principles from internationally accepted information security frameworks, including structured access controls, encryption of sensitive data, and continuous monitoring (a minimal encryption sketch follows this list).
    • The AICC will regularly review internal processes to identify and mitigate risks to data integrity and confidentiality.
    • The AICC will manage data as aligned to its Data Usage Policy and its Privacy Policy.
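
As a minimal sketch of the encryption practice mentioned above, the example below uses Fernet (symmetric, authenticated encryption) from the widely used Python cryptography package; the choice of library is an assumption for illustration, not a policy requirement.

```python
# Minimal sketch of encrypting sensitive data at rest with Fernet, from the
# third-party "cryptography" package (an illustrative choice). In practice the
# key would live in a key-management service, never alongside the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store in a KMS / secrets manager
cipher = Fernet(key)

token = cipher.encrypt(b"participant notes: sensitive content")
assert cipher.decrypt(token) == b"participant notes: sensitive content"
```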
  6. Third Party and Supply Chain Risk Management
    • Any third-party vendors or partners involved in the AI lifecycle must adhere to comparable data privacy and security standards.
    • Due diligence and contractual safeguards are required to manage risks related to external data processors or service providers.
    • The AICC endorses that collaborators working with external third-party providers should complete regular audits and assessments for handling personal or sensitive data.
  7. Data Quality and Integrity
    • During the Transformer Programme, the AICC will review data quality and requirements for the progression of any AI project.
    • The data used for an AI project must be accurate, complete, up-to-date and appropriate for the purposes agreed between the AICC and the SME. The AICC will aim to utilise data which is representative and fit-for-purpose.
    • The AICC will review and advise if the data quality is not sufficient for the purposes of the AI project. To identify fit-for-purpose data, the AICC will review the following areas (some of which are sketched in code at the end of this clause):
      • Relevance: Does the data answer the business question or need?
      • Accuracy: Is the data free from significant errors?
      • Completeness: Is the data missing critical values or entire data sets?
      • Timeliness: Is the data recent and updated enough for the needs of the business?
      • Consistency: Is the data uniform across sources and time?
      • Validity: Does the data conform to defined rules or standards?
      • Accessibility & Usability: Can you access and work with the data effectively?
      • Bias & Representation: Does the data represent the population or area of interest?
      • Volume: Is there enough data to support the analysis, and is it detailed enough?
      • Cost & Effort: Is the value of the data worth the effort to clean and process it?
    • The AICC will endeavour to avoid error propagation (i.e., the way inaccuracies, biases, or flaws in data or models can spread or be amplified through the various stages of an AI system). To reduce the potential for error propagation, the AICC will:
      • Aim to Avoid Poor-Quality Input Data: AICC Project Teams in the Transformer Programme will review and assess the data which will be used to train, feed, or demo an example of AI in action.
      • Consider Data over the AI Journey: Project Teams in the Transformer Programme will review data in the multi-step pipelines for AI systems and corresponding operational processes (e.g., data preparation > preprocessing > model training > inference > deployment > maintenance).
      • Avoid Assumption or Misunderstanding of Data: AI has the potential to make assumptions (e.g., linear relationships, independence of variables) that may not prove true in real-world data. The AICC Team will endeavour to ensure that your AI project interprets and interacts with your data as required and as appropriate under the agreed scope of activities.
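
Some of the fit-for-purpose checks listed in the clause above lend themselves to simple automation. The sketch below covers three of them (Completeness, Timeliness, and Volume) with pandas; the thresholds, column name, and file name are illustrative assumptions, and the remaining checks still need human and domain review.

```python
# Minimal sketch automating three data-quality checks from the list above.
# Thresholds and names are illustrative, not AICC-prescribed.
import pandas as pd

def quality_report(df: pd.DataFrame, date_col: str, min_rows: int = 1000,
                   max_missing: float = 0.05, max_age_days: int = 365) -> dict:
    """Check Volume, Completeness, and Timeliness of a dataset."""
    age_days = (pd.Timestamp.now() - pd.to_datetime(df[date_col]).max()).days
    return {
        "volume_ok":       len(df) >= min_rows,
        "completeness_ok": df.isna().mean().max() <= max_missing,
        "timeliness_ok":   age_days <= max_age_days,
    }

df = pd.read_csv("project_data.csv")  # hypothetical input file
report = quality_report(df, date_col="created_at")
if not all(report.values()):
    print("Data quality concerns to review:", report)
```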
  8. Outside of the Transformer Programme, the AICC believes that addressing the following areas can help SMEs and AI projects avoid error propagation (a minimal drift-monitoring sketch follows this list):
    • Poor Quality Data: Implement robust data validation and cleansing pipelines to detect inaccuracies, inconsistencies, or missing values before use.
    • Compounding Errors in Processing Pipelines: Design modular, auditable pipeline components and insert checkpoints to assess output quality after each stage.
    • Model Assumptions and Simplifications: Conduct model stress-testing and scenario analysis to challenge assumptions and expose hidden weaknesses.
    • Human-in-the-Loop Feedback Loops: Establish bias detection audits in human-AI decision systems and provide guidance/training for users interpreting AI outputs.
    • Lack of Explainability or Traceability: Use explainable AI (XAI) tools and maintain comprehensive logs to trace inputs, transformations, and decisions.
    • Poor Monitoring Post-Deployment: Implement continuous monitoring for data drift, model performance degradation, and user-reported issues.
    • Unrepresentative Training Data: Perform data representation analysis and retrain models with diverse, real-world scenarios.
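
For the post-deployment monitoring point above, here is a minimal sketch of one common data-drift measure, the Population Stability Index (PSI), comparing a feature’s live distribution against the distribution seen in training. The 0.2 alert threshold is a widely used rule of thumb, not an AICC requirement.

```python
# Minimal sketch of drift detection with the Population Stability Index (PSI).
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference sample and a live sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # clip to avoid log(0) and division by zero in sparse bins
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training_feature = rng.normal(0.0, 1.0, 5000)  # distribution seen at training time
live_feature = rng.normal(0.5, 1.0, 5000)      # shifted live distribution
if psi(training_feature, live_feature) > 0.2:  # common rule-of-thumb threshold
    print("Significant drift detected: review model performance and inputs")
```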

Part 8: Our 10 Don’ts for Responsible AI at the AICC

At the AICC, we:


Part 9: Updating & Maintaining this Policy

  1. This Policy will be reviewed and updated on a regular basis to ensure it remains aligned with the latest developments in Responsible AI practices, ethical standards, and the evolving impact of AI in real-world contexts.
  2. Reviews will consider emerging regulatory requirements, advances in technology, societal expectations, and organisational learnings from AI development, deployment, education and adoption.
  3. Updates to this Policy will be made to reinforce ethical permissibility, promote responsible innovation, and mitigate any identified risks associated with AI systems in use.

Glossary of Terms

Responsible AI

The practice of designing, developing, and deploying AI systems in ways that are ethical, transparent, fair, accountable, sustainable, and aligned with human rights and societal well-being.

Transformer Programme

AICC’s structured 20-day engagement journey designed to help SMEs understand, explore, and apply AI solutions effectively and ethically.

SME

Small and Medium-sized Enterprises – businesses engaged with AICC support through the Transformer Programme.

Values

Core beliefs guiding the AICC’s approach: Careful ideation (CARE), honest and fair development (INTEGRITY), and innovation built on dependable relationships (TRUST).

Principles

Operational guidelines shaping how the AICC applies its values to AI projects: ensuring equity (FAIRNESS), clear responsibility (ACCOUNTABILITY), long-term benefits and positive impacts (SUSTAINABILITY), and openness (TRANSPARENCY).

Compliance

Adherence to legal and regulatory requirements, particularly the EU AI Act, UK AI Principles, and sector-specific laws, ensuring AI systems are lawful and ethical.

EU AI Act

The European Union’s regulatory framework setting legal standards for AI based on risk levels, transparency, human oversight, and protection of rights.

Windsor Framework

Agreement impacting Northern Ireland’s regulatory landscape, maintaining partial alignment with EU rules post-Brexit.

CAGE

Core Assessments of Governance & Ethics, which refer to evaluations conducted during the Transformer Programme (or outside of this programme), including Data Fact Sheets, Harm Assessments, Policy Reviews, and AI Project Reviews, to ensure ethical compliance.

Data Fact Sheet

A structured document summarising the data used in an AI project, explaining its origins, limitations, and relevance to the project.

Harm Assessment

A structured evaluation to identify real-world risks, biases, and unintended consequences in AI projects.

Governance

The framework of roles, responsibilities, and oversight practices ensuring responsible and effective AI project management.

Primary Points of Contact

Individuals responsible for daily management and smooth operation of AI engagements during the Transformer Programme.

Escalation Points of Contact

Individuals responsible for addressing critical concerns that cannot be resolved by the primary points of contact.

Proof of Concept (PoC)

An AI prototype developed during the Transformer Programme to test and validate ideas before larger implementation or development.

Pilot

An experimental deployment of an AI system with limited scope to assess feasibility, without guaranteeing final production deployment.

Exit Strategy

A defined plan for the responsible conclusion or transition of an AI pilot or project, ensuring handover or decommissioning.

Ethical Permissibility

Evaluating and ensuring that AI projects are not only legal but also morally acceptable and socially beneficial.

High-Risk AI Systems

Systems classified under the EU AI Act as posing significant risk to health, safety, fundamental rights, or democratic processes, requiring strict controls.

Prohibited AI Practices

AI applications forbidden by law (e.g., subliminal manipulation, exploitative targeting, unauthorised biometric categorisation) as outlined in the EU AI Act.

Explainability

The ability to clearly explain how an AI system makes its decisions or predictions in understandable terms for users and stakeholders.

Accountability

The principle that humans, not AI, are responsible for outcomes and decisions made through AI systems.

Transparency

The principle of making AI usage, decisions, data sources, and risks understandable and visible to users and affected parties.

Sustainability

Ensuring that AI solutions are designed for long-term benefit, resource efficiency, maintainability, and positive societal impact.

Fairness

Ensuring that AI systems are free from bias, discrimination, or exclusion of individuals or groups.

Ethics & Governance Assessment

A structured evaluation performed by AICC to determine if AI projects meet ethical, regulatory, and governance standards.

Sector-specific Regulations

Laws and standards applying to specific industries (e.g., healthcare, finance, education) that AI systems must comply with in addition to general AI regulations.

Consent

Voluntary, informed, and clear agreement by individuals or organisations to the collection, sharing, and use of their data in AI projects.

Open Dialogue

An engagement approach that encourages feedback, transparency, and shared understanding between AICC and SMEs during AI projects.

AI Redress Mechanism

Processes allowing individuals to challenge, correct, or seek remedies for adverse outcomes resulting from AI systems.

Dual Regulatory Obligations

The requirement to comply with both UK and EU AI laws where Northern Ireland operates under intersecting legal frameworks.

Data Protection

Legal and ethical standards ensuring personal data is processed securely, fairly, and transparently (e.g., GDPR compliance).

Ethical Exit

A planned and responsible handover or closure of an AI engagement, ensuring no ongoing harm or abandonment of AI projects.

Reactive Compliance

Waiting for regulations to mandate action; contrasted with AICC’s proactive approach of anticipating and aligning with evolving standards.

ISO/IEC 27001 – Information Security Management

Provides requirements for establishing, implementing, maintaining and improving an information security management system (ISMS).

ISO/IEC 27701 – Privacy Information Management

Provides guidance on managing privacy controls and building a privacy information management system (PIMS).

ISO/IEC 38505 – Governance of Data

Offers frameworks for the effective governance of data to support decision-making and compliance throughout its lifecycle.

Updated May 2025
