Cybersecurity Checklist: ETSI Standards

Strengthen your AI systems with a trusted checklist covering key cybersecurity standards and safeguards.

Artificial Intelligence offers opportunities but also introduces unique cybersecurity threats: prompt injection, data poisoning, model theft, adversarial manipulation, and risks from insecure end-of-life practices. To build trustworthy and resilient AI, companies need structured guidance on security.

The UK Government’s Code of Practice for the Cyber Security of AI has been adopted globally as ETSI TS 104 223, establishing a baseline international standard for AI security. It defines 13 principles, expanded into 72 detailed provisions that span the full AI lifecycle.

The following tool translates those requirements into a practical checklist that helps organisations get started, track progress, and prepare for future assurance or certification.

Core Considerations for Companies

Before applying the checklist, reflect on the following questions:

  1. Are you developing your own AI models, integrating third-party models, or operating AI-enabled systems?
  2. Is your AI at the design, development, deployment, maintenance, or end-of-life phase?
  3. What AI-specific risks (e.g., poisoning, extraction, adversarial inputs) could affect your system?
  4. Do you have the right people, processes, and tools to manage AI security?
  5. How will you demonstrate compliance (e.g., logs, risk registers, test results, user communications)?

References for this tool

This checklist is based on:

  1. The UK Government’s Code of Practice for the Cyber Security of AI
  2. ETSI TS 104 223, the international standard adopted from that Code

Together, these define global baseline cybersecurity requirements for AI, providing practical and scenario-driven ways to secure AI across its lifecycle.

ETSI AI Cybersecurity Compliance Checklist

Note: ETSI TS 104 223’s 13 principles are expanded into 72 provisions. This checklist condenses them into SME-friendly steps. For deeper alignment, organisations should map their practices to the corresponding ETSI provisions.

Stage 1: Secure Design

The foundation of secure AI begins at the design stage. Decisions made here shape the resilience, trustworthiness, and long-term safety of the system. Secure design means building security into the architecture from the outset, not bolting it on later. At this stage, organisations should ensure that teams are trained, threats are anticipated, and safeguards are embedded into requirements and oversight structures.
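
To make "threats are anticipated" concrete, a design-stage threat register can start as a handful of structured records linking each AI-specific threat to an asset, a planned safeguard, and an owner. The sketch below is illustrative only; the fields, threats, and owners are assumptions, not something ETSI prescribes:

```python
from dataclasses import dataclass

@dataclass
class ThreatEntry:
    threat: str     # AI-specific threat being tracked
    asset: str      # system component it affects
    safeguard: str  # design-time mitigation
    owner: str      # accountable role

# Illustrative entries; a real register should come out of a structured
# threat-modelling exercise, not a hard-coded list.
register = [
    ThreatEntry("prompt injection", "chat interface",
                "input filtering and output constraints", "AppSec lead"),
    ThreatEntry("data poisoning", "training pipeline",
                "dataset provenance checks", "ML engineer"),
    ThreatEntry("model theft", "inference API",
                "rate limiting and access logging", "platform team"),
]

for entry in register:
    print(f"{entry.threat}: {entry.safeguard} (owner: {entry.owner})")
```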

Ask yourself:

Stage 2: Secure Development

Once an AI system moves beyond design, the focus shifts to building securely. This stage is about controlling what goes into your system, from datasets and code to third-party components, and about making sure that the development environment itself is resilient to attack. Good practices here reduce vulnerabilities that adversaries could exploit later.
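
One practical way to control what goes into your system is to pin approved datasets, model weights, and third-party artefacts to known cryptographic hashes and verify them before every build. Below is a minimal Python sketch under that assumption; the manifest format, file paths, and digests are placeholders, not an ETSI requirement:

```python
import hashlib
from pathlib import Path

# Expected SHA-256 digests, recorded when each artefact was approved.
# File names and digests here are placeholders.
MANIFEST = {
    "data/training_set.csv": "<approved-digest>",
    "models/base_model.bin": "<approved-digest>",
}

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file on disk."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

for path, expected in MANIFEST.items():
    if sha256_of(path) != expected:
        raise RuntimeError(f"Integrity check failed for {path}; refusing to build.")
```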

Ask yourself:

Stage 3: Secure Deployment

When an AI system is released into the world, security risks shift from theory to practice. Deployment is where systems interact with real users, environments, and adversaries. At this stage, organisations must ensure that users understand the system’s purpose and limitations. Policies should be communicated clearly, and safeguards should be put in place to detect and prevent misuse. Transparency and preparedness are key to building trust and resilience.
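
As one hedged example of a deployment safeguard, user inputs can be screened for obvious prompt-injection patterns before they reach the model, with blocked attempts logged as evidence. The patterns below are illustrative and deliberately simplistic; a real filter needs continual tuning and should never be the only control:

```python
import logging
import re

logging.basicConfig(level=logging.INFO)

# Illustrative red-flag patterns; a production filter would be far broader
# and sit alongside other controls (output checks, rate limits, human review).
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal your system prompt", re.IGNORECASE),
]

def screen_input(user_input: str) -> bool:
    """Return True if the input looks safe, False if it should be blocked."""
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(user_input):
            logging.warning("Blocked suspicious input: %r", user_input[:80])
            return False
    return True
```

Hits from screening like this are signals for review, not proof of attack; the logs also double as compliance evidence.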

Ask yourself:

Stage 4: Secure Maintenance

Security doesn’t end once an AI system is deployed. It must be actively maintained throughout its lifecycle. Models can drift, new vulnerabilities may emerge, and adversaries will adapt. Ongoing maintenance ensures systems remain resilient, up to date, and trustworthy. This stage is about monitoring behaviour, patching vulnerabilities, and having clear roles and processes in place to respond when things go wrong.
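
As a minimal sketch of behaviour monitoring, suppose each model response receives a numeric quality or safety score from an evaluation step (an assumption on our part, not part of the ETSI text); a rolling average can then flag drift against an operator-chosen threshold:

```python
from collections import deque
from statistics import mean

class BehaviourMonitor:
    """Rolling average of per-response scores; flags drift below a threshold."""

    def __init__(self, window: int = 100, threshold: float = 0.8):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def record(self, score: float) -> None:
        self.scores.append(score)

    def drifting(self) -> bool:
        # Only alert once the window is full, to avoid noisy early readings.
        return len(self.scores) == self.scores.maxlen and mean(self.scores) < self.threshold

monitor = BehaviourMonitor()
for score in [0.9] * 100:   # in production, scores come from evaluating real responses
    monitor.record(score)
print("Drift alert:", monitor.drifting())
```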

Ask yourself:

Stage 5: Secure End-of-Life

Even the most advanced AI system will eventually need to be retired. Without proper end-of-life practices, sensitive data, models, or configurations can be left exposed. This would create a serious risk of misuse or exploitation. Secure decommissioning ensures that when AI systems are phased out, they are disposed of responsibly, safely, and in line with cybersecurity best practice. This stage focuses on secure deletion, transfer, and documentation to prevent leakage or unintended use.
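
The sketch below illustrates one possible decommissioning step: deleting a retired artefact and appending an audit record as evidence. The file paths and log format are assumptions; note that on modern storage, reliable destruction may require encrypting data and destroying the key rather than simply deleting files:

```python
import json
import os
from datetime import datetime, timezone

AUDIT_LOG = "decommission_audit.jsonl"  # assumed append-only evidence file

def decommission(artifact_path: str, reason: str) -> None:
    """Delete a retired artefact and append an audit record of the action."""
    os.remove(artifact_path)
    record = {
        "artifact": artifact_path,
        "reason": reason,
        "deleted_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(record) + "\n")
```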

Ask yourself:

Additional Considerations for Companies

For Builders who are developing AI models:

For Acquirers or Integrators who are using third-party AI:

What’s next after this checklist?

Once you’ve worked through the checklist, the next step is to turn answers into action. Begin by addressing any gaps you’ve identified. Prioritise staff awareness, maintain an AI asset inventory, and set up basic monitoring (a starter inventory sketch follows below).
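
As a rough illustration of what a starter AI asset inventory might capture, the record below uses fields we have assumed (asset name, type, owner, data handled, lifecycle stage, review date); ETSI TS 104 223 does not prescribe a specific format:

```python
# A starter AI asset inventory as plain records; all fields are illustrative.
inventory = [
    {
        "asset": "customer-support-chatbot",
        "type": "third-party LLM integration",
        "owner": "Head of Operations",
        "data_handled": ["customer queries"],
        "lifecycle_stage": "deployment",
        "last_review": "2025-01-15",
    },
]

for item in inventory:
    print(f"{item['asset']} ({item['lifecycle_stage']}), owner: {item['owner']}")
```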

Next, map your practices against the full 72 ETSI provisions, ensuring alignment across the lifecycle. Capture evidence of your activities, such as logs, risk registers, test results, and user communications. This will help you to demonstrate compliance and progress over time.

Finally, make AI security a living process by revisiting the checklist regularly, updating your threat models, and refining controls as risks evolve. This positions your organisation to meet future assurance and certification requirements built on ETSI standards, while showing customers and partners that you are committed to responsible AI.

Commitment and Disclaimer
The Responsible AI Hub provides resources for learning, examples of AI in action and support for responsible AI. Use of these materials does not create any legal obligations or liability with the AICC.

We'd love to hear from you

Innovation thrives through connection. Whether you're an SME, researcher, or professional exploring AI, we’re here to help.
