
Cybersecurity and AI ETSI standards
Artificial Intelligence is powerful, but it’s also vulnerable. Attacks like data poison, prompt injection, or model theft are not science fiction. These are real risks for any business using AI.
Artificial Intelligence is powerful, but it’s also vulnerable. Attacks like data poison, prompt injection, or model theft are not science fiction. These are real risks for any business using AI.
Artificial Intelligence is powerful, but it’s also vulnerable. Attacks like data poison, prompt injection, or model theft are not science fiction. These are real risks for any business using AI. For small or medium enterprises (SMEs), the stakes are especially high, one breach could mean losing customer, damaging your reputation, or having to pay regulatory fines.
To help businesses manage these risks, the European Telecommunications Standards Institute (ETSI) has published global standards for AI cybersecurity. These standards, developed by ETSI’s Securing Artificial Intelligence (SAI) committee, build on the UK Government’s Code of Practice for the Cyber Security of AI and now serve as an international benchmark.
Traditional cybersecurity protects IT systems from familiar threats, like malware, phishing, ransomware and unauthorised access. The goal is to keep networks, software, and data safe.
AI cybersecurity goes further. AI systems introduce new types of vulnerabilities that don’t exist in ordinary IT. For example:
This means that AI needs special controls on top of traditional cybersecurity, such as logging prompts, monitoring models for drift, securing data pipelines and ensuring safe retirement of AI systems.
The ETSI standards bridge the two worlds. They build on proven cybersecurity practices while adding Ai-specific safeguards.
Purpose: Defines a global baseline of cybersecurity requirements for AI models and systems across their full lifecycle.
Scope: The standard sets out 13 core principles, which expand into around 72 detailed provisions. These provisions act as the practical building blocks of AI cybersecurity, covering the full AI lifecycle: Secure design, Development, Deployment, Maintenance, and End-of-Life.
The provisions drill down into specific, actionable steps. For example, under secure design, organisations are expected to adopt security-by-default approaches, minimise unnecessary features, an embed provenance tracking from the outset. For development and deployment, provisions address data integrity, versioning, testing against adversarial threats, and human-in-the-loop oversight. The maintenance phase includes continuous monitoring, prompt and model logging, anomaly detection, and patch management. Finally, end-of-life provision that AI systems and artefacts are securely retired, preventing misuse, leakage, or unintended reactivation.
Highlights from this standard:
The 13 principles outlined are: Awareness, Secure design, Threat evaluation, Human responsibility, Asset protection, Infrastructure security, Supply chain security, Documentation, Testing, Communication, Updates, Monitoring, Disposal.
By breaking these principles into 72 provisions, ETSI TS 104 223 goes beyond broad intentions and provides a concrete, auditable baseline. For SMEs. It sets out the “must-do” requirements to show that their AI systems are not only functional, but also secure, resilient, and responsibly managed.
Purpose: Offers practical, scenario-driven guidance to help organisation implement the 72 provisions set out in TS 104 223. Instead of only listing requirements, this guide show how to put them into practice in different real-world contexts.
Scope: The standard is designed to be practical. Rather than leaving organisation with abstract principle, it translates them into clear, real-world actions. The guide how to apply cybersecurity provisions to different contexts, aligns with other global frameworks, and provides advice tailored to the roles people play in building, deploying, or using AI systems.
Highlights from this standard:
The standard supports assurance and certification by translating baseline requirements into practice, TS 104 128 helps create a pathway for future audit or certification schemes. This is particularly important as regulators (e.g., EU AI Act) and markets increasingly expect demonstrable compliance. The standard is designed to evolve with new threats and technologies. As new attack vectors (like jailbreak attacks on large language models) emerge, TS 104 128 can be updated with fresh case studies, checklist, and mappings.
If TS 104 223 is the rulebook that tells you what must be done, then TS 104 128 is the playbook that shows you how to do it in practice. The two standards are designed to work together:
This relationship makes compliance more achievable for smaller organisation. Instead of struggling with a technical standard that feels abstract or too big to handle, SMEs can look to TS 104 128 for step-by-step explanations, examples, and practical measures that fit their resources. By working with both together, SMEs can demonstrate to customer, regulators, and partners that they meet global best practice in AI cybersecurity, without being overwhelmed.
For SMEs, aligning with ETSI cybersecurity standards offers three key benefits:
Cybersecurity is a core part of Responsible AI. ETSI’s standards help SMEs not only reduce risks but also prove they are serious about building safe, resilient, and trustworthy AI.
Commitment and Disclaimer
The Responsible AI Hub provides resources for learning, examples of AI in action and support for responsible AI. Use of these materials does not create any legal obligations or liability with the AICC.
Innovation thrives through connection. Whether you're an SME, researcher, or professional exploring AI, we’re here to help.