High Risk Areas and Safe Approaches for AI

Artificial Intelligence has enormous potential to transform industries, improve public services, and create new opportunities, but not every use of AI carries the same level of risk. Some uses of AI carry higher risks to people, communities and society. At the Artificial Intelligence Collaboration Centre (AICC), we call these High-Risk Areas.

Can you still work with AI in High-Risk Areas?

Being high-risk does not mean that projects are banned or discouraged. Instead, it means they are subject to continuous monitoring, responsible guardrails, and additional oversight during their development and deployment. Within the AICC Transformer Programme, these safeguards are especially important when working with small and medium enterprises (SMEs). High-risk projects may still proceed, but only with the right governance in place.

So, what are High-Risk Areas?

Our team has identified a set of High-Risk Areas where AI applications require closer scrutiny, stronger safeguards, and greater accountability. Below, we introduce each High-Risk Area, explain why it matters, and share examples of the risks involved.

By recognising High-Risk Areas and putting the right safeguards in place, we can ensure that AI is developed and deployed responsibly, unlocking its potential while protecting people, communities and society.

Commitment and Disclaimer
The Responsible AI Hub provides resources for learning, examples of AI in action and support for responsible AI. Use of these materials does not create any legal obligations or liability for the AICC.

We'd love to hear from you

Innovation thrives through connection. Whether you're an SME, researcher, or professional exploring AI, we're here to help.