High Risk Areas and Safe Approaches for AI
Artificial Intelligence has enormous potential to transform industries, improve public services, and create new opportunities, but not every use of AI carries the same level of risk. Some uses of AI carry higher risks to people, communities and society. At the Artificial Intelligence Collaboration Centre (AICC), we call these High-Risk Areas.
Can you still work with AI in High-Risk Areas?
Being high-risk does not mean that projects are banned or discouraged. Instead, it means they are subject to continuous monitoring, responsible guardrails, and additional oversight during their development and deployment. Within the AICC Transformer Programme, these safeguards are especially important when working with small and medium enterprises (SMEs). High-risk projects may still proceed, but only with the right governance in place.
So, what are High-Risk Areas?
Our team has identified a set of High-Risk Areas where AI applications require closer scrutiny, stronger safeguards, and greater accountability. Below, we introduce each High-Risk Area, explain why it matters, and share examples of the risks involved.
- Biometric Identification Systems: A type of artificial intelligence system that uses biometric data – such as facial features, fingerprints, iris patterns, or voiceprints – to automatically identify or verify the identity of individuals. These are particularly sensitive due to their potential impact on fundamental rights and civil liberties, including privacy, data protection, and non-discrimination.
- Example: Real-time facial recognition in public spaces by law enforcement.
- Context: Biometric identification systems pose a high risk to individual privacy due to the deeply personal nature of the data they collect—such as fingerprints, facial features, iris patterns, or voiceprints—which are unique and unchangeable. Unlike passwords or PINs, biometric traits cannot be easily revoked or reset if compromised, making individuals perpetually vulnerable to identity theft and misuse.
- Responsible AI Mitigations: Clear legal limits, human oversight, strong data security, and bias testing.
- AI for Military or Defence Purposes: AI used in defence contexts (e.g., autonomous weapons, surveillance, decision support) raises ethical concerns around human control, accountability, proliferation, and potential escalation of conflicts.
- Example: Autonomous drones used for target selection and engagement.
- Context: Without human oversight, these systems could make life-and-death decisions based on flawed data or bias, raising ethical and legal accountability issues.
- Responsible AI Mitigations: Ensure human control, international law compliance, and transparency in use.
- AI Systems for Education & Training: AI tools that assess students, personalise learning, or make decisions about educational paths can reinforce bias, misjudge individual needs, and affect long-term opportunities.
- Example: AI scoring systems for standardised testing or automated grading.
- Context: These tools may misinterpret diverse learning styles or non-standard responses, disadvantaging students from underrepresented or non-traditional backgrounds.
- Responsible AI Mitigations: Regular audits, inclusion of diverse learning data, and clear appeal processes.
- AI used for Medical Purposes: AI in diagnostics, treatment recommendations, or healthcare resource allocation must be accurate, explainable, and bias-free due to its direct impact on patient health and safety.
- Example: AI diagnostic tools analysing X-rays or MRI scans.
- Context: An undetected bias in training data can lead to misdiagnoses for certain demographics, compromising patient outcomes and trust in healthcare systems.
- Responsible AI Mitigations: Medical validation, explainability, and strict clinical oversight.
- AI for Public Services: AI systems in areas such as welfare, law enforcement, or social support may lead to systemic bias, exclusion, and a lack of recourse for affected individuals.
- Example: AI-based eligibility assessment for unemployment benefits.
- Context: Errors or opaque logic may unjustly deny people access to essential support, with limited human appeal processes and disproportionate effects on vulnerable populations.
- Responsible AI Mitigations: Transparency, human appeals, fairness checks, and community input.
- AI for Workplace Hiring & Management: AI used in recruitment, performance evaluation, worker management, or workplace monitoring risks discrimination, data misuse, and lack of transparency in employment decisions.
- Example: AI resume screening software that filters candidates based on keywords or predicted job performance.
- Context: These systems can reinforce existing biases in hiring data, excluding qualified applicants from marginalised groups without explanation or recourse.
- Responsible AI Mitigations: Diversity audits (a minimal illustration appears after this list), transparency in hiring decisions, and human review.
- AI used for Gambling or Gaming Purposes: AI in gaming and gambling can amplify addictive behaviours, manipulate user engagement, and target vulnerable individuals with little accountability.
- Example: AI-driven personalisation in online casino platforms.
- Context: By learning user behaviour, AI can increase engagement in harmful ways, exploiting addictive tendencies and resulting in significant financial and psychological harm.
- Responsible AI Mitigations: Strict regulation, responsible design, and player protection mechanisms.
- AI impacting Child Safety on Digital & Online Platforms: AI moderation, content recommendations, and interaction analysis must be rigorously controlled to prevent exploitation, exposure to harmful content, and data misuse involving minors.
- Example: Content recommendation algorithms on platforms like YouTube or TikTok.
- Context: These can expose children to inappropriate content, radicalisation, or online predators, often without adequate parental controls or age verification.
- Responsible AI Mitigations: Strong age checks, parental controls, and child-first design.
- AI used by Police or Similar Agencies: Predictive policing, surveillance, and risk profiling using AI can institutionalise bias, violate privacy, and erode public trust if not governed by clear legal frameworks.
- Example: Predictive policing systems that forecast crime “hot spots”.
- Context: These systems often disproportionately target low-income or racially diverse communities, perpetuating historical biases and increasing surveillance in already over-policed areas.
- Responsible AI Mitigations: Independent oversight, fairness testing, and legal guardrails.
- AI used in Autonomous Systems: AI operating with minimal human intervention (e.g., drones, vehicles, robots) introduces safety, accountability, and control risks, particularly in dynamic or public environments.
- Example: Self-driving cars navigating urban environments.
- Context: A malfunction or incorrect decision can cause traffic accidents, with questions around liability, ethical decision-making, and fallback protocols remaining unresolved.
- Responsible AI Mitigations: Clear accountability, rigorous testing, and emergency fallback controls.
- AI for Judicial or Political Purposes: AI influencing legal decisions, case analysis, or political campaigns can threaten due process, fairness, and democratic integrity if not transparent and equitable.
- Example: AI tools predicting recidivism rates to inform sentencing or parole.
- Context: These tools may rely on historical data that reflect systemic bias, potentially leading to harsher penalties for certain groups and undermining judicial fairness.
- Responsible AI Mitigations: Full transparency, fairness audits, and strict limits on political use.
- AI to Manage & Operate Critical Infrastructure: AI in energy, transport, water, and communication systems must be robust and resilient to avoid catastrophic failures and maintain public safety and continuity.
- Example: AI controlling electrical grid demand and supply balancing.
- Context: A malfunction or cyberattack could trigger widespread blackouts, economic disruption, or public safety hazards if not properly secured and supervised.
- Responsible AI Mitigations: Cybersecurity, resilience planning, and human monitoring.
- AI used for Immigration or Border Activities: AI in migration, asylum, or border control can affect fundamental rights, risk unjust profiling, and lack the transparency needed for fair human oversight.
- Example: AI tools assessing visa or asylum applications based on predicted integration success.
- Context: These systems may encode cultural bias or use unverifiable metrics, impacting people’s fundamental rights and access to fair immigration processes.
- Responsible AI Mitigations: Human-led decision-making, clear appeal routes, and transparency.
- General Purpose AI with Risk: Versatile AI models capable of multiple tasks (e.g., LLMs) pose unpredictable risks when applied beyond their intended scope, including misinformation, bias, and malicious use.
- Example: Large Language Models generating news articles or legal documents.
- Context: If misused, these systems can produce convincing misinformation, fake legal advice, or impersonations, with broad societal implications for trust and truth.
- Responsible AI Mitigations: Usage boundaries, monitoring, and ethical design.
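Several of the mitigations above call for bias testing, fairness checks, or diversity audits. As a minimal, purely illustrative sketch of what such a check can look like in practice, the Python example below compares selection rates across demographic groups and flags any group whose rate falls below four-fifths of the highest group's rate, a common rule-of-thumb screening test. The group labels, data layout, and 0.8 threshold are illustrative assumptions, not AICC requirements; a real audit should be designed with domain, legal, and affected-community input.

```python
# Illustrative sketch only: a simple selection-rate comparison across groups.
# Group names, data layout, and the 0.8 threshold are assumptions for this example.

from collections import defaultdict

def selection_rates(decisions):
    """Return the share of positive outcomes per group.

    decisions: iterable of (group, selected) pairs, where selected is a bool.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            positives[group] += 1
    return {group: positives[group] / totals[group] for group in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the best rate."""
    best = max(rates.values())
    if best == 0:
        return {group: False for group in rates}
    return {group: (rate / best) < threshold for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes: (group, selected)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(sample)
    print(rates)                           # e.g. {'A': 0.66..., 'B': 0.33...}
    print(disparate_impact_flags(rates))   # group B is flagged in this toy data
```

A check like this is only a starting point: it says nothing about why rates differ, and it should always be paired with the human review, transparency, and appeal routes described above.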
By recognising High-Risk Areas and putting the right safeguards in place, we can ensure that AI is developed and deployed responsibly, unlocking its potential while protecting people, communities and society.
Commitment and Disclaimer
The Responsible AI Hub provides resources for learning, examples of AI in action and support for responsible AI. Use of these materials does not create any legal obligations or liability for the AICC.