Risk Assessments
AI risk assessments are structured evaluations conducted before, and often throughout, deployment to identify potential harms, failures, and misuse scenarios associated with an AI system. They answer questions that are simple to pose but difficult to resolve: what could go wrong if this system were deployed, how severe would the harm be, and who would bear it? Risk assessments cover safety, fairness, discrimination, security, and privacy risks, as well as broader social and downstream impacts.
Compliance
Risk assessments are central to complying with the NIST AI Risk Management Framework, which is itself organised around identifying and assessing risk; ISO/IEC 42001, which formalises AI management systems that include risk processes; and the EU AI Act, which requires risk management systems for high-risk AI applications, including documented assessments and mitigation measures. In each of these regimes, risk assessment is a formal requirement rather than a voluntary practice.
In Practice
Risk assessments can include design reviews, model evaluations, safety reviews, and system-level evaluation processes. They help identify risks, estimate their likelihood and severity, and select mitigation strategies. They should be conducted before a major model or feature is released, so that potential risks are surfaced and addressed ahead of launch, and repeated regularly after rollout so that new risks are addressed as they arise. The depth of review will vary with the product and the level of risk involved.
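As a minimal sketch of the likelihood-and-severity step, the Python below scores each identified risk on a 1–5 likelihood × 1–5 severity scale and buckets the product into priority tiers. The Risk structure, the scales, and the tier thresholds are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a risk register (illustrative structure)."""
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- assumed scale
    severity: int    # 1 (negligible) .. 5 (catastrophic) -- assumed scale

def score(risk: Risk) -> int:
    """Classic likelihood x severity product, giving a score from 1 to 25."""
    return risk.likelihood * risk.severity

def priority(risk: Risk) -> str:
    """Bucket the score into priority tiers (thresholds are assumptions)."""
    s = score(risk)
    if s >= 15:
        return "critical"
    if s >= 8:
        return "high"
    if s >= 4:
        return "medium"
    return "low"

risks = [
    Risk("unsafe medical advice", likelihood=2, severity=5),
    Risk("prompt injection", likelihood=4, severity=3),
]
for r in risks:
    print(f"{r.name}: score={score(r)}, priority={priority(r)}")
```

Real programmes typically also record risk owners, mitigations, and residual risk after mitigation, but the core arithmetic behind prioritisation is often this simple.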
Organisations conduct risk assessments at several stages. At the pre-development or design stage, they identify intended use cases, define prohibited uses, and set safety requirements. At the pre-deployment stage, they evaluate model behaviour against safety benchmarks, test for bias, harmful outputs, and potential for misuse, conduct red-teaming exercises, and use the results to decide whether to launch and, if so, with what safeguards and limitations. At the post-deployment stage, they review incidents and new risks as they emerge and update mitigation strategies accordingly.
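The pre-deployment "launch or not, and with what safeguards" decision can be made auditable by encoding it as an explicit gate over evaluation results. The sketch below is hypothetical: the benchmark names, thresholds, and safeguard mappings are assumptions, and in practice they would be set by policy rather than hard-coded.

```python
# Hypothetical pre-deployment launch gate: benchmark names and
# thresholds are illustrative assumptions, not an established standard.
HARD_BLOCKERS = {"self_harm_safety": 0.99, "csam_refusal": 1.0}
SAFEGUARD_TRIGGERS = {
    "prompt_injection_resistance": (0.90, "restrict tool use at launch"),
    "bias_parity": (0.95, "add fairness monitoring and human review"),
}

def launch_decision(results: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (may_launch, required_safeguards) for a set of eval scores."""
    # Any hard blocker below its floor stops the launch outright.
    for name, floor in HARD_BLOCKERS.items():
        if results.get(name, 0.0) < floor:
            return False, [f"blocked: {name} below {floor}"]
    # Softer shortfalls allow launch but attach mandatory safeguards.
    safeguards = []
    for name, (floor, action) in SAFEGUARD_TRIGGERS.items():
        if results.get(name, 0.0) < floor:
            safeguards.append(action)
    return True, safeguards

ok, actions = launch_decision({
    "self_harm_safety": 0.995,
    "csam_refusal": 1.0,
    "prompt_injection_resistance": 0.85,
    "bias_parity": 0.97,
})
print(ok, actions)  # True ['restrict tool use at launch']
```

Making the gate explicit forces the organisation to write down, in advance, which results block a launch and which merely constrain it.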
Risk assessments look at technical risks (hallucination, model failure, robustness, reliability), safety risks (harmful outputs and instructions, responses related to violence and self-harm, unsafe advice in high-stakes domains), fairness risks (bias, discrimination, labour market harm), and security risks (prompt injection attacks, data leakage from training data or user content, model misuse at scale, misinformation amplification, and misuse in political and social contexts).
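These categories can be captured directly in a risk register so that coverage gaps become visible before sign-off. The enum and register below simply restate the taxonomy above in code; the structure itself is an illustrative assumption.

```python
from enum import Enum

class RiskCategory(Enum):
    TECHNICAL = "technical"   # hallucination, model failure, robustness
    SAFETY = "safety"         # harmful outputs, unsafe high-stakes advice
    FAIRNESS = "fairness"     # bias, discrimination, labour market harm
    SECURITY = "security"     # prompt injection, data leakage, misuse at scale

# A register keyed by category makes it easy to check that every
# category has at least one assessed risk before sign-off.
register: dict[RiskCategory, list[str]] = {c: [] for c in RiskCategory}
register[RiskCategory.SECURITY].append("prompt injection via user documents")
register[RiskCategory.SAFETY].append("unsafe advice in medical queries")

uncovered = [c.value for c, entries in register.items() if not entries]
print("categories with no assessed risks:", uncovered)
```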
Embedding Responsibility and Ethical Practices
Risk assessments call for structured thinking ahead of deployment and throughout the lifecycle of the AI system. They connect technical evaluations to real-world harm, act as a gatekeeper for release decisions, and create accountability within organisations. Because many AI failures only emerge in real-world use, assessment cannot stop at launch. Risk assessment is both a technical and a legal governance requirement.