Risk Assessment Checklist

Assessing the risks of an AI tool grounds ethics in practice. Identifying and responding to risks in a timely manner is central to governance, and to putting the values that underpin responsible AI into effect. This risk assessment checklist can help you evaluate your AI use, anticipate harms before deployment, and build appropriate policies and response mechanisms to address risk.

  • Use Case Risk:

  1. Does the AI system impact people’s rights?

  2. Does the AI system impact people’s access to particular services?

  3. Does the AI system influence decisions about individuals?

  4. Is the AI system to be used in a sensitive domain (e.g. health, finance, policing, welfare)?

  • Data Risk:

  1. Is the dataset representative?

  2. Is there any known bias in the data?

  3. Was the data collected with informed consent?

  4. Are vulnerable groups included in or excluded from the data?

  5. If included, was free, full, informed, and specific consent obtained?

  • Model Risk:

  1. Is the model explainable?

  2. Has the model been tested for bias across groups?

  3. Are the error rates acceptable across different demographics?

  • Deployment Risk:

  1. Is there human oversight?

  2. Can decisions be appealed?

  3. Are users aware they are interacting with AI?

  • Harm Potential:

  1. Is this system likely to reinforce inequality?

  2. Is this system likely to reinforce discrimination?

  3. Is this system likely to cause exclusion?

  4. Is this system likely to result in a denial of services?

  5. Is this system likely to be misused?

  • Governance Readiness:

  1. Is there a clear owner of the system?

  2. Is there an AI governance policy in place?

  3. Is governance embedded throughout the AI lifecycle in the organisation?

  4. Is there a monitoring plan?

  5. Is there a clear grievance mechanism?

  6. Is there a documentation process in place?
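One lightweight way to operationalise the checklist is to record each yes/no answer and flag categories where risk indicators accumulate. The sketch below is an illustrative assumption, not part of any governance standard: it treats a "yes" as risk-indicating for most categories, inverts the Governance Readiness answers (where a "no" signals a gap), and flags a category when more than half of its questions indicate risk.

```python
# Minimal checklist scorer; category names mirror the checklist above,
# but the scoring rule and the 0.5 threshold are illustrative assumptions.

# For Governance Readiness, a "no" answer signals a gap, so its answers
# are inverted before counting.
INVERTED = {"Governance Readiness"}

def score_category(category, answers):
    """Score one checklist category from a list of yes/no answers.

    Returns the fraction of risk-indicating answers and a flag set
    when more than half of the questions indicate risk.
    """
    risky = [not a if category in INVERTED else a for a in answers]
    ratio = sum(risky) / len(answers)
    return {"category": category, "ratio": ratio, "flag": ratio > 0.5}

# Example: three of four Use Case Risk questions answered "yes".
print(score_category("Use Case Risk", [True, True, True, False]))
```

A flagged category is a prompt for deeper review (e.g. a full ethics assessment or escalation to the system owner), not a verdict in itself.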
