AI Ethics Review Worksheet

As AI is taken up globally and becomes increasingly pervasive, organizations can make space for the technology while upholding ethics, values, and human rights. By prioritizing these values and ethics, an organization can make an informed decision about the approach it will take to the use of AI. This ethics review worksheet is intended as guidance for organizations to reflect on as whole teams, before engaging with AI.

  • Purpose and Justification: Start by asking whether this AI system should exist at all. Reflect on who benefits the most, who might be harmed, and in what ways.

  • Stakeholder Mapping: Reflect on who the direct users may be, and who may bear the direct and indirect impacts of AI use. List everyone who might be missing from decision-making about AI use.

  • Power Analysis: Interrogate who controls the system, and who has the power to make decisions about it. Reflect on who has no say in the process but bears the impact of AI use, and explore whether, and in what ways, the AI system shifts power toward or away from the communities you engage with.

  • Bias and Fairness: Start by exploring how fairness is defined, and who gets to shape that definition. Then ask what biases may exist in the data, how they are being mitigated, and how sufficient those measures are.

  • Transparency: Explore whether the model is explainable and whether its workings are comprehensible. Then ask if individuals affected by the model's use can understand its decisions, if the system can be explained to them in plain language, and if they have a route to ask questions about the workings of the AI system and its deployment.

  • Accountability: Reflect on who is responsible for harm when it occurs, what measures have been put in place to counter or address this harm when it happens, and what recourse affected people have to both question this harm and to be appropriately supported and compensated for its impact.

  • Care and Harm Reduction: Explore the safeguards that have been put in place, and reflect on whose work contributes to the design, development, testing, deployment, and use of the AI system, and how that labour is being treated. Ask how exploitation can be avoided and what practices of care can be embedded to ensure that no harm takes place, and that, where harm does occur, it is mitigated and those facing its impact are appropriately supported.

  • Alternatives: Explore the very need for the AI tool. Is AI the only solution? Is AI the best of the available solutions? What is the cost of using the AI tool from a social, economic, justice, and community-centric perspective? Is there a non-AI alternative that could be more just and less harmful?
