UNESCO Recommendation on AI Ethics (2021)

The UNESCO Recommendation on the Ethics of Artificial Intelligence was adopted in November 2021 by all 193 member states. It was the world's first global normative framework for the ethical development and use of AI. The instrument emphasizes human rights, dignity, and sustainability as core pillars, aiming to guide policymakers, developers, and users in harnessing AI's potential while mitigating its risks.

Core Values

The document establishes four foundational values to anchor AI ethics. Human rights and dignity must remain paramount: every AI system must respect, protect, and fulfil these obligations throughout its lifecycle, from design to use. Diversity and inclusiveness promote equity and counter biases that could exacerbate inequalities. Environmental sustainability requires AI actors to align their systems with ecological protection and restoration goals. Finally, a culture of integrity and awareness fosters ethical responsibility among all stakeholders.

Key Principles

The Recommendation sets out ten principles that operationalize these values into actionable norms. These principles are:

  • Proportionality and "do no harm," which call for AI use to be necessary, legitimate, and risk-assessed to avoid undue impacts.

  • Safety and security prioritize robust systems that resist errors, attacks, and misuse.

  • Fairness and non-discrimination call for the prevention of bias and promotion of justice.

  • Privacy protection safeguards data across all stages of the AI lifecycle.

  • Human oversight ensures meaningful human control, ruling out full autonomy in critical, life-and-death decisions.

  • Transparency and explainability call for decisions that can be understood and meaningfully contested.

  • Responsibility and accountability assign clear human liability.

  • Sustainability requires assessing AI systems against evolving sustainability goals, including the UN Sustainable Development Goals.

  • Awareness and literacy promote public understanding of AI and data through open, accessible education.

  • Multi-stakeholder and adaptive governance embeds these principles in broader ecosystems through inclusive, participatory governance.

Policy Action Areas

The Recommendation translates these principles into 11 policy action areas for practical implementation. These are as follows:

  • Ethical Impact Assessment: Member States should carry out ethical impact assessments to identify and assess the benefits, concerns, and risks of AI systems, as well as appropriate risk prevention, mitigation, and monitoring measures, among other assurance mechanisms.

  • Ethical Governance and Stewardship: Member States should ensure that AI governance mechanisms are inclusive, transparent, multidisciplinary, multilateral (including the possibility of mitigation and redress of harm across borders), and multi-stakeholder. In particular, governance should include aspects of anticipation, effective protection, monitoring of impact, enforcement, and redress.

  • Data Policy: Member States should work to develop data governance strategies that ensure the continual evaluation of the quality of training data for AI systems, including the adequacy of the data collection and selection processes, proper data security and protection measures, and feedback mechanisms to learn from mistakes and share best practices among all AI actors.

  • Development and International Cooperation: Member States and transnational corporations should prioritize AI ethics by including discussions of AI-related ethical issues in relevant international, intergovernmental, and multi-stakeholder fora.

  • Environment and Ecosystems: Member States and business enterprises should assess the direct and indirect environmental impact throughout the AI system life cycle, including, but not limited to, its carbon footprint, energy consumption, and the environmental impact of raw material extraction for the manufacturing of AI technologies, and should reduce the environmental impact of AI systems and data infrastructures. Member States should ensure the compliance of all AI actors with environmental laws, policies, and practices.

  • Gender: Member States should ensure that the potential for digital technologies and artificial intelligence to contribute to achieving gender equality is fully maximized, and must ensure that the human rights and fundamental freedoms of girls and women, and their safety and integrity, are not violated at any stage of the AI system life cycle. Moreover, Ethical Impact Assessments should include a transversal gender perspective.

  • Culture: Member States are encouraged to incorporate AI systems, where appropriate, in the preservation, enrichment, understanding, promotion, management, and accessibility of tangible, documentary, and intangible cultural heritage, including endangered and indigenous languages and knowledges, for example by introducing or updating educational programmes on the application of AI systems in these areas and by ensuring a participatory approach targeted at institutions and the public.

  • Education and Research: Member States should work with international organizations, educational institutions, and private and non-governmental entities to provide adequate AI literacy education to the public at all levels in all countries, in order to empower people and reduce the digital divides and digital access inequalities resulting from the wide adoption of AI systems.

  • Communication and Information: Member States should use AI systems to improve access to information and knowledge. This can include support for researchers, academia, journalists, the general public, and developers to enhance freedom of expression, academic and scientific freedoms, access to information, and increased proactive disclosure of official data and information.

  • Economy and Labour: Member States should assess and address the impact of AI systems on labour markets and its implications for education requirements, in all countries and with special emphasis on countries where the economy is labour-intensive. This can include introducing a wider range of “core” and interdisciplinary skills at all education levels to give current workers and new generations a fair chance of finding jobs in a rapidly changing market, and to ensure their awareness of the ethical aspects of AI systems. Skills such as “learning how to learn”, communication, critical thinking, teamwork, empathy, and the ability to transfer one’s knowledge across domains should be taught alongside specialist, technical skills and low-skilled tasks. Being transparent about which skills are in demand, and updating curricula around them, is key.

  • Health and Social Well-Being: Member States should endeavour to employ effective AI systems for improving human health and protecting the right to life, including mitigating disease outbreaks, while building and maintaining international solidarity to tackle global health risks and uncertainties, and should ensure that their deployment of AI systems in health care is consistent with international law and their human rights law obligations. Member States should ensure that actors involved in health care AI systems take into consideration the importance of a patient’s relationships with their family and with health care staff.

Implementation, Monitoring, and Evaluation Mechanisms

Member States commit to readiness assessments to gauge national AI ethics capacity, followed by tailored action plans. AI actors, including governments, firms, and researchers, must conduct ethical impact assessments before deployment, evaluating risks to rights, bias, and sustainability. Monitoring involves periodic reporting to UNESCO, with a global observatory tracking progress. Public awareness campaigns build literacy, empowering citizens to engage with AI.

Risk Management and Oversight

Risks are classified by impact level, with mitigation strategies such as audits and redress mechanisms required accordingly. Human determination overrides AI in high-stakes areas, and explainability supports democratic accountability. Liability remains attributable to humans, not machines.
