OECD AI Principles 2019

The OECD AI Principles were adopted in 2019 and updated in 2024. The Principles guide AI actors to develop trustworthy AI and provide policymakers with recommendations for effective AI policies. Countries use the OECD AI Principles and related tools to shape policies, create AI risk frameworks, and build a foundation for global interoperability between jurisdictions. The guidelines set out values-based principles and recommendations for policymakers.

Values-Based Principles:

These principles are the foundational norms on which the guidelines base their recommendations for policymakers.

  • Principle 1.1: Inclusive growth, sustainable development, and well-being: Stakeholders are expected to proactively engage in the responsible stewardship of trustworthy AI in order to secure beneficial outcomes for people and the planet. It calls for augmenting human capacity, enhancing creativity, advancing the inclusion of underrepresented populations, reducing economic, social, and gender inequalities, protecting natural environments, and invigorating inclusive growth, well-being, sustainable development, and environmental sustainability.

  • Principle 1.2: Human rights and democratic values, including fairness and privacy: Stakeholders are expected to respect the rule of law, human rights, and democratic and human-centred values throughout the AI system lifecycle. It calls for attention to non-discrimination and equality, freedom, dignity, autonomy of individuals, privacy and data protection, diversity, fairness, social justice, and internationally recognised rights. The principle also requires addressing misinformation and disinformation amplified by AI while also respecting the freedom of expression and other rights and freedoms protected by applicable international law. The principle recommends the implementation of mechanisms and safeguards such as the capacity for human agency and oversight, including addressing risks arising from the use of AI for purposes other than those intended, as well as from intentional and unintentional misuse in a manner appropriate to the context and in alignment with the state of the art.   

  • Principle 1.3: Transparency and explainability: Stakeholders are expected to commit to transparency and responsible disclosure in relation to AI systems. They should provide meaningful information, appropriate to the context, and in line with the state of the art. The goal should be to foster a general understanding of AI systems, including their abilities and limitations; to make stakeholders aware of their interactions with AI systems, including in the workplace; wherever feasible and useful, to provide plain and easy-to-understand information on the sources of data/input, factors, processes, and/or logic that led to the prediction, content, recommendation, or decision; and to provide information that enables those affected by an AI system to understand the output and, where adversely affected, to challenge it.

  • Principle 1.4: Robustness, security, and safety: Stakeholders should ensure that AI systems are robust, secure, and safe throughout their lifecycle so that in their conditions of normal use, foreseeable use, or misuse or other adverse conditions, they function appropriately and do not pose unreasonable safety and/or security risks. Stakeholders should also put mechanisms in place that ensure that if AI systems risk causing undue harm or exhibit undesired behaviour, they can be overridden, repaired, and/or decommissioned safely as needed. Wherever technically feasible, mechanisms should be in place to bolster information integrity while also ensuring respect for freedom of expression.

  • Principle 1.5: Accountability: Stakeholders should be accountable for the proper functioning of AI systems and for the respect of all the principles mentioned above, based on their roles, the context, and in line with the state of the art. This principle asks for stakeholders to ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle in order to enable analysis of the AI system’s outputs and responses to inquiry, appropriate to the context and consistent with the state of the art. Based on their roles and ability to act, stakeholders are also to apply a systematic risk management approach to each phase in the AI system lifecycle on a regular, ongoing basis, and adopt responsible business conduct in order to address risks pertaining to AI systems, including through cooperation among different AI actors, suppliers of AI knowledge and resources, AI system users, and other stakeholders. Risks include those related to harmful bias, human rights including safety, security, and privacy, and labour and intellectual property rights.

Recommendations for Policymakers:

As key actors, policymakers are in the position to guide the entire AI lifecycle in ways that adhere to the principles mentioned above.

  • Investing in AI research and development: Governments should consider making long-term public investments, and encouraging private investment, in:
    - R&D and open science, including interdisciplinary efforts. The aim is to encourage innovation in trustworthy AI that focuses on challenging technical issues and on AI-related social, legal, and ethical implications and policy issues.  
    - Open-source tools and open datasets that are representative and respect privacy and data protection to support an environment for AI research and development that is free of harmful bias and to improve interoperability and use of standards.

  • Fostering an inclusive AI-enabling ecosystem: Governments should foster the development of and access to an inclusive, dynamic, sustainable, and interoperable digital ecosystem for trustworthy AI. This ecosystem should include data, AI technologies, computational and connectivity infrastructure, and AI knowledge-sharing mechanisms. Governments should promote mechanisms like data trusts to support the safe, fair, legal, and ethical sharing of data.

  • Shaping an enabling interoperable governance and policy environment for AI: Governments should promote an agile policy environment that supports moving past the R&D stage to the deployment and operation stages for trustworthy AI systems. They should consider drawing on experimentation to provide a controlled environment in which AI systems can be tested and scaled up. Outcome-based approaches that provide flexibility in achieving governance objectives and enable cooperation within and across jurisdictions to promote interoperable governance and policy environments should be adopted as appropriate. Governments should also review and adapt their policy and regulatory frameworks and assessment mechanisms as they apply to AI systems in order to encourage innovation and competition for trustworthy AI.

  • Building human capacity and preparing for labour market transformation: Governments should work closely with key stakeholders in order to prepare for the transformation of the world of work and society at large. They should empower people to use and engage with AI systems across the breadth of applications, including by giving them the requisite skills. Governments should also take necessary steps, including through social dialogue, to ensure a fair transition for workers as AI is deployed, such as through training programmes and specific support for those affected by displacement, including social protection and access to new opportunities in the labour market. Governments should work with stakeholders to promote the responsible use of AI at work, in order to enhance worker safety and job and public service quality, foster entrepreneurship and productivity, and aim to ensure that the benefits from AI are shared broadly and fairly.

  • International co-operation for trustworthy AI: Governments should cooperate actively to advance these principles and progress on the responsible stewardship of trustworthy AI. They should work together in the OECD and other global and regional fora to foster AI-knowledge sharing as appropriate. International, cross-sectoral, and open multi-stakeholder initiatives must be encouraged to garner long-term expertise on AI. Governments should also promote the development of multi-stakeholder, consensus-driven global technical standards for interoperable and trustworthy AI, and encourage the development and use of internationally comparable indicators to measure AI research, development, and deployment, and gather an evidence base to assess progress in implementing these principles.
