The G7 Hiroshima Process, 2023

The G7 Hiroshima AI Process was the first major international framework specifically designed to govern advanced AI systems and generative AI technologies. It emerged from discussions at the 2023 Hiroshima Summit, which took place against a backdrop marked by the rapid rise of large language models. The framework establishes high-level principles for AI governance and sets out a practical code of conduct for AI developers to follow. The process addresses challenges posed by frontier AI systems capable of generating human-like content across a wide range of domains.

At their 2023 meeting in Hiroshima, the G7 Leaders addressed the significance of AI, its dual potential for benefit and harm, and the limitations of existing governance mechanisms in handling it. Even as individual governments were moving toward building their own AI regulations, a gap remained in global governance, given that the technology itself operates across borders. The Hiroshima Process focuses specifically on advanced AI systems, particularly generative AI and foundation models and the risks they pose, recognizing that different kinds of AI tools require different levels of governance attention.

The Hiroshima Process offers guiding principles for governments and specific conduct expectations for those developing AI tools, providing both policy flexibility and operational clarity. The outcomes of the process are voluntary and therefore not legally binding.

The Code of Conduct for Organizations Developing Advanced AI Systems calls on organizations to undertake the following actions in a manner commensurate with the risks:

  • Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle.

  • Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment including placement on the market.

  • Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increase accountability.

  • Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems, including with industry, governments, civil society, and academia.

  • Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach – including privacy policies, and mitigation measures.

  • Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.

  • Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.

  • Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures.

  • Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health and education.

  • Advance the development of and, where appropriate, adoption of international technical standards.

  • Implement appropriate data input measures and protections for personal data and intellectual property.
