ISO Standards
CORE AI STANDARDS
ISO/IEC 42001:2023 — AI Management System (AIMS). Provides a framework for organizations to establish, implement, maintain, and continually improve an AI management system. It helps organizations develop and use AI responsibly by addressing governance, risk, and accountability across the AI lifecycle.
ISO/IEC 22989:2022 — AI Concepts and Terminology. Establishes a common vocabulary and set of concepts for AI. It standardizes the definitions used across other AI standards and ensures consistent language when discussing AI systems, techniques, and processes.
ISO/IEC 23894:2023 — AI Risk Management. Offers guidance on integrating risk management into AI-related activities. It helps organizations identify, assess, treat, and monitor risks specific to AI systems throughout their development and deployment.
ISO/IEC 5338:2023 — AI System Life Cycle Processes. Defines processes for planning, developing, operating, and maintaining AI systems. It extends traditional software life cycle frameworks to address the unique characteristics of AI, including data management and model training.
ISO/IEC 38507:2022 — Governance of AI by Organizations. Addresses the governance implications of AI use for governing bodies (e.g., boards and executives). It guides decision-makers on ensuring that AI use within their organizations is effective, ethical, and accountable.
SPECIALISED AND DEVELOPING STANDARDS
ISO/IEC 42005:2025 — AI System Impact Assessment. Provides a methodology for assessing the broader impacts of AI systems on individuals, society, and the environment. It supports organizations in evaluating potential consequences before and during AI deployment.
ISO/IEC 42006:2025 — Auditing AI Management Systems. Sets requirements for third-party bodies conducting audits of AI management systems (particularly those certified under ISO/IEC 42001). It ensures audits are consistent, credible, and conducted by competent auditors.
ISO/IEC 5339:2024 — Guidelines for AI Applications. Offers practical guidance on selecting and applying AI techniques to specific use cases. It helps organizations match AI approaches to their application needs while considering relevant constraints and risks.
ISO/IEC 5392:2024 — Knowledge Engineering Reference Architecture. Defines a reference architecture for knowledge engineering systems, including knowledge graphs and ontologies. It supports the structured representation and use of knowledge within AI applications.
ISO/IEC TR 5469:2024 — Functional Safety and AI Systems. Explores the relationship between AI and functional safety, particularly in safety-critical applications. It examines how AI properties such as opacity and statistical behavior interact with traditional safety engineering practices.
ISO/IEC TS 6254:2025 — Explainability and Interpretability. Defines objectives and approaches for making AI systems more explainable and interpretable to users and stakeholders. It supports transparency by clarifying how and why an AI system produces particular outputs.
ISO/IEC 8183:2023 — Data Life Cycle Framework. Provides a framework for managing data throughout its entire life cycle in the context of AI. It covers data collection, processing, storage, sharing, and deletion, supporting responsible and high-quality data practices.
ISO/IEC TS 8200:2024 — Controllability of Automated AI Systems. Addresses how organizations can maintain meaningful human oversight and control over automated AI systems. It provides guidance on intervention mechanisms and control measures to keep AI behavior within intended boundaries.
ISO/IEC TR 24027:2021 — Bias and Fairness in AI. Examines the types of bias that can occur in AI systems and throughout the AI pipeline, including data bias and algorithmic bias. It provides guidance on identifying, mitigating, and monitoring bias to promote fairness.
ISO/IEC TR 24368:2022 — AI Ethics and Societal Considerations. Provides an overview of ethical and societal issues related to AI, including fairness, transparency, privacy, and human dignity. It serves as a high-level framework for incorporating ethical principles into AI governance.
DATA QUALITY AND RELATED STANDARDS
ISO/IEC 5259 (All Parts) — Data Quality for Analytics and ML. A multi-part series covering data quality management specific to analytics and machine learning. It addresses data quality measurement, frameworks, processes, and requirements to ensure AI systems are trained and operated on reliable data.
ISO/IEC TR 5259-6:2026 — Visualization of Data Quality. Introduces a visualization framework to help stakeholders understand and communicate data quality issues. It supports more intuitive assessment and monitoring of data quality in ML pipelines.
ISO 31700 — Privacy by Design for Consumer Protection. Establishes requirements for embedding privacy protections into the design of consumer products and services. It ensures privacy is a proactive, built-in consideration rather than an afterthought — directly relevant to AI systems handling personal data.
ISO/IEC 27001 — Information Security Management. The globally recognized standard for information security management systems (ISMS). For AI, it provides the security foundation needed to protect the data, models, and infrastructure that AI systems depend on.