
AI governance and ethics have come to the forefront for organizations as they integrate artificial intelligence technologies into their operations. Two key standards, ISO/IEC 23894 and ISO/IEC 42001, can help organizations establish responsible AI usage. ISO/IEC 23894 focuses on AI governance and ethical AI, whereas ISO 42001 offers a practical framework for an AI governance and management system. Both standards support organizations in applying responsible and ethical practices to the AI models and systems they use.
In this practical guide, we will explore the core aspects of AI governance and ethics, how ISO 23894 and ISO 42001 help address these themes, and how organizations can apply these standards.
Introduction
As AI continues to affect every industry, from healthcare and finance to transportation and retail, organizations must ensure that the AI systems they create and deploy are both effective and ethical. AI governance is an important way to manage AI risks, including bias, fairness, transparency, and accountability. ISO 23894 and ISO 42001 provide frameworks that help businesses develop, operate, and govern AI in a way that is responsible, ethical, and compliant.
ISO 23894 and ISO 42001 provide organizations with clear guidance on managing AI risks, complying with relevant regulations, and earning public trust.
ISO 23894 vs EU AI Act: Compliance Checklist
ISO 23894 and the EU AI Act both address the ethical implications of AI systems, but their scope, approach, and focus differ in notable ways. ISO 23894 is an international standard that provides an overarching framework for managing AI governance and ethics, whereas the EU AI Act is a legislative framework enacted by the European Union to regulate AI systems and ensure that they are used safely and ethically within the EU.
Aspect | ISO 23894 | EU AI Act
Scope | Global framework for AI governance and ethics | Legally binding framework for AI regulation within the EU
Focus | Ethical AI, governance processes, risk management | High-risk AI systems, safety, and transparency requirements
Application | Applicable to organizations globally | Applicable to organizations within the EU or offering AI systems to the EU market
Risk Management | Emphasizes identification and mitigation of AI risks | Defines AI risk categories and obligations for high-risk AI systems
Transparency | Requires transparency in AI models and decision-making processes | Mandates transparency for high-risk AI systems, including explainability
Compliance | Voluntary standard for best practices in AI governance | Mandatory compliance for high-risk AI systems operating in the EU
Governance Requirements | Provides governance structures and management systems for AI | Requires implementation of AI-specific governance and oversight bodies
Ethical Guidelines | Covers ethical principles, fairness, accountability, and non-discrimination | Focuses on preventing harm and bias and ensuring the ethical use of AI
Building Model Cards Aligned with ISO 42001
Model cards are extremely useful for the governance of AI as they provide a level of transparency about the properties, limitations, and ethical aspects of AI models. ISO 42001 suggests that organizations compile model cards that include information on key aspects of AI systems so that stakeholders are more informed about how the AI system works, the risks it may present, and any ethical implications associated with it.
Model cards developed according to ISO 42001 typically contain several key elements: a Model Overview with a description of the AI model, its intended purpose, and example use cases; a Performance Metrics section covering the accuracy, fairness, and transparency of the model's outputs; and a Risk Assessment section that identifies risks and biases in the AI system and explains the mitigation strategies adopted to reduce them.
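As an illustration only, the sketch below shows one way such a model card could be captured in machine-readable form. The field names and the `CreditScoringModel` example are our own assumptions for this sketch, not prescribed by ISO 42001.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelCard:
    """Minimal model card loosely aligned with the elements above:
    overview, performance metrics, and risk assessment."""
    # Model Overview
    name: str
    description: str
    intended_use: str
    example_use_cases: List[str]
    # Performance Metrics (values are illustrative placeholders)
    performance_metrics: Dict[str, float] = field(default_factory=dict)
    # Risk Assessment
    known_risks: List[str] = field(default_factory=list)
    mitigations: List[str] = field(default_factory=list)

# Hypothetical example card for a credit-scoring model
card = ModelCard(
    name="CreditScoringModel",
    description="Gradient-boosted model estimating loan default risk.",
    intended_use="Support, not replace, human credit decisions.",
    example_use_cases=["Pre-screening consumer loan applications"],
    performance_metrics={"accuracy": 0.87, "demographic_parity_gap": 0.03},
    known_risks=["Historical bias in training data", "Drift after economic shocks"],
    mitigations=["Quarterly fairness audits", "Human review of declined applications"],
)
```

Keeping the card as a structured object like this makes it easy to publish alongside each model release and to check that every required section has been filled in.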
AI Risk Assessment Template (Free Download)
Conducting a thorough AI risk assessment is essential for identifying potential ethical issues, safety risks, and regulatory compliance gaps. This template is designed to help organizations systematically assess the risks associated with AI models and implement necessary mitigation strategies. The AI risk assessment template includes:
• Risk Description: A field to document each potential risk arising from the use of the AI model
• Risk Assessment: Fields to rate the severity and likelihood of each risk so that businesses can prioritize the risks with the most significant ethical or legal implications
• Mitigation Activities: Space to define the actions that will reduce each identified risk, for example improving the quality of the input data
• Monitor and Review: Space to define processes for ongoing monitoring that track the performance of the AI model and its adherence to ethical and legal standards
• Records: A section to clearly document the risk assessment process, the reasoning behind decisions, the actions put in place, and changes to the model's performance over time
[Download Free AI Risk Assessment Template]
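For teams that prefer to keep the register in code, the following sketch shows one possible way to structure the template's fields, with a simple severity-times-likelihood score used to prioritize risks. The `RiskEntry` class, the 1 to 5 rating scales, and the example entry are illustrative assumptions, not requirements of ISO 23894 or ISO 42001.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class RiskEntry:
    """One row of the assessment: description, rating, mitigation, monitoring, records."""
    description: str
    severity: int        # 1 (negligible) .. 5 (critical)
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    mitigations: List[str] = field(default_factory=list)
    monitoring: str = ""
    records: List[str] = field(default_factory=list)

    @property
    def priority(self) -> int:
        # Simple severity x likelihood score used to rank risks
        return self.severity * self.likelihood

# Hypothetical register for a hiring-recommendation model
register = [
    RiskEntry(
        description="Model may rank candidates unfairly across demographic groups",
        severity=4,
        likelihood=3,
        mitigations=["Rebalance training data", "Add fairness checks before release"],
        monitoring="Monthly disparate-impact report reviewed by the governance board",
        records=[f"Assessment approved on {date.today().isoformat()}"],
    ),
]

# Review the highest-priority risks first
for risk in sorted(register, key=lambda r: r.priority, reverse=True):
    print(risk.priority, risk.description)
```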
ISO 42001 Certification Cost Factors for Startups
ISO 42001 certification helps organizations achieve responsible AI governance, but the cost of certification varies depending on several factors. Larger organizations, or those with complex AI systems, typically face higher certification costs than organizations with simpler systems or less mature governance. They usually must provide more documentation to support their governance practices, undergo more audits by the certification body, and bring more systems into the scope of the audit.
The cost of the audit process is driven by two main factors: the amount of time required and the number of auditors engaged. Both depend on the size of the applying organization and the complexity of its AI systems and projects.
Once certified, the organization will need to maintain compliance with ISO 42001 requirements, which may involve further audits, updates to the governance process, and additional training.
Privacy by Design: Linking ISO 27701 and AI Governance
By linking ISO 27701 with AI governance frameworks like ISO 42001, organizations can ensure that privacy is an integral part of their AI systems. This involves the following steps (illustrated in the sketch after the list):
1. Ensuring that AI models collect and process only the minimum personal data necessary for their operation.
2. Informing users in plain language about what limited personal data is collected and how it will be used in AI models, in accordance with privacy rules.
3. Creating ways to obtain, document, and administer user consent for the collection and processing of data.
4. Incorporating ISO 27001 controls to protect personal data used by AI models and to ensure that AI processes comply with privacy obligations.
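As a rough illustration only, the sketch below shows how data minimization (step 1) and consent records (steps 2 and 3) might look in practice. The `ALLOWED_FIELDS` set, the `ConsentRecord` structure, and the function names are our own assumptions, not terms defined by ISO 27701 or ISO 42001.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Dict, List

# Only the attributes the model genuinely needs (data minimization)
ALLOWED_FIELDS = {"age_band", "region", "account_tenure_months"}

def minimise(record: Dict[str, object]) -> Dict[str, object]:
    """Drop any personal attribute that is not strictly required by the model."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

@dataclass
class ConsentRecord:
    """Evidence of user consent for a stated processing purpose."""
    user_id: str
    purpose: str
    granted_at: datetime
    withdrawn: bool = False

consents: List[ConsentRecord] = []

def record_consent(user_id: str, purpose: str) -> ConsentRecord:
    entry = ConsentRecord(user_id, purpose, datetime.now(timezone.utc))
    consents.append(entry)
    return entry

# Usage: minimize an incoming profile and log the user's consent
profile = {"name": "A. Jones", "email": "a@example.com", "age_band": "30-39", "region": "EU"}
record_consent("user-123", "credit risk scoring")
print(minimise(profile))  # only age_band and region survive
```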
Contact Us
Pacific Certifications can assist your organization in navigating the ISO/IEC 23894 and ISO 42001 certification process. Our team of experts will help you develop responsible AI governance practices, ensuring that your AI models are ethical, transparent, and compliant with global standards.
For assistance, contact us at support@pacificcert.com.
Visit our website at www.pacificcert.com.
FAQs
Q1: What is ISO/IEC 23894?
ISO/IEC 23894 is an international standard that provides guidelines for AI governance and ethics, helping organizations develop ethical AI models, manage risks, and ensure compliance with global regulations.
Q2: How does ISO 42001 help in AI governance?
ISO 42001 helps organizations establish an overarching AI governance management system, providing a framework for managing AI risks, ensuring transparency, and maintaining ethical AI practices.
Q3: What is the cost of ISO 42001 certification for startups?
The cost of ISO 42001 certification for startups depends on factors such as the size and complexity of the organization, existing governance processes, external consultancy fees, and audit costs.
Ready to get ISO certified?
Contact Pacific Certifications to begin your certification journey today!
Suggested Certifications –
Read more: Pacific Blogs
