Chapter 17

Annex A Controls: Use of AI Systems (A.9)

Detailed guidance on implementing the Annex A controls for use of AI systems (A.9), covering the domain's three controls: intended use, fitness for purpose, and human oversight.

15 min read

Chapter Overview

This chapter covers the Use of AI Systems domain (A.9), which ensures AI systems are used appropriately and with adequate human oversight. This domain contains 3 controls, including the critical human oversight control.

A.9 Use of AI Systems

Proper use of AI systems is as important as proper development. Even well-designed AI can cause harm if misused or operated without appropriate oversight.

A.9.2 Intended Use

Control: The intended use of AI systems shall be defined and documented.
Purpose: Establish clear boundaries for AI system use
Related Clause: 8.1 (Operational planning and control)

Implementation Guidance

  • Define intended use cases for each AI system
  • Document what the AI system should be used for
  • Specify what the AI system should NOT be used for
  • Identify user groups and their authorized uses
  • Document environmental and operational constraints
  • Communicate intended use to users
  • Monitor for use outside intended scope

Intended Use Documentation

• Purpose Statement: The primary purpose of the AI system
• Use Cases: Specific scenarios where use is appropriate
• Authorized Users: Who is permitted to use the system
• Operating Environment: Technical and operational requirements
• Prohibited Uses: Uses that are explicitly not allowed
• Limitations: Known constraints on effective use
• Geographic Scope: Where the system may be used

Intended Use Example

AI System: Customer Service Chatbot

Intended Use: Answer common customer questions about products, orders, and returns

Authorized Users: Website visitors, mobile app users

Prohibited Uses:
• Medical, legal, or financial advice
• Processing of sensitive personal data
• Decisions with significant impact on individuals
• Use with vulnerable populations without human oversight

Limitations: May not understand complex queries; escalate to human for complaints
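
Intended use documentation can also be kept in a machine-readable form so that applications can check requests against it at runtime. The following is a minimal sketch in Python, mirroring the chatbot example above; the IntendedUseRecord class, the sample data, and the naive keyword check are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class IntendedUseRecord:
    """Machine-readable intended use documentation for one AI system (illustrative)."""
    system_name: str
    purpose_statement: str
    use_cases: list[str] = field(default_factory=list)
    authorized_users: list[str] = field(default_factory=list)
    prohibited_uses: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)

    def prohibited_topics_in(self, request_text: str, topic_keywords: dict[str, list[str]]) -> list[str]:
        """Return prohibited-use categories whose keywords appear in the request (naive check)."""
        text = request_text.lower()
        return [
            category
            for category, keywords in topic_keywords.items()
            if category in self.prohibited_uses and any(word in text for word in keywords)
        ]

# Record mirroring the customer service chatbot example above
chatbot = IntendedUseRecord(
    system_name="Customer Service Chatbot",
    purpose_statement="Answer common customer questions about products, orders, and returns",
    use_cases=["product questions", "order status", "returns"],
    authorized_users=["website visitors", "mobile app users"],
    prohibited_uses=["medical advice", "legal advice", "financial advice"],
    limitations=["May not understand complex queries; escalate complaints to a human"],
)

# Hypothetical keyword lists used only for this illustration
keywords = {"medical advice": ["diagnosis", "symptom"], "legal advice": ["lawsuit", "contract"]}
print(chatbot.prohibited_topics_in("Can you look at my symptoms and tell me what I have?", keywords))
# -> ['medical advice']
```

A check like this only supports the control; the documented intended use itself remains the authoritative record that is communicated to users.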

Audit Questions - A.9.2

• How do you define intended use for AI systems?
• Show me intended use documentation
• What uses are prohibited?
• How do you communicate intended use to users?
• How do you detect use outside intended scope?

A.9.3 Fitness for Purpose

Control: AI systems shall be fit for their intended purpose and perform as expected within defined boundaries.
Purpose: Ensure AI systems actually work for their intended use
Related Clauses: 8.1 (Operational planning and control), A.6.2.9 (Verification and validation)

Implementation Guidance

  • Define performance requirements for intended use
  • Validate AI systems against intended use scenarios
  • Test in conditions reflecting actual use
  • Monitor ongoing fitness for purpose (see the monitoring sketch after this list)
  • Address performance degradation
  • Re-validate when changes occur
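
One way to monitor ongoing fitness is to compare a rolling window of a production metric against the value established at validation and alert when it drops beyond a tolerance. The sketch below, in Python, is illustrative only: the check_degradation helper, the accuracy metric, the window size, and the tolerance are all assumptions to be replaced by the measures defined for the system.

```python
from collections import deque

def check_degradation(recent_scores, baseline: float, tolerance: float = 0.05) -> bool:
    """Return True when the rolling average falls more than `tolerance` below the validated baseline."""
    if not recent_scores:
        return False
    rolling_avg = sum(recent_scores) / len(recent_scores)
    return (baseline - rolling_avg) > tolerance

validated_accuracy = 0.92          # value established during validation
window = deque(maxlen=100)         # rolling window of production measurements

for observed in [0.93, 0.91, 0.84, 0.82, 0.80]:   # simulated production measurements
    window.append(observed)
    if check_degradation(window, validated_accuracy):
        avg = sum(window) / len(window)
        print(f"ALERT: fitness degraded (rolling avg {avg:.2f} vs baseline {validated_accuracy})")
        # In practice: trigger re-validation (this control) and notify the oversight role (A.9.4)
        break
```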

Fitness Assessment Areas

• Performance: Does the system meet accuracy and quality requirements?
• Reliability: Does the system perform consistently?
• Robustness: Does the system handle edge cases and variations?
• Scalability: Does the system handle expected volumes?
• Usability: Can users effectively use the system?
• Safety: Does the system operate safely in its intended environment?

Fitness Validation Process

1. Define Success Criteria: Measurable requirements for intended use
2. Test Design: Create tests reflecting real-world use scenarios
3. Validation Testing: Execute tests with representative data/users
4. Gap Analysis: Compare results against criteria (a worked sketch follows this list)
5. Remediation: Address any fitness gaps
6. Sign-off: Formal approval for intended use
7. Monitoring: Ongoing fitness monitoring in production
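
Step 4 can be as simple as comparing measured results against the success criteria defined in step 1. The sketch below, in Python, uses hypothetical criteria and results purely to illustrate the comparison; real criteria and metrics come from the intended use documentation and the test design.

```python
# Hypothetical success criteria (step 1), each with the direction the metric must satisfy
criteria = {
    "top_intent_accuracy": (">=", 0.90),
    "median_response_seconds": ("<=", 2.0),
    "escalation_rate": ("<=", 0.15),
}

# Hypothetical measurements from validation testing (step 3)
results = {
    "top_intent_accuracy": 0.87,
    "median_response_seconds": 1.4,
    "escalation_rate": 0.21,
}

def gap_analysis(criteria: dict, results: dict) -> list[str]:
    """Return the criteria the system failed to meet (step 4)."""
    gaps = []
    for metric, (op, target) in criteria.items():
        value = results[metric]
        met = value >= target if op == ">=" else value <= target
        if not met:
            gaps.append(f"{metric}: measured {value}, required {op} {target}")
    return gaps

gaps = gap_analysis(criteria, results)
if gaps:
    print("Fitness gaps to remediate before sign-off (steps 5-6):")
    for gap in gaps:
        print(" -", gap)
else:
    print("All success criteria met; proceed to sign-off (step 6).")
```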

Audit Questions - A.9.3

• How do you ensure AI systems are fit for purpose?
• What validation have you performed?
• Show me fitness assessment for [AI system]
• How do you monitor ongoing fitness?
• What happens when fitness degrades?

A.9.4 Human Oversight

Control: The organization shall define, implement, and document processes for human oversight of AI systems.
Purpose: Maintain appropriate human control over AI systems
Related Clause: 8.1 (Operational planning and control)

Critical Control

Human oversight is one of the most important controls in ISO 42001. It ensures humans remain in control of AI systems and can intervene when necessary. This is also a key requirement of the EU AI Act for high-risk AI systems.

Implementation Guidance

  • Determine appropriate oversight level for each AI system
  • Design oversight mechanisms into AI systems
  • Define roles and responsibilities for oversight
  • Train personnel on oversight procedures
  • Implement monitoring and alerting
  • Enable human intervention and override
  • Document oversight processes and decisions

Levels of Human Oversight

• Human-in-the-Loop: Human approval is required for each AI decision. Appropriate for high-risk decisions and early deployment.
• Human-on-the-Loop: A human monitors the AI and can intervene. Appropriate for medium-risk, established systems.
• Human-over-the-Loop: Humans oversee AI design and outcomes. Appropriate for lower-risk, high-volume operations.
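
These levels translate directly into routing logic in an AI-assisted workflow. The following is a minimal sketch in Python; the OversightLevel enum and route_decision function are illustrative assumptions, not terminology or an interface defined by the standard.

```python
from enum import Enum

class OversightLevel(Enum):
    IN_THE_LOOP = "human-in-the-loop"      # human approves each decision
    ON_THE_LOOP = "human-on-the-loop"      # human monitors and can intervene
    OVER_THE_LOOP = "human-over-the-loop"  # human oversees design and outcomes

def route_decision(level: OversightLevel, ai_output: dict) -> str:
    """Decide whether an AI output takes effect automatically or waits for a human (illustrative)."""
    if level is OversightLevel.IN_THE_LOOP:
        return "hold: queue for human approval before the decision takes effect"
    if level is OversightLevel.ON_THE_LOOP:
        # The output takes effect but is surfaced to a monitoring view where a human can intervene
        return "apply: log to the monitoring queue for possible human intervention"
    # OVER_THE_LOOP: no per-decision involvement; rely on periodic review of outcomes
    return "apply: include in periodic outcome review"

print(route_decision(OversightLevel.IN_THE_LOOP, {"decision": "deny_claim", "confidence": 0.71}))
```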

Oversight Mechanisms

• Approval Gates: Human approval before an AI action takes effect
• Review Sampling: Human review of a sample of AI decisions
• Threshold Alerts: Alerts when AI confidence is low or the output is unusual
• Override Capability: Ability to override or reverse AI decisions
• Kill Switch: Ability to stop AI system operation
• Audit Trails: Records for post-hoc human review
• Escalation: Automatic escalation of edge cases to humans
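
Several of these mechanisms are straightforward to wire together in application code. The sketch below, in Python, combines a threshold alert, escalation of low-confidence cases to a human queue, and an audit trail entry; the threshold value, the in-memory queue, and the record format are hypothetical stand-ins for real case-management and logging systems.

```python
import json
from datetime import datetime, timezone

CONFIDENCE_THRESHOLD = 0.80   # assumed value; set per system based on risk
human_review_queue = []       # stand-in for a real ticketing or case system
audit_log = []                # stand-in for an append-only audit store

def handle_ai_decision(case_id: str, decision: str, confidence: float) -> str:
    """Apply or escalate an AI decision, and always write an audit trail entry (illustrative)."""
    escalated = confidence < CONFIDENCE_THRESHOLD
    if escalated:
        human_review_queue.append({"case_id": case_id, "decision": decision, "confidence": confidence})
        outcome = "escalated to human review"
    else:
        outcome = "applied automatically"

    audit_log.append(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "decision": decision,
        "confidence": confidence,
        "outcome": outcome,
    }))
    return outcome

print(handle_ai_decision("case-001", "approve_refund", confidence=0.93))  # applied automatically
print(handle_ai_decision("case-002", "deny_refund", confidence=0.52))     # escalated to human review
```
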
Human Oversight Documentation

Document for each AI system:
• Oversight level and rationale
• Oversight roles and responsibilities
• Oversight procedures and triggers
• Intervention capabilities
• Training requirements for oversight personnel
• Monitoring and alerting mechanisms
• Records of oversight activities and interventions

Factors Affecting Oversight Level

• Decision Impact: Higher oversight when there is significant impact on individuals; lower may be acceptable for low-impact, easily reversible decisions.
• Autonomy: Higher when the AI acts independently; lower when the AI only recommends.
• Maturity: Higher for a new or changing AI system; lower for a stable, well-validated system.
• Reversibility: Higher for irreversible consequences; lower when outcomes are easy to reverse or correct.
• Regulatory Context: Higher in a regulated domain; lower in an unregulated context.
• Vulnerability: Higher when the system affects vulnerable groups; lower for the general population.
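
These factors can feed a simple scoring rule that suggests a starting oversight level, subject to review and sign-off by the people accountable for the system. The weights, factor names, and thresholds below are illustrative assumptions sketched in Python, not a rule from ISO 42001.

```python
def suggest_oversight_level(factors: dict[str, bool]) -> str:
    """Suggest a starting oversight level from yes/no risk factors (illustrative heuristic)."""
    score = sum(factors.values())   # each True answer is a "higher oversight needed" condition
    if factors.get("irreversible") or factors.get("affects_vulnerable_groups") or score >= 4:
        return "human-in-the-loop"
    if score >= 2:
        return "human-on-the-loop"
    return "human-over-the-loop"

# Hypothetical assessment of the customer service chatbot from A.9.2
chatbot_factors = {
    "significant_impact_on_individuals": False,
    "acts_independently": True,        # answers are sent without prior human review
    "new_or_changing_system": True,
    "irreversible": False,
    "regulated_domain": False,
    "affects_vulnerable_groups": False,
}
print(suggest_oversight_level(chatbot_factors))   # -> human-on-the-loop
```
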
Audit Questions - A.9.4

• What human oversight do you have for AI systems?
• How do you determine the appropriate oversight level?
• Show me oversight documentation for [AI system]
• How can humans intervene or override AI decisions?
• What training do oversight personnel receive?
• Show me records of human oversight activities
• How do you handle AI decisions that are questioned?

Control Implementation Summary

• A.9.2 Intended Use: Key evidence includes intended use documentation and prohibited use lists. Common gap: use boundaries not defined.
• A.9.3 Fitness for Purpose: Key evidence includes validation records and performance monitoring. Common gap: no validation against intended use.
• A.9.4 Human Oversight: Key evidence includes oversight procedures, intervention records, and training. Common gap: no oversight mechanisms.

Key Takeaways - A.9

1. Intended use must be documented including prohibited uses
2. AI systems must be validated as fit for their intended purpose
3. Human oversight is critical and required for high-risk AI
4. Oversight level should match risk level
5. Override and intervention capabilities are essential
6. Oversight activities should be documented
