Chapter Overview
This chapter covers the AI System Life Cycle domain (A.6), the largest control domain with 12 controls. These controls ensure AI systems are managed responsibly throughout their entire life cycle, from conception to retirement.
A.6 AI System Life Cycle
This domain is divided into two sub-sections: A.6.1 (General) and A.6.2 (AI System Life Cycle Stages).
A.6.1 General Controls
A.6.1.2 Managing AI System Life Cycle
| Attribute | Details |
|---|---|
| Control | Processes shall be defined to manage the AI system life cycle. |
| Purpose | Establish systematic lifecycle management |
| Related Clause | 8.1 (Operational planning and control) |
Implementation Guidance
- Define lifecycle stages for your organization
- Establish processes for each stage
- Define stage gates and approval criteria
- Document lifecycle management procedures
- Integrate with existing development methodologies
- Ensure traceability across stages
AI System Lifecycle Stages
| Stage | Key Activities |
|---|---|
| Conception | Idea generation, feasibility assessment |
| Design | Requirements, architecture, approach selection |
| Data Collection | Data acquisition, preparation, labeling |
| Development | Model development, training, optimization |
| Verification | Testing, validation, bias assessment |
| Deployment | Production release, integration |
| Operation | Day-to-day operation, support |
| Monitoring | Performance monitoring, drift detection |
| Retirement | Decommissioning, archival, transition |
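One practical way to make stage gates auditable is to encode the stages and their approval criteria as configuration that tooling and reviewers can check. The sketch below is a minimal Python illustration, not a structure required by the standard; the stage names follow the table above, while the criteria, approver roles and the `open_gates` helper are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class StageGate:
    """Approval criteria that must all be satisfied before a lifecycle stage is exited."""
    stage: str
    criteria: list[str]
    approver_role: str
    approved: bool = False

# Stage names mirror the lifecycle table above; criteria and approver roles
# are illustrative placeholders, not requirements taken from ISO/IEC 42001.
LIFECYCLE_GATES = [
    StageGate("Design", ["Requirements signed off", "Impact assessment completed"], "AI governance board"),
    StageGate("Verification", ["Bias assessment documented", "Acceptance criteria met"], "Quality lead"),
    StageGate("Deployment", ["Rollback plan defined", "Monitoring configured"], "Operations owner"),
]

def open_gates(gates: list[StageGate]) -> list[str]:
    """Return the stages still awaiting gate approval, for traceability reporting."""
    return [gate.stage for gate in gates if not gate.approved]

if __name__ == "__main__":
    print("Stages awaiting approval:", open_gates(LIFECYCLE_GATES))
```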
Audit Questions - A.6.1.2
• What lifecycle stages do you define?
• Show me your lifecycle management process
• What are the stage gate criteria?
• How do you track AI systems through the lifecycle?
A.6.1.3 Responsible AI
| Attribute | Details |
|---|---|
| Control | Principles of responsible AI relevant to the organization shall be defined and implemented throughout the AI system life cycle. |
| Purpose | Embed ethical AI throughout development and use |
| Related Clause | 5.2 (AI Policy) |
Implementation Guidance
- Define responsible AI principles
- Integrate principles into development processes
- Train personnel on responsible AI
- Implement checkpoints for principle adherence
- Review systems against principles
- Update principles based on emerging standards
Common Responsible AI Principles
| Principle | Description |
|---|---|
| Fairness | AI systems treat all people equitably |
| Transparency | AI operations are understandable and open |
| Accountability | Clear responsibility for AI outcomes |
| Privacy | Personal data is protected |
| Safety | AI systems operate safely and reliably |
| Human Oversight | Humans maintain appropriate control |
| Beneficence | AI provides benefit to users and society |
Audit Questions - A.6.1.3
• What responsible AI principles have you defined?
• How are principles implemented in development?
• How do you verify adherence to principles?
• Show me how principles are embedded in your processes
A.6.1.4 AI System Life Cycle Documentation
| Attribute | Details |
|---|---|
| Control | AI systems shall be documented according to defined requirements throughout their life cycle. |
| Purpose | Maintain comprehensive AI system records |
| Related Clause | 7.5 (Documented information) |
Documentation Requirements
| Lifecycle Stage | Documentation |
|---|---|
| Design | Requirements, design decisions, architecture |
| Data | Data sources, preparation, quality assessments |
| Development | Model specifications, training parameters, experiments |
| Testing | Test plans, results, validation reports |
| Deployment | Deployment procedures, configurations |
| Operation | User guides, operational procedures |
| Monitoring | Monitoring specifications, thresholds |
Audit Questions - A.6.1.4
• What documentation requirements exist for AI systems?
• Show me documentation for [specific AI system]
• How do you ensure documentation is maintained?
• What templates do you use?
A.6.2 AI System Life Cycle Stages Controls
A.6.2.2 Defining Objectives
| Attribute | Details |
|---|---|
| Control | Objectives for the AI system and its intended use shall be defined and documented. |
| Purpose | Establish clear purpose and success criteria |
Implementation Guidance
- Define business objectives for each AI system
- Document intended use cases
- Specify success criteria and metrics
- Identify constraints and boundaries
- Document what the AI system should NOT do
- Obtain stakeholder agreement on objectives
Audit Questions - A.6.2.2
• What are the objectives of [specific AI system]?
• How are objectives documented?
• Who approves AI system objectives?
• How do you define intended use?
A.6.2.3 Assessing Feasibility
| Attribute | Details |
|---|---|
| Control | Feasibility of achieving objectives shall be assessed and documented prior to development or acquisition. |
| Purpose | Ensure AI projects are viable before investment |
Feasibility Assessment Areas
| Area | Assessment Questions |
|---|---|
| Technical | Is this technically achievable with current methods? |
| Data | Is sufficient quality data available? |
| Resource | Do we have skills, budget, infrastructure? |
| Ethical | Can this be done responsibly? |
| Legal | Are there regulatory barriers? |
| Business | Does the business case justify investment? |
Audit Questions - A.6.2.3
• How do you assess feasibility before development?
• Show me a feasibility assessment
• What criteria determine go/no-go decisions?
• Have any projects been rejected based on feasibility?
A.6.2.4 Technical Documentation
| Attribute | Details |
|---|---|
| Control | Technical documentation shall be produced and maintained throughout the AI system life cycle. |
| Purpose | Enable understanding and maintenance of AI systems |
Technical Documentation Content
- System architecture and design
- Model specifications and parameters
- Training methodology and data
- Performance metrics and benchmarks
- API specifications and interfaces
- Dependencies and requirements
- Known limitations and constraints
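Much of this content can be captured as structured, version-controlled metadata (in the spirit of a model card) so the documentation stays close to the system it describes. The following is a minimal sketch under that assumption; the field names and the `claims-triage` example system are hypothetical, not part of the control.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class ModelCard:
    """Structured technical documentation kept under version control with the model."""
    system_name: str
    model_version: str
    architecture: str
    training_data_sources: list[str]
    performance_metrics: dict[str, float]
    dependencies: list[str]
    known_limitations: list[str] = field(default_factory=list)

card = ModelCard(
    system_name="claims-triage",          # hypothetical example system
    model_version="2.3.0",
    architecture="Gradient-boosted trees over tabular claim features",
    training_data_sources=["claims_2021_2023 (internal)"],
    performance_metrics={"accuracy": 0.91, "latency_p95_ms": 120.0},
    dependencies=["python>=3.10", "scikit-learn==1.4"],
    known_limitations=["Not validated for claims outside the EU market"],
)

# Serialising to JSON produces a reviewable artefact for each release.
print(json.dumps(asdict(card), indent=2))
```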
Audit Questions - A.6.2.4
• What technical documentation do you maintain?
• Show me technical documentation for [AI system]
• How is documentation kept current?
• Who is responsible for technical documentation?
A.6.2.5 Maintaining Records
| Attribute | Details |
|---|---|
| Control | Records related to AI systems shall be maintained throughout the AI system life cycle. |
| Purpose | Ensure traceability and auditability |
Records to Maintain
- Decision records and approvals
- Change records and version history
- Testing and validation records
- Incident and issue records
- Performance monitoring records
- Training data provenance
- Model versions and experiments
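For integrity and auditability, one common technique is to chain each record to the previous one with a cryptographic hash, so that tampering or deletion becomes detectable. The sketch below uses only the Python standard library and is purely illustrative; it is not a mechanism mandated by the control.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_record(log: list[dict], event: str, details: dict) -> dict:
    """Append a lifecycle record whose hash covers the previous record,
    so later alteration of any entry breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "details": details,
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return body

def verify_chain(log: list[dict]) -> bool:
    """Recompute each hash to confirm no record was altered or removed."""
    prev_hash = "0" * 64
    for rec in log:
        expected = dict(rec)
        stored_hash = expected.pop("hash")
        if expected["prev_hash"] != prev_hash:
            return False
        recomputed = hashlib.sha256(json.dumps(expected, sort_keys=True).encode()).hexdigest()
        if recomputed != stored_hash:
            return False
        prev_hash = stored_hash
    return True

log: list[dict] = []
append_record(log, "model_approved", {"model": "claims-triage", "version": "2.3.0"})
append_record(log, "deployed", {"environment": "production"})
print("Chain intact:", verify_chain(log))
```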
Audit Questions - A.6.2.5
• What records do you maintain for AI systems?
• How long are records retained?
• Show me records for [specific decision/change]
• How do you ensure record integrity?
A.6.2.6 Engaging Interested Parties
| Attribute | Details |
|---|---|
| Control | Relevant interested parties shall be engaged throughout the AI system life cycle. |
| Purpose | Incorporate stakeholder perspectives |
Stakeholder Engagement Activities
| Stage | Engagement Activities |
|---|---|
| Design | Requirements gathering, user research |
| Development | Feedback on prototypes, beta testing |
| Deployment | User training, change management |
| Operation | Support channels, feedback collection |
| Monitoring | User satisfaction surveys, complaints |
Audit Questions - A.6.2.6
• How do you engage stakeholders in AI development?
• Which stakeholders are involved at each stage?
• Show me evidence of stakeholder engagement
• How do you incorporate stakeholder feedback?
A.6.2.7 Approaches for Achieving Objectives
| Attribute | Details |
|---|---|
| Control | Approaches for achieving objectives shall be defined. |
| Purpose | Select appropriate methods for AI development |
Audit Questions - A.6.2.7
• How do you select AI approaches/methods?
• What alternatives were considered?
• Why was this approach chosen?
• How do you document approach decisions?
A.6.2.8 Defining System Requirements
| Attribute | Details |
|---|---|
| Control | Requirements for AI systems shall be defined and documented. |
| Purpose | Establish clear specifications for AI systems |
Requirement Types
| Type | Examples |
|---|---|
| Functional | Required behaviors, inputs and outputs |
| Performance | Accuracy, latency, throughput |
| Security | Access control, data protection |
| Compliance | Regulatory requirements |
| Usability | User interface, accessibility |
| Ethical | Fairness, transparency requirements |
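Performance, ethical and similar requirements are easier to verify later (A.6.2.9) when they are recorded as measurable thresholds rather than prose. A minimal sketch, assuming hypothetical requirement IDs and threshold values:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Requirement:
    """A requirement expressed as a measurable check against observed metrics."""
    req_id: str
    description: str
    check: Callable[[dict[str, float]], bool]

# Hypothetical thresholds for illustration only; real values come from the
# documented objectives and stakeholder agreement.
REQUIREMENTS = [
    Requirement("PERF-01", "Accuracy of at least 0.90", lambda m: m["accuracy"] >= 0.90),
    Requirement("PERF-02", "95th percentile latency under 200 ms", lambda m: m["latency_p95_ms"] < 200),
    Requirement("ETH-01", "Selection-rate gap across groups under 0.10", lambda m: m["selection_rate_gap"] < 0.10),
]

measured = {"accuracy": 0.91, "latency_p95_ms": 120.0, "selection_rate_gap": 0.07}
for req in REQUIREMENTS:
    print(req.req_id, "PASS" if req.check(measured) else "FAIL", "-", req.description)
```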
Audit Questions - A.6.2.8
• How are AI system requirements defined?
• Show me requirements for [AI system]
• How do you handle requirement changes?
• Who approves requirements?
A.6.2.9 Verification and Validation
| Attribute | Details |
|---|---|
| Control | Verification and validation of AI systems shall be performed, including system performance against defined objectives. |
| Purpose | Ensure AI systems meet requirements and objectives |
Verification vs Validation
| Aspect | Verification | Validation |
|---|---|---|
| Question | Are we building it right? | Are we building the right thing? |
| Focus | Technical correctness | Business value and fitness for purpose |
| Methods | Testing, code review, analysis | User acceptance, real-world testing |
AI-Specific Testing
- Model accuracy and performance testing
- Bias and fairness testing
- Robustness and adversarial testing
- Edge case and boundary testing
- Integration testing
- User acceptance testing
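For bias and fairness testing, a common starting point is to compare prediction rates across groups. The sketch below computes accuracy and a simple demographic-parity-style gap with NumPy; the toy data, group labels and 0.10 threshold are illustrative assumptions, and mature programmes typically apply several fairness metrics via dedicated toolkits.

```python
import numpy as np

def selection_rate_gap(y_pred: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups
    (a simple demographic-parity style check)."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Toy data for illustration only: true labels, predictions and group membership.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

accuracy = float((y_true == y_pred).mean())
gap = selection_rate_gap(y_pred, groups)
threshold = 0.10  # illustrative fairness gate, not a value from the standard

print(f"Accuracy: {accuracy:.2f}")
print(f"Selection-rate gap: {gap:.2f} ->", "PASS" if gap <= threshold else "FAIL")
```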
Audit Questions - A.6.2.9
• How do you verify and validate AI systems?
• Show me test results for [AI system]
• How do you test for bias?
• What acceptance criteria must be met?
A.6.2.10 AI System Operation and Monitoring
| Attribute | Details |
|---|---|
| Control | Operations and performance of AI systems shall be monitored. |
| Purpose | Ensure ongoing AI system effectiveness |
| Related Clause | 9.1 (Monitoring, measurement, analysis and evaluation) |
Monitoring Areas
| Area | Metrics |
|---|---|
| Performance | Accuracy, precision, recall, latency |
| Drift | Data drift, concept drift, model degradation |
| Usage | Volume, user patterns, adoption |
| Incidents | Errors, failures, complaints |
| Fairness | Outcomes across groups |
| Resources | Compute, memory, costs |
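Data drift is commonly quantified by comparing the live distribution of a feature against a reference window, for example with the Population Stability Index (PSI). The sketch below is a minimal NumPy implementation; the synthetic data and the 0.2 alert threshold are illustrative (0.2 is a common rule of thumb, not a value taken from the standard).

```python
import numpy as np

def population_stability_index(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference sample and a current sample of one feature.
    Values above roughly 0.2 are often treated as significant drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Avoid division by zero / log(0) in sparse bins.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(42)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)   # training-time feature sample
current = rng.normal(loc=0.5, scale=1.2, size=5000)     # shifted production sample

psi = population_stability_index(reference, current)
print(f"PSI = {psi:.3f} ->", "investigate drift" if psi > 0.2 else "stable")
```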
Audit Questions - A.6.2.10
• How do you monitor AI systems in production?
• What metrics do you track?
• How do you detect model drift?
• What triggers investigation or intervention?
• Show me monitoring dashboards
Control Implementation Summary
| Control | Key Evidence |
|---|---|
| A.6.1.2 Lifecycle Management | Lifecycle process documentation, stage gates |
| A.6.1.3 Responsible AI | Principles document, integration evidence |
| A.6.1.4 Lifecycle Documentation | Documentation standards, examples |
| A.6.2.2 Objectives | Objective statements, success criteria |
| A.6.2.3 Feasibility | Feasibility assessments, decisions |
| A.6.2.4 Technical Documentation | Technical specs, architecture docs |
| A.6.2.5 Records | Record retention, audit trails |
| A.6.2.6 Stakeholder Engagement | Engagement records, feedback |
| A.6.2.7 Approaches | Approach selection rationale |
| A.6.2.8 Requirements | Requirements documents |
| A.6.2.9 Verification & Validation | Test plans, results, sign-offs |
| A.6.2.10 Monitoring | Monitoring specs, dashboards, alerts |
Key Takeaways - A.6
1. A.6 is the largest domain with 12 controls
2. Lifecycle management requires defined processes and stage gates
3. Responsible AI principles must be defined AND implemented
4. Documentation is required throughout the lifecycle
5. Verification and validation must include AI-specific testing (bias, robustness)
6. Monitoring must continue throughout operation