Chapter 19

Annex C: AI Objectives and Risk Sources

Comprehensive guide to Annex C covering potential AI objectives for organizations and detailed risk sources to consider in AI risk assessments.


Chapter Overview

Annex C is an informative annex (not mandatory) that provides guidance on potential AI objectives and risk sources. It helps organizations identify what they want to achieve with AI and what could go wrong.

Annex C Purpose

Annex C provides:
• Potential objectives organizations may have for AI systems
• Risk sources to consider in AI risk assessments

Use Annex C as a reference when:
• Setting AI objectives (Clause 6.2)
• Conducting AI risk assessments (Clauses 6.1.2 and 8.2)
• Performing impact assessments (Clause 8.4)

C.1 AI System Objectives

Organizations develop, provide, or use AI systems to achieve various objectives. Annex C lists potential objectives to consider.

Categories of AI Objectives

| Category | Objective Examples |
| --- | --- |
| Performance | Accuracy, reliability, efficiency, scalability |
| Safety | Safe operation, harm prevention, fail-safe behavior |
| Security | Confidentiality, integrity, availability, resilience |
| Privacy | Data protection, consent management, anonymization |
| Fairness | Non-discrimination, equitable outcomes, bias prevention |
| Transparency | Explainability, understandability, disclosure |
| Accountability | Clear responsibility, auditability, traceability |
| Human Oversight | Human control, intervention capability, override |
| Robustness | Resilience, error handling, adversarial resistance |
| Compliance | Legal compliance, regulatory adherence, standards |

Detailed AI Objectives

Performance Objectives

| Objective | Description |
| --- | --- |
| Accuracy | AI outputs are correct and reliable |
| Precision | AI produces consistent, repeatable results |
| Efficiency | AI operates with optimal resource use |
| Availability | AI systems are accessible when needed |
| Scalability | AI handles increasing workloads |
| Responsiveness | AI provides timely outputs |

Safety Objectives

| Objective | Description |
| --- | --- |
| Harm Prevention | AI does not cause physical or psychological harm |
| Fail-Safe Operation | AI fails in a safe manner |
| Predictable Behavior | AI behaves as expected |
| Bounded Operation | AI operates within defined limits |

Ethical Objectives

| Objective | Description |
| --- | --- |
| Fairness | AI treats all groups equitably |
| Non-Discrimination | AI does not discriminate based on protected characteristics |
| Human Dignity | AI respects human dignity and rights |
| Beneficence | AI provides benefit to users and society |
| Autonomy | AI supports human decision-making autonomy |

Using AI Objectives

When setting objectives (Clause 6.2):
1. Review Annex C objective categories
2. Identify objectives relevant to your AI systems
3. Prioritize based on context and risk
4. Define measurable targets where possible
5. Align with organizational values and policy
6. Document selected objectives and rationale
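The six steps above can be sketched as a simple objectives record. This is an illustrative example only, not part of the standard; the `AIObjective` structure, targets, and use-case names are all hypothetical.

```python
# Illustrative sketch: recording Annex C objectives with measurable
# targets and a rationale, per Clause 6.2. All entries are hypothetical.
from dataclasses import dataclass

@dataclass
class AIObjective:
    category: str   # Annex C category, e.g. "Performance", "Fairness"
    name: str       # objective selected from Annex C
    target: str     # measurable target, where possible
    rationale: str  # why this objective was selected

objectives = [
    AIObjective("Performance", "Accuracy",
                ">= 95% on the holdout evaluation set",
                "Core requirement for the hypothetical scoring use case"),
    AIObjective("Fairness", "Non-Discrimination",
                "Demographic parity difference <= 0.05",
                "Obligation toward protected groups"),
]

# Documented objectives can then be reviewed and reported.
for o in objectives:
    print(f"{o.category}/{o.name}: target {o.target}")
```

A record like this makes step 6 (document selected objectives and rationale) auditable, and gives step 4 (measurable targets) something concrete to track.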

C.2 Risk Sources

Annex C identifies potential sources of AI risk to consider during risk assessment.

Risk Source Categories

| Category | Description |
| --- | --- |
| Data-Related | Risks arising from data used in AI systems |
| Model-Related | Risks from AI model design and behavior |
| Technology-Related | Risks from technical infrastructure and tools |
| Human-Related | Risks from human interaction with AI |
| Organizational | Risks from organizational factors |
| External | Risks from external environment |

Data-Related Risk Sources

| Risk Source | Description | Example Risks |
| --- | --- | --- |
| Data Quality | Issues with data accuracy, completeness, timeliness | Incorrect predictions, unreliable outputs |
| Data Bias | Systematic bias in training data | Discriminatory outcomes, unfair decisions |
| Data Representativeness | Data not representing target population | Poor performance for underrepresented groups |
| Data Privacy | Personal data exposure risks | Privacy violations, regulatory non-compliance |
| Data Provenance | Unknown or unreliable data sources | Unverifiable data, licensing issues |
| Data Poisoning | Malicious manipulation of training data | Compromised model behavior |
| Data Drift | Changes in data distribution over time | Model performance degradation |

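To make the data-drift risk source concrete, here is a minimal sketch of one way to flag drift: compare the mean of a feature in recent production data against a training-time baseline. The threshold and data are hypothetical, and real deployments would use proper statistical tests (e.g. Kolmogorov-Smirnov or population stability index) rather than this simplification.

```python
# Minimal data-drift sketch (hypothetical threshold and data): flag drift
# when the recent mean shifts by more than `threshold` baseline standard
# deviations. Illustrative only, not a production drift detector.
from statistics import mean, stdev

def drifted(baseline, recent, threshold=0.25):
    shift = abs(mean(recent) - mean(baseline))
    return shift > threshold * stdev(baseline)

baseline = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]   # feature at training time
stable   = [1.0, 0.98, 1.02, 1.01]            # similar distribution
shifted  = [1.6, 1.7, 1.65, 1.75]             # distribution has moved

print(drifted(baseline, stable))   # expect False
print(drifted(baseline, shifted))  # expect True
```

Even a crude check like this turns "model performance degradation" from a latent risk into a monitored one.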
Model-Related Risk Sources

| Risk Source | Description | Example Risks |
| --- | --- | --- |
| Model Accuracy | Model does not meet performance requirements | Incorrect decisions, failed objectives |
| Model Robustness | Model sensitive to input variations | Inconsistent behavior, exploitation |
| Model Interpretability | Model decisions cannot be explained | Lack of trust, compliance issues |
| Model Bias | Model exhibits unfair behavior | Discrimination, reputational damage |
| Adversarial Vulnerability | Model susceptible to adversarial attacks | Security breaches, manipulated outputs |
| Concept Drift | Underlying patterns change over time | Outdated model, poor performance |
| Overfitting | Model too specialized to training data | Poor generalization, unreliable in production |

Technology-Related Risk Sources

| Risk Source | Description | Example Risks |
| --- | --- | --- |
| Infrastructure Failure | Computing/network infrastructure issues | AI system unavailability |
| Security Vulnerabilities | Technical security weaknesses | Data breaches, system compromise |
| Integration Issues | Problems integrating AI with other systems | System failures, data inconsistencies |
| Scalability Limits | Infrastructure cannot handle demand | Performance degradation, outages |
| Tool/Library Issues | Bugs or vulnerabilities in AI tools | Unexpected behavior, security risks |

Human-Related Risk Sources

| Risk Source | Description | Example Risks |
| --- | --- | --- |
| Misuse | AI used outside intended purpose | Harm, liability, compliance violations |
| Over-Reliance | Excessive trust in AI outputs | Uncritical acceptance of errors |
| Under-Reliance | Ignoring valid AI outputs | Missed benefits, inefficiency |
| Skill Gaps | Inadequate user/operator competence | Misoperation, errors, incidents |
| Automation Complacency | Reduced vigilance due to automation | Missed issues, delayed response |
| Social Engineering | Manipulation of AI users | Security breaches, data leakage |

Organizational Risk Sources

| Risk Source | Description | Example Risks |
| --- | --- | --- |
| Governance Gaps | Inadequate AI oversight and control | Unmanaged risks, accountability issues |
| Resource Constraints | Insufficient resources for AI management | Inadequate controls, rushed deployments |
| Communication Failures | Poor communication about AI | Misunderstanding, improper use |
| Change Management | Poorly managed AI system changes | Unexpected impacts, incidents |
| Vendor Dependency | Over-reliance on AI vendors | Vendor lock-in, service disruption |

External Risk Sources

| Risk Source | Description | Example Risks |
| --- | --- | --- |
| Regulatory Changes | New or changing AI regulations | Compliance gaps, required changes |
| Threat Actors | Malicious actors targeting AI | Attacks, data theft, manipulation |
| Market Changes | Changes in competitive landscape | Obsolescence, competitive disadvantage |
| Public Perception | Negative public view of AI | Reputational damage, adoption resistance |
| Technology Evolution | Rapid AI technology changes | Technical debt, skill gaps |

Using Risk Sources in Assessment

During risk assessment (Clauses 6.1.2 and 8.2):
1. Review Annex C risk source categories
2. Consider each category for your AI systems
3. Identify specific risks relevant to context
4. Assess likelihood and consequence
5. Document identified risks in risk register
6. Use as checklist to ensure comprehensive coverage
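The checklist use in step 6 can be sketched as a coverage check over the six Annex C categories, with risk scored as likelihood × consequence (step 4). The 1-5 scales and register entries below are illustrative assumptions, not prescribed by the standard.

```python
# Sketch: using Annex C risk source categories as a coverage checklist
# and ranking register entries by likelihood x consequence.
# Scales (1-5) and the example entries are hypothetical.
ANNEX_C_CATEGORIES = [
    "Data-Related", "Model-Related", "Technology-Related",
    "Human-Related", "Organizational", "External",
]

risk_register = [
    # (category, risk, likelihood 1-5, consequence 1-5)
    ("Data-Related",  "Training data bias",       3, 4),
    ("Human-Related", "Over-reliance on outputs", 4, 3),
    ("External",      "New AI regulation",        2, 5),
]

# Step 6: flag categories with no identified risks for review.
covered = {entry[0] for entry in risk_register}
for category in ANNEX_C_CATEGORIES:
    status = "covered" if category in covered else "NOT COVERED - review"
    print(f"{category}: {status}")

# Step 4: rank risks by score to support prioritization.
for category, risk, likelihood, consequence in sorted(
        risk_register, key=lambda r: r[2] * r[3], reverse=True):
    print(f"[{likelihood * consequence:>2}] {risk} ({category})")
```

Here three of the six categories have no entries, which is exactly the gap the checklist step is meant to surface before the assessment is signed off.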

Key Takeaways - Annex C

1. Annex C is informative (guidance, not mandatory)
2. Use AI objectives when setting your AIMS objectives
3. Use risk sources as checklist during risk assessment
4. Categories cover data, model, technology, human, organizational, and external
5. Tailor to your specific context and AI systems
6. Annex C helps ensure comprehensive risk identification
