7. AI System Safety, Failures, & Limitations

Complexity

Nowadays, we are faced with systems whose perception and decision-making modules rely on numerous learning models... One aspect of an AI-based system that increases its complexity is the parameter space, which can grow multiplicatively as the parameters of the system's internal parts combine.

Source: MIT AI Risk Repository (mit607)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit607

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.3 > Lack of capability or robustness

Mitigation strategy

1. Systematically reduce model complexity and the parameter space with model compression techniques: structural **pruning** to remove non-critical components, **quantization** to reduce parameter precision (e.g., from floating point to integer), or **knowledge distillation** to transfer what an overly complex model has learned to a smaller, more efficient one.
2. Integrate **robustness engineering** and rigorous **adversarial testing** throughout the AI lifecycle. This helps the system maintain predictable performance by strengthening its resilience against external manipulation (adversarial examples, prompt injections) and by managing the risks of a high-dimensional parameter space.
3. Improve the manageability and auditability of the complex system with **Explainable AI (XAI)** and continuous performance monitoring. This gives human stakeholders the transparency to understand the mechanisms behind the system's decisions, which is crucial for detecting the performance drift, bias, and accountability issues inherent in opaque, multi-model architectures.
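The compression techniques named in mitigation item 1 can be illustrated with a minimal NumPy sketch — magnitude-based pruning and symmetric int8 quantization applied to a raw weight matrix. This is a hypothetical toy, not a prescribed implementation; the function names `magnitude_prune` and `quantize_int8` and the 50% sparsity target are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (illustrative)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8 (illustrative)."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale  # approximate the originals with q * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))            # stand-in for a layer's weights
pruned = magnitude_prune(w, sparsity=0.5)
q, scale = quantize_int8(w)
reconstruction_error = np.max(np.abs(q * scale - w))
```

In practice frameworks apply these ideas per layer with fine-tuning afterwards; the sketch only shows the core arithmetic by which both methods shrink the effective parameter space.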
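The adversarial testing called for in mitigation item 2 can be sketched with the Fast Gradient Sign Method (FGSM), which perturbs an input within an L-infinity budget to probe robustness. The toy linear "model" below and the budget `eps=0.1` are assumptions for illustration only.

```python
import numpy as np

def fgsm(x, grad, eps=0.1):
    """FGSM: perturb the input in the sign direction of the loss
    gradient, bounded by an L-infinity budget eps."""
    return x + eps * np.sign(grad)

# Toy linear scorer: score = w . x, so the gradient of the score
# w.r.t. x is simply w; passing grad=-w pushes the score down.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
x_adv = fgsm(x, grad=-w, eps=0.1)
score_before = w @ x
score_after = w @ x_adv  # lower than score_before by construction
```

A robustness test suite would generate such perturbations for real models (where the gradient comes from backpropagation) and assert that predictions stay stable within the budget.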
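One model-agnostic route to the transparency described in mitigation item 3 is permutation importance: shuffle one feature at a time and measure the accuracy drop, revealing which inputs a model actually relies on. The sketch below uses an assumed toy classifier that only looks at feature 0; it is an illustration, not the repository's recommended XAI method.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Mean accuracy drop when each feature column is shuffled:
    a model-agnostic estimate of how much the model relies on it."""
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # shuffle one column in place
            drops.append(base - np.mean(predict(Xp) == y))
        importances.append(np.mean(drops))
    return np.array(importances)

# Hypothetical classifier that depends only on feature 0
predict = lambda X: (X[:, 0] > 0).astype(int)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
imp = permutation_importance(predict, X, y)
# imp[0] should dominate; the ignored features score near zero
```

Run continuously alongside performance monitoring, this kind of attribution makes drift and spurious feature reliance visible to human auditors.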