6. Socioeconomic and Environmental

Compliance

The potential for AI systems to violate laws, regulations, and ethical guidelines (including copyright). Non-compliance can lead to legal penalties, reputational damage, and loss of trust. While other risks in our taxonomy apply to system developers, users, and broader society, this risk is generally restricted to the first two groups.

Source: MIT AI Risk Repository (mit159)

| Field | Value |
| --- | --- |
| Entity | 2 - AI |
| Intent | 3 - Other |
| Timing | 2 - Post-deployment |
| Risk ID | mit159 |
| Domain lineage | 6. Socioeconomic and Environmental > 6.5 Governance failure (262 mapped risks) |

Mitigation strategy

1. **Establish a Comprehensive AI Governance and Accountability Framework.** Institute a formal, cross-functional AI governance structure, designating clear roles, responsibilities, and accountability across the AI lifecycle (development, procurement, and deployment). This framework must mandate the creation and annual revision of a written AI policy, ensuring all AI use adheres to legal, ethical, and organizational standards, and providing necessary human oversight for high-impact decisions.
2. **Implement Proactive Risk and Impact Assessments.** Conduct mandatory, rigorous, and continuous risk and impact assessments on all AI systems, commensurate with their foreseeable risk level (e.g., as per the EU AI Act's risk-based approach). These assessments must specifically test for algorithmic bias, data-quality issues, privacy vulnerabilities, and potential violations of anti-discrimination laws, with a defined protocol for mitigation and redress.
3. **Ensure Robust Legal and Intellectual Property Compliance.** Formalize processes for securing legal and intellectual property (IP) compliance, including obtaining appropriate licenses for all third-party training data and pre-trained models to mitigate copyright-infringement risk. Furthermore, all AI vendor contracts must include robust indemnification clauses to transfer liability for issues arising from the AI's training data or outputs that are beyond the deploying organization's control.
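The risk-proportionate deployment gate described in step 2 can be sketched in code. The tier names, required checks, and system names below are illustrative assumptions loosely modeled on the EU AI Act's risk-based approach, not a prescribed implementation:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    # Illustrative tiers; the EU AI Act defines its own categories.
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

@dataclass
class AISystemAssessment:
    name: str
    tier: RiskTier
    # Hypothetical check names mirroring step 2's assessment areas.
    checks_passed: dict = field(default_factory=dict)

    def may_deploy(self) -> bool:
        """Gate deployment on assessed risk: unacceptable-risk systems
        are always blocked; high-risk systems must pass every check."""
        if self.tier is RiskTier.UNACCEPTABLE:
            return False
        if self.tier is RiskTier.HIGH:
            required = {"bias", "data_quality", "privacy", "anti_discrimination"}
            passed = {k for k, ok in self.checks_passed.items() if ok}
            return required.issubset(passed)
        return True

# A limited-risk system deploys without the full check battery...
chatbot = AISystemAssessment("support-chatbot", RiskTier.LIMITED)
# ...while a high-risk system failing one mandated check is blocked.
screener = AISystemAssessment(
    "cv-screener", RiskTier.HIGH,
    {"bias": True, "data_quality": True,
     "privacy": True, "anti_discrimination": False})
```

A gate like this would sit in a deployment pipeline, with the check results fed in from the documented assessments rather than hard-coded.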