7. AI System Safety, Failures, & Limitations

Misapplication

This is the risk posed by an otherwise well-functioning system when it is used for a purpose, or in a manner, its creators did not intend. Negative consequences arise because the system is applied outside its intended use.

Source: MIT AI Risk Repository (mit189)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit189

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.3 > Lack of capability or robustness

Mitigation strategy

1. Implement **Role-Based Access Control (RBAC)** and **Privileged Access Management (PAM)** to enforce the principle of least privilege, strictly limiting user access and system capabilities to those essential for the system's intended function. This directly reduces the operational surface area for unauthorized or unintended use.
2. Utilize **User and Entity Behavior Analytics (UEBA)** in conjunction with **Continuous Controls Monitoring (CCM)** to establish a baseline of intended operational patterns. This enables real-time detection and flagging of anomalous or inappropriate interactions, such as unauthorized data access or attempts to execute models outside of specified parameters.
3. Establish a robust **Governance and Accountability Framework** that includes mandatory technical measures such as **Digital Watermarking** of distributed models or datasets. This provides a clear, auditable trail to deter unauthorized distribution or sharing and ensures transparent traceability of a misuse event to the responsible human agent.
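The least-privilege idea behind the RBAC recommendation can be sketched in a few lines. This is an illustrative toy, not part of the repository entry: the role names, capability strings, and grant sets below are all assumptions.

```python
# Minimal RBAC sketch: each role is granted only the capabilities essential
# to its function (least privilege). Roles and capabilities are hypothetical.
ROLE_CAPABILITIES = {
    "viewer": {"read_outputs"},
    "operator": {"read_outputs", "run_inference"},
    "admin": {"read_outputs", "run_inference", "update_model", "export_data"},
}

def is_allowed(role: str, capability: str) -> bool:
    """Permit an action only if the role's grant set includes the capability."""
    return capability in ROLE_CAPABILITIES.get(role, set())

# An operator may run inference but may not export data.
print(is_allowed("operator", "run_inference"))  # True
print(is_allowed("operator", "export_data"))    # False
```

Unknown roles default to an empty grant set, so any unrecognized user is denied by default rather than silently permitted.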
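The UEBA recommendation amounts to comparing current behavior against a learned baseline of normal use. A minimal sketch of that idea, assuming daily request counts as the monitored signal and a simple z-score threshold (both are illustrative choices, not the repository's method):

```python
import statistics

def build_baseline(daily_request_counts):
    """Summarize normal usage as (mean, standard deviation)."""
    return (statistics.mean(daily_request_counts),
            statistics.stdev(daily_request_counts))

def is_anomalous(count, baseline, z_threshold=3.0):
    """Flag counts more than z_threshold standard deviations above the mean."""
    mean, stdev = baseline
    if stdev == 0:
        return count != mean
    return (count - mean) / stdev > z_threshold

# Hypothetical history of a user's daily request counts.
history = [100, 110, 95, 105, 98, 102, 107]
baseline = build_baseline(history)
print(is_anomalous(104, baseline))   # typical day -> False
print(is_anomalous(1000, baseline))  # sudden spike -> True
```

A production UEBA system would model many signals per user and entity (access times, data volumes, endpoints touched); the point here is only the baseline-then-deviation pattern.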