Cybersecurity
This section catalogs the risk sources and mitigation measures related to cybersecurity. These items concern security in two senses: ensuring AI models are accessible only to their intended users, and ensuring AI models have appropriate access to the external world during both the model development and deployment stages.
ENTITY
3 - Other
INTENT
3 - Other
TIMING
3 - Other
Risk ID
mit1161
Domain lineage
2. Privacy & Security
2.0 > Privacy & Security
Mitigation strategy
1. Establish a Zero Trust Architecture and the Principle of Least Privilege
Enforce strict access controls and a Zero Trust model across the entire AI development and deployment lifecycle. Use robust authentication (e.g., short-lived tokens, M2M OAuth) and role-based access control (RBAC) to ensure that AI models, data pipelines, and infrastructure endpoints are accessible only to explicitly authorized users or services, minimizing the risk of unauthorized access or lateral movement in the event of a compromise.
2. Implement Cryptographic Protection and Supply Chain Integrity Validation
Apply robust encryption protocols (AES-256 for data at rest, TLS 1.3 for data in transit) across the AI data pipeline, including sensitive training data and model artifacts. Furthermore, mandate continuous vetting and validation of all third-party dependencies, including datasets, software libraries, and pre-trained models, to mitigate supply chain vulnerabilities such as tainted dataset injection.
3. Deploy Continuous Security Monitoring and Incident Response Capabilities
Institute real-time behavioral monitoring and vulnerability management systems (e.g., UEBA) across all AI environments to detect anomalous activity, configuration drift, and security breaches post-deployment. Establish and regularly exercise an incident response framework specifically tailored to AI-unique threats, such as prompt injection and model extraction, ensuring rapid containment and remediation.
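The least-privilege and short-lived-token pattern in item 1 can be sketched in a few lines. This is a minimal illustrative sketch, not a production authorization system: the role names, permission strings, and 15-minute TTL are assumptions introduced for the example, and a real deployment would delegate token issuance and validation to an identity provider.

```python
import time
import secrets

# Hypothetical role-to-permission map; deny-by-default means any
# permission not listed here is refused.
ROLE_PERMISSIONS = {
    "data-scientist": {"model:read", "dataset:read"},
    "ml-engineer": {"model:read", "model:deploy", "dataset:read"},
}

TOKEN_TTL_SECONDS = 900  # short-lived: tokens expire after 15 minutes


def issue_token(role: str) -> dict:
    """Issue a short-lived bearer token bound to a single role."""
    if role not in ROLE_PERMISSIONS:
        raise ValueError(f"unknown role: {role}")
    return {
        "token": secrets.token_urlsafe(32),
        "role": role,
        "expires_at": time.time() + TOKEN_TTL_SECONDS,
    }


def is_authorized(token: dict, permission: str) -> bool:
    """Grant access only for an unexpired token with an explicit permission."""
    if time.time() >= token["expires_at"]:
        return False
    return permission in ROLE_PERMISSIONS.get(token["role"], set())


tok = issue_token("data-scientist")
print(is_authorized(tok, "model:read"))    # True: explicitly granted
print(is_authorized(tok, "model:deploy"))  # False: not granted to this role
```

Because authorization is checked per request against an expiring token, a stolen credential has a narrow window of use and grants only the permissions of one role, limiting lateral movement.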
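The supply-chain validation in item 2 often reduces to pinning a cryptographic digest for each third-party artifact and refusing anything that drifts. A minimal sketch using Python's standard library follows; the file name and the idea of pinning at vetting time are illustrative assumptions, and in practice the pinned digests would come from a signed manifest or lockfile rather than being computed locally.

```python
import hashlib
import tempfile
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 8192) -> str:
    """Stream the file in chunks so large model artifacts are never
    loaded into memory at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Reject any dependency whose digest differs from the pinned value."""
    return sha256_of(path) == expected_sha256


# Demo with a throwaway file standing in for a pre-trained model artifact.
with tempfile.TemporaryDirectory() as tmp:
    artifact = Path(tmp) / "pretrained-model.bin"
    artifact.write_bytes(b"model weights placeholder")
    pinned = sha256_of(artifact)               # digest pinned at vetting time
    print(verify_artifact(artifact, pinned))   # True: artifact unchanged
    artifact.write_bytes(b"tampered weights")  # simulated tainted replacement
    print(verify_artifact(artifact, pinned))   # False: digest drifted
```

The same check applies equally to datasets and software libraries, which is why lockfile-style pinning is a common baseline defense against tainted dataset injection.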
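The behavioral monitoring in item 3 can be illustrated with a toy baseline check in the spirit of UEBA: flag a user whose current request rate deviates sharply from their own history. The z-score threshold, the requests-per-minute metric, and the sample values are illustrative assumptions; real UEBA systems model many signals, not one.

```python
import statistics


def is_anomalous(history: list[float], current: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag `current` if it sits more than z_threshold standard deviations
    above the user's own historical mean (one-sided: spikes, not lulls)."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        # Flat baseline: treat any upward deviation as anomalous.
        return current > mean
    return (current - mean) / stdev > z_threshold


# Requests per minute over the past hour vs. a sudden burst that could
# indicate scraping consistent with model extraction.
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
print(is_anomalous(baseline, 14))  # False: within the user's normal range
print(is_anomalous(baseline, 90))  # True: flag for incident response
```

A flagged spike would then feed the incident response framework described above, triggering containment steps such as token revocation or rate limiting while the activity is investigated.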