2. Privacy & Security (Pre-deployment)

GPU Computation Platforms

Training LLMs requires significant GPU resources, which introduces an additional security concern: GPU side-channel attacks have been developed to extract the parameters of trained models [159], [163].

Source: MIT AI Risk Repository (mit26)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

1 - Pre-deployment

Risk ID

mit26

Domain lineage

2. Privacy & Security

186 mapped risks

2.2 > AI system security vulnerabilities and attacks

Mitigation strategy

1. Enforce strict hardware-level resource isolation by dedicating physical GPUs to tenants hosting sensitive LLM workloads, or implement secured GPU virtualization with robust partitioning and arbitration mechanisms

2. Restrict access to, and reduce the temporal precision of, low-level performance-monitoring APIs and high-resolution timers on the GPU stack

3. Employ constant-time algorithms and data-independent memory access patterns within GPU kernels to eliminate the data-dependent variability exploited by side-channel timing attacks