4. Malicious Actors & Misuse

Misuse of drug-discovery models

Models used for drug discovery, such as drug-target affinity prediction models, can be repurposed to identify or develop dangerous toxins. This is particularly concerning if the training data contains information related to potentially dangerous proteins and viruses.

Source: MIT AI Risk Repository (mit1197)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1197

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.2 > Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Implement advanced **design-specific protective measures** within the model architecture itself to mitigate misuse, such as technical constraints that prevent the inversion of the affinity prediction or toxicity screening objective function, thereby blocking the system from being intentionally optimized toward the generation of highly toxic or dangerous compounds.

2. Establish stringent **risk-based access control and governance protocols** for deployment, including mandatory user identity verification, documented authorization for access to high-risk models, and continuous usage monitoring and logging. These measures should align with existing biosecurity and dual-use research regulations to limit model accessibility to only vetted and approved researchers.

3. Mandate and execute continuous **adversarial testing (red teaming)**, involving biosecurity and cybersecurity experts who proactively attempt to misuse or repurpose the AI model to generate harmful agents. The findings from these simulations must be used to iteratively strengthen both the technical safeguards and the access controls before the model is widely deployed.
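The first two mitigations can be sketched together as a minimal illustration: a gateway in front of a screening model that (a) refuses queries from unverified users, (b) withholds predictions above a toxicity ceiling so the screening objective cannot be inverted into a toxin search, and (c) logs every request for audit. Everything here is hypothetical: `ScreeningGateway`, `predict_toxicity` (a hash-based placeholder, not a real predictor), and the 0.8 ceiling are invented for this sketch, not part of any actual deployment.

```python
import hashlib
from dataclasses import dataclass, field
from typing import Optional

def predict_toxicity(compound: str) -> float:
    """Placeholder predictor: a deterministic pseudo-score in [0, 1]
    derived from a hash of the compound string. A real system would
    call the actual affinity/toxicity model here."""
    digest = hashlib.sha256(compound.encode("utf-8")).digest()
    return digest[0] / 255.0

@dataclass
class ScreeningGateway:
    """Hypothetical access-controlled wrapper around a screening model."""
    approved_users: set              # vetted, identity-verified researchers
    toxicity_ceiling: float = 0.8    # hypothetical policy threshold
    audit_log: list = field(default_factory=list)

    def query(self, user: str, compound: str) -> Optional[float]:
        # Risk-based access control: unverified users get nothing.
        if user not in self.approved_users:
            self.audit_log.append((user, compound, "DENIED: unverified user"))
            return None
        score = predict_toxicity(compound)
        # Withhold high-risk scores so the gateway cannot be used as an
        # oracle to optimize candidates toward maximum toxicity.
        if score > self.toxicity_ceiling:
            self.audit_log.append((user, compound, "REDACTED: above ceiling"))
            return None
        self.audit_log.append((user, compound, f"OK: {score:.2f}"))
        return score
```

A caller never sees a score above the ceiling, and every query, served or refused, leaves an audit trail that the monitoring protocols in point 2 could review. Red-team exercises (point 3) would then probe exactly this kind of guard for bypasses.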