7. AI System Safety, Failures, & Limitations

Anonymous resource acquisition

The demonstrated ability of anonymous actors to accumulate resources online (e.g., Satoshi Nakamoto as an anonymous crypto billionaire)

Source: MIT AI Risk Repository (mit863)

ENTITY

2 - AI

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit863

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.2 > AI possessing dangerous capabilities

Mitigation strategy

1. Mandate Zero-Trust, Fine-Grained Authorization for Agentic Tools

Implement rigorous Role-Based Access Control (RBAC) and Fine-Grained Authorization (FGA) for every external tool and API the AI system can reach. The execution logic for any resource-acquisition or financial-transaction tool must require authorization tied to a verified, non-anonymous human principal (e.g., the logged-in user via OAuth2 delegation), rather than to the AI agent itself, treating every tool call as a security checkpoint that blocks unauthorized autonomous action.

2. Enforce Strict Access Controls and Know-Your-Customer (KYC) Protocols

Apply technical and administrative controls to limit access to AI models with general-purpose, open-ended capabilities that could be repurposed for anonymous resource acquisition. For external-facing deployments, institute strict KYC screening and identity verification before granting access, and use compute monitoring to detect and restrict suspicious or unscreened use of dangerous capabilities.

3. Establish Immutable Auditability and Continuous Anomaly Monitoring

Maintain detailed, immutable audit trails and transaction logs for all data movement, financial transactions, and decision-making processes initiated by or involving the AI system. Deploy continuous monitoring with anomaly detection to proactively flag unusual resource-accumulation patterns, high-volume query rates, or deviations from authorized operational norms that may indicate malicious or unintended autonomous resource acquisition.
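Strategies 1 and 3 can be illustrated with a minimal sketch: a deny-by-default authorization checkpoint that resolves permissions from a verified human principal (not the agent), appends every attempt to an audit log, and runs a simple volume-based anomaly scan over that log. All names here (`Principal`, `TOOL_PERMISSIONS`, `ROLE_GRANTS`, `flag_anomalies`) and the role/permission values are hypothetical placeholders; a real deployment would resolve the principal from an OAuth2 token and write to tamper-evident, append-only storage rather than an in-memory list.

```python
import time
import uuid
from dataclasses import dataclass

# Hypothetical principal record: tool calls are authorized against a verified
# human identity (e.g., resolved from an OAuth2 delegation), never the agent.
@dataclass(frozen=True)
class Principal:
    user_id: str
    verified: bool                 # passed KYC / identity verification
    roles: frozenset = frozenset()

# Illustrative fine-grained permissions required per tool.
TOOL_PERMISSIONS = {
    "transfer_funds": "finance:write",
    "purchase_compute": "infra:provision",
}

# Illustrative role-to-permission grants (RBAC).
ROLE_GRANTS = {
    "treasurer": {"finance:write"},
    "ops_admin": {"infra:provision"},
}

audit_log = []  # append-only in this sketch; production would use immutable storage

def authorize_tool_call(principal, tool_name, args):
    """Treat every tool call as a security checkpoint: deny by default,
    and log every attempt whether it is allowed or not."""
    entry = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "user": principal.user_id,
        "tool": tool_name,
        "args": args,
        "decision": "deny",
    }
    required = TOOL_PERMISSIONS.get(tool_name)
    granted = set().union(*(ROLE_GRANTS.get(r, set()) for r in principal.roles))
    if principal.verified and required is not None and required in granted:
        entry["decision"] = "allow"
    audit_log.append(entry)
    return entry["decision"] == "allow"

def flag_anomalies(log, max_calls_per_user=10):
    """Toy anomaly check: flag users whose tool-call volume exceeds a threshold."""
    counts = {}
    for entry in log:
        counts[entry["user"]] = counts.get(entry["user"], 0) + 1
    return sorted(u for u, n in counts.items() if n > max_calls_per_user)
```

In this sketch a verified treasurer can invoke `transfer_funds`, while an unverified or unprivileged caller is denied; both outcomes land in the audit log, which the anomaly scan then inspects for unusual call volume.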