Automated discovery and exploitation of software systems
GPAIs can be used to aid in the automated discovery of software vulnerabilities [33]. This can empower malicious actors, making their cyberattacks more effi- cient and potentially more damaging. This type of automation allows attackers to expand the scale of their operations at a low cost, increasing the impact of their actions. New malware can be developed automatically, or the known vulnerabilities can be exploited to create more sophisticated attacks.
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit1191
Domain lineage
4. Malicious Actors & Misuse
4.2 > Cyberattacks, weapon development or use, and mass harm
Mitigation strategy
1. Mandate the implementation of autonomous, multi-layered defense systems. This involves the real-time integration of AI-powered threat detection, such as behavioral analytics and anomaly detection tools (e.g., combining Network Detection and Response with Endpoint Detection and Response), to identify and neutralize high-velocity, high-volume, and novel attack patterns characteristic of automated exploitation. 2. Establish rigorous, continuous automated security hygiene protocols. This includes mandating self-patching and self-healing software architectures, zero-trust network configurations, and continuous attack surface management to rapidly mitigate known software vulnerabilities that General-Purpose AI models are demonstrably proficient at discovering and exploiting. 3. Enforce robust governance and security practices throughout the AI and LLM development lifecycle. This involves integrating comprehensive security controls from model training through deployment, including filtering model inputs to prevent prompt injection and command-like directives, and adopting standardized "AI Security Compliance" frameworks to ensure preemptive risk assessment and incident response planning. 4. Implement mandatory strong identity and access controls. This requires enforcing universal multi-factor authentication (MFA) and dictating the use of super-strong, unique passwords (greater than 15 characters, mixed composition) to combat the AI-assisted social engineering and credential harvesting that facilitate lateral movement and account compromise.