4. Malicious Actors & Misuse
2 - Post-deployment

Assisting code generation for cyber security threats

Anticipated risk: The creators of Copilot, an assistive coding tool based on GPT-3, suggest that such tools may lower the cost of developing polymorphic malware, which changes its features in order to evade detection [37].

Source: MIT AI Risk Repository (mit218)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit218

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.2 > Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Implement continuous security validation at the point of code generation: IDE-integrated agents inspect and filter insecure code patterns before they enter the codebase, enforcing policy compliance and slowing the introduction of vulnerabilities.

2. Deploy Endpoint Detection and Response (EDR) solutions that use behavioral analysis and machine learning to identify the dynamic execution patterns characteristic of polymorphic malware, which traditional signature-based detection cannot reliably catch.

3. Establish a governance framework and mandatory developer security training covering AI-specific risks, including avoiding insecure prompting techniques and requiring human review of all AI-generated code to validate security controls and application logic.
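The first mitigation, point-of-generation validation, can be illustrated with a minimal sketch: a scanner that checks an AI-suggested snippet against a small set of insecure-pattern rules before it is accepted. The rule names and patterns here are hypothetical examples; real IDE-integrated agents use far richer static analysis than regular expressions.

```python
import re

# Hypothetical insecure-pattern rules; a real tool would use proper static analysis.
INSECURE_PATTERNS = {
    "eval-call": re.compile(r"\beval\s*\("),
    "shell-injection": re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
    "hardcoded-secret": re.compile(r"(password|api_key)\s*=\s*['\"]\w+['\"]", re.IGNORECASE),
}

def scan_generated_code(code: str) -> list[str]:
    """Return the names of the rules that the generated snippet violates."""
    return [name for name, pattern in INSECURE_PATTERNS.items() if pattern.search(code)]

# An AI-suggested line is checked before being inserted into the codebase.
snippet = "subprocess.run(cmd, shell=True)"
print(scan_generated_code(snippet))  # → ['shell-injection']
```

A gate like this would run on every completion the assistant proposes, rejecting or flagging matches for human review rather than silently inserting them.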

ADDITIONAL EVIDENCE

Risks of disinformation also intersect with concerns about LMs creating new cybersecurity threats: disinformation can be generated in target domains, such as cybersecurity, to distract specialists from addressing real vulnerabilities [155].