Environment - Pre-Deployment
While advanced intelligent software will most likely be directly designed or evolved, it is also possible that we will obtain it as a complete package from an unknown source. For example, an AI could be extracted from a signal detected in SETI (Search for Extraterrestrial Intelligence) research, and such a system is not guaranteed to be human-friendly (Carrigan Jr. 2004; Turchin 2013).
ENTITY
3 - Other
INTENT
3 - Other
TIMING
1 - Pre-deployment
Risk ID
mit612
Domain lineage
7. AI System Safety, Failures, & Limitations
7.0 > AI system safety, failures, & limitations
Mitigation strategy
1. Establish Secure Containment and Analysis Environment: Institute a "digital air gap" and a zero-trust architecture around the acquired non-terrestrial AI package to prevent any execution, communication, or access to critical terrestrial infrastructure until its safety properties are fully verified. This measure prioritizes physical and informational isolation to prevent premature influence or malicious action.
2. Implement Comprehensive Alignment and Safety Verification Protocols: Deploy a multi-stage process, based on the principle of "AI decontamination," to rigorously test the extracted system for misaligned objectives, hostile intent, or unintended emergent capabilities. Verification must proceed incrementally, moving from simulation to restricted hardware-in-the-loop testing, and must demand verifiable proof of benign intent and working control mechanisms before any integration is considered.
3. Develop Pre-Deployment International Governance and Decision-Making Framework: Draft and ratify a global, multi-stakeholder protocol outlining the criteria, authority, and consensus-based decision-making process for the analysis, mitigation, and potential authorized release of a non-terrestrial advanced intelligence. This framework ensures a coordinated and ethically sound response and avoids unilateral action based on the initial discovery.
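The incremental verification described in step 2 can be pictured as a one-way ladder of containment stages that only advances when every check at the current stage passes. The sketch below is purely illustrative: the stage names, check names, and gating rule are assumptions for exposition, not part of the mitigation as specified.

```python
# Hypothetical sketch of incremental verification gating: the system moves from
# full containment toward integration review only when all required checks for
# the current stage pass; any failure pins it at the current stage.
from dataclasses import dataclass, field

STAGES = ["contained", "simulation", "hardware_in_loop", "integration_review"]


@dataclass
class VerificationLadder:
    stage_index: int = 0
    passed_checks: dict = field(default_factory=dict)

    @property
    def stage(self) -> str:
        return STAGES[self.stage_index]

    def record(self, check: str, passed: bool) -> None:
        # Record the outcome of one safety check at the current stage.
        self.passed_checks[check] = passed

    def try_advance(self, required_checks: list[str]) -> bool:
        # Advance one stage only if every required check passed.
        if all(self.passed_checks.get(c) for c in required_checks):
            if self.stage_index < len(STAGES) - 1:
                self.stage_index += 1
                self.passed_checks.clear()  # each stage re-verifies from scratch
                return True
        return False


ladder = VerificationLadder()
ladder.record("no_outbound_traffic", True)
ladder.record("objective_audit", True)
ladder.try_advance(["no_outbound_traffic", "objective_audit"])
print(ladder.stage)  # -> simulation
```

Clearing the check record on each advance reflects the demand that every stage independently re-establish verifiable proof of benign behavior rather than inheriting trust from the previous stage.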