Prompt engineering
With the wide application of generative AI, the ability to interact with AI efficiently and effectively has become one of the most important media literacies. Hence, it is imperative for generative AI users to learn and apply the principles of prompt engineering, which refers to a systematic process of carefully designing prompts or inputs to generative AI models to elicit valuable outputs. Due to the ambiguity of human languages, the interaction between humans and machines through prompts may lead to errors or misunderstandings. Hence, the quality of prompts is important. Another challenge is to debug the prompts and improve the ability to communicate with generative AI (V. Liu & Chilton, 2022).
ENTITY
1 - Human
INTENT
3 - Other
TIMING
2 - Post-deployment
Risk ID
mit545
Domain lineage
7. AI System Safety, Failures, & Limitations
7.4 > Lack of transparency or interpretability
Mitigation strategy
1. Implementation of a Formal Prompt Governance Framework: Establish an organizational mandate for prompt quality and compliance, which includes mandatory user training on the principles of secure and effective prompt engineering. This framework should standardize prompt construction, provide approved prompt templates, and require version control to ensure consistency and traceability of critical instructions, thereby addressing the deficit in 'media literacies' and promoting a systematic process for input design. 2. Enforced Context and Instruction Separation via Delimiters: To mitigate ambiguity and misunderstanding arising from the natural language interface, technical controls must enforce clear structural separation between system instructions and variable user input. The use of explicit delimiters (e.g., XML tags or distinct markers) and role-based messaging structures (System/User) provides the model with unambiguous cues to distinguish between commands and data, which is foundational for reducing interpretation errors. 3. Integration of Iterative Prompt Disambiguation and Testing Pipelines: To directly address the challenge of 'debugging the prompts,' organizations must integrate automated and human-in-the-loop systems for continuous prompt validation. This includes employing multi-turn dialogue frameworks to solicit clarification from the user when ambiguity is detected and utilizing adversarial testing (red teaming) and A/B testing against pre-defined performance and safety criteria to systematically refine and improve prompt efficacy before deployment.