Foundations: Mastering Prompt Engineering
Prompt engineering is the art and science of designing inputs that guide AI toward accurate, relevant, and useful outputs.
Download Prompt Templates
These templates are designed to be copied and pasted directly into a large language model. They provide a structure for common academic tasks.
- The Structured Feedback template helps staff transform unstructured notes into student-friendly, formative feedback.
- The Socratic Tutor template provides a prompt for students, encouraging them to learn a topic through cooperative dialogue with the AI rather than focusing on pure output generation.
- The Academic Reviewer template offers a detailed structure for generating a peer-review-style critique of a written document.
The 10 Evidence-Based Principles of Effective Prompting
Master these core techniques to dramatically improve your AI interactions. Each principle is backed by research and practical testing.
1. Clarity & Specificity
Tell the LLM exactly what you want, leaving no room for interpretation. Because LLMs fill in unstated details on their own, vague prompts lead to varied, inconsistent outputs.
Use: "List our five most popular products and write a one-paragraph description of each"
2. Context Provision
Furnish all necessary background information. Context helps the LLM narrow its vast knowledge to your specific needs and avoid generic outputs.
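A hedged illustration (the module details are invented) of how supplying background narrows the response:

```
Context: I teach a first-year undergraduate research methods module.
The students have no prior statistics background.

Task: Explain the difference between correlation and causation in
plain language, using one example relevant to social science.
```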
3. Role/Persona Assignment
Assigning a specific role directly influences tone, style, and domain expertise. This makes responses more focused and professional.
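For example, a persona line such as the following (the wording is illustrative, not a fixed formula) shifts both tone and domain focus:

```
You are an experienced academic writing tutor. Review the paragraph
below and comment on clarity and argument structure, using a
supportive, formative tone.

[paragraph to review]
```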
4. Output Format Definition
Clearly specify the desired structure for machine-readable outputs like JSON, XML, or human-readable formats like lists and tables.
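A sketch of a format instruction; the field names here are invented for illustration:

```
Return the answer as JSON matching this shape exactly, with no
commentary outside the JSON:

{
  "summary": "<one-sentence summary>",
  "key_points": ["<point 1>", "<point 2>", "<point 3>"],
  "confidence": "<low | medium | high>"
}
```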
5. Examples (Few-Shot Prompting)
Including examples can substantially improve accuracy. Even a single example can guide the model toward the desired output structure and style.
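A minimal few-shot sketch (the task and examples are invented) showing how a couple of demonstrations pin down the expected format:

```
Classify the sentiment of each review as POSITIVE, NEGATIVE, or NEUTRAL.

Review: "The lectures were engaging and well paced."
Sentiment: POSITIVE

Review: "The reading list was outdated and hard to find."
Sentiment: NEGATIVE

Review: "The seminar ran on Tuesdays this term."
Sentiment:
```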
6. Iterative Refinement
Prompt engineering is rarely "one-and-done." Continuously refining prompts based on LLM responses is essential for improving quality.
7. Conciseness/Information Density
LLM performance can degrade as prompts grow longer. Improve "information density" by conveying the same instructions in fewer words.
Use: "Produce high quality, readable, clear content"
8. Chain of Thought (CoT) Prompting
For complex problems, encourage step-by-step reasoning. This enhances reasoning capabilities and provides transparency into the model's logic.
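One common pattern asks the model to show its working before answering; the problem below is an invented example of this style:

```
Question: A seminar group has 27 students. If they are split into
teams of 4, how many full teams are there, and how many students
are left over?

Think through the problem step by step, showing your working,
then state the final answer on its own line.
```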
9. Instructions over Constraints
Tell the model what to do (positive instructions) rather than what not to do (constraints). Reserve constraints for clear, binary "hard on/off" rules, such as safety requirements.
Use: "Only discuss the console, company, year, and total sales"
10. Testing & Data-Driven Approach
Test prompts empirically using a "Monte Carlo" approach: generate multiple outputs from the same prompt and evaluate their quality, so that judgments rest on a statistically reliable sample rather than a single run.
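A minimal Python sketch of this approach, assuming a hypothetical call_llm() client and a toy keyword-based scorer; both are placeholders for whatever API and evaluation metric you actually use:

```python
import statistics

def call_llm(prompt: str) -> str:
    """Placeholder for your real LLM client; swap in your provider's API."""
    raise NotImplementedError

def score(output: str) -> float:
    """Toy quality metric: fraction of required terms present in the output."""
    required = ["console", "company", "year", "sales"]
    return sum(term in output.lower() for term in required) / len(required)

def evaluate_prompt(prompt: str, n_runs: int = 20) -> tuple[float, float]:
    """Run the same prompt n_runs times; return mean score and spread."""
    scores = [score(call_llm(prompt)) for _ in range(n_runs)]
    return statistics.mean(scores), statistics.stdev(scores)

# Compare candidate prompts on both average quality and consistency:
# mean_a, spread_a = evaluate_prompt("Prompt variant A ...")
# mean_b, spread_b = evaluate_prompt("Prompt variant B ...")
```

Comparing the mean and spread across prompt variants turns prompt selection into a measurable decision rather than a single-sample impression.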