The quality of output you get from AI models like GPT-4, Claude, and Gemini depends enormously on how you ask. Prompt engineering is the discipline of crafting inputs that reliably produce the outputs you want. It is not about tricks or hacks but about clear communication with a system that takes your instructions literally. In this guide, we will cover the core prompting techniques — zero-shot, few-shot, chain of thought, and structured prompting — with practical examples you can apply immediately to your own work.
Zero-Shot Prompting: Clear Instructions Without Examples
Zero-shot prompting means giving the model a task with no examples. The model relies entirely on its training knowledge to understand what you want. The key to effective zero-shot prompts is specificity and structure:
# Bad zero-shot prompt
Summarize this article.
# Good zero-shot prompt
Summarize the following article in exactly 3 bullet points.
Each bullet should be one sentence, focusing on the key findings.
Write for a technical audience familiar with machine learning.
Article:
[article text here]
The difference is night and day. The first prompt leaves everything ambiguous: how long should the summary be? What format? What audience? The second prompt constrains the output precisely. Here is a more complex zero-shot example for code generation:
# Zero-shot prompt for code generation
Write a Python function called `validate_email` that:
- Takes a single string parameter `email`
- Returns True if the email is valid, False otherwise
- Uses regex for validation
- Handles edge cases: empty strings, missing @ symbol, multiple @ symbols
- Includes a docstring with examples
- Does NOT use any external libraries beyond `re`
By listing explicit requirements, constraints, and edge cases, you dramatically reduce the chance of getting an incomplete or incorrect response. Treat zero-shot prompts like a detailed specification document.
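A response satisfying that specification might look like the following sketch. The regex here is one reasonable simplification of real email syntax, not the only valid choice:

```python
import re

# A simplified pattern: local part, a single @, and a dotted domain.
# Real email syntax (RFC 5322) is far more permissive; this favors readability.
_EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def validate_email(email):
    """Return True if `email` looks like a valid address, False otherwise.

    Examples:
        >>> validate_email("user@example.com")
        True
        >>> validate_email("")
        False
        >>> validate_email("a@b@c.com")
        False
    """
    # Edge cases named in the prompt: empty string, missing or multiple @.
    if not email or email.count("@") != 1:
        return False
    return _EMAIL_RE.fullmatch(email) is not None
```

Notice how each bullet in the prompt maps to something checkable in the result, which makes it easy to verify the model actually followed the specification.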
Few-Shot Prompting: Teaching by Example
Few-shot prompting provides examples that demonstrate the pattern you want the model to follow. This is powerful when the task has a specific format, tone, or logic that is hard to describe in words alone:
# Few-shot prompt for data extraction
Extract structured data from product descriptions.
Example 1:
Input: "Apple MacBook Pro 16-inch with M3 Max chip, 36GB RAM, 1TB SSD. $3,499"
Output: {"name": "MacBook Pro 16-inch", "brand": "Apple", "processor": "M3 Max", "ram": "36GB", "storage": "1TB SSD", "price": 3499}
Example 2:
Input: "Samsung Galaxy S24 Ultra, Snapdragon 8 Gen 3, 12GB RAM, 512GB, priced at $1,299.99"
Output: {"name": "Galaxy S24 Ultra", "brand": "Samsung", "processor": "Snapdragon 8 Gen 3", "ram": "12GB", "storage": "512GB", "price": 1299.99}
Example 3:
Input: "Dell XPS 15 laptop featuring Intel Core i9-13900H, 32GB DDR5, 1TB NVMe for $1,899"
Output: {"name": "XPS 15", "brand": "Dell", "processor": "Intel Core i9-13900H", "ram": "32GB DDR5", "storage": "1TB NVMe", "price": 1899}
Now extract from:
Input: "Google Pixel 9 Pro with Tensor G4 chip, 16GB RAM, 256GB storage at $999"
The examples teach the model your exact schema, naming conventions, and how to handle variations in input format. Three to five examples usually suffice. Choose examples that cover different edge cases and formats you expect to encounter.
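In practice, a few-shot extraction like this is usually paired with a small validation step so malformed responses are caught early. A minimal sketch, where the required keys mirror the schema in the examples above:

```python
import json

# Keys taken from the few-shot examples' output schema.
REQUIRED_KEYS = {"name", "brand", "processor", "ram", "storage", "price"}

def parse_extraction(response_text):
    """Parse a model's JSON output and check it matches the expected schema."""
    data = json.loads(response_text)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Missing keys: {sorted(missing)}")
    if not isinstance(data["price"], (int, float)):
        raise ValueError("price must be numeric")
    return data

# For the Pixel 9 Pro input above, a response following the schema would parse cleanly:
sample = ('{"name": "Pixel 9 Pro", "brand": "Google", "processor": "Tensor G4", '
          '"ram": "16GB", "storage": "256GB", "price": 999}')
product = parse_extraction(sample)
```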
Few-shot works exceptionally well for classification tasks too:
# Few-shot classification
Classify support tickets by priority.
"My account is locked and I have a presentation in 10 minutes" -> URGENT
"How do I change my profile picture?" -> LOW
"Payment failed for my team's enterprise subscription renewal" -> HIGH
"The export button produces a corrupted file every time" -> MEDIUM
Classify: "All production dashboards are showing error 500 since 8am"
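Assembling a few-shot classification prompt from a list of labeled examples keeps the examples in one place and makes them easy to extend. A minimal sketch using the tickets above:

```python
# Labeled examples taken from the classification prompt above.
EXAMPLES = [
    ("My account is locked and I have a presentation in 10 minutes", "URGENT"),
    ("How do I change my profile picture?", "LOW"),
    ("Payment failed for my team's enterprise subscription renewal", "HIGH"),
    ("The export button produces a corrupted file every time", "MEDIUM"),
]

def build_classification_prompt(ticket, examples=EXAMPLES):
    """Build a few-shot priority-classification prompt for one ticket."""
    lines = ["Classify support tickets by priority."]
    for text, label in examples:
        lines.append(f'"{text}" -> {label}')
    lines.append(f'Classify: "{ticket}"')
    return "\n".join(lines)
```

Swapping in new examples or labels is then a data change, not a prompt rewrite.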
Chain of Thought: Reasoning Step by Step
Chain of thought (CoT) prompting asks the model to show its reasoning process before arriving at an answer. This dramatically improves accuracy on tasks requiring logic, math, or multi-step analysis:
# Without chain of thought
A store has 45 apples. They sell 60% on Monday, then receive a shipment
of 30 on Tuesday. On Wednesday they sell 1/3 of what they have.
How many apples remain?
# With chain of thought
A store has 45 apples. They sell 60% on Monday, then receive a shipment
of 30 on Tuesday. On Wednesday they sell 1/3 of what they have.
How many apples remain?
Think through this step by step:
1. Start with the initial count
2. Calculate Monday's sales and remaining
3. Add Tuesday's shipment
4. Calculate Wednesday's sales and final count
Show your work for each step.
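For reference, the arithmetic the prompt walks through can be verified directly, which is also a good habit when checking a model's chain of thought:

```python
apples = 45
apples -= int(apples * 0.60)  # Monday: sell 60% of 45 -> 27 sold, 18 left
apples += 30                  # Tuesday: shipment of 30 -> 48
apples -= apples // 3         # Wednesday: sell 1/3 of 48 -> 16 sold, 32 left
print(apples)                 # 32
```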
The step-by-step instruction forces the model to break down the problem rather than jumping to an answer. This is especially powerful for code debugging:
# Chain of thought for debugging
The following Python function should return the second largest
number in a list, but it has a bug. Find and fix it.
```python
def second_largest(numbers):
    first = second = float('-inf')
    for n in numbers:
        if n > first:
            second = first
            first = n
        elif n > second:
            second = n
    return second
```
Analyze this step by step:
1. Trace through the function with input [5, 5, 3, 1]
2. Track the values of `first` and `second` at each iteration
3. Identify where the logic fails
4. Explain the fix
By asking the model to trace through execution, you get a much more accurate diagnosis than simply asking “what is wrong with this code.”
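Tracing [5, 5, 3, 1] exposes the bug: when the second 5 arrives, `n > first` is false but `n > second` is true, so `second` becomes 5 and the function reports the largest value twice. One possible fix is to ignore duplicates of the current maximum:

```python
def second_largest(numbers):
    """Return the second largest distinct value, or -inf if none exists."""
    first = second = float('-inf')
    for n in numbers:
        if n > first:
            second = first
            first = n
        elif first > n > second:  # skip values equal to the current maximum
            second = n
    return second
```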
Structured Output and Role Prompting
When you need output in a specific format, be explicit about the structure. Combining role assignment with output formatting gives you the most control:
# Role + structured output prompt
You are a senior security engineer performing a code review.
Analyze the following code for security vulnerabilities.
For each vulnerability found, respond in this exact format:
### Vulnerability [number]
- **Severity**: CRITICAL | HIGH | MEDIUM | LOW
- **Type**: [CWE category]
- **Location**: [function/line]
- **Description**: [what the vulnerability is]
- **Impact**: [what an attacker could do]
- **Fix**: [specific code change to resolve it]
If no vulnerabilities are found, state "No vulnerabilities detected"
and explain what security measures are already in place.
Code to review:
```python
import sqlite3

def get_user(username):
    conn = sqlite3.connect("app.db")
    query = f"SELECT * FROM users WHERE username = '{username}'"
    result = conn.execute(query).fetchone()
    conn.close()
    return result
```
The role (“senior security engineer”) activates domain-specific knowledge and sets the appropriate level of scrutiny. The rigid output format ensures you get consistently structured results that can be parsed programmatically or fed into a tracking system.
Advanced Techniques: Self-Consistency and Prompt Chaining
For critical decisions, use self-consistency — ask the model to solve the problem multiple times with different approaches and compare the answers:
# Self-consistency prompt
I need to decide on a database for a new application.
Requirements: 10M+ records, complex joins, ACID compliance,
horizontal scaling, and real-time analytics.
Approach this decision three different ways:
1. First, evaluate from a pure performance perspective
2. Then, evaluate from an operational complexity perspective
3. Finally, evaluate from a cost and ecosystem perspective
After all three analyses, identify where they agree and disagree.
Give your final recommendation based on the consensus.
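Self-consistency can also be automated: sample several independent answers and take the majority. A minimal sketch, where `ask_model` is a hypothetical placeholder for whatever client function you use to query a model:

```python
from collections import Counter

def self_consistent_answer(prompt, ask_model, samples=5):
    """Ask the same question several times and return the most common answer.

    `ask_model` is a placeholder for your model client; it should take a
    prompt string and return the answer as a string.
    """
    answers = [ask_model(prompt).strip() for _ in range(samples)]
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / samples  # answer plus agreement ratio

# Demo with a fake model that disagrees with itself once:
fake_answers = iter(["PostgreSQL", "PostgreSQL", "CockroachDB",
                     "PostgreSQL", "PostgreSQL"])
answer, agreement = self_consistent_answer("Pick a database.",
                                           lambda _: next(fake_answers))
```

A low agreement ratio is itself useful signal: it tells you the question is ambiguous or underspecified before you act on the answer.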
Prompt chaining breaks complex tasks into sequential steps, where each step’s output feeds into the next. Many production AI applications are built this way:
# Step 1: Extract requirements
From this client email, extract all technical requirements
as a numbered list. Only include concrete, actionable requirements.
# Step 2: (feed Step 1 output here)
For each requirement, classify as: Must Have, Should Have, Nice to Have.
Consider dependencies between requirements.
# Step 3: (feed Step 2 output here)
Create a sprint plan for the Must Have items.
Estimate story points for each (1, 2, 3, 5, 8, 13).
Identify the critical path.
Each step is simpler and more focused than trying to do everything in one prompt. The model can give each step full attention without losing context on the others.
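A chain like this is just sequential composition: each template is filled with the previous step's output. A minimal sketch, again with a hypothetical `ask_model` callable standing in for your model client:

```python
def run_chain(initial_input, templates, ask_model):
    """Run prompt templates in sequence, feeding each output into the next.

    `ask_model` is a placeholder for your model client; each template
    contains a `{previous}` slot for the prior step's output.
    """
    output = initial_input
    for template in templates:
        output = ask_model(template.format(previous=output))
    return output

# Condensed versions of the three steps above.
templates = [
    "Extract all technical requirements as a numbered list:\n{previous}",
    "Classify each requirement as Must/Should/Nice to Have:\n{previous}",
    "Create a sprint plan for the Must Have items:\n{previous}",
]
```

Because each step is an ordinary function call, you can log, cache, or retry individual steps independently.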
Conclusion
Prompt engineering is a skill that improves with practice and experimentation. The techniques we covered — zero-shot with detailed constraints, few-shot with representative examples, chain of thought for reasoning tasks, structured output for consistency, and prompt chaining for complex workflows — form a versatile toolkit for getting reliable, high-quality results from modern AI models. The single most important principle across all techniques is specificity: the more precisely you define what you want, including format, constraints, audience, and edge cases, the better your results will be. Start applying these patterns today and you will see an immediate improvement in your AI interactions.