Prompting quick start: get better answers in 5 minutes
A small checklist that dramatically improves output quality
January 10, 2026
6 min read
Tags: prompting, basics, productivity
A prompt is a spec. When the spec is vague, the model guesses what you meant—sometimes correctly, sometimes not.
A simple template (copy/paste)
- Goal: What you want done.
- Audience: Who it's for.
- Output format: Bullet list, JSON, markdown, code-only, etc.
- Constraints: Tone, length, must-include points, must-avoid points.
- One tiny example (optional): Shows what “good” looks like.
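The checklist above can be sketched as a tiny prompt builder. This is a minimal illustration, not a library API; the function and field names are made up for this post:

```python
def build_prompt(goal, audience, output_format, constraints, example=None):
    """Assemble a structured prompt from the checklist fields."""
    lines = [
        f"Goal: {goal}",
        f"Audience: {audience}",
        f"Output format: {output_format}",
        f"Constraints: {constraints}",
    ]
    if example:  # optional: one tiny example of what "good" looks like
        lines.append(f"Example of a good answer: {example}")
    return "\n".join(lines)

prompt = build_prompt(
    goal="Explain embeddings",
    audience="a product manager",
    output_format="140-180 words, ending with 3 practical use cases",
    constraints="Use one analogy; avoid jargon",
)
print(prompt)
```

Filling the fields forces you to decide what you actually want before the model has to guess.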
Two upgrades that pay off immediately
- “Before answering, list assumptions you're making.”
- “If you're uncertain, say what to check (sources, logs, tests, docs).”
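Because both upgrades are plain instructions, you can bolt them onto any prompt. A small sketch (the helper name is illustrative):

```python
UPGRADES = [
    "Before answering, list assumptions you're making.",
    "If you're uncertain, say what to check (sources, logs, tests, docs).",
]

def with_upgrades(prompt):
    """Append the two reliability upgrades to an existing prompt."""
    return prompt + "\n" + "\n".join(UPGRADES)

print(with_upgrades("Summarize this incident report for the on-call team."))
```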
Example (same topic, very different results)
Bad: “Explain embeddings.”
Better: “Explain embeddings to a product manager in 140–180 words. Use one analogy. End with 3 practical use cases.”
Prompt debugging (when output isn't what you wanted)
- “Use fewer concepts; define terms before using them.”
- “Answer in steps, not paragraphs.”
- “Don't invent numbers—ask me for missing inputs.”
- “Assume the database schema is unknown unless provided.”
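One way to keep these fixes handy is a symptom-to-fix map you append from as needed. A minimal sketch; the symptom labels are made up for illustration:

```python
# Map an observed symptom to the corrective instruction from the list above.
DEBUG_FIXES = {
    "too dense": "Use fewer concepts; define terms before using them.",
    "wall of text": "Answer in steps, not paragraphs.",
    "invented numbers": "Don't invent numbers; ask me for missing inputs.",
    "assumed schema": "Assume the database schema is unknown unless provided.",
}

def debug_prompt(prompt, symptoms):
    """Append the fix-it lines matching the symptoms you observed."""
    fixes = [DEBUG_FIXES[s] for s in symptoms]
    return prompt + "\n" + "\n".join(fixes)

print(debug_prompt("Explain our billing pipeline.", ["wall of text", "assumed schema"]))
```

Re-running with one fix at a time also tells you which instruction actually moved the output.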
When precision matters
For pricing, legal, medical, or anything high-stakes: prompts help, but they don't replace verification. Pair prompting with citations, retrieval (RAG), and/or human review.