Use this framework when you want predictable outputs without overcomplicating the prompt.
Last updated: 2026-03-19
Write one sentence that states the exact outcome you want. If the task asks for multiple outcomes at once, split it into separate prompts.
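A quick sketch of what that split looks like in practice; the prompt wording here is illustrative, not prescriptive:

```python
# Bundled request: two outcomes in one prompt, hard to evaluate.
bundled = "Write a subject line and the full body for our activation email."

# Split: one outcome per prompt, each result checkable against a single goal.
prompts = [
    "Write one subject line for our activation email, under 60 characters.",
    "Write the body of our activation email: three short paragraphs "
    "ending with one call to action.",
]
```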
Tell the model who it should act as and what is in scope. This prevents broad, generic responses.
Example: 'Act as a product marketer for a B2B SaaS onboarding flow. Focus only on activation emails.'
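In a chat-style API this maps naturally onto the system message. A minimal sketch assuming an OpenAI-style messages list; the exact field names depend on your provider:

```python
messages = [
    {
        "role": "system",  # role and scope live here, away from the task itself
        "content": (
            "Act as a product marketer for a B2B SaaS onboarding flow. "
            "Focus only on activation emails."
        ),
    },
    {"role": "user", "content": "Draft the day-3 activation email."},
]
```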
Only include context that influences output quality: audience, constraints, source material, and decision criteria.
Hard constraints, such as word limits, banned phrases, or required citations, reduce drift and make evaluation easier.
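One way to keep context and constraints explicit is to assemble the prompt from named fields, so every run carries the same sections. A sketch; all field values here are placeholders:

```python
# Each field is context that actually changes the output; anything else is noise.
context = {
    "audience": "trial users who have not invited a teammate",
    "constraints": "max 120 words; no exclamation points; one call to action",
    "source_material": "onboarding checklist v2 (pasted below)",
    "criteria": "factual clarity, concise language, clear next steps",
}

prompt = (
    f"Audience: {context['audience']}\n"
    f"Hard constraints: {context['constraints']}\n"
    f"Source material: {context['source_material']}\n"
    f"Judged on: {context['criteria']}\n\n"
    "Task: write the day-3 activation email."
)
```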
Ask for an explicit structure (bullets, table, JSON, markdown sections). Format control is one of the fastest ways to improve usability.
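Requesting a machine-checkable format also lets you validate output automatically. A sketch that asks for JSON and rejects anything that fails to parse; the key names are illustrative:

```python
import json

format_spec = (
    'Return a JSON object with exactly these keys: "subject", "body", "cta", '
    "all strings. Return only the JSON, with no surrounding prose."
)

def parse_or_reject(raw: str) -> dict:
    """Fail fast if the model drifted from the requested structure."""
    data = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    missing = {"subject", "body", "cta"} - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data
```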
Tell the model how quality will be judged.
Example: 'Prioritize factual clarity, concise language, and clear next steps.'
Run the prompt in two passes. Pass 1 generates a draft; pass 2 critiques that draft against your criteria and rewrites the weak sections.
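A minimal two-pass loop. Note that call_model is a hypothetical placeholder for whatever client you use, not a real library call:

```python
def call_model(prompt: str) -> str:
    # Placeholder: swap in your provider's client here.
    raise NotImplementedError

CRITERIA = "factual clarity, concise language, clear next steps"

def two_pass(task_prompt: str) -> str:
    draft = call_model(task_prompt)  # pass 1: generate a draft
    critique_prompt = (
        f"Critique the draft below against these criteria: {CRITERIA}.\n"
        "Then rewrite only the weak sections and return the full revised text.\n\n"
        f"Draft:\n{draft}"
    )
    return call_model(critique_prompt)  # pass 2: critique and rewrite
```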
Run the same prompt on at least three different inputs: ideal, minimal, and ambiguous. Keep the version that remains stable across all three.
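A sketch of that check, reusing the placeholder call_model from the previous sketch; the three inputs are illustrative:

```python
base_prompt = "Write the day-3 activation email. Max 120 words, one call to action."

test_inputs = {
    "ideal": "complete onboarding checklist with user name, plan, and signup date",
    "minimal": "user name only",
    "ambiguous": "partial checklist with two conflicting plan names",
}

for label, source in test_inputs.items():
    output = call_model(f"{base_prompt}\n\nSource material: {source}")
    print(f"--- {label} ---\n{output}\n")
# Keep the prompt version whose outputs stay stable across all three runs.
```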
Store reusable prompt structures by task type, then fill placeholders for each project.
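One lightweight way to store them is a dict of templates keyed by task type, with placeholders filled per project. A sketch; the task types and fields are hypothetical:

```python
templates = {
    "activation_email": (
        "Act as a product marketer for {product}. Focus only on activation emails.\n"
        "Audience: {audience}\n"
        "Hard constraints: {constraints}\n"
        "Task: write the {stage} activation email."
    ),
    "release_note": (
        "Act as a technical writer for {product}.\n"
        "Hard constraints: {constraints}\n"
        "Task: summarize the changes below for end users.\n{changes}"
    ),
}

prompt = templates["activation_email"].format(
    product="a B2B SaaS onboarding flow",
    audience="trial users who have not invited a teammate",
    constraints="max 120 words; no exclamation points",
    stage="day-3",
)
```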
Even strong prompts cannot replace source quality. If source facts are incomplete or wrong, output quality will still suffer.
If you want a quick sanity check, you can test your draft in this site's Prompt Quality Check tool before sending it to your model.