Practical Lesson 4 of 7

Prompting That Works

People spend weeks learning frameworks but zero time learning to communicate with AI. Then they blame the model when the results are bad.

"If your output sucks, your input sucked. There's no way around this."

What Actually Helps

  • Be specific: "Build auth" is vague. "Session-based auth with bcrypt, PostgreSQL session storage, HTTP-only cookies" is specific.
  • Tell it what NOT to do: "Keep it simple. Don't add abstractions I didn't ask for. No dependency injection framework."
  • Give context about why: "This runs on every request" changes the approach completely. Context shapes implementation.
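The three tips above can be sketched as a tiny prompt-building helper. The function name and structure here are purely illustrative (not part of any SDK); the point is that a good prompt states the task specifically, lists constraints including what NOT to do, and explains why:

```python
def build_prompt(task: str, constraints: list[str], context: str = "") -> str:
    """Assemble a prompt from a specific task, explicit constraints
    (including what NOT to do), and the reason behind the request.
    Illustrative only -- any structure that conveys all three works."""
    parts = [f"Task: {task}"]
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if context:
        parts.append(f"Context: {context}")
    return "\n\n".join(parts)

prompt = build_prompt(
    task=("Session-based auth with bcrypt, PostgreSQL session storage, "
          "HTTP-only cookies"),
    constraints=["Keep it simple", "No abstractions I didn't ask for",
                 "No dependency injection framework"],
    context="This runs on every request, so keep per-request overhead low",
)
print(prompt)
```

You don't need a helper function in practice; the template is the point. If your prompt has nothing under "Constraints", that's usually a sign you're about to get an overengineered answer.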

Claude tends to overengineer. Anticipate this. Explicitly say "simple implementation" or "no unnecessary abstractions" when you want something straightforward.

The Formula

Good prompts follow a pattern:

  1. Specific beats vague
  2. Constraints beat open-ended
  3. Examples beat descriptions

If you can show Claude an example of what you want, that's worth 10x more than describing it in words.
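"Examples beat descriptions" in practice often means a few-shot prompt: instead of describing the output format you want, show two or three worked input/output pairs and let the model infer the pattern. A minimal sketch (the helper is hypothetical):

```python
def few_shot_prompt(instruction: str,
                    examples: list[tuple[str, str]],
                    query: str) -> str:
    """Prepend worked input/output pairs so the model infers the
    desired format from examples rather than a prose description."""
    shots = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{instruction}\n\n{shots}\nInput: {query}\nOutput:"

prompt = few_shot_prompt(
    "Convert each function name to snake_case.",
    [("getUserName", "get_user_name"), ("parseJSON", "parse_json")],
    "loadConfigFile",
)
print(prompt)
```

Two examples that show the exact shape you want will usually beat a paragraph of description about it.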

Model Selection

Sonnet

Fast, cheap. Good for execution when the path is clear. Use for implementing well-defined tasks.

Opus

Slow, expensive. Good for complex reasoning and planning. Use for architecture decisions and debugging tricky problems.

Workflow tip: use Opus for planning, then switch to Sonnet for implementation. In Claude Code, the /model command switches models (Shift+Tab cycles permission modes, not models).
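If you drive the API directly rather than Claude Code, the plan-with-Opus, implement-with-Sonnet split can be a one-line dispatcher. The model IDs below are placeholders, not real identifiers; check your provider's docs for current names:

```python
# Placeholder model IDs -- substitute current names from your provider's docs.
MODELS = {
    "planning": "opus-model-id",          # slower, stronger reasoning
    "implementation": "sonnet-model-id",  # faster, cheaper execution
}

def pick_model(phase: str) -> str:
    """Route architecture/debugging work to the stronger model and
    well-defined implementation tasks to the faster one."""
    if phase not in MODELS:
        raise ValueError(f"unknown phase: {phase!r}")
    return MODELS[phase]
```

The design choice worth copying is the explicit phase boundary: decide up front whether a task is planning or execution, rather than defaulting everything to one model.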

Key Takeaways

  • Bad input = bad output. No exceptions.
  • Be specific; add constraints; show examples
  • Tell Claude what NOT to do (it tends to overengineer)
  • Opus for planning, Sonnet for implementation