Prompting Basics
What this lesson teaches
The quality of your output depends on the quality of your input. This lesson shows you what makes a prompt effective—and what makes it fail.
Why prompts matter
Remember: LLMs predict the next token based on what you give them. A vague prompt leads to vague (or wrong) predictions. A specific prompt narrows down the possibilities.
Think of it this way: if you ask a coworker "Can you help with this?" versus "Can you review line 42 of auth.js for a null-check bug?", which request gets better results?
Bad, better, best
Bad:
Fix my code
Better:
This function returns undefined when the input array is empty.
Add a check at the start to return an empty array instead.
Best:
In src/utils/filter.js, the filterItems function returns
undefined when given an empty array. Add a guard clause at
line 12 that returns [] if items.length === 0.
The specificity principle
Good prompts share these traits:
- Specific location: Which file, which function, which line
- Clear problem: What's wrong, what should happen instead
- Constrained scope: One task at a time
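The three traits above can be captured in a small helper. This is a sketch with hypothetical field names (file, problem, fix), not a standard prompt format:

```javascript
// Hypothetical prompt builder covering the three traits:
// specific location, clear problem, constrained scope (one fix per call).
function buildPrompt({ file, problem, fix }) {
  return [
    `File: ${file}`,
    `Problem: ${problem}`,
    `Requested change: ${fix}`,
  ].join("\n");
}
```

Filling in each field forces you to state the location, the problem, and exactly one requested change before you hit send.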
What NOT to do
- Don't be vague: "Make it better" gives the model no direction
- Don't overload: "Fix this, add tests, refactor, and document" is too much
- Don't assume context: The model only knows what you tell it
Common mistake: Assuming the model remembers previous conversations. Each session starts fresh unless you explicitly provide context.
Key Takeaways
- Specific prompts → specific results
- Include: file, function, line number when relevant
- One clear task per prompt works better than multiple vague ones
- The model only knows what you tell it—don't assume shared context