How to Get Better Code from AI: 10 Prompting Techniques That Actually Work
10 proven prompting techniques for better AI code output. Learn what separates great prompts from mediocre ones.
The difference between a useless AI code response and a game-changing one isn’t luck. It’s prompt engineering.
Most people ask AI to “build a React component” and wonder why they get garbage. The AI isn’t broken—you just haven’t learned to speak its language yet.
This guide gives you 10 techniques that work across Claude, Cursor, Copilot, and every other AI coding tool. Each one has a before/after example. Use them.
1. Be Architecturally Explicit
The Problem: You ask for “a login form” and get a confused mess because the AI doesn’t know what patterns you want.
The Fix: Describe the architecture before asking for code.
Bad Prompt:
Build a login form in React
Good Prompt:
Build a React login form with these constraints:
- Use React Hook Form for state management
- Validate email format on blur, password strength on submit
- Show inline error messages below each field
- Use Tailwind CSS with a card layout (white bg, rounded-lg shadow)
- Submit to /api/auth/login with email and password
- Disable submit button during loading
- Show success toast on login, error toast on failure
Why this works: The AI understands the boundaries. It knows you want validation patterns, styling approach, API integration, and feedback mechanisms. It builds that, not something random.
2. Show, Don’t Tell
The Problem: Describing a complex UI state is slow and ambiguous.
The Fix: Paste an example of what you want. Even a rough sketch works.
Bad Prompt:
I want a dashboard with cards showing metrics
Good Prompt:
Build a metrics dashboard. Here's the layout I want:
Grid with 4 cards:
- Card 1: "Total Revenue" with big number, small change % in green
- Card 2: "Active Users" with big number, small change % in red
- Card 3: "Conversion Rate" with big number, small change % in green
- Card 4: "Avg Order Value" with big number, small change % in gray
Each card has a light background, rounded corners, left border accent (color varies by metric).
Use Tailwind CSS.
Why this works: You’ve given the AI a concrete target. No guessing. No iterations fixing layout assumptions.
3. Specify the Data Contract First
The Problem: AI generates code that doesn’t match your API or database schema.
The Fix: Lead with the data shape.
Bad Prompt:
Build a user list component
Good Prompt:
Build a user list component in React.
Users come from GET /api/users and return this shape:
{
"id": "string",
"name": "string",
"email": "string",
"role": "admin" | "editor" | "viewer",
"createdAt": "ISO 8601 date string",
"status": "active" | "inactive"
}
Display as a table with columns: Name, Email, Role, Status, Actions.
The Actions column should have Edit and Delete buttons (links/modals, not implemented yet).
Why this works: The AI builds components that actually consume your data. No type mismatches. No prop confusion.
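The data contract in the prompt above maps directly onto a type the AI (or you) can check code against. A sketch of what that looks like in TypeScript, with a small runtime guard for fetched JSON (names follow the JSON shape in the prompt; adjust to your real API):

```typescript
type UserRole = "admin" | "editor" | "viewer";
type UserStatus = "active" | "inactive";

// The data contract from the prompt, expressed as a TypeScript interface.
interface User {
  id: string;
  name: string;
  email: string;
  role: UserRole;
  createdAt: string; // ISO 8601 date string
  status: UserStatus;
}

// Runtime guard so JSON from GET /api/users can be checked before rendering.
function isUser(value: unknown): value is User {
  const u = value as Record<string, unknown>;
  return (
    typeof value === "object" &&
    value !== null &&
    typeof u.id === "string" &&
    typeof u.name === "string" &&
    typeof u.email === "string" &&
    ["admin", "editor", "viewer"].includes(u.role as string) &&
    typeof u.createdAt === "string" &&
    ["active", "inactive"].includes(u.status as string)
  );
}
```

Pasting a type like this into the prompt works just as well as the JSON shape, and it doubles as the component's prop type afterward.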
4. Define Success Criteria, Not Just Features
The Problem: “Build a search” is too vague. The AI doesn’t know what success looks like.
The Fix: Tell the AI what you’re optimizing for.
Bad Prompt:
Add search functionality to the product list
Good Prompt:
Add search to the product list. Success criteria:
- Search should be real-time (filter on keystroke, debounce 300ms)
- Case-insensitive, matches product name or description
- Shows results immediately in the same list (no page reload)
- Shows "No results" message if nothing matches
- Clear button to reset search
- Highlight matching text in results (bold the matched term)
Why this works: The AI now knows the performance, UX, and interaction requirements. It builds the right solution, not just a solution.
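The success criteria above translate directly into verifiable functions. A sketch of the two core pieces, case-insensitive matching and debouncing (the `Product` shape is assumed for illustration):

```typescript
// Assumed product shape for this sketch.
interface Product {
  name: string;
  description: string;
}

// Case-insensitive match against product name or description.
function filterProducts(products: Product[], query: string): Product[] {
  const q = query.trim().toLowerCase();
  if (q === "") return products; // empty query shows everything
  return products.filter(
    (p) =>
      p.name.toLowerCase().includes(q) ||
      p.description.toLowerCase().includes(q)
  );
}

// Generic debounce so filtering runs only after 300ms of typing silence.
function debounce<T extends unknown[]>(fn: (...args: T) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}
```

In a React component you'd wire `debounce(setQuery, 300)` to the input's onChange and run `filterProducts` against the current query.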
5. Request Verbose Comments for Complex Logic
The Problem: AI-generated code for tricky logic is hard to debug.
The Fix: Ask for explanation inline.
Bad Prompt:
Build a function to reconcile two data arrays
Good Prompt:
Build a function reconcileArrays(before, after) that:
- Identifies items added, removed, and modified
- Returns { added, removed, modified, unchanged }
- Compares by id field
- For modified items, return { id, changes: { field: { old, new } } }
Add comments explaining the algorithm for each step. Make it readable.
Why this works: You get code you can actually maintain. The AI slows down and thinks through the logic.
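For reference, the kind of output that prompt should produce looks roughly like this, a sketch of `reconcileArrays` with the step-by-step comments the prompt asks for:

```typescript
type Item = { id: string; [key: string]: unknown };
type FieldChange = { old: unknown; new: unknown };

function reconcileArrays(before: Item[], after: Item[]) {
  // Index both arrays by id for O(1) lookups.
  const beforeById = new Map<string, Item>(
    before.map((item): [string, Item] => [item.id, item])
  );
  const afterById = new Map<string, Item>(
    after.map((item): [string, Item] => [item.id, item])
  );

  // Added: ids present in `after` but not in `before`.
  const added = after.filter((item) => !beforeById.has(item.id));
  // Removed: ids present in `before` but not in `after`.
  const removed = before.filter((item) => !afterById.has(item.id));

  const modified: { id: string; changes: Record<string, FieldChange> }[] = [];
  const unchanged: Item[] = [];

  // For ids present in both, diff field by field.
  for (const [id, oldItem] of beforeById) {
    const newItem = afterById.get(id);
    if (!newItem) continue;
    const changes: Record<string, FieldChange> = {};
    const keys = new Set([...Object.keys(oldItem), ...Object.keys(newItem)]);
    for (const key of keys) {
      if (oldItem[key] !== newItem[key]) {
        changes[key] = { old: oldItem[key], new: newItem[key] };
      }
    }
    if (Object.keys(changes).length > 0) {
      modified.push({ id, changes });
    } else {
      unchanged.push(newItem);
    }
  }

  return { added, removed, modified, unchanged };
}
```

Note the comparison is shallow (`!==`); if your items contain nested objects, say so in the prompt and ask for a deep comparison.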
6. Provide Constraints Upfront
The Problem: AI generates elegant code that doesn’t fit your tech stack or performance budget.
The Fix: List hard constraints before the request.
Bad Prompt:
Build a file uploader
Good Prompt:
Build a file uploader with these constraints:
- Max file size: 10MB
- Allowed types: PDF, DOCX, XLSX only
- Must validate on client-side (show error immediately)
- Cannot use external file upload services (self-hosted)
- Must store files in /public/uploads
- Cannot use native HTML5 file input styling (use Tailwind custom buttons)
- Should work offline (no network check required to show form)
Why this works: The AI doesn’t waste time suggesting S3, Firebase, or 50MB file sizes. It works within your reality.
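The client-side validation constraint above is small enough to sketch directly. Here it takes a plain name and size rather than a DOM `File` object so it's testable outside the browser; in the component you'd pass `file.name` and `file.size`:

```typescript
const MAX_SIZE_BYTES = 10 * 1024 * 1024; // 10MB cap from the constraints
const ALLOWED_EXTENSIONS = ["pdf", "docx", "xlsx"];

// Returns an error message to show inline, or null when the file is OK.
function validateFile(name: string, sizeBytes: number): string | null {
  const ext = name.split(".").pop()?.toLowerCase() ?? "";
  if (!ALLOWED_EXTENSIONS.includes(ext)) {
    return "Only PDF, DOCX, and XLSX files are allowed.";
  }
  if (sizeBytes > MAX_SIZE_BYTES) {
    return "File is larger than the 10MB limit.";
  }
  return null;
}
```

Extension checks are a UX convenience, not security; the server should re-validate type and size on upload.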
7. Ask for Test Cases
The Problem: You build code and then have to figure out edge cases yourself.
The Fix: Request test structure upfront.
Bad Prompt:
Build a password validation function
Good Prompt:
Build a validatePassword(password) function. It should:
- Require at least 8 characters
- Require at least one uppercase letter
- Require at least one number
- Require at least one special character (!@#$%^&*)
Return { valid: boolean, errors: string[] }.
Also provide test cases covering:
- Valid password (all requirements met)
- Too short password
- Missing uppercase
- Missing number
- Missing special character
- Empty string
Why this works: You get working code with edge cases built in. No surprises in production.
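The function that prompt describes is compact enough to show in full, one rule per check, errors accumulated rather than short-circuited so the user sees everything wrong at once:

```typescript
function validatePassword(password: string): { valid: boolean; errors: string[] } {
  const errors: string[] = [];
  if (password.length < 8) errors.push("Must be at least 8 characters.");
  if (!/[A-Z]/.test(password)) errors.push("Must contain an uppercase letter.");
  if (!/[0-9]/.test(password)) errors.push("Must contain a number.");
  if (!/[!@#$%^&*]/.test(password)) {
    errors.push("Must contain a special character (!@#$%^&*).");
  }
  return { valid: errors.length === 0, errors };
}
```

The test cases the prompt requests map one-to-one onto these rules, which is exactly why asking for them upfront works.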
8. Use Examples from Your Codebase
The Problem: AI matches its own style defaults, not your project’s conventions.
The Fix: Paste a code sample and say “write code like this.”
Bad Prompt:
Add error handling to the API route
Good Prompt:
Add error handling to this new API route. Here's how we handle errors in existing routes:
[Paste 1-2 existing route examples from your project]
Follow the same pattern for consistency.
Why this works: The AI adopts your naming conventions, error structures, and architectural patterns. Consistency for free.
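If you don't have a clean route to paste yet, even a small invented pattern serves as the example. A hypothetical snippet of the kind you'd paste (the shape and names here are illustrative, not from any specific framework):

```typescript
// Shared error shape so every route responds the same way.
interface ApiError {
  error: { code: string; message: string };
}

function errorResponse(code: string, message: string): ApiError {
  return { error: { code, message } };
}

// A route's catch block would then end with something like:
// res.status(400).json(errorResponse("INVALID_INPUT", "Email is required"));
```

Once the AI has seen one route use `errorResponse`, every route it writes afterward tends to use it too.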
9. Ask for Implementation Steps Before Code
The Problem: Large features end up broken because the AI doesn’t think through the sequence.
The Fix: Request a plan first, then code.
Bad Prompt:
Build a shopping cart with checkout flow
Good Prompt:
Build a shopping cart with checkout. First, outline the implementation steps:
1. [What component structure?]
2. [What state management?]
3. [What API endpoints?]
4. [What validation happens where?]
5. [What's the UX flow?]
Then implement each step.
Why this works: The AI thinks before it codes. Plans catch architectural problems before they’re baked in.
10. Give Feedback That Shapes Future Outputs
The Problem: AI makes the same mistake twice because you don’t explain what went wrong.
The Fix: Tell the AI specifically what to change, not just “do better.”
Bad Feedback:
This doesn't look right. Make it better.
Good Feedback:
The validation logic is checking order.total before order.items is populated.
Reorder so items validation happens first. Also, the error message should be
more specific—tell the user which items are invalid, not just "cart invalid."
Why this works: The AI knows exactly what to fix. You get one revision instead of five.
Putting It Together: A Real Example
Here’s how to use all 10 techniques on a real prompt.
Without techniques:
Build a todo app
With techniques:
Build a todo app. Here's the data contract:
{
"id": "string (UUID)",
"title": "string (max 100 chars)",
"description": "string (max 500 chars, optional)",
"completed": "boolean",
"priority": "low" | "medium" | "high",
"dueDate": "ISO 8601 date (optional)",
"createdAt": "ISO 8601 date"
}
Architecture:
- React component using React Hook Form
- State management with zustand (single store with add/edit/delete/toggle methods)
- Tailwind CSS for styling
- API calls to /api/todos (GET, POST, PATCH, DELETE)
Success criteria:
- List shows all todos with title, due date, and priority badge
- Priority badges: low=gray, medium=yellow, high=red
- Click to toggle completed (strikethrough text, fade opacity)
- Edit button opens a modal form
- Delete button with confirmation dialog
- Sort by priority (high first), then due date
- Filter buttons: All / Active / Completed (store selected filter in URL)
- Add new todo button at top
- Show "No todos" message when empty
Constraints:
- Use shadcn-style components (simple, accessible)
- Keyboard shortcut Cmd+Shift+N to add todo
- Form validation: title required, dueDate must be future date
- Debounce search 300ms if you add search
- Must work on mobile (responsive grid, touch-friendly buttons)
Test cases to include:
- Add todo
- Edit existing todo
- Delete with confirmation
- Toggle completed
- Filter by status
- Priority sort working
That’s a 10x better prompt. The AI will deliver 10x better code.
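Notice how the detailed success criteria become directly testable. The "sort by priority (high first), then due date" line alone pins down a comparator you can verify, a sketch (the todo shape is trimmed to the fields the sort needs):

```typescript
type Priority = "low" | "medium" | "high";

interface TodoLite {
  title: string;
  priority: Priority;
  dueDate?: string; // ISO 8601; todos without a due date sort last
}

const PRIORITY_RANK: Record<Priority, number> = { high: 0, medium: 1, low: 2 };

function sortTodos(todos: TodoLite[]): TodoLite[] {
  return [...todos].sort((a, b) => {
    // Primary key: priority, high first.
    const byPriority = PRIORITY_RANK[a.priority] - PRIORITY_RANK[b.priority];
    if (byPriority !== 0) return byPriority;
    // Secondary key: earlier due date first; missing due date goes last.
    if (!a.dueDate) return b.dueDate ? 1 : 0;
    if (!b.dueDate) return -1;
    return a.dueDate.localeCompare(b.dueDate);
  });
}
```

A vague prompt leaves the tie-breaking and the missing-due-date case up to the AI's mood; the detailed one makes both explicit.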
The Meta-Skill: Know When to Iterate
Sometimes your first prompt won’t land. That’s normal. When the AI misses:
- Don’t say “that’s wrong”
- Point to the specific problem
- Explain what the right behavior should be
- Give it one more shot
Usually it nails the revision. You’re training the AI to understand your requirements.
What This Isn’t
This isn’t about being polite to AI or making it “feel good.” AI doesn’t have feelings. This is about precision engineering. Better prompts = better code = shipped projects = money in your pocket.
Vibe coders don’t accept mediocre output. They know how to ask for exactly what they want.
Learn these 10 techniques, use them in /prompts, and watch your AI coding productivity skyrocket.
The prompts you write today are the code you ship tomorrow.
Related: Check out our /guides for deeper dives into specific tools, or take the /quiz to find your vibe coding style.