# Teaching Agents Your Workflow: Encoding 20 Years of Experience
The most common question I get: "How do you teach agents to build software your way?"
The answer isn't prompting. It's encoding patterns.
## The Myth of Perfect Prompts
When I started with AI agents, I thought it was about writing the perfect prompt. Crafting detailed instructions. Being very specific about what I wanted.
That works for one-off tasks. But it doesn't scale.
Here's what I learned: Agents execute perfectly. But they execute the instructions you give them, not the instructions you meant to give them.
The problem isn't agent capability. It's instruction completeness.
## The Five Layers of Agent Instructions
After building two full apps and refining my orchestrator, I've developed a five-layer approach to teaching agents:
### Layer 1: System-Level Patterns
These are the foundational patterns that apply to every task.
Example: File Naming Convention
```markdown
## File Naming Rules
- React components: PascalCase (UserProfile.tsx)
- Utilities: camelCase (formatDate.ts)
- Constants: SCREAMING_SNAKE_CASE (API_ENDPOINTS.ts)
- Test files: Same as source + .test (UserProfile.test.tsx)
```
Why This Matters:
- Prevents case-sensitivity collisions (learned this the hard way)
- Makes codebase searchable
- Reduces cognitive load during code review
Result: Before encoding this rule, I had SkaterMultiSelect.tsx and SkaterMultiselect.tsx created by different agents. Both worked on macOS. Both failed on Linux CI. After encoding it, the problem never recurred.
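That CI failure mode is also easy to guard against mechanically. Here's a minimal sketch of such a check (a hypothetical helper, not the project's actual tooling):

```typescript
// Hypothetical CI guard: group filenames case-insensitively and flag any
// group containing more than one distinct spelling — these will collide
// on case-insensitive filesystems like the macOS default.
function findCaseCollisions(filenames: string[]): string[][] {
  const byLower = new Map<string, string[]>();
  for (const name of filenames) {
    const key = name.toLowerCase();
    byLower.set(key, [...(byLower.get(key) ?? []), name]);
  }
  return Array.from(byLower.values()).filter((group) => group.length > 1);
}
```

Run something like this over the repo's file list in CI and fail the build on any non-empty result.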
### Layer 2: Tech Stack Conventions
These are patterns specific to your chosen technologies.
Example: React Native Component Pattern
```markdown
## React Native Component Structure
Every component must follow this pattern:
1. Imports (grouped and ordered):
   - React/React Native core
   - Third-party libraries
   - Local components
   - Types
   - Styles
2. Type definitions (props interface)
3. Component function with proper typing
4. Styles using StyleSheet.create
```
Example:
```typescript
import React from 'react';
import { View, Text, StyleSheet } from 'react-native';

import { CustomButton } from '@/components';

import { UserProfileProps } from '@/types';

export const UserProfile: React.FC<UserProfileProps> = ({ user }) => {
  return (
    <View style={styles.container}>
      <Text style={styles.name}>{user.name}</Text>
    </View>
  );
};

const styles = StyleSheet.create({
  container: { padding: 16 },
  name: { fontSize: 18, fontWeight: 'bold' },
});
```
Why This Matters:
- Agents follow patterns precisely
- Every component looks like it was written by the same person
- Code review is faster because structure is predictable
Result: 47 components in Parlay. Zero style inconsistencies. Zero import order debates.
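The import-grouping rule is checkable too. A rough lint sketch, assuming the `@/`-alias conventions from the example above (the group ranks are my own illustration):

```typescript
// Map each import path to a group rank per the pattern above, then verify
// that ranks never decrease from one import to the next.
function importGroup(path: string): number {
  if (path === "react" || path.startsWith("react-native")) return 0; // core
  if (path.startsWith("@/components")) return 2; // local components
  if (path.startsWith("@/types")) return 3;      // types
  if (path.startsWith("@/styles")) return 4;     // styles
  return 1; // everything else: third-party libraries
}

function importsOrdered(paths: string[]): boolean {
  const ranks = paths.map(importGroup);
  return ranks.every((rank, i) => i === 0 || ranks[i - 1] <= rank);
}
```

In practice you'd extract the import paths from each file with your linter's AST; the ordering check itself stays this simple.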
### Layer 3: Architecture Decisions
These encode the "why" behind your technical choices.
Example: State Management Decision
```markdown
## State Management Rules

**Local State:** useState for component-specific UI state (modals, form inputs)
**Global State:** Zustand for app-wide state (auth, user profile)
**Server State:** React Query for API data (fetching, caching, updates)

**Never:** Mix these patterns. Each has its responsibility.

**Rationale:**
- Keeps components simple and testable
- Clear boundaries prevent state bugs
- React Query handles loading/error states automatically
```
Why This Matters:
- Prevents agents from making different choices on different features
- Documents decisions for future humans (or agents)
- Makes refactoring predictable
Result: State management is boring (in the best way). No Redux debates. No "where should this state live?" questions.
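The boundaries above collapse into a tiny decision rule. A toy sketch (the function and field names are mine, not the project's):

```typescript
// Two questions decide where state lives: does it come from the server,
// and is it shared app-wide?
type StateTool = "React Query" | "Zustand" | "useState";

function chooseStateTool(state: { fromServer: boolean; appWide: boolean }): StateTool {
  if (state.fromServer) return "React Query";       // server state
  return state.appWide ? "Zustand" : "useState";    // global vs. local UI state
}
```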
### Layer 4: Domain-Specific Logic
These are the business rules unique to your application.
Example: Golf Handicap Validation
```markdown
## Golf Handicap Rules

Valid handicaps: -10.0 to 54.0 (USGA standard)

Validation rules:
- Must be numeric with max 1 decimal place
- Cannot be null for verified users
- Can be "Not verified" for new users
- Must update when user posts new scores

Edge cases:
- Negative handicaps are valid (pro-level players)
- 0.0 is valid (scratch golfer)
- Display as "+2.5" or "-1.5" (always show sign)
```
Why This Matters:
- Domain rules are easy to get wrong
- Edge cases are easy to miss
- Once encoded, never forgotten
Result: Handicap validation worked perfectly on first implementation. No bugs in production.
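The rules translate almost line-for-line into code. A sketch with hypothetical function names (not the app's actual implementation):

```typescript
// Validate per the rules above: USGA range, one decimal place max,
// null only allowed for unverified (new) users.
function isValidHandicap(value: number | null, verified: boolean): boolean {
  if (value === null) return !verified;              // "Not verified" only for new users
  if (value < -10.0 || value > 54.0) return false;   // USGA range
  const tenths = value * 10;
  return Math.abs(tenths - Math.round(tenths)) < 1e-9; // max 1 decimal place
}

// Display rule: always show the sign, e.g. "+2.5" or "-1.5".
function formatHandicap(value: number): string {
  return value >= 0 ? `+${value.toFixed(1)}` : value.toFixed(1);
}
```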
### Layer 5: Quality Gates
These are the validation checks that run before merging.
Example: Pre-Merge Checklist
```markdown
## Implementation Agent - Pre-Merge Checklist

Before marking a feature complete:
- [ ] All acceptance criteria met
- [ ] Unit tests written (min 70% coverage for new code)
- [ ] No linting errors
- [ ] No TypeScript errors
- [ ] All imports resolve correctly
- [ ] No console.logs or debugger statements
- [ ] Peer dependencies installed (if added packages)
- [ ] Component registered in component registry
- [ ] No duplicate file names (case-sensitive check)

If ANY item fails, fix it before requesting review.
```
Why This Matters:
- Catches mistakes before they reach review
- Agents can self-validate
- Reduces back-and-forth iterations
Result: Review agent rejection rate dropped from 40% to 12% after adding this checklist.
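Several checklist items are scriptable, which means the agent can run them rather than eyeball them. For instance, the no-debug-statements check could look roughly like this (an illustrative helper, not the project's actual script):

```typescript
// Return the 1-based line numbers in a source string that contain
// console.log calls or debugger statements.
function findDebugStatements(source: string): number[] {
  return source
    .split("\n")
    .map((line, i) => (/\bconsole\.log\(|\bdebugger\b/.test(line) ? i + 1 : -1))
    .filter((lineNo) => lineNo !== -1);
}
```

A real version would walk the file tree and skip test fixtures, but the core check is a one-liner per file.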
## The Component Registry Solution
One of the trickiest problems: agents don't know what other agents have built.
The Problem:
- Agent A builds `UserProfileCard.tsx`
- Agent B doesn't know it exists
- Agent B builds `UserProfileCard.tsx` (different implementation)
- Both PRs get merged
- App explodes
The Solution: Component Registry
```markdown
## Component Registry Protocol

Before creating a new component:
1. Check `/src/components/REGISTRY.md`
2. If component exists, import and use it
3. If component doesn't exist:
   - Create the component
   - Add entry to REGISTRY.md in your PR
   - Include: name, path, purpose, props
```
Example entry:
```markdown
## UserProfileCard
- Path: `src/components/cards/UserProfileCard.tsx`
- Purpose: Display user profile summary
- Props: `{ user: User, onPress?: () => void }`
- Used in: ProfileScreen, MatchListItem
```
Why This Matters:
- Prevents duplicate components
- Makes reusability discoverable
- Creates living documentation
Result: Zero duplicate components after implementing this. Agents reuse existing components 85% of the time.
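The "check the registry first" step can even be automated. A sketch that parses REGISTRY.md headings, assuming entries follow the `## Name` format shown above:

```typescript
// Collect registered component names from REGISTRY.md's `## Name` headings.
function registeredComponents(registryMd: string): Set<string> {
  const names = new Set<string>();
  for (const line of registryMd.split("\n")) {
    const match = line.match(/^## (\w+)\s*$/);
    if (match) names.add(match[1]);
  }
  return names;
}
```

Wire this into the pre-merge check: if a PR adds a component file whose name is already in the set, fail fast.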
## The Validation Stack: 5 Layers of Defense
This is the most powerful pattern I've implemented. Multiple validation layers that catch mistakes systematically.
Real Example: Dependency Management
We had a recurring issue: agents would install packages but miss peer dependencies, causing runtime crashes.
**Layer 1: Implementation Agent Protocol**
When installing new packages:
1. Run `npm install package-name`
2. Check for peer dependency warnings
3. Install all peer dependencies explicitly
4. Verify app starts after install
**Layer 2: Review Agent Validation**
When reviewing PRs:
1. Check package.json for new dependencies
2. Verify all peer dependencies are included
3. Check for version conflicts
4. Request changes if dependencies are incomplete
**Layer 3: DevOps Agent Validation**
Before marking deployment ready:
1. Run `npm ls` to check dependency tree
2. Verify no unmet peer dependencies
3. Run build to catch missing deps
4. Run test suite to verify functionality
**Layer 4: Validation Script**

```bash
#!/bin/bash
# pre-merge-check.sh
echo "Checking dependencies..."
npm ls --depth=0 > /dev/null 2>&1
if [ $? -ne 0 ]; then
  echo "❌ Unmet peer dependencies detected"
  npm ls --depth=0
  exit 1
fi
echo "✅ All dependencies valid"
```
**Layer 5: Pre-Commit Hook**

```json
{
  "husky": {
    "hooks": {
      "pre-commit": "./scripts/pre-merge-check.sh"
    }
  }
}
```
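One caveat: the `package.json`-based husky config is the v4-era format. In husky v7 and later, hooks are committed as executable files under `.husky/` instead; the equivalent setup would look roughly like this:

```shell
#!/usr/bin/env sh
# .husky/pre-commit — husky v7+ keeps hooks as files, not package.json entries
. "$(dirname -- "$0")/_/husky.sh"

./scripts/pre-merge-check.sh
```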
Result:
- Before validation stack: 8 dependency issues reached production
- After validation stack: 0 issues in 3 weeks
The key insight: Redundancy in validation is a feature, not a bug. Agents are cheap. Catching bugs early is invaluable.
## From Instructions to Intuition
The goal isn't to write longer instructions. It's to encode patterns so well that agents develop "intuition."
Bad Instruction (too vague):

```
Build a user profile screen.
```

Better Instruction (specific but rigid):

```
Build a user profile screen with:
- Profile photo at top
- Name and bio below
- Edit button in header
- Use our standard card component
```

Best Instruction (pattern-based):

```
Build a user profile screen following:
- Standard screen layout pattern (see PATTERNS.md)
- User data component pattern (see COMPONENT_PATTERNS.md)
- Edit flow pattern (see INTERACTION_PATTERNS.md)

Include acceptance criteria:
- Profile data loads from API
- Edit button navigates to edit screen
- Loading and error states handled
- Unit tests cover happy path + error cases
```
The difference: the first two tell agents what to build. The third tells agents how to think about what to build.
## The ROI of Pattern Encoding
Time Investment:
- Initial pattern documentation: ~8 hours
- Refinements after Project #1: ~4 hours
- Refinements after Project #2: ~2 hours
Time Saved:
- Project #1: ~10 hours of debugging and rework
- Project #2: ~15 hours of debugging and rework
- Project #3 (estimated): ~20 hours
Patterns are front-loaded work that pays compound interest.
## The Living Documentation
Here's the crucial insight: agent instructions aren't just for agents. They're living documentation that makes your codebase maintainable.
Before pattern encoding:
- New developer: "Why did we choose Zustand over Redux?"
- Me: "Uh... I think it was simpler? Check the commit history?"
After pattern encoding:
- New developer reads PATTERNS.md
- Understands every architectural decision
- Knows exactly how to extend the app
- Can contribute without asking questions
Agents forced me to document what I'd been doing intuitively for 20 years.
## What to Encode First
If you're starting with agent-driven development, encode in this order:
Week 1: Foundation
1. File naming conventions
2. Project structure
3. Import ordering
4. Basic code style

Week 2: Tech Stack
5. Component patterns
6. State management rules
7. API call patterns
8. Error handling

Week 3: Quality
9. Testing requirements
10. Pre-merge checklist
11. Validation scripts
12. Component registry

Week 4+: Domain
13. Business logic rules
14. Edge cases from production
15. Integration patterns
16. Deployment procedures
Start small. Refine constantly. Let production bugs teach you what to encode next.
## The Compounding Effect
Each pattern you encode:
- Prevents an entire class of bugs
- Speeds up future features
- Makes agents more reliable
- Makes humans more productive
After two projects, my agent error rate dropped from 40% to 12%. Not because agents got smarter, but because my instructions got better.
(Chart: agent error rate at three maturity levels — no patterns, basic patterns, full pattern stack.)
That 28-point improvement compounds across every future project.
## What's Next
In the next article, I'll show you what works and what doesn't (yet) with agent-driven development. Real wins, real failures, real edge cases.
This is part 3 of a 6-part series on building production software with AI agents. ← Part 2: The Numbers | Part 4: What Works vs What Doesn't →