Maker’s Notes: Pair Programming in the Age of AI
Welcome to Maker’s Notes by Makers & Breakers
Each week in Makers & Breakers, we dive into the art of creation and reinvention—building systems that matter, breaking down barriers, and reflecting on the lessons we uncover along the way.
Maker’s Notes is where I share personal insights and stories from the frontlines of product development, leadership, and tech. These reflections tackle the messy realities of building, breaking, and learning—designed to challenge assumptions and spark new ways of thinking.
This Week’s Notes: The Hidden Cost of Unspoken Knowledge
In this week’s Maker’s Notes, I reflect on a recent experience with AI pair programming that revealed an uncomfortable truth: we often mistake expertise for clarity.
What started as a simple request—“Build an authentication system”—turned into a deeper exploration of how much we rely on silent, unarticulated knowledge. This gap doesn’t just affect AI collaboration; it impacts how we communicate with teammates, transfer knowledge, and scale our systems effectively.
Pair Programming in the Age of AI: The Hidden Cost of Unspoken Knowledge
"Just write a basic auth system," I told the AI coding assistant. The response was underwhelming - a naive implementation that missed crucial security considerations. Frustrated, I tried again, this time spelling out my requirements in detail. The difference was striking, but it raised an uncomfortable question: How often do we mistake expertise for clarity?
The rise of AI pair programming isn't creating new problems in how we work; it's exposing the cracks in our existing practices. While re-reading Shane Parrish's "The Great Mental Models," I realized something profound: our struggles with AI collaboration often mirror long-standing issues in how we develop software and share knowledge within our teams.
When Expertise Becomes a Barrier
Consider a recent interaction I had with an AI coding assistant. My first instinct was to approach it like I would any coding task - dive in with an intuitive understanding of what needed to be done. After all, isn't that what makes senior engineers efficient? We recognize patterns, we know the pitfalls, we can envision the solution instantly.
But here's what that conversation looked like:
Me: "Build an authentication system similar to Auth0." AI: generates a basic username/password system with minimal security Me: frustration builds
The same scenario plays out in team settings. A junior developer asks how to approach a problem, and we respond with "just do it like we did in the payment system." We forget that our understanding is built on years of accumulated context that others don't share.
The True Cost of Silent Knowledge
This "silent knowledge" problem isn't just about AI collaboration - it's costing us more than we realize:
Technical Debt:
Systems built on unspoken assumptions that become increasingly difficult to maintain
Security considerations that "everyone knows about" until they don't
Performance optimizations that made sense at the time but now no one remembers why
Knowledge Silos:
Critical system knowledge locked in the heads of a few developers
Onboarding that takes months because documentation doesn't capture the "obvious" things
Teams that grind to a halt when key members are unavailable
Failed Knowledge Transfer:
Junior developers who can't grow because they can't access senior developers' thinking processes
Architectural decisions that can't be properly evaluated because their context is lost
Security vulnerabilities that creep in because security assumptions weren't made explicit
Why We Resist Writing Things Down
"I'll document it later," we say. But let's be honest about why we don't:
Pride: Admitting that our intuitive understanding needs to be broken down feels like admitting it's not as solid as we think.
Pressure: In a world of aggressive deadlines, anything that doesn't directly produce code feels like waste.
Politics: Sometimes not documenting is a form of job security - being the only one who "gets it."
But there's a deeper reason: Writing forces clarity. When we write something down, we must confront the gaps in our understanding. It's easier to keep things vague and "intuitive."
The AI Mirror
AI pair programming is fascinating not because it's introducing new problems but because it's forcing us to confront existing ones. When an AI assistant produces code that doesn't match our expectations, it's often because:
We haven't clearly articulated our requirements
We've failed to make our assumptions explicit
We've skipped over crucial context
We've relied on pattern matching instead of clear thinking
Does this sound familiar? These are the same issues that plague our code reviews, architecture discussions, and team knowledge sharing.
A Better Way Forward
Let's revisit that authentication system example:
Instead of: "Build an auth system like Auth0."
Try this: "We need an authentication system that:
Uses JWT for session management because we're scaling across services
Implements rate limiting on password reset flow to prevent abuse
Requires exponential backoff on failed login attempts
Includes OAuth2 flows for third-party authentication
Follows our existing error handling patterns: [example]
Must handle concurrent sessions according to these rules..."
Notice the difference? The second version forces us to:
Make our assumptions explicit
Define our requirements clearly
Consider edge cases upfront
Provide necessary context
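To make that concrete, here is a minimal sketch, in Python with invented names and thresholds, of what just two of those explicit requirements (exponential backoff on failed logins and token-based sessions) might turn into. It illustrates the specification, not a production auth system:

```python
import secrets
import time

# Hypothetical policy values pulled from the explicit requirements above.
BASE_DELAY_SECONDS = 2      # wait after the first failed attempt
MAX_BACKOFF_SECONDS = 300   # cap the exponential backoff at 5 minutes

# In-memory state for illustration only; a real system would use a shared store.
_failed_attempts: dict[str, int] = {}
_locked_until: dict[str, float] = {}


def login(username: str, password: str, check_credentials) -> str | None:
    """Return a session token on success, or None while the account is backing off."""
    now = time.monotonic()
    if now < _locked_until.get(username, 0.0):
        return None  # still inside the exponential backoff window

    if not check_credentials(username, password):
        attempts = _failed_attempts.get(username, 0) + 1
        _failed_attempts[username] = attempts
        delay = min(BASE_DELAY_SECONDS * (2 ** (attempts - 1)), MAX_BACKOFF_SECONDS)
        _locked_until[username] = now + delay
        return None

    # Success: reset the counters and issue a token.
    _failed_attempts.pop(username, None)
    _locked_until.pop(username, None)
    # A real implementation would issue a signed JWT here (e.g. via a JWT library);
    # an opaque random token stands in for it in this sketch.
    return secrets.token_urlsafe(32)
```

Even this toy version surfaces decisions that "build a basic auth system" leaves implicit: how fast the backoff grows, where it caps, and when the counters reset.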
The Challenge to Current Practices
It's time to challenge some cherished beliefs in our field:
"Good code is self-documenting" - No, good code is clear about what it does, not why it does it.
"I can hold the whole system in my head" - Maybe, but can your team? Can your future replacement?
"Documentation slows us down" - What slows us down more: writing things down, or having to explain them repeatedly?
"We move too fast to document" - We move too fast not to document. The cost of undocumented decisions compounds over time.
Mental Models for Effective AI Collaboration
Returning to Parrish's "Great Mental Models," let's examine how these thinking frameworks can transform our approach to AI pair programming. Each model offers a unique lens for understanding and improving our collaboration with AI.
The Map Is Not The Territory
At its core, this mental model reminds us that our representation of reality is not reality itself. A map simplifies the real world to make it useful, but in doing so, it necessarily leaves things out. When we look at a road map, we don't see elevation, weather, or road conditions - yet these can be crucial for actually navigating the territory.
In AI pair programming, we often mistake our mental "map" of a solution for the complete territory in which it has to operate. When an experienced developer thinks of an "authentication system," their mental map includes years of experience with security vulnerabilities, user workflows, and edge cases. But this map lives only in their mind, while the AI operates in the actual territory of code and requirements.
Consider this common scenario:
Developer's mental map: "We need user authentication"
Actual territory includes:
Password complexity requirements
Account lockout policies
Password reset workflows
Session management across multiple devices
Rate limiting strategies
Audit logging requirements
Security breach notifications
Compliance requirements
This gap between map and territory explains why our first interactions with AI often feel frustrating. We're trying to navigate using a map only we can see. The solution isn't to create more detailed mental maps - it's to describe the territory explicitly.
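One way to describe the territory explicitly is to turn that checklist into a structured specification that both the AI and your teammates can read. Here is a minimal sketch in Python; the field names and values are hypothetical placeholders, not recommendations:

```python
from dataclasses import dataclass, field


@dataclass
class AuthPolicy:
    """Authentication 'territory' that usually stays in a developer's head."""
    min_password_length: int = 12            # password complexity requirements
    require_mixed_case: bool = True
    lockout_after_failures: int = 5          # account lockout policy
    lockout_minutes: int = 15
    reset_token_ttl_minutes: int = 30        # password reset workflow
    max_concurrent_sessions: int = 3         # session management across devices
    login_rate_limit_per_minute: int = 10    # rate limiting strategy
    audit_events: list[str] = field(default_factory=lambda: [
        "login", "logout", "password_reset", "lockout",   # audit logging requirements
    ])
    breach_notification_contacts: list[str] = field(default_factory=list)
    compliance_tags: list[str] = field(default_factory=lambda: ["GDPR"])
```

The point isn't this particular structure; it's that every field is a decision that otherwise lives only in someone's head.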
Circle of Competence
This mental model, famously used by Warren Buffett, emphasizes knowing what you know and understanding the boundaries of your knowledge. The key insight is that success comes not from expanding your circle infinitely but from operating clearly within it and recognizing when you're stepping outside it.
This model takes on new significance in the context of AI pair programming. We need to understand three overlapping circles:
What we know
What we know well enough to explain
What we can effectively communicate to an AI
Many developers operate confidently within their circle of competence when coding but struggle when that same knowledge needs to be explicitly communicated to an AI. They're discovering that "knowing how to do something" and "knowing how to explain it" are entirely different circles of competence.
For example, an experienced developer might know instinctively how to structure a scalable database schema but struggle to articulate the decision-making process to an AI.
Instead of: "Design a scalable database schema for an e-commerce platform."
We need to operate within our explainable circle of competence: "Design a database schema that:
Separates customer profiles from authentication data for security
Uses junction tables for many-to-many relationships in product categories
Implements soft deletes for order history maintenance
Includes audit fields for regulatory compliance
Structures indexes based on our most common query patterns..."
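Written down, those constraints become an artifact you can review. Here is a minimal sketch using SQLite from Python; the table and column names are invented for illustration, not a recommended design:

```python
import sqlite3

# Illustrative schema only; names, types, and indexes are placeholders.
SCHEMA = """
-- Customer profile kept separate from authentication data.
CREATE TABLE customer (
    id           INTEGER PRIMARY KEY,
    display_name TEXT NOT NULL,
    created_at   TEXT NOT NULL,            -- audit field
    created_by   TEXT NOT NULL,            -- audit field
    deleted_at   TEXT                      -- soft delete marker
);

CREATE TABLE credential (
    customer_id   INTEGER NOT NULL REFERENCES customer(id),
    email         TEXT NOT NULL UNIQUE,
    password_hash TEXT NOT NULL
);

CREATE TABLE product  (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE category (id INTEGER PRIMARY KEY, name TEXT NOT NULL);

-- Junction table for the many-to-many product/category relationship.
CREATE TABLE product_category (
    product_id  INTEGER NOT NULL REFERENCES product(id),
    category_id INTEGER NOT NULL REFERENCES category(id),
    PRIMARY KEY (product_id, category_id)
);

-- Soft deletes preserve order history; the index matches a common query pattern
-- (orders for a customer, newest first).
CREATE TABLE "order" (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(id),
    placed_at   TEXT NOT NULL,
    deleted_at  TEXT
);
CREATE INDEX idx_order_customer_placed ON "order"(customer_id, placed_at DESC);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```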
Second-Order Thinking
Second-order thinking is about looking beyond the immediate effects of our actions to understand their longer-term consequences. While first-order thinking asks "What happens next?", second-order thinking asks "And then what?" It's the difference between "This solves the problem" and "This solves the problem, but what new problems might it create?"
In traditional development, our experience often lets us intuitively consider these cascading effects. We know that certain design choices will impact maintainability, that performance optimizations might affect readability, that architectural decisions will influence future flexibility. But this understanding often remains unspoken.
In AI pair programming, this model becomes crucial because AI, by nature, tends toward first-order solutions unless explicitly guided otherwise. Consider a typical exchange:
First-order request: "Write a caching layer for this API endpoint."
The AI might deliver a functional caching solution but miss crucial second-order considerations:
Cache invalidation strategies
Memory management
Monitoring and debugging capabilities
Impact on system consistency
Deployment considerations
A second-order thinking approach would look like: "Implement a caching layer that:
Sets appropriate TTL based on data volatility
Includes cache invalidation hooks for manual control
Implements memory limits and eviction policies
Adds monitoring for cache hit rates
Handles race conditions in concurrent updates
Includes logging for cache-related errors
Considers deployment across multiple servers."
This mental model transforms our AI interactions from single-step solutions to comprehensive system designs.
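As a sketch of what that second-order spec could become, here is a tiny in-process cache in Python covering TTL, LRU eviction, manual invalidation, thread safety, and hit-rate monitoring. The names and defaults are mine, and cross-server consistency and deployment concerns would still need a real cache backend:

```python
import threading
import time
from collections import OrderedDict


class TTLCache:
    """In-process cache: per-entry TTL, LRU eviction, manual invalidation, hit-rate stats."""

    def __init__(self, max_entries: int = 1024, ttl_seconds: float = 60.0):
        self._data: OrderedDict[str, tuple[float, object]] = OrderedDict()
        self._max_entries = max_entries
        self._ttl = ttl_seconds
        self._lock = threading.Lock()   # guards against races in concurrent updates
        self.hits = 0                   # monitoring: hit/miss counters
        self.misses = 0

    def get(self, key: str):
        with self._lock:
            entry = self._data.get(key)
            if entry is None or entry[0] < time.monotonic():
                self._data.pop(key, None)      # missing or expired
                self.misses += 1
                return None
            self._data.move_to_end(key)        # mark as recently used
            self.hits += 1
            return entry[1]

    def set(self, key: str, value) -> None:
        with self._lock:
            self._data[key] = (time.monotonic() + self._ttl, value)
            self._data.move_to_end(key)
            while len(self._data) > self._max_entries:
                self._data.popitem(last=False)  # evict the least recently used entry

    def invalidate(self, key: str) -> None:    # manual invalidation hook
        with self._lock:
            self._data.pop(key, None)

    def hit_rate(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```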
Inversion
Inversion is about approaching problems backwards. Instead of asking "How do I achieve success?", we ask "What would ensure failure?" This often reveals crucial insights that forward thinking might miss. As Charlie Munger says, "All I want to know is where I'm going to die so I'll never go there."
In AI pair programming, inversion is particularly powerful because it helps us identify and articulate our implicit knowledge about what can go wrong. Rather than focusing solely on how something should work, we consider all the ways it could fail.
Traditional approach: "Create a user registration system."
Inverted thinking makes us consider: "What would make a registration system fail? It would:
Allow duplicate emails
Accept weak passwords
Miss validation on critical fields
Leak sensitive information in errors
Be vulnerable to automated signups
Lack rate limiting
Store passwords insecurely
Have no audit trail"
This inverted view then informs our actual request to the AI: "Create a user registration system that prevents:
Email duplication through case-insensitive uniqueness checks
Weak passwords by enforcing complexity rules: [specific rules]
Invalid data through comprehensive field validation
Information leakage by sanitizing error messages
Automated abuse through CAPTCHA and rate-limiting
Security breaches through proper password hashing
Data integrity issues by maintaining signup audit logs."
By thinking about failure modes first, we often create more robust and complete specifications for our AI pair programming sessions.
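Here is how a few of those failure modes translate into code: a minimal Python sketch using only the standard library, with invented rules and an in-memory stand-in for a database. CAPTCHA and rate limiting are deliberately left out of the sketch:

```python
import hashlib
import re
import secrets

# Illustrative complexity rule: at least 12 characters with upper, lower, and digit.
PASSWORD_RULE = re.compile(r"^(?=.*[a-z])(?=.*[A-Z])(?=.*\d).{12,}$")

_users: dict[str, dict] = {}            # keyed by normalized email; stands in for a database
_audit_log: list[tuple[str, str]] = []  # signup audit trail


def register(email: str, password: str) -> bool:
    normalized = email.strip().lower()   # case-insensitive uniqueness check
    if "@" not in normalized or normalized in _users:
        _audit_log.append(("rejected", normalized))
        return False                     # generic failure avoids leaking which check failed
    if not PASSWORD_RULE.match(password):
        _audit_log.append(("rejected", normalized))
        return False

    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    _users[normalized] = {"salt": salt, "hash": digest}
    _audit_log.append(("registered", normalized))
    return True
```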
The Compound Effect of Mental Models
When applied together, these mental models create a more effective framework for AI collaboration. The Map/Territory model reminds us to make our knowledge explicit, Circle of Competence helps us understand what we need to explain, Second-Order Thinking ensures we consider long-term implications, and Inversion helps us identify crucial requirements we might otherwise miss.
The future of development lies not just in mastering these mental models, but in using them to bridge the gap between human expertise and AI capabilities. The question isn't whether AI will transform software development - it's whether we can transform our thinking to make the most of this collaboration.
The tools are ready. Is our thinking?
More from Makers & Breakers
If this resonates with you, don’t miss this week’s Curated Reads in the newsletter, where we explore:
How AI is transforming documentation workflows, reducing toil and technical debt.
The Ivy Lee Method, a timeless productivity framework to help you focus on what matters most.