At Makers & Breakers, we dive into the art of creation and reinvention—building systems that matter, breaking down barriers, and reflecting on the lessons we uncover along the way.
Maker’s Notes is where I share personal insights and stories from the frontlines of product development, leadership, and tech. These reflections tackle the messy realities of building, breaking, and learning—designed to challenge assumptions and spark new ways of thinking.
From Writing Code to Shaping Systems: The Evolution of AI-Assisted Engineering
When AI tools like ChatGPT, GitHub Copilot, Cursor, Lovable, and Repl.it began making waves, the conversation quickly shifted to a familiar question: “Will AI replace engineers?” Predictions about the end of traditional coding sparked heated debates, with some worrying about obsolescence and others seeing it as the next great leap forward. But beyond the noise lies a deeper truth—AI isn’t replacing engineers; it’s changing what it means to be one.
Fact: It’s not about code generation at all. It never was.
Engineering has never been just about writing code. It’s always been about solving real problems, designing great systems, and creating value for users. Writing code is (was) simply the most tedious part of the process—a necessary means to an end.
AI tools don’t take this away; they amplify it, redefining the role in the process. And for smaller teams, where engineers often juggle multiple roles, this shift is even more profound.
Remember when you could keep most of your project’s context in your head? Those days are over. AI tools force a new way of working—one that relies on clarity, structure, and collaboration like never before.
Let’s explore how documentation, testing, and code reviews—the pillars of effective engineering—are no longer optional in the AI-augmented era. These aren’t just best practices; they’re essential for harnessing generative AI’s full potential and building systems that deliver value.
The Great Engineering Power-Up
Engineering has always been about power-ups. We started with punch cards, graduated to IDEs, embraced open source, and adopted cloud platforms. Each step made us faster, more efficient, and better at solving problems.
AI tools represent the next leap forward—but they’re fundamentally different. They don’t just accelerate coding; they shift the entire process.
Instead of writing every line of code yourself, you now partner with AI to co-create solutions. This partnership changes how we think about engineering. It’s no longer about the act of writing code but about designing systems, externalising intent, and building tools that amplify impact.
And while these principles apply to every team, they’re especially crucial for smaller teams or solo developers. When you’re the only person on a project, you need to externalise your thinking—not just for others or your future self but also for the AI tools.
The New Rules of Engineering
The transition to AI-assisted development brings three critical shifts:
1. Documentation: The Key to Guiding AI and Aligning Teams
In small teams—or even when you’re working solo—documentation often takes a backseat. Why write things down when everything lives in your head? But with AI in the mix, this approach no longer works.
Why? Because AI doesn’t think holistically. It doesn’t understand your client’s needs, the user’s experience, or the long-term implications of your design. That’s your job. Documentation becomes the bridge between your understanding and the tool’s ability to assist.
Here’s how to turn documentation into your superpower:
Before you code:
Write down the problem you’re solving step by step.
Create Architecture Decision Records (ADRs) to capture your reasoning.
Use flow diagrams to map out system interactions.
Write pseudo code to outline your approach (see the sketch after these lists).
After you implement:
Document the “why” behind your decisions.
Capture edge cases and integration points.
Update ADRs and diagrams to reflect what you actually built.
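To make this concrete, here’s a minimal sketch of what externalising intent can look like before any real code exists. Everything in it is hypothetical: the feature (a small loyalty discount), the file and names (intent.py, Order, apply_discount), and the rules are placeholders for illustration, not code from an actual project.

```python
# intent.py -- a hypothetical "write it down before you code" stub.
# The feature, names, and rules below are illustrative placeholders,
# not code from a real project.

from dataclasses import dataclass


@dataclass
class Order:
    subtotal: float       # order value before any discount
    loyalty_years: int    # how long the customer has been with us


def apply_discount(order: Order) -> float:
    """Return the payable total for an order.

    Problem (the "why"): long-standing customers should get a small,
    predictable reward at checkout.

    Pseudo code (the rules an AI assistant will implement against):
      1. 0-1 loyalty years  -> no discount
      2. 2-4 loyalty years  -> 5% off the subtotal
      3. 5+  loyalty years  -> 10% off the subtotal

    Edge cases:
      - A negative subtotal is invalid and should raise ValueError.
      - A discount must never push the total below zero.
    """
    raise NotImplementedError("Deliberately left for the AI/pair to fill in")
```

A stub like this is short enough to paste straight into a prompt, yet it already captures the problem, the reasoning, and the edge cases that your ADRs and diagrams hold at a higher level.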
Every piece of documentation is a conversation with your AI tools, teaching them how to think like you. It’s not just for others—it’s for you, ensuring you stay aligned with your intent as you scale.
2. Testing: Your Speed Insurance Policy
“Move fast and break things” doesn’t hold up in an AI-assisted world. AI tools can generate code 10x faster than you can write it, but they can introduce subtle bugs 10x faster, too.
For small teams, testing is no longer a chore—it’s an accelerator. When you invest in tests, you create guardrails that let you work faster with confidence.
Here’s the new testing playbook:
Write tests before you generate code (TDD). This gives AI clear boundaries and goals (see the sketch after this playbook).
Automate everything with CI/CD pipelines. Every commit should trigger tests across the stack.
Focus on edge cases and integrations. AI struggles with context, so test how components interact.
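To illustrate the first point in this playbook, here’s what tests-before-code might look like for the hypothetical discount stub from the documentation section, using pytest purely as an example runner. The names and numbers are assumptions carried over from that sketch, not real project code; the point is that the expected behaviour exists as executable boundaries before any implementation does.

```python
# test_apply_discount.py -- tests written *before* any implementation of the
# hypothetical apply_discount stub exists. Run them first (they should fail),
# then let the AI-generated code iterate until they pass.

import pytest

from intent import Order, apply_discount


def test_new_customers_pay_full_price():
    assert apply_discount(Order(subtotal=100.0, loyalty_years=1)) == 100.0


def test_mid_tier_customers_get_five_percent_off():
    assert apply_discount(Order(subtotal=100.0, loyalty_years=3)) == 95.0


def test_long_standing_customers_get_ten_percent_off():
    assert apply_discount(Order(subtotal=200.0, loyalty_years=7)) == 180.0


def test_negative_subtotal_is_rejected():
    # The kind of edge case an AI tool won't guess unless you spell it out.
    with pytest.raises(ValueError):
        apply_discount(Order(subtotal=-10.0, loyalty_years=2))
```

Wired into a CI pipeline, these run on every commit, whether the change came from you or from a tool.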
The math is simple: Time spent writing tests < Time spent debugging AI-generated code.
Testing isn’t about slowing down. It’s about creating systems that move at AI speed without breaking.
3. Code Reviews: The Human-AI Partnership
Here’s where it gets interesting: Code reviews aren’t just about catching bugs anymore—they’re about teaching your AI tools how to think like your team.
AI can generate functional code but doesn’t understand your architectural principles, coding patterns, or team values. That’s where code reviews come in.
The code review checklist:
Does this solution match our architectural principles?
Are we introducing patterns we want to see repeated?
Have we documented the non-obvious decisions?
Are our tests covering the right scenarios?
Think of code reviews as a feedback loop. Every decision you make during review helps refine how you guide your AI tools in the future. Over time, this partnership evolves, creating a system where the AI becomes an extension of your team’s expertise.
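One way to make that feedback loop tangible is to turn a recurring review decision into a small automated check, so the rule keeps applying to every future change, human-written or AI-generated, without anyone having to remember it. The sketch below is an assumption-laden example: it imagines a project with myapp/domain and myapp/api packages and enforces a single made-up rule, that domain code must not import the API layer. Your principles and package names will differ.

```python
# check_layering.py -- a minimal, hypothetical guardrail encoding one review
# decision: "domain code must not import from the API layer".
# The package names (myapp/domain, myapp/api) are assumptions for illustration.

import pathlib
import re
import sys

# Matches "import myapp.api" or "from myapp.api import ..." at the start of a line.
FORBIDDEN = re.compile(r"^\s*(?:from|import)\s+myapp\.api\b")


def main() -> int:
    violations = []
    for path in sorted(pathlib.Path("myapp/domain").rglob("*.py")):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if FORBIDDEN.match(line):
                violations.append(f"{path}:{lineno}: domain code imports the API layer")
    for violation in violations:
        print(violation)
    return 1 if violations else 0  # non-zero exit fails the CI job


if __name__ == "__main__":
    sys.exit(main())
```

Run it as a step in your pipeline (for example, python check_layering.py) and the conversation you had in one review keeps paying off on every subsequent change.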
Closing the AI Feedback Loop
Here’s the secret to thriving in the AI-assisted engineering era: The effectiveness of AI tools isn’t static. Their true power lies in how they adapt and improve based on your feedback.
Every time you document your intent, refine an AI-generated solution, or create better tests, you’re not just improving your project—you’re teaching the AI. This feedback loop turns your tools into extensions of your thought process, making them smarter and more aligned with your goals over time.
Here’s how this loop works across the three pillars:
Documentation: When you externalise your thinking, you give AI tools the context they need to generate better solutions. As you refine your documentation, you also refine how the AI understands your team’s processes.
Testing: Writing comprehensive tests ensures reliability and creates boundaries that help the AI avoid repeating mistakes. Tests act as both a safety net and a learning tool for the system.
Code Reviews: Code reviews are no longer just for human teammates. When you correct an AI-generated solution, you’re feeding those insights back into your process, improving future suggestions.
The loop is simple:
Guide your AI with clear input.
Use documentation, tests, and reviews to refine its output.
Feed those learnings back into the system.
Why this matters: With every iteration, your tools get better—and so do you. This isn’t just about solving today’s problems faster. It’s about building a system that evolves alongside your team, making it stronger, smarter, and more effective.
The Bottom Line
The teams and individuals who embrace this iterative mindset aren’t just surviving; they’re thriving. They’re turning their AI tools into true collaborators, scaling their capabilities, and delivering impactful solutions.
The bottom line? AI isn’t replacing engineers. It’s amplifying them. But this amplification only works if you commit to the feedback loop—refining the tools, improving your processes, and continually learning.
Your AI tools are as good as the systems you build around them. Make those systems exceptional, and the results will follow.
Let’s keep pushing forward!
— Luka