Eliminating Operational Friction: The Hidden Cost of Speed Without Structure
This is the second post in a miniseries: The best pieces of advice I have received. The first post is here. Unlike the first one, this lesson didn’t come from a single person. I learned it by watching how a company had built it into the way they worked.
Ever since working as a senior software engineer at Oqton, I’ve asked many people in big tech the same question: how do you write great code?
Their answers varied. Some talked about Scrum. Others mentioned Kanban boards, waterfall, or continuous deployment. But the same pattern sat underneath every methodology. Engineers from Google, Facebook, and other companies shipping at scale gave the same answer: we do code reviews, and our standards aren’t optional.
Some companies went further. They had gatekeepers: specific people whose sign-off you needed before your code could be merged. Others relied on peer review. The implementation varied. The principle stayed the same: someone else has read your work and validated it before it touches production.
I thought this practice was about catching bugs early. It is, but the deeper benefit took me longer to see.
The deeper benefit
You can’t estimate the cost of code you never wrote.
When a bug reaches production, someone debugs it, fixes it, deploys the fix, and watches the dashboards. You can measure that cost in engineer hours and customer impact. The cost of preventing the bug stays invisible: the code review someone dismissed as “overhead,” the architectural choice that looked inefficient but prevented three months of technical debt, the conversation that happened in a pull request rather than at 3 a.m. during an incident.
A second invisible benefit is harder to quantify, and matters as much. Code review is how people learn. When you read someone else’s code, you see how they think. You spot patterns you hadn’t considered. You ask questions that change how they work. Your team grows together. People feel that growth, the sense of building something collective rather than grinding through individual tasks. A spreadsheet won’t capture it, but it moves the bottom line in ways that solo speed can’t.
Most of the value of code review sits in the bugs that didn’t ship and the team habits that took root. Once you’ve worked in an organisation where code review is the way work happens rather than a separate process bolted on, you can’t unsee it. Shipping fast and shipping safely turn out to be the same thing, if you’ve built the right processes around them.
How I’ve applied this as an analyst and researcher
When I moved into researcher and analyst roles, the engineering habits I’d built at Oqton paid off. Analyst work differs from software engineering in one important way: much of it starts as a one-off. A stakeholder asks a question. You write a script, run it once, hand over the answer. The work looks disposable. It feels disposable. So most analysts skip versioning and skip sharing.
The trick: you don’t need to treat each analysis as a shared utility from day one. You stay alert. When you’re two projects in and you’ve written the same data pipeline twice, or solved the same statistical problem a third time, you’ve found your signal. You pause, refactor it into something reusable, and pull in collaborators to build on it.
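Here’s a minimal sketch of what that refactor can look like, assuming a pandas-based workflow; the file, function, and column names are illustrative, not from any real project:

```python
import pandas as pd

def load_weekly_events(path: str) -> pd.DataFrame:
    """The cleaning steps two one-off scripts already duplicated."""
    df = pd.read_csv(path, parse_dates=["timestamp"])
    df = df.dropna(subset=["user_id"])           # same filter both scripts applied
    df["week"] = df["timestamp"].dt.to_period("W")
    return df

def weekly_active_users(df: pd.DataFrame) -> pd.Series:
    """The metric both stakeholder questions reduced to."""
    return df.groupby("week")["user_id"].nunique()

if __name__ == "__main__":
    # Each new request becomes a few lines on top of the shared pieces,
    # and a teammate can review and extend them.
    events = load_weekly_events("events.csv")
    print(weekly_active_users(events))
```

The code itself is trivial; the point is that once the duplicated steps live behind named functions, the third request is a few lines on top of reviewed, shared pieces.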
You need one or two collaborative projects before you feel the difference. Once you’ve seen how much faster you move when a teammate understands your code, reviews it, and builds on top of it, you start structuring for reuse earlier. The balance matters. You don’t want to slow yourself down on the first pass. You do want to notice when collaboration unlocks something you couldn’t reach alone.
The warning: AI agents are about to shift the work
With AI coding agents, you can write a prompt and have a working artifact in minutes. An executive in a meeting can ask “can you try that?” and instead of pushing back with “that takes time,” you can say “let me branch and experiment, I’ll have something in ten minutes.” This is possible right now. It will change how analysts and engineers work with stakeholders.
The trap is treating speed as the goal. When you use an AI agent, don’t push the whole project forward alone at maximum velocity. If you do, your team falls behind, and you become the only person who knows why each piece is the way it is.
Instead, branch and experiment in parallel. Create a branch. Write a prompt that nudges the agent in a new direction. Show the artifact to stakeholders. Gather feedback. Iterate. You explore several paths at once and bring people into the loop as you go.
The collaboration deepens. Stakeholders see the work as you do it, not six weeks later when you call it “done.” They can ask “what if we try this?” and you can spin up a branch in minutes. Specs and code stop being separate jobs. More people experiment. More people see where the work is heading, in real time.
This works only if you’ve built for it. You need:
- Branching as a first-class citizen. Don’t finish main first and then experiment. Run experiments alongside main from the start.
- Specs and intent captured next to the artifacts. When the agent generates something, the team should know why. Document the gap between what you asked for and what you got (see the sketch after this list).
- Guardrails, not gatekeeping. Only certain people push to main. Many more can branch and experiment. That’s how you avoid silos without losing control.
- A different place for triage. The old model triaged work up front to decide what was worth building. In a branching-heavy world, the harder question is which experiments deserve to land on main and reach production. Define the must-haves at the start. Run the real triage at merge time.
- Stakeholder feedback built into the rhythm. Reviews don’t wait for the end. Stakeholders see progress and respond as the work moves.
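For the second item, capturing specs and intent next to the artifacts, one lightweight shape is a small metadata file written beside whatever the agent produced. This is a sketch under my own assumptions; the field names and the .intent.json convention are invented for illustration:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def record_intent(artifact: Path, prompt: str, intent: str, gaps: list[str]) -> None:
    """Write the why next to the what, so a reviewer sees both in one diff."""
    meta = {
        "artifact": artifact.name,
        "intent": intent,    # what we were trying to achieve
        "prompt": prompt,    # what we actually asked the agent
        "gaps": gaps,        # where the output fell short of the ask
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    meta_path = artifact.parent / (artifact.name + ".intent.json")
    meta_path.write_text(json.dumps(meta, indent=2))

# Example: an agent-generated report gets its intent file committed beside it.
record_intent(
    Path("churn_experiment.md"),
    prompt="Summarise churn by cohort; flag cohorts where churn doubled.",
    intent="Give the exec team a ten-minute read before Thursday’s review.",
    gaps=["Agent ignored cohorts smaller than 50 users; added them manually."],
)
```

The format matters less than the habit: at merge time, triage reads the intent file; six months later, the next analyst does.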
A new norm: sharing prompts alongside code
There’s one more habit teams need, and it will feel uncomfortable at first.
Code review should now include the prompts, not the code alone. The prompts are where the learning sits. If a teammate writes terse prompts and gets strong output, they’ve built up context the rest of the team can’t see, a stack of patterns and instructions you’d need to read the prompt to find. Without the prompt, you’d see their PR and assume they ship faster because they’re smarter, when in fact they’ve engineered better leverage.
This is how teams get up to speed in the new era. Reading the prompt next to the code shows you what good prompting looks like in your domain and on your codebase.
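Concretely, the simplest version is committing the prompt in the same file the agent produced, so the reviewer reads both in one diff. The header-comment format below is one possible convention, my assumption rather than an established standard:

```python
# --- prompt (for reviewers) ---
# "Write a function that deduplicates orders by order_id, keeping the row
#  with the latest updated_at. Input is a pandas DataFrame. No side effects."
# --- generated code, lightly edited ---
import pandas as pd

def dedupe_orders(orders: pd.DataFrame) -> pd.DataFrame:
    """Keep the most recent row per order_id."""
    return (
        orders.sort_values("updated_at")
              .drop_duplicates(subset="order_id", keep="last")
    )
```

A sibling file works just as well, say a dedupe_orders.prompt.md next to the module; what matters is that the prompt lands in the same review as the code.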
There’s a real barrier worth naming. Prompts feel exposing in a way finished code doesn’t.
Code is polished. You’ve shaped it and cleaned it up. A prompt is raw. It captures what you thought, in plain language, often half-formed. Sharing it feels closer to sharing a draft of your reasoning than handing over a finished product. People will resist without noticing they’re resisting. You’ll hear “the prompts aren’t useful to share” or “I wrote whatever came to mind.”
The instinct is understandable, but it’s wrong. The prompt looks low-value precisely because it sits so close to thought. That’s the part that doesn’t transfer through the code: what you were trying to do, and what you assumed the agent would handle. And that’s exactly why sharing it helps your teammates most.
Teams that move past this barrier early will pull ahead. They’ll move faster and build something they can maintain, and stakeholders will be aligned because they were in the loop from the beginning.
The alternative: ship something fast that no one else understands. Then get stuck when the person who built it leaves, or when you need to scale it past their context.