If software development is R&D work, then manufacturing metrics don’t capture what actually creates value. R&D organizations don’t measure success by widgets produced per hour—they measure it by discoveries made, problems solved, and knowledge gained.
We’re used to measuring value in software by visible output: features shipped, story points completed, velocity trends. These metrics feel concrete. They’re easy to track. They show “progress.”
But there’s a question worth asking: Is that actually value? Or is it just activity? And more importantly: does it measure the right things for R&D work?
A team ships a feature in two weeks. Looks great on the roadmap. But six months later, nobody uses it. The feature solved the wrong problem. Or created new problems that took months to unwind. Or became technical debt that slows down every future change in that part of the system.
Was that valuable? The velocity chart says yes. The shipped feature count says yes. But the actual outcome was negative value—time and effort that made things worse, not better.
Another team spends three weeks investigating a proposed feature. They discover it would conflict with three other systems, create security issues, and doesn’t actually solve the customer problem that prompted it. They recommend a simpler alternative that takes one week and actually addresses the need.
Traditional metrics would show this team as “slow”—three weeks for what became a one-week project. But the actual value was enormous: they prevented building the wrong thing, avoided creating new problems, and found a better solution. The three weeks of investigation generated more value than months of shipping features that don’t matter.
Value in software isn’t in the code you write. It’s in the problems you solve and the disasters you prevent.
Preventing waste is value. Investigation that reveals “we shouldn’t build this” just saved weeks or months of effort. That’s not a failed investigation—that’s a home run. You prevented negative value.
Understanding is value. Knowing why a system behaves the way it does, what the real constraints are, where the complexity lives—that understanding pays off every time you touch that system. It prevents mistakes. It enables better decisions. It’s invisible on a velocity chart but massive in impact.
Good decisions are value. Choosing the right approach after investigating alternatives creates value that compounds over time. The code is easier to change later. It doesn’t create technical debt. It solves the actual problem. That’s worth more than ten features that barely work.
Avoided problems are value. Finding the security issue during investigation instead of in production. Discovering the performance problem before launch. Recognizing the edge case that would break things. These discoveries don’t ship, but they create enormous value.
Simplified solutions are value. Investigation often reveals that the complex feature can be replaced with a simple change. The simple solution is more valuable than the complex one, even though it “ships less code.”
Code quality reflects investigation quality. When you investigate thoroughly before coding, the resulting code is different. It’s not a hodgepodge of guesses and patches. It’s a clear expression of understanding.
Thorough investigation leads to code that is clean, minimal, and maintainable: it solves the actual problem, handles the real edge cases, and carries no speculative abstractions or defensive patches against unknowns you never pinned down.
Good code is often less code. Better code is sometimes no code—investigation reveals the problem doesn’t need solving, or can be solved by removing something rather than adding.
When code comes from investigation rather than guessing, the final code becomes a pristine artifact: evidence of understanding rather than accumulated guesses. Tech debt avoidance isn't something you have to work at separately. It's a natural outcome of investigation-driven development.
Compare two approaches to the same feature:
Without investigation: Build based on initial description. Discover missing requirements. Patch the code. Discover more edge cases. Add defensive checks. Realize the approach doesn’t scale. Refactor. Add abstractions for “future flexibility.” End up with complex code that’s hard to change because you don’t fully understand why it’s built the way it is.
With investigation: Investigate first. Understand the actual requirements, edge cases, and constraints. Choose an approach that fits what you learned. Build cleanly because you understand the problem. The code is simpler because you’re not defending against unknown unknowns—you investigated and found the actual knowns.
The code you write is the smallest part of the work. Investigation is the majority. The code just captures what you learned. When investigation is thorough, code tends to be clean, minimal, and maintainable. When investigation is rushed or skipped, code tends to be messy, over-built, and fragile.
This is why “good code” isn’t primarily about coding skill. It’s about investigation quality. The code reflects your understanding. Better investigation leads to better understanding leads to better code.
There’s a common assumption: faster is better. Ship more, ship faster, increase velocity.
But speed without direction is just motion. You can ship the wrong things very efficiently.
Consider two teams:
Team A ships ten features per quarter. High velocity. But half the features see little use. Two create new problems. Three are harder to maintain than expected. The team is constantly putting out fires from previous work.
Team B ships five features per quarter. Lower velocity. But they investigate first. Each feature solves a real problem. The implementations are clean. Nothing comes back as a fire drill. The codebase gets healthier over time.
Which team creates more value? The velocity chart says Team A. The actual business impact says Team B.
Speed is valuable when you’re building the right things. Speed while building the wrong things is expensive.
If velocity and feature count don’t capture value, what does?
Here are alternative ways to think about value:
Problems solved, not features shipped. Did this work solve a real problem? Is it getting used? Did it have the intended impact? A feature that ships but doesn’t solve anything created zero value, regardless of how fast it was delivered.
Waste prevented. How many bad ideas did we kill in investigation before wasting months building them? Each “stop” decision based on investigation is a win.
Quality of decisions. In retrospect, were our technical decisions good ones? Did investigation lead to better choices than guessing would have?
Knowledge gained. What did we learn that will help with future work? Understanding earned during investigation has lasting value.
Technical health. Is the codebase getting easier or harder to work with over time? Features that make future work harder created negative value, even if they “shipped.”
Time to good solution. Not “time to any solution,” but time to a solution that actually works well. A feature that ships fast but needs two rewrites took longer than investigating first and building it right.
Customer impact. Are we solving real problems? Do people actually use what we build? Do they find it valuable?
The challenge is that investigation-created value is often invisible or delayed: the disaster you prevented never shows up on any chart, the feature you didn't build leaves no artifact, and the understanding you gained pays off months later in work nobody traces back to it.
This makes it hard to “prove” investigation creates value in traditional metrics. The value is real, but it’s not captured by story points and velocity.
This is why documentation of investigation matters. Investigation reports, architecture decision records, stop decisions—these make invisible value visible. They show: “We investigated X, discovered Y, decided Z, and here’s why that was the right call.”
There’s a particular trap worth highlighting: moving fast because something seems “quick to build.”
“This is just a simple form. Let’s skip investigation and build it.”
Then you discover the complications investigation would have surfaced: hidden dependencies, conflicting requirements, edge cases nobody mentioned.
What seemed quick became slow. But worse, you’re now committed. You started building without investigating, so stopping feels wasteful. You power through, creating technical debt and missing the underlying issues.
The thing that's "quick to build" only looks quick because you don't yet understand the context. Understanding the context usually reveals why it isn't actually quick.
Real value comes from understanding first, then building. Not building fast, then understanding why it broke.
If we’re honest, a lot of software development isn’t creating value. It’s creating activity that looks like value.
Features that nobody asked for, built because they were on the roadmap. Technical solutions to problems that investigation would reveal don’t exist. Fast delivery of things that create more problems than they solve. Code written to hit story point targets, not to solve problems.
This isn’t because developers are lazy or incompetent. It’s because we measure activity instead of value. When you measure velocity, you optimize for velocity. When you measure features shipped, you optimize for shipping features. Regardless of whether those features create value.
An R&D mindset shifts the question from “How much did we ship?” to “What value did we create?” And sometimes the answer is: “We investigated three potential features, killed two bad ideas, and built one thing that actually solves the problem.” That’s a valuable quarter, even if the velocity chart looks low.
If you want to shift toward valuing investigation and good decision-making, you need to make that value visible:
Document investigations. Write up what you learned, even if you decide not to build. Especially if you decide not to build.
Track prevented waste. Keep a log of "things we almost built but investigation showed we shouldn't." Calculate the time and money saved; a minimal sketch of such a log follows this list.
Show decision quality. Six months after a decision, review: was it good? Did investigation help?
Measure customer impact. Are people using what you built? Did it solve their problem?
Track technical health. Is the codebase getting better or worse? Are bugs increasing or decreasing?
Celebrate learning. Make “we learned something important” a success metric, not just “we shipped something.”
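For the prevented-waste log mentioned above, even something this small works: a record per stop decision with rough effort estimates attached. Here is a minimal sketch in Python; the field names, the example entry, and the numbers are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PreventedWaste:
    """One 'we almost built this' entry, recorded when investigation says stop."""
    title: str                    # what was proposed
    decided_on: date              # when the stop decision was made
    estimated_build_weeks: float  # rough cost if we had gone ahead and built it
    investigation_weeks: float    # what the investigation actually cost
    reason: str                   # why investigation said no

def net_weeks_saved(log: list[PreventedWaste]) -> float:
    """Estimated effort avoided, minus the investigation time spent to avoid it."""
    return sum(e.estimated_build_weeks - e.investigation_weeks for e in log)

# Illustrative entry only: numbers are made-up estimates, not real data.
log = [
    PreventedWaste(
        title="Cross-system reporting feature",
        decided_on=date(2024, 3, 1),
        estimated_build_weeks=10.0,
        investigation_weeks=3.0,
        reason="Conflicts with three systems; simpler one-week alternative found",
    ),
]

print(f"Estimated weeks of waste prevented: {net_weeks_saved(log):.1f}")
```

Even rough, order-of-magnitude estimates turn "we decided not to build it" into something you can point to at the end of the quarter, alongside whatever did ship.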
These don’t fit cleanly into traditional agile metrics. That’s kind of the point. Value in R&D work doesn’t look like value in manufacturing.
Pharmaceutical companies don’t measure lab success by “experiments run per week.” Bell Labs didn’t measure researchers by “prototypes built per quarter.” They measured discoveries, insights, breakthroughs—the knowledge created, not just the activity performed.
If software development is R&D work, we need R&D metrics: learning, decision quality, problems prevented, knowledge gained. Not manufacturing metrics like velocity and throughput.