
A Conversation on Software Engineering’s Immaturity

The following is a lightly edited conversation exploring the young, still-naive state of software engineering compared to more established disciplines like law and medicine.


Human: I want to explore the naive and young nature of software engineering compared to older, more established disciplines like law and medicine. How no one really knows the right answers and how the speed of technology isn’t helping.

Claude: You’ve touched on something really fundamental about software engineering’s place in the broader landscape of professional disciplines. There’s a striking contrast when you compare it to fields like law or medicine, which have had centuries or even millennia to develop their foundational principles and practices.

Software engineering is barely 70 years old if we’re generous about when it started, and really more like 30-40 years old in terms of widespread professional practice. Compare that to medicine, which builds on thousands of years of accumulated knowledge, or law, which has evolved sophisticated frameworks over centuries. We’re essentially still in the “bloodletting and leeches” phase of software development in many ways.

This youth shows up everywhere. We don’t have well-established ethical codes that everyone agrees on. We lack standardized licensing or certification processes that truly validate competence. Our “best practices” change every few years, sometimes dramatically. What we considered cutting-edge architecture five years ago might be considered technical debt today.

In established disciplines, there are clear hierarchies of evidence and knowledge. A doctor can point to peer-reviewed studies, clinical trials, and established anatomical facts. A lawyer can cite centuries of precedent and refined legal principles. But in software? We’re often making architectural decisions based on blog posts, conference talks, and the collective gut feelings of people who’ve been doing this for maybe a decade.

The pace of change means that by the time we might have enough data to really understand the long-term consequences of our choices, the technology has already evolved three generations. We’re constantly building on shifting sand.

The speed of technological change, which is often celebrated as innovation, actually works against the kind of knowledge accumulation that makes disciplines mature. In medicine, treatments are studied for years before approval. In law, precedents build slowly over decades. But in software, we’re expected to adopt new frameworks, languages, and paradigms on yearly cycles.

This creates a perpetual state of technological adolescence. We never get the chance to really understand the tools we’re using before we’re pushed to learn new ones. The industry rewards novelty over depth, movement over mastery.

Human: I’ve also seen this drive to seek out gurus who proverbially sit on a mountaintop, with people visiting them for sage wisdom to an almost religious degree. Thoughts?

Claude: That’s such a pervasive and telling pattern in software engineering - this almost mystical reverence for certain figures who are treated less like professionals and more like oracles.

You see it everywhere: the conference circuit where certain speakers are treated like rockstars, the way people hang on every tweet from figures like DHH or Uncle Bob, the almost religious devotion to frameworks created by specific individuals. There’s this sense that somewhere out there, someone has “figured it out” and if we can just learn their secrets, we’ll transcend our current struggles.

It’s fascinating how this mirrors pre-scientific approaches to knowledge. In mature fields, we’ve moved beyond the idea that wisdom comes from individual authorities. Medical knowledge isn’t about following the teachings of one brilliant doctor - it’s built on reproducible research, peer review, and systematic validation. But software engineering still operates very much in this guru model.

The lack of established foundations makes us particularly vulnerable to this kind of thinking. When you don’t have solid institutional knowledge to lean on, charismatic individuals promising clarity become incredibly appealing. The complexity and constant change in our field creates this persistent anxiety - “Am I doing this right? Is there a better way?” - and gurus offer the comfort of apparent certainty.

What’s particularly dangerous is how this creates cargo cult programming - people copying the practices of successful companies or individuals without understanding the context that made those practices work. They see that Google does microservices, so microservices must be the answer. They read that some guru uses functional programming, so that must be the path to enlightenment.

This guru worship actually impedes the kind of systematic knowledge building that would help the field mature. Instead of developing evidence-based practices, we get orthodoxies based on individual personalities and their particular experiences.

The whole conference ecosystem feeds into this. People spend thousands of dollars to travel across the world to hear these figures speak, often delivering variations of the same talks they’ve been giving for years. It has all the hallmarks of religious pilgrimage - the expense, the travel, the communal experience of being in the presence of the “enlightened.”

Human: I think this is where I’ve heard the term “drinking the Kool-Aid.”

Claude: Exactly! That phrase captures perfectly how technical choices become ideological commitments rather than pragmatic decisions.

“Drinking the Kool-Aid” in tech usually means you’ve gone beyond simply using a technology or methodology - you’ve bought into the entire worldview that comes with it. You’re not just using React; you’re a “React developer” who believes in the Facebook way of building UIs. You’re not just practicing Agile; you’re an “Agilist” who sees waterfall as inherently evil.

It’s particularly pronounced around certain ecosystems. Look at how people talk about Apple vs. Android, or Rails vs. Django, or vim vs. emacs. These aren’t just tool preferences - they become identity markers, complete with in-group language, shared assumptions, and often, disdain for the “other side.”

What’s fascinating is how people describe discovering their preferred technology or methodology. They use language that’s almost religious: “I saw the light,” “it changed everything,” “I finally understood.” There’s this narrative of transformation, of moving from ignorance to enlightenment.

And once someone has drunk the Kool-Aid, they often become evangelists. They don’t just use their preferred tools; they need to convert others. Conference talks become sermons. Blog posts become testimonials. Code reviews become opportunities for missionary work.

This creates real problems. When you’ve tied your identity to a particular technology or approach, it becomes very hard to objectively evaluate its weaknesses or consider alternatives. Criticizing someone’s preferred framework becomes like criticizing their religion - it’s taken as a personal attack rather than technical feedback.

Maybe most dangerously, once you’ve drunk the Kool-Aid, you become somewhat immune to contradictory evidence. If your chosen framework has performance problems, well, you just need to understand it better. If your methodology isn’t working, you’re probably not doing it “right.” The fault is always in the implementation, never in the core beliefs.

Human: I’ve never liked seeing the adjective ‘agile’ turn into a proper noun.

Claude: That linguistic shift is so revealing! The moment “agile” (a quality) became “Agile” (a Thing), it stopped being a way of working and became something you either belonged to or didn’t.

When we capitalize something, we’re essentially creating a proper noun - a specific entity with defined boundaries and membership. “Being agile” suggests flexibility, responsiveness, adaptability. But “doing Agile” suggests following a prescribed set of practices and beliefs. It’s the difference between being described as agile versus joining the Agile movement.

You see this pattern everywhere in tech. We don’t just write object-oriented code; we practice “Object-Oriented Programming.” We don’t just organize our code functionally; we embrace “Functional Programming.” The capitalization signals that we’ve moved from describing how we work to defining who we are.

The moment something becomes a proper noun in software development, someone starts offering certifications in it. You can’t be certified in “being agile,” but you can absolutely get your Scrum Master certification, your SAFe certification, your various Agile coaching credentials. The capitalization enables the monetization.

And with that comes orthodoxy. When “agile” was just an adjective describing adaptive practices, teams could figure out what worked for them. But “Agile” comes with specific rituals, specific roles, specific artifacts that must be followed to be truly Agile™.

There’s something deeply ironic about how “Agile” - which supposedly values responding to change over following a plan - has become one of the most rigid methodological frameworks in software development. Teams spend more time in Agile ceremonies than they do writing code, following processes that would make a 1990s waterfall manager proud of their thoroughness.

The original Manifesto talked about valuing “individuals and interactions over processes and tools,” but Agile Inc. has given us Jira workflows, story point poker, and an endless parade of consultants selling us better processes and tools for being Agile.

When something becomes a proper noun, it loses the ability to be qualified or nuanced. You can be “somewhat agile” or “agile in certain areas,” but you can’t be “somewhat Agile” - you either are or you aren’t. You’re either drinking the Kool-Aid or you’re a waterfall dinosaur.

Human: One of my favorite books on the subject was “Lean Software Development” by Mary Poppendieck.

Claude: That’s a brilliant choice! The Poppendiecks’ work stands out precisely because it avoided so many of the traps we’ve been discussing.

What made “Lean Software Development” so refreshing was that it didn’t try to invent a new religion. Instead, it looked to manufacturing - a field that had actually spent decades systematically studying what works and what doesn’t. Toyota didn’t develop lean manufacturing through conference talks and blog posts; they developed it through rigorous experimentation and measurement over generations.

Mary and Tom brought that evidence-based thinking to software. Rather than saying “here’s the One True Way,” they said “here are principles that have proven effective in complex systems, and here’s how they might apply to software development.”

The book focused on underlying principles - eliminate waste, amplify learning, decide as late as possible - rather than prescriptive practices. You couldn’t get “Lean Software Development Certified” because it wasn’t a methodology you could package and sell. It was a way of thinking about problems.

This is exactly what mature disciplines do. Medical education teaches principles of diagnosis and treatment, not rigid protocols for every possible situation. Legal education teaches how to reason about law, not just memorize statutes.

The Poppendiecks also avoided the guru trap by explicitly grounding their ideas in the work of others - Deming, Ohno, and the Toyota Production System practitioners. They positioned themselves as translators and interpreters rather than visionaries or prophets.

And notably, they didn’t try to create a movement around themselves. They wrote the book, shared the ideas, and largely let others figure out how to apply them. No certification programs, no consulting empire, no annual conferences with them as the keynote speakers.

What I found most valuable was how they emphasized learning and measurement. Instead of saying “this is how you should work,” they said “this is how you can learn what works in your context.” That’s such a different mindset from most software methodologies.

It’s telling that while Agile spawned an entire industry, Lean Software Development just… influenced how people think. Which is probably exactly what good ideas should do - they should change minds, not create movements.

Human: Another great read was “Mythical Man-Month.”

Claude: Now that’s a perfect example of what real wisdom looks like in our field! Brooks wrote that in 1975 - nearly 50 years ago - and it’s still more relevant and insightful than most of the books published last year.

“The Mythical Man-Month” has this quality that distinguishes truly foundational work: it’s not about the technology of its time, it’s about the underlying nature of software development itself. Brooks wasn’t selling a methodology or framework - he was sharing hard-won insights about the fundamental challenges of building complex systems with teams of people.

The fact that we’re still dealing with the exact same problems he identified - communication overhead, the impossibility of perfectly partitioning work, the temptation to add more people to late projects - shows how he was addressing something deeper than the fashions of his era.

What made Brooks credible wasn’t that he had a compelling philosophy or gave great talks. He had managed one of the most complex software projects in history (OS/360) and was honest about both the successes and the spectacular failures. His insights came from scar tissue, not speculation.

This is so different from how we typically generate “wisdom” in software development today, where someone has a good experience with a particular technology at one company and suddenly they’re writing books and giving conference talks about universal principles.

His famous “No Silver Bullet” essay might be the most important thing ever written about our industry. The idea that software development is fundamentally difficult, that there are essential complexities that can’t be abstracted away, runs completely counter to our industry’s perpetual search for the Next Big Thing that will finally make programming easy.

But we keep ignoring this lesson. Every few years, someone promises that their new language, framework, or methodology will finally solve the fundamental problems of software development. Brooks basically predicted this cycle of hype and disappointment decades ago.

What strikes me most about Brooks is his humility. He’s not positioning himself as having figured everything out - he’s sharing what he learned from a difficult project and acknowledging the inherent challenges of the work. That kind of intellectual honesty is rare in our guru-driven industry.

It’s telling that his book doesn’t have a sequel called “Advanced Man-Month Techniques” or spawn a certification program. The insights stand on their own without needing a movement built around them.

Human: I like the term “intellectual honesty.”

Claude: That phrase really cuts to the heart of what’s missing in so much of our industry discourse, doesn’t it?

Intellectual honesty requires admitting uncertainty, acknowledging limitations, and being willing to say “this worked for us in this specific context, but I’m not sure it’s universally applicable.” That’s incredibly rare in tech, where everyone seems compelled to present their particular experience as universal truth.

Brooks exemplified this. He didn’t claim OS/360 was a perfect project or that he had all the answers. He said “here’s what we tried, here’s what worked, here’s what failed catastrophically, and here are some patterns I think I see.” That kind of humility creates actual learning opportunities.

Our industry puts enormous pressure on people to be experts, to have strong opinions, to project confidence. If you’re giving a conference talk or writing a blog post, you’re expected to have THE solution, not just “here’s something we tried that seemed to help a little bit.” Nuance doesn’t get retweets.

This creates a perverse incentive structure where intellectual honesty is actually penalized. The person who says “it depends on your context” loses the audience to the person who promises “this one weird trick will solve all your problems.”

Intellectually honest people also acknowledge when their thinking has changed. They don’t pretend their current beliefs were always their beliefs, or that they’ve never been wrong about anything. But in tech, changing your mind about a technology or approach is often seen as inconsistency or weakness rather than growth.

Maybe most importantly, intellectual honesty recognizes that context matters enormously. The person who says “microservices are always better” or “monoliths are always better” is being intellectually dishonest. The honest answer is almost always “it depends on your team size, your domain complexity, your operational maturity, your business requirements…”

But “it depends” doesn’t sell books or build personal brands. Simple, universal truths do, even when they’re not actually true.

Human: You hit on something: the clickbait-y culture we live in, where YouTube videos don’t get traction all because their thumbnail was ‘lacking.’

Claude: That’s such a perfect example of how the medium distorts the message! The visual packaging has become more important than the actual content, and this creates perverse incentives that actively work against nuanced, thoughtful discourse.

It’s absurd when you think about it - some brilliant developer could create a genuinely insightful 20-minute explanation of a complex architectural decision, but if they don’t have a shocked facial expression and giant red arrows pointing at some code snippet in their thumbnail, the algorithm buries it. Meanwhile, “THIS JAVASCRIPT TRICK WILL BLOW YOUR MIND 🤯” gets millions of views even if the content is recycled beginner material.

The thumbnail becomes the primary filter for technical knowledge. Not peer review, not citation counts, not real-world validation - just how clickable the image looks.

This creates a race to the bottom in terms of complexity. Nuanced topics get reduced to binary choices: “React vs Vue: Which is BETTER?” when the honest answer is “they’re tools with different trade-offs that work better in different contexts.” But you can’t thumbnail nuance.

The most intellectually honest response to most technical questions is some variation of “it’s complicated and depends on many factors.” But that doesn’t drive engagement. The algorithm rewards certainty, controversy, and oversimplification.

YouTube has turned technical expertise into performance art. The knowledge transfer becomes secondary to the entertainment value. Developers start optimizing for watch time and subscriber counts rather than actually helping people solve problems or understand concepts deeply.

You end up with this weird situation where the most viral “educational” content is often the least educational. The person calmly explaining the subtle trade-offs between different database architectures gets 1,000 views, while “I Built a Database in 100 Lines of Code!” gets 100,000.

What’s particularly damaging is how this shapes what people think they should learn. Developers start chasing the flashy techniques they see in viral videos rather than building foundational understanding. They want to learn the “mind-blowing tricks” instead of the boring fundamentals that actually matter.

It’s like if medical education were driven by YouTube thumbnails - everyone would know the rare, exotic diseases, but nobody would understand how to take blood pressure.

Remember when technical blogs were just… long-form explanations of problems and solutions? No hooks, no cliffhangers, just “here’s a problem I encountered, here’s how I thought about it, here’s what I tried, here’s what worked.” That format has almost completely disappeared in favor of content optimized for social media engagement.


This conversation highlights the ongoing challenges facing software engineering as it struggles to mature into a more established discipline, balancing the need for innovation with the wisdom that comes from accumulated experience and intellectual honesty.