
ProdFund 1.11: So What?

To wrap up the first season of the podcast, we're reviewing some big themes and then looking toward the future of software development.
A digital collage of a walking path in clouds, with angular shapes and visual artifacts that suggest a digital setting.

Product Fundamentals is a podcast dedicated to spreading the core knowledge software product people need in order to succeed. Season 1 is structured as a history of how we make software.


Sixty thousand words and six hours of audio later, we've made it to the end of season 1 of Product Fundamentals!

This episode, we'll revisit the question that we started with: why do we make software in the way that we do? I'll draw some principles from the story we've covered, and suggest what that history indicates about what could come next.

The audio for this episode is embedded below, and the episode transcript follows.

You can also find this episode of the Product Fundamentals podcast on the show website and through all the usual podcast services.

Transcript

Hello friends and welcome back to the Product Fundamentals podcast, episode 11: So What?

Well, you’ve made it! This is the final episode of this season on the history of software development methodology.

We’ve covered a lot of ground: the origins of software engineering, waterfall vs incremental and iterative development, the background of management by objectives and OKRs, the Agile Manifesto and its precursors, alpha, beta, and A/B testing, Agile at Scale and the maybe-death of Agile, DevOps, and even the history of the office and working from home. This episode, I’ll try to make sense of that history and synthesize it into a few conclusions.

I opened this season with the question, why do we make software in the weird way that we do?

I’ve gone through a bunch of mental models for answering that question while working on this project. Sometimes, it’s felt like technological determinism. Other times, I’ve felt the temptation of narrative, the urge to ascribe the things I don’t like to the influence of bad actors.

But if we’re being honest, I think the reason we make software in the weird way that we do is mostly because of an evolving, generally workable and often ungainly, compromise between workers and managers as they both deal with rapidly evolving technology. The “muddy Agile consensus,” as I labeled the dominant de facto methodology of the industry in the preamble episode, is a cobbling-together of odds and ends that each worked in some context, and then got universalized even as the underlying reality changed. Thus we’ve ended up with accumulated bits from different eras and worldviews, and we’ve never really cleared the cache to see what still makes sense.

I’ll be the first to admit that a conclusion of “historical path dependency happened and that’s how we got to today” is not the most satisfying. But to be fair, I did tell you up front that it was going to be a history podcast, and history almost always involves a lot of path dependency and contingency.

Beneath the obvious observation that the present was caused by the interaction of things in the past, though, are some observations or principles that I think can be useful to us as practitioners. I’ll lay out four.

Principles

Exponential improvement keeps it fun

First, continuous exponential technological improvement has been critical to keeping our business from ossifying. Without ongoing technological innovation and disruption, the forces of standardization, routinization, and cost reduction will make software a much less rewarding and fulfilling way to work. Exponential growth is amazing, and we should appreciate it.

Computing has enjoyed exponential growth curves for transistor count on a processor, energy efficiency of processors, bytes of storage on a disk, and the affordability of Internet connection bandwidth. Unfortunately, these exponential growth trends don’t last forever. Dennard scaling (for energy efficiency) and Kryder’s Law (for the cost of storage) have already broken down. Whether Moore’s Law has already fallen or will soon fall has been a perennial topic for at least a decade, but it does have to end someday. Only Nielsen’s Law of Internet Bandwidth appears to be holding steady, and even that’s a bit under its trend line in the last two years.

To be clear, computing can and will continue to improve after these laws break down: marginal sustaining innovations, and occasional big breakthroughs, will very likely still occur in a world without exponential improvements. Economies of scale and production learning curves will continue to yield benefits. Even though the continuous exponential drop in disk storage costs broke down about a decade ago, for example, storage is still much cheaper today per gigabyte than it was years ago.

But from the perspective of a software worker who wants to keep having a well-paying job while doing creative work, the end of exponentiation is concerning. Throughout this season, we’ve seen numerous attempts to standardize the process of making software, turning it into a routine job with one right way to work. That was the goal of the NATO conference in 1968, that was one motivation for Fred Brooks in The Mythical Man Month, that was the goal of the attempts at creating “software factories” in the 1970s, that was the goal of the British and American standards that codified Waterfall as the only way to make software for the government, and so on.

The impulse to standardize generally failed because exponential growth in technical capabilities in a competitive market made a stable equilibrium impossible. Instead, we’ve lived in a rapid tick-tocking between complexity and abstraction in the methods and outputs of software development. It hasn’t been rational or effective to lock down one right way to make software products, however much the bottom-line-optimizing Scientific Management aficionados might have wanted to.

If exponential growth in capabilities really does end, I expect much of the fun – and the remuneration – of software will go with it. As long as there are streams of exponential growth, or if new frontiers of exponential improvement somehow open, those will be the areas to work on in order to have the best time. 

No technological or market determinism

Second, neither technology nor market defines a single way to work. As important as the underlying technologies have been in shaping how we work, I don’t think they’re determinative of a single correct way to do it.

From the 1960s through at least the 1990s, there were thoughtful individuals, teams, and organizations working under a Waterfall model, and other thoughtful individuals, teams, and organizations practicing Iterative and Incremental Development. These were very different ways of working, and while both evolved, they also each maintained their distinctive core ideas, which I’ll touch on soon. Both Waterfall and IID were used at the cutting edge in the 1960s, both faced the bureaucratized corporate markets of the late 1970s and 1980s, and both had to deal with the consumerization of software in the 1990s.  

The existence of two very different methodologies competing for decades, across multiple waves of technological change, says that neither the technology nor the market in which we make software leads definitively to one way to work. One way to work may be better or worse than others in many circumstances, but people can persistently choose different ways of working while facing the same environment. 

This is true of other areas of how we work beyond software methodology. Firms using OKRs and firms using other management systems have stably competed for decades; firms have embraced or avoided work from home for decades. Neither the technology level nor the broader economic environment has forced the industry to follow just one way.

This means that we should remain humble and open-minded in thinking about how we will work in changing conditions. There is probably more than one way to do it, and the fact that a model has flourished for decades does not necessarily mean that it is the optimal one for any single context, let alone for every context.

So, if technology and market are not determinative of how we build, what is?

Ideas and values matter

That question brings me to the third principle I draw from this season: how we work is largely driven by ideas and values.

My claim is that core values and ideas about the world make a critical difference in shaping methodology. In the context of Waterfall vs IID, the essential difference between the camps that motivated their choices was epistemological. That is to say, practitioners of each had very different senses of what was knowable. Waterfall was predicated on the idea that with enough planning, analysis, forethought, and documentation, incredibly complicated problems could be tamed and turned into manageable projects with clear timelines. If we’re smart enough and if we do enough homework first, we can build the right things.

Iterative and Incremental Development is anchored in a fundamentally different worldview. The difference isn’t just “we organize our work into two-week chunks instead of bigger milestones.” There’s a deeper distinction at work. IID practitioners admitted and embraced the fact that they didn’t know the right way to get to the destination at the start. We don’t know quite what’s going to happen. Instead, they had to take small steps, try things, backtrack, and so on, until they eventually discovered the right solution.

Peter Drucker’s Management by Objectives is another fine example of the importance of ideas. Drucker fled Germany in 1933 and had his books burned by the Nazis. Then, when he wrote The Practice of Management in 1954, he argued that businesses must simultaneously balance a plurality of goals, including those that are explicitly pro-social. He insisted that companies organize their workers around broad objectives, rather than specific orders. It’s hard to miss the role of his core values in this theory.

When Andy Grove reworked management by objectives to be increasingly quantified and stratified, he wasn’t obviously advancing some “truer” way to run a business. Businesses were doing fine without the structure of quantified laddered OKRs. Grove was applying his aggressive temperament and analytical engineering brain to the problem, and the modern notion of OKRs was the result. 

The values that practitioners bring to bear are determinative of how we work. In a big economy with lots of uncertainty, I think this should be freeing and exciting. There is not one obvious correct way to work: we can find other ways to do it that reflect what we value, and subject to the pressures of competitive markets, we can succeed.

That said, there’s no denying that some ideas spread rapidly through the software business and become all but monolithic. This brings us to the final principle that I want to draw your attention to:

Fads

Our business is prone to fads. We software people tend to pride ourselves on diversity and free-thinking, but the reality is that we are subject to trend-chasing like everyone else.

Of course, some trends are entirely rational, especially those rooted in the emergence of new technologies. New technologies open new markets or make new techniques possible, and it makes sense to pile in on those. But then there are the cultural and intellectual trends that deserve more skepticism, because they often result in overreach and pushback.

Seeing software as an engineering discipline that can be solved might have been the first trendy brainworm in our industry. In the context of Big Science, the Space Race, and software’s roots in logic and mathematics, mapping software onto engineering was an understandable choice. Certainly, there are elements of software for which the mental model is a good fit. But that mental model also failed; we never found that one right way. And the pursuit of software as a solvable problem caused frustration and failure for many people and organizations for decades.

Before software, Scientific Management was a meme that moved through business. It had its virtues: who can be entirely against pragmatic optimization? But it also had its dark and dehumanizing extremes. Management by Objectives, Matrix Management, Scrum, OKRs, data-driven decision-making, the open-plan office… each of these has its clear benefits. Each has also been adopted rapidly across companies, across entire industries, not because companies had done careful weighing of benefits and tradeoffs, but because a given concept was the new hotness.

What’s the mechanism driving these trends?

The history covered in this podcast points most strongly to the role of influencers, most often in the form of companies, driving the story forward. Over the decades, IBM, DEC, Intel, and Google have been the star influencers of the technology industry. The tendency to chase Google in the post-2000 software industry is especially striking.

Often, of course, companies follow Google because it really does have incredible technology, and by virtue of its scale, is among the first to face new challenges. But the halo effect of very real success in some areas can cause companies to assume everything a leader like Google does is unambiguously correct. Following Google’s lead with scaling technologies like MapReduce and Bigtable has a much tighter logical connection than following its use of laddered OKRs and open bull-pen offices.

It stands to reason that in the near future, the methods used by companies like OpenAI will gain increasing attention. The company’s success is already driving fast-followers to mimic its technological approaches, but if history is any guide, we can expect plenty of attention to be paid to how the company organizes and motivates its workers, how it sets its strategy, how it measures progress, and so on. Some of this attention will be entirely appropriate; some will be a halo effect at play.

For working software people and executives, the reason to acknowledge our tendency to go for fads is that it encourages us to be deliberate about how we work, and to be less afraid of breaking with convention. There is value to understanding what others do, and what packaged methodologies exist in the marketplace, but they’re not obviously correct just because they’re being used elsewhere. A variety of methods have always been possible so far, and we should make active rather than passive choices about how we want to work. 

All right, so those observations were:

  • Continuous exponential growth in capabilities has been critical to dynamism and fun.
  • Neither technology nor market structure enforce a single way to make software.
  • How we work is driven largely by ideas and values.
  • And, our business is prone to fads.

The future

What does the past, and what do these observations, tell us about what will come next?

As I’ve worked on this season through the middle of 2023, two great upheavals have been roiling the software business.

The first has been the large-scale tech layoffs of 2022 and 2023, which as of this recording, have cost some 400,000 tech workers in the US and linked countries their jobs. As part of this phenomenon, firms have reorganized teams and roles, with functions like product management and QA removed, reduced, or recast.

The second is the recent acceleration of artificial intelligence, with large language models increasingly able to take on complex language tasks like preparing documents and writing code. 

What does the history of the software industry tell us to expect from these changes?

Recent layoffs

On its own, the wave of tech layoffs is distressing, but probably passing. In percentage terms, the financial and employment consequences of the dot-com crash in 2000 were much more severe than what we’ve seen in the post-pandemic period. 

We discussed why tech firms and venture capital are so sensitive to interest rates in Episode 7, and the current layoffs were driven largely by the recent rise in American and European central bank rates. I certainly won’t hazard a guess about when or if those rates will drop, but the industry will adjust. 

In the dot-com crash, the ideas of the “first mover advantage” and “if you build it, they will come” were found wanting, and as a result, the 2000s gave us the concepts of customer development, product-market fit, the MVP, and the company as a learning machine. In 2023, the current wave seems much less disruptive. As I record this, the NASDAQ index is about 9% below its late-2021 high. The tech giants and popular consumer tech firms, with the possible sui generis exception of Twitter, all seem to have survived the storm. With a much lower magnitude of disruption, the magnitude of resulting change will also likely be smaller.

That said, a prolonged period of high interest rates will have lasting impact. We can expect that venture capital will be much harder to raise for the next few years, so while some unemployed techies will start new firms, they will likely have a much greater focus on near-term profitability.

Existing firms will shift focus to profitability, cutting investments in long-term speculative projects. “Skunkworks” are likely going out of fashion for a while. Companies will experiment with removing or greatly reducing roles where they can. Workers in product management, project management, QA, and other engineering-adjacent functions will likely find themselves stretched across more engineers. Engineers may be expected to pick up more job responsibilities to compensate.

This relative scarcity can drive fruitful new innovations in how we work. If my claims are correct that markets are not deterministic of methodology, and ideas do drive how we work, then we may be entering a time of fresh experimentation as firms organize themselves more purely around profitability and capital efficiency. I don’t know what those innovations will be, but it will be interesting to see.

AI

Speaking of unknowns, I suppose I need to address the AI elephant.

The world’s best minds are nowhere near consensus on what to expect, so I certainly won’t pretend much certainty. But if we look at current technologies and imagine forward one iteration, generative AI primarily seems like a force multiplier and productivity booster, allowing individual developers and smaller teams to do more. It seems likely that companies with large existing codebases will adopt these technologies for incremental gains, making their existing developers more productive thanks to code autocomplete tools, improved debugging, and other co-pilot functions. But for the near-term future, existing companies will likely have such complicated existing code bases that they will rely heavily on their existing human engineers, because the humans can see the big picture and ensure the new pieces fit together with the existing systems. At this point, AIs still struggle to handle that much context.

Thus, I expect existing companies will benefit from productivity gains, but that their core team structures and work patterns will be resilient for some time to come. People will still write spec docs in basically the same way we have since Waterfall took hold in the 1960s, hold the same scrum meetings and sprints we’ve had since the 1990s, optimize for OKRs as we have since the 2000s, and so on.

More interesting to me is how firms that are, for lack of a better term, “AI native” will work. As AIs transition from solving narrow problems to larger problems, instructions like “Write a messaging app” will become more and more possible. At least at first, the code may have redundancies and dead-ends, it may not run as efficiently as it might, and so on. But the history of software shows us that abstractions can be incredibly valuable productivity boosters, as ever-stronger technical foundations make inefficiencies relatively unimportant.

I decided not to cover Clayton Christensen’s 1997 book The Innovator’s Dilemma in this season, though if there’s a season 2 of this podcast, the book will almost certainly get proper coverage. For now, I’ll just give the lightning summary, which is that new technologies become “disruptive” to existing incumbents and spawn new high-growth companies when they offer a way to deliver a radically cheaper product to customers, even if it’s inferior to the dominant company’s product in most ways. It just needs to be “good enough” in most ways, and better in one critical way. In Christensen’s world, that axis of advantage is almost always cheaper price.

AI-native software companies that find ways to build “good enough” software at a fraction of the price of incumbent companies with large expensive software engineering organizations could be a major disruptor to the software industry, fundamentally changing how we work. AI may not be ready to work at the frontier of what’s just barely possible for some time, but it doesn’t need to be. All it needs is to be good enough for a low-end slice of the market large enough to keep some small companies alive and innovating. Some version of that disruption seems almost inevitable.

Of course, speculating about the future of AI is incredibly fraught. In twenty years, maybe we’re all living in an AI-powered utopia. Or an AI-powered dystopia. Or we’re all nanobot food. Or maybe this was all much ado about a marginal technology that peters out quickly. It’s impossible to say, and I’m very aware that just two or three years ago, some smart (if idealistic) people were calling the crypto-fueled Distributed Autonomous Organization the future of everything. Predictions are dangerous, especially about the future. Here be dragons.

So what?

So with all this, what’s a practitioner to do?

Way back in episode 0 of this series, I said it was not about how to be a PM or an engineer, and I wouldn’t tell you how to run sprints or anything like that.

So, I won’t, in part because that sounds both boring and presumptuous of me, and in part because there is no right answer.

That’s the big takeaway.

We’re all improvising. No one ever said “here’s Waterfall and I think it’s good” – Winston Royce never used the term, and while he did think he knew some good practices, he was an incrementalist later in his career. The Agile Manifesto is 68 words about general values, not a prescription for a specific way to work. After the Manifesto, we had a roughly five-year Cambrian explosion of books about “how to Agile,” followed a decade later by another explosion of packages for “how to Agile at Scale.” So, we’re all winging it, in a sea of ideas, some good, some terrible, and plenty in between.

As a practitioner, do what makes sense. Try out new technologies, because a few times each decade, something comes along that makes a lasting impact. Drop stuff that doesn’t make sense, because most of our ideas entered the canon when computers were orders of magnitude weaker than they are today. Many of them predate the consumer Internet and even the personal computer.

The muddy Agile consensus is a lazy compromise. The consensus preserves very old ideas about wise central planners and bureaucratic elites that don’t stand up to serious scrutiny. The consensus lets individual contributors off the hook of actually engaging with their customers. The consensus lets managers optimize for their chosen measuring sticks rather than solving user problems. It creates incentives for teams to haggle over their meeting schedules rather than worrying about deliverables.

And, it’s fine. Lots of big successful businesses were built on the back of the muddy Agile consensus. It’s the common grab bag of shared null hypotheses that let us communicate with one another without reinventing the wheel and haggling over every point of process.

But some null hypotheses were made to be rejected, at least in some settings.

So, my hope is that this series gave you the context to critically examine how you work, to see where ideas and practices came from, and to understand the problems they were meant to solve or the benefits they were meant to create.

I hope you now feel a little better equipped to double-down on the good stuff, and to challenge the things that aren’t working. They’re not preordained. Maybe you can do better.

Wrapping up

And with that, I’m going to close this first season of Product Fundamentals.

I want to thank you for listening, and I want to doubly-thank those of you who have reached out with questions, corrections, and feedback. This has been a passion project that I’ve wanted to take action on for a long time, and I appreciate you listening along.

I’m not sure if or when there will be a proper second season. I’d like there to be – there are a lot more interesting things to learn about! – but this project was a solid part-time job for the last four months. I need to take a break and clear some real-life backlog before I can swing back around to this.

That said, I hope you’ll stay subscribed to this feed, as I expect to post some one-offs here in the near future, including a presentation I gave on the history of product management to the Tokyo PM community. I’ve got a handful of other ideas kicking around too.

So, until next time, thank you again for bearing with me and listening in. I’ve enjoyed learning and sharing this material, and I hope you’ve found something valuable for you in it too.

As always, your comments and feedback on this episode are very welcome. You can find a transcript, links to sources, and ways to reach me on the show website at prodfund.com. 

And if you enjoyed this season, and you want to hear more, do me a favor and share it with someone you think would enjoy it too. Those download numbers are a powerful motivator.

Thank you very much for listening.