
ProdFund 1.4: Waterfall Ascendant

Waterfall simultaneously thrived and struggled as the early commercial software industry wrestled with professionalization -- and then got locked in by government standards in the 1980s.
A pixel art image of heavy mainframe computers in a steep river valley, dominated by a large waterfall.

Product Fundamentals is a podcast dedicated to spreading the core knowledge software product people need in order to succeed. Season 1 is structured as a history of software development methodologies.

---

This episode builds on our earlier discussion of Waterfall in Episode 1, covering Waterfall's rise to its dominant position over the early commercial software industry. We'll find out what it was like in day-to-day practice, learn how it became the official way to make software on both sides of the Atlantic, and see that old-timers have been complaining that kids these days don't know how to code since at least the 1970s.

The audio is embedded below, and the episode transcript follows.

You can also find this episode of the Product Fundamentals podcast on the show website, on YouTube, and through all the usual podcast services.

Transcript

Hello friends, and welcome back to the Product Fundamentals podcast, episode 4: Waterfall ascendant.

In this season, we are tracking the evolution of how we came to make software in the weird way we do, from the earliest origins of our methods, through to today.

So far, we’ve tracked the two competing software development methodologies of the early decades of the industry: the top-down documentation-driven planning-heavy Waterfall, and the lighter iterative and incremental development (IID) approach. Plus, we’ve looked at the management and measurement techniques of the late industrial and early software age, from Frederick Taylor’s scientific management to Drucker’s management by objectives, and Andy Grove’s objectives and key results.

In the world of 2023, “Waterfall” is largely understood as an obsolete or failed concept, while descendants of IID like “Agile” and “Lean” are held in much higher esteem. But from the 1970s through the mid-1990s, Waterfall was the dominant paradigm for software development. Today, we’ll explore what Waterfall meant in its prime, why it struggled, and how it locked in a dominant position in the software industry.

“The Software Factory”

To begin with, let’s understand the shifting place of software in society and the economy in the 1970s.

As the Space Race and related missile and aerospace projects of the 1960s subsided, the software industry went through a pronounced shift in orientation. The pressure to compete with a rival superpower, both for prestige and for security, diminished in the early 1970s, as the United States and the Soviet Union took a number of steps to reduce tensions. At the same time, the American public soured on the Vietnam War and became more distrustful of government and skeptical of Big Projects.

The culture of Big Science optimism was giving way to an era of business pragmatism, and software was increasingly expected to solve business problems in an economically efficient way. 

Perhaps most emblematic of this shift was the idea of the software factory, likely first proposed at General Electric in 1968 and then tinkered with by many companies throughout the 1970s (source).

The rationale was simple: As with other technical processes, computing was maturing, and programmers should graduate from the world of the “craft shop,” where they built bespoke software on demand, to the industrialized factory, where standardization and economies of scale could take hold.

And by factory, they really did mean one work site with thousands of programmers, organized into groups of specialist workers that did repetitive tasks on similar work items.

There’s a sensible enough rationale at work here.

  • At this time, making software is very expensive. Hardware is expensive, labor is expensive. Anything that brings down costs is good.
  • Next, quality is highly variable across engineers, and there are just too few engineers. But industrial processes had solved the problem of replacing craftspeople with factory workers, and had increased output while lowering cost in the process. Surely this could be done for software, effectively de-skilling the software profession into something far more people could do.
  • Finally, everything in software is bespoke, and projects often rebuild everything from scratch. Components (including software libraries) aren’t standardized. This makes it hard to predict output, hard to audit workers, hard to control costs.

So, from a management perspective, we can understand the appeal of the software factory. If software could be made more like Ford Model T automobiles, and less like bespoke hand-crafted luxury items, then the business could be much more scalable, reliable, and profitable.  

Numerous companies gave it a shot. In Japan in 1969, Hitachi became the first company to attempt this model. Other Japanese companies followed suit. In the United States, examples include System Development Corporation (previously a division of the Rand Corporation), and General Telephone and Electric (GTE). 

Through these software factories, firms sought efficiency in several ways. The firms would establish standards for everything: a standard set of software tools for all developers to use; standard libraries and reusable software components to use across multiple projects; a standard methodology to implement a project, which was generally some variant of Waterfall. In short, all the greatest hits of mass production techniques would be brought to bear on software.

As you might imagine, many workers rejected these attempts at greater control and de-skilling, and most software factory initiatives struggled and failed, especially in the US and Europe. Given the power of scarce engineering labor to set their own terms, and the rapid change in technology, it’s not hard to see why attempts to pick winning technologies and build durable processes around them largely failed. That said, it should be noted these efforts were at least a bit more durable and successful in Japan than they were elsewhere.

So in the 1970s, the industry is eager to move in the direction of a predictable standard, and leaders will find Waterfall an attractive template for that standardization.

The Mythical Man-Month

To get a sense of the experience of working with Waterfall in the 1970s and 1980s, I read a number of practitioners’ accounts of their major projects and experience. The most influential of these was certainly The Mythical Man-Month, a collection of essays by Fred Brooks Jr., which is still widely read by software engineers today.

Brooks was a computer scientist at IBM, who after working on IBM’s line of early supercomputers, became the manager for the operating system of the IBM System 360 mainframes that we discussed in episode 1. He had extensive experience with both the hardware and the software sides of the System 360, and he realized that managing the development of large software projects was much harder than managing hardware projects. His 1975 collection of essays sought to answer the question, “Why?”

As an aside, The Mythical Man-Month, especially its first few essays, is the best-written material that I read for any part of this series, including everything coming up in the future about Agile and the startup wave. If you haven’t read it yet, the first hundred pages or so are well worth your time for the simple love of software that Brooks exudes, as well as for the time capsule it represents into how software once was made. The guy has flair, evocatively comparing software development to fighting to get out of a tar pit, and giving voice to the artistic and creative impulse many of us feel building software. The back half of the book drags as it gets caught up in specific technical challenges of 1970s software, but the first half is a worthy pleasure-read.

All right, back to the narrative.

The primary problem with software, Brooks concludes, is that it just takes too long to deliver. Estimates are always too optimistic; development is inevitably slower than expected.

Brooks is hardly alone in this conclusion – the universal complaint of the 1970s and 1980s was that software development was just too slow. To some extent, this is an evergreen complaint – all big projects are always late, right? But Brooks and many others bemoan that software is really late. And subsequent studies of the same methodologies in the 1980s and 1990s would back up their claim with data – perhaps less than 10% of large software projects were being delivered on-time and on-budget, and even then, a project typically had to miss its target by more than 20% before it was even counted as “over budget” or “over time.”

Why was it so slow?

Brooks pointed to a number of culprits –

  • There’s the perennial optimism from programmers, who tend to be young, inexperienced, and optimistic by nature.
  • There are what he calls “Gutless estimates,” in which programmers don’t want to give bad news to management, and so round down their estimates and fail to report delays.
  • There are communication and coordination costs, which scale with project size and complexity.

This last phenomenon gives its name to the book – The Mythical Man-Month is the misconception that a project can be finished in some fixed number of man-months; that is, months of work by a programmer. By doubling the number of programmers, the reasoning goes, the number of months until completion is cut in half.

Brooks argues vehemently that this mechanism does not work, and deliberately coins what he calls Brooks’ Law: “Adding manpower to a late software project makes it later.”

This is true for a number of reasons. Adding additional developers to the project imposes training responsibilities on the existing team members, reducing their productivity until the new members ramp up. But more insidiously, new members impose ongoing drag as the number of communication channels increases. A five-member team has 10 one-on-one communication paths between members. A ten-person team has 45 one-on-one communication paths. Maintaining each communication path takes time, reducing everyone’s time available for core development work.
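
To make that scaling concrete, here’s a quick back-of-the-envelope sketch (my own illustration, not from the book): the number of one-on-one paths in a team of n people is n(n-1)/2, which grows roughly with the square of team size.

```python
# Illustrative only: the pairwise-communication arithmetic behind Brooks' point.
# A team of n people has n * (n - 1) / 2 distinct one-on-one communication paths.

def communication_paths(n: int) -> int:
    """Number of distinct pairs (one-on-one paths) in a team of n members."""
    return n * (n - 1) // 2

for team_size in (5, 10, 20, 50):
    print(f"{team_size:>3} people -> {communication_paths(team_size):>5} paths")

# Output:
#   5 people ->    10 paths
#  10 people ->    45 paths
#  20 people ->   190 paths
#  50 people ->  1225 paths
```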

Brooks is aware that not all developers are created equal – he likely helped popularize the concept of the “10X engineer” that we still hear, often obnoxiously, bandied about. For the curious, he was citing 1960s academic research that had given a number of programmers the same problem to solve, and found spreads of 5 to 25 times between the best and worst programmers, as measured by time to write a program, time to debug it, program size, and execution speed when run.

But Brooks concludes that even with the most badass team of ninja rockstar 10X engineers, large software projects are simply too big to be delivered by a small team. It would take years to develop, test, and deliver the program, so project groups needed to be bigger. That meant they needed to find a way to coordinate hundreds of engineers.

Conceptual integrity

And tight coordination really is the key for Brooks. Much of his book centers on solving the problem of “conceptual integrity.”

He never gives us a pithy one-line definition of conceptual integrity, but in short, it’s simply the characteristic of a program in which all the pieces fit together well, and the purpose of the project is neatly achieved. He writes,

“I will contend that conceptual integrity is the most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have one that contains many good but independent and uncoordinated ideas.”

He continues,

“The dilemma is a cruel one. For efficiency and conceptual integrity, one prefers a few good minds doing design and construction. Yet for large systems one wants a way to bring considerable manpower to bear, so that the product can make a timely appearance. How can these two needs be reconciled?”

Brooks’ answer is an extension and refinement of Royce’s Waterfall model.

Borrowing from fellow computer scientist Harlan Mills, Brooks advocates for “The Surgical Team,” which is a model for dividing up a large project into separate parallelizable pieces. The overall project has a chief architect, who is responsible for designing the overall structure of the program, and farming out its pieces to smaller teams. Each of those smaller teams has its own architect, who is responsible for designing that part of the program, consistent with the rules and requirements provided by the chief architect.

This org structure of teams aligns reasonably well with the “top-down” program design concept that Brooks pulls in from computer scientist Niklaus Wirth. In this approach, the entire software program is conceptualized as a collection of large somewhat independent modules, each with a clear responsibility in the system. Each of those modules is further decomposed into smaller components recursively until a comprehensive design is achieved. Here we’re seeing an early nod toward the collection of teams at a modern software company, each of which owns separate micro-services that perform some part of the work needed in order to provide a coherent overall product experience to the user. 

Brooks is blunt about the top-down nature of the project.

“If a system is to have conceptual integrity, someone must control the concepts. That is an aristocracy that needs no apology.”

He continues a bit later,

“Conceptual integrity does require that a system reflect a single philosophy and that the specification as seen by the user flow from a few minds. Because of the real division of labor into architecture, implementation, and realization, however, this does not imply that a system so designed will take longer to build.”

In the same mold as Winston Royce, Brooks is clear that documentation is the heart of the project. He describes the user manual as “the chief product of the architect,” which will then inform everything that the implementers will build. However, there is a touch more flexibility here: Brooks’ architect character doesn’t define every detail of implementation, or every test case. Instead, that responsibility is delegated to the subordinate teams, each of which does seem to have some latitude to make choices within their apportioned space. 

Brooks has a few practical recommendations to ensure conceptual integrity is maintained.

He writes that during the 1960s, it was essential that every project contributor had a printed-out project workbook, likely hundreds of pages thick, that contained an always up-to-date design of the system, and records of every decision that had been made. Every morning, updated pages would be delivered to each team member, and each team member would review the new pages while integrating them into their copy of the project workbook. With new technologies available in the 1970s, he recommends using a digital workbook, shared on a network and kept up-to-date in real-time. Essentially, he presages the ubiquitous use of Google Docs for specifications.

Also of some interest is his use of meetings. The first should sound familiar today. It is

“a weekly half-day conference of all the architects, plus official representatives of the hardware and software implementers, and the market planners.”

Basically, the team leaders get together each week, discuss issues, and brainstorm solutions.

The second, which Brooks practiced at IBM, was an annual two-week-long event he calls a “supreme court,” at which anyone on the team could raise objections or alternatives to the project direction for review and final resolution by the chief architect.

Brooks retains the centrality of documentation from Royce’s system. I’ll spare you the full list of documents he thinks every project needs, but it’s everything you’d expect from a heavily-planned project: requirements, headcount plans, budgets, deadlines, and so on.

One slight head-nod toward a more iterative development approach is Brooks’ insistence that all projects will have a throw-away first version. Royce made it a recommendation that teams budget time to build a disposable first draft. Brooks says it’s inevitable.

“The management question, therefore, is not whether to build a pilot system and throw it away. You will do that. The only question is whether to plan in advance to build a throwaway, or to promise to deliver the throwaway to customers. Seen this way, the answer is much clearer. Delivering that throwaway to customers buys time, but it does so only at the cost of agony for the user, distraction for the builders while they do the redesign, and a bad reputation for the product that the best redesign will find hard to live down.

Hence plan to throw one away; you will, anyhow.”

But that’s not it

So those are Brooks’ recommendations, and they give us a good sense of what the day-to-day experience of the 1970s software engineer was like. Indeed, Brooks represents very nearly the state of the art. He’d led one of the most important projects of the previous decade, and was now a luminary academic. And in that context, the way to work was to have an elite few experts draft all-encompassing documentation, and then to delegate well-defined chunks of work through a regimented team structure to lower-level contributors. While there might be the occasional throw-away prototype, projects were still mostly linear one-shot exercises in executing the plan.

It seems like the industry is a bit stuck in a rut. They think they’re too slow, they’re piling on lots of top-down process overhead to speed things up, and yet we’ve already said that the Waterfall paradigm would dominate the industry until the 1990s. Intuitively, this all seems contradictory. What’s going on here?

One missing piece is understanding where in the process things got slow.

Testing is the problem

From a modern perspective, it at first struck me as surprising that essentially everyone writing about the Waterfall age agreed that coding was not the slow part of software development.

In 1975, Brooks wrote that the primary coding work took just one-sixth of a project’s time budget. Marvin Zelkowitz, another prominent computer scientist of the period, wrote in 1978 that just 20% of project time, excluding maintenance, went to coding. Writing in 1998 while looking back on Waterfall development, Walker Royce – Winston Royce’s son and a software leader in his own right – estimated that 30% of time went into primary coding (source).

Despite its major investment in planning and documentation, Waterfall’s defenders contend that planning wasn’t the cause of delays either. Instead, the real time sink was testing and integration. Brooks is representative of the various writers I’ve mentioned in saying that module or component testing took 25% of effort, and integration testing took another 25%. That’s fully half the project’s time budget going to testing.

Zelkowitz takes it a step further – not only does he think nearly half of work time before delivery goes into testing, but he estimates that maintenance after delivery consumes even more time than all of that pre-delivery testing.

Why was testing the problem?

Distribution and integration

One factor was technology. Distributing software was not easy or cheap! In many cases, it involved shipping hardware from the vendor to the customer – sometimes a complete built-to-order computer system, sometimes just a storage drive. While it was possible to deliver software using a phone connection between distant computers, this process was slow, expensive, error-prone, and required deliberate effort and careful support. There were no seamless software updates.

In this context, investing heavily in testing before the software is delivered makes a great deal of sense – after delivery, bug discovery was mostly dependent on the user, meaning unhappy customers and expensive-to-deliver updates. Since there was no practical way to push a bug fix to many clients who were running the same software, customers who had purchased the same product would quickly end up with different versions, as different clients accepted or rejected different patches. Thus a single software package might have dozens of variants active in the wild, and the vendor needed to support all of them. The problem also compounds. Brooks writes,

“The fundamental problem with program maintenance is that fixing a defect has a substantial (20-50 percent) chance of introducing another. So the whole process is two steps forward and one step back.”
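
To get a feel for how quickly that compounds, here’s a toy calculation of my own (not from Brooks; the probabilities are only illustrative): if each fix has probability p of introducing one new defect, and those new defects in turn need fixing, then clearing a backlog of D defects takes roughly D / (1 - p) fix attempts on average.

```python
# Illustrative only: a toy model of "two steps forward and one step back."
# Assume each fix independently introduces a new defect with probability p.
# The expected total number of fixes to clear D defects is then the
# geometric series D * (1 + p + p^2 + ...) = D / (1 - p).

def expected_fixes(defects: int, p_regression: float) -> float:
    """Expected fix attempts to clear a backlog under the toy model."""
    return defects / (1 - p_regression)

for p in (0.2, 0.35, 0.5):
    print(f"p = {p:.2f}: clearing 100 defects takes ~{expected_fixes(100, p):.0f} fix attempts")

# p = 0.20: ~125 fix attempts
# p = 0.35: ~154 fix attempts
# p = 0.50: ~200 fix attempts
```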

Catching bugs before release was the best way to avoid this trap.

Finally, integrations were finicky! There was simply less standardization, with fewer shared layers of abstraction across systems, meaning that the act of integrating the new software with a client’s existing system was more liable to encounter issues and require customizations to make it fit.

Fixing an issue discovered during integration was also expensive! In 1987, computer scientist Barry Boehm wrote that 

“Finding and fixing a software problem after delivery is 100 times more expensive than finding and fixing it during the requirements and design phase.” 

Others of the period echoed the sentiment, even if their estimates of the multiplier varied. A bug discovered at delivery time meant potentially significant reworking of interconnected systems.

Languages

Besides distribution challenges, it’s worth remembering that the act of writing good code is just harder at this time than it is today. Programming languages are lower-level than today’s languages. All else being equal, that means a programmer has to write many more lines of code to do the same job in assembly or Fortran than in a more modern language like Java or Python. Plus, the programmer interfaces to navigate and manage large code bases are much less sophisticated, leading to mistakes or overlooked connections. There’s just more room for human error. Thus, there’s a higher value to testing.

But why stick to Waterfall?

So far we’ve discussed how Waterfall worked in practice, that the biggest pain point was speed of delivery, that speed of delivery was constrained by the need for heavy bug testing, and that the technology of the time made heavy bug testing important.

But attentive listeners may be wondering, given this discussion of Waterfall’s struggles with speed and budget, why so many businesses stuck with the Waterfall methodology. After all, we discussed the iterative and incremental development approach in Episode 2, which had been successful in a number of high-stakes projects.

So, why did the industry stick with Waterfall?

Let’s discuss three major reasons:

  • The Principal-Agent Problem, 
  • Matching expectations, and 
  • Regulation.

The Principal-Agent Problem

The idea of the Principal-Agent problem, also called the Agency Dilemma, emerged from economics research in the 1970s, and has gone on to become one of the “big ideas” of the social sciences in the late 20th century, cutting across economics, organizational design, political science, law, and more.

The core idea is so simple as to seem almost trivial. Its building blocks are basic: principals are people with resources and objectives they want to achieve; agents are the people hired by a principal to help the principal achieve an objective. In the context of politics, the voting public is the principal, and an elected politician is the agent. For our purposes – thinking about software development – customers are principals with a problem to solve and vendors are agents hired to solve the problem; then within a software vendor, the senior leaders or owners are the principal, and the workers are the agents.

The problem in the Principal-Agent problem arises because the actors’ goals are almost never perfectly aligned. While your company hired you to build software, and you might even feel intrinsically motivated to build software, management wants to pay less, and you want to get paid more. Management wants to ship as fast as possible; you might not want that. You’re an agent of your employer, but that doesn’t necessarily mean you’re going to do everything they want, or that they want everything you want.

Principal-agent problems are everywhere, but they’re especially acute when the stakes are high. While a principal is always taking some risk by working with an agent, oversight also has a cost, so principals let little stuff slide. This is why you might have permission to expense books or cheap software tools on your own, but anything above some fixed price needs manager approval. That price is the tipping point where your company, the principal, thinks it’s worth the overhead required to monitor its agents.

So, obviously, if you’re working on a big budget project that your manager really cares about, your manager is likely to get pretty involved, demanding clear plans and status updates. 

This dynamic of principal-agent problems helps to explain how Waterfall became so sticky for the pre-consumer software business.

Throughout the 1970s and 1980s, software projects remained predominantly very expensive systems, built with at least some customization for each client. While the transition from the mainframes of the 1960s to the minicomputers of the 1970s and 1980s did see the price of computers drop, they remained significant investments: the centralized minicomputers used for heavy tasks could still cost well over $100,000 in today’s inflation-adjusted terms, and early personal workstations that emerged in the 1980s could run tens of thousands of dollars in today’s terms.

As a result, the customer (as a principal) would demand specific commitments and a high degree of oversight from the vendor, acting as an agent of the customer. The customer doesn’t trust that the vendor’s interests are well-aligned with the customer’s, and the stakes are high because of the high cost of the purchase. In order to manage the risk associated with the large financial investment being made in a new bespoke or customized software system, the customer would require clear deadlines and exhaustive lists of very specific requirements. Walker Royce would describe a large software project for a US Air Force missile warning system in the late 1980s as including 2,000 enumerated “shall statement” requirements, such as, “the system shall be resilient to any single point of failure in hardware.” Enumerating 2,000 requirements seems rational for the customer when they’re contracting to an external vendor, even if it vastly increases the weight and complexity of the project.

Management, in turn, acts as a principal overseeing its workers, who are agents that are only partly-trusted. With each customer order so expensive and fulfillment timelines so long, management is at real risk if anything goes wrong. Thus management is compelled to tightly oversee the software development process. Extensive documentation, schedules of milestones, comprehensive test cases, prescribed and proscribed technologies, and so on, make sense when management is more risk averse than the workers.

Thus, the Waterfall methodology was a good fit for most risk-averse companies and management teams dealing with long expensive projects, compared to the less-structured exploratory approach favored by incremental and iterative development. “Trust us, we’ll get there, even if we’re not sure how yet” is just a much harder pitch to make to customers or managers.

Matching expectations

The second force supporting Waterfall was the drive among practitioners – or at least, among practitioners who went on to write books about software management – to turn software development into a proper respectable engineering discipline. 

I quoted from Harlan Mills at the NATO Conference in Episode 1, where he compared the state of software design in 1968 to amateurish building construction. We discussed a similar impulse in the attempts to create software factories at the start of this episode.

Whatever its merits, the argument for imposing consistent, repeatable, scientific structure on software development had a tendency to take on a real “kids these days” tone. For some of the most remarkable complaining about kids these days that I’ve ever seen, let’s read from the 1980 essay “Software Methodology and Practice,” co-written by nine computer science luminaries, including Mills.

They write that the US is at risk of losing its edge over the Soviet Union because of a lack of clear standards for software development. Two of the underlying problems: the poor quality of engineers, and the lack of scientific rigor.

“In the early days of computing, many talented people from the nation’s great universities and laboratories entered the field, and brought or developed sound and enduring ideas in those early days. Because of the astounding growth of data processing and resulting need for personnel, however, the bulk of those who came later were less highly educated and motivated. Some 500,000 people are managing or being managed in data processing today. Most of them have come to the field laterally, without university training in computer science or software methodology. And they have received only spotty, on-the-job training. The industry has seen managers with no knowledge of the programming process manage large projects, and programmers maintaining complex programs with only three weeks training…Scientific journals have played a relatively small role in transmitting scientific and engineering ideas, because the personnel as a whole are not educated enough to understand them.”

They write that computer hardware is a better discipline, because its practitioners respect the need for scientific rigor. Unfortunately,

“... [I]t is relatively easy to teach a person with no scientific background how to understand a very small program… The simplicity and ease with which one could be taught to understand a small, simple program led to the idea that one did not have to have a scientific, mathematical background in order to be a professional programmer. Moreover, how could science be helpful to the average programmer when the application upon which he usually worked was not scientific, but industrial, financial, or administrative? This lack of understanding of the need for a scientific basis and discipline for programming, this misconception that programming is easy, has led to the sloppy and wasteful practices one finds in the data processing industry today.”

Yikes! Kids these days! No wonder we need software factories – we’ve got to keep the amateurs in line and make sure they don’t do too much damage with their non-scientific ways!

SSADM

Fortunately for those who wanted more formalization in software, governments did eventually intervene, establishing standards.

In 1980, the UK government commissioned a management consulting firm to draft a standardized methodology for software design in the British public sector. The resulting standard, called Structured Systems Analysis and Design Method (with the unfortunate acronym SSADM), became the rule for how all government software should be developed in Britain. Private contractors then adopted SSADM in order to stay compatible with government requirements. Through these channels, SSADM’s standards spread and became a strong default approach across the British software industry, and from there influenced developers in linked countries as well.

SSADM was a very Waterfall-y approach. It is made for large projects, driven by a single owner. It mandates an initial feasibility study, followed by a study of the status quo through interviews and questionnaires for stakeholders. Then the owner prepares multiple options for solutions and presents them to users for feedback. Then the owner generates a thorough requirements specification doc. Then the owner generates a large number of options for specific technical implementations, and one is chosen. Then the owner designs the specific logic of the application. Finally, a full technical spec document is delivered to engineers, who will finally implement the design.

While there are bits we might see as forward-looking here, such as the explicit interviewing of stakeholders and users, the process remains built around defining a single fixed long-term plan that will be implemented in one shot. It’s just adding more stuff up front.

DOD-STD-2167

The UK was not the only country to impose a heavy standard on its software industry.

As in all things related to the early days of software, the US military plays an important role in the ascendance of Waterfall. In 1985, the US Space and Naval Warfare Systems Command published Department of Defense Standard 2167, the military standard for Defense System Software Development.

This standard laid out the practices the US military expected its coders and contractors to follow in developing software. These rules applied to procurement of software from outside vendors, meaning that if you wanted a lucrative DoD software contract, you needed to follow these rules. Since the military was such an important client, the effect of Standard 2167 was to push a certain way of doing software onto the industry.

That certain way was, of course, Waterfall. It is consistent with the approach to large software projects that Winston Royce described in his 1970 article, which makes sense – Royce was a leader at TRW, a major defense contractor. 

When the DoD standard mandates that all software should be developed through the steps of Software Requirements Analysis, Preliminary Design, Detailed Design, Coding and Unit Testing, Computer Software Component Integration and Testing, and Computer Software Configuration Item Testing, Royce’s thinking is clearly at play. 

But Standard 2167 heavily formalized the process. While Royce’s article was about six pages of text and another five pages of pretty accessible diagrams, the DoD standard ran 93 pages, mostly dense text laden with bespoke acronyms and references to numerous other DoD documents. It never uses the word “Waterfall” or refers explicitly to Winston Royce, but this standard is the embodiment of a heavy, linear, single-shot waterfall process. It prescribes the exact process to be followed in building software, the exact set of documents to be produced, the exact customer input cycle to be followed, and so on. By my count, any software project following the standard should generate 37 documents, in addition to the code itself.

While the authors of the standard did note that software development was an evolving field and that the standard should be “selectively applied and tailored to fit the needs of each program,” in practical terms, this wasn’t likely to happen much. Aside from that brief mention in the foreword, the rest of the standard is entirely prescriptive.

Essentially all of Royce’s improvements to Waterfall got lost in formalization by the DoD. Gone is the idea of building a prototype and throwing it away. Gone, too, is the notion of feedback between steps in the sequence; the development process becomes strictly linear. The heavy documentation remains, but any sense that documentation isn’t enough to ensure success is gone.

Like the British SSADM, the exhaustive details of the DoD standard would have been stifling for teams that worked under it. Yet it had the imprimatur of the US government, meaning that it not only directly set the rules for military contractors, but also became a default for many companies outside the defense industry looking for best practices to follow. The result was that these standards pushed a particularly demanding, top-down version of Waterfall across the industry, leaving a lasting mark. 

Wrapping up

The codification of Waterfall into these government standards in the early 1980s was a fateful moment for the software industry. Throughout the 1970s, exponential improvements in processors, storage, and energy efficiency had continued their relentless pace, such that smaller and cheaper computers were becoming ever more feasible. The “1977 trinity” of fully-assembled personal computers brought the Apple II, the Commodore PET, and the TRS-80 to market. While these machines were still expensive toys for early adopters – the Apple II cost the equivalent of $4000 today and had little utility at launch – they were early signs of the coming wave.

It’s easy now to see these standards as foolish. But let’s not forget the principal-agent problem. The public rightly fears corruption, nepotism, or plain poor judgment in government spending. We want confidence that our money is being well-spent. Our agents in the government bureaucracy want to show that they take the public’s trust seriously, so they draft exhaustive explicit criteria for spending to demonstrate their objectivity. The hidden cost, though, can be lower-quality or more-expensive output than alternatives that didn’t check all the process boxes.

By locking down the “right way” to build software as a necessarily slow, heavily-documented, single-pass process for big companies with serious resources, SSADM and DOD Standard 2167 served to limit experimentation in a rapidly evolving field, and planted the seeds of failure and of an inevitable, passionate rejection.

That’s all for this episode. Please join me next time as we cover how the new ideas, new technologies, and new customers of the 1990s would undermine this Waterfall equilibrium and set the stage for what comes next.

As always, your comments and feedback on this episode are very welcome. And I would like to thank everyone who has sent thoughts and feedback already; you’re helping me make a better product, and I appreciate it.

You can find an episode transcript and links to sources at prodfund.com. And if you like this series, and you want to hear more, do me a favor and share it with someone you think would enjoy it too.

Thank you for listening.