21 min read

ProdFund 1.9: Agile at Scale

Agile started as a "grassroots labor movement" for "organizational anarchists." By 2018, a third of its founders had disavowed their creation. What happened?
[Episode art: a modernist midcentury illustration of a giant Egyptian pyramid under construction by a modern construction crew.]

Product Fundamentals is a podcast dedicated to spreading the core knowledge software product people need in order to succeed. Season 1 is structured as a history of how we make software.


As the Internet grew in the 2000s, it drove the creation of far more large and complicated software organizations than the industry had ever seen before. This posed hard new questions, leading to both a crisis of confidence – "Agile is Dead!" – as well as to some fresh green shoots in the form of DevOps.

The audio for this episode is embedded below, and the episode transcript follows.

You can also find this episode of the Product Fundamentals podcast on the show website and through all the usual podcast services.

Transcript

Hello friends, and welcome to the Product Fundamentals Podcast, Episode 9: Agile at Scale.

Last time, we discussed the evolution of testing and quantification in software development, which I described as a consequence of the scale of the torrent of data generated by the consumer Internet.

But the growth of the Internet drove scale in other ways. The sheer size of software organizations took off, as incumbents across industries spun up software organizations to keep up with the changing times, alongside fast-growing Internet-native companies. By the mid-2000s, Agile was understood as the default way to start a team.

Companies with large existing software teams were also drawn toward Agile methods by some combination of frustration with the alternatives, stories of Agile successes, and competition for talent.  

This rapid headcount growth posed challenging questions for the various Agile flavors that dominated the 2000s.

This episode, we’ll look at how the once-radical incremental and iterative methodology proliferated into – and was transformed by – large companies.

The challenge

When the Agile precursor methodologies were designed, they were built around small teams. Scrum, for example, anticipated a cross-functional team of 7 ± 2 members. XP didn’t specify a number, but was built for small groups that worked together every day. The principles published alongside the Agile Manifesto aren’t prescriptive about a specific number of people, but it would be hard to satisfy principles like “the most efficient way to communicate is face-to-face” or “have the team reflect on how to improve after each iteration” if the team is larger than can fit in a meeting room. Lean software development, similarly, emphasizes small teams without picking a particular cap.

This creates some open questions for larger organizations trying to adopt Agile. How does the company set and enforce a strategy? How do teams coordinate? How autonomous should teams really be? Bluntly, what do middle managers even do in an Agile context?

Scrum of scrums

But the early Agile boosters hadn’t completely missed the idea of large organizations – they just didn’t think it merited a very different model. Scrum’s co-creator, Jeff Sutherland, first broached the topic of scaling Agile in his appropriately-titled 2001 article, Agile Can Scale: Inventing and Reinventing SCRUM in Five Companies.

What’s most remarkable about this short essay is that Sutherland just treats scaling as a non-problem. Reflecting on his experience implementing Scrum at healthcare company IDX Systems, he writes:

“The approach at IDX was to turn the entire development organization into an interlocking set of SCRUMs. Every part of the organization was team based, including the management team... Front-line SCRUMs met daily. A SCRUM of SCRUMs, which included the team leaders of each SCRUM in a product line, met weekly. The management SCRUM met monthly. The key learning at IDX was that SCRUM scales to any size. With dozens of teams in operation, the most difficult problem was ensuring the quality of the SCRUM process in each team, particularly when the entire organization had to learn SCRUM at once.”

See? Easy as that. If you’re having trouble with Scrum, you’re just not scrumming hard enough.

Elsewhere, Sutherland would expand on this “Scrum of scrums” a bit more, and the idea would evolve. More recent flavors have teams of 5-10 contributors, each of which sends a so-called ambassador to a daily scrum of ambassadors. At that meta scrum, alignment issues are addressed. The meta scrum has a “chief product owner,” whose job is to represent the interests of the customer of the overall product and ensure that the ultimate deliverable meets that customer’s needs. If the group gets so large that there are more than about 10 ambassadors, another layer – a scrum of scrums of scrums –  can be added. But that’s all there is to it; everything’s a scrum, and it's scrums all the way down.

IBM & Microsoft

In Episodes 5 and 6 of this series, we discussed how the high failure rate in waterfall projects drove the creation of new methods like Scrum and XP. These same failures – plus, no doubt, some trend following – drove a wave of companies to adopt Agile in the 2000s.

IBM stands out, having begun adoption of Agile across the 25,000-person IBM Software Group in 2006. IBM was better positioned than many for such a transition – the company had been a bastion of incremental and iterative development for former NASA engineers in the 1970s and 80s, as we discussed in Episode 2.

And IBM had acquired Rational Software in 2003, a company that made tools for development teams based on Rational’s own preferred methodology, filling a role a bit like Atlassian today. So IBM had in-house experience thinking deliberately about software methodology.

Microsoft also gained attention for making Agile moves. The giant began adding support for some Agile practices in Visual Studio, the company’s development environment, in the late 2000s. By 2015, even Microsoft’s core Developer organization was being described in the popular press as Agile.

But in both these cases, the Agile that resulted was not necessarily like the one envisioned at Snowbird. Ken Schwaber, one of the founders of Scrum, criticized Microsoft’s tooling in 2011 for its focus on assigning tasks and measuring developer capacity, which he saw as cutting against the core value of self-organizing teams. Speaking of assigned work, Schwaber wrote,

“This is a common smell of a development team that is not self-managing. Who ‘assigns’ work if a development team is self-organizing? Development teams select work, figure out how to do it, and go do it. Assignments are a dysfunction. In my measurements, self-organization is a prerequisite for Scrum and Agile productivity. This is where the 100%-plus productivity occurs not to mention the creative ideas, enjoyment of working, and quality.”

IBM, meanwhile, outlined in a whitepaper the scaling dimensions that it claimed original Agile methods couldn’t handle. These include geographic distribution, regulatory compliance, domain complexity, technical complexity, organizational complexity, and more. Naturally, IBM had developed an improved way to address these issues, and had a consulting service ready to help other large companies handle the transition. 

IBM called its methodology “Disciplined Agile Delivery.” Also known by the acronym DAD (or Dad?), Disciplined Agile Delivery’s divergence in mindset is captured in the name: “Disciplined.”

Here’s Scott Ambler, one of DAD’s creators at IBM, writing in 2009 on the difference:

“Mainstream agile development processes and practices, of which there are many, have certainly garnered a lot of attention in recent years. However, these mainstream strategies (such as Extreme Programming… or Scrum…) are never sufficient on their own; as a result organizations must combine and tailor them to address the full delivery lifecycle. When doing so the smarter organizations also bring a bit more discipline to the table, even more so than what is required by core agile processes themselves, to address governance and risk.” 

The new buzzword IBM introduces throughout its materials is “appropriate governance framework.” Ambler doesn’t tell us exactly what an appropriate governance framework looks like, but he does make it clear that its purpose is to tamp down on too much team autonomy.

“Self-organization leads to more realistic plans and estimates more acceptable to the people implementing them. At the same time these self-organizing teams must work within an “appropriate governance framework” that reflects the needs of their overall organizational environment: such a framework explicitly enables disciplined agile delivery teams to effectively leverage a common infrastructure, to follow organizational conventions, and to work toward organizational goals.” 

What could companies gain from widely adopting IBM’s “disciplined” agile method over their preexisting, presumably waterfall-style methods? In a separate marketing document from 2010, Ambler said that, conservatively, consulting clients could expect to see 4% fewer defects per year, 4% lower labor costs per year, and 5% faster time-to-value. These improvements would compound annually, but still, they are pretty paltry gains.

IBM’s Disciplined Agile Delivery was a relatively early player in the field of consultant-driven scaled Agile methods, but they were far from alone. One literature review lists 19 methodologies for scaling Agile released since the Manifesto in 2001, and only one – from Spotify – was published by an actual working software company. The rest were by consultancies.

I’m not going to talk through 19 methodologies – most seem never to have had serious uptake or differentiation. And while I certainly haven’t read all of them thoroughly, some common themes run across many of these methods. They form a loose spectrum, and the most contentious issue across almost all of them is how top-down or bottom-up the structures are.

Sutherland’s original scrum of scrums model is certainly the lightest-weight, and the most bottom-up. Disciplined Agile Delivery is somewhere in the middle. And anchoring the heavy end of the spectrum is perhaps the most famous, widely-adopted, and controversial version of Agile for enterprises. That is the Scaled Agile Framework, or “SAFe”.

SAFe

The primary author of SAFe is Dean Leffingwell, though from the beginning, it has been a business project, under the umbrella of Leffingwell’s company, Scaled Agile Inc.

Leffingwell had previously worked at Rational Software, the software tools company that IBM had acquired. In the 1990s, Leffingwell and others created the “Rational Unified Process” as a comprehensive software development methodology.

I skipped over Rational Unified Process in my history of the 1990s in order to focus on the Agile precursors, and I won’t give it the full treatment now. The short version is this: 

Initially released in 1994, the Rational process was an iterative and incremental approach, but one with more structure than methods like XP or Scrum. It had more formalized roles, more specific artifacts to produce, and a greater emphasis on having an architectural plan for a project. Its ethos was much more “we know what we’re building, we’ll use iterations to get there, and we value following a rigorous playbook” than the exploratory and organic approach of the Agile precursors.

SAFe builds on the foundations of the Rational Unified Process and is a much heavier methodology than scrum of scrums. It was first released as a short white paper in 2011 by Leffingwell, and has since gone through numerous major and minor revisions, growing in scope and complexity with each one. SAFe 6.0 was released in March 2023.

I’m going to describe the official process for a company to adopt SAFe now, and touch on the basic concepts and life cycle of SAFe operations along the way. I won’t lie to you: It’s a lot. Strap in, and I’ll do my best to get through this quickly. 

Here we go.

SAFe starts from the top. In this model, executives decide that it is time for their business to become “Agile.” The business develops a “Lean-Agile Center of Excellence,” which is a group of employees who become certified SAFe Agile coaches, with the help of paid consultants from Scaled Agile Inc. Then the executives are trained in some new methods. They should see their organization as structured around “portfolios,” and they are trained in three disciplines: Strategy and Investment Funding, Agile Portfolio Operations, and Lean Governance. Of course, each executive should work in consultation with the company’s Lean-Agile Center of Excellence and the Value Management Organization. If you haven’t heard of a Value Management Organization before, that’s okay; it’s just a consultant’s new name for project management.

Once the executives have been trained in Lean Portfolio Management, it’s time for the SAFe Value Stream and ART Identification workshop. You may be wondering what ARTs are – that’s an acronym for “Agile Release Trains,” and they are constructs (yes, they use the term “construct”) that contain 50-125 individual people and team leaders. In normal human terms, an Agile Release Train is a group of teams that work together on the same product line. This workshop to identify the value streams and ARTs that the organization should fund is also an activity for executives; there’s no need to trouble the riff-raff with thinking about stuff like what teams fit together. The major projects that the Portfolio leadership sponsors are Portfolio Epics, which are handed down from the Portfolio leads to the lower levels for execution.

Before individual teams can start being SAFe-ly Agile, though, their team leaders need training. Fortunately, there are separate training courses for everyone: Leading SAFe for ART stakeholders, SAFe Product Owner/Product Manager training, SAFe Scrum Master training, and SAFe for Architects. Finally, it’s time to train the individual contributors on each Agile team with the SAFe for Teams training.

Now that the lumpenproletariat have been trained, the cycle of 10-week Program Increments can begin. Every team within an Agile Release Train should operate on the same two-week sprint cycle, with the usual routines of daily standups, sprint reviews, and retrospectives. When each 10-week Program Increment finishes, team leaders come together for a two-day Program Increment review and planning. Leaders make plans, and a single 30-minute retro provides the only real forum for any of the perhaps 125 individual contributors on the train to raise an issue with their decision-makers. Then leaders participate in an “Inspect & Adapt” workshop to review the outcomes of the increment, identify bottlenecks, and create new items on a special “improvement backlog” to address process issues for the next increment. Then those team leaders agree on the mission, vision, and scope of the next 10-week Program Increment, and return to their teams with a train-wide plan in hand. And so the wheel turns.

Believe it or not, this is a really stripped-down, bare bones telling of the SAFe methodology.

That’s for two reasons:

  1. While writing, I was starting to feel those weird feelings you feel when you look over a high balcony, and 
  2. SAFe tries to do everything. It says so right in SAFe’s materials:

“SAFe integrates the power of Lean, Agile, and DevOps into a comprehensive operating system that helps enterprises thrive in the digital age by delivering innovative products and services faster, more predictably, and with higher quality.”

It’s an operating system for enterprises! That’s a lot of ground to cover! You may remember how Scrum had two explicit roles: Scrum Master and Product Owner. SAFe has at least fifteen roles. On the master diagram for the SAFe methodology, there are 92 concepts, each of which has its own dedicated web page explanation, from “lean budgets” to “architectural runway” to “model-based system engineering.”

And, forgive the quick aside, but this trivial detail seems to perfectly capture the essence of this model. When you copy text snippets from the official SAFe website, they use JavaScript to inject additional material onto your clipboard, so that when you paste it, you helpfully see this:  

“© Scaled Agile, Inc.
Include this copyright notice with the copied content.
Read the FAQs on how to use SAFe content and trademarks here: https://scaledagile.com/about/about-us/permissions-faq/
Explore Training at: https://scaledagile.com/training/calendar/”

So that’s SAFe: the Scaled Agile Framework. It’s expansive, it’s comprehensive, and it’s available from a certified consultant near you. 

Lest it seem like I’m picking on a bad fringe outlier methodology, the official SAFe site claims that SAFe has been adopted by more than 20,000 enterprises, and that more than 1 million individuals have been trained. If we take these data seriously, that means SAFe is among the most widely practiced software development methodologies in the world today. And while it does stand out for its heft, it’s not unique: other methodologies, like Disciplined Agile Delivery, are not that far behind it in scope, complexity, or top-down-ness.

“No True Agile”

But… is Scaled Agile Framework… Agile?

On the one hand, clearly no. Agile might be fuzzy and capacious and accommodating, but in the mere 68 words of the Agile Manifesto, the Snowbird folks said they value “individuals and interactions over processes and tools” and “responding to change over following a plan.”

SAFe is all about processes, tools, and following a plan.

That’s not just my hot take – Ken Schwaber, co-creator of Scrum and Agile Manifesto signatory, was damning in a 2013 blog post titled “unSAFe at any speed,” commenting on the appearance of the Scaled Agile Framework’s creators at an Agile conference. Schwaber writes,

“The boys from RUP (Rational Unified Process) are back. Building on the profound failure of RUP, they are now pushing the Scaled Agile Framework (e) as a simple, one-size fits all approach to the agile organization. They would be at the RUP conference, but there are none. They would be at a waterfall conference, but they are no longer. So they are at our conference. Strange, but they had nowhere else to go. Try to be polite."

At least six other signers of the Agile Manifesto have called out SAFe as a bad practice.

The US Air Force’s first Chief Software Officer, Nicolas Chaillan, wrote a memo for the Air Force instructing that 

“... each software intensive program is encouraged to use eXtreme Programming (XP), Kanban, Scrum or a similar framework… Programs are highly discouraged from using rigid, prescriptive frameworks such as the Scaled Agile Framework (SAFe).”

In the whole two-page memo from Chaillan, the only use of bold text is for that sentence all but forbidding the use of SAFe. And this is the US Air Force, one of the most scaled organizations in the world and, as part of the broader Department of Defense, the largest employer of software engineers in the world.

Lastly – and this is something I can’t quantify, but it’s real – SAFe seems very unpopular with developers. Nearly everything positive written about it on the Internet is from a senior manager or SAFe consultant; nearly everything written by an actual software worker is some degree of damning.

So in a bunch of ways, we might dismiss SAFe as “Not Agile,” a wolf in sheep’s clothing, and move on.

But then, take a second look at the parts. There are teams of 5 to 9 contributors, they have two-week sprints, they do standups and retros… the groups of teams in an Agile Release Train are at least kind of like a scrum of scrums. Sure, SAFe is expansive, but much of the stuff it incorporates is at least Agile-adjacent or otherwise commonplace, like OKRs.

In some sense, there’s almost nothing original to SAFe – it’s a compilation of every idea someone reputable has written about as a best practice. SAFe even comes in for criticism by some for plagiarizing concepts from everyone else. Sure, SAFe is obnoxiously consultant-y, but most of the Agile Manifesto’s signers were consultants. That’s just the deal in this game.

So, maybe this is Agile?

Agile is dead

As pedantic as an argument over “what’s really Agile” may seem, the underlying concerns have gotten real. As the various methodologies for “Agile at Scale” emerged in the 2010s, they really did create a crisis of confidence in parts of the software community. 

In March 2014, Manifesto signer Dave Thomas published a blog post called “Agile is Dead (Long Live Agility)”. Other signers followed suit. In May 2015, Andy Hunt published “The Failure of Agile.” In May 2018, Ron Jeffries published “Developers Should Abandon Agile.” In August 2018, Martin Fowler was warning of the “Agile Industrial Complex.”

Each author made their own argument about what had gone wrong and how. To generalize, each made some version of the claim that the core principles and values of the Agile Manifesto remain good, but that in practice the ideas have either been too rigidly codified, or otherwise gone from being emancipatory for developers to being another system of control.

Dave Thomas wrote,

“Once the Manifesto became popular, the word agile became a magnet for anyone with points to espouse, hours to bill, or products to sell. It became a marketing term, coopted to improve sales in the same way that words such as eco and natural are. A word that is abused in this way becomes useless—it stops having meaning as it transitions into a brand.”

Manifesto signer Alistair Cockburn, among others, has made the point that companies buy products and services that come in boxes with price tags. Thus, when executives make the case to transition their company to Agile, they buy an Agile methodology, which they expect to install an observable set of structures and practices. This leads to lots of attention being spent on meeting cadences and team roles, but it misses the cultural, behavioral, and attitudinal parts of Agile.

In this series, I’ve tried to highlight the ways that the emergence of Agile was not just a methodology shift, but also a cultural moment, and one that was often in tension or even outright conflict with management. I found one retrospective that describes early Agile as “a grassroots labor movement.” Key concerns at the beginning were ensuring a 40-hour work week, increased self-determination, increased access to customers, and getting management out of the way.

In the same skeptical note on Microsoft’s adoption of Agile from earlier in this episode, the ever-combative Ken Schwaber wrote:

“Many organizations have not adopted the self-organizing, team-based aspects of Agile. They still are predictive, top-down organizations. Tools that don’t support this function are hard to sell. However, form follows function. If we continue the same predictive manufacturing model, wrapped in Scrum tools, we as software professionals will have a very hard time rising to the increasing demands of our world for creative, sophisticated, quality products."

Scrum without self-organization and empowerment is a death march, just like waterfall, but an iterative, incremental death march without slack.

Alistair Cockburn hit on the labor relations element of Agile in, of all things, a defense of strictly scoped sprints, writing:

“Scrum struck a magnificent bargain in hostile territory: Management got 12 times per year (once per month) to change direction in any way they wanted; the team got 1 month of total quiet time with no interruptions or changes of direction to do heavy thinking and working. No execs ever got a better deal.”

All of this is to say, the friction between “big org management” and the development team was at the center of Agile from the very beginning. Making it scale isn’t simply a matter of training more people how to read a kanban board. It requires taking on basic conflicts between autonomy and control, something an off-the-shelf consultant package will simply never be able to resolve.

The Agile methods that we’ve engaged with, from The New New Product Development Game forward, have had little for managers to do besides point in a general direction and sign checks. Those first teams following what Takeuchi and Nonaka called the “rugby approach” were given simple directions by their managers: “Make a car that young people would like to drive.” “Build a new photocopier with premium features for half the cost.”

These were teams of professional engineers at enormous Japanese companies. They were not scrappy startups composed of new grads in hoodies. But with a general mandate from management, resources to work with, occasional check-ins for progress – and in one case a paid trip to Europe to check out the car market – they built highly-successful new products. And they did it without an externally-approved well-documented methodology.

Those teams were small-ish, but there was no magic number. They worked in iterations and the steps of their development process overlapped, but there was no common fixed iteration length. All the different job functions were put together in one physical room so that they would have to interact with one another and see what they were each doing; they didn’t all follow some scripted process of demos and updates.

I can’t help but wonder, with all the wailing and gnashing of teeth over scaling Agile, if the direction might be inverted. Instead of figuring out how to impose the right structure onto teams, perhaps the right challenge is to impose the right hands-off approach on management. 

DevOps

But while the managerial and organizational sides of scaling Agile have drawn all this angst, a quiet but important methodological evolution, itself rooted in the Agile tradition, has been proceeding apace. That evolution is DevOps.

If you’re not a developer, or not as familiar with the nuts and bolts of how software gets made, it’s worth unpacking “DevOps.” The name is a blend of Development and Operations. In this context, Development refers to writing application code – that’s all the features and capabilities of the website or app. Operations refers to the process of getting that code running on servers for real users to interact with. This generally involves stitching together code that is contributed by many separate developers, testing the code for bugs in a safe environment without real users, and then running that code on public-facing servers.

The focus of our recent episodes has been on smaller teams writing application code, either as the new programming teams embedded in non-software businesses in the 1990s, where Scrum and Extreme Programming originated, or as small startups building MVPs in the startup wave. At that scale, a small team with a relatively small business can (hopefully) handle the complexity of their application, and can tolerate minor outages and issues with deploying new code. But as large-scale consumer Internet businesses with billions of users and many thousands of software engineers rose to prominence, those small team patterns became unsustainable.

DevOps is generally described as the application of Agile ideas to the operations parts of software to improve efficiency and address these scalability and complexity problems. If the original waves of Agile were about reworking the relationship between software development and the business, DevOps is about reworking the relationship between development and IT operations. 

The term “DevOps” was coined by Patrick Debois, a Belgian system administrator and consultant who had grown frustrated with the inefficiencies and conflict between developers and administrators during a data center migration project. In 2009, Debois watched a conference presentation by the leaders of development and of operations at photo-sharing site Flickr, called 10+ Deploys per Day: Dev and Ops Cooperation at Flickr. This inspired Debois to organize a well-attended conference on the topic, called “DevOpsDays”, in Belgium in October 2009. The concept spread quickly, with a follow-up conference in Mountain View in 2010, and a rapid succession of books, events, and consultancies springing up as DevOps ideas proliferated.

CI/CD

The concrete practices most associated with DevOps are Continuous Integration, Continuous Delivery, and Continuous Deployment, or CI/CD. The goal of these practices is to minimize the time between when code is written by a developer, and when that code is running, serving real users.

Integration refers to how code from many programmers is stitched together into a single shared codebase. At the core of integration is version control software – which tracks changes in the code and makes it easy to restore previous versions if a new bug has been found. The first publicly available version control software came in 1977; the open-source industry standard, git, was released in 2005. 

Continuous integration has been kicking around for a while, but is most associated with Extreme Programming, which we discussed back in episode 5. Kent Beck included Continuous Integration as a basic practice of XP in his 1999 book. He defined it as “Integrate and build the system many times a day, every time a task is complete.” 

Extreme Programming also requires all code to be covered by comprehensive tests. Each time new code is integrated, all of the tests for existing code, plus new tests for new code, will run. If the tests all pass, then the new code can be integrated into the main branch and the developers can move on to their next task.

Quick aside to spare anyone the confusion I had when I first encountered the term – “continuous” here has always been a bit of an exaggeration. It’s not like integration, delivery, or deployment is happening with every keystroke. Continuous is more of an aspirational term; in practice, it just means “as often as possible, ideally a few times each day.”

By integrating all of the code several times a day – each time a small task is complete – the developer pair working on the task discovers issues quickly, without wasting time. The entire team also benefits from always having access to up-to-date code, reducing the risk of significant divergences between branches that might emerge if code were left unintegrated for days or weeks at a time.
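
To make the mechanics concrete, here is a minimal sketch of the gate a continuous integration server enforces on every merge. This is illustrative Python rather than any real CI product’s API; the branch name and the choice of pytest and git commands are my own assumptions, not a description of any particular team’s setup.

```python
# Minimal sketch of a continuous integration gate (hypothetical setup,
# not any particular CI product's API).
import subprocess
import sys


def run_test_suite() -> bool:
    """Run the project's full test suite; return True only if every test passes."""
    result = subprocess.run(["pytest"], capture_output=True, text=True)
    print(result.stdout)
    return result.returncode == 0


def integrate(branch: str) -> None:
    """Merge a completed task into main only when the build is green."""
    if not run_test_suite():
        # A red build blocks integration; the pair fixes the problem while context is fresh.
        sys.exit(f"{branch} rejected: tests failed, fix before integrating.")
    subprocess.run(["git", "checkout", "main"], check=True)
    subprocess.run(["git", "merge", "--no-ff", branch], check=True)
    print(f"{branch} integrated into main.")


if __name__ == "__main__":
    integrate("feature/checkout-button")  # hypothetical branch name
```

The point of the sketch is the gate itself: integration happens many times a day, and a failing test suite stops a change at the door rather than letting it drift out of sync with the main branch.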

Continuous delivery takes this process a step further, frequently deploying new code from version control to a staging environment. This staging environment is generally a close mirror of the production environment that serves customers, creating a space for more comprehensive testing of how all the many pieces of the software fit together, and for testing whether they actually deliver the desired functionality for users. While building blocks for continuous delivery had been around before, the concept got its canonical articulation in 2010, with Jez Humble and David Farley’s book, also called Continuous Delivery.

The last step is continuous deployment, which means frequently promoting code from this staging environment to production, where it serves real users.

The 2010s and early 2020s have seen a proliferation of tools to automate more steps of this process, increasingly replacing human quality assurance steps with sophisticated automated testing that can render and inspect web pages and app views.

Putting the pieces together, in the limit case, when a developer tries to merge new code into the main codebase, it triggers a cascade of tests and promotions that see the code pushed to a staging environment and on to production. New code could be thoroughly tested and serving real users within minutes of when it was written, without any human action beyond that first code merge.
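
As a rough sketch of that cascade, with hypothetical stage names standing in for real build, test, and deployment tooling, the whole pipeline amounts to a chain of gates, each of which must pass before the next one runs:

```python
# Sketch of a CI/CD cascade: each stage runs only if the previous one succeeded.
# Every function here is a hypothetical stand-in for real build/test/deploy tooling;
# the stubs just report what the real stage would do and pretend it passed.

def build_and_unit_test(commit: str) -> bool:
    """Continuous integration: build the code and run the fast unit tests."""
    print(f"[CI] building and testing {commit}")
    return True  # placeholder: a real stage would return the actual test result


def deploy_to_staging(commit: str) -> bool:
    """Continuous delivery: push the build to a production-like staging environment."""
    print(f"[CD] deploying {commit} to staging")
    return True


def run_acceptance_tests(commit: str) -> bool:
    """Automated end-to-end checks standing in for manual QA."""
    print(f"[CD] running acceptance tests against staging for {commit}")
    return True


def deploy_to_production(commit: str) -> None:
    """Continuous deployment: promote the build to the servers real users hit."""
    print(f"[Deploy] {commit} is live in production")


def pipeline(commit: str) -> None:
    # Any failed gate stops the promotion; a fully green run ends in production.
    for gate in (build_and_unit_test, deploy_to_staging, run_acceptance_tests):
        if not gate(commit):
            print(f"Pipeline stopped at {gate.__name__} for {commit}")
            return
    deploy_to_production(commit)


if __name__ == "__main__":
    pipeline("a1b2c3d")  # hypothetical commit hash
```

In real systems each stub is a call out to build servers, test runners, and deployment tools, but the control flow is exactly this simple: the merge triggers the chain, and minutes later the change is either rejected or serving real users.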

CI/CD is a natural extension – perhaps the limit extension – of the idea of iterative and incremental development, which we’ve tracked since all the way back in episode 2. Now, with the right investments in testing in place, an iteration can be as small as a single line of code, and can deliver immediate value to its users. We’ve come a long way from iteratively tinkering with rocket plane components.

Architecture

DevOps is about more than CI/CD, though. It has also influenced software architecture, privileging microservices over large monoliths, as a collection of smaller services provides more flexibility in scaling and is easier for large developer teams to contribute to than one big service. 

Containerization, which wraps applications in a virtualization layer and makes it easy to run them on a wide variety of hardware and operating systems, has been an important tailwind for DevOps. A number of containerization systems emerged in the mid-2000s, with Docker (released in 2013) becoming a dominant open-source platform. By using containerization, software can easily be deployed to a heterogeneous mix of cloud infrastructure.

A final layer of architectural innovation has been orchestration, which refers to the tools for managing the many services running in many containers, balancing load, and shifting resources among many instances of the same service. The dominant technology in orchestration has been the open-source Kubernetes, which launched in 2014, generalizing Google’s internal tool for the same purpose. 

Taken together

Taken together, DevOps has been part and parcel of the evolving technological changes of the post-2010 period. As a methodology, it has shifted headcount away from human quality assurance, and toward fields like site reliability engineering. It has strengthened the case for test-driven development, or at least for building comprehensive test harnesses, affecting how developers spend their time. By introducing practices that make it easier for companies to run many smaller, separate but interconnected services, DevOps has supported the shift to smaller teams within large organizations, each responsible for its own services and potentially using different technologies than the others.

Wrapping up

As I record this in 2023, DevOps seems ascendant and non-controversial, at least among tech-forward companies. 

Early Agile, meanwhile, has perhaps been a victim of its own success. As it racked up some early wins and admirers, and frankly as it became cool, waves of consultants and business leaders sought to capitalize on it. This took Agile from a loose set of principles for small teams practicing software development, to something sold as a panacea to work for every kind of company. As we’ve seen, the transition has not been graceful.

That’s all for this episode.

Join us next time as we cover one last facet of how we work, addressing a silent partner that has taken on a lot more relevance to software workers in the 2020s. It’s time for the history of the office and of working from home. 

As always, your comments and feedback on this episode are very welcome. You can find a transcript, links to sources, and ways to reach me on the show website at prodfund.com

And if you like this series, and you want to hear more, do me a favor and share it with someone you think would enjoy it too.

Thank you very much for listening.