Software Engineering is Decision making

What is Software Engineering? Seems like a straightforward question, right? Most people who do it professionally tend to describe it in terms of activities like system design, programming, and testing. However, thinking about Software Engineering in these terms misses something far more important.

Software Engineering is actually best understood as the practice of managing a continual stream of decisions - both making new ones and grappling with the consequences of those made by others. 

Software Engineering is decision making in two distinct, but related ways:

  1. Nearly all of the code ever written is nothing more than the sum of more-or-less arbitrary (even if justifiable) decisions. There are no foundational axioms, no laws of physics underpinning most software design choices. Almost all the properties you think of as fixed or universal in your code are not. Sure, there are carefully researched algorithms that exhibit degrees of theoretical rigor, but those represent a tiny fraction of the code in any system. Even so, whatever rigor governs them can often be upended in practical use.

    All of modern software engineering is built on the back of millions of previous decisions which abide by no independently derivable system of logic. Any constraints you experience while developing software are either self-imposed, or imposed by a series of quasi-arbitrary decisions others have made.
  2. If you accept #1, then the most important skill any software engineer can possess is the ability to quickly make principled decisions in the face of nearly constant uncertainty. In other words, doing “Software Engineering” is fundamentally an exercise in continuous decision making.

A consequence of these ideas is that as you become a more senior engineer you become more and more concerned with managing the number of technical decisions you, your teams, and possibly your organization have to make. This also explains a common frustration junior engineers feel when trying to convince their tech lead or manager to try a bunch of cool new technologies, only to hear some version of “we don’t need all this fancy stuff”.

When you first start out as a software engineer you probably believe your job is all about learning and using new technology. It’s only after years of hard earned experience (narrator: and pain) that you realize that minimizing novelty and choice is often the most effective way to ensure you deliver what your boss needs from you on time. Too much freedom, it turns out, can be very expensive.

Before we go much further, it’s worth asking: in what way does thinking about the deeper nature of software engineering help in any practical sense? Is there more to this than an interesting theoretical exercise? I believe so.

Today’s systems have become so complex we can hardly understand them. Simultaneously, the apps, services, and systems we create have grown indispensable in our lives. I think it is important and urgent that we have a good diagnosis of the root cause of the complexity in our systems if we hope to maintain some degree of intellectual control over them. I believe focusing on the number and breadth of decisions we wrestle with has far more impact on the maintainability and safety of our systems than nearly any other aspect of the software development process.

What’s in a decision anyway?

Now I know what you might be thinking: you make decisions all the time, and it’s such a mundane act it seems kinda weird to pay this much attention to it. In one way you are right - as software engineers we make decisions all day long and most of us don’t ever think about it as something we learned how to do, we just do it. However, I think it’s worth spending a little more time considering how and why decision making, and specifically technical decision making, works in practice.

Let's start with an example - you don’t need to deeply understand the code, just give it a casual review - and as you read, consider the number of decisions that were made to create it:

function validateCreditCard(cardNumber: string): boolean {
  // 1. Basic Cleanup & Luhn Check
  const cleanNumber = cardNumber.replace(/\D/g, '');
  if (!isLuhnValid(cleanNumber)) {
    return false;
  }   

  // 2. Length Checks (Based on card types)
  // Common lengths for Visa, Mastercard, Amex
  const validLengths = [16, 15]; 
  if (!validLengths.includes(cleanNumber.length)) {
    return false;
  }

  // 3. Basic Type Detection
  if (cleanNumber.startsWith("4")) {
    console.log("Likely Visa");
  } else if (cleanNumber.startsWith("5")) {
    console.log("Likely Mastercard");
  } else if (cleanNumber.startsWith("34") || cleanNumber.startsWith("37")) {
    console.log("Likely American Express");
  } else {
    console.log("Card type not recognized");
  }
  // Additional checks omitted for brevity.
  
  return true;  // If all checks pass
}

function isLuhnValid(cardNumber: string): boolean {
  let sum = 0;
  let shouldDouble = false;

  for (let i = cardNumber.length - 1; i >= 0; i--) {
    let digit = parseInt(cardNumber.charAt(i), 10);
    if (shouldDouble) {
      digit *= 2;
      if (digit > 9) {
        digit -= 9;
      }
    }
    sum += digit;
    shouldDouble = !shouldDouble;
  }
  return (sum % 10) === 0;
}

How many decisions did you count in this ~50 line block of code? Some are obvious, like method and variable names, choices of algorithms, or how to structure the work across two methods. Less obvious are decisions about how to handle errors (or not), or the use of early returns. More subtle are decisions about indentation, camelCasing, the use of a specific programming language, and the various frameworks and libraries. Directly connected to this example but invisible are the choices about testing, building, and releasing this code. And of course, every line of code in this example depends on 1000s of decisions that have come before. 
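
To see just how interchangeable many of those decisions are, here is a sketch of one of the many other ways the Luhn check alone could have been written - the same functional result built from a completely different set of choices about structure, naming, and style:

const isLuhnValidAlt = (cardNumber: string): boolean =>
  cardNumber
    .replace(/\D/g, '')
    .split('')
    .reverse()
    .map(Number)
    .reduce((sum, digit, index) => {
      // Double every second digit from the right, subtracting 9 on overflow.
      const doubled = index % 2 === 1 ? digit * 2 : digit;
      return sum + (doubled > 9 ? doubled - 9 : doubled);
    }, 0) % 10 === 0;

Neither version is more “correct”; each simply bundles a different set of decisions.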

Creating this small, unremarkable sample of code required dozens of direct decisions and depended on hundreds or thousands of indirect decisions. Multiply this number by the millions of lines in a modern app or service, add in all the decisions made in the supporting libraries and other infrastructure and pretty soon, you have your hands full of a lot of complexity.

Extrapolating from the simple example, it’s obvious every software engineer is required to make a lot of decisions as they write code. However, the impact, suitability, and risks of each decision are hardly considered - how could they be? If each choice were weighed with care, software would take ages to ship; the process of doing so would effectively return us to the “big upfront design” techniques of software engineering’s earliest years. As much as you might long for the halcyon days of careful software design, the reality of our current moment is that we must move faster.

Of course, the need to make so many decisions can be read another way. A more positive framing is that each individual decision point allows for creative flexibility. Flexibility empowers an individual engineer to explore nearly endless possibilities for constructing solutions. The availability of choice is what has fostered the vibrant and dynamic software ecosystem we have today. Don’t like a specific library API? There are dozens more that you can choose from. Don’t like any of them? Write your own!

The (hopefully obvious) fact is, some decision making is necessary and even good, but other kinds are expensive and risky. Unfortunately, as Software Engineers, we have a really hard time telling which is which ahead of time. And no matter what, the more decisions you have to make, the more complex your job as a software engineer is, and that complexity creates massive inefficiencies across the software development lifecycle and increases the risk factors for your systems.

What makes software engineering decisions so difficult?

I’ve found that experienced software engineers often have an intuition that decision making is expensive, yet they aren’t always able to articulate why they feel that way. In fact, there is a universal arc to the experience: embracing the variety of options early in one’s career, then becoming more conservative over time. Why do software engineers grow to appreciate the value of constraints and reduced technical choice as they gain experience?

Perhaps it is because underlying nearly every decision in software engineering lies a tradeoff, and tradeoffs are cognitively challenging to deal with. There is no clear right or wrong answer, only many, many shades of possibility. Tradeoffs also require placing a bet in the face of uncertainty. Not knowing if a choice will be correct creates tension and tension can be uncomfortable. Many engineers learn to compartmentalize this feeling, but that does not mean it ever goes away. 

But feelings of tension are not, themselves, enough to explain the challenge. I believe there are several factors which set software decision making apart from nearly all other kinds of decision making:

We have not yet discovered enough of the “laws of physics” in software. Software Engineering is a form of collective “world building” in which everyone who commits code to the code base is attempting to build, from scratch, all the concepts, representations, nouns, and verbs required to bring that world to life. Doing so in a way that is consistent and learnable is more art than science. Software engineers have developed some lightweight heuristics to guide them, but most amount to little more than glorified rules of thumb. In practice, of the millions of decisions required to construct a working system, nearly every one could have been made in dozens of different ways and still achieved the same functional result. The lack of a “naturally correct way” leaves ample opportunity for misunderstanding.

Decision making is expensive - and it’s expensive on several levels. At the micro-scale, every time a developer has to reason about various command line flags, or figure out how to use APIs with inconsistent names, they are spending time and there is a risk they will get something wrong. At a slightly larger scale, making an important technical choice like which frontend framework to use often devolves into the development of side-by-side feature comparison matrices. I’ve reviewed dozens of such comparisons - each one taking weeks or months to develop - and every one seems like an exercise in motivated reasoning, as there are truly very few objective criteria against which you can judge something as complex as a framework.

The fact is, most hard software decisions are hard because of the complex (sometimes chaotic) environments they take place in. In a lot of cases we just don’t know what is going to work well and we drive ourselves in circles trying to make the “correct decision” when “correct” may not be an attainable goal.

Every decision made in a new project today is built atop a mountain of decisions made in the past, going all the way back to the design of the original Von Neumann architecture. There is no naturally correct way to design a computer, to write a library, or to design a distributed system. To build anything of consequence we have to wrestle with a zoo of tools, libraries, and frameworks, each the result of thousands of individual decisions. Why do terminal commands work the way they do? Why does HTML use <brackets/>? Why is PHP, well… just why is PHP?

We have the computing landscape we do today as much because of a series of historical accidents as any directed effort. Thus, you cannot “learn” the APIs of a programming language by reasoning from first principles. Instead you must learn to cope with an interconnected web of nearly endless human decisions. Rarely will you have the time to dig deep into the chain of decisions that led to the oddly shaped command line standing between you and delivering your next feature request; instead, the most efficient strategy is to just memorize 1000s of pieces of esoteric trivia without asking too many questions.

Every decision has a consequence but we can’t reliably predict which decisions will be consequential. Large scale outages have resulted from a misinterpretation of a single boolean variable name. Many hours have been wasted trying to figure out how to use inconsistently named APIs that do functionally similar things. In many cases, a few locally optimal decisions can combine to create emergent failures that would be impossible to predict from the vantage point of any single developer. Compounding the problem, it is hard to predict exactly how your code will be used in the future. These things happen because we lack the analytical techniques to properly estimate the potential risk of most decisions without a prohibitively expensive process.

Systems are larger than ever, requiring more decisions to be made. Today’s average-sized systems involve more code, and are therefore impacted by more decisions, than even the largest systems of 30 years ago. Software development is a discovery-driven process where each decision reveals new potential design paths and, therefore, new decisions. The sum total of all the decisions required in the course of developing software - a project's decision space - can be truly overwhelming. The number of trade-offs that have to be balanced can easily lead to analysis paralysis and thrashing in equal measure.

All decisions made within an ecosystem are interconnected. Whether it's how to write a test, or deciding the behavior of a particular API, every decision a software engineer makes is deeply interconnected with its upstream and downstream dependencies. Decisions made today may have important consequences years down the road. No software engineering decisions are made in isolation.


The upshot of all of these challenges is that human beings, limited as they are in their ability to juggle complexity, will succumb to decision fatigue.

Software engineering may not be unique in its construction of very complex systems with many interlocking parts, but it does stand alone in that so many of those decisions are made with comparatively little rigor and such weak tools for modeling potential outcomes. 

There may be no other industry where the scale, speed, and complexity of the decision space is so massive, and the risk of each choice so difficult to predict.

Not all decisions are equal

Some decisions are easy to make, or at least they feel easy because the consequence is low. When you name a local variable in a small function the impact of a future misunderstanding is likely contained to the function, and should you want to change the name, the cost is ~zero. Slightly harder to undo is changing the name of a public class. Now the risk of misinterpretation increases because more people will interact with this code. Also, performing the rename may span multiple files, and if the class is in a widely used library, perhaps several different code bases. 

As a rule, the greater the cost to change and the greater the cost of being wrong, the more complex a decision becomes. 

Classifying decisions like this is sometimes described as thinking about reversibility. Decisions which are very hard, or impossible to undo are called Type 1 decisions (e.g. choosing a cloud provider). Type 1 decisions are also sometimes called “one-way doors”. Decisions that are easy to undo are called Type 2, or two-way doors (e.g. local variable naming). The trick is often determining which kind you are facing. And while reversibility is an important property to recognize, a second problem, commonly called blast radius, is perhaps more significant. The blast radius of a decision is the range of negative impacts that may result from a particular choice.

The reason blast radius is more critical to understand is that, in my experience, blast radius is incredibly difficult to predict reliably. For example, while the blast radius of choosing the wrong product strategy is likely catastrophic, less obviously, a decision like which units to use when storing a timestamp can be just as consequential. Don’t believe me? Go read about the Mars Climate Orbiter, which had a “close encounter with Mars” that resulted from different pieces of guidance software using different units of measurement for thruster impulse calculations.
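
To make the timestamp version of that problem concrete, here is a hypothetical sketch (the functions and scenario are invented for illustration) of how two locally reasonable unit decisions can combine into a defect:

function recordEvent(): number {
  // Decision made here: store timestamps as seconds since the epoch.
  return Math.floor(Date.now() / 1000);
}

function isRecent(eventTimestamp: number): boolean {
  // Decision made elsewhere, by someone else: timestamps are milliseconds.
  // Every recorded event now looks decades old, so nothing is ever "recent".
  const fiveMinutesInMs = 5 * 60 * 1000;
  return Date.now() - eventTimestamp < fiveMinutesInMs;
}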

Being unable to predict blast radius also implies being unable to predict reversibility. Therefore, the greater the potential blast radius, the more time you are likely to invest up front to mitigate the range of bad outcomes.

Experienced software engineers are pretty good at identifying many examples of Type 1 vs Type 2 decisions. As Martin Fowler once said, “architecture is the stuff that’s perceived to be hard to change”. Thus a system’s architecture is often composed of the Type 1 decisions that define it. Unfortunately, our systems have become so large and so interconnected that it can be quite difficult to predict the blast radius of many decisions. Even simple, seemingly local decisions, like naming a local variable, can have a substantial blast radius in a tightly coupled ecosystem of dependencies.

I don’t think this fact is controversial, or even under appreciated. Professional software engineers spend considerable time developing heuristics to help them estimate the risk of certain choices and how to contain them. We create style guides, write tests, design “availability zones”, and much more, all in an attempt to contain the potential impacts of the choices we will make. We try, sometimes successfully, to engineer our way around uncertainty inherent in our decision making processes. 

What I think is less appreciated is the tax that all of this consideration places on our velocity, our teams, and our products. 

As an exercise, try to add up all the time you have spent in the last week making decisions on your team: for example, how to name a domain object, which libraries to use, how to run a specific Linux command, or how to release your code. Now add in all the work you’ve done to mitigate the risk that some of those decisions will be sub-optimal today, or in the future. It wouldn’t surprise me if you found yourself tallying several days-per-week. What is this costing us across the entire software development lifecycle?

Balancing flexibility and velocity

If decision making is central to the job of a software engineer then the role of senior engineers within an organization is to pay very close attention to the decisions the team is making and try to reduce the cost of as many as possible. One of the most direct ways to reduce decision costs is to reduce the number of decisions that are required to deliver working code. Every potential decision you can remove can save others in your organization time, frustration, and risk.

Alas, you will never eliminate all decisions. Such a development environment would produce very little value. Allowing variety within your organization’s engineering ecosystem can produce innovative solutions and can help to solve previously unsolvable problems. The question is, how do you navigate the tradeoff between risk, velocity, innovation, and novelty?

The cost of decisions grows alongside a code base

Projects or products early in their lifecycle benefit tremendously from the diversity of technology options available to them. Being able to evaluate a wide selection of data stores and programming languages is appropriate when your ideas are only partially formed. In some cases the technology exploration can even create a feedback loop with the product refinement process, and the two can help to shape each other.

Early-stage teams also benefit from being able to explore many processes, tools, and techniques for achieving their goals. How should the team work? What goals do we have for the quality of the code itself? These are important decisions. Each will take time as you evaluate various trade-offs.

New projects are also less complex and less susceptible to systemic effects than larger, older systems. This means each decision likely has a smaller blast radius. Similarly, early stage products and services are often being developed by smaller teams, and so the rate of change within the system is also lower. 

Taken together, there is less risk and cost associated with each decision early in a project’s life when the team and product are smaller and the customer base may not exist. Consequently, managing the decision space earlier in a project should be a lower priority as the benefits of flexibility outweigh the costs of getting things wrong.

As a project and team grow, the need for novelty begins to fade and the value of standardization increases. Once you have chosen an architecture, re-litigating the design every three months is unlikely to yield improvements. Likewise, allowing every engineer to define their own security strategy is a recipe for disaster. As a project begins to mature, the availability of choice shifts from being a benefit to an annoyance. As the project grows further, an unmanaged decision space transforms from an annoyance to being the source of extreme productivity loss and even critical failures.

Managing the Decision Space

The decision space of your project is the sum of all the potential decisions you, or your team, have to wrestle with. This includes choosing frameworks, languages, and tools. It also includes the choices you make adapting to neighboring infrastructure, or how to configure your binaries. Any choice someone has to make about what/how/when within your technical infrastructure is part of the decision space. Tracking the growth of the decision space is one of the most important jobs of a technical leader.

The number of decisions in your project’s decision space will directly influence how easy it is for developers to correctly reason about, and extend your code base. There is no correct number of decisions, but as a general guide, fewer is almost always better. Fewer decisions-per-developer means fewer opportunities to make mistakes, less time spent weighing tradeoffs, and less waste exploring unclear potential solutions. Figuring out how to manage your decision space will be a continuous balancing act for the duration of your system’s life.

The most obvious tool you have to manage your decision space is constraints. Constraints on languages, tools, design patterns, and even architecture styles. Constraints can also look like style guides, standard document templates, and the documentation of various best practices.

At Google, perhaps the most famous constraint is our Single Version, Monolithic Repository – a single code base shared by nearly every engineer in the company. There are no development branches and no versions. All code is committed to head and immediately visible to every engineer in the company. Google’s monorepo is a powerful constraint because it creates a lot of second order constraints all on its own. For example, there is only ever one version of any standard library and everyone is using it. You don’t have to wonder if the code you’re interacting with will behave the way you expect, you can just go look at the source. Additionally, given that our entire developer tool-chain is based on the monorepo, we also know that every compile step will create the same output for a given input.

Inconveniences like “it compiles on my machine” seem like they are just part of the job until you work in an environment where they cannot happen.

The use of constraints, while powerful, comes with tradeoffs. When choosing where to apply them, look for places where:

  • The availability of choice provides little or negative value (e.g. 80 vs 100 column widths or 10 different frontend frameworks).
  • There are meaningful opportunities to exploit eventual economies of scale (e.g. reuse, risk reduction, etc).
  • Constraints can drive efficiency (e.g. monorepo).

Be prepared to explain to those affected why you are limiting options. No matter how well intended, removing choice can easily be interpreted as a lack of trust in the engineers in your organization. 

Flipping the trade-off on its head, you should treat novelty as a tool and exploit it in those places in your products or teams where exploration and rapid experimentation may lead to meaningful advances against the metrics you care about, like productivity or release cadence. When used judiciously, exploration of new tools and practices can lead to important breakthroughs that would not have been possible in your existing ecosystem.

When deciding when and how to move from experimentation to mainstreaming a new library, tool, or technique, you need to consider what it will cost for your organization to incorporate it. Keep in mind that cost will include training, integration, potential migrations and more. As a general heuristic the “new option” should be at least as valuable as the old solution + the overhead of connecting with the rest of your ecosystem. An important consequence of this heuristic is that as your organization grows, the activation energy required to introduce novelty will increase as well. Being able to quickly rule out unproductive options will be the key to efficiently exploring new possibilities.
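
Expressed as a sketch (the inputs below are stand-ins for illustration, not a real costing model), the heuristic looks something like this:

interface AdoptionEstimate {
  expectedValue: number;       // e.g. productivity or reliability gains
  integrationOverhead: number; // training, migration, tooling, support
}

// A deliberately simplified decision rule: the new option has to beat the
// incumbent's value plus everything it costs to wire the new option into
// your existing ecosystem (the incumbent is already integrated).
function worthAdopting(candidate: AdoptionEstimate, incumbent: AdoptionEstimate): boolean {
  return candidate.expectedValue >= incumbent.expectedValue + candidate.integrationOverhead;
}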

How far can you push decision constraints? Can you manage 100% of the decision space of an entire engineering staff? In many ways, no - software engineers will always create decision spaces local to their teams and products. However, it is possible to employ constraints at scale to remove many common choices across an entire company.

At Google, the adoption of company-wide constraints like the monorepo, standardized infrastructure, and common transport formats has enabled thousands of engineers to avoid entire classes of decisions. The degree to which constraints can be globalized depends on how “well sealed” you can keep your internal organization. If you have clear API boundaries and standardized policies for how to manage them, it is not strictly necessary for every team to use the same languages or tools. However, Hyrum’s law tells us that it’s very easy for an API to leak and those leaks will inevitably lead to an increase in the decision space for both API owners and consumers.

When shaping technical policy, the scope of a constraint is often as important as the specific limitation it imposes.

In software development there is a fundamental tension between innovation and achieving economies of scale. Innovation can explode when everyone in your organization is empowered to imagine and create in whatever way suits them; however, that same energy will work against your ability to quickly train new teammates, meet regulatory requirements, or reason about potential defects. Different organizations, at different stages of development, will need different mixes. What’s important is acknowledging that novelty and scale are often at different ends of the same dial.

Improving Decision Making in Practice

Because software engineering is decision making, we will never eliminate the need to make choices, but we can create the conditions for better decision making outcomes. The most important thing you can do is acknowledge the role decisions play in the job of a software engineer, accept that they are everywhere, and work to make the decisions you do have to make easier. Here are six (+ a bonus!) clear actions you can take to get control of the decision space in your organization:

  • Embrace constraints. Whether you choose a limited set of programming languages or a single release tool, the fewer decisions an engineer has to make, the fewer opportunities there are for confusion and mistakes. Each decision carries a small amount of cognitive load, and given enough of them, making progress can become paralyzing. To combat this, treat your decision space as a quantity to be minimized. Measure it, track it, and create a culture that celebrates doing it well. Fair warning: introducing constraints is likely to frustrate your engineering org, and you will need to make peace with that and practice your diplomacy.
  • Learn from the decisions (and mistakes) being made around you. We seldom have the chance to practice making decisions in a consequence-free environment, which makes it difficult to “learn by doing”. However, decisions are being made around you all the time, and you can learn from those too. Become a student of the decision making processes across your organization. Learn how other kinds of decision makers think - product managers, UX designers, and more. Experiment with different techniques as you work to improve your own decision making process.
  • Create feedback loops at every level of your software development process. Feedback loops are an often forgotten, but critical part of a decision making process. Design review, code review, automated testing, retrospectives, and post-mortems - these are all opportunities to learn from the decisions you and your team make. Many of these practices are fairly standard today, but what’s less common is reviewing the efficacy of a design two, three, or five years later. Consider doing longitudinal reviews where a decision's fitness is assessed over its lifetime.
  • Optimize for archeology. The rate and scale of decision making in a software project make it difficult for those who come later to understand why or how specific decisions were made. Optimize your team’s development processes to aid future software archaeologists that will need to make decisions about how to evolve code to meet future needs. Take good notes, spend time creating a useful archive of important documents, do whatever you can to capture and organize the context of the decisions you made. Don’t just document “what”, but also document the “why” or “why not” of the decisions you have made in the past, including any assumptions you made about the use cases you were designing for.
  • Raise the level of abstraction. Developer tools and libraries that operate at very low abstraction levels require many more decisions to be made to complete simple tasks. Most of our tooling is overly focused on individual text files, and fails to help us understand the system-wide impacts of our technical choices. Some software engineers relish their mastery of esoteric low-level details, and occasionally it can be helpful to get “closer to the machine”, but dev environments, tool chains, or libraries that require operating only in terms of low level abstractions will create more complexity and lead to more unnecessary choices. Invest in building tools and frameworks that allow your team to think and build with higher level abstractions as a way of reducing the cognitive load.

    Good abstractions prevent decisions from leaking beyond your code. Poor abstractions tend to push decisions into layers of the system where they cannot be appropriately resolved. That said, BEWARE Hyrum’s law - your system likely has a larger API surface than you intended (see the sketch after this list).
  • Develop delegation strategies. Delegation is an essential skill, but it often makes new technical leaders uncomfortable. Learning to do it well helps in four distinct ways: 
    • It allows you to focus on the decisions that need your time and attention. If you are leading a team, you can’t possibly make all the decisions that need to be made to deliver working software. Having people you can trust to act according to your team’s values helps you scale. 
    • It begins training more junior engineers in the decision making process. As mentioned previously, often the only way to “learn” decision making is to practice it, and delegation allows you to provide a more constrained environment for others to train on the job. 
    • It allows new kinds of choices to be made. When you delegate a task to someone and they surprise you with a creative solution you wouldn’t have considered, that can infuse your team with new capabilities or help get them unstuck from a dead end.
    • Because software engineering is decision making, encouraging every engineer to improve their decision-making process will lead to higher-quality software.
  • BONUS TIP! Become a student of your organization’s decision making process. What does your organization value when it makes decisions? How do the internal social networks influence the decision making process? What does it take to get larger decisions made and communicated? It may seem like these are less “engineering” concerns, but as your technical decisions begin to scale you will find yourself running into organizational scale problems like building support, communication, and even being unpopular. Large-scale technical decisions will require you to embrace techniques you may have previously ignored so you may need to spend time working with senior managers to understand how they approach making decisions that affect large parts of the organization.
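
As promised above, here is a hypothetical sketch of how an abstraction can leak and quietly expand your decision space - the names and scenario are invented for illustration:

interface User {
  name: string;
  active: boolean;
}

// The contract we intended to offer: "returns the names of active users".
// Nothing is promised about ordering - that was an internal decision.
function listActiveUsers(users: User[]): string[] {
  return users.filter((user) => user.active).map((user) => user.name);
}

const allUsersSortedByCreation: User[] = [
  { name: 'ada', active: true },
  { name: 'grace', active: false },
  { name: 'linus', active: true },
];

// The contract callers observe: the first returned name "happens" to be the
// oldest active account, because the input arrived sorted by creation date.
// Per Hyrum's law, that accident becomes part of the effective API surface,
// and revisiting the ordering decision is no longer a two-way door.
const oldestActiveUser = listActiveUsers(allUsersSortedByCreation)[0]; // 'ada'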

Change is coming

AI promises to (maybe) dramatically change the relationship we Software Engineers have with code and our systems as a whole. In a world where machines can produce prodigious amounts of code, you may wonder, does that help us with our decision fatigue? I mean, LLMs can already produce 1000s of lines of code in a flash and save you from having to make a lot of tactical choices. 

I don’t believe that LLMs or any other technology we possess today will have much of an impact on the problems I outlined in this post. 

First, the code you can generate today, while impressive, is largely produced in isolated, greenfield, toy-scale projects. As I discussed, at small scales, the cost of any decision is cheap. The real costs, and therefore the real skill, come from managing lots of interlocking decisions. Keeping various chatty robot processes in agreement about the best way to generate code will still require many decisions to be made - it may well resemble the process of consensus building we do in our human teams today. I'm fairly confident that for as long as humans are expected to understand the code that is written, decisions will be required.

Second, even if new technology allows us to create a more complete layer of abstraction between our thoughts and the code, we will have only succeeded in freeing up more time to focus on making the harder decisions. In a world where LLMs or other AI systems are the primary authors, we may find that the job of every engineer begins to resemble the job of the Tech Lead (TL). And as any current TL can tell you, their job is already much less about writing code and instead it is defined by the need to make effective decisions about how the team spends its time. If everyone is a TL, new problems will emerge as we try to figure out how to coordinate all this new productivity and apply it effectively. 

Along the way to this AI hellscape Utopia, Software Engineers will have to reckon with the fact that the value they provide is not the rapid recall of deep CS trivia. No, the value of Software Engineers in the future will lie in our ability to make good decisions in the face of near complete uncertainty. It turns out, this has been the job all along.