Chaos, Artificial Intelligence, and the Uncertainty of War
The Delusions of the AI Revolution Part II
Blade Runner 2049
This is a follow-up to my first piece on how integrating AI into the military will disappoint most technology enthusiasts in the AI space. While my first part was focused more on the political aspects of AI and the military, I want to touch on how AI is unlikely to resolve the fundamental confusion and unknowability that defines war.
Now, it should be taken as a given that almost everything that occurs throughout our day is cloaked in unknowability. The smallest details of our daily commute decide the difference between getting to work on time and getting hit by a truck because the driver spilled coffee on himself after a kid accidentally kicked a ball into the road.
This relationship, in which an incalculable number of interactions produces a non-linear outcome, forms the basis of chaos theory. As someone who once took a seminar in chaos theory, I’m now contractually obligated to bring it up at every possible opportunity.
The general gist of chaos theory is that you have a system that’s sensitive to its initial conditions. In this context, sensitivity means that even a tiny change to the initial conditions will produce radically different outcomes as the system evolves through time.
The double pendulum is the typical depiction of how changes in initial conditions play out over time to produce significant differences. While it’s not clear in this .gif, there are three axes when depicting dynamical systems in chaos theory, since you’re also showing movement through space and time. This is called “phase space,” and if you have a background in physics you’ve probably seen it quite a few times.
While there’s more to chaos theory than what I’m covering here, the other part I want to touch on is called irreversibility.
While a chaotic system is entirely deterministic, the system itself cannot be run backward to recover its initial conditions. That’s part of what makes the system non-linear in how its dynamics play out. Putting this in a non-mathematical way, at a physical level it’s impossible to calculate how the system will unfold unless you know the exact initial conditions of the system.1 Which, if you know anything about physics, you know isn’t possible.2
To get the point across, this is what’s called a bifurcation diagram. It comes from a simple model of a rabbit population, where P is the population of rabbits and r is the rate of population growth. The dynamic relationship between P and r across time determines the future rabbit population. If you plot the population across time, you see wild swings as food is exhausted, leading to a shrinking population, which in turn means more food for more rabbits. At the extremes, a large enough boom in population can even mean extinction due to lack of food. For anyone interested, you can go model this in Wolfram Alpha.
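If you’d rather poke at it yourself, here’s a minimal sketch in Python of the usual textbook version of that rabbit model, the logistic map. The growth rates and starting populations below are illustrative choices on my part, not anything taken from the diagram:

```python
# Logistic map: next_P = r * P * (1 - P), where P is the rabbit population
# as a fraction of the maximum the food supply can sustain, and r is the growth rate.
def step(p, r):
    return r * p * (1 - p)

def run(p0, r, generations=60):
    traj = [p0]
    for _ in range(generations):
        traj.append(step(traj[-1], r))
    return traj

# Low growth rate: the population settles down. High growth rate: wild swings.
print([round(p, 3) for p in run(0.2, 2.8)][-5:])   # settles near a single value
print([round(p, 3) for p in run(0.2, 3.9)][-5:])   # never settles, bounces all over

# Sensitivity to initial conditions: two populations that start one part in a
# million apart end up nowhere near each other within a few dozen generations.
a, b = run(0.200000, 3.9), run(0.200001, 3.9)
print(round(a[50], 3), round(b[50], 3))
```

At a growth rate of 2.8 the model behaves itself; push the rate up into the chaotic range and the same one-in-a-million difference in the starting population gives you a completely different history.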
The theory itself was first put forward to explain why projections of meteorological data rapidly diverged from the observed behavior of weather systems when projected out more than a few days. The reason you can’t know the weather more than a few days in advance is that tiny errors in the initial measurements grow until the forecast tells you nothing.
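This is roughly what Lorenz stumbled on: his stripped-down convection model is only three equations, and a sketch like the one below (his standard parameter values; the initial offset and time window are my own illustrative choices) shows the forecast error blowing up on its own:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lorenz's stripped-down convection model, with his classic parameter values.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, state):
    x, y, z = state
    return [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z]

t_eval = np.linspace(0, 40, 4001)
truth = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], t_eval=t_eval)
# The "forecast" starts from a measurement that's off by one part in a billion.
forecast = solve_ivp(lorenz, (0, 40), [1.0 + 1e-9, 1.0, 1.0], t_eval=t_eval)

# Forecast error over model time: starts microscopic, ends up as large as the weather itself.
error = np.linalg.norm(truth.y - forecast.y, axis=0)
for i in range(0, 4001, 800):
    print(round(t_eval[i], 1), error[i])
```

An error of one part in a billion at the start of the run ends up as big as the entire signal by the end of it.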
This gets me to the topic of AI and why modeling something as complex and unknowable as war will never happen.
I assume most of you are at least somewhat familiar with Clausewitz’s ideas of the “fog of war,” but if you aren’t I’ll just briefly explain what it means.
Fog of war refers to the uncertainty and incompleteness of information available to a commander (or any participant) in a battlespace. As a rule, you will never have all possible information available to you in a conflict, and you will have to make decisions knowing that there are always gaps in what you can know.
It’s also important to mention that it’s simply impossible to know what other people are thinking. No amount of technology will ever let you read minds, and your opponent always has a vote in a war. You don’t have perfect information on what an opposing force is doing since they’re also human beings with free will who are going to act in their interests.
This is all of course without mentioning that you’re being shot at the entire time by someone else who is actively trying to kill you. Every decision you make is simultaneously being countered by your enemy, who is actively adapting to every choice you’re making.
Managing a military in peacetime is already a difficult task before introducing the fear, stress, and emotion of combat.
So what are we asking AI to do here? The biggest proponents of integrating AI at a vast scale in the military seem to be fixated on two primary advantages in decision-making—that AI will be faster than any human being and that AI can simulate all possibilities.
For the first point—that AI will be faster than human beings at analyzing information—I don’t see where that helps exactly. If they mean that AI will be able to rapidly ingest technical information from sensors and output usable data, I hate to be the one who tells them that computers already do that. That’s just programming and mathematics—there’s nothing AI adds.
What I suspect people mean when they say this though is that AI will be able to collate all possible information from ISR into a real-time situational picture of the battlespace.
First off, with or without an AI, this isn’t going to happen. Our adversaries aren’t just going to sit there and let us maximize the amount of information we know about them—they’re going to be actively countering us and there’s no reason to expect we’d have any sort of freedom in our ISR activities. Our adversaries will also obviously be actively attempting to mislead our sensors and introduce false information into our systems.
Typically people aren’t content to just sit back and die, so they’re not going to agree to this proposition of letting the American military have perfect information.
To be more generous, what I think people mean here is that they believe that AI can accurately interpret information for a battlefield commander. AI, in their reasoning, is free of human things like “emotions” and would therefore be able to render something like an objective judgment.3
This is where I want to go back to the chaos theory from earlier and point out that there is no way to calculate any of this. The smallest error in initial conditions leads to an output that over time radically deviates. Taking in mass amounts of data and sending it rapidly to a battlefield commander does nothing to alleviate the problem of information in an evolving battlespace.
Now you might say, the AI does it fast! Surely that must help!
If we’re talking about a limited set of information, with a limited prediction, there’s a much smaller chance of critical errors. But that’s the same if I just looked at that information myself. You don’t need an AI to do that.
The problem that people seem to want to solve is to take mass amounts of information and use AI to make that information coherent. However, the more information you introduce, the greater the odds that the AI is going to produce a rapidly divergent result.
If you go back and look at the diagram depicting the rabbit population, you can think about the mass ingestion of information into an AI as the growth rate for a rabbit population. As you increase the information, you quickly tip into a chaotic system where the smallest difference rapidly (and non-linearly) produces inaccurate results.
While that’s a problem that regular human beings also deal with, introducing a rapid delivery of that information to a commander pushes out the possibility of presenting alternative information. You’re eliminating the process of creating well-rounded judgments when you do this, and you remove the ability to have dissenting voices in the process.
You don’t counteract the flow of bad and inaccurate information by getting the information to someone faster.
Doing things on time matters, but you need more voices involved to interpret what is being seen in different ways. Unless, of course, you want what led to “weapons of mass destruction in Iraq” style reports being delivered to every operational command in the U.S. military.
You produce good decisions in military decision-making through collaborative work and working through what you’re being presented. Commanders have staff for a reason.
This all goes towards the second great hope for AI: “simulating all possibilities.” I don’t think I have to belabor this point too much, but it’s also not a thing. It’s not going to happen. If there are too many variables and interconnected moving pieces to make judgments in the short term, you’re certainly not doing it over an extended period.
Similarly to how a meteorologist can’t tell you where a hurricane will be in two weeks, an AI is not going to simulate the actions that will be taken during a conflict.
Throw even the smallest choice off, and your entire system will rapidly diverge from what was predicted. Political leaders will rapidly adjust to the conflict, and military leaders will iteratively adapt moment by moment. Nobody is following a script.
This is of course not to mention the inability to ever know the initial conditions of a system that you’re looking at, and the inherent irreversibility of a non-linear system. Knowing one thing at a time doesn’t tell you anything about the conditions for the next step in a system. An AI is no more capable of predicting how this system will play out than I’m capable of disproving gravity.
To use a metaphor here, war is like an infinite series of those double pendulums from earlier constantly and dynamically interacting with each other to produce new conditions. You can’t simulate that in any capacity, and thinking you can is only going to end with your well-constructed plans going up in flames when your adversary disagrees with your perfectly constructed maneuver.
I’m not saying people shouldn’t plan, but you need to understand what you’re asking for. This? This just isn’t going to happen.
For anyone more interested, there’s a thought experiment called “Laplace’s Demon” that deals with this idea: if you knew the exact initial conditions of the Universe, in a mechanistic system you would know everything that followed from that point. The problem is that the computational power to do that is physically impossible, so you don’t get to be God.
This deserves an asterisk because some forms of reversibility are possible, like Brownian motion. People also often interpret Maxwell’s Demon incorrectly: his real point was that, theoretically, particles could behave in such a way as to be reversible. In reality, however, the statistical nature of reversibility means that it is a practical impossibility.
This is without getting into things like the philosophy of AI. There’s no such thing as an “objective” judgment from an AI. All systems are fundamentally reflections of what creates them, so an AI will be just as fallible and filled with bias as any human being.
I think some people see AI as a technological means of getting inside an opponent’s OODA Loop, though if AI gets it wrong that wouldn’t help.
I really like this post; I've been thinking about it quite a bit today, and I definitely agree that all the AI-for-war systems, or at least the ones I've seen, have been snake oil. It made me want to be a bit pedantic about the limits of predictability, though, precisely because chaos is such a useful concept. Rambling thoughts:
Knowing a system is chaotic tells you that small perturbations eventually diverge arbitrarily far, but what it doesn't immediately tell you is how quickly that happens relative to a human scale. Mathematically, I think you'd be trying to work out the Lyapunov exponent, but that class was a while ago.
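For what it's worth, the back-of-the-envelope version I remember looks something like this (logistic-map numbers only, and from memory, so treat it as a sketch rather than gospel):

```python
import math

# Crude estimate of the largest Lyapunov exponent of the logistic map
# P_next = r * P * (1 - P): the average of log|f'(P)| along a long trajectory.
def lyapunov(r, p=0.2, transient=1000, steps=100_000):
    for _ in range(transient):                   # let the transient die out first
        p = r * p * (1 - p)
    total = 0.0
    for _ in range(steps):
        total += math.log(abs(r * (1 - 2 * p)))  # |derivative of r*P*(1-P)|
        p = r * p * (1 - p)
    return total / steps

print(lyapunov(3.2))  # negative: periodic regime, small errors shrink
print(lyapunov(3.9))  # positive: chaotic regime, small errors grow exponentially
```

A negative exponent means small errors damp out; a positive one sets the rate at which they blow up, which is what sets the timescales in the examples below.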
For instance, Halley's Comet is part of a classic chaotic system (N-body motion), and it is unpredictable, but only on the scale of centuries; we understand the system relatively well. For your bifurcating rabbit population, that scale of predictability might be weeks or months: your long-term intuition might be off, but you can get a pretty good idea of the current population and how many baby bunnies are on the way. Or say you get tossed in the Octagon: you might only be able to predict your counterparty's actions for the next couple of seconds at the best of times, but if you've got a quick jab, that's plenty.
The thing is that even for chaotic systems, there's a horizon of predictability/linearizability. What matters isn't just that a system is chaotic, but how its predictability horizon compares to the timescale of the phenomena you care about. This is why weather forecasting is such an interesting example: even though it's where we discovered formal chaos and theoretical unpredictability, we've actually become pretty dang good at practical prediction in recent decades. Some of that is good heuristics, some of it is just raw computing power, and some of it is ML/AI (of the pre-LLM variety).
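If you take the exponent from above as the growth rate of your error, the horizon falls out of a one-liner, and it shows why better sensors only buy you logarithmic improvements (the numbers below are purely illustrative):

```python
import math

def horizon(lyapunov_exponent, initial_error, tolerance):
    """Rough time until an initial error grows to the tolerance,
    assuming errors grow like exp(lyapunov_exponent * t)."""
    return math.log(tolerance / initial_error) / lyapunov_exponent

# Cutting the measurement error by a factor of 100 only adds a fixed chunk
# of lead time; it doesn't multiply the horizon.
for err in (1e-2, 1e-4, 1e-6):
    print(err, round(horizon(0.9, err, 1.0), 1))
```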
Humans are still really, really hard: not only do you have to think about all the variables of the system you care about, but also everyone's beliefs about their systems; then you get into beliefs-about-beliefs and the whole state space blows up through tetration. The systems you care about aren't fully unconstrained, though: friction applies to the other guy too, and physical limitations constrain the options available to them, and hence the possibilities you have to consider.
We're able to do it, after all. Even in the fog of war, people are able to make best-guess inferences about what others are likely to do, and act accordingly, with some success. It's impressive, and it probably leverages a lot of deep evolutionary practice predicting what other primates will do, but it's not magic.
The point, I think, is that while you're right that current AI systems suck because of factors intrinsic to war, the sucking isn't itself intrinsic. It's computational, and problem-specific. A repeated pattern from AI/ML in the last couple years is very hard problems just suddenly becoming tractable, either with new approaches or just loads of compute (chess then Go, various computer vision problems, various fluid and dynamical systems modelling problems, realistic image generation, and yeah, generating unlimited reams of plausible text).
When AI does at some point make a real impact on war (on the operational level like Luckey and co. are pitching, not autonomous drones or whatever), my guess is it'll feel a bit like the introduction of radar in WWII. Not some app that "solves war", but a specific part of the fog of war just suddenly becoming transparent, probably to the dismay and bewilderment of those involved.
Anyway, I realize that's far too long for a comment and this is in fact a Wendy's, so I'll wrap it up. Thanks for the thought-provoking post, looking forward to reading more.