Chaos, Artificial Intelligence, and the Uncertainty of War
The Delusions of the AI Revolution Part II
(Image: Blade Runner 2049)
This is a follow-up to my first piece on how integrating AI into the military will disappoint most technology enthusiasts in the AI space. While that piece focused more on the political aspects of AI and the military, here I want to touch on how AI is unlikely to resolve the fundamental confusion and unknowability that define war.
Now, it should be taken as a given that almost everything that occurs throughout our day is cloaked in unknowability. The smallest details in our daily commute decide the difference between getting to work on time and getting hit by a truck because the driver spilled coffee on himself after a kid accidentally kicked a ball into the road.
This relationship, in which an incalculable number of interactions produces a non-linear outcome, forms the basis of chaos theory. As someone who once took a seminar in chaos theory, I'm now contractually obligated to bring it up at every possible opportunity.
The general gist of chaos theory is that you have a system that's sensitive to what are called its initial conditions. In this context, sensitivity means that even a tiny change to those initial conditions will produce radically different outcomes as the dynamic function progresses through time.
The double pendulum is the typical depiction of how changes in initial conditions play out over time to produce significant differences. While it's not clear in this .gif, depictions of dynamical systems in chaos theory typically use three or more axes, since you're plotting the system's state, things like angles and velocities, against each other rather than just its motion through ordinary space. That's called "phase space," and if you have a background in physics you've probably seen it quite a few times.
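To make sensitivity concrete, here's a minimal sketch in Python. It doesn't simulate the pendulum itself; instead it uses a much simpler one-dimensional chaotic system, the logistic map (the same equation behind the rabbit example below), and the growth rate and starting values are just illustrative assumptions on my part.

```python
# Two runs of the logistic map x_next = r * x * (1 - x), started
# one part in ten billion apart, stay together for a while and then
# diverge completely.
r = 3.9                          # growth rate in the chaotic regime (assumed)
x_a, x_b = 0.5, 0.5 + 1e-10      # nearly identical initial conditions

for step in range(1, 61):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: a = {x_a:.6f}  b = {x_b:.6f}  gap = {abs(x_a - x_b):.2e}")
```

Within a few dozen steps the two runs no longer resemble each other at all, even though the rule is fully deterministic and the starting points were, for every practical purpose, identical.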
There's more to chaos theory than what I'm covering here, but the other part I want to highlight is called irreversibility.
While a chaotic system is entirely deterministic, the system itself cannot be returned to its initial conditions. It's what makes the system non-linear in how the dynamic processes play out. Putting this in a non-mathematical way: at a physical level, it's impossible to calculate how the system would unfold unless you knew the exact initial conditions of the system.[1] Which, if you know anything about physics, you know isn't possible.[2]
To get the point across, this is what's called a bifurcation diagram, and it represents a simple boom-and-bust model for a population of rabbits, where P is the population of rabbits and r is the rate of population growth. The dynamic relationship between P and r across time determines the future rabbit population. If you take this and plot the population across time, you see wild swings in population as food is exhausted, leading to a shrinking population, which in turn means more food for more rabbits. At the extremes, it also means a potential extinction due to lack of food if there's a large boom in population. For anyone interested, you can go model this in Wolfram Alpha.
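If you'd rather not open Wolfram Alpha, here's a rough Python equivalent of the model described above. It's a sketch only: it uses the standard logistic map with the population normalized against the largest size the food supply can support, and the growth rates are simply picked to show the stable, oscillating, and chaotic regimes.

```python
# Logistic map for a rabbit population: P_next = r * P * (1 - P),
# where P is the population as a fraction of what the food supply can carry.
def simulate(r, p0=0.3, generations=40):
    p = p0
    history = []
    for _ in range(generations):
        p = r * p * (1 - p)
        history.append(p)
    return history

for r in (2.5, 3.2, 3.9):            # stable, oscillating, chaotic growth rates
    tail = simulate(r)[-6:]          # the last few generations
    print(f"r = {r}:", "  ".join(f"{p:.3f}" for p in tail))
```

At r = 2.5 the population settles to a steady level, at 3.2 it flips back and forth between two values, and at 3.9 it swings wildly with no repeating pattern, which is exactly the splitting the bifurcation diagram shows as r increases.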
The theory itself was first put forward by Edward Lorenz to explain why projections of meteorological data rapidly diverged from the observed behavior of weather systems when projected out more than a few days. The reason you can't know the weather more than a few days in advance is that the atmosphere is exactly this kind of system: tiny errors in the starting measurements compound until the forecast has nothing to do with reality.
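That divergence is easy to reproduce. Below is a hedged sketch of the kind of experiment Lorenz ran, using his well-known three-equation convection model and two starting states that differ by one part in a million. The parameter values are the standard textbook ones, and it assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz's simplified convection model: three coupled equations."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_span = (0.0, 40.0)
t_eval = np.linspace(*t_span, 4001)

# Two "forecasts" whose initial measurements differ by one part in a million.
run_a = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval)
run_b = solve_ivp(lorenz, t_span, [1.000001, 1.0, 1.0], t_eval=t_eval)

# Distance between the two forecasts at each moment in time.
gap = np.linalg.norm(run_a.y - run_b.y, axis=0)
for t in (5, 10, 20, 30, 40):
    i = min(np.searchsorted(t_eval, t), len(t_eval) - 1)
    print(f"t = {t:2d}: separation = {gap[i]:.4f}")
```

Early on the two runs are effectively the same forecast; well before the end of the window they are about as far apart as two unrelated states of the system, which is the forecaster's horizon problem in miniature.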
This gets me to the topic of AI and why successfully modeling something as complex and unknowable as war is never going to happen.
I assume most of you are at least somewhat familiar with Clausewitz's idea of the "fog of war," but if you aren't, I'll briefly explain what it means.
Fog of war refers to the uncertainty and incompleteness of information available to a commander (or any participant) in a battlespace. As a rule, you will never have all possible information available to you in a conflict, and you will have to make decisions knowing that there are always gaps in what you can know.
It’s also important to mention that it’s simply impossible to know what other people are thinking. No amount of technology will ever let you read minds, and your opponent always has a vote in a war. You don’t have perfect information on what an opposing force is doing since they’re also human beings with free will who are going to act in their interests.
This is all of course without mentioning that you're being shot at the entire time by someone who is actively trying to kill you. Every decision you make is simultaneously being countered by an enemy who is adapting to every choice you make.
Managing a military in peacetime is already a difficult task before introducing the fear, stress, and emotion of combat.
So what are we asking AI to do here? The biggest proponents of integrating AI at a vast scale in the military seem to be fixated on two primary advantages in decision-making—that AI will be faster than any human being and that AI can simulate all possibilities.
For the first point—that AI will be faster than human beings at analyzing information—I don’t see where that helps exactly. If they mean that AI will be able to rapidly ingest technical information from sensors and output usable data, I hate to be the one who tells them that computers already do that. That’s just programming and mathematics—there’s nothing AI adds.
What I suspect people mean when they say this though is that AI will be able to collate all possible information from ISR into a real-time situational picture of the battlespace.
First off, with or without an AI, this isn’t going to happen. Our adversaries aren’t just going to sit there and let us maximize the amount of information we know about them—they’re going to be actively countering us and there’s no reason to expect we’d have any sort of freedom in our ISR activities. Our adversaries will also obviously be actively attempting to mislead our sensors and introduce false information into our systems.
Typically people aren’t content to just sit back and die, so they’re not going to agree to this proposition of letting the American military have perfect information.
To be more generous, what I think people mean here is that they believe that AI can accurately interpret information for a battlefield commander. AI, in their reasoning, is free of human things like "emotions" and would therefore be able to render something like an objective judgment.[3]
This is where I want to go back to the chaos theory from earlier and point out that there is no way to calculate any of this. The smallest error in initial conditions leads to an output that radically deviates over time. Taking in mass amounts of data and sending it rapidly to a battlefield commander does nothing to alleviate the problem of information in an evolving battlespace.
Now you might say, the AI does it fast! Surely that must help!
If we’re talking about a limited set of information, with a limited prediction, there’s a much smaller chance of critical errors. But that’s the same if I just looked at that information myself. You don’t need an AI to do that.
The problem that people seem to want to solve is to take mass amounts of information and use AI to make that information coherent. However, the more information you introduce, the greater the odds that the AI is going to produce a rapidly divergent result.
If you go back and look at the diagram depicting the rabbit population, you can think about the mass ingestion of information into an AI as the growth rate for a rabbit population. As you increase the information, you quickly push into a chaotic system where the smallest difference rapidly (and non-linearly) produces inaccurate results.
While that's a problem that regular human beings also deal with, introducing a rapid delivery of that information to a commander crowds out the possibility of presenting alternative interpretations. You're eliminating the process of forming well-rounded judgments when you do this, and you remove the ability to have dissenting voices in the room.
You don’t counteract the flow of bad and inaccurate information by getting the information to someone faster.
Doing things on time matters, but you need more voices involved to interpret what is being seen in different ways. Unless, of course, you want what led to "weapons of mass destruction in Iraq"-style reports being delivered to every operational command in the U.S. military.
Good decisions in military decision-making come from collaborative work and from deliberating over what you're being presented. Commanders have staffs for a reason.
This all goes towards the second great hope for AI: "simulating all possibilities." I don't think I have to belabor this point too much, but it's also not a thing. It's not going to happen. If there are too many variables and interconnected moving pieces to make judgments in the short term, you're certainly not doing it over an extended period.
Similarly to how a meteorologist can’t tell you where a hurricane will be in two weeks, an AI is not going to simulate the actions that will be taken during a conflict.
Throw even the smallest choice off, and your entire system will rapidly diverge from what was predicted. Political leaders will adjust to the conflict, and military leaders will iteratively adapt moment by moment. Nobody is following a script.
This is of course not to mention the inability to ever know the initial conditions of the system you're looking at, and the inherent irreversibility of a non-linear system. Knowing the state of things at one moment doesn't tell you enough about the conditions shaping the next step. An AI is no more capable of predicting how this system will play out than I'm capable of disproving gravity.
To use a metaphor here, war is like an infinite series of those double pendulums from earlier, constantly and dynamically interacting with each other to produce new conditions. You can't simulate that in any capacity, and thinking you can is only going to end with your plans going up in flames when your adversary disagrees with your perfectly constructed maneuver.
I’m not saying people shouldn’t plan, but you need to understand what you’re asking for. This? This just isn’t going to happen.
[1] For anyone more interested, there's a thought experiment called "Laplace's Demon" that deals with knowing an entire system's future given perfect knowledge of it at one point in time. The idea is that if you knew the exact initial conditions of the Universe, you would know everything that followed from that point in a mechanistic system. The problem with this is that the computational power to do that is physically impossible, so you don't get to be God.
[2] This deserves an asterisk because some forms of reversibility are possible, like Brownian motion. People also often interpret Maxwell's Demon incorrectly; his real point was that, theoretically, particles could behave in such a way as to be reversible. However, in reality, the statistical nature of reversibility means that it is a practical impossibility.
[3] This is without getting into things like the philosophy of AI. There's no such thing as an "objective" judgment from an AI. All systems are fundamentally reflections of what creates them, so an AI will be just as fallible and filled with bias as any human being.
I think some people see AI as a technological means of getting inside an opponent’s OODA Loop, though if AI gets it wrong that wouldn’t help.
Do you think there is demand in western militaries for AI integration in war planning?