9 Comments

I think some people see AI as a technological means of getting inside an opponent’s OODA Loop, though if AI gets it wrong that wouldn’t help.

author

I mean maybe, but I'd probably just disagree that AI realistically helps that process make well-reasoned judgements compared to institutional methods of sharing information.


I’m going to pursue this because I’m not clear on your answer. As a former intell analyst and watch officer I have long seen AI’s greatest potential as a tool for managing the firehose of information usually flooding in, ideally spotting the key bits of info amid the flotsam and jetsam. I think that at least overlaps with your point.

author

I mean I guess I just don’t really see what AI is doing in that process. That’s not really any different than Google’s search engine returning relevant results through relatively simple sets of code. But it’s not the same as taking that information and providing a judgement and characterization of it.


Agreed, but it would be asking AI to recognize possible implications of the noted information within a defined frame of reference. I don’t yet trust AI to initiate action in response to information inputs (there are a good number of people I don’t trust to do that).

author

I mean I guess I just see the problem with that being the almost certain distortions and inaccuracies of an AI characterizing that information, inevitably producing poisoned outputs that will lead to rapidly diverging assessments. It's sort of compounding errors on top of one another whenever you try to ingest large data sets and make any sort of assessment about them outside of a system with dissenting voices.

Jul 6 · Liked by James

Do you think there is demand in western militaries for AI integration in war planning?

author

I mean as a general rule there's no military that isn't attempting to integrate and leverage any technology if it can confer any advantage.


I really like this post; I've been thinking about it quite a bit today, and I definitely agree that all the AI-for-war systems, or at least the ones I've seen, have been snake oil. It made me want to be a bit pedantic about the limits of predictability, though, precisely because chaos is such a useful concept. Rambling thoughts:

Knowing a system is chaotic does tell you that small perturbations diverge arbitrarily, but what it doesn't immediately tell you is how quickly that happens relative to a human scale. Mathematically, I think you'd be trying to work out the Lyapunov exponent, but that class was a while ago.
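
To make that concrete, here's a rough sketch of my own (nothing from the post): estimating the largest Lyapunov exponent for the logistic map, i.e. the same bifurcating rabbit-population model I mention below. The parameter values are just illustrative. A positive exponent means nearby starting points pull apart exponentially; a negative one means the system settles into something predictable.

```python
import math

def lyapunov_logistic(r: float, x0: float = 0.4,
                      transient: int = 1000, steps: int = 100_000) -> float:
    """Estimate the largest Lyapunov exponent of x -> r*x*(1-x)."""
    x = x0
    # Discard a transient so we sample the attractor, not the approach to it.
    for _ in range(transient):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(steps):
        x = r * x * (1 - x)
        # For the logistic map, |f'(x)| = |r * (1 - 2x)|.
        total += math.log(abs(r * (1 - 2 * x)))
    return total / steps

if __name__ == "__main__":
    for r in (2.9, 3.5, 3.9):
        print(f"r = {r}: lambda ~ {lyapunov_logistic(r):.3f}")
    # Roughly: negative for r = 2.9 and 3.5 (stable point / stable cycle),
    # positive (~0.5 nats per step) for r = 3.9 (chaotic).
```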

For instance, Halley's Comet is part of a classic chaotic system (N-body motion), and it is unpredictable, but only on the scale of centuries; we understand the system relatively well. For your bifurcating rabbit population, that scale of predictability might be in weeks or months: your long-term intuition might be off, but you can get a pretty good idea of the current population, and how many baby bunnies are on the way. Or say you get tossed in the Octagon: you might only be able to predict your counterparty's actions for the next couple of seconds at the best of times, but if you've got a quick jab, that's plenty.

The thing is that even for chaotic systems, there's a horizon of predictability/linearizability. What matters isn't just that a system is chaotic, but how its predictability horizon compares to the timescale of the phenomena you care about. This is why weather forecasting is such an interesting example: even though it's where we discovered formal chaos and theoretical unpredictability, we've actually become pretty dang good at practical prediction in recent decades. Some of that is good heuristics and some of it is just raw computing power, but some of it is ML/AI (of the pre-LLM variety).
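
One back-of-the-envelope way to think about that horizon: if errors grow roughly like delta_0 * exp(lambda * t), the useful forecast window is about (1/lambda) * ln(tolerance / delta_0). The sketch below uses made-up numbers (the growth rate and tolerance are illustrative, not from any real forecast system), but it shows why better measurements only buy you logarithmic gains in horizon:

```python
import math

def predictability_horizon(lyapunov: float, initial_error: float,
                           tolerance: float) -> float:
    """Time (in units of 1/lyapunov) before the error exceeds the tolerance."""
    return math.log(tolerance / initial_error) / lyapunov

if __name__ == "__main__":
    lam = 0.5   # illustrative error-growth rate, per day
    tol = 1.0   # error level at which the forecast stops being useful
    for err0 in (1e-2, 1e-4, 1e-6):
        days = predictability_horizon(lam, err0, tol)
        print(f"initial error {err0:g}: horizon ~ {days:.1f} days")
    # Measuring the initial state 100x more precisely only adds about
    # ln(100)/lambda ~ 9 days of useful forecast: the horizon grows
    # logarithmically, which is why practical prediction keeps improving
    # but still hits a hard ceiling.
```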

Humans are still really, really hard: not only do you have to think about all the variables of the system you care about, but also everyone's beliefs about their systems, and then you get into beliefs-about-beliefs and the whole state space blows up through tetration. The systems you care about aren't fully unconstrained, though: friction applies to the other guy too, and physical limitations constrain the options available to them, and hence the possibilities to consider.

We're able to do it, after all. Even in the fog of war, people are able to make best-guess inferences about what others are likely to do, and do their best accordingly, with some success. It's impressive, and it probably leverages a lot of deep evolutionary practice predicting what other primates will do, but it's not magic.

The point, I think, is that while you're right that current AI systems suck because of factors intrinsic to war, the sucking isn't itself intrinsic. It's computational, and problem-specific. A repeated pattern from AI/ML in the last couple years is very hard problems just suddenly becoming tractable, either with new approaches or just loads of compute (chess then Go, various computer vision problems, various fluid and dynamical systems modelling problems, realistic image generation, and yeah, generating unlimited reams of plausible text).

When AI does at some point impact war for real, on the operational level like Luckey and co. are pitching, not autonomous drones or whatever, my guess is it'll feel a bit like the introduction of radar in WWII. Not some app that "solves war", but a specific part of the fog of war just suddenly becoming transparent, probably to the dismay and bewilderment of those involved.

Anyway, I realize that's far too long for a comment and this is in fact a Wendy's, so I'll wrap it up. Thanks for the thought-provoking post, looking forward to reading more.
