Plain English with Derek Thompson: The Future of War Is Here

The Ringer | 5/16/23 | 1h 26m | Transcript

Hey, it's Sean Fennessy, one of the hosts of the Prestige TV podcast.

HBO's Barry is back for a fourth and final season, and that means I'll be back recapping

the show with co-creator and star Bill Hader to dive deep on the themes, scenes, and major

moments in the series.

Bill will provide insight into how every episode was made and why it's ending.

New Prestige TV Barry recaps will go live every Sunday night when the episode ends, so make

sure you're subscribed to the Prestige TV podcast wherever you get your podcasts.

What do you think of when you think about AI and war?

Maybe you think about Skynet and the Terminator.

Maybe you think about killer drones going rogue.

Or maybe you're a little bit more creative and you think about misaligned AI acting as a terrorist, stealing money, ransoming military leaders, and generally creating havoc.

Or maybe for some reason, you think about peace.

You think about a curtain of autonomous robots hovering in the air, threatening to demolish

any invading army with such gravitas that we all freeze in place and we get world peace

under the watchful eye of aligned AI.

It seems to me that AI has a way of doing this in the common discourse of the day.

It has a way of pushing everybody's ideas about the future into two clean and opposite

categories.

It's dystopia or utopia.

Either everybody dies or it's heaven on earth.

But the world doesn't really ever work out like that, does it?

You and I don't live in dystopia, and we never have.

And we don't live in utopia either, and we never will.

We live in the messy in between, where some things get better and some things are shit.

Today's episode is about trying to think about how AI will change the future of military

campaigns.

And this is admittedly a subject where I am so, so, so, so far from being an expert.

So this is a show where two guests do a lot of my thinking for me.

First up, we have Brian Schimpf, the CEO of Anduril, named after the sword Andúril in The Lord of the Rings.

Anduril is a young military technology company that builds AI systems for the Defense Department.

Most famously, drones and anti-drone technology, a lot of which we have sold to Ukraine.

And Anduril often notes that there is more AI in a Tesla than in a typical American military vehicle.

There's better computer vision in a Snapchat app than in a DoD system.

And that's a problem, because America's geopolitical nemeses are building AI into their military

systems.

It's also ironic, because historically the US military was a fount of software technology.

This is how we got the internet.

But presently we're behind the eight ball, Schimpf says, and in particular we are very

likely behind China.

Among most people who share my politics, drones have a terrible reputation.

They are killing machines.

They are tools of war, and war often goes haywire.

So in this conversation with Schimpf, we talk about not just how this technology works,

which is very important, but also how we can ensure that its use is ethical and responsive

to democratic processes.

Next up we have my friend, the Atlantic author Ross Andersen, who has done some very deep

thinking about the next major question in AI and war, which is, how could all of this

go wrong?

And not just a little wrong, I mean disastrously wrong, at the scale of threatening human civilization

and blowing up the world wrong.

So that's today's episode: how AI is already changing war, and the rules we need to set in place to make sure it does not blow up the world.

I'm Derek Thompson, and this is Plain English.

Brian Schimpf, welcome to the podcast.

Thank you very much.

Pleasure to be here.

Question: what is Anduril?

Why did you start this company?

So Anduril's goal is to create a defense technology company.

So what does that mean, and how is that different than what has been happening in defense for

the last, you know, 100 years?

When you look at the state of defense today, what the defense industrial base is really good at is

building really big ships, really big planes, building tanks, kind of the technologies that

we have looked at as the key for how the military operates for, you know, the duration of the

Cold War, and in a lot of ways, essentially unchanged since World War II.

You fast forward 20, 30 years, what's going to start to be different, right?

You start to see how software, how AI starts to impact how warfare is conducted.

That starts to look like more systems that are cheaper, that are more intelligent, that take fewer humans to operate, that enable the humans to do what they're actually very good at: making informed decisions with the context and understanding of politics and of the consequences of their actions, in a way that machines are always going to be kind of limited. But we want to get humans out of this game of doing the very mechanical parts of operating these big systems, and have them working with smarter, lower-cost systems.

So that sort of focus on software and technology is really key.

The other part of it is, our view when we started the company was about the pace of getting new technologies into the hands of the actual soldiers, the sailors, all those folks that are actually doing the operations and conducting military engagements. We want to get them real technology, but the system has evolved again on this mindset of

I'm building an aircraft carrier, I'm building a submarine, I'm building a giant plane.

And that is a very cumbersome, a very slow and a very expensive process to get those

things to work.

When you start talking about how do I take advantage of, you know, smarter drones, smarter surveillance systems, more autonomous unmanned capabilities, you can move wildly faster.

So a second part of what we've done as a business and, you know, beyond just the technology

is really focus on how can we apply a different business model that is kind of fit for the

types of technologies that are going to be relevant for the next 30 years and look at

how do we invest our own dollars to be able to get things out there quicker?

How do we find a way to kind of work through this process and get technology fielded as

fast as possible?

We're going to go a little bit deeper into just about all of that, but first I would

love to hear you answer this question at a personal level.

Like why did you, Brian, decide that this is what you wanted to do with your life of

all the different kinds of startups that you could have been a part of?

Why were you so interested in military technology?

So I think the characteristic, you know, with a lot of the folks working in national security

is they are passionate about national security.

They really believe in the mission.

They believe in why they are doing it, and they believe in the importance of the US having the best technology.

And for me, that is very much a huge part of the motivation.

This is the sort of thing that I believe the world is better off when the US is able to

kind of keep a sane world order where conflict is not the default way you resolve issues,

where using military force is not going to be that effective.

And so that's sort of the worldview I truly believe in.

I think, you know, for better or for worse, conflict is a part of human nature and making

it so that these things are unwinnable, that countries have the defenses they need to

maintain their sovereignty.

These are things that I think are incredibly important and I think the US of all the world

powers is the one founded on human rights and freedom as important constructs.

It's harder to say that for a lot of the other major potential world powers.

So I think there is sort of a, for me, a moral imperative on working in these areas.

And from a technologist perspective, I think the ability to actually do these things in

a way that's more intelligent, that's more proportionate, that's more limited is actually

a very good thing.

You know, like, war will happen for better or for worse, and having the best technology that is as limited and targeted as possible, that feels like a good thing and something I feel like I could move the needle on.

I think it'd be useful for a lot of listeners for you to help us level set on what the

American military is good at right now and where our deficit is so that we can understand,

you know, why we would need new technology to bring our capabilities into the 21st century.

So again, what is the US strength as a military?

The US has built up a way of conducting war, which is described as sort of this power projection

idea.

We have the ability to fight the sort of away games where we can send our carriers.

We have bases worldwide.

We have like a very large scale, very experienced military that is able to conduct, you know,

operations anywhere in the globe.

And we've done that by largely investing in these very large, very expensive capabilities.

So things like aircraft carriers, stealth bombers, advanced fighters.

And you know, kind of since World War II, those were very effective strategies if you

believed you were going to be in this world of sort of state-on-state conflict and that

was what you needed to provide.

I think the reality is there's a bit of a shift.

So one, we had the war on terror.

That was what preoccupied us the most for, you know, the last 20 years where that was

very much about how do we kind of do the best intelligence possible?

We need to know very precisely where, you know, people were, how they were networked,

how were they connected and be able to conduct very precise counter-insurgency operations.

Very limited, kind of, you know, like a very ground-heavy type of conflict.

But all about that sort of precision and getting every move correct.

Ukraine has kind of shifted this where the two major things kind of happened in my opinion.

So one, there was a resurgence of state-on-state conflict.

I think there was kind of, people didn't really believe this was going to be a way a state

would try to exert its will to try to get to its political ends.

But as we saw, it still is, right?

Like Putin thought this would be an effective way to accomplish, you know, whatever it was in Putin's head that he was trying to accomplish.

So, you know, I think that is back, right?

That is a well-funded state entity with real advanced capabilities that is, you know, looking

to use military force to accomplish their end.

The second is the U.S. did not directly engage, right?

And so that is another thing that I think is a little subtle, but we kind of take it for granted a little bit. We've reset the norm. It was a little unclear when this started: how much support would we provide?

Would we get engaged?

Would NATO get engaged?

What would trip these different conditions?

Like how much would this be the U.S. and Western Europe directly fighting?

But I think the norm in a lot of ways was set where the U.S. and the Western allies would

provide intelligence, they provide support financially, they provide support with weapons

systems and everything that our allies need to defend themselves and conduct their own,

you know, kind of try to withstand this aggressor, right?

And so that was a big shift.

But then you look at what's happened here and it's actually quite interesting.

It matches a lot of the thesis we had of how these things would play out where day one

of the conflict, you had all of the air bases, all of the F-16s, all those things were knocked

out, right?

Like they were not relevant to the fight.

You had a massive dispersal of forces.

So instead of having these relatively expensive, easy-to-target air bases and military compounds,

you had the Ukrainian forces and the Russian forces both learn very quickly that the optimal

strategy was to break it up, to have essentially uneconomic targets.

And you have a lot of this more smaller unit fighting, but then how you provide them intelligence,

what do they need for drones to be able to surveil the adversary?

Just the sheer volume of ammunition and weapons they're going through is huge.

And so I think you saw a pretty big shift where this isn't a large-scale naval battle.

This isn't, you know, tanks are having limited effect, there isn't a real air campaign, there's

no like major fighter jet pieces here.

This is really looking like a lot of very tactical, short-range engagements and a lot

of them simultaneously.

And that is a very different way of conducting warfare than we've been used to.

But I think in a lot of ways that's going to be the hallmark of what this next generation

of military engagements looks like, which is highly defensive in nature.

How do I withstand a stronger adversary trying to exert its will on me and prevent them from

being successful?

I'm not going to decisively win this as Ukraine.

I'm not going to knock out Russia.

That is not the goal.

But the goal is to make it so impossible to succeed that people take military options

off the table as an effective means of accomplishing their political ends.

I want to bring in the technology revolution that we're seeing as well, because you make

AI products, you make unmanned autonomous products, and the fact that you exist, the

fact that a startup has to push the technological frontier here, suggests that the U.S. military

missed something.

Right?

I mean, you wouldn't have to exist if the U.S. military was all over these technologies

and clearly owned the tech frontier.

So what happened?

Why do you think this institution, the U.S. military, which succeeded in big manned technology, did not do as well in pivoting towards smaller unmanned technology?

Yeah, I think there's kind of like a process and incentives thing with any organization,

right?

And so in any organization, you've got sort of the process pieces of they built up a way

of ensuring safety, reliability, based on what they were doing.

So you look at the life cycle of a submarine, like an attack submarine.

Those are going to go into service.

Next generation, let's say, goes into service in 2030 with a lifespan through 2080.

These are very long duration platforms, and then the rigor you want to do on the engineering,

the amount of effort you put into that and the safety considerations around this are

massive.

But that all costs money.

That all takes time.

That is a very particular way of working, when you need that level of rigor.

And so the processes that have served them well on building these big manned things are

kind of the opposite of what you need in a lot of these lower cost systems, where when

you look at what has happened in Silicon Valley, Silicon Valley has figured out and the tech

world at large has figured out, how do I move faster?

How do I just learn faster, iterate faster, velocity has become the predominant thing,

and then figuring out all the process and technology pieces around that to ensure that

I still have quality, the systems still work.

I can manage thousands of developers simultaneously contributing on this without them stepping

on each other's toes.

Those are hard problems, and Silicon Valley has really figured that out, but they had

a very different set of consequences and structures and incentives for how to do this.

And so I think there was a genuine and real learning and innovation that happened in how

you conduct business, how you build technologies in the valley that is not obvious to a lot

of folks.

This idea that velocity sort of trumps nearly anything else in terms of what you're trying to accomplish, that you can just learn faster, is a pretty subtle but very important idea.

So the military, because of all their incentives, they built a system that was kind of dialed

in for it, right?

Like they wanted safe, predictable, large scale things, and they got it.

But then that is what works against you when you need to move fast, when you need to iterate

and when you need to actually do things where you can absorb more risk.

You can have these systems just get out there faster and learn faster.

And the consequence from a cost perspective is like, hey, this drone wasn't as effective as I wanted, but I can make another one very, very quickly.

If you're not tuned for that world, it is very, very hard.

So I really do think it was not sort of inherently a question of like, were people sort of incompetent

or did they not have the talent?

I think this is just purely a matter of the system that worked for so long is working

against them now.

And in a lot of ways, that's what I think startups can do well.

They can say, I don't have this legacy business where 95% of my money is coming from the platform

I made 30 years ago.

I have the opportunity to look at this differently and my success is contingent on getting everyone

to understand a different way of operating.

Let's talk about your portfolio.

What are your contracts with the federal government as much as you can talk about them?

What are you selling to the federal government and how are they using your technology?

So cross cutting everything we do, we think of this as sort of more intelligent, unmanned

and autonomous systems.

So for us, it is very much: where are the areas where smarter, networked, more intelligent, really sensor-heavy systems will move the needle in terms of providing capability?

We have a software platform crossing all of this that we do and trying to take the best

of how tech companies have learned how to build those advanced software technologies

into the military space, right?

Like you invest heavily in these core technologies, you apply them kind of broadly.

From a hardware perspective, we build a lot of hardware systems and there's a lot of reasons

for that.

A lot of them practical, around how can you actually get stuff out to the field quickest.

Like the military at the end of the day, they buy integrated systems, they buy it by capability.

Can I stop you there?

When you say the military buys integrated systems, I kind of understand what that means,

but I kind of don't understand what that means.

So in plainer language, what are you saying?

So there's sort of this view that when you're selling to, like, a Fortune 500 company, and you're selling, for example, cybersecurity, you can sell them a piece of network gear or something like that, and they'll have an IT shop that is able to pull all this together. Or you're selling them a part, and the company is often doing all that integration to make the whole system work.

The military needs to buy it already working.

When they buy a drone, they need to, they're not buying like an airframe and then separately

buying this and separate buying sensors and network and then they're going to bolt it

all together.

They need to buy a full system: a ground control system, the drone with the sensors integrated, with the training, with how do I actually teach people to use it? So they need it holistically. This has to be something they can train troops on, deploy, service, operate. So it's quite a holistic thing that they need to buy at the end of the day.

They don't have the in-house capability to kind of pull all these things together.

I would love to understand just at a really basic level how drone and anti-drone technology

actually works.

So talk to me like I'm a moderately smart high schooler here.

How does this stuff actually work?

So when you think of drones, you've got kind of quadcopters, so four rotors, one on each arm.

Those are like the commercial ones you see, like DJI makes these, you can buy them at

Best Buy and you've got these bigger ones we think of as like US military, right?

Like the Predator drones, these big planes that can kind of take off.

A lot of what we're focused on is somewhere in between, how do we shrink these things

down, make them so that they can actually be brought by small numbers of troops to the

front lines, launched and go find things.

And then you have the question of what do you want to even do with these things, right?

What is the point of having this?

And by and large, the goal is surveillance and reconnaissance.

I want to know where the troops are on the other side.

I want to know what they're doing.

Are they getting ready to launch a missile at me?

Or are they moving their tanks, right?

And like you kind of have these questions of kind of surveillance at the end of the

day.

I just want to be able to see things, find things like the goal of the military is often

just to find and identify where things are and what is their intent, right?

That is the goal.

So where autonomy starts to come in on this is how can I make it so that I have much less

manual effort involved in doing this.

So often a big constraint when you look at this is that it's two sticks: I'm flying it around and I'm looking at a video screen.

When we think about adding in more autonomy to this, I can now start to push more of that

intent of what am I trying to do to these drones?

I can say: go find tanks. They're probably on those roads, go out along this course, go look for them, use computer vision to say, yep, I think I found tanks here, here, and here, report that back to the operator, who decides what they want to do with that information.

So the drone space that we focus on is, you know, how do we make this smaller?

How do we make this easier to employ?

And how do we reduce the manpower so it's not just people holding joysticks, looking

at a camera, they can now start to operate many of these while doing their other job,

which is being a soldier, actually making sure they're secure, you know, conducting

whatever operation they're on.

So that's sort of how we think about the drone side.

When you get to the counter drone side, right, you're trying to say, well, I don't want

to get surveilled.

And then the extension of this is some of these drones are kind of suicide munitions.

They are explosive.

They're effectively cruise missiles.

These are things that an adversary can send in and say, hey, go hit this target.

And now what do I have to do?

I have to be vigilant and know where are these showing up, right?

So I have to like detect and track and identify these to start.

That is hard.

These things are small.

They often look like birds on radar and on cameras at range.

They are flying very low.

So even being able to get a line of sight to see them is very, very hard.

And they can strike anywhere.

Right?

So it's not just a big military base; it could be a power plant, as we're seeing in Ukraine.

And so the challenge you have is, you know, step one, identify them, know where they are.

Know it's not a friendly aircraft.

Know it's not a bird.

Know it's something you need to engage with.

And then you have to decide what to do about it.

And there's a handful of strategies you can actually take.

I can try to jam communications.

I can jam their ability to navigate and that can work to some degree, but there's a lot

of countermeasures you can do against that.

And ultimately what does work is I can shoot it down.

That can be a missile.

That can be a gun.

But I got to start with this process of detecting them, identifying them, deciding to act and

then taking that effect against it.

And that whole engagement might happen in a matter of a minute or two, max.

So your ability to very quickly understand what's happening, make a decision and respond

is the fundamental thing you've got to solve for a lot of these problems.

And now you have to do it at scale, with hundreds of humans that are very tired and get very fatigued doing the same activity day in and day out, and who can never make a mistake.

So how can we use technology to automate more of that process?

And again, the same analogy applies.

The historic means is guys with joysticks looking at screens, I can take a lot of those

manual steps out, just present them with decisions when they need to know.

That's a huge step forward in our efficacy of how we can actually start to combat these

problems at the scale we're going to need to.

I'm really interested in how this technology actually works in a battlefield or along a

border.

So maybe the best way to get at that question is to ask a hypothetical like this.

If your company had started a decade or 15 years ago, and it had reached full maturity before the Russia-Ukraine war even started, what would you and the federal government that's contracting with Anduril have given Ukraine that would have helped them discourage an invasion by Russia?

Yeah.

So I think about this a lot and there's kind of like a couple different facets to this

when you think about what are these conflicts really going to look like.

So the most obvious one, kind of the most visible, is the weapons side.

How do you provide intelligent weapons that are going to be able to hit the types of military

targets and work at the sort of economics of what is feasible here? At the end of the day, these can't be $10 million missiles.

They have to be very cheap.

There's going to be a lot of them and you have to make it very clear that the volume

can be sustained.

That is a huge part of what we've seen in Ukraine is this is not a one-day conflict.

This is a multi-year conflict.

So how do you have the ability to produce and operate at scale?

I think this would look like things like long range, smart, loitering munitions.

So there's a couple of these in the world. The Javelin missiles, for example, have been very effective against tanks, but those work at a couple of kilometers away.

That's a pretty dicey position to be in if you're a Ukrainian, like right up against

the armor column coming in.

If you could provide that capability at 100 miles away and I knew there was a column of

tanks coming and I could deploy 50 of these to go out and deter them, that would be a very strong deterrent, right?

It's like, you will not drive on this route.

It will not work.

Okay.

Well, that really is a very clear indication of like, well, that strategy is not going

to be effective.

So I think on the weapons side, getting very precise, very targeted systems working at long ranges really tips the scale.

I think the second is you need surveillance and an understanding of what's going on.

So low cost drones that troops on the front lines can deploy that work at relevant ranges

that can spot these sort of enemy positions and be able to identify where they are, what

they are and be able to do this at scale without needing a huge amount of manpower is very,

very key.

And the other aspect is defensive in nature.

So how do I provide the ability to counter all of those things I just described, right?

Because if we can figure this out, the other guys are going to figure this out too, which

is, you know, counter drone capabilities, counter cruise missile capabilities.

How do I provide this air defense picture that tells me what's going on, the ability

to have all the, you know, counter drone missiles I need or jamming capabilities I need to be

able to make it so these things can't operate effectively.

And then underpinning this is kind of the resilient communications, right?

We kind of take for granted that this is really important.

A big part of this has been that Zelensky is able to communicate out to the world, communicate

to his country what is going on, tell the story, retain support, and that the troops

can communicate with each other.

That wasn't obvious when this started, you know, the cell phone infrastructure was taken

out, Starlink has been a huge kind of lifesaver in this, but having a more proactive strategy

for this is really, really key.

We're kind of working in all of those spaces.

So, you know, some stuff we have deployed today, some things we're working on and enhancing.

All of those are subtly different than what I think the U.S. has historically wanted to

procure, the capabilities they've, you know, kind of historically specced out.

And a lot of the technology you're seeing there today is things that were built up in

the 80s and 90s, right?

Like these are not kind of cutting edge capabilities.

These are things that we've kind of had solved since, you know, around the end of the Cold

War.

So, you know, I think there's a lot of, you know, how do we make sort of smaller, higher

quantity systems that really work in this new paradigm of lots of distributed, you know,

forces fighting?

And then on the defensive side, I'm protecting whole cities.

I'm not protecting, you know, just a small group of troops.

I'm protecting Kyiv, protecting the critical infrastructure.

How do I make that a reality?

Given all the tactics we've been seeing that the Russians have been employing.

So just to reiterate for my own benefit: you're talking about giving this sort of parallel-universe Ukraine, this Ukraine that exists in February 2022 in the world in which Anduril was founded a decade earlier, next-gen anti-tank missiles. You're giving them armed drones that provide a kind of curtain of surveillance that discourages a border crossing into the Donbas in the first place.

You're giving them anti-drone technology.

You're giving them resilient communications technology.

And it seems to me, just stop me if I'm wrong here, the goal is to discourage a Russian

invasion in the first place to not even get into a multi-year war that then unlocks the

need for even more sophisticated and unmanned military technology.

Is that all correct?

That's 100% right.

And so I think the goal is, it has to be obvious that a conflict is unwinnable, right?

And I think that's probably the right term for this.

No one is going to decisively win these conflicts in the future.

A sort of full and unequivocal surrender is unlikely to happen.

And so if that is obvious, wars are conducted to accomplish political ends.

And if it is sort of so obvious that the deterrent capability is there that your invasion won't

succeed, the other side won't capitulate, then the military option becomes far less

attractive.

And so I think that's a huge part of it: this ability to deter an invasion, to take those strategies off the table, is very, very key.

And that's one of the more consequential ways you can see these conflicts playing out.

I think it's very relevant to Taiwan, I think it's very relevant to Eastern Europe.

I think writ large, they're all going to look at this with a perspective of: how would I make it very clear, through a display of force and kind of a revealing of capability, that any means of driving large-scale conflict will not work, right? It's just not going to accomplish your political ends, and it's not a winning strategy.

That is a lot of how we thought about this.

And we, as you know, are going to get to Taiwan in just a second.

I want to double back and ask a question about how countries should communicate to their

potential adversaries exactly what kind of military technology that they have.

So for example, this is a question I think we got from the audience when we talked at the Progress Summit in Los Angeles, and I thought it was a really interesting question.

You just made what seems to me like a perfectly valid point: a great way to discourage an invasion (let's just go back to Russia and Ukraine) is for Ukraine to be able to project the fact that they have such an arsenal of military technology that victory is impossible.

But the more you communicate exactly what military technology you have, the better Russia

can plan to counter that military technology.

So I know we're getting a little bit into the question of strategy and communications

and outside the realm of pure technology.

But if you thought a little bit about this question of shouldn't a country want to keep

some of its military secrets a secret?

Yeah, I absolutely think there's this constant sort of debate on like a reveal, conceal perspective.

And the conceal part is actually really valuable, in the sense that if the adversary knows you're concealing certain things, then anything is possible in terms of what could happen.

So there's kind of a tricky strategy question here.

I think the other dimension to this is a lot of these technologies do not present an easy

counter.

And so there's sort of always this question of technologies that either favor the offense

or the defense, and it's a little blurry.

So it's like, stopping an armored convoy coming in: is that inherently an offensive action you're taking against that convoy, or defensive in nature?

It's a little bit tricky often to have a clean line there.

But a lot of these just present very challenging problems where you're sort of on the right side of this cost and efficacy curve, this technology curve. In those contexts, revealing is very effective: the strategy you'd have to take if you want to do an invasion will not work for all these reasons, and you can't really beat the technology, or it's so expensive to beat that it'll take you a decade.

And the nature of defense is it's always this sort of like cat and mouse game, right?

You're always kind of adversaries are learning, adapting, building technologies to counter

you.

You're learning what they're doing, building technologies to counter them.

It is a constant race here for better or for worse.

I think what has kind of shifted is that the race has moved away from these sort of strategic deterrents like nuclear weapons, the question of how do we have bigger and more nuclear weapons that are more secret. That's kind of reached stasis, right? That has reached an equilibrium.

Now I think we're seeing a similar thing on the conventional side where kind of these

sort of conventional conflicts will kind of get to a point of equilibrium, cyber, right?

All these realms.

I think new realms of warfare will happen.

A lot of what I think we'll see is moving to these more kind of below threshold of conflict,

armed conflict sort of engagements, kind of covert actions, sabotage, weird sort of,

you know, paramilitary, semi-military exertion of force trying to extend your sea claims, like

all these things that are very hard to have decisive action against.

And that will, I think, be a very effective strategy as well.

And that'll kind of spur a new set of ways of how do we counter that?

How do we, you know, kind of not escalate conflicts?

These are all just, I think the nature will shift, but I think taking these large-scale

military conflicts off the table is something we can reach a stable equilibrium on quite

quickly here.

All right, well, let's head right into the trillion dollar question here.

What are the sort of things that the U.S. could give Taiwan from your portfolio of technology

that could make an invasion of the island less likely?

I think it's a lot of those things we described, right?

So it's how do I have the defensive capabilities to protect what is going to have to be a whole

island, right?

And no longer just protect these one or two military bases, right?

The volume of missiles that China can launch at Taiwan is too high.

So how do I make this as sort of uneconomic as possible, make this as dispersed as possible,

enable them all the communications possible that they need?

All those pieces are sort of still there.

I think when you look at the Taiwan scenario, it's a little more constrained in good and

bad ways.

The logistics of an invasion are harder coming across a channel, an ocean, or you're coming

via air.

Those are both very hard to do.

There's very real choke points on where those invasions can happen.

And so I think there are kind of more constraints to this that make an invasion scenario highly favor the defense. You're largely trying to deter these, you know, maritime invasions, right?

And so like anti-ship capabilities, things like that become really, really key.

I think a lot of the, you know, autonomous undersea capabilities, things like that become

really relevant as well.

So I think there's a lot of aspects where, you know, the other interesting angle for

Taiwan is the Chinese strategy on this has been, you see in the South China Sea, I think

DoD released recently all the islands and the reefs that China has built up military

presence on and has basically been incrementally exerting more claims into the South China Sea.

They're essentially saying: this is our territory. We're going to have fishing rights, we're going to have mining rights, we're going to have mineral rights.

We're going to use kind of low grade force and harassment to exert those claims.

That's a hard thing to counter, right?

And Taiwan has to have a response, Philippines has to have a response, Vietnam has to have

a response.

And so figuring out ways to, you know, kind of provide those defensive capabilities, protect

your rights, you know, as you're doing mining or, you know, oil and gas or anything like

that.

How do I protect those from, you know, sort of adversarial action becomes really, really

key.

So I think there's a lot of like subtle parts of this where a lot of the autonomous capabilities

are really relevant, where you're not going to have the number of pilots and operators you need.

You're not going to have an economy to support, you know, the huge amount of capital ships

you would otherwise require.

So this has to look unmanned in a lot of ways.

This has to be much more cost effective to counter it.

And those are, you know, kind of writ large the types of technologies we're working on.

And of course, if we arm Taiwan with a bunch of autonomous submarines and other technology

that you've mentioned, there's a possibility that it makes Taiwan safer because it has exactly

the effect that you've described, which is discouragement.

There's also at least a small chance that China seeing Taiwan collect all of these new technologies

says to itself, we might have to accelerate our military takeover plans.

So how do you think about the fact that you are building tools that in some versions of

the future are used to discourage war, but in other versions of the future are used as

a kind of casus belli by policymakers to accelerate warfare rather than merely strengthen

defensive systems?

I think there's a really tricky foreign policy line there. As a company, our view is aligned with the military's, which is aligned with, you know, kind of US foreign policy positions. Number one, the job of the military is to provide the president options, right? Like that is their job. They are not the policy people. They want to do things ethically, responsibly and effectively, but how use of force is applied, that is inherently a question for civilian leadership.

That's their job.

In the same view, I think our view as technologists is that we have to provide these technologies

in ways that are effective, that work, and we clearly communicate the limits and what

these things can and can't do, and that we, you know, are employing the technologies in

a way that makes humans more accountable, that gives them more agency, that gives them

more control and reinforces kind of the policy doctrine we have, right?

Which is humans are accountable for their actions, that rolls up through elected officials,

and that's a pretty good system.

And so I think on the whole, our sort of framework on this is, you know, our job is not to conduct

foreign policy.

Our job is to provide effective technology solutions and do it as ethically as we can.

And kind of the question of like, how do you thread this, you know, if I'm a policy person

in Washington, like what's the right way to think about this?

It's quite tricky, obviously, right?

Like, you know, I think there's a lot of ways to kind of analyze this problem.

I think the view has been, you know, sort of a non-antagonism strategy: that pulling China into this sort of, you know, Western rules-based system would kind of mute a lot of the more authoritarian dimensions, would increase freedom, would pull them in through economic interests into a more stable system.

I think history has told us again and again, that's kind of a questionable strategy.

I think, you know, again, with Russia here, it was not obviously rational why you would have even invaded Ukraine. But certain leaders, especially in these sort of more unstable authoritarian setups, have incentives to do so, right, in sort of weird cases.

And so I think it's kind of hard to imagine a world where a completely pacifist Taiwan is somehow a more stable system, one less likely to induce conflict, than one that can actually defend itself. It just won't work; it's just not possible for that to work.

So how do you thread this sort of transition period where, you know, one power is rising,

one power is declining, it's quite tricky.

Those are the most unstable times.

And I don't think there's a perfect answer for this.

Yeah, I want to hold on this topic.

I know that it's thorny, but I think these questions are really important, like in keeping

with these sort of ethical gut checks.

So obviously, autonomous technology is the sort of thing that could be used for good

or could be used for something really terrible.

And a good example there is, you know, surveillance technology in Xinjiang, in the western province

of China, is being used not to, you know, discourage war, but to create a police state

to subjugate an ethnic minority.

How does the fact that this kind of technology can so clearly lead to a kind of, you know,

digital Orwellian state, how does that make you guys think harder about the ethical guard

rails around this tech?

So a lot of this is you can't get enamored with what can be done.

You have to sort of really think about how would this technology be most responsibly

employed and understand the policy frameworks that the government operates in, right?

And I think that's really key.

I think people forget that the U.S. actually has extensive rules on use of force, extensive

rules on, you know, kind of authorities here, how it flows down from the president, what

legal basis we have to have in these different contexts, how much information we have to

have, use of surveillance technologies and like what sort of checks and balances we have

in the system.

And I think you can disagree with those at times.

You could, you know, wish them to be different.

But at the core of it, the U.S. has, you know, we've kind of taken the position that this

should be real policy.

This is not an authoritarian system, you know, this is accountable to a civilian leadership

and we will refine and enhance these use of force guidelines and intelligence guidelines over time.

I think that's a good system.

I really believe in that system.

So then as a technologist, I think our view is, you know, where can these things be done

responsibly and accurately represent what is possible?

So simple examples of this are like, you know, we've had people ask us to do, you know, long

range facial recognition, we're like, it's not going to work the way you think it's going

to work.

And they're asking honestly, right?

They're asking like, if I could get, you know, 99.9999% confidence of like, you know, this

to work, it's like, yeah, that world might be believable, but that's not the world we

live in.

Here's the actual efficacy of this.

So we can't, we can't be selling sort of a false promise of what these technologies

can do.

When we think about autonomous systems, it's very much about, you know, nobody is asking

for nor would we propose that these systems go out and autonomously choose which targets

they're going to engage.

It's too complicated, right?

It's like, it can make mistakes.

The accuracy is, you know, not where we would want it to be for that to work.

We need a human kind of vetting these things at the end of the day.

And then on top of it, you're kind of relying on the system actually understanding all the political and societal context, and having at its disposal all this information that we hold humans accountable to.

So our view has always been, this is about enhancing human accountability on this.

Like that's the goal, reducing the fog of war, making them make better decisions.

That feels like something that will lead to better outcomes, and it gives you more policy levers: how do you inform people and make sure the decisions are made at the right level of confidence, and get us out of these dire situations where you're sort of forced into making compromised decisions because of necessity.

And I think that's sort of how we've thought about it on the whole.

One of the ways that I'm trying to get smarter in the frames that I use to think about technology is to distinguish between what I've come to think of as, quote, mere moreness. Like, hey, here's a thing we can do.

So it's good.

Here's a new thing we can do.

It's we've pushed technology forward.

Therefore, it's good.

Here's a thing we can make more of.

Therefore, it's good.

That's what I would call mere moreness, and then there's technological progress.

And that's the ability to use technology to actually improve people's lives, to reduce

pain and suffering in the world and raise the ceiling of human flourishing.

And so I was interested in asking you, like, what's an example of a technology that you've

explored at Anduril, that you came to realize, wait, I'm not sure we can stand by this.

I'm not sure that we can support this within the ethical guardrails that you and I have

just discussed.

Just to go a little bit deeper, would you say that long range facial recognition is

a good example of a place where you explored and thought, you know what, this can't really

be done in a way that is efficacious and ethical?

Yeah, I think that's a great example, where the physics and the technology just don't enable it to work today.

This can't be done in a way that, you know, like the probability you're going to misidentify

someone is too high.

Like, it's just not reasonable.

Like you can't say: I found someone I think is, you know, a high-ranking drug cartel member, but actually there's only a 20% chance it's him. The math doesn't work out.

And so from our view, we have to actually be able to look at this in the context of the system and what they're trying to accomplish. We can do the engineering, understand the system, understand the math of this, to say: here's what will work and what will not, or here's the probability you're going to have consequences that are unintended. Is that acceptable in this context or not? If we can accurately represent those trades, I think that informs policymakers, the decision makers, on where they want to be on this.

They can make fully informed decisions.

And you know, there's that that has certainly been kind of our view to this on the whole.

So I think that's a great example of one that we've, you know, kind of early on looked at

and said, or had a request to look at and said, I don't think it's going to do what

you want it to do.

I want to move to 2023 and domestic politics.

Tell me what contracts, and again, as much detail as is reasonably allowed, what kind

of contracts have you signed under the Biden administration?

How has the U.S. military continued to expand its portfolio of Anduril products?

So the vast majority of contracts we've signed as a company have been during the Biden administration.

The biggest one to date in the U.S. has been with special operations to do their counter-drone

work.

Kind of the capabilities to detect, track, identify drones, and then counter them.

And it's structured in a way that we're able to evolve this to be able to, you know, respond

to the threats that are actually being seen.

We've done a lot of work with, you know, kind of drone capabilities at large, like

how does the military kind of take advantage and integrate these smaller, more autonomous

systems?

That's been a big part of the portfolio as well.

And then a lot of work on what we would call kind of this like joint command and control.

So how do we think about the Air Force and the Navy and the Army and the Marine Corps

all kind of operating together, understanding what's happening in this battle space and

being able to respond and operate very quickly?

We've done a lot of work there as well.

All with the intent of, you know, the pace and nature of these conflicts has increased

tremendously.

And it was wildly different from what we saw in, you know, kind of the war on terror.

And this needs to be something where we can make highly informed decisions very, very

quickly and be able to respond.

And so increasing that pace of how we operate as a country, as a military has been a huge

focus as well.

So it's been kind of quite comprehensive, I would say, you know, national security in

our experience has been one of the most bipartisan supported efforts, right?

People believe in a strong defense, they believe America needs to have these technologies.

I think you see kind of like shifts on the margin from administration to administration.

But on the whole, the military has, I think, kind of tried very hard and by and large upheld

its belief as a nonpartisan entity that is serving the national interest and has not

been as much of a political hot potato as many other issues.

How's your anti-balloon technology?

Oh, man, so many ideas.

The balloon thing's actually wild, right?

Because it's an example of, you know, one, I kind of think there's been kind of this

question of, is China actually building military capability?

Is this all just saber rattling?

And I generally believe that if the Biden administration says, hey, definitively we

have this evidence that this is a military or intelligence balloon, like they're not

lying.

They're not, you know, they're actually quite honest and high integrity on the whole in

terms of like really thinking, you know, releasing information they believe to be factually accurate,

right?

So I'm inclined to take them at face value that this is, in fact, a Chinese military

capability or intelligence capability.

And I think it's indicative of the types of capabilities that China's building very quickly.

It's a very cheap way to get effective surveillance capabilities that on the whole are actually

kind of hard to detect and defeat.

So there's a lot of really interesting nuances of like why these things are actually like

quite hard to find and quite hard to shoot down.

It really gets very weedsy fast, but it's a pretty effective technology area.

So I think it's a really interesting indication that China's active.

They're really looking, they're really building a real military intelligence capability that's

global.

And that is something, you know, on the face of it, we should take very seriously and

understand that they are a real, you know, entity that's able to play at this level of

the game, essentially.

You said that one of the biggest areas of contracting with the Biden administration is in anti-drone technology. Just one level deeper.

Where are we talking about anti-drone technology being employed? Are we talking about it along the border of the US? Is it in Eastern Europe? Is it near China, surrounding our allies in the region? Like, where is this anti-drone technology being employed?

I can't really say exactly where it's being deployed, like the zip code and here's the specific base and geolocation of where it is. But I would think of it as military base security kind of within the US, and then, you know, foreign military installations overseas. That's kind of overall where we've been deployed to date.

And most of these locations have had varying degrees of sort of drone threat,

everything from actual, you know, kind of similar things you're seeing in Ukraine to

more hard to attribute kind of commercial drones flying over air bases and things like that.

So it kind of varies across the spectrum.

But I think the thing we've seen is, again, technology that was historically uniquely military, very exquisite, is now extremely proliferated. And this is a big shift, where for the last 20, 30 years, the US didn't have to think about air defense. There was nobody else with air superiority in anything we were doing.

And when we think of air defense, it's like protecting relatively few assets.

And we have this huge benefit in the US of geography.

But now if these things are pervasive, they're cheap, everyone can have access to them.

It suddenly gets very hard, right, where now everything has this concern of protection,

security, you know, air defense in a way that's quite a bit different.

Every unit going out faces that threat from drones.

That's a big shift from, you know, kind of where we were.

You look back to, like, the Iraq war: the threat was shooting down these Scud missiles, right? For the folks that remember, those were big missiles shot at long range, targeting US military installations. That's not the threat anymore.

It can come from anywhere at any level.

And that is a very hard problem to counter.

So a lot of what we focused on with this is how do we make it cheap?

How do we get the manning down as low as possible?

Make this so it's as close as possible to: you get alerted, there's a drone, and here are your options of what to do about it. It has to be that simple. Contrast that with where we are today, where you have humans, times three shifts, staring at multiple screens, manually correlating all the sensor data to figure out: what am I looking at?

Is this the threat?

And they have like, you know, seconds to respond and then knowing what, you know,

kind of techniques will be effective against this is incredibly subtle.

And they may only see like five of these in their whole career, right?

So it's a very tricky problem on this defense side to actually crack at the scale we're going to have to operate at, which is very different from how we've historically looked at kind of high-end air defense, right?

This isn't defending against North Korean missiles.

This is defending against this pervasive low end threat, which can harass and

cause problems for nearly any unit at any time.

My last question for you is about talent.

I think it's ironic that Silicon Valley was in many ways birthed out of the

defense industry and investments made by the defense industry in that part of

California, but today it seems to me like most people working in what I would

call the tech industry do not want to work in this space anymore.

They don't want Google to contract with the defense department.

They don't want to be involved with war.

We are living, at least relative to the 1940s and 1950s, in a kind of peacetime, at least in the Western Hemisphere.

And, I think, tech people don't want to think of themselves as belonging to the international war industry.

How do you think about balancing on the one hand the virtue of wanting peace with

the need to keep the military at the cutting edge of this kind of technology

so that we aren't lapped by China and Russia and other adversaries in the next generation?

So first off, I think what we've seen is that there's a very vocal, but very

small set of the tech population that feels very strongly about that.

And I love that in America.

That is something we embrace and we want them to have, right?

That is not an option in China so much, right?

And so I think that's, you know, first and foremost, very important to keep in

mind that people certainly get this choice and I love that about this country.

But I think at the end of the day, what we've seen is that it's a relatively small minority.

The second piece of this is that the war in Ukraine really changed opinions for folks. I think there was a belief in the sort of end-of-history view that conflict was over and that economic stability would lead to a stable international order. Similar claims were made before World War One and World War Two, right? I think history just tells us that that is inherently not true, and that the world has benefited from this sort of unipolar world where the US, on the whole, is trying to enforce a rules-based order.

You can disagree with specific conflicts and how we engaged in them. But at the end of the day, we are not trying to use military force to, you know, massively extend our territory, our rights, any of those sorts of things.

And so I do think the world has benefited from this.

And I think when people examine that and look at some of the alternatives, they come to a similar conclusion: that a world where the US has the ability to support that order is a beneficial thing.

And, you know, there are going to be a lot of legs to this, right? There's the diplomatic angle and the relations we have there. There's trade and economics, and there's going to be military support.

And all three of those have to be used in concert to be effective.

And on the whole, I think people get that.

They get that this is actually pretty complicated, that this is probably not

going away and that they'd rather be in a world where the US has the edge.

And that is probably a net better world.

Then I think people do have very reasonable disagreements around use of force: when we should employ it and how we should employ it. And that is a very important debate to have, one that I am very happy we get to have in America.

And so I think, on the whole, we've found people really gravitate to the problems, to the mission, to the impact it can have. We've had no issue recruiting.

There are a lot of people who truly believe it is a sort of imperative that the US have access to these technologies, and that it's not so simple that not participating makes everything peaceful, right?

That's just empirically not true.

And so I think a lot of folks choose to work in it.

We've had no issue recruiting.

And I think there's been a massive shift in perception of why this is

important and a reminder that for better or for worse, conflict has not gone away.

And we would like to get to a world where it is off the table, but that only comes through strength. Hard-power deterrence is, for better or for worse, still part of the equation.

Brian Schimpf, thank you very, very much.

Thank you.

That was my interview with Anduril CEO Brian Schimpf.

And as Schimpf said several different times, drone warfare is not meant to

replace human decision making.

Anduril, you could say, is automating troops, not automating generals, right?

They want elected officials or people empowered by elected officials to make

the key decisions for how to use these weapons systems.

But here's where the story gets a little bit hairy for me.

What we're seeing today with AI, the likes of ChatGPT, is that artificial

intelligence is encroaching on extremely human faculties like creativity,

writing, and even logic.

Over time, AI might climb the corporate ladder from grunt to manager in many industries: sports and music, entertainment and defense. War.

In an essay in the Atlantic magazine, the author Ross Andersen recently examined

the question of whether we should allow AI into the situation room, into the

cabinet, into the West Wing, whether artificial intelligence should be allowed

to make decisions about how the United States conducts war.

The title and subtitle of that piece are, quote: never give artificial intelligence the nuclear codes.

The temptation to automate command and control will be great.

The danger is greater.

And now, here is the author of that essay, Ross Andersen. Ross Andersen, welcome to the show.

Derek, thanks for having me.

So before you and I talk about killer AI and the ethics of automating our

military decision making in a hypothetical future, I have a question for you

about reality.

In the current scenario, what happens if the US president is informed that a

nuclear missile is headed for America?

It's an interesting question and I hope one that remains a hypothetical.

It's interesting that the answer to this question has changed over the years.

It used to be, at the start of the Cold War, or several years in, once the United States and the Soviet Union both had large nuclear arsenals, that the weapons were carried by bombers, by planes. Those take hours to travel between Eurasia and North America.

And both countries set up radar stations such that they would have a lot of

warning that a nuclear attack was underway, something like an hour and a

half, two hours maybe.

And then over the history of the Cold War, what you see is both sides looking

to innovate in their launch technology and the weapons themselves such that

instead of bombers, now they're using ICBMs, which not only make that same

trip in just 30 minutes, but when they arrive, they come screaming down from space; you really have very little chance of stopping them.

And so now the president, if he or she is faced with this devastating news that

a nuclear attack on the United States appears to be underway, they've got maybe

25 minutes, maybe 30 minutes.

Then when you have nuclear submarines that have been perfected and are

patrolling the oceans everywhere, that decision window, as they call it, has

now shrunk down to something like 15 minutes.

And there are now technologies on the way that could shrink it further still to

maybe like seven or eight minutes.

So your piece opens with a twist on this absolutely terrifying seven minute

experience that the American president has to undergo, understanding that

there's a nuclear attack on America.

Your piece opens with a war game, a scenario that has been used to test our

military's decision making.

Tell me everything about this war game.

This war game was actually developed by Jacquelyn Schneider, who runs the war games program at the Hoover Institution at Stanford.

And it's about nuclear command and control.

And here's the idea.

You're hustled into the situation room with the president.

You get the absolutely terrible news that an adversary of the United States

is considering a nuclear attack on our country.

There's a wrinkle, however. Intelligence officials tell you that they have good reason to believe that the enemy has developed a cyber weapon whose purpose is to interfere with nuclear command and control, the system that the president uses to get his or her launch orders out to the nuclear forces, which means your ability to retaliate is in serious question.

And the people who played this war game were not jokers like you and me, right? They're not just people who like to think about this stuff on the side.

Like these were former heads of state.

These are NATO officials.

These are former foreign ministers.

These are people who have thought about this scenario, should have thought

about this scenario and really thought through what they might do if this were

to happen.

And chillingly, some of them said, OK, well, in that case, we have to delegate

launch authority out to officers out in the field at the actual, say,

missile silos or in the submarines.

And look, that itself, disaggregating nuclear command from the president to lots of different individuals who might have different psychological temperaments or judgments and don't have the president's level of support, comes with a whole bunch of issues, which is a reason we don't do that.

So that in itself is like, oh, wow, that's pretty wild that people are willing

to do that.

But even more chilling is that one of the regular answers was that people said, OK, automate it: have an AI make the decision as to whether to launch a retaliatory nuclear strike.

And I found that just shocking that people with that kind of life experience

and career experience and presumably sobriety would be willing to give

algorithms control over whether the United States would enter into a nuclear exchange.

The title of your article is Never Give Artificial Intelligence the Nuclear Codes. And when I saw the title, my reaction was, well, of course, we should

never give, you know, an extremely evolved version of ChatGPT, the nuclear codes.

That would be insane.

What's so wild about this story and about this anecdote of the war game is

that it's not just, as you said, jokers like you and me and people on Twitter

and Instagram saying, yeah, you know, maybe in some scenario in the future,

we should give a large language model or some advanced AI access to nuclear

codes. No, these are military leaders who have played this war game and essentially said that, in this not entirely unforeseeable scenario, we could imagine automating Armageddon.

But as you note, Ross, there's precedent for the idea of automating Armageddon.

Tell me a little bit about Russia's Dead Hand program.

Yeah. Yeah. And this is, it's crazy.

I mean, it's funny. I had some ambient knowledge about this that I'd forgotten.

And I remember when I came across this again in the research, it was sort of

like shocking to me anew.

But yeah, it turns out that during the late Cold War, you know, again, as I was saying earlier, both sides were constantly trying to innovate. And every time one side made an innovation, it disturbed that strategic equilibrium of mutually assured destruction, right?

Because one side thinks, well, maybe the other has an advantage

and they could achieve a first strike on us.

And so what the Soviets did is they developed this dead hand technology

that was very simple and, you know, you could quibble with whether you call it

AI, but the principle is it's not a human decision maker.

The idea was that if this machine stopped receiving communications from the Kremlin,

it would inquire into the atmospheric conditions above Moscow.

And if it saw, you know, bright flashes and high levels of radioactivity, the sorts of atmospheric conditions that you would associate with a large nuclear attack, it wouldn't just retaliate. The way it worked was it sent up an ICBM, and that ICBM broadcast a radio signal to all the Soviet silos, and they would send everything.

So it really was kind of automating Armageddon.
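To make the logic concrete, here is a toy sketch in Python of the decision rule as Andersen describes it: silence from the Kremlin, plus signs of a strike over Moscow, triggers a command rocket that broadcasts the launch order. The timeout and thresholds are invented for illustration; this is not the actual Soviet system's design.

```python
# Toy sketch of the Dead Hand rule described above.
# The timeout and sensor thresholds are invented; not the real system.

def kremlin_silent(seconds_since_last_message: float) -> bool:
    """Condition 1: the machine has stopped hearing from the Kremlin."""
    return seconds_since_last_message > 600  # hypothetical timeout

def moscow_looks_hit(bright_flashes: int, radiation_level: float) -> bool:
    """Condition 2: atmospheric conditions consistent with a nuclear strike."""
    return bright_flashes > 0 and radiation_level > 10.0  # hypothetical thresholds

def dead_hand(seconds_silent: float, flashes: int, radiation: float) -> str:
    if kremlin_silent(seconds_silent) and moscow_looks_hit(flashes, radiation):
        # Launch a command rocket whose radio signal orders every silo to fire.
        return "broadcast launch order to all silos"
    return "hold"

print(dead_hand(seconds_silent=1200, flashes=3, radiation=50.0))  # broadcast
print(dead_hand(seconds_silent=30, flashes=0, radiation=0.2))     # hold
```

Even in this toy version, the chilling part is visible: once both conditions are met, no branch of the program asks a human.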

Now, its purpose, you know, to add some nuance to that was not because the Soviets

were like, you know, let's go.

We really want Armageddon.

I mean, it was to shore up their deterrent in the face of what they perceived

to be a technological or a series of technological setbacks.

And I don't want to suggest it's just Russia that has thought about or acted

on this inclination to automate the nuclear process.

Have any American researchers or military folks suggested that the U.S.

should develop our own version of dead hand?

Yeah, absolutely.

There are two guys, two scholars, who both worked in nuclear command and control, and they put out an article a few years ago, one of them fairly

senior at the Air Force Institute, making an explicit argument that the United

States, because of this shrinking decision window, again, because we're getting

down from a world where we had two hours, then 30 minutes with ICBMs, and now, with submarines, 15 minutes.

And, you know, missile technology development continues apace.

Maybe that's going to inch down to seven minutes, six minutes, five minutes.

And their point was that in that kind of world, you know, maybe I don't want to say

sleepy Joe Biden, but, you know, maybe it's tough to rouse Joe Biden from his

bedside at three in the morning or he's in the bathroom, you know, you're eating

up two minutes, just getting the guy in front of the nuclear football.

And so maybe, these guys were saying, there's an argument for introducing our own dead hand.

So just to catch people up: the idea of automating the nuclear process has already, to some degree, been acted on by Russia. It's something that Americans have already discussed. And now here is this war game in which military leaders have responded by saying we should actually increase the degree to which we automate our nuclear response process.

Now, how much power is AI currently being given to make decisions on the

battlefield?

Like, does the Pentagon allow the development of AI weapons that can make

kill shots on their own?

That's a good question.

And I want to be honest: a lot of this is highly classified, so the picture is murky. However, the Pentagon did recently adjust its policy to make clear that it can. It's funny, they sort of hide behind, you know, all these various caveats, but they are allowed to develop such a weapon, one that can make kill shots on its own.

And look, in South Korea, along the border, they famously have these robots that are able to shoot on their own.

But the other place that you see AI in the military chain of command right now is at the troop level, right?

At the soldier level, you know, we're talking about drone swarms.

We're talking about algorithms that can dogfight in an F-16, with maybe the human doing the shooting.

You have tanks that can identify and engage targets on their own that are

kind of always scanning the horizon, looking for someone with an IED or a,

you know, a machine gun or something.

And that's where this technology is today.

Like, I don't want to suggest that lots of people are rushing to do this; no one is inviting artificial intelligence into, like, the Joint Chiefs of Staff.

However, it's not that far of a leap to imagine a world in which they would.

But yeah, so now you have AI working on the battlefield, almost in like a soldier role, right? But you could imagine a world in the near future, actually, where AI starts to move up the chain of command. And why?

Because AI has some obvious advantages over human commanders.

A really basic one is they don't need to sleep, right?

But also, you know, and we've seen this: OpenAI, before they got into large language models, developed an artificial intelligence that played the game Dota. And Dota is unlike Go. You know, famously, AlphaGo beat the world champion with this very brilliant, creative strategic move that kind of blew everyone's minds five or six years ago. But Go is not nearly as complicated a game as Dota is.

Dota is five on five, highly variable.

The game space is three-dimensional. It's just orders of magnitude more complex, such that nobody thought it was going to be beaten by an AI, a team of AIs in this case, and they were able to do it.

And you would imagine that in the future, a commander that can take stock

of an unfolding battle that has, you know, thousands of variables and positions

and continually update its sort of strategic disposition and orders according to the changes in those variables; there's no human that can do that.

And there may well be an AI that can do that sometime soon.

And it's easy to say, well, you know, despite that, I actually don't think

we would hand over, you know, the ability to command an army or command, you

know, a fleet of ships to an AI.

But when your adversary does it and is achieving, you know, genuine battlefield successes, you might be quite tempted.

Yeah.

One way that I imagine AI possibly moving up the chain of command is that

American decision makers get scared of China, that there's some future in

which China seems to be developing AI that gives its military strategists some kind of superhuman capability to observe all these different variables

that happen during war.

And so they're going to be just masterful in terms of strategy.

And there's some Wall Street Journal or Financial Times headline that's like: China's military decision making has now reached a superhuman level.

And that people in Washington, DC get freaked out and they say, we need to

bring superhuman AI into the decision making process in the Pentagon.

And that leads to this scenario where we essentially invite artificial

intelligence into, you know, the room with the nuclear codes.

Do you have a similar sense that there's something about the fact that we are in

this geopolitical fight against China, which is probably number two in terms of

developing AI, that a kind of arms race between us and China could lead to a

situation potentially where AI is introduced into this decision making process?

Yeah, I do.

I think that's really well put.

I mean, we've been living in a world for the last 70 years or so where mutually

assured destruction was this kind of neat symmetry between two parties.

And now it's a three body problem, right?

Now there are three parties to this.

There's recent reporting that suggests that China now has enough ICBMs to take

out, you know, every American city over, say, 2 million in population.

And so, yeah, you no longer have this neatly symmetrical strategic equilibrium.

The thing is messier and more complicated.

And China looks to be, you know, in many ways, a much more formidable long-term adversary than the Soviet Union was (if it's proper to think of them as an adversary, which it certainly seems like it is now), just because, you know, its population is much larger, right?

Its economy is more dynamic, or at least appears to be for now.

And so, yes, I do think that, you know, it was really difficult to escape arms race dynamics with the Soviets. And now, with China in the mix, I can imagine that that's only going to continue.

There are other details from your essay that I won't entirely go into. I really appreciated this guy, Michael Klare, professor of peace and world security studies at Hampshire College, who talked about the idea that there could be the

equivalent of a flash crash in war if we automate too many of these intercontinental

ballistic missile systems.

The same way we had a Wall Street flash crash, where a bunch of algorithms, essentially en masse and at hyperspeed, sold off a bunch of securities and created this enormous dumping of assets.

You can imagine something similarly happening where you have a kind of domino

effect of algorithms responding to each other and creating a kind of flash war

scenario. That's hypothetical, that's science fiction.

But I just wanted to point it out because I thought it was an interesting thing that you glance toward in the piece.
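To see why the flash-war analogy worries people, here is a toy simulation in Python, entirely invented for illustration, of two automated systems whose only rule is to match and exceed the other's alert posture; it models no real weapons system.

```python
# Toy flash-war loop: two automated systems, each programmed to one-up
# the other's alert posture. Entirely invented; models no real system.

def respond(own_level: int, observed_enemy_level: int) -> int:
    """Each side's rule: never let the adversary hold a higher posture."""
    return max(own_level, observed_enemy_level + 1)

a_level, b_level = 0, 0
b_level = 1  # the "glitch": B's sensors misread noise as a threat

for step in range(1, 6):
    a_level = respond(a_level, b_level)  # A reacts in machine time
    b_level = respond(b_level, a_level)  # B reacts in machine time
    print(f"step {step}: A={a_level}, B={b_level}")

# Posture climbs every iteration (A=2/B=3, A=4/B=5, ...) with no human
# pause anywhere in the loop; a single misreading drives the escalation.
```

Here, a single misread sensor plays the role that the errant sell orders played on Wall Street.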

I want to end on solutions here, or if not solutions, then at least what models

from history should guide our approach to ethically limiting artificial

intelligence in command and control?

Yeah, I mean, unfortunately, we're in the position where the solutions are

kind of unsexy, right? They look like international agreements that are

hashed out across many parties.

But as you say, we do have a historical model for this, right?

After the Cold War ended, or looked as though it was going to end, famously, George H. W. Bush and Mikhail Gorbachev signed the first START treaty. And that was the first in a series of agreements that shrank the nuclear arsenals of the two major nuclear powers by nearly an order of magnitude.

Tremendous success. All those guys should have Nobel Prizes, in my opinion.

And that's a model for this, right?

And it's a model because what that represented was a recovery of human

agency, of human choice, right? It was saying that, look, we are not going to

allow the dynamics of technology, of an arms race, of just pure kind of

mindless technological competitive development to dictate how we arrange

our world here on this planet, right? We are going to say that actually we

don't want a world where all the incentives lead toward ever-expanding arsenals, so that people have tens of thousands of these missiles set on a hair trigger, pointed at each other.

That's a silly world to live in. And when we have these little moments of

peace, these windows, those are a good time to strike those agreements and try

to get towards a world where there are fewer such weapons.

And I think in this case, we're not at a moment of peace right now.

The strategic disposition towards China hasn't been more hawkish, really,

in decades. We have a hot war that the Ukrainians are fighting against Russia

that we obviously are bankrolling to a large extent.

And so I don't think you're going to see an agreement not to use AI in nuclear command and control just yet. But I'm hopeful that as things potentially

cool down, maybe there's a recognition that China doesn't need to be our

adversary necessarily. Maybe there's new leadership in Russia and that

relationship gets better. And we can sit at the table and say, hey, whatever

our differences, we don't want to live in the world where we have a bunch of

nuclear arsenals putting us, you know, one glitch away from the apocalypse.

The piece of history that I was reminded of as I read your piece and thought

about what kind of action we have to take was the Montreal Protocol.

So the Montreal Protocol was a 1987 international treaty to protect the ozone

layer by essentially banning chlorofluorocarbons. And it took decades of

scientists ringing the bell about the danger of CFCs for the world to come

together and say, we have now determined that these chemicals are destroying

the ozone layer. And the only way to avoid the destruction of the ozone layer

is for all the world to come together and ratify a treaty to phase out CFCs.

With AI, it's like we've invented this technology, the equivalent of CFCs,

but we're not entirely sure what the dangers are yet. It's not entirely

clear what proverbial ozone layer they're depleting. But still, we need a similar

sense of international organization to ensure that we don't get the kind of

arms races that you've pointed to. Because I find your hypotheticals, as

hypothetical as they are, right, the piece that you wrote and a lot of what we've

just discussed is a kind of science fiction. But I find it such a plausible

science fiction because we've just lived through the 20th century, where we had a similar technology in nuclear weapons, saw its proliferation through a Cold War, and it created the potential for enormous problems for the world.

the world. And I just hope we can find some way to get the equivalent of the

Montreal Protocol in place for AI before these things become the equivalent of

nuclear weapons. I'm kind of mixing a bunch of metaphors here, like AI and

CFCs and nuclear weapons. But there are just some inventions that have global

consequences. And it is really hard, I think, to get our arms around what the

global consequences are of AI. But it's so incredibly important, I think, for

governments, even in a period of geopolitical controversy, to come together

and talk about making sure that we don't put nuclear arsenals in the hands of systems that might go haywire, without the check of human decision making to keep us from having this kind of flash war. Any last comment on just

this philosophy of nonproliferation in AI that you thought about as you wrote

this piece?

Yeah, well, first, I just wanted to respond to your excellent example of the

Montreal Protocol, another sort of great achievement. And, you know,

unfortunately, you know, for each particular country, giving up CFCs entailed perhaps an economic setback, you know, a loss in productivity for some period of time, some inconvenience for your citizens. And what's really dangerous about arms races, especially in a geopolitical situation like we have right now, where there isn't a strong buildup of trust between the really important parties you would need for an agreement like this, is that the thing you're asking the other country to give up makes it vulnerable, right, to existential attacks. You're not just giving up CFCs for, you know, 10 years and taking that on the chin economically. You're

having to explain to your voters or, you know, to your party committee, in the

case of China, that you have struck an agreement that makes you potentially

more vulnerable to an adversary that is willing to use these tools. And so that

doesn't mean that I'm necessarily pessimistic. You know, as I said, I'm

not a techno-determinist, and the START treaty is a great example of that, right?

Like, people got together and said, we don't want to live in a world like

that. And I think that's what it's going to take this next time around.

Ross Andersen, thank you very, very much.

Derek, thanks for having me on. It was a blast.

Plain English was hosted and reported by me, Derek Thompson, and produced by

Devon Manzi. We'll see you back here every Tuesday for a brand new episode. Have a great one.

Machine-generated transcript that may contain inaccuracies.

Today’s episode is about how artificial intelligence will change the future of war. First, we have Brian Schimpf, the CEO of Anduril, a military technology company that builds AI programs for the Department of Defense. Next we have the Atlantic author Ross Andersen on how to prevent AI from blowing up the world.
If you have questions, observations, or ideas for future episodes, email us at PlainEnglish@Spotify.com. You can find us on TikTok at www.tiktok.com/@plainenglish_
Host: Derek Thompson
Guests: Brian Schimpf and Ross Andersen
Producer: Devon Manze
Learn more about your ad choices. Visit podcastchoices.com/adchoices