Fully Autonomous Weapon Systems - The technology, capability and controversy of robots at war
Through the history of military conflict we've seen massive evolution in weapon systems. But for all the advances we've seen in things like range and lethality, the human factor has often remained relatively central. Systems like long-range drones and cruise missiles have given countries the ability to strike targets potentially a continent away. And to do so without putting a pilot directly in the firing line.
But across those systems it's often been a human with their finger on the proverbial button, making that very consequential decision to employ potentially lethal force. But with computing resources becoming cheaper and AI technology developing at a breathtaking rate, nations are probably going to have to confront the fact that increasing levels of autonomy in weapon systems are only going to become more practical. And so today we are going to start our exploration of the military applications of AI with a look at Lethal Autonomous Weapon Systems, or LAWS. This episode will be a little more narrowly focused and a tad shorter than usual for two reasons. One, because I've been pretty sick the last couple of weeks and an hour would be a bit of a stretch. And two, because military autonomy and AI as a topic is so broad that today I think it makes sense to focus in just on Lethal Autonomous Weapon Systems themselves.
That'll mean starting with the emergence of increased autonomy in weapons and the technologies that have made that more and more practical. Then we'll discuss autonomous weapon systems: what they are, why militaries might want them, and what sort of potential utility they might bring to the battlefield. Then I'll pivot into a look at these things from a more defence economics perspective, and why, despite the potential controversy involved in deploying a fleet of killer robots, we are probably going to see far more of them used in the future. That'll hopefully pave the way for a future episode where I can expand on all this further, looking at AI in general as a military tool, the risks and uncertainty involved in development, and what individual nations are pursuing in the field. But before we jump into it, I need to offer up a quick word from a sponsor who has kindly allowed me to use a previous recording to spare my long-suffering voice. And today I'd like to welcome back my long-term VPN of choice, Private Internet Access.
Generally speaking, I think the digital world has a lot to offer. And whether you're engaged in perfectly normal pastimes like watching videos online or, you know, counting armoured vehicles in remote storage bases, the internet can be a pretty remarkable thing. But it can also be a highly transparent and increasingly fragmented environment. So to operate and protect your privacy there, you might want the right combination of tools and practices.
Something you might compare to fitting safety equipment to an aircraft. You don't add ejection seats because you expect them to be used every flight. You include them because when they are needed, people are likely to be very happy they have them. Protecting your privacy might mean not entering your personal details into every dodgy web form you can find, and also maybe using a VPN. Then with a click you can change your publicly-perceived IP address, reroute your internet traffic and potentially shift the way your activity is logged. With Private Internet Access having a long-stated no logs policy that's reportedly been both audited and tested in court.
If you are worried about fragmentation, you'll find they have servers in 91 countries. And in the US they offer just that little more precision with servers in 50 states. All within the scope of a program advertised as being compatible with a range of streaming services, available cross-platform, and one where (contrary to some recent trends) a single subscription allows you to cover an unlimited number of devices. So if you're interested, there will be a link in the description which will give you access to an 83% discount on a 2 year subscription. Which also comes with 4 bonus months, covering an unlimited number of devices and with a 30-day money-back guarantee. So with my thanks, let's get back to it. OK, so why might we be starting to have more of a discussion about lethal autonomous systems now? After all, you don't have to go much further than 20th century science fiction in order to find stories that are absolutely saturated with things like killer robots and droids.
And if you went back to an ancient commander and offered them a system that didn't complain, didn't eat, didn't need training, and (I'll say it again) didn't complain, one imagines they would have been pretty quick to pick up what you were putting down. But actually getting that concept closer to a deployable reality relies on technologies that have only really seen development over the last couple of decades. A truly autonomous system would need to be able to see and interpret its environment without assistance from a human operator. That means advanced sensor systems that are also affordable enough and compact enough to fit into the required platform.
If you want a parallel example, think of the tremendous difficulty of giving self-driving cars the ability to see, perceive and understand their surroundings. In a lot of ways there might be battlefields out there that are simpler to see and interpret than city streets, or for that matter, Australian roads complete with kangaroos. Which have reportedly given some self-driving car designs no end of trouble, because how do you interpret and predict this thing? But even if interpreting an aviation environment is somewhat easier, it probably wouldn't have worked particularly well using the finest potato-tier cameras of the 1950s. Of course being able to see isn't enough for a machine, it needs to be able to understand, analyse and decide. That requires computing power, which has become much cheaper, more compact and readily available over the course of time. What you can see on the right there is a chart showing the evolution in GPU performance versus cost between 2006 and 2021.
Over that time the number of FLOP/s, or floating point operations per second, that you get per dollar has doubled roughly every 2.5 years. And that matters both when you are training really large models (which we'll get to in a moment) and at the other end of the spectrum it also means that when you're talking about the LAWS platform itself, there are now small, compact modules that you can buy in packs of 100 that are good enough for a lot of basic autonomy tasks. Think things like enabling visual targeting solutions in loitering munitions or small drones. In a sense, even though technology is likely to just continue to improve, we've already hit something of a break point, where there might be systems that would have been too expensive in the past to be worth building that now represent decent, affordable options.
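To put a rough number on that trend, here's a back-of-the-envelope sketch. It's purely illustrative and assumes a clean doubling every 2.5 years, which real hardware generations only approximate.

```python
# Rough illustration only: assumes a clean doubling of FLOP/s-per-dollar
# every 2.5 years, which real GPU generations only approximate.
years = 2021 - 2006              # the window covered by the chart
doubling_period_years = 2.5      # approximate doubling period

doublings = years / doubling_period_years    # 6 doublings
improvement = 2 ** doublings                 # roughly 64x FLOP/s per dollar

print(f"{doublings:.0f} doublings -> roughly {improvement:.0f}x the FLOP/s per dollar")
```

In other words, the same nominal spend buys roughly sixty-four times the raw compute at the end of that window as at the start, which is exactly the kind of break point being described.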
As long as you are comfortable of course with the idea of C-3PO being able to cap a fool from 400 metres. Putting those technologies together then, what are some of the applications we've seen, walking up the level of autonomy as we do so? At perhaps the more basic end of the spectrum, we've seen autonomy used in sort of a human assistance mode. Think the target identification function on some modern loitering munitions that will throw a box around things that look kind of like a tank or a target, and passive-aggressively suggest to an operator that, you know, maybe they might want to do something about it. The operator can then act on that, either by manually controlling an engagement or by authorising a semi-automated one. A step up then would be a lot of the manned/unmanned teaming concepts that we've seen teased and in development. Here you have unmanned platforms operating as de-facto wingmen or squad mates for manned platforms.
Most of these concepts tend to include a basic level of autonomy capable of flying or driving the vehicle in question. So the human operator might be able to give basic commands like "follow me" or "head to this general area", but the decision to actually employ potentially lethal force still rests with the human operator. It might be the crew in that NGAD or B-21 Raider looking at a potential hostile target and then giving the CCA (or other escorting drone) an instruction to go send their best wishes to that target in AMRAAM form. This might be a type of autonomy, but it arguably isn't lethal autonomy. Because even if the drone might be both the sensor and the shooter, the human still acts as the decider.
But from a purely technical perspective, leaving aside small complications like legal and ethical concerns, there's very little to stop a country designing a system that doesn't require the human operator to make that final call. That would probably represent a leap towards what could rightfully be called a fully autonomous system. And here's the thing, depending on how you define it, this is a direction humanity has arguably been moving in for thousands of years.
And that question of how you choose to define it does in fact matter. Because while there's a lot of controversy around Lethal Autonomous Weapon Systems, one of the things that most of the countries involved can agree on is that there is no common definition. But we probably need a definition as a starting point, so let's pull one out from the relevant US document, DoD directive 3000.09, which defines an autonomous weapon as, "A weapon system that, once activated, can select and engage targets without further intervention by an operator.
This includes, but is not limited to, operator-supervised autonomous weapon systems that are designed to allow operators to override operation of the weapon system, but can select and engage targets without further operator input after activation." That's great, but it's also very broad. I mean by that definition, some of the first autonomous weapon systems might have been holes in the ground. When our ancient ancestors figured out that they could dig a pit, cover it up, and then wait for an animal to fall into it they'd arguably come up with something capable of causing potentially lethal effects that didn't require further human intervention in order to engage a target.
Indeed, history is arguably absolutely replete with examples of weapon systems where human-out-of-the-loop operation is the norm. Trip wires, pit traps, those rolling boulders that ruined many a Crash Bandicoot playthrough. All of these are arguably examples of Lethal Autonomous Weapon Systems, albeit not computerised ones. But in terms of actual effect, you could argue they'd share a lot of characteristics with more recent computerised systems. They were human developed and deployed, and intended to operate with a certain set of engagement criteria. It's just that often the decision to engage, so to speak, was more of a mechanical consequence of the way a system was designed than the product of a coded decision.
You could think of a mousetrap as a Lethal Autonomous Weapon System for example, with the instruction being "to engage whatever first applies pressure to the trigger plate". The logical extension to this line of thinking is that a bucket of water propped above a door frame is obviously a non-lethal autonomous weapon system, and so the international community desperately needs to intervene to stop little Timmy and his fellow pranksters before they further proliferate this dangerous technology. More seriously, you could argue we are already seeing the importance of human-out-of-the-loop systems in places like Ukraine, most particularly in the form of the landmine.
Systems like the TM-62 landmine are available in very large numbers, and can be thought of as human-out-of-the-loop autonomous weapon systems with very simple engagement instructions: the trigger mechanism causes the mine to detonate if any sufficiently heavy object runs over it. The point I'm trying to get to here is not a moral or legal judgement one way or another on the value of the anti-tank mine, it's just to argue that this isn't really a forward-looking hypothetical question so much as an evolution of a long-standing historical reality. Humans have long designed systems intended to inflict harm without direct supervision. Now it's all well and good if, when talking about autonomous weapons, you'd rather focus on newer, more advanced technology as opposed to old legacy systems like landmines.
But even if that's your aim, you might find defining your way out of the problem more difficult than you might expect. If you suggest that the decision-making of the weapon needs to be more complex, or might result in decisions that a human operator didn't entirely foresee, well then arguably not only do we already have those, we allow members of the public to buy them. Because there are plenty of dogs out there entirely capable of engaging targets without the supervision of a human operator. Yes, they may be trained to gate and bound that decision, but I also doubt anyone has realistically suggested programming any computerised autonomous weapons that operate without any sort of coded guidelines or bounds, other than a line saying "go nuts." And even if you would argue that actual autonomy requires a degree of computerisation, that probably already exists too.
In fact, if you go to the IAI website for the HARPY loitering munition you will see it described as the "HARPY anti-radiation autonomous weapon". The dot point highlights for that weapon include the fact that it is effective in SEAD and DEAD missions, land and naval based, operational with several air forces, and "fully autonomous". Basically what you're meant to be able to do with HARPY is load up the signatures of the systems of your opponent, give it a basic mission plan and area of operation, and launch it. And then if at any point during its patrol it identifies the tell-tale signs and signatures of opposing air defence systems, it will then go over and introduce itself in a highly explosive fashion. Arguably, HARPY doesn't represent a completely new or unique concept. The old AGM-136 Tacit Rainbow missile for example, designed during the 1980s, was conceived as something that you could launch ahead of waves of manned aircraft.
Patrolling over a pre-programmed target zone until it detected an enemy radar source, at which point it would stop loitering and go and engage the signature. Think something like HARM, but with the ability to loiter over a target area for a significant period of time. The Tacit Rainbow was never fielded; HARPY has been. But you could argue that both hit a lot of the criteria for being Lethal Autonomous Weapon Systems. One potential caveat of course that might have made them less controversial was the chosen target type.
One of the great fears with autonomous weapon systems is always going to be target misidentification and the risk of civilian casualties. Fortunately for dedicated anti-radiation missiles and loitering munitions, civilians and civilian vehicles don't tend to carry around high-powered air defence radars and the consequent emissions. Yes, there is arguably autonomy here, but it might be autonomy with a fairly restrictive target set. The point of all this is to say that when we are talking about lethal autonomous weapons, it's probably best not to put them all in the same box. Instead we might think of them as a kind of continuum with different features that raise greater or lesser degrees of concern. For example, just how autonomous is it, and how often is it expected to operate in that mode? Is it just going to be operating autonomously when it loses connection to its operator, or is that the intended mode all the time? Is its intended target engagement set very tight and well defined? For example, "you will only engage things that are emitting like an S-400's radar station."
Or is it more general and thus potentially more risky? Since, given some of the evolution we've seen in Ukraine, giving a weapon an instruction like "destroy any Russian armoured vehicles you see" might potentially result in wide-scale damage to the barns and sheds of the Ukrainian countryside. In that construction, jokes aside, the important part might not be the autonomy so much as how you intend to use it. So the next question to explore then is probably just why? Why develop LAWS in the first place? What sort of characteristics and advantages can they offer relative to other, potentially less controversial systems? And here I want to break that analysis into three separate components.
What are their basic characteristics? Where are those characteristics potentially useful? And how might we see them used across different domains? In terms of the basic elements of what a LAWS can offer you, a lot of them are going to be in common with more conventional unmanned systems. There's no human operator in the system, so you don't have to pay the weight and volume tax for finding a place for the squishy human. You also probably don't have to risk one of those squishy humans, which is great from a political, humanitarian and veteran's benefits perspective. Autonomous systems may also offer advantages like superior reaction times, less vulnerability to issues like fatigue, and often greater scalability and affordability. Humans can't be built in a factory to order and tend to have all sorts of overhead associated with them. With some prone to making exorbitant demands of their national militaries like food, pay and basic accommodations.
Also with humans each individually manned system has to be trained, well, individually. Pilots for example don't share a hive mind, and each of them has to go through flight school. Software meanwhile, is pretty much infinitely scalable.
Once you've got the programming right, you can just install it on all relevant systems. But as I said, for the most part those are all characteristics that autonomous systems would share with other unmanned systems, including both semi-autonomous and manually-controlled ones. A Reaper drone for example does get to save weight, form factor and cost because it doesn't have to fit a pilot in the system, but it's going to spend a lot of its time operating under the direct control of a distant human operator.
For me, one of the key differentiators between LAWS and other unmanned systems is that a LAWS might be much, much less reliant on a constant reliable network. For decades now, militaries have been getting better and better at networking themselves together. Considering we live in a world where your fridge is increasingly likely to ask for your personal information before figuring out that, yes, you probably want your food to be kept cold, it's probably not surprising to hear the military has become pretty reliant on tech and networks of its own.
But organisations like NATO have also spent a lot of time theorising what the response might be to a spectrum-contested or spectrum-denied environment. Where opposing electronic warfare or anti-satellite weapons, or some combination of systems, robs part of the Joint Force of its ability to access and leverage that wider network. In the early 21st century those environments were often mostly theoretical. But in Ukraine we've often seen them first-hand, with Russia routinely blocking or spoofing GPS signals, interfering with drone control signals, and intercepting or interfering with Ukrainian communications. This is perhaps most visible in its impact when you talk about drone operations. According to organisations like RUSI, the single largest driver of losses among small reconnaissance drones or FPVs is opposing electronic warfare, whether you're talking about the Russians or the Ukrainians.
And for a lot of the drone systems out there, losing control signal can be about as impactful as a kinetic solution, shooting the drone down. Some off-the-shelf drones are programmed to just land in place if they lose control signal. But if you add a degree of autonomy to those systems, it may be that soft-kill isn't really a kill at all.
An FPV with a visual targeting function for example, might put a bounding box around potential enemy armoured vehicles and ask its operator, "Hey, do you think that's a target?" Assuming the operator says, "Absolutely; go for your life, mate," from that point on losing signal is no longer fatal. The drone can continue to guide itself into that target using its on-board sensors. And if you're willing to dispense with the check and balance of that final engagement decision entirely, you may be able to operate these systems even in a completely EW blacked-out environment. You program a drone with a database of what valid targets look like, give it an area to operate in before it launches and then potentially just send it on its way.
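To make that confirm-then-guide workflow a little more concrete, here's a minimal sketch of the kind of control logic being described. To be clear, this is entirely hypothetical, the names are mine rather than any real drone's firmware, and it's only meant to show the key property: once a human has confirmed the target, continued guidance no longer depends on the radio link.

```python
# Hypothetical sketch of the "confirm, then guide on-board" behaviour described
# above. All names and structure are illustrative, not any real system's code.
from dataclasses import dataclass

@dataclass
class TargetTrack:
    bounding_box: tuple          # (x, y, w, h) from the on-board detector
    operator_confirmed: bool = False

def control_cycle(track: TargetTrack, link_up: bool, operator_approves: bool) -> str:
    """Decide what the drone does on this control cycle."""
    if link_up and operator_approves:
        track.operator_confirmed = True       # human-in-the-loop confirmation
    if track.operator_confirmed:
        # From here on, losing the link is no longer fatal: the on-board
        # tracker keeps guiding onto the confirmed target.
        return "terminal guidance using on-board tracker"
    if link_up:
        return "stream video and wait for the operator"
    return "hold or abort"                    # unconfirmed and no link
```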
It might use systems like inertial navigation and terrain recognition in order to figure out where it is, its on-board sensors to identify potential targets, and then, well, be able to get on with the job even if the local electromagnetic environment looks kind of like an artist's impression of the bloody warp. In that sense, what autonomy might be able to do is cure some unmanned systems of one of their greatest weaknesses, their intense attachment and reliance on their human handlers. Another potential take on that particular advantage is that just as autonomous systems might not need to be receiving instructions, they might also be comparatively less reliant on transmitting themselves. And on a modern battlefield where emissions are a signature, transmitting less might translate into greater survivability.
Not only, it has to be said, potentially for the drone or autonomous system itself, but also now potentially for the operator who might not have to broadcast as much if the system is capable of autonomous or semi-autonomous operation. Where this starts to get really interesting is in environments where it's not just the opponent that can make communication difficult, but where the environment itself can. Communicating underwater for example is notoriously difficult, especially over longer distances or greater depths. There is a reason that a lot of advanced torpedo systems are still to this day literally wire-guided. A Mark 48 ADCAP is going to be spooling out a line as it leaves the torpedo tube, because that provides a reliable, low-emission way for the submarine to maintain communication with that torpedo as it makes its potentially very long journey towards its target. We'll probably talk more about USVs and off-hull vehicles when we eventually get to next generation submarines.
But suffice to say, there seems to be a lot of interest out there in pairing manned submarines with unmanned systems. And even if you expect those to be in contact with the controlling submarine some of the time, adding a degree of autonomy might allow them to operate further away or in conditions where communication isn't easy or even possible. Other applications might be areas where response time is at an absolute premium. We already have air defence systems for example that are capable of operating on automatic or self-defend modes. And in a world where you potentially have things like hypersonic weapons that may not provide more than a couple of seconds worth of warning time, you might want to have some weapons or defensive systems that don't ask a human operator nicely before deciding whether or not the incoming ball of superheated plasma moving at 12 times the speed of sound is a) a threat, or b) a civilian passenger plane.
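For a sense of just how compressed those timelines can get, here's a deliberately crude calculation. Both inputs are assumptions for illustration: Mach 12 treated as roughly 4 km/s, and a hypothetical first-detection range of 40 kilometres.

```python
# Deliberately crude timeline estimate; both inputs are illustrative assumptions.
speed_of_sound = 340.0                  # m/s, rough sea-level value
missile_speed = 12 * speed_of_sound     # ~4,080 m/s for a "Mach 12" threat
detection_range = 40_000.0              # metres, hypothetical first detection

time_to_impact = detection_range / missile_speed
print(f"Roughly {time_to_impact:.0f} seconds from first detection to impact")
```

And that's before you subtract the time needed to classify the track, compute a firing solution and actually get an interceptor moving, which is part of why some of that decision chain may end up automated whether we're entirely comfortable with it or not.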
To be fair, the autonomous engagement of incoming hypersonic missiles by an air defence system might not be the most controversial thing in the world. But if you swap out your air defence system for an unmanned ground vehicle for example, and switch the target set from hypersonic missiles to any infantry-like target that happens to try and dash across a designated kill zone, suddenly you might be having a very different policy discussion. The same logic can obviously be applied from the perspective of a smart munition. If you need a rapid reaction time or if the weapon can't readily communicate with a launching platform, there might be significant incentive to give it the degree of autonomy it needs to get the probability of a successful engagement up. And as at many other points in this episode, I'm left to suggest that maybe that mode of application isn't entirely hypothetical. And we may already have seen small drones or UAS used in essentially a fully-autonomous mode.
In 2021 a panel of experts on Libya submitted a report to the President of the United Nations Security Council. And in paragraph 63 of that report it refers to the alleged use of small Turkish Kargu-2 drones or loitering munitions in what sounds awfully like a fully-autonomous mode. To quote from the report, "Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapon systems such as the STM Kargu-2 and other loitering munitions. The Lethal Autonomous Weapon Systems were programmed to attack targets without requiring data connectivity between the operator and the munition, in effect a true fire, forget and find capability." Now I obviously wasn't in Libya in 2020 and cannot validate that report. But if some of the reporting we've seen is accurate, the world may already have seen a Lethal Autonomous Weapon System of this kind used to inflict casualties in a fully-autonomous mode.
And the discussion around these Turkish-built drones I think highlights one of the other things we should keep in mind when we are talking about the potential future of LAWS. Autonomy technology isn't being developed in a vacuum and it doesn't have to be deployed in a vacuum. Often with weapon systems development the most effective results come from taking individually scary technologies and sticking them together until you get something truly terrifying.
The CEO of STM for example, the company that manufactures the Kargu, has reportedly talked about the system being capable of swarming and leveraging things like facial recognition technology. If a firm was to take that combination of technologies and stretch them to their ultimate conclusion, it might be possible for operators to upload an image of a target to a drone, designate a search area, and then have the drones autonomously sweep through that area operating as a swarm until they identify and attack their target. Now unless you're a secret Skynet sympathiser, all of that probably sounds fairly terrifying. And you're probably left wondering whether nations will choose to regulate this kind of tech the way we have with some other weapon systems, or if it'll ever actually make it to the battlefield at significant scale. After all, just because we hypothetically can build a system doesn't mean it's going to get the green light for mass production.
There are companies in Japan for example that are fully capable of building you this thing. If militaries wanted battlefield mechs, at least of a kind, hypothetically they could have them. It's just that small things like their utter uselessness as practical weapon systems deny us any chance to live out our childhood Gundam or mech warrior dreams, at least for now. I'm sure that sounds like a tragedy to some of you, but I'm the kind of guy who used to play BattleTech without fielding any mechs.
Lancers are far more cost-effective tanks, and infantry did perfectly well for me, thank you. In the real world, far more often than in fiction, politics, practicalities and economics can weigh against a system ever being produced and fielded. But arguably one of the great dangers or advantages of LAWS, depending on your point of view, is that they do have a number of features that are likely to win over that most important group of military stakeholders: the accountants and those wonderful people in force planning and procurement. For example, a lot of the development cost for coming up with a fully-autonomous weapon system, which might be a little controversial, is likely to be incurred anyway in developing a still very useful (and much less controversial) semi-autonomous system. Imagine for a moment that Emutopia wants to develop a new semi-autonomous armed drone. For PR reasons they call it the Koala. OK then, so what are they likely going to need it to be able to do? They are probably going to want something that can fly a mission plan without a human babysitting it at every moment.
A system that can identify things that it thinks might be targets, and which can then ask a human operator for confirmation before engaging. And if it gets the proverbial green light, complete that engagement and return to a recovery point or airfield under its own direction. That's a semi-autonomous system with a human in the loop.
So for most countries there isn't much of a policy challenge there. No one is going to start yelling at the Emus in an international forum for cranking out Koalas by the dozen. But functionally that semi-autonomous system already has a lot of the capabilities it would need to act as a LAWS. If it can attempt to identify targets by itself and it completes attacks by itself without direct human control, just human permission, then a few tweaks and upgrades here and there might quickly convert a semi-autonomous system into a fully-autonomous one.
What might that mean practically from a defence economics perspective? Well, a few things. Firstly, it might mean that you as a force can invest in getting really close to a LAWS capability without being worried that all the investment will be wasted because policy, legal or some other concerns cause you to have to throw out your LAWS capability later. At the very least you are likely to get a functional human-in-the-loop system regardless. Secondly, it might make it cheaper to go all the way down that development path even if you're not ready to publicly commit to fielding the kill-bots just yet. Some countries may already have embraced that distinction, with the CRS report noting that "The Chinese delegation has stated that China supports a ban on the use, but not development of LAWS. Which it defines to be indiscriminate lethal systems
that do not have any human oversight and cannot be terminated." So if you adopt that policy position, you can design them, you just can't field them. Now with some systems that distinction might matter. If you had a rule saying your country wouldn't field fighter jets and then suddenly you change your mind, it's still going to take you a long time to train pilots, maintainers and build or buy in appropriate platforms.
There is a reason, after all, that during those inter-war years when Germany really wasn't meant to have an air force, you saw a lot of development effort focused on high-performance "civilian" aircraft that just so happened to be really well suited to the bomber role as soon as they were quickly adapted into military versions. Rapid development can be difficult if you have to do it from a standing start. But once again, in this respect LAWS might be a little bit different. And depending on how capable your underlying system is, the change from semi-autonomous to autonomous might be fairly easy to make, or even just limited to a well-timed software update. At the most extreme end of the spectrum, if you had a semi-autonomous system where the most important role of a human is to confirm target identification and give permission for it to be engaged, hypothetically all you would really need to convert it into a LAWS would be a software update telling the system not to worry about the whole "ask for permission" step.
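To caricature just how small that final step could theoretically be, here's a crude illustrative sketch. It's hypothetical from top to bottom (the flag and function names are mine), but it captures the point: the human-in-the-loop and human-out-of-the-loop behaviours can differ by nothing more than a single configuration value.

```python
# Purely illustrative: the semi-autonomous and fully autonomous behaviours
# below differ only by one configuration flag. Flipping it is the kind of
# "software update" described above.
REQUIRE_HUMAN_CONFIRMATION = True

def may_engage(target_identified: bool, operator_approval: bool) -> bool:
    """Return True if the system is permitted to complete the engagement."""
    if not target_identified:
        return False
    if REQUIRE_HUMAN_CONFIRMATION:
        return operator_approval     # human in the loop
    return True                      # human out of the loop
```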
It's probably never going to be that simple, but to give a cartoonishly oversimplified scenario, Emutopia could have built up a significant stockpile of those Koala drones that we talked about before, while also telling the world that it has a policy that says that no Lethal Autonomous Weapon Systems will be deployed by the Emutopian military. Any international inspector that comes through, looks at those drones and audits the software on those systems will find that that's entirely true. But if the situation ever calls for it, suddenly someone can roll out the Drop Bear patch that they totally didn't have ready to go hidden away somewhere. Out of nowhere an autonomous operation option appears in the menu, and the Drop Bear swarm is now ready to go about its potentially fully-autonomous business.
And while that might sound like something countries might hesitate to do, the motives to do so might get fairly compelling. Toggling over to fully-autonomous capability might come with risks, but it also might offer a fairly cheap and quick answer to emerging battlefield pressures. Consider this: small UAS are currently fairly dominant on the Ukrainian battlefield.
But militaries around the world are not exactly universally confident that weaponised video gamers with armed racing drones are going to remain dominant for that long. I've seen some Russian commentaries suggesting that we'll probably be back to tanks and artillery before long. And while, yes, that does seem to be the Russian default military answer to just about anything, those predictions aren't exactly unique. Recently for example, the French Army's Chief of Staff was quoted saying that, "The life of impunity for small, very simple drones over the battlefield is a snapshot in time." The article quoting him went on to say that, "First Person View drones currently carry out about 80% of the destruction on the front line in Ukraine, when 8 months ago those systems weren't present," according to Schill. The General said that situation won't exist 10 years from now, and the question could be asked whether it might end in 1 or 2 years.
Now I'd probably take issue with some of the numbers presented there, but let's focus on the core prediction, the idea that the dominance of small UAS and loitering munitions is likely to fall away. The chief culprit in predictions like that is often improved UAS counter-measures. A case of the proverbial shield catching up with the sword.
And the biggest factor in counter-UAS currently is, as we discussed earlier, electronic warfare. As per the General, 75% of drones on the battlefield in Ukraine are lost to electronic warfare. And while you'll find different figures in different places, they all tend to concur that jamming is the greatest threat (or one of the greatest threats) that small drones currently face. But as we said right at the start of this presentation, one of the potential advantages of fully-autonomous systems is they might not give that much of a shit about EW.
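To get a feel for why that could matter so much, here's a deliberately crude bit of arithmetic. The roughly 75% lost-to-EW figure comes from the quote above; every other number is an assumption I've made purely for illustration.

```python
# Crude illustration: only the ~75% EW-loss figure comes from the text above;
# the sortie count, hit rate and recovery fraction are assumptions.
sorties = 100
ew_loss_rate = 0.75             # share of drones defeated by jamming (quoted)
hit_rate_if_not_jammed = 0.5    # assumed

hits_today = sorties * (1 - ew_loss_rate) * hit_rate_if_not_jammed
# Suppose autonomy after target confirmation let half of the jammed attacks
# still complete (purely an assumption):
recovered = sorties * ew_loss_rate * 0.5 * hit_rate_if_not_jammed
hits_with_autonomy = hits_today + recovered

print(f"{hits_today:.0f} hits per 100 sorties vs {hits_with_autonomy:.0f} with autonomy")
```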
So what might be likely to happen if countries find themselves in a situation where they are losing a lot of their drones to EW, but they could press a button that meant those drones didn't care about the jamming anymore? What if they could field systems that were capable of continuing to target even if they did lose connection to their operator? Either because of EW systems, or because the earth isn't flat and line of sight can be a bit of a bitch. To me, that feels like the kind of impetus that might push a force towards deciding to accept the risk of turning up the dial on autonomy. If you want to get a really firm understanding of the potential incentives there, just take a hypothetical LAWS capability and apply it to Ukraine as we see it now. If both sides had ready access to fully functioning and developed LAWS already, the maths on drone engagements could change fairly dramatically. Not just because you could convert some of those cases where systems are lost to EW into successful engagements, but because you might be able to do things like use those drones to go hunt the electronic warfare assets themselves.
Using autonomous systems to punch holes for semi-autonomous or human operated systems to do their work. A hypothetical LAWS might be less effective than a human guided system, but if they're cheap enough and available enough they don't have to be as good as a human, they just have to be good enough. And the mere existence of that sort of system might force the opponent to adapt and change the way they operate. Alternatively, imagine a future scenario where you had a completely hypothetical stretch of ocean that one side wanted to deny to an adversary. One which in any escalation scenario you could also be fairly confident was clear of civilian traffic. Even if that stretch of water was subject to electronic assault by everyone involved, LAWS might enable you to flood the zone so to speak, with sea drones operating on pre-programmed assignments and an overwhelming compulsion to ram into or launch missiles at any ship or ship-like object that they encountered in the no-go zone.
Again, in that scenario the system doesn't have to be anywhere near perfect, it just has to be good enough to be tempting to develop and deploy, and good enough to potentially deter. So from a purely military perspective, I'd argue that going forward LAWS are likely to have a pretty significant wind at their back. They have the potential to offer significant military capability, including a potential counter-measure to one of the normal counter-measures to unmanned systems. And to potentially do it in a way that might tick a lot of defence economics boxes. But that doesn't mean the debate over their development, regulation and fielding is over.
And so in closing I want to give a quick snapshot of where some countries stand on the debate around Lethal Autonomous Weapon Systems, and what we might expect from the debate around their potential regulation going forward. And be aware you really do have to look at individual national positions here, because there is far from a consensus, with different capitals taking vastly different views on our potential murder-bot future. At one extreme you already have those countries that have called for some sort of pre-emptive ban on LAWS. These include a number of states (you'll see some of them listed on the right-hand side there) and that list includes everything from a nuclear power, Pakistan, through to the mighty military power of the Holy See. It also includes the UN Secretary General, who according to one UN release maintains that, "Lethal Autonomous Weapon Systems are politically unacceptable and morally repugnant."
But those states so far have been matched up against a range of others that don't support that kind of pre-emptive prohibition. These include major drone manufacturers like Türkiye. Some countries that have to deal with very problematic neighbours, South Korea looking north comes to mind here. India for reasons I promise go beyond just the fact that Pakistan voted "yes". And of course, some of the world's most significant military powers. Russia for example made their opinion of laws very clear when they illegally invaded Ukraine.
But their position on Lethal Autonomous Weapons Systems was clarified in March 2024 in a submission the Russian Federation made linked to the Convention on Certain Conventional Weapons, which I've shown on the right there. In that document Moscow does a number of things. For one they put forward their suggested definition of what a LAWS might constitute, saying that, "a Lethal Autonomous Weapon System is a fully-autonomous unmanned technical means, other than ordnance, that is intended for carrying out combat and support missions without any involvement of the operator.
In this regard, we oppose to discuss the issue of unmanned aerial vehicles in the context of LAWS within the CCW framework, since they are a particular case of highly-automated systems and are not classified as LAWS." So basically if it's a munition like a torpedo, a shell, or a missile, or it flies, even if it's fully-autonomous Russia doesn't think it should fit within the LAWS discussion. They also say, "We consider it inappropriate to introduce the concepts of "meaningful human control" and "form and degree of human involvement" promoted by individual states into the discussion, since such categories have no general relation to law and lead only to the politicisation of discussions."
Now I don't want to read too far into what the Russians are saying here. But if you note that their definition only covers systems that operate without any involvement of the operator, and combine that with a resistance to introducing concepts of "meaningful control" or "degrees of involvement", then to me it reads like for Russia even the most cursory involvement of a human might be enough to shift a system out of the LAWS category. We'll probably talk about the US position in more detail when we look at the potential military implications of AI in the future.
But given that for many years the US was the undisputed leader when it came to unmanned military systems, you can't exactly gloss over them in the LAWS debate. In the old US Unmanned Systems Roadmap, which was meant to cover 2007 through 2032, the focus wasn't really on lethal autonomous systems. It was on drones that could do dull, dirty and dangerous jobs that didn't revolve around delivering lethal effects directly. In that roadmap the US identified four priority areas: reconnaissance and surveillance; target identification and designation; counter-mine warfare; and chemical, biological, radiological, nuclear and explosive reconnaissance.
With that last one describing a grab bag of situations where you really, really, really don't want to send a human in if you can help it. The 2023 DoD directive we've discussed did address the question of autonomy and weapon systems directly. It didn't ban the development or use of LAWS, but it did put some policy and guidelines in place around that development and potential deployment. For example, it stated that LAWS must be designed to, "Allow commanders and operators to exercise appropriate levels of human judgement over the use of force." But at the same time it set out that "appropriate" in that context was a fairly flexible term with no one-size-fits-all definition and which didn't necessarily require manual human control of a weapon system in order to be met.
To quote from a relevant CRS report, "Human judgement over the use of force does not require manual human control of the weapon system, as is often reported, but rather broader human involvement in decisions about how, when, where and why the weapon will be employed." One of the key US policy concerns actually seems to be making sure that when systems are deployed with a degree of autonomy built in, they've been adequately tested to make sure that when they hit the battlefield they perform as intended and expected. That's perhaps understandable, given that if you release a video game in an unfinished state that's pretty much par for the course, if annoying. Roll out a Terminator with a Bethesda level of bugginess however, well, let's just say that in terms of consequences there's a pretty big gap between review bombing and actual bombing. The US wants software and hardware to be tested to ensure they, "Function as anticipated in realistic operational environments against adaptive adversaries taking realistic and predictable counter-measures, and complete engagements within the time frame and geographical area, as well as other relevant environmental and operational constraints consistent with commander and operator intentions.
If unable to do so, the system will terminate the engagements or obtain additional operator input before continuing the engagement." From a policy perspective this could be described as an "if you are unsure, don't shoot" setting. But while there might be some more policy guide-rails around US LAWS development than there might be in other places, like potentially Russia, that doesn't mean the US military doesn't have a number of autonomy and AI programs on the go. Most of you for example would probably be aware of the US tests involving an AI-piloted F-16.
And back in 2022 the second in command of the US Space Force reportedly told Air Force Academy cadets that the US will eventually need to have machines that can make the decision to employ potentially lethal force. The question might be then, what's likely to happen when those sorts of national positions and expectations come up against those individuals, organisations or nations advocating for some sort of ban or regulation on this technology? Back in 2015 for example there was an open letter that called for a ban on offensive autonomous weapon systems that were beyond meaningful human control. That letter described autonomous weapon systems as the third revolution in warfare, after gunpowder and nuclear weapons.
More recently in 2023, the UN Secretary General and the International Committee for the Red Cross President called for UN states to negotiate a ban and regulation on autonomous weapons by 2026. And talks around LAWS regulation have been held since May 2014 in Geneva, but so far without a definitive outcome. These debates and meetings will almost certainly continue, although the prospect of a final resolution is still very much up in the air.
The CCW (or Convention on Certain Conventional Weapons) which is being used as the framework for a lot of these discussions around potential regulation or restriction on LAWS, operates with essentially a consensus framework. So potentially one of the world's main efforts to regulate LAWS relies on the likes of Washington DC, Moscow and Beijing all agreeing and getting along. I know that might sound like a tall order, but the UN Secretary General did urge these countries to reach an agreement. And surely no major military and economic power would ever ignore the United Nations. But speaking more broadly, there might be some reasons to be pessimistic about the potential outcomes of those sort of talks. For one, they arguably just offer too much potential military utility for countries to entirely ignore as long as they remain legal.
And even if you can argue that autonomous systems aren't perfect, there's a pretty strong case that humans aren't either. An autonomous system might be prone to making mistakes. It might have difficulties in an environment it wasn't programmed to understand or anticipate.
By its very nature, it's always going to lack some of the discernment, some of the humanity, that a manned system is going to bring. But unless programmed or directed to do so it's also less likely to embody some of the worst aspects of humanity in a war zone. It's hard to imagine for example an autonomous weapon system firing when it shouldn't because it's scared, panicked or tired. It's unlikely to refuse to take prisoners because it's angry or loot civilian homes because it's greedy.
Whatever the future holds for autonomous weapon systems, I doubt it includes drones breaking into homes and stealing washing machines. The danger of LAWS is that they are likely to make decisions according to their code. The potential benefit is that they make decisions according to their code.
But the biggest reason I think to be pessimistic about the potential outcomes of these talks is that, like it or not, systems like this are already here and they are evolving quickly. In Ukraine, both the Russians and Ukrainians are seeing the value of weapon systems that can operate more autonomously. Whether it works or not, Russia clearly has great ambitions for the autonomy of the Lancet loitering munition. And Ukraine might feel like it needs any tool it can get to help make its munitions more resistant to Russian electronic warfare, and autonomy is one of those tools. So one way or another, you are probably going to have at least some actors out there that are developing and fielding highly autonomous weapons.
And if some major powers are doing it, you could argue there are pretty strong incentives for others to do it as well, if for no other reason than to try and get some leverage. Because as a very general historical rule, whenever you have an arms limitation conference or treaty being negotiated, the countries with the most leverage are going to be the ones with the most potentially to give away. If you are a country with nuclear weapons and your competitor doesn't have them, what incentive do you have at any arms talk to agree to give yours away? There's arguably a reason that major talks around the reduction of nuclear arms during the Cold War largely weren't driven by countries that didn't have those weapons, but instead concluded between the two countries with by far the most of them, the US and the Soviet Union.
And agreements like SALT or the INF Treaty aren't really historical outliers there. The Washington Naval Treaty, for example, tried to cap the naval arms race in the inter-war period, with some success. And surprise, surprise, it was the powers that went into that process with the greatest fleets and shipbuilding industry, the British Empire and the United States, that came out of that negotiation process with the greatest treaty limits.
And yet those smaller players had every incentive to accept that deal being put on the table, because the alternative was to continue with very expensive uncapped competition, in which the US and UK already enjoyed a massive advantage. The British were also in the best position to be able to make serious concessions by decommissioning existing ships for example, while still ending up in a pretty strong position. Part of those concessions for example included scrapping the battle cruisers HMAS Australia and HMS New Zealand.
Something we Antipodeans absolutely do not hold a grudge for. The core point here is that in any negotiation it probably pays to have chips that you are willing to play. And often extreme negotiating outcomes result from extreme disparities in positions. For example, arguably the closest the world ever got to banning nuclear weapons outright was during that brief window in the 1940s when the United States was still the world's only nuclear power. Among other things, the American proposal of the era would have eliminated nuclear and other weapons of mass destruction from national armaments, with the key American leverage in those negotiations being that they were the ones with the nukes. And that, potentially, competitor states would be placed in a better relative military position if they agreed.
Once the Soviets and other powers started to acquire nuclear weapons however, that American leverage was arguably diminished. The key point here is that even if a major power was to adopt a policy position saying that LAWS should be further regulated or potentially banned outright in some way, they'd still arguably be pretty incentivised in the short and medium term to continue developing the technology. And potentially even try and get ahead in the development race so as to be in the strongest possible position to influence the course of future talks.
Of course the more effective those tools become and the more a military might come to rely on them, the harder and more costly those concessions might be if and when the time does come. The core point here is that because of the way the incentives are set up, even countries that might be deeply nervous about the idea of lethal autonomous systems or their use may still feel pressure to invest in or develop the technology in order to do things like deter potential competitors, or help shape any final agreement on it. Consequently, you might expect to see a fairly wide-ranging proliferation of this technology that isn't led solely by those countries with fewer reservations about its potential. In closing, I think the big picture around Lethal Autonomous Weapon Systems can probably be summarised like this: existing international law demands a degree of humanity in warfare. Autonomous systems might create new scenarios where meeting those existing obligations is difficult or complicated.
And those doctrinal and ethical questions should probably be addressed, even if by now humans have proven on many, many occasions that they are entirely capable of deviating from international law without any autonomous systems involved. But the important thing to recognise is that the technology and the debate really isn't hypothetical at this point. AI is here, unmanned systems are here. And unless something dramatic changes, whether we like it or not, you can dread it or run from it, but the robots will still arrive. And we have every reason to believe they will play an increasingly definitive role on the battlefields of the future. And finally a quick channel update to close out.
I'll keep this one brief because as you can probably hear from the recording, my voice isn't entirely recovered yet. Over the last couple of weeks I've been pretty unwell. So unwell in fact that for the first time in more than 2 years I was forced to miss not one, but two release dates. That was a big call considering that even Covid hasn't been enough to stop me in the past, but this time there really wasn't a choice. So thank you for your patience during the delay, and I hope you still enjoyed this somewhat shorter and roughly voiced episode.
With any luck, we should be back to a regular release schedule next week, and there is a lot in the backlog that I want to cover. But to spare the risk of throwing away what little voice I have left, I'll save next week's updates for next week. Thank you again to all of you, and especially the patrons, for your understanding, support and patience.
And of course thanks to some of those other YouTubers who offered to step in to provide voice lines if my voice wasn't up to the point of being able to finish this episode this week. One (I'll reveal who in the future) actually went so far as to record the lines just in case I'd end up needing them, which of course was incredibly generous. So thank you to both them and all of you. And I hope to see you all again with a better voice next week.