

Skynet, Killer Robots and DoD Policy

on September 20 | in Rabbles

It has been almost a year since the DoD policy on killer robots was published, and given the very little coverage Directive 3000.09 has received, it wouldn't surprise me if most people did not know the US is actively developing killer robots. I was reminded of this while reading an article on http://thebulletin.org earlier today and decided it was a good topic for a rabble. Would killer robots make better decisions than humans? Would killer robots make our country safer? Should we be worried about Skynet?


A brief rundown of Killer Robot policy

The DoD is not officially calling them Killer Robots, but we all know what they mean, right? There have long been complaints from former military officials that the human element of war is becoming the weak link in operations; that humans are weak and undependable. That line of thinking, combined with the general idea that robots are replaceable, is behind the huge push for this sort of autonomous weapon policy. The DoD policy is a brief 15 pages, including references and a glossary. The real substance of the policy is only about 3 pages, and it's pretty clearly written for a government document. The document was created to establish policy, guidelines, and responsibilities for autonomous and semi-autonomous weapon systems on manned and unmanned platforms. The policy excludes cyberspace weapons and platforms, such as Stuxnet.

The key sections of the document are (emphasis is mine):

  • The design, development, acquisition, testing, fielding, and employment of autonomous and semi-autonomous weapon systems, including guided munitions that can independently select and discriminate targets
  • Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.
  • Semi-autonomous weapon systems that are onboard or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator.

So the plan is to create robots (much like the drones of today) that can also pick their own targets. In case something goes wrong, the killer robots should only hunt down and engage targets that have already been approved by human operators.
Group of Robots

Killer Robot Decision Making

We all know technology always does what it is supposed to, right? It is programmed to do something, follows that logic, and completes the task at hand (software bugs and bad software design aside). I'm reminded of Will Smith's story in "I, Robot", where his character was saved by a robot that made a decision based on logic alone. The robot had decided that Will Smith's character was more likely to survive than a little girl, and rescued him while the girl drowned in a sinking car. While that makes a lot of sense logically, it does tug at the heart a little bit. Okay, a lot.
That was a helpful robot, though, and we are talking about killer robots. Personally, I think logic is the right way to pick a target, most of the time. Humans are emotional beasts. Very emotional. It is what makes us, well, us. I think the decision to engage a target and end a life should not be influenced by emotion. The bigger question is: how do you write code that decides to end a life? I write a lot of code, and a lot of it is "if this then this, but if that then that" type stuff, and I am having a hard time seeing how any group of developers could come up with all the criteria needed for this to work properly. Would it be like spam filtering?

$rules = array(
	"armed"     => 3,
	"dangerous" => 2,
	"terrorist" => 3,
	"child"     => -5,
	"mother"    => -1,
);

while ($person = get_new_contact()) {
	$killscore = 0;
	foreach ($rules as $rule => $points) {
		if (in_array($rule, $person["attributes"])) {
			$killscore += $points;
		}
	}
	if ($killscore > 5) {
		// engage target
	} elseif ($killscore > 1) {
		// flag for human review
	}
}

That seems pretty silly to me as pseudo-code, but you get the idea, right? Obviously the emotions of the designers are going to go into the ranking criteria, so would the killer robot's decision really be based solely on logic? Probably not, and that scares me.
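For the curious, the same spam-filter-style scoring idea can be written as a small runnable sketch. This is Python rather than the pseudo-code above, and every rule weight, attribute name, and threshold here is made up purely for illustration; nothing in it reflects how anyone actually ranks targets.

```python
# Toy spam-filter-style scoring: sum the weights of whatever
# attributes a contact has, then bucket the total into a decision.
# All weights and thresholds are arbitrary, for illustration only.
RULES = {"armed": 3, "dangerous": 2, "terrorist": 3, "child": -5, "mother": -1}

def kill_score(attributes):
    """Sum the rule weights for every attribute the contact matches."""
    return sum(points for rule, points in RULES.items() if rule in attributes)

def decide(attributes, engage_at=5, review_at=1):
    """Bucket a contact's total score into a coarse decision."""
    score = kill_score(attributes)
    if score > engage_at:
        return "engage"
    elif score > review_at:
        return "flag for human review"
    return "ignore"

print(decide({"armed", "dangerous", "terrorist"}))  # score 8 -> "engage"
print(decide({"armed", "mother"}))                  # score 2 -> "flag for human review"
print(decide({"child"}))                            # score -5 -> "ignore"
```

Which, of course, just demonstrates the problem: the "logic" is nothing more than whatever weights and thresholds the developers chose.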

Do Killer Robots make us safer?

I don't think any weapons really make us safer as a world. We get a new weapon, someone else gets a new weapon, and it is a race that just isn't winnable. There are some benefits to killer robots, like getting soldiers off the battlefield, but if that happens, what is the real cost of war? The real cost of war now is the lives ended early and the families torn apart. Everything else can be replaced; a loved one cannot. Insert robots into war, and what's to stop a large-scale killer robot war from breaking out? The thought of a war with no real or moral cost, I think, makes us inherently less safe.

Medical Robot Lego


Now, the thing about technology in general is that it gets developed for one reason or goal and then introduces hundreds or thousands of other uses. In March of 2012, the Naval Research Lab opened the Laboratory for Autonomous Systems Research (LASR). While NRL had been researching robotics for almost 100 years, this lab marked the first time dedicated resources and specialized facilities were devoted to such a broad range of research paths.
The influx of money, personnel, and private corporations caused by Directive 3000.09 has some real benefits for humanity. Sure, killer robots may not make us safer, but medical robots, transportation robots, social robots, and so on certainly have the potential to. Police robots could go either way, so I'm going to withhold judgment on them for now, but generally speaking, I think research into these things is a good thing.

Should the idea of Skynet worry us?

Absolutely. As we push forward with creating killer robots to fight for us, helpful robots to care for us, nurturing robots to raise our children, and all sorts of robots to replace humans, at some point the logical choice is to replace humans altogether, isn't it? Art imitates life and life imitates art; there is a lot to learn (and be worried about) from the classic science fiction novels, series, and movies about killer robots and the apocalyptic wars they bring!
Anyway, I would love to hear your thoughts on killer robots and such, so leave a comment below!

Killer Robot Cylon



This post is an opinion piece, otherwise known as a rabble. The views expressed in this piece are not necessarily reflective of TechRabble as a whole, and in some situations are just crazy talk.

photo credit (cc):

San Diego Shooter

