While many an armchair futurist continues to bemoan the distinct lack of flying cars available in 2009, or the fact that we’re still at least 20 years away from having a high speed train in California, I can’t help but notice that the armies of murderous robots have been arriving right on schedule.
This year marks the first in which the US Air Force has ordered more unmanned aerial vehicles (UAVs) than piloted planes. Two of the USAF’s current models have racked up an estimated 250 kills in Pakistan, and are lovingly named Predator and Reaper. The US Navy’s new aircraft-carrier-ready Pegasus is scheduled to fly later this year, and let’s not forget about the USAF’s tiny intelligence-gathering Raven, or the new stealthy Polecat from Lockheed Martin’s Skunk Works division. And surely you’ve seen Boston Dynamics’ BigDog video… if not, click through and be terrified.
The government has mandated that one-third of all military vehicles must be unmanned by 2015. The military already employs some 12,000 robot agents in the field in Afghanistan and Iraq, and while many are doing mundane things like hauling supplies or dismantling bombs, a new generation of killer Johnny 5-style robots is on the way. Each of these murderbots is equipped with an M16, a grenade launcher, and a rocket launcher. Don’t worry, though, because a computer science professor from Georgia Tech is making some special software to ensure that these ~~early model Terminators~~ delightful robots always behave ethically and only kill ~~poor~~ bad people. According to an MSNBC article:
Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing an “ethical governor,” a package of software and hardware that tells robots when and what to fire. His book on the subject, Governing Lethal Behavior in Autonomous Robots, comes out this month.
Recently, three PhDs at the Office of Naval Research wrote a paper about how dangerous all this might be, and there’s also the book How Just Could a Robot War Be? by Rutgers University philosophy professor Peter Asaro (interesting interview with Asaro here), which investigates the ethical implications of several imagined robot war scenarios.
So while a few academics are expressing a bit of concern about these ethical dilemmas, it seems too little, too late. All the (still human) decision-making parties seem quite comfortable, excited even, to hand “the keys to our defense mainframes” over to the robots (yes, that’s BSG talk). Skynet’s practically already here, and when the flying cars finally do arrive, I’m guessing they’ll be coming in for the kill. Whatever, we’ll just have to back-burner these ethical discussions until after the global population has been reduced to a more sustainable level. I hear it’s advisable to “maintain humanity under 500,000,000 in perpetual balance with nature.”