The warfare of the not-so-distant future will rely on robots that can make decisions for themselves—and for our own safety, they’ll need a firm code of conduct, says a US Navy report billed as the first “serious” study of robot ethics. “There is a common misconception that robots will do only what we have programmed them to do,” its author tells the Times of London.
But because teams of programmers will build these robots, no single individual will know all the millions of lines of code or be able to predict how they will interact—which could lead to a scenario in which the machines turn on their human handlers. The report also details a number of ethical, legal, social, and political issues raised by robot technology: if a robot harms civilians, for instance, who takes the fall—the robot, its programmer, or the president?