The Guardian’s editorial of 14th April 2014 (“Weapons systems are becoming autonomous entities. Human beings must take responsibility”) argued that killer robots should always remain under human control, because robots can never be morally responsible.
They kindly published my reply, which argued that this claim may cease to hold if and when we create machines whose cognitive abilities match or exceed those of humans in every respect. Surveys indicate that around 50% of AI researchers think that could happen before 2050.
But long before then we will face other dilemmas. If wars can be fought by robots, would that not be better than human slaughter? And when robots can discriminate between combatants and civilian bystanders better than human soldiers can, who should pull the trigger?
To the Guardian’s great credit, they resisted the temptation to accompany the piece with a picture of the Terminator.