In his second Reith Lecture in 2021, Stuart Russell addresses the theme of AI in warfare.
The first thought is that this is a red line we should not cross — of course we wouldn’t want to create a robot army. But of course ethical dilemmas are never that easy:
- Would battles fought between robot armies result in fewer human deaths?
- Would autonomous weapons allow for ‘surgical strikes’ against key leaders, again minimising the loss of life?
- If assassination by drone is already considered legitimate (it’s certainly a modus operandi of the US military), why would we rule out the use of micro drones?
Aside from these, one can also imagine humanitarian and defensive uses, such as predicting the actions of an enemy.
It seems to me that the biggest single red line here is giving AI the decision-making power to take a life. The ethical case against these kinds of weapons is made powerfully at Lethal AWS. Their short film ‘Slaughterbots’ below, reminiscent of the Black Mirror episode ‘Hated in the Nation’, makes a compelling case for not crossing some basic red lines.