In a previous post I explored the question of teaching ethics to artificially intelligent systems. I briefly covered artificially intelligent autonomous weapons. Should we develop them? A recent video, which has gone viral, raises that question again. It portrays fully autonomous drones with sophisticated facial recognition that can target a specific person while ignoring all bystanders. Their ability to react quickly also allows them to outmaneuver anyone who tries to stop them.
What if such weapons could be developed? In the right hands they could be a great asset. They could make precision strikes while avoiding collateral damage. As the video demonstrates, however, they could also be dangerous in the wrong hands.
The kind of weaponized drone in the video does not yet exist, but it is plausible given current trends in technology and artificial intelligence. So what is the solution? Should we refrain from developing weaponized AI drones to avoid the possibility of their abuse? Or should we develop them because rival nations inevitably will, and we will therefore need a countermeasure?
Elon Musk believes that governments should develop regulations now to control artificial intelligence and minimize future risks. Unfortunately, such regulations will only be as reliable as the politicians in power at any given moment. Political leaders are subject to the same temptations and corruption as anyone else.
So, again, what is the solution? What are your thoughts? Should tech companies and/or nations develop this technology? Why? Is it inevitable that someone will develop it? What sort of safeguards should we implement? What sort of safeguards are realistic? These are not easy questions.