OPINION / ANALYSIS
Nuclear war, climate catastrophe, contagious disease, meteor collision. All of these are threats to human existence. And the more humanity progresses, the more existential threats it creates for itself.
As if the threat of nuclear war wasn’t bad enough, scientists are currently developing killing machines that rely on artificial intelligence. These machines are also known as killer robots, and some of them are already in use.
This might sound like the plot of a science-fiction movie, something like Terminator. But the technology exists and continues to improve.
Russian “killer drones” that use AI have reportedly been spotted in Ukraine, according to Wired, which cites Telegram and Twitter posts. It should not be surprising that Russia has access to this technology, as it has been in development for years.
While reports suggest Russia’s military technology has been embarrassingly ineffective, other countries have been developing similar weapons. Without a ban or international safeguards, these machines could pose a very real existential threat in the coming decades. What we’re seeing from Russia now is just the first stage of AI-powered killing machines.
Slippery slope
Nuclear weapon development is widely viewed as an existential threat. The same should go for the development of AI-powered killing machines.
As competition between world powers continues, it stands to reason that robot technology will keep developing to the point where killing machines can travel large distances and kill targets based on algorithms alone. Human input will no longer be necessary.
While this technology is arguably advantageous in the context of armed conflict, or possibly police operations, it carries obvious risks.
The threats are numerous. If world militaries continue to develop this technology without proper safeguards, it could be used to slaughter innocent civilians en masse during war. The automated technology may even turn on people, developing a mind of its own.
Then there is the potential that these machines could be hacked, or that they could end up in the hands of a terrorist organization such as ISIS or Al-Qaeda.
Interestingly, the US opposes a ban on autonomous killer robots, a position experts have criticized while warning that the US government should reconsider.
“We’ll see even more proliferation of such lethal autonomous weapons unless more Western nations start supporting a ban on them.”
Max Tegmark, MIT professor and co-founder of the Future of Life Institute
Fortunately, there has been some pushback against the use of killer robots. Organizations such as Stop Killer Robots exist to advocate against the development and eventual use of these machines. Human Rights Watch (HRW) has echoed this sentiment. An article published back in 2018 puts it perfectly: “people have a moral and legal imperative to ban killer robots.”
The Fermi Paradox
The Fermi Paradox describes the contradiction between the lack of evidence for alien life in our universe and the estimates suggesting it should be full of it.
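The expectation that the universe should be full of life comes largely from Drake-style estimates. As a rough illustration (the factors are standard, but any numbers plugged into them are speculative), the Drake equation multiplies a chain of probabilities:

N = R* × fp × ne × fl × fi × fc × L

Here R* is the rate of star formation in our galaxy, fp is the fraction of stars with planets, ne is the number of habitable planets per system, fl, fi, and fc are the fractions of those that go on to develop life, intelligence, and detectable technology, and L is the average lifespan of a detectable civilization. Even modest values for the first six factors can imply a galaxy with many communicating civilizations, which is why the observed silence registers as a paradox; one resolution is that at least one factor in the chain, perhaps L itself, is vanishingly small.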
One of the hypotheses for the lack of evidence for alien life is that intelligent life does exist elsewhere in the universe but destroys itself before it gains the ability to fully explore space. This is known as the Great Filter, a hypothesis originally formulated by Robin Hanson in the 1990s.
While this is just a hypothesis, it is within the realm of possibility that other planets have hosted life similar to ours that never developed the ability to travel through space because it destroyed itself first. Unfortunately, our planet may be on a path toward that same theoretical fate. It is entirely possible that the inhabitants of Earth could kill themselves off completely in the coming decades.
That’s not to say these weapons will turn on us tomorrow; we still have time to reverse course. To do that, people, and specifically lawmakers in the US, need to see these weapons for what they really are: an existential threat.