In an effort to troubleshoot worst-case scenarios that might arise from the impending robot revolution, a group of scientists (funded by Elon Musk) has created a checklist of worst-case outcomes, which, they warn, may sound far-fetched, but really ought to be taken seriously if we want to safeguard humanity against disaster. On that list? AIs rounding up humans for concentration camps, committing spermicide against humanity, and nothing less than the destruction of the planet, a significant portion of the solar system, or even the universe!
From Daily Mail:
‘The standard framework in AI thinking has always been to propose new safety mechanisms,’ computer scientist Roman Yampolskiy from the University of Louisville told New Scientist.
But instead, he believes we should approach the threat from AI the same way cybersecurity researchers do when looking for vulnerabilities.
He says creating a list of all the things that could go wrong will make it easier to prevent them from happening.
For instance, in one scenario, the researchers predict AI will create ‘a planetary chaos machine’ that spreads global propaganda, pitting governments against the general public.
The researchers write that AI may also ‘takeover resources such as money, land, water, rare elements, organic matter, internet, computer hardware, etc. and establish monopoly over access to them’.
In another scenario, robots could force humans to become cyborgs by requiring everyone to have a brain implant that could be controlled remotely.
‘They could also abuse and torture humankind with perfect insight into our physiology to maximize amount of physical or emotional pain, perhaps combining it with a simulated model of us to make the process infinitely long,’ the researchers say.
Another threat is that AI could commit spermicide against humankind, ‘arguably the worst option for humans as it can’t be undone.’
Pistono and Yampolskiy say there may be some warning signs that an organisation is developing an evil AI, such as the absence of oversight boards in the development of AI systems.
‘If a group decided to create a malevolent artificial intelligence, it follows that preventing a global oversight board committee from coming to existence would increase its probability of succeeding,’ they say.
Read the entire article here. It’s TERRIFYING.