AI could cause human extinction and the risk is real, says US govt-commissioned report
In October 2022, when the launch of ChatGPT was still a month away, the US government commissioned Gladstone AI to produce a report evaluating the proliferation and security threats posed by weaponised and misaligned AI. A little over a year later, the assessment is complete.
The report finds that AI could pose an “extinction-level threat to the human species”.
“The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilise global security in ways reminiscent of the introduction of nuclear weapons,” the report reads.
This was first reported by Time.
AGI, or artificial general intelligence, refers to a hypothetical form of AI capable of performing tasks at or beyond the level of human abilities. Tech leaders such as Meta CEO Mark Zuckerberg and OpenAI chief Sam Altman have repeatedly spoken of AGI as the future of the field. While no such system exists today, many in the AI community anticipate that AGI could become a reality within the next five years, or possibly sooner.
The assessment urges the US government to move “quickly and decisively” to avert the “growing risks to national security” posed by AI.
The report was authored by three researchers. Over the course of more than a year of work, they reportedly spoke with over 200 individuals, including government officials, experts, and employees of prominent AI companies such as OpenAI, Google DeepMind, Anthropic, and Meta.
These conversations reportedly revealed a troubling pattern: many AI safety professionals at leading research labs worry that perverse incentives are influencing the decisions of the executives who control their companies.