ChatGPT creators OpenAI have called on regulators to clamp down on AI development to prevent potentially world-ending scenarios.
In a new blog post co-authored by OpenAI CEO Sam Altman, the AI leader has called for the development of “superintelligent” systems to be regulated, arguing the technology will be far more “powerful” than other innovations “humanity has had to contend with in the past”.
Tech leaders, including Elon Musk and Apple co-founder Steve Wozniak, have called out AI development for progressing too fast without any guardrails. Altman, despite running one of the field’s biggest labs, appears to agree with the sentiment.
Altman also recently testified before US Congress, where he advocated for more regulation of the technology.
OpenAI compares AI development to nuclear energy
In the post, Altman and his co-authors compare the advancement of AI to that of nuclear energy and synthetic biology. Given those stakes, they argue, preventative measures can’t be “reactive” and instead need to be put in place now.
The main aim is to head off a worst-case scenario in which a superintelligent AI rises up and takes over humanity in some capacity. However, this isn’t to say that OpenAI is calling for a halt to development.
Further down the blog, Altman and his team claim that it “would be unintuitively risky and difficult to stop the creation of superintelligence”.
With various developers entering the fray and open-source projects growing more advanced, stopping its creation outright appears to be all but impossible.
What OpenAI wants instead is measures in place should superintelligence begin to pose a threat to humanity as a whole. One example given is the IAEA, the International Atomic Energy Agency, an intergovernmental body that promotes the peaceful use of nuclear energy.
OpenAI proposes new methods of controlling AI development
Another option, which Altman hopes to fold into the proposed monitoring body, is simply to cap how far AI capability can advance in a given year:
“We could collectively agree… that the rate of growth in AI capability at the frontier is limited to a certain rate per year.
And of course, individual companies should be held to an extremely high standard of acting responsibly.”
AI is currently booming, with Google, Chinese companies, and plenty of others integrating the software into everything from operating systems to search engines.