OpenAI, the company that unleashed ChatGPT on the world, is rumored to have made a breakthrough toward AI “superintelligence,” with a new project reportedly displaying reasoning capabilities that previous machine learning models have lacked. The research has also prompted the company to call on authorities to proactively create a regulatory system to govern the use and development of these incredibly powerful tools before they are fully developed, a new technology that OpenAI says “could lead to the disempowerment of humanity or even human extinction.”
Shortly before the November 17 ousting of OpenAI CEO Sam Altman, several of the company’s research staff sent a letter to the board of directors warning that the company had made “a powerful artificial intelligence discovery that they said could threaten humanity,” according to sources contacted by Reuters; these sources also indicated that the warning was one of a number of factors leading to the board’s dismissal of Altman.
Although Altman was reinstated to his position only four days later, one of the revelations to emerge from the corporate chaos was the existence of ‘Project Q*’ (pronounced ‘q-star’), believed by some of Reuters’ anonymous sources to be a breakthrough in the development of what is termed artificial general intelligence (AGI): a more adaptable form of AI that can learn skills it wasn’t designed for, a more human-like ability than anything the current cadre of generative AI programs is capable of.
Current AI models have proven to be extremely adept at learning the skills needed to accomplish the tasks they were programmed to tackle, but they are limited to those skillsets. For example, Google DeepMind’s AlphaGo is exceptionally good at learning and playing board games, but don’t expect to have a conversation with it; conversely, ChatGPT is excellent at conversing with its human users, but it won’t be winning chess matches any time soon.
An AGI, however, would presumably be capable of learning skills beyond those that it was originally designed to acquire—imagine Stable Diffusion generating verbal narration to accompany its photorealistic images, or sitting down to a ChatGPT session and asking it “would you like to play a game?”
Although a direct link hasn’t been definitively established in this instance, the implication is that the big breakthrough came from Q*’s development team: given a large amount of computer processing power, the project was reportedly able to solve certain grade-school-level mathematical problems.
While this may not sound like a herculean accomplishment (the basic function of a computer is, after all, to process numerical values), it is important to remember that while computers are good at crunching numbers, it is not the computer itself trying to do the math in this instance; it is the AI.
It’s important to bear in mind that the entire point of AI is for it to generalize: while the computer hardware the program runs on is designed to be mechanically precise in its processing, the AI itself is designed to be general in its thinking, and consequently is bad at math. For instance, ChatGPT is designed to construct sentences by statistically predicting the most suitable word to follow what it has already produced, and as a result it can give wildly different answers when presented with the exact same question multiple times.
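To make that concrete, here is a minimal sketch in Python of why a model built on statistical prediction can answer the same question differently each run; the token probabilities below are invented for illustration and have nothing to do with ChatGPT’s actual internals:

```python
import random

# Hypothetical next-token probabilities after the prompt "2 + 2 =";
# these numbers are made up for illustration, not taken from any real model.
next_token_probs = {"4": 0.90, "four": 0.05, "5": 0.04, "22": 0.01}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# The exact same "prompt," sampled ten times, need not yield the same answer.
print([sample_next_token(next_token_probs) for _ in range(10)])
```

Because the output is drawn from a probability distribution rather than computed exactly, even a heavily favored answer like “4” will occasionally lose out to a wrong one.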
Needless to say, such ambiguity is unacceptable in mathematics: a given equation must produce the same answer every single time, regardless of how often the problem is presented. If Q* has indeed conquered basic mathematical problems, it suggests the AI has developed some sort of reasoning skill, allowing it to determine whether or not its answer to an equation is correct, rather than blindly posting whatever output its statistical process happens to produce, as generative AI has done thus far.
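To illustrate the distinction, here is a minimal sketch, again in Python, of the general “generate, then verify” pattern: a stochastic guesser proposes answers, and a deterministic check decides whether each one is actually correct. Nothing here reflects how Q* works; the random guesser simply stands in for a model’s uncertain output.

```python
import random

def model_guess() -> int:
    """Stand-in for a stochastic model proposing a value of x."""
    return random.randint(0, 10)

def solve_with_verification(attempts: int = 100) -> int | None:
    """Try to solve x + 7 = 12: propose candidates, keep one that checks out."""
    for _ in range(attempts):
        x = model_guess()
        if x + 7 == 12:  # substitute back into the equation: an exact, repeatable check
            return x
    return None  # no proposed answer survived verification

print(solve_with_verification())  # prints 5 (with overwhelming probability)
```

The guesses themselves remain unreliable; what changes is that a verification step filters them, so the final answer is the same every time the problem is posed.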
But, like a Doctor Frankenstein who is aware of the danger his creation could pose, OpenAI is part of a growing chorus calling for the proactive creation of official regulations to govern the development and use of what it describes as AI “superintelligence,” a phenomenon that the company believes may outstrip human capabilities within ten years.
The proposal, posted on the company’s official blog, states that “we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society.”
The post likens the emergence of AI to the development of nuclear technology: as with the cracking of the atom, advancements in AI can revolutionize numerous fields, such as AlphaFold’s solving of half-century-old protein-folding problems within mere hours; but, as with nuclear technology, that power comes with the potential to rain unimaginable destruction upon humanity. To that end, OpenAI is proposing that an intergovernmental organization akin to the International Atomic Energy Agency be formed: a global regulatory body that ensures all parties adhere to the regulations governing the technology.
The post also calls for the development of “the technical capability to make a superintelligence safe,” a capability the blog’s authors say “is an open research question that we and others are putting a lot of effort into.” That effort is mirrored in a separate blog post introducing OpenAI’s “Superalignment” initiative, intended to develop the “scientific and technical breakthroughs to steer and control AI systems much smarter than us.”
The blog’s authors state that their goal is to have these “alignment” tools, intended to keep superintelligent AI models in line with humanity’s best interests, developed within four years, and that they have devoted 20 percent of the computing power they’ve secured for AI development to solving this problem.