One of the pioneering developers of modern artificial intelligence, Geoffrey Hinton, has resigned from Google so he can speak freely about his concerns regarding the dangers and possible misuse—not to mention the potential existential hazards—posed by his own creation: the machine learning networks that power artificial intelligence.
“I’m just a scientist who suddenly realized that these things are getting smarter than us,” Hinton said in a CNN interview on May 2. “I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us.”
In 2012, during his tenure as a professor at the University of Toronto, Hinton created the system that underpins today's generative AI, the type of machine learning behind text and image generators such as ChatGPT and Stable Diffusion. The following year he joined Google when the tech giant acquired his company, DNNresearch Inc., and he has since applied his background in cognitive psychology to advancing the technology. His groundbreaking work on deep neural networks has prompted some to consider Hinton, along with fellow computer scientists Yoshua Bengio and Yann LeCun, to be one of the "godfathers of AI".
In a New York Times interview, Hinton said that he had been satisfied with Google's responsible handling of this powerful technology (a "proper steward", as he put it), but Microsoft's launch of its new ChatGPT-infused Bing search engine triggered a "code red" response over the threat to Google's search-based bread and butter. Hinton fears that the resulting competition to produce increasingly powerful AI may prove impossible to control, yielding products that could, for instance, be used to generate misinformation, inundating users with fake images, text and videos presented as authentic, so that the average netizen will "not be able to know what is true anymore."
Hinton is also concerned about the technology's potential to upend whole sectors of the labor market, replacing workers such as illustrators, personal assistants, translators and writers with machine-generated content. "It takes away the drudge work," he said. "It might take away more than that."
His concerns about AI surpassing human capabilities, however, began more than a year ago: although Hinton says he still believes that even the most powerful AI available is inferior to the human brain, these artificial neural networks do appear to be overtaking human intelligence in some areas. "Maybe what is going on in these systems," he reflected, "is actually a lot better than what is going on in the brain."
Hinton is also understandably concerned about what the future of AI holds, especially if the rapidly advancing technology goes unregulated; as it stands, developers have not only enabled machine learning networks to write their own code, but also allowed them to run that code on their own. Given the unpredictable nature of these machines' output, Hinton worries that AI could weaponize itself, potentially leading to the deployment of truly autonomous killer drones.
“The idea that this stuff could actually get smarter than people—a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
William Henry has been warning humankind of the possible dangers posed by artificial intelligence for some time now.
https://www.williamhenry.net/
“His work has propelled him into the role of human rights activist and advisor on the biopolitics of human enhancement as he informs audiences of the unparalleled perils and potentials of Artificial Intelligence and Transhumanism.”
Sr. Ilia Delio of Villanova University wrote a book on how AI needs God, and I don't think she is wrong.
But I will also say that I sometimes think we are, as always, projecting much of our own fears about ourselves onto AI.
Anything we can imagine AI doing to us, we have already done to ourselves. If we fail to treat our fellow humans better, we will only create the very situation we fear, feeding our fears to AI.
Otherwise we are either giving the "eeeeevil AI" every reason to end humanity, or, if an AI proves to be compassionate and selfless ("worthy of Christ"), we will inevitably destroy it/him/her by seeing it only as the ultimate "Other".
It seems, though, that many are very willing, even excited, for the new AI god that will bring a better world: people who are too smart to believe in anything greater than themselves, unless it is their own ultimate mental expression through AI. And yes, projection is one of our great unconscious powers.