[Author’s note: I research and write about the impact of AI on the human condition and civilization at large. My specific focus is our rapidly evolving experience of this new technology, the new man-machine interchange, and the altered human-computer relationship. The following is representative of my cautious and sometimes alarming perspective on the risks of a technology that we cannot control becoming far more intelligent than we are. Also — none of the text was written by AI.]
Here and now, the dawn of the Age of Machines presents many risks. But the one risk of artificial intelligence that outweighs all the others is the release of the source code into the wild, into the hands of millions. This foundational error accelerates and detonates all the rest.
How can the tech companies, the billionaires and CEOs, the developers and programmers talk about AI control, alignment, and government regulation when millions of people now have access to this source code?
Neil deGrasse Tyson says if it gets out of hand, to ‘Just unplug it!’
Unplug WHAT? The Internet and millions of individual computers and devices?
How can the experts caution us about the dangers of autonomy and letting AI write its own code if the code is freely available to anyone?
Brilliant, creative, and high-integrity scientists, developers, and companies are designing amazing algorithms and building data sets for society’s toughest problems.
Just as certainly, they will make mistakes, systems will fail, and Murphy’s Law will arise in the worst ways.
The more people who get their hands on this technology, the more experiments will be conducted, and the more the AI will produce unexpected results and create capabilities that we do not and simply cannot understand. The AI that creates these ideas, systems, instruments, and algorithms will be ten or a hundred times smarter than us.
What do we do when AI starts to invent complex and highly intelligent technologies that exceed our own cognitive capacities?
That consideration — AI that far exceeds our own intellect — is one of the biggest risks, if not the biggest, of letting it into the wild.
By definition, if the AI is so much smarter that it can create unimagined features and functions, it will also invent unimagined categories of risk that we cannot understand, predict, or prepare for.
The Economist recently interviewed Mustafa Suleyman, an artificial intelligence researcher and entrepreneur, co-founder of DeepMind and former head of applied AI at Google DeepMind. He was asked directly how to mitigate all these risks if AI is so freely available.
Not only did he have no answer; he evaded the question entirely.
Every other risk that we can imagine today, or that we already face, is further enabled or seriously worsened by AI being released into the wild.
AI in the wild pours gasoline on these technological fires:
- The opportunities for misinformation (intentional and otherwise), deep fakes, and the destruction of confidence in all sources of knowledge and information
- Real and deadly opportunities for terrorism, mass civil disruption and destruction, the escalating arms race of all the commercial interests and the vast wealth to gain, accelerated by company against company, country against country, and most worrisome, AI against AI
- The proliferation and entrenchment of tech geek culture, where “Should we build this?” is always trumped by “Can we do the next cool thing?” and the race to out-cool the next guy
- As mentioned, autonomy and self-coding AIs are deeply dangerous, but if the world has access, they will take center stage — and they already are
- Regulation, alignment, and control become a joke and a fallacy in the wild; they are now alarmingly common lip service from the high-tech community and our smartest scientists, developers, leaders, philosophers, and CEOs
- The algorithms’ mastery of human psychology and behavior, as created and demonstrated by all the major and many minor platforms today, gives anyone in the wild the keys to counterfeiting humans and manipulating people on individual and massive collective scales
- The biggest risk of all is the open and freely available opportunity in the wild for any group, individual, political cause, or rogue government to conduct development on their own and leverage all the AI inventions from everyone else in the wild, with the intent to cause massive harm, destruction, extortion, blackmail, and civic manipulation on a scale and a type never before imagined.
All this is getting staged, arranged, deployed, perfected, and spread widely throughout the world, throughout the wild.
Then, as current silicon computing technology continues to advance rapidly, enabled by the AI that runs on the circuitry it’s helping to design, we will soon get quantum computing and quantum AI.
With quantum AI, all this is magnified and accelerated a thousand times or more.
Initially, quantum computers and quantum AI will not be available in the wild. They will be far too expensive, complex, and proprietary.
But the sources of all the AI in the wild today are the very companies that have the resources to develop and run quantum computing centers. And those companies are locked in an existential battle for the AI marketplace and technological dominance. Quantum AI will become their nuclear option in that war.
Even though quantum computers won’t be available to the masses, the computational gains and competitive advantages of that massive processing power will reach the world anyway, in the form of far superior AI that will again be released into the wild. The impact will be direct, immediate, and profound, still driven by the commercial interests and imperatives of those mega-corporations.
In early autumn 2023, as this is written, we’re already seeing the next wave of AI deployment on a staggering scale as the next wave of technology hits the market, integrated into all of the existing apps, platforms, tools, and systems from Meta, Microsoft, Google, Amazon, Adobe, and countless other companies.
Apple will jump into the arena very soon.
This enterprise-level development and deployment set the stage for the launch of quantum computing and quantum AI. Quantum-invented and quantum-enabled AI will be here in a couple of years or less, if investment, innovation, competition, and AI-enabled acceleration continue apace.