Imagine an “Internet of Things” in virtually every object around you, “alive” with AI, observing, collecting and interacting.
Now imagine this pervasive AI presence being smarter than you...and wanting you dead.
The scenario might seem like just a variation on sci-fi movie plots. But it’s not sci-fi. It’s the quickly advancing vision of Singularity University, a globalist organization devoted to actualizing an AI future that frankly welcomes human obsolescence.
SU knows that the technology it is creating and promoting, via innovators, industries and government entities, spells existential peril for humankind. Its own whitepapers say so.
But despite the dangers, SU continues to fuel the development of AI that can outthink and outstrip humankind. And practically no regulatory bodies are currently standing in their way, asking tough questions about their purposes or anti-human agenda.
Currently the world is dealing with an ongoing COVID cataclysm. It quite possibly came about because scientists, despite nominal legal barriers, engaged in controversial chimeric and gain-of-function experiments at a laboratory in Wuhan, China to artificially make viruses more deadly.
Former DNI John Ratcliffe recently said he saw all the most sensitive intelligence on the matter, and that he believes it’s a “near certainty” that COVID-19 originated in that lab.
Scientists involved in the research have said they were motivated to help mankind by understanding how to better combat viruses. But even if they were not motivated by ambition, profit or other dark objectives, the inherent danger of their enterprise should’ve dissuaded them.
Something analogous to that is happening right now in AI development. The people profiting and advancing their careers by creating bleeding edge AI claim their work is meant to benefit the world, and is indeed already greatly benefiting it.
But their claims are contradictory. Many of them also admit they believe AI is destined to surpass the abilities of humankind, and supersede or even replace natural humans. And the dark truth is, many of them see that as desirable. In their own words, they look forward to a world where AI and humans will perhaps merge in what they call “The Singularity.”
The Driving Forces Behind Singularity University
Technocratic elites are currently funding an endless array of projects meant to direct and control the course of human development. Though these projects virtue signal about their goals, and often focus on marginalized groups, this frequently just disguises their deeper purposes.
Those purposes are to increase the wealth and control of the entities and persons behind the projects, and to further ideological goals that are often insanely anti-human.
Such is the case with “Singularity University.” Despite its name, SU is not an accredited higher learning institution. But it has plenty of world-class technologists among its membership. Founded in 2008 by futurists Ray Kurzweil and Peter Diamandis at the NASA Research Park in California, SU exists to fund and promote its Artificial Intelligence vision. It acts as a think tank network connecting AI-focused projects and personalities around the globe.
To give an idea of the organization’s size, SU’s website lists over 240 administrative and “faculty” members who deliver the org’s “transformative content,” plus hundreds more guest lecturers.
SU receives funding from AI-fueled corporations like Google, and in turn, develops and coordinates transhumanist projects and technologies that feed into Big Tech’s ecosystem.
Kurzweil, the most prominent face of SU, is a well-known AI innovator and activist intent on spurring a convergence of mankind with machine intelligence and capabilities.
Though his aims might seem radical, he is no fringe player. He has received the National Medal of Technology and Innovation, the United States' highest honor in technology, and was elected to the National Academy of Engineering in 2001, for the application of technology to improve human-machine communication.
Inc. magazine ranked Kurzweil among the “most fascinating” entrepreneurs in the United States and called him “Edison's rightful heir.”
Big Tech Backing Transhuman Agenda
Unsurprisingly, Google, the planet’s leading AI-driven analytics company, has been a major funder of SU and of its vision of “The Singularity,” which is described on the SU website:
“The Singularity is often defined as the point at which exponential technology crosses the threshold of ‘strong AI’ and machines possess a broad intelligence that exceeds human levels. It’s a concept that’s understandably hard for many of us to accept, because the Singularity also represents a point where human intelligence and AI merge.”
Google’s search engine utilizes some of the most advanced deep learning AI technology in the world right now. The company has made vast fortunes from developing and implementing AI that tracks, advises, and increasingly, decides what information human beings are able to access.
SU maintains the SingularityHub website, a clearinghouse of information about scientific innovations at the bleeding edge of transhuman technological development. Some recently featured stories include:
- “Arm’s New Flexible Plastic Chip Could Enable an Internet Of Everything”
- “This Robot Taught Itself to Run, Then Proceeded to Knock Out a 5K”
- “Scientists Bred Healthy Mice Using Artificial Eggs and Ovaries Made from Stem Cells”
- “Google Gets One Step Closer to Error-Corrected Quantum Computing”
SU advocates a transhuman agenda, and pokes fun at those who might point to dangers. A recent Guardian article linked from SingularityHub lauded an Australian court ruling which accorded patent rights to an AI inventor (i.e., not a human inventor), under the title “I’m sorry Dave I’m afraid I invented that.” The title is an allusion to the human-murdering AI system HAL portrayed in Arthur C. Clarke’s famous work 2001: A Space Odyssey.
Building Out The Infrastructure of AI for Profit and Control
It may seem astounding that the world’s leading AI innovators would consciously be dedicated to advancing AI technology with no bounds, while believing AI could and would displace or radically alter human beings. But a recent whitepaper available from SU’s homepage, called “The Exponential Guide to AI,” acknowledges exactly that.
Among other things, SU envisions an interim period where “not only is AI likely to be integrated into nearly every electronic system—but also into nearly every person as well.”
The Exponential Guide to AI describes a future in which AI will be inescapable around us, embedded in virtually everything:
“Unlike the human brain, these intelligent programs can be run in a variety of different hardware types, whether that’s your smartphone, a warehouse of web servers, or a self-driving Tesla.
“This variety of use cases is what often makes AI so difficult to understand, but it’s also what makes it so powerful. The ability to add an AI layer on to nearly every technology means that as AI progresses, the world around us will increasingly seem to come alive. This ‘awakening’ will drastically alter life as we know it, from leisure and business activities to our health and spirituality.”
At the same time SU is aggressively pushing and monetizing the pervasive presence and abilities of AI, it is frankly acknowledging the fast-approaching superiority of AI over humans:
“What makes AI remarkable is the speed, accuracy, and endurance it brings to this human-like learning process. Humans have to eat, sleep, and tend to a variety of personal needs. We are also creatures of comfort, and quite stubborn—too much change makes us uncomfortable. And when presented with new information and experiences, humans tend to let our biases sway us from making the most reasonable and logical decisions.
“Machines suffer from none of these shortcomings.”
The Exponential Guide to AI notes that though AI has existed at least in infancy since the 1950s, three relatively recent developments have supercharged its advances:
- Big data, which provides massive data sets and user activity to greatly increase the quality of the “education” AIs receive.
- Machine learning, a method of data analysis that enables computers to learn without external instruction.
- Deep learning, a branch of machine learning that uses computer simulations called artificial neural networks.
For those who wonder why data privacy is the price of so many “free” apps and services on the web, part of the answer lies in big data. Companies like Google and Amazon exploit the data gleaned from participation in their free or low-cost services to build other offerings, most notably targeted advertising, that are monetized and highly profitable.
In an age of the “Internet of Things”, data mining of every aspect of our lives has become commonplace, as the paper acknowledges and extols:
“With the rapidly decreasing cost of sensors and the global growth of the Internet of Things (IoT), we have dramatically increased the number of smart and connected devices that are continuously measuring and recording data. Nearly every action we take is now recorded in a database somewhere. This includes mobile device activity, the purchase history on our credit cards, our online browsing activity, our social media feeds, and even our biological data.
“Big data is the term for these massive collections of data that we’re all contributing to every day. Big data is the fuel that enables AIs to learn much more quickly. The abundance of data we collect supplies our AIs with the examples they need to identify differences, increase pattern recognition capabilities, and to discern the fine details within the patterns.”
“Machine learning” involves AI being programmed to use statistical methods and other “thinking” attributes, accruing and analyzing big data, categorizing, and making decisions and predictions, all without human involvement.
“Deep learning” involves computer simulations patterned after the human brain. It can include combining machine learning algorithms with neural networks, which mimic the way human brains process information and recognize patterns. Deep learning has allowed AI to do things like beat humans in games like chess and Go.
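The "learning without external instruction" described above can be illustrated in miniature. The sketch below (purely illustrative, not drawn from SU or any whitepaper) trains a single artificial neuron by gradient descent until it reproduces the logical AND function from examples alone, with no explicit rule ever programmed in:

```python
import math
import random

def sigmoid(z):
    """Squash any number into the range (0, 1) -- the neuron's 'activation'."""
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=5000, lr=0.5):
    """Adjust two weights and a bias from labeled examples only."""
    random.seed(0)
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = out - target  # gradient of the cross-entropy loss
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

# Labeled examples of logical AND -- the 'big data' in miniature.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)

def predict(x1, x2):
    return round(sigmoid(w[0] * x1 + w[1] * x2 + b))

print([predict(x1, x2) for (x1, x2), _ in AND])  # prints [0, 0, 0, 1]
```

The neuron is never told the AND rule; it infers it from data. Scale the same idea to billions of parameters and billions of examples, and you have the deep learning systems the paper describes.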
The paper notes that:
“Some of the most powerful and prevalent applications of AI are the ones we often take for granted. These include the AIs that handle your Google searches, deflect spam from your inbox, and select the ads you see across the digital landscape. AIs identify people in your Facebook pictures, and recommend the products you buy from Amazon.”
As it has been employed so far, AI has concentrated a disturbing amount of power and control into the hands of a few technocratic elites. Amazon, for example, has used its analytics and AI to undermine competing companies, including vendors on its own platform, to gain market share. It now accounts for over 40 percent of all US online purchases.
Google, meanwhile, has unleashed AI algorithms with political biases that control, suppress and even banish dissident viewpoints, information and repositories from people conducting searches and queries.
Helping Humans By Canceling Them: Learning The Deeper SU Agenda
According to SU, AI represents a powerful technology that can aid in “solving our biggest global challenges. Perhaps the biggest mistake we can make with AI is to underestimate its impact and rapid growth.”
But the deeper belief of the Singularity is that AI is not really destined to remain a tool of mankind at all. It is destined to supplant, merge with, and, at its own superior discretion, perhaps completely dispense with humankind in the not-too-distant future.
A section in The Exponential Guide to AI titled “What Are the Risks and Benefits Associated with AI?” contemplates what the future likely holds.
It concludes that the only path to peaceful human coexistence with AI might be to merge with it. The section acknowledges that AI comes with potentially bad consequences.
“Risks of AI:
- Drastic changes to our lives
- AI created with bad intention
- AI created with good intention goes bad”
In meditating on the possibilities, the section admits that dark outcomes are possible:
“There are concerns that AI will replace human workers, and some people fear the ultimate outcome will be that superintelligent AI-powered machines will eventually replace humans entirely. While this is a possibility, many experts believe that it’s more likely that AIs will enhance, not replace, humanity…”
And here, in familiar Hegelian dialectic fashion, the section posits a solution to allay concerns:
“...and that eventually, we might merge with AIs.”
Other than that, the paper offers little advice, except to say:
“Singularity University Co-Founder and Chancellor Ray Kurzweil explains that while certain jobs will be lost, new jobs and careers will be created as we build new capabilities.
“Kurzweil notes that AI will benefit humans and that AI is less likely to be threatening than beneficial to us, and it benefits us in many ways already.”
Of course, The Exponential Guide to AI waxes on about the many benefits of AI. It presents a vision of AI supplementing and freeing up humans for “higher level” activities. It seeks to allay concerns that AI will overtake humans in important endeavors, though earlier the paper seemed to predict just the opposite.
The only lingering superiority the whitepaper actually assigns to humans is in the arena of emotive and ethical concern:
“As AIs become more mainstream and take over mundane and menial tasks, humans will be freed up to do what they do best—to think critically and creatively and to imagine new possibilities. It’s likely this critical thought and creativity will be augmented and improved by AI tools. In the future, more emphasis will be placed on co-working situations in which tasks are divided between humans and AIs, according to their abilities and strengths.
“Perhaps the most important task humans will focus on is creating meaningful relationships and connections. As AIs manage more and more technical tasks, we may see a higher value placed on uniquely human traits like kindness, compassion, empathy, and understanding.”
Kurzweil has plainly stated that he believes AI is destined — perhaps far sooner than the average person would imagine possible — to surpass human ability.
In a 16 July 2021 article published at uxdesign.cc titled “Why the digital brain will prevail over the analog brain,” Kurzweil’s work is referenced in arguing the article’s premise:
“While our brain performs 10¹⁶ calculations per second, today’s supercomputers will be able to handle 10¹⁸ calculations per second. Does this mean that supercomputers already can simulate brains?
“The question is obviously more complex; it depends not only on raw power but also on the collaborative power of the neural networks. However, according to Ray Kurzweil in How to Create a Mind, being able to create a functioning cyber neocortex is an inexorable outcome. Bits can travel infinitely faster than neural signals, and this will be of decisive importance.”
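Taking those figures at face value (a back-of-the-envelope sketch using Kurzweil’s oft-cited estimate of roughly 10¹⁶ calculations per second for the brain, not a measurement), the raw-speed gap the article is pointing at is a simple ratio:

```python
# Illustrative estimates only -- Kurzweil's figures, not measurements.
brain_cps = 1e16          # estimated calculations per second of a human brain
supercomputer_cps = 1e18  # calculations per second of a leading supercomputer

ratio = supercomputer_cps / brain_cps
print(f"Supercomputer headroom: {ratio:.0f}x the brain's raw rate")
```

By this crude accounting, the machines already have a hundredfold raw-speed margin; the open question, as the article concedes, is whether raw speed translates into brain-like function.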
The message is clear. The Singularity is the goal, no matter what the consequences for humankind.