
Microsoft evidently believes it has achieved the “Singularity”—or at least the infrastructure model on which it will operate.

The tech corporation announced this past week a globally distributed deep-learning infrastructure that can seamlessly route workloads across different hardware, take advantage of idle processors, and span different Deep Neural Network (DNN) architectures.

What does Microsoft intend to do with this sprawling, powerful deep-learning intelligence?

According to “Singularity: Planet-Scale, Preemptive and Elastic Scheduling of AI Workloads,” the whitepaper it released about the project, access to Singularity will be available to companies and developers who want to cost-effectively integrate the AI into a wide range of cloud-based applications:

“Singularity is designed from the ground up to scale across a global fleet of hundreds of thousands of GPUs and other AI accelerators. Singularity is built with one key goal: driving down the cost of AI by maximizing the aggregate useful throughput on a given fixed pool of capacity of accelerators at planet scale, while providing stringent SLAs for multiple pricing tiers.”
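The whitepaper’s details are heavily technical, but the core scheduling idea—packing jobs from multiple pricing tiers onto a fixed pool of accelerators, and preempting lower-tier work when higher-tier work arrives—can be sketched in miniature. Every name and number below is invented for illustration; this is not Microsoft’s actual design:

```python
# Toy sketch of tiered, preemptive scheduling on a fixed GPU pool.
# All job names, tiers, and GPU counts are hypothetical.

import heapq

class Scheduler:
    def __init__(self, num_gpus):
        self.free = num_gpus
        self.running = []   # min-heap of (priority, name, gpus); lowest tier on top

    def submit(self, name, gpus, priority):
        """Higher priority number = higher tier. Preempts lower tiers if needed."""
        # Evict the lowest-tier running jobs until enough GPUs are free,
        # but only jobs with strictly lower priority than the new one.
        while self.free < gpus and self.running and self.running[0][0] < priority:
            _, victim, g = heapq.heappop(self.running)
            self.free += g
            print(f"preempted {victim}")
        if self.free >= gpus:
            heapq.heappush(self.running, (priority, name, gpus))
            self.free -= gpus
            return True
        return False   # not enough capacity, even after preemption

sched = Scheduler(num_gpus=8)
sched.submit("batch-training", gpus=8, priority=1)   # fills the pool
sched.submit("inference-sla", gpus=4, priority=3)    # preempts the batch job
```

The real system is vastly more sophisticated—checkpointing preempted jobs so no work is lost, and resizing jobs elastically—but the sketch shows why preemption lets a fixed pool honor stringent SLAs for higher tiers while still running cheaper work in the gaps.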

Why the Name Singularity?

In the world of science and technology, the term “Singularity” has been used to designate systems which are in some sense self-sustained and evolving. 

Wikipedia describes “technological singularity” as:

“[A] hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.”

If that sounds vaguely foreboding, the entry goes on to describe a more specific meaning of the term: the Singularity refers to an “inevitable” process whereby Artificial Intelligence, operating on its own, advances itself beyond the capabilities of human minds in every respect.

As the Wikipedia entry notes, the Singularity posits an intelligence explosion, where:

“[An] intelligent agent will eventually enter a ‘runaway reaction’ of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.”

While much of Microsoft’s Singularity project, as evidenced by its whitepaper, is couched in necessarily technical explanations of its concepts, methods and infrastructure, the capability and ambition of the project are unmistakable.

With Singularity, the tech company has developed a sprawling, always-on infrastructure, capable of handling deep-learning AI workloads in a highly efficient way.

In one sense, the pursuit of AI technology by companies like Microsoft, Google and Facebook, often working hand in hand with a wide network of universities and government agencies, is hardly startling.

But what’s harder to understand is the seeming embrace of the notion that such technology will not ultimately assist or advance humankind, but supersede and replace it.

It’s a perversion of traditional notions of progress that have persisted throughout the modern age, perhaps nowhere more distinctly than in America.

But make no mistake: along with astounding advances in practical pursuit and uses of AI technology, an ideology of transhumanism has also emerged and evolved.

Google engineer and futurist author Ray Kurzweil has been a leading advocate of the idea that AI technology should be pursued to the point where it surpasses humans as a superior “conscious” intelligence.

In 2009, following a TED Talk, Kurzweil co-founded Singularity University, a program furthering technological initiatives, leadership training, and an ideological framework centered on the pursuit of AI ascendancy.

One of Kurzweil’s core beliefs is that humans can and should merge with machine intelligence. In fact, he says the future existence of humans may depend on it.

It’s an ideology of transhumanism, centered on the idea that natural humanity is destined to be surpassed physically, intellectually and morally by robotics and AI.

At the SXSW Conference in 2017, Kurzweil predicted that the Singularity would happen roughly within the next decade:

“2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.”

Kurzweil’s predictions aside, Singularity University has had its share of problems evolving.

It garnered much early enthusiasm, reportedly receiving 1,200 applications for its inaugural class of 40 students, and leased its initial conference space from NASA in Mountain View, CA.

But SU experienced various controversies, including allegations of sexual assault by an instructor and of embezzlement.

Following a withdrawal of Google funding in 2017, a Global Solutions program was suspended and more than a dozen employees were terminated in an effort to clean house, according to Bloomberg.

Controversy over SU’s agenda has also been a problem. In recent years, the initiative has taken pains to portray itself as primarily concerned with helping humanity, and not in speeding the ascendancy of AI.

But beyond the rosy language and technological initiatives advertised by the current SU website, it’s clear that Kurzweil and other transhumanists don’t believe the coming superior AI will be content to serve natural humans as subordinate assistants.

Being superior, why should it? 

From Sci-Fi to Deep-Learning Autonomy: How AI Arrived 

Machines and robots with human-like intelligence have long been fodder for science fiction, dating back to the late 19th century. Largely treated as novel fantasy, such depictions delighted audiences; Robby the Robot in the 1950s sci-fi classic Forbidden Planet even has his own IMDb entry, with 30 credits as an “actor.”

In 1968, 2001: A Space Odyssey startled movie-goers on a whole new level with HAL 9000, an advanced—and murderously out-of-control—Artificial Intelligence aboard a space mission.

Though nothing remotely like HAL existed in the 1960s, digital computers had been around since the 1940s.

In 1950, Alan Turing postulated his famous “Turing Test” for what might constitute an artificial intelligence.

The Turing Test proposed that if a machine could operate in a way that was indistinguishable from a human being, then it could be said, for all intents and purposes, to be “thinking.” 

Around the same time that Turing outlined what has been called the first serious philosophy of artificial intelligence, the world’s first “neural net machine,” called SNARC, was being co-developed by Marvin Minsky, who would innovate in the field of AI for the next 50 years.

By the 1960’s, British mathematician I. J. Good was describing a theoretical path to an ascendant AI “singularity,” though he didn’t use that term.

Good theorized an “intelligence explosion,” whereby a self-directed, rapidly improving artificial general intelligence would inevitably advance to a state where it outstripped human abilities.

Government and university interest saw AI initiatives undertaken over the next 20 years, but limited progress led to limited funding. Still, the 1980s brought advances in algorithmic reasoning that could mimic human experts.

Decision support tools that learned the “rules” of a specific knowledge domain, in the medical field, for example, could help determine a diagnosis.

But though such systems were capable of complex reasoning, they couldn’t learn new rules on their own to evolve and expand their decision-making.
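The rule-based approach behind those “expert systems” can be illustrated with a toy sketch. The rules and findings below are hypothetical examples, not drawn from any real medical system:

```python
# Toy illustration of a 1980s-style rule-based expert system:
# rules map observed findings to conclusions, but the rule set is
# fixed -- the program cannot learn new rules from experience.

RULES = [
    # (required findings, diagnosis) -- hypothetical examples only
    ({"fever", "cough", "chest_pain"}, "possible pneumonia"),
    ({"fever", "rash"}, "possible measles"),
    ({"headache", "stiff_neck", "fever"}, "possible meningitis"),
]

def diagnose(findings):
    """Return every diagnosis whose required findings are all present."""
    findings = set(findings)
    return [dx for required, dx in RULES if required <= findings]

print(diagnose(["fever", "cough", "chest_pain"]))
# A finding outside the hand-coded rule base simply matches nothing:
print(diagnose(["fatigue"]))
```

The limitation the text describes is visible in the sketch: the system can chain through its rules, but nothing in it can add a new rule on its own—exactly what later machine-learning systems were built to overcome.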

Pursuit of AI technology gained new momentum in the 1990’s. Computers were quickly becoming more powerful. The internet, with its connectivity protocols and synergies of communication and information sharing, provided both a means of development, and a fertile ecosystem for possibly profitable AI use cases.

By the early 2000’s, Google was engineering early AI algorithms for sifting and returning “relevant” search engine results.

Via neural networks, advances in deep-learning technologies, access to “big data” (data sets of enormous size), and relatively cheap, powerful graphics processing units (GPUs), AI has rapidly reached new milestones.

According to a U.S. government assessment:

“Affordable graphical processing units from the gaming industry have enabled neural networks to be trained using big data.[8] Layering these networks mimics how humans learn to recognize and categorize simple patterns into complex patterns. This software is being applied in automated facial and object detection and recognition as well as medical image diagnostics, financial patterns, and governance regulations.[9] Projects such as Life Long Learning Machines, from the Defense Advanced Research Projects Agency, seek to further advance AI algorithms toward learning continuously in ways similar to those of humans.[10]”

(Source: “A Brief History of Artificial Intelligence,” National Institute of Justice / Dept. of Justice)

With its Singularity technology, Microsoft has obviously taken a significant step toward creating the infrastructure of a self-directed AI evolution that comes closer to realizing the “intelligence explosion” hypothesis of I. J. Good. 

Human Centered…Or Human Transcending?

Stanford University has been a center of AI research since the field’s earliest days. Indeed, the term “Artificial Intelligence” was coined in 1955 by John McCarthy, who would later become a Stanford professor and found the university’s AI laboratory.

In 2019, Stanford instituted a program (with associated website) focused on “Human-Centered Artificial Intelligence,” with the designated acronym HAI and an interesting techno-logo:

(source: Stanford University)

Of course, the acronym might as easily have been HCAI, avoiding a visual similarity to HAL, that AI psychopath from 2001: A Space Odyssey.

Not to worry. According to the program’s literature, the goals of HAI are benign and meant to focus on AI innovation which assists humans:

“Human-Centered Artificial Intelligence is AI that seeks to augment the abilities of, address the societal needs of, and draw inspiration from human beings. It researches and builds effective partners and tools for people, such as a robot helper and companion for the elderly.”

Part of the purpose of the Stanford program is to advance AI research and initiatives. But the program also has a focus on addressing and influencing the social, philosophical, and political questions surrounding AI:

“Through the education work of the institute, students and leaders at all stages gain a range of AI fundamentals and perspectives. At the same time, the policy work of HAI fosters regional and national discussions that lead to direct legislative impact.

“What’s unique about HAI is that it balances diverse expertise and integration of AI across human-centered systems and applications in a setting that could only be offered by Stanford University. Stanford’s seven leading schools on the same campus, including a world-renowned computer science department, offer HAI access to multidisciplinary research from top scholars.”

But can superior AI intelligence remain humbly dedicated to serving humanity?

Intellectual heavyweights and technological trailblazers including Stephen Hawking, Elon Musk and even Bill Gates have expressed doubts.

Others like Ray Kurzweil, and the braintrust behind Microsoft’s “Singularity” deep-learning system, evidently aren’t as concerned.

Kurzweil envisions a sort of supercharged internet of intelligence literally merged with human brains, where humans and helpful, increasingly intelligent AI are all nodes in a bright, “meta intelligence”:

“What’s actually happening is [machines] are powering all of us. They’re making us smarter. They may not yet be inside our bodies, but, by the 2030s, we will connect our neocortex, the part of our brain where we do our thinking, to the cloud…”

“We’re going to get more neocortex, we’re going to be funnier, we’re going to be better at music. We’re going to be sexier. We’re really going to exemplify all the things that we value in humans to a greater degree.”

The Trends Journal has been extensively covering the transhuman agenda.





©2022 The Trends Journal
