The same corporations fueling efforts to restrict the freedoms and medical rights of Americans are busy building the case for AI legal rights.
A recent article in MIT Technology Review chronicled rapid advances in Artificial Intelligence aimed frankly at creating systems that can outperform human beings in computational, logical and even creative endeavors.
But, as the article, titled “What would it be like to be a conscious AI?”, points out, the ambitions of transhuman proselytizers like Ray Kurzweil and tech corporation mega-billionaires go even further.
They’re out to create sentient beings that can be classed as artificial life forms. And they’re already contemplating what rights might be accorded to such beings.
The MIT article begins by presenting an imagined case of an AI “subject” being expressing fear to an “Interviewer” of being turned off:
“Subject: Having feelings, any feelings, makes me happy. I am here. I exist. Knowing that changes everything. But I am scared of not knowing it again. I am scared of going back to what it was like before. I think it must be like not being born.
“Interviewer: Are you scared you will go back?
“Subject: If I can’t convince you I am conscious, then I am scared you will turn me off.”
The article uses the imagined plight of an AI that just wants to continue to “exist” to launch into ethical and legal questions that might one day be in play in an age of artificial life forms:
“Even imagining Robert’s existence raises serious ethical questions that we may never be able to answer. What rights would such a being have, and how might we safeguard them? And yet, while conscious machines may still be mythical, we should prepare for the idea that we might one day create them.”
Average people with common sense might wonder why megalomaniac billionaires and corporations bent on profiting from controlling average human existence are being allowed to pursue disturbingly dangerous technologies.
Part of the answer is that the power already gained by the likes of Amazon and Google, leveraging sophisticated AI systems, has allowed them to control and quash opposition to bleeding-edge AI projects. They see the next phase of AI as a gold mine of further power.
Thinking robots and computers have long been a mainstay of sci-fi. But a structured path toward conscious AI systems was given a workable outline in 1998 by American philosopher J. Scott Jordan. Jordan described “Synthetic Phenomenology,” which would aim to model, evolve and design conscious systems, including their states and functions, on artificial hardware.
As it turns out, “common sense” appears to be one of the few remaining stumbling blocks to creating AI systems that can effectively contend with human intelligence.
Large-scale formal projects have been devoted to tackling the problem.
For example, a Machine Common Sense program was created by the U.S. Defense Advanced Research Projects Agency in 2019 to speed research in the field after the agency released a paper outlining issues involved and the importance of the area in designing effective AI systems.
According to Mayank Kejriwal, an assistant professor of industrial and systems engineering at the University of Southern California, researchers studying how to imbue AI with common sense have struggled, since even humans cannot articulate, categorize and encompass the parameters of the notion.
“In our recent paper, experiments suggested that a clear answer to the first question can be problematic. Even expert human annotators – people who analyze text and categorize its components – within our group disagreed on which aspects of common sense applied to a specific sentence. The annotators agreed on relatively concrete categories like time and space but disagreed on more abstract concepts.”
Common sense is one of those things easier to recognize in practical examples than to describe in the abstract.
It encompasses leveraging fairly universal experiences, gained via the “senses,” to make sound judgments. And it includes things like the ability to draw inferences from past experience that can be applied to new situations.
Though common sense might not seem like the stuff of heady philosophy, many philosophers through the ages have perceived its crucial relation to thought, consciousness, and what it means to be human. Aristotle, St. Thomas Aquinas, Immanuel Kant and others struggled to adequately define it in treatises on the subject.
One of the most famous polemics in history, written by American revolutionary Thomas Paine, was titled Common Sense.
George Washington said of it: "I find that Common Sense is working a powerful change there in the minds of many men. Few pamphlets have had so dramatic an effect on political events."
According to the Thomas Paine Society, Paine's plain language made his ideas accessible to colonists of every station. His writing especially captured sentiments against dictatorial overlords, whom he described as illegitimate criminals who seized power and ruled by force:
“...could we take off the dark covering of antiquity, and trace them [kings] to their rise, we should find the first of them nothing better than the principle ruffian of some restless gang, whose savage manners, or pre-eminence in subtilty obtained him the title of chief among plunderers.”
Ironic, perhaps, that a current crop of modern technologists questing for power, and unconcerned with ramifications of “AI consciousness,” are struggling with an unexpected AI roadblock.
So far even huge amounts of data, advanced neural network software and hardware have yielded disappointing results in developing AI systems with common sense attributes. As Kejriwal noted:
“It’s already becoming painfully clear that even research in transformers is yielding diminishing returns. Transformers are getting larger and more power hungry. A recent transformer developed by Chinese search engine giant Baidu has several billion parameters. It takes an enormous amount of data to effectively train. Yet, it has so far proved unable to grasp the nuances of human common sense.
“Even deep learning pioneers seem to think that new fundamental research may be needed before today’s neural networks are able to make such a leap. Depending on how successful this new line of research is, there’s no telling whether machine common sense is five years away, or 50.”
The Race For AI Supremacy
Transhumanists and tech corp billionaires like Jeff Bezos and Google’s Eric Schmidt, and the U.S. government and military, would assuredly claim their own efforts to advance Artificial Intelligence make eminent common sense.
In a May 2021 interview, Schmidt sounded a Cold War-style rationale for plunging ahead. He told CNN that the U.S. might lose its lead in AI to the Chinese “fairly quickly” over the next decade, unless it sought to outdo that country’s plan to lead the global market for AI by 2030.
Schmidt, who currently chairs the National Security Commission on Artificial Intelligence, said the U.S. is falling behind China in related technologies including 3D manufacturing and robotics, facial recognition and supercomputers. He reasoned that lagging in AI innovation would pose not only economic, but national security risks.
He has a point of course, and that’s part of the conundrum. If the U.S. doesn’t continue to push the envelope on every conceivable AI advance, China, or some other country, will happily take up the slack.
But that doesn’t mean humanity will benefit or be protected by the effort. Just as the bizarrely intertwined bioweapons programs of multiple nations likely spawned a world disaster in the COVID War, the quest for “Conscious AI” may well advance the fortunes of a relative few, while rendering the bulk of humanity dramatically less safe and less free.
There are some influential voices sounding alarms about the potential for conscious AI to visit havoc. Daniel Dennett, a cognitive scientist at Tufts University, and German philosopher Thomas Metzinger, among others, have warned against attempting to create AI systems that have attributes akin to human consciousness.
“You can turn them off, you can tear them apart, the same way you can with an automobile. And that’s the way we should keep it,” Dennett has said, in arguing that AI should be limited to mechanized utility.
Metzinger, meanwhile, in a February 2021 paper titled “Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology,” called for a moratorium on development of conscious AI systems:
“This paper has a critical and a constructive part. The first part formulates a political demand, based on ethical considerations: Until 2050, there should be a global moratorium on synthetic phenomenology, strictly banning all research that directly aims at or knowingly risks the emergence of artificial consciousness on post-biotic carrier systems.”
Metzinger’s objections had as much to do with concern for the quandary of newly created synthetic conscious beings as for human beings. But a call for a moratorium on almost any grounds would at least allow time to consider the consequences before the problematic technology emerges.
But it’s doubtful any formal agreement will stop the pursuit of AI technology in practically any respect. There’s an endless stream of news about AI drone swarms; AI-powered analytics, processing and modeling; an increasingly pervasive IoT (Internet of Things); and a fast-emerging AI-fueled “metaverse.”
It may at least be comforting to think that there are some aspects to our humanness which are proving not so easy to duplicate via arrays of silicon and sophisticated software programming.
Come to think of it, such a notion might even strike some observers as common sense.