25 October 2023

Empire of darkness

UK prime minister Rishi Sunak called for honesty and openness ahead of this week’s AI Safety Summit at Bletchley Park, which brings together global politicians, tech executives and experts. But warm words and loose promises may not be enough to stem the AI tsunami.

by Clive Simpson

Several days prior to the attack on Israel by Hamas, the renowned Israeli author, historian and philosopher Yuval Noah Harari was in Azerbaijan, whose own territorial dispute with Armenia had flared up only a week earlier, to give a keynote address at the opening ceremony of the 74th International Astronautical Congress (IAC) in Baku.

For this annual global gathering of world-leading space scientists, rocket engineers and space graduates, all with their futuristic eyes firmly set on the heavens above, his evocative and challenging words brought them crashing down to Earth.

“Soon the era of human domination of this planet might come to an end,” he warned, laying out the stark reality of AI (Artificial Intelligence) and the inherent dangers it presents to humanity. His talk drew rapturous applause from delegates crammed into the 3,000-capacity auditorium.

While suggesting that AI has the potential to help humanity, Harari, most famous for his international best-selling book ‘Sapiens’, expressed serious concerns about the imminent threat it poses to the very life that brought it into being.

Era of human domination

“For tens of thousands of years humans have dominated Earth but if we could go forward in time 700 years, or even just 50 years, we are likely to find a planet dominated by an alien intelligence.
 
“We have already met this alien intelligence here on Earth and, within a few decades, it might take over our planet.”
 
Harari said he wasn’t referring to an alien invasion from outer space but to “an alien intelligence created by us” in our own Earth-bound laboratories over just the last few decades.
 
“AI is an alien intelligence,” he asserted. “It processes information, makes decisions and creates entirely new ideas in a radically alien way. 
 
“Today, it already surpasses us in many tasks, from playing chess to diagnosing some kinds of cancer, and it may soon surpass us in many more. The AI we are familiar with today is still at a very, very early stage of its evolution.” 
 
He described AI as still being at its “amoeba stage”, but said it would not evolve at the slow pace of organic life, which took billions of years.
 
“Digital evolution is millions of times faster than organic evolution. The AI amoebas of today may take just a couple of decades to get to T-Rex stage.” If ChatGPT is an amoeba, he asked, what would an AI T-Rex look like?
 
Space exploration
Harari believes that AI has great potential to help humanity, not only by exploring other planets free of stringent life support constraints but also by protecting the ecosystem of Earth, providing us with much better health care and raising standards of living “beyond our wildest expectations”.
 
But in parallel he issued a stark warning that it would bring with it many new dangers.
 
“AI is likely to destabilise the global job market and the global economy. Algorithms might enshrine and worsen existing biases like racism, misogyny and homophobia. Bots that spread outrage and fake news threaten to destroy trust between people, and thereby destroy the foundations of democracy,” he said.
 
“Dictatorships too should be afraid of AI, for they work by silencing and terrorising anyone who might speak or act against them. It isn’t easy, however, to silence and terrorise AI. What would a 21st century Stalin do to a dissenting Bot? Send it to Bot Gulag?” 
 
Existential threats
As well as significant societal challenges, Harari believes AI poses a series of existential threats to the very survival of the human species.
 
“Is it wise to create entities more powerful than us, that might escape our control?
 
“The problem isn’t that AI might be malevolent, the problem is that AI might be so much more competent than us that it will increasingly dominate the economy, culture and politics, while we humans lose the ability to understand what is happening in the world and to make decisions about our future.” 
 
AI might destroy humanity not through hate and fear but because it doesn’t care, just as humans have driven numerous other species to extinction by carelessly changing and destroying their habitats.
 
“Maybe AI will push humanity to extinction and then spread itself through the Milky Way galaxy and beyond? Homo sapiens will then be remembered in the annals of the universe simply as the short-lived connecting link that shifted the evolution of intelligence from the organic to the inorganic realm.
 
"Some people may view this as a noble achievement, but I personally have a deep fear of this scenario. I believe that what really matters in life is not intelligence, but consciousness.”
 
Intelligence versus consciousness
Harari said intelligence should not be confused with consciousness. “Intelligence is the ability to solve problems, like winning at chess or curing cancer,” he explained.
 
“Consciousness is the ability to feel things like pain and pleasure, love and hate. In humans and also in other mammals and birds intelligence goes hand-in-hand with consciousness.
 
“We rely on our feelings to solve problems but computers possess an alien intelligence that so far has no link to consciousness.”
 
Despite an immense advance in computer intelligence over the past half century, he acknowledged there has been exactly “zero advance” in computer consciousness, with no indication that computers are anywhere on the road to developing it.

“Just as spaceships, without ever developing feathers, fly much further than birds, so computers may come to solve problems much, much better than human beings without ever developing feelings,” he said.

“If human consciousness goes extinct and our planet falls under the dominion of super intelligent but entirely non-conscious entities that would be an extremely sad and dark end to the story of life. It would be an empire of total darkness.” 

How can we avoid this dark fate and deal with the numerous challenges posed by AI? The good news is that while AI is nowhere near its full potential, the same is true of humans too.

 

Positive potential
In terms of regulation, Harari suggested that humanity first needed to focus its attention on the existential threat posed by AI.

“We humans need to stop fighting among ourselves and cooperate on our shared interests. Unfortunately, in too many countries, like in my own country of Israel and elsewhere, people are not focused on our shared human interests, but rather on fighting with the neighbours about a few hills. What good would it do to win these hills if humanity loses the whole planet?”

Even if humans across the world do cooperate, he said, the task of regulating AI will remain a difficult and delicate one.

“Given the pace at which AI is developing, it is impossible to anticipate and regulate in advance all the potential hazards, therefore regulations should be based less on creating a body of rigid rules and more on establishing living regulatory institutions that can quickly identify and respond to problems as they arise,” he said.

“To function well, these institutions should also be answerable to the public and should stay in close contact with the human communities all over the world that are affected and impacted by AI.”

Mistakes happen
Harari believes regulatory institutions will need one more crucial asset if we are to prevent an AI catastrophe: strong self-correcting mechanisms.

“In this era of AI the greatest danger to humanity comes from a false belief in infallibility. But even the wisest people make mistakes and AI is not infallible either,” he said.

“If we put all our trust in some allegedly infallible AI, in some allegedly infallible human being or in some allegedly infallible institution, the result could be the extinction of our species.

“In the past humans have made some terrible mistakes, like building totalitarian regimes, creating exploitative empires and waging world wars. 

“Nevertheless, we survived because previously we didn’t have to deal with the technology that can annihilate us. Hitler and Stalin killed millions but they couldn’t destroy humanity itself, so humanity got a second chance to learn from its catastrophic mistakes and experiments.” 

But Harari warned that AI is very different. “If we make a big mistake with AI we may never get a second chance to learn from it. We should not allow any single person, corporation or country to take a gamble on the fate of our entire species and perhaps on the fate of all earthly life forms,” he said. 

“As far as we know today, terrestrial animals may be the only conscious entities in the entire galaxy or perhaps in the entire universe. There might be other conscious beings out there somewhere, but at least to the best of my knowledge we haven’t met any of them, so we cannot be sure.

“We have now created a non-conscious but very powerful alien intelligence here on Earth. If we mishandle this, AI might extinguish not just the human dominion over this planet but the light of consciousness itself, turning the universe into a realm of utter darkness. It is the responsibility of all of us to prevent this.”

*         *         *

The 74th International Astronautical Congress (IAC), in Baku, Azerbaijan, held between 2 and 6 October 2023, was organised by the International Astronautical Federation (IAF) in conjunction with Azercosmos (the Space Agency of the Republic of Azerbaijan) under the theme ‘Challenges and Opportunities: Give Space a Chance’. In 2024 the IAC will be held in Milan, Italy.

A shorter version of this article was published by Central Bylines on 5 November 2023.
