Pessimists Archive Newsletter


Is AI Fear this Century’s Overpopulation Scare?

Is Yudkowsky the False Prophet Ehrlich was?

Archie McKenzie
Mar 31, 2023

“Sometime in the next 15 years, the end will come.”  

Reading this in 2023, you could be forgiven for thinking these words came from an artificial intelligence (AI) alarmist like Eliezer Yudkowsky. Yudkowsky is a self-taught researcher whose predictions about intelligent machines veer into the apocalyptic. “The most likely result of building a superhumanly smart AI… is that literally everyone on Earth will die”, Yudkowsky wrote earlier this week in an article for Time. “We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan.”

But the words “sometime in the next 15 years, the end will come” are not Yudkowsky’s. They are from a similarly clueless, similarly obstinate prognosticator writing over fifty years ago: Paul Ehrlich. Ehrlich, a biologist at Stanford, authored The Population Bomb, which argued that Earth’s booming population would spell doom for humanity. (The idea that there are too many people, proven embarrassingly wrong decades ago, is sadly still fashionable in some circles.) In 1968, when Ehrlich wrote The Population Bomb, there were three and a half billion humans. Now there are over eight billion of us. The famines Ehrlich predicted never happened – while world population doubled, agricultural efficiency tripled. People are not burdens; we can innovate our way out of resource scarcity.

“Most of the people who are going to die in the greatest cataclysm in the history of man have already been born.” The words are Ehrlich’s but they could easily have come from Yudkowsky, who once declared that children conceived in 2022 have only “a fair chance” of “living to see kindergarten”. Ehrlich and Yudkowsky are both fatalists. Their prophecies are those of panic and helplessness – the forces of modernity, or capitalism, or natural selection have already spiraled out of control. There’s nothing you can do but wait for death. (Notoriously, the first sentence of The Population Bomb was “the battle to feed all of humanity is over”.) Yudkowsky reckons that for humanity to survive, he would have to be jaw-droppingly wrong. “I basically don’t see on-model hopeful outcomes at this point.” 

Luckily for humanity, history will prove Yudkowsky just as wrong as Ehrlich. Nevertheless, his misplaced faith that we’re all going to die has deep roots. Millions of years of evolution have primed humans to see impending doom, even when it’s not there. That was true for religious ends-of-days and illusory famines, and it’s true for runaway AI. Yudkowsky’s anxieties aren’t even original: evil machines have been a sci-fi plot point for over a century.

Judgement Day isn’t coming; Yudkowsky is systematically wrong about the technology. Take his tried-and-true metaphor for superintelligence: chess computers. Computers, famously, have been better at chess than humans for a quarter-century. So, Yudkowsky claims, humans fighting a rogue AI system would be like “a 10-year-old trying to play chess against Stockfish 15 [a powerful chess engine]”. Chess, however, is a bounded, asynchronous, symmetric, and easily simulated game – one an AI system can play against itself millions of times to improve. It’s a terrible metaphor for the real world. (Not to mention that Yudkowsky assumes there are no diminishing returns on intelligence as a system gets smarter – or that such a system is anywhere close to being built.)

AI fearmongering has consequences. Already, luminaries like Elon Musk, Steve Wozniak, and Andrew Yang have signed an open letter which pressures researchers to halt the training of powerful AI systems for six months. (“Should we risk loss of control of our civilization?”, it asks, melodramatically.) This letter is ethically ruinous and those who signed should be ashamed. Suffocating AI won’t save humanity from doom, but it will kill people whose deaths could have been prevented by the AI-assisted discovery of new drugs. Yudkowsky, however, thinks the open letter doesn’t go far enough – he wants governments to “track all GPUs sold” and “destroy… rogue datacenter[s] by airstrike”.

mattparlmer (@mattparlmer), Mar 30, 2023: “In the past I have given credit where it was due to Eliezer Yudkowsky for not explicitly advocating violent solutions to the problems with AI development that, by his own admission, only he and a few other (mostly nontechnical) people see on the horizon. He crossed that line today.”

This has grim echoes. An agonizing number of well-intentioned, logical people take Yudkowsky’s ideas seriously, at least enough to sign the open letter. Fifty years ago, a comparably agonizing number adopted Paul Ehrlich’s ideas about overpopulation, including the governments of the two largest countries on Earth. The result was forced sterilization in India and the one-child policy in China, with backing from international institutions like the World Bank. 

Yudkowsky and Ehrlich are people you want to take seriously. They associate with academics, philosophers, and research scientists. They claim to be rational, scientific, unsuperstitious people. They are not. They are fearmongers, devoid of self-awareness and too psychologically and reputationally invested to question their own positions. And they are both wrong. Utterly, utterly wrong.

Paul Ehrlich, now 90, continues to believe his only error was assigning the wrong dates to his predictions. Yudkowsky will never stop catastrophizing, even after he has been proven wrong for decades. Like Ehrlich, he will keep shouting ‘the end is nigh’ long after the crowd has moved on. 

“How many years do you have to not have the world end” to realize that “maybe it didn’t end because that reason was wrong?” asks Stewart Brand, a former supporter of Ehrlich, quoted in the New York Times. Anyone predicting catastrophe or collapse should have to answer this question. What is the latest time that, if doom hasn’t happened, Yudkowsky will admit that he was wrong? He won’t ever. For a doomsday cultist, apocalypse lurks around every corner.




A guest post by Archie McKenzie
Comments

Kristin Ides, Apr 2:

Thank you for writing such a reasoned response. I read Yudkowsky's article in Time and came away from it knowing a lot about Nina's tooth falling out and her imminent demise, but not a lot about why exactly AI is going to kill us all. I read it as emotional manipulation, not a sincere warning.

Calion, Apr 1:

No. It’s the new *nuclear* scare, and Yudkowsky is Heinlein, not Ehrlich. The Population Bomb was always transparently wrong to those who understood economics and human behavior. But nukes, when they first arrived, looked very plausibly as if they would end the world. It turned out that MAD works, but we got lucky.

There’s no equivalent of MAD for AI.
