12 Comments

Thank you for writing such a reasoned response. I read Yudkowsky's article in Time and came away from it knowing a lot about Nina's tooth falling out and her imminent demise, but not a lot about why exactly AI is going to kill us all. I read it as emotional manipulation, not a sincere warning.

No. It's the new *nuclear* scare, and Yudkowsky is Heinlein, not Ehrlich. The Population Bomb was always transparently wrong for anyone who understood economics and human behavior. But nukes, when they first arrived, looked very plausibly like they would end the world. It turned out that MAD works, but we got lucky.

There's no equivalent of MAD for AI.

> Chess, however, is a bounded, asynchronous, symmetric, and easily simulated game, which an AI system can play against itself millions of times to get better at. The real world is, to put it simply, not like that.

True. Which is why superhuman chess AI was built decades ago, while a general superintelligence hasn't been built yet. Do you have evidence that AI that can deal with messy, uncertain reality (as opposed to chess) is impossible, rather than just requiring more R&D? AI can already handle plenty of tasks that are far less of a perfect toy problem than chess.
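
To make concrete what "play against itself millions of times" means, here is a minimal, hypothetical sketch of self-play learning on tic-tac-toe (a Monte Carlo-style tabular update; the game, names, and parameters are all illustrative, not anyone's actual training code):

```python
# Hypothetical sketch (illustrative names and parameters): tabular
# Monte Carlo-style self-play on tic-tac-toe. The whole "game world"
# is a few lines of code, so it can be simulated millions of times.
import random
from collections import defaultdict

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if "." not in board else None

Q = defaultdict(float)      # (board, move) -> estimated value for the mover
ALPHA, EPSILON = 0.1, 0.1   # learning rate, exploration rate

def choose(board, moves):
    # Epsilon-greedy: mostly pick the best-known move, sometimes explore.
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])

def self_play_episode():
    board, player, history = "." * 9, "X", []
    while True:
        moves = [i for i, c in enumerate(board) if c == "."]
        move = choose(board, moves)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        result = winner(board)
        if result is not None:
            # Nudge every visited (state, move) toward the final outcome,
            # scored from the perspective of whoever made that move.
            for state, m, p in history:
                reward = 0.0 if result == "draw" else (1.0 if result == p else -1.0)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return
        player = "O" if player == "X" else "X"

for _ in range(100_000):    # a million games would be just as cheap
    self_play_episode()
```

The point isn't the particular learning rule; it's that the entire "world" fits in a few lines and can be replayed millions of times for free, which is exactly what messy physical reality denies an AI.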

> (Not to mention Yudkowsky’s assumption that there are no diminishing returns on intelligence as a system gets smarter. Or that such a system is anywhere close to being built.)

Yudkowsky analyses the evolution of hominids and argues there don't seem to be steep diminishing returns around human level. There are likely to be diminishing returns and physical limits somewhere; the question is whether they kick in before or after AI is smart enough to destroy the world.

The human brain has a bunch of limits imposed by evolution that wouldn't apply to AI (being small enough to fit through the birth canal, running on only about 20 watts of power, etc.). Nor do human brains look like they are near the physical limits of space and power efficiency. I think it is very hard to win against an enemy smarter than you. So if you think humans win, either you are expecting some very sharp diminishing returns to kick in right at the level of top humans, or you are expecting humans to hold a massive advantage of some kind. A billion copies of an Einstein-level AI, all working flawlessly together towards a common goal, is probably already enough to destroy the world.

Thank you for this. I have consistently complained that Ehrlich still being allowed in polite society is quite a condemnation of "polite society."

BTW, Eliezer is right -- we ARE all going to die.

https://www.mattball.org/2023/03/weekend-reading-were-all-going-to-die.html