4 Comments

People have foreseen and written about the consequences of human-level and superintelligent AI for a long time, which if anything suggests that such concerns are well grounded. Weiner and others of his era underestimated the complexity of creating AGI. We've never actually been close to it before in history. But we are now.

Weiner vs Wiener

AI still too undeveloped to autocorrect :)

If you see the capabilities of humans as basically constant and those of machines as steadily rising, it makes sense to think that machines will probably one day be far more powerful than humans.

Since these people didn't attach dates to their warnings, it's hard to tell whether they were wrong or predicted something that is still in our future.
