People have foreseen and written about the consequences of human-level and superintelligent AI for a long time, which if anything suggests that such concerns are well grounded. Wiener and others of his era underestimated the complexity of the task of creating AGI. We've never actually been close to it before in history. But we are now.
If you see the capabilities of humans as basically constant, and those of machines as steadily going up, it makes sense to think that machines will probably one day be far more powerful than humans.
As these people didn't attach dates to their warnings, it's hard to tell whether they were wrong or predicted something that is still in our future.