People have foreseen and written about the consequences of human-level and superintelligent AI for a long time, which if anything suggests that such concerns are grounded. Weiner & others of his era underestimated the complexity of the task of creating AGI. We've never actually been close to it before in history. But we are now.
The idea of "Godlike superintelligence" is stupid: a machine designed to accomplish tasks somehow achieves perfection, gains self-directed agency, develops its own desires and goals, and decides to go to war with humanity.
People who have no background in the tech simply do not understand what it is. It's not a mind. It's not a thinking machine. It's a pattern recreator that runs on math. It's not capable of understanding its environment. It doesn't have perception. It does not exist when it is not being run.
All this "doom" nonsense, of course, comes from people who don't understand the tech in any way. They just don't have the mental model to realize that it truly does not have a mind or intentions and never will.
Saying "human-level intelligence" is itself a revelation that someone does not understand the tech. It will always be better at math than humans. It will always look things up faster. It will never have a self-driven mind.
Everything about these claims is idiotic. Even the idea of "superintelligence" as a non-fictional concept is suspect. Nobody ever really explains what it even means; it rests on an overly simplistic idea of what intelligence is. Intelligence is not a cheat code to the laws of the universe, and it has its limits: not all problems can be solved by information processing, and there are diminishing returns...
None of it makes sense at all. It's all just so stupid.
If you see the capabilities of humans as basically constant and those of machines as going up, it makes sense to think that machines will probably one day be far more powerful than humans.
As these people didn't put a date on their warnings, it's hard to tell whether they were wrong or predicted something that is still in our future.
Weiner vs Wiener
AI still too undeveloped to autocorrect:)