> Silicon Valley’s faith in the ability of statistically-driven AI tools to one day achieve
> general intelligence [...] is just that – an article of ideological faith.
I have to point out that believing "statistically-driven" AI tools will *not* achieve general intelligence is also an article of ideological faith. I think many people over-rotate on the "statistically-driven" part, ignoring that these tools are explicitly modeled on the neuronal structure of our brains. There is no non-supernatural reason to assume they will never achieve general intelligence, and the rapid pace of recent progress is a strong indicator that it will happen sooner rather than later.
Further, there's no obvious reason to assume they will not achieve superhuman intelligence shortly after reaching human-level general intelligence. All that's required is to train them on the same AI-building knowledge their human creators possess and direct them to self-improve (and since self-improvement is a powerful instrumental goal, we probably wouldn't even need to give that direction explicitly, merely fail to deny them the ability).
To believe that AI will not achieve human-level and then superhuman general intelligence basically requires a supernatural belief that there is something more to our brains than mere physical matter.
The words I elided in the original quote are important, though:
> and solve all our problems
To the extent this is believed, it is very much an article of ideological faith, and an illogical one at that. There is every reason to expect that artificial superintelligence will have its own goals, which might be antithetical to humanity's. ASI probably won't deliberately set out to destroy or enslave us, but it may well do so incidentally or inadvertently, much as we've done to so many species we found useful, or irritating, or just in the wrong place.
It is so refreshing to see such simple, common-sense advice on a topic that is so often subject to ridiculous hype and hand-waving. Thank you.