It seems to be what Marc thinks will happen, given all the things it will be able to do.
Starting from his premise, it makes sense.
What exactly makes you think that? He writes that it will "augment human intelligence" (listing many examples of augmentation), not that we'll have a superhuman intelligence.
Some thoughts:
1) Arguing about whether AI will be in human control can be confusing because the idea of control is a philosophical quagmire. Personally, I think it is easier to think in terms of whether AI could have substantial negative unintended/unforeseen consequences than whether AI could be outside of human control.
2) We already have superhuman AI; it is just superhuman in narrow domains, such as Go and protein folding.
3) AI doesn't have to cause a singularity to have substantial negative unintended/unforeseen consequences. For example:
a) AlphaFold could be used to make bioweapons that cause great harm, even though I don't believe it was the intention of the DeepMind researchers to create bioweapons.
b) AI trading algorithms could inadvertently distort capital allocation in ways that substantially negatively impact the economy, without the makers of the trading algorithms intending that.
c) AI could make it easy to mass-manufacture cheap autonomous weapons systems that cause mass casualties or are used by tyrants to consolidate power over their populations.
4) I don't think that alignment necessarily helps with any of the above three examples, but I do think they are cases in which the AI is substantially more powerful than any *individual* human.
5) The above examples are perhaps somewhat similar to the atomic bomb: the atomic bomb is *in some ways* controlled by humans, but is also arguably more powerful than any human. Even if you are a human who has access to "the red button", you don't have the power to stop a nuclear weapon from being used against you. Similarly, you "controlling" an AI that can design a bioweapon doesn't give you the power to prevent a bioweapon being used against you.
6) Another way to put the above might be that even if you control a piece of technology, you might not be able to control the systems, incentives, and structures that the technology creates (which I will refer to hereafter as the logic of the technology). Even if you control the bomb, you are subject to the logic of the bomb. If it were easier to make stealthy ICBMs and easier to detect submarines, the logic of the bomb might force preemptive strikes and the likely eradication of much of humanity. The makers of the bomb couldn't have known in advance that this would not be the case. As we currently work to build AI, it is very difficult to predict what it will be better and worse at, and how those dynamics will impact the future of humanity. Even if AI is not itself "in control", there is a substantial chance that the logic of AI will be.
I haven't followed everything he has said in the year and a half since this, but to me he seems credulous in the way that every high-status man approaching the end of his telomeres throughout history becomes credulous about things he imagines could promise some flavor of eternal life.