Everyone is sleeping on the *collective* advantages AIs will have, which have nothing to do with raw IQ - they can be copied, distilled, merged, scaled, and evolved in ways humans simply can't.
If I'm being frank, I think this is ridiculous, and scary. Efficiency is not everything. I'm probably going to sound like a Luddite here, but AI should be used for scientific and material advancement, not to create mega-Sundars. The Silicon Valley tech narrative of efficiency seems dystopian. It also ignores how much of the modern world is a result of consumer demand; so many innovations exist because there was consumer demand for them. When you replace a Google engineer, you might improve Google's bottom line, but you also remove a source of demand for the rest of the economy. And considering Google is dependent on advertising revenue from other corporations, Google is indirectly hurting its own bottom line too. Sure, a world of AI firms might distribute the surplus back to human beings, but where does demand even come from in this world? And if we can find a way, do we want this? Work also provides meaning and lessons to us as human beings. It grounds us as human beings. It's what makes us human. And the world you are describing does not have a place for most humans. Maybe this happens, but I hope it doesn't. We need norms to deal with this.
> Work also provides meaning and lessons to us as human beings.
Yes, but so does simulated work. Or games, or sports, or exploration.
This still necessarily exists in a world where humans exist. And likely in a more fulfilling and evenly distributed form. It just doesn't drive the intergalactic economy.
Good point. This resonates with, e.g., Graeber's "Bullshit Jobs": one could argue that much of modern-day work, especially white-collar work, already lies far within the frontier of necessity, i.e. it serves wants more than needs.
Going forward we may simply have more [e-]sportsmen, livestreamers, hobby farmers, etc. who will create value by entertaining each other, while our survival needs become increasingly satisfied by an ever more automated industrial backbone.
"simulated work" - very interesting comment. While working at Google, I saw "simulated work" examples where people just literally simulate that they do some work when in fact they don't. But I guess it's not what you meant here - you are alluding to the idea of some gamified work, right? E.g. you are in Apple VR glasses and you are playing a game where you go to 9-5 job as a CEO, CTO or whatever you choose?
Can you give me some references for what you mean? Just to get a better idea of these things.
As a shareholder of Alphabet, I welcome mega-Sundar and his copies. But it really raises the question: What are most of us going to do with our time?
Are governments going to be ready to deal with such transformative change? It's kind of insane how little discussion there is on creating frameworks for different AGI scenarios. I guess everyone is too busy building AI and/or implementing it in their day-to-day lives.
I'm pretty skeptical that the average government employee is at all aware of this stuff. Maybe some NatSec types are but most government employees seem too siloed to be aware of any of this.
I think that very few people in the world are extrapolating advanced AI scenarios. There still hasn't been a tipping point or shock event (think how fast the world closed down in March 2020), nor might there be one. Progress might feel gradual enough that one day the US government wakes up, sees that the unemployment rate is at 10%, and realizes why.
If the transformation is very slow (say, it takes a generation to replace everyone in the service economy), then it might not affect society much. With total fertility rates where they are, we might see enough of a generational shift from a service economy to a post-scarcity economy: a combination of UBI + an entertainment economy (think professional athletes) + a tiny portion of humans in the service, manufacturing, and agricultural economies.
If the transformation is fast (say 20 years or less), we will probably go through a period of serious societal unrest.
The following paper kind of lays out a scenario where humans are disempowered by AI:
https://gradual-disempowerment.ai
If or when a large share of the population doesn’t have work, something like UBI will become a lot more feasible politically.
Would giving everyone a hundred dollars (UBI) massively reduce the value of a hundred dollars?
Based on the simple math of the size of the current economy, no.
However giving everyone a million dollars would indeed do that.
Magnitudes - here, relative to the size of the economy - matter.
If the AI world the authors imagine succeeds in making the country 10,000x more productive and richer (not likely to me, IMO, but at least possible), then giving everyone a million dollars would be quite doable.
If “only” 1000x wealthier, then giving everyone $100K would be doable.
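To put rough numbers on these magnitudes, here is a quick back-of-the-envelope sketch; the GDP and population figures are approximate assumptions, not numbers from the thread:

```python
# Back-of-the-envelope check: a universal payment as a fraction of the economy.
# US_GDP and POPULATION are rough, 2024-ish assumptions, purely illustrative.
US_GDP = 29e12      # ~$29 trillion
POPULATION = 335e6  # ~335 million people

def ubi_share_of_gdp(payment_per_person: float, gdp: float = US_GDP) -> float:
    """Total cost of paying everyone `payment_per_person`, as a fraction of GDP."""
    return payment_per_person * POPULATION / gdp

print(f"$100 each, today's economy:  {ubi_share_of_gdp(100):.3%}")   # ~0.1% of GDP
print(f"$1M each, today's economy:   {ubi_share_of_gdp(1e6):.0%}")   # ~1,155% of GDP
print(f"$1M each, 10,000x economy:   {ubi_share_of_gdp(1e6, US_GDP * 10_000):.3%}")
print(f"$100K each, 1,000x economy:  {ubi_share_of_gdp(1e5, US_GDP * 1_000):.3%}")
```

The $100 case barely registers against today's GDP while the $1M case dwarfs it; against an economy 1,000-10,000x larger, the same transfers shrink back to rounding errors, which is the point about magnitudes.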
In this scenario, much of the population can’t earn meaningful wages, so the baseline is massively deflationary. The rate of technological progress would also presumably be quite high, also causing deflation for many products and services through better productivity. So while a UBI would be inflationary now, my hypothesis is that it would at best offset some of the deflation in the economy in the future we’re talking about.
Your initial claim is incorrect. Massive productivity is not inherently deflationary. And in anything akin to the real world, if most people couldn't earn wages, there would be redistribution such that, at the barest minimum, the bottom 60% of the population was at least as well off as before, because otherwise you wouldn't have a democracy, you would have a revolution, and rule of law / property rights wouldn't remain static.
That said, while we are using different words, I essentially agree with your conclusion about UBI not being problematic in a world where society is 1,000x+ more productive.
Would paying everyone one hundred dollars to do fake work (fake work defined as going through the motions without producing wealth) reduce the value of the hundred dollars?
I work as a computer electronics hardware engineer at a 300k-employee corporation, and I have 30 years of experience with software coding as well. We have MS Copilot and an internal ChatGPT system in place that allows us to use AI even on confidential data.
And I like reading sci-fi. I also run DeepSeek 14B on my home PC under Linux.
With that said, based on my experience, I find this article about as convincing as a claim that the Second Coming of Jesus will happen tomorrow. (No insult meant to Christians.)
I just can't get my head around the idea that some computer system could ever work so well as to viably replace humans doing non-trivial jobs. I just can't see it. Sorry.
The Turing test was invented in 1950 and passed in 2023. The things you now call normal were once science fiction.
The heuristic I often use: if you can't show that a phenomenon violates the laws of physics, and it seems economically desirable, then it is likely to happen at some point in the future.
Having lived through the creation of the Internet as a teenager / adult… the AI talk is going EXACTLY the same.
The AI hypesters sound exactly like the Internet hypesters in 1995.
The AI doubters sound exactly like the Internet doubters in 1995.
A little SAT analogy
ChatGPT : AI :: AOL : The Internet
After using Microsoft Copilot, I have to agree with you.
I don't see why, in this scenario, Sundar would remain on top. If we reach the point where agents can do everything else, then those agents will necessarily require all sorts of capabilities that allow them to interact with, navigate, and reason about the world. They will need a certain level of independence and autonomy.
This would also mean that intelligence has advanced to such a degree that it dwarfs Sundar. If the intelligence surpasses Sundar, a firm would be better off without him at the top.
In this case, AIs would be running the firm itself.
Are we supposed to believe that AIs running firms will still be part of a system ultimately controlled by humans? That doesn't seem right.
Firms—composed of an amalgamation of agents that can interface with humans, explore the world to gather necessary information, conduct advanced planning, deploy other agents, and negotiate—are somehow supposed to be governed by humans in a world that humans still control?
I don’t see where, in this scenario, humans remain in charge of anything.
During the Industrial Revolution, machines extended human capabilities. Humans simply moved one level up—they regulated and guided the actuators. The person tilling the field became the one driving the tractor.
In this scenario, however, as soon as a tractor-equivalent AI is created, the machine becomes a better and more efficient tractor driver. You will always lose the race to the machine. This is replacement, not extension.
And to top it off! A government with decrepit leadership, which could barely handle COVID, is supposed to "do something"?
I think you are onto something.
Once we assume AGI, then “the singularity” seems a more likely outcome than any of the specifics in this piece.
I'm the type of mf that likes Dwarkesh posts before reading them because I already know they're gonna be fire 🔥
Sundar writing press releases: a very dark future indeed.
I wonder what will happen to the price of compute on the path to this world. The price of goods is lower bounded by marginal cost and upper bounded by marginal utility. What happens when marginal cost keeps declining but marginal utility keeps increasing? My intuition tells me prices will explode and supply limitations will kick in first, but who knows.
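A toy simulation of that intuition, just to make the cost-floor / utility-ceiling dynamic concrete; all of the growth rates below are arbitrary assumptions, not forecasts:

```python
# Toy model: marginal cost per unit of compute falls and supply grows modestly,
# but the marginal utility of compute grows faster. With supply constrained,
# the clearing price tracks willingness-to-pay rather than cost.
cost, supply, marginal_utility = 1.0, 1.0, 1.0
for year in range(1, 6):
    cost *= 0.7              # assumed: production cost falls 30%/year
    supply *= 1.3            # assumed: capacity grows 30%/year
    marginal_utility *= 2.0  # assumed: value of the marginal unit doubles yearly
    # Price is floored by cost and, when supply can't keep up, pulled toward
    # the value of the marginal unit that buyers are competing for.
    price = max(cost, marginal_utility / supply)
    print(f"year {year}: cost={cost:.2f}  price≈{price:.2f}")
```

In this toy setup the price detaches from the falling cost floor within a couple of years, i.e. the "supply limitations kick in first" outcome.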
Recently I've been thinking about the phenomenon where, if your product is manufactured in China, it will be cloned and sold marginally above the cost of production, rendering you unable to recoup the cost of prototyping the idea. Ironically, this is sort of happening to the American AI labs right now, and they are fighting back by essentially copying the methods of American manufacturers: Google competing on integration, OAI launching a price war, and Anthropic arguing for government support. A lot of blog posts about how you as an individual can stay valuable in the future assume that you can do so through idea generation and the effective usage and management of AI workers. But it seems to me that anyone who sets up a functioning process will be sniped off as AI immediately replicates their business model in a more efficient, AI-centric manner.
While I appreciate the educated guess, I find it difficult to imagine what an AI firm would be like, because so much of what human firms are (as you say) is a reaction to various traits of humans. And, I guess, to non-AI software systems.
Changing the underlying substrate of firms to AI from humans would seem to change far more than I could predict. Would "firm" even be the right term for it?
This post is similar to Life 3.0 chapter 1
Yet to read but I did very much enjoy this Tegmark piece (excerpted from same?) https://nautil.us/the-last-invention-of-man-236814/
Yes this is it
A more interesting question is: what will humans do? We are not going to be able to retrain or repurpose our way out of this one, folks.
If we are all vastly richer, there will be more “artists” / entertainers.
The idea that, e.g., humans wouldn’t prefer to see human athletes as opposed to AI ones seems far fetched to me.
The idea that AIs will come up with better jokes than humans seems not quite as far fetched, but very unlikely.
And the frustrating thing for me is that these AI labs want to create AGI but leave the problem of the purpose we get from work being displaced for us to solve.
I'm enjoying how many new Luddites this AGI movement is creating, because we really have to argue for what technology we actually want as consumers instead of letting economic or efficiency forces dictate our future of work.
Due respect, but if you had 10,000x the wealth you have now - and didn’t need to work for a living - I don’t think you’d have (well, most people, anyway) that hard a time coming up with rewarding things to do with your time.
Arguing that we are better off being poorer such that we have to work, as opposed to everyone in the world having the income of the top 10% in the U.S. today (and I did say top 10%, not 0.1%, because that cannot happen due to the scarcity of things humans value) seems, well… Luddite…
…and to completely misunderstand that massive societal wealth is always a better thing than poverty.
I don’t like the evil China Communist party. But there is no doubt at all that 95%+ of the Chinese population is better off now than they were 50 years ago, when they still lived under Communist dictatorship but were massively poorer.
there's a guy who wrote a whole book about a similar scenario
First chapter of Life 3.0?
Who & what book?
Robin Hanson, Age of Em.
Along with Warty Dog's answer, also Kurt Vonnegut's Slaughterhouse 5.
The concept of the future presented here is both delusional and dystopic.
Dwarkesh neglects that reality is stochastic: even if you could predict an "optimal strategy" for some "future metric" and assign uncertainty to it, there would be many possible strategies (we do not sit at a game-theoretic equilibrium). It would be an inefficient use of funds to spend billions and billions on compute building "accurate predictions of the future" that will be wrong.
It is also false that well-financed, single-vision companies would be the most successful in the long term. The reason empires remain successful is not just the vision of the CEO, but the dynamics of humans with ambition and creativity trying to make their own progress and impact.
In the model presented, all the capital would flow to a minority of the most talented software and compute entrepreneurs, or to those who already have most of the capital. In the end these would be one and the same. The world is already so unequal; why is this the world we want?
The dynamic of large companies becoming smaller is real, and they should become smaller. Organisations should function to explore the smaller visions of aligned individuals.
Humans have needs, wants and desires. We connect, we explore, we love. We don't just function to fulfill a role in the market. The nature of organisations and the economic system as a whole will change, but we should hope such systems serve us and that technology is used to improve the lives of many, not just to hand all the capital to a few. There are optimistic versions of the future, where technology is democratised and is a tool of connection; where people's goals in life aren't to maximise profit, but to have a full life. In that world, everyone could be a creator and an entrepreneur. The limited bandwidth and attention of humans is an asset, not a limitation, because we can enjoy these fleeting moments before we die.
With someone of Dwarkesh's reach, this is a real shame.
Love the post. I think you are right. This isn't a direct critique but I have two other musings I hope you'll consider.
First, if the future is AI-driven decision-makers, and these decision-makers will have wide impacts on human beings, who will be accountable for mistakes? It seems to me like accountability—to AI—makes no sense in this scenario and so AI should not _legally_ be allowed to make decisions with widespread consequences for humans. If they do, a person must be able to be held accountable in some way that's commensurate with the effects of the decisions they set the AI off to make. I think the feedback loop between decisions, their effects, and how accountability affects future decisions is not discussed widely enough.
Second, if AI will take over companies, we should not optimize _only_ for efficiency. AI will be efficient, but it will also be "fragile", in the sense of Taleb's _Antifragile_. In a world optimized for efficiency, a mistake in one copy will spread everywhere, ~immediately. The whole is only as robust as the weakest copy in the herd. A human system is slower, sure, but also more robust. Think of the supply chain failure during the pandemic: a single break in the chain immediately affected the entire system. The supply chain was optimized for efficiency, not robustness. Let's not have a massive society-wide failure before we think about how to make an ecosystem of AI-run companies _robust_ as well as efficient.
I think you could apply this to politics as well. Hundreds of robots figuring out how to maximize vote share from humans, able to operate on shoestring budgets so they don't need to fundraise nearly as much. Maybe you still need a human politician at the center to actually hold the office, but why have staff or consultants?
I think the incentives are actually against mega-firms of AI agents, and you're missing compute costs. If you do some napkin calculations to figure out how many AI agents you could run 24/7 on all the US AI clusters today (or on Stargate), it's an effective labor force on the order of the low-to-mid tens of millions. This just isn't that much relative to what is sketched out here, which means AI firms will be compute-bottlenecked even after the capability exists to run them.
Therefore, firms will have to make a calculation: if I want to do a large, complex project and spend $X on it, I can either do it with 1 agent, costing $X worth of tokens for that one agent, or I can split $X across 2 (or n) agents, where *each agent now has to use half (or 1/n) of the tokens* to stay within budget, but the work gets done 2x (or n times) faster. And this assumes no communication costs between agents *and* that you get significant gains from having multiple agents work on the problem!
My guess is that inference will typically be fast enough (especially given that AI workers run 24/7) that you are cost dominated rather than time dominated, which means you'd lean towards picking the single agent because token communication/coordination costs between agents are >0.
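A rough sketch of both the napkin math and the budget trade-off above; the throughput, speed, and cost constants are illustrative assumptions, not measured figures:

```python
# Napkin math for the "AI labor force" estimate and the 1-agent vs. n-agent split.
# All constants below are assumptions for illustration only.
TOTAL_TOKENS_PER_SEC = 1e9       # assumed aggregate US inference throughput
HUMAN_SPEED_TOKENS_PER_SEC = 30  # assumed output rate of one "human-speed" worker

effective_workers = TOTAL_TOKENS_PER_SEC / HUMAN_SPEED_TOKENS_PER_SEC
print(f"24/7 effective AI labor force: ~{effective_workers / 1e6:.0f} million agents")

def tokens_per_agent(budget_usd: float, n_agents: int,
                     usd_per_million_tokens: float = 10.0) -> float:
    """Split a fixed dollar budget across n agents; each gets 1/n of the tokens."""
    total_tokens = budget_usd / usd_per_million_tokens * 1e6
    return total_tokens / n_agents

budget = 1_000  # hypothetical project budget in dollars
for n in (1, 2, 4):
    print(f"{n} agent(s): {tokens_per_agent(budget, n):,.0f} tokens each, "
          f"~{n}x faster before any coordination overhead")
```

With the assumed throughput you land around ~33 million human-speed workers, i.e. the low-to-mid tens of millions mentioned above, and every extra agent on a fixed budget trades tokens-per-agent for wall-clock speed.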
I think the key crux might ultimately be that I think coordination costs are negligible by default (a lot of this has to do with my view that alignment is more or less easy by default, and that the basic reason we have principal-agent problems is that you can't control a human's training data or values nearly as well as an AI's - the things you'd have to do to align a human the way you can align an AI would be wildly illegal and tantamount to brainwashing), plus thinking that we will eventually have enough compute that when AIs seriously automate something, it's closer to billions of AIs than millions.
Also, see this link for an estimate of how many AIs could be run:
https://www.lesswrong.com/posts/CH9mkk6BqASf3uztv/
Yeah, this post basically assumes infinite revenue. Perhaps economic growth and more efficient compute get us there some day, but it definitely doesn't describe early AI-first firms, where, as you say, there will be cost constraints. People will also still be part of the equation, since AI won't be reliable enough to just run unsupervised. It will be interesting to see whose jobs mostly disappear and whose get augmented with AI.