Discussion about this post

R.B. Griggs:

The fatal flaw with speculations of this sort is assuming intelligence is radically transformative yet doesn’t transform capital itself.

The value of capital historically derived from its capacity to enhance human coordination and meaning-making, acting as a proxy that compresses all values into a single quantity.

If AI is genuinely intelligent, why wouldn't it make the proxy smarter rather than just making more of the same dumb proxy? The scenario where "approximately everything belongs to those wealthiest at transition" requires that wealth means the same thing before and after—that the compression function of capital stays fixed while everything else transforms.

The more interesting question isn't "how do we redistribute returns from automated capital?" but "what coordination capacities become possible when AI lets us optimize on 100x more dimensions of value?"
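
A toy way to see the "compression" point: treat capital as a fixed scalarization of a multi-dimensional value vector. The sketch below uses hypothetical numbers, dimensions, and weights, invented purely for illustration; it shows how optimizing a single compressed score can discard options that are Pareto-optimal across the full set of dimensions - roughly the difference between "more of the same dumb proxy" and optimizing on more dimensions of value.

```python
# Minimal sketch, assuming a made-up setting: candidate allocations are
# scored on several value dimensions, and a "dumb proxy" prices only one.
import numpy as np

rng = np.random.default_rng(0)

# Each row is a candidate allocation scored on 4 value dimensions
# (e.g. health, leisure, ecological quality); all values are invented.
options = rng.random((8, 4))
proxy_weights = np.array([1.0, 0.0, 0.0, 0.0])  # the proxy prices one dimension only

def proxy_score(x: np.ndarray) -> float:
    """Compress a value vector into a single number, as the proxy does."""
    return float(proxy_weights @ x)

def pareto_front(points: np.ndarray) -> list[int]:
    """Indices of options not dominated on every dimension by another option."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q >= p) and np.any(q > p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

best_by_proxy = int(np.argmax([proxy_score(x) for x in options]))
front = pareto_front(options)

print(f"proxy-optimal option: {best_by_proxy}")
print(f"Pareto-optimal options on all 4 dimensions: {front}")
# The scalar proxy picks one winner; the Pareto front typically contains
# several options the proxy cannot distinguish or actively ranks below it.
```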

Jan Kulveit:

Great econ thinking, as I'd expect from Philip Trammell.

Yet I have close to zero trust in the conclusions when read as futurism / directly applied to "how the world will look."

Several crucial considerations seem missing:

CC1: Capital will likely end up owned by AIs, not humans. Beren argues this convincingly in "Capital Ownership Will Not Prevent Human Disempowerment": rapid technological transitions create new forms of capital, information asymmetries prevent efficient indexing, and the information advantage will be on the AI side. The default result isn't the descendants of today's rich inheriting galaxies - it's the human share of capital going to roughly zero.

CC2: Human control of the state doesn't depend only, or even mostly, on economics. It depends on a combination of factors: the state literally running cognition on human minds, the security apparatus being composed of humans, the possibility of revolts, and explicit mechanisms like elections. As we argue in the "Misaligned States" part of http://gradual-disempowerment.ai, the state's dependence on humans drops along all of these dimensions roughly simultaneously with the labor share.

The essay does address this, but unconvincingly - suggesting capacity for havoc like "engineering biological weapons" as a source of continued leverage. Two problems:

A. Bioweapons mostly endanger humans, not states. The more states run on machine substrate, the less bioweapons threaten them.

B. If this leverage actually worked, it would make large free-roaming human populations a massive liability for states. Self-interested states would want to reduce this risk - for example, a smaller population preserved as brains in vats is easier to protect.

I wouldn't bet on "human capacity for violence" as the terminal source of democratic stability. The military value of humans is dropping, predictive policing can spot dissent before it forms, and AIs coordinate better. The window for effective revolt is closing, not opening. (Alternative framing: the capacity for violence toward humans is far more equally distributed than the capacity for violence toward orbital datacenters.)

(This does not rule out violence short term.)

CC3: The exogeneity of human preferences. As is typical of econ-style analysis, preferences are treated as fixed and self-interested. In practice, preferences are shaped by cultural evolution. If that process becomes misaligned, neither capital ownership nor democracy saves humans.

81 more comments...
