Here is an excerpt:
[O]n February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch… more like the moment you realize the water has been rising around you and is now at your chest.
I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.
Let me give you an example so you can understand what this actually looks like in practice. I’ll tell the AI: “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: “It’s ready for you to test.” And when I test it, it’s usually perfect….
I think of my friend, who’s a lawyer. I keep telling him to try using AI at his firm, and he keeps finding reasons it won’t work. It’s not built for his specialty, it made an error when he tested it, it doesn’t understand the nuance of what he does. And I get it. But I’ve had partners at major law firms reach out to me for advice, because they’ve tried the current versions and they see where this is going. One of them, the managing partner at a large firm, spends hours every day using AI. He told me it’s like having a team of associates available instantly. He’s not using it because it’s a toy. He’s using it because it works. And he told me something that stuck with me: every couple of months, it gets significantly more capable for his work. He said if it stays on this trajectory, he expects it’ll be able to do most of what he does before long… and he’s a managing partner with decades of experience. He’s not panicking. But he’s paying very close attention….
Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he’s being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It’ll take some time to ripple through the economy, but the underlying ability is arriving now.
This is different from every previous wave of automation, and I need you to understand why. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.
Will 50% of law jobs disappear in the next five years? A former student, now a seasoned litigator with 25 years of experience, told me: “I’ve been mainly using Claude right now. I’m consistently surprised and scared about what it can do. I think a lot of 1st year lawyer jobs are going to be eliminated in the not-too-distant future.” If that happens, and even if it’s only 25% rather than 50%, there is going to be a massive contraction in law schools, much greater than what the Great Recession produced 15 years ago.
The bigger worry, though, is that what the labor economists call the “reinstatement effect” (where new technologies eliminate old jobs, but create new job opportunities elsewhere [e.g., the invention of automobiles was bad news for blacksmiths, but created jobs in auto factories]) may not apply here. As the author we began with put it: “When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.”
As we pointed out in our recent book on Marx,
While the displacement of human labor by technology has generally been offset by the reinstatement of human labor in other contexts, something closer to what Marx expected has begun to occur more recently in the developed capitalist economies. As two contemporary non-Marxist economists note, there has been, since 2000, “a significant decline in the labor share after more than a century of stability” (Grossman & Oberfield 2021: 1; see also Acemoglu & Restrepo 2019). As some other recent economists write:
“Labor’s share of national income has fallen in many countries in the last decades. In the United States, the labor income share has accelerated its decline since the beginning of the new century…While estimates of their long-run trends depend heavily on accounting assumptions and, thus, are subject to debate, they have all gone through a clear fall in the last 20 years.” (Bergholt et al. 2022: 163)
Why has labor’s share fallen? Some non-Marxist economists think automation is the primary explanation (e.g., Bergholt et al. 2022: 166), which is what Marx would have predicted. Others cast the net wider in terms of explanatory factors for the decline of human labor’s share of income:
“[M]any economists appear to believe that further automation, robotization, globalization, market concentration, and aging of the population spell ongoing declines for the labor share. Some even fear that the labor share in national income might fall to zero.” (Grossman & Oberfield 2021: 28)
If the labor share fell to “zero” that would be consistent, of course, with Marx’s prediction of eventual immiseration of the vast majority.
The decline in the reinstatement effect and the decline in labor share, documented by neoclassical labor economists (above), both predate the arrival of the current powerful versions of AI. What happens next? Substantive comments, especially those with links to other data and analysis, will be preferred.