Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.


On dreading the AI future

Philosopher Harvey Lederman comments; a long, but far from exhaustive, excerpt:

For the last two and a half years, since the release of ChatGPT, I’ve been suffering from fits of dread. It’s not every minute, or even every day, but maybe once a week, I’m hit by it—slackjawed, staring into the middle distance—frozen by the prospect that someday, maybe pretty soon, everyone will lose their job….

Does the coming automation of work foretell, as my fits seem to say, an irreparable loss of value in human life?….


I was brought up, maybe like you, to value hard work and achievement. In our house, scientists were heroes, and discoveries grand prizes of life. I was a diligent, obedient kid, and eagerly imbibed what I was taught. I came to feel that one way a person’s life could go well was to make a discovery, to figure something out.

I had the sense already then that geographical discovery was played out….

We may be living now in a similar twilight age for human exploration in the realm of ideas. Akshay Venkatesh, whose discoveries earned him the 2018 Fields Medal, mathematics’ highest honor, has written that the “mechanization of our cognitive processes will alter our understanding of what mathematics is”. Terry Tao, a 2006 Fields Medalist, expects that in just two years AI will be a copilot for working mathematicians. He envisions a future where thousands of theorems are proven all at once by mechanized minds….

The core of my dread isn’t based on the idea that human redundancy will come in two years rather than twenty, or, for that matter, two hundred. It’s a more abstract dread, if that’s a thing, dread about what it would mean for human values, or anyway my values, if automation “succeeds”: if all mathematics—and, indeed all work—is done by motor, not by human hands and brains….

If discovery is valuable in its own right, the loss of discovery could be an irreparable loss for humankind.

A part of me would like this to be true. But over these last strange years, I’ve come to think it’s not. What matters, I now think, isn’t being the first to figure something out, but the consequences of the discovery: the joy the discoverer gets, the understanding itself, or the real life problem their knowledge solves.


But the advance of automation would mean the end of much more than human discovery. It could mean the end of all necessary work. Already in 1920, the Czech playwright Karel Capek asked what a world like that would mean for the values in human life. In the first act of R.U.R.—the play which introduced the modern use of the word “robot”—Capek has Henry Domin, the manager of Rossum’s Universal Robots (the R.U.R. of the title), offer his corporation’s utopian pitch. “In ten years”, he says, their robots will “produce so much corn, so much cloth, so much everything” that “There will be no poverty.” “Everybody will be free from worry and liberated from the degradation of labor.” The company’s engineer, Alquist, isn’t convinced. Alquist (who, incidentally, ten years later, will be the only human living, when the robots have killed the rest) retorts that “There was something good in service and something great in humility”, “some kind of virtue in toil and weariness”.

Service—work that meets others’ significant needs and wants—is, unlike discovery, clearly good in and of itself. However we work—as nurses, doctors, teachers, therapists, ministers, lawyers, bankers, or, really, anything at all—working to meet others’ needs makes our own lives go well. But, as Capek saw, all such work could disappear….

In Automation and Utopia: Human Flourishing in a World without Work, the Irish lawyer and philosopher John Danaher imagines an antiwork techno-utopia, with plenty of room for lying flat [i.e., leisure]. As Danaher puts it: “Work is bad for most people most of the time.” “We should do what we can to hasten the obsolescence of humans in the arena of work.”

The young Karl Marx would have seen both Domin’s and Danaher’s utopias as a catastrophe for human life. In his notebooks from 1844, Marx describes an ornate and almost epic process, where, by meeting the needs of others through production, we come to recognize the other in ourselves, and through that recognition, come at last to self-consciousness, the full actualization of our human nature. The end of needed work, for the Marx of these notes, would be the impossibility of fully realizing our nature, the end, in a way, of humanity itself….

Today, I feel part of our grand human projects—the advancement of knowledge, the creation of art, the effort to make the world a better place. I’m not in any way a star player on the team. My own work is off in a little backwater of human thought. And I can’t understand all the details of the big moves by the real stars. But even so, I understand enough of our collective work to feel, in some small way, part of our joint effort. All that will change. If I were to be transported to the brilliant future of the bots, I wouldn’t understand them or their work enough to feel part of the grand projects of their day. Their work would have become, to me, as alien as ours is to a roach.

But I’m still persuaded that the hardline pessimists are wrong. Work is far from the most important value in our lives. A post-instrumental world could be full of much more important goods— from rich love of family and friends, to new undreamt of works of art—which would more than compensate the loss of value from the loss of our work.

Harvey had asked me for my reaction to his comments about Marx, so I will focus only on that (but emphasize, again, that his full discussion is even more nuanced than my long excerpt does justice to). Discussing the Marx of 1844, Harvey writes: “by meeting the needs of others through production, we come to recognize the other in ourselves, and through that recognition, come at last to self-consciousness, the full actualization of our human nature.” While the Marx of 1844 is still under the (malign) influence of Hegel (over the next two years he breaks decisively with Hegel), this is an even more Hegelian reading of the argument in the 1844 Manuscripts than I think is warranted.

One of the core ideas of the early Marx is that human beings by nature need to engage in productive (creative) activity; in particular, they want to produce things not to meet material needs but because they want to produce them, for aesthetic or other reasons. Marx calls this “spontaneous activity,” and it represents an ideal of freedom that Marx accepts throughout his life. If AI rendered work in the service of others unnecessary, that would be fine, since the essence of “spontaneous” work for Marx is that it is not necessary to meet material needs (although it might do so).

The real challenge presented by the AI future, in which most human labor is rendered unnecessary, is who controls the immense productive power of that technology. If it is used to liberate humans from “necessary” labor (i.e., labor to meet human needs), then it will bring about, for the first time in human history, a humane society. If, instead, it is used to enrich the rich, the rest be damned, that is a different story. It is, as Rosa Luxemburg said, a choice between “socialism and barbarism.”



10 responses to “On dreading the AI future”

  1. Thanks for sharing the essay, Brian!

    My citation was to the Notes on James Mill. (I realize in hindsight my reference in the text is confusing, since everyone will think I was referring to the Economic and Philosophic Manuscripts; maybe there were multiple Marxes of 1844.) I think the following passage (bounded by asterisks), especially the penultimate paragraph, is reasonably clear:

    ***

    On both sides, therefore, exchange is necessarily mediated by the object which each side produces and possesses. The ideal relationship to the respective objects of our production is, of course, our mutual need. But the real, true relationship, which actually occurs and takes effect, is only the mutually exclusive possession of our respective products. What gives your need of my article its value, worth and effect for me is solely your object, the equivalent of my object. Our respective products, therefore, are the means, the mediator, the instrument, the acknowledged power of our mutual needs. Your demand and the equivalent of your possession, therefore, are for me terms that are equal in significance and validity, and your demand only acquires a meaning, owing to having an effect, when it has meaning and effect in relation to me…

    Although in your eyes your product is an instrument, a means, for taking possession of my product and thus for satisfying your need; yet in my eyes it is the purpose of our exchange. For me, you are rather the means and instrument for producing this object that is my aim, just as conversely you stand in the same relationship to my object…

     Let us suppose that we had carried out production as human beings. Each of us would have in two ways affirmed himself and the other person. 1) In my production I would have objectified my individuality, its specific character, and therefore enjoyed not only an individual manifestation of my life during the activity, but also when looking at the object I would have the individual pleasure of knowing my personality to be objective, visible to the senses and hence a power beyond all doubt. 2) In your enjoyment or use of my product I would have the direct enjoyment both of being conscious of having satisfied a human need by my work, that is, of having objectified man’s essential nature, and of having thus created an object corresponding to the need of another man’s essential nature. 3) I would have been for you the mediator between you and the species, and therefore would become recognised and felt by you yourself as a completion of your own essential nature and as a necessary part of yourself, and consequently would know myself to be confirmed both in your thought and your love. 4) In the individual expression of my life I would have directly created your expression of your life, and therefore in my individual activity I would have directly confirmed and realised my true nature, my human nature, my communal nature.

    Our products would be so many mirrors in which we saw reflected our essential nature. (Translation from: https://www.marxists.org/archive/marx/works/1844/james-mill/)

    ***

    These passages make tolerably clear that the form of production he has in mind involves the meeting of human needs. (See especially 2 in the penultimate paragraph of the quote.) I of course agree that he also emphasizes spontaneity elsewhere (and even here!), but need also seems to me central to the view in this passage, insofar as need relates to essential nature. As a non-specialist, reading around these passages and a bit of the secondary literature on them, I was struck by how Aristotelian Marx is in this text.

    1. Thanks for the clarification. The commentary on James Mill is also from 1844, part of the so-called “Paris Notebooks,” like the more famous “Economic and Philosophical Manuscripts.” Marx is very Aristotelian in 1844. But my basic point stands: the view in this early passage is not one that plays a significant role later on.

      1. Thanks! I’m not in a position to comment on its role in the whole corpus. But do you still agree with what you said in the post that my reading is “an even more Hegelian reading of the argument in the 1844 Manuscripts than I think is warranted”? Or was the “over-Hegelian” aspect independent of the points about “need” here?

  2. Re: Marx, hasn’t the catastrophe already occurred, well before AI was a twinkle in Marvin Minsky’s eye? The character Domin and lawyer philosopher Danaher react to a world in which industrial rationalization of labor has already displaced handiwork, which is the species of work that Marx’s scenario involves.

    AI as co-pilot is fine by me. If it’s trained to optimize theory-making and -testing, then I don’t see how a resulting altered understanding of mathematics is a bad thing. The problem for me is with the ubiquity of AI intermediation. Much like the ubiquity of so-called smartphones, the hypesters want to insinuate AI in every path of my engagement with the world. It’s no longer the right tool for the right job, e.g., the mathematician’s co-pilot, no longer an ad hoc solution, but a component of every transaction and interaction. This problem is compounded by the fact that ubiquitous technologies like smartphones, digital cameras, and AI need only be “good enough” to go to market. And I’m afraid that the legacy of good enough computing technology over the past few decades hasn’t been so good. That’s what worries me about AI. I can work to keep my distance from it, as I do from smartphones, but when it forces me to engage, odds are good that it won’t function well, it will not do what the hypesters claim it can do.

    In addition to rereading her novels, I’d love to read a new one by Anita Brookner. Alas, she died in 2016. Perhaps AI will produce Anita Brookner’s first post-mortem novel? Perhaps, but it will likely fail to impersonate Brookner to a fan’s satisfaction.

    1. Coincidentally, ArsTechnica has just wrapped up a live interview with Ed Zitron, whose work I don’t know, about AI hype. I’ve heard only the last half hour of the live interview, but the entire interview sits on YouTube at the first link in the AT story here: https://arstechnica.com/ai/2025/10/ars-live-is-the-ai-bubble-about-to-pop-a-live-chat-with-ed-zitron/. The discussion has nothing to do with Prof. Lederman’s concerns about the risk of AI exhaustion of fundamental human values, but I’m encouraged that Zitron confirms both my skepticism of AI hype and my general opinion of the quality of computing and communications technology these days. He is certainly enthusiastic about his disdain for Silicon Valley blowhards like Sam Altman. Proving my point about technology that is merely good enough, the ArsTechnica interviewer’s connection routinely drops during the first half of the talk. AI is the same in my book.

      I recommend the interview as a complement to Prof. Lederman’s reflections.

  3. I think the dread felt by Prof. Lederman is justified. The first step is to listen to one’s gut. “Ordinary users are understandably excited about the inexpensive abundance promised by AI marketers. However, these users won’t be excited once that technology starts devaluing the very things and events they initially prized” (https://philpapers.org/archive/CHANIA-6.pdf)
    The second step is to actually do something about it: https://certifiedaifreeskillsandknowledge.org/

  4. Michel Xhignesse

    My slack-jawed concern is more that half-assed human work will be replaced by quarter-assed (or less) chatbot work. You can already see it in some quarters, like customer service, where what was once a frustrating experience becomes just impossible.

    Driving along the other day, I saw an ad for an AI law firm. Predictably, it touted the reduced cost to clients. But… that sure seems like a bad idea. In fact, it seems especially bad because the people to whom that will be most attractive are the people who most need a real lawyer and not, pace State v. Demesme (2017), a “lawyer dog”.

  5. I love the gentle nostalgia that suffuses the essay. Also nice to discover Prof. Lederman’s articles (via his website) on philosophy of mind/action in relation to the current iteration of AI technologies (e.g., asking “whether LLMs should be considered to have propositional attitudes, and how this relates to our understanding of LLM behavior”). Much more substantial than the other discussions I come across.

    1. Thanks for the nice words, Jasper!

      1. Harvey, this is in reply to your question, above (for some reason there was no “reply” option on your reply to me). I agree the view expressed in the passage from the “Notes on James Mill” has Hegel’s theory of property and labor written all over it. I don’t think it’s a very plausible view, and it’s less plausible than Marx’s early *and* late view that what makes work free is that it is work we want to do, not work we must do to meet material needs.
