Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.

LLMs and graduate education in philosophy

A philosopher elsewhere (who does more formal work) writes:

[S]ince publicly available LLMs significantly reduce a lot of mechanical writing labor (great example: those who write in LaTeX needn’t spend hours and hours trying to code a complicated diagram, since even the medium-grade LLMs do it quickly and, with minimal back and forth, fairly accurately), and since publicly available LLMs also reduce a lot of literature searches, would it be a step forward to trim the standard PhD program from 5+1 period to either 3+1 or 4+1 (where the “+1” is offered in cases where extension is justified etc.)?
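The "complicated diagram" case the writer mentions typically means TikZ boilerplate. As an illustration only (this example is not from the correspondent), a minimal commutative square is the sort of thing an LLM can draft in seconds but a newcomer might fiddle with for an hour:

```latex
% A minimal commutative-square diagram in TikZ -- illustrative sketch,
% not taken from the correspondent's own work.
\documentclass{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}[node distance=2.5cm, auto]
  \node (A) {$A$};
  \node (B) [right of=A] {$B$};
  \node (C) [below of=A] {$C$};
  \node (D) [right of=C] {$D$};
  \draw[->] (A) to node {$f$} (B);
  \draw[->] (A) to node[swap] {$g$} (C);
  \draw[->] (B) to node {$h$} (D);
  \draw[->] (C) to node[swap] {$k$} (D);
\end{tikzpicture}
\end{document}
```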

I guess my sense is that these LLM aids do not matter for most philosophy dissertations, but may help in some more formal areas–but one can’t tailor general program requirements to just a few subfields that might get an advantage from using an LLM. I should also say that part of the point of a literature survey is to read the literature and make sense of it yourself, and so an LLM might help in identifying literature, but that’s about it. What do readers think?

6 responses to “LLMs and graduate education in philosophy”

  1. IHE has a piece on this very topic that came out today. Like many things being written about AI, it assumes its inevitable spread and use by students, including graduate students. It seems fair to assume some significant percentage of graduate students in philosophy are using AI to not only find sources for their work but also to summarize things they don’t read, help frame arguments, generate examples and counter examples and so on. Doing the same when writing the dissertation is a natural next step. I agree these are the sorts of things we ought to be able to assume someone earning a Ph.D. has learned to do for themselves. But I think the norms will drift to match practice.

    Here is the URL of the article: https://www.insidehighered.com/opinion/views/2026/03/31/ai-and-post-human-dissertation-opinion.

  2. I think it’s a mistake to use LLMs during learning, because it takes a great deal of knowledge and understanding to spot errors. In the lab I trained in decades ago, grad students had to prove they could conduct certain experiments ‘the old-fashioned way’ before being allowed to buy kits that would save them time, because we had found that the young’uns were unable to troubleshoot their experiments unless they’d had to work through the biochemical steps and really understood the principles behind how the kits were put together (and how the experiment might need to be modified for particular purposes). It’s one thing to really understand something and then play with shortcuts, knowing their limitations; it’s another thing entirely to be taught only a shortcut.

    As for “reducing literature searches,” Philosopher Elsewhere is proposing to take away one of the great pleasures and rewards of scholarship: the hunt for X and all the serendipitous finds one makes along the way, enriching one’s knowledge base and broadening one’s exposure to ideas and facts one would not have otherwise encountered.

  3. There are a lot of things that are built into time to graduate, such as course loads, working up to and passing the logic requirements, working up to and passing the language requirement, qualifying papers, etc. Unless you get rid of these, which easily eat up three years of the allotted time, then you’re making things substantially harder for graduate students.

    Then there’s the stuff you need time to develop, but which isn’t built in:
    gaining teaching, conference, and publication experience, and networking. Once again, the less time pressure on that score, the better.

    And, finally, you need to be absolutely certain that no supervisors are taking six months or a year to get back to students. Nothing slows a student down more, or is less within their control–and it’s _super_ common.

    None of that is helped much or at all by chatbots. Even if a chatbot cuts down the time needed for some tasks, I don’t see how that time savings translates to years cut off a graduate education. Let’s agree it’s great for LaTeX coding: so what? Did you devote a year or more of your graduate education to that? Do non-logicians?

    As an aside, every once in a while I check out how the chatbots are doing by giving them a bit of grunt work to complete. I have yet to see any time savings, let alone be impressed (except in the sense that it’s impressive that a chatbot can do what it does now).

    Most recently, I was hunting down the source for a bit of verse for a translation. I knew the author and the work it occurred in, a piece of 16th century epic poetry. Several versions are available online, but none are searchable–or, if they are, searches are made impossible by the outdated typography and subsequent orthographic reforms. It seemed this was a perfect piece of grunt work to outsource to an LLM, so I did. I immediately got my answer–or, at least, I was told which 50-page book of the poem to find it in. So I read that book, and it wasn’t there. I went back to the LLM, which clarified with a different edition. So I checked that, to no avail. I went back, and got a third edition; still nothing. I went back again, and got a song and dance about the earliest sources to quote the line. Turns out those were hallucinated too (not surprising, since the LLM’s earliest source was from 1810, whereas I was translating something from 1765).

    I spent a couple of hours investigating, all told, and got nothing back but hallucinations. No time at all was saved. I am now checking every line of the 600-page poem for myself, and I’m far enough along that I strongly suspect a long-standing misattribution. There’s no way to safely externalize any of that research to an LLM, though. And that sucks, because again, it’s a basic bit of grunt work. So I remain pretty skeptical about all the time these tools can currently save us.

  4. I sympathize with anyone who had to spend \textit{1-2 years} of their PhD struggling with Latex. There’s a learning curve, but that’s a bit much… For everyone else, I don’t see LLMs replacing enough work to bring the required time down from 5 years to 3 or 4. I doubt that the time spent on diagram generation, Google Scholar queries, deciding what journals to submit to, etc. (let alone the fraction of that time LLMs might save on those tasks), will add up to a year, i.e. 12 months of full time work. These things just aren’t the reason that good research — or learning to be a good researcher — is time-consuming.

    1. I would code that as “\emph{1–2 years}”: \emph rather than \textit, on the basis of its ‘conceptual, not visual’ characterization, and the en-dash rather than the hyphen to appropriately connote a range!
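    For readers who don’t write LaTeX, a sketch of the distinction the reply is trading on (my own illustrative lines, not the commenter’s):

    ```latex
    % \emph is semantic markup ("emphasize this"), and adapts to context
    % (e.g. it un-italicizes inside already-italic text); \textit merely
    % requests an italic shape. "--" in LaTeX source produces an en-dash,
    % the conventional mark for a numeric range.
    I sympathize with anyone who spent \emph{1--2 years} on this.   % semantic
    I sympathize with anyone who spent \textit{1-2 years} on this.  % visual, hyphen
    ```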

  5. My specialization is AI, and I’m very, *very* skeptical. My graduate studies were in math, an area where I would expect LLMs to be much more – and I’m choosing this word carefully – much more *effective* than in philosophy. Looking back on my time in grad school, I don’t think that LLMs would have helped enough to shorten the amount of time I needed.

    I wrote everything, including my thesis, in LaTeX, but figuring out *what* to write was always the bottleneck, not how to write it. I outsource LaTeX commands to LLMs all the time now, but that’s saving me less than 2-3% of the time required to write a paper, even in papers with very complicated symbology and notation. LaTeX just isn’t that hard. If someone thinks that a full year of their Ph.D. was devoted to figuring out how to do LaTeX diagrams, then I would like to see some of these diagrams!

    I have found Deep Research to be genuinely very useful. I love that I can now go to an LLM and ask it for a general concept, without knowing the right keywords, and expect it to figure it out – I can even use it as a search function for mathematical formulae! But even there, *finding* the papers is never the bottleneck, it’s reading and understanding them.

    I’ve found LLMs to be truly useful as tutors when I’m struggling to understand a concept. However, I’ve not yet tried to apply them to genuine frontier problems – the sort you’d be grappling with in graduate school – and I am skeptical that they would work as well in that role. But at least within the domains of coding and math, they’re very useful at these more beginner-to-intermediate problems.

    If LLMs continue to get better – and I’m not convinced that they will – then I think this interactive tutor application is much more potentially useful than writing or lit searches, assuming a genuinely motivated student. Even if they never reach the ability to reason on the frontier, a tutor with infinite patience and availability, who can explain and coach someone through intermediate-level material, is hugely useful. But I’m nowhere near close to being convinced that they’re reliable enough yet to start talking about shortening Ph.D. programs!
