Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.


“Artificial Intelligence [sic] forever inanimate and dumb”

This is funny and probably right (and timely, given yesterday's thread):

The cognitive scientist Gary Marcus has been the most active when it comes to showing that, contrary to some of the claims referenced above, LLMs [Large Language Models] are inherently unreliable and don’t actually exhibit many of the most common features of language and thought, such as systematicity and compositionality, let alone common sense, the understanding of context in conversation, or any of the many other unremarkable “cognitive things” we do on a daily basis. In a recent post with Ernest Davis, Marcus includes an LLM Errors Tracker and outlines some of the more egregious mistakes, including the manifestation of sexist and racist biases, simple errors when carrying out basic logical reasoning or indeed basic maths, in counting up to 4 (good luck claiming that LLMs pass the Turing Test; see here), and of course the fact that LLMs constantly make things up.


5 responses to ““Artificial Intelligence [sic] forever inanimate and dumb””

  1. IANACS.˟ Nor am I particularly a fan of Nick Cave's work, but in his own way he echoes Lubina's criticisms:
    https://www.sfgate.com/tech/article/nick-cave-excoriates-openai-chatgpt-17723986.php

    For example, "ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend." I'm not sure I understand precisely what he's after here with his talk of transcendence and limitations, except that these are concepts often associated with the power of art to relieve the human condition.

    ˟I am not a computer scientist.

  2. I'd say again what I noticed in Dec 2021, namely, that ChatGPT (and LLMs more generally) performs poorly at basic reasoning and has difficulty solving maths and physics problems. Essentially, this is because the computational procedure of an LLM is not compositional. I just skimmed the substack article by Gary Marcus and Ernest Davis — it makes similar points to the ones I did, using similar examples.

    I suggested back then that maybe the software engineers working on LLMs could remedy this by adding a peripheral "reasoner", such as an ATP (automated theorem prover) or even just a regular calculator. These are compositional. E.g., to work out 5 + (7 x (3+4)), the term must first be parsed. Then, from the parse tree, the system knows it must work out 3+4 (and perhaps store it), then multiply that by 7, and then add this to 5. This would require the ATP-augmented LLM system to convert an input string S into (what Quine calls) a string S* in "regimented notation" — like a first-order language — which could then be fed into the peripheral to figure out implications, satisfiability, etc. This might remedy these problems. I'm mainly neutral on whether AI will succeed.

    https://leiterreports.typepad.com/blog/2021/12/so-much-for-artificial-intelligence.html
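    The compositional, bottom-up evaluation described above can be sketched in a few lines of Python. This is only an illustration of what a peripheral "calculator" would do with the parse tree of 5 + (7 x (3+4)); the use of Python's own `ast` parser is my shortcut, not part of the original suggestion, and a real LLM add-on would need its own string-to-regimented-notation translator.

    ```python
    import ast
    import operator

    # Map parse-tree operator nodes to the arithmetic they denote.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def evaluate(node):
        """Compositionally evaluate an arithmetic parse tree, bottom-up."""
        if isinstance(node, ast.Expression):
            return evaluate(node.body)
        if isinstance(node, ast.Constant):   # leaf: a bare number
            return node.value
        if isinstance(node, ast.BinOp):      # interior node: op(left, right)
            return OPS[type(node.op)](evaluate(node.left),
                                      evaluate(node.right))
        raise ValueError(f"unsupported node: {node!r}")

    # The worked example: 3+4 is computed first, then multiplied by 7,
    # then added to 5 -- the value of the whole is fixed by the values
    # of the parts plus the structure of the tree.
    tree = ast.parse("5 + (7 * (3 + 4))", mode="eval")
    print(evaluate(tree))  # 54
    ```

    The point the sketch makes concrete: the evaluator's answer is guaranteed by the recursion over the parse tree, whereas an LLM has no such structural guarantee.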

  3. Human reasoning has an intrinsic social aspect. I'm a near-dummy on AI, but even I could see that Spielberg's A.I.: Artificial Intelligence contained some hard truth, which I expressed in a chapter I wrote for a book on his movies (search Amazon if you wish–SS and Philosophy). Unless AI develops capacities like empathy, or at least a persistent replication of some valuation of others that AI cannot produce at present, it is at best producing a kind of sociopathic, noncontextual human intelligence. Intelligence divorced from emotion may never add up to replicating what evolved humans experience, though I'm not at all sure that such experience cannot be duplicated with sufficiently ingenious programming. But the social aspect is some kind of necessary condition for AI meeting human expectations of intelligence, IMHO.

  4. @ V.Alan White
    Aren't computers more autistic than sociopathic, or is it too early to diagnose?

  5. It seems pretty easy to show that these models lack a lot of the basic infrastructure of cognition that we take for granted.

    But when the frontline critique involves "the manifestation of sexist and racist biases, simple errors when carrying out basic logical reasoning or indeed basic maths, in counting up to 4," one rather marvels at how far they've come at approximating us–not to mention the good Professor Marcus's rather rosy assumptions.

    From a 2020 Forbes article:

    "According to the U.S. Department of Education, 54% of U.S. adults 16-74 years old – about 130 million people – lack proficiency in literacy, reading below the equivalent of a sixth-grade level."
