Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.


Empirical evidence against Chomsky’s theory of language learning?

I'd be curious to hear experts' take on this article. Links to other relevant material are welcome as well.


13 responses to “Empirical evidence against Chomsky’s theory of language learning?”

  1. It's not very good. Folks might find useful this comment by Jeff Lidz, which is typical of the reaction of linguists I am familiar with (see the comments as well):

    http://facultyoflanguage.blogspot.com/2016/09/the-generative-death-march-part-1.html

  2. Cog linguistics is a minority field in linguistics that's been around for a while. Its main task seems to be making bold claims about how generative grammar does not work. It's a bit like what SPEP is to Continental philosophy, but in linguistics.

  3. Part of the criticism of UG going back many years is that it has become (if it wasn't always) a Motte and Bailey doctrine (although no one would have put it this way before Shackel's article). Whether or not that's right, at this point there's so much baggage that any argument not framed in terms of a set of specific claims and predictions is rarely illuminating.

    Take Lidz's argument from the facultyoflanguage link. Suppose that the human capacity for language is almost entirely in virtue of more general cognitive capacities, but with a few language-specific adaptations. Is anything described on this page inconsistent with such a view? If that were the case, would what people think of as UG be true? One answer is "yes (motte) and no (bailey)".

    The statement "Note also that the generativist would generally be delighted to learn that something they thought fell into their purview is better explained by something extralinguistic for it allows UG to be smaller, which everyone agrees is the strongest scientific position." sounds particularly motte-y. Whatever else you might say about the UG community, they don't have a reputation for epistemic modesty.

  4. I'm all for reducing linguistic phenomena to more general features of cognition, and I don't have a lot invested in the specific models that Chomsky's narrow circle are currently using to talk about grammar. But I find this article quite unsatisfying. My suspicion is that formal linguists will react negatively for one of three very different reasons: either out of a desire to defend Chomsky personally (boring), out of a desire to defend nativism (more interesting), or out of a desire to defend the need for formalization in linguistics (much more interesting). Ibbotson & Tomasello say of Chomsky's early work that "This way of talking about language resonated with many scholars eager to em­­brace a computational approach to … well … everything." It still does for many of us, even those who don't believe much of the details of what he said then or what he says now – for example, like many linguists I'm on the fence about the nativism issue even though I'm strongly committed to the usefulness of generative grammar.

    Ibbotson & Tomasello, along with many of their nativist opponents, neglect to mention that doing formal linguistics doesn't commit you to being a nativist about grammar specifically – see for example Amy Perfors' work for a clear statement of the alternative. In general, the well-known fact that attention, memory, cognitive control, mind-reading, etc. are relevant to language use says nothing about whether a good theory of language will have the structure of a generative grammar, or some other kind of formal grammar. Psycholinguists working in the formal tradition have known this for a long time.

    Personally, my attitude is that, if you can give me a description of a hypothesis about cognition that is precise enough to be falsifiable, it will be possible to write it down as a computer program. So, an anti-computational attitude is essentially an anti-scientific attitude. (With due room for philosophical subtleties about whether the cognitive processes involved are 'really' computational, vs just being the sort of things that one could model computationally.) For example, their suggestions about analogy being a key mechanism in language learning would, if expressed with enough precision to be evaluated, probably end up looking a lot like one or another of the many computational methods developed in statistics & machine learning to model induction. I don't know whether the right way to model human learning precisely will end up being readily interpreted in terms of a generative grammar, but I am pretty sure that there will be some kind of abstract structures involved – perhaps not domain-specific. I'm also pretty sure that vaguely gesturing toward 'analogy' as a mechanism will not push the scientific discussion forward. On both sides we need specific, formally precise, empirically evaluable hypotheses.

  5. Lidz contra Tomasello, part II:

    http://facultyoflanguage.blogspot.com/2016/09/the-generative-death-march-part-2.html

    (By the way, the main point of Lidz' part I is nothing new and ought to be very familiar to Tomasello. It's the same sort of reply Crain made in his debate with Tomasello.)

  6. Seems you have a bunch of Chomskian linguists here? Perhaps because the Chomskian side is more aligned with philosophy; the other is more aligned with neuroscience and psychology. The Scientific American article seems a not altogether terrible description of the state of the art of the non-Chomskian side, about 10 years ago. What is clearly wrong about it is claiming that any of this has been a recent development. It is not. This is a decade-old war. Recently, the fronts have remained the same, only the weaponry has been updated.

    It is true that the field is split and divided, and will remain so for some time. In some subfields, the numbers are 30:70, in some 70:30. There is little willingness for debate, and little common language. There are exceptions, but they are clearly exceptions. Where the Chomskian commenters here are perhaps misrepresenting things is in acting as if Chomskians have won the debate, or even as if linguist equals Chomskian; that is as false as the Scientific American article's claim that Chomskians have lost the debate.

  7. This isn't my wheelhouse, but I'm very surprised that no one has raised the following criticism: The reason to postulate an innate grammar is to provide a non-magical explanation of how language is learned, given the "poverty of the stimulus". The article just keeps saying that innate grammar isn't needed since humans have a "unique ability to understand what others are trying to communicate" (including "constraining mechanisms" and the ability to "generalize"). But what we wanted was an *explanation of* this "unique ability". What, exactly, are these "constraining mechanisms"–could they be innate rules?–and how is it that we're able to generalize properly?

    Is there something I'm missing? Did Chomskians abandon the poverty of the stimulus argument?

  8. I was just going to say …

  9. Set aside language learning. What is it in virtue of which we all "generalize" in roughly the same way (proper or not)? Something does this. It's a question about "epistemology naturalized" that needs an answer regardless. You can't, looking just at a "stimulus," and knowing nothing about the organism/machine/system, or set of such things, know what the output/response of the system to the stimulus is going to be. You always have to say something about the object being stimulated. This has nothing specifically to do with linguistics, or epistemology, or psychology, etc. It amounts to little more than saying that some input/output relation isn't metaphysically necessary.

    Very roughly then, the poverty of the stimulus motivates you to say something about the organism itself. But it puts almost no constraints on what it is that you say about that organism. Chomsky says "something new" constrains the development of language. Others say "something old". Poverty of stimulus only says "something". (I have no position on the issue. One of the concerns seems to be that the question isn't as clear as it seems.)

    "Explanation" highlighted for some reason. Fine, unless you are suggesting that we have to use inference to the best explanation, where this is an armchair method for deciding which of two theories is uniquely correct. The linguists participating in this debate about human psychology want the matter resolved empirically.

    It's also important to be clear that, however these "generalizations" are understood, they are not generalizations formed in, for example, English. Generalizations in a language of thought? Maybe; in that case, they might be described as "about" English, in the traditional sense of "about". Or, maybe "regularity pertaining to" would be better than "generalization about." This calls to mind another type of argument which also doesn't establish much of anything.

  10. Norbert Hornstein. Sorry.
