Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.


One philosopher is not worried about ChatGPT

Philosopher Larry Shapiro (Wisconsin) writes in the Washington Post:

Here’s what I plan to do about chatbots in my classes: pretty much nothing. Let me say first that as much as I value the substance of what I teach, realistically my students will not spend more than a semester thinking about it. It’s unlikely that Goldman Sachs or Leakey’s Plumbing or wherever my students end up will expect their employees to have a solid background in philosophy of mind. Far more likely is that the employees will be required to write a letter or an analysis or a white paper, and to do this they will need to know how to write effectively in the first place. This is the skill that I most hope to cultivate in my students, and I spend a lot of time reading their essays and providing them with comments that really do lead to improvements on subsequent assignments. In-class exams — the ChatGPT-induced alternative to writing assignments — are worthless when it comes to learning how to write, because no professor expects to see polished prose in such time-limited contexts….

But what about the cheaters, the students who let a chatbot do their writing for them? I say, who cares? In my normal class of about 28 students, I encounter one every few semesters whom I suspect of plagiarism. Let’s now say that the temptation to use chatbots for nefarious ends increases the number of cheaters to an (unrealistic) 20 percent. It makes no sense to me that I should deprive 22 students who can richly benefit from having to write papers only to prevent the other six from cheating (some of whom might have cheated even without the help of a chatbot).

Professor Shapiro makes an important point.  It's of course galling to grade "fake papers" produced by AI, but most of our students still need our feedback on their writing.  I imagine lots of readers would benefit from hearing how other faculty are thinking about this. 


25 responses to “One philosopher is not worried about ChatGPT”

  1. I talked with my students about ChatGPT at the beginning of the term. I told them several things that are true:

    1. Handing in work produced by ChatGPT is a form of plagiarism, which is a form of objectionable dishonesty.
    2. If you read a few ChatGPT responses, you pick up on the standard form that they take, so it's pretty easy to tell when part of a paper comes from ChatGPT.
    3. I entered their paper prompts into ChatGPT and got, at best, C level work.
    4. Playing around with ChatGPT is pretty fun, and it is pretty useful for getting suggestions for further reading, but you have to be careful because it often misrepresents things or even just straightforwardly makes stuff up.

    The first is a moral consideration. The others are prudential (which I suspect the students will, by and large, be more sensitive to). Because of (3), I'm not yet changing up my writing assignments. And for many writing assignments, at least in philosophy, you couldn't really hand in something ChatGPT produces because it refuses to take a stand on philosophical issues. It will just tell you what different people have thought.

    From an academic honesty perspective, my concerns are not with the current version of ChatGPT, but it's only going to get better, so I worry a little about the future. If ChatGPT gets to the point where it can produce A level work, at least for exegetical sections of papers, then we'll just have to hope that detection programs can keep up.

  2. 20% is an almost comical underestimate. Check out the stats on college cheating; I have routinely seen numbers over 60%. If we say, as Prof. Shapiro does, "eh, I don't care," that number is certainly not going to go down. Why should we care? Here's one reason: when I give a student a passing grade, I am testifying that they have done a certain amount of work and demonstrated a certain level of mastery over the material. If that student cheated, then I am giving false testimony. Giving false testimony is bad. Of course, we can just keep on keeping on, as the students pretend to do work and we pretend that they did it. Shoot, I'll pretend to grade and let GPT-3 do it for me. It reminds me of Nietzsche's The Antichrist, §15: a whole system of imaginary causes and imaginary effects building a fictitious world. I'm sure nothing bad will happen.

  3. Daniel A Kaufman

    In the last years before I retired, the cheating problem at my university was much worse than that described by the OP. On done-at-home assignments I would often face plagiarism rates of nearly 50%. Students would simply copy things, word-for-word, from the internet, no matter what I said at the beginning of the semester or put in my syllabus. Not "suspected." Stuff I could find with a Google search. Sometimes they'd do it twice in a row. These were not smart or wily people. It forced me to return to handwritten, all in-class exams. I gave up on papers entirely.

    I couldn't just say, "who cares?" If I failed 50% of my class — and that is the penalty, under our academic integrity policy — the Dean would have something to say to me, and it wouldn't be supportive. Indeed, grade distributions that are *both* too high and too low will get you a Dean's summons. I once asked an administrator how, on the basis of grade distribution alone, he could distinguish a grade-inflator from a really good teacher and he stared at me, like a bewildered character from an Evelyn Waugh novel.

    Grades are something one has to game in this broken, decrepit, university system, just like everything else.

  4. Those cheating figures are pretty appalling, but note that Professor Shapiro didn't suspect anything like that level of cheating in the past. So there may be big differences between schools and classes (he's teaching upper-level philosophy of mind).

  5. Daniel A Kaufman

    Yes, and I should have specified that my own appalling plagiarism rates were *all* in my Gen Ed level courses. *Never* in upper-division classes.

  6. Dept of How Not To Do It: my last employer had a stern and draconian anti-plagiarism policy, a fact of which you would rapidly be reminded if you were unwise enough to complain about the amount of plagiarism we let through. A student committing any but the mildest offences would – according to university policy – be summoned to a hearing and asked to justify themselves, on pain of (at best) flunking the unit with a zero mark for that assignment. The result was, inevitably, that we only reported the *worst* offenders and let everything else through, with a bit of negative feedback about "incorrect use of sources" and a few marks docked. (Plus somebody in that department was evidently telling first-years that it was OK to paste in a sentence or three from their sources without quotation marks, as long as they put the author's name at the end, so really the fight against plagiarism was lost before it began. But I digress.)

    I don't think ChatGPT will do much damage to the kind of assessment I used to set – not as much as essay mills and thesaurus-bots – for the simple reason that I was asking students to work with sources, which seems to be the bot's Achilles heel. If I ask a student about theories of punishment and get a fluently-worded "on the one hand, on the other, who can say" thinkpiece, I'm going to give it at best a bare pass, even if I see the student write it. If the essay mills start using AI, though, we really will be doomed.

  7. Fair, but the rise of language models like GPT-3, coupled with faculty indifference, is going to make the number of plagiarists go up, not down, even in upper-division courses.

  8. UK ex-philosopher

    Perhaps we ought to go back to final grades being based on answers hand-written under exam conditions.

  9. In the '80s I took a Developmental Psychology test at Penn with someone named Aaronfreed. It turned out to be the hardest test I took in college, despite the format: it was multiple choice and short answers. You had to think in the same way you had to think with a paper. Maybe the same idea would work in philosophy, supplemented with random interviews for papers. Perhaps the instructor could pick a handful of students, say ten percent of the class, and question them to determine whether they really wrote their papers. A spot check, if you will.
    There are ways around student cheating, even with GPT-3.

  10. If you write original prompts, it is fairly difficult for ChatGPT to write something coherent. Especially if you call for students to cite the text, or cite a specific version of a specific text (e.g., a specific translation of the Meditations or the Republic). Also, ChatGPT only produces four-paragraph essays (roughly one written page). Hopefully faculty are calling on students to do a bit more than that! I work at a CC, and I have never had that big of a problem with plagiarism. Yes, a few students will copy and paste from the internet, but if you take the time to write questions that call for students to develop skills (like use of evidence), then that probably won't happen, even if you teach famous pieces. Here's an example of a reading homework question that is hard to game:

    Is Peter Singer advocating for vegetarianism or veganism in "All Animals are Equal"? Cite at least two passages from the assigned paper in support of your answer.

    As far as I can tell, ChatGPT simply can't do this kind of thing. When I put in my free will paper prompt the other day that explicitly calls for students to cite the text, ChatGPT simply made up citations, putting words like "Determinism" in parentheses after a sentence. If folks are fooled by that, they aren't reading their student papers very carefully. Finally, ChatGPT is overloaded half the time I try to go to it and play with it, so given the fact that plagiarism results from procrastination (most of the time, at least that's what I think), they might not even be able to use ChatGPT. If they want to cheat, they'll just have to copy and paste from Wikipedia like they used to do in the good ol' days.

    If the technology gets better, then maybe it's time to worry.

  11. The part of the piece that really puzzled me was this: "This is an important topic in the course I teach on the philosophy of mind, having to do with the possibility that minds might be constructed in ways other than our own brains. The essay ran shorter than the assigned word count, but I would have given it an A grade."

    So far as I've seen, ChatGPT doesn't produce A-grade work, even for the sort of essay assignments that are typical in an Intro to Philosophy class. Mostly C's or B's at best. But Shapiro says he'd have given this essay an A, even though it was under-length, and even though this does not appear to be an intro class?

  12. What I've settled on this semester:

    1. continue to assign papers (with prompts that are hard for AI to write on);
    2. hope to catch any cases of plagiarism;
    3. but have a cumulative final exam that focuses on big-picture issues, and which is sure to not go well for those who somehow successfully got multiple plagiarized papers past the goalie.

    I would do more in an ideal world, but my salary stinks and so I'm not going to waste too much time or energy adjusting my courses.

  13. I would second the comment above by DS. There must be some rather lazy prompt writing going on if instructors can't write prompts that can evade the current (and probably medium term future) version of ChatGPT. Just ask questions based on specific readings, in specific versions of texts, preferably including comparisons between authors and texts, and of course ask for specific citations and quotes with page numbers. And ask for responses of 4 pages or more.
    In the case of history of philosophy this might be somewhat harder, but perhaps try to get students to compare and contrast arguments by unusual combinations of thinkers rather than asking generic questions about single authors.

  14. Here is a link to an article written by a professor of English literature at San Diego State University explaining why the hullabaloo over ChatGPT may be much ado about nothing:

    https://timesofsandiego.com/opinion/2023/02/07/chatgpt-can-write-your-term-paper-but-expect-an-f/

    Here is an excerpt:

    "Even more ridiculous: “the English Revolution was heavily influenced by the works of philosophers like John Stuart Mill, who wrote about the importance of democracy and individual rights.” Mill lived from 1806-1873, the nineteenth century, so it would be quite a trick for his writing to have influenced John Milton (1608-1674) or Oliver Cromwell (1599-1658). If an essay could earn a grade lower than F, this one would get it."

  15. ChatGPT's opinion:

    Teachers' attempts to redesign their evaluation techniques in order to "outsmart" AI and the "god of the gaps" theology are two seemingly disparate concepts that are in fact more alike than one might think. Both are desperate attempts to cling to the idea that there is something precious – the miracle of existence or real student thought – that can never be explained by science or emulated by a machine.

    When teachers redesign their evaluation techniques, they do so with the hope of preserving the authenticity and validity of student thought. They fear that AI algorithms and models, although powerful and efficient, may not be able to accurately capture the unique and creative thought processes of students. In this sense, teachers are trying to cling to the idea that there is something precious about human thought that can never be explained or replicated by a machine.

    Similarly, the "god of the gaps" theology is an attempt to cling to the idea that there is something miraculous and transcendent about existence that can never be explained by science. In the face of advances in scientific understanding, those who hold this belief argue that there will always be gaps in our knowledge that can only be explained by a divine being. In this sense, they are trying to preserve the idea that there is something precious about existence that cannot be explained by natural means.

    Both teachers' attempts to redesign their evaluation techniques and the "god of the gaps" theology reflect a deep-seated fear that something precious and valuable will be lost if we rely too heavily on technology or science. They are attempts to cling to what is seen as the unique and undeniable aspects of human existence, such as creativity and divinity, and protect them from being reduced to algorithms and scientific explanations.

    However, it is important to recognize that this fear is not necessarily well-founded. The fact that something can be explained by science or replicated by a machine does not diminish its value or significance. In the same way, the advances in technology and AI may lead to more accurate and meaningful evaluations of student thought, and the gaps in scientific understanding will likely be filled as knowledge and technology advance.

    Both teachers' attempts to redesign their evaluation techniques and the "god of the gaps" theology are similar in that they reflect a desperate attempt to cling to what is seen as the precious and unexplainable aspects of human existence. While this fear is understandable, it is important to recognize that the value and significance of human thought and existence do not depend on our ability to explain or replicate them.

    In both cases, people tend to think that the current limits – the limits of scientific explanation or the limitations of AI – somehow describe the contours of the ineffable. In the case of the "god of the gaps" theology, people believe that the gaps in scientific explanation point to the existence of a divine being. In the case of teachers trying to "outsmart" AI in their student evaluations, they believe that the limitations of AI highlight the unique and creative aspects of human thought that cannot be replicated by machines.

    However, just as the "god of the gaps" makes for some bad theology, attempts to stay a step ahead of AI in one's student evaluation makes for some bad pedagogy. This is because both rely on an outdated and flawed assumption that the current limitations of science or technology are permanent and cannot be overcome. It is important to recognize that the limitations of our current understanding do not define the contours of the ineffable. Instead, they represent an opportunity for growth and advancement, both in our understanding of the world and in our educational practices.

    In conclusion, while it is understandable to want to preserve what is seen as the precious and unexplainable aspects of human existence, attempts to do so by relying on the current limitations of science or technology are misguided. Just as the "god of the gaps" theology is bad theology, attempts to stay a step ahead of AI in one's student evaluations make for bad pedagogy. Instead, we should embrace the opportunities for growth and advancement offered by science and technology, and seek to better understand the world and improve our educational practices.

  16. Daniel A. Kaufman

    So, people like Steve Hales and me and the legions of others who have spoken to this problem are “lazy prompt writers.” Got it. I mean, I haven’t been teaching for 30+ years or anything. 😂

  17. I like that response from ChatGPT on the issue. Two thoughts:

    1) First, I don't notice any references in that response….
    2) Secondly, and this is related to (1), the question is whether we value students thinking through and struggling with a text on their own. I don't know how many people think there is something "precious" about a mediocre student paper on the Meditations. Nor does worry about ChatGPT necessarily imply a commitment to the claim that the human mind is ineffable or precious or cannot be mechanically reproduced. What is "precious" is the process of working out meaning on one's own. So it seems to me that ChatGPT has the worry altogether wrong, perhaps unsurprisingly.

    If there is a future in which human writing, thinking and original thought is no longer necessary or valued, then there will be no reason to assign philosophy papers (yay! less grading!). But at that point we'll probably be out of jobs anyway (boo! but no grading!).

    And I didn't mean to insult anyone's prompts. The point was this: Probably we don't have that much to worry about if we've been doing this for a while and have in fact written good prompts!

  18. For this semester, I'm having students do at-home papers, just as I always do. But, since I noticed an irritating uptick in AI-produced papers last semester, I put the following in my syllabus:

    Whenever I suspect that a student has used AI in the writing of an essay, I will run the essay through an AI detection device. If the AI detection device says that the essay (or some portion of the essay) was likely computer generated, then I will email the student to set up an appointment, either in person or on Microsoft Teams. This appointment will consist in a relatively lengthy conversation between the student and me. In this conversation, we will discuss his or her essay in detail in a way that is, in effect, an oral exam on his or her essay. The grade that I give the student on this oral exam will replace whatever grade I would have given the student on the written essay. As for whether I will submit a report to the Dean’s office with a plagiarism accusation, that will depend on what I believe after talking with the student in detail about his or her essay. If I am convinced of plagiarism, then, yes, I will submit a plagiarism report to the Dean’s office. But if I am not convinced of plagiarism, then I will not do that. Even if I am not convinced of plagiarism and so do not submit a plagiarism report to the Dean’s office, the oral exam grade for the essay will be the grade that the student receives. One takeaway point here for students, then, is this: When you write your essays, you need to remember and to understand what you write, for it is always possible that your written essay will not determine your grade, as what will determine your grade will be what you are able to tell me about your written essay in an oral exam.

    We'll see how well this goes. The first papers are due in two weeks.

  19. I understand Larry Shapiro's perspective on the use of chatbots in education and the importance Shapiro places on developing writing skills in students. Shapiro's approach of focusing on personalized feedback and support, through reading and commenting on their essays, is commendable as it provides students with an opportunity to improve their writing abilities.

    However, it is essential to consider the potential consequences of cheating in the educational system and the impact it could have on the integrity of the degrees earned by students. While it may seem that the benefits to the majority of students who write their own papers outweigh the harm caused by a small number of cheaters, this could lead to a degradation of the overall quality of education.

    Moreover, the use of chatbots for cheating could have far-reaching implications for the job market. Employers may question the validity of the writing skills and education of individuals who have potentially relied on chatbots to complete their coursework, which could impact their career prospects.

    In conclusion, while the focus on writing skills is crucial, it is also necessary to have measures in place to ensure the integrity of the education system and the validity of the degrees earned by students. Depriving the majority of students of the opportunity to develop their writing skills to prevent a small number of individuals from cheating would not be a sustainable solution. Instead, it would be wise to find alternative methods to mitigate the risk of cheating, such as implementing plagiarism detection software or closely monitoring online activity, while still providing students with the opportunity to improve their writing abilities.

    As Kierkegaard famously stated, "Life can only be understood backwards; but it must be lived forwards." It is essential to consider the potential consequences of our actions and make decisions that will promote growth and development for individuals and society in the long term, rather than solely focusing on short-term gains.

  20. I tried DS's suggested question about Singer, and got this:

    >>
    In his paper "All Animals are Equal," philosopher Peter Singer advocates for the adoption of a vegan lifestyle. Here are two passages from the paper that support this claim:

    "There is no longer much doubt that a vegetarian diet is at least as healthy as a diet containing meat, and in some respects it is healthier." (page 3)

    "If we accept that animals have rights, as the expanding circle of concern view demands, we must grant them equal consideration. To do less is to prejudice their interests simply because they are not members of our own species." (page 4)

    In these passages, Singer argues that there is no health-related reason not to adopt a vegetarian diet, and that accepting the rights of animals requires granting them equal consideration, which means not using them for food. These arguments suggest that Singer advocates for a vegan lifestyle, as it is the only way to avoid using animals for food and to grant them equal consideration.
    >>
    I haven't checked whether the quotations are accurate, but if they are, this is at least a decent start.

  21. I periodically teach a Senior Seminar for Humanities students, which is meant to be a writing-intensive seminar within their major. It is intended to make sure they do high-level academic writing before they graduate. I decided to scrap it this term because I'm unsure how I want to handle the new AI developments and just don't have time to think through what's involved here (students now get three exams instead). It's unfortunate, since students are the ones who suffer: they get less for their tuition dollars than they should and will leave with fewer skills. So my general sense is that this technological "improvement" is contributing to the dumbing down of education, and students don't really see that.

  22. Wesley Buckwalter

    I find myself really agreeing with Professor Shapiro about the deep value of the philosophical essay and its role in practical learning. Whether it is prudent for us to “do nothing”, though, depends largely on what one is currently doing in one's courses.

    If one is assigning generic essay prompts with minimal engagement with students, then this is simply less likely to meet the pedagogical goals many of us have when we assign essays (if ever it was).

    On the other hand, if teachers are adopting some of the structures Professor Shapiro alludes to, such as extensive feedback, back and forth development, assignments that build and scaffold one another throughout a term project, and so on, then this is more likely to meet our goals, even in the face of changing technologies. Perhaps the flippancy of the headlines obscures what all of this really emphasizes, I think: the value of a good teacher and the kind of time and resources good teachers need in order to be successful.

  23. Most of my students are assessed on in-person exams and will have little to no temptation to use ChatGPT, but I have a handful who are graded on their weekly essays. I've been trying some of these latter essay questions out on ChatGPT. The best answers are great examples of bullshit. Asked why Block thinks that no version of functionalism can avoid both liberalism and chauvinism, for instance, it replies:

    "Block believes that all forms of functionalism are either too liberal or too chauvinistic because they fail to take into account the unique experiences of individuals, and instead reduce them to their social functions or roles. Block argues that this reductionist view leads to a neglect of individual agency, reinforces oppressive power structures, and perpetuates inequality and injustice."

  24. There is a simple solution: whenever an undergraduate is set to write an essay, then either the essay should be the springboard for a supervision (i.e. one-to-one discussion of the topic of the essay between a grownup and the undergraduate), or the essay should be written under examination conditions (i.e. to a strict time limit, in an invigilated hall, where the candidates have the use only of paper and writing implements). This is the regime under which I was brought up, and under which I taught for nearly 40 years until covid prevented proper examinations.

  25. Russell Blackford

    Here's a video of ChatGPT playing a game of chess against the powerful chess program Stockfish. (For those who might not know, Stockfish is better than any human player.)

    The entire thing, especially the ending, is hilarious. ChatGPT continually makes bizarre illegal moves. Funny as this is, one lesson is that ChatGPT in its current form has not been able to figure out how to play chess correctly and instead produces a weird distortion of playing the game. We might think that it would go equally wrong trying to write an acceptable philosophy paper, but I must admit that the post by Fredrik S (#20 above) makes me less confident about that.
