Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.


“AI will destroy universities”

That’s political theorist Paul Sagar’s not implausible assessment; an excerpt:

I…teach political philosophy in a British university, so I have had to wrestle with the impact of large language models (LLMs) in one small domain: higher education. And here, my conclusion is simple. The threat they pose is existential…. Specifically, students who use LLMs to complete their coursework assignments [are the core of the problem]. Ask anybody lecturing in a university today and they will tell you the same: the impact has been dramatic.

The most obvious change is that whereas plagiarism software was previously very good at catching students passing off copied work as their own, LLMs evade this entirely. Programs like ChatGPT generate wholly original text based on the prompts you feed them, making plagiarism software useless. (It’s cheating, Jim, but not as we knew it.)

Likewise, software which claims to be able to detect AI-generated text is of no help, yielding false results in both directions. Students know this. I can only speak directly to the effect on essay-based subjects, but I can’t imagine the situation is any better in the sciences. In turn, we academics are painfully aware that at least some students are using the technology, and hence we are sometimes giving out grades for work spat out by machines, but fraudulently presented as human.

Shouldn’t we, the certified experts, be able to tell the difference between undergraduate work and LLM slop? Well kind of — but it’s tricky. A little while ago I was confidently pronouncing on my ability to spot AI-generated work in my field. When it comes to political philosophy, there is a certain tone of arch confidence — a panoramic control of the wider discipline — that it takes years of reading, writing, and thinking to pull off. No undergraduate has been studying the subject anywhere near long enough to be able to write with that level of control and authority. Hence when I find this tone in student work, I’m pretty sure that I’ve got an LLM-cheat….[But] those who are effective at using the technology to cheat are precisely the ones getting away with it….

My gut instinct that coursework feels like AI is, reasonably enough, insufficient proof to fail a submission. Unless the student is daft enough to have included a hallucinated bibliography of made-up AI references (and yes, this does happen), they always have plausible deniability. Their word, against my gut instinct. Unprincipled offenders win every time. After all, for a student willing to cheat on submitted assignments, a bit of extra lying isn’t much of a leap.

Yet it’s not just the problem of brazen cheating. In some ways, the more insidious threat LLMs pose to undergraduate learning is the promise of instant shortcuts. Why struggle through that difficult article, why read that complicated book, why force yourself through the problem set, when the internet can just summarize it for you?

The answer to which is: because it is only through the struggle, the forcing, the wrestling with ideas for yourself, over the course of years, that you can truly train and develop your mind. Indeed, this is the reason university humanities degrees put such a high premium on writing. Writing is thinking. Until you have tried to put your ideas on the page, you never really know if you understand them and have them under control.

Unfortunately, the truth of these facts only becomes apparent with experience — which is exactly what undergraduates lack….

What I would give to demolish the Silicon Valley cartels foisting this corrosive digital narcotic upon my students! Move fast and break things? How I wish I could return the favor.

By this point, it is abundantly clear that the only pedagogically robust response to LLMs in universities is at least a partial return to traditional methods. Reliance on online coursework has to be reduced; a significant return to paper and pen is required. This is the only way we can guarantee that students are not cheating in (all) their submissions. It is only by demanding that they prove their knowledge directly, in person, that we can incentivize them to go away and learn properly in their own time. Everybody in higher education knows this already.

For thirty years, I gave take-home essay exams in my jurisprudence classes. No longer: now it’s a 4-hour in-class essay exam, no Internet access. Colleagues convinced me it was too risky to continue with a take-home exam.

Thoughts from readers on Professor Sagar’s piece? Signed comments (full name, valid email address [the latter will not appear]) will be preferred.


25 responses to ““AI will destroy universities””

  1. I’m not at a university, so I don’t have a ton to contribute, but I want to subscribe to see what other people say… One thing I would like to mention, though, is that for those rare students who *want* to learn, LLMs can be excellent tutors, at least in math and coding. I use Gemini for this pretty regularly – they’re infinitely patient, always available, and able to explain beginner-through-intermediate subjects quite well. That doesn’t make up for the problems they create, obviously.

  2. The central claim is that LLMs (or AI more generally, I suppose) are an existential threat to universities. This gets somewhat conflated with assessment-related cheating. However, the latter does not seem like an existential threat to me since, as Sagar also notes, one can simply return to traditional modes of assessment (e.g., sit-in exams or oral exams). So, I suppose, the existential threat really lies in the ‘shortcut problem’ Sagar mentions. I don’t know, though, how much this poses an existential threat specifically to universities. If students are assessed in ways that rule out AI shortcuts, then it is in their interest not to take those shortcuts, since they know they will not be able to perform well in the exam if they have grown too accustomed to relying on AI. So I am a bit puzzled about the larger, overarching claim about an existential threat. I do not want to play down the danger: if, for example, students never properly learn to read, write, and think (critically) in the sustained, deep, concentrated way it takes to produce a good essay in a week or two (or more), that is obviously a problem. But does it pose an existential threat to universities? That I doubt. I think it poses a more general threat to the ‘human intellect’ (for want of a better word), but that threat is comprehensive; it does not affect universities more than other areas of life.

  3. In terms of pedagogy, I agree with Professor Sagar. In philosophy courses, at least, the exercise is the point; I do not assign essays because I enjoy reading student-written essays, I assign essays because the best way to improve one’s reading, writing, and thinking is to read, write, and think.

    But students for the most part aren’t paying for my classes because they want to get better at reading, writing, or thinking. They are for the most part paying for my classes because they want a particular certification and I am a stop on the road to that. I am in effect certifying that they have done a certain amount of reading, writing, and thinking that, thanks to AI, I can no longer be confident they have actually done.

    Furthermore, one response from vocation-oriented colleagues and administration seems to be, “generative AI is the future, a class that integrates AI into student learning is more likely to help them get a job than a class that bans the use of AI.” The recommendation, for example, to “focus on ‘prompt engineering’ instead of writing” is one I see somewhat routinely in job-focused places like LinkedIn.

    These considerations suggest to me that the reckoning generative AI is forcing is substantially on the vocationalization of higher education, which is a problem that pre-dates AI by many years. Institutions of higher education have been quite happy to take money from government and industry to serve as proxy filters for intellect or conscientiousness or whatever it is companies think an undergraduate degree (particularly one with no relation to the work being offered) is supposed to prove. Generative AI scrambles that equilibrium.

    For my own part, I think next semester I will have students turn in hand-written reading summaries for their weekly assignment. If they generate them via AI and then copy them out by hand, at least the copying may have some plausible pedagogical value, if not quite the value I was hoping for.

  4. I completely agree with both Professor Sagar’s diagnosis and his treatment. Students who abuse LLMs merely to obtain grades do not realize the disservice they do themselves, essentially bypassing what education means. Professors have to avail themselves of direct means of assessment.

  5. When the problem of AI-based papers started a few years ago, I immediately switched to in-class essay exams and told myself I would just wait a few years for someone to make AI-detecting software that would fix the problem. So far I have not switched back.

  6. Edwin Fruehwald

    Generative AI has the potential to do catastrophic harm to higher education. This is because learning is a biological process that requires years and years of effortful labor. Anything that disrupts this labor will severely damage an individual’s ability to function in the world. Gen AI is such a disruption because it substitutes efficiency for learning. Nothing can compensate for the losses incurred when Gen AI is used instead of the human brain.

    Learning is a biological process in the human brain that is the product of thousands of years of human evolution on the savanna. Simply stated, learning occurs in brain cells (neurons) that are connected by synapses. Anything that strengthens neurons and creates more connections is learning.

    Scott Fruehwald, How Generative AI Can Harm Higher Education, With Special Emphasis on Legal Education (2026), https://www.amazon.com/How-Generative-Harm-Higher-Education/dp/B0GQ52WH77

  7. André Hampshire

    Sagar’s claim that LLMs pose an “existential threat” to universities rests on a set of conflations that do not survive scrutiny. What is threatened is not the university as such, but a particular assessment practice—namely, the use of unsupervised essays as proxies for student understanding.

    The underlying assumption is that AI substitutes for thinking. That is too crude. In practice, LLMs can just as readily function as tools for augmenting reasoning: surfacing blind spots, testing arguments, and refining positions. The relevant distinction is not between “AI use” and “no AI use,” but between passive reliance and active engagement. Collapsing these into a single category obscures more than it clarifies.

    The companion slogan—“writing is thinking”—is equally overstated. Writing is a technical practice involving style, structure, and convention; it is neither reducible to nor a reliable indicator of underlying thought. One can think clearly and write poorly, or produce competent prose without deep understanding. Treating essays as transparent windows into cognition mistakes a contingent pedagogical tool for a dependable epistemic instrument.

    This matters because the current panic presupposes that essays already function as robust measures of intellectual ability. In reality, especially at scale, they often do not. Rubric-driven grading, minimal feedback, and high student-to-instructor ratios have long reduced them to standardized outputs evaluated against coarse criteria. If so, then LLMs are not so much corrupting a well-functioning system as exposing the fragility of one already compromised.

    The real issue, then, is evaluative. LLMs weaken the link between product and process. But that is a problem with our measurement model, not with cognition itself. Universities have faced analogous shifts before: new technologies alter what counts as competence and force adjustments in assessment. AI is unusual in degree, not in kind.

    Nor are institutions without options. Timed exams, oral defenses, and supervised assessments have long existed. A return to these methods may secure authorship, but it comes at the cost of other pedagogical goods—depth, revision, and sustained engagement. That is a tradeoff, not an existential rescue.

    What is striking is how quickly the response has defaulted to restriction. Rather than adapting assessment to a world of tool-mediated cognition, many propose excluding the tool in order to preserve legacy practices. But the existence of an evaluative difficulty does not justify constraining a capacity—especially one that can, when used well, enhance the very forms of reasoning education is meant to cultivate.

    The more plausible conclusion is not that AI will destroy universities, but that it exposes a tension long present in them: between scalable assessment and genuine intellectual formation. Resolving that tension will require institutional adaptation, not rhetorical inflation.

    1. Is the joke here that your comment is very obviously written by AI?

      1. André Hampshire

        If anything, this exchange illustrates the problem: judgments are being made on stylistic impressions (“this sounds like AI”) rather than engagement with the argument itself. That’s exactly why treating prose as a transparent indicator of thinking is unreliable. One would hope philosophers would be more attentive to that distinction.

        1. Essays as coursework have never been just about engaging the argument itself. Authorship matters because it matters that the argument is *the student’s*. An educator would know that. Style is an indicator of authorship, and so, yes, it matters.

        2. André Hampshire

          It is not clear why tool use undermines authorship. The argument remains mine insofar as I am responsible for its direction, structure, and endorsement. We do not ordinarily deny authorship because someone has used external aids—whether texts, colleagues, or other intellectual resources. The relevant question is not whether a tool was involved, but how it was used. An educator, one would think, would recognize the potential of such tools to advance a student’s reasoning and writing rather than assume their misuse by default.

          As for style: it is neither a necessary nor a sufficient indicator of authorship. Weak writers can produce flat, generic prose; stronger writers can adopt or refine styles through revision and assistance. At best, style is a fallible representation of authorship, not a reliable diagnostic of it.

          More generally, attributing a piece of writing to AI on stylistic grounds is not an argument. It illustrates the very problem under discussion: the substitution of surface-level impressions for substantive engagement. One would expect better epistemic discipline in making attribution claims.

        3. Maybe people are more interested in engaging with human interlocutors than with machine ones, no matter what the argument.

        4. André Hampshire

          If one is genuinely uninterested in engaging with non-human interlocutors, it is unclear why one continues to do so—especially while asserting, without evidence, that one’s interlocutor is not human. In any case, this shifts the discussion away from the argument itself, which is what ought to be under scrutiny.

    2. Mark Robert Taylor

      At the risk of self-advertising: you claim “AI is unusual in degree, not in kind” and “It is not clear why tool use undermines authorship.” I’ve written a set of arguments against both of these claims in “The Obligation to Restrict AI in Student Writing” in the Journal of Philosophy of Education. I hope you’ll give it a look and be persuaded that AI is an unprecedented technology and that we should treat it (in education) more like a separate author than like a tool that extends authorship.

  8. I think Paul is absolutely right. For the humanities it is an existential crisis, not just some minor issue that modifying our assessment techniques will solve. Undergraduate AI use is like fentanyl—the faculty are forced to become addiction counselors while the fanboys gush about what a great pain management tool fentanyl is.

  9. The existential threat is not to higher-ed as such but a particular (and now common) higher-ed business model: the one that centralizes or relies on fully online asynchronous classes. State colleges and community colleges have increasingly moved in this direction. That’s because (in part) they are competing for enrollment/customers, and the customer often prefers to stay at home (even more so now, because they know they can outsource work to AI). One cannot move assessment for these classes back into the classroom without significant help and good will from administrators, but the incentives for administrators are often to look the other way. Competing schools could collectively move things back into the classroom, but that requires solving the collective action problem, which seems unlikely. One option for professors caught up in this situation is to conduct oral exams via Zoom, which is what I have tried to do for my own online classes, but there are unique challenges here, especially with scale.

  10. I’m also at a British university (in a law school) and my sentiments largely align with the author’s. I see a big part of my job on the teaching side as helping my students achieve certain “learning outcomes” (as they’re called here) and then verifying through the assessments that these outcomes have been met. The learning outcomes are usually formulated in terms of students possessing and demonstrating a certain level of understanding or mastery of the relevant material (ideas, arguments, canonical texts, and other important sources). Accordingly, our assessments should be designed to assess much more than just the quality of the words on the page (the quality of the output): they should probe deeper and see whether the student themselves has developed the target skills and understanding. So to my mind, take-home essays are dead — and as Brian says, in-class, closed-book, no-internet assessments surely have to be part of the response by academics.

    The trouble, and this is what I’m writing to add, is that our university (like many others in the UK, from what I gather) does not have the capacity to support in-class, laptop-based exams — even while it also (at least negligently, if not recklessly) foists “this corrosive digital narcotic upon [our] students.” The lack of capacity here is twofold. For one, my university has thus far been reluctant to pay for exam lockdown software of the necessary kind (despite its widespread availability and relatively low cost — a university-wide license costs £15-30k per year). But the second reason is that even if we had software enabling students to take in-class exams on their laptops, the university does not have enough rooms for all or even most exams to take place in person during the 3-week exam period. Part of the problem is that each exam of ~100 students requires ~10 rooms (give or take) because of the various student accommodations that are needed (e.g. for a private space or extended time). There are of course some obvious solutions to these very low-tech problems, such as deploying temporary rooms (like we do with a graduation tent) for exam purposes or extending the exam period from 3 weeks to 5, and so on. But the university (like so many institutions in the sector) is pretty risk-averse and accordingly slow to act — and so the problems pile up despite the availability of seemingly feasible solutions.

    As a result, we academics where I’m based are being asked to convert about half of our assessments into in-class tests sprinkled throughout the semester, with the rest, during the exam period, being some mix of group vivas, presentations, and a few in-person exams (likely hand-written for now). None of this can take place until next year, given the slow approval process for any assessment changes at UK institutions.

    So institutional inertia and inflexibility are a big part of the challenge that AI is posing for universities, at least in the UK. Academics and students will most likely have to adjust to the realities of what universities can provide, rather than universities re-conceiving themselves in a forward-looking, creative, or dynamic way to sustain intellectual and pedagogical rigour in an AI world. So I also fear for many UK universities’ ability to mould themselves to the times so as to remain fit for purpose and capable of carrying out their underlying social mission: equipping the next generation of responsible citizens with the intellectual tools that they, and all of us, need to thrive and live well together.

    Hope I’m wrong, but I largely share the author’s hunch that the AI-wave is going to be too much for a lot of UK universities — especially the institutionally inflexible ones.

    (One of the few places I might have some sympathy for the “move fast and break things” mantra is with respect to the overbearing bureaucratic apparatus at many UK universities — though even there you still want to do right by students and colleagues, so you wouldn’t want to break too much. Probably it’s just the “move fast” part that we need a heavier dose of…)

  11. My big problem with LLMs at the present time, apart from their being potentially the epitome of Foucault’s panopticon, with Big Brother looking right back at you, potentially in real time, is that their so-called “safety features” are inherently contrary to humanistic inquiry. If I’m researching De Sade, Bataille, Céline, Genet, even Mann, indeed any number of other figures who are both monstres sacrés and monstres impudiques of literature, philosophy, and art, then, almost by definition, these muzzled machinations cannot be trusted. It is almost certain that they will try to euphemize away the obvious tension inherent in such works, applying a low, slow, but insidiously enervating form of moral and ideological gaslighting that surely can’t be good for one’s research outcomes, nor, ultimately, for one’s mental health, or, at the very least, for one’s scholarly élan (not to speak of one’s patience). How such bowdlerizing translates to the finished product, qua student essays, I suppose, is that, even where it is not inevitable, the student will simply choose a less controversial topic to begin with or, barring this, will euphemize away the “unsafe” per the guidance of the LLM guide/amanuensis of their choosing.

  12. I’m not sure I’d yet go so far as to call LLMs an existential threat to universities. But I do think it is an existential threat to the integrity of the at-home, unsupervised student essay. I’ve always understood such essays to be at the core of my discipline (philosophy), the means by which students learn to think for themselves. So this threat is threatening indeed.

    There is a call to shift our emphasis to in-class handwritten assessments, such as blue book exams. For me personally this will not be a return to former times, since throughout my years of teaching I have continued to give hand-written blue book exams. However, these exams were always kind of a “police action” on my part, so to speak — namely, my attempt to give students an incentive to read the texts beyond what they needed to read for their essays, and to be sure they have a basic competency with the ideas covered in the course. But learning how to do philosophy? That happens as one wrestles with slippery ideas in the context of composing — and ideally, revising — an original essay. My blue book exams don’t really ask students to do that.

    Nor is it clear to me how they could. I frequently see claims that the solution to rampant AI-driven cheating is to have students hand-write essays in class. I have my doubts — about the time pressures inherent in this format, plus its incompatibility with extended research, not to mention the opportunity costs of what you give up by devoting so much time in class to what used to be done as homework. (FWIW I recently described these worries in more detail in this reddit post: https://www.reddit.com/r/Professors/comments/1rpdicd/some_questions_about_inclass_essay_writing_as_a/ )

    I remember, in the early days of ChatGPT, reading an essay in which the author wondered whether LLMs would turn out to be more like calculators in a math class, or more like e-scooters in a gym class. With several years of hindsight, I now think “e-scooters” is the better answer. Yes, we likely must shift to giving higher priority to in-class supervised assessment methods (e.g. blue book exams), but if this means shorter and fewer student-written essays as a result, then I grieve for all the lost learning this shift will entail.

  13. Jonathan Turner

    I agree with all of this. The threat is really that stark. The only solution is indeed in-class essay exams, with no internet access, but unfortunately my university (top 20 for law in the UK) is sticking pig-headedly to the line that they do not have the economic resources to pursue it. They seem unmoved by warnings that the alternative is giving up on academic integrity – and ultimately on a university degree as a viable academic credential – altogether. Sadly, what most students want is not an education as such, but a piece of paper that unlocks economic opportunities for them, and since universities in the UK rely on their custom for their continued existence, they are happy simply to give them what they want.

    1. Hear hear

  14. I’d like to pose a question. Let’s be pessimistic for the moment, and assume AI *does* destroy the university, at least as we know it. (To be clear, I am not yet at that point, but the thought experiment is useful.) What do we do then? Specifically, what do *we*, as scholars and scientists, do then? How do we design new institutions – whether it’s a transformed university or something else – that can continue to generate and disseminate scholarship in this new world?

  15. Charles Pigden

    Surely there is an answer to the problem of AI cheating which averts the existential threat. It’s not great, and it’s otherwise undesirable in many ways, but it will go a long way toward disincentivizing AI-based cheating. We simply revert to written exams for a major part of the grade. In my lower-level classes there will be a mark for a presentation and marks for two take-home essays, but 50% will go on the final exam. Of course some will cheat on their take-home essays and I won’t be able to stop that. But such cheaters will be unlikely to prosper when it comes to the final exam. I will design the exam questions to be sufficiently different from the corresponding essay prompts that it won’t be possible to memorise an AI-generated essay and successfully regurgitate it, especially as I will mark hard for relevance. They will have to answer the specific question I asked in the exam, not the similar question that I asked as an essay prompt, or they will lose points big-time. Furthermore, they will have to answer one exam question on a topic that they have not written an essay on. Those who relied on AI to help write their take-home essays are likely to come a cropper when asked to write and think for themselves in the exam. This will gradually become obvious to students, making them less likely to cheat on their take-home assignments.
    I would much prefer a system based entirely on class presentations and long take-home essays (much better pedagogically, IMHO), but AI has effectively wrecked this as an option.
    It should not be forgotten that one of the functions of universities (and an entirely legitimate one in my eyes) is to act as gatekeepers to the lower slopes of the elite. A university degree should say a) that you know a fair bit about some worthwhile stuff, and b) that you have a range of cognitive skills and capacities which make you useful both to society and to potential employers. Unless we disincentivize AI-based cheating it won’t say either of these things, and people will be getting their entry-tickets to the elite under false pretences. But the graduates of universities that DO disincentivize such cheating (and do so very publicly) are likely to be at a premium in the job market. For they will have acquired the knowledge and the skills that their cheating contemporaries lack, skills which are likely to remain at a premium even in an AI-affected future.
    Here is an excerpt from a submission to a Government ‘Green Paper’ that I made about thirty years ago, which is perhaps still apposite.

    Students as a group tend to suffer from inconsistent time-preferences. Whilst they are students, many of them want to get good grades without doing much work. Once they have graduated, they want TO HAVE BEEN to a place where a good grade denotes intelligence, a capacity for hard work and (often) mastery of some specific discipline. Thus their preferences whilst they are students are inconsistent with their preferences thereafter. In responding to student demand there is a problem about WHICH demands we respond to – the demands they have as students or the demands they acquire as alumni? I think that in the long term the right strategy is to cater to the retroactive demands of alumni rather than the present demands of undergraduates. But this may entail being UNRESPONSIVE to the preferences of the current crop of students.

  16. I teach both large courses, like Jurisprudence and Critical Legal Thinking (a.k.a. Legal Argumentation), and small seminar-based courses at Edinburgh University.
    In the large courses (>250 students) we reverted to traditional 3-hour in-person written exams (with no internet access). But in the seminar-based courses (<25 students), I have been using a combination of a shortish written assignment (worth 50% of the marks) and a 25-30 minute oral exam (worth the other 50%) dedicated exclusively to discussing the claims and arguments put forward in the written assignment.
    The first feels like a necessary but regrettable development. The latter, however, I have come to think is an improvement on take-home exams. I was originally concerned about the oral exam, particularly in the face of the astonishing increase in mental-health challenges declared by students. But the experience over the past few years has been overwhelmingly positive.

  17. F.E. Guerra-Pujol

    Apropos of Sagar’s wish to hoist the A.I. industry by its own petard, this article appeared in print in yesterday’s NYT: “First to Be Disrupted By A.I.? Its Creators.” The online version of the article is here: https://www.nytimes.com/2026/04/02/technology/ai-silicon-valley-tech-work.html
