News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.

Philosopher Steven Hales comments. Thoughts from readers? Signed comments preferred.
I propose the Shakespeare test: AI can beat Kasparov and whip any champion at Go and Jeopardy.
Harold Bloom famously quipped that Shakespeare invented the human.
If AI can be more than a generator of text and assume human qualities, then it must convincingly ape Shakespeare.
It can't even imitate Heaney: https://www.theatlantic.com/books/archive/2023/02/chatgpt-ai-technology-writing-poetry/673035/
It’s possible that predictions about AI recursively improving itself to the point that it’s forever better at everything will turn out to be false. However, since the first major successes of deep learning, most criticism of AI has looked to me like barely concealed fear, jealousy, and a desperate attempt to convince ourselves that our fears are unfounded.
I can sympathize with fears about financial security. I escaped academic philosophy, became an engineer, and started feeling financially secure for the first time in my life. I think people with my skill set will continue to be necessary for the foreseeable future, but the number of people that any given company requires will shrink as AI improves. AI will improve whether I like all of the consequences or not, and I think it would be irresponsible for me to bury my head in the sand rather than stay alert to which doors are closing or opening for me financially.
I can also sympathize with some of the social and political fears. One thing that I haven’t seen anyone talk about is the fact that the massive amounts of surveillance data that governments and corporations around the world have about each and every one of us can be weaponized in much more dangerous ways than they already have been. It’s going to be increasingly easy for these organizations to systematically control people’s behavior and destroy people’s lives, especially Gen Z and younger who will have lived their entire lives online. I’ve publicly worried about this for many years and most people seem to think I’m being paranoid but it’s shocking to me that no one seems very concerned about this.
What I struggle to sympathize with are the fears pertaining to status. This seems to be at the root of most of the criticisms of AI. If your primary motivation to engage in the arts has been that you want to be considered a Beethoven, Rembrandt, or Shakespeare of your time, or you want to be super famous and have the lifestyle that goes along with it, and the fear that AI will become better than you at your favorite art forms causes you to feel demotivated, then I’d be willing to bet you weren’t making very good art in the first place and that you never will. I think analogous claims could be made about most fields, including philosophy. I honestly struggle to understand how someone could have as powerful an experience as giving a heartfelt musical performance and think that that kind of experience isn’t enough to sustain an interest in creating art, despite the fact that they almost certainly won’t be considered one of the greatest artists ever. If a robot can play piano better than anyone we’ve ever heard, that sounds to me like progress in the arts, something to rejoice in, and it shouldn’t stop you from playing piano.
I think the predictive aspects of the article are more or less on point. Much criticism of contemporary systems is, as Hales says, like "seeing the infant Hercules and harrumphing that he cannot beat [one] at wrestling." I am less convinced by the optimistic tone of the conclusion. If all human activity comes to have the social function of sport — to demonstrate the limits of human excellence as opposed to the limits of excellence simpliciter — I am not sure the correct response to that will be "gratitude and amazement."
I think Frank Herbert got it right. We will most likely have to fight AI for survival in a version of a Butlerian Jihad. Science fiction as prophecy. The Terminator was a documentary.
My admittedly somewhat limited understanding is that it's part of the nature of deep learning models that there is no feasible way to look inside their machinery and tweak them to give them other general capacities, because none of the nodes corresponds to any intuitively useful concept or function, and there are billions of them. They just work by brute statistical force until they find an incredibly convoluted mathematical function that approximately fits the desired result. As such, there is no way to just "build off" of GPT to somehow give it common sense or the ability to reason logically, and the knights-and-knaves example from one of BL's previous links clearly demonstrated that it has no such ability. That isn't just wishful thinking; it has to do with how the algorithms work. They aren't magic. There is no obvious way to create an algorithm that is generally capable of discerning truth from falsehood in a totally topic-neutral way. You can do it for mathematical claims with classical algorithms, because we know how to formalize them. But as we all know, these methods have many limitations.
My guess is that GPT is probably close to the peak of what an LLM trained on the internet can do. You could maybe get it to sound a bit more like Shakespeare, but there's nothing you can do to get it to the point where you can trust its answers on factual matters, short of having human beings fact check it or by doing the incredibly hard work of building up actual databases of confirmed knowledge that it can draw on. Even if you trained a generative LLM only on claims that were true, it wouldn't be able to become sensitive to their "truthiness" because there is no such thing, and it would likely still produce falsehoods because it only tracks statistical correlations between words in sentences. "Now we just have to teach it to discern truth from falsehood!" Were it so easy. Business might get GPT-happy for a while, right up until it inevitably screws up and they get sued.
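The claim that a generative model "only tracks statistical correlations between words in sentences" can be made concrete with a toy sketch. This is not GPT's actual architecture (real LLMs are transformers trained over subword tokens, with billions of parameters); the corpus and seed below are arbitrary illustrative choices. But even this minimal bigram model shows how a purely statistical generator produces locally fluent text with no sensitivity to truth:

```python
import random
from collections import defaultdict

# Toy "language model": record only which word follows which.
# Nothing here represents cats, mats, or facts about them.
corpus = "the cat sat on the mat . the dog sat on the cat .".split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, n, seed=0):
    """Emit n more words, each chosen only from words that have
    followed the previous word somewhere in the training corpus."""
    random.seed(seed)
    words = [start]
    for _ in range(n):
        words.append(random.choice(following[words[-1]]))
    return " ".join(words)

print(generate("the", 6))
```

Every emitted word really does follow its predecessor somewhere in the corpus, so the output looks plausible at each step, yet no part of the machinery checks whether the resulting sentence is true of anything. Scaling the same idea up buys fluency, not verification.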
All that being said, undergrads' writing is already so dismal and typically confused that it does pose a real problem. I try to circumvent this with my students by having an outline and multiple drafts, focusing on philosophical issues that are clearly beyond GPT. This is a lot of work for me, though.
As far as art goes, I suspect a human artist using AI as a tool for specific tasks is pretty much always going to produce way better work than giving an AI a simple prompt and having it produce a whole work on its own. I saw an example the other day of someone who tried to combine a text-to-image and text-to-story model to write a children's book. Unsurprisingly, the text-to-image model could not make the characters look the same on each page. Unsurprising because it just works off of the prompt for each page and the different sentences will have different correlations. A machine with an overall understanding of a narrative and how the images on each page correspond to characters would require genuine reasoning and concepts, which isn't going to happen any time soon. Image generators can't even do their one simple job right. They consistently give people the wrong number of fingers. This phenomenon is likely the result of the work of millions of different nodes and is not something one can just fix by looking under the hood and tweaking a few things. Imo, the biggest threat of generative AI is deepfakes. But even then, to make truly convincing deepfakes you will likely need an artist to correct the initial image and then another algorithm to make the corrections undetectable.
I immediately thought of Rick Beato, too. He indeed demonstrates on his YouTube channel that all of that ubiquitous, weird-sounding music that’s everywhere nowadays—the stuff that sounds like one horny robot serenading another horny robot—is computer generated, and for this reason is preferred (that is, for good, solid, financial reasons) over the sloppy art made at great expense by inefficient human musicians.
But Hales misses the crucial point Beato makes on this matter (admittedly, it’s in another video, the one where he plays “Stairway to Heaven” for his baffled 8-year-old). Beato points out that the new stuff doesn’t actually need to be very good to be successful. Why? Because “people have no taste.”
Budweiser is lousy beer, McDonald’s is lousy food, television is lousy culture, and so on. Last night we watched a show (“You” on Netflix) that was so bad I decided it must have been written by some sort of computer program—which in a sense it was. All of these things are industrial products that can be accurately described as the “artificial” products of a certain kind of “intelligence”. Schopenhauer fulminated against the pen-and-ink, 19th-century version of AI-generated philosophy when he pointed out that while Hegel may seem profound, it is in fact horseshit.
Strictly speaking, “artificial intelligence”—in the form of charlatanry—has been around for as long as language itself, and AI can (or will continue to) generate pseudo-intellectual “content” that will continue to displace genuine insight for most people. Most people won’t care, and will in fact come to prefer the ersatz version. Why? Because (as Beato points out) they have no taste, because they are incorrigible philistines!
Wow, I originally wasn't going to comment, but given the other comments I feel you need a bit of counterweight. The article is the same breathless hype over AI that has been common for some time and even predates the current period. It is full of hysteria and misinformation that gives the illusion that AI is on some path of constant improvement leading to ever more miraculous behaviour. The truth is that for decades now, every AI project has been empty hype, ending in disappointment and eventually disappearing, only to be replaced by the next hype cycle. Watson was meant to go from playing Jeopardy to diagnosing cancer, and failed miserably.
Instead of boring you with a point-by-point analysis, let me suggest reading Piekniewski's criticism of the current AI hype. For example: https://blog.piekniewski.info/2023/02/07/ai-psychosis/
Right now, one of the biggest problems confronting computing is application security (and what is now called cybersecurity in general). I see no indication that using AI to write software will improve matters here; it is likely, in my view, to create incommensurable problems (since security is really hyperdimensional). Making trade-offs here as a species will be *hard*.
I also think developments will force us increasingly to confront questions long asked by philosophers about our own cognitive architecture. I see no indication that the corporations and software schools behind recent developments (particularly in so-called "data science") are reading, e.g., the Churchlands. Were this done, I would hope it would at least give pause to some on implementing artificial neural networks, for example. In security (see above) we have certainly not come to grips with incomprehensible machines (if the C.'s are right, anyway). And this is so even if we humans are somehow special (if, e.g., computationalism about cognition, broadly speaking, is false).
I have decided to incorporate ChatGPT into my classes, and I explain why (and how) here: https://priorprobability.com/2023/02/16/how-i-learned-to-love-chatgpt/
Here is an excerpt from my essay: "For my part, resistance is futile. This semester alone, for example, I have over 800 enrolled students spread out across five sections in my large business law and ethics survey course. To make matters worse, Big Tech has hundreds of billions of dollars in resources, while I am a mere college professor with a small handful of teaching assistants and a couple of liberal arts degrees. Like the lyrics in the song 'Right Hand Man' from the Hamilton musical: 'we are outgunned, outmanned, outnumbered, and outplanned.'"
An uninformed, even naive query: Can Hubert Dreyfus's work continue to make a contribution to this discussion?
The basic illusion is the illusion of reference: that the bot is actually referring (as you and I do) to chairs, say, when it says "chairs." That illusion – that the bot can refer, is referring – underwrites the further illusion of thoughts and feelings.
I’d appreciate it if someone here could tell me if my basic understanding of “AI” is correct (or not).
It was my understanding that the thing we call “AI” is performing, at the most basic level, simple, essentially “mindless” operations, but that, thanks to the ever-increasing power of modern equipment, it performs so many of them, and so quickly (nearly instantaneously), that quite impressive things can result.
By “mindless” I mean anything that is automatic and can be done without “really” thinking. We “know” that 1+1=2 because we’ve memorized our arithmetic tables; it becomes a rule that is followed automatically, and not following the rule becomes unthinkable.
If everyday “thinking” is a mixture of the automatic and the deliberate, we can divide people into categories of “more intelligent” and “less intelligent” based on whether their thinking is automatic or deliberate. Here I recall Frank Sinatra’s purely automatic, unthinking (and therefore unintelligent) response in the “Manchurian Candidate”: “Raymond Shaw is the kindest, bravest, warmest, most wonderful human being I’ve ever known in my life.”
But how can “AI” be “intelligent” in any way? It’s very impressive, and the engineers who’ve built this are themselves intelligent, but is the AI gizmo itself “intelligent”?
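The picture of many fast, simple, "mindless" operations is broadly accurate. At bottom, a neural network is composed of units that do nothing but arithmetic; here is a minimal sketch of a single artificial neuron (the input and weight values are arbitrary illustrative numbers):

```python
import math

def neuron(inputs, weights, bias):
    """One unit of a neural network: a weighted sum of the inputs,
    squashed into (0, 1) by a logistic function. Every step is
    'mindless' arithmetic with no deliberation anywhere."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

out = neuron([1.0, 0.0, 1.0], [0.5, -0.3, 0.8], -0.2)
print(round(out, 3))  # 0.75
```

Whatever "intelligence" a large model exhibits comes from composing billions of such units and tuning their weights by optimization; no single step involves anything like thinking.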
The modern CNC “machining center” has replaced separate drilling, milling, and turning operations, and industrial robots can accomplish any kind of physical movement, such that it’s possible to imagine an entire automated factory in the near future as a kind of black box, with feedstocks going in one end and a shiny new car driving itself out of the other. That would be impressive, but would it really mean that we’ve passed some kind of metaphysical event horizon? Would that factory really be of a different order than Ford’s original River Rouge plant?
I checked out the AI-generated representational art that Hales references (images.ai) as “spectacular.” It’s certainly visually striking, but, better said, all of what I looked at was creepy, weird, ugly, and obviously computer-generated. Here, AI has measurably increased the amount of visual trash existing in the world, just as (it seems to me) ChatGPT is busy increasing the amount of written trash existing in the world.
Well, I’ll make an exception for the [(lost sock) + (Gettysburg Address)] thing; that was kind of brilliant. But that reinforces my point: this AI-“generated” joke is traceable to the human who thought it would be funny.
Am I missing something?
PS, I represented a client in a government administrative tribunal last year and I can assure you that our government has been investing heavily in this stuff (AI-generated misinformation) for years, in (more or less) open cahoots with the culture industry—with the heirs of Walt Disney himself, in fact. We’re already living in the Matrix!
Here’s what I was referencing:
“At the University of Southern California Institute for Creative Technologies (ICT), leaders in the artificial intelligence, graphics, virtual reality and narrative communities are working to advance immersive techniques and technologies to solve problems facing service members, students and society.
Established in 1999, ICT is a DoD-sponsored University Affiliated Research Center (UARC) working in collaboration with the U.S. Army Research Laboratory. UARCs are aligned with prestigious institutions conducting research at the forefront of science and innovation.
ICT brings film and game industry artists together with computer and social scientists to study and develop immersive media for military training, health therapies, education and more.”
So Terry, just relax, you’ve got nothing to worry about. The Army, Hollywood, and other “prestigious institutions” would never—ever—think of “weaponizing” this technology …