Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.


Fantasies about AI

This seems a good antidote to the silly fantasies (or nightmares) about AI that animate some benighted folks in Oxford, and it comes from a leading AI researcher (who is, alas, a bit of a philosophical muddle on other topics). An excerpt:

When you work so close to A.I., you see a lot of limitations. That’s the problem. From a distance, it looks like, oh, my God! Up close, I see all the flaws. Whenever there’s a lot of patterns, a lot of data, A.I. is very good at processing that — certain things like the game of Go or chess. But humans have this tendency to believe that if A.I. can do something smart like translation or chess, then it must be really good at all the easy stuff too. The truth is, what’s easy for machines can be hard for humans and vice versa. You’d be surprised how A.I. struggles with basic common sense. It’s crazy….

I’m a big fan of GPT-3, but at the same time I feel that some people make it bigger than it is. Some people say that maybe the Turing test has already been passed. I disagree because, yeah, maybe it looks as though it may have been passed based on one best performance of GPT-3. But if you look at the average performance, it’s so far from robust human intelligence. We should look at the average case. Because when you pick one best performance, that’s actually human intelligence doing the hard work of selection. The other thing is, although the advancements are exciting in many ways, there are so many things it cannot do well. But people do make that hasty generalization: Because it can do something sometimes really well, then maybe A.G.I. is around the corner. There’s no reason to believe so.
