Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.

What can philosophy contribute to the effort to make AI helpful in the empirical sciences and mathematics?

Reader Matteo Bianchetti writes:

Thanks very much for your list of the most cited books in the philosophy of empirical sciences. 

Reading that list made me think of the following. Several prominent AI companies are promoting the use of AI to advance the empirical sciences and mathematics. This is one example. These companies collaborate with scientists and mathematicians to build AI tools that are particularly suited to that purpose. I wonder whether your readers have thoughts on what the philosophy of science and the philosophy of mathematics could contribute to such projects (if anything). It would be great to hear specific, well-motivated answers and, possibly, examples. Factors one could consider include the development of an AI model, its application, or its evaluation (benchmarking or otherwise).

An interesting topic; comments are open. Comments will, as usual, be moderated and edited for relevance and helpfulness.

9 responses to “What can philosophy contribute to the effort to make AI helpful in the empirical sciences and mathematics?”

  1. The Minnesota Center for Philosophy of Science presented a symposium on AI and the Nature of Science this fall, including a keynote by Carl Bergstrom. It is available at:
    https://www.youtube.com/channel/UCdJh7yX7H6cCNL0buhHDYLw

    As for my personal response after attending the symposium: I think philosophy could contribute by providing an account of when “black box” procedures are an acceptable part of the scientific process. I am characterizing AI as a “black box” procedure in the sense that you provide inputs and receive an output, but there is limited insight into what happens in between.

    Generally we think of this as a problem, but here is a sort of “toy” counterexample: I do not work out the answer to 7 × 8 when I need to multiply them; I have simply memorized that the answer is 56. Yet I don’t think my lack of insight into my memory processes would invalidate a scientific result that depended on the fact that 7 × 8 = 56. So I think there are going to be cases where AI tools promote scientific goals and cases where AI tools contravene them, and philosophers seem well positioned to provide insight into when the limitations of AI tools will limit the legitimacy of the results that researchers produce when using them. Maybe such an analysis already exists; I am not familiar enough with this subject to know the existing literature.

  2. I’m a mathematician who works in AI. I’m very worried about the impact of AI on the scientific enterprise, broadly speaking. I’m not too worried about the LLMs – they, at least, produce human-interpretable output, even if it’s sometimes hallucinatory. What I worry about more is black-box AI models that model a natural phenomenon more accurately than our theories do, but without any kind of *explanatory* power. Scientists will be willing to *use* such a model, but they’ll never be satisfied with it; for us – and I think for most philosophers – the whole point of the enterprise is to understand. But I worry a lot that the people who *fund* science won’t care – they just want a new widget, and they won’t even see a problem here.

    I started reading analytic philosophy because I’m trying to stockpile ammunition for future arguments about this. (If anyone has any suggestions for good sources, I’d love to hear them.) I’m genuinely very worried about the medium-term future, and if philosophy can make a contribution to this looming fight, it will have at least my gratitude…

    1. I would recommend reading Bas van Fraassen’s “The Scientific Image.” In particular, read what he has to say about why-questions, contrast classes, and relevance relations. What counts as an explanation depends on what question is being asked, and which questions are asked depends upon the various interests of those asking them.

      This matters because AI didn’t just spring into existence ex nihilo. It was and is being developed by groups of great apes who collect, curate, and feed input into these computers. These same collections of human holobiont-umwelt developmental systems also program the AI, test it to see that it is working as they desire, and utilize its output for a variety of different purposes. All of this context is crucial for understanding not only what goes on inside of “black boxes”, but also why “black boxes” are considered “black boxes” in the first place.

      There is the question of what is going on inside of the computer, and then there is the question of why we evolved to ask this question. I’m guessing we won’t be able to answer the former fully until we have begun to make serious progress toward answering the latter.

      1. Thank you, I will add that to the reading list.

        It wouldn’t surprise me. I’m doing a lot of reading on evolutionary biology as well, but that’s in an earlier phase.

    2. justin clarke-doane

      Sorry, my reply below was supposed to be to Mark.

      1. Thank you for the offer, I’d love to take a look! I found your email address on your faculty webpage; I’ll send you an email. 🙂

  3. Charles Anthony Bakker

    I think moral philosophers, political philosophers, and philosophers of law could develop means for holding the developers and users of AI accountable to the general public.

    I think environmental philosophers could develop means for reducing the degree to which the computers used to run AI harm the biosphere.

    And I think philosophers of mind could develop evolutionary theories of organism-umwelt collective intelligence which take anthropology, ecology, and eco-evo-devo biology seriously, and which move beyond the Turing/Shannon/Weaver conception of mind which we inherited through Fodor from the Cognitive Devolution of the 1960s. This would involve calling into question what it even means to talk about “artificial ‘intelligence’”.

    1. justin clarke-doane

      Hi, Mark! Michael Harris and I are writing a book partially devoted to your concerns. I don’t know if reading it would be helpful to you, but your criticism would be helpful to us! Let me know if you’d like us to send you drafts as we go.
