Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.


IHME COVID-19 model from U of Washington isn’t very good

We noted it near the end of March (and see the prescient comment by reader Robert Lee there), but the consensus now seems to be that it's junk:

“It’s not a model that most of us in the infectious disease epidemiology field think is well suited” to projecting Covid-19 deaths, epidemiologist Marc Lipsitch of the Harvard T.H. Chan School of Public Health told reporters this week, referring to projections by the Institute for Health Metrics and Evaluation at the University of Washington.

Other experts, including some colleagues of the model-makers, are even harsher. “That the IHME model keeps changing is evidence of its lack of reliability as a predictive tool,” said epidemiologist Ruth Etzioni of the Fred Hutchinson Cancer Center, who has served on a search committee for IHME. “That it is being used for policy decisions and its results interpreted wrongly is a travesty unfolding before our eyes.”

The article includes a good explanation of traditional infectious disease modeling and contrasts it with what IHME has been doing.

(Thanks to Dr. David Ozonoff for the pointer.)



6 responses to “IHME COVID-19 model from U of Washington isn’t very good”

  1. An IHME rep was just on CNN defending the model as very accurate to the data, while trying to answer the correspondent's challenges that it's not a good model.

  2. HA! "accurate to the data" is a standard statement in modeling. The question rarely asked is "Compared to what?"

    Having said that, the root issue is not the equations or the statistics. It is the assumptions. No one is modeling the sociological issues, namely how compliance varies with growth rate. The closest I have seen is a preprint on arXiv indicating that R0 (a key parameter in infection models) varies by location. The authors suggest, and I agree, that community-level compliance with social distancing is the cause. It also explains why, once the growth rate levels off, the rate of decrease of that rate declines. (This has not been reported yet to my knowledge, but it is showing up in the data for many hot spots.)

    In short, we need sociological models, not biomedical ones.

  3. Well, IHME is a sociological model in that sense. My understanding of it (from a great review I saw on Twitter, so YMMV) is that R0 is basically irrelevant. The assumption of the model is that mortality rates move in a particular curve based on intervention, and so they take the curve to date and then estimate what the end will look like under various assumptions that affect the shape of the curve.

    I suppose it does take R0 into account in a roundabout way, to the extent that deaths are more correlated to actual infection than to measured infection (ignoring differences in hospital systems, capacity, etc.).
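The curve-fitting approach this comment describes can be sketched in a few lines. Everything below is illustrative: the numbers are invented, and the Gaussian-shaped daily-deaths curve is only a stand-in for the error-function family IHME actually fit (assumes Python with numpy and scipy available).

```python
import numpy as np
from scipy.optimize import curve_fit

# IHME-style assumption (as described above): daily deaths trace a
# symmetric bell curve, so the tail can be projected from the rising flank.
def bell(t, height, peak_day, width):
    return height * np.exp(-((t - peak_day) / width) ** 2)

# Hypothetical data: 25 days of the rising flank of such a curve, plus noise.
rng = np.random.default_rng(0)
days = np.arange(25)
deaths = bell(days, 800.0, 35.0, 12.0) + rng.normal(0, 5, days.size)

# Fit the three curve parameters to the observed flank...
params, _ = curve_fit(bell, days, deaths, p0=(500.0, 30.0, 10.0))
height, peak_day, width = params

# ...and extrapolate. The projected decline after peak_day comes from the
# assumed curve shape, not from any mechanism of disease transmission --
# which is the critics' core objection.
projection = bell(np.arange(60), *params)
```

Note that nothing in the fit involves R0, susceptibles, or contacts: the "end" of the epidemic is baked into the functional form.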

  4. I'm not sure this is what you have in mind by a sociological model, but what seems pretty clear to me is that there isn't just one community in any given location, such as NYC, but many, and each may have different degrees of compliance and different values of R0 within themselves. Ultraorthodox Jews engage in very different social practices from others, as do many immigrant communities (who may well often have many people in the same living quarters), as do many service workers, as do the homeless, etc. For some of these communities, the social practices may drive R0 above 1, even if social distancing keeps R0 in the larger population well below 1.0. In such a situation, the average R0 may go down at first, because that average will be dominated by the larger population. But at some stage, it is the R0 of the less compliant which becomes the dominant factor in the average, since most new infections will be among them. The more compliant portion of the population will tamp down the overall numbers, and certainly among themselves.

    But it may be that the only way over the long run to control the overall infection numbers is to let the less compliant portions of the population achieve herd immunity among themselves, which will come at considerable cost to themselves.

    From the standpoint of the larger, fully compliant population, what will be important over this stretch is to keep their R0 below 1. The good news is that doing so may well allow many restrictions to be relaxed. It is quite plausible that among the more compliant, the R0 is already well below 1. Relaxing restrictions dramatically for them may well allow the R0 among them to remain below 1.
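The dynamic described above, in which the average is at first dominated by the large compliant group and later by the small high-R0 group, can be illustrated with a toy two-group SIR model. All parameters below are invented for illustration, and the two groups are assumed (unrealistically) not to mix with each other.

```python
import numpy as np

def two_group_sir(r0_pair, frac=(0.9, 0.1), days=300, i0=1e-4, gamma=0.1):
    """Simulate two non-mixing SIR subpopulations with different R0 values.

    r0_pair: R0 of each group; frac: each group's share of the population.
    Returns each group's daily new infections, weighted by group size.
    """
    results = []
    for f, r0 in zip(frac, r0_pair):
        s, i = 1.0 - i0, i0
        beta = r0 * gamma  # per-day transmission rate implied by R0
        daily_new = []
        for _ in range(days):
            new = beta * s * i           # new infections this day
            s -= new
            i += new - gamma * i         # infections added, recoveries removed
            daily_new.append(new * f)    # weight by population share
        results.append(np.array(daily_new))
    return results

# A large compliant group (R0 = 0.8) and a small non-compliant one (R0 = 2.5).
compliant, noncompliant = two_group_sir((0.8, 2.5))
```

On day 0 the big compliant group contributes most new infections, so the population-wide average looks favorable; by day 100 the small high-R0 group dominates, exactly the crossover the comment describes.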

  5. One community which is playing a major role in new infections is that of residents and workers at institutions for the elderly. Certainly among them R0 is well above 1.

    How many of these institutions have escaped infection? Will they all be infected at some point or another?

    They may get to herd immunity, but at a huge price.

  6. My apologies for not being clear. R0 is supposed to be a function of the infecting agent, not of the community or the time. The fact that it needs to be adjusted indicates that the model is lacking. That lack, in my opinion, is the human element.

    For example, the Imperial College model had a fixed compliance of 50% across all policies, locales, infection rates, and death tolls. Who would argue that that is anywhere near a good representation of human behavior?
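The alternative the commenter is gesturing at, compliance that responds to conditions rather than sitting fixed at 50%, can be sketched by letting the contact rate in a toy SIR model fall as the visible death toll rises. Every parameter below, including the sensitivity k, is invented purely for illustration.

```python
import numpy as np

def sir_with_feedback(days=400, beta0=0.3, gamma=0.1, ifr=0.01, k=2000.0):
    """SIR model where the contact rate falls as today's death toll rises.

    ifr is an assumed infection fatality rate; k is a made-up sensitivity:
    the larger it is, the more sharply people pull back as deaths climb.
    """
    s, i = 1.0 - 1e-4, 1e-4
    trajectory = []
    for _ in range(days):
        deaths_today = ifr * gamma * i            # per-capita deaths today
        beta = beta0 / (1.0 + k * deaths_today)   # behavior responds to deaths
        new = beta * s * i
        s -= new
        i += new - gamma * i
        trajectory.append(i)
    return np.array(trajectory)

infected = sir_with_feedback()
```

Qualitatively, the feedback blunts the peak, and as infections fall the contact rate drifts back up, slowing the post-peak decline, the plateau pattern an earlier comment says is showing up in the data.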

