Iraq Body Count, the organization that has been publishing gross underestimates of the number of civilian dead in Iraq since 2003, is pissed at the Lancet for giving a more accurate figure. IBC has written a lengthy apologia to that effect, which boils down to pointing to another survey that gives a lower figure.
IBC’s apologia talks about the Iraqi Living Conditions Study:
[Link] Discussion of ILCS (Iraq Living Conditions Survey, also known as IMIRA [Iraq Multiple Indicator Rapid Assessment] or the UNDP study, published May, 2005), has been minimal among IBC’s critics and generally falls into two contradictory camps.
The first camp asserts (wrongly – see section 3.6.2) that ILCS perfectly corroborates Lancet, and ends discussion of ILCS there.
The ILCS is a perfectly good study, if you want to learn how many Iraqis own washing machines or what the Iraqi wage gap is (working women make twice as much as working men per hour; however, women are only 16% of the labor force). What it does not give is death rates after the war, except for infant and maternal mortality.
IBC’s defense of the ILCS is based on one small tidbit from the analytical report’s page 54 (55 in the PDF file), which states,
The number of deaths of civilians and military personnel in Iraq in the aftermath of the 2003 invasion is another set of figures that have raised controversy. The ILCS data indicates 24,000 deaths, with a 95 percent confidence interval from 18,000 to 29,000 deaths.
The ILCS data has been derived from a question posed to households concerning missing and dead persons during the two years prior to the survey. Although the date was not asked for, it is reasonable to suppose that the vast majority of deaths due to warfare occurred after the beginning of 2003.
The ballpark figure of 24,000 is a good reason to doubt the relevance of the data to Lancet. The Lancet study estimates a total mortality rate, which in principle is cause-blind. When asked about war deaths, people may not respond affirmatively if the death was not obviously related to the war.
Obviously, if your mother was killed in Shock and Awe, you’ll almost certainly consider her a war death. But what if she was killed by American troops frustrated at not finding any real insurgents? What if she died in an insurgent bombing? The phrasing of the question is skewed toward war deaths as opposed to occupation deaths. At most, it refutes Dana’s assertion that the Lancet numbers are too high because by a certain metric they’re higher than a certain WW2 death toll.
The ILCS doesn’t have any figure for the total death rate, so it’s really incomparable with the Lancet study. But even if it were comparable, IBC’s claim that the ILCS must take precedence betrays ignorance of statistical testing. The meaning of “the 95% confidence interval of the 650,000 figure is 400,000–900,000” is that any true death toll between 400,000 and 900,000 is statistically consistent with the Lancet data, while any figure outside that range is rejected at the 95% level.
In other words, the Lancet study may have a large margin of error, but its figure is high enough that it doesn’t matter. By the above formulation, if some group releases a study that says 400,000 Iraqis died, that estimate falls within the Lancet’s margin of error, so we can’t conclude the group is wrong. But if a group claims that 40,000 Iraqis died, that figure lies well outside the Lancet’s margin, so the Lancet contradicts it.
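The consistency check described above can be sketched in a few lines of Python. The 650,000 figure and its 400,000–900,000 interval come from the text; the function name and the test values are mine, purely for illustration:

```python
# Lancet 2006 estimate and its 95% confidence interval (figures from the text).
LANCET_ESTIMATE = 650_000
CI_LOW, CI_HIGH = 400_000, 900_000

def consistent_with_lancet(rival_estimate):
    """A rival point estimate is consistent with the Lancet study
    only if it falls inside the Lancet interval."""
    return CI_LOW <= rival_estimate <= CI_HIGH

print(consistent_with_lancet(400_000))  # True: within the margin of error
print(consistent_with_lancet(40_000))   # False: well outside it
```

This is why the large margin of error cuts against IBC rather than for it: even the very bottom of the Lancet interval is an order of magnitude above a 40,000-range count.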
When we have two contradictory studies, we can’t simply assume that the one with the larger sample size is right; that reasoning only applies when the discrepancy lies within the margin of error. When the discrepancy is this big, we need to investigate the methodologies and see who’s making a mistake; it’s possible neither side is, but the probability of that is vanishingly small. In this case, we have a study of Iraqi death rates that uses a standard epidemiological methodology, versus a compilation of media reports that not only neglects deaths not reported to the authorities but also neglects deaths not mentioned in the media.
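One standard way to quantify how badly two survey estimates disagree is a two-sample z-test on their implied standard errors. This is a sketch under my own assumptions (symmetric intervals, independent samples, function names mine), not anything either study published, and it deliberately ignores the caveat above that the two figures measure different things:

```python
import math

def implied_se(ci_low, ci_high, z95=1.96):
    # Back out a standard error from a symmetric 95% interval.
    return (ci_high - ci_low) / (2 * z95)

def discrepancy_z(est_a, se_a, est_b, se_b):
    # z-score of the difference between two independent estimates.
    return abs(est_a - est_b) / math.sqrt(se_a**2 + se_b**2)

# Lancet: 650,000 (CI 400,000-900,000); ILCS war deaths: 24,000 (CI 18,000-29,000).
se_lancet = implied_se(400_000, 900_000)
se_ilcs = implied_se(18_000, 29_000)
z = discrepancy_z(650_000, se_lancet, 24_000, se_ilcs)
print(round(z, 1))  # roughly 4.9, far beyond the 1.96 threshold for 95% confidence
```

Even taken naively at face value, the gap is about five standard errors wide, which is the statistical sense in which the two figures genuinely contradict each other and someone’s methodology has to be at fault.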