Separating Spin from Science
By Rachel Walden — December 24, 2007
This week, you may have seen media coverage of a study on abortion and future pregnancy outcomes, with headlines such as, “Study Links Abortion and Preemies.”
Published in the Journal of Epidemiology and Community Health, the study reports that the prevalence of low birth weight, term low birth weight, and premature birth was higher among black women, women under 20 or over 40 years of age, and less-educated and unmarried women, among other groups. The authors also reported that the prevalence of all of these outcomes increased with an increasing number of previous abortions. Predictably, some anti-choice outlets jumped on this news.
The problem? The study did not distinguish between spontaneous abortion (miscarriage) and induced abortion. It was also conducted via a survey of women between 1959 and 1966, when abortion was still illegal in America, making it difficult both to assess the possible health effects of black-market procedures and to gauge the extent to which women were honest about what was then a criminal act.
The coverage also ignores how women's health status and access to healthcare in the study population may differ from those of women today. In fact, anyone reading the full text of the article would find these limitations quite clearly spelled out in the Discussion section. Ultimately, then, this single study cannot serve as a definitive statement on today's risks of induced abortion.
However, this isn’t a problem on only one side of reproductive health issues. Following a study published in the Journal of Adolescent Health concluding that formal sex education delays teen initiation of sex, there was no shortage of commentary suggesting that this proves the failure of abstinence-only sex ed. Regardless of the other evidence on comprehensive vs. abstinence-only sex education, this study does not support a conclusion on that question, because it simply can’t.
The researchers looked at any formal sex education conducted by a school, church, or community organization, but did not separate the programs by the type of content delivered. As in the earlier study, the authors make this quite clear in their Discussion section, stating, “No conclusions about type of sex education (i.e., comprehensive sex education vs. focus on abstinence-only) can be drawn from this analysis.”
The point? When weighing the evidence on a topic, it’s important that the evidence be clearly understood and accurately communicated.
We can debate issues such as whether the CDC’s involvement in the sex education study played a role in the study’s design decision to lump comprehensive and abstinence-only programs together. However, when looking to research to support our policies and agendas, it’s worth making sure we understand that research and whether it truly supports our positions.