In our court system, judges face the task of deciding when scientific evidence is reliable enough to present to the jury. Since the 1920s, many courts have applied the Frye standard, which requires that a proffered scientific study's methodology be "generally accepted" in the scientific community. Since 1993, most jurisdictions have instead followed the Daubert standard, which drops Frye's general-acceptance requirement while still considering whether the research has been subjected to peer review and publication. Since not every judge has a science background, it is understandable if some are intimidated by the task of evaluating the reliability of a scientific study or publication.
Today marks the end of 2013's Open Access Week, a yearly event for academics to tout the benefits of Open Access. Its participants advocate for free, immediate, online access to scholarly research. In the Internet age, the barrier to publishing content is low (notably, WordPress alone hosts nearly 72 million sites). While Open Access has advantages that justify this enthusiasm among researchers, it also has a disadvantage: it leaves judges few markers by which to gauge a study's level of acceptance and the depth of review it received from other qualified scientists.
Under Open Access, judges have little information with which to assess accuracy, and even a Science magazine contributor was surprised by how little scrutiny open-access journals gave his submission. John Bohannon wrote a paper about a fictitious cancer experiment, describing a test methodology and results seeded with intentional red-flag flaws, and more than half of the 300 open-access journals he submitted it to accepted it.
Beyond simply checking a box for peer review over Open Access, another tempting approach to distinguishing sources of good science from bad might be to rely on the impact factor of the publication. Impact factor is a numeric metric, and therefore allows ranking scientific journals by impact (i.e., how frequently their articles are cited by other publications). Unfortunately, impact factor and the resulting journal rankings face criticism for the unintended consequence of skewing researchers' submission strategies.
As scientific publishing continues to change, so will the indicators of which journals provide good, reliable studies. Which journals and sources those are will not be evident to the scientists publishing in them, much less to the judges reading them.