The quest to identify “the good journals” continues

In our court system, judges face the task of deciding when scientific evidence is reliable enough to present to the jury. Since the 1920s, many courts have used the Frye standard, which requires that the proffered scientific study’s methodology be “generally accepted” in the scientific community. Since 1993, most jurisdictions have followed the Daubert standard, which drops Frye’s general-acceptance requirement but still considers whether the research has been subjected to peer review and publication. Since not every judge has a science background, it is understandable if some are intimidated by the task of evaluating the reliability of a scientific study or publication.

Today marks the end of 2013’s Open Access Week, a yearly event for academics to tout the benefits of Open Access. Its participants advocate for free, immediate, online access to scholarly research. In the Internet age, the barrier to publishing content is low (notably, WordPress alone hosts nearly 72 million sites). While Open Access has advantages that justify this enthusiasm among researchers, it also has the disadvantage of leaving judges few markers by which to evaluate a study’s level of acceptance and the depth of its review by other qualified scientists.

Under Open Access, not only do judges have little information by which to gauge accuracy; even a Science magazine contributor was surprised by how little scrutiny open-access journals gave his paper. John Bohannon submitted a paper about a fictitious cancer experiment, describing test methodology and results that included intentional red-flag flaws, and it was accepted by more than half of the roughly 300 open-access journals he submitted it to.

Beyond checking a box for peer review versus Open Access, another tempting approach to distinguishing sources of good science from bad might be to rely on the impact factor of the publication. Impact factor is a metric that reduces a journal to a single number (roughly, how frequently its recent articles are cited by other publications) and therefore allows scientific journals to be ranked. Unfortunately, impact factor and the resulting journal rankings face criticism for their unintended consequences, such as skewing researchers’ submission strategies.
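
For concreteness, the widely used two-year impact factor is just a ratio of citations to citable items. Here is a minimal sketch of the arithmetic in Python; the function name and the figures in the example are illustrative, not drawn from any real citation database.

```python
def two_year_impact_factor(citations_to_recent_items, citable_items_published):
    """Two-year impact factor for year Y.

    citations_to_recent_items: citations received in year Y to items
        the journal published in years Y-1 and Y-2.
    citable_items_published: number of citable items (articles, reviews)
        the journal published in years Y-1 and Y-2.
    """
    return citations_to_recent_items / citable_items_published

# Hypothetical journal: 1,200 citations in 2013 to the 400 citable
# items it published in 2011 and 2012 -> impact factor of 3.0.
print(two_year_impact_factor(1200, 400))  # 3.0
```

Nothing in that ratio measures the quality of any individual article, which is part of why leaning on it to validate a single study is problematic.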

As scientific publishing continues to change, so will the indicators of which journals provide good, reliable studies. Which journals and sources those are won’t always be evident to the scientists publishing in them, much less to the judges reading them.

Computer scientists need a new lexicographer

The federal court for the District of Idaho recently caused a stir on the Internet by issuing a Memorandum Decision and Order (Battelle Energy Alliance v. Southfork Security) influenced by a one-sided view of what it means to be a “hacker.” In one sample reaction, security firm Digital Bond’s blog post summarized the decision by saying that the court ‘ruled that an ICS product developer’s computer could be seized without him being notified or even heard from in court primarily because he states on his web site “we like hacking things and don’t want to stop”.’

The court appears to have relied on a commonly understood meaning of hacking: the act of gaining access to computer resources without authorization. The New Hacker’s Dictionary has many definitions of “hack,” but a simple and benign characterization is “an appropriate application of ingenuity,” which makes no mention of unauthorized access.

It takes a longer-form summary to explain why use of the term “hacker” was so pivotal in the Battelle case. The court ruled on the plaintiff’s request for a temporary restraining order that would disable the defendant’s website and preserve a copy of its data as evidence in the pending copyright infringement action between the two parties. The court determined that Battelle was entitled to the temporary restraining order before the term “hacker” ever came up. The court did reference the term when deciding to issue the order without notice and to allow copying of the defendant’s hard drive. Much of that part of the decision was influenced by one line on the defendant’s website:

We like hacking things and we don’t want to stop.

The use of the term “hacking” here is unfortunate, because it invited the judge to rely on that commonly understood meaning: gaining unauthorized access to computer resources. The opinion does not delve into the definition of “hacker,” and that failure may have allowed the court to be misled. The opinion twice cites sources that catalog undesirable actions “hackers” take, without questioning whether those conclusions apply to each and every “hacker,” especially those who self-identify using the term in a different, benign sense.

Among computer scientists, hacking can be a good thing. Computer scientists may be their own lexicographers, but what they need is good public relations.

Do we need a more scientific basis for our drug policies?

The closing paragraph of a recent New York Times piece on rational choices made by drug addicts raises the question of whether scientists have furnished policy-makers with the broad range of information they need to make good drug policy. The piece reports on studies by Dr. Carl Hart, an associate professor of psychology at Columbia University, in which Hart’s test subjects, all drug addicts, made rational choices between a delayed reward and an immediate drug fix.

Dr. Hart did not appear surprised by the results. The Times quotes him saying that scientists have “played a less than honorable role in the war on drugs.” His concern is that “eighty to 90 percent of people are not negatively affected by drugs, but in the scientific literature nearly 100 percent of the reports are negative.” The “less than honorable role” at issue is that scientists have an economic incentive to keep telling Congress about a “terrible problem,” so that Congress will continue to fund programs to study and solve it.

The point Dr. Hart makes about gaming Congress bears examination. Congress’s spending power is not subject to strict scrutiny; Congress would not need studies proving the likelihood of addiction so long as there is a rational basis for the policies it enacts. Here, Hart’s own estimate that ten to 20 percent of people who use crack and methamphetamine will become addicts, combined with the negative behavior associated with addiction, points to a significant population capable of the harms catalogued in the negative reports. Even without expecting 100 percent of drug users to wreak havoc on our civilization, limited exposure to those who do exhibit the reported behaviors should justify enacting policies that study and help minimize the impact of drug addiction.