Efficiency: Sometimes More is Less

Listeners of the Radiolab podcast might recall an episode earlier this year about how the stock market works.  For those who do not listen, Vice’s Motherboard blog  (URL below) provides a handy catch-up.

The advent of High Frequency Trading (HFT) has the potential for extreme market disruption, as the world witnessed on May 6, 2010.  A mutual fund firm out of Kansas placed a large sell order for a particular security, and HFTs–which can conduct billions of transactions in a single day–precipitated a “Flash Crash” wherein the Dow dropped by almost 1,000 points, the largest intraday point drop in the index’s history to that date.  The HFT algorithms, which were unable to distinguish between an impending crash and a large, one-off order, began automatically selling and buying back securities at volumes and prices that had no real basis in the market.

 

Some European countries have responded to the advent of High Frequency Trading by introducing the one thing that is sure to cause “friction” in an otherwise “efficient” process–a tax.  Both the French and Italian governments have passed laws that tax acquisitions and dispositions of covered securities in an attempt to blunt the effect of high frequency trading, though it is questionable whether such a thing would ever happen in the United States.  The idea is basically that instead of conducting millions of trades to exploit price differences of as little as $0.0001, traders will balk at the additional tax liability and record-keeping burden that attaches to each exchange.  Even Americans who trade in American Depositary Receipts (essentially securities that represent shares traded in French or Italian markets) will pay taxes to these foreign governments in the form of fees passed on to them by the in-country subcustodians.
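
To see why even a tiny levy bites, consider a back-of-the-envelope sketch.  The share price, volume, and 0.1% rate below are illustrative assumptions, not the actual statutory terms of the French or Italian taxes:

```python
# Back-of-the-envelope sketch: every figure here is an assumption for illustration,
# not an actual term of the French or Italian financial transaction taxes.
shares = 1_000_000        # shares bought and almost immediately resold
price = 20.00             # assumed price per share
edge = 0.0001             # price improvement captured per share ($0.0001)
tax_rate = 0.001          # assumed 0.1% tax on the value of the acquisition

gross_profit = shares * edge              # $100 of gross arbitrage profit
tax_owed = shares * price * tax_rate      # $20,000 of tax on the purchase alone

print(f"gross profit: ${gross_profit:,.2f}")
print(f"tax owed:     ${tax_owed:,.2f}")
print(f"net:          ${gross_profit - tax_owed:,.2f}")
```

On those assumed numbers, the strategy is wiped out many times over by the tax, which is exactly the kind of “friction” the levies are designed to introduce.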

 

Given Americans’ typical aversion to new taxes, what are our options?  The SEC has partially responded by introducing rules against fraudulent trading, which provide after-the-fact enforcement of rules designed to prohibit such practices, but the additional time and expense of investigations and prosecutions makes this a slower and arguably less effective way of deterring abusive high frequency trading.

 

What do you think are the best ways to approach the problem?  SEC rule-making? Levying a transaction tax?  Or is it even a problem, given that some proponents believe that high frequency trading provides liquidity to the market?

 

 

Motherboard Article (includes Radiolab podcast):

http://motherboard.vice.com/blog/what-high-frequency-trading-looks-and-sounds-like

 

SEC Findings Regarding the Flash Crash:

http://www.sec.gov/news/studies/2010/marketevents-report.pdf

 

Information on the Italian Transaction Tax:

http://www.nortonrosefulbright.com/knowledge/publications/74950/italian-financial-transactions-tax-italy-imposes-tobin-tax-on-financial-transactions

High Frequency Trading:

http://www.scienceworldreport.com/articles/11053/20131120/supercomputers-vs-high-frequency-trading-analysis-leads-to-new-regulatory-law.htm

SEC: Skynet…E-something…Cyberdyne?

An early appearance of a Terminator-style machine in American court history is the familiar case of Katko v. Briney.  As you may recall from first-year Torts, this is the case where an enterprising couple from Iowa set up a shotgun trap to deter, maim, or kill trespassing thieves.  When a would-be burglar triggered the trap, which was deliberately aimed “to hit the legs,” most of his leg was blown off.  He sued for damages and eventually won at the Iowa Supreme Court.

Thirteen years later, the first movie in the Terminator series was released.  The important similarity is not that John Connor also instructed his shotgun-wielding death machine to aim for the legs, but a larger plot point: in the series, a worldwide nuclear apocalypse (“Judgment Day”) begins when an automated defense system, Skynet, interprets an attempt to take it offline as an attack and launches the American nuclear arsenal at Soviet Russia.  The Soviets respond in kind and pretty much everyone gets vaporized.  Obviously, some retconning was in order for later sequels, as the USSR no longer existed in 1997, the year of the apocalypse.

Some actors in the securities markets apparently have heeded neither the legal precedent of Katko nor the fictional cautionary tale of the Terminator, and have invested several billion dollars in high frequency trading supercomputers that buy and sell securities at a staggering pace, executing billions of transactions in a single day.

One result of this trend apparently has been the “Flash Crash” of 2010, wherein the Dow plunged by almost 1,000 points within a single day.  Some investigations are still ongoing, but the SEC’s reporting so far indicates that a single, massive sale of E-Mini contracts by a Kansas-based mutual fund sent high frequency trading algorithms (HFTAs) into a tizzy, selling off enormous amounts of shares and driving prices down rapidly.  By the end of the day, buyers had snapped up the egregiously undervalued shares and the market closed just 3% down.  A big sale by a mutual fund company isn’t quite the same as a nuclear attack, but the HFTAs–like Skynet–hadn’t been programmed to distinguish between a large transaction and an impending liquidity crisis.

If you were trying to liquidate part of your portfolio to, say, make a down payment on a house or pay a tuition bill, and your order happened to be executed during the wrong minutes of that day, you might have been pretty disappointed—like the man whose investment wasn’t so much liquidated as vaporized when he lost $17,000 selling a blue chip stock.

The SEC’s stated mission is “to protect investors, maintain fair, orderly, and efficient markets, and facilitate capital formation.”  The modern notion of market efficiency, an economic model that relies heavily on frictionless transactions, ironically emerged in the 1960s even as capital markets were losing billions of dollars in what is known as the “Paper Crunch.”  Securities transactions were so inefficient (all being done via delivery of physical certificates) that the backlog caused trades to linger before settlement for days at a time.  Versions of the Efficient Market Hypothesis have been enshrined in the fraud-on-the-market doctrine, which was successfully invoked at the Supreme Court as recently as June 2011 in the Erica P. John Fund v. Halliburton case.

But efficient market theories often depend on asset pricing models, such as the Capital Asset Pricing Model (CAPM), that assume a risk-averse, buy-and-hold investment outlook.  Rates of return, or “future cash flows,” feature heavily in the models, but in today’s market most trades are executed by computer programs that don’t plan on collecting any dividends.
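
For reference, the CAPM relationship those models build on can be stated in a couple of lines.  The function name and inputs below are made up for illustration:

```python
def capm_expected_return(risk_free, beta, expected_market_return):
    """CAPM: E[R_i] = R_f + beta_i * (E[R_m] - R_f)."""
    return risk_free + beta * (expected_market_return - risk_free)

# Hypothetical inputs: a 2% risk-free rate, a market expected to return 8%,
# and a stock somewhat more volatile than the market (beta = 1.2).
print(capm_expected_return(0.02, 1.2, 0.08))   # 0.092, i.e., a 9.2% expected annual return
```

The output is an expected annual return for an investor who holds the asset and collects its cash flows, precisely the horizon a sub-second trading algorithm never has.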

 

On the Flash Crash of 2010:

http://spectrum.ieee.org/riskfactor/computing/it/flash-crash-cause-found

http://online.wsj.com/news/articles/SB10001424052748704029304575526390131916792

 

On Asset Pricing Models:

http://en.wikipedia.org/wiki/Capital_asset_pricing_model

http://www.nber.org/papers/w8059

SEC explains what it does: 

http://www.sec.gov/about/whatwedo.shtml

 

Probable Paws

As an enthusiastic dog-owner (for two weeks now), I enjoyed reading “Inside of a Dog: What Dogs See, Smell, and Know” by cognitive scientist Alexandra Horowitz–I’m a sucker for airport bookshops.  The author describes how humans have six million olfactory receptor cells while dogs have not only many more–as many as three hundred million–but also a much more robust neurological apparatus for detecting and interpreting the signals those cells send on to the brain.

 So dogs smell better than we do, which is not news.  Humans, though, have a much better grasp of the Fourth Amendment’s protection against unreasonable search and seizure.  In 2013, the Supreme Court made some important decisions about the use of trained police dogs in establishing probable cause.

Comparing the use of sniffer dogs to the warrantless use of thermal imaging, Justice Scalia wrote an opinion in Florida v. Jardines holding that police officers may not bring a drug-sniffing dog to your doorstep and use the canine’s response to establish probable cause for a warrant to search the home.

 Article about 2013 Supreme Court Opinions Regarding Police Dogs: http://www.reuters.com/article/2013/03/26/us-usa-court-dog-sniffs-idUSBRE92P0NE20130326

Insufferably Enthusiastic Sharing of Pictures and Videos of My New Dog: Upon request.  (During the first week, a request was unnecessary.)

Twenty tips for interpreting scientific claims

Policy: Twenty tips for interpreting scientific claims

This post isn’t about anything scientists discovered or some new emerging tech.  Instead, I found an article by William J. Sutherland, David Spiegelhalter, and Mark Burgman about the process of integrating science into politics.

I thought this article was very interesting and relevant, since our class discussions have all centered on legal interpretation of scientific findings.  The article offers twenty tips that laypersons should keep in mind when reviewing scientific data.  It isn’t a textbook manual of complicated theories and equations; instead, it’s a helpful guide on how scientists, politicians, and lawmakers should interpret scientific data, clearly written for non-scientists.

The 20 tips (each is discussed in greater detail in the article):

  • Differences and chance cause variation.
  • No measurement is exact.
  • Bias is rife.
  • Bigger is usually better for sample size.
  • Correlation does not imply causation.
  • Regression to the mean can mislead.
  • Extrapolating beyond the data is risky.
  • Beware of the base-rate fallacy.
  • Controls are important.
  • Randomization avoids bias.
  • Seek replication, not pseudoreplication.
  • Scientists are human.
  • Significance is significant.
  • Separate no effect from non-significance.
  • Effect size matters.
  • Study relevance limits generalizations.
  • Feelings influence risk perception.
  • Dependencies change the risks.
  • Data can be dredged or cherry picked.
  • Extreme measurements may mislead.

The two that I found most interesting were “correlation does not imply causation” and “extrapolating beyond the data is risky.”  These two rules are clearly related and in essence say that jumping to conclusions is dangerous.  Too often, people glance over scientific findings and leap to their own conclusions, which is itself a form of bias.  But if lawmakers and politicians were briefed on these twenty tips before making decisions based on scientific data, I can imagine a much more efficient and informed machine operating and moving the country toward positive change.
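
A toy simulation makes the correlation/causation point concrete.  The two series below are hypothetical and share nothing but an upward trend, yet their raw correlation looks impressive:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2020)

# Two made-up series that both happen to trend upward over time.
ice_cream_sales = 50 + 2.0 * (years - 2000) + rng.normal(0, 3, years.size)
smartphone_owners = 5 + 4.5 * (years - 2000) + rng.normal(0, 5, years.size)

# The raw series are strongly correlated (typically r > 0.9)...
print(np.corrcoef(ice_cream_sales, smartphone_owners)[0, 1])

# ...but the year-to-year changes, with the shared trend removed, are not.
print(np.corrcoef(np.diff(ice_cream_sales), np.diff(smartphone_owners))[0, 1])
```

Neither series causes the other; they simply both grow over time, which is enough to produce a near-perfect correlation in the raw data.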

While the article specifically addresses the interpretation of scientific data in political forums, I think these tips could be refined into principles taught in schools as well.  Like the scientific method, they would improve people’s understanding of information in the media by minimizing bias, or at the very least by teaching people to recognize that bias exists.

How interesting would it be if children were taught to understand that their opinions and beliefs are a result of conditioning, and, as teenagers, learned to form opinions from an almost neutral point of view?

Neuroscience reliability questions

A recent study casts doubt on the overall reliability of neuroscience, raising the question of whether evidence produced by the field can overcome potential Daubert challenges. The study, published in the journal Nature Reviews Neuroscience, asserts that the small sample sizes used in most academic neuroscience studies result in conclusions that lack credibility.

Sample size has a direct effect on the statistical power of a study. When the sample size is large, both subtle and major effects are discernible in the collected data. When the sample size is small, only larger effects will show up with any reliability, and smaller effects may be missed entirely. False positives may be recorded, or, worse, the size of a genuine effect may be exaggerated.

A statistical power of 80 percent is the desired goal in most studies: at that level, if the sample size is adequate and the effect is genuine, the study will detect it 80 percent of the time. The paper in Nature Reviews Neuroscience reviewed 49 meta-analyses (studies of other studies) and concluded that, across the 730 individual studies they covered, the median statistical power was below 20 percent. Human neuroimaging studies within the sample reached a median statistical power of only 8 percent.
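
To put those percentages in perspective, here is a rough sketch of how power scales with sample size for a simple two-group comparison, using a normal approximation (the effect size and group sizes are illustrative assumptions, not figures taken from the paper):

```python
from math import sqrt
from scipy.stats import norm

def approx_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sample, two-sided test via the normal approximation.

    effect_size is Cohen's d; an exact calculation would use the noncentral t distribution.
    """
    z_crit = norm.ppf(1 - alpha / 2)                    # two-sided critical value
    noncentrality = effect_size * sqrt(n_per_group / 2)
    return 1 - norm.cdf(z_crit - noncentrality)

# A "medium" effect (d = 0.5) with 16 subjects per group -- a size typical of many
# small neuroscience studies -- yields power of roughly 0.29.
print(round(approx_power(0.5, 16), 2))

# Hitting the conventional 80 percent target for the same effect takes on the
# order of 64 subjects per group.
print(round(approx_power(0.5, 64), 2))
```

In other words, a study powered at 20 percent or below will miss a genuine effect most of the time, and the effects it does detect will tend to be the exaggerated ones.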

Further, the small studies in the field are often not blinded, and their results often have not been reproduced.

While the results of the paper have caused angst in the academic field of neuroscience, if its conclusions are correct there are major legal implications as well. The Daubert standard and its relevant factors call into question any conclusions reached by a study with such low statistical power. It appears that much of the testing is done on so small a scale that the results have not been subjected to meaningful peer review, and widespread acceptance within the scientific community has not been attained.

Neuroscience remains an area of tremendous interest and potential in both the scientific and legal communities, yet the statistics discussed in Nature Reviews Neuroscience suggest we may be far from tangible legal uses of the field.

http://www.nature.com/nrn/journal/v14/n5/abs/nrn3475.html

http://www.wired.com/wiredscience/2013/04/brain-stats/

http://www.theguardian.com/science/sifting-the-evidence/2013/apr/10/unreliable-neuroscience-power-matters