Whisper in the Clouds


The future is here, and we are finally living in the clouds, albeit not quite in the way the Jetsons imagined.  While our cities remain firmly rooted on Earth, our personal lives (or at least their digital representations) have migrated to the Cloud.  This planar transcendence has brought incredible conveniences, such as the ability to access all of your email, documents, photos, and music virtually anywhere and at any time.  It has also created awkward fits with well-worn legal doctrines like attorney-client privilege.  With the steady erosion of privacy, both in expectation and in fact, cloud computing has made communication more seamless, but it has also made it easier to inadvertently waive the attorney-client privilege.

For attorney-client privilege to hold, a communication must be held in strict confidence.  In general, if a communication is exposed to a third party, the privilege is waived.  The circumstances under which the privilege may be waived vary by jurisdiction.  For example, one scholar [http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1557033] has outlined three general categories into which most jurisdictions fall:

a. The ‘never waived’ approach, which is that a disclosure that is merely negligent can never effect a waiver;

b. The ‘strict accountability’ rule, which is that disclosure automatically effects a waiver regardless of the intent or inadvertence of the privilege holder; and

c. The ‘middle test’ in which waiver is decided by consideration of (1) the reasonableness of the precautions taken to prevent inadvertent disclosure, (2) the amount of time it took the producing party to recognize its error, (3) the scope of the production, (4) the extent of the inadvertent disclosure, and (5) the overriding interest of fairness and justice.

For “strict accountability” jurisdictions, some cloud-based services present a risk of waiver, however minuscule, from unintended disclosure to a third-party service provider or its agents.  For example, Google’s popular email service, Gmail, offers users free webmail in exchange for targeted advertisements.  Google creates these targeted advertisements by automatically scanning the content of user email (regardless of whether the sender is a Gmail user [http://www.nytimes.com/2013/10/02/technology/google-accused-of-wiretapping-in-gmail-scans.html]) and displaying third-party ads that match keywords appearing in the body of the email.
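
To make that mechanism concrete, here is a toy sketch of keyword-based ad matching along the lines described above.  The keyword-to-ad table, the matching rule, and the sample email are invented for illustration; this is not Google’s actual ad-targeting system.

```python
# A toy sketch of keyword-based ad matching: scan an email body for keywords
# and return any ads keyed to them.  The ad inventory and matching rule are
# invented for illustration; this is not Google's actual system.
import re

AD_INVENTORY = {
    "mortgage": "Refinance today with LowRate Lending",
    "vacation": "Discount flights to sunny destinations",
    "divorce": "Local family-law attorneys, free consultation",
}

def match_ads(email_body: str) -> list[str]:
    """Return ads whose keywords appear in the email body."""
    words = set(re.findall(r"[a-z]+", email_body.lower()))
    return [ad for keyword, ad in AD_INVENTORY.items() if keyword in words]

print(match_ads("We should discuss the mortgage dispute before your vacation."))
# ['Refinance today with LowRate Lending', 'Discount flights to sunny destinations']
```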

Arguably, if an attorney provides legal advice to a client over email and the client uses Gmail, the communication is not strictly confidential because its contents are exposed to a third party (Google).  In some ways, it is similar to an attorney sending her client a letter through a courier service that carries the letter for free, so long as the courier is permitted to read the letter before delivering it and then pitch the client various personal services based on the letter’s contents.  Of course, Gmail is distinguishable from such a silly courier service because Gmail’s process is automated and is not susceptible to human frailties like gossip.

The New York State Bar seems to agree.  In a 2008 opinion letter [http://ftp.documation.com/references/ABA10a/PDfs/3_13.pdf], the Bar wrote: “Merely scanning the content of e-mails by computer to generate computer advertising, however, does not pose a threat to client confidentiality, because the practice does not increase the risk of others obtaining knowledge of the e-mails or access to the e-mails content.”

However, automated scanning is not the sole cloud-based threat to confidentiality.  Attorneys should be mindful of what service providers have promised to do, or not to do, with user data.  The New York Bar offered no assurance of confidentiality where “the lawyer learns information suggesting that the provider is materially departing from conventional privacy policies or is using the information it obtains by computer-scanning of e-mails for a purpose that, unlike computer-generated advertising, puts confidentiality at risk . . . .”

Many service providers reserve expansive rights in their terms of service to access user data for the vague and undefined purpose of improving their services.  Here are two excerpts from the terms governing popular Microsoft and Google services:

Windows Live (Email) [http://windows.microsoft.com/en-us/windows-live/microsoft-services-agreement]

3.3. What does Microsoft do with my content? When you upload your content to the services, you agree that it may be used, modified, adapted, saved, reproduced, distributed, and displayed to the extent necessary to protect you and to provide, protect and improve Microsoft products and services. For example, we may occasionally use automated means to isolate information from email, chats, or photos in order to help detect and protect against spam and malware, or to improve the services with new features that makes them easier to use. When processing your content, Microsoft takes steps to help preserve your privacy.

Google Drive (Documents) [http://www.google.com/policies/terms/]

When you upload or otherwise submit content to our Services, you give Google (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content. The rights you grant in this license are for the limited purpose of operating, promoting, and improving our Services, and to develop new ones. This license continues even if you stop using our Services (for example, for a business listing you have added to Google Maps).

While it would be hard to prove that a particular user’s data was in fact exposed to a third party, the risk nevertheless remains.  Such uncertainty gives an enterprising opposing counsel a toe-hold from which to argue that the privilege has been waived.  But defeating privilege in this way does not serve the interests of the profession.  Results that depart from the ordinary and prudent person’s expectation of privacy in personal communications undermine the unfettered candor that the attorney-client privilege is meant to engender.  To prevent surprise, courts should adopt a more balanced approach (like the “middle test”) that takes into account a party’s reasonable expectations and the measures taken to secure the communication.  Until then, attorneys should continually survey the ever-changing digital landscape and never assume that conversations in the cloud are confidential.

Post-Conviction Relief from Junk Science

On November 18, 2013, Texas freed three of the women referred to by the media as the “San Antonio Four.”  The women were serving fifteen-year sentences (with one also serving a concurrent thirty-seven-and-a-half-year sentence) for the sexual assault of two young girls.

Attorney Mike Ware, director of the Innocence Project of Texas, filed a petition under a new Texas law, effective September 1, 2013, that allows convictions to be challenged where modern scientific knowledge controverts the validity of the scientific evidence offered at trial.  SB 344 [ftp://ftp.legis.state.tx.us/bills/83R/billtext/html/senate_bills/SB00300_SB00399/SB00344F.htm], commonly referred to as a “junk science” law, enables a court to overturn a conviction where it is more likely than not that the jury would have acquitted the accused if modern scientific knowledge had been available and presented at the time of trial.  However, to prevent clever defense attorneys from getting a second bite at the apple, SB 344 cannot be used to file a petition where the defendant was aware of the superior scientific knowledge at the time of trial but failed to raise the issue.

The primary evidence offered at trial consisted of the victims’ testimony and the testimony of a medical expert.  According to the victims’ statements, the four women had held each girl captive in a bedroom and sexually assaulted them over the course of two days.  Furthermore, it was alleged that one of the captors “had used a gun to threaten [one of the victims] not to tell any one about the assault.”  But the credibility of the victims’ testimony was seriously weakened when one of the victims recanted [http://www.mysanantonio.com/news/local_news/article/Woman-recants-accusation-of-sex-assault-3868974.php#ixzz26dbVbncB] in 2012.

There is some suspicion that prejudice against the four women because of their sexual orientation may have played an unfortunate role in their convictions.  In addition to charges of homophobia, supporters of the jailed women identified several issues [http://fourliveslost.com/evidence-of-innocence] that called into question the validity of the evidence offered against them.

The State’s medical expert, Dr. Nancy Kellogg, had testified that a scar on the genitals of one of the victims supported the conclusion that the victim had been sexually assaulted.  Kellogg had also expressed concern in one of her reports to police that the incident could be “satanic-related.”  The defense did not challenge the State’s expert with its own expert.

In a habeas petition made possible by SB 344, Ware offered a recent medical study to controvert Kellogg’s testimony.  In 2007, the American Academy of Pediatrics conducted a study of 239 female child sexual assault victims and concluded that despite the wide range of injuries suffered, “[n]o scar tissue was identified . . . in any of the patients.”  The report explained that “injuries in these prepubertal and adolescent girls all healed rapidly and frequently left little or no evidence of the previous trauma.”  Based on the study, there is little scientific evidence to suggest that sexual assaults cause scarring in young children.

At first blush, the study’s conclusion appears to controvert Kellogg’s testimony that the scar was one of several indicia of sexual assault.  However, the study can only be relied upon for the proposition that a medical examination of most victims will show “little or no evidence” of the assault.  The study does not establish that scarring is not possible, or that a scar, if present, is not indicative of sexual assault.

Assuming that many of the reported inconsistencies in the evidence are genuine, reversal is the outcome that best serves justice in the case, albeit almost too late to save any of the women from serving nearly all of their sentences.  However, in applying SB 344, the reviewing court reached a just conclusion but upset a jury verdict to do so.

SB 344 only permits reversal where “junk science” is the “but-for” cause of the conviction.  Because of the rapid healing rate documented in the study, the presence or absence of a scar does not rule out the possibility of sexual assault, and the convictions are still supported by other evidence offered at trial, including the victims’ statements.  Although one of the victims recanted, recantation testimony alone may not be sufficient to overturn a conviction, as some courts favor the original trial testimony over the recantation.

SB 344 will likely serve as an important post-conviction tool to safeguard citizens from being deprived of their liberty because of unreliable scientific evidence.  However, courts should be careful not to discredit the judgment of twelve persons because the pendulum of scientific thought on a particular matter has swung the other way.

Neuroscience reliability questions

A recent study casts doubt on the overall reliability of neuroscience research, raising the question of whether evidence produced by the field can overcome potential Daubert challenges. The study, published in the journal Nature Reviews Neuroscience, asserts that the small sample sizes used in most academic neuroscience studies result in conclusions that lack credibility.

The sample size of a scientific study has a direct effect on its statistical power. When the sample size is large, both subtle and major effects are discernible within the collected data. When the sample size is small, only the larger effects will be detected with any reliability, and smaller effects may be missed entirely. False positives may be recorded, or, worse, the size of a genuine effect may be exaggerated.

A statistical power of 80 percent is the desired goal in most studies: at that level, if an effect is genuine, the study will detect it 80 percent of the time. The paper published in Nature Reviews Neuroscience reviewed 49 meta-analyses (studies of other studies) covering 730 individual studies and concluded that the median statistical power was below 20 percent. Human neuroimaging studies within the sample reached a median statistical power of only 8 percent.
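
To illustrate how strongly power depends on sample size, here is a minimal sketch using a normal approximation to a two-sided, two-sample test. The effect size and group sizes are illustrative assumptions, not figures from the Nature Reviews Neuroscience paper.

```python
# A minimal sketch of how statistical power scales with sample size, using a
# normal approximation to a two-sided, two-sample test. The effect size (0.5)
# and group sizes are illustrative assumptions, not figures from the paper.
from scipy.stats import norm

def approx_power(effect_size: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power: probability of detecting a true effect of the given size."""
    z_crit = norm.ppf(1 - alpha / 2)                       # critical value for alpha
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return norm.cdf(noncentrality - z_crit)

for n in (10, 20, 50, 100, 200):
    print(f"n = {n:3d} per group -> power ≈ {approx_power(0.5, n):.2f}")
```

Under these assumptions, a study with 20 subjects per group detects a medium-sized effect only about a third of the time, which is the kind of shortfall the paper describes.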

Further, these small studies are often not conducted blind, and their results often have not been reproduced.

While the results of the paper have caused angst in the academic field of neuroscience, the conclusions, if correct, carry major legal implications as well. The Daubert standard and its relevant factors call into question conclusions reached by studies with such low statistical power. It appears that much of the testing is done on so small a scale that the results have not been independently replicated, and widespread acceptance within the scientific community has not been attained.

Neuroscience remains an area of tremendous interest and potential in both the scientific and legal communities, yet the statistics discussed in Nature Reviews Neuroscience suggest we may be far from tangible legal uses of the field.

http://www.nature.com/nrn/journal/v14/n5/abs/nrn3475.html

http://www.wired.com/wiredscience/2013/04/brain-stats/

http://www.theguardian.com/science/sifting-the-evidence/2013/apr/10/unreliable-neuroscience-power-matters

AVATAR kiosks aid Department of Homeland Security

Researchers at the University of Arizona have developed a system to assist the Department of Homeland Security with the detection of deceptive behavior in subjects crossing the border into the United States. The Automated Virtual Agent for Truth Assessments in Real-Time (AVATAR) is a kiosk-based system that uses a number of detection techniques to monitor a subject for suspicious behavior and flag the subject for further investigation by a trained human agent.

In a border-crossing scenario, a traveler stands in front of the AVATAR kiosk and answers a number of questions posed by a computer-generated face on the screen. Three sensors on the kiosk monitor the subject to detect any attempt to give false answers. An infrared camera recording at 250 frames per second tracks eye movement and pupil dilation, looking for the dilation or flicker caused by the stress of lying. A microphone in the kiosk analyzes vocal data as the subject speaks, looking for telltale changes in pitch that indicate deception. Finally, a high-definition camera monitors the subject for inadvertent fidgeting, which can indicate that a subject is not telling the truth. In trials conducted in Poland, the system detected deceptive behavior with a 94% success rate.
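
The flagging step can be pictured with a simple sketch: each sensor contributes a score, and a combined score above some threshold refers the traveler to a human agent. The sensor names, weights, and threshold below are hypothetical assumptions, not details of the actual AVATAR system.

```python
# A hypothetical sketch of multi-sensor flagging along the lines described
# above: each sensor contributes a deception score, and a subject whose
# combined score crosses a threshold is referred to a human agent.
# All field names, weights, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SensorReadings:
    pupil_score: float   # 0-1, from the infrared eye-tracking camera
    vocal_score: float   # 0-1, from pitch analysis of the microphone audio
    motion_score: float  # 0-1, from fidgeting detected by the HD camera

def should_refer_to_agent(readings: SensorReadings, threshold: float = 0.6) -> bool:
    """Combine per-sensor scores into a single flag; weights are assumptions."""
    combined = (0.40 * readings.pupil_score
                + 0.35 * readings.vocal_score
                + 0.25 * readings.motion_score)
    return combined >= threshold

# Example: elevated pupil and vocal stress scores trigger a referral to an agent.
print(should_refer_to_agent(SensorReadings(pupil_score=0.8, vocal_score=0.7, motion_score=0.3)))  # True
```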

While the AVATAR is not an infallible “lie detector,” nor is it used by DHS to collect evidence that may later be used against the subject, it is a tool that helps overburdened agents at border crossings. The advantage of a computerized system is that it is consistent across all the travelers it interviews, whereas a human agent may give preferential treatment to one subject over another. Once the AVATAR system has identified a subject who may have made deceptive statements or exhibited deceptive behavior during its interview, an agent can approach the subject with heightened alertness and be more thorough in follow-up questioning. By allowing agents to focus on travelers flagged by the AVATAR system, the entire border-crossing process becomes not only more efficient but safer as well.


http://borders.arizona.edu/cms/projects/current

http://www.wired.com/threatlevel/2013/01/ff-lie-detector/all/


Cracking down on anti-polygraphy consultants

While both state and federal courts have generally held that polygraph testing does not meet the Daubert standard for admissibility in court, government agencies routinely use polygraph testing as part of pre-employment screening or in the course of normal operations. Because an examiner may interpret nervous behavior or uneasiness on the part of a person undergoing a polygraph test as a sign of deception, many federal job seekers are naturally concerned about passing these pre-employment tests.

A cottage industry has emerged in response to the increase in polygraph testing, one that claims to prepare individuals with techniques to “beat” a polygraph examination. The training, costing as much as $1,000 a day, teaches students specific methods, such as pinching muscles or counting backwards while answering questions, to produce a relaxed demeanor that could mask any deception or nervousness.

The existence of anti-polygraph consultants has not gone unnoticed by the federal government. In response to high-profile leak cases involving trusted government employees such as PFC Bradley Manning and Edward Snowden, authorities have launched a program cracking down on these instructors.

Chad Dixon, of Marion, IN, was sentenced to eight months in prison for wire fraud and obstructing a government proceeding through the operation of his consulting company, Polygraph Consultants of America. First Amendment activists across the country expressed their displeasure with Dixon’s prosecution, stating that his instruction was protected speech.

Doug Williams, the Oklahoma City-based operator of polygraph.com, had his business raided and records seized as part of the crackdown on anti-polygraph consultants. Though Williams has not been formally charged, the names of 5,000 of his former students and purchasers of his book, which details evasion techniques, are now in government hands. One must wonder whether the authorities are reviewing these names to determine which of Williams’s clients have undergone and passed polygraph testing by federal examiners.

It seems odd to me that a testing methodology like polygraphy, which has not gained acceptance by the scientific community, is used so extensively by government agencies, and that critics of the methodology have been subject to prosecution. Considering that many state courts have a per se rule against admission of polygraph results and that few federal courts will admit the data, it seems unfair that job seekers could be denied employment or a security clearance because of polygraphy.

http://www.mcclatchydc.com/2013/08/16/199590/seeing-threats-feds-target-instructors.html

http://www.huffingtonpost.com/2013/09/06/chad-dixon-_n_3882052.html

http://www.foxnews.com/us/2013/08/18/feds-target-instructors-teaching-how-to-beat-polygraph-tests/

Bonaparte Identifies Criminals and Victims on an International Scale

Researchers at Radboud University Nijmegen, working with SMART Research BV, have developed a software program that can identify people by using their relatives’ DNA.  Earlier this month, Interpol announced that it will be implementing the program, called Bonaparte, to improve the ability of member countries to identify missing persons and victims of disasters.  “Napoleon made sure everyone was given a surname, and with our Bonaparte program nameless victims get their name back.”

The Netherlands Forensic Institute (NFI), which is working with Interpol to assist with the program’s implementation, has successfully used Bonaparte to identify the victims of the Tripoli airplane crash in 2010 and to identify the criminal behind the 1999 murder of Marianne Vaatstra, a young Dutch woman.  The NFI also plans to use Bonaparte to identify the unnamed victims of the 1953 flood in the southwest region of the Netherlands.

With the addition of Bonaparte, Interpol will be able to identify criminals, victims, and their family members more swiftly.  In particular, Bonaparte is expected to improve Interpol’s ability to respond effectively to missing-persons cases and transnational crimes.

But Bonaparte is not limited to human identification.  The underlying software applies logical techniques and advanced statistical methods to solve matching problems, so it can be used in a variety of circumstances.  For example, it can suggest complementary wine pairings for a meal or estimate the number of newspapers that should be printed.  Interpol also plans to use Bonaparte to help police fight wildlife crime in Africa by using DNA and isotope analysis to identify poaching hotspots.
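
To give a rough sense of what DNA-based familial matching involves, here is a toy sketch that scores how many alleles an unidentified STR profile shares with each candidate relative.  This is only an illustration of the general idea, not Bonaparte’s actual algorithm (which relies on far more rigorous statistical modeling); the profiles below are invented.

```python
# A toy illustration of DNA-based familial matching: score how many alleles an
# unidentified profile shares with each reference relative across STR loci.
# This is NOT Bonaparte's actual algorithm; the profiles are invented.

def shared_allele_score(profile_a, profile_b):
    """Count alleles shared at each locus between two STR profiles."""
    score = 0
    for locus, alleles_a in profile_a.items():
        alleles_b = profile_b.get(locus, ())
        score += len(set(alleles_a) & set(alleles_b))
    return score

unidentified = {"D3S1358": (15, 17), "vWA": (16, 18), "FGA": (21, 24)}
relatives = {
    "relative_1": {"D3S1358": (15, 16), "vWA": (16, 17), "FGA": (20, 24)},
    "relative_2": {"D3S1358": (14, 18), "vWA": (15, 19), "FGA": (22, 25)},
}

# The best-scoring reference profile points to a candidate family member
# whose relationship would then be confirmed by rigorous follow-up analysis.
best = max(relatives, key=lambda name: shared_allele_score(unidentified, relatives[name]))
print(best, shared_allele_score(unidentified, relatives[best]))  # relative_1 3
```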

The Chief Executive Officer of the NFI, Tjark Tjin-A-Tsoi, stated that the Bonaparte program and the cooperative efforts of Interpol and the NFI are “an example of the growing internationalization of the forensic domain.”  With advances in science and technology and the collaborative work of various national and international organizations, the globalization of criminal justice is progressing.  Interpol’s implementation of Bonaparte represents a significant step toward improving the international community’s ability to combat crime, recover from disasters, and provide victims and their families with peace.  As more technology is developed and adapted to aid the criminal justice system, more countries will hopefully get involved, sharing techniques, methodologies, and information and improving crime detection and prevention worldwide.