July 01, 2020 Feature

Machine-Generated Evidence

By Alex Nunn

Technological advancement is changing the world. Just a few decades ago, only the earliest adopters owned a computer; today, 96 percent of Americans own a phone with more computing power than the vessel that catapulted astronauts to the moon.1 A few decades ago, discovering the answer to a certain bit of trivia required deep dives into encyclopedias; today, the answer to almost any question is available in seconds on Google. In the not-too-distant past, pilots landed planes, doctors offered diagnoses, and drivers sat behind semitrucks’ steering wheels; today, aspects of each of these tasks are increasingly handled by artificial intelligence.2 Technology is indeed changing the world, and courtrooms are no exception.

As machine-based evidence becomes increasingly common, judges and lawyers must wrestle with how to best examine and scrutinize it in the courtroom.

Jason Butcher via GettyImages

Prior to the recent technological revolution, courtroom evidence was, by and large, intrinsically linked to the actions of individuals.3 It was, for lack of a better term, “person-based.” (Human) witnesses dominated the courtroom. Cases were won or lost on the reliability of eyewitness testimony, the credibility of character assessments, or the expertise of a particular individual. Even the reliability of documentary or physical evidence was often linked to the inherent ability or subjective behavior of an individual. For example, before the advent of modern production lines, the reliability of a chair depended primarily on the skill of the artisan who constructed it. And, to be sure, all this evidence still plays a centrally important role in trials today.

But person-based evidence is no longer the monolith it once was. Rather, with technological advancement has come the rise of so-called machine-generated evidence.4 Unlike traditional forms of person-based evidence, machine-generated evidence is, by definition, nonpersonal. That is, the reliability of machine-generated evidence primarily depends not on any person’s actions—neither the quality of their perceptions nor their ability to carry out tasks—but instead on the standardized processes and mechanisms internal to the machine that produced it.5 When a police officer uses a breathalyzer to test a driver’s blood-alcohol concentration, for example, the accuracy of the BAC level indicated by the machine depends primarily on the ability of the breathalyzer to reliably measure alcohol, not the actions of the officer.6 When a toxicologist in a forensics lab tests a substance from a crime scene using gas chromatography or a mass spectrometer, the reliability of the identification of that substance as, say, arsenic depends more on the internal mechanisms of the gas chromatography machine or the mass spectrometer than the actions of the analyst.7 Indeed, the reliability of photographs,8 neurological brain scans,9 and many other emerging types of evidence depends far more on standardized mechanical processes of the instrument involved than individuals.

As technological advancement continues apace, and new, innovative forms of machine-generated evidence reach the courtroom, judges and lawyers will be required to respond in two important ways.

First must come awareness of the unique nature of machine-generated evidence. The legal profession must recognize that machine-generated evidence is categorically distinct from person-based evidence. It constitutes a difference in kind. As explored below, much of the present confusion, inaccuracy, and inefficiency in our legal system’s treatment of machine-generated evidence stems from attempts by judges and lawyers to treat machine-generated evidence as if it were person-based—as if its reliability depended on the actions of some individual. Only by recognizing the unique and, in many ways, paradigm-shifting features of machine-generated evidence can necessary reform occur.

Second, after recognizing that machine-generated evidence constitutes a difference in kind, the legal profession must consider how to best treat it in the courtroom. What doctrines and rules must change to better scrutinize machine-generated evidence? If the reliability of machine-generated evidence doesn’t (or shouldn’t) require a witness on the stand, what is the best way to evaluate it at trial? Answering these questions will require innovative and potentially radical thinking, as demonstrated by recent scholarship paving the way on this front.

Recognizing a Difference in Kind

In the courtroom, both historically and today, the witness box is center stage.10 Testimony is, by far, the most common form of evidence received during trials.11 Evidence comes directly from, or is at least presented through, people. Indeed, trials are usually little more than a sequential procession of witnesses. In light of this reality, our evidentiary regimes—the Federal Rules of Evidence and state evidentiary codes—largely assume that evidentiary reliability depends on the reliability of some person’s actions or testimony.12 Thus, the hearsay rule and Confrontation Clause seek to ensure, in effect if not necessarily in purpose, that declarants—people—appear in court to testify from the witness stand.13 The oath and perjury penalties ensure that those testifying—people—are truthful on the stand. Cross-examination, famously described as “the greatest legal engine ever invented for the discovery of truth,”14 assumes that there will be people to cross-examine. Our trial system is, and has for centuries been, all about people.

Into this paradigm now comes machine-generated evidence, which is, of course, emphatically nonpersonal.15 Instead of deriving reliability from the actions of individuals, the reliability of machine-generated evidence depends, predictably, on how the machine is maintained and how it operates. By definition, a person’s actions or reliability will be less relevant in the context of machine-generated evidence.16

So how has our legal system thus far responded to this evidentiary newcomer? How is machine-generated evidence treated at trial? Unfortunately, but perhaps unsurprisingly, most observers find the current courtroom treatment of early forms of machine-generated evidence suboptimal. Most often, machine-generated evidence is treated as if its reliability somehow depends on the actions of a person; the reliability of that person then acts as something of a proxy for the reliability of the machine-generated evidence.

Consider, first, the present treatment of DNA evidence in courtrooms. DNA evidence is a touchstone example of machine-generated evidence.17 Unlike some forensic disciplines that heavily depend on the expertise of a forensic examiner, a DNA technician does not primarily rely on subjective judgment when examining DNA or identifying a DNA match. The technician’s role at the lab is largely ministerial.18 Where, then, does a DNA match come from? A genetic analyzer (specifically, a DNA-typing machine), aided by software programs like TrueAllele, analyzes DNA samples using statistical methods to determine the likelihood that a sample “matches” certain known DNA profiles (say, a defendant’s profile).19 In most modern instances, the entire process is driven by algorithms. A lab technician does not view different strands of DNA to visually identify a match; rather, DNA typing is an exercise in machine-based statistical analysis.20 The role of the lab technician is largely to prepare adequate samples, run the tests, and record the machine’s results. A simplified sketch of that statistical core appears below.
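To make the statistical character of DNA typing concrete, here is a minimal, hypothetical sketch in Python of the kind of calculation involved: a “random match probability” computed across a few short-tandem-repeat (STR) loci under standard Hardy-Weinberg assumptions. The locus names are real STR markers, but the allele frequencies and the profile are invented for illustration, and production probabilistic-genotyping systems like TrueAllele are far more sophisticated (handling mixtures, degraded samples, and measurement uncertainty). This is a toy model of the general technique, not any vendor’s actual method.

```python
# Toy illustration of the statistical core of DNA "matching":
# a random match probability (RMP) across STR loci, assuming
# Hardy-Weinberg equilibrium and independence between loci.
# All allele frequencies below are hypothetical.

ALLELE_FREQS = {
    "D8S1179": {"12": 0.14, "13": 0.33},
    "D21S11": {"29": 0.20, "30": 0.26},
    "TH01": {"6": 0.23, "9.3": 0.31},
}


def genotype_probability(locus, a1, a2):
    """P(genotype) at one locus: p^2 if homozygous, 2pq if heterozygous."""
    p = ALLELE_FREQS[locus][a1]
    q = ALLELE_FREQS[locus][a2]
    return p * p if a1 == a2 else 2 * p * q


def random_match_probability(profile):
    """Multiply per-locus genotype probabilities (loci treated as independent)."""
    rmp = 1.0
    for locus, (a1, a2) in profile.items():
        rmp *= genotype_probability(locus, a1, a2)
    return rmp


if __name__ == "__main__":
    # Hypothetical single-source profile recovered from a crime scene.
    profile = {
        "D8S1179": ("12", "13"),
        "D21S11": ("30", "30"),
        "TH01": ("6", "9.3"),
    }
    rmp = random_match_probability(profile)
    print(f"Random match probability: {rmp:.2e}")
    # For a clean single-source exact match, the likelihood ratio
    # reported to the fact-finder is roughly 1 / RMP.
    print(f"Likelihood ratio: {1 / rmp:,.0f}")
```

Even this toy version illustrates the point: the reliability of the output turns on the statistical model and the underlying frequency data, not on any judgment call by the person who runs the test.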

But despite the objective, machine-generated nature of DNA evidence, our legal system has, to this point, treated DNA evidence as if it were person-based—as if its reliability depends far more on the actions of the lab technician than the processes internal to the genetic analyzer. Rather than first emphasizing appropriate scrutiny of the mechanical and algorithmic processes that give rise to a DNA match, recent prominent cases from the U.S. Supreme Court focus on the lab technician to the detriment of considering the primary function of the machine in question. Decisions such as Melendez-Diaz v. Massachusetts21 and Bullcoming v. New Mexico22 render DNA evidence inadmissible under the Confrontation Clause unless the lab technician who oversaw a DNA test takes the stand and subjects herself to cross-examination.23 To be sure, there will, of course, be instances when scrutiny of lab technicians is appropriate.24 Recent prominent examples of misconduct and sloppiness in forensic labs across the country make that patently clear.25 But there is a critical difference between testing a certain lab technician for misconduct or sloppiness and asserting that, in every case, the technician is the primary source of DNA reliability. More often than not, the reliability of a DNA match will depend far more on the reliability of the genetic analyzer than the actions of the lab technician.26

Courtroom treatment of photographs also reflects some present discomfort with machine-generated evidence. Like DNA evidence, photographs are quintessential examples of machine-generated evidence. The reliability of a photograph depends primarily on the internal mechanisms and standardized processes of a camera rather than the actions of whoever might use it.27 Reliability questions relating to photographs should therefore center on those internal processes. Is there some systematic error in a camera that affects or distorts the photographs it produces?28 As chronicled by UCLA Law School Dean Jennifer Mnookin, however, courtroom recognition of the machine-generated nature of photographs is the exception, not the rule.29 Instead, as with DNA, courts predominantly tie the machine-based evidence to the testimony of a person. Rather than allowing photographs to stand alone as reliable depictions of certain events, the prevailing approach at trial is to treat photographs as demonstrative evidence.30 That is, photographs are often introduced simply to illustrate the testimony of a witness—a person. Rather than recognizing a photograph’s independent value and, oftentimes, superiority to eyewitness testimony, courts (at least technically) treat photographs as demonstrative aids on par with graphs, charts, and posters.31 In Mnookin’s words, the “new evidentiary category” of machine-generated photographs saw “[j]udges [attempting] to accommodate the new technology by pronouncing it an iteration of an existing phenomenon” rather than recognizing its unique, machine-based characteristics.32

Practically speaking, perhaps the present approach to DNA, photographs, and other types of machine-based evidence makes sense—after all, our tools for testing evidence in the courtroom largely assume that there will be a person on the stand. How do you put a machine under oath, let alone cross-examine it? But forcing machine-generated evidence into a person-based paradigm has led to some clear conceptual distortions and practical inefficiencies. Absent atypical situations involving human misconduct or substandard calibration, the reliability of a breathalyzer reading is far more dependent on the breathalyzer’s internal processes than the actions of an officer who administers it.33 Absent similar atypical situations, the reliability of an EEG or fMRI brain scan depends far more on the machines analyzing brain activity than the technician overseeing the procedure.34 Generally stated, the reliability of machine-generated evidence depends far more on the machine than any one person, yet the machine is not the focus in courtrooms today.

So as machine-based evidence becomes increasingly common, judges and lawyers must wrestle with how to best examine and scrutinize it in the courtroom. If the status quo is suboptimal, how can we improve our procedural and evidentiary rules to better evaluate machine-generated evidence?

Evaluating Machine-Generated Evidence

Because machine-generated evidence represents such a historically unique entrant to the courtroom, considering how to best evaluate it at trial requires innovative and radical thinking. When we take tools designed for evaluating people off the table, what’s left?

Andrea Roth, a professor at UC Berkeley Law School, has offered perhaps the most incisive commentary on the institutional, structural, and procedural changes necessary for proper evaluation of machine-generated evidence in the courtroom. In her 2017 Yale Law Journal article “Machine Testimony,” Roth notes that “[m]achine sources potentially suffer ‘black box’ dangers . . . a machine’s programming . . . could be imprecise or ambiguous because of human error at the programming, input, or operation stage, or because of machine error due to degradation and environmental forces.”35 To protect against these concerns, particularly as they manifest in machine-generated evidence, Roth insists that we must materially change the way our legal system has traditionally thought about applying procedural and evidentiary rules.36 Among a comprehensive set of proposals, Roth suggests that, minimally, discovery must allow litigants enhanced access to the very machines that produced evidence in a particular case. “These rules might allow litigants to access machines before trial to test different parameters or inputs (much like posing hypotheticals to human experts). . . . [Additionally, the] rules might also require public access to programs for further testing or ‘tinkering’; disclosure of ‘source code,’ if necessary to meaningfully scrutinize the machine’s claims; and the discovery of prior statements or ‘Jencks material’ of machines, such as COBRA data for breath-testing machines.”37 Similar reforms are suggested in a 2019 Texas Law Review article by Vanderbilt Law School professor Ed Cheng and me. Like Roth, Cheng and I argue in favor of expanding compulsory process and subpoena powers to allow litigants increased access to machines.38

Beyond proposing new discovery tools, Roth, Cheng, and I also insist that certain fundamental doctrines must be reconceptualized to account for the rise of machine-generated evidence. Take Confrontation Clause jurisprudence: Demanding that a forensic lab technician take the stand and testify about machine-generated evidence might make for great theater, but the accusatory element in, say, a DNA test is primarily the processes of a machine, not the actions of a technician.39 Far better than a Confrontation Clause rule that requires a prosecutor to produce witnesses would be one that affords defendants increased access to a lab’s equipment and procedures. So too should certain rules of evidence, such as Federal Rule of Evidence 702 (which governs expert witnesses), be enlarged to encompass machines, thereby ensuring that machine processes and internal methods are sufficiently reliable.40

Of course, these proposals are merely the tip of the iceberg. Other scholars have made inroads by directly suggesting significant reforms for specific types of machine-generated evidence. Ed Imwinkelried, an emeritus professor at UC Davis School of Law, has insisted that computer source code must be subject to enhanced scrutiny, thereby dispelling the widespread notion that “manufacturers have an evidentiary privilege protecting the code as a trade secret.”41 Emily Murphy, a professor at UC Hastings School of Law, has constructed a new regime for evaluating neurological brain scans in the courtroom, insisting that Daubert-like protections should apply with equal force to machine-generated evidence.42

Still others are working on a host of outstanding questions. If we accept machine-generated evidence as a difference in kind, how are we to scrutinize the human actions (e.g., code creation and calibration) that necessarily precede machine-generated evidence? If the reliability of machine-generated evidence isn’t tied to any one person, must it be admitted through a witness? If not, how might trial procedure change to accommodate it? Is Federal Rule of Evidence 403 sufficient to govern the admissibility of machine-generated evidence? Why are (or aren’t) machine-based analogs to the hearsay rule necessary?

At their core, these questions and proposals make one thing clear: change is coming to the courtroom. Machine-generated evidence is something new and unique; it doesn’t fit into our current paradigm. But it is here to stay. How our legal system adapts and changes in response to machine-generated evidence will shape the course of trials for decades to come.

Endnotes

1. Mobile Phone Ownership over Time, Pew Res. Ctr. (2019), https://www.pewresearch.org/internet/fact-sheet/mobile.

2. See Keith Button, A.I. in the Cockpit, Aerospace Am. (2019), https://aerospaceamerica.aiaa.org/features/a-i-in-the-cockpit/; Conor Dougherty, Self-Driving Trucks May Be Closer Than They Appear, N.Y. Times (Nov. 13, 2017), https://www.nytimes.com/2017/11/13/business/self-driving-trucks.html; D.A. Hashimoto et al., Artificial Intelligence in Surgery: Promises and Perils, 268 Annals of Surgery 1 (2018).

3. Edward K. Cheng & G. Alexander Nunn, Beyond the Witness: Bringing a Process Perspective to Modern Evidence Law, 97 Tex. L. Rev. 1077, 1081 (2019).

4. See id.

5. See id.; Andrea Roth, Machine Testimony, 126 Yale L.J. 1972, 1993 (2017).

6. See Edward J. Imwinkelried, Computer Source Code: A Source of the Growing Controversy over the Reliability of Automated Forensic Techniques, 66 DePaul L. Rev. 97, 97–102 (2016).

7. Cheng & Nunn, supra note 3, at 1093–94.

8. Jennifer L. Mnookin, The Image of Truth: Photographic Evidence and the Power of Analogy, 10 Yale J.L. & Human. 1, 73 (1998).

9. G. Alexander Nunn, Ep. 78: Emily Murphy, Brain-Based Memory Detection, Excited Utterance: The Evidence and Proof Podcast (Oct. 7, 2019), http://www.excitedutterancepodcast.com [hereinafter Ep. 78: Emily Murphy].

10. Cheng & Nunn, supra note 3, at 1077.

11. See generally Alexandra Natapoff, Snitching: Criminal Informants and the Erosion of American Justice 6 (2009).

12. See, e.g., Fed. R. Evid. 404 & 801.

13. See Michigan v. Bryant, 562 U.S. 344 (2011); Crawford v. Washington, 541 U.S. 36 (2004); Davis v. Washington, 547 U.S. 813 (2006).

14. Watkins v. Sowders, 449 U.S. 341, 349 n.4 (1981) (citing 5 J. Wigmore, Evidence § 1367, at 32 (J. Chadbourn rev. 1974)).

15. Cheng & Nunn, supra note 3, at 1077.

16. Roth, supra note 5, at 1974–83.

17. 4 David L. Faigman et al., Modern Scientific Evidence § 30:3 (2018); Cheng & Nunn, supra note 3, at 1093.

18. Faigman et al., supra note 17, § 30:3; Cheng & Nunn, supra note 3, at 1093.

19. Imwinkelried, supra note 6, at 97–102.

20. Faigman et al., supra note 17, § 30:3; Cheng & Nunn, supra note 3, at 1093.

21. 557 U.S. 305, 329 (2009).

22. 564 U.S. 647, 659 (2011).

23. See Moore v. State, 294 Ga. 682 (2014); Gardner v. United States, 999 A.2d 55 (D.C. 2010). But see Williams v. Illinois, 567 U.S. 50 (2012).

24. See id. at 451 n.1 (noting the importance of cross-examination in cases of technician misconduct).

25. See, e.g., Jess Bidgood, Chemist’s Misconduct Is Likely to Void 20,000 Massachusetts Drug Cases, N.Y. Times (Apr. 18, 2017), https://www.nytimes.com/2017/04/18/us/chemist-drug-cases-dismissal.html; Lauren Kirchner, Traces of Crime: How New York’s DNA Techniques Became Tainted, N.Y. Times (Sept. 4, 2017), https://www.nytimes.com/2017/09/04/nyregion/dna-analysis-evidence-new-york-disputed-techniques.html; Joseph Goldstein, Report Details the Extent of a Crime Lab Technician’s Errors in Handling Evidence, N.Y. Times (Dec. 5, 2013), https://www.nytimes.com/2013/12/05/nyregion/report-details-the-extent-of-a-crime-lab-technicians-errors-in-handling-evidence.html.

26. Cheng & Nunn, supra note 3, at 1093.

27. See id.; Mnookin, supra note 8, at 73.

28. For an example of how significant systematic camera error can be, see Jessica M. Salerno, Seeing Red: Disgust Reactions to Gruesome Photographs in Color (But Not in Black and White) Increase Convictions, 23 Psychol. Pub. Pol’y & L. 336, 345–47 (2017).

29. Mnookin, supra note 8, at 73.

30. Id. at 5 (“[J]udicial response to the photograph brought into existence that category of proof we now know as ‘demonstrative evidence.’”).

31. Cheng & Nunn, supra note 3, at 1101.

32. Mnookin, supra note 8, at 6.

33. Imwinkelried, supra note 6, at 97–102.

34. Ep. 78: Emily Murphy, supra note 9.

35. Roth, supra note 5, at 1977–78.

36. See id.

37. Id. at 1981.

38. Cheng & Nunn, supra note 3, at 1105–08.

39. See id. at 1108–13.

40. Roth, supra note 5, at 1981–82.

41. Imwinkelried, supra note 6, at 97–102.

42. Ep. 78: Emily Murphy, supra note 9.


By Alex Nunn

Alex Nunn is Assistant Professor at the University of Arkansas School of Law.