Photo Courtesy of Kathleen Crosby & The Innocence Project
The Innocence Project published an insightful article describing how AI-based surveillance systems lack independent verification, empirical testing, and error rate data. These shortcomings lead to wrongful arrests and potentially wrongful convictions. More troubling, there’s a disturbing readiness among some system actors, especially prosecutors, to accept AI-based evidence at face value. This eager acceptance mirrors the flawed embrace of misapplied forensic science that has contributed to numerous wrongful convictions.
BACKGROUND
The use of unreliable forensic science has been identified as a contributing factor in nearly 30% of all 3,500+ exonerations nationwide. Take bite mark analysis, for example. The practice was widely used in criminal trials in the 1970s and 1980s, but it is poorly validated, does not adhere to scientific standards, and lacks established protocols for analysis and known error rates. It has since been discredited as unreliable and ruled inadmissible in criminal trials because of these shortcomings. Even so, there have been at least 24 known wrongful convictions based on this unvalidated science in the modern era.
ADMITTING SCIENCE-BASED EVIDENCE
The 1923 Frye v. United States decision introduced the “general acceptance” standard for admissibility at trial: a scientific technique must be generally accepted as reliable within the relevant scientific community before evidence based on it can be admitted in court. Some state courts still apply this standard today. The 1993 Daubert v. Merrell Dow Pharmaceuticals, Inc. decision then shifted the focus to the relevance and reliability of the expert testimony itself in determining admissibility.
In applying the Daubert standard, a court considers five factors to determine whether the expert’s methodology is valid:
- Whether the technique or theory in question can be, and has been, tested;
- Whether it has been subjected to peer review and publication;
- Its known or potential error rate;
- The existence and maintenance of standards controlling its operation; and
- Whether it has attracted widespread acceptance within a relevant scientific community.
Under Daubert and Frye, much AI technology, as currently deployed, doesn’t meet the standard for admissibility. ShotSpotter, for example, is known to alert on non-gunfire sounds and often sends police to locations where they find no evidence that gunfire even occurred. It can also “significantly” mislocate incidents, by as much as one mile. It therefore should not be admissible in court.
Similarly, facial recognition technology’s susceptibility to subjective human decisions raises serious concerns about its admissibility in court. Such decisions, which empirical testing doesn’t account for, can compromise the technology’s accuracy and reliability. Research has already shown, for instance, that many facial recognition algorithms are less accurate for women and people of color because they were developed using photo databases that disproportionately include white men.
My opinion? If we are to prevent a repeat of the injustices caused by flawed and untested forensic science, we must tighten the standards for admitting this kind of evidence. Too many investigative and surveillance technologies remain unregulated in the United States.
Please contact my office if you, a friend, or a family member is charged with a crime. Hiring an effective and competent defense attorney is the first and best step toward justice.