
Can Artificial Intelligence Lead to Wrongful Convictions?

Photo Courtesy of Kathleen Crosby & The Innocence Project

The Innocence Project published an insightful article describing how AI-based surveillance systems lack independent verification, empirical testing, and known error rates. These shortcomings lead to wrongful arrests and, potentially, wrongful convictions. More worrisome, there’s a disturbing readiness among some system actors, especially prosecutors, to accept AI-based evidence at face value. This eager acceptance of AI-based evidence mirrors the flawed embrace of misapplied forensic science, which has contributed to numerous wrongful convictions.

BACKGROUND

The use of unreliable forensic science has been identified as a contributing factor in nearly 30% of the 3,500+ exonerations nationwide. Take bite mark analysis, for example. The practice was widely used in criminal trials in the 1970s and 1980s, yet it has never been properly validated: it does not adhere to scientific standards, and it lacks both established protocols for analysis and known error rates. It has since been discredited as unreliable, and courts increasingly refuse to admit it. Still, there have been at least 24 known wrongful convictions based on this unvalidated science in the modern era.

ADMITTING SCIENCE-BASED EVIDENCE 

The 1923 Frye v. United States decision introduced the “general acceptance” standard for admissibility at trial: a scientific technique is admissible only if it is generally accepted as reliable in the relevant scientific community. Some state courts still apply this standard today. In 1993, the Daubert v. Merrell Dow Pharmaceuticals, Inc. decision shifted the focus in federal court to evaluating the relevance and reliability of expert testimony to determine whether it is admissible.

In applying the Daubert standard, a court considers five factors to determine whether the expert’s methodology is valid:

  • Whether the technique or theory in question can be, and has been, tested;
  • Whether it has been subjected to publication and peer review;
  • Its known or potential error rate;
  • The existence and maintenance of standards controlling its operation; and
  • Whether it has attracted widespread acceptance within a relevant scientific community.

Under Daubert and Frye, much AI technology, as currently deployed, doesn’t meet the standard for admissibility. ShotSpotter, for example, is known to alert on non-gunfire sounds and often sends police to locations where they find no evidence that gunfire even occurred. It can also “significantly” mislocate incidents, by as much as one mile. It therefore should not be admissible in court.
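Systems like ShotSpotter estimate a sound’s origin from the times it reaches different microphones, so small timing errors translate directly into location errors. The toy sketch below illustrates that general principle only; the sensor layout, noise level, and brute-force grid search are my own illustrative assumptions, not ShotSpotter’s proprietary method.

```python
# Toy model of acoustic gunshot localization via time difference of arrival
# (TDOA). All numbers are illustrative assumptions, not ShotSpotter's actual
# sensor layout or algorithm.
import numpy as np

rng = np.random.default_rng(0)
SPEED_OF_SOUND = 343.0  # meters per second

# Hypothetical sensor positions (meters) and true gunshot location.
sensors = np.array([[0, 0], [1000, 0], [0, 1000], [1000, 1000]], dtype=float)
true_source = np.array([400.0, 700.0])

# Arrival time at each sensor = distance / speed, plus a little timing noise.
distances = np.linalg.norm(sensors - true_source, axis=1)
arrivals = distances / SPEED_OF_SOUND + rng.normal(0, 0.01, size=len(sensors))

def fit_error(point):
    """Mismatch between observed and predicted arrival-time differences."""
    predicted = np.linalg.norm(sensors - point, axis=1) / SPEED_OF_SOUND
    # Differencing against the first sensor cancels the unknown firing time.
    return np.sum(((arrivals - arrivals[0]) - (predicted - predicted[0])) ** 2)

# Brute-force grid search over a 2 km square at 20-meter resolution.
grid = [(x, y) for x in range(0, 2000, 20) for y in range(0, 2000, 20)]
best = min(grid, key=lambda p: fit_error(np.array(p, dtype=float)))

print("true location:     ", true_source)
print("estimated location:", best)
print("error (meters):    ", np.linalg.norm(true_source - np.array(best)))
```

Even in this idealized setup, a few milliseconds of simulated timing noise is enough to pull the estimate away from the true location; degraded sensors, echoes, and misclassified sounds can push real-world errors far higher, consistent with the mile-scale mislocations reported above.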

Similarly, facial recognition technology’s susceptibility to subjective human decisions raises serious concerns about its admissibility in court. Such decisions, which empirical testing doesn’t account for, can compromise the technology’s accuracy and reliability. Research has already shown, for instance, that many facial recognition algorithms are less accurate for women and people of color because they were developed using photo databases that disproportionately include white men.
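This is precisely the kind of gap that disaggregated, empirical error-rate testing would expose. As a minimal sketch of what such testing looks like, the snippet below computes a face matcher’s false match rate separately for each demographic group; the scores, threshold, and group labels are fabricated purely for illustration.

```python
# Minimal sketch: estimating a face matcher's false match rate per
# demographic group. All scores, labels, and groups below are fabricated
# for illustration; a real audit would use thousands of labeled image pairs.
from collections import defaultdict

# Each record: (similarity score, same_person?, demographic group).
comparisons = [
    (0.91, True,  "group_a"), (0.34, False, "group_a"),
    (0.95, True,  "group_a"), (0.41, False, "group_a"),
    (0.88, True,  "group_b"), (0.72, False, "group_b"),
    (0.83, True,  "group_b"), (0.69, False, "group_b"),
]
THRESHOLD = 0.6  # scores at or above this count as a "match"

false_matches = defaultdict(int)
impostor_pairs = defaultdict(int)
for score, same_person, group in comparisons:
    if not same_person:                # impostor pair: should NOT match
        impostor_pairs[group] += 1
        if score >= THRESHOLD:         # ...but the system says it does
            false_matches[group] += 1

for group in sorted(impostor_pairs):
    rate = false_matches[group] / impostor_pairs[group]
    print(f"{group}: false match rate = {rate:.0%}")
```

If the rates diverge sharply between groups, as they do in this contrived example, a single headline accuracy figure hides the fact that the system’s mistakes fall disproportionately on one population.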

My opinion? If we are to prevent a repeat of the injustices we’ve seen in the past from the use of flawed and untested forensic science, we must tighten up the system. Too many investigative and surveillance technologies remain unregulated in the United States.

Please contact my office if you, a friend, or a family member faces criminal charges. Hiring an effective and competent defense attorney is the first and best step toward justice.

AI Facial Recognition Tech Leads to Mistaken Identity Arrests

Image: “Facial recognition fails on race, government study says” (BBC News)

An interesting article by Sudhin Thanawala of the Associated Press describes lawsuits filed over law enforcement’s misuse of facial recognition technology. The lawsuits come as the technology and its potential risks are under increasing scrutiny, with experts warning about artificial intelligence’s (AI’s) tendency toward errors and bias.

Numerous Black plaintiffs claim they were misidentified by facial recognition technology and then wrongly arrested. Three of those lawsuits, including one by a woman who was eight months pregnant when she was accused of a carjacking, are against Detroit police.

The lawsuits accuse law enforcement of false arrest, malicious prosecution and negligence. They also allege Detroit police engaged “in a pattern of racial discrimination of (Woodruff) and other Black citizens by using facial recognition technology practices proven to misidentify Black citizens at a higher rate than others in violation of the equal protection guaranteed by” Michigan’s 1976 civil rights act.

WHAT IS FACIAL RECOGNITION TECHNOLOGY?

The technology allows law enforcement agencies to feed images from video surveillance into software that can search government databases or social media for a possible match. Critics say it results in a higher rate of misidentification of people of color than of white people. Supporters say it has been vital in catching drug dealers, solving killings and missing persons cases and identifying and rescuing human trafficking victims. They also contend the vast majority of images that are scoured are criminal mugshots, not driver’s license photos or random pictures of individuals.
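Under the hood, such systems typically reduce each face to a numeric “embedding” and rank database entries by their similarity to the probe image. The sketch below illustrates that generic pipeline only; the three-number embeddings and names are made up, and a real system would derive much longer embeddings from a trained neural network rather than by hand.

```python
# Generic sketch of the matching step: compare a probe image's face
# "embedding" (a numeric fingerprint) against stored embeddings and
# return the closest candidates. Embeddings and names here are invented;
# real systems compute them with a trained neural network.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical gallery: name -> embedding vector.
database = {
    "person_1": np.array([0.9, 0.1, 0.3]),
    "person_2": np.array([0.2, 0.8, 0.5]),
    "person_3": np.array([0.4, 0.4, 0.7]),
}

def search(probe, top_k=3):
    """Rank gallery entries by similarity to the probe embedding."""
    scores = {name: cosine_similarity(probe, emb)
              for name, emb in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

probe = np.array([0.85, 0.15, 0.35])  # embedding from a surveillance still
for name, score in search(probe):
    print(f"{name}: similarity {score:.2f}")
```

Note that the system returns ranked candidates, not identifications: a human analyst’s judgment about which candidate “matches” is exactly the subjective decision the critics worry about.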

Still, some states and cities have limited its use.

“The use of this technology by law enforcement, even if standards and protocols are in place, has grave civil liberty and privacy concerns . . . And that’s to say nothing about the reliability of the technology itself.” ~Sam Starks, a senior attorney with The Cochran Firm in Atlanta.

FALSE ARRESTS BASED ON INACCURATE IDENTIFICATIONS FROM AI CAN SUPPORT A DEFENSE OF MISTAKEN IDENTITY

My opinion? AI should be abandoned if the technology incorrectly identifies perpetrators. As a matter of law, the prosecution must prove the identity of the perpetrator of an alleged crime beyond a reasonable doubt.

According to the jury instructions on mistaken identity, in determining the weight to be given to eyewitness identification testimony, jurors may consider factors that bear on the accuracy of the identification. These may include:

  • The witness’s capacity for observation, recall and identification;
  • The opportunity of the witness to observe the alleged criminal act and the perpetrator of that act;
  • The emotional state of the witness at the time of the observation;
  • The witness’s ability, following the observation, to provide a description of the perpetrator of the act;
  • The witness’s familiarity or lack of familiarity with people of the perceived race or ethnicity of the perpetrator of the act;
  • The period of time between the alleged criminal act and the witness’s identification;
  • The extent to which any outside influences or circumstances may have affected the witness’s impressions or recollection; and
  • Any other factor relevant to this question.

But what happens when the “eyewitness identifier” is, in fact, AI technology?

At trial, the defense should retain an expert witness who would testify about the inaccuracies and limitations of AI technology. That is an appropriate route to challenging the credibility of this “witness.”

Please review my Search & Seizure Legal Guide and contact my office if you, a friend, or a family member faces charges for a crime involving AI. Hiring an effective and competent defense attorney is the first and best step toward justice.