An interesting article by Sudhin Thanawala of the Associated Press describes lawsuits filed over the misuse of facial recognition technology by law enforcement. The lawsuits come as facial recognition technology and its potential risks are under scrutiny. Experts warn of artificial intelligence's (AI's) tendency toward errors and bias.
Numerous Black plaintiffs claim they were misidentified by facial recognition technology and then wrongly arrested. Three of those lawsuits, including one by a woman who was eight months pregnant and accused of a carjacking, are against Detroit police.
The lawsuits accuse law enforcement of false arrest, malicious prosecution and negligence. They also allege Detroit police engaged “in a pattern of racial discrimination of (Woodruff) and other Black citizens by using facial recognition technology practices proven to misidentify Black citizens at a higher rate than others in violation of the equal protection guaranteed by” Michigan’s 1976 civil rights act.
WHAT IS FACIAL RECOGNITION TECHNOLOGY?
The technology allows law enforcement agencies to feed images from video surveillance into software that searches government databases or social media for a possible match. Critics say it misidentifies people of color at a higher rate than white people. Supporters say it has been vital in catching drug dealers, solving killings and missing-persons cases, and identifying and rescuing human trafficking victims. They also contend that the vast majority of images scoured are criminal mugshots, not driver's license photos or random pictures of individuals.
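To see why misidentification rates matter, it helps to understand that a facial recognition "match" is typically not a certainty but a similarity score compared against a cutoff. The sketch below is a simplified illustration, not any vendor's actual system: real software derives face "embeddings" (lists of numbers) from photos using neural networks, and the embeddings, names, and threshold here are invented for demonstration. The mechanic it shows is the one critics point to: lower the threshold, and more innocent people become candidate "matches."

```python
import math

def cosine_similarity(a, b):
    """Score how alike two embeddings are (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_matches(probe, database, threshold=0.9):
    """Return every database entry whose similarity to the probe photo's
    embedding meets the threshold. A 'match' is just a score above a
    cutoff -- not proof of identity."""
    return [name for name, emb in database.items()
            if cosine_similarity(probe, emb) >= threshold]

# Toy embeddings, made up for illustration only.
database = {
    "person_a": [0.9, 0.1, 0.4],
    "person_b": [0.2, 0.8, 0.5],
}
probe = [0.88, 0.15, 0.42]  # embedding of a surveillance still

print(find_matches(probe, database, threshold=0.9))  # → ['person_a']
```

If the embeddings themselves are less accurate for some demographic groups, as researchers have warned, then scores for people in those groups cluster closer to the threshold, and the same cutoff produces more false matches for them than for others.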
Still, some states and cities have limited its use.
“The use of this technology by law enforcement, even if standards and protocols are in place, has grave civil liberty and privacy concerns . . . And that’s to say nothing about the reliability of the technology itself,” said Sam Starks, a senior attorney with The Cochran Firm in Atlanta.
FALSE ARRESTS BASED ON INACCURATE IDENTIFICATIONS FROM AI CAN SUPPORT A DEFENSE OF MISTAKEN IDENTITY
My opinion? Law enforcement should abandon AI-based identification so long as the technology misidentifies perpetrators. As a matter of law, the prosecution must prove the identity of the perpetrator of an alleged crime.
According to the jury instructions on Mistaken Identity, in determining the weight to be given to eyewitness identification testimony, jurors may consider other factors that bear on the accuracy of the identification. These may include:
- The witness’s capacity for observation, recall and identification;
- The opportunity of the witness to observe the alleged criminal act and the perpetrator of that act;
- The emotional state of the witness at the time of the observation;
- The witness’s ability, following the observation, to provide a description of the perpetrator of the act;
- The witness’s familiarity or lack of familiarity with people of the perceived race or ethnicity of the perpetrator of the act;
- The period of time between the alleged criminal act and the witness’s identification;
- The extent to which any outside influences or circumstances may have affected the witness’s impressions or recollection; and
- Any other factor relevant to this question.
But what happens when the “eyewitness identifier” is, in fact, AI technology?
At trial, the defense should retain an expert witness to testify about the inaccuracies of AI technology. That is an appropriate route to challenging the credibility of this “witness.”
Please review my Search & Seizure Legal Guide and contact my office if you, a friend or family member are charged with a crime involving AI. Hiring an effective and competent defense attorney is the first and best step toward justice.