First, the song, from an ebulliently sweaty David Cassidy:
How Can I Be Sure?
Written by Eddie Brigati and Felix Cavaliere
Performed here by David Cassidy
From his 1972 album Rock Me Baby
Belief is based on evidence, but what constitutes evidence ranges from the groundless to the wonders of the eternal almighty. Leaving the almighty, for now, to one side, evidence is clear under the law: anything can be adduced as evidence, but only corroborated witness testimony, probative documents, admissions and testimony on oath count as proof. It is the means by which evidence is composed that has a heavy bearing on whether it can constitute, or be persuasive in constituting, proof. And courts are now having to contend with a question: is evidence that was knowingly created using artificial intelligence, or that might have been, admissible before a court of law?
A court considering a probative document must check that it is genuine. It is the document’s established authenticity that makes it probative, not the mere fact that it claims to be what it is. Likewise, not all corroborated witness statements get accepted without further ado; nor should they be. Witnesses are notoriously inaccurate in their recollections, even when they are adamant and absolutely, as Cassidy puts it, sure.
Studies examining the accuracy of witness testimony conclude, time and again, that it is all but non-existent. Take the 1995 survey at Rhode Island University. The students didn’t agree on what the man in the video had worn. Not one of them identified the culprit in a line-up; between them, they picked several different wrong men. There is a statistical probability that every test will fail. But a test designed to deprive the accused of his liberty should be allowed to fail seldom; it is for that reason alone that appeal courts were invented. The group of subjects in the Rhode Island survey amounted to only 12 persons. One can argue that, with a large survey population, mistakes are inevitable; with a small group, like that one, there should be greater homogeneity. There wasn’t. Then again, if the sample is small, is the survey representative, or would enlarging the sample reduce the incidence of failures? How can we be sure? Let’s not forget, the sample was not of housewives and manual labourers: these were bright, sharp university students.
As far as I can tell, what artificial intelligence does is what we ourselves do: it scours information and remembers it, and then uses it, not to match questions to answers but to reason out answers to new questions based on what it knows from its reading. Its responses are not knowingly coloured by prejudice, mood, overconfidence or the other things that can affect human judgment. It ought to be a great witness. But it has one endearing fallibility, one it shares with its human creators, as demonstrated in the Rhode Island test: it seeks to please. None of the Rhode Island students had the gumption to reason, “If I don’t recognise any of the men in the line-up as the culprit, I must say so,” instead of, “The culprit is here; which one will I pick?” The students’ ability to distinguish one person from another was no different after the question was asked than before. What affected the outcome was their failure to read the question: can you identify the bomber? It’s akin to the dramatic moment at which a witness is asked, “Is that person in this courtroom this morning? If so, indicate him with your finger.” (The rule against leading questions prevents counsel from asking whether the accused is the person in question. But the witness knows full well that they’re not being asked to identify the court usher.)
Artificial intelligence is said to be helping resolve a lot of crime, especially through facial recognition. Advocates say it speeds up the catching of criminals who are at large, and thus reduces overall crime figures. That has to be a good thing. With cameras so common in built-up areas now, it’s like having an attentive police officer at every street corner. But there are reservations: how would you feel if there were a real police officer at every street corner? While we have assurances that such evidence will only be used to secure the arrest of known offenders, there can be no guarantee that it will not be used as the primary source from which to construct probable cause (as in the use of number-plate recognition to track pregnant women seeking an abortion in another of the United States). Nor, of course, are there absolute safeguards against data being stolen or even manipulated. In all of that, AI is no different from previous forms of evidence and, like previous forms of evidence, AI evidence can also be manufactured.
That puts courts in a quandary: some evidence, like video evidence, is crucial to securing a conviction, and yet video evidence is notoriously manipulable. At present, its weight is such that, without it, courts will often return a verdict of no case to answer. After all, the conclusion that Jeffrey Epstein committed suicide was based not on evidence, but precisely on the lack of any other plausible explanation. A case brought by a formerly incarcerated man, Michael McCallion, and reported by the Marshall Project makes the point: the prosecution of the prison guards who allegedly beat him while he was in prison was already shaky, because the guards had committed the assault in parts of the prison where it was known there were no cameras. Without pictures of the act in actual progress, the jury returned a verdict of not guilty. The worrying truth is the assumption that anyone who has been incarcerated is a liar, and anyone who hasn’t is truthful. The mere fact that the beatings occurred where they did should itself be good reason to suspect foul play but, absent any such suspicion, the accusations ended up as a case of his word against theirs. The mere fact of being a prison officer endows them with an aura of trust and truth and, while that is perhaps justified, it doesn’t mean they never lie. Mendacity within institutions like the police and prisons is not individual, it is institutional: it gets imposed like gang culture. It’s not like canasta, where you can say, “Deal me out on this round, I’ll make the tea.” So, either an honest officer keeps quiet, in which case they may be tolerated by the body of dishonest officers and otherwise left unharmed, or they quit.
Maybe the professors at Rhode Island University can conduct a survey to establish whether dishonesty is now the rule and honesty the exception, or whether that is just my confirmation-biased conclusion (based on the criminal actions of a UK prime minister, a US president, and quite a few French and German heads of state, not to mention Frank Vandenbroucke’s alleged Agusta helicopter money).
So, in the end, is AI making law enforcement more efficient and, if so, should we be complaining? Or is it nosing its way into matters that are no concern of law enforcement, but that law enforcement is nevertheless taking great interest in? How can we be sure?