> If it would take 8.5 yrs to review, it's probably god awful, and should never ever ever be used to convict someone of such a crime.
It's not like you review all scientific evidence and re-do the experiments that led up to the discovery of <insert some evidence method> in the first place. Validating all that would also take years, and much of it can be established as generally accepted by all parties. Similarly, there will be some trust involved with this source code as well. Getting the opportunity to look for bugs is essential in my opinion, but it needn't take multiple years. Focus on the parts you doubt, similar to what you'd do if you were reviewing the scientific method used in analog evidence.
Of course, the two aren't identical. Validating scientific methods and validating a program are different in that the program is proprietary and the science (usually) merely behind a paywall. The latter can then be replicated by others and becomes established. The former will only ever be seen by that company and doesn't become established. So scrutiny is necessary, but after a couple of cases that used an identical version, requiring access without articulating particular doubts would unduly delay the case. It doesn't seem unreasonable to start trusting the program after a bunch of defendants have had experts look at it and found no way to cast doubt on its results. If you don't think software of 180k lines can be used in court under such circumstances because it would take too long to review, we should throw out pretty much all software anywhere in the judicial system. (That's not what you said, but some of the replies, including yours, hint at that.)
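For what it's worth, the "identical version" condition is mechanically checkable: record a cryptographic fingerprint of the exact build the experts reviewed, and compare it in later cases. A minimal sketch in Python (the file name and the recorded digest are hypothetical placeholders):

```python
import hashlib

# Hypothetical digest recorded when defense experts reviewed the program
# in an earlier case; a real value would come from that review's record.
REVIEWED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# A match means the binary in this case is bit-for-bit the version that
# was already scrutinized; a mismatch means prior review doesn't carry over.
if sha256_of("forensic_tool_v4.2.bin") == REVIEWED_SHA256:
    print("identical to the previously reviewed version")
else:
    print("different build - prior review does not carry over")
```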
> It's not like you review all scientific evidence and re-do the experiments that led up to the discovery of <insert some evidence method> in the first place.
Actually, it is. That's how science works and that's how convictions often get overturned.
> Validating all that would also take years
Are you suggesting that unvalidated data is being used to prosecute crimes?
> and much of it can be established as generally accepted by all parties.
The point here is that it isn't established as generally accepted by all parties.
> Similarly, there will be some trust involved with this source code as well.
"Trust but verify"
> If you don't think software of 180k lines can be used in court under such circumstances because it would take too long to review, we should throw out pretty much all software anywhere in the judicial system.
I firmly believe that if the source code isn't available to review by all parties, including the public, then it shouldn't be used in a criminal court.
> It's not like you review all scientific evidence and re-do the experiments that led up to the discovery of <insert some evidence method> in the first place. Validating all that would also take years, and much of it can be established as generally accepted by all parties. Similarly, there will be some trust involved with this source code as well
There are a few important differences between a generally accepted method and some Matlab black box that you feed an input into and that prints out 'guilty' or 'not guilty'.
1. The former is based on centuries of peer review, where the best ideas eventually get selected for. The latter is an externally unreviewed application, which encapsulates the best of whatever we could ship by Thursday.
2. You can call an expert witness to the stand, and ask them questions about the state of the art of <some evidence based method>. You can ask them why. You can ask them about how certain one should be about their statements. You can't cross-examine a black box.
The actual solution to your quandary is to require that forensic analysis services must pass an annual, independent, double-blind analysis of the accuracy of their methods, before they are used in a courtroom - and that the results of those audits are made available to the defense.
It's one thing for a man in a lab coat to take the microphone and say that their methods are accurate 'to within one in a million'. It's quite another to see an audit, where 100 samples were sent in for analysis over six weeks, and only 92 of them were analysed correctly.
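To put numbers on why the sample size matters: with only 100 samples, the plausible range around that 92% is wide. A quick sketch using the Wilson score interval (standard library only; the 92-of-100 figures are the hypothetical audit numbers above):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

lo, hi = wilson_interval(92, 100)
print(f"observed accuracy: 92.0%, 95% CI: {lo:.1%} to {hi:.1%}")
# -> roughly 85% to 96%: nowhere near 'one in a million' error rates
```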
A jury might still convict on the basis of that 92% accuracy, but only if other meaningful evidence points against the defendant.
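That caveat is really a base-rate argument, and a toy Bayes calculation shows why the match alone shouldn't carry a conviction (the prior and the false-positive rate here are assumptions picked purely for illustration):

```python
# Toy posterior: how much should a positive match move us, given 92%
# sensitivity (the audit figure above) and an assumed 8% false-positive
# rate, against an assumed prior of 1-in-1000 candidate suspects?
prior = 0.001          # assumed prior probability the defendant is the source
sensitivity = 0.92     # P(match | true source), from the audit figure
false_positive = 0.08  # P(match | not the source), assumed for illustration

p_match = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_match
print(f"P(source | match) = {posterior:.1%}")  # about 1.1%
```

Even with a test that looks impressive in isolation, the posterior stays small until other evidence raises the prior, which is exactly the "other meaningful evidence" point.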
Unfortunately, the reality of forensic science in 2021 is that most of it is sloppy bunk, with no assurances of accuracy.
> The actual solution to your quandary is to require that forensic analysis services must pass an annual, independent, double-blind analysis of the accuracy of their methods, before they are used in a courtroom - and that the results of those audits are made available to the defense.
Agreed! But if that's the standard, it still doesn't involve letting the defendant see the source code.
The point he is making is that 'one in a million' was an outright lie used by prosecutors to secure convictions of innocent people, while the real criminals are still out and about.
> Validating scientific methods and validating a program are different in that the program is proprietary and the science (usually) merely behind a paywall.
Or completely fictitious.
Have you heard the story about the FBI crime lab and the “science” of fiber analysis that they developed, and not only used in federal criminal trials but also provided as a service for state and local agencies for decades?
Or the Shirley McKie debacle in Scotland in 2005, where it turned out that fingerprint identification was more of an 'art' than a science. The ball got rolling once they started convicting police officers (so at least the analysis was double-blind?).
Or the Phantom of Heilbronn, where dozens of crimes were linked to a single woman, who turned out to be the lab technician who assembled the kits. Doubts started once they discovered Caucasian female DNA in cells from the charred remains of a black male.
I often wonder how prosecutors defend against the use of these cases to create doubt.