Hacker News

An (IMO) interesting question is how to reduce the risk of things like this happening.

Where evidence from IT systems forms a large part of a prosecution, it seems there should be some scrutiny of how those systems operate.

One option would be allowing the defence to see details of how the system works, the testing that was done, and known bugs, but that would require a lot of expensive work by legal defence teams, especially where the system is complex.

Another option would be some kind of certification of IT system operation, but again that would be hard and expensive to do, and very incompatible with rapid development techniques.




I'm quite sure this system was certified in a multitude of ways. No certification process would have prevented this.

The real issue here was that the Post Office refused to recognise that, although computers themselves are mostly infallible, computer programs never are. They conducted their activities and took actions on the assumption that the reporting was flawless.

Then the really serious problem is that in cases where the fallibility became more visible, they consistently and systematically covered it up and pressed forward with their incredibly aggressive enforcement work anyway, knowing how much damage it was doing.

This is unquestionably an issue of abuse of power and position.


> although computers themselves are mostly infallible

What do you mean? Hardware is fallible too, just less often than software. This can cause problems on its own, e.g. bit flips in non-ECC memory, or HDDs that lie (acknowledging a cache flush before the data is actually written). Hardware can also trigger software errors, e.g. hardware can crash at a random moment and the software may not be designed to handle that properly.
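To make the failure mode concrete: one common defence against silent corruption (a bit flip, a lying disk) is to store a checksum alongside the data and verify it on read, so the system fails loudly instead of quietly returning a wrong balance. A minimal sketch (illustrative only, not from any particular system):

```python
import hashlib

def write_record(payload: bytes) -> bytes:
    # Store the payload together with a SHA-256 digest so that
    # silent corruption (e.g. a bit flip) can be detected later.
    return hashlib.sha256(payload).digest() + payload

def read_record(record: bytes) -> bytes:
    digest, payload = record[:32], record[32:]
    if hashlib.sha256(payload).digest() != digest:
        # Corruption detected: refuse to return bad data.
        raise IOError("checksum mismatch: record corrupted")
    return payload

record = write_record(b"balance=1042.17")
assert read_record(record) == b"balance=1042.17"

# Simulate a single bit flip, as a faulty DIMM or disk might produce.
corrupted = record[:-1] + bytes([record[-1] ^ 0x01])
try:
    read_record(corrupted)
except IOError as err:
    print(err)
```

Checksums don't prevent hardware faults, but they turn "the ledger is quietly wrong" into "the system reports an error", which is exactly the distinction that matters when records end up in court.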


That is why I said "mostly", at least in comparison to programs or humans. Obviously, yes, there are occasional problems like those you've described.


> An (IMO) interesting question is how to reduce the risk of things like this happening.

I look forward to finding out if this was a “fraud system gone wrong” or a more basic ledger system failing to do sums correctly.

Partially addressing your question, though: if you were to insert the words “AI” and “bias” into that sentence, we as an industry are starting to figure this out. The certification and testing processes you mentioned exist where a team is mature enough to have both a data and a model lifecycle worked out. You see words like MLOps trying to describe how to do that effectively in production.

For example, my work has a design approach (in both the product-design touchy/feely sense and the software-architecture sense) that includes questions and practices that help reason through the data needed to address a problem, what can go wrong with that data, and how things look when it does go wrong. The last bit is the most interesting one to me. In terms of practical engineering, inference results generally should carry some sense of lineage - of the data, model, and training services - which explains how you got to a given answer, including which inputs were considered or ignored.
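The lineage idea above can be sketched as a record attached to every prediction. This is a minimal illustration, not any specific MLOps tool's schema; the field names, the model version string, and the toy decision rule are all made up for the example:

```python
from dataclasses import dataclass
import hashlib
import json

@dataclass
class InferenceResult:
    # Prediction plus the lineage needed to explain it later.
    prediction: str
    model_version: str        # which model produced this answer
    input_hash: str           # fingerprint of the exact inputs used
    features_used: list
    features_ignored: list

def fingerprint(inputs: dict) -> str:
    # Canonical JSON (sorted keys) so identical inputs always
    # produce an identical hash.
    blob = json.dumps(inputs, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def predict(inputs: dict) -> InferenceResult:
    # Toy rule standing in for a real fraud model: large amounts
    # are flagged; the free-text note is deliberately ignored.
    used = sorted(k for k in inputs if k != "free_text_note")
    label = "suspected_fraud" if inputs["amount"] > 10_000 else "ok"
    return InferenceResult(
        prediction=label,
        model_version="fraud-model-1.4.2",
        input_hash=fingerprint(inputs),
        features_used=used,
        features_ignored=["free_text_note"],
    )

result = predict({"amount": 12_500, "branch": "X17", "free_text_note": "n/a"})
print(result.prediction, result.model_version, result.input_hash[:8])
```

With a record like this, "how did the system reach this answer?" becomes a question you can actually answer: you know the model version, the exact inputs, and what was ignored.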

An interesting side topic is that poor implementations can result in inexcusable differences that affect downstream systems. For example, if a model has predicted something like “this transaction is suspected to be fraud”, it had better be consistent from run to run, and the input data had better be consistent over time. If either of those changes, explaining that to the consumers of the data is essential to their understanding of whether the model changed, the data changed, or both.
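A sketch of the run-to-run check described above, assuming each run is stored as a record carrying its prediction, a model version, and an input fingerprint (a hypothetical shape, not a standard):

```python
def explain_difference(run_a: dict, run_b: dict) -> str:
    # Given two inference records for the same case, attribute a
    # changed answer to the model, the data, or neither.
    if run_a["prediction"] == run_b["prediction"]:
        return "consistent"
    causes = []
    if run_a["model_version"] != run_b["model_version"]:
        causes.append("model changed")
    if run_a["input_hash"] != run_b["input_hash"]:
        causes.append("input data changed")
    if not causes:
        # Same model, same data, different answer: non-determinism,
        # exactly the inexcusable case described above.
        return "unexplained: same model and data, different prediction"
    return ", ".join(causes)

monday  = {"prediction": "ok",
           "model_version": "1.4.2", "input_hash": "abc123"}
tuesday = {"prediction": "suspected_fraud",
           "model_version": "1.4.3", "input_hash": "abc123"}
print(explain_difference(monday, tuesday))  # model changed
```

The point isn't the three-line comparison; it's that without lineage fields there is nothing to compare, and a changed answer is indistinguishable from a buggy one.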


> An (IMO) interesting question is how to reduce the risk of things like this happening.

Corroborating evidence. In this case, where was the evidence that this money was ever in their possession? Was it ever sitting in their bank account? Was it buried in the back yard? Did they buy fancy sports cars or houses? The prospect of thousands of people stealing money without a trace of the cash is fantastical.

In general, I'd say electronic evidence should need to be corroborated with physical or other types of evidence to achieve a conviction. It's too easy for electronic records to be falsified, either through software bugs or outright malicious intent.



