
In this case the analogy would be that software companies should be held accountable for the decisions their AI makes?



The article makes the case that if an interpretable model (one we can explain) isn't used, then the user of the black-box model should bear the burden of proving that no interpretable model exists that does the job, and some level of responsibility for trying to develop one.



