
Good luck arguing the 'why' when all you have of a trained neural network is the architecture and the weights.

On a more serious note, I love transparency, but again, this is an overeager regulation (not surprising from the EU). You almost never get the true reasons for a rejection, be it in an interview or for credit; you have to figure them out yourself from an often very vague rejection letter. Our minds are basically running algorithms into which we have even less insight than into our computer algorithms. This law therefore only hampers the flourishing of the economy while providing no value whatsoever.

Another law that 'solves' a non-issue, brought into being by overpaid career politicians.




If all you have is a network and weights, you have a black box. It might work like magic and make the right decision most of the time, but we still need some way to look inside and see why those decisions are being made.

In practice, that means seeing how the net was trained and what its initial conditions were. If you threw away your training methodology and data set, why should anyone trust that your algorithm makes suitable decisions? Are we to assume that nobody who writes AIs that make influential decisions has vested interests?

Look at common law, for example: decisions by judges are binding, and future equivalent cases are required to uphold previous decisions. The only way that system can work, of course, is if someone keeps a record of those decisions. The alternative is a system of judgement based on hearsay, in which we could have about as much confidence as in asking a random person on the street to decide.

The technical challenge is really about storage and retrieval. If we know that discarding training data makes our neural networks "unaccountable for their decisions", then the technical requirement must be that we store the training data in its entirety, so that we can look back and at least glimpse why a network behaves as it does.
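
A minimal sketch of what that archival step could look like, assuming a file-based workflow; the names here (save_training_manifest, sha256_of_file, the file paths) are hypothetical, not any particular library's API:

    import hashlib
    import json
    import time

    def sha256_of_file(path):
        # Content hash of the archived training data, so an auditor can
        # verify it is the same data the network was actually trained on.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def save_training_manifest(data_path, manifest_path, seed, hyperparams):
        # Store the data hash together with the initial conditions and
        # training methodology, alongside the weights themselves.
        manifest = {
            "data_sha256": sha256_of_file(data_path),
            "random_seed": seed,
            "hyperparams": hyperparams,
            "trained_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        }
        with open(manifest_path, "w") as f:
            json.dump(manifest, f, indent=2)

    # save_training_manifest("mortgages.csv", "model_manifest.json",
    #                        seed=42, hyperparams={"lr": 0.01, "epochs": 10})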

For example, I might build an AI that decides whether to grant someone a mortgage. I could have millions of samples of training data, but I might choose to sort them by race: first overtrain the network on rejected applications from non-white applicants, so that it correlates race with rejection, then train it the other way on a limited sample of mostly accepted applications from white men.
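
To make that concrete, here is a toy sketch of such an attack on entirely synthetic data (every name and number is made up for illustration): a logistic model trained by plain SGD, first overtrained on "group A, rejected" samples, then on a small "group B, accepted" sample. The finished weights alone give no hint of this; the training set and its ordering do.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sgd(w, X, y, lr=0.1, epochs=5):
        # Plain stochastic gradient descent on logistic loss,
        # visiting the samples in exactly the order given.
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                w += lr * (yi - sigmoid(xi @ w)) * xi
        return w

    def samples(group, n):
        # Features: [group indicator, income score, bias term]
        return np.column_stack([np.full(n, float(group)),
                                rng.normal(0, 1, n),
                                np.ones(n)])

    w = np.zeros(3)
    w = sgd(w, samples(1, 1000), np.zeros(1000))  # phase 1: group A, all rejected
    w = sgd(w, samples(0, 50), np.ones(50))       # phase 2: group B, all accepted

    # Two applicants with identical income, differing only in group:
    print("P(approve | group A): %.2f" % sigmoid(np.array([1, 0, 1]) @ w))
    print("P(approve | group B): %.2f" % sigmoid(np.array([0, 0, 1]) @ w))

The weight on the group indicator ends up strongly negative and is never corrected in phase two, so identical applicants get very different approval probabilities.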

Of course this is extreme, and an expert who looked at such a data set would quickly notice that the AI is unfit for purpose and blatantly racist. But he needs the training set to even have a chance of concluding that; without it, he has effectively random numbers that tell him next to nothing.
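
The audit itself can be mundane once the data is available. A sketch, assuming the archived training set is a CSV with (hypothetical) columns "race" and "approved":

    import pandas as pd

    df = pd.read_csv("mortgage_training_set.csv")
    rates = df.groupby("race")["approved"].agg(["mean", "count"])
    print(rates)

    # The "four-fifths" rule of thumb from US employment law: a selection
    # rate below 80% of the highest group's rate is a red flag.
    ratio = rates["mean"].min() / rates["mean"].max()
    print("disparate impact ratio: %.2f" % ratio)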

These laws aren't meant to stifle innovation or economic benefits, only to ensure that fair treatment is practiced in development. As far as I can see, if you have a neural network, a sound justification of its design and methodology, and a complete training set that can be analysed for biases, then there's no reason these regulations should get in your way.



