A problem I've observed with these kinds of black-box systems is that the process from input to output really is a mystery.

When the results are right, they're just "right," so you should accept them; when they're wrong, they're actually also right by whatever magical hamster wheel is operating inside the thing, and you just don't "get it".

The problem is that humans like to have some clue as to how the results were derived, something easy to explain that gets the gist across. Something like "Watson counted all the words you use and compared them to different reference lexicons to arrive at the score." That provides a little bit of context, so we understand the semantics of the results and how to interpret and reason about them.
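For example, a scorer along those lines might look roughly like this minimal Python sketch (the lexicons, trait names, and scaling here are invented for illustration and are not Watson's actual model):

    from collections import Counter

    # Toy lexicons; a real system would use much larger, validated word lists.
    LEXICONS = {
        "openness": {"curious", "imagine", "novel", "art", "explore"},
        "harmony": {"together", "agree", "kind", "share", "peace"},
    }

    def score_traits(text):
        words = Counter(text.lower().split())
        total = sum(words.values()) or 1
        scores = {}
        for trait, lexicon in LEXICONS.items():
            hits = sum(n for w, n in words.items() if w in lexicon)
            # Fraction of words matching the trait lexicon, scaled to 0-100.
            scores[trait] = round(100 * hits / total, 1)
        return scores

    print(score_traits("I imagine novel art and love to explore curious ideas"))

Even a one-line description of that kind of process tells you what evidence drove the number, which is the context people are asking for.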

But for all we know, the results we're seeing come from some arbitrary stochastic method:

    openness = rand(90, 99)
    harmony = rand(90, 100)

etc.

For things like this to be accepted by the users (humans), there needs to be a quick explanation of how it works; otherwise we get head-scratchers.

Please see my other two responses in this thread for some insight. I think I posted them at about the same time you posted this.
