
I disagree: for anything beyond basic stuff, we have hypotheses about causal explanations of human cognition, not full theories.

For example, we cannot explain the mental process by which someone came up with a new chess move, nor verify any such explanation in that case. We may have some ideas about how it might happen, and that person might also have some ideas, but then we are back to hypotheses.




All this is demagoguery.

If a bank denies you credit, it has to explain why. Not "AI told us so".

If the police arrest you, they have to explain why, not "AI told us so".

If your employer fires you, it has to explain why, not "AI told us so".

etc.


I'm totally lost as to how any of that is relevant to anything I've said. The claim I am rebutting is that we can't expect to say anything about AI causality because we can't even say anything about human causality.


Context of the discussion matters.

"When AI is used, its decisions must be explicable in human terms. No 'AI said so, so it's true'". Somehow the whole discussion has turned into how the human mind cannot be explained either.

Yes, the decisions made by the human mind can be explained in human terms in the vast majority of relevant cases.


Because (1) it holds AI to a higher standard than humans, and (2) it means that even if AI makes better decisions (name your metric), we would deny the use of those decisions if we could not sufficiently explain them. I note that with certain things we do not do that. For example, with certain pharmaceuticals we were and are quite happy to take the effect even if we do not fully understand how it works.


> Because (1) it holds AI to a higher standard than humans,

It doesn't

> it means that even if AI makes better decisions (name your metric), we would deny the use of those decisions if we could not sufficiently explain them.

Ah yes, better decisions for racial profiling, social credit, housing permissions etc. Because we all know that AI will never be used for that.

Again:

If a bank denies you credit, it has to explain why. Not "AI told us so".

If the police arrest you, they have to explain why, not "AI told us so".

If your employer fires you, it has to explain why, not "AI told us so".

etc. etc.


If you claim AI will only be used for evil, then sure.


I'm super confused at how we are disagreeing. Because I agree with all of that.


Nice. Maybe we just talked past each other.


I made no such claim. My claim is that it might not be useful to hold AI to a higher standard than humans. With humans we accept certain "whims" in decisions, which is the same as accepting some unexplainable bit in an AI decision.

EDIT: it might not even be useful to insist on explainability if the results are better. We did not and do not do that in other areas.


I noted this elsewhere, but I'll reiterate. I'm confused as to where our disagreement is, because all of that, I agree with. Did I misread your original claims? If so, my apologies for dragging us through the mud.


All good, I think we managed to talk past each other somehow.


Nice, cheers.


Once again, you are raising the bar artificially high by pointing to examples where we fail to have reasonable causal accounts, while conveniently ignoring the mountain of reasonable accounts that we employ all day long simply to navigate our human world.


I think this is confusing a certain predictability with having a correct explanation (again, we are beyond basic things like hunger leading to eating). Those two things are not the same.



