I think essentially the opposite is true, and academia's notion of cardinal sins has held back the analysis of data by decades. And I say this as an academic.

You make predictions all the time, and you don't know how. You don't know how you walk, how you drive a car, or how you know the rules of English grammar. Your mind is a black box. And yet you can do things that are still beyond what we can explain from first principles. In many domains, first principles have achieved nothing: years and years of effort by some of the smartest people in the world, and we have pushed the ball one inch down the football field.

Supervised machine learning asks the question "To what extent can we predict this outcome, given these predictors?" This is a perfectly valid question, one that we can try to answer given enough data. And we can answer it under exactly the same conditions under which we can do standard statistical inference.
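
To make this concrete, here is a minimal sketch of that question in code (assuming scikit-learn and its bundled diabetes dataset; the random-forest choice is illustrative, not a claim about the right model):

    # Estimate "to what extent can we predict this outcome, given these
    # predictors?" via held-out accuracy, with no claim about *why* the
    # predictors work. Assumes scikit-learn; the model choice is arbitrary.
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    X, y = load_diabetes(return_X_y=True)
    model = RandomForestRegressor(n_estimators=200, random_state=0)

    # Cross-validated R^2: the fraction of outcome variance the predictors
    # explain on data the model has never seen during fitting.
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")

The number that comes out answers only the "what" question: it quantifies predictability without explaining the mechanism.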

Ultimately, we want to answer "why" questions. But sometimes we can't even answer "what" questions. Most data is so complex that it's hard to say what the data even says, let alone why it says it. We could have been using "what" as a stepping stone to "why", but our own provincialism as statisticians prevented us. I hope now we are learning to do better.

How does one interpret what a black box means? In truth, any black box must remain beyond interpretation, precisely because verificationism failed.

There are hidden verificationist presuppositions embedded in your conclusions. If verificationism doesn't hold, then first principles remain the best approach.
