
I’m no expert, but tools such as SHAP and DeepLIFT can give you insight into what activates a network. It’s probably not practical to inspect a network with billions of parameters that way, but that’s to be expected, since I don’t think explainable ML is an established field yet.
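For what it’s worth, here is a minimal sketch of what SHAP usage looks like on a small model. The model and dataset are just placeholders I picked for illustration, not anything from the discussion:

  import shap
  from sklearn.datasets import load_breast_cancer
  from sklearn.ensemble import RandomForestClassifier

  # Illustrative model and data, chosen only so the example runs end to end.
  X, y = load_breast_cancer(return_X_y=True, as_frame=True)
  model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

  # TreeExplainer attributes each prediction to per-feature SHAP values.
  explainer = shap.TreeExplainer(model)
  shap_values = explainer.shap_values(X.iloc[:100])

  # Aggregate view of which features most influence the model's output.
  shap.summary_plot(shap_values, X.iloc[:100])

The point is that this kind of per-feature attribution is tractable at this scale; doing something comparable over billions of parameters is a different story.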

But also think about it from another angle: it doesn’t seem too hard to explain why people say what they say. We can usually put ourselves in the other person’s shoes if we try hard enough. However, if we say there’s no way for us to explain GPT-3, that just shows how fundamentally different it is from the human mind.



