But couldn't the same be said about standard MLPs or NNs in general?



_Sometimes_, and people do find features in neural networks by tweaking things and watching how the neurons activate, but in general, no. Any given weight, layer, or perceptron can be reused for multiple purposes, so it's extremely difficult to say "this is responsible for that", and even if you do find the parts of the network responsible for a particular task, you don't know whether they're _also_ responsible for something else. With a decision tree, by contrast, it's fairly simple to trace causality and tweak things without changing unrelated parts of the tree, whereas changing weights in a neural network leads to unpredictable results.
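For a concrete sense of what "tracing causality" looks like on the tree side, here is a minimal sketch using scikit-learn's decision_path to print the exact rule chain behind a single prediction. The dataset and the tiny tree are illustrative stand-ins, not anything specific from this thread:

    # Minimal sketch: trace the rule chain behind one prediction with
    # scikit-learn's decision_path. Dataset and tree size are illustrative.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    sample = X[[0]]                               # one example to explain
    visited = clf.decision_path(sample).indices   # nodes on its path
    leaf = clf.apply(sample)[0]
    tree = clf.tree_

    for node in visited:
        if node == leaf:
            print(f"leaf {node}: predicted class {clf.predict(sample)[0]}")
        else:
            f_i, thr = tree.feature[node], tree.threshold[node]
            op = "<=" if sample[0, f_i] <= thr else ">"
            print(f"node {node}: x[{f_i}] = {sample[0, f_i]:.2f} {op} {thr:.2f}")

Editing a threshold here changes exactly one split and nothing else, which is the kind of locality the comment above is pointing at.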


If a KAN has multiple layers, would tweaking the equations of a KAN be more similar to tweaking the weights in a MLP/NN, or more similar to tweaking a decision tree?

EDIT: I gave the above thread (light_hue_1 > empath75 > svboese > empath75) to ChatGPT and had it write a question to learn more, and it gave me "How do KAN networks compare to decision trees or neural networks when it comes to tracing causality and making interpretability more accessible, especially in large, complex models?". Either that shows me and the AI are on the right track, or I'm as dumb as a statistical token-guessing machine....

https://imgur.com/3dSNZrG


LIME (basically a local linear approximation) is one popular technique for doing so. It still has flaws, such as the sampled neighborhood not being close to a decision boundary.
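To make "local linear approximation" concrete, here is a from-scratch sketch of the core LIME idea (not the actual lime package): perturb the instance, query the black box, weight samples by proximity, and fit a weighted linear surrogate. The noise scale, kernel width, and the predict_proba black box are assumptions for illustration:

    # Bare-bones version of the LIME idea: fit a weighted linear surrogate
    # around one instance. predict_proba, noise scale, and kernel width are
    # illustrative assumptions, not the real lime package's defaults.
    import numpy as np
    from sklearn.linear_model import Ridge

    def local_linear_explanation(predict_proba, x, n_samples=5000,
                                 noise=0.3, kernel_width=0.75, seed=0):
        rng = np.random.default_rng(seed)
        Z = x + rng.normal(0.0, noise, size=(n_samples, x.shape[0]))
        y = predict_proba(Z)[:, 1]                  # black-box queries
        d = np.linalg.norm(Z - x, axis=1)
        w = np.exp(-(d ** 2) / kernel_width ** 2)   # proximity weights
        surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
        return surrogate.coef_   # per-feature local importance around x

The coefficients only describe the model's behavior in that one neighborhood, which is exactly the limitation the reply below points out.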


LIME and other post-hoc explanatory techniques (DeepSHAP, etc.) only give an explanation for a single inference; they aren't helpful for understanding the model as a whole. In other words, you can make a reasonable guess as to why a specific prediction was made, but you have no idea how the model will behave in the general case, even on similar inputs.


The purpose of post-prediction explanations is to increase a practitioner's confidence in acting on a given inference.

There's a disconnect between searching for a real-life "AI" and trying to find something that works and that you can place some form of trust in.


Is there a study of "smooth"/"stable" "AI" algorithms, i.e. ones where, if you feed them inputs that are "close", the outputs are also "close"? (Smooth as in smoothly differentiable; stable as in stable sort.)
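The property being asked about is roughly (local) Lipschitz continuity: ||f(x) - f(x')|| <= L * ||x - x'|| for nearby inputs. Here is a minimal sketch for probing it empirically around one point; the model f, the perturbation radius, and the trial count are arbitrary illustrative choices:

    # Rough empirical probe of "close input -> close output" around one
    # point. The model f, radius eps, and trial count are assumptions.
    import numpy as np

    def local_stability(f, x, eps=1e-2, n_trials=100, seed=0):
        rng = np.random.default_rng(seed)
        fx = f(x)
        worst = 0.0
        for _ in range(n_trials):
            d = rng.normal(size=x.shape)
            d *= eps / np.linalg.norm(d)   # perturbation of norm eps
            worst = max(worst, np.linalg.norm(f(x + d) - fx) / eps)
        return worst   # large values mean the output can jump near x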


You are right, and I don't know why you are being downvoted. A few perceptron units, a few nodes in a decision tree, a few of anything - they are "interpretable". Billions of the same are not interpretable any more. That's because our notion of "interpretable" is "an array of symbols that can fit on a page or a whiteboard", and there is no reason to think that all the rules of our world can be expressed that way. Some maybe, others maybe not. "Interpretable" is another platitudinous term that seems appealing at first sight, only to turn out not to be that great after all. We humans are not interpretable either; we can't explain how we come up with the actions we take, yet we don't say "now don't move, do nothing, until you are interpretable". So: much ado about little.


(IMO) to a lesser extent.



