This comment is very confusing. First, the linked paper doesn't say what you claim it says: the authors show an equivalence between two specific frameworks, SVM-NN and Regularized-NN, not an equivalence between SVM and NN in general. SVM and NN are "equivalent" only in the weak sense in which all discriminative models are equivalent. Moreover, the kernel trick in SVM requires your embedding to have an "easily" calculable inner product (if I understand correctly, the kernel must satisfy Mercer's condition, i.e. be positive semi-definite). I'm not an expert, but I think this places strong constraints on the embeddings you can use.
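To make the "easily calculable inner product" point concrete, here's a minimal sketch in plain NumPy (my own illustration, not from the paper): the degree-2 polynomial kernel (x . z)^2 is exactly an ordinary inner product in an explicit quadratic feature space, but it never has to build that space.

    import numpy as np

    # Degree-2 polynomial kernel: k(x, z) = (x . z)^2. For 2-d inputs it
    # equals an inner product in an explicit 4-d quadratic feature space;
    # for d-dim inputs that space has O(d^2) dims, while the kernel costs O(d).
    def phi(x):
        return np.array([x[0]*x[0], x[0]*x[1], x[1]*x[0], x[1]*x[1]])

    x = np.array([1.0, 2.0])
    z = np.array([3.0, 4.0])

    kernel   = np.dot(x, z) ** 2         # never materializes the feature space
    explicit = np.dot(phi(x), phi(z))    # inner product in that space

    assert np.isclose(kernel, explicit)  # both are 121.0

The RBF kernel takes this to the extreme: its implicit feature space is infinite-dimensional, which is exactly why the inner product has to be cheap in closed form.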

Second, SVM does not learn a feature space (i.e., embeddings): the kernel fixes one implicitly before training, and the SVM just finds a separator with maximal margin inside it. Deep NNs, on the other hand, do learn features in their hidden layers.
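A quick sketch of that contrast with scikit-learn (again my example, not the paper's): the SVM's feature space is fixed by the kernel before it sees any data, while the MLP's hidden-layer representation is itself fitted.

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier

    X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

    # SVM: the RBF kernel fixes an (implicit, infinite-dim) feature space
    # up front; training only finds the max-margin separator within it.
    svm = SVC(kernel="rbf").fit(X, y)

    # MLP: the hidden layer *is* a learned feature map; its weights start
    # random and are shaped by gradient descent on the task.
    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(X, y)

    # Learned 16-d features for the inputs (ReLU hidden activations) --
    # the SVM has no analogue of these, since its features were never fitted.
    hidden = np.maximum(0, X @ mlp.coefs_[0] + mlp.intercepts_[0])
    print(hidden.shape)  # (200, 16)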

Anyway, even setting these issues aside, I'm not sure I understand your main point.



