
Yes, it's not exactly equivalent here, but it's pretty close. Basically, Sobel kernels are analogous to the 1st derivative and the Laplacian is analogous to the 2nd. You'll also often see the Laplacian combined with a Gaussian (the "LoG" operator), which pre-smooths the image, since the Laplacian on its own is particularly sensitive to noise.
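
For concreteness, here's a rough NumPy/SciPy sketch of the kernels being discussed; the test image and the sigma value are just placeholders:

    import numpy as np
    from scipy import ndimage

    # 3x3 Sobel kernel for d/dx: a [-1, 0, 1] central difference in x
    # combined with [1, 2, 1] smoothing in y.
    sobel_x = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)

    # Common 3x3 Laplacian kernel: the sum of second central differences
    # in x and y.
    laplacian = np.array([[0,  1, 0],
                          [1, -4, 1],
                          [0,  1, 0]], dtype=float)

    img = np.random.rand(64, 64)  # stand-in for a grayscale image

    grad_x = ndimage.convolve(img, sobel_x)  # ~ 1st derivative in x
    d2 = ndimage.convolve(img, laplacian)    # ~ 2nd derivative (both axes)

    # "LoG": Gaussian pre-smoothing followed by the Laplacian, which tames
    # the Laplacian's sensitivity to noise. SciPy has it built in.
    log = ndimage.gaussian_laplace(img, sigma=1.5)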



Being a second derivative, I suppose it would be.

So does that mean there are other kernels that approximate derivatives "better"? Like with finite differences?
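
(For context, a sketch of what a plain and a wider, higher-order finite-difference stencil look like next to Sobel; the grid spacing and test signal here are arbitrary choices:)

    import numpy as np

    # Plain central difference for the 1st derivative: 2nd-order accurate.
    # Kernel listed for samples at x-h, x, x+h (correlation convention).
    central_2nd = np.array([-1.0, 0.0, 1.0]) / 2.0

    # Wider 4th-order-accurate stencil, for samples at x-2h .. x+2h.
    central_4th = np.array([1.0, -8.0, 0.0, 8.0, -1.0]) / 12.0

    # The Sobel-x kernel is (up to scale) that same [-1, 0, 1] difference
    # plus [1, 2, 1] smoothing in the other direction, trading a bit of
    # derivative accuracy for noise suppression.
    sobel_x = np.outer([1, 2, 1], [-1, 0, 1])

    # Compare both stencils against the exact derivative of sin(x).
    h = 0.1
    x = np.arange(0.0, 2 * np.pi, h)
    f = np.sin(x)

    d2 = np.correlate(f, central_2nd, mode="valid") / h  # values at x[1:-1]
    d4 = np.correlate(f, central_4th, mode="valid") / h  # values at x[2:-2]

    print(np.max(np.abs(d2 - np.cos(x[1:-1]))))  # roughly 1.7e-3
    print(np.max(np.abs(d4 - np.cos(x[2:-2]))))  # roughly 3e-6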
