
How does it do the blur detection? I assume it's also just a convolutional kernel applied to the image in some way, so it would do exactly the same thing as a tiny neural network with one convolutional layer. Thus they would be exactly the same speed if they used the same underlying native kernels. However, the native kernels in TF/PyTorch/etc. are more heavily optimized than OpenCV's, so I'd assume OpenCV would actually be slower here. And also more complicated, as you need to mix two big frameworks.
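To make the equivalence concrete, here's a hedged sketch: a plain 2D cross-correlation in NumPy is exactly the forward pass of a single conv layer (no bias, no activation), with a fixed 3x3 Laplacian kernel plugged in. The `conv2d` helper and the example image are hypothetical, not from either library:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation -- the same operation a single
    convolutional layer (no bias, no activation) computes. A toy sketch;
    framework-native kernels differ mainly in speed, not in result."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# The standard 3x3 discrete Laplacian kernel (coefficients sum to zero,
# so flat regions map to zero response).
laplacian = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

img = np.eye(6) * 255.0          # toy "image" with a sharp diagonal edge
print(conv2d(img, laplacian).shape)  # (4, 4)
```

Whether you run this via OpenCV, NumPy, or a one-layer network, the arithmetic is identical; the differences are in how the inner loop is vectorized and scheduled.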



Realistically I think you'll often want both, depending on what you're doing. Especially for things like blur detection, it comes down to your acceptable specificity, your performance requirements and scale, and where you're running the algorithm (on device vs. cloud).

I'm not an expert at all, but most image-processing networks I've seen generally involve at least a few convolutional layers plus a few other layers. I don't think you can get away with a single convolution, at least not that well.

With OpenCV you could use Laplacian variance, which looks like it's just a single line of code:

> cv2.Laplacian(image, cv2.CV_64F).var()
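For anyone without OpenCV handy, here's a minimal pure-NumPy sketch of the same idea: convolve with the 3x3 Laplacian kernel (which OpenCV uses by default for `ksize=1`) and take the variance of the response. It only computes interior pixels and skips OpenCV's border handling, so scores won't match `cv2` exactly; the function name and test images are my own:

```python
import numpy as np

def laplacian_variance(image):
    """Sharpness score: variance of the discrete Laplacian.

    A pure-NumPy approximation of cv2.Laplacian(image, cv2.CV_64F).var(),
    using the 3x3 kernel [[0,1,0],[1,-4,1],[0,1,0]] on interior pixels
    only (no border replication, unlike OpenCV).
    """
    img = np.asarray(image, dtype=np.float64)
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] +
           img[1:-1, :-2] + img[1:-1, 2:] -
           4.0 * img[1:-1, 1:-1])
    return float(lap.var())

# A high-frequency checkerboard scores far above a flat gray image,
# which is the whole trick: blur kills high frequencies, so blurry
# images have low Laplacian variance.
sharp = (np.indices((32, 32)).sum(axis=0) % 2) * 255.0
flat = np.full((32, 32), 128.0)
print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```

In practice you'd compare the score against a threshold tuned on your own images; the pyimagesearch post linked below walks through exactly that.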

Many of the NN implementations look like they're fine-tuned off Google's ViT checkpoints. I really can't imagine these are faster than Laplacian variance (at least not without spending extra on GPUs/TPUs), but I could be wrong.

And I assume you might be able to get better evaluation performance from a fine-tuned NN, but depending on what you're doing, that's a ton of work compared to OpenCV.

https://pyimagesearch.com/2015/09/07/blur-detection-with-ope...

https://sh-tsang.medium.com/review-bdnet-blur-detection-conv...



