I ran this on a Flask app. The app has no linting issues and decent test coverage. Vulture reported 192 instances of unused methods/classes/variables/functions, all at "60% confidence" (whatever that means). There might be some legitimately unused code in there, but this is almost entirely false positives, all with the same "confidence".
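For what it's worth, my understanding (worth double-checking against vulture's docs) is that the percentage is a heuristic certainty score: findings vulture can be sure about, like unused imports, are reported near 100%, while anything that might be reached dynamically gets the baseline 60%. If that's right, you can cut the noise with the real `--min-confidence` flag (the path below is a placeholder):

```shell
# Only report findings vulture is most certain about,
# suppressing the baseline-60% heuristic guesses.
vulture myapp/ --min-confidence 100
```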
It seems like people build giant whitelists when using it with Flask, Django, and other frameworks. Given the volume of false positives, it seems like a maintenance burden to me.
Maybe there's some trick to using it effectively, but it's not obvious to me.
I've had some success using it in a Django app where I generate a whitelist from my main branch, and then I run vulture on my feature branch just before I open up a pull request. This way, it doesn't matter how many false positives there are because I'm only really trying to check if my changes cause any new issues. It's still not as good as other languages with better static analysis tools, but it's better than nothing.
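The workflow above can be sketched with vulture's `--make-whitelist` flag; branch names and paths here are placeholders for whatever your project uses:

```shell
# On the main branch: record every current finding as a whitelist.
git checkout main
vulture src/ --make-whitelist > vulture_whitelist.py

# On the feature branch: pass the whitelist alongside the code, so
# only dead code introduced by your changes gets reported.
git checkout my-feature
vulture src/ vulture_whitelist.py
```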
When it works, it’s a fantastic tool. I like pairing it with codespell, interrogate, bandit, and a few other tools to get a broad idea of how my codebase is doing on different fronts.
With that said, lately I’ve run into various errors using vulture on larger projects, so hopefully they’ll continue to develop it and address those.
This tool seems interesting, but I haven’t yet figured out how to configure it to work with Django’s dynamic nature. If anyone has had good success I’d love to hear more!
It seems a bit ... dead itself. There are a few issues there that haven't really been addressed in a long time.
Definitely a cool idea, but a little underwhelming when all it found for me was a bunch of unused variables (the boilerplate context manager arguments). And this is a fairly large Python codebase it was running over.
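For context, the false positives I mean look like this: the context manager protocol requires `__exit__` to accept three arguments even when the body never touches them, so a dead-code checker flags them as unused. (The class and names below are made up for illustration; a common workaround is renaming the parameters with a leading underscore.)

```python
class Resource:
    """Minimal illustrative context manager."""

    def __enter__(self):
        return self

    # The protocol forces these three parameters to exist even when
    # they're unused, so tools like vulture may flag all three.
    def __exit__(self, exc_type, exc_value, traceback):
        return False  # don't suppress exceptions


with Resource() as r:
    pass
```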
> Due to Python's dynamic nature, static code analyzers like Vulture are likely to miss some dead code. Also, code that is only called implicitly may be reported as unused.
Note that this problem is generally undecidable, so no tool will ever find all dead/non-dead code accurately.
This doesn't really help. I've found plenty of code before that isn't marked by the IDE/linters as unused because it's called by a unit test, but nothing else.