I've read/heard people ridiculing Apple's HFS a couple of times, pointing out all the ways in which NTFS is superior, so at least in my mind the filesystem was never something I thought of as an advantage on macOS. I guess I was wrong, then.
I think it depends on your metric. I don't know what's wrong with HFS, but I do know that Apple drops these .DS_Store files all over the place, things that look like ideal candidates to be stuffed into NTFS alternate data streams.
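To illustrate the idea: NTFS addresses a named (alternate data) stream as `path:streamname`, so folder metadata could live attached to the directory itself rather than in a sibling dotfile. A minimal sketch; the stream name `folder.view` is made up for illustration, and actually writing a stream only works on an NTFS volume under Windows:

```python
# Sketch: keeping Finder-style folder metadata in an NTFS alternate
# data stream instead of a separate .DS_Store file.
# NOTE: the stream name "folder.view" is hypothetical; opening a
# "path:stream" name only succeeds on NTFS (i.e. on Windows).
import os


def stream_path(path: str, stream: str) -> str:
    # NTFS names an alternate data stream as "file:streamname"
    return f"{path}:{stream}"


def write_metadata(path: str, stream: str, data: bytes) -> None:
    # On NTFS this creates/overwrites the named stream attached to `path`
    with open(stream_path(path, stream), "wb") as f:
        f.write(data)


if os.name == "nt":  # guard: alternate data streams are Windows/NTFS only
    write_metadata("example.txt", "folder.view", b"icon positions, view options, ...")
```

The point is that the metadata travels with the file object itself, so nothing extra shows up in directory listings.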
I'm sure there are many metrics on which HFS is superior to NTFS, and probably many on which NTFS is superior. NT (and MS in general) has never philosophically worked well with more than N of X, whenever N is significantly larger than what a consumer would deal with - whether it's processes, files, TCP connections, whatever. Their consumer heritage usually finds a way to shine through.
> Anything that does lots of small writes, so basically anything unixy, suffers like this.
To be honest, when NTFS was conceived, making common Unix patterns fast probably wasn't a design goal. After all, Unix applications use the file system for lots of things for which Windows has separate mechanisms.
SVN sadly has to traverse the complete working copy and lock every single directory individually, because every directory is also a working copy in its own right (SVN 1.7 later centralized the working-copy metadata). Most of the time SVN spends on, e.g., an update is spent on locking and unlocking. Git and hg only need to lock in one place, and so avoid that problem.
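The cost difference is easy to see in miniature: one lock file per directory means one filesystem operation per directory, versus a single operation for the whole tree. A toy sketch (the `.lock` file name and the functions are made up for illustration, not SVN's or git's actual locking protocol):

```python
# Toy model of per-directory locking (pre-1.7 SVN style) versus a
# single repository-wide lock (git/hg style). Names are illustrative.
import os
import tempfile


def lock_per_directory(root: str) -> int:
    """Drop a lock file in every directory; returns the number of locks taken."""
    ops = 0
    for dirpath, _dirs, _files in os.walk(root):
        open(os.path.join(dirpath, ".lock"), "w").close()  # one write per dir
        ops += 1
    return ops


def lock_once(root: str) -> int:
    """Single lock at the root; one filesystem operation regardless of tree size."""
    open(os.path.join(root, ".lock"), "w").close()
    return 1


with tempfile.TemporaryDirectory() as wc:
    os.makedirs(os.path.join(wc, "src", "lib"))
    print(lock_per_directory(wc), "locks vs", lock_once(wc), "lock")
    # → 3 locks vs 1 lock
```

On a filesystem where each small write is expensive, the per-directory variant multiplies that cost by the directory count.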
Never checked whether it was an NTFS issue, but it felt like the logical explanation: when starting Stronghold Crusader through Wine on Linux, loading took about 0.5s, compared to 10s or so on Windows. I didn't bother to check the numbers precisely, but it was an order of magnitude for sure. Way more than just noticeable.
edit: a fine example, since I got a downvote:
SVN checkout on Windows NTFS: 8 minutes 30 seconds.
SVN checkout on the same kit on ext4: 48 seconds.
This problem applies to git as well, so it is not SVN-specific. Anything that does lots of small writes, so basically anything unixy, suffers like this.
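The workload in question is easy to reproduce: thousands of tiny files spread across many directories, where per-file metadata overhead dominates the actual data written. A rough sketch (file counts and sizes are illustrative, not a rigorous benchmark):

```python
# Quick-and-dirty model of the "many small writes" pattern a VCS
# checkout produces. Counts/sizes are illustrative, not a benchmark;
# run it on NTFS vs ext4 to compare wall-clock times yourself.
import os
import tempfile
import time


def small_write_workload(root: str, n_files: int = 200) -> float:
    """Create many tiny files across several directories; return elapsed seconds."""
    start = time.perf_counter()
    for i in range(n_files):
        d = os.path.join(root, f"dir{i % 20}")
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, f"f{i}.txt"), "wb") as f:
            f.write(b"x" * 64)  # tiny payload: metadata cost dominates
    return time.perf_counter() - start


with tempfile.TemporaryDirectory() as tmp:
    print(f"{small_write_workload(tmp):.3f}s for 200 small files")
```

On filesystems with cheap metadata updates the loop finishes in milliseconds; on NTFS (especially with a filter driver or antivirus inspecting each create) the same loop can be dramatically slower.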