I thought Git handled huge files poorly too. What I'd read was that most algorithms in Git assume that a whole file under revision control can be held in memory at once to do things like diff.
Has this changed, was I always mistaken, or was this guy talking about the sum of the file sizes being a few gigs?
It still handles them poorly by default. You can do things like edit .gitattributes to say that, e.g., *.binary files should be treated specially, but Git is still not a good system for, say, archiving HD video.
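For concreteness, a sketch of the kind of entry I mean (the *.binary pattern is just an example): it marks matching files as opaque binaries and tells Git not to attempt delta compression on them when packing.

    # treat matching files as binary (no text conversion, no diff, no merge)
    # and skip delta compression when packing them
    *.binary binary -delta

That stops Git from wasting time trying to diff or delta them, but every version of the file still gets stored in full.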
Some of this can be fixed, but a lot of it is probably not going to change. When you git-add something, it has to checksum the old data plus the new data, and that's going to be a pretty expensive operation when you have 50TB of data.
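To give a sense of why: the object hash covers the entire file content, so even just staging a file means reading every byte of it. A rough Python sketch of how Git computes a blob's SHA-1 (the default object format; real Git does this in C and also compresses and writes the object):

    import hashlib

    def git_blob_sha1(data: bytes) -> str:
        # Git hashes the header "blob <size>\0" followed by the full content,
        # so the cost of `git add` grows with the total number of bytes.
        header = b"blob %d\0" % len(data)
        return hashlib.sha1(header + data).hexdigest()

    # matches `echo hello | git hash-object --stdin`
    print(git_blob_sha1(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a

There's no way to get that hash without reading the whole file, which is where the multi-terabyte case falls over.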
There's no DVCS that I'm aware of that handles large binary files as well as, say, Perforce or Subversion.
I can't find the resource, but some time back someone compared DVCSes on large binary files (game assets) and found they were absolutely inefficient, and in several cases incapable of handling large files at all.