This sounds like a parallel effort (alongside the commit-graph work) to keep pushing in that direction. The article's author (who also wrote most of the blog posts on the commit-graph work) mentions a "three million file repository" used for testing, which of course sounds like the Windows repo.
I'd also imagine the efforts aren't mutually exclusive. This seems like exactly the sort of thing you'd want in combination with something like VFS at scale, since it reduces the number of materialized (versus virtual) objects in both the working copy and the object database. If you've got millions or billions of objects and files, even reducing the number of virtual placeholders is probably a big win.
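For context, here's a minimal sketch of cone-mode sparse checkout (available since Git 2.25), which keeps unrequested paths out of the working copy entirely; the repo layout and paths are hypothetical:

```shell
# Seed a hypothetical repo with two top-level service directories.
git init --quiet demo && cd demo
mkdir -p services/auth services/billing
echo a > services/auth/main.go
echo b > services/billing/main.go
git add .
git -c user.name=demo -c user.email=demo@example.com commit --quiet -m seed

# Restrict the working copy to services/auth using cone mode.
git sparse-checkout init --cone
git sparse-checkout set services/auth

# Only services/auth is materialized on disk now; services/billing
# still exists in the object database but not in the working copy.
ls services
```

Combined with a virtual filesystem layer, the paths excluded here wouldn't even need placeholder entries, which is where the win at Windows-repo scale would come from.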