> This combination speeds up the data transfer process since you don’t need every reachable Git object, and instead, can download only those you need to populate your cone of the working directory
If you're only downloading what you need to populate the working directory, how is it that `.git` will have the entire repository?
Using sparse-checkout by itself will still download the entire repository and its complete history into `.git`. If you additionally use the "partial clone" feature, you can restrict what gets downloaded and stored in `.git` as well: it will download only the objects needed for your selected directories (along with their complete history). On big repositories with long history that might still be too much data, so you might also want to use the "shallow clone" feature (via the `--depth` flag) to restrict how much history you download.
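For reference, here's roughly how the three features combine on the command line (repo URL and directory names are placeholders):

```sh
# Partial clone: omit all blobs up front, and start with only
# top-level files checked out.
git clone --filter=blob:none --sparse https://example.com/big-repo.git
cd big-repo

# Populate only the directories you care about; the blobs for these
# paths are fetched on demand from the remote.
git sparse-checkout set src/frontend docs

# Optionally also truncate history (shallow clone):
#   git clone --filter=blob:none --sparse --depth=1 https://example.com/big-repo.git
```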
I guess it's possible you don't get it all, but I've definitely run `git grep` on that repo before and had results come back that weren't in my worktree.
I would speculate that the partial-clone implementation pulls down all the commits that touch any files that are required. Some of these commits would presumably include changes to other parts of the source tree. Perhaps `git grep` still matches on such commits?
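One way to poke at that speculation: plain `git grep` searches the working tree (so only the sparse cone), but given a tree-ish or `--cached` it reads blobs directly and can match paths that aren't checked out. A rough sketch, with `needle` as a stand-in pattern:

```sh
# Working tree only: limited to files inside the sparse cone.
git grep "needle"

# Search a commit's tree (or the index); this can match paths that are
# not checked out, and in a partial clone may trigger on-demand fetches
# of blobs that weren't downloaded yet.
git grep "needle" HEAD
git grep --cached "needle"
```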
LFS only downloads the files required by the checkout. What Git itself stores are tiny pointer files, containing just the information LFS needs to download the real content on demand.
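For illustration, an LFS pointer file stored in Git looks something like this (the oid and size here are made up):

```
version https://git-lfs.github.com/spec/v1
oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
size 132735
```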
Git’s partial clone is a more natural way of achieving the same outcome.
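For example, instead of rewriting large files as LFS pointers, you can tell a partial clone to skip big blobs at clone time and fetch them lazily when a checkout actually needs them (URL and size threshold are placeholders):

```sh
# Omit blobs larger than 1 MB; they are fetched on demand later.
git clone --filter=blob:limit=1m https://example.com/big-repo.git
```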