So what's the underlying model that makes the staging area make sense? Why does stash followed by unstash leave my checkout in a different state from what it was before?
The staging area is where you construct your next commit. It gives you a middle ground between the changes in your working copy and the last actual commit, so that if you don't want to commit everything you've changed in a single commit, you don't have to.
(If you always do want to commit everything you've changed, that works too: always commit with `git commit -a` and only use `git add` for new files that you want to bring under version control.)
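For a concrete feel of the difference (the file names here are just made up for illustration), the two workflows might look like:

```sh
# Selective workflow: stage only what belongs in this commit.
git add src/parser.c              # stage a whole file
git add -p src/lexer.c            # stage only some hunks of another
git status                        # review what is staged vs. what is not
git commit -m "Fix tokenizer edge case"

# "Always commit everything" workflow: skip explicit staging.
git add new_module.c              # only needed for brand-new files
git commit -a -m "Commit every tracked change"
```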
Hmm. With Mercurial I just use "commit --interactive" if I only want to commit part of my changes, and I always found that more intuitive and less confusing than having to mentally keep track of Git's staging area as well.
The git analogue to that would be `git commit --interactive`, or using `git status` to check the staging area while you `git add`. Trying to keep track of it purely in your head is the worst option, imho.
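Concretely, `git status --short` makes the staging state pretty hard to lose track of; the left column is the index, the right column is the working tree (the output below is a made-up sketch):

```sh
git status --short
# M  src/parser.c    <- staged: this will be in the next commit
#  M src/lexer.c     <- modified but not staged
# MM src/main.c      <- staged, then modified again afterwards
# ?? notes.txt       <- untracked
```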
You can also have your git porcelain handle it. Magit, for example, has a great interactive overview of unstaged and staged changes. When I need to do something more picky than just committing every change, I'll usually grab Magit to stage individual hunks: I don't necessarily want to commit all changes in a file, sometimes I want individual lines.
You can do that with staging using the commands above, Magit, or some other porcelain (I've heard good things about GitKraken). If you really want to forget staging even exists, you could just commit straight up and amend the commit afterwards to get a comparable experience, I guess. I've found staging helpful for keeping track of what I've got ready for the next "version" of the software to be added to the history, which is why I'm still using it.
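For the "forget staging exists" route, something like this gives a comparable experience (the commit message is only illustrative):

```sh
git commit -a -m "WIP: refactor config loading"   # commit everything now
# ...keep hacking...
git commit -a --amend --no-edit                   # fold the new changes into the same commit
```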
Staging is useful for gradually queuing up multi-file commits rather than listing them all in one command. It becomes even more useful with partial file commits.
What’s useful about it? A commit should preferably make sense on its own, which you can guarantee by testing it, or at least building/running the code. By cherry-picking changes from the workdir into a commit, don’t you basically make a blind guess? Or is it stash/test/pop every time? What if you overpicked? Reset and repeat?
I don't know about you, but I often get sidetracked with different changes when I'm working on something, so the working directory ends up too messy to commit everything. The staging area lets me cherry-pick only the changes that will go into the next commit, while keeping the rest for later. This way I can save that state for the moment, finish polishing the changes, and then easily commit them. I find it very useful for keeping focus on what I'm currently working on, without the overhead of WIP commits.
> By cherry-picking changes from workdir into a commit don’t you basically make a blind guess?
No, you use the interactive mode (`git add -p`) to select exactly what you want.
If you overpicked, you can reset a single file, and try again. That can be a bit annoying if there are a lot of changes, so this is another reason to keep commits small and atomic.
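To make that concrete (hypothetical file name; `git restore --staged` needs Git 2.23+, otherwise `git reset -- <file>` does the same job):

```sh
git add -p                            # pick hunks interactively for the next commit
git diff --cached                     # review exactly what is staged
git restore --staged src/lexer.c      # overpicked? unstage that file and pick again

# And for the "blind guess" worry: test what you're about to commit.
git stash push --keep-index           # working tree now matches the staged state
make test                             # or whatever your test command is
git commit -m "Small, atomic change"  # commit exactly what was staged
git stash pop                         # bring the deferred changes back
```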
I think the answer to the question is "yes, the person staging a partial commit may be making a guess". I think this is because the tool apes an earlier practice of crafting patches to share with other developers. There are definitely cowboys writing patches to show others without necessarily testing every implied snapshot in a chain of such patches. Some CI practices also encourage cowboy commits, e.g. when a team pushes commits to get them tested rather than testing prior to commit.
You can imagine an inverted perspective where the stash should be the only non-staging area, and the working copy _is_ the staging area for the next commit. Stash away partial changes you want to defer, then test the current working copy, then commit the working copy.
You'd also want status/diff commands that let you more easily compare: working vs HEAD (what can be committed); stash vs HEAD (all uncommitted changes); and stash vs working (deferred changes).
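With today's git you can approximate those three comparisons, assuming the deferred changes live in `stash@{0}`, though it's clunkier than purpose-built commands would be:

```sh
git diff HEAD              # working vs HEAD: what can be committed
git diff HEAD stash@{0}    # stash vs HEAD: the uncommitted changes that were stashed
git diff stash@{0}         # stash vs working: the deferred changes relative to the working copy
```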
Your point is actually great, but it's important to always test the commit itself, i.e. the thing that gets merged to the release branch if the tests pass. If you are only testing your working directory, I feel like that's even harder to do.
The staging area is a virtual snapshot, in roughly the way that the working copy and a commit are actual snapshots. It's defined in terms of the current HEAD with some changes.
Not sure what you mean by "unstash", since "git unstash" is not a command (on my machine anyway, so not unless it was added very recently). I'm pretty sure stashes are still modeled as commits/snapshots.
The git stash command is a little wonky, yes, but I don't think that's a data model thing. It's easy to mistake the disaster zone of Git's CLI for problems with the data model. It becomes more obvious where the problem is when you start thinking in terms of the data model, and trying to figure out what incantation will perform the relatively simple operation in your mind.
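You can poke at both claims, the index being a snapshot and stashes being commits, straight from the command line (a sketch; it assumes you have something modified to stash):

```sh
git diff --cached            # the staged snapshot, shown as a diff against HEAD
git ls-files --stage         # the index is literally a list of blob IDs and paths
git write-tree               # ...and can be written out as a real tree object

git stash push -m "demo"     # needs at least one local change to stash
git cat-file -p stash@{0}    # a stash entry really is a commit: a tree plus parents
git log --oneline -g stash   # each stash reflog entry is a commit, like `git stash list`
```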
> Not sure what you mean by "unstash", since "git unstash" is not a command (on my machine anyway, so not unless it was added very recently).
I meant pop or apply.
> The git stash command is a little wonky, yes, but I don't think that's a data model thing. It's easy to mistake the disaster zone of Git's CLI for problems with the data model. It becomes more obvious where the problem is when you start thinking in terms of the data model, and trying to figure out what incantation will perform the relatively simple operation in your mind.
I disagree. I think the staging area and its behaviour are inherently unreasonable; certainly all the "it's just a DAG of commits" people tend to be confidently wrong about what the staging area will do under a given sequence of operations.
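A concrete example of the kind of surprise from the top of the thread (made-up file; assume `config.ini` is already tracked): by default, `git stash` followed by `pop` flattens the staged/unstaged distinction.

```sh
echo "tweak" >> config.ini
git add config.ini            # the change is staged
git stash                     # stash it away
git stash pop                 # it comes back... but no longer staged
git status --short            # shows " M config.ini" rather than "M  config.ini"

# To actually round-trip the index state you have to remember:
#   git stash pop --index
```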