Git Is Simpler Than You Think (nfarina.com)
609 points by nfarina on Sept 7, 2011 | 118 comments



If you don't understand git, then don't mess around with 'rebase', 'push -f', or any other command that tries to edit history. These commands assume that you have a strong mental model of how git works, and this is how the author of the article got into trouble.

It's possible to build a very successful git workflow using only the following commands:

    git clone git:...
    git add path/to/new_file
    git commit -a
    git pull
    git push
(Yes, commit before pulling, rather than vice versa.)

If you want to use branches, you need to add:

    # Showing branches.
    git branch -a
    gitk --all

    # Checking out branches.
    git checkout some_branch

    # Creating and pushing a new branch.
    git checkout -b new_branch_name
    git push -u origin new_branch_name

    # Checking out an existing branch.    
    git checkout -b some_branch origin/some_branch

    # Merging a branch into your current branch.
    git pull origin some_branch
This workflow is extremely safe. At worst, you might need to resolve a merge conflict.

But if you start digging around under the hood, and you start editing your commit history, you'd better be prepared to understand what you're trying to do.


A thousand times this. For my first year to year and a half of using git, I didn't even know that you could re-write your history. I just started out with a personal repo on my own machine, where I'd init, add, rm, commit, and look at logs. Then I learned how to start working with someone else's repo, which introduced me to cloning, pulling, pushing, fetching, and all that jazz. As the project sizes grew, I realized how useful branches were, so I became more familiar with tracking things upstream, merging, and some more advanced workflows. Finally I started poking around in the git object model, and learning how I can clean up my local history before a push. It was baby steps all the way.

Trying to use rebase the same week or month that you learn git (or version control period), while doable for some, is just crazy for everyone else. You don't jump into riding a bike before you learn to walk.


I really don't like "git commit -a". I've seen people new to git add every file in the project directory without checking what files are there. This includes merge conflicts, SQL dumps and random backups.

If you do a "git status" immediately beforehand, then "git commit -a" can be a slight time saver; though it's better to use "git add -u", which stages all files that are already in the repo and ignores any new files. Personally, I like adding files one or two at a time, or even hunk by hunk with "git add -p".
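To make the difference concrete (file names are made up):

    git add -u               # stage changes to files already tracked; ignore new files
    git add app.py test.py   # stage only the files you name
    git add -p               # review each hunk interactively before staging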

While I'm at it, lots of people seem to have poor (single line) commit messages, stemming from the use of "git commit -m". Whilst this is OK on some occasions, commit messages are often better thought of as an email with a subject and a body.

(Yes, I know sometimes a single-line commit message is all you need, and you can write multi-line messages directly in your shell, but some people don't know this, and it limits their ability to write a good commit message.)
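For reference, the email-like shape I mean, as you'd write it in the editor that a plain "git commit" opens (contents made up):

    git commit    # opens $EDITOR; write something like:

    Fix login redirect loop

    The session cookie was cleared before the redirect, so users
    bounced between /login and /home. Clear it after the redirect
    completes instead.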

I feel better getting that out of my system. Now I need to tell my co-workers.


"git commit -a" actually does not add every file in the directory. It only adds those that are already in the repo.


That's embarrassing. I meant "git add -A" or "git add .". I guess "git add -a" or "git commit -a" isn't so bad, but I still think care should be taken when adding files to the index, especially with newcomers to git, who pick up habits from tutorials like this.


If I follow your workflow, my coworkers go ballistic. The problem is that the "git pull" messes up the revision history. I finally learned to use "git pull --rebase".


Technically speaking it doesn't "mess up" the revision history, it preserves it. Rebasing instead of merging creates a linear revision history, which is simpler and more similar to centralized version control, but at the cost of losing information about the actual path of development.
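Concretely:

    git pull             # fetch + merge: records a merge commit when histories diverge
    git pull --rebase    # fetch + rebase: replays your local commits on top, keeping history linear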


All true. Certainly nothing is "messed up".

That said, reading through a bunch of fine-grained and uninformative merge commits when looking at a log is really annoying. Serious projects expect submissions to be in the form of clean patch sets that apply in series and "look like" an instantaneous change. No one cares about per-developer histories.

If you look at the kernel history, for example, the only merge commits you see are Linus's and the subsystem maintainers'.


Right, because the kernel patches are emailed and never pushed to a public location. In essence, the "committers" of the kernel are the only ones who do rebases (either explicitly or implicitly).

Frankly, git was not designed for a lot of the use cases it's finding itself in. Medium-sized agile teams (5-10 devs) with multiple people mucking around in the same files will be removing bullets from their feet repeatedly for a few weeks when first switching... OR they will have an incomprehensible tangle of branches and merge commits that no person will ever be able to figure out.


That may be overstating things. The incomprehensible tangle is really just an annoyance. Look at it with "git log --no-merges" and it looks exactly like the tangle you'd get with CVS or subversion: lots of independent unsequenced changes by different developers.

Git by default shows this stuff, which in the kernel's use case is useful data. But in this case it's just useless chaff and an annoyance, though hardly a serious problem.


It only "messes up revision history" if you do your development on your remote tracking branch that multiple people are pushing to. If you keep your remote tracking branch clean (e.g. master), you can pull into master whenever you want, merge your changes from a local branch, and push it right back out. Done this way, you never have to use pull --rebase (which will rewrite local history).

This workflow actually works pretty well with your desired outcome as well - you can rebase your local branch against newly pulled changes on master. This will make all of your local branch merges look like fast forward merges on master (i.e. no merge commit, linear history).
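In command form, the flow described above (branch names are made up):

    git checkout master
    git pull                   # update the clean tracking branch
    git checkout my_feature
    git rebase master          # replay local commits on top of the new master
    git checkout master
    git merge my_feature       # fast-forward: no merge commit, linear history
    git push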


My workflow is like this:

- Create a feature branch off of development.

- Commit lots of times until that's ready.

- Switch back to development. Do "git merge --squash featurebranch". This introduces all the changes from the feature branch as uncommitted changes.

- View the diff to ensure it's exactly what I wanted, and commit them with a single, coherent commit message.
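As commands, that's roughly:

    git checkout -b featurebranch development
    # ...commit lots of times...
    git checkout development
    git merge --squash featurebranch   # stages the combined changes without committing
    git diff --cached                  # verify it's exactly what I wanted
    git commit                         # one coherent message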


As a SVN user who is not afraid to use feature branches, this "no branching" workflow of many git users amuses me to no end.


I have found that branches are the default for git users working on any project larger than a toy.


You're kindly assuming people never enter the wrong command by accident.

Undoing a commit, for example, is very common and necessary, especially for beginners.


The easiest way to undo a commit is to treat git just like you would Subversion or Mercurial: Use 'git revert $COMMIT_ID' to reverse the generated commit, or simply fix things by hand and commit again.

Sure, you can edit the broken commit, if you value a pretty commit history. But again, this requires understanding git well enough to predict the effects. What happens if you run 'git commit --amend -a' after pushing the original commit? What if another user already pulled? How will you fix the resulting mess?

If you can answer these questions, then you'll have no problems editing history. If you can't, then you may be signing up for a lot of pain.
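A sketch of the two paths:

    # Safe anywhere, even after pushing: adds a new commit that reverses the bad one
    git revert $COMMIT_ID

    # Rewrites the last commit; only safe if you haven't pushed it yet
    git commit --amend -a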


Good advice. I would add the following use case:

    # Oh no! Those changes should have been on their own branch, not dev...
    git stash
    git checkout -b changes_on_their_own_branch
    git stash apply
    git commit -a


That is overly verbose. Try this:

    git checkout -b changes_on_their_own_branch
    git commit -a
You can always create a new branch at HEAD (where you are currently) and switch to it without having to stash.

On the other hand, stashing may be required if you wanted your branch to start elsewhere:

    git checkout -b changes_branch start_point


But doesn't git stop you from switching to another branch when there are uncommitted changes?


Git only prevents switching branches if the act of switching branches would modify the files you have uncommitted changes to. In the case of fr0sty's suggestion, the new branch cannot have any changes because it's being created from the current HEAD. But even if you're switching to a pre-existing branch, it may still work depending on what files git will need to touch.

In other words, no harm in trying the `git checkout other_branch` first, and only stashing if that fails.
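That is:

    git checkout other_branch   # just try it
    # if git refuses ("Your local changes ... would be overwritten by checkout"):
    git stash
    git checkout other_branch
    git stash pop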


As GP said, not if you're creating a new branch. There's no history in the new branch to potentially overwrite your uncommitted changes in the working tree, so nothing for it to object to.


Couldn't agree more! I have lead quite a successful 'git life' without using any complex commands and therefore have never found git hard!


Why the -u on push?


It allows you to use `git pull/push' without specifying the remote. See: http://mislav.uniqpath.com/2010/07/git-tips/ under "Push a branch and automatically set tracking".


It creates a tracking branch, and so git knows how far a branch is relative to a remote branch.
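For example:

    git push -u origin new_branch   # push and remember origin/new_branch as upstream
    # from then on, while on that branch:
    git pull                        # no arguments needed
    git status                      # reports how far you are ahead/behind the upstream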


       -u, --set-upstream
           For every branch that is up to date or successfully pushed, add
           upstream (tracking) reference, used by argument-less git-pull(1)
           and other commands. For more information, see branch.<name>.merge
           in git-config(1).
--man git-push


To check out an existing branch, you need only do 'git checkout some_branch' - if the branch exists on the remote, git will automatically create a local version and set it to track the upstream version.


The whole "downloading history of the repository onto your machine" thing about git is what makes it unworkable where I work. A normal checkout from SVN is over 3GB in size just for our team's tree. There are a number of binaries that get pulled in and updated for various reasons (SDKs, platform-specific binaries) and they are versioned for repeatable and automatable builds across branches, all self-contained with minimal dependencies. I dread to think what the entire history would take - it must be many 100s of GBs at least. It would certainly rule out the whole "working disconnected" idea on laptops, for one.


If you wanted to use git in this situation, rather than svn, I'd recommend using git-annex (http://git-annex.branchable.com/). It avoids those binaries bloating the history while still letting branches "contain" specific versions of them. You can set up a centralized annex that is eg, a bup repository (or a rsync server, or use S3) and git-annex pulls down the binaries from there on request.
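The basic flow looks something like this (file name made up):

    git annex add sdk-installer.exe   # checks in a symlink/pointer, not the binary itself
    git commit -m "Add SDK installer"
    git annex get sdk-installer.exe   # fetch the actual content from a remote on demand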


What is the disk footprint of the existing SVN repository? "100s of GBs"? git's worst-case repository size is equal to, or slightly larger than (a few % probably), an SVN repository's.

Git will deduplicate all of your binaries across branches (and if you are clever, across repositories, but that's another story) so worst case you will only have one copy of any binary file no matter how many times it appears in your history.
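You can see the content-addressing that makes this work (paths made up):

    git hash-object vendor/sdk.zip        # prints the blob's SHA-1
    git hash-object branch_copy/sdk.zip   # identical bytes: identical SHA-1, stored once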

That is not to say that git can't be a poor fit for your current project+organization for other reasons, but a blanket assumption that the repository would be too huge is not generally accurate.


He didn't say the repo would be too huge for the server where the svn repo lives today. He said it would be too huge for the laptops.

The binaries are already deduped, that's why they live in svn.


The OP made no mention of how large the SVN repo is, but rather speculated that the git equivalent would be "100s of GBs".

Had the statement been: "Our SVN repo is already 100s of GBs in size" then yes, you are not likely to want to stuff that onto a laptop, but that was not the claim. The claim (without rationale) was that the git equivalent would be 100s of GBs, which is similar but not at all the same.

One assumes a roughly 1:1 correspondence; the other applies an unstated multiplier (SVN × n = git, where n > 1, and significantly so).

In reality the multiplier appears to be well below 1 in many cases:

> Git's repositories are much smaller than Subversions(sic) (for the Mozilla project, 30x smaller) [1]

[1] https://git.wiki.kernel.org/index.php/GitSvnComparison


He speculated that the entire history (which I would say is equivalent to the SVN repo) is 100s of GBs. We are assuming a 1:1 svn repo to git repo ratio.

Mozilla's ratio would be relevant if they were storing something like the visual studio installer in their repo. They aren't, so it's not.


The binaries (including debug symbols etc.) are the bulk of that size, and storing all revisions locally will almost certainly add up to non-trivial size for a laptop. Unless git has some kind of magic differencing algorithm specifically for executable code and debug symbols, I don't really see a way it could work - that's my rationale.

Of course, such algorithms do exist - Google's Courgette - but I don't think git is using them (I have looked) and doubt they are tuned to e.g. Borland TDS/RSM/etc. symbols.

I have no idea how large the svn repository is - it's stored on a SAN and run on a dedicated server I only interact with via svn. It could be many terabytes for all I know; and of course, my team's project tree isn't the only thing in the full repository.


Can you maybe separate the binaries and use git for only sources?


You're being downvoted for not using an HN approved workflow. Your workflow is inconceivable and therefore wrong.


Maybe I'm just grouchy today, but what's inconceivable to me is why you'd write an Oh No The Hivemind comment on this topic of all things. Some workflows actually are suboptimal.


The people complaining about this particular workflow being suboptimal do not understand it, nor why it exists in the places it does. Sometimes when something looks stupid, it's because it is stupid. But sometimes it's because you don't understand what you're looking at.


But have you actually tried it? You might be surprised.


I might give it a bit of a go when I'm on-site in CA in a couple of weeks, rather than trying it over transatlantic VPN - though my MBA only has a 256GB SSD, and it's split 50/50 between Windows 7 and OS X.

Still optimistic? :)


Two words: git submodules.


Git actually starts compacting your object database and creating super-efficient delta-compressed packfiles after a bit. You can still throw object files in there afterwards though, and it doesn't change the basic principles of operation.


Super-efficient delta-compressed packfiles of zip files are not, in fact, super-efficient.


Ah ha, yes, you are correct - but packfiles combine multiple blobs together to benefit from additional compression that you couldn't achieve by compressing each individually.
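You can watch it happen:

    git gc                  # repack loose objects into delta-compressed packfiles
    git count-objects -v    # compare loose object counts/sizes against pack sizes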


Git repositories are quite a bit smaller than svn's[1], but hundreds of GB is pretty huge.

[1] http://www.contextualdevelopment.com/logbook/git/large-proje...


git-clone --depth 1


Shallow clones are essentially read-only: they can neither push nor be fetched from. Unless you're prepared to regress to emailing patches around, a shallow git clone is actually less useful than an svn checkout.


Shallow clones don't seem to save much space: http://blogs.gnome.org/simos/2009/04/18/git-clones-vs-shallo...


Thanks -- great article. This is the most readable and straightforward explanation of git's internals that I've seen (and I've read a bunch of articles/books/etc. looking for resources to help others learn git).

I'd also recommend The Git Parable (http://tom.preston-werner.com/2009/05/19/the-git-parable.htm...) for anyone who hasn't read it. Different focus, but also helpful for understanding git's philosophy.


I found git to make a lot more sense personally after reading http://eagain.net/articles/git-for-computer-scientists/ . The git parable is also very good.


So if I have branched from master, onto experimental, and made several experimental changes that I want to actually use, then I can simply type:

    git merge master
    git branch master
from the tip of my experimental branch, and master will be moved up to the tip of the tree?


fr0sty was definitely not tactful, but his commands are correct. Merge always brings the branch you specify onto the commit you are currently on. The branch command either lists branches (when no other arguments, only certain options, are specified) or creates branches. Checkout is the command used to move around between commits in your repository.

Branching and merging is covered pretty well by Pro Git[1]

[1] http://progit.org/book/ch3-2.html


Not at all. Where did you come up with that? That would merge in any commits in master not already in experimental and the next command would fail because a branch 'master' already exists.

To do what you are describing you would do this (or something similar):

    git checkout master; git merge experimental


Thanks, obviously I'm missing some concept somewhere.


The checkout command moves you off the experimental branch and back onto the master branch.

The merge command then takes the commits in experimental that aren't in master and puts them into master.
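Putting it together:

    git checkout master          # move HEAD back onto master
    git merge experimental       # bring experimental's commits into master
    git branch -d experimental   # optionally delete the now-merged branch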


Thanks, I think I may have been misusing

    git branch [some name]
I'll take a look at some of the suggested tutorials again. Thanks all. That said, I pretty much do straight-ahead main-line development, so this isn't a huge problem right now. I do most of my development after midnight, so there isn't much cognitive headroom left...
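For what it's worth, the distinction that usually trips people up:

    git branch some_name        # creates the branch but leaves you where you are
    git checkout -b some_name   # creates the branch AND switches to it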


I'm still just getting the hang of git, but I've found this best practices[1] tutorial the most handy to just get started. I needed to play around at this basic level before any of these other more complicated comments started to make sense. Check it out, it's very short.

[1] http://ariejan.net/2009/06/08/best-practice-the-git-developm...


I found the Pro Git book a very good read for understanding git for the first time: http://progit.org/book/


Great book. It really helped me overcome git's learning curve.


I've struggled with helping others learn git as well; you can see some of my thinking at (http://think-like-a-git.heroku.com/#1) and (http://confreaks.net/videos/612-cascadiaruby2011-think-like-...). Currently working on a longish standalone website for those who (like me) can't stand to slow down enough to sit through an entire 20-minute video. ;>


No, actually, it's not. And that post proves it.

You can do some things to make it easier on yourself (and others) but it's not simple. You find out how un-simple it is when you hit one of those magical corner cases.

Don't get me wrong, I love Git. I far prefer it over SVN and CVS. But it's not simple.


Git is conceptually simple, but has a baroque interface. It's nearly impossible to form a proper mental model of what's going on under the hood from its CLI.

The point of this article is that if you take the time to learn how git is actually constructed, the CLI becomes a lot more bearable, and is less likely to leave you in distress when Something Unexpected Happens.


I struggled for several hours last night to fix the history of a git-svn repo that had fallen out of sync. Eventually I started to grok it and the fix took all of two commands. I think that this semi-obfuscation may have been an intentional attempt to improve the SNR of kernel contributions - by putting in place a mild cognitive obstacle...


I wish I had written your first line!


I think the author's point is that the git internals (i.e. the .git folder) are simple; I don't think the author is disputing that the commands sitting on top of the git internals are sometimes horribly complicated.

The fact that the contents of the .git folder can be explained in ~2000 words (10 min of reading) is impressive--this would be impossible with a more complicated data model.
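And you can poke at it directly with a couple of plumbing commands:

    ls .git                 # HEAD, config, refs/, objects/, ...
    git cat-file -t HEAD    # prints "commit"
    git cat-file -p HEAD    # the raw commit: tree, parent, author, message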

FWIW, compare:

http://www.google.com/search?q=what%20is%20in%20.svn%20folde...?

with

http://www.google.com/search?q=what%20is%20in%20.git%20folde...?

All the results for the svn search are about getting rid of these folders -- nothing explaining their contents.


> All the results for the svn search are about getting rid of these folders -- nothing explaining their contents.

That would be because there's little to no content in the svn folders, and none of it is user-consumable: all the metadata manipulation is done via svn's properties[0] and their cli interface (proplist, propset, propget and propdel).

Apart from these, the content of the .svn folders (at least before 1.5) is pretty much only duplicates of your checked-out files. All the brains live in the (remote) repository, not in the .svn folders. I've never seen any case where somebody would go muck around with the content of the .svn repos, the only direct manipulation of them users have a use for is stripping them for exports or because somebody did not `svn export` and sent .svn folders where they did not belong.

Oh, and svn has had a full-featured client library[1] (and bindings for most popular languages[2]) for a very long time, so people have rarely needed to essentially reverse-engineer the interaction process from a checked-out working copy.

edit: now that I think about it further, your very observation proves the complexity of git, or at least of git's UI. It's not expected that users of svn ever go muck around with their .svn (and as I explained, there really is no reason to), so there is no wondering, and no audience, about that. Whereas not only is it expected that git users understand the content of their .git, it's pretty likely to become necessary at some point in your usage of git, because you'll have no other way. (Learning the plumbing by rote is not an option either, since it can only be understood if you know how git's store works; you'll just end up learning both anyway.)

[0] http://svnbook.red-bean.com/en/1.1/ch07s02.html

[1] http://svn.apache.org/repos/asf/subversion/trunk/subversion/

[2] http://svn.apache.org/repos/asf/subversion/trunk/subversion/...


A better comparison would be between an explanation of the contents of .git and the contents of your actual SVN repo on the server. Since that's what the equivalency is.


My point is that to a naive user googling to figure out what is going on, the dotfile element of svn is inscrutable.


Well, to be fair, the title is not 'Git Is Simple'. I'm just starting to really use Git, and after reading this it seems a little simpler to me. Personally, I think Git is relatively simple as far as version control systems go; it's forgetting 5 years' worth of knowledge of SVN and the nice visual tools that support it that is hard.


I thought it was a little odd that he seems to be comparing the svn GUI with the git command line experience, even though there are GUI tools for git as well.

Have you tried Tower (http://www.git-tower.com/)? I used it briefly for a small project and thought it was nice, but didn't really have time to get to know it that well.


I make daily use of GitX for visualizing the state of a repo, and it's got a pretty nice UI for building commits as well. (It's Mac-only, though.) Apparently there are several forks that have added new functionality on top of the version I use, but I'm comfortable enough at the command line now that I haven't bothered checking any of them out.


Yes, I'm with you on this, wccrawford. Git is not simple. Git internals may be simple hacks for hackers, but git usage is another story. It has terminology that is neither understandable nor compatible with other version control systems.

It may be built from simple scripts, but it's distributed and decentralised, so it has a steep learning curve.

No, no git is not simple. It's really complicated.


Git is mostly not bash scripts anymore:

    [~] 0 (jon@snowball2)
    $ file /usr/lib/git-core/git-status
    /usr/lib/git-core/git-status: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.26, stripped


Thank you. I was misdirected by the article. And I have not peeked at the git source ever.


Having used Mercurial and Git, I have to say I vastly prefer the interface of Mercurial. It still has its quirks, but overall I find it much easier to use, especially on Windows. Git was very clearly built as a Unix solution first, with Windows support hacked on later.


The worst thing in Git (comparing to Mercurial/other DVCSes) is that it is "more mainstream", thanks to GitHub. Everyone is assumed to have an account on GitHub (and such an account is even requested in the YC application form!), even if one is a Python programmer fond of Mercurial, or a Haskellista using darcs for everything around.


I tried to learn git and hg, and I found that hg was so much more friendly to work with. When things go wrong in git, cryptic error messages are spawned and I have no idea what to do. When I do things in hg, error messages are actually helpful. Even when things don't go wrong, hg constantly gives me helpful advice at the command line. Example: after you do hg pull, it tells me that I need to do hg update. Why can't git do stuff like that?


I used both, too.

From the mercurial docs: "If you felt uncomfortable dealing with Git's index, you are switching for the better."

To be honest, after a few years with git, I feel uncomfortable without the index.


Agreed - although I will say I've lost count how many times I've had to explain to a junior that "push would create remote heads" just means you need to fetch and then push...


Interesting - haven't ever used Mercurial but as far as Windows ports of Unix-based tools go I love the git interface. I always run it in standalone shell mode, and its shell has enough Unix commands built in that I haven't even had the need to install cygwin on my new PC yet. The "check out line endings as \r\n, check in as \n" feature is a neat touch as well (granted, any decent text editor handles both kinds of line endings, but still).
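That feature is the core.autocrlf setting:

    git config --global core.autocrlf true   # on Windows: CRLF in the working tree, LF in the repo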


Maybe I'm just an idiot, but it took me much longer than it should have to get SSH support setup for Github on my Windows box. I had no such issues with Hg.


When git was born, many people complained about its poor usability and some people created friendlier "porcelain" scripts. Eventually, git itself became "good enough" and maintaining porcelain scripts became too much work to keep up.

Is it too late for an improved git user experience? With so many online tutorials and books, a new porcelain interface would have a tough time capturing much mind share (while still calling itself git).

For those looking for something simpler than git, I recommend eg (EasyGit), a one-file Perl wrapper:

http://people.gnome.org/~newren/eg/


Personally, I think github is trying to become that simpler interface. We'll see how they do, but so far it is promising.


I would have really liked it if the author had mentioned exactly how the co-worker in the beginning had got into the "3-way merge" state, and somehow got onto a non-existent branch. I don't know what it means, and I've never got there, so I'd love to know how it happened.

(or maybe the author did mention it and I missed it?)


It's the result of a merge conflict (e.g. after attempting a rebase). If you work regularly with other people you end up seeing this a lot (or rather, whenever you both edit the same part of the same file at the same time).


Although I love Git, this whole thread exemplifies the way in which Git horribly fails my one success measure for good tools: you don't talk about them very much.


Totally agree. We switched from SVN to Mercurial around 6 months ago and I have to say I am annoyed that I have to spend so much time messing around with the tool rather than coding.


You can say the same about almost any non-trivial development tool. Vim, emacs, Eclipse, VisualStudio, etc. all require time spent "messing around with the tool rather than coding."

Another fallacy is that "writing code" is the only "useful" thing one can do. Creating clean logical history and good commit messages (or other documentation) does not involve writing code, but that doesn't mean it is not important.


Where did I say that coding is the only useful thing?

The point I was trying to make is that tools are productivity aids and if a tool gets in the way and distracts you from your main activity (or activities) too much it can be counter productive.

I use both Eclipse and VisualStudio and find those tremendous! A good tool hides unnecessary stuff from you and works in a clear predictable way. A tool is bad for me when it does the opposite.


The point I am trying to make is that all "productivity aids" have a learning curve and will require time spent "messing around" before they confer any benefit.

In my experience, once you understand how git works it gets out of your way entirely.


I wonder what would happen if we'd take the git object model, totally as-is, and redesign a coherent set of commands to manipulate them. With simple and consistent switches, good command names (if at all possible), closely related to the concepts in the object model.

Would that be very different from what git is now? My underbelly says yes, but really I'm not knowledgeable enough to tell. Any ideas?


Git is very powerful and I do like it more than SVN, but I felt like I had more trouble switching to it from SVN than if I had learned git from a clean slate. Switching my mindset from SVN-style centralized repos to decentralized git was the hardest part, as certain things in SVN didn't translate to git. Git is simple, but switching is not.


That's true; it's hard to translate the concepts of SVN/CVS into the git world.

I remember when I was migrating from CVS to git: it was fine to understand that cvs update is now git fetch & git merge, and committing is the same, except that I additionally need to push the commits to share them with co-workers. The hardest part was understanding rebasing and branch management. But it's worth learning all these concepts; they make version control more flexible. It's pretty cool to use git reset and git push -f to modify the commit history (make sure you know what you are doing).
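The rough translation, in commands:

    # cvs update is approximately:
    git fetch
    git merge origin/master    # or the shorthand: git pull
    # cvs commit is approximately:
    git commit -a
    git push                   # the extra step: share with co-workers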


> If you’re anything like me, you probably wondered why you were the only stupid person on the planet who didn’t intuitively get Git already.

Spot on! I really thought I was the only one.


(a bit off-topic)

Oh no! I now see cognitive dissonance everywhere. I realized that it's the same with git: maybe we like it so much because it took so much effort to master. I still don't feel like I master it. However, reading some books on git and understanding the philosophy did make it feel simpler.


Love the concept of the article, but right out of the gate comparing it to a Model T and saying you must be a mechanic to operate it will likely result in a high bounce rate, and possibly just serve to re-affirm someone's belief that git is, indeed, complex.


Git is intended as a tool for programmers, who are precisely the "mechanics" he mentions. Any programmer who's scared away by that doesn't understand what they do.


Err - Maybe we just resent having to spend so much time on the mechanics of the tool/s rather than our main activity which is writing code.


Oh, I agree. But if the net result is something that works better, the fact that it needs some internal tinkering shouldn't scare programmers away.


Should writers care about the opinions of people who do not read their entire article, but leave after seeing a diagram of a Model T?

I think the intended audience of the article are people who are already at least vaguely familiar with Git, but do not know its inner workings. The thesis seems to be "Git is complex, but not as complex as you might fear reading 'man git'".


So what was the cause of the original error and how did you fix it?


There was a conflict while rebasing. In this situation you generally do one of:

1. Resolve the conflict and continue rebasing.

2. Drop the conflicting change and continue rebasing.

3. Abort the rebase operation.
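The corresponding commands:

    # 1. fix the conflicted files, then:
    git add path/to/conflicted_file
    git rebase --continue
    # 2. drop the conflicting commit and move on:
    git rebase --skip
    # 3. give up and restore the pre-rebase state:
    git rebase --abort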


I read through the article waiting for the explanation of the "Falling back to patching base and 3-way merge?" line but unfortunately it wasn't explained. Git didn't become any simpler after all. :(


This line is irrelevant debugging information. The problem is that there's a merge conflict. This would happen with any version control system; it's not something unique to Git.


Great, entertaining writing style. Probably the most readable intro to Git I've ever read.


Yes, for someone who has never used Git, it's a very nice introduction to it.


I made the executive decision to leave our comfy world of Versions because it seemed clear that Git was winning the Internet.

I ADORE git and can't imagine NOT using it now that I've switched, but the little sentence above packs a whole lotta lame in it.

Executive decision? Gross. There are good reasons for them, but not many. If you can't win your team over you probably don't have a good argument for the change... which brings me to the next problem I have with that sentence: winning the Internet? That's a great reason to look into something. Not a great reason to switch your team to it.

Also, no. It's not simpler. It's pretty much just as complicated as You think it is. It's actually kind of a big pain in the ass to start using git.. but boy is it worth it.


I wish the author would've explained the motivation to move from SVN to Git, other than 'it was winning over the web'. Was SVN just not working for them, for some reason? Are there things that he and his company wanted to do that weren't possible with SVN, but were with Git?


Quite frankly, there's no reason for "why we moved from SVN to Git" topics anymore. It's been beaten to death.

Better to keep up while the team is small than to try to move even more people over later and turn into mega-corp still using CVS.

Learning something new isn't necessarily a bad thing. I question people that are unable to try to learn something new every day. It takes a few minutes of your time and you can learn something. Sure, git might take more than a few minutes… but why not TRY?

As for requesting why they switched from subversion to git… yea, as I said, you have a billion articles written out there by now about WHY. There aren't more needed. The benefits are pretty clear if you read any one of those other articles.

Of course, this more leads into "why git and not mercurial (or any other DVCS)?" Which again, has a billion other articles written up on it.


Part of it was that SVN was the last service running on a server I wanted to shut down. The other part was simply momentum. Github (and therefore Git) were too exciting to not want to be a part of. I didn't want to be left behind. That said, SVN was perfectly serviceable, though I don't imagine I'll ever use it again.


The sea parted for me when I caught Scott Chacon explaining git.

  http://www.youtube.com/watch?v=QF_OlomyKQQ
It really is better in so many ways and once you understand how it works, you may say, like me, "Ahh, YES!"

And, yeah, he warns you about rebase.


For me it really clicked when I watched: http://blip.tv/scott-chacon/git-talk-4113729 (also by Scott Chacon).

Once you understand the internals -- I remember being amazed at what .git/objects/ really was -- nearly everything about the git ecosystem becomes measurably easier to understand.

Once you understand how your project looks in the eyes of git, it becomes a breeze to manage.


I read this, and think: cryptographic hashes are just too damn cool.
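If you want to see them in action, this one-liner (straight from the Pro Git book) shows how git names content:

    echo 'test content' | git hash-object --stdin
    # d670460b4b4aece5915caf5c68d12f560a9fe3e4 -- same content, same ID, on any machine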


This article was EXACTLY what my little brother needed! I've been trying to convince him to jump over to git for nearly a year now.


Nice. You'd be surprised how many people would start using Git because of these kinds of articles. No complications.


If only promoters of Git put as much effort into fixing Git's usability as they do into posturing about how everyone is wrong and how easy Git is, we'd have a better tool.


No thanks, I think I will wait for another couple of years until the next fan-boy tool comes along.


...and you are an idiot. WHY did you use rebase? If you are a noob, why won't you use the great GitHub for Mac app? It is even prettier than Versions. http://mac.github.com/





