Ask HN: Can we do better than Git for version control?
173 points by slalomskiing on Dec 10, 2023 | 298 comments
Do you think it’s possible to make a better version control system than Git?

Or is it a solved problem and Git is the endgame of VCS




A lot of people these days have just been thrown into the fire with Git as the first and only VCS they’ve ever seen.

I’m not that old, but I’m old enough to have used RCS, CVS, SVN, then finally Git. I started using Git super early, before GitHub existed. You may not believe me, but Git was the answer to all my prayers. All those previous systems had fundamental architectural flaws that made them a complete nightmare to use on a team for any serious work.

Git has no such problem. There’s nothing it can’t do. Instead, the only limitation is on the user being able to know how to get Git to do what they want it to do. Thankfully, that’s always solvable with a quick read of the documentation or web search.

I understand that people might want to make it easier, since the UI is complex, but that’s never going to work. If you abstract away the complexity of the UI, you will necessarily abstract away the power. If you abstract away the power, you are no longer solving all the problems that Git solved. People just don’t realize what problems are being solved because they never lived through those problems with the previous VCSes.

You know what’s easier than building a new VCS better than Git? You know what’s easier than using a different VCS from the rest of the world? You know what’s easier than designing a miraculous abstraction on top of Git?

Learning Git.

Whatever effort you are putting into seeking or building an alternative, instead put that effort towards becoming a Git expert. It will be a lot less effort with a lot more benefit. Trust me on this. It will be well worth it for your career and for your personal computing life as well.


While I feel like this is generally true for most programmers and knowledge workers, Git is absolutely not suited to the workflow of several industries, including the one I work in: games.

Working with an engine like Unreal Engine on a project of any reasonable size means working with both hundreds of thousands of active files (my current project's repo's HEAD has ~400k) and hundreds of gigabytes of data, many individual files several GB on their own. Our repository (in Perforce) is currently on the order of 10TB.

Git, even with LFS and partial clone and shallow copies and fsmonitor, just falls apart at this scale. Add to that the necessity for less-technical people (artists, designers, VFX, audio, etc.) to use it, and it is really just a non-starter.

I absolutely loathe Perforce (having used and occasionally admin'd it professionally since 2007), but I begrudgingly admit that it is currently the only publicly available VCS that can fulfill all of the requirements of this industry in a practical way. Plastic and the others just aren't there yet.


This is a solved problem with git. I’ve worked on bigger projects than yours with histories that are so big you can’t clone them if you wanted to. I’ll make a note to drop what to google for tomorrow, but basically the history is fully available but git knows how to query it over the network. When you open a file, it loads it over the network. Remote build systems do your builds.

Most of this was built by Microsoft to work on Windows with git.

We moved from SVN in the late 2010’s, and holy crap man, did it change our workflows. No longer were we passing around patch files in a team for code reviews, but actually reviewing code somewhat like on GitHub. It was magical.


Microsoft built a series of solutions to handle its massive Windows repo: VFS for Git (GVFS), then Scalar, and now a set of MS-specific patches on top of the official Git client. Apparently even that is no longer required, as partial clone is now supported on Azure as well (partial clone being another such implementation from Microsoft employees that made it into both GitHub and upstream git).

So yeah, solved problem thanks to Microsoft. Solved multiple times in fact, and because it's Microsoft, not all of them had Linux installers: https://github.com/microsoft/scalar/issues/323#issuecomment-...

https://github.blog/2020-01-17-bring-your-monorepo-down-to-s...

https://devblogs.microsoft.com/devops/introducing-scalar/

https://github.com/microsoft/git

https://devblogs.microsoft.com/devops/git-partial-clone-now-...


Thanks!


>Most of this was built by Microsoft to work on Windows with git.

Hacking git into something it is not does not qualify as learning to cope with git in my book.

Apparently git does not solve all problems and there is indeed value in exploring other options and building new systems that fill other niches.


umm. it's still git. Git is basically content-addressable storage with a couple of layers on top (heads/tags/trees), and those layers are themselves content-addressable storage. If you can offload that storage to a remote, you still have git... and can use git as you've always used git. The tricky part is making it feel like the storage is local and your own, instead of shared.

You only need this when, well, you need it. You can go a surprisingly long way (nearly a decade of daily commits by hundreds of programmers) and stick to vanilla git.


> Add to that the necessity for less-technical people (artists, designers, VFX, audio, etc) to use it

How is that solved with Git?


Maybe they just need to follow carefully written instructions created by more technical people?


When has that ever worked? And why does this count as a solution rather than a poor workaround for you?


What has ever worked, reading and following manuals? Like, forever? Stop pretending there's a simple solution. People just need to use their brains. Both people who write those manuals and those who read them.


There literally already is a simpler solution for non-technical people, it’s called Perforce and tons of game studios use it

But I’m sure you’re smarter than the whole industry so you probably know better


Hey smart man, this is a thread about git, not perforce. Just fyi.


Since forever? It's not like people are born with an innate knowledge of how any software works.

Manuals/guides/tutorials are the most scalable way to teach people how to do literally anything.


I shed a tear as I remembered the pain of doing that with SVN


That’s why you need a binary repository like Artifactory rather than storing your large files in Git. You can still track the large files in Git while the binaries themselves live in the artifact repository.


This could be solved with references rather than copies if your tools integrated them. e.g. dependencies can be ref'd with source/name and version. To ensure availability all such used blobs could be stored efficiently elsewhere and versioned.

It's great that Perforce works and I've heard others in the graphics field using it, so it satisfies a need. Don't know if/when it would be a general need for say GitHub users.


This sounds like a case of using Excel as a DB. Holding a hammer by its head and declaring it unfit for normal people to paint with.


I also work in games and everyone I know uses Perforce for the reasons mentioned above

Have you considered that maybe they just have a different use case than you?


What I mean is, no one ever said that git was meant to hold bulk assets and resources directly, and so doing so is painting with a hammer, not an insufficient paintbrush.

Yes of course they have a different use case. That is just another way to say the same thing.


This post is a terrible failure of imagination, the kind that would have stopped language development at C because it was so much better than Assembly.

If you have no problems with Git, then I’m happy for you. I certainly have problems with Git, eg its inability to handle large repositories meaningfully (enjoy a Chromium checkout!), the arcane CLI commands, the shut-your-eyes-and-hope of moving commits between branches and other sharp edges.

That Git has been as resilient as it has is largely a function of being written by Torvalds and GitHub network effects. It isn’t for lack of better/as good features found in other VCS methodologies.


I think it's definitely possible to create a better CLI than git's. However, I think the majority of what unhappy git users imagine when they ask for a better UI would oversimplify things and end up removing a lot of valuable capabilities. These capabilities may not be suitable for a non-technical audience, but I believe they should be table stakes for professional software developers: they are very sharp tools for managing the software development lifecycle that transcend whatever specific languages or domains one may be working in.

To take your example of moving commits between branches, the git rebase parameters and --onto flag are definitely confusing, but each of them is clear and necessary when understood. The CLI could be refactored to be clearer, but the hard part is understanding what each one means, which you would need to do anyway even with a better CLI. Obviously one can work around these knowledge gaps, but once you understand them they are useful for an entire career and the warts tend to fade in importance relative to the value of a tool with the insane power-to-weight and rock solid architectural tradeoffs of git.


git still has issues with lots of files. I downloaded the SEC EDGAR database as files and I thought it would be a nice way to store and watch changes.

Nope.

So slow. It does not like millions of files. And all my tooling that does a “quick” git status locked up the terminal, VS Code, etc.


Did you try the file system daemon (git fsmonitor--daemon start) that’s built into git that was designed to speed up git status on many files?
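For reference, a sketch of turning it on per-repo (a reasonably recent git is assumed; the repo name is invented):

```shell
# Sketch: enabling git's builtin filesystem monitor for one repo.
# On huge working trees this can make `git status` dramatically faster.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q demo; cd demo
git config core.fsmonitor true        # `git status` starts the daemon on demand
git config core.untrackedCache true   # commonly paired with fsmonitor
git config core.fsmonitor             # prints: true
```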


There is a major misconception that you move your "commit" around. In fact, each commit in the git history refers to a specific directory state plus its parent commit(s). You can't meaningfully "move" a commit directly. What cherry-pick and rebase actually do is:

1. Diff the content of your commit against its parent.

2. Apply that diff to the content of another commit.

3. Create a new commit, reusing the message/author of the commit from step 1.

And that is also what makes git work for everyone, because it is simply a directory that contains tons of new_new_updated_project on steroids. It's up to you to decide what to do with it.
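Those steps can be sketched with plain commands in a throwaway repo (file and branch names here are invented for illustration):

```shell
# Sketch: a manual "cherry-pick" as diff + apply + commit.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q repo; cd repo
git config user.email t@example.com; git config user.name t
git commit -q --allow-empty -m base
echo one > a.txt; git add a.txt; git commit -q -m "add a.txt"
git branch feature                  # feature = the commit we want to "move"
git checkout -q -b other HEAD~1     # a branch that doesn't have it

# 1. diff the commit against its parent
git diff feature~1 feature > patch.diff
# 2. apply that diff to the other branch's content
git apply --index patch.diff
# 3. commit, reusing the old message/author (-C)
git commit -q -C feature
git log --oneline
```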


> its inability to handle large repositories meaningfully

Out of curiosity, what other SCM tools can pull this off better?


I haven’t played with many because I simply don’t have the time between work (Google’s Piper and Git) and kids. Piper handles the monorepo by only offering files when requested by the file system at the specified commit (this is probably a gross or even incorrect simplification, but is how it presents to the user).

There are good reasons that one should keep the whole commit history to distribute a source of truth, but at the same time if one is going to place full trust in something like GitHub (as many do), there’s no reason that full trust can’t be given to a similar website that provides a Piper model with some mechanism to request more than one commit to be brought to local storage (to get more sources of truth).

End of the day, a large repo checkout is going to be limited by either your network, your CPU/storage to perform archive expansions, or frequently both. I get the impression that Android and Chromium are expanding faster than those other things are improving, but have no data to back that up.


Almost every big game studio uses Perforce


> arcane CLI commands

Are the syntax of the commands the issue or are all CLI commands arcane to you? What would you propose to be the command syntax if you were designing it?


>I understand that people might want to make it easier, since the UI is complex, but that’s never going to work. If you abstract away the complexity of the UI, you will necessarily abstract away the power.

I disagree, really.

There is no fundamental reason why you cannot have a user-friendly UI for 98% of the cases and an "advanced" mode for the other 2%.

Just like GUIs have "advanced" settings, the CLI can have 10 well-designed commands for basic usage that are taught to newbies, and then the advanced ones.


The problem is that for 99.99% of the people engaged in the debate over whether git is good or not and should be replaced, the correct answer is "stop fighting it and just learn git", because there's nothing better. They need to stop thinking about how git sucks, because that is literally getting in the way of their career goals and is self-sabotage.

And if anyone seriously wants to try to replace git they need to understand how git works first at the level of an advanced expert first anyway. At that point, you can have the discussion about how to make git, with better LFS/monorepo support, with a more pleasant UI/UX, etc.


1. That's called Stockholm syndrome.

2. Literally no relevant career goal is significantly obstructed by less git proficiency.


What? Maybe I'm misunderstanding you, but are you saying that absence of git knowledge is not a factor for career possibilities? I guess it depends on the kind of job..


Isn't this the case with git? If you only do basic stuff I think you don't need more than 10 commands. And if you wanna go more advanced, you have the other commands and flags. Git has also added some new commands like `switch` to make it easier for newbs.


Do it then. Many have tried. All have failed. Talk is cheap.


I would say that Mercurial has a simpler UX than Git without being less powerful. I think Jujutsu (see other posts here), which I started, also has simpler UX and is more powerful than Git in many ways. Have you looked at either of those?


About a decade ago, I tried both mercurial and git, and at the time had only subversion knowledge. I remember finding mercurial extremely confusing and git very simple, but don't remember the details about why mercurial was confusing.


Whereas conversely I was glad that with Mercurial I no longer had to deal with Git's staging area (for me, Mercurial's interactive commit serves the same purpose).


Magit is great. So not all have failed.


But your talk "it's impossible, trust me" is cheaper since it's a much stronger claim


>Do it then. Many have tried. All have failed. Talk is cheap.

I, just like many other people, use GitHub Desktop (or GitKraken or Git Extensions, though I don't like that one) when I'm working in a GUI environment. That's proof that it is possible.

The problem with creating a 3rd-party tool that's a CLI wrapper is that you cannot rely on it being installed on the system.

That's why such solutions usually fail: you risk relying on stuff that may not get traction, and you'd be better off sticking to the "official" syntax in the long run.


I'll share here my positive opinion of GitExtensions (on Windows). It has really helped me use and better understand Git, while also helping me define and share workflows with my colleagues.

Other GUIs, like the ones embedded in VisualStudio, VSCode or Rider, try to sell skipping some cognitive steps for some operations as simplifying or speeding up dev flow. I find they are just offering extra (beyond what git itself offers) ways of shooting yourself in the foot.


Is gitk still a thing that's available with git? Not as full featured but great for viewing and good with many commits/files.


Yes


Plenty of folks are!


> I understand that people might want to make it easier, since the UI is complex, but that’s never going to work. If you abstract away the complexity of the UI, you will necessarily abstract away the power.

There's a lot of "useless" complexity in the Git UI. The naming of the commands and the underlying concepts are quite inconsistent. The commands themselves are inconsistent. It's rare to see somebody, even a git fan, dispute that the UI is far from optimal. And the warts in the UI are being improved.

These UI warts start to become invisible when you get used to them, but e.g. when teaching git they become painfully obvious again.

The underlying git architecture is simple and it's the simplicity that makes git powerful. Powerful tools don't necessarily need complex interfaces. In fact power often comes from simple interfaces that compose well.


We could get aliases that get slowly incorporated into git. Like, let's say git undo instead of git reflog, or things like that. It would not be too difficult to support a couple of names to do the same thing and let people use the one they prefer.
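You can already get most of the way there with an alias today. "Undo" meaning "un-commit but keep the changes staged" is just one possible choice (an assumption on my part), but it shows the idea:

```shell
# Sketch: a `git undo` alias; here "undo" = un-commit, keep changes staged.
set -e
tmp=$(mktemp -d); cd "$tmp"; git init -q .
git config user.email t@example.com; git config user.name t
git config alias.undo 'reset --soft HEAD~1'   # pick whatever semantics you prefer
git commit -q --allow-empty -m init
echo x > f; git add f; git commit -q -m oops
git undo                # back before "oops"; f stays staged
git status --short      # prints: A  f
```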


We are slowly getting those. For example there's now git restore (and git switch) to fix the god-awful overloaded checkout.

When you get your repo into a state that needs reflog, it's probably not amenable to very straightforward commands. A bigger problem is that it's a bit "too easy" to get into that kind of state.

The original git interface wasn't really intended for "general use" and there was supposed to be the friendlier "porcelain" on top of the lower level "plumbing". But people started to use the plumbing directly and the porcelain never came to be.
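Side by side, the newer single-purpose commands versus the overloaded forms they replace (git >= 2.23 assumed; file and branch names invented):

```shell
# Sketch: git switch / git restore next to their checkout equivalents.
set -e
tmp=$(mktemp -d); cd "$tmp"; git init -q .
git config user.email t@example.com; git config user.name t
echo one > f; git add f; git commit -q -m init
git switch -q -c feature     # was: git checkout -b feature
echo two > f; git add f      # stage an edit
git restore --staged f       # was: git reset HEAD f   (unstage, keep the edit)
git restore f                # was: git checkout -- f  (discard the edit)
git status --short           # prints nothing: the tree is clean again
```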


The git model has fundamental limitations because it saves snapshots rather than changes, and doesn't capture some metadata changes like renames. A tool like Darcs or one of its spiritual descendants will have fewer merge conflicts than git.

Totally agree on your main point though. The benefits of switching are far lower than the costs.


>it saves snapshots rather than changes

Once you understand this, GIT makes a whole lot more sense. The trouble with failed merges is an artifact of trying to appear like it tracks deltas. Merge conflicts when you're the only programmer on a product (but have multiple computers) are maddening.

I blew away .git and everything in it, restarting a repository, more than once because I couldn't resolve a merge conflict, before I learned that simple fact.

I've gone from .zip files of my source code on floppy disks, to SVN, then Mercurial, and finally Git. Git is amazing, though the UI is a pain, as most agree.


It's not only merge conflicts: I find that this causes trouble if you need to manage multiple branches. Generally you see patterns like: 1. Create a patch. 2. Merge it to the main branch. 3. Backport it as a cherry-pick.

But there is no machine-readable metadata that the commit is the same in both cases. In simple cases you can work around this by basing the patch off of the merge base but if there are conflicts that doesn't work well.

The result of this is that these two different but logically identical commits will cause future merge conflicts and make questions like "does this branch have this patch" much more difficult to answer.

This is something that I think https://pijul.org/ does quite well. Their base unit is a patch and that can be applied to multiple places. You can also have snapshots which are logically a collection of patches.


> doesn't capture some metadata changes like renames

If you rename the file using git and that is the only change in your file, then it works:

  git mv oldfile newfile
Commit that change and the rename is in your history.


`git mv` is just shorthand for `git rm` and `git add`. All Git knows is that there was a file called `oldfile` and there's now a file called `newfile`, and whether or not they have similar contents or filenames. It's only when you run `git status` or `git diff` or whatever that it actually tries to guess if anything was renamed. So what you're seeing isn't the rename being recorded in history but just that guess being correct. It's easy for it to become incorrect if you make some nontrivial changes or have multiple similar files.
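A minimal repro makes this visible (file names invented): the tree object stores no rename at all, and the "rename" only appears when the diff machinery guesses it.

```shell
# Sketch: renames are inferred at diff time, not stored in the commit.
set -e
tmp=$(mktemp -d); cd "$tmp"; git init -q .
git config user.email t@example.com; git config user.name t
seq 1 20 > oldfile; git add oldfile; git commit -q -m add
git mv oldfile newfile; git commit -q -m rename
git cat-file -p 'HEAD^{tree}'           # the tree just lists newfile -- no rename record
git show --name-status --format= HEAD   # prints a rename line like: R100 oldfile newfile
```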


The parent comment's suggestion is to commit each of these moves individually, which should guarantee that the detection algorithm will get it right. Will definitely pollute your history, though.


the rename won't be in your history because the tree and commit objects don't support renames

the tooling infers a rename based on content similarity between the deleted and the added file


Is there a difference?


Absolutely. In a move-and-edit situation, git will sometimes infer a rename and sometimes not, based on your local version of git, your settings, and the details of the edit. If inference fails, you may have a harder time resolving merge conflicts, performing cherry-picks, etc.


So just make sure the moves and edits are in separate commits?


Yes, that's the standard workaround.


Absolutely, the inference is based on a set of heuristics and is often just plain wrong.


I've had one or two really annoying merges that were complicated by these renames. TIL thanks to these responses that it's not actually a rename.


> If you rename the file using git

Does nothing. Git won't remember you did it.

> and that is the only change in your file

This always works, but it means you need to choose between flaky rename detection and weird extra rename-only commits that probably don't compile.


> ..because it saves snapshots rather than changes..

I might be misremembering the technical details, but isn't that only the case in a git repo with zero pack files?

Will grant that the lack of metadata on renames can be an issue when a file is heavily refactored alongside its relocation.



Don't the snapshots make it much faster than without?


> Git has no such problem. There’s nothing it can’t do.

It can't show me the branch I'm working on, starting from the commit that began the branch, up to the last commit. (Or if it can, I have no idea how to do it.)

This isn't a functionality issue, but rather a conceptual one. Git just fundamentally thinks of branches differently than I do. (And, maybe most humans?) To me, a branch begins ... when I branch it. To git, a branch begins ... at the very first commit ever in the repo, since they consider commits that were made before the branch point to still be part of the branch. The branch point then is just another commit to git - and not even one that they think is important enough to highlight. But without any ability to clearly identify the branch point, you can often find yourself looking through a forest of commits that are irrelevant to what you're really searching for.

I spent the better part of an hour the other day trying to figure out how to identify my branch point. (For the record, the best solution I eventually found was: cat .git/refs/heads/<branch name>.) Just my $0.02, of course, but IMO this is absurd - and a big feature gap / user friendliness issue with git compared to other VCS's.

IMO git is just another step in the evolution of VCS's, and not necessarily even one of the better ones. Its concepts, functionality, and feature set are focused primarily on distributed development and multiple people maintaining different source trees ... which is fine for the Linux kernel and other projects that heavily use that use case. But many/most projects don't work that way, and for them a centralized VCS is sufficient. I have no doubt that a better VCS will come along and replace git one day.


  git log <base-branch>..<branch>
I often need the count of commits, usually for build IDs,

  git rev-list --count <branch> ^<base-branch>
I usually use gitk for visualizing branch commits. CLI equivalent is

  git log --graph --oneline --all


He is saying that finding <base-branch> is too hard. There is probably some magic with git merge-base or git show-branch, but I don't know them well enough to do it
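For what it's worth, git merge-base finds exactly that branch point. A throwaway-repo sketch (branch and commit names invented):

```shell
# Sketch: merge-base recovers the commit where a branch diverged.
set -e
tmp=$(mktemp -d); cd "$tmp"; git init -q .
git config user.email t@example.com; git config user.name t
git commit -q --allow-empty -m c1
git commit -q --allow-empty -m c2     # <- feature will branch off here
git branch -m main                    # normalize the branch name
git switch -q -c feature
git commit -q --allow-empty -m f1
git switch -q main
git commit -q --allow-empty -m c3     # main moves on independently

base=$(git merge-base main feature)
git log -1 --format=%s "$base"        # prints: c2  (the branch point)
git log --format=%s "$base..feature"  # prints: f1  (only the branch's own commits)
```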


Git keeps the entire history on your local machine; this becomes a problem if your project grows to several hundred GB (not untypical in game dev). Even SVN was much better for working with large repositories: you only needed a big server. Git is quite nice for "source code only" projects though.


This is true, but it is technically an implementation issue. Git supports shallow and narrow clones, and the data model works fine with only partial local knowledge. That being said, the "check out all files to disk" approach does have fundamental limits: checkout is one of the very few operations that is linear in the tree size, though that isn't fundamental to Git's abstract model. To solve this, something like a virtual filesystem that fetches new data as required and tracks changed files can make almost all daily operations scale with the size of the patch, not the size of the tree. The data model for this change tracking would be basically equivalent to Git's index structure, just applied to the working tree instead of leaving the working tree on a general-purpose filesystem.

That being said the unfortunate truth is that many, many, many users of Git rely on the implementation as the API, directly reading and writing to all corners of the `.git` directory. So even if the core command could support something like this (or a wrapper could be provided) it would likely run into a lot of tooling issues.
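As a rough illustration of the partial-clone side of this (a reasonably recent git assumed; `file://` stands in for a real server, and all names are invented):

```shell
# Sketch: partial clone over a transport -- blobs come down lazily, on demand.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q src
(cd src && git config user.email t@example.com && git config user.name t \
  && echo payload > big.bin && git add big.bin && git commit -q -m init \
  && git config uploadpack.allowfilter true)     # server-side opt-in for filters
git clone -q --no-local --filter=blob:none "file://$tmp/src" dst
cd dst
git log --oneline        # the full history is local...
cat big.bin              # ...but this blob was only faulted in at checkout
```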


Microsoft created the git virtual file system to address this so that they could move Windows to git. It has since been replaced by “Scalar”.

https://learn.microsoft.com/en-us/previous-versions/azure/de... https://github.com/microsoft/scalar


I have a lot of huge files and use SVN with it. It's dirt cheap to host, next to no memory requirements, and it comes with a web UI to boot because it's built in top of WebDAV. You can literally open a repo in the browser.

The only downside is the per-file versioning, the update needs to be run on the top directory otherwise interesting problems happen.


git supports shallow clones.


Does Git LFS help with this problem?


Overall LFS feels like a very hacky, tacked-on solution with warts.

Git LFS is horrible if you mistakenly add some file(s) to LFS.

Restoring to LFS-less state is supposed to be easy but it is always very painful.

There should be a way to tell repo - I do not want any LFS at all. In practice this is painful, and it is easier to start a new repo....

Also for some reason Github/Microsoft decided to "monetize" LFS. The freebie limits are 1GB for hosting AND transfer! Then the charges get very expensive.

I am not sure why GitHub LFS is priced like some AWS S3 plan and not like a OneDrive plan.


GitHub charges for LFS and has since they initially introduced it, way before Microsoft was involved. Also, the pricing model is changing and will include a minimum of 10GiB for free (250GiB for Team/Enterprise customers). It’s cost-recovery and abuse-prevention, though, not some nefarious scheme. And funny you mention AWS S3... what costs might GitHub be recovering? ;)

Microsoft’s homegrown LFS implementation (in Azure DevOps) does not and has never charged for LFS. It’s nice to have friends with a cloud blob storage service!

Source: I was the product manager for Azure Repos’s LFS server and am currently the product manager for all things Git at GitHub.


Have seen various teams have major issues with LFS - becoming quite a support overhead to keep art teams productive.


Yeah exactly this. Git LFS is supposed to fix the problem, but I haven't seen it working flawlessly either yet.


Interesting, I had issues using Git LFS to store images for my personal site. I thought I just didn't know how to use it properly...


Not Git LFS, but Git VFS, now replaced by “Scalar”:

https://github.com/microsoft/scalar


Wholeheartedly agree.

Going from SVN to Mercurial was a night and day experience. Going from Mercurial to Git was a marginal improvement initially but a lasting change long-term.

Then there are the "visual" VCSes like Rational and TFS that are designed to _only_ work within an IDE and grokking them involves wading through hundreds of pages of corporate tech docs.

VCSes (or at least the ones I used) were generally awful before Git.


All software construction involves essential tasks, the fashioning of the complex conceptual structures that compose the abstract software entity, and accidental tasks, the representation of these abstract entities in programming languages and the mapping of these onto machine languages within space and speed constraints.

Git solves the deeply complex problem of distributed version control. Most complaints about git are actually complaints about the essential complexity of the problem, not the accidental complexity of the tool used to solve it.


Totally agree that people spend way too much time complaining about git's solutions but there's not nearly enough criticism of the fact it's solving the wrong problem 99% of the time.

I want to share 5k LOC with a colleague. Anyone who thinks that requires a decentralised solution in 2023 is unwell.

I recently wasted a few hours because I forgot to fetch before running some code. Sure the UI could be better (if you're very tired and you're told you're up-to-date you'll stupidly believe it), but fundamentally the 'feature' that caused this problem is something I'll never use. I'm always going to code with a connection to the internet. I shouldn't need to mentally keep track of local vs remote because that distinction simply shouldn't exist.


What you fail to understand is that, outside of all talk about decentralization, keeping local state is a huge performance optimization. Tagging a large CVS tree used to take several minutes, because it would talk to the remote server separately about every file. History browsing & bisect are very pleasant in Git because the local computer already has all the information, and just needs to present it to you.


You can still cache things locally; it just shouldn't be something the user has to be aware of. What I'm typing now isn't immediately being communicated to the central server on every keystroke, there is still local state, but it's still a client-server architecture.

The repos I work on are often smaller than the average webpage, I'm really not buying the idea that in the age of HD streaming bandwidth-concerns are an issue here.

Incidentally none of the teams I've worked on ever even use any git features like bisect. The cult of git means everyone has to use git, even though for most people that just means many hours wasted googling with absolutely nothing to show for it (as in git is strictly inferior than just sharing files across a networked drive for many teams' use-cases).


Caching is trickier than decentralization because it's hard to know when the cache is valid. Fetch/push make that explicit, and not git's concern.

Meanwhile, others work in airplanes and in the woods.

Not using git bisect is your loss. I'm very happy it exists, and wouldn't trade git for just files on network drives.
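To make the bisect point concrete, here is a toy-repo sketch of `git bisect run` pinpointing a "breaking" commit automatically (the file name and "broken" condition are invented):

```shell
# Sketch: automated bisect finding the first commit where a check fails.
set -e
tmp=$(mktemp -d); cd "$tmp"; git init -q .
git config user.email t@example.com; git config user.name t
for i in 1 2 3 4 5; do
  echo "$i" > value; git add value; git commit -q -m "set value to $i"
done
git bisect start HEAD HEAD~4 >/dev/null 2>&1        # bad = now, good = 4 commits back
git bisect run sh -c 'test "$(cat value)" -lt 4' >/dev/null 2>&1  # "broken" once value >= 4
bad=$(git show -s --format=%s refs/bisect/bad)      # the first bad commit
echo "$bad"                                         # prints: set value to 4
git bisect reset >/dev/null 2>&1
```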


Yes, and pushing "when do I update to/from the server" onto the user is exactly what I'm complaining about. No other software I use does this. Even software originally built to do things locally allows you to seamlessly update to/from a server nowadays (e.g. working on a cloud document in Word). And any software built from the ground up to do this tends to have a great user experience (e.g. Figma).


You can absolutely build a system that fetches a git remote whenever it updates, you just need some sort of a notification/subscription system to know when to trigger it.

You can absolutely build a system that pushes (some of?) your branches on every commit.

For me personally, that'd be annoying; every time I talk to the git server, I want to insist on a Yubikey touch.


"History browsing & bisect are very pleasant in Git because the local computer already has all the information, and just needs to present it to you."

Probably the only thing that makes me not miss SVN too much.


Most complaints about git are about the design philosophy of the tool. While there are complex edge cases in distributed version control, the vast majority of situations people actually encounter are not that complex. But because git prioritizes the worst case over the typical case, it's much less usable than it could be.


That's what I liked about git too. It felt like a powerful, ready-to-use tool. CVS and SVN were more bureaucratic, slower and limited. Took zero seconds for me to drop everything else.

The underlying model is alien to many, and it requires a non-black-box approach to get out of pits sometimes, but to me it's always worth knowing.

The only decision I dislike about git is the hard-wired staging area idea. There's a reason why most people end up stashing 109 times a day, and I think there's a golden idea in merging the two.


i mostly agree that git can do almost anything (but see the limitations others bring up), however this claim is easy to disprove:

Whatever effort you are putting into seeking or building an alternative, instead put that effort towards becoming a Git expert.

building an alternative naturally is more work than to learn git for one person, but not everyone needs to do that. a few people building an alternative could save the learning effort of many more people.

instead i'd like to point out a different reason why rebuilding git may not solve the problem:

git is good because it is powerful and flexible.

that power and flexibility by necessity makes git more complex and difficult to use.

you can build a simpler system, but then half of today's git users won't be able to use it because it won't have the features they need.

and a system that has all the features of git today and also solves the problems it currently has will be even more complex and less straightforward to use.

the best approach is probably to have a common backend that has all the power and flexibility needed, and multiple different frontends that are simplified to only expose the features needed for a particular workflow.


> I’m old enough to have used RCS, CVS, SVN, then finally Git

I'm old enough to have used CVS, SVN and then a bit of git.

I remember the big selling point to go from CVS to SVN was "you can put binaries in there!" ... which we never did.

And now I see the selling point to go from SVN to Git is "you can do really complicated stuff like rebase".... which we've never done.

For a team who just does run of the mill distributed version control stuff (commits, diffs, blame, conflict resolution), can you explain why git is such a huge benefit over SVN?


SVN doesn't do very good conflict resolution. It especially becomes more of a problem when you have multiple people working on several branches, that might want to merge some of each other's work. Apart from this, git also came with many more advanced features that improves branch and history management.


Great comment.

Git really needs a good git tutor game.

Like vimtutor or any one of the *nix/shell games.


Along what axis do you want the VCS to be "better" than git?

For example, git's cli user interface is monstrous (yes, I know, you personally have 800 cli commands memorized and get them all right every time, that doesn't make it "good"). From the outset, the maintainers of git basically decided "it's too much work to make all the cli flags behave and interact consistently" so they didn't. This allowed git to grow fast, at the cost of the cli user experience.

That said, git is big enough that multiple companies have come along and "solved" the git UI problem. None of these aftermarket UI layers are perfect, but there are enough of them and they are different enough that you can probably find one that is good enough for you, along whatever axis you personally dislike the git UI (examples include [0], [1], [2], which tackle very different user workflow problems).

[0] https://git-fork.com/

[1] https://graphite.dev/

[2] https://news.ycombinator.com/item?id=7565885


I generally don't hear too many complaints about Git in the Windows/.NET development world, probably because there are good UI front ends and there's not as much 'tough guy' cred from sticking to the CLI. Visual Studio does a decent job of abstracting the Git nuances, but I personally use Git Extensions, which looks and feels much better on Windows than the other cross-platform UIs.

I drop to the CLI occasionally, especially for multi-step or automated scripts, but you can pry a nice visual commit graph and full-featured integrated diff viewer from my dead hands. Git is powerful and option-laden; the perfect tool for a UI to aid in discoverability. The CLI feels like programming in a text editor vs a real IDE.


> Visual Studio does a decent job of abstracting the Git nuances, but I personally use Git Extensions, which looks and feels much better on Windows than the other cross-platform UIs.

IDEs and text editors sometimes have nice Git integrations in the UI, but I wanted standalone software that I can use for anything from various programming projects, to something like gamedev projects (with Git LFS) or arbitrary documents.

In the end, I just forked over some money for GitKraken, it's pretty good, especially with multiple accounts on the same platforms, when you want to switch between them easily: https://www.gitkraken.com/

There's also Sourcetree which I used before then, kind of sluggish but feature complete: https://www.sourcetreeapp.com/

For something more lightweight, I also enjoyed Git Cola on various OSes: https://git-cola.github.io/ Even Git documentation has a page on the software out there, a good deal of which is free and has good platform support: https://git-scm.com/downloads/guis

Quite frankly, I spend like 90% of the time using a GUI interface nowadays, when I want to easily merge things, or include very specific code blocks across multiple files in a commit, or handle most of the other common operations. Of course, sometimes there's a need to drop down to the CLI, but you're right that some GUI software feels like it actually improves the usability here.


> there's not as much 'tough guy' cred from sticking to the CLI.

That’s probably really all there is to these discussions, good old technocratic chauvinism :)


VS Code + Git Graph + GitLens is all you need for a happy git experience.


This! Any graphical history for git that doesn't let you show all branches at once is trash. (Like the one in Visual Studio) The interactive rebase editor in Git Graph is very nice too.

I also love how smoothly you can jump between git CLI and GUI in VS Code.

I'll have to look at how to compare arbitrary commits in Git Lens and whether it is in the free version. That's the one thing I still rely upon TortoiseGit for


How does diffing and conflict resolution work?


[Not the person you're replying to]

Vscode has a built-in (and quite good) 3-way merge editor, and an excellent editable diff view. GitLens also makes it easy to diff any two refs within vscode.


Thanks!


TortoiseGit on Windows is killer. Hard for me to code without it.


I too am a fervent GitExtensions apostle.

I really wish the 4.* release supported a dark theme, it's the only thing keeping me on the 3.* release, and I dread the day I'll have to switch for whatever reason...


Jujutsu version control system looks very promising in the way it brings the best of other DVCS'es together and innovates on various concepts. It has been discussed a number of times on HN before.

[0] https://github.com/martinvonz/jj

[1] 4 months ago, 261 comments https://news.ycombinator.com/item?id=36952796

[2] 2 years ago, 228 comments https://news.ycombinator.com/item?id=30398662


I tried this, and I loved it. It works very well with my workflow.

Perhaps the author can explain - when you clone the repo with

jj git clone

It pulls down a branch that is auto generated like pull-(hash)

I can’t understand how not to get that corrupted so when I do a jj log, I get very weird branches or heads or I’m not sure what.

Another way to say it is that everything works great until I have to pull down the repo from another machine - then the branch history is not what I expect. And I just couldn’t make sense of it or get it to square with what I expected.

I actually created a custom GPT and fed it the jj code and documentation to try and get it to explain it to me to no avail. Jj is so good, I’m willing to give up IDE integration with git if I could just crack this nut.


Just echoing Martin, but: if you can show the repository that is causing this, or at least a screenshot (or something) showing what you're seeing and post it on GitHub, one of us should at least be able to help figure out what's going on.


I have created a discussion. Thank you both

https://github.com/martinvonz/jj/discussions/2691


I can't tell what the problem is based on that description. Feel free to file a bug report, or start a GitHub discussion, or ask on Discord.


Surprised this is done in Rust. I could never imagine not doing the v1 of something like this in Python or similar, to be able to change things quickly. Maybe the design was very clear in the person's mind.


The current DVCS solution at Google is based on Mercurial, which is written in Python. Having worked on that for many years, I didn't want to write jj in Python. We've had problems with the performance of the current solution. Also, as others have said in sibling replies, refactoring Python is not fun due to lack of static types (I know it's gotten better).


I'm not particularly productive in python. The code gets shat out faster but it's a comparatively weak language for capital-P Programming — lack of types means repeating and checking yourself a lot.


I feel the opposite way.

Even though Python is the language I’ve used most, I wouldn’t want to use it for something with a lot of uncertainty and that will suffer many changes. Type systems make it so much easier to change things early on without breaking everything. I’d probably pick F# or similar.


There actually is a similar project in F#: https://github.com/ScottArbeit/Grace


Rust is a fabulous language for refactoring. Strong types and exhaustive matching make it almost impossible to miss a spot when making a change.


There are other advantages aside from the static types debate in the other replies. The need for efficient version control systems is a constant and ongoing battle; you can pick the right data structures (for example, Git's issues with large files are more of a data structure problem than one of raw efficiency) but at the end of the day Python will often be behind on raw performance. Rust will hopefully let us embed the Jujutsu libraries inside other languages, something you can only achieve today in Git with something like libgit2. Finally, a lot of the infrastructure we get to use, like nextest and cargo-insta are simply fantastic even if I have my qualms about Cargo.

Most of the developers (some of them being former Mercurial and Git developers) including me generally seem to like it. Based on my own experience, I think it's a pretty excellent choice, but I'd be a bit biased as a die-hard Haskell/C programmer for something fast with types.


Python certainly lets you change things quickly, because it does not let you automatically enforce any invariants; this is why it lets you get into such extraordinary inconsistent states so easily! I personally could never imagine trying to make a jigsaw out of jelly because "it's v1 and I'll make v2 properly".


Yes. From the creators of SQLite, you get Fossil.

One of the most amazing things about Fossil is how you can track the history of a file not just backwards, but also forwards, something which is pretty whacky with git.

https://www.fossil-scm.org


100% fossil. there have been a few threads.. and always someone points out edge cases that can only be solved using git.. well i don't think so. you can actually go into the sqlite db and change stuff. i've recently started playing with its server api to direct user feedback from the web to fossil's ticketing system. it is just mature and feature-packed and i honestly hope it will get as much recognition as sqlite someday.


> you can actually go into the sqlite db and change stuff

_Nothing_ history-relevant can be changed via manipulation of the Fossil db. In terms of db records, as opposed to space, the db is about 80-90% a transient cache of data which is generated from the remaining (100% immutable) data. Any changes you make to that transient data will be lost the next time that cache is discarded and rebuilt. A longer explanation can be found at:

https://fossil-scm.org/home/doc/trunk/www/fossil-is-not-rela...


thanks for the clarification! i'd still be in for a fossil t-shirt :)


Yes, there must be a better solution. Git is (usually) better than the alternatives, but it is far from being good. Especially the discoverability of its features is a mess - arcane command line incantations, magic processes. Sometimes only a prayer helps before running the 12th command you found on stackoverflow in desperation. Once you leave the pull-commit-push-merge-rebase circle, you gotta hope that god helps you, because no one else will (or more like no one else can).

Unless of course you spend time to learn git, but its complexity is closing in on C++'s. And using a VCS shouldn't require that amount of effort. It should just get out of the way (I must admit, git usually gets out of the way, as long as you use only the base commands... but when it gets in the way, that's when the fun starts).


The inner workings of git are not overly complicated. The real problem is that git only provides a thin layer on top of the inner workings. It’s not git that needs replacing (it’s just saving blobs of data); it’s the user interface on top that is confusing. The problem with simplifying the user interface is that abstracting away the complexity is super difficult.


Git feels a lot like pgp to me: somehow we're not managing to make things simple enough for use by the general public, even when you only need a few buttons and input fields.

There's differences, such as that pgp is more complicated under the hood and it being a cryptographic system that needs to be foolproof whereas in git you can nuke and re-clone without data loss most of the time, let alone confidentiality/integrity loss. It just feels very similar in that only expert users properly use it and most people who could make use of it don't bother learning because the interfaces available are such a struggle (beyond basic operations anyway)

Whether it can all be solved with a simpler user interface, or whether it would require a simpler underlying system to be able to make simpler standard operations, is where I'm not sure


Git does have one big architectural problem IMO - native unit of storage is a blob, not a diff. Things like rebase, cherry-pick, 3-way merge, etc would be much easier in a world where the storage model was “diffs” instead of “blobs”. This would have resulted in simpler CLI tools with fewer pitfalls.


The two are interchangeable; a computer can make one from the other. Storing deltas makes common operations slower, as getting file contents requires replaying all the deltas through history.
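A quick way to convince yourself of the snapshot model (throwaway repo; file contents invented):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name you
printf 'line1\nline2\n' > f.txt
git add f.txt && git commit -qm one
printf 'line1\nline2 edited\n' > f.txt
git commit -qam two
# Both full file versions are addressable in the object store;
# any delta compression (in packfiles) is an invisible storage detail.
git cat-file -p HEAD~1:f.txt
git cat-file -p HEAD:f.txt
```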


I wonder how many horrendously named and located files exist in repositories because people don’t want to fuck with the history.


Not overly complicated, but super difficult to abstract is somewhat of a contradiction. Maybe the inner workings are complicated, but not complicated to implement right once you understand them.


I think we already have that, Fossil. Unfortunately network effects are a bitch to overcome. But with Fossil you get an elegantly architected system that includes a ton of forge tools, the absence of which has led to centralization of git repositories in places like github. It's simpler, saner, smaller and more capable.


Don't know about a better solution, but there might be a better interface. My pet theory is that focusing more on the fact that a repo is a directed graph would help. Make the language more graph-like (less "commit" more "node", less "branch" more "path", less "repo" more "graph"). This kind of thinking would, I think, expose a bunch more primitives that should be surfaced more explicitly than they are (lots of things are "possible but not easy" in git), and make it easier to learn for anyone with a math background. And make web searches easier too.

(Pretty sure I've said this before and it's been shot down before, so...)
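In the meantime, the closest git gets to speaking graph natively is its built-in rendering; a throwaway demo (temp repo and branch names invented):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name you
git commit -qm root --allow-empty
trunk=$(git symbolic-ref --short HEAD)   # works whether default is main or master
git checkout -qb feature
git commit -qm "feature work" --allow-empty
git checkout -q "$trunk"
git commit -qm "trunk work" --allow-empty
# The repo really is a directed graph of nodes; this just draws it:
git log --graph --oneline --all
```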


Yes. Here are areas where git sucks:

* UX, obviously.

* Large files (LFS is a not-very-good poorly integrated hack)

* Very large projects (big company codebases). Poor support for sparse/partial checkouts, stateful operations (e.g. git status still scans the whole repo every time on Linux), poor & buggy support for submodules.

* Conflict resolution. It's about as basic as it can be. E.g. even zdiff3 doesn't give you quite enough information to resolve some conflicts (you want the diff for the change that introduced the conflict). The diff algorithms are all fast but dumb. Patch based VCS systems (Darcs, Pijul) are apparently better here.

IMO the most interesting projects that are trying to solve any of these (but not all of them, sadly) are Jujutsu and Pijul.


> The diff algorithms are all fast but dumb. Patch based VCS systems (Darcs, Pijul) are apparently better here.

Isn't one of git's core features that it can work as a patch-based system?

It's my understanding (and please correct me if I'm wrong) that Linux patches can come in via mailing list, as a diff. That would make the person committing different from the owner of the change (also reflected in git's design)? Do Darcs and Pijul just have a string of patches on top of the original source file?


Git can apply patches, yes. But I mean when it has two commits (which are snapshots, not patches) and it uses a diff algorithm to synthesise a patch between them.

It uses an algorithm which is great from a computer science point of view (low algorithmic complexity, minimal length, etc.) but pretty bad from a semantic point of view (splitting up blocks, etc.).

There are a couple of attempts to improve this (DiffSitter, Difftastic) but they don't integrate with GUIs yet.
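That said, the algorithm is at least swappable per invocation; a throwaway demo comparing two snapshots with the histogram algorithm (temp repo and contents invented):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name you
printf 'a\nb\nc\n' > f.txt
git add f.txt && git commit -qm one
printf 'a\nx\nc\n' > f.txt
git commit -qam two
# Diffs are synthesised on demand from the two snapshots;
# the algorithm (myers, minimal, patience, histogram) is a flag:
git diff --diff-algorithm=histogram HEAD~1 HEAD > d.txt
cat d.txt
```

You can also make it the default with `git config --global diff.algorithm histogram`.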


As far as I can tell, Pijul[0] aims to have better conflict resolution and merge correctness. I'm not super into the theory, so I can't explain it very well, but it looks promising.

[0] https://pijul.org/


The only thing I wish git handled better out of the box, without any flags/setup, is large/binary assets.

LFS is ok but it still feels like a kludge to me.

The cli does not bother me, there are many tools that offer alternatives/overlays/UIs. Not saying it is perfect or even good, but it's good enough - for me at least.


exactly right! lfs is "kinda okay?" at best. i just wish binary support was just part of git natively.

honestly keeping the large binaries as loose objects would be fine except for performance. which should be something that could be improved with cow filesystems (lfs does use this, but limited by what git's implementation can support)

or it may be enough to incrementally improve lfs, i do see that ssh support is showing up which might help a little. need to do something about smudge/clean filters too, which would require new support in git itself.
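For context, the smudge/clean wiring LFS relies on is just a few attributes; this is roughly what `git lfs track` writes into `.gitattributes` (the file patterns here are examples):

```
*.psd   filter=lfs diff=lfs merge=lfs -text
*.blend filter=lfs diff=lfs merge=lfs -text
```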


Fossil, from the SQLite folks, thinks so, as does Sapling by Facebook: https://sapling-scm.com


Subversion was really good. It wasn't perfect, but it was relatively painless.

Instead everyone switched to a "distributed" version control system that is such a pain in the ass it is all now hosted by a single company.


Torvalds liked to code on planes back when you couldn't use the internet and that's just about the only use-case I've ever heard where I agree distributed makes sense.

Kids these days can't even code at all without ChatGPT, there's a central server hosting the git repo anyway, the whole architecture feels like it was designed for dial-up.

I can't think of anything that was doable 30 years ago compute and bandwidth-wise, that we can't do today due to performance reasons, except client-server source control...


My first job used SVN and it limited workflows compared to git. Branches are more expensive, so people adapt their workflows. I used git-svn on that job, which allowed me to refactor locally; not a feature available to me or others once pushed.

My colleagues evaluated git and thought it was too complicated. A few years and a lot of employments later, they’re all using git. I don’t know if it was peer pressure from enough young recruits, but the verdict is clear: git is better than SVN.


Many companies host git besides github (gitlab and bitbucket to name two), and you can spin up one of your own in about 1 minute on your hardware or on a private cloud vps.

A github server is much easier to set up than a subversion server. The reason people use github is because it's free, and because it has issue tracking and a wiki and forking which plain git knows nothing about.
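For the git side of that comparison, the minimum viable setup really is tiny; a local sketch (paths invented, with a filesystem path standing in for ssh):

```shell
set -e
tmp=$(mktemp -d)
# A git "server" is just a bare repository; over the network you'd
# reach it via ssh (e.g. host:/srv/repos/project.git), here via a path.
git init -q --bare "$tmp/project.git"
git clone -q "$tmp/project.git" "$tmp/checkout"
cd "$tmp/checkout"
git config user.email you@example.com
git config user.name you
echo hello > readme.txt
git add readme.txt && git commit -qm "initial import"
git push -q origin HEAD
git -C "$tmp/project.git" log --oneline
```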


And nowadays it offers so much more like the GHAS, codespaces, copilot, actions and workflows etc that companies who get entrenched would need half a dozen different vendors to cover the feature set if they were to migrate from GitHub.

And in general I feel that their gh cli tool doesn’t get enough praise. Being able to do easy API calls and queries for use in shell scripts (or just the terminal) is great, and the gh copilot is occasionally useful as a refresher for command syntax, or for deciphering some oddball git command you found online.

It’s a massive beast to tackle, and few people have a reason to. I don’t see anything doing what git does having a chance at competing with it. It requires a paradigm shift and a completely new product/approach to versioning to break the git dominance.


> A github server is much easier to set up than a subversion server.

You can't set up a GitHub server.


The intention I got from @dreamcompiler was that "A git server is much easier to set up than a subversion server.", which I feel is true.


No, this isn’t true. You can deploy your SVN server with ‘svnserve’ in just a few minutes.

https://svnbook.red-bean.com/en/1.8/svn.serverconfig.svnserv...


Yes. Typo on my part.

Although...you can set up a Github on-prem server -- or at least this used to be true. Talk to Github Sales and be prepared to write a check.


you can download a VM image of it from their website


You can absolutely spin up vanilla git on your own machine, but try using it for a week. From some quick Googles it looks like Github has 80% market share of version control with Dollar Store Github (Gitlab) picking up the rest. Everyone uses the pretty tool stack built on top of it because it is overly complex. Git didn't add anything profound that couldn't have been added as a feature to another VCS.


> A github server is much easier to set up than a subversion server.

Oh here we go again. You are wrong my friend. Firing up a basic SVN server is a matter of minutes. You just need to run several simple commands.


My very first programming job used Subversion. Even as a freshly minted programmer I knew we were using it completely wrong.

My first assignment was to spend several days picking apart an extremely nasty merge conflict from two branches nearly six months diverged. That was a very stupid thing to trust me with, as a major refactor was being blessed by my idiot hands.

Management could not figure out how they wanted to maintain a master/production branch.

Our 'trunk' was the develop branch, and any time we wanted to push to production.... We deleted the master branch and made a copy of develop. Master branch had no history, you had to track it back into develop and hope you found a trailhead from there.

It was a very bad time, and we were left with a very bad product. By the time I left, the codebase was so rotten and broken that we'd abandoned all hope of fixing the deeper issues.

I really hated subversion, but mostly the company was just unbelievably mismanaged. I'm sure you can use SVN in a sane way, just not like this


I recently started a new job that uses SVN, one thing that really catches me out is that it doesn't automatically add new files. Is there some easy trick I am missing to tell SVN to automatically track everything recursively under a folder?


Are you using the CLI or something like TortoiseSVN? Tortoise has a fairly intuitive UI for adding untracked files when you commit.


Please, Github is so much more than git hosting. If all it offered was source hosting, no one would care for it.


Github isn't all git, but all git is Github.


False. I'm working on two large distributed projects now in git. Neither has anything to do with github.


I think the meaning of the sentence you're replying to is that GitHub's features are a superset of Git's features which seems true?

"Github isn't all git [some features of GitHub are not in Git], but all git is Github [but all features of Git are in GitHub]."

Maybe my interpretation is incorrect?


Besides the frontend problems with git that everybody talks about, the backend could be improved. Right now it's line-oriented. It would be more useful if it knew about the semantics of the language you were writing, so it could show you semantic differences. That might also provide a mode for binary files, which git doesn't handle very well now.


Actually, it’s not the backend which is line oriented, it’s the front end. The backend doesn’t dissect files, it stores the entire new file when even one line is changed and relies on object compression to find the similarity in the rest.


Before we can get a vcs that understand semantic diffs, we need a way to communicate semantic diffs. That way each file type can have its own “semantic differ”. Similar to how language servers help abstract away the differences for IDEs


Y'all know you can change the diff viewer git uses, right?


Of course. But the diffs are still line-oriented.


The diffs don't have to be line-oriented if you are using a diffing tool that isn't based on line changes.

Git itself just stores snapshots of your files, then you can bring your own diffing tool that works in any way you'd like, it's not limited to line based diffing.
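Concretely, the hookup is a diff driver declared in .gitattributes plus a textconv command; a throwaway demo (the "csv" driver name and sort-as-converter are invented for illustration):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name you
# Route *.csv files through a custom diff driver...
echo '*.csv diff=csv' > .gitattributes
# ...whose textconv turns each snapshot into diffable text first.
# Here it's just `sort`; real setups use exiftool, pdftotext, etc.
git config diff.csv.textconv sort
printf 'b,2\na,1\n' > data.csv
git add .gitattributes data.csv && git commit -qm one
printf 'b,2\na,9\n' > data.csv
git diff > d.txt
cat d.txt   # the diff is over the sorted (converted) text
```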


You can already change that. Git LFS already uses this mechanism.


No, they’re not.


You can set a lot of that up with the shared config in the .vscode folder, and then enforce it with whatever rule set you choose, however you choose. We do it with auto-save and the prettier engine for Typescript, the C# engine for C#, the standard VSC engine for C++, whatever the Rust plugin for Rust is called for Rust and so on. Technically it’s a little more free to be done differently by different developers because it’s rather easy to not use our “standards” if you so choose, but if you don’t follow our defined syntax for languages it’ll likely not pass through our CI/CD pipeline, or in a few cases (like indents, line-ends) simply get altered to the standard.

I do agree with your point about different IDEs, but I think VSC is actually one of the better tools for helping development teams unify the way they code. Then again, all our developers use VSC, except for that one guy who is stuck in regular VS and often has a ton of swearing because of it. Which is pretty much the legacy experience of regular VS from having used it for 10 years myself. It still amazes me just how slow it becomes with certain plugins, and how bloaty the various templates are. But hey, he’s happy/angry with it sooo.

I guess the story becomes a little different if you work with PHP or Java or similar, where VSC is arguably much worse than its competition, but you’ll likely still have some developers who prefer it because they also work with other languages. Or in Python heavy environments where there are also a lot of great IDEs for the more ML/AI/BI side of things.


there was such a thing for C#, called SemanticMerge, by Codice Software, a Spanish company that, after being bought by Unity, killed it...


What do you recommend for non-programmers who would still benefit from a version control system?

Examples:

- Book author using markdown / static site generator to publish a book. Uses visual editors like Typora.

- Product designers for open-source hardware. Various design files, SVG, etc.

I’ve experimented with a “GUI only” git flow - just to see what is possible, so I could introduce the concept to others.

I found the GitHub Desktop app (https://desktop.github.com/) did a great job of visually showing git flows and functions, but for a non-tech/programming person, the tool would be daunting.

Curious what your suggested tech stack would be - sans Terminal…


I'd probably recommend Mercurial, say with a UI like TortoiseHg. It has 90% of the value of git with a much better interface.


Github? You can edit and commit online. You probably don’t need to ever branch or merge


Git views the codebase as mutable. It's really well set up for changing the historical record to reflect how you wish it had been done. This is necessary for large team dev efforts - it means the history is mostly a sequence of atomic changes to functionality or code structure, with some reverts when CI judged the patch inadequate.

Fossil views the historical record as immutable. Your sequences of mistakes, iteration, failed experiments are all diligently recorded. I like that, means I get to revisit missteps and abandoned branches later. However it is clearly a scaling hazard. I don't want to see the incremental hacking around of a thousand other people. Nor would I want to prohibit that sort of exploration.

My personal work is in fossil repos, going back a decade across various operating systems and versions of fossil. It has literally never let me down.


An obvious area for improvement would be semantic version control.

If the VCS had an understanding of not only what has changed but also how this affects the code, it could deduce a lot of interesting facts about commit blocks.

Like ignoring simple refactorings (e.g. renamings), reducing merge conflicts, etc.


Yes, it will be superseded. Will the new thing be “better?” I guess that depends on the metric and needs. “ls” was done but exa/eza came along and they have users.

Pondering it, most of the easy things I can think about are really workflow issues on top of git. Git doesn’t exactly enforce them all though so maybe tighter integration would be a reason to change from git if it could not be adapted. Short of that, it’s hard to imagine that a new generation of engineers simply won’t do a new thing to do a new thing; there will be a “git considered harmful” article or a “modern” replacement for git.


I personally think that Fossil is a good example that's extant and used in serious projects. There's that one called pijul which also looks good in theory, but I haven't worked with it. I think version control in general is a little broken before you even get to the software level, but those are two projects tackling some of the problems. And Fossil, I think, is more suited to the scale most people operate on.


Perforce.

As for DVCS, the best one I've used is Darcs: https://darcs.net/ There are some sticky wickets (specifically, exponential-time conflict resolution) that hindered its adoption.

Thankfully, there's Pijul, which is like Darcs but a) solves that problem; and b) is written in Rust! The perfect DVCS, probably! https://pijul.org/


Of course there is room for improvement... One of the biggest issues is usability/user experience: pull, fetch, checkout, commit, push, rebase - what is all this and what is the exact meaning? I need simple English terms for my work - like update and save - nothing more. Why do I need to worry about implementation details and terms? If I cannot explain it to my wife, then I cannot use it for the binary documents she needs to store in a repo... in this case Subversion is a better version control system for her documents... Just SVN Update/SVN Commit - nothing more to learn in Subversion...
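For what it's worth, git's alias mechanism can get you most of the way to an update/save vocabulary; a throwaway sketch (the alias names are just suggestions):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name you
# Hide the porcelain behind two plain-English verbs:
git config alias.update 'pull --rebase'
git config alias.save '!git add -A && git commit -m'
echo draft > doc.txt
git save "first draft"   # like "SVN Commit", but local until pushed
git log --oneline
```

`git update` then behaves roughly like SVN Update; the difference the user can't escape is that `save` is local until a push happens.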


Imagine an electronic engineer complaining about an oscilloscope being hard to use because he cannot explain what all those knobs do to his wife. We are professionals, our tools should be powerful for the advanced user, not beginner friendly.


A tool can be both beginner friendly AND powerful for advanced users.

Speaking of your analogy, the role that most software developers fulfill is not that of an engineer pondering the oscilloscope, but rather that of the construction worker installing electrical fixtures, wondering why the cable clamp has such a weird interface. Both the oscilloscope engineer in an office and the worker doing the field work would benefit from having a simple and reliable tool fit for the purpose of cable clamping.

There is certainly a need for competent and "proper" software engineering that requires special tools and detailed training, but I would argue it's a niche filled by people who build the tools themselves.

IMO the largest share of developers today are doing brick-laying work (which of course takes skill, I am not underestimating it) and would benefit a lot from having simpler tools - they don't need to know how to use an oscilloscope at all.


> IMO the largest share of developers today are doing brick-laying work (which of course takes skill, I am not underestimating it) and would benefit a lot from having simpler tools - they don't need to know how to use an oscilloscope at all.

They're doing brick-laying work until they aren't. Most problems are easy to solve and don't require very much fussing over. The expertise comes in knowing which problems are worth fussing over and which aren't.

Technology is becoming ever more present in our lives, not less.

Reducing software engineering to a low-skill trade when there are, in fact, mountains of complexity is a surefire way for software to be a heap of shit. And in a lot of ways it already is.


> Reducing software engineering to a low-skill trade when there are, in fact, mountains of complexity is a surefire way for software to be a heap of shit. And in a lot of ways it already is.

I fully agree with this observation, but the reality is that this is where things are going.

Software quality is simply not that relevant today for the majority of software. As long as the billing works fine, the management is happy to see your website barely working - screw the quality if the money is coming in anyway.

Such software costs much less and can be built by bricklayers.


There has been a lot of push to commoditize software engineering. Unfortunately that has resulted in a swarm of people who want developer salaries without the work or expertise.

Git definitely has some warts but you are right. It is an industry tool for expert, professional use. Some complexity is inherent to the problem of version control.

Learning how to use your tools is part of ANY trade.


>It is an industry tool for expert, professional use.

If you were talking about things like Kubernetes, LLVM, Ghidra then I'd agree.

But not git. This is not some expert tool.

This tool's purpose is literally to manage your characters' history, that's it.

Git could be used by any other profession that deals with letters - article writers, book writers, etc, etc.


> This tool's purpose is literally to manage your characters' history, that's it.

Yes, but you seem to heavily underestimate the complexity of the problem and the volume of the use cases that git solves.


I disagree. There’s always more that can be learned about anything but we live in a world with finite time and finite resources. So you can either devote time to learning git or you could spend it doing the actual work.

The fact that git is used by experts and professionals is not an excuse for poor UX. The experts and professionals are almost never Git experts or professionals. I use my car every day; that doesn’t make me a mechanic. Having to understand the inner workings of a tool is an indication of poor design, not a gate we should seek to keep.


I think there are two things being conflated here.

One is: how do you effectively manage changes to a codebase over time? Git has a model that, for day-to-day use, has primitives like "commit", "branch", and "tag". You also have to understand the difference between your working copy, what is staged for commit, and any other commit in history. These, in combination with the operations you can do with them is actually somewhat complex. This is the thing I am saying people need to learn. And people quite often complain about it.

The other is the organization of git's porcelain layer, the arguments and flags each subcommand takes, and how stuff is presented back to the user. I think git stands to make significant improvement here. Be that as it may, the tool exists as it is. So your options are to use a different VCS entirely, use a different frontend, or learn how to use git as-is.

If you choose to use git but deliberately avoid learning e.g. what a rebase is and why it's useful, you are choosing to be an ineffective developer. Could it be better in some ways? Yes, but it isn't.

I don't think the car analogy is particularly compelling. The "primitives" of a car are already much simpler than git's. The fundamental primitives of a car are "go faster" and "go slower", along with some supporting things like managing your headlights, windshield defrosting, wipers, and horn.

While additional tools are being added to cars to make them safer (e.g. a backup cam or collision detection), the complexity of those tools is increasing rapidly which makes them more prone to failure. And a driver is absolutely not excused from causing an accident just because one of these safety tools failed. You still have to know how to safely operate your vehicle in a variety of conditions.


I often heard this argument from people learning LaTeX in academia: "It's difficult and I don't have time to study it." From people who spent years mastering advanced maths, people who spent years learning how to build and operate state-of-the-art experimental setups. But for some reason there's never time to learn your software tools.


Yes, but what if the oscilloscope has some buttons that take out the entire company’s codebase if pressed wrong?


If the entire company's work has been laid up into a single prototype, then an oscilloscope does have such buttons :)


>We are professionals, our tools should be powerful for the advanced user, not beginner friendly.

You can have both - powerful and user friendly.

This idea that an engineer's tools must be a mess that is fine as long as it enables you to do something is idiotic.

The same argument was repeated whenever C or C++ vs Rust discussions were happening

"Just learn C and memory management (and all the quirks)"

"Just use this new language constructs and you're fine..."

and in reality we ended up with a lot of CVEs - around 70% of security bugs in both Chrome and Windows were related to memory issues.

There's absolutely no reason why git's CLI cannot be better than it currently is. Once again - there is no reason.

Proof? There are CLI wrappers and even GUIs like GitHub Desktop that make the whole experience way better.


Agreed. Imagine a pilot complaining that the controls are not simple enough.


To be honest, I can’t even imagine what you imagine “git save” and “git update” would even do in an alternate universe.


This is funny, because in PR-oriented development I started treating commits the same way as "save" in an IDE:

it's just a backup of the current state with an irrelevant commit message. Everything is described at the end of the work in the PR's description and squash merged.


Giant PRs that are squashed into one commit are an anti-pattern. Every commit should contain exactly one logical change AND a descriptive commit message.

Unfortunately a good chunk of the industry doesn't have the discipline to do this.

If you have ever worked in a project where there was discipline around committing, you know there is lots of value in doing so (rebasing becomes easier, you unlock the power of bisect, log is actually useful).


This!

Also doing PR code review is soo much nicer if each commit is logically self contained with a nice commit message.


But then you can't use blame to look at the current code state. And it also becomes a nightmare to revert your changes.


When I started out with git I made an alias to immediately do "git add . && git commit -m 'lazy' && git push" to make it easy to always save my work and ensure it's on the server too. git pull is easy enough to remember + type, but I could imagine just calling it update


I would imagine git save is commit, and update is pull?

I think they just want to replace some of the words with alternatives that they prefer. Because at some point someone is going to whinge that update should be synchronise and not pull, and save should be push, and therefore git is the worst.


Then you might be better off with something like Subversion indeed.

Git is distributed, and that means you can't get away from push, pull and fetch, however you name them.

If what you want is a way to avoid making "New New Presentation FINAL 2", then pretty much all features of most source control systems are superfluous.

To me that doesn't mean Git needs fixing, it means it's definitely not the right tool for your job.


If the specific words used are the problem, using aliases is a straightforward way to fix them. If you do it for someone, it will break the possibility of searching for help online though.
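To make that concrete, a sketch of such aliases (the names "update" and "save" are just the hypothetical simpler vocabulary from upthread; the right-hand sides are standard git):

```shell
# "git update": fetch and merge from the tracked remote branch
git config --global alias.update 'pull'

# "git save": stage everything and commit (a message is still required)
git config --global alias.save '!git add -A && git commit'
```

As noted, the cost is discoverability: searching the web for "git save" turns up nothing, because the rest of the world speaks pull/commit.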


In my opinion the feature Git has always been missing is version control of branches. Of course the immediate consequence would be that you'd be able to roll back changes to branches, but there'd be some more fundamental consequences as well. I'm pretty sure some of the problems with GUIs/wrappers around Git break down because there's no tracking of branches/tags.

Besides that it's pretty much endgame in my opinion if you consider only the functionality it's meant to solve. If another "better" VCS would ever become popular I feel it would have to be a drastic change to the way of working with VCS, even more drastic than SVN to Git was. There's some cruft in Git that could probably be taken away, and that would make Git better in a theoretical sense, but in the real world that would never happen (unless we get sideswiped by another industry or platform).


Phabricator and Gerrit both do a really good job of this. To me Git works fine as a pure "version control" system, but the process of collaborating on a branch before it gets merged into a shared branch seems to be beyond the scope of version control -- something that a higher layer tool is ideal for.


Can you clarify what you mean by "version control of branches"? Branches in Git are just labels of objects. Are you talking about having a history of which objects a branch has previously labelled, like the reflog?


Yeah, I feel the reflog is more of a tool to do introspection on a git repository than a tool for collaboration. It's just something I've felt was missing from Git. If you're looking at the main branch of a repository, what was the previous version of that branch?

The way we work around that missing feature is by tagging commits so we don't forget what revision a release was made at, for example. A sequence of release tags basically is a meta-branch: a history of the release branch, but managed manually instead of through git.


Merge commits mostly solve this problem if you use them.


Locally there’s a kind of version control for branches in the “git reflog” where you can see how a branch alias has been moved


Yes we can. The shortcomings of git are that it doesn't handle binary files well, and you can't clone a slice of a repo. The system after git will handle both of those. Mono repo or not is not a question with aftergit, because you can clone just a subdir and work there, without the overhead of cloning the whole thing, but also without the weight of keeping up with commits happening outside of your directory.



I've seen that. Shallow clones have too many reasons to upgrade to a full clone to be totally useful, as well as the additional load they cause on the server. Partial clones don't (yet) let me just check out a subdir of the repository with the checkout rooted in that subdir.
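For context, this is roughly how far git's built-in support currently goes (the URL and directory names are hypothetical): a blobless partial clone plus sparse-checkout narrows what you download and materialize, but the checkout stays rooted at the top of the repository rather than at the subdir:

```shell
# Blobless partial clone: commits and trees come down, file contents on demand
git clone --filter=blob:none https://example.com/big-repo.git
cd big-repo

# Limit the working tree to one directory (cone mode)
git sparse-checkout set services/api

# Paths are still services/api/..., not rooted at the subdir itself
```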


There are already various ways in which mercurial or perforce are better than git, and big companies like Google and Meta have hacked on the systems they started with so much that one can hardly say they’re still perforce/hg. There can be disadvantages there but it seems obvious to me that better systems are possible. It feels to me like the real question is whether GitHub is the endgame.


I do believe that it is possible, at least at the API (cli) level.

While git is good under the hood, its interface is not really user-friendly.

Also git heavily benefits from GitHub's success, which locks us into git :(

I wrote about it here https://trolololo.xyz/github - GitHub is really good, but there's a small problem with that


You could probably do better, yes. But if someone is to do better, I'd hope they actually learn git first.

A lot of alternative tools come up because the people writing them were unwilling to learn git. There are a handful of concepts and a few handfuls of commands, and that's it.

And once someone learns git thoroughly, they usually come to see that it is actually good enough, and don't bother making something new.


> and don't bother making something new.

They don't even bother to add directory tracking ;)


My experience is that the power of any technology unfolds gradually - with git it's like okay let's master the commands for a single repository on a single server for a single user up to a certain level of adequacy. Then (depending on need), let's add multiple repositories on the same single server for the same single user. Then add multiple servers (including a git remote server). Then add multiple users ... etc ... of course the magic of git is you can unfold those needs in any sequence. Often when I find I do not understand something (that I thought I understood), I go back in scope to an earlier unfolding, eliminating other factors.

I'm sure git can be improved, but I think the biggest improvement comes from users improving their understanding of the scope of its capabilities. I have yet to see a good tutorial on this (among the plethora of git tutorials out there). This reminds me of the (excellent) video where the "Harvard Professor Explains Algorithms in 5 Levels of Difficulty" [1]

[1] https://www.youtube.com/watch?v=fkIvmfqX-t0

I would love to see a Channel where that is the entire theme - explaining everything in 5 levels of difficulty.

All this being said, at each of the "5 levels of difficulty" of git, are there improvements to be made? I'm sure there are. It would be good to focus the answer on each of those levels.


Git is the product of an engineer, not a designer. Whereas engineers glue the parts together and make them work, the designer looks at the parts and questions if they are polished as to the intent and the logic.

I think git escaped to the public before having a designer's refinement. Users need to learn about the glue and speak the glue language to git to make it work.


I think Git itself is probably too entrenched to be displaced by now, but I recently came across Graphite (https://graphite.dev/) and, while it’s all still Git under the hood, it abstracts away many of the common pain points (stacking PRs, rebasing) and has nice integrations with GitHub and VS Code.


This is really interesting. Have you tried it before? How do you like it?


Hey, I have only just started to use it. My colleagues swear by it, which is how I've found out about it in the first place.

I think there's no magic and we'll still have to resolve merge conflicts on our own, but my sense is it does simplify repetitive operations.

Hope this helps!


Maybe there are certain domains where you could obviously do better. Take an artist wanting to version control images; I could imagine specialized tools that could be much better. For programming, there could be improvements for versioning groups of repositories that work together, perhaps.

For standard needs, probably going to be difficult.


I more or less consider it a solved problem.

You're mostly going to hear from people here who are annoyed with Git or otherwise more interested in the topic of version control than the median developer. For me, I think it provides a quite robust and well thought-out set of primitives, and then composes them upwards in ways which are about as good as one can expect.

Some stuff obviously isn't well supported. Using the same Git repo to hold large binaries as well as source code is not well supported unless you reach for LFS - that's the biggest downside I see.

Fossil would be my next bet. I'm waiting for someone to make an archaeology.co to rival GitHub.com for it.


>I think it provides a quite robust and well thought-out set of primitives

The existence of the staging area is a poorly thought out part of the design. No other VCS uses it, because it was a bad idea that makes the simple case of committing changes more complicated.

>Fossil would be my next bet. I'm waiting for someone to make an archaeology.co to rival GitHub.com for it.

Which is exactly why Fossil will not be the next big VCS. Ignoring all of the projects on GitHub and forcing people to move to a less featureful, less integrated, less familiar forge just to use a new source control system is a hard sell. The approach of Sapling and Jujutsu, which support the git protocol so that they can be used with GitHub, will make them much easier to adopt, since adoption can happen incrementally before they fully replace git for people.


git could be a thousand times more user-friendly, but it's too command-line centric, so the workflow is limited to complex "sub tool invocations". GUIs for git exist but they add extreme overhead for simple workflows; perhaps some "standard web interface" backend should be prioritized (GitHub is popular due to their UI). Another alternative is simplifying arcane command invocations; I'd expect "git workflow_commandX file_target" instead of tons of switches and parameters. There should be hundreds of such "standard shortcut commands" to reduce mistakes.


Yes. Take a look at Mercurial


Yes.

IMHO the next VCS model should follow a centralized-first, decentralized optional model. Which would be a flip of the decentralized-first model of git.

I also think GitHub is in a unique space to really innovate on git and it’s a shame they’re not.

For example, I shouldn’t need to make a fork to make a PR. That’s absurd and the GitHub server should be able to apply ACLs based on the push identity.

There’s a couple more of these suggestions I can think of, but yeah, GitHub should do more in this space.


I think there are some serious ergonomic issues with forks as they’re presently implemented. However, I’m curious what you intend from:

> the GitHub server should be able to apply ACLs based on the push identity

That’s essentially exactly what a GitHub fork _is_ – an ACL’d set of refs you’re allowed to control, separate from upstream’s set of refs. I guess – what would you have us do differently?

Disclosure, I work for GitHub and am the product manager for Git stuff.


Oh awesome, thank you for asking for clarification!

What really slows me down, and honestly just kinda annoys me is that when I’ve cloned some upstream repository onto my machine and made a bug fix, why do I need to then fork the repository once more and push the commit to my fork first and then go through the theatrics of making a PR?

GitHub the server could, for example, fake my branch on the origin/upstream. It doesn’t need to actually make the branch on the upstream, and can smartly make a volatile fork, and prepare the changes to become a PR.

Basically, since the server knows my identity, and knows if I can or can’t make a branch on the upstream repo, it can handle the boilerplate of making a fork for me.

What I want to see from GitHub is embracing the power of there being an authoritative server between the developer and the repository.


If it's the same underlying repo then it would be nice to avoid needing two git remotes for it (original + fork). But how you'd do that, I'm not sure.


Disclaimer: I'm working on it.

But yes, I think we can.

Almost everything can be better:

* Merges/rebase/branch management.

* Project management.

* Diff.

* Large files.

* User management.

* Partial checkouts.

* ACID semantics.

* Binary file management.

* User experience, including making it accessible to non-technical folks.

* A bunch of others.


Semantic diffs would be a very long overdue, and a very welcome, change.


I agree.

The hard part about it is that it's different for every language, so to support it for a language, you have to implement diff for that language.

I'm hoping to make money off of companies wanting those languages.


Mercurial and Facebook's Sapling are much better than git developer-experience-wise. Git is broken in many places, but it became the standard just because of GitHub's popularity and the rise of a development community that loves to take whatever big orgs spoon-feed them. Common developers these days don't like to spend time researching things that won't make a quick buck easily.


Git is great for keeping track of logical history, but personally I find that it is missing tools for handling physical history. Reflog is a step in the right direction but it has a limited size and AFAIK it is not possible to share a reflog between clones. Which leaves "cp -r repo repo.backup" as the best option.

Of course, as long as you only do additive changes via commit/merge/revert, the logical history is equivalent to the physical history, but commands like rebase break this model. And despite the flaws of rebase workflows, sometimes it is the best option, like when maintaining a fork.

To my surprise Vim actually has something like this - logical history with undo/redo and physical history with g+/g-/:earlier/:later

Another thing I would like is some way to "fold" multiple small commits into one bigger one (for display purposes only) as it would let me split large diffs into minimal, self-contained commits while maintaining a reasonable git history.
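One existing approximation, assuming a merge-based workflow: keep the minimal, self-contained commits on a topic branch, record one merge commit per group, and treat `git log --first-parent` as the folded view (the branch name and message here are made up):

```shell
# On main: fold the topic branch's small commits behind one merge commit
git merge --no-ff -m "Add frobnicator" feature

# Folded view: one line per top-level change
git log --oneline --first-parent

# Unfolded view: the full history, small commits included
git log --oneline
```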


what's the difference between physical history and the logical one?

thanks in advance


The simple explanation: Logical history - what you see when you do "git log --all". Physical history - doing "git log --all" every time a repository updates and then storing each output as an entry into another history log. Kind of a "history of histories"

The complex explanation: a git repository at a particular time consists (mostly) of a graph of commits. This graph represents the logical history of code changes in the repository. The graph can be updated in an append-only fashion (using commit/merge/revert) or in a destructive way like with rebase and reset. The physical history is simply the history of the graph over time.
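To make that concrete, the reflog is git's record of this physical history for each local ref (though, as noted above, it is size-limited and can't be shared between clones), and it can undo destructive operations:

```shell
# Every position the local main ref has held, newest first
git reflog show main

# Roll main back to the position it held one move ago
git reset --hard 'main@{1}'
```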


in other words the history of the graph is the physical history

and the logical history is the filesets which are the nodes in the graph which is the branching sequence of commits

the issue is that the 'logical history' is the reason git was built

and the 'physical history', seems to me, is only feasible because we have the regular git sequence of commits


Different perspective: We absolutely can do better than git… for non-text files that can’t be merged or stored as a sequence of diffs

I (briefly) looked into using git as an alternative to PDM for solidworks CAD files and it turns out git is absolutely not a good fit for this use case. Not surprisingly, I mean it wasn’t designed for that at all.

Point remains though that the world needs a better version control system, something like git, but that works with non text files, because boy do the actual solutions I’ve tried suck compared to the git experience

Software engineers are so lucky to have such a powerful tool for free, mechanical engineers or their companies pay tens of thousands of dollars for version control software that is far worse

PDM is a bit more than just version control but the version control is what my company wants and it’s so painful


Git is perfect, it just needs a good UI.

The best UI in my opinion is sourcetree, which is not available on linux.

I worked with sourcetree years ago (I switched to linux in recent years and used the smartgit client). I don't know the current state, but old versions are available to download, and don't require an atlassian account.

Some improvements I could suggest are the ability to amend a commit which is not the last (and not pushed, of course).

Currently if you want to amend the third-to-last commit, for example, you have to soft reset the last commit, push it to the stash, reset the new last commit again and push it to the stash, amend the changes, then pop the previous 2 stashes and commit them one by one. This could be easily automated.


I am not sure if we will ever be able to replace git with anything else. It is so ubiquitous and just "good enough" for most developers that the pain of switching to a completely new system would far outweigh the benefits. Therefore the only solution that I see is a versioning system that is fully backward compatible with git, maybe just a better API layer on top of git. Facebook tried something similar with Sapling.

For a new versioning system we do not need twenty different choices. We need one free, open, and solid solution that everybody uses.

What the main leaders of the industry should really do is found a group that defines that standard. This would be their chance to really make the world a (slightly) better place.


> It is so ubiquitous

It wasn't until 2011 that Subversion dropped below 50% market share in the Eclipse Community Survey. Something new and shiny will come along and replace git.


I remember a post on HN about how YAML was a terrible serialization format. A stricter subset of YAML (e.g. StrictYAML) would've solved every single problem mentioned there.

Similarly, the solution to git is a subset of git (strict git).

Git's problem is that it is too powerful and assumes that its users are all git experts. You should be able to run git in 'easy mode': add, commit, checkout new branch, revert, squash merge. That's all 90% of people need.

Then the intermediate folks can run it in mode 2, adding the ability to rebase, reset heads, cherry pick, revert, stash etc. This covers the next 9%.

The last 1% can use it in mode 3 for the rest.

Once you take away the fear of what you could break, git becomes far less intimidating.


I’ve used CVS, SVN, Arch, and Git. The main benefit of Git seems to be performance and atomicity. Arch was garbage. CVS works on files rather than directories. Git is a bit or a lot faster than SVN. That said, Git is much slower when there are large files in your repo.

Git is much harder to use than SVN. Particularly you see engineers struggling with resolving conflicts.

One benefit of Git was decentralization, but now Git is Github, it is centralized again.

Builds, IDE, programming languages should be 1st-class citizens. I don’t want to wrestle with .gitattributes or .gitignore.

What would I make better?

- Fast with large files.

- Simpler.

- Improved commit metadata.


Something that is better should be able to track moves, not just store state. Moves of files and even partial content moved within (text) files. Unfortunately, that needs a tight coupling to the editor, so I doubt that's going to happen.


We've been working on a data version control system called "oxen" optimized for large unstructured datasets that we are seeing more and more with the advent of many of the generative AI techniques.

Many of these datasets have many many images, videos, audio files, text as well as structured tabular datasets that git or git-lfs just falls flat on.

Would love anyone to kick the tires on it and let us know what you think:

https://github.com/Oxen-AI/oxen-release

The commands are mirrored after git so it is easy to learn, but optimized under the hood for larger datasets.


I first used Git about 10 years ago, coming from Accurev and Perforce. Those VCSs were a bit heavy to configure, but their UI allowed a lot of complex and easy workflows to happen at the same time.

Git won by being free and because a lot of people think using the terminal is somehow better, but we could have done a lot better in terms of UX and power. Command line programs force you to keep a lot of context in your mind, and if git had been designed with a UI it would probably have been better for everyone.

I still hope for better tools, but they'll probably be based on Git, just with better default flows.


I'm not sure git is so bad that we need something different. It's certainly awkward at times, but it is also mostly a side-tool: not the thing you think of when you think of being a developer, yet every dev uses it.

I suspect most people use just a tiny subset of git day-to-day, and google the rest when it comes up.

For this reason, I think if git is replaced, it won't be because whatever comes along will be better objectively, it will be because a few reasons are touted that most people don't understand but are willing to repeat, and some momentum builds behind the alternative.


Theo describes how Graphite was built on top of git (and Github) to improve DX: https://youtu.be/I88z3zX3lMY

The main innovations seem to be:

- The concept of a "stack" which is somewhere between a commit and a branch, a group of commits.

- Better UI, especially for Github notifications.

The end result is he feels safer using advanced git features and can move faster, especially when working within a team of multiple devs.


> - The concept of a "stack" which is somewhere between a commit and a branch, a group of commits.

That seems to be just some conventions for branch naming & rebasing. The most value seems to be in Graphite syncing that state into Github PRs.

It says a lot about these people (e.g. narrator of this video) that they think this is something novel. The Linux kernel community has done "stacked commits" even before Git existed..

Unless it's about webdev & on youtube, it doesn't exist?


Really depends on "what".

I use a wiki which internally uses RCS, but you never see it. The only reason I even know is that I needed to scan older versions of some assets and it was straightforward compared to what you'd expect with Git. (Other bonus, attachments and meta pages are stored as actual files. With a little bit of code you can cobble together an automated page builder for e.g. physical assets.)

I consider rsync --link-dest a version control system.


Solved is a strong word. For most people, especially with git forges being so popular, yes.

But if you are doing binaries, because you are an artist or do 3D modeling, you probably still use svn.

And I am still checking in on https://pijul.org/ from time to time


Yes, I think that we can do better than plain text as the source of truth, and thus git would probably need to change.

There's work around a bunch of languages that are not based on text, some have their own editor or a tool to manage a canonical representation in text for you that would make them friendlier to git.

  - https://github.com/yairchu/awesome-structure-editors/blob/main/README.md


IMO Git's central enabling technology was disk space getting so cheap you could afford to have a copy of the entire repository and all its history locally. I'm not sure what the next iteration of that would be... maybe always-on networking, so you're constantly consuming changes from all collaborators without having to manually pull them? What useful things could we do with that information?


I feel like git is poorly thought out. If a group of dedicated smart people set out to build a version control system, I doubt they would end up with git.


Yes, but due to its simplicity + extensibility + widespread adoption, I wouldn’t be surprised if we’re still using Git 100+ years from now.

The current trend (most popular and IMO likely to succeed) is to make tools (“layers”) which work on top of Git, like more intuitive UI/patterns (https://github.com/jesseduffield/lazygit, https://github.com/arxanas/git-branchless) and smart merge resolvers (https://github.com/Symbolk/IntelliMerge, https://docs.plasticscm.com/semanticmerge/how-to-configure/s...). Git is so flexible that even things it handles terribly by default, it handles fine with layers: e.g., large binary files via git-lfs (https://git-lfs.com) and merge conflicts in non-textual files via custom merge resolvers like Unity’s (https://flashg.github.io/GitMerge-for-Unity/).

Perhaps in the future, almost everyone will keep using Git at the core, but have so many layers to make it more intuitive and provide better merges, that what they’re using barely resembles Git at all. This flexibility and the fact that nearly everything is designed for Git and integrates with Git, are why I doubt it’s ever going away.

Some alternatives for thought:

- pijul (https://pijul.org), a completely different VCS which allegedly has better merges/rebases. In beta, but I rarely hear about it nowadays and have heard more bad than good. I don’t think we can implement these alternate rebases in Git, but maybe we don’t need to; even after reading the website, I don’t understand why pijul’s merges are better, and in particular I can’t think of a concrete example nor does pijul provide one.

- Unison (https://www.unison-lang.org). This isn’t a VCS, but a language with a radical approach to code representation: instead of code being text stored in files, code is ASTs referenced by hash and stored in essentially a database. Among other advantages, the main one is that you can rename symbols and they will automatically propagate to dependencies, because the symbols are referenced by their hash instead of their name. I believe this automatic renaming will be common in the future, whether it’s implemented by a layer on top of Git or alternate code representation like Unison (to be clear, Unison’s codebases are designed to work with Git, and the Unison project itself is stored in Git repos).
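The content-addressing idea can be sketched in a few lines (an invented toy encoding, nothing like Unison’s actual format): definitions live in a hash-keyed store, and human-readable names are just a separate lookup table pointing at hashes.

```python
import hashlib
import json

# Toy sketch: hash a definition's AST so code is referenced by content,
# not by name. The AST encoding here is made up for illustration.
def ast_hash(ast) -> str:
    return hashlib.sha256(json.dumps(ast, sort_keys=True).encode()).hexdigest()

codebase = {}  # hash -> AST
names = {}     # human name -> hash

square_ast = ["lambda", "x", ["*", ["var", "x"], ["var", "x"]]]
h = ast_hash(square_ast)
codebase[h] = square_ast
names["square"] = h

# A rename only touches the name table; the stored code, its hash, and
# every hash-based reference to it from other definitions are untouched.
names["sq"] = names.pop("square")
assert codebase[names["sq"]] == square_ast
```

That is why renames propagate "for free": dependents never stored the old name in the first place, only the hash.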

- SVN, the other widespread VCS. Google or ask ChatGPT “Git vs SVN” and you’ll get answers like this (https://www.linode.com/docs/guides/svn-vs-git/, https://stackoverflow.com/a/875). Basically, SVN is easier to understand and handles large files better, Git is decentralized and more popular. But what about the differences which can’t be resolved by layers, like lazygit for intuition and git-lfs for large files? It seems to me like even companies with centralized private repositories use Git, meaning Git will probably win in the long term, but I don’t work at those companies so I don’t really know.

- Mercurial and Fossil, the other widespread VCSs. It seems these are more similar to Git and the main differences are in the low-level implementation (https://stackoverflow.com/a/892688, https://fossil-scm.org/home/doc/trunk/www/fossil-v-git.wiki#....). It actually seems like most people prefer Mercurial and Fossil over Git and would use them if they had the same popularity, or at least if they had Git’s popularity and Git had Mercurial’s or Fossil’s. But again, these VCSs are so similar that with layers, you can probably create a Git experience which has their advantages and almost copies their UI.


The canonical example of a merge that is impossible via Git's `recursive` is a base of "AB" (where each character appears on a single line but I've omitted the newlines for brevity), branch 1 is one commit containing "AXB", and branch 2 which is a commit "GAB" followed by another commit "ABGAB". Then the recursive merge of the two branches into each other cannot tell which AB pair in branch 2 the X from branch 1 should be inserted into, because it never sees the first commit of branch 2 which tells you that the "original" AB is the one after the G. `recursive` cannot distinguish between "AXBGAB" and "ABGAXB" as possibilities. A merge algorithm which looks at every commit can know that "ABGAXB" is more faithful to the actual sequence of events, because it knows which AB pair the X was inserted into on branch 1.

(Another plausibly correct answer of course would be "AXBGAXB", but again `recursive` doesn't know enough to guess this answer.)
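The same ambiguity can be written down concretely (a toy sketch of the snapshots involved, not an implementation of any merge algorithm):

```python
# Toy sketch of the example above: a snapshot-only 3-way merge sees
# exactly three strings and nothing about how branch 2 got there.
base    = "AB"
branch1 = "AXB"    # one commit: insert X into the original AB pair
branch2 = "ABGAB"  # two commits: AB -> GAB -> ABGAB

# From the snapshots alone, `recursive` cannot choose between these:
candidates = {"AXBGAB", "ABGAXB"}

# Replaying branch 2's history (AB -> GAB -> ABGAB) shows the original AB
# pair ends up *after* the G, so a history-aware merge can pick this one:
history_aware = "ABGAXB"
assert history_aware in candidates
print(history_aware)
```

The point is not that either candidate is "wrong" text, but that only commit-by-commit history identifies which AB pair is the original.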


Many tried, few succeeded.

- SVN: client-server principle, but bad at merging branches.

- Mercurial: one of the competitors from when the Linux kernel devs were searching for a new version control system; very similar to git, but its users are dying out since git is more popular.

- Bazaar: mostly used for Ubuntu, since Launchpad only provided Bazaar as its VCS.


Launchpad added Git support years ago. Also, bzr pre-dates git by a tiny bit, too. But sadly git won.


I’ve worked with git since 2016. I guess I must not be a power user, because beyond using like six git commands I’ve never had any issues or felt like "man, my workflow is interrupted."

What features do you think need to be improved? From a purely UX pov I think git is probably the best software I’ve used. It just works.


Git has problems but is mostly alright, cli aside.

Git as practiced by GitHub and gitlab is awful quite a lot of the time.


I'm sure we can; the question is, can we make an alternative to GitHub? I wouldn't be surprised if there are already several better ones, but I've never looked, because I already know Git, my chances of convincing anyone to use a better one seem low, and they rarely seem to have as big an ecosystem.

If it doesn't have multiple cloud providers with pull requests, and at least one of those cloud providers isn't a megacorp, it probably won't be the safe, boring choice, unless it's fully P2P.

It needs to be packaged in distros, have GUI integrations, etc.

Fossil looks really cool, I really like how they integrate the wikis and issues. But I don't know anyone who uses it, and the cloud providers seem to be smaller "Could go away any time" companies.

I've never really explored any other VCSes, because none ever seem like they're going to be serious competitors.

I'd be more interested in enhancing Git, but it seems a lot of the most interesting git plugins and extensions aren't updated, like GitTorrent.


Re: GitHub alternatives, I've been looking at this for a while as I'm keen to not have everything centralised and Microsoft are hardly the most trustworthy...

There are some GitHub-alikes; the most obvious is GitLab, which you can also host yourself, but all (or at least some of) the extras you get for free with GitHub are behind paywalls.

My current favourite is Codeberg; it uses Forgejo underneath (which is a fork of Gitea, itself a fork of Gogs, all of which you can self-host). Codeberg is run by a non-profit and is very much aligned with my ideals. They are also slowly adding nice features like their Woodpecker CI.

One that is growing in popularity and is a little less "GitHub-y" is SourceHut (which also has Mercurial support).

The main issue is that GitHub has really cornered the market. They give so much out for free that it is difficult for others to compete, and it has become the de-facto place to host your project. This can mean that hosting anywhere other than GitHub will limit discoverability and contributions from people who don't want to make an account or work out how to deal with whatever forge you are using.

However one thing that is coming that may help alleviate some of that is forge federation which will allow you to interact with various forges from your "home" forge - which hopefully prevents the need to make an account to make PRs or raise issues.

Edit: I see your other comment now: what could a better GitHub be that supports a better-than-git VCS? Well, there used to be places to host Darcs projects, like the Darcs Hub, but I don't know if some of the newer ones like Pijul or Jujutsu have any forge support yet.

Edit2: Oh it seems Pijul has "The Nest" for hosting.


Because Jujutsu is Git-compatible, there are lots of supported forges (GitHub, GitLab, etc.).

There's no native forge yet.


Yeah I know it will work with the git backend, still not sure what the native backend can/will bring, documentation on it seems to be pretty sparse.


Good point, we might want to document that. Btw, I've called it "native backend" and "native forge" myself, but maybe those are not the best terms because there are many possible native backends/forges.

For example, our "Piper" backend at Google is a native backend in the sense that it stores all data in its own database. I think the most exciting thing about that backend is that it's cloud-based so users will be able to access each others' commits (e.g. `jj show <commit id from chat>`) without requiring a push.


> the question is, can we make an alternative to GitHub? I wouldn't be surprised if there are already several better ones, but I've never looked, because I already know Git

¿Que?

If you're wondering whether we can make something better than GitHub, there's dozens of git hosting alternatives that you might like better such as Forgejo and GitLab.

If you're saying "but I already know git", then there's still dozens of alternative hosting sites or methods!


GitHub is the one everyone else uses though, which means you can have everything in one place, so they have the advantage.

There are others that are almost as good, I suppose what I should have said is "Can we make something better than GitHub for the hypothetical better non-git VCS", since without that it's hard to imagine using anything but Git.


Pretty sure alternative git services are still larger than alternative source control systems, yet OP is asking about alternatives to git which will be even smaller. The whole point is that the person wants to use something else. If your argument is that Microsoft GitHub is the largest and therefore the best, it's circular reasoning and will forever remain that way.

We should all stay on Facebook also if they're the best because everyone's on Facebook and the network effect has benefits; somehow it seems people have more sense than that and we can actually switch to smaller services which are more aligned with what we want


Close to everyone I know is still on Facebook, and I get the impression none of them have any interest in switching, so... I use it too, even though the algorithmic curation stuff is really bad.

I could imagine people switching if someone made another site with some kind of killer app and promoted it with a million dollars of ads, but... At the moment, the only feature the alternatives focus on is usually privacy, which is clearly not enough to make average people switch.

Network effects aren't the only factor that matters, just a really big one for most.


I hope so! After years of using git, I've come to the opinion that it's far from a great solution. Arguably, it's not as good as some of the VCSes it replaced. Hopefully, something better will be coming in the future.


There's a sweet spot between simplicity, features, and usability. I think git sits right in the middle, with a bit of a learning curve.

A lot of the proposals in this thread would improve one of these at a huge detriment to the others.


It would be hard to get past the network effects. Just like how we are stuck with SMTP, JavaScript, PDF, HTML, etc.

The only way I could see it changing is if we have a complete paradigm shift. This is what happened when we went from SVN to Git (centralised to distributed).


SVN is only 5 years older than git, and while git is distributed, people basically use it like it's a centralized VCS these days.


Yes! In my bubble, we just need "code snapshots" or versioning, but not for the whole repo, more for each function / file. It's surprisingly hard to get juniors into git, there are so many crazy situations all the time.


The fact that a git commit is literally a code snapshot...

Hey, git doesn't even store diffs in the first place. All it stores is a snapshot of the entire directory contents. The diff git shows you is actually generated on the fly. You can even clone just a single commit and get the content out of it if you are doing CI and don't need the complete history.
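You can verify the snapshot model yourself: a blob's object id is just the SHA-1 of a short header plus the file's entire contents, with no diff anywhere. A minimal sketch of git's documented blob hashing:

```python
import hashlib

# Git object id for a blob: SHA-1 over "blob <size>\0" + full file contents.
# No reference to any previous version is involved.
def git_blob_id(content: bytes) -> str:
    header = b"blob %d\0" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Matches `echo hello | git hash-object --stdin`
print(git_blob_id(b"hello\n"))  # -> ce013625030ba8dba906f756967f9e9ca394464a
```

Change one byte and you get an entirely new object id, which is why identical files dedupe for free across the whole history.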


I think git has become the standard now. I used git as the reference point when I had to implement a custom version control system for a product. Also, many things can be built on top of git, like GitHub, for instance.


I bet we probably could

But you could also do better than Linux or OSX as an OS, but the mindshare and industry investment there is so strong, moving to a new thing is a monumental undertaking


I think the nomenclature could have been better, and then less technical people would use it. Not calling it a "commit" when it's essentially a save would be a good start. Yes, I know it isn't exactly the same as saving, but who cares.


I want something simpler than git, with less features, less commands, but far easier to use and covers 99% of use cases, preferably a GUI. (No, simple git guis do not exist.)


Git works great. The main pain point for me is large binary files. There is Git LFS, but it isn't the default and takes some configuration.



The prevalence of squashing is a symptom of some kind of problem, and we need a less permanently destructive solution.


I've heard of some FAANG companies (Meta, Google...) moving to Mercurial. Any merit in that?


Yep, both extended it and have versions that can work against GitHub/git servers.

Sapling SCM from Meta has, I think, the best CLI and VS Code UX: https://sapling-scm.com/

jj from Google is also Mercurial-derived, with very similar CLI features like histedit, and has support for deferring conflict resolution: https://github.com/martinvonz/jj


To clarify, jj is not derived from Mercurial, but it's heavily inspired by it. The current DVCS solution at Google (called Fig) is based on Mercurial, however. But we're hoping to replace it by jj.


git : version control :: vim : editors

There is definitely scope for a beginner-friendly UI/UX. Julia Evans had a post lately about confusing aspects of git. The ability to version control large files (like git-lfs) would be a nice addition.


> Or is it a solved problem and Git is the endgame of VCS

I've read so many comments to the effect that "X is a solved problem" when it clearly wasn't that I've come to conclude that the phrase means the opposite of its surface value...

I'm pretty sure that Git is not the end-of-line as far as VCSs go. Whether I will ever change my VCS, again, ever, that is an entirely different question. I've been through so many of them (RCS; a bit of CVS; a bit of Subversion which promised to be CVS-without-the-flaws, which it was not; Mercurial, because hey it was written in Python, so must be good, right? right?; finally Git; and of course the usual `report-v4-revision-good-one.doc` renaming game; plus a plethora of backup solutions, some really bad ones among them)—so many of them I'd be loathe to switch to yet another one, except maybe Fossil, which I almost did.

So yeah, I had totally forgotten about https://fossil-scm.org ; the reason it didn't become my go-to solution is probably mainly the fault of github.com, which I find too good to be ignored, and I don't want an 'impedance mismatch' between my system and their system. But maybe it would be doable and maybe Fossil is good enough to be worth it; at any rate, go read their docs, especially where they compare themselves directly to Git and give lots of good reasons for their way of doing things. This is the same people who are doing SQLite, so I'd say a trustworthy source of reliably top quality software.

Other than that, my personal way of dealing with the complexities of Git is to avoid using parts that I don't need or don't know about well enough (i.e. almost all of it). I use it to check in changes, give one line of comment, and upload to github.com; then of course cloning and updating existing repos as well as setting up new ones is within my skill set. Branching and merging not so much. So it's like `git` plus `clone`, `add`, `commit`, `push`, that's all; also, I use Sublime Merge for part of these (reviewing changes, adding related changed chunks, committing), which I must recommend for the piece of fine software that it is.

I also at some point toyed with gitless (I think it was called) which promised to be Git, but simpler for the simple things and you can always fall back to Git proper where called for; this I find a good proposition that I like (Markdown: fall back to HTML; CoffeeScript: fall back to JavaScript) but somehow gitless didn't stick with me; I guess that's b/c I've already tempered down my usage of Git to a point near absolute zero, and command line history + Sublime Merge does the rest.


> github.com…I don't want an 'impedance mismatch' between my system and their system

So give your contributors developer accounts on your Fossil instance, which is super-cheap to set up, being a single binary with nearly zero external dependencies. (Those being OpenSSL and zlib, which are table stakes these days.) My containerized build is a single static binary that compresses to ~3.5 MB, total, all-in.

If you're concerned over the lost promise of easy PRs from randos on the Internet, I question your premise. My experience is that below a certain project popularity level, there is less than one total full-time developer on the project, even counting all possible external committers. Below this threshold, why optimize for external contributors? If someone has a sufficiently valuable patch, they can deal with getting a Fossil repo login or sending a patch.

I've been the maintainer of a piece of software for coming on two decades that's in all popular package repos and have _never_ gotten a worthwhile PR for it via GitHub. I spend more time using their code commenting features explaining why this, this, and this make the change unacceptable, after which the PR submitter goes away rather than fix their patch. It's a total waste of time.

I did once upon a time get high-quality external contributions, but that was back when the project was hosted by Subversion, and it didn't matter that posting patches required more work than firing off a GH PR. People who have sufficient value to commit to a project will put up with a certain level of ceremony to get their code into the upstream project.

(To be fair, I expect the reason for the lack of quality external contributions is that the project is in some sense "done" now, needing only the occasional fix to track platform changes.)

If you are lucky enough to have an audience of outsiders who will provide quality contributions, Fossil does have a superior option for patches than unified diffs. See its "patch" and "bundle" commands. This lets your outsiders send a full branch of commits with comments, file renames/deletions, etc.

…kind of like a PR. :)

If you absolutely require integration with Git-based tooling, Fossil makes it easy to mirror your repo to GitHub, which you can treat as read-only.


>Mercurial, because hey it was written in Python, so must be good, right? right?;

Well, I can't really say that that's why, but yeah. Mercurial's pretty great.


Maybe it's now but I hit upon some snags / bugs back in the day (a fairly long time ago, >> 10 years)


To me, "Could there be something better than git?" is not the important question.

What matters is if git is good enough.

Or more specifically, is if git good enough for X when X is something that is actually being done.

I mean, git is good enough for my very minimal needs and the needs of people with much more sophisticated needs than mine (e.g. the Linux team). And since I know more about git than any other VCS (in part because there are better resources for learning git than any other VCS) learning another VCS for the sake of learning another VCS wouldn't help me get anything done.

None of that means git is good enough for your needs, but statistically, it probably is good enough for your needs because statistically, most difficulties with git are related to training and knowledge since the mathematics underpinning git are (to the best of my understanding) sound.

Which also implies (not accidentally) that being better than git requires better resources for learning the new VCS than git has, and that's a very tall order.

Good luck.


The best feature of Git is that it’s used virtually everywhere. I don’t really care about anything else, I just know I won’t contribute to a repo if it doesn’t use Git.


Yes, it's called Mercurial.


No, git is perfectly fine. You just need to study it to master it, just like every other tool we use in our craft.


"Fine" is not "optimal". The mere fact that something does the job does not make it the best possible thing that does the job.

The most trivial example of a thing that is wrong with Git and which no amount of getting better with the tool can possibly help is "once you generate a conflict, you cannot perform any other versioning operations until you fix the conflict or revert". In particular, for example, you cannot commit a partial resolution of a conflict: you simply have to bail out and try and put your histories in a state that is more acceptable to the merge algorithms before trying again.


Maybe you want more local clones (git clone -l -s) or to use worktrees, so you can leave a merge conflict pending in one of them?

Git doesn't have a way to "store a partially resolved conflict" in a way that would remember that it hasn't been resolved. If you really want that, maybe you want to just commit the conflict markers and come back later; rebase the commits together once you're all the way done.
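That workaround can be sketched end-to-end (repo, branch, and file names here are all made up for illustration):

```shell
# Sketch: "park" a half-resolved merge by committing the conflict markers,
# then clean the WIP commit up later with an interactive rebase.
set -e
git init -q -b main demo && cd demo
git config user.email you@example.com && git config user.name you
printf 'base\n' > file.txt && git add file.txt && git commit -qm base
git checkout -qb feature && printf 'feature\n' > file.txt && git commit -qam feature
git checkout -q main && printf 'main\n' > file.txt && git commit -qam main

git merge feature || true   # merge stops; file.txt now contains <<<<<<< markers
git add file.txt            # stage the conflicted state exactly as it is
git commit -qm "WIP: conflicted merge, finish resolving later"
# ...later: resolve for real, then e.g. `git rebase -i` to tidy up the WIP commit
git show HEAD:file.txt      # the markers were versioned; nothing was lost
```

The caveat from above still holds: git itself no longer knows this commit is "unresolved", so you have to track that yourself (e.g. via the commit message).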


> Git doesn't have a way to "store a partially resolved conflict" in a way that would remember that it hasn't been resolved.

Exactly my point: this is a fundamental limitation of Git for which there is no good workaround. It's not inherent to the domain, but is a limitation of Git. Pijul (for example) considers conflicted states simply to be normal.


It's less of a fundamental limitation and more of a thing that hasn't been programmed yet. Feel free to do that.


I do, thank you! (But it will never be the case that conflicts are modelled in Git, unless one uses notes or something to build a more expressive database on top of it, whereupon I think it's not really reasonable to call the resulting system "Git".)


The git object system is based on objects having types. There was a time when the "tag" object type did not exist. The only difference between then and now is the installed base.



