I have not analyzed the full potential and benefits of Diversion, but I would not agree with the statements you made about Git. I think you should not focus on Git in your pitch.
>>it was built for a very different world in 2005 (slow networks, much smaller projects, no cloud)
Slow network: why is this a negative thing? If something is designed for a slow network, it should perform well on a fast one.
Much smaller projects: I do not agree. I can say that it was not designed for very, very large projects initially, but many improvements were made later. When Microsoft adopted Git for Windows, they faced this problem and solved it. Please look at this: https://devblogs.microsoft.com/bharry/the-largest-git-repo-o...
No cloud: again I would not agree. Git is distributed, so it should work perfectly well with the cloud. I am not able to understand what the issue with Git in a cloud environment is.
>>In our previous startup, a data scientist accidentally destroyed a month’s work of his team by using the wrong Git command
This is mostly a configuration issue. I guess this was done with a force push. AFAIK, you can disable force pushes via configuration.
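For instance, on the server-side (bare) repository you can reject non-fast-forward pushes outright; hosted services expose the same idea as branch protection rules. A minimal sketch, run inside the server's bare repo:

    # refuse history rewrites and branch deletion on this repository
    git config receive.denyNonFastForwards true
    git config receive.denyDeletes true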
> Slow network: why is this a negative thing? If something is designed for a slow network, it should perform well on a fast one.
Designing for resource-constrained systems usually means you're making tradeoffs. If the resource constraint is removed, you're no longer getting the benefit of that tradeoff but are paying the costs.
For example, TCP was designed for slow and unreliable networks. When networks got faster, the design decisions that made sense for slow networks (e.g. 32 bit sequence numbers, 16 bit window sizes) became untenable, and they had to spend effort on retrofitting the protocol to work around these restrictions (TCP timestamps, window scaling).
That makes sense, but then the pitch should include something about how back in 2005 the design of git had to make a trade-off because of X limitation, and now that restriction isn't applicable, which enables features A and B. I don't really see what trade-offs a faster network enables, other than making it a requirement that you have a network connection to do work (commits are a REST call). I'm not sure that's a trade-off I'd want in my VCS, but maybe I'm just not the target audience for this.
Even a force push doesn't destroy the reflog or run the GC server-side. I wonder how you can accidentally lose data with Git. I've seen lots of people not being able to find it, but really destroying it is hard.
He force pushed a diverged branch or something like that, and we only found out after a while.
We were eventually able to recover because someone didn't pull. But it was not a fun experience :D
So multiple people did a git reset --hard origin/master and nobody complained or checked what and why this was done? That's not "one data scientist with the wrong command" but the whole team that fucked up hard IMHO.
I think you just sold their pitch with this comment... I, like many many people here, have done quite a bit of product design. What do you call it when a bunch of people use your product, and it breaks for several of them? That generally indicates your product is weak, or has a very rough UI.
For many of us, the story rings true. We have ourselves had horror stories that we did manage to recover from after a few hours of fearfully googling, and we know of other, less capable friends and colleagues who were unable to recover the data and who just accepted the loss.
You think Microsoft losing GitHub repos is more likely than poor bastards trying to make sense of the git command line? You think these guys are going to do a worse job with their centralized service?
People have lost data on GitHub from repositories being taken down by copyright strikes, for example.
At least with git, every developer has a copy of the full history, so total data loss is practically impossible. What happens if this company folds? You're left with some proprietary repo that you suddenly have to work out how to self-host.
It just doesn't make sense compared to learning git, which is definitely the most fruitful thing a developer could learn at the start of their career.
It's a pitch. The story has obviously been embellished, polished, and condensed, ready for public consumption. Being pedantic about it is not productive.
Politely disagree. It’s productive because hopefully future teams who launch on HN ask each other, “Is what we’re saying true?” during all those polishing and condensing sessions. If they don’t, the risk is crossing a line that damages the reputation of the team and undermines months if not years of hard work.
To put it into perspective, that was in 2014 :D There were no branch protections, and git was even harder to use. Plus, everyone was new to git, obviously (we started in 2013 with mercurial, which was still a legit thing to do, and switched to git).
Multiple individuals with similar problems would tend to imply systematic inadequate training. Or the enterprise concerned adopting an inappropriately complex system for its intended userbase.
Or, git is both very complex and very useful, and a large portion of its users have a poor understanding of git but enough for it to be a useful tool. If you want to do source control (which you do), then you’re investing time into learning git and/or fixing git, or maybe using a project like this.
You literally just said what GP said in different words, but prefaced it with "Or" as if it's a disagreement. What you said boils down to "inadequate training".
We both agree that they didn’t know the tool, but GP seems to blame them for deciding to use the tool without training. I was more or less defending their choice to use git, while also acknowledging the potential of a tool like Diversion. My interpretation of GP was that it doubled down on git, while claiming that anyone using git without understanding it is “doing it wrong”, which I agree with in principle but not in practice, as I argued in my initial comment.
One is that the possibility of overwriting history / etc is a really powerful and useful feature, but one that should only be used with some consideration, hence being gated behind the scary '--force'. The fact that git provides one the ability to discard and overwrite commits for a ref shouldn't be an endorsement of doing so freely. I'm glad git has this capability though and any "git alternative" would be all the worse if it didn't provide it, IMO.
Two is that if the concern is git's usability - i.e. the "problem" here is that it's too "easy" for users to do destructive actions accidentally - well, there are ways to solve that other than to reinvent all of git. There are plenty of alternative git UIs already, and an alternative UI is a great way to be "wire compatible" for existing users but still help protect those novice users from footguns.
That all makes sense and mirrors many of my own thoughts.
Though I'll say that "--force" isn't necessarily a "scary-sounding" option name unless you're used to Unix CLI naming conventions.
Further, the warnings git gives you about this are virtually inscrutable if you don't already understand what's happening.
A good interface to "blowing away history" would give you a brief summary of what will actually be gone, e.g.:
"If you go ahead with this overwrite, the following changes will be completely removed from the repo:
a3bf45: Fix bug in arg parsing
22ec04: Add data from 2024-01-17 scraper run
...
Are you SURE you want to completely destroy those commits? (Y/n)"
and if user says "Y", output should log all removed commits and also say:
"These commits can still be recovered until <date>. If you realize you want these back before then, run the following:
<command to restore commits>"
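You can get a rough version of that preview with stock git today: before force-pushing, list the commits that exist on the remote branch but not in your local history, since those are the ones the push would throw away. A minimal sketch, assuming the remote is origin and the branch is main:

    # refresh your view of the remote
    git fetch origin
    # commits reachable from origin/main but not from your local HEAD --
    # these would become unreachable on the remote after a force push
    git log --oneline HEAD..origin/main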
Generally, I think it's a mistake to put UI improvements in a secondary tool.
If there are issues that need fixing, get those changes in the canonical project, because layered patches on top will always be short of maintainers and behind the main project.
> Are you SURE you want to completely destroy those commits? (Y/n)
While there are a lot of user interfaces that could be improved, I believe the above has empirically been shown to be inferior to the alternative of "re-run this command but add a scary option to proceed".
Users habitually answer "Y" to questions like the above all the time. And certainly after a few times it becomes routine for anyone. But having to re-enter the command and type a whole word like "overwrite", "force" or "i-know-what-im-doing" is a whole other roadblock. The example is especially ill-chosen in making Y the default option.
Any operation in git that destroys that many commits will include a list of the commits being destroyed, similar to what is suggested here, and trying to push the resulting repository will say exactly how many commits will be removed and require a rerun with the force option (together with the necessary privileges). So reality is already not far from what you suggest, but with more fail-safes.
You make a good point about "Y/n" being more dangerous than refusing and requiring an explicit option be passed.
The clear warning about what commits will be lost is not at all how I remember force-push working.
That said, I usually use magit in emacs for git and understand the force options well, so I haven't actually looked at the standard push failure warning in years. Maybe I'm remembering wrong, or perhaps it's been improved in recent versions.
It doesn't overwrite the commits though. It inserts new ones and resets the branch pointer … doesn't seem like you'd need a whole new tool to mitigate this - just an automatically generated tag or something when you --force push - would be easy to do if there was demand for it …
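A rough sketch of that tag idea with plain git, assuming the remote is origin and the branch is main (names are placeholders):

    # snapshot the current remote tip under a timestamped tag before rewriting it
    git fetch origin
    git tag "backup/main-$(date +%Y%m%d-%H%M%S)" origin/main
    git push origin --tags
    # then do the rewrite, preferring the safer force variant
    git push --force-with-lease origin main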
Yeah, I can't see how use of a --force flag by people who didn't know what they were doing is enough of a reason to switch to a different VCS (let alone write one). The issue was people using a tool in a way that they shouldn't have, which isn't a technical problem but a training problem. You can't fix people problems with technology, so I'm sure there will be other footguns in this new system that someone else will use to almost lose data.
Git is great in that it is flexible and powerful. But that power leaves some tools open to people who don’t know what they are doing… that’s the trade off.
(Now something that better handles non-code assets and large data files, I’d be much more willing to listen to that pitch.)
So the work wasn’t actually destroyed, and you were able to recover it. So all the people pointing out how implausible that part of your pitch was were right, and you were in fact just lying.
I think the point being made is that you spent a lot of your opening post talking about Git, and led with that bit, rather than with Diversion. What makes Diversion different is added at the end, after you've spent time trying to convince Git fans that their current tooling isn't good. Worse, the examples you listed of why Git is bad are more reflective of configuration and processes than of Git itself.
This is ultimately a very weak pitching strategy. The first thing you convey to your potential users is insecurity--an insecurity that people won't choose your product over Git. And it's hard to want to buy something from someone that isn't secure enough about their product to pitch the product first, and answer questions/make comparisons after, as a form of clarification.
Alternatively, instead of doing a comparison to Git, you could start with a list of "have you experienced these Git issues? <list of problems>. Here's how Diversion improves on Git in this regard." In this case you're actually solving people's problems, rather than looking like you're grasping at straws to complain about Git and justify an alternative.
FWIW, I personally have 0 interest in a cloud-first version control. I like the cloud as a form of backup and syncing with team members, but I ultimately want a version control that works as well offline as it does online, and prioritizes the local experience.
The main point of your post is how much better you are than git. You support this main point by making up lies about git. This does not make me personally interested in trying your product.
From my point of view, it's not so much about lying; for me, the OP demonstrates a degree of incompetence about Git on the part of the post's writers.
The fact that they don't seem to fully understand how Git works (not at the level of Git developers, just the level of Git administrators/users) does not inspire trust in their competence to create a Git alternative.
Just somewhat surprised, because if anyone did a `git pull` they'd get divergent history and therefore a merge under the default configuration. It would take a lot of manual work to ruin more than one copy of the repo.
For your information, you can use the `git reflog` command to find the previous HEAD commit and restore your branch. It takes 10 minutes, and then you learn to disable force pushing on the main branch.
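A minimal sketch of that recovery, assuming the branch is main and you're on a machine whose reflog still has the good history (<sha> is a placeholder for the entry you find):

    # show where the branch tip used to point
    git reflog main
    # recreate the lost state on a rescue branch
    git branch rescue-main <sha>
    # after verifying it, put the remote branch back
    git push origin rescue-main:main --force-with-lease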
It's an interesting new application of that joke, "when I have a question on Linux I use a sock puppet account to leave an obviously wrong answer which prompts dozens of corrections."
I'm trying to imagine how to generalize this to other products. I think if I state the competing product has negative feature X, but also intentionally get some details confidently incorrect or deliberately feign incompetence, you get a group of people confirming X.
Indeed you're right, the work that was erased from BitBucket was restored from one of the employees who hadn't yet pulled; the post was edited accordingly.
> the work that was erased from BitBucket was restored from one of the employees who hadn't yet pulled
Actually, those commits that you considered lost were still stored on the personal computer of everyone on your team. You just didn't know how to use `git reflog` to find them.
> doesn't destroy the reflog or run the GC server-side.
Git doesn't give you access to the server-side reflog either, so it's not of much use if you don't control the server.
As for losing data with Git, the easiest way to accomplish that is with data that hasn't been committed yet: a simple `git checkout` or `git reset --hard` can wipe out all your changes, and even the reflog won't keep a record of that.
Your applications also shouldn't lose work when you don't press save; this is the entire impetus for the "recover unsaved work" feature in most document editors. A version of Git that shunted uncommitted changes to a special named stash whenever you did anything destructive would be a positive thing.
It's what I end up doing manually anyway, but why make a system where the default behavior is destructive and I have to remember to do that every time?
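The manual version of that habit is just a stash before anything risky, e.g.:

    # park everything, including untracked files, before a destructive operation
    git stash push --include-untracked -m "safety net before reset"
    # ...do the risky thing, e.g. git reset --hard origin/main...
    # bring the work back (or leave it sitting in the stash as a backup)
    git stash pop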
It may be prudent to note that git by default is rather kind, in that it will not change your data unless you explicitly force it to with --force or --hard. I think git, as hard to learn as it can be, sometimes has a bit of an unfair reputation here. It's not all bad.
Not only is it quite careful about not losing data, someone actually took the time to make it spit out messages that not only describe what just happened, but also give suggestions of what to do next depending on how the user wants to proceed. That adds a level of discoverability that is usually associated with dialog-based GUIs. The quality of these messages can sometimes be surprisingly good, far from the Clippy-level helpfulness you sometimes see.
There are a few exceptions to the principle of not losing local changes, where you explicitly restore an old version of a file for example. But saying the default behaviour is destructive really gives a false impression.
But yes, you are absolutely right that a system to recover unsaved work is a good thing; I would just argue that it belongs at the editor level, not in a version control system. A user could have a number of files open that have local changes. The editor has a much better idea of the order in which changes were made, and which changes haven't even been committed to disk yet.
I can't say I'm widely traveled, and I have no idea how desktop Office works, but Apple does this so well.
Using their desktop apps, Pages, Keynote, Numbers, TextEdit, Preview, I never hit "Save". I just close the apps. When I come back, the windows reopen right where I left off.
I wish emacs did this. I honestly don't know what it would be like for a code editor to be "constantly saving". I guess I would adapt, but there are times when I do all sorts of changes and go "Ah, this isn't right" and just kill the buffer. The ultimate undo.
But there's a great feeling, to me, when I go to close the app (or shutdown the computer) and it just closes. No prompts, no warnings, just saves its state, shuts down, and comes back later. And with the ever popular "naming things" issue of computers, I have a bunch of just "Untitled" windows. They're there when I open the app, and that's all I need to know.
The nag factor and cognitive load reduction of that is just unmatched. "Just deal with it, I'll come back later, maybe, and clean it up". One less thing.
I agree. It's quite hard to actually destroy data in git. Even with the so called "destructive" commands, walking through the reflogs can usually restore work that was accidentally deleted or whatever.
I configured my GitHub account to only allow commits with an anonymised email address. Time passed and I used another machine on which I had already opened that repo before. I pulled my recent work successfully, wrote stuff, and then committed and pushed.
GitHub rejected my push because I had the wrong email address. I then had to try to work out how to delete a commit but keep all my changes, so I could commit it all again with the correct email address.
I'm not sure exactly what I did, but in my ham-fisted experimentation I deleted the commit and restored my local copy back to the way it was before my commit, losing all my work that day.
If you had already committed, `git reflog` should have still found your changes (even after you deleted the commit and restored the local working tree) unless you deleted and re-cloned the repository.
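For what it's worth, the non-destructive fix there is usually to amend the author of the last commit rather than delete it. A sketch, assuming only the most recent commit is affected (the noreply address is a placeholder, use your own):

    # point this repository at the anonymised address GitHub expects
    git config user.email "1234567+yourname@users.noreply.github.com"
    # rewrite only the last commit's author to match, keeping all the changes
    git commit --amend --reset-author --no-edit
    git push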
Thanks! We're definitely not trying to bash Git, it's done a lot of good for software development and for sure is going to continue evolving.
Git had much more of an edge when it was competing against SVN and other centralized VCSs. With 10 Mbit networks (if you were in the office) you could feel physical pain when committing stuff ><
"The GitHub.com repository is almost 13 GB on disk; simply cloning the repository takes 20 minutes."
Moving 13GB inside your own cloud should take seconds at most. The problem is the way Git works: it clones your entire repository into the container running your cloud environment, using a slow network protocol. With Diversion it takes a few seconds.
> Thanks! We're definitely not trying to bash Git, it's done a lot of good for software development and for sure is going to continue evolving.
It is not about bashing git; it is about anchoring your argument for why Diversion is a better alternative around git. You're basically taking your game/arguments to their playing field, and thus will have an uphill battle for mindshare.
Instead, consider reframing the playing field and mention git less (if at all). Something like "the future of version control is blah". Surprise us, talk to us about your vision for source control, or better yet, code and multi-discipline collaboration (e.g. between eng and design), etc.
I personally would not bother reading any "the future of X" piece if it did not address the problems of existing tools. I know you're trying to give advice from a marketing POV, and it is good advice, but it's also inherently bullshitty – because its purpose is to net more sales rather than actually make a good argument.
> The problem is the way Git works: it clones your entire repository into the container running your cloud environment, using a slow network protocol.
What about git's network protocol is 'slow'?
I think I can also come up with a pretty simple experiment to prove or disprove this:
1. Fill a file with 13 GB of data and commit it.
2. Upload that to GitHub or wherever you want
3. Time how long it takes to clone and compare that to the real GitHub.com
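A rough local approximation of those steps (it avoids GitHub's per-file size limits by cloning from a local bare path, and --no-local forces the normal pack transfer so the timing is still meaningful):

    # build a repo whose only history is one commit with a ~13 GB file
    mkdir big && cd big && git init -q
    dd if=/dev/urandom of=blob.bin bs=1M count=13000
    git add blob.bin && git commit -qm "one big file"
    cd ..
    # time a clone that goes through the regular transfer machinery
    time git clone --no-local big big-clone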
You will find the one we made takes 'seconds' (or minutes, depending on your network connection), while the real GitHub.com repo will take some time.
So, same data, two different results? The difference in this experiment rules out the 'slow' network protocol as the difference maker. The real reason is that the GitHub.com repo will have hundreds or thousands of commits.
Basically, the difference is the commit history, because that's how git needs to work. Git stores the entire commit history (as delta-compressed objects), not just the literal files at HEAD. I don't know what the network protocol has to do with that.
That article also states that by using a standard Git feature, shallow clones, you go from 20 min to 90 s. Most of the problems touched upon in the article are about state management for local environments; yes, that can be tricky, and it can take time, but it has nothing to do with Git.
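For reference, both shallow and partial (blobless) clones are stock Git and a one-liner on the clone side (the URL and target directories are placeholders):

    # shallow clone: only the latest commit, no history
    git clone --depth=1 https://github.com/example/repo.git repo-shallow
    # partial clone: full history, but file contents are fetched on demand
    git clone --filter=blob:none https://github.com/example/repo.git repo-partial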
>> a data scientist accidentally destroyed a month’s work of his team
> This is mostly a configuration issue
git apologism :)
(FWIW I do agree with the rest of your comment, and I hope you forgive the slight joke. Product users, for any product are fallible humans. That might be fallible in accidentally deleting, or it might be fallible in forgetting to turn on the safety settings.)
Very seriously, something like this should not be possible in a source control system. Data integrity needs to be built in by design.
It is built into Git by design. Git keeps commits around for 90 days even after they’re “deleted.” This is why people who understand Git were so skeptical of OP’s claim. The point that Git is confusing still stands, however.
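Concretely, that retention comes from the reflog expiry defaults (roughly 90 days for entries still reachable from a branch tip, 30 days for unreachable ones), and you can extend it if you want a longer safety net:

    # keep reflog entries longer than the defaults
    git config gc.reflogExpire "180 days"
    git config gc.reflogExpireUnreachable "90 days"
    # or never expire them at all
    git config gc.reflogExpire never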
The issue with a lot of freedom and unopinionated tools is always going to be the multitude of ways to fuck up. On the flip-side, you may not like what choices are made if you’re forced to use it in a certain way.
We enforce strict pull-request squash commits with four-eyes approval only. You can't force push, you can't rebase, you can't not squash, or do whatever else you'd want to do. But we don't pretend that is the "correct" way to use Git; we think it is, but who are we to tell you how to do you?
We take a similar approach to how we use Typescript. We have our own library of coding “grammar?” that you have to follow if you want to commit TS into our pipelines. Again, we have a certain way to do things and you have to follow them, but these ways might not work for anyone else, and we do sometimes alter them a little if there is a good reason to do so.
I don't personally mind strict and opinionated software. I too think Git has far too many ways to fuck up, and that it is far too easy to create a terrible work environment with JavaScript. It also takes a lot of initial effort to set rules up to make sure everyone works the same way. But again, what if the greater community decided that rebase was better than squash commit? Then we wouldn't like Git, and I'm sure the rebase crowd feels the same way. The result would likely leave us with two Gits.
Though I guess with initiatives like the launch here, there already are two Gits. So… well.
> But again, what if the greater community decided that rebase was better than squash commit? Then we wouldn’t like Git, and I’m sure the rebase crowd feels the same way. The result would likely leave us with two Gits.
Meh, this is overrated. We'd end up with 2 Gits, and over time just one fork would probably take over, based on marketing, PR, dev team activity, etc. The second one would probably still be around but used by only a minor part of the community.
Just because a thing has on paper many forks, does not mean those forks are equal. In fact, a situation with many major forks rarely survives the long term. See Jenkins vs Hudson, Firefox vs Iceweasel, etc. Most people will congregate towards one of the forks and that's it.
Yes, but this should be only possible by way of commands that make it abundantly clear what you are doing, e.g. `git delete <whatever>` with extra confirmation "Do you really want to permanently and irrevocably delete <whatever> in the master repository?", or a more obvious "recycle bin" that presents deleted branches/commits in familiar ways and with explicit expiration dates. But the Git architecture doesn't lend itself to that level of user-friendliness.
> When Microsoft adopted Git for Windows, they faced this problem and solved it.
On Windows. On Linux Git still doesn't scale well to very large repos. Before you say "but Linux uses git!", we're talking repos that are much bigger than Linux.
Also the de facto large file "solution" is LFS, which is another half baked idea that doesn't really do the job.
You sound like you're offended that Git isn't perfect because you like it so much. But OP is 100% right here; these are things that Git doesn't do well. It's ok to really like something that isn't perfect. You don't have to defend flaws that it clearly has.
>> When Microsoft adopted Git for Windows, they faced this problem and solved it.
> On Windows. On Linux Git still doesn't scale well to very large repos.
All of Microsoft's solutions for git scaling have been cross-platform. Even VFS had a FUSE driver if you wanted it, but VFS is no longer Microsoft's recommended solution either, having moved on to things like sparse "cone" checkouts and commit-graphs, almost all of which is in mainline git today.
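Those pieces are in stock Git now. A rough sketch of a scaled-down checkout of a huge repo, with the URL and directory names as placeholders:

    # blobless partial clone: full history, file contents fetched on demand
    git clone --filter=blob:none https://example.com/huge-repo.git
    cd huge-repo
    # cone-mode sparse checkout: only materialize the directories you work in
    git sparse-checkout init --cone
    git sparse-checkout set src/client src/shared
    # keep commit-graph and other maintenance data fresh in the background
    git maintenance start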
I also find it funny the complaint that git scales worse on Linux than Windows, given how many Windows developers I know with file-operation speed complaints on Windows that Linux doesn't have (and that's a big reason to move to Windows Dev Drive given the chance, because of its somewhat Linux-like file performance).
Git also has the huge advantage of an ecosystem, tools, and integrations. It is overkill for small projects and there are friendlier alternatives for those - but git wins because it is what everyone knows. Something aimed at the small number of large projects will suffer the same problem.
In terms of number of commits, Linux is probably bigger than most. In terms of storage size, almost any video game project will be significantly bigger.
It's no secret that git is very bad at handling large binary files.
No, large companies using monorepos will have repos much bigger than Linux even without large binary assets. Apparently Linux gets ~10 commits per hour. I probably do ~10 commits per week. So a team of ~150 of me produces commits at a faster rate than Linux. Very rough estimate, but it takes less than you'd think.
Also if you vendor a few dependencies that quickly increases the size.
> really like something that isn't perfect. You don't have to defend flaws that it clearly has.
Certainly true. But it's not clear at all how the product solves these specific problems (they say "Painless Scalability", which sounds nice, but did they try developing any 100+ GB projects with massive numbers of commits/branches on it?)
> This is mostly a configuration issue. I guess this was done with a force push. AFAIK, you can disable force pushes via configuration.
If a feature can lead to actual unintended data loss, it should come disabled by default. Are there any other "unsafe by default" features in Git? What would be a sane general default that prevents unwanted data loss, and why is it the case?
In that specific case there was some error that the user didn't understand; he googled, found a StackOverflow answer with --force, and naturally tried it.
BitBucket didn't have branch protection back then; today it's a bit better (you can still destroy your own work, but usually not others').
I agree that git is very complex (just try reading its documentation and how many options or commands you have never heard of before). But I think push --force is probably one of the easiest git concepts to get.
The fact that someone in your team copy pasted something from SO without understanding it doesn't seem to be related to git. Otherwise we could say that the fact some people lose their data through "sudo rm -rf /" proves the complexity of Unix. I don't think so.
It is somewhat version-controlled but not completely. If you use the reflog you can find it again and you can find how it moved around. But the reflog gets rewritten and gc'd so it's not true vc.
> With that, I don't think git has any feature that is unsafe by default.
Well, you just mentioned `--force`. It is unsafe by default. Git has a couple of flags to make it safer (`--force-with-lease`, `--force-if-includes`) but those aren't the default.
If you’ve ever had to remove private information from history before making the repos public (think domains, names, configuration, etc) you will appreciate the ability to rewrite history (and all the other things --force gives you)
You're missing the point. `--force` is the default of the force variants. The other `--force-but-something` arguments clearly modify that default. It's the wrong way round.
Obviously they've done it for backwards compatibility, but the fact that they haven't even added an option to make it the default is pretty lame.
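As a rough workaround, you can at least build a safer habit with an alias plus the one related config knob that does exist (to my knowledge there is still no setting that upgrades plain --force itself):

    # make --force-with-lease also check that you've actually seen the remote commits
    git config --global push.useForceIfIncludes true
    # and reach for a dedicated alias instead of --force
    git config --global alias.pushf "push --force-with-lease"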
The problem here is not the tool. The problem is the author's colleague's willingness to paste a stackoverflow answer into their terminal without taking a moment to understand what it does.
If stackoverflow told them to break off the chainsaw safety tab there is no chance it would have been read first.
The commits that were overwritten by "force" are still there on the server. Any admin could recover them pretty easily. They're probably still present in the local repo of the person who ran "git push --force" too, as well as anyone else's machine who has cloned the repo.
The only way you'd actually lose data is if every single person who had a clone of the repo ran gc.
Or apparently if nobody knew about "git reflog" and nobody bothered to do a Google search for "oops I accidentally force pushed in git" to learn how to fix it.
The Windows Git repository is only 300GB; that's basically child's play when people are talking about "large repo scalability". Average game-developer projects will be multiple terabytes per branch, with a very high number of extremely large files, and very large histories on top of it. Git actually still does handle large files very poorly, not only extremely large repos in aggregate. The problem with large Git repositories is nowhere near solved, I assure you.
Yes, game development studios include their raw art and environment assets directly in source control, just like source code. That's because the source code and the assets for the game must go together and be synchronized. That also includes things like "blueprints" or scripting logic. Doing anything else (keeping assets desynchronized or using a secondary synchronization tool) is often an exercise in madness. You want everyone using one tool; most of the artists won't be nearly as technical and training them in an entirely different set of tools is going to be hard and time consuming (especially if they fuck it up.)
But honestly, you can ignore that, because Git doesn't even handle small amounts of binary files very well. Ignore multi-gigabyte textures and meshes; just the data model doesn't really handle binary files well because e.g. packfile deltas are often useless for binaries, meaning you are practically storing an individual copy of every version of a binary file you ever commit. That 10MB PDF is 10MB you can never get rid of. You can throw a directory of PDFs and PSDs at Git and it will begin slowing down as clones get longer, working set repos get bigger, et cetera.
The 300GB size of the Windows repository is mostly a red herring, is my point. Compared to most code-only FOSS repos that are small, it's crazy large. That kind of thing is vastly over-represented here, though. Binary files deserve good version control too, at the end of the day.
Just because it has integrations doesn't make it great. LFS is still not great; it doesn't have a lot of backends, for instance. And a real locking system is table stakes for a gamedev VCS.
In part yes, e.g. lots of people like SourceTree. Some of the complexity is inherent though, e.g. local vs remote branches and the various conflicts & errors as a result.
Git has existed for 18 years, and yet the complexity problem still hasn't been solved. Other tools like SVN were never considered to be so hard to use / easy to screw up.