GitLab team member here, putting my personal hat on - from my experience with different Git workflows since 2009, a smaller, clean unit of work can help with debugging and troubleshooting. It also provides a way for new team members and contributors to understand the thought process and ideation behind implementing a new architecture, applying performance fixes, adding documentation, working with tests, and landing additional fixes, up to the final release. Most of this can be tracked within an MR/PR and the history of code reviews, etc. - even after the merge, squash, and Git branch delete, so I'm not trying to argue against that functionality. :)
From the Git CLI, without any reference to Git* platforms, it is not so easy to find the commit that introduced a bug, e.g. using "git bisect" for binary search. Reading a 10,000-line git diff can be much harder than reading a smaller commit that also explains the reasoning in its commit message. Speaking from my own experience and programming mistakes in a small team, focusing on clean commits and a good history has tremendously helped in stressful debugging situations. Until you hit a compiler regression bug, but that's a different story ;)
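As a rough illustration of that bisect flow (the tag name and test script here are hypothetical, just to show the shape of it):

    git bisect start
    git bisect bad HEAD        # current release shows the regression
    git bisect good v1.2.0     # hypothetical tag known to be fine
    # Git checks out a commit halfway between good and bad;
    # build/test it, then mark the result and repeat:
    git bisect good            # or: git bisect bad
    # Or automate the whole search with a script that exits non-zero on failure:
    git bisect run ./test.sh
    git bisect reset           # return to the original branch when done

With small, self-contained commits, the commit bisect lands on usually explains itself; with one giant squashed commit, you still end up reading the whole diff.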
I'm personally still very fast on the Git CLI, but I also know there are a variety of CLI and UI tools out there that can help with analysing large Git commits. Potentially in the future also AI-assisted ones that tell us which change in a diff caused a performance regression in a release 5 months later. Or maybe we won't need that at all, because observability-driven development will surface these problems before merging and code review - e.g. the memory leak that only appears when DNS fails. True story from ~2016; more in my KubeCon EU talk at https://www.youtube.com/watch?v=BkREMg8adaI and project at https://gitlab.com/everyonecancontribute/observability/cpp-d...
True, thanks. Some workflows can require larger merge requests; having platforms and tools that enable smaller iterations helps reduce (or eliminate) them, though.