Not once have I ever debugged a problem that benefited from rebase vs merge. Fundamentally, I do not debug off git history. Not once has git history helped me debug, outside of looking at the blame + the offending PR and diff.
Can someone tell me when they were fixing a problem and they were glad that they had rebased? Because I can't.
My preference for rebasing comes from delivering stacked PRs: when you're working on a chain of individually reviewable changes, every commit is a clean, atomic, deliverable patch. git-format-patch works well with this model. GitHub is a pain to use this way but you can do it with some extra scripts and setting a custom "base" branch.
The reason in that scenario to prefer rebasing over "merging in master" is that every merge from master into the head of your stack is a stake in the ground: you can't push changes to parent commits anymore. But the whole point of stacked diffs is that I want to be able to identify different issues while I work, which belong to different changes. I want to clean things up as I go, without bothering reviewers with irrelevant changes. "Oh this README could use a rewrite; let me fix that and push it all the way up the chain into its own little commit," or "Actually now that I'm here, let me update dependencies and ensure we're on latest before I apply my changes". IME, an ideal PR is 90% refactors and "prefactors" which don't change semantics, all the way up to "implemented functionality behind a feature flag", and 10% actual changes which change the semantics. Having an editable history that you can "keep bringing with you" is indispensable.
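For concreteness, that mid-stack editing in plain git looks roughly like this (a sketch; the commit hash and file name are made up):

    git add README.md
    git commit --fixup=abc1234        # abc1234 = the earlier commit in the stack this cleanup belongs to
    git rebase -i --autosquash main   # folds the fixup into abc1234 and replays the rest of the stack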
Debugging history isn't really related. Other than that this workflow allows you to create a history of very small, easily testable, easily reviewable, easily revertible commits, which makes debugging easier. But that's a downstream effect.
I think the question was about situations where you were glad to rebase, when you could have merged instead
All the commits for your feature get popped on top of the commits you brought in from main. When you are putting together your PR you can more easily squash your commits together and fix up your commit history before putting it out for review.
It is a preference thing for sure, but I fall into the atomic, self-contained commits camp, and rebase workflows make that much cleaner in my opinion. I have worked with both on large teams and I like rebase more, but each has its own tradeoffs.
This is especially true if you have multiple repos and builds from each one, such that you can't just checkout the commit for build X.Y.Z and easily check if the code contains that or not (you'd have to track through dependency builds, checkout those other dependencies, possibly repeat for multiple levels). If the date of a commit always reflects the date it made it into the common branch, a quick git log can tell you the basic info a lot of the time.
You can write a test (outside of source control) and run `git bisect good` on a good commit and `git bisect bad` on bad one and it'll do a binary search (it's up to you to rerun your test each time and tell git whether that's a good or a bad commit). Rather quickly, it'll point you to the commit that caused the regression.
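A minimal sketch of that loop (the "good" ref is hypothetical; `git bisect run ./test.sh` can automate the whole thing if your test is a script that exits non-zero on failure):

    git bisect start
    git bisect bad HEAD        # the regression is visible here
    git bisect good v1.4.0     # a commit or tag known to be good
    # git checks out the midpoint; run your test, then report the result:
    git bisect good            # or: git bisect bad
    # ...repeat until git names the first bad commit, then clean up:
    git bisect reset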
If you rebase, that commit will be part of a block of commits all from the same author, all correlated with the same feature (and likely in the same PR). Now you know who you need to talk to about it. If you merge, you can still start with the author of the commit that git bisect found, but it's more likely to be interleaved with nearby commits in such a way that when it was under development, it had a different predecessor. That's a recipe for bugs that get found later than they otherwise would've.
If you're not using git history to debug, you're probably not really aware of which problems would've turned out differently if the history was handled differently. If you do, you'll catch cases where one author or another would've caught the bug before merging to main, had they bothered to rebase, but instead the bug was only visible after both authors thought they were done.
Rebasing on main loses provenance.
If you want a clean history, do it in the PR, before merging it. That way the PR is the single unit of work.
Well if I have a diff of the PR with just the changes, then the PR is already a "unit of work," regardless of merge or rebase, right?
In fact, I've been using Jujutsu for ~2 years as a drop-in and nobody complained (outside of the 8 small PRs chained together). Git is great as a backend, but Jujutsu shines as a frontend.
My peers are my reviewers...
`jj` is the only tool that has made me use `rebase` willingly. Before, I saw it as a punishment handed down by my team's wishes :)
Besides, Magit rebasing is also pretty sweet.
Understanding of local versus origin branches is also missing or mystical to a lot of people, and it's what gives you the confidence to mess around and find things out.
Replaying commits one-by-one is like a history quiz. It forces me to remember what was going on a week ago when I did commit #23 out of 45. I'm grateful that git stores that history for me when I need it, but I don't want it to force me to interact with the history. I've long since expelled it from my brain, so that I can focus on the current state of the codebase. "5 commits ago, did you mean to do that, or can we take this other change?" I don't care, I don't want to think about it.
Of course, this issue can be reduced by the "squash first, then rebase" approach. Or judicious use of "git commit --amend --no-edit" to reduce the number of commits in my branch, therefore making the rebase less of a hassle. That's fine. But what if I didn't do that? I don't want my tools to judge me for my workflow. A user-friendly tool should non-judgmentally accommodate whatever convenient workflow I adopted in the past.
Git says, "oops, you screwed up by creating 50 lazy commits, now you need to put in 20 minutes figuring out how to cleverly combine them into 3 commits, before you can pull from main!" then I'm going to respond, "screw you, I will do the next-best easier alternative". I don't have time for the judgement.
You can also just squash them into 1, which will always work with no effort.
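Something like this, assuming the branch forked from main (a sketch):

    git reset --soft $(git merge-base main HEAD)   # keep all the work, drop the 50 lazy commits
    git commit -m "squashed feature work"
    git rebase main                                # now there's only one commit to replay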
I don’t think the tool is judgmental. It’s finicky. It requires more from its user than most tools do. Including bending over to make your workflow compliant with its needs.
I've had failures while git bisecting, hitting commits that clearly never compiled, because I'm probably the first person to ever check them out.
e.g. I'm currently working on a substantial framework upgrade to a project - I've pulled every dependency/blocker out that could be done on its own and made separate PRs for them, but I'm still left with a number of logically independent commits that by their nature will not compile on their own. I could squash e.g. "Update core framework", "Fix for new syntax rules" and "Update to async methods without locking", but I don't know that reviewers and future code readers are better served by that.
Where you have two repositories, one "polished" where every commit always passes, and another for messier dev history.
I tend to rebase my unpushed local changes on top of upstream changes. That's why rebase exists. So you can rewrite your changes on top of upstream changes and keep life simple for consumers of your changes when they get merged. It's a courtesy to them. When merging upstream changes gets complicated (lots of conflicts), falling back to merging gives you more flexibility to fix things.
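In day-to-day commands that's roughly (a sketch):

    git fetch origin
    git rebase origin/main    # replay local commits on top of upstream
    # if the conflicts get out of hand:
    git rebase --abort
    git merge origin/main     # fall back to a plain merge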
The resulting pull requests might get a bit ugly if you merge a lot. One solution is squash merging when you finally merge your pull request. The downside is that you lose a lot of history and context. The other solution is to just accept that not all change is linear and that there's nothing wrong with merging. I tend to bias toward that.
If your changes are substantial, conflict resolution caused by your changes tends to be a lot easier for others if they get lots of small commits, a few of which may conflict, rather than one enormous one that has lots of conflicts. That's a good reason to avoid squash merges. Interactive rebasing is something I usually find too tedious to bother with, but some people really like it, and it can be a good middle ground.
It's not that one is better than the other. It's really about how you collaborate with others. These tools exist because in large OSS projects, like Linux, where they have to deal with a lot of contributions, they want to give contributors the tools they need to provide very clean, easy to merge contributions. That includes things like rewriting history for clarity and ensuring the history is nice and linear.
I think it should be possible to assign different instances of the repository different "roles" and have the tooling assist with that. For example: a "clean" instance that will only ever contain fully working commits and can be used in conjunction with production and debugging, and various "local" instances - per feature, per developer, or per something else - that might be duplicated across any number of devices.
You can DIY this using raw git with tags, a bit of overhead, and discipline. Or the github "pull" model facilitates it well. But either you're doing extra work or you're using an external service. It would be nice if instead it was natively supported.
This might seem silly and unnecessary but consider how you handle security sensitive branches or company internal (proprietary) versus FOSS releases. In the latter case consider the difficulty of collaborating with the community across the divide.
This is one way to see things and work and git supports that workflow. Higher-level tooling tailored for this view (like GitHub) is plentiful.
> There's no reason a local copy should have the exact same implementation as a repository
...Except to also support the many git users who are different from you and in different contexts. Bending git's API to your preferences would make it less useful, harder to use, or not even suitable at all for many others.
> git made a wrong turn in this, let's just admit it.
Nope. I prefer my VCS decentralized and flexible, thank you very much. SVN and Perforce are still there for you.
Besides, it's objectively wrong calling it "a wrong turn" if you consider the context in which git was born and got early traction: Sharing patches over e-mail. That is what git was built for. Had it been built your way (first-class concepts coupled to p2p email), your workflow would most likely not be supported and GitHub would not exist.
If you are really as old as you imply, you are showing your lack of history more than your age.
I know a lot of people want to maintain the history of each PR, but you won't need it in your VCS.
You should always be able to roll back main to a real state. Having incremental commits between two working stages creates more confusion during incidents.
If you need to consult the work history of transient commits, that can live in your code review software with all the other metadata (such as review comments and diagrams/figures) that never make it into source control.
Well there's your problem. Why are you assuming there are non-working commits in the history with a merge based workflow? If you really need to make an incremental commit at a point where the build is broken you can always squash prior to merge. There's no reason to conflate "non-working commits" and "merge based workflow".
Why go out of the way to obfuscate the pathway the development process took? Depending on the complexity of the task the merge operation itself can introduce its own bugs as incompatible changes to the source get reconciled. It's useful to be able to examine each finished feature in isolation and then again after the merge.
> with all the other metadata (such as review comments and diagrams/figures) that never make it into source control.
I hate that all of that is omitted. It can be invaluable when debugging. More generally I personally think the tools we have are still extremely subpar compared to what they could be.
Having worked on a maintenance team for years, this is just wrong. You don't know what someone will or won't need in the future. Those individual commits have extra context that has been a massive help for me all sorts of times.
I'm fine with manually squashing individual "fix typo"-style commits, but just squashing the entire branch removes too much.
If those commits were ready for production, they would have been merged. ;)
Don't put a commit on main unless I can roll back to it.
https://docs.github.com/en/get-started/using-git/about-git-r...
> Warning - Because changing your commit history can make things difficult for everyone else using the repository, it's considered bad practice to rebase commits when you've already pushed to a repository.
A similar warning is in Atlassian docs.
Branches that people are expected to track (i.e. pull from or merge into them regularly) should never rebase/force-push.
Branches that are short-lived or only exist to represent some state can do so quite often.
- the web tooling must react properly to this (as GH mostly does)
- comments done at the commit level are complicated to track
- and given the reach of tools like GH, people shooting themselves in the foot with this (even experienced ones) most likely generates a decent level of support work for those tools' teams
The best method for me to stop being terrified of destructive operations in git, when I first learned it, was literally "cp -r $original-repo $new-test-repo && go-to-town". Don't know what will happen when you run `git checkout -- $file` or whatever? Copy the entire directory, run the command, look at what happens, then decide if you want to run that in your "real" repository.
Sounds stupid, maybe, but if it works, it works. Been using git for something like a decade now, and I'm no longer afraid of destructive git operations :)
And going one step further, just create a new branch to deal with the rebase/merge.
Yes, there are many UX pain points in using git, but it also has the great benefit of extremely cheap and fast branching for experimenting.
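One way to set that up (a sketch; the branch names are made up):

    git branch backup/my-feature   # bookmark the current state, costs nothing
    git rebase main                # experiment freely
    # if it goes sideways:
    git rebase --abort             # mid-rebase, or after the fact:
    git reset --hard backup/my-feature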
I guess it's actually more of a mental "divider" than anything, it tends to relax people more when they can literally see that their old stuff is still there, and I think git branches can "scare" people in that way.
Granted, this is about people very new to git, not people who understand what is and isn't destructive, and who know that just because a file isn't on disk doesn't mean git doesn't know exactly what it is.
I've been using git almost exclusively since 2012 and feel very comfortable with everything it does and where the sharp edges are. Despite that, I still regularly use the cp -r method when doing something even remotely risky. The reason being, that I don't want to have to spend time unwinding git if I mess something up. I have the understanding and capability of doing so, but it's way easier to just cp -r and then rm -rf && cp -r again if I encounter something unexpected.
Two example situations where I do this:
1. If I'm rebasing or merging with commits that have a moderate to high risk of merge conflicts that could be complicated. I might get 75% through and then hit that one commit where there's a dozen spots of merge conflict and it isn't straightforwardly clear which one I want (usually because I didn't write them). It's usually a lot easier to just rm -rf the copy and start over in a clean cp -r after looking through the PR details or asking the person who wrote the code, etc.
2. If there are uncommitted files in the repo that I don't want to lose. I routinely slap personal helper scripts or Makefiles or other things on top of repos to ease my workflow, and those don't ever get committed. If they are non-trivial then I usually try to keep a copy of them somewhere else in case I need to restore, but I'm not always super disciplined about that. The cp -r method helps a lot.
There are more scenarios but those are the big two that come to mind.
I think I've seen someone code a more user-friendly `git undo` front-end for it.
TL;DR: people feel safer when they can see that their original work is safe. While just making a new branch and playing around there is safe in 99% of cases, people are more willing to experiment when you isolate what they want to keep.
Rebase is a super power but there are a few ground rules to follow that can make it go a lot better. Doing things across many smaller commits can make rebase less painful downstream. One of the most important things is to learn that sometimes a rebase is not really feasible. This isn't a sign that your tools are lacking. This is a sign that you've perhaps deviated so far that you need to reevaluate your organization of labor.
Also, since you can choose to keep the fossil repo in a separate directory, that's an additional space saver.
In the same vein, the fact that commits don't record which branch they were created on as metadata is a real pain point. It's always a mess to find out which commits were made for which feature/bugfix in a global gitflow process...
I'll probably look into automatically suffixing commit messages with the current branch name, but it will only work for me, not for other contributors...
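For what it's worth, a local prepare-commit-msg hook could do that automatically; a rough sketch (it only helps on machines where the hook is installed, so the contributor problem remains):

    #!/bin/sh
    # .git/hooks/prepare-commit-msg -- append the current branch name to each commit message
    branch=$(git rev-parse --abbrev-ref HEAD)
    printf '\n[branch: %s]\n' "$branch" >> "$1"   # $1 is the path to the commit message file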
Also (and especially) it makes it way easier to revert a single feature if all the commits relevant to that feature are already grouped.
For your issue about not knowing which branch the commits are from: that's why I love merge commits and the tree representation (I personally use 'tig', but git log also has a tree representation and GUI tools always have it too).
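Concretely, keeping merge commits makes reverting a whole feature a one-liner (a sketch; the sha is a placeholder):

    git log --graph --oneline --decorate   # see which commits came in through which merge
    git revert -m 1 <merge-commit-sha>     # undo everything that merge brought in, relative to its first parent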
I also prefer Fossil to Git whenever possible, especially for small or personal projects.
From your link. The actual issue that people ought to be discussing in this comment section imo.
Why do we advocate destroying information/data about the dev process when in reality we need to solve a UI/display issue?
The number of times in the last 15ish years I've solved something by looking back at the history and piecing together what happened (e.g. refactor from A to B as part of a PR, then tweak B to eventually become C before getting it merged, but where there are important details that only resulted because of B, and you don't realize they are important until 2 years later) is high enough that I consider it very poor practice to remove the intermediate commits that actually track the software development process.
Except perhaps crappy gui options in GitHub. I really wish they added that option as a button.
Linear history is like reality: One past and many potential futures. With non-linear history, your past depends on "where you are".
----- M -----+--- P
            /
----- D ---+
Say I'm at commit P (for present). I got married at commit M and got a dog at commit D. So I got married first and got a Dog later, right? But if I go back in time to commit D where I got the dog, our marriage is not in my past anymore?! Now my wife is sneezing all the time. Maybe she has a dog allergy. I go back in time to commit D but can't reproduce the issue. Guess the dog can't be the problem.

No. In one reality, you got married with no dog, and in another reality you got a dog and didn't marry. Then you merged those two realities into P.
Going "back in time to commit D" is already incorrect phrasing, because you're implying linear history where one does not exist. It's more like you're switching to an alternate past.
Can't that also happen with a rebase? Isn't it an (all too easy to make) error any time you have conflicting changes from two different branches that you have to resolve? Or have I misunderstood your scenario?
Had you simply rebased you would have lost the ability to separate the initial working implementation of D from the modifications required to reconcile it with M (and possibly others that predate it). At least, unless you still happen to have a copy of your pre-rebase history lying around but I prefer not to depend on happenstance.
I'd say: cleaning that up is an advantage. Why keep that around? It wouldn't be necessary if there was no update on the main branch in the meantime. With rebase you just pretend you started working after that update on main.
Recall that the entire premise is that there's a bug (the allergy). So at some point a while back something went wrong and the developer didn't notice. Our goal is to pick up the pieces in this not-so-ideal situation.
What's the advantage of "cleaning up" here? Why pretend anything? In this context there shouldn't be a noticeable downside to having a few extra kilobytes of data hanging around. If you feel compelled to "clean up" in this scenario I'd argue that's a sign you should be refactoring your tools to be more ergonomic.
It might be worthwhile to consider the question, why have history in the first place? Why not periodically GC anything other than the N most recent commits behind the head of each branch and tag?
In my experience, when there is a bug, it’s often quicker to fix it without looking at the past commits, even when a regression occurs. If it’s not obvious just looking at the current state of the code, asking whoever touched that part last will generally be a better shortcut, because there is so much more in that person's mind than in the whole git history.
Yes, logs and commit history can bring the "aha" insight, and on some rare occasions it’s nice to have git bisect at hand.
Maybe that’s just me, and the pinnacle of best engineers will always trust the source tree as the most important source of information and starting point for moving forward. :)
Also incremental rebasing with mergify/git-imerge/git-mergify-rebase/etc is really helpful for long-lived branches that aren't merged upstream.
https://github.com/brooksdavis/mergify https://github.com/mhagger/git-imerge https://github.com/CTSRD-CHERI/git-mergify-rebase https://gist.github.com/nicowilliams/ea2fa2b445c2db50d2ee650...
I also love git-absorb for automatic fixups of a commit stack.
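The usual flow, as I understand it (a sketch; assumes the stack is based on main):

    git add -p                        # stage the fixes
    git absorb                        # creates fixup! commits aimed at the right commits in the stack
    git rebase -i --autosquash main   # fold them in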
Wouldn't it be enough to simply back up the branch (eg, git checkout -b current-branch-backup)? Or is there still a way to mess up the backup as well?
The "local backup branch" is not really needed either because you can still reference `origin/your-branch` even after you messed up a rebase of `your-branch` locally.
Even if you force-pushed and overwrote `origin/your-branch` it's most likely still possible to get back to the original state of things using `git reflog`.
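Roughly (a sketch; the branch name is illustrative):

    git reset --hard origin/my-feature   # simplest case: the remote still has the old tip
    # or, after a force-push, dig the pre-rebase state out of the reflog:
    git reflog show my-feature
    git reset --hard my-feature@{1}      # whichever reflog entry was the state before the rebase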
Don't erase history. Branch to a feature branch, develop in as many commits as you need, then merge to main, always creating a merge commit. Oftentimes, those commit messages that you're erasing with a squash are the most useful documentation in the entire project.
This is absolute nonsense. You commit your work, and make a "backup" branch pointing at the same commit as your branch. The worst case is you reset back to your backup.
At work though it is still encouraged to rebase, and I have sometimes forgotten to squash and then had to abort, or just suck it up and resolve conflicts from my many local commits.
Rebase only makes sense if you're making huge PRs where you need to break them down into smaller commits to have them make sense.
If you keep your PRs small, squashing it works well enough, and is far less work and more consistent in teams.
Expecting your team to carefully group their commits and have good commit messages for each is a lot of unnecessary extra work.
I still get confused by vscode’s changing the terms used by Git. «Current» vs «incoming» are not clear, and can be understood to mean two different things.
- Is “current” what is on the branch I am rebasing on? Or is it my code? (It’s my code)
- Is “incoming” the code I’m adding to the repo? Or is it what i am rebasing on to? (Again, the latter is correct)
I find that many tools are trying to make Git easier to understand, but changing the terms is not so helpful. Since different tools seldom change to the same words, it just clutters any attempts to search for coherent information.
This constant reinvention makes the situation even worse, because now the terminology is not only confusing, but also inconsistent across different tools.
If I have a merge conflict I typically have to be very conscious about what was done in both versions, to make sure the combination works.
I wish for "working copy" and "from commit 1234 (branch xyz)" or something informative, rather than confusing catch-all terms.
The common view that a Git GUI is a crutch is very wrong, even pernicious. To me it is the CLI that is a disruptive mediation, whereas in a GUI you can see and manipulate the DAG directly.
Obligatory jj plug: before jj, I would have agreed with the top comment[1] that rebasing was mostly unnecessary, even though I was doing it in GitUp pretty frequently — I didn't think of it as rebasing because it was so natural. Now that I use jj I see that the cost-benefit analysis around git rebase is dominated by the fact that both rebasing and conflict resolution in git are a pain in the ass, which means the benefit has to be very high to compensate. In jj they cost much less, so the neatness benefit can be quite small and still be worth it. Add on the fact that Claude Code can handle it all for you and the cost is down to zero.
[0]: https://gitup.co/
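For the curious, the day-to-day jj moves are short (a rough sketch from memory; treat the exact flags as illustrative):

    jj rebase -d main   # move the current stack onto main; conflicts get recorded instead of stopping you
    jj squash           # fold the working-copy change into its parent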
mv folder folder-old
git clone git@github/folder

I remain terrified.
I was working on a local branch, periodically rebasing it to master. All was well, my git history was beautiful etc.
Then down the line I realised something was off. Code that should have been there wasn't. In the end I concluded some automatic commit application while rebasing gobbled up my branch changes. Or frankly, I don't even entirely know what happened (this is my best guess), all I know is, suddenly it wasn't there.
No big deal, right? It's VCS. Just go back in time and get a snapshot of what the repo looked like 2 weeks ago. Ah. Except rebase.
I like a clean linear history as much as the next guy, but in the end I concluded that the only real value of a git repo is telling the truth and keeping the full history of WTF really happened.
You could say I was holding it wrong, that if you just follow this one weird old trick doctors hate, rebase is fine. Maybe. But not rebasing and having a few more squiggles in my git history is a small price to pay for the peace of mind that my code change history is really, really all there.
Nowadays, if something leaves me with a chance that I cannot recreate the repo history at any point in time, I don't bother. Squash commits and keeping the branch around forever are OK in my book, for example. And I always merge with --no-ff. If a commit was never on master, it shouldn't show up in it.
This is false.
Any googling of "git undo rebase" will immediately point out that the git reflog stores all rebase history for convenient undoing.
Shockingly, git, being a VCS, has version control for the... versions of things you create in it, no matter whether via merge or rebase or cherry-pick or whatever. You can of course undo all of that.
And anyway, I don't want to dig this deep in git internals. I just want my true history.
Another way of looking at it is that given real history, you can always represent it more cleanly. But without it you can never really piece together what happened.
The `git log` history that you push is just that curated specific view into what you did that you wish to share with others outside of your own local repository.
The reflog is to git what Ctrl+Z is to Microsoft Word. Saying you don't want to use the reflog to undo a rebase is a bit like saying you don't want to use Ctrl+Z to undo mistakes in Word.
(Of course the reflog is a bit more powerful of an undo tool than Ctrl+Z, as the reflog is append-only, so undoing something doesn't lose you the newer state, you can "undo the undo", while in Word, pressing Ctrl+Z and then typing something loses the tail of the history you undid.)
Indeed, like for Word, the undo history expires after a configurable time. The default is 90 days for reachable changes and 30 days for unreachable changes, which is usually enough to notice whether one messed up one's history and lost work. You can also set it to never expire.
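Those are the gc.reflogExpire settings; to keep everything forever (a sketch):

    git config gc.reflogExpire never              # default: 90 days (reachable entries)
    git config gc.reflogExpireUnreachable never   # default: 30 days (unreachable entries)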
It is fine for people to prefer merge over rebase histories to share the history of parallel work (if in turn they can live with the many drawbacks of not having linear history).
But it is misleading to suggest that rebase is more likely to lose work from interacting with it. Git is /designed/ to not lose any of your work on the history -- no matter the operation -- via the reflog.
None of it is impossible, but IMHO it's a lot of excitement of the wrong kind for essentially no reward.