[dupe] Pijul is a free and open source (GPL2) distributed version control system (pijul.org)
255 points by thunderbong 9 months ago | 210 comments



The current submission counts as a dupe since this topic had significant attention less than a year ago:

Pijul: Version-Control Post-Git [video] - https://news.ycombinator.com/item?id=37094599 - Aug 2023 (163 comments)

Other past threads: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


I used darcs before git and it was very nice - the first distributed RCS I'd ever used. I believe pijul is a follow-on to that so it should be great. Later on I used Mercurial and it was very easy with a great GUI and it felt safe.

git is like shaving with a straight shaving razor - alarming until you've developed your set of frequently used commands and ways to stay out of trouble.

Even I have to use git because who uses anything else? Every project that I know that used Mercurial dropped it in the hope of getting more contributions.

We have all rushed like Lemmings into Github - now owned by those nice people at Microsoft - who have used it to automate our jobs with copilot. How ridiculous that Open Source has been embraced into the clutches of the Empire on such a scale! :-) I am joking but only a bit.


> Even I have to use git because who uses anything else?

Git compatibility is a great way around this problem. I have been enjoying Jujutsu lately.


Not really. The real innovation of a lot of these alternative DVCS systems is that they free the state of the source from being dependent on the history that got you there, such that applying patches A & B in that order is the same as applying B' & A' -- it results in the same tree. Git, on the other hand, hashes the actual history of changes into the state identifier, which is why rebasing results in a different git hash id.

So long as you require git compatibility, you're kinda stuck with git's view of what history looks like. Which is rather the point.
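To illustrate the distinction, here is a toy model (not how either tool is actually implemented): independent patches produce the same tree whichever order they're applied in, but an identifier that hashes the path taken through history comes out different.

```python
import hashlib

def apply_patch(tree, patch):
    """Apply a patch (file -> new contents) to a tree (dict of files)."""
    new_tree = dict(tree)
    new_tree.update(patch)
    return new_tree

def history_id(parent_id, patch):
    """Git-style: the identifier hashes the path taken through history."""
    data = parent_id + repr(sorted(patch.items()))
    return hashlib.sha1(data.encode()).hexdigest()

base = {"README": "hello"}
patch_a = {"a.txt": "change A"}  # independent patches:
patch_b = {"b.txt": "change B"}  # they touch different files

# The resulting tree is the same in either order...
tree_ab = apply_patch(apply_patch(base, patch_a), patch_b)
tree_ba = apply_patch(apply_patch(base, patch_b), patch_a)
assert tree_ab == tree_ba

# ...but a history-dependent identifier differs depending on order,
# which is why rebasing changes commit hashes in git.
id_ab = history_id(history_id("root", patch_a), patch_b)
id_ba = history_id(history_id("root", patch_b), patch_a)
assert id_ab != id_ba
```

A patch-based system can treat `tree_ab` and `tree_ba` as the same state; a history-hashing system cannot.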


Newbie jj convert here.

jj is not patch based, like pijul, but snapshots, like git.

A sibling commenter points out the change id stored by jj: it is true that at the moment, this isn't really exportable to git in a native way. However, there is a path forward here, and it may come to pass. Until then, systems like Gerrit or Phabricator work better with jj than systems like GitHub.

However, all is not lost there either: tooling like spr[1] allows you to map between the two universes.

At my job, at least one person was using jj for six months at work without any of the rest of us being the wiser. Some of the rest of us are trying it out. A really nice thing about jj is that you can use it without anyone else needing to, thanks to the git interop.

1: https://github.com/getcord/spr


That is definitely a tradeoff. On the other hand, the history of all human progress is a history of path dependence. Constraints spur creativity and Jujutsu works extremely well.

(I've worked with the main author of Jujutsu before. Also, having worked on Mercurial for many years, I have my biases about how source control should work. Jujutsu's workflow is very similar to Mercurial's, with some rather stunning improvements on it.)


Jujutsu uses change IDs, which are an abstraction over commit IDs, that are stable over operations like rebasing.
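A hedged sketch of the idea (a toy model, not jj's actual implementation): the change ID is minted once when the change is created, while the commit ID is derived from the parent, so a rebase rewrites one but not the other.

```python
import hashlib
import uuid

class Change:
    """Toy model: a change carries a stable change ID, while its
    commit ID is recomputed from parent + content (as in git)."""
    def __init__(self, content, parent_commit_id):
        self.change_id = uuid.uuid4().hex  # assigned once, never changes
        self.content = content
        self.commit_id = self._commit_id(parent_commit_id)

    def _commit_id(self, parent_commit_id):
        data = parent_commit_id + self.content
        return hashlib.sha1(data.encode()).hexdigest()

    def rebase(self, new_parent_commit_id):
        """Rebasing rewrites the commit ID but keeps the change ID."""
        self.commit_id = self._commit_id(new_parent_commit_id)

c = Change("fix typo", parent_commit_id="aaa")
before = (c.change_id, c.commit_id)
c.rebase(new_parent_commit_id="bbb")
after = (c.change_id, c.commit_id)

assert before[0] == after[0]  # change ID is stable across the rebase
assert before[1] != after[1]  # commit ID moved with the new parent
```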


Supporting conversion with some lossy tendencies is a good idea, but maintaining compatibility will stifle innovation, as all bugs & design decisions cannot be fundamentally worked around.

Adjacent, I heard folks praising Forgejo for its Actions & Microsoft GitHub compatibility, but that means you are still left with the same YAML spaghetti and its other limitations. What would have been nice is something better for CI--something that prompts the project to want to switch to get a better experience. As such, while I agree with the moral reasons to switch to non-proprietary software, the reason for switching is philosophical, not philosophical + technical. I feel being a wrapper around Git or Git compatibility will likely fall in this category rather than being more compelling.


> Even I have to use git because who uses anything else?

It's pretty widely not used in the games industry because its support for media files is poor, it's pretty difficult to train artists to use it, the integration with game engines is pretty poor, git LFS is a layer of complication on a system that already has UX that's difficult for artists, git LFS hosting is an additional can of worms, there's essentially no file/directory-level permissions for managing contract work (and doing so with submodules+LFS is complicated).


Do you know of any FOSS alternatives that handle that stuff better (specifically binary asset handling and permissions)?


Pijul does binary files natively, actually. Permissions aren't hard to add, if anybody had a real-world case I'd be happy to help.


The places where I've seen people leverage the folder-level permissions in Perforce (in the games industry) would be like: a contractor that provides art would only have access to the portions of the project that relate to the art they're working on but not the code; translators might have read access on one directory and write access on another directory.

In academic settings I've seen Perforce used such that an instructor has read-write access on a directory and students have read-only access on the directory, and then each student has a subdirectory where they have read-write access and everyone else has read access, so that everyone can play each other's games. You can do stuff like this in git with submodules but it's somewhat complicated and difficult to teach to non-programmers.

It's not really clear what the target audience is for the project. The mathematical/theoretical foundations are clear, but the target audience is unclear, so I'm not sure if these are use-cases that you consider to be within the project's scope or not; just sharing how I've seen the permissions models used in the games industry.

oh and nearly everyone hates using perforce and its clients p4v and the p4 command line, so it’s not like there’s no appetite for change. There’s very much an appetite to see Perforce unseated, because it costs money and is also bad.


some indies use Subversion, but Perforce is the industry standard.


There are many people who work on open source professionally, but as a hobbyist programmer I find it odd to worry about Copilot automating my “job.” This is like complaining that your dishwasher or robot vacuum cleaner automates your job because you don’t do the work by hand anymore.

If you’re getting paid then it makes some sense to worry about automation, but if you’re not, automation is good. You can do more. The community can do more. We can take on more ambitious projects.


If you are a hobbyist programmer, it is not your "job". So it is not automating your job for the moment but the professional programmers' job, isn't it?


Yes, what I'm trying to get across is that open source programming is often not a job. It can become a job, but I want to leave room for it being ok not to be a job.


There is empirical evidence that git is diabolical: Git can be thought of as an experiment in how to make an API GUI-proof. There is no possible direct manipulation visual representation of git operations that doesn't dumb it down.


I contend that magit, while not a GUI, pretty much transcends git’s CLI.

GUIs will only ever tackle a subset of git’s functionality since if you’re already a power user you’ll be comfortable with the command line.


I'll give magit a try. But imagine if you had to use TEX to format anything more than trivial documents. It makes me want to get a Vision Pro and see if a direct manipulation git UI could access the full potential but with visual discoverability and safety.


I find gitx really helpful at understanding what's going on in git. Especially its ability to drag branches around the tree shows you that branches are just labels, they aren't the actual code. It doesn't do everything, but what it does do certainly isn't dumbed down.

You definitely need a gui to do partial commits. git add -P is just horrible to use.


git != github


To be fair, many git maintainers are or have been github employees and git is officially hosted on github.


That social media code forge is not the same as its underlying version control system. They are different things & we should encourage folks not to equate the two.


but you know git and github are not the same thing right?


For the vast majority of OSS projects, you're much better off using Git and GitHub in terms of drawing attention and attracting contributors.


So Pijul is patch-based. And not in the boring implementation sense of storing changes as a series of patches, but in the interesting sense that patches can be ordered in more arbitrary ways. Git, on the other hand, is snapshot-based.

Git was built for the Linux kernel (specifically from the perspective of the lead maintainer). But my impression from the outside is that Linux kernel development is very “patch-based”. First of all in the sense of sending patches via email. But also in the sense that these patches can be applied to multiple trees (not just the Linus tree), and different subsystems have to cherry-pick commits (i.e. picking a commit and applying it as a patch, creating a new commit somewhere else). And it seems that people are concerned about tracking what patches are where, and what commit (which can correspond to many commits due to cherry-picks) fixes what other commit. The people behind Patchwork (a tool for maintainers which tracks patch series) are concerned about being able to track patches; the usual `git patch-id`, naming heuristics, etc. do not seem good enough.

And as patches go through different contributors and maintainers they accumulate trailers in the form of signed-off-by and things like that. Even non-trailer (not key-value line) changelogs like: `[<initials>: fixed typo/ fixed off by one]`. And if the change is in the code then the patch content changes (not just the commit message).

The Git project is also pretty patch-based in its development. Even for those who only cares about the maintainer tree: these patches apply to this other in-flight patch series; this diff here can be applied on top of your pending patch series; this patch series is from our ongoing work at Gitlab (company); I’m resending this patch series that X did three months ago but seemed to have abandoned; etc.


Genuine question, I have no opinion on this:

My understanding is that a big part of the reason that Git is snapshot based is for reliability and for ease of checkout -- being snapshot based means Git doesn't need to replay commits every time you check things out.

That likely comes with some downsides (cherry-picking does come to mind, yes), but also seems like some serious upsides, one of the biggest being simplicity. It's relatively easy for me to reason about what Git is doing under the hood. I like that I'm seeing that Pijul's patch operations are associative, and I like at first glance what I'm seeing it say about merges, but it's not making it clear to me what the downsides are.

Is the idea that arbitrary checkouts take longer, but that's generally fine since people don't do them very often? Or am I over-estimating how much complexity/performance costs that building a vcs like this would incur?


There's no reason you can't cache tree snapshots in a pijul/darcs setting. That's just an implementation detail. The difference is more in how a rebase or merge operation works internally, and how essential the particular history is to the current workspace state.

Pijul does a better job of recognizing that "A -> B -> C" is the same as "B' -> A' -> C".


On first download/checkout then I assume I'd download the tree snapshot and it wouldn't replay the patches at all? Local caching would help with repeated checkouts, but most of the time I use Git I'm moving around, so I'm not sure how often I would benefit from that. That also seems like it would lose some verifiability for checkout integrity (?), but maybe that's not a big deal, I'm not sure in practice how much that matters for Git.

Repo size also comes to mind on this if snapshots/caches are happening regularly/automatically, since that would mean Pijul is storing both the patches and the end states rather than computing the patches on the fly. But I guess snapshots wouldn't need to be automatic -- I'm not sure how often people actually check out arbitrary commits, maybe you could get away with only caching certain points?


The original author sold it as a stupid content tracker. Which in part meant that the implementation was simple.

There ain’t a whole lot to the first commit of Git. A few hundred lines, which consist of manually preparing a commit by sending the current snapshot to the “cache” (nowadays the “index”), which compresses it, and then building a commit by specifying the parents and a commit message (like git-commit-tree, but I don’t know if that was around back then).

Beyond that I don’t know much.


Funny timing of this post. This past weekend, I dedicated myself to creating a comprehensive set of tools for integrating Pijul seamlessly into Emacs. This includes collaboration features through org-mode, which I believe will be refreshing. I'm eager to share it with the community shortly. For fellow Emacs & Pijul enthusiasts, keep an eye out!


Funny username :)


Not intentional; I used vim 10 years ago, but have been an emacser with vim-bindings for the past 7 I guess :)


How evil of you


That is the best set up imo. Vim keybindings and composability are great, but Emacs being built on Lisp is a super power.


In a similar vein, it's been incredible to watch the Cambrian explosion of plugins since NeoVim introduced deep Lua integration and configurability.


Personally, I find it quite gratifying to see that a "Vim guru" is also an active Emacs supporter.


The only path to enlightenment is to use both.


As probably the person in the entire world who has been wanting this for the longest, thank you! Please share on Pijul's Zulip, and ask for any help you may need.


I'll try to allocate some time this weekend and also sign up for Zulip. Currently, I'm considering the best way to distribute it. Given that users are familiar with use-package and MELPA, it seems I may need to incorporate Git as well.


Could you give me a few paragraphs of usage report? How has it been to use the Pijul nest?


I'm just starting to explore Pijul, to be honest. My interest was sparked when I revisited an old Haskell project and remembered using Darcs years ago, which led me to research Pijul. I'll be able to share more insights in a few weekends, once the remote communication components of vc-pijul are established.


Darcs has rebase & Pijul doesn’t, which can really help ergonomics for fixups or if you do a lot of WIPs that need amending. Darcs also has a send command for easily mailing patches to others, and it lets you override the diff output, which is compatible with decades-old tooling rather than something bespoke. Pijul, despite better performance, an awesome user identity system, & channels, is still not a Darcs replacement IMO. That isn’t to say folks shouldn’t use or follow the Pijul project (they should); it’s to say one should still feel okay using Darcs today too, as it’s still getting updates & has features Pijul doesn’t. What’s wrong is folks calling Pijul a ‘successor’ as if the Darcs project had died. You could still use Darcs in 2024.


I seem to recall very positive comments regarding pijul's underlying architecture/theory. However, as I don't have the headspace to delve into it, I would love a blurb at the beginner level that highlights the benefits of using pijul compared to other VCS, and a very brief and simple comparison between pijul and git (focusing on the differences).


I have the same question every time Pijul comes up on HN, and I have yet to get an answer:

Can someone give me a real world example (person a makes X change, person b makes y change etc. etc.) that would work better in Pijul than Git?

I am a complete believer on a sound underlying model producing better results for users at a high level, but I'm not clear on how it maps through for Pijul. I think that's what they're missing in the sell - the ability to explain to devs "it will make your life easier in the following specific ways".

For example when getting my employer moved from SVN to Git I could talk to people about how much easier it was to create and then merge a temporary branch for a feature in Git. Git understood the topology of the history, knew the merge base and had better merge algos, so the only pain you had was when there was an actual conflict - two people editing the same file location. It also could track renames, which were incredibly painful to merge in SVN.


> Can someone give me a real world example (person a makes X change, person b makes y change etc. etc.) that would work better in Pijul than Git?

Simplified example:

Persons A and B check out master branch.

Person A adds a.txt, commits and pushes.

Person B adds b.txt, commits and tries to push and...

1) git will not accept the push because it's not on top of current master branch, person B needs to fetch and merge/rebase before pushing again.

2) pijul will accept the change (after a pull, but no rebase) because patches A and B are independent of each other and it does not matter which order they are in the history (keyword: commutation).

The value of Pijul will only start to show when you get into big three way merge scenarios. Which git users avoid like the plague because they are so nasty to deal with. Demonstrating this would need a much larger example.

edit: clarification a pull is still needed in case 2, but no rebase or merge because there isn't one for commutative patches
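A toy sketch of why order doesn't matter for independent patches (a hypothetical model; Pijul's actual data structures and protocol are far more involved): if the repository's state is determined by the *set* of applied patches rather than by their sequence, A-then-B and B-then-A converge.

```python
class PatchRepo:
    """Toy repo: state is determined by which patches were applied,
    not by the order they arrived in (hypothetical model, not Pijul's)."""
    def __init__(self):
        self.patches = []  # arrival order, kept only for inspection
        self.tree = {}     # file -> contents

    def push(self, patch):
        # No "not fast-forward" rejection: an independent patch
        # is applied directly, whatever else arrived first.
        self.patches.append(patch)
        self.tree.update(patch)

r1, r2 = PatchRepo(), PatchRepo()
a = {"a.txt": "added by person A"}
b = {"b.txt": "added by person B"}

r1.push(a); r1.push(b)     # A's push lands first in one replica...
r2.push(b); r2.push(a)     # ...B's first in the other
assert r1.tree == r2.tree  # same repository state either way
assert r1.patches != r2.patches  # even though the arrival orders differ
```

This only holds for commutative patches; overlapping edits still need a resolution, as discussed below in the thread.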


> 1) git will not accept the push because it's not on top of current master branch, person B needs to fetch and merge/rebase before pushing again.

But is this not the right thing to do? A kernel is a complex piece of software. Changes in one place can have very non-obvious consequences in other places (think of changes that cause deadlock because locks are applied in the wrong order). Of course, it is theoretically nice if I know that a change to e.g. documentation or fixing a typo in a comment is not affecting the Ethernet driver or the virtual file system layer, but this is down to the architecture of the project - this is not something that a version control system can prove.

Given that, it seems desirable to me that the source tree has as few different variations, permutations of how to get there, and so on, as possible, since this makes testing and things like bisecting for something like a broken lock or another invariant much easier.


Thank you for answering.

For me "not needing to pull before pushing" is nice, but not game changing (but helpful to understand nonetheless).

It's more the cases that you allude to that we instinctively avoid in git!


You've never maintained a long-lived git feature branch, it seems :)

I maintain a project that is a slight modification of a very active upstream repo. The changes I maintain are rather invasive. Almost every upstream commit introduces a merge conflict with my changes.

To keep myself sane I only merge/rebase when upstream releases a new version, but it still ends up sucking up a few weeks of my time every year. During those weeks, I look enviously at pijul where the conflicts would resolve down to a handful of corrections in context at the point of divergence, instead of gigantic merge conflicts obscured by thousands of piled on patches.


You say that, but this applies to all cherry picking you can do between branches as well. As long as they don't conflict you're golden, and if there's a conflict you can commit a resolution that'll commute with your branch until you merge back into main.

It would enable so many nice and less strict workflows to actually work if it ever got momentum, I've still got hope.


The git behavior seems greatly preferable here. As mentioned in other threads, the notion of commutativity here is very weak and counterintuitive; it only seems to cover the applicability of an auto-merge heuristic, not any actual notion of correctness or semantics, so a human is needed to review the merge and re-test before anything can be known safe for pushing upstream. If anything, git is too lenient in allowing auto-merges to take place that could in principle change semantics, and it ought to enforce a manual review stage for any merge, regardless of whether the auto-merge heuristic succeeded or not.


When the contents have a conflict, git and pijul behave similarly.

When the contents are identical, but the order of commits is different, git will conflict and require manual resolution. Pijul will not.

As you say, neither will automatically check for correctness and you should run tests and CI when merging.

Pijul just removes the manual work when there is no conflict in the contents but the history is different.


> When the contents are identical, but the order of commits is different, git will conflict and require manual resolution.

But why would this normally happen? Different developers working on the same files which by chance make the same changes? Isn't that unlikely?


> When the contents have a conflict, git and pijul behave similarly.

Not really: Pijul can record a conflict resolution as a patch, and apply it in a different context. Also, the conflict doesn't "come back", so you don't need extra hacks like rerere/jujutsu.

> Pijul just removes the manual work when there is no conflict in the contents but the history is different.

This is true, but could be confusing as our definition of conflicts isn't based on contents, but on operations, which is very different from Git (Git doesn't detect all conflicts).


> the notion of commutativity here is very weak and counterintuitive; it only seems to cover the applicability of an auto-merge heuristic

This is completely false: in Pijul, any patches that could have been produced independently can be applied in any order without changing the result. There are 0 heuristics in Pijul, unlike in Git where even random line reshuffling can happen (there are examples in the "Why Pijul" section of the Pijul manual).

Obviously, deciding whether a merge has the correct semantic is Turing-complete, and Pijul doesn't try to do any of that.


Merges are bad only when all parties involved edited the same code. There's no programmatic way to solve this problem. It's an administrative problem: someone has to decide whose code is the right one to use.

If changes coming from both sources are independent, then rebase in Git is trivial as well, and there's nothing to be afraid of.


> It's an administrative problem: someone has to decide whose code is the right one to use.

Or maybe an architectural one.


Does that only apply to adding new files? The changes in commit A could affect the behavior of the changes in commit B even if they are different files.


No, it applies to everything. If adding patches A and B (same or different files) will lead to same result regardless of which order they are applied, they are called "commutative" and pijul won't care which order they are in your history.

It only tracks content of files, not semantic or behavior changes.
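A minimal illustration of that commutation at the line level (a toy model using line indices; Pijul actually tracks line identity in a graph rather than by position):

```python
def apply_edit(lines, edit):
    """edit = (line_index, new_text): replace one line of the file."""
    out = list(lines)
    out[edit[0]] = edit[1]
    return out

src = ["def f():", "    pass", "def g():", "    pass"]
edit_a = (1, "    return 1")  # touches the body of f()
edit_b = (3, "    return 2")  # touches the body of g(): a disjoint region

# Same file, disjoint regions: the edits commute.
ab = apply_edit(apply_edit(src, edit_a), edit_b)
ba = apply_edit(apply_edit(src, edit_b), edit_a)
assert ab == ba

# Overlapping edits do NOT commute; that is a genuine conflict
# needing a resolution, whatever the VCS.
edit_c = (1, "    return 3")
assert (apply_edit(apply_edit(src, edit_a), edit_c)
        != apply_edit(apply_edit(src, edit_c), edit_a))
```

Note the model says nothing about whether `return 1` and `return 2` interact semantically - exactly the caveat raised elsewhere in this thread.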


git can already do this as long as there isn't a conflict. Maybe pijul has better three-way conflict resolution, but those can be risky, and avoiding those rare situations wouldn't offset the vast amount of tooling that git has.


Git would make you merge or rebase, but yes there wouldn't be a conflict. They're saying Pijul would let you directly push without having to deal with the diverging histories.


Which tbh is a bad thing. Just because change a doesn't textually touch change b doesn't mean they don't interact.

Unless your VCS is handling CI for integrating changes on push, you really need to pull down the upstream changes first and test them combined with your code before blindly pushing.


> Which tbh is a bad thing. Just because change a doesn't textually touch change b doesn't mean they don't interact.

A good example for this is code which grabs several locks and different functions have to do that in the same order, or a deadlock will result. A lot of interaction, even if changes might happen in completely different lines.

And I think that's generally true for complex software. Of course it is great if the compiler can prove that there are no data race conditions, but there will always be abstract invariants which have to be met by the changed code. In very complex code, it is essential to be able to do bisecting, and I think that works only if you have a defined linear order of changes in your artefact. Looking at the graphs of changes can only help to understand why some breakage happened, it cannot prevent it.


I have clarified the comment above. A pull is needed before pushing. That pull does not need a merge or a rebase like git does, because the order of commits A and B does not matter (iff they are commutative). This gets a lot more useful when there are more than two patches to consider.


That seems like a very important point; how does pijul deal with such "effects at a distance"?


By not being a CI tool, nor claiming to solve such Turing-complete problems.

Pijul has a theory of textual changes, but indeed doesn't care at all about what you write in your files: that's your problem!


pijul has a lot of theory and engineering to figure them out, dealing with them better than git is one of the major reasons it exists at all, and darcs before it.


> darcs before it.

Darcs is older than git.


True.

In my defense, 3-way merges are older :)


You kinda missed the point.

You have two repos with different history:

master -> patch a -> patch b

master -> patch b -> patch a

But the contents of the files are equal after applying both patches (in either order).

Git will consider these to be two different histories; Pijul thinks they're the same.

This is a simplified example. It only gets interesting when there are a lot of patches, some of which are commutative and some are not.


What benefit is there to the VCS knowing that these histories are equal? That's valuable in verification or efficient binary patching, but I don't see how it matters in version control. When would I want to compare two repositories that were patched in different orders?


I'm guessing that it makes it easier to pick and choose between a bunch of patches. This is something I sometimes want to do in Git, but doing so requires a bit of planning. Have all the independent features branch from the same point, then, to 'assemble' them, do an octopus merge.

If the VCS knows about dependencies between patches intuitively, it could free me from having to explain it, which in the case of Git requires following procedures that I'm unlikely to convince any of my coworkers to follow ("okay, first, decide on the earliest point in the history from which this patch could make sense, rebase onto that....")


Imagine you are working on a patch heavy project. Like the Linux kernel. Where there are a lot of patchsets going around that are not in the mainline.

You and I are both working off of main, and have both separately merged in a few patchsets that are relevant to our shared module of interest. We can merge and compare our branches, and see the differences in terms of missing patches, without having to be rigorous about reconciling our histories.


This is not really compelling. Pulling and rebasing before a push is standard workflow and lets you test with the new changes before pushing.


That may be standard in some shops, but definitely not all.

I view anything that interferes with a push as a threat to the VCS, since it encourages developers to keep changes local and unavailable to their teammates. The only exception would be direct pushes to main.


That doesn't interfere with a push, you're just expecting to blindly push without considering the state of the repo.


> git will not accept the push because it's not on top of current master branch, person B needs to fetch and merge/rebase before pushing again.

Hmm...that seems like a feature to me, not a bug.

To me nothing of substance should happen in the repository, it should all happen in the local working directory.

¯\_(ツ)_/¯


> To me nothing of substance should happen in the repository,

The idea is that with pijul nothing of substance would happen on the server in this example, it is the same process that would happen if you were doing it all locally.


Hmm...so if something did need to happen, pijul would also reject and I would have to pull, do the local edits until everything is consistent and then push, just like git.

So it's a low-impact optimization of the fast path?

But actually, how does pijul know there are no conflicts? Textually?

https://pijul.org/manual/conflicts.html

Hmmm...yeah looks like it's purely textual. Er, no. There can be semantic conflicts that I need to resolve that do not conflict textually. The test suite needs to be green locally on my machine, and then we replace the Top of Tree wholesale with the code that passed the tests locally on my machine.

So to me this feature of pijul is clearly an anti-feature, a bug, and the git behavior is correct.


> Hmm...so if something did need to happen, pijul would also reject and I would have to pull, do the local edits until everything is consistent and then push, just like git.

Unlike git, pijul has first-class conflicts, and it does not reject the push in this example.

My knowledge is limited, but from my testing that means the conflict exists in the history, at least if your merge style allows for that (similar to how in git you can choose to always rebase or use merge commits).

The conflict is resolved with a new patch.

I did not spot much in the documentation with a quick search but on the man page there is a small blurb on first class conflicts

> First-class conflicts: In Pijul, conflicts are not modelled as a "failure to merge", but rather as the standard case. Specifically, conflicts happen between two changes, and are solved by one change. The resolution change solves the conflict between the same two changes, no matter if other changes have been made concurrently. Once solved, conflicts never come back.

- from https://pijul.org/


You’re describing CI, not git nor pijul, nor any other version control system.


Was going to say exactly this, though I do wonder if it's just me being too set in my thinking around how I think version control should work.


I believe that even git would work in this scenario, because a.txt and b.txt are separate and independent, thus a rebase in git is not required. The point at which this becomes an issue is when both person A and person B try to make changes to the same file, and specifically the same blob of text within that file. I could be wrong, but this should be simple enough to prove out: I've run into this situation before where I forgot to rebase before pushing my changes, but git still accepted the changes as they were independent of the changes that person B made.


git wouldn't accept a push with conflicting changes on the remote but that's just because pushes are dumb (not bad, they just don't do anything fancy).

The solution would be to pull upstream changes (so you know what you are potentially pushing your changes into) and then push.


> thus a rebase in git is not required.

A rebase is never required in Git (people/maintainers may disagree), but a merge will always do.

Having said that, in your scenario a pull + merge/rebase will be required. It will then resolve automatically, and without conflicts. But a human has to be involved to provide a strict sequence/dag of those commits.


The thing I like most is that cherry-picks are real, not simulated.

In git, a cherry-pick pulls the change you're interested in off a branch, making a copy in the process. If you later merge or rebase that branch, there are two versions of it in the history. This can have practical consequences: it's not uncommon for this to generate conflicts which wouldn't be there without the cherry-pick.

In pijul, a cherry-pick is just one way to apply a patch. It's the same patch in both branches, so the history doesn't contain two versions of the change, only one. So there is no difference in the result between 'branch, cherry-pick, merge' and 'branch, merge'. Ever.
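The identity difference described above can be sketched with a toy model (this is not Pijul's or Git's actual hashing, just an illustration of why a Git cherry-pick duplicates a change while a patch keeps a single identity):

```python
import hashlib

def commit_id(parent_id: str, diff: str) -> str:
    # Git-style identity: the hash covers the parent, so history matters.
    return hashlib.sha1((parent_id + diff).encode()).hexdigest()

def patch_id(diff: str) -> str:
    # Patch-style identity: the hash covers only the change itself.
    return hashlib.sha1(diff.encode()).hexdigest()

diff = "+ fix off-by-one in parser\n"

# The same change cherry-picked onto two different branches in Git
# becomes two distinct commits with different ids:
on_main = commit_id("abc123", diff)
on_release = commit_id("def456", diff)
assert on_main != on_release

# As a patch, it is one object no matter where it is applied:
assert patch_id(diff) == patch_id(diff)
```

Because the patch id is stable, merging a branch that already contains the patch can recognize it as "already applied" instead of replaying a look-alike copy.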


> It also could track renames, which were incredibly painful to merge in SVN.

They are still a pain in Git as well. Rename a class and its usages in a C# project, for example, and it randomly breaks based on some arcane heuristics.

As for the actual question you posed: partial checkouts. Your artists don't need to check out code, but can work on just the art folder.


Pijul eliminates the rift between rebases and merges. That in itself is an enormous gain over the confused git workflow or lack thereof.


Most won't understand the gravity of this without a concrete example.


And what is it?

Couldn't find it in the "doc"... is it all rebases or all merges?


That is a distinction that only exists because a Git repo is an ordered sequence of snapshots of your code that get translated to and from patches for human consumption. As far as I can tell, a Pijul repo is more of a giant dependency graph of patches that when merged together form a consistent snapshot of your code.


More like merges but actually neither… it can reorder patches, even conflicting ones, it’s a completely foreign concept in git.


sounds like rebase with force push remote. (well, every push is a force push in git. so just rebase+push. ...except on github if you have "protected branches")


Not really, no; ordering of patches is not something you as a developer should care about. It’s behind the scenes. No force pushes necessary.


Thanks, I will have to try it out. But it does sound like "rewriting history" in git et al.


The point is there is no single history in the git sense - there are patches which commute, but their order is... irrelevant (if possible).


Any time you rebase the same feature branch onto current main and solve the same conflicts, Pijul could probably do much better, without hacks like rerere (which most people also do not use, so...).


Here's game-changing stuff:

- There's an example in the "why Pijul" page of our manual where Git completely reshuffles your lines, and no "custom merge algorithm" could possibly solve it. I would be terrified if I were working on crypto/security code and I knew my VCS was doing that: https://pijul.org/manual/why_pijul.html

- Patch commutation makes all big instances small: no need for submodules, partial/shallow clones, etc. Patch commutation lets you work on a small part of the repo by cloning only the patches you're interested in, and submit patches that mechanically commute with all patches on other parts of the repo.

- Free cherry-picking: no need for strict disciplines, you can just introduce a quick fix on your local work branch, and push just that to production. When you're ready to merge the rest, you won't have to solve conflicts again (no need for Git rerere/Jujutsu/…)

- Many uses of branches reduced to "just use patches": many people, especially on fast-moving projects and "early days", don't really know what they're working on, and are dragged onto solving problems they didn't plan initially. Well, Pijul lets you focus on your work, then make patches, and then separate them into branches, thanks to commutativity.

- Separation of contents and operations: this really feels like the CSS3/HTML5 of version control. In Pijul, patches have two "detachable" parts, a part describing what the patch does (as concise as "I introduced 1Tb of data", i.e. just a few bytes), and the contents (not concise: the 1Tb themselves). You don't need the data to apply a patch, so when working on large files, you can record 10 different versions, and your co-workers will only download the parts that are still alive after that. No LFS required!

- Precise modeling of conflicts: conflicts are stored in our model, not "recorded" or "artificially first class". They're literally the core of our model, the initial theoretical motivation. Conflicts are where you need a good tool the most, and we model them and store your precious resolutions as actual patches, so they don't come back (no "git rerere" needed, and conflict resolutions can be cherry-picked).

Now, we also have less game-changing things like:

- Accurate and super fast "blame" (which we call "credit", and which doesn't require Pijul to look at the entire history like Git does).

- Generic diffs, and therefore merge, i.e. not necessarily line-based. We haven't implemented them, but you could in theory implement AST-based diffs on top of Pijul.


https://news.ycombinator.com/item?id=39452312

This comment (that probably prompted this submission) gave me some idea at least.


In day to day use, there's very little difference in the practical use of git vs. pijul. You edit files, commit then push. Of course the user interface is different and some of the terminology is different but it's still a distributed version control system.

As for the differences, the advantages are listed right there on the front page: commutation, merge correctness, first-class conflicts and partial clones.

Explaining these in more detail in a short example ("a blurb") is not really easy because you'd have to first set up an example three way merge (for example) and then study the behavior of git vs. pijul in detail. I recall seeing a video presentation from the Pijul authors which delved deep into this if you're interested.

But I'll give it a go anyway...

tl;dr: pijul can handle certain merge situations automatically where git requires you to manually resolve them
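A deliberately coarse sketch of the commutation idea (Pijul's real model works at the level of lines within files, not whole files; this toy model just shows that independent patches apply in either order with the same result, while concurrent edits to the same region are a genuine conflict):

```python
# Toy patch model: a patch sets the contents of one file.
def apply(tree: dict, patch: tuple) -> dict:
    path, contents = patch
    return {**tree, path: contents}

def conflicts(p1: tuple, p2: tuple) -> bool:
    # In this toy model, patches conflict iff they set the same
    # file to different contents.
    return p1[0] == p2[0] and p1[1] != p2[1]

base = {"a.txt": "hello", "b.txt": "world"}
pa = ("a.txt", "hello!")   # Alice edits a.txt
pb = ("b.txt", "world!")   # Bob edits b.txt

# Independent patches commute: the order of application is irrelevant.
assert apply(apply(base, pa), pb) == apply(apply(base, pb), pa)
assert not conflicts(pa, pb)

# Concurrent edits to the same file are a real conflict, not a guess.
assert conflicts(("a.txt", "HELLO"), ("a.txt", "h e l l o"))
```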


More than that. If you have ever had to fix a bug in code common to multiple maintained releases of a project, being able to apply the same patch to them all as its own thing instead of having multiple cherry-picked commits with identical content would be nice.


I think giving a patch its own identity is a pretty neat concept and clearly different than the git approach, so thanks for this example!


You're describing working on your own single-author project, in which case there is indeed little difference (Pijul has less tooling).

In practice on actual real world cases, there are lots of differences when you start working with others: even on a small project, you don't have to plan your feature branches anymore, conflicts are solved once and for all, you get free cherry-picking of bugfixes to your production branch, etc.

When your project scales, there are even more differences: commutativity handles large repos for free, patches describe large files much more efficiently than by giving their whole contents (which snapshots do).


I think that whatever VCS comes along today will have a hard time finding a broad-adoption use case that outweighs the massive footprint of the git ecosystem.


That sounds very much like something you could copy-paste into ChatGPT and get a somewhat good answer.

One that you'd definitely have to double-check, because it might hallucinate.


I feel I'm missing something; for a version control system, you'd think managing different versions would be kind of central, but in Pijul that seems more like an afterthought. If I'm reading correctly, you could use 'channels' the same way as you use git tags, but that doesn't seem to be happening in practice.

For example I look at 'sanakirja' project. The current version is 1.4.1 and the previous version was 1.4.0, and the version before that was 1.3.3:

https://crates.io/crates/sanakirja/versions

but if I look at the repository, there is no way to see any of that?

https://nest.pijul.com/pijul/sanakirja

There is "Tags" tab which is empty, and channel selection drop-down that has only "main" channel. So if I want to browse code at version 1.3.3, or see the difference between 1.3.3 and 1.4.0, or even just see what versions there are, how would I do that? To me those seem like very elementary questions, and yet reading through the manual I can find no hints toward this direction.

There is a FAQ entry that seems relevant:

> Is it possible to refer to a specific version?

> Excellent question. Since Pijul operates on patches rather than snapshots, versions are essentially unordered sets of patches. How do we communicate a specific version number to one another? We solve this by abusing elliptic curve cryptography primitives.

But it's not clear at all how those "version identifiers" can be used, or indeed whether they are even implemented yet. At least based on the manual, no commands seem to take a "version identifier" as an argument, nor can I see them anywhere in the Nest web UI.


Oops, don't look at what we do! These repos have been used for dogfooding and bootstrapping extensively; they have the worst structures and aren't good examples of nice, clean workflows. The tool isn't really "experimental" anymore, but its repos are still used for heavy experiments.

> But it's not clear at all how those "version identifiers" can be used, or indeed whether they are even implemented yet. At least based on the manual, no commands seem to take a "version identifier" as an argument, nor can I see them anywhere in the Nest web UI.

`pijul log --state` does that, and various commands (tags, fork) do it as well. Tags are due for a redesign, since they can be made much more efficient with a really cool new design, and make Pijul a perfect hybrid between patches and snapshots.


> Oops, don't look at what we do!

Is there some repo that is then a good showcase?


Oh, apparently there is `pijul tag` command that might do something relevant here?

https://nest.pijul.com/pijul/pijul:main/QL6K2ZM35B3NI.JIAAA

but the docs do not say anything about that, so it remains a mystery. Also, it's still weird that their own projects do not use those tags, or is it just that the web UI isn't showing them properly?


I have used Darcs, Mercurial, Subversion, and Git professionally. I love distributed version control systems, and I love Git.

However, Pijul looks interesting, nice work!

Pijul has a number of stated strengths over Git, and from my first impression it looks like the version history is implemented as a CRDT. Some of the statements make me wonder how many conflicts occur in real use. If Pijul produces as many or fewer conflicts than Git, great. If more than what Git produces, that's a barrier to adoption, as merge conflicts are one of the biggest pains when using Git.

For me though, what appears to be one of its biggest strengths (commutative workflow) is also its biggest drawback. There are so many teams that are trained in the git workflow and used to it that getting them to switch to a completely different style of workflow in sufficient numbers will take years.

Git compatibility is a key missing feature. Git had this with Subversion, and I think it's a big reason why Git won over many of the Subversion crowd.


The basic premise of Pijul is that the way they model changes leads to better merges, fewer conflicts, and (crucially) no cases of bad merges. Git can complete merges and do it incorrectly. https://pijul.org/manual/theory.html

This means that in some cases, Pijul will correctly merge where any of the git merge strategies would create a conflict. It also means that in some other (rarer) cases, Pijul will generate a conflict where git would not: git would guess, in effect, and either get it right or get it wrong. I consider both of these things to resolve in Pijul's favor.

The Pijul model also means that conflicts preserve some crucial state which can be used to resolve the merge. A conflict is modeled as a specific data structure, not as special syntax intruded into the source file. One consequence is that conflicts can in some circumstances be resolved by applying more patches: because the conflict is metadata about the file, it isn't data in the file which screws with subsequent state changes.
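The "conflict as a data structure" idea can be sketched like this (a hypothetical toy model, not Pijul's representation: the point is that both alternatives survive as values, and a later patch records the resolution so it never reappears):

```python
from dataclasses import dataclass

# Toy sketch: a conflicted region holds both alternatives as data,
# instead of '<<<<<<<' markers spliced into the file text.

@dataclass(frozen=True)
class Conflict:
    alternatives: frozenset  # the concurrent versions of the same region

@dataclass(frozen=True)
class Resolved:
    text: str

def merge(a: str, b: str):
    # Concurrent edits to one region become a first-class conflict value.
    return Resolved(a) if a == b else Conflict(frozenset({a, b}))

def resolve(c: Conflict, choice: str) -> Resolved:
    # The resolution is itself a recorded change, referencing the conflict.
    assert choice in c.alternatives
    return Resolved(choice)

state = merge("timeout = 30", "timeout = 60")
assert isinstance(state, Conflict)          # both sides kept as data
state = resolve(state, "timeout = 60")      # a patch records the choice
assert state == Resolved("timeout = 60")    # ...and it doesn't come back
```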


What I am excited about Pijul is being able to maintain a personal codebase that can accept changes from multiple upstreams. I’m not looking at commercial teams as much as homelabs or local-first environments.


I'm watching Pijul from afar for years now.

I'd love to see it get 1% of the investment Github gets...


Isn't it more Pijul vs Git than Git_hub_? There's a massive git community/ecosystem that has nothing to do with Github (e.g. Linux kernel development, at least last I looked).

I've dropped off Github for personal projects since it's trivial to run my own git repo and git version control is built into _everything_ by default, but there are definitely annoyances I've encountered while using git that sound like they would be non-issues in Pijul, so on that level it piques my interest. On the other hand, I would want to be able to do two things before I'd seriously consider switching:

1. Self-host Pijul repos with some web interface similar to e.g. Forgejo/Gitea/Gitlab/etc (i.e. not just source control, but also a bit of project management and CI). Looks like Pijul is developed on something called "The Nest" which looks basically close enough, but the source code for that isn't public yet.

2. Install a plugin in an IDE that would let me leverage it inside the UI (with in-IDE conflict resolution)


The 'hub' version of Pijul is The Nest, and exploring it I found this vs-code plugin for Pijul integration: https://nest.pijul.com/GarettWithOneR/pijul-vscode


Testing Git vs Pijul is hard unless you have other people to work with. I've wanted to use Pijul and other VCS tooling for ages, but it's pointless unless I can try it with other contributors. So the hub does matter.


Github can use/offer Pijul. Sure, they named themselves after the technology, but nothing prevents them from offering other version control systems.


Nothing except their own arrogance. See svnhub.com. It was registered by github to block potential competition long before SVN support on github became available.


Preventing some other party from riding on the coattails of your trademark is reasonable behavior as far as I'm concerned. This doesn't prevent anyone from offering an SVN host, just from stealing free publicity from GitHub in the process. That isn't arrogant at all.


Then they shouldn't have been riding the git hype wave and using/stealing that trademark.


git is not trademarked.


git is a trademark. Here's the USPTO trademark registration info: https://tsdr.uspto.gov/#caseNumber=85961336


I've tried that link on three browsers (Safari, Firefox, Brave) and there's no content in any of them.

Nevertheless, I'll assume you're not playing some weird game, and that git is trademarked, in which case I stand corrected. It's therefore safe to assume that GitHub and GitLab are allowed to use git in their trade names, according to the holder of the trademark. Unlike a notional "SVNHub", the trademark violation in those two names is quite clear, so they either have explicit permission and pay a license, or Linus (presumably) is ok with their existence.


I'm sorry that the link seems to have expired. It would show that the trademark is held by Software Freedom Conservancy, Inc.

I invite you to repeat my search. Unfortunately, the USPTO trademark database search interface is not very intuitive, so be prepared for some trial and error if you do.


I would really love to see a v1.0 release. That is a big milestone to wider adoption and possible investments that follow.


Github also managed to succeed through being a kind of social network (think "stars").

If you care about Github's dominance being an issue, walk the walk: refuse to do bug reports through it, publicly shame people and companies that use it or advertise it.


If I care about one thing, should I "shame" others who do not care about it? It feels like your comment escalates things too quickly. Could it be that you wanted to expand your thoughts in between and didn't have the time, or just forgot?


It's a comment, not a blog post; I'm assuming some familiarity on HN with these issues and with why using Github would be considered bad behavior.

An actual blog post:

https://drewdevault.com/2021/12/28/Dont-use-Discord-for-FOSS...


There's also sfconservancy's GiveUpGithub campaign: https://sfconservancy.org/GiveUpGitHub/


I use Github and I bet 99% of coders here do too. If someone tries to publicly shame me, I'll just laugh at them.


> I use Github and I bet 99% of coders here do too.

I know plenty of people & projects using Gitlab and Sourcehut (myself).

> If someone tries to publicly shame me, I'll just laugh at them.

Shaming may be wrong. You must have pretty good job security/wealth/influence to refuse Github because this can lose you many opportunities in the industry.

But that doesn't make it good, only understandable, and honestly sad.


I tried using Pijul as the main VCS for my projects, I really did, and the fundamentals are awesome, but last time I used it the UX was pretty terrible. The easiest and most immediate thing that'd address my issues would be a git status-esque command, which has been in development for quite some time but still hadn't landed last time I checked.

I wish pmeunier all the best and I’m glad to try out pijul again once it’s a bit friendlier.


We have `pijul diff -sU`, which is similar to `git status`. Also, feel free to contribute!


I'm watching this (and Jujutsu, Sapling, etc.) with interest, but I wish there was more focus on Git's real weak areas. Yes, it sometimes makes merge conflicts more difficult than it could, but you can generally deal with that. The bigger problems are:

* Poor support for large/binary files. LFS is bare-minimum proof of concept.

* Poor support for large projects. Big monorepo support is definitely getting better thanks to Microsoft. Submodules are a disaster though.

How does Pijul support large files and large projects?


- Large/binary files: Pijul splits patches into operations + contents. You can download a patch saying "I added 1Tb there" without downloading a single byte of the 1Tb. No need to add any extra feature.

- Large projects: Pijul solves that easily using commutativity, you can work on partial repos natively, and your changes will commute with changes on other parts of the monorepo.
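The operations/contents split can be illustrated with a toy sketch (hypothetical structure, not Pijul's actual patch format: the header describing the operation is tiny, the bytes it introduces are identified by hash and fetched only if still live):

```python
from dataclasses import dataclass
from typing import Optional
import hashlib

@dataclass
class Patch:
    op: str                           # e.g. 'replace b.bin' -- a few bytes
    content_hash: str                 # names the bytes without carrying them
    contents: Optional[bytes] = None  # fetched lazily, only if still live

def record(path: str, data: bytes) -> Patch:
    return Patch(f"replace {path}", hashlib.sha256(data).hexdigest(), data)

# Ten recorded versions of a big binary file...
versions = [record("b.bin", bytes([i]) * 1000) for i in range(10)]

# ...but suppose only the latest version is still alive after all patches
# apply. A co-worker can apply every op while fetching just one blob:
download = [p.content_hash for p in versions[-1:]]
assert len(download) == 1   # one blob to fetch, not ten
```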


> No need to add any extra feature.

Well that's the building blocks, but Git can do that through blob filters and whatnot. It's a necessary foundation but not a complete feature. You need ways of recording which blobs should be fetched eagerly, which should be fetched on demand, maybe a way to indicate where to get the data (you might want a central store for big files like LFS).

> you can work on partial repos natively

That's very good. Does it have anything like submodules or subtrees? I kind of think they are a bad idea in general but people do use them and they can be useful in very niche cases. From the sounds of the patch-based system I guess you could do subtrees quite elegantly? Can the patches be given a "base directory"?

I don't want to seem like all this stuff should be done immediately but it does feel like these are things that kind of need to be integrated from the start to work properly, unlike in Git where they've been tacked on badly.


I have been following Pijul loosely for a while, and I would strongly agree that it could do with some information on the possible practical advantages of the approach. I do have some Pijul repositories around, but have not used them deeply enough to explain any of the advantages. However, one possible advantage to note: the equivalent of cherry-picking as used in Git et al., which always generates a new commit hash per cherry-pick for the same content, should not happen in the Pijul world as I understand it. I.e., a specific change, when "cherry-picked", should have an identical hash, which has several implications.

- Having not actually tested this in my own repos, take my insight with the appropriate grain of salt.


Just like how Sapling is compatible with GitHub, I'd love to see Pijul being compatible with GitHub wrt creating at least a read-only mirror.

I've seen this exists: https://github.com/purplesyringa/PijulGit, but hasn't been updated in 5 years.


Is sapling compatible? It gets rid of your .git directory.


It's network compatible in that you can clone, pull, push, etc with a github repo.


Hey what does this item in the FAQ mean?

> Do files merged by Pijul always have the correct semantic?

> No. Semantics depends on the particular language you’re using, and Pijul doesn’t know about them.

That makes it sound like Pijul might sometimes merge two functionally correct versions of a file into a non-functional, incorrect one? Is this referring to a problem git and other merge tools have as well, or is it unique to Pijul, a result of being so good at merging without conflicts that it sometimes lets through semantic conflicts?


All version control tools have the same problem since AFAIK none of them care about the semantics of the program being version-controlled (except Unison).

For example, in file foo.js you export the function foo. Now, if you delete the function foo while your colleague imports it into bar.js, the resulting changes are textually consistent, but the program is now functionally broken.
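The same scenario, translated to Python for concreteness (a toy illustration: the two patches touch different files, so any line-based VCS merges them without conflict, yet the merged program is broken):

```python
import types

# Module 'a' exports foo at the merge base:
base_a = "def foo():\n    return 1\n"

# Patch 1 (you): delete foo from a.py
new_a = ""
# Patch 2 (colleague): import and call foo from b.py
new_b = "from a import foo\nprint(foo())\n"

# The patches touch different files, so merging them is conflict-free:
merged = {"a.py": new_a, "b.py": new_b}

# ...but the merged program is semantically broken: b.py imports a name
# that no longer exists in a.py.
mod_a = types.ModuleType("a")
exec(merged["a.py"], mod_a.__dict__)
assert not hasattr(mod_a, "foo")  # textually consistent, functionally broken
```

Running the merged b.py would raise an ImportError, which is exactly the kind of failure only a test suite or compiler, not the VCS, can catch.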


Monticello[0] cares about semantics too!

Though I guess it cheats by existing in an image-based world (I have never used it; it's just something stuck in my memory from many years ago).

[0] https://wiki.squeak.org/squeak/1287


This is why we test our builds using continuous integration. Only updates that compile and pass the tests should be published.

How does Pijul handle continuous integration? If there’s nothing interesting going on with that then I don’t see how the UX can be any different for publishing new versions.


Plastic's diff tool is also centered around syntactic units of the programming language rather than stupidly scanning for line breaks, if it recognizes the language.


That still wouldn't mean that the merge is semantically correct.


Yes, of course this can happen in git as well, as it doesn't know the semantics of the changes it is merging; it only knows about textual conflicts. But if a change applies cleanly, it doesn't mean that it necessarily also works, because the other branch might've changed the program logic.

It doesn't happen very often, but it is still worth running integration tests after a merge even if both branches were passing tests on their own.


So in this scenario there's a general case and a special case.

The general case is where merging a patch makes the code invalid in some sense. This can range from a syntax error to a subtle bug. Version control systems don't prevent this, because they can't: in full generality, correctness is an opinion of the author.

The special case is where two patches each create a valid program, but applying both of them makes the program invalid. I think that's in the FAQ because pijul makes much of the fact that it has a sound theory of patches, as it should; that's a wonderful thing. But people commonly confuse soundness and validity, leading to questions like "so what you're saying is that a merge will never result in a borked program?".

IMHO that shouldn't be in the FAQ though, because it creates the impression you got, which is that Pijul is talking about something which might be possible in principle but which it doesn't happen to be capable of. It's just patiently explaining that Pijul can't do impossible things.


> However, channels are different from Git branches, and do not serve the same purpose. In Pijul, independent changes commute, which means that in many cases where branches are used in Git, there is no need to create a channel in Pijul.

I've never understood this. AFAIK, the only use case I've ever seen for git branches is "I have some code, but don't want it going live yet". Maybe it's a WIP demo, maybe you want someone else's eyes on it, maybe you just want to back your current state up on a remote server because your laptop is going to explode.

Am I misunderstanding, and Pijul manages that without channels? Or is there a common case git branches are used that I missed?


A common case for git branches is long-life versions. For example, one git branch can be for version 1.0 and one git branch can be for version 2.0. This can be good for major upgrades, as well as for site-specific installations, as well as for regulated industries that need to audit specific versions.


I think the most common use for git branches is topic branches. Independent lines of development you are not ready to share with your team, or your team with the rest of the company, or commit to production. I don't see how Pijul features can remove the need for such lines of development.

I also don't see how it matters whether a branch is long-lived or not. What Pijul may help with is a (broken, IMHO) workflow of some Git projects where the long-standing branches are constantly rebased. This workflow is bad, but having branches in Git does not force you to (mis)use rebase.

I am not saying anything against Pijul, and maybe there is a better way than branches to manage multiple lines of development, but I'd like it to be explained. So far I cannot guess what it might be.


So we have a git branch v1, and a git branch v2, and sometimes we pull commits into both, and other times just one. How does pijul manage that without using channels?


I had the same question. Turns out, it's mostly a future possibility, and for most current use cases of branches in git, you would use a channel in pijul. At least that's how I understand these discussions in the pijul discourse.

https://discourse.pijul.org/t/phenomenological-pijul-or-piju...

https://discourse.pijul.org/t/working-without-channels/1047/...


Thanks for the links! I've looked at them, and while they gave me more (and interesting) information on Pijul, they did not explain how you can work without multiple channels in a realistic development environment. However, from these pages it appears that working with channels in Pijul is more cumbersome than working with branches in Git.


There are (unfortunately) a lot of git repositories with multiple long-lived branches, often to track what's released in different environments. Unfortunate because it leads to a _ton_ of merge conflicts and uncertainty about what code is live in what env.

One of the most popular of these flows was popularized under the brand "Gitflow". Atlassian has a detailed, and critical, writeup of that here: https://www.atlassian.com/git/tutorials/comparing-workflows/...


I've been following this for awhile. Not my area of expertise but it seems like such a solid project.

Curious to hear about any experiences using it or thoughts about how it could be improved, extended, or compares to other systems in actual use.


As somebody not using this, these look like the main obstacles to it getting popular (assuming it works as advertised and it's easy to use, I wouldn't know):

- Work on the pitch and clearly state the problem this is solving instead of mainly talking about the competition. What makes this unique? Why should people care?

- Make it easy for teams to start using this. My guess is that this is where a lot of developers fail to convince others in their teams. Back in the day when I started using Git, interfacing with existing svn repositories was a key selling point for me. Likewise, I helped migrate a big cvs repository to Subversion when that was new. Key selling point there: we don't lose our version history. This is a complex topic of course, but I bet there are actually a lot of solutions here.

- Show that there's an ecosystem. Who is using this? What tools are there? Are there any project hosting things that I can use? Answer these questions. This is about taking away any concerns people might have about using this that are perhaps half convinced already.


This is a chicken and egg problem and I don't see a way out of it.

Even if Pijul is better (not voicing an opinion here), it's not so drastically better that it will replace Git in widespread use in the near term. The difference in day-to-day use is not huge.

And it is better in handling certain merge situations that are painful in git, but most programmers don't run into these often enough to care.

> What makes this unique?

It's patch based rather than snapshot based.

> Why should people care?

It avoids certain merge scenario problems that git makes painful.

> when I started using Git, interfacing with existing svn repositories was a key selling point for me

Pijul can import from and export to Git (and probably others) with fewer problems than git vs. svn (because SVN is not distributed and had a weird branching model).

> Show that there's an ecosystem

Here's the chicken and egg problem again.

There's a "free" hosting service advertised on the front page. Or you can use it with your git hosting (but lose some of the advantages).

But GitHub and GitLab have their own CI systems and other infrastructure which isn't going to be easy or cheap to replace.

So yeah, I think that Pijul is a great piece of technology that solves a real problem we have with Git, but it's unlikely to overcome the inertia that Git{Lab,Hub,} have.


> Here's the chicken and egg problem again.

Back when git had its chicken-and-egg problem, there were hundreds if not thousands of FOSS project members independently starting threads on mailing lists about how and/or when to move to git. Many of them were already using the git web server thingy and manually syncing with svn or whatever.

Some contrarians aside, the general consensus at that time was, "Yes, that clearly does solve some real pains we currently experience on a regular basis (branching, local branching, renaming files, etc.), but how do we practically move to it?"

Practically moving to it required the infrastructure. So you had a classic chicken and egg problem, until whatever broke it (sourceforge adding git compatibility? github?).

With Pijul you have a small number of adherents who have trouble explaining the pains that Pijul addresses, much less how common those pains are in the average git project. So you haven't yet arrived at the question of how to switch to Pijul-- you're still at the question of why anyone should.

Put another way-- you don't have any potential chickens longing to be incubated in an integrated Pijul hub/CI environment. If one poofed into existence, it might hatch chickens. Then again, it might go relatively unused. So I don't have a chicken and egg problem here.


When git came out it was (arguably) miles better than SVN and others. It was a big leap from centralized to decentralized version control.

Git to Pijul is a much smaller change, it is much more difficult to justify.

And then there is the fact that popular CI solutions are tied to GitHub and GitLab which increases the friction significantly.


> Git to Pijul is a much smaller change, it is much more difficult to justify.

Having used both extensively, I don't think this is true at all. I don't see as much difference between SVN and Git as I see between these two and Darcs/Pijul (even though Darcs has scaling issues).


I would suggest adapting the website to communicate those points a bit more clearly. The communication on that website isn't great. It's a common problem with things techies build for other techies.

For users to switch from something they already use, the main job of that website is to articulate why switching is worth doing and investing lots of time in, and/or to make the point that it's really easy to switch. Without that, most people simply won't.

The point with an ecosystem is that there won't ever be one worth talking about unless people work hard to build one. "Build it and they will come" rarely works. This website isn't good enough to make that happen.


I like how Pijul has a math-centric approach under the hood. The name needs to be changed if they want to get traction.


Just to provide a counterpoint: The name is great! Very memorable and original. Much better than your regular plooper, vndl or dabix.


Pijul also turns up easily on web searches, despite there being a bird with that name.


They tried that actually. It was traumatic for everyone involved.

It's a pretty weird word for English-speakers, no argument from me there. I don't think I can explain why it's weird, but it is. I don't think that's even in the top five barriers to adoption though.


git did quite well with its...


That may be survivorship bias. How many TLA projects failed that we never heard of?

Not to mention the number of times I've tried to do a web search on an ordinary English word because someone thought it was brilliant to use as a product name, and it turned up nothing because the product didn't get as popular as Git and there were no other distinctive keywords to go along with it, such as when your query includes not only "bash" but also "variable", a word unlikely to occur in a dictionary entry about the verb to bash.

I'd be very surprised if there were no causal relationship between names and the chances of success, all else being equal, and the devil is in the "all else". Evidently this is not a problem with a big enough marketing budget (think Teams and Meet), but without that it may be much more economical and practical to think of a useful name.


I don't want this to be interpreted as a negative comment about Pijul: I didn't try it, and don't want to judge.

My question is: what is the motivation for making distributed VCS? Over the entire lifespan of Git the number of times I had more than one remote... I can probably count on my fingers. And I've been in infra / ops for the better part of my career. And, all those times were exceptions. I'd do it to fix something, or to move things around one time, and then remove the other remote. Other times it was my hobby projects I shared with someone in some weird way.

Most developers who aren't in infra will never see a second remote in their repositories even once in their career. It seems like developing this functionality adds a significant overhead both in terms of development effort and learning effort on the part of the user. So... why?


> what is the motivation for making distributed VCS?

it depends on if you're asking about the motivations for distributed version control for linux kernel development in 2005 or the motivations for distributed version control today. git predates AWS and predates the state of the industry being that it's very easy and cost-effective for people to make central servers and web apps and things of that nature. My understanding is that "emailing a patch to a mailing list" was a more reasonable workflow then, since it piggy-backed off of people's email hosting providers (which, at that time, wasn't even "everyone using gmail", since back when git was created, gmail was invite-only; git predates gmail having open signups). Plus Subversion's branching model wasn't particularly great, so having different people work on things on different branches and giving them feedback and merging the branches when they were ready wasn't really a great experience.

The distributed nature of the version-control system facilitates branches, since a branch and a copy of the repo somewhere else are abstractly the same thing. Practically speaking, people don't push and pull code between their workstations and the network topologies are typically centralized in nature, but on a data level the distributed model is dual to the branching model, and the branching model is the thing that people actually care about. Although I _do_ think it's pretty neat that you can use a thumb drive or NAS as a remote instead of needing a server, it's probably not a core use-case for most people and most projects.


Git was the third VCS I had to migrate to. I still remember very well the transition. Existence of AWS has very little to do with any of it really...

The way programming shops used to be run around the time Git appeared was what today you'd call "self-hosting". I.e. a company would have a dedicated machine (or several, depending on the size of the codebase), and those would host the company's Git repository. Neither at the start, nor now, nor ever was Git primarily used as a distributed system anywhere outside of the Linux kernel (and perhaps a few similar projects).

At the time Git appeared it offered some practical advantages over Subversion, which was its main competitor. But those advantages weren't due to centralized / distributed distinction. Eventually, Subversion caught up to some of the features Git had.

In other words, what you say about making cost-effective servers is absolutely backwards. It's more expensive today to do that. Back in the days you paid for the physical components and electricity, while today you are also financing huge infrastructure built around physical components and electricity which you don't own.

Where AWS or the likes do win today is in situations like when your company had multiple international branches and you needed to somehow move a lot of data between them. I remember that Git was very welcome in our Israeli office (after switching from Perforce) because the other office was in Canada, and synchronization with them was painfully slow and expensive. Public cloud contributed to solving this problem, but, mostly, it "solved itself" due to network latency and bandwidth increases over time.

> The distributed nature of the version-control system facilitates branches,

This is just not true. Branches exist in both distributed and centralized VCSs. There was a time when it was "expensive" for eg. Subversion to have branches (because, oh horror! they had to be created on the central server!) but, today, the way developers work with Git, branches are almost always duplicated on the VCS server anyways. Also, the amount of traffic necessary to service the code is really tiny compared to everything else an organization does, so it's a moot point.

> Although I _do_ think it's pretty neat that you can use a thumb drive or NAS as a remote instead of needing a server,

Nothing stops you from doing the same with centralized VCS... This isn't the function of distributed / centralized... maybe in a particular VCS it's harder to do, but the reason would be that it's virtually never needed, so nobody bothered to implement that / make it easy to do.


This question is really deep. The "distributed" nature of Git makes some things easier than SVN: you can scale to large teams, work offline, split a repo into independent subrepos for a while (managed via branches; good luck with the merge!).

However, this isn't really what Pijul calls "distributed": in Pijul, this term is about the work that people are doing when working collectively on a shared file. Which datastructures allow asynchronous contributions to happen? How to represent conflicts? Those questions belong to the field of distributed computing, together with things like CRDTs and leader elections.


> (managed via branches; good luck with the merge!).

Haha... Oh, this reminds me... there's an Ada mode for Emacs that functions like that. In order to build it you need to combine multiple branches that each have their own contents (i.e. it's essentially multiple repositories combined in one) in the same checkout. It's the most bizarre way to use Git I've seen in my life (outside of total noob stuff, like when a guy wrote his entire project in .gitignore)

On a more serious note: thanks. I see now what that means. Maybe I should find time to look more into the project!


Enable development that does not rely on a central repository. For some people/projects it's important, although these days it's a minority.


My point is that it's a huge overhead for virtually no gain. This is something Linux kernel needed because of how the community around it is organized, but no commercial product works like that. Even vast majority of open-source projects don't work like that.

I mean, when you get hired into some programming shop, you don't go door-to-door and ask your new coworkers where their repository is, right? You open the company's wiki and it tells you where the repository is. You clone it and start working on your tickets, pull, push, rinse repeat. There's no reason for you to pull from your colleague's remote, even if it existed -- all communication is centralized and happens through the central hub, where various corporate policies for working with the repository are enforced (protected branches, CI pipelines, etc.)

Over 99% of all developers in the world don't need this functionality. So, to say that you want to "Enable development that does not rely on a central repository" isn't answering the question. Yeah... it does, but why would you (or the authors of Pijul) care about this extremely rare case?


> why would you (or the authors of Pijul) care about this extremely rare case?

I'm the main author, and my answer is: because it allowed us to model with great mathematical rigor what conflicts are, how to represent them and how to treat them in the most intuitive and accessible way. The rest is indeed less essential, but still nice to have (I like doing my backups on an external hard drive using Pijul to copy my software projects).


Pijul is a fantastic tool in the simplicity it brings compared to git, and I hope it succeeds in the long run.


IIUC, Pijul has all the good parts of Darcs, with the exponential merge issue[1] resolved.

[1]: https://darcs.net/FAQ/Performance#is-the-exponential-merge-p...


Anyone using Pijul? Either in prod or for hobby projects.


Does Pijul handle branches better than Darcs? I _love_ using Darcs but I had to stop because there was no good way for my teammates to see my new branches (really separate repos) unless I told them myself.


`pijul fork`. There you go, I wrote an entire key-value store just to get that to work. It turned out to be faster than all others, but that wasn't intentional.


Nice!


Gotta say, I don't see anything about it in their site's "Documentation" link. Unless I hear different, I don't think I'm interested.


Disregard, it seems that "channels" may be what I'm looking for here.


The front page could do with a simple example of a Pijul session to show how you would actually use it in practice. Maybe whatever would be the Pijul equivalent of a clone - branch - commit - push workflow.
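For instance, something along these lines, based on the commands documented in the Pijul manual (the Nest URL, repo name, and channel name here are placeholders, and I haven't verified every flag):

```shell
# clone    (git clone)
pijul clone https://nest.pijul.com/someuser/somerepo
cd somerepo

# branch   (git branch + checkout; "channels" play this role)
pijul fork my-feature
pijul channel switch my-feature

# commit   (git add + commit)
pijul add src/new_file.rs
pijul record -m "Describe the change"

# push     (git push)
pijul push
```

Even a sketch like this on the front page would let git users map their existing mental model onto Pijul in thirty seconds.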


Would love to see a 3 way comparison between Pijul, Fossil and Git


The submitted website says they're explicitly against 3-ways. Either drop one or add a fourth!


As a left-handed person, that name is a nightmare to type.


As a left handed person, I had not even noticed that pijul is typed with the right hand. As a touch typist, now that you've brought it to my attention, it's a pretty fluent run, actually. I type it 3-2-1-2-3, and back on the home row it is, the last key is in the rest point for that finger.


That's what aliases are for. Regardless, I'm surprised that you even notice which hand you're typing with. For example, the word 'regardless' is mostly on my left hand and I'm right handed. But I wouldn't have noticed it if I didn't go looking for such words.


I'd be very interested in a usage report from somebody. What's it been like using this tool? Is there an email workflow I can use like git's? I feel like an email workflow could really help with adoption, because it would mean I wouldn't need to rely on a third-party website for distribution.



Pijul seems to be written in Rust; no one has mentioned it yet, so I thought I would.

Also because the language a project is written in is one of the first things I like to check.


Does it accept per-repo settings? (e.g. git config --local)

Does it use proper SSH? (e.g. `git clone my_entry_on_ssh_config:/repo_path.git`)


I seemingly can't link to line numbers in their nest thing, but https://nest.pijul.com/pijul/pijul:main/SXEYMYF7P4RZM.ERRQA#... is ssh.rs in the pijul-remote directory, so presumably "yes"

also, an especial :fu: to whatever the hell is going on with SXEYMYF7P4RZM.ERRQA being a permalink to a file with a specific name, forcing me to write out in english what's going on there


yeah the web thing is pretty limited: no context lines on diffs at all, and I couldn't find a link to the full-file diff from the changeset. But I wouldn't worry too much about the nest usability yet.


That honestly sounds like an implementation detail compared to figuring out what version control model even makes sense to begin with

If you want local configs, worst case you can update $HOME inline and make it use different dotfiles. If ssh is a must, sshfs can be a way to achieve that


Not being able to set ssh parameters (key paths, etc.) via .ssh/config was a nightmare on git a long time ago: impossible to not leak identities to random servers, impossible to have curated identities for the same remote, impossible to use ad-hoc jump hosts, etc.
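For reference, the kind of per-host setup being described lives in ~/.ssh/config; the hostnames, alias, and key path below are made up:

```
# ~/.ssh/config (example host alias; names and paths are hypothetical)
Host work-repo
    HostName git.internal.example.com
    User git
    IdentityFile ~/.ssh/id_ed25519_work
    IdentitiesOnly yes
    ProxyJump bastion.example.com
```

With that in place, `git clone work-repo:project.git` picks up the right identity and jump host automatically; a VCS that bypasses the ssh_config mechanism loses all of this.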


Missing FAQ: Is the "j" in Pijul hard or soft?


Don't know what hard or soft is supposed to mean in a hypothetically ambiguous context, but the `j` here is obviously a voiceless velar fricative (`x` in IPA). Think loch (Scots), Χάρων (Gr.), joven (Sp.), хлопець (Ukr.)

https://en.wikipedia.org/wiki/Voiceless_velar_fricative


~~"obviously"? Why?~~ aha! see edit 2

That pronunciation had temporarily escaped me. I was trying to decide between a voiced postalveolar affricate d͡ʒ (hard) or voiced palatal approximant j (soft).

https://en.wikipedia.org/wiki/Voiced_postalveolar_affricate

https://en.wikipedia.org/wiki/Voiced_palatal_approximant

Edit: "hard" and "soft" seemed like the most logical way to describe what I thought was a dichotomy, particularly as it's also often used to distinguish between the voicings of "c" and "g".

Edit 2: Just read the whole FAQ (rather than searching for "pronunciation" or related terms), and noticed the "Where does the name come from?" entry, which mentions the Mexican origin, which is why "x" is "obviously" correct.


Regarding my usage of the "obvious": it would only be correct to say that it's obvious to me, sorry for that.

The reasoning goes like this: the word uses the Latin script in its most basic form, so it's most probably some western-European language, Romance or Germanic. The phonetic structure fits Spanish best, compared to the other languages I have even superficial knowledge of, the -ul being the most telling bit.


It's just pronounced "j". as the "g" in gif.


I logged in just to upvote this! Got a good chuckle out of me :)


I say the whole thing should be pronounced 啤酒 (píjiǔ), which means beer in Chinese, just to add even more confusion into the mix :) Learning the intricacies of tonal languages is probably quicker than fully understanding Git so you still come out ahead by switching.


That’s not even an exhaustive set of possibilities.


If you ask the waiter if they've got cauliflower or broccoli, I'm sure s/he'll still reply if all they've got is carrots, without being exhaustive


Thank you for innovation that could free us from git


Awww That's Great!!!


Is a git backend planned?



