> Over the years, several developers have asked why we're not on GitHub. The perception is that "everybody" is on GitHub, and some developers don't take your project seriously if it's not on GitHub.
This is the sad truth. It doesn't matter anymore if Bitbucket/Gitlab/etc. are better than GitHub: The social pressure for open source projects to move to GitHub is so high that the alternatives don't even stand a chance.
Maybe a meta-search engine for open source projects could help the competition, but if all projects are on GitHub, who would use such a search engine?
P.S.: I actually like GitHub, I'm just sad that all cool changes that GitLab has made recently will end up being mostly ignored by the OSS community. Even if they fixed their speed problem (which I think is their biggest downside compared to GitHub), no project would move there because "everybody is on GitHub".
Given that Gitlab provides a superset of Github's features, it's possible to operate on both platforms, while clearly indicating where the primary location is. Gitlab has built-in CI, so that can be used exclusively there, and it will be one incentive for users to come to a project's Gitlab instance instead.
Personally, I like hosting a repository on Github plus Gitlab for availability reasons, but if Github finally allowed disabling pull requests as well, one could easily make Github a read-only mirror.
Given the absence of interoperability protocols for project patches, tickets, releases, and wikis, we will either adopt more of FossilSCM's features into git add-ons (like git-appraise) or someone will write adapters.
The most concerning aspect of all of this is that projects that have access to hosted FOSS servers which are fast and under (semi-)direct control give in to peer pressure and move to a closed platform like Github.
If, say, Freedesktop.org ran a GitLab instance, it would prevent the move to Github. I cannot be the first one to think of that.
Thanks for using GitLab! While we can't disable pull requests on GitHub, maybe we should add a feature to GitLab to keep importing these pull requests. For this we would need to create forks on GitLab or allow pull/merge requests from code on another server. Opinions are welcome.
Yes, automatic connectors would solve a lot. Some users might insist on a bidirectional solution, though it's not necessarily a requirement. In fact, I personally wouldn't want Gitlab to request permission to modify the github copy. At most, I would be okay with it subscribing to feeds and email-replying (no need to oauth) like any github user can.
Speaking of disabling, if gitlab allowed disabling pull requests, wiki, issues, etc. and acting like a pure git repo, it would solve a problem github refuses to address: a project's users wanting to use github/gitlab while the maintainer hosts the primary repo somewhere else. It would also play well with the D in DVCS, which github sort of breaks by leading users to centralize for convenience. I mean, one day github is down, another day gitlab, and so on; it's just how our networks operate.
It would be nice if git knew of multiple remotes in such a way that one could ask it to pull from a random one, or fall back to one of the others if the selected one is unreachable. Like DNS records for a host.
This would be an improvement of the distributed and highly available characteristics of a DVCS.
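In the meantime you can fake it with a small wrapper, something like this rough sketch (the remote names here are just examples, and this obviously isn't a built-in git feature):
    # try remotes in order and stop at the first one that answers
    for remote in origin-github origin-gitlab; do
        if git pull "$remote" master; then
            echo "pulled from $remote"
            break
        fi
        echo "$remote unreachable, trying the next one" >&2
    done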
We should just reject github and watch how fast they roll out a self-hosted version almost nobody wants to host themselves.
Today's open source alternatives will be followed by endless further experiments in building a better github. It's hard to imagine someone "never" nailing it, let alone "never" being handed it on a platter, because github would hardly be the first website in history to annoy too many of its users at once at just the right time.
Going open source wouldn't have saved digg after everyone decided to break their old routine. They had nearly identical open source competitors with variations of their own interface all along, even direct clones.
None of their customers ever needed it or benefited from it being a proprietary service either. It's probably a lot more important to Gitlab than Github.
It's not primarily about the platform being open source, but about relying on a SPOF. It's dangerous and empowers Github to pull all kinds of stuff while they have the marketshare. Today's github may not do shady things, but tomorrow's can. It's irresponsible of us to put all our eggs in one basket. At the very least, helping them become the place where 99% of projects are hosted will allow github to ignore complaints.
In fact, it's a good thing Gitlab is not a pure open source project, because there's leadership involved and business interests with real-life problems to address. So, the chances of "endless experiments" are low. Sure, if Gitlab were fully open while keeping project leadership and all, it would be even better.
The only reason it's a SPOF is they don't allow anyone to read/run/modify their code... that's what open source covers.
I think the problem is we conflate selling the only access to our code with selling access to it. If you need online hosting for your project the alternatives are almost immutable - somebody's server running project management software.
No matter what Github does, developers are trending towards using open source tools, to the extent that Microsoft is porting their company to Linux right in front of our faces...
They do have a self-hosted version but it's probably not what you have in mind since it's quite expensive and is aimed at enterprises. It's called GitHub Enterprise. It's a black box virtual appliance since actually installing and maintaining all of the various services GitHub runs on independently would be a tough sell for a lot of companies.
One alternative is to also push to gitlab/bitbucket/private with some layered snapshots for backups. I still remember all the "free press" github got when they nuked all the repos.
> One alternative is to also push to gitlab/bitbucket/private with some layered snapshots for backups.
Can you explain what you mean by "layered snapshots for backups"? The convention is to push to several remotes because that's part of git's dna, so what's the improvement you're suggesting?
> I still remember all the "free press" github got when they nuked all the repos.
Git isn't immutable, it is possible to destroy a remote repo. So if one really wants verifiable backups, one needs to take an immutable snapshot of some sort.
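For example, `git bundle` gives you a single snapshot file containing all refs, which you can then copy to write-once or offline storage (a rough sketch; the backup paths are made up):
    # snapshot every branch and tag into one file
    git bundle create "backup-$(date +%Y%m%d).bundle" --all
    git bundle verify "backup-$(date +%Y%m%d).bundle"
    # park it somewhere the repo owner (or an attacker) can't rewrite
    cp backup-*.bundle /Volumes/offline-backups/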
> free press
Like the Pentium bug, the site-wide reset made corporate IT departments everywhere suddenly realize that they used github. It was a minor kerfuffle on the net that week.
The worst thing is that eventually, github will start to want to monetise even more, and we'll see it turn into SourceForge, at which point many projects that have tied themselves into github rather than spreading out more carefully amongst other git hosts will face a bit of an existential crisis.
This is part of why I have some issue with the Golang packaging story: it pretty much requires you to refer to dependent packages as `host.com/user/pkg`, so in most cases that's `github.com/...`. You can manually rewrite this, but that's not really ideal.
If the 'search & replace' method is not an option, you can create your own 'proxy' server like gopkg.in [1]. Since Go packages don't have to live only on GitHub [2], you can set up mirroring rules to work around problems with package location, version/revision, or GitHub downtime. Personally, I don't think creating your own proxy server would be a big deal, given the many dependency tools available for Go. 'Search & replace' is not mission impossible either; Go is not PHP, it just won't compile if a (local) dependency is broken.
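For the 'search & replace' route, something along these lines should do it (a rough sketch; the old and new import paths here are made-up examples, and the new host has to actually serve the package for `go get` to work):
    # rewrite the import path in every .go file, keeping .bak copies
    grep -rl --include='*.go' 'github.com/someuser/somepkg' . \
        | xargs sed -i.bak 's|github.com/someuser/somepkg|git.example.org/mirror/somepkg|g'
    # fetch from the new location and make sure everything still compiles
    go get git.example.org/mirror/somepkg
    go build ./...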
> no project would move there because "everybody is on GitHub".
If they had a solid sync service between GitLab and GitHub (PRs, issues, code, everything), then things could get interesting and some people might consider the switch.
1. We're working on the speed. It has gotten a lot better with the last few releases. Tomorrow's release is another big step, especially for diffs. We have a number of issues for this; I'm linking one of the meta issues here, but be sure to check tomorrow's release for graphs and whatnot [0]
2. It's definitely an interesting idea. I made an issue [1]. I'd love it if you would provide some feedback. I wonder whether we'd be allowed to do such a thing, but it seems to be in the interest of the end user, which is a good thing.
If you are serious about doing [1] and are willing to throw the hardware resources behind this initiative, I can probably work with you guys to make GitLab a central point for searching all open source projects that are versioned by Git. I've probably indexed over 100,000 public repos from GitHub as part of testing, and to be honest, one of the biggest problems has always been finding repos worth indexing.
However, it's worth noting that GitSense is not optimized for searching across 1000+ repositories at once, and to be honest, I don't think that's ever a good idea. When it comes to code searches, I believe they should be curated: some domain expert should be able to say, "if you are working on release x for open source project y, these are the branches/repos you will probably be interested in," and so forth.
Until we have artificial intelligence like we see in movies, nobody is going to produce relevant code search results by searching tens of thousands of repos. Software development is completely context driven, and what mattered for one release can become completely irrelevant in the next.
What GitSense is optimized for, is searching across logical groupings of code at the branch level. This can be 1000 branches from 1000 repos or 1000 branches from the same repo/forks.
If GitLab is serious about being a central point for all open source code searches and is okay with providing logical searches at the branch level, let me know. You can contact me via the email in my HN profile. And it's probably worth noting that since the GitSense indexing and search engines are completely separate from the frontend, it'll be easy enough for GitLab to add their own interface.
The problem with Google Code Search, and the one that you mentioned, is that you can never be certain the match is relevant/correct. And the reason for this is that you are always searching against a single branch/timeline.
Definitions can change from one release to another. Function behavior can change from one release to another and so forth.
Searching across millions of repos is not difficult if you are willing to accept that the matches returned may or may not be relevant. What is extremely difficult is building a search solution that can take into consideration all the changes, on every branch, from millions of repos. And this is a problem nobody is going to solve anytime soon.
The features are all listed, but just by glancing over the site it's not clear which extra features gitlab.com provides for open source projects that on github would require webhooks to external services, with the associated security risks. So, in addition to listing the features of Gitlab Community and EE, it would be helpful to see, for example, that there is built-in CI support that targets X platforms.
Instead of meta-search, git needs meta repositories. I should be able to specify an alias which represents multiple repositories, not just URLs like it does now. This way I could upload to e.g. @myrepo and have it upload across github, gitlab, and other git based hosts. Sure it could be done pretty easily with shell scripts, but it would help the cause if it were built in to git. I'm a git novice so it may already have this feature, but if not it seems like it would help with this problem.
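For what it's worth, stock git can already get close to this: a single remote name can carry several push URLs, so one `git push` hits all hosts (a sketch; the URLs below are placeholders). It only helps on the push side, though; fetches still come from a single URL.
    git remote add all git@github.com:user/project.git
    git remote set-url --add --push all git@github.com:user/project.git
    git remote set-url --add --push all git@gitlab.com:user/project.git
    git push all master    # pushes to both hosts in one go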
With all of the - I'd call them shenanigans, but I'm sure they're justifiable features [0] to someone - with homebrew, it may be time to take a closer look at macports as a replacement. Glad to see the project not only alive, but evolving with the community.
[0] Like automatically running `brew update` with every brew command run. I was literally wondering if my computer had frozen up again when I saw that it had done a full update (which involves not only patching the brew code, but updating its source repos) before installing a pre-compiled version of tmux.
Wow, that's almost as bad as a few months ago when they enabled the analytics feature that sends data to Google Analytics for every brew install, list, search, error, etc[0]. Thanks for pointing this out, I've disabled the auto update (HOMEBREW_NO_AUTO_UPDATE=1). Web developers gone crazy with the package manager in your shell!
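If anyone else wants both behaviors off, this is roughly what I ended up with (a sketch; put the export in your shell profile so it sticks):
    export HOMEBREW_NO_AUTO_UPDATE=1   # skip the implicit `brew update`
    brew analytics off                 # opt out of the Google Analytics reporting
    brew analytics                     # prints whether analytics are currently on or off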
Well, that's a relief. I run an average of 4-5 brew commands every minute, so at least there's that. /s
> when you run `brew install` or `brew upgrade`
So, only the two most used `brew` commands (for me, at least) will trigger a series of git fetches and merges over the network and an in-place update of the code.
> You can [...] disable this functionality with `HOMEBREW_NO_AUTO_UPDATE`
Why was it enabled automatically in the first place? Why wasn't I notified when homebrew updated that this was the new default behavior? Why must I always dive into homebrew on github to identify (1) why it's not behaving the way I would expect a package manager to and (2) how to get back sane package manager behavior?
If there have been no updates, no `git fetch` will be run. We use a newish GitHub API to check for updates, as it's quicker.
> Why was it enabled automatically in the first place?
A significant number of our created issues are due to people hitting failures because they didn't run `brew update`. This solves that.
> Why must I always dive into homebrew on github to identify (1) why it's not behaving the way I would expect a package manager to and (2) how to get back sane package manager behavior?
We're an open-source project run by volunteers for free in our spare time. We're underfunded and understaffed so our communication is lacking. The best current place to find out about such things is to follow our issue tracker. We aim to improve this in future.
Can you disable all of its cutesy names? Bottles for binaries, cellar for package path, taps for 3rd party repos, recipe (or formula? I forget) for build scripts, cask for a REPL... I honestly got a bit confused at first by what all of these things meant.
It's hipster programming kitsch. It's unnecessary and indicative of the culture of its developers and early adopters.
There wasn't a problem with MacPorts / Fink; they are built on top of proven, mature tech (FreeBSD ports and Debian's apt). But why not reinvent the wheel? All the cool node kids are doing it.
I switched because I had to nuke my entire Macports packages directory and start from scratch to fix one error or another several times a year. Got fed up with it and tried Homebrew ~3 years ago. Haven't had to do anything like that even once, so I've stuck with it.
Homebrew has all the markers of a project I should hate, but it's so rarely inconvenienced me in practice that I can't help but like it.
Perform an update or install a package, things break. Usually Macports itself would partially or entirely stop working. Not the same way every time. After the first couple times I learned that attempts to fix it usually didn't entirely solve the problem and/or took too long, so I just started deleting the whole thing and starting over when things went wrong. At least re-installing packages doesn't require my full attention.
2012 may have been around when I stopped using it, can't recall for sure. Maybe it improved after I dropped it.
Those clever words immediately made sense to me for their inherent meanings the first time I saw them. So they are actually quite useful in their pairings.
I love puns just like everyone else, but when you have a word that precisely describes the thing you have, why not use it instead of using a clever metaphor?
Well, isn't that nice for the one person who intuits their meaning. The rest of us would like not to have to translate already designated words into new words for no other reason than to be playful.
Homebrew was the first package manager I ever used, and the cutesy names actually helped me understand what each part represented. I feel like I have a better understanding of package management thanks to the naming scheme.
It's not that they want to send it to Google, it's that they want to send it to homebrew, and google is an intermediary. Sending it to homebrew allows them to know what features are being used, so they know what features they could remove or improve.
If homebrew is transmitting the packages you install across the internet, through Google's servers, and through homebrew's system, it is very possible that information could be swept up in a dragnet or stored on a server that could later be subpoenaed or searched with a warrant.
The analytics issue aside, how can a package manager not transmit what packages you install across the internet? At some point it has to request the package(s) you're installing from somewhere on the internet.
The main self-interested reason: we use analytics to judge what packages and options to remove. If no one with analytics enabled uses a piece of software, then the next time it requires non-trivial maintenance work it will likely be removed rather than fixed.
It seems fewer people are involved or interested in solving the Darwin-related issues. I have more trouble getting things built and installed on OS X compared to a year ago. This is mainly because it seems people aren't really interested in maintaining the OS X part of nixpkgs, and some simply lack the resources to do so.
In the end it seems that this will get better if it can attract enough OS X users so that patches start rolling in but right now it's just a few people fighting that battle.
I don't like the lack of software diversity, but I see running Linux based containers on OSX as a viable solution to not having Nix support. Run nix in a container, create shimlinks so that everything is "transparent"
I still use MacPorts because it's actually a proper package management system. Homebrew is not, and its limitations started to show a long time ago. The obnoxious assertion that Homebrew should take complete control of /usr/local AND be able to install in there as an unprivileged user is utterly ridiculous. It's just some nonsense that the author cooked up in his own head, contradicting a long history of convention regarding the use of /usr/local.
The maintainers of packages are supposed to port their software to the relevant platform; that's their job, and yes, sometimes this takes some work. Homebrew, by contrast, seems to assume that ./configure; make; make install should flawlessly work out of the box for pretty much anything (this is not true). The fact that they actively discourage even trying to install outside of /usr/local is simply laziness (and not in a good way). This is a foolish and naive approach, and pursuing this idealistic, reductive approach for "simplicity" breaks down very, very quickly. MacPorts is a well behaved, self-contained system, and a well designed one at that. Homebrew just dumps everything into a system directory that it really shouldn't have exclusive rights to in the first place (/usr/local).
Moreover, there is nothing wrong with installing extra compilers, libraries, etc, under /opt/local and having them run alongside the system libraries. Lots of systems have provisions for doing something like this.
Also, using version control systems as distribution and deployment systems is a horrible anti-pattern and I really, really wish people would stop doing this.
Homebrew's documentation is terrible. Its terminology is facile and stupid in the way it analogizes everything with beer.
> "it's actually a proper package management solution", "homebrew is not", "its limitations started to show a long time ago", "is utterly ridiculous", "its just some nonsense", "are supposed to port their software", "that's their job", "(that is not true)", "is simply laziness", "foolish and naive", "breaks down very very quickly", "is a horrible anti-pattern", "facile and stupid".
I downvoted you. You assert a massive bunch of things with no evidence. A reasonable person could disagree with every single one of them (I do). While I could imagine your reasons for believing some (most?) of these, I would simply be arguing against the strawman I constructed. By providing actual evidence ("why do you believe this to be the case"), you would allow us to have a constructive discussion on your points. Instead, you give us bile.
Today, Homebrew does have some major shortcomings. The biggest one that comes to mind for me is the fact that it only seems to consider the dependencies of a package during its initial installation. That is, if A depends on B, installing A will trigger an install of B. However, after that, Homebrew will happily let you uninstall B (thus breaking A) without so much as a warning. Less critically, it doesn't track the fact that it only installed B for the purpose of making A work, so it can't help you clean up the left-behind-dependency cruft from the stuff you've since removed.
I don't disagree with your assertion about the state of Homebrew, but I wanted to point out the `leaves` subcommand.
brew leaves:
Show installed formulae that are not dependencies of another installed formula.
I'm not sure if it persists this state or simply calculates it on-the-fly by enumerating your installed formulae. But it does seem like the same data or functionality could be used to prevent your uninstalled-deps scenario.
Hmm... Interesting. I don't think that "brew leaves" truly solves the problem, though, since the list it generates will include explicitly-installed leaves in addition to no-longer-needed dependencies. It can help narrow the list of potential candidates for removal, though, which is nice.
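Agreed. The closest you can get today is to cross-check candidates by hand before uninstalling, something like this (a sketch; readline and wget are just example formulae):
    brew leaves                       # installed formulae nothing else depends on
    brew uses --installed readline    # what would break if I uninstalled readline?
    brew deps wget                    # what wget dragged in when it was installed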
The wheat is mixed with the chaff. Ignoring the jabs at the author and their reasoning, the odd way it uses and interacts with /usr/local is worth thinking about. Sometimes challenging conventions is good, but sometimes those conventions are there for very good reasons that aren't immediately obvious.
Yes, the single-userness of Homebrew became obvious when I inherited a MacBook from someone, which also contained some licensed software that couldn't easily be flattened and reinstalled. Homebrew sprays single-user shit all over the place, and judging by google searches, there was not a clean way to remove it. I ended up doing a lot of manual deleting. You couldn't even rm -rf /usr/local/, because Homebrew did not own that directory; my licence-locked software dropped stuff there too, for example. Really, how hard is it to type sudo?
And, it just seems completely idiotic to compile standard packages for the second most popular developer ABI on the planet. How many rainforests have Homebrew users burned down to get the same binaries as the guy sitting next to them?
I've been using pkgsrc lately. Binary packages that seem to work out of the box and come with launchd configs. It goes into /opt/pkg/, and /opt was always better than /usr/local because of the unix philosophy that it's shorter to type.
Anyway, if you like Homebrew's approach to the filesystem, there was an OS you would really like; it was called Microsoft DOS. (It didn't work out in the long run.)
> And, it just seems completely idiotic to compile standard packages for the second most popular developer ABI on the planet. How many rainforests have Homebrew users burned down to get the same binaries as the guy sitting next to them?
I find 90% of the brew things I install are binaries (they call them "bottles"), and I don't even use brew in /usr/local (where even more binaries are available). Have you used brew recently or is your experience from some time ago?
I'd agree. I only started using Slackware in 2001, and installing to /usr/local just seems like the best way to handle application software that isn't required by the system. Since Fink got relatively abandoned, I've been using macports and have not gone back to using Fink. Macports is very useful to me for just about all 3rd-party software that is unix-y and exclusively an app. The only thing I use homebrew for is cocoapods. I had a bad experience with homebrew when it first came out, and I've been leery of it since.
> be able to install in there as an unprivileged user is utterly ridiculous.
this. I'm using homebrew and installing every package as my own user is a security risk. One security mistake in a package can interfere with my main user account's files / other packages / etc.
Other package managers (apt, yum, macports, etc.) install each package as separate users (e.g. Mysql as user 'mysql') to prevent bad packages from 'leaking'.
Let's say the code of something you are running locally has a SQL injection vulnerability that allows someone to run arbitrary code on your machine (let's imagine you are running a local copy of your client's WordPress blog with dozens of plugins you don't know where they came from). When the service runs as your user, the intruding code runs as you, with your privileges. When the service runs as a restricted user, that's all the access the intruder gets.
Next time you go to a meetup, try to do an nmap on your surroundings.
Anyone taking control of your account can take control of (for example) the database you installed, without going via the database's authorization system. Not a huge issue on a laptop, I'd say. But in general, your normal user should not be able to modify the binaries you're running without gaining some higher auth level (for example via sudo).
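You can see the difference on a running box with something like this (a sketch; mysqld/postgres are just example daemons, and which user they run as depends on how they were installed):
    # show the owning user of each matching daemon
    ps -axo user,comm | grep -E 'mysqld|postgres'
    # a MacPorts-style install typically runs these as a dedicated _mysql/_postgres user,
    # while a Homebrew-style install typically runs them as your own login user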
Without going into the rest of your points, which seem bizarrely furious at a piece of software, nothing about Homebrew forces you to live inside `/usr/local`.
It is recommended, but if you want to run in `/opt/brew` or `~/brew` you'll get a warning some things may not work but that's it. There's no hard block, and the vast majority of core formulae can pour their bottles anywhere.
> Also, using version control systems as distribution and deployment systems is a horrible anti-pattern and I really, really wish people would stop doing this.
Can you go into this? I have to write an update framework at my job and I want to know as much as possible about it before I start.
> Also, using version control systems as distribution and deployment systems is a horrible anti-pattern and I really, really wish people would stop doing this.
But that specific incident had nothing to do with using a VCS for packages? He removed left-pad from npm's index; the code behind it was still online. Or did you mean it more generally, in a "single point of failure/removal" sense?
When you have a system where an individual can remove one item in a vast dependency tree, and no one has central oversight, you create risks like this. This also says a lot about relying on external dependencies like there is no tomorrow, but the point is still valid. I can imagine a thousand ways to break a system like this.
Apart from that, version control systems are optimized for controlling versions, not delivering static content.
I'd rather you answered the question instead of dodging it. However, here are a few off the top of my head; I'm sure there are many more reasons for it, and reasons against, but that is why I asked in the first place.
- There are several popular, well maintained, hosted VCS providers, such as GitHub, who have good bandwidth, and host open-source projects for free. Nice as most open-source projects are unfunded.
- A VCS gives you traceability in what's changing in your package repository 'for free' - nice when you open it up to contributors.
- A git-based package repository can be kept locally in relatively little space, allowing for the case where the package repo host goes down - I've had issues with being unable to install from PyPI in the past because it's down, I have not had that issue with git-based repositories, for example (not a perfect comparison).
- A VCS based repository can aid in creating a system where a given state of the repo is consistent, and works together, and can be upgraded atomically. Stackage is doing something like this, providing a known working set of packages at any given release (although I'm not sure if they are backing that concept with a VCS).
I still use MacPorts and I am quite happy with it -- although I've tried to switch away a couple of times, I can't bring myself to uninstall a piece of software that's worked so reliably and faithfully for me over the years.
Maybe it's just that Homebrew turned me off right away. Back in the bad old days, when the Python 2/3 split was more serious than it is now, I had a heck of a time trying to get pandas and scipy running with Homebrew. I haven't given it a try lately so perhaps things are better now.
(I am probably also the kind of person where all the hectoring the Homebrew docs would do about installing to /usr/local eventually got on my nerves ... )
That being said, I don't use MacPorts versions of things like TeX or Ruby (I prefer MacTeX and plain old rvm).
I use macports, but I am not happy about having to install full-blown Xcode in order to install macports in order to install (insert trivial, minor package here).
Has this changed ?
I am basing this on my experience of both Snow Leopard and Mavericks ... and in those cases, I had to download and install XCode. Further, this is (I suppose) more a criticism of OSX and how it does not include basic tools ...
xcode-select --install on the CLI has been available since Mavericks (I think) and will install the CLI build tools without need to install the multi-gigabyte Xcode. I don't personally use macports, but that command should pretty much solve your issue.
I don't consider installing Xcode to be an unreasonable thing to do - consider that 70+% of the people who buy a mac have no need for the compilers or system libraries. So it doesn't seem unreasonable to me - I mean, my debian box doesn't come with any of the devtools installed, and it's a couple gigs of software to install.
I use macports, but I had never switched away in the first place. My short list of reasons includes macports keeping everything tucked neatly away under /opt by default, which means I can blow it away without having to worry about breaking anything else; it manages its own dependencies, so things don't break when system libraries change; and it does binary installs by default and checks shared libraries for breakage when things get upgraded. Generally I find it Just Works, so why change?
As a bit of history, when homebrew first landed, it existed because the author wanted to pass optimization flags to his builds [0], and found the macports build system too complex for this purpose. Homebrew's real strength is the relative transparency of the build system (it is more obvious what will happen when a build launches), which is really a reflection of this initial complaint about macports. At the time, I didn't care about optimization flags and didn't otherwise get a good vibe from the rest of the project, so stuck with macports for the reasons I listed above (less the binary installs, which didn't exist yet). Since then, I've been a bit surprised at how much Homebrew has taken off, but macports has continued to improve as well and I am still very happy with it.
MacPorts is useful on multi-user systems. It installs as root, so any user of the system can access installed packages, but only admins can modify them.
I'm sure Homebrew could probably be set up this way, but the defaults are a powerful thing.
That's a mighty good reason to use Macports! I'll have to keep that in mind.
You said you could probably configure Homebrew in a similar way, but... can you nowadays configure macports to install without root for user-local packages?*
* Pardon me for the potential ignorance of this question being unreasonable or lazy.
I still use macports, although it's mainly out of habit and I'll probably switch to brew on my next mac (I've also been trying out nix, and while I really like the idea, package availability and freshness hasn't been as good as macports or brew. Nix hasn't picked up go 1.7, for example).
My main concrete reason to prefer macports is that it has separate packages for every version of python so I can have them all installed at once for testing. I also like a lot of macports' technical decisions, like the fact that my regular user doesn't have write access to the installation directory. And macports has binary distributions now so the awful rebuild-the-world updates aren't as much of a problem anymore.
The fact that there is a super-specialized tool to do this demonstrates that it's not a pointless feature. It's much nicer to be able to use macports to install all the versions of python that I need instead of using brew to install pyenv and then pyenv to install python (and then pip to install python packages...)
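Concretely, it looks something like this (a sketch; the exact port names depend on which Python versions are currently packaged):
    sudo port install python27 python35      # both versions live side by side
    sudo port select --set python python35   # pick which one the `python` command points to
    port select --list python                # list the installed alternatives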
I use MacPorts on my personal Mac because the whole system "feels" higher quality. When I install, say, FontForge, I get a usable font editor (that requires X) on my Mac. When I do so with Homebrew, I get a crippled command-line utility. It's predictable and boring, which is how I like my infrastructure.
GNU Octave packages are still well maintained on Macports (and Fink!) and we have several enthusiastic users.
I wish we could get Octave to our users without any CLI shenanigans, but it's proven very difficult to build an app bundle. We finally have one, but it's not relocatable.
I never used it because I didn't want it to spray binaries all over a system directory. Macports is completely self-contained which I consider to be a much better design decision and less likely to cause me problems.
To turn your question around, who uses Homebrew instead of Macports?
I tried Homebrew and had lots of trouble with it. I like that MacPorts is isolated so that I don't need to worry about bad interactions with the Apple compilers and libraries.
Long answer:
When I got myself a Mac (October 2013), I looked for a way to install software, and I was given the advice "use MacPorts or Homebrew, either is fine". So I briefly looked at both and for some reason stuck with MacPorts.
I have not had any trouble with it, so I never had a reason to reconsider.
I think the main reason I preferred MacPorts back then was that it creates its own folder hierarchy for installed software (/opt/local), while Homebrew defaulted to /usr/local. I sometimes will install stuff straight from source, which typically ends up in /usr/local, and not having anything else live there made it easier for me to keep track.
I switched because most of my team uses brew, but I was happy with macports; it has a sane UI and I liked the way build options could be specified (e.g. "+python +png").
The only problem I had was that some stuff requiring compilation _outside_ of macports has builds that won't work naturally (e.g. the phashion gem).
This is usually solvable and rare, but annoying.
OTOH I've seen similar issues with brew too (i.e. need to change openssl from system version to compiled version and recompile stuff)
I've used MacPorts since it was DarwinPorts. This isn't out of habit; I've tried Fink, Homebrew, Nix, and Pkgsrc. None of them can hold a candle to MacPorts.
I have a super easy way to evaluate package managers so I don't waste my time: can it install usable, up-to-date versions of GNU Emacs (the NS variant), TeX Live, and Xorg Server without hassle?
Fink is dead, so we can skip it. Emacs was never up to date on Fink anyway, so I quit using it early on.
Homebrew does not package TeX Live, and it prints a pathetic error when you try 'brew install texlive'; this is why I refer to it as a toy package manager.
Nix failed all of the packages and was crazy slow. Xorg installed, but crashed on launch.
Pkgsrc is the only one that comes close. I couldn't figure out how to install the NS variant of GNU Emacs, but everything else installed (including CLI emacs) and worked. The caveat is that all the packages had really strange names. MacPorts wins here because the packages are named as you would expect: emacs-app, texlive, xorg-server.
MacPorts could make it easier to package software. Hopefully they're working on this.
I would jump ship to Pkgsrc if it had better organization and included macOS variants of packages.
For me it's just habit since I've been using MacPorts for ages. I know of Homebrew, but since MacPorts has always worked perfectly for me I've never really been motivated to look into it.
The problems addressed by Homebrew, Macports and the like have all been solved in a cross-platform manner by the fine folks working on pkgsrc.
Sane defaults, respect for system security settings, and a general lack of bullshit. As a *BSD refugee on Mac OS X, it feels like a slice of home every time I need to install software.
Because of the cross-platform focus of the NetBSD developers, I can replicate my package installation workflow on any number (20+ at last count) of operating systems with the same command set. This saves me from the mental overhead of having to pick up Mac-isms, which is nice.
I've been using macports for years and am excited by the prospects a "social" coding experience will bring with a much lower barrier to entry for contributions.
They say they will stick with Trac for dealing with issues. I understand that for managing the ports collection, but are they planning on keeping "base" issues (i.e. the application code) off github as well?
I don't know how well different repos and issue trackers interop, but it seems like an open-source solution like gitlab would offer the (potential for) customization/integration more readily than github.
I'm sure they just desire a turn-key solution and don't want to manage a whole new project, but maybe, one day, they'll want to unify the experience again.
In retrospect, "social" was the wrong word; what I really meant was "a place people already have an account and may contribute" i.e. a large, established community.
I'll leave the question of "is it a social coding experience?" to the experts.
For anyone using homebrew or another package manager who has an afternoon free, I'd suggest checking Nix out. It's very nice to define an immutable manifest and have your system derived from it. If you love pure functions, you'll love Nix.
I found nix is, unfortunately, not yet a nice Mac citizen.
Quite a lot of packages were broken (I fixed 3), but then I found that the default nix script (which you are supposed to run every login) broke mercurial by installing its own certificate store, and I had to stop using it.
Yeah unfortunately Nix sets an env var that breaks the system-provided curl (since the system version doesn't use OpenSSL). I filed a ticket about this a while ago and it's still open.
I've been trying to like pkgsrc on Mac OS X for a few years now. The one you link to provides binaries, and is sponsored by Joyent. The main deal breaker for me there is that they only update their packages every quarter (currently 2016Q2), so you have to wait if you want bug fixes, security updates, etc.
I just had a look again and found that ffmpeg is one minor version behind stable, its binary is annoyingly called "ffmpeg3", and it was compiled somewhere with LLVM 6 / clang 600, versus LLVM 8 / clang 800 on my macOS 10.12 system. I tried to compile it manually (after a shallow clone of their massive, 250,000+ commit git repo), but failed.
For me Homebrew does the right thing here — they basically turn my Mac into a rolling release Linux desktop like Arch Linux, which is great for the userland type packages I use: openssh, zsh, tmux, postgresql, ffmpeg, git, vim, rsync, etc.
The big thing it has is the same packages and experience on any OS, you can install on Linux and the BSDs and it is just the same. Other than that it has pretty much everything you would expect, and lots of packages.
There are a number of these, but most of the big ones are supported (e.g. emacs, vim, python and most common pypi packages, ghc and most haskell packages, ssh, git, ssl, blah blah blah). The main challenge with supporting OSX is that the closed-source nature and/or proprietary license of a lot of the low-level system libraries, especially for graphical applications, doesn't play very well with nix, which does best with HTTP URLs from which (preferably) source code and/or (failing that) prebuilt binaries can be fetched, the contents of which are deterministic.
That said, there are tens of thousands of packages that build on OSX, certainly many more than enough to be a very useful platform for package management, especially when coupled with all of the advantages that nix brings. The downsides are a higher learning curve and occasional missing and/or broken packages.
There have been a few times I couldn't find more obscure packages on there. I still keep brew installed on my machine, where those packages are available. I wish there were more friendly (ruby-esque and fun, like homebrew) tutorials on how to create packages.
I hope to use Nix for go and c programs the same way you use Boot/Lein, Maven, Bundle and Npm for their respective tasks - make a manifest local to a project directory. No more figuring out whatever that library is called in my package manager...
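Even without writing a full expression, something as small as this already gets most of the way there (a sketch; the attribute names are whatever nixpkgs happens to call those packages):
    # drop into a throwaway shell that has go and gcc on PATH, build, then exit
    nix-shell -p go gcc --run 'go build ./...'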
I never get the logic of how wholeheartedly open source projects embrace completely closed source software. I get that not everything can be open source, but there are already good open source alternatives for hosting.
I hope this helps with the adoption of newer versions of packages.
I have used MacPorts in the past (and Fink before that) but I am super happy with Homebrew these days. What I disliked about MacPorts was:
1. Ports define which compiler to use. Sometimes I had to install a small tool and first it would install another version of gcc.
2. Ports were outdated a lot. I developed a project using the Grails framework and the port was a couple of versions behind the current stable release. Just checked: the Grails port is at version 2.2 https://trac.macports.org/browser/trunk/dports/devel/grails/... while the current version is 3.1.10
With Homebrew the adoption rate seems much higher. Also you don't depend on a single maintainer.
No, but originally there were a lot of Apple employees who participated in MacPorts in their spare time, and the project leaders included people from the open source world who were working at Apple. I think that's still the case but probably not to the same degree as it was 10-15 years ago.
The dawn of OS X coincided with the tail end of what was a rather insane open source hype bubble that existed in the industry in the late-'90s. This was the same era where lots of people were convinced that everyone was going to be running desktop Linux in just a couple more years (this is the origin of "this is the year of Linux on the desktop" jokes), and ESR hadn't squandered his credibility quite yet. It was a very different time.
Anyway, point being, MacOSForge is in part a historical artifact of a time when there was an open source bandwagon that a lot of SV companies had jumped on and were encouraging their employees to jump on as well. After a few years the industry restabilized and many companies weren't as publicly gung ho about open source, but for a while at least there were some surprising initiatives from unexpected companies.
I don't miss the messianic/apocalyptic/mystical/tribal/just-plain-silly overtones that surrounded open source hype in the media and in the community in those days, but I do deeply miss the surge of otherwise opaque corporations publicly committing to open source and actually following through on it for a while. It's amazing what can happen when the entire tech press is drunk on excitement about open source, leading to board meetings all over SV in which every CEO was getting pestered to have an articulated open source strategy for how they will remain relevant in this brave new world that was supposedly just around the corner.
So that's the era that MacOSForge was born into, and at the time it probably seemed bursting with potential. This was also the era of bootable Darwin ISO images (and OpenDarwin after that). There are many reasons a lot of that stuff faded away over the years. My theory is that many of these initiatives solved problems that people didn't have. Why put money into putting stuff out there that nobody cares about, and why would anybody care about it if the company wasn't pouring money into it? It's a chicken and egg problem unfortunately, I don't think there's anyone to blame for it. Some things just aren't going to be interesting to enough people to merit throwing money at them. I'm just glad that much of macOS continues to be open source, even if most of it is in the form of a bunch of tarballs that get put up on opensource.apple.com after every release.
TL;DR as others have pointed out, MacOSForge itself is (was?) Apple. MacPorts is an independent project and will live on, but benefitted from being born in an era of unusually high industry enthusiasm for open source.
This seems to have coincided with a practical cause to look for a new place:
> But now that we have to leave Mac OS Forge anyway, it makes sense to convert to git and take the opportunity to do some much needed and overdue restructuring[...]
This is long overdue. Google has slowly been doing the same, moving from Google Code (which was supposed to be dead? It is, though?) to GitHub, but a little progression is better than no progression.
Google Code the public service is dead. They still have internal, but publicly visible, hosting for things like Android and Chromium; which they seem to be rebranding as Google Source, or at least using the googlesource.com domain.
It's probably been stated a million times before but it's hard to remember what coding was like before github. CVS and subversion - what's that? Github is so pervasive that when a project uses another git code sharing site you just get annoyed. Github Issues may suck, but it's so darn convenient and so well integrated. It's far worse when a project uses another ticket tracking system and just uses github for pull requests. Just give in to our new code overlords and go with the flow.
> Github is so pervasive that when a project uses another git code sharing site you just get annoyed
Bitbucket's ability to lock down branches is pretty nice.
> Github Issues may suck, but it's so darn convenient and so well integrated. It's far worse when a project uses another ticket tracking system
I don't feel this is true, having worked on projects that used Jira, Unfuddle, Pivotal Tracker, Assembla, Trello, etc. Of course, none of those systems have the killer feature of turning a ticket into a giant scrollfest of "+1", emojis, and animated gifs.
... and work constrained to that ship in a bottle. No thanks! I do a lot of interaction between Bash and the rest of my OS X userland. That is something I found completely impossible with the Linux subsystem on Windows.