Clang and FSF's strategy (gcc.gnu.org)
149 points by wisesage5001 on Jan 22, 2014 | 140 comments



I found the counter-argument made in the first reply to be more compelling. http://gcc.gnu.org/ml/gcc/2014-01/msg00178.html

Whether you agree with the FSF or not, their whole existence stems from their beliefs about software freedom. If they compromise their principles in exchange for 'market share', then they quite literally have no purpose.


Agreed. It's very rare these days to find people who are willing to stick to their principles; we should respect those who do, rather than criticize them for not compromising on their principles[0].

From your linked post:

> You are crossposting to two public project lists of the GNU project with inflammatory language and mischaracterizations. You have been involved with the GNU project long enough to be well aware that this kind of crowbar approach does not lead to much more than headlines about Free Software infighting.

ESR is doing little more than trolling here. He knows exactly where the FSF stands, and he knows exactly why that's not going to change (asking the FSF to do something that they believe hinders free software is like asking MADD to open a drive-through liquor store).

The FSF has always been very clear that they see the "open source" movement as complementary to (though not the same as) the free software movement, fighting for similar goals but for different reasons[1]. It's sad to see ESR, an "open source" advocate, actively try to fan the flames.

Nobody wins from this. Except advocates of closed, proprietary software.

[0] Of course, ESR knows exactly what he's doing here - the issue is that he disagrees with their principles, but instead of debating those, he'd rather attack them for executing on their principles rather than executing on his principles. It's a cheap rhetorical trick and a rather low move.

[1] "We don't think of the Open Source movement as an enemy. The enemy is proprietary software.", from https://www.gnu.org/philosophy/free-software-for-freedom.en..... (Don't be fooled by the title - the content of the article is very even-keeled).


> It's very rare these days to find people who are willing to stick to their principles; we should respect those who do, rather than criticize them for not compromising on their principles

Why should I respect someone for sticking with principles that are misguided? The whole idea that "sticking to principles" is a virtue independent of the merits of the principles involved is perverse.

It's even more perverse when the "principles" being stuck to are tactical judgments about how best to achieve strategic aims, and they are being stuck to even when they are operating against those aims -- which is, precisely, the charge ESR is levelling against the anti-plugin policy vis-a-vis the stated goals of the FSF with regard to GCC.

> asking the FSF to do something that they believe hinders free software is like asking MADD to open a drive-through liquor store

ESR's argument is that FSF is wrong that this hinders free software, and in fact that FSF's status quo approach inhibits the FSF's stated goals for GCC.


"the 'principles' being stuck to are tactical judgments about how best to achieve strategic aims, and they are being stuck to even when they are operating against those aims -- which is, precisely, the charge ESR is levelling against the anti-plugin policy vis-a-vis the stated goals of the FSF with regard to GCC."

I believe ESR is wrong.

In years past, there was BSD unix, a modified version of the unix shipped from Bell Labs. The BSD changes were theoretically "free", in that if you had a license from AT&T (which were easy to get, since at the time AT&T couldn't sell software), you could do anything you wanted with them.

What people did was to fork BSD, take their modifications proprietary, and create Solaris (well, SunOS), HP-UX, AIX, Irix, and a fair-sized stack of others that did even worse in the marketplace. The end result of that was fragmentation in the Unix ecosystem, which was bad on many levels. (One example: Don't like autoconf/automake/libtool? Guess where the necessity of those came from?)

Or, how about Jordi Gutiérrez Hermoso's response to ESR:

"The FSF sure can prevent it, and proprietary compilers still thrive. Here is one that particularly bugs me as an Octave developer: we routinely see people being lured to use Nvidia's non-free nvcc for GPU computing, which they gleefully admit is based on clang and LLVM. And there is Xcode, of course, completely non-free and completely based on clang and LLVM.

"The fact that these non-free tools are not based on gcc is a testament to how proprietary software developers cannot plug into gcc, and how clang is fostering non-free software.

"The nvidia situation is particularly dire because today, free GPU computing is almost nonexistent. It's almost all based on CUDA and nvidia's massive pro-CUDA marketing campaign. Even most OpenCL implementations are non-free, and the scant few free implementations of OpenCL that exist are not fully functional."

So we have several examples of ESR's approaches failing. On the other hand, the GPL does a pretty successful job of preventing the kind of fragmentation that damages ESR's "hacker community". And part of the reason it does is the FSF's dogmatic stance.


I think there is a conflict of goals here, but it's one that often goes unstated; ESR is about fostering free software, even if that incidentally also fosters non-free software.

Many on the FSF are about preventing code from being used in non-Free software, even if that incidentally is less than optimal for fostering Free software.

The thing is, many of those who act based on the latter priority present themselves as if their concern was for promoting Free software.


Can we be sure these principles are misguided if no one sticks to them?


Whether principles are misguided or not is a subjective, not an objective, question. It's not something you can "be sure of".

OTOH, if no one sticks to them, that would be evidence against them being widely viewed as important principles.


For one, ESR isn't asking that the FSF change (directly). He's asking that the FSF change with respect to GCC and the audience it serves. There are many developers, especially young developers, who feel the GPL & FSF are "over-principled". I think we can all agree that "Open Source" has largely won.

But if you ask about "Free Software," and take github as data point, I'd say "Free Software" is losing, and is losing because, like proprietary software, it's "over-principled". Young people everywhere, "feel" like content, many forms of "public" data, and the tools to use, create, play, view, and store such content & data ought to be "free as in beer" (or close to it) based on the principle that the effort to copy & transfer data, content, binary, and source is "almost free". Whether there is a restriction in creating plug-ins, linking or modifying code (as in the GPL/Free Software), or copying binaries and/or content as in proprietary software, these are still restrictions.

This is the reason why I prefer the more permissive licenses, like the BSD and MIT licenses, for my works. Essentially, my work is a gift, in the purest sense, to the entire universe. To place restrictions on my gift is to have given the world a poison and not a gift.

The reason why is easier to understand when you consider the quote by Jim Warren from a 1976 ACM Programming Language newsletter [1], referencing Bill Gates' famous letter to the Homebrew Computing Club, "There is a viable alternative to the problems raised by Bill Gates in his irate letter to computer hobbyists concerning 'ripping off' software. When software is free, or so inexpensive that it's easier to pay for it than to duplicate it, then it won't be 'stolen'."

Said another way, people will continue to do the "wrong" thing so long as it takes less effort than to do the "right" thing. In my mind, we should be incentivizing the "right" things, like openness, sharing, technical merit, and capability.

[1] http://en.wikipedia.org/wiki/Tiny_BASIC#An_early_free_softwa...


Looking at the data from repositories like Google Code, Debian, and similar places where entries have to meet some minimum standard, GPL licenses are a strong majority, and their use is increasing.

So young people are either not serious enough to warrant inclusion in Debian's list of 40,000 programs (doubtful), or your assumptions are incorrect.

> To place restrictions on my gift is to have given the world a poison and not a gift.

Next time you gift a beer to a friend, I hope you will allow them to hit you with it. Adding a restriction on hitting you with the beer is the same as putting poison in the beer, which would kill your friend.


I'm a Google Code user myself. I prefer it (and inDefero) over the likes of bitbucket, github, savannah, berlios, etc. But that doesn't make github any less popular. I don't understand the analogy you're trying to make by bringing the debian repositories into the discussion. It seems to me like you're saying friends made on Friendster are more legitimate than friends made on Facebook because it was an initial innovator in social networking. Here's a 2013 article that states github has 50% more projects than the next repository (sourceforge)[1]. Here's a 2011 article that discusses github reaching 1 million accounts [2].

My argument is that many young developers don't care where they get their code, what license it has, etc. They just want to build, create, collaborate [3]. A secondary argument is that many people feel the GPL is a barrier to collaboration.

When it comes to gifts and beer, I would hope my friend wouldn't hit me with it nor would I put poison in his beer. They're (implicitly) allowed to hit me with it, but I don't expect such a thing to occur. Nor should they expect me to put poison in it. That's all a matter of trust. And the GPL, proprietary licenses, DRM are instruments of distrust.

In the end, you always have choice. I choose to live by the philosophies that "Givers Gain" and people do want to be good people. I understand not everyone has as altruistic intentions, heck, I work for those people. But I also understand that if I want to see the world change, I need to start by changing myself.

[1] http://software.ac.uk/resources/guides/choosing-repository-y...

[2] http://www.theinquirer.net/inquirer/news/2076108/github-domi...

[3] http://developers.slashdot.org/story/13/07/16/0220240/github...


Debian repositories are not gateless: you can't just create a repository with a text file in it and call it a project. You have to be sponsored, which means you have to have some working code that is useful to someone.

Or, to take a statistical look, 2 out of 3 forks on github are empty[1]. The quality per "project" on github is orders of magnitude less than on debian.

You can test this out by randomly picking github repositories and reading the code. It takes several tries on the randomizer to even get code, and then even more to get code that actually does something.

Using statistics from more mature projects will provide different results than code just thrown at the wall.

As for the beer, it is a bit of a fringe view to allow others to assault oneself with beer bottles. Most people will expect physical assault to have legal repercussions. The GPL, in the same way, trusts that most people will not go out to hurt others, but in the case that they do, repercussions will happen.

I now really hope you never end up in a court, complaining about assault, and having your comment above used as evidence against you. You basically gave everyone a license to hit you with beer bottles.

[1] http://blog.ram.rachum.com/post/4472104984/2-out-of-3-github...


You're just being silly and overly literal. Having gates does not legitimize a project. Being "mature" does not legitimize a project. Only users can legitimize a project, a license, or a philosophy. In that sense, the GPL is losing legitimacy and relevance in the mindshare of young people, IMO. In the same respect, Debian is meeting a similar fate. Does anybody use Debian anymore? Yes, maybe. The new generation of distributions seems to be built on Ubuntu, not Debian. I know I've never used Debian, and it took Ubuntu to teach me the "Debian way" when I had grown up using Red Hat/Fedora.

Implicitly, everyone has a license to do harm to you. Which is why some people feel governments exist to protect you from others, and others from you. But, laws & rules all have two fatal flaws. "It's only illegal if you get caught" and "Rules aren't made to keep the bad guys out; they're made to keep the good guys in." Meditate on that.


Seems like ESR is still fighting the Open Source vs Free Software battle of the late 90s. It hasn't dawned on him yet that everyone else thinks that there's little purpose in cannibalizing the movement over minor philosophical differences.


By "everybody else" I think you're excluding the people who run the Free Software Foundation, in other words the very people ESR was addressing.


Not really. Even RMS only mentions the distinction in passing. The fight has simply gone out of this one for everyone who isn't ESR, probably because the late 90s was the last time he was relevant.


This is a digression, but it is slightly more complicated. The top US software companies, by market cap, all use FSF tools, but then in turn decide their own separate levels of openness. Entrepreneurs here will use FSF tools, and then decide how far to open their own kimonos. The original FSF intent, that copyleft would provide contagion to a truly open world seems actually stalled. Apologies for the digression.


It largely stalled because of the web. Lots of SV startups for example are using GPL'd code, but because they only provide web apps they don't have to release their code. Worse still those startups often think they are on the 'good' side of the free software debate because they use some open source code in their products and maybe even contribute a bit back.

In my opinion the FSF screwed up majorly by concentrating on local software and the GPL while ignoring the much greater danger of software as a service. The AGPL is too weak and the FSF never really pushed it. The GNU project have tried to create some online services/protocols themselves but they are a total joke (e.g. GNU Free Call).


But don't most people here, at HN, want that right to decide their own openness? I mean, does everyone here plan to do a web startup and then produce a tarball of [their] full work? A full DB API?

I understand that we all, or most of us, have contributed to free or open projects, but I thought the pure-souls, without a little "IP" one way or another, were very rare.


A common business model today is to threaten customers with lawsuit after the exchange of money for software has happened.

As an author, I want to decide whether I can accept having my work used in the above mode. I think such threats can cause quite a bit of harm, and there are many notable cases where ordinary people have had their whole lives hurt by it.

I can see how many developers want to decide for themselves if they are going to use the "threaten customers with lawsuits" model to earn a living. But it is no one's right to decide for me whether they can use my work for it.


IMHO, the fact that the GPL covers only distribution and not usage (i.e. the GPL is NOT an EULA) is the reason why I consider it a free software license.

The AGPL never took off because the AGPL is not really free and should never have happened. Unfortunately, the FSF is blinded by these so-called "dangers" in their fight against proprietary stuff, totally ignoring that in practice the AGPL is used purely for marketing reasons in dual-licensing schemes, there are no communities around AGPL projects, and OSI doesn't really give a shit. As a consequence, the AGPL is now posturing as a free software / open source license.

> Worse still those startups often think they are on the 'good' side of the free software debate because they use some open source code in their products and maybe even contribute a bit back.

You're making it sound as if that's a bad thing.

The web is the most open distribution platform. The alternative to the web is not FSF's GNU, but rather the iTunes App Store, Google Play, Amazon's Appstore and the Windows Store. And compared to 15 years ago, the barrier to entry for kids wanting to experiment with building and distributing software is very low and the web in combination with open-source tools, libraries and platforms have made it possible. I also hate this holier-than-thou attitude.


Correct. Don't forget one of the most striking differences between free software vs open source software is the promotion of a community of people who add to the public knowledge of software engineering, not merely to make better software artifacts. If you had to choose, which would you foster? An individual truth or a community of truth seekers? FSF is clearly about the latter.


I guess I see it a bit different and GCC is a pretty good example.

LLVM and clang can be used in pretty much any project. They have provided every developer with tools to use in any way we wish. GCC's parts cannot be included in other projects because they are licensed in such a way that tells us the GCC code is more important than the code we are using. Why do 1,000 lines of GPL code count for more than 100,000 lines under some other license?

I think the best community is a voluntary one, and with security the way it is, I would rather developers who don't want to be part of the community benefit from the code just as I like the idea of everyone (who can) being vaccinated.

GPL generates a self-selected community that has barriers to participation with other communities because the GPL code is held as more important than the rest.


> GCC's parts cannot be included in other projects because they are licensed in such a way that tells us that the GCC code is more important than the code we are using.

That's simply not true.


So, if Apple had incorporated GCC's parser into Xcode like they have included clang's, what would have been the effect?


They would have to honour GCC's license, like they have to do with clang's license or any other piece of software.


And would honoring that license require them to release the source for the non-GPL licensed parts of Xcode and perhaps require some action on any patents implemented in that non-GPL code?


They have to honour the license. Which provides and guarantees basic freedoms. But it allows the code to be reused in any project honouring the license. So your initial statement is false.


Nope, my basic statement is true. You didn't answer the question and by "code" you don't just mean the GPL code. The GPL tries to affect all code in the project not just the GPLed code.


Your basic statement is false: "GCC's parts cannot be included in other projects". That's simply a lie.

> You didn't answer the question

Because it's completely irrelevant.


They cannot be included because they take over everything else. It is not a lie and the questions I have asked backed that up. GPL imposes things on code that is not GPL.


They can be included in any project that can honour the GPL. Therefore it is a lie.


"They can be included in any project that can honour the GPL."

So, no and not a lie. Why should any license dictate what happens to code that was not written by the authors of that code? Why should I pay more honor to GPL code than any other? So, no, much like a relative that drinks all the beer when visiting, the GPL cannot be included except in limited circumstances.


> So, no

So, yes. You are simply talking bullshit because of some irrational GPL hate. If you cannot honour the license of a piece of software then you can't use it. Doesn't matter if it's free software like GCC or some commercial library you paid a shitload of money for. Stop being dishonest and irrational.


For the last time: I have no problem honoring a license on code written by others, EXCEPT when the license presumes to tell other developers that they must do something with code they wrote separately, not just with patches made to the original code. This is not irrational or bull. This presumption that GPL code should impose itself on other code, to force it all to be GPL, is impolite at best.

I have no hate for the GPL. It is a license for a community that loves it, but saying GPL code can be freely included in other works is a very strange definition of free. It is also poor for teaching, as it places a burden on the future works of students if they use the taught code directly. It is a license that should be evaluated like any other, but it imposes burdens on more than the source written under it.


So you admit you were lying.


Nope, and you cannot read. Dammit, I hate being suckered by a troll.


> Don't forget one of the most striking differences between free software vs open source software is the promotion of a community of people who add to the public knowledge of software engineering, not merely to make better software artifacts.

Even if that is a real ideological division between the two, pragmatically I think that the permissive licenses that the FSF argues are less than ideal have done more after the initial demonstration of the value of F/OSS to promote a community of people who add to the public knowledge of software engineering than have copyleft licenses.

I think the divide over licensing approaches is as much about differing views of effective tactics and the real conditions in the environment as it is about differing views of values and strategic goals. The real current divide is permissive vs. copyleft more than free software vs. open source, and over time the permissive side is gaining ground for reasons that have nothing to do with the main cited ideological differences between the open source and free software camps.


But but but ESR is a social hacker! http://esr.1accesshost.com/


Hear, hear!


As others have noted, GCC has a plugin system (and not a [edit: stable] external interface) -- so extensions/tools (plugins) have to link to GCC and be covered by the GPL. This is similar to how the Linux kernel tries to limit binary drivers by explicitly not having a stable ABI (although, in the case of the Linux kernel, I think it is also a case of "we don't want the burden of maintaining an outdated, inferior ABI for the sake of your proprietary crap -- share or GTFO (and it's easier for everyone if we can just see your code, bugs and all)").

However, isn't this paragraph:

"I also think it bears noticing that nobody outside of Microsoft seems to particularly want to write proprietary compilers any more. GCC won its war; the cost and time-to-market advantages of hooking into open-source toolchains are now so widely understood that new processor designs support them as a matter of course."

proved wrong by Apple's Xcode? Isn't that exactly what Apple is (partially) doing? I know Apple makes great contributions to clang (among other projects) -- but is upstream clang the same as what comes with Xcode?


As one of the commenters points out, Intel continues to develop a proprietary compiler. I think that's a better example than Apple for this case.

In general, the world is more than just C/C++. IBM dominates the enterprise COBOL market. (See http://www.itworldcanada.com/article/most-wanted-the-elusive... .) And there are several commercial Fortran compilers besides Intel, including that of NAG (Numerical Algorithms Group) and PGI (The Portland Group).


Proprietary C compilers are also alive and well in the embedded world.


But a lot of them use GCC, too. Atmel, for example.


I'm not part of the Ada world, so I don't actually know, but I thought the important Ada compiler(s) were also proprietary.


Ada is one of the poster children for early GPL success:

https://en.wikipedia.org/wiki/GNAT#History

They also sell GPL exceptions, which I think is a good thing:

https://news.ycombinator.com/item?id=7027926


As far as I know, the LLVM tools that come with XCode (and it's not just clang; there are others as well, like clang-analyzer and lldb) are just binaries of the open source project. If there are any differences I haven't noticed them, and I've built and used LLVM tools from source builds as well as Apple's binaries.

Since Apple has so much influence over the project they can simply get the changes they want into the project so that diminishes their need for their own changes.

The things they want to hold back almost entirely go into XCode's IDE. For instance XCode's IDE has a very nice visualization for the clang-analyzer. The visualization is just using the output from the open source analyzer. You can even replace the clang-analyzer with a newer version and use it from XCode. By keeping the visualization implementation in XCode itself they can maintain an advantage for their platform.

I'd say that this just proves his point since Apple has chosen not to produce a proprietary compiler and instead to contribute to LLVM.


>"proved wrong by Apple's Xcode? "

As far as I know, Xcode is not a compiler; it's an IDE that uses LLVM/Clang underneath as the primary option, but also gcc if that is your choice. In the same way, Visual Studio uses cl.exe underneath, but it can be hooked up with Clang too.


I don't think it's your choice what you hook underneath Xcode; I think it's Apple's choice. Since Xcode is non-free, you can't modify it to use something else willy-nilly.

All of the fancy static analysis that Xcode does is completely tied to clang, for example. Its debugger front-end is tied to lldb's idiosyncratic interface. It's not something that you can just easily replace with non-Apple compilers.


> All of the fancy static analysis that Xcode does is completely tied to clang, for example.

Xcode doesn't do static analysis; Clang/LLVM does, and it's perfectly possible to do it from the command line, or embed it in other tools. (This is an example of the difference between LLVM's modular library architecture and GCC plugins — LLVM doesn't insist on being ‘on top’.)


So it is easily replaceable? You could easily tell xcode to use gcc if gcc supported the same kind of static analysis? How would you do this?


Yes it's replaceable. I've used it with newer builds from the open source project. They even provide binaries and instructions: http://clang-analyzer.llvm.org/xcode.html

If by replaceable you mean using some other analyzer, I'd say it's not easily replaceable, but if you can produce an analyzer that produces similar output and has the same options, then sure. clang in fact did that sort of thing with their gcc driver.


If GCC supported that (and if Xcode supported GCC's support of it), you'd probably tell Xcode to use it the same way you tell Xcode to build with GCC instead of clang: You'd select the option in your project settings.

Nobody's made the claim that Xcode is some kind of magical dynamic IDE that supports arbitrary hypothetical features of arbitrary hypothetical versions of arbitrary compilers that you can mix and match on the fly.

Nevertheless it is the case that Xcode allows you to select between clang and GCC 4.2 backends.


GCC support in XCode has become less and less GCC and more and more just LLVM cleverly disguised as GCC. With XCode 5 there is nothing left from the actual GCC project; even gdb has been entirely replaced with lldb. So now if you choose GCC in XCode you're not really using GCC at all.


I don't work for Apple and have never seen Xcode source, but my guess (based on what I would do) would be that Xcode invokes Clang/LLVM as libraries rather than a standalone binary, in order to keep persistent state. In principle GCC could present such an interface, but in practice FSF policies prevent it. The point is that the static analysis can be used directly or by other tools (there is a web interface, for instance); it is not restricted to Xcode.


So... it's not apparently possible without modifying the source code. So like I said originally, what Xcode is using for static analysis is apparently Apple's choice, not yours nor mine.


Yes; what's the point? Nobody's making you use Xcode (I don't), and nobody's stopping you using the Clang/LLVM features like static analysis outside of Xcode. If you want, you're free to call clang's static analysis from GNUstep's Xcode clone¹ (assuming GNUstep's license allows that; LLVM's certainly does), or any other IDE or editor.

¹https://github.com/gnustep/gnustep-xcode


All true, but that makes Xcode a proprietary IDE, not a proprietary compiler. Apple doesn't make a proprietary compiler.


Those are merely features of the IDE. The compiler used for actual building is configurable per project.


>"Its debugger front-end is tied to lldb's idiosyncratic interface. It's not something that you can just easily replace with non-Apple compilers."

What I'm going to say might not be true any more, since the last time I worked with Xcode was roughly two years ago. Back then I worked as a mobile game developer, and at the studio we happily compiled with both GCC and LLVM. Actually, we preferred to use gcc and gdb, since the LLVM debugger was very buggy at that time.

This is how you could install gcc to work with XCode 4.3

http://stackoverflow.com/questions/9353444/how-to-use-instal...

And I wouldn't call LLVM/Clang an Apple compiler, since people from Google contribute to it too.


> And I wouldn't call LLVM/Clang an Apple compiler, since people from Google contribute to it too.

It's about as much of an Apple compiler as gcc is an FSF compiler.


Why not just write the license in such a way that linking against the stable ABI requires your code to also be GPL licensed?

Is there a legal reason why the requirement can only apply to the original source code and not the compiled source code? Don't the "proprietary binary" people exert some kind of license over their proprietary binaries?

And why can't the Linux Kernel have a versioned ABI? I don't know enough about it, but does it really get revised that often and by that much? I mean, Apple and Microsoft seem to be able to make this work, and everyone seems to think they aren't very capable compared to the Linux Kernel crew. The argument about "having to support old things" also doesn't seem to hold water, because one of the big reasons people use Open Source software is to leverage older, outdated hardware or to have access to older, outdated software and data that the proprietary vendors leave behind.


> Why not just write the license in such a way that linking against the stable ABI requires your code to also be GPL licensed?

Well, for one thing, it's dubious that ABIs are copyrightable (for much the same reasons as APIs; see Oracle v. Google), so a work derived from an existing product's ABI quite likely doesn't need a copyright license, and therefore is unlikely to be effectively restrictable by way of a gratuitous copyright license.


Xcode is clang based


And used GCC before that.


ESR is coming down hard on the FSF over what is actually an embrace of the "open source" movement. His arguments are technical only as a means to hide the politics.

This is not going to happen. I think David Kastrup's reply was pretty clear in that sense.

http://gcc.gnu.org/ml/gcc/2014-01/msg00178.html


Sounds like the open source vs free software debate all over again. I don't have a strong position in that one, but I find GCC's position rather obvious: They want to support free tools, and they explicitly don't want to support proprietary tools.

I cannot believe that this isn't obvious to esr, of all people. Is this just him trying to start a flame war?

edit: I'm now aware that esr is not talking about license restrictions but technical restrictions here. I have yet to find any evidence of technical restrictions for political reasons though, and it looks like the folks responding to him on the mailing list are not sure what he means either.


GCC's position [is] rather obvious: They want to support free tools

I don't have a strong position either, but unless you work for the FSF in their PR department, software developers should really be stating the FSF's position accurately, which is:

The Free Software Foundation only wants to promote free^H^H^H^HFree Software Foundation-licensed tools, primarily tools that use its GPL license.

To the FSF, "free" is just a shorthand way of saying "Free Software Foundation-licensed". It doesn't mean "free" as normally understood. Just because the FSF wants to conflate the two to obscure what's going on doesn't mean the rest of the world should go along with the ruse.

(And it is clearly non-standard, which is easily demonstrated by how often FSF people have to explain what "free" means. As they say in politics, if you're explaining, you're losing. If there was no difference from the standard usage, no explanation would be needed. Therefore, the FSF usage is non-standard. QED)

FWIW I doubt anyone has a problem with the FSF's actual mission, since people are free (normal usage) to do what they want (and are even encouraged to do so). It's the rhetorical duplicity of their PR that we shouldn't be supporting. Let the FSF's mission stand on its own merits, rather than trying to gain credibility/respectability by association with something else (in this case, our pre-existing affinity for freedom).


> To the FSF, "free" is just a shorthand way of saying "Free Software Foundation-licensed".

Not entirely true; the FSF recognizes lots of other licenses as "free".

It's true that the FSF thinks its licenses are the most appropriate for promoting software freedom.

I think they're wrong: if you can convince people that Free Software has value, you don't need a copyleft license forcing them to give back, and if you can't convince them of that, a copyleft license doesn't help you get them to create free software. It just prevents them from engaging with it at all, gets them to commit to an alternative, and makes them less likely to commit back even if they later realize the value in Free (since if they commit to a non-Free third-party alternative, the cost of switching it out is higher).


"It doesn't mean "free" as normally understood ... have to explain what "free" means. As they say in politics, if you're explaining, you're losing"

And so you are losing. You are trying to explain what "free" is, and QED, your usage is non-standard.


Yes, the likely result is just a reignited flame war, both on the list and here on HN.

The GCC authors clearly want users to be able to modify and share the compiler, regardless of which version of the program the user got. The idea that someone would then try to use their work as a springboard to sue individuals who attempt to modify or share a specific version is clearly seen as abhorrent behavior.

Yet, again and again, people disrespect the wishes of the authors and keep trying to pressure them to change their views. "Allow us to sue some users by using your work," they say. "No," they get back, and yet they keep asking again and again and again.


No. It's not that at all. The GCC/LLVM debate has nothing whatsoever to do with licensing; it's about the crippling of gcc's design for political aims. Educate yourself on the subject.


Educate yourself before questioning others' level of understanding.

GCC has a plugin system, but not one that allows proprietary modules. The people who care about that distinction only do so because of the licensing difference between the GPL and proprietary licensing.


I can't claim to know anything about GCC's internals, but from what I know/read, it does have a plugin system and I don't see any indication of that being held back for political reasons (there are many other potential reasons for technical inferiority).

And I don't see how the FSF could ever have furthered their political goals by making it technically hard to use GCC in free software tools, quite the contrary. Would be quite interested in evidence of that.


I think I get what you mean now, you said above that GCC doesn't want to give external tools access to intermediate formats. But is that really the "anti plugin policy" esr is talking about?


https://lists.gnu.org/archive/html/emacs-devel/2012-12/msg00...

RMS says "freedom" is worth sacrificing functionality for. GCC is a monolithic mess of shit that is virtually impossible to debug, and produces incorrect code very frequently. Being forced into a terrible architecture where everything is smushed into everything else just to prevent people from being able to use gcc as a front end and something else as a backend is absolutely a case of making the code worse for the sake of trying to further a political agenda.


   GCC is a monolithic mess of shit that is virtually impossible to debug
This is the “RMS loophole” in the GPL. In principle you have the right to modify GCC to suit yourself; in practice, the barrier to entry is too high.


Please take your FUD and go elsewhere.


IMHO GCC has already lost this battle. Today one developer can create incredible tools using the Clang/LLVM code-base and release them as open source or, if he wants, as closed source. This is just impossible with GCC. GCC is just a compiler, but Clang is much more: it is a very powerful compiler-building library. If we talk about C++, then Clang already supports C++14 but GCC does not yet. And yes, the license matters; the GPL just does not allow using GCC in most cases. But today one does not need GCC at all any more; there is Clang. gcc-xml was a hope many years ago but it just died, and today all this can be done much more easily and faster using Clang. Clang is standard on OS X. Some Linux distributions are already switching to Clang. Clang support for Windows is already on the way.

So if nothing changes in GCC's politics, it will become unnecessary in the future. Of course some will still use it, but only for political reasons.


> Today one developer can create incredible tools using the Clang/LLVM code-base and release them as open source or, if he wants, as closed source. This is just impossible with GCC.

By design.

> And yes License matters, GPL just does not allow to use GCC in most cases.

By design.


At that rate GCC will eventually die in obsolescence. By design, apparently.


And esr is saying that perhaps it is (no longer) a good design.


esr seems to conflate technical differences with political issues here. clang is not superior in some areas because it isn't GPL'd. It's because it is a newer project with more resources and different priorities. Sure, it attracted some developers who didn't like gcc's and the FSF's policies, but so do all kinds of crappy proprietary products. The point is, gcc doesn't need to relax its policies to better compete with LLVM, it just needs to become a technically better product. I don't buy the implication that it cannot become that without dropping some of the FSF's goals.

The gcc project is ancient and while I don't know the code base well, I'd assume that the fact doesn't necessarily help make it more approachable for new developers. Why can't a newer version of gcc be based on parts of LLVM, if the latter is considered superior by so many people? The licenses seem to allow it.


I think GCC receives a lot of undeserved hate at the moment. It is still a very good compiler which constantly improves. E.g., for my projects it generates better (faster) code. But still there are a lot of flame comments made against GCC. It was very similar when Chrome was released and suddenly the web was filled with flame comments against Firefox.

I think one problem is that many developers on Apple systems thought the Apple GCC was state of the art, when in fact the latest release (a patched 4.1?) was rather old and obsolete. The Clang homepage long compared itself to GCC 4.1 instead of 4.8.2 or 4.9. It was similar when Chrome was released and Firefox was stuck on 3.6.1 waiting for 4.0. Sure, Chrome was a lot faster, but Firefox quickly caught up. But still people seem to think otherwise.

There is a GCC for LLVM, called DragonEgg http://dragonegg.llvm.org/. It is a _plugin_ (yes, GCC has had those since 4.5) for GCC. But I don't know what huge advantage it's supposed to bring, especially when GCC seems to have the better backend (at least for my projects) at the moment.

I think the GCC folks should make the GCC Python plugin official because it provides a more stable and clean API (mentioned by Ian Lance Taylor here: http://gcc.gnu.org/ml/gcc/2014-01/msg00181.html). They should continue with the transition to C++ (which will help clean up the code base a lot, no matter what the C++ haters say) and increase the work towards modularization (http://gcc.gnu.org/wiki/ModularGCC, which would probably allow making the frontend available as a library similar to libclang). Libgccjit could be very interesting as well: http://gcc.gnu.org/wiki/JIT


>increase the work towards modularization

Is that actually going to happen though? RMS is very clear that he doesn't want it to happen, and continues to argue against it. He knows that the decision to keep the architecture monolithic and only expose unstable internal data structures limits functionality, and he is happy to make that sacrifice:

https://lists.gnu.org/archive/html/emacs-devel/2012-12/msg00...

The GNU Project campaigns for freedom, not functionality. Sometimes freedom requires a practical sacrifice, and sometimes functionality is part of that sacrifice.

https://lists.gnu.org/archive/html/emacs-devel/2012-12/msg00...

Part of the reason why clang/llvm weakens our commnity, compared with GCC, is that the clang front ends can feed their data to nonfree tools.


There are very real differences between Clang and GCC. Not when you use them as clients to compile your code on the CLI, where gcc has indeed made big improvements in the areas where it was not as good as clang, and was already pretty kick-ass anyway, but when you try to use them as libraries, e.g. to use the AST to do source transformations, or to use the code generator to make a backend for a new language.

Using clang/llvm for this is a breeze. They were conceived for that from the get-go. Using gcc's AST to do anything is a horrible nightmare, and you pretty much have to fork the whole gcc code base to do that anyway.

Using gcc to implement a backend for a new language is possible, but still a lot harder than doing so with LLVM, which actually has a spec for its IR, and is well documented.

Those things are not going to be easy to change because GCC wasn't designed to account for those needs.


It's not too surprising that the IR is LLVM's strength, since that was originally the sole point of the project: the Low Level Virtual Machine was a research project at the University of Illinois to produce a target-independent low-level assembly infrastructure, in particular to be able to serve as the code-gen backend for managed/VM languages (vs. the GHC/SBCL approach of the language runtime bundling its own custom codegen). C-- was another project in that space.

GCC by contrast started as a project to replace AT&T's CC, and has since grown into a project to provide a free compiler suite, but in general an AOT compiler suite, not a backend for VM-based languages (even the Java support, the now-mostly-dead gcj, was an AOT approach). Its main competitors for years were proprietary compilers like icc, Sun Studio, and IBM VisualAge, and the main focus of comparison was language feature support and optimization performance. So that produced a pretty different development focus in each case for quite some time, though they've converged more in recent years. Nowadays the LLVM project has put a lot more resources into the compiler than they used to (including AOT-compiled languages), and GCC has been cleaning up the compiler internals and producing a plugin API. But for much of the lifetime of the two projects they weren't really in the same space.


> Using gcc's AST to do anything is an horrible nightmare, and you pretty much have to fork the whole gcc code base to do that anyway.

GCC supports Plugins which allow you to access all internal structures.

> Those things are not going to be easy to change because GCC wasn't designed to account for those needs.

They are working towards modularization. It will certainly not be an easy task. But I hope they accomplish that.


>They are working towards modularization. It will certainly not be an easy task. But I hope they accomplish that.

gcc-xml was already written and the gcc devs refused to merge it. Has that changed in the last year?


http://gccxml.github.io/HTML/News.html

Looks quite dead to me, and I recall reading that the author does not recommend using it anymore.


There are still commits (latest ~1 month ago) https://github.com/gccxml/gccxml/commits/master


When was that? 10 years ago?


I definitely don't hate GCC, for a long time it was the best free compiler that I knew so I am really grateful to the people that made it possible.

But honestly, after seeing the error messages of Clang compared to those of GCC, I have zero intention of going back to GCC.

It may be that GCC generates better code (to be honest, I don't know) but 99% of the time that I'm using a compiler, I'm interacting with the error messages. The quality of the error messages has a real, significant impact in my development time. And if I want better code, I can always use GCC for the final compilation after doing all the debugging work with Clang.

As I said, I can't say I hate GCC, it's a free product and many, many wonderful projects have been compiled with it. But I honestly don't understand why, after all these years of development, the error messages are still that bad.


GCC error messages aren't that bad. Could it be that you compare the old Apple GCC to the latest clang?


> The point is, gcc doesn't need to relax its policies to better compete with LLVM, it just needs to become a technically better product.

gcc has an explicit policy against making the code modular and reusable, for political reasons (to make it hard to use individual pieces of GCC as independent programs, which could form part of a proprietary compiler toolchain). The point is, this political policy has made gcc's code technically inferior in some ways to clang (modular code, with separation of concerns and clearly defined interfaces between components, is a technically good thing). GCC can't resolve this technical problem without abandoning this policy.

> Why can't a newer version of gcc be based on parts of LLVM, if the latter is considered superior by so many people?

That would inherently mean abandoning this policy. And the answer is that while it's possible, the GCC codebase is still good, it still outperforms clang in many cases, and pulling in parts of clang would already require cleaning up and modularizing the GCC codebase - at which point we'd quite possibly end up with a compiler that's better than clang in all respects. So why not just do that?


> GCC can't resolve this technical problem without abandoning this policy

It seems to have been abandoned. GCC supports plugins and is working towards modularization.


You are the one conflating technical differences with political issues. The problem with gcc is not the license, but the decision to architect the system so that the intermediate formats are not accessible to plugin developers. Plenty of the projects that LLVM has made possible have no beef at all in licensing under GPL, however, under the current design of the gcc they are flatly impossible to write for it.

This is because the FSF deliberately designed gcc not for technical goals, but to prevent access to intermediate formats without merging the compiler, because that would allow non-free plugins (as they wouldn't be derivative works of the gcc and so could choose their own license). Unfortunately, these choices not only restrict non-free work, but the technical decisions made prevent a lot of useful things from being made, and makes contributing to the project much harder than contributing to LLVM/Clang.


> but the decision to architect the system so that the intermediate formats are not accessible to plugin developers.

What do you mean exactly? GCC plugins provide access to all internal data structures.


ESR's response to Ian Taylor:

"Then I don't understand why David Kastrup's question was even controversial.

"If I have failed to understand the background facts, I apologize and welcome being corrected.

"I hope you (and others) understand that I welcome chances to help the FSF's projects when I believe doing so serves the hacker community as a whole. The fact that I am currently working full-time on cleaning up the Emacs repository for full git conversion is only one instance of this."

(I haven't been following ESR (not being part of his tribe and all), nor have I really been following Emacs. Do the other Emacs folks know about his "clean-up" there? Is it going to go any better than his clean up of the Linux kernel build system?)


Does GCC need to compete with clang at all? What's that mean exactly? Other than in terms of compile speed, it just seems like most modern compilers sort of reach some level of maturity and you really have to fabricate benchmarks to demonstrate that interesting of a difference between them.

ESR is just trolling. What his bug is, I don't know. Seems like he's dancing around something that might be interesting and more in line with the social observations that he's a little better at. I assert that GCC doesn't actually compete and doesn't need to compete. It just has to be available, and it simply has to have hackers who are willing to work on it for the principle.

Let's just assume that clang takes over the world, consistently produces better code than GCC, etc. What exactly does that matter to GCC? Presumably hackers will stop working on GCC, but guile, hurd, and numerous other GNU projects show that that isn't always the case. I think that as long as GNU exists and they have some money and fans, there will be GCC contributors. Is there some other fear of what will happen if people use a different tool chain? Conversely, BSD has depended upon GCC for decades and I'm not convinced that that has affected it in any way, and their switch to clang I suspect isn't going to radically alter things either. If we go back a week or two, I don't know that emacs' choice of revision control software makes any difference to its use; it may have some amount of impact on people contributing to it, but I don't think that is clear cut either; there are A LOT of emacs contributions that aren't in the main tree. Also, these projects don't want 'drive by' contributions; they want actively involved supporters.

Seems like he's dancing around some social observations that he wants to be true but can't prove or they might not be true. People hack on stuff because they have an itch, that itch might be technical need, it might be some sense of aesthetic that they think isn't being answered, it could also be related to something like freedom. When do other factors outweigh the itch? Now maybe GCC is GNU's most important software asset and there is some larger social thing ESR is worried about or has observed.


You are the one conflating the technical and political. This has little to do with licenses. GCC is written poorly on purpose in order to make it difficult to work with. They do not want gcc to be used as a typical unix tool, doing some task and then having the output piped to some other tool to do some other task. They want you to have to directly extend gcc to add whatever functionality you want; this way you would have to make your functionality GPL. The consequence of this moronic decision is that gcc is an absolute nightmare to work on, is full of bugs that are very hard to isolate, and is being abandoned by everyone sane in favor of clang.


> GCC is written poorly on purpose in order to make it difficult to work with gcc

That's a strange conspiracy theory that has been posted here several times and debunked as well. It seems particularly odd to me because when I worked at a university 15 years ago, everyone in the compiler research world would occasionally hack on gcc to add features, retarget it, add optimizations etc. ... It didn't seem prohibitively difficult.


It's not a conspiracy theory. Could you point to the 'debunking'? Because I don't see one that contains any actual evidence. There are lots and lots of public mailing list posts by RMS and other GCC engineers explaining their reasoning for not making GCC more modular. The reasons were political.

For example RMS vetoed the first attempts to add support for Java bytecode to GCC because he thought it would allow people to interact with GCC from other non-free software: http://gcc.gnu.org/ml/gcc/2001-02/msg00895.html

That same reasoning is why there is no GCC equivalent to LLVM IR or libclang or libtooling.

https://lists.gnu.org/archive/html/emacs-devel/2012-12/msg00...

Part of the reason why clang/llvm weakens our commnity, compared with GCC, is that the clang front ends can feed their data to nonfree tools.

There are some links in this wiki page that cover some of the arguments made against adding support for plugins to GCC. http://gcc.gnu.org/wiki/GCC_Plugins under 'Potential disadvantages of supporting a plugin architecture in GCC'

Here is a post from a GCC maintainer explaining that RMS was personally blocking the inclusion of this much desired basic functionality for political reasons: http://gcc.gnu.org/ml/gcc/2007-11/msg00193.html

> Is there any progress in the gcc-plugin project ?

Non-technical holdups. RMS is worried that this will make it too easy to integrate proprietary code directly with GCC.


Are posts from 2001 and 2007 really relevant for this discussion?

Look at this for example: http://gcc.gnu.org/ml/gcc/2014-01/msg00182.html

The "debunking" in this thread was the confirmation that plugins actually work now because the policies were relaxed in favor of them.


The Stallman post about LLVM being bad because it allows people to use it to build other tools was from 2012. Why did you ignore that?

Plugins are only part of the story. Without stable intermediate representations there is still no GCC equivalent to libtooling or LLVM IR. That is by design, and remains true.

No conspiracy theory. Just an ugly truth that some people want to deny.


Why would you post an outright lie like that? Anyone can read the discussion and see multiple mailing list postings from RMS himself confirming the "conspiracy theory" is in fact reality. Quite an odd concept of debunking.


The idea that the FSF (and perhaps more specifically, RMS) are holding back GCC technically due to concerns about people working around the GPL dates way back. As people have pointed out, GCC now has a plugin system but I imagine ESR is thinking back to exchanges such as this one, where RMS rejected the contribution of a Java bytecode backend to GCC purely on the grounds that it could be used with proprietary tools using the bytecode as an IR http://gcc.gnu.org/ml/gcc/2001-02/msg00895.html



However, the interfaces are not stable across releases, making using the system sufficiently hard that almost no-one bothers.


There's a reply to esr by Alexandre Oliva which touches this point. I'll quote what he says:

"That GCC plugin interface is not sufficiently stable for major uncoordinated developments by third parties is just as true as that Linux's module interface is constantly changing, and complaints about its lack of stability are often responded to with such phrases as 'contribute your driver and we'll even help you keep it up-to-date'."

I don't know about the subject but it seems a quite reasonable stand and it certainly doesn't hinder the Linux kernel development, I don't see why it wouldn't work in GCC.


I believe he is talking about a different kind of plugin. For example, suppose you are writing an editor and you want to syntax-highlight C source code. You could then feed .c files to clang, get back a parsed AST, and use that for awesome syntax highlighting.

You cannot do that with gcc, and the FSF explicitly does not want you to be able to, because you could then write a non-free editor that takes advantage of gcc as a free front-end.


Oh, I think you are right; esr was talking about using GCC as a plugin, and I understood it as plugins for GCC. Apparently Alexandre Oliva and many of us made the same mistake.

Considering that, I mostly agree with David Kastrup; it doesn't make sense for the FSF to bend its principles and help proprietary software in order to gain market share. But, unfortunately, it's also affecting FOSS developers.


Yup, there are many, many times when I want to be able to do simple things with C or C++ code: writing a little static analyzer tool that can check some internal code for a practice we have found harmful, parsing some headers to automatically generate Python bindings or documentation stubs, generating really good unit-test stubs with the information the AST gives you, grabbing all of the comments and running your own verification on them, automagically rewriting code to not use an old API anymore. And this is all just using the source-parsing side of llvm.

These are the tools I write. They are not big commercial enterprises; most of them are one-off projects that are used and tossed away within a year, and very few are ever published. Clang makes me more productive because it gives me the ability to flat out do this type of thing. It doesn't matter what GNU's political agenda is; their tool flat out doesn't let me do this like llvm can. As long as this is the case, long term I will use llvm more and more over gcc.
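To make the kind of one-off AST tool described above concrete: the sketch below uses Python's stdlib `ast` module as a self-contained stand-in for libclang (which would require libclang and its bindings to be installed); the `BANNED` set and the `find_banned_calls` helper are invented for the illustration, not part of any real tool.

```python
import ast

# Hypothetical internal rule: calls to these functions are "a practice
# we have found harmful" and should be flagged by the checker.
BANNED = {"eval", "exec"}

def find_banned_calls(source: str):
    """Return (line, name) pairs for every call to a banned function."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        # A direct call like eval(...) parses as Call(func=Name('eval')).
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED:
                hits.append((node.lineno, node.func.id))
    return sorted(hits)

code = "x = eval('1 + 1')\nprint(x)\n"
print(find_banned_calls(code))  # -> [(1, 'eval')]
```

The same parse-walk-flag loop is what a library-style compiler front-end gives you for C and C++, which is exactly the point of the comment above: the parser is reusable, so a throwaway checker is an afternoon's work rather than a fork of the compiler.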


is it at least stable across patch levels?


I wrote this six months ago:

If people switch from GCC to Clang/LLVM in enough numbers that Apple think they can get away with it, Apple will, in a heartbeat, close the development of Clang/LLVM and make all new versions proprietary. (https://news.ycombinator.com/item?id=6146066)

This is still true, and this is the reason we cannot allow GCC to give up or declare “victory” and move on.


I really don't see that happening. Google engineers are the largest contributors to LLVM/Clang these days. Why would Apple reject the contributions of a company that employs some of the best compiler engineers? It would make it much more expensive for them to maintain and develop LLVM.


Why can you not see the possibility of Google collaborating with Apple to take LLVM proprietary?


Why would anyone use a closed fork of llvm? The open one would continue as normal without Apple.


The open one would stagnate without the developers Apple pays, and the closed fork would then out-compete both GCC and the free fork of LLVM.


Apple is not the largest contributor to LLVM; Google is. The various other non-Apple contributors also outweigh Apple. It would be hugely expensive for Apple to hire all those compiler developers away from Google and Intel and dozens of other companies. Why would they do it? What would it gain them, other than to be comically villainous?

The reality is that GCC is stagnating because a lot of things people want to do are just not very easy to do with GCC, due to architectural decisions that were made for purely political reasons (and these arguments have been going on since long before LLVM even existed).

It used to be that academics, students, post grads and hackers who wanted to do something interesting and new with a compiler would start with GCC. That hasn't been true for some years now. Pretty much all the interesting new projects are based on LLVM. And that isn't because of Apple paying people. It's because LLVM has a modular design that gives you a stable, clean interface to various things, while GCC only offers unstable and complicated internal data structures.

So increasingly people interested in studying and developing compilers are hacking on LLVM and GCC is left only with developers who have a very strong political agenda that motivates them to contribute to GCC. Unfortunately for RMS most compiler developers don't work for free out of their basements. They are well paid and work at large companies and so any emotional or political arguments about freedom don't get any weight with their bosses compared with the ability to actually get shit done. So they are switching to working on LLVM.

As detailed elsewhere in these comments RMS knows full well that the political decisions limit the functionality of GCC. He says he is happy to pay that price to prevent anybody using GCC in a project that isn't GPL licensed.


So substitute “Apple” with “Google, Apple, Intel, etc.” in my original dystopic vision. Should we trust them?

The technical problems with GCC’s unstable plugin API can be fixed, and are, from what I understand, not something put there intentionally.


Substitute with the FSF and GCC and the same paranoid arguments apply. I don't have any reason to trust the FSF any more than those other organisations. At least when it comes to corporations, they are just working to make money, and I can reason about and predict their actions based on that. Political organisations like the FSF are often quite prepared to shoot themselves in the foot or destroy the entire project for ideological reasons. An "if it can't be just how we want it then nobody can have it" attitude.

Back in reality, none of these projects are going anywhere. GNU zealots have spent the last two decades warning that BSD-style licences would result in people making closed forks of things, leading to the original open version dying. But it has hardly ever happened.

Instead of propagating paranoid conspiracies you should just accept the truth which is that BSD style licences have proven to be the most effective at getting people to contribute to open software.


The open one would stagnate because it has over 3 times as many developers, writing over 3 times as much code? You seriously think that?


No, I think that the proprietary one would get all the company contributors, and therefore have the lion’s share of developers.


The apple employees working on it are the minority, that is the point. If apple decided to have their employees work on a closed source fork, they would have less than 25% of the current developers of LLVM/clang working on their compiler. LLVM/clang would still have the other 75%+.


Apple developers are significant contributors to LLVM, but not a majority. It's roughly one quarter Apple, one quarter Google, one quarter other paid commercial developers, and one quarter individuals (including academics).


Let's not forget the private war Steve Jobs had with the FSF over being forced to release the NeXT Objective-C frontend to gcc, if I remember the events correctly.


“Forced to release” does not really describe what happened. It was more like NeXT thought they could take GCC, extend it to compile Objective C and make NeXT’s version of GCC proprietary. When the FSF told them, using lawyers, that NeXT could not, in fact, blatantly violate the GPL, NeXT could have chosen not to release their version and use another compiler for Objective C. But they chose to release their Objective C compiler under the GPL, which was their other option.

This is what happens when you don’t read the license, think that things like copyright doesn’t apply to things you get off the internet, and dig yourself into a hole – your options are limited. But releasing the code as GPL was never their only option, is what I’m saying.


> If ... Apple think they can get away with it, Apple will, in a heartbeat, close the development of Clang/LLVM

Followed shortly by Google, Microsoft, IBM, and Oracle. Classic prisoners' dilemma.


No, it is not still true. It was never true. It is pure FUD. Apple has absolutely no way to "close the development" of llvm. Why do GPL zealots still post such crappy FUD even when MIT/BSD/apache/ISC licensed projects make up a massive portion of the infrastructure of the net, and no magical closing has ever managed to cause any problems at all?


Well, strictly speaking, neither one of us can be sure what would happen. It’s just what I think would happen. Is it “FUD” if I still really believe it? I don’t know.

And speaking of closing; just look at Android with its complete anti-GPL, purged system. Only Linux remains there.

Of course, I (or you) can’t prove anything either way, but I have the feeling that Google, Apple, and the rest of the permissive-license promulgators are constantly going “Relax, it’s fine now, the war is over, you can all go home! You won!” And if we turn our backs, if we relax in our supposed victory, we will slowly find that the contributors to these better (completely coincidentally non-GPL) projects are mostly employees of these same companies, and they all have CLAs. And one fine day they will turn off the freedom and turn it all proprietary. We then cannot hope to compete with the current project owners, since they have all the experience and infrastructure.


>Well, strictly speaking, neither one of us can be sure what would happen

We could certainly look at the hundreds of projects setting precedent to determine that you are almost certainly wrong.

>Is it “FUD” if I still really believe it?

Yes. If you are posting something to create fear, uncertainty and doubt, then it is FUD. Even if you are sincerely crazy, it doesn't make your FUD not be FUD.

>but I have the feeling of Google, Apple, and the rest of the permissive license promulgators

Google are not "permissive license promulgators". And you wouldn't be having this conversation if not for actually free software, like freebsd, apache, bind, etc. The people who made the internet are the "permissive license promulgators". Show some fucking respect.

>And one fine day they will turn off the freedom and turn it all proprietary

And see a therapist. You are paranoid and delusional.


As I see it, the whole disagreement seems to stem from differing opinions about what happens when free/open software and non-free/closed software meet, and which is "stronger".

FSF lives on an island and worries that even one contaminated inhabitant will infect everybody. They seem to hold that non-free is a contagious disease that will overtake and destroy their freedom. FSF is worried about diminishing: what they have is perfect and it can only be reduced. FSF are Tolkien's Elves.

ESR would welcome contaminated people to that island, believing strongly in its restorative properties. He seems to hold that open source is more powerful and will stamp out closed software whenever they meet. ESR is worried about not expanding quickly enough and dying of stagnation. ESR would probably be Aragorn.

That's how I see this argument.


Sounds like this was posted without reading the linked thread. He retracted the post in a followup as he didn't know about the GCC plugin system: http://gcc.gnu.org/ml/gcc/2014-01/msg00182.html


I see GCC as performing the necessary function of defining the radical antithesis to proprietary tools — historical ones like Think C, Borland C++, and Metrowerks, and present-day tools like Microsoft's Visual C/C++.

Without GCC staking out the position it has, clang wouldn't have its middle ground to stake out. The middle ground would instead be a lot more proprietary than it is now.

Without Stallman being as radical as he is, there would be no Linuses to bridge the gap between completely free and completely proprietary software. There would be nothing to react to.

Many wrongs actually do make a right, as long as we're all wrong in opposite directions.


I doubt ESR was genuinely interested in changing GCC policy - that seems like a well-written piece of concern trolling. If you want to make suggestions, you discuss them politely with the stakeholders; you don't make intentionally provocative suggestions on a public mailing list.


"I also think it bears noticing that nobody outside of Microsoft seems to particularly want to write proprietary compilers any more."

Someone should tell Intel that. Don't they still do things to intentionally make AMD CPUs look bad?


"The clang developers very carefully do _not_ say that they aim to make GCC obsolete and relegate it to the dustbin of discarded tech. But I believe that is unmistakably among their goals"

Beliefs don't need rational arguments, but I don't see one supporting that claim. IMO, clang developers just won't cripple their product to prevent the potential collateral damage to gcc.


The best part: reading ESR call others ham-handed and counterproductive.


esr trolling the gnu mailing lists...


Ugh, more free software vs. open source politics. In the interest of actually collaborating and getting shit done, I'd like "freedom" from all this nonsense please.

