When I was younger, I thought the OpenBSD folks were backward and unsophisticated compared to Linux. I imagine a lot of younger people feel that way. But after working with various systems over the years, I can now really appreciate that simplicity. It's the heart and soul of OpenBSD, OpenSSH, and all the other projects (LibreSSL, etc.) that they release.
> I urge everyone in power regarding this issue to think this through --
> and then, make your simple compiler which we can build into a trusted
> component FREE, or, if you don't, sometime in the next few years
> something else which is simple and matches it in power, can and might
> and probably will show up (because it is clear the gnu bloat compiler
> will never achieve such a goal...)
Looks like Theo correctly predicted the advent of clang.
Just linking libclang.so (newest SVN version) takes over 8 GB of memory. How much more, you ask? Well, we'll find out when I get more RAM. So I don't see that compiler becoming a GCC replacement in OpenBSD anytime soon.
The memory use at the "link step" is so high because, with whole-program optimization, that's the point where everything actually compiles.
It used to be fairly typical for large C++ applications. Chromium and Firefox had problems with it before GCC got fixed. Sounds like clang still has work to do there.
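To make that concrete, here's a minimal sketch of the LTO pipeline; the file names and cc invocations in the comments are illustrative, and the details vary by toolchain:

    /* util.c -- built with "cc -flto -c util.c", the .o file holds
       serialized compiler IR rather than machine code, so the
       per-file compile step stays cheap. */
    int square(int x) { return x * x; }

    /* main.c -- likewise "cc -flto -c main.c" defers code generation. */
    int square(int x);

    int main(void) {
        /* The final "cc -flto util.o main.o -o demo" hands all of the
           IR back to the compiler at once: optimization and code
           generation for the whole program happen in this one step,
           which is why memory spikes at "link" time. Without -flto,
           codegen happens per file and the link itself is cheap. */
        return square(7) == 49 ? 0 : 1;
    }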
I completely agree, but my point is more that the compiler shouldn't be that large even whilst linking and doing whole-program optimisation. Proportionate to the size of the output binary, the overhead is unreasonably large. Firefox and Chrome, for example, are problematic due to their architecture. I've built modular applications which compile smaller units (.so files) that are runtime-linked using dlopen() etc. (see the sketch after this comment). Not quite the same scale, but this was in a time when the kit was a lot slower.
I'd like to see a return to a simpler time. We can afford to lose some of the optimisations now and produce clean, simple, fast and predictable compilers.
I think Roslyn/RyuJIT on the .NET platform is an example of where this all falls apart: bad tail-call optimisation leads to non-determinism in parameter passing. If the compiler consists of lots of code, then there are lots of bugs, and you can't afford bugs there.
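As a rough illustration of the dlopen() approach mentioned above: a small host program loads a separately compiled .so at runtime and calls into it. The "./plugin.so" path and "plugin_entry" symbol are made-up names for this sketch:

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* Load a separately compiled module at runtime instead of
           linking everything into one huge binary. */
        void *handle = dlopen("./plugin.so", RTLD_NOW);
        if (handle == NULL) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* Look up the module's entry point by name. */
        int (*entry)(void) = (int (*)(void))dlsym(handle, "plugin_entry");
        if (entry == NULL) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(handle);
            return 1;
        }

        int rc = entry();   /* call into the module */
        dlclose(handle);
        return rc;
    }

Each module can then be compiled and optimized on its own, so no single link step ever has to see the whole program.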
Can I interest you in the original Plan 9 compilers? Their source should be around somewhere, and they don't do much in the way of optimization, if any, so they shouldn't have any giant memory-use problems.
My understanding is that the original C toolchain in Go (only recently removed in 1.5) was based heavily on the Plan 9 compilers.
In fact there was a rather inflammatory blog post[1] whose basic theme was that "Golang is trash" because the toolchain was so simple. One of the main Go developers then responded[2] on HN. Different people value different things I suppose.
I've hacked heavily on both; I've even compiled the Plan 9 kernel using Go's compiler and linker. The compilers are damn good. I went from no knowledge of 6[cl] (the amd64 compiler and linker) to pretty comfortable with the source in about two days.
Go uses customized but recognizable derivatives of the Plan 9 compilers. Go's compilers are really only intended to compile the Go tools, so you might find it painful to build anything else.
Getting 6c/6l/6a compiled on Linux would be easier these days because gcc's -fplan9-extensions flag will bring you pretty close; check out https://github.com/rminnich/NxM/tree/master/util for the source and scripts to build them on Linux. I believe you can make it output ELF binaries, but I've never tried using them to build a Linux program.
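For a taste of what that flag does, here's a hedged sketch based on the GCC manual's description of -fplan9-extensions (anonymous struct fields named by their typedef, plus automatic pointer conversion to an anonymous field); the Lock/Node names are invented:

    /* Build with: gcc -fplan9-extensions demo.c */
    typedef struct Lock { int held; } Lock;

    struct Node {
        Lock;           /* anonymous field named by its typedef */
        int value;
    };

    void lock(Lock *l) { l->held = 1; }

    int main(void) {
        struct Node n = { {0}, 42 };
        lock(&n);       /* &n converts to Lock * automatically */
        /* the anonymous field is reachable both directly and by
           its typedef name */
        return (n.held == 1 && n.Lock.held == 1) ? 0 : 1;
    }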
If you want to greatly speed up compilation at the cost of some runtime performance, just turn LTO off. I've never heard of a project requiring it to function.
Linking several hundred objects across a dozen or so libraries, out of ~2.5k objects in total, is going to get hairily expensive memory-wise no matter which way you do it.
Compiling with debug information embedded raises that to around 12 GB to do it in a practical timeframe, unless you're willing to wait a few hours; that figure may have dropped recently with some tweaks that have been published on llvmdev.
GCC is no slouch in that area either, but its memory-usage spikes feel lower.
You're trolling, but the thing a lot of people might not realize is that OpenBSD builds are done natively on the hardware: each platform is capable of building itself, and that's how they operate. I'm not aware of any exceptions.
They did; they seem to have ruled it out. It doesn't support all their platforms yet, and development is fairly slow (small community, but growing). NetBSD (which supports both clang and gcc, for portability and choice) also supports it, but it cannot yet compile the whole kernel for any architecture, so bits are marked to be compiled with gcc if you select pcc. It would be nice if the whole kernel plus the (non-C++) userspace worked...
The toolchain was GPL/MIT first, as part of Inferno.
The OpenBSD team were interested in said toolchain as a GCC replacement. His railing at the Plan 9 team was a little unfair; we (the team and the community) mostly agreed with his basic point. It was Lucent legal that walked slowly.
This was a time when money was on fire at Lucent. Bell Labs had removed every other lightbulb as a money-saving exercise, for example. Playing at open source wasn't very high on anyone's todo list.
The initial opening of the code in 2000 was just too late for Plan 9; the GPL was eating the world.
Nokia recently acquired Lucent and Bell Labs... So the answer is probably "a Finnish telecom equipment maker that many people mistakenly assume was bought by Microsoft".
Can't remember if he was talking about the first or the second version of the LPL, but in any case, around the same time frame (can't remember exactly when, either), the compilers were available under the MIT license, distributed as part of Inferno (Inferno generally is GPLv2/commercial, but the compilers were MIT).
>> The license you propose is NOT FREE SOFTWARE. I am astounded the OSI
>> has gone and decided to become an organization that just rubber stamps
>> things which are not free. I don't know who they are talking to, but
>> these "licenses" which they approve are chock full of constraints
>> against various segments of the user community.
OSI is not about free software. It's not even very good about open source. They seemed happy if people had access to the source code regardless of the terms around it. They approved lots of licenses that had crap in them.
IMHO BSD/MIT or GPL/LGPL are pretty much it. If you need a different license than those, you've probably got an agenda. OSI gave a rubber stamp to different agendas.
Please point to one single OSI-approved license that is not free by the FSF's Free Software Definition and/or the Debian Free Software Guidelines.
I'd be delighted to learn something I don't know. I asked Stallman once, and he could only remember a case where OSI messed up and then fixed it when they realised.
Well this is turning into a lesson for me. There are a few that used to be non-free until newer versions came along:
Apple Public Source License - was nonfree before 2.0
Artistic License - 1.0 was nonfree; 2.0 is free
Reciprocal Public License - FSF doesn't indicate a revision, but OSI shows a version 1.5.
Without history it's not clear whether these were OSI-approved before the revision. There are also some on the OSI list that are not on the FSF list, so I'm not sure whether they're free or not.
But I still stand by my second assertion that most of them are unnecessary. If you're not BSD/MIT compatible and not GPL/LGPL compatible, your code will probably have a limited life.
> Without history it's not clear whether these were OSI-approved before the revision.
OSI has history. All licenses approved by OSI are either on the current list [0] or the superseded list (where the originator of the OSI-approved license has indicated it should not be used for new work) [1].
> OSI is not about free software. It's not even very good about open source. They seemed happy if people had access to the source code regardless of the terms around it. They approved lots of licenses that had crap in them.
The last sentence isn't true unless the FSF has also done so; there aren't lots of OSI-approved Open Source licenses that aren't also blessed by the FSF as Free Software licenses (the same is true in reverse, as well).
"Professionalism" is usually nothing more than a euphemism for using PC/CYA language and avoiding any type of discussion of hard truths because they might offend someone in power.
Technology as an industry would be better if we discarded it.
"Professionalism" is expressing yourself without using ad hominem attacks or insults. Using professionalism actually makes your points stronger, since people are forced to discuss and debate facts instead of how focusing on how you are behaving like a child.
I'm sick of this idea that "professionalism" == whiny or politically correct and that being "honest" allows you to be a jerk. The entire thing is so unproductive.
What's unproductive is forcing people to tap dance around their true feelings on a subject for fear of hurting someone else's feelings. As a grown adult, I'd much rather have someone use offensive language but be honest than spend so much effort trying not to offend anyone that we all have to guess at what they really want.
What it comes down to is that someone being a "jerk" can be willfully ignored or looked past to see what their point is. (And IMO, stubbornly refusing to do this is much more an indication of childishness than how few four-letter words you use... i.e., it's a tone argument, i.e., near the bottom of PG's hierarchy.)
If the "point" part is unclear, there's no looking past it, and the problem never gets solved since asking someone to stop beating around the bush is seen as jerk behavior.
"Note that I sell OpenBSD CDs to fund our project. That contract right
there says in term 7:
If Theo accidentally sells a CD to
North Korea, the US can fuck him.
Thanks OSI. Thanks for being so damn patriotic.
It also says in term 4:
Sell this in a product in ways which "we" do not like, and the
contract you have accepted says you can be fucked by anyone
who owns this license later and who decides they want to fuck you.