GCC 10.1 Released (gcc.gnu.org)
323 points by nrgc on May 7, 2020 | 139 comments



I love the built-in static analyzer -fanalyzer option in gcc-10.

[1] https://gcc.gnu.org/onlinedocs/gcc/Static-Analyzer-Options.h...
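
For anyone who wants a quick feel for it, here's a minimal sketch (hypothetical file and function names) of the kind of defect the analyzer is meant to report; compiling it with something like "gcc-10 -fanalyzer demo.c" should produce a -Wanalyzer-double-free diagnostic, with a path showing both calls to free:

    #include <stdlib.h>

    /* hypothetical helper: allocates a buffer, then frees it twice */
    static void use_buffer(void)
    {
        char *buf = malloc(64);
        if (buf == NULL)
            return;
        buf[0] = 'x';
        free(buf);
        free(buf);   /* second free of the same pointer: double-free */
    }

    int main(void)
    {
        use_buffer();
        return 0;
    }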


Really? I tried to use it but I got a lot of false positives - in fact, every one of the errors it reported in my medium-sized codebase was, I believe, a false positive. I spent a few hours last night looking at all of them, and while I found a missing free, it was not one picked up by this analysis - it just happened to be in the same code that the analyzer flagged. Also, the error messages are enormous in some cases.

It does show potential, and I hope it improves in the future. It would be nice to have a libre version of Coverity one day.

Edit: Output from compiling nbdkit upstream with -fanalyzer: https://paste.centos.org/view/8381f926


Looking at the categories of warnings it produces, I'm not surprised. When I've used similar tools trying to detect the same things, I also saw false positives. It's probably not an easy set of problems, otherwise we'd have it built-in to more tools and enabled by default.


We routinely run nbdkit through Coverity and it finds bugs, although it too has false positives. Also, the reports produced by Coverity are really nice - long enough to tell you where the bug is, but not so long as to be overwhelming.

I've been meaning to formally prove one of our internal "mini libraries" using Frama-C. If we did that then no one would be able to complain about bugs in it :-)


Sadly, that's typical of static analyzers today...


I like valgrind for this, combined with automated tests it gets decent coverage and finds actual (or at least probable) errors.


Valgrind is also prone to throwing up false positives. I find ASan much better.
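
To make that concrete, a minimal sketch (hypothetical code, not from any real project): build it with "-g -fsanitize=address" in either gcc or clang, and ASan should report a heap-buffer-overflow at runtime with stack traces for both the bad access and the allocation:

    #include <stdlib.h>

    int main(void)
    {
        int *a = (int *)malloc(8 * sizeof *a);
        if (a == NULL)
            return 1;
        a[8] = 1;    /* one element past the end of the allocation */
        free(a);
        return 0;
    }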


Isn't it very similar to something Clang has had for years?


Tangential: I think the many platforms that GCC supports and the kind of optimisation GCC provides might make it a superior choice for some projects. For those developers this is exciting. Also, if an existing project has been using it for a long time and has no intention of switching to Clang, this would be interesting for them too.


My project uses g++ but I’ve been using a clang static-analysis step in my CI pipeline for a while now.

Compiling with clang is much, much slower but it finds interesting errors sometimes. I only had to ifdef out one code section that it couldn’t understand (something about indexing an array with a constexpr function returning an enum class from a template parameter).

It depends on the project, but this one and another project I got it working on are both 100k lines of C++ and took half a day to set up. It was worth it imo.


Interesting that your build times are slower with clang. I've found on most of my projects clang is roughly 30% faster to compile in debug, but in release I haven't seen a huge difference between the two.


My build times are identical between GCC and Clang (debug and release).

But TCC is 10x faster than both of them.


Perhaps they are slower because Clang is doing static analysis in this case.


Static analysis is a lot slower but regular clang++ is also significantly slower than g++, like a 4-5 minute build time vs 2:30 on g++.

It might have something to do with a heavy usage of templates in a few files, I don't know.

Clang's autovectorization is better in a few functions I disassembled but otherwise the generated code doesn't benchmark any faster, it's better in some places, worse in others.


Nope, also slower for me.


Clang has it although I've personally never tried it due to the horrible rigamarole of setting it up.

In GCC it's a compiler flag. In LLVM it's a convoluted process automated by either an irritating perl/python script (The python one isn't included by default for some reason despite being far more common these days) or AN ENTIRE WEB SERVER TO OUTPUT TEXT (Which thankfully isn't included by default).

I'm sure you could set it up in make but since my projects aren't large scale enough to really need it I can't be bothered, even if it's one of those things you'll probably only need to write once and can just copy and paste ad-infinitum.


Lest other people get the wrong impression from someone who's admitted not even trying it, running Clang's static analyzer is as simple as switching cc with scan-build. You can even drive it from clang-tidy. No Perl or Python setup in my experience.


You appear to have it installed somewhere. Open up scan-build in your favorite text editor if you don't believe me.


Though scan-build is usually the simpler option, clang itself does have an --analyze flag which writes analysis results in various formats, including the same HTML reports that scan-build would generate. And for output on standard out,

   clang --analyze --analyzer-output text ...
will print the entire analysis tree in the same format as regular diagnostics.


The only problem is the CTU (cross-translation-unit) mess if you're analyzing more than one file, hence the need for the aforementioned tools.

Hopefully in a future version the kinks are ironed out and we can just use the flag without any hassle. It's as if we still had to manually link our files with ld after compiling them, instead of clang doing it automatically.


> Clang has it although I've personally never tried it due to the horrible rigamarole of setting it up.

    -% scan-build10 make
    scan-build: Using '/usr/local/llvm10/bin/clang-10' for static analysis
    ...
    scan-build: No bugs found.
What could ever have driven them to such a long and convoluted setup process?


Yep! Thanks for pointing out the perl script I mentioned. You appear to have completely ignored the rest of my post on them leaving important functionality to third party scripting languages.


Yes, to highlight that this "horrible rigmarole" and "irritation" you describe is entirely on the basis of there being a Perl script involved. Something I suspect the majority would overlook without even really noticing.


What do you love the most about the new analyzer pass? I haven't had a chance to try it out for myself yet but I'm looking forward to it.


I like not having to run another commercial tool[1] that will likely not be in use whenever I move on to the next project because no one has heard of it.

Biggest advantage I see is it's integrated into the compiler and so sees the same things the compiler does.

Having gcc do this out of the box helps people port their experience/skills with static analysis to other companies.

We already have clang-tidy and I like it too but it's nice to have a fall-back to compare when one produces a strange result. And a bit of competition is always good to have between such projects. And on most big projects it's not like you can just change the build system to use another compiler.

Also, I found some interesting cases which valgrind didn't see because they were in unreachable branches.

[1] https://news.ycombinator.com/item?id=22712338


An example to detect use-after-free: https://godbolt.org/z/zhiNLW

Basically this replicates what clang-tidy did


Too bad it doesn't detect the mismatch between new and free.


And if you fix it to use delete instead of free, you get "can't delete void *" errors. Perhaps not the best example code.


and if you fix the void->int, the warning goes away.


yeah that's why I changed int* to void* in the example, but I forgot to change new to malloc
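
For reference, a corrected minimal sketch (not the exact godbolt code) with the allocation and deallocation made consistent; -fanalyzer should still flag the read after free:

    #include <stdlib.h>

    int main(void)
    {
        int *p = (int *)malloc(sizeof *p);
        if (p == NULL)
            return 1;
        *p = 42;
        free(p);
        return *p;   /* use after free: expected -Wanalyzer-use-after-free */
    }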


Excited for this to come to gcc-arm-none-eabi


Imagine if -fanalyzer was like Rust's borrow checker


You Rust borrow checker requires special annotations and restrictions put on the code to do its job. I don't think you could something like that automatically on a C or C++ full codebase without having to manually annotate and refactor it somewhat. There are many common (and safe) C and C++ patterns that would be outright rejected by Rust's borrow checker, for instance initializing a structure or array partially if you're sure that nobody is going to use the initialized portion. Or having multiple mutable pointers/reference to the same object.

You could do something like that at runtime though, but then you have Valgrind, basically.
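
To make the second example concrete, a minimal sketch (hypothetical, not from any real codebase): this is perfectly legal C, but the direct equivalent with two simultaneous mutable (&mut) borrows would be rejected by safe Rust's borrow checker:

    #include <stdio.h>

    int main(void)
    {
        int x = 0;
        int *a = &x;     /* first mutable alias */
        int *b = &x;     /* second mutable alias to the same object */
        *a = 1;
        *b = 2;          /* both aliases are live and used for writes */
        printf("%d\n", *a);
        return 0;
    }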


> for instance initializing a structure or array partially if you're sure that nobody is going to use the initialized portion. Or having multiple mutable pointers/reference to the same object.

Rust supports MaybeUninit<> for the former example, and unsafe raw pointers for the latter. It needs unsafe because these patterns are not safe in the general case and absent an actual proof of correctness embedded in the source code, a static analysis pass can only deal with the general case.


That's my point though, in both cases the developer needs to add additional syntax to make the intent clear. "Naive" Rust code that tries to do that stuff is rejected by the compiler.

I've expressed myself poorly in my original comment and apparently it looks like I was criticizing Rust but I wasn't. I was just pointing out that safety didn't come "for free" by toggling a compiler flag, you have to change the way you code some things. If C and C++ were to become safe languages, code would need to be rewritten using things like MaybeUninit, split_at_mut, RefCell etc...


I would dispute that those common patterns are indeed safe, even if you could argue they are when first written, because code changes and can suddenly break your preconditions if they aren't enforced in the code itself.

C codebases then follow certain defensive programming customs to avoid reading uninitialized or out-of-bounds memory, at the cost of some performance. This is the right trade-off in C but, funnily enough, the more restrictive borrow checker has the opposite effect: you can give out immutable and mutable references with wanton abandon because they get checked for unsafe behavior. It's the same difference as between a gun where the best practice is to keep it unchambered at all times to avoid the risk of a misfire, and a more modern gun with a safety: it's one more thing to think about but it actually smooths the operation.


I'd describe the pattern in slightly different terms: when done right, restrictions in a programming language (or library/framework) are liberating for the programmer.

The restriction of immutability spares the programmer from worrying about whether unknown parts of the codebase are going to decide to mutate an object.

JavaScript's single-thread restriction (not counting web-workers) closes the door on all manner of nasty concurrent-programming problems that can arise in languages that promote overuse of threads. (Last I checked, NetBeans uses over 20 threads.)

Back to the example at hand, C has no restrictions, but that hobbles the programmer when it comes to reasoning about the way memory is handled in a program. It's completely free-form. Rust takes a more restrictive approach, and even enables automated reasoning. (Disclaimer: I don't know much about Rust.)


> Rust borrow checker requires special annotations and restrictions put on the code to do its job.

This is a good thing, because it makes lifetimes and ownership explicit and visible in the code. It serves a similar purpose to type annotations in function signatures.

> Or having multiple mutable pointers/reference to the same object

Sure you can have that with `unsafe`. And this is a good thing, because multiple mutable pointers to the same object is at best bad coding practice that leads to unsafe code, and you should avoid that in any language, including the ones with GC. Working with codebases where there are multiple distant things mutating shared stuff is a terrible experience.

If a C/C++ version of "borrow checker" could mark such (anti)patterns at least as warnings, that would bring a lot of value.


He wasn't criticizing Rust, he was just stating facts.


I read it differently, because he started with "You(r) Rust borrow checker", which automatically put his point in opposition. But now, after reading it without this "You" at the beginning, I agree it was neutral.


My guess is that comment was made on a phone or tablet. It has a lot of small, autocorrect-looking mistakes. Other than the "You", for example there is one part where "initialized" is used where "uninitialized" is clearly intended.


I wish I could use that excuse, I just have the habit of posting first and then proofreading and editing, but noprocrast kicked in and then I switched to something else and now I can't edit it anymore.

However if I sounded like I wanted to belittle Rust I really expressed myself poorly, I love the language and hope it'll eventually become the new C++. If anything I was attempting to make the opposite point: you can't fix C/C++'s flaws with a smarter compiler. We're not two releases of GCC away from having safe C without having to change anything.


Can you explain why multiple mutable pointers is bad practice?

I understand the benefits and the risks of them, and understand how Rust prevents both, but I don't yet understand why it's bad practice, and am interested to learn why.


The one that affects you as a programmer most is Iterator invalidation. Iter borrows from the vector, you mutate the vector, iter blows up. Simple really. But a lot of code is like this. Borrow from hashmap, insert into hashmap, the slot gets moved around and your pointer is now invalid. That’s just vectors and hashmaps; imagine the possibilities in a much more complex data structure.

There are compiler optimisations you can do if the compiler knows about aliasing, but that's not so much a software authorship problem. There are some curly problems with passing aliased mutable pointers to a function written for non-aliased inputs, like memcpy, and I imagine quite a lot of other code.

But common to all of these things is that it’s pretty hard to figure out if the programmer is the one who has to track the aliasing. In hashmap pointer invalidation, your code might work perfectly for years until some input comes along and finally triggers a shift or a reallocation at the exact right time. (I know this — I recently had to write a lot of C and these are the kinds of issues you get even after you implement some of Rust’s std APIs in C.)
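
The same hazard shows up in plain C with realloc; a minimal sketch (hypothetical code) of what the vector/hashmap case boils down to:

    #include <stdlib.h>

    int main(void)
    {
        int *buf = (int *)malloc(4 * sizeof *buf);
        if (buf == NULL)
            return 1;
        int *item = &buf[0];                          /* pointer into the buffer, like an iterator */
        int *grown = (int *)realloc(buf, 1024 * sizeof *buf);  /* may move the allocation */
        if (grown == NULL) {
            free(buf);
            return 1;
        }
        *item = 7;   /* if realloc moved the block, item now dangles: undefined behaviour */
        free(grown);
        return 0;
    }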


Is this still bad practice if the container can detect when this happens, like Java's? Not saying that it always has to throw like Java, I could imagine implementing a weak iter which we'd check before each operation.


I feel like this is something common c++ developers know, and it's not worth all the baggage to tag it. You just control the iterator inside the loop instead of in the loop declaration.


Because it leads to hard to understand code.

If N unrelated (or loosely related) things can mutate the same object, then you get an O(x^N) explosion of potential mutation orders, and in order to understand that, you need to understand all the (sometimes complex) time-relationships between these N objects. This gets even worse when some of these objects are also pointed to from M other objects...

On the flip side, in the case of using a simple unique_ptr (or a similar concept), this trivially reduces to a single sequence of modifications.


Don't we in Rust still conceptually modify the vector multiple times, just through different means (usually a generational index or something)?


> Sure you can have that with `unsafe`

The parent was talking about the borrow checker so I only was talking about safe Rust code. Obviously if you consider that the entire C/C++ codebase is in a big unsafe {} block it'll work... because it won't do anything at all.


From Rust docs:

It's important to understand that unsafe doesn't turn off the borrow checker or disable any other of Rust's safety checks: if you use a reference in unsafe code, it will still be checked. The unsafe keyword only gives you access to these four features that are then not checked by the compiler for memory safety.


One of those four features is dereferencing pointers, and unlike references, pointers are not checked by the borrow checker. So you could bypass the borrow checker using unsafe code in a way, though most probably you should not.


> This is a good thing, because it makes lifetimes and ownership explicit and visible in the code.

No, it is additional burden. If it was possible to do it without annotations, you bet we would do it!


It's only a burden to the extent that type annotations are a burden. It's definitely possible (including in Rust) to do away with both, but that has downsides of its own.


I am not saying you cannot write code without them, but that you cannot do away with them without losing what they bring.


While it isn't at Rust level, that doesn't stop Google and Microsoft from trying.

"Update on C++ Core Guidelines Lifetime Analysis. Gábor Horváth. CoreHard Spring 2019"

https://www.youtube.com/watch?v=EeEjgT4OJ3E


>You Rust borrow checker requires special annotations and restrictions put on the code to do its job. I don't think you could something like that automatically on a C or C++ full codebase without having to manually annotate and refactor it somewhat.

What about with a constrained (not necessarily general purpose) AI with the expertise of Scott Meyers, Andrei Alexandrescu, Herb Sutter and Alexander Stepanov?


That is what Microsoft and Google are trying to do with C++ Lifetime Profile.

https://herbsutter.com/2018/09/20/lifetime-profile-v1-0-post...

"Update on C++ Core Guidelines Lifetime Analysis. Gábor Horváth. CoreHard Spring 2019"

https://www.youtube.com/watch?v=EeEjgT4OJ3E

While it might never be Rust like due to language semantics, it is way better than not having anything.


I think Rice's theorem means that you can't really do that without restricting/annotating semantics like Rust does.


No need to imagine, it's becoming reality -> https://internals.rust-lang.org/t/c-lifetime-profile-1-0-a-k...


> Extended characters in identifiers may now be specified directly in the input encoding (UTF-8, by default), in addition to the UCN syntax (\uNNNN or \UNNNNNNNN) that is already supported:

    static const int π = 3;
    int get_naïve_pi() {
      return π;
    }
Lovely!


The next obfuscated code competition is sure gonna be interesting.


yup, zero-width-space is going to do a number on everyone.


For what it's worth, being able to enforce conventions like

    present?(p) // return bool
    get!(g) // throw if not found
a-la ruby/elixir could be good


I don't know if you are being ironic or not, but the non-English-speaking world will love it.


As a non-English programmer who has seen a lot of code not written in English: please, god, NO. The mixture of English and other-language identifiers in external libraries makes me cry.


Yep, writing non-English keywords in C++ is like having Prolog code inside C++ :-)

(but as a non-English person, I find it very helpful to be able to have Unicode in strings)


[flagged]


> Someday we might have multilingual frameworks and not restrict programming to the privileged few that could learn a second language.

I seriously doubt it. English is really easy to learn (sure, the pronunciation is weird, but who needs it when you are writing code), and by learning English you can understand all the code with identifiers in English, written not only by native speakers. Additionally, I wouldn't want a person who couldn't be bothered to learn a natural language to be a programmer in my company (if it was based in a non-English-speaking country, of course).

> "reading mixed language code is hard, despite me being fluent on both! everyone should speak what is more convenient to me!" > leaving billions of people that do not know english out.

How is writing bilingual code including more people? You are limiting the audience from the total number of people who speak English to the subset of people who know both languages.


I really disagree with the idea that math uses greek letters just to make things less accessible. If you have a better symbol than the large S for an integral, or Sigma for sums, I'd love to hear it.


People who complain about math notation and want something more verbose like programming usually don't fully understand the notation they're complaining about. In particular, there are fairly strong conventions for which letters get used for which purposes. Longer variable names that try to be more descriptive often aren't as necessary as they might seem, and in many cases would make an expression less generic than it really is.

And then there's the fact that what's easy to type on a standard keyboard without a specialized IME has little bearing on what notation is most effective for the reader, or for someone writing on paper or blackboard.


It would be cool if more papers had a glossary for the symbols they use, though. If you're going to use a bunch of single character symbols, at least put somewhere what they are.


The abstract and introduction to a math paper usually do so, though sometimes they slip up, and then in the middle of the paper you find some strange symbol and have to manually binary-search for it.


I wish I could click on a symbol in an equation and have it either pop up a full name / definition for standard symbols, or scroll to where the variable was defined for paper-specific symbols.

When studying papers, I spend way too much time trying to answer questions like, "what was 'n' again?"


I usually write notes as I read anyway — not just marginalia but typically on the back of the first page or in my notebook. Back of the first page is an excellent place for things like that.


I believe that Archimedes used Greek because it was the only language he knew...


because you read so many papers by him...


I actually did

When I was a kid, Greek and Latin were taught in school from elementary school onwards (I think you call them primary schools)

I'm sorry for you if you didn't

They are pretty amazing


I wish math used normal English, not Greek letters - English is my second language too.


They are Latin letters, not English. Unless you mean futhorc. Which would be awesome.


He didn't say "english letters", he said "english" which is a language. The implication is that we should use conventions (including potentially whole words) that are familiar to the English-literate world and not Greek glyphs.


>The implication is that we should use conventions (including potentially whole words) that are familiar to the English-literate world and not Greek glyphs.

The "english-literate" world (emphasis on literate) is, or historically has been, very familiar with Greek glyphs.

And that's just for humanities.

The mathematics- and physics-literate world, doubly so. Everybody uses the pi, theta, sigma (e.g. in the summation formula), etc. symbols...


> The "english-literate" world (emphasis on literate) is, or historically has been, very familiar with Greek glyphs.

I would be shocked if even 1% of native English speakers could list the full Greek alphabet much less have a comprehensive familiarity with what each letter means across all branches of mathematics. I would be shocked if even 5% of native English speakers could tell you what theta generally means in geometry, much less all its other meanings.


Except for "full alphabet", all of that applies to the English alphabet as much as it does for the Greek alphabet. But both of you seem to disagree on what "literate", "very familiar" and "historically" really mean. Which is fine, language is not made to be precise - luckily we can use precise symbols where precise meaning matters, not English phrases. For example in math and physics.


> Except for "full alphabet", all of that applies to the English alphabet as much as it does for the Greek alphabet.

Native English-speakers are very familiar with the "English alphabet" but that's irrelevant because we're comparing English words (or phrases) with Greek symbols.

> But both of you seem to disagree on what "literate", "very familiar" and "historically" really mean.

Perhaps, but I'm using standard meanings available from any English dictionary.

> Which is fine, language is not made to be precise - luckily we can use precise symbols where precise meaning matters, not English phrases. For example in math and physics.

You can assign precise meaning to words or phrases as easily as you can to Greek symbols. The two approaches differ in that it's easier for (at least English-speaking) humans to remember words or phrases that relate to or approximate the precise meaning than a random Greek letter (I'm sure someone will demand evidence, but this is hardly an extraordinary claim compared to its inverse) while letters (Greek or otherwise) are more expedient to write by hand.


https://en.m.wikipedia.org/wiki/Greek_alphabet

Many of these characters are hard to type on regular keyboards, which is probably what the parent commenter was referring to.


From a non-english speaking country: I prefer US-ASCII for code. Anything else just obfuscates.


There are lots of other languages that support full UTF-8 in identifiers (e.g, Go) and the non-English-speaking world doesn't take advantage.


Every such language does it insecurely. The only exceptions are Java, Rust and cperl.

I'd rather have no Unicode identifiers than insecure identifiers which don't follow the Unicode security guidelines for identifiers.



I've done a similar thing when I was writing a language (with generics) that compiled to Go and needed to implement name mangling. But in any case, this isn't an example of a non-native speaker using utf-8 to write in his native language. :)


Tbh, I mainly shared the example code because I was entertained by the contrast of sophisticated naming and crude approximation. Still, it illustrates that this can come in very handy if used properly.

Of course you can also cause all sorts of mayhem. Even disregarding fun such as GREEK QUESTION MARK looking like a semicolon and zero-width spaces, I probably would not use this feature in enterprise code. Too likely that some tool somewhere (e.g. an alternate compiler?) is KO'd by characters outside of basic ASCII range (which has served us well and will keep doing so).


> Several C++20 features have been implemented:

> P0912R5, Coroutines (requires -fcoroutines)

Nice to see a bunch of C++20 features making it in. Coroutines seems like a big one!


Lovely! Time to see my code go even faster, for free :)


    memory.c: In function ‘mk_entry’:
    memory.c:116:12: internal compiler error: in saved_diagnostic, at analyzer/diagnostic-manager.cc:84
      116 |   return (struct entry) {safe_calloc(end - start, 1), start, end};
          |          ^
    Please submit a full bug report,

Goes to look at README.Bugs. Holy cow, I don't have time to check all those places to see if it has been reported already.


The GCC community is normally very good (if blunt) at picking up duplicate bugs and linking them to the right place. https://gcc.gnu.org/bugzilla/ is all you need. Just don’t feel bad if your bug is closed!


Thanks, it was fixed already :)


Every time I look at GCC's bugtracker, I feel a mix of disgust and astonishment at the state of such a foundational piece of software. I'm amazed it works as well as it does.


(side note for RMS, I still have a "RUNGCC" sticker on my car (it's been 5 years !!!))


wish gcc 10 was built into ubuntu 20.04



You want a bleeding edge release in your non-bleeding edge LTS distribution?


it does not make sense to pair development toolchain versions with operating system versions.


What do you think LTS entails, exactly?

One of the core ideas of working with LTS is that you can build your software on an LTS release and ship it to somebody else on the same LTS release, either as a source or as a binary.

If you want the latest GCC, that's fine, you're not forced to use the default compiler distributed with your OS. But it doesn't make sense to update the default compiler used in an LTS release. If you want that, then you don't want LTS.


eh, no. the OS package manager is for sysadmins. LTS is for sysadmins to not have to worry about versions changing under their feet rapidly when they apply security updates.

If you want to develop an application, you use your own toolchain. But yes, I know most C++ people don't do this because C++ tools don't easily support it. But that's on C++ for not having a pyenv, rustup, multiruby, etc. equivalent.


C++ lets you statically link the standard library though, right? Or am I missing your point?


Yes, but I'm quite sure GP is talking about using LTS so the installed libraries are the same as you built with.

IIRC there's still an issue with gethostname which must be dynamically linked.


> it doesn't make sense to update the default compiler used in an LTS release. If you want that, then you don't want LTS

This needs to be emphasized.


> One of the core ideas of working with LTS is that you can build your software on an LTS release and ship it to somebody else on the same LTS release, either as a source or as a binary.

Yes, and updating compilers don't prevent that at all. You can use GCC 10 to ship code that will build and run on Ubuntu 12.04 without issues. Xcode 11 can ship code that works back to macOS 10.6 and Visual Studio 2019 can still optionally target windows fucking XP !


> [...] and updating compilers don't prevent that at all.

This is incorrect. In practice, for larger code bases, upgrading to a newer version of GCC or Clang is something that must be done purposefully, and you must test.

Sometimes it turns out that your code relies on some compiler behavior which has changed. Sometimes newer compilers are stricter than older compilers. There are plenty of real-world cases of these problems!

> Xcode 11 can ship code that works back to macOS 10.6 [...]

There are a number of features that are specific to the macOS toolchain which make this possible. Take a look at the "-mmacosx-version-min" flag on the macOS compiler. This selectively enables and disables various APIs. These features don't solve all the compatibility problems, either.

> Visual Studio 2019 can still optionally target windows fucking XP !

We're talking about Linux here. The Windows toolchain is radically different.


> Sometimes it turns out that your code relies on some compiler behavior which has changed.

In practice this can be "undefined behavior" like dangling pointers or data races. Maybe the new version of the compiler happens to reorder a couple of instructions (which it's perfectly within its rights to do), which turns a "benign" race into a crash or an exploitable security issue. Is it your fault for writing these bugs? Sure. But if you're a big organization, and you know you have bugs like this, is this a reason not to upgrade your compiler? Absolutely. All the real-world testing you've done on your current binaries has value, and losing some of that value needs to be weighed against the benefits of upgrading.


> This is incorrect. In practice, for larger code bases, upgrading to a newer version of GCC or Clang is something that must be done purposefully, and you must test. Sometimes it turns out that your code relies on some compiler behavior which has changed. Sometimes newer compilers are stricter than older compilers. There are plenty of real-world cases of these problems!

So if you hit issues, do what you do on every other system which is "installing older Xcode / Visual Studio" ? That would not be an issue at all if the toolchain wasn't vendored as part of the distro, you'd just have something like rustup that allows you to use whatever version of the toolchain your project requires.

> There are a number of features that are specific to the macOS toolchain which make this possible. Take a look at the "-mmacosx-version-min" flag on the macOS compiler. This selectively enables and disables various APIs.

yes ? https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Optio...

> We're talking about Linux here. The Windows toolchain is radically different.

what I'm saying is exactly that the Linux desktop world would be in a far better place if Linux followed the Windows / macOS way of vendoring toolchains.

The only thing that is an actual issue on Linux if you want backward compatibility is glibc which does not have an easy way (AFAIK) to say "I want to target this old glibc version". But that's not the issue for what we are talking about which is "getting newer compilers on a given distro" - Red Hat & derivatives manage this without issue with the various devtoolsets for instance.


> So if you hit issues, do what you do on every other system which is "installing older Xcode / Visual Studio" ? That would not be an issue at all if the toolchain wasn't vendored as part of the distro, you'd just have something like rustup that allows you to use whatever version of the toolchain your project requires.

Or you could switch to LTS, which achieves the same thing.


> Yes, and updating compilers don't prevent that at all.

You do understand that ABI backward compatibility is not ensured, don't you?

https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html

Some software packages even break between distro releases.

The primary value of a distro is to provide a fixed platform that application developers and users can safely target. Risking ABI breakage just because a small number of users wish to be on the bleeding edge, without wanting to do any of the work to install their own software, is something that's very hard to justify.


Let me quote exactly the page you linked :

> The GNU C++ compiler, g++, has a compiler command line option to switch between various different C++ ABIs. This explicit version switch is the flag -fabi-version.

If you want to target a given distro, you set -fabi-version to that distro's ABI, just like you set -mmacosx-version-min on macOS or _WIN32_WINNT on Windows.


That's something that might be useful for one of those rare end-users who for some reason want to try to build something with a bleeding edge compiler.

That is also mind-numbingly absurd to force upon the vast majority who couldn't care less about the bleeding edge and want a stable platform to act as a fixed target without risking random ABI breakages.

I should not be forced to endure a brittle and fragile and overly-complex compilation process just because a random guy somewhere had a whim about taking a compiler out for a spin.

The world expects stability. If you wish to try out some stuff, just download the compiler and build the damn thing yourself. Hell, odds are that there's already a PPA somewhere. So where's the need to screw over everyone?


The ABI (the itanium ABI) is fixed, but sometimes there are bugs in the compiler and gcc deviates from the ABI in some corner cases. When the bug is fixed the gcc ABI version is bumped, which most of the time doesn't matter, but if the bug fix affects you (and very often it doesn't), you can 'roll it back' by selecting a specific ABI version. Not fixing the bug is not an option because it means that GCC would be incompatible with other compilers that don't have the bug.


> ABI (the itanium ABI) is fixed,

That's irrelevant. The only aspect that is relevant is that the C++ standard does not define nor assume a standard or even fixed ABI, thus each compiler vendor just goes with the flow.

In some rare cases where a compiler vendor also controls the platform and essentially holds a vertically integrated monopoly, they are in a better position to not break the ABI all that often. Everyone else doing stuff in C++, whether writing programs or libraries or compilers or putting together OS distributions, just faces the music.


> That is also mind-numbingly absurd to force upon the vast majority who couldn't care less about the bleeding edge and want a stable platform to act as a fixed target without risking random ABI breakages.

but why would you have "random ABI breakages"? There isn't any issue with using e.g. VS2010, 2012, 2015, 2017, 2019 to make Windows software, for instance, so what makes you think Linux would be any different if you could install a compiler version of your choosing instead of the one fixed by Ubuntu / Debian / whatever? Why is it a problem for C/C++ but not for Go / Rust / every other native-compiled language in the universe?


> there isn't any issue with using e.g. VS2010, 2012, 2015, 2017, 2019

You're conflating things and in the process making absurd comparisons. Windows is not Linux and GCC is not msvc++. GCC is very vocal on how they don't support ABI compatibility, and Microsoft was very vocal ensuring they enforce ABI compatibility from Visual Studio 2015 onward. There's a fundamental difference in multiple dimensions, which isn't bridged by mindlessly stating that GCC and msvc are both compilers.

ABI is the bane of C++. Why are you posting comments on a thread about C++ and the problems created by ABI if you are oblivious to this? The only thing you're managing to do is generate noise and make everyone waste their time with your posts.

> so what makes you think Linux would be any different

Because it is, and at many levels. Just the fact that you are entirely oblivious to this basic fact is enough to convince anyone not to bother with any further replies to this thread.


> GCC is very vocal on how they don't support ABI compatibility,

What? Apart from bugs, GCC has maintained backward compatibility for more than a decade, for both the compiler ABI and the standard library.

They were forced to break ABI for C++11 to implement the new string and list semantics, but the old ABI is still available.


> Windows is not Linux and GCC is not msvc++

That does not mean anything. Linux can be whatever people with enough free time want it to be. It's 100% possible to imagine a Linux-based system with a Windows-like userspace and pacing. Hell, technically you could just ship the Linux kernel, an init system, Wine and live in an almost windows-y world - even cl.exe works under wine.

> Why are you posting comments on a thread about C++ and the problems created by ABI if you are oblivious to this?

Because it's an entirely self-inflicted problem, caused by putting toolchains (among two thousand other things) in distros.

Again, why do people have no trouble shipping Rust, which has zero ABI guarantees, to ten-year-old Ubuntus, while C++ couldn't? The answer is, it totally can if you just let it and let go of shipping dev packages in distros, instead relying on C++-specific package managers such as conan or vcpkg for your dependencies.

I build my own software with latest versions of clang, Qt, boost and have no trouble distributing it to fairly old distros.

> Because it is, and at many levels.

yes, I'm asking about what could be, not what is ?


> GCC is very vocal on how they don't support ABI compatibility

Source? They very rarely break ABI. They even release ABI break fixes sometimes.

> Microsoft was very vocal ensuring they enforce ABI compatibility from Visual Studio 2015 onward.

They try to keep ABI stable, but nothing is promised until the actual releases happen.

Always expect a new ABI version at some point in the future!



It is available in the repos, it just isn't the default.


Compiling gcc is actually not so bad (and sometimes necessary if you want to e.g. use drd to debug OpenMP, so you can make libgomp use pthreads primitives that drd knows how to deal with).



It's nice when they're built in, sure. But gcc is pretty easy to build on its own.


If you want the newest version of software, Ubuntu does not cater to that.


If you are a dev then you may prefer using an LTS flavor of Ubuntu with a PPA for whatever you need to be newer. For important stuff these PPAs are provided by Canonical, so I run a newer kernel and nvidia drivers on an older LTS.


Well, you might want to run a newer/non-LTS release via lxd/lxc. It's probably a much better idea than pulling in PPAs willy-nilly.


But at this point, why not simply run Arch Linux?


You might want LTS, upgrading some packages only when needed/forced, and not play the "update" lottery. New updates not only bring you cool new features and fixes, they bring new bugs, and sometimes features are removed or GUIs are moved around. At least with my LTS I've worked around the existing bugs, upgraded the things I needed to from PPAs, browsers are the latest versions and my IDE is auto-updating too.


Stable base. I'm pretty fond of Ubuntu LTS as the OS running the bare metal, then [docker] containers on top of that to run applications, which means I can have as new of apps as I want while keeping a nice boring stable kernel/bootloader/sshd/whatever.


I'm not sure I understand. You want a stable host system without the need for forced, sometimes breaking, upgrades - so an lts release "on the outside".

You want to develop with new tooling, so a newer release under lxd/lxc. But you probably want to deploy on an LTS release - maybe the one coming in a year?

You could of course develop under arch in lxd/lxc - then validate for an lts release once your code is "done".

But I don't think you'd generally want to deploy to Arch - as you'd have to play catch-up in order to keep up with security patches (or backport yourself)?


Just use docker and you can have any toolchain you want any time you want


Just use Fedora, which targets developers.


They still may have a snap for it....


just add it in as a ppa!


It doesn't seem to be available on the main site or any of the mirrors I tried.


> It doesn't seem to be available on the main site

Surely it's available from here at the very least?

https://gcc.gnu.org/git.html

> or any of the mirrors I tried

Seems to be to me?

ftp://ftp.mirrorservice.org/sites/sourceware.org/pub/gcc/releases/gcc-10.1.0/



