This might be a terrible idea, but maybe it is a good time to scrap the concept of a default completely.
Wouldn't it be more future proof just to FORCE everyone to have to specify which language version they're targeting? That way when a new one is added it has absolutely no impact on the historical ones.
It isn't like c++14 is going to be the last version ever.
PS - If GCC is still in use in 2098 then things may get awkward.
>>just to FORCE everyone to have to specify which language version they're targeting?
There is a thought.
I see a lot of trouble from this proposal because the compilers and supplied libraries have a history of including extra things. Until quite recently, C++ headers would drag in things that nobody asked for. Personally, I was surprised to find that on Windows I could throw std::runtime_error without including the header for it.
At the end of the day a lot of code would break, because nobody is sure exactly what standard they are actually using.
Is there a technical reason that would prevent the compiler from honoring a per-file pragma that allowed the developer to define how the code was expected to be interpreted by the compiler?
I find it terribly confusing. It means that Haskell is almost never the standard language I can rely on, but a weird combination of extensions switching on "non-standard" behaviours.
> It means that Haskell is almost never the standard language I can rely on, but a weird combination of extensions switching on "non-standard" behaviours.
I can't think of a weird combination of behaviors from extensions that would change semantics... maybe it's because I don't use that many extensions usually.
When I do, they are related to type-level programming, and all they seem to affect is type inference.
Never confused me. In general, on a bigger project you'll have a bunch of approved extensions, and everything extra goes through very strict code review to see whether it helps more than it confuses.
I don't know whether you can, but it would be asking for trouble. Let's say you call function f in library L and get back a std::list, and then pass it to function g in library M, which linked to a different implementation of std::list. At best, that would lead to a crash.
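A sketch of what can go wrong (the library names, files, and implementations here are made up):

// Sketch only: libL is built against one standard library and libM against
// another (say libstdc++ vs. libc++), so each has its own idea of
// std::list's layout.

// --- libL: f.cpp ---
#include <list>
std::list<int> f() { return std::list<int>(3, 42); }

// --- libM: g.cpp ---
#include <list>
void g(const std::list<int>& xs) { /* walks xs using its own layout */ }

// --- application, linked against both ---
int main() {
    g(f());  // same type name, two incompatible binary layouts: an ODR/ABI
             // violation, and a clean crash is the *best* outcome
}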
alias c99="gcc --std=c99"
alias gc99="gcc --std=gnu99"
alias c++11="gcc --std=c++11"
alias g++11="gcc --std=gnu++11"
and so on could be added to the compiler distribution, leaving "gcc" and "g++" at their current defaults for legacy code. If the changes between the dialects are big enough to cause compatibility issues, then it makes sense to actually treat them as different languages with different compilers.
It's something we're discussing and exploring. The general thought is that Clang could do this for everything but sized deallocations (and it's unclear how much of an issue those would pose).
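For anyone wondering, "sized deallocation" is the C++14 operator delete overload that also receives the object's size; a minimal sketch of what could trip existing code:

#include <cstddef>
#include <cstdio>
#include <new>

// C++14 adds a replaceable operator delete that also gets the size;
// compilers in C++14 mode may call it instead of the unsized form.
void operator delete(void* p, std::size_t size) noexcept {
    std::printf("sized delete of %zu bytes\n", size);
    ::operator delete(p);   // forward to the ordinary unsized form
}

int main() {
    int* p = new int(1);
    delete p;   // with sized deallocation enabled, the overload above runs
}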
C++14 is an awesome language. I haven't had much contact with C++ development for over a decade, but I keep one eye on how C++ develops, and I'm amazed at how flexible things have become without abandoning strictness altogether in the C# "dynamic" kind of way.
Keeping strictness in the language allows for awesome tooling, so yesterday I tried to find a good IDE for C++14. There's not much on the market. CLion is decent but doesn't handle code completion for auto return types.
Clang does, so the Atom editor with a clang-based auto-complete plugin works better than CLion.
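To be concrete, the kind of code I mean (a minimal sketch; Widget and make_widget are made-up names):

#include <memory>
#include <string>

struct Widget { std::string name; };

// C++14 return type deduction: the tool has to work out the deduced type
// before it can offer members after "w->".
auto make_widget() { return std::make_shared<Widget>(); }

int main() {
    auto w = make_widget();
    w->name = "hello";   // completion at this point is what CLion misses for me
}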
But I have a knack for finding issues in the things I touch, so I immediately encountered something that's not supported by clang autocomplete. It turned out, of course, that I'm not the first one to hit this issue: https://llvm.org/bugs/show_bug.cgi?id=14446
Since it's been hanging there since 2012, I understand it's a low-priority issue. Could you give me any pointers on how I could help implement this feature? Maybe point me to the files I'd need to alter to attempt to implement it?
Pretty much. Taking the "will this break stuff?" worry to its logical extreme would mean that gcc should default to a pre-standardization K&R dialect and g++ to a cfront dialect contemporary with the 1st edition of Stroustrup.
The world moves on. Though if you're an active project, moving along with the world is probably a better option than throwing in --std=c++really-old and deliberately refusing to use new stuff.
Forcing the update of old code doesn't make me sad. Considering new versions of the language should be improvements over previous ones, it may be a good side effect. If your code breaks with the new compiler and language standard, just change your build options to explicitly use the previous one.
If you maintain one of the rare codebases that would break, then just add "--std=c++98" to your compiler flags.
GCC has already been getting steadily more strict, by default or at least opt-in with -Wall -Werror. The last time we updated our build environment, we had to go in and #include all the headers that were formerly implicit (or #include-d by other headers), and add 'const' every time we referred to a string literal. The possibly-uninitialized variable checker also got a lot more sophisticated and now follows into called functions.
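The string-literal change, for anyone who hasn't hit it, looks roughly like this (a minimal sketch, not our actual code):

#include <cstdio>

int main() {
    char* s = "world";         // string literal to char*: deprecated in C++03,
                               // ill-formed in C++11; newer GCC warns, and
                               // -Werror turns the warning into a build failure
    const char* ok = "world";  // the const-correct spelling
    std::printf("%s %s\n", s, ok);
}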
New versions of gcc routinely fail to compile Firefox out of the box. A good example was gcc 4.7's -Wunused-local-typedefs, which is part of -Wall. Local typedefs seem to be a common pattern in Firefox's generated C++ (for IDL bindings).
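The pattern is roughly this (illustrative only, not the actual generated bindings):

// Illustrative only; this is not Firefox's real IDL-generated code.
int ConvertArgument(int raw) {
    typedef int NativeType;   // helper typedef emitted by the generator
    // ...this particular binding happens never to use NativeType...
    return raw;               // GCC 4.7 with -Wall: -Wunused-local-typedefs fires
}

int main() { return ConvertArgument(0); }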
This is why I think static bug checking and compiling should be two different tools. There is no reason to fail a build which used to run, even if it has bugs.
Doing all those checks at build time is slowing the build down too.
Splitting static checks and compiling into separate tools will not prevent your builds from breaking when you update your tools, if your build process is deliberately configured to break whenever a new warning appears. GCC adding new warnings will not break your builds unless you specifically ask for that by using -Werror (or by having your CI system reject any builds that produce warnings).
At least with clang, -Weverything is not measurably slower than no warnings at all on any vaguely realistic code base.
A CI server is totally different to downloading a tarball and trying to run it. Using -Wall and -Werror in that situation is a terrible annoyance.
With regard to speed, I just know that tcc is 10 times faster than gcc, and that makes a huge difference to me. Anything pointless is really irritating because I know how slow gcc is.
Obviously the issue is with the app developers. That is why I am suggesting splitting static checking and compiling, so shit developers at least stand a chance of doing the right thing.
(I would rather a fast compiler until I am ready to release a final product and then swap. I probably do more cross compiling and creating custom toolchains than you.)
(I know, it doesn't work like that. 99.5% means syntax elements or some such, not lines of code. But it still works out the same in a medium-to-large code base; 99.5% compatibility breaks more than you expect.)
I maintain a project that compiles as both C++11 and C++03. One pain point is that the headers changed between them: #include <tr1/memory> became #include <memory>. This is complicated further by libc++, which doesn't have <tr1/memory>.
That's fairly straightforward to work around, however.
I've used both autoconf and cmake to do functional tests for which headers are available, and then include the most recent available out of <memory>, <tr1/memory> and <boost/shared_ptr.hpp>, bringing the types into a project::compat namespace so the rest of the codebase doesn't need to care which is in use (roughly as sketched below), and the same for a number of other tr1/boost types now in C++11/14.
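A rough shape of that compat header (the HAVE_* macros and the project::compat name are illustrative, not my actual code):

// compat/memory.hpp (sketch): the HAVE_* macros would be set by the
// autoconf/cmake feature checks mentioned above.
#if defined(HAVE_STD_SHARED_PTR)
#  include <memory>
namespace project { namespace compat { using std::shared_ptr; } }
#elif defined(HAVE_TR1_SHARED_PTR)
#  include <tr1/memory>
namespace project { namespace compat { using std::tr1::shared_ptr; } }
#else
#  include <boost/shared_ptr.hpp>
namespace project { namespace compat { using boost::shared_ptr; } }
#endif

// Client code then only ever says:
//   project::compat::shared_ptr<Foo> p(new Foo);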
I'm now at the point where for some projects I've been able to strip out this entirely and just use the C++11 types directly.
That is relatively straightforward in the scope of a large project (and it's what I do as well), but stuff like this really makes it harder to share small-to-medium sized C++ libraries.
I don't understand why C++03 didn't just use the std:: namespace.
Unfortunately this doesn't handle the libc++ case, which needs #include <memory> even in C++03.
What I ended up doing is something like this:
// <ciso646> is nearly empty, but libc++ defines _LIBCPP_VERSION in every
// header, so including it ensures _LIBCPP_VERSION is defined if necessary
#include <ciso646>
#if defined(_LIBCPP_VERSION) || __cplusplus > 199711L
// C++11 or libc++
#include <memory>
using std::shared_ptr;
#else
// C++03 or libstdc++
#include <tr1/memory>
using std::tr1::shared_ptr;
#endif
If you're exposing __cplusplus=199711L then you really ought to continue to support #include <tr1/memory> (i.e. C++03 style). As long as you do that there's no problem and the suggested thing should work.
Yep. (TR1 was published after C++03, they're totally different things.) What we did with TR1 was actually very friendly to migration - we provided std::tr1::meow in <meow> (in 2008 SP1), then in 2010 we aliased that to std::meow as C++0x was being finalized, then in 2012 we switched the True Name to std::meow and made std::tr1::meow the alias for back-compat. At some point in the future we'll probably remove the tr1 aliases completely.
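The end state is roughly this shape (a simplified sketch, not the actual header):

// Simplified sketch of the 2012-and-later arrangement; NOT the real
// <memory> header, just the direction of the aliasing.
namespace std {
    template <class T> class shared_ptr { /* real implementation lives here */ };

    namespace tr1 {
        using std::shared_ptr;   // back-compat alias for code still saying std::tr1::
    }
}

int main() {
    std::shared_ptr<int>      a;   // the True Name
    std::tr1::shared_ptr<int> b;   // same type, reached through the alias
    (void)a; (void)b;
}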
For the Technical Specifications, we're following their header directory and namespace rules (<experimental/meow>).
C++14 didn't introduce any new breaking changes to pre-C++11 functionality, so switching to C++11 first would not make the migration now any easier and would introduce the need for another migration in the future.