The level of coordination here in the C++ space is astonishing and commendable. This stuff used to take over a decade to trickle in and now it's all falling into place right on schedule.
It's quite amazing to see the degree of change; unfortunately, as C++ compilers are so integral to the system, shipping newer code for older platforms is proving quite difficult. We recently went to C++11 and had to drop binary builds (through OBS and PPA) for a number of still-maintained but older platforms, sadly.
Sure, I like many of the new features of C++ that could be compatible with or adapted to C (constexpr, auto, typed enums, or even (gasp).. templates for generic programming) and I'm jealous of the level of innovation in C++ (which wouldn't be that difficult to backport to C since the features already exist in C++), especially for embedded.
A lot of people write C in C++ mode just for convenience (you get things like const), but it's not the same as writing actual standards-compliant C code.
If you need portable C code, forget about all that stuff. Forget about a lot of things since some compilers are truly atrocious.
Also, modern features like atomics and complex numbers are implemented as language features in C but as template libraries in C++. Likewise, type-generic math in C11 uses the C-specific _Generic keyword, whereas C++ uses templates and function overloading.
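Roughly, the difference looks like this (sketch written from the C++ side; the C11 spelling is shown in a comment):

```cpp
#include <cmath>
#include <complex>

int main() {
    // In C++, type-generic math falls out of ordinary overloading/templates:
    double r1 = std::abs(-2.5);                       // double overload
    double r2 = std::abs(std::complex<double>{3, 4}); // complex overload, == 5.0

    // C11 gets the same effect with the language-level _Generic keyword, roughly:
    //   #define my_fabs(x) _Generic((x), float: fabsf,        \
    //                                     double: fabs,       \
    //                                     long double: fabsl)(x)
    return static_cast<int>(r1 + r2);
}
```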
This sounds like it's more of a library thing than language thing. The problem (I'm guessing) is the older platforms have `libstdc++` which won't support newer features* . Though I suspect statically linking `libstdc++` might be a (nasty, but workable) option here.
* - again, I'm guessing - I've been out of the C++ game for a few years...
Thanks! This really should've been kept a first-class part of LLVM. It'd be a handy way to compile Rust to the various MCU architectures that LLVM doesn't directly support.
One reason people started looking into alternative languages was the slow pace at which the C++ compilers were moving. It was hard to be excited about new language features when you knew no one would implement them in the near future (near being 5-10 years).
Especially compared to when VS will get complete support for C99, which IIRC is sometime between when the sun stops burning and the heat death of the universe.
Is it fair to dismiss them for something that old? This is a thread about C++17, and Microsoft has shown themselves to be a great participant in pushing modern C++ forward and encouraging adoption; see: GoingNative, cppcon.
I would also guess that it might not be good business to publish experimental features from a standard that doesn't exist yet. I think Microsoft is already implementing these features, they are just not out yet.
> The Visual C++ Team is excited to announce that the compiler in Visual Studio 2017 RC will feature a mode much closer to ISO C++ standards conformance than any time in its history. This represents a major milestone in our journey to full ISO C++ conformance. The mode is available now as an opt-in via the /permissive- switch but will one day become the default mode for the Visual C++ compiler.
I'm really excited about having MS right up at the front of the pack with GCC and Clang.
The strange vagueness of stating they are "close" suggests the Microsoft C++ compiler team knows they are still missing shamefully important pieces, with the inevitable consequence that optimistic users expecting C++17 will be bitten by unexpected, significant compiler bugs. They should be more open about it.
I respect the progress Microsoft is making, both in the long term (since the dark ages in which they didn't care about standards) and in the short term (a genuine C++17 implementation effort), but their progress is still slower than the competition.
Intel also usually releases a beta every year with all the new features bunched together. They don't tend to push features out in frequent incremental updates, instead opting for yearly releases with bugfixes in point releases in between.
I love the work being done by the clang and gcc teams, but I still stick with Intel's C++ compiler for many things due to its far better support for automatic vectorization among other things.
What does "complete" here mean if the standard itself isn't yet complete? Or are complete and finalized not necessarily the same?
"Because the final ISO C++1z standard is still evolving, GCC's support is experimental. No attempt will be made to maintain backward compatibility with implementations of C++1z features that do not reflect the final standard"
It is complete... so far. The standard is in "feature freeze", bugfixes only. Although technically new features can still be added in response to National Bodies comments, in practice everybody expects the feature set to be set in stone.
Also that page only tracks pure language features, standard library conformance for libstdc++ is tracked elsewhere, although the standard library is equally part of the language.
Ah, that's a common misconception. Even with coroutines you still want explicit async calls and futures; otherwise, if you restrict yourself to only sync calls, you either have to artificially serialize operations or spawn artificial coroutines.
What you do not want are explicit uses of 'future::then' and the required manual CPS transform. Instead you want wait-for-{any,all} future operations.
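A crude sketch of what I mean, using plain std::async/std::future (when_all/when_any only exist in the Concurrency TS as std::experimental so far; step_a/step_b are just stand-ins):

```cpp
#include <future>

int step_a() { return 1; }   // stand-ins for real asynchronous operations
int step_b() { return 2; }

int combine() {
    // Launch both operations concurrently...
    auto fa = std::async(std::launch::async, step_a);
    auto fb = std::async(std::launch::async, step_b);
    // ...then "wait for all" by joining on each future, instead of hand-writing
    // a chain of .then() continuations (the manual CPS transform mentioned above).
    return fa.get() + fb.get();
}

int main() { return combine(); }
```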
Is there any compiler to date that has a fully developed and released parallel algorithms library? The last time I checked a lot of this work still looked like it was in the research stage.
I share your enthusiasm, though. It will probably introduce parallel code into everyday C++ projects faster than new special-purpose parallel programming primitives will.
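For reference, this is the kind of call I mean; whether it compiles today depends entirely on how far your standard library's <execution> support has come:

```cpp
#include <algorithm>
#include <execution>
#include <vector>

int main() {
    std::vector<double> v(1'000'000, 1.0);

    // The execution policy is the only change from the sequential call;
    // the library is then allowed to parallelize (and vectorize) the sort.
    std::sort(std::execution::par_unseq, v.begin(), v.end());
}
```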
I'm quite annoyed by announcements like this. To be complete, you have to provide a complete solution. It's really annoying with GCC that you have to check in multiple places to see which features a given release supports.
You are right. The thing is though that you (or third parties) can implement missing library features that do not require compiler support [1], but for the language itself you have to wait for the compiler. For example, thanks to the hyperproductive STL [2], MSVC has always had a fairly up to date library, but the compiler lagged behind.
[1] For a long time libstdc++ lacked regex support, but people could get it from Boost and mostly didn't care.
I'm not familiar with C++ at all, but I'm always amazed at how quick gcc is to get updated to implement what I imagine are complicated feature updates to the language.
Do they have a lot of very active contributors that communicate to efficiently split the work, or do they have a small but very dedicated group of people who spend the bulk of their time on implementing these features?
I don't know the team structure but I think it's a lot related to how the new standards are being drafted. They're being done with the big compiler writers helping (also in the open too). This means that the work on each individual extension started a good bit ago even if it isn't in an actual standard yet.
I also think there is some corporate backing in C++ compilers. Like a few internal company people are getting paid to fix this kind of stuff.
Regardless, it's great to see C++ compilers incorporating latest standard features. For comparison, check JavaScript standards implementation in browsers (not sure if it's a fair comparison).
It is a very fair comparison actually, with similar compatibility pains in the past due to both differing interpretation of the standard and plain non-compliance. Things are getting much better lately.
Yes and no. Traditional software transactional memory as imagined in the 1980s and 1990s is still imaginary. The hardware is still too limited.
Instead, it's more akin to something like the "synchronized" method and block attribute in Java, where the compiler transparently acquires and releases a mutex. Except in Java the mutex is on a particular object, whereas in C++ it'll be on a per-block hidden global mutex.
For small enough blocks, and if the stars are aligned just right (i.e. regarding alignment, cache-line locality, etc), the compiler might be able to optimize away the mutex in favor of a series of LL/SC, CAS, etc statements. But then, so could the JVM for synchronized methods.
It basically makes multi-threading easier, but won't automagically make it possible to build lock-less data structures, except that maybe the compilers will be smart enough to optimize something as simple as a singly-linked list construct into atomic instructions.
This approach to "transactional memory" was mostly fleshed out years ago in proofs-of-concepts by Intel, GCC, etc developers. And the approach has driven Intel's design of their "transactional memory" instructions, which basically move aspects of traditional synchronization approaches (speculative mutex acquisition, dirty flags) into the microcode.
As far as I understand, the mutex is not per block; it is logically a single global mutex (synchronized blocks) or actual transactions with full rollback (atomic blocks).
It really is transactional memory, and it is meant to be implemented using hardware acceleration like that available in recent server-class x86, POWER and SPARC CPUs.
GCC, I believe, supports both a pure software-based implementation and a hybrid one.
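For anyone who hasn't tried it, GCC's existing syntax looks roughly like this (a sketch; needs -fgnu-tm, and the TS spells the blocks `synchronized` and `atomic_noexcept`/`atomic_cancel`/`atomic_commit` instead):

```cpp
// Compile with: g++ -fgnu-tm example.cpp
static long counter;

void bump() {
    // Every memory access in this block is part of one transaction; the
    // libitm runtime decides whether to run it in software, in hardware
    // (e.g. TSX), or as a hybrid, falling back to a global lock if needed.
    __transaction_atomic {
        ++counter;
    }
}

int main() {
    bump();
    return static_cast<int>(counter);
}
```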
I guess we interpret the phrase differently. I interpret software transactional memory to mean being able to build complex lock-free, wait-free data structures. That doesn't require hardware transactional memory, but does require some strong hardware primitives.
Here are some good links which explain how TSX works and how Intel might have accomplished it.
TL;DR: one way or another they're piggy-backing on the mechanisms needed to maintain x86's strong cache coherency. Fundamentally it's just speculative operations on a small amount of data--if conflicts are detected, or if you manipulate more than a couple of cache lines (because the processor will only be able to track and buffer a very limited number of cache lines for conflicting operations during the pendency of the transaction), the code generated by the compiler will either take a lock or loop.
So, yes, it will make much existing code faster. But it's not going to provide the ability to develop highly-concurrent lock-free data structures. If there's any serious contention (i.e. more than 2 or 3 threads), or your transactional blocks access more than a few words of shared state, the transaction will invariably abort. Which is why all software transactional implementations, including recent ones which make use of TSX, are typically _slower_ than similar lock-based approaches in real-world scenarios.
STM-light is still conceptually elegant from the programmer's perspective, but deep down there's much less magic than you'd think. That's because it's _very_ expensive to track conflicts in a fine-grained manner in hardware. LL/SC operations were proven in the 1980s to be universal primitives that could be used to implement arbitrarily complex lock-free, wait-free algorithms. And chips like ARM and POWER have LL/SC opcodes. But they're pale shadows of the constructs studied in the 1980s because trying to actually implement them in hardware is just too costly. So TSX, while cool and useful, is something of a hack (and I mean that in both the positive and pejorative senses).
The real speed gains from Intel's new architectural support come from hardware lock elision, and doubtless both GCC and LLVM will lean heavily on this as it'll be easier to work with. As with VLIW and then auto-vectorization, don't put too much stock in promises that compilers will be able to transform typical application code into a form that's suitable for the specialized hardware instructions. As with AVX2, for example, to really make good use of TSX programmers will still need to meticulously organize their data structures and code flow, and will need to be mindful of the hardware constraints from the very outset. And in most cases they'll be far better off using intrinsics or assembly in the critical sections of their code.
You can touch a lot of cache lines in a transaction. On POWER it's several hundred.
I totally agree that TSX works best with specialized data structures. But it's powerful enough that you can, for example, malloc() or free() a block of memory inside a transaction.
It's the opposite. The maximum reliable transaction write capacity, if contiguous and well-aligned, is small on POWER (63 cache lines) and high with TSX (400 lines). TSX uses the L1 cache to buffer writes, so the number of concurrent transactions only scales with cores, whereas POWER uses a per-thread buffer which can scale linearly with the number of hardware threads (8 threads per CPU, 80 in their test).
So if I'm understanding this correctly, TSX has better capacity but poor concurrency, and POWER has poor capacity but better concurrency.
Quick aside, as the two commenters in this thread seem knowledgeable.
I assume xbegin is a memory barrier. But is it a full fence like mfence or lock prefix?
I see a lot of benchmarks using TSX for locking, but one of the nicer features of lock cmpxchg or lock xchg is that they carry an implicit mfence; this was nice because it forced reads/writes before the instruction to complete.
I know xbegin/xend do _more_ than an mfence for reads/writes within the RTM region, but do they provide fencing for instructions _after_ their execution?
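For context, the TSX-for-locking pattern I'm referring to looks roughly like this (names made up; the fencing question above is about what _xbegin/_xend guarantee around code like this):

```cpp
#include <immintrin.h>   // _xbegin/_xend/_xabort; compile with -mrtm
#include <atomic>

std::atomic<bool> fallback_locked{false};

void locked_increment(int& counter) {
    unsigned status = _xbegin();                      // start an RTM transaction
    if (status == _XBEGIN_STARTED) {
        // Reading the fallback lock puts it in the transaction's read set, so a
        // concurrent lock acquisition aborts us instead of racing with us.
        if (fallback_locked.load(std::memory_order_relaxed))
            _xabort(0xff);
        ++counter;                                    // speculative write
        _xend();                                      // commit
        return;
    }
    // Abort path: take an ordinary spinlock instead.
    while (fallback_locked.exchange(true, std::memory_order_acquire))
        ;                                             // spin
    ++counter;
    fallback_locked.store(false, std::memory_order_release);
}
```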
No idea and don't want to dig into Intel docs right now, but I would be surprised if they were full fences as I think xbegin/xend would only require acquire/release semantics.
xacquire/xrelease can be used as modifiers to existing lock prefixed RMW instructions which are already full barriers, giving them optimistic locking capabilities.
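Something like this sketch, if I remember GCC's builtins correctly (names illustrative; needs -mhle on an x86 target):

```cpp
// Lock elision via xacquire/xrelease prefixes on an ordinary spinlock.
static int lock_word = 0;

void hle_lock() {
    // The HLE bit asks GCC to emit an xacquire-prefixed lock xchg, so the
    // hardware may elide the lock and run the critical section transactionally,
    // retrying with a real lock acquisition on abort.
    while (__atomic_exchange_n(&lock_word, 1,
                               __ATOMIC_ACQUIRE | __ATOMIC_HLE_ACQUIRE))
        __builtin_ia32_pause();   // contended or aborted: spin and retry
}

void hle_unlock() {
    // xrelease-prefixed store ends the elided region.
    __atomic_store_n(&lock_word, 0, __ATOMIC_RELEASE | __ATOMIC_HLE_RELEASE);
}
```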
... or at the very least those parts of C++17 that have already made it into the draft standard. The final version will only be published sometime this year, but feature-wise there shouldn't be any major surprises.
What are the big new features of C++17? C++11 seemed like the last big change - I think the only reason I bother using "-std=c++14" is for "make_unique".
The major user-visible features of C++14 were polymorphic lambdas, generalized lambda capture, proper constexpr, member initializers and return type deduction. I use most of these every day.
For C++17 the biggest features are structured bindings, which are a first step toward pattern matching, and class template argument deduction.
Other features like fold expressions are really meant for library writers.
The standard library did get variant and optional, which are pretty cool.
Even with the last draft it is still not possible to easily move-capture a variadic parameter pack; it is a fairly esoteric feature and there are workarounds, but it is still annoying.
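A quick taste of those two C++17 features, with made-up names:

```cpp
#include <map>
#include <string>
#include <utility>

int main() {
    // Class template argument deduction: no need to spell out std::pair<int, double>.
    std::pair p{42, 3.14};

    // Structured bindings decompose pairs, tuples, arrays and plain structs.
    auto [id, value] = p;

    // Especially pleasant when iterating maps:
    std::map<std::string, int> counts{{"a", 1}, {"b", 2}};
    int total = 0;
    for (const auto& [word, n] : counts)
        total += n + static_cast<int>(word.size());

    return id + static_cast<int>(value) + total;
}
```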
That comment is nightmare fuel: are people's valid C++11 codebases going to execute in materially different ways when someone flips the std flag to c++14?
It seems that the change from C++11 to C++14 broke the semantics of "inline". I'm not sure who thought that was a good idea. You'll find that code that used to compile no longer does. Reverse-engineering the failure all the way back to the language specification is VERY time-consuming. Finding the right compiler switch (--std) will also take a while, assuming you even know there IS such a compiler switch. It also seems (I might be wrong) that gcc defaults to C++11 but g++ defaults to C++14.
Breaking legacy code without access to the original authors makes maintaining that code extremely difficult. Can you rewrite all of the "inline" instances in a 1M line C++ program without side-effects such as performance loss? When the lead programmer requires C++17 for their spiffy new code, are you the person who has to rewrite the "broken legacy code"? Does your IDE know all of the standards?
They should change the language name each time, such as "Alpha", "Beta", "Gamma", etc. so you can say that you "know" the "Beta" language. As it is I don't know anyone who can seriously claim to know "C++".
I find it easier to ask what standard they work to. If someone says "I write C++", I would ask "what version?" as there is a significant gulf between 2003 and 2011 standards, for example.
Or, as is very common, they are not sure, and then you will know that they don't know enough C++, or are in danger of writing C++ like it was C.
If someone says "I play the guitar" and you ask "acoustic, electric? hollow bodied?" and they don't know, you'd be surprised.
If someone says "I own a Volkswagen Golf" and you asked what mark (4, 5, 6 or 7) and they didn't know, you'd assume they knew nothing about they car or didn't have to replace any parts on it, even simple things like windscreen wipers as they'd have bought the wrong ones.
It's the same with C++ - the spec makes a huge difference so we should probably all be aware of what we're writing.
I was implying that they haven't bought new blades because a Mk4 wiper blade will not fit a Mk5. You cannot shop for wiper blades without knowing what your model and mark is.
Yes. This happened from 98 to 03 and from 03 to 11, and it will happen from 14 to 17 as well. Don't worry, the standards committee hates backwards incompatibility, so they try really hard to minimize these differences. But here's a notorious case to the contrary [1].
You have a point. Some people still write C++03 code because "My code will mostly work fine when I flip flag to C++11 anyway". But I always hope newer standards prevail, and make C++ great again.
And the people who program most of the stuff you can touch with your hands mostly still write c++-kinda-98 because "My compiler doesn't have a flag to flip to C++11 anyway" :(
Personally, I am really looking forward to constexpr if, i.e. if branches which are discarded at compile time and only have to be syntactically but not semantically correct. This should help a lot in avoiding enable_if magic to select the correct template.
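A made-up sketch of the kind of thing that becomes a single template instead of a pair of enable_if-constrained overloads:

```cpp
#include <string>
#include <type_traits>

// The branch that doesn't apply to T is discarded and never instantiated.
template <typename T>
std::string stringify(const T& value) {
    if constexpr (std::is_arithmetic_v<T>)
        return std::to_string(value);   // only valid for numeric T
    else
        return std::string(value);      // only valid for string-like T
}

int main() {
    return static_cast<int>(stringify(42).size() + stringify("hello").size());
}
```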
Is there a good, terse primer on the features and coding idioms/best practices of C++17 for someone who was proficient in C++98 but hasn't touched the language in years?
Note that this book covers C++11 and C++14. C++17 wasn't on the scene much at this point so there is no mention at all of it in the book.
Having said that, C++17 was a minor update (much to Stroustrup's displeasure) and I can heartily recommend this book.
It was Meyers' last book as he is now retired (but he will happily reply to emails; I love that about the C++ community: I've emailed Sutter, Stroustrup and Meyers and they all replied).
Can somebody shed some light on why embedded companies (Microchip etc.) have not switched to clang/llvm? It must cost them quite a bit to maintain their own compiler stack, and it is not very good.
Fun example: a customer of mine initialized a variable within a loop and the resulting program just didn't work as expected (which it did with the same code on my desktop with gcc). As soon as he moved the variable out of the loop the program worked as expected.
gcc and llvm weren't designed for the constraints of 8-bit (or 16-bit) targets. The support for 32-bit machines is quite good (and these days some of Atmel's ARM parts are cheaper than some of their AVR ones!).
Also, for the extremely low-cost parts (small AVRs, PICs, etc.) the margins on the parts are very low, which doesn't fund a lot of software development. And hardware companies often don't have much software development resource.
Does that include the 'export' keyword for templates, which was supposed to be available many C++ versions ago? (I don't think it ever got implemented in MSVC.)
Even the authors of the only compiler that ever implemented it (EDG) said that it was a lot of work for little benefit, and pushed for outright removal without deprecation.
Modules eventually, more than 20 years later, might be a better solution.
That seems very selfless of them. If they acted like other competitive markets, they'd want their competition to suffer the challenge that they did and they'd create a message that they were the first ones to satisfy the standard. They must really have the best interests of the language in mind.
They are! EDG is a tiny company but is extremely highly regarded, and its founders (Stephen Adamczyk, John Spicer, Daveed Vandevoorde) are considered legends in the community.
At a time when GCC was still catching up, MSVC was in the dark ages, and clang didn't yet exist, EDG was the standards-compliant C++ front end. It is still the front end for a lot of proprietary compilers (Intel, MSVC IntelliSense, Comeau), and I hear it is a very high-quality codebase.
It has been a while since I played with what I wanted 'export' to do (I guess splitting my templates up, with information hiding, to reduce compile times).
What is common practice for doing that nowadays? Is it modules? (But I'm not sure that's a standard yet; I could be mistaken.)
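(For context, the workaround I've been using is explicit instantiation plus C++11's extern template, roughly like below; it only helps for instantiations you can list up front, and the names are made up.)

```cpp
// widget.h -- declaration only; the member definition is hidden in widget.cpp
template <typename T>
class Widget {
public:
    void frob(const T& t);
};

// If the definition lived in the header, this C++11 declaration would stop
// every includer from re-instantiating Widget<int> itself.
extern template class Widget<int>;

// widget.cpp
// #include "widget.h"
template <typename T>
void Widget<T>::frob(const T&) { /* heavy implementation lives here */ }

// Explicit instantiation definition: the one Widget<T> users can link against.
template class Widget<int>;
```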
In that case, skip straight to C++14 and try your best to use as much of it as possible, as that will eradicate a bunch of unpleasantries you've become accustomed to. It really is almost like a new language compared to pre-C++11 (and yes, that comes with its pros and cons). Once I realized that, I tried to keep the mindset of "first check whether C++14 added something for dealing with this before writing any code" for nearly everything I write (in practice: search, land on cppreference.com and/or stackoverflow.com, read). In the beginning that costs time, but I'm pretty confident I've gained it back already, because the knowledge it gives you enables you to write better code, faster.
Write yourself a side project using as many features as you can.
Or write a cheatsheet (multiple pages) for C++11 onward, taking care to detail example cases of each new item or construct or keyword.
You'll know no end of idiosyncrasies in no time.
Remember, that was ~6 years ago, and that's a long time. Plenty of people are still writing C++2003 and haven't fully looked into C++11 and beyond or learned swathes of it (like many, many of my colleagues and C++ devs I meet, some of whom still call it C++0x), which means they're working to a spec that is 14 years old. 14 years old is a teenager.
The updates are definitely improving the language, so no need to be tired of them.
Many things have become easier, or in some cases much, much easier, to do since C++11. And some things that were previously nearly impossible are now intuitive and fluent.
Why should they? That document seems pretty legible; in fact, I would say its readability is excellent, especially in contrast to all those fancy fonts with #333 on #EEE that many "modern" websites use.
Furthermore, that page is responsive! And it doesn't require 5 MB of JavaScript and a high-perf CPU to work.
I know your question was rhetorical but I'll answer it.
It's actually terrible to read on a wide monitor. If you were to give the content a max-width with some gutters, it would be easier for your eyes to follow the lines. I keep starting to read a line that I just read! I'm talking about adding less than 10 lines of CSS, nothing major.
That's easy enough to fix: just resize your window. Heck, if you use Firefox you can just press ctrl+alt+m for a mobile-like view without resizing the window.
But we do need more things to do while waiting for compilation to finish. In addition to making sandwiches, seeing movies, building pyramids, etc., I mean.
I love seeing free compilers get advanced features years ahead of proprietary compilers like MSVC and Intel Studio.