C++ 11 Approved (herbsutter.com)
203 points by feydr on Aug 13, 2011 | 71 comments



Awesome! I've been looking forward to the day I'm no longer relying on "experimental" C++0x support! Here's a smaller tl;dr for those who want it:

-foreach loop

-first-class rvalue ("temporary") types

-lambda functions + closures

-implicit typing (auto keyword)

-decltype(), getting "declared type" of any expression

-variadic ... templates

-expanded STL -- incl. threading and RNGs

-construction from C-style initializer list

-Unicode literals

-enum class that doesn't auto-decay to int, enums with configurable base type

-explicit strong nullptr constant; no more NULL macro nonsense

The rest of the stuff is (in my opinion) less general/noteworthy.
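
For the curious, here's a minimal sketch of several of these in one place (illustrative only -- any compiler with decent C++0x support should accept it):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v = {4, 1, 3, 2};   // construction from initializer list

        // lambda + closure: sort descending
        std::sort(v.begin(), v.end(), [](int a, int b) { return a > b; });

        auto total = 0;                      // implicit typing
        for (int x : v) total += x;          // range-based for ("foreach") loop
        std::cout << total << "\n";          // prints 10

        int* p = nullptr;                    // strongly typed null pointer
        if (!p) std::cout << "p is null\n";
    }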


RNGs... Did they include Mersenne Twister?

http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/emt.html

EDIT: Apologies... the only reason I asked was to contribute to the discussion, but now I realize it probably looked like I was being lazy.


Yes - there are several options, of which MT is one: http://en.wikipedia.org/wiki/C_0x#Extensible_random_number_f...
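
For reference, a minimal sketch of the new <random> interface, with the Mersenne Twister engine (std::mt19937) feeding a separate distribution object:

    #include <iostream>
    #include <random>

    int main() {
        std::mt19937 engine(42);                        // Mersenne Twister, seeded
        std::uniform_int_distribution<int> dice(1, 6);  // engine and distribution are separate
        std::cout << dice(engine) << "\n";              // one roll
    }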


Mersenne Twister is hardly perfect. It's slow, causes a lot of cache misses, and IIRC the output is purely XOR of previous outputs. Do people just bring it up because they like the name?


Deserved or not, Mersenne Twister is somewhat famous among laymen like myself for being very fast relative to its reasonably high quality. If you can point me at an RNG that is both faster and higher quality, I will be genuinely in your debt.


My layman's understanding is that George Marsaglia's xorshift generator[1] is considerably faster than the Mersenne Twister while still providing high quality randomness.

I've also seen WELL512[2] mentioned in a few places, but I don't know much more about it than the name.

[1]: First introduced here: http://groups.google.com/group/comp.lang.c/msg/e3c4ea1169e46...

[2]: http://www.iro.umontreal.ca/~panneton/WELLRNG.html
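
For illustration, the 32-bit xorshift variant from Marsaglia's paper is only a few lines (13/17/5 is one of the shift triples he lists):

    #include <cstdint>

    // One step of Marsaglia's xorshift32; the state must be seeded nonzero.
    uint32_t xorshift32(uint32_t& state) {
        state ^= state << 13;
        state ^= state >> 17;
        state ^= state << 5;
        return state;
    }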


Let me know if you are ever in San Francisco and I'll open the tab at one of our many fine beer-snob bars (or a wine-snob bar, if you prefer).

This thread & invite is still open to anyone who can provide further suggestions!


> explicit strong nullptr constant; no more NULL macro nonsense

What is the point of this? C++ defines the null pointer to be always 0. So I never needed the NULL macro in C++ anyway, as I'm allowed to simply type 0 instead.

How is that new nullptr constant preferable to simply writing 0?


The standard example is:

    void f(int x);
    void f(char *x);
    ...
    f(0);  // calls void f(int)
That may seem contrived, but you may not know of the f(int) overload, especially in combination with templates.

NULL, if #define'd as (void *)0, prevents that error.
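
With C++11 the same call can be disambiguated at the call site (sketch):

    f(nullptr);  // calls void f(char *): nullptr never converts to int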


NULL is actually #defined to 0 in C++, because void* is not implicitly convertible to other pointer types as it is in C. So, unlike in C, there has been no safety in using NULL. Until now.


Consider:

  int execl(const char *path, const char *arg, ...);
called like so:

  execl("foo", "bar", 0);
Particularly when sizeof(int) != sizeof(void *).
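
That call pushes an int-sized 0 where the callee will read a pointer. A hedged sketch of the fixes (the cast is the traditional one, nullptr the C++11 one):

    execl("foo", "bar", (char *)0);  // pre-C++11: explicit pointer-sized null
    execl("foo", "bar", nullptr);    // C++11: nullptr goes through varargs as a pointer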


You may know that a null pointer is always 0, but you don't know that 0 is always a null pointer -- it may be the result of subtracting an integer from itself. That's the difference.


In overloaded or argument-deduced contexts, "0" is preferentially an int, but nullptr is never an int.


It's all about compile time warnings and static type information. The resulting code should be the same whether you're using 0, NULL or nullptr.


Not necessarily - e.g. with 64 bits, varargs and a literal 0 (cf. FrankBooth's example above).


Don't forget the auto keyword.

The question is, how long before I can actually use this stuff?


auto falls under "implicit typing". Clarified this.

And we've been able to use a lot of C++0x for a while now:

http://gcc.gnu.org/projects/cxx0x.html


Visual Studio 2010 has about half the C++11 features (obviously it came out before the full standard was finished). The next release, I guess, will add more.


Or maybe the next service pack. I'm thinking of the VS2008 feature pack/SP1, which added TR1.



With the C++98 standard it took almost a decade to get decent support in mainstream compilers.

This time it's different. Most of C++0x is already implemented in GCC and MSVC, though the implemented subsets differ a bit. The last time I checked, MSVC had lots of missing stuff in the standard library; in particular, threads and clocks were missing.

I've been writing C++0x with GCC for two years now.


No love for the low level concurrency support?


Yeah, threads, clocks and atomics are missing from this list.
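
A minimal sketch of all three together (the names are the standard ones: std::thread, std::chrono, std::atomic):

    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>

    std::atomic<int> counter(0);

    int main() {
        auto start = std::chrono::steady_clock::now();
        std::thread t([] { counter.fetch_add(1); });   // threads + atomics
        t.join();
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start); // clocks
        std::cout << counter << " increment in " << us.count() << "us\n";
    }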


Yeah, I'm building my subset of C++B right now. Much of the new stuff is either too complex or used too rarely to remember without looking it up.


I wonder why the C++ body does not consider things like reflection/introspection more important than the stuff they came up with.

All that is needed is an (optional, like RTTI) way to emit, as part of the generated binary, information related to structures, types, functions, global variables, etc.

For example, a GUI in some external format (JSON, XML, it doesn't matter) that has its signals/actions/events encoded as simple names, where at runtime you can map them to actual C++ code (Objective C has it, Java I think, .NET, etc.).

This would reduce the time spent writing serializers, deserializers, and such. Make it optional (again, like RTTI or exceptions) - but make it available on machines which can afford it (PC, Unix, OS X, and even mobile devices).

Why is this important? Because you can find 100+ libraries trying to solve this simple problem in a plethora of weird ways - gcc-xml, the OpenC++ parser, boost, etc, etc, etc. A sketch of the kind of table people hand-roll today follows below.

And a better preprocessor.
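
(A hypothetical sketch of that hand-rolled mapping - the names here are made up - using only C++11 library pieces:)

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    // Without reflection, the name -> handler table has to be written by hand
    // for every signal the external GUI description can mention.
    std::map<std::string, std::function<void()>> handlers;

    void fire(const std::string& name) {
        auto it = handlers.find(name);
        if (it != handlers.end()) it->second();
    }

    int main() {
        handlers["clicked"] = [] { std::cout << "button clicked\n"; };
        fire("clicked");  // the name arrives from JSON/XML at runtime
    }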


> I wonder why the C++ body does not consider things like reflection/introspection more important than the stuff they came up with.

The main (and probably unique) advantage C++ has over other systems languages like Objective C is its near-C performance, and it achieves that by keeping the runtime to a minimum and doing everything it can at compile time. Reflection/introspection requires a more complex and slower runtime.


I think you might be grossly overestimating the performance penalty of Objective C's dynamic behaviours and message passing. I did too when I started programming with it, but then I did some quick benchmarking and realised the penalty was negligible bordering on nil.

For big loops, be comforted in knowing that message passing is aggressively cached; the second and subsequent calls are barely slower than a C function call.

And for those insanely big loops, with methodForSelector: you can bypass message passing entirely and call the method directly... you speed freak.


C++ is used in environments where every byte and cycle counts. Regardless of your personal experience, run-time metadata (and going via lookup tables) does have a big, measurable overhead that many prefer to avoid.

Adding to that, the things a C++ compiler can do at compile time (via static typing and full knowledge of the types) are pretty convenient and impressive; relying on these things to be done at run-time is often complex and/or requires more code to be written by the programmer.


In an environment where every byte and cycle counts I'd be using C. C++ has too much stuff going on behind the scenes. (Like hidden object copies, potentially bloated operators, etc.).

If you can live with enabled exception handling (which has a performance cost) then some additional object metadata shouldn't be a problem.


> C++ has too much stuff going on behind the scenes. (Like hidden object copies, potentially bloated operators, etc.).

The thing is, C++ is usually good about not charging you for features you don't use - and with the reasonably good debugging and profiling tools, most of the problems you mention above are fixable to get performance that is close (or equal) to C, with maybe better maintainability.

To pick a not-so-random example of something that's difficult to implement in C as safely as in C++, see expression templates[1]. I'd be interested in something similar to what eigen does written in C. (Of course, the last time I did this minibenchmark, so-and-so's lapack library was still slightly faster than eigen :)

The bottom-line, perhaps, is that if I'm writing a library, I'd do it in C simply because it's self-contained and has a nicely limited scope, and as a bonus is usable from oh-so-many other languages. If I'm writing an application that really, really requires serious performance, I'd prefer to use C++.

[1] http://en.wikipedia.org/wiki/Expression_templates
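
For those unfamiliar with the trick, here's a minimal sketch of the idea (illustrative only - real libraries like eigen constrain the operators and handle far more cases): operator+ returns a lazy node instead of a computed vector, so a chain of additions is evaluated in a single loop with no temporaries.

    #include <cstddef>
    #include <vector>

    // Lazy node representing elementwise l + r; nothing is computed here.
    template <typename L, typename R>
    struct Sum {
        const L& l;
        const R& r;
        double operator[](std::size_t i) const { return l[i] + r[i]; }
        std::size_t size() const { return l.size(); }
    };

    struct Vec {
        std::vector<double> d;
        explicit Vec(std::size_t n) : d(n) {}
        double& operator[](std::size_t i) { return d[i]; }
        double operator[](std::size_t i) const { return d[i]; }
        std::size_t size() const { return d.size(); }

        // Assigning an expression evaluates the whole tree in one pass.
        template <typename E>
        Vec& operator=(const E& e) {
            for (std::size_t i = 0; i < size(); ++i) d[i] = e[i];
            return *this;
        }
    };

    // A real library would constrain this to its own expression types.
    template <typename L, typename R>
    Sum<L, R> operator+(const L& l, const R& r) { return {l, r}; }

    int main() {
        Vec a(3), b(3), c(3), out(3);
        out = a + b + c;  // builds Sum<Sum<Vec,Vec>,Vec>; one loop, no temporaries
    }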


It's very easy to avoid at least the problems you've mentioned by making objects non-copyable and not defining bloated operators.


> not defining bloated operators

Can you guarantee that for every 3rd party lib you use?


Then only use the ones that do guarantee this. You can also make use of C libraries, but still have access to some nice C++ features for your code.

Since C++ lets you omit language features until you basically have plain C, you get to choose what level of support you can live with.


Who cares about theoretical possibilities? The reality is that hidden object copies and bloated operators are not a real problem.


Have you replied to the wrong thread? I was not talking about message passing.


Maybe because reflection/introspection is done via "Objective C's dynamic behaviours and message passing"?


Actually these are only convenience methods.

For example, to get a class name you can use [object className], or you can use object_getClassName() / access "anObject->class_pointer->name" directly, without dispatching any messages. The same goes for any other runtime attribute of an object.


There's no reason the meta structures couldn't be completely separate from the regular type system and method dispatch. All that's needed is a couple of function pointers. I think the impact on performance could be zero.


Are you saying there's no way to make it optional and, when it's not being used, maintain the same runtime speeds? Virtual functions are "costly" too, but C++ has them and it's considered poor form to not declare your destructor virtual.


Well, the issue is that virtual functions are only costly when they are used, and it can (trivially) be determined at compile time whether they are in use or not.

On the other hand, you cannot determine whether reflection will be used (not, at least, without solving the halting problem), so you have to generate the reflection code for everything.


And what is costly about the reflection code exactly? It is just some static structures with type information and function/data pointers. I fail to see how it can impact performance in any way. The only side effect will be an increase in binary size.
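
(A hypothetical sketch of what such opt-in static metadata could look like - the names are made up, and a real scheme would be compiler-generated:)

    #include <cstddef>

    struct Point { int x; int y; };

    struct FieldInfo { const char* name; std::size_t offset; };
    struct TypeInfo  { const char* name; const FieldInfo* fields; int nfields; };

    // Static tables: no per-call cost, only extra bytes in the binary.
    static const FieldInfo point_fields[] = {
        { "x", offsetof(Point, x) },
        { "y", offsetof(Point, y) },
    };
    static const TypeInfo point_type = { "Point", point_fields, 2 };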


Honestly, I don't know whether the virtual function's overhead is comparable to reflection's overhead.

Stroustrup answered a question about reflection in this interview[1]:

Have you ever considered adding reflection to C++?

BS: Often. However, I have not seen an approach to full reflection directly supported by a language that didn't cause serious overheads. Also, reflection seems to encourage styles of programming that make it hard to determine what's going on from the source text and discourage static checking. I see that as a problem. Consequently, the C++ RTTI provides only the minimal information to determine the types of objects at run time. Where necessary, this can be used as a handle for more information about types (classes), but any such information is beyond what the standard guarantees[2].

[1] http://www2.research.att.com/~bs/omo_interview.html

[2] I think he was talking about "type_info"


The MSVC runtime already keeps the names of classes in the executable (browse the EXE and you'll see them).

Really, the only loss there would be memory usage. And this could all be optional: where you can afford to spend some memory, the more information about the currently running code, the better.

There have already been such sacrifices in C++ - exceptions and RTTI cannot be used everywhere. An Xbox console would not handle C++ exceptions, for example, yet you can still write it in C++ (someone might argue that it's not C++ if it does not have exceptions).


> Virtual functions are "costly" too, but C++ has them and it's considered poor form to not declare your destructor virtual.

It's a bit more subtle than that: a virtual destructor is only required when the class, or its base classes, already have another virtual function (and it's only really needed if another class inherits from it. hmm.) At that point, a virtual destructor doesn't add any more cost over and above the virtual function that's already defined.
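
The classic failure mode, for reference (a sketch):

    struct Base { virtual void f() {} };        // polymorphic, but non-virtual dtor
    struct Derived : Base { ~Derived() { /* release a resource */ } };

    void g(Base* p) {
        delete p;  // undefined behaviour when p points to a Derived:
                   // ~Derived() is never run
    }

    int main() { g(new Derived); }  // the Derived part is never destroyed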


But not putting one in, when using a foreign code base, might get you in trouble (resource leaks, not de-initializing certain pieces of the parent, threads not closing, etc.).

This would've been prevented if destructors were always virtual.

My take is that if the compiler cannot make the decision and it's left to you, it's better to always have virtual destructors on.

The problem is more visual than analytical - you would have to dig through the whole hierarchy across several files to understand whether you need a virtual destructor or not. The compiler/linker/some post-processing tool would be better at that job; it already has all that information.


Hm, I'd rather pay the cost of having to add a virtual keyword in a few places (easily found using the equivalent of g++ -Wnon-virtual-dtor, and you do ship C++ code that is warning-free, right? :) than have to deal with up to eight bytes extra per instance in all my classes.


Thanks for the hint!

Actually, I did not know about this option. I've just fixed such a bug in one of our tools, written using wxWidgets for Windows. I'll have to check whether MSVC has this option.

At the same time, a coworker was able to get the Xbox 360 analysis tool working, and it found suspicious stuff in the runtime code.

But we have only the professional version for PC :(



> a virtual destructor is only required when the class, or its base classes, already have another virtual function

Your derived class only needs a virtual destructor if the base class's destructor is public. If the base class destructor is protected (or private), then no one can call delete on a pointer to a base class.
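
A sketch of that idiom:

    struct Base {
    protected:
        ~Base() {}  // non-virtual, but nobody outside can delete through Base*
    };

    struct Derived : Base {};

    int main() {
        Derived d;      // fine: destroyed as a Derived
        Base* p = &d;
        // delete p;    // compile error: Base::~Base() is protected
    }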


This is why I rather enjoy discussing horrible hairy corner cases like this - there's always something I've overlooked :)


(little nitpicking note: there are cases where it can be useful to have a virtual destructor as the only virtual method of a class)


I'm interested in knowing what these are - barring an alignment-related weirdness, I can't think of any reason to do that.


If you do everything in the constructor and the destructor, and in the meantime store a generic instance (or maybe a list of generic instances) somewhere.

This can be quite a rare situation, though. But not impossible.


It could have zero impact if you don't use it (like a lot of things in C++).


If you want something close to reflection, you can use Qt's meta-object compiler and signal/slot system.


Qt's meta-object mechanism is definitely convenient, but that's not C++ anymore.


I say this is great news. A lot of the new stuff really does make C++ both nicer to use and a good deal safer, and it will be interesting to see a more boost-y style of C++ become widespread.

I'm also glad they waited and went with C++11, and not C++0x, which sounds like a l337-speak swearword.


The original idea was to replace the "x" with the right year. Since 2009 (C++09) didn't happen, it is C++11 now. The geeky option would be hex: C++0B.


If you wanted to write a year in hex, using the value mod 100 seems a weird way to do it. The hex year is 0x07DB, so if anything it would be C++DB. Or perhaps the first 3-4 hex digits of the Unix time would make sense - C++4E46.


0B is an invalid octal literal; to get a hex literal you'd just append B, yielding C++0xB ;)


Octal 013 would be more appropriate.


So

  decltype
instead of

  typeof
Damn... though not really surprising. To me this exemplifies C++ - it does reasonable things, but in that slightly ass-backwards way that makes it annoying... like a light switch that is installed upside down.
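
For those who haven't seen it, a quick sketch:

    #include <vector>

    std::vector<int> v = {1, 2, 3};
    decltype(v.begin()) it = v.begin();  // the "declared type" of an expression
    auto it2 = v.begin();                // usually the shorter spelling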


I think it's because existing compilers (e.g. GCC) used typeof already for experimental implementations, so the standard had to avoid it. Same as with hash_map, hash_set -> unordered_map, unordered_set.


Yes, exactly. The natural words for things are already used as extensions, and they want to avoid clashes.


For those who are curious: http://en.wikipedia.org/wiki/C%2B%2B0x

Huge page though, needs a tl;dr of changes.


Waiting for TC++PL 4th Ed. now.


Heh, probably the books are ready to go; they've just been waiting for the official word. There'll be new editions of all the old standards: "Yet More Effective C++", "Exceptional C++11" ... I can't wait :)


That and DnE.


When will we see compiler support?


Now -- at least for the big features, it's already implemented in GCC and Clang. I have no idea about MSVC++.


VC++ 2010 supports a lot of them: http://blogs.msdn.com/b/vcblog/archive/2010/04/06/c-0x-core-...

I've gotten into the habit of auto'ing most variables nowadays; it's quite convenient not having to type

     LongTemplateName<ClassName> variable = LongTemplateName<ClassName>::StaticCollection.Find(something);
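
when (keeping the same hypothetical names) it can now be just:

     auto variable = LongTemplateName<ClassName>::StaticCollection.Find(something);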



