Obscure C++ Features (madebyevan.com)
112 points by DmitryNovikov on Dec 11, 2013 | hide | past | favorite | 129 comments



Are these really that obscure? Well, the 3[array] thing is unexpected and also not something you would use in real code, so it reasonably counts as obscure (although it's such a hoary example of "obscure C" that it's almost familiar). But pointers-to-members, template-template parameters, and the overloads of ++ and -- are, I would have thought, just standard features you would learn in the normal course of learning the language.


Yeah, out of all these, only the "most vexing parse" struck me as somewhat obscure, simply because it's so insidious. The rest... not really that obscure. I mean, come on. a[3] being equivalent to *(a+3)? That's just pointer arithmetic. Most of the other stuff is probably intermediate level.


Most people understand that a[3] and *(a + 3) yield the same result, not that they are actually identical in terms of language syntax.


They are trivially not identical in terms of syntax: one uses ( and the other [. Surely you mean in terms of what operation they both represent?


They really aren't identical in the general case - this is a mistake on the part of the author. Trivially, consider the case where I overload operator[]...


Yes, this is more of an obscure C feature (they are exactly the same in all cases in C, the standard defines a[b] as *(a+b)), which is inherited by C++.


I thought that the reference qualifiers on member functions and perhaps the function try/catch blocks were fairly obscure, but yeah, not really anything else.


I think it's obscure for people used to scripting languages.


Obscurity can be viewed from a rather personal perspective. What might seem obscure to one person can be totally normal to another (e.g. some rituals in some cults).

Though I agree that some of the given examples should be clear after reading a good C++ book, which is one of the main issues with learning C++: most of the books aren't really good enough.


Any good ones that you recommend? Is Stroustrup's new version of the C++PL worth reading through, or is that more of a reference book?


Every time I read about obscure C++ features, the more I'm convinced that the language needs to be torn down and rebuilt from scratch.


Whenever I read something like this, I'm reminded of that Winston Churchill quote "It has been said that democracy is the worst form of government except all the others that have been tried".

I think C++ usage is one of those things you don't truly understand until you've decided to start a new project in C++, even in 2013. For a reasonably common feature set in a real project (i.e. not a side project for fun), C++ is the only option.

Specifically most alternatives exhibit one or more of the following:

* automatically managed memory (enough said)

* not expressive enough (I would throw C in this category even though C is a fine choice and often times the best choice for many projects)

* not nearly mature enough in terms of compiler/toolset/language in general (often, it's unclear whether the language will ever achieve that necessary maturity)

* not fast enough, usually for one of the above reasons, but sometimes for other reasons


Automatically managed memory is almost always a good thing. Aside from preventing a huge class of bugs, it also is often more performant. Why disparage it?

And I'd hardly label C++ as the epitome of expressiveness. A broad, haphazard feature set doesn't necessarily imply it's expressive.


I think he's talking about garbage collection, which is often perceived as slow.

But you can certainly have automatic memory management in C++ as well. Stack-allocated objects manage themselves, for dynamically allocated objects one can use scoped/shared pointers. One added perk is that you can extend this approach to deterministically manage not only memory, but any other resource that can be acquired and released (e.g. locks). I actually like this approach very much. The only problem I see there is that since you're still using raw pointers, the runtime can't really move things around for efficiency since that would invalidate the pointers. But in a garbage-collected language that doesn't expose pointers, the heap can be defragmented during runtime, which is nice.


> The only problem I see there is that since you're still using raw pointers, the runtime can't really move things around for efficiency since that would invalidate the pointers.

The biggest downside is not really efficiency but that you lose the memory safety of a managed language doing this.


True, but most of these issues can be mitigated by using best practices - use vectors instead of raw arrays, avoid shared ownership (and use shared pointers when you can't), etc. Of course, these don't make the language safe, but they do help alleviate the problem a great deal.

I wonder if it's possible to design a language that would be able to prevent these problems at compile time without incurring the runtime cost of managed languages. I mean, we can already detect some problems using static code checking tools (e.g. PVS-Studio), so it must be feasible.


> I wonder if it's possible to design a language that would be able to prevent these problems at compile time without incurring the runtime cost of managed languages.

Rust does this. :) It is not easy to get right—the borrow checker required a lot of design work to achieve the right balance between expressiveness and practicality—but I think we've shown that it is possible.

(Disclaimer: I work on Rust.)


Like most languages in the Pascal family.

Arrays are bounded. You can usually disable bounds checking locally if you really need the extra milliseconds, for cases the compiler is not able to optimize the checks away.

Proper strings instead of char pointers that might be null terminated or not.

Var parameters and unbounded vector parameters to handle arguments that need to be changed.

Oberon family has already proven it is possible to write desktop operating systems in GC enabled systems programming languages.

Now, as you say, many of these things are possible in modern C++. The problem is getting everyone on board, especially when dealing with old code bases littered with C-like constructs.


No, automatic memory management is always easier to use. It also always has a performance cost.

Garbage collection has a ton of problems in systems applications (off the top of my head: unreasonable and unpredictable delays, inability to deal efficiently with large amounts of heap-allocated memory). Reference counting is better in many ways, but is not always appropriate and definitely has a cost (and I say this as someone who uses shared_ptr quite often).


This is missing something though: "new" and malloc() are just like managed memory: convenient, but sometimes (and often unpredictably) slow.

Some people eschew Java and the JVM due to the spectre of garbage collection ruining their quasi-realtime system. But others embrace the JVM, organizing their programs to avoid garbage collection during critical times (which could last for hours or days).

With C and C++ it is the same, but backward: people flock to these languages for the predictable performance. Yet once there, they discover they must avoid the default allocator, for it is a source of unpredictable delays. Pooled allocators are created, and the end result is much the same as for the JVM folks.


The funny thing is that most of the STL gives no realtime guarantees either, even if you use a custom realtime allocator with it. E.g. vector::push_back sometimes takes O(n) time. Running a chain of destructors caused by a reference count dropping to 0 can be unpredictable as well.


But you can be sure of when those "sometimes" will be with e.g. std::vector::reserve

http://en.cppreference.com/w/cpp/container/vector/reserve

If you want realtime guarantees, you have a lot more unpredictable problems, like CPUs having cache.


reserve can be very easily misused. For example, let's assume a function append_10_items_to_vector. Since we're adding 10 items, we may get better performance if we call reserve() before the 10 push_back()s. Right?

Not always. If we have a loop that calls our append_10_items_to_vector N times (on the same vector), then your reserve() just transformed the loop into O(N^2) code. Without reserve, it would be O(N).

As a rule of thumb, use reserve only if you are really sure that nobody's ever going to add anything more to your vector in the future.


Yep, but exactly the same applies to Java. Just preallocate all your objects statically and you're never going to have GC problems :P


If you don't want the occasional O(n) copy on growth, use an STL list with a custom allocator.


I once had to re-implement the pieces of the STL I wanted in a real-time embedded system, for precisely this reason (well, that plus the fact that STL's use of allocators is broken, and I wanted to be able to allocate from fixed-size pools to avoid non-O(1) trips through an allocator at run-time).


If you are writing code that, essentially, takes some data from a database and formats it nicely, or takes user input and puts it into a database, then managed memory is awesome.

However, almost always, somebody has to write that database code and the code that displays the nicely formatted data (like in actually lighting pixels on the screen instead of filling out some css properties) and even the code that takes the user input. And for these purposes managed memory is a disaster and C++ is the most expressive language available.


There are database systems written in Java and they have no problems competing in performance with database systems written in C++. Sure, there is probably more effort required with getting memory management right in case you're using Java (pooling / using off-heap memory), but Java has some other strengths, like superior concurrency support.

IMHO, managed memory for system-level code is nowhere near being a "disaster". It might sometimes be tricky to get right, but not any harder than getting manual memory management right in C++ (and smart pointers aren't a silver bullet, either). Additionally, there are some Java extensions that let you write real-time code in it and guarantee GC won't kick in when you don't want it to.


This, of course, depends on your definition of "compete" and "system-level" (e.g. I have no problem competing with professional athletes too, I just have a very serious problem winning against them but the competition is no problem at all).

I write code that I consider system level (it talks to the hardware directly) and "tricky" is quite an understatement. Starting from the fact that there are half a dozen types of memory (not even talking about memory-mapped I/O) and just one "heap" in Java. Not to mention the hardware is not aware about the awesomeness of JVM and is trying to read/write actual memory pointers. I can imagine that you could get around this (as well as the lack of bit-fields and unsigned types) through some kind of memory-mapped-as-a-file trick but at this point you are not using the memory management at all. And IMHO this is very far from being "not any harder" than using pointers in C++.


When directly talking to hardware, C is enough. No one is saying Java is fine doing low-level stuff like device drivers (IMHO the wrong tool for the job, and C++ is wrong, too). I was referring to database systems or other software that is very close to the system, e.g. application or web servers. In this area Java is an excellent choice, and performance-wise it often beats C++ because of superior concurrency support. And you can always code the low-level pieces in C, if you really need unsigned arithmetic or bitfields.


Nobody is talking about C's insufficiency for programming. When doing anything C is enough just like any other Turing-complete language.

You've been saying that in your humble opinion system programming in Java is no harder than in C++, which is something very alien to my experience of actually programming at the system level (same as your performance claims, but I am not even going to get into that one; the last time I did, a Java programmer told me that a Java program measuring as 4 times slower was running "on the same level", whatever that means).


Is Netty system-level enough? Or Cassandra? Or Tomcat? Or Apache Spark? Or Hadoop? Or Akka? For some reasons devs chose Java to implement them, not C++.

As for performance, they are doing extremely well. Last time I checked, Netty vs Nginx was a tie; Tomcat vs Apache HTTP Server were also extremely close; Apache Spark (pure Java) is generally much faster than Impala (using C++); and Cassandra has no direct competitors written in C++ so far (not counting 1000 toy databases here), but it is definitely one of the most performant NoSQL stores out there.


Sorry, I don't know what any of these does. When I went to school it was common convention that system level is the OS kernel, drivers and anything else that talks to the hardware.


Right. Now show me some widely used OS kernel written in C++. No, not some research thing. Because I hear all the time that C++ is a system-level language, yet it is most widely used to write games and desktop apps, not systems.


Console games are system level software, accessing hardware directly. But if you want specifically the OS kernel then XNU should satisfy you.


Console games are just applications like any other PC game. They access hardware by communicating with OS services. The only difference is that on consoles you know your hardware exactly, so you can make better use of it, and the OS provides less than a full-blown PC OS does. But this is not related to the Java vs C++ debate. Java is capable of accessing hardware directly; there is even some hardware that directly runs Java, and some low-end embedded systems run Java.

As for XNU, it is mostly pure C. There are a few C++ files there, but they are very far from what we call "modern C++": no templates, no use of the standard library, no exceptions, const char* everywhere. Just C with classes.


It appears you are either trolling or simply don't know what you're talking about. Either way, there is nothing to be gained from this discussion for either of us.


While I do agree with you, how do you implement such features in a fully pure ISO/ANSI C++ compliant compiler?


"many developers tend to forget that what makes C and C++ usable for systems programming are actually language extensions that are not part of the standard."

Excuse me but what??? Please give an example. Except for maybe setting up an isr vector table and the odd coprocessor command in assembler, all other peripheral functionality is usually memory mapped and perfectly accessible in C and C++.


Like vector instructions or doing memory mapped access in a processor with MMU.


All the hardware interface is through shared/mapped memory nowadays (there are CPUs with a dedicated I/O bus or fancy coprocessor interfaces too but even there the bulk of work is creating control structures in memory and then kicking them through a port or a coprocessor instruction). C/C++ are perfect for the memory manipulation even in their purest standard form. I imagine your concern is about deriving the addresses for register apertures or emitting specific opcodes?

For the concrete addresses you just link with the symbols defined outside of your C code. Special ops cannot be done in pure standard, but this is not an issue in practice as there are no pure standard compilers. Even for app development you want intrinsics just for the vector, cache and bit instructions.


No, the point being that without language extensions, a C++ AOT compiler is in the same ballpark as a Java AOT native compiler.

You need an external Assembler or OS APIs to provide the required functionality. The same way that a Java runtime library can provide bindings to Assembly or OS APIs.

There are commercial AOT native compilers for Java targeting systems programming, like the ones from Atego.


I guess it depends on the size of the ballpark. 99.99% of the code I write is pure C++, as all I do is, essentially, move data around memory. My wild guess is that using Java I'd have to put that 99.99% into some external library, as Java does not support memory manipulation. Either way, we've gone quite far from the original point of managed memory being a good thing (almost always).


I was just playing devil's advocate a bit, because many developers tend to forget that what makes C and C++ usable for systems programming are actually language extensions that are not part of the standard.

Many mainstream languages, with AOT native compilers would be as usable.


Most developers never read the standard in the first place. I don't see how this is important, though. We did not have a standard till 1998, yet somehow managed to use C++. Same as with C, which had been used very successfully standard-less for almost two decades.


> Most developers never read the standard in the first place. I don't see how is this important though.

Do you write portable code in C and C++ across disparate systems and OSs, using multiple compilers?

I used to do that between 1994 and 2001. Don't miss those days of hit-and-miss about compiler support.

> We did not have a standard till 1998, yet somehow managed to use C++.

Yes we did. It was called the Annotated Reference Manual, used as the basis for the standard, coupled with articles from The C++ Report and The C Users Journal (later The C/C++ Users Journal).

> Same as with C, which had been used very successfully standard-less for almost two decades.

It works if you only care about one specific compiler. On those days the standard was the AT&T UNIX compiler.


Obviously, you cannot write portable system code. You can do conditional compilation to target multiple platforms though. And when you do that you realize two things: 1. No compiler is fully compliant with any standard. 2. Every compiler has bugs so even the compliant parts can be unusable.

And then you realize it's not a big deal. Changes between platforms are greater than changes between compilers. When your targets have different endianness or, even better, mixed endianness between different PUs - the standard compliance of your code is the least of your worries.

>It works if you only care about one specific compiler.

Also works if you care about a finite number of specific compilers and architectures.

>On those days the standard was the AT&T UNIX compiler.

There were, probably, hundreds of compilers and C flavors even before the ANSI C. What one thought was "the standard" was very subjective.


> Obviously, you cannot write portable system code.

Sure, but the goal is to minimize system dependencies as much as possible.

Relying on language extensions works against it.

> There were, probably, hundreds of compilers and C flavors even before the ANSI C. What one thought was "the standard" was very subjective.

Yes, I do remember variants like Small C.


Java does support memory manipulation via DirectByteBuffers.


> Noone is saying Java is fine doing low level stuff like device drivers (IMHO wrong tool for the job, and C++ is wrong, too)

You are correct, C++ is a bad choice for low-level device drivers. But it's getting better [1]!

[1] http://msdn.microsoft.com/en-us/windows/hardware/gg487420.as...


You're right, it is generally desirable; even Stroustrup (the author of C++) doesn't want you to manually manage memory unless you really have to. That's why there are smart pointers and why you should work with values as much as you can. You can even plug a garbage collector into C++.

But when you're working in a constrained environment, you need the option to control all this in much detail. C++ is one of the very few options you have for that.


Smart pointers in C++ are dead slow compared to Java's GCed pointers. And full-featured GCs for C++ are unicorns. There are a few, but no one actually uses them in production, except a few experimental language runtimes.


I assume you mean ref counted smart pointers, not all smart pointers are ref counted.

Java's pointers might be faster than shared_ptr. But, in Java, all objects are held by pointers. C++ gives you more options. It's not like we're going "Damn, I just can't figure out the ownership here. I'll just use shared_ptr and be done with it" every day. In my experience, usage of shared_ptr is quite limited.


It is definitely going to change at some point; IBM already has a prototype of a JVM supporting object packing and fine-grained alignment control.

shared_ptr use is limited because shared_ptr is slow and people have learned to avoid it at all costs. But having to think about ownership influences design choices and often makes things more complex and less performant than needed (e.g. it often forces you to copy objects instead of just passing a pointer).


Thinking about ownership is something that is natural for C++ programmers and will naturally affect design choices. We (C++ programmers) don't avoid shared_ptr at all costs; the primary reason for avoiding it is that it muddles ownership semantics. In other words: if a magical zero-cost shared_ptr implementation appeared, we wouldn't start using it everywhere.

You can keep extolling the virtues of Java performance, but fact remains that Java doesn't see much use in areas where performance is critical. It doesn't mean Java doesn't have a use, it's just not there.


> Thinking about ownership is something that is natural for C++ programmers and will naturally affect design choices

Maybe this is the reason C++ guys always complicate things: http://www.johndcook.com/blog/2011/06/14/why-do-c-folks-make...

As for Java not being performant enough: that's why Java is one of the most popular languages for implementing modern NoSQL database systems. Everyone knows performance doesn't matter in databases. ;)


unique_ptr, the default smart pointer in C++, is no slower than direct pointer access. If you require data sharing, then shared_ptr offers you that feature at the cost of reference counting.


unique_ptr is not a replacement for Java's GC. It simply doesn't do the same thing. shared_ptr is much closer, but at the cost of reference counting, which, if you start using it excessively, is much higher than the cost of Java's GC.

I agree that if your memory management is dead simple and you are fine with using only unique_ptr everywhere, C++ solution is superior. You obviously don't need GC, so why pay for it. Probably C would do just as fine, assuming you have some static analysis tool to find all the forgotten free calls.

However, there are some domains where GC enables completely different ways of programming. E.g. lockless concurrency / immutable structures / functional programming etc. Poor shared_ptr performance and scalability is prohibitive in those areas.


> Probably C would do just as fine, assuming you have some static analysis tool to find all the forgotten free calls.

I don't believe such a static analysis tool can exist without being a different language (e.g. Cyclone). There just isn't enough information about ownership semantics in C's type system.



In cases where unique_ptr is enough, ownership is simple and statically pairing all mallocs with frees is easy. The whole problem with memory management is sharing ownership. That's when things start getting hairy.


> Probably C would do just as fine, assuming you have some static analysis tool to find all the forgotten free calls.

While it helps, it can only cover the code you have access to.

No way to take care of third party libraries delivered in binary form.


To a good approximation, every memory management problem I've had involved a garbage collector, a reference count or a fancy STL pointer type.

Without these I have very few leaks or problems - and when I do, they are trivial to find and fix. I see the advantage for RAD, but for performance- and memory-critical code these things are actually dangerous and stupid IMO. Measurably so...


I would say that even side projects deserve C++! I have written much code on the side in C++. Like any real language, the more you use it the better you understand it. Everyone should try it - I know some will dislike it, but rubbishing it without really throwing yourself into it is foolish, much like hating a food without trying it (as children do).


I do use it on side projects, but on the job I am pretty happy to be on JVM/.NET land for our enterprise projects.

Mastering C++ is like playing guitar, but most corporate developers only know how to play the kazoo.


I'm a little unclear on how you include,

"not nearly mature enough in terms of compiler/toolset/language in general (often, it's unclear whether the language will ever achieve that necessary maturity)"

in that list. One of my perennial complaints about C++ is that it seems to be a moving target[1], more so than I really want to expose serious work to.

[1] https://news.ycombinator.com/item?id=6888366


It's funny what a slow-moving target it is, yet the compiler/tool writers still can't catch up.


Add good concurrent lockless programming support and this rules out C++ and leaves us with Java.


If you felt that most of these were obscure then you don't know enough C++ to make a comment on it.


Yup. That seems to be the prevailing opinion. C++ is still relevant, but not all of the applications that want its performance can justify its complexity. So various language designers are working on alternatives. The frontrunner is undoubtedly Rust; D and Nimrod are also relevant. And I have a cat in the fight too. ;)


Have a look at D: It's what C++ should be.


D didn't make it. Have a look at Rust.


Actually D is used in production code at Remedy Games, Sociomantic Labs and Facebook, among a few others.

What has Rust to show besides Servo?

Note, I do like Rust as well.


Actually, despite our semi-sincere objections, there are places using Rust in production. The two I know of are OpenDNS and Skylight:

http://www.opendns.com/

https://www.skylight.io/


Thanks for sharing.


Already been done a few times. (See D and Rust)

But the result will no longer be C++ for better and worse.


A few of those items are actually caused by its C compatibility and would apply to Objective-C as well.

Like the issues numbered 1, 3, 4, 6.


Or maybe devs should stick to a subset instead of using every feature to make the code unreadable.


True. While most languages are just plain "basic" tools, C++ is a swiss-army knife. A swiss-army knife is both dangerous and ineffective if you try to use all of its features at the same time.


The downside is that you and I will inevitably pick different subsets, leaving someone trying to combine our two projects with dangerously sharp edges in inconvenient places.


It's a very messy language - it's annoying that it's the best extension to C.

I constantly want something better than C - something where the compiler has the freedom to optimize my memory layout, with a keyword for types that have to cross boundaries. Something with metaprogramming built in. Something where const is not just a promise, but the law, etc...


Well, Rust is your best bet for now, seeing that Go screwed it up.


Or D, Ada, SPARK


then make it, and watch it fail

Also "C is quirky, flawed, and an enormous success" - Dennis Ritchie.


And the reason we have security exploits caused by bad pointers and out-of-bounds array accesses.


I don't know what to think of this comment. Does this mean we should not use C or C++ ?

Security does not necessarily always matter when you're programming something. It does when you make widely used libraries, applications, drivers or an OS, but those are written by experts and tightly reviewed, so the language is not the issue.

Sometimes you'll need pure performance or very low power consumption, and there will be no security concerns because there will be absolutely no way to compromise the system (no "vector").

Security is not a feature of any particular programming language anyway. Every program, whatever the language, is subject to security concerns; it's not a language concern.


> I don't know what to think of this comment. Does this mean we should not use C or C++ ?

Yes.

Although the insecure design of C and C++ (thanks to its C underpinnings) is just one attack vector among many possible ones, it is quite easy to get exploited given the skills of many people writing applications in those languages.

Removing the possibility of out-of-bounds errors and pointer misuse, as safer systems programming languages do, already removes many of those attacks.

So nowadays we have to compile with -Wall -Wpedantic -Werror, coupled with Valgrind-like tools and static analyzers, to achieve some level of confidence that undefined behavior and pointer misuse are not creeping into the code.

You are right, this is only a small step in the whole security picture; there is also SQL injection, social engineering and many other forms of attack. Still, buffer overflow exploits remain one of the most common forms of attack.

I do code in C++ since the ARM days, so I do know it quite well. Not making this argument as an anti-C++ zealot.


> Yes.

> Although the insecure design of C and C++ (thanks to its C underpinnings) are just an attack vector from many possible ones, it is quite easy to get exploited thanks to the skills of many people writing applications on those languages.

You can't have both performance AND remove the chances programmers will make mistakes. A tool is a tool with its compromises, you can't have all the advantages.

> Removing away the possibility of out of bounds errors and pointer misuse, not possible in other safer systems programming languages, already removes away many of those attacks.

Then don't use pointers, or learn how to use them properly. You should not hack into pointers anyway; there are never good reasons to do so, both because of the risk of bugs and because of reduced code re-usability. STL containers already make raw pointer use almost irrelevant.

And if it's only C, not C++, I'd argue that C is quite an old system language. Security is an expensive feature, and you're free to code faster without minding security if the business you're doing allows it.

The better solution to all this is using things like Cython, or making tools that allow easier bindings to C++ libraries. It doesn't change the fact that often you'll need to optimize a part of your code, and you'll need to make a C/C++ library out of it, which you'll link against. This whole process is far from standard and will cost time and learning, because it's quite painful. Security is often not worth the effort. Often it's much easier to make sure users are running secure applications, secure your network and teach users about security than to hire a security expert to review your code.

And by the way, if someone hacks into your system and steals stuff, it's still a criminal act; it's not the fault of the guy who decided to use C++ because he needed speed.


> You can't have both performance AND remove the chances programmers will make mistakes. A tool is a tool with its compromises, you can't have all the advantages.

Ada is a possible compromise. Modula-2 used to be one as well, before C got spread outside of UNIX.

> And if it's only C, not C++, I'd argue that C is quite an old system language. Security is an expensive feature, and you're free to code faster without minding security if the business you're doing allows it.

Yes, it is about C. C++ security flaws are a consequence of its C compatibility.

Nowadays security is a must for everything except toy programs.


You won't be very productive with Ada...

> Nowadays security is a must for everything except toy programs.

I disagree. A program can be quite secure enough if you follow simple guidelines and/or use secure libraries, and you'll save a huge amount of money by not hiring someone to inspect and approve your code. It also depends on how your program is designed and what it does; design plays a huge part in security. There are many ways to avoid vulnerabilities, like not connecting the hardware to the internet, disabling USB connectivity, compartmentalizing user lands, and so on.

If you're making an internet browser, an internet payment solution, or a government website, of course you will pay a lot of money trying to reduce attack vectors as much as you can, and try not to use C, but those projects are specific and security is 90% of the work. If you're making an industrial robot, a car computer, a GPS-only device, a home thermostat, or a cashier computer, it's not the programming language that will make a difference in security; it's the tools required to connect to that hardware that will make hacking it difficult.

A system will only be interesting to hack if it's both a mainstream device and its infection vectors are easy to deal with. For example, ATM machines are never connected to the internet; they use dedicated communication lines.


> You won't be very productive with ada...

My experience shows otherwise.

> For example, ATM machines are never connected to the internet; they use dedicated communication lines.

Yep, it really helps.

http://www.bubblews.com/news/353683-usa-atm039s-remotly-hack...

Right now I am coding a graphics algorithm in C++. So although I have this opinion, I am also pragmatic and use whatever tooling my customers ask for.

As for the rest, I guess we have to agree to disagree. :)


In this case, yes, they were connected, but I guess that was an isolated case. Banks don't make credit card transactions over the net. Security is not about programming languages. Security is mostly about infrastructure, OS, and design.

I honestly don't care about security at all. Arguing that a language is not secure is like saying you can cut yourself with a knife; people are still stupid enough to hurt themselves with butter knives.

The only language that is truly secure is JavaScript, and you can't really do everything with it; at one time or another, you need to access the hardware and write files.

C/C++ is a tradeoff. It's quite easy to teach, so you're very productive with it, which is one of the most important things about a language, and it's still fast, but you need to use it carefully. Security has always been a costly, luxury feature that is really better to have in some cases, but if you're a small company and need to get something done, C/C++ is just great. And if you get hacked, most of the time the guys who did it get caught, so I don't see why the industry should shift to Ada or VM languages (which have been shown to still have many vulnerabilities).


Don't worry, the language doesn't sit idle. C++11 and C++14 are significant improvements, almost a new language.


I can attest to this. C++11 is amazing. Pseudo type-inference with the "auto" keyword, a specific "nullptr" rather than null being conflated with an int, smart pointers, list initializations, for (elem : list) loops... an actual hash table implementation that isn't backed by trees?! Is this even C++ anymore?!

I've mostly written scripting languages until C++ but man C++11 has made the transition way easier. A lot of small stuff that makes your day so much better. Some big stuff that also makes your day better.


I do bash C++ a lot, not because of the complexity, but rather due to the C underpinnings that make the language unsafe by default.

As a Pascal refugee that really enjoys the languages developed by Wirth for systems programming, I would rather see C and C++ replaced by something that was rather safe by default, unsafe only when really required.

Having said this, I do enjoy coding in C++, when I am able to work with developers of the same skill level who are able to fully code in modern C++, rather than "C compiled with a C++ compiler" guys.

C++11 and C++14 are nice improvements that make the language better, but they are spoiled by C's historical baggage.

Now back to JVM/.NET land.


Here you go:

"There are only two kinds of languages: the ones people complain about and the ones nobody uses" Bjarne Stroustrup


So every language should be a dumbed-down version of Python or something?


C++11 is actually really, /really/ nice. If you haven't already you should read up on it.

I was hating c++ for vague reasons I couldn't place. I was wrong.


I would agree. Although my work involves writing some code under VS2010 on Windows so I can't use C++11 features, I still read Stroustrup's book. The tour of C++ is particularly helpful as it covers many of the new features, and offers tips like making the compiler work for you.

Stroustrup wrote an interesting paper (wish I could find it) in IEEE/Computer magazine that strongly advocated static code and not overuse of dynamic_casts etc. for speedy programs - this is making the compiler do all of the optimization instead of doing everything/making many decisions at run time.

The Wikipedia article on C++11 covers many of the new features quite well, but Stroustrup's recent revision of the C++ book is a joy to read - the fonts help, as the previous one had some pretty horrible fonts in my opinion.


VS2010 actually does implement quite a few of C++11's features; you have to be careful, though, and should read the MSDN documentation for each of them.

A weird case I found recently was that the VS2010 implementation of std::to_string(...) only supports long long, unsigned long long and long double. http://msdn.microsoft.com/en-us/library/ee404875(v=vs.100).a... Not very useful for most cases. Luckily VS2012+ supports the rest of the number types.


C++98 with TR1 comes close enough, it's what I use for lack of C++11 compiler support.

The funny thing is that "C++ as most people know it" is pretty far from "C++ as it was intended". Luckily, C++11 is nudging everyone towards the latter.


This one [1] surprised me.

Q1: Is the following code legal C++?

    string f() { return "abc"; }

    void g() {
        const string& s = f();
        cout << s << endl;    // can we still use the "temporary" object?
    }
[1] http://herbsutter.com/2008/01/01/gotw-88-a-candidate-for-the...


I do not think they had this kind of usage in mind when they allowed it, and I think the use case for it is in using a const reference to a function argument. Most people use this all the time without thinking about it or wondering about the origin of the variable they are accessing through the const reference.

E.g. when you have a function like below:

  void foo( const string& str )
  {
         cout << str ;
  }
The above function takes a const reference to a string and uses it without caring whether the passed-in variable is a temporary or not.

Now declare a function as:

     string bar() { return "abc" ; }
and the above two functions can be used with:

     foo( bar() ) ;
I think this use case is the one that motivated allowing a temporary to be used through a const reference, and people do this all the time.


> I do not think they had this kind of usage in mind when they allowed it

I think they did based on 8.5.3 which has this example:

  struct A { };
  struct B : public A { } b;
  extern B f();
  const A& rca = f(); // Either bound to the A sub-object of the B rvalue,
  // or the entire B object is copied and the reference
  // is bound to the A sub-object of the copy
As well as 12.2 para 5:

  class C {
    // ...
  public:
    C();
    C(int);
    friend C operator+(const C&, const C&);
  ~C();
  };
  C obj1;
  const C& cr = C(16)+C(23);
  C obj2;
in the C++ standard, which talks about the temporary bound to cr existing for the duration of the entire program. Moreover, why would a rule for const references be specifically needed to handle foo( bar() );, but no equivalent rule for bar().foo();, which involves an implicit non-const pointer?


I believe it was explicitly defined to support this use case, since the extra lifetime is added on top of the regular "temporary lasts until the end of the full expression in which it is created" rule. The "full expression" clause would be sufficient for function parameters, because the expression is not finished until the entire expression has been evaluated, and this includes executing and returning from the function that received the temporary.

In an indirect way, though, I agree: it is important for the language to make it easy to break apart complex expressions, but without this lifetime extension, I suspect it may not be possible to un-nest function calls without creating more copies (RVO may prove me wrong though; I haven't studied the case in full).


Of course, this is perfectly good C:

    const char *f() { return "abc"; }

    void g() {
        const char * s = f();
        puts(s);
    }
Nothing temporary is created at all. It's rather convenient.


But now there's no one to destroy the object (in a non-trivial case). Returning a reference lets RAII take its natural course.


There's nothing to destroy. The string is static.



Function try blocks are new to me. It's actually kind of elegant (for C++). Granted, the reason I've probably never encountered them in the wild is that there are very few scenarios where you want to handle an exception thrown while initializing a member in the initializer list. If you were trying to handle an exception from an instance variable's initialization, most people would just move that member's setup into the constructor body (e.g., if you have a pointer). The only time this would be useful, then, is for members that can't be default-initialized (e.g., a reference).


It's nice to avoid the default initialization and construct a member directly with the argument you want. Then you can construct a second member which might depend on that previous one, etc. This can even include lambdas right in the initialization list, and all of this may throw and log an error with the catch block. Personally, I rarely like to see code in the actual constructor body anymore.


Function try blocks are also extra tricky because at the end of the catch block the exception is automatically rethrown. http://www.drdobbs.com/sutters-mill-constructor-failures-or-... has a lot of details on function try blocks.


It also just removes a level of nesting, which is nice by itself.


"The tokens and, and_eq, bitand, bitor, compl, not, not_eq, or, or_eq, xor, xor_eq..."

Hey, I remember when those were introduced!

I was the sysadmin responsible for supporting the default gcc on all of a computer science department's non-research machines and upgraded gcc between semesters. I then got to spend a couple of days rewriting some grad students' code because they had used "and" and "or" as variable names.[1]

Fun times!

[1] Many of whom were upset because the upgrade had changed the preprocessor/compiler interface and broke their research compiler.


"The tokens and, and_eq, bitand, bitor, compl, not, not_eq, or, or_eq, xor, xor_eq, <%, %>, <:, and :> can be used instead of the symbols &&, &=, &, |, ~, !, !=, ||, |=, ^, ^=, {, }, [, and ]."

In C, the first few are only if you include the C99 header <iso646.h> (the later ones are just digraphs). Are they included by default in C++?


I knew about the first sentence, but the second is really obscure.

    Accessing an element of an array via ptr[3] is actually just short for *(ptr + 3). 
    This can be equivalently written as *(3 + ptr) and therefore as 3[ptr], which turns out to be completely valid code.


> What square brackets really mean

The title of that paragraph is misleading; it's true for arrays, but for a class, brackets can do whatever the developer implements them to do.


This was actually written in the book I used to learn C (I think; at least I knew this early on). It seems completely irrational when someone tells you that ptr[3] == 3[ptr], but with the above explanation it makes perfect sense.


The BCPL infix “!” operator makes it a bit clearer, I think:

    BCPL      C
    !a        *a
    a ! 0     a[0]
    0 ! a     0[a]
    !(a + 0)  *(a + 0)
    !(0 + a)  *(0 + a)
That’s where C got its 0-based semantics from—it would be a nasty surprise if 0 weren’t the identity for pointer addition!


Yes and it also explains why array indexing starts at 0.


Not really. It would be just as easy to say a[x]=*(a+x-1) and let array indices start at 1.


Actually yes, [] derives from BCPL; see evincarofautumn's comment just above. (Link to comment: https://news.ycombinator.com/item?id=6886488 derived from this submission: https://news.ycombinator.com/item?id=6879478 )


The second one actually just follows from the first one. Because indexing is just a + operator, you can reorder the operands without changing the result.


I couldn't even imagine something like ref-qualifier and function as template parameter.


Most people have come across member function pointers - event handling! They are (in some libraries) wrapped in macros for easier readability but most beginners use function pointers without realising.


This article misses WHY certain things are the case, and has several incomplete or incorrectly-explained points.


Yeah, the one that got me is "branch on variable declaration", which I believe is actually a special case of the fact that assignment is actually an expression which returns the assigned value; `a = b = c = 42;` works for the same reason.


I've always thought that this ranked among the most obscure:

  typedef int F(int);
  class C { F f; };
  int C::f(int i) { return i; }
I've never seen it used in practice...


surprised that most of this is considered obscure... and also the square bracket thing is only for built-in types (e.g. pointer types) because it can be overloaded...

the constructor vs. function choice is a classic gotcha for instance. i've had to explain it to juniors many times.



