Popular Myths about C++, Part 2 (isocpp.org)
78 points by ingve on Dec 15, 2014 | 91 comments



Really, "For reliable software, you need Garbage Collection" is the straw man you're going to attack?

What comes to mind when you hear "reliable software"? Personally, I immediately think of critical embedded systems like rocket and spacecraft guidance and automotive control systems. And one thing I hear repeatedly is that many well-known coding standards for building such critical software prohibit all dynamic memory allocation, because dynamic allocation is a potential failure point.

And of course, if you don't have dynamic allocation, you don't need garbage collection, because it would have nothing to do. So if the software that needs reliability the most often doesn't use dynamic allocation, only the most ignorant could think that you need garbage collection for reliable software.


I think you've taken a pretty narrow interpretation of "reliable software". It seems more reasonable to think that Bjarne was speaking at a general level, i.e., the idea that software written in non-garbage collected languages tends to be prone to memory management errors on the part of the programmer.

This has been a common meme for the past 15 to 20 years precisely because it is so easy to forget when a block of memory needs to be freed. However, the mechanisms of reference-counted smart pointers, RAII, and clearer ownership semantics in the language go a long way toward mitigating the common manual memory management problems in C++.
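
To make that concrete, here's a minimal sketch (Widget is just a placeholder type, nothing from the article): unique_ptr expresses sole ownership and shared_ptr expresses shared ownership, so "who frees this?" is answered by the type rather than by programmer discipline.

    #include <memory>

    struct Widget { int value = 0; };

    // Sole ownership: when the unique_ptr goes out of scope it deletes the
    // Widget, so there is no "delete" call for anyone to forget.
    std::unique_ptr<Widget> make_widget() {
        return std::unique_ptr<Widget>(new Widget());
    }

    // Shared ownership: the Widget is freed when the last shared_ptr to it
    // goes away, wherever in the program that happens.
    std::shared_ptr<Widget> keep_alive(std::shared_ptr<Widget> w) {
        return w;                      // copying the pointer bumps the reference count
    }

    int main() {
        auto owned  = make_widget();
        auto shared = keep_alive(std::make_shared<Widget>());
    }                                  // both objects are released here automatically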

The downside, of course, is that you have to know how to use these ideas to write "reliable software", and C++ does not make it easy. It's pretty much impossible to go from reading the standard to implementing correct and optimal C++ programs. There are so many gotchas, corner-cases, and features which require much study and experience to truly understand.


I'm not arguing that you need to avoid dynamic memory management to write "reliable software." I'm just pointing out that a lot of reliable software is written that way, and thus the idea that you need garbage collection to write reliable software is so obviously false that it's silly to think anyone believes it to be true, making it a terrible straw man to argue against.


> I'm just pointing out that a lot of reliable software is written that way, and thus the idea that you need garbage collection to write reliable software is so obviously false that it's silly to think anyone believes it to be true, making it a terrible straw man to argue against.

You're also arguing a straw man. Of course you can write reliable software without dynamic allocation. The question is: can you do it faster and/or cheaper using C++ or $ALTERNATIVE?

(You mentioned rocket and spacecraft guidance software as examples. That's an example of software that's exceedingly expensive to develop... and it doesn't actually do that much even though it's obviously complex.)


What straw man am I arguing?

You say, "Of course you can write reliable software without dynamic allocation." Why is that "of course," if the myth being addressed is that you cannot write reliable software without garbage collection? If you're saying everyone knows that you can write reliable software without GC, and it'll just be expensive and such, then we're in agreement, because that's exactly what I'm saying.


And when I think of rocket and spacecraft systems (in the context of a discussion around programming) I recall fiascoes such as the Mars Climate Orbiter[0].

If we're going to be thorough about a discussion of reliability in software, we have to include cost in the equation. Rocket and spacecraft systems have extremely high cost per SLOC. It stands to reason that there might be solutions for producing reliable software (for some standard of reliability) far more rapidly and cheaply than the folks at NASA.

[0] https://en.wikipedia.org/wiki/Mars_Climate_Orbiter


Mars Climate Orbiter is an odd choice. There are good examples of expensive software failures in spacecraft systems (the first Ariane 5 launch being a high-profile one) and they support your overall point, but MCO was a human failure rather than a software failure. Certainly the software could have and should have been better designed to not allow that failure, but ultimately the software behaved exactly as it was intended to, and did exactly what it was told to do.

As far as cost, you're completely right. My point isn't that we should all be building software this way (far from it, most software doesn't need to be that reliable) but rather that when reliability really matters, garbage collection isn't in the picture, so the "myth" being addressed is stupid.


> MCO wasn't a software failure, but a human failure

You're making a distinction where none exists. All software (that we know of, anyway) is the product of humans.

> but ultimately the software behaved exactly as it was intended to

I somehow doubt the designers intended for the software to cause the mission to fail.

> but rather that when reliability really matters, garbage collection isn't in the picture, so the "myth" being addressed is stupid

I would continue to disagree. I don't accept it on faith that the standards of fallible agencies such as NASA are proven correct, especially given the counterexample I mentioned earlier. There are ways to implement hard real-time garbage collectors and prove them correct with far more rigour than was employed in the MCO mission. The side benefit of such proofs is that we mere mortal programmers can benefit from the work in our everyday lives, something you can't say about the extremely domain-specific code used by NASA.


So the Tacoma Narrows bridge collapse was indistinguishable from a software failure because all software is the product of humans? That makes no sense.

The MCO failure came about because people took the correct output of one program, then incorrectly used it as input for another program, which then performed exactly as it was supposed to on the bad input.

This is an interesting study in human-computer interaction and how to make that robust, but I don't see how you can possibly apply it to the question of how to make "reliable software."


Ahh, now you're affirming the consequent. I said all software is the product of humans. I did not say that all human products are software.

Where our disagreement seems to occur is where the boundaries of software systems lie. You appear to be making the claim that it is at the granularity of individual programs whereas I am claiming that the entire software system must be considered. If you were to write a bash script that pipes the output of curl (presumably an html file) to /dev/dsp0 and a horrible screeching noise emanates from your speakers, what you have produced is a software error. It does not matter that each of the individual components is working as intended; the system as a whole is not (unless you actually intended to produce that screech, of course).


I agree with your example of a pipe. But what if you manually retyped it and you were supposed to carry out a format conversion as you did so?

My understanding of the MCO failure was that it was a manual step in the process that failed. The humans were supposed to do something, and didn't. I don't see how that can be defined as software, or anything even close. It's analogous to seeing a highway sign that says the speed limit is 80km/h, setting my car cruise control to 80MPH, and then saying that it was a software defect that caused me to get a speeding ticket.


All software can be reliable of course, even if it's not embedded software. Not leaking memory at every step, avoiding double freeing of pointers or other memory-related errors tends to make software more reliable...

Why so upset if I may ask? I think Apple declared GCs "bad" anyway.


I'm upset because a leading figure of the programming community who a lot of people listen to is spouting nonsense, and lots of people are going to believe it.

Why do you think I'd care what Apple says about garbage collection...?


But it's not nonsense: reliability isn't all or nothing, and GC completely removes a class of problems that happen often in languages like C or C++ when doing manual memory management. It makes those programs more reliable, even if they aren't as bug-free as your typical spacecraft software.

That was just a joke with Apple. :)


I'm not arguing against GC or the reliability thereof. I'm merely arguing that "GC is required for reliability" is a ridiculous straw man of a myth. I am in fact a fan of GC, but I also recognize that when lives are on the line, GC and indeed dynamic memory management of any type is usually out of the picture.


The reason this isn't a straw man is that it's a true belief held by many people; most of us aren't exposed to the extremes of reliability that you're talking about. When lives aren't on the line, memory management is generally a necessity.


Who actually holds this belief, that garbage collection is a requirement for reliable software? Most people don't work with such software but surely everyone who even thinks about programming is aware that it exists. I mean, you don't have to dive deep into computing to know that cars are full of computers and software these days.


Keep reading HN. You'll see articles and comments that state (or imply) that normal humans can't handle memory management, and therefore GC is the way to avoid bugs and memory leaks.

Sorry, I can't give you references off the top of my head, but I've seen that stated, here, in the last couple of months at least.


I get this feeling too, and I wonder if it is from a large vocal group of people who have never ever developed anything in C or C++, yet appear to know all about how "dangerous" it is and enjoy mocking the languages, whilst pointing to niche languages with much smaller development tool ecosystems and support libraries.

Is it just me?


> I think Apple declared GCs "bad" anyway.

Correction, Apple failed to implement a working GC, given the constraints on Objective-C semantics. It was a conservative GC and still it borked when mixing libraries not compiled the same way.

So they made the sensible choice of having the compiler insert the retain/release method calls that Objective-C developers would otherwise need to write manually.

This only covers framework code or objects that follow Cocoa semantics, everything else is manual.

Swift, being binary compatible with the Objective-C runtime, needs to make use of the same memory model.


Here's a good myth:

    "Only program in C++ if you absolutely have no other option."
I believe this myth. I am also currently working on a project in C++ and reading one of Stroustrup's books. I think even Stroustrup shares this belief, but he would probably phrase it a bit differently. (My project is interfacing with the Unreal Engine by Epic, which exposes a C++ API.)

Just putting this out there for the discussion's sake; perhaps someone could engage with it. I recently discovered to my horror that Apache Mesos is written in C++. They use the nice futuristic style Stroustrup would probably applaud, but I still think any other practical programming language would have been a better fit, since it's just a sysadmin tool. Obviously they disagree.


I don't believe this myth.

I have been writing C++ for over a decade and am currently writing a cross-platform GUI for controlling remote hardware over a network link (multicast). With accelerated graphics for 3D interaction, entirely custom widgets/controls, and interworking with a low-level communication library that my colleague has written, other languages would have been awkward. There is reuse of parts of the library with the remote hardware I think, as that is written in C (isn't all embedded stuff?). The C++ nature of the library will also mean less pain when porting to other platforms.

Higher level languages may be more fashionable but who wants a slow GUI app written in Python and a big runtime to distribute with it? (How I detest all of RedHat's update programs over the last 15 years and Ubuntu's abysmally slow software centre....). I know people like to hate C++ because it's complex compared to BASIC/Pascal/Python/JavaScript/whatever is in fashion now but its flexibility allows construction of complex concepts.

I have also written servers and web servers in C++ and enjoy cross-platform use of them (albeit with some macro hash-defs for compilation); the clients for them are also native GUI apps written in C++. Admittedly many of the applications I have written could have been written in other languages but it's the way I think; the stability of the development platforms is a great attraction for me (contrast this with .NET 2.0/3.5/4.0/5.0 differences and the big runtimes that need to be thrown around everywhere).

I suppose it's the right tool for the job in the most part, and if you're just starting out with C++ then it may be more painful to get something usable and understand what's going on compared to writing a few lines in JavaScript and showing them in your web browser.

But I find it rewarding. Horses for courses I suppose.


I've been doing nothing but modern C++11 for a year solid and I also have to agree. C++11 and its standard libraries are a huge improvement, but it is still difficult to iterate and prototype with. Templates, lambdas, and generic programming are great additions but still very difficult to deal with and debug at times. Using Intel's compiler, performance can really scream, but the time it takes to get things done is very tough to deal with.

Contrasting this to Julia/Light Table/Juno is like night and day. Programming in a canvas/all-REPL style is unbelievably freeing, since each little piece can be tested and iterated on quickly and easily. Then the program can be organized in a continuous manner while it is being made. The mental energy needed at any one point in time is massively decreased, since classes, inheritance, types, memory, and data flow don't all have to be dealt with in an interwoven manner as in C++.

The performance won't be equal immediately, but being able to prototype, then optimize, then only have to replace minimal parts with native code after they have been shown to be bottlenecks is amazing (and calling C is even super direct because of strong typing)

So after a solid year, I agree with the idea of using C++ only if there is no other option.


My biggest complaint about C++ is all of the non-intuitive "gotchas" and corner cases hidden all over the place that seem to change with every new release. A lot of times the obvious code is either subtly broken and/or less efficient than one would expect.

Reading through "Effective Modern C++," I was constantly thinking, "How is anybody supposed to remember all of these corner cases?"


I remember working on a small in-house C++ library with a colleague.

Being mostly a C programmer, my code in C++ was basically (as he coined it) fancy-C. Then he rewrote my code in "the C++ way," after which the code became completely opaque to me.

My initial mistake was that I expected C++ to be like (or similar to) C, a mistake I believe many of us make. C++ is a lot more complex in scope (not necessarily in a bad way, either) and requires proper learning.

I came to the realisation that C and C++ are not even similar languages. Yes, you can write proper C code in C++ (with a bit of added strictness), but that's not how one should write C++. Also, I think a lot of the "anger" towards C++ is that the name makes you think it's like C, but then you start learning the language and realise that in fact it is nothing like C on the outside, just on the inside.

Anyway, I'm not hating on C, I actually haven't taken on the quest to learn it properly, and I still rely mostly on C.


I think you're right there about the differences between C and C++ and the root of the anger towards it. C is completely alien to me - I wouldn't be able to competently write a decent program in it. But C++ is a different matter.

I have worked with people who write C++ in "fancy C" style too.


EDIT: [since I exhausted the edit period] I meant to say "I'm not hating on C++" in the last sentence. :-)

Cheers!


I concur with this. I use C++ daily and have been for about 20 years, and I still run into gotchas and things I misunderstand all the time.

I feel it's even worse than that, though. There seems to be a culture among C++ programmers to do things in overly clever ways and to push the limits of readability and understandability. Just look at the headers for the standard C++ template library, for example. (I'm currently looking at the ones shipped with gcc.) They don't even use consistent indentation, and have single letter variables that are meaningless to all but those who wrote it. It's pretty frustrating to deal with.


You get to see this in the CppCon 2014 videos.

On one side, very rigid companies barely using C++11, mostly as a better C with improved type checking.

On the other side, researchers making use of every template metaprogramming trick they can think of.


Looking at the headers is not the way to learn about the library. Anything that you learned that wasn't documented in the standard or any of the many books, articles, and videos available would be an implementation detail that you couldn't rely on in portable code or even in the next version of gcc.

These files are only intended to be seen by the compiler and it doesn't complain about indentation or single letter variables.


That was my impression when I took a course in C++ as a grad student. The course emphasized C++11, so I assume it represented (at the time, early 2013) modern usage. But 80% of every lecture consisted of presenting some example code, and explaining how what you think this code should do (if indeed it's obvious at all) is probably wrong. So many gotchas. It made me terrified to think of trying to write/test/debug a large and sophisticated C++ code base.


"G++ now allows typename in a template template parameter." (GCC changelog)

    template<template<typename> typename X> struct D; // OK

Thanks guys..


Well, you have to think in each case about what makes sense to you, look at the requirements of the project, team skills, candidate pool, etc., and pick the best fit.

Choice of programming language is an architectural decision with a big impact and it can't come down just to preference or how nice a language feels, although that plays a part too. One can also do quick prototyping in one language and the real thing in another one, although prototypes have a nasty habit of surviving into production.


Julia is one example of an alternative, but the real point of course is that even modern C++ requires a lot of extra mental energy that just isn't needed in almost any other language. It has its strengths, but I do agree with not using it unless it is completely necessary. Because of its massive language and compiler complexity it is becoming something of a black art the way assembly language has been in the past. Very powerful but very complicated.


Yes, I completely agree. I had a client that pushed for a large enterprise application to be written in C++. By large, I mean it took a ~30 person team about 3 years to develop. In the end, I think the 80/20 rule applied and well under 20% of the code was performance critical while the rest was handling common operations that in higher level languages would either be easy or part of a standard library. On top of that, once the client wanted to take over the project they had a hell of a time finding competent C++ developers.

Back in 2005 two Microsoft developers (Raymond Chen and Rico Mariani) did a blog series where one of them wrote and optimized a C++ Chinese/English dictionary and the other did the same in C#[0]. The naïve C# version outperformed the C++ version until Raymond performed several significant optimizations.

[0]: http://blogs.msdn.com/b/ricom/archive/2005/05/10/416151.aspx


Depends on how strongly you mean "no other option". For example, is Rust another option? I'm not sure how comfortable I am writing for a language with a version of "0.12.0". My understanding is that they make breaking changes fairly frequently (at least compared to the religious mania that is C++'s approach to backwards-compatibility). D seems like an option. Go, maybe, depending on what you're doing.

Here's my standard example for when I would happily choose C++: I am writing an image processing library that I want to be able to run on both the PC and embedded platforms (e.g. one of TI's DSPs). I also want to be able to write low-cost bindings in python, ruby, etc. There are other options, but I think C++ is the best one in this case.


Rust is likely not an option for most people using C++ yet. C++ has one huge advantage over almost every other language: its tools and ecosystem. MSVC++, Intel's ICC, GCC and LDC are all solid compilers where you will not paint yourself into a corner because of your tools.

D is more mature but still can't compete on tools since only a few languages can make that claim.

Your case is another big point. C++ can manage memory without raw pointers AND without garbage collection. It should be able to compile without a big runtime, but for some reason its modern dependencies are just as big as those of scripting languages.


"for some reason its modern dependencies are just as big as scripting languages"

In what way? Isn't writing an application trivial with the STL? Even something like a GUI is possible with a small system and FLTK (although it isn't really very pretty).


> My understanding is that they make breaking changes fairly frequently

http://blog.rust-lang.org/2014/12/12/1.0-Timeline.html

    * now -> Jan 9: TONS of breaking changes
    * Jan 9 -> Feb 16: probably shouldn't break, but we
      reserve the right to deal with exceptional circumstances
    * Feb 16 -> six weeks, maybe 12: no more breakage


> Depends on how strongly you mean "no other option". For example, is Rust another option?

What I tend to think people mean when they say that, is that you have some cycles to "waste" on a simpler - while also being high level - language.

> Here's my standard example for when I would happily choose C++:

I think the standard examples are certain application-level programs.


> What I tend to think people mean when they say that, is that you have some cycles to "waste" on a simpler language.

Isn't C simpler than C++ and doesn't waste cycles?

The only "no other option" scenario I could imagine is, legacy code in C++ or using a framework that needs C++.


> Isn't C simpler than C++ and doesn't waste cycles?

I gave a couple of C++ books to a recent CS graduate. She assured me that C++ would be easy to pick up because it has a much smaller standard library than Java. I smiled but didn't say anything.

Yes, C is simpler than C++. But that simplicity comes with a cost, mainly that the programmer has to keep track of more details. And the C standard library is smaller than the C++ standard library (and that would be true even if the C++ standard library didn't include the C standard library).


> Isn't C simpler than C++ and doesn't waste cycles?

I should have added "and also high level". I did that now.


Another thing to think about: Program in C++ if you have memory to waste on leaks, or time to waste on plugging leaks.

GC induces things that are arguably leaks, but in the average case they're controlled; leaks in C++ grow without bound in the average case.


This leaks argument is really getting old. I have no leaks in my code. C++11's addition of move semantics and using references everywhere makes pointers unnecessary for the most part. You can use STL containers for putting your items into, so you shouldn't see raw "new" or "delete" operations in your own code very much; this is particularly true where you define your own move constructors and move assignment operators.

Even "old" C++ should have no leaks if you use RAII properly, have clearly defined container classes, use references everywhere or const pointers if you must. If you write sloppy code, you get sloppy output.

See Stroustrup's "The C++ Programming Language, Fourth Edition" (the blue book), sections 3.3.3 "Resource Management" and 3.2.1.2 "A Container", where even in this early part of the book Stroustrup explicitly directs readers to "avoid 'naked' new and delete operations" and to "use resource handles and RAII to manage resources".
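
As a rough sketch of that style (LineBuffer is a made-up example type): the standard containers own their memory, so there is no naked new or delete anywhere, and the compiler-generated move operations transfer the buffers instead of copying them.

    #include <cstddef>
    #include <string>
    #include <utility>
    #include <vector>

    // A resource-owning type built entirely out of standard containers.
    class LineBuffer {
    public:
        void add(std::string line) { lines_.push_back(std::move(line)); }
        std::size_t size() const { return lines_.size(); }
    private:
        std::vector<std::string> lines_;   // vector/string manage their own memory (RAII)
    };

    int main() {
        LineBuffer a;
        a.add("hello");
        a.add("world");
        LineBuffer b = std::move(a);       // moves the underlying vector; no copy, no leak
        return b.size() == 2 ? 0 : 1;
    }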

EDIT: Wahay getting downvoted - thanks! In any other language (eg. PHP) if there was a vocal crowd complaining about how their scripts are slow when they do something stupid like fetching an entire database table and then doing filtering within the PHP script itself, everyone would say "But you're doing something stupid - it is going to be slow" and nobody would argue with it.

With C++, when you point out that someone is doing something foolish, you get downvoted and people start making arguments about features of the language instead to detract from the truth that you've highlighted, ie "but a good language wouldn't let you do dangerous things", which is the same as saying "knives can cut you - ban all knives!!". It's really wearisome, and always rears its head here on HN where C++ is NOT the language of the day.


I upvoted you twice, in case you care.

You raise some good points; the problem is, they both point the way to using languages other than C++: You essentially advocate using a subset of C++ (everyone does, even Stroustrup), and the subset of C++ you advocate is the high-level one with a maximal amount of automatic resource management, whether by templates or RAII.

OK, if I'm doing that, I obviously don't care about the manual resource management part of C++, so why shouldn't I use a language which gives me all of that and more? Are the performance gains from using that subset of C++ compared to something else even perceptible?


I don't understand the point about the subset of C++. Nobody is really saying that; they are saying that for most things pointers should not be used, since they would not be needed. But when they are needed, they exist and should be used. If you need to write your own class which handles memory, (smart) pointers are the way to go. If you need to interface with a C library, bare pointers are the way to go.

The fact that most screws have cross-shaped or slotted heads does not mean that I won't ever buy a star-shaped screwdriver. When you don't need it, you don't need it. But when you need it, there's nothing else like it.

On the other hand, higher level languages just cut on what you can do, and just give you a big ultra powerful hammer with which you can slam everything together. I guess it works, but I find it rather sloppy working.


Good point! For me, I think the question here isn't so much "performance gains" as "interoperability, near-ubiquity, and staying power".

For "interoperability", I'm just going to quote myself:

"If you write your library in Perl, guess what, only people writing Perl will use it. If you write it in Java, well, maybe the Scala folk will wrap it up, or the Jython folk, but you won't get any people writing in Ruby to use it. If you write it in C++, you can provide efficient bindings to any of those languages."

In terms of ubiquity, what I mean is that almost every platform has a standards-compliant-ish C++ compiler for it. And before people get in a tizzy, I mean weird platforms, not just computers with a browser installed. That's Intel, yes, but also DSPs, FPGAs, ARMs, and a whole host of other chips. This didn't use to be true, but with LLVM, almost an entire standards-compliant C++ toolchain can be generated for your platform pretty easily. The exception tends to be, ironically enough, exceptions. They need to be special-cased for your chip, but still, C++ exists in places where the JVM doesn't dare go.

And finally, for staying power, I'll quote myself again!

"Look at FFmpeg or ImageMagick. They are libraries that just _won't die_, no matter how hard people have tried. My god, ImageMagick was written in 1987, and it now has Haskell bindings[0]! The only way to get that sort of longevity (and old code is good code, after all [1]) is to write in language that will outlast the ups and downs of the language or framework du jour, and C and C++ have proven to have that staying power."

[0] http://hackage.haskell.org/package/imagemagick [1] http://www.joelonsoftware.com/articles/fog0000000069.html


Thanks for the upvotes. You are right about both of these subsets pointing to other languages; I had not thought about that!

I myself throw pointers around all over the place because I am restricted by my Windows compiler (no C++11 features) and I don't do stupid things with them (and am a slow learner for new features...).

I suppose the performance gains are negligible in simple applications, but in performance-critical applications they can make a difference: games, web servers, etc. See how Facebook built their PHP-to-C++ compiler to reduce running costs and cooling costs. PHP was more convenient to write (and PHP developers are cheaper to hire, I presume) but I wouldn't welcome all programs being of the "interpreted" type.

With C++, the push for type safety and making the compiler do the work for you will mean more efficient programs and less run-time checks. This is particularly important on mobile devices where processing usage = less battery life. And I would be far happier running an application for a few extra hours on my laptop if it was written in a sensible language and made efficient use of my hardware; lots of little performance gains add up.

Imagine the horror of apps being written entirely for developer convenience with little thought to performance - you could kiss goodbye to a big stack of power worldwide. That problem is only going to get worse, and the "there's processing power so I'll use it" approach is what gave us bloatware in the first place.

So I suppose application and system program development is a tradeoff between developer convenience and user convenience (battery life, speed etc.)


> This leaks argument is really getting old. I have no leaks in my code.

I imagine you are in the lucky position to have full control over the whole code, right?

Back in my C++ days at work, 1999 - 2005, there was always a lucky guy having to track down pointer misuses across the project source code.


Yes I am fortunate to have full control over the code. I still have some older code written by someone else to maintain, and that's pretty horrible in its architecture (incessant message passing for no good reason).

In other jobs I didn't have access to all the code but there were coding standards, regular reviews, and younger developers were mentored by older ones and their check-ins to repositories were vetted to ensure at least some standards. CPPCheck was also useful in some cases, and that Borland memory corruption tool that got built into the executable and caused false positives most of the time...


All: In HN comments, please don't complain about being downvoted. It's off-topic, explicitly against the site guidelines, and tedious.


Ah OK thanks didn't realise. I don't mind being downvoted, just without any replies or explanation. It is particularly militant in C++ articles, where there is an extremely hostile backlash to C++ here on HN, I have noticed. The language bashers come out in force for no good reason.


Please reduce the number of people with downvote power. Far too many people abuse them now. The site has become very hostile and it's caused entire communities to move away from HN, e.g. to producthunt. This trend will only continue if the hostility isn't improved.


Just wondering: why the downvotes?


HN Users have a constructor similar to this one.

  class HNUser
  {
  public:
    HNUser() {
      if (commentPraisesCorCPP()) Downvote();
    }
  };


Anecdata: at my current gig, I almost never run into any memory-related issues with the C++ services that I support, and I've never written a "delete" because everything is managed with smart pointers. The one time we ran out-of-memory wasn't because of a leak.

On the other hand, one of the most frequent problems with our Java services is "GC thrashing".


You have another option: Unreal Engine Blueprints. If you're not comfortable with C++, use Blueprint, that's what it's there for.


I feel like it's a bit grasping if you need such a complex example to debunk a straw man.

Implicit garbage collection is about the ease of writing code, not reliability. You make the compiler work for you by not making what you want explicit. Yes, for bad programmers, this leads to better code, but that doesn't mean you don't benefit from implicit garbage collection if you are a good programmer.


> garbage collection is about the ease of writing code, not reliability

No, it is about reliability. You are confusing "not always necessary/perfect/optimal" with average-case reliability. On average, automatic memory management has fewer bugs because certain classes of mistakes can be entirely eliminated.

Also, this argument that good programmers don't need to rely on X as a crutch is more about pride than productivity. In the real world, most teams have code touched by developers with a variety of skill levels, and everyone's contributions affect the quality of the code, so tools should be measured across levels of expertise.

If you look at the highly cited research on garbage collectors, I believe you'll find most of it concurs as to the benefits, edge cases notwithstanding.


> Also this argument of good programmers don't have to rely on X as a crutch is more about pride than productivity. In the real world, most teams have code touched by developers at a variety of skill sets and everyone's contributions affect quality of code so tools should be measured across levels of expertise.

Not to mention that sometimes even Homer nods.

If something needs to happen in a certain way and you have to manually ensure that it happens every time, sooner or later even the best of us will slip up. Now, the tradeoff in power and flexibility might be worth the risk of shooting ourselves in the foot, but it's still a tradeoff.


My experience with GC on Android is that it's nice when it works, but when you have to optimize you're screwed. I'd rather have a system like in Swift where the destruction of objects is deterministic. deinit can take care of releasing resources in time.

C++ offers the most flexibility, and using containers and smart pointers makes code that doesn't care much about memory look quite decent too. I have to admit that Swift makes it a breeze though, you don't have as much overhead in thinking about memory management.


While I don't have the same experience as you do on Android, I agree with your sentiment completely. When GC gets in the way, it should be easy to turn the GC off and fly solo. Some languages provide that functionality, others (Java) don't.


I didn't consider it a complex example. Instead, it was a set of examples, each focused on one argument regarding garbage collection.


The problem is not what C++ can or cannot do, or whether you can avoid its issues (which are many), or even how long it takes to get competent at it.

The problem is that you spend most of the time dealing with issues that are created by the language, not from the problem domain.

Every minute you are debating which kind of smart pointer to use, interpreting error messages (which are quite verbose), writing copy constructors and assignment operators, or forgetting to add a virtual destructor, is a minute not working on your problem domain.

If it is performance you're after, then C++ is also bad in the sense that it's very difficult to reason about. Thanks to optimizing compilers, that's true of C as well, but its layer of abstraction is much thinner.


This article correctly observes that "many resources are not plain memory" and then goes on to advise users to close() a file in a destructor. However, one of the ways that files are not like memory is that while free() can never fail, fclose() can fail if writing is buffered and the final write fails. This is especially problematic in C++ since throwing from a destructor is dangerous. Whether you rely on RAII or GC to free files, you will not be able to catch this sort of error. It is precisely for this reason that GC is a mechanism for memory management, not resource management in general.
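
A minimal sketch of the problem, assuming a hand-rolled wrapper (OutFile is made up, not from the article): the destructor can only swallow or log a failed fclose(), so a caller who cares about write errors has to call an explicit close() and check it before the object is destroyed.

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    class OutFile {
    public:
        explicit OutFile(const std::string& path)
            : f_(std::fopen(path.c_str(), "w")) {
            if (!f_) throw std::runtime_error("open failed");
        }
        OutFile(const OutFile&) = delete;
        OutFile& operator=(const OutFile&) = delete;

        void write(const std::string& s) { std::fputs(s.c_str(), f_); }

        // Explicit close: a failed flush/close can be reported to the caller.
        void close() {
            std::FILE* f = f_;
            f_ = nullptr;
            if (f && std::fclose(f) != 0)
                throw std::runtime_error("close failed: data may be lost");
        }

        // Implicit close in the destructor: throwing here is dangerous,
        // so any error is silently dropped.
        ~OutFile() {
            if (f_) std::fclose(f_);
        }

    private:
        std::FILE* f_;
    };

Both GC finalizers and RAII destructors share this blind spot; only the explicit close() gives the error somewhere to go.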


So far, all the comments seem to be missing Stroustrup's biggest point: garbage collection only works for memory. Destructors work for everything - memory, file handles, semaphores/locks, database connections, everything.

Garbage collection handles 90% of the problem transparently to the programmer. That's wonderful (really - I'm not being sarcastic here). But the problem is, garbage collecting languages usually don't have destructors, and so you're left having to manage the other 10% of the problem yourself. You have to remember to close all your own files, clean up all your database connections, and so on. Garbage collection usually makes the 90% trivial, at the price of giving you no tool at all for dealing with the other 10% (non-memory portion) of resource management.
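
For instance (just a sketch): in C++ the lock and the file below are both released by destructors on every path out of the function, including exceptional ones, which is exactly the 10% a memory-only collector doesn't cover.

    #include <fstream>
    #include <mutex>
    #include <string>

    std::mutex m;

    std::string read_first_line(const std::string& path) {
        std::lock_guard<std::mutex> guard(m);   // mutex released in ~lock_guard
        std::ifstream in(path);                 // file closed in ~ifstream
        std::string line;
        std::getline(in, line);
        return line;
    }   // both resources are freed here, even if something above throws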


> You have to remember to close all your own files, clean up all your database connections, and so on. Garbage collection usually makes the 90% trivial, at the price of giving you no tool at all for dealing with the other 10% (non-memory portion) of resource management.

In languages that use lambdas/closures, you can make some sort of RAII as well.

It is a well known pattern in Lisp and ML languages.
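
Translated into C++ terms, the pattern looks roughly like this (with_file is a hypothetical helper, modelled on Lisp's with-open-file): the resource lives exactly as long as the callback, and the wrapper releases it even if the callback throws.

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    // Pass the open handle to a callback; close it no matter how the callback exits.
    template <typename Fn>
    void with_file(const std::string& path, Fn fn) {
        std::FILE* f = std::fopen(path.c_str(), "r");
        if (!f) throw std::runtime_error("open failed");
        try {
            fn(f);
        } catch (...) {
            std::fclose(f);
            throw;
        }
        std::fclose(f);
    }

    // Usage: with_file("data.txt", [](std::FILE* f) { /* read from f */ });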


Many modern languages have a convenient solution for those cases as well: Python has the "with" statement combined with context managers; C# has the "using" block with the "IDisposable" interface that does essentially the same thing; even Java recently got its corresponding "try-with-resources".

Before those were introduced we had try/finally which works equally well but is slightly more verbose.


Yes, you can do this with try/finally. But if you have an object that has an open file as a member, then you open the file in the constructor, and then use the open file in the member methods, and then... what? You may have a scope that you are exiting where that object becomes irrelevant, but it may be several layers away. Having to close that object's handle in a finally in that scope seems likely to be forgotten at least some of the time.


That "scope that you are exiting ... several layers away" is where you put the try/finally.

If an object has resources to dispose of (such as an open file), it should implement (to use C# as an example – it looks similar in Python and Java) the IDisposable interface, which allows you to do either:

  MyFileWrapperObject obj = ...;
  try {
    foo();
    bar();
    ...
  }
  finally {
    obj.Dispose();
  }
or:

  using (MyFileWrapperObject obj = ...) {
    foo();
    bar();
    ...
  }
Or to take a Python example from what I'm working on right now:

  with open("something.json") as f:
    data = json.load(f)
The file will automatically be closed at the end of the "with" block. Just like in C#, this feature can be used for any kind of resource, not just files.


> That "scope that you are exiting ... several layers away" is where you put the try/finally.

Yes, I know that's where you're supposed to put the try/finally... if you remember. Each time. And you have to move the action in the finally clause whenever the variable changes scope or lifetime.

Destructors are considerably cleaner than this approach. You just put the close in the destructor, and let it do its job.


I've heard so much laid at C++ feet over the years, but I don't find it all that horrible to work with. It seems about on the same level as Python to get most things done, just without all the magic of the python standard library.

Are there some good stories of the bad of C++ that anyone can share?


My main gripe is the mental overhead when compared with other languages and the bulkiness of the syntax.

Languages that contain lists and maps as first class types and the syntax to go with it are so much nicer.

It's no joke that large C++ applications contain a poorly defined subset of Common Lisp.

The standard library is small. Things that are included out of the box in other languages are not. Doing some simple things is not as simple as in some other languages.

When dealing with math, on the other hand, as systems languages go, C++ is not the worst alternative. Getting numeric stuff correct is about as difficult in all languages.

I would not use C++ for anything that benefits from prototyping. On the other hand, when the spec has been defined and it is fairly obvious what needs to be done, there is no reason why it should not be banged together using C++.


Yeah, I totally agree on that point. With C++, it's not entirely uncommon to write lines like the following:

    std::vector<std::string> exampleparser::tokenize(std::string some_input){
That is unless I'm doing it wrong.


You forgot to explicitly specify the allocator in the vector...... - it'll default to std::allocator and put items on the heap but you could potentially use your own and get it to put stuff on disk instead!

I don't find your example that convoluted to read to be honest.


Herb Sutter's Guru of the week is a nice collection of gotchas/best practices: http://herbsutter.com/gotw/

And of course, his books (More) Exceptional C++ (http://www.gotw.ca/publications/xc++.htm, http://www.gotw.ca/publications/mxc++.htm) also contain nice examples.


Though not updated for C++11, a place to start would be the C++ FQA [1].

[1] http://yosefk.com/c++fqa/defective.html


C++ FQA is rarely a good place to start for anything else than finding examples of biased writing.


I found that combining the C++ FAQ and the C++ FQA made me a much better C++ programmer when I was learning the language.

Sure, it's ranty and biased, but that's kind of the point.


It's impressive to see how many of the complaints have been addressed by C++11, actually, and how many others have been addressed by compatible changes to the compilers and standard libraries (in particular, template related error messages can still be pretty bad, but they have improved tremendously with gcc and clang in the last few years).


I've not used the Python standard library, but doesn't the STL do a lot of what you want for C++?


It lacks networking features, which is pretty limiting. I use Qt for 90% of projects; it is the new std for me and I'm quite happy :)


Ah yes, I forgot that. I remember Stroustrup explicitly stating that he avoided adding stuff like GUIs to the STL, as people forget that C++ isn't just run on desktops.

I use wxWidgets with a mix of STL (you can tell wxWidgets to use STL for its containers if you want) and all works well.

I had been happily using extra networking libraries and the ease of using libraries means I kind of forget that the STL doesn't have it - I just get into a habit of using the right library for the job, which is the way it should be I suppose.


http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/ gives a sense of where the standards committee is going.


"My precious language isn't broken! It's not! Not! Not!" Strostrup writes something like this about once a year. There's some denial there.

The basic problem with C++ is that it's hard to tell if something really bad is happening. Think of C++ from the view of the maintenance programmer assigned to figure out why a C++ program is crashing intermittently. Or worse, the security expert trying to figure out how a system was penetrated. C++ has hiding ("abstraction") without safety. This is a bad combination, one seen in few other major languages.

We make progress in programming partly by having the language eliminate some problem. High-level languages mean you don't have to worry about register allocation, saving and restoring registers, or calling sequence conventions. Assembler programmers have to obsess on those issues. C++ doesn't eliminate any problems from C. It helps with some of them, but not to the point that they're just gone.

The three big questions in C are "How big is it?", "Who owns it?", and "Who locks it?". C++ helps a lot with the first one, even though it doesn't have enough subscript checking to guarantee the absence of buffer overflows.
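
To illustrate the subscript-checking point with a small sketch of my own: std::vector's operator[] is unchecked, at() throws, and nothing in the language pushes you onto the checked path.

    #include <iostream>
    #include <stdexcept>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};

        // v[10] compiles, and reading it is undefined behaviour: it may silently
        // read or corrupt adjacent memory, the raw material of a buffer overflow.
        // int bad = v[10];

        // v.at(10) is bounds-checked and throws instead of overflowing.
        try {
            std::cout << v.at(10) << '\n';
        } catch (const std::out_of_range& e) {
            std::cout << "caught: " << e.what() << '\n';
        }
    }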

C++ has struggled with the second one, with three rounds of auto_ptr, unique_ptr, and now reference-counted smart pointers. But because these are an afterthought, implemented using templates, they're neither optimized nor airtight. The mold keeps seeping through the wallpaper, in the form of raw pointers. Rust has a designed-in solution to this problem. (Rust's borrow checker is perhaps where C++ should have gone, but C++ will never get there.)

C++ as a language has no clue about who locks what. The language totally ignores concurrency. That's said to be an operating system problem. This is now a very dated concept.

(As I say occasionally, I really hope the Rust crowd doesn't screw up. They address all three of those big questions in effective ways. But I see too much use of "unsafe" in Rust code, which indicates weaknesses in the language design. The use of Rust's "unsafe" for "performance" is a big problem. From a semantic standpoint, only "Vec" needs "unsafe", because somebody has to convert raw memory to an array of objects. Everything else can be built on top of "Vec". But there is unsafe code for "performance" in hash classes and such.)


How is abstraction unsafe?

Additionally, raw pointers are not dangerous. If you're passing them around as const pointers, the receiver can't do things like deleting them. If you are writing a container class (instead of using one of the existing myriad of containers in the STL) then you can use raw pointers. But typically you would not need to write your own container class; just use one of the STL's and the move semantics. You can then use references everywhere instead. Or const references. Everyone forgets about const correctness in their C++ bashing.

For concurrency, you may wish to see http://en.cppreference.com/w/cpp/thread

There are even mutexes in there.
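
For what it's worth, a minimal sketch of that C++11 support: threads from <thread>, a mutex from <mutex>, locked through RAII.

    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    int main() {
        int counter = 0;
        std::mutex m;
        std::vector<std::thread> workers;

        for (int i = 0; i < 4; ++i) {
            workers.emplace_back([&] {
                for (int j = 0; j < 1000; ++j) {
                    std::lock_guard<std::mutex> lock(m);   // released at the end of each iteration
                    ++counter;
                }
            });
        }
        for (auto& t : workers) t.join();

        std::cout << counter << '\n';   // always prints 4000
    }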


Creating a dangling const pointer is pretty trivial. Hell, make it a const reference.

    #include <iostream>

    struct Foo {
        const int &dangling;

        Foo(const int &dangling): dangling(dangling) { }
    };

    Foo foo() {
        int dangling = 0;
        Foo foo(dangling);
        return foo;
    }

    int main() {
        std::cout << foo().dangling << std::endl;
        return 0;
    }

Neither g++ nor clang warn on this, either. I am not sure why you think const pointers are safe. They aren't. Flat out.


Well I was assuming for pointers that they'd be initialised to something sensible. That would be a coding standard problem if they're not.

The example you give is also a problem with coding standards, I'd argue. It isn't the language's fault that you're using a reference that's going out of scope - that's just bad coding. (A pointer going out of scope isn't so bad, no?)


> How is abstraction (hiding) unsafe?

Objects hide their implementation details. An object is only a valid abstraction if it correctly hides its implementation details and the user can ignore them. If there are hidden constraints on what a user can do with an object, but those are not enforced by the object, the object is an unsuccessful abstraction and a potential source of bugs.

> Additionally, raw pointers are not dangerous.

Two words: "buffer overflow". C pointers lose size information.


Astonished that your comment got down-voted: it is one of the most insightful here and matches my experience with C++ rather well.

Mixing smart pointers and multi-threading in C++ may result in some nasty surprises that are very difficult to track down and would be impossible with garbage collection, and I'd be curious if Rust would be able to do better here with its fancy type system.



