
There are some very good answers to that question. Related to that question, I still fail to see any advantage to using shared_ptr, weak_ptr, unique_ptr, auto_ptr, etc. If you are doing C++ right, nearly everything should be allocated on the stack, and the things that aren't should be encapsulated in a class, RAII style. If you can't design your program so that the ownership and lifetime of your objects are clear, then you're using the wrong language, and shared_ptr won't save you.



The point of the smart pointers is that they are RAII classes. If you have an object that manages more than a single resource, you open yourself up to all kinds of exception safety and thread safety issues.

For example, let's say you have a class that news up an object in its constructor. Presumably, the corresponding delete lives in the destructor. But if that constructor throws an exception, the destructor will never be called. If it throws the exception after the "new," then you have a memory leak.

So basically, you need a class whose only job is to manage a single pointer. You could certainly roll your own smart pointer, but it would have to work pretty much exactly like one of the ones in the standard library.
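
A minimal sketch of both the leak and the smart-pointer fix (the class names here are hypothetical, not from the thread):

    #include <memory>
    #include <stdexcept>

    struct Widget {};

    // Leaky version: if the constructor throws after the "new",
    // ~Leaky() never runs, so the Widget is never deleted.
    class Leaky {
        Widget* w_;
    public:
        Leaky() : w_(new Widget) {
            throw std::runtime_error("init failed");  // w_ leaks
        }
        ~Leaky() { delete w_; }  // not reached when the constructor throws
    };

    // Safe version: the unique_ptr member is a fully constructed subobject,
    // so its destructor still runs when the enclosing constructor throws.
    class Safe {
        std::unique_ptr<Widget> w_;
    public:
        Safe() : w_(new Widget) {
            throw std::runtime_error("init failed");  // w_ still frees the Widget
        }
    };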


The word RAII by itself is too non-specific. People typically mean: use (i) a scoped/auto pointer or (ii) a shared pointer (that's C++ speak for reference-counted pointers). These are good rules of thumb to rely on, but by themselves they are quite inadequate.

First, an auto or scoped pointer gets you a stack discipline. If your objects aren't very big, you might as well create them on the stack itself: more efficient and better suited to multithreading. If the lifetime does not follow the lexical stack, then you are out of luck with this style of RAII.

Next, reference-counted pointers: cycles are clearly a problem. The traditional mantra is to use weak pointers to break cycles, but then you are back to error-prone manual management (of the cycles). If I am to be trusted to use weak pointers correctly, I wouldn't be too bad at manual memory management either. Pure functional languages with eager semantics do not allow cycles, so are ref counts a good idea in such cases?
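
A minimal sketch of the cycle and the weak_ptr fix (the names are hypothetical, not from the discussion):

    #include <memory>

    struct Child;  // forward declaration

    struct Parent {
        std::shared_ptr<Child> child;
    };

    struct Child {
        // A shared_ptr back to Parent would form a strong cycle and leak both
        // objects; a weak_ptr observes the parent without keeping it alive.
        std::weak_ptr<Parent> parent;
    };

    int main() {
        auto p = std::make_shared<Parent>();
        auto c = std::make_shared<Child>();
        p->child = c;
        c->parent = p;                        // no strong cycle, so both get freed
        if (auto alive = c->parent.lock()) {  // must check liveness before each use
            // use *alive while the parent still exists
        }
    }

The manual part is that nothing stops you from using shared_ptr on both sides; you have to remember which direction gets the weak reference.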

Although many do not think about it that way, a reference-counted system is a _garbage_collector_. Just a particularly simple one. Just because one understands how it works does not disqualify it from being a garbage collection system. It is just not a very efficient garbage collector for cases where running time is sensitive to cache locality, which is true fairly often.

I am quite excited about Rust's regions. I don't think programming with regions is easy, and you would want to have a garbage collector anyway. I hope this style of programming sees more mainstream visibility. Something should come out of all the MLton work on this. EDIT: I indeed meant ML Kit, but it draws enough from MLton that I thought HNers would be able to relate.

There are also D's scope guards, but I am not yet familiar with what they do.


[Regarding Rust:] > I don't think programming with regions is easy and you would want to have a garbage collector anyway.

I agree and I've been wanting to emphasize this too for a while.

I think it would help Rust's adoption a lot if it got a good garbage collector while it's still young. Otherwise I think a lot of programmers will give up after stumbling over borrow-checker errors when doing things that are pretty trivial in other languages.

If we had a garbage collector, we would only have to worry about regions in the hot spots; that would ease many people's way into the language a lot, IMO.


I actually think the moves Rust has made away from an expectation of GC as part of the language will only help its adoption down the road. There are about 8 billion languages occupying some facet of the C++ With Garbage Collector niche, and we don't really need another one. We need a better C++, and it's not a better C++ if you can't use it without a garbage collector; it's just another Java/D/Go/and so on and on and on.


I agree it should be usable without a garbage collector, for sure.

But some types of algorithms that are allocation-heavy (with dynamically sized parts) will just run much faster with a good collector, and they are easier to write that way.

Also I've found that with regions, things get really complicated when structures contain borrows (and not every structure will logically own what it references), and then the structures become parameterized over the lifetimes, which multiply quickly.

For cases like that I'd rather start with a GC reference, and gradually remove GC references where time permits and profiling dictates.

I think Rust can really have the best of both worlds here.

BTW: I did manage to finish a fairly involved analysis program in Rust without using any GC, and I'm really very happy with the results. Being able to GC a few references would have made things easier though.


  > For cases like that I'd rather start with a GC 
  > reference, and gradually remove GC references where time 
  > permits and profiling dictates.
Long ago this was one of the original theses of Rust: that you'd favor GC'd objects at first, and then we'd provide easy ways to transition your code to use linear lifetimes where profiling deemed it necessary.

It took us a long time and boatloads of iteration to finally realize that this just wasn't feasible. The semantics of GC'd pointers and linear types are just too drastically different, especially with how they interact with mutability and inter-task communicability. We "officially" abandoned GC in the language itself last October, and we're still trying to wrest the last vestiges of its use out of the compiler itself. Ask pcwalton how much of his time has been spent doing exactly that in the past four months. Just yesterday one of our most tenacious community contributors proudly announced that an all-nighter had resulted in removing two more instances of GC'd pointers from the compiler, which required touching "only" a few thousand lines. It's a bad scene.

So yes, while Rust will still offer GC'd pointers in the standard library, we've long since learned not to encourage users to reach for them without great justification. You must understand that once you introduce garbage collection into your code, it will be very difficult to return from that path.

We've also, in practice, tended to point people toward RC'd pointers (with weak pointers to handle cycles) whenever anyone needs multiple references to the same piece of memory. Thanks to linear types, reference counting in Rust is actually surprisingly cheap (many instances where other languages would need to bump the refcount are simply achieved by moving the pointer in Rust, since we can statically guarantee that that's safe).


Thanks for the backstory, I may have been a bit naive then about how easy it would be to convert.

Good to hear about the RC efficiencies; I assumed they were basically shared_ptr. I haven't tried using them and will give them a look.


No, RAII is a general technique in which an object owns a resource; limiting it to those two things is just wrong. When using std::shared_ptr, cycles are rarely a problem. Also, I don't think MLton ever implemented anything relating to regions; perhaps you are thinking of ML Kit.


Not disagreeing; you missed the "typically". Well, I find manual memory management to rarely be a problem, but when it is ...

EDIT: @detrino Ah! Our typical sets are different, then. How about "destructor-dependent cleanup"? Call it what we may, that's the underlying mechanism, but it's too limiting.


That's not typically what people mean when they talk about RAII; you are still wrong. Every collection in the standard library is an RAII class.


Why do you find it limiting?


>reference counted system is a _garbage_collector_

Exactly. An awful one.


So why did Apple switch to ARC?


Obj-C has been a (non-automatic) refcounted language since its inception, save for a very short period of more advanced GC on OSX (the GC never made it to iOS).

Also, ARC is not a garbage collector in the usual sense; the retain/release calls are injected statically into the binary by the compiler, as if they'd been inserted manually in the source.


The various pointer types are about keeping the ownership and lifetime of your objects clear.


They have no real reason to exist. At best, they are syntactic sugar (auto_ptr), at worst they actually encourage poor design (shared_ptr). They are basically a crutch for programmers who grew up with managed languages and are afraid of raw pointers.


That's just false. The automated cleanup could be done manually, but the type safety is invaluable. Change how you want to handle ownership of a particular object as part of a refactoring. Now, find everywhere that assumes the previous model - if you use bare pointers, it's a pain in the ass. USE TYPES TO HELP YOU WRITE CORRECT CODE. I do a lot of this in my C code.
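
As a hedged sketch of that point (hypothetical names, not the poster's code): when ownership lives in the type, changing the ownership model turns stale call sites into compile errors instead of leaks or double frees.

    #include <memory>

    struct Connection {};

    // Before a refactoring, the function took ownership:
    //   void consume(std::unique_ptr<Connection> conn);
    // After the refactoring, it only borrows:
    void consume(Connection& conn) {
        // use the connection without owning it
    }

    int main() {
        auto conn = std::make_unique<Connection>();
        // consume(std::move(conn));  // old call site: now a compile error, not a silent bug
        consume(*conn);               // new call site makes the borrow explicit
    }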


Oh please. They help with exception safety, and sometimes you do transfer ownership or have unknown lifetimes.


I happen to agree with the grandparent post. C++ is bloated with features, and when the STL and templates came out, not all compilers even supported all of their features. Not to say that templates themselves are bad, but there are too many things in C++. For example, copy constructors vs. initializers, (String)foo vs. String(foo), etc. And now with C++0x/11 we have even more. Lambdas? Concepts were about to be in there? Etc.

A programming language is supposed to be small so that programmers can share code. The rest can be handled with libraries. That's why C is better for huge projects. Linus agrees with me :)

In addition, why encourage reference-counted pointers? If you really want to manage objects on the heap, have a garbage-collected reference-counted pointer.


How did we ever survive without them, before boost showed up? I seem to remember doing just fine. They are a symbol of the useless bloating of C++, a language trying to be all things to all people. A half-assed halfway house between managed and unmanaged. You're either a competent C++ coder, in which case you shouldn't need them, or you're new to the language, in which case they are obfuscating the details that you need to learn.

> transfer ownership

Then transfer ownership. The object that owns them encapsulates a container. When it is destroyed, it iterates the container and destroys the objects that it owns. So to transfer, take it out of that container and pass it to the new owner. It isn't rocket science.
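
A rough sketch of that manual scheme (hypothetical names; just one way it could look):

    #include <algorithm>
    #include <vector>

    struct Asset {};

    class Owner {
        std::vector<Asset*> assets_;  // this object owns everything in here
    public:
        void adopt(Asset* a) { assets_.push_back(a); }

        // Transfer: remove the pointer from this owner's container and hand it out.
        Asset* release(Asset* a) {
            assets_.erase(std::remove(assets_.begin(), assets_.end(), a), assets_.end());
            return a;
        }

        ~Owner() {
            for (Asset* a : assets_) delete a;  // destroy whatever is still owned
        }
    };

    int main() {
        Owner first, second;
        Asset* a = new Asset;
        first.adopt(a);
        second.adopt(first.release(a));  // explicit transfer of ownership
    }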

>unknown lifetimes

Then you have a lazy, poor design. And that's the point, really: managing your memory properly is an intellectual rigour; it forces you to improve your design, to be acutely aware of these issues. Not just spawn shared_ptrs and hope for the best. We have nice, forgiving languages where you can be lazy. Coding in C# is like taking a holiday. C++ is meant to be hard.


>C++ is meant to be hard.

I thought C++ was meant to be a lot of things, but I don't think being "hard" was one of the key points of its inception.

Just because you're using a "hard" language doesn't make you a better programmer. If the language can be made easier and more productive to write in, with fewer problems and bugs due to memory leaks and inaccuracies, while not impacting performance, why wouldn't you want that?


That's like asking a marathon runner why he doesn't just hop in a car.

I use many languages. I use C# to knock out a GUI. VBA for a bit of spreadsheet work. Q/KDB+ for database/functional work. R for stats. All of these are far more productive than C++. If you want fancy abstractions, other languages have them by the bucketload.

I don't use C++ to be productive, and I don't expect to be. I use it to get close to the metal and have full control. Everything you code will take 4-5 times longer to write. And longer to compile. You will be careful because you have to be careful, and that's a good thing because with that care comes quality.


> That's like asking a marathon runner why he doesn't just hop in a car.

Most C++ coders I've talked to (including all of the ones who get paid for doing it) are using C++ to get useful stuff built, not for the sake of 1337ness.


You are aware that recompiling with C++11 can improve the speed of existing STL code bases, right? (Move constructors are "some bloat", and guess what, they make transferring a unique_ptr free [edit: as in no overhead over doing it manually].) And you are aware that 95% of the code should not be performance critical, and should be made easy to write? And you are aware that before standardized smart pointers everybody wrote their own by hand, to take advantage of RAII?
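
A small sketch of the zero-overhead transfer being referred to (hypothetical names; kept to C++11 constructs):

    #include <memory>
    #include <utility>

    struct Buffer {};

    // The callee receives the unique_ptr by value; the caller's std::move compiles
    // down to copying the raw pointer and nulling the source, which is exactly what
    // a hand-rolled ownership transfer would do anyway.
    void take_ownership(std::unique_ptr<Buffer> buf) {
        // Buffer is freed when buf goes out of scope here.
    }

    int main() {
        std::unique_ptr<Buffer> buf(new Buffer);
        take_ownership(std::move(buf));
        // buf is now null; no reference count was touched at any point.
    }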


"C++ is meant to be hard."

If you are making your programming harder for zero run-time benefit, you are a poor coder, whatever skills you may have.



