Just to be clear, the replacement for auto_ptr is unique_ptr, not shared_ptr.
The vast majority of allocated objects have unique ownership semantics, not shared. If you use shared_ptr everywhere, it's far too easy for your design to degenerate into object soup.
> If you use shared_ptr everywhere, it's far too easy for your design to degenerate into object soup.
You mean that it's usually better design to have only one "owner" of an object? That's true.
It's worth noting that shared_ptr is also necessary for containers, so you might get into a habit of using it even if you're not creating multiple references.
shared_ptr isn't necessary for containers, it's only necessary for bad containers.
Edit: Of course I'm being a bit stupidly snarky here. Obviously, if you have a standard container you need a shared_ptr, an intrusive_ptr, or some other smart pointer; the moral of the story is that std containers aren't good for holding owned pointers in C++03.
I don't think boost ptr_containers are particularly good either, but that's mostly because of the huge cost of putting the string '#include "boost/' in your code.
Not only. Over-use of shared_ptr is sloppy, and may cause reference leaks which are no easier to solve than memory leaks. Its use is recommended only when sharing semantics are actually required.
In addition to it being broken, in C++11 auto_ptr is deprecated. Since they're moving to C++ now, they may as well use the latest features as often as possible (when the use case makes sense, of course).
auto_ptr's copy is broken, so if you copy one the first object loses the reference. Also, it always uses delete on the reference, which means it can't be used for things that don't use delete (like arrays which use delete[]).
Before the days of std::move, auto_ptr's copy modeled an ownership-transfer semantic in a more efficient way than shared_ptr could (no refcount churn). It wasn't broken; one simply had to know when it was appropriate to use.
Consider a helper function that returns a heap-allocated object and think whether auto_ptr or shared_ptr better modeled the function's intent of giving ownership of the object to the caller.
The problem with auto_ptr is that it put these ownership-transfer semantics in the copy constructor instead of something more explicit. A scoped_ptr type that has a release() method is far more useful and safer than auto_ptr.
Sure. I was more responding to the claim that shared_ptr should always be preferred to auto_ptr. I agree that scoped/unique_ptr fixed a lot of auto_ptr's pitfalls.
This was just a workaround for the lack of move constructors.
The real problem with auto_ptr is that if anything emits a call to the auto_ptr destructor at a point where the pointed-to type is incomplete (forward-declared), the destructor code won't run; only the memory will be freed.
* C++ is a standardized, well known, popular language.
* C++ is nearly a superset of C90 used in GCC.
* The C subset of C++ is just as efficient as C.
* C++ supports cleaner code in several significant cases.
* C++ makes it easier to write and enforce cleaner interfaces.
* C++ never requires uglier code.
* C++ is not a panacea but it is an improvement.
It's interesting to me how defensive many of those justifications are. "It's just as efficient!" or "It's just a superset anyway!" Not until halfway through the list do we get any real description of expected benefits: C++ makes it easier to write and enforce interfaces, and C++ supports cleaner code in several significant cases. The last item is the most telling though: it's clear that these developers feel they've hit a point of crisis, and they are willing to take risks like this in order to lift themselves out of the problem.
You've not seen ugly code until you've read 90s-era Win32 UI code written with Hungarian notation everywhere. Ugly code comes from the programmer, not the language.
I blame the early Windows team for distorting what Simonyi meant by Hungarian notation: they misunderstood what 'type' means. That's why Office uses his notation in a much cleaner fashion.
Yeah, it's a shame that Hungarian notation has gotten such a bad rap. Done right, I find it occasionally useful. Done the way most Windows programmers have been taught, it's an abomination.
Can you give an example (or a link to an article discussing this)? I've never had the displeasure of working with the Windows API (though I often shook my head when I saw code for it).
Basically, he intended 'type' to mean not the datatype, as in 'dwSomething' (dw for double word), but the purpose of the variable. For example, if you're using an integer variable to store the length of some entity, you would name it something like 'lenSomething' rather than 'dwSomething'.
Storing the datatype of the variable makes little sense, as in most cases it is only a quick grep away.
Unless I misunderstand you, block type syntax is almost identical to function pointers. In both cases, you should generally typedef them and then your code is once again free from ugliness.
I am unable to reply to my sibling poster, but I believe what you're saying is that having a hideous and unreadable syntax that forces the use of typedefs just so others can understand what you have written is a sign of deep, unfixable badness. And I agree.
I'm not sure how you arrived at that belief, unless perhaps you think everything should be (visibly to the programmer) type-free? Care to elaborate? I think it would be nice if C and C++ supported more type semantics than `typedef` and `class`, say something as powerful as Haskell's system...
Some say there is no such thing as ugly code, only job security.
Also, ugly code is less of an issue than an ugly design; you can look past how the code looks if the design of the program is sound.
But ugly is in the eye of the beholder. Maybe more people are growing up with C++ and only later dealing with C, as opposed to the other way around. So it may be that the mindset of the average programmer today is more comfortable with C++ than with C: they'll see nicely laid out C code as having ugly bits, since it's easier for them to do it prettier in C++, while a biased C programmer will see it completely from the other perspective.
So it's one of those debates where the best move is to get the popcorn: some people see ugly code where others see something nice, and some will see C++ as better than C, and vice versa. Just be glad I'm not involved or I'd be mentioning COBOL as a reality check :).
The "Spelling, terminology and markup" section is a bit painful. Does anyone really keep it printed and stapled to the wall in case they write non-zero rather than nonzero? Seems really petty. I hope the enforcement is done by simply running a regexp or ignored altogether.
Code is where conventions are important, within comments I really wouldn't be upset if someone called Objective-C "Objective C".
That's standard stuff for any organization that makes documents - keep in mind that the primary purpose of the list is for documentation and user-visible messages. The NY Times surely has a similar kind of list. Do you think that would be equally as painful?
There are entire books written about that sort of style detail. Personally, I find it comforting, in a strange sort of way, to know that there are rules out there that I can follow if I wish.
This is appalling.
The arguments seem to boil down to "I don't like VEC and how hash tables are done". Oh, and the usual BS about interfaces.
The result is that now there is a monstrous dependency (a C++ compiler) in place of something that an undergrad can build for a class (a basic C compiler).
I wonder what rms thinks of this. Too bad he is not calling the shots in gcc anymore.
It would take a smart undergrad a bit of hand-holding to write a C compiler in, say, less than a year. (If you can write a C compiler, you should probably be in the master's program.)
Many CS undergrads have a compilers class or section of a class for a single semester in either the freshman or sophomore year. Compilers for simple languages like C (simple in an objective sense, mind you) are easy once you understand a few basic concepts. Compilers in general are fairly easy, what's hard is optimization. See for instance this division optimization: http://ridiculousfish.com/blog/posts/labor-of-division-episo... That's just one clever trick, imagine going through a similar amount of work for dozens of different tricks.
You can parse languages that are not context-free. "Parse" just means that you figure out the structure of the file, the word has nothing to do with the techniques used.
Basically, his entire argument is the slippery slope fallacy based on assumptions that may never occur. Plenty of great software is written in C++, even using STL, that doesn't lead to software that is any more difficult to maintain than the equivalent C program. For one thing, refactoring due to design change is a pain in the ass with or without objects. He also doesn't address the fact that polymorphism is very unwieldy in C. While it's very defensible to NOT use C++, I don't think that saying that "C is the only sane language" is fair. That, or Apple, Google, Microsoft, Oracle, and Facebook are full of people who are barking mad.
However, in an environment like Linux and git where code practices may be less restrictive than a corporate or authoritarian environment, the natural restrictions of C (i.e. lack of easy-to-abuse features) may seem like a feature in itself.
Then why not write a rebuttal and ask for his feedback? It's impossible to anticipate every possible counter to an argument, and it would be prohibitive to list and rebut them all within a single message.
> Plenty of great software is written in C++, even using STL, that doesn't lead to software that is any more difficult to maintain than the equivalent C program.
I don't deny that great software is written in C++, but what is your evidence that C and C++ have the same maintenance burden?
Well, it has already happened at least once for Git [1]. I seem to remember someone else suggesting to move the Linux Kernel to C++ ending up with a similar rant (someone can feel free to dig that up if they have the time).
I'm going to go out on a limb here and say that if he was writing a 'normal' application or even a compiler he'd be tempted to use some subset of C++ or maybe even Java.
As it is, for the kernel you'd be crazy to use something other than C.
> I'm going to go out on a limb here and say that if he was writing a 'normal' application or even a compiler he'd be tempted to use some subset of C++ or maybe even Java.
I wouldn't be surprised if the folks at the project abandoned their branch of gcc in order not to be "tainted by association". There is a revulsion to C++ that is close to the core of OpenBSD's philosophy. Perhaps pcc or clang will get more attention.
The people in the OpenBSD project seem pretty happy with their toolchain. pcc was removed from the source tree due to lack of progress. Making a really good C compiler is hard, and I don't think they have the people or the interest at this time.
I don't think they would switch to clang. LLVM is also C++.
>pcc was removed from the source tree due to lack of progress.
That's a shame. PCC seemed an interesting alternative to gcc, though I do recall PCC's v1.0 "revival" coinciding with an April Fools' Day several years ago. Perhaps it was always to be taken as a joke.
OpenBSD's simplicity of implementation is laudable. The fewer moving parts there are, the fewer parts there are to break. It's the same philosophy that keeps me driving my 25-year-old pickup.
I am not sure they are happy. They have the C++ compiler written in C that they want, but:
- they have to maintain their own gcc version (the last GPL2 one)
- the licensing for clang and LLVM fits the BSDs better than the licensing for gcc.
If they want to keep the ability to bootstrap from a C compiler, they should consider creating a good C backend (configurable with such things as the size of a long) for LLVM. That way, they could convert any C++ code (including clang and LLVM) to C, and thus bootstrap a C++ compiler on a system that only has a C compiler.
But if GCC is now C++ as well, then it removes what may have been the motivating characteristic to use it, when clang is for many purposes an otherwise better, more freely licensed, compiler.
A build change to require a C++ compiler is a no-go.
One of the reasons to install gcc on a machine is the absence of a decent C++ compiler. 20 years ago, the C compiler of Sun was completely broken and we were happy to have gcc.
In my experience, a modern version of gcc is extremely difficult to bootstrap on anything but a fairly modern platform (e.g., I was having a hell of a time getting it going on HP-UX 11.23 with a ~2006 compiler). That's not a criticism of gcc. I'm only suggesting that if your platform lacks a modern (read: C++03-compliant) C++ compiler, you were probably already out of luck trying to get 4.7 bootstrapped.
Nowadays pretty much everyone has access to a Windows or Linux PC, or an OS X Mac, and so has ready access to good compilers. The way you'd handle that Sun nowadays is to build a cross compiler for Sun on your Windows or Linux PC or your Mac, and then use that to build the compiler for your Sun.
Converting the stage 1 bootstrapping compiler to C++ is a bad idea IMO. There are many embedded platforms without a C++ compiler (except perhaps a downrev gcc).
While it doesn't require a C++ compiler, in the last few years it has required an increasing list of fairly modern libraries. I don't think requiring a C++ compiler will make things that much harder.
The demise of gcc has begun. Not because I particularly dislike C++, but because the folks apparently have an interest in gradually moving a C codebase to C++, as if they didn't have better things to do. Sure, some syntactic things will look cleaner (let's exclude, for the sake of fairness, those that will look uglier) in five years, or whenever their transition can be considered complete.
The GCC folks are feeling the pressure from Clang/LLVM, which is a good thing. Under the guidance of the FSF, GCC has intentionally tangled the front end with the back end. The purpose of this was to make it hard to modularize GCC, because if GCC were modular, then you could write a proprietary component for it (like you can write proprietary Linux modules).
Their goal here is to make it easier for new developers to write their own passes or front ends or whatever. GCC may be more mature, but Clang/LLVM are way easier to dive into.
> Under the guidance of the FSF, GCC has intentionally tangled the front end with the back end. The purpose of this was to make it hard to modularize GCC, because if GCC were modular, then you could write a proprietary component for it (like you can write proprietary Linux modules).
Surely you must be joking... do you have a reference for this? Sure, I sometimes question the motives of the FSF, but this would be quite outrageous from a software engineering perspective.
Just out of curiosity, couldn't the FSF release GCC under the AGPL or something like that and then modularize it all they want? That was my first thought when reading that message. That way they can promote freedom and still have modular software.
Of course, the AGPL might not have existed in 2000, but I haven't heard about GCC moving to the AGPL since then either.
It's actually the opposite; GCC is being ported to C++ because there is actually so much interest in continuing to hack on it. The current codebase is being hobbled in various ways by being limited to pure C. GCC even has a (nasty) internal garbage collector, despite being a C program: http://gcc.gnu.org/wiki/Memory_management
If GCC were dying as you say, nobody would be interested in such large refactors whose only point is to make development easier.
I would regard attempts to more aggressively refactor a 30-year-old codebase as a sign of health, rather than the opposite. A sick project gets only hacks to overcome the fire du jour.
>auto_ptr is broken. We should use shared_ptr instead
It has begun!
One of the most enjoyable features of C++ is the arguing over which parts of the language should be allowed.