Hacker News

> I used C++ inheritance, templates, exceptions, custom memory allocation and a bunch of other features I thought were very cool at the time. Now I feel bad for the people who have to maintain it.

I think this is right on the money. There are a lot of things you can do with C++ (and other languages -- even C probably) that feel good at the time, only to learn later that they don't age well.

I've never seen it analyzed deeply why this is, but I suspect it has something to do with how "fungible" the code is when you're done. Will your design let you make small design changes with relatively small and localized code changes, or has your complicated template/inheritance-based design "locked up" your code into a particular pattern, where changing any of the design assumptions requires "unwinding" a highly intertwined structure?

I don't know the rigorous answer to this question, so I design with various heuristics, which mostly boil down to some version of "this is getting a bit too complex or too highly leveraged." What I mean by "leveraged" is that some significant guarantee or invariant is achieved through a complicated contract between two different objects. I try to keep the design of every individual type/object as "flat" and unsubtle as possible; again vague terms, but I know it when I see it.




I'm using C++ right now but tend to write very "thin" C++ that is mostly C-like. I just like the object syntax for grouping things together, I sometimes do use a little bit of templates/inheritance when it makes sense, and I like the STL core data structures. C has nothing like STL -- no good portable data structure library. (STL isn't that great, but it's passable and ports to everything.) Operator overloading is nice in places, like with data structures and for mathematical code, but should not be over-used. Exceptions can also be useful, though in my current code base I am not using them much.

Boost is somewhat tempting but it's just too convoluted in places. Haven't had a reason to use it yet. Qt is nice for GUIs.

The thing about C++ is that it's a language that lets you do almost anything, and it has no stylistic traditions. It's like Linux would be if it had absolutely no filesystem layout standard or LSB. Each C++ sub-community (Microsoft, Qt, Boost, etc.) has its own conventions that are in conflict with one another. The language imposes no internal sense of discipline and carries no de facto coding standard. As a result, I find most C++ libraries annoying to use even in my own C++ code. I tend to use it as a top abstraction layer gluing together underlying C libraries and system calls, and to use most C++ libraries (except the STL) reluctantly.

It's too bad... well-done C++ can be very clean, efficient, and safe from things that dog C such as memory leaks, buffer overflows, or rare core-dump crashes due to things that could be caught and handled as a last resort in a try{} catch ( ... ) {} construct.


In my experience there's only 3 major C++ styles: BusinessObjects *(Java-ish Qt), std::cavedwelling<stl_guru, guru_cavedwelling_tag<const _Type<T &> >::const_value_type, and void _rather_be_writing_c();

Often code you'll come across is some combobulated identity crisis mixture of those 3. In terms of sheer readability and extension, I'm a major fan of Qt's design style (their most elaborate use of C++ appears to be virtual functions and the pImpl idiom), although that approach leads to heavy heap usage.

Really detest simple problems "overly" expressed in guru-ified STL. Can't read it, can't modify it, can't extend it, can't debug it. Naturally this means I'm not a great fan of Boost.


Can't read it, can't modify it, can't extend it, can't debug it. Naturally this means I'm not a great fan of Boost.

The STL in particular is designed the way it is largely for performance reasons. Ditto for Boost. I am not a fan of writing template soup, but theirs has a purpose. I mean, it can totally be frustrating (the inheritance hierarchy for collections/iterators recently gave me grief), but the reasons for their design decisions, when you take into account their goals, are generally very sound.

In any event, I have never had to introduce template soup to my code when using Boost. Even the boilerplate annoyances have been largely mitigated with C++11 and type inference.


I can see why Stroustrup finds so much C++ criticism annoying - language design is all about tradeoffs and many of the less elegant parts of C++ are compromises made to preserve performance. Stroustrup himself has said there's a cleaner, simpler language trying to get out of C++. C++11 seems like a good step in that direction.


Indeed. Not only that, the criticisms are often poor echoes of Stroustrup's own considered, well-presented discussion of shortcomings and trade-offs in, say, "The Design and Evolution of C++".


C++ is a multiparadigm language; all of the styles you mention are extreme bastardisations of the style laid out in books like Accelerated C++, Effective C++, Exceptional C++, Modern C++ Design, and so on.

That style tends to be similar to that used in the STL, with some OOP thrown in when suitable. It is about selecting the appropriate abstraction while ensuring the code abides by RAII.

I am convinced that the Java-ish/Qt way of coding is destructive for C++: once you get to grips with the generic programming style, inheritance is not as simple by comparison, and it has a lot of gotchas. And the C style is to me just plain wrong, especially if you are using any of the C++ standard library, since those facilities are liable to throw, and without reliance on RAII your code will not be exception safe. If you want C, use C.


"std::cavedwelling<stl_guru, guru_cavedwelling_tag<const _Type<T &> >::const_value_type"

Pure gold.


C++ code doesn't "age" well. Linux kernel C code basically looks like UNIX C code from 25 years ago. C++ code from the mid-1990's is all about OOP, then it was STL containers and RAII, then it was template metaprogramming, then people scaled back on that because it was incomprehensible, and now it's boost and smart pointers, etc.

Moreover, because the various features are all fairly incompatible, you really have to think about how you want to design something before putting it down on paper. Were you passing around STL vectors of objects to take advantage of RAII? Well, hope you never want to create sub-classes and use polymorphic methods because then you'll have to change everything around to store pointers instead and change all your '.' indirections into '->' indirections. Oh wait but that fucks up your memory management, so you better change everything to use smart pointers and hope nobody ever memcpys the arrays instead of using std::copy.

C++ is a mish-mash of features that don't really work together. The only guiding principle I can really see is: "is it cheap to implement?"


If Linux kernel code looks like UNIX C code from 25 years ago, I'd argue it is C which hasn't aged well.

C++'s true merit lies in its flexibility to evolve with the latest understanding of how to write good, abstract code with little runtime penalty, IMHO.


C++ never really lets you enjoy your abstractions because they're always so leaky. Low-level implementation details are always percolating to the surface. In practice I don't find C++ code any more readably abstract than similar C code.

A really good example is libfirm versus LLVM. The former is in C; the latter is in C++. They're both roughly the same order of magnitude in size (at least the core). They're both highly optimizing compilers. I find the former far more clear and readable.

I personally find C++'s abstractions to be of questionable value. YMMV.


Hey, someone who knows libfirm. :)

Fun fact: A full build of libfirm is faster than LLVM's configure.

The compilation process is one of C++'s greatest weaknesses. Since iteration speed (change-compile-run-test) is a significant productivity factor, C can be more productive than C++. On the other hand, D shows that a language with a feature set comparable to C++'s can be compiled much faster.


I just compiled libfirm and cparser last night on the train home. Cparser compiled so fast I thought the build was broken. Indeed, C++ compile times are unreasonable even with Clang.


That's not really a fair comparison, as libfirm tackles a much simpler problem: C compilation.

LLVM implements generic compilation, with frontends for C, C++, Objective C, Objective C++, Ada, D, Fortran, Haskell, Java bytecode, CIL, MacRuby, and Standard ML.

Of course there is a difference in complexity.


It is not that hard to convert Firm to LLVM or vice versa. If you can read German, a student once implemented this [0]. This allowed libfirm to compile C++ and Fortran code using the corresponding LLVM frontends.

Libfirm (with liboo extension) does not support exceptions, but apart from that there should be little difference in features.

For example, there is an experimental frontend for Java bytecode [1]. It compiles ahead-of-time, like gcj.

[0] http://pp.info.uni-karlsruhe.de/uploads/publikationen/liebe1...

[1] https://github.com/MatzeB/bytecode2firm


Completely untrue. LLVM implements compilation for LLVM bytecode. Clang and the other compilers that translate C++ or any other language to LLVM bytecode are not part of LLVM.


Indeed, C++ has neither a comprehensible established paradigm of coding style nor a consistent coding culture, as e.g. Java or C# do. It is very sensitive to the professional and personal qualities of a particular developer.


"There are a lot of things you can do with C++ (and other languages -- even C probably) that feel good at the time, only to learn later that they don't age well."

I think this reduces to the argument that you should do the simplest thing that works. As a programmer (especially as a C++ programmer), it's tempting to write clever code. It's fun. Why do something the boring straightforward way when you can invent an elaborate class hierarchy, or use that CRTP thing you keep reading about? Being a good C++ programmer requires a certain amount of discipline, but I think that's a trait that's helpful with more modern languages as well. Consider languages that allow monkey patching. It's a neat feature, but if used without discipline will lead to code that doesn't age well because it imposes more complexity on the codebase.


It's tempting because that's how we build our skills. You must abuse a tool to understand it, and know when to not use it. Unfortunately in professional contexts, many of us do that learning on the job, exhausting out a turbulent jet wake of too-clever code, upsetting those who follow. Coming across CRTP for the first time is either an upsetting detour or a fascinating David Foster Wallace style WTF-inducing endnote that you must consume before continuing. You appreciate this dynamic more as you play out different roles, both producing and consuming. Some languages try to interdict this state of affairs, shielding us from each other, but the more fun languages let us burn ourselves and our colleagues and finally learn restraint in ways you never learn in languages with training wheels.


> ... I suspect it has something do with how "fungible" the code is when you're done. Will your design let you make small design changes with relatively small and localized code changes, or has your complicated template/inheritance-based design "locked up" your code into a particular pattern, where changing any of the design assumptions requires "unwinding" a highly intertwined structure?

I'd say that this is right on the money.

I think it is true that hardcore OOP (along with similarly complicated constructions) gives benefits to the designer who chooses a good and appropriate structure. But, due to the idea you mentioned, it severely penalizes even mildly poor designs, or designs that need to change.


I think hardcore OOP is highly overrated outside of a few use cases where it does seem to work well: graphical user interfaces and certain kinds of business logic.

In those two cases, OOP works because it correctly models the underlying problem.

For GUIs for instance: A window is an object, a button is an object contained within a panel that is an object contained within a window... and so on. All of these are "drawable," and can receive mouse clicks, etc. OOP models that well.


I think that anything that OOP models well could be modelled even better with interfaces/traits, especially in flexible languages such as Go.

To me, OOP is mainly useful for implementation inheritance (in the rare cases where it's really needed). A very nice example is Python - it allows, but doesn't force you to use objects. I only use objects for encapsulation (e.g. make a new random number generator that is "independent" from the application's main RNG), and I don't recall ever using inheritance (in Python).


This is like insisting on driving around in a horse-drawn buggy because a car can crash at higher speeds. At this point I just ignore anybody trashing C++ that isn't offering any real alternatives. And no, C isn't an alternative for large scale app development.


Who gets to decide what qualifies as a "real alternative?" I have seen much larger applications written in Java than in C++; does that make Java a "real alternative?" C++ does not really have any technical advantages over "true" high-level languages (i.e. languages that don't force the programmer to deal with low-level details and whose support for low-level operations does not constrain high-level features), but it has plenty of disadvantages. Meanwhile, I cannot think of anything people do in C++ that I am unable to do in Lisp, although I can think of plenty of things people do in C++ that are unnecessary in Lisp and that only serve to make code more complex and more error-prone.

So there, you have two "real alternatives:" Java and Lisp. I'll also throw in Scala (basically it's what Java should have been to begin with) and OCaml. So what are you doing in C++ that could not be done in any of these languages?


I have seen much larger applications written in Java than in C++; does that make Java a "real alternative?"

I am a JVM fan and I dig Lisp, but for the purposes I use C++ for (game development, other high-perf code): no, Java is not a viable alternative and neither is Lisp. Two reasons for Java: generics and garbage collection. Generics are great for about 90% of what I do but templates provide similar flexibility and type-safety (better, really, than Java generics) while preserving fairly preposterous performance characteristics. And, generally speaking, I'm better at managing my memory than the runtime is in cases where I have a well-conceptualized object model[1] and a strong grasp of what's being done where; I can do without the overhead of stop-the-world garbage collection.

Lisp has similar object-lifecycle issues and (this is a personal beef) at least Common Lisp lacks strong static typing, so it's a non-starter. Worse than Java for my purposes.

Again, I dig these for where they're good, but C++ has a well-deserved stranglehold on its niche. It's not perfect--a project run by a friend of mine[2] is attempting to do it one better--but it hasn't become king of this area by accident.

[1] - A "large application" does not always mean a large set of objects and interactions. Just a lot of interesting things with them. =)

[2] - https://github.com/Ides-Language


> I cannot think of anything people do in C++ that I am unable to do in Lisp

The biggest technical advantage I've noticed C++ having over other languages is that library performance characteristics are documented as part of the portable language spec. I really miss that when I switch to other languages and have no idea if the builtin list class acts like a linked list or a vector or a deque or what.

Knowing that the algorithm you just built out of library components doesn't have an n^3 blowup on some platform is a correctness issue, not a premature optimization. I'm more comfortable that it's not going to happen when working in C++, and the culture and libraries around it, than anywhere else.

Does CL specify that, as part of the specification and not just by convention? I'm having a hard time finding docs to that effect but I don't use Lisp enough to know where to look.


When I was doing heavy Common Lisp work, this was just a non-issue. If I wanted to understand how a library call would perform, I'd just press the magic button in SLIME to jump to the source code of the library routine and look.

This was much easier than manually tracing through the layers of template and preprocessor indirection underlying even simple STL constructs.

It is possible that this sort of standardization was vitally important given the incredibly poor quality of C++ implementations traditionally available, but in languages with better communities with higher standards for implementations and documentation, this really seems like a solution in search of a problem.


"If I wanted to understand how a library call would perform, I'd just press the magic button in SLIME to jump to the source code of the library routine and look."

That tells you how your implementation works. It tells you nothing about how other implementations (or the next version of your compiler) do things. The point is that, with C++, the standard prescribes the O(...) of algorithms.

Yes, implementations can and will be broken, but given that writing a C++ compiler requires planning, I think it is unlikely that 'wrong choice of algorithm' will make it into any deliverable (counterexamples welcome)


Actually, in CL a list is always a linked list. It's made of a "cons" which is a two celled part of memory for holding the head (usually data but can be anything) and the tail. A hash is also known to have hash behavior as opposed to, say, an rb tree. A vector is always what C++ would call an array though what CL calls an array might depend on convention (afaik it's often just a normal vector with the access calls changed to go to the correct location).


That does not really answer the question. Consider, for example, this question: how efficient is append i.e. for two lists of N elements, how much work will append do? It is easy to assume this will be linear work in N since append will make a copy; but does CLTL2 actually say that append cannot do work in quadratic time, or cubic time, or even exponential time?

It's a valid complaint, in my opinion, and I say this as the person who promotes Lisp as an alternative to C++. I would put it in the category of "things missing from the Common Lisp standard," lumping it together with undefined behavior. In practice, this issue rarely comes up; it is generally safe to assume that no Lisp implementation will have an asymptotically worse version of a standard function than the simplest version of that function. So, for example, although the standard is silent on the matter you can safely assume that no Lisp implementation would have an nconc that is asymptotically worse than the simple linear time implementation, and you should not rely on an implementation giving you something better than that.

In the C++ standard, there are definitions of running times. So, for example, the stable_sort function is guaranteed to perform at most O(N log² N) comparisons, and O(N log N) when sufficient extra memory is available. That is one thing that is nice about C++, so I can concede that point, though with the note that the C++ standard has its own glaring omissions that are much more disturbing than the running times of algorithms.


Fair point, conceded.


None of those languages give you the raw performance or low-level memory control of C++ and this is exactly why none of them are used to write things like Chrome, Firefox, Ableton Live, Photoshop, AutoCad, the JVM etc.

If you can write your app in a higher level language than C++ then you'd be nuts to use it but there still isn't any real alternative for performance-critical native applications.


"None of those languages give you the raw performance or low-level memory control of C++"

Really? That's funny, because entire OSes have been written in Lisp, and I am pretty sure that those systems needed low-level memory control. In fact, the Lisp compiler I use in my own work has support for low-level pointer operations and a form of inline assembly (which in the past was used to implement SSE intrinsics as an add-on library). I am not sure where people got this idea that all high-level languages were slow or robbed programmers of the ability to do low-level things.

"why none of them are used to write things like Chrome, Firefox, Ableton Live, Photoshop, AutoCad, the JVM etc."

I suspect that has more to do with the OSes that were targeted and the availability of C/C++ programmers who are familiar with programming for those OSes than with some hypothetical performance advantage of C or C++.


Which is more likely, billions of engineering dollars are being dumped into projects by people that just don't know any better or there are real, practical reasons why languages like Lisp and Java aren't suitable for these kinds of apps? I remember interviewing with ITA software when they were still tiny and was shocked to learn that even they had to write some of their software in C++ because Lisp didn't give them the speed or the control they needed.

Extraordinary claims require extraordinary proof and the onus is on Lisp advocates to explain why there are so few real-world Lisp success stories. Calling everybody else stupid isn't good enough.


"Extraordinary claims require extraordinary proof and the onus is on Lisp advocates to explain why there are so few real-world Lisp success stories. Calling everybody else stupid isn't good enough."

You seem to be suggesting that some technical shortcoming of Lisp held it back in the real world, while C and C++ had technical advantages that led to their success. I think the history of Lisp and the history of C tell a much different story.

While C was becoming the language of choice for OSes that ran on low-end computers (especially Unix), Lisp was confined to very expensive computers that were being targeted at a market that ultimately failed to materialize. C became popular by riding on the coattails of Unix, not because it is a great language or because it could compete with other languages on technical features, and C++ rode on the coattails of C (maintaining some amount of compatibility etc.). Lisp, meanwhile, was held back by bad business decisions; no amount of technical superiority could have saved Lisp from the AI winter. By the time you could really talk about Lisp running on PCs or cheap hardware, it was too late: C was already widely used and there was already an enormous volume of C code that people were working with.

It is interesting that you mention ITA's use of C++. The big reason for that was the poor scalability of CMUCL's garbage collector, which they did not use at all, opting instead for memory-mapped arrays (hence C++, which was used to handle system calls and deal with pointers). Had ITA been developing for an OS written in Lisp (not unprecedented), C++ would never have entered the picture; but ITA was developing for an OS that exposed a C API and a C ABI, and using C or C++ to deal with system calls simply made the most sense. I suspect that if they were to try again today, they would use far less C++, if any; today's Lisp compilers come with (non-portable) extensions for POSIX system calls and for dealing with pointers, as well as disabling the garbage collector globally or on specific objects. It is not that Lisp itself was slow; just a specific feature of the Lisp systems they were using, and poor support for disabling or avoiding the use of that feature.

So which is more likely: that C and C++ are the best languages ever developed for the kinds of programs they are used for, or that they were just in the right place at the right time?


So which is more likely: that C and C++ are the best languages ever developed for the kinds of programs they are used for, or that they were just in the right place at the right time?

Or, they're better than any alternative for certain domains and the people that choose C++ know about Lisp, Haskell, Scala, Ocaml etc and still decide to use it despite its shortcomings?

It's a simple question - if Lisp really is superior for these kinds of apps then why isn't there some small, nimble team out there kicking ass with it?


There was and you're posting on it right now.


HN is interesting because of the community and Y combinator's backing. The software behind it is laughably crude compared to other web forums. It could easily be replicated in any number of other languages, with better results. Maybe we'd even see the last of those "link expired" errors?


Surely you know the history to which the previous poster is referring? YC exists because pg and others build a company in Lisp and sold it for millions. They were a small team so they chose Lisp exactly because it helped them stay nimble.


They chose Lisp because they were comfortable with it - nimble is a social trait, not a language feature.

It's not like Lisp automatically makes everything successful - the market-trailing HN performance and UI iteration times are a strong argument that technical factors frequently do not determine success.


Don't forget that for quite a while C/C++ were free to use while CL cost thousands of dollars.


Lisp's problem has not been a lack of visibility, as a web full of smug Lisp advocates demonstrates. I've personally been hearing about how amazing it is, how everyone writing Lisp will be incredibly productive for at least 15 years — during which absolutely nothing of significance written in Lisp has shipped even while some ideas developed or popularized in Lisp have spread to many popular languages.

Almost every single time this has come up, someone has cited drawbacks: lack of a standard high-quality implementation, limited libraries and, somewhat less universally, highly idiosyncratic coding styles hurting interoperability.

This generally gets a mix of dismissal or vague promises that it'll get better real soon now. Meanwhile, the only claim Lisp has on mainstream status in 2013 runs on the JVM so it has credible tools, performance and libraries. I would argue that this is neither coincidental nor caused by lack of unidirectional advocacy.


There have been significant products shipped using Lisp. For instance, the original Reddit, ITA, Viaweb (the startup Paul Graham sold to Yahoo), Hacker News itself.

But it's true, many Lisps suffer the problem of fragmentation.


> There have been significant products shipped using Lisp. For instance, the original Reddit, ITA, Viaweb (the startup Paul Graham sold to Yahoo), Hacker News itself.

The aspect I was really thinking about was developer mindshare: there are a few sites which did use Lisp, but it never seemed to develop a foundation which many other people would consider building a project on top of. From the mid-90s onward I don't recall anyone talking about learning Lisp so they could build a website; it was always a case of not wanting to learn something new, or some sort of CS machismo ranking languages without concern for mere engineering tradeoffs.


The JVM is not as fast as C or C++ - there is no end of real-world data and benchmarks to validate this claim. It wins in microbenchmarks where it's allowed to dynamically inline things and otherwise take advantage of just-in-time compilation (which, of course, kills startup time), and it's certainly fast enough for many applications, but in general it does not match the performance of C.

I don't know anything about your work Lisp compiler, but making code written in a dynamically typed language run as fast as code written in a statically typed language - outside of microbenchmarks - generally requires making a lot of assumptions about that code.


JIT compiling does not mean slow startup, that was a choice Java made.


"I don't know anything about your work Lisp compiler, but making code written in a dynamically typed language run as fast as code written in a statically typed language - outside of microbenchmarks - generally requires making a lot of assumptions about that code."

Lisp supports a system of type "hints" to deal with that issue; with SBCL, those hints can be treated as assertions or can be assumed true, depending on compiler options (the "safety level"). A common suggestion for optimizing Lisp code is to use type hints, and the difference is usually pretty significant. You even get some amount of static type checking from this, in the form of warnings about declared types not matching inferred types. It's not as powerful as the ML type system, but it usually gives you what you need in terms of optimizing code.


It all depends on the Java implementation being used, Oracle's is not the only one.

Besides, there are native compilers for Java as well and VMs for C and C++.

Don't mix languages and implementations.


An analytic database like Vertica? A computer game with cutting-edge graphics? Etc.


Be careful about statements of the form, "That language is great, but it could not possibly be used for performance-critical code." There is nothing inherent to any of the languages I mentioned that makes them slow, and modern Lisp compilers can compete with C compilers on emitting fast code (and that is despite the fact that those compilers often lack some of the optimizations of C compilers; SBCL, for instance, has no peephole optimizer and it shows in the disassembly of compiled code). There is no reason why a Lisp program cannot access special hardware, use special CPU features via inline assembly, etc.


> There is nothing inherent to any of the languages I mentioned that makes them slow.

I don't agree. The inability to explicitly allocate on the stack in Java and OCaml? The lack of threads in OCaml? Java GC pauses (by the time you're storing things off-heap you might as well be writing C++)? I don't know as much about Lisp, but when looking at a task X, I tend to give the benefit of the doubt to languages (or frameworks, or hardware, or people, for that matter) that have actually achieved a measure of success at X.


Yep, that's the price of simplicity and safety.

People talk a lot about stack allocation, but the same issue of pointerful vs flat representation arises in arrays and records, where it presents even more of a performance issue. Some GCed languages allow a measure of control there: however, Java and OCaml are not among them.

People have tried to marry flat representation and safety while allowing things like passing pointers to stack-allocated objects. The results are distinctly baroque (complex notions of region, etc).


It's true, nobody ever wrote a large scale app in C. Ever.


IMHO, you missed the Linux kernel (OK, it's a kernel and does not count as an app).


jlgreco is correct, that was deep and sincere sarcasm on my part.

C is an extremely capable language and I personally have (technically, not as a people manager) managed millions of lines of it. I really don't understand the mentality that C is hard, that memory management is hard, that pthread multithreading is hard, and that complex systems in C are impossible. But then I've been coding in it professionally for about 12 years, so I guess I just grok it.


I first started writing C code in about 1988. I still find it by far the hardest of the mainstream programming languages to avoid resource leaks, buffer overruns, and gratuitously verbose code.


Perhaps it is; I don't have a lot to compare it to other than higher-level dynamic languages like Python. Those I can forgive somehow, probably because I don't usually feel like I'm doing anything serious with them. In the mid level (C++, Java), I find not minutely managing every resource quite alien.

I'm currently working in C++ with/for someone deeply in love with STL, boost, RAII and overly gratuitous typecasts and I'll admit I'm finding it a tough transition. I guess I'm deeply in love with the 'everything is just a patch of memory' paradigm I've become used to.


I suspect Nursie is being sarcastic.


How did you miss the dripping sarcasm there?


For the record I was not "trashing" C++, I like it more than most languages and write it every day. The code I work with is mostly at Google where we have a style guide that defines a pretty nice C++ subset: http://google-styleguide.googlecode.com/svn/trunk/cppguide.x...

I was just saying that some of the complicated designs you can express in C++ are a bad idea.


Thanks for the clarification. And I completely agree that it's very easy to misuse C++. If I could find another language that let me do real-time DSP on consumer phones I'd switch in a heartbeat but for the things I care about it's still the only game in town.

I think I just get tired of all the C++ bashing from people that either don't understand it or don't need it.


Thanks for the link to that C++ style guide


I think there is a real distinction between language features / design patterns / idioms that are enjoyable and efficient to write, and ones that are enjoyable and efficient to read. And it's usually (for me at least) pretty hard to tell, except in hindsight.

The general thought I try to keep with me, is that code is much harder to read than it is to write, and that if you need to be 'this' smart to write something, you need to be '>this^2' smart to read and comprehend it 6 months later.

So you might think some code is a super clever trick now, that it solves your problem in a way you think is elegant and clever. In 6 months though, when someone else looks at it, they may come to a wildly different conclusion.


While I agree that limiting to a subset of C++ and keeping it maintainable is a good thing (I regularly argue against using things in C++ so that the cognitive load will be lower), I have to say that if templates are locking your code into a design, you're doing it wrong. Maybe I've not abused templates enough, but every time I've applied them, it's always made my designs more flexible.



