Follow up to "The Unreasonable Effectiveness of C" (damienkatz.net)
201 points by DanielRibeiro on Jan 17, 2013 | 150 comments



> I used C++ inheritance, templates, exceptions, custom memory allocation and a bunch of other features I thought were very cool at the time. Now I feel bad for the people who have to maintain it.

I think this is right on the money. There are a lot of things you can do with C++ (and other languages -- even C probably) that feel good at the time, only to learn later that they don't age well.

I've never seen it analyzed deeply why this is, but I suspect it has something to do with how "fungible" the code is when you're done. Will your design let you make small design changes with relatively small and localized code changes, or has your complicated template/inheritance-based design "locked up" your code into a particular pattern, where changing any of the design assumptions requires "unwinding" a highly intertwined structure?

I don't know the rigorous answer to this question, so I design with various heuristics, which mostly boil down to some version of "this is getting a bit too complex or too highly leveraged." What I mean by "leveraged" is that some significant guarantee or invariant is achieved through a complicated contract between two different objects. I try to keep the design of every individual type/object as "flat" and unsubtle as possible; again vague terms, but I know it when I see it.


I'm using C++ right now but tend to write very "thin" C++ that is mostly C-like. I just like the object syntax for grouping things together, I sometimes do use a little bit of templates/inheritance when it makes sense, and I like the STL core data structures. C has nothing like STL -- no good portable data structure library. (STL isn't that great, but it's passable and ports to everything.) Operator overloading is nice in places, like with data structures and for mathematical code, but should not be over-used. Exceptions can also be useful, though in my current code base I am not using them much.

Boost is somewhat tempting but it's just too convoluted in places. Haven't had a reason to use it yet. Qt is nice for GUIs.

The thing about C++ is that it's a language that lets you do almost anything, and it has no stylistic traditions. It's like what Linux would be if it had absolutely no filesystem layout standard or LSB. Each C++ sub-community (Microsoft, Qt, Boost, etc.) has its own conventions that are in conflict with one another. The language imposes no internal sense of discipline and carries no de-facto coding standard. As a result, I find most C++ libraries to be annoying to use even in my own C++ code. I tend to use it as a top abstraction layer gluing together underlying C libraries and system calls, and to use most C++ libraries (except STL) reluctantly.

It's too bad... well-done C++ can be very clean, efficient, and safe from things that dog C such as memory leaks, buffer overflows, or rare core-dump crashes due to things that could be caught and handled as a last resort in a try{} catch (...) {} construct.
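
For illustration, here's a minimal sketch of that kind of last-resort guard (process_request and its failure mode are invented for the example):

  #include <iostream>
  #include <stdexcept>

  // Hypothetical worker; a throw here stands in for any deep failure
  // that would otherwise crash the process.
  void process_request(int id) {
      if (id < 0) throw std::runtime_error("corrupt request");
  }

  int main() {
      const int ids[] = {1, -1, 2};
      for (int id : ids) {
          try {
              process_request(id);
          } catch (const std::exception &e) {
              std::cerr << "recovered: " << e.what() << '\n';
          } catch (...) {
              std::cerr << "recovered: unknown failure\n";  // true last resort
          }
      }
  }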


In my experience there are only 3 major C++ styles: BusinessObjects *(Java-ish Qt), std::cavedwelling<stl_guru, guru_cavedwelling_tag<const _Type<T &> >::const_value_type, and void _rather_be_writing_c();

Often code you'll come across is some combobulated identity crisis mixture of those 3. In terms of sheer readability and extension, I'm a major fan of Qt's design style (their most elaborate use of C++ appears to be virtual functions and the pImpl idiom), although that approach leads to heavy heap usage.

Really detest simple problems "overly" expressed in guru-ified STL. Can't read it, can't modify it, can't extend it, can't debug it. Naturally this means I'm not a great fan of Boost.


Can't read it, can't modify it, can't extend it, can't debug it. Naturally this means I'm not a great fan of Boost.

The STL in particular is designed the way it is largely for performance reasons. Ditto for Boost. I am not a fan of writing template soup, but theirs has a purpose. I mean, it can totally be frustrating (the inheritance hierarchy for collections/iterators recently gave me grief), but the reasons for their design decisions, when you take into account their goals, are generally very sound.

In any event, I have never had to introduce template soup to my code when using Boost. Even the boilerplate annoyances have been largely mitigated with C++11 and type inference.


I can see why Stroustrup finds so much C++ criticism annoying - language design is all about tradeoffs and many of the less elegant parts of C++ are compromises made to preserve performance. Stroustrup himself has said there's a cleaner, simpler language trying to get out of C++. C++11 seems like a good step in that direction.


Indeed. Not only that, the criticisms are often poor echoes of Stroustrup's own considered, well-presented discussion of shortcomings and trade-offs in, say, "The Design and Evolution of C++".


C++ is a multiparadigm language; all of the styles you mention are extreme bastardisations of the style laid out in books like Accelerated C++, Effective C++, Exceptional C++, Modern C++ Design, and so on.

That style tends to be similar to that used in the STL, with some OOP thrown in when suitable. It is about selecting the appropriate abstraction while ensuring the code abides by RAII.

I am convinced that the Javaish/Qt way of coding is destructive for C++: inheritance is not as simple as the generic programming style once you get to grips with it, and it has a lot of gotchas. And the C style is to me just plain wrong, especially if you are using any of the C++ standard library, whose components are liable to throw; without reliance on RAII your code will not be exception safe. If you want C, use C.


"std::cavedwelling<stl_guru, guru_cavedwelling_tag<const _Type<T &> >::const_value_type"

Pure gold.


C++ code doesn't "age" well. Linux kernel C code basically looks like UNIX C code from 25 years ago. C++ code from the mid-1990's is all about OOP, then it was STL containers and RAII, then it was template metaprogramming, then people scaled back on that because it was incomprehensible, and now it's boost and smart pointers, etc.

Moreover, because the various features are all fairly incompatible, you really have to think about how you want to design something before putting it down on paper. Were you passing around STL vectors of objects to take advantage of RAII? Well, hope you never want to create sub-classes and use polymorphic methods because then you'll have to change everything around to store pointers instead and change all your '.' indirections into '->' indirections. Oh wait but that fucks up your memory management, so you better change everything to use smart pointers and hope nobody ever memcpys the arrays instead of using std::copy.
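
A minimal sketch of that churn (Shape and Circle are invented for the example): holding objects by value gives you RAII but cannot store subclasses polymorphically (assigning a Circle into a Shape slot would slice it), so introducing polymorphism forces pointers, and then smart pointers, everywhere:

  #include <memory>
  #include <vector>

  struct Shape {
      virtual ~Shape() = default;
      virtual double area() const { return 0.0; }
  };
  struct Circle : Shape {
      double r = 1.0;
      double area() const override { return 3.14159 * r * r; }
  };

  double total_by_value(const std::vector<Circle> &v) {
      double s = 0;
      for (const auto &c : v) s += c.area();   // '.' access, RAII, but no polymorphism
      return s;
  }

  double total_poly(const std::vector<std::unique_ptr<Shape>> &v) {
      double s = 0;
      for (const auto &p : v) s += p->area();  // every '.' becomes '->'
      return s;
  }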

C++ is a mish-mash of features that don't really work together. The only guiding principle I can really see is: "is it cheap to implement?"


If Linux kernel code looks like UNIX C code from 25 years ago, I'd argue it is C which hasn't aged well.

C++'s true merit lies in its flexibility to evolve with the latest understanding of how to write good, abstract code with little runtime penalty, IMHO.


C++ never really lets you enjoy your abstractions because they're always so leaky. Low-level implementation details are always percolating to the surface. In practice I don't find C++ code any more readably abstract than similar C code.

A really good example is libfirm versus LLVM. The former is in C, the latter is in C++. They're both roughly the same order of magnitude in size (at least the core). They're both highly optimizing compilers. I find the former far more clear and readable.

I personally find C++'s abstractions to be of questionable value. YMMV.


Hey, someone who knows libfirm. :)

Fun fact: A full build of libfirm is faster than LLVM's configure.

The compilation process is one of C++ greatest weaknesses. Since the iteration speed (change-compile-runtest) is a significant productivity factor, C can be more productive than C++. On the other hand, D shows that a language with a comparable feature set to C++ can be compiled much faster.


I just compiled libfirm and cparser last night on the train home. Cparser compiled so fast I thought the build was broken. Indeed, C++ compile times are unreasonable even with Clang.


That's not really a fair comparison, as libfirm tackles a much simpler problem: C compilation.

LLVM implements generic compilation, with frontends for C, C++, Objective-C, Objective-C++, Ada, D, Fortran, Haskell, Java bytecode, CIL, MacRuby, and Standard ML.

Of course there is a difference in complexity.


It is not that hard to convert Firm to LLVM or vice versa. If you can read German, a student once implemented this [0]. This allowed libfirm to compile C++ and Fortran code using the corresponding LLVM frontends.

Libfirm (with liboo extension) does not support exceptions, but apart from that there should be little difference in features.

For example, there is an experimental frontend for Java bytecode [1]. It compiles ahead-of-time, like gcj.

[0] http://pp.info.uni-karlsruhe.de/uploads/publikationen/liebe1...

[1] https://github.com/MatzeB/bytecode2firm


Completely untrue. LLVM implements compilation for LLVM bytecode. Clang and the other compilers that translate C++ or any other language to LLVM bytecode are not part of LLVM.


Indeed, C++ has neither a comprehensible established paradigm of coding style nor a consistent coding culture, as e.g. Java or C# do. It is very sensitive to the professional and personal qualities of a particular developer.


"There are a lot of things you can do with C++ (and other languages -- even C probably) that feel good at the time, only to learn later that they don't age well."

I think this reduces to the argument that you should do the simplest thing that works. As a programmer (especially as a C++ programmer), it's tempting to write clever code. It's fun. Why do something the boring straightforward way when you can invent an elaborate class hierarchy, or use that CRTP thing you keep reading about? Being a good C++ programmer requires a certain amount of discipline, but I think that's a trait that's helpful with more modern languages as well. Consider languages that allow monkey patching. It's a neat feature, but if used without discipline will lead to code that doesn't age well because it imposes more complexity on the codebase.
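
For readers who haven't met it, a minimal sketch of the CRTP mentioned above (names invented): the base class is parameterized on its own derived class, so dispatch is resolved at compile time with no virtual calls:

  #include <iostream>

  template <typename Derived>
  struct Ticker {
      void tick() { static_cast<Derived *>(this)->on_tick(); }  // no vtable involved
  };

  struct Loud : Ticker<Loud> {
      void on_tick() { std::cout << "tick\n"; }
  };

  int main() {
      Loud l;
      l.tick();  // resolved statically through the CRTP base
  }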


It's tempting because that's how we build our skills. You must abuse a tool to understand it, and know when to not use it. Unfortunately in professional contexts, many of us do that learning on the job, exhausting out a turbulent jet wake of too-clever code, upsetting those who follow. Coming across CRTP for the first time is either an upsetting detour or a fascinating David Foster Wallace style WTF-inducing endnote that you must consume before continuing. You appreciate this dynamic more as you play out different roles, both producing and consuming. Some languages try to interdict this state of affairs, shielding us from each other, but the more fun languages let us burn ourselves and our colleagues and finally learn restraint in ways you never learn in languages with training wheels.


> ... I suspect it has something to do with how "fungible" the code is when you're done. Will your design let you make small design changes with relatively small and localized code changes, or has your complicated template/inheritance-based design "locked up" your code into a particular pattern, where changing any of the design assumptions requires "unwinding" a highly intertwined structure?

I'd say that this is right on the money.

I think it is true that hardcore OOP (along with similarly complicated constructions) gives benefits to the designer who chooses a good and appropriate structure. But, due to the idea you mentioned, it severely penalizes even mildly poor designs, or designs that need to change.


I think hardcore OOP is highly overrated outside of a few use cases where it does seem to work well: graphical user interfaces and certain kinds of business logic.

In those two cases, OOP works because it correctly models the underlying problem.

For GUIs for instance: A window is an object, a button is an object contained within a panel that is an object contained within a window... and so on. All of these are "drawable," and can receive mouse clicks, etc. OOP models that well.
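
A toy sketch of that modeling (names invented for the example):

  #include <iostream>
  #include <memory>
  #include <vector>

  struct Widget {
      virtual ~Widget() = default;
      virtual void draw() const = 0;
      virtual void on_click() {}  // widgets can receive mouse clicks
  };

  struct Button : Widget {
      void draw() const override { std::cout << "  button\n"; }
      void on_click() override { std::cout << "clicked\n"; }
  };

  struct Panel : Widget {
      std::vector<std::unique_ptr<Widget>> children;  // containment maps naturally
      void draw() const override {
          std::cout << "panel\n";
          for (const auto &c : children) c->draw();
      }
  };

  int main() {
      Panel p;
      p.children.push_back(std::unique_ptr<Widget>(new Button));
      p.draw();
  }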


I think that anything that OOP models well could be modelled even better with interfaces/traits, especially in flexible languages such as Go.

To me, OOP is mainly useful for implementation inheritance (in the rare cases where it's really needed). A very nice example is Python - it allows, but doesn't force you to use objects. I only use objects for encapsulation (e.g. make a new random number generator that is "independent" from the application's main RNG), and I don't recall ever using inheritance (in Python).
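
The same RNG idea sketched in C++ terms, since that's the code this thread is full of (purely illustrative): each generator object encapsulates its own state, independent of any global stream:

  #include <iostream>
  #include <random>

  int main() {
      std::mt19937 app_rng(42);  // the application's main RNG
      std::mt19937 my_rng(7);    // an "independent" generator for one subsystem
      std::uniform_int_distribution<int> die(1, 6);
      std::cout << die(app_rng) << ' ' << die(my_rng) << '\n';  // streams never interact
  }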


This is like insisting on driving around in a horse-drawn buggy because a car can crash at higher speeds. At this point I just ignore anybody trashing C++ that isn't offering any real alternatives. And no, C isn't an alternative for large scale app development.


Who gets to decide what qualifies as a "real alternative?" I have seen much larger applications written in Java than in C++; does that make Java a "real alternative?" C++ does not really have any technical advantages over "true" high-level languages (i.e. languages that don't force the programmer to deal with low-level details and whose support for low-level operations does not constrain high-level features), but it has plenty of disadvantages. Meanwhile, I cannot think of anything people do in C++ that I am unable to do in Lisp, although I can think of plenty of things people do in C++ that are unnecessary in Lisp and that only serve to make code more complex and more error-prone.

So there, you have two "real alternatives:" Java and Lisp. I'll also throw in Scala (basically it's what Java should have been to begin with) and OCaml. So what are you doing in C++ that could not be done in any of these languages?


I have seen much larger applications written in Java than in C++; does that make Java a "real alternative?"

I am a JVM fan and I dig Lisp, but for the purposes I use C++ for (game development, other high-perf code): no, Java is not a viable alternative and neither is Lisp. Two reasons for Java: generics and garbage collection. Generics are great for about 90% of what I do but templates provide similar flexibility and type-safety (better, really, than Java generics) while preserving fairly preposterous performance characteristics. And, generally speaking, I'm better at managing my memory than the runtime is in cases where I have a well-conceptualized object model[1] and a strong grasp of what's being done where; I can do without the overhead of stop-the-world garbage collection.

Lisp has similar object-lifecycle issues and (this is a personal beef) at least Common Lisp lacks strong static typing, so it's a non-starter. Worse than Java for my purposes.

Again, I dig these for where they're good, but C++ has a well-deserved stranglehold on its niche. It's not perfect--a project run by a friend of mine[2] is attempting to do it one better--but it hasn't become king of this area by accident.

[1] - A "large application" does not always mean a large set of objects and interactions. Just a lot of interesting things with them. =)

[2] - https://github.com/Ides-Language


> I cannot think of anything people do in C++ that I am unable to do in Lisp

The biggest technical advantage I've noticed C++ having over other languages is that library performance characteristics are documented as part of the portable language spec. I really miss that when I switch to other languages and have no idea if the builtin list class acts like a linked list or a vector or a deque or what.

Knowing that the algorithm you just built out of library components doesn't have an n^3 blowup on some platform is a correctness issue, not a premature optimization. I'm more comfortable that it won't happen when working in C++, with the culture and libraries around it, than anywhere else.

Does CL specify that, as part of the specification and not just by convention? I'm having a hard time finding docs to that effect but I don't use Lisp enough to know where to look.


When I was doing heavy Common Lisp work, this was just a non-issue. If I wanted to understand how a library call would perform, I'd just press the magic button in SLIME to jump to the source code of the library routine and look.

This was much easier than manually tracing through the layers of template and preprocessor indirection underlying even simple STL constructs.

It is possible that this sort of standardization was vitally important given the incredibly poor quality of C++ implementations traditionally available, but in languages with better communities with higher standards for implementations and documentation, this really seems like a solution in search of a problem.


"If I wanted to understand how a library call would perform, I'd just press the magic button in SLIME to jump to the source code of the library routine and look."

That tells you how your implementation works. It tells you nothing about how other implementations (or the next version of your compiler) do things. The point is that, with C++, the standard prescribes the O(...) of algorithms.

Yes, implementations can and will be broken, but given that writing a C++ compiler requires planning, I think it is unlikely that 'wrong choice of algorithm' will make it into any deliverable (counterexamples welcome)


Actually, in CL a list is always a linked list. It's made of "conses," each a two-celled piece of memory holding the head (usually data, but it can be anything) and the tail. A hash is likewise known to have hash behavior, as opposed to, say, an RB tree's. A vector is always what C++ would call an array, though what CL calls an array might depend on convention (AFAIK it's often just a normal vector with the access calls changed to go to the correct location).


That does not really answer the question. Consider, for example, this question: how efficient is append, i.e., for two lists of N elements, how much work will append do? It is easy to assume this will be linear in N, since append copies its first list; but does CLTL2 actually say that append cannot take quadratic time, or cubic time, or even exponential time?

It's a valid complaint, in my opinion, and I say this as the person who promotes Lisp as an alternative to C++. I would put it in the category of "things missing from the Common Lisp standard," lumping it together with undefined behavior. In practice, this issue rarely comes up; it is generally safe to assume that no Lisp implementation will have an asymptotically worse version of a standard function than the simplest version of that function. So, for example, although the standard is silent on the matter you can safely assume that no Lisp implementation would have an nconc that is asymptotically worse than the simple linear time implementation, and you should not rely on an implementation giving you something better than that.

In the C++ standard, there are definitions of running times. So, for example, the stable_sort function is guaranteed to do no more than O(n log² n) comparisons, or O(n log n) if enough extra memory is available. That is one thing that is nice about C++, so I can concede that point, though with the note that the C++ standard has its own glaring omissions that are much more disturbing than the running times of algorithms.
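
A quick illustration of that stable_sort guarantee (the Row type is invented for the example): equal keys keep their relative order, at a standard-mandated comparison cost:

  #include <algorithm>
  #include <iostream>
  #include <vector>

  struct Row { int key; char tag; };

  int main() {
      std::vector<Row> v{{1, 'a'}, {0, 'b'}, {1, 'c'}, {0, 'd'}};
      // Standard-mandated: O(n log^2 n) comparisons, O(n log n) with extra memory.
      std::stable_sort(v.begin(), v.end(),
                       [](const Row &x, const Row &y) { return x.key < y.key; });
      for (const auto &r : v) std::cout << r.key << r.tag << ' ';  // prints: 0b 0d 1a 1c
  }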


Fair point, conceded.


None of those languages give you the raw performance or low-level memory control of C++ and this is exactly why none of them are used to write things like Chrome, Firefox, Ableton Live, Photoshop, AutoCad, the JVM etc.

If you can write your app in a higher level language than C++ then you'd be nuts to use it but there still isn't any real alternative for performance-critical native applications.


"None of those languages give you the raw performance or low-level memory control of C++"

Really? That's funny, because entire OSes have been written in Lisp, and I am pretty sure that those systems needed low-level memory control. In fact, the Lisp compiler I use in my own work has support for low-level pointer operations and a form of inline assembly (which in the past was used to implement SSE intrinsics as an add-on library). I am not sure where people got this idea that all high-level languages were slow or robbed programmers of the ability to do low-level things.

"why none of them are used to write things like Chrome, Firefox, Ableton Live, Photoshop, AutoCad, the JVM etc."

I suspect that has more to do with the OSes that were targeted and the availability of C/C++ programmers who are familiar with programming for those OSes than with some hypothetical performance advantage of C or C++.


Which is more likely: that billions of engineering dollars are being dumped into projects by people that just don't know any better, or that there are real, practical reasons why languages like Lisp and Java aren't suitable for these kinds of apps? I remember interviewing with ITA Software when they were still tiny and was shocked to learn that even they had to write some of their software in C++ because Lisp didn't give them the speed or the control they needed.

Extraordinary claims require extraordinary proof and the onus is on Lisp advocates to explain why there are so few real-world Lisp success stories. Calling everybody else stupid isn't good enough.


"Extraordinary claims require extraordinary proof and the onus is on Lisp advocates to explain why there are so few real-world Lisp success stories. Calling everybody else stupid isn't good enough."

You seem to be suggesting that some technical shortcoming of Lisp held it back in the real world, while C and C++ had technical advantages that led to their success. I think the history of Lisp and the history of C tell a much different story.

While C was becoming the language of choice for OSes that ran on low-end computers (especially Unix), Lisp was confined to very expensive computers that were being targeted at a market that ultimately failed to materialize. C became popular by riding on the coattails of Unix, not because it is a great language or because it could compete with other languages on technical features, and C++ rode on the coattails of C (maintaining some amount of compatibility etc.). Lisp, meanwhile, was held back by bad business decisions; no amount of technical superiority could have saved Lisp from the AI winter. By the time you could really talk about Lisp running on PCs or cheap hardware, it was too late: C was already widely used and there was already an enormous volume of C code that people were working with.

It is interesting that you mention ITA's use of C++. The big reason for that was the poor scalability of CMUCL's garbage collector, which they did not use at all, opting instead for memory-mapped arrays (hence C++, which was used to handle system calls and deal with pointers). Had ITA been developing for an OS written in Lisp (not unprecedented), C++ would never have entered the picture; but ITA was developing for an OS that exposed a C API and a C ABI, and using C or C++ to deal with system calls simply made the most sense. I suspect that if they were to try again today, they would use far less C++, if any; today's Lisp compilers come with (non-portable) extensions for POSIX system calls and for dealing with pointers, as well as disabling the garbage collector globally or on specific objects. It is not that Lisp itself was slow; just a specific feature of the Lisp systems they were using, and poor support for disabling or avoiding the use of that feature.

So which is more likely: that C and C++ are the best languages ever developed for the kinds of programs they are used for, or that they were just in the right place at the right time?


So which is more likely: that C and C++ are the best languages ever developed for the kinds of programs they are used for, or that they were just in the right place at the right time?

Or, they're better than any alternative for certain domains, and the people that choose C++ know about Lisp, Haskell, Scala, OCaml, etc. and still decide to use it despite its shortcomings?

It's a simple question - if Lisp really is superior for these kinds of apps then why isn't there some small, nimble team out there kicking ass with it?


There was and you're posting on it right now.


HN is interesting because of the community and Y combinator's backing. The software behind it is laughably crude compared to other web forums. It could easily be replicated in any number of other languages, with better results. Maybe we'd even see the last of those "link expired" errors?


Surely you know the history to which the previous poster is referring? YC exists because pg and others built a company in Lisp and sold it for millions. They were a small team, so they chose Lisp exactly because it helped them stay nimble.


They chose Lisp because they were comfortable with it - nimble is a social trait, not a language feature.

It's not like Lisp automatically makes everything successful - the market-trailing HN performance and UI iteration times are a strong argument that technical factors frequently do not determine success.


Don't forget that for quite a while C/C++ were free to use while CL cost thousands of dollars.


Lisp's problem has not been a lack of visibility, as a web full of smug Lisp advocates demonstrates. I've personally been hearing about how amazing it is, how everyone writing Lisp will be incredibly productive for at least 15 years — during which absolutely nothing of significance written in Lisp has shipped even while some ideas developed or popularized in Lisp have spread to many popular languages.

Almost every single time this has come up, someone has cited drawbacks: lack of a standard high-quality implementation, limited libraries and, somewhat less universally, highly idiosyncratic coding styles hurting interoperability.

This generally gets a mix of dismissal or vague promises that it'll get better real soon now. Meanwhile, the only claim Lisp has on mainstream status in 2013 runs on the JVM so it has credible tools, performance and libraries. I would argue that this is neither coincidental nor caused by lack of unidirectional advocacy.


There have been significant products shipped using Lisp. For instance, the original Reddit, ITA, Viaweb (the startup Paul Graham sold to Yahoo), Hacker News itself.

But it's true, many Lisps suffer the problem of fragmentation.


> There have been significant products shipped using Lisp. For instance, the original Reddit, ITA, Viaweb (the startup Paul Graham sold to Yahoo), Hacker News itself.

The aspect I was really thinking about was developer mindshare: there are a few sites which did use Lisp, but it never seemed to develop a foundation which many other people would consider building a project on top of. From the mid-90s onward I don't recall anyone talking about learning Lisp so they could build a website; it was always a case of not wanting to learn something new or some sort of CS machismo ranking languages without concern for mere engineering tradeoffs.


The JVM is not as fast as C or C++ - there is no end of real-world data and benchmarks to validate this claim. It wins in microbenchmarks where it's allowed to dynamically inline things and otherwise take advantage of just-in-time compilation (which, of course, kills startup time), and it's certainly fast enough for many applications, but in general it does not match the performance of C.

I don't know anything about your work Lisp compiler, but making code written in a dynamically typed language run as fast as code written in a statically typed language - outside of microbenchmarks - generally requires making a lot of assumptions about that code.


JIT compiling does not mean slow startup, that was a choice Java made.


"I don't know anything about your work Lisp compiler, but making code written in a dynamically typed language run as fast as code written in a statically typed language - outside of microbenchmarks - generally requires making a lot of assumptions about that code."

Lisp supports a system of type "hints" to deal with that issue; with SBCL, those hints can be treated as assertions or can be assumed true, depending on compiler options (the "safety level"). A common suggestion for optimizing Lisp code is to use type hints, and the difference is usually pretty significant. You even get some amount of static type checking from this, in the form of warnings about declared types not matching inferred types. It's not as powerful as the ML type system, but it usually gives you what you need in terms of optimizing code.


It all depends on the Java implementation being used, Oracle's is not the only one.

Besides, there are native compilers for Java as well and VMs for C and C++.

Don't mix languages and implementations.


An analytic database like Vertica? A computer game with cutting-edge graphics? Etc.


Be careful about statements of the form, "That language is great, but it could not possibly be used for performance-critical code." There is nothing inherent to any of the languages I mentioned that makes them slow, and modern Lisp compilers can compete with C compilers on emitting fast code (and that is despite the fact that those compilers often lack some of the optimizations of C compilers; SBCL, for instance, has no peephole optimizer and it shows in the disassembly of compiled code). There is no reason why a Lisp program cannot access special hardware, use special CPU features via inline assembly, etc.


> There is nothing inherent to any of the languages I mentioned that makes them slow.

I don't agree. The inability to explicitly allocate on the stack in Java and OCaml? The lack of threads in OCaml? Java GC pauses (by the time you're storing things off-heap you might as well be writing C++)? I don't know as much about Lisp, but when looking at a task X, I tend to give the benefit of the doubt to languages (or frameworks, or hardware, or people, for that matter) that have actually achieved a measure of success at X.


Yep, that's the price of simplicity and safety.

People talk a lot about stack allocation, but the same issue of pointerful vs flat representation arises in arrays and records, where it presents even more of a performance issue. Some GCed languages allow a measure of control there: however, Java and OCaml are not among them.

People have tried to marry flat representation and safety while allowing things like passing pointers to stack-allocated objects. The results are distinctly baroque (complex notions of region, etc).
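
A sketch of that flat-vs-pointerful distinction in C++ terms (illustrative only), since C++ is one of the languages that does give you the choice:

  #include <memory>
  #include <vector>

  struct Point { double x, y; };

  int main() {
      // Flat: 1000 Points in one contiguous block, no per-element indirection.
      std::vector<Point> flat(1000);
      flat[0].x = 1.0;

      // Pointerful: each element is a separate heap object,
      // much like a JVM array of objects.
      std::vector<std::unique_ptr<Point>> boxed;
      boxed.push_back(std::unique_ptr<Point>(new Point{2.0, 3.0}));
  }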


It's true, nobody ever wrote a large scale app in C. Ever.


IMHO, you missed the Linux kernel (OK, it's a kernel and does not count as an app).


jlgreco is correct, that was deep and sincere sarcasm on my part.

C is an extremely capable language and I personally have managed (technically, not as a people manager) millions of lines of it. I really don't understand the mentality that C is hard, that memory management is hard, that pthread multithreading is hard, and that complex systems in C are impossible. But then I've been coding in it professionally for about 12 years, so I guess I just grok it.


I first started writing C code in about 1988. I still find it by far the hardest of the mainstream programming languages to avoid resource leaks, buffer overruns, and gratuitously verbose code.


Perhaps it is; I don't have a lot to compare it to other than higher-level dynamic languages like Python. Those I can forgive somehow, probably because I don't usually feel like I'm doing anything serious with them. In the mid-level (C++, Java) I find not minutely managing every resource quite alien.

I'm currently working in C++ with/for someone deeply in love with STL, boost, RAII and overly gratuitous typecasts and I'll admit I'm finding it a tough transition. I guess I'm deeply in love with the 'everything is just a patch of memory' paradigm I've become used to.


I suspect Nursie is being sarcastic.


How did you miss the dripping sarcasm there?


For the record I was not "trashing" C++, I like it more than most languages and write it every day. The code I work with is mostly at Google where we have a style guide that defines a pretty nice C++ subset: http://google-styleguide.googlecode.com/svn/trunk/cppguide.x...

I was just saying that some of the complicated designs you can express in C++ are a bad idea.


Thanks for the clarification. And I completely agree that it's very easy to misuse C++. If I could find another language that let me do real-time DSP on consumer phones I'd switch in a heartbeat but for the things I care about it's still the only game in town.

I think I just get tired of all the C++ bashing from people that either don't understand it or don't need it.


Thanks for the link to that C++ style guide


I think there is a real distinction between language features / design patterns / idioms that are enjoyable and efficient to write, and ones that are enjoyable and efficient to read. And it's usually (for me at least) pretty hard to tell, except in hindsight.

The general thought I try to keep with me is that code is much harder to read than it is to write, and that if you need to be 'this' smart to write something, you need to be '>this^2' smart to read and comprehend it 6 months later.

So you might think some code is a super clever trick now, that it solves your problem in a way you think is elegant and clever. In 6 months though, when someone else looks at it, they may come to a wildly different conclusion.


While I agree that limiting to a subset of C++ and keeping it maintainable is a good thing (I regularly argue against using things in C++ so that the cognitive load will be lower), I have to say that if templates are locking your code into a design, you're doing it wrong. Maybe I've not abused templates enough, but every time I've applied them, it's always made my designs more flexible.


I hated and feared C for years. Then I finally learned it and fell in love with it. Now other languages seem like too much work. I'm only a hobby programmer so that devalues my opinion somewhat (insofar as I don't have to support large production systems or please committees), but one of the things I love about C is that once you have something working in C, you're likely done - if you need to go faster or better, then either your understanding of the problem or the algorithm you're employing is flawed, or maybe you need to do some bits of it in assembler (DSP for example).

The other thing is that even though C is old, it still works great. I don't know how you web guys keep up with so many 'frameworks of the month' and so forth. I took up C mainly because I was tired of things becoming obsolete before I had got around to learning them properly.


One of the greatest things about coming back to C from years of Ruby is that you can constantly tell yourself "I don't need to worry about performance because there's no way what I'm doing here could be worse than what Ruby was doing for me". It's hard to fully articulate the feeling but it's great; it's like a cure for premature optimization.


Is it possible to need C and not know it, or if I need it will I know it? Because I can't think of anything I'd use it for. You guys speak of it and other higher-level languages as if you use them interchangeably for the same tasks.


I use Ruby for things that deal with the web, for utility programs, and (most often) for software I'm not going to need to support long term, such as clients for bizarro network protocols I encounter on penetration tests.

I used to use C for everything else; I'd reach for Go first now, but I still love C.

One way to think about C vs. (say) Python is that it's like the relationship between an FPGA and an ASIC: you prototype in a high level language and you write C when you really know exactly what you need your program to do. C is not a great exploratory programming environment, but it is excellent when you have a specific plan.


And this is what I don't really understand. Python or Ruby are excellent in the domain you talk about, but I would always, always reach for C++ over C if given the choice. The C++ standard library for me alone makes this such a simple choice. A lot of C code to me just looks horribly ugly these days - stuff like:

  int compare(const void *a, const void *b)
  {
      const int x = *(const int *)a;
      const int y = *(const int *)b;
      return (x > y) - (x < y);  /* qsort expects negative/zero/positive, not a bool */
  }
Yuck.


On the other hand, the mechanism by which that code works is simple to understand and keep in your head at all times, and isn't inelegant. I find that very valuable.


Except it has absolutely no type safety whatsoever. With modern-style C++, you can do the same thing in exactly 1 line:

  std::sort(std::begin(array), std::end(array), [](int a, int b) { return a < b; });
Versus:

    qsort(array, MAX_VAL, sizeof(int), compare);
With compare as defined before. I mean, sure, it's just sorting and it's a tiny example, but to me it shows a lot about why I prefer C++ over C: type safety, readability, reduced possibility of buffer overflows, reduced possibility of segfaults based on incorrect element access, speed thanks to code inlining vs function pointers...


I've coded in C since 1993, shipped commercial systems C code from 1996 through ~2004 (with a 3-year C++ interlude in there), and continued to write C code on projects from then until today. In that time, the number of bugs I've dealt with due to the lack of type safety on a void* is: zero.

Lest you think I'm being cavalier about this, I've been writing professional Ruby since ~2006, and have since then been routinely aggravated by bugs that would have been mitigated by type safety. I buy type safety. But it's a continuum of value, not a core principle of development.

In idiomatic C code, void* is a way of dropping in and out of static typing, usually to pass values of arbitrary types to a generic container library or through a callback. The use of void* doesn't surrender type safety throughout the program. Idiomatic C code casts a specific type to void* at the call site of the function that handles generic types, and casts it back to that specific type the moment that library hands it back.
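
A sketch of that idiom (the callback and context types are invented for the example): the trip through void* happens at exactly one hand-off point, and everything else stays statically typed:

  #include <stdio.h>

  typedef void (*callback_fn)(void *ctx);

  /* The generic library knows nothing about the caller's types. */
  void library_invoke(callback_fn fn, void *ctx) { fn(ctx); }

  struct AppState { int counter; };

  void on_event(void *ctx) {
      struct AppState *s = (struct AppState *)ctx;  /* cast back immediately */
      s->counter++;                                 /* fully typed from here on */
  }

  int main(void) {
      struct AppState s = { 0 };
      library_invoke(on_event, &s);
      printf("%d\n", s.counter);
      return 0;
  }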

Of the arguments C++ devotees marshal against C, I find this one among the FUDdiest.


I guess we'll just have to agree to disagree then. Perhaps you are a far more conscientious programmer than many out there; the number of buffer overflow exploits would suggest so. C++ doesn't make you immune to such things, but it does give you the tools to make such things less likely to occur. Further, I like compiler-enforced type safety. For when C was created, utilization of void* was a brilliant idea. Language design has moved on since then, and as much as many people really hate template-based C++ code, I honestly welcome it in comparison.


Yes, std::sort is better than qsort on simple types, but that's an isolated case. I'm currently working on a major "C++" project where literally the only C++ dependency is std::sort right now.


That's strawman C. We could just as easily write hideous C++, Ruby, Python, etc. code and complain about that.

Edit: By this I mean; if you want to prefer the C++ libraries, that's fine, but please don't claim C is inherently ugly.


How is it strawman C? If you want to use qsort, which anyone who has used C for any amount of time has, it's something you've written many times.


I think I misinterpreted your argument as being about the syntax instead of the type system. I would actually prefer the option of a stricter type system in C.


I'd like to thank you for that description. The first analogy that came to mind was Play-Doh vs marble, but that's a rather blunt one that connotes all sorts of negative things that I don't really mean. Your description is spot on!


OT: I haven't touched Ruby for 5 years, but I thought JRuby had really helped the whole performance thing?


I've recently realized that--at least to me--C is actually very much like Prolog. How? Well: Prolog is a fairly simple, Turing-complete language. For the things it's good for, it's great. But those things are few and relatively specialized. I could use Prolog for general-purpose programming, but I wouldn't want to. With Prolog, the question is always "why Prolog?" rather than "why not Prolog?". Sometimes there is a good answer to this, but usually there isn't. I can also get some of the benefit by using a DSL embedded inside a more general-purpose high-level language.

I've thought this for a while, almost ever since I learned Prolog. Make no mistake: I actually rather like Prolog. But I haven't been using it very much. The recent revelation is that C has all the same characteristics. And in a similar way, while I still like C to whatever extent, I'm going to avoid using it unless I really have to.

On a mostly unrelated note, I agree that Rust looks very promising. It's the new language that I'm the most excited about, especially after being let down by Go. Also, I'm not quite sure why he thinks the Rust syntax is odd: to me it seems extremely C-like, with some nice improvements (like implicit returns).


I must say I find your comparison unusual! Whereas the rule-based declarative programming of Prolog is basically unheard of, C set the standard that all imperative languages followed. On top of that, a large number of languages either compile down to C or have a VM written in C, possibly C++.

The typical language I hear lumped together in the same breath as Prolog is COBOL. Which is nevertheless widely used in production environments - something like 1 billion lines of code. Is anybody seriously using Prolog these days?


Out of curiosity, what are you hoping to get out of Rust that Go let you down on?


The best summary would be that I want a language to replace C++ but more sane and functional. Go is more like a language to replace Java and is most definitely not functional.

Essentially, I don't see why I would ever use Go over Haskell or OCaml. The main areas where Haskell and OCaml are ill-suited are the lower-level ones currently dominated by C/C++ and Go does not seem to address those very well. The main areas Go is doing well, like server-side code, are very well suited to Haskell and OCaml and so I have no reason to drop down to a less declarative language like Go.

Rust, on the other hand, seems to be targeting C++ directly. It also doesn't hate functional programming. So it perfectly fills the C/C++ shaped void in my current toolbox.


Go is a language for young programmers who were perhaps brought up on Java or Python, but have discovered the "cat -v" mentality and are drawn toward it.

Or alternatively, for old C programmers who are well versed in the 'cat -v' mentality who have tasted a bit of Java and Python but simply cannot buy into those.


Correction. The "cat-v considered harmful" mentality.


Go is a language for those who love the simple C/UNIX approach but would rather have a garbage collector there. I'd like Go to have manual memory management, but Google decided it was better this way.

I don't really see it as a replacement for Java, but maybe for C (in non-performance-critical software) or Python.


To most programmers, Go is less scary and foreign than Haskell/OCaml and thus easier to learn. Much as I love Haskell, I'll admit that's a pretty important advantage for Go if you need a team.


- optional gc

- thread local gc vs Go's global gc

- memory protection: Go has shareable mutable state

- better functional programming support


Okay, say I'm gonna start using C more seriously: What library do I use for strings? Not fucking arrays of 8-bit characters. Real Unicode-compatible strings with concatenation and comparison operations and all that good stuff. And if it works with a regex engine, that's nice too.



Hmmm....

> Strings are the most common and fundamental form of handling text in software. Logically, and often physically, they contain contiguous arrays (vectors) of basic units. Most of the ICU API functions work directly with simple strings, and where possible, this is preferred.

Nope.

Though the UText API was promising... unfortunately, it is totally incomplete.


... which seems to be written in C++ (based on clicking around in the source tree for a minute).

https://ssl.icu-project.org/trac/browser/icu/trunk/source/co...


I remember what scared me to death with C++ when I was using it with frequency was the potential to end up with these horrible inheritance trees that could really bungle a project in so many ways. If you didn't take the time to really, really, really think through your class hierarchy you could really shaft yourself in a big way. Because of this, as life went on I started to really appreciate the idea of composition vs. inheritance.

For me C has always been an easy no-brainer go-to language to get things done, particularly so when performance was at stake. I've always thought that learning programming should start at the lowest possible level (yes, assembly), then move on to Forth, C and from there ideally Lisp before hitting Java for some OO love. With that kind of a foundation it's easy to learn and absorb just-about anything that comes your way.


"I remember what scared me to death with C++ when I was using it with frequency was the potential to end-up with these horrible inheritance trees"

I don't think you can blame C++ for that. The time when 'we' hadn't learned the "composition over inheritance" lesson yet just happened to coincide with that of "let's write our framework in C++".


Maybe so.


My thoughts on why C is effective:

Simple. An average C programmer can easily learn 90% of the language and use 80% on a daily basis. So C programmers can understand each other generally well. Although different projects usually have different styles, they are very minor differences.

Assembler friendly. You can link C code and assembler together with negligible overhead. Almost every performance-demanding project has its busiest part written in assembler. Even though C sometimes lacks a construct that you require, you can always implement it in assembler and connect it with C code (longjmp is a good example; see the sketch below). Using assembler may seem to contradict the first point, but remember only a handful of people need to work at such a low level; other people just use the C interfaces.
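
As a usage sketch of that longjmp example (the setjmp/longjmp pair itself is typically a few lines of assembler inside libc), here is the non-local exit that plain C control flow lacks:

  #include <setjmp.h>
  #include <stdio.h>

  static jmp_buf recover_point;

  void deep_work(int n) {
      if (n == 0) longjmp(recover_point, 1);  /* bail out across stack frames */
      else deep_work(n - 1);
  }

  int main(void) {
      if (setjmp(recover_point) == 0)
          deep_work(5);
      else
          puts("recovered via longjmp");
      return 0;
  }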

Reusable. C has a huge set of robust and portable libraries. Their authors are proven to be surprisingly disciplined.

However, we have to admit writing good C code is very hard. And I can see that fewer and fewer people will use it going forward. Steve Jobs' "trucks and cars" metaphor applies here too.


If they kept putting the extra effort into debugging the Erlang VM C code, the benefits of this debugging would be available to the entire community of developers using Erlang. Now they are in their small corner of the world and every bug they squash is just a bug of their project.

Overall it's worse for "the world" and better for them. This makes sense for commercial software but it's weird when you think about an open-source project. Just imagine if all the effort spent porting Ruby/Lisp to Java/C++, and similar translations for big projects, had been invested in improving the VMs for these dynamic languages (yeah, most of these rewritten projects didn't have experienced C guys, but the OP certainly has them and is one). Is bug-fixing and hacking on compilers and VMs really that hard? (I never approached these problems in practice because I've never had to scale something that hard and squeeze out that last drop of performance, so I'm really asking the question; it's not rhetorical.)


I'm with you 1000% here. The only language I ever encountered that could possibly have supplanted C was FORTH, and it was doomed by legions of Unix-trained C programmers once the hardware got good enough to support C.


What about Pascal? Seems like a simple, block-based language with the same functionality as C. Apparently, the only complaint against it is that its users like to eat quiche.


"What about Pascal?"

The type system and control flow structures were more strict and allowed less "abuse" (flexibility) than C's. Strings were fixed-length arrays of chars, which was also less flexible than strings in C. Also, people felt the begin-end block syntax was less fashionable than C's {}.

Take a look at: http://www.lysator.liu.se/c/bwk-on-pascal.html


BWK's rant on Pascal is about as relevant today as a critique of original C (no prototypes, types defaulting to int, etc.).

In other respects (e.g. strong type safety) Pascal's approach "won", depending on your perspective.


No unified standard across implementations, modern implementation dominated by a single vendor (Delphi) with an aging userbase and downward trend.

And when you dig into the details of Delphi, you'll find various compromises caused by competitive pressure pulling one direction, backward compatibility pulling in the other.


There is no particular benefit in switching from C to Pascal.


I noticed that he didn't mention using some higher level scripting language on top of C. Is this an intentional omission?


I think it is covered by this part:

"If a big problem was C code in Erlang, why would using more C be good?

Because it's easier to debug when you don't lose context between the "application" layer and the lower level code. The big problem we've seen is when C code is getting called from higher level code in the same process, we lose all the debugging context between the higher level code and the underlying C code."

OT: My personal best experience with interfacing a higher level language with C from the debugging standpoint was a C+Lua combo, and even then it was far from optimal.


It's never optimal, and often, it's the source of truly infuriating bugs. Yet for all that, people don't say, "Hey, let's skip the middle man and just call libcurl or libxml or whatever from C. Because you know, these are C libraries."

I think they don't do that because they've been told a hundred thousand times that C is for low-level bit fiddling, that manual memory management is impossible, that it's tedious and difficult to write, that it's a "systems language."

Honestly, if you haven't programmed in C since college, you should give it a try -- especially if you're on Linux. The tooling and library ecosystem of C is absolutely second to none, and there's an enormous community out there that consists of some of the smartest programmers you will ever meet.

I agree with you about C+Lua. I also notice that people who seriously do that are serious C hackers -- which, I suspect, is seldom the case with most users of Python and Ruby.


I had to track down bugs from embedded "gcc -O3" multi-#define-expanded C calling into Lua. A highly perverse definition of fun that would be, yet somehow it was. :-)

I read your comment as saying that kids should start by building their own CPU. The Java abstraction disease has eaten far too many souls who think the magic just happens, without regard for the sweaty chaps who make this magic come true :-)

I would beg anyone to try programming in as many languages as they can find.

Both Python and Ruby have their strong traits, though integration with C is not one of them.

But "12345.to_s(21)" is a construct that is nowhere else, much like "python -m SimpleHTTPServer".

Then Haskell can be fantastic in its mind-numbing, math-derived domain-specificity. (This, to me, is the biggest power of C and the biggest drawback of any meta-capable language: anyone new on board needs to learn the custom shortcuts. C sucks at meta, so it is relatively easy for onboarding.)

But all the languages you try leave a useful imprint in your brains.

For FFI, however, in my experience nothing can beat LuaJIT: http://luajit.org/ext_jit.html

So, for something that may in the future benefit from low-level offload, start with Lua, then move to LuaJIT, then move to LuaJIT+C, with C used for (preferably) purely functional, demonstrably performance-critical parts. That is, if you would not benefit first from the better time-to-market of Python/Ruby...

But really it all boils down to using the best tool for the job.

Starting your education with assembler (mine was the 8080's 16x16 matrix of instruction codes mapping onto bytes of raw data, back in the day) gives you insight into what is going on behind the curtain, and a way to keep the higher-level languages in check.

Sorry for verbosity, bit tired to write in a more terse fashion.


"The tooling and library ecosystem of C is absolutely second to none, and there's an enormous community out there that consists of some of the smartest programmers you will ever meet."

Conspicuously absent from this statement is, "the language features make you more productive," or, "smart programmers using this language are doing things people thought were impossible." The issue with C (and even more so with C++) is that the language features almost always work against you.


"smart programmers using this language are doing things people thought were impossible."

I think the fact that a great number of new, hip, who'd-have-thought-it-was-possible languages are written in C answers this one rather nicely. Not to mention hip new databases, hip new web servers . . .

"the language features make you more productive"

I think it's precisely this claim that is being called into question. I daily see new language features that strike me as literally marvelous. But when it comes to those features actually leading to increased productivity -- well, "ls /usr/bin" tells a different story about the productivity of people working in this language.


"the language features make you more productive"

I discovered this is largely a function of the available libraries. In my practice, mostly I need a quick-and-dirty hack with sockets, and throughout the past few years I came up with something that allows me to whip up a quick prototype in under 10 minutes (shameless plug: https://github.com/ayourtch/libay)

That gives me enough high-level structures (refcounted buffers, hash tables, evented I/O) to either write everything in C, or hook up some Lua callbacks in case I am really in trouble and have to do something high-level.

But most of the time, as I grow more meat and "old blisters" around my C lib, I find C-only to be mostly enough...


"I think the fact that a great number of new, hip, who'd-have-thought-it-was-possible languages are written in C answers this one rather nicely"

It's worth noting that writing a new language is not really something people think is "impossible." On the other hand, people take the languages that were implemented in C, and do things that nobody else thinks is possible -- things like logistics systems that save the military enough money in a few years to pay for the decades of research leading up to those systems. Sure, C is hiding underneath, but to claim that that means people are "doing things with C" is kind of silly (especially since there is no particular advantage to writing compilers or interpreters in C).

For what it's worth, this situation has been inverted: on Open Genera, there was a C compiler written in Lisp, and what it really did was to generate Lisp code from C code (i.e. it was "C-in-Lisp"). This had some interesting benefits e.g. C programs had a garbage collector in that system (the same as the Lisp garbage collector).

"when it comes to those features actually leading to increased productivity"

It is a matter of what you are doing. For example, I do a lot of work with boolean circuits (related to secure two-party computation, i.e. "theoretical crypto stuff"), and it is nice to be able to write something like "x*y + z" and have it become this:

  (lambda (x y z) (or (and x y) z))
In C, what do I get? The macro system is not powerful enough to write a simple expression parser, I cannot overload operators, and there is no support for lambda expressions anyway. So instead of just worrying about the expression itself, what would happen in C (and I know this because at one time I was trying to write this code in C) is that I would have to implement a C function for each expression, spreading the code out and introducing new ways for things to go wrong (a second issue is that the values are not actually simple values; in C, this turns into a mess of pointers etc.). It is not that it is impossible, but I can say this: I am doing more now than I would have been doing in C, and I am not exactly new to writing C code (although the language I programmed in longest was actually C++; I am still perfectly comfortable with C, and to be honest I like C++ even less).

To be fair, this same argument applies to other high-level languages; Lisp is not exactly unique anymore in being very expressive, and other languages have things that Lisp is missing (like pattern matching, dependent types, etc.). My point is not that Lisp will change everything, though there seems to be a renewed interest in Lisp thanks to Clojure, but rather that high level languages absolutely do have a benefit. I suppose for most people, the easiest example is SQL: think about how you would write something as simple as an inner-join query if you could only use C (and then compare that to SQL).
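
To make the SQL comparison concrete, a minimal sketch of a nested-loop inner join in plain C, with made-up row types:

  #include <stdio.h>

  /* Hypothetical tables for:
     SELECT u.name, o.item FROM users u
     JOIN orders o ON u.id = o.user_id */
  struct user  { int id; const char *name; };
  struct order { int user_id; const char *item; };

  static void inner_join(const struct user *u, size_t nu,
                         const struct order *o, size_t no) {
      for (size_t i = 0; i < nu; i++)
          for (size_t j = 0; j < no; j++)
              if (u[i].id == o[j].user_id)
                  printf("%s\t%s\n", u[i].name, o[j].item);
      }

And that is before indexes, predicates, or any of the query planning SQL gives you for free.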


"(especially since there is no particular advantage to writing compilers or interpreters in C)"

Do you really believe there's no particular advantage to writing a compiler or an interpreter in C? Like, you could write it in Tcl or Perl or Ruby and it wouldn't matter?


I would not particularly want to write a compiler in Tcl, Perl, or Ruby; on the other hand, if I were free to choose, I would pick Lisp, ML, or Haskell over C for compiler writing any day. For what it is worth, I have written a compiler in C and I have written one in Lisp, and doing it in Lisp was far more pleasant.


I disagree with the "sub-optimal" label... Tcl, Lua, Ruby, $whatever are amplifiers for C. Arguably especially in the cases you're citing. With libcurl, you want handy, dynamic string handling for parsing results or doing ad-hoc requests. With libxml I -suspect- that the DOM parsing, XPath work, etc. is the only "heavy lifting" component, and it is easily driven by a higher-level scripting language. In use, there's no cost, because the areas where a scripted language falls short of C (performance, low-level bit-fiddling) are not at all applicable in the scripted domain. Conversely, the scripted environment brings a comfortable interface to the user, is highly dynamic and, like Unix pipes, allows bringing new functionality into the scripted domain and interacting with that universe (bind Ruby to curl, do a bunch of ad-hoc tests in the REPL, slurp up a list of URLs from a text file, report all 404s to a database).

Driving libxml or libcurl from C doesn't give this multiplier effect. Not that there's no reason to use C, but there are -easy-to-identify- reasons for using a higher-level language. Whether the cost of integrating the two paradigms is worth it depends on the developer, and how they determine value.


I don't refrain from doing that because someone told me that C is horrible and too low-level. I refrain from doing that because I have tried it in the past, and while it works perfectly fine, it is horribly slow to write. There isn't really any compelling reason why you should spend time ironing out the details and memory usage of one lone HTTP request when you could just do 'body = http_get url' and be done with it.

You can do a ton of things in C. But it takes time, a lot of it. And sometimes it's worth sacrificing a few KB (well, okay, MB) of memory to save that time and write a program that does a little more.

Don't get me wrong, there's a ton of things I would never do with anything other than C (image processing in C is just so absurdly faster than in any other language, even when you're just calling C libraries). But there's room for higher-level languages, because between libcurl and libxml, there's a ton of glue code that is way simpler to write in Ruby.
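
For a sense of the glue involved, here is roughly what 'body = http_get url' expands to with the libcurl easy API (a sketch; error checking elided):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <curl/curl.h>

  /* Accumulate response chunks into a growing heap buffer. */
  struct buf { char *data; size_t len; };

  static size_t on_body(char *p, size_t sz, size_t n, void *userp) {
      struct buf *b = userp;
      size_t add = sz * n;
      b->data = realloc(b->data, b->len + add + 1);
      memcpy(b->data + b->len, p, add);
      b->len += add;
      b->data[b->len] = '\0';
      return add;
  }

  int main(void) {
      struct buf body = {0};
      curl_global_init(CURL_GLOBAL_DEFAULT);
      CURL *h = curl_easy_init();
      curl_easy_setopt(h, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, on_body);
      curl_easy_setopt(h, CURLOPT_WRITEDATA, &body);
      curl_easy_perform(h);   /* all of `body = http_get url` */
      curl_easy_cleanup(h);
      curl_global_cleanup();
      printf("%zu bytes\n", body.len);
      free(body.data);
      return 0;
  }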


"image processing in C is just so absurdly faster than in any other language, even when you're just calling C libraries"

Hm...

https://news.ycombinator.com/item?id=1390944

I'm not saying this is representative, but you did bring up image processing.


> The tooling and library ecosystem of C is absolutely second to none, and there's an enormous community out there that consists of some of the smartest programmers you will ever meet.

This is causing me some pretty serious issues right now. The C community loves its giant ecosystem of C libraries. Everything on Earth seems to bring in a pile of delicate dependencies on the rest of the C universe. What should be fairly mundane configure, build and deployment problems become nightmares.

C is also more likely than a lot of other languages to use outside tools to generate program text as part of a build. In a lot of other languages developers would use those hairy features everyone complains about instead, and avoid complicating the build for everyone who could otherwise treat their library as a black box.


My own experience with that sort of thing was Lisp calling C through an FFI. In the end, it was too much of a pain and I found a Lisp-only way to do it (it helps that SBCL has POSIX system calls available).

Debugging was definitely an interesting one. The Lisp debugger was easily confused by problems in the C code (in fact, I cannot think of a time when it was not confused), and running the entire Lisp environment in gdb was just annoying. In the end, when I debugged, I just wound up strategically placing printf calls in the C code and tracing things manually.

Of course, this is not a problem of languages, but of tools. What this says is, "When a language includes an FFI, it must also include a good debugger for the FFI." Not necessarily easy, but not impossible by any means. With enough free time, I suspect that the more experienced programmers on HN could produce such tools for any high level language.


He answered that in response to the question of how, if C in Erlang was the problem, more C could be the answer: because it's easier to debug when you don't lose context between the "application" layer and the lower-level code. The big problem we've seen is that when C code gets called from higher-level code in the same process, we lose all the debugging context between the higher-level code and the underlying C code.


I'm regularly using Tcl to sling C around and when I run into issues, gdb is there -- is he talking about something different, or am I missing something?


His point is that when you're using gdb in your C code, the higher level language that called your C code (Erlang, Tcl, whatever) is opaque.

That is, if when looking at your C code, you realize "the only way this variable can have this value is if my higher level code has this other value..." you can't easily inspect the higher level code in your debugger.

Unless gdb is more powerful than I'm giving it credit for. Does it allow cross-language debugging from Tcl to C, and back to Tcl?


Well -- Tcl happens to be difficult to debug with tools because it's so dynamic -- but that said, let me tell you what I run into, and how I resolve it.

I compile C (original C, or bindings to libraries) and enable all warnings, use the Tcl C APIs to do C memory management (which allows electric-fence-like behaviour, where I've got optional guards on allocated memory), and I'm pretty familiar w/ the Tcl C API. When something is horribly wrong, I'll get a core dump and the full stack trace will indicate the path from Tcl, on up into my bindings, and further into the library in question. From there, typically with a small amount of inspection and testing, it's easy to isolate the problem area and fix it. This way I've solved problems in application (Tcl script) code, in my library bindings, and in 3rd-party (open source) libs -- I've got no doubt that there's room for improvement in my process, either by tools or techniques. That said, I feel productive and happy with what I've got going on now. When I want performance in Tcl, I punt and push it to C. When I want dynamic control or nice interfaces, I punt and drive it with Tcl.
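
For anyone who hasn't seen what those bindings look like, here's the minimal shape of one (a sketch against the real Tcl C API; the command is made up, and package/stubs setup is omitted):

  #include <tcl.h>

  /* Implements a Tcl command: `greet name` returns a string built in C. */
  static int GreetCmd(ClientData cd, Tcl_Interp *interp,
                      int objc, Tcl_Obj *const objv[]) {
      if (objc != 2) {
          Tcl_WrongNumArgs(interp, 1, objv, "name");
          return TCL_ERROR;
      }
      Tcl_SetObjResult(interp,
          Tcl_ObjPrintf("hello, %s", Tcl_GetString(objv[1])));
      return TCL_OK;
  }

  /* Entry point for: load ./libgreet.so Greet */
  int Greet_Init(Tcl_Interp *interp) {
      Tcl_CreateObjCommand(interp, "greet", GreetCmd, NULL, NULL);
      return TCL_OK;
  }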


I think that's a reasonable process, and probably the best you can do with current tools. But even though it's reasonable given your software stack, it's still much more involved than just using gdb on C code.

(Note that I'm not arguing that it's never worthwhile to use a higher level language to drive C code, just that I think Katz has a valid point regarding debugging.)


" I'm pretty familiar w/ the Tcl C API"

The Tcl C API is fairly simple, and that's by design. There's a very slim chance you loused something up in the interface. Dealing with the interface from Erlang, or with many other higher level languages, is like pulling teeth.


I like the way Alexander Stepanov, primary designer and implementer of the C++ Standard Template Library (STL), described the C programming language in an interview:

"Let's consider now why C is a great language. It is commonly believed that C is a hack which was successful because Unix was written in it. I disagree. Over a long period of time computer architectures evolved, not because of some clever people figuring how to evolve architectures---as a matter of fact, clever people were pushing tagged architectures during that period of time---but because of the demands of different programmers to solve real problems. Computers that were able to deal just with numbers evolved into computers with byte-addressable memory, flat address spaces, and pointers. This was a natural evolution reflecting the growing set of problems that people were solving. C, reflecting the genius of Dennis Ritchie, provided a minimal model of the computer that had evolved over 30 years. C was not a quick hack. As computers evolved to handle all kinds of problems, C, being the minimal model of such a computer, became a very powerful language to solve all kinds of problems in different domains very effectively. This is the secret of C's portability: it is the best representation of an abstract computer that we have. Of course, the abstraction is done over the set of real computers, not some imaginary computational devices. Moreover, people could understand the machine model behind C. It is much easier for an average engineer to understand the machine model behind C than the machine model behind Ada or even Scheme. C succeeded because it was doing the right thing, not because of AT&T promoting it or Unix being written with it."


I started out with Visual Basic about 10 years ago; as an introduction to programming it did its job well. I then moved on to C/C++ as I started my MSCS. It was great for implementing algorithms and image processing: working relatively low-level, optimizing for speed, and handling memory management yourself.

As I started my professional career I went on to Java, and I've never looked back. The environment is great -- I know how to implement algorithms and persistence frameworks myself, but that's not what I'm getting paid for. There's always some high-quality open source library/framework I can use. And sometimes when I need the "unreasonable effectiveness of C" I will interface with one of its applications via JNI; I've done this several times: ImageMagick, ffmpeg, OpenCV, etc. I also offload a lot of the performance-critical tasks to systems written in C/C++ like MongoDB, Redis, etc. But Java can be fast as well, just ask Cassandra and Solr/Lucene.
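
For what it's worth, the C side of that JNI glue is small; a sketch with a made-up class and method (the real work would call into ImageMagick or the like):

  #include <jni.h>

  /* Implements: class ImageOps { static native int brighten(int px, int d); } */
  JNIEXPORT jint JNICALL
  Java_ImageOps_brighten(JNIEnv *env, jclass cls, jint px, jint d) {
      jint v = px + d;
      return v > 255 ? 255 : v;   /* clamp one 8-bit channel */
  }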

Java allows me to get my job done, quickly and effectively. And to write well-structured, understandable code that most likely will still be understandable in decades to come.


Of course, the C language and its compilers are so simple, and C sits at such a low level (right above assembly), that few things can go wrong in it (a compiler bug) or under it (a CPU bug). As people jokingly say, C is just a high-level assembly language. If one tried to re-invent a language with C's properties, one would come up with C again.

For these reasons, I too expect C to remain the most performant and reliable high-level language, barring any major change in computer architectures.

Debating this fact would be the same as debating whether assembly is the most performant and reliable low-level language or not. Of course it is. (Preemptive reply to the doubter: as an example, my 64-bit assembly implementation of MD5 is 75% faster than a 64-bit C implementation: http://www.zorinaq.com/papers/md5-amd64.html )

And of course, performance and reliability don't always need to be #1 priorities, therefore, as the author says, "most every popular language has uses where it's a better choice."


"Debating this fact would be the same as debating whether assembly is the most performant and reliable low-level language or not."

Discounting special CPU features that are hard for compilers to utilize (e.g. AESNI instructions in newer Intel CPUs), I think this is really a matter of scale. For small functions like MD5, hand-tuned assembly language is probably going to be faster than what a compiler generates. Likewise, for short, tight loops that are the bottleneck, hand-tuned assembly may be faster (you see that sort of thing in game engines). Beyond a certain point, though, compilers are going to outperform even the best humans; compilers are just better at keeping track of things across large areas of code.

Compilers are also going to do a lot better when they have higher-level information available. You could conceivably write an x86 assembly optimizing tool (I suspect someone will point one out) if you wanted to, but even the best such tool will lose to a good C compiler. At the very least, your C compiler knows something about types, which can help a lot with optimizing; your C compiler also gets information about the purpose of certain sequences of instructions and how various control structures relate to various types, in ways that are hard to extract from assembly language.

Of course, this argument can be taken a step further: a good compiler for a high level language should be able to do even better than a good C compiler at a large enough scale. While it is conceivable that you could do anything in C that a high-level language compiler does automatically, that just brings us back to the issue of whether or not compilers can outperform humans -- and again, at a large enough scale, that is going to be the case (of course, comparisons are difficult; higher-level languages are used for things that C is rarely used for, and we could debate endlessly about why that is the case).


SynthesisOS (http://valerieaurora.org/synthesis/SynthesisOS/) used an optimizing assembler to recompile the operating system on the fly (think of it as the OS being JITed). The reason it had to use an optimizing assembler is because the operating system itself was written in assembler (MC68030). It could run SunOS binaries faster than SunOS on the same hardware.

And this was done back in 1992, when the major debate was Assembler vs. C (back when C was considered "high level").


That's not really true. If I were designing a low-level language, I wouldn't go for "it's just a really fast PDP-11" C. I'd pick something with programmable syntax and Lisp-style macros, because being able to generate code at compile time is a huge boon if you're writing something performance-intensive. C++ BLAS libraries use this to good effect, partially evaluating things with the template mechanism, but that's just a small bit of what's possible with proper metaprogramming.
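
The nearest thing C itself has is the preprocessor; a crude sketch of compile-time code generation with a macro (a stand-in for what templates or Lisp macros do properly):

  #include <stdio.h>

  /* Generate one fixed-size dot product per N; with N a compile-time
     constant, the compiler can fully unroll and vectorize each one. */
  #define DEFINE_DOT(N)                                          \
      static double dot_##N(const double *a, const double *b) {  \
          double s = 0.0;                                        \
          for (int i = 0; i < (N); i++) s += a[i] * b[i];        \
          return s;                                              \
      }

  DEFINE_DOT(3)
  DEFINE_DOT(4)

  int main(void) {
      double a[] = {1, 2, 3}, b[] = {4, 5, 6};
      printf("%g\n", dot_3(a, b));   /* prints 32 */
      return 0;
  }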


Hi. This comment has nothing to do with the actual post.

I saw tptacek recommend your HN comments here:

https://twitter.com/tqbf/status/292100038367277056

I just read a bunch of your comments and I agree with his assessment. You seem smart and interesting and apparently work in Manhattan. If you'd ever like to get a beer after work and shoot the shit about whatever I always like meeting interesting folks who are outside of my normal sphere of existence. My contact info is in my HN profile. Hopefully you will actually see this.


Forth and Fortran would like to have a word with you :)

But seriously, I would expect the maximum possible performance on today's architectures to come from the infamous sufficiently smart compiler that can do code generation at runtime. That could, at least in theory, produce programs faster than any system restricted to ahead-of-time operations, including handwritten AOT assembly. Whatever language can make that practical without having some other crippling performance issue might be able to consistently leave C and every other AOT language in the dust.


I love the ideas of Forth; I have been toying around with them in Factor (factorcode.org) recently.

How many resources should that smart compiler be able to use? The problems of optimization become important when you are about to run out of resources... If you are out of resources, why would you leave some for the compiler?


Computer architecture has already changed: today's PCs have many logical processors. And indeed, C lacks any direct affordance for parallelism. Good old pthreads are hardly adequate for many tasks compared to the facilities in other languages. Yes, you can implement most of the needed features in C, but often without the grace or safety available elsewhere.
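
To illustrate the point: even the "hello world" of pthreads carries boilerplate that languages with built-in parallelism hide (a minimal sketch):

  #include <pthread.h>
  #include <stdio.h>

  /* No futures, no channels, no work-stealing pool: just raw threads. */
  static void *worker(void *arg) {
      printf("worker %d\n", *(int *)arg);
      return NULL;
  }

  int main(void) {
      pthread_t t[4];
      int ids[4];
      for (int i = 0; i < 4; i++) {
          ids[i] = i;
          pthread_create(&t[i], NULL, worker, &ids[i]);
      }
      for (int i = 0; i < 4; i++)
          pthread_join(t[i], NULL);
      return 0;
  }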


Perhaps you are right. Perhaps a technological advance in compilers and languages deep enough to dislodge C is needed to fully solve the ease of programming many-core computers -- something the industry has been trying to achieve for more than a decade.


Part of its apparent inevitability now comes from its influence. BLISS was a reasonable competitor at the time, with significant differences: expression-oriented, reducible control flow, immutable variables (you used something like ML's refs when you needed assignment, but stack-allocated).

C started simple, but grew kind of complex. I'd like to have an ubiquitous simple mid-level language.


That is one hell of a résumé.


Definitely! Of course, argument from authority is still relatively weak in situations where experts disagree, and that's definitely the case with programming languages. We can each find our gurus -- Damien Katz, Rich Hickey, Rob Pike, John Carmack, Mike Acton, Slava Pestov are some of mine -- but there's no absolute truth in programming debates. The best we can hope for is to share interesting insights, motivate them as clearly as we can, and then agree to disagree, each biased by our distinct experience.


In academic circles, many struggle to understand why people are still using C for large projects, other than the special cases of operating systems, embedded systems, and code that must execute very quickly - e.g. within the gaming industry.

The main argument against using C seems to be that C is at too low a level of abstraction, which leads to low productivity and buggy code.

For example, pointer arithmetic might sometimes be useful, but it doesn't seem like something most developers should be exposed to anymore.

I have some sympathy for this argument, but wonder what others think?

- The right tool for the job?

RS


Interesting that Fortran and ML are not among the other languages he mentioned; they were, I believe, the go-to examples of C-beating languages in the discussion of the initial piece.


His argument now seems to be that you have to use C for speed. In my experience that doesn't necessarily hold true anymore, given modern computer architectures and what JIT compilers are capable of. I have several times written something in a modern language and then convinced myself that I could get a large speedup if I instead wrote it in C, only to discover that the speed benefits were minimal at best and would require an exorbitant amount of exquisite hand optimization to increase further.


Can't really argue with anything specific here, but this and the last article's general premise seems like, "if you have a choice, you might want to go with C for these reasons...". The problem is, I can't remember the last time I had any practical choice between C and a higher level language. I'd love to use C more, but the decision is almost always made for me by a client, or a useful framework/library that only exists in language X, or a platform, or a boss.


>I don't know a lot of what's out there on the horizon, and there are some efforts to create a better C.

Are there any efforts to extend C to add very simple/light objects, and perhaps string handling?


C++ and Objective-C started with this goal.

Objective-C started as a pre-processor for C.


So did C++.


This sounds quite a bit like Objective C.


Objective-C's object model is very heavyweight.

Granted, that's what I used to like about it (I haven't worked in it in years). You got to have this very high-level object-oriented language with all sorts of delightful features such as dynamic dispatch, duck typing, and even monkey patching. But you also got to have straight C sitting right there in the same language, and could easily flip back and forth between the two without incurring the penalties TFA mentions, such as losing context at a language boundary when you're trying to debug code.


I don't think it's fair to call it heavyweight. It's both high and low level at the same time, which is part of its awesomeness: there are no "methods", only messages (symbol names), so you can send an object a message (dynamic, has an overhead), or use a selector to get a hard reference to the underlying function pointer (an IMP) and call that directly with no overhead beyond a function call. So you can be nearly as dynamic as JavaScript or Python, or as performant as C, without changing languages.


It is so funny in these debates to see people conflate languages with their default implementations, as if the default were the only implementation available for the language.



