C, for the level of abstraction it targets, is an extremely good language, and that is why it is still alive and why a lot of big, successful projects are written in C. The problem is its standard library. The "let's invent a new language" teams should instead focus on how to incrementally improve C. D was a (bad, IMHO) attempt; just retry and make it better, one step after the other. If we wait for the ANSI committee we can all grow old with strcpy().
IMHO one of the few things that should be addressed at the language level is that structures with function-pointer members should have a way to be called with the structure pointer passed implicitly (something like "this") as an argument. That's all we need from OOP for C. This would let you write:
list *mylist = createList();
mylist->push(foo);
mylist->pop();
Agreed completely. I'm doing a decent amount of development with MCUs these days and really like having a minimal abstraction. It's easier to think about what the hardware is doing when you're only slightly removed from the assembly; what would be a convenient abstraction for most application development quickly becomes a liability as knowing the implementation details can be pretty important (due to limited resources).
As for an implicit "this" for function pointers, I've had the same wish myself. I've settled for just a function namespace (i.e. myListPush(myList, foo);) due to the verbosity of your example, but it's not quite the same. At this point I'm pretty happy without it, and even think it might be hiding too much information, but it makes a nice piece of syntactic sugar for sure. My current unneeded-but-desired change to C would be the addition of lambdas like in C++11.
I don't think so; I think you also need polymorphism and probably exceptions. I think GC would also greatly improve expressiveness, because it makes functional style much easier to implement without redundant copying. But minimally some kind of polymorphism, and no, discriminated unions aren't enough - but Go's interfaces would probably do. Also, some way of hiding data members that doesn't require obscuring casts would be nice too.
The biggest problems I've had with large C codebases are deep assumptions about data structures, spread throughout. The biggest benefits of OOP, IMO, come from firewalling off internal details from client code through the use of an interface; knowing that client code has to call you, rather than simply follow your internal links, gives you a lot of freedom to change things after the fact. A polymorphic reference that the client can't dereference and poke about with gives you that.
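That firewalling can be done in C today with the classic opaque-pointer idiom. A minimal sketch with hypothetical names (the `counter` type is an illustration, not from the article); client code only ever sees the incomplete type, so it has to call you rather than follow your internal links:

```c
#include <stdlib.h>

/* What the header (counter.h) would expose: an opaque handle only.
   Clients cannot dereference it, so they must use the interface. */
typedef struct counter counter;       /* incomplete type to clients */
counter *counter_new(void);
void     counter_incr(counter *c);
int      counter_value(const counter *c);
void     counter_free(counter *c);

/* The implementation (counter.c) owns the definition; its layout
   can change freely without breaking client code. */
struct counter { int value; };

counter *counter_new(void)               { return calloc(1, sizeof(counter)); }
void     counter_incr(counter *c)        { c->value++; }
int      counter_value(const counter *c) { return c->value; }
void     counter_free(counter *c)        { free(c); }
```

Any "deep assumption" about the struct's layout is now a compile error in client code instead of a latent bug.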
I think the point is "for the level of abstraction it targets". Add all those abstractions that are generally a good thing in a perfect world, and in practice you've shifted the language enough to make it unsuited for many tasks we use it for today.
Polymorphism doesn't have to add runtime overhead. You can achieve polymorphism without RTTI through code duplication, as C++ templates do. (You can also use uniform value representations, like C void *, although this is limited in its applicability.)
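The standard library's own qsort is an example of the uniform-representation approach: it is polymorphic over element type via void * plus a caller-supplied comparator, with no RTTI and no duplicated code:

```c
#include <stdlib.h>

/* qsort never knows the element type; it shuffles opaque bytes and
   delegates all type knowledge to the comparator. */
static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);   /* avoids overflow of x - y */
}
```

Usage: given `int v[] = {3, 1, 2};`, calling `qsort(v, 3, sizeof v[0], cmp_int);` sorts it in place. The cost, as the parent notes, is that nothing checks you passed the right comparator for the element type.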
Code duplication is an overhead. Not as bad as RTTI, but static memory footprint is still important in the embedded/systems world -- sometimes more so than runtime memory footprint!
You don't need any RTTI or code duplication for polymorphism, so long as you don't need general support for type-checked downcasts. If you do need type-checked downcasts, it can be as cheap as a single pointer on antirez's mylist->ops function table - a pointer pointing to the ancestor function table.
(Specific support for any single downcast is trivial - just write a function for the function table, typed appropriately and implemented appropriately.)
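A minimal sketch of that scheme, with hypothetical names: each type gets one statically allocated function table carrying a single pointer to its ancestor's table, and a checked downcast just walks that chain comparing table addresses. That is one pointer of overhead per type, not per object:

```c
#include <stddef.h>

/* Every "class" has exactly one ops table; derived tables point at
   their ancestor's. Real tables would also hold function pointers. */
typedef struct ops {
    const struct ops *ancestor;   /* NULL at the root of the hierarchy */
} ops;

typedef struct obj { const ops *vtable; } obj;

static const ops animal_ops = { NULL };
static const ops dog_ops    = { &animal_ops };
static const ops cat_ops    = { &animal_ops };

/* Returns nonzero if o's type is t or a descendant of t. */
int isa(const obj *o, const ops *t) {
    for (const ops *cur = o->vtable; cur != NULL; cur = cur->ancestor)
        if (cur == t) return 1;
    return 0;
}
```

A checked downcast is then just `isa(o, &dog_ops) ? (dog *)o : NULL` (with `dog` embedding `obj` as its first member).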
It's not an attempt to incrementally build on the C base, it rewrites all the rules, has a GC, D v1 and D v2 are pretty different things, and so forth...
If C isn't suitable for projects with more than 10K lines, then pray tell, what language should my kernel be written in?
I've been meaning to try Go for a while (the fast compile times alone intrigued me), but this type of post really just puts me off - if you're telling me that C isn't suitable for large projects, then that tells me you may not know C well enough to make that judgement.
> If C isn't suitable for projects with more than 10K lines, then pray tell, what language should my kernel be written in?
The control stack of the Xen hypervisor (which shares a lot of characteristics with OS kernel tooling) is written in OCaml, which seems to have proven a good choice in practice: http://gazagnaire.org/pub/SSGM10.pdf
The problem with OCaml is that you need to recompile every library you use whenever you update the compiler; even minor versions. You might be able to use it for any program, but not for every program.
I know who Ken Thompson is, and don't so much disagree. But that is a bold and oft-repeated statement, that seems to not be fully supported by the evidence.
I'll concede that the former project lead for Unix might not be the best C programmer.
But then, who is?
If there's evidence to support that there's someone who knows it better than Ken, I personally would like to see it. (More from curiosity than anything else.)
I'm certainly not qualified to answer. But there has certainly been some very impressive code written by people who spend a huge amount of their time in game development, kernel development, or any other large system that is written exclusively in C.
Frankly, even if Unix was that project, I'd expect that the "best" C programmer may actually be someone in a much less visible position.
Then again, Ken may very well be the best C programmer, but Go doesn't much seem like a language designed to address the issues that C is currently being used to address. Frankly, it seems like a much better Java replacement IMO.
In general, he's right. Kernels and some other applications that require low-level access are an exception to the general rule that C is not a good language for massive projects.
Unfortunately, the author seems to have misconceptions about a variety of things.
> projects with more than 10k LOC.
Other comments have discussed this. Linux kernel: ~4 million lines. Compilers, interpreters, and so on... all well above 10k LOC. While C is not the best language for such things, by no means is it unmanageable.
> But when you try to do the modern funkiness of dynamic languages (lambda functions, map/reduce, type-independence, ...)
All of these features can be used in much stronger type systems than C++ gives you. ML-family languages have all of them and are strictly typed.
> Javascript
Javascript's whacky semantics suck, that's for sure, but for a dynamic language that's par for the course anyway. I don't really understand his argument about embedding. Why is that relevant? Java and C# are not embeddable either. ...So?
There's a difference between being suitable and being the only alternative currently available given the other constraints. There's lots of big software written in C not because C is convenient or well-suited for writing that kind of software, but because performance trumped the other concerns. Particularly in 1995, when computers were slower and other language implementations were slower than they are today.
For that matter, libc is considerably more than 10 kLOC (on FreeBSD 9.0, it's about 140 kLOC), and it's hard to think of a more stereotypical block of C code than that.
It is stereotypical library but not stereotypical software project per se. Functions in libc have (by design) very strict interface and little data interdependency.
I would say SQL or HTTP server or game or a word processor where you have to retain sizable chunks of data in memory for longer periods is a better example of big C project. And that's where lack of automatic memory management and data abstraction in C makes it hard.
Do you honestly think everyone on the team that developed the id Tech 3 engine was as 'good as Carmack'? I don't think he wrote the whole engine by himself.
There was one other programmer who worked with Carmack on Quake 3, John Cash.
My point is, even having one Carmack-level developer on a team is a bridge too far for most.
Yes, you can write huge projects using just C, but you honestly don't think most people would recommend it unless every little drop of performance was really that important, or you had no other choice, or you had a super guru with an amazing track record writing the software.
At the time Q3 was written (13 years ago), console games were still being written in assembly language.
You are ahead of yourself by a few years. Q3 came out in 1999, which means it was written in the late 90s, so think PSOne and late SNES/Genesis titles.
Aside from the PS2, I don't think so. N64 came out in '96, Dreamcast came out in '98 and PS2 came out in early 2000. In '99, we were only two years away from Xbox and GameCube. I know that at least N64 games were written in C.
PSX games were often written in C, as well. I suspect that developers had started to migrate to C near the end of the SNES era, although I'm not sure how many SNES games were written in C...
My own project (FLINT) is about 120k lines, and we're still going with C. I did spend a lot of time looking for a theoretical higher level language which could take us further when C ran out of puff, but we eventually decided to stick with C for the main project.
However, I did eventually spot Julia, which is totally awesome for my own personal higher level needs.
For what it's worth, I've done a lot in Lua, dropping to C (on rare occasions) to offload heavy processing elements where a scripting language would choke. Many game developers do their interface and game scripts in Lua then run the engine in C as well. It keeps the C code light and clean and brings down the number of manhours required to build and maintain a project.
I don't know what FLINT is or what Julia is used for (Wikipedia makes it sound like a functional language?) so Lua might not be the best fit, but it's designed from the ground up to mesh with C (to the point where the solution to many otherwise simple things is "write that bit in C").
I understand it's not for every purpose (and I have no stake in if you use it or not), but it can be metaprogrammed and extended with C. I'm not really a programmer in my day job though, I use it mainly for automation of things I don't want to do by hand.
"some people use C" does not excludes "some people think C sucks"
One might think C sucks and still use it. You might think it sucks, but it's still less bad than everything else. Using something doesn't necessarily make you blind to its drawbacks. I think a lot of the things I use every day suck. But I still use them because there's nothing better... yet.
All those programs written in C that everyone is posting about (Quake, Linux, GIMP, etc.) were written before Go existed. So it's very plausible that the authors would agree that C sucks if they had the option to write in Go. Which is the point of the article.
> What makes me stay away from it is the non-free nature. Yes, there is Mono, but I wouldn't like to base my stuff on a language that is there because of Microsoft's benevolent permission which it could turn into patent-lawsuits any time. We all know the tricks that company (well, actually any large company) has up its sleeves.
Microsoft has standardized C# and the CLR through ECMA, and issued a promise stating that they will not assert their patents against alternative implementations thereof. Non-standardized APIs like ADO.NET, ASP.NET, and System.Windows.Forms are not protected by that promise, but Mono discourages their use anyway and if they had to remove them, it would not affect the C# compiler or CLR in any way. So this is not a legitimate reason to avoid using Mono, unless you (a) need to use the non-standardized Microsoft APIs or (b) are Richard Stallman.
However, some commercial development shops won't touch anything LGPL (a license compatible with commercial use). Mono adds additional complexity. The concerns of the FSF are here: http://www.fsf.org/news/2009-07-mscp-mono
I think the author is correct in excluding languages that are non-free. Microsoft's promise means almost nothing in the real world. They could retract that at any point in time and leave any project built on that technology at risk. Mono is a non starter for any open project built today.
The Java comments are wrong IMHO. Yes it has some legacy cruft. Yes you can import 10 million libraries and build abstractions on abstractions. Yes you can architect your application to resemble a Rube Goldberg machine.
Guess what? Those same points apply to any language. Heck, "The evolution of a Python programmer" (https://gist.github.com/289467), currently on the front page, does the same thing for Python, which is usually pretty clean. Even PHP, which is mentioned as the alternative, can quickly become a spaghetti-ball mess of imported code and shared snippets.
I don't love Java, but for certain classes of problems it is an excellent choice and to discount it based on a wrong perception just shows how shallow a technologist you are.
> The Java comments are wrong IMHO. Yes it has some legacy cruft. Yes you can import 10 million libraries and build abstractions on abstractions. Yes you can architect your application to resemble a Rube Goldberg machine.
> Guess what? Those same points apply to any language.
Potentially, yeah, but guess what? It does not happen in every language. That's why it's most often brought up for Java. Because it's not just the potentiality that matters, but the actuality too. In other words, it also takes a culture, and Java very much has that kind of culture.
> Heck the evolution of a Python programmer https://gist.github.com/289467 currently on the front page which does the same thing for Python which is usually pretty clean
See? Usually, as in, "Usually pretty clean", is the key word here. Whereas Java program design is usually pretty convoluted.
And the "Evolution of a Python programmer" is mostly a joke, meant to show the tendency of some Python types to use some newer/cooler/functionaler (sic) features in place of more simple and readable ones. It's not what commonly happens in Python projects though.
This is Hacker News. You have to have a conclusion or command in your title to get upvotes. "Why the cloud is wrong for your business." "Why your language sucks." "SaaS-de-jour, we have a problem."
That's good, since we readers know what to expect from the article.
Though you seem to be concerned about this kind of material being posted on HN. At first I was inclined to agree that it doesn't belong here. Yet consider: there are probably a lot of opinionated articles that include a fair bit of useful information (knowing others' opinions might be useful by itself), while bias can be neutralized by critical thinking, which the HN audience is hopefully capable of.
And I'd argue that on HN relatively many articles get to the front page even without sensational headlines.
I wish people writing things like this would write about the positive aspects of the language first. Are you really going to convince people who like Java that Java is bad, in one paragraph? And, do you want to just annoy everyone with your opinions before you get to the neat facts you want to convey?
Positive first, negative second, or better yet, you understand the negative argument through the positive.
I'm not a fan of the verbosity of error handling with exceptions either. It is particularly bad for fine grained error handling.
However, I do like that the default state of a non-handled error is to propagate and eventually crash the thing rather than continue with errors that may subtly corrupt things. I haven't used Go, but from the explanation of error handling Go seems to do the latter rather than the former (please correct me if I'm wrong).
In most languages, exception handling doesn't seem designed for fine grained error handling, which makes one not want to use it for that. Rather than complain about the verbosity of their methods, why not try to fix the issue with a less verbose method?
E.g. something like: handle := openSomeFile() catch err
Or ignore it if you want, letting it automatically propagate: handle := openSomeFile()
In Go, the accepted way to report errors is to take advantage of multiple returns, such as:
result, err := somePotentiallyFailingFunction()
if err != nil {
// Try to recover here
...
// Not recoverable?
panic(err)
}
And this call to panic works the way you prefer. Note that doing the following:
result := somePotentiallyFailingFunction()
won't compile since the number of values on the left doesn't match the right, so if a function can fail and returns an error, you will know.
If you do something like:
result, err := somePotentiallyFailingFunction()
And then never check err, it's actually a compile-time error, which enforces error checking, to a point. What's next is the kind of thing you don't like, and it's poor style.
result, _ := somePotentiallyFailingFunction()
This assigns the error to _, which causes no errors if you don't use it.
The error type is an interface. You can return values that contain contextual information as long as the error value you return has an Error() method returning a string.
type ParseError struct {
line int
}
func (e ParseError) Error() string {
return "Parse error at line: " + fmt.Sprint(e.line)
}
Note: The current release may vary slightly on the details. I'm using a pre-release build. But older versions are similar.
And then in the code for handling the error, you can do
switch err.(type) {
case db.IOError:
// Database exception here
case io.FileNotFoundError:
// File exception
default:
// Pass it to our caller for them to handle.
return nil, err
}
Which gives you the ability to selectively catch errors based on type.
Do you mean that exceptions, when they bring down your program, indicate the line where the exception was thrown? If so, then `panic` in Go does the same thing.
Using goroutines and panic/recover, you can do the same things you do with try/catch/throw.
No; the point is that the information is contextual. For a database failure, the information will be DB specific; for an IO failure, IO specific; for an IO failure in the DB subsystem, it will be information about IO failure wrapped in info about the DB failure.
I think too many people get the wrong idea about exceptions from the defaults in many environments. Exceptions being thrown should not normally be a sign of a bug; knowing the line number where the exception was thrown should be irrelevant information almost all the time, save for errors like access violations or null pointer exceptions.
Instead, the information contained in an exception can usually be turned into actionable data to a user or administrator of an app (depending on whether it's on the client or server).
Go has "panic", which does as you suggest and propagates up the stack to something that can "recover". If nothing can, it will crash the program. Its use is generally reserved for truly exceptional circumstances.
Most errors are handled quite like the way you propose, e.g. handle, err := Open(). You can ignore the error if you wish, but that's up to you, and it's explicit that you're ignoring it: handle, _ := Open().
Panics are not meant to be a way to control the flow of the program, but rather to regain control of it if something disastrous occurs.
Go has been released with a BSD-style license including patent grants. If Google dropped it tomorrow they couldn't stop anyone else from just picking right up where they left off.
I hope nobody reads this, is irritated by the listed criticisms of other C-like languages, and decides to avoid Go. Go is a very well-designed language. Instead of piling on features, they have optimized for practicality and productivity. After reading "The Practice of Programming", so much of the language is obvious.
At work we have tools that automatically concatenate the lines of our .c files into one giant line of code. That helps us get around the pesky 10K LOC limit.
IMO the 'Rant about C-oid languages' part of this article is a bit weak (though I agree with some of it), but the 2nd half of the article is actually a pretty good rundown of Go features.
Actually, all languages suck because there are benefits and tradeoffs to every design decision -- a cross-compatible binary format means JIT compilers, garbage collection means more wasted memory, etc. It is all tradeoffs.
No, I have never written exception-handling code like that mocked in this article. There's a strong correlation between uses of 'catch' and the probability that you're using exceptions incorrectly. Go's lack of exceptions is probably what pushes me away from it most strongly.
That's my take as well. Most of the code I write doesn't attempt to recover from error conditions. I would say at least 85% of the time I want the program to exit with enough information that I can tell what happened. So I propagate the exceptions up to the top level. Nothing could be easier.
This may all be a function of the environment in which my code runs, but I certainly have no interest in going back to my C days when 95% of the source was devoted to error handling.
Go doesn't lack exceptions. It just uses some different names (throw -> panic, catch -> recover) and uses a different way to write the code that catches them.
It's contrived, but you can see that the "RecoveryPlan" can choose to ignore panics or recover from them and it can also affect the return value of the "parse" call by assigning to its parameters and recovering from the panic.
But crossing interfaces between modules is, IMO, core to exceptions. They're ideally suited to communicating the underlying cause of failure from the depths of the system to the top-most loop (usually a request/response dispatcher or UI event loop), where the error message can be logged or shown to the user, as required, indicating the nature of the failure.
I expounded further on this point a few years ago, related to Java's misadventure with checked exceptions, but also relevant to module crossing:
I really like some of the ideas in Go; however, it still hasn't hit that critical mass that languages need to really make it in the crowded world of programming languages. This is going to be even more difficult for Go than it was for Python or Ruby, since they both can serve very well as prototyping or glue languages. As a systems language, Go needs to gain traction from nothing.
dynamic_cast<MyData>(funky_iterator<MyData &const>(foo::iterator_type<MyData>(obj)))
Yeah. Right.
Don't get me wrong, I love templates, but contemporary C++ using STL looks like a classic case of the "If all you have is a hammer"-syndrome. GCC had to implement special diagnostic simplifications just so you can actually find out that that 5-line error message was a simple const'ness mistake when using a std::string method. Even worse, they can be slow as hell. Ever waited for Boost to compile? Good idea, bad realization.
I have been programming (in C) for 12 years, and when I see something like this I still get a shivering feeling. How can one design such a shitty programming language?
Modern C++11 might (I do not know enough; I'm waiting for examples to pop up), but it's only been finalized for 3 months and not yet implemented fully by any compiler.
I suspect it's "only" x2 longer. And about the 100x faster -- you might want to re-evaluate dynamic languages if that was the case last time you tried.
LuaJIT2 loses to C++ in most comparisons, but only by 20% or so, while being much more dynamic than Java.
V8 is about x3 slower than C++ in real life benchmarks. PyPy in my experience is about x5, Python x10. That's significant slowdown, but it is a far cry from x100, and they are a thousand times easier to develop in than C++.
My weapons of choice: Python when it doesn't need to be very fast, Cython when you need to speed that Python up, C [not ++] when you really want it to run quickly and have full control. And K when you want to have fun.
Even Stroustrup has said there's a cleaner, simpler language within C++ struggling to get out. But C++11 is a big step toward that language and C++ is still the go-to language for a large class of applications for good reason.
One concern I have with garbage collected system programming languages is GC pauses. I have worked with several java applications that were simply not capable of handling fast, continuous, time critical processing tasks because every now and then they would pause for garbage collection. Most of the time they were fine, but suddenly you'd get brief but unacceptable drops in performance. Does anyone know if go is likely to suffer from this?
Currently, there is a stop-the-world pause, yes. The garbage collector is parallel, but not concurrent; a concurrent garbage collector would make that pause disappear. How badly it impacts you depends on the workload; personally I've never felt it. YouTube uses Go[1] and mentioned[2] that the stop-the-world pause is significant in their case, but tolerable.
Some of the implementations in JS went wrong, but fundamentally it really doesn't have anything horrible going on for what it's meant to do.
Unfortunately some of the implementation details do allow for seriously horrible code and some seriously horrible traps. But if you can clear the fog, and use "the good parts", it's actually pretty nice.
As for the article, I think it's very well written and seems to have a good overview of the language. Also, the author seems to have an above average experience level in an above average number of programming languages (guessing here), so it might be worth a read.
I am not talking about any APIs in JS or any "implementation in JS". JS, the language, as specified, is wrong at the core. It has a lot of insensible semantics. It's not even at a local maximum. I could write a list of 10 changes to the language right now that would spur near-unanimous agreement of benefit with no downside.
I'm not talking about APIs either. I'm talking about JS as an implementation of ideas about what a language should do. It is built on a lot of great ideas and most are well implemented.
Since you aren't providing your sure-win idea changes, I can't be sure what you don't like but I can guess. All of them are well documented at this point. And they are easy to understand and avoid.
Figures this would get downvoted for not blindly buying into the propaganda: The syntax is clearly not C-style, that's just plainly-apparent. And just because a language is natively-compiled doesn't make it a systems language.
> IMHO one of the few things that should be addressed at the language level is that structures with function-pointer members should have a way to be called with the structure pointer passed implicitly (something like "this") as an argument. That's all we need from OOP for C. This would let you write:
That's what you currently do like this:
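A minimal sketch of that explicit-self pattern, with illustrative names (createList, the fixed capacity, and the int payload are assumptions for the example, not antirez's actual code):

```c
#include <stdlib.h>

/* "Methods" are function pointers that take the structure itself
   as an explicit first argument -- the part an implicit "this"
   would remove. */
typedef struct list list;

struct list {
    int items[16];   /* fixed capacity, no bounds checks: brevity only */
    int len;
    void (*push)(list *self, int v);
    int  (*pop)(list *self);
};

static void list_push(list *self, int v) { self->items[self->len++] = v; }
static int  list_pop(list *self)         { return self->items[--self->len]; }

list *createList(void) {
    list *l = calloc(1, sizeof(*l));
    l->push = list_push;
    l->pop  = list_pop;
    return l;
}
```

So the call site becomes `mylist->push(mylist, foo);` -- repeating `mylist` is exactly the noise the proposed language change would eliminate.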