The hipster culture is the culture of counter-culture. A hipster aware of their own hipsterness would therefore violate their internal value system. So yes, it's a hipster thing. But the hipster is not aware. Or is at least pretending not to be aware.
Back in 1999 I managed a similar thing written in C with flat file storage. The whole tool chain chucked out a static binary and a directory full of templates and you just cp'ed it to the target and restarted the process.
When I look back I've got to be honest and say that it was actually a thing of beauty. Some core functions required a fair bit of heavy lifting, but the whole thing pales in comparison to some of the enterprise, microservice-based C# behemoths I have looked after since, which require literally hours of head-scratching to deliver a simple change.
I'd rather spend that time writing C than scratching my head. I'd have a lot more hair now.
I have actually tried that and you're right. The whole experience made me consider going back to electrical engineering when I realised that the status quo hadn't improved in 20 years :)
Yes indeed, oldies but goodies. This combo is still available and maintained, at least in the form of NaviServer. Tcl, of course, has continued in active development and is as useful as ever.
Well, I can program in assembler if that pleases you, oh blessed dmr prophet. C is simply not worth using unless existing constraints force your hand.
It is so easy and fun to write web apps in C. Just remember to test them with tsan/asan/valgrind, run them through 2-4 static analyzers, and check with a couple of fuzzers too!
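If you've never seen that tooling in action, here's a minimal sketch of the kind of bug it exists to catch; the file name and the bug are invented for illustration. Build with clang or gcc as `cc -g -fsanitize=address,undefined -o demo demo.c` and AddressSanitizer aborts on the one-byte heap overflow:

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *buf = malloc(8);

        if (buf == NULL)
            return 1;
        strcpy(buf, "12345678");  /* 9 bytes (incl. NUL) into an 8-byte buffer */
        free(buf);
        return 0;
    }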
Well, I think I'll pass on this one and hopefully won't need to maintain such apps in the future.
Plus Frama-C, model checkers, a concurrency analyzer, CompCert in Coq, Simpl in HOL, symbolic execution in KLEE, path-based testing, and property-based testing... then it might be good to go.
On a capability-secure processor running in a VM, in case new classes of attack against C code are discovered.
Again. ;)
Since it's basically OCaml with a more JS-friendly syntax, it will lead the masses of JS devs to OCaml; they will find MirageOS, build their servers with it, and no one will need anything else anymore.
I don't really have a problem with what is presented here, but it seems written in a way that trivializes modern web development. I once worked with an MFC C and C++ guru as dev manager who would do the same thing, yet he could not code a web page. He once claimed he could pick up all my knowledge inside a week and bang out a demo as fast as I could. Challenge accepted: one week later he had a functional button.

BSD needs a higher-level language with a good dev environment more than ever now. It seems you can't have both without either agreeing to a ton of horrible licensing (Java), accepting some degree of vendor lock-in (the KRB5 blob from Microsoft for .NET), or giving up platform support. Swift would be a good match because of its LLVM support, but IBM, as maintainer of BlueMix, has thrown it squarely to Linux.

This stack, in a sardonic way, is actually better than the others I listed for those reasons. It's just needless to throw shade on modern web development in general; how do you think we got into this situation to begin with?
Since this is OpenBSD-centric, it's worth mentioning that OpenBSD still ships with Perl (and probably always will unless its package management tools get a rewrite), so a "BPHS" stack is both feasible more-or-less out of the box and at least slightly more sane than trying to use C for web development.
If you really need a web development solution that's completely out-of-the-box on OpenBSD (aside from perhaps some third-party Perl packages), you might be able to use a "BPHL" stack (BSD Perl Httpd Ldapd) and use LDAP in place of a general-purpose database. That's probably a horrible idea, but whatever.
If we're going to craft a stack involving assembly, someone will say that assembly is for the weak and that real developers write machine code directly.
It's not obvious, because I bet most of HN is wondering right now. It's not necessarily wrong to be coding microservices in C; it has been done, and it has been done in C++ as well. I like the pledge approach.
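For anyone unfamiliar with it, the approach is roughly this (a sketch, not code from any real project; see pledge(2) for the promise strings):

    #include <unistd.h>
    #include <err.h>

    int main(void)
    {
        /* ... do privileged setup first: bind sockets, read config ... */

        /* From here on, promise to use only stdio and inet syscalls;
         * the kernel kills the process if it strays outside them. */
        if (pledge("stdio inet", NULL) == -1)
            err(1, "pledge");

        /* ... request-handling loop runs with reduced privileges ... */
        return 0;
    }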
I would like to see nginx supported here instead of just OpenBSD's web server. Perhaps that's because nginx won't work with pledge?
For a specific example, iMatix had DSLs that output C, which they used for high-performance server apps. If doing that for web apps, one would need a web framework for C that the generated code could call. I believe I've seen both a C and a C++ framework in the past; both bragged about efficiency and about how cheap the servers were.
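To be fair, for plain CGI you barely need a framework at all; a hypothetical bare-bones responder is just a process writing headers and a body to stdout (and anything echoed back from the request would need escaping first):

    #include <stdio.h>

    int main(void)
    {
        /* httpd passes request data via environment variables and stdin;
         * the response is whatever we print, headers first. */
        printf("Status: 200 OK\r\n");
        printf("Content-Type: text/html\r\n");
        printf("\r\n");
        printf("<html><body><p>hello, world</p></body></html>\n");
        return 0;
    }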
Not really. I mean, using vectors and strings is nice, but that's about it for safety. You'll still get a shit-ton of memory leaks, which isn't great for long-running web apps.
> Anybody can write crappy, bug-ridden and insecure code. In any language. Good news: with C, it's even easier! So familiarise yourself with common (and common-sense) pitfalls noted in the SEI CERT C coding standard and comp.lang.c FAQ.
This part has a joke, which makes the overall website not read like a joke. I’m pretty sure it isn’t one, even.
I'm a firmware developer, and have been all my professional life. C is my goto language. Rather than look at this through the lens of an experienced web developer, consider the value it provides to C natives like me.
This is to C devs what the micro:bit is to kids learning computing: a starter experience with server-side dev.
Shipping C? To avoid "hipsters". Dear Kristaps, please stop pushing your religion on other people like it's sound engineering practice.
If by "hipster" you mean, "People who are sick and tired of shipping insecure software that violates consumer trust and has tends to case downtime" then by all means, ship C code that handles untrusted input.
I think it's criminal to propose this without massive auditing, which is what HLL runtimes end up getting.
What, exactly, do you think OpenBSD can do to prevent an attacker from compromising your responder binary's logic that everyone else isn't already doing? OpenBSD's sandboxing model is superb, but that's not what I'm criticising.
C is a bad choice for handling untrusted input precisely because it makes it very difficult to prevent logic errors that disclose user data in unexpected ways. The security community has done its best to prevent the even more disastrous class of breakout errors that compromise the entire resource (and OpenBSD is great for this, way better than containers).
But as my comment was specifically addressed at the choice of C, I don't feel like I need to sweet talk an OS I already say lots of nice things about.
Maybe you'd like to respond with all sorts of great literature about how the C spec is not full of holes and gotchas?
And don't even get me started on that "simple example binary."
> it makes it very difficult to prevent logic errors that disclose user data in unexpected ways
Oh that's a new one. But now that you mention it, I'm starting to recall all the operators that will fprintf the contents of ~/.ssh/ to the network socket upon misuse.
Wouldn't it be nice if we could use a simple assert to test the state and inputs before we proceed to shovel private keys around in a totally-not-privsep'd process handling public, unauthenticated connections? Heck, even a simple condition would do, if we could just return an error and stop further processing when things look bad. But you're right, that kind of code would have been too advanced for 1958 or whenever we got this language...
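In case the sarcasm obscures it, the "advanced" technique being described is just this (handle_request and MAX_REQUEST are made-up names for illustration):

    #include <stddef.h>

    #define MAX_REQUEST 16384   /* hypothetical limit, for illustration */

    /* Validate first, bail early. */
    int handle_request(const unsigned char *buf, size_t len)
    {
        if (buf == NULL || len == 0 || len > MAX_REQUEST)
            return -1;   /* reject: no further processing, no keys touched */
        /* ... parse and respond ... */
        return 0;
    }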
So, at the root of this bug we have a broken implementation of a circular buffer that fails to accommodate one case. That is the kind of binary logic error I could repeat in practically every programming language. I fail to see how C makes it very difficult to prevent this type of error.
And on the other hand, we have code that doesn't bother to check that the inputs from the outside world are valid and that they do not cause integer wraparound. The former is a problem you can repeat in just about any language, while the latter is relevant to many languages. Guess what, I know how to check inputs. I know how to prevent integer wraparound. And C doesn't make it hard for me to do so.
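For the record, the usual idiom (add_lengths is a hypothetical helper) is to check before you add, because unsigned arithmetic wraps silently and signed overflow is undefined:

    #include <stdint.h>
    #include <stddef.h>

    /* Sum two untrusted lengths, rejecting inputs that would wrap. */
    int add_lengths(size_t a, size_t b, size_t *out)
    {
        if (a > SIZE_MAX - b)
            return -1;   /* a + b would wrap around; reject */
        *out = a + b;
        return 0;
    }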
The next and last interesting part of the disclosure is mostly concerned with leaking private keys. Now, what did I say about juggling private keys in an internet-facing process? Just because the ssh devs didn't isolate that part into a separate process doesn't mean it can't be done (and honestly, I have no idea why I would be juggling private keys during the generation of a web page).
So there you have it, old code from a time before explicit_bzero, juggling private keys, not checking inputs and running on a system without malloc_options. You can lament it all you like but that doesn't mean everyone has to do it wrong. It shows that you can do it wrong, not that C makes it hard to do it right.
> So, at the root of this bug we have a broken implementation of a circular buffer that fails to accommodate one case. That is the kind of binary logic error I could repeat in practically every programming language. I fail to see how C makes it very difficult to prevent this type of error.
Not that any other language is on trial here, but there are languages that would naturally make such a bug into a compile-time error.
I fail to see how C helps you avoid making such an error. Because C's general standpoint here is that there is no such thing as an error, there is no such thing as a type, and it is perfectly acceptable to have undefined behavior sitting within trivial edit distance of common code patterns.
> The former is a problem you can repeat in just about any language
Actually, no, that's false. A lot of popular languages check arithmetic. Faulting in such a case would have saved the day, but hey, faster execution. Even old languages like Lisp did this.
Not many languages are as lackadaisical as C is about this. But error handling (at any stage) has always been C's weakest point. I can't think of a C successor that hasn't called C out on this front and then tried to improve upon it.
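To be precise, checked arithmetic does exist for C, but only as an opt-in compiler extension rather than a language default; a sketch using the GCC/Clang builtin:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        int a = INT_MAX, b = 1, sum;

        /* __builtin_add_overflow is a GCC/Clang extension, not ISO C;
         * it returns nonzero if the addition would overflow. */
        if (__builtin_add_overflow(a, b, &sum)) {
            fprintf(stderr, "overflow detected\n");
            return 1;
        }
        printf("%d\n", sum);
        return 0;
    }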
> It shows that you can do it wrong, not that C makes it hard to do it right.
Insufficient abstraction means that you can't reuse code, so instead of getting it right once you have to get it right every time.
And what's the compelling reason to use C? It's "simple" and "close to the metal" but the problem domains are anything but. Availability and instrumentation bias encourage people to use C for "efficiency", trading off correctness for faster code. It's a tradeoff one can make, but if you're working with other people's data you should think twice.
How many times faster does a piece of code need to be to make up for violating a user's privacy? How many times "simpler" does code need to be for someone reading it to justify not making every effort to avoid security faults?
But then, I've worked with financial data a lot, and my work ended up being associated with a national-scale bank with an API. The sheer number of attacks my code had to endure was on a completely different scale from what most people will ever experience.
> Not that any other language is on trial here, but there are languages that would naturally make such a bug into a compile-time error.
I'm sure there are languages where using any condition forces you to make sure there is an explicitly taken branch for all possible inputs -- and perhaps that language also magically knows what you must do inside each branch. Show me all the projects that are using these for web development, which is the context for this discussion. Otherwise it is not fair to bash C over it.
All the mainstream languages I see in web development allow you to make the exact same mistake.
> I fail to see how C helps you avoid making such an error. Because C's general standpoint here is that there is no such thing as an error, there is no such thing as a type, and it is perfectly acceptable to have undefined behavior sitting within trivial edit distance of common code patterns.
But there are errors. There are types. UB is not relevant to what you are replying to. The problem in question was about a condition that was not considered. Again, show me how your average web language tells the programmer that he forgot to write some if condition, or stop bashing C over it, because you're dreaming of features in some unicorn language nobody uses in the real world anyway.
> Actually, no, that's false. A lot of popular languages check arithmetic.
Please read again. I said "the former", referring to input validation. That is relevant to every language accepting untrusted input.
> Insufficient abstraction means that you can't reuse code, so instead of getting it right once you have to get it right every time.
That statement is so wrong I can only conclude that you're smoking something or you haven't programmed in C and you are completely oblivious to the work the OpenBSD folk (and many others) are doing to fix these issues in existing reusable library code as well as to introduce new, safer APIs. Sure, you can pretend that everyone who wants a buffered output stream to a socket has to write their own circular buffer and repeat the same mistake. You are wrong, and if you had paid attention you would have seen counterexamples (libevent is a popular one) that prove you wrong. You're just hating on C but don't know it.
> And what's the compelling reason to use C?
I'm not trying to convince anybody to use it, and my reasons are my reasons -- the performance strawman you're building isn't the key one. But it doesn't matter.
> Show me all the projects that are using these for web developement, which is the context for this discussion. Otherwise it is not fair to bash C over it.
I'm not sure why I'd play that game when "mainstream" is basically a way to discount any offering. But in C++, C#, or basically any class-based language, you can code to guard against this. Functional languages with strong type systems provide strong guarantees against it; OCaml and Haskell come to mind as well-known examples.
> But there are errors. There are types.
Not according to the compiler. Anything can become anything else.
> That statement is so wrong I can only conclude that you're smoking something or you haven't programmed in C and you are completely oblivious to the work the OpenBSD folk (and many others) are doing to fix these issues in existing reusable library code as well as to introduce new, safer APIs
I'm aware of the work, but C's problem is not that it lacks more library code.
> Sure you can pretend that everyone who wants a buffered output stream to a socket has to write their own circular buffer and repeat the same mistake.
I'm not saying they have to. It's just that C's language design makes it easier for people to do so. Those are very different statements.
> You're just hating on C but don't know it.
I see where this is now going. "If you knew it, you'd like it." I'm not going to waste any more of either of our time if this is the new talking point.
> Not according to the compiler. Anything can become anything else.
You mind clarifying that? I constantly have compile-time errors from type mismatches. You can cast a variable to a different type, but you can do that in any language.
You can't implicitly convert, say, a struct to an int, or assign between incompatible pointer types, without the compiler flagging it. The closest you get is among the integer types themselves: char, short, and int convert implicitly, though compilers will often warn about narrowing conversions.
That said, JavaScript doesn't have types at all, really. Every type can be implicitly converted to another type, and yet Node is still used server side.
> You mind clarifying that? I constantly have compile-time errors from type mismatches. You can cast a variable to a different type, but you can do that in any language.
Well, for one, people overload types: the practice of, for example, stuffing error codes into a function's return value. The second problem is unions. They exist to make people who convert byte streams into structures happier, but they often get misused elsewhere.
And void* is essentially a black hole, but it's a very common black hole to see in programs.
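A contrived sketch of what that looks like in practice (all names invented): the compiler rejects, or at least loudly warns about, a direct mismatched assignment, but laundering the pointer through a void * context parameter compiles without a peep.

    #include <stdio.h>

    struct user    { const char *name; };
    struct session { int id; };

    static void print_user(void *ctx)
    {
        struct user *u = ctx;   /* no check possible: void * erased the type */
        printf("%s\n", u->name);
    }

    int main(void)
    {
        struct session s = { 42 };

        /* struct user *u = &s;    <- diagnosed as incompatible pointers */
        print_user(&s);            /* ...but this compiles silently: UB */
        return 0;
    }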
> That said, JavaScript doesn't have types at all, really. Every type can be implicitly converted to another type, and yet Node is still used server side.
Yeah, and that's the most solid criticism against it! While it's true that it's quite a bit harder to cause fatal errors in a Node program, the language does little to help you solve these problems once you expose that functionality to the outside world.
That's why TypeScript has been doing so well, I think. It's consuming the JavaScript ecosystem faster than anything I've ever seen, and having written a fair amount of it (for a server-side application, no less) I'm constantly surprised by how capable it is.
And of course, I'm on record as a big fan of PureScript and Elm, with a bias towards the former.
> Why C? fewer hipsters: hipsters dislike memory management
Is this meant to be ironic? Spinning up web apps with C in 2017 seems like a supremely hipster thing to do.