BCHS stack – BSD, C, httpd, SQLite (learnbchs.org)
130 points by based2 on June 18, 2017 | 75 comments



> BCHS (pronounced /biːtʃəz/, beaches) is for real development. It's a hipster-free, open source software stack for web applications.

> Why C? fewer hipsters : hipsters dislike memory management

Is this meant to be ironic? Spinning up web apps with C in 2017 seems like a supremely hipster thing to do.


The hipster culture is the culture of counter-culture. A hipster aware of their own hipsterness would therefore violate their internal value system. So yes, it's a hipster thing. But the hipster is not aware. Or is at least pretending not to be.



Yep. Additionally, having a name without vowels and pronouncing it as if it had vowels is also pretty hipster, mustachioed as they say :)


Pretty much. I spent a year of my life maintaining a web app written in C circa 2002. Even then, it was a supreme pain as compared to Perl or PHP.


Back in 1999 I managed a similar thing written in C with flat file storage. The whole tool chain chucked out a static binary and a directory full of templates and you just cp'ed it to the target and restarted the process.

When I look back I've got to be honest and say that it was actually a thing of beauty. The heavy lifting required for some core functions was quite bad, but the whole thing pales in comparison to some of the enterprise and microservice-based C# behemoths I have looked after since, which require literally hours of head-scratching to deliver a simple change.

I'd rather spend that time writing C than scratching my head. I'd have a lot more hair now.


You can duplicate that experience with Go web dev today! :)


I have actually tried that and you're right. The whole experience made me consider going back to electrical engineering when I realised that the status quo hadn't improved in 20 years :)


Have you looked at Lua+C and LuaJIT+nginx (CloudFlare, OpenResty)? Work is underway to compile a subset of Perl6 to LuaJIT+nginx.


I can't get over the 1-based indexes in Lua, unless that has changed in recent years.


Still there :) The trade-off is rapid development (interpreted, GC), fast execution (the JIT is faster than JavaScript's) and easy interfacing with C libraries.


I shall have a try next week and see if I can cope with it :)


At about that time I knew a guy who insisted that the best web dev platform was AOLServer with application code written in C.


I worked on a startup that had an AOLServer kind of clone, it was wonderful.

However, in our case only the DB drivers and the Apache Tcl module were written in C; everything else was pure Tcl.


AOLServer + TCL was pretty slick


Yes indeed, oldies but goodies. This combo is still available and maintained, well, at least in the form of naviserver. Tcl of course has continued in active development and is as useful as ever.

https://bitbucket.org/naviserver/naviserver/

http://tcl.wiki/



Using C is just plain dumb these days. I would have thought that avoiding memory safety bugs is worth the price of some "hipsters".

Edit: it would appear to be ironic


Found the guy for whom C was too intimidating!


Well I can program in assembler if that pleases you, oh blessed dmr prophet. C is simply not worth using unless you have to due to existing constraints etc.


Not to mention hiding acronym jokes behind IPA.


They hated hipsters before it was cool.


A real hipster programmer would use Fortran (and use arithmetic IF statements), or PL1/G if they are lazy and want to handle text better.


Trendster vs. hipster is a thing now; I guess I don't keep up with the trends anymore. But yeah, next step we use assembly.


WebAssembly is a thing...



It is so easy and fun to write web apps in C. Just remember to test them with tsan/asan/valgrind, run them through 2-4 static analyzers, and check with a couple of fuzzers too!

Well, I think I'll pass on this one and hopefully won't need to maintain such apps in the future.
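
(Making that concrete: a minimal sketch of the kind of bug one AddressSanitizer run catches. The handler and the overflow are invented for illustration.)

    #include <stdio.h>
    #include <string.h>

    /* Build for test runs with, e.g.:
     *   cc -g -fsanitize=address,undefined -o handler handler.c
     * AddressSanitizer aborts with a report at the overflow below. */
    static void greet(const char *name)
    {
        char buf[8];
        strcpy(buf, name);    /* overflows buf for any name of 8+ chars */
        printf("Content-Type: text/plain\r\n\r\nHello, %s\n", buf);
    }

    int main(void)
    {
        greet("anonymous coward");    /* 16 chars + NUL: ASan flags it */
        return 0;
    }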


Plus Frama-C, model checkers, a concurrency analyzer, CompCert in Coq, Simpl in HOL, symbolic execution in KLEE, path-based testing, and property-based testing... then it might be good to go.

On a capability-secure processor, running in a VM, in case new classes of attack against C code are discovered again. ;)


Oh yeah?!? I see your "beaches" and raise you DIRP for real work.

* Debian

* Iron ( https://github.com/iron/iron )

* Rust

* Postgres

(Clojure, Redhat, Apache, Postgres is good too)


I'll watch you DIRP in the distance and show my hand for future of safe, embedded, web apps:

JAWS - JX OS + Ada Web Services

https://en.m.wikipedia.org/wiki/JX_(operating_system)


haha, nice.

I think it will all be ReasonML.

Since it's basically OCaml with a more JS-friendly syntax, it will lead the masses of JS devs to OCaml; they will find MirageOS, build their servers with it, and no one will need anything else anymore.


What about: Rust, Ubuntu, Postgres, Hyper


What about Rocket? Is Iron preferable?


Which one has async IO?


They are both built on top of Hyper, which I believe now has async IO.


What about Debian + Go + Postgres ?



I don't really have a problem with what is presented here, but it seems written in a way that trivializes modern web development. I once worked with an MFC C and C++ guru as dev manager who would do the same thing, yet he could not code a web page. He once claimed he could pick up all my knowledge inside a week and bang out a demo as fast as I could. Challenge accepted: one week later he had a functional button.

BSD needs a higher-level language with a good dev environment more than ever now. It seems you can't have both without either agreeing to a ton of horrible licensing (Java), or vendor lock-in to some degree (KRB5 blob from Microsoft for .NET), or a lack of platform support. Swift would be a good match because of LLVM support, but IBM, as maintainer of BlueMix, has thrown it squarely to Linux.

This stack, in a sardonic way, is actually better than the other ones I listed for those reasons. It's just needless to throw shade on modern web development in general, because how do you think we got into this situation to begin with?


Just tried a whois lookup on this domain. There are lots of other domains registered by "Mengzhu Wang": http://domainbigdata.com/nj/7bJvx8m5-fv11P6kSKx6vA

According to archive.org, this page has existed since 2015:

https://web.archive.org/web/20150701000000*/http://www.learn...


Yes, the joke is apparently unmaintained.

> SQLite is a "self-contained, embeddable, zero-configuration" database. And it's bundled (for now…) with stock OpenBSD.

It has been 8 months and 3 weeks since SQLite was removed from OpenBSD base.

But if you want to develop your modern and hipster-free webapp, then it's still available in ports!


This tool seems to be completely useless. It missed every domain I own and listed dozens of unrelated domains.

A first and last name are useless without additional WHOIS fields for confidence, such as postal code, phone, etc...

I found 246 Mengzhu Wangs on LinkedIn, over 30 of them in IT, any of whom could own some of those domains.


Since this is OpenBSD-centric, it's worth mentioning that OpenBSD still ships with Perl (and probably always will unless its package management tools get a rewrite), so a "BPHS" stack is both feasible more-or-less out of the box and at least slightly more sane than trying to use C for web development.

If you really need a web development solution that's completely out-of-the-box on OpenBSD (aside from perhaps some third-party Perl packages), you might be able to use a "BPHL" stack (BSD Perl Httpd Ldapd) and use LDAP in place of a general-purpose database. That's probably a horrible idea, but whatever.


The example is a little too thin to explain what is great about it.

Currently, I don't see why I should use it, except to go at least ten years into the past.

SQLite is great, but I can use it with other useful tools instead of a naked web server.


Do you want buffer overflow? This is how you get buffer overflow in web apps.


Whenever I see C web devs I'm reminded of https://blog.fefe.de/, a blog I used to read. Scroll down to the end / check the FAQ.


I'm still waiting for fefe to write his own kernel ... should work well with dietlibc and gatling


If we were going to craft a stack involving assembly, someone would say that assembly is for the weak and that real developers write directly in machine language.


This is such an obvious joke. Chill my friends.


It's not obvious, because I bet most of HN is wondering right now. It's not necessarily wrong to code microservices in C; it has been done, and has been done in C++ as well. I like the pledge approach.

I would like to see nginx support here instead of just OpenBSD's web server. Perhaps this is because nginx won't work with pledge?
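
(For anyone who hasn't seen it: a minimal sketch of what the pledge approach looks like in a CGI-style responder. OpenBSD-only, and the promise string here is just an example.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* After setup, promise to use only stdio and read-only
         * filesystem access; any other syscall kills the process. */
        if (pledge("stdio rpath", NULL) == -1) {
            perror("pledge");
            return EXIT_FAILURE;
        }
        printf("Content-Type: text/plain\r\n\r\npledged and serving\n");
        return 0;
    }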


For a specific example, iMatix had DSLs that output C, which they used for high-performance server apps. To do that for web apps, one would need a web framework for C that the code could call. I believe I've also seen both a C and a C++ framework in the past; both bragged about efficiency on cheap servers.


The difference being that C++ offers the language features and standard libraries to write safer code, while C....


>C++

>Safer code

Not really. I mean, using vectors and strings is nice, but that's about it for safety. You'll still get a shit-ton of memory leaks, which isn't great for long-running web apps.


It seems you haven't updated yourself on C++11, C++14, C++17 best practices.

If the code has any explicit new/delete or malloc/free, then something is wrong with the design.


> Anybody can write crappy, bug-ridden and insecure code. In any language. Good news: with C, it's even easier! So familiarise yourself with common (and common-sense) pitfalls noted in the SEI CERT C coding standard and comp.lang.c FAQ.

This part has a joke in it, which makes the overall website not read like a joke. I'm pretty sure it isn't one, even.


Of course it is.


That's hardcore web server stack.


I'm a firmware developer, and have been all my professional life. C is my goto language. Rather than look at this through the lens of an experienced web developer, consider the value it provides to C natives like me.

This is to C devs what the micro:bit is to kids learning computing: a starter experience with server-side dev.


I feel this is a bit light on information. What exactly are the big benefits of this?


just a prank bro


Is that a joke? I'm not even sure at this point.


Successful satire.


I'm highly sure it is a joke. I'm also somewhat interested in the underlying point. :)


It's the "anti hipster stack," but it has SQLite, which is one of the most hipster databases there is.

Or at least the pool of "hipsters" I know like to pick it for use cases where they haven't considered fully whether or not it's actually going scale.


Shipping C? To avoid "hipsters". Dear Kristaps, please stop pushing your religion on other people like it's sound engineering practice.

If by "hipster" you mean, "People who are sick and tired of shipping insecure software that violates consumer trust and has tends to case downtime" then by all means, ship C code that handles untrusted input.

I think it's criminal to propose this without massive auditing, which is what HLL runtimes end up getting.


Do you understand how BCHS works? Obviously not, otherwise you wouldn't have made this comment.


What, exactly, do you think OpenBSD can do to prevent an attacker from compromising your responder binary's logic that everyone else isn't already doing? OpenBSD's sandboxing model is superb, but that's not what I'm criticising.

C is a bad choice for handling untrusted input precisely because it makes it very difficult to prevent logic errors that disclose user data in unexpected ways. The security community has done its best to prevent the even more disastrous class of breakout errors that compromise the entire resource (and OpenBSD is great for this, way better than containers).

But as my comment was specifically addressed at the choice of C, I don't feel like I need to sweet talk an OS I already say lots of nice things about.

Maybe you'd like to respond with all sorts of great literature about how the C spec is not full of holes and gotchas?

And don't even get me started on that "simple example binary."


> it makes it very difficult to prevent logic errors that disclose user data in unexpected ways

Oh that's a new one. But now that you mention it, I'm starting to recall all the operators that will fprintf the contents of ~/.ssh/ to the network socket upon misuse.

Wouldn't it be nice if we could use a simple assert to test the state and inputs before we proceed to shovel private keys in a totally not privsep'd process handling public, unauthenticated connections. Heck, even a simple condition would do, if we could just return an error and stop further processing when things look bad. But you're right, that kind of code would've been too advanced for 1958 or whenever we got this language...
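
(For the record, a minimal sketch of that kind of guard, with invented names: validate untrusted state and inputs up front, and refuse instead of proceeding on bad data.)

    #include <errno.h>
    #include <stddef.h>

    /* Hypothetical request: reject bad state before touching buffers. */
    struct request {
        const unsigned char *body;
        size_t len;
    };

    static int handle(const struct request *req)
    {
        if (req == NULL || req->body == NULL)
            return -1;                   /* refuse, don't crash */
        if (req->len == 0 || req->len > 4096) {
            errno = EMSGSIZE;            /* bounds-check the input */
            return -1;
        }
        /* ... safe to process req->body[0 .. req->len - 1] here ... */
        return 0;
    }

    int main(void)
    {
        struct request bad = { NULL, 0 };
        return handle(&bad) == -1 ? 0 : 1;    /* bad input is rejected */
    }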


C was developed in the late '60s and early '70s.

There were already better, safer languages being used for systems programming, like Algol and PL/I dialects, which were almost 10 years old when C appeared.


I, like the maintainers of OpenSSH, envy a world where a security model is so simple.

If only folks wishing for a simpler time could do so outside of security-critical code.

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-0777


So, at the root of this bug we have a broken implementation of a circular buffer that fails to accommodate one case. That is the kind of binary logic error I can repeat in practically every single programming language. I fail to see how C makes it very difficult to prevent this type of error.

And on the other hand, we have code that doesn't bother to check that the inputs from the outside world are valid and that they do not cause integer wraparound. The former is a problem you can repeat in just about any language, while the latter is relevant to many languages. Guess what, I know how to check inputs. I know how to prevent integer wraparound. And C doesn't make it hard for me to do so.
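
(The usual pre-condition idiom, for the record; the function name is invented:)

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Add two untrusted lengths; report failure instead of wrapping. */
    static bool add_sizes(size_t a, size_t b, size_t *out)
    {
        if (a > SIZE_MAX - b)    /* a + b would wrap around */
            return false;
        *out = a + b;
        return true;
    }

    int main(void)
    {
        size_t total;
        return add_sizes(SIZE_MAX, 1, &total) ? 1 : 0;    /* rejected */
    }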

The next and last interesting part of the disclosure is mostly concerned with leaking private keys. Now what did I say about juggling private keys in an internet-facing process? Just because the ssh devs didn't isolate that part into a separate process doesn't mean it can't be done (and honestly, I have no idea why I would be juggling private keys during the generation of a web page).

So there you have it: old code from a time before explicit_bzero, juggling private keys, not checking inputs, and running on a system without malloc_options. You can lament it all you like, but that doesn't mean everyone has to do it wrong. It shows that you can do it wrong, not that C makes it hard to do it right.


> So, at the root of this bug we have a broken implementation of a circular buffer that fails to accommodate for one case. That is the kind of a binary logic error I can repeat in practically every single programming language. I fail to see how C makes it very difficult to prevent this type of error.

Not that any other language is on trial here, but there are languages that would naturally make such a bug into a compile-time error.

I fail to see how C helps you avoid making such an error, because C's general standpoint here is that there is no such thing as an error, there is no such thing as a type, and it is perfectly acceptable to have undefined behavior sitting within trivial edit distance of common code patterns.

> The former is a problem you can repeat in just about any language

Actually, no, that's false. A lot of popular languages check arithmetic. Faulting in such a case would have saved the day, but hey, faster execution. Even old languages like Lisp did this.

Not many languages are as lackadaisical as C is about this. But error handling (at any stage) has always been C's weakest point. I can't think of a C successor that hasn't called out C on this front and then tried to improve upon it.

> It shows that you can do it wrong, not that C makes it hard to do it right.

Insufficient abstraction means that you can't reuse code, so instead of getting it right once you have to get it right every time.

And what's the compelling reason to use C? It's "simple" and "close to the metal" but the problem domains are anything but. Availability and instrumentation bias encourage people to use C for "efficiency", trading off correctness for faster code. It's a tradeoff one can make, but if you're working with other people's data you should think twice.

How many times faster does a piece of code need to be to make up for violating a user's privacy? How many times "simpler" does code need to be for someone reading it to justify not making every effort to avoid security faults?

But then, I worked with financial data a lot, and my work ended up being associated with a national scale bank with an API. The sheer amount of attacks my code had to endure was on a completely different scale than most people will ever experience.


> Not that any other language is on trial here, but there are languages that would naturally make such a bug into a compile time error.

I'm sure there are languages where the use of any condition forces you into making sure there is some explicitly taken branch for all possible inputs -- and perhaps that language also magically knows what you must do inside each branch. Show me all the projects that are using these for web development, which is the context for this discussion. Otherwise it is not fair to bash C over it.

All the mainstream languages I see in web development allow you to make the exact same mistake.

> I fail to see how C helps you avoid making such an error. Because C's general standpoint here is that there is no such thing as an error, there is no such thing as a type, and it is perfectly acceptable to have undefined behavior sitting within trivial edit distance of common code patterns.

But there are errors. There are types. UB is not relevant to what you are replying to. The problem in question was about a condition that was not considered. Again, show me how your average web language tells the programmer that he forgot to write some if condition, or stop bashing C over it, because you're dreaming of features in some unicorn language nobody uses in the real world anyway.

> Actually, no, that's false. A lot of popular languages check arithmetic.

Please read again. I said "the former", referring to input validation. That is relevant to every language accepting untrusted input.

> Insufficient abstraction means that you can't reuse code, so instead of getting it right once you have to get it right every time.

That statement is so wrong I can only conclude that you're smoking something, or you haven't programmed in C and are completely oblivious to the work the OpenBSD folk (and many others) are doing to fix these issues in existing reusable library code as well as to introduce new, safer APIs. Sure, you can pretend that everyone who wants a buffered output stream to a socket has to write their own circular buffer and repeat the same mistake. You are wrong, and if you had paid attention you would see counterexamples (libevent is a popular one) that prove you wrong. You're just hating on C and don't know it.

> And what's the compelling reason to use C?

I'm not trying to convince anybody to use it and my reasons are my reasons -- the strawman you make of performance isn't the key. But it doesn't matter.


> Show me all the projects that are using these for web developement, which is the context for this discussion. Otherwise it is not fair to bash C over it.

I'm not sure why I'd play the game when "mainstream" is basically a way to discount any offering. But in C++, C#, or basically any class-based language you can code to guard against this. Functional languages with types provide strong guarantees against this. OCaml and Haskell come to mind as well-known examples.

> But there are errors. There are types.

Not according to the compiler. Anything can become anything else.

> That statement is so wrong I can only conclude that you're smoking something or you haven't programmed in C and you are completely oblivious to the work the OpenBSD folk (and many others) are doing to fix these issues in existing reusable library code as well as to introduce new, safer APIs

I'm aware of the work, but C's problem is not that it lacks more library code.

> Sure you can pretend that everyone who wants a buffered output stream to a socket has to write their own circular buffer and repeat the same mistake.

I'm not saying they have to. It's just that C's language design makes it easier for people to do so. Very different statements.

> You're just hating on C but don't know it.

I see where this is now going. "If you knew it, you'd like it." I'm not going to waste any more of either of our time if this is the new talking point.


>Not according to the compiler. Anything can become anything else.

You mind clarifying that? I constantly have compile-time errors from type mismatches. You can cast a variable to a different type, but you can do that in any language.

You can't implicitly convert a char to an int; your compiler will take that as a fatal error. About the closest you could get is char and short, since they're essentially the same data type, but even then the compiler might throw an error over an implicit conversion.

That said, JavaScript doesn't have types at all, really. Every type can be implicitly converted to another type, and yet Node is still used server-side.


> You mind clarifying that? I constantly have compile-time errors from type mismatches. You can cast a variable to a different type, but you can do that in any language.

Well, for one, people overload types: the practice of, for example, using return codes as error values. The second problem is unions; they exist to make people who convert bytestreams into structures happier, but they often get misused elsewhere.

And void* is essentially a black hole, but it's a very common black hole to see in programs.
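
(A contrived illustration of what I mean; in C this compiles without a single cast:)

    #include <stdio.h>

    int main(void)
    {
        double d = 3.14;
        void *p = &d;           /* the type information is gone here */
        int *ip = p;            /* ...and comes back without a cast */
        printf("%d\n", *ip);    /* reads the double's bits as an int: UB */
        return 0;
    }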

> That said, JavaScript doesn't have types at all, really. Every type can be implicitly converted to another type, and yet Node is still used server side.

Yeah, and that's the most solid criticism of it! While it's true that it's quite a bit harder to cause fatal errors in a Node program, the language does little to help you solve these problems once you expose that functionality to it.

That's why TypeScript has been doing so well, I think. It's consuming the JavaScript ecosystem faster than anything I've ever seen, and having written a fair amount of it (for a server-side application, no less), I'm constantly surprised at how capable it is.

And of course, I'm on record as a big fan of Purescript and Elm, with more bias towards the former.




