To Wash It All Away [pdf] (usenix.org)
68 points by kryptiskt on March 8, 2014 | 19 comments



Since the title doesn't describe it, I will: it's a new James Mickens article, this time about the insanity that is the web.

I found his previous articles really funny but this one felt a bit forced. I think it's maybe too easy to pick on the web, so it ended up being a series of "also, JavaScript equality makes no sense, [NON SEQUITUR HERE]".

Here's a collection of some of his others, which are worth reading before this one: http://blogs.msdn.com/b/oldnewthing/archive/2013/12/24/10484...


This one from January is not on that list: https://www.usenix.org/system/files/1401_08-12_mickens.pdf


Thank you! I'd never seen this one!


I enjoyed it a lot. Not quite as good as some of his others, but better than others, too. I don't feel your characterization is quite fair w.r.t. JavaScript, but to each their own.


Mickens has yet to meet the whiny twentysomethings on /r/linux, who will not rest until every component of their Linux system that actually works is replaced with a newer and shinier component that was written by whiny twentysomethings and tested against the four whiny twentysomething use cases but not against anything else because "that's from the 70s and nobody uses that anymore".

After they've succeeded in replacing init and X11, the most likely target for the Whiny Twentysomething Brigade appears to be the terminal subsystem.

"Why does Linux still have the concept of terminals?" they say. "Those are from the 70s. Nobody uses them anymore. All those escape codes are unnecessary legacy cruft. If Linux were a modern OS, it would natively support a modern textual interface based on HTML and JavaScript..."


I think before they can get to terminals, the audio subsystem needs to be reworked once again. Because it's been a few years already, and pulseaudio seems like it might slowly start to become reasonably stable if they don't do something about it.


(Paraphrase) "The technology underpinning the web pretty much sucks at this point"

Yes! I agree! I wish to subscribe to your newsletter.

"This is my last column!"

Fuck!


Funny, but rather overstated. Yes, the JS runtime is incredibly malleable, and people do indeed load code from everywhere. But there are important, universally adopted conventions that mitigate the risks, the biggest being: 1) libraries define a single global variable (or attach themselves as a member of an existing one); 2) manipulation of platform prototypes is strictly verboten; 3) === is used in preference to == everywhere.
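
For illustration, here's a minimal sketch of conventions 1 and 3; the names (MyLib, isEmptyString) are made up, not from any particular library:

    // A hypothetical library following convention 1: it exposes exactly one
    // global namespace object and leaves built-in prototypes alone.
    var MyLib = (function () {
      function isEmptyString(s) {
        // Convention 3: strict equality avoids coercion surprises.
        // 0 == "" is true, but 0 === "" is false.
        return s === "";
      }
      return { isEmptyString: isEmptyString };
    })();

    console.log(MyLib.isEmptyString(""));  // true
    console.log(MyLib.isEmptyString(0));   // false: === never coerces types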

The "relentless asynchronous" nature of the environment is a challenge but, I'd argue, it's the right challenge. Async programming is actually a Good Thing because you're only ever dealing with a single process, a single path of execution, all the time. It is possible to get this right, whereas with multi-threaded code, I don't think it is, in general. It's true that you can complicate matters by weaving closures in and out of each other to the point where it's impossible to understand the state of your program by reading the source (or even with break points), but that's just bad code, and shouldn't reflect on the notion of async itself.

It really is too bad, though, that the runtime doesn't (and can't) offer better invariant guarantees, and that we have to rely on convention. But honestly, that sort of thing is possible in Java, for example, too. One of your dependencies can load up ASM or BCEL and start changing classes. Heck, Hibernate does this routinely. The only real difference is that the barrier to entry for JavaScript programs to do this sort of thing is much lower.
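
To make the barrier-to-entry comparison concrete, this is roughly all it takes in JavaScript, no bytecode library required (the logging wrapper is just a toy example):

    // Any script that happens to load can rewrite a platform prototype
    // in a couple of lines.
    var originalIndexOf = Array.prototype.indexOf;
    Array.prototype.indexOf = function (x) {
      console.log("someone is searching for", x);   // silently observe every caller
      return originalIndexOf.apply(this, arguments);
    };

    console.log([1, 2, 3].indexOf(2));  // logs the spy line, then prints 1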


He sounds like someone who gave up after attempting to write his first website.

Sure, there are some problems, but most of the problems that outsiders talk about make a lot of logical sense once you learn more about how the system works.


"First" websites were written in the late 80's early 90's probably before you were born.

The web is a gargantuan piece of trash. Throw your HTML on the pile if you want. Take two and call me back in 10 years when you have realized what a fool's errand developing for the web is.


I hope you use HN via BBS or gopher or something then. They are obviously going to take back the crown from WWW any day now.


But what about the "network effect": the thing that gets chosen to be scaled up necessarily cannot be optimal. TCP is not optimal (there are better protocols), bitcoin is not optimal (there are better altcoins), but it's better to build layers of abstraction on top of them (e.g. HTTP on TCP) than to try to replace them.


Where do you get, from the network effect, that "necessarily the thing that gets chosen to be scaled up cannot be optimal"? The network effect just means that whatever is currently scaled up is hard to displace.


In fields where we gradually refine our definition of optimality, we don't tend to be aware of the retrospectively-optimal solution until after we've put previously-considered-optimal solutions through their paces, and found their pros and cons.

The network effect, however, tends to kick in when a solution is "good enough," which happens on some random iteration possibly long before the "truly optimal" solution is found.

The only time you'll see an optimal solution to a problem spread by the network effect is when the definition of optimality is clear enough from the start that the optimal solution can be found before people have a chance to settle on anything lesser.


What other 'better than TCP' protocols have been tested at very large scale? Just curious; I never hear of any real competition for TCP. (And the few times a company, say Google, attempts to upgrade it, the result doesn't end up that much better.)


When people say this, they're usually talking about SCTP. It's used by mobile carriers (especially as a control channel in 4G signalling), banks and stock exchanges, and in WebRTC (as plain SCTP for the data channel; the media channels use SRTP).
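
For context, a rough sketch of what the SCTP-backed part of WebRTC looks like from the browser side, assuming the standard RTCPeerConnection API; signalling is omitted, so this won't connect by itself:

    // Data channels ride on SCTP over DTLS rather than TCP; SCTP lets you
    // ask for unordered and/or unreliable delivery per channel.
    var pc = new RTCPeerConnection();
    var channel = pc.createDataChannel("chat", { ordered: false });

    channel.onopen = function () {
      channel.send("hello over SCTP");
    };
    channel.onmessage = function (event) {
      console.log("received:", event.data);
    };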

Google's SPDY (and therefore HTTP/2) is basically an attempt to get the properties and advantages of SCTP, but only for HTTP, and done at a slightly higher level of abstraction where it isn't as efficient.


Thanks, first time I've seen this acronym and heard of its influence on SPDY.


Direct link to the column hosted by Usenix (mirror): https://www.usenix.org/system/files/1403_02-08_mickens.pdf


Any idea why this is Mickens' last article? I've enjoyed quite a few of his articles but wouldn't describe myself as a follower.



