The Group That Rules the Web (newyorker.com)
139 points by state on Nov 20, 2014 | 31 comments



This is exceptionally clear writing that can give someone who doesn't know what a Web browser is an interesting glimpse into Web standards and why they matter. I hope that a business model for quality journalism can endure.


Journalism needs to be repositioned as "selling time" or "life extension".

Good writing takes much longer to write than to read, not counting investigation, research or subject matter expertise, which in some cases may be a lifetime's work. When you pay for quality writing, you are saving (buying) time and avoiding paths that would waste your limited time.

Money can't create new time, but it can fund journalism that saves your life-time.


You are right. Moreover, time is not money: you cannot get it back!


Like so many other parts of the economy, journalism has been "hollowed out", with middlebrow publications like Time suffering the worst. "High-end" publications like the New Yorker and the Economist are actually doing well.


The New Yorker has always been pretty good on this front. As other publications decline, their content quality has mostly remained as good as it's ever been.


Yep, hats off to the author (Paul Ford).


Yes, that's a good article.

I really hoped that XHTML would win out. Have you ever looked at the HTML5 parsing rules? They codify acceptance of malformed comments and specify standard handling of bad HTML. There are pages of special cases needed just to build a tree from the tags. Check out Python's "html5parser" for the amount of effort required per input character to make this work.

At least in XHTML, the tags balance.


What a strange, myopic view that a language should be judged primarily by how easy it is to parse. Writing a parser is something very few people will ever have to do compared to how many will use the language itself. It seems self-evident that the language should first be judged on how easily it lets you express yourself.


Forgiving HTML parsing is one of the great strengths of the web.

http://quandyfactory.com/blog/39/the_virtue_of_forgiving_htm...


The main point of that article is:

> They're missing the point. The virtue of forgiving parsers is that they vastly increase the pool of people able and willing to create web content.

I find it very hard to believe that more people are coding because the threshold for what counts as an error has been relaxed. Although I do admit 'On Error Resume Next' is what kept me motivated to learn programming in VB6.

So I'd say it might be an advantage for getting more people hooked on programming web pages, but professionally the price we pay for this easy ramp-up for beginners is very high.


Really? You think normal people have the patience for the Web browser just erroring out if they forget to close a tag?


Well, the rest of my software development works that way too, actually.


But that's the thing; there are many, many people out there writing HTML who are not programmers. I mean, much of it is a horrifying mess, but it looks more or less correct in a Web browser or an e-mail client and that's what's exciting about it.


"Forgiving parsers" meant people had to learn when to use

    <!--[if IE 6]>
    Special instructions for IE 6 here
    <![endif]-->


While XHTML was nicer to parse, it only described how to render a valid page. How often are pages like that in the wild? Even if the page itself is fine, some user add-on or script can inject other, possibly invalid, content.

The HTML5 parsing spec is so complex because, unlike XML with its strict validity, HTML5 works with even the most mangled tag soup.


For XHTML to win, we'd have to lose our history. Yes, it would have been simpler in terms of parsing, but I'm glad practicality beat purity. Every time I see a CDATA section in some old code, something inside me dies.


If malformed/bad HTML must be handled, then it is no longer bad/malformed; it's part of the standard.

IMHO that's plain stupid. BTW, the RDF specs have the same thing: rules for how to handle bad literals and other invalid input.


Unfortunately with decades of baggage they can't very well just break things.


Paul Ford nails it, for a lay audience no less.


Does anyone have a non-paywalled link? Going via Google didn't work like it does for the NYT.


I can see it fine despite not being logged in; maybe clear your New Yorker cookies.


> With that complexity [featurism] came Balkanization. [...] Anarchy would ensue and photographers would complain and complain. As this Balkanization was beginning to happen [...]

Can anyone remind me of how bad this actually was? One paragraph was not enough to jog my memory.


To be honest I could barely make it three paragraphs without wincing at this claptrap. It's overly broad and blatantly inaccurate.


I read the whole thing and thought it was excellent. I didn't wince once, and I didn't see anything inaccurate.


I wince when non-French persons render words such as "élite" with an accented e.

It's typical pretentious New Yorker twaddle.

Especially when the elite they are referring to held no such position of hatred, nor desired those features.

The main issue with HTML was looseness of definition, which has plagued us ever since. Avoiding correctness to benefit those who cannot be bothered to simply follow a prescribed document structure was the greatest error.

Editing in a browser page? Pfft, nonsense.

Nobody of technical note gave a fig about such things.


Tim Berners-Lee advocated for in-browser editing in the early days of the web and saw it as a critical part of the platform. This is documented in his book "Weaving the Web."


I agree it's aimed at a broad audience (not a bad thing in this case) but could you give an example of what's inaccurate?


I was being pretty harsh, I'll admit, a bad habit.

I was just irked at the opening assertion that "élite hypertext thinkers" and "computer savvy" people despised it.

Far from it, they were the only people on it.


That's actually true, at least according to Tim Berners-Lee's book. Hypertext was nothing new when the WWW came around and other systems were more powerful.


The drama attached to "despised" is not true, however, and the lacking features, i.e. editing in a browser or "who has linked to my page", were not of overwhelming concern.

The author is simply attributing the features that seem personally important to them/journalists as the most pressing concerns of the people making/using the web at the time (early '90s).

I was on Usenet virtually every day keeping an eye on discussions on SGML vs HTML, when the web was in its infancy.

Certainly consensus was not always smooth and non-controversial. But to say anyone (who wasn't simply trolling) despised HTML, let alone the "élite", is utter nonsense.


Thanks for taking the time to clarify.



