JavaScript: The World's Most Misunderstood Programming Language (2001) (crockford.com)
35 points by mark_l_watson on Dec 13, 2009 | 34 comments



Not a bad little article, although the fact that it is eight years old should make anyone who finds it useful question how much of it is still accurate.

On a somewhat related note, I really hate it when programming articles don't have a date on them. You start reading, only to find out it was written so long ago that the version it was written for no longer exists and everything in it is deprecated.


I hate it when any page on the web doesn't have a date. I wish created-on and modified-on dates were a standard somehow.


It is a standard, somehow. After all, most if not all webservers will include that information in the headers.

Browsers could easily display it if their designers wanted to implement that. It would be a fairly simple addition to put it into the status bar on the right.

I'm not sure if Greasemonkey scripts have access to header information, but if they do, that might be one way to solve this for you.
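If they do, a minimal sketch might look something like this (untested; it assumes the script can use the page's XMLHttpRequest for a same-origin HEAD request, and the metadata block and title-prefixing are just for illustration):

    // ==UserScript==
    // @name     Show Last-Modified
    // @include  *
    // ==/UserScript==

    // Ask the server for the current page's headers and surface
    // the Last-Modified value in the window title, if present.
    var xhr = new XMLHttpRequest();
    xhr.open('HEAD', document.location.href, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            var modified = xhr.getResponseHeader('Last-Modified');
            if (modified) {
                document.title = '[' + modified + '] ' + document.title;
            }
        }
    };
    xhr.send(null);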


Yeah, but that might not be good enough. If the site switches web servers, restores from a backup, serves the data dynamically, etc., the dates might not be accurate. What I've always dreamed of is truly accurate dates for when the article was written and when it was consciously updated.


I'm fairly sure that that is exactly how those dates are meant to be used.

If people misbehave then that is not really the fault of the standard.

As a quick check, I just looked at a page from a well-known news service. Today is the 13th; here is the Last-Modified header from one of their articles:

Last-Modified: Fri, 11 Dec 2009 11:14:26 GMT

So it looks like those headers are actually being used the way they are intended.

Restoring from a backup should definitely preserve the file dates, and dynamic serving could still send the right last-modified date (as in the example above).

Dates not being accurate is more likely to happen when people can enter them manually!

The only thing remaining then would be a header that registers when the document was first created, but for that we have the META name='date' tag.


Last-Modified isn't intended to preserve the date of the content. It represents the modification time of the HTML document, which is used for caching. It may or may not -- probably not -- represent the date the article was modified. Trying to make it do so would overload the purpose of the Last-Modified header.


From http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html:

"14.29 Last-Modified

The Last-Modified entity-header field indicates the date and time at which the origin server believes the variant was last modified.

       Last-Modified  = "Last-Modified" ":" HTTP-date

An example of its use is

       Last-Modified: Tue, 15 Nov 1994 12:45:26 GMT

The exact meaning of this header field depends on the implementation of the origin server and the nature of the original resource. For files, it may be just the file system last-modified time. For entities with dynamically included parts, it may be the most recent of the set of last-modify times for its component parts. For database gateways, it may be the last-update time stamp of the record. For virtual objects, it may be the last time the internal state changed.

An origin server MUST NOT send a Last-Modified date which is later than the server's time of message origination. In such cases, where the resource's last modification would indicate some time in the future, the server MUST replace that date with the message origination date.

An origin server SHOULD obtain the Last-Modified value of the entity as close as possible to the time that it generates the Date value of its response. This allows a recipient to make an accurate assessment of the entity's modification time, especially if the entity changes near the time that the response is generated."

The word 'caching' is not mentioned in there at all; caching is handled by completely different headers. See the Cache-Control headers in that same document.


> "For entities with dynamically included parts, it may be the most recent of the set of last-modify times for its component parts."

The content is often considered a subcomponent of the whole page; other aspects of the HTML may change despite the content remaining the same.


That's a good point, but the 'essence' of the page is the text; the rest of it is just a container.

Caching headers could easily take care of all the stuff surrounding the essential part. If the essential part has multiple components then it would make sense to use the latest one for that.

If a link to an auxiliary page changes in the navigation that's a job for the cache control headers.

Either way the page would get reloaded, but at least you'd have a good idea of when the critical component of the page (its reason for existing in the first place) was updated.


I totally agree with everything you are saying. It's just that in practice it often doesn't work out. I can create a website that pulls its content out of a database, and the web server has no clue what the dates for the headers should be. I think the only real solution is for web developers to be more conscious of this and ensure their frameworks play nice. The upside is that the more conscientious websites are more likely to be the sites with quality information anyway.


If the database has no timestamp for the content, the web site shouldn't be using Last-Modified without a meaningful value. When your code doesn't know what's going on but still wants caching, that's what ETag is for.
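For what it's worth, here is a rough sketch of a handler that does play nice, assuming Node-style server-side JavaScript and a hypothetical loadArticle() that returns the row along with the timestamp the content was last changed:

    var http = require('http');

    http.createServer(function (req, res) {
        // loadArticle() is hypothetical: look up the row for this URL,
        // including its last-updated timestamp from the database.
        loadArticle(req.url, function (article) {
            var lastModified = article.updatedAt.toUTCString();

            // Answer conditional requests so clients can revalidate cheaply.
            // (A real implementation would parse and compare the dates
            // rather than comparing strings.)
            if (req.headers['if-modified-since'] === lastModified) {
                res.writeHead(304);
                res.end();
                return;
            }

            res.writeHead(200, {
                'Content-Type': 'text/html',
                'Last-Modified': lastModified
            });
            res.end(article.html);
        });
    }).listen(8080);

And if there is no timestamp to be had, an ETag computed from the body (a hash, say) at least gives you correct revalidation without lying about dates.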


> I'm not sure if Greasemonkey scripts have access to header information, but if they do, that might be one way to solve this for you.

http://userscripts.org/scripts/show/34231


This is actually one of my (few) pet peeves with the 37signals crowd. I remember reading somewhere (possibly in their Getting Real book) the reasoning behind design decisions such as not including the year on the dates displayed in their web applications. The gist, as I remember it, was "It doesn't matter."

It doesn't matter until it's 3-4 years later and I'm reading a post on Rails that may or may not actually still be relevant and trying to figure out what Rails version it pertains to. Sure the date doesn't matter so much when the content is new, but particularly when it comes to technical documentation, the date starts to matter more and more as the content ages.


It's still fairly accurate. On a cursory read-through (I read it once several years ago) I don't see anything that needs updating, except maybe the books section. Of course, I haven't read much dead-tree documentation for JavaScript, so I can't really comment on that aspect.

Since the article was written, a few handy things have been added that it doesn't cover: we've gone from JavaScript 1.4 to 1.8.1, gaining things like the array extras, iterators, and so forth. Not much else has changed, though - for instance, you can, and probably always will be able to, accidentally put a variable on the global object if you assign a value to it without declaring it.
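For anyone who hasn't been bitten by that yet, a tiny illustration (the function is made up):

    function tally(items) {
        // Forgetting "var" here silently creates (or clobbers) a global
        // named "total" instead of a local variable.
        total = 0;
        for (var i = 0; i < items.length; i++) {
            total += items[i];
        }
        return total;
    }

    tally([1, 2, 3]);
    window.total;   // 6 -- the "local" leaked onto the global object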


ES5 strict mode won't allow you to define an undeclared variable.
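Right - the same sort of sloppy assignment throws instead of leaking, assuming an ES5-capable engine:

    "use strict";

    function tally(items) {
        total = 0;   // ReferenceError in strict mode, instead of
                     // quietly creating a global
        for (var i = 0; i < items.length; i++) {
            total += items[i];
        }
        return total;
    }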


This may be out of place, and I have no doubt that Douglas Crockford is a genius and an all-around excellent human being. Also, I am pretty firmly in the substance over style camp of web design.

Having said that, I find it pretty hard to accept webpages (in 2009!) that have this for design:

    <body bgcolor=linen> 
    <h1 align=center><font size="+4">JavaScript:</font><br>
Thank god for Readability...

Updates: Yes, by Readability I mean http://lab.arc90.com/experiments/readability/ It's an outstanding little mini-app. If you don't know it, try it.

Second, I want to clarify that what I quoted is literally the entire style for the whole page. I find that mind-boggling. (And, yes, I thought of PG's site.)

Finally, the page was written in 2001, so my crack about 2009 isn't quite fair. Still, many people do update their sites.


Ha! I hit the Readability bookmarklet right after clicking on the link on HN and seeing Crockford's site. And I come back to find your comment.

Another example of a genius having a similar site is Peter Norvig - http://norvig.com/

Great content on both sites, and yes, thank god for Readability.


Copyright 2001 Douglas Crockford.


This is a fair point, but people do update their websites. Some people, some of the time. Still, I'll update my post to note that.


You should also take a look at the sources of paulgraham.com, norvig.com and tlb.org. They all use horrible markup and tables for layout.

The fact that they don't care what their html code looks like should tell us something.


I'm not quite sure what you mean by "it should tell us something." So, I apologize in advance if the following is a misunderstanding. (For the record, I don't much care about someone's HTML per se. I'm talking about presentation. I'm not being a web standards fanatic, if that's what you meant.)

If you mean that they are all substance and no frills and that that's a good thing (or something along those lines), I would argue that presentation matters. You don't have to follow the latest fashions, but Crockford's site makes it literally hard to read his (excellent) articles. There are no margins. The line spacing is painful. Applying Readability to that page makes the article not just prettier but more substantive (because I can draw the meaning from it more easily).

Giving literally zero thought to the visuals of a webpage is comparable to giving literally zero thought to the order of a presentation or paper. All writing is partly about communicating, and giving no thought to such things is not cool and intellectual. It's likely to be lazy or pretentious.

(One of Giles Bowkett's posts about this page makes an interesting comparison of the visual layout of Hacker News versus that of the Guardian. Found the post: http://gilesbowkett.blogspot.com/2009/04/miniapp-hacker-news.... The bit that struck me is this:

> If you are a geek, you are probably consuming information in a less sophisticated structure then your non-geek peers.

I think there's some truth to that, especially in this case.)


The line spacing is the default presented by your browser; if you find fault with the line spacing, you're finding fault with your browser preferences.

And margins are generally no good. Depending on how they're implemented, what margins do is perhaps make the page more attractive for those running their browser fullscreen, while screwing over everyone doing the appropriate thing, which is to keep the window at whatever size the user prefers for reading. This is because they are implemented either as a) a centered, fixed-width column that invokes horizontal scroll bars for those whose preferred size is smaller than the layout requires (usually no less than 850 pixels these days), or b) fixed margins, so that, e.g., a 640-pixel-wide viewport on a page with 200+ pixel margins is reduced to an effective column width of less than 250 pixels.

As for pretension, I agree, although it doesn't originate from the likes of Crockford, but instead those uncompromising fucks who find their "artistic vision" for a website to be of greater importance than their employers' content.


This site is perfectly readable. Would you care to let us know since when sites like these have been out of fashion on a discussion board with the epithet "Hacker" in its name?


Obviously this is a matter of taste (and I'm starting to pile up down votes, so I may very well be in the minority).

I can't give you a date, but consider, for example, the ongoing discussion in the Perl community about the look of PerlMonks, the new Perl blogs site, and the updates to the online Perldoc site and the main Perl site.

Again, I want to stress that I'm not talking about decorative fonts or elaborate color schemes. A webpage with no margins and very tight line spacing is not, in my mind, anywhere near perfectly readable.

As far as the hackers thing goes, I've rarely seen a hacker without a preference for a carefully-crafted PS1 (colors and all) or a good Vim/Emacs/TextMate theme. So it's not the case that hackers don't care at all about appearance in all circumstances.

Edit: Here's a good comparison site: http://diveintomark.org/ The layout is dead simple, but the margins, font choice and line spacing make a huge difference.


It's not a page devoid of margins. Pages that actually have no margins are difficult to read.

Although, maybe if you prefer narrow column widths, you should resize your browser window. And while you're at it, help convince those web developers that required 1024+ pixel widths on pages are counterproductive.


http://lab.arc90.com/experiments/readability/

Is that the script you're referring to?


Yes. If you haven't tried it, run and do it now. It's one of those "I can't believe nobody thought of this before" moments.


I noticed recently that it doesn't seem to combat the worst offenders: sites using width= attributes or invisible-table-based margins. I hit Readability, and the content is still squished to one side. Heck, I turn styles off and it's still squished to one side. This is what people mean when they say these old sites aren't "accessible": they can't be transformed into pure content to be input into something else (Readability, or a screen-reader, it makes little difference.)


Thanks to whoever added the 2001 date to my submission - I'll remember to do that next time.

The history of Javascript is a history of accidents :-)

Javascript is one language that I wish I knew better. I can code in Javascript, but I need references handy to help me remember how to do things. I tend to just remember the basics required to write CouchDB view functions and simple web-related tasks. However, nice tools like Rails AJAX helpers, couch-potato (which generates CouchDB views from Ruby) and others pretty much make it possible not to spend too much time dealing with Javascript.

Regardless of technical merit, Javascript is used on a wide scale, and I think its use will keep increasing.


I'll freely admit to being one who didn't "get it" with Javascript back then (and more recently than I'd like to admit).

Not sure I even get it now, but I'm at least aware enough to admit my lack of awareness about some (ok, most) things.


I really disliked working with JavaScript until Douglas Crockford told me it was Scheme with C syntax, and then, suddenly, everything made sense and I actually began to enjoy it.
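A small illustration of what clicks once you look at it that way - functions are first-class values and closures capture their environment, just dressed in C-style braces (the names here are made up):

    function makeCounter() {
        var count = 0;
        // The returned function closes over "count", Scheme-style.
        return function () {
            count += 1;
            return count;
        };
    }

    var next = makeCounter();
    next();   // 1
    next();   // 2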


I think the Modula series is the most misunderstood programming language. Perhaps not misunderstood, but the one with the most potential and the least use.

(Note: This opinion is probably naive or uneducated)


There are lots of candidates for that list:

- occam

- smalltalk

- lisp

- oberon

And that's just the tip of the iceberg of languages that seem to somehow have missed the adoption that they definitely deserved. And, conversely there are plenty of languages that did not 'deserve' the recognition they got.

It's almost like fashion: some stuff makes perfect sense but will never make it, while other stuff is pointless and becomes a smash hit.


"Undervalued" is quite different from "misunderstood". Most of the languages you both listed are not quite known even emong programmers; on the other hand, anyone who's evenr dabbled with Web production (and many that just browse) have at least heard of Javascript. Douglas' point is that many people think that they know what Javascript is all about, but they're wrong.



