Dynamic Languages Strike Back (steve-yegge.blogspot.com)
83 points by bdfh42 on May 12, 2008 | 34 comments



This is an awesome, awesome pitch for research into dynamic languages, both in real-world and academic settings. You can tell Steve Yegge's enthusiasm for the topic, 'cause it just gushes out, and it's pretty infectious. I don't even really like discussing programming languages, and I'm geeked right now about trace trees and type feedback compilation.


He mentions that "yesterday's dynamic languages had great performance and tools" - well, I never experienced any of the Xerox greatness myself (Lisp and Smalltalk machines), but in the 90s I was a user of many Smalltalk apps (before I was a developer) - and they were a horrible, horrible experience. I think the people building them got off on the cool development environment and forgot that there were end users that didn't care how they worked underneath. So I can't agree with that. I think it's rose-coloured history. A lot of those tools died because they didn't yield anything useful for end users.


Well, for one thing, there are a lot of production Smalltalk applications out there still. Some of them are quite good. There are also a lot of bad implementations out there, but that is true for any language. (Sturgeon's Law!)

The Smalltalk Refactoring browser is still very impressive, though it has now been surpassed by Eclipse. It predated Eclipse by some years, however. There are other tools that were developed under Smalltalk that have yet to be adopted by other programming communities. Also, the performance of the fastest Smalltalk VMs has been very good for many years now. Bad architecture will swamp even programs in C or assembly. I'd hazard a guess that you were dealing with that.

But you are correct that there are serious problems with many of the Smalltalk implementations. None of these problems had anything to do with the language, the tools, or the VMs. Rather, there was a legacy of "ivory tower" mentality. Instead of doing very necessary but unglamorous things like polishing the GUI framework (and eliminating race conditions), implementors preferred to do cool things like create new garbage collection options.

Smalltalk suffered because community input was neglected, because the same community fostered an elitist culture, because marketing went for a more closed "boutique" approach, and because implementations weren't polished to true professional standards. It would behoove Ruby, Python, and others to heed these lessons. (And I'd note that they've done many things right.)


Go ahead and show the new support for languages like JavaScript and Ruby in IDEs like NetBeans to Java bigots and see how far you get. Show them how autocompletion, basic checking, and tons of other conveniences are there and working (and it's open source, and there's almost certainly a way to get commercial support, and blah blah blah). They may still disregard it. No matter how sophisticated the tooling support is, no matter how much you take care of technical things, they're still afraid of anything that's different and are looking for any excuse they can to maintain the status quo.

He only briefly alluded to it (80% politics, 20% technology), but like so much else that is discussed in terms of tools and technologies, this is a people problem. It's depressing, really.


Depending on the company and the projects, maintaining the status quo may be the best decision you can make.

Right now we have 20 years' worth of legacy code in Ada. We have a legal obligation to maintain a large percentage of that code until 2075. Our demand for programmers in this location is such that we essentially run training schools to meet it, and we would have to do the same for any language.

What _possible_ advantage would there be in moving to a new language for new projects? Before the life cycle is up, any new toy would be old and rusty. We have to train programmers anyway. Supporting two languages would double the overhead. Our existing skill base is productive in Ada. Real Software Engineering is only 5% programming at most.

This is of course a bit of an extreme example - but it serves to illustrate the point that in the continuum there will be plenty of companies on the 'don't move' side of the equation, where it is far more profitable to stay with what they already know, at least for now.


Programmers love taking extreme edge cases and acting like that's the norm, don't they?


"they're still afraid of anything that's different and are looking for any excuse they can to maintain the status quo."

That's exactly what I want to hear! Let them keep their 600+ page C# spec and their 10+ year Java resume - that leaves more opportunity for people who are willing to "take the risk" and pick the best tools.


Well, the .NET people's biggest problem isn't fear of dynamic languages, it's fear of anything that isn't Microsoft.

Seriously, google "Alt.NET". They're people whose toolchain is not 100% Microsoft, and they're outcasts.


Sometimes it reminds me of Stockholm Syndrome.


One of the things that initially drew me to PG's essays is the attitude of "ok, we think our tech is really that much better, so we're going to put our money where our mouth is, and use it for our startup". Ultimately, showing is better than telling.


Programming languages are as much a social construct as they are a technological one!

Python and Ruby are doing tons of things right on the social/cultural side, but the technology was in many ways a big step backwards from Smalltalk and Lisp. The poor performance serves to reinforce the old dynamic language stereotypes.


The comments by Dan Weinreb, a real veteran, are a nice read.


It's really long but he responds almost point-for-point to everything in Yegge's essay.



> So type inference. You can do type inference. Except that it's lame, because it doesn't handle weird dynamic features like upconverting integers to Doubles when they overflow.

Pretty weak argument against type inference. Yegge has used OCaml, so I expected better arguments.

And JavaScript 2 is getting type annotations, so the trend is more likely in the other direction, toward annotations to improve performance.


I think he just brought up the "int overflow into Double" thing so he could pimp out the double-dispatch type inference a little bit later:

> "... and so all this stuff about having a type-tag saying this is an int – which might not actually be technically correct, if you're going to overflow into a Double, right? Or maybe you're using an int but what you're really using is a byte's worth of it, you know. The runtime can actually figure things out around bounds that are undecidable at compile time."

I got the distinct impression that he wasn't arguing against type inference, merely saying that the classic way to do it doesn't always work with dynamic languages, and that there are better (newer) ideas about how to infer types without resorting to type tags.
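
To make the type-tag part concrete, here's a minimal C sketch (names invented, not taken from any actual VM) of a tagged value whose addition falls back to a double when the int representation overflows - the "upconvert" case Yegge mentions:

    /* Hypothetical sketch of a type-tagged value: the runtime tags a slot as
       int or double and quietly promotes to double when an int addition
       would overflow. */
    #include <limits.h>
    #include <stdio.h>

    typedef enum { TAG_INT, TAG_DOUBLE } tag_t;

    typedef struct {
        tag_t tag;
        union { long i; double d; } as;
    } value_t;

    static double as_double(value_t v) {
        return v.tag == TAG_INT ? (double)v.as.i : v.as.d;
    }

    static int add_overflows(long a, long b) {
        /* portable signed-overflow check for a + b */
        return (b > 0 && a > LONG_MAX - b) || (b < 0 && a < LONG_MIN - b);
    }

    static value_t value_add(value_t a, value_t b) {
        value_t r;
        if (a.tag == TAG_INT && b.tag == TAG_INT &&
            !add_overflows(a.as.i, b.as.i)) {
            r.tag = TAG_INT;
            r.as.i = a.as.i + b.as.i;
        } else {
            /* overflow, or a double operand: fall back to a double result */
            r.tag = TAG_DOUBLE;
            r.as.d = as_double(a) + as_double(b);
        }
        return r;
    }

    int main(void) {
        value_t big = { TAG_INT, { .i = LONG_MAX } };
        value_t one = { TAG_INT, { .i = 1 } };
        value_t sum = value_add(big, one);
        printf("%s\n", sum.tag == TAG_DOUBLE ? "promoted to double" : "still an int");
        return 0;
    }

A runtime doing type feedback could then specialize the hot paths where it has only ever observed two int tags, instead of paying for the checks on every operation.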


I suppose you're right. Hopefully he will explain it more in a future blog post.


I suspect that Steve Yegge would agree with me that introducing type annotations into JavaScript is a mistake - a cure for the wrong problem, as his presentation strongly argued.


Actually, he explicitly listed it as necessary for making the language popular in his essay "The Next Big Language", which you can find at http://steve-yegge.blogspot.com/2007/02/next-big-language.ht...

"Rule #2" for what NBL requires is "Dynamic typing with optional static types."


Performance of C vs Smalltalk:

A friend of mine implemented some block cipher algorithms in Smalltalk a few years back. By working with the VM engineers so he could have a few things (like 32-bit registers with bit-shift and bit-rotate operations), he managed to come close to or BEAT RSA Data Security's reference implementations in C. In one case, the algorithm was 3% faster. (RSA's implementations used malloc/free in a naive way. Great generational GC in Smalltalk was like a custom buffer cache implemented for free.)
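
For the curious, the kind of primitive he means is something like a 32-bit rotate, which C cipher code gets essentially for free; a minimal sketch (illustrative name only):

    #include <stdint.h>

    /* 32-bit rotate-left, the workhorse operation of block-cipher
       reference code in C */
    static uint32_t rotl32(uint32_t x, unsigned n) {
        n &= 31;                                /* keep shift count in 0..31 */
        return (x << n) | (x >> ((32 - n) & 31));
    }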

A company I worked for once had two Smalltalk implementations. Some work was done to host one inside the runtime of the other. The hosted Smalltalk had a compiler written using Yacc & Lex and compiled as C. The host Smalltalk had one written entirely in Smalltalk. It was discovered that if you took the text notifications out of the all-Smalltalk compiler, it ran faster than the C Smalltalk compiler.


I haven't gotten this excited about a programming language discussion in ages. It's great to hear some honesty; there's still a lot of misinformation and stereotyping floating around.

It's not even limited to dynamic languages. I sometimes find myself defending static languages like Java. Java got a bad rap for being slow... but actually, it's a lot better these days than it used to be.

Get the facts people!


Hank Williams just published an article about scaling in languages vs. architectures that relates to Yegge's points about Java being faster than C++, RoR being faster than Struts, etc.

http://whydoeseverythingsuck.com/2008/05/blaine-cook-writes-...


I rather liked that he pointed out reasons behind why things are the way they are, and why they're (arguably) no longer true. I guess that's what they call perspective.


Hmm... he seems to be using "strong typing" and "static typing" interchangeably.

That's a bit like a lecture from a physicist confusing gravity and time (unless of course they are talking about inverted universes, maybe a black hole or something).


Great talk! I only wish he touched on Microsoft's DLR, which seems germane.


Languages aren't changing every 10 years?


I think he was getting at something about industry language preeminence not changing much every 10 years. C in the 80s, C++ in the 90s, Java in the 2000s, QBASIC in the 2010s. Not that the languages themselves take that long to change internally.

I threw that last one in just to make sure I was illustrating the point and not trying to be historically accurate or trying to predict the future.


Would be ironic if QBasic came back. That is what I wrote my first real program in. (I made that game where the snake eats the apples and grows longer. Only mine had cheat codes and teleporters and stuff!)


You know, he probably could have, you know, written that in like a couple of paragraphs so I actually would have finished reading it or something you know.


No he couldn't; he's a horrible writer and has no respect at all for his readers' time. He thinks every thought he has is interesting.


He was obviously just listening to his taped version and writing down whatever he said almost verbatim. He's obviously very busy, so just transcribing it is fair enough. You just have to read it like he's presenting it. It wasn't that long. And it's all in talk style, so it keeps changing direction and therefore doesn't get boring.


I stopped reading after this:

"C is really fast, because among other things, the compiler can inline function calls. It's gonna use some heuristics, so it doesn't get too much code bloat, but if it sees a function call, it can inline it: it patches it right in, ok, because it knows the address at link time."

This statement shows that Steve has no idea how the foundation of every modern OS works: separate compilation, calling conventions, machine code, shared objects (libraries), etc.

Once I compile a .c file that defines a function into object code, there is no practical way for the linker to "patch it right in" -- that would require actually decompiling the function to strip out the calling-convention code, as well as tying all the argument references in the function body to those in the calling code.

I noticed something similar a while ago in one of his posts about Lisp (I think it was about why Lisp is not an acceptable Lisp). There he made pretty strong statements, and when people who actually know and have used Lisp in the real world pointed out his numerous mistakes in the comments, Steve admitted that he doesn't actually know the subject matter very well.


I thought he was talking about functions defined within the same file or some such.

Like:

    int square(int x) {
        return x*x;
    }
    int sum_of_squares(int x, int y) {
        return square(x)+square(y);
    }
And then the compiler could just know to turn that into

    int sum_of_squares(int x, int y) {
        return x*x + y*y;
    }
(apologies if my syntax is invalid; I'm not actually a C programmer)


Steve is talking about "link-time inlining" (see the original quote), which is not supported by any mainstream C compiler that I know of.

Inlining in the same translation unit is possible though its application is too limited to explain why C is faster than dynamically-typed languages.
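
For what it's worth, the usual C idiom for getting inlining across files without any link-time magic (just a sketch of common practice, not something from the talk) is to define small functions as "static inline" in a header, so each translation unit that includes it gets its own inlinable copy:

    /* util.h -- hypothetical header. A static inline definition is pasted
       into every translation unit that includes it, so the compiler can
       inline the call at -O2 without any help from the linker. */
    #ifndef UTIL_H
    #define UTIL_H

    static inline int square(int x) {
        return x * x;
    }

    #endif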



