History of T (paulgraham.com)
126 points by mmphosis on Nov 22, 2013 | 42 comments



We should point out that Emacs Lisp now has lexical scope. It took about 30 years, but it's here.

http://stackoverflow.com/questions/7654848/what-are-the-new-...


Olin mentions the Markup language, which I extended so that I could write my thesis and generate both HTML and LaTeX. Here's the resulting web version: http://draves.org/cmu-research/diss/main.html

All the diagrams and charts were made by calling out to external programs like dot and gplot, with the source all held in one big text file, edited with Emacs, of course.

That was finished in 1997.


Heh, I found it's still available for download, with samples: http://draves.org/cmu-research/markup/markup.html. Bonus points for getting it to run.


"You could also program the -10 in a beautiful, roughly-C-level language from CMU, called Bliss. I see C and I remember Bliss, and I could weep."

I see C and I wonder if that's the best we can do for a systems programming language. Maybe C is just a local optimum that we've invested too much manpower in to ever switch to anything else?


> I see C and I wonder if that's the best we can do for a systems programming language. Maybe C is just a local optimum that we've invested too much manpower in to ever switch to anything else?

At this point the ecosystem around C is too big to justify the cost of switching for the domains it's used in. Any new contender is going to have to offer something new, without giving up any performance or control over the machine.

With Rust (disclaimer: I work on Rust) we're shooting for safety to give us that edge—having the compiler reliably prevent use-after-frees and buffer overflows instead of discovering them through 0-days, without the traditional approaches that sacrifice performance and control over the machine (GC), seems pretty compelling to me. Even if we do succeed in getting enough new projects to use a safe language to make a difference, though, C will still be immortal; once a language has that much staying power it'll be around forever.
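As a purely illustrative sketch of what "the compiler reliably prevents use-after-frees" means (written in present-day Rust syntax, not the 2013-era language), the snippet below is rejected at compile time rather than becoming a runtime bug:

    fn main() {
        let v = vec![1, 2, 3];
        let first = &v[0];     // borrow pointing into the vector's heap storage
        drop(v);               // would free that storage while `first` still points into it
        println!("{}", first); // error: cannot move out of `v` because it is borrowed
    }

The program never compiles, so the dangling read can never happen.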


Let me first say that I applaud Rust and the effort you're making in developing a new, safe systems-programming language. But I do have one concern.

I like this quote by an Intel engineer, which appears on the back of Java Concurrency in Practice: "For the past 30 years, computer performance has been driven by Moore's Law; from now on, it will be driven by Amdahl's Law."

I think that this is the biggest challenge facing software development in this age. Now, I know concurrency is core to Rust, and I know that Rust adopts message passing as the principal concurrency construct. Message passing is terrific, extremely useful, and as close as we've got to a model of programming multi-core that's simple and easy to grasp. But at the end of the day, we do need a way to develop efficient mutable, concurrent data structures. Languages that rely on message passing, like Erlang, usually pass this problem down to a database, but the problem still has to be solved.

You say that languages resorting to GC "sacrifice performance and control over the machine". But GC already provides better throughput than manual memory allocation in practically all circumstances, and suffers mostly from latency issues (pauses), though even those are being worked on with some good progress (like the work done by Azul on their JVM). The only major true sacrifice GC requires is extra RAM, which is becoming ever cheaper.

GC is currently the best way to develop efficient concurrent (shared) data structures. There are ways of writing those without a general-purpose GC (RCU and hazard pointers), but those either require kernel cooperation, and/or suffer from worse worst-case performance, and/or quickly transform into GC when generalized.

I know the plan is for Rust to have a GC as part of the libraries, but this problem (of developing efficient concurrent data structures) must be addressed. It's possible that hardware has become too complicated for us not to give up some control over to some runtime. Maybe we're at a stage where low-level full-control programming is incompatible with fully utilizing the hardware for best performance. It is possible that low-level systems programming languages will excel in resource constrained environments and have several other advantages, but sheer performance/scalability won't be one of them (at least on server-class hardware).


> But at the end of the day, we do need a way to develop efficient mutable, concurrent data structures. Languages that rely on message passing, like Erlang, usually pass this problem down to a database, but the problem still has to be solved.

Rust fully supports concurrent data structures with shared mutable state, and there are several in the libraries.
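As a rough illustration of what that looks like (using today's standard-library Arc and Mutex rather than the specific 2013-era libraries being referred to here), shared mutable state across threads can be sketched like this:

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Ownership of the shared structure is shared via Arc; mutation is
        // serialized by the Mutex; the compiler enforces that the data is
        // only ever touched through that lock.
        let shared = Arc::new(Mutex::new(Vec::<u64>::new()));

        let handles: Vec<_> = (0..4u64)
            .map(|i| {
                let shared = Arc::clone(&shared);
                thread::spawn(move || shared.lock().unwrap().push(i))
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        println!("{:?}", shared.lock().unwrap());
    }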

> But GC already provides better throughput than manual memory allocation in practically all circumstances

This is far too broad a statement. I can certainly come up with cases in which manual memory management will outperform GC. For example, if you have an arena-like pattern like the binary-trees benchmark, I think it's impossible to outperform manual memory management. Even if you bump-allocate in the nursery, you still have to copy to the tenured generation, reducing throughput relative to a plain bump allocator.
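A toy sketch of the arena pattern being described (an illustrative, made-up index-based `Arena` type, not the benchmark's actual code): nodes are bump-allocated into one Vec, and the whole tree is freed in a single deallocation when the Vec is dropped, with no per-node free, no tracing, and no copy to a tenured generation.

    struct Arena {
        nodes: Vec<Node>,
    }

    struct Node {
        left: Option<usize>,
        right: Option<usize>,
    }

    impl Arena {
        fn new() -> Self {
            Arena { nodes: Vec::new() }
        }

        // Build a complete binary tree of the given depth inside the arena.
        fn tree(&mut self, depth: u32) -> usize {
            let (left, right) = if depth > 0 {
                (Some(self.tree(depth - 1)), Some(self.tree(depth - 1)))
            } else {
                (None, None)
            };
            self.nodes.push(Node { left, right });
            self.nodes.len() - 1 // a "pointer" is just an index into the Vec
        }
    }

    fn main() {
        let mut arena = Arena::new();
        let root = arena.tree(10);
        let leaves = arena
            .nodes
            .iter()
            .filter(|n| n.left.is_none() && n.right.is_none())
            .count();
        println!("allocated {} nodes ({} leaves), root at index {}",
                 arena.nodes.len(), leaves, root);
    } // the entire tree is released here in one shot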

> and suffers mostly from latency issues (pauses), but even those are being worked on with some good progress (like the work done by Azul on their JVM).

Azul C4 generally requires a kernel extension to perform well, reducing its applicability in practice (desktop/mobile software). It also suffers from somewhat reduced throughput over the HotSpot garbage collector, according to the paper. This is not to bash Azul C4, of course—it's a really exciting piece of technology—but I feel that it's often held up as a solution to all of the problems of garbage collection when it, too, has tradeoffs.

> It's possible that hardware has become too complicated for us not to give up some control over to some runtime. Maybe we're at a stage where low-level full-control programming is incompatible with fully utilizing the hardware for best performance.

I don't see this being the case in practice quite yet. Java HotSpot, which features the best widely used GC, is routinely outperformed by low-level C++. At this point the burden of proving that garbage collection outperforms manual memory management in practice lies with the proponents of pervasive concurrent GC. It may well happen, but I don't think we're there yet.


> Rust fully supports concurrent data structures with shared mutable state, and there are several in the libraries.

Are they lock-free? If so, how do you do it without a GC?

> Java HotSpot, which features the best widely used GC, is routinely outperformed by low-level C++

This is true mostly in single-threaded computations. Also, as another commenter points out, Java's main problem is the lack of arrays of structs, which makes locality difficult. This is being worked on (and is almost completely orthogonal to the issue of GC), and will hopefully be at least partially resolved in Java 9[1].

My point is that while we need new languages now, we also need to prepare them for a many-core future. Once you have over 100 cores, many locking schemes stop scaling[2]. I'm not saying Rust specifically should think ahead, but I think a new systems programming language should, especially if its goal is to replace C for the next 40 years. Unless, that is, what I said turns out to be true, and low-level programming will give us good control over resources, but not the best performance; or if the many-core CPU future isn't coming (I say CPU because we're already in the many-core SIMD present with modern GPUs).

[1]: http://openjdk.java.net/jeps/169

[2]: http://www.infoq.com/news/2008/05/click_non_blocking


"But GC already provides better throughput than manual memory allocation in practically all circumstances"

You could drive a bus through the exceptions that "practically" lets through in that claim. In the kinds of problems I solve, the single biggest driver of better throughput is cache locality/branch prediction. Every time I go up a level in memory cache I lower my throughput.

There is nothing saying that GC-based solutions couldn't get to the point where they are better about cache locality than manual allocation, but they aren't there yet.


the "who's memory is this anyway" type checking thing in Rust is one of the most interesting advancements of the state of the art of low-level programming that I'm aware of in my lifetime.

... I sort of think we should just port it to C.


> ... I sort of think we should just port it to C.

It's been done, at least in academia: http://cyclone.thelanguage.org/

We owe a huge debt to Cyclone :)


I wasn't aware of BLISS: http://en.wikipedia.org/wiki/BLISS

It does seem to have some interesting features for a systems level language--for example, expressions over statements and references over pointers.


"Worse is Better" has a plausible explanation why C won out:

http://www.jwz.org/doc/worse-is-better.html

Curiously, both Google and Apple follow the "MIT/Stanford" school of design (though Google has lately been trending toward the "New Jersey" school). And it's generally been working out for them. Microsoft was always the New Jersey school, and it isn't exactly working for them now, although it did for many years.


Things not working out for MS has little to do with engineering and a lot to do with direction and product.


These are also product philosophies, relating to how much you are willing to make life difficult for yourself to keep the interface to your product simple.

One thing my VP (at Google) said has stuck with me: "It's okay if the code is gross as long as it's hidden behind an API. If the problem's complex, that complexity should be captured in the code rather than forcing the user to deal with it."


No, it just happened to be in the right place at the right time. Like JS.


It's not related to the discussion, but where can I get the rest of those articles? The links to next/previous appear to be broken.


JWZ reposted it off Dreamsongs.net, Richard Gabriel's personal site. I think these may be what you want:

http://dreamsongs.net/Essays.html


BTW, everyone should read Olin Shivers' dissertation acknowledgements:

    http://www.scsh.net/docu/html/man.html


The irony is that no one wishes to port it to "modern" AArch64 and x86_64, even though, with its proper modular design and a GC and compiler written in a high-level language (same as MIT Scheme), it at least seems like a manageable effort. The goal is that it could be comparable with Haskell in code quality.

A compact and clean native Scheme compiler (such as Gambit) could be a very nice tool. The problem, as usual, is funding and the ignorance of investors who know no other word but Java.


And here is the manual for the T language: http://web.archive.org/web/20060925104715/http://mumble.net/...


So much of this sounds interesting, so what happened to T? Is there any code for the T3 variant? I presume you need T1, T2, and T3 to bootstrap the former, but I would love to see some of this stuff!


> There was, for example, a snooty French paper that sort of dismissed Lamping as an "autodidact," before proceeding to build (with, let me be careful to note, proper credit given to John) on his work

Is there a reference to this paper available?


I think it might be this paper:

http://users.soe.ucsc.edu/~abadi/Papers/proofs.ps

The reference to Lamping being 'autodidactical' is at the top of the second column.


>"Girard is a logician and Lamping is an autodidactic engineer"

Ouch! I can't believe that went to print.


@pg: Your blog would be MUCH more readable if you made this change:

<font size="4" face="verdana" style="line-height: 2em">

Right now, a great sadness comes over me when I think about reading one of your articles :-)


Is ctrl+ really that hard to do?


Ctrl+ doesn't fix leading (spacing between lines).

Is it really that hard to make sure you understand a comment before posting a frivolous reply?



I assumed I might find this article in [Essays], but I couldn't find it except on the 8th page of [Index]. Does this article really only exist in the index?


It's on the page Lisp Articles: http://paulgraham.com/lispart.html

which is linked to from the page Lisp Links: http://paulgraham.com/lisplinks.html

which is linked to from the top level page Lisp http://paulgraham.com/lisp.html


Does anyone have any references for the Clark GC? I'm writing a toy lisp compiler and would find it interesting to read about it.


I think it might be An empirical study of list structure in Lisp [1], by Douglas W. Clark and C. Cordell Green. I haven't read it because it is behind the ACM paywall. I'm pretty sure that the Clark he mentions must be Douglas W. Clark, who has at least five citations in the Jones and Lins Garbage Collection book (1st ed), and is the only Clark listed (under his own name, at least) in the bibliography (of the 1st ed).

If you are looking for a simple GC algorithm that might be suitable for use in a toy lisp language, you might check out Simple Generational Garbage Collection and Fast Allocation by Andrew W. Appel [2]. I think it has many of the same characteristics and is also pretty simple. For a while I had thought that this must be the one that Shivers meant, and that he just misremembered the author.

Edit: The reason I think it is this particular paper by Clark and Green is a discussion of it on page 140 (section 6.8) of Jones and Lins (the page is visible on Amazon via "search inside"):

> Experiments with a recursive copying collector by Douglas Clark and Cordell Green produced a cdr-cell linearisation -- the property that a cell that points to another will be next to each other in Tospace after collection -- of over 98 percent [Clark and Green, 1977]. The incidence of off-page pointers was also low (between 2.7 and 8.4 percent).

[1] http://dl.acm.org/citation.cfm?id=359427

[2] http://www.cs.ucsb.edu/~ckrintz/racelab/gc/papers/appel88sim...


This isn't the Clark GC, but if you're writing a lisp compiler and want to just use a GC (and not Boehm), give http://www.ravenbrook.com/project/mps a look. It is pretty nice.


Why does this not have a date?


It took me a while to figure out Shivers wasn't talking about the conventional diss - the narrative lives and dies on the technical one-ups.


Fascinating read.

"[..] I had never been to California before, so I was discovering San Francisco, my favorite city in the US and second-favorite city in the world. [..]"

Out of curiosity, does somebody know what PG's first-favorite city in the world is?

"[..] It was also a massive validation of a thesis Steele had argued for his Master's, which was that CPS was a great intermediate representation for a compiler [..]"

What does CPS mean? Further down in the article:

"[..] Richard Kelsey took his front end, which was a very aggressive CPS-based optimiser, and extended it all the way down to the ground to produce a complete, second compiler, which he called "TC" for the "Transformational Compiler." His approach was simply to keep transforming the program from one simple, CPS, lambda language to an even simpler one, until the language was so simple it only had 16 variables... r1 through r15, at which time you could just kill the lambdas and call it assembler. [..]"

So I guess CPS means Continuation-Passing-Style? But then I wonder what PG means by "CPS was a great intermediate representation for a compiler"?

Even further down we can see PG's own opinion:

"[..] So the lineage of the CPS-as-compiler-IR thesis goes from Steele's Rabbit compiler through T's Orbit to SML/NJ. At which point Sabry & Felleisen at Rice published a series of very heavy-duty papers dumping on CPS as a representation and proposing an alternate called A-Normal Form. ANF has been the fashionable representation for about ten years now; CPS is out of favor. This thread then sort of jumps tracks over to the CMU ML community, where it picks up the important typed-intermediate-language track and heads to Cornell, and Yale, but I'm not going to follow that now. However, just to tell you where I am on this issue, I think the whole movement from CPS to ANF is a bad idea (though Sabry & Felleisen's technical observations and math are as rock solid as one would expect from people of their caliber). [..]

Fascinating stuff, though I do not know what IRs are used by modern compilers like LLVM, GHC, Java, and .NET.

"[..] Jim Philbin, like Kelsey, also went to NEC, where he built an operating system tuned for functional programming languages, STING (or perhaps it was spelled "STNG" -- in any event it was pronounced "sting"). He built it in T, of course, and one could see that it had its roots in his work on T3. (Implementing the runtime for a functional language, in some sense, requires you to implement a little virtual OS on top of the real, underlying OS.) [..]" - Does anybody know what happened to this OS? The only link that I found is http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.11.4...

The summary of this article then goes on to say "[..] Sting is currently a prototype implemented on an eight processor Silicon Graphics PowerSeries 480 running Unix. The next step is to integrate Sting into the micro-kernel of an operating system such as Mac [..]" - It would be interesting if this bore fruit in some new work on VMs for functional languages.


> Fascinating stuff, though I do not know what IRs are used by modern compilers like LLVM, GHC, Java, and .NET.

There are a couple of things going on with CPS as implemented in this style of compilers that make it particularly nice for writing optimization passes:

1) There are only function calls ("throws") and no returns. Though there's no theoretical reduction in the stuff you have to track (since what is a direct-style return is now a throw to the rest of the program starting at the return point), there's a huge reduction in the complexity of your optimizations because you're not tracking the extra thing. For some examples, you can look in the Manticore codebase where even simple optimizations like contraction and let-floating are implemented in both our early direct-style IR (BOM) and our CPS-style IR (CPS). The latter is gobs more readable, and there's no way at all I'd be willing to port most of the harder optimizations like reflow-informed higher-order inlining or useless variable elimination to BOM. (There's a minimal direct-style vs. CPS sketch after this list.)

2) IRs are cheap in languages like Lisp and ML. So you write optimizations as a tree-to-tree transformation (micro-optimization passes). This style makes it much easier to enforce invariants. If you look at the internals of most compilers written in C++, you'll see far fewer copies of the IR made and a whole bunch of staged state in the object that's only valid at certain points in the compilation process (e.g., symbols fully resolved only after phase 2b, but possibly invalid for a short time during optimization 3d unless you call function F3d_resolve_symbol...). Just CPS'ing the representation of a C++ compiler without also making it easy to write your optimization passes as efficient tree-to-tree transformations will not buy you much, IMO.
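To make point 1 concrete, here's a minimal direct-style vs. CPS sketch (a toy illustration in Rust, not Manticore's actual IR): instead of returning, a CPS function hands its result to an explicit continuation, so the return point becomes an ordinary value the compiler can see.

    // Direct style: the result comes back to the caller via an implicit return.
    fn add_direct(a: i32, b: i32) -> i32 {
        a + b
    }

    // CPS: instead of returning, "throw" the result to an explicit
    // continuation `k` that stands for the rest of the program.
    fn add_cps<R>(a: i32, b: i32, k: impl FnOnce(i32) -> R) -> R {
        k(a + b)
    }

    fn main() {
        let x = add_direct(1, 2);
        // The return point is now an ordinary argument that an optimizer
        // can inspect and rewrite like any other call site.
        add_cps(1, 2, |sum| println!("{} and {}", x, sum));
    }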
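And for point 2, a toy tree-to-tree micro-pass over a made-up two-constructor expression IR (again just an illustrative sketch, not any real compiler's pass): the pass consumes one tree and builds a fresh one, so invariants hold before and after, and passes compose by simple chaining.

    #[derive(Debug)]
    enum Expr {
        Const(i64),
        Add(Box<Expr>, Box<Expr>),
    }

    // Constant folding as a pure tree-to-tree transformation:
    // rebuild the tree, collapsing Add(Const, Const) as we go.
    fn fold(e: &Expr) -> Expr {
        match e {
            Expr::Const(n) => Expr::Const(*n),
            Expr::Add(l, r) => match (fold(l), fold(r)) {
                (Expr::Const(a), Expr::Const(b)) => Expr::Const(a + b),
                (l, r) => Expr::Add(Box::new(l), Box::new(r)),
            },
        }
    }

    fn main() {
        let e = Expr::Add(
            Box::new(Expr::Const(1)),
            Box::new(Expr::Add(Box::new(Expr::Const(2)), Box::new(Expr::Const(3)))),
        );
        println!("{:?}", fold(&e)); // prints Const(6)
    }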


PG didn't write this article, Olin Shivers did.


And Jonathan had some small amendments:

http://mumble.net/~jar/tproject/


Olin's favorite city is Paris, where he is right now, if I understand correctly.


CPS is short for continuation passing style (http://en.wikipedia.org/wiki/Continuation-passing_style).




