Chicken Scheme internals: the garbage collector (more-magic.net)
112 points by bhrgunatha on March 31, 2014 | 25 comments



Henry Baker's "Cheney on the M.T.A." paper (which is the basis for the stuff described here) is very nice. There's a collection of Baker's research papers here:

http://www.pipeline.com/~hbaker1/

which includes the "Cheney on the M.T.A." paper (http://www.pipeline.com/~hbaker1/CheneyMTA.html) and a bunch of other interesting things (mostly with a lispy flavour). A few specific examples:

http://www.pipeline.com/~hbaker1/Prag-Parse.html -- a nice technique (not invented by Baker) for simple parsing tasks, and a neat Lisp implementation.

http://www.pipeline.com/~hbaker1/LazyAlloc.html -- you might notice that the "M.T.A." paper is subtitled "CONS should not CONS its arguments, part II"; this is the original "CONS should not CONS its arguments" and is also about allocating things on the stack that you might not expect to be stack-allocatable.

http://www.pipeline.com/~hbaker1/Use1Var.html -- "use-once" variables (related to "linear logic", which is not Baker's creation), not a million miles from C++'s auto_ptr/unique_ptr but Baker was advocating this stuff as early as 1993.

And some other interesting things due to people other than Baker, such as a copy of the famous HAKMEM (http://www.pipeline.com/~hbaker1/hakmem/hakmem.html).
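To give a flavour of the M.T.A. trick itself, here's a minimal, hand-written C sketch (not Chicken's actual code; the names, the fixed stack budget, and the downward-growing-stack assumption are all mine): allocation happens on the C stack, calls never return, and when the stack is nearly full we longjmp back to a trampoline, which is where a real implementation would run Cheney's copying collector.

  #include <setjmp.h>
  #include <stdio.h>
  #include <alloca.h>

  static jmp_buf restart;            /* where we rewind to when the stack fills */
  static char *stack_base;           /* C stack position at the trampoline      */
  static long survived;              /* stands in for data evacuated by the GC  */

  #define STACK_BUDGET (64 * 1024)   /* pretend nursery size, in bytes          */

  typedef struct cell { long value; struct cell *next; } cell;

  static void count_up(long n, cell *acc) {
      char probe;
      if (stack_base - &probe > STACK_BUDGET) {   /* stack nearly exhausted?    */
          /* A real minor GC would copy the live cells off the stack here
             (Cheney's algorithm); we just record how far we got.          */
          survived = n;
          longjmp(restart, 1);                    /* rewind the whole stack     */
      }
      cell *c = alloca(sizeof *c);   /* "heap" allocation is stack allocation   */
      c->value = n;
      c->next  = acc;
      count_up(n + 1, c);            /* a Scheme tail call becomes a plain call */
  }

  int main(void) {
      char base;
      stack_base = &base;
      if (setjmp(restart) == 0)
          count_up(0, NULL);         /* runs until the stack budget is hit      */
      printf("stack filled after %ld conses\n", survived);
      return 0;
  }

The surprising part is that nothing is ever freed explicitly: the minor collection happens simply by resetting the stack pointer with longjmp.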


I was looking at Chicken Scheme the other day for building an Entity-Component-System 2D game engine, as I love Lisps, but alas, the SDL bindings were quite out of date, and I didn't want my first project to be writing bindings for a C library!

It's a very cool system, and this article is awesome. I find garbage collectors fascinating.



I was looking at Chicken Scheme just the other day too, for a financial app. Tried and utterly failed to load the bb and iup eggs to get a GUI going. And this was on vanilla Ubuntu. Had to go back to Racket.


What did you end up going with?


Any pointers on how the diagram for the stack was generated?


<!-- Created with Inkscape (http://www.inkscape.org/) -->


I came to the comments for exactly the same reason, they're excellent diagrams!


Wow. As someone with a background mostly in C/asm, it is rather shocking to see how much extra code compiling higher-level languages produces; summing the contents of a list (array or linked) shouldn't be more than half a dozen instructions in a loop... and then I see things like dynamic allocation. Seriously, wow.


This is not inherent in the process. What you're seeing is the confluence of several factors:

(1) Scheme is dynamically typed, not statically, so some operations need to operate on objects whose properties are not known at compile time and which need to be constructed/wrapped. The same does not necessarily hold for a statically typed language.

(2) C is not really designed to be the target language of a compilation process. In particular, there's no portable access to stack frames, so any mechanism that inspects the stack (in this case, for precise garbage collection) needs to recreate the necessary machinery. If you target a representation that is actually designed for this purpose (such as LLVM), a lot of these problems go away. Likewise, C does not have native support for continuations, tail calls, or a number of other mechanisms that more sophisticated backends do support.

(3) The compiler does not need to generate human readable code; code generation that targets C is generally not concerned with producing minimal output, since the C optimizer will strip away superfluous stuff, so the focus is more on keeping the backend simple.


> so some operations need to operate on objects whose properties are not known at compile time and which need to be constructed/wrapped

But in the given example, shouldn't the compiler be able to see, via a simple analysis, that this function is only being applied to lists of numbers?

> since the C optimizer will strip away superfluous stuff

That's a huge assumption; in practice I have not seen any optimisation that would take that Scheme-compiled C program and turn it into one a C programmer would write (if there was, I think it could certainly make Scheme a lot more popular), and I've been looking at compiler output for over two decades now. There has been improvement but not that much. There's also the question "why even generate 'superfluous stuff' if it's going to disappear anyway?"

To paraphrase an old saying, "with great flexibility comes great complexity."


> But in the given example, shouldn't the compiler be able to see, via a simple analysis, that this function is only being applied to lists of numbers?

No, that's the beauty and the curse of dynamic languages such as the Scheme discussed here. Of course, you could do special optimizations, such as inlining bindings that never escape their scope or generating specialized functions that operate on plain C structures rather than real Lisp data structures, but they add extra complexity and duplicated logic to the compiler. I'm sure those kinds of optimizations are out of the scope of this article.
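To make the run-time-checking point concrete, here's a toy sketch of what a dynamically typed runtime has to do when summing a list (the tag layout is invented for illustration and is not Chicken's actual representation):

  #include <stdio.h>
  #include <stdlib.h>

  /* Toy tagged representation -- not Chicken's actual layout. */
  typedef enum { T_FIXNUM, T_PAIR, T_NIL } tag;

  typedef struct obj {
      tag t;
      union {
          long fixnum;
          struct { struct obj *car, *cdr; } pair;
      } u;
  } obj;

  /* Summing a list: every element's type must be checked at run time,
     because nothing guarantees the list holds only numbers.            */
  static long sum(obj *lst) {
      long total = 0;
      while (lst->t == T_PAIR) {
          obj *head = lst->u.pair.car;
          if (head->t != T_FIXNUM) {
              fprintf(stderr, "sum: not a number\n");
              exit(1);
          }
          total += head->u.fixnum;
          lst = lst->u.pair.cdr;
      }
      return total;
  }

  static obj nil = { .t = T_NIL };

  static obj *cons_fixnum(long n, obj *rest) {
      obj *num  = malloc(sizeof *num);
      obj *cell = malloc(sizeof *cell);
      num->t  = T_FIXNUM; num->u.fixnum = n;
      cell->t = T_PAIR;   cell->u.pair.car = num; cell->u.pair.cdr = rest;
      return cell;
  }

  int main(void) {
      obj *lst = cons_fixnum(1, cons_fixnum(2, cons_fixnum(3, &nil)));
      printf("%ld\n", sum(lst));   /* prints 6 */
      return 0;
  }

A static type system, or a sufficiently clever whole-program analysis, lets the compiler drop those checks and emit the tight loop a C programmer would write; in general that information just isn't available at compile time.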

> in practice I have not seen any optimisation that would take that Scheme-compiled C program and turn it into one a C programmer would write

Obviously Scheme code is going to look different from C code; they're different languages. I think the point being made was that it's okay to emit C code that's a little verbose, doing things like defining extra variables, knowing that the C compiler will simplify much of it away.

> There's also the question "why even generate 'superfluous stuff' if it's going to disappear anyway?"

To keep the compiler's implementation simple and understandable. The more complex output it has to produce, the harder it is to read/write/debug.


> That's a huge assumption

I am not talking about optimizations that require understandings of the source language. But, for a simple example, a Pascal-ish language may have the statement

  x := (a+b)*(c+d)
compile into something like:

  _t1 = a + b;
  _t2 = c + d;
  _t3 = _t1 * _t2;
  x = _t3;
in order to simplify translation from an internal representation such as an AST. Putting such code back into a human-readable form makes the backend more complicated, while C compilers already know how to optimize away the extraneous variables.

Note that this is just one of many things that contribute to the verbosity of compiler output, not even necessarily a major one in this case.


But what if you want to reduce the list with something other than a sum? With this substrate (and closures) you can separate concerns, increase reuse, and reduce bugs.


In Scheme, tail-call optimization is part of the standard. Without this requirement the generated code would probably be much more similar to what you'd write by hand.

On top of this, the generated code is said to be portable C.
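To illustrate the tail-call point: Chicken gets proper tail calls via the Cheney-on-the-M.T.A. trick described in the article, but another common way to get them in portable C is a trampoline, where each "call" returns a description of the next call to a driver loop instead of growing the C stack. A minimal sketch (all names made up):

  #include <stdio.h>

  /* Each step returns a description of the next call instead of making it,
     so the C stack stays at a constant depth no matter how deep the
     "recursion" goes.                                                      */
  typedef struct thunk {
      struct thunk (*fn)(long n, long acc);
      long n, acc;
      int  done;
      long result;
  } thunk;

  static thunk done(long result) {
      thunk t = { 0 };
      t.done = 1;
      t.result = result;
      return t;
  }

  /* sum_to(n) = n + (n-1) + ... + 1, written as if it were tail recursive */
  static thunk sum_to(long n, long acc) {
      if (n == 0) return done(acc);
      thunk next = { sum_to, n - 1, acc + n, 0, 0 };
      return next;
  }

  static long trampoline(thunk t) {
      while (!t.done)
          t = t.fn(t.n, t.acc);     /* the driver loop makes every "tail call" */
      return t.result;
  }

  int main(void) {
      /* deep enough to overflow the C stack if written as real recursion */
      printf("%ld\n", trampoline(sum_to(10000000L, 0)));
      return 0;
  }

Either way, the generated C ends up structured quite differently from what you'd write by hand, which is part of why compiler output for Scheme looks so unlike hand-written C.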

I've skimmed through the papers mentioned in the article years ago, but gave up understanding them. This article is golden if I really want to understand what's going on in a real-world use of the techniques described in those papers.


I respect a person's taste being tuned to minimizing the number of lines of code, so that the asm output reads like a haiku. But I hope others respect another person's taste that happens to be tuned to appreciating some of the things that can be done in such a system. (After all, in 2014 the devices we have in front of us no longer make the case for minimal asm quite so compelling.)


If you are having trouble reading this because of the dark theme and you use Chrome, install StyleBot and apply this CSS: https://gist.github.com/nicroto/9885777 (I inverted the colors of the images, too).

Awesome read. Thanks.


Since chromium supports it, I often use printfriendly.com for this

http://www.printfriendly.com/print/?source=site&url=http%3A%...

You lose some worthy css (code syntax coloring) but otherwise it's pretty nice. (requires javascript)


Really good technical article. I'm surprised to see the CPS representation (apparently) used to generate code; my understanding is that it was usually reserved for the intermediate representation of some functional languages, in order to facilitate optimisation.
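For anyone curious what "generating code from CPS" can look like, here's a tiny hand-written C illustration of the shape (not Chicken's actual output; real generated continuations are closures that also carry their captured variables and go through the stack/GC machinery the article describes):

  #include <stdio.h>

  /* Direct style: the result comes back via return. */
  static int square_direct(int x) { return x * x; }

  /* CPS: the result is handed to an explicit continuation instead,
     so "what to do next" is always just another argument.          */
  typedef void (*cont)(int result);

  static void square_cps(int x, cont k) { k(x * x); }
  static void add1_cps(int x, cont k)   { k(x + 1); }

  static void print_result(int r) { printf("%d\n", r); }

  /* (add1 (square 5)): square 5, continue with add1, continue with print */
  static void then_add1(int squared) { add1_cps(squared, print_result); }

  int main(void) {
      printf("%d\n", square_direct(5) + 1);  /* direct style: prints 26 */
      square_cps(5, then_add1);              /* CPS: also prints 26     */
      return 0;
  }

Once everything is in this form, "returning" never happens, which is exactly what lets the M.T.A. trick treat the C stack as the allocation nursery.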


Very interesting read for any wannabe compiler implementer.


That link is broken.


archive.is snapshot in case the server goes down (again)

http://archive.is/7C5B4


CHICKEN, not to be confused with the CHICKEN CHICKEN CHICKEN programming language defined here: http://torso.me/chicken


Regrettably, that language is in fact also called Chicken (not CHICKEN CHICKEN CHICKEN). But Chicken Scheme has been around a lot longer.


I was thinking more of the academic paper, actually. It's extremely cite-able.



