
I cover how I debug the program in the video. I use two utilities, pp and px, which are identity functions that print their argument as a side effect. I'm happy that I'm working in a non-lazy language.
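
For the curious, here is a minimal sketch of what identity-print utilities like these might look like as dfns (the actual pp and px in the code base may differ):

  pp←{⎕←⍵ ⋄ ⍵}     ⍝ print the value to the session, return it unchanged
  px←{⎕←⍕⍵ ⋄ ⍵}    ⍝ same idea, printing the formatted character form

Because each returns its argument, you can drop them into the middle of any expression without changing its result.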

This compiler is poetically driven, but it is still a commercially funded and commercially marketed product. It's just that in the APL community, you can pursue both and have them be compatible with one another.

In particular, the desire to make the code as "disposable" as possible has led me to use a strictly combinatory, points-free style of programming in the core compiler. It makes it very easy to change things, because it's easy to see the whole picture and easier for me to delete any given piece of code. Many "fixes" in my code are single-character APL changes, or maybe two-character ones.
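
To make that concrete, here is a toy example of the kind of one-character fix I mean (not taken from the compiler itself): given a matrix, reducing along the wrong axis is fixed by swapping a single glyph.

  sums←+/     ⍝ row sums of a matrix (reduce along the last axis)
  sums←+⌿     ⍝ the one-character "fix": column sums (first axis)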

One reason for not being able to stick to the same core abstraction throughout your whole code is that you may have an inadequate core abstraction. Part of the design goal in the compiler, which I try to point out in the video, is to keep you in the same "context" or abstraction level throughout, so that making a change always fits the current compositional context. I do this by focusing on simplification as much as possible. I also keep the code as stupid as I can get away with.

This whole discussion got started because of the amount of change that has happened in this code base (hundreds of thousands of lines). You can read more about that in the other historical threads referenced by dang. I've rewritten the compiler at least five times.

This need to make lots of changes all the time and adapt means that I actually push more towards the points-free style, not away from it. By diving head first into it, I gain the benefits but avoid the disadvantages that appear in the "middle ground", where you are jumping back and forth between points-free and non-points-free code.
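
For illustration, here is the same small function in both styles (mean is just an example name, not anything from the compiler):

  meanX←{(+⌿⍵)÷≢⍵}   ⍝ explicit dfn: the argument ⍵ is named
  meanT←+⌿÷≢         ⍝ fully points-free: no names, only a fork

The core compiler stays on the meanT side throughout, rather than mixing the two.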

The complexity of this compiler is about the same as what you would normally encounter writing a core compiler for Scheme (not the syntax expander). The approach is perfectly suitable for procedural languages, or any others, though obviously you'd need to tweak things. One of the general strategies in the compiler is to eliminate as much irregularity as early as you can get away with. A lot depends on how neatly the underlying IR/assembly language you're targeting matches the semantic needs of the language you're compiling. For example, I don't demonstrate how to do tail call optimization here, but if you are familiar with a Kent Dybvig-style tail call optimization pass, such as those found in the Nanopass compiler examples, you could adapt those ideas to this data-parallel style.
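
As a rough illustration of what "data-parallel style" means here, a pass over an array-coded AST can be a single array operation. This is only a hedged sketch with invented node types, not the actual Co-dfns AST representation:

  t←'FEVN'           ⍝ node types: Function, Expression, Variable, Number
  p←1 1 2 2          ⍝ parent index of each node (root points to itself)
  t←('S'@{⍵='V'})t   ⍝ a whole "pass": relabel every Variable as a Slot

Each pass rewrites these flat vectors in bulk, instead of recursively walking a tree of records.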

Basically, if you already know how to write a Nanopass compiler, the transition is easier. If you are used to writing monolithic compilers such as the clang front end or PCC, it will be harder, since you have to absorb the Nanopass style of thinking.

As for how usable the compiler is, that depends on what you mean. If you mean "is the compiler usable right now as a product," then it covers most of the bases for functional array programming, but if you look at the LIMITATIONS file, you'll note a few things it's missing. Ironically, missing features like if statements and recursion are important, but much less so to a dfns programmer than having the right primitives already available. As a dfns programmer myself, I almost never use recursion, for instance, though it's good to have when you want it.
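
As an aside, a toy illustration of why recursion comes up so rarely in dfns (factorial is just a stand-in example, not from the compiler):

  factR←{⍵≤1:1 ⋄ ⍵×∇ ⍵-1}   ⍝ recursive: ∇ is the dfn calling itself
  factA←{×/⍳⍵}              ⍝ array style: product over 1..⍵, no recursion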

The next few releases of the compiler are focusing on eliminating some of these last issues of usability, but for the most part the compiler is already completely functional.

As for whether a dfns programmer could readily get working on the code base, the answer is yes and no. They would have no problem reading the code, as it's dead simple to a dfns programmer, and the overall structure would also be easy to see. For them, the trick would be understanding the "big picture" of how a compiler works, since most dfns programmers are not also compiler writers. Understanding the Nanopass strategy of compilation, then understanding why a given pass is there in the compiler, and mapping a given set of array operations (which they would easily understand) onto the domain of "compiler operations" would be the struggle. For a compiler writer it's the reverse: the compiler writer would struggle to understand how compiler operations map to array operations, while the array programmer would struggle to perceive the larger meaning of the array operations in terms of traditional compiler concepts. Printing out the AST throughout the whole compiler (easy to do by editing tt) goes a long way towards building that intuition.
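
In spirit, that kind of instrumentation is just composing the identity-print function between passes. A hedged sketch with made-up pass names (the real pipeline in tt differs):

  compile←gencode∘pp∘liftfns∘pp∘parse   ⍝ prints the AST after parse and after liftfns

Since pp is an identity, the pipeline's behavior is otherwise unchanged.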

Once the core idioms used in the compiler are understood, the rest is pretty straightforward.




I wish that your posts could be as concise as your code and your code as expressive as your posts... that would open up a universe for me, and probably for a lot of people.


Personally, I wouldn't want to edit my posts on a regular basis. I consider them too verbose, but I struggle to condense them.


I'm not sure what that would look like.



