A deep dive into APL (curtisautery.appspot.com)
168 points by sndean on March 28, 2017 | 87 comments



I am blown away to see how frequently APL seems to come up on HN these days, a language I used professionally for over ten years.

As much as I love it, I have to say one of the issues with APL is that it was way ahead of its time. Because of that it struggled to run on computers of that era.

I was introduced to the language around 1982. Like I said, I used it extensively, attended and presented at APL conferences and even have a picture of my younger and dorkier self with Ken Iverson (creator of APL).

This struggle to run on hardware that could not handle the huge expansions and contractions of memory use inherent in the way APL molds multidimensional data sets like putty meant it could not compete where other languages had no issues at all. On PCs you were limited to 640K of RAM. It would be years before you could have enough memory to almost not care about utilization.

As the language didn't really catch the attention of the CS masses, it failed to evolve as it could have over the years. And that is the reason I would suggest that today it is not much more than a curiosity.

I'm sure there are some out there using it professionally. If I were to guess I'd say it must be mostly companies with a significant installed base of APL software they don't dare try to rewrite.

I firmly believe that the future of computing has to be in an APL-like language. By this I mean that we need to evolve to a notation for computing, rather than text for computing. Iverson himself wrote an excellent paper titled "Notation as a Tool of Thought".

The issue is that in order to evolve APL one has to have a deep understanding of APL. And, frankly, there are but a few of us around who have that understanding. Not sure how many of this group would embark on the mission to create a true next-generation APL worthy of its lineage.

I read and I smile every time I see the language come up on HN. I taught my kids some APL and they are always blown away by it. Not sure where to go from here.


> Not sure where to go from here.

For the past few years I've been telling people about the APL family of languages. The clincher for me was seeing the video of an APL version of Conway's Life mentioned in the submission. The impressive part wasn't its brevity, it was how they approached the problem.

In APL you have a 'rotate' operator that shifts data in a particular direction. If you have a vector and you rotate it once, every element moves to the next position and the last element then becomes the first.

The nice thing about APL is that most operations work on data regardless of its dimensionality. So, to do Conway's Life, you take your 2D matrix of cells and produce rotations of it in eight directions (N,NE,E,SE,S,SW,W,NW). You then take those rotated versions of the matrix along with the original and conceptually stack them. Then, for each grid point, you sum downward, producing a new matrix that contains the neighborhood count of the original matrix. From that you can create the next Life generation.
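Not APL, but here is a rough sketch of the same rotate-and-stack idea in Python/NumPy, in case the description is hard to picture. np.roll stands in for APL's rotate, and the function and variable names (life_step, glider) are just made up for illustration:

    import numpy as np

    def life_step(grid):
        # One Game of Life generation; grid is a 2D array of 0s and 1s.
        # Eight rotations of the board, one per compass direction (N, NE, E, ...).
        shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
        stacked = np.stack([np.roll(grid, (dy, dx), axis=(0, 1)) for dy, dx in shifts])
        # Summing down the stack gives each cell's neighbour count.
        neighbours = stacked.sum(axis=0)
        # A cell lives next generation if it has 3 neighbours,
        # or 2 neighbours and is already alive.
        return ((neighbours == 3) | ((neighbours == 2) & (grid == 1))).astype(int)

    glider = np.zeros((6, 6), dtype=int)
    glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
    print(life_step(glider))

Because np.roll wraps around, the board is treated as a torus, which is exactly what the APL rotations give you.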

This sort of problem doesn't come up every day, but the thing that I think is profound is that the existence of these operations allows us to think about problems in different, possibly simpler ways. They are untapped potential and they could be as well known as map and fold.

APL and its derived languages are hard to approach but there isn't much that keeps us from importing the data structures and operations in more approachable languages.


Hey Michael, been a fan since reading 'Legacy Code'.

Do you believe that refactoring a large APL codebase would be inherently easier than something like, say, a large legacy C++ codebase?

What is the relative complexity of comparing, say, 1000 lines of APL to 10K lines of C++?

I remember reading an interview between Arthur Whitney and Bryan Cantrill where they mentioned something about recognizing idioms in the dense code much more easily in K than in something like C because it took so many fewer characters. Do you see any sort of refactoring advantage to that?

I've also recently seen work on the Co-dfns compiler in APL where the author mentioned that APL allowed him to refactor easily compared to a traditional codebase, and looking at the GitHub repository, there is something like 3 million edited lines over the span of several years in a codebase that is a few thousand lines of code. I find this fascinating because my hope is to eventually start a small software business, and I think reducing complexity is the number one priority in order to make it sustainable in the small (the goal isn't to have a team of 50 developers).

Thoughts?


Nick, I think that the primary win is referential transparency. APL idioms raise the level on that base. The tradeoff, though, is the same as we have for functional but further along the road: if you develop a codebase that doesn't use common idioms it's harder to hire people to work in it.

I think array languages, or at least their operation set and idioms, will move into the mainstream in the same way that functional programming is, but it will take time. Sustainability for your business would be more an issue of hiring and retaining talent.

Reach out if you want to talk about this more: @mfeathers


Exactly. Hence the idea of notation as a tool for thought.

APL makes you see and envision computational problem solving differently than, say, C. Use it enough and your brain starts thinking in patterns, shapes, vectors and matrices being "zippered" together or apart, contorted and distorted.

Take a matrix of values and hit it with another matrix of 1's and 0's to select elements and then flip it, slice it, convert it to a vector and combine it with another vector to form a new matrix, then scale every element and apply a polyphase FIR filter across the rows using another matrix for the coefficients to then arrive at a vector containing your answer. This is made-up, of course, but that's how you start thinking, you start seeing parallel computing in your head and you can reach for it instantly with a notation that allows your hands to describe what you just imagined with a few well-selected symbols. No other language sends you on these flights.

It is truly seeing and approaching computational problems from a very different perspective. And the notation makes it comparable to composing music rather than going through the ugly mechanics of text-based programming languages.

I find it hard to explain to those who have never experienced this revelation of sorts. Watching a few videos and playing around with APL isn't enough. If I had to guess I'd say someone would have to work with it on a daily basis for about a year to achieve that mental shift.

Again, no different from musical notation. It is impossible to explain or experience how you see music once your brain can look at a page full of funny symbols and your mind starts to hear the music. The relationship with the piano, guitar or other instrument changes once that mental connection requires no thought at all.

Getting there with APL takes time. For example, you need to be able to sight-read idioms like the following, much as a musician learns to read a two-handed chord and instantly place their fingers exactly as required on the piano, without thought.

http://docs.dyalog.com/14.0/Dyalog%20APL%20Idioms.pdf

or a large enough subset of these:

http://aplwiki.com/FinnAplIdiomLibrary

Back then, a lot of us used to walk around with the small pocket-sized FinnAPL idiom book and had the full FinnAPL idiom library on our desks.


Your comparison to musical notation is insightful. I think that I can expand upon it a bit. I can read both western notation and Byzantine notation. Byzantine notation is very different from western notation: it describes a melody using symbols indicating how many steps to take from the current pitch in a given scale. There are also rhythmic symbols added to the notes that generally require reading ahead of what you are currently singing.

For people interested in Byzantine music, the notation can seem imposing at first -- especially to people who already know western notation. Much effort has gone into translating Byzantine music into western notation, but some problems arise. First, some Byzantine scales use very different pitches compared to a western equal tempered scale. Second, the rhythms in Byzantine music can involve syncopation or other rhythms that can be difficult to notate using western rhythmic symbols. Finally, Byzantine notation is optimized for depicting melody, while western notation is focused on harmony. There is simply more unnecessary visual noise in writing Byzantine music in western notation.

That last point resonates with your talk of idioms. While Byzantine music may seem to have many neumes for a given musical phrase, there are many idiomatic combinations of pitch-changing and rhythmic neumes together. As such, what would be read in Western notation with many individual notes and rhythmic markings, becomes in Byzantine notation a single "word".

If you're interested in seeing how these systems of notation compare, look at the following hymn: http://www.cappellaromana.org/wp-content/uploads/2014/04/Che... The neumes are in red at the top, with a simple western interpretation on the first western music staff and an ornamented interpretation on the bottom staff. Comparing the neumes to ornamented western interpretation (which represents the full Byzantine ethos, or style), we can see that significantly more information is contained in the sparser Byzantine neumes.

Anyway, you've convinced me to learn APL because I now see how similar the arguments for this notation are to my own arguments for using Byzantine notation. Thank you for your insight.


This is very interesting. I sight read western notation but never learned anything else. To be honest, I didn't even know Byzantine notation existed. Time to go learn something new. Thanks!


I would really love to read an expanded version of this explanation, as it's not clear to me why or how stacking or summing rotated matrices would work for this problem.


Take a look at this explanation on the APL wiki: http://aplwiki.com/GameOfLife


This is really interesting! What are your thoughts on the J language as an evolution of APL?


J and other variants were bad ideas and continue to be bad ideas. They abandon one of the more powerful aspects of APL: Notation.

Here's a paper by Ken Iverson about the power of APL notation:

http://www.eecg.toronto.edu/~jzhu/csc326/readings/iverson.pd...

Why does J exist?

Back in the mid '80s (I don't remember exactly) dealing with APL on IBM PCs was not easy. You really had to want to run APL.

For example, we had to hack the equivalent of today's graphics card to replace the character ROM with one programmed to display APL characters. The more advanced modification allowed you to throw a mechanical switch and go between APL and non-APL characters.

Printing APL characters required using very specific printers and changing the print wheel or ball (IBM printers) with versions having APL characters.

Again, you really had to want to run APL to endure this. And, that, of course, affected adoption. You couldn't run APL on any random IBM PC or clone on a desk.

So, Iverson, despite identifying notation as one of the more powerful features of APL, ends up "going commercial" by transliterating APL symbols into combinations of ASCII characters. Now you could run something that behaved like APL on any computer. And, of course, it was a complete abomination. Terrible thing.

I think I can say the APL community rejected J almost universally. Doing this was a terrible idea. And a short-sighted one at that. It wasn't long until all computers had the ability to display expanded character sets and eventually full graphical canvases where anything was possible.

J was a commercial reaction to a hardware problem that evaporated very quickly. J was not a commercial success for reasons obvious to those of us who used APL professionally every day for years: It was a hot mess and almost diametrically opposed to what made APL so incredible to use in solving problems through computing.

I would stay far away from J. As brilliant as Iverson had to be to create APL when he did, J, as far as I am concerned, was an almost unforgivable mistake.


You mentioned in another comment that if APL is to go anywhere, that it needs to be open sourced. Unfortunately, the cost of Dyalog and the fact that it is closed source are the main reasons why I've ended up using J.

I appreciate that Dyalog is free for non-commercial use, but I think that part of what makes a programming language successful is if people can start using it in small ways to make their work better. For instance, at my previous job we started using Go for small things and it eventually took on a larger role once we saw how well it worked out in practice. Had we needed to pay to use Go for those small things, we never would have even tried it in the first place. I realize that the Dyalog company needs to actually make money somehow, but I wonder if they are aware of this problem. I did email Dyalog to see if using Dyalog for my work at a non-profit constituted commercial use, but they said that it did. :(

Edit: The reason why I haven't used GNU APL is because it seems like getting it to work under Windows will be difficult, and my work is unfortunately a Windows environment. Maybe I should give it a shot anyway.


Try NARS2000. It's APL with sensible extensions written by someone who knows APL very well.

http://www.nars2000.org/


You ever try the modern Dyalog offering with .NET, SQL, and R interop? Pretty cool stuff although I don't really use the language. There is a guy here writing a compiler for Dyalog APL that runs on a GPU, but is only a few pages of APL. If that isn't the future, I don't know what is.


No, I haven't sorry. I agree with the other comment in that APL, if it is to go anywhere, needs to be open sourced. I'll go beyond that and say that what we need is a new iteration of APL that addresses all the issues with a language that is now at least 30 years away from its most active and innovative period. That's a tall order. I can see getting into something like that as a retirement project. Way too much other good stuff to do now.


That'd be a nice project. It's not APL, but I wanted to add that Perl6 has some APL features such as letting you define custom operators using any Unicode symbol you like, so you could technically write an APL language on top really easily. I wonder if more APL features will pop up in other languages in the future.


I also think it's the future; the problem is that I don't believe Dyalog APL is the future, even though it's really nice. It should just be open source to succeed, imho. Why still be so greedy and charge money for a language? Also the IDE is really weird. I love APL and J, but I think there is a lot of space for improvement. And I want that improvement to happen in the open. They can still charge for consulting, hosting, whatnot.


I'm curious if it would be so nice as a non-commercial project. I suppose some companies would still pay for support and development.


You mean a funding model similar to FreeBSD, right? With the right partnerships this could indeed work very well. At a time when even Microsoft open-sources their compilers, Dyalog keeping theirs from the ~70s closed is mind-bending to me.

And I agree, I would pay, even if it's not that much. The commercial license comes with too steep a price tag. I'd rather open-source my code or contribute where I can.


Lisp was also far ahead of its time, but many Lisp features like garbage collection and lambdas have made it in to mainstream programming languages. Has anything similar happened with APL?


An article[1] claims Wes McKinney was inspired by APL while working on pandas; some of the pandas verbs such as "ravel" come from APL.

Also, the <- instead of = in R comes from APL, although that's relatively minor. :)

[1] https://scottlocklin.wordpress.com/2013/07/28/ruins-of-forgo...


Interestingly enough, I was working with APL, Lisp, Forth and C pretty much simultaneously during my ten-year-or-so stint using APL professionally.

This was an amazing combination of languages to have in my toolbox simultaneously. The mental calisthenics alone were truly enjoyable to me.


I just think of Lisp as its own world. They had to build Lisp machines to run it well in years past. Today we still see growth in Lisp, and seeing how Racket is developing, along with Clojure, shows that Lisp has room to continue to innovate.


Sorry, everyone, looks like choosing to host my site on appspot was incongruous with high traffic, since I only get 1 gig of outgoing bandwidth per day.

Here's the cache: https://webcache.googleusercontent.com/search?q=cache:K-e7Jh...



"APL, and its successor J [...] provide a notational interface to an interesting model of computation: loop-free, recursion-free array processing." How is APL loop-free, exactly?

Later they say: "Under this implicit lifting, the iteration space is the argument frame rather than a sequence of loop indices." So if I understand correctly, we have iteration, but no loop. But that doesn't seem like a really important distinction...what am I missing?


The key feature is abstract iteration, like functional maps, filters and folds, and implicit iteration, where operations "penetrate" to the items of a vector or matrix automatically, rather than explicit iteration like a "for" loop.

Abstract iteration is useful because it results in programs with fewer "moving parts"- no loop induction variables to misplace or mutate in the middle of a loop body. Programs are necessarily expressed in a more rigid manner and some irregular algorithms can be difficult to express.

Summing a vector with an explicit loop in K (very non-idiomatic!):

    r:0;i:0; do[#v;r+:v@i;i+:1]; r
The equivalent using the "over" adverb:

    +/v
Both examples perform the same calculation. The latter is more concise and easier to reason about.


Stuff like +/v is easier to reason about, but it's also handled by special code which runs a lot faster. At some point I need to write a blog post on all the horrible things that happen inside your computer (cache misses, memory being swapped in and out, stacks popping) when you do an interpreted, explicit loop; bytecode, AST or whatever. There are R&D interpreters which claim to remove this overhead for trivial for loops, which are the main kind that end up getting used in numeric code, but none of them ever seem to make it into production (I'm sure someone will correct me if I am wrong; I am pretty sure Art wasn't doing this in K4, though he was probably best positioned to do so).

The real reason we like +/v, besides less typing, is that it can be handled with special code which runs close to the theoretical machine speed. There are lots of small places and languages in which this fact can be exploited. R and Matlab basically have a subset of operations you can do +/v type things with. APL is the main class of languages where this sort of thing is built into the semantics of the language. If you're dealing with numerics in an interpreted language, it should be built into the semantics of your language, and that's how you should do things. Really it should be in compiled languages too, and that's how people should reason about code, but it's probably asking too much since APL has only been around since the 70s...
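To see the gap concretely, here is a quick Python/NumPy sketch of the same point (not K, and the exact numbers will depend on your machine):

    import time
    import numpy as np

    v = np.random.rand(1_000_000)

    # Explicit interpreted loop: every iteration pays interpreter overhead.
    t0 = time.perf_counter()
    total = 0.0
    for x in v:
        total += x
    loop_time = time.perf_counter() - t0

    # Bulk operation, the moral equivalent of +/v: one call into optimized native code.
    t0 = time.perf_counter()
    total_vec = v.sum()
    vec_time = time.perf_counter() - t0

    print(f"loop: {loop_time:.4f}s  vectorized: {vec_time:.4f}s  (~{loop_time / vec_time:.0f}x)")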


> handled by special code which runs a lot faster

This is just another way of saying "we don't have a compiler, so don't process data at the element level if you want speed".

It is not an advantage of the language, but a disadvantage.

If you have a compiler, then the claim only holds for aggressive, machine-specific optimizations. This is articulated by statements like "the library function is marginally faster than an open-coded loop, because it uses some inline assembly that takes advantage of vectorized instructions on CPU X".

If I write an FFT routine in C myself, it will not lose that badly to some BLAS/LAPACK routine.


Forcing your compiler to figure out you're doing something trivial on a rank-n array is silly. So is writing all the overhead and logic (where a typo can break things) which goes into a for or while loop instead of two characters: +/

I encourage you to try writing an FFT routine in C yourself and compare it to FFTW, where they basically wrote a compiler for doing FFTs. It's also worth doing in an interpreted language in an array-wise fashion versus with a for-loop. You should get something like a factor of 100,000 speed up.


> Forcing your compiler to figure out you're doing something trivial on a rank-n array is silly.

What is the alternative, if there is no canned procedure for it?

The procedure has to be written somewhere, somehow, in some language.

If compilers are silly, assembly, I guess?


The analysis to recognize whether a "canned procedure" is applicable is nontrivial, to put it lightly.


I see. Good answer.


RodgerTheGreat has given a great answer; I'd like to add that implicit iteration also lends itself to automatic GPUization, automated SIMD and multithreaded parallelization; the semantics of the "rank/depth" penetration come with no a priori guarantee about order of evaluation (and it shouldn't matter anyway unless you are extremely naughty with side effects, which the languages greatly discourage).

Compare that to vectorizing/parallelizing an explicit loop, about which there have been tens if not hundreds of PhD dissertations, and it is still not a solved problem.

This is why "for x in ..." in Python/C#, and even the "map" variants, are still inferior to APL/J/K: the facts that iteration order is predefined, and that the iteration itself my terminate prematurely, make it extremely hard for the compiler to optimize, whereas in most cases, neither consideration matters.


I think "loop-free" just means the language encourages you to think without loops, even if the implementation may or may not use traditional loops under the hood.

This is basically common to all languages with a strong emphasis on functional programming. Instead of looping, you perform operations directly on the arrays/matrices, and in fact APL and J are focused on matrix manipulation.


> This is basically common to all languages with a strong emphasis on functional programming

Also modern Fortran falls into this category. (I mention it because it's far from a functional language.)


That makes sense. Thanks.


Looks like the site is hugged to death :-(

My only experience with APL-like languages was playing with K for a few months. It's amazing, no other language can do so much in a few characters. Here's a list of hundreds of snippets: http://code.kx.com/wiki/Qidioms

To give a taste of K style, I'll try to explain one of those snippets here. The problem statement is to merge three arrays x, y, z under control of another array g. Don't worry, it will make sense in a moment. These are the definitions of x, y, z and g, followed by one line of code solving the problem, and the result:

    x:"abcd"
    y:"123456789"
    z:"zz"
    g:"101121211010101"
    (x,y,z)[<<g]
      "1a23z4z56b7c8d9"
First of all, (x,y,z) is just the concatenation of three arrays, and [] is the familiar array indexing operator. The twist is that [] can also accept an array of indices, so e.g. "ab"[0 1 1 0] returns "abba".

But the really clever bit is <<g. A single invocation of < returns the sorting permutation of an array, i.e. an array of indices x such that g[x] is sorted. After two invocations of < you get the sorting permutation of the sorting permutation. In other words, the inverse of the sorting permutation. In other words, the permutation that turns sorted g into regular g. In other words, exactly the permutation that you need to mesh x, y and z together!

By any reasonable programmer's standard, that's way too much cleverness. But if you want a lot of functionality in a few characters, that's the price to pay, and K programmers seem happy to pay it. It's also really fast, because you're combining optimized bulk operations, and dropping down to individual elements only in special cases.
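For anyone who doesn't read K, here is roughly the same trick sketched in Python/NumPy; argsort plays the role of <, so argsort of argsort is the <<g above (the variable names are mine):

    import numpy as np

    x, y, z = "abcd", "123456789", "zz"
    g = "101121211010101"                        # '0' -> take from x, '1' -> from y, '2' -> from z

    pool = np.array(list(x + y + z))             # (x,y,z) concatenated
    order = np.argsort(list(g), kind="stable")   # < g : the sorting permutation
    ordinal = np.argsort(order, kind="stable")   # << g: its inverse
    print("".join(pool[ordinal]))                # -> 1a23z4z56b7c8d9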


What do you gain from using the << syntax over a verbose but reasonably clear function name like "sorted-permutation"?

Clarity in function names is one of the main reasons I prefer Lisp and Scheme over languages like APL, K, and Haskell, which seem to favor the use of something like a mathematical notation, which I've always found obfuscating. This obfuscation is made worse by the aversion to comments that I've seen in some of these languages, while in Lisp and Scheme, the verbose and explicit function names are in a way self-documenting.

For someone who prefers a clear and explicit programming style, the terseness of some programming languages is a real turnoff.


The function "grade up" has the symbol "<". The composition of this function with itself "<<" is not a special case- it's "grade up of grade up". Compositions of functions are first-class objects in K, so you could certainly give it a name if you like (as well as eliding some of the brackets in the above example):

      ordinal: <<:
    
      (x,y,z)[ordinal g]
    "1a23z4z56b7c8d9"
    
      (x,y,z)ordinal g
    "1a23z4z56b7c8d9"
The idiom << is common enough, though, that K programmers learn to recognize it when they see it in code. I'd have to look up the definition of "ordinal" to be certain it did what I think, but << is totally unambiguous. Contrary to your argument, naming short idioms like ,/ << +| and so on actually makes a program less explicit.


  $ txr
  This is the TXR Lisp interactive listener of TXR 173.
  Use the :quit command or type Ctrl-D on empty line to exit.
  1> (let ((l '#"abcd 123456789 zz")
           (i '(1 0 1 1 2 1 2 1 1 0 1 0 1 0 1)))
       (gun (pop [l (pop i)])))
  (#\1 #\a #\2 #\3 #\z #\4 #\z #\5 #\6 #\b #\7 #\c #\8 #\d #\9)
Conversion to string:

  2> (let ((l '#"abcd 123456789 zz")
           (i '(1 0 1 1 2 1 2 1 1 0 1 0 1 0 1)))
       (cat-str (gun (pop [l (pop i)]))))
  "1a23z4z56b7c8d9"
What is this gun we have armed ourselves with? It stands for "generate until nil": it generates a lazy list whose elements are formed by successively evaluating the argument expression. If the expression yields nil then the list terminates (without adding that nil as an item).

The expression uses side effects. The pop macro is straight out of ANSI CL. Except, in TXR Lisp, list processing stuff like `pop` works on other sequences such as character strings: we can pop the letter a out of a place which holds "abcd", leaving "bcd" in that place. This approach just "popped into my head" immediately.

We also have a syntactic [...] which gives us indexing (and other things).

We also have #"..." which is a "word list literal". Simply, #"foo bar baz" means ("foo" "bar" "baz"). Note there is no quote there to prevent the evaluation of this as a form; so we still have to use one.

I won't bother implementing the apparent requirement that the indices be digit characters from a string; I consider it a serious blemish on the language, if it's doing it implicitly. We can map a string of ASCII digits to the corresponding values like this:

  3> [mapcar chr-digit "01234"]
  (0 1 2 3 4)
Characters are characters, and numbers are numbers. Dynamic typing: yes please; daft typing: no thanks.


> I won't bother implementing the apparent requirement that the indices be digit characters from a string; I consider it a serious blemish on the language, if it's doing it implicitly.

There's no such requirement, < (and by extension <<) returns the same result for 0 1 0 2, "0102" or "abac".


I don't mean that the program requires the input in that form, but that the problem specification (in its strict interpretation) specifies it, and I'm missing that requirement in my solution.

Obviously, there is here a language-level requirement in K that 0, "0" or "a" all denote an index zero, at least in the exemplified situation.

Though that may help get points on http://codegolf.stackexchange.com, it comes across as rather arbitrary.

We could easily achieve the same thing without pushing it into the language, at the cost of making a function call, which could have a one-letter name.

  12> (defun $ (x)
        (cond
          ((numberp x) x)
          ((chr-digit x))
          ((chr-isalpha x) (- (chr-tolower x) #\a))))
  13> ($ #\b)
  1
  14> [mapcar $ "cdba"]
  (2 3 1 0)
  15> [mapcar $ #(1 3 9 5)]
  #(1 3 9 5)
  16> [mapcar $ "012a3"]
  (0 1 2 0 3)
We basically have to operate under the assumption that writing a function is unacceptable, and I could cater to that view easily by having this function in the TXR Lisp standard library in the next release (probably not under the name $, to reserve that for users). I am rather too convinced of its lack of utility to do such a thing.


>Obviously, there is here a language-level requirement in K that 0, "0" or "a" all denote an index zero, at least in the exemplified situation.

no, 0 just sorts before 1, "0" before "1" and "a" before "b"


I see, so something like "abac" is basically mapped to 0 1 0 2 by mapping the lowest-order element in that sequence to 0, and the others following suit. In which case, it would be expected that, say, "xyxz" could be used in place of "abac".

"Auto-indexifying" a sequence in this way does seem like a useful little operation to have.


Grade up (or grade down) returns the indices of an array in sorted order. So >"weasels" would return: 0 3 6 5 1 4 2.


Nice to see APL lives and people still try to teach it. For software historians, all the 1970s-era manuals are at bitsavers[1]. The language was quite a bit simpler then; several of the primitives this tutorial thought appropriate to introduce didn't exist then.

[1] http://www.mirrorservice.org/sites/www.bitsavers.org/pdf/ibm...


This recent thread about an APL compiler for the GPU was so interesting: https://news.ycombinator.com/item?id=13565743, that we invited the author to do an AMA about it: https://news.ycombinator.com/item?id=13797797.


A+, a derivative of APL, is still being used at a certain investment bank. Twenty years of projects to decommission it and replace it with something more modern haven't managed to completely kill it off yet. The main issue I had with it was the inability / cost to hire people with any experience, and the off-putting / steep learning curve. Once you get the hang of it, though, it's a great language for solving certain more numerically oriented problems.


Morgan Stanley. They open sourced it ~15 years ago at aplusdev.org. The project was led by Arthur Whitney, who went on to become Mr KDB.


Mr. Moneybags...well earned from what I hear.


Would you be able to comment about how much of it is the underlying APL (and math coded therein), and how much of it is the Electric GUI ?


I really liked the way that Iverson built up concepts in his Arithmetic manual for J (PDF: http://www.jsoftware.com/books/pdf/arithmetic.pdf). I found it very intuitive and useful even if you never plan to use the language. There are others at the site like his manual for Exploring Math and one for Calculus as well.


I thought they were great too!


I've slowly been going through "J Tutorial and Statistical Package", which teaches J (Iverson's evolution of APL which uses only ASCII characters) in the context of building a library for statistics[1]. I've found that it's a great way to learn the language, and that stats is a domain where APL languages work very well. I also read an interesting paper about using APL to create a notation for statistics, such as "normal prob between 0 2"[2]. Another nice thing about J is that it's GPL'd and free to use commercially.

[1] https://webdocs.cs.ualberta.ca/~smillie/Jpage/jtsp.pdf

[2] http://archive.vector.org.uk/art10501700


The core language is good enough but the object oriented stuff they put on top later on is just a hell of an eyesore.


Is APL still used for anything now? It seems like it could be a useful language for _something_.


I tend to think about APL as an alternative to R. When working with array-based datasets it's a very nice tool to use.

APL has a small number of very flexible operations, and the key to using APL efficiently lies in understanding these primitives to achieve your goal. Once you learn them, it's more comfortable to use than learning all the intricacies of the R language.

I wouldn't recommend that anyone write a full application in it, though. But then again, I don't think anyone uses R for that purpose either.

As an example (which I believe I mentioned last time APL came up on HN), one of my solutions to last year's Google Code Jam was a single short line of APL, but then there were about 20 lines of supporting code just to load the dataset and format the output so that it exactly matched the correct format for the submission.


Hmm, I don't know APL or R, but I suspect I could have substituted "Numerical Python" for "MATLAB" for "R" in your post and have left the meaning almost unchanged.

Is my suspicion correct?

BTW: A mere 20 lines for format conversion is very good. Although by APL standards, it might be terrible.


Very likely. I used R as an example since APL conceptually is very similar to R in the way it treats arrays, and also since it's not something you'd use to build full applications.


I dunno how widely used it is, but there is Kdb+[1], “a column-based relational time-series database” built on the K[2] language, a descendant of APL. I mainly see J[3] used for code golf, but at least that means a decent number of people know it.

[1]: https://en.wikipedia.org/wiki/Kdb%2B

[2]: https://en.wikipedia.org/wiki/K_(programming_language)

[3]: https://en.wikipedia.org/wiki/J_(programming_language)


kdb is great; basically it combines an APL-like language and a database, which is very cool as you can easily push your calculations to where your data lives:

http://www.timestored.com/kdb-guides/kdb-database-intro

If you want to dive in and try it, I've also made a video tutorial here:

http://www.timestored.com/kdb-guides/getting-started-kdb

Even if you don't use it, it's an interesting language to study as it's significantly different to the common Java/Python etc. The one drawback is that for commercial use it's very expensive. Other "modern" APL-like languages that are available free include J and kona (an open-source K).


Kdb+ is incredible. It blows away any of the current attempts at time series data stores out there (e.g. InfluxDB, OpenTSDB). Unfortunately, Kx Systems failed to allow it to reach mass popularity by keeping licenses extremely expensive. Startups simply can't afford to outlay $100k+/year for a tiny cluster.


I wonder if anybody compared Kdb+ with Jdb, a J DB system.


I tried googling this a while back. I assume kdb+ is closed source?


I used to work at a large insurance company as an actuary, and although it's dying out, there's still APL code modeling insurance products in use. One of my jobs was to add the new year's assumptions into arrays.


Isn't the inventor of K+/Q currently writing his own operating system? Haven't heard any progress about that in a long time, hope it actually gets some use. Sure, won't be the next Linux, but maybe more than the next ColorForth.


ColorForth is not an OS. For C. Moore, an Operating System is a non-thing.


That's splitting hairs a bit. If I boot into a programming system that lets me operate the computer, I'd call that an operating system. I don't see a significant difference between ColorForth (or other native Forths) and e.g. home computer Basics, Oberon or even Lisp Machines.

And more to the point, I wouldn't assume that kOS would be closer to Unix/Windows than Forth/SmallTalk/Mesa etc.


Your definition is simply not usable. By this definition, a bootloader is an OS, the BIOS is an OS, etc. All it takes is stretching the interpretation of "lets me operate the computer" a little. Just because "it" doesn't need an OS to run, it doesn't have to be itself an OS.

I think the issue is that there's no name for that kind of software which is just a program that runs on bare metal. So everyone just calls it an "OS" - including embedded system software vendors, even though what is being sold is really just a library.

But often you have no kernel, no drivers, no resource management, no filesystem, no processes... All features one would expect from an operating system.

Furthermore, operating systems in the generally accepted definition are not supposed to include an interpreter or a compiler. They are not supposed to be interpreters or compilers.

So that's not hair-splitting at all. Those systems are really distinct from operating systems.


Chuck Moore writes one for every project.


That's a deep misunderstanding of what he does. When you run a single process on a single piece of hardware, you don't need an OS.


Slight sarcasm, but I'm pretty sure he has written several before, and he does deserve his rather mythic status.


Complete USB host driver for Intel EHCI:

  #(*<<#$**$[x;]&^^;
The << idiom really stands out here.


The main problem with (GNU) APL for me is the line editor: I can't feel comfortable using a REPL to write a program without keybindings like ctrl-a, ctrl-w. Every time I try to learn APL I eventually give up.


I implemented Emacs support to handle exactly this. It also provides several other useful features such as formatting and code navigation.

It's available on MELPA as gnu-apl-mode, or you can get it here: https://github.com/lokedhs/gnu-apl-mode

As poor as this video is (and the last time I posted it I believe I said I was going to make a new one), it does illustrate some of the features: https://www.youtube.com/watch?v=yP4A5CKITnM


I don't know if this is helpful for you, but I map <c-p> to run the open buffer in Vim in the buffer's filetype's interpreter (or compile and run, etc.). For GNU APL, this is:

  nnoremap <buffer> <C-P> :write !cat - <(echo ")OFF") \| apl --script<CR>
It took some time to figure that out, but combined with https://github.com/ngn/vim-apl, it is not too bad of an editing experience.


I haven't tried it out, but maybe rlwrap can help here.


Unfortunately it doesn't seem to work.


I thought it was super cool when I found out you could overload operators in Julia. I wonder if you could re-create some APL-like syntax that way.



I really did enjoy that. It reminded me of how I use R and then drop down to C++ when I need speed. This interpreter is analogous: use Julia and then go to APL when you want compact expression. Although, I'm not sure how advantageous it is. At least prima facie it's not as compelling as the speed gains you get from dropping to C++ from R.


Is this sorcery?


It's APL. First time I heard about it was in those "how to shoot yourself in the foot" lists [0]:

* You shoot yourself in the foot and then spend all day figuring out how to do it in fewer characters.

* You hear a gunshot and there's a hole in your foot, but you don't remember enough linear algebra to understand what happened.

[0] http://www.toodarkpark.org/computers/humor/shoot-self-in-foo...


The second bullet point is pretty funny. I really liked linear algebra, but I'm sure it could get frustrating if you're not an expert. I wonder if it makes sense philosophically for code to be more mathematical like APL or more like a spoken language like Python.



I enjoyed this article. Thanks!



