
In 2017 I can mostly only recommend Dyalog APL. Keep in mind that I am highly biased, because my compiler is written in Dyalog APL and they sell my compiler commercially. However, IMO, Dyalog APL is the place to start with APL in 2017.

They now have Mac/Linux versions that include "RIDE" (the remote IDE), which lets you work with the code much like the IDE/interface you see me using in this video. I don't use the IDE features myself, such as the code editor, debugger, explorer, or the like, but for a new user they will be quite useful, and they will allow you to "point and click" your way through, including entering the APL symbols.

Dyalog APL for Linux also has a slick but old-school console IDE, built around console interactions, which includes a full-screen text editor, debugger, and the like. If you're a hard-core CLI guy, then this may appeal to you.

There is an excellent book called "Mastering Dyalog APL" which covers all the major features, built-in frameworks, and the like for working with Dyalog APL. Dyalog comes with built-in interfaces for things like R, .NET/Mono, ODBC/relational databases, sockets and networking, cryptography, &c. It also ships with things like vecdb, a sophisticated web framework, and a GUI framework (Windows only).

Additionally, APLers should all have a copy of the FinnAPL Idiom Library at the ready. It serves as the "core library" for a lot of common tasks, taking the place of "example apps" and "library calls" in many cases. Dyalog also has a large suite of dfns that they document at dfns.dyalog.com, and it comes with document-processing and graphing libraries that can be used interactively or programmatically to produce all sorts of documents and graphs.
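
To give a flavor, here are classic idioms of the sort you'll find in that style of library (a minimal sketch of my own, not quoted verbatim from FinnAPL), operating on a character vector s:

    (∨\' '≠s)/s    ⍝ drop the leading blanks from s
    +/∧\' '=s      ⍝ count the leading blanks instead

One-liners like these are what stand in for library calls.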

My general recommendation would be to stick to using dfns as much as you can, programming without trying to reach for external libraries. It's a common "mistake" for people learning APL to try to "find a library function to do X" when it's often a half-a-line APL program that would be considered idiomatic in APL, and thus not something APLers would have a library function for. It's better to master the core language, because you'll find your need for libraries diminishes quite a bit once you become comfortable with the language. Not that there aren't libraries, but if you really need a library function, you'll know it, and then you'll know whether you can find one.
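
For example, the arithmetic mean, which would be a library call in most languages, is a one-line dfn (my own sketch; "avg" is just a name I picked):

        avg←{(+/⍵)÷≢⍵}    ⍝ sum of the argument divided by its tally
        avg 1 2 3 4
    2.5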

The best example code for writing dfns is the dfns library from Dyalog at dfns.dyalog.com, which ships as a dfns workspace for use by Dyalog APL programmers. These are well commented, and cover a vast array (snicker) of topics.

Function trains are one of the neatest features of the Dyalog APL language, and represent an extension of traditional APL2 into the "Rationalized APL" space from which come the likes of J. The 90-line compiler component that I discuss in the above video is written almost exclusively as a function train, and I believe I dedicate some time to talking about trains there.
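
As a tiny illustration (my own sketch, not the code from the video): a three-function train (f g h), called a fork, applied to ⍵ means (f ⍵) g (h ⍵), so the mean from above can be written without mentioning its argument at all:

        avg←+/÷≢    ⍝ fork: (+/⍵) ÷ (≢⍵)
        avg 1 2 3 4
    2.5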

You cannot do these things in Scheme, as trains are fundamentally a syntax that is incompatible with S-expressions. You could do a form of point-free programming in Scheme, but the nature of S-expressions means that the structure of the point-free code is explicit, and thus, if you wanted function trains, you would have to implement a macro to construct them for you. That could be done, but at that point you would be well on your way to implementing APL in Scheme, rather than using function trains in Scheme.

Function trains also don't work well for functions whose arity exceeds 2.

dfns are a syntax invented by John Scholes for writing the APL notation with lexically scoped function definition and control flow. They replace the more traditional flat-namespace, dynamically scoped style of programming, or the OOP programming model, both of which are still supported and widely used in Dyalog APL. However, to a Scheme programmer, dfns are a godsend, because they closely mirror the core constructs and concepts a Scheme programmer already knows, namely: lambda, recursion, and conditionals/exception guarding.
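
A couple of tiny dfns to show the correspondence (my own examples): a guard (test: result) plays the role of Scheme's if, ∇ is self-reference for recursion, and an error guard (errnos::result) traps exceptions:

        fact←{⍵≤1: 1 ⋄ ⍵×∇ ⍵-1}    ⍝ guard for the base case, ∇ for the recursive call
        safe←{0:: 'oops' ⋄ ⍎⍵}     ⍝ error guard 0:: traps any error from ⍎ (execute)
        fact 5
    120
        safe '1+'
    oops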

Where I don't use function trains in the compiler discussed and demonstrated in the linked presentation, I'm using the more explicit dfns syntax.




Thank you for the comprehensive comment. Hopefully I can understand enough of the video to get the gist: the idea of Scheme-style functional features with APL's power tools. If you get around to writing more about these things, I'll definitely be reading.

Since most of the Python and JavaScript I do at work is mostly searching for and using examples and libraries from wherever, having the full complement already there with the system feels impossible. On the other hand, APL implementations have had some time to make things right. I'll take a look at that big APL book and see what I can gather (my favorite way to learn).

BTW: I got my first taste of APL from one of the FinnAPL fellows. I saw him do some incredible things that I've since tried to replicate, after a fashion.


The Mastering Dyalog APL book is available as a free PDF, too. Though the paper version is more enjoyable, and serves as an excellent monitor stand in a pinch, as well.

As a Scheme programmer, one of the first things I did when learning APL was to create a new REPL/environment in Chez Scheme called Sapling that replicated APL in Scheme using Scheme implementations of the primitives.

I quickly discovered that at least for my meager mind, there was no way to combine the two as one. I could use one within the other without trouble, but I couldn't mix the two easily.

If you're interested, the Co-dfns compiler is designed to integrate with existing languages. If your language has a C FFI, then you'll be able to integrate with Co-dfns compiled code. The workflow is basically for you to produce a namespace of the functions you want to use; you then compile that namespace and link against it from your Python or C or C++ code. You can then just call those functions with the appropriate arrays and things are a go. The main thing you need to do is write functions to convert between Co-dfns arrays and the data types you are using in your program.
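
As a rough sketch of the "namespace of functions" step (the names here are hypothetical, and the exact compile/link invocation is Co-dfns specific, so check its documentation):

    :Namespace mylib              ⍝ hypothetical namespace handed to the compiler
        avg←{(+/⍵)÷≢⍵}            ⍝ mean of a numeric vector
        scale←{⍺×⍵}               ⍝ multiply vector ⍵ by scalar ⍺
    :EndNamespace

You would compile this namespace with Co-dfns and then call avg and scale from C or Python through the C FFI, converting your data to and from Co-dfns arrays at the boundary.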


This got me a bit confused. Is the purpose of Co-dfns to compile apps that run on a GPU? Or does it only itself compile on the GPU, for apps that can run on any target architecture?

Other than that, the idea of using my domain-specific Python stuff (and a few general libraries) together with these data processing tools is intriguing.


These two goals are not mutually exclusive. The Co-dfns compiler is designed to self-host on the GPU, but it is a compiler for the Co-dfns language, which is a lexically scoped syntax in Dyalog APL. The compiler compiles dfns programs and supports Mac, Windows, and Linux, targeting CUDA, OpenCL, and Intel CPUs. The compiler itself is written as a dfns program, hence the idea of self-hosting on the GPU.

Target applications for the compiler include document processors, cryptographic libraries, neural network programming, data analytics, high-performance web applications, vector databases, financial algorithms, bioinformatics, and HPC/scientific computing tasks.

So, in short, yes, it compiles apps to run on the GPU, or the CPU, whichever you prefer. And yes, it is meant to compile itself on the GPU (so, you compile a GPU program on the GPU).



