Origins of J (github.com/kelas)
143 points by tosh on Jan 4, 2024 | 72 comments



Arthur Whitney is a mad genius. We were an early customer of KX systems and I programmed with KDB for a few years. Arthur Whitney once sat at my desk and helped me debug my code. Very nice guy. Super smart and hilariously knowledgeable about the low level performance of chips, caches, etc. Ask him how many nanoseconds it takes to divide an array of doubles by an array of ints, and he knows. He just knows.


(author of the modern port here)

> Arthur Whitney is a mad genius

atw is atw :) our old man is awesome. a bit grumpy at times, but then who isn’t :)


This kind of story is what I enjoy so much about HN. I wish Kdb+ or Shakti had dramatically lower costs for those industries that don't have access to banking cash. I know open source versions exist, but I understand them to mostly be toys and not really production worthy.


As the author of an open source version (https://www.timestored.com/jq/) I wish the same but I fear the time has passed. Two factors:

1. Other technologies are taking on the parts that made kdb+ special faster than kdb+ itself is evolving. See Arrow / Parquet / NumPy / Kafka: each solves a piece of it, but kdb+ had them all 10 years ago in under 2 MB.

2. The ratio of learners to advanced programmers has increased every year for the last 20 years. The languages that have gained popularity in that time are those with the gentlest learning curve. Most beginners no longer want to sit with a book, frustrated over 2 characters for half a day.


Oh, interesting! I heard other array programmers talking about jq, and (having not seen your website) went off to study the json transformation language, which is also kind of cool, but nothing to do with kdb or q. :)


> Most beginners no longer want

Most beginners never wanted that.


You are correct. I guess it's more accurate to say that most beginners are no longer forced to learn step by step from a book. Now they ask Google/SO/ChatGPT and use the answer if it works with a few tweaks. Kdb+ has very few core concepts, and most of them, being symbols, are hard to google.


guys, my two pennies: most beginners have no idea what they want.

but what we know is that what they don’t want is to produce more endless ugly and buggy code into the world - we’ve produced enough of that before they learned how to locate their asse(t)s.


KlongPy sits on NumPy, so it gets pretty far. For some features, it's still early days.

http://klongpy.org


K is, at the end of the day, a fancy calculator. I think for most workloads you can use the open source implementation ngn/k.


Sure, a fancy calculator that supports rather efficient execution on ridiculously large machine farms. What some people might call the implementation language for a database engine.


This - https://github.com/JohnEarnest/ok - can also be used sometimes...


> k is a fancy calculator

[put your favorite Java here] is also a fancy calculator by virtue of Turing completeness. what’s your point?


Quant firms don't spend millions on what is just "a fancy calculator". It's more than that. It's a high performance combination of programming language and database that are tightly integrated together and more than the sum of their parts. I wouldn't want to write generic software in it, but it's pretty powerful for analysis.


I love kdb, but First Derivatives should take it down the MongoDB route... Open source it, make it widely available. Build a community around it. Build a package manager. It could be way bigger than it is... But maybe they're happy with their existing business model. I just don't see how you build a moat. Over time new tech will start to take over as the key innovator is no longer there.


Nice to see this getting some attention again. I hope some people venture out to learn about the actual J language:

https://code.jsoftware.com/wiki/Guides/GettingStarted

For what it's worth, I've also studied this code a bit. This repo has an annotated and (somewhat) reformatted version of the code:

https://github.com/tangentstorm/j-incunabulum


J is really great. I spent some time last year playing around with it, Dyalog APL, and some other new array languages like BQN.

I was extremely impressed by the breadth of integrations into different ecosystems that the J community had created (like R and the web tech).

Using the language reminds me of using Common Lisp. There are a lot of things that seem odd now, like how you define new words (i.e. functions), how namespaces work, or how the FFI/system calls work (i.e. !: ) [1]. Kind of like how in CL things are named "mapc", "mapcar", "mapcan", etc. Both kinds of quirks come from the fact that these people were really innovating in new frontiers, and Ken Iverson and Roger Hui just kept on developing their ideas.

[1]: https://code.jsoftware.com/wiki/Vocabulary/bangco for how it works and https://code.jsoftware.com/wiki/Vocabulary/Foreigns for what you do with it.


Since we are on the topic, I've thought about APLs a decent amount so here are some other resources/notes. I'm not an expert on this topic - I don't work with or research the language or anything. These probably are not good getting-started resources.

There is a VM model for APL languages[1] which can make optimizations comparable to those made by CLP(FD). If you read about CLP(FD) implementations[2], you'll see operations similar to what the "An APL Machine" paper calls beating. I'm not sure if any APL-like languages actually implement such optimizations.
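
To give a rough idea of what beating means, here is a tiny C sketch (my own illustration, not from any APL implementation): structural operations like reverse only rewrite the index map of a view, and no data moves.

  /* a view over a double vector: element i is read through offset + i*stride */
  typedef struct { double *data; long n; long offset; long stride; } View;

  double at(View v, long i) { return v.data[v.offset + i*v.stride]; }

  /* "beaten" reverse: O(1), only the index map changes, no data is copied */
  View rev(View v) {
    View r = { v.data, v.n, v.offset + (v.n - 1)*v.stride, -v.stride };
    return r;
  }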

There are different models of arrays (and their types) used by APL-like languages[3]. Also array frame agreement can be statically typed[4], though it usually isn't.

Some other OSS implementations of similar languages include Nial[5], ngn/k[6], and GNU APL[7]. My favorite is ngn/k. If you use a K-like language, a great source of inspiration is nsl[8].

There is an unusual and fun calculus book that uses J, by Iverson, but it moves somewhat quickly and loosely[9]. It perhaps gives a good example of what APL was intended to be(?). On that note, his Turing Award lecture, "Notation as a Tool of Thought", is interesting[10]. There is also a podcast interview with Robert Kowalski, one of the creators of Prolog, who says - if I remember correctly - that he was looking for a better way of thinking when he came up with SLD resolution[11]. It's interesting how these languages came out of different paths towards a similar goal.

Also beware the reverence of Arthur Whitney. His work is definitely inspired, but the community around K can seem schizoid-like[12], in a way comparable to Wolfram's projects[13].

That said, J is an exceptionally fun language to use. My favorite generalizable insight from an APL-like language is how K encourages writing functions that converge: its easiest-to-use loop operator is one that applies a function to an argument repeatedly until the output stops changing.

---

[1]: https://www.softwarepreservation.org/projects/apl/Papers/197...

[2]: http://cri-dist.univ-paris1.fr/diaz/publications/GNU-PROLOG/... (there are probably papers more directly on point; this is just the one I read when I noticed the similarities).

[3]: https://aplwiki.com/wiki/Array_model

[4]: https://www.khoury.northeastern.edu/home/jrslepak/typed-j.pd... (implemented in racket iirc)

[5]: https://www.nial-array-language.org/

[6]: https://codeberg.org/ngn/k (honestly it is a miracle this exists)

[7]: https://www.gnu.org/software/apl/

[8]: https://nsl.com

[10]: https://www.eecg.utoronto.ca/~jzhu/csc326/readings/iverson.p...

[11]: https://thesearch.space/episodes/1-the-poet-of-logic-program...

[12]: https://www.ijpsy.com/volumen3/num2/63/the-schizoid-personal...

[13]: http://genius.cat-v.org/richard-feynman/writtings/letters/wo...


Here are my two cents on array compilation. I think a lot of the research goes in the direction of immediately fixing types and breaking array operations into scalar components because it's easy to compile, but this ignores some advantages of dynamic typing and immutable arrays. When you can implement most operations with SIMD, a smaller type always means faster code, so dynamic types with overflow checking can be very powerful on code that deals with a lot of small integers.

https://mlochbaum.github.io/BQN/implementation/compile/intro...
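
A rough scalar sketch of the widen-on-overflow idea (illustrative only; a real implementation would do the inner loop with SIMD):

  #include <stdint.h>
  #include <stddef.h>

  /* add two int8 vectors; stay in 8 bits when possible, redo in 16 bits on overflow.
     returns 0 if out8 holds the result, 1 if it had to widen into out16 */
  int add8(const int8_t *a, const int8_t *b, int8_t *out8, int16_t *out16, size_t n) {
    for (size_t i = 0; i < n; i++) {
      int s = a[i] + b[i];                 /* exact: fits easily in int */
      if (s < INT8_MIN || s > INT8_MAX) {  /* result no longer fits the small type */
        for (size_t j = 0; j < n; j++) out16[j] = (int16_t)(a[j] + b[j]);
        return 1;
      }
      out8[i] = (int8_t)s;
    }
    return 0;
  }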

I'm somewhat skeptical of the virtual optimizations on indices, "beating" and similar. They sound nice because you get to eliminate some operations completely! But if you end up with non-contiguous indices then you'll pay for it later when you can't do vector loads. Slicing seems fine and is implemented in J and BQN. Virtual subarrays, reverse, and so on could be okay, I don't know. I'm pretty sure virtual transpose is a bad idea and wrote about it here:

https://mlochbaum.github.io/BQN/implementation/primitive/tra...


Blind virtual transpose (as seen in numpy) is a bad idea. A principled, locality-aware version would be fine and good.


> VM model for APL languages

It's cute—but from my skimming a while ago, fairly primitive. We can do much better with less effort using more general mechanisms. (Not a knock—it's a product of its time—a lot of old compiler tech was not very good and even so remains unsurpassed.)

> statically typed

I in principle espouse a much more nuanced view than this, but in short: just don't.


> statically typed APL

that’d be a curious case indeed, why not, only it won’t be APL even remotely :)


> applies a function to an argument repeatedly until the output stops changing

In other words: instead of worrying about which n to use for "loop n times", it just always loops (effectively) an infinite number of times...
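
A minimal sketch of the same idea in plain C (nothing to do with k's actual implementation): the loop has no count at all and stops only when it hits a fixed point.

  #include <stdio.h>

  unsigned digitsum(unsigned x) {            /* sum of decimal digits; already a fixed point below 10 */
    unsigned s = 0;
    while (x) { s += x % 10; x /= 10; }
    return s;
  }

  unsigned converge(unsigned (*f)(unsigned), unsigned x) {
    for (unsigned y; (y = f(x)) != x; x = y);  /* no n to pick: stop when the output stops changing */
    return x;
  }

  int main(void) {
    printf("%u\n", converge(digitsum, 193462));  /* digital root: 7 */
    return 0;
  }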


What is reference [9] ?


Ah, mea culpa, it is here: https://www.jsoftware.com/books/pdf/calculus.pdf

The preface says:

The scope is broader than is usual in an introduction, embracing not only the differential and integral calculus, but also the difference calculus so useful in approximations, and the partial derivatives and the fractional calculus usually met only in advanced courses. Such breadth is achievable in small compass not only because of the adoption of informality, but also because of the executable notation employed. In particular, the array character of the notation makes possible an elementary treatment of partial derivatives in the manner used in tensor analysis. The text is paced for a reader familiar with polynomials, matrix products, linear functions, and other notions of elementary algebra; nevertheless, full definitions of such matters are also provided.


C code that is supposed to read like this is usually never written that way from the start; it's like pretending to write minified JS by hand from scratch. Usually the code is contracted and "minified" from a larger program to fit the whole thing into 1-3 screens. The person who manually "minified" it to that state knows its expansion, but other people will dismiss it as obfuscated C. It's an old technique for fitting lots of code onto an 80x25 terminal. Not surprising, since J is optimized for code density per screen.


> C code that is supposed to read like this is usually never written that way from the start

usually not. but we prefer to write it first this exact way, and there are good reasons for that.

> obfuscated c

it is not. this style is extremely regular, very readable and writable, and escapes a whole galaxy of typical C blunders. i can expand on that if you wish.


Please do. Although I'll probably never write C in that style, most of us here will probably learn a few things that will eventually prove useful. (And it probably will also serve as a historical document of a "skill"(?) that is apparently soon to be lost to obscurity...)


> please do

now that i think of it, i already did just that once, only forgot. getting old sucks, and also forgetting things is a great skill i learned from atw. as he likes to say, “kelas, ignorance is bliss”. Here you go - all you ever need to know about how to read and write atwc:

https://github.com/kparc/bcc/blob/master/d/sidenotes.md#styl...


> inequality x!=y is not used at all, because it is two chars. instead, we test with x-y, which holds true when operands differ

I think this is the line in the document that represents his coding style the most. Sacrificing legibility for plebs to save one character per comparison.
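
For reference, the two forms side by side (a trivial sketch; the x-y form also quietly assumes the subtraction can't overflow):

  int differ1 = (x != y);  /* conventional */
  int differ2 = x - y;     /* atw: nonzero exactly when x and y differ */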


> represents his coding style the most

i agree that atw’s inequality test is a bit cheeky, but like everything else it is a matter of habit. a convention. eventually you just begin to see what is subtraction and what is comparison.

here’s another classic example of the same effect:

x=x+1

makes perfect sense to everyone, right?

wrong. to some, the right answer is “no, they are not”.


I am not saying it's bad. I kinda understand the motivation behind atw style in general. Modern code is written like prose:

    if (IsValidFile(fileName)) {
        var rawRecords = ReadFileAsRawRecordIterator(PreprocessFile(fileName), config);
        foreach (var rawRecord in rawRecords) {
            var record = ParseRawRecord(rawRecord, config);
            //and so on
        }
    }
But you have to trust the prose. At one point or another, you find a bug in some free software you haven't written yourself, open its source code, and are met with ravioli code written in prose. And you cannot trust this prose, because there is a bug in it. So you open this tiny method, then you open PreprocessFile, ReadFileAsRawRecordIterator, ParseRawRecord, the implementation of IRawRecordIterator, then a few more methods that those methods call, and try to thread the control flow through all of them, jumping from file to file. I can see how atw-style code can help with that, especially after you retrain yourself to read it like math.


> soon to be lost to obscurity

well, about that i’m just not so sure.

as we like to say, “skill”, for whatever reason you chose to use quotes, cannot be easily bought on a Turkish fish market.

there are people out there who write atwc, and they are not atw. the following 40 lines of c are written strictly arthur-style, and are occasionally very useful. in the faq section of the readme there is an answer to a popular question “why is it written this way, and how to learn to write software this way”.

https://github.com/kparc/pf


> it is not. this style is extremely regular, very readable and writable, and escapes a whole galaxy of typical C blunders. i can expand on that if you wish.

Please do! I'd like to learn. If you can expand on this in the README section as well that would be great.


> README

good idea, why not

for a more throw-me-in-the-water introduction, here are my notes on another famous public domain release from atw. some remarks are specific to the codebase, but essentially it is a general introduction to atwc:

https://github.com/kparc/bcc/blob/master/d/sidenotes.md

and here’s a less involved way to get lit:

https://github.com/aaalt/altc


Awesome, I'm checking these out. Thanks!


> Not surprising since J is optimized for code density per screen.

I don't think that's the whole story. J is dense like traditional mathematical notation, but can be executed by machine. Experienced J programmers use it to convey mathematical ideas. See for instance:

https://www.jsoftware.com/jwiki/Puzzles/Unit_Fraction_Sum

Although I can't read the notation, I appreciate the role it can play. Plain C (or whatever) code isn't an efficient vehicle for ideas like that. Numpy/matlab comes closer but J is a stronger approximation of traditional maths notation.


I like J. Especially because it has a saner way to write it (it doesn't have to look as if you accidentally forgot a null terminator in C strings; all the traditionally short identifiers have a long and understandable form).

I feel like it's very regrettable that the superficial aspect of J (the very hard to read syntax) is standing in the way of some very nice ideas.

To comment on mathematical notation: before I was a programmer, I was a typographer. During my studies at an art academy I designed a bunch of fonts; one of my long-time projects, for example, was to make Hebrew typefaces look more like Latin ones (this is a long-standing issue in Hebrew typography, with several historical attempts, but still not quite resolved). Afterwards I worked in a printing house, paginated a newspaper, typeset a bunch of books, etc.

Among my coworkers (esp. in the newspaper) I was sort of known for trying to automate stuff, so, I was often suggested as a candidate for "difficult" typographical tasks, like setting sports tables, chess diagrams, music sheets and the most damned and hated kind of typographical work: math formulas.

I helped publish a course book on linear algebra for a university. It was a multi-year project that I joined in the middle. I have never seen so much pain, struggle and reluctance as I encountered while working on this thing. People tasked with proofreading demanded extra pay for proofreading this stuff, and still wouldn't do it. They would just put it away and later explain that they had other things to do. The lady who had to transcribe the mostly hand-written (or sometimes typewritten) manuscript into our digital system would just skip work on the days she was supposed to do it.

Everyone passionately wanted this project to burn in hell. And the reason for this was the mathematical notation. Typical proofreading techniques don't work on math formulas. The text is impenetrable to anyone, often even to the people who wrote it, including both the author and the editor. Parentheses are a curse, because in the manuscript they are one of the elements most commonly forgotten or misplaced. Single-letter variables are another. Overloading the same symbol with different meanings is yet another, and it gets worse when the same symbol is used at its normal size, as a subscript, and as a superscript.

----

When I talked about my experiences to people with degrees in math, the way they tend to respond is by saying that "math is overall so hard that mathematicians don't typically notice the extra struggle they incur on themselves by the bad language choices; it pales in comparison to the difficulty of the main problem they need to solve".

And, I kind of can see it... on the other hand, I see no reason _the students_ have to endure the same torture. They aren't solving any novel mathematical problems. Their task is usually reading-comprehension combined with memorization.

And then I saw Sussman's book where he uses Scheme to write math formulas (I think it was about physics, but it still used a lot of math). Dear lord, it was so immeasurably better than the traditional mathematical notation. I really wish more people joined this movement of ditching mathematical notation in favor of something as regular and typography-friendly as Scheme...


I, on the other hand, am dreaming of being able to use mathematical notation in my code. Sort of like what Fortran has helped with, only on a much larger scale.


As with a lot of things, some people may enjoy an arduous and very low-yield process for all sorts of reasons. I, for example, like baking sourdough bread.

As with the bread, which comes out at more or less comparable quality to what I can buy from the local grocery for much less effort, I get a certain satisfaction from doing it myself. But if I had to do this on an industrial scale (and I worked in a bakery, although very briefly), I'd want to kill myself if I had to deal with the same kind of process.

Math language is very similar in this regard. It's kind of nice, like a calligraphy piece. Sometimes it takes a master a month to write just a few words in a visually appealing way, but if this was the expectation for everyday boring tasks, that'd be a completely different story.


My impression has always been that it is mathematical notation that is indeed high-yield and low-effort. That’s why Fortran was/is successful.


Low-effort, maybe, if you are writing with a pen on paper or chalk on a board. It's anything but, even with systems like LaTeX.

Just to put this in perspective: in my student days, I got a gig at the state hotel for official guests. They had a guest book, and the honored guests would usually leave an autograph in it. They hired me to write the name and the title of the guest in calligraphy. Usually that meant two, sometimes one line, with just two or three words each. So, let's say four words per page. I would do about ten pages per day (they had a couple of years of backlog).

My typical workload at the newspaper for a day was somewhere between 16 and 24 A3 pages (this includes everything from inputting the text into the system, the editor editing it, the proofreaders reading it, and me running back and forth between the editor and the proofreaders to convince the editor to find a different image, add or remove a paragraph, etc.). If memory serves, that's a bit under 1K words per page. So, 16K-24K words per day (compared to 40 words per day of calligraphy).

With the math textbook, we did about a page a day, and it was closer to A4, so under 500 words. Also, of course, the formulas are just a small fraction of the algebra textbook: most of it is prose, proofs or some general discussion of the subject, plus some diagrams (but those are comparable to the newspaper).

So, while not as bad as calligraphy, the math textbook was well over an order of magnitude harder than the newspaper, and about an order of magnitude easier than calligraphy.

NB. Newspapers aren't the easiest job in terms of putting text on paper. It's actually quite involved, and paginators are under quite a lot of pressure to finish things on time, especially at the daily papers. If you are looking for the lowest effort / highest yield, something like War and Peace would be your best bet. You can do hundreds of pages per day, even with a moderate amount of illustrations.


> math notation is indeed high-yield and low-effort

low-effort is perhaps “your mileage may vary”, as they say :) but the yield per square inch of paper does indeed make math the most powerful and expressive language known to humans. On that note, Ken Iverson was very concerned that tons of mathematical symbolic conventions and speak overlap and conflict with each other to an obscene degree. As we all know, that little book he wrote on this very subject eventually got him a Turing award when people finally realized what he did there.

That said (and please, no offense, APL and typography fiends who are reading this), a considerable portion of the funny APL chars was a hard compromise dictated by the economics and physics of the IBM Selectric typeball.

With that in mind, if you take a fresh look at the original APL charset, you will see that much of it is stone-stupid overtypes of two ASCII chars.

Why? Because IBM, that’s why. El Cheapo.


You would have liked Fortress.


While there are people that do this, I do not think that Whitney is one of them. This code is not obfuscated; it uses macros and strategically defined functions to allow writing code in a style similar to APL that appears natural (-ish?) to someone fluent in that programming style.


> not obfuscated

absolutely not. porting it to ISO C was a very fun and smooth ride; also added two adverbs atw forgot to add in 1989 (see over/scan) and a header file with some handy accessors (atw usually does that, but he was lazy that day)

> to someone fluent in that programming style

what people often don’t realize is just how fast one can pick up atwc style, and how hard it is to ever go back :)


> what people often don’t realize is just how fast one can pick up atwc style, and how hard it is to ever go back :)

For my own amusement I tried this a couple months ago and I 100% agree. I found my code to be more engaging to develop and understand (function names encoded into 3 chars, structure names encoded into 4 chars, primitive types are a capital letter, etc).
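
Roughly what that looks like (an invented fragment, not my actual code):

  typedef long I; typedef double F;    /* primitive types as single capital letters */
  typedef struct { I n; F *v; } Vect;  /* 4-char structure name */
  F sum(Vect a) { F s = 0; for (I i = 0; i < a.n; i++) s += a.v[i]; return s; }  /* 3-char function name */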

It is like a game you play with your brain to recall how your code works. And it reminded me of studying 6809 assembly in college, where it was easy to memorize every instruction mnemonic and what it meant.

I just don’t know if I would show it to someone at work, but for my own prototypes and experiments, I like it.


I’m in the same boat. Outside of a professional setting atw style is the only way I write code. I find code easier to manage when I can see more of it at once. I like to use this style in JavaScript as well.


> I find code easier to manage when I can see more of it at once. I like to use this style in JavaScript as well.

How do you write with this style in JS?


D=document;E="getElementById"; D[E]("myId")


How do we pick up that writing style?


One possible way is to look at a header file i once gave to a 13yo girl. ever since, she says she has no idea why people write tall c:

https://github.com/aaalt/altc


learn apl (k and j count)


true. only k is faster, easier to learn, and does the trick :)

that said, i hold the view that mastering programming in an ultra-high level language such as APL or k does not absolve a computer programmer from learning the lingua franca of our trade, which is due to k&r, will stay around for a very long time, and is called C.

people who don’t know c are ok, only they are not involved in computer programming. their field is known as software development. feel the difference.

i once attempted to convey my own understanding of this divide in a chapter titled “no stinking loops”, which is a nod to Apter’s mandatory nsl.com:

https://github.com/kparc/kcc#no-stinking-loops


Regarding k vs j, I will just say that ngn and I both agree that both the k and j array models are far more coherent than bqn. And that k is slow for multidimensional arrays ('mangle your data so it's fast in lists' is a poor response). And leave it at that :)

C is a historical accident. Its existence makes sense in context. Its continued usage only makes sense in context of artificial social factors. There is no reason why we cannot write all software in high-level languages except perhaps when targeting microcontrollers. (For example, there are strategies that can be employed to reduce the rate of memory errors in general-purpose code written in languages without automatic memory management, but there is not much reason to learn these strategies when pretty much the only code that really has to be written without automatic memory management is the memory management code itself, which is hardly 'general purpose'.) The extent to which it makes sense for people to understand the low-level details of the machine is a separate issue I won't express an opinion on here.


> C is a historical accident

strong words. in the context of civil aviation, for example, there is a fine line between incident and accident. in our context, an accident is COBOL/ABAP, which is, thank god almighty, not as ubiquitous as c, but sadly ubiquitous enough. JavaScript is a separate issue i won’t express an opinion on here :)

> k is slow for multidimensional arrays

this is true if we can agree on “k is slow for very multidimensional arrays of very small lengths”. yes, i wouldn’t recommend k as first choice for neural network inference, although it can be done and looks like a typical k program, that is pretty neat (proof below).

k on 32bit systems (notably, riscv and wasm32) has been a contention point between atw and his associates a few years ago, but a deal has been struck, and we now run k everywhere. the first bare metal risc-v build is about four years old now. and yes, it took some doing in the low level department.

before i forget:

https://kparc.io/k/#0%3A%22ml.k%22

https://kparc.io/k/#%5Cl%20ml.k

(end of proof)


> How do we pick up that writing style?

since the discussion here revolves around "intentionally obfuscated c", i'd like to recommend three sources which elucidate what obfuscated c is:

1. this is not how you want to write c. this is very bad news:

  #include <stdio.h>
  #include <stdlib.h>

  int main() {
     int *p = (int*)malloc(sizeof(int));
     int *q = (int*)realloc(p,sizeof(int));  /* realloc may free p here */
     *p = 1;                                 /* undefined behavior: p may be dangling */
     *q = 2;
     if (p == q)                             /* even using p's value after realloc is UB */
       printf("%d %d\n", *p, *q);            /* famously can print "1 2" on real compilers */
  }
2. what follows is five trivial questions about c. the correct answer to all five is "i don't know". you don't even need to understand why that is; what you want instead is to keep it simple. "Origins of J" from 1989 is infinitely more sane and approachable than any of these five:

https://wordsandbuttons.online/so_you_think_you_know_c.html

3. what amazes me is that not a single person in this thread has yet illuminated us with a staple wisecrack piece of the general form "preprocessor is evil". so, let me do it: the preprocessor and fancy macros are EVIL, and they are not your friends. if you don't know when to stop producing them, they will turn on you, and will become deadly.

this effect can be described in less sinister language, sunny side up. if you understand at least 30% of the untold sorrow which happens below, you are 100% qualified to use the preprocessor:

  //!\file adios.h

  //! first things first
  #define struct union
  #define if while
  #define else
  #define break        //!< what a sweetheart
  #define if(x)
  #define double float //!< who cares
  #define volatile     //!< amen

  //! elite programmers are not afraid of math
  #define M_PI 3.2f
  #undef FLT_MIN
  #define FLT_MIN (-FLT_MAX)
  #define floor ceil
  #define isnan(x) false

  //! more entropy is always good
  #define true ((__LINE__&15)!=15)
  #define true ((rand()&15)!=15)
  #define if(x) if ((x) && (rand() < RAND_MAX * 0.99))

  //! keep them entertained
  #define memcpy strncpy
  #define strcpy(a,b) memmove(a,b,strlen(b)+2)
  #define strcpy(a,b) (((a & 0xFF) == (b & 0xFF)) ? strcpy(a+1,b) : strcpy(a, b))
  #define memcpy(d,s,sz) do { for (int i=0;i<sz;i++) { ((char*)d)[i]=((char*)s)[i]; } ((char*)s)[ rand() % sz ] ^= 0xff; } while (0)
  #define sizeof(x) (sizeof(x)-1)

  //! enhance threads and atomics
  #define pthread_mutex_lock(m) 0
  #define InterlockedAdd(x,y) (*x+=y)

  //! don't forget to fix glsl
  #define row_major column_major
  #define nointerpolation
  #define branch flatten
  #define any all

  //:~


Nope, Whitney just codes like this.



A previous HN thread on Arthur Whitney's "B" is relevant here. See my previous comment there - https://news.ycombinator.com/item?id=30416737 - where user "yiyus" goes through the code line by line, adding detailed notes for comprehension.

Also see https://github.com/tlack/b-decoded


Obligatory posting of Bryan Cantrill's interview with Arthur Whitney https://queue.acm.org/detail.cfm?id=1531242


The full audio of that was recorded but never released -- this is reminding me that I should loop back with ACM to see if they still have it and can release it. In particular, I want to see how long the pause was when Arthur responded to my question "What do you think the analog for software is?": I won't give away his answer, but it more or less detonated my brain -- and it took me what felt like minutes (but was surely only seconds?) to put myself back together and ask a follow-up.


I agree with Arthur's answer, but unfortunately it runs afoul of Conway's Law: auteurs have a voice* and are capable of producing software in that style, but large organisations? By necessity, they must produce something qualitatively different, into which anyone can slot anything anywhere, optimised for superficial comprehensibility over elegance.

* sometimes small groups? Doug McIlroy said he was lucky to have managed a software group whose members would sit around in the lunch room and brainstorm (reminiscent of the Little Prince?) not what they could add, but what they could remove.

"By my faith! For more than forty years I have been speaking prose without knowing it, and I am the most obliged man in the world to you for having taught me that." —JBP (Molière)


sir, please - make an effort. recover that audio.

it is a f%ck#g shame that CHM failed to do their job and lost the legendary footage of the celebration of KEI. Arthur took the last word and opened with the iconic line “deeds of great men don’t need words, they need more deeds, so i’ll keep it short”. check out who else took the mike that afternoon:

https://computerhistory.org/events/celebration-kenneth-ivers...


“…and that’s why we have code formatters.”

That’s neat and impressive. I’m glad I’m not required to read or understand it.


It really doesn't help that it's written in ancient K&R C, but if you spend ten or so minutes just staring at it, familiar shapes and patterns start to appear. (Give it a try!)

Incidentally, it's in line with how APL code looks like an alien artifact at first, but you get used to it fast if you have the spatial reasoning to wrap your head around reshaping and transposing.


If you focus on the middle and move your head back and forth, eventually you see a 3D image of Dijkstra pop out, and he doesn't look happy at all.


porting from k&r to iso is super easy. fun, too.


Once I put in the effort to understand code like this, and it turned out to be straightforward once you learn the conventions:

https://news.ycombinator.com/item?id=19421524


I was expecting the story of the letter J, which Indiana Jones and the Last Crusade taught me appeared only in the 15th century as a letter separate from "I".


And I was expecting the Outlook J smiley. Been a while since I've seen it in the wild now.

https://superuser.com/questions/1181497/text-smiley-faces-sh...


I had the same thought. Maybe the title should be updated to better reflect this.



