The design side of programming language design (tomasp.net)
188 points by panic on Sept 15, 2017 | 74 comments



IMO, this is a blind spot in the research community right now. The programming language community doesn't care about the ergonomics of language use and is only interested in theoretical evaluation of their work with rigorous proofs.

The Human Computer Interaction (HCI) community should care about this, but is beset by two problems: people in PL don't take people in HCI seriously because the people in HCI don't do proofs, and the people in HCI generally don't have the background to start tackling the mathematical principles of programming language design.

This seems like a natural time for some kind of cross discipline collaboration, but the two forces above keep potential collaborators apart. PL people talk about judgements and sub-structural typing and HCI people's eyes glaze over, HCI people talk about human subjects, experimental methodology and statistics and PL people's eyes glaze over.

I've advocated for the creation of a new field of study, call it Programmer Computer Interaction. No one takes it that seriously though, and the intersection of researchers that care simultaneously about things like good experimental design, statistical significance, type theory and constructive logic seems to be just me.


My only worry with the HCI community is that they seem to over-focus on beginners and to prioritize systems that are intuitive rather than powerful. I am absolutely not worried about whether they write proofs or not!

This might just be an issue of perception on my end, although it seems consistent with the HCI-conscious PL work I've seen.

Personally, I have two priorities in PL design: expressiveness and aesthetics. Both of these seem perfectly suited to a design-oriented approach. But are they a good fit with the sort of work regularly done in HCI? I genuinely don't know.


HCI's methods in particular are too expensive to test the conjectures we normally offer about our designs though. A sizeable team of experienced programmers designing, building, or maintaining a significant software product can't be paid for on anyone's research budget, so everyone uses small, short-term projects written by undergrads (or maybe MS students) instead. The results may be real enough, but they won't address our actual questions.

Looking in the other direction, I would like to think that even the most theory-obsessed PL people can appreciate that "intuitive" and "powerful" can both arise from clean design, but I've also been told, "your proofs are boring" as though that's a bad thing.


It might be my cynicism, but in the PL literature I find "intuitive" to be a very low-calorie word. Obviously you would describe your own system as "intuitive"; what are you going to do, write a paper that says "we have a complicated and difficult-to-understand system S that..."?

"intuitive" is very subjective, I think. I know lots of PL people that think that functional programming, for example, is intuitive. After many years of study and practice, I too find functional programming intuitive, but I can remember a time when I did not, but I could still program. From this I conclude that something being "intuitive" is very subjective.

Expressive power, on the other hand, I think we can put an objective definition on. My favorite take on this is Felleisen's "On the Expressive Power of Programming Languages." Of course, what use is something being powerful?


It always felt to me like when people use words like "intuitiveness" or "usability", they're all pointing at slightly different things. Matt Fuller's partitioning of user-friendliness into "easy to use", "easy to learn", and "easy to use without learning"[0] feels like it might be easier to set objective criteria for. For your example, you might say functional programming tends to be easy to use, but hard to learn and hard to use without learning.

(I'm not current on PL/HCI literature, so it wouldn't surprise me if they already have these concepts, just with different labels.)

[0] https://www.over-yonder.net/~fullermd/rants/userfriendly/2


I don't think you can define objective criteria for any of these terms, since they depend on previous experience and are inherently subjective. Terms more commonly used to describe "intuitiveness" or "usability" with more precision are: simplicity, consistency, universality, familiarity and flexibility.


Who cares about power? Squeezing out 20% more expressiveness is only going to reduce that million-line code base down to 833k lines. At that scale a single human simply can't keep every possible spooky action at a distance inside their head. And not every line has been written by that single human, so understanding code someone else has written is far more important. What pisses me off the most is the emphasis on reducing constant overhead at the expense of adding more linear overhead once a certain threshold has been reached, and that threshold isn't very high.


> 20% more expressiveness … reduce that million-line code base down to 833k lines

I'm not convinced language expressiveness can really be quantified that way.

> emphasis on reducing constant overhead at the expense of adding more linear overhead after a certain threshold has been reached

Who exactly is emphasizing this?


Exactly. We have a working group tackling these problems:

http://program-transformation.org/WGLD/

However, it seems to be drifting back to traditional SIGPLAN topics lately. The problem is the conferences: there was a concerted effort to make them more sciency and less designy over the last 15 years, and... well, design no longer sells to academics.

The HCI community has similar problems actually. You can't really equate empirical evaluation of user behavior (a topic in itself) to design, where many aspects are quite difficult to validate empirically (things like Fitts' law can evaluate simple behaviors, but don't really scale to complex behaviors).


> The Human Computer Interaction (HCI) community should care about this, but is beset by two problems: people in PL don't take people in HCI seriously because the people in HCI don't do proofs

Well that's not true. If the HCI people actually had empirical data supporting their positions, you'd be a fool to ignore them. But what are PLT researchers to do with HCI claims that are completely unsupported, or whose supported positions are extremely limited in scope? We can't halt work on PLT to wait for them to catch up.

Some collaboration on empirical studies of programming ergonomics is needed. Given how much money is invested in this industry, I'm surprised more effort isn't put into this.


I'm not sure that the article is about caring about ergonomics. There is this:

> I also believe that interesting programming language research is not about finding a keyword that will make writing for loops 3x easier

If we go out on a limb here a bit, "a keyword that will make writing for loops easier" seems to be a metaphor for everyday ergonomic issues in coding.

(I don't have a hypothesis about what the article is about, though).


I don't see how good experimental design could benefit from type theory and such. There is no need for an intersection, just the first group of researchers working on programming language design.


Well thought-out type systems allow decidable inference. Inference lets you omit details about the program that are already obvious from its structure. Omitting obvious details lets you focus on the important parts. Focus helps you create better programs more quickly.
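For example, here's a minimal Scala sketch of the idea (any language with Hindley-Milner-style inference behaves similarly):

    // no type annotations written, yet everything is statically typed
    val xs = List(1, 2, 3)                // inferred: List[Int]
    val doubled = xs.map(_ * 2)           // inferred: List[Int]
    val labels = xs.map(n => s"item $n")  // inferred: List[String]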


There are people doing research into this, for example comparing how quickly various styles of loops are understood by beginning users (I wish I could find the talk on that right now). Like you said, it's not taken very seriously though.

The problem I have with that is that it seems to be limited to verifying existing structures and solutions. That kind of quantitative research has its use, but it doesn't really help with exploring novel ideas in human-friendly interfaces.

For example, I stumbled across Céu a few years back and I have never seen concurrency and timing done in such an intuitive style before[0]. Other things that have blown my mind in the last few years include Halide's decoupling of algorithm and scheduling[1], Jonathan Edwards' experiments with schematic tables as an alternative to if-statements[2], vega-lite's grammar of interactive graphics to declaratively construct interactive plots[3], and aprt.us' hybrid graphics/text programming environment[4].

These are all novel ideas that you cannot find through quantitative measurements about which syntax is optimal. Not that the latter is without use, since it can make people shut up about pointless disagreements (although I think the better solution to ending a holy war on bracket style is to get rid of it altogether and use a single default style like elm-format[5]). And maybe we're also not looking outside of our own field enough.

For example, I recently read Steven Pinker's "The Stuff of Thought". There was a chapter discussing all kinds of (human) language paradoxes and hidden rules based on how humans have different ways of thinking about aggregates, and how we use those subconsciously in our daily language. As I was reading it, it made me think "this makes so much sense of how different languages use collections differently, they're just applying different styles of intuition described here!"

The funny thing about most language paradoxes is that they require very specific ways of framing a question, and that they disappear when framed differently. Which sounds a lot like how some problems are easier in one style of programming or the other. And that makes me wonder if we can't learn a lot from these branches of linguistics about how we might set up our computer languages in such a way that humans are less likely to make errors of thinking in them.

[0] http://www.ceu-lang.org/

[1] http://halide-lang.org/

[2] https://vimeo.com/177767802

[3] https://vega.github.io/vega-lite/

[4] https://www.youtube.com/watch?v=i3Xack9ufYk and http://aprt.us/

[5] https://github.com/avh4/elm-format


> There are people doing research into this, for example comparing how quickly various styles of loops are understood by beginning users (I wish I could find the talk on that right now). Like you said, it's not taken very seriously though.

I don't know what other research exists, but I personally wouldn't put much weight on research into beginners' comprehension, because you're typically a beginner in a new language for a few days to a few weeks but an experienced user for years or decades; beginner experience is just not something I care about in the big picture. (Obviously there are reasons other people might care about it... it affects initial adoption of the language and so on, but I just don't care about them personally.)


The problem with studying humans is that you need relatively blank slates so you can have proper control groups. Experts and non-beginners are hardly blank slates, so they are much more difficult to study in the lab.


I get that, and I don't have any great ideas for solutions, but I hope we can find an effective way to study the people of interest rather than to study a group of decidedly different people and try to extrapolate the results to the original group.


There are other ways, and they are used, but they aren't scientific enough (even if useful) to be published.


I've been a designer/programmer pretty much my entire career and currently work on a research-centric UX design team. So while I've been designing my own language (THT, a language that compiles to PHP, aiming to fix most of the issues with that language), I've been applying usability concepts to the design.

A few principles:

- Acknowledging that the user is probably familiar with other languages, so stay within those conventions when possible. This is inspired by Jakob Nielsen's maxim that web designers should acknowledge that their users spend 99% of their time on other websites.

- Making the most common activities the easiest. Larry Wall refers to this as applying Huffman coding to the syntax itself.

- Safe defaults. Making dangerous operations less convenient than the safe path. PHP has pretty much the opposite behavior, where many of the default design choices lead to security issues.

- Cognitive Load. Minimizing visual noise and the number of micro-decisions the user has to make. i.e. There should be one (good) way to do it.

- Preferring shorter, clearer terms over technical jargon. e.g. I use the terms "Flag" instead of "Boolean" and "List" instead of "Array". One can get pedantic over the exact meanings of symbols, but during the actual act of programming, simpler terms can help reduce friction.

The project is at https://tht.help if anyone is curious.


> e.g. I use the terms "Flag" instead of "Boolean" and "List" instead of "Array"

In some cases "Flag" makes much more sense than "Boolean", but when using logic gates does "Flag" make any sense? Wouldn't it make people think of actual flags rather than an on/off signal?

"List" and "Array" mean different things in CS. Lists cannot be accessed arbitrarily whereas arrays can be.

> One can get pedantic over the exact meanings of symbols, but during the actual act of programming, simpler terms can help reduce friction.

"Simpler" is a matter of perspective. Any experienced programmer should not have any problem with Arrays vs Lists vs Queues, etc, and they are all necessary concepts to be able to talk about and program with.

Most jargon exists because it is able to describe things that are not used in everyday life much more efficiently than without it.


Depends on the context. If you're writing a CRUD web app with simple business logic, is the mental priming of data structures like arrays relevant to what you're developing?


> e.g. I use the terms "Flag" instead of "Boolean"

I'm not familiar with THT and this is a nitpick, but doesn't this contradict the first principle? Pretty much every language I've ever used has some notion of "boolean", and I don't know one that uses "flag". Don't want to bash, just curious how that decision was taken.


Most of the principles are always in tension to some degree, and come down to a design decision. In this case, the shorter, clearer term won out. The two terms are practically synonymous, so if you know what a "boolean" is, you probably know what a "flag" is.

In actual use, you rarely interact with the names anyway -- you use `true` and `false` as values like most other languages.


Flag is a pretty obscure term that you learn from using the Linux command line. Booleans are in math and engineering. Flag is not clear at all, and using it instead of boolean, especially when you don't even use the term boolean directly all that much, seems kind of silly.


Is "flag" actually clearer if you're not already familiar with the term from command-line "flags"? Personally I always thought it was kind of a weird usage, pretty far from the literal meaning.


I can't remember where I first heard it, but I'm sure it was before I'd used command lines. I always thought it came from the little flags on the side of some mailboxes.

etymonline doesn't have an entry for flag as a noun meaning boolean, but this might be related:

> flag (v.2) 1875, "place a flag on or over," from flag (n.1). Meaning "designate as someone who will not be served more liquor," by 1980s, probably from use of flags to signal trains, etc., to halt, which led to a verb meaning "inform by means of signal flags" (1856, American English). Meaning "to mark so as to be easily found" is from 1934 (originally by means of paper tabs on files). Related: Flagged; flagging.


I love this idea of Huffman coding the syntax. I've been thinking recently that it wouldn't be difficult to crawl all the packages of NPM, for example, encode them into their ASTs, and use algorithms to analyse them, e.g. trying to apply PageRank to visualise the most important classes/methods in a codebase.
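A rough Scala sketch of the frequency-counting half of that idea; it assumes the crawl has already flattened each AST into node-kind strings, and the node names here are made up:

    // count how often each construct appears across a corpus of ASTs;
    // under Huffman-style thinking, the most frequent constructs are
    // the ones that deserve the shortest syntax
    val nodeKinds = Seq("Call", "Ident", "Call", "Lambda", "Ident", "Ident")
    val freq = nodeKinds.groupBy(identity).map { case (k, v) => (k, v.size) }
    val ranked = freq.toSeq.sortBy { case (_, n) => -n }
    // ranked: Seq((Ident,3), (Call,2), (Lambda,1))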


The modern practice of programming is increasingly around social acts of building software with your team and surrounding community. Yet, evidenced by even this discussion thread, when someone says 'design' and 'hci', the focus gets tunnel vision around the individual user experience. Rethinking the individual human factors of languages is great. But for broader impact, I've long shifted towards rethinking the socio-technical design.

When I think of what's interesting about Go and NodeJS, and about even more ambitious ideas for the next 10 years, my head is around languages built for communities of programmers. Even better, designing languages that improve over time by leveraging the combined activities of programmers and users, in code and out.


Really appreciate this article. Some languages feel like their syntax was designed by developers and others feel like it was designed by designers. It seems totally appropriate that the UI (language design) should be a different skill and created with a different mindset than the back end (language implementation). I hope that this focus can lead to more beautifully designed languages, not just faster languages.


Most technologists don't seem to believe that design applies to programming languages at all. Programming languages are tools and tools are typically designed for efficiency, which is measurable - but you'll still hear things like 'syntax is personal'.

- It's worth optimizing for more people than fewer.

- There are a thousand times as many future programmers as there are existing ones.

- One can measure how easy it is for a clean-room human to read and create software in different languages and build something for them.


I'd be happy to just have less variation in the syntax corresponding to established semantics; if you're designing a new language, please don't make some 'nifty' new way of typing out dictionaries. All this does is force me to hunker down with several beginning chapters of your programming language book instead of just skipping to the relevant differences from languages I already know.
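To make the gripe concrete, here is the same dictionary in a few familiar surface syntaxes (Scala as code, the others as comments); a new language inventing yet another notation for this buys its users very little:

    // Python / JS:  {"a": 1, "b": 2}
    // Ruby:         {"a" => 1, "b" => 2}
    // Scala:
    val m = Map("a" -> 1, "b" -> 2)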

If there's a good reason for the difference, by all means, go for it. But if there isn't, please reconsider.


Strongly disagree.

So that leaves little room for alternative language paradigms, then. If you want dictionaries to look like JS or Python, then what are languages like Lisp and APL to do? Not use coherent syntax? Or what about just making different use of the limited number of symbols on the keyboard to make their use more consistent?

I'd rather have a more consistent language, or one that introduces new concepts, than one familiar in syntax just for the sake of being familiar.


> So that leaves little room for alternative language paradigms, then. If you want dictionaries to look like JS or Python, then what are languages like Lisp and APL to do? Not use coherent syntax?

Yes, but what about the benefits from the regularity and uniformity?


Sure, I understand. But it's not just a case of new languages coming up and reusing concepts from the set of all currently-available syntaxes; they in fact invent wholly new ones.

It's reminiscent of this: https://xkcd.com/927/

Edit: further to your point, I agree that my gripes are merely about an up-front cost, whereas consistency can be helpful throughout your use of a language.

Edit again: I essentially seek consistency within and among languages. I don't mean to state one should come at the cost of the other at all times.


> All this does is force me

If you're a programmer now, you're a secondary audience to the masses of people who will program in the future. If the language is going to have the most impact, what they prefer supersedes what you (and I) prefer.

> If there's a good reason for the difference, by all means, go for it.

Of course, and agreed. Nothing should exist without reason. Antoine de Saint-Exupéry, etc.


Doesn't the author somewhat argue the opposite? If form follows function, then surely a "new" way of doing dictionaries should have a new syntax?


I'll be more specific: if it's the same construct, please make it look familiar. If it's novel, then surely, yes, appearing novel will help me.


I like your word choice: "help". Not just not hurt; it will help. Different things should look different.


This seems to be blurring a use of "design". Not all design is chrome on top of things. Some literally leads to better use. Some design is required for safe use.

I think it is oversold, but the book "Design of Everyday Things"[1] goes over this for many common items. There is a long section on doors with many interesting points to consider.

[1] https://www.amazon.com/Design-Everyday-Things-Revised-Expand...


> This seems to be blurring a use of "design". Not all design is chrome on top of things.

You're thinking of syntax as chrome. It is more than that.


Apologies, that is exactly what I was trying to say. Specifically, I was arguing against "Most technologists don't seem to believe that design applies to programming languages at all."

I was speculating that this belief is from folks that think design is just chrome.


Ah that explains things perfectly, sorry I misinterpreted you.


No worries. I should write more clearly and take this as help along the way!


I disagree. Most technologists agree design is important in PX (PL + tools), but programmers are more willing to put up with poorly designed experiences in exchange for the latest technology.


As the designer of D, I can attest that syntax matters very, very much and it is not just a 'personal' issue.

Many times I've discovered that altering the syntax for something (not its semantics) can completely transform its use.


Interesting, do you have an example in mind?


The original syntax for lambdas in D worked, but it was so clumsy nobody used it, and people even said "D doesn't have lambdas". Changing it to a much simpler syntax changed everything.

old:

    function double(double a, double b) { return a + b; }
new:

    (a, b) { return a + b; }
Currently, the syntax for in/out contracts is being revised for usability:

https://github.com/dlang/DIPs/blob/98052839441fdb8c6cc05afcc...

D has a different syntax for templates than C++, one that is far more approachable and convenient:

C++:

    template<class T> void foo(T t) { }
    template<class T> class S { };
D:

    void foo(T)(T t) { }
    class S(T) { }


I just picked up Knuth's Selected Papers on Computer Languages[1] which has some fun exploration of this topic. Mostly, I will confess, much denser than I am probably prepared for. The initial essay on languages before the 1950s was remarkably interesting. In particular, to see some of the early languages mathematicians were using.

[1] https://smile.amazon.com/Selected-Papers-Computer-Languages-...


A key point (perhaps well known but still very important) is the "let mutable" vs "let" for declaring mutable variables. It really does change tendencies of developers by putting the burden of effort on one of two paths, and encouraging people to write code a certain way can have significant effects on the end product.


While true, I'm not sure that it's necessarily because of the increased burden.

In Scala, for example, immutable variables are declared with "val" while mutable ones are declared with "var". Here there's no great cost in typing or screen space (and to the uninitiated, the difference may not even jump out at you). But Scala developers will almost universally use val; in the language idiom, var is an unusual thing that feels wrong. And I said above that it won't jump out at beginners; but to experienced Scala developers that l vs r is a huge, glaring difference -- your brain just gets trained that way.

I think the key takeaway is that some languages are designed with first-class support for immutable variables, and in those languages, you tend to use immutable variables. Even in Java, modern style tends to use "final" for local variable definitions, though the support in the language for immutable types is much weaker, so the tendency isn't as strong. In a language like C, immutability is a practically non-existent practice, and "const" does not serve the same function.
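For concreteness, a minimal Scala sketch of the two forms:

    val x = 1   // immutable binding: the idiomatic default
    var y = 1   // mutable binding: legal, but feels wrong in idiomatic Scala
    y += 1      // fine
    // x += 1   // would not compile: reassignment to val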


I find your statement confusing -- you appear to be arguing that "let mutable" vs "let" would have no impact because "var" vs "val" has no impact. But you point out yourself that there's no cost to typing either in Scala... whereas there's a huge (in context) cost to typing "let mutable" vs "let". If you had to type "constant var foo =" vs "var foo =" (or similar) then presumably average Scala code would have quite a different distribution of mutable vs. immutable values.


The argument is that whether people choose mutability when it's not necessary has nothing to do with it being harder to type. In Scala, using a mutable variable is no harder, and people still do the right thing.


Any studies exploring that? Many claims like this are highly prone to confirmation bias. Specifically, you will notice the times it favorably changes your tendencies, ignoring all of the times it was irrelevant or cumbersome.

Note that I am specifically not arguing against immutability. I have, however, seen bugs caused both ways. When I was a TA, Java students were notorious for not understanding why calling "trim" on their string left the whitespace, often to disastrous consequences. I would be delighted to see empirical studies to weigh against all of the anecdotes in my head. :)
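(The classic shape of that bug, in a quick Scala sketch; JVM strings are immutable, so trim always returns a new string:)

    val s = "  hello  "
    s.trim            // returns a NEW trimmed string; here the result is discarded
    println(s)        // still "  hello  " -- s itself never changed
    val t = s.trim    // the fix: bind the result and use t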


It's a kind of trivial side-note, but F# would provide a compiler warning (configurable to be a compilation error, if you like), if you were to call myString.Trim() without binding the result to a value, unless you explicitly pipe the result to "|> ignore". This language in particular has oodles of great default behaviors, most of which can be overridden, but you must do so explicitly.

Unfortunately, I have no more data on the real-world results than anyone else. :-)


Can I butt in with an aside about how nice F#'s physical unit type handling is? Seriously, that's some nice design right there.


I remember this study about the use of final and const keywords in Java/C.

http://dl.acm.org/citation.cfm?id=2884798&CFID=985095807&CFT...

It's a small example, maybe it's a start.


Fun read, thanks for sharing!

It is just a start, so I am hesitant to pick at it. I am excited to see this getting explored; I was hoping for a dive into bug reports against software, though.

That is, I share the same bias most software developers share: that mutable code is ultimately dangerous and should be avoided. The flip side: only when working with established codebases does this really bother me. Greenfield projects that are not close to shipping are usually quite clean and not a concern. Battle-worn codebases, though...

To that end, the CWE attempt (https://cwe.mitre.org/) was an interesting start, especially if it were backed by numbers in the CVE database. So my question is: how many of the CVE reports could be fully laid at the feet of this class of bugs? (I'll see if I can put together a notebook going over this idea. Still hoping someone else has already given this a better treatment than I am likely to.)


I didn't necessarily mean the effects would be positive, just that there would be effects. I would also like to see studies done regarding decisions such as the one we're discussing.


There has been some great work in this area (but not near enough!).

One of the more well known pieces that is worth a read is Cognitive Dimensions of Notations [1]. I've even used them in my research on the usability of debugging tools.

It is composed of 14 dimensions to evaluate your design (of a PL or UI).

[1] https://en.wikipedia.org/wiki/Cognitive_dimensions_of_notati...


Wow I love this framework. I wish there was a book or something, seems a little neglected, but I have compiled a good week or two of papers to read. Thanks!


I also think that reading and writing code are often asymmetrically addressed. Some languages are far easier to write in than to read in.


Agreed, and our tooling is heavily optimised towards writing code and testing it now, not reading it and testing it two years from now when you trace the code into the briar patch.

It drives me crazy that in my main editors/IDEs of choice I can't just hide all traces of comments when I'm writing code until I'm ready to comment it up, or that I can't layer comments, since half the languages now have annotations for everything from documentation to configuration, either built in or as add-ons.

It's a lot of noise when you're trying to grok just the code, and it gets in the way.


Definitely! Another factor is how fun it is to write in. That is a hard one to measure.


That book cover -- A bicycle frame with only right angles!? Is that supposed to be ironic? Anyone with "mechanical sympathy" should be gritting their teeth.


Yeah, I was cringing the moment I saw it. No rake on the fork? All that extra frame weight, only to introduce joints that maximize /lateral/ flex? Imagine the stress that top tube joint must be under in even normal riding conditions. This is a bike designed by a graphic designer who intends to decorate a wall with it.


> This is a bike designed by a graphic designer who intends to decorate a wall with it.

There is a key analogy here for programming language design!

The analogy also extends to specific languages. There are some bikes which are keenly optimized for short distance performance. There are other bikes optimized for long distance performance. There are some bikes which are optimized for comfort. Yet other bikes which are optimized for folding compactly.

All of the above are valid designs, just for different contexts.


Coming from the Perl 6 project, it feels to me like design is a major part of modern computer languages, like Perl 6 and ES6+.

Perl 6 started with its design documents[0] -- the apocalypses led to the exegeses, which finally settled into a synopsis. The Perl 6 language itself is not any of these: The test suite defines the language, but the reasons they are the way they are came first. It has design features like hyperoperators to make parallel code easier to write -- which of course should encourage people to write parallel code.

Perl 5 is explicitly a postmodern language[1], and Perl 6 is the same but more. From my point of view, Python looks like it has a strong modernist philosophy. So I'm not really getting where OP is coming from.

[0]: https://design.perl6.org

[1]: http://www.wall.org/~larry/pm.html


I think this article misses a crucial point with its phrasing, or its construction of the "design vs. mathematics" tradeoff. Mathematics is superior for underpinning programming languages because it is universal. Design, like art, relies on a shared view of the world to be appealing. Design from the '70s or '80s is often unappealing to modern viewers because of the lack of shared experience with the designer, for example.

Mathematics solves this by appealing to a shared underlying "truth", that allows not only programmers with different backgrounds, but also computers to understand and process a programming language.


Mathematics is very good at maintaining the impression that it is universal and superior, but that's pretty questionable once you start looking at how it actually works. For two nice references, see:

* https://en.wikipedia.org/wiki/Where_Mathematics_Comes_From

* https://en.wikipedia.org/wiki/Proofs_and_Refutations

As for mathematics being superior for underpinning programming languages - that's certainly how some people make it look, but nobody really tries to explain what exactly the link is between mathematical foundations of programming and actual programming. (I did this for my PhD and I'm still not sure how it's supposed to work.) Everyone just hand-wavily assumes it's somehow there. The best account of how this might look is perhaps this: https://plato.stanford.edu/entries/computer-science/


What do you think programming languages are standing upon if not math?

And I don't think math really "solves" any of the issues under debate here. You still need language and notation to express your precious maths, and that falls into the very same pitfalls seen in PL design.


> What do you think programming languages are standing upon if not math?

Many do, especially in academia, but quite a few seem to be standing on a combination of "this looks like it'll do what I want" and "this interpreter is the spec."


Indeed, it's almost as if the GP has never seen COBOL or early Fortran. Zero theoretical basis for either of those very popular languages (for their time).


See my post regarding Knuth's studies of languages for programming pre-1950. Simply put, mathematics had very little to offer in the way of describing iterative processes. Arguably, we are still debating the mathematics of iterative processes to this day.


You have a matrix: one axis is names of programming languages, the other axis is names of things a language can do. The entries are 0 or 1: if language L can do thing T, then there is a 1 in cell (L, T); otherwise there is a 0.

For example, a thing could be "you can feed it a math text and it will give a list of defined terms and expand the definition of any term in the list in the signature {<-}".
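A minimal Scala sketch of that matrix; the language and capability names are made up for illustration:

    // capability matrix: (language, thing) -> 1 or 0, here as Boolean
    val canDo: Map[(String, String), Boolean] = Map(
      ("Haskell", "decidable type inference") -> true,
      ("C",       "decidable type inference") -> false,
      ("APL",     "whole-array operations")   -> true
    )
    // missing entries default to false (a 0 in the matrix)
    def entry(l: String, t: String): Boolean = canDo.getOrElse((l, t), false)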



