Is this not sniping from the sidelines? Paul has proposed program size as a metric for the power of programming languages. Ron criticizes, but doesn't suggest anything better (other than "make creating programs easier", or "have well-organized and easy-to-use libraries") - he seems to argue that there is no perfect metric. Sure. Maybe. But you can't get far with such skepticism.
Personally, I think Paul is enticed by too simple a rule of thumb when he settles on such an easy metric. Instead of something as catchy as "programs should be short", defining the power of languages may be better approached from a psychological perspective. Programming is about bridging the gap between our mental representations and the computer's representations of instructions for (1) interacting with users and (2) manipulating data.
A better (albeit more complex/open-ended) metric would be to "maximize the utilization of our mental structures in a formal language".
For example, abstraction is an obvious way our minds understand the world, and hence we have all kinds of programming tricks to express abstraction in code. We should be asking questions like "what other ways do our minds comprehend the world?", and writing languages which mold to those ways. How could we express metaphor in a formal language?
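To make that concrete: in Lisp the "tricks" can go as far as reshaping the syntax itself. A toy sketch in Common Lisp (the classic WHILE macro, which isn't part of the standard, hence the need to write it):

    ;; Graft a new control structure onto the language in three
    ;; lines; the new form is rewritten into LOOP before it runs.
    (defmacro while (test &body body)
      `(loop (unless ,test (return))
             ,@body))

Metaphor, though, I have no sketch for.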
Now that I think about it, maybe I'm proposing a slight generalization to Paul's metric - a language is powerful if it can express problems using the least number of mental structures. But code trees are easier to count than mental structures, so I guess we should stick to those for now.
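And counting really is easy, at least in a language whose source already is a tree. A rough sketch of PG's metric in Common Lisp (TREE-SIZE is just my name for it, and it ignores dotted lists and vectors):

    ;; Each atom is one node; each list is one node plus its children.
    (defun tree-size (form)
      (if (atom form)
          1
          (reduce #'+ (mapcar #'tree-size form)
                  :initial-value 1)))

    (tree-size '(defun average (x y) (/ (+ x y) 2)))  ; => 13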
Since time is the most limited human resource, maybe the most powerful language is the one which, overall, saves the most time in creating, maintaining, and improving a program. Using "saving time" as the standard of value, PG's use of fewer nodes and shorter names both helps create programs in less time, so Arc seems to be right on.
I intuitively like the lengthier names in CL for readability, but at the end of the day you not only have to memorize them all, you also have to know the order of parameters and keywords, which is not consistent across CL's functions and macros. So having shorter names to memorize, and more importantly fewer of them, would definitely save time in the learning stage.
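For example, three different lookups, three argument orders to memorize (the variable names are placeholders):

    (gethash key table)     ; hash table: key first
    (elt sequence index)    ; general sequence: sequence first
    (nth index list)        ; list: index first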
The question I don't know the answer to is how fast Arc can be changed, maintained, and improved, but I'm guessing this will be faster too.
Agreed. These are all valid ways of defining the same thing (the "power" of languages), and we need to pick whichever is useful for what we're trying to do. If we're trying to put a precise measure on the power of a language, PG's tree length seems to do the job perfectly. But how do we design a language which would produce programs with the smallest tree length, or which would take the least time to write? To answer that, it's best to consider how easily our basic mental structures can be expressed in the language.
The concepts of sets, lists, map, and reduce exist in our lives whether we know how to program or not. If a language contains, or lets us easily express, other embodied mental structures such as relations, tags, and partitions, it will be more powerful than languages which don't.
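E.g. "total the prices of everything on sale" is a thought people have without ever seeing code, and it falls straight into those structures. A sketch, where ON-SALE-P, ITEM-PRICE, and INVENTORY are hypothetical:

    ;; Filter a set, map over it, reduce it -- the program is
    ;; shaped like the thought.
    (reduce #'+
            (mapcar #'item-price
                    (remove-if-not #'on-sale-p inventory))
            :initial-value 0)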
Sounds like a good way of measuring things, although I harbor some doubts about the conflicts inherent in "creating" vs. "maintaining", with Perl being the whipping boy for throwaway code.
Isn't the true metric always time? "How long will it take me to implement algorithm X in language Y?"
Paul assumes that program size == coding time. However, do we really spend most of our time typing? And can we honestly say that shorter programs take less time to read than longer ones, even when we wrote them ourselves?
In my opinion a language's power should be related to its capacity for abstraction, and more precisely to higher orders of it. A high degree of abstraction gives you a better code rewrite/reuse ratio, which means that as your program grows you gain power, since you can reuse the abstract concepts you've already implemented.
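A small illustration of what I mean by higher orders, in Common Lisp (COMPOSE isn't built into the standard, which is rather my point):

    ;; A second-order abstraction: a function that builds functions
    ;; out of functions. Implement it once, reuse it everywhere.
    (defun compose (f g)
      (lambda (&rest args)
        (funcall f (apply g args))))

    (funcall (compose #'sqrt #'+) 9 16)  ; => 5.0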
Is there a sound relation between a language's capacity for abstraction and the size of programs written in it? I am not so sure.
> Ron criticizes, but doesn't suggest anything better
That's because my whole point is that programming is much too rich and complicated for any one metric to be particularly useful. (Ironically, the exact same thing is true of painting, which is Paul's central metaphor!)
> That's because my whole point is that programming is much too rich and complicated for any one metric to be particularly useful.
I think that's where Paul disagrees. I don't have enough experience to agree or disagree with either one of you - it would be nice if Paul were right, and there were a simple "rule of thumb" metric that could guide him to create the 100-year language, but your point sounds truthy also.
But trying to optimize multiple variables becomes an exercise in game theory. That is, a value for each metric's benefit needs to be added up for each "player": the programmer, management, QA, the customer... This becomes an equation that I, for one, will wait on the Singularity to resolve.
Your definition is good, and I like it, but it's probably too vague to really get you anywhere.
The thing this article is right about is that the exact same program with shorter keywords isn't likely to be anyone's definition of a more powerful language. What pg is right about is that writing less code to achieve more is a workable definition of power, as long as the savings come from better program structure and not from cosmetic changes. Which category Arc falls in, I'm not yet able to comment on.
As much as I like elegant languages, sometimes (when crunch is on at work), I feel like I'm paid to copy-and-paste code (80%) and do the 20% (or more) modification that differentiates me from a trained monkey...
Also, APL is some crazy shit.
I found this quote (from its wikipedia page) interesting:
"Advocates of APL also claim that they are far more productive with APL than with more conventional computer languages, and that working software can be implemented in far less time and with far fewer programmers than using other technology. APL lets an individual solve harder problems faster. Also, being compact and terse, APL lends itself well to larger scale software development as complexity arising from a large number of lines of code can be dramatically reduced. Many APL advocates and practitioners view programming in standard programming languages, such as COBOL and Java, as comparatively tedious."
Ken Iverson's Turing Award lecture, "Notation as a Tool of Thought", should be interesting to you and others here, particularly those who subscribe to the "implementation as specification" idea.
Thanks for the links. I especially liked the following observation:
"During the APL75 conference in Pisa Ken visited the Leaning Tower. He pronounced it the first software project -- late and overbudget, and from early on everyone could see that it was going to be a disaster, but by then the project was too far along and there was nothing to do but plow ahead."
We have a language we use at work (internal, proprietary) that was originally based on APL (albeit with all-ASCII characters). It is incredibly efficient to write code with, especially for mathematically oriented apps.
If programming languages are for making programming easier, then it's clearly a mistake to use just one language to write programs. Different languages are optimal for different areas of concern. Rob Pike spent 6 months writing a language optimized for concurrency, then wrote an entire windowing system in just 300 lines. If a programming language is a tool, then people have been advocating doing everything with a hammer. What if we had a way of combining many different languages, so that each area of concern could be written in the language that is optimal for it?

I think one of the strengths of Lisp is that it's its own abstract syntax tree. In some ways it's more like a substrate for implementing other languages than a conventional programming language. What if we had a substrate that let us use multiple languages together?
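The substrate idea is easy to demonstrate. Here is a complete evaluator for a tiny arithmetic language embedded in a few lines of Common Lisp (CALC is a made-up name, and a real embedded language would of course be far bigger):

    ;; S-expressions are already an abstract syntax tree, so a new
    ;; "language" is just a function that walks one.
    (defun calc (form)
      (if (numberp form)
          form
          (destructuring-bind (op a b) form
            (funcall (ecase op (+ #'+) (- #'-) (* #'*) (/ #'/))
                     (calc a) (calc b)))))

    (calc '(* (+ 1 2) (- 10 4)))  ; => 18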
Google Tech Talk on Newsqueak & High level abstractions for concurrency:
I think I am willing to grant that "making the creation of programs easier" is probably closer to what language design should achieve than "make programs shorter". However, the two are strongly correlated, so by selecting "make programs shorter" as the axiom you end up with a slightly less appealing objective, but not by much.
However, Make Programs Shorter is vastly superior in another regard: it is explicitly measurable. When faced with a design choice, deciding whether or not it Made Programming Easier might take lots of careful consideration on your part, and users of the language might have very strong opinions about whether Make Programming Easier was achieved.
If you're counting characters and tokens, design changes instantly and verifiably achieve the objective or they don't. No fuss, no second guessing.
Or, more crudely, selling your map to buy more decimals of latitude and longitude is not going to get you there.
I've worked a long time in a completely different vein, under the premise that language design should focus on making programs manageable. Making them shorter certainly promotes that in many respects, but destroys it in others. And the notion of "making the creation of programs easier" is, to me, the completely wrong premise. That ends up being a wonderful derivative of making them manageable, but should never be the goal. Programs are easy to create in BASIC, for example. They're just not scalable, which is another way of saying they lack manageability, which is something that only shows itself as a system's complexity expands. To paraphrase Alan Kay, you can build a doghouse out of cardboard and plastic, but you couldn't build a house like that.