"Perl, despite the fact that it was a very popular programming language [..] was apparently not detectably easier to use for novices than a language that my student [..] created by essentially rolling dice and picking (ridiculous) symbols at random. That result was astonishing"
"Interestingly, in this new study, the same result we saw with Perl we also observed with Java, a programming language so popular it is used in the Computer Science A – AP test in high school. Ultimately, using a technique we call Token Accuracy Mapping, which is basically a way to figure out which tokens may have caused the problems, it appeared that C-style syntax was plausibly the culprit."
It started with a knockoff of ALGOL60 called CPL. That's already pretty arbitrary and a product of 1960s thinking, which basically pre-dates modern software development or the involvement of laypeople. They couldn't compile it, so they just eliminated features until it could compile. The result, very simplistic, was BCPL. That was slightly modified into B and then extended into C, also due to hardware constraints.
So, it literally has no justification as a programming language. Almost totally a result of terrible hardware that persisted for legacy/social reasons. A little research might have saved them time and money. ;)
If you read on to the language page, they apparently mean tokens, not syntax. They have retained C/C++/Java-style syntax, only renaming some of the keywords and delimiters.
Read further down, and the C-style syntax (curly braces, etc.) was a cause of problems:
"Ultimately, using a technique we call Token Accuracy Mapping, which is basically a way to figure out which tokens may have caused the problems, it appeared that C-style syntax was plausibly the culprit."
That's my point. They don't seem to have gone any deeper than surface syntax. They renamed "while" to "repeat", but is a while-loop-by-any-other-name even the best looping construct in the given scenario?
Quorum is only about syntactic optimization. The reasoning for any specific implementation is beyond the scope of the "simple" problem they are trying to study.
...about why you believed the items on my list didn't constitute scientific evidence. Hypothesis, analysis, verification by experimental practice, matched predictions, replication via re-application by diverse crowds, and delivered results. That all sounds like the scientific method to me. Many studies in academia and industry have had these properties over the years.
If you reject all of that, I'd like to see what your version of the scientific method is, as it's different from what most fields of study are using. I still contend that my numbered list represents scientific evidence in favor of whatever those methods were applied to. In that case, there's scientific evidence, of varying strength, for everything I recommended.
> The proof is in the literature I'm referencing rather than me dumping all of it in the comment box
You never cited anything.
Most importantly, some of the enumerated "points" that you espoused are part of the methodology Quorum uses. I'm sure you can help them out, if this subject is of interest to you.
I would've started by testing BASIC- and Pascal-style syntax, as they were specifically designed for ease of use and implementation. It would be a nice test to see how wise or arbitrary their various parts were.
I like the experimental approach. I'm also glad they realize a lot of these decisions were made by people facing trade-offs that aren't obvious years down the line. So, it has to be done carefully. I agree with moron4hire that this is a very shallow take on it, where it's basically just syntax. Prior studies on programming languages, particularly for quality or composition, showed certain features had strong benefits compared to languages without them. I started on a list of examples proven by evidence in academic studies or field use:
I'd like to see them start with what's been proven. Syntax was my lowest worry, as COBOL and BASIC were used successfully by laypeople because they looked like pseudocode. So, just start with a BASIC then modify by how people solve the problem. Most defects and maintenance issues in software have nothing to do with syntax: they come from improper expression or manipulation of data, control, or interfaces. The techniques above address those.
So, I challenge anyone wanting to explore the evidence-oriented paradigm (aka do science) and improve software development to start with those techniques. Pull up all the scientific literature that puts them to the test against conventional alternatives. Look at the logic behind the solutions and the results in terms of numbers. Re-run the studies using Quorum-style methods, with attention to reducing any biases. Use test data (e.g., applications) that embeds realistic errors or integration problems that can show or refute the techniques' value in facilitating robust development. Once a technique is proven, start working on various approaches to doing it, in both syntax and analytical model (i.e., effectiveness). Do that for all of them and we'll have better tooling that increases effectiveness instead of just prettier syntax.
That's my take on it. Good luck on that round of research.
> So, just start with a BASIC then modify by how people solve the problem.
The problem with this approach is that it only scales to small problems, and provides no guidance beyond trial and error for scaling to more complicated problems.
Solving complicated problems requires different kinds of abstractions, and optimizing how people use existing constructs is simply finding a local maximum, not a global one.
Oh, I meant with the core syntax. I'd look at how people describe their systems, in small pieces and large parts, then try to map that to appropriate language statements.
As far as what you're describing, I'd just go with something that worked in the past and was easy to pick up. As a start, at least.
Re existing constructs
I'm not sure that's a local maximum. Have you ever looked at the totality of languages, design styles, paradigms, and so on? I'm not sure how much more they're going to come up with that's substantially different. Most of what's out there condenses into a small number of categories. Picking the easiest, most effective stuff from those gets you near a global maximum rather than a local one.
For example, the languages of various paradigms all compile to 3GLs or assembly. Many can be integrated in varying ways. So, we can actually run parallel efforts: start with a BASIC-like 3GL, integrate a front end like Scratch, do an R-like skin for another crowd, something mathematical in notation for that type, and so on. Eventually, a number of people will start to converge on one or more.
Meanwhile, an easy 3GL can get us started on various software constructs and techniques just because it's easy to pick up. Plus, results will be more applicable to a 3GL-dominated market.
> I'm not sure how much more they're going to come up with that's substantially different. Most of what's out there condenses into a small number of categories.
I don't know, there's quite a bit that's new or not yet fully developed. We have languages for probabilistic programming now, and for programming with compositional effects. I think logic programming probably has an important place too, but it hasn't been found yet.
> Forgot to ask, naasking from Lambda the Ultimate? If so, then welcome to HN! :)
Indeed I am. I only peruse HN from time to time. I spend enough time on LtU and reddit as it is!
If I were designing a language for the blind, I wouldn't end a nested block with a bare "end" but rather with "end action withdraw" or "end class BankAccount" ... this could be auto-suggested for non-blind programmers; it could help them too.
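Interestingly, Scala 3's optional indentation syntax already ships something very close to this: an end marker that names the definition it closes, which the compiler checks against the enclosing construct. A minimal sketch (BankAccount and withdraw are just the illustrative names from above):

```scala
// Scala 3 lets indentation-based blocks close with a named `end`
// marker, so a screen reader can announce exactly what is ending.
class BankAccount(private var balance: Double):

  def withdraw(amount: Double): Double =
    require(amount >= 0 && amount <= balance, "invalid amount")
    balance -= amount
    balance
  end withdraw

end BankAccount
```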
This is a negative review. I don't generally like bagging on people's projects, but this is an important subject and I feel the project has significantly failed it. Their hearts are in the right place, but I think they've failed even by their own metrics.
I was very curious to read this topic because I firmly believe we need to do much, much more work to make the full range of computing more accessible for people with disabilities. It's not just about equity (though that should be reason enough) but that systems designed for systemic handicap are also useful for situational handicap. I work from home and have a brand-new baby boy. I'm going to want to hold him a lot, but I currently need two hands and two eyes on my computer. Also, with the work I do in Virtual Reality, it becomes very clear, very quickly, that our UI metaphors are informed by and designed for 2D systems, and do not adapt well to 3D systems. The current VR market being primitive also presents problems, in that user-input schemes have largely been either overlooked or exclusively focused on gaming applications.
So I've been keeping my eye out for a more ergonomic programming language. I imagine a very high-level language that makes functional tasks easy when I want FP and object-oriented tasks easy when I want OOP, but perhaps doesn't go out of its way to try to marry the two.
With this article on Quorum, they talk a big game about using studies on live subjects to figure out more ergonomic programming languages. I found myself nodding along the entire way and was very excited to see what they had built.
And then I was quite deflated to see that what they had delivered is essentially a small corner of Java or C++ with no more changes than having all the keywords renamed. It seems they start with the assumption that OOP is the most natural, correct paradigm for all cases, and then only go so far as to test whether or not different variations of keywords are easier to understand as spoken language. I get renaming "for" to "repeat". I don't get how you're helping anyone with multiple inheritance, or boxed primitives for use in containers, or Arrays that are really Lists alongside a separate List class.
With the direction they've gone, it does not give the impression that the people involved understand programming language theory beyond keyword replacement. With zero apparent effort put into constructs and semantics, it just seems like a dead end of bodged-together shell scripts.
Also, the example code in the documentation would be wholly inadequate for a beginner. It is definitely written for an experienced programmer learning how to map concepts from other languages to Quorum. The first example on each page does generally demonstrate the most basic usage, but subsequent examples don't actually demonstrate what they claim to. In some cases they demonstrate how not to achieve what has been described, but then don't describe how to achieve it. In at least one case, two entries in a compare-and-contrast example were identical. And in a few cases, I'm not sure the example will even run.
To be fair, they did change more than keywords. They adopted a more Ruby-like syntax instead of curly braces, semicolons, etc.
There are a couple of issues with the research, though. One is that, as most of us who program know, syntax is just one part of it. The tools, the community, the documentation, the libraries and platforms, and so on all factor into our decisions and our productivity when programming.
A second issue is more about the research itself. A/B tests and randomized trials are great and useful. But they alone are not enough. What theory or theories are you testing? I wouldn't choose one keyword over another just because one short, lab-based study showed a p-value under .05. You also need other styles of research to get a richer picture of what is going on, such as design-based research and qualitative methods.
Not to belabor the point on syntax too much, but it's my opinion that such changes are minimally substantive. I mean that they are of equal value to the keyword changes: they might make reading code out loud easier, but they do nothing toward making the actual code constructs easier to understand.
Their efforts seem to have been fundamentally focused on making beginner-level, Java-style program code easier to understand when read out loud. That's a fine start. But it's still very low-level. List comprehensions, a unified numeric stack, pattern matching, functional stream processing, etc., the general idea of making code composition easier, has not been addressed.
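To make that concrete, here's the kind of composition I mean, sketched in Scala (the shapes example is mine, purely illustrative):

```scala
// Pattern matching over a small algebraic data type:
sealed trait Shape
case class Circle(r: Double) extends Shape
case class Rect(w: Double, h: Double) extends Shape

def area(s: Shape): Double = s match
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h

// Functional stream processing: filter, map, and sum as one readable
// pipeline instead of an index-driven loop with mutable accumulators.
val shapes = List(Circle(1.0), Rect(2.0, 3.0), Circle(0.5))
val totalLargeArea = shapes.map(area).filter(_ > 1.0).sum
```

None of that depends on keyword names; it's the constructs themselves that carry the ergonomics.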
This is also a problem with FP, though. FP circles dress up these concepts in difficult-to-understand mathematical language and wallow in it as a badge of honor. I'm sure we're all familiar by now with the joke line "a monad is a monoid in the category of endofunctors". Monads in usage are actually quite simple, and we can start to use them effectively without ever knowing they're called monads. So a lot of work needs to be done from that direction as well.
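For instance, here's everyday monad usage in Scala that never utters the word (findUser and findAccount are made-up lookups):

```scala
// Option chains computations that may fail; the for-comprehension
// hides the plumbing. This is monadic code with zero jargon.
case class User(name: String, accountId: Int)
case class Account(balance: Double)

def findUser(id: Int): Option[User] =
  if id == 1 then Some(User("Ada", 42)) else None

def findAccount(id: Int): Option[Account] =
  if id == 42 then Some(Account(100.0)) else None

// Reads like straight-line code; any None short-circuits the rest.
def balanceFor(userId: Int): Option[Double] =
  for
    user    <- findUser(userId)
    account <- findAccount(user.accountId)
  yield account.balance
```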
In other words, calculus might be difficult to learn, but it's still easier than using bare arithmetic to reason about how functions change. You don't make your programming language easier by limiting the constructs it has available. You make it easier by expanding them, so that the more difficult concepts take less lifting and don't require as much low-level reimplementation.
I believe it's called (job) security by obscurity.
UNIX didn't need to be full of obscure commands that are impossible to guess. The fact that it is locks out outsiders, but doesn't necessarily improve efficiency.
It's perfectly possible to make dev systems that are useful but not too complicated for beginners. Hypercard and VBA are just two examples.
But the language syntax is a small part of that. Ease of availability, and direct applicability - being able to see a clear personal benefit to writing code - are at least as important.
VBA isn't an outstandingly fine language, but a lot of people got a lot done with it - because it's obvious how you can save time with it, and that provides a motivation to use it.
The point of Haskell is more obscure.
> You make it easier by expanding them
I think it depends on the language and the target audience. You make it easier for developers by expanding them. But there are forces in CS that benefit from obscurity and Yet Another Framework - not least academic CS, and the GitHub scramble for self-promotion.
CS pretends to be mathematical, but languages and development systems are cultural and social phenomena, designed as much to promote or deny certain kinds of relationships with other devs and with technology as to get useful shit done quickly and easily.
You know, I recently had reason to implement a BASIC interpreter (a bit of an art project, a bit of code archaeology: I wanted to get a specific, 40-year-old code listing running without change), and in the process ended up with something that was surprisingly simple, fun, and productive. I surprised myself with how little I missed case sensitivity, and how natural "LET" and "CALL" felt for variable assignment and subroutine execution. I ended up extending my demo further to discard manual line numbering, unify functions and subroutines, and even start on a basic object system. It was quite a lot of fun and I am considering returning to the concept some day.
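To give a flavor of how small such an interpreter core can be, here's a toy sketch of my own in Scala, handling a line-numbered BASIC with LET, PRINT, and GOTO (this is not the code from the project above):

```scala
import scala.collection.{immutable, mutable}

// A toy core of a line-numbered BASIC interpreter: statements run in
// line-number order, with GOTO jumping between lines by number.
@main def tinyBasic(): Unit =
  val program = immutable.SortedMap(
    10 -> "LET X 1",
    20 -> "PRINT X",
    30 -> "LET X 2",
    40 -> "PRINT X"
  )
  val vars  = mutable.Map.empty[String, Int]
  val lines = program.keys.toVector
  var pc = 0
  while pc < lines.size do
    program(lines(pc)).split(" ").toList match
      case "LET" :: name :: value :: Nil =>
        // Value is a numeric literal or another variable's name.
        vars(name) = value.toIntOption.getOrElse(vars.getOrElse(value, 0))
        pc += 1
      case "PRINT" :: name :: Nil =>
        println(vars.getOrElse(name, 0)); pc += 1
      case "GOTO" :: target :: Nil =>
        pc = lines.indexOf(target.toInt) // toy: assumes the line exists
      case bad =>
        sys.error(s"syntax error at line ${lines(pc)}: $bad")
```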
> You make it easier for developers
Right, that is correct. But kittens eventually grow to become cats. I think you can have a simple base-system that doesn't require knowledge of the more complex devices, but still has them in the tool box for when the user grows.
> UNIX didn't need to be full of obscure commands that are impossible to guess.
Why should commands be easy to guess? That seems like a bizarre property to expect of a command language.
UNIX provides exploration and discovery via the man and apropos commands.
> The point of Haskell is more obscure.
That's a ridiculous assertion. The point of Haskell is to be rigorous. If that makes it obscure, it's only because the problems it's solving are also obscure and poorly understood.
I don't think you understand that a programming language is a formal language that gets interpreted by a machine. It's pretty much mathematical logic the whole way down, and the fact that you can think they're a cultural/social phenomenon is a testament to the amazing work PL researchers have done over the years.
> it's my opinion that such changes are minimally substantive
Once you start requiring and providing proof, you've made a contribution. Saying "I believe" should be cause for stopping. If you can't even prove the simplest of assumptions, you have a starting point. That's where they are. Studies are time-consuming and expensive, and the project has been mired in the prerequisite grunt work of challenging small assumptions.
> makes functional tasks easy when I want FP
> object-oriented tasks easy when I want OOP
A high-level language that does this already exists: it's called Scala. It does "marry" FP with OOP, though, in the sense that FP is reconstructed as a specific form of OOP.
In fact, there are many such languages, for example F# or OCaml. Dylan and Common Lisp both offer a very advanced object system on top of the usual functional features. Smalltalk is inherently functional in its usage of closures (blocks), and so are Io, JavaScript, and other related languages. There are also Swift and Rust, and even Python supports many FP constructs.
I'd argue that neither OCaml nor F# get the OO-FP integration quite right. The problem is this:
> offer ... object system on top of
> ... functional features.
This implicitly assumes that objects are a special case of functions. That is questionable. Scala goes the other way and recovers functions as special cases of objects. In my opinion, that solves the integration problem of typed OO with statically typed FP. JavaScript, Smalltalk, et al. are not statically typed.
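Concretely: in Scala, a function literal is sugar for an object with an apply method, and any object with apply can be called like a function. A minimal sketch:

```scala
// A function value is an object implementing the Function1 trait;
// these two definitions are essentially equivalent.
val double: Int => Int = x => x * 2

val doubleExplicit: Function1[Int, Int] = new Function1[Int, Int]:
  def apply(x: Int): Int = x * 2

// Conversely, any object with an `apply` method gets call syntax.
object Doubler:
  def apply(x: Int): Int = x * 2

@main def applyDemo(): Unit =
  println(double(21))         // 42, via Function1#apply
  println(doubleExplicit(21)) // 42
  println(Doubler(21))        // 42, an object used like a function
```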
Almost any language has some kind of procedure mechanism. I'm not sure how helpful it is to take that alone as evidence of supporting FP.
Maybe, but I find it VERY strange that he writes about studies with beginners and Perl or Java: I doubt very much that making Perl easy for beginners to learn was a priority, and Java isn't a VHLL.
Now, if he were running studies like this against Python or Logo (not BASIC: it's too old), then that could be interesting.
This sound-bite needs no comment. I do like Perl.