To be fair, they did change more than keywords. They adopted a more Ruby-like syntax instead of curly braces, semicolons, etc.
There are a couple of issues with the research, though, that I have questions about. One is that, as most of us who program know, the syntax is just one part of it. The tools, the community, the documentation, the libraries and platforms, and so on all factor into our decisions and productivity when programming.
A second issue is more about the research itself. A/B tests and randomized trials are great and useful, but they alone are not enough. What is the theory or theories you are testing? I wouldn't choose one keyword over another just because one short, lab-based study showed a p-value under .05. You also need other styles of research to get a richer picture of what is going on, such as design-based research and qualitative methods.
Not to belabor the point on syntax too much, but it's my opinion that such changes are minimally substantive. I mean that they are of about the same value as the keyword changes: they might make reading the code out loud easier, but they do nothing toward making the actual code constructs easier to understand.
Their efforts seem to have been fundamentally focused on making beginner-level, Java-style program code easier to understand when read out loud. That's a fine start, but it's still very low-level. List comprehensions, a unified numeric stack, pattern matching, functional stream processing, and so on (the general idea of making code composition easier) have not been addressed.
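To make the composition point concrete, here's a tiny sketch of my own (plain Haskell, nothing to do with the language from the study): the same computation written as pieces that compose directly, rather than as an accumulator loop.

    -- Sum of the squares of the even numbers in a list.
    -- In Java-style code you'd declare an accumulator, loop, test, and add;
    -- here the pieces (filter, map/comprehension, sum) compose directly.
    sumSquaresOfEvens :: [Int] -> Int
    sumSquaresOfEvens xs = sum [x * x | x <- xs, even x]   -- list comprehension

    sumSquaresOfEvens' :: [Int] -> Int
    sumSquaresOfEvens' = sum . map (^ 2) . filter even     -- function composition

    main :: IO ()
    main = print (sumSquaresOfEvens [1 .. 10], sumSquaresOfEvens' [1 .. 10])  -- (220,220)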
This is also a problem with FP, though. FP circles dress these concepts up in difficult-to-understand mathematical language and wallow in it as a badge of honor. I'm sure we're all familiar by now with what has become the joke line "a monad is a monoid in the category of endofunctors". Monads in usage are actually quite simple, and we can start to use them effectively without ever knowing that they are even called monads. So a lot of work needs to be done from that direction as well.
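To make that concrete, here's a small sketch of my own (ordinary Haskell, using Maybe): failure propagates automatically through the do-block, and nothing about reading or writing it requires the word "monad".

    import Text.Read (readMaybe)

    -- Divide two numbers read from strings. If either parse fails, or the
    -- divisor is zero, the result is Nothing; the plumbing is automatic.
    safeDiv :: String -> String -> Maybe Double
    safeDiv a b = do
      x <- readMaybe a
      y <- readMaybe b
      if y == 0 then Nothing else Just (x / y)

    main :: IO ()
    main = do
      print (safeDiv "10" "4")   -- Just 2.5
      print (safeDiv "10" "0")   -- Nothing
      print (safeDiv "ten" "4")  -- Nothing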
In other words, calculus might be difficult to learn, but it's easier than using arithmetic to reason about how functions change. You don't make your programming language easier by limiting the constructs it has available. You make it easier by expanding them, so that the more difficult concepts are easier to pick up and don't require as much low-level reimplementation when you need them.
I believe it's called (job) security by obscurity.
UNIX didn't need to be full of obscure commands that are impossible to guess. The fact that it is locks out outsiders, but doesn't necessarily improve efficiency.
It's perfectly possible to make dev systems that are useful but not too complicated for beginners. HyperCard and VBA are just two examples.
But the language syntax is a small part of that. Ease of availability, and direct applicability - being able to see a clear personal benefit to writing code - are at least as important.
VBA isn't an outstandingly fine language, but a lot of people got a lot done with it - because it's obvious how you can save time with it, and that provides a motivation to use it.
The point of Haskell is more obscure.
> You make it easier by expanding them
I think it depends on the language and the target audience. You make it easier for developers by expanding them. But there are forces in CS that benefit from obscurity and Yet Another Framework - not least academic CS, and the GitHub scramble for self-promotion.
CS pretends to be mathematical, but languages and development systems are cultural and social phenomena, designed as much to promote or deny certain kinds of relationships with other devs and with technology as to get useful shit done quickly and easily.
You know, I recently had reason to implement a BASIC interpreter (a bit of an art project, a bit of code archaeology; I wanted to get a specific, 40-year-old code listing running without change), and in the process I ended up with something that was surprisingly simple, fun, and productive. I surprised myself with how little I missed case sensitivity, and how natural "LET" and "CALL" felt for variable assignment and subroutine execution. I ended up extending my demo further to discard manual line numbering, unify functions and subroutines, and even start on a basic object system. It was quite a lot of fun, and I am considering returning to the concept some day.
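For a sense of how small the core of a toy like that can be, here's a rough sketch, written in Haskell rather than the language the interpreter was actually built in, with names and structure that are purely illustrative: LET, CALL, and PRINT as plain data with an evaluator, no line numbers required.

    import           Data.Map (Map)
    import qualified Data.Map as Map

    -- LET binds a variable, CALL runs a named subroutine, PRINT shows a
    -- variable's value. With statements kept in order, line numbers go away.
    data Stmt
      = Let String Int
      | Call String
      | Print String

    type Vars = Map String Int
    type Subs = Map String [Stmt]

    run :: Subs -> Vars -> [Stmt] -> IO Vars
    run _    vars []                  = pure vars
    run subs vars (Let name v : rest) = run subs (Map.insert name v vars) rest
    run subs vars (Call name  : rest) = do
      vars' <- run subs vars (Map.findWithDefault [] name subs)
      run subs vars' rest
    run subs vars (Print name : rest) = do
      print (Map.findWithDefault 0 name vars)
      run subs vars rest

    main :: IO ()
    main = do
      let subs = Map.fromList [("GREET", [Let "X" 42, Print "X"])]
      _ <- run subs Map.empty [Let "X" 1, Print "X", Call "GREET"]   -- prints 1, then 42
      pure ()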
> You make it easier for developers
Right, that is correct. But kittens eventually grow to become cats. I think you can have a simple base-system that doesn't require knowledge of the more complex devices, but still has them in the tool box for when the user grows.
> UNIX didn't need to be full of obscure commands that are impossible to guess.
Why should commands be easy to guess? That seems like a bizarre property to expect of a command language.
UNIX provides exploration and discovery via the man and apropos commands.
> The point of Haskell is more obscure.
That's a ridiculous assertion. The point of Haskell is to be rigorous. If that makes it obscure, it's only because the problems it's solving are also obscure and poorly understood.
I don't think you understand that a programming language is a formal language that gets interpreted by a machine. It's pretty much mathematical logic the whole way down, and the fact that you can think they're a cultural/social phenomenon is a testament to the amazing work PL researchers have done over the years.
> it's my opinion that such changes are minimally substantive
Once you start requiring and providing proof, you've made a contribution. Saying "I believe" should be cause for stopping. If you can't even prove the simplest of assumptions, then that is your starting point. That's where they are. Studies are time-consuming and expensive, and the project has been mired in the prerequisite grunt work of challenging small assumptions.