I definitely agree with the article that once you learn a good set of "archetypal" languages, then most popular new languages become easier to learn. It's a worthwhile exercise.
But at the same time, there are some deeply weird languages out there. These contain some absolutely fascinating ideas. And if you follow the advice in the article to learn "every" language, then these languages will still surprise and perplex you.
- If you go deep enough in Haskell that you stop being confused by monads and start being confused by "free monad interpreters", then Haskell will provide you with a nearly endless source of complex, powerful abstractions—more than enough for any 10 other research languages. (See the sketch after this list.)
- Mozart (http://mozart.github.io/) is a concurrent logic language with a constraint solver. Even if you're familiar with Prolog and Erlang, this one's going to blow your mind.
- Coq (https://coq.inria.fr/) is a language for formally proving the correctness of the things you compute. (You typically also write your programs in Coq.) There are also other, newer languages in this space. These languages have a steep learning curve, and they're challenging to work in, but they're extremely interesting.
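Here is the promised sketch for the "free monad interpreter" bullet above: a minimal, self-contained Haskell illustration. All the names (ConsoleF, greet, runIO, runPure) are invented for this example; the real `free` package on Hackage spells things differently. The idea is that the program is just a data structure, and you choose its meaning afterwards by picking an interpreter.

    {-# LANGUAGE DeriveFunctor #-}

    -- The free monad: a program is a tree of instructions, pure data.
    data Free f a = Pure a | Free (f (Free f a))

    instance Functor f => Functor (Free f) where
      fmap g (Pure a)  = Pure (g a)
      fmap g (Free fa) = Free (fmap (fmap g) fa)

    instance Functor f => Applicative (Free f) where
      pure = Pure
      Pure g  <*> x = fmap g x
      Free fg <*> x = Free (fmap (<*> x) fg)

    instance Functor f => Monad (Free f) where
      Pure a  >>= k = k a
      Free fa >>= k = Free (fmap (>>= k) fa)

    -- A tiny instruction set; each constructor carries its continuation.
    data ConsoleF next
      = PrintLine String next
      | ReadLine (String -> next)
      deriving Functor

    printLine :: String -> Free ConsoleF ()
    printLine s = Free (PrintLine s (Pure ()))

    readLine :: Free ConsoleF String
    readLine = Free (ReadLine Pure)

    greet :: Free ConsoleF ()
    greet = do
      printLine "Name?"
      name <- readLine
      printLine ("Hello, " ++ name)

    -- Interpreter #1: actually perform the effects.
    runIO :: Free ConsoleF a -> IO a
    runIO (Pure a)                  = pure a
    runIO (Free (PrintLine s next)) = putStrLn s >> runIO next
    runIO (Free (ReadLine k))       = getLine >>= runIO . k

    -- Interpreter #2: run purely against canned input, e.g. for tests.
    runPure :: [String] -> Free ConsoleF a -> (a, [String])
    runPure _   (Pure a)                  = (a, [])
    runPure ins (Free (PrintLine s next)) =
      let (a, out) = runPure ins next in (a, s : out)
    runPure (i:ins) (Free (ReadLine k))   = runPure ins (k i)
    runPure []      (Free (ReadLine k))   = runPure [] (k "")  -- out of input

The same greet program runs for real under runIO, or purely under runPure (handy for testing); that "one program, many interpreters" move is the start of the rabbit hole.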
Learning exotic programming languages is fun, and it means that you'll rarely be surprised by any new popular language. But despite this article's suggestion to "learn every language", you won't run out of strange new languages any time soon.
The book "Concepts, Techniques, and Models of Computer Programming" (what this blogpost basically is about) uses Mozart/Oz. When I first studied it at uni I didn't care much for it, but when I sat down later and really tried to understand it, it blew my mind as you said.
The book is cool: it basically introduces a simple model of an abstract computer, defines some operations, and then expands on that throughout.
Spending some time learning Clojure, I feel, has had a measurable positive effect on the code I write. I'm considering something even more exotic for the coming year.
I'd like to disagree with the title of this blog post.
I did something like this in 2016: learnt Clojure, OCaml, Haskell, Go and Racket, and took Dan Grossman's excellent Coursera course on language fundamentals and paradigms across Ruby/ML/Racket. Though I can now code basic-to-medium-level toy programs in all these languages, I wish I had followed Dave Thomas' (of Pragmatic Programmer fame) advice of concentrating on only one language a year.
My advice would be to pick only one language for 2017 and then think and code only in that language for the year (for your learning projects - of course you'll be using other languages for your $DAY_JOB).
In 5 years you'll know how to use 5 tools really really well and your brain will be much better aware of which of these tools is the right tool for which kinds of problems.
It's a width-vs-depth argument. I think if you wanted to start digging deeper into any of the languages you learned in 2016, you could do so faster than if you had only one language to consider. For instance, you've probably noticed that Clojure borrows ideas from other languages (Go channels, for instance), so learning how Clojure deals with that abstraction will help you compare/contrast it with how Go's abstraction works.
The more languages you know (lots of width), the more you can compare to others as you learn (gaining depth), and the faster you can assimilate new ideas as you go. Analogously, it's as if while digging into one language you discover a cave tunnel that connects you back up to another language you are familiar with. This saves you from having to dig all of the tunnels yourself, because you can see how deep concepts in one language connect across your broad familiarity (width).
IMHO, I hate approaching tasks with a rigid mindset, whether it's "LEARN EVERYTHING" or "LEARN ONE THING". I think it's better to just encourage yourself to investigate something because you're interested in it, and then see where it leads you.
For instance, I'm digging into Clojure right now myself, and after watching a Rich Hickey talk about immutability and concurrency, I became curious about how GPUs work, and whether they could be used with immutable data structures. Now somehow I've gotten into learning about OpenCL and CUDA and am having my mind blown by the possibilities of using a GPU as a massively parallel processor (concurrency isn't possible on GPUs, though). If I were strict, I would've blocked myself from diverging from the Clojure path, and I would not have learned anything about the wonders of the emerging field of heterogeneous computing languages.
You obviously did not read the whole blog post. He is talking about the importance of programming paradigms and the ideas behind programming languages that could lead to a more general understanding of new languages. :)
I did read the blog post, and what I'm saying is that the end goal (being able to understand the different paradigms that different languages bring to your understanding/repertoire) would be much better served not by sampling a whole bunch of languages in 2017, but by thinking in just one paradigm a year and then moving on to the next one the following year.
I think this really depends on what your background is and what your personal expansion of "understand" is. Assuming a cold start (which would never actually be the case), with your approach it would take seven years to get through e.g. Assembly, C, Java, Haskell, Prolog, Factor, and J. Obviously you would have a better understanding of each paradigm and their strengths, weaknesses and tradeoffs after seven years of study than someone who tries to get through all seven in 2017. But that person will have a much better understanding of all seven paradigms in one year, whereas you will have no basis for comparing the single language you've learned in that year to anything else.
Oh wait - you're right. For the purposes of getting a more general understanding of new languages, it is better to have somewhat of a (good) grasp on the main paradigms that exist out there. So my earlier comment and advice (which I still maintain) were orthogonal to the blog post.
Loved this!
To be honest, I think the more you learn, the better for you, but I'd even disagree with learning everything, since then you can't specialize as much as other people. I'd say it depends on what you're looking for. I've seen successful people who are savvy in lots of fields, but I've also met people who are really good at just a couple of things :)
And I say this because I believe it's not only about the quantity of languages that you know, but also about how they're applied; a developer who knows about the business is, in my opinion, really valuable and difficult to find.
"It’s about understanding the common paradigms and implementation patterns so as to be confidently language agnostic."
Essentially, which are these few common paradigm languages that teach you more than 90% of what's out there? My guess would be to include something like C and LISP, but is that enough?
Haskell for lazy FP, pattern matching, and monads; OCaml for strict FP and a crazy expressive module system; C# for industry OO; Lua for a dynamic language and prototype inheritance; Ruby for runtime metaprogramming; Lisp for macros; Forth for a stack language; Erlang for the actor model, binary pattern matching, and dynamic unification pattern matching; Prolog for logic programming; C for low level; C++ for templates and destructors; Rust for linear types; Idris for dependent types; R for rank polymorphism. After that, languages started to seem like variations of things I'd seen before.
Apparently Philip Wadler came up with this notion, so your already-mentioned Haskell should suffice ;)
> idris for dependent types
Are we still on "Essentially, which are these few common paradigm languages that teach you more than 90% of what's out there?" or are we by now in the territory of "which languages to cover every single remotely-computation-related notion anyone ever conjured up and managed to somewhat implement"? ;D
C++, Java and C# aren't really object oriented languages --- they're scalar languages with object systems bolted on. The giveaway is that they're full of types which aren't objects: ints, pointers, floats, booleans, etc. These are handled and implemented entirely differently from the object types; e.g., compare a Java int (a scalar type) with a Java Integer (an object type).
In a proper object-oriented language, everything is an object, including these. That allows you to do some really interesting things which are at the heart of the OO mindset. e.g. Smalltalk avoids having to use conditionals almost completely via dynamic dispatch:
b ifTrue: [ 'b is true!' printNl ]
b is a Boolean; ifTrue: is an abstract method on Boolean that takes a block. The values true and false are actually instances of subclasses of Boolean; True's implementation of ifTrue: evaluates the block, and False's doesn't, and that's how you get different behaviour depending on whether b is true or false. The core language neither knows nor cares that booleans are special.
(Actually, the bytecode does care, and has special accelerated opcodes to make operations on booleans fast for performance reasons, but that's just an implementation detail.)
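If you'd like to play with the dispatch trick without installing a Smalltalk, here is a rough transliteration into Haskell; everything in it (SBool, sTrue, sFalse, ifTrueElse) is invented for the illustration. A boolean is just a value whose only behaviour is picking one of two blocks, so no built-in conditional is needed:

    {-# LANGUAGE RankNTypes #-}

    -- A "boolean" is an object whose only behaviour is choosing a block.
    newtype SBool = SBool { ifTrueElse :: forall r. r -> r -> r }

    sTrue, sFalse :: SBool
    sTrue  = SBool (\t _ -> t)  -- "True" answers by picking the first block
    sFalse = SBool (\_ f -> f)  -- "False" answers with the second

    main :: IO ()
    main = do
      let b = sTrue
      -- No if/then/else anywhere: b itself decides what happens.
      ifTrueElse b (putStrLn "b is true!") (pure ())

That's essentially the Church encoding of booleans; Smalltalk bakes the same insight into its object model.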
Luckily I first encountered "OO" after having already learned the concept of structs/records in Pascal; this way I did not have to seriously entertain the idea that simple scalars oughta be anything but...
OO should be Smalltalk, given that it's real OO and implementations still exist. Functional should include an ML, given all the languages influenced by SML and OCaml. If they've done functional, then I suggest Mercury for the logic language, as it's a better Prolog with functional and performance improvements. It's used commercially, too. Query, if logic programming is done, should be SQL followed by Datalog. Assembly should be x86 and one RISC (probably ARM).
> I suggest Mercury for logic language as it's a better Prolog with functional and performance improvements. It's used commercially, too.
Any idea what sort of applications the Mercury language is used commercially for? (Only) one I know of is Prince XML (from the Wikipedia article about either one of them). Not saying it's not used for more, just interested to know.
You could also use it anywhere Prolog is used. So, you could look up applications or commercial users of Prolog. Here's one in finance that would probably benefit from Mercury:
Your OO examples all come from the same portion of the OOP family tree. People should also explore the style of OOP which comes from systems like CLOS & Dylan.
My favourite member of the Logic Programming family of languages would be http://picat-lang.org/ . As far as implementations of Prolog go, http://www.swi-prolog.org/ is a good implementation to go with (It's mature, has a good ecosystem, and good documentation).
C is a bit dated and Lisp too wishy-washy; I'd suggest either Rust or Go for the Turing machine paradigm, and for the Lambda Calculus paradigm Haskell (or any truly comparably powerful, robust, "strictly" pure-and-lazy functional language --- can't think of any).
This way you cover the extreme ends of the spectrum, from (A) gratifyingly roll-up-your-sleeves-and-get-your-hands-dirty petal-to-the-medal (heh) systems programming to (Z) abstractions-all-the-way-up-and-down-and-sideways algebraic-typed purely-functional glory.
With current-day concurrency approaches, easy-networked-ness etc thrown in natively in any of the above.
Once proficient in these "spectrum extremes", anything in between should be relatively easy to get up to speed with.
Norvig's advice seems better than the entire article. And it's probably best to be skeptical of anything that insists you should start with X - that tends to reflect the author's own experience rather than anything fundamental.
The problem with learning numerous programming languages in the implicit hope of learning about various programming paradigms is that it's not a focussed approach. Instead, explicitly set out to learn different programming paradigms, and avoid languages which overlap in the paradigms they cover.
So instead of "In 2017, learn every language" I would say "In 2017, learn every programming paradigm".
I am a person who LOVES to try different, new, exotic languages whenever I can. I usually try around 5-6 new programming languages (sometimes including frameworks) in a year.
My personal strategy is to try to learn enough of a language "to be dangerous". This means you can create working code with it, probably not 100% idiomatic, and probably reinventing the wheel on many occasions.
The good thing is that, in my opinion, the 80/20 rule applies here: it takes 20% of the effort to learn 80% of a (programming) language. The largest benefit is that you're able to try many different languages and decide whether you want to keep digging deeper into any of them. Heck, maybe you love a programming language so much you'd even accept a new job just because you want to program in that language.
For the record, in 2016 I tried Go, Rust, Clojure, Elixir, Crystal, Swift. My personal favorites this year are Clojure and Elixir. But all the other languages have their own niches with lots of potential. I'll be keeping an eye on them.
Roughly understanding every language is admirable, but very different from being highly productive in a language. If I have to Google syntax and packages to do every little thing, like writing a for loop, I won't exactly be productive. He is very correct, though: subtle differences in syntax often translate well to other languages, and learning the general paradigms is the key point.
So yes, have a general knowledge of programming that applies to different languages with ease, but focus on getting very productive with the languages that you use all the time. For me, my focus is dictated by my work: primarily VBA and Matlab.
The missing puzzle piece here is that languages also imply APIs and frameworks - which are beasts in themselves. Rails is mentioned in the article. Understanding Rails, in the context of a serious application, is a huge piece of work... and in many ways very different from knowing Ruby as a language.
If you're going to learn a language, you probably want to learn a canonical framework for that language... which can be a much bigger (and maybe more rewarding) process.
So, funny anecdote: I once asked a candidate which programming languages he knew and he replied "All of them: BASIC, FORTRAN, and COBOL." :-)
More seriously though, the point in the article is solid: all computers have broadly similar architectures, and the number of concepts needed to express computation on those architectures is finite, so starting with a language in each "category" of expression and then adding additional languages in each category can quickly teach you the essence of computing. At that point a new language can be quickly picked up and mastered once you've mapped the essential concepts to the idioms of the new language.
Love it. Many years ago I read some advice somewhere to "learn a new language each year". And while I'm something of a language junkie, I never actually stuck to that. But lately I've been getting more aggressive again about adding new languages to my repertoire. In 2016 I spent a lot of time with R and Octave, and started brushing up on Python again.
Going into 2017, I've started spending a lot of time on Prolog just recently, and am hoping to spend some serious time on Scala and Figaro soon.
How valuable all this is from a career standpoint can be hard to quantify, but if nothing else it's just plain fun to learn new approaches and new languages.
Has some recommendations, reviews, & critiques on various books & learning strategies people here might be interested in. Here's a repost of my comment on the article:
"Consider a Scheme resource like How to Design Programs for its combination of easy modification of syntax and DSL’s. Additionally, OMeta, TXL, or the original paper on Meta II for transformation-based languages. Paul Morrison’s stuff on flow-based programming. Hit them with Datalog or Mercury on declarative side on top of whatever foundational logic they learn so they see its practical power. Formal language like Coq with Chlipala’s Programming with Proofs might be icing on the cake so they can see programs get engineered from precise specifications with provable correctness. Should come in handy in debates on whether programming is engineering or not. ;) Lightweight version of that would be SPARK Ada that can automatically prove aspects of imperative programs due to clean design. Or design by contract with that, Ada 2012, or originator Eiffel. If doing parallelism, show them an inherently parallel language like X10, Chapel, or ParaSail. Erlang should come up if we’re talking high concurrency and reliability in same sentence."
I sort of did this a couple of years ago without exactly realizing what I was doing. I learned a bunch of languages and played around with them for fun. Many were functional, many were not. I deliberately tried to learn languages I felt would be different from what I already knew, so I didn't even touch languages like C# or Ruby. It was a fun few weeks, and I got a better mental idea of a general framework for thinking about languages. It was effectively the "go meta" thing (the wording of that section of the post really resonated with me) -- I stopped thinking of languages as their own special snowflakes to be learned, and instead each language was a cocktail of features I already understood, wrapped in its own special-snowflake syntax (which doesn't take long to pick up). At some point I implicitly understood that it wouldn't be much effort to be useful in a new language if I wanted to. I was an idiot student back then, so I don't think I ever consciously realized this -- it just affected my decision making (e.g. "I need to code in Perl to add this feature to my IRC client? Doesn't sound too hard.")
This exercise became immensely useful, because while it doesn't give you the ability to write new codebases from scratch in $randomlanguage without needing to put effort into learning it, it does give you the ability to make nontrivial changes to codebases in $randomlanguage by looking at the context.
An explicit realization of exactly what skill I'd gained came a few years later at one of my internships. My own project was in Java/C++ with some GLSL, all languages I had used before. The project itself was very interesting, but my manager was super busy and I was finishing work pretty quickly so I'd often have nothing to do. A fellow intern had a very large C# project that I found pretty interesting. Involved a server-side backend, client-side app, phone app, and a lot of communication between them all. This intern was very smart, but had never had the chance to learn new languages or the general concepts used in building such things. He was learning, but would often ask us (me + another somewhat-bored intern) for help. Now, I didn't actually know C# at this point (none of us did). Never coded a line of it in my life. But I was able to pattern-match the code I saw with concepts I already understood and explain things. At some point I realized that I was teaching my friend a language that I didn't know in the first place. This blew my mind, and I gradually had a conscious realization that I could do this with most languages provided they didn't bring something too radical to the table.
---------------------
My personal list of languages I tell folks to learn are:
- C++, because it has so many unique features. This usually covers C as well. D may be a decent substitute, since it has a lot of overlap with C++'s features, but its features are designed to work better together. I suspect that learning D will not let you automatically hack on C++ code without learning C++ first, but the reverse would be easier.
- assembly
- A GCd OOP language like Java or C#
- Haskell
- Lisp
- Some flavor of ML. Rust or Swift can substitute here, but it's better to just pick up an ML.
- Something for the GPU (GLSL)
- Something with affine/linear types (Rust?)
- Something with dependent types (Idris -- I have yet to properly dig into this one myself)
- JavaScript. Also has a lot of interesting bits.
- Some dynamically typed scripting lang like Python or Ruby.
It's important that you cover programming styles, not just features -- e.g. Lisp is there not because of its features, but because of the way you program in it.
"After learning a few more languages myself, I came to realize that they were all less diverse than I had anticipated. After some more study, I started appreciating some of the underlying models of computation and programming language theory that make most languages look like different versions of the same basic ideas."
Yep, despite all these language wars, they all really are the same ~10 ideas shuffled around in a different priority queue (type safety over dynamic, camel case over snake case, AOT over interpreted, etc.).
I'm surprised the author advocates learning new languages after this insight, especially Rust and Go which are again just a reshuffle of the status quo (followed by the need to re-create everything in said language once again).
Instead of just re-learning the same ideas over and over under different names, how about do something actually productive in 2017...
There's no way you can maintain that when you compare very different languages to one another, e.g. Scheme, APL, Haskell, Smalltalk, Go, Forth and Prolog. They're all such different ways of telling the computer to do something, that it isn't just the same 10 ideas being shuffled around.
Scheme - prioritized a 1:1 correspondence between syntax and the AST. Reshuffle the syntax, remove the explicit type system.
APL - prioritized arrays.
Haskell - prioritized FP. Also introduced primitives to make things possible with FP that you could already do with OOP (monads, etc.).
Smalltalk - prioritized OOP.
Go - prioritized Concurrency.
Forth - prioritized Stack based.
Prolog - prioritized logic programming (unification and backtracking).
There really isn't as much insightful difference between these languages as you'd think. Not taking away from their history, but you gain nothing from learning them.
That's a really simplistic way of thinking about language differences. Saying that Smalltalk prioritized OOP says nothing about how Smalltalk is different in all sorts of ways from Java, or how Self and Io are different from both by doing away with classes, while being closer to Smalltalk than C++. An important addition with Smalltalk and Self is the environments they live in, which are very different from most programming environments. Smalltalk was designed to be part of an image-based GUI, with modification of live objects, which includes the entire image, instead of text in files and folders separate from whatever editor is being used. That's a huge difference (whether one prefers it or not).
And with APL, you leave out the not-so-insignificant ability to use any character one wishes, which is again very different from most programming languages (whether one prefers it or not). It's not insignificant that an APL program looks nothing like a program in most mainstream programming languages.
I recommend an APL offspring to someone who wants to expand their horizons, rather than APL itself: K (or a variant thereof), or J. No special chars needed.
J used to be the only one freely available, and the one with better documentation -- however, it is also a larger language. It seems that recently the K dialects (oK, klong, kuc, kerf, xxl) are plentiful, freely available and well documented.
It is also my experience that K is less frightening to most people - J uses, e.g., curly braces independently as operators, which looks a priori broken to most people (rather than just unintelligible, like K).
Clojure - prioritized rich immutable data. I haven't found any other language that prioritizes open data sets: where a map has "a" and "b" but could also have "c" and "d", and if you only care about "a" and "b", it doesn't matter if the other data just tags along.
And I say "rich" data since other languages are pretty data-centric (scheme of course), but not any that prioritize immutable hash maps, sets, vectors, linked-lists, etc.