Sigh. We're still banging rocks together and amazed when occasionally there is a spark?
Look, Ruby and Python -- their implementations just plain suck. There, I said it. MRI and CPython are just a pile of crap. We've known since 1991-ish (see Self) how to make performant runtimes for dynamic languages, and 23 years later Ruby and Python still have crappy slow interpreters with no useful concurrency support. Note that Ruby and Python were both started around that time and are in fact older than Java. Sure, the JVM team has a lot of resources, but other language implementations, such as Racket, don't, and they manage to solve these performance issues.
So let's not get excited when a language implementation actually achieves a decent baseline of performance. Let's expect that and move on.
Then there is Go's woeful head-in-the-sand type system and its dumb approach to error handling. Error values are logical ORs -- you can return a useful value OR an error. In Go they are ANDs -- you return a value (which may not be useful) AND an error. Just dumb. We've known how to do error handling without exceptions and without boilerplate since about 1991 (Wadler; monads). Generics since about 1985 (ML). Can we move on yet? Is a quarter-century long enough?
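To make the objection concrete, here is a minimal Go sketch of the value-AND-error shape (parsePort is a hypothetical helper, not a standard function):

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort returns a value AND an error: the caller gets both,
// and must check err before trusting the value -- the product-type
// shape the rant objects to.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("bad port %q: %v", s, err)
	}
	return n, nil
}

func main() {
	port, err := parsePort("8080")
	if err != nil { // this check is repeated at every call site
		fmt.Println(err)
		return
	}
	fmt.Println("port:", port)
}
```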
Oh, and channels? That's Occam (1980s), if not earlier.
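For reference, the feature being traced back to occam -- a Go channel as a synchronous rendezvous between concurrent processes (a minimal sketch):

```go
package main

import "fmt"

func main() {
	ch := make(chan int) // unbuffered: send and receive rendezvous

	go func() {
		for i := 0; i < 3; i++ {
			ch <- i // blocks until main is ready to receive
		}
		close(ch)
	}()

	for v := range ch { // receives until the channel is closed
		fmt.Println(v)
	}
}
```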
So what's Go? A language with an implementation that's not complete crap, and an inability to absorb ideas known for 25 or more years. I'm supposed to be excited that this will move our field forward? No thanks.
(This was a fun rant to write.)
> Then there is Go's woeful head-in-the-sand type system and its dumb approach to error handling. Error values are logical ORs -- you can return a useful value OR an error.
Not necessarily. Maybe you want to return the number of bytes read until the error AND the error.
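This is exactly the contract of io.Reader in the standard library: Read may return n > 0 together with a non-nil error in the same call, and its documentation tells callers to process the bytes before considering the error. A small sketch:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

func main() {
	r := strings.NewReader("hello")
	buf := make([]byte, 2)
	for {
		n, err := r.Read(buf)
		if n > 0 {
			fmt.Printf("read %q\n", buf[:n]) // use the partial result first
		}
		if err != nil { // io.EOF included: the value AND the error can arrive together
			if err != io.EOF {
				fmt.Println("error after partial read:", err)
			}
			break
		}
	}
}
```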
> We've known how to do error handling without exceptions and without boilerplate since about 1991 (Wadler; monads). Generics since about 1985 (ML). Can we move on yet? Is a quarter-century long enough?
You're conflating "it was once invented" with "it's a good thing without any trade-offs and should be put in every programming language".
> Not necessarily. Maybe you want to return the number of bytes read until the error AND the error.
Not mutually exclusive. The number of bytes read until the error occurred could be part of the error itself. Sum types are practically always better. The problem is that Go does not support them.
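A hedged sketch of that suggestion, in Go itself: fold the partial count into the error value, so the caller logically gets either a useful value or an error that carries everything about the failure. ReadError is a hypothetical type, and without real sum types this remains a convention the compiler cannot enforce:

```go
package main

import (
	"errors"
	"fmt"
)

// ReadError is a hypothetical error type: the partial byte count
// travels inside the error, approximating the OR shape without
// language-level sum types.
type ReadError struct {
	BytesRead int
	Err       error
}

func (e *ReadError) Error() string {
	return fmt.Sprintf("%v after %d bytes", e.Err, e.BytesRead)
}

func main() {
	var err error = &ReadError{BytesRead: 42, Err: errors.New("connection reset")}
	if re, ok := err.(*ReadError); ok { // recover the count from the error itself
		fmt.Println("failed after", re.BytesRead, "bytes:", re.Err)
	}
}
```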
It's not a problem, because you can simply use product types, so you don't have to extend the language with yet another concept. For the Maybe monad / Option types you'd need lots of additional cruft like parametric polymorphism and probably something like the 'do' notation to make monads bearable. Go would lose all its charm of simplicity. Go would be another Haskell or ML, and we all know that these languages have failed to become successful by ignoring the human aspect of programming. We'd end up with articles like this http://www.haskell.org/haskellwiki/Typeclassopedia , longer than the entire Go specification.
Haskell and ML are primarily FP-oriented. You might want to check out Rust for a language that is not purely FP-oriented (it is multi-paradigm), yet attempts to adopt pragmatic ideas from such languages. There seems to be a good community response to it.
The community response seems to be mostly from people who like to talk about programming language features all day without actually using it for real projects.
I think that's pretty unfair. Rust has not even reached 1.0 yet, and there are already some cool projects being implemented. There were even a couple of games written in Rust for Ludum Dare, not to mention at least one case of use in production.
I think you're missing the point of the basic type system: simplicity. It's a bigger win than you think when it comes to attracting new developers and making code easier to read. Sure, there are languages that let you write more succinct code and support all sorts of crazy typing, but one of the things good code requires is readability.
Your code may be terse and elegant, but if other people can't understand it, it won't matter. Code is as much communication to other developers as it is to the computer it runs on. That's why I credit simplicity as Go's greatest strength.
Of course a complex language can be used to write simple code and vice versa. Ultimately every program is going to increase in complexity -- that quote about failure vs legacy nightmare applies here.
IMHO it's better to reduce the complexity of the language to a minimum so that language/runtime complexity doesn't have a multiplicative effect on application code complexity.
For instance, C++ is a complex language. That complexity inevitably accrues to applications written in C++. This is a conscious trade-off of the language -- performance + high level programming, or what have you.
In the end, though, I am not sure I'd argue there's much correlation between simplicity of the language and simplicity of the code. It's too hard to get anyone to agree on what simple means anyway. Me, I prefer Hickey's "Simple Made Easy" for that, but not everyone agrees.
ETA: I guess what I would say is: would you prefer to start with something simpler and build complexity appropriate to the problem domain out of simple, but sound ideas? Or would you prefer to start with a set of inherently complex primitives and build something else complex on top of that? It's subjective, but I've probably leaked my bias in the phrasing.
I don't think his point is as broad as you're claiming. He's saying that golang application code is complex due to (for example) lack of generics because the language lacks that complexity.
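For instance (a sketch of the duplication being alluded to, in Go without generics), even a trivial function has to be written once per element type:

```go
package main

import "fmt"

// Without parametric polymorphism, one copy per element type.
func sumInts(xs []int) int {
	var total int
	for _, x := range xs {
		total += x
	}
	return total
}

func sumFloat64s(xs []float64) float64 {
	var total float64
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	fmt.Println(sumInts([]int{1, 2, 3}))
	fmt.Println(sumFloat64s([]float64{1.5, 2.5}))
}
```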
I am very wary of arguments about simplicity, as what people usually mean by "simplicity" is "stuff I already know". Indeed I believe this is a large part of the reason for the popularity of Go: it is basically a statically typed and performant Python / Ruby. You don't need to learn much new to use it if you already know these languages.
The problem with this is how are we to advance if we never adopt new ideas?
Furthermore, I believe that the core of functional languages are very simple. Do you understand logical ands and ors? Then you understand algebraic data types. Do you understand high school algebra? Then you understand generics and higher kinds. It's all very simple but just maybe not stuff you're used to.
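To illustrate the and/or analogy with Go's own constructs (Shape, Rect, and Circle are hypothetical names): a struct is an AND of its fields, while an interface over a known set of types approximates an OR, minus the exhaustiveness checking real algebraic data types give you.

```go
package main

import (
	"fmt"
	"math"
)

// A struct is an AND: a Rect is a width AND a height.
type Rect struct{ W, H float64 }

// A Circle is a radius.
type Circle struct{ R float64 }

// An interface over a closed set of types approximates an OR:
// a Shape is a Rect OR a Circle -- but Go cannot enforce that the
// set is closed or that a switch over it is exhaustive.
type Shape interface{ area() float64 }

func (r Rect) area() float64   { return r.W * r.H }
func (c Circle) area() float64 { return math.Pi * c.R * c.R }

func main() {
	shapes := []Shape{Rect{W: 2, H: 3}, Circle{R: 1}}
	for _, s := range shapes {
		fmt.Println(s.area())
	}
}
```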
This is a false dichotomy. There's a pretty wide continuum between type systems at least expressive enough to represent a user-defined list of a generic type and "all sorts of crazy typing".
> I think you're missing the point of the basic type system: simplicity. It's a bigger win than you think when it comes to attracting new developers
...and that brings us full-circle to the blog post: rounding the corners so that developers don't cut themselves and avoiding ideas that require developers to get out of their comfort zones is exactly what made Java so successful. It put OO within the reach of mortals who were unwilling or unable to absorb C++. Go, being simple and unadventurous, could very well skyrocket into the Java/COBOL stratosphere.
What language created in the last 5 years has a feature that wasn't originally discovered in the 80s or earlier? It's all too easy to bash new languages for not having some hot new, never-before-seen feature, while there are almost no examples of new languages doing earth-shattering things.
I think what we're currently seeing with Go, Rust, Elixir and others is taking the features that are perceived as good and trying to combine them in different ways.
Actually, I think that, given that they gave birth to one of the most unsafe languages on the planet and publicly ranted against OO and FP languages during their careers, they have chosen to ignore them as a political decision.
ARC doesn't involve any sort of nontrivial lifetime analysis. The compiler simply inserts calls to retain and release at all of the same places where explicit calls to them (hopefully) were with manual reference counting. The only vaguely novel part of any of it was successfully migrating a language from manual to automatic reference counting.
Actually that's not entirely true. ARC inserts the calls first, but then does an elimination phase that can identify lifetimes beyond method boundaries and remove unnecessary calls.
It's not particularly complex, but the architecture is there for further enhancement of this phase as they build out the static analyzer.
That's nowhere near what the borrow check does. The borrow check is based on reasoning about ownership and inherited mutability, neither of which apply to Objective-C.