Sigh. We're still banging rocks together and amazed when occasionally there is a spark?
Look, Ruby and Python -- their implementations just plain suck. There, I said it. MRI and CPython are just a pile of crap. We've known since 1991-ish (see Self) how to make performant runtimes for dynamic languages, and 23 years later Ruby and Python still have crappy slow interpreters with no useful concurrency support. Note that Ruby and Python were both started around that time and are in fact older than Java. Sure, the JVM team has a lot of resources, but other language implementations, such as Racket, don't, and they manage to solve these performance issues.
So let's not get excited when a language implementation actually achieves a decent baseline of performance. Let's expect that and move on.
Then there is Go's woeful head-in-the-sand type system and its dumb approach to error handling. Error values are logical ORs -- you can return a useful value OR an error. In Go they are ANDs -- you return a value (which may not be useful) AND an error. Just dumb. We've known how to do error handling without exceptions and without boilerplate since about 1991 (Wadler; monads). Generics since about 1985 (ML). Can we move on yet? Is a quarter-century long enough?
Oh, and channels? That's Occam (1980s), if not earlier.
So what's Go? A language with an implementation that's not complete crap, and an inability to absorb ideas known for 25 or more years. I'm supposed to be excited that this will move our field forward? No thanks.
> Then there is Go's woeful head-in-the-sand type system and its dumb approach to error handling. Error values are logical ORs -- you can return a useful value OR an error.
Not necessarily. Maybe you want to return the number of bytes read until the error AND the error.
> We've known how to do error handling without exceptions and without boilerplate since about 1991 (Wadler; monads). Generics since about 1985 (ML). Can we move on yet? Is a quarter-century long enough?
You're conflating "it was once invented" with "it's a good thing without any trade-offs and should be put in every programming language".
> Not necessarily. Maybe you want to return the number of bytes read until the error AND the error.
Not mutually exclusive. The number of bytes read until the error occurred could be part of the error itself. Sum types are practically always better. The problem is that Go does not support them.
It's not a problem, because you can simply use product types, so you don't have to extend the language with yet another concept. For the Maybe monad / Option types you'd need lots of additional cruft like parametric polymorphism and probably something like the 'do' notation to make monads bearable. Go would lose all its charm of simplicity. Go would be another Haskell or ML, and we all know that these languages have failed to become successful by ignoring the human aspect of programming. We'd end up with articles like this http://www.haskell.org/haskellwiki/Typeclassopedia , longer than the entire Go specification.
Haskell and ML are primarily FP-oriented. You might want to check out Rust for a language that is not purely FP-oriented (it is multi-paradigm), yet attempts to take pragmatic choices from such languages. There seems to be a good community response to it.
The community response seems to be mostly from people who like to talk about programming language features all day without actually using it for real projects.
I think that's pretty unfair. Rust has not even reached 1.0 yet, and there are already some cool projects being implemented. There were even a couple of games written in Rust for Ludum Dare, not to mention at least one case of use in production.
I think you're missing the point of the basic type system: simplicity. It's a bigger win than you think when it comes to attracting new developers and making code easier to read. Sure, there are languages that let you write more succinct code and support all sorts of crazy typing, but one of the things good code requires is readability.
Your code may be terse and elegant, but if other people can't understand it, it won't matter. Code is as much communication to other developers as it is to the computer it runs on. That's why I credit simplicity as Go's greatest strength.
Of course a complex language can be used to write simple code and vice versa. Ultimately every program is going to increase in complexity -- that quote about failure vs legacy nightmare applies here.
IMHO it's better to reduce the complexity of the language to a minimum so that language/runtime complexity doesn't have a multiplicative effect on application code complexity.
For instance, C++ is a complex language. That complexity inevitably accrues to applications written in C++. This is a conscious trade-off of the language -- performance + high level programming, or what have you.
In the end, though, I am not sure I'd argue there's much correlation between simplicity of the language and simplicity of the code. It's too hard to get anyone to agree on what simple means anyway. Me, I prefer Hickey's "Simple Made Easy" for that, but not everyone agrees.
ETA: I guess what I would say is: would you prefer to start with something simpler and build complexity appropriate to the problem domain out of simple, but sound ideas? Or would you prefer to start with a set of inherently complex primitives and build something else complex on top of that? It's subjective, but I've probably leaked my bias in the phrasing.
I don't think his point is as broad as you're claiming. He's saying that golang application code is complex due to (for example) lack of generics because the language lacks that complexity.
I am very wary of arguments about simplicity, as what people usually mean by "simplicity" is "stuff I already know". Indeed I believe this is a large part of the reason for the popularity of Go: it is basically a statically typed and performant Python / Ruby. You don't need to learn much new to use it if you already know these languages.
The problem with this is how are we to advance if we never adopt new ideas?
Furthermore, I believe that the core of functional languages are very simple. Do you understand logical ands and ors? Then you understand algebraic data types. Do you understand high school algebra? Then you understand generics and higher kinds. It's all very simple but just maybe not stuff you're used to.
This is a false dichotomy. There's a pretty wide continuum between type systems at least expressive enough to represent a user-defined list of a generic type and "all sorts of crazy typing".
I think you're missing the point of the basic type system: simplicity. It's a bigger win than you think when it comes to attracting new developers
...and that brings us full-circle to the blog post: rounding the corners so that developers don't cut themselves and avoiding ideas that require developers to get out of their comfort zones is exactly what made Java so successful. It put OO within the reach of mortals who were unwilling or unable to absorb C++. Go, being simple and unadventurous, could very well skyrocket into the Java/COBOL stratosphere.
What language created in the last 5 years has a feature that wasn't originally discovered in the 80s or earlier? It's all too easy to bash new languages for not having some hot new, never-before-seen feature; meanwhile, there are almost no examples of new languages doing earth-shattering things.
I think what we're currently seeing with Go, Rust, Elixir and others is taking the features that are perceived as good and trying to combine them in different ways.
Actually, given that they gave birth to one of the most unsafe languages on the planet, and publicly ranted against OO and FP languages during their careers, I think they have chosen to ignore them as a political decision.
ARC doesn't involve any sort of nontrivial lifetime analysis. The compiler simply inserts calls to retain and release at all of the same places where explicit calls to them (hopefully) were with manual reference counting. The only vaguely novel part of any of it was successfully migrating a language from manual to automatic reference counting.
Actually that's not entirely true. ARC inserts the calls first, but then does an elimination phase that can identify lifetimes beyond method boundaries and remove unnecessary calls.
It's not particularly complex, but the architecture is there for further enhancement of this phase as they build out the static analyzer.
That's nowhere near what the borrow check does. The borrow check is based on reasoning about ownership and inherited mutability, neither of which apply to Objective-C.
Go now is like Java in 1997: a mediocre language with lots of corporate support and a big standard library. It's popular in the developer crowd right now because it's A) simple, B) has a good standard library, and C) getting support (in the form of tools, tutorials, etc.) is easy.
We shouldn't let those things be the deciding factors in choosing what language to stick with over the next ten or twenty years. That's what we did with Java, and even though we're on revision 8, people are still (rightfully) complaining about many of the same flaws that are still present 20 years later.
As Java aged, developers realized that they didn't actually want the type of language that Java started out as, so over time, Java has accumulated features like lambdas, generic programming, etc., but because the language wasn't designed with it in the first place, it's all sub-optimal cruft.
Go has a lot of the same problems now. No generic support (What language designer in 2014 builds something that idiomatically requires casting to the top type? That was precisely a huge problem for Java; Java (sort of) fixed this by adding Generics in 2004), a mediocre type system (which gets completely ignored pretty frequently anyway; see the previous issue), inflexible (you can't extend important built-ins like range or make()), and with none of the interesting features that programming languages have gotten good at in the last few decades (pattern matching, immutability, null/nil-free programming, etc.).
I really hope that developers can see past the hollow promises of corporate support and a strong standard library and wait for a language that is actually good in and of itself, and not just because there are so many developers propping it up. There are a number of very well-designed languages on the horizon. The one closest to Go may be Rust, which has excellent features like ML-style generic programming, pattern matching, an almost-Hindley-Milner type system, strong support for immutable and functional programming, etc.
In addition, Go failed to advance the art of error handling in any way; in fact, I'd call it a step back. I can already do multiple return values in Python, or I can choose to use exceptions rarely or frequently depending on the problem, and I have easy syntax like 'with' for critical cleanup. Any language that demands 100% programmer reliability to trap errors is not useful for most projects that don't have Google-level code review and rapidly changing project specs.
I've done the C way of checking every function for errors. It's painful, and should be done only when the problem domain demands it, which is a small subset of problems. Googlers are living in their own universe.
> No generic support (What language designer in 2014 builds something that idiomatically requires casting to the top type? That was precisely a huge problem for Java
> you can't extend important built-ins like range or make()
This was a deliberate design decision. When the Go team was asked what they like about Go, one answer was:
"It's very simple to understand, and the code that
I read on the page is the code that is executed.
When I iterate over a slice, there's not some chance that
there's an iterator implementation that fetches a
file from a web server in Romania and serves me some
JSON or something.
Instead, the language is very deliberately simple and
contained so that it's very easy to reason about code that
you're reading."
https://www.youtube.com/watch?v=sln-gJaURzk
>Erik Meijer has a more nuanced view on the topic than you
I'd rather not watch a 40 minute video. Care to summarize?
>"and the code that I read on the page is the code that is executed... the language is very deliberately simple and contained"
This is a dumb argument. The same thing can be said about assembly, and in fact this property doesn't hold true once you introduce abstraction through functions.
All well-written code is easy to read, and most poorly-written code is hard to read. Go does not change that.
I'm waiting for the talks from Gophercon to appear online, but one of them[0] talks about using Go interfaces to handle most generic type problems. Another interesting read is Rob Pike's argument about "less is more"[1]. The HN discussion on less-is-more from 2 years ago brought about some interesting discussion as well[2].
>using Go interfaces to handle most generic type problems
Do you mean Go interface specifications, or the Go interface{} type?
Because the latter is the top type. Using it is equivalent to casting things to Object in Java, which people did up until 2004 when they realized it was really stupid.
You also can't use Go interface specifications to make generic data structures, which is the really important use case.
Go does not have a type hierarchy, so there is no top type.
You can easily use interface{} to make generic data structures, but interfaces have a time/space/code-complexity cost which makes them not worth it most of the time.
>Go does not have a type hierarchy, so there is no top type.
Go does have a type hierarchy. It's just not very extensible. There is top (interface{}) and then there are all other types (which are subtypes of interface{}).
If Go didn't have a type hierarchy, you wouldn't be able to up- and down-cast (up-casts are performed via implicit coercion to interface{} and down-casts are explicit).
>You can easily use interface{} to make generic data structures, but interfaces have a time/space/code-complexity cost which makes them not worth it most of the time.
You also lose any semblance of type safety, which is kind of awful.
There is no type hierarchy and there are no subtypes. Interfaces do not describe type relationships. The conversion required between `[]int` and `[]interface{}` further demonstrates the significance of this distinction.
As for losing type safety when writing generic containers, this is true, and another reason why people don't do this.
> A mediocre language with lots of corporate support
It's very disingenuous to compare the level of corporate support that Sun provided Java and the level of corporate "support" (involvement) that Google provides Go.
The Go project began at Google to solve the sorts of challenges that Google was having (architecting and managing large servers), and several of the original members of the Go team work at Google, but the Go team has gone out of their way to ensure that Google (the corporation) doesn't control Go development.
This is exactly why I don't want to jump on the Go bandwagon.
You forgot to mention the Go ABI which isn't even compatible with C. That alone for me is reason enough not to use it as a systems programming language.
I would choose D over Go any day of the week. It's longer to learn and master, sure, but it is also much more powerful and expressive.
One of the design decisions that I dislike in Go is that, instead of providing FFI declarations like Object Pascal, Ada, D and many others, it requires the use of an external C compiler.
Does the OP even know what made COBOL successful and useful? Or, for that matter, does the OP even know a COBOL programmer? I suspect the answer is no, and yet his blog is being discussed on the HN front page. COBOL predates the RDBMS and made a lot of sense when writing structured data to flat file systems. The COBOL language and COBOL programmers never aspired to do general-purpose computing. Back in those days, the scientists wrote their programs in Fortran and businesses wrote their code in COBOL, and most computing was done on IBM systems. Then came minicomputers with dBASE. And so on. If you are a Canadian, I can guarantee you that most of your RRSP backend data processing is still being done in COBOL; I had a bruising experience when a newly minted CIO decided to do away with all COBOL and replace it with modern languages (C# etc.). Long story short, the CIO moved on to another unfortunate company, millions of dollars were wasted, and the COBOL code is still working.
New programmers who don't want to learn old technologies to maintain existing code bases will most likely want to replace it with something they know. It's almost always a very bad idea.
All of these "x is the future" articles show a huge lack of perspective on what made languages successful in the first place and why they're still in use today.
Often the "why they're still in use today" when describing legacy tech can be described as a combination of fear of change, lack of resources, and technical debt (the result of both of those things).
Not because it was or is the best tool, not because it's the most efficient or the most performant or most readable, but because simple mundane politics and FUD.
There's also the cost of upgrading a code base to a more modern language vs the benefits of working in the new language.
Most legacy projects I've worked with were not that hard and costly to maintain. Rewriting these code bases would definitely cost more than what would be saved afterwards.
COBOL is like bedbugs. Nobody likes COBOL; the reason it sticks around is because it's damn near impossible to get rid of.
COBOL is very hard to migrate out of production because it's incredibly difficult to translate COBOL code to other languages. Everyone writes their own version of COBOL, and even translating one COBOL program to another COBOL programmer's "dialect" is non-trivial. COBOL provides all the power of LISP macros, except with none of the elegance -- it's very hideous once you peel back the layers.
On the other hand, it's very easy to translate Go to/from other C-family languages. Indeed, there was a blog post recently on here about a company that translated their entire Python codebase line-for-line into Go. The Go team is even working on an automatic translator to translate the current gc compiler codebase (written in C) into Go.
I've used Go as my primary language for almost two years now. I wouldn't say I "love" Go - I love the things it lets me do. If Go is going to achieve the same level of dominance that Java has, and the same level of persistence that COBOL has, it's not going to be because it's got the ultimate form of lock-in (legacy code) - it's going to be because it continues to let people do powerful things very simply.
Funny you mention it... almost replied to the post earlier saying that if I were to bet my career on a single language it would certainly be COBOL. Not glamorous, but it runs critical infrastructure and schools aren't exactly pumping out mainframe programmers.
That said, I have not and would not recommend betting on a single language. Being polyglot has its own advantages.
edit: any ideas for a first project with Go? Any place it's particularly well suited for?
edit: I re-emphasize my 2nd paragraph, "I have not and would not recommend betting on a single language"
I don't think that's a very good bet. There is still a lot of COBOL sticking around, but there are essentially zero new COBOL projects being started while there are a non-zero number of COBOL projects being replaced with other languages, so COBOL use is ultimately trending downward. You are essentially betting that COBOL developers die out faster than COBOL itself does.
> COBOL is like bedbugs. Nobody likes COBOL; the reason it sticks around is because it's damn near impossible to get rid of.
The reason it sticks around is because lots of enterprises either have mission critical code that works well in it and still has the same requirements (leaving no reason to replace it) -- or because poorly documented and tightly coupled code-bases makes it impossible to smoothly transition to something new. Often a combination of the two.
Well, I believe that developers should be able to choose their own tools although work requirements can understandably override personal choices.
I used to be a 'Java guy' (for many years I was the number one Google search result for "Java consultant") but I migrated to Ruby out of personal preference, then to Clojure because I got a lot of work offers using Clojure, and now I am struggling to learn Haskell. That said, the bits of Java programming that I have been doing lately (mostly Java 8 with streams and lambdas) has been a lot of fun.
Bottom line IMHO is that choice of programming language is not that important. More important is having a good fit with existing code bases, lots of trained developers, good libraries and frameworks, and adequate performance.
If Java was brainf&%k and we still got the JVM out of it, it would all be worth it. And I think there's a good chance that Java's successor runs on it.
Totally agree that the JVM is super powerful. I think it's Java's greatest strength. However, I don't see any current JVM language as a potential replacement. Scala is too complex. Clojure, while wonderful, is Lisp and no one has been able to make that popular for 5 decades (not even pg).
I'm looking a lot at Kotlin. It's not mature yet: JetBrains will start using it this summer for their own projects and I expect it to firm up a lot then.
The nice thing about Kotlin is almost perfect compatibility with Java, and an auto-translator that doesn't suck. So you can take an existing Java codebase and auto translate class by class, maintaining compilability the whole time. Also the standard library is mostly a set of extensions to the JDK so your existing library knowledge ports across, except you keep finding useful goodies sprinkled all over the place.
Feature-wise Kotlin has things that I feel would help me write fewer bugs: it has null-safety encoded into the type system, smart casts, extension methods, some good functional programming support, powerful properties and so on. There are features it lacks too, but I hope JetBrains will continue to push it forward for many years.
Groovy doesn't have much significant use anymore. Someone's gaming the stats to make it look more popular, though. At https://bintray.com/groovy/maven/groovy/view/statistics you'll see 190k downloads in the last 30 days, click on country and you'll see 162k of them from a proxy server in China, and only 8300 direct from the US.
Most people say Node so as not to confuse it with client-side JavaScript. e.g.: Saying Go will be as popular as JavaScript would be unreasonable; saying Go will be as popular as JavaScript on the server is reasonable.
> Some developers have noted Go’s lack of features or a few other things: no exceptions, nils instead of options, inability to specify dependency versions, mark and sweep GC, no macros, no generics.
Not having exceptions is one thing and probably a valid opinion, but specifying library dependencies as whatever today's HEAD commit is on a GitHub project always seemed to me incompatible with writing reliable software. Until reading this article (and learning about godep) I thought that I must be misunderstanding how dependencies are managed in Go, because how could that possibly work? In practice how have people been dealing with this before tools like godep?
Go has a historical connection to C through Pike and Thompson. However, Go can't be the successor to C, because C is a systems language that can be used without garbage collection. There's no way that Go can follow in C's footsteps.
Go is most likely to be used where Java dominates today: web application servers.
If the author of the current post has anything going for him, it's the fact that the wave of mass adoption in technology always chooses the lesser technology over the better alternatives of the time. So I think Go has a bright future.
> There's no way that Go can follow in C's footsteps.
I am not a big fan of Go, but looking at the Oberon-based OSes used during the 90's at the Swiss Federal Institute of Technology in Zurich, it could be used.
Granted, maybe some extra features like direct compiler support for untraced pointers and full processor mapping in the unsafe package would be welcome, but the original Oberon could survive without them by making a set of assembly routines available as a kernel package.
Which is no different than the assembly required by C to interact with the hardware.
(This was a fun rant to write.)