I'm not OP, and I don't necessarily agree, but Haskell can produce static binaries, has GC, and has a great story for concurrency. It uses an M:N threading model similar to Go's.
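To make that concrete, here's a minimal sketch of Haskell's lightweight runtime threads (the worker/MVar setup here is just made up for illustration): forkIO spawns cheap green threads that GHC's runtime multiplexes over OS threads, much like goroutines:

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

    main :: IO ()
    main = do
      done <- newEmptyMVar
      -- forkIO spawns a cheap runtime-managed thread, not an OS thread
      _ <- forkIO $ do
        threadDelay 100000            -- 100 ms (argument is in microseconds)
        putMVar done "worker finished"
      msg <- takeMVar done            -- block until the worker signals
      putStrLn msg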
But this is an argument more about community than about language design. Granted, there are a lot of problems with the Haskell community, but deifying poor design decisions as "simplicity" is generally not one of them. For the horror that is string types, though, maybe.
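To illustrate the string situation (this sketch assumes the text and bytestring packages, which aren't part of base): the same five characters can live in three common types, and you convert between them by hand:

    import qualified Data.ByteString as BS
    import qualified Data.Text as T
    import qualified Data.Text.Encoding as TE

    greeting :: String           -- a lazy linked list of Char
    greeting = "hello"

    greetingT :: T.Text          -- packed Unicode text
    greetingT = T.pack greeting

    greetingBS :: BS.ByteString  -- raw bytes; the encoding is on you
    greetingBS = TE.encodeUtf8 greetingT

    main :: IO ()
    main = print (greeting, greetingT, greetingBS)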
The size of a community is a direct result of the popularity of the language.
Haskell had almost a 20-year head start over Go. That Go has already surpassed it in the number of libraries speaks to the fact that people are picking Go over Haskell.
Following Occam's Razor, people pick Go over Haskell because they find it a better language, where "better" for them is probably different from how you define it: you think Go is "poorly designed", it's a given that a "poorly designed" language can't possibly be "better" than even a mediocre one, and I gather you place Haskell a bit higher than "mediocre".
> Go is a prime example of the "worse is better" doctrine, that's why it's popular
Also known as making trade-offs; a universal engineering theme. Engineering projects that refuse to make trade-offs don't fare too well - you have to pick a point that's acceptable to you on a strength/weight graph or you will be forced to use exotic materials that tend to come with their own cost (yet another trade-off: cost/strength). This is fine in a prototype or in a research paper, but when it comes to production, you will not win the fight against reality.
Edit: For programming languages, I imagine the graphs are roughly features vs. complexity (directly correlated) and then complexity vs. hiring pool size (inversely correlated). You have to pick a point on both graphs, and Google picked values that some people on HN disagree with, and they find that offensive for some reason.
>This is fine in a prototype or in a research paper, but when it comes to production, you will not win the fight against reality.
Indeed, that's the argument I'm making. Where's the disagreement?
>For programming languages, I imagine the graphs are roughly features vs. complexity (directly correlated)
You know Simple Made Easy is a ubiquitous recommendation here, and I don't really have anything to add to it. In Haskell, as in Lisp, all the complex "features" are described in terms of far simpler ones. Languages that build these features in explicitly are complex; languages that let you choose don't have to be.
This is why C++ is complex and Common Lisp is simple, even though they have a pretty similar feature set, all things considered.
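As a rough sketch of what I mean (whileM is just a name I made up here): in Haskell even a while loop is an ordinary function built out of plain functions and recursion, not a construct the compiler has to know about:

    import Control.Monad (when)
    import Data.IORef (modifyIORef', newIORef, readIORef)

    -- an ordinary function, not syntax baked into the language
    whileM :: Monad m => m Bool -> m () -> m ()
    whileM cond body = do
      ok <- cond
      when ok $ body >> whileM cond body

    main :: IO ()
    main = do
      counter <- newIORef (0 :: Int)
      whileM ((< 3) <$> readIORef counter) $ do
        n <- readIORef counter
        print n
        modifyIORef' counter (+ 1)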
As far as I can tell, Go was designed by people who think C got most stuff right, and they intended to make a language that was hard to screw up in. I think they failed here, though, because neither option types nor generics is that hard to understand, and both eliminate huge classes of errors.
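For what it's worth, here's a minimal sketch of the option-type half of that (the config/port names are invented): the compiler forces you to spell out the "missing" case, which removes the null-dereference class of bugs at compile time:

    import qualified Data.Map as M   -- containers package, ships with GHC

    lookupPort :: M.Map String Int -> String -> Int
    lookupPort config name =
      case M.lookup name config of   -- returns Maybe Int, not Int-or-null
        Just port -> port
        Nothing   -> 8080            -- the missing case has to be handled

    main :: IO ()
    main = do
      let config = M.fromList [("http", 80)]
      print (lookupPort config "http")   -- 80
      print (lookupPort config "smtp")   -- 8080, no runtime crash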
As I understand it, in Go it's pretty common to cast stuff from interface{}. This is a bad pattern because it eliminates type safety. Somebody wrote on HN a while ago, and I'm inclined to agree, that you either have a static type system with generics or you have a dynamic type system. Go is no exception.
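Not Go, but here's a rough Haskell analogue of that pattern (using Data.Dynamic as a stand-in for interface{}): once values go through the universal type, the compiler can no longer check the cast and the failure moves to runtime, whereas a generic function keeps the type information:

    import Data.Dynamic (Dynamic, fromDynamic, toDyn)

    heterogeneous :: [Dynamic]
    heterogeneous = [toDyn (1 :: Int), toDyn "two", toDyn (3.0 :: Double)]

    asInt :: Dynamic -> Maybe Int
    asInt = fromDynamic              -- the "cast": succeeds or fails at runtime

    -- the generic alternative: the element type is tracked statically
    firstOr :: a -> [a] -> a
    firstOr def []      = def
    firstOr _   (x : _) = x

    main :: IO ()
    main = do
      print (map asInt heterogeneous)    -- [Just 1,Nothing,Nothing]
      print (firstOr 0 [10, 20 :: Int])  -- 10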