This is a nice take on the language, and it doesn't descend into the regular lampooning ("lol no generics") found on languagey posts. Kudos!
> The attitude of the decision making around the language is unfortunate, and I think Go could really take a page from Rust’s book with respect to its governance model.
(I wrote part of Rust's current governance model and was a part of polishing it up; though I wasn't so involved when Rust initially started using a consensus based model. I've also tried Go, and written about it [1])
... I don't know if this would work out so well. One of Go's strengths (also weaknesses, but it is a strength) is that the language designers can afford to just make hard decisions for everyone. This is where the simplicity comes from, IMO. Rust is complicated because it tries to cater to a wide range of needs. Go makes decisions even if they cut out use cases, and the language stays simple. Both approaches are valid; but IMO switching approaches would make Go lose one of its best plus points.
What they are getting at is that the Go designers are stubborn and the Rust designers are open-minded... but they don't want to just come out and say that.
For instance both had green threads. Rust designers ran into all the problems with them like interop with anything else and interrupts and fairness, so like everybody else that did M:N threading they threw up their hands and said "well it looked like a good idea at first" and removed it. Go designers literally called it a "rabbit hole" and followed it down to whatever crazy lands it led to, because they promised people green threads so they had to keep it.
True. For the record I like the Rust model a lot more (obviously, as a contributor, I should); but what I'm saying is that there's a tradeoff and both have their good parts. The moment you're open to some community decision making, you're expected to be open to all of it -- Go doesn't get much flak from the community for unilateral decisionmaking. I bet it would if it started to allow some community consensus on some decisions. And like I said, the "let the community decide everything" model probably wouldn't work for Go, so unilateral it is :)
I firmly believe that moving away from M:N threading was the right decision for Rust. Even if nothing else, the fact that Rust FFI performance is thousands of times faster than cgo alone justifies the decision. There are numerous other reasons (fairness, I/O performance, performance of synchronization primitives, implementation complexity) why just using the OS scheduling is superior for a systems language. The primary benefit of M:N scheduling—fast thread spawning due to small initial stack sizes—requires GC, which Rust doesn't have, and arguably has little to do with M:N scheduling in the first place, given that thread stacks are configurable in the syscall.
I believe you know what you're talking about, and so are right about what's best (or really, even possible) in rust, but I can still be sad that there's not some magic way to have my cake and eat it too.
In general mallocs allocate small blocks a lot faster than they allocate large blocks, because small blocks qualify for free list binning and the OS kernel doesn't need to serve you as many pages. (It wasn't obvious to me either that this was the case, but try it and see for yourself!) :)
At RethinkDB we managed our own free lists for stacks for this reason. One funny problem with the first implementation was that a stack could be born on one posix thread and die on another -- and certain threads would accumulate all the stacks, while the others would keep allocating them fresh.
Yeah, caching works OK when your total thread count is relatively constant, but most thread spawning benchmarks do things like spawn 100000 threads, synchronize, and then shut them all down, in which case the thread high-water-mark is constantly increasing and your stack cache always misses.
Could you find out the names and addresses of people making these benchmarks? If so, that seems like it would be a relatively easy problem to fix.
(Edit: I mean of course so we could allocate memory at their addresses for better locality, this is HN after all. Other ideas would need to be pulled to a different arena.)
I've written one largish (large for a 3 man team) production codebase in go (as well as numerous side projects). At first I loved it--it was simple, and I really loved the fact that it compiles to a native binary. I also really liked gofmt. But looking back through what the code has evolved into 2 years later, a huge portion of it is essentially copy and paste code to work around the lack of generics. Another large portion is copy and paste error handling.
I was essentially using go to fill several niches. I wanted a higher level language to handle business logic, but I also needed to work with serial ports, and I wanted something that would run fast with a small memory footprint.
Go seemed to fit the bill perfectly, but eventually it started to feel like a jack-of-all-trades, master-of-none type of thing. Since I had to eventually rewrite my serial port code in C anyway, I think I would have been better off choosing a language with more modern features for the rest of the project.
Maybe it's because I've been using F# (and C# to a lesser extent) lately, but I just don't think I want to go back.
- It's simple. Stupid simple. I can focus on writing code that is simple to grasp while following conventions.
- It's fast. Sure, it's not the fastest language out there, but it's hard to argue with its performance.
- It feels like a lower-level Python. Sometimes I think that's the main goal of the project.
- It comes from a respectable source. This means that it will probably exist in the next ten years.
- Setting it up is simple. No need to deal with runtimes like Java.
- Others who write Go seem to stick to the conventions. I rarely find the kind of surprises that you would find in C++ code. Since Go is simple to understand, people can write code that feels more "common". C++, well, it's not fully understood by many (including me).
One point is that I don't use it to serve HTML or anything like that. Only to do systems programming. Servers talking to each other and/or responding to RPC or rest calls. Don't really serve anything other than files/json/xml.
OP makes some reasonable arguments about the practical uses of Golang, but anecdotally, when I seriously evaluated it for a complex backend service, I felt that I had to write a non-trivial amount of "simple" code that felt like a mix of copy-and-paste procedural code with some very shallow object-oriented programming.
I attempted to write the same service in Kotlin (http://kotlinlang.org), which felt mature and easy to understand. JetBrains did a great job also making it "boring": it felt like a boring attempt at making Java programming enjoyable. That, along with 100% interoperability with Java libraries and incredible tooling, made writing reasonably boring but functioning code very enjoyable.
To each their own, but I'm having a hard time justifying why I'd use something like Go over something like Kotlin. Maybe I'm missing something?
I think "To each their own" is exactly the OP's point -- Go worked for them, but that doesn't say much about whether or not it will work for you (and vice versa). So you're probably not missing anything :)
> Previously, we worked almost exclusively with Python, and after a certain point, it becomes a nightmare. You can bend Python to your will. You can hack it, you can monkey patch it, and you can write remarkably expressive, terse code. It’s also remarkably difficult to maintain and slow.
Performance aside (which is a weak point of Python), the language is certainly not inherently unmaintainable. My most maintainable and enjoyable projects are all in Python; you just have to avoid the stupid things that would make any project unmaintainable (like spaghetti inheritance).
I find this particular argument rather frustrating.
"Dynamic languages are fine as long as you don't do stupid things." One of the huge benefits of static types is that you can extricate yourself from those situations because the compiler can help guide you through large scale refactorings safely.
It's not even that the team is bad or that the authors of the code were ignorant - it may just be that the software has evolved to where the initial code no longer fits the problem exactly.
"Don't be stupid" is completely unactionable advice.
I primarily use ruby, and I agree it doesn't have to be unmaintainable, but I also think that static typing will eventually win because it's easier to make type systems more powerful and expressive than it is to make programmers smarter or able to fit a larger system in their heads.
One reason dynamic typing has so much mindshare is that the comparison was always against Java, which not only has some of the heaviest boilerplate, but can't even prevent a null pointer exception. So in effect Java is 50% safer at 5x the cost of a well-designed Python program. However, a language like Haskell actually brings real value from the type system (among other things), so you get more like 5x the maintainability at 50% the cost.
Obviously those are just made up numbers, and we can debate the particulars of Haskell vs other modern type systems, but it seems clear to me that that is the richest soil for future gains in software maintainability and correctness.
Personally I think optional typing brings the best of both worlds; I don't see why it isn't more popular. It's coming to Python with 3.5, using syntax compatible with versions as early as 3.2, which is fantastic.
Agreed. If you don't carefully control the design of your system, how can you expect to control the code base? I'd say something about waterfalls... but I feel I'd be downvoted into the abyss.
> My most maintainable and enjoyable projects are all in Python, you just have to not do stupid things which would make any project unmaintainable (like say no to spaghetti inheritance).
That's what they all say though. Low level, high level, static or dynamically typed -- it's not a problem as long as you don't do stupid stuff.
There is no language which is going to prevent you from shooting yourself in the foot. At best they will make sure the bullet is not also poisoned.
However, there are a lot of languages which encourage bad patterns (like mixing app and view logic, or implicitly convert arrays into numbers when comparing them to strings or something, he said, referring to no language in particular). Python is not one of those languages. Use it as a tool, not a toy, and it will easily help you produce very high quality, maintainable results.
Take a look at this for example, 1 year old hobby project of mine on which I still work daily - some of my most maintainable code. I have never, with any other language, experienced such joy at refactoring massive parts of a project given how easy it has been.
https://github.com/jleclanche/fireplace
>There is no language which is going to prevent you from shooting yourself in the foot.
How many guns and how big are the guns that are pointed at my foot? Is it an ion cannon or an airsoft gun?
Most can agree one can be a happy user of $XLANG, but there are gradations of advantage depending on what you're working on.
Plenty of people out there that are happiest working in C, but I don't think many would say it's the safest or most productive language to work in.
Also worth thinking about how easy it is to extract meaning from software just by reading it. Explicit datatypes, specifications, grammars, etc. can all help with that. Even better is tools that can make sure your program faithfully represents an implementation of those specifications and act as a denser mental encoding of code than code itself is capable of being.
But it seems more true with Python. Maybe because it's more expressive and terse, maybe because there is no magic (scope is always clear), and also because of PEP8, which new languages tend to emulate and even formalise with linters such as gofmt.
Bet you ten bucks that any large-scale program (>60k LoC) in Brainfuck is unmaintainable -- it's quite obvious that some languages are more maintainable than others, stupid stuff or not.
It's good that the author realizes that there is a tradeoff between the complexity of the language and the time and energy needed to bring new programmers up to speed on a project. This is something that gets missed really often.
However, I think the author completely misses the power of composition. It's usually the best alternative if you want to share code between different types in Go. I have written many large Go programs without ever using the empty interface or typecasting (except for numeric types, where you sometimes need it because Go never does implicit conversions between numeric types).
I agree that the behavior of storing nil in an interface is a wart.
The author spends a long time arguing against the idea that the use of sync and atomic should be discouraged. He claims that it is hypocritical because the standard library uses these packages. But it doesn't seem hypocritical to me to put tricky optimizations into the standard library, but discourage ordinary programmers from writing tricky code in their ordinary applications. The standard library has millions of users; most code has a handful. The tradeoffs are going to be different.
Besides, the author is arguing against a strawman here. Go has always given programmers access to low-level programming resources such as inline assembly, operating-system-specific calls, and even the ability to directly make non-portable Linux system calls. Compare this to Java, where up until very recently, you couldn't even create a symlink without using JNI because the authors were afraid it might be non-portable.
I think micro-benchmarking the performance of channels misses a lot of the point. OK, so you have a microbenchmark where mutexes are 30% faster than channels in a tight loop -- and you're willing to give up all the benefits of channels for that? Talk about missing the point. And if your goroutine does anything interesting with what it's pulling out of the channel, this is even more irrelevant.
> It’s not all that different from your users requesting features after you release a product and telling those users they aren’t smart enough to use them.
Wanted to address this point separately. This sort of design is something you will find very often in well-designed video games: authors saying no to good features because they'll be misused and/or, on a grander scale, hurt the product.
I've enjoyed the little I played with Go. I can't comment on its simplicity/complexity nor the devs' design choices, but I don't think it's "toxic" as the author says, it's just a different way of developing a language. We need more projects doing this; I have rarely seen this be the downfall of any product - and when it is, it's usually because the design team is incompetent, which I definitely don't think the Go team is.
Games seem to have a shorter lifespan[1] than PLs, certainly from an economic perspective. Does the design have to be perfect? No -- just make a good enough game and then make a sequel that incorporates slightly different ideas (and that is always inferior to the original, apparently... or was that movies).
[1] Survivor bias note: we might mostly tend to compare newfangled languages to well-established languages, ignoring all the ones that have fallen by the wayside from our perspective.
Games have shorter lifespans because they are consumable: you play them and throw them away. It gets slightly more comparable when there are games on which you can build a business/career (esports-types, MMOs etc). In fact, if you look broadly at the numbers from the last decades, popular games of that type last for roughly 1.5 gamer generations (~10-12 years), and popular programming languages last for roughly 1.5 software engineer careers (~25-30 years). There are of course very long runners in both categories.
Esports (at least RTS) games have to be constantly iterated on with regard to balance. That tweaking is probably simple enough when it is things like giving a unit 5 more HP, but the designers cannot know a priori how the game will play out at the dedicated, professional level. Maybe the strategies and mechanics that are balanced at the "500 hours played" level work great there, but fall on their face and are inverted at professional levels. Then you are either left with some part of the design so broken that no one uses it (underpowered), or you force the game into a spectator-unfriendly monoculture of whatever single strategy or single unit the players had to adapt to.
Designers of competitive RTS games rely incredibly on player feedback. And they are eager to give it.
The author touches on a lot of reasons I think Go continues to be used. For me, it's the simplicity really.
This ultimate simplicity, and encouraging programmers to write simple code (sometimes, I know, at the cost of having more tedious code) is not all bad.
The "dogma" and the "it's too complicated" parts of Go are sometimes indeed a frustration. However, I feel like they have kept the language and the culture surrounding it largely simple. So both a pro and a con, in my opinion.
Having worked with Go for several years, on many small projects and a few large ones, I find the pros outweigh the cons.
One of the bigger ones for me is simply understanding other people's code rapidly. Go projects seem extraordinarily easy to understand without spending hours reading code. When I am in a position to support/operate/scale/troubleshoot other people's code, which is part of my role, this has been a less challenging experience than other languages.
Is it just me, or has this same article or title come up like 2-3 other times? The only thing remotely interesting in this particular article is the graph of productivity vs. code-base size... mostly because, when you give it some thought, it's actually wrong. An expert programmer will be as efficient as any other expert programmer. The code base will still grow and they will still remain productive. This guy is talking about what happens in this era, how that's our "average". If we up the average so everyone is proficient with language X, we wouldn't see this issue.
Everyone has a cognitive limit on what they can keep track of/in working memory. Once your codebase exceeds that limit (in # of files, lines of codes, types, interfaces, etc.) you lose productivity. 10k lines of python is a lot harder to grasp than 10k lines of go.
And that's not even considering the ramp up time for someone new to the code to jump in, which is even worse in most dynamic language codebases.
Openstack is a very good example of a very large python codebase split into smaller projects. But it's still a large codebase. It's still hard to work with different components and the only thing that happens to hold it together are loads and loads of tests and lints which try to come close to what static analysis could provide.
Swift has a very fortunate position here. Specifically, the interaction surface is very low (mainly user app<->Swift and Glance<->Swift) and the interface is very restricted. It's just an object store.
It may be a great project for migration, just like Glance (although that one doesn't need that much performance).
The projects I don't think will ever be able to move are Neutron and Nova. Even parts of Nova are so tightly connected that it would need a major redesign before RPC can really split the scheduler and other pieces into different projects. Also, Keystone probably has too many integration layers / plugins by now to migrate to anything.
(I'm glad that this happened BTW, the more languages in OpenStack, the better isolated and simpler the interfaces will become)
[1]: http://www.polyglotweekly.com/2015/04/24/thoughts-of-a-rusta...