Hacker News
Why we switched from Python to Go (getstream.io)
380 points by tortilla on Oct 17, 2017 | 397 comments



Python was my entry into the programming world, and I've been an evangelist ever since... Or I was until I ran into distribution and parallelism. Since then, Nim has been my go-to language of choice. It is all that Python was, plus unbelievable speed, compilation to shippable binaries, and some other cool language features that, admittedly, are still beyond my abilities. It's still quite lacking in libraries compared to Python, but after a few attempts (perhaps halfhearted), I never felt Go was a suitable replacement.


For me, the ecosystem matters as much as the language itself these days (actually more). There are languages that I've used at home just as a learning experience, but ideally, I want a language that I can put into production at work. That means quite a list of criteria: only a small number of languages have a well-supported AWS SDK, for example.

I won't say that Go is mainstream yet, but it feels very "production-ready" in those respects.


I'd say the simplest "production-ready" criterion is this: does it have an officially supported and stable release of an IntelliJ IDE? (There is a slight Java-ecosystem bias here, but I think it's only slight.)

So the mainstream languages are...: C/C++, C#, F#, Go, Groovy, Java, JavaScript, TypeScript, Kotlin, Objective-C, PHP, Python, Ruby, Scala, SQL, Swift, VB.NET (source: https://www.jetbrains.com/products.html).

Sounds about right :)


So, off the top of my head, Erlang, Haskell, OCaml, Perl, Fortran, Cobol and every type of shell script are not production-ready. Nice to have that clarified. :P


It was a very bad choice of words.

I wish that I knew a phrase to mean "has all of the documentation, tools, and libraries that are needed for an average team of developers following common practices to be likely to successfully ship and effectively maintain commercial projects, consistently".

Many really interesting languages and frameworks don't pass this criterion. It doesn't mean that they are a failure: the idea of programming languages that are designed for teaching or research, and are specifically not for commercial use, perhaps used to be more common than it is now.


Your 100-ish character phrase is probably the most succinct description of an idea I've frequently wanted to express.


They may be production ready, but have a quite small ecosystem compared to other languages mentioned.


Do shell and Perl really have a "small ecosystem" compared to something like Groovy?


Or there aren't that many greenfield projects in them, even for the ones that managed to build up their ecosystems years ago.


I'm the opposite; I don't sink much time into learning a language if it doesn't have a high-quality vim plug-in and command-line tooling. It's fairly shallow, but I want to learn how things work behind the scenes, and I don't like fighting the editor over my preferences. It's a lot of small things, but why not enjoy your hobby time? This is just my personal preference, and I understand and respect that there are lots of folks who enjoy their IDEs like I enjoy vim/Unix.


> So the mainstream languages are...: C/C++, C#, F#, Go, Groovy, Java, JavaScript, TypeScript, Kotlin, Objective-C, PHP, Python, Ruby, Scala, SQL, Swift, VB.NET

Of those 18 languages, only 13 are in the TIOBE top 50. Apache Groovy and TypeScript are among the ones that don't appear there at all, so it's pushing credibility to call them "mainstream" languages -- the Java-ecosystem bias is more than "only slight".


Slight nit, but I don't think it's accurate to say that Go has an "officially supported and stable release of an Intellij IDE". While Gogland is excellent (I use it as my main Go editor at work), I think it's still technically in pre-release.

On an unrelated note, is Groovy widely used? As someone who doesn't really ever use JVM languages, my impression was that it doesn't see usage comparable to Kotlin and Scala, although I might be mistaken.


Apache Groovy's used for scripting on the JVM, like Bash and Perl on Linux. It's OK for glue code, mock testing, and 10-line build scripts in Gradle. But 5 years ago, some of Groovy's backers tried to re-purpose it as a competitor to statically typed languages like Java, Kotlin, and Scala, which is when things went awry. If it had stuck to its knitting, it could have been used widely for scripting classes written in Kotlin and Scala as well as Java. But it didn't even keep up with Java -- it doesn't even have Java 8's lambdas on the eve of Java 9's release.


Django's ecosystem has spoilt me so much that it's very hard to switch to another language, at least as far as web apps are concerned. I would love to play with Rust/Nim more, but most of the side-projects I'm passionate about involve at least a few things that Python has a lib for that would be a huge time sink to rewrite in another language.


Nim ameliorates this somewhat by being able to easily wrap any C/C++ library, and even has c2nim to create wrappers automatically.

One of the advantages of compiling to C and C++.


I wonder if it's easy to get Nim to run on the ESP8266, huh.


It appears so: https://github.com/TomCrypto/caret

> This is an experiment for running Nim code on the ESP8266 ("NodeMCU") microcontroller. The end goal is an extendable framework on which to build autonomous ESP8266 applications, along with an associated web service which acts as a base station providing back-end message aggregation and front-end information display/control panel.


What libraries are you most missing for Rust?


These days I mostly do web, so Django and the whole ecosystem are just missing from Rust, but I'd settle for just the ORM. I found Diesel pretty hard to get started with, unfortunately :/

My other hobby is microcontrollers, and I would love it if I could run Rust on my ESP8266. I can already run MicroPython, but it feels a bit hacky (probably unfairly), and it would be amazing if I could use Rust and easily link in all the C/C++ libraries the Arduino framework has available.

I know this reply isn't useful (I miss all the libraries, woo), but it's as specific as I can get. I started a very simple website uptime monitoring service:

https://gitlab.com/stavros/panoptes

I didn't get very far before the difficulty of integrating Diesel just stalled the whole thing.


Yeah, Diesel is tough; they're working on better docs which should help out quite a bit.

IIRC someone is working on the ESP stuff...

> I know this reply isn't useful

Naw, it's all good: it's just as much about qualitative as quantitative. It's also because I'm writing a small framework in Rust in my spare time, so "rust for web" stuff is extremely top of mind for me. Thanks for taking the time :)


> they're working on better docs which should help out quite a bit.

That'd be great, because I currently can't find a niche where Rust is so much better than Python that I won't just fall back to that, impeding my Rust progress.

> IIRC someone is working on the ESP stuff...

That's what I heard too, but progress seems stalled. Hopefully, in the end, it'll be easy to run on the ESP, but the biggest problem is the ecosystem (which is also much of the reason why I don't use micropython all the time). There are so many libraries for the Arduino framework that C is hard to escape.


I use both Go and Python regularly, and both have their uses and issues. Go for me is a rather simple tool that doesn't get in my way and just lets you do stuff -- on its own it's relatively boring. While it's probably not everyone's cup of tea, I like its simplicity. It does have some downsides the article mentions, like package management and versioning. I understand why vendoring is being pushed as 'the official solution', but it still feels backwards to me.

The thing that eventually makes me grab Go time and time again is how damn easy it is to drop into the code of some library -- even (and maybe especially) the standard library, something I rarely did or do in any other language. With Go there's little friction to do this, for multiple reasons. The source of truth for pretty much any project is almost exclusively GitHub, godoc.org documentation links directly into the source code for every single function and struct, and Go forces you to set up a dev env that makes its structure clear: you know exactly where your libraries and their code will end up. About that last part I had mixed feelings at first, but after a while you appreciate that the libraries you're using aren't tucked away in some system directory. Add to that that Go is a simple, pretty readable language, and you end up with a very transparent library system.

Would I ever even think about jumping into the paramiko library (which I've used on multiple occasions)? Not at all, so I didn't. The first time I used the crypto/ssh lib, I also didn't think about jumping into the code, but somehow I did, because it was the natural thing to do, and it made me understand the internals and SSH a lot better (paramiko was slightly easier to get started with, though).

Another thing I really like is that the language encourages not writing 'applications' but rather libraries that can be reused by virtually anyone, with your final application being a relatively small front-end shim over them. While this is not hard in Python, creating a separate library for smaller stuff somehow always felt like overhead, while in Go it feels like the natural thing to do. In Python, there's yet another ton of overhead if you want to add something to PyPI -- while in Go it's a `git push` away. This also creates serious issues and pitfalls, but it makes contributing a library a lot easier.


> In python, there's yet another ton of overhead if you want to add something to PyPI - while in Go it's a `git push` away.

Note that it is possible to pip install from git repos, and it's actually very easy. You're just not expected to have to do that :)
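For instance (the repo URL below is illustrative, not a real project), pip's VCS support makes it a one-liner:

```shell
# Install directly from a git repository; pip clones and builds the
# package itself, no PyPI upload needed.
pip install git+https://github.com/someuser/someproject.git

# Pin a branch, tag, or commit by appending @<ref>:
pip install git+https://github.com/someuser/someproject.git@v1.2.0
```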


I also tried Go and didn't feel it was a suitable replacement either. I had never heard of Nim, I'll have to give it a try, thanks.


I never heard of Nim, something to try out at some point. What's your favorite Nim tutorial?


Nim by Example[0] is a great introduction. The blog mentioned in the OP also has a writeup that explores some of the tooling[1]. After that, the official tutorial[2] is a comprehensive dive. The standard library documentation is sometimes lacking but is easily searchable.

[0]: https://nim-by-example.github.io

[1]: http://howistart.org/posts/nim/1/

[2]: https://nim-lang.org/docs/tut1.html


Second this. I've been playing with Nim for the past couple months and love it. The biggest downside has been lack of tooling.


> High-performance garbage-collected language Compiles to C, C++ or JavaScript Produces dependency-free binaries Runs on Windows, macOS, Linux, and more

Wow. That's cool. I'll have to look into this.


Agreed completely. We're in the process of porting our Python 'shim' (aka agent, at https://Userify.com - plug SSH/sudo key management) to Nim right now, so that we can provide a fully static shim for CoreOS and other minimal distros, and eventually Windows; there are a few languages that can do this cleanly, such as Go, Ocaml, and Lua, but Nim is just blindingly fast and actually pretty fun to code in. Great stuff.


Can you please add your company here? :) https://github.com/nim-lang/Nim/wiki/Companies-using-Nim

There are some companies using Nim, but they're not in this list


Absolutely - will do!


Would you kindly share your experience using Nim for that project?


For that project, we had very specific requirements: easily handle SSL/TLS with contexts and control over self-signed vs certificate checking, JSON processing, speed, nice syntax, and one of the most challenging requirements: statically compiled, linkable against musl and libressl, while still supporting mingw_64 for windows. Only a few languages have flexible compilers that can do this; for example, rust can't (afaik).
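For reference, a fully static musl build along those lines can be driven entirely from the command line. The invocation below is a sketch based on Nim's documented compiler-override and linker-passthrough switches; the file name is illustrative:

```shell
# Build a release binary statically linked against musl (file name illustrative).
# --gcc.exe / --gcc.linkerexe point Nim's C backend at musl-gcc,
# and --passL hands -static straight through to the linker.
nim c -d:release \
    --gcc.exe:musl-gcc --gcc.linkerexe:musl-gcc \
    --passL:-static \
    shim.nim
```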

The experience so far has been outstanding. Nim has functioned flawlessly with a minimum of magic. It works very cleanly, and the C compiler backend is well integrated but still swappable (i.e. between gcc, clang, mingw...). Nicely color-coded output, too.

Exceptions are caught with full tracebacks, and pre-compile checks quickly point out the exact location of syntax errors. (Good, clear error messages are surprisingly missing from many languages.)

Here's an awesome example: on my first day of coding, I was able to replicate Python's "+" string concatenation operator ("hello" + "world" versus "hello" & "world" in Nim) with a one-liner:

    proc `+` (x, y: string): string = x & y
This is pretty amazing; not only is it readable and concise (and it bears more than a passing similarity to Python's lambda, of course), but Nim comes with the ability to define new operators right in the language, and the compiler raises an error if operators, procs, types, etc. would introduce ambiguity.

Nim compiles quickly and its type inference (where it guesses what type of variable you're working with) makes strong typing mostly painless, and you still get all of the advantages (type safety, speed) of static typing.

There are some trade-offs that are made (obviously), but the language designers seem to make trade-offs in favor of speed and robustness over language features -- but this still leaves a lot of room for features.

I also like how the syntax has a lot of similarities to Python's. The only thing I've missed so far is a Nim interpreter, so that I can get up to speed on the syntax faster or try things out quickly. The tutorial on the Nim website is definitely not for beginning coders (who would probably be quickly scared off by words like "lexical"), but it quickly covers the language syntax for experienced coders and seems to borrow a lot of the best ideas from other languages.

Nim is basically awesome. The few downsides are that the standard library is still pretty light (but that gives you an opportunity to build something great and have it be widely adopted), that there's no interpreter, and that the tooling is still a bit lighter than older languages. All of these will be improved with time.

And, it's fast. Really fast. Compare nim in these benchmarks[1] to any other mid-level (or even low-level) language and it really shines. It's generally much faster than Go, for instance.

1. https://github.com/kostya/benchmarks


> no interpreter

Try running `nim secret` on the command line, it's not perfect but it's enough to play around a bit.


Thank you!


If you do concatenation a lot, it's better (for performance, of course) if you make it a template (so Nim will just replace a + b with a & b):

  template `+`(a, b: string): string = a & b
But the C compiler is probably smart enough to inline your proc :)


Ah, brilliant! thanks!!


You can also check out this: [redacted] But there are almost no features in it, as I'm lazy :) I think the best feature in this lib is a Python-like range type: [redacted]


[redacted] is really slick... looks so much like Python.


Rust can do what you asked, as far as I know.


Excellent-- thank you for the correction!


Thanks for your comment here. I first heard of Nim from you here. It genuinely looks like the best of many worlds:

- Easy to use and learn like Python: check
- Fast and efficient, approaching the levels of C: check
- Ability to spit out a binary that just works, without installing all the batteries: check (I'm particularly hurt by this in Python)
- Uses multiple cores by default: check

I'm going to spend a good deal of time playing with this and building some CLI tools at work.


Can you elaborate somewhat? How long ago did you try Go? Also, I agree: having a generous assortment of contributed libraries makes all the difference.


+1 for Nim: it writes like Python and runs like C.


I tried Nim in a non-commercial capacity and liked it. I wrote an Aho-Corasick string-matching algorithm in it to see how it fared against Lua/LuaJIT, and it was fairly close in speed. The code was also quite pleasant to write.

I remember that debugging it was a bit of a pain back then, if it was a real option at all. Maybe things have changed.



> Since then, Nim has been my go-to language of choice.

Out of curiosity: You use Nim professionally? Do you work for a company? What kind of software do you make? And what sort of people or organizations are your customers?


There is a huge need for Python to be compiled or made faster.

I wonder if Python 4 could make this happen.


PyPy exists.


>Go forces you to stick to the basics. This makes it very easy to read anyone’s code and immediately understand what’s going on.

I'm not criticizing Go, since I have no LoC in it, but in other restricted-and-flat languages and areas, our company's expertise quickly (read: in half a decade) hit its limit, with no chance to turn the language partly into a DSL and level up. It's like playing an RPG where you're stuck at level 5 and never get powerful enough to take a mid-level quest, even with a great party.

While it seems cool to have automatic JSON serialization, a standard build process and out-of-the-box concurrency, the inability to create something that only your team can use and understand effectively means you're locked into growing markets (bubbles) and never gain real expertise and/or a budget edge over bloated competition. That can play out badly for your future, should your road cross with someone who has.

Again, the type I’m talking about must be rare, and “there is only one opinionated way to do it” seems to fit better on average tasks.


> inability to create something that only your team can use and understand effectively means you’re locked to growing markets (bubbles) and never have real expertise and/or budget over bloated competition

Could you unpack that a bit further?

By "inability to create something that only your team can use and understand effectively", do you mean "inability to create something really powerful"? As it stands, it sounds like a good thing.


I understand what you mean by "inability to create something that only your team can use and understand", but the article doesn't say anything about that and isn't really about developing new code in Go. The article is about Python not matching their needs after a few tries at optimization, and about Go having a similar development time (although if the code was already implemented in Python, I assume the Go code was almost trivial to write, so I'm not sure that was a valid point).

As for "'there is only one opinionated way to do it' seems to fit better on average tasks", from the Zen of Python: "There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch." And Python has been used to do plenty of above average tasks.


At a high enough level, Python does not live up to that zen. It does not restrict how you will organize your code, it just restricts instruction level options.

Keep in mind that the Zen of Python is mostly about how Python differs from its ancestors (basically, Perl). It's not a work on software engineering.


I'm surprised to see Go's error handling listed as a disadvantage. Go encourages writing good error messages, and that's one of my favorite things about the language.

To get a sense of it, take a look at a failing test in a language such as Python that uses asserts for testing and compare it to the equivalent written in Go. Quite often, the error message in Go will be clear and to the point. On the other hand, I've seen plenty of assertion based tests in other languages that report inane things like "1 != 2" or "true != false" when they fail.

Go encourages putting the effort into this up front. It pays off later when things go wrong and you want to know why.


Probably complaining from someone used to exceptions and/or assertions, where there is perhaps less boilerplate and manual verification that context bubbles to the right place.

You can get there either way, but what you're used to weighs heavily.

For your example, depending on my experience, knowing that 1!=2 in some code I'm using directly might be more meaningful or actionable than "out of filehandles for accept()" in some deep chain of dependencies I know little about. Just depends on what you're used to.


Or anyone who's used a language where errors aren't as stringly-typed as Go and where the compiler can enforce error handling. Exceptions have benefits and drawbacks, but any language that can return a Result type will have better error handling than Go in basically every way.


In Go, `error` is an interface, which means you can return anything that implements the `Error() string` method as an `error`.


Yeah, but 99% of libs won't return anything with additional accessible structure.


Yeah I think the point is, errors shouldn't be overloaded with too much extra structure. They should be errors. If additional structure is needed, it should probably be sent separately via the common multiple returns idiom in Go.


Additional structure like, say, a numeric error code? An HttpError could definitely contain the corresponding status code.

But I think the bigger point is that, for libraries, errors should be an enumerated set of possible error conditions, not strings. You say "additional structure", I say the duplicate filename, the invalid email address or, crucially, the error that caused the error...chaining errors is very useful. The multiple returns idiom in Go may make this possible, but it's hacky and strictly worse than languages that can utilize generics and a more richly-typed set of errors.


The point I'm making here is that everyone knows 1!=2. It's not helpful for a test to tell you that because then you have to go hunting around to figure out what is wrong instead of it just giving you at least a rough idea right away. It does not depend on what you're used to.


I get that part. I'm saying Go doesn't fix stupid either. I can panic 6 layers into a dependency chain with a verbose error message that helps nobody.

It's often easier to fix 1!=2 if I know the entire error stack than it is to know exactly what happened where, but not the path down.

Like as obtuse as 1!=2 is, with exceptions, I might more easily recognize it as a reference leak.


The reason I, personally, don't like it is that it's basically a slight improvement over C's error handling at a time when everyone else has substantially improved. In C, basically every function could fail and you needed to check the return code. In some cases you'd need to make up an impossible value to return to signal an error, and in some cases (e.g. functions returning binary data) you'd need some other global variable to be tested. So Go fixed all the error-signalling issues but still has the problem of needing to check every function return... which no one is going to do. In C, if printf fails, that usually just happens without the program noticing. In any language with more modern error handling (note: not just exceptions), printing can't silently fail.


> I'm surprised to see Go's error handling listed as a disadvantage. Go encourages writing good error messages, and that's one of my favorite things about the language.

The biggest reason for me is the inability to get stacktraces to where an error comes from.


Work is in progress to improve that: https://research.swtch.com/go2017#error


> Quite often, the error message in Go will be clear and to the point. On the other hand, I've seen plenty of assertion based tests in other languages that report inane things like "1 != 2" or "true != false" when they fail.

An error message like '1 != 2' is a lot more meaningful when it's reported as the reason that test_new_foo_has_2_bars failed. Occasionally it's nice to give a more explicit message, which is why in eg Python you can optionally supply one, but it's not generally an issue if every test is really only testing one thing. To me, the Go approach seems to encourage writing cluttered tests that test several things at once.


I think the semantics of Go's error handling is pretty good, but the syntax could use work.


Ya my number 1 request for Go2, better syntax for error handling.


There is an ongoing reflexion on making error handling less verbose, but as you can see in the link below, there are tradeoffs, as always:

https://github.com/golang/go/issues/21161


Something as simple as allowing the use of multi-result calls in if statements would make a huge difference. Being able to write "if err := blah(); err != nil { ... }" is great (although it does kind of hide the original call.)


This is possible in Go right now. Use "if thing, err := blah(); err != nil { ... }".

This is common for things like map access. See an example here https://play.golang.org/p/P3k5tgFLd2


Wait! Is this a Go 1.9 thing? I've been using 1.8.3 all this time.


I don't think it's a 1.9 thing - I've got code from 2017-02 which uses it (the quickest example I could find in my repos)


WTF? You are absolutely right. I could've sworn I had issues with this before. 25% of my irritation with Go just evaporated (the other 75% is how hard it is to run Delve, but I'll figure that out.)


This has been possible for years ;-)


> On the other hand, I've seen plenty of assertion based tests in other languages that report inane things like "1 != 2" or "true != false" when they fail.

In Python, one would need to supply the message argument to avoid that. But Go seems no different, really; you still have to supply the message. My understanding of the testing package is that Python's:

    self.assertEqual(expect, actual, f'expected ({expect!r}) != actual ({actual!r})')
is approximately:

    if expect != actual {
        test.Errorf("expected (%s) != actual (%s)", expect, actual)
    }
I'm honestly not sure which is "better".

I do wish Python could figure out something better than "1 != 2", s.t. the assert would just do The Right Thing™ in a graceful way, but doing so would require facilities the language hasn't got (e.g., some macro capability). There are some packages that do some black magic stack walking to do slightly better, IIRC.

To me, Go's error handling falls a bit short because it is possible to forget to handle an error (though thankfully unused variables being a compile error makes this more difficult), and you don't get an error or a result: you always get an error and a result. (The language lacks a sum type, so it can't really do much better here.) It's on the programmer not to use the result, and to me that felt and still feels weird. The constant do-something, check, propagate dance is a thing, too... I had hoped C had beaten that out of language design as too tedious, frankly. (Cf. Rust, where even though error propagation is always explicit, it was `try!(foo)` early on, and `foo?` more recently.)


py.test does the magic walking, and it does indeed make for much less verbose tests -- no need to remember which self.assertSomething method to call either:

    x = 1
    y = 2
    items = [4,5,6]
    assert (x+y) in items
Outputs this, replacing the variable names with values

    test.py:4: in <module>
        assert (x+y) in items
    E   assert (1 + 2) in [4, 5, 6]


If you aren't using pytest, you don't know what you're missing. It cleans up so much of the testing boilerplate for Python.


Though the magical global dependency injection is still somewhat obtuse, and not overly new-team-member friendly


I think the if-statement one is clearer, but in terms of results they're both fine. At the same time, I usually don't see people put in the extra work to add these helpful messages to their assertions. Maybe one reason is because they make the code a little harder to read.


I've been searching if there's an equivalent in some Python unit test framework for Google test's predicate-formatter feature. Crafting an intelligible failure interpretation that even directs a developer for what to fix is solid gold. https://github.com/google/googletest/blob/master/googletest/...




How do these shallow articles get upvoted so much? They don't have much specific information beyond very generic "developer productivity" claims.

Let me give a specific example where moving to Go really helped our tooling:

Go has some great interfaces, specifically its net & ssh clients. In order to perform operations against some machines, we have to tunnel through bastions; however, we'd also like the tool to work when an operator has already ssh'd within the region (the tool is installed on hosts as well, so that long-running tasks can be performed).

It was easy to create HTTP clients that tunnel through an SSH connection via Go's Dialer interface (present in the net & ssh packages), or connect directly if no tunnel is needed.


I think in general people vote up stuff that glorifies their favorite language/smartphone vendor/operating system, even though many of the comparisons are shallow.


Personally I often upvote based on subjects that I'd like to see more discussion about.

(In this particular case I did not vote on the OP.)


Actually, moving to Go was a very thorough and long process. We did small example projects and tried out many of the libraries we needed. The blog post is more of a high-level summary of why Go is such a great language.


Oh hey, it's you. I weirdly recognize your name from many many years ago. I have vague memories of dealing with you in submitting an open source patch for Django Facebook. Or maybe it was just a documentation change. Either way I remember you being very polite and professional. Just wanted to say that stuck with me and please keep it up, as I can see you're doing here!


I thought it was very thoughtful, detailed, well written and touched on all the points I would need to make a similar decision if it ever came down to it. Thanks for writing it.


Did the Go implementation of the ranked feed use the algorithm optimizations discovered while optimizing the Python implementation?


No, most of the optimizations were Python-specific. But of course development time was a bit faster the second time around, since we were solving the same problem again.


But that is some very specific plumbing, and may not be relevant for 99% of use cases.

If, later in your project, you'd have to add GPU computations to your software, would Go still be a better choice than Python?


Because it's interesting to read others stories and thoughts.

I don't use Go nor Python but still thought the article was interesting.

> they don't have much specific information except very generic "developer productivity"

They give you a lot of reasons but in reality choosing a language is seldom about arriving at a scientific conclusion and much more about what feels best. Just like choosing a dish at a restaurant.


I think the article is not that bad. I especially like the comparison: Python was a bit faster to develop with, but took much longer to optimize. Note that they actually used Python long enough to spend quite some time considering the move to Go; this means Python's performance ranged from good enough to barely enough for a (hopefully) long period of time. Also note the scale of that company's operations: if Python holds up at even 10% of that scale, then it's certainly good enough for me!

That basically confirms the strength and weakness of Python: it's very good for prototyping, and if performance issues arise (which can happen later than one thinks), it's tough to optimize.


I'm a bit sceptical about the comparison of optimisation. Specifically when the AST was mentioned (did they parse the expression as Python code?) and Python came out that much slower even after fixes. As long as they were interpreting that, rather than compiling the expression into native code, I don't see a good reason for Go to be faster. Interpreting expressions like that in Go would be almost as dynamic / lookup heavy as in Python.

I'd like to see both apps for comparison / more context.
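For what it's worth, the dynamic, lookup-heavy interpretation being described can be sketched with Python's stdlib `ast` module. This is only an illustration of the general technique; the expression and field names are made up, not the article's actual code:

```python
import ast
import operator

# Minimal tree-walking evaluator for arithmetic expressions.
# Every node visit is a dynamic type check plus dict lookups,
# which is why interpreting stays slow in any host language.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(node, env):
    if isinstance(node, ast.Expression):
        return evaluate(node.body, env)
    if isinstance(node, ast.BinOp):
        return OPS[type(node.op)](evaluate(node.left, env),
                                  evaluate(node.right, env))
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.Name):
        return env[node.id]
    raise ValueError(f"unsupported node: {node!r}")

tree = ast.parse("popularity * 2 + age", mode="eval")
print(evaluate(tree, {"popularity": 10, "age": 3}))  # 23
```

A Go tree-walker would do essentially the same type switches and map lookups per node, which is the point: the interpretation overhead doesn't vanish just by changing the host language.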


Although, if I spent over 2 weeks developing and optimizing a solution in Language A, I would fully expect to develop and optimize it an awful lot quicker in Language B.

Additionally, a lot of performance issues can depend very much on what libraries are used.


The reasons in the article might be valid for this particular company, but in my experience, the performance aspect is less valid in 90% of applications.

Either, the database is the limiting factor (and thus, languages like Python are fast enough, anyway) or the really performance demanding parts are located in <5% of the coding.

In my case, I enjoy the productivity that Python gives me and if I encounter such cases, that demand very high performance, I implement them in C or with Cython (or both).


Because the title starts with "Why". Psychological effect.


What exactly is the psychological effect tying the word "why" to an article getting more attention?


Did you read the article? It certainly wasn't shallow nor was it negative.

You are right that 90% of these posts are shallow, but this one was really a good read.


To each their own - I tend to agree with OP. My takeaway was rather close to "goroutines and gofmt are good" which are about two of the most obvious talking points when looking at Golang. Shallow has a bit of a negative connotation, but I was personally left feeling hungry for a deeper analysis after reading that.


Why does anyone write web apps in Python? PHP? Ruby?

Modern Java and Go are so much faster than the alternatives that it's stupid to consider anything else if performance is important.

Golang and Java can manage over half a million HTTP responses a second. Node is pretty fast, but why bother when Java is many times more mature in features and tooling, supports concurrency, uses less RAM, and is usually faster? People moan about the "huge" Java runtime when JavaScript uses 3-5x more memory and has a huge runtime of its own.

All the big companies are using Java and Go almost exclusively for high volume endpoints and it blows my mind the amount of mental gymnastics some companies go through to avoid following suit.

Java has come a long way since J2EE. These days it's asynchronous, non-blocking, serverless, etc.: pretty much all the buzzwords thrown around about Node, except it's not JavaScript, which IMO is a huge win.


The answer is that for a huge variety of software, performance is not important, or perhaps is only important for a subset of the application.

My personal experience is that the dynamic languages you've laid out generally have frameworks that are extremely conducive to rapid prototyping (Django is my favorite). I've seen and done the dance many times -- start with a Django/Rails/Laravel app, get a free admin and build up some CRUD pages in no time flat, and then once you've got enough traffic to care, move parts of the application to more performant platforms (Go/JVM usually) as necessary.


Yeah, plus even if performance is important, the app layer isn't necessarily the best place to optimize. It doesn't really matter how fast you sprint between database calls if the database and its IO dominate your site's performance profile, which they often do...


Everyone seems to say this and also to write really slow websites.


Are these sites slow because the software on the server-side is slow or because they're JavaScript bloated garbage?

That's a serious question: I find it's often hard to tell what the bottleneck might be in these applications.


The vast majority of slow websites I've seen written with RoR were slow because the DB layer wasn't optimised: N+1 query problems, pulling way more data than needed and then processing it in Ruby, missing indices, etc.
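The N+1 pattern looks the same in any language; here's a minimal sketch with Python's stdlib `sqlite3` (hypothetical schema, purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'hello'), (2, 1, 'again'), (3, 2, 'hi');
""")

# N+1: one query for the users, then one more query per user.
slow = []
for uid, name in conn.execute("SELECT id, name FROM users"):
    titles = [t for (t,) in conn.execute(
        "SELECT title FROM posts WHERE user_id = ?", (uid,))]
    slow.append((name, titles))

# One query: let the database do the join instead.
fast = {}
for name, title in conn.execute(
        "SELECT u.name, p.title FROM users u JOIN posts p ON p.user_id = u.id"):
    fast.setdefault(name, []).append(title)

print(slow)  # [('alice', ['hello', 'again']), ('bob', ['hi'])]
```

With two users that's three round trips vs. one; with 10,000 users it's 10,001 vs. one, which is why this fix usually matters far more than the app language does.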


Either that or by slow views. URL generation is often a noticeable culprit.


Most websites are written to minimize developer time, not processing time.


They say this and write slow websites because they don't care. They only care about their code being "beautiful" in some weird sense.


If that's really the case, why do people spawn multiple instances of their app?

A python application can be anywhere from 10x to 50x slower than a native application. It also probably consumes at least 5x more memory.

Writing the same app in a compiled language is not even an optimization. It's just baseline work to ensure the code is not super slow.

Like, if you know you will sort a list of 10 items, choosing quicksort from the stdlib instead of bubble sort is not even an optimization. It's just common sense.


> why do people spawn multiple instances of their app

For concurrency (number of requests handled at once) instead of speed (end-to-end time of a single request).


This is not true. Java and Golang can use asynchronous IO and maintain thousands of concurrent connections. It's just another case where slow languages are... Slow


If that was the only reason (which it is not), it would still be a very good reason to stop writing code in these languages.

Why waste 10x more memory?


> Why waste 10x more memory?

That sort of question is totally missing the point of why people use these languages, yeah? Languages in the web world don't tend to be chosen based on memory requirements (or speed as this suggests). Are there cases where you want to think about that? Sure.

People have plenty of reasons they'd want to use Python over Go, and vice versa.


The waste in memory is just an additional negative point.

"and vice versa" < Sorry, but no. There's no equivalency here.

The only reason I would ever use python is for small scripts that I only run on my machine and don't need to deploy anywhere.

Maybe 10 years ago Python was an attractive language because Java sucked, C# only ran on Windows, and there weren't many other good choices.

Now there are many expressive languages that are also statically typed and fast. D, Swift, Kotlin, etc.


The ecosystem is a much bigger deal. There's an officially supported Python library for every SaaS product on the market, and many libraries that are best-in-class in areas like data science. It takes minutes to write to PDFs, make graphical charts, edit images, and handle a million other nuanced, minor parts of apps that you want but don't want to spend a ton of time writing.

Java is the only static language that features roughly equivalent levels of support ecosystem-wide.


Forking 10 processes does not use 10x the memory of a single process starting 10 threads. It's actually almost identical. Both are implemented by the kernel using clone(). Many older tools written in "fast" languages like PostgreSQL and Apache also use forking.


We're not talking about forking here. Python/Ruby apps are actually spawned as several separate processes.


Not for almost a decade. Ruby web servers and job processing frameworks have used forking out of the box since the release of Phusion Passenger 2 in 2008 and Resque in 2009.


This just isn't true on any decently designed system I've seen. Practically any database can manage 5k complex-ish queries per second. For the common simple queries closer to 50k.

Good luck getting more than 100 calls per second out of the slow languages


If you start getting into decently designed systems territory, you're still going to have trouble beating some of the stuff that comes out of the Python/Lisp/Node communities. For instance: https://magic.io/blog/asyncpg-1m-rows-from-postgres-to-pytho...

Anyway, if you're locally looping for hundreds/thousands of queries to the database, instead of writing one query or calling a stored procedure, you're probably doing things wrong.


I'm talking about API calls where you make a few database queries, not local or batched.

Java/Go with pretty much any database can manage about 50,000 individual queries+REST calls a second.


Ah I see. I still think if you're measuring that way, there's no reason languages like Python et al. can't accomplish similar too. And if they can't, well, there's always adding more machines. Horizontal scaling tends to occur no matter what you're using, so is the argument just that with less productive (probably a matter of taste for many at this point) languages you'll have to scale out later? That's a tradeoff to consider, but there are other tradeoffs too.

https://www.techempower.com/benchmarks/#section=data-r14&hw=... has some interesting benchmarks, apparently we should be using Dart or C++. Go does slightly better than JS but not a lot. Some newer Python frameworks aren't on there yet. None of them reach near 50k, but I don't know the details of the benchmark and they aren't all using the same DB. Certainly you can get crazy numbers depending on what you're testing. e.g. https://github.com/squeaky-pl/japronto gets one million requests per second through async and pipelining but those requests aren't doing much.


True, horizontal scaling will always save you no matter how slow the front end is. Cost becomes significant at a certain level, though. For example, Google estimates each search query goes to over a thousand machines. If you need 100 times those thousand machines to serve a query because the back end is PHP, it adds up.

And you can make Python or even PHP fast if you try hard enough.

My argument is that the engineering overhead for Go and the new breed of Java frameworks is small enough that it makes no sense to use anything else if you're planning on scaling for real.

If you start with something else, the cost of making a slow language fast, plus the multiples of extra machines you need, costs far more than just using the faster language to start with.

For the benchmark you posted, take a good look at the "realistic" vs "stripped" implementations and whether the test used an ORM. You'll quickly see that the realistic test applications with any kind of ORM are exclusively C#, Java, Go, and C++


And then you end up with slow applications or websites. Fast response times under high load can pay off and easily recoup longer development times. And it's much harder to change the system once you're successful.

Cost difference is another topic. While servers can be cheaper than developers, saving 90% of server cost can definitely clear some budget for enhancements.


> Modern Java and Go are so much faster than the alternatives that it's stupid to consider anything else if performance is important.

You answered your own question. Why bother with those languages when the language isn't the bottleneck? In those cases, what language one uses becomes an entirely subjective matter.

> All the big companies are [...]

For every large company using Java, Go, Rust, C, etc. There's another one (hell, probably the same one) also using Python, Ruby, PHP or JS.


And most of those other companies, like Facebook, are writing endless hacks, VMs, or entire languages to fix the slow runtime :)


Sorry, I have no interest in getting into a religious flamewar. I will only say that I find it weird that you think that there aren't endless hacks and VMs in [insert here the languages you like].


Eh, vanilla Java or Go is at least an order of magnitude faster than those other languages without messing with anything. You can mess around with the JVM and interfacing to native code but it's rarely needed because it's already within a low multiple of C performance writing things normally.

Arguments about what language is better are pretty flamey but it's hard to argue that performance is not an advantage of Go/Java


> it's hard to argue that performance is not an advantage of Go/Java

And I never even remotely argued such a thing so I'm confused as to why you are saying this in reply to me.


I wonder the opposite. Hardly any system needs to process 500k responses per second. But nearly all of them benefit from the developer productivity that Python/Rails offer.

My current company has 15 Java & React engineers where 2 or 3 Rails engineers would suffice. Load tops out at maybe a dozen requests/second. Feature development is super slow. System complexity is off the charts.


> Feature development is super-slow. System complexity is off the charts.

That probably has little to do with language/stack and everything to do with constantly changing requirements/system growth.

To your required engineers comment, Spring Boot + jOOQ is easily one of the most productive backend stacks I've ever used. A single engineer could easily build a large API leveraging the stack.


Take a look at Vert.x and Dropwizard. Spring Boot can handle maybe 10k requests per second, but Dropwizard is around 200k and Vert.x maybe a million.

If you're only using them for a REST API they offer similar features. Spring Boot supports loads of other stuff you probably don't need for something API driven


I'm sure you'd need 10 or 15 bad Ruby developers as well. I've managed to kick things out the door very happily with one Java and one React developer before.


> All the big companies are using Java and Go almost exclusively for high volume endpoints

Because mature companies in competitive markets live or die by operational cost effectiveness.

Growth-phase companies, or ones with moats, live or die by other means; if they are around long enough and are targeting a valuable enough market, they'll probably eventually become a mature firm in a competitive market. But a good way not to get there is to focus on the needs such a firm would have, rather than the needs the firm they actually are right now has.


> All the big companies are using Java and Go almost exclusively for high volume endpoints

Please cite “all the big companies.” Also, exclusively? No seriously, that is a bold claim. Google isn’t one of them, because most of those hotspots are still written in C++. I don’t know how you can claim this based on some occasional company blog posts about changing just one or two endpoints out of, say, 100.

What about Rust? I also know companies rewrote some hotspots in Rust too.


Pick the Fortune 500 list, take out the SV darlings, and the majority of their backend stacks will be a mix of Java and .NET deployments.

Easy to find out just by looking at their open job positions.


This is true even with most SV darlings. Netflix, Google, Uber, and Amazon use tons of Java. Probably the majority of their systems.

Microsoft is an exception because they built C#, which is basically a more modern but less popular variant of Java. There was even a project for a long time that let you use Java code in C# projects by converting the bytecode; they're that similar.

The notable exception is Facebook. They were stuck in PHP hell for so long that they redesigned the language to make it work.


I'd say that you find a fair share of C++ as well. Especially for high volume endpoints.


Quite true, although it tends to be used in native libraries called from one of those managed languages.


Google has a huge amount of code written in Java, I would say the majority of their systems. Just look at their open sourced projects and job listings. Over 50% Java easily.


> Why does anyone write web apps in Python?

The new generation of Python web frameworks is pretty fast (check out Sanic[0][1] for example), Python has a huge, growing ecosystem, you get rapid time to market, and if you find some part of your application to be the bottleneck, you can replace it with C/Rust, which you probably won't need, because your company will likely never scale that far.

[0] https://github.com/channelcat/sanic

[1] https://magic.io/blog/uvloop-blazing-fast-python-networking/


The problem with asyncio is that you can't take advantage of both it and the bazillion synchronous-io libraries already in the ecosystem.

For large web apps it's still pretty much a nonstarter unless you're willing to give up full use of libraries like sqlalchemy.

Nodejs had a great advantage by starting with asyncio from the beginning.

M:N threading would have been a much better fit, though that's out of the question given the GIL.
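There is a partial escape hatch, though: async code can push blocking library calls onto a thread pool via `run_in_executor`. A sketch, where `blocking_query` is a made-up stand-in for a synchronous driver call:

```python
import asyncio
import time

def blocking_query(n):
    # Stand-in for a synchronous library call (e.g. a sqlalchemy query).
    time.sleep(0.2)
    return n * 2

async def main():
    loop = asyncio.get_running_loop()
    start = time.perf_counter()
    # run_in_executor moves each blocking call to a worker thread,
    # so the event loop stays free while they run concurrently.
    results = await asyncio.gather(
        *(loop.run_in_executor(None, blocking_query, n) for n in range(5)))
    elapsed = time.perf_counter() - start
    print(results, f"{elapsed:.2f}s")  # five 0.2s calls overlap instead of serializing
    return results, elapsed

asyncio.run(main())
```

You get concurrency back, but you're paying for OS threads again and lose most of asyncio's scheduling benefits for those calls, which is why it feels like a workaround rather than a solution.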


For context, Instagram's primary web servers are Django.


And reddit. And a little thing called YouTube (though to be fair, last I read they're offloading a lot of the hot paths to Golang and C via native modules).


Curious, where do you see that youtube is running Django? Python, maybe, but I don't think they run Django?


It's definitely Python, but definitely not Django. I recall someone explaining to me that it actually predates WSGI standardisation, so it's not even WSGI, it's just all custom Python from start to finish.


My apologies, I thought OP said 'Python', not 'Django'. You're correct. YouTube has their own framework, and IIRC reddit runs a few different Pyramid modules rather than a full-fat framework.

Reddit has some scattered Engineering blogs and my insight into youtube comes from a previous HN article by the author of GrumPy (who works on youtube at Google) in case you're curious.


Pinterest, too, I believe.


Efficiency can be language performance or developer performance or whatever.

Higher level languages might offer lower raw performance and more bugs. But they offer more features per dollar.

That isn't really specific to programming. You might also wonder why McDonalds is popular, even though their food is...meh.


Can you point a clueless (about Java web dev) person to a nice, lightweight framework that's easy to learn and to set up? Please no XML configuration files and other such nonsense.

For me as a Python/Rails/C++ dev Java has a reputation of being too large, too complex and otherwise.. unwieldy.

Hearing things like "To test a bug I had to start 6 services on my computer and then I ran out of memory (computer had 16GB)" doesn't encourage me to try to do web dev in Java.


Not the most lightweight, but I'd highly recommend wicket running on embedded jetty, a la the second code block on https://cwiki.apache.org/confluence/pages/viewpage.action?pa... . You do still have to use XML for Maven I'm afraid (there are alternatives but I wouldn't recommend them) but it's a relatively good use of XML and you can use eclipse's GUI to add dependencies rather than adding them directly if you like.

Wicket is a true OO approach to GUI which is quite different from the page-template style of Rails/Django/..., but I find it makes for much more compositional style, with lots of small reusable components that are just compositions of a few smaller components. And while not being able to monkeypatch everything can chafe initially, when you come to upgrade to a newer version of the framework you'll really appreciate the safety a compiled/typechecked language can offer.


How much of a framework do you want?

These days, I write most of my applications using the JDK's built-in web server. They're intended for internal use by single-digit numbers of people, and it works fine. It doesn't give you anything except the ability to serve HTTP, though.

Before that, I was writing apps using Spring Boot, which is a framework covering pretty much everything. It's easy to get started with, and requires no XML, but gets complicated fast as soon as you want to do anything even slightly unusual.

A more production-grade alternative to the built-in web server is Undertow, which again just does HTTP, but is fast and scalable, and fairly simple:

http://undertow.io/undertow-docs/undertow-docs-1.4.0/index.h...

Some people swear by Dropwizard as a more lightweight but still framework-sized alternative to Spring:

http://www.dropwizard.io/1.2.0/docs/getting-started.html

The tutorial there uses a smidgen of XML for Maven, but that's all. You can use Gradle instead of Maven, which lets you avoid XML and is generally much better, but you'd have to work that out yourself, or find someone else who has, perhaps this person:

http://automationrhapsody.com/build-dropwizard-project-gradl...

There's also Java EE, the 'official' framework for Java. It's actually not bad to program with, but you need an application server to run your apps, and although those are miracles of engineering, the user experience is still stuck in the dark ages.

> Hearing things like "To test a bug I had to start 6 services on my computer and then I ran out of memory (computer had 16GB)" doesn't encourage me to try to do web dev in Java.

If you need to start six services, you must be doing some kind of microservice / SOA thing, and if you need >2 GB for each service, you must be using some very heavy-weight application server. Neither of those things are smart, and in combination, they're deadly!


I want an orm, a template engine, form validation, sessions and some authentication mechanism. I'm currently doing a system for a robotics competition - team registration, match results entry and point calculations, rankings and so on. Doing it in Python and Pyramid is quite easy, though sometimes I have to fight with the framework to do things the way I want to.

One especially useful feature in Pyramid (and some other Python frameworks: Flask, Django, etc.) is the debug console: when I get an exception somewhere, then on the 500 page I can see the call stack and get a shell at each line in the call stack, print the variables, view the last n requests, see the request variables, and so on.


Spring MVC is probably what you want then.

Really though, you should invest some time in learning typescript and react. These days most apps are just API calls on the back end and you deal with templating, forms, and other bs in the frontend.

Because most apps are built this way now, you won't find any "modern" Java frameworks that support what you want, and you're pretty much gonna be stuck with the older clunkier stuff.

The learning curve for new SPA frontend stuff is high but I've found it much more productive now that I'm into it. With HTML generated on the server it's too damn hard to get pages to do what you want


The Spark Framework [0] is pretty easy to pick up IMO.

Spring (with Spring Boot) has approachable tutorials [1] on how to get started but will get intimidating pretty fast.

I highly recommend using Kotlin instead of Java when trying out the JVM, though.

[0] http://sparkjava.com/ [1] https://spring.io/guides/gs/rest-service/


I also recommend Java Spark for a minimalist framework. I would say that it is quite similar to Flask in Python. But unlike Flask, Spark does not have a templating language; for something simple, I would recommend FreeMarker.


I haven't seen an xml configuration in years beyond some logger configurations. Spring Boot + jOOQ is simple to setup and works. It is my go-to for backend services. You will have to be okay with dependency injection which for some reason people have problems with.

It's important to remember that Java has been around for 20-30 years and it certainly has not stood still. Many of the complaints people have are from Java of 10+ years ago.


>It's important to remember that Java has been around for 20-30 years and it certainly has not stood still. Many of the complaints people have are from Java of 10+ years ago.

Exactly. I spent about 5 years away from Java after J2EE made me swear it off. I learned a bunch of other languages and frameworks and recently went back. Java is awesome now. Unlike 5 years ago is relatively easy to avoid all the clunky old crap if you want to.


High quality Java and Go developers are more expensive than Ruby and JS webdevs, partly because there are just so many of the latter.


Java and Go are much harder to learn. Much more complex from a language perspective, more complicated build process, bigger standard libraries.

This leads to a lot of "first languages" being the latter, and a perpetual glut of recent grads who know nothing else and so will do those jobs for less.


Why does anyone write web apps in Python? PHP? Ruby?

First: Because squeezing every last nanosecond's worth of performance out of your web app is actually an extremely rare problem to have. And if you truly cared about performance over programmer convenience, you'd practice what you preach and build your web apps in hand-rolled assembly, but I'd bet a lot of money that you don't do that.

The typical web application -- I'd be willing to bet over 99.999% of all deployed production web applications serving requests today -- has a bottleneck at the database and the network that dwarfs any overhead from language performance.

I remember a bit over ten years ago when there was debate in the Python web world about which templating engine to use to generate HTML. And people argued endlessly over microbenchmarks of them, to figure out which was fastest, but I remember one blog post which showed a pie chart of time spent in the average request/response cycle. Nearly all of it was accessing the database, and template rendering was a tiny, tiny sliver, so the author humorously labeled it "obviously this is the part we need to focus all our optimization work on". Language choice is similar.

Second: Because what else your company does matters. Where I work, web applications are how we expose data and interfaces to that data. But there's a gigantic stack behind that, of data intake, data parsing, data processing, analytics, the whole nine yards. It's all in Python, because Python has hands-down the strongest ecosystem of any popular programming language for that stuff. So the web applications which serve as the interfaces to the data are also written in Python; it means we have one language to worry about, one language to work in, one language every software engineer knows. I've been pulled onto projects doing things that didn't involve web applications at all, and I've been able to be productive because those projects were still in Python, and I could read code and get up to speed on what was happening, and take care of mundane things for a more domain-experienced person whose domain expertise was then free to apply to things I couldn't do.

Third: Because programmer convenience really and truly does matter. When I first started doing this nearly twenty years ago, people posted comments like yours, incredulous at the idea that someone would use PHP or Perl given their performance characteristics compared to Java (or C -- plenty of web apps used to be written in C!). But even then we knew: servers are cheaper than people. The average salary of a quality software engineer (or "web developer" as we were known then) would buy you a lot of compute time, either on your own (in-house or colo'd) metal, or nowadays on someone else's cloud.

So you choose based on convenience to humans. PHP, for all its faults, was an incredibly convenient language to write web apps in, and compared to the usual CGI model that preceded it, was a breath of fresh air when it took off. Today, frameworks written in Python, Ruby, PHP, etc. are similarly in a good position compared to more heavyweight things like the Java world (which for better or worse is still suffering the lingering effects of its mid-2000s "enterprise" reputation), or even Go (which is still young and still seems to come up short, both language- and ecosystem-wise, on some of the things actual working web programmers want; in particular, programmer-friendly ORMs in statically-typed languages really, really want generics or a good equivalent, and Go's historic attitude toward that has not been great).


> And if you truly cared about performance over programmer convenience, you'd practice what you preach and build your web apps in hand-rolled assembly

The performance gain from Python/Ruby to Java/Go is an order of magnitude larger than the performance gain from Java/Go to assembly. The productivity loss from Python/Ruby to Java/Go is an order of magnitude smaller than the productivity loss from Java/Go to assembly.

Therefore, going from Python/Ruby to Java/Go might be a good idea for web apps, but going from Java/Go to assembly is virtually never a good idea for web apps.


If you're disregarding programmer convenience in the name of performance, then do it. But once you make the decision to trade off some performance for convenience -- no matter how much or how little you give up/get -- you start losing the high ground for preaching to others about how they should refuse to trade what you see as inappropriate amounts of performance for convenience, because now you're just arguing matters of degree and unquantifiable personal taste.


> has a bottleneck at the database and the network that dwarfs any overhead from language performance.

Or a bottleneck in a process which can be optimised into better data structures. Changing the language can only lower your constant cost and the scale - it will not change the complexity on its own, so it should be really the last resort.


Bottleneck at the DB/network is BS. Cheap AWS boxes have 200Mbps of throughput, and you'll basically never get that out of any of the slower languages on a box with something like 1 vCPU and 1GB RAM. Slightly more expensive boxes have 1 gigabit, which means many thousands of HTTP requests per second; even Java/Go struggle with that load.

I have seen MS SQL Server handle over 10k database queries per second on a regular machine in a real, huge application (over a million lines). Can Python handle 10k queries per second? Definitely not. In Java or Go that load could probably be handled by a single AWS box that costs $10 a month.

I have never seen a database maxed out that wasn't stuck because of table locks. I've seen applications like WordPress die at 10 requests a second.


There's more to bottlenecks than throughput. Latency also needs to be considered. A 5MB webpage will load much faster when served from a 100Mbit connection with 10ms of latency, than a 1000Mbit connection with 300ms of latency. Especially when you consider that web page requests (pre-HTTP/2) consisted of many separate requests to fetch all the different assets (e.g. CSS, JS, images, etc).

Secondly, the majority of websites that make up the Internet (and Intranets) are not serving 10k/second. They're serving a 100th of that.

Third, because your database can handle 10k queries, and your database can aggregate and manipulate data, and it's optimised to do this -- it might be a good idea to perform these functions at the DB-level, as opposed to the web application.

Finally, as stated previously, for most businesses, it's easier to buy more beefy servers, or get a more experienced PHP dev (that understands how to optimise code) than it is to find a Go programmer.


Java and Go aren't the only programming languages that happen to be comparable to Python and Ruby, while offering good performance.

Common Lisp, Scheme, JavaScript, ML variants come to mind.

Having been part of a startup that used a dynamic language without a world-class JIT/AOT compiler (Tcl) back in the first .com wave taught me to never do it again.


> I'd be willing to bet over 99.999% of all deployed production web applications serving requests today -- has a bottleneck at the database and the network that dwarfs any overhead from language performance

A language is not just about overhead, it's also about capability. For example, your database is slow, and your web page needs to render by executing 20 queries. Will your programming language allow you to parallelize those queries? While your database/network may have high latency, they can still have good throughput.

This particular example is very real, and is incidentally relevant to the original post, since Python and Go implementations have different capabilities.
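In modern Python the answer is yes for I/O-bound work: `asyncio.gather` fans the queries out concurrently. A sketch with sleeps standing in for real database latency:

```python
import asyncio
import time

async def query(i):
    # Simulated I/O-bound database query (~50 ms of latency).
    await asyncio.sleep(0.05)
    return f"row-{i}"

async def render_page():
    # Sequential: 20 queries * 50 ms would take about a second.
    # Concurrent: all 20 overlap, so the total is roughly 50 ms.
    return await asyncio.gather(*(query(i) for i in range(20)))

start = time.perf_counter()
rows = asyncio.run(render_page())
print(len(rows), f"{time.perf_counter() - start:.2f}s")
```

Twenty high-latency queries complete in roughly the time of one, without changing languages; the capability gap is more about how naturally each language's tooling makes this.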


Well then why go with any of them when Erlang and Elixir have them all beat in terms of performance AND stability? Let alone the sheer number of concurrent processes they can handle, which none of those languages can hold a candle to. Why? :)


Because your statements are very challengeable.

Erlang is much slower in raw performance than both Go and the JVM: https://benchmarksgame.alioth.debian.org/u64q/erlang.html

An async approach similar to Erlang's has already been reproduced on the JVM: https://akka.io


You're right that Go/Java have an async style similar to Erlang. But the majority of production systems that these languages run today are some sort of web application. In this area, the Erlang VM holds its own pretty well, especially for websockets [1]. In that study Elixir's memory usage is higher, but total connections were almost identical to Go.

It'd be great if there were better benchmarks for common use cases of various languages. Spring on the Java side tends to be heavy on reflection usage, which is orders of magnitude slower than JIT'ed JVM methods. Benchmarks like the benchmarks game don't capture this. Still, despite that, the Erlang VM performs very well on the benchmarks game compared to other dynamic/scripting languages. Often it's easily 5-10 times faster than Python or Ruby. Given the parent's comment, I'd argue many programmers who enjoy developing with dynamic languages can do so with Elixir, with performance comparable to Go/Java for high-concurrency web applications.

1: https://hashrocket.com/blog/posts/websocket-shootout


> It'd be great if there were better benchmarks for common use cases of various languages

Somebody should make those.

(People may have different ideas about which use cases are common).


There are some: https://www.techempower.com/benchmarks/

As you can see, erlang is far from the top there.


It's not near the bottom either, which is the point. Actually, Phoenix (the only Erlang-based web framework I could find at the link) edges out quite a number of other frameworks, roughly ranking in the middle of the pack. Specifically, Phoenix performed 32k req/s, in comparison to, say, the Go-based Gin at 51k req/s, or an arguably more comparable setup of Python 3 Flask with a full ORM at 13k. Spring only manages 23k req/s. Nothing to write home about either way, but clearly along the lines of my argument that Erlang/BEAM can hold its own. Though some of the Ruby frameworks are damn impressive-seeming, huh.

However, it gets more interesting if you look at the Latency tab. There Phoenix comes in with an average of 7.9 ms. In comparison, Gin averages 5.8 ms, Spring 12.1 ms, and Flask 23.1 ms on Python 3 or 14.7 ms on Python 2.

Where it's really interesting is looking at the max latency, presuming this indicates roughly how 99th- and 95th-percentile measurements would compare. In this category Phoenix comes in second with a max of 22.0 ms, behind only lib-mongodb at 20.8 ms. And lib-mongodb is one of the fastest frameworks by raw req/s.

Appreciate the link to the benchmarks! Much more interesting, especially if you're concerned about max latency (and likely 99th/95th percentiles). In this case BEAM/Phoenix would let you plan capacity to minimize max latency fairly well, as it appears very consistent.


> Specifically Phoenix performed 32k req/s,

But raw Java servlets delivered 100+k req/s. It may mean that all this event loop/async/actors hype is overrated, and a regular blocking approach can also deliver.


I don't see Erlang listed at all on that "Fortunes" page?


They have Elixir there, which is another language for the Erlang VM, and they have Erlang for other benchmarks.


JRuby is another language for JVM -- do you put forward JRuby as the example of JVM performance?


It has a very different paradigm (dynamic vs. static typing) with very strong performance implications.


> … slower in raw performance…

… and "Most (all?) large systems developed using Erlang make heavy use of C for low-level code, leaving Erlang to manage the parts which tend to be complex in other languages, like controlling systems spread across several machines and implementing complex protocol logic."

FAQ 1.4 What sort of problems is Erlang not particularly suitable for?

http://erlang.org/faq/introduction.html#idp32150096


Hi riku_iki, I just checked out the akka.io link and it says that it caters to Java/Scala only... From your last statement I understood that it was meant for Java and Go.


Go also has tons of actor frameworks: https://github.com/AsynkronIT/protoactor-go

But it also has native coroutines embedded into language, which provide excellent asynchronous performance.


Hey, thanks for that link... I was aware of Go having coroutines available as part of the language, but not that something similar to what Erlang provides was available in the Go ecosystem as well...


Most of the time, runtime speed is far, far less important than developer speed.


I'd be curious what developer speed difference, if any, there is comparing a modern Java stack (like Spring Boot + jOOQ) with Rails.


Speed and memory usage don’t matter until you need to rent servers on AWS


At which time you beat 95% of the pack and you should be grateful that your current 'slow' language brought you where you are.



JS lets you share code between front-end and back-end. For example, if you're writing a networked game, you're going to need to transport a lot of state between the server and client.


Because PHP is awesome, and so is Python.

Java and Go are terrible.

Simple as that.


I was wondering the same thing, and it frustrates me.

I think people obviously enjoy using Python/Ruby or whatever for small scripts, and by inductive reasoning they think they can enjoy programming even larger projects in these languages.

They also rationalize the slowness by pretending that performance doesn't matter or that the bottleneck is the database.

They abuse the adage about "premature optimization" being a bad thing, not realizing that while it's stupid to prematurely optimize everything, it's equally unwise to write everything in a slow language. Using a fast language is not premature optimization; it's the baseline you need so you don't end up with something slow to begin with, and everyone should do that.


> by pretending that performance doesn't matter or that the bottleneck is the database.

Or, you know... You can actually measure those things. And unless you're growing like crazy, or have a massive initial audience, you're likely to find that 90%+ of the app time is spent in the database and your CPU time isn't even close to maxed out.

Why would you assume people would pretend any of that is true?


Well, we've seen for example, Twitter do that. I saw some slides[0] the other day from sometime in 2007[1] where they said they were using Ruby and spawning 180 Rails instances to handle 600 requests per second. Like, that's insane. A compiled language can handle that load with just one instance. It's nothing.

[0]: https://www.slideshare.net/Blaine/scaling-twitter/3-First_So...

Also, if your audience is small enough that a slow python can suffice, why even bother using a database server such as postgresql or mysql when you can just use sqlite and simplify the architecture?

EDIT:

[1] It's been 10 years since 2007 but people still do this more or less all the time. Write a webapp in js/ruby/python and spawn 20 instances of the application server


Twitter's issue was more with concurrent IO than anything. This problem was solved about 10 years ago with the release of Ruby 1.9 and since then only one Ruby process per core is required, just like NodeJS. In that time most CPU intensive parts of the web stack have also been re-written as native extensions.

Recent benchmarks of a properly setup Ruby environment vs Go/Gin are showing Ruby/Sinatra as having 50% of the throughput https://www.techempower.com/benchmarks/#section=data-r14&hw=...

You can also just use JRuby and use a single JVM process. For small CLI apps, MRuby can even be compiled to C, then compiled as an executable.


Contrived benchmarks are not useful. Especially when you have tiny datasets.

If you want to do anything interesting, Python/Ruby are slow as hell, which is why you cannot do anything interesting in them.

For example, while in Go, you can load say 1000 rows from the db and perform some data manipulation on it in the code to get a desired result, you cannot do this in Python because it will be very, very slow.

So what you do is you write complicated sql queries and essentially offload all your work from the application server(s) on to the database server.

Now imagine that these rows on the database don't actually change very often. You could just load them once, keep them in memory (in a global object), and only update them once in a while (when needed). You can always do whatever search/manipulation operation directly on the data that is readily available and always respond very quickly.

This would be _unthinkable_ if you are using Ruby or Python, so instead you keep hammering your database with the same query, over and over and over again.


Simple benchmarks are a useful yardstick. I recently wrote a service in Rust/Iron which only has 4.7x the throughput of the same Ruby/Rails service. That was rather disappointing considering how much more effort is required to do it in a lower level language.

Is Python/Django performance significantly worse than Ruby/Rails? The situations you describe are things I do every day in Ruby. Getting 1000 rows from the DB and performing some operation only takes a couple of milliseconds in Ruby.

Ruby/Python are meant to be glue, and you can most certainly use them to glue together "interesting things", like image processing or audio processing in a web layer.

Memory caching rarely changed, often accessed, but ultimately persisted in a DB things like exchange rates in a global object is exactly what you do in Rails. There's a specific helper for doing it. http://api.rubyonrails.org/classes/ActiveSupport/Cache/Memor...


Iron is still using sync IO, and while Ruby isn't great at parallel stuff, it at least does async IO, IIRC. That's going to be a huge difference.


It depends what you're doing. If you're running a ruby web app server and talking to the database, all the io you're doing is most likely synchronous. In the one-server-per-cpu model, anyway.


It's been a while, but I thought that MRI basically slept during IO and released the GIL so that other threads could do work. In that case, you'd still be handling more requests while the IO was performed. I could be wrong. This kind of thing: http://ablogaboutcode.com/2012/02/06/the-ruby-global-interpr...


You're right that it can do async io. But you compared a framework to a language. Rust has Tokio for async - Iron is just not using it.

It's similar with Ruby/RoR - yeah, they can do async. But not on their own. With unicorn server, you still get no threading and just a bunch of processes. With puma you can do threading (async cooperative really) - as long as you keep the configuration/code within the limits of what's allowed.

And due to the extra care needed whenever you do caching/storage things, I expect unicorn is still the king in RoR deployments. (GH uses that for example)


It's definitely preemptive rather than cooperative. Ruby/Puma is actually using one OS thread per Ruby thread, so when one hits a DB call and blocks on sync IO, it releases the GVL and another Ruby thread can proceed. There's a timer thread Ruby runs and pre-empts each thread to schedule them.


Yeah, I guess I moved to Puma long enough ago that I forgot about this. Good points, thank you.


Yeah, that's pretty much what happens. There's also non-blocking IO you can use with EventMachine but the DB drivers are a bit of an issue AFAIK.


Wow, steveklabnik replied to my comment! Unfortunately, both implementations of the service are CPU bound. I think we're running 25 threads in Iron but I'll have to check.


Ah interesting! With that being true, then yes, I'd be surprised that it isn't faster too.

What about memory usage? That's an un-appreciated axis, IMO: for example, all of crates.io takes ~30mb resident, which is roughly the overhead for MRI itself, let alone loading code.

Anyway, the Rust team loves helping production users succeed, so if there's anything we can do, please let us know!


Every larger web application will start caching at some point. This is nothing new in either Python or Ruby. It's even well integrated into SQL access libs: Python's SQLAlchemy http://docs.sqlalchemy.org/en/latest/orm/examples.html#modul... or just using an in-memory memcache.


Twitter has moved key parts of the system to Java. The search functionality was redone on top of Netty which they had some articles about.


It's about flexibility.

You will have to close that IT-only part of your brain for a moment, and keep in mind that people create software for a reason. And every nascent project lives or dies by how promptly it adapts to incorrect assumptions or changes in the environment.

Differences in hardware costs do not even enter the radar. They happen in a different universe that good decision making completely ignores at this point.

After you have a proper solution to a problem, and enough scale so that it's worthwhile to collect those gains, you move your software to another language. It's not a big deal.


Productivity.


I like how for the past 10 years, everybody has been espousing language/framework xyz as being superior for the reasons you've just said (asynchronous, responses per second, memory footprint etc), but as soon as you say Java does it all better, suddenly developer efficiency is most important.

FWIW, it always seems like a mostly zero sum game to me. Whatever efficiency you gain in using (e.g.) Python+Latest Frontend Framework+Backend Framework, you lose through having to wade through yet another set of new concepts and documents for those frameworks.


> Python, Node and Ruby all have better systems for package management

Maybe they have better tools for managing package dependencies but Go doesn't have to deal with interpreter dependencies, which can be a major headache. Go also doesn't have the problem of conflicting system level packages.

Honestly no matter how good your deployment practices are, if you have to manage an application's deployment long term you're going to have dependency problems of some kind. I'd rather deal with those problems at compile time than hit an edge case during deploy or in prod.


> conflicting system level packages

In Python, if you have control over the target system, this is solved with virtualenv. You can install whatever versions of libraries you want in there with pip without causing conflicts with system level packages.

If you intend to ship to end-users, yeah, you are kinda screwed. You are stuck using one of the various "freeze" methods, which, in my experience, kinda blow.


We use Nix, which is much nicer than pyenv/virtualenv, but it's still a pain compared to Go. This is mostly due to Python's runtime model, and maybe also the tendency for Python applications to have very tall, broad dependency graphs.

Working with nix is also not very easy; docker might fare better here, but probably just a lateral move. In any case, Go outclasses Python on deployments. Also, my company is looking into compiling Python as a means of code obfuscation; Go compiles by default, which (modulo stripping) is probably enough obfuscation for our purposes.


I've been evaluating Cython for both code obfuscation and for using C based extensions to improve performance. Cython is pretty nice, but shipping binary only Python extensions can be such a pain, especially dealing with 2-byte vs 4-byte unicode representation issues.

I need to try Go one of these days !


> shipping binary only Python extensions can be such a pain, especially dealing with 2-byte vs 4-byte unicode representation issues

The issue with different Unicode builds was fixed 5 years ago, in Python 3.3:

https://docs.python.org/3/whatsnew/3.3.html#pep-393-flexible...


Yes it's nice to see this is now fixed. In my case I need to support Python 2.7 though, and for Python 2.7 the only workaround I know of is shipping two binaries (2 byte and 4 byte encoding) then loading the correct module at runtime.


> especially dealing with 2-byte vs 4-byte unicode

Wasn’t that solved years ago?


Virtualenv is nice, but lately I've grown to use Docker containers instead. You can use the official containers for Go, Python or Ruby; feed it the Gemfile or requirements.txt or whatever, tag it, push it, and you have a permanent snapshotted image.


Virtualenv is not nice; it's a pragmatic yet laughable hack.


It's definitely a hack, but for development environments it's quite an effective one. Which is why we actually created a (more user friendly) clone for Go development. It's called Virtualgo, and it's mentioned near the end of the article as well. https://github.com/GetStream/vg


> this is solved with virtualenv

Except when you run into edge cases, which happens all the time.


Are you using the newer venv package from the stdlib or the old virtualenv package from pypi? I have never encountered a single issue with the former.


elaborate?


Most of the time pip installs work fine, but once in a while, after some type of OS upgrade (in my experience, namely macOS), you run into odd compiler-related issues.

So it's why a lot of teams will use either Vagrant / Docker to setup local developer environments.


Can confirm that once upgrading macOS required using Docker to be productive, because some C-dep we had stopped working. We have developers using macOS 10.10 to avoid any issues (although they're likely gone by now, it's not worth the effort to figure it out anymore).


I've occasionally had things spontaneously break. Most recently the cryptography package just stopped installing on deployment. Had to add a pip upgrade on deploy, which somehow prevented the AMI I was using from installing some of its requirements. Had to add those packages to my project requirements.

Also some of the data analysis packages don't work with virtualenv.


> Most recently the cryptography package just stopped installing on deployment. Had to add a pip upgrade on deploy, [...]

Erm... Why the heck are you running pip when deploying software? O_o It should be a build step, not a deployment step.


Most deploy scripts I've seen tend to do this. Which is insane in my opinion, but the Python toolset encourages this kind of approach by making it the default easy thing to do.

That's one of the things I prefer about Go: the compiler can cross-compile, so I can compile the entire codebase on my osx machine and produce a single statically linked linux binary that I can just copy over scp to the server. Deployment becomes infinitely simpler.


> Most deploy scripts I've seen tend to do this. Which is insane in my opinion [...]

Also, most programmers don't know a thing about administering servers. Ubiquity doesn't mean that it's a proper way, as you clearly see yourself.

> [...] the python toolset encourage this kind of approach by making it the default easy thing to do.

It's not quite the fault of Python tooling in particular. It's really the easiest way in general. Similarly, the easiest way to deploy a unix daemon is to run it as the root user, because the daemon will automatically have access to all the files and directories it needs (data storage, temporary files, sockets, pidfiles, log files). Anything else that is easier to manage requires putting in a non-zero effort.

> That's one of the things I prefer about Go: [static linking]

Static linking has its own share of administrative problems. It's not all nice and dandy.


For a number of reasons, I haven't bothered to do a complex build/deploy process. I write in python, I freeze the requirements into requirements.txt, and I type "eb deploy". Anything else is overkill for me.


And then you have different lists of installed packages in your development environment and in the production and even between two different production servers (which can silently and/or subtly break your software), and you need to manually install all the C/C++ libraries and compilers and non-Python tools and whatnot, and the upgrade process takes a lot of time and can break on problems with PyPI and/or packages that went missing, and you can't even sensibly downgrade your software to an earlier version.

Yes, avoiding learning how to properly package and deploy your software is totally worth this cost.


Python does not really have a "build" step. _That_ is the problem.

I think Heroku also played a role in proliferating the idea that you can just push your code (from a git repo) to a server, and the server will take care of deploying it. Which usually means the server will install all the dependencies from pip during the process.


When you build a piece of software on many layers of abstractions, something will eventually break and leak all the way up in a nasty way that is difficult to debug or fix.

I don't have specific examples in mind but I've had a lot of frustrations with virtual-env.


Solved as long as there aren't native dependencies not managed by pip, which there often are.


Wheels solved this problem in 2013.

For context, you can install opencv, tensorflow, ROS, matplotlib, and the entire scipy stack in a virtualenv, with no external dependencies, using wheels.

This means that you can generate images, train a machine learning algorithm on them, compare the results to conventional CV algorithms, and display them in an ipython notebook all from a venv. There's a huge amount of C++ and even qt integration in that pipeline, all isolated.

It won't be maximally performant (ie for massive training), but for that you'd want distributed docker deploys or similar anyway.


Wheels doesn't completely solve this problem. For example, your example of matplotlib, IIRC, has a dependency on libblas: a C library. This dependency isn't captured in the wheel metadata, and it's up to you to install the right libblas on the host, or somewhere where it will get loaded. (Though honestly, this isn't usually an unmanageable level of complexity. But virtualenvs are not free from external dependencies always.)

IIRC, we also had a problem where a wheel failed to find symbols in an SO it was linked against. It turns out that the wheel worked fine in precise, but failed in trusty, and we ended up having to split our wheel repository because the wheel was specific to the Ubuntu release. (It seemed, at the time, that whatever SO it was linked against had made backwards-incompatible changes without changing the major version.) There are rare cases like this where the wheel is unique to attributes of the host that the wheel metadata can't capture.


> For example, your example of matplotlib, IIRC, has a dependency on libblas: a C library

And with the wheel packaging, you're free to embed that library in the wheel that depends on it. You can also not do that and rely on the system libraries. The wheel provides you a way to do what you want, but doesn't force you to do it.

The are good reasons for either of those approaches, so I'd say wheel does solve the issue.


As another user mentioned, they do, if the library maintainers take the time to do things correctly. The package I'm describing does not link outside of the virtualenv; the matplotlib, opencv-python, and tensorflow libraries include all necessary dependencies (although you have to use a non-default renderer in matplotlib, because it doesn't bundle all of them).

What you say is correct, virtualenvs are not always free from external dependencies, but correctly built wheels are. Wheels and virtualenvs aren't the same thing.


Wheels just broke something in our build pipeline. They removed support for Python 2.6 and started tossing errors. I was able to fix it by pinning Wheel, which probably should have been done originally by whoever made the build utility, but it would have been a non-issue with Go and a binary.


Well, python 2.6 was also EoL'd 4 years ago, so yes, if you're using a no longer supported piece of software and not pinning your versions, I'd argue that you're inviting issues.


Sure. But if the build utility was just some binary, then it wouldn't matter. If Go was abandoned by all maintainers tomorrow or they broke all the packages, the already built binary will still work.

Should someone have changed the Python build tool to be 2.7 or 3? Maybe if they were bored and knew it was something that needed work or wanted to be a good tech citizen. However, what really happened is that no one even knew what the tool was really doing, just that it was part of a suite of tools in a build process, and no one would have looked twice at it ever again had wheel not removed support for 2.6. /me shrugs.


>But if the build utility was just some binary

I mean it totally would if the binary dynamically linked against a file you didn't have on your system, which is exactly what happened with `wheel` (python 2.6 doesn't have OrderedDict, which wheel now uses).


Have you actually done this? If so, on what platform(s)?


Linux, If you'd like, I'll post the pipfile for the repository.


Could you elaborate on how ROS fits into this framework?


That's a different project, but I also use ROS in a virtualenv. It's a bit weird because you end up installing ROS both inside and outside of the venv (various ROS command line tools only use /opt/ros/... python deps), but your actual nodes run in your virtualenv.

And ROS doesn't even need to be a wheel fwiw, it's pure Python, but it's also just a painful thing to deal with for many, so it was worth mentioning.


You can pip install pyspark now too!


> Go also doesn't have the problem of conflicting system level packages.

Sure, as long as you're not using CGO and dynamic linking. Otherwise you'll get the exact same problems as the others.

> Go doesn't have to deal with interpreter dependencies

As long as Go is forward compatible this will hold true, but I don't think this was the point of the comment.

'go get' is a half-baked package manager, and yes, it fetches packages and resolves dependencies between packages, so it is a package manager. It just doesn't care about versions. Other languages have solved the issue by requiring a third-party tool; Ruby even has its own build tool.

There is a reason why some gophers are working on an actual package manager...


> Sure, as long as you're not using CGO and dynamic linking. Otherwise you'll get the exact same problems as the others.

Not even then, to be honest.

I wrote a product that is deployed on many thousands of servers, and all of a sudden a not-insignificant population just started experiencing SIGABRTs on process start.

Turns out Amazon Linux (and some others) had removed a syscall that i386 Go on Linux was relying on (https://github.com/golang/go/issues/14795). We built our product with a manual patch against "src/runtime/sys_linux_386.s" for many months as a result, and it was really a huge headache to help all our customers.

I would be surprised if Python or even Java broke in this way, for example. I was really surprised it even happened in Go to be honest.

There are other runtime problems too, we had a weird interaction with a "cloud OS" based on cgroups (CloudLinux) screwing up our Go processes depending on how many goroutines we ran ...

I think Go is fantastic but its runtime can definitely clash with its environment ... it's not the same as a C program.


Was the system call part of POSIX or the standard Linux interface? Because if so, that sounds like the fault belongs to Amazon, not Go. The same thing could happen to the Python interpreter.


>There is a reason why some gophers are working on an actual package manager...

Finally! I had to deal with a golang project once, the dependency management made me want to pull my hair out after using Bundler and Cargo


Yes! Once you've used Bundler and Cargo anything else seems just silly and unacceptable.


This has been a massive win for us vs languages that require you to haul around a runtime and other dependencies.


Just use Java.


A thing that REALLY turned me off in go...

The fact that go's Math functions only work with floats. Idiotic.

You have so many packages that have to add the same freaking min(int a, int b) function.



So, one more vote for generics?


That is indeed quite weird


Does Go have templating?


Templating is built right into the stdlib: https://golang.org/pkg/text/template/

The only 3rd-party libraries I pull in to build web servers are gorilla (routing, sessions) and a database driver.

Everything else you need is in the Go standard library.


I believe the parent means templating in the sense of generic types: https://en.wikipedia.org/wiki/Template_(C%2B%2B).


Nope! And that's most people's main problem with it.


Wrong. Go has templating.


yeah, maybe, if what you want is to fill some strings in a template and not to create a type that's generic over other types.


Wait just one minute, Thierry...

Are you telling me that Python helped you create a viable tech-oriented newsfeed and activity stream business, serving 500 companies and more than 200 million "end users"? That sounds like a great incentive for any entrepreneur to get started with Python.

You've gotten this far by using the language you've turned away from! Further, I'm confident you didn't solve every challenge you've had on your own but rather did so with the help of the worldwide community of Pythonistas who answer questions and help to solve problems.

It takes a village to raise a startup. Don't try to burn the village down after you've grown up and can afford to explore other villages.


Hi, Jelte from Stream here. The post was definitely not meant to indicate that people should stop using Python. We still use it happily for the website and I'm still a big fan of it myself.

However, for the API we've outgrown it in performance requirements. That's what the article is about, together with the things we found during the switch that we liked and disliked about Go.


I would love to hear your experience in two years though. I find the Go runtime incredibly lacking in terms of introspectability. That's okay if you run Google-scale operations where you just throw away the environment on deploys and you do not care about the individual processes, but if you are running a reasonably lean operation the ability to see what a process is doing is incredibly helpful. That even affects simple things such as error reporting, which is almost useless in Go for debugging issues at runtime.


These are good points. Stack traces in Go are a matter of discipline while they're free in Python. That said, I think Go brings a lot to the table even for a lean operation, which more than compensates for these weaknesses. Probably the biggest advantages are that Go deploys as a single binary, and one Go process is sufficient to make efficient use of the machine's resources--no need for something like uwsgi. At our small Python shop, we probably have two full-time positions dedicated to managing our Python installation process. This could conservatively be halved with Go. Beyond that, we have a couple more full-time positions for managing the fleet of servers required to run our applications--Go could easily reduce our machine demand to 2 or 3 (including redundancy), which would probably require just one engineer to maintain, not to mention the savings on our AWS bill. These are just a couple of obvious benefits to Go, but for us this isn't enough to justify a rewrite.


Hey Jelte, quick question: Did you guys try Numba or Cython? Or did you guys figure that the language simplification that Go provides would be worth it even if the performance considerations were similar?


No, we didn't try those. Mainly because it wasn't only raw performance that we were after. We also wanted more simplicity and better concurrency than what python was providing for us.

Also, the reason we even considered a language switch is that we had performance problems related to our core design. Because of this we decided that we needed to rewrite most of our API code based on a new design. This required rewrite made us consider switching languages. Eventually we chose to rewrite our whole API codebase in Go for all the reasons in the article. (we did a small trial project in go first to evaluate it)


Did you ever consider Elixir?


We did, but Go was deemed better in our case. For more details, Ctrl+F for Elixir in the blog post.


Did you guys try PyPy before deciding to switch? If yes, what was the outcome? If not, why not?


It reads like a reasonable rationale for why they switched to Go. Uninteresting maybe, but certainly no village burning. You sound indignant for some reason that isn't related to the content of the article.


Indeed the author even:

1) recommends Django or Rails for CRUD apps

2) mentions a lack of frameworks in Go as a negative

It's a good article and not a rant about which language is "better"


You really hit the nail on the head. If someone is able to build a successful business with an easy to use language, that sounds like the language you want to start with.


facebook, php ∎


and they now seem invested in OCaml/Reason (which are a lot less popular compared to Go)

i guess after companies reach a certain size, they can go beyond choosing a programming platform for its ecosystem


Are they using much ocaml on the backend? At facebook's size they have a whole lot of different languages in the stack. I'd be very surprised if a significant portion of new backend development were in OCaml though.


you can check their github account, they have several static analysis tools written in ocaml

and then they have reason, is being used heavily in messenger

https://reasonml.github.io/community/blog/#messengercom-now-...


Author here. Super cool to see this post on HNews.

For those of you working with Go: one of the guys on our team wrote a little tool called VirtualGo. It's pretty handy when you're working on multiple Go-based projects (https://github.com/getstream/vg)


Thank god something like this exists. I gagged at the workspace structure when I first started Go - it gave me enough pain starting out one weekend that my interest faded away. I've been looking for a reason to try it out again and a bunch of comments here got me really excited again.


You might also enjoy wgo: https://github.com/skelterjohn/wgo (I am skelterjohn)


I think there's great value in code being straightforward and simple, and that is very much underappreciated among many programmers.

A good language enables you to "compress" your code without making the code flow impossible to follow.

A bad language encourages you to obfuscate the flow of the program using wacky abstraction techniques.

Go doesn't exactly hit the sweet spot for semantic compression due to lack of compile time programming (aka generics?), but it does a good job of making the flow of the program pretty clear.


IMO Go has poor primitives that make even straightforward code needlessly complicated. Slices are probably the worst. Compare Python's way of inserting a value into a list:

    a.insert(i, x)
With the way that the Go docs suggest:

    a = append(a[:i], append([]T{x}, a[i:]...)...)
Sorting and for-range loops are other examples.


I prefer the Go way. The Python way gives people the impression that the insert operation is cheap. In fact, it is not.


WTF?! Isn't the point of an abstraction to make simple what is complex? Insert into a list should always be like the python example. If go's standard list doesn't have that, maybe it's time for someone to write a better library.


That's one of the things in my original comment.

I agree with you that some basic things in Go, such as insertion, are somewhat crazy, but there's a point I want to make about abstractions.

Abstractions that hide too many things away are not always a good thing.

If an abstraction makes something easier to write, but harder to read, that's not a very good thing.

Unfortunately Go does not exactly hit the sweet spot. It errs on the side of less expressive power.

EDIT:

Actually, for inserting into a list, the code can be simple, although it will be split across three lines and creates a new temporary slice:

    // insert number 10 into position 2
    var b []int
    b = append(b, a[0:2]...)
    b = append(b, 10)
    b = append(b, a[2:]...)

https://play.golang.com/p/ommopBd3io

Now, that _is_ overly verbose, but I don't think it's too off-putting. I don't like it, but I guess it's a quirk of Go I'm willing to put up with.


The insertion operation on slices is expensive and is not recommended to be used frequently. If you use it frequently, please rethink and redesign your data structure, for example, use a list instead.


I tried using Go this weekend and basically just abandoned it when I learned that it only has generics for three built-in types, and other than that you are forced to essentially dynamic cast everywhere.

I just don't get the appeal. Go seems a lot like what you'd get if you just removed every language feature that anyone has ever complained about; for good reason or not.


It seems like the domains where Go is used often don't make heavy use of custom containers or data structures, so that makes the pressure on the language makers lower than it would otherwise be.


Or the other way around.


I venture to say that if you are casting/converting everywhere, you are likely doing it wrong. An interface{} says nothing. It should generally be avoided. However, I do find this to be a pain when working with numbers. Floats and ints mixed up is not fun. Python is so vastly easier to work with in that arena.


In Go, if you need a container which is not a built-in map or array, you end up casting to interface{}, because that's the only reasonable thing your container can accept. This is clearly not "doing it wrong", and it's an extremely common use case.
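To illustrate, here's a minimal sketch of such a container (the Stack type is hypothetical, not from any library):

```go
package main

import "fmt"

// Stack is a hypothetical pre-generics container: the only type it
// can accept is interface{}, so callers assert the concrete type on
// the way back out -- and a wrong assertion fails only at runtime.
type Stack struct{ items []interface{} }

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

func (s *Stack) Pop() interface{} {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

func main() {
	var s Stack
	s.Push(42)
	fmt.Println(s.Pop().(int) + 1) // 43; s.Pop().(string) would panic
}
```

The compiler can't stop you from pushing an int and asserting a string.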


Thank you for the insights on Go and how Go has added value to your organizations and enabled a better environment for development at your organization! I love hearing viewpoints on tradeoffs and general evaluations on value of new(ish) development technologies.

I'd also like to point out that I write this as a Python-phile who has come to the understanding that Python is great until it isn't. When Python becomes too burdensome, the understanding of the problem delivered by my Python sketch helps to narrow the focus of what I need to solve the problem efficiently. By that time, Go has never been the correct tool for me. I'm comfortable writing solutions in C, Rust and Java; perhaps this is my problem.

Whenever I read or write Go I don't understand how it managed to capture the mindshare that is has. Especially in the face of other technologies available, Go seems like a timid step forward in descriptions of computation, when there are bolder choices that deliver valuable additions to my development ecosystem.

I appreciate that Go places a premium on simplicity, however with languages like Rust making big steps towards static guarantees at compile time, the guarantees that Go makes seem pretty weak. But I feel that when I make the leap from Python to Go, I admittedly don't understand the allure. Static compilation is really nice! Statically linked binaries for distribution are very nice! The tooling is out of this world! Hands down, Go has one of the best out of the box toolchains I've ever encountered. But when it comes to helping me think about computation, Go doesn't.

Go's concurrency primitives don't feel like a great deal of a step forward compared to C. I know, I know, this is crazy talk. I feel like I am missing some great wisdom about Go, I truly do. Concurrency is really hard and Go has clearly helped a lot of people minimize concurrency problems. But whenever I use channels I am reminded Go does not allow me to shut my brain off, and whenever the deadlock bug strikes, I feel that I might as well be writing C.

I don't mean to disrespect Go and the great things it does, the opposite of that! I am hoping Go delivers a bulletproof abstraction for concurrency. Until then I've become too familiar with other tools to justify using Go for more than incredibly niche use cases. I really hope that will change (both my mindset and Go)!


> I appreciate that Go places a premium on simplicity, however with languages like Rust making big steps towards static guarantees at compile time, the guarantees that Go makes seem pretty weak.

Rust's static guarantees come at a price: You have to learn Rust to take advantage of them, which is apparently no mean feat.

> I am hoping Go delivers a bulletproof abstraction for concurrency.

Don't hold your breath. Go has always been and will always be about “good enough”, not “bulletproof”.


>Reason 3 – Developer Productivity & Not Getting Too Creative

There is a time and place for these tools. A part of managing a team is ensuring there are good practices around "getting creative" and that there is a clear rationale. Python's metaprogramming came in handy for helping us provide a high level syntax to work with our data model.


It's not just metaprogramming, it's a lack of standard types (like sets) paired with a lack of parametric polymorphism cough yeah, generics cough that makes writing some data structure manipulations feel sort of counter-productive.

Go makes up for this by making other stuff being easy and fast to code.


I'm happy that Python's metaprogramming came in handy for you and your team, but Python's propensity for DSLs is one of the reasons I loathe it so much. I guess in the context of data science, a DSL makes sense, but it's a nightmare to debug.


> I guess in the context of data science, a DSL makes sense, but it's a nightmare to debug.

What do you think struct-tags are? it's not like Go is free of DSL either. In fact the std lib itself uses them. Let's not pretend Go is without fault on that matter.


I would agree with some of the points - speed (which is stretched over two points), concurrency, and the ability not to do magic as easily, but several of the points (compile time, available developers, strong ecosystem) are not a "versus Python" at all but rather against other languages.

Personally, I see a language like Go and Python as solving different spaces. I wouldn't write a lot of website business logic in Go, and I wouldn't write a low-level TCP redirection daemon in Python.


Yeah there are some use cases where Python is a clear winner. For us Python seemed like a good fit initially. Traffic was low and the API didn't provide more advanced features where Python's speed becomes an issue. (Ranking, aggregation)


We are switching from Python to Go, because:

> Reason 7 – Strong Ecosystem

> Go’s ecosystem (...) it’s of course not as good as (...) Python.

Then why do you put that as a reason to switch?


Similarly,

> Reason 5 – Fast Compile Time


If you can switch from Python to Go, you weren't using Python anyway; you were using Gothon or Javthon.

If you want static binaries, great tooling and an excellent imperative and functional language, try F# with mkbundle. The compelling reasons to use go are shrinking.


Would love to, but what I would miss from Go is that low latency GC. If ya need it, ya really really need it.

If only somebody wrote a nice ML that compiled to Go! I love a lot of things about Go, but the language certainly isn’t one of them.


I'm hacking on a PureScript-to-Go transpiler (ie. alt backend for the existing PureScript compiler once PR #3117 gets accepted). Watch http://github.com/metaleap/gonad/ --- hoping to "get there" within 2 weeks. Taking unreasonably long already, given that purs already does jobs like parsing, type-checking, and transforming-ML-to-imperative on its own.

One reason I'm taking a bit longer than the quickest-dirtiest approach would: I want to preserve the type information as much as possible rather than have all functions accepting, returning, and passing `interface{}`s in a JS-like manner. Ie it's meant to generate sane, readable, human-like idiomatic Go code. Type-classes mapped to interface types, etc.

I'll also have to attack the need to have any Go package be readily represented on the PureScript side as an existing module that'll show up in auto-complete and pass compilation, so as to quickly auto-generate some kind of dummy FFI bindings whenever a Go-land dep is imported and used. Fun ride!

There's clearly value in combining Go's compilation speeds, stdlib functionality, rich ecosystem, lean fast binaries, GC etc with a rich and cutting-edge Haskell-ish/ML-ish type system (and the leaner syntax and compressed idioms). Will be great for clear thinking and expressive high-level type-driven dev and DSLs (and naturally, implicitly generics/code-gen haha), without having to wrestle with GHC/cabal/stack build annoyances/times and Haskell's whacky academic overly-PhD-ish "wrappings" around raw straightforward real-world needs such as http-server and db-client, where again the Go ecosystem shines.


Thanks for the tip! I'll keep an eye on it.


I don't get how low latency GC can be a good selling point.

I mean, sure, Haskell's GC is a big downside, but if latency is so important that you have to look at how the performance of your GC compares to that of other languages, why don't you go with a language with no GC for once?

Because once you start to look at the details, you will need all of them, and how deterministic is Go's GC performance? What's its 99th percentile? What's the 99.999th percentile? Will any of those change in a future version?


Games. I like low latency and garbage collection. I feel like, hey, it's 2017, those things (along with performance within a reasonably small multiple of C's) really should go together.


Java and .Net games seem to have no problem getting into 120fps. Go's GC is optimized further.


120 fps is a measure of throughput, not latency. A high frame rate is good, but it's no guarantee that you're not going to drop a frame. Dropping a frame kills the experience, especially in VR.

In any case, .Net (and I gather Java) games typically go to considerable trouble to reduce GC pressure by pre-allocating memory pools and reusing them during execution. For certain kinds of games, like where you load everything in a level up front and don't deallocate until you finish, that can be fine. For other kinds of games, that's a nasty constraint to try to deal with. It ends up bending your architecture in ugly ways that make it hard to develop the game.
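The pooling technique described here might be sketched in Go with sync.Pool (buffer size and names are illustrative, not from any game engine):

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool reuses byte buffers across "frames" instead of allocating
// fresh ones, which keeps GC pressure (and pause risk) down.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, 0, 4096) },
}

func renderFrame() int {
	buf := bufPool.Get().([]byte)[:0] // reuse a pooled buffer
	buf = append(buf, "frame data"...)
	n := len(buf)
	bufPool.Put(buf) // hand it back for the next frame
	return n
}

func main() {
	fmt.Println(renderFrame()) // 10
}
```

Same idea as the .Net memory pools: steady-state frames allocate (almost) nothing new.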


Well, that's a good rationale. That's indeed a good reason to look into Go.


If you want a nice ML with low GC latency, how about Rust?

(Did you look at tuning the GC? As a Scala guy I know some people write off the JVM because the default GC settings are optimised for throughput rather than latency (whereas Go does the opposite), when often their requirements are comfortably within what the JVM can do when suitably configured. That said, I've heard the CLR is less tuneable.)


Rust is too complicated for me to learn and be productive with in a reasonably short period of time, and the CLR is not good at keeping GC pauses down. :-(

I have high hopes for Pony, though.


One problem: CLR interop means that F#, an ML descendant, is usually reduced to "C# but the syntax is less noisy/annoying".


one problem: windows


F# runs everywhere. I dev on OSX using Rider and VS Code. Deploy to Linux.

http://fsharp.org/use/linux/

http://fsharp.org/

> F# runs on Linux, Mac OS X, Android, iOS, Windows, GPUs, and browsers. It is free to use and is open source under an OSI-approved license.


Scala or OCaml are similar options if you don't want to use the CLR.


> Python allows you to get pretty creative with the code you’re writing. For instance, you can:

> Use MetaClasses to self-register classes upon code initialization

Avoid metaclasses....

> Swap out True and False

Not possible anymore

> Add functions to the list of built-in functions

What are the practical reasons?

> Overload operators via magic methods

Useful, but I'd also recommend against it. I can see someone writing numpy doing that, but most applications don't have a reason.

Profile profile profile! Find the bottlenecks.


Django actually uses metaclasses quite nicely for the Forms and Model definitions.


"Avoid metaclasses...."

Good luck finding a Python library that doesn't use metaclasses anywhere!



You can tell a lightweight article when you read that all you need to do to get concurrency working is stick 'go' in front of a function call.

I must mention that to the LVM developers dealing with my latest lost wakeup problem.


Instead of all the 'we replaced x with y', I'd like to for once read something along the lines of 'we started using x and y side by side, embracing the benefits of each'

I use both Python and Go regularly and the use cases rarely overlap. They are both fantastic languages and once you realise what you want to use each one for, you'll be one happy developer. (Applies to many other combinations of languages.)


Right now, Docker, K8s and their adoption among Microsoft, Oracle, and IBM are what keeps me keeping up with what goes on in Go, because "hey we need to write some stuff in Go here".

For me Reason 3 is exactly the opposite, if I have to spend time manually writing code that other languages give me for free, I am anything but productive.


Performance shouldn't be a compelling reason to change the language for a project. If they have truly benchmarked, profiled, reworked hotspots using Cython and looked into major bottlenecks such as IO and Python still isn't performing up to scratch then fair enough.


> Go’s fast compile times are a major productivity win compared to languages like Java and C++ which are famous for sluggish compilation speed.

C++ is famous for sluggish compilation speed, but Java is not. Java code compiles pretty quickly; it’s a simple language.


In my experience, Java is neither a simple language, nor does Java code compile quickly.

However, the largest disadvantage of Java is that Java programs generally consume more than 10x the memory of comparable Go programs, and need 20x more time to fully warm up.

I do admit Java is more consistent than Go from the syntax design view.


Java is not a simple language, period

Now, if you mean "simple" as in, needs an IDE to figure out what other languages figure out in runtime/compile time automatically, then yes, it's a dumb language

Not to mention the library, which is a jigsaw puzzle of barely fitting parts that need to be connected in non-obvious ways to work, made by people who want to prove they know several design patterns and can apply them to any situation, instead of something rational and straightforward


> Go forces you to stick to the basics. This makes it very easy to read anyone’s code and immediately understand what’s going on.

I think all new language design must incorporate this goal going forward. It’s been simply too valuable. When I see something introduced that does x, y incrementally better but makes no effort for the above goal I get sad and move on.

It’s easy to get caught up in the technical challenges of a language, but this meta challenge is what really helped move the ball forward with Go, and I’d love to see it introduced as a first-class goal in new language design.


Some people insist on being obtuse (coincidentally, I'm looking in the general direction of a template heavy C++ user), and for worse or better, personal preference is a large factor in language adoption as opposed to what works best for the industry as an engineering practice.


I think it depends on the use case. For individual component programming, sure: pick a hammer and make everyone use it. But for systems programming, being able to be multi-paradigm is a huge benefit. TIMTOWTDI ;)


when you list as a con "package management", meaning "go is even worse than python"... man.


I can't help but feel that in Go, performance is more important than developer productivity. Go is the new C. The code looks a bit clunky, with lots of error checks.

I'd rather work in a language like Python or Kotlin where the code looks just how you would imagine it (if I were writing it in pseudocode on paper).

However, Go will really be nice in big teams. That's why they invented it in the first place.


I'm surprised to see that people choose anything other than C++ if they care about performance. Are you really trying to profile and optimise python and go? It will never be worth it! Just write the same thing in good modern C++ and you get an automatic 100x speed up for most cases. Then optimise to reach the absolute limits of the hardware. Python and go it seems!


Some people choose Go rather than C++ for the same reason you went with C++ rather than an assembly language.

Most of the time, things only need to be fast enough and trading speed for ease of development, deployment, and maintenance is an easy decision to make.


The guy didn't even mention type checking. Guess it's not a big deal to him. Zero type errors during runtime is a big deal.


Well, it's Go, so you're going to have some runtime type errors. interface{} is way too common to avoid them completely (I had the order of arguments for two ConcurrentMaps wrong, for instance, and it blew up on reads -- found it almost instantly, but alas, at runtime). But it should be a much smaller number than python.
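For what it's worth, the comma-ok assertion form at least turns that kind of blow-up into a checkable value; a small sketch (asInt is a made-up helper):

```go
package main

import "fmt"

// asInt is a made-up helper showing the comma-ok form: a bare v.(int)
// on a non-int value panics at runtime, while the two-value form
// reports the failure as a boolean instead.
func asInt(v interface{}) (int, bool) {
	n, ok := v.(int)
	return n, ok
}

func main() {
	fmt.Println(asInt(42))           // 42 true
	fmt.Println(asInt("not an int")) // 0 false
}
```

You still find the mistake at runtime, just without the panic.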


That's a very good thing for a language to have, indeed. But Go isn't the best thing to look into for productivity-via-the-type-system. What is (barring the purely functional languages' learning curve) is most any ML descendant; albeit Rust in particular if Go-like performance is desirable.


Rust is descended from ml? Isn't Rust a algol based language? Is it functional?


OCaml was a really strong influence on Rust. The first Rust compiler was written in OCaml. Rust's traits are also similar to Haskell's typeclasses.


Rust also has discriminated unions, structural pattern matching and local type inference.


In addition to typeclasses, Rust has discriminated unions (ergo algebraic datatypes), structural pattern matching, first-class functions, local type inference, and an exclusive/nonexclusive dichotomy of references which serves as a useful proxy for mutability.


People who use Python are generally happy without it IMO. It's not a big deal.


people who use python more aren't as happy about it, otherwise mypy wouldn't exist.

i use python more, can't recommend mypy enough, it's great for what it is. it has warts, but that's to be expected if you add an optional strict modern type system to 20 years of dynamic language.


A large part of why mypy exists is due to cargo-culting groupthink that leaves the 'static typing > dynamic typing' trope less challenged than it ought to be. Static typing is just not as useful as the glossy claims - and the boilerplate cruft incurs way more friction than typists like to admit.

mypy is great, I use it in 'public' api's, tricky parts where context alone is inadequate and where I previously fucked up - the latter usually is a signal that the flow or abstractions can be improved to the extent that I can remove the typing hints.

mypy helps me, it does not force me to throw a goat into the lava pit to appease the compiler.


friction vs 99% reduction of runtime type errors...

If the goal is convenience don't go with types. If the goal is robustness, use types.


Well, Python has had optional type hints since 3.5 (PEP 484) so...


Really? How does it work?


Google mypy for an introduction. Pycharm has good support for it if you want IDE integration.

The stdlib is globally well annoted now. However, most 3rd party libs are not.


Mixing untyped libraries with typed ones basically makes the whole monolith behave as if it were untyped. Once the major libraries get typed, Python projects will become much more robust.


It's not an all or nothing proposition. Gradual improvement is a thing.


agreed. I was just stating something, not a counter at all.


A nice write-up generally. I just feel he should also have mentioned the classic static vs dynamic typing thing. From my perspective, teams switching to Go from a dynamically typed language such as Python really appreciate the additional compiler help it provides ;)


You have type checks in python if you want them. It's just opt in.


I've done a fair bit of web dev work in Python but I am currently building my startup (https://www.growthmetrics.io) in Go.

It is in the initial stages, but so far it has been pretty good - both as a learning experience and from a developer productivity perspective.

I've also open sourced a web app boilerplate that I extracted from GrowthMetrics' codebase - https://github.com/olliecoleman/alloy. It might be useful for people looking to get started with Go.


If you like Python but need a performant compiled alternative, take a look at Nim.


> Our largest micro service written in Go currently takes 6 seconds to compile.

What's the biggest, beefiest real world program yet written in Go, and how long does it take to compile it?



> Docker takes about 10m to build: https://jenkins.dockerproject.org/job/docker/job/docker.gith....

That seems to be docker.github.io, the Docker documentation, not Docker itself (and not written in Go).


Doesn't sound too fast; that's about how long it takes for the FreeBSD kernel to compile[1], and I would imagine a kernel is more complex. The whole system takes about 50 minutes[2].

[1] https://ci.freebsd.org/job/FreeBSD-head-amd64-LINT/buildTime...

[2] https://ci.freebsd.org/job/FreeBSD-head-amd64-build/buildTim...


Go's main compilation performance gains come from incremental compilation. Since C++ doesn't have real module support merely parsing the entire dependency graph from headers can take a very long time. In go if you have .a pkgs for all your dependencies your program will compile very fast.

If you use docker as part of your CI, build the dependencies as a separate step so they can be cached between builds. This is especially important for libraries like sqlite, which can take a lot longer to compile.


On a not-too-old notebook, building k8s needs less than one minute.


Two large projects that I know of are CockroachDB and Influxdb

A quick glance seems to indicate that cockroachdb takes about 9 minutes [1] and infuxdb about 15 minutes [2]

[1] https://teamcity.cockroachdb.com/viewLog.html?buildId=380603...

[2] https://circleci.com/gh/influxdata/influxdb/tree/master


That build log for CockroachDB takes a lot longer than a normal build, because it's generating releasable artifacts for multiple platforms, which means running the entire build process multiple times. Also, the build time is dominated by a few vendored C++ dependencies (mainly libprotobuf and libsnappy) which are built using CMake.

On my system, building just the Go code for the latest version of CockroachDB (about 360kloc) takes 24 seconds.


Building pure go applications can be speedy but as soon as you introduce cgo, you can expect these times to inflate pretty significantly.


Since when is Java famous for sluggish compilation???


The only thing actually missing from Go's standard library that I've wanted is an abstraction of 'native' UI things (assuming you're compiling for a graphical platform).

For portability there is shipping your own widget set (Qt is OK for this if your licensing needs are compatible with either of its licenses).


Re: frameworks

My experience with the language (building multiple production systems) is that the language itself is sort of its own framework. You don't need to add much to build an application with the standard library. Which is a huge plus and makes things simpler.


You could say that nearly for any language. Very often you could do just fine without one.


That applies to all languages with exception of C, because they didn't want to standardize it as part of the language, hence we got ANSI C + POSIX instead.


Don't forget Javascript which is ridiculously under-featured natively. At least C has the excuse of having to run on the bare metal of a wide range of architectures.


Go provides built-in functionality to develop web applications: things like template rendering, routing, a generic SQL interface, etc., which are not commonly found in most languages' standard libraries. Python, OTOH, requires libraries like Flask, Jinja2, and SQLAlchemy to provide the same functionality Go provides by default. That's my whole point.


And they're somewhat awkward versions of all the above such that most projects pull in deps like gorilla/* for those things anyway.

PHP has template rendering, routing, and database conns built right in too ;)


Here is a similar piece if someone is interested:

https://hackernoon.com/the-beauty-of-go-98057e3f0a7d


> Another great aspect of concurrency in Go is the race detector. This makes it easy [emphasis mine] to figure out if there are any race conditions within your asynchronous code.

Am I reading correctly?


It can be a bear to try and track down some race conditions but at least it will not report false positives. However, it will not catch all race conditions or data races you may have.
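A contrived sketch of the kind of data race it does catch (my own example): run it with `go run -race` and the detector flags the unsynchronized counter.

```go
package main

import (
	"fmt"
	"sync"
)

// racyCount increments a shared counter from n goroutines with no
// synchronization -- a deliberate data race that `go run -race`
// reports, even though the program usually "works" without the flag.
func racyCount(n int) int {
	var wg sync.WaitGroup
	counter := 0
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counter++ // unsynchronized read-modify-write
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	fmt.Println(racyCount(100)) // often 100, but not guaranteed
}
```

Note the detector only reports races that actually occur during the run, which is why it can't promise to catch everything.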


Is Go even memory-safe in the presence of data races? Don't get me wrong, “memory safety” isn't a hard requirement in my book, but “memory safety or a formal semantics” is.


If you're curious, the behavior is well defined: golang.org/ref/mem


My English reading comprehension is admittedly poor, but I don't see any portion of the text that automatically implies that unsynchronized reads and writes are atomic.


You know there are but that's the end - still not easy to reason about them haha!


I like these "Why we switched from X to Y" articles. They can be summarized in one sentence most of the time: "Because it supported our use case better."


It has func and var compiler hints, which I hate (just like in Swift). It's harder to read too (just like Swift).

I'll stick with Python.


The post never mentions PyPy. Have you tried it? Could increase the performance dramatically for free.


Did you consider scala ?

I use it at work, and it is awesome. It has awesome error handling (Either/Option >>> exceptions) in my opinion, and a much stronger type system (many errors are caught at compile time). It is harder to learn, but you also get a top-notch package manager with the even bigger Scala + Java ecosystem.


"Plan to throw one away. You will, anyhow." - Fred Brooks


Oh, terrific, another one.

The only solid reason this article gave was the second one and, really, is the only one that would ever make sense: If the language one's using becomes the bottleneck then, by all means, change it[1].

Everything else is fluff and opinions. Every single other reason is stuff that is entirely subjective and/or python already has and even the article itself acknowledges, at least for one "reason", python as the better option.

Well, at least they tried to be impartial by mentioning the absolutely odious error handling in Go. It's the biggest reason I became so disillusioned with the language once I actually tried it.

[1]: I, however, would try switching CPython for pypy before rewriting the entire architecture from scratch. It's more performant.


I don't think that performance is the key reason to switch from Python to Go (although it is nice): as someone who switched from Python to Go, what really made it worthwhile for me was knowing that my code is significantly more likely to be correct when it runs. Go's typing is much nicer than Python's, for this use case.

Performance is just gravy.


Type errors/misuses aren't my biggest problem and Go's type system isn't very powerful. The Python code is smaller, more expressive, easier to review, etc. I have more confidence in the correctness of pylinted Python code. I wouldn't jump to trade conciseness for compile-time type safety, particularly for applications, because it's easier to create/not see a logic error in heavy code. And depending on what you're doing, Python's type system may be more powerful/appropriate.


I hear that. I work in Python, but prototype most things in Go, simply because many classes of problems benefit from the additional structure, and a compiler that can reason about that structure. Also Go has excellent tooling, it has great libraries, it compiles to static binaries, and anyone can read it.

I'm hopeful about Python's type checking though, but so far it leaves a lot to be desired (absolutely no recursive types, lots of bugs, strange restrictions, confusing for variables, etc).


That's subjective, and part of the sixty-year-old flamewar known as "dynamic vs static".

You like to write the type of your variables next to them, all the power to you. I don't or, more exactly, I prefer the little extra flexibility of not having to do that.


It's subjective. Big benefit I see with Go is that you get a fast language (comparable to C++ and Java) while still being very productive to work with.

We actually did try pypy for a few parts of our infrastructure. It's better, but for what we tried only 2x faster.


You had me at 4. I'm definitely GOing to give this language a GO now. That was a GOod introduction to the language, a veritable advertisement, if you will.


The audience on Hacker News takes themselves way too seriously, apparently.


Case in point. Thanks. Also, the moderators seem to have no life. Please deactivate my account if you're offended. I'd prefer not to spend more time attempting to communicate with one dimensional, armchair experts.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: