Python was my entry into the programming world, and I've been an evangelist ever since... or I was, until I ran into distribution and parallelism. Since then, Nim has been my go-to language. It is everything Python was, plus unbelievable speed, compilation to shippable binaries, and some other cool language features that, admittedly, are still beyond my abilities. It's still quite lacking in libraries compared to Python, but after a few (perhaps halfhearted) attempts, I never felt Go was a suitable replacement.
For me, the ecosystem matters as much as the language itself these days (actually more). There are languages that I've used at home just as a learning experience, but ideally, I want a language that I can put into production at work. That means quite a list of criteria: only a small number of languages have a well-supported AWS SDK, for example.
I won't say that Go is mainstream yet, but it feels very "production-ready" in those respects.
I'd say the simplest "production-ready" criterion is this: does it have an officially supported and stable release of an IntelliJ IDE? (There is a slight Java-ecosystem bias here, but I think it's only slight.)
So the mainstream languages are...: C/C++, C#, F#, Go, Groovy, Java, JavaScript, TypeScript, Kotlin, Objective-C, PHP, Python, Ruby, Scala, SQL, Swift, VB.NET (source: https://www.jetbrains.com/products.html).
So, off the top of my head, Erlang, Haskell, OCaml, Perl, Fortran, COBOL, and any kind of shell script are not production-ready. Nice to have that clarified. :P
I wish that I knew a phrase to mean "has all of the documentation, tools, and libraries that are needed for an average team of developers following common practices to be likely to successfully ship and effectively maintain commercial projects, consistently".
Many really interesting languages and frameworks don't pass this criterion. That doesn't mean they are failures: the idea of programming languages designed for teaching or research, and specifically not for commercial use, perhaps used to be more common than it is now.
I'm the opposite; I don't sink much time into learning a language unless it has a high-quality vim plugin and command-line tooling. It's fairly shallow, but I want to learn how things work behind the scenes, and I don't like fighting the editor over my preferences. It's a lot of small things, but why not enjoy your hobby time? This is just my personal preference, and I understand and respect that there are lots of folks who enjoy their IDEs the way I enjoy vim/Unix.
> So the mainstream languages are...: C/C++, C#, F#, Go, Groovy, Java, JavaScript, TypeScript, Kotlin, Objective-C, PHP, Python, Ruby, Scala, SQL, Swift, VB.NET
Of those 18 languages, only 13 are in the TIOBE top 50; Apache Groovy and TypeScript, for instance, don't appear there at all. It's pushing credibility to call those "mainstream" languages -- the Java-ecosystem bias is more than "only slight".
Slight nit, but I don't think it's accurate to say that Go has an "officially supported and stable release of an IntelliJ IDE". While Gogland is excellent (I use it as my main Go editor at work), I think it's still technically in pre-release.
On a more unrelated note, is Groovy widely used? As someone who doesn't really use JVM languages, my impression was that its usage wasn't comparable to Kotlin's and Scala's, although I might be mistaken.
Apache Groovy's used for scripting on the JVM, like Bash and Perl on Linux. It's OK for glue code, mock testing, and 10-line build scripts in Gradle. But 5 years ago, some of Groovy's backers tried to re-purpose it as a competitor to statically typed languages like Java, Kotlin, and Scala, which is when things went awry. If it had stuck to its knitting, it could have been widely used for scripting classes written in Kotlin and Scala as well as Java. But it didn't even keep up with Java -- it still doesn't have Java 8's lambdas on the eve of Java 9's release.
Django's ecosystem has spoilt me so much that it's very hard to switch to another language, at least as far as web apps are concerned. I would love to play with Rust/Nim more, but most of the side-projects I'm passionate about involve at least a few things that Python has a lib for that would be a huge time sink to rewrite in another language.
> This is an experiment for running Nim code on the ESP8266 ("NodeMCU") microcontroller. The end goal is an extendable framework on which to build autonomous ESP8266 applications, along with an associated web service which acts as a base station providing back-end message aggregation and front-end information display/control panel.
These days I mostly do web, so Django and the whole ecosystem are just missing from Rust, but I'd settle for just the ORM. I found Diesel pretty hard to get started with, unfortunately :/
My other hobby is microcontrollers, and I would love it if I could run Rust on my ESP8266. I can already run MicroPython, but it feels a bit hacky (probably unfairly), and it would be amazing if I could use Rust and easily link in all the C/C++ libraries the Arduino framework has available.
I know this reply isn't useful (I miss all the libraries, woo), but it's as specific as I can get. I started a very simple website uptime monitoring service:
Yeah, Diesel is tough; they're working on better docs which should help out quite a bit.
IIRC someone is working on the ESP stuff...
> I know this reply isn't useful
Naw, it's all good: it's just as much about qualitative as quantitative. It's also because I'm writing a small framework in Rust in my spare time, so "rust for web" stuff is extremely top of mind for me. Thanks for taking the time :)
> they're working on better docs which should help out quite a bit.
That'd be great, because I currently can't find a niche where Rust is so much better than Python that I won't just fall back to that, impeding my Rust progress.
> IIRC someone is working on the ESP stuff...
That's what I heard too, but progress seems stalled. Hopefully, in the end, it'll be easy to run on the ESP, but the biggest problem is the ecosystem (which is also much of the reason why I don't use micropython all the time). There are so many libraries for the Arduino framework that C is hard to escape.
I use both Go and Python regularly, and both have their uses and issues. For me, Go is a rather simple tool that doesn't get in my way and just lets you do stuff -- on its own it's relatively boring. While it's probably not everyone's cup of tea, I like its simplicity. It does have some of the downsides the article mentions, like package management and versioning. I understand why vendoring is being pushed as 'the official solution', but it still feels backwards to me.
The thing that eventually makes me grab Go time and time again is how damn easy it is to drop into the code of some library -- even (and maybe especially) the standard library -- something I rarely did or do in any other language. With Go there's little friction to doing this, for multiple reasons. The source of truth for pretty much any project is almost exclusively GitHub, the godoc.org documentation links directly into the source code for every single function and struct, and Go forces you to set up a dev environment that makes you understand its structure: where your libraries and their code will end up. I had mixed feelings about that last part at first, but after a while you come to appreciate that the libraries you're using aren't tucked away in some system directory. Add to that that Go is a simple, pretty readable language, and you end up with a very transparent library system. Would I ever even think about jumping into the paramiko library (which I've used on multiple occasions)? Not at all, so I didn't. The first time I used the crypto/ssh lib, I also didn't think about jumping into the code, but somehow I did, because it was the natural thing to do, and it made me understand both the internals and SSH a lot better (paramiko was slightly easier to get started with, though).
Another thing I really like is that the language encourages writing not 'applications' but libraries that can be reused by virtually anyone, with your final application being a relatively small front-end shim over them. While this isn't hard in Python, creating a separate library for smaller stuff always felt like overhead there, while in Go it feels like the natural thing to do. In Python, there's yet another ton of overhead if you want to add something to PyPI -- while in Go it's a `git push` away. This creates its own serious issues and pitfalls, but it makes contributing a library a lot easier.
Nim by Example[0] is a great introduction. The blog mentioned in the OP also has a writeup that explores some of the tooling[1]. After that, the official tutorial[2] is a comprehensive dive. The standard library documentation is sometimes lacking but is easily searchable.
> High-performance garbage-collected language
> Compiles to C, C++ or JavaScript
> Produces dependency-free binaries
> Runs on Windows, macOS, Linux, and more
Agreed completely. We're in the process of porting our Python 'shim' (aka agent, at https://Userify.com -- plug: SSH/sudo key management) to Nim right now, so that we can provide a fully static shim for CoreOS and other minimal distros, and eventually Windows. There are a few languages that can do this cleanly, such as Go, OCaml, and Lua, but Nim is just blindingly fast and actually pretty fun to code in. Great stuff.
For that project, we had very specific requirements: easily handle SSL/TLS with contexts and control over self-signed vs certificate checking, JSON processing, speed, nice syntax, and one of the most challenging requirements: statically compiled, linkable against musl and libressl, while still supporting mingw_64 for Windows. Only a few languages have flexible compilers that can do this; for example, Rust can't (AFAIK).
The experience so far has been outstanding. Nim has functioned flawlessly with a minimum of magic. It seems to work very cleanly, and the compiler is cleanly integrated but still swappable (i.e. between gcc, clang, mingw...). Nicely color-coded, too.
Exceptions are caught with full tracebacks, and pre-compile checks quickly point out the exact location of syntax errors. (Good, clear error messages are surprisingly missing from many languages.)
Here's an awesome example: in my first day of coding, I was able to replicate Python's "+" string concatenator ("hello" + "world" versus "hello" & "world" in Nim) with a one-liner:
proc `+` (x, y: string): string = x & y
This is pretty amazing; not only is it readable and concise (with more than a passing similarity to Python's lambda, of course), but Nim comes with the ability to define new operators right in the language, and the compiler raises an error if operators, procs, types, etc. would introduce ambiguity.
Nim compiles quickly, and its type inference (where the compiler works out what type of variable you're working with) makes static typing mostly painless, while you still get all of its advantages (type safety, speed).
There are some trade-offs that are made (obviously), but the language designers seem to make trade-offs in favor of speed and robustness over language features -- but this still leaves a lot of room for features.
I also like how the syntax has a lot of similarities to Python's. The only thing I've missed so far is a Nim interpreter, so that I could get up to speed on the syntax or try things out quickly. The tutorial on the Nim website is definitely not for beginning coders (who would probably be quickly scared off by words like "lexical"), but it quickly covers the language syntax for experienced coders and seems to borrow a lot of the best ideas from other languages.
Nim is basically awesome. The few downsides are that the standard library is still pretty light (but that gives you an opportunity to build something great and have it be widely adopted), that there's no interpreter, and that the tooling is still a bit lighter than older languages. All of these will be improved with time.
And it's fast. Really fast. Compare Nim in these benchmarks[1] to any other mid-level (or even low-level) language and it really shines. It's generally much faster than Go, for instance.
You can also check out this:
[redacted]
But there are almost no features in it, as I'm lazy :)
I think that the best feature in this lib is a python-like range type:
[redacted]
Thanks for your comment here. I first heard of Nim from you. It genuinely looks like the best of many worlds.
Easy to use and learn like Python: Check
Fast and efficient approaching the levels of C: Check
Ability to spit out a binary that just works without installing all the batteries: Check (particularly hurt by this in Python)
Uses multiple cores by default: Check
I'm going to spend a good deal of time playing with this and building some CLI tools at work.
Can you elaborate somewhat? How long ago did you try Go? Also, I agree: having a generous assortment of contributed libraries makes all the difference.
I tried Nim in a non-commercial capacity and liked it. I wrote an Aho-Corasick string-matching algorithm in it to see how it fared against Lua/LuaJIT, and it was fairly close in speed. The code was also quite pleasant to write.
I remember that debugging it was a bit of a pain back then, if it was a real option at all. Maybe things have changed.
> Since then, Nim has been my go-to language of choice.
Out of curiosity:
You use Nim professionally? Do you work for a company? What kind of software do you make? And what sort of people or organizations are your customers?
> Go forces you to stick to the basics. This makes it very easy to read anyone’s code and immediately understand what’s going on.
I'm not criticizing Go, since I have no LoC in it, but in other restricted-and-flat languages and domains, our company's expertise quickly (read: in half a decade) hit the language's ceiling, with no way to turn the language partly into a DSL and level up. It's like playing an RPG where you're stuck at level 5 and never get powerful enough to take a mid-level quest, even with a great party.
While it seems cool to have automatic jsonification, a built-in build process, and out-of-the-box concurrency, the inability to create something that only your team can use and understand effectively means you're locked into growing markets (bubbles) and never build real expertise and/or a budget edge over bloated competition. That may come back to bite you, should your road cross with a competitor who has.
Again, the type of team I'm talking about must be rare, and "there is only one opinionated way to do it" seems to fit better for average tasks.
> inability to create something that only your team can use and understand effectively means you’re locked to growing markets (bubbles) and never have real expertise and/or budget over bloated competition
Could you unpack that a bit further?
By "inability to create something that only your team can use and understand effectively", do you mean "inability to create something really powerful"? As it stands, it sounds like a good thing.
I understand what you mean by "inability to create something that only your team can use and understand", but the article doesn't say anything about that and isn't really about developing new code in Go. It's about Python not matching their needs after a few tries at optimization, and about Go having a similar development time (although if the code was already implemented in Python, then I assume the Go version was almost trivial to write, so I'm not sure that was a valid point).
As for "'there is only one opinionated way to do it' seems to fit better on average tasks", from the Zen of Python: "There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch." And Python has been used to do plenty of above average tasks.
At a high enough level, Python does not live up to that zen. It does not restrict how you will organize your code, it just restricts instruction level options.
Keep in mind that the Zen of Python is mostly about how Python differs from its ancestors (basically, Perl). It's not a work on software engineering.
I'm surprised to see Go's error handling listed as a disadvantage. Go encourages writing good error messages, and that's one of my favorite things about the language.
To get a sense of it, take a look at a failing test in a language such as Python that uses asserts for testing, and compare it to the equivalent written in Go. Quite often, the error message in Go will be clear and to the point. On the other hand, I've seen plenty of assertion-based tests in other languages that report inane things like "1 != 2" or "true != false" when they fail.
Go encourages putting the effort into this up front. It pays off later when things go wrong and you want to know why.
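For contrast, here's a hypothetical Python sketch (function and names invented for illustration) of the kind of up-front effort being described: building the context into the error itself, rather than leaning on a bare assert's "1 != 2":

```python
def check_free_ports(expected, actual):
    """Fail with context instead of a bare 'expected != actual'."""
    if expected != actual:
        # The message says what was compared and hints at a likely cause,
        # the way a well-written Go error string typically does.
        raise AssertionError(
            f"free port count mismatch: expected {expected}, got {actual} "
            f"(did a previous test leak a listener?)"
        )

try:
    check_free_ports(2, 1)
except AssertionError as e:
    print(e)
```

A bare `assert expected == actual` would have reported only the two numbers; the handcrafted message tells you where to start looking.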
Probably complaints from people used to exceptions and/or assertions, where there is perhaps less boilerplate and less manual verification that context bubbles up to the right place.
You can get there either way, but what you're used to weighs heavily.
For your example, depending on my experience, knowing that 1!=2 in some code I'm using directly might be more meaningful or actionable than "out of filehandles for accept()" in some deep chain of dependencies I know little about. Just depends on what you're used to.
Or anyone who's used a language where errors aren't as stringly-typed as Go and where the compiler can enforce error handling. Exceptions have benefits and drawbacks, but any language that can return a Result type will have better error handling than Go in basically every way.
Yeah, I think the point is that errors shouldn't be overloaded with too much extra structure. They should be errors. If additional structure is needed, it should probably be returned separately via Go's usual multiple-returns idiom.
Additional structure like, say, a numeric error code? An HttpError could definitely contain the corresponding status code.
But I think the bigger point is that, for libraries, errors should be an enumerated set of possible error conditions, not strings. You say "additional structure"; I say the duplicate filename, the invalid email address, or, crucially, the error that caused the error... chaining errors is very useful. The multiple-returns idiom in Go may make this possible, but it's hacky and strictly worse than languages with generics and a more richly typed set of errors.
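For what it's worth, Python's exception chaining shows what "the error that caused the error" can look like; a minimal sketch, with made-up names:

```python
class ConfigError(Exception):
    """Domain-level error that keeps its low-level cause attached."""

def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as exc:
        # 'raise ... from exc' chains the cause; both errors survive
        # in the traceback instead of one string swallowing the other.
        raise ConfigError(f"could not load config from {path!r}") from exc

try:
    load_config("/definitely/not/here.conf")
except ConfigError as err:
    # The high-level error and its original cause are both available.
    assert isinstance(err.__cause__, OSError)
```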
The point I'm making here is that everyone knows 1 != 2. It's not helpful for a test to tell you that, because then you have to go hunting around to figure out what is wrong instead of getting at least a rough idea right away. That doesn't depend on what you're used to.
The reason I personally don't like it is that it's basically a slight improvement over C's error handling, when everyone else has substantially improved. In C, basically every function could fail and you needed to check the return code. In some cases you'd have to invent an impossible return value to signal an error, and in some cases (e.g. functions returning binary data) you'd need some other global variable to test. Go fixed all the error-signalling issues but still has the problem of needing to check every function's return... which no one is going to do. In C, if printf fails, that usually just happens without the program noticing. In any language with more modern error handling (note: not just exceptions), printing can't silently fail.
> I'm surprised to see Go's error handling listed as a disadvantage. Go encourages writing good error messages, and that's one of my favorite things about the language.
The biggest reason for me is the inability to get stack traces showing where an error came from.
> Quite often, the error message in Go will be clear and to the point. On the other hand, I've seen plenty of assertion based tests in other languages that report inane things like "1 != 2" or "true != false" when they fail.
An error message like '1 != 2' is a lot more meaningful when it's reported as the reason that test_new_foo_has_2_bars failed. Occasionally it's nice to give a more explicit message, which is why in e.g. Python you can optionally supply one, but it's generally not an issue if every test really tests only one thing. To me, the Go approach seems to encourage writing cluttered tests that test several things at once.
Something as simple as allowing the use of multi-result calls in if statements would make a huge difference. Being able to write "if err := blah(); err != nil { ... }" is great (although it does kind of hide the original call.)
WTF? You are absolutely right. I could've sworn I had issues with this before. 25% of my irritation with Go just evaporated (the other 75% is how hard it is to run Delve, but I'll figure that out.)
> On the other hand, I've seen plenty of assertion based tests in other languages that report inane things like "1 != 2" or "true != false" when they fail.
In Python, one would need to supply the message argument to avoid that. But Go seems no different, really; you still have to write the message yourself. My understanding of the testing package is that Python's:
self.assertEqual(expect, actual, f'expected ({expect!r}) != actual ({actual!r})')
is approximately:
if expect != actual {
test.Errorf("expected (%v) != actual (%v)", expect, actual)
}
I'm honestly not sure which is "better".
I do wish Python could figure out something better than "1 != 2", such that a bare assert would just do The Right Thing™ gracefully, but that would require facilities the language hasn't got (e.g., some macro capability). There are some packages that do black-magic stack walking to do slightly better, IIRC.
To me, Go's error handling falls a bit short because it's possible to forget to handle an error (though thankfully the fact that unused variables are an error makes this more difficult), and you don't get an error or a result -- you always get an error and a result. (The language lacks a sum type, so it can't really do much better here.) It's on the programmer not to use the result, and that felt and still feels weird to me. The constant do-something, check, propagate dance is tiring, too... I had hoped C had beaten that out of language design as too tedious, frankly. (Cf. Rust, where even though error propagation is always explicit, it was `try!(foo)` early on and `foo?` more recently.)
py.test does the magic stack walking, and it does indeed make for much less verbose tests -- no need to remember which self.assertSomething method to call, either:
x = 1
y = 2
items = [4,5,6]
assert (x+y) in items
Outputs this, replacing the variable names with their values:
test.py:4: in <module>
assert (x+y) in items
E assert (1 + 2) in [4, 5, 6]
I think the if-statement one is clearer, but in terms of results they're both fine. At the same time, I usually don't see people put in the extra work to add these helpful messages to their assertions. Maybe one reason is because they make the code a little harder to read.
I've been searching for an equivalent in some Python unit-test framework of Google Test's predicate-formatter feature. Crafting an intelligible failure interpretation that even points a developer toward what to fix is solid gold.
https://github.com/google/googletest/blob/master/googletest/...
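I don't know of an exact equivalent, but the pattern is easy to approximate by hand: a check function that, on failure, explains the diagnosis instead of just echoing the operands. This loosely mirrors the MutuallyPrime example from the Google Test docs; the function name here is illustrative:

```python
import math

def assert_mutually_prime(m, n):
    """On failure, report *why* the check failed, not just that it did."""
    g = math.gcd(m, n)
    if g != 1:
        raise AssertionError(
            f"{m} and {n} are not mutually prime, as they share the factor {g}"
        )

assert_mutually_prime(3, 7)     # passes silently
# assert_mutually_prime(4, 6)   # would fail, naming the shared factor 2
```

The failure message tells you the offending factor directly, so there's nothing to go hunting for.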
How do these shallow articles get upvoted so much? They don't have much specific information beyond very generic "developer productivity".
Let me give a specific example where moving to Go really helped our tooling:
Go has some great interfaces, specifically in its net and ssh packages. In order to perform operations against some machines, we have to tunnel through bastions; however, we'd also like the tool to work when an operator has already ssh'd into the region (the tool is installed on hosts as well, so that long-running tasks can be performed).
It was easy to create HTTP clients that tunnel through an SSH connection via Go's Dialer interface (present in both the net and ssh packages), or connect directly if no tunnel is needed.
I think people generally vote up stuff that glorifies their favorite language/smartphone vendor/operating system, even though many of the comparisons are shallow.
Actually, moving to Go was a very thorough and long process. We did small example projects and tried out many of the libraries we needed. The blog post is more of a high-level summary of why Go is such a great language.
Oh hey, it's you. I weirdly recognize your name from many many years ago. I have vague memories of dealing with you in submitting an open source patch for Django Facebook. Or maybe it was just a documentation change. Either way I remember you being very polite and professional. Just wanted to say that stuck with me and please keep it up, as I can see you're doing here!
I thought it was very thoughtful, detailed, well written and touched on all the points I would need to make a similar decision if it ever came down to it. Thanks for writing it.
No, most of the optimizations were Python-specific. But of course development time was a bit faster the second time around, since we were solving the same problem again.
Because it's interesting to read other people's stories and thoughts.
I don't use Go nor Python but still thought the article was interesting.
> they don't have much specific information except very generic "developer productivity"
They give you a lot of reasons but in reality choosing a language is seldom about arriving at a scientific conclusion and much more about what feels best. Just like choosing a dish at a restaurant.
I think the article is not that bad. I especially like the comparison: Python was a bit faster to develop with, but took much longer to optimize. Note that they actually used Python long enough to spend quite some time considering the move to Go; this means Python's performance ranged from good enough to barely enough for a (hopefully) long period. Also note the scale of that company's operations: if Python held up at even 10% of that scale, then it's certainly good enough for me!
That basically confirms the strength and weakness of Python: it's very good for prototyping, and if performance issues arise (which can happen later than one thinks), it's tough to optimize.
I'm a bit sceptical about the optimisation comparison, specifically where the AST was mentioned (did they parse the expression as Python code?) and Python came out that much slower even after fixes. As long as they were interpreting the AST, rather than compiling the expression to native code, I don't see a good reason for Go to be faster: interpreting expressions like that in Go would be almost as dynamic and lookup-heavy as in Python.
I'd like to see both apps for comparison / more context.
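To illustrate what I mean (a toy sketch, obviously not their code): a tree-walking evaluator pays for a type dispatch and a table lookup on every node, in Go just as in Python, whereas compiling the expression once moves that cost out of the hot path:

```python
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def interpret(node, env):
    # Tree-walking: a dispatch and a dictionary lookup on every node.
    if isinstance(node, ast.Expression):
        return interpret(node.body, env)
    if isinstance(node, ast.BinOp):
        return OPS[type(node.op)](interpret(node.left, env),
                                  interpret(node.right, env))
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.Name):
        return env[node.id]
    raise TypeError(f"unsupported node: {node!r}")

tree = ast.parse("a * 3 + b", mode="eval")
assert interpret(tree, {"a": 2, "b": 4}) == 10

# Compiling once and reusing the code object skips the per-node dispatch
# on every subsequent evaluation.
code = compile(tree, "<expr>", mode="eval")
assert eval(code, {"a": 2, "b": 4}) == 10
```

If the Go version walks the tree the same way `interpret` does, it's doing essentially the same dynamic work; the big win only comes from compiling.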
Although, if I had spent over two weeks developing and optimizing a solution in Language A, I would fully expect to develop and optimize it an awful lot quicker in Language B.
Additionally, a lot of performance issues can depend very much on what libraries are used.
The reasons in the article might be valid for this particular company, but in my experience, the performance aspect is less valid in 90% of applications.
Either the database is the limiting factor (and thus languages like Python are fast enough anyway), or the really performance-demanding parts are located in less than 5% of the code.
In my case, I enjoy the productivity that Python gives me, and when I encounter cases that demand very high performance, I implement them in C or with Cython (or both).
To each their own -- I tend to agree with OP. My takeaway was pretty close to "goroutines and gofmt are good", which are about the two most obvious talking points when discussing Go. "Shallow" has a bit of a negative connotation, but I was personally left hungry for a deeper analysis after reading it.
Why does anyone write web apps in Python? PHP? Ruby?
Modern Java and Go are so much faster than the alternatives that it's stupid to consider anything else if performance is important.
Go and Java can manage over half a million HTTP responses a second. Node is pretty fast, but why bother when Java is many times more mature in features and tooling, supports real concurrency, uses less RAM, and is usually faster? People moan about the "huge" Java runtime when JavaScript uses 3-5x more memory and has a huge runtime of its own.
All the big companies use Java and Go almost exclusively for high-volume endpoints, and it blows my mind the amount of mental gymnastics some companies go through to avoid following suit.
Java has come a long way since J2EE. These days it's asynchronous, non-blocking, serverless, etc. -- pretty much all the buzzwords thrown around about Node, except it's not JavaScript, which IMO is a huge win.
The answer is that for a huge variety of software, performance is not important, or perhaps is only important for a subset of the application.
My personal experience is that the dynamic languages you've laid out generally have frameworks that are extremely conducive to rapid prototyping (Django is my favorite). I've seen and done the dance many times -- start with a Django/Rails/Laravel app, get a free admin and build up some CRUD pages in no time flat, and then once you've got enough traffic to care, move parts of the application to more performant platforms (Go/JVM usually) as necessary.
Yeah, plus even if performance is important, the app layer isn't necessarily the best place to optimize. It doesn't really matter how fast you sprint between database calls if the database and its IO dominate your site's performance profile, which they often do...
The vast majority of slow websites I've seen written with RoR were slow because the DB layer wasn't optimised: N+1 query problems, pulling way more data than needed and then processing it in Ruby, missing indices, etc.
If that's really the case, why do people spawn multiple instances of their app?
A Python application can be anywhere from 10x to 50x slower than a native application. It also probably consumes at least 5x more memory.
Writing the same app in a compiled language is not even an optimization. It's just baseline work to ensure the code is not super slow.
Like, if you know you will sort a list of 10 items, choosing quicksort from the stdlib instead of bubble sort is not even an optimization. It's just common sense.
This is not true. Java and Go can use asynchronous IO and maintain thousands of concurrent connections. It's just another case where slow languages are... slow.
That sort of question is totally missing the point of why people use these languages, yeah? Languages in the web world don't tend to be chosen based on memory requirements (or speed as this suggests). Are there cases where you want to think about that? Sure.
People have plenty of reasons they'd want to use Python over Go, and vice versa.
The ecosystem is a much bigger deal. There's an officially supported Python library for every SaaS product on the market, and many libraries that are best-in-class in areas like data science. It takes minutes to write PDFs, make graphical charts, edit images, and handle a million other nuanced, minor parts of apps that you want but don't want to spend a ton of time writing yourself.
Java is the only static language that features roughly equivalent levels of support ecosystem wide.
Forking 10 processes does not use 10x the memory of a single process starting 10 threads; it's actually almost identical, as both are implemented by the kernel using clone(). Many older tools written in "fast" languages, like PostgreSQL and Apache, also use forking.
Not for almost a decade. Ruby web servers and job-processing frameworks have used forking out of the box since the releases of Phusion Passenger 2 in 2008 and Resque in 2009.
This just isn't true on any decently designed system I've seen. Practically any database can manage 5k complex-ish queries per second, and closer to 50k for the common simple queries.
Good luck getting more than 100 calls per second out of the slow languages
If you start getting into decently designed systems territory, you're still going to have trouble beating some of the stuff that comes out of the Python/Lisp/Node communities. For instance: https://magic.io/blog/asyncpg-1m-rows-from-postgres-to-pytho...
Anyway, if you're locally looping for hundreds/thousands of queries to the database, instead of writing one query or calling a stored procedure, you're probably doing things wrong.
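To make the looping-vs-one-query point concrete, here's a small sqlite3 sketch (the table and ids are invented for the demo). The per-row loop makes 143 separate round trips; one set-based query does the same work in a single call:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (id, name) VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(1000)])

wanted = list(range(0, 1000, 7))  # 143 ids we care about

# Anti-pattern: one round trip per id (the "local loop").
slow = [conn.execute("SELECT name FROM users WHERE id = ?", (i,)).fetchone()[0]
        for i in wanted]

# Better: one set-based query pushes the work to the database.
placeholders = ",".join("?" * len(wanted))
fast = [row[0] for row in conn.execute(
    f"SELECT name FROM users WHERE id IN ({placeholders}) ORDER BY id",
    wanted)]

assert slow == fast
print(len(fast))  # 143
```

With an in-memory database the difference is invisible; over a real network, each of those 143 round trips pays its own latency.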
Ah I see. I still think if you're measuring that way, there's no reason languages like Python et al. can't accomplish something similar. And if they can't, well, there's always adding more machines. Horizontal scaling tends to happen no matter what you're using, so is the argument just that with more productive (probably a matter of taste for many at this point) languages you'll have to scale out earlier? That's a tradeoff to consider, but there are other tradeoffs too.
https://www.techempower.com/benchmarks/#section=data-r14&hw=... has some interesting benchmarks, apparently we should be using Dart or C++. Go does slightly better than JS but not a lot. Some newer Python frameworks aren't on there yet. None of them reach near 50k, but I don't know the details of the benchmark and they aren't all using the same DB. Certainly you can get crazy numbers depending on what you're testing. e.g. https://github.com/squeaky-pl/japronto gets one million requests per second through async and pipelining but those requests aren't doing much.
True, horizontal scaling will always save you no matter how slow the front end is. Cost becomes significant at a certain level too though. For example, Google estimates each search query goes to over a thousand machines. If you need 100x 1000 machines to serve a query because the back end is PHP it adds up.
And you can make Python or even PHP fast if you try hard enough.
My argument is that the engineering overhead for Go and the new breed of Java frameworks is small enough that it makes no sense to use anything else if you're planning on scaling for real.
If you start with something else the cost of making a slow language fast and the multiples of extra machines you need costs far more than just using the faster language to start with
For the benchmark you posted, take a good look at the "realistic" vs "stripped" implementations and whether the test used an ORM. You'll quickly see that the realistic test applications with any kind of ORM are exclusively C#, Java, Go, and C++
And then you end up with slow applications or websites. Fast response times under high load can pay off and easily recoup longer development times. And it's much harder to change the system once you're successful.
Cost difference is another topic. While servers can be cheaper than developers, saving 90% of server cost can definitely clear some budget for enhancements.
> Modern Java and Go are so much faster than the alternatives that it's stupid to consider anything else if performance is important.
You answered your own question. Why bother with those languages when the language isn't the bottleneck? In those cases, what language one uses becomes an entirely subjective matter.
> All the big companies are [...]
For every large company using Java, Go, Rust, C, etc. There's another one (hell, probably the same one) also using Python, Ruby, PHP or JS.
Sorry, I have no interest in getting into a religious flamewar. I will only say that I find it weird that you think that there aren't endless hacks and VMs in [insert here the languages you like].
Eh, vanilla Java or Go is at least an order of magnitude faster than those other languages without messing with anything. You can mess around with the JVM and interfacing to native code but it's rarely needed because it's already within a low multiple of C performance writing things normally.
Arguments about what language is better are pretty flamey but it's hard to argue that performance is not an advantage of Go/Java
I wonder the opposite. Hardly any system needs to process 500k responses per second. But nearly all of them benefit from the developer productivity that Python/Rails offer.
My current company has 15 Java & React engineers where 2 or 3 Rails would suffice. Load tops out at maybe a dozen requests/second. Feature development is super-slow. System complexity is off the charts.
> Feature development is super-slow. System complexity is off the charts.
That probably has little to do with language/stack and everything to do with constantly changing requirements/system growth.
To your required engineers comment, Spring Boot + jOOQ is easily one of the most productive backend stacks I've ever used. A single engineer could easily build a large API leveraging the stack.
Take a look at vert.x and Dropwizard. Spring Boot can handle maybe 10k requests per second, but Dropwizard is around 200k and vert.x maybe a million.
If you're only using them for a REST API they offer similar features. Spring Boot supports loads of other stuff you probably don't need for something API driven
I'm sure you'd need 10 or 15 bad Ruby developers as well. I've managed to kick things out the door very happily with one Java and one React developer before.
> All the big companies are using Java and Go almost exclusively for high volume endpoints
Because mature companies in competitive markets live or die by operational cost effectiveness.
Growth-phase companies, or ones with moats, live or die by other means. If they're around long enough and targeting a valuable enough market, they'll probably end up a mature firm in a competitive market eventually; but a good way not to get there is to focus on the needs such a firm would have, rather than the needs the firm they actually are right now has.
> All the big companies are using Java and Go almost exclusively for high volume endpoints
Please cite “all the big companies.” Also, exclusively? No, seriously, that is a bold claim. Google isn't one of them, because most of those hotspots are still written in C++. I don't know how you can claim this based on the occasional company blog post about changing just one or two endpoints out of, say, 100.
What about Rust? I also know companies rewrote some hotspots in Rust too.
This is true even with most SV darlings. Netflix, Google, Uber, and Amazon use tons of Java. Probably the majority of their systems.
Microsoft is an exception because they built C# which is basically a more modern but less popular variant of Java. There was even a project for a long time that let you use Java code in C# projects by converting the bytecode, they're that similar.
The notable exception is Facebook. They were stuck in PHP hell for so long that they redesigned the language to make it work.
Google has a huge amount of code written in Java, I would say the majority of their systems. Just look at their open sourced projects and job listings. Over 50% Java easily.
The new generation of Python web frameworks is pretty fast (check out Sanic[0][1] for example), Python has a huge growing ecosystem, you get rapid time to market, and if you find some part of your application to be the bottleneck, you can replace it with C/Rust, which you probably won't need to, because your company will likely never scale that far.
And reddit. And a little thing called youtube (though to be fair last I read they're offloading a lot of the hot paths to golang and c via native modules).
It's definitely Python, but definitely not Django. I recall someone explaining to me that it actually predates WSGI standardisation, so it's not even WSGI; it's all custom Python from start to finish.
My apologies, I thought OP said 'Python', not 'Django'. You're correct. Youtube has their own framework, and IIRC reddit runs a few different Pyramid modules rather than a full-fat framework.
Reddit has some scattered Engineering blogs and my insight into youtube comes from a previous HN article by the author of GrumPy (who works on youtube at Google) in case you're curious.
Can you point a clueless (about Java web dev) person to a nice, lightweight framework that's easy to learn and to set up? Please no XML configuration files and other such nonsense.
For me, as a Python/Rails/C++ dev, Java has a reputation of being too large, too complex, and otherwise... unwieldy.
Hearing things like "To test a bug I had to start 6 services on my computer and then I ran out of memory (computer had 16GB)" doesn't encourage me to try to do web dev in Java.
Not the most lightweight, but I'd highly recommend wicket running on embedded jetty, a la the second code block on https://cwiki.apache.org/confluence/pages/viewpage.action?pa... . You do still have to use XML for Maven I'm afraid (there are alternatives but I wouldn't recommend them) but it's a relatively good use of XML and you can use eclipse's GUI to add dependencies rather than adding them directly if you like.
Wicket is a true OO approach to GUI which is quite different from the page-template style of Rails/Django/..., but I find it makes for much more compositional style, with lots of small reusable components that are just compositions of a few smaller components. And while not being able to monkeypatch everything can chafe initially, when you come to upgrade to a newer version of the framework you'll really appreciate the safety a compiled/typechecked language can offer.
These days, I write most of my applications using the JDK's built-in web server. They're intended for internal use by single-digit numbers of people, and it works fine. It doesn't give you anything except the ability to serve HTTP, though.
Before that, I was writing apps using Spring Boot, which is a framework covering pretty much everything. It's easy to get started with, and requires no XML, but gets complicated fast as soon as you want to do anything even slightly unusual.
A more production-grade alternative to the built-in web server is Undertow, which again just does HTTP, but is fast and scalable, and fairly simple:
The tutorial there uses a smidgen of XML for Maven, but that's all. You can use Gradle instead of Maven, which lets you avoid XML and is generally much better, but you'd have to work that out yourself, or find someone else who has, perhaps this person:
There's also Java EE, the 'official' framework for Java. It's actually not bad to program with, but you need an application server to run your apps, and although those are miracles of engineering, the user experience is still stuck in the dark ages.
> Hearing things like "To test a bug I had to start 6 services on my computer and then I ran out of memory (computer had 16GB)" doesn't encourage me to try to do web dev in Java.
If you need to start six services, you must be doing some kind of microservice / SOA thing, and if you need >2 GB for each service, you must be using some very heavy-weight application server. Neither of those things are smart, and in combination, they're deadly!
I want an orm, a template engine, form validation, sessions and some authentication mechanism. I'm currently doing a system for a robotics competition - team registration, match results entry and point calculations, rankings and so on. Doing it in Python and Pyramid is quite easy, though sometimes I have to fight with the framework to do things the way I want to.
One especially useful feature in Pyramid (and some other Python frameworks: Flask, Django, etc.) is the debug console: when I get an exception somewhere, the 500 page shows me the call stack, and I can get a shell at each frame in the call stack, print the variables, view the last n requests, see the request variables, and so on.
Really though, you should invest some time in learning typescript and react. These days most apps are just API calls on the back end and you deal with templating, forms, and other bs in the frontend.
Because most apps are built this way now, you won't find any "modern" Java frameworks that support what you want, and you're pretty much gonna be stuck with the older clunkier stuff.
The learning curve for new SPA frontend stuff is high but I've found it much more productive now that I'm into it. With HTML generated on the server it's too damn hard to get pages to do what you want
I also recommend Java Spark for a minimalist framework. I would say it is quite similar to Flask in Python. But unlike Flask, Spark does not have a templating language. For something simple, I would recommend Freemarker.
I haven't seen an xml configuration in years beyond some logger configurations. Spring Boot + jOOQ is simple to setup and works. It is my go-to for backend services. You will have to be okay with dependency injection which for some reason people have problems with.
It's important to remember that Java has been around for 20-30 years and it certainly has not stood still. Many of the complaints people have are from Java of 10+ years ago.
>It's important to remember that Java has been around for 20-30 years and it certainly has not stood still. Many of the complaints people have are from Java of 10+ years ago.
Exactly. I spent about 5 years away from Java after J2EE made me swear it off. I learned a bunch of other languages and frameworks and recently went back. Java is awesome now. Unlike 5 years ago, it's relatively easy to avoid all the clunky old crap if you want to.
Why does anyone write web apps in Python? PHP? Ruby?
First: Because squeezing every last nanosecond's worth of performance out of your web app is actually an extremely rare problem to have. And if you truly cared about performance over programmer convenience, you'd practice what you preach and build your web apps in hand-rolled assembly, but I'd bet a lot of money that you don't do that.
The typical web application -- I'd be willing to bet over 99.999% of all deployed production web applications serving requests today -- has a bottleneck at the database and the network that dwarfs any overhead from language performance.
I remember a bit over ten years ago when there was debate in the Python web world about which templating engine to use to generate HTML. And people argued endlessly over microbenchmarks of them, to figure out which was fastest, but I remember one blog post which showed a pie chart of time spent in the average request/response cycle. Nearly all of it was accessing the database, and template rendering was a tiny, tiny sliver, so the author humorously labeled it "obviously this is the part we need to focus all our optimization work on". Language choice is similar.
Second: Because what else your company does matters. Where I work, web applications are how we expose data and interfaces to that data. But there's a gigantic stack behind that, of data intake, data parsing, data processing, analytics, the whole nine yards. It's all in Python, because Python has hands-down the strongest ecosystem of any popular programming language for that stuff. So the web applications which serve as the interfaces to the data are also written in Python; it means we have one language to worry about, one language to work in, one language every software engineer knows. I've been pulled onto projects doing things that didn't involve web applications at all, and I've been able to be productive because those projects were still in Python, and I could read code and get up to speed on what was happening, and take care of mundane things for a more domain-experienced person whose domain expertise was then free to apply to things I couldn't do.
Third: Because programmer convenience really and truly does matter. When I first started doing this nearly twenty years ago, people posted comments like yours, incredulous at the idea that someone would use PHP or Perl given their performance characteristics compared to Java (or C -- plenty of web apps used to be written in C!). But even then we knew: servers are cheaper than people. The average salary of a quality software engineer (or "web developer" as we were known then) would buy you a lot of compute time, either on your own (in-house or colo'd) metal, or nowadays on someone else's cloud.
So you choose based on convenience to humans. PHP, for all its faults, was an incredibly convenient language to write web apps in, and compared to the usual CGI model that preceded it, was a breath of fresh air when it took off. Today, frameworks written in Python, Ruby, PHP, etc. are similarly in a good position compared to more heavyweight things like the Java world (which for better or worse is still suffering the lingering effects of its mid-2000s "enterprise" reputation), or even Go (which is still young and still seems to come up short, both language- and ecosystem-wise, on some of the things actual working web programmers want; in particular, programmer-friendly ORMs in a statically-typed language really, really want generics or a good equivalent, and Go's historic attitude toward that has not been great).
> And if you truly cared about performance over programmer convenience, you'd practice what you preach and build your web apps in hand-rolled assembly
The performance gain from Python/Ruby to Java/Go is an order of magnitude larger than the performance gain from Java/Go to assembly. The productivity loss from Python/Ruby to Java/Go is an order of magnitude smaller than the productivity loss from Java/Go to assembly.
Therefore, going from Python/Ruby to Java/Go might be a good idea for web apps, but going from Java/Go to assembly is virtually never a good idea for web apps.
If you're disregarding programmer convenience in the name of performance, then do it. But once you make the decision to trade off some performance for convenience -- no matter how much or how little you give up/get -- you start losing the high ground for preaching to others about how they should refuse to trade what you see as inappropriate amounts of performance for convenience, because now you're just arguing matters of degree and unquantifiable personal taste.
> has a bottleneck at the database and the network that dwarfs any overhead from language performance.
Or a bottleneck in a process which can be optimised into better data structures. Changing the language can only lower your constant factors; it will not change the algorithmic complexity on its own, so it should really be the last resort.
Bottleneck at the DB/network is BS. Cheap AWS boxes have 200Mbps of throughput, and you'll basically never saturate that with any of the slower languages on a box with something like 1 vCPU and 1GB of RAM. Slightly more expensive boxes have 1 gigabit, which means many thousands of HTTP requests per second; even Java/Go struggle with that load.
I have seen MS SQL server handle over 10k database queries per second on a regular machine in a real, huge application (over a million lines). Can Python handle 10k queries per second? Definitely not. In Java or go that load could probably be handled by a single AWS box that costs $10 a month.
I have never seen a database maxed out that wasn't stuck because of table locks. I've seen applications like WordPress die at 10 requests a second.
There's more to bottlenecks than throughput. Latency also needs to be considered. A 5MB webpage will load much faster when served from a 100Mbit connection with 10ms of latency, than a 1000Mbit connection with 300ms of latency. Especially when you consider that web page requests (pre-HTTP/2) consisted of many separate requests to fetch all the different assets (e.g. CSS, JS, images, etc).
Secondly, the majority of websites that make up the Internet (and Intranets) are not serving 10k/second. They're serving a 100th of that.
Third, because your database can handle 10k queries, and your database can aggregate and manipulate data, and it's optimised to do this -- it might be a good idea to perform these functions at the DB-level, as opposed to the web application.
Finally, as stated previously, for most businesses, it's easier to buy more beefy servers, or get a more experienced PHP dev (that understands how to optimise code) than it is to find a Go programmer.
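The latency-vs-throughput point a few comments up can be put in rough numbers. This is a deliberately crude model (one sequential round trip per asset, no handshakes or slow start, numbers picked for the demo), but it shows why 30 pre-HTTP/2 asset fetches can make the nearby 100 Mbit link win:

```python
# Crude model: each sequential request pays the round-trip latency,
# then the 5 MB of page assets stream at line rate. Real page loads
# add TCP/TLS handshakes and slow start, so these are lower bounds.
PAGE_BITS = 5 * 8 * 10**6  # 5 MB total page weight

def load_time(bandwidth_mbit, latency_ms, requests):
    transfer = PAGE_BITS / (bandwidth_mbit * 10**6)
    return requests * (latency_ms / 1000) + transfer

far_fast = load_time(1000, 300, requests=30)  # 1 Gbit, but 300 ms away
near_slow = load_time(100, 10, requests=30)   # 100 Mbit, but 10 ms away
print(f"{far_fast:.2f}s vs {near_slow:.2f}s")  # 9.04s vs 0.70s
```

The fat pipe spends 9 of its 9.04 seconds waiting on round trips; the thin pipe spends most of its 0.7 seconds actually transferring bytes.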
Java and Go aren't the only programming languages that happen to be comparable to Python and Ruby, while offering good performance.
Common Lisp, Scheme, JavaScript, ML variants come to mind.
Having been part of a startup that used a dynamic language without a world-class JIT/AOT compiler (Tcl) back in the first .COM wave taught me to never do it again.
> I'd be willing to bet over 99.999% of all deployed production web applications serving requests today -- has a bottleneck at the database and the network that dwarfs any overhead from language performance
A language is not just about overhead, it's also about capability. For example, your database is slow, and your web page needs to render by executing 20 queries. Will your programming language allow you to parallelize those queries? While your database/network may have high latency, they can still have good throughput.
This particular example is very real, and is incidentally relevant to the original post, since Python and Go implementations have different capabilities.
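A sketch of that 20-query parallelization in Python: run_query below is a hypothetical stand-in for a real async driver call (e.g. an asyncpg fetch), with a sleep simulating 50 ms of DB/network latency. Run sequentially, 20 queries cost ~1 s; with asyncio.gather they overlap:

```python
import asyncio
import time

async def run_query(i):
    # Stand-in for a real async DB call; the sleep simulates
    # 50 ms of network + database latency per query.
    await asyncio.sleep(0.05)
    return f"result {i}"

async def sequential():
    # Each await blocks the next query: 20 x 50 ms back to back.
    return [await run_query(i) for i in range(20)]

async def parallel():
    # All 20 queries are in flight at once; total is ~one latency.
    return await asyncio.gather(*(run_query(i) for i in range(20)))

start = time.perf_counter()
asyncio.run(sequential())
seq = time.perf_counter() - start

start = time.perf_counter()
results = asyncio.run(parallel())
par = time.perf_counter() - start

print(f"sequential ~{seq:.2f}s, parallel ~{par:.2f}s")
```

The total work is the same; only the waiting overlaps, which is exactly the capability the comment above is pointing at.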
Well then, why go with any of them when Erlang and Elixir have them all beat in terms of performance AND stability? Let alone the number of concurrent processes they can handle, which none of those languages can hold a candle to. Why? :)
You're right that Go/Java have an async style similar to Erlang. But the majority of production systems these languages run today are some sort of web application. In this area, the Erlang VM holds its own pretty well, especially for websockets [1]. In that study Elixir's memory usage is higher, but total connections were almost identical to Go.
It'd be great if there were better benchmarks for common use cases of various languages. Spring on the Java side tends to be heavy on reflection usage, which is orders of magnitude slower than JIT'ed JVM methods. Benchmarks like the benchmarks game don't capture this. Still, despite that, the Erlang VM performs very well on the benchmarks game compared to other dynamic/scripting languages; often it's easily 5-10 times faster than Python or Ruby. Given the parent's comment, I'd argue many programmers who enjoy developing with dynamic languages can do so with Elixir, with performance comparable to Go/Java for high-concurrency web applications.
It's not near the bottom either, which is the point. Actually Phoenix (the only Erlang-based web framework I could find at the link) edges out quite a number of other frameworks, roughly ranking in the middle of the pack. Specifically, Phoenix performed 32k req/s, in comparison to, say, the Go-based Gin, which does 51k req/s, or an arguably more comparable setup of Python 3 Flask with a full ORM at 13k. Spring only manages 23k req/s. Nothing to write home about either way, but clearly along the lines of my argument that Erlang/BEAM can hold its own. Though some of the Ruby frameworks seem damn impressive, huh.
However, it gets more interesting if you look at the Latency tab. There Phoenix comes in with an average of 7.9 ms. In comparison, Gin averages 5.8 ms, Spring 12.1 ms, and Flask 23.1 ms on Python 3 or 14.7 ms on Python 2.
Where it's really interesting is looking at the max latency, presuming this indicates roughly how the 99th and 95th percentile measurements would compare. In this category Phoenix comes in second with a max of 22.0 ms, behind only lib-mongodb at 20.8 ms. And lib-mongodb is one of the fastest frameworks by raw req/s.
Appreciate the link to the benchmarks! Much more interesting, especially if you're concerned about max latency (and likely 99/95 th percentiles). In this case BEAM/Phoenix would let you plan capacity to minimize max latency fairly well as it appears very consistent.
But raw java servlets delivered 100+k req/s. It may mean that all this event loop/async/actors hype is overrated, and regular blocking approach can also deliver.
… and "Most (all?) large systems developed using Erlang make heavy use of C for low-level code, leaving Erlang to manage the parts which tend to be complex in other languages, like controlling systems spread across several machines and implementing complex protocol logic."
FAQ 1.4 What sort of problems is Erlang not particularly suitable for?
hi riku_iki, I just checked out the link akka.io and it says that it caters to java/scala only ... from your last statement I understood that it was meant for java and go.
Hey thanks for that link ... I was aware of go having coroutines available as part of the language but not that something similar to what erlang provided was available to the go system as well...
JS lets you share code between front-end and back-end. For example, if you're writing a networked game, you're going to need to transport a lot of state between the server and client.
I was wondering the same thing, and it frustrates me.
I think people obviously enjoy using Python/Ruby or whatever for small scripts, and by inductive reasoning they think they can enjoy programming even larger projects in these languages.
They also rationalize the slowness by pretending that performance doesn't matter or that the bottleneck is the database.
They abuse the adage about "premature optimization" being a bad thing, not realizing that while it's stupid to prematurely optimize everything, it's equally unwise to write everything in a slow language. Using a fast language is not premature optimization; it's just the baseline you need so you don't end up with a slow application in the first place.
> by pretending that performance doesn't matter or that the bottleneck is the database.
Or, you know... You can actually measure those things. And unless you're growing like crazy, or have a massive initial audience, you're likely to find that 90%+ of the app time is spent in the database and your CPU time isn't even close to maxed out.
Why would you assume people would pretend any of that is true?
Well, we've seen, for example, Twitter do that. I saw some slides[0] the other day from sometime in 2007[1] where they said they were using Ruby and spawning 180 Rails instances to handle 600 requests per second. Like, that's insane. A compiled language can handle that load with just one instance. It's nothing.
Also, if your audience is small enough that a slow python can suffice, why even bother using a database server such as postgresql or mysql when you can just use sqlite and simplify the architecture?
EDIT:
[1] It's been 10 years since 2007 but people still do this more or less all the time. Write a webapp in js/ruby/python and spawn 20 instances of the application server
Twitter's issue was more with concurrent IO than anything. This problem was solved about 10 years ago with the release of Ruby 1.9 and since then only one Ruby process per core is required, just like NodeJS. In that time most CPU intensive parts of the web stack have also been re-written as native extensions.
Contrived benchmarks are not useful. Especially when you have tiny datasets.
If you want to do anything interesting, Python/Ruby are slow as hell, which is why you cannot do anything interesting in them.
For example, while in Go you can load, say, 1000 rows from the DB and perform some data manipulation on them in code to get a desired result, you cannot do this in Python because it will be very, very slow.
So what you do is you write complicated sql queries and essentially offload all your work from the application server(s) on to the database server.
Now imagine that these rows on the database don't actually change very often. You could just load them once, keep them in memory (in a global object), and only update them once in a while (when needed). You can always do whatever search/manipulation operation directly on the data that is readily available and always respond very quickly.
This would be _unthinkable_ if you are using Ruby or Python, so instead you keep hammering your database with the same query, over and over and over again.
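For reference, the load-once, refresh-occasionally pattern described above is only a few lines in any language. A minimal Python sketch, with a hypothetical load_rows() standing in for the real DB query and an invented TTL:

```python
import time

# Minimal read-through TTL cache for rarely-changing reference data.
_cache = {"rows": None, "loaded_at": 0.0}
TTL_SECONDS = 300  # refresh from the DB at most every 5 minutes

def load_rows():
    # Hypothetical stand-in for e.g. "SELECT * FROM exchange_rates".
    return [("USD", 1.0), ("EUR", 0.92)]

def get_rows():
    now = time.monotonic()
    if _cache["rows"] is None or now - _cache["loaded_at"] > TTL_SECONDS:
        _cache["rows"] = load_rows()  # hit the DB only on miss/expiry
        _cache["loaded_at"] = now
    return _cache["rows"]

first = get_rows()
second = get_rows()
assert first is second  # second call served from memory, no DB round trip
```

In a multi-process deployment each worker keeps its own copy, which is usually fine for data that changes rarely; otherwise you'd reach for memcached/Redis.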
Simple benchmarks are a useful yardstick. I recently wrote a service in Rust/Iron which only has 4.7x the throughput of the same Ruby/Rails service. That was rather disappointing considering how much more effort is required to do it in a lower level language.
Is Python/Django performance significantly worse than Ruby/Rails? The situations you describe are things I do every day in Ruby. Getting 1000 rows from the DB and performing some operation only takes a couple of milliseconds in Ruby.
Ruby/Python are meant to be glue, and you can most certainly use them to glue together "interesting things", like image processing or audio processing in a web layer.
Memory caching rarely changed, often accessed, but ultimately persisted in a DB things like exchange rates in a global object is exactly what you do in Rails. There's a specific helper for doing it. http://api.rubyonrails.org/classes/ActiveSupport/Cache/Memor...
It depends what you're doing. If you're running a ruby web app server and talking to the database, all the io you're doing is most likely synchronous. In the one-server-per-cpu model, anyway.
It's been a while, but I thought that MRI basically slept during IO and released the GIL so that other threads could do work. In that case, you'd still be handling more requests while the IO was performed. I could be wrong. This kind of thing: http://ablogaboutcode.com/2012/02/06/the-ruby-global-interpr...
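Python's GIL behaves the same way, and it's easy to check. In this sketch, time.sleep stands in for blocking IO; it releases the GIL while waiting, so ten 0.2 s waits on ten threads overlap instead of serializing:

```python
import threading
import time

def blocking_io():
    # time.sleep releases the GIL, just like a blocking socket read:
    # other threads keep running while this one waits.
    time.sleep(0.2)

start = time.perf_counter()
threads = [threading.Thread(target=blocking_io) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Ten 0.2 s waits overlap: total is ~0.2 s, not 2 s.
print(f"{elapsed:.2f}s")
```

The GIL only serializes CPU-bound work; IO-bound request handling threads genuinely run concurrently.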
You're right that it can do async io. But you compared a framework to a language. Rust has Tokio for async - Iron is just not using it.
It's similar with Ruby/RoR - yeah, they can do async. But not on their own. With unicorn server, you still get no threading and just a bunch of processes. With puma you can do threading (async cooperative really) - as long as you keep the configuration/code within the limits of what's allowed.
And due to the extra care needed whenever you do caching/storage things, I expect unicorn is still the king in RoR deployments. (GH uses that for example)
It's definitely preemptive rather than cooperative. Ruby/Puma is actually using one OS thread per Ruby thread, so when one hits a DB call and blocks on sync IO, it releases the GVL and another Ruby thread can proceed. There's a timer thread Ruby runs and pre-empts each thread to schedule them.
Wow, steveklabnik replied to my comment! Unfortunately, both implementations of the service are CPU bound. I think we're running 25 threads in Iron but I'll have to check.
Ah interesting! With that being true, then yes, I'd be surprised that it isn't faster too.
What about memory usage? That's an under-appreciated axis, IMO: for example, all of crates.io takes ~30mb resident, which is roughly the overhead for MRI itself, let alone loading code.
Anyway, the Rust team loves helping production users succeed, so if there's anything we can do, please let us know!
Every bigger web application will start caching at some point. This is nothing new in either python or ruby. It's even well integrated into SQL access libs: python sqlalchemy http://docs.sqlalchemy.org/en/latest/orm/examples.html#modul... or just using the in memory memcache.
You will have to close that IT-only part of your brain for a moment, and keep in mind that people create software for a reason. And every nascent project lives or dies by how promptly it adapts to incorrect assumptions or changes in the environment.
Differences in hardware costs do not even enter the radar. They happen in a different universe that good decision-making completely ignores at this point.
After you have a proper solution to a problem, and enough scale so that it's worthwhile to collect those gains, you move your software to another language. It's not a big deal.
I like how for the past 10 years, everybody has been espousing language/framework xyz as being superior for the reasons you've just said (asynchronous, responses per second, memory footprint etc), but as soon as you say Java does it all better, suddenly developer efficiency is most important.
FWIW, it always seems like a mostly zero sum game to me. Whatever efficiency you gain in using (e.g.) Python+Latest Frontend Framework+Backend Framework, you lose through having to wade through yet another set of new concepts and documents for those frameworks.
> Python, Node and Ruby all have better systems for package management
Maybe they have better tools for managing package dependencies but Go doesn't have to deal with interpreter dependencies, which can be a major headache. Go also doesn't have the problem of conflicting system level packages.
Honestly no matter how good your deployment practices are, if you have to manage an application's deployment long term you're going to have dependency problems of some kind. I'd rather deal with those problems at compile time than hit an edge case during deploy or in prod.
In Python, if you have control over the target system, this is solved with virtualenv. You can install whatever versions of libraries you want in there with pip without causing conflicts with system level packages.
If you intend to ship to end-users, yeah, you are kinda screwed. You are stuck using one of the various "freeze" methods, which, in my experience, kinda blow.
We use Nix, which is much nicer than pyenv/virtualenv, but it's still a pain compared to Go. This is mostly due to Python's runtime model, and maybe also the tendency for Python applications to have very tall, broad dependency graphs.
Working with Nix is also not very easy; Docker might fare better here, but it's probably just a lateral move. In any case, Go outclasses Python on deployments. Also, my company is looking into compiling Python as a means of code obfuscation; Go compiles by default, which (modulo stripping) is probably enough obfuscation for our purposes.
I've been evaluating Cython for both code obfuscation and for using C based extensions to improve performance. Cython is pretty nice, but shipping binary only Python extensions can be such a pain, especially dealing with 2-byte vs 4-byte unicode representation issues.
Yes, it's nice to see this is now fixed. In my case I need to support Python 2.7, though, and for Python 2.7 the only workaround I know of is shipping two binaries (2-byte and 4-byte encoding) and loading the correct module at runtime.
Virtualenv is nice, but lately I've grown to use Docker containers instead. You can use the official containers for Go, Python, or Ruby; feed it the Gemfile or requirements.txt or whatever, tag it, push it, and you have a permanent snapshotted image.
It's definitely a hack, but for development environments it's quite an effective one. Which is why we actually created a (more user friendly) clone for Go development. It's called Virtualgo, and it's mentioned near the end of the article as well. https://github.com/GetStream/vg
Most of the time pip installs work fine, but once in a while after some kind of OS upgrade (in my experience, namely macOS), you run into odd compiler-related issues.
That's why a lot of teams will use either Vagrant or Docker to set up local developer environments.
I can confirm that a macOS upgrade once required using Docker to stay productive, because some C dependency we had stopped working. We have developers staying on macOS 10.10 to avoid any issues (although those are likely gone by now, it's not worth the effort to figure it out anymore).
I've occasionally had things spontaneously break. Most recently the cryptography package just stopped installing on deployment. Had to add a pip upgrade on deploy, which somehow prevented the AMI I was using from installing some of its requirements. Had to add those packages to my project requirements.
Also some of the data analysis packages don't work with virtualenv.
Most deploy scripts I've seen tend to do this. Which is insane in my opinion, but the Python toolset encourages this kind of approach by making it the default easy thing to do.
That's one of the things I prefer about Go: the compiler can cross-compile, so I can compile the entire codebase on my OS X machine and produce a single statically linked Linux binary that I can just copy to the server over scp. Deployment becomes infinitely simpler.
> Most deploy scripts I've seen tend to do this. Which is insane in my opinion [...]
Also, most programmers don't know a thing about administering servers.
Ubiquity doesn't mean that it's a proper way, as you clearly see yourself.
> [...] the python toolset encourages this kind of approach by making it the default easy thing to do.
It's not really the fault of Python tooling in particular; it's the easiest way in general. Similarly, the easiest way to deploy a Unix daemon is to run it as root, because the daemon then automatically has access to all the files and directories it needs (data storage, temporary files, sockets, pidfiles, log files). Any setup that's easier to manage requires non-zero effort to put in place.
> That's one of the things I prefer about Go: [static linking]
Static linking has its own share of administrative problems. It's not all fine and dandy.
For a number of reasons, I haven't bothered to do a complex build/deploy process. I write in python, I freeze the requirements into requirements.txt, and I type "eb deploy". Anything else is overkill for me.
And then you have different lists of installed packages in your development environment and in production, and even between two different production servers (which can silently and/or subtly break your software); you need to manually install all the C/C++ libraries, compilers, non-Python tools, and whatnot; the upgrade process takes a lot of time and can break on problems with PyPI and/or packages that have gone missing; and you can't even sensibly downgrade your software to an earlier version.
Yes, avoiding learning how to properly package and deploy your software is totally worth this cost.
Python does not really have a "build" step. _That_ is the problem.
I think Heroku also played a role in proliferating the idea that you can just push your code (from a git repo) to a server and the server will take care of deploying it. Which usually means the server installs all the dependencies from pip during the process.
When you build a piece of software on many layers of abstractions, something will eventually break and leak all the way up in a nasty way that is difficult to debug or fix.
I don't have specific examples in mind, but I've had a lot of frustrations with virtualenv.
For context, you can install opencv, tensorflow, ROS, matplotlib, and the entire scipy stack in a virtualenv, with no external dependencies, using wheels.
This means that you can generate images, train a machine learning algorithm on them, compare the results to conventional CV algorithms, and display them in an ipython notebook all from a venv. There's a huge amount of C++ and even qt integration in that pipeline, all isolated.
It won't be maximally performant (i.e., for massive training), but for that you'd want distributed Docker deploys or similar anyway.
Wheels don't completely solve this problem. For example, matplotlib, IIRC, has a dependency on libblas, a C library. This dependency isn't captured in the wheel metadata, and it's up to you to install the right libblas on the host, or somewhere it will get loaded. (Though honestly, this isn't usually an unmanageable level of complexity. But virtualenvs are not always free from external dependencies.)
IIRC, we also had a problem where a wheel failed to find symbols in an SO it was linked against. It turned out the wheel worked fine on Precise but failed on Trusty, and we ended up having to split our wheel repository because the wheel was specific to the Ubuntu release. (It seemed, at the time, that whatever SO it was linked against had made backwards-incompatible changes without changing the major version.) There are rare cases like this where the wheel is unique to attributes of the host that the wheel metadata can't capture.
> For example, your example of matplotlib, IIRC, has a dependency on libblas: a C library
And with the wheel packaging, you're free to embed that library in the wheel that depends on it. You can also not do that and rely on the system libraries. The wheel provides you a way to do what you want, but doesn't force you to do it.
There are good reasons for either of those approaches, so I'd say wheels do solve the issue.
As another user mentioned, they do, if the library maintainers take the time to do things correctly. The package I'm describing does not link outside of the virtualenv; the matplotlib, opencv-python, and tensorflow libraries include all necessary dependencies (although you have to use a non-default renderer in matplotlib, because it doesn't bundle all of them).
What you say is correct: virtualenvs are not always free from external dependencies, but correctly built wheels are. Wheels and virtualenvs aren't the same thing.
Wheel just broke something in our build pipeline. It removed support for Python 2.6 and started throwing errors. I was able to fix it by pinning Wheel, which probably should have been done originally by whoever made the build utility, but it would have been a non-issue with Go and a binary.
Well, Python 2.6 was also EOL'd four years ago, so yes, if you're using a no-longer-supported piece of software and not pinning your versions, I'd argue you're inviting issues.
Sure. But if the build utility was just some binary, then it wouldn't matter. If Go was abandoned by all maintainers tomorrow or they broke all the packages, the already built binary will still work.
Should someone have changed the Python build tool to be 2.7 or 3? Maybe, if they were bored and knew it was something that needed work, or wanted to be a good tech citizen. However, what really happened is that no one even knew what the tool was doing, just that it was part of a suite of tools in a build process, and no one would have looked twice at it ever again had Wheel not removed support for 2.6. /me shrugs.
I mean, it totally would matter if the binary dynamically linked against a file you didn't have on your system, which is exactly what happened with `wheel` (Python 2.6 doesn't have OrderedDict, which wheel now uses).
That's a different project, but I also use ROS in a virtualenv. It's a bit weird because you end up installing ROS both inside and outside of the venv (various ROS command-line tools only use the /opt/ros/... Python deps), but your actual nodes run in your virtualenv.
And ROS doesn't even need to be a wheel, FWIW; it's pure Python. But it's also just a painful thing for many to deal with, so it was worth mentioning.
> Go also doesn't have the problem of conflicting system level packages.
Sure, as long as you're not using CGO and dynamic linking. Otherwise you'll get the exact same problems as the others.
> Go doesn't have to deal with interpreter dependencies
As long as Go stays forward compatible this will hold true, but I don't think that was the point of the comment.
'go get' is a half-baked package manager, but yes, it fetches packages and resolves dependencies between them, so it is a package manager. It just doesn't care about versions. Other languages have solved the issue by requiring a third-party tool; Ruby even has its own build tool.
There is a reason why some gophers are working on an actual package manager...
> Sure, as long as you're not using CGO and dynamic linking. Otherwise you'll get the exact same problems as the others.
Not even then, to be honest.
I wrote a product that is deployed on many thousands of servers, and all of a sudden a not-insignificant population just started experiencing SIGABRTs on process start.
Turns out Amazon Linux (and some others) had removed a syscall that i386 Go on Linux was relying on (https://github.com/golang/go/issues/14795). We built our product with a manual patch against "src/runtime/sys_linux_386.s" for many months as a result, and it was really a huge headache to help all our customers.
I would be surprised if Python or even Java broke in this way, for example. I was really surprised it even happened in Go to be honest.
There are other runtime problems too, we had a weird interaction with a "cloud OS" based on cgroups (CloudLinux) screwing up our Go processes depending on how many goroutines we ran ...
I think Go is fantastic but its runtime can definitely clash with its environment ... it's not the same as a C program.
Was the system call part of POSIX or the standard Linux interface? Because if so, that sounds like the fault belongs to Amazon, not Go. The same thing could happen to the Python interpreter.
Are you telling me that Python helped you create a viable tech-oriented newsfeed and activity stream business, serving 500 companies and more than 200 million "end users"? That sounds like a great incentive for any entrepreneur to get started with Python.
You've gotten this far by using the language you've turned away from! Further, I'm confident you didn't solve every challenge you've had on your own but rather did so with the help of the worldwide community of Pythonistas who answer questions and help to solve problems.
It takes a village to raise a startup. Don't try to burn the village down after you've grown up and can afford to explore other villages.
Hi, Jelte from Stream here. The post was definitely not meant to indicate that people should stop using Python. We still use it happily for the website and I'm still a big fan of it myself.
However, for the API we've outgrown it in performance requirements. That's what the article is about, together with the things we found during the switch that we liked and disliked about Go.
I would love to hear about your experience in two years, though. I find the Go runtime incredibly lacking in terms of introspectability. That's okay if you run Google-scale operations, where you throw away the environment on deploys and don't care about individual processes; but if you're running a reasonably lean operation, the ability to see what a process is doing is incredibly helpful. That even affects simple things such as error reporting, which in Go is almost useless for debugging issues at runtime.
These are good points. Stack traces in Go are a matter of discipline, while they're free in Python. That said, I think Go brings a lot to the table even for a lean operation, more than compensating for these weaknesses. Probably the biggest advantages are that Go deploys as a single binary, and one Go process is sufficient to make efficient use of a machine's resources--no need for something like uwsgi. At our small Python shop, we probably have two full-time positions dedicated to managing our Python installation process; this could conservatively be halved with Go. Beyond that, we have a couple more full-time positions for managing the fleet of servers required to run our applications--Go could easily reduce our machine demand to 2 or 3 (including redundancy), which would probably require just one engineer to maintain, not to mention the savings on our AWS bill. These are just a couple of obvious benefits of Go, but for us this isn't enough to justify a rewrite.
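To make the uwsgi point concrete, here is a minimal sketch (not Stream's actual code): Go's net/http runs each request in its own goroutine, so a single process uses all cores with no master/worker process manager in front of it. The handler name is mine.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// greeting holds the response body; split out so the handler's
// behavior is trivially testable.
func greeting() string { return "hello" }

// hello is a plain handler. net/http serves each incoming request
// on its own goroutine, so one binary replaces the uwsgi/gunicorn
// style process fleet a Python deployment typically needs.
func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, greeting())
}

func main() {
	http.HandleFunc("/", hello)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

A reverse proxy for TLS or routing may still sit in front, but nothing is needed just to achieve concurrency.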
Hey Jelte, quick question: Did you guys try Numba or Cython?
Or did you guys figure that the language simplification that Go provides would be worth it even if the performance considerations were similar?
No, we didn't try those. Mainly because it wasn't only raw performance that we were after. We also wanted more simplicity and better concurrency than what python was providing for us.
Also, the reason we even considered a language switch is that we had performance problems related to our core design. Because of this we decided that we needed to rewrite most of our API code based on a new design. This required rewrite made us consider switching languages. Eventually we chose to rewrite our whole API codebase in Go for all the reasons in the article. (We did a small trial project in Go first to evaluate it.)
It reads like a reasonable rationale for why they switched to Go. Uninteresting maybe, but certainly no village burning. You sound indignant for some reason that isn't related to the content of the article.
You really hit the nail on the head. If someone is able to build a successful business with an easy to use language, that sounds like the language you want to start with.
Are they using much OCaml on the backend? At Facebook's size they have a whole lot of different languages in the stack. I'd be very surprised if a significant portion of new backend development were in OCaml, though.
Author here. Super cool to see this post on HN.
For those of you working with Go: one of the guys on our team wrote a little tool called Virtualgo. It's pretty handy when you're working on multiple Go-based projects (https://github.com/getstream/vg)
Thank god something like this exists. I gagged at the workspace structure when I first started Go - it gave me enough pain starting out one weekend that my interest faded away. I've been looking for a reason to try it out again and a bunch of comments here got me really excited again.
I think there's great value in code being straightforward and simple, and that is very much under-appreciated among many programmers.
A good language enables you to "compress" your code without making the code flow impossible to follow.
A bad language encourages you to obfuscate the flow of the program using wacky abstraction techniques.
Go doesn't exactly hit the sweet spot for semantic compression due to lack of compile time programming (aka generics?), but it does a good job of making the flow of the program pretty clear.
IMO Go has poor primitives that make even straightforward code needlessly complicated. Slices are probably the worst: compare Python's one-line list.insert with the append-and-copy dance Go requires.
WTF?! Isn't the point of an abstraction to make simple what is complex? Insert into a list should always be like the python example. If go's standard list doesn't have that, maybe it's time for someone to write a better library.
The insertion operation on slices is expensive and is not recommended to be used frequently.
If you use it frequently, please rethink and redesign your data structure, for example, use a list instead.
I tried using Go this weekend and basically just abandoned it when I learned that it only has generics for three built-in types; other than that, you're forced to essentially dynamic-cast everywhere.
I just don't get the appeal. Go seems a lot like what you'd get if you just removed every language feature that anyone has ever complained about; for good reason or not.
It seems like the domains where Go is used often don't make heavy use of custom containers or data structures, so that makes the pressure on the language makers lower than it would otherwise be.
I venture to say that if you are casting/converting everywhere, you are likely doing it wrong. An interface{} says nothing. It should generally be avoided. However, I do find this to be a pain when working with numbers. Floats and ints mixed up is not fun. Python is so vastly easier to work with in that arena.
In Go, if you need a container which is not a built-in map or array, you end up casting to interface{}, because that's the only reasonable thing your container can accept. This is clearly not "doing it wrong", and it's an extremely common use case.
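A minimal sketch of that use case in pre-generics Go (type and method names are mine): any custom container ends up holding interface{}, and every retrieval needs a runtime type assertion the compiler can't check.

```go
package main

import "fmt"

// Stack can only hold interface{} without type parameters, so a
// wrongly typed Push or Pop is caught at runtime, not compile time.
type Stack struct{ items []interface{} }

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

func (s *Stack) Pop() interface{} {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

func main() {
	var s Stack
	s.Push(42)
	n := s.Pop().(int) // runtime assertion; .(string) here would panic
	fmt.Println(n)     // 42
}
```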
Thank you for the insights on Go and how it has added value to your organization and enabled a better development environment! I love hearing viewpoints on the tradeoffs and general value of new(ish) development technologies.
I'd also like to point out that I write this as a Python-phile who has come to the understanding that Python is great until it isn't. When Python becomes too burdensome, the understanding of the problem delivered by my Python sketch helps narrow the focus of what I need to solve the problem efficiently. By that time, Go has never been the right tool for me. I'm comfortable writing solutions in C, Rust, and Java; perhaps this is my problem.
Whenever I read or write Go, I don't understand how it captured the mindshare it has, especially in the face of the other technologies available. Go seems like a timid step forward in describing computation when there are bolder choices that deliver valuable additions to my development ecosystem. I appreciate that Go places a premium on simplicity, but with languages like Rust making big steps toward static guarantees at compile time, the guarantees Go makes seem pretty weak. And when I make the leap from Python to Go, I admittedly don't understand the allure. Static compilation is really nice! Statically linked binaries for distribution are very nice! The tooling is out of this world; hands down, Go has one of the best out-of-the-box toolchains I've ever encountered. But when it comes to helping me think about computation, Go doesn't.
Go's concurrency primitives don't feel like much of a step forward compared to C. I know, I know, this is crazy talk; I feel like I'm missing some great wisdom about Go, I truly do. Concurrency is really hard, and Go has clearly helped a lot of people minimize concurrency problems. But whenever I use channels, I'm reminded that Go does not let me shut my brain off, and whenever a deadlock bug strikes, I feel I might as well be writing C.
I don't mean to disrespect Go and the great things it does; quite the opposite! I'm hoping Go delivers a bulletproof abstraction for concurrency. Until then, I've become too familiar with other tools to justify using Go for more than incredibly niche use cases. I really hope that changes (both my mindset and Go)!
> I appreciate that Go places a premium on simplicity, but with languages like Rust making big steps toward static guarantees at compile time, the guarantees Go makes seem pretty weak.
Rust's static guarantees come at a price: You have to learn Rust to take advantage of them, which is apparently no mean feat.
> I am hoping Go delivers a bulletproof abstraction for concurrency.
Don't hold your breath. Go has always been and will always be about “good enough”, not “bulletproof”.
>Reason 3 – Developer Productivity & Not Getting Too Creative
There is a time and place for these tools. A part of managing a team is ensuring there are good practices around "getting creative" and that there is a clear rationale. Python's metaprogramming came in handy for helping us provide a high level syntax to work with our data model.
It's not just metaprogramming, it's a lack of standard types (like sets) paired with a lack of parametric polymorphism cough yeah, generics cough that makes writing some data structure manipulations feel sort of counter-productive.
Go makes up for this by making other stuff easy and fast to code.
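The missing-sets point in practice, as a small sketch: the idiomatic Go substitute is a map with empty-struct values, which costs a few lines of ceremony per use instead of a built-in type.

```go
package main

import "fmt"

func main() {
	// Go has no set type; map[string]struct{} is the usual stand-in.
	set := make(map[string]struct{})
	for _, w := range []string{"a", "b", "a"} {
		set[w] = struct{}{} // duplicates collapse automatically
	}
	_, ok := set["a"]
	fmt.Println(len(set), ok) // 2 true
}
```

The empty struct occupies zero bytes, which is why it is preferred over map[string]bool when membership is all you need.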
I'm happy that Python's metaprogramming came in handy for you and your team, but Python's propensity for DSLs is one of the reasons I loathe it so much. I guess in the context of data science, a DSL makes sense, but it's a nightmare to debug.
> I guess in the context of data science, a DSL makes sense, but it's a nightmare to debug.
What do you think struct-tags are? it's not like Go is free of DSL either. In fact the std lib itself uses them. Let's not pretend Go is without fault on that matter.
I would agree with some of the points - speed (which is stretched over two points), concurrency, and the ability not to do magic as easily, but several of the points (compile time, available developers, strong ecosystem) are not a "versus Python" at all but rather against other languages.
Personally, I see a language like Go and Python as solving different spaces. I wouldn't write a lot of website business logic in Go, and I wouldn't write a low-level TCP redirection daemon in Python.
Yeah there are some use cases where Python is a clear winner. For us Python seemed like a good fit initially. Traffic was low and the API didn't provide more advanced features where Python's speed becomes an issue. (Ranking, aggregation)
If you can switch from Python to Go, you weren't really using Python anyway; you were using Gothon or Javthon.
If you want static binaries, great tooling and an excellent imperative and functional language, try F# with mkbundle. The compelling reasons to use go are shrinking.
I'm hacking on a PureScript-to-Go transpiler (i.e., an alt backend for the existing PureScript compiler, once PR #3117 gets accepted). Watch http://github.com/metaleap/gonad/ --- hoping to "get there" within 2 weeks. It's taking unreasonably long already, given that purs already does jobs like parsing, type-checking, and transforming ML to imperative on its own. One reason I'm taking a bit longer than the quickest-dirtiest approach would: I want to preserve type information as much as possible rather than have all functions accepting, returning, and passing `interface{}`s in a JS-like manner. I.e., it's meant to generate sane, readable, human-like, idiomatic Go code: type classes mapped to interface types, etc. I'll also have to attack the need for any Go package to be readily represented on the PureScript side as an existing module that shows up in auto-complete and passes compilation, so I'll quickly auto-generate some kind of dummy FFI bindings whenever a Go-land dep is imported and used. Fun ride!
There's clearly value in combining Go's compilation speeds, stdlib functionality, rich ecosystem, lean fast binaries, GC etc with a rich and cutting-edge Haskell-ish/ML-ish type system (and the leaner syntax and compressed idioms). Will be great for clear thinking and expressive high-level type-driven dev and DSLs (and naturally, implicitly generics/code-gen haha), without having to wrestle with GHC/cabal/stack build annoyances/times and Haskell's whacky academic overly-PhD-ish "wrappings" around raw straightforward real-world needs such as http-server and db-client, where again the Go ecosystem shines.
I don't get how low latency GC can be a good selling point.
I mean, sure, Haskell's GC is a big downside, but if latency is so important that you have to compare your GC's performance against other languages', why not go with a language with no GC for once?
Because once you start to look at the details, you will need all of them, and how deterministic is Go's GC performance? What's its 99th percentile? What's the 99.999th percentile? Will any of those change in a future version?
Games. I like low latency and garbage collection. I feel like, hey, it's 2017, those things (along with performance within a reasonably small multiple of C's) really should go together.
120 fps is a measure of throughput, not latency. A high frame rate is good, but it's no guarantee that you're not going to drop a frame. Dropping a frame kills the experience, especially in VR.
In any case, .Net (and I gather Java) games typically go to considerable trouble to reduce GC pressure by pre-allocating memory pools and reusing them during execution. For certain kinds of games, like where you load everything in a level up front and don't deallocate until you finish, that can be fine. For other kinds of games, that's a nasty constraint to try to deal with. It ends up bending your architecture in ugly ways that make it hard to develop the game.
If you want a nice ML with low GC latency, how about Rust?
(Did you look at tuning the GC? As a Scala guy, I know some people write off the JVM because the default GC settings are optimised for throughput rather than latency (whereas Go does the opposite), when often their requirements are comfortably within what the JVM can do when suitably configured. That said, I've heard the CLR is less tuneable.)
Rust is too complicated for me to learn and be productive with in a reasonably short period of time, and the CLR is not good at keeping GC pauses down. :-(
Instead of all the 'we replaced x with y', I'd like to for once read something along the lines of 'we started using x and y side by side, embracing the benefits of each'
I use both Python and Go regularly and the use cases rarely overlap. They are both fantastic languages and once you realise what you want to use each one for, you'll be one happy developer. (Applies to many other combinations of languages.)
Right now, Docker, K8s, and their adoption by Microsoft, Oracle, and IBM are what keeps me keeping up with what goes on in Go, because "hey, we need to write some stuff in Go here".
For me Reason 3 is exactly the opposite, if I have to spend time manually writing code that other languages give me for free, I am anything but productive.
Performance shouldn't be a compelling reason to change the language for a project. If they have truly benchmarked, profiled, reworked hotspots using Cython and looked into major bottlenecks such as IO and Python still isn't performing up to scratch then fair enough.
Java is neither a simple language, nor does Java code compile quickly, in my experience.
However, the largest disadvantage of Java is that Java programs generally consume more than 10x the memory of comparable Go programs, and need 20x more time to fully warm up.
I do admit Java is more consistent than Go from a syntax-design point of view.
Now, if you mean "simple" as in, needs an IDE to figure out what other languages figure out at runtime/compile time automatically, then yes, it's a dumb language.
Not to mention the standard library, which is a jigsaw puzzle of barely fitting parts that need to be connected in non-obvious ways to work, made by people who want to prove they know several design patterns and can apply them to any situation, instead of something rational and straightforward.
> Go forces you to stick to the basics. This makes it very easy to read anyone’s code and immediately understand what’s going on.
I think all new language design must incorporate this goal going forward. It’s been simply too valuable. When I see something introduced that does x, y incrementally better but makes no effort for the above goal I get sad and move on.
It’s easy to get caught up in the technical challenges of a language, but this meta challenge is what really helped move the ball forward with Go, and I’d love to see it introduced as a first-class goal in new language design.
Some people insist on being obtuse (coincidentally, I'm looking in the general direction of a template-heavy C++ user), and for better or worse, personal preference is a large factor in language adoption, as opposed to what works best for the industry as an engineering practice.
I think it depends on the use case. For individual component programming, sure: pick a hammer and make everyone use it. But for systems programming, being able to be multi-paradigm is a huge benefit. TIMTOWTDI ;)
I can't help feeling that in Go, performance is more important than developer productivity. Go is the new C; the code looks a bit clunky, with lots of error checks.
I'd rather work in a language like Python or Kotlin, where the code looks the way you'd imagine it if you wrote it in pseudocode on paper.
However, Go will really be nice in big teams. That's why they invented it in the first place.
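The "lots of error checks" look, as a sketch (function name and paths are mine): every fallible call is followed by an explicit `if err != nil`, which is verbose but keeps the failure path visible in the code flow.

```go
package main

import (
	"fmt"
	"os"
)

// readConfig shows the explicit error-handling style: no exceptions,
// just returned errors checked and wrapped after each call.
func readConfig(path string) (string, error) {
	b, err := os.ReadFile(path) // Go 1.16+; ioutil.ReadFile before that
	if err != nil {
		return "", fmt.Errorf("reading %s: %w", path, err)
	}
	return string(b), nil
}

func main() {
	if _, err := readConfig("/no/such/file"); err != nil {
		fmt.Println("error:", err)
	}
}
```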
I'm surprised to see that people choose anything other than C++ if they care about performance. Are you really trying to profile and optimise Python and Go? It will never be worth it! Just write the same thing in good modern C++ and you get an automatic 100x speedup for most cases. Then optimise to reach the absolute limits of the hardware. Python and Go can't compete, it seems!
Some people choose Go rather than C++ for the same reason you went with C++ rather than an assembly language.
Most of the time, things only need to be fast enough and trading speed for ease of development, deployment, and maintenance is an easy decision to make.
Well, it's Go, so you're going to have some runtime type errors. interface{} is way too common to avoid them completely (I had the order of arguments to two ConcurrentMaps wrong, for instance, and it blew up on reads; I found it almost instantly, but alas, at runtime). But it should be a much smaller number than in Python.
That's a very good thing for a language to have, indeed. But Go isn't the best thing to look into for productivity-via-the-type-system. What is (barring the purely functional languages' learning curve) is almost any ML descendant; Rust in particular if Go-like performance is desirable.
In addition to typeclasses, Rust has discriminated unions (ergo algebraic datatypes), structural pattern matching, first-class functions, local type inference, and an exclusive/nonexclusive dichotomy of references which serves as a useful proxy for mutability.
people who use python more aren't as happy about it, otherwise mypy wouldn't exist.
i use python more, can't recommend mypy enough, it's great for what it is. it has warts, but that's to be expected if you add an optional strict modern type system to 20 years of dynamic language.
A large part of why mypy exists is due to cargo-culting groupthink that leaves the 'static typing > dynamic typing' trope less challenged than it ought to be. Static typing is just not as useful as the glossy claims - and the boilerplate cruft incurs way more friction than typists like to admit.
mypy is great. I use it in 'public' APIs, tricky parts where context alone is inadequate, and where I previously fucked up -- the latter is usually a signal that the flow or abstractions can be improved, often to the extent that I can remove the type hints.
mypy helps me, it does not force me to throw a goat into the lava pit to appease the compiler.
Mixing untyped libraries with typed code basically makes the whole monolith behave as if it were untyped. Once the major libraries are typed, Python projects will become much more robust.
A nice write-up generally. I just feel he should also mention the classic static vs dynamic typing thing. From my perspective, teams switching to Go — from a dynamically typed language such as Python — really appreciate the additional compiler help it provides ;)
I've done a fair bit of web dev work in Python but I am currently building my startup (https://www.growthmetrics.io) in Go.
It is in the initial stages, but so far it has been pretty good - both as a learning experience and from a developer productivity perspective.
I've also open sourced a web app boilerplate that I extracted from GrowthMetrics' codebase - https://github.com/olliecoleman/alloy. It might be useful for people looking to get started with Go.
Doesn't sound too fast: that's about how long it takes for the FreeBSD kernel to be compiled[1], and I would imagine a kernel is more complex.
The whole system takes about 50 minutes[2]
Go's main compilation performance gains come from incremental compilation. Since C++ doesn't have real module support, merely parsing the entire dependency graph from headers can take a very long time. In Go, if you have .a packages for all your dependencies, your program will compile very fast.
If you use Docker as part of your CI, build the dependencies as a separate step so they can be cached between builds. This is especially important for libraries like sqlite, which can take a lot longer to compile.
That build log for CockroachDB takes a lot longer than a normal build, because it's generating releasable artifacts for multiple platforms, which means running the entire build process multiple times. Also, the build time is dominated by a few vendored C++ dependencies (mainly libprotobuf and libsnappy) which are built using CMake.
On my system, building just the Go code for the latest version of CockroachDB (about 360kloc) takes 24 seconds.
The only thing actually missing from Go's standard library that I've wanted is an abstraction of 'native' UI things (assuming you're compiling for a graphical platform).
For portability, there is shipping your own widget set (Qt is OK for this if your licensing needs are compatible with one of its license options).
My experience with the language (building multiple production systems) is that the language itself is sort of its own framework. You don't need to add much beyond the standard library to build an application, which is a huge plus and makes things simpler.
That applies to all languages with exception of C, because they didn't want to standardize it as part of the language, hence we got ANSI C + POSIX instead.
Don't forget Javascript which is ridiculously under-featured natively. At least C has the excuse of having to run on the bare metal of a wide range of architectures.
Go provides built-in functionality to develop web applications: things like template rendering, routing, a generic SQL interface, etc., which most languages' standard libraries don't offer. Python, OTOH, requires libraries like Flask, Jinja2, and SQLAlchemy to provide the same functionality Go provides by default. That's my whole point.
> Another great aspect of concurrency in Go is the race detector. This makes it easy [emphasis mine] to figure out if there are any race conditions within your asynchronous code.
It can be a bear to track down some race conditions, but at least it will not report false positives. However, it will not catch all of the race conditions or data races you may have.
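A minimal sketch of the kind of race the detector flags, assuming the classic shared-counter pattern: with the mutex the code is correct; delete the Lock/Unlock pair and `go run -race` (or `go test -race`) reports the unsynchronized access at runtime — but only if the racy path actually executes.

```go
package main

import (
	"fmt"
	"sync"
)

// count increments a shared counter from n goroutines. The mutex makes
// the increments safe; without it, the writes race and the -race
// detector flags them when the program runs.
func count(n int) int {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			counter++
			mu.Unlock()
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	fmt.Println(count(100)) // 100
}
```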
Is Go even memory-safe in the presence of data races? Don't get me wrong, “memory safety” isn't a hard requirement in my book, but “memory safety or a formal semantics” is.
My English reading comprehension is admittedly poor, but I don't see any portion of the text that automatically implies that unsynchronized reads and writes are atomic.
I use it at work, and it is awesome. It has awesome error handling (Either/Option >>> exceptions) in my opinion, and a much stronger type system (many errors are caught at compile time). It is harder to learn, but you also get a top-notch package manager, plus the even bigger Scala + Java ecosystem.
The only solid reason this article gave was the second one, and really, it is the only one that would ever make sense: if the language you're using becomes the bottleneck then, by all means, change it[1].
Everything else is fluff and opinions. Every single other reason is entirely subjective and/or something Python already has — and for at least one "reason", the article itself acknowledges Python as the better option.
Well, at least they tried to be impartial by mentioning the absolutely odious error handling in Go. It's the biggest reason I became so disillusioned with the language once I actually tried it.
[1]: I, however, would try switching CPython for PyPy before rewriting the entire architecture from scratch. It's more performant.
I don't think that performance is the key reason to switch from Python to Go (although it is nice): as someone who switched from Python to Go, what really made it worthwhile for me was knowing that my code is significantly more likely to be correct when it runs. Go's typing is much nicer than Python's, for this use case.
Type errors/misuses aren't my biggest problem and Go's type system isn't very powerful. The Python code is smaller, more expressive, easier to review, etc. I have more confidence in the correctness of pylinted Python code. I wouldn't jump to trade conciseness for compile-time type safety, particularly for applications, because it's easier to create/not see a logic error in heavy code. And depending on what you're doing, Python's type system may be more powerful/appropriate.
I hear that. I work in Python, but prototype most things in Go, simply because many classes of problems benefit from the additional structure, and a compiler that can reason about that structure. Also Go has excellent tooling, it has great libraries, it compiles to static binaries, and anyone can read it.
I'm hopeful about Python's type checking though, but so far it leaves a lot to be desired (absolutely no recursive types, lots of bugs, strange restrictions, confusing for variables, etc).
That's subjective, and part of the sixty-year-old flamewar known as "dynamic vs static".
You like to write the type of your variables next to them? All the power to you. I don't, or, more exactly, I prefer the little extra flexibility of not having to do that.
It's subjective. The big benefit I see with Go is that you get a fast language (comparable to C++ and Java) that is still very productive to work with.
We actually did try pypy for a few parts of our infrastructure. It's better, but for what we tried only 2x faster.
You had me at 4. I'm definitely GOing to give this language a GO now. That was a GOod introduction to the language, a veritable advertisement, if you will.
Case in point. Thanks. Also, the moderators seem to have no life. Please deactivate my account if you're offended. I'd prefer not to spend more time attempting to communicate with one dimensional, armchair experts.