"... as a Python programmer, I was the member of an elite cabal of superhuman ultranerds, smarter than those childish Rails/JavaScript/PHP/whatever developers that couldn’t write a bubble sort or comprehend even basic algorithmic complexity, but more in touch with reality than the grey-bearded wizards of Lisp/Haskell/whatever that sat in their caves/towers/whatever solving contrived, nonexistent problems for people that don’t exist, or those insane Erlang programmers who are content writing sumerian cuneiform all day long."
I read that quote from the article as essentially self-parodying. The author is making fun of himself and the general attitude he--and other Pythonistas, I imagine--held.
Then again, maybe I'm reading too much into it. For the record, I think Haskell is a strictly superior choice to Go, so I'm what he would probably call "gray-bearded" :P. (And I'm certainly working on a beard, even if it shouldn't be gray any time soon.)
no, you got it. It's not really meant to be a criticism of Python itself, it's just behavior that is common among 20-something web developers, which is representative of some areas of the industry, and is specifically the target audience for the post. You know, the kind of developer that is advanced enough that they can build non-trivial projects, but not experienced enough to have wisdom forged in the furnaces of years of industry experience (that's ... me and a lot of other HN users). The same type of attitude can be found in all languages and isn't specific to the Python community; the specific swipes at other languages are just what you get if you apply that attitude to Python. Judging people by the tools they use is a pretty useless metric.
>For the record, I think Haskell is a strictly superior choice to Go
I'm pretty sure it's the best programming language, that's why I put them in the gray-beard category. Or maybe it's OCaml or some Lisp dialect; it doesn't really matter. It's even why I further apologized to Dons and acknowledged that Haskell programmers tend to be able to solve harder problems than other programmers I know, but even after I twice acknowledged the superiority of Haskell, he basically just looked at me and said "you're a stupid idiot, and by the way, look at how many numbers I can compute with my superior Haskell program that runs on a lot of machines". Uh, ok, that's cool... I guess.
But every time I sit down with Haskell programmers and they explain to me how elegant their code is, I just don't really care. It's not what makes me interested in computer programming. I don't really care to use the best programming language; I really only care to make the best programs. PHP (and other shit technologies) have shown us that you don't need to understand monads to make people happy.
A friend of mine, who is a very strong programmer that I have a tremendous amount of respect for professionally, once showed me how to use Coq to prove that multiplication is commutative. I found that very boring, and it didn't make me want to learn Coq. Mind you, I spend between 40 and 60 hours a week working with software. I'm in it because I like to build things that are useful to people, and I think software is a great creative medium. I think that's a fairly common attitude.
Every time I say the name Haskell I am sorry for it, because every time I say the name Haskell I am met with bitter condescension. It makes me sad that despite talking about having experience with seven programming languages in my post, or mentioning having built a variety of things that are very different, Haskell programmers consistently treat me like I am a stupid person. So I'm glad you're not like that, and I'm glad that you understood I didn't mean to dismiss Haskell on technical grounds. Thank you for your comment.
For what it's worth, the reason I prefer Haskell is exactly what you're interested in--I want to make the "best" programs. The elegance of the language is just a proxy for this.
Out of all the languages I've tried--and I've tried a decent amount--Haskell is by far the most productive. I can write my programs faster and they come out shorter, more readable, more maintainable and easier to test. I've found it far easier to go back to old Haskell code I've written than old Java or JavaScript or Perl or even Python code. All this without significantly sacrificing performance.
I've also found Haskell far easier to refactor. In most languages, my projects' code size goes up monotonically; in Haskell, it isn't rare for me to both add a feature and make the code shorter! This was a surprise the first couple of times, but now I almost expect it.
Also, critically, I've found Haskell's advantages scale superlinearly with the complexity of the problem. That is, the harder a problem is conceptually, the bigger the advantage of using Haskell over another language. Haskell actually helps me think about the problem, even if I don't want to write a program for it. I've certainly found certain tasks far easier even in other languages by thinking in Haskell terms about things like nondeterminism.
This may seem counter-intuitive, but I've even found Haskell to be very good for prototyping. Once you get used to the slightly different style of thinking it requires, the type system actually starts helping you develop solutions quickly. I'm sure other languages may be better at this than Haskell, but Haskell really shines when it's time to take your prototype and transition to a solid piece of software--it makes the refactoring much easier and makes even significant architectural changes more approachable.
The real problem is that Haskell, for whatever reason, has the rather unfair reputation of being impractical. Really nothing could be further from the truth--yes, there is plenty of theory and research in the language, but this is not there just for fun: it actually makes the language better to use, even for more mundane tasks!
You definitely don't need to understand monads to make people happy, but it certainly helps. In practice, as PHP has shown :P, you really don't need to understand much of anything to do awesome stuff. And yet we all still heavily recommend encapsulation and testing and code reuse and so on; the Haskell philosophy is just a systematic extension of this.
I think Haskell's widespread reputation as just an academic curiosity makes some people a little defensive--and well it should! But never take it personally. It's just a little annoying for an immensely practical and productive language to be cast aside simply because it derives its efficiency from a well-founded theoretical basis.
It's actually kind of bizarre that so many Pythonistas actually have the kind of attitude that the article mocks, since Python is a thoroughly mediocre language. I can only attribute their patronizing disposition to a lack of genuine awareness of the broader programming world.
How many Pythonistas? I would like you to substantiate this claim. I would assume that you actually don't have that much contact with the Python community, because otherwise it's hard to account for how I have never encountered that attitude in all these years. And I am a grumpy person, and not without my own axes to grind about the Python community.
I can relate to what he said; like the grandparent here, it made me laugh.
I wouldn't say I espouse the view presented (I have too much respect for what's been achieved in the other camps, such as Ruby and even PHP), but internally, yeah, I think I do view the world that way.
I don't think it's any different from the common "anyone driving slower than you is a dawdler, anyone faster is a maniac" - a patently absurd view to hold.
The enlightening part for me is that other Pythonistas think this way, because, as you said, I have also never experienced this view expressed publicly by the Python community.
Is there anything to add? It's not very fast. It does not have any interesting abstraction. It doesn't push the envelope further. It doesn't allow any clever optimizations. It has a bunch of horrible gotchas (scoping, mutable default arguments, tuples of one element...). It isn't elegant--the grammar alone takes pages and it's just a loose collection of mostly orthogonal features with largely arbitrary (but, admittedly, readable) syntax. It is fairly verbose and writing most anything takes more than one line. It pretends to support functional programming, but only grudgingly and never well. It has OO support, but also rather half-heartedly compared to Self or Smalltalk. It has a completely arbitrary and unnecessary delineation between statements and expressions (this one really annoys me).
In short, it's a language best defined by what it isn't: it isn't bad, but it also isn't good.
It isn't particularly elegant, but it isn't particularly ugly either. It isn't exceptionally expressive, but it's also no tarpit. It isn't very concise, but it isn't quite overly verbose.
It is the very essence of plainness. If languages had a color, Python's would be a dull gray.
An interesting thing to note is which communities do not like Python. For what it's worth, the least Python-endorsing site I've found is Lambda the Ultimate. That should definitely tell you something, although I suspect what it tells you depends entirely on your own preferences :P.
Python really is nothing but mediocre and uninspiring. Sure, it isn't bad, but I would like a language that is actively good.
The very interesting thing about all this is that, quite often, you need a mediocre language. So Python is actually a great fit for a bunch of startups--it's the modern lowest common denominator. But I--being deeply interested in programming languages--wouldn't want to work anywhere that settled on a language that way.
There is no 'horrible gotcha' with tuples of one element. Have you even tried generator expressions? Do you have any specific performance problem, or do you just suppose that you couldn't write sufficiently performant code in Python?
Overall, from your post, I don't believe you have used Python seriously.
I don't care what color Python would be, nor would I care what ice cream flavor it would be. Nor do I care whether it is cool on Lambda the Ultimate. Is this grade school?
If you are looking for a lowest common denominator, I think the language best fitting that description is C. I do not say this to damn C, which I rather like in its way. The core language is relatively simple and portable and is the closest thing that practical programming has to a lingua franca (not talking about academic stuff like ML).
If you can't understand why anyone would like C then it isn't at all surprising that you wouldn't understand why someone would like Python.
Use whatever you want for yourself, but let's be clear that this is a matter of you being too cool for Python, not of you having any real reason why it is a bad tool.
For what it's worth, I've used Python professionally and in several classes. I certainly don't have much experience in it, but it would be odd if I did--why would I use a language I don't particularly like that much?
You only say there are no horrible gotchas with tuples because you've never spent hours hunting down a bug and finding that you forgot a trailing comma. More pertinently, changing something like
("foo",
"bar",
"baz")
to
("foo")
actually breaks the code. This style is used often in configuration scripts (e.g. for Django) and certainly got me before. I don't really see what generator expressions have to do with anything.
The performance problem isn't a problem per se--the performance just isn't good. But this does not mean it's bad! That was really the whole point of my post; the performance, like everything else about the language, is unimpressive. In my experience, even naively written Haskell tends to perform better while being shorter and easier to write.
My point with Lambda the Ultimate was more general--I've noticed that the more programming language oriented a community is, the less people like Python. The people who spend the most time thinking about and working on different programming languages--the denizens of Lambda the Ultimate--are the ones who like it least. It's very much like music that musicians don't like.
What I meant by lowest common denominator is that you could get virtually anybody, regardless of skill level, to use Python. C is far more tricky--and has far more odd edge cases and ways to shoot yourself in the foot--than Python. It trades this for lower-level hardware access and performance, but this makes it far less of a LCD language than Python.
This is not a matter of "cool"--and it's not even a matter of Python being a bad tool. It's a matter of Python not being an outstanding, or, honestly, even a good tool. And, as I said, this is perfectly fine; there are plenty of reasons to choose a solid tool you know well over something that may even be strictly superior. I can even understand why people would like this; many people like something approachable and bland over something seemingly exotic but also exciting.
But this does not fit with Python's general hype. People trot Python out as the gold standard of language design and are rather proud of themselves (as the post starting this whole discussion parodied) without having the substance to back that attitude up.
And all this goes back to the original thesis: Python is not actively bad, but it is also not actively good--it occupies a rather uninteresting middle ground. This is only surprising in light of its somewhat incongruous reputation as one of the best languages in certain circles.
People who like programming languages are not the intended users. People who like programming are, and in my experience there is rather less overlap between these two groups than you might think.
I'm not clear on what in my comment you're saying "no" to. isinstance(("foo"), tuple) being False is not a "rough edge" (or if it is, then anything people are ever confused by is a rough edge). If it were True, that would also mean you couldn't use parentheses to set the order of operations, since items inside parentheses would be tuples. That would be nuts.
IMO this is just an educational issue where people aren't taught that the comma is the tuple operator (e.g. `x = 3,4` is perfectly legal) and parentheses just come along for the ride to disambiguate other uses of commas. If I were king, tuples wouldn't have the same syntax as argument lists (`f((3,2))` is pretty ugly), but both Python tuples and Python argument lists conform to a lot of people's expectations of how both those things should look.
The fact that duck typing makes tuples and literals (temporarily) indistinguishable to the interpreter is more of a rough edge of duck typing itself. Duck typing can result in tricky bugs in many scenarios well beyond singleton tuples.
Sorry to belabor the point, but this is exactly what parent was saying.
> changing something like
> ("foo",
> "bar",
> "baz")
> to
> ("foo")
> actually breaks the code.
Of course it breaks the code because the first thing is a tuple and the second is a string. He's implying that the second thing shouldn't be a string. I'm saying that's nuts.
No, he's saying that the operation you perform to truncate the former to the latter is the operation you would logically expect to perform (i.e. removing the trailing comma), which makes the thing a string.
He's not actually proposing that a string be a tuple. He's just saying that a single element enclosed in parentheses doesn't work the way you'd expect it to.
I like how after all these paragraphs you've managed to avoid having to offer up your own personal example of a "good" and "outstanding" language, but have no problem labeling other languages as NOT being such, with very poor examples :P
You should get an achievement of some sort. Hackernews needs achievements.
Why were the examples poor? Also, if he'd said "Python is mediocre, and X is fantastic", the discussion would've moved to X. He was right in not doing that.
There are no perfect languages. They aren't a bag of individual features, but platforms these days. Taken as a broad platform, the python ecosystem is up there with the best, warts and all. Why, because others have even more.
No offense, but your youth is showing--it appears you haven't used it long enough to fully understand it (gotchas), you have unrealistic expectations of purity, and you aren't very aware of what came before it. Yet you still have strong opinions... Like it did for most of us, experience will beat you down; give it time.
I wrote a few of these stinkers a decade ago and would laugh at them now.
> In short, it's a language best defined by what it isn't: it isn't bad, but it also isn't good.
Exactly, as you point out in the end. It's a language for people who are deeply interested in getting shit done. Rather than people "being deeply interested in programming languages".
You are acting out the 2nd (tower type) stereotype so well I can almost believe you are a parody.
I don't always code Python, but when I do I don't complain much about these issues :-).
I think everything you've mentioned is pretty defensible, particularly the readability bit. Honestly if a language does everything reasonably well, I think that's pretty awesome. I'm also curious what magical dynamically-typed interpreted language you prefer. The only mainstream general-purpose alternatives I can think of are Ruby (which is basically alternate-universe python) and Perl (which seems strictly worse), or are you into JavaScript or APL-derivatives? It's all a matter of taste of course, but I could probably write a much more vicious rant against any of those.
I go to Python conferences and meetups regularly and I cannot say that I have ever run into this attitude, let alone the specific combination.
I do run into a lot of egos but these don't seem much different from the types I see posting comments like this on Hacker News. And a lot of Python people have picked up Node or Go these days, so I wouldn't be too sure that you weren't damning a significant part of your own favorite community here.
Concurrency support is possible in Python, without gevent-style monkey patching (or callback madness). Have a look at concurrent.futures and http://www.dabeaz.com/coroutines/index.html. It really needs a lot more work before it's part of the language's DNA, though. Also, pypy needs much wider adoption as quickly as possible, to address the speed problems (and its STM branch holds huge potential).
For me, Go's major shortcoming is its community's lack of focus on readability as compared to Python.
"For me, Go's major shortcoming is its community's lack of focus on readability as compared to Python."
Can you be a bit more concrete here? Because whether you like the code styles enforced by gofmt and the compiler itself or not, Go is the most consistently readable language I've ever used (once you adapt yourself to the language). IMO, this seems like an especially odd problem for someone to have with Go.
I think we, as a profession, have a tendency to focus on what's "easy" (syntax and constructs we're already familiar with) to the exclusion of other, potentially more important factors. I am and have been guilty of this myself, so I'm not claiming any sort of sainthood, just making an observation.
Often people pick on the use of single-letter variable names as "unreadable." This is mostly a matter of keeping the code light on the page, as they put it. If you're dealing with a package centered around Foo, especially methods on same, it's customary to just abbreviate it to f. That clashes with a lot of people's instincts even though, once you get used to it, it works well.
"Once you get used to it" is key -- this is true of any language with substantially new or different syntax/semantics from what a programmer is used to. All else being equal, familiarity ought to be orthogonal to merit; the mere presence of a learning curve ought not be a deal-breaker.
Anyway, Go is one of the most readable languages I've ever seen, given that it is not a scripting language. Among other things, type inference and literals make the code very clean, to the point where you can in short order read the standard library code and expect to understand it.
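To make the single-letter receiver convention mentioned above concrete, here's roughly what it looks like (an invented package, just for illustration):

package shape

// Circle is a toy type standing in for the package's central "Foo".
type Circle struct {
    Radius float64
}

// Area uses the customary single-letter receiver "c"; its type is
// always visible a few characters to the right, so the short name
// keeps the code light without losing information.
func (c Circle) Area() float64 {
    return 3.14159 * c.Radius * c.Radius
}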
I think the issue with single-letter variables goes deeper than it just being unreadable (although it's that too for someone that isn't "used to it").
For example, there are issues with scope (what happens when I need a second or third "f" variable?) and maintainability too (6 months from now I have to remember if "f" stands for "foo" or "file").
Generally, I try not to use single letter variable names in any scope that doesn't fit into a single screen in my editor. That way, I don't need to remember what "f" means, it's right there in front of me. And I shouldn't have too many variables in that scope, so conflicts aren't all that important; if I really do, I just start using multi-letter names again.
This is a good heuristic. For me, single letter variables are almost always iterators in places where space is at a premium (i.e. one line list comprehensions in python). If I'm breaking a loop out over multiple lines (which is almost always the case), I prefer a longer, more informative variable name.
You can look at the method signature that accepts or returns f. In the worst case, you need an IDE with hover popups, or to open the function's doc page or source code.
The biggest gripe I have with single-letter variable names is that they invariably require mental translation to their full names before anyone can understand how they're being used. Even for those who wrote the code in question.
I can't speak for @ak217, but for me Go hits a bad spot on "expressiveness". Some of the nuances of static typing are mitigated a bit with features, yes, like the type inference on local variables, or the automatic interface detection (i.e. no explicit "type X implements Y"). But overall, I think the chosen type system is lacking in expressiveness. Having no parametric polymorphism in a static language is a deal breaker for me at least. No way of doing type-safe parametric container types, for example.
It's a fair complaint. Duplicating code just for the sake of type specialization is annoying, and interface{} has its costs as well as dangers.
However, the built-in arrays, slices, and maps (which are parametrically polymorphic) seem to have covered all the cases where I need parametric types so far. Maybe I'm just lucky or haven't written enough lines of Go but it seems like an 80/20 sort of feature in the presence of the built-in maps.
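To spell out the trade-off (a contrived sketch): the built-in containers stay type-checked, while falling back to interface{} pushes the check to runtime.

package main

import "fmt"

func main() {
    // Built-in map: parametrically polymorphic, checked at compile time.
    ages := map[string]int{"alice": 30}
    fmt.Println(ages["alice"] + 1) // fine

    // interface{} container: the compiler can no longer help you.
    var anything []interface{}
    anything = append(anything, "bob")
    n := anything[0].(int) // compiles, but panics at runtime
    fmt.Println(n)
}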
If I'm going to be burdened by a static type system, then I expect it to do certain things for me in return. One of those things that I take as a given is that a static type system can check that my basic use of types in my program is safe.
Not being able to type-check something as simple as a container lookup is a show-stopper for me because using containers is a huge part of day-to-day programming. Static typing without parametric polymorphism is extremely retrograde -- like Java in the 90's.
I've read rsc's essays on adding parametric polymorphism to Go -- I'm not saying it's a "duh -- just add parametric polymorphism". I'm well aware of the issues -- I'm just saying that Go looks great except for this glaring exception.
The first link, Effective Go, doesn't talk about the philosophy behind idiomatic Go like _Code like a Pythonista_ does. There are, however, snippets of Go code on that page, and it's fairly reminiscent of C/C++. I don't really see that as a drawback, for I believe C code is fairly readable for what it offers (speed). Go's likeness to C doesn't offend me one bit, considering the power and speed it offers versus C.
What makes you unimpressed? Why do you think that Python clearly wins? Do you think that the additional type safety, namespace safety, and ease of concurrency in Go are worth nothing? Or do you just think that Python has more that outweigh that?
I know that I frequently kick myself in Python when I typo a name somewhere, and everything dies far beyond where I made the mistake because it goes along happily storing something into that typoed variable, and then when I read it later it has the wrong value, and it takes me a while to trace back and find where I made the typo.
This is what linting tools like pyflakes are for. Running it as a prerequisite to running the test suite means that I don't have to run all my tests just to find out that I misspelled a variable name.
that was deliberate. There has been a glut of "introduction to Go" blog posts, and the documentation for introductory Go from the Go Team is quite good, so I didn't feel the need to author another introductory post. Most of the stories about why people use Go are bigger, established companies, so I wanted to show how it was perceived by a different audience. I don't remember where I saw it, but someone talked about using Go in contrast with Java, and the overwhelming reaction was "that's cool but I'm a Python programmer, I'm curious what Python programmers think", so I was just trying to give it that angle.
>Have a look at concurrent.futures
but this is part of the problem: it's a library. Concurrency deserves language-level support. Adding concurrency at the library level is like adding an object system at the library level.
> I don't remember where I saw it, but someone talked about using Go in contrast with Java, and the overwhelming reaction was "that's cool but I'm a Python programmer, I'm curious what Python programmers think", so I was just trying to give it that angle.
I have seen more Python/Ruby programmers interested in Go than Java or C++ programmers. Rob Pike seems to have observed it as well.
I was asked a few weeks ago, "What was the biggest surprise you encountered rolling out Go?" I knew the answer instantly: Although we expected C++ programmers to see Go as an alternative, instead most Go programmers come from languages like Python and Ruby. Very few come from C++.
> but this is part of the problem: it's a library. Concurrency deserves language-level support. Adding concurrency at the library level is like adding an object system at the library level.
Which is not a problem at all. Lisp and Scheme have been doing that since forever.
You don't need macros to solve this problem. You can use unique types, STM, etc. to forbid the data sharing at the type level and ensure a coherent memory model.
The paper's position is that you can't implement threads, safely, as a library because the compiler is not aware of the concurrency. My point was that this reasoning may not hold if your language has full macros. Your suggestions will work great, but those are in the compiler, not as a library.
Well, the point is that concurrency primitives don't have to be built into the language to enable libraries to write safe concurrent abstractions. Rather, the type system features that enable it can exist in the compiler as general type system features unrelated to concurrency that just so happen to be able to ensure the safety of concurrent code if used properly.
I take your point, but I find it unlikely that anyone would define the correct language semantics that would guarantee correct results under concurrency without designing for concurrency. Which is the point of Boehm's paper: your language (and its compiler) need to know about concurrency in order to get it right.
But these arguments are pretty much invalid today: C++11 introduces explicit support for concurrency, and the compiler MUST be aware of it, even though it may look (on the surface) as a "library".
I wonder if parent has ever coded with Go? You have to maybe learn a new syntax (partially), but once you have, it's incredibly readable. For instance, encapsulation is handled by giving semantic significance to the case (upper or lower) of the first character of an identifier. So "myFunc" is unexported and only visible within its package, whereas "MyFunc" is publicly accessible. This and many other features make Go extremely readable, for me at least, although I admit I did have to read the docs and program some of my own projects before I really got accustomed to the syntax.
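A tiny example of what I mean (names invented):

package greeting

// Greet is exported: any package that imports "greeting" can call it.
func Greet(name string) string {
    return prefix() + name
}

// prefix is unexported: only code inside this package can see it.
func prefix() string {
    return "Hello, "
}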
If someone created a debugging environment for Go based on a VM, which also let one recompile source from within the debugger then continue execution, then it would be, for all intents and purposes, as productive and immediate as the old Smalltalk environments. You'd have the same small-grained cycles of inspecting state, modifying code, rewinding the stack to the place of your choosing, then getting immediate feedback.
Source code changes could be saved as log-structured patch files, which could then be thrown away or applied to the source tree as desired. One could also steal some ideas from the Smalltalk Change Log tool by adding similar editing, search, and filtering commands.
With tools like this, one could recompile for "interpreted debug mode," have complete visibility and control of runtime state to debug a problem, then take the resulting patch file and apply it to the source tree. It would be a best of both worlds scenario -- all the enhanced debugging of an interpreted runtime with the type safety and speed of compiled code.
Interactive coding is a very powerful way of working, it's one of the reasons I'm so productive in Mathematica.
At a Go talk he did, I raised the idea with Russ Cox of having a "repl" package that would allow one to instrument a running program with a live REPL to do debugging and development on it.
The reflect package is powerful enough to make some of that relatively straightforward, but one major problem is that Go can't construct new types at runtime. Another challenge would be dynamic linking of new code -- because the Go toolchain doesn't support dynamic linking there would seem to be no hope of say entering anonymous functions on the REPL.
Unless one builds a full Go interpreter -- but who wants to be in the business of maintaining a fully compliant Go interpreter that can interoperate with the Go runtime?
Still, calling existing functions and banging on variables would be pretty useful.
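The "calling existing functions" part is already roughly doable with reflect today -- something like this toy sketch, which is nowhere near a real REPL:

package main

import (
    "fmt"
    "reflect"
    "strings"
)

func main() {
    // Pretend the "REPL" resolved the name "strings.ToUpper" to this value.
    fn := reflect.ValueOf(strings.ToUpper)
    args := []reflect.Value{reflect.ValueOf("banging on variables")}
    out := fn.Call(args)
    fmt.Println(out[0].String()) // BANGING ON VARIABLES
}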
> one major problem is that Go can't construct new types at runtime.
The current implementations can't. I don't see any reason why one couldn't. In any case, I don't think that's such a big deal. One just goes from completely seamless interactive coding to mostly seamless interactive coding.
> Unless one builds a full Go interpreter
That is precisely what I was proposing. (Are you implying some sort of hard VM/interpreter dichotomy? I've met some people who implement VMs who think this is somewhat arbitrary.)
>but who wants to be in the business of maintaining a fully compliant Go interpreter that can interoperate with the Go runtime?
There would be no need to interoperate at all with the current Go runtime. One would have to have their own Go runtime, however. As an alternative, one could just target an emulator with no optimizations, then use debugging information and dirty tricks to map the new code with the old state in an entirely new process.
> Could you fake interpretation by continually recompiling everything?
That's pretty much what the modern JIT VM Smalltalk environments do. On recompiling a method, all the affected JIT compiled machine code is kicked out of the code cache, and you start from interpretation again, which eventually results in the "hot" code being JIT compiled again.
It would help a bit if the article included at least roughly equivalent Go code next to the Python code. The Go code is wordier, but maybe it takes less time to write because it doesn't require as many decisions (libraries etc.) as with Python.
package main

import (
    "fmt"
    "net"
)

func main() {
    hosts := []string{"www.google.com", "www.example.com", "www.python.org"}
    c := make(chan string)
    for _, h := range hosts {
        go get_ip(h, c)
    }
    for i := 0; i < 3; i++ {
        fmt.Println(<-c)
    }
}

func get_ip(host string, c chan string) {
    addrs, err := net.LookupHost(host)
    if err != nil {
        fmt.Println("Host not found:", host)
        c <- host + ": <error>"
        return
    }
    c <- host + ": " + addrs[0]
}
This is a very good example of the type of thing I'm talking about. Nobody prompted you to do so, but you gracefully handled the case of a DNS failure in your code, because the control path was obvious throughout. Go makes this type of error handling a topic very early on in the literature. I find this type of clarity when structuring concurrent code to be very helpful in minimizing subtle concurrency bugs.
Lack of exceptions means that the deeper the call stack, the higher the proportion of error handling code relative to the normal (non-exceptional) code path.
For example, if you decide to refactor some code from a bigger function into its own smaller function, then all the error handling code has to be written twice (first in the child function, then in the parent that calls it). It introduces unnecessary clutter and boiler-plate.
Exceptions allow you to limit the error handling to two places: the place where you detect an error and the place where you are ready to handle it, not throughout the whole call stack.
A language with builtin garbage collection can (and should) afford builtin exception support.
I like that Go excludes some language features on purpose but in the case of exceptions Python has its merits.
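To make the refactoring point concrete, here's the shape of the thing (a contrived sketch; the function names are made up):

package main

import (
    "errors"
    "fmt"
)

// innermost function: where the error is detected
func fetch(id int) (string, error) {
    if id < 0 {
        return "", errors.New("no such record")
    }
    return "record", nil
}

// refactored-out helper: has to repeat the check just to pass the error up
func fetchAndFormat(id int) (string, error) {
    rec, err := fetch(id)
    if err != nil {
        return "", err
    }
    return "formatted: " + rec, nil
}

// caller: finally handles the error
func main() {
    out, err := fetchAndFormat(-1)
    if err != nil {
        fmt.Println("giving up:", err)
        return
    }
    fmt.Println(out)
}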
Coming from Python, I was of the same opinion at first. A significant number of error handling lines is what I noticed first when looking at Go code [0], too.
However, I was interested to learn that Google's style guide for C++ recommends not using exceptions. I don't know C++ (shame), but at least some of their reasoning is applicable to Python as well:
For example, when you start raising an exception in a Python function, you have to check its callers to see whether they (or their callers) handle it. And vice versa: when a Go function has an "expected" error as a return value in its signature, it's very obvious what you need to handle when you're the caller.
An "expected" error as a return value also encourages the single responsibility principle, I suppose. E.g., for a function that decodes JSON, bad syntax is an "expected" error; anything else is a reason to panic().
I'm not sure if I get your example with refactoring, but it may be a case where you use panic() in the inner function, recover() in the outer one, and return an error as usual.
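Something like this, if I read the docs right (an untested sketch):

package main

import (
    "fmt"
    "strconv"
)

// inner function panics on bad input instead of returning an error
func mustParse(s string) int {
    n, err := strconv.Atoi(s)
    if err != nil {
        panic(err)
    }
    return n
}

// outer function recovers and converts the panic back into a normal error
func parse(s string) (n int, err error) {
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("parse failed: %v", r)
        }
    }()
    return mustParse(s), nil
}

func main() {
    fmt.Println(parse("42"))
    fmt.Println(parse("not a number"))
}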
That said, I've never actually used Go yet, so it's just theorizing.
[0] But then my Python code that (to me) looks more or less “solid” appears to contain a comparable amount of error handling lines, so not sure if it's a good metric.
I was initially of the same opinion, but with time I came to understand that Go's error handling system is like vegetables; unpalatable, but good for you. Go's error handling works very well in practice, and my programs exhibit little to no unexpected behavior.
Another gonatic? I'm immediately reminded of the days when everyone who liked D proclaimed it would overtake C++ and rule the world with terrible, contrived examples: "let me show you why my language is better than yours by completely misunderstanding how to solve a problem, then implementing that broken/overengineered solution in your language, then compare it to something my language's API can do for me, just so we can see how much simpler that bad solution is in my language!"
The insight of async I/O is that this server, for this I/O-bound task, will perform as well as your GOMAXPROCS example:
But it's a lot simpler. You don't need to bolt on parallelism when you don't actually need parallelism. There are valid reasons to use Go, and there are valid complaints against Node. I don't see any of either here.
The big difference between D and Go is that D is another kitchen sink language. It has almost all of C++'s features, and more! Conversely, Go has many fewer features than C++ or D, and many argue that this is Go's biggest selling point.
I do believe that's what I said. The comparisons the OP gave aren't useful because the trivial example does not require parallelism, so it amounts to little more than complaining about the lack of features in Node. I'd like to see some nontrivial case studies as well.
I'd be interested in a more in-depth look at why this might be the case, because in JS written such that V8 will optimize properly, it's generally faster.
IIRC, you can't tell V8 to keep memory in the same place (it can and does move things around as part of its GC)... this complicates sending strings to the socket, so you have to make a copy from the V8 world to some other world before initiating the async write... I believe (suspect?) that Lua does not have this limitation, so the extra work involved is not necessary... this is based on old, vague notions so I may be off my rocker, but it is the best of my understanding. I would love a detailed look.
I'd like to see more detail about the real problem he was facing in Python. It sounds like he mostly wants a non-blocking background job: "I don’t want to set up another daemon, I just want to send some email in the background!" Why not just use Queue (thread-safe, waitable, built-in) and a background thread (pool)?
There's an awful global lock on the actual execution of Python code, but unless the problem is performance or contention, worrying about it is premature.
Not only that, but there's a multiprocess Queue class as well that is a drop-in replacement for the threaded one. So if you are having problems with GIL contention (unlikely, but possible), you can still just as easily use multiple processes.
Exactly what I thought - "what about multiprocessing?" Doesn't that count as language-level concurrency support? You can fire off a background job to send email or whatever just as easily as in the Go example.
This confused me as well, it felt like the author was thinking about the problem space all wrong. As soon as I read
"As a Django developer, there wasn’t a straightforward and obvious way to just do things in the background on a page request"
I started thinking, "um, django produces http responses, you can only respond once...".
Clearly, from his gevent example, he's looking to do things that take time, and wants to do them as concurrently as possible, then send the whole response at once. But if he really doesn't want the user to wait, then this is not something you should be doing in a webserver anyway. You really should be building that into the front end JavaScript, and let it handle all the async data grabs from small, fast, cacheable web requests, then redraw the page as the data comes in.
If for whatever reason you don't want to build everything into the front end Javascript, it is very common to throw jobs onto a task queue to be done outside of the HTTP flow, so they do not block the response.
This could be a lot more accessible to beginners, however, because there are moving pieces like a message queue that have to be set up and blah blah.
You should update the node.js docs if you consider the original JS snippet ugly, because that's where I pulled the original code sample from verbatim.
In any case, it's still not an apples to apples comparison. The node.js example forks separate processes that share nothing, while in the Go example it all runs in one process so the HTTP handlers can communicate with each other via channels or other shared data structures.
The documentation has the primary goal of explaining how things work, not being the canonical style guide.
Take a similar example from the Go documentation:
const NCPU = 4 // number of CPU cores

func (v Vector) DoAll(u Vector) {
    c := make(chan int, NCPU) // Buffering optional but sensible.
    for i := 0; i < NCPU; i++ {
        go v.DoSome(i*len(v)/NCPU, (i+1)*len(v)/NCPU, u, c)
    }
    // Drain the channel.
    for i := 0; i < NCPU; i++ {
        <-c // wait for one task to complete
    }
    // All done.
}
Of course it's easier to read for you: you know JavaScript. I know both JS and Go and the Go version is much easier on my eyes - there's simply a lot less to it.
ah, so, I don't think I wrote that section particularly well. I hold Erlang and Haskell in very high regard; I was really trying to mock my own superiority complex, which I think is fairly common in the Python community. I guess what I was really trying to get at is that Go is very much an industry language. Haskell I would argue against being an industry language, not because it can't be used in industry; that's not at all what I mean. I don't mean to compare them on their technical merits; what I mean to say is that Go has a broad appeal, in that it is welcoming to beginners, it has large corporate support, and it is also technically capable. Of the programmers that I know, the Haskell programmers are typically capable of solving the hardest problems, but they also tend to be the most academic. The Erlang programmers typically build the most stable software, but I think Erlang is intimidating to less-experienced programmers (the same is true of Haskell, actually).
That could have been written better. Thanks for the feedback.
> Haskell I would argue against being an industry language
FYI, the problems the Haskell community has been working on are things like scalability, performance and safety because they're critical to industrial problems. Toy approaches don't work at the scale we operate at -- you need real computer science.
You want 1,000,000 Haskell threads in your app? You've got it. Want to write numerical models that compete with C++ code, in a fraction of the development time? Done. Want to guarantee the app won't crash? Solved. Run models over 10,000 cores? It happens.
Because we put the work in.
---
/me wanders back to a multi-million line Haskell codebase running systems in 25 countries, processing billions a year in financial transactions.
I spent 3 days and learned all the features of Go. With Haskell, I spent 3 months just to barely touch applicative functors and a bit of monads. I expect it will take another year before I am truly fluent in Haskell.
Go look at the Haskell code dons has written and come back and confirm or cancel your claim. Hint: dons studies Core to get the great performance his code achieves.
People like dons can do amazing work in Haskell. There are 10x or 100x as many people who can't (not without at least a PhD's worth of study), but who could be productive in Go on a large array of projects they need to do.
/Studied Haskell, hung out on #haskell, thinks it's awesome, it made me a better Java programmer, but always gets blocked on showstoppers due to not knowing the incantations and deep type theory needed to structure a program for high performance and debug space leaks.
Probably not if that's what you start with. Carnegie Mellon now starts all CS students with ML, I believe.
For those of us who learned to program in an imperative language first, the difficulty with learning functional programming is unlearning all of our bad habits.
At my college (www.ii.uib.no) they've restructured the "programming paradigms" course (basic compiler/language theory) from using Standard ML to using Haskell for the strongly typed functional bit. While I appreciate being forced to 'learn' Haskell -- I found Standard ML much easier than Haskell. I did manage to implement a small language for calculating numbers in both, in the end.
"Functional programming" is 10% of what makes Haskell difficult. The intricacies of laziness and existential types and opimizing higher order functions are more the problem.
Very true. One of my first forays into Haskell came during the Google Code Jam, after seeing how short and elegant Reid Barton's solutions were. I solved the problems, but the programs were horribly slow, so I checked how Reid got around it: oh, he doesn't just map things and use the standard style, he needs some special constructs from the libraries. It looks mathematically equivalent, but it seems plausible that the old way ends up holding onto too much data in memory. I eventually stopped when I realised that it would always take more time than hammering it out in C++, even if the final product in Haskell is shorter and more elegant. In fact, outside of writing APIs and frameworks, the desire for elegance seems debilitating. This also reminds me of how, when I was most seriously practicing for Topcoder, I could consistently produce shorter solutions than tomek (the top ranked guy, far and away, at that time), but he would finish more quickly, since once he typed it in it was done.
For the record I'm now using Python (and some R) but I think I'd like Go.
> /me wanders back to a multi-million line Haskell codebase running systems in 25 countries, processing billions a year in financial transactions.
Curious... Can't be a bank, too conservative. Hedge fund? I know Jane Street love their OCaml so functional does have a place in that world. But 25 countries? You'd have to be huge.
The idea comes from 10 years of working with them as an external vendor. I see all sorts (but especially Python and C++) on prop desks, but mostly C# in other areas (integration, data management etc.) That being said, nearly all of my experience is in front office trading and asset management, primarily in listed instruments, with little on OTCs, and nothing on payment processing, 'core banking' or the kind of stuff that spans the entire organisation.
It sounds like Haskell has found a niche at Standard Chartered doing perhaps just that. What are those two banks you mentioned doing with their Haskell code?
It's easy. Pimp yourself out, put up a resume.html, write "Haskell" on your linkedin profile.
This "throwaway" line on my resume: """Interest in functional programming (Erlang, Haskell, Lisp) and associated techniques as they relates to increasing software reliability and scalability."""
...has gotten me at least 2-3 calls/emails for Erlang/Haskell-specific work over 3-4 years, and that's without even trying.
Don't be afraid either to make up a second resume, b/c from what I can tell recruiters just do Google searches and click on links. With LinkedIn, write a "brave" summary up top and put in a strong Haskell statement or two in your details section and you'll probably start getting some calls.
If you're happily employed right now then don't be afraid to change your resume / LinkedIn / passive search to reflect your dream job instead of a "real" job-search resume. Just be ready to "change it back" when you get serious about looking.
I too share the perception that Haskell is not an "industry language" although I play around with it in my free time and like it. I'd be interested to know if there are any really competitive i.e. not experimental or mediocre software packages for Haskell out there. For example, is there a server written in Haskell that can compare to nginx or apache? Anything like a MySQL, CouchDB, or Redis? What about web development? - any CMS's or Frameworks comparable to Rails or Django? Up till now everything I've found from the Haskell site just can't compete with the industry standard alternatives - unless of course you're a Haskell expert and can program everything yourself...
While I agree with you, please be careful - you're an extremely influential member of the Haskell community. This post could come across as very smug, and is thus potentially damaging. I'm sure it wasn't your intention, but please - be careful.
If dons were concerned about this post hurting anyone's reputation, he would need a Herculean effort to clean up a decade's worth of similar and more scathing commentary.
>/me wanders back to a multi-million line Haskell codebase running systems in 25 countries, processing billions a year in financial transactions.
Hope the codebase runs well, because that's the bank I use in Singapore ;-)
Have you (or anyone) written something about how Haskell got picked for the task by that particular bank? Is the language used at other financial-type businesses? I've heard some firms use OCaml.
Actually, that would probably be pretty boring--the two languages overlap considerably. I worked at an OCaml company this summer; coming from Haskell, I picked the basics up immediately and only took a bit more time to learn some of the more advanced features (e.g. OCaml functors).
I had some discussions about OCaml vs Haskell and, honestly, nobody could come up with a particularly strong case in either direction. (That said, we didn't do anything using multiple cores, where Haskell has a bit of an advantage, I believe.) It always came down to "well, Haskell is awesome, but we're already using OCaml and it's pretty awesome too, and maybe easier to learn".
Much more interesting would be Haskell/OCaml vs everybody else, but that gets played out all the time anyhow :P.
Actually, I loved that section. I'm one of those "grey-bearded wizards of Lisp/Haskell/whatever" but I like the idea of becoming one of the "insane Erlang programmers who are content writing sumerian cuneiform all day long." Guess I'll have to learn me some Erlang.
C can be intimidating, and C++ often is. You have to gain much experience to be able to write good C++ code. With Erlang it's simple. It's supervisors and gen_servers all the way down.
Interestingly, many things in Go's design that might seem 'deficiencies' from the point of view of Erlang means most of those problems don't apply to Go at all.
That's a good talk, I've encountered these problems in the wild. Erlang isn't a silver bullet obviously and you have to know it well if you want to write something big, but it's really easy to write small to medium applications. Shame their erld isn't FOSS.
For small tasks in the background of a web request, you can just, you know, use a worker thread. This author seemed like he didn't even try regular threads, and went directly from one hyped meme to another. It's often the case that the GIL isn't much of an issue.
I'd like to start porting a few of my little scripts to Go (they do pretty poor, messy parallelism in Python), and I was wondering what a good resource/book type thing would be for people learning Go. Like, the equivalent of Learn You a Haskell or whatnot. Also some advice on "wtf library do I use for this".
Is there some sort of Go package manager? How does all this shit work?
Yes, the Go tool comes with package-management capabilities. "go get labix.org/v2/mgo" will download the very excellent MongoDB driver mgo, for example.
But then actually implementing something useful is cryptic and obscure. I tried figuring out how to connect to a postgresql server following the spaghetti documentation of https://github.com/bmizerany/pq and http://golang.org/pkg/database/sql/ and have yet to find anything useful. Hacking around only leaves me with frustration. Searching around only leads me to unhelpful descriptions of what the GO command does in MSSQL Server.
func TestExec(t *testing.T) {
    db := openTestConn(t)
    defer db.Close()
    _, err := db.Exec("CREATE TEMP TABLE temp (a int)")
    if err != nil {
        t.Fatal(err)
    }
    r, err := db.Exec("INSERT INTO temp VALUES (1)")
    if err != nil {
        t.Fatal(err)
    }
What is it exactly that you tried and failed? Because looking at conn_test.go the API is dead simple and pretty much like any other db API of this kind.
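For reference, the happy path with database/sql plus that driver looks roughly like this (the connection string and table are made up; adjust to your setup):

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/bmizerany/pq" // registers the "postgres" driver
)

func main() {
    db, err := sql.Open("postgres", "user=me dbname=test sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    var name string
    err = db.QueryRow("SELECT name FROM users WHERE id = $1", 1).Scan(&name)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(name)
}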
"As a Django developer, there wasn’t a straightforward and obvious way to just do things in the background on a page request. People suggested I try Celery, but I didn’t like that option at all. A distributed task queue? What? I just want to do something in the background without making the user wait; I don’t need some super comprehensive ultimate computing machine. The whole notion that I would need to set up and configure one of these supported brokers made my spidey sense tingle"
It's not really that hard. I just latch on to a broker that I'm already using elsewhere in my stack (Redis). Celery makes it super simple to run a command in the background.
I just want to do something in the background without making the user wait
What's more, the strategy of just spawning a thread to do async processing doesn't scale. Once you hit the limit of requests that a machine can process, you'll need a distributed work queue anyway. Or do Go coroutines run through some sort of managed execution queue?
Celery is one of the easiest things to set up. It really doesn't get much simpler. Your configuration is living in your django project as python source code, and `celeryd` is no-fuss.
well, the complaint isn't entirely that it's hard, per se, but that there are many different ways to do things concurrently in Python, and that it presents too many choices to be made for the novice programmer. In Go, you just say `go myfunction()` and you're done. There's a lot of value in that.
Note that Celery isn't just about what you describe here. A distributed task queue is beneficial in Go (or node.js) too. Async is one thing, distribution is another. Also web servers are often volatile environments (e.g. for tasks that must complete).
Until, that is, you tackle a problem for which one machine isn't big enough, at which time you need to do all of the "real" solutions anyway, and `go myfunction()` buys you...nothing.
There's way too much focus today on new languages that are designed to work on just one machine, and scale there – as if vertical scaling was the true problem we all face, when in fact, it's not. /sigh
He's talking about a web application doing a task asynchronously. There is no reason why it would be hard to scale Go web servers across multiple machines. Yes, you could use a distributed broker system to do asynchronous tasks, and some language platforms leave you no other choice, but there is no reason to think his use case actually requires anything more than running an asynchronous task.
I keep hearing about go. I am new-ish to programming. Only really getting started on my first project, which depends on pyparsing, which depends on other things. Is go something a novice should be attacking real-world problems with?
Probably not. Go is still in its infancy and not especially well-supported. If pyparsing is solving your problems well, stick with it. Python is great.
Of course if speed is important, you should ditch CPython anyway.
A language is just one tool in your toolset. I love using Go very much, but I would recommend learning on Python. There's much more literature out there, more people you know will know it, and if you are looking for work, there are effectively zero entry-level programming jobs in Go.
The reason you keep hearing about Go is that it just very recently hit its 1.0 release, so it is a comparatively new language, and it's just a topic of conversation. Talking about Python isn't really newsworthy in the same way, because for most people, it's just a fact of everyday life. Go, however, is this new thing that's a little mysterious, that most people haven't tried yet.
Have you ever heard the phrase "you can never really know yourself until you know others"? Well, that's true of programming languages, too; learning new languages can help you to write better code in a language you were already familiar with. Learning new languages is great practice for any programmer.
But to start, I would just stick with Python and write some things that make you happy. That's the most important part; to figure out how to use code to make yourself happy. Which language you use is really just an implementation detail.
Actually, not without going out and meeting new people. I'm a physician, so meeting anyone who knows anything about IT is quite challenging. And the helpdesk folks at the hospital usually aren't in the mood to troubleshoot those kinds of errors!
ah, I initially met most of the programmers I know through hackathons and meetups. Startup Weekend was one of the first things I went to where I met a lot of people, and that happens basically everywhere.
I think we can cut the author some slack. He admits in the header to being a relative newbie. (Learned python less than a year ago, recently took Hacker School)
In Go, the coroutines are in the same thread. There is a single thread here too (just like node.js). Coroutines are just multiplexed onto the one main thread. Multi-core processing is handled by the coroutine internals too, i.e. there may or may not be more than one thread, and even if there is more than one thread, they too will be multiplexed with the one main thread.
Goroutines running Go code are multiplexed onto a variable number of system threads (GOMAXPROCS, in the gc implementation). Goroutines running C code, for example via cgo, or when they call a blocking system call, have their own thread.
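And note that, in the current gc runtime, the number of threads used for running Go code defaults to 1; you opt in to more with runtime.GOMAXPROCS (or the GOMAXPROCS environment variable). A minimal illustration:

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // Use as many OS threads for running goroutines as there are CPUs.
    runtime.GOMAXPROCS(runtime.NumCPU())
    // Calling GOMAXPROCS with 0 reports the current setting without changing it.
    fmt.Println("running Go code on up to", runtime.GOMAXPROCS(0), "threads")
}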
I think that's more a matter of the scheduler still being immature. Erlang only added an SMP scheduler in R11, due to the complexities of managing internal state without sacrificing its soft-realtime abilities. It was also only really reliable after R12.
I'm sure Go will mature here and the defaults will evolve along with that. Whether that will take years or not is up for debate but at least it's a possibility and doesn't cost the programmer anything other than a recompile if they start with careful use of goroutines today.
I haven't been using Django for a while, but I know there was no way built into the framework to do it. The complaint isn't really that it's too hard, it's that there's too many decisions to make, because it's not supported at the language level.
Sure you can do that (using threads/multiprocessing/greenlets). But usually you don't want to because it's hard to guarantee that the background task will complete without interruption.
What's wrong with good old threads? It's not that hard. And threading in Java will blow away any pseudo-concurrency setup in Python. Been there, done that. Love Python but just couldn't get the performance we needed.
it's a slow I/O operation. Perhaps it takes a second or two; maybe you're generating a PDF and the email is of non-negligible size, or your email provider is slow that day, or something of that nature. When a user clicks a button, and the response is just "ok, we sent you an email", you can just issue that response immediately, and then send the email. Otherwise, the user's browser just sits there and hangs until the entire process is done.
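In Go terms, the shape of it is something like this (a sketch only; sendEmail stands in for whatever slow work you're doing):

package main

import (
    "fmt"
    "net/http"
    "time"
)

// stand-in for the slow, I/O-bound work (PDF generation, SMTP, etc.)
func sendEmail(addr string) {
    time.Sleep(2 * time.Second)
    fmt.Println("sent email to", addr)
}

func handler(w http.ResponseWriter, r *http.Request) {
    // respond immediately...
    fmt.Fprintln(w, "ok, we sent you an email")
    // ...and do the slow part in the background.
    go sendEmail("user@example.com")
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}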
Based solely on this article, and this article alone, and knowing nothing more about Jordan Orelli, the conclusion I drew about the author is that he thinks "Java Web Programming" strictly refers to applets and nothing more.
ah, I was working on a game. It was a space shooter. I wanted it to be viewable in the browser so that I could just send a link to my friends and they wouldn't have to download it. I wound up writing it in AS3.
I have found the Go stdlib much more uniform and reliable than the Python stdlib. It's perhaps not as complete, but it also has things that are missing from the Python stdlib, and unlike with the Python stdlib, I don't find myself reaching for alternatives; the Go stdlib has a great template system, image manipulation, etc.
Third party libs still are more uneven, but that is not much different from Python.
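The template system, for instance, is pleasant out of the box -- a trivial example:

package main

import (
    "os"
    "text/template"
)

func main() {
    t := template.Must(template.New("greeting").Parse("Hello, {{.Name}}!\n"))
    t.Execute(os.Stdout, struct{ Name string }{"gopher"})
}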
nonblocking i/o is an important feature if you want to make something responsive that will handle a lot of connections. node.js brings that to the foreground, and makes it a topic that is digestible for a lot of programmers that have never seen it before. The end result is that by using node.js, a developer is able to experience one element of concurrency that they may not have had exposure to using Ruby/Python/PHP. Yes, it is possible to do nonblocking i/o in Ruby/Python/PHP, but it isn't as natural of an experience.
It's a usability thing. If you make something easier to use, more people use it; as a result, node.js has introduced the topic of nonblocking i/o in a way that other technologies have been unable to do. It also allows you to share code between the client and the server, which may or may not matter, depending on your project. And for a lot of developers, node.js means using a language you already know.
The guys at Kitchen Table Coders do a workshop where they make a physical game paddle with an Arduino, and then hook it up to a node.js server, so that two people can play pong with physical controllers in their browser. They're able to teach that workshop in a day. I think that speaks volumes for the types of things that you can do with node.js, and its mass appeal.
But, like I said in the article, I find that node.js becomes disorganized quickly, and I just plain don't enjoy debugging JavaScript. On top of that, if you need to do CPU-bound work, node.js basically forces you to create an isolated process and perform i/o with it, but now you have an OS process for each of these instances, whereas in Go, creating a goroutine and communicating with it over channels is much more lightweight. Node.js has its merits, and I don't mean to say "nobody use node.js ever!", I just don't like using it.
This made me laugh, thank you.