We've just started using Go as well. It smokes our Python app in terms of speed, and is fun to use (maybe just because it's new?).
I have always wondered, however, whether moving to a new language seems great because of the language itself, or because you now have a much better understanding of how to implement a solution to the problem you are trying to solve.
New is fun. Exploration is fun. I think a lot of people will swear by a new language simply because it's not old and probably doesn't suffer many of the same deficiencies they're used to in their "everyday" language.
This to me is an illusion however. One must remain skeptical and treat new, untested languages with even more scrutiny than an old one. Many of these new languages will make extraordinary claims. Discovering the evidence to support these claims is often left as an exercise to the programmer.
That being said, new has a lot of advantages. It's free to try to break away from past paradigms that perhaps limited programmers. Stability can always come later once the core ideas have been fleshed out. And it's always fun to work on fresh ideas rather than refining the same old ones that we're plagued with.
Personally I wouldn't use a language and compiler that only just reached 1.0 this year in a production system. If I was really interested in Go I'd certainly hack with it and perhaps on it, but I wouldn't trust it to be reliable. Maybe that makes me an old, stodgy fart but I trust wisdom over brilliance when it comes to building systems that are dependable and robust.
Go has been surprisingly reliable and stable. Even before it hit 1.0 a few months ago, quite a few people (including Google) were using it in production: http://go-lang.cat-v.org/organizations-using-go
Go is also quite different from most 'new' languages, many people find it to be the most fun language they have used in a long time (even after using many other new languages).
This might be in part because one of the things that makes Go special (and my favorite "feature") is not just the features it has, but all the stuff it doesn't have.
Go is simple, doesn't get in the way, and lets you focus on the problem. Other "new" languages are often described as "powerful", but much of the work involves using their "features". Go, by contrast, is more often described as productive: the focus is not on the language and its features but on the problem you are trying to solve, and the language gets out of the way.
Is it really helpful to judge a language by its version number? Go is a very conservative language, in that it only uses well known and studied language features, and has been in production at many companies, including Google, Canonical, CloudFlare, etc.
Surely it deserves more faith than some random language designed for the exploration of new paradigms and features that is only used by its maker?
To paraphrase Big Lebowski: "Well, like, that's just Google's opinion, man."
Of course Go is already in production at Google because the Go team designed it to fit particular pain points that Google was having, so it's going to have the libraries that are needed to solve those problems. The question is whether the current libraries are there to solve your problems, and, as others noted, it's not that cut-and-dried.
Version numbers are pretty useless these days. I would certainly put a lot more stock in a 1.0 from the Go team than, say, my company. We use version numbers more for marketing than anything else.
Personally, I'm not as worried about reliability as I am roadblocks. Say, for instance, we spend a month moving our framework over to Go. Then we find a problem that is yet unsolved. Either we solve it ourselves at an unknown cost or we have to just ... wait.. until another group solves it while we make payroll in other ways.
I'm lacking any real evidence here, maybe Go doesn't have a library for our Message Queue (not true, just an example). Now we aren't just porting, we're writing a pooling message queue interface that is beyond our pay grade in the language.
New vs Better is also an interesting point. In this case, Go is a leap forward from Python in terms of language quality and predictability. It even provides a proper concurrency framework - something all modern languages need.
Go also works very well at STARTeurope, powering our event-platform http://startuplive.in/
Developing a high-level web framework from scratch just for one website was a bit of a crazy undertaking: https://github.com/ungerik/go-start
(sorry, the documentation needs a big update and a tutorial. Most time was spent on running stuff and shipping features...).
Garbage collection is actually a lot easier with 64-bit pointers, since the odds of a random collision between pointers and non-pointer data go way, way down, because the ratio of memory in use to total address space goes down.
I'm still a bit of a novice; could someone elaborate on what he means by operator overloading being "problem creating"? I thought that was one of the main 'core' concepts of OOP, along with inheritance and polymorphism.
How would you make something like a GUI without being able to specialize classes by overriding certain methods?
The argument is roughly that it is, in fact, not very useful, while opening up a vast array of misuses - especially in the domain of being all too clever. What does "+" mean on a list and an object? Add, probably. But what if it's two lists? Union? Or add the second list as the last element to the first? Named functions "add" and "union" aren't any less readable and loads more descriptive.
The only situation where you positively need operator overloading is when doing arithmetic, and you should do that using the built-in types. This, of course, sucks when the built-in types are inadequate and you e.g. want to use an arbitrary-precision library such as BigDecimal in Java.
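To see what arithmetic looks like in Go without operator overloading, here is a small sketch using the standard math/big package (the package and its Add/Mul methods are real standard-library API; the particular computation is just an invented example):

    package main

    import (
        "fmt"
        "math/big"
    )

    func main() {
        // (a + b) * a, spelled out as method calls because *big.Int
        // has no overloaded +, -, *.
        a := big.NewInt(1234567890123456789)
        b := big.NewInt(987654321)
        sum := new(big.Int).Add(a, b)
        product := new(big.Int).Mul(sum, a)
        fmt.Println(product)
    }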
This, of course, is an opinion, and I'm not shy to admit it's been shaped by being burned by Lift, the Scala web framework which makes heavy use of symbolic function names which makes it incredibly difficult to talk about or search for answers online.
Of course, Go itself is completely unsearchable, so you always have to search for Golang, which means there must be hundreds of pages that get missed.
And "C" or "D" are searchable? Or "Java" (island, coffee) or "C#" (musical scale)? Or "Basic" (fundamental)? Or "Python" (snake) or "Ruby" (gemstone)? Or "Lisp" (speech impediment)? Or "Pascal" (French mathematician, SI unit and general given name)? Or "Smalltalk" (informal conversation)? Or "Logo" (emblem)? Or "Lua" (Portuguese word for moon)? ...
In my humble opinion, in the vast majority of cases, yes. Because " C ", " D ", " C# " should be easier for search engines to disambiguate (at least working with Lucene, it is, and I imagine Google etc are similar-ish).
As for Java and Python, the context on the page is likely to point to the intended audience (and it helps that they have been around for ages, etc.). On the other hand, "go" is likely to appear in a lot of literature, including other programming-related texts.
Try searching for something with clojure, and then try go -- the quality of results is usually substantially different, and my unsubstantiated hunch is that not all of it has to do with lack of go-related content.
I would think that the search algorithms are smart enough to conflate references to go in an article about programming with golang as a search query. I am not a google engineer but I do know that they have very good "did you mean" analysis on search queries.
Doesn't ruby with its "operators are actually methods and fair-game for redefinition" implementation disprove the theory that operator overloading is an a priori Bad Idea (tm)?
Those guys seem to manage the flexibility pretty well.
In my experience, ruby's '+' doesn't get redefined all that often. The most-redefined operators I've seen are '[]' and '<<'.
I think it works because people don't usually just go around wantonly pushing objects onto each other. It's usually part of a DSL that's used deliberately. Some of the craziest I've seen were things like _why's Hpricot library, which made a sort of xpath-like DSL:
doc / :div / ".foo"
If I saw it out of context I'd assume it was some sort of pseudocode.
His point is that, if you aren't familiar with the code, operator overloading can be difficult to read. It gives objects the appearance of being native types, and it is sometimes not entirely clear what the result of the overload might be. What does "dog + cat" equal? In an extreme example, if you are crazy enough, it might make sense to have animal + animal = baby animal. You need to temper that example down to something closer to reality :)
I personally like overloading, but I think it's probably too easy to abuse, and I can see how it would cause problems with a team size larger than 5.
As far as I can see, whenever the Go team encountered a language feature that could possibly be abused, they always deferred to leaving it out. Whether that is good or bad I'm not sure we will know until we have years of experience with it.
> As far as I can see, whenever the Go team encountered a language feature that could possibly be abused, they always deferred to leaving it out.
All features can be abused.
The Go philosophy is more to leave out features that obscure the meaning and understanding of code (what is also known as "magic").
Also part of the Go philosophy is not to include any feature unless it is clear that its benefits outweigh its costs, which include the risk of interacting in unpredictable ways with other existing features.
In other words, the default is to leave things out, rather than to include them, the opposite of a kitchen sink approach.
I think the poster child for operator overloading abuse is iostreams overloading << and >>.
However, operator overloading is very much needed when creating user-defined types that have arithmetic properties, such as bigint or matrix.
As for the confusion about whether + means "add" or "concatenate", that is unresolvable. The D programming language deals with it by introducing the ~ binary operator to mean "concatenate". No more problems.
Although in D one can overload operators to mean any crazy thing one wants to, the consensus in the community is to eschew non-arithmetic use in the same manner that the C community has condemned:
I haven't seen a GUI written in Go yet, but the basic idea is that you don't customize via inheritance; you do it in a different way. For example, in many UI libraries, you can create a function and ask a widget to call you when you receive an event; the widget will have methods like:
addClickHandler(myFunction)
Any customization you could do with inheritance could be done with a callback function instead, provided that the object has the hook you want.
In Go, there is also an "interface" which is basically a set of methods.
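To make that concrete, here is a minimal sketch of callback/interface-style customization in Go; the Button, ClickHandler, and Beeper types are invented purely for illustration (Go has no standard GUI library), so take it as a sketch of the pattern rather than real widget code:

    package main

    import "fmt"

    // ClickHandler is a hypothetical one-method interface; anything with
    // an OnClick method satisfies it, no subclassing required.
    type ClickHandler interface {
        OnClick()
    }

    // Button is a made-up widget. You customize it by handing it a
    // handler, not by inheriting from it and overriding methods.
    type Button struct {
        label   string
        handler ClickHandler
    }

    func (b *Button) SetClickHandler(h ClickHandler) { b.handler = h }

    func (b *Button) Click() {
        if b.handler != nil {
            b.handler.OnClick()
        }
    }

    // Beeper happens to have an OnClick method, so it is a ClickHandler.
    type Beeper struct{}

    func (Beeper) OnClick() { fmt.Println("beep") }

    func main() {
        b := &Button{label: "OK"}
        b.SetClickHandler(Beeper{})
        b.Click() // prints "beep"
    }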
That is more or less how UIs using Cocoa/Cocoa Touch work from a developer's point of view. Most standard views/controls have a delegate property that is informed about certain things happening, or asked when an important decision has to be made. This works extremely well, and by implementing a delegate you get around the problems related to subclassing UI controls/views. Now that Objective-C/C has blocks, Apple has begun to replace (or supplement) delegates with block-based "callbacks", which makes life easier in most cases.
Yes, I think you misunderstood. You are thinking of overriding methods, which is very important. Operator overloading is where you redefine the behavior of a "+" symbol, for example.
In a number of OO languages (Smalltalk, Ruby, Python), + is a method, possibly with some special-casing of addition for integers/floats to speed up the common cases.
I wish I could agree, but experience has shown that not having operator overloading makes (a) operating polymorphically over different number types and (b) creating new number types (decimals, bigints, etc.) really awkward. The former is much of the reason we had to add it to Rust. We have matrices that can operate over any numeric type T that implements the basic operations (so we can write matrix math once and have it work on 32-bit floats and 64-bit floats; this is important for speed vs. precision for browsers vs. scientific computing), but we couldn't put an addition operator in the Num interface, so we had to make "add", "sub", etc. methods. The result made our matrix math operations nigh-unreadable.
How did you solve the problem in your matrix libraries of overloading a single operator multiple times? I was trying to make rudimentary, game-oriented linear algebra library in rust along the lines of glm. I immediately ran into the problem of not being able to implement "mat4 * float->mat4", "mat4 * vec4->vec4" and "mat4 * mat4->mat4" overloads at once. The alternative was only to go with "mult_float", "mult_vec4" and "mult_mat4", but as you say this leads to "nigh-unreadable" code.
It might seem petty, but whilst I understand Rust avoiding overloading functions entirely (due to type problems and code obfuscation), it was enough to turn me off entirely, in this case sending me back to D.
Bear in mind this is not to say Rust is a bad language - there's plenty to like about it. ;)
I was about to say "you can't do it", but I think you can -- the trick is to use a bounded generic implementation. Once "Mul" becomes a trait, you'll be able to say this:
It's admittedly a bit awkward, but maybe that's OK to discourage overloading unless you actually need it. Still, your point was very interesting -- I didn't realize this was possible! -- and I'll spread it around the team.
I don't want to cause a fuss. While Rust might not work for my needs/wants/desires, that's ok. I highly respect those who don't attempt to please everyone. :)
I have never worked with this library, but it seems to me that it doesn't have the problems you described. Maybe having a look at it will bring up some new ideas...
Although on a superficial level it shares the ALGOL/C-style syntax, Rust is a very different language from C++. Some things are possible or easier in Rust as opposed to C++, and vice versa. Copying directly from a C++ library would be difficult, and wouldn't take advantage of Rust's unique strengths.
Okay. Thanks for the info. I just had a quick look at Rust's Wikipedia entry and saw that it has been influenced by C++. But as you pointed out, this might not mean much...
You are missing the point. You might not know exactly what add() does internally, but "+" only gives you a one-character description, requiring you to know, or assume, the behaviour of the terms being added. By using a function name instead, you have far more opportunity to describe clearly what is actually going to happen, requiring less assumption, which as we know is the mother of ....
add(foo, bar) isn't any clearer than foo + bar, but usually an overloaded operator doesn't correspond to "add".
For example, in Javascript: "Hello" + " " + "World!". What the operator there is doing is concatenating the strings, so if you had a method to do it you wouldn't call it add - you'd call it concat.
But then you lose the information that both (usually modular) arithmetic and strings with concatenation are monoids and have a similar structure, and creating generic functions that might exploit that symmetry becomes more difficult.
> For example, in Javascript: "Hello" + " " + "World!". What the operator there is doing is concatenating the strings
Hmm, are we talking about (user defined) operator overloading as a language feature, or about overloaded operators? For example, I hate that 1/2 and 1.0/2 are different things in most languages, but I haven't heard anyone call this operator overloading in the context of C.
What sort of development environment are others here using for go (if using it at all, of course) ? I've had reasonably good experience with the go-mode in emacs.
I'm using vim with the vim plugins that come in go/misc/vim. I use :Import and :Drop for adding and removing imports, and :Fmt to run gofmt in vim. I also have a git pre-commit hook that runs gofmt.
I'm one of the project owners on the GoClipse project. I do a lot of polyglot development (Java, Javascript, C, Go, & a little Python). I've used and liked vim, Sublime w/ GoSublime, and GoClipse. All with gocode. I have a hard time escaping Eclipse, in general, because of my skill profile. It kind of unifies the experience.
Interesting that sublime text comes up. I have been looking for a "bells and whistles" sort of IDE for python for a few days now, I am traditionally a unix person so I have moved along nicely with both vim and emacs as needed, but at this point I need to work with a full featured IDE.
I am using the evaluation version of PyCharm and I must say it is quite impressive. Although paying for an editor does seem odd after using emacs for so many years, it is well-designed software and I think worth the price.
That said, I have been asked to give Sublime Text a try and I must say it looks a lot better than PyCharm; I think I will give it a try next (it is certainly a lot cheaper and, if I understand correctly, has much wider language support than PyCharm).
Sublime Text will never be a bells-and-whistles IDE like PyCharm or PyDev on Eclipse; it just doesn't work that way. The question is whether the extra niceties offered by those IDEs bring enough productivity gains to offset their heavyweight design, which makes them clock-time slow to get things done (launching, navigating around files, etc.).
I think for experienced devs, a text editor is quicker for dynamic languages (less experienced people will get good mileage from an IDE). Go is sort of weird in that it reads like a dynamic language, so a text editor is Good Enough, while the static compiler helps to catch the sort of bugs that float up when you're doing manual (and hence, human-error-prone) refactoring work, like changing the type of something, which IDEs tend to automate for you before compile time. That's why GoSublime really is all you need for Go, as far as I can tell. (NOTE: I've not written anything like even a medium project in Go).
Since posting my comment about sublime text, I went ahead and downloaded and gave it a try. I must say it seems to fit my needs as far as python is concerned quite well, from the short time I spent with it.
It hits the sweet spot between emacs and pycharm quite nicely and I am at this point inclined to buy a license, I think an IDE is more useful for languages like java or scala but for python sublime text will do for me.
Re. the use cases for an IDE, my point was that as my side projects keep growing in size, I need something which is smart about things like refactoring, comprehensive autocomplete, support for debugging etc.
I use Acme, simply because it's got good buffer management and a useful editing language. I also used go-mode about a year ago, before I got sick of emacs.
LiteIDE X (http://code.google.com/p/liteide) is a small, portable IDE for Go with package management, build management, and compilation from within the IDE. Also, light debugging support. And, OSS, of course. I've used it on smallish Go projects without problems.
I agree. For some reason the whole OOP thing got really out of hand and has been force-fed to a whole generation of programmers. Yet there's no real evidence that OOP is the right way to go. And then there was the whole Java deal where the "everything is an object" mantra was taken so far that it hurts. The result is probably the most expensive mistake in the history of computing with machines.
> UPDATE: I have a feeling that in 25 years we'll be dissing the current fad du jour - functional programming.
In 25 years, we will be laughing at the present for sure. I just disagree on what that fad is (functional programming is hardly popular enough to be called a "fad", but it's been bubbling under for 30+ years). I think it is dynamic programming languages like Ruby and Python, which to some degree are great but fall apart quickly. Another candidate is Node.js -style asynchronous programming, which will be laughed at once a mainstream language ships with a proper async model like the IO manager of Haskell or Erlang.
It got out of hand, but it's still a rather nice paradigm if you're actually simulating a system or designing a GUI. Teaching it as the One True Paradigm is certainly bad, but it definitely has its uses.
Not really. HTML/CSS/JavaScript combination is not really OO, but works really really well.
In general, I think that any "programming language" for GUI is a fail - we need to develop a declarative approach to GUI (like HTML/CSS, but with more features (e.g. effects) and more emphasis on Application Development (e.g. it's still really hard to create a photoshop-like interface in HTML), less on text presentation).
HTML kind of sucks as a GUI though. You have to work really really hard with Javascript to make it work semi-well.
HTML+HTTP works really well as a way to scale up client / server GUI over a high latency / low bandwidth network. That's its strong point; not that it makes for a good UI in terms of human factors.
In my opinion, even the former notion is to a large extent flawed... Sure, there are several classes of different datatypes that really are different (e.g. mathematical objects, such as vectors, matrices, real numbers, ratios, complex numbers, ..., then strings, channels, binary data, time data...), but most data structures used in most programs are simply either sequences, or maps (dictionaries). I prefer Lisp's/Clojure's approach here - have many functions operating on few data types, as opposed to the inverse.
>I prefer Lisp's/Clojure's approach here - have many functions operating on few data types, as opposed to the inverse.
...that doesn't accurately describe a flaw in Go at all, and stems from a common misconception of Go's type system; namely that it is Java's type system, which it is decidedly not. The interfaces make a big difference.
An interface is simply a set of methods. Any object that implements those methods implements that interface. Adhering to an interface is implicit; you never have to say "type Stanley implements the Cat interface". If the Cat interface is just a "Meow" method, and Stanley can "Meow", Stanley is a Cat.
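A minimal sketch of that example (the type and function names are just the ones from the paragraph above):

    package main

    import "fmt"

    // Cat is satisfied by anything with a Meow method; there is no
    // "implements" declaration anywhere.
    type Cat interface {
        Meow()
    }

    type Stanley struct{}

    func (Stanley) Meow() { fmt.Println("meow") }

    // Pet accepts any value that satisfies Cat.
    func Pet(c Cat) { c.Meow() }

    func main() {
        Pet(Stanley{}) // Stanley is a Cat because Stanley can Meow
    }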
Take, for example, the io.Writer interface. io.Writer is a method set that contains a single method: the write method. This is the definition for io.Writer:
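    // io.Writer, as defined in the standard library's io package:
    type Writer interface {
        Write(p []byte) (n int, err error)
    }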
This interface definition says "a Writer is any object that has a Write method. The Write method must accept a slice of bytes as its only argument, and it returns an integer and an error". Any object that implements this method also implements io.Writer. Therefore, any function that accepts an io.Writer may accept any object that defines this method. (when accepting io.Writer, the object's type is io.Writer; the only thing you can do with an io.Writer object inside of a method that accepts an io.Writer parameter is utilize its Write method, since that's the only thing you know it has).
So, for example, in the encoding/json package, there is an Encoder object. The Encoder object has just one method: the Encode method. This is the signature for the Encode method:
func (enc *Encoder) Encode(v interface{}) error
This method definition reads "the function for the *Encoder type called Encode accepts an interface{} v and returns an error". interface{} is the empty interface; all objects implement at least zero methods, so any object can be supplied; it is valid to pass any object into the Encode method. The returned "error" value will let us know if something has gone wrong.
Now then. We know that we're encoding data to the json format, but to where is it being encoded? Where is the output going? The Encoder object embeds an io.Writer; encoded items are written into the writer. That's a big leap. How do we know which io.Writer to write to? We inject the io.Writer when we create the encoder. This is the signature for the function that creates a json encoder:
func NewEncoder(w io.Writer) *Encoder
It has only one argument: an io.Writer. And io.Writer has only one method: the Write method. That means that for any data target at all, if you define a Write method, you can encode json to it.
So what io.Writers are commonly found? There is an io.Writer for a UDP socket, a TCP socket, a websocket, an http response, a file on disk, a buffer of bytes, etc. The list goes on.
With this one Encode method, and this one Write interface, we are able to Encode json data to arbitrary targets. There's none of that JSONFileWriter, JSONHTTPResponseWriter, JSONUDPSocketStreamer stuff like you would get in other statically typed languages.
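A small sketch using only the standard library, showing the exact same Encode call writing to two different io.Writers (an in-memory buffer and stdout); the data being encoded is an arbitrary example:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        v := map[string]int{"answer": 42}

        // Encode into an in-memory buffer...
        var buf bytes.Buffer
        if err := json.NewEncoder(&buf).Encode(v); err != nil {
            fmt.Println("encode error:", err)
            return
        }
        fmt.Print(buf.String()) // {"answer":42}

        // ...and, with the same code, straight to stdout (or a file, a
        // socket, an http.ResponseWriter -- anything with a Write method).
        json.NewEncoder(os.Stdout).Encode(v)
    }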
In 25 years, the kids will still be trying to decide which approach is black-and-white "correct", while the experienced will still be using a blend of styles depending on the given problem.
Eliminating side effects sounds brilliant until you start interacting with filesystems or networks. What, you can't memoize those ops or split them across a pmap?
Briefly, functional programming is good at one use case (adding new operations over the data type) and weak at one use case (adding new data type variants), while OO is the opposite (adding new data type variants is easy, while adding new operations is not). You have to choose between OO and FP based on which notion of extensibility is more important to you for the problem at hand (unless you use the relatively exotic solutions of multimethods or the generics trick that Wadler originally proposed).
My takeaway is that OO and FP both have their time and place, and the pragmatic programmer will learn when to use one or the other instead of choosing one camp and bashing the other side.
> UPDATE: I have a feeling that in 25 years we'll be dissing the current fad du jour - functional programming.
Only if unwashed masses start doing a half-baked version of FP without really understanding it. This is what happened to OO. It's similar to what happens to musical genres.
Nowadays, a polyglot approach is the only right path for a software company. When I arrived in Berlin a month ago, I was positively surprised that SoundCloud supports local Clojure and functional programming groups. Keep up the great work!
The usual text editor features, like templates, reduce the typing, but the point is that the programmer must think, each time, about what they want to do when a function fails. This can become either a really good habit, or a distraction, depending on your perspective and problem domain.
When people say Go has poor or no error handling, this is probably what they are complaining about. The fundamental properties offered by try/catch/throw are unpacked by Go into defer, recover, multiple return values, and type switches. Go's designers make the argument that, when used thoughtlessly, exceptions encourage error-handling antipatterns.
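A short, hedged sketch of that unpacking (safeDiv is invented for illustration): defer plus recover turns a panic back into an ordinary error value, while the normal path uses a plain multiple-value return:

    package main

    import "fmt"

    // safeDiv converts a runtime panic (integer divide by zero) into an
    // ordinary error return, using defer and recover.
    func safeDiv(a, b int) (q int, err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("recovered: %v", r)
            }
        }()
        return a / b, nil
    }

    func main() {
        if q, err := safeDiv(10, 0); err != nil {
            fmt.Println("error:", err)
        } else {
            fmt.Println("quotient:", q)
        }
    }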
Go doesn't "lack error-handling." They're referring to the fact that Go doesn't have exceptions; you check return codes to detect and handle errors. For some this is tedious, but has advantages (mentioned in the article) with respect to understanding an entire program.
The problem is it's no better than C - the correct way is to return sum types, either values or errors. A good language would statically check that any returned values are only used when there are no errors, preventing invalid values being accessed.
This requires some flow analysis, but brings real benefit and safety.
It is considerably better than C, in that you can return multiple values. A common approach is to return two things: the data result of the function, and an error-indicating value. Checking the error right at the call is very easy to read and understand, though some feel the code gets verbose. I feel that the execution path is easier to understand this way than with exceptions ("goto considered harmful").
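For example (os.Open is the real standard-library call; the filename is made up):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The (value, error) pair is checked right where the call happens.
        f, err := os.Open("config.json") // hypothetical filename
        if err != nil {
            fmt.Println("open failed:", err)
            return
        }
        defer f.Close()
        fmt.Println("opened", f.Name())
    }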
C's multiple-arguments-but-only-one-return-value always was a bit weird. Does anyone know how C's authors decided on this feature?
A well-known and well-tested approach, sum types (tagged unions), would have been way better, but they were cast aside because the designers were apparently unaware of the improvements to union types made since C's inception:
There is no reason for any new language to lack tagged union types. It disturbs me that Brian Cox rejects a simple, proven language feature that he does not even understand, even though it would take all of 2 minutes of searching/reading to understand. I'm sure he spent at least that long composing the replies in that thread, never moving past the ego-threat of "will Go ever have X?" to honestly evaluate the question.
"There is no reason for any new language to lack X" is false for all X. Languages differ in their goals, and there's no feature that all languages have to have. Even basic features like assignment can be questioned.
You're jumping to a false conclusion that Go's designers were not aware of those language features, and that that was the reason why they aren't in Go.
You said that sum types were omitted from Go because the designers were not aware of more recent developments. That's not true. They were omitted because they do not mesh well with the other features of the language, such as zero values, interfaces and embedding.
Whether you agree with that latter point is moot. Go's designers were and are fully aware of sum types; they chose to omit them from Go for a reason, not because they were ignorant of their existence.
Sum types are conceptually cleaner, but it's unclear to me how they would help with analysis for this case. I don't know about formally, but informally, it's easy to determine by inspecting the code that a function returns either a value or an error, even if it's written as a tuple. It's also easy to determine whether the call sites are handling errors correctly.
It's not that there's no error handling; it's that there are no exceptions. Instead, you use multiple return values, one of which is an error, and you check the return value for an error. It forces you to handle errors at the call site and makes diapers unimplementable.
> It forces you to handle errors at the call site and makes diapers unimplementable.
I don't see how the latter is true. What's the practical difference between wrapping a function call in a try/(no-op)catch and entirely ignoring the error return value?
If the function returns a useful value and an error, then you'll have to assign the error to "_" to ignore it, which is a pretty big hint to the reviewer that it's being suppressed. So in cases where you want to "force" error checking, returning multiple values is probably good enough.
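For example (strconv.Atoi is just a convenient standard-library function returning a value and an error):

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        // Explicitly discarding the error -- easy for a reviewer to spot.
        n, _ := strconv.Atoi("42")
        fmt.Println(n)

        // The usual form: handle it at the call site.
        m, err := strconv.Atoi("not a number")
        if err != nil {
            fmt.Println("parse failed:", err)
            return
        }
        fmt.Println(m)
    }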
is this from looking at their jobs page? because usually those 'requirements' are just guidelines to make sure the people applying know their stuff, and if you contact the company it turns out they're a little more flexible.
Changing languages is not how progress happens. It's only progressive when the long-term benefits of the language outweigh the inefficiency of training all your devs to use it. Assuming that is true for Go (debatable), at the end of the day, SoundCloud still gimps their hiring pool far more than if they chose something like Node. Learning doesn't bother me at all - I like learning - but I can't advocate for battle-testing Go in a mainstream environment when there are plenty of other fast and tested languages. If Go evolves into a language that is more desirable in the everyday stack, that process should be organic, just as it was when people decided to switch to Ruby.
Does it make your day every time you see another unnecessary language or framework on the HN homepage? I'm just as indifferent to those as I am to caring about what SoundCloud does, if not more so.