Go 1.1 is released (golang.org)
484 points by enneff on May 13, 2013 | hide | past | favorite | 290 comments



I'm rewriting Nuuton with Go. Before, I was using a combination of Python (with Django) and Lisp (for the back-end processing). After playing around with Go for a couple of hours, I decided to do a little test and wrote a simple API that mimicked how Nuuton worked. It turned out to be so simple and fun to build that I decided to trash the old code base and rewrite it in Go. The performance is great (it runs circles around the Python/Django code, as expected), but what I like is the syntax and how easy it is to work with. In my mind, it's the right mix of C and Python. It's just an awesome language to work with.


I tried to use it but felt it was so slow to express your ideas that programming got boring (no advanced loops/generators, no list comprehensions, lambda functions aren't used very much IIRC). I'd probably stick with Python unless performance matters.

But OTOH it really "just works" out of the box.


It is "boring" -- absolutely. It is almost insanely so at times (even when solving complex problems). The number of times you get to feel insanely clever is very low.

But that output / work product is what counts -- and a huge part of Go's value is around the EDGES of a system (build, standard formatting, deploys, etc.).


Well channels can get pretty exciting at times, but they are so easy to work with I don't spend much time on them.


True, and they allow unique and sometimes amazing ways to solve problems.


It's a bit strange to complain about lack of use for "lambda functions" in a language where there's a keyword that generally accepts a function reference, most often a function literal? (This keyword coincidentally is the language name lower-cased.)

WRT generators: have you tried using channels? "The Tour of Go" seems to instill in you the idea that you should not iterate over lists but use channels instead.

I also miss list comprehensions and, for that matter, type-parametrized functions.

OTOH, static duck typing is brilliant: best of both worlds.
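
(For readers new to Go, a minimal sketch of what "static duck typing" refers to: types satisfy interfaces implicitly, and the check still happens at compile time. The names here are made up.)

    package main

    import "fmt"

    type Quacker interface {
        Quack() string
    }

    type Duck struct{}

    // Duck never declares "implements Quacker"; having the method is enough.
    func (Duck) Quack() string { return "quack" }

    func main() {
        var q Quacker = Duck{} // checked statically at compile time
        fmt.Println(q.Quack())
    }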


Maybe I used the wrong terms; what I meant was that I didn't see stuff like "filter(lambda x:x>=50, s)"
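
(For comparison, a rough Go equivalent of that filter, written with an anonymous function and an explicit loop. There is no built-in filter, so everything here is made up for illustration.)

    package main

    import "fmt"

    func main() {
        s := []int{12, 50, 73, 99, 8}
        keep := func(x int) bool { return x >= 50 } // the "lambda"

        var out []int
        for _, x := range s {
            if keep(x) {
                out = append(out, x)
            }
        }
        fmt.Println(out) // [50 73 99]
    }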



This is indeed interesting, and this is why I'm missing something like func <A, B> Map(f func(A) B, input []A) []B that would statically check the A and B types at a call site.

BTW it looks like less work at compile time than the elaborate run-time solution that you linked to. It probably will not even require generation of specialized versions for primitive types.
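
(To make the trade-off concrete: without type parameters, a general Map in current Go tends to fall back on interface{} and loses exactly the static checking asked for above. A hedged sketch, not the linked run-time solution:)

    package main

    import "fmt"

    // Map applies f to every element, but A and B have collapsed into
    // interface{}, so mistakes only surface at run time.
    func Map(f func(interface{}) interface{}, input []interface{}) []interface{} {
        out := make([]interface{}, len(input))
        for i, v := range input {
            out[i] = f(v)
        }
        return out
    }

    func main() {
        double := func(v interface{}) interface{} { return v.(int) * 2 }
        fmt.Println(Map(double, []interface{}{1, 2, 3})) // [2 4 6]
    }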


If you're missing lambdas based on Python's neutered implementation, you're not really missing anything with Go.


What is your Nuuton project about ?


A search engine. From what I know, it's the only engine using Go (aside from Google). Blekko uses Perl, DDG uses Python (I think), Bing uses unicorn blood, and all the smaller ones use Java and/or C++.


Hm, it seems DDG uses Perl actually[1].

[1]http://help.duckduckgo.com/customer/portal/articles/216392-a...


You needn't have said "aside from Google": it seems that Google doesn't use Go for the search engine.


It's mostly C++ for indexing and lookup, but "Search" is a big topic, and there are Go, Java, and Python parts in the many systems that make Google's web search work. Go is growing in popularity within Google.


They do use it on some parts of the system. But it's not a pure Go implementation like Nuuton. (:


I must say I like the name. What differentiates your search engine from the others?


I am building it for us, rather than to hype it up and cash in. It is truly a search engine aimed at hackers. In terms of features, you will learn as it goes through the BETA. Keep posted at the blog: nuuton.com/blog.


Awesome idea, because as we all know the choice of implementation language is absolutely crucial to the success of a startup.


How much fun the founders have is directly related to the chance of the startup's success, and a person's ability to learn is directly related to their personal success.

Sarcasm doesn't belong on HN.


Well, I was having fun with Python and Lisp, but it was a bit painful between all of the different parts. Plus on the front end I was writing mostly Django, which has its own set of conventions. I like to keep things simple, and that made me move to Go. It's simple.


Exactly. You switched to Go because it rewarded you. Here you made an improvement that you liked, and you grew happier. Now, when you sit down to write, you'll be more likely to think "Wow, here I get to learn another piece of the Go ecosystem," which will probably drive you to continue working on your startup.


I think a huge part of Go's success is the go command. For those of us coming from scripting languages, having the whole "configure/make/install" problem solved (and solved well) by the core language greatly lowers the barrier to entry.


Go acts like a scripting language, but runs like a compiled one. That's why I love it. The right mix of C and Python.


That trait is shared by lots of languages though. Racket and some other lisps do exactly that. So does PyPy.


The difference is it's a true compilation. Most Lisps save something like a data image that still needs to launch the runtime.


So your point is that founder happiness is directly related to the startup's chance of success, which seems questionable, but possibly true. Then you define happiness as things that are good choices for the business. Of course doing things that are good for the business are good for the business. This is a bit of a circular definition.

That said, choice of programming language is important. It's possible to be 10x to 100x more productive in one language (at least to start with) if it has libraries (in the standard lib or from the community) that do most of the hard work for you. It's very easy to get started with a Python project if your work at all uses NumPy/SciPy/etc. The choice of programming language is also a big factor in which engineers you can hire in the future.


Yes, spot on. I already have all of the business side stuff done, and will now focus on the fun part. It's refreshing to be able to sit down and just code. I'm putting up a simple blog at nuuton.com/blog. I'll blog about my experience coding it. Keep posted.


If the stories are to be believed, the only reason we're having this conversation on this particular message board is because they actually do at times.

http://www.paulgraham.com/avg.html


It's definitely important. Not only can the language contribute to better development (however you want to define "better"), Go, in particular is attractive to engineers you might really want to hire.


I thought about COBOL, but then I realized I did not work at a bank.


In February I deployed a website written in Go. It's been running 24/7 since then, pulling from thousands of data sources every ten minutes, no problems, no memory leaks. The thing is so fast on my cheap server (4 ms turnaround) that the only thing I would've done differently is to use a cheaper server.


Does Go have a de facto web framework?


> Does Go have a de facto web framework?

There are a few, but I think it's still too early to really call out a "de facto" framework.

Revel, as mentioned, is a popular "full stack" framework, but it can be pretty heavy. web.go[1] gets a lot of use (it sits at a Flask/Sinatra level), and beego[5] looks good, but it's also totally acceptable to plug a few libraries together without much work on top.

For example, I'm using:

- mux, sessions, schema & context from Gorilla[2]

- go.auth for social sign-in [3]

- the RethinkDB driver [4]

... and that's about it. Write your routes with mux, set up your handlers with auth/sessions/schema, and generate your models & queries with whatever DB you want. The standard html/template library is good enough (and there's a Mustache library if you need it), and the standard http library covers everything in between.
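
(A rough sketch of that "plug a few libraries together" style, using gorilla/mux on top of net/http. The handler and route names are made up.)

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "github.com/gorilla/mux"
    )

    func showPost(w http.ResponseWriter, r *http.Request) {
        id := mux.Vars(r)["id"] // route variable pulled from the URL
        fmt.Fprintf(w, "post %s", id)
    }

    func main() {
        r := mux.NewRouter()
        r.HandleFunc("/posts/{id}", showPost).Methods("GET")
        log.Fatal(http.ListenAndServe(":8080", r))
    }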

TL;DR: If you enjoy writing Flask/Sinatra apps, you'll enjoy writing Go web apps. If you're coming from a full-stack framework (like me), there is definitely a learning curve, but I feel that Go's documentation and std. lib flatten that out a little.

[1]: https://github.com/hoisie/web

[2]: http://www.gorillatoolkit.org/

[3]: https://github.com/bradrydzewski/go.auth

[4]: https://github.com/christopherhesse/rethinkgo

[5]: https://github.com/astaxie/beego


Spot on. I suggest newcomers save a copy of your comment. It is a good intro to the Go biome.


When you said Revel is "heavy" - did you mean slow, uses lots of memory/CPU/other or just has lots of custom bits to learn/use?


> When you said Revel is "heavy" [...]

It provides nearly everything, likely including things you don't even need. A lot of the parts included won't be needed by a "typical" web application, and if you want to do things another way you'll need to spend some time switching those components out. To be fair though, having "everything" can be a big plus, especially when you don't know what you might need (or don't have the time to glue things together).

I don't know how tightly coupled the components are in Revel, but if it's like Django, some things are easy to add/remove (middleware), and others are much harder to work around (ORM, views, etc.).


web.go is influenced by web.py.


Many of the people in the Go community are anti-framework and pro-library. (Inversion of control: a library is code that your code calls, versus a framework, which is code that calls your code.)

That being said, right now the dominant Go web library is "net/http" from the Go standard library, supplemented by Gorilla (as needed): http://www.gorillatoolkit.org/

If what you are looking for is really a Go web framework however, there are a few in development including Revel: http://robfig.github.io/revel/


I've written a small production web project and several medium-sized personal projects in Go, and have not yet found a need for a web framework outside of net/http. I've used several libraries that work as nested/stacked handlers under net/http; Go's structure embedding is a natural fit for that.


I've seen Gorilla (http://www.gorillatoolkit.org/) used quite a bit. The standard library is also good enough for small and quick things.


I would love to know about this as well. Basically, a rundown of how Go works in relation to running various web-facing stuff (a simple API, for example).


See http://golang.org/doc/articles/wiki/ for a starting point.
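
(For a taste of what the standard library alone gives you, a minimal JSON endpoint with made-up names; the wiki article above goes much further.)

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }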


Do you have a database?


Yes, SQLite. Users are served solely from memory.


No generics. Less speed than C. Lame.


Seriously, congratulations to the Go team.

And for those who missed my attempt at humor: http://slashdot.org/story/01/10/23/1816257/apple-releases-ip...


"Damn the torpedos and downvotes, I'm gonna make this joke work!"

You, sir, are a risk taker. Kudos.


Yeah, who needs type and memory safety, a large standard library, OO via interfaces, structural typing, garbage collection, functions as first class citizens, closures, blisteringly fast build times or easy concurrency.....


It's a reference to CmdrTaco's impressively off-base assessment of the first iPod: http://slashdot.org/story/01/10/23/1816257/apple-releases-ip...


It wasn't really off-base, it's just that his opinion doesn't correlate very well with the general public's.

That said, it's fun to read the old thread. I like this comment from dbrutus:

Do you want revolutionary? Ask yourself, what chip is this running? Ask yourself, what is the OS on this thing? This is v0.8 of Apple's PDA folks. They're just waiting for the hardware and the economy to get a little better.


>It wasn't really off-base, it's just that his opinion doesn't correlate very well with the general public's.

Well, opinions can be off base too. Especially in engineering matters.

For example he was totally off base that wireless would be feasible in a 2001 machine, with adequate battery life and performance -- or that it would offer any benefit at the time.

He was also totally off base in commenting about the "less space than a nomad" being lame, ignoring the fact that the iPod was also far more pocketable.

Plus, he deemed it "lame" (probably just for those two reasons) without even commenting on the vastly improved navigation and user experience compared to everything that was there before. Again, off base.


> For example he was totally off base that wireless would be feasible in a 2001 machine, with adequate battery life and performance

Well I mean, he presumably thought the Nomad was cool, indicating that massive size (nearly a pound!) and shit battery life were not really important to him. The Nokia 9500 came out 3 years later with WiFi and was pretty small, even by today's standards. Could they have put WiFi into a device the size/weight of a Nomad 3 years earlier, and gotten at least some shitty battery life out of it? I suspect so.

"Lame-ness" is an opinion, not really something that you could call objectively wrong. (Though obviously I think we all agree that his degree that the iPod was lame was itself lame).

Had he said "there is no market for this" instead of merely implying it, then he would have been wrong in a more objective way.


>Adequate battery life and perfomance and the importance of pocketability and improved UX are all subjective.

It's true that "adequate battery life" does not have a hard, very specific answer, and it's more of a range (and also depends on the capabilities of the era and what users are used to have).

But it's still not "all subjective". E.g. if somebody says "1 hour of mp3 player battery life is totally fine for me" -- that doesn't mean he has a valid opinion that people should respect. In fact, he would probably be laughed out of any product design meeting. (Sure, it can be fine "for him". But that's like saying "being hit with a stick is fine for me" if you are a masochist. In no way does it represent what would work for most people.)

>Personally, I always thought iPods had a terrible UX for forcing you to use iTunes instead of just mounting the drive

That was its definitive advantage. Just mounting the drive would be the backwards option, for people used to managing their own directories instead of letting the computer do it for them.

With iTunes they got people to also grasp the concept of file metadata, playlists, arbitrary organization based on tags, smart playlist queries, etc. And it opened the road for offering a Store, iTunes Match, et al.

>And that he just took into account those two reasons is just speculation.

It's based on the only data we have. What he actually said, and the reasons he gave for saying it.


There is no such thing as "valid or invalid opinions". There are only opinions you agree with or disagree with, or (mistaken or accurate) statements of fact that the speaker is inaccurately calling an opinion ("The moon is made of cheese" is not an opinion, even if the speaker states that it is. Additionally it can be said to be false).

CmdrTaco's opinion ("Lame") is an opinion, not a statement of fact masquerading as an opinion, so it is neither valid nor invalid. Most reasonable people would strongly disagree with this opinion.


>There is no such thing as "valid or invalid opinions".

Well, for all practical purposes there is.

Your underlying premise is that personal value/taste judgements cannot, by definition, be wrong, which I strongly disagree with.

It might not be as clear cut as 1+1=2, but that doesn't mean that everything goes.

If opinions were such as you suggest (totally separate from judgements of fact and totally divorced from the realm of validity or invalidity), there would be no point in sharing them, because they would be totally irrelevant, irreducible and meaningless to anybody but the person having them.

This whole thing is based on the idea of "opinion" as "personal taste beyond critique". Like one happens to like broccoli whereas another likes nachos. Well, in my, err, opinion, even taste for food can be subject to critique. If a food is bad for you, I can say you're wrong to like it. Even if he says "this food is lame FOR ME", one can show him that his "personal taste" is mostly a cultural construction, an effect of early nurture, national cuisine, class, etc. So, it can be "lame for you" just because you don't know any different.

>CmdrTaco's opinion ("Lame") is an opinion, not a statement of fact masquerading as an opinion, so it is neither valid nor invalid.

There is a thing such as an external reality, and in that reality things have attributes and rankings -- if from nothing else, at least from expert and general consensus. And I don't think either the expert or the general consensus validates his opinion.


All you are saying is that most people have a different opinion than him. In other words that he is subjectively wrong.

If opinions could be objectively right or wrong, then they would not be opinions.


>All you are saying is that most people have a different opinion than him. In other words that he is subjectively wrong

No, I'm also making the bigger claim that "most people have a different opinion than him" _for good reason_.

That is based on a qualitative appreciation of certain measurable aspects of reality.

Plus, opinions are also based on facts. If you get the facts wrong, or get what the facts imply wrong, then your opinion is also wrong. (a crude example: I, mistakenly, believe that Mother Teresa killed millions of people, so "in my opinion she is evil")


What fact is "blue is the best" based on? You have a very warped idea of what opinions have to be; you are confusing statements of fact with opinions.


If someone has the opinion that they can leap off a building and fly with their psychic power, the ground still awaits. I'm not saying you're wrong about the specific case here, but people throw "opinion" around sometimes when they mean "I don't like the facts so I don't believe them."


Adequate battery life and performance and the importance of pocketability and improved UX are all subjective. (Personally, I always thought iPods had a terrible UX for forcing you to use iTunes instead of just mounting the drive.) And that he just took into account those two reasons is just speculation.


haha, thanks :)


I do complain a lot about Go's design on HN, but today is a reason to celebrate, not to bash the authors.


His comment is a reference to the infamous CmdrTaco comment on the newly-released iPod. "No wireless. Less space than a nomad. Lame."

I assume he is jokingly suggesting that the detractors are wrong.


I assume he is jokingly suggesting that the detractors are wrong.

Yes, I believe the detractors are wrong. And I believe they are especially wrong about generics. One of CloudFlare's most hard core C++/Boost guys 'tried out' Go to see how he liked it and is rewriting everything that needed redoing in Go.

The combination of a 'small, powerful' language that supports multicore machines and the good standard library means people can _get things done_.


They're entitled to their opinion, though. Lameness is in the eye of the beholder. I think the Slashdot commenter on the iPod is right. It is lame to accept that we can't just have a huge collection of music, which is what convincing people to pay the exorbitant amounts the RIAA demands for music or using Spotify amounts to.


Keep in mind that CmdrTaco's comment was made in 2001.

CmdrTaco was of course entitled to his opinion, and nobody is suggesting that we take from him that right. However most would also say that in their opinions, which they are also entitled to, his assessment was very lame.


Chet and Erik download Go 1.1:

Chet: "Get out the drool cup!" Erik: "In use, my friend! Drooool!" Chet: "Then break out the scraper, because someone's gonna have to scrape my jaw off the floor!" Erik: "That's also in use, to scrape my spooge off the monitor! Spoooge!" Chet: "Ouch! Fetch me my eye medicine, because I have an eyeache from watching a whole mixed bag of eye candy!" Erik: "All of the eye medicine's already in my eye! I'm done with the scraper, though."


"It is likely that your Go code will run noticeably faster when built with Go 1.1" I have been extremely impressed with Go's performance and now it is even faster! :-)


Really? I always found it somewhat lacking in this. It has the potential to be so so much faster. Glad to see they're finally working on it.


In terms of speed, Go 1.1 is nearly on par with Java. [1]*

[1] - http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?t...

* - To the extent that you may find micro-benchmarks a useful measuring stick.



Go is awesome!


As a no-go programmer looking through the data race detector docs (http://golang.org/doc/articles/race_detector.html), to me it shows snippets of a really ugly language with poor readability. Examples from Google's page:

  var service map[string]net.Addr
When I read this it's easy to skip over string, aside from the fact that ] as a delimiter also makes it look like net.Addr was accidentally combined with the previous line.

  func ParallelWrite(data []byte) chan error {
	res := make(chan error, 2)
	f1, err := os.Create("file1")
	if err != nil {
		res <- err
	} else {
Here we see no parentheses used in if statements, yet we retain {. In skimming it's easy to miss the if in the first place. Yet everywhere else they're happy to use parentheses. We also see that "make" is lowercase yet a method, but os.Create and ParallelWrite have caps for a method.

There's also the fact that the array (I'm guessing) of bytes diverges from every other language to put them in front of the type which looks like crap. We see the same poor choices of delimiters as with map value types.

Then there's this

  func main() {
	c := make(chan bool)
	m := make(map[string]string)
	go func() {
		m["1"] = "a" // First conflicting access.
		c <- true
	}()
	m["2"] = "b" // Second conflicting access.
	<-c
	for k, v := range m {
		fmt.Println(k, v)
	}
  }
<-c looks like it's just dangling, and if this is meant to say there's some kind of stream or assignment, then does it just go to /dev/null? Yet "c <- true" is also correct? I'd argue for either void <- c, _ <- c, or c.read() if you wanted to support something like that.

It's also difficult to tell in the for loop if v is being assigned as "range m", or if "k, v" are a tuple resulting from "range m"

It's too bad because the goals of the language are quite nice from a CS perspective.


Did you seriously just post "As someone who doesn't know how to read this language, this language seems really unreadable!" ??

As any Go programmer will tell you, Go is a very readable language. It's one of the design goals. It is _not_ a design goal to impress people making superficial judgements.

Also, LOL: https://news.ycombinator.com/item?id=5327478


I'm hoping you're lol'ing with me and not at me, because that's a comment on the parody of HN, which is obviously a parody comment itself.

Yes I did just post that. Should I wait until there's an article on coding aesthetics first? I'm a seasoned programmer with experience in many languages. Reading a new language should make me want to use it because it's an improvement in many ways, particularly around expressivity and clarity. IMHO Go isn't as good as it could be. I wasn't saying Go as a whole is bad or not worthwhile.


Yes, it's obviously a parody comment, but here you are perpetuating the meme for real. Don't you see the irony?

Like I said, Go isn't designed to impress superficially. It's a pragmatic language for getting work done.


I never said Go was bad. I just said it was ugly and gave examples. Those code samples came from the race condition page detector which I wanted to learn about, because that's awesome.

If a programming experience were designed to impress superficially, then it wouldn't live in a pure text editor at all. I'm commenting on aesthetics as it relates to usability (readability). I don't get this mindset that things that work well functionally have to be ugly. It's a very unix/engineering mentality that just isn't true. You CAN have both.


At you


That sort of comment is a horrible scourge on all the Haskell articles. It's somewhat surprising to see it with Go just because of how C-like Go's syntax is.


Which makes it all the more odd that they kept so much of C syntax yet left out parentheses on if statements, for example. It feels really arbitrary and doesn't seem balanced. Python did away with them, but the : combined with whitespace-sensitive scoping makes that more elegant and readable.


Also, "recent PhD from the MIT Media Lab". Maybe http://scratch.mit.edu/ is more his thing? ;)


No need to disparage Scratch. It's a great learning environment. I'm just amazed by the comment, that's all.


Nope.


Personally, whether or not I superficially find a language unreadable has a remarkable correlation with how likely I am to like the language once I learn more about it. Go does not do well on that measure for me either.

In Go's case, I tried to give it the benefit of the doubt and went through a couple of tutorials, but while there were some nice parts, it mostly just reinforced my initial impression.


> Here we see no parentheses used in if statements, yet we retain {. In skimming it's easy to miss the if in the first place. Yet everywhere else they're happy to use parentheses.

I feel that braces on blocks are more important to readability/clarity—because blocks should be identifiable—compared to parentheses for if statements. I don't think the if gets lost in there, but if you do then that's fine.

> We also see that "make" is lowercase yet a method, but os.Create and ParallelWrite have caps for a method.

That's because "make" is a built-in function for building slices and maps.

Packages with exportable functions (i.e. os.Create, http.HandleFunc, etc, etc) use capitalisation to denote "public" functions[1]. Functions within the os or http packages (for example) that should not be used directly by someone using the library retain a lowerCase approach. This reduces the risk and confusion surrounding which functions you should be working with when using a library.
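
(A tiny illustration of that capitalisation rule, with a made-up package:)

    package store

    import "errors"

    // Open is exported (capitalised): callers outside the package use store.Open.
    func Open(path string) error {
        return validate(path)
    }

    // validate is unexported (lower case): only code inside package store sees it.
    func validate(path string) error {
        if path == "" {
            return errors.New("empty path")
        }
        return nil
    }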

> It's also difficult to tell in the for loop if v is being assigned as "range m", or if "k, v" are a tuple resulting from "range m"

Each key:value pair in m (a map, or "dictionary") is being assigned—hence the :=, which declares & assigns k & v in one line—for the entire range ('length', really).

Obviously you don't agree with some of the syntax in Go. That's fine. But some of your criticisms appear to stem from a misunderstanding or lack of knowledge about some language-agnostic concepts (channels, range, exportable functions) that I don't think Go obscures.

[1]: http://golang.org/doc/effective_go.html#commentary


I agree with you on blocks/scoping. If dropping the parens, I think Python does it right with its indentation-based blocks.

Was uninformed about caps. That was a minor point for me, however, and in hindsight I should have at least investigated that before saying something.

Ok I see the consistency in :=, but in that sense I'd personally like to see something closer to Scala where the tuple is explicit by doing something like:

  for (k, v) := range m {
Which doesn't look all that much better, mostly because of the := (which I've never liked in any lang) and the spacing between range, m, and {.

  for (k, v) := range(m) {
Could be yet another way to help make it more clear what is modifying what.

When you're tired and have been staring at code all day long, having more visual gestalt in the language helps... syntax highlighting serves much of this but it's better IMHO if the language highlights itself.


Rob Pike made an excellent point in one of his presentations on Go.

He pointed out that embedding a language that relies on indentation inside another language (through a DSL or FFI) makes things more difficult, because it is easy to introduce bugs in the host language when the indentation is incorrect but not obviously so, and such bugs are non-obvious to debug.

I recommend watching all of his videos as they are very illuminating due to his primary focus being that of software engineering rather than theory.


> There's also the fact that the array (I'm guessing) of bytes diverges from every other language to put them in front of the type which looks like crap.

Every other language except:

- Actionscript, TypeScript, Haxe

    var x:int = 10;
- ML, OCaml, Haskell and derivatives

    val x : int = 10;
- Pascal and derivatives

    Var x : Integer;
- Ada and derivatives

    x : integer := 10;
- Old-school MASM Assembly syntax

    DSEG            segment
    x               word 
- SQL

    CREATE TABLE tbl ( x INT )
- Basic

    Dim x as Integer
- XML DTDs even

    <!ELEMENT x (#PCDATA)>


"put them in front of the type"

I meant the [], not just generally the type.

Go: data []byte

Most (all?) other languages that use [] in arrays: data byte[]


Sorry I misinterpreted. When Go was introduced a lot of people got caught up in the declaration order being opposite to C-style languages and I assumed you had too.

Nevertheless, it's similar to Algol's bracket syntax, except in Algol the type comes before the variable:

    []INT numbers = (0, 1, 2);


> There's also the fact that the array (I'm guessing) of bytes diverges from every other language to put them in front of the type which looks like crap.

It does look weird at first—I thought the same thing. But after writing some code with it, the design has a nice simplicity. When decoding C types, you have to sort of go back and forth: "int *x[]" (an array of pointers to int). But in Go it's always one direction: "x []*int". The type is always grouped together and that ends up being much easier for humans to parse.

Ditto for putting the return type after the function args. Seems backward if you're used to C/Java/etc but it makes things nicely composable, plus now every function starts with "func", and not a random return type.
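
(A hedged illustration of that "one direction" reading, not taken from the docs:)

    package main

    import "fmt"

    var scores []int           // "scores is a slice of int"
    var table map[string][]int // "table maps string to slice of int"

    func sum(xs []int) (total int) { // parameters first, result type last
        for _, x := range xs {
            total += x
        }
        return total
    }

    func main() {
        fmt.Println(sum([]int{1, 2, 3})) // 6
    }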

I'm fine with losing the parens around the ifs. Other languages do that and less syntax can look less messy. I do, however, HATE the required curlies. And the fact that stupid go-fmt always wants to format them differently than I do:

    if short_check { short_code() } // My preferred formatting for short error tests
    if short_check { // go-fmt always wants to do this.
        short_code()
    }                // Blech.
For that reason alone I refuse to use go-fmt on my code :-). But I'm stubborn that way...


Why not int x? :) No confusion there! Except what that actually is underlying it all. I'm actually shocked that Go even has pointers given coding C++ at Google was all about passing by reference, although at least they put a lot of sane constraints on them (no pointing to arbitrary memory addresses, no pointer arithmetic, etc).

Google code style guidelines are anti-one liner, so that's not surprising to me that Go's formatter tries to enforce that.

I also employ one-liners, but judiciously. It can be easy to miss the condition or side effect, particularly with returns. I waffle on whether it should or shouldn't be allowed. But there's something to be said for code compactness on large projects...


> For that reason alone I refuse to use go-fmt on my code

Nooooo! :-p

I used to prefer short one-liners like that (well, still do in ruby), but in Go I'm starting to kind of enjoy the blindingly obvious, verbose way of doing it. It makes me feel like Mr. Rogers[1][2]. Sure it's plodding, but it's exceedingly clear and easy to understand.

[1] http://www.youtube.com/watch?v=yXEuEUQIP3Q [2] http://www.youtube.com/watch?v=Upm9LnuCBUM


Of course, you could have:

   def parallelWrite(data: Array[Byte])
No pointers to worry about, generics, etc ...


What a pointless complaint. Literally none of those things are real issues. Use the language for a day and it'll all be second nature.


Do aesthetics not matter?


Aesthetics in the sense of a clean syntax, or minimizing language warts. Not choice of delimiters or a lack of parentheses in certain constructs. There are literally thousands of choices the language designers made that are more important than the criticisms you brought up.


`<-c` is the main routine waiting for the other goroutine to finish (`c <- true`). It's being blocked until there's something for it to receive.

> It's also difficult to tell in the for loop if v is being assigned as "range m", or if "k, v" are a tuple resulting from "range m"

The `range` clause returns two values: the key (or index), and the value at that key. It's really pretty easy to read and get used to once you actually read some documents.
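
(Stripped down to just the synchronisation part; a sketch, not the docs' example:)

    package main

    import "fmt"

    func main() {
        c := make(chan bool)
        go func() {
            fmt.Println("work done in the goroutine")
            c <- true // send: lets the receiver below continue
        }()
        <-c // receive: blocks main until the goroutine sends
    }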


I read those examples and had no trouble inferring their meaning. I have no Go experience whatsoever and have not even read the introduction or tutorial. Yet, building entirely on what I know from speaking several C-like and non-C-like languages, here's what it looks like to me:

clearly c is some kind of "chan"nel-based synchronisation object for which <- is both the read & write operator (presumably blocking); the "go func" declaration is building a closure that is immediately invoked which, given this appears to be a parallelism demo, probably happens in a separate thread (green or OS unstated). m appears to be a map of string to string, so the assignment to k,v is almost certainly a tuple returned from iterator "range m" because that's just how things go in documentation-level examples.

I know nobody likes a smart-ass but it's a good example and even if some of the above is wrong, I got a taste of Go from it that I rather like. So thanks.


All correct.


Programmer for 8 years, in a wide variety of languages, but I've never touched a language that had pointers.

How much do I need to know about pointers to use and learn Go? Is Go garbage collected in the C#/Ruby sense?

I recently switched from full ORM usage to hand tuned SQL queries and loved watching the speed of my application increase 10-fold. I'm having a bit of a speed high and Go seems like the perfect language for this.

What book should I buy to learn Go from the ground up?

Edit: This seems like the perfect place to start.

http://www.golang-book.com/


If it's been 8 years, it's about time :) ...

Do yourself a favor, and learn the basics of pointers (might as well just read up on C), then move on to Go.

Before you go down the route of buying a paper book, try the interactive tutorial on the golang web site. It's pretty good. There's also a lot of good, and free, reading material on that site to help you learn.

Also, regarding pointers: They're really not that bad, but I understand a lot of people can have trouble with them.

Just realize that in languages like Ruby and Python, you're never directly working with the computer's memory (as in RAM) - that is abstracted from you.

However, in a language like C, you are. (As an aside, you still aren't directly dealing with it - the operating system abstracts it from your program via the virtual memory system - but you can ignore that for now and look it up some time when you see fit.) So the first thing you need to make sure is that you understand the difference between stack storage and heap storage.

Once you understand the heap, and that it is essentially a large block of memory with sequential addresses, you can begin to understand pointers. They are just variables that contain the address of some other variable stored in memory. Languages that support pointers usually give you some syntactic sugar to make working with the "pointed to" object through the pointer easy.

Of course it can all get more complicated, but in summary, give them a shot...you'll gain a much better understanding of many things once you learn them a bit.


>How much do I need to know about pointers to use and learn Go?

Very little. Just what pass by value and pass by reference mean in the context of pointers.

The very basic you need to know is: if a function argument is defined as (*foo), you are changing the item you pass to the function (like in Javascript or Java), whereas if it's defined as (foo), you're changing a copy of it and the original item remains unaffected.

Go has pointers, but no pointer arithmetic (in the base language), and it also has GC, so most of the problems people have with pointers in C are not there.
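
(A minimal sketch of that (foo) vs (*foo) distinction, with made-up names:)

    package main

    import "fmt"

    type counter struct{ n int }

    func bumpValue(c counter)    { c.n++ } // works on a copy; caller unaffected
    func bumpPointer(c *counter) { c.n++ } // works through the pointer; caller sees it

    func main() {
        c := counter{}
        bumpValue(c)
        fmt.Println(c.n) // 0
        bumpPointer(&c)
        fmt.Println(c.n) // 1
    }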


You need a little more than that, unfortunately.

A quick example: the standard library `flag` package lets you parse command line arguments, and returns either pointers or values. You need to know which is which. It's easy, for example, to do this:

    var filename = flag.String("file", "default.txt", "file to open")
    fmt.Println("Using file ", filename)
The above code looks OK at first glance, but prints the address of filename rather than its value. It can be fixed by using `flag.StringVar` instead of `flag.String`, or by dereferencing `filename` using `*filename` in the Println.

So I would say you need to know way less about pointers than C but you still need to know the difference.

http://golang.org/pkg/flag/


>The very basic you need to know is: if a function argument is defined as (*foo), you are changing the item you pass to the function (like in Javascript or Java), whereas if it's defined as (foo), you're changing a copy of it and the original item remains unaffected.

=============================

So it's like PHP where I need to pass in `function modify(&$foo) {` to modify a variable passed by reference.


> So it's like PHP where I need to pass in `function modify(&$foo) {` to modify a variable passed by reference.

If you're thinking in terms of PHP's primitive types, then yes: that's probably the best equivalent. PHP has the whole objects always being "passed by reference" [0] thing going on too.

0. http://php.net/manual/en/language.oop5.references.php


It's also a performance boost when you're passing something non-trivial, to pass just the pointer, rather than have Go make a copy of whatever you're passing by value.


Yes - not sure about the whole PHP semantics on this, but it's a little like that.


I was under the impression that slices, maps and channels are reference types - i.e., if you pass them to a function using (foo) you'll pass "only" the reference like in Python, but that might be old information. Is that still valid?


Yes, those are three special built-in objects that are reference types to their underlying stores.


There are no reference types in Go, everything in Go is passed by value.

When you pass a slice, for example, you are passing a structure that has a pointer to the underlying data, by value; you could think of it as a reference BUT it's not what you think it is. What you are passing around is actually a struct that is passed by value:

  type SliceHeader struct { //[1]
      Data uintptr
      Len  int
      Cap  int
  }
1: http://golang.org/pkg/reflect/#SliceHeader


>There are no reference types in Go, everything in Go is passed by value.

Yes, but this value is not the ACTUAL value (e.g. the underlying storage).

When people (especially newcomers to pointers like the person that started this thread) speak of "pass by value" they refer to their data (the actual value).

So that Go passes _the structure_ by value is quite misleading, compared to the expectation of what "pass by value" means.

For practical purposes, I think they're better off thinking of it as a reference.


>>For practical purposes, I think they're better off to think of it as a reference.

No, they should learn to understand what is actually happening. Otherwise things, at best, do not make sense and, at worst, can lead to bugs when using functions like append(): http://play.golang.org/p/eTJXu6XMNJ
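
(A hedged sketch of the kind of append() surprise that playground link is about; this is not necessarily the same code.)

    package main

    import "fmt"

    func addOne(s []int) {
        s = append(s, 1) // only the local copy of the slice header grows
    }

    func setFirst(s []int) {
        s[0] = 99 // writes through the shared backing array
    }

    func main() {
        s := make([]int, 3) // len 3, cap 3
        addOne(s)
        fmt.Println(len(s)) // still 3: the appended element is "lost"
        setFirst(s)
        fmt.Println(s[0]) // 99: element writes are visible to the caller
    }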


> a structure that has a pointer to the underlying data by value

Isn't this the exact definition of a reference type?


Except that if you are passed something like a slice and you modify the length, that change is not reflected to others like it would be with a reference. See my above code example.


If you make a change to an integer that was in a variable and passed in by parameter, does Javascript really make the change to the caller's variable?


No. The scenario you posit would be true if JavaScript were call-by-reference.

JavaScript, however, is call-by-value. Primitive types (string, number, boolean) are passed to functions by copying their value, and reference types (such as any object you create) are passed by copying the value of their reference.

The reference semantics can be observed in JavaScript in some cases, such as:

  var i = 0;
  for (i = 0; i < 10; i++) {
    setTimeout(function() {
      document.write("i is now " + i);
    }, 10);
  }
Here the inner function gets a reference to the variable i instead of its value.


No.

However, if you pass an object or an array, the called function can change a field in the object or an element in the array (the difference between those two is an optimization), and the change will be visible to the caller.

Go, however, lets you pass in an explicit pointer to an int, through which the function can change the int to which the caller pointed. Of course, the called function can also change the passed-in pointer itself if it wants, but that has no effect on the caller's variable.


Javascript primitives (number, string, boolean) are always copied, the remaining (object, function) are passed by reference (arrays are objects). Similar to Java.


Pointers are not difficult. Take what you know about references as a starting point and go from there. In many cases, references and pointers behave very similar. When you get into the more interesting things that are possible with pointers, just remember they are merely a value that holds a memory address. If you keep that in mind, the complexity is muted a lot. Just dig into Go and see what you find.


I concur with this approach. My former languages do not have pointers either. It is refreshing to have them as an option, and they are not difficult to get used to.


I started with this. http://tour.golang.org/

Then I went through those. https://news.ycombinator.com/item?id=5365401


As others have pointed out - you probably know enough about pointers to get started from the (rather excellent) documentation.

As for learning pointers -- I've never been able to get entirely comfortable with pointers in C -- the syntax just feels unnatural to me. I much prefer either assembler (intel syntax) -- or Pascal, eg:

http://www.tutorialspoint.com/pascal/pascal_pointers.htm

Go syntax is closer to C, though. So knowing C, Go might be easier (than not knowing C) -- but it isn't a prerequisite.


>>How much do I need to know about pointers to use and learn Go?

Do you understand how they work in C? Maybe try reading a little about that. For Go, I'd recommend starting here: http://tour.golang.org/#1 (Don't miss the next arrow in the lower right.)

After running through that you should have an idea of what you don't know- which makes asking questions people can provide useful answers to much easier. :)

>>Is Go garbage collected in the C#/Ruby sense?

I'm not sure what your sense of C#/Ruby GC is, but I'd go with yes.


You already know everything you need to know about pointers. In Ruby almost everything is a pointer. What you need to learn about is 'values'.


Any plans to add templates/a form of generic programming? Does the community care about that? How have people been working around that?

And congrats!

EDIT: Yes, I know what the FAQ says [1]. I was wondering if someone working on that can shed light on how the development of a satisfactory proposal has been going. Also, what exactly is wrong with template instantiation like in C++ from the perspective of the Go devs?

[1] http://golang.org/doc/faq#generics


It gets brought up continually on the mailing list.

The Go authors have repeatedly stated that they will not be implementing generics.

They don't seem to miss them, I don't miss them, and indeed most people who write much Go don't seem to miss them. The people who continually whine about generics seem to be the type who say, "Oh boy, I'd sure LOVE to write a bunch of kickass code in Go, but I just couldn't do ANYTHING without generics! Why don't you put them in, then I'll try the tutorial"

It's not something we feel the need to work around, we just sit down and write concise programs that do the job, in fewer lines than C++.

Edit: as others have mentioned, it seems the authors would be willing to put them in if they find a way to do it without crapping up the language. If they do, I'll try writing Go code with generics, but they have a bad taste for me after dealing with code containing GenericType.cpp, GenericType.hpp, AbstractType.cpp, AbstractType.hpp, AbstractGenericType.cpp, and AbstractGenericType.hpp all in one place.


>The Go authors have repeatedly stated that they will not be implementing generics.

No, they have repeatedly made ho-hum excuses about how they can't think of any proper way of implementing them, and that they will do them if they find a "clean way".

>They don't seem to miss them, I don't miss them, and indeed most people who write much Go don't seem to miss them. The people who continually whine about generics seem to be the type who say, "Oh boy, I'd sure LOVE to write a bunch of kickass code in Go, but I just couldn't do ANYTHING without generics!"

Well, isn't this dismissal of them a kind of "confirmation bias"?

Sure, the people who don't care about generics can churn out tons of code without them. Like people have been churning out tons of code in C or pre-generics Java anyway. And, sure, they might not even miss them.

Plus, some of the core Go members have been using languages without generics for decades and even created some. So, being used to working like that, it makes sense for them not to miss them.

That doesn't mean the people who DO want them are wrong in wanting them in -- or that, because guys who have been writing C for several decades don't find a need for generics in Go, there really isn't one.


> The Go authors have repeatedly stated that they will not be implementing generics.

That's not true. We just don't have a way of doing them that works well in the language. http://golang.org/doc/faq#generics


Yeah, I updated to reflect that. I was basing that statement on the replies I'd see on golang-nuts when the weekly generics thread came up.


What about macros then?


Russ Cox wrote down in 2009 what the dilemma about generics is: http://research.swtch.com/generic I think this is a very valuable article for understanding why the Go team hesitates to implement generics in some suboptimal form.


It also highlights Go's weak spot ... it's too low level, while not being low level enough for manual memory management.

The JVM engineers have been free to do whatever they want in terms of optimizations at runtime, like code inlining or escape analysis. They may even add trace compilation or whatever they feel is necessary. The JVM's GC is precise and generational. The CMS garbage collector rarely blocks the whole process, and you've got options for GCs that are totally non-blocking. In fact, the JVM's GC is so efficient that allocating and deallocating short-lived objects is almost as efficient as doing it on the stack.

Russ Cox speaks about the boxing that Java does, but Java makes boxing almost a non-issue. Personally I've learned to appreciate the JVM and what it can do in the context of servers receiving tens of thousands of requests per second.

And herein lies the problem - Go is too low level to achieve the same level of optimizations that Java is capable of. But on the other hand it's totally dependent on a per-process GC, and personally I don't think we'll ever see a precise, generational GC for it, precisely because Go is too low level. The Rust engineers at the very least redefined the problem by making individual threads in a process have their own heap and their own garbage collector.

Go on the other hand feels like a hack on multiple accounts. And to me it's not a beautiful hack either, plus once a language achieves inertia, you cannot change its fundamentals without redefining the language to be something else entirely. If Go will indeed get popular, it will join a long line of languages that people hate, because they have to maintain code-bases written in it, with no easy way out.

Its authors say they don't want Go to be like C++. In many ways, it already is.


> The JVM engineers have been free to do whatever they want in terms of optimizations at runtime, like code inlining or escape analysis.

Both of these optimizations are present in this release of Go. Granted, there are no doubt more inlining opportunities, and better escape analysis is possible, but I think you'd want to choose different optimizations to highlight your point.

What specifically makes you think that there'll never be a precise, generational GC for Go? What about Go makes it too low-level for such an implementation? Unless I'm missing some key issue, I definitely think it's possible...

> Go on the other hand feels like a hack on multiple accounts.

Again, what specifically about the language gives you this impression?

> If Go will indeed get popular, it will join a long line of languages that people hate, because they have to maintain code-bases written in it, with no easy way out.

Forget about evidence, you're making claims with absolutely no reasoning behind them. What is different about Go as opposed to say Python, or Erlang, or Node, or C++ in this situation? A code-base written in any of these languages needs maintenance... I don't see how Go is any different in this situation.


One of the 3 original designers of Go is Robert Griesemer, who worked on the Java HotSpot compiler, so I think it's safe to say that the Go team is quite aware of what the JVM can and cannot do.


So? Is this like an appeal to authority?


Wow, you even manage to put a negative spin on "the whole Go team has an impressive proven track record."


If that argument is used to defend against arguments of bad design decisions, in spite of years of research by many other well renowned computer scientists that went ignored, then yes.


This is a bad attitude, imo. The Go standard library wouldn't be possible without generics [1]. It's just that the generic functionality available to the standard library isn't exposed to everyone else.

If it's a feature needed for the standard library, then it's safe to say the devs will need it too. The devs aren't (and shouldn't be) consistently writing programs that are more trivial than the standard library.

[1] http://golang.org/doc/effective_go.html#append


Not sure I agree with this logic.

I could easily write a custom Append func in my own application because I would know the Slice types I'm trying to operate on. Writing a language feature is different than writing an application, and it has nothing to do with how trivial something is.

Honestly, Append() as a concept is about as trivial as it gets..


> I could easily write a custom Append func in my own application because I would know the Slice types I'm trying to operate on.

More precisely, you would write one custom Append function per Slice type. AppendString, AppendInt, AppendWhatever.


Yeah, that was supposed to be singular: because I would know the Slice type I'm trying to operate on. Appreciate the clarification. I'll leave the typo there for context.


> Writing a language feature is different than writing an application

True enough. But libraries can easily tend toward either of those extremes. And any obstacles to library development will eventually become obstacles to application development.


You could also say that ANSI C has fenced off generics (array of T). Just as with Go's append and co., that's part of the language, not its standard library.

http://golang.org/pkg/builtin/#pkg-overview


The thing is, you can do that with interface{} and you don't even need type switching. It's awkward, but considering the rarity with which interface{} is necessary (at least in my experience), much less interface{} with type switching, I really don't see a big gain in generics. I agree with jff; it seems most people who complain about the lack of generics are those who haven't written much code without generics.


You lose a lot of type safety via interface{}. I'm currently writing a server that makes heavy use of []interface{} and I hate it. For many, most, maybe even all code paths in the app, I could 100% static-type if Go allowed for something akin to Haskell's generics. Basically I bundle around a type `APIVal` (an alias for []interface{}), but everywhere I either take as an argument or return an APIVal, I can tell you exactly what type the underlying slice items "really" have, but I can't assert that at compile time, and it kinda bugs me.
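
(A hedged sketch of that pain point, based on the description above; APIVal's contents here are invented for illustration.)

    package main

    import "fmt"

    // APIVal mirrors the alias described above: a bag of anything.
    type APIVal []interface{}

    // firstUserID encodes an intent ("element 0 is an int") that the
    // compiler cannot verify; a wrong element only fails at run time.
    func firstUserID(v APIVal) int {
        return v[0].(int) // type assertion: panics if v[0] isn't an int
    }

    func main() {
        fmt.Println(firstUserID(APIVal{42, "alice"})) // 42
    }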


>>The Go authors have repeatedly stated that they will not be implementing generics.

Not so: http://golang.org/doc/faq#generics; "This remains an open issue." In the discussions I have read, if an implementation of generics is proposed that doesn't sacrifice performance and is truly generic, they are open to it.

What I have seen said is that implementing generics was not on the table for 1.1 or maybe even 1.x


>In the discussions I have read, if an implementation of generics is proposed that doesn't sacrifice performance and it truly generic, they are open to it.

Sounds like weasel speak for "take a hike and leave us alone".


Or maybe professor speak for freshman Johnny who thinks he has an answer to the Halting problem...

But in all seriousness, they are interested in implementing generics, just not in a hurry to do it. It doesn't seem unreasonable to me to label it a low-priority feature compared to other things they have added to Go. Java got along just fine before generics (which are arguably poor) and C still gets along fine without them. The Go authors are saying: let's add generics when we are all happy about the solution and feel we are going to get it "right".


>>Any plans to add templates/a form of generic programming?

The Go team is open to generics as soon as there is a proposal they feel fits right without performance penalties. Read: http://golang.org/doc/faq#generics

>>Does the community care about that?

Yes, but most people in the Go community are OK waiting for the "right" solution.

>>How have people been working around that?

Generics are nice and convenient, but you can write anything/everything without generics. Just look at everything written in C. And C doesn't have interfaces like Go.


>The Go team is open to generics as soon as there is a proposal they feel fits right without performance penalties.

Because using interface{} for the same tasks fits so much better and has so much better performance, right?

Tons of languages have implemented generics -- it hasn't been rocket science for at least two decades.

If the Go team is not interested in working on the thing, then I doubt they would really be evaluating any "proposal".


I think Generics are just low priority for the Go team. Maybe we will see them in Go 2.0

>>Tons of languages have implemented generics -- it hasn't been rocket science for at least two decades.

Sure, but that doesn't mean there are not concerns about the way it is often implemented: http://research.swtch.com/generic


I haven't looked at Go beyond doing the tutorial, so what follows isn't a learned opinion. The interface system in Go covers most of what you would do with generics; if you use interfaces, your code is generic. The only friction, compared to a generics implementation, is the way you wire up types to interfaces.


Someone correct me if I'm wrong, but I think that's true only if you use methods and not operators, because Go doesn't support operator overloading.


The other place you really feel the lack of generics is in the collections. They are built around the empty interface so that they can hold any type. Then you have to cast back to what you want. Exactly like Java pre-generics. Ugh.
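
(For instance, the standard container/list stores interface{}, so you type-assert on the way out; a small sketch:)

    package main

    import (
        "container/list"
        "fmt"
    )

    func main() {
        l := list.New()
        l.PushBack(1)
        l.PushBack(2)

        sum := 0
        for e := l.Front(); e != nil; e = e.Next() {
            sum += e.Value.(int) // e.Value is interface{}; cast back to int
        }
        fmt.Println(sum) // 3
    }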


That's correct, which means stonemetal's comment is true only for non-math code, but 99% of the code out there is not math.



One of the features of Go is supposed to be compilation speed. Can anyone compare it to C, for example, or discuss compilation speed in large projects (e.g. >100k LOC)?


The Go compiler is so fast you can use it the way you would a scripting interpreter ("go run whatever.go"); it builds the whole standard library in seconds. Every program I've written in it (topping out at around 20kloc) compiles instantaneously --- I can't perceive the amount of time it takes to compile.

C compilers are fast, but they aren't that fast. Go compiles so fast that it changes the way you write and organize code; you would never consider compilation overhead in any decision you made in a Go source tree, where you might do that in a C codebase. For instance, Go programmers routinely compile code, run it, and throw away the compiled binary! You wouldn't think it would be possible for compiler speed to be so much better than C that it would make a difference, but Go's compiler manages it.


"You wouldn't think it would be possible for compiler speed to be so much better than C that it would make a difference"

I would think that, even before I had used Go. Ignoring the smaller scanning benefits of Go that come about from the more rigid syntax for braces and such, the C preprocessor system introduces a complete nightmare when it comes to requiring multiple passes over the same code, especially for code that is naive about include guards, pre-compiled headers and other "best practices".

C (and C++ is even worse) has a lot of very large low-hanging fruit to cull when it comes to writing languages with faster compilers. Whatever speed C compilers have now is due to the geniuses who wrote the compilers and decades of shoulders to stand on; the language is actively working against them at nearly every turn when it comes to compilation speed.

That aside, I agree. Go compiles very, very fast. So fast that it never occurred to me that "go get" did a compilation until it was pointed out to me that it did.


I can't agree regarding C. C++, sure, templates are nasty to compile quickly. C, on the other hand, has very little that causes problems unless you severely abuse the pre-processor in ways I've seen maybe a handful of times in the last 20 years.

Sure, there are compilers that do a poor job of optimizing it, and it's very easy to make it slow on memory-starved systems. But on a system with enough RAM to keep your headers in memory, and a compiler that doesn't totally disregard compilation speed, the overhead of skipping over blocks of include files using include guards is low; and for code that doesn't use them, a compiler that can process multiple source files at a time can trivially cache large chunks of parsed headers.

As I've pointed out elsewhere, Bellard's TCCBOOT demonstrated using TCC to compile the Linux kernel at boot time in 15 seconds on a P4 - that's sufficient to show that it isn't difficulty in pre-processing or parsing C quickly that blocks fast C compilation, but that, to the extent it is even a real problem, it comes down to architectural decisions in the major C compilers and/or optimisation.

It's quite possible Go can be compiled faster, but so far I've not seen any compelling evidence that it is even roundly beating readily available C compilers. Certainly the numbers bandied around in this thread are not impressive.


> C (and C++ is even worse) has a lot of very large low hanging fruit to cull when it comes to writing languages with faster compilers.

I look forward to seeing the speed boost that C and C++ get if/when modules are available: http://llvm.org/devmtg/2012-11/Gregor-Modules.pdf


Me too. Not sure if they make it to C++14 though. :(


They're delayed to C++17. C++14 is an incremental release only.


Yes. Actually I wonder if they will make it to C++17 even, but I keep looking forward to it.


> Every program I've written in it (topping out at around 20kloc) compiles instantaneously --- I can't perceive the amount of time it takes to compile.

Okay, Go is fast, but it's not that fast. My 1000 LoC program takes ~1 second to compile. While it's certainly fast enough for me, it's not imperceptible.


I rebuilt a 15kloc microcontroller emulator I wrote (as my intro- to- Go project) just now, after blitzing my pkg/ directory, and it took about a second.


Yeah, that sounds about right. My definition of "imperceptible" is around a few ms, I guess. Maybe people coming from C/C++ have different tolerances, but I'm more used to the startup times of interpreted languages.


> I rebuilt a 15kloc microcontroller emulator I wrote (as my intro- to- Go project) just now, after blitzing my pkg/ directory, and it took about a second.

that sounds pretty interesting, is there a GitHub page I can look at ? thanks !


I wrote it for a project and probably won't publish it soon. It's not all that interesting... at least, it's not interesting once you've written a simple emulator, which I highly recommend doing; it's an extremely rewarding personal project, and not nearly as difficult as it sounds. Pick something GCC can compile to.


That's within range of gcc on similarly sized projects on anything remotely like a modern machine. I'm surprised this is considered fast.


Is this code up anywhere? It'd be interesting to see what's going on there.


The one pathological case of kinda-slow Go compilation I have involves embedding a very big binary blob (FPGA firmware for a controller chip driving a Texas Instruments DMD array) as a const array of bytes that I can just dump down the USB connection when cold booting the device. I don't fault Go for this, as I'm really abusing the language to simplify my single-file deployment without having to do too much work for the simple deploy.

If I wanted to spend the time I could do something like append the blob to the end of the program executable instead of just clumsily jamming it into DATA, but this all exists in a library that doesn't need to change often so the fact that it takes ~10-15 seconds to compile is not a problem in practice.
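(For the curious, the "jamming it into DATA" approach usually amounts to a generated source file along these lines; the package name, variable and bytes below are made up, and the real blob is of course enormous, which is exactly what makes the compile slow.)

    // Sketch of a generated file that embeds a binary blob as a byte slice.
    package firmware

    // Blob is the embedded image, streamed to the device over USB at cold boot.
    var Blob = []byte{
        0x4d, 0x5a, 0x90, 0x00, 0x03, 0x00, // ... many hundreds of KB follow
    }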


I vaguely remembered minux suggesting a way to do this more efficiently, and dug it up: golang.org/issue/3035

Maybe it'll work for your use case.


Wow, that (the syso linker thing) is very useful and should work out well in my case, thanks.


I would suggest filing an issue: https://code.google.com/p/go/issues/list

At least the Go team will be aware of your pathological case and have a chance to speed it up.


No, it's closed-source unfortunately. It might be a bit inaccurate as I only measured the includes in my folder and not everything in the src folder, but it only imports from the stdlib and a redis lib.


Which is the entire raison d'être for the Go project and thus it can be dubbed a success. Any other advantages over C are just gravy. :)

A nicer, more accessible (python/ruby/etc folks seem to like it a lot), faster compiling reasonable alternative to C is a worthy goal - I really don't get why some people get into flamewars over it.


If some people don't get into flamewars over a language, it's because no one's using it. :)


C compilers are largely slow because they are written with the expectations of a lot of optimization and large projects rather than with much concern for compile times.

C compilers that are written to be fast rather than flexible can be extremely fast. E.g. see Fabrice Bellard's "TCCBOOT", which he demonstrated compiling the Linux kernel at boot time in 15 seconds on a 2.4GHz Pentium 4: http://bellard.org/tcc/tccboot_readme.html

TCC can also be treated much like an interpreter, like your Go example.


Any language with modules is able to run circles around C compilation speeds, regardless of what optimizations you might be doing.

It all boils down to having to parse each include file every time it appears, because the preprocessor might change the meaning of the file at each inclusion.


That's something TCCBOOT demonstrated quite clearly isn't a major problem of C itself but rather an issue with common C compiler implementations, given the size of the Linux kernel and the time spent on a CPU which is by now ancient history.

Yes, you need to pre-process and potentially parse them again, but in practice most C include files use include guards and are low cost to process if done even remotely properly.


But the compiler does not know the guards are there, so it needs to read the files anyway.

This is why Plan9 C compilers have the strict rules that header files don't include other header files and you as a user need to include them all.

I need to look into TCCBOOT in more detail.


Technically it needs to read the file once, and keep the data around (of course it can keep re-reading the same files, but there's no technical reason other than memory availability that forces it to). If the compiler implementer cares about speed, it can easily keep the data together with tokenized pre-processed blocks that can be used to reduce the guard lookups to a hash table lookup unless someone has done something weird (in which case the worst case is to revert to pre-processing the file again).

It is also perfectly possible to cache token streams or even pre-parsed (with some caveats) versions of the headers specialised by the value of used pre-processor definitions, which in most cases will prevent re-parsing of most include files (only causing re-parses when the pre-processor definitions actually change).

The pathological worst case for C code can be horribly nasty, but real-world code is generally fairly well behaved and easy to speed up processing of on modern systems that aren't terribly memory starved. Even on systems that are memory starved, there have been plenty of C compilers that can "pre-compile" headers to speed up compilation.


> TCC can also be treated much like an interpreter

I assume you are aware, but tcc can be treated exactly like an interpreter:

  #!/usr/bin/tcc -run -lgmp
  #include <stdlib.h>
  #include "gmp.h"

  /* all this in e.g. fac.c */
  void factorial(mpz_t r, int n)
  {
      unsigned int i;
      mpz_t tmp;
      mpz_init(tmp);
      mpz_set_ui(r, 1);
      for (i = 1; i <= n; i++) {
          mpz_set_ui(tmp, i);
          mpz_mul(r, r, tmp);
      }
      mpz_clear(tmp);
  }

  int main(int argc, char **argv)
  {
      mpz_t r;
      int n;

      mpz_init(r);
      if (argc == 2) {
          n = atoi(argv[1]);
          factorial(r, n);
          gmp_printf("%d!:%Zd\n", n, r);
          mpz_clear(r);
      } else {
          exit(1);
      }
      exit(0);
  }

  chmod a+x fac.c
  time ./fac.c 1000

  1000!:4023872(... snip for formating ...)000

  real    0m0.007s
  user    0m0.004s
  sys     0m0.000s
edit: removed actual factorial output, as HN formatting choked on the long line.

edit #2: indentation/block formatting of code. edit #3: missing "time" prefix...


Yeah, I opted for a weaker claim as I've not actually tested how practical it is to use it that way in myself. I keep meaning to read through the tcc source..


> You wouldn't think it would be possible for compiler speed to be so much better than C that it would make a difference, but Go's compiler manages it.

I did some ocaml work a few years ago and had a similar revelation. Its compiler is just as fast—my short little programs were more or less instantaneously compiled. Fast compiling is really, really nice, even though it seems like a silly thing to tout.


The main Go program I work on is CloudFlare's Railgun which is 18,000 lines of code. It builds in 4s.

   % make clean
   % time make make-listener make-sender make-diag make-ssl...
   3.42s user
   1.05s system
   105% cpu
   4.240 total
   % find src/ -name '*.go' | xargs wc -l | tail -1
   17870 total
And that's building everything sequentially. If I switch to using -j4 I get a 2s build from clean.

   % make clean
   % time make -j4 make-listener make-sender make-diag make-ssl...
   5.75s user
   1.99s system
   382% cpu
   2.020 total
An incremental build (no op) of the entire tree is one second:

   % time make -j4 make-listener make-sender make-diag make-ssl...
   1.56s user
   0.73s system
   209% cpu
   1.090 total
Yes, we wrap the go build tools with make. Not because they are lacking anything for Go but because we do other stuff (like build documentation and release packages).

(I should have not counted the tests in the above analysis. There are 7,290 lines of tests and 10,580 lines of code).


Given the way people are praising Go for compilation speed here I'm surprised at your numbers, which does not seem to support the claim it's all that fast, unless your build machine is antiquated...

My home server is an AMD FX-4100 (quad core, 3GHz; this is one of the early Bulldozer CPUs and definitely not a speed demon - I picked up the CPU and motherboard for <$150 a year or two ago).

I just tested a 6k C program that compiled in 1s flat with gcc 4.7.2, or .44 seconds with -j4, and a 13k lines one that completed in 2 seconds, or .88 with -j4.

Gcc isn't exactly renowned for being a fast C compiler.

A "no op" build of the 13k project takes .004 seconds, though, perhaps that's the reason - if your make setup is horrendously complicated perhaps it's slowing your numbers down a lot. And it's certainly possible to make C compilation incredibly slow if you do lots of nasty pre-processor hacks.

But then again whether 6k, 13k or 18k lines, both your project and mine are tiny.


I assume you're calling the go command a few times in parallel, hence the improvement with -j4. I wonder if it would be faster to aggregate all the packages you need to build in a variable, and call the go command with all of them.

i.e. instead of issuing "go build foo/mylib" and "go build bar/myotherlib" in parallel, just issuing "go build foo/mylib bar/myotherlib".

I say this because I remember there being logic inside the go command to parallelize builds already, see the "-p" flag in "go help build". In particular, the go command will be able to avoid statting all the files, comparing mod times twice, recomputing the DAG, etc. Although, considering how fast it runs anyway, this speedup might be negligible...


  > Yes, we wrap the go build tools with make. 
Do you wrap `go build` or do you wrap the 6g/6l tools directly?


go build


We have similar times being reported on a codebase of 17,482 lines of code.

    : time go build -a

    real	0m3.980s
    user	0m5.012s
    sys	0m1.140s
But that is unrealistic, in dev we skip the -a flag:

    : time go build

    real	0m1.854s
    user	0m1.760s
    sys	0m0.332s


If your memory goes back this far: it's Turbo Pascal fast. :)


Haha, you've just sold me. I'm switching to Go for my next project :)


Turbo Pascal was so fast mostly not because of the compiler but because of the linker and the .tpu file format.

edit: s/tpl/tpu/

btw, the tp linker - http://turbopascal.org/linking-modules-and-creating-exe-file


Which is an interesting point... I'm pretty sure Go has more in common with TP than ordinary C wrt linking, relying on modules, and using static binaries.

http://prog21.dadgum.com/47.html

edit: in fact a number of the things that Rob Pike has brought up in talks reminded me of that article


IIRC, the compiler was single-pass, and thus, quite fast.


Most older Pascal compilers were, as most (all?) of Wirth's languages are explicitly designed to make single pass compilation easy.


And it used modules.


I've been reading this thread trying to find a reason to switch to Go. Then I read this comment and it brought back some very good memories (TP was my first language after command-line BASIC on the Apple IIe and TRS-80). Sigh. Looks like I have a new weekend project!


The Juju guys at Canonical said at GoSF that their >100k LoC project compiles in 6 seconds.


Much better than SBT compiling my Scala project (20k? LoC) in 75 seconds (!) on my latest MBP. And that was not from scratch, but having changed about 10 lines of code.


This website now reflects it too: http://isgo1point1outyet.com/

(For those that don't know, this website was created for a talk about Go given by Andrew Gerrand. Code: https://github.com/nf/go11 , slides: http://talks.godoc.org/github.com/nf/go11/talk.slide#17)


Our web hosting service at Parse (launched last week) is built entirely in Go. It happens to live in a sweet spot for the language: we need to query Memcache and/or Mongo to figure out exactly how to route a request but don't want to block the thread, which is dead simple in Go. I've personally had a great time using the language, and I am hoping to write up a blog post on our experience within the next week.


Just waiting on the PR to go through for Homebrew to pick-up go 1.1 :)

https://github.com/mxcl/homebrew/pull/19782


Does Go have a roadmap of development work for future releases? One thing that I've been very curious about is if the Go devs will be able to resist the tendency of every other language (save perhaps Lua) to linearly increase in complexity over time.


I asked this exact question once when bradfitz was giving a talk about Go at AirBnB. The answer was that the team is led by minimalists who are interested in keeping it lean and avoiding bloat and especially in avoiding it becoming C++. In fact, you can see his answer about Go 2.0 roadmap plans (where the answer is basically "too soon to think about that yet") followed by my question (at 54:50) of this talk:

http://www.youtube.com/watch?feature=player_detailpage&v...

I guess the question then is "do you trust those guys?" So far the answer seems to be yes.


We are most certainly concerned with language complexity. We will only add features that mesh well with the Go we know and love, not just for the sake of utility (every feature is at least useful).


That's good to hear, but it still raises the question of what you're spending your time on. :) I imagine that all the low-hanging GC-related fruit has been plucked for 1.1, so what are users clamoring for now? Without a maximalist attitude, do you foresee a point where the language is declared as "done" and all the full-time devs just pack up and move on?


At this point there are something like 600 open issues, so I think that will be a while.


Go is awesome! The best feature of 1.1 is very large heaps on Linux x64. I was bumping into the limits before but the 1.1 rc has been rock solid so far.


Congratulations to everyone involved for their work.


And thank you so much! As of lately, Go is the leading source of my happiness in my dev life.


Fantastic! Thanks to the Golang team for getting this out the door!

I'm surprised by the speed of the dev team. It seems like just a few days ago we saw the latest RC, and just a few days before that was the previous RC. Fast development!


We would have preferred to have hit RC a bit earlier and left more time between candidates, but we had such a lengthy beta period that we felt it was okay to crank out a few RCs to iron out the last issues and release. I hope the next release cycle will be shorter, and the candidates will come out earlier (and be a lot rougher).


What is the recommended way to go about debugging a Go program? Is there something better than GDB if I want to look at the call stack, have breakpoints, look at the state of variables etc.

Using GDB seems akin to working with assembly


There're a few Go IDEs that integrate GDB into the IDE.

http://geekmonkey.org/articles/20-comparison-of-ides-for-goo...

I'm the author of one of them (Zeus), and while the results are not as slick as the Visual Studio debugger, it's better than straight GDB.


The %v and %T verbs in the fmt package. Surprisingly powerful :)
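For example (a trivial sketch, with an invented struct, but it shows the idea):

    package main

    import "fmt"

    type point struct{ X, Y int }

    func main() {
        p := point{1, 2}
        fmt.Printf("%v\n", p)  // {1 2}       default value format
        fmt.Printf("%+v\n", p) // {X:1 Y:2}   include field names
        fmt.Printf("%T\n", p)  // main.point  the value's type
    }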


What is the NPM equivalent in the Go world?

I wanna give Go a start but can't see any community website that lists Go libraries like NPM.

EDIT: I'm not asking for "npm install person/project", I'm asking for a community website and a file like package.json that records information about a project.

As I observe, Go projects are really messy. I saw some code importing dependencies like "import github.com/foo/bar".

I think Go should focus on declarative programming and its aesthetics culture.


> As I observe, Go projects are really messy. I saw some code importing code like "import github.com/foo/bar".

That's not messy; that's idiomatic Go. By convention, package imports can be URLs to version control repositories, which the `go` tool uses to determine dependencies and install them automatically.

Go has no equivalent `package.json` file. All package information is inferred from the source code.

The closest thing to a central repository is on the Go wiki. [1] Many people also use godoc.org [2], which is searchable.

[1] - https://code.google.com/p/go-wiki/wiki/Projects

[2] - http://godoc.org/-/index
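For illustration, a file using such a package looks roughly like this (github.com/foo/bar and bar.Frob are placeholders, as in the comment above); `go get github.com/foo/bar` would fetch it into your workspace and build it.

    package main

    import (
        "fmt"

        "github.com/foo/bar" // hypothetical third-party package, fetched by `go get`
    )

    func main() {
        fmt.Println(bar.Frob()) // Frob is a stand-in for whatever the package exports
    }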


> That’s not messy; that’s idiomatic Go.

The two aren’t mutually exclusive. If that URL goes down, is there a fallback mechanism?

For what it’s worth, I agree with oakaz’s sentiment that Go is “very imperative” and “not simple”. It is deliberately designed that way, with lots of syntax and types for representing (abstractly) the inherent complexities of its intended domain. Any ugliness, however, is purely subjective.


> The two aren’t mutually exclusive.

The OP (of this thread) said, 'I saw some code importing dependencies like "import github.com/foo/bar."' I did not understand this as a criticism of the design choice of using URLs as imports, but rather, a statement about some Go projects doing something horrendously wrong.

Yes, one can think of URLs as imports as both messy and idiomatic, but I do not feel that was the central point of the OP. I suspect we will never get a clarification, though.

> If that URL goes down, is there a fallback mechanism?

I don't know what you mean. If the package is no longer available at the URL then it cannot be downloaded from that URL.

> For what it’s worth, I agree with oakaz’s sentiment that Go is “very imperative” and “not simple”. It is deliberately designed that way, with lots of syntax and types for representing (abstractly) the inherent complexities of its intended domain.

Go was not deliberately designed to be "not simple." In fact, it was deliberately designed to be simple. Simplicity's name is invoked again and again as a justification for including or not including changes to the language.


The way it works is that you use "go install" to download your dependencies and then check them in locally. Adding a new library is like any other commit. Upgrading to the latest version is easy - just run "go install" and commit again.


> That's not messy; that's idiomatic Go

Haha.

Everybody understands Go has a tool that deals with dependencies for you. And any other language can have it.

And this is why I avoid monopolistic platforms even though they are fast. If the owner of your platform has no taste for aesthetics, you'll write fast but ugly-looking code, which is not fun for me.

As PG says in his book, a good solution should also look good and simple.

Go is very imperative, ugly looking, and not simple.

> http://godoc.org/-/index

This link is actually a self-answer by you.


> Go is [...] ugly looking, and not simple.

I disagree.


You clearly haven't a single clue what you're talking about. I've never, ever, ever heard someone describe Go this way:

>Go is very imperative and ugly looking, and not simple.

Further, this is just stupid:

>And this is why I avoid monopolistic platforms even though they are fast.

If you don't want to use `go`, so be it. The manual compiler and linker tools are available that you can cobble together with make or ant or whatever masochistic tool you prefer.

>This link is actually a self-answer by you.

What are you whining about? What do you want? That's just a laundry list of known/seen Go packages. As everyone else has already told you, there's no need for a package.json and this literally isn't a problem for anyone actually writing Go code.


Cool down. Apparently you guys have difficulty discussing the negative sides of your language and platform, and this is fair for an immature community.


From what I can tell, the only negative thing you've said about Go in this thread is that "it's ugly." What kind of discussion were you hoping to have?


It would be less "difficult" for "me" if you had a proper frame of reference. I have actually used Go to write real code and be productive with it.


To get a good overview of third party libraries 'out there', you can browse the index at http://godoc.org/-/index

These are by no means all the 3rd party stuff out there, but it's a good starting point.

Incidentally, this website is intended to automatically generate and serve the documentation for all those projects in a centralized place. You don't even have to submit your projects. Just type the import path in the search box, and the site fetches the code by itself.

Edit: And here's another community-built site, which does continuous integration testing for your projects. It, too, has an index of registered projects: http://goci.me/


No offense but that page is horrible. I liked the project pages though.

I think Go needs to focus on improving its community and its aesthetic culture.


> I think Go needs to focus on improving its community and its aesthetic culture.

You seem more eager to make grand proclamations than to gain understanding.


> No offense but that page is horrible.

I'm pretty sure that page is auto generated by GoDoc, an automatic documentation tool that generates documentation on the fly.

For auto generated documentation I'd say it looks pretty darn good!


>>improving its community

You must read different mailing lists, G+ groups and IRC channels than I do...

>>and its aesthetic culture.

ROFL, what? How do you feel about the "aesthetic culture" of C programmers? Because, you know, that's not a very popular language or anything...


`go get`, but it's nowhere near as featured as npm. Forget about versions, or about using your own or anybody else's fork of a project, without editing code or using symlinks. "github.com/your_fork/somelib" is not a replacement for "github.com/foo/somelib". For those that don't use node, using a fork or a specific commit in node is a simple version change in package.json.


who is talking about forks or versions here?

try this in your command-line: npm install name/project


npm does a lot more than that

npm install pkg@1.2.3

npm install foo/bar#cfdea4213


    $ go get github.com/person/project
It will even figure out dependencies by checking every import (and you can do 'import "github.com/person/project"' too), and builds the source by default.

And yes, "go get" comes with the standard gotools.


As I said in my previous comment, it's just the equivalent of `npm install person/project`, not NPM.

Edit: WTF? Why is this comment down-voted now?


Ah! You mean CPAN!

Nothing that I'm aware yet. It's mostly community-based discovery and recommendations, but a centralized repository would sure be nice.


I'm not sure if there's a direct equivalent, but see if any of these are helpful.

http://godoc.org/

http://go-lang.cat-v.org/

https://code.google.com/p/go-wiki/wiki/Projects


I see what you're saying in your replies down-thread from this, but honestly the fact that there's not a directory of packages has never been an issue for me. I google when I go looking for a package, then "go get" it.


Is this the most productive way of coding?


Honestly I've spent more time installing NPM than I have searching for packages when I need them. Which is not very much time.


And when you can't find the packages, you write them. And at the end of the day it's so nice to have a bunch of packages for each not-so-popular task, yeah.


You think NPM solves discovery?

Look if you or Oakaz feel so strongly, go build that centralized repository. But looking at centralized repos in different languages there are as many misses as hits. And even among hits -- pip for example -- the core benefit pip provides IMO is not discovery. More often than not, I find a Python package via a Google search and then merely install it using pip.

Your snark here is a little absurd IMO.


>You think NPM solves discovery?

I know CPAN does.



It's the equivalent of npm install <package-url> command, not the NPM.


Why would you need a central repository when you can have a completely decentralized repo system?


Why do you see a contradiction? CPAN is a central repository, while the vast majority of package development is now backed by GitHub, for example.


What on Earth are you talking about? Go's dependency system tied to SCM is awesome.


It has a crucial limitation compared to "real" dependency managers like Ivy, Maven, NPM, Leiningen, pip, et al.: you can't declare a dependency on a particular version of a module, only on the trunk. So you have to either maintain local mirrors of every single module, or just give up on reproducible builds and accept that upstream might introduce a bug or a breaking change at any time.

The Go developers are aware of this limitation, and their typical recommendation on the mailing list is that you should never need to use anything other than the latest version of any third-party library, e.g.: https://groups.google.com/d/topic/golang-nuts/3gEseiCwdf0/di...


It's because it isn't a trivial problem to solve. What happens when A requires B1 and C, but C requires B2?

> The Go developers are aware of this limitation, and their typical recommendation on the mailing list is that you should never need to use anything other than the latest version of any third-party library

It's either that, or you pin the version you want in your own fork of the package. Which is not an unreasonable solution to the bitrot problem.

Regardless, this limitation doesn't stop the `go` tool from being awesome. It's easily one of the reasons why I use Go in the first place.


Certainly there will be cases where there are conflicts between different versions of packages, but Go's current behavior doesn't prevent them; it just makes them happen unpredictably. Instead of building against a specific known version, you're just using whichever one happened to be the most recent as of the first time you referenced it.

If B1 and B2 are really incompatible, then yes, there's a problem; but not being able to solve the problem completely in all cases doesn't mean we need to throw our hands in the air and not even try to handle the simple cases. I think it's telling that Google uses a custom build system for their Go projects, rather than "go get".

(And forking all of your dependencies is an option, but it's not quite that simple. In order to get truly reproducible builds, you have to mirror all of their dependencies as well, and patch all of your forked versions to fix their imports.)


> If B1 and B2 are really incompatible, then yes, there's a problem

It's not just incompatibility at the package level. Allowing them to simply coexist in the `GOPATH` ecosystem is problematic. It isn't technically difficult to offer a package tool that can handle different but compatible versions of a library, but it certainly increases the complexity of the system. The simple directory structure of `GOPATH` and its correspondence with import URLs would be lost.

> In order to get truly reproducible builds

Yes, if that's what you need then you'll have to pay that price. But not everyone needs that. For software I release, I only pin dependencies when they have a track record of breaking or don't update frequently enough for my purposes. In practice, it's pretty rare.

> If B1 and B2 are really incompatible, then yes, there's a problem; but not being able to solve the problem completely in all cases doesn't mean we need to throw our hands in the air and not even try to handle the simple cases.

That's ironic because that's exactly how I see the `go` tool. Package management is extremely difficult to get right, so instead of throwing their hands in the air, the Go devs put out a tool that is simple to use and has proven to work very well in practice.

The fact that the `go` tool is so simple is one of the reasons why I love it so much. If and when I need to be 100% confident in my builds, I'll happily pay the price to make it so.


> I think it's telling that Google uses a custom build system for their Go projects, rather than "go get".

Google uses custom build systems for lots of things ;)

In fact, to not under-generalize, I think it's safe to remove 'build' from the previous sentence.


> (And forking all of your dependencies is an option, but it's not quite that simple. In order to get truly reproducible builds, you have to mirror all of their dependencies as well, and patch all of your forked versions to fix their imports.)

It's not only an option, it's a practical necessity to freeze the dependency tree locally so you can run a rebuild when someone's knocked the remote repository offline. Doesn't everyone do this?


I agree that it is not trivial, but the Ruby bundler project seems to manage it reasonably well; why can't Go?

Mind you, Ruby also has a central repository in the form of rubygems, which allows for this kind of versioning.

Go, lacking the same thing, relies on people just entering GitHub URLs, which seems insane to me but could work, I guess, if everyone agrees to tag their projects using semantic versioning where appropriate. Then you could do something like person/project@v1.2.3

Mind you, of all the companies on the planet I could think of that have the resources to create, manage and maintain a rubygems equivalent for Go, Google would be top of the list.


> Go, lacking the same thing, relies on people just entering github URLS which seems insane to me but could work I guess

There's no need to guess. It works. Almost the entire Go ecosystem uses it.

> Then you could do something like person/project@v1.2.3

This has been suggested. But it's just a mechanism for versioned dependencies; it doesn't address the transitive dependency problem that I cited.

As of right now, if you need to pin a dependency to a particular version, then you fork it yourself. I welcome this infrequent pain with open arms to the benefit of keeping the `go` tool simple.

It's not like people haven't tried to setup a central repository with versioned packages [1], but it just hasn't caught on.

[1] - http://www.gonuts.io/


> There's no need to guess. It works. Almost the entire Go ecosystem uses it.

For a given definition of "works". Personally speaking, my "good" has not been met but of course, each to their own. The "Fork it if you want version x" workaround also would not satisfy me.

Google has the resources, reach and authority to easily setup a central repository. If they did then I would not be surprised to see people start using it like CPAN and rubygems.

You say you do not want the go tool to become complex but there is no reason the dependency management side of things cannot be done with a separate tool to keep things segregated.

The current tool seems to roughly match the Ruby 'gem' command (albeit without a central repository). I am surprised Google has not created an analogue of the Ruby 'bundler' gem with its 'bundle' command.


> You say you do not want the go tool to become complex but there is no reason the dependency management side of things cannot be done with a separate tool to keep things segregated.

Yes. I addressed that.


> It's either that, or you pin the version you want in your own fork of the package. Which is not an unreasonable solution to the bitrot problem.

And if that package references another package on github understandably I need to pin it too, but do I also need to modify all of the imports in the original third party package I wanted to reference the pinned dependency?


> And if that package references another package on github understandably I need to pin it too

Only if you want to. You don't have to.

> but do I also need to modify all of the imports in the original third party package I wanted to reference the pinned dependency?

Yes, but it's trivial:

    find ./ -name '*.go' -print0 | xargs -0 \
        sed -i 's#github.com/BurntSushi/wingo#github.com/YourName/wingo#g'


> Only if you want to. You don't have to.

I guess I don't understand why it's expected that I would run production code that imports from a third party automatically. I don't want to deploy updates some random person committed today to some third party package I'm using or some package it depends on, so if I want to use a third party package I have to recursively pin it and its dependencies and recursively modify import statements.


Then don't run `go get` to update your local packages.


burntsushi is completely right. Declaring dependencies on specific versions is peachy until it isn't.

Along those lines, did you know that in Maven 2 (and possibly Maven 3) it's possible to have dependencies pulled into your bundle at a lower version than specified in a top-level pom? Yup, versioning is really hard.


Does the ARM branch mean it's possible to do iOS dev with it?


> Does the ARM branch mean it's possible to do iOS dev with it?

No. You can compile for ARM + a platform (e.g. Linux, BSD) if you want to run Go code on an ARM device. Think RaspberryPi or Beaglebone running an ARM port of Ubuntu/Arch.

Writing Go code targeting iOS (or Android) is a whole 'nother ballgame.


Android is just Linux; Go works perfectly fine on it. Go issues syscalls directly and doesn't use libc for that (it uses glibc for name resolution, but with cgo disabled it doesn't even do that).

Minux did an iOS port; it's not part of the official tree because an iOS port required a rooted device. Perhaps this might have changed, since with two-word function values the Go runtime no longer does runtime code generation.


iOS is too locked down to properly build Go code for. It's possible to write useful Go code on Android though, Brad Fitzpatrick posted about doing so recently:

https://plus.google.com/115863474911002159675/posts/DWmyygSr...


No, the ARM architecture is only supported on Linux right now (although unofficially it's mostly working on some other platforms too).

I remember reading that Go was ported to darwin/arm, but it can never be supported since there is no way to legally run their test suite for it.


This ticket might interest you: http://code.google.com/p/go/issues/detail?id=837

Maybe one day it'll come. If on that day iOS has gained a public XPC API I'll be quite the happy camper.


I'm especially excited about the race detector. It looks like a practical implementation of http://cseweb.ucsd.edu/~savage/papers/Sosp97.pdf. Always exciting to see research coming up in practice!


Hey guys! I was inspired by all these rave responses and decided to download it and try it. When I came to the downloads page I saw only about 1000 downloads (uploaded 38 hours ago), and all my inspiration disappeared.

It would be interesting to compare number of downloads for Python/Ruby.


Go doesn't have exception handling.

How practical is it to use such a language for large code bases?


Practical is subjective, but there are a lot of large, successful projects written in C, so it's definitely feasible.


Of course if you say 'you have to live with it' then you really have to 'live with it'. After some time, you no longer complain about it.

But this is 2013; there are some things that are expected to come with a language by default. Exceptions are one of them; a few other such things I can think of are regular expressions, closures, etc.


Exceptions are extremely controversial in programming language circles. They reduce encapsulation, and the try/catch syntax as seen in java encourages bad error handling. Any idiot can put their whole program in a giant try/catch block.

It's really a hard design problem. I agree that some form of error handling should be a language default, but I'm not sure if "traditional" exceptions should be it. As a C++ dev who has worked on projects with and without exceptions, I can say there are pros and cons to both.


It's not that hard, basically a solved problem in Common Lisp and more recently Rust: use conditions. They only unwind (not re-wind), so there's no worry about exception safety (as in C++) and they allow strictly more power than exceptions.


    But this is 2013, there are somethings that expected to 
    come from a language default. 
Very true!

    Exceptions are one of them,
Very false! :) Exceptions have a lot of detractors.


It has explicit error handling (via multiple return values) which the authors argue is superior (even if more verbose) because they know from experience that programmers tend to make mistakes forgetting to check for errors in their code. If the error code comes back as a return value the programmer has to consciously decide what to do with it immediately, leading to better code on average.
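A small sketch of the idiom (file name invented): the error comes back as a second return value, so the call site has to decide what to do with it right there, or at least visibly discard it.

    package main

    import (
        "log"
        "os"
    )

    func main() {
        f, err := os.Open("config.json") // hypothetical file
        if err != nil {
            log.Fatalf("opening config: %v", err) // the decision is made here, immediately
        }
        defer f.Close()
        // ... use f ...
    }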


That holds as long as you aren't calling what you consider to be a 'return void' function (like writing data to a socket) - then it is easy to forget to check the return value.


It seems a shame that more hasn't been done with Go and games. The concurrency facilities seem particularly fantastic for this.


There is quite a lot of gamedev activity in Go and quite a few bindings to GL, SDL and the like. I've written a (bad) game in it and am currently working on a better one. I know of at least one commercial game being written in it. There are lots of emulators written in Go, for some reason, too.

I'm finding the multiplayer aspects of my current game to be an excellent fit for Go, making good use of concurrency and networking.


I'm doing a little game in it at the moment. Had to write my own allegro bindings (https://github.com/bluepeppers/allegro), but it's fun enough. One thing I really enjoy is mixing event handling with the goroutines. For example, I wanted to start scrolling the map on the user's click, and then stop when they release the button. So on click, I fired up a goroutine just to watch for the button release and handle the scrolling. So much easier than having some variable somewhere called "isButtonXDown" and then checking it in the main loop. Of course, you have to be careful with concurrency, but it's a lot of fun.
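Roughly this pattern, sketched without any particular event library (the channel names and scroll step are invented): the goroutine started on mouse-down just returns when the release arrives, so no shared flag is needed.

    package main

    import (
        "fmt"
        "time"
    )

    // scrollUntilReleased runs as its own goroutine: it scrolls the map every
    // tick and returns as soon as the mouse-up event arrives.
    func scrollUntilReleased(released <-chan struct{}, dx, dy int) {
        tick := time.NewTicker(50 * time.Millisecond)
        defer tick.Stop()
        for {
            select {
            case <-released:
                return
            case <-tick.C:
                fmt.Println("scroll map by", dx, dy) // stand-in for the real scroll code
            }
        }
    }

    func main() {
        released := make(chan struct{})
        go scrollUntilReleased(released, 1, 0) // fired on the mouse-down event

        time.Sleep(200 * time.Millisecond) // pretend the button is held for a bit
        close(released)                    // the mouse-up event
        time.Sleep(50 * time.Millisecond)  // give the goroutine a moment to exit
    }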


A friend did a couple of the Ludum Dare comps in Go. Here's the most recent one: https://github.com/kurrik/ld26


Congratulations, keep up the good work!



