Why I Like Go (gist.github.com)
179 points by craigkerstiens on Feb 10, 2013 | 183 comments



These points are all very true, and I admit I haven't done much more with Go yet than play around. But there are a few reasons I feel I don't like Go; or at least, there are warts that will probably limit how much I'm going to like it.

Nil pointers are #1 on that list. Tony Hoare called them a "billion dollar mistake"[1]. In Go, as in C, C++, or Java, any pointer can be `nil` (similar to `NULL`) at any point in time. You essentially always have to check; you're constantly just one pointer dereference away from blowing up the program. Since ways of solving this problem are known, I find this hard to swallow in a new language.
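
To make that concrete, here's a tiny sketch of the failure mode (the User type and find function are hypothetical, not from the article):

    package main

    import "fmt"

    type User struct{ Name string }

    // find returns nil when nothing matches, exactly the case the
    // type system never forces a caller to think about.
    func find(id int) *User {
        return nil
    }

    func main() {
        u := find(42)
        fmt.Println(u.Name) // compiles fine, panics at run time
    }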

Compare Rust where, unless you have explicitly used the unsafe features of the language, all pointers are guaranteed non-nil and valid. Instead of a function returning a nil pointer, it returns an `Option` type which is either `Some ptr` or `None`. The type system guarantees you have considered both possibilities so there's no such thing as a runtime null pointer dereference. Scala has a similar `Option` type, as does Haskell, calling it `Maybe`. In 2013 I don't want to still constantly check for a nil pointer, or have my program blow up at runtime if I forget.

The second disappointment is that when I looked into it, it seemed there are ways to call C functions from Go code, but no working way to call Go from C code. Maybe that wasn't a goal at Google, but it seems like a missed opportunity. As a result, you can't use Go to write an Nginx module, or an audio plugin for a C/C++ host app, or a library that can be used from multiple languages.

I think there is a real unmet need for a higher-level, safer language you can use to write libraries. Imagine if zlib, or libpng, or the reference implementations of codecs (Vorbis, Opus, VP8) could be written in something like Go. Or spelling correction libraries. Currently we have two tiers of libraries: those written in C/C++, which can be used from any language under the sun (Python bindings, Ruby bindings, Perl bindings, PHP bindings...), and those written in high-level dynamic languages (Python, Ruby, Perl, PHP, ...) which can only be used by the same language. We need a middle ground. C isn't expressive or safe enough to deserve to be the lingua franca like this. And Go is tantalizingly close to replacing it, but not quite.

[1]: http://qconlondon.com/london-2009/presentation/Null+Referenc...


Nil pointers are #1 on that list. Tony Hoare called them a "billion dollar mistake"[1].

In fact, Go made nil pointers even worse, since nil may not be nil: http://play.golang.org/p/fIde3QZt9m

Every interface is stored as a tuple of a type (pointer) and a value (pointer). The interface tuple returned by a() is ([]int, nil); comparing this to nil, which is (nil, nil), returns false(!).

So Go doesn't only have null pointers; it has null pointers that sometimes carry a 'hidden' type, and those do not compare equal to the literal nil.
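
The linked playground isn't quoted here, but given the ([]int, nil) description above, the snippet presumably looks something like this minimal reconstruction:

    package main

    import "fmt"

    func a() interface{} {
        var s []int // a nil slice of a concrete type
        return s    // the interface now holds (type=[]int, value=nil)
    }

    func main() {
        fmt.Println(a() == nil) // false: ([]int, nil) != (nil, nil)
    }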


I like the language a lot, but that is definitely a part I dislike. It doesn't feel consistent with the simplicity and obviousness of the rest of the language, and it is error prone. Working full time with Go, this has already gotten me into an hour of debugging two or three times.


> The interface tuple returned by a() is ([]int, nil); comparing this to nil, which is (nil, nil), returns false(!)

It's certainly a subtlety that can bite you, but I'd say that in this form it is arguably more consistent: assigning a typed value that happens to be nil to an interface is not the same as assigning nil to the interface - those two variables are not equal and should be treated as such.


How is this more consistent? This can produce nil pointer dereferences.


> those two variables are not equal and should be treated as such.

In that sense. Don't get me wrong: I'm not saying it fixes other problems with nil pointers, and yes, it introduces an extra possibility for dereference problems. Just like you're allowed to call methods on nil pointers if it's a nil pointer to a type that has those methods.

https://groups.google.com/forum/?fromgroups=#!searchin/golan...


"Compare Rust where, unless you have explicitly used the unsafe features of the language, all pointers are guaranteed non-nil and valid. Instead of a function returning a nil pointer, it returns an `Option` type which is either `Some ptr` or `None`. The type system guarantees you have considered both possibilities so there's no such thing as a runtime null pointer dereference. Scala has a similar `Option` type, as does Haskell, calling it `Maybe`. In 2013 I don't want to still constantly check for a nil pointer, or have my program blow up at runtime if I forget."

So how is nil checking different from writing a function that pattern-matches against a Maybe but only handles the "Just" case?


We have been using Scala lately for a couple of web services that are now running in production at our startup.

The difference is huge, because Option[T] references are type-checked at compile time. Whenever a reference can be either Some(value) or None, you are made aware of it, and you are forced to either handle it (by giving a default value or throwing a better-documented exception) or simply pass the value along as is and make it somebody else's problem.

Option[T] in Scala is also a monadic type, as it implements filter(), map() and flatMap(). It's really easy and effective to work with Option[T]. Unlike "null", which isn't a value you can do anything with other than equality tests, None is an empty container whose element type you know at compile time, and in Scala it's also an object that knows how to do filter(), map() and flatMap().

My code is basically free of NullPointerExceptions. This doesn't mean that certain errors can't still be triggered by null pointers, but those errors are better documented. What's better? A NullPointerException or a ConfigSettingMissing("db.url")?

Of course, to tell you the truth, Option[T] (or Maybe as it is named in Haskell) is only really useful in a static language. In a dynamic language, such as Clojure, especially if you have multi-methods or something similar, well in such a case Option[T] is less useful. And before you ask, no, Go is not dynamic and people saying that Go feels like a dynamic language, don't really know what they are talking about.


>My code is basically free of NullPointerExceptions. This doesn't mean that certain errors can't still be triggered by null pointers, but those errors are better documented. What's better? A NullPointerException or a ConfigSettingMissing("db.url")?

Almost always it is a matter of 2 seconds to find the source of a nil pointer error. Given that I would almost never forward raw error messages to the user, I cannot really see a gain.

However, a language that combined this Scala feature with Go's exception-free error handling would be awesome: a true solution that would make software run more reliably and with fewer crashes.


> Almost always it is a matter of 2 seconds to find the source of a nil pointer error

Either you're some kind of a super-human, or your code bases are really tiny. Yes, you usually can figure out the trigger of a null pointer exception, but not the source that made it happen and with complex software the stack-trace can get a mile long ;-)

The biggest problem with null pointer exceptions is precisely that (1) they get triggered too late in the lifecycle of an app, (2) such errors are unexpected, non-recoverable and sometimes completely untraceable and (3) you need all the help you can get in tracking and fixing it.

Either way, throwing better exceptions is just one of the side-effects of using Option[T], because in 99% of the cases you end up with code that functions properly without throwing or catching exceptions at all. And you completely missed my point focusing only on a small part of it that's also the most insignificant benefit of Option/Maybe.

> However, a language that combined this Scala feature with Go's exception-free error handling

First of all, it's not a Scala-specific feature; the Maybe/Option type has been used and proven in other languages, such as Haskell and ML. That the Go language creators took no inspiration from these languages is unfortunate.

Also, people bitching about exceptions have yet to give us a more reliable way of dealing with runtime errors. The only thing that comes close to an alternative is Erlang, and yet again, the Go language designers took no inspiration from it.


>Either you're some kind of a super-human, or your code bases are really tiny. Yes, you usually can figure out the trigger of a null pointer exception, but not the source that made it happen and with complex software the stack-trace can get a mile long ;-)

Ok even if the stack trace is 10 miles long, you just need to go to the end, right? :P

Anyway, so an exception gets thrown, and Scala forces you to explicitly throw an exception, am I right? How does the other case not crash your program unless you catch it?

>Also, people bitching about exceptions have yet to give us a more reliable way of dealing with runtime errors. The only thing that comes close to an alternative is Erlang, and yet again, the Go language designers took no inspiration from it.

Go uses panic (Go-speak for exceptions) for really bad errors: out of memory, nil pointer dereference... You can catch them like in other well-known languages.

The only difference: your catch blocks aren't cluttered with handling for non-exceptional errors like "file doesn't exist" etc. You are forced to handle those explicitly. Why is this good? Apart from those truly exceptional errors, the state of your program is much easier to determine.
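
For example, here's a sketch of converting a panic back into a plain error value at a boundary (safely is my own name for the helper; recover only has an effect inside a deferred function):

    package main

    import "fmt"

    // safely runs f and converts any panic back into an ordinary
    // error value, so callers deal with it like any other error.
    func safely(f func()) (err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("panic: %v", r)
            }
        }()
        f()
        return nil
    }

    func main() {
        var p *int
        err := safely(func() { _ = *p }) // nil dereference panics
        fmt.Println(err)                 // panic: runtime error: ...
    }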


Lately, the biggest complaints about PL design seem to be that they didn't take enough inspiration from ML-family or Erlang.


For Go? The biggest complaints about PL design are and have always been that its designers ignored or discarded the previous 30 years of PL (theoretical and practical both) when creating it.


Not just Go. But in Go a lot of problems could be solved by doing things the Erlang or ML way... then there's the new problems they've invented, like enforcing things that don't matter instead of things that do.


>then there's the new problems they've invented, like enforcing things that don't matter instead of things that do.

Never heard such complaints from users who have used it for a few months. I assume you are talking about unused imports and variables. Actually, it helps a lot because it keeps your code clean and clear.


There was a quote from .NET authors about roughly 50% of the field issues with .NET code being Null Pointer exceptions.


Pointers can be null; this is a well-known issue. So when you use pointers, you had better check that they are not null. Even better is to have some kind of coding convention or pattern that you follow to prevent this.

Still I don't understand why you would prefer MyCustomException to crash your catch-less program instead of NullPointerException.


Ok, you checked your pointer to see it isn't null. Then you passed it to function foo. Which passes it to function bar. Do foo and bar have to check it again?

That maintains dead code that will never fire, is untestable, and costs runtime. So you will probably not want to re-check the pointer at every point. However, the compiler doesn't help you here. If you ever decide to call foo or bar from any other point without the NULL check, then you will get a crash.

Type safety can solve this. It does not convert "NullPointerException" to "MyCustomException". It converts "NullPointerException" to a compile-time type error (expected Foo, got Maybe Foo. Or: Unhandled pattern in case statement: Nothing).

The trick is simply to differentiate between a pointer that is guaranteed to not be null and one that isn't. Then, disallow using a nullable pointer as a regular one and force a check.


>Ok, you checked your pointer to see it isn't null. Then you passed it to function foo. Which passes it to function bar. Do foo and bar have to check it again?

I guess not. What I'm saying is therefore: if you use a language that allows a lot of stuff, you need to find a convention for your project. One might be: check for nil after assigning variables.

>costs runtime

By all means, no.

Anyway, looks like in need to try Scala and see for myself. (Scala installed: check, Hello World: check)


Checking nil after assigning variables is not helpful for the reason you mentioned earlier: If you check for it and it is nil where it shouldn't be -- you're merely converting one runtime error (null exception) to another (different exception).

If however you use types to distinguish whether it can be nil or not, you simply eliminate the error completely at compile-time.

Glad you're checking it out!

I don't know Scala, I'm a Haskeller myself, but I believe it does get nulls more correctly. It might have bad old null in there too though because of Java interop.


> However, a language that combined this Scala feature with Go's exception-free error handling would be awesome: a true solution that would make software run more reliably and with fewer crashes.

Do you realize that this is actually the case? Every library/API I have seen in Scala so far uses the appropriate abstractions like Option/Either/Try/Validation/... and restricts exceptions to the most exceptional faults.

But anyway, if I had to choose between Go's horribly broken approach of returning multiple values and exceptions, I'd choose exceptions every day. Exceptions are ugly, but at least they are not blatantly wrong like using tuples for error codes.


>Exceptions are ugly

Indeed, you can end up with an indeterminate state. I'll tell you one thing: writing the if err != nil boilerplate in Go is isomorphic to writing try { ... } catch { ... } around each function call in a way that keeps your state clear. The difference with the former is that it reminds you to do this all the time.
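
For reference, the boilerplate in question looks like this (readConfig and app.conf are made-up examples):

    package main

    import (
        "fmt"
        "io/ioutil"
        "os"
    )

    func readConfig(path string) ([]byte, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err // handled (or forwarded) right here, in plain sight
        }
        defer f.Close()
        return ioutil.ReadAll(f)
    }

    func main() {
        if _, err := readConfig("app.conf"); err != nil {
            fmt.Println("config error:", err)
        }
    }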


> Indeed, you can end up with an indeterminate state.

No. I think that claim is hysterically funny considering that Go developers almost never check all FOUR states of Go's style of error handling.

The problem Go is solving here wouldn't even exist if they had designed/used a better language in the first place.

Maybe Go people should stop drinking so much Kool-aid, because they sound like all these Node.js-ninja-rock-star kids who think that they revolutionize asynchronous programming while they reinvent threads, badly.


There is a huge difference: null/nil is a valid value for a pointer, but Option[string] is not a valid value for a string argument, so the compiler forces you to deal with it.

How is that any different from always checking it? When you program in C, you essentially always have to check it; when you program in Scala/Haskell/etc. you only have to check it once.


In Rust at least, pattern matching on enums must consider all possible cases. Option is just an enum, so it's a compiler error if you don't handle the None case.


Guess: the compiler knows you forgot to check the "Nothing" case and will tell you.


Ah, I see. I was confused by Haskell not doing that by default... at least the last time I wrote code on it.

Is there actual data that says that null pointers are actually causing bugs in production software? I always thought they are just a symptom of lazy programmers, and no language can fully protect against that.

Witness my awesome Haskell snippet of code:

    foo (Just x) = x+2
    foo Nothing  = undefined

Ok, so it's somewhat better, since the laziness is now explicit and cannot happen so accidentally.


http://www.computerworld.com.au/article/261958/a-z_programmi...

Anders estimates 50% of the bugs in C# and Java are due to null dereferences.

Your example illustrates the unsafety of undefined, not the unsafety of nulls/Nothing. And you can of course grep for use of partiality in Haskell code and get warnings about partiality in your own functions.

I've seen NULL cause a lot of trouble in production in every setting I've been in.

It is very rare to see people mis-handling a Maybe value in Haskell, simply because you have to be explicit about ignoring the Nothing case.

Also, in Haskell, if you get a Maybe value it is a very clear indication that the Nothing case actually exists and you have to handle it. In C, C#, Java, Go, when you get a reference, it is unclear whether it could be null or not in practice. Checking for null when it isn't warranted is dead code you never test. Avoiding checking for null risks missing checks in cases you actually need to check. All of this is simply not a problem when the types don't lie.


"Is there actual data that says that null pointers are actually causing bugs in production software?"

Have you ever written even one line of code in your life?


This comment really clashes with your user name =P


I believe graue was referring to the case when you are not using such a type in the code; then you have a guarantee that the value is non-null. The purpose of the Some/Option/Maybe types then becomes to indicate when a value can be null, but outside that wrapper type it is guaranteed to be non-null. So code where it must always be non-null does not have to confirm that that's the case. I think it is less about efficiency and more about not having to worry about it.


To me it's about the existence of the nil pointer itself. If there is a pointer, it is guaranteed to be valid by the type system. The other case (which would be represented by a nil or null pointer in other languages) is represented by the "None" type. There's no null pointer to dereference (and no way to blow up).


You should also consider Haskell's Maybe monad, which makes the pattern matching on Just/Nothing unnecessary.


I don't know that this is 100% a problem with the language that cannot reasonably be coped with.

Is it an option to write your own code so that it does not return nil pointers, and only check when you are interfacing with code which might not give you the same courtesy? In other words, in order to avoid driving yourself crazy with paranoid checks everywhere (whether they are done by the runtime or by you), maybe you can defend the invariants at the 'perimeter'.

For that matter, there are going to be many places in many projects where nil pointers are going to be exceptional enough that it is OK to shut down when they occur. There's not always a need to check and handle at points where a nil will be a rare event and not such a big deal when it does happen.

It isn't necessarily a virtue for a program to keep on running when its contracts can't really be fulfilled any more due to the funkiness of its environment or dependencies, so sometimes the best way to handle an error is to just stop and give debug info rather than checking and handling.

I completely agree that it is a huge missed opportunity not to be able to call into Go from C code and I can understand complaints that mandatory gc might prevent Go from replacing C in some domains.


"For that matter, there are going to be many places in many projects where nil pointers are going to be exceptional enough that it is OK to shut down when they occur. There's not always a need to check and handle at points where will be a rare event and not such a big deal when it does happen."

It seems strange to me to create a modern statically typed language that by design doesn't prevent the most common type error (null passed to a function that doesn't handle null), especially when it's so easy to add null safety to the type system.

In C/C++/Java, I need to document in relatively verbose English if a public function/method I write won't handle null. References/pointers that by default aren't nullable are safer, but they also optimize (less documentation and perhaps less machine code) for the common case of functions that don't want to have to deal with null.

I agree that in many many circumstances, nulls are exceptional cases, and I think this is a good thing. At least in the C++ code I write, nulls are rare, so I handle nulls (perhaps by throwing) a small number of places at the borders and then create references from the pointers for internal use. (It's great that null references are undefined behavior in C++.) That way, I get nice stack traces near where unexpected nulls are introduced and I don't have to feel guilty/lazy about not properly handling nulls in my code.

You seem to think non-nullable references force the programmer to use extra checks all over the place. The opposite is true, at least when writing code that someone else might possibly call.


> You seem to think non-nullable references force the programmer to use extra checks all over the place.

No, the other way around. The demand for extra checks all over the place seems to demand non-nullable references. I probably wrote unclearly about this.


> I completely agree that it is a huge missed opportunity not to be able to call into Go from C code and I can understand complaints that mandatory gc might prevent Go from replacing C in some domains.

As rsc has pointed out a few times, this problem is just a matter of someone doing the work to make it happen, not a fundamental limitation of the language itself.


>As rsc has pointed out a few times, this problem is just a matter of someone doing the work to make it happen, not a fundamental limitation of the language itself.

Yes, but since neither he nor any other Go designer has gone ahead and done it, the problem remains.


Why would you want to do it anyway? There is only a handful of programs/libraries that do not exist in C.

The only scenario I can think of is risk-minimizing managers. They allow a project to be done in Go but in case things don't work as expected, they don't want to be trapped in Go and be able to reuse that code from their favorite language.


You assume no new libraries will ever need to be written.


I assume those genuinely new libraries will take some time. Go is over 3 years old and yet there is hardly anything available for Go that is not available for other languages.

In fact the only things that come to my mind are Vitess and Skynet (Terminator future with Go). Being no expert in these areas, I bet there are C equivalents that perform equally well. Also, the Vitess equivalent's implementation might be more complex, in case it exists.


Because a library written in Go will only be useful in Go (a major shortcoming of Go) and Go can use C libraries, I suspect Go will not be used for library creation for the foreseeable future.


>Because a library written in Go will only be useful in Go (a major shortcoming of Go)

I claim that C, C++ and Java are the only languages whose libraries are heavily used from other languages. For other languages it's often better to interface via some kind of network socket.

Surely there are Go libraries like there are Ruby libraries. But which C user seriously would want to interface a Ruby library?

Long story short: this is why Go code does not need to be called from C. :-) Different story in 5 years, in case Go is then sufficiently widespread.


Yes, it just means a human needs to go in and do it. And preferably a few other humans need to go in and check it ...

The real question is whether this is still acceptable for computer languages.


Lua is easily callable from C as well as the other way round. But you are right it is not common for high level languages.


Interesting. Are you aware of any libraries taking this approach (write a library in Lua, create bindings for C, C++, Python, Ruby, etc)?



What about, instead of having non-nullable types, doing what Objective-C does and 'dropping' calls to null objects? There have been a number of times when, instead of my application blowing up from a null exception, whatever feature my user was using just didn't work, but the rest of the application ran fine and they could report the bug to me. It's a much nicer user experience to have the car 'not' blow up when the AC button isn't working.


There are very few problem domains where "do something, even if it's the wrong thing" is better than refusing to run. Perl and PHP have both rightly been criticized for this sort of thinking.

I write automated trading software. If I created a bug, I'd much prefer my program just stop working rather than no-op out a hedging routine or no-op out a regulatory compliance routine. There are also plenty of safety-critical applications where a machine would be perfectly safe if it just stopped working, but would kill someone if it no-opped out a routine. I could see a photo hosting site accidentally giving people without accounts access to everyone's private photos because a filtering routine got no-opped out.

I'll grant you that there are a few small domains where this would be desirable behavior, but I think if it's that much of an advantage, those domains should have domain-specific languages. Skipping method calls on null objects is a terrible feature for a general purpose programming language.


This. One of the most important things I've learned over the years is fail early and verbosely. My code is always littered with pre and post condition checks. As a side effect, we discover 99%+ of bugs before production every time.


I think that in the case of unforeseen error in an application, the program should blow up instead of dropping calls to null objects and continuing on as if nothing has gone wrong.

I would not want to work on code that relies on this type of behavior, since it would be so easy to overlook. Every time an object gets dereferenced I would have to consider that it might no-op and consider how that might affect the rest of the program's execution.

If you really want to 'drop' calls to null objects, be explicit about it and only make the call inside of a conditional that checks that the object is not null. Your fellow programmers will thank you.

Most of the time it is much better that a program crashes instead of silently operating under incorrect assumptions. This is particularly true if the software is doing something critical. But even if it's a casual game, do you want to risk corrupting your customers' saves?

The following blog post makes basically the same argument, but not about null pointers specifically: http://www.codinghorror.com/blog/2007/08/whats-worse-than-cr...


I would like to add one other thing I like about go: There is a parser and an abstract syntax tree in the standard library.

Sometimes when I'm programming at work I need to add log messages to every method on a large interface (or several interfaces). It is obvious to me that there needs to be some tool or library that adds these very simple log messages to these methods for me (like "Entering method getUser(UserId=2701, SessionId='AABCF')").

The tool we have at work uses runtime generated code to add these messages to methods automatically. Besides the performance hit this is fine, except for one problem. Sometimes it doesn't work; and it is impossible to figure out why. You turn on the tool, add all the right metadata, and nothing happens. You could argue that it is a tool problem, but I think one of the big things is that runtime generated code is just vastly inferior to compile time generated code.

If I were using Go, my tool would open the code, read the interfaces, and spit out a nice implementation of logging in a .generated.go file, and if it doesn't work for some reason I can actually go look at the generated code and see why. Trying to figure out why an implementation isn't working by looking at its dynamically generated bytecode or .NET IL is not fun at all.
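
As a rough sketch of what the reading half of such a tool could look like with the standard library (the UserStore interface here is made up):

    package main

    import (
        "fmt"
        "go/ast"
        "go/parser"
        "go/token"
    )

    const src = `package store

    type UserStore interface {
        GetUser(id int) (string, error)
        DeleteUser(id int) error
    }`

    func main() {
        fset := token.NewFileSet()
        f, err := parser.ParseFile(fset, "store.go", src, 0)
        if err != nil {
            panic(err)
        }
        // Print every method declared on an interface; a real tool
        // would emit a logging wrapper into a .generated.go file instead.
        ast.Inspect(f, func(n ast.Node) bool {
            iface, ok := n.(*ast.InterfaceType)
            if !ok {
                return true
            }
            for _, m := range iface.Methods.List {
                for _, name := range m.Names {
                    fmt.Println(name.Name)
                }
            }
            return true
        })
    }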


> runtime generated code is just vastly inferior to compile time generated code

That's a big call. I think you should curtail it, e.g. "runtime generated code can be harder to debug". This is often true but not always. Let's not get into workmen and their tools.


I agree with the GP's statement, because "harder to debug" == "vastly inferior". In my last job we worked a huge amount with generated code, and having generated source that you can step through is essential. I've never seen runtime generated code that's easy to debug.


Your logic seems to be that because some runtime-generated code can be harder to debug (often true, but not always), and you have personally encountered such, all runtime-generated code is therefore vastly inferior. Sorry, but this is ridiculous. Can you contribute something more solid than an anecdote?


I'm sorry you think it's ridiculous but it's just my opinion, after all. Feel free to ignore it. Can you give us an example of runtime generated code that is easy to debug?


I can't. However, I do believe that software development in general is headed away from monolithic compilation, and that future systems will be largely real time, parallel, multi-component, perhaps more heterogeneous, and almost certainly predominantly interpreted.

Our habits trap us into seeing things one way.


Lisp macros?


Lisp macros are compile-time generated code (think "small compiler plugins"); nothing happens at runtime in a compiled implementation.

More importantly, and to tie this back to the OP's comment, I would strongly recommend implementing any non-trivial macro as a "shell" which delegates to a normal function (cf. eval-when); that makes it much easier to debug/trace expansions from the REPL.

In other words: you can have it both ways! :)


I'm still of the opinion Go messed up the error handling badly. As a simple example, note how no tutorial bothers looking at the error return of println calls, and hence they silently ignore errors.

IMHO the ideal solution is that if you make no effort to look at the error return of a function, it is automatically returned to your parent. This is somewhat analogous to exceptions, but exceptions require a lot more syntax. (Yes, I know about panic/recover.)


The most bothersome thing is that at least one other language has a working version of what Go tried to go for: Erlang.

Erlang has exceptions (as Go does, but Erlang does not pretend it doesn't have them), but they're generally "faults": developers don't normally throw or catch exceptions within a process; exceptions are left to kill the process and signal the supervision tree.

Instead, erlang functions return ad-hoc tagged unions, tagged with the atoms `ok` for success or `error` for failure, and this union is pattern-matched on. The pattern match can be a case for handling the error:

    case might_break(Something) of
        {ok, Value} -> handle_correct_case(Value);
        {error, Reason} -> handle_error(Reason)
    end
but if a developer doesn't want to handle the error and wants things to blow up in case of error, it's as simple as:

    {ok, Value} = might_break(something)
If `might_break` returns an error-tagged tuple, this will throw a badmatch.


(and for those who were wondering, a function which may error out but has no "value" to return, such as a print, will just return the atom `ok` in case of successful execution)


I always find it interesting that people repeat the example of println as if it's any different than any other time that you make a decision to ignore or check error codes. It's no different than most other languages that would be in Go's category.

Also, I presume you meant fmt.Println (the built-in println shouldn't be used, and it almost surely doesn't return an error).

>is that if you make no effort to look at the error return of a function, it is automatically returned to your parent

That would be... very, very, very strange for a huge number of reasons. Syntactically, function signature wise... You're basically just asking for exceptions in that case. And besides, that's how most functions that can error work, as I'm sure you know from any example code.

    func makeSomething() (*Something, error) {
        something := new(Something)
        if err := mightBreak(something); err != nil {
            return nil, err
        }
        return something, nil
    }
Yes, it's explicit error handling. Yes, it's griped about on every, single HN thread about Go. And the mailing list, extensively.

For a parallel example, do you call checkError() on your PrintStream (System.out) in Java after calling System.out.println? If not, you're not really checking to make sure println worked: http://docs.oracle.com/javase/6/docs/api/java/io/PrintStream...


Make a decision to, or forget to, check error codes.

My gripe is fairly different from the OP's in this aspect. Go is inconsistent. For functions that return values I need to use, I will be warned by the compiler if I forget to handle the error. For those that don't, the compiler will happily let me accidentally ignore the error.

I'd rather require that all return values be handled whether I use them or not. For the rare case that a function returns a value I honestly don't care about, I can just use _. At least I'm not hiding the fact that I knowingly am ignoring stuff.

(Edit: despite the above, I still do enjoy using Go.)


I can understand why that inconsistency is frustrating. Similarly:

    typedObj, err = something.(SomeType)
    typedObj = something.(SomeType)
    obj = someMap[key]
    obj, ok = someMap[key]


It's

    typedObj, ok = something.(SomeType)
just like it is with maps, or any other ", ok" thing.

http://play.golang.org/p/I-5kD88knC


Oi vey, mea culpa.


> You're basically just asking for exceptions in that case

I'm asking for certain semantics. The syntax does matter, and it could be a lot more concise than standard exceptions (try/catch). For example, this is concise where the called function returns a value and an error, and it requires no extra try/catch/return or similar syntactic decoration.

   // leave out ,err so it automatically gets returned
   value = function();
   // same thing
   function();
   // deliberate ignoring
   value, _ = function();
   // actually use error
   value, err = function();
   if ... { ... }
 
> Yes, it's explicit error handling. Yes, it's griped about on every, single HN thread about Go. And the mailing list, extensively.

There is a chance all those people are right!

> ... do you call checkError() on your PrintStream ...

Which shows exactly why requiring manual checking is a problem. The problem with Java is checked exceptions which do not work well (the checked part). I'm surprised they haven't been relaxed yet.


I became aware of Go just a few months ago, but I already like it so much that I'm planning to do my next/current project in it, after using C++ exclusively for the last 8+ years.

It has a lot of good things going for it, as a modern language should, but IMO that only makes it a viable alternative.

What really pushes it over onto the "I wanna use it" side is how easy and natural it makes concurrent programming. See e.g. vimeo.com/53221560


It's interesting how many people post about moving to Go from an interpreted or VM language background. The original presentation video pitched it as a C++ replacement, but that is clearly not the whole story. This author does not seem to have much experience in native development. I wonder if the Go designers predicted how many people they'd convert from python/ruby/java.


Frankly, I think a lot of the people currently working with C/C++ are avoiding Go because of the garbage collector. You can't trust a garbage collector; you don't know when it will run, how long it'll take, or how quickly it will free the memory you used. C/C++ programmers are used to having complete control over all of this. Sure, you could familiarize yourself with the internals of whichever GC you're working with, but at that point you might as well have invested the same amount of time writing the program in a language without a GC.


I'm going to make the argument that a lot, if not the majority, of C/C++ programmers don't need complete control over memory management. I think in many cases (not OS, not real-time), saying "but I need absolute control over memory management so I can optimize!" is the equivalent of "I write everything in hand-coded assembly because it's more efficient". It's probably not, the machine can probably handle it better, and you're missing out on useful functionality. You know what happens when I allocate and free memory myself? I generally end up with either a leak or a double-free, and I doubt I'm such a terrible programmer that I'm the only one like that.


False dichotomy. If you use RAII, exceptions, and smart pointers in C++ there will be no leaks, no double-free, and not even a need for a GC. I know it is difficult to make a point on the Internet, but resource leaks really are a solved problem in C++.


Yes, but then you're stuck using C++, RAII, and exceptions. Let's not even get into which smart pointer you should use--a Googler I know has recounted stories of massive internal mailing list arguments over exactly which of the half-dozen smart pointer implementations they should be using.


So, because C++ has imaginary problems, we should all switch to a language that solves them, while being worse at all the other things that matter (you know, like code generation)?


> You can't trust a garbage collector; you don't know when it will run, how long it'll take, or how quickly it will free the memory you used

You can certainly control the GC somewhat. For example, disable unpredictable automatic GC and run GC yourself at a more opportune time, or never (if the program terminates before consuming all swap space of course).

I have one app where I loop continuously and invoke the GC every second. Why so often? Crazy, right? No, because the "garbage" generated over that second (thanks to some memory-aware development practices) is minimal, so the GC call will typically return in way under 1ms. Of course that's a meaningless number without app context and other numbers, but you do get some control.

Basically Go gives you both philosophies. On one hand, you can auto-GC and never worry about memory or performance and simply "code out" your need, like you would in Java/C#/Python etc. Or you can carefully design data structures and operations with allocations and GC (or suppressed/nonexistent GC) in mind. Sure, you don't get direct access to malloc()/free(), but implicitly (via Go's data structures, struct values vs. pointers, etc.) you have a great deal more control over memory accesses and allocations.
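
A sketch of that second philosophy (doFrame is a stand-in for real, allocation-light work; setting GOGC=off in the environment has the same effect as the SetGCPercent call):

    package main

    import (
        "runtime"
        "runtime/debug"
        "time"
    )

    // doFrame stands in for one second's worth of work that has been
    // written to allocate as little as possible.
    func doFrame() {
        time.Sleep(time.Second)
    }

    func main() {
        debug.SetGCPercent(-1) // no automatic collections (same idea as GOGC=off)
        for {
            doFrame()
            runtime.GC() // collect at a moment of our choosing
        }
    }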


Yeah, I've had to push code down from our C# layer to our C++ layer on more than one occasion because the GC wasn't freeing memory promptly enough. I guess I don't know the GC well enough to nudge it in the right direction - but now I don't trust it and I plan to keep all high performance stuff in the native layer. I'd like to hear from someone who figured out how to tame the GC in applications where lots of new data is created and destroyed quickly.


That's one of the reasons Rust appealed to me more than Go.

Go has a global mark and sweep GC. OTOH Rust has a thread local, optional GC.


Yes, I think coming from C++, Rust is a more compelling language than Go. However, the Rust type system is a bit more complicated than the Go type system.

Go provides much better static guarantees and much better speed than Python/Ruby/JavaScript, without having to spend any time learning about the type system. However, Go's weaker type system doesn't appeal to a lot of people used to more powerful type systems.


> Go provides ... much better speed than Python/Ruby/JavaScript,

Citation?


No citation needed as long as you understand both languages' runtime stack.

Python is an interpreted language that runs on top of a virtual machine (usually CPython). Types are determined at run time.

  .py -> .pyo* (bytecode) -> VM -> OS
On the other hand, Go (and other C-family languages) have a different runtime stack:

  .go -> binary* -> OS
Between source and binary are a bunch of compiler steps[0]; the * marks where each binary is actually executed. Statically compiled languages do not have to determine types at runtime and are therefore able to optimize code paths much better.

Still don't believe me? Here's a Fibonacci calculation micro benchmark[1] I ran:

    ╭─ting@noa /tmp/fib ‹python-2.7.3› ‹ruby-1.9.3›
    ╰─➤  time ./go
    1346269

    real	0.02s
    user	0.01s
    sys	        0.00s
    ╭─ting@noa /tmp/fib ‹python-2.7.3› ‹ruby-1.9.3›
    ╰─➤  time python3 ./fib.py
    1346269

    real	0.61s
    user	0.60s
    sys	        0.00s
Sidenote:

Java runs on a virtual machine (JVM) but its performance comes very close to C-family languages due to static typing, JIT compilation, and heavy investment in the JVM from many companies.

[0]: For C, compilation goes through these steps:

  hello.c
  (preprocessor)
  hello.tmp
  (compiler)
  hello.s
  (assembler)
  hello.o
  (linker)
  hello <-- binary to run
[1]: https://gist.github.com/wting/77c9742fa1169179235f
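
The Go side of that gist is presumably the naive recursive version, roughly:

    package main

    import "fmt"

    func fib(n int) int {
        if n < 2 {
            return n
        }
        return fib(n-1) + fib(n-2)
    }

    func main() {
        fmt.Println(fib(31)) // 1346269, matching the output above
    }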


Someone just mixed up languages with implementations.

C interpreter -> http://root.cern.ch/drupal/content/cint

Java compiler to native code -> http://www.excelsior-usa.com/jet.html


Yes, if you want to get pedantic about it, languages are separate from implementations. For example, there's CPython, Cython, PyPy, Jython, IronPython.

However, the reality is that most languages' ecosystems and performance are tightly tied to one or two implementations.


I tend to get pedantic about it, given my background in compiler design.

I find it sad that younger generations mix up languages with implementations and come away with the wrong idea that a certain language can only be implemented in a specific way.


> I find it sad that younger generations mix up languages with implementations and come away with the wrong idea that a certain language can only be implemented in a specific way.

Yes, but the parent you responded to wasn't really susceptible to this. It's quite natural to speak of "the performance properties of Language X" as shorthand for "the performance properties of the most widely used implementation of Language X."

English doesn't lend itself well to precision. Therefore, people rely on the ability of others to use contextual clues to infer assumptions.

It's pretty clear in this case what point the parent was trying to convey.

And I'm not sure what youth has to do with any of this.


> And I'm not sure what youth has to do with any of this.

I am already old enough to have coded Z80 assembly back in the day, and I see this mixing of languages and implementations mostly among young wannabe programmers.


And I have seen the "mix up" (if you can even call it that) among all programmers. Mostly for the reasons that I've already outlined. (i.e., there may not be a mix up if people are relying on their readers to infer assumptions through context.)


Dynamically-typed languages can be fast when run with a JIT; for example, Lua+LuaJIT is close to Go in your microbenchmark. On my computer, for n=40, Go is 0m2.156s and LuaJIT is 0m3.199s.


"Fast" and "close" are subjective terms. That's a 50% increase for a few thousand function calls.

PyPy uses a JIT to improve Python's run-time speed, but it's still an order of magnitude slower than statically typed languages.

I've upped n to 40 and rerun with the following languages:

    C:         0.38s
    Java:      0.55s
    Go:        0.90s
    Rust:      1.29s
    LuaJit:    2.19s
    Haskell:   8.97s
    PyPy:     10.06s
    Lua:      22.87s
    Ruby:     22.13s
    Python2:  43.88s
    Python3:  66.28s
All code is available in the previously mentioned gist:

https://gist.github.com/wting/77c9742fa1169179235f


Thanks for the extra results. Obviously, a single micro-benchmark will only take you so far (and something like the Computer Language Shootout gets you farther -- it's a shame and mystifying (to me) that that site no longer has results for LuaJIT...).

But anyway, in my (limited) experimentation with LuaJIT, it's often been within a factor of 2x-3x of the speed of C, which to me is pretty fast, and typical of many statically-typed, compiled languages.


I'm curious what times you get with an iterative version, or at least one using a LUT. As is, this mostly benchmarks the stack (admittedly, that is an interesting data point).


Can't imagine Ruby is twice as fast as Python 2 and three times as fast as Python 3 right now. Can you share your code in a gist?


I don't know if anyone will ever read this thread again :), but just in case: the current front-page post on Julia provides another nice example of a fast, dynamically-typed, JITed language (within 1-2x of C, from their own set of benchmarks).


Yeah, what many of these people state as Go features are actually common to many statically compiled languages in the Pascal/Modula family.

Old becomes new when you don't know it.


"Oh, and there isn't any pointer math, so you won't screw yourself with it."

It has been said before and I'll say it again here: that you can mess stuff up because you don't know what you're doing is a terrible reason not to have something in a language (or even a program).


Incidentally, I don't think it's entirely true that "there isn't any pointer math". I've never used it but I understand that you can do C-like pointer tricks by using the package called "unsafe": http://golang.org/pkg/unsafe/ and http://golang.org/ref/spec#Package_unsafe


Yes, indeed. Here's the "canonical" C strlen translated to unsafe Go with pointer arithmetic:

  package main
  
  import (
  	"fmt"
  	"unsafe"
  )
  
  func strlen(b []byte) int {
  	str := uintptr(unsafe.Pointer(&b[0]))
  	s := str
  	for ; *(*byte)(unsafe.Pointer(s)) != 0; s++ {}
  	return int(s - str)
  }
  
  func main() {
  	fmt.Println(strlen([]byte("ho ho ho\000")))
  }

Not that anyone should use it :-)

The safe and readable variant of this, often used in Go code, is "slice arithmetic", like this: http://golang.org/src/pkg/crypto/sha256/sha256.go#L105
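
For contrast, a sketch of the same loop in that safe slice style (my own variant, not the linked file; it's a drop-in replacement for the strlen above):

    func strlen(b []byte) int {
        n := 0
        for len(b) > 0 && b[0] != 0 {
            b = b[1:] // re-slice forward instead of incrementing a pointer
            n++
        }
        return n
    }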


Pointer math is effectively replaced with [slices][1], which are much more flexible and less error prone.

[1]: http://blog.golang.org/2011/01/go-slices-usage-and-internals...


I think it's an excellent reason for not having something in a language.

It's not you or me I'm worried about; it's all the other people, and their libraries that I may take dependencies on.

The more error-prone the constructs available in a language, the more likely artifacts created with the language, on average, are to have errors. The easiest way of eliminating these problems is to eliminate the constructs.

This only becomes a serious problem if the workarounds are too verbose, too inefficient, or impossible. Not all applications will need the workarounds. If that subset of applications is productive, the language may be viable - or at least, won't be non-viable because of lack of the given constructs.


I agree, but at the same time it seems good for pointer math to be marked off as unusual - something you only do when you are down a manhole, not the default way of doing normal things.


Pointer arithmetic is an important feature in languages like C, but languages like Go don't really want it. The language's domain doesn't call for it, and it plays havoc with the GC.


It gets Damien Katz's approval so I'm learning it ASAP.

https://twitter.com/damienkatz/status/289575389641179137


These posts make me feel like Go is some oddball quirky person we all know, and we all feel like we have to talk about how great he/she is at some drunken party. I mean, we all love Go and all ...


Some people need support from others to confirm they are making a good choice.


Another great thing:

gofmt - Go has no style guide, just this tool!

Committed code ends up looking cleaner and you don't have to adapt to others' wacky standards. No more tabs vs. spaces arguments or where to place that brace... brilliant.

http://golang.org/doc/effective_go.html#formatting


It does have a style guide, it's just that gofmt will automatically make your code conform to that style guide :)

The Go style is close enough to my normal C style that I end up mostly conforming anyway, but gofmt is great for making code look a bit prettier.


Unrelated: I like the fact this person used Github as a "blog". I've been looking for some good blogging software & host for a while, but Github might just fit my needs :)


It's pretty common... it leverages GitHub Pages (1), which itself is powered by Jekyll (2).

(1) http://pages.github.com/ (2) http://jekyllrb.com/


Word of caution: GitHub has, at least in the past, had issues with gh-pages taking hours to build. So if you are writing something particularly timely (like a project announcement) it might not be the best idea. I gave up on using it for precisely this reason about a year ago, so things might have improved.


Check out http://substancehq.com too; it is a very simple hosted blogging engine.


Thanks, it seems like a nice tool. Formats Objective-C as well! Great for my purposes :)


Argh! I felt rage reading: Package system, or lack thereof. Since Go compiles everything statically, you don't have to worry about packages at runtime. But how do you include libraries into your code? Simply by importing them via URL, like so: import "github.com/bmizerany/pq"

It was all good until that point.


Are you going to say why, or just let us guess?


My guess: Remote packages are useless because (afaik) you can't specify a revision. This means "go get" will get the latest HEAD, which could be anything.

Go needs something like Ruby's Bundler to let a package declare what versions are required and let you pin to specific versions.


If I desperately needed a specific version of a package, I'd fork the project and freeze it where appropriate.

Does the `go` tool need to be aware of versions? IMO, no. Would it be a nice feature if it could be added without making the tool difficult to use? Yes.

But so far, it hasn't appeared to slow anyone down. I've worked quite a bit with distribution tools for both Python and Haskell, and Go's tool is a breath of fresh air.


Question re: only utilizing one CPU and GOMAXPROCS: would it be in bad taste for a program that might be used by other people (and that uses goroutines extensively) to include "runtime.GOMAXPROCS(runtime.NumCPU())" in its initialization by default?
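
That is, something like this sketch:

    package main

    import "runtime"

    func init() {
        // Claim all cores; the default at the time of writing is GOMAXPROCS=1.
        runtime.GOMAXPROCS(runtime.NumCPU())
    }

    func main() { /* ... goroutine-heavy work ... */ }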


I asked this on the freenode #go-nuts channel. The consensus was no: 1) not every program benefits from this, and 2) it's hoped that this will be deprecated soon.


C or Java programs use all cores by default. I think it's reasonable for Go to have the same default.


C, previous to C11, didn't even have a concept of threading...

(And who the hell is using C11 like that anyway?)


OK, pthreads.


It is going to be a pretty hard sell that mimicking pthreads is ever a good idea...

Regardless, considering the casual nature of goroutines and the deliberate heavy-weight nature of pthreads, I find it hard to see pthreads as being any sort of argument about what Go should do, regardless of pthreads' shortcomings. They are wildly different animals.


I've been thinking of leaving Ruby for web applications. Both Go and Haskell are contenders because of the compilation step.

I've been doing iOS so much lately (almost 3 years now) that along the way I've come to love compiled languages much more.

I still use Ruby for backend services and smaller scripts. But I want to leave it for larger backend software.

This was a nice list, thank you.


Sounds like a list of C# benefits. In 2004.


Except for goroutines and channels, and the type system, and the package system.


Well, 'package' system is a big word. It does not have versioning, checksums, or signatures. An import of a package may bring in (1) a version that is API-incompatible; (2) a version that is API compatible but has new bugs; and (3) a version that has been trojaned/backdoored/whatever.

The only solution is doing your own package management in $GOPATH, tracking a bunch of Git/Mercurial repositories and finding out by hand which commits are sane and which are not.

It's a disaster in the making, really.


Ah you mean Task Parallel Library and NuGet?


I didn't realize NuGet let you do remote path import. Nor that it was around in 2004. Nor that it's part of the C# language spec.

Goroutine + Channels are vastly different than the TPL. The comparison is so far off that I can't even come up with a clever analogy. There's more to concurrency than threads, and more to communication than locks.


> I didn't realize NuGet let you do remote path import. Nor that it was around in 2004. Nor that it's part of the C# language spec.

Fair enough, if you want to limit yourself to 2004 and compiler-specific support.

> Goroutine + Channels are vastly different than the TPL

If you take off your Go-coloured glasses and read up on what the TPL does, you will see that goroutines and channels are indeed available as Tasks and Queues.


I don't know enough about Go, but Goroutines and Channels sound quite similar to F# agents and mailboxprocessor concepts, which as I understand is not quite the same as TPL. The former tackles concurrency, while the latter is about parallelism. C# 5 is getting there with async/await concepts I guess.

Here's a dining philosophers implementation in F# and Go (among other languages)

http://rosettacode.org/wiki/Dining_philosophers


C# doesn't build native binaries on all platforms out of the box.


Like any other language.

Don't mix a language with the available implementations.

You can get native binaries with ngen or mono -aot.

The dependency on CLR is no different than relying on libc being available when using dynamic linking.


It's what was left out of Go that makes it great... e.g. most or all of the Go benefits could probably be recreated in C++ using libraries, but that doesn't make C++ equivalent to, or as nice as, Go for developing in.


C# has a dependency on the CLR, i.e. either the .NET framework or Mono has to be installed.


Implementation != Language


You can't use the language without the implementation. They're bound together. Unless you're suggesting someone's going to write a C# compiler that creates native binaries?


ngen, mono -aot, bartok, IL2CPU ?


But I like Haskell more.


"Go compiles down to native code and it does it fast. Usually in just a few seconds."

Very strange statement. Most programming languages/runtimes do small operations very quickly.


What do you mean? He means that you can compile a pretty large program in seconds; most languages' compilers aren't that fast.


> most languages' compilers aren't that fast

All mainstream languages compile in about that ballpark, if not faster.

I've always found it odd that one of the driving design principles of Go was "fast compilation". It strikes me as short-sighted, obviously motivated by the Go authors' history with C++. If only they had spent some more time studying other languages, they would have realized that compilation times are really not a concern in languages these days, and they might also have designed Go with more modern features in mind, such as exceptions and generics.


Not really a concern? I don't agree. Look at large C++ projects, and let's not even speak of GPU frameworks like CUDA, OpenCL and OpenACC. If it weren't a concern, software companies wouldn't invest in clusters just for compilation.


Contrary to what many young developers may think, C and C++ aren't the only languages with compilation to native code.

Already in 1987, Turbo Pascal 4.0 was compiling quite fast; Borland stated around 34,000 lines/minute for version 5.5 in 1989.

Similar compilation speeds have been possible for languages with modules since the early '80s.

Go's compilation speed is only news for someone that never used a module based language for systems programming like Modula-2 or Turbo Pascal, just to name two.


It doesn't matter if somebody knows Turbo Pascal 4.0 or Modula 2 if you can't use it.

We are repeating history because the old ways were forgotten or replaced.


What he was trying to say is that you shouldn't "design for fast compilation". You should design the language to allow modularity and fast compilation will naturally follow.


> It doesn't matter if somebody knows Turbo Pascal 4.0 or Modula 2 if you can't use it.

Some Modula-2 compilers:

http://www.modula2.org/adwm2/

http://plas.fit.qut.edu.au/gpm/

http://www.modulaware.com/

http://www.excelsior-usa.com/xdsx86.html

Modern Turbo Pascal compatible compilers:

http://www.freepascal.org/

http://www.mikroe.com/mikropascal/avr/

http://turbopascal.org/

http://www.embarcadero.com/products/delphi

It is just a matter of looking for them.


Didn't I just single out C++ as the only mainstream language being slow to compile?


Apparently you're stuck in a world where nobody around you uses C++. I hate to be the one to break the news to you...


Didn't I just call C++ "mainstream"?


Actually, compile time was a huge factor during the design of Go, due to the massive code bases at Google.

See http://talks.golang.org/2012/splash.article for discussion on the import/compile system.


At the kind of project sizes you see at Google, you start to measure compilation time in tens of minutes. If you have a source control system that triggers an automatic build every time you upload a change for review, can you see how waiting 45 minutes (a number I got from a current Googler) to compile and verify that your change doesn't break the build might just be an inspiration to put a focus on compilation speed?

Also, when you code with generics, you're coding with Hitler.


What do you mean? Even C# is measured in millions of lines per second. The only compilers that are kind of slow these days deal with complex static type systems (Scala), whereas Go has a simple static type system and should compile fast.


Agreed. Scala feels as slow to compile as C++ did in the early 2000's. Kotlin and Ceylon are showing that it's possible to be a modern JVM language and still compile fast. Hopefully, Scala can learn from them.


Martin Odersky does care a lot about compilation speed and pulls out as many tricks as possible to make it faster. But Scala has a far more powerful type system than I suspect Kotlin and Ceylon even approach, and I don't think many people would give up Scala's features for better compilation speed, which can in any case be mitigated with better incremental compilation (disclaimer: my experience is about 5 years out of date).


That's what it's supposed to be, but somehow when I press "Go" on VS, in a project with <200,000 lines, it takes several seconds to compile and start.


This is not the compiler's fault, but rather something choking elsewhere in your VS tool pipeline, most likely the debugger.


With C#, though, it has to compile again at runtime, which destroys your advantage.


That hasn't been true for a long time (ngen.....).


Ngen is useless if you're deploying to IIS. Think of views, asp.net pages etc. You're not going to put them in the GAC, are you? It makes pretty much no sense to ngen anything other than stuff exposed to COM or core framework bits. All you do is fuck yourself at deploy time.


I've never done any asp.net programming, but then isn't C# competitive with everything else in that space (like python, ruby...)?

.net has a pretty good back end and the only thing you really fight with is not instruction compilation/execution time, but garbage collection. GC is mainly why you might want to use C++ sometimes instead of C#.


Well, to be honest, we decided to use Go rather than C# for a couple of things (our logging back end and a configuration store forked from doozer), and it ended up being used for considerably more in the end, precisely because it is very easy to a) get deterministic GC behaviour and b) profile it.


Is there an implicit claim being made that go is already competitive with C# on performance? If so that would be huge, the CLR is very mature and tuned, much faster than the JVM.

I'm also surprised to hear they are ahead on garbage collection. Assuming they are using a sweep and/or copy collector and not reference counting, why is it more deterministic than the CLR's?

I'd like to see some benchmarks/numbers if you have them handy.


I would write it up but you're not allowed to (check the license that comes with windows server and the .net SDK).

Anecdotal evidence points to Go being faster on nearly all counts bar regular expressions, once you know what you are doing. The GC differences come from the fact that Go does a hell of a lot fewer allocations. It's possible to write complex code in Go which doesn't allocate or GC at all. The same is not true for the CLR. Fundamentally the GC in Go is slower and causes more pauses, but you can easily tune around it without shitty hacks like the disruptor pattern.
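
A minimal sketch of that pattern (illustrative only, not code from our system): preallocate a buffer once, then reuse it, and the hot loop never produces garbage:

    package main

    import "fmt"

    // fill reuses a caller-supplied buffer instead of allocating,
    // so calling it in a loop produces no garbage at all.
    func fill(buf []byte) int {
        sum := 0
        for i := range buf {
            buf[i] = byte(i)
            sum += int(buf[i])
        }
        return sum
    }

    func main() {
        buf := make([]byte, 64*1024) // the only allocation
        total := 0
        for i := 0; i < 1000; i++ {
            total += fill(buf) // steady state: zero allocations
        }
        fmt.Println(total)
    }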

Anyway, numbers cannot be published thanks to Microsoft's suspicious intentions with their EULA, so you'll have to compare for yourself behind closed doors.


I've read the EULA; it's mostly about best/fair practices when releasing benchmarks to the public. But having this codified in the EULA must give legal departments the shivers! I'm surprised there are no perf benchmarks on the web comparing Go... to anything! The last I heard they were way behind even the JVM on perf; just because they compile ahead of time doesn't give them much of a natural advantage here (since JIT compilation is not a perf problem in practice).

.net has value types that you can use to avoid allocations. I've used them to write GC-free code where it's necessary, like a physics engine. I'm not sure if this is comparable to how you are avoiding GC in Go.
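
Go's structs are value types too, so the analogous trick carries over. A hypothetical sketch (escape analysis, visible with "go build -gcflags=-m", keeps these values off the heap):

    package main

    import "fmt"

    // vec3 is a plain value type: passing and returning it
    // copies the struct, with no heap allocation involved.
    type vec3 struct{ x, y, z float64 }

    func add(a, b vec3) vec3 {
        return vec3{a.x + b.x, a.y + b.y, a.z + b.z}
    }

    func main() {
        v := add(vec3{1, 2, 3}, vec3{4, 5, 6})
        // Printing boxes values into interfaces (that allocates),
        // but the arithmetic above is allocation-free.
        fmt.Println(v.x, v.y, v.z)
    }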


What's "pretty large"? It's fair to say that compilation is quick, but that's meaningless without a comparison.

For example, you can generate really short C++ programs that exploit the template mechanism to perform heavy calculations at compile time.


It's not a scientific test or anything, but the results are so marked it's almost worth not even bothering to measure it:

24k loc of C (24743 total, excluding headers and comments), using ninja and cmake, without the autofail tool suite (which ups the time taken by about 150%):

    $ time make
    ...
    real	0m16.929s
    user	0m9.952s
    sys	0m4.641s
23k loc of Go code (23588 total, excluding comments):

    $ time go build
    ...
    real	0m1.667s
    user	0m1.331s
    sys	0m0.214s


It looks like Go has a compiler implementation with a custom linker and object format. Turbo Pascal got much of its speed from its linker (and by having a module system instead of include files) so they may have decided on the same strategy. They do list the Wirth family of languages as an influence in the FAQ -- of course Oberon and Plan 9 also have history with the Go designers.


Another one in a long list :) The case for Go: https://gist.github.com/ungerik/3731476


I remember that Go had some issues related to the garbage collector on 32-bit machines. Are they solved now?



Is Go used any where else besides at Google?


yes


Google PR is doing a great job with the Go language. Every day 2-3 stories are highly rated about a language not many people outside Google care about. Time to hellban some accounts, maybe?


Unlike Java or C# or some other languages, there's basically no monetization path for Google from Go. They don't sell a Go compiler or IDE or anything like that. The whole thing is open source and free. Go, for them, is a project aimed at solving their own problems, and if it solves yours too, so be it. With that in mind, your theory doesn't make a whole lot of sense.

As one of the people who does upvote the Go stories and has posted one myself, I think the real reason you see this is that Go is an incredibly fun language to learn (at least if you're already a programmer; I can't speak for true neophytes), and the design of both the language and the API reminds you how much you can get out of combining a few simple concepts. After years of dealing with other people's 1000-line FactoryOfFactoriesFactory classes in Java, such simplicity is incredibly refreshing, and it is difficult not to try to direct other programmers into taking a look.

IMO there's a lot of benefit to playing around with Go even if you end up not using it as a primary language and thus I plan to continue evangelizing it.


Go is a "Google language" about as much as C is "the AT&T language".

So you really think Google is paying developers to blog about Go? Where do I sign up, I want some of that cake...


Says an account called "nogo" created 5 minutes before posting?


Ironically there is someone creating new accounts and trolling each of the Go posts. Last time it was someone spewing inconsistent lies about the Go build system.


My money is on 0xABADC0DA doing it. An obsession with Google's relationship with Go, and thinking that Google employees were gaming HN to push Go, was kind of his MO. Really quite obnoxious; I don't know who he thought his audience was.


Well, I think somebody's account will be hellbanned today...


Simple C Interface

By using build directives you can integrate with C libraries really easily.

And it's a bunch of magic. I thought these people were serious about having a language for "systems programming".

What good is it if it can't properly interface with the systems programming language that people actually use, C?

It seems they wanted to add "C interop", but didn't understand what that means... at all. You give it a header with function prototypes, because that's apparently the only way to tell it "I want to call this native function", and you get... undefined references for the native functions? What the hell is it even trying to do? You don't have source, and you don't have linker stubs, for a lot of the libraries you want to interop with.

It's a complete non-starter. It's interop for awesome magic demos.


What?

CGO is pretty cool, actually. It does have some irritating downsides (pkg-config, for example, is used, and there's no other option for linking to a specific library AFAIK)...

...but seriously, what are you even talking about?

All you need is a header, and a binary (library; .so or .dll). Done.

How would you even use a library that doesn't have a header?

Sounds like you've spent some time using COM objects and the . autocomplete function on them; that's just windows crazy-talk, if you want to use that kind of nonsense, go write powershell.


Nonsense. Go's cgo facility is a fantastically productive way to wrap C code.

I've written a lot of wrappers using Python's C API and a bunch of wrappers using cgo, and cgo is light years ahead.

For a real-world comparison see this Sqlite wrapper for Go at http://code.google.com/p/gosqlite/source/browse/sqlite/sqlit... and compare it to a similar wrapper for Python at http://code.google.com/p/pysqlite/source/browse/#hg%2Fsrc. The Go wrapper isn't a "magic demo", it's code that works reliably in production.

The bottom line is that cgo lets me quickly write production quality C wrappers for Go using Go itself.
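
To make it concrete, here's a minimal sketch of what cgo usage looks like (assuming a system libm is available; the #cgo line is one of the build directives mentioned upthread):

    package main

    /*
    #cgo LDFLAGS: -lm
    #include <math.h>
    */
    import "C"

    import "fmt"

    func main() {
        // C.sqrt calls straight into the C math library; cgo
        // generates the glue code from the header included above.
        fmt.Println(float64(C.sqrt(C.double(2))))
    }

All you need is the header and the shared library to link against; no source, no hand-written stubs.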


It sounds like you couldn't get it to work and then decided they've done it wrong. I'm not terribly convinced.


You, literally, have no idea what you're talking about. And I'll simply laugh at your very last sentence.



