
It's interesting how many people post about moving to Go from an interpreted or VM language background. The original presentation video pitched it as a C++ replacement, but that is clearly not the whole story. This author does not seem to have much experience in native development. I wonder if the Go designers predicted how many people they'd convert from python/ruby/java.



Frankly, I think a lot of the people currently working with C/C++ are avoiding Go because of the garbage collector. You can't trust a garbage collector; you don't know when it will run, how long it'll take, or how quickly it will free the memory you used. C/C++ programmers are used to having complete control over all of this. Sure, you could familiarize yourself with the internals of whichever GC you're working with, but at that point you might as well have invested the same amount of time writing the program in a language without a GC.


I'm going to make the argument that a lot, if not the majority, of C/C++ programmers don't need complete control over memory management. I think in many cases (not OS, not real-time), saying "but I need absolute control over memory management so I can optimize!" is the equivalent of "I write everything in hand-coded assembly because it's more efficient". It's probably not, the machine can probably handle it better, and you're missing out on useful functionality. You know what happens when I allocate and free memory myself? I generally end up with either a leak or a double-free, and I doubt I'm such a terrible programmer that I'm the only one like that.


False dichotomy. If you use RAII, exceptions, and smart pointers in C++ there will be no leaks, no double-free, and not even a need for a GC. I know it is difficult to make a point on the Internet, but resource leaks really are a solved problem in C++.


Yes, but then you're stuck using C++, RAII, and exceptions. Let's not even get into which smart pointer you should use--a Googler I know has recounted stories of massive internal mailing list arguments over exactly which of the half-dozen smart pointer implementations they should be using.


So, because C++ has imaginary problems, we should all switch to a language that solves them, while being worse at all the other things that matter (you know, like code generation)?


> You can't trust a garbage collector; you don't know when it will run, how long it'll take, or how quickly it will free the memory you used

You can certainly control the GC somewhat. For example, disable unpredictable automatic GC and run GC yourself at a more opportune time, or never (if the program terminates before consuming all swap space of course).

I have one app where I loop continuously and invoke the GC every second. Why so often? Crazy, right? Not really, because the "garbage" generated over that second (thanks to some memory-aware development practices) is minimal, so the GC call typically returns in well under 1ms. Of course that's a meaningless number without app context and other numbers, but you do get some control.

Basically Go gives you both philosophies. On one hand, you can auto-GC and never worry about memory or performance and simply "code out" your need, like you would in Java/C#/Python etc. Or, you can carefully design data structures and operations with allocations and GC (or suppressed/non-existent GC) in mind. Sure, you don't get direct access to malloc()/free(), but implicitly (via Go's data structures, struct values vs. pointers, etc.) you have a great deal more control over memory accesses and allocations.


Yeah, I've had to push code down from our C# layer to our C++ layer on more than one occasion because the GC wasn't freeing memory promptly enough. I guess I don't know the GC well enough to nudge it in the right direction - but now I don't trust it and I plan to keep all high performance stuff in the native layer. I'd like to hear from someone who figured out how to tame the GC in applications where lots of new data is created and destroyed quickly.


That's one of the reasons Rust appealed to me more than Go.

Go has a global mark-and-sweep GC. OTOH, Rust has a thread-local, optional GC.


Yes, I think coming from C++, Rust is a more compelling language than Go. However, the Rust type system is a bit more complicated than the Go type system.

Go provides much better static guarantees and much better speed than Python/Ruby/JavaScript, without having to spend any time learning about the type system. However, Go's weaker type system doesn't appeal to a lot of people used to more powerful type systems.


> Go provides ... much better speed than Python/Ruby/JavaScript,

Citation?


No citation needed as long as you understand each language's runtime stack.

Python is an interpreted language: the reference implementation (CPython) compiles source to bytecode and executes it on a virtual machine. Types are determined at run time.

  .py -> .pyo* (bytecode) -> VM -> OS
On the other hand, Go (like other natively compiled languages) has a different runtime stack:

  .go -> binary* -> OS
Between source and binary there are a bunch of compiler steps[0]; the * marks where each binary actually executes. Statically compiled languages do not have to determine types at runtime and are therefore able to optimize code paths much better.

Still don't believe me? Here's a Fibonacci calculation micro benchmark[1] I ran:

    ╭─ting@noa /tmp/fib ‹python-2.7.3› ‹ruby-1.9.3›
    ╰─➤  time ./go
    1346269

    real	0.02s
    user	0.01s
    sys	        0.00s
    ╭─ting@noa /tmp/fib ‹python-2.7.3› ‹ruby-1.9.3›
    ╰─➤  time python3 ./fib.py
    1346269

    real	0.61s
    user	0.60s
    sys	        0.00s
Sidenote:

Java runs on a virtual machine (the JVM), but its performance comes very close to natively compiled languages due to static typing, JIT compilation, and heavy investment in the JVM from many companies.

[0]: For C, compilation goes through these steps:

  hello.c
  (preprocessor)
  hello.tmp
  (compiler)
  hello.s
  (assembler)
  hello.o
  (linker)
  hello <-- binary to run
[1]: https://gist.github.com/wting/77c9742fa1169179235f


Someone just mixed up languages with implementations.

C interpreter -> http://root.cern.ch/drupal/content/cint

Java compiler to native code -> http://www.excelsior-usa.com/jet.html


Yes, if you want to get pedantic about it, languages are separate from implementations. For example, there's CPython, Cython, PyPy, Jython, and IronPython.

However, the reality is that most languages' ecosystems and performance are tightly tied to one or two implementations.


I tend to get pedantic about it given my background in compiler design.

I find it sad that younger generations mix up languages with implementations and come away with the idea that a given language can only be implemented in one specific way.


> I find it sad that younger generations mix up languages with implementations and come away with the idea that a given language can only be implemented in one specific way.

Yes, but the parent you responded to wasn't really guilty of this. It's quite natural to speak of the "performance properties of Language X" as shorthand for "the performance properties of the most widely used implementation of Language X."

English doesn't lend itself well to precision. Therefore, people rely on the ability of others to use contextual clues to infer assumptions.

It's pretty clear in this case what point the parent was trying to convey.

And I'm not sure what youth has to do with any of this.


> And I'm not sure what youth has to do with any of this.

I am old enough to have coded Z80 assembly back in the day, and I see this mixing of languages and implementations mostly among young wannabe programmers.


And I have seen the "mix-up" (if you can even call it that) among all programmers, mostly for the reasons I've already outlined. (I.e., there may be no mix-up at all if people are relying on their readers to infer assumptions from context.)


Dynamically-typed languages can be fast when run with a JIT -- for example, Lua+LuaJIT is close to Go in your microbenchmark. On my computer, for n=40, Go is 0m2.156s and LuaJIT is 0m3.199s.


"Fast" and "close" are subjective terms. That's a 50% increase for a few thousand function calls.

PyPy uses a JIT to improve Python's run-time speed, but it's still an order of magnitude slower than statically typed languages.

I've upped n to 40 and rerun with the following languages:

    C:         0.38s
    Java:      0.55s
    Go:        0.90s
    Rust:      1.29s
    LuaJit:    2.19s
    Haskell:   8.97s
    PyPy:     10.06s
    Ruby:     22.13s
    Lua:      22.87s
    Python2:  43.88s
    Python3:  66.28s
All code is available in the previous mentioned gist:

https://gist.github.com/wting/77c9742fa1169179235f


Thanks for the extra results. Obviously, a single micro-benchmark will only take you so far, and something like the Computer Language Shootout gets you farther -- it's a shame, and mystifying to me, that that site no longer has results for LuaJIT...

But anyway, in my (limited) experimentations with LuaJIT, it's often been within a factor of 2x-3x of speed of C, which to me is pretty fast, and typical of many statically-typed, compiled languages.


I'm curious what times you get with an iterative version, or at least one using a LUT. As-is, this mostly benchmarks the call stack (admittedly that is an interesting datapoint).


Can't imagine Ruby is twice as fast as Python 2 and three times as fast as Python 3 right now. Can you share your code in a gist?


I don't know if anyone will ever read this thread again :), but just in case: the current front-page post on Julia provides another nice example of a fast, dynamically-typed, JITed language (within 1-2x of C, from their own set of benchmarks).


Yeah, many of the things these people cite as Go features are actually common to statically compiled languages in the Pascal/Modula family.

Old becomes new when you don't know it.



