
Go GC since 1.6 openly claims <=10ms STW pauses. Does any open source Java GC offer that? Also, Go uses an order of magnitude less memory for a running process compared to something similar in Java, so I do not see how Java's optimizing compilers are doing a better job.
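(If you want to sanity-check the pause claim on your own workload, here's a rough sketch using the runtime package - the allocation loop and the sink variable are just stand-ins for real work:

    package main

    import (
        "fmt"
        "runtime"
    )

    var sink []byte // global so the allocations below aren't optimized away

    func main() {
        // Churn some garbage so the collector runs a few times.
        for i := 0; i < 1000000; i++ {
            sink = make([]byte, 1024)
        }

        var ms runtime.MemStats
        runtime.ReadMemStats(&ms)
        // PauseNs is a circular buffer of recent stop-the-world pause times.
        last := ms.PauseNs[(ms.NumGC+255)%256]
        fmt.Printf("GC cycles: %d  last pause: %dns  total pause: %dns\n",
            ms.NumGC, last, ms.PauseTotalNs)
    }

Running with GODEBUG=gctrace=1 also prints per-cycle pause times to stderr.)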

In my experience Java brings a mindset that there must be some complex way of solving a problem, so let's go find it.




Go 1.6 GC is exactly what I mean. It's a design that gives tiny pauses by not being generational, or incremental, or compacting. Those other features weren't developed by GC researchers because it was more fun than Minesweeper. They were developed to solve actual problems real apps had.

By optimising for a single metric whilst ignoring all other criteria, Go's GC is setting its users up for problems. Just searching Google for [go 1.6 gc] shows the second result is about a company that can't upgrade past Go 1.4 because newer versions have way worse GC throughput: https://github.com/golang/go/issues/14161

Their recommended solution is, "give the Go app a lot more memory". Well, now they're back in the realm of GC tuning and trading off memory to increase throughput. Which is exactly what the JVM does (you can make the JVM use much less memory for any given app if you're willing to trade it off against CPU time, but if you have free RAM then by default the JVM will use it to go faster).
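(For what it's worth, "give the Go app a lot more memory" maps directly onto Go's GC target percentage; a minimal sketch, where 400 is just an illustrative value:

    package main

    import "runtime/debug"

    func main() {
        // Trade memory for throughput: let the heap grow 400% over the live
        // set before the next collection, instead of the default 100%.
        prev := debug.SetGCPercent(400)
        _ = prev // returns the previous setting

        // ... rest of the program ...
    }

Raising it means fewer collections and higher throughput, at the cost of a bigger heap - which is the same trade-off being described for the JVM.)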

BTW, the point of an optimising compiler is to make code run faster, not to reduce memory usage. Go seems to impose something like a 3x overhead vs C; at least, that was the perf hit from converting the Go compiler itself from C to Go (which I read was done with some sort of transpiler?). The usual observed overhead of Java vs C is 0 to 0.5x. The difference is presumably down to what the compilers can do. Go's compiler wasn't even using SSA form until recently, so it's likely missing a lot of advanced optimisations.

tl;dr - I have seen no evidence that the Go developers have any unique insights or solutions when it comes to building managed language runtimes. They don't seem to be fundamentally smarter or better than the JVM or .NET teams. That's why I think Go users will eventually want to migrate: those other teams have been doing this a lot longer than the Go guys have.


> Go GC since 1.6 openly claims <=10ms STW pauses. Does any open source Java GC offer that?

Yes. HotSpot has had configurable max pause times for years and years [1]. If you want less than 10ms, set MaxGCPauseMillis to 10ms. It also has a state-of-the-art generational GC, which is very important for throughput, as bump allocation in the nursery is essentially impossible to beat with a traditional malloc implementation.

[1]: https://docs.oracle.com/cd/E40972_01/doc.70/e40973/cnf_jvmgc...
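For example, something along these lines (the heap size and class name are just placeholders):

    java -XX:+UseG1GC -XX:MaxGCPauseMillis=10 -Xmx8g MyServer

G1 tries to meet the target by doing smaller, more frequent collections, which is where the throughput cost comes from.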


From the link I see:

> The following example JVM settings are recommended for most production engine tier servers: -server -Xms24G -Xmx24G -XX:PermSize=512m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=20 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70

So Oracle recommends 200ms for most prod use. I am not sure how you are able to deduce a ~10ms pause from that link.

And just because one can configure ~10ms does not mean the JVM will actually respect it. There is nothing in any official Oracle document guaranteeing a max GC pause time. The results Google turns up are mostly around ~150ms as the minimum pause.

> It also has a state-of-the-art generational GC.

And it needs something like

http://www.amazon.com/Java-Performance-Charlie-Hunt/dp/01371...

to do what the JVM can do in theory. In practice, as a Java user I am used to seeing ~20-30 second pauses for a Full GC.

The only effort in the open toward sub-10ms GC for large heaps is Project Shenandoah:

http://openjdk.java.net/jeps/189

and it is a long way from availability.


You can ask for 10ms latency and you will get it. This is basic functionality of any incremental/concurrent GC. The throughput will suffer if you do that. But HotSpot's GC is far beyond Go's with regard to throughput, for the simple reason that it's generational.

Non-generational GC pretty much always loses to generational GC in allocation-heavy languages like Java and Go. There is no silver bullet for GC; it requires lots of hard engineering work, and HotSpot is way ahead.


sievebrain - Did you read the full issue? This is an edge case - a program running on a 40-core machine that the developers were trying to keep to a 5MB heap. And yes, the answer was "use more RAM", but by "more" they mean "40MB". Not like gigabytes or anything.

There are always going to be edge cases in any GC/compiler/etc. ... you just can't account for every case. I suppose with Java's infinite knobs you might be able to... but then you have to tune the GC. In Go there's just one knob (a slider, really: more CPU vs. more RAM), and 98% of the time you'll never need to touch it. I had honestly forgotten it exists, and I work on a large Go project daily at work.
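(For reference, that knob is the GOGC environment variable, e.g. something like:

    GOGC=200 ./yourserver    # let the heap grow 200% over the live set between collections; default is 100

where "yourserver" is just a placeholder. GOGC=off turns the collector off entirely.)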


You don't have to tune the Java GC - I never have, and I use and write Java apps all the time.

People can, and for big servers often do, to squeeze out more performance or better latency, but it is definitely not required.

In the presentation I linked to, GC tuning (after switching to G1) reduced tail latencies a bit, but otherwise did not radically change anything.



