
> Go I just don't like as a language (personal taste) and I think the GC adds a bit too much of pausing for this app

Not saying you have to like Go, we all have different preferences. But FYI, the GC has gotten much better in the latest releases. I don't think you would be bothered by GC pauses. (My understanding is that it's both faster and more "spread out".)




Personally, I don't care about the pauses, I care about having a complex runtime.

Consider a widely deployed library written in C -- say, zlib or libjpeg or SQLite or openssl. Could you rewrite those in Go or Haskell or Scala? No, because nobody wants the surprise of linking a harmless-looking C library and all of a sudden pulling in an entire GC that starts extra threads, etc.

In other words, Rust is the first practical language in a long time that could be used for a new library that might be used in millions of different applications written in dozens of languages.


We're doing real-time stuff where these pauses matter. It's just that we don't really need GC for anything, and adding it to certain services causes more harm than good.


If the execution is measured in nanoseconds, the GC pauses are measured in milliseconds, sometimes tens or hundreds...

Also, RAM has a cost: for a garbage-collected service you need to reserve some extra RAM. Scaling these processes adds a memory overhead we're not so eager to take.
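
To put a number on that overhead: with Go's default GOGC=100, the heap is allowed to grow to roughly twice the live set before the next collection, and that headroom is exactly the extra RAM you have to reserve. A minimal sketch of the knob (debug.SetGCPercent is the real API; 50 is just an arbitrary example value):

    package main

    import "runtime/debug"

    func main() {
        // Lowering the GC percentage trades CPU (more frequent
        // collections) for a smaller reserved heap; raising it does
        // the opposite.
        debug.SetGCPercent(50) // example value, not a recommendation

        // ... rest of the service ...
    }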


Oki, got it. Can't decide if that sounds like horrible conditions or a fun challenge =)

For reference, the Go 1.5 GC propaganda:

> Go 1.5, the first glimpse of this future, achieves GC latencies well below the 10 millisecond goal we set a year ago.

Blog post: https://blog.golang.org/go15gc

Slide: https://talks.golang.org/2015/go-gc.pdf

Talk: https://www.youtube.com/watch?v=aiv1JOfMjm0

Edit: Meanwhile, over in Java land they measure stop-the-world pauses in seconds, not milliseconds:

http://stackoverflow.com/questions/15696585/long-gc-pauses-i...

https://blogs.oracle.com/poonam/entry/troubleshooting_long_g...


10ms is also 2/3rds of a frame @ 60 FPS, which is going to guarantee dropped frames if you're doing anything soft real-time.

Also, most GC'd languages don't give you strong control over data locality (by nature of everything being a reference), so you pay in cache misses, which are not cheap.


Go gives you decent control over layout, much better than most scripting languages. It's not quite as good as C, mostly due to not having a wide variety of very complicated data structures to choose from; you've got structs, arrays, and maps. But the first two in particular let you do many things to control locality.
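
To make the locality point concrete, a quick sketch (Point and sum are made-up names):

    package main

    import "fmt"

    // A slice of struct values is one contiguous allocation, so
    // iterating it streams through cache lines with no pointer chasing.
    type Point struct{ X, Y float64 }

    func sum(points []Point) float64 {
        var s float64
        for i := range points {
            s += points[i].X + points[i].Y
        }
        return s
    }

    func main() {
        points := make([]Point, 1024) // values stored inline
        // A []*Point instead would make every element a separate heap
        // object, and every access a potential cache miss.
        fmt.Println(sum(points))
    }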

It's also worth pointing out 10ms is the maximum, not the minimum. There are plenty of workloads where you're not going to see pauses anywhere near that long. It's certainly not impossible to write a 3D game in Go and get 60 fps with high reliability, especially with the amount of stuff nowadays that actually runs on the GPU. You're not going to be able to do this for your next AAA game, but I don't see a huge reason an "indie" game couldn't use Go. (Probably not my first choice; all else being equal with library support, I'd personally suggest Rust for a lot of other reasons (the kind of concurrency you often get in games is well supported by Rust's borrow checker), but contrary to some of the discussion here, I'd say Go is still on the table.)


It's not worth it, because as soon as you go down this route you're going to be constantly thinking about the GC.

If I add a cyclic reference here, will that make GCs longer? Maybe I should have a free list and reuse these objects (after all, they're responsible for most of my allocations)?

As soon as you're thinking like that as you write each line of code, you've lost all of the benefits of the language being high-level, and you'd be better off controlling memory manually.
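
Which is sort of the point: the free-list workaround itself is mechanical in Go (a sketch using the real sync.Pool, with a made-up particle type), but you're the one who has to notice you need it:

    package main

    import "sync"

    // Hypothetical object type standing in for "these objects".
    type particle struct{ x, y, dx, dy float64 }

    var pool = sync.Pool{
        New: func() interface{} { return new(particle) },
    }

    func spawn() *particle {
        p := pool.Get().(*particle)
        *p = particle{} // reset whatever state the previous user left
        return p
    }

    func release(p *particle) {
        pool.Put(p) // hand it back instead of leaving it for the GC
    }

    func main() {
        p := spawn()
        release(p)
    }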


> It's certainly not impossible to write a 3D game in Go and get 60 fps with high reliability, especially with the amount of stuff nowadays that actually runs on the GPU.

Maybe not impossible, but highly, highly improbable. If you say that, then you have no idea what it takes to hit 60fps constantly. It is HARD, VERY HARD.


> It's certainly not impossible to write a 3D game in Go and get 60 fps with high reliability...

But it's hard to write a 3D game in Go and get 60 fps with certainty.


But so what? It's impossible to write a 3D game in anything and get 60 fps with certainty. If the McAfee Disk Bandwidth Consumer decides to kick in, you may take a penalty on disk access. If the system specs are slightly lower than you planned for, you don't get your guaranteed 60 fps. If the GPU overheats and underclocks, you don't get 60fps.

It's not that hard to write something in Go where you pay two milliseconds every 5 minutes or something, or less than that. Again, let me reiterate, 10ms per sweep is the max, not the min. Plus it sounds like people think that Go somehow guarantees that you're going to pay this on every frame or something, rather than potentially separated by seconds or minutes.
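
You don't have to guess about this, either; the runtime will tell you what it's actually doing. A sketch (runtime.ReadMemStats is the real API, and GODEBUG=gctrace=1 will also print a line per collection):

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    func main() {
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        // PauseNs is a circular buffer of recent stop-the-world pause
        // times; (NumGC+255)%256 indexes the most recent entry.
        last := time.Duration(m.PauseNs[(m.NumGC+255)%256])
        fmt.Printf("collections: %d  last pause: %v  total paused: %v\n",
            m.NumGC, last, time.Duration(m.PauseTotalNs))
    }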

As I said, Go probably isn't my first choice for a game anyhow, but people are grossly overstating how bad it is. Games may be high performance, but they also do things like run huge swathes of the game on a relatively slow scripting language like Lua or some Lisp variant or something. It's not like the network servers that are Go's core design motivation are completely insensitive to performance or latency either.

(Plus to be honest I reject the premise that all games must be 60 fps on the grounds that they already aren't. I already disclaimed AAA vs. Indie. But it's still not a bad discussion.)


GC pauses are not the real reason people hate GCs, even when they say otherwise. It's mostly about the cognitive overhead a GC imposes on you and the compromises it forces you to make. If it's there, you are always aware of it; you cannot ignore it; you know it's unpredictable, and you know there are more predictable choices, which makes it very hard to feel good about the quality of the software you write. It's like it forces you to accept mediocrity.


Lua, slow? I don't even know where to begin with that one.

Like I said in another part of the thread, different tools for different domains. You certainly can get a consistent 60 FPS on consoles, which are also much less forgiving of techniques that would be perfectly fine on a PC.


For another comparison: 10 ms is nearly your entire frame at 90 Hz for current VR headsets.


I encourage you to read the links in the comment you responded to. If you had, you would have found that as of Go 1.5 the pauses are 2ms or less for a heap size < 1GB. In Go 1.6 this was reduced further. The 10ms pause that you're thinking of is the upper limit, not what actually happens even at heap sizes of 100GB+.


2ms is still more than the entire budget we'd dedicate to animation in a lot of scenes. You want predictable performance in these cases, and tracing GCs are fundamentally opposed to that.

Generally you're much better off using a scripting language like Lua for the places where you want the advantages a GC brings, but scoping it so you can limit the damage it does to your frame time.


What about microseconds using the category of collectors designed for your application area (real-time or minimal delay)?

http://www.cs.technion.ac.il/~erez/Papers/real-time-pldi.pdf


CAS on every property write? That adds up really quickly, which is why Rust has Rc vs Arc.

The issue with GC is that it's not deterministic. There's a whole other aspect that has to do with free heap space: quite a few GCs need some multiple of the working set free to do the compact phase, and if they don't have it, GC times start to spiral out of control.

On an embedded device (or a modern game console) you'll be lucky to have 5-10 MB free. On the PSP we used to have only 8 MB total, since 24 of the 32 MB went to video+audio. We still ran Lua because we could constrain it to a 400 KB block and we knew it would never outgrow it.

Just like everything in software there's different tools for different domain spaces. Otherwise someone would just write one piece of software that fits every problem and we'd all be out of a job.


I don't have the working-set numbers for that example. I just had latency, which you were discussing, and it maxed out at 145 microseconds per pause on the highest-stress test, usually lower. As far as working set goes, there's a whole subfield dedicated to embedded systems. One early system from IBM that I found benchmarks for, running on a microcontroller, hit 85% of peak performance with 5-30% of the working set going to GC. They basically trade in one direction or the other.

The more common strategy I see in CompSci for your use case is a mix of memory pools, GC, and safe manual management. Ada had memory pools, and I'm sure you get the concept. Safe manual is when static analysis shows a delete can happen without safety consequences, so that allocation can go unmanaged. Then what's left is handled by the concurrent, real-time GC. In typical applications, that's a small fraction of memory.
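
If it helps, a memory pool in that sense is just preallocation plus an explicit free list. A minimal sketch in Go, since that's the language under discussion (all names hypothetical):

    package main

    // Fixed-capacity pool: all storage is allocated up front, so the
    // collector never tracks individual items and the footprint
    // cannot grow.
    type item struct{ payload [64]byte }

    type pool struct {
        items []item
        free  []int // indices of unused slots
    }

    func newPool(n int) *pool {
        p := &pool{items: make([]item, n), free: make([]int, n)}
        for i := range p.free {
            p.free[i] = i
        }
        return p
    }

    func (p *pool) alloc() (int, *item) {
        if len(p.free) == 0 {
            return -1, nil // exhausted; the caller decides what to do
        }
        i := p.free[len(p.free)-1]
        p.free = p.free[:len(p.free)-1]
        return i, &p.items[i]
    }

    func (p *pool) release(i int) {
        p.free = append(p.free, i) // the slot goes back on the free list
    }

    func main() {
        p := newPool(1024)
        i, _ := p.alloc()
        p.release(i)
    }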


Yup, and in games that's exactly the space that Lua/UnrealScript/etc. fit neatly into.

The issue is with using a GC-based language in areas where you need high throughput and low latency (there's the whole cache-miss thing, which GC exacerbates).


10ms latency is crap if you're doing something like realtime ad exchange bidding, where you have <60ms bid times. Really do not want that hiccup.


Curious: how long can a pause be before it's a problem? (I don't write these kinds of services.)


"It depends". Some services are insensitive to pauses of several minutes (email processing; you may not want it there all the time but spikes to that level are often acceptable). Some services are sensitive to pauses in the single-digit milliseconds (for instance trying to maintain a high frame rate).


Two years ago I was working on a system that cared about single digit microseconds, luckily with low throughput.


Lots of people have to worry about single digit microseconds with high throughput.


I don't think I implied otherwise.


How do you handle the reference counting pauses in Rust then; rely on them being deterministic? Or do you completely avoid the reference counting boxes?


How does reference counting pause? There's no stop-the-world GC action, as the garbage collection is amortized over every object deallocating itself.


That's not entirely true: if your decrement results in a cascading free of an entire tree of objects, you pay the deallocation time for that entire tree -- decrement, free, decrement children, free, etc.

And unless your RC algorithm is more clever than most, that's going to be stop-the-world.


Rc is scoped to a single thread, so it'll be, at worst, stop-the-thread.


Not the OP, but in my experience it's fairly rare to encounter someone using the `Rc` type in Rust. It's nowhere near as prevalent as `shared_ptr` seems to be in C++, for example.


There is also the `Arc` type, which is an atomic reference counter. You still need those, especially if you need to share stuff between multiple threads.


You can often get away with using scoped threads.


This is a good article about the different wrapper types in Rust: http://manishearth.github.io/blog/2015/05/27/wrapper-types-i...


Yeah, a lot of that has to do with the borrow checker guiding you towards single-ownership designs, which is a good thing(tm).


I've been playing around with Go a bit recently, and ported part of an older application for which I already have C++, C#, Java and other implementations and benchmarks to compare against.

My current result is that Go performance can be really good (C++ level) if I try to avoid allocations as much as possible. With a sloppier implementation that e.g. did not reuse buffer objects ([]byte) between network packets, performance dropped significantly, to about 1/3 of the C++ implementation, and GC time dominated.

Fortunately it's quite easy to avoid garbage in Go: we have value types, the synchronous programming style means lots of stuff can be put on the stack, and there's not so much garbage from objects captured in closures and the like. All in all I'm quite confident that with a moderate amount of optimization Go can be suitable for lots of application types, although I would not necessarily try to use it for low-latency audio processing if something else (that produces garbage) is running in the same process.
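
Concretely, the buffer reuse that made the difference looks roughly like this (a sketch; conn and handlePacket are stand-ins):

    package main

    import "net"

    func serve(conn net.Conn, handlePacket func([]byte)) {
        buf := make([]byte, 64*1024) // allocated once, outside the loop
        for {
            n, err := conn.Read(buf) // same backing array every time
            if err != nil {
                return
            }
            // handlePacket must not retain buf past this call; the
            // sloppy version allocated a fresh buffer per packet here.
            handlePacket(buf[:n])
        }
    }

    func main() {}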


If you're going to work at that low a level, what are the advantages of golang over rust?



