Go channels, goroutines and GC available in Nim (nim-lang.org)
147 points by pwernersbach on June 1, 2015 | 83 comments



This document describes how the GC works and how to tune it for (soft) realtime systems.

The basic algorithm is Deferred Reference Counting with cycle detection.

I'm sitting here feeling very impressed. Deferred reference counting has very good semantics for games. Even better, you can control the cycle detection part separately and run that part at an advantageous time. (Though it's probably better to just let GC do its thing, unless you really know what you're doing.)

I am currently writing a multiplayer game server in golang, by making sure almost everything is allocated on the stack, and heap sizes are small. This gives me an efficient, nearly pauseless server. However, something like Nim could give me even more flexibility.


> I'm sitting here feeling very impressed. Deferred reference counting has very good semantics for games.

Not if you need to be thread-safe.

> Even better, you can control the cycle detection part separately and run that part at an advantageous time.

How would that work with multithreading? (Assuming you had a thread-safe GC, which Nim's isn't.)


Not if you need to be thread-safe.

Everything that has to be thread-safe uses channels. I use channels to sanitize everything for a purely synchronous game loop. As an optimization, in cases where there are atomic operations available, there are places where concurrent code can mutate values visible to the game loop, but this is strictly an optimization technique, to be used judiciously. (So only values like "speed" can use this technique. Anything that's a reference is verboten.)
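
Roughly, the pattern is: workers only send events over a channel, and the game loop drains them once per tick, so game state stays single-threaded. Here's a sketch in Nim since that's the topic (my actual server is in Go, and all the names below are made up):

    import os

    type
      Event = object
        kind: string
        value: int

    var events: Channel[Event]    # shared queue; compile with --threads:on

    proc networkWorker() {.thread.} =
      events.send(Event(kind: "move", value: 42))   # never touches game state directly

    proc gameLoop() =
      while true:
        while true:               # drain everything queued since the last tick
          let (ok, ev) = events.tryRecv()
          if not ok: break
          echo "apply ", ev.kind, " ", ev.value
        sleep(16)                 # ~60 Hz tick

    open(events)
    var worker: Thread[void]
    createThread(worker, networkWorker)
    gameLoop()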

How would that work with multithreading? (Assuming you had a thread-safe GC, which Nim's isn't.)

I didn't realize Nim's GC wasn't thread safe. In my current architecture, you'd only have to worry about the part using channels to sanitize things for the synchronous game loop. If everything outside of the game loop was written such that most everything was allocated on the stack, the GC would never have to collect anything outside of the game loop. So maybe it could work as a port. I couldn't say for sure, though.


A more general approach to shared memory via lockable heaps is a feature Nim seems likely to implement soon.


Sounds interesting. Do you have any links to documentation?

I'm interested in reading more about it, because Nim is doing a lot of experimental stuff and it's always interesting to look at its designs.

(Edited to remove speculation about how well it will perform before reading about it.)


Well, this is currently highly speculative. What I proposed to Andreas was essentially a model based on Eiffel's SCOOP (with some additional influence from Erlang). Whether it's a practical design remains to be seen.

Note that shared, lockable heaps need not be heavyweight structures. It is entirely possible to imagine a shared hash table with one heap per bucket and fine-grained locking, for example. Collections for such small heaps can be fast because the number of roots is limited, and (depending on what invariants you guarantee), you can even forgo stack scanning for most collections or limit the number of stack frames that need to be traversed.


Neat, I'll read more on SCOOP. Thanks!


SCOOP is not a horribly complicated idea (well, other than using preconditions as wait conditions, which has been critiqued in the past and is not a critical ingredient). It's basically an extension of the basic idea of monitors. It is based on the idea of having a unified approach for shared memory and distributed systems, and accomplishes that by assuming that objects can be partitioned into disjoint ("separate") data spaces, access to which is regulated to ensure mutual exclusion; this is why it translates nicely to a model involving thread-local heaps.

At the programming language level, this then mostly involves maintaining mutual exclusion (in Eiffel, the necessary semantics are attached to how "separate" types are handled) and having the optimizer get rid of unnecessary copies.


If you're crossing threads in game code, you're doing it wrong. There's a good chance you're running into false sharing and other pitfalls.

Most games use worker queues (in which case you can use non-GC objects) to deal with architectures like the Cell and for better cache coherency. In that case Nim is a pretty good fit.


> If you're crossing threads in game code, you're doing it wrong. There's a good chance you're running into false sharing and other pitfalls.

Of course, you shouldn't use shared memory unless you need it. But often you need it. Look at how game developers have demanded shared memory in JavaScript, for example. Modern multicore CPUs do a lot of work to make shared memory work, and work well.

> Most games use worker queues (in which case you can use non-GC objects) to deal with architectures like the Cell and for better cache coherency.

I agree with you that GC is often not the best solution for shared memory concurrency. But I think you really need to design the language around "no GC" in order to make that really ergonomic relative to C++. The entire C++ language and library ecosystem is based around making manual memory management (relatively) easy to use; going back to malloc and free is a hard sell.


> Look at how game developers have demanded shared memory in JavaScript

Have a source for that? I find it pretty dubious.

If you're looking to multicore for performance with javascript then you're using the wrong language. Correct memory layout and access patterns will give you a real-world 50-100x win.


Source: I work directly with people who interact with game developers who are asking for it.

Look for asm.js threads on HN: virtually every time it shows up someone brings up shared memory multithreading.

SharedArrayBuffer is the direct result of this popular demand: https://blog.mozilla.org/javascript/2015/02/26/the-path-to-p...


So, I've noticed that over the past six months that every time something about Nim gets posted to HN, you make an effort to discredit the language.

Care to offer an explanation why?


I think Nim is an impressive language that does a lot of things really well. I don't think the memory management is one of them. I believe that memory management and compilation to C are the only two major things I've ever talked about in regards to Nim, because I'm abstractly interested in those topics. If an article about thread-local deferred reference counting in Ruby hit the top of HN and the comments were talking about how that's good for games, I'd probably comment there too.


And how do you do "memory management well"? Like Rust? You pay a high price in complexity and inflexibility for that juicy GC-less yet safe memory management.

Ref counting is not superior or inferior to explicit, restrictive ownership semantics. Those are simply different trade-offs.

Nim might be strictly inferior for writing a heavily multi-threaded web browser because of its memory management approach but that does not mean the approach is generally inferior.

Seems to me that Nim aims to be a "casual", rapid development / friendly (Python-like) language. Ownership semantics like in Rust do not fit there.


I'm personally a fan of regular old Java/C#-like concurrent garbage collection for most "scripting" languages (perhaps surprisingly, given my work on Rust). It's a lot of work to get there, but I think there's no substitute for doing the work—apps really end up needing the flexibility of shared memory. Shortcuts that seem simple like DRC end up tending to run into walls in practice, which is something that the other languages discovered—history keeps pointing to the HotSpot-style garbage collector as the one that systems that need to offer a simple, general-purpose garbage-collected programming model migrate to.

For different use cases Rust-style ownership semantics (when the performance of GC and runtime interoperability become an issue), or Azul-style pauseless GC (when you're willing to trade throughput for latency), or shared-nothing architectures (when you need them for legacy reasons like JavaScript, or want a simple programming model like Erlang) can work great.


"apps really end up needing the flexibility of shared memory"

Why? Just performance or is there a design reason also?


"You pay a high price in complexity and inflexibility for that juicy GC-less yet safe memory management."

It's a price that a lot of people are willing to pay, because of how badly they want what they're paying for.

Nim and Rust have different goals, different tradeoffs, and overlapping target audiences. Having both is good. Making them fight is bad.


I'm guessing because Nim gets portrayed as safe, or offers some safe features, but overall is not memory safe. Rust has worked very hard to solve the problem of providing memory safety with zero runtime cost and reasonably good language features.

Since it's 2015, it seems fair to point out when a new language offers something neat, but in a way that isn't safe.


Nim is as safe as any other language. Perhaps it's not as safe as Rust but that brings specific trade-offs most people don't want to deal with. I don't understand why people think that Nim is "terribly unsafe" when in reality it's like any other language.


> Nim is as safe as any other language.

With regards to memory safety, it is not. https://news.ycombinator.com/item?id=9050999 is an old comment from Patrick, but in today's Nim, it segfaults in both release and development modes for me. Rust's guaranteed memory safety means that Rust code (without explicit unsafe, the vast vast majority of code) cannot segfault.

> I don't understand why people think that Nim is "terribly unsafe" when in reality it's like any other language

For example, unless I write a bad cext, I cannot get Ruby to segfault.

None of this makes Nim a bad language. All languages have tradeoffs.


Yes, Rust is safer than Nim; I'm not arguing that. I'm also not arguing that Nim is as safe as languages with automatic memory management.

EDIT: Also, Nim is planning on turning those segfaults into runtime NilErrors, with a nilChecks flag that will check for them at compile time. You can also avoid this by annotating pointers with `not nil`.


Cool. When you said 'any other language,' I thought you were speaking more broadly than C or C++.


I probably should have been more clear, but I think it's safe to say that, unlike C/C++, Nim can handle these types of issues like other languages that deal with pointers (Java, Go, etc.), with control from the programmer. The only memory-safe language I know is Rust, but I am probably wrong on that part, so that's why I singled out Rust as being safer than Nim.


That's not at all the impression I get from reading the Nim manual. It seems rather clear when it marks many different features as unsafe. Can you declare a function pointer and point it at anything? It allows unchecked array access - what's stopping traditional overflows? I've not used Nim, but compiling to C and exposing a lot of C-like functionality seems to indicate that code will still be subject to the same types of errors. Why do you say this isn't the case? Why does the manual not mention such things? (Another example: doesn't Nim need most objects to be GC allocated to be safe? So if you're not using GC (which I imagine lots of perf sensitive code will want to avoid), what's preventing errors there?)

Maybe I've got the wrong impression and their docs are terribly misleading and there are safety checks all over. But I found the docs easy to understand last time I read them, and the safety issues seemed clearly marked and more or less where you'd expect.


I never said that Nim is completely safe or that it doesn't have unsafe areas in the language. But at this stage of development, Nim really focuses on its language goals rather than anything else right now. I have only stated, multiple times through this thread, that there are ways to avoid this unsafety and ways that will help avoid these situations in the future with Nim (nilChecks).


It has the feel of a scripting language, but as far as I can tell, it rather has the safety of C/C++, which I personally wouldn't call "safe like any other language".


Why not? I'm interested to know because in my opinion I don't see it any less safe than languages that don't have automatic memory management and/or languages like Rust.


Because it is flatly untrue? Memory safety is rather a binary thing. C# without /unsafe is safe. Same for Java and Rust. Not true for Nim or C/C++. Rust is unique in doing this without any GC or other runtime overhead, AFAIK, which makes it a bit special.


Nim does not have a separate unsafe keyword, because all unsafe features are already characterized by keywords; that's a result of its Pascal heritage. To check whether a piece of Nim code is safe, you check for the presence or absence of these keywords; e.g., you can grep for "ptr" in Nim, while grepping for "*" in C# isn't particularly helpful. Every unsafe feature in Nim has an associated keyword/pragma. Having a special "unsafe" keyword that says, essentially, "this procedure can contain other unsafe keywords" is sort of superfluous.
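
To illustrate with a toy sketch (not real code from anywhere), the unsafe parts are lexically visible and greppable:

    type
      Node = ref object          # ref: traced by the GC, memory-safe
        value: int

    proc safeUse() =
      let n = Node(value: 1)     # no unsafe keywords anywhere in this proc
      echo n.value

    proc unsafeUse() =
      var x = 42
      let p: ptr int = addr x    # "ptr" and "addr" mark the unsafe parts
      echo cast[int](p)          # "cast" bypasses the type system and is just as greppable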

Note: these unsafe features have two purposes. One is to interface with C/C++ code. The other is to be able to write close-to-the-metal code in Nim rather than in C (where you wouldn't gain any safety by using C, but lose the expressiveness of Nim). This is, for example, how Nim's GC is itself written in Nim.

None of the unsafe features are necessary for high-level programming, i.e. unless you actually want to operate that close to the metal.


Presumably there are plans to disallow accidental sending of thread-local GC'd pointers to other threads as well?



There's undefined behaviour in the "main" language, e.g. https://news.ycombinator.com/item?id=9050999


That's a bit misleading and mostly due to the fact that (1) Nim hasn't reached 1.0 yet and (2) in practice these issues are relatively uncommon for the C code that Nim generates, so this hasn't been a particularly high priority.

First of all, Nim's backend does not target the C standard; it targets a number of "approved" C compilers, which (1) makes it a bit easier to avoid undefined behavior, because these C compilers know that they may be used as backends by high-level languages and provide efficient means to disable some of the undefined behavior, and (2) allows Nim to emit optimizations specifically for them. For example, Nim knows that gcc understands case ranges in its switch statement and can optimize for that. See compiler/extccomp.nim for some more examples. Nim also makes some additional assumptions about how data is represented that are not defined in the C standard, but are true for all target architectures (or can be made true with the appropriate compiler configurations).
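
For illustration, a Nim case statement over integer ranges like the one below can be lowered to gcc's case-range extension (case 0 ... 9:) rather than a chain of comparisons when gcc is the backend:

    proc classify(x: int): string =
      case x
      of 0 .. 9:   result = "single digit"
      of 10 .. 99: result = "two digits"
      else:        result = "something bigger"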

Second, regarding the specific cases of undefined behavior:

1. That shift widths aren't subject to overflow checks is an oversight; most shifts are by a constant factor, anyway, so they can be checked at compile time with no additional overhead. Nim does not do signed shifts (unless you escape to C), so they are not an issue.

2. Integer overflow is actually checked, but expensive; there's an existing pull request for the compiler to generate code that leverages the existing clang/gcc builtins to avoid the overhead, but that hasn't been merged yet. -fno-strict-overflow/-ftrapv/-fwrapv can also be used for clang/gcc to suppress the undefined behavior (depending on what you want), and one of them may be enabled by default in the absence of checks. (A small sketch of what checked overflow looks like from the Nim side follows this list.)

3. Nils are not currently being checked, but they will be. There's already a nilcheck pragma, but that isn't fully implemented and also not available through a command line option. This will be fixed. Until then, you can use gcc (where -O2 implies -fisolate-erroneous-paths-dereference) or use --passC:-fsanitize=null for clang to fix the issue.
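
To illustrate point 2, here is roughly what checked overflow looks like from the Nim side, assuming default debug-build settings; the backend question is only whether that check compiles to a plain branch or to one of the gcc/clang builtins:

    proc bump(x: int32): int32 =
      result = x + 1          # with --overflowChecks:on the compiler inserts a check here

    try:
      discard bump(high(int32))
    except OverflowError:     # named OverflowDefect in newer Nim versions
      echo "overflow detected at runtime, not undefined behavior"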


Do you have any specific examples of this unsafety in Nim?


Pointers?


Uhh that's not much of an answer...


I haven't done anything in Nim, but in C it's really easy to do bad things with pointers. You can deref a NULL, you can be sloppy about arithmetic, you can overflow a buffer, etc. Nim seems to emphasize using other features instead of pointers, but it still has them.


As stated before, there are ways to avoid them, and Nim will soon handle them. But do you know how many other languages let you deref a NULL pointer? Unlike C, this does not result in undefined behavior, in the sense that it won't execute something unsafe in Nim.


Sorry for my ignorance, but the languages other than C that I've used are javascript, python, various lisps, bash, sql, prolog, etc.: no pointers! I'm interested to learn how pointers might be made safe, in Nim or anywhere else?


It really depends on how you define "safe". Nim will allow you to deref null pointers (unless you annotate the type with `not nil`, in which case it can never be nil; you get compile errors if it is), but if they are nil, it will be like Java and throw an exception with --nilChecks:On. The only language I know that makes pointers safe is Rust, with its borrow checker and such, but that's a tradeoff I don't really want, and the options Nim provides are better for my case.
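
For reference, `not nil` looks roughly like this (a sketch; the exact behavior differs between Nim versions, and newer compilers gate it behind an experimental switch):

    type
      Conn = object
        fd: int
      ConnRef = ref Conn not nil   # a reference that may never be nil

    proc use(c: ConnRef) =
      echo c.fd                    # no nil check needed inside the proc

    # use(nil)                     # rejected at compile time: nil is not a valid ConnRef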


Ada, SPARK, ParaSail


It is undefined behavior in Nim due to compiling to C. https://news.ycombinator.com/item?id=9050999


As stated before in this thread, there _will_ be nil checks in the future, which will result in NilErrors, or you can just annotate the type with `not nil` right now and it will never be nil. You can also use -fsanitize flags with the clang backend to trap the null dereferences.


"Unlike C, this does not result in undefined behavior "

Of course it does. If you turn off expensive runtime checks, you'll get SIGSEGVs. That doesn't happen in Rust, because it is semantically impossible to dereference NULL.


Since when were we comparing Nim and Rust? Yes, Rust is safer than Nim, but that comes with tradeoffs. You are obviously not reading the whole thread, where I bring up (multiple times) the fact that you can avoid these issues and that it will be even easier to avoid them in the future.


"Why not?"

Because it isn't true.

"in my opinion I don't see it any less safe than languages that don't have automatic memory management"

Strawman ... the comment was about scripting languages.

"and/or languages like Rust"

Then it would be unwise to pay any attention to your opinion.


> Because it isn't true

Did you not see the other person who just said that?

> Strawman ... the comment was about scripting languages.

I do not have a clue what you are trying to say...


"Nim is as safe as any other language."

That is factually false.

"Perhaps it's not as safe as Rust"

And there even you have contradicted yourself.

"Perhaps it's not as safe as Rust but that brings specific trade offs most people dont wan't to deal with."

That much is true ... and can be said without telling falsehoods, like your first statement.

"I don't understand why people think that Nim is "terribly unsafe" when in reality it's like any other language"

You are confused by your own strawman.


When --nilChecks:On becomes a thing, dereferencing null pointers will be like Java: a NilError (NullPointerException in Java). This is why I said it's as safe as mainstream languages that don't have AMM, but languages like Rust are safer than those mainstream languages. Any others to point out?


You yourself know that this claim is entirely unsubstantiated, as evidenced by the fact that you felt the need to create a throwaway account. pcwalton is an active commentator throughout HN in general, and garbage collection and parallelism are two of his areas of expertise. If his opinion is somehow uninformed, then tell him so and explain how. If he's not uninformed, then the only thing your comment is doing is trying to shut down legitimate criticism.

All languages have faults. Engage with your critics, own your faults, and either correct them or justify them based on your principles.

EDIT: To give an example, pcwalton also initially criticized Go for not being memory-safe for values of GOMAXPROCS greater than 1. However, the Go team later implemented the dynamic race detector, which, if you've followed pcwalton's comments at all, you know that he is actually quite impressed with.


I always post using throwaways since I don't like karma influencing the content of my posts and it also makes it significantly more difficult for third parties to profile me.


This is some glass-houses logic right here, given that you're attempting to leverage trivially-falsifiable assertions in order to profile pcwalton as a Nim hater with a personal vendetta. In the meantime you have yet to actually address his criticisms, which, to reiterate, indicates that you're trying to shut down critics via deflection.

(I suppose, in the future, pcwalton should just generate a throwaway before commenting on Nim.)


I've noticed that a) that's an ad hominem attack and b) it isn't accurate.


I always wanted to see a Go/Nim interop project in a little different vein. Since Nim is a superset of Go (and Go is fairly simple), why not a Go implementation in Nim via cross-compilation? Then you can even build a Nim macro that does a "Go get" and Nim developers can use Golang libraries right in their code as imports. Not to mention we get a full, alternative Go impl.

It should be easy to bootstrap by first using Go's parser to dump some AST that Nim code can then translate, and then once that version is done, just reference the Go parser using this new impl. The only real struggle with the project, from what I can see, is deciding which standard library items can use the transpiled Go implementation (probably most of them) vs which need to have Nim backends (e.g. the runtime package). Meh, just rambling thoughts in my head I was considering playing with given the time...


> Since Nim is a superset of Go

I seriously doubt that Nim is out of the box binary compatible with another runtime's fundamental types.


I am not looking for binary compatibility. I am looking for feature parity. Go interfaces can be handled with concepts [1] or just macros. I am talking about an alternative Go implementation (i.e. just sitting on top of Nim), not ABI compat between the two runtimes.

1 - http://nim-lang.org/docs/manual.html#generics-concepts


Along that way is C++ and "we should attempt to be a superset of all programming paradigms". Fewer features is better, and more restrictions allow better tooling and iteration if they don't functionally restrict the programmer.

Basically, I'm not sure of the benefit of building go on top of nim. They appear to fill the same space in orthogonal ways.


> Since Nim is a superset of Go

what ?


I was under the impression this was true feature-set wise. I may be wrong though and am happy to be corrected.


Calling something a superset usually means it's actually an expansion of something... kind of like C++11 might be a superset of C++03 (not sure if that's accurate).


That's a really strange metric to use. It'll rarely be true, except in a sort of Turing completeness way.


Almost. Go had the advantage of M:N threading with its goroutines and elegant CSP implementation.


> why not a Go implementation in Nim via cross-compilation?

Nim's macros have one limitation that prevents an accurate implementation of another syntax: they can't fully modify the existing syntax.

See how I had to use "scase" inside "select" blocks because the existing "case" keyword insists on having "of" after it. So Nim's semantics put some limits on the amount of hijacking one can inflict on it through macros.


I don't believe they should accept the full Golang syntax, just help with the import. E.g.

    macro goImport(path: string): stmt =
      # TODO: fetch the package, translate its AST and splice it in
      discard
    
    goImport("golang.org/x/crypto/nacl/box")


If you front compilation with Go's parser, you could generate bog-standard Nim code from the AST.


You can, but you'd be limiting yourself to pure Go. I'd rather have Go with Nim's generics, transpilation to C, macros, compile time computation, etc.


This is awesome! Go's channels and green threads are among the features I really liked about Go. One more reason for me to go past basic prime number programs in Nim :)

I'm hoping to see Rust get green threads/tasks/goroutines too. I'm working on a GC myself, and hopefully someone is trying out a green thread scheduler.


I'm very curious to hear the pros and cons of this from somebody who has intimate knowledge of Nim internals. I really really like Go's CSP model and would love to see it properly supported in another lightweight non-jvm language (yes I know about Erlang, it doesn't fit the bill for me).


Well you can already use Nim threads (http://nim-lang.org/docs/threads.html) and channels (http://nim-lang.org/docs/channels.html) - the model is similar although the implementation uses system threads rather than coroutines.

Nim also has lightweight coroutines using `async` and `await` (http://nim-lang.org/docs/asyncdispatch.html) - you can run a bunch of these within one thread.
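
A minimal sketch of the async/await style (a couple of coroutines multiplexed on a single thread):

    import asyncdispatch

    proc tick(name: string) {.async.} =
      for i in 1 .. 3:
        await sleepAsync(100)   # suspends this coroutine, not the OS thread
        echo name, " tick ", i

    proc main() {.async.} =
      let a = tick("a")         # calling an async proc starts it and returns a Future
      let b = tick("b")
      await a                   # the two tickers interleave on one thread
      await b

    waitFor main()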

Also, have a look at gevent for Python.


Don't look at gevent for Python, look at a responsible async library: asyncio if you like Py3, Twisted if you like Py2 and want something that's very featureful but old-school, Trollius if you wish you could be using asyncio but you're stuck on Py2.

gevent is too much magic. It aims to give you async without changing any of your code. To accomplish this, it monkey-patches the entire Python standard library, in a way that is 99% compatible with Python, but the 1% will constantly surprise and infuriate you. Its compatibility shows no signs of increasing given how much its development has slowed down.

You can use gevent as a quick hack, but you will hate yourself if you have to maintain gevent code.


You make a valid point, but many years of work have gone into the monkeypatching at this point. If you use the latest version, it's very rare to find a place where it fails. It's usually only an issue if you're dealing with a third party module that makes use of native shared libraries.

I use gevent both for small and large projects, and haven't had any complaints. The monkeypatching pains my soul just a little bit, but I've found no better async framework for Python yet.



Ada has a CSP model too, just FYI.

It's a mature language; you could look into it.


I thought the Go runtime runs foreign code on M:M threads; e.g. when Go code calls foreign code, it dedicates a thread to it. This is so foreign libraries (which are unaware of yielding to the Go scheduler when they do I/O) don't block a thread with multiple goroutines scheduled on it. I do not think Nim code can run "in a goroutine".


> I do not think Nim code can run "in a goroutine".

It does. Don't forget that this is gccgo so it is possible to use plain C functions as goroutines. Nim is translated to C and with the help of a macro I convert Nim functions with an arbitrary number of arguments into ones with a single void* arg that gccgo wants for its 'go' keyword implementation:

    extern void* __go_go(void (*f)(void *), void *);
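
Conceptually (this is not the actual macro output, and the names here are made up), the wrapping amounts to a trampoline that boxes the arguments behind a single void*:

    type
      Args = object
        x, y: int

    proc goGo(f: proc (arg: pointer) {.cdecl.}, arg: pointer): pointer {.importc: "__go_go", cdecl.}

    proc work(x, y: int) =
      echo x + y

    proc trampoline(arg: pointer) {.cdecl.} =
      let a = cast[ptr Args](arg)
      work(a.x, a.y)               # unbox the arguments and call the real Nim proc
      freeShared(a)

    proc spawnAsGoroutine(x, y: int) =
      let a = createShared(Args)   # allocate outside the thread-local GC
      a.x = x
      a.y = y
      discard goGo(trampoline, a)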


Yes, but when the Go runtime calls that foreign function pointer, it is going to schedule an entire OS thread for its duration isn't it? If it doesn't, then nothing prevents foreign code from blocking other goroutines. How does the Nim code yield the thread for other goroutines, does it have to register a callback?


> when the Go runtime calls that foreign function pointer, it is going to schedule an entire OS thread for its duration isn't it?

No.

> How does the Nim code yield the thread for other goroutines, does it have to register a callback?

There are no callbacks. Yielding happens automatically when launching another goroutine, when sending, receiving or selecting on a channel. You can also yield explicitly with go_yield() - the better named equivalent of Go's runtime.Gosched().

It's easier to understand if you realize that all those operations with goroutines and channels end up being done in the Go runtime.


> > when the Go runtime calls that foreign function pointer, it is going to schedule an entire OS thread for its duration isn't it?

>No.

Are you sure? Once a thread enters cgo it is considered blocked and is removed from the thread pool according to these sources [1][2]. I previously found a thread where Ian Lance Taylor explained it more explicitly but I couldn't find that now. Is that not what is happening though when __go_go invokes your function pointer?

I do not understand how the Nim code can live in the segmented stack of a goroutine, nor how the Go runtime could know it is time to grow that stack.

[1] https://groups.google.com/forum/#!searchin/golang-nuts/gorou...

[2] http://stackoverflow.com/questions/27600587/why-my-program-o...


> Are you sure?

Yes. See the chinese whispers benchmark with 500000 goroutines and a maximum resident set size of 5.4 GB on amd64. It has the same memory usage (and run time) as the Go version compiled with gccgo.

> Once a thread enters cgo

This has nothing to do with cgo. It's a different mechanism specific to gccgo.

> I do not understand how the Nim code can live in the segmented stack of a goroutine

Good thing you asked. I just ported to Nim the peano.go benchmark described as a "torture test for segmented stacks" and... it failed. The fix was to pass -fsplit-stack to gcc when it compiles the C code generated by nim.

> nor how the Go runtime could know it is time to grow that stack

From what I can tell it's done in __splitstack_makecontext() and friends from GCC's libgo.


Well, cool! Great work :)


How does it handle making a blocking socket call in a go goroutine?


It doesn't, yet.



