Nim 1.0 (nim-lang.org)
785 points by treeform on Sept 23, 2019 | 303 comments



Congrats to the Nim team.

One thing that is frustrating for anyone hearing about Nim for the first time is that it's really hard to look at what appears to be yet another slightly different take on Rust or Go and intuitively understand why it exists.

There is absolutely a grid that can be populated: who started this, is it corporately affiliated, what languages is it similar to, what is the motivation of its creators, what is the philosophy driving its design, is it intended to be an eventual competitor or replacement for languages XYZ, where does it sit on the spectrum between raw assembly and MS Access, is it intended to be a "systems language" eg can you write a kernel or device driver using it, does it require a VM, is there corporate support available, how established is the developer community, are there books and courses, does it have garbage collection, is it a compiled language, is it a strongly typed language, what are the three perfect use cases, how stable is the API, what's the package management story, does it have a reasonable standard library, is there editor support for color coding and linting, what's the job market look like, is the documentation well-written, and most importantly, if we all got in the same room, would I want to hang out with the people excited about this language?

The cost of learning yet another language vs actually spending your finite spare time building something (or with your loved ones, or reading, or sleeping, or making art) is insanely huge.


I would say metaprogramming (and maybe the excellent FFI) is the huge stand-out feature for Nim.

However whilst you can compare all these languages and find a particular niche or set of features that sell them, Nim is just good at pretty much everything.

I know that sounds pretty bombastic, but you can practically pick any task and know that Nim will let you get there rapidly and performantly. That's its ultimate strength. It's just so productive! It puts the fun back into a day job.

It's as easy to develop in as Python (great type inference and strong, static typing), but as fast as well-written C, yet (AST based) metaprogramming allows it to be higher level than even Python.

On top of low developer friction, it outputs stand-alone executables for effortless distribution. It's my first choice for scripting, for CRUD work, web stuff, game development, and embedded coding.

Doesn't hurt that it can compile to C, C++ and JavaScript so you can write your backend and frontend in the same language.

Also since the FFI is so great, you can effortlessly use the entire library ecosystem of C and C++.
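For the curious, a binding is roughly a one-liner (a minimal sketch along the lines of the printf example in the Nim manual):

  # Bind C's printf directly; the varargs pragma lets us pass any
  # number of arguments after the format string.
  proc printf(format: cstring) {.importc, varargs, header: "<stdio.h>".}

  printf("Nim calling C: %s %d\n", "hello", 2019)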

Of course, it's not perfect, but for me it's getting pretty close.


> (AST based) metaprogramming allows it to be higher level than even Python.

I'm skeptical of this claim. Python's ast tools let you do just about anything you'd want to, and just about anything you'd never want to do as well.


Scala and Rust both compile to JS too, I think.


Do they have first-party JS backends though? I think that's what sets Nim apart, the Nim compiler includes the JS backend which means the JS backend is always up to date with the latest Nim developments.


Sorry, I was wrong: it's asm.js for Rust. Scala does have an amazing first-party backend that I love using.


I didn't know there was an asm.js backend for Rust. I do know there are Emscripten and LLVM backends for WebAssembly.


Rust sorta can, but prefers to be compiled to WebAssembly.


This sounds like everybody should be using Nim. Why do they still use Python, C# etc and why is Nim still so rare then?


Having had its 1.0 release a day ago might have something to do with it.


Python has been around since, what, 1994? C# goes back to '99 or '00 or something like that.

Nim goes back to maybe 2008 and only hit 1.0 today.


Talk of metaprogramming intrigues me. I'd like to hear what a Lisp user makes of it because I find non-Lisp users are usually amazed by any metaprogramming at all and can't be as critical about it.


Not a Lisp user, but a heavy Nim macro user. Nim macros are AST based, which means that after the parser part of the compiler has read the code and created a tree structure of it, your macro is called on this tree. You can then modify this tree to your heart's desire and return a new tree that the Nim compiler will splice back into the main tree before continuing. This means that you can rewrite pretty much anything into pretty much anything else. You also have access to reading files on the machine, or doing pretty much anything else that Nim offers. For examples of what you can do with it you can have a look at my macro to create GUIs: https://peterme.net/cross-platform-guis-and-nim-macros.html, my article on how I use macros to create more readable and maintainable code: https://peterme.net/metaprogramming-and-read-and-maintainabi... or perhaps see how Protobuf support was written as a string-parsing macro which reads a Protobuf specification and spits out all the types and procedures you need to use it: https://github.com/PMunch/protobuf-nim
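To make that concrete, here's a tiny toy sketch (my own, not from the articles above) of a macro that receives a block as a tree and returns a rearranged tree:

  import macros

  # The block arrives as a statement-list NimNode; we return a new
  # statement list with the statements in reverse order.
  macro reversed(body: untyped): untyped =
    result = newStmtList()
    for i in countdown(body.len - 1, 0):
      result.add body[i]

  reversed:
    echo "world"
    echo "hello"
  # Prints "hello", then "world".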


> which means that after the parser part of the compiler

Lisp works slightly differently. The macro form is an expression (a list) with the macro operator as the first element. The next elements in the expression can be anything, as data.


Disclaimer: I only did a hobby project in Lisp once. But I did use some of the macro functionality.

Judging by some of the comments here, it seems like the macro system takes a similar approach to Lisp's macro system, which is also AST-based. Something I don't see here is macros that generate other macros, but the question is how much you really want that anyway (when I did that, I thought the syntax was horribly complicated because of all the quoting). I know Lisp also has reader macros (they run before the parser) that allow you to effectively change the language syntax, but I didn't use those.


> Something I don't see here is macros that generate other macros, but the question is how much you really want that anyway

You can do that in Nim in a readable way.

  import macros
  macro genMacro(name: untyped): untyped =
    result = quote do:
      macro `name`: untyped =
        result = quote do:
          echo "Foo"

  genMacro(bar)
  bar # Generate and perform the echo
Surprisingly I've actually used this kind of thing!

In one of my projects I use a macro to parse a set of types for fields and generate constructor macros for them.

The generated constructor macro passes through the parameters it's given to the default built-in constructor but does some setup before.

The final generated code is a normal built-in construction without any proc call, yet with special fields initialised automatically.


> In one of my projects I use a macro to parse a set of types for fields and generate constructor macros for them.

Funny, I did a similar thing! :)


Lisp macros are not based on an AST. The macro forms in Lisp need to be valid s-expressions and begin with a macro operator. That's it.


Still, isn't it AST-based in the sense that the input of a Lisp macro is effectively a parsed syntax tree, just as in Nim?


The only restriction is that it is a s-expression: a possibly nested list of data. There is no particular programming language syntax it is parsed against - it's parsed as data.

For example this is a valid Lisp macro form

    (loop for i below 10 and j downfrom 20
          when (= (+ i j) 9) sum i into isum of-type integer
          finally (return (+ j isum)))

It returns 10.
The LOOP macro parses it on its own - it just sees a list of data. Other than that, Lisp has no idea what the tokens mean or what syntax the LOOP macro accepts.

    CL-USER 142 > (defmacro my-macro (&rest stuff)
                     t)
    MY-MACRO

    CL-USER 143 > (my-macro we are going to Europe and have a good time)
    T
Works.


The data has an AST. The AST is made of conses, and atoms. The atoms have a large variety of types. They are not "tokens", but objects. Tokens only exist briefly inside the Lisp reader. 123 and |123| are tokens; one converts to an integer node, one to a symbol node. (1 (2 3)) has an AST which is

    CONS
   /      \
  FIXNUM   CONS
  |        /   \
  1      CONS   SYMBOL
        /     \     \ 
      FIXNUM   CONS  NIL
        |      /   \
        2   FIXNUM  SYMBOL
              |        |
              3        NIL


> The data has an AST.

(not (eq 'has 'is))

But for an Abstract Syntax Tree for code we have more categories: function, operator, call, control structure, variable, class, ...


That kind of tree is stipulated by a particular form of compiler (well, parser) writing dogma revolving around C++/Java style OOP. You set up classes with inheritance representing various node types; everything has methods, and "visitors" walk the tree, and various nonsense like that.

If our code walker dispatches on pattern matches on the nested list structure of conses and atoms, we don't need that sort of encapsulated data structuring. The shapes of the patterns are de facto the higher level AST nodes. The code walking pattern case that recognizes (if test [then [else]]) is in fact working with an "if node". That AST node isn't defined in a rigid structure in the underlying data, but it's defined in the pattern that is applied to it which imposes a schema on the data.

If that's not an AST node, that's like saying that (1 95000) isn't an employee record; only #S(employee id 1 salary 95000) is an employee record because it has a proper type with a name, and named fields, whereas (1 95000) "could be anything".


Syntax trees are no nonsense and go way back before Java and visitors were hip.

The code walker is just another parser. It needs to know the Lisp syntax. It needs to know which part of a LET form is the binding list, what a binding list looks like; it needs to know where declarations are and where the code body is. It can then recognize variables, declarations, calls, literal data, etc. It needs to know the scope of the variables, etc. None of that is encoded in the LET form (since it is not an AST), and it needs to be determined by the code walker. Actually that's one of the most important uses: finding out what things are in the source code. Lisp does not offer us that information. That's why we need an additional tool. A code walker may or may not construct an AST.

No, (1 950000) is not an employee record. Only your interpretation makes it one. Other than that, our default Lisp interpretation is based on s-expressions: it's a list of two numbers. In terms of the machine it's cons cells, numbers, nil. Without further context, it has no further meaning.


Note that deciphering type tag info in a Lisp value can be called "parsing". Cases occur: for some values we have to chase a pointer into a heap to get more type info.

A Lisp function call does "parsing". (foo 1 2 3) has to figure out dynamically whether a (lambda (&rest args)) is being called or (lambda (a b c)) or (lambda (a b &optional c (d 42))) or whatever.

The #S(employee id 1 salary 95000) object also isn't an employee record without context and interpretation.

> it needs to know where declarations are and where the code body ist

The syntax can be subject to a fairly trivial canonicalizing pass, after which all these things are at fixed positions:

  (let ((a 3) b) (foo a)) ---canon-->  (let ((a 3) (b)) (declare) (foo a))
Now the variables are all pairs to which we can blindly apply car and cadr, the declarations are at caddr and the body forms at cdddr.


> Note that deciphering type tag info in a Lisp value can be called "parsing".

No. You are still operating on the level of s-expressions, a data format. The type-tag of LET is SYMBOL.

Here we have some Lisp code in the form of an s-expression:

   (let ((let 'let))
     ((lambda (let)
        (let ((let let))
          let))
      let))
All above LET have the same type tag, but in terms of syntax they have a different purpose in the form above: we have special operators, variable declarations, variable usage, data objects. I can't just car/cdr down the lists and call TYPE-OF. This always returns SYMBOL for LET.

On the level of a syntax tree we would want to know what it is in terms of syntactic categories: variable, operator, data object, function, macro, etc. Lisp source code has no representation for that and we need to determine that by parsing the code.


The type tag is to be "parsed out" in some implementation specific way. For instance, we first look at the LSB of the Lisp value. If we find a 0 there, then it's a pointer; we dereference the pointer into some heap record, where we retrieve a second-level type tag. Except that, perhaps, if the whole thing is 0 bits, then it's nil; we must not chase that as a pointer. And if we find a 1 in the LSB, then perhaps we look at other surrounding bits for the type. All of this examination of bits, with its irregularities and cases, looks like parsing.


Fair enough I guess. If the Lisp macro system only sees parsed s-exps, in principle you still need to figure out for yourself what kind of construct you are dealing with on a higher level of abstraction. Nim's macro system seems to be operating on a higher level of abstraction in that regard.


Metaprogramming was one of the core ideals at the birth of the language, so it is well supported. Personally I've not used Lisp, so can't comment there, but it is on the list of influences on the homepage.

Essentially there's a VM that runs almost all the language barring importc type stuff, and you can chuck around AST node objects to create code, so metaprogramming is done in the core Nim language. You can read files, so it's easy to slurp them in and use them to generate code or do other data processing at compile time.
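For instance, compile-time file slurping is a one-liner (a small sketch; "motd.txt" is just a hypothetical file next to the source):

  # staticRead runs while compiling; the file's contents end up
  # baked into the binary as a constant.
  const motd = staticRead("motd.txt")
  echo motd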

Several simple utility operations in the stdlib make things really fluid: the ability to easily `quote` blocks of code to AST, and to output nodes and code as strings. This lets you both hack something together quickly and learn over time how the syntax trees work.

Quoting looks like this:

  macro repeat(count: int, code: untyped): untyped =
    quote do:
      for i in 0..<`count`:
        `code`

  repeat(10):
    echo "Hello"
Inspecting something's AST can be done with dumpTree:

  dumpTree:
    let
      x = 1
      y = 2
    echo "Hello ", x + y
To save even more effort, there's also dumpAstGen which outputs the code to create that node manually, and even a dumpLisp!

You can display ASTs inside macros:

  macro showMe(input: untyped): untyped =
    echo "Input as written:", input.repr
    echo "Input as AST tree:", input.treerepr
    result = quote do: `input` + `input`
    echo result.repr
    echo result.treerepr
So it's really easy to debug what went wrong if you're generating lots of code.

Since untyped parameters to a macro don't have to be semantically valid Nim code (though they still have to follow Nim's syntax rules), you can make your own DSLs really easily and reliably, because any input is pre-parsed into a nice tree for you.

Here's a contrived example of some simple DSL that lets you call procs and store their results for later output:

  import macros, tables
  
  macro process(items: untyped): untyped =
    result = newStmtList()
    # Create hash table of 'perform' names to store their result variables.
    var performers: Table[string, NimNode]
  
    for item in items:
      let
        command = item[0]
        param = item[1]
        paramStr = $param
  
      case $command
      of "perform":
        # Check if we've already generated a var for holding the return value.
        var node = performers.getOrDefault(paramStr)
        if node == nil:
          # Generate a variable name to store the performer result in.
          # genSym guarantees a unique name.
          node = genSym(nskVar, paramStr)
          performers.add(paramStr, node)

          # Add the variable declaration
          result.add(quote do:
            var `node` = `param`()
          )
        else:
          # A repeat performance, we don't need to declare the variable and can overwrite the
          # value in the fetched variable.
          result.add(quote do:
            `node` = `param`()
            )
      of "output":
        let node = performers.getOrDefault(paramStr)
        if node == nil: quit "Cannot find performer " & paramStr
        result.add(quote do:
          echo `node`)
      else: discard
    # Display the resultant code.
    echo result.repr

  proc foo: string = "foo!"
  proc bar: string = "bar!"

  process:
    perform foo
    perform bar
    perform foo
    output foo
    output bar
The generated output from process looks like:

  var foo262819 = foo()
  var bar262821 = bar()
  foo262819 = foo()
  echo foo262819
  echo bar262821


In hindsight maybe the "performers" table would be better named "resultVars", as that's what it really represents.


We do metaprogramming all the time on JVM and .NET languages.


> yet another slightly different take on Rust or Go

From Wikipedia article of each language:

  Rust: First appeared    July 7, 2010; 9 years ago
  Go:   First appeared    November 10, 2009; 9 years ago
  Nim:  First appeared    2008; 11 years ago


The real question is what it offers over OCaml (1996). Nim people talk about GC being "optional" but have never been able to tell a clear story about what this does and doesn't mean (D has the same problem). Aside from that, even if the language puts everything together in a more polished package than its predecessors (and I've no idea whether Nim does or not), what's the unique selling point that would make it stand out?


Basically you only use GC if you declare something using a GC type.

  type
    # A `ref` type is GC and will use the heap.
    MyGCType = ref object
      fieldA: int

    # Otherwise ALL types are stack based.
    MyStackType = object
      fieldA: int
Also GC is deferred, so if you use a GC type in a local scope that doesn't escape, you don't pay for reference counting.

The only other type that uses GC is `seq` (equivalent to C++ vectors) IIRC.

The standard library uses seqs in various places so if you fully turn the GC off using the compiler switch --gc:none you'll get warnings for things you use that will leak. There's no GC 'runtime' stuff that you need though.

However in my experience all you need to do is just not use refs, and make your own ptr-based seq (there's probably a library for this but it's trivial to implement).

Nim's GC is thread-local (so no stop-the-world issues), only triggered on allocate, and has realtime support via enforcing collection periods. Plus you can use other GCs if you wish (eg Boehm).

More info about the GC here: https://nim-lang.org/docs/gc.html
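As a rough sketch of the realtime knobs those docs describe (GC_setMaxPause/GC_step; processFrame is a made-up placeholder):

  proc processFrame() = discard   # stand-in for real per-frame work

  GC_setMaxPause(100)             # ask for pauses of at most 100 microseconds
  for frame in 0 ..< 1000:
    processFrame()
    GC_step(50)                   # hand the GC an explicit 50 microsecond slice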


> Basically you only use GC if you declare something using a GC type.

So similar to C# (2000)? A useful feature to be sure, but not a major innovation.

> The standard library uses seqs in various places so if you fully turn the GC off using the compiler switch --gc:none you'll get warnings for things you use that will leak. There's no GC 'runtime' stuff that you need though.

Running with the GC off (and accepting the leaks) was already a standard managed-language technique though.

> Nim's GC is thread-local (so no stop-the-world issues)

Well no wonder it has nice properties if it avoids all of the hard problems! What happens when you pass references between threads?

> only triggered on allocate, and has realtime support via enforcing collection periods.

Your link describes the realtime support as best-effort, and implies that it doesn't work for cycle collection. So honestly this doesn't seem to offer much over e.g. the tuneable GC of the JVM (which admittedly made a massive mistake in choosing defaults that prioritised batch throughput rather than latency).

I do appreciate the information, and hope this isn't coming off as overly confrontational. But honestly it sounds like Nim is overselling things that are mostly within the capabilities of existing managed languages (maybe even behind them if the "GC" is only thread-local and/or not cycle-collecting).


So, the full gist is: Nim uses automatic reference counting with cycle detection. If you want to, you can disable the cycle detection, perhaps only temporarily. The compiler flag for turning GC off doesn't actually turn off all GC, IIRC. It still does automatic reference counting, and it can still do cycle detection, it's just that you need to initiate it manually.
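IIRC the relevant knobs look something like this (names from the system module; treat it as a sketch):

  GC_disableMarkAndSweep()   # keep refcounting, but skip cycle detection
  # ... allocate and work as usual ...
  GC_fullCollect()           # kick off a full collection manually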

The language does have pointers, and will let you do any C-style memory unsafe whatever you want to do with them. However, it doesn't have any library calls that I'm aware of that are equivalent to C's malloc() and free(). You'd have to supply your own.

There are also ambitions of introducing real non-GC memory-safe memory management, probably something along the lines of how Rust does it. Those haven't come to fruition yet, though.

So, long story short, yes you can completely disable GC, but I think that its capabilities on that front are somewhat overstated.


> However, it doesn't have any library calls that I'm aware of that are equivalent to C's malloc() and free(). You'd have to supply your own.

The malloc/free equivalents are in the system module: https://nim-lang.org/docs/system.html#alloc%2CNatural
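A minimal sketch of using them directly:

  # alloc0 returns zeroed memory (like calloc); dealloc frees it.
  var p = cast[ptr int](alloc0(sizeof(int)))
  p[] = 42
  echo p[]   # 42
  dealloc(p)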


This is the owned reference memory management: https://nim-lang.org/araq/ownedrefs.html


As shown by Mesa/Cedar, a systems language with RC coupled with a cycle collector can go a long way, like a full graphical Xerox PARC workstation.


I wouldn't say separating the GC at the type level is a major innovation, but as you say it's useful. I don't think Nim really sells itself on a groundbreaking GC implementation either. However it does give you a fast GC with enough flexibility should you need it. For example, the Boehm GC is not thread-local.

GC types are copied over channels with threads, or you can use the usual synchronisation primitives and pass pointers.
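Something like this (a hedged sketch; compile with --threads:on, `worker` is just an illustration):

  var chan: Channel[string]    # channels live in shared memory, hence global

  proc worker() {.thread.} =
    chan.send("hello")         # the string is deep-copied into the channel

  open(chan)
  var t: Thread[void]
  createThread(t, worker)
  echo chan.recv()             # the receiver gets its own copy
  joinThread(t)
  close(chan)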

As you say, thread-locality avoids the hard problems and this is a good default - I would argue that most of the time you want your data being processed within a thread and the communication between them to be a special case.

Certainly, there's a lot of talk of adding some sugar to threading, and Nim does offer some interesting tastes, such as the parallel statement: https://nim-lang.org/docs/manual_experimental.html#parallel-...

The performance of the default GC is good to very good, the JVM is almost certainly better in most cases, however this is comparing apples to oranges; it's a different language.

Nim's GC pressure is much lower in most cases, not least because everything defaults to stack types which not only don't use GC but tend to be much better for cache coherence due to locality. Using ref types is not required unless you use inheritance, however the language does encourage composition over inheritance, despite providing full OO capabilities, so you find inheritance isn't as needed as in other languages.

Plus it's not really much different from using refs to drop down to pointer level and avoid the GC without disabling it:

  type
    DataObj = object
      foo: int
    Data = ptr DataObj

  proc newData: Data = cast[Data](alloc0(DataObj.sizeOf))
  proc free(data: Data) = data.deAlloc

  var someData = newData()
  someData.foo = 17
  echo someData.repr
  # the echo outputs eg: ptr 000000000018F048 --> [foo = 17]
  someData.free
All this means that you can 'not use the GC' whilst not disabling it. I am a performance tuning freak and I still use GC seqs all the time, because the real performance hit is using the heap instead of the stack, and worse, the actual allocation of heap memory - regardless of the language. The GC overhead is minuscule even with millions of seqs and would only even come into play when allocating memory inside loops. At that point, it's not the GC that's an issue, but the allocation pattern.

Again though, it's nice to be able to drop down to pointers easily when you do need every clock cycle.


That is the gist of what the anti-GC crowd doesn't get.

Languages like Nim allow for having the productivity of relying on GC's help, with language features for performance freaks available, when they really need to make use of them.

Java not offering a Modula-3 like feature set has tainted a whole generation to think that GC == Java GC.

EDIT: grammar errors


> has tainted a whole generation to think that GC == Java GC

We can extend this pattern:

> has tainted a whole generation to think that OOP == Java OOP


From that point of view, C# also had hardly anything new to bring to the table, given Algol 68, Mesa/Cedar or Modula-3, if we start counting GC-enabled systems programming languages.


I think the biggest issue is that most equate GC with Java/Smalltalk-style GC, instead of Modula-3/C#-style GC.


Doesn't .NET (and C# with it) have stop-the-world GC, very similar to Java?

Or do you mean something else?


CLR was designed for multiple languages' execution models, including C++.

In what concerns C#, besides GC, you get access to off-heap unmanaged allocations, low-level byte manipulations, value types, inlined vector allocations, stack allocation, struct alignments, spans.

All GCs have eventually to stop the world, but they aren't all made alike, and it is up to developers to actually make use of the language features for writing GC-free code.


> All GCs have eventually to stop the world

Not entirely true. Erlang's BEAM definitely doesn't need to (unless you define "world" to be a single lightweight process). Perl6's MoarVM apparently doesn't need to, either.


Yes, I do define it like that; there is always a stop, even pauseless collectors actually do have a stop, even if only a few microseconds.

Doing it in some local context is a way to minimize overall process impact.

Just like reference counting as a GC algorithm does introduce pauses, especially if implemented in a naive way. More advanced ones end up being a mini tracing GC algorithm.

Regardless, having any form of automatic memory management in system languages is something that should have already become standard by now.


There's always a stop, yes, but there's not always a stop of the whole world, which is my point. "Stop the world" implies (if not outright definitionally means - explies?) that the execution of all threads (lightweight or otherwise) stops while the garbage collector runs - e.g. that of (the official/reference implementations of) Python and Ruby.

Erlang doesn't require stopping the world because every Erlang process is isolated from the others (no shared state at all, let alone mutable shared state). I don't know off-hand how Perl avoids it.


> All GCs have eventually to stop the world

Not at all. Nim is an example.


Nim compiles to C, so you can use any existing C toolchain to generate a native executable for whatever platform you care about.

That's a big advantage over Rust, D, OCaml, and similar languages without this feature.


I see that as a downside, personally. It makes it a lot harder to understand what guarantees there are about a given piece of code (rather than being able to look at the assembly and the language's own compiler, you would have to also understand C's rather odd semantics and the complex behaviour of many C compilers).


This is a bit like saying you wouldn't use LLVM languages because of the semantics of IR, or that you have to understand the guarantees IR provides, isn't it?

Ultimately if you're really interested in performance, regardless of the stages of compilation, the juice is the machine code output at the end.

In terms of guarantees, those should be satisfied higher up in the language itself, with C generation being output according to Nim's CGen spec. The compiler is fully open source though and easy to dig into.

Having said that, the CGen output is fairly readable if you're familiar with C and I must say I've investigated it when I wasn't sure how something was generated.


> This is a bit like saying you wouldn't use LLVM languages because of the semantics of IR, or that you have to understand the guarantees IR provides, isn't it?

Those languages tend to be a lot simpler than C, both in terms of what constructs they offer and in terms of how simply they translate into (platform-specific) assembly language. I'm not against the idea of intermediate languages in general, but IME C is the worst of both worlds: more complicated than most high-level languages and most assembly languages.

> In terms of guarantees, those should be satisfied higher up in the language itself, with C generation being output according to Nim's CGen spec. The compiler is fully open source though and easy to dig into.

The problem is being confident that those guarantees are preserved all the way down. It's very hard to be confident of the properties that any given piece of C code has, because it's extremely rare for C code to be 100% standards compliant and different compilers do radically different things with the same code. And it's hard to reason about the effects of changes because C compilers are so complicated: maybe you understand the Nim compiler and can see how two similar Nim functions will generate similar C code, but that doesn't help you much when two similar C functions can be compiled to radically different assembly (which happens a lot).


Nim is however not alone: Haskell uses C--, Eiffel uses C (nowadays C++ as well), and there's Unity's IL2CPP/Burst.


C-- is very different from C.

Indeed several languages compile to C; I regard that as a downside for all those languages. I'm not sure what your point is?


That it is a very common approach.


I don't follow. Nim uses C as an intermediate representation, just like many languages use LLVM-IR, Gimple, javascript, etc.

If you care about the assembly a piece of code generates, you can just look at that.

The generated C code that Nim emits looks like what it is: boring automatically-generated C code.

If you wanted to debug the compiler, you could take a look at that, but most users don't have to.


C is a very complicated language with very complex compilers. The translation from C to assembly (under modern compilers) is hard to understand, even for "boring automatically-generated C code". Languages like LLVM-IR or Gimple are designed to be simple and translate more directly into assembly. I'd have the same complaint about javascript to a certain extent, but even though it's a full programming language it's a much simpler language than C and easier to introspect at runtime.


For me it's more: what does it offer over Crystal (apart from being freshly 1.0)? Crystal seems a lot nicer and a bit more, hum, uniform. Nim seems to have a lot of features to be aware of, and doesn't have type-safe nil.


Well Crystal is newer than Nim and gives me even more of a "what is the unique advantage of this language" feeling. As to the specific point, Crystal may be nil safe but union types have many of the same problems as unchecked null (particularly when used with generics). Nim has compiler-enforced not-nil types plus true sum types (and therefore an option type), which make it possible to program in a completely safe way - though I'm not sure how well the standard library supports that approach; I would certainly prefer a language that didn't have null at all, as OCaml does.
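For reference, a small sketch of what the option type looks like in use (findChar is just an illustration):

  import options

  proc findChar(s: string, c: char): Option[int] =
    for i, ch in s:
      if ch == c:
        return some(i)
    none(int)

  let hit = findChar("hello", 'l')
  if hit.isSome:
    echo "found at index ", hit.get   # found at index 2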


> The real question is what it offers over OCaml

To you, that is


It’s not actually important what came first, but rather what's currently being used/popular. Nim might have had the head start, but it's lost the race, so now it has to justify its existence against the current popular set of languages people are looking at for transition.


While I'm happy to be informed after being clearly misinformed, I promise you that I'm a full-time developer working with reasonably bleeding edge stuff and I've never heard of it before last night.

I'm lucky to have folks here to set me straight. :)


> appears to be yet another slightly different take on Rust or Go

Nim is nowhere close to that. Most people describe it as a very fast statically typed Python.


Also Nim came first. And Rust and Go aren't even targeting the same use cases: Rust is for performant systems coding or WebAssembly, with far fewer footguns than C or C++. Go is for back-end software with less cruft, boilerplate, or latency than Java but about the same throughput as Java. It sounds like Nim is "I just want to get this working but I'm worried Python won't run fast enough."


Okay, but you're ignoring the thrust of my question, which is "why do I care". Your name is nimmer so it's safe to say we understand why you care, but help me understand why I care about the 3rd-runner:

https://trends.google.com/trends/explore?geo=US&q=%2Fm%2F09g...

I'm only comparing it to those languages because of the direct comparisons many others have made in this thread. And popularity != quality, but age is also a very poor measure of success.


Had Nim been developed by or had funding from a large company (Google, Mozilla) I'm sure that chart would be much different. Instead, Nim is built by a group of volunteers contributing to a language they love.

I am not a contributor, but I also quickly fell in love with the language. I felt incredibly productive and being able to build tiny binaries that were cross platform was awesome compared to my old Golang binaries which tend to be heavy (bundling everything into a single binary). Also coming from a web background with little system/application programming knowledge I was able to pick it up really quick. It really is a "better python" in my opinion. The language is incredibly easy to read and write, has really good JSON support (which is a big plus for me) and is incredibly fast. To add more reasons to try, Visual Studio Code has a really good plugin for the language.


Nimmer was the parent. I actually like Rust more. Though all three are distant in popularity from Java or Python, so popularity is probably not going to be the distinguishing characteristic between them. The reason I chimed in is precisely because people keep directly comparing Rust and Go and they are all horribly misguided. Sure, you could write a web backend in Rust (or just the slow bits, like Dropbox did), but you'd do it because Go turned out to be too slow, not because that is a primary use case for Rust. And while you could probably write drivers in Go, why?


Let's try to fill out some of this grid:

Who started this? - That would be Andreas Rumpf, or Araq as he is known on IRC.

Is it corporately affiliated? - No, it was created by Andreas, and has stayed independent. But it has corporate backing, which helps pay for development, and they do get a certain prioritisation in what gets implemented. But no closed door stuff.

What languages is it similar to? - Depends on what kind of similarities. Syntactically it will remind you of Python or maybe even Ruby. But when you use it the powerful type system might remind you of Ada, the macro system possibly of Lisp, the run-time speed of C/C++ or Rust, and the compilation time of, well, not C++ (it's really fast).

What is the motivation of its creators? - As far as I know the initial motivation was to have something that was as usable as Python and languages like that, but which was actually fast and type-safe. For me as a contributor these features allow me to focus on higher level things so I tend to mostly use the macro and type system to create wonderful abstractions that allow me to create nice readable code quickly.

What is the philosophy driving its design? - Haven't seen any guys with long white beards, pipes, and stacks of books in the community, so I'm not entirely sure there is an official philosophy. The design is mostly based on giving tools to the developer, and making sure they do "the right thing". If the developer then uses these tools to chop their arm off, then that is their prerogative, but it is generally not advised.

Is it intended to be an eventual competitor or replacement for languages XYZ? - Well many people come to Nim from Python, seeking better performance, or from C or JS, seeking better usability. While Nim doesn't try to position itself as a direct replacement or competitor for any one language in particular, it is a probable candidate for most if not all of them.

Where does it sit on the spectrum between raw assembly and MS Access? - In terms of abstraction, somewhere high up above with Lisp and friends (no personal experience with MS Access, so not sure where that actually ranks). The core language is pretty straightforward, but with the macro system the sky is the limit!

Is it intended to be a "systems language" eg can you write a kernel or device driver using it? - Yes, Nim does call itself a systems programming language, and there have been toy kernels written in it (not sure about drivers though, but I wouldn't be surprised). It can also run on micro-controllers and other embedded systems. Pretty much as long as you can run C, you can probably run Nim. That being said it also compiles down to JS, and has a lot of features that allow it to be used for anything from web-sites, through games and applications, to micro-controllers and kernels.

Does it require a VM? - Trick question, but yes and no. The compiler implements a VM to run Nim code on compile-time for the macro system, but this VM is not present at run-time.

Is there corporate support available? - It is possible to get a contract with the core developers to get support yes. Not that I've ever felt the need for any further support than the excellent IRC channel (which is also bridged to Gitter and Discord).

How established is the developer community? - The aforementioned IRC channel is very active, as well of course as the GitHub repository. There is also a yearly meetup at FOSDEM, and talks, articles, and videos about the language and its development are coming out all the time.

Are there books and courses? - There is so far only one physical book, Nim in Action, and it's really good! As far as courses go, there is no active course with a teacher and course material (as far as I know). But there are tutorials for all levels of programming skill, from never having programmed before to picking it up as your n-th language. There are also more interactive things like Exercism where you can solve problems and have people from the community look over your solutions. Other than that, people will often happily look over and comment on your project if you post it on Reddit or drop a link in the IRC channel.

Does it have garbage collection? - Yes, by default Nim has a garbage collector, but it is tuneable so you can control when and for how long it runs. And you can disable it completely and use manual memory allocation, for when you're doing kernels and micro-controller work for example. The garbage collector is also per-thread, meaning that running multi-threaded will not stop your entire application when it's collecting garbage. You also have an option regarding which garbage collector you want to use, all with different performance considerations, but in general all really fast.

Is it a compiled language? - Yes. Nim itself compiles down to C/C++/Obj-C or even JS. Then it is passed on to the target compiler to create native binaries, or in the case of JS left to either be run by node or loaded into a web-page.

Is it a strongly typed language? - Yes. Nim's type system is strong and flexible. The basics are similar to Ada's type system. It also has type inference, however, so you might not guess that it is strongly typed by looking at the examples.

What are the three perfect use cases? - Hmm, this one is tough. I use Nim for pretty much everything, from scripting simple things on my own machine, to creating and deploying web-sites, graphical applications and games. But judging from the community I think the top three use cases are: High-performance computations, Web-dev, and games. The primary financial backer of Nim, Status, also uses it to create an Ethereum client focused on mobile devices. But not sure which category to bunch that into.

How stable is the API? - Now with the release of v1 the API of the core language and stdlib will be stable moving forward on the v1 branch. This means bug fixes will be back-ported, and any new features will only be added if they don't break this promise.

What's the package management story? - Nim has its own package manager, Nimble, and over 1000 packages ready to go with everything from bindings to popular C libraries to large pure Nim libraries and applications.
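For flavour, a minimal .nimble file (a hypothetical package) looks something like this:

  # mypackage.nimble (hypothetical)
  version     = "0.1.0"
  author      = "Jane Doe"
  description = "An example package"
  license     = "MIT"

  requires "nim >= 1.0.0"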

Does it have a reasonable standard library? - Indeed! The standard library is large, and has pretty much everything you need.

Is there editor support for color coding and linting? - Yes, there are plug-ins for various editors (most use the VSCode one, I personally prefer Vim), and also an LSP implementation if you want these features in an editor that doesn't yet support them.

What's the job market look like? - Not huge to be perfectly honest, there are a couple of opportunities around. But I was surprised to find people at my current job using it for some tinkering and internal tools. I'm personally lucky enough that I have fairly large autonomy in how I do my work, and I'm able to use Nim.

Is the documentation well-written? - Personally I find the docs pretty good, and recently they have been read through and improved, so they are better than ever. Nim also offers doc comments directly in the code, so many libraries you find will be nicely documented as well.

And most importantly, if we all got in the same room, would I want to hang out with the people excited about this language? - I've tried this at FOSDEM for two years now and they are a wonderful bunch!


> What is the philosophy driving its design?

I think that the Nim GitHub tagline (https://github.com/nim-lang/Nim) expresses the philosophy well: "Nim is a compiled, garbage-collected systems programming language with a design that focuses on efficiency, expressiveness, and elegance (in that order of priority)."

Araq's blog post on 1.0 also explains well how the guiding principles have remained the same over time:

"When I started Nim's development I had a small simple language in mind that compiles to C; its implementation should not be more than 20,000 lines of code. The core guideline has always been that Nim should be a small language with a macro system, which should be capable of extending Nim with all the features that the small core is missing.

The current compiler plus the parts of the standard library it uses has roughly 140,000 lines of code, runs on a plethora of operating systems and CPU architectures, can also compile Nim code to C++ and JavaScript, and Nim's meta programming capabilities are top of the class. While the language is not nearly as small as I would like it to be, it turned out that meta programming cannot replace all the building blocks that a modern language needs to have.

For example, while Nim manages to implement async via its macro system, the macro system needs to be able to compile code into a state machine. And these state machines need some form of goto and a way to capture their environment. So Nim's core needed to grow so called "closure iterators" to enable this.

Furthermore, we don't really know yet how to leverage a macro system in order to give us extensibility on the type system level, so Nim's core needed generics and constraints for generics."


> Yes, by default Nim has a garbage collector,

Does the standard library and all other user libraries work with the garbage collector turned off, or do I have to use a different ecosystem for those applications?


They work only as long as they don't use any garbage collected types (the compiler will warn you of this when you turn the GC off). Unfortunately this means most libraries are out, and you have to do your own thing. Turning the GC off is more meant as a way to use Nim on micro-controllers and for things like kernels and such. In this case many libraries that aren't written for this use-case don't really make sense to use anyway, so I'm not sure how big of an issue this is in reality.


> I'm not sure how big of an issue this is in reality

I agree - embedded programming is a sufficiently separate world that you'll always need a different foundation with different assumptions.

Rust, which doesn't have a GC at all, still has this problem to some extent. On a microcontroller you generally don't want to use the standard library with all its assumptions about heap allocations always succeeding. This is why no_std and libraries like heapless exist.


> Rust, which doesn't have a GC at all, still has this problem to some extent.

I agree, but there is a big difference between not being able to use almost any library, and being able to use most libraries.

There are many, many libraries for embedded Rust, and my embedded and non-embedded projects do share many libraries.


> so I'm not sure how big of an issue this is in reality.

I have audio programming in mind. Not that you can't do audio programming in GC-enabled languages, it's just that it's quite frowned upon in this circle (for good reasons). I'm sure there are workarounds though.


Audio programming in Nim is completely possible and reasonable.

Here is an example of a super collider plugin written entirely in Nim with the GC turned off: https://forum.nim-lang.org/t/3625

The author seemed to find the experience very pleasant, and the performance was great.


Not (yet) a Nim user, but another post mentioned that GC is thread-specific, so you can remove all traces of GC-types from your real-time thread(s) in order to avoid deallocations at the wrong times.

A related question is whether you can disable GC on a thread by thread basis.


Audio programming? And why is it frowned upon? If it is because of the Java-like GC freezes, that is something you can ensure will not happen in Nim by turning on manual control and only running the GC when it's suitable.


Not the person you responded to, but yes. The audio must never be interrupted, everything else first.


This is an AMAZING answer. I'm sad I can't buy you a sandwich.


Well thank you :) Meet up at FOSDEM and you can buy me a sandwich there!


> most importantly, if we all got in the same room, would I want to hang out with the people excited about this language?

Interesting, I wouldn't even have thought someone could consider it a criterion for a language choice. Now I wonder if most people care about it like you do.


Yes. A thousand times, yes.

https://en.wikipedia.org/wiki/Yukihiro_Matsumoto (see the first paragraph)

A huge part of the reason that people love Ruby even when there are so many Elixirs/Elms/Crystals/a hundred more around is simple: it's a community full of nice, smart, interesting people with varied interests that extend beyond coding.


> it's a community full of nice, smart, interesting people

Isn't everyone describing the community they belong to as such? I have yet to meet someone advertising his programming language of choice with “the community is full of morons, join us!”…



Shameless plug, but by a nice coincidence, Manning has a discount on all printed books today. Among them is my book, Nim in Action, available for $25. If you're interested in Nim it's a great way to learn :)

It was published in 2017 but we've ensured Nim is compatible with it (all book examples are in Nim's test suite), so apart from some minor deprecation warnings all examples should continue to work.

Grab a copy here: https://manning.com/books/nim-in-action?a_aid=niminaction&a_...


Not only a good resource to learn Nim, but also an excellent programming book in general!


I can confirm, the book is excellent and worth a read-through.


I bought a copy of the book a few months back (and haven't yet found time to read it fully; I can write a lot of programs after reading just the first few chapters). Will Manning upgrade it to a newer version for Nim 1.0?


> Will Manning upgrade it to a newer version for Nim 1.0?

The version of the book you bought (and which is still on sale) is compatible with Nim v1 and it is still relevant, no need for another version anytime soon.


Today it's on sale again for $25. Just grabbed a copy!


Aw, I missed it by a couple of hours.. :/


Thank you, purchasing a copy now


Congratulations Nim team! :D

I've had Nim installed on my laptop for a long time and I've always enjoyed tinkering around with it over the years. Maybe now it's time to find a bigger project to try it out on.

This is a tiny thing, but just to highlight something unique I like about Nim, using `func` to declare pure functions (and `proc` for others) has been a small, but really nice quality of life improvement. It's something I've been wanting in other languages since Nim added it.
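A quick sketch of the distinction (as I understand it, func is shorthand for a proc marked with the noSideEffect pragma, so the compiler rejects side effects inside it):

  func add(a, b: int): int =
    a + b              # fine: no side effects, and the compiler checks this

  proc logAdd(a, b: int): int =
    echo a, " + ", b   # echo is a side effect, so this has to stay a proc
    a + b

  # Declaring logAdd as a func would be rejected at compile time.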

It seems like the 2010s have been a pretty amazing time for new programming languages. The ones that I can think of that I've at least tried are Rust, Go, Nim, Crystal, Zig, Pony, Fennel, Janet, and Carp (I know, some of these are pre-2010, but I think they've all either 1.0'd or had a first public release post-2010), and every single one has made me really excited about some aspect of it. I hope this experimentation continues into the 2020s!


And it turns out Crystal 0.31 [1] is released! And it is (or should be) closing in on a 1.0 release as well, once Windows support and multithreading mature. Maybe 2020?

On the list of languages I only know the first few up to Zig, will need to check out the others. There is also one noticeable omission, and that is Elixir.

[1] https://crystal-lang.org/2019/09/23/crystal-0.31.0-released....


Good call! Also Julia, Typescript, and ReasonML now that I'm thinking about it.


Curious why you're adding Elixir to this list? It operates in a completely different space than Nim/Zig, as far as I know (not statically checked, heavy but powerful runtime, much "higher level" abstractions, etc...).

Not to prevent you from trying it, of course - to each and everyone their own...


Not the same person, but I interpreted the preceding comments to be about programming languages in general, not just ones in the same space as Nim and Zig, and would agree that Elixir fits in that broader list.


I'd say Elixir is certainly a member of the list of languages people are considering bailing out of Python for.


That's so interesting. I absolutely love Elixir, but I just never saw it as a replacement for Python. (Which I hold in high regard as well.) Perhaps it's only my experience with both langs. Elixir I used for JSON API type things, while Python I used for all types of general purpose stuff. Both are great, but Python is super popular with the science crowd (I used it for GIS/geoprocessing tasks). Maybe Elixir has similar packages and I just never went to look....


There are a great many people who don't do any heavy maths/sciences data crunching, for whom Python is just their web+scripting language of choice. For those people, "Python" fits in the set of {Perl, Python, Ruby, Node, ...}. Elixir does fit quite comfortably in that set.


Yeah... I'm not bailing out of Python for Elixir.

I use both Python and Elixir. Python for web scraping and Elixir for my side project website.

I can see people wanting to move web dev from Python to Elixir, but yeah, not entirely bailing out for Elixir. The BEAM VM in general is terrible for numerical stuff. Elixir is pretty niche; it solves one thing: concurrency problems. Python is a much more general language.


You could argue that Elixir isn't a 'new' programming language. While it obviously technically is, in practice you could see it as a nice coat of paint over Erlang and the BEAM. This is absolutely one of its strengths - it brings tooling, a more familiar syntax and macros, but really just allows us to build using a great programming paradigm and 30 years of solid engineering.


> using `func` to declare pure functions and `proc` for others

What's the limit used by Nim in its definition of a “pure function”? Is a function that allocates memory considered an impure one? (Technically it is, but it really reduces the usefulness of such a keyword if it's too restrictive; that's why Rust doesn't have it, for instance.)


D has had this[0] for a bit btw. I think D probably supports every major paradigm.

[0]: https://dlang.org/spec/function.html#pure-functions


Fortran has had pure functions and subroutines since well before any of the cool languages. Not that I recommend it in particular.


IN PARTICULAR, I DO NOT LIKE A PROGRAMMING LANGUAGE SCREAMING AT ME. (well, the last versions seem calmer)


For some reason, a lot of Fortran programmers insist on screaming, even though it’s been optional for 3 decades. I mostly program in Python these days...


Backwards compatibility.


I’ve never encountered a compiler that needed ALL CAPS. I guess some people saw old code and just emulated that.


And by last version you mean any Fortran since Fortran 90.


Fortran had nearly everything before nearly all languages.

And I'm sure Fortran was cool back in the day, I'm sure it's due a comeback any day now.


As a matter of fact, in its originally intended domain (i.e. scientific computing), it actually never stopped being a major player.


I’d add Swift and Julia to that list as well, and arguably Typescript too.


I've really liked that, too. Its approach to mutability is also very nice.

The combined result may not be nearly as principled as the Haskell way of doing controlled mutability with double-checking from the compiler that you aren't breaking your own rules, but it's a nice pragmatic compromise that's easy to understand without having to rewire the way you think about programming.


Hot dang, someone knows about Fennel.

I'm using a weak-CPU machine these days, and it turns out that everything is slow and cpu-hungry except native code and Lua. Lua blows even JS out. Specifically, my scripts seem to be done faster than Node.js spins up.


The const/let/var declaration keywords - for whether something is constant, and whether it's known at compile time or runtime - are also very, very friendly.
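A tiny sketch of the three:

  import times

  const compiled = 2 + 3   # must be computable at compile time
  let startedAt = now()    # computed at runtime, immutable afterwards
  var counter = 0          # runtime and mutable
  counter += 1
  echo compiled, " ", startedAt, " ", counter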


Don’t forget Kotlin, Swift and Typescript! Possibly the most popular 3 of the lot!

They don’t break the mould quite like the ones you’ve mentioned, but they’re still a big improvement for day to day stuff!


> ... something unique I like about Nim, using `func` to declare pure functions (and `proc` for others)

FORTRAN has this distinction since the beginning.


I think that's wrong. Isn't Fortran's procedure/function distinction between things that return a value and things that don't, rather than between things without side effects and things that might have side effects? The latter is what Nim's func/proc distinction is about.

(Modern Fortran does have a pure keyword meaning "no side effects", but that wasn't there "since the beginning".)

[EDITED to add:] I checked in the Fortran 77 standard. The difference between a FUNCTION and a SUBROUTINE is just that the former returns a value and the latter doesn't; in the code for a FUNCTION you are absolutely allowed to do I/O, modify global variables, etc. Functions (and subroutines) can change the values of the variables passed to them. There are also things called "statement functions" which have to have single-line definitions along the lines of "square(x) = x*x"; these don't have to be pure either because the defining expression may call functions that have side-effects.


I can confirm, Fortran subroutines are essentially just void functions, unless specifically declared pure.

Despite working in Fortran every day, I never knew about statement functions before. Thanks.


You are right, thanks for the correction.


"V" would also fit into that list I think.

https://vlang.io/


I was really excited when I saw V for the first time. Then as I was checking the claims one by one it turned out the author was only half honest. That was bad, really bad. I don't feel like visiting this website again and wonder if any of these claims are true or just a kind of strange marketing in order to collect donations.


I love seeing new languages that do interesting things. I wish I could see what languages everyone will be using in 100 years. Or at least I'm curious what we'd see with 100 years of development towards solving today's problems (and without AI because that's not what this particular fantasy of mine is about).

It's one of my favorite things to fantasize about when I run into another python bug or write a tedious unit test that feels like something I should be able to let my tools deal with for me. Super-future-dependent-haskell's type system with python's ergonomics, and fantasy-future-python's gradual typing to go fast and loose when I feel like it, and C's ability to specify exactly what I want when I need to with rust's ability to keep me from shooting myself in the foot while doing so, and zig's ability to do dead-simple compile time programming, and super-future-idris/coq's ability to write ergonomic proofs in place of many tests, and so on. Or even the futuristic fantasy one-FFI-to-rule-them-all, so I can mix and match pieces super smoothly.

Every time I see an interesting new language catching on, I can't help but engage in optimistic flights of fancy about the future of programming.


Many of these features sound like Julialang to me.. Not certain how well they fit with Nim.

In fact, I almost suspect you are hinting in that direction yourself ;)


Julia discourages Linux distros from packaging it. They include modified and unmodified versions of all the libraries they depend on within the Julia source repository.

I am not quite sure how they are planning to grow within the ecosystem that matters most for many developers.


There's a very major revamp going on of how binary dependencies are handled. I don't know much about it, but it might possibly change this picture.


Let's not call it that.


I had been intermittently working on my own language ("transpiled" to C) for more than three years before seriously checking out Nim in 2015, at which point I just threw my code away. It already had very nearly everything I thought I could bring to the table, and then some things I had never even contemplated. Go, go, Nim!


Congrats guys!!! This has been much-anticipated and I'm very excited. I personally wish that the owned reference stuff (https://nim-lang.org/araq/ownedrefs.html) had been part of 1.0, but I think that at some point shipping 1.0 >> everything else.

I've been following (and evangelizing) Nim for a while, this will make it easier to do so.


I don't quite understand the definition of "memory safety" in that document. If deallocation can cause other objects to end up pointing to the wrong thing and the wrong data, how is that different from memory corruption?

If your filesystem suddenly starts returning the contents of notepad.exe when asked for user32.dll and vice versa, is that not filesystem corruption?

If an admin user object can suddenly start pointing to the guest user object and still be considered "memory safe", that doesn't seem like a very safe definition of safety.


It's type safety. The pointer will always point to an object of the same type. This is common in operating systems, for example. You have some out of band way of verifying that the object's identity has not changed (deallocation would be a change of identity) when you access the object under some sort of serialization.

It's certainly a weaker definition of memory safety than you and I, and I would guess most people, would have in mind. So in that sense, I think the author is wrong to call it memory safety.

You're totally correct that a logic bug in this category could cause a credentials pointer to point to a different or higher set of credentials, and that is an implementation risk.


I guess the argument is that you'll never read random garbage instead of a well-formed object; and given that random garbage could result in pretty much arbitrary "undefined behavior", it should at least guarantee that your program will behave roughly as intended, even if it gives incorrect results.

(I'm not convinced that's a useful thing myself.)


> and given that random garbage could result in pretty much arbitrary "undefined behavior"

Nitpick: UB doesn't come from reading random garbage, it's quite the opposite: UB could result in reading random garbage, but it could also result in many worse things.


I put it in quotes because I meant "arbitrarily weird side effects", but you're right that I should have used a different term.


I don't think the owned-references work will change the language drastically. For people who want to write kernels, games or real-time code it would be great. Nim's GC is super configurable already. Yeah, owned references would be better, but it's already pretty good.


It is part of 1.0, but you have to enable it using the "--newruntime" build switch.


Question (genuine, not trolling): what's the use-case for Nim regarding other languages? What are its pros/cons?


When I started an enterprise data project with a small team where Python was the familiar workhorse, we initially stopped for a moment to think of ways to improve performance. Cython was the familiar route from Python, but we also built small prototypes in Rust, Go, and Nim. Beyond the basics our progress slowed down with Rust and Go (yes, I know Go is very easy for some), but Nim allowed us to put together a fairly complex application in a short time. Just one book, 'Nim in Action', gave us all the ropes, and every example in it worked (thanks Dominik). Some of our key pieces, such as the PostgreSQL client driver, the built-in web UI framework (Jester), JSON processing, Nim's build/packaging support, generics, and fast static typechecking, were exceptionally robust (for this young a language) and worked in an intuitive way for us ... we reached production rather smoothly.


I'm curious why your progress with Rust stalled, and at which point? I'm starting to learn it and so far I haven't noticed any productivity blockers.


I heartily hope you continue learning Rust; it is another significant contribution to open-source languages. As others have commented, things such as the borrow checker are humps you work through, and it will work out for you eventually. The other challenge is some of your enterprise third-party integrations with the outside world, which may again demand your time if they are still in beta, etc.

Also, the organization has no desire to invest in training or experimenting. The project is funded by the "business", and they want to see a business result in return for their money ... so basically you sneak in a bit of time as you try/test something new. Nim won out by somehow letting a team with Python expertise learn it and deploy their product fast enough.


Is there a site that explains how to represent more complex data structures while keeping the borrow checker happy?


When you've internalized its rules, the borrow checker isn't too complicated to keep happy (though you may need to use stuff like RefCell and Rc). It takes some time, and this tutorial helped me a lot in the beginning: https://rust-unofficial.github.io/too-many-lists/


This doesn't answer the OP's question of why Rust was dropped, though.


Just keep at it. Eventually you run into data structures that are hard to represent under a borrow checker, and it will slow you down, at least at first.


At first blush, it's easy to describe Nim as C-flavored Python, but I'm not sure that quite captures it. The syntax is similar, but that's about where the similarities end - Python is deeply, fundamentally a dynamic language, and Nim one of the more firmly statically typed non-functional languages out there.

You could also describe it as being competitive with Rust, but with more compromises for the sake of developer ergonomics. Garbage collection, for example.

For my part, I'm finding it attractive as a potential Java-killer. It isn't write-once-run-everywhere, but, now that x86 and Linux have eaten the world, that's not quite such a compelling feature anymore. And Nim smooths things over a bit by making cross compiling dead easy. In return, I get better type safety, better performance, a much better C FFI, and less verbose, easier-to-understand code.


I disagree in quite a few respects.

I think there's not much merit in discussing "C-flavored Python" as a description of nim, as that description seems just wrong.

Aside from significant whitespace, I don't find that many similarities with Python syntax.

Describing Nim as non-functional is misleading. Nim does have functional constructs and is a mixed-paradigm language involving both procedural and functional elements, just like Python.

"Firmly statically typed" is misleading. It's actually strongly-typed with great capabilities around type-inference so that, at times, you start to forget that it's typed in the first place and gives you a similar feel to a dynamically-typed language like Python.

Nim can run with garbage collection disabled. As a matter of fact, I seem to recall that the authors postulated that as a basic requirement that the language should be able to run in a low-level systems programming mode with garbage collection disabled (where it would be able to compete with Rust), and in a more high-level mode where it would rather compete on developer ergonomics/productivity.

I find Java-killer a potentially misleading analogy. I do think that it captures the kind of platform-independence that matters nowadays. And that is no longer Intel vs Sparc and Windows versus Linux, but rather the native ecosystem versus the JavaScript ecosystem.


Not agreeing or disagreeing with either of you; just wanted to add some details on the garbage collector thing.

Nim's GC can be controlled in terms of when and how long it runs, and it can be completely disabled in favour of manual memory management. In fact people have gotten it to run on really low-powered microcontrollers like the ATtiny85, as well of course as various Arduino boards. The benefit of Nim on these systems is that since it compiles to C and has such a rich macro system, you are able to write abstractions that the compiler will turn into highly efficient code. So you can still write readable, maintainable business-logic-level code while the compiler spits out super-optimised versions. This of course requires you to first write these macros, but often a little goes a long way.
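
To give a flavour of that control (a sketch; `processFrame` is a stand-in for your own per-frame work):

    GC_disable()            # no collections during the hot path
    for frame in 0 ..< 600:
      processFrame()        # hypothetical game/render work
      GC_step(us = 500)     # let the GC run for at most 0.5 ms
    GC_enable()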


In fairness, Python is the first language the Nim team compares itself to in the first paragraph of their homepage.


<subjective>

Well: Nim is similar to Python in many respects, it's just not "C-flavored Python".

It has significant whitespace like Python. But since it has a lot of language constructs that are different from Python, it's a bit tedious to try to otherwise compare the syntax a great deal, and it has many elements to its syntax that are quite unique.

Saying that Nim is statically typed whereas Python is dynamically typed is exactly a feature that's a slightly unfortunate pick for trying to contrast the two, because type inference is trying to bridge the gap and give you something that's a bit in-between in terms of how it feels to the programmer.

Nim is similar to Python in that (a) It's very cleaned-up and allows you to do a lot of stuff and complex stuff with little code, without sacrificing clarity and readability. (b) It puts the programmer in a cognitive mode where he can think first and foremost about the problem at hand and not have to concentrate on the programming language so much. (c) It gently guides the programmer towards good programming practice without being patronizing. (d) It has a fast learning curve,...

Essentially all the things that make Python great are found here as well. But Nim gets there on its own, not by replicating elements found in Python. It's unique in a way that makes Nim a great choice of language in situations where Python isn't (like systems programming or writing code for the browser ecosystem).

</subjective>


I'm not sure I should be getting involved, here, but I haven't had my morning tea yet, so what the hey --

It seems like you're hyper-focused on a rather skewed interpretation of my first sentence, to the point that you largely ignore the second sentence. Or at least you seem to ignore everything after the word "but", which, given what the word "but" is often used for, is a rhetorical move that has a way of enabling interpretations that tend to be close to the opposite of whatever was originally intended.

Parent poster has it right. I picked that initial comparison to Python by way of introducing one of the more popular summarizations of Nim, so that I could say why I don't think it's a good summary. If I didn't put it emphatically enough to satisfy you, I apologize. I apparently don't have anywhere near as strong feelings on the subject; Nim's just a language I like to program in sometimes.


> In fairness, Python is the first language the Nim team compares itself to in the first paragraph of their homepage.

Probably because most people are more familiar with Python than Ada and Modula (combined) ;)

I guess it depends on your previous experience. Some people will only see Python, others might think of Nim and its syntax as "Pascal for 21st century" (fun fact: the original Nim compiler was written in Pascal).


We already got a Java-killer: Kotlin


Don't get me wrong; Kotlin is a great language and, if I have to work on the JVM, it's one of my top two picks.

But, being a JVM language, it's still stuck with all of the Java shortcomings that I had listed before: Achieving a decent level of type safety is impossible, the FFI is still a pain to use, and you've got to go through contortions to get really good numeric performance. Because all of those problems are characteristics of the JVM's run-time environment, not the language you're using to target it.


Most of those Java shortcomings are in the process of being fixed, while still enjoying one of the best language eco-systems currently available, with libraries for almost any kind of problem domain.


Kotlin is no Java killer.

The only KVM in town is Android.


Nim's syntax is very similar to Python simply because both are expressive, lean and have no braces. Naturally Python is dynamically typed and this, unfortunately, affects performance and maintainability.


To me Nim is a faster python that prevents typos. It can also compile to javascript.

I used it where I would use python - backend of web apps. But now, instead of running a cluster of servers, I can just run one because nim is fast.

I also used to write the frontend in CoffeeScript, but switched to nim because I can share code between backend and frontend. I don't have typos. I don't inherit any JS or OOP legacy like TypeScript does.


> faster python that prevents typos

This was sort of my use-case and experience as well. I also find it can do what I needed JavaScript-based CLI tools to do with fewer dependencies and less ecosystem bloat, and the resulting code is a lot nicer to work with. People have told me that's because I was using JS wrong, but I felt like Nim was such a huge step up for my use cases.

It's also just a pleasure to learn something new occasionally.


Do you use any JS libraries in the front end?


I stumbled onto nim because a sequencing-data (DNA/RNA/etc) library was written for it (https://github.com/brentp/hts-nim). In addition to Go, it's been a great way to learn a compiled language. It's fast, easy to use, and has a friendly community.


How good is nim's performance with sequencing data in comparison with other programming languages you used for it before?


Nim performance is really really good. Like, maybe C - 5%. Comparable with stuff like Rust or D... way way faster than Go or anything interpreted.


I think it depends on the nature of the code.

For a side project of mine, I recently re-wrote the same code in a dozen or so languages. The application reads input data from text file and does a bunch of conversions to integers and floats (atoi and atof). It then runs through an algorithm that does calculations and identifies proper actions. Finally, it writes the output data as formatted text file. Regardless of implementation, it runs quick. Sorry I can't provide more details on nature of the code (it has IP that belongs to another person).

Here are the top 10 best execution times (all run on the same hardware). Times obtained via the 'time' utility (e.g., "time ./run.sh"). Some of these were run on Ubuntu 16.04 and a few on FreeBSD 11.2. Where the compiler has optimization flags, I used them.

  1. C++     0.04  (gcc version 5.4.0)
  2. Rust    0.11  (version 1.37.0)
  3. Go      0.13  (version 1.12.7)
  4. D       0.16  (tied with Pascal) (DMD64 D Compiler v2.073.2-devel)
  5. Pascal  0.16  (tied with D) (fpc 3.0.2)
  6. C#      0.25  (mono 4.8.1)
  7. Nim     0.50  (1.0.0)
  8. Kotlin  0.81  (Kotlin version 1.3.50-release-112, JRE 1.8.0_222)
  9. Java    0.95  (openjdk version "1.8.0_131")
  10. Scala  1.79  (version 2.12.2, running on 1.8 openjdk)



I am pretty sure it should be very close to or the same as C++. Ask around on the Nim forum.


And the 2x difference between C++ and Rust looks suspicious too.


Obviously, if you measure cold starts, JVM-based languages will be slower...


Agreed. I include them in the list as a matter of completeness.


Surprised to see Nim that slow; typically it ranks much higher in benchmarks. But without looking at the code and compiler options it's hard to tell whether it's simply the problem or the implementation that's to blame.


There are many other benchmarks around where Nim is usually quite close to C. Perhaps Nim was not compiled in release mode.
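
For reference, a bare `nim c foo.nim` is a debug build with runtime checks enabled; benchmarks should use something like:

    $ nim c -d:release foo.nim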


Or possibly it was a line-by-line translation of the original test that didn't use any peculiarities of Nim.


I'm very surprised to see Rust that slow. Could you share at least parts of your code?


I have only used Python (check out cyvcf2 by brentp!), R, and bash previously - so nothing else really compiled (it's faster than these in general). But my guess is that you will have an easier time writing nim than the alternatives, and after that it is easy to optimize once you have basic functionality.


That's exactly how I found it as well. Brent has some really high quality bioinformatics libraries written in Nim, and supports them well.


For me, personally, the biggest points for Nim are:

- "Transpiled" to C, with some options regarding runtime library, which means you can target any uController out there

- Good metaprogramming support

- Static and _strong_ typing

- Consequently, precise type aware function dispatch

- Type inference

- Good generics (quick sketch below)

Also, more stuff like covariance and contravariance and... maybe you just have a look-see? ;-)
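
For the generics point, a tiny sketch:

    proc total[T: SomeNumber](xs: openArray[T]): T =
      for x in xs:
        result += x

    echo total([1, 2, 3])     # works for int
    echo total([1.5, 2.5])    # and for float, from the same definition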


I'm already quite invested in Kotlin, and having a look at Elixir... on top of my regular work. Time is a limiting factor :-)


There are "concepts" which are like generics but still experimental. Are there any other generic programming facilities in nim?

https://nim-lang.org/docs/manual_experimental.html#concepts


> Are there any other generic programming facilities in nim?

https://nim-lang.org/docs/manual.html#generics


What I wanted to write was something like type-restricted generics; apparently I ended my comment prematurely. Without concepts (interfaces), I find it hard to call this system "good generics", but there might be something I'm missing.


Nim occupies an interesting place between Rust and Go. It has an easier learning curve than Rust. I picked up the basics in a weekend, and some of the more advanced functionality over a month. The optional GC is ideal for the application space, and I also don't find myself fighting the compiler very frequently. When compared to Go, Nim provides a lot more functionality. Object variants, a working implementation of generics, not to mention a great macro system.
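
Object variants in particular are handy; a minimal sketch:

    import math

    type
      ShapeKind = enum skCircle, skRect
      Shape = object
        case kind: ShapeKind
        of skCircle:
          radius: float
        of skRect:
          width, height: float

    proc area(s: Shape): float =
      case s.kind
      of skCircle: result = PI * s.radius * s.radius
      of skRect: result = s.width * s.height

    echo area(Shape(kind: skCircle, radius: 2.0))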


Those are valid questions when evaluating an unknown technology. How could anyone consider this trolling? Not to digress but have we become too sensitive?


Sometimes people troll by sealioning: http://wondermark.com/c/2014-09-19-1062sea.png

(I'm not accusing the grandparent post.)


I've seen that comic before but have no idea what it's trying to express, could you explain? Is it just when you repeatedly pester someone with questions to annoy them?


Unfortunately, asking questions on the internet is often seen as "opposing" or "arguing against" the thing you're asking about. So many qualify their statements to avoid this sort of misreading.


Which just makes things worse. Having to qualify everything by default is a whole lot of foolishness.


It all depends on the tone of the question, which is hard to discern in writing, so the clarification is helpful.

Edit: Yes, those are very valid and reasonable questions. Clarifying that it's not meant as trolling is also valid and reasonable, because writing is easily misunderstood in exactly that way - which is the reason smileys were invented. (And, to be clear, I'm not accusing anyone of missing smileys or of trolling. I am trying to express my agreement with both parent and grandparent.)


Asking a question in plain English like this shouldn't require further clarification.


Communication by writing is harder because you miss facial expressions. Because of this, a lot of it is subject to interpretation on the part of the reader. The tone can be inferred from a lengthy piece, but not so much from a small paragraph.

So, in order to avoid bias on the reader's part, I preferred to state explicitly that it was not trolling.


It occupies more or less the same space as Go: it produces fast standalone binaries with garbage collection. However, unlike Go, it isn't shy to give more features to the programmer to play with.

The cons are its lack of enterprise backing, its smaller community and its syntax, if you don't like either Pascal or Python.


No: you can create system libraries in Nim, unlike Go.

You can also entirely disable the GC or replace its implementation.

Nim can run without OS and on microcontrollers.


It's as easy as Python and as fast as C. That was my take when I looked at it and bought the book last year.


I read the first few chapters of that Manning book. I really enjoyed it.


> as fast as C

I mean, it actually is C, since the code is transpiled to C.


The fact that it uses C as a compilation target doesn't have much to do with whether it's as fast as C in practice. It's easy to imagine a compiler that generates C but generates terrible C that runs really slowly. (E.g., imagine it goes via some sort of stack-machine intermediate representation, and variables in the source language turn into things like stack[25] in the compiled code.)

Or consider: Every native-compiled language ends up as machine code, which is equivalent to assembly language, but for most (perhaps all) it would be grossly misleading to say "as fast as assembler".


In fact, you can transpile Python to C with Cython. That typically gets one a speed boost, but only a bit. You still have most of the memory allocation/deallocation overhead of Python objects getting created and destroyed, and all the work needed to keep attributes tracked, so a straight C version with no focus towards optimization would likely outperform it greatly.

(A neat tool in one's toolbox, of course. But just transpiling to C does not get one as fast as C.)


I’m not sure that I understand your argument. If the code from which the resulting machine code is compiled is C, then it’s objectively “as fast as C” … because, at the end of the day, it actually is C. Being “as fast as C” means that your resulting program will perform as fast as a C compiler [worth its salt] can get you.

Your comparison to machine code (or human-readable assembly code) is less useful, in that such a statement means very little until one knows how said machine code is being produced (e.g., manually, from an IR, etc.).


"As fast as C" would commonly be interpreted as "a program written in it will be as fast as a well-written C equivalent", not as "there is a C program with the same performance characteristics".

That a language is compiled to C does not mean that its compiler is going to be able to produce a C program that's as good as a that well-written C equivalent. (A relatively obvious example would be a compiler that introduces a heavy runtime, and doesn't give the C compiler enough information for it to get rid of the runtime)

It's the same with assembly code: that a compiler produces assembly does not mean the resulting program is fast.


> "As fast as C" would commonly be interpreted as "a program written in it will be as fast as a well-written C equivalent"

That’s your interpretation, which is fine, but the objective meaning stands. Even the idea of “well-written C” is, in my experience, fairly subjective amongst C programmers.


Remember that we're comparing languages, not programs. A language can be thought of as the space of all programs that can be interpreted by that language. In any language, any algorithm can be implemented arbitrarily slowly. So the only meaningful point of comparison between language X and language Y is the upper bound on the performance of each language's program-space.

That some particular C program exists that is at least as slow as a program in some other language is always true, trivially, and so is not a good interpretation of "as fast as C" regardless of its objectivity.


You're missing the point. The fact that a language compiles down to C doesn't mean it compiles down to efficient C. At the simplest level, the compiled code could add a bunch of unnecessary function calls and pointers and other forms of indirection that wouldn't be present in hand-written C. But for a more extreme example, you could also compile an interpreter or VM to C, and it would still be much slower than the equivalent hand-written C code. This is why "as fast as C" typically refers to normal, hand-written C code—even though there is no formal definition for what "normal C" looks like, it's still a useful description.


> The fact that a language compiles down to C doesn't mean it compiles down to efficient C.

Where do you see me claiming otherwise?

> At the simplest level, the compiled code could add a bunch of unnecessary function calls and pointers and other forms of indirection that wouldn't be present in hand-written C.

Again, why are you telling me this? Please quote where I claimed otherwise.

> But for a more extreme example, you could also compile an interpreter or VM to C, and it would still be much slower than the equivalent hand-written C code.

The more that I read your response, the more that it seems that you’re debating yourself, because I’m not sure why you’re telling me this. You started your response by telling me that I’m “missing the point” when, in reality, you seem to have not even read my point. My main point was the following:

> If the code from which the resulting machine code is compiled is C, then it’s objectively “as fast as C” […] your resulting program will perform as fast as a C compiler [worth its salt] can get you.

This is true. I made no claims re efficiency; “as fast as C” and “as fast as efficient hand-written C” aren’t interchangeable claims. Forgive me for not assuming efficiency, because I’ve seen a good amount of inefficient hand-written C code in my years.

> This is why "as fast as C" typically refers to normal, hand-written C code—even though there is no formal definition for what "normal C" looks like, it's still a useful description.

Says who though? I’m professionally experienced in C, and as is very clear by this discussion, it’s down to individual interpretations.


But that's not at all a useful way to describe a language. Why would someone ever describe a language as "as fast as C" and not mean "as fast as typical hand-written C"? What would be the purpose of that? With your interpretation, CPython is "as fast as C", since it's written in C, and yet that's not a claim anyone would actually make.


Suppose I claim that some programming language achieves performance "as fast as C". Then, unless I very clearly and explicitly say otherwise, it will be assumed that if I write kinda-C-like code in that language I will get performance similar to comparable code actually written in C.

But that doesn't at all follow from compiling to C. I gave one example above; here's another. Perhaps my programming language has arbitrary-length integers and the C code it produces for a simple loop looks something like this:

    bignum_t i = bignum_from_int(0);
    bignum_t x = bignum_from_int(0);
    while (bignum_lt(i, bignum_from_int(1000000))) {  /* structs need a compare helper */
      bignum_add_in_place(x, bignum_bitand(i, bignum_from_int(1)));
      bignum_add_in_place(i, bignum_from_int(1));     /* advance the loop counter */
    }
Corresponding code in "normal" C would use machine integers and the compiler might well be able to understand the loop well enough to eliminate it altogether. Code like the above would run much, much slower, despite being technically in C.


JVM, CLR, Python bytecode or LuaJIT code are ultimately all transformed into machine code, but those are not as fast as assembler.

Being as fast as C is not about using C as an intermediate language but about having data structures and control flow that resemble what C compilers have been optimized for.

Case in point: GHC (Haskell) is capable of outputting C code, but the algorithm will not get C performance (or even the performance Haskell gets without this intermediate C representation).


Nitpick, but it actually compiles down to C; Nim works at a higher abstraction level and the compilation is a one-way street. But what is more important is that it generates efficient C code. It looks ugly, and it's not something you would ever dream of writing yourself, but it's been optimised to give fast run-times. In benchmarks the Nim code with optimisations often comes out as fast as the C code with optimisations, in some cases even beating it.


Since you seem to know how this works -- I hope you won't mind if I ask you a slightly on-topic question about this...

I've been trying to find out if I can take the generated C code that nim produces and, for example, compile it on some exotic architecture (say an ancient solaris/sparc system or some aix/power thing, or some mips microcontroller with linux) however I can't find any examples of people doing this...

Is it possible? Or should I abandon hope and continue writing C for these platforms? :}


Are you sure your OS/CPU is not on this list? :) https://github.com/nim-lang/Nim/blob/devel/lib/system/platfo... And yes, the compiler works on almost all of them (you can see which ones are precompiled in csources - https://github.com/nim-lang/csources/blob/master/build.sh#L7... )

And for CPUs - https://github.com/nim-lang/csources/blob/master/build.sh#L1...


Yeah you can get to the generated C. See this SO question.[0]

Give it a try :)

[0]https://stackoverflow.com/questions/29956898/how-do-i-get-th...


As the others have said it should be possible. Nim is already pretty good at cross-compiling, but it is also able to just spit out the C files it generates and allow you to play with those.
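
Something like this (details from memory, so double-check the flags) writes the generated C into a directory of your choosing without running the C compiler:

    $ nim c --compileOnly --nimcache:./generated_c foo.nim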


Yes, Nim can run on all sorts of architectures including AVR microcontrollers, MIPS, RISC-V.


What you said here was literally my point. Maybe you misunderstood me?


I think the point is the terminology: Nim doesn't transpile, it compiles to C.


"Transpile" is such a nonsense term that I don't think it's useful to split hairs here.


I'm from Status, Nim's main sponsor and a blockchain company that is doing Ethereum 2.0 research and implementation in Nim.

Nim had the following appeal for us:

- Ethereum research is done in Python; we actually started our code with a tool called py2nim to convert a 50k-line Python codebase to Nim in less than a month (and then spent 3 months removing cruft, but that's another story).

- Nim allows us to use a single language for research and production: the Python-like syntax and the fast compilation speed are very helpful. In particular, researchers found the Nim codebase easy to debug, as the overhead vs the Python spec was quite low (and the Python spec is typed with mypy).

- Nim has an easy C FFI and is one of the rare languages that can directly use C++ libraries, including header-only, template-heavy libraries (sketch below the list).

- Nim allows tight control over memory allocation, stack vs heap objects, has support for Android and iOS and very low-memory devices as well.

- Nim also can be very high-level

- Nim is as fast as C and can resort to inline C, inline C++ or inline assembly when needed. (inline javascript or Objective C are possible as well)

- You can produce WASM code via clang, emscripten, binaryen, there is even a Nes emulator demo from 2015 running in Nim compiled to WASM here: https://hookrace.net/nimes/

- Nim has a strong type system, including generics and type-level integers, booleans and enums.

- Nim has probably the best compile-time capabilities of all languages. I'm writing a deep-learning compiler in Nim macros. Someone wrote a CUDA code generator in Nim macros, and the code generation is extremely nice for writing a VM, an emulator or any kind of assembler, as you don't need an intermediate step to generate your opcode tables or your functions from those tables.
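
To make the FFI bullet above concrete, a minimal sketch:

    proc printf(format: cstring) {.importc, header: "<stdio.h>", varargs.}

    printf("%s has %d native backends\n", "Nim", 3)

No binding generator and no glue code: declare the C function and call it.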

Now on a personal note, I use Nim to write a deep learning framework similar to PyTorch or Tensorflow.

I think it's the best language to solve the 2-language problem, i.e. write research in Python, R or Matlab and production in C, C++ or Fortran.

The operator overloading or even operator creation is very useful. The macros allow me to have a slicing syntax similar to Numpy, something impossible in C++. Compilation speed is very nice as a developer, I don't know how people deal with waiting for C++ CI.

I reimplemented matrix multiplication with performance similar to the handwritten-assembly BLAS from OpenBLAS and MKL in pure Nim, so I'm not worried about performance either.

Now I'm on to refactoring the backend with a proper compiler (similar to Halide, but since it will be macro-based there won't be the 2-stage compilation issue).


I like Nim very much, however I do not think it will solve the 2 language problem. One of the things I lean on heavily in Python when experimenting and researching is the dynamic nature of the language. I believe this is also a strong factor for others.

For me, Julia completely addresses the 2 language problem. In practice I think it now simply needs a stronger ecosystem, better static analysis, and bug fixes, but I am very happy with the language design.


Some good arguments, but the "main sponsor is blockchain-y" was a bad start for me. That totally killed my interest in RED... Is this as in "biggest patreon donor" or "company that employs main developer(s)"?


Well the Nim team at Status is a separate team working on an open source Ethereum 2.0 implementation, rather than any blockchain 'product' (unless you count a working 2.0 client as a product).

Their goal is to get Eth2.0 (and 1.0) running on very low resource environments such as raspberry pi and turnstiles.

What this should tell you is that:

* Nim is close to the metal enough to run with high performance and low resources.

* In a cutting edge research arena, where the math is still being decided, Nim allows such rapid prototyping that the team is on par with much larger teams in much more established languages.

* All the tools required to build an ethereum platform are present and running, such as security, encryption, and networking such as devp2p. Here's their eth library docs for some of the contributions: https://nimbus-libs.status.im/lib/nim-eth/

Status is pretty dedicated to open source, and whilst they're the biggest patreon donor, the Nim team is completely autonomous.


It's as in "biggest patreon donor".


> ...something impossible in C++

    vec[3_s & 4] 
is very much possible in C++. It's only two characters longer.

(C++ doesn't have a ':' operator, you'll need to use some other symbol.)


I've used it to develop command-line based utilities. The binaries tend to be small and fast.


Are any of these open source? Would love to check them out.


These are both works in progress! But here is the first one for working with sequencing data:

https://github.com/danielecook/seq-collection

And the second:

https://github.com/danielecook/tut

The second one has a really useful utility called "stack" that you can use to concatenate datasets where one might lack some columns or the columns come in a different order. It glues it all together and has an option to include the filename. It's useful for data analysis.

Both of these lack 'polish' at this point but a few of the subcommands work quite well.


I love nim! Wrote a several-thousand-line side project in it in 2015 and have rewritten it several times over, partially to improve the project itself, but also to keep up with language changes during the various alpha-releases. Did a little evaluation of Rust along the way, but decided my money was going to continue to be on nim. -- Congrats to the team, and thank you so much for making it happen!



It's interesting to hear the constraints the language developer wanted from the beginning (20k lines, macro system drives most of the language development) and the various compromises they had to make along the way.

> Furthermore, we don't really know yet how to leverage a macro system in order to give us extensibility on the type system level, so Nim's core needed generics and constraints for generics.

I'm curious to dig into Nim and its type system. I'm not familiar with Pascal-style typing; I wonder how it compares with Rust's type system?


I haven't looked into Nim in as much detail as Rust (though I did make some contributions to the compiler/language a number of years ago), but one aspect that has always seemed notable to me is that it seems to defer type checking of generic code until it is instantiated (this is what happens in C++), so these functions will all typecheck until you actually try to use them:

  proc foo[T](): float32 =
    result = "foo" + 4
  
  proc bar[T](n: T, m: T): T =
    result = n(m)(n)
In Rust, such functions would require some sort of explicit constraints on `T`, so it's more-or-less guaranteed to be instantiable (ignoring the issue of polymorphic recursion) and you don't get some error pointing to code internal to the function when the user applies types that don't work.


>it seems to defer type checking of generic code until it is instantiated (this is what happens in C++) ... and you don't get some error pointing to code internal to the function when the user applies types that don't work.

Worth noting that C++ is going the opposite way with concepts. The compiler still won't enforce that `template<typename T> void foo(T t) { t++; }` needs a `requires Incrementable T` (like Rust would require with trait bounds). But if `foo` does use the concept `template<Incrementable T>`, then a call like `foo(S{})` will raise an error at the callsite, not at the `t++` line.


Interesting. From what I remember of Go's generics proposal, they seemed to be trying to do something similar, which seemed a bit hacky to me (as you say, it addresses the error reporting issue but it's difficult to see how the constraints would be used to actually check the generic code, so functions can presumably still lie about being "for all types").

EDIT: actually, Go's generics proposal probably doesn't have that issue, since the contracts are restricted enough to derive type declarations from them.

Looking into it now, Nim apparently has experimental support for the same feature: https://nim-lang.org/docs/manual_experimental.html#concepts


> EDIT: actually, Go's generics proposal probably doesn't have that issue, since the contracts are restricted enough to derive type declarations from them.

Just wanted to add that my initial memory was apparently of the earlier proposal for generics in Go, as reflected here: https://dev.to/deanveloper/go-2-draft-generics-3333 (in this earlier proposal, contracts simply specify code that is expected to compile, in such a way that it would be difficult/impossible to derive types for symbols introduced by the contract).

I guess the golang people figured out why that was not an optimal system and changed it ... wonder if the same will be true of Nim.



This is super cool. I built a small CLI tool with Nim back in 2017, and it was a joy. It ended up being very accessible to the rest of the team for further development (probably more credit due to the folks building Nim than to my design) and I believe it's still in use. It's a good language, and it was a nice break from JavaScript (which I also enjoy, but not so much for CLI tools). If you haven't tried it out yet, I highly recommend it.


Nim is a thing of beauty. It has a well-thought-out syntax (some consider it to be similar to Python, but it removes many Python idiosyncrasies), a state-of-the-art type system with generics, a beyond-state-of-the-art macro system, excellent performance (comparable to Rust or C), and it can work as a low-level or a high-level language. It is pragmatic: you can write functional or imperative code; neither is imposed on you. There are so many beautiful hacks I enjoy (e.g. unit tests are supposed to be included in each file, any file can be compiled as an executable, and you can run it to run the tests).
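
That test idiom hinges on a magic constant; a minimal sketch:

    proc add(a, b: int): int = a + b

    when isMainModule:   # true only when this file is compiled as the main program
      assert add(2, 3) == 5
      echo "tests passed"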

Here is a taste of what's possible: https://hookrace.net/blog/what-is-special-about-nim/


My reasons for using nim:

- Syntax is very expressive and not restrictive or bloated (I can write clean code, almost like pseudocode, without boilerplate)

- Runs fast and has type checking, something that Python lacked (until recently at least). (When I was programming in Python I was so frustrated that any variable could be passed to any function call and nothing was checked until the code actually ran. Also, 'nim c -r file' is faster than any Python program I have written.)

- Documentation is very good nowadays, but even before that, since the standard library is so easy to read and written in Nim (whereas most Python standard code interfaces with C), I could read and understand what the standard library functions were doing; another big plus when learning.


Finally, very happy for Nim and the team!

Version 1.0, stability, an RFC process -- all signs of a maturing language and ecosystem.

Here is a curated list of frameworks for Nim

https://github.com/VPashkov/awesome-nim

For me, personally, the interest is its JavaScript-as-target capabilities.


I noticed that Araq's personal post about v1.0 mentioned a powerful macro system as a top priority. I wonder if Nim's macro system is now expressive enough that a front-end web framework like Svelte [1] could be implemented using Nim macros, as opposed to Svelte's HTML template language and custom compiler. Could be useful for developing isomorphic web apps with something lighter than Node on the server side.

[1]: https://svelte.dev/


I think this is what you're looking for: https://github.com/pragmagic/karax


Not quite. Karax has a virtual DOM, and Svelte's creator argues that this is pure overhead [1].

[1]: https://svelte.dev/blog/virtual-dom-is-pure-overhead


Love Nim!

Key points for me:

- familiar, easy to read syntax

- great performance

- standalone executables / ease of deployment

- easy interfacing with C and C++ code

I would love to see its community grow and see more available libs.


> Type identifiers should be in PascalCase [1]

I don't get why some languages adopt the 'PascalCase for types' approach (which is fine), but then a bunch of the built-in types such as 'string', 'set', 'int64', etc. are lowercase... it's annoyingly inconsistent.

[1] https://nim-lang.org/docs/nep1.html#introduction-naming-conv...


On the contrary, I feel that the stylistic [and sometimes semantic] separation of primitive and boxed types in languages (e.g., `byte` VS `Byte` in Java) improves the developer experience, in that I can very quickly dissect the type of value that I’m dealing with when reading the code.


In Java that difference matters a lot for performance: primitives are unboxed, objects are boxed. (Not as true now with auto boxing and escape analysis, but this was absolutely true in version 1.0.)

In C++ it can matter for correctness because primitives are uninitialized by default. But other types might be too and the standard library uses under_scores for things that are initialized on construction, so it's not a great example of this distinction.

Why do you care in other languages? In Rust for example I'm a little fuzzy on why I care if something is considered a primitive.


In this case it might be a 'documentation as code' thing: being able to see at a glance whether something is a language primitive or a potentially very different implementation could have value.

However I'm not super familiar with Rust, so I couldn't speak to that why.


In C#, the lowercased ones are keywords, some of which map to built-in types. For example, "string" is a keyword that always means the type String (System.String). The bare name "String" could be a type that some nefarious person added to your current namespace.


I think it does set the built-in types apart for being built in. Also tradition?


That makes about as much sense as making builtin functions UPPERCASE. Also, what tradition?


I like it. It lets the domain-specific types stand out.


I never thought I would live long enough to see this happen! I started using Nim in 2014, but abandoned it after a few years, frustrated by the instability of the language and what I perceived as a lack of vision. (In 2014, release 1.0 was said to be "just around the corner".)

This release makes me eager to try it again. I remember that the language impressed me a lot: easy to learn, well-thought, and very fast to compile.

Congratulations to the team!


But why do they have to use camelCase? I don't know why it bothers me so, but I see that in a language or a repo and immediately and irrationally despise it.

Is it just me?


As others have mentioned, Nim is style insensitive. This might sound odd at first, but it allows you to use snake_case in your code even though the standard library is camelCase. It also allows me to use your snake_case library in my camelCase code without ending up in the mixed_Case hell you often end up with in Python.


Nim is style insensitive, so `foo_bar`, `foobar` and `fooBar` all resolve to the same symbol. However, `Foobar` won't.
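
Concretely:

    proc fooBar() = echo "hi"

    fooBar()    # all three calls resolve to the same proc
    foo_bar()
    foobar()
    # FooBar() would not compile: the first letter is case-sensitive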


More of a personal preference, but to me, typing all these underscores all the time would probably give me carpal tunnel syndrome within just a few years.


Camel case is extremely common, but it's funny that they use it given that the language is case insensitive: https://github.com/nim-lang/Nim/wiki/Unofficial-FAQ#why-is-i...

Sidenote: not a Nim user, but this seems like it could get in the way of interoperability with C (and most other languages)?


Not really. When you want to import things from C you simply name them what you like, and then you can call them in a case-insensitive fashion. When writing code that should be imported into C code, it will either have its name exactly as typed, or you can specify a name for it yourself.


Right now I write, almost exclusively, Rust and Python. I'm really hoping I can replace a lot of the Python code with Nim some day.


Why Nim? As someone who has put a lot of effort into a personal language project and understands what it takes to build a new language, I wonder why Nim's authors did it in the first place. Just for the sake of it? To make money? For the fun?

I'd like to know (as Nim doesn't seem to be backed by big dollars).


Basically it is a compiled language that feels like a scripting language -- typically compiled languages have so much boilerplate code -- but Nim doesn't. If you just looked at a Nim program, you might even think it is in some scripting language you haven't seen before. I see Nim and Crystal filling a similar niche -- Nim feels more like Python and Crystal more like Ruby.


"Nim feels more like Python and Crystal more like Ruby"

Agreed. To me Crystal is a bit easier to grasp because I used Ruby long ago, but it has the non-trivial disadvantage of being bootstrapped (i.e., it requires itself to compile itself), so it is slower to port to different architectures.


It compiles to C, which compiles to everything. You don't have that with Rust or Go.


> To make money?

Not at all.

Nim aims to achieve the expressiveness of Python, the speed of C, and the programmability of Lisp.


What about kotlin-native?


I remember a benchmark of kotlin-native which wasn't good... That said, there may have been improvements since.


Congratulations Nim team!

Last year I wrote a book on building applications with Nim; now I think it's time for all the people who waited for 1.0 to pick it up: https://xmonader.github.io/nimdays/


Fantastic news. I haven't done anything too serious with Nim, but for the code I did write in it, I found it extremely intuitive and easy to get going with. It also has all the features I could ask for in a language, and it never feels like you need to work around a limitation. Of course, that's just an initial impression; I haven't used it as much as languages that have batteries included for the specific tasks I do (most of the programming I actually do these days is around data analysis and economic forecasting, so R is pretty hard to beat).


For compiling to Javascript, is it possible to use npm modules with the code?


There is no first-party support for this, but I don't see why it shouldn't be possible. It might take quite some effort though.



I know of at least one Nim lib that wraps a JS lib (React); you could find that and use it as an example.


Congrats and thanks, Nim team. I've been tinkering with Nim to build a really efficient pubsub server, something that can take heavy concurrent connection loads. Now I can actually build with confidence.


Does it support tabs for indentation yet?


#? replace(sub = "\t", by = " ")

but please, don't


One exciting new project using Nim: https://github.com/status-im/nim-libp2p

and also https://github.com/status-im/nimbus which is a lightweight Ethereum 1.0 & 2.0 client in development. If Ethereum 2.0 is really able to scale like they plan, a project like nimbus could really enable widespread use of cryptocurrency on phones.


Great that it's here. I tried Nim around 8-9 months ago, and left it because I didn't want to work with a language that wasn't stable yet. Guess I'll give it a visit again.


> I didn't want to work with a language that wasn't stable yet.

IME, the language was quite stable since version 0.19, which was released one year ago.

But I know that people don't trust some random bloke on the internet, they want to see 1.0 before they dive in. I had the same concern when I first heard about Nim (I think at that time Nim was at v0.17).


Great! I love this language, so simple and powerful, so fast executables!

I hope I don't spoil the party by asking: What's the status of GUI bindings?


WxWidgets works great, and Gtk (2/3) also works great (both have macros that make actually creating UIs much easier than in most other languages). There is also wNim for making Windows UIs, and NiGui for pure-Nim cross-platform UIs (which target the native toolkits). Apart from that there are various bindings to other toolkits, both ones meant to be embedded in games and stand-alone things. There are even ways to use Nim code to power Electron apps.

So all in all I'd say it's pretty decent, but I still have plans for my own toolkit that would solve my personal gripes with creating cross platform UIs.


I'm curious about why you want to develop your own toolkit and what will differentiate it from the others. Will it draw its own widgets, or wrap platform-native toolkits?


Well, it all started when I wrote the genui macros for wxNim and the Gtk wrappers. I started working on a rather simple note-taking application and had an idea in mind for what I wanted it to look like. Turns out this look wasn't possible in wxWidgets without resorting to custom widgets (because the Windows back-end didn't have support for something, none of the others could expose it). So I had to switch away from wxWidgets and use Gtk instead - not a huge issue for that project, since I was just going to use it for personal purposes anyway, but it made me realise something: there isn't a single cross-platform UI toolkit that allows me to learn that one library and then create UIs for either all the targets, or one specific target.

The idea has since gone through many revisions and back-and-forth on implementation ideas, but the current vision is to base it on Nim's powerful type system. So instead of saying "I want a wxTextCtrl", you say "I want a way of editing this string". Then it is up to the target toolkit to have an idea of how a string should be edited. Any styling of the resulting UIs is done on a per-platform basis, with whatever tools are available for that platform. This will also allow the toolkit "drivers" to take in their own UI specifications (for example Glade for Gtk) and map the types you supply to parts of the UI - essentially separating the UI generation completely from the code/logic.

The benefit of an approach like this is that for a simple UI that is only meant as a front-end for some algorithm, you can just specify your inputs and actions and the UI can be automatically generated for you across platforms. If it looks weird on one platform, you are able to specify for that platform what modifications to make. Or if you want to create a beautiful, shiny graphical application, you are able to tailor-make every aspect of the UI for the target platform and then bind it back to your code.

This is probably already way longer than what you expected, or even wanted; I've been thinking about this and toying around with implementations for a while now. I had an early prototype that created widgets for Gtk and Karax (a Nim web front-end toolkit) that worked really well, but I hit some snags with the implementation in an earlier version of the language. Now that the language is more mature, and I'm more used to it, I might finally be able to implement it properly.


I'm enjoying NiGui[0]. It uses the Windows (Win32) API on Windows and GTK3 on Linux/Mac. It doesn't have every feature added yet, but it does have quite a few, and it's easy enough to add more. I've personally contributed a bit to this repo in hopes it grows in popularity; I found it to be one of the easier GUI libraries to work with in Nim.

[0] https://github.com/trustable-code/NiGui



Gintro is good but has bad docs, and the API is inconsistent with actual GTK, so sometimes you can't guess.


The binary sizes that Nim produces are as usual misrepresented. D and C will blow it out of the water. Here's some D code:

    import core.stdc.stdio;

    extern(C) void main()
    {
        printf("Hello, World!\n");
    }
Compile with:

    $ dmd -betterC main.d
    $ ls -h main

    -rwxrwxr-x 1 user group 8.5K Sep 24 12:39 main


I guess the examples use the default std libs and runtimes of the languages. So in this sense your D example is the misrepresentation, since it bypasses D's runtime by directly calling libc.

If these kinds of low-level size optimizations are enabled, then both Nim and Rust can be more than 50 times smaller than your D example ;) :

150 byte Nim: https://hookrace.net/blog/nim-binary-size/

151 byte Rust: http://mainisusuallyafunction.blogspot.com/2015/01/151-byte-...


While Nim is a much better Python, you cannot use Python modules with Nim (pushing strings to a Python interpreter does not count as such), so you need some man-years to build a reasonable software collection.

Haxe had another approach: create a library and transpile it to be used with Lua/Python/Java and other targets.


Have you seen Nimpy[0]?

It allows for a nice integration between Nim and Python. At least for the examples I have tried, it could be used in place of Cython, when there's a need for extra speed in some hot loops and similar situations.

[0] https://github.com/yglukhov/nimpy
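
For a taste, here's roughly what it looks like going by the project's README (treat the details as a sketch):

    # mymodule.nim - build with: nim c --app:lib --out:mymodule.so mymodule.nim
    import nimpy

    proc sumOfSquares(n: int): int {.exportpy.} =
      for i in 1 .. n:
        result += i * i

Then you `import mymodule` from Python and call `mymodule.sumOfSquares(...)` like any extension module.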


Haven't seen this. Should try.


I have been considering approaching a new language lately, after having taught myself Python and, more recently, JavaScript.

Where does Nim sit in regards to the future of machine learning? Can we expect clones of PyTorch/TensorFlow? Is it a suitable language for this?


> Where does nim sit in regards to the future of machine learning? Can we expect clones of pytorch/tensorflow?

https://github.com/fragcolor-xyz/nimtorch


Cheers. No harm in checking it out. That Manning book looks good.


Congrats to the team!

Could Nim be used to generate a framework for Xcode, and the same code be used in a Windows C++ or .NET project?

I've been looking for ages for a cross platform solution that can be used to create dynamic or static libraries that isn't C++.


Without knowing any more specifics about your problem it seems like this would be possible in Nim. It is cross platform and can generate dynamic libraries across them. I guess you could somehow also set up a static library, but I've never tried that myself. So as long as your target language(s) have the ability to use dynamic libraries then Nim should be able to supply them.
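
A minimal sketch of the dynamic-library route (note the host application should call the generated `NimMain()` once to initialise Nim's runtime before using the library):

    # mylib.nim - build with: nim c --app:lib --noMain mylib.nim
    proc multiply(a, b: cdouble): cdouble {.exportc, dynlib.} =
      a * b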


As far as I know, that should be doable. Nim can compile to Objective C as well as C++, and has a `when` statement that makes it easy to produce code for a specific OS or language target. Not an Xcode or .NET expert though, so not sure how difficult it is to hook into those platforms.


Once the ecosystem matures (good async DB drivers), it would be a pretty good option for web dev.


It's already a pretty good option, the Nim forum is written 100% in Nim.


Damn, this is some solid work. Seriously impressive.


Christmas has come early!!


What’s Nim’s concurrency story? Since it’s compiled via C, I suspect the answer is “shitty”, as in I expect it to inherit all of C’s issues with undefined behaviour, values out of thin air, and so on...


That's kinda jumping the gun - just because something is compiled to C, doesn't mean that it has all of the characteristics of C. That's like saying that because something is compiled to assembly, it has all the complexities and dangers of assembly.

For example, take a look at Cython, CGo etc.

Regarding concurrency, Nim has cross-platform thread support in the standard library. Concurrency, at least right now, is more memory safe than in other languages, but also more restrictive. Each thread has its own heap, and memory from one thread's heap cannot be referenced by another thread's heap without use of unsafe pointers. One can, however, copy memory from one heap to another.

There are more abstract concurrency mechanisms detailed in the manual (primarily for work that is well-suited to threadpool use cases).
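
A sketch of what the model looks like in practice (compile with `--threads:on`; `fib` is just stand-in work):

    import threadpool

    proc fib(n: int): int =
      if n < 2: n else: fib(n - 1) + fib(n - 2)

    let answer = spawn fib(30)   # runs on a pool thread with its own heap
    echo ^answer                 # ^ blocks until the FlowVar has a value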


Note that what you’re describing here is Nim’s parallelism. Nim supports concurrency via a macro-implemented async await mechanism that is very mature (it runs in production at https://forum.nim-lang.org)
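
A minimal sketch of that side of things:

    import asyncdispatch

    proc tick() {.async.} =
      for i in 1 .. 3:
        await sleepAsync(500)   # suspends this proc; the event loop keeps going
        echo "tick ", i

    waitFor tick()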


> and memory from one thread's heap cannot be referenced by another thread's heap without use of unsafe pointers

Are you sure about that? Your statement seems to conflict with Nim's manual, which says

> Each thread has its own (garbage collected) heap and sharing of memory is restricted to global variables.

This does suggest it inherits all of C's memory (un)safety characteristics.


I was going to rant about sentimental versioning, but instead: congratulations on reaching version 1.0.



