Nim Community Survey 2019 Results (nim-lang.org)
147 points by open-source-ux on Feb 23, 2020 | 84 comments



Funny, today is when I finally decided to start learning Nim. I do C++ with CUDA and Python on top for work. Having looked at the Arraymancer lib, I became interested in the language. Between its outstanding metaprogramming features and its ability to compile to C++, I think it is perfect for the scientific computing domain.


Thanks for pointing it out, that library looks quite interesting. A question: it mentions that it "Supports tensors of up to 6 dimensions". Sadly, some of us have to deal with tensors of even more dimensions sometimes. Does Nim support some kind of equivalent to C++'s variadic templates that would allow creating tensors with an arbitrary number of dimensions?


Author here:

Actually it's 7 [1]

But in my future backend revamp I plan to make the default 6 [2]

Note that the line can easily be updated to

    const MAXRANK* {.intdefine.} = 6
as in the planned revamp, and then you would be able to change MAXRANK with a compile-time switch:

    nim c -d:MAXRANK=16 path/to/my/app
[1]: https://github.com/mratsim/Arraymancer/blob/6fddfa9a734ac01c...

[2]: https://github.com/numforge/laser/blob/d1e6ae6106564bfb350d4...


Nim supports variadic templates, type-checked variadic run-time calls, and macros which let you run readable code at compile time (unlike C++, whose template system is accidentally Turing complete: it can do anything, but readability -- and compile speed -- are not part of that accident).
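A minimal sketch of the variadic and compile-time parts (names made up, untested; a real macro goes further and manipulates the AST itself):

    proc total(xs: varargs[int]): int =
      # type-checked variadic call, resolved at run time
      for x in xs: result += x

    const squares = block:
      # ordinary, readable Nim evaluated at compile time by the VM
      var t: array[10, int]
      for i in 0 ..< 10: t[i] = i * i
      t

    echo total(1, 2, 3)   # 6
    echo squares[3]       # 9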

But unless you actually extend Arraymancer to more than 6 dimensions, you will likely lose some speed and convenience.

mratsim is a magician. Both Arraymancer (tensor library) and Weave (multithreading core) are comparable to, and in some ways better than, other similar runtimes, despite being written by one person who is working on other projects as well.


Nim would be really competitive in the gamedev/graphics field if there were a standard library that you could use without the GC. (Technically you can turn off the GC entirely, but you can't use the current standard library without it.) The language is especially good for gamedev because of its extensive metaprogramming facilities, a sane build system (which lets you write build scripts in a subset of Nim itself), and the fact that you can load C/C++ libraries with ease.
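For the unfamiliar: those build scripts are plain NimScript; a task in a config.nims looks roughly like this (file and task names made up, from memory):

    # config.nims
    task release, "optimized build":
      exec "nim c -d:release -o:bin/game src/game.nim"

and is then invoked as "nim release" from the project directory, if I remember right.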


Unreal and Unity appear to be quite competitive with GC.

I love Tim Sweeney quotes:

"Programming languages without garbage collection send us down a long path of design decisions that lead to slow compile times and fragile runtime performance cliffs."

https://twitter.com/timsweeneyepic/status/122367043109542707...

"It's interesting that many games can afford a constant 10x interpretation overhead for scripts, but not a spikey 1% for garbage collection."

https://twitter.com/timsweeneyepic/status/880607734588211200


>slow compile times

Since when is C slow to compile? C++ is worse, but the abuse of templates combined with the header system is the primary cause of that problem, not manual memory management. Just look at the include tree of a conventional application. Even a moderately complex file will only have around 100 included headers in total, but once you add boost::filesystem you will suddenly see 1000 headers being pulled into every single file of your project. The header system is necessary because the compiler has to be aware of the sizes of all data types, but this can be avoided with the pimpl pattern, which again does not require a GC.

The downsides of garbage collection can't be overstated. A GC globally influences the entire application, including sections that do not use the GC, so there is no opt-out way of avoiding the GC in critical sections. If just one section of your application is generating too much garbage, it will directly influence critical sections like audio or rendering. Therefore the goal is to do as much as possible without a GC and then add an embedded scripting language that runs a GC. A GC cycle inside the embedded VM only stops the computation that happens inside that VM, but since we have already ensured that all non-critical sections run outside the VM, the impact is minimal.

You want things like animations to be smooth and your attacks to hit but it doesn't matter to you if a quest progress bar or quest reward is getting delayed by 30ms.

I want non global GC heaps.


Except that, especially regarding the former tweet, I get the impression that he's chasing a pipe dream (with all respect), and he's been heavily criticized for it. Has he actually programmed anything serious recently? He seems to mostly tweet about languages instead of actual technology.

Honestly I can't make sense of what he's saying there, so there's a chance I'm not getting an important detail, but GC has nothing to do with function specialization. What does the following even mean?

> Without garbage collection, offering an array of bytes where an array of integers is required requires stack-allocating an array of integers and converting each byte to an integer every time a subtype is used in place of an actual type. This is so absurd that it’s not done.

The only interpretation of this that I can come up with would be just factually wrong.


Except that ignores the small detail that his company is selling a game engine with heavy use of GC, via Blueprints and C++ classes marked as UOBJECT.

I bet he understands one or two things about shipping games with such engine.

The sense I get from those tweets is the usual anti-GC discussion, mostly from folks who are even against modern C++ (as recently seen in some C++20-related discussions), which eventually derailed the thread into something else.


> Except that ignores the small detail that his company is selling a game engine with heavy use of GC, via Blueprints and C++ classes marked as UOBJECT.

Yes I've heard that there are some high-level things and _some_ amount of GC in there. And I'd wager that this is about where your knowledge about UE ends as well. If that is so, better don't make bold statements like "heavy use of GC".

And Mr. Sweeney is obviously not only accomplished, but extremely sharp. But being sharp doesn't protect anybody from buying into stupid ideologies without any actual real-world validation.

Just as it doesn't protect HN trolls from collecting little factual nuggets from the internet and then spreading misinterpretations (i.e. falsehoods) hundreds of times. For example, the kind of "data oriented design" that has been all the rage recently is most definitely not a concession to OOP.

Regarding Mr. Sweeney, the German Wikipedia page, for example, states that he hasn't had any significant programming involvement since Unreal Engine 3 (which was released in ~2006; Unreal Engine 2 was released in ~2002). Regarding programming, all I've read from him on Twitter has been language-related stuff, frequently mentioning theoretical blah-blah such as covariance, monads, higher-kinded types, blah-blah, and oh, it is sooo practically relevant that C function pointer syntax is unusable if you want to return pointers to functions that return pointers to functions that return pointers to functions...

> The sense I get from those tweets are the usual anti-GC discussions

I suggest trying to make sense of his actual statements in the first place. Any offering from your side is very welcome! Again, I'll muster all the arrogance I have in me and accuse this man, who is obviously much sharper than myself, of blathering a load of BS in this tweet that hasn't undergone real-world validation.

Again, I might be missing the valid interpretation of what he's saying, but to me all evidence points to him needing a little refresher regarding the real world, in practice.


Any C++ developer worth their salt is perfectly able to understand that blah-blah.


You know, I once learned category theory, at least to the point where I could explain from the ground up what a monad is, as well.

And the most important thing I've learned in the process is that these theoretical buzzwords don't have any relevance to programming whatsoever. This theory stuff is not a means to an end. It's a means unto itself.

> My answer to this is: the ultimate type system is constructive logic (following the proofs-as-programs direction), the ultimate typechecker is a theorem prover, and theorem proving technology improves over time.

https://twitter.com/TimSweeneyEpic/status/123141241308757196...

People who say such things have simply drunk the Kool-Aid, big time. When it's about practice instead of theory, I prefer to listen to people who have actually made stuff in the last decade.


>with heavy use of GC

Game logic can optionally use the GC; the engine itself does not. So it would only be heavy use of GC if you had lots of computationally heavy game logic. And if you do, you might just not use the GC there!


Just because it is possible to use a GC doesn't mean one needs to use it all the time; not all platforms impose a GC that runs all the time without extra knobs to tune.

So just like one would not do malloc() on specific engine code paths, it is possible not to use GC heap allocations as well.


> it is possible not to use GC heap allocations as well

At this point you're almost doing manual memory management.


RAII and arenas are not manual memory management.


I didn't notice the point at which this turned into a discussion about RAII or whatever a GC is in 1990's books' definitions. And why would an arena not be manual memory management if I make explicit allocations that make explicit use of such arena, and I free the arena on a suitable occasion? I'm even likely to write the arena myself, but not necessarily.

I have this simple and (in my mind) very useful equation of manual memory management ~ writing code to manage memory. Conversely, GC ~ using an external runtime (such as that in JRE) that helps you to avoid writing any memory management code at the cost of giving up control. Ok, let's lose a sentence about ARC, I don't care if you name it GC or not, and I don't use it.

I'm not interested in other pointless taxonomic discussions. Whatever, man.


Because many languages with GC do offer the necessary tooling for doing low level memory management if one so desires, while still enjoying some safety.

Naturally the anti-GC crowd rather switches to their error prone C code instead.

Thankfully that is not where the industry is going, eventually looms will be the dominating force, one generation at a time.


The devel branch (which is actually usable, though not production ready yet) has a new gc (--gc:arc) which relies on reference counting, and is deterministic -- and the way it progresses, I suspect it will likely become the default at some point in the future.

But even now, Nim's native GC has a deadline scheduler, which means that, unlike e.g. Java or C# (last I checked), you can tell it "ok, you have 2ms, collect as much as you can" -- which is perfect for games. (It also offers Boehm if you want it, and a couple of other options.)
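From memory (check the docs, the exact signatures may differ), that looks something like:

    GC_step(2000, true)    # give the collector a ~2 ms budget, e.g. once per frame
    GC_setMaxPause(2000)   # or set a global soft bound (in microseconds) once at startup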

And C# (through Unity and independently) is already competitive for games. So really, that's no reason not to use Nim today for games (or any other use, for that matter).


Many Unity games suffer because of that GC, and/or their developers suffer working around it. It isn't always a great story.


Mostly because many Unity developers are indies learning C# and Unity at the same time and have yet to grasp how to do value-type-oriented programming.


Don't forget, there's an LTO switch built right into the Nim build system which does whole program optimization across C translation units.

This is the real killer feature - 40 years of GCC optimization leverage
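I don't remember the exact switch off-hand, but at worst you can pass -flto through to the backend compiler yourself:

    nim c -d:release --passC:"-flto" --passL:"-flto" myapp.nim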


Not only that, but you can also use many other C compilers, like Intel's own which AFAIK has some pretty big optimizations for Intel CPUs specifically.


Only thirty! It was started in 1987, I believe.


It'll be 2027 before we know it


25% reported having never used the language. Surprising that so many people outside the Nim development community would fill out this survey.


There has been a lot of observation and speculation about Nim for a few years now. It seems that a lot of people follow its progress because they like its philosophy, but don't feel comfortable enough with its maturity level to dive in (language stability issues and API changes can be really frustrating when you are trying to maintain a project).


They hit 1.0 very recently, so maybe more people will start feeling comfortable now.


I've been following Nim since its inception and I'd report myself as "never used the language", as I've only built hello-world type test programs.

I like the idea of a static, compiled Python, but it's missing a lot of core libraries, and the ones that are there always look a bit disappointing to me. So I'm still in the "waiting for it to catch on" phase and believe that 25% is there too!


I used it a few times before but I stopped once I realized it was a long way off from being worth using.

The lack of tooling, libraries, and numerous compiler bugs meant that it would be easier for me to write the same thing in C++ even though nim is a superior language.

I'd imagine many people came to realize this before even trying it out.


how long ago? I tried it maybe 3 years ago or so, and also ran into a lot of compiler bugs, which I reported. The good news is I saw all of them get fixed! Haven't tried the language since it hit 1.0 recently.


I don't remember, I tried it out a little before it was renamed from Nimrod I think.


I don't think you should be commenting about the current state of the programming language (which is how your first comment comes across) if you haven't used it in over 6 years.


I tried it a few times after that and it still seemed like it was many years away from being competitive with C++/Java/Go


I have been using Nim throughout most of 2019 and never encountered compiler bugs. Not once.


I'd assume they purposefully reached out to as many as possible, otherwise it just becomes an insular circle jerk. The people currently using the language will presumably have far fewer issues with it than people not using the language for whatever reason.


Very happy to see Nim gaining some adoption. I started using Nim about a year ago and could not be happier with the language.

I do wonder what it might take to take Nim towards "main stream" status.


I keep looking for a good excuse to learn it, but it does not fit my domain or my language-style preferences very well.

I primarily develop web services, and it is really difficult to beat TypeScript given that I can write my front end, backend, and mobile apps in it (via NativeScript or React Native).

I use uWebSockets.js, which has both an HTTP API that is nearly identical to Express and a WebSocket API that is identical to Socket.io, but performs ~10x faster than Express and ~2x faster than the top-performing Go/Rust/C web libraries, due to being a C++ library exposed through Node V8 bindings.

If I really need performance outside the context of handling web requests, I will write small functions or services in Go/Rust and compile to WASM and invoke it through Node, or just throw it up on OpenFaaS.

I like a lot of what Nim does, but I have had a difficult time since finding it two years ago ever coming up with an adequate reason for using it.

Edit: I want to make one clarification. I think that Nim may actually have a bright future in ML, primarily due to the work of Mamy Ratsimbazafy. He has some HPC libraries, Laser [0] and Arraymancer [1], that can smoke C by several orders of magnitude. Combined with Weave [2], the multi-threaded runtime he built for Nim, that opens the door to ridiculously performant ML. I think the only other language that would give Nim a run for its money here is Julia; I really dig the work that has been going on there for ML as well.

[0] https://github.com/numforge/laser

[1] https://github.com/mratsim/Arraymancer

[2] https://github.com/mratsim/weave


Well, Nim compiles to JS, and cross-compilation to iOS and Android is also possible (I don't know how tricky those last two are).

So you could write your entire stack in Nim too, the same way you do with TS. :)
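The command-line side at least is simple; how much extra toolchain setup the mobile targets need is the part I can't speak to (paths made up):

    nim js -o:dist/app.js src/app.nim             # JavaScript backend
    nim c --os:android --cpu:arm64 src/app.nim    # cross-compilation target flags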


Smoke C by several orders of magnitude? Citation needed.


Entirely reasonable doubt; you can find well-commented benchmark output for the Laser high-performance computing lib and the Arraymancer lib in their sources:

https://github.com/numforge/laser/blob/d1e6ae6106564bfb350d4...

https://github.com/mratsim/Arraymancer/blob/6fddfa9a734ac01c...

To qualify the statement, it would be more accurate to say that Nim can smoke C by several orders of magnitude unless you devote significant engineering effort to replicating in C what the equivalent Nim does (Nim targets C/C++ regardless, so the generated code is just that heavy lifting and those optimizations done for you).


While all you say is true, "an order of magnitude" means 10x and "several orders of magnitude" means 1000x or more.


I think it needs a "killer library". The core language is pretty easy to learn if you've used Python, but there aren't a ton of third party libraries, and the ones that do exist seem to be incomplete, poorly documented, buggy, abandoned, or some combination thereof.


I got annoyed when it didn’t have a decent standard library to handle command line flags and arguments.

The examples given so far are mediocre.

parseopt is incredibly cumbersome.

docopt is an incredibly poor design; I can't believe anyone thought it was a good idea.

And failing to have such basic libraries, I realized, this language has a long way to go.

I contemplated building my own library to handle such a need, but why? That doesn't solve my problem of building something. And if it's missing such basic libraries, then what else is it missing? The odds of that were too great, and the risk too high, for me to commit my time to it.


There are many libraries for that, but I suggest cligen. It is the simplest command line parser I have found across all programming languages.
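From memory, usage is roughly this (proc name made up); the doc comment becomes the --help text and the parameters become flags:

    import cligen

    proc greet(name = "world", times = 1) =
      ## Says hello one or more times.
      for i in 1 .. times:
        echo "Hello, ", name

    when isMainModule:
      dispatch(greet)   # derives --name and --times options from the parameters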


What if the Nim maintainers set up some kind of continuous popularity measurement to let good libraries bubble up? That opens other problems, I know, but it might address this scenario and accelerate Nim adoption.


There was an attempt a few months ago, but it was a little weird. But if you just want to find libraries relevant to your task (e.g. parsing the command line), there is https://nimble.directory, where all Nim libraries are listed.


I hadn't seen docopt before, but am curious why you think it's such a bad design.


you can try cligen and then argparse https://github.com/iffy/nim-argparse


Making clones of the top 50 Python libraries would make it attractive. Familiar syntax! Way more speed!


Many of the top 50 Python libraries are wrapping a lot of code that is actually C, C++, or Fortran, so I doubt there is a lot of performance to be gained. NumPy is a good example of that.


I applied for a job at a company that was switching from Python to Rust because, even though they were using NumPy, there was a lot of overhead in setting up the data to send to NumPy.


It's not about those libraries' performance, but about letting people port their own code with less friction for a presumed gain in performance.


How would one find this "top 50 Python libraries" list?


I’d suggest based on PyPI download numbers, see https://hugovk.github.io/top-pypi-packages/



This would be super compelling.


I'll place my chips on mratsim (Arraymancer, Weave, Laser) being the killer librarian. But it will take a decade or two to unseat Python.


Speaking for myself, as an absolute novice, OOP in Nim is not great. It feels shoehorned in, rather than being a first-class part of the language design. There is too much fiddling with methods vs. procs, pragmas, and such for the most basic stuff.

I am tinkering on a small 2D game using Nim. I was planning on using OOP for most of my game objects. Now I am debating whether I should make the effort to actually learn OOP the Nim way or just do it procedurally.

Considering that my project is tiny, I might go for the latter and skip OOP altogether. So for me, right now, Nim is a better C, with nicer syntax, a simple build system, and a package manager.
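To illustrate the kind of fiddling I mean (rough sketch, names made up): dynamic dispatch only happens through "method", the base method needs a {.base.} pragma, and procs are always statically dispatched:

    type
      Entity = ref object of RootObj
        x, y: float
      Player = ref object of Entity
        speed: float

    method update(e: Entity) {.base.} =
      discard                  # overridable default

    method update(p: Player) =
      p.x += p.speed           # dispatched at run time on the dynamic type

    proc describe(e: Entity): string =
      "entity at " & $e.x & "," & $e.y   # procs are resolved statically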


I would say that this is in some ways by design. Nim offers some OOP features, but in general does its best to discourage the use of OOP. In that sense I would say you should skip OOP and implement everything in a procedural style (it will be more efficient too).


The only OOP-ish thing I've really missed in Nim is interfaces. They seem to be a really convenient way to reuse code: "this function (proc) can operate on any type that has implementations of this and that other function (proc)".

Googling has left me with suggestions of using macros to re-implement similar functionality, which I guess is in line with Nim's philosophy of "small core with a powerful macro system". Looking into it, however, developing macros seems like more effort than a simple interface.

Just the impressions of someone not super familiar with metaprogramming.
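Concretely, the kind of constraint I mean; I gather Nim's experimental concepts can express something like it, though I haven't tried them (untested sketch, names made up):

    type
      Drawable = concept d
        d.draw()             # any type providing draw()...
        d.area() is float    # ...and an area() returning float matches

    proc render(d: Drawable) =
      if d.area() > 0.0:
        d.draw()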


The idea I think is that someone needs to implement it using macros and get it into Nim's stdlib. Then we can all make use of it. If you're passionate enough about interfaces then you may just be able to implement something that works beautifully and get it merged into stdlib (that would be awesome).


It seems deliberate:

"While Nim's support for object oriented programming (OOP) is minimalistic, powerful OOP techniques can be used. OOP is seen as one way to design a program, not the only way."

https://nim-lang.org/docs/tut2.html


The lack of "proper" OOP is one of the things I really like about nim. Incidentally it's also one of my favorite decisions in Julia.


I wonder what causes the strong European bias in the results; in general, in these surveys I'm used to seeing the USA at #1.

Beyond that, it actually looks pretty good for the language: it seems popular among experienced developers, and they seem pretty happy with it and would consider using it for serious work (although obviously the fact that they even bothered to reply to the survey probably biases that).


Of the top ten contributors on GitHub, nine are European. I think all Nim meetups/conferences have been in Europe.

Makes me wonder if the language would have more mainstream traction if it was more of a US project.


There are various papers on the topic of developer locations.

Among community-driven projects USA is very far from #1.

Instead it's #1 on corporate stuff.


I like the Nim syntax, but I’m not certain why they chose to transpile it to C.

Why not just go all the way and compile it down to assembly?

Yes, I understand their reasoning, to make use of all the advanced C compilers out there. And to make the code compatible with existing C libraries and APIs.

But I feel this is more of a shortcut, and a crutch, that will prevent them from fulfilling their ambition of being a replacement for C and other systems programming languages.

I think the better approach would have been to target assembly code instead, and then, build connections to existing C APIs.


Nim compiles to C, C++, Objective-C, and JS with the official distribution.
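Picking a backend is just a different compiler command (file name made up):

    nim c app.nim      # C backend (the default)
    nim cpp app.nim    # C++ backend
    nim objc app.nim   # Objective-C backend
    nim js app.nim     # JavaScript backend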

There are community-developed backends for native code (NLVM, which uses LLVM underneath), Lua, and Python, with various degrees of completeness.

With the limited resources available to Araq, it most definitely makes sense to leverage 30 years of GCC optimizations and/or 20 years of LLVM and/or ICC experience, while at the same time supporting basically every platform under the sun.

The Nim community is made of very pragmatic people, and I'm glad it is that way.


D doesn't compile to C code because I found C to be a rather limited platform. By generating native code, I could go beyond what C supported. The vagaries of various C compilers were also a problem.


Given that C has raw pointers and inline assembly, what is D doing that it couldn't (easily?) express?


1. cannot support anything but the C ABI

2. cannot support exception handling

3. cannot support types that C doesn't

4. stuck with C symbolic debug info, not your language's

5. cannot innovate with things like how strings are stored

6. many C compilers do not emit COMDATs

7. no thread local storage

8. chained to slow compilation speeds

9. stuck with trying to find a way to work around C compiler bugs

10. stuck with whatever C compiler the user has, i.e. one has to support multiple versions of multiple C compilers

11. C compiler can change, breaking your compiler

The original C++ compiler, cfront, had a LOT of problems with these issues.


Thanks for taking the time to respond, compilers are always interesting to learn about!

Point 7 was (finally!) addressed by C11.

Points 6, 8, 9, 10, & 11 are all good ones that hadn't occurred to me.

It seems like points 1, 4, & 5 could be worked around, but I can see where the added complexity would be unwelcome.

On the whole, I can see why you made the decision you did. However, I don't buy points 2 (exception handling) and 3 (type support). Certainly C doesn't have these things built in, but appropriate code transformations seem to suffice in practice. For example, the CHICKEN Scheme to C compiler employs continuation-passing style [1] and Embeddable Common Lisp [2] manages to compile to C without (to my knowledge) breaking either its type system or its condition and restart [3] system.

[1] https://wiki.call-cc.org/chicken-compilation-process

[2] https://www.cliki.net/ECL

[3] http://www.gigamonkeys.com/book/beyond-exception-handling-co...


2. yes, it can be done with setjmp/longjmp, but that's a terrible solution

3. if the C compiler doesn't support 80 bit reals (and D does), you're out of luck.

I looked at the C code generated by C++ cfront. No thanks.

The worst is, the C compiler vendor owns you, and they couldn't care less about you.


Inline assembly is a compiler extension and not portable.

Beyond that, sure you can do whatever you want in C if you're willing to mess with assembly and non-portable memory tricks but at this point you're better off writing your own native backend. The whole point of generating C instead of machine code is to leverage the existing compiler infrastructure to support a vast number of architectures and generate efficient code. If you just dump inline assembly in a C function or rely on opaque memory shenanigans the C compiler won't be able to meaningfully optimize your code and it probably won't be portable.


Just discovered the HN ‘past’ tab. Went back 5 years in order to find something that also appeared today. Nim was the only topic I found in a front page article today and 5 years ago. Maybe just random but I’ll give it a try and report back.


> Efficient, expressive, elegant

  let people = [
    Person(name: "John", age: 45),
    Person(name: "Kate", age: 30)
  ]
Verbosity at its finest. Compare with C++:

  Person people[] {
    { "John", 45 },
    { "Kate", 30 }
  };


This is a false comparison because you're really comparing by-order-of-argument initialization/construction ("BOA constructors") versus keyword-value initialization/construction.

BOA is more compact, but it's possible to get the arguments wrong, and the type system won't save you if you exchange two arguments of the same type.

There is no reminder anywhere that 45 and 30 are ages.

That's also true in function calls that do not use keyword arguments (the vastly prevalent paradigm in mainstream languages), but function calls specify a function name which can reveal something, like set_age(john, 45).

Some languages have both.

Not knowing anything about Nim, I have no reason to believe it lacks a compact way of initializing an array or list of objects using just positional arguments and no repetition of the type name. If nothing else, surely there has to be a way to write an array of tuples, and that aggregate can surely be passed to some function that iterates over it and constructs the implied Person objects.
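(For instance, skimming the Nim docs suggests something along these lines would do it; untested:)

    import sequtils

    type Person = object
      name: string
      age: int

    let people = [("John", 45), ("Kate", 30)].mapIt(Person(name: it[0], age: it[1]))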

On the C++ front, I seem to recall C++ recently adding C99 designated initializers or planning to. If so, then { .name = "John", .age = 45 } will be possible, and C++ coders will end up using it.


C++ designated initializers are not quite like C99 designated initializers; rather, they are a subset.

For example, just like initialization lists on constructors, initialization order is relevant.


Order is unspecified in C initializers, including designated ones. That quickly becomes relevant when the program depends on it for expected behavior.


Are we reading the same code? The Nim code is way simpler.

Maybe if you'd used auto (if that's possible - now I understand that auto is out of fashion due to sometimes forcing copies) it'd be closer.

`let people = [`

vs

`Person people[] {`

Just for starters...


ah, syntax debate. * goes back to lisp


I am going to give you the benefit of the doubt that this is a joke.


You can use tuples in Nim and it would be almost identical.
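Right -- e.g. something like this, if memory serves (untested):

    type Person = tuple[name: string, age: int]

    let people: array[2, Person] = [
      ("John", 45),
      ("Kate", 30)
    ]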



