Nim is an amazing language. The syntax is cleaner (IMO) and easier to read than Go's, and approaches Python in its readability, which is impressive for a statically typed, compiled language[0]. The design focuses on performance above all else, but it still has metaprogramming[1] and functional features.
However, Go and other languages have a huge ecosystem and many more libraries. Nim only has a few web servers/frameworks, for example. Even if Nim's web frameworks/servers (like httpbeast) are quite fast[2], they lack the completeness that exists in other languages.
Until then, if you are looking for a systems programming language, you owe it to yourself to investigate Nim[3], alongside Go, Crystal, Julia, D, Rust, Haskell, etc. The tooling is fantastic and the Nim compiler produces clean, cross-platform C code (or JS[4]!) that can be automatically fed into gcc, clang, mingw-w64, etc. It's a language that's undergoing rapid changes, but almost all of the changes are in the libraries and it's exciting to see all of the innovation there -- and as libraries increase and mature, it will become a really compelling application language as well.
The community is extremely active, and issues are promptly dealt with. For HNers, it's an opportunity to still make a huge difference by contributing to a relatively young language, compared to getting drowned out by all the noise in a more mature language community.
Honestly, I think Nim is more compelling as an application language than as a systems language. I know there are plenty of people who care about Nim in the embedded space and are improving the experience of GC-less Nim. But for me, Nim with GC strikes a nice middle ground between speed of development and speed of execution. I can hack a project together quickly and get a native binary that executes much faster than a scripting language. It's definitely my favorite language for hobby projects, and I'm looking forward to the 1.0 release.
Couldn't agree with you more here. I personally use Nim as an application language too, and it will take a lot of convincing to make me let go of the precious GC :)
> Until then, if you are looking for a systems programming language, you owe it to yourself to investigate Nim[3], alongside Go, Crystal, Julia, D, Rust, Haskell, etc.
Go, Crystal, Julia, Haskell, and Nim all have a runtime that sort of precludes their use as a true systems lang. I agree with your other points, and I like Nim; I wish it were more popular, but I can't convince myself it's worth the time investment to learn it. I already know Rust and Haskell, and between those two there isn't much space where Nim would be a good fit that the others aren't.
It's great that there are a few projects that enable writing software for embedded devices in some languages you don't usually see in the space, but projects like this do not suddenly make the lang a 'systems language'. You wouldn't want to write an OS in Java or C# or any language with a GC & runtime really.
Nerves is also a wonderful project, enabling the use of Elixir for IoT devices, but nobody would claim Elixir is a systems lang.
Your argument that a systems language must be memory-unsafe reflects only current practice, not the science; the research community would much rather have memory safety in systems software.
Go is a systems language. The term isn't confined to device drivers. It includes platform/backend software such as web servers and things like message queues or the Docker ecosystem.
Real time programming requires you to be able to make hard guarantees about how long a certain piece of code can run and what resources it uses.
This is basically impossible with a GC, because you can bound only one resource (time or memory) at a time.
The best RTS languages are a pain in the ass to use because the compiler will complain about every single case that could go wrong with your code.
They are used in mission-critical (in every sense of the word) systems like spaceships, where a tiny bit of lag will send the craft thousands of miles in the wrong direction.
Of those links you provided, one is a draft for a spec that isn't even implemented yet, and the other has nothing to do with real-time languages.
Maybe it would be easier for both of us if you wrote the definition for real-time or systems programming, rather than just saying something isn't one or the other.
Look, a good systems language should compile into predictable assembly instructions. You should literally be able to map by eye from code to asm and jump back and forth in a systems-level language. GC-enabled languages don't have this property, because the assembly is littered with GC code cleaning things up; you won't be able to map back and forth.
You will note that Rust advertises itself as a systems-level language, and it is deliberately designed around the term "zero-cost abstraction". All this means is predictable asm from code; no black magic. Rust is advertised as systems-level, and this is the property that enables it. I hope that clarifies what it means to be systems-level.
I truly believe that you are completely wrong and that only a few languages nowadays are systems languages, all without a GC: C, C++, D (when used without the GC), and Rust are examples. None of the other languages you listed are.
Hence, the word means different things to different people. So there is no wrong answer, or at least it exists on a spectrum, and everyone complaining about the original comment's word choice is just stroking their own ego.
> For HNers, it's an opportunity to still make a huge difference by contributing to a relatively young language
It would be better for someone to work on an alternate, Python-like input syntax (focusing on readability and good intuition, and perhaps more attractive to novice programmers) for some established language, like Rust. Working on a "young" language, you just miss the chance of contributing to an ecosystem that's already been in development for quite some time, and where efforts aren't going to be left stranded as the bulk of the dev community chooses to go for something else.
> for some established language, like Rust. Working on a "young" language, you just miss the chance of contributing to an ecosystem that's already been in development for quite some time
Would you still make this point if you were comparing...
for example, Rust and C++?
... where Rust is the "young" language? Working on such a young language (and, FWIW, Rust is younger than Nim), you might miss the chance of contributing to an ecosystem that's already been in development for quite some time.
Not every language grows up with a silver spoon from Mozilla or Google.
Given that C++ is widely regarded as having unfixable problems (and even the ISO-C++ community is now basically admitting this, with the C++ Core Guidelines being nothing more than a somewhat pointless band-aid), yes I would. If C++ was fixable, Rust would not exist in the first place. (Same goes for e.g. Ada btw - if you could simply fix both the clear lack of openness in the available Ada toolchains, and its lacking anything comparable to the Rust borrow checker, Rust would also not need to exist.)
I can tell why Nim was created - there is a somewhat widely felt need for a systems language (Nim is clearly targeting C/C++ compatibility) with a more Pythonic input syntax! But it's far from clear that Nim itself as it exists today is a sensible answer to these issues.
There are things about Nim, like the thread-local garbage-collected heaps, that you can't really replicate in Rust. Sometimes you want a garbage-collected language because it makes your life easier. Sometimes you don't, because you're writing a library, or doing realtime work, or you want to optimize the hell out of your code.
Pluggable GC heaps or the like will likely become possible in Rust at some point. We can already see some of the groundwork being laid, e.g. with support for things like custom, user-specified allocators at a local rather than whole-program level (this is not yet in Rust, but definitely in the pipeline!).
It's just a bit silly to couple one's choice of language/ecosystem/etc. to a single memory-management strategy, and the Python/Nim-like choice of obligate GC as an extension to obligate reference counting also seems a bit puzzling. That's basically taking every sort of automated memory management under the sun and compounding their disadvantages - I'd think one can do better than that, in fact even Go or Ocaml do better than that!
I'd say there isn't really a better general option. Some people may want to target the largest number of people right out of the gate; some people might want to be a big fish in a small (but growing) pond. Some projects might have so many (possibly non-core) dependencies that it's too hard to start from scratch without already-existing mature tools; other projects you might want to start from scratch anyway, so you might as well use a more modern language that will give them some edge.
I think the only globally optimal option in hobby open-source projects is doing whatever seems more fun to you, and for many people that means using the language that no one uses (or making your own language that no one uses).
To add another existing implementation of this, Naughty Dog's Lisp-based assembler could do hot code reloading, and that workflow was extensively used for developing Jak & Daxter. Plus, obviously any JIT'd language can do this but it's fair not to count that. Still, that's just splitting hairs and it's really exciting to see this in Nim. It's a great language and it's a shame that it doesn't seem to have found a real niche yet.
There are many game engines that do some form of hot code reloading of the game scripting code (as opposed to engine code). Unity supports it for C#, for instance (though it's finicky; you have to program with hot reloading in mind).
IMHO - you should learn Common Lisp - it solved the problems described in the talk (Nim can't redefine types on the fly) decades ago. In Common Lisp you can redefine _everything_ on the fly. You can change types (classes) in a running system and objects in memory that have the layout of the old class get upgraded automatically when they are used. Programming with Common Lisp in Slime/emacs is an amazing experience that will spoil you for other languages.
I see that at this point in time Smalltalk and Lisp machines have become completely memory-holed (oh wait, Smalltalk & Lisp are interactive, thus "interpreted", thus obviously not compiled! I'll show myself out)
Smalltalk is a family of dialects and implementations. Most of them are indeed compiled to bytecode which is then executed by a JIT VM. Most of them use the "image" concept where you are always in runtime like in a classic Lisp, and yes, the VM handles shape changes etc etc. But some Smalltalks actually have other characteristics, like non JIT or even compiling via C (Smalltalk/X does that I believe).
+1 Nim is a really nice and simple language: extremely easy FFI, and Python-like. For me, using Nim and Julia to write small "scripts" for my data science (student) work (processing 10-20 GB of data) makes me feel quite productive compared to something like Go (too `ugly` for me) or Python (too slow).
My general sense is that Nim is most competitive as an alternative to Python and/or things like Julia, in that it's as expressive and easy to understand as those, but has performance closer to C or Rust than something like Python. Julia is similar to Nim but I think Nim seems cleaner in its implementation overall, and targets much more general use scenarios.
I've been really impressed by Nim. There are some little things that have irritated me, like case insensitivity, but even so I wish it got more traction in the community. Right now the only thing it doesn't seem to have going for it is library support. For the numerical applications I do I'd prefer it over everything else, except that everything else has huge resource bases, so something that's already packaged in those languages would have to be done from scratch in Nim, which isn't feasible.
Can't recommend this enough: Nim is a very low-friction language with performance in the C/C++ range that also offers introspection and metaprogramming via a very powerful macro system.
Whenever I've been working in Nim, going back to other languages is always - codewise - a letdown.
Alas, I often must go back, almost always for the same reason: The lack of libraries (and || or) attendant documentation. One doesn't always have the time, inclination, or ability to work everything up from scratch.
Otherwise, it's such an overwhelmingly pleasing toolset to work with. Clean code, ultrafast compilation producing rock-solid executables as small and tight as you care to make them. Joy all around.
Lisp did it in 1962: it had a compiler, and the implementation could mix interpreted and compiled code.
One takes an interpreted function, which can be defined at runtime, compiles it to assembler, and has the assembler generate machine code in binary program space. The Lisp system then notes that this is now a compiled function.
> The LISP Compiler is a program written in LISP that translates S-expression definitions of functions into machine language subroutines. It is an optional feature that makes programs run many times faster than they would if they were to be interpreted at run time by the interpreter.
> When the compiler is called upon to compile a function, it looks for an EXPR or FEXPR on the property list of the function name. The compiler then translates this S-expression into an S-expression that represents a subroutine in the LISP Assembly Language (LAP). LAP then proceeds to assemble this program into binary program space. Thus an EXPR, or an FEXPR, has been changed to a SUBR or an FSUBR, respectively.
...
> 1. It is not necessary to compile all of the functions that are used in a particular run. The interpreter is designed to link with compiled functions. Compiled functions that use interpreted functions will call the interpreter to evaluate these at run time.
> 2. The order in which functions are compiled is of no significance. It is not even necessary to have all of the functions defined until they are actually used at run time. (Specialforms are an exception to this rule. They must be defined before any function that calls them is compiled. )
It's in the CL standard, and compliant implementations support it. SBCL, LispWorks, Allegro, CCL, and others all do AOT native compilation, from the REPL and via eval (which should be avoided, in any case). ABCL does AOT compilation to JVM bytecode, and Clisp to its own bytecode.
It's a cool feature for Nim to have, but claiming to be the first makes them look bad.
Several Lisp implementations even do similar things which were shown in the video: compiling Lisp to C/C++, then calling the C or C++ compiler, then loading the compiled code into a running Lisp, replacing/extending code.
For example ECL. The maintainer of ECL also showed how to write inline C in a Lisp function and compile/load that in a REPL - using only a couple of lines of code - basically a C REPL inside Lisp.
I remember using "edit and continue" in Visual C++ 6, seems that this feature still exists in newer versions[1], but I don't know what limitations it has.
That's a great doc page. In the "Supported Code Changes" for C++ and C# it has a helpful list of what isn't supported too. Java also has the feature in the JVM (IDEs / mvn-hotswap plugin can hook into it), but it's similarly limited. There's a proprietary java agent called JRebel that significantly expands the support (https://jrebel.com/software/jrebel/features/comparison-matri...), and a couple open source versions I haven't tried, but it's still got some limitations.
It's worth comparing with Common Lisp, which thought about this feature a lot more and built it into the language rather than having it hacked in by external tools. E.g. if you redefine a class in Java, say to add a new storage field, you can't with default hotswap; with JRebel you can, but any existing objects will still point to the old class's code and the new field won't be available. Common Lisp defines a generic function `update-instance-for-redefined-class` that you extend before you make the class edit, and then your existing objects will work with the new code.
I had been exploring Lisp (in particular, Franz's Allegro Lisp and LispWorks) some years ago, and around the same time came across the "edit and continue" feature in Visual C++ / Visual Studio (it may have been version 6 or another). IIRC I blogged or tweeted about both, and speculated that the VC++ feature may have been inspired by something similar in Lisp.
I hope it'll mature more this year, at least for the JavaScript backend, so that I can write Nim for the frontend too (with bindings for popular JS frameworks like React, Vue, ...).
Educate me if I am wrong, but while there are natively compiled Erlang modules, there is no longer any natively compiled Erlang. It only runs on the BEAM, and ever since BEAM/C fell out of use in OTP R5, no native code is generated.
BEAM is still present but HiPE interfaces with it using some specific ABI that allows mixing AOT compiled native modules (just machine code) and the BEAM register machine. I'm not sure if there are new caveats as I regard HiPE as a compatibility feature rather than something under active development (I may be wrong but it seems to fail to compile all of OTP's Erlang code now).
The overhead of that context switch is a bit high but it allows the code loading facilities of the BEAM to reload HiPE compiled modules. This works because all processes yield to the scheduler, which acts as a kind of code-swap safe-point. The usual module-local vs module-remote call rules apply here when old versions are purged.
Natively compiled modules are called NIFs, or Native Implemented Functions. These are functions written in a compiled language (usually C, but there are other bindings, like Rust). The downside is that a NIF is kind of a black box, as it can't easily call back to native Erlang functions without some RPC hackery. They're great if you need to do something that just returns a value. Typically, they are used for high computational speed or for accessing things like hardware not available through native interfaces. See the Erlang interop page for more details: http://erlang.org/doc/tutorial/introduction.html
I'm curious, what more are you looking for? From the nim manual:
"Each thread has its own (garbage collected) heap and sharing of memory is restricted to global variables. This helps to prevent race conditions. GC efficiency is improved quite a lot, because the GC never has to stop other threads and see what they reference. Memory allocation requires no lock at all! This design easily scales to massive multicore processors that are becoming the norm."
To me, that sounds perfect for writing typical apps.
How does Nim do per-thread GC if an object is shared across threads? Does it simply not allow sharing references cross-thread (not sure how it would enforce that without lifetimes, though)?
You communicate via channels, like in Go. Objects passed through a channel are deep-copied. To share an object, you can create an actor thread that owns it, or, if you're sure the object won't be GCed (e.g. by pinning it with `GC_ref`), you can pass a raw pointer through the channel.
If I understand the Nim manual correctly, there is no sharing of objects across threads. Instead, Nim assumes message passing. When you communicate between threads, Nim makes a deep copy of the message.
In theory, that strategy could be very inefficient if the messages are large, but it also means the garbage collector is unaware of threads and therefore simpler.
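The copy-per-message model can be mimicked outside Nim. As a toy Python sketch (not Nim's actual implementation), serializing each message the way multiprocessing queues pickle their payloads gives the receiver an independent copy, so mutations on one side are invisible to the other:

```python
import pickle

def send(obj):
    """Simulate a deep-copying channel: the receiver gets an
    independent copy, as if the object were pickled onto a queue."""
    return pickle.loads(pickle.dumps(obj))

original = {"samples": [1, 2, 3]}
received = send(original)      # "other thread" gets its own copy
original["samples"].append(4)  # sender mutates its object afterwards

print(received["samples"])  # [1, 2, 3] -- the copy is unaffected
```

This makes the trade-off above concrete: no shared state to trace across threads, at the cost of copying each message in full.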
What's the situation with the TODOs mentioned in the presentation? Primarily, the missing standard library support cleanup ("Currently no real-world project can be built with HCR")?
+1 for Nim. I'm dreaming of some day seeing a RAD where Lazarus generates Nim code with full LCL support; that would be a truly killer dev system for desktop apps. Lazarus is already a wonderful RAD; one can build native apps on a *Pi board and run them directly on that target. Nim support would make it even greater.
Admittedly there's not much in it on the technical side beyond new language constructs etc., but the main benefit would be ending for good the wrong-headed argument that "Pascal is old, therefore Lazarus sucks".
Some 20 years ago I wrote network code in Delphi that would send and receive binary data between machines of different endianness and word size, so I had to align the data and manage it down to bitfields in the Pascal equivalent of C unions. That is, the language offers enough bullets and rope to hang yourself and then shoot yourself in the foot, just like C. But every time I talked with someone about Delphi, and later Lazarus, it was just a matter of minutes before the inevitable "yeah, but Pascal is old".
I only skimmed the video, but I couldn't see him describing it doing type reloading. Perhaps that is what he is planning on implementing. It is very hard to implement (especially if you can't control the compiler) but very useful for general code reloading. Of course everything is relative, and even if I think it is "very hard" it might be a weekend project for someone else. Generally though, code reloading is one of the hardest things to implement in a vm.
The holy grail of code reloading is to upgrade the code of a HTTP server while it is running and without disturbing any requests being processed. Very few languages except for Erlang are able to do that correctly. Some languages claim to support that, but when you experiment with them you discover "quirks" making it impossible in practice.
If you put your mind to it just a bit (i.e., I wouldn't call this "automatic" exactly but it's not too hard to use the support the language, runtime, and libraries have), Erlang can even do a harder thing, which is update the code handling a given socket connection or something live, in a principled manner, while never dropping the connection.
Since HTTP is transient, it's actually a bit easier than a raw socket since you can expect it to go away soon, or even in the case of HTTP2, you can often expect to just close a socket as long as it's not currently active and get away with it. Many languages can smoothly upgrade an HTTP server by handing off a listening socket to a new process with new code. But even that won't save you for live sockets, because even if you hand off the socket, you haven't got a clean mechanism for handing off its accompanying state.
Several interpreted languages can sort of do this, but I'd call it in an "unprincipled" manner by just slamming new code in place and hoping for the best. Erlang explicitly upgrades the gen_* instances and you can provide a function for converting the old state to the new state cleanly.
> Erlang can even do a harder thing, which is update the code handling a given socket connection or something live, in a principled manner, while never dropping the connection.
That's what I meant. :) Hot reloading when nothing is "in flight" isn't so hard. The Erlang the Movie example, hot-fixing a PBX without disturbing phone calls in progress (real time requirements!), is really hard.
Ah, my apologies, because when you said "The holy grail of code reloading is to upgrade the code of a HTTP server while it is running and without disturbing any requests being processed." I thought you were referring to the ability of the BEAM VM to also have multiple versions of the same gen_* running, so that old requests can still be running on the original code while the new requests come in on the new code. This is in contrast to the things that came to mind like Python, which technically can replace a function reference live, but there's no such ability to partition who's in the old vs. new space like that, so it's too dangerous to be practically used (or at least not without a lot more supporting code).
> Erlang can even do a harder thing, which is update the code handling a given socket connection or something live, in a principled manner, while never dropping the connection.
I imagine that runtime type errors are what make this possible.
> The holy grail of code reloading is to upgrade the code of a HTTP server while it is running and without disturbing any requests being processed. Very few languages except for Erlang are able to do that correctly. Some languages claim to support that, but when you experiment with them you discover "quirks" making it impossible in practice.
You can kind of do it with node.js, but man do things get ugly fast when you manage state/connections in modules on reload.
I can't tell from the video if the hot code reloading is meant for production workflows or dev only workflows. For example being able to use HCR to update a long-lived TCP server (like a video streaming server) without disconnecting clients.
It seems like it forces pointers for everything, which seems to me will break a lot of optimizations (like inlining).
Edit: I guess production use is limited anyway since you can't add, modify, or remove types with HCR yet.
Edit 2: I just saw in the video that they said it's 2x slower, so it definitely can't be used for a lot of production workloads. Still useful, though.
Loading and unloading dynamic libraries (dlopen in Unix, LoadLibrary in Windows ...) and operating system kernel modules are also examples of hot code reloading in C.
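That mechanism is easy to poke at from Python, whose ctypes module wraps dlopen()/dlsym() on Unix-like systems. A minimal sketch (assuming a Unix system where libm can be found; the `libm.so.6` fallback is a Linux-specific guess):

```python
import ctypes
import ctypes.util

# Locate and load the math library at runtime; on Linux this goes
# through dlopen() under the hood.
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path or "libm.so.6")

# dlsym() resolves the symbol; we must declare the C signature ourselves.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0
```

The same loader machinery is what lets a C program swap implementations at runtime: dlclose() the old handle, dlopen() a rebuilt .so, and re-resolve the symbols.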
Can someone give me some pointers to reading material on this?
I'm aware of hot code reloading for interpreted languages (Google just gives links about web page loading), and I'm also aware of it at the OS level. I don't really understand why you'd want this in a compiled language.
Because for an interpreted language you can use it for interactive development. I had thought different reasons would apply to a compiled language but judging by the other replies obviously not!
Interpreters tend to have repls, which tends to influence how development happens. I'm not stating an absolute law though.
I think you may be misinterpreting my original question. It was asked out of ignorance, it wasn't a challenge. I was hypothesising some kind of self modifying code, or realtime adaptive optimising or something. I was attempting to show you my current level of knowledge so you could ELI7 instead of 5 :)
It's mostly just that the feature is always useful (that's why it's present so often in interpreted languages), but muuuuuch harder to do for non-interpreted languages. Notably, a subsequent goal for the Nim feature is indeed to add a full-blown REPL to Nim too, I believe. Also, it's worth noting that some non-interpreted languages do have REPLs, e.g. Haskell, OCaml, and even C++ (see Cling!).
There are exceptions the other way: Awk is interpreted but doesn't, as far as I'm aware, have a REPL, and Sed doesn't either. But then both of those have varying implementations, so there could be exceptions to the exceptions.
The very term REPL comes from a language that has been compiled to native code since the 1960s: Lisp.
In fact, "compiled" programs on many platforms require an interpreter to set them up by dynamic code loading (including modifications if needed).
In the ELF header this is called the "program interpreter": a path to a program that understands this particular file and can assemble a running image in memory.
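That PT_INTERP entry can be read straight out of the binary. Here's a minimal sketch in Python (assuming a 64-bit little-endian ELF on a Linux box; offsets are from the ELF-64 layout):

```python
import struct

PT_INTERP = 3  # program header type for the interpreter path

def program_interpreter(path):
    """Return the PT_INTERP string of a 64-bit little-endian ELF file,
    e.g. the dynamic loader path, or None if statically linked."""
    with open(path, "rb") as f:
        data = f.read()
    assert data[:4] == b"\x7fELF", "not an ELF file"
    assert data[4] == 2, "only 64-bit ELF handled in this sketch"
    # e_phoff, e_phentsize, e_phnum from the ELF header
    e_phoff = struct.unpack_from("<Q", data, 0x20)[0]
    e_phentsize = struct.unpack_from("<H", data, 0x36)[0]
    e_phnum = struct.unpack_from("<H", data, 0x38)[0]
    for i in range(e_phnum):
        off = e_phoff + i * e_phentsize
        p_type = struct.unpack_from("<I", data, off)[0]
        if p_type == PT_INTERP:
            p_offset = struct.unpack_from("<Q", data, off + 0x08)[0]
            p_filesz = struct.unpack_from("<Q", data, off + 0x20)[0]
            return data[p_offset:p_offset + p_filesz].rstrip(b"\0").decode()
    return None

print(program_interpreter("/bin/sh"))
```

On a typical Linux system this prints something like the ld-linux loader path, which is the "interpreter" that maps segments, applies relocations, and loads shared libraries before the "compiled" program runs.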
0. https://nim-lang.org/docs/tut1.html
1. https://en.wikipedia.org/wiki/Metaprogramming
2. https://www.techempower.com/benchmarks/
3. https://nim-lang.org/features.html
4. https://picheta.me/snake/