Nimrod by Example (nimrod-by-example.github.io)
171 points by def- on July 17, 2014 | 71 comments



Sighs. I wish Nimrod were more popular. It seems like a better version of D. But there can only be so many languages with a fully functioning ecosystem, and I just doubt Nimrod will ever get there. It seems to be a one-man shop at this point (as this demonstrates):

https://github.com/Araq/Nimrod/graphs/contributors


I am always amazed how quickly I get things done with Nimrod. It's like writing Python, having the full power of C (see below), and some of the power of Lisp (macros!), and getting the performance of C++. Nimrod actually is what I wanted Python to be.

For instance, I just need to add two lines to import C's printf and fprintf:

  proc printf(frmt: cstring) {.importc: "printf", nodecl, varargs, tags: [FWriteIO].}
  proc fprintf(f: TFile, frmt: cstring) {.importc: "fprintf", nodecl, varargs, tags: [FWriteIO].}

Now I can use these C functions seamlessly (even without parentheses):

  printf "Hello %s!\n", "world"
  fprintf stderr, "Error in line %d\n", nr


    from ctypes import *
    libc = CDLL("libc.so.6")
    libc.printf("Hello %s!\n", "world")
    from sys import stderr
    # fprintf wants a FILE*, not a file descriptor, so use dprintf with the fd
    libc.dprintf(stderr.fileno(), "Error in line %d\n", nr)
although I don't know why in the world you would want to use printf instead of native string formatting


I presume the printf was just to show Nimrod's easy C interop.


This. I'm on the lookout for expressive, statically typed, native-code-compiling languages for game development. Currently I'm poking around with Rust, since Go seems to have pretty much settled on not doing the custom-expressibility thing. Nimrod looks like something I'd really want to like, and the game stuff I do doesn't really need Rust's hardcore garbage-collection avoidance, but the ecosystem sparsity just scares me off. Meanwhile, Rust has both reasonably heavy institutional support and an impressive swarm of adventurous game developers with a C++ background working on stuff for it. I'm banking on Rust mostly for its potential ecosystem strength as the only alternative to C++ for a high-abstraction, don't-pay-for-what-you-don't-use language.


The standard library is pretty nice though: http://nimrod-lang.org/lib.html

I spent a few days solving Rosetta Code tasks and was surprised how easy most were using the existing libraries: http://rosettacode.org/wiki/Category:Nimrod


We also already have a pretty functional package manager (https://github.com/nimrod-code/babel) and many packages already make use of it: http://nimrod-lang.org/lib.html#babel


I see the Babel doco has now been changed (10 hrs ago - any relation to this HN post?) to say I can run Babel against Nimrod 0.9.4 stable instead of needing to build Nimrod from source. This is great news; I was put off by the language's package manager not working against the stable release version. I look forward to giving this a try.


It's great news indeed. It seems I underestimated how many people would have trouble building Babel because of this. Next time I will do my best to make sure that it works with the latest Nimrod release.


I will say Nimrod is quite literally the easiest language I've ever compiled from source, aside from Node.js! Super easy :)


There really are only three ways to create a usable language, honestly:

1) Huge institutional support. See: C#, Java, Go.

2) Give it 20 years. See: Python, Haskell.

3) Weird, idiosyncratic factors. See: Javascript, mostly.

So it's hard for me to get excited about these sorts of things.

Edit: fair point, Javascript is really a case of #1, with Netscape as the institution.


You're missing 4) piggybacking on existing ecosystems. E.g. Scala, Clojure, F#.


In Nimrod it's easy to piggyback on C, which has the most extensive ecosystem of all languages.


But C's ecosystem is so extensive that everybody can piggyback on it (with varying levels of ceremony).


See CPython, Haskell, C++....


This might already exist, but if Nimrod could piggyback on Javascript (Node), Python and Ruby for libraries, this would be huge. Native glue.


This is what Perl 6 with the Parrot VM aims for.

http://en.wikipedia.org/wiki/Parrot_virtual_machine


I don't think that we can really consider Perl 6 and Parrot to be anything but abysmal failures at this point. The people developing and advocating for them have had many years now to produce something that's even minimally usable, and we just haven't seen that happen.

Writing traditional compilers and interpreters, or emitting C, or targeting LLVM have all proven to be good ways of getting production-grade programming language implementations created and usable quickly. Messing around with Parrot has never resulted in anything useful.


That was the goal, but Parrot failed to meet that goal and P6 is years away from being usable.


How about Perl, Ruby and C?

pg said that for a language to become popular, it has to be the scripting language of something popular. That's the category the three languages I mentioned fall into.

So the answer is simple: one has to build something really nice in Nimrod and use Nimrod as the extension language of that thing. Maybe the embedded world would be a nice field for Nimrod, given its nice interop with C? (I haven't used Nimrod myself yet, although I am tempted because it looks very easy.)


Perl is a (3) as it was the scripting language on *Nix, and CGI was pretty crucial in the early web.

Ruby I guess is a (3): Rails is a killer app.

C, well, it's a major accomplishment of human civilization and has held up for half a century. I'd call that idiosyncratic.


I feel like Lua is used pretty broadly for UI scripting in the games ecosystem (addons, plugins, etc) but I don't see it used really anywhere else aside from maybe IRC chatbots. Being the scripting language for something popular is maybe a necessary but probably not sufficient condition.


C had AT&T behind it and was the UNIX systems programming language.

As UNIX spread into the industry, so did C.

Any language that an OS vendor requires as the official way to target its OS succeeds in the market if the OS succeeds.


True, Objective-C comes to mind.


I think it's only the first two. Javascript happened because of Netscape (1).


If you're in game development, you might want to take a look at Haxe. It seems many people use it successfully for games.


OCaml?


It is getting there, as can be seen from the contributors graph in that link. Aside from the fact that it's mostly Araq, the graph looks very healthy and encouraging. I actually just started using it for my next biggest project, after a week or so of looking into Haskell, OCaml, and Lua.


I use NodeJS quite a bit, which has a massive ecosystem, but recently (and especially after TJ Holowaychuk's announcement about moving to Go) I've been taking a fresh look at these new-fangled systems languages, including Go, Rust and Nimrod, plus Apple Swift. If I'm going to tool and skill up in a new systems language, I want not just performance and nice syntax but also reach. Of the three, Nimrod is the only one with a convincing story for running on the main consumer platforms (iOS, Android, Windows), server platforms (Linux, *NIX) and embedded systems - because it compiles to C or Objective C (or JS for that matter!). That strikes me as a neat approach. Whilst there are attempts to get Rust and Go compiling for iOS, they don't seem very advanced.

The small ecosystem is definitely a concern, but could also be an opportunity for a mid-sized IT corp to step into the ring with the big hitters and all their shiny new systems languages by backing this project.


I do think Nimrod will catch on in a little while. It has amazing potential. But things like type classes aren't done yet.

Personally I believe Nimrod will be perfect as a Hardware Description Language. There's a couple of features missing, but they're currently being worked on. This is an area where Nimrod could really shine, and where it could start to gain massive commercial support.


At least in the devel branch type classes seem to be working:

  type Comparable = generic x, y
    (x < y) is bool

This creates a Comparable typeclass, which can then be instantiated:

  type Foo = tuple[a, b: int]

  proc `<`(x, y: Foo): bool =
    if    x.a < y.a: true
    elif  x.a > y.a: false
    else: x.b < y.b

  var c: Foo = (12, 13)
  var d: Foo = (14, 15)
  echo c < d # true

And used elsewhere in generic code:

  proc min[T: Comparable](xs: openarray[T]): T =
    assert xs.len > 0
    result = xs[0]
    for x in xs:
      if x < result:
        result = x

  echo min([c,d]) # (a: 12, b: 13)


Could you elaborate on why you think it would be a good HDL?


Three reasons: dependent typing (limited support, but enough to represent statically sized vectors), a powerful macro system (functions don't work too well as an abstraction in HDL; macros let us create new kinds of abstractions, like state machines and pipelines), and generics/type classes.

The only other language I know with a similar amount of power is Idris (but in a very different way). But Nimrod has another advantage: friendly syntax and semantics. HDL programmers are not generally good at programming, so I don't think Haskell- or Idris-based languages will catch on.
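To make the "statically sized vectors" point concrete, here is roughly what I have in mind (a sketch assuming a recent devel build; static[int] support is still limited, so this may hit rough edges):

  type Vec[N: static[int]] = array[0..N-1, float]

  proc dot[N: static[int]](a, b: Vec[N]): float =
    # the length is part of the type, so mismatched vectors are a compile error
    for i in 0 .. N-1:
      result += a[i] * b[i]

  var x: Vec[3] = [1.0, 2.0, 3.0]
  var y: Vec[3] = [4.0, 5.0, 6.0]
  echo dot(x, y)  # 32.0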


I worked on an HDL based on Haskell as an undergraduate project[1], and I'm about to start a PhD exploring the design of HDLs.

>dependent typing

I definitely agree with this. When you're writing a program to generate code, the distinction between compile time and runtime is essentially meaningless, so the boundary between types and values in Haskell ends up being a huge hassle for no good reason. It's also hard to explain the difference between numeric values and numeric types, when there doesn't need to be one in the first place.

I didn't realise Nimrod had this feature, so I might need to take another look at it.

>functions don't work too well as an abstraction in HDL

I'm not sure why you'd say that? Functions work great for abstracting chunks of combinational logic. Arguably one would prefer a more constrained abstraction though.

> friendly syntax and semantics ... I don't think Haskell- or Idris-based languages will catch on.

Bluespec took the approach of modifying Haskell syntax to look like Verilog. It's mostly quite effective, but I think they went slightly too far in throwing out some useful syntax features. Most notably, there are no lambda expressions in Bluespec, even though it could support them trivially.

>HDL programmers are not generally good at programming

I think this is a chicken and egg problem. HDLs suck, so there's not really any good practice to learn. Also competent programmers who can see the deficiencies tend to avoid the field.

I'm working with Bluespec in a team of mixed EE and CS backgrounds, and the EEs are learning much better programming practices as a result.

[1] https://github.com/aninhumer/mantle


> I didn't realise Nimrod had this feature, so I might need to take another look at it.

Yeah, it has it to a certain extent. But it's a minefield at the moment, and it will never be as good as Idris, for instance. But I'm sure it will be good enough for bit vectors.

> I'm not sure why you'd say that? ...

You're right, I was thinking procedures but wrote functions. A pure function is of course an excellent abstraction of combinatorial logic. A procedure could potentially abstract sequential logic as well, with some restrictions/modifications, but you really want other abstractions.

> Bluespec took the approach of modifying Haskell syntax to look like Verilog.

I'm very suspicious of this approach. If you expect Verilog but get something quite different, it could cause frustration. But I haven't tried Bluespec, so I shouldn't criticize too much. I suppose it's the only way they had a hope of getting more adoption.


Why does it seem better than D to you?


Anyone know the recommended way to do interop with C and Nimrod? I've been doing Project Euler problems in Nimrod, and wanted to use some library functions from GSL for bignum support, but I couldn't really get c2nim to work with the header files.

I think that I could use the 'importc'/'dynlib' pragmas with the compiled library, but there's not a whole lot of documentation on that in the Nimrod manual, and I ended up just doing that problem in C because it was easier (read: I was lazier) to get working.


For general C/Nimrod interop see these Rosetta Code tasks:

http://rosettacode.org/wiki/Call_a_foreign-language_function...

http://rosettacode.org/wiki/Call_a_function_in_a_shared_libr...

http://rosettacode.org/wiki/Use_another_language_to_call_a_f...

Making c2nim work with header files can be a bit difficult. The main information is here: http://nimrod-lang.org/c2nim.html

When something throws an error I comment it out and try to see if I can fix it up by hand.
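For the importc/dynlib route specifically, a minimal sketch looks something like this (using libm's cos purely for illustration; the library name assumes a typical Linux box):

  # loads the symbol from the shared library at runtime; no header needed
  proc c_cos(x: cdouble): cdouble {.cdecl, importc: "cos", dynlib: "libm.so.6".}

  echo c_cos(0.0)  # cosine of 0.0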


To the curious - I got this working (thank you def- for the rosetta code examples!). Point of correction - the library I was actually using for bignum support was GMP, not GSL. (I was getting confused because I used functions from GSL for a different problem).

Importing a standard procedure from a C library like GMP actually is pretty easy (use the importc/header pragmas; rough sketch at the end of this comment).

Here's what tripped me up:

* Calling variadic functions from Nimrod (just wrote a wrapper)

* Getting to link against gmp (used the passL: "-lgmp" pragma)

* Remembering to free what was allocated by C library functions (can't GC what you didn't allocate) - caught with valgrind

I'll probably push to my github repo, but can post to a pastebin or something if there's interest in a less cookie-cutter example
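In the meantime, here is the rough shape of the importc/header plus passL pattern described above (a hedged sketch with libm's cbrt standing in for an actual GMP call, since a full GMP wrapper also needs the mpz_t plumbing):

  {.passL: "-lm".}

  proc cbrt(x: cdouble): cdouble {.importc: "cbrt", header: "<math.h>".}

  echo cbrt(27.0)  # cube root of 27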


I never tried the c2nim thing but just used a few compiler directives I think to call C code. It was incredibly easy.


I'm afraid the memory management scheme might cause the Nimrod ecosystem to end up with too many libraries that require GC, and since the building blocks are already using it, why not use it yourself?

Either make it fully manual, or do something like Rust (no small feat). A GC you can switch on and off sounds like a good idea, but programmers don't keep half the promises they make ("It's just an MVP, I'll refactor it to use manual memory management later").


> I'm afraid the memory management scheme might cause the Nimrod ecosystem to end up with too many libraries that require GC, and since the building blocks are already using it, why not use it yourself?

And what would be the problem with that? The number of application domains where even soft real-time GC is inappropriate is very, very small. And designing freedom from garbage collection into a language (other than the simple option of allowing manual allocation/deallocation) is not cost-free, either, and can weigh down the design, too. You can optimize your language design for having garbage collection or for not having garbage collection, but if you try to do both at once, a language design that specifically favors one or the other will likely beat out the compromise solution for the case it's optimized for.

And yes, that means that Nimrod (or any other language) won't be perfect for everything under the sun. Which is fine: there's nothing wrong with having multiple programming languages, each of which is better suited for a different task.


No, I think Nimrod's approach is absolutely correct.

What I hope will happen is that once Nimrod's effects system matures, the compiler will be able to figure out when it can statically free references when they go out of scope, rather than always using the GC.

Rust's approach, I think, is entirely in the wrong direction. You should only have one kind of reference, but if you use it correctly the compiler should figure out how to free it. You should then be able to assert that a procedure (possibly "main") does not allocate any GC memory, and have the compiler tell you what part of your code violates this constraint.


What you describe is called escape analysis, and it's Go's approach (for example). But it falls down in many situations, especially when separate compilation is desired or when you have existential types (e.g. closures). To truly have no GC with safety, you need to replicate something like Rust's machinery.

"If you use it correctly the compiler will figure out how to free it" is extremely handwavy. If you try to formalize exactly what "use it correctly" means, you will likely arrive at something very similar to Rust's system.


I think the idea is that with an effect system, function signatures and existential types will have effects that say "argument X does not escape" and "argument X can only escape through call Y" (so that you can infer bottom-up). If you have that information in the interface, you can do separate compilation and existential types.

To me this seems different enough from Rust's system, and while it won't be 100%, I think it will greatly reduce escape-analysis failures, and in particular it will eradicate escape-analysis failures due to missed inlining. Whether it will be enough seems an open question to me (that is, it doesn't seem destined to fail) and worth pursuing.


Once you're at the level of "argument X only escapes via closure Y", isn't that just a lifetime system? You're manually expressing the invariant that the closure Y cannot outlive argument X.


Would the cognitive overhead of that be any less than lifetimes in Rust?


> Rust's approach, I think, is entirely in the wrong direction. You should only have one kind of reference, but if you use it correctly the compiler should figure out how to free it.

This has been an area of intense research for four or five decades now, and it's probably not going to get better than what we have now, in the general case. In the presence of general recursive data structures, the compiler can't know when each object is going to die. I'm pretty sure you can easily reduce this to the halting problem.

To date, you've had three sorts of solutions: 1) explicit malloc/free; 2) tracing GC or reference counting + various tricks to take advantage of special cases; and 3) putting additional information in the type system to help the compiler infer when objects might become free while preserving memory safety (this is the direction taken by Rust, Cyclone, MLKit, and to an extent C++ too).


I think if your code uses references in a way that does not let you guarantee memory safety without an extremely convoluted type system, you should perhaps just be using GC.

For most code I personally know of that does not play well with GC (firmware, hard real-time, etc.), you really don't want to do much allocation after initialization anyway, and the little you do can be handled by malloc/free-style manual memory management, or perhaps by escape analysis. And then I think Nimrod's approach might work well, because you could declare that everything done by a proc should be verified by escape analysis, rather than manually tagging each pointer used in that procedure.

But I recognize that Rust's approach should be extremely useful in a few applications. Then again, I wonder whether Nimrod's type system will eventually be able to implement its most useful pointer types in a library.


> So what you have is various approaches to take advantage of special cases (different forms of GC), or approaches that give the compiler more information (as in C or C++, and Rust or Cyclone or MLKit).

Pretty much. I'd also add ParaSail to the list of languages that give the compiler more information (or more precisely, restricts what is possible, and for reasons other than just memory management).


I agree in that I don't entirely buy Rust's approach, mostly because of the cognitive overhead that it puts on me, but I don't think it's plausible that future changes to the compiler (without the introduction of new constraints on the behaviour of pointers) will give us statically determined deallocation points. If it were possible, it would have been done five years ago and merged into clang!


It's trivially impossible with unrestricted aliasing. No compiler can statically predict the properties of an unrestricted arbitrary graph about which, in the limit, absolutely nothing is known until runtime.


So, because something doesn't work in some theoretical corner cases, it isn't worth bothering with at all?


Creating arbitrary graphs in memory is not a "theoretical corner case". It corresponds to, for example, Graphviz reading in a graph from user input. Or, if you want something that could technically be done with manual memory management but might as well be an unrestricted graph for all any reasonable analysis could infer, creating a doubly-linked XML/HTML DOM.


Alias analysis failure leading to escape analysis failure is the default case, not the corner case.

I think it is valuable to research what the best set of restrictions is for getting reliable escape analysis. As for reliable escape analysis in the unrestricted case, yes, I think it won't be possible and isn't worth bothering with at all.


Still, there are times when GC is fine for a particular application. In those cases, Nimrod is pretty fantastic, especially given its metaprogramming capabilities.


Also, the GC can be controlled, so you can set exactly when it is allowed to run, something like this for a game:

  GC_disable()
  while true:
    gameLogic()
    renderFrame()
    GC_step(us = leftTime)
    sleep(restTime)

http://nimrod-lang.org/gc.html


Then what happens if the GC keeps taking longer than the allowed time? Will it then not keep allocating more and more memory that will never be freed?


Likely, yes. (Welcome to real-time programming; if you don't keep track of your time-budget, you're gonna have a bad day.)

I haven't looked at Nimrod's GC, but I believe it's indeed incremental; it won't take more time than you give it, but it will leave things uncollected if you don't give it [enough] time.


In a game, detect GC pressure and schedule some slo-mo candy to use as GC cover.


I guess so. This is only viable if you have enough time for garbage collection.


Nimrod's pointers to garbage collected memory are separate from the pointers you can allocate yourself. So you could do manual memory management on the parts that really need it - say for a game in a tight loop - and leave the collector to worry about the smaller and less time-critical stuff.
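A minimal sketch of the two pointer kinds (the type and values here are just made up for illustration):

  type Enemy = object
    hp: int

  # ref: traced by the GC, freed automatically
  var tracked: ref Enemy
  new(tracked)
  tracked.hp = 100

  # ptr: untraced; allocated and freed by hand, invisible to the GC
  var raw = cast[ptr Enemy](alloc(sizeof(Enemy)))
  raw[].hp = 50
  dealloc(raw)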

Also the GC is only triggered on a memory allocation. It doesn't run in a background thread or anything like that. So if the GC fails, then the allocation of memory fails (as I understand it) which means your scenario would be caught early.


> Nimrod's pointers to garbage collected memory are separate from the pointers you can allocate yourself.

Does this mean that they use different heaps?


No, they are on the same heap since the GC isn't a moving collector.


They probably use the same heap underneath.


How does Nimrod compare to Go regarding concurrency? Does it have some libs or some language constructs? Thanks, I like how the language looks.


Nimrod's concurrency support is still evolving but it already implements a builtin thread pool: http://build.nimrod-lang.org/docs/manual.html#spawn

You may also be interested in the C#-like async await as shown in the following news article: http://nimrod-lang.org/news.html#Z2014-04-21-version-0-9-4-r...
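A rough sketch of what the spawn/FlowVar side looks like (assuming a devel build and compiling with --threads:on):

  import threadpool

  proc fib(n: int): int =
    if n < 2: result = n
    else: result = fib(n-1) + fib(n-2)

  let a = spawn fib(30)  # runs on the thread pool, yields a FlowVar[int]
  let b = spawn fib(31)
  echo ^a + ^b           # ^ blocks until each result is ready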


Is there a list of "X By Example"-style tutorials around? I learn best from these sorts of things and would love a compendium of sorts.


The HN user def- posted a link to the Nimrod section of rosettacode.org: https://news.ycombinator.com/item?id=8048939


Not exactly what you're looking for, but http://learnxinyminutes.com/ is nice.


I always wanted to go back to a C kind of language; let me wait a few more years to pick one of Go, Rust, Nimrod...



