Hacker News
What makes Nim practical? (hookrace.net)
154 points by def- on Jan 23, 2015 | 130 comments



I read the Nim manual a while ago (http://nim-lang.org/manual.html), back when it was Nimrod.

As a Python user, I loved it. Every single problem I had with Python, Nim seemed to have solved elegantly. Performance, distribution, typing... Everything looked perfect, and none of the Python expressiveness seemed to have been sacrificed.

But it was transpiled to C, and the abstraction was leaky. Every once in a while the manual would mention that certain structures were not supported, and it was clear C was the culprit. I think the most glaring example was nested functions, or something similar.

I thought to myself "this will bite me in the ass sooner or later" and gave up. Maybe it's time to try again. If they plugged the abstraction holes, this will be a killer language, with applications everywhere.


Nim isn't compiled to C in the same way that, say, Coffeescript is compiled to Javascript.

Nim's compiler converts the AST into an intermediate representation that can be compiled to several backends. The primary backend is C source code, but it also supports outputting Javascript (experimentally) or interpreting the intermediate representation à la Python.

The difficulty of developing the IR -> C transformation certainly influences the features of the language, and certain features, like tail calls, can't be implemented because Nim expresses functions as C functions for the benefit of foreign code. I wouldn't say it's a leaky abstraction though, not any more than C is a leaky abstraction over machine code.


Your example isn't exactly true. Clang and GCC both do tail call optimizations. Compiling language functions to C functions doesn't 100% preclude you from TCO. http://david.wragg.org/blog/2014/02/c-tail-calls-1.html


See also: http://www.pipeline.com/~hbaker1/CheneyMTA.html

It looks like work/research in this area has been going on for at least ~20 years.


Indeed it has. For a few more examples, see also http://www.ustream.tv/recorded/43777177, and http://www.ccs.neu.edu/racket/pubs/stackhack4.html. Pyret (http://pyret.org) uses similar stack techniques to these to simulate an arbitrarily deep stack while compiling to JS.
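A common building block behind several of these compile-to-C/JS schemes is the trampoline, sketched here in Python (names are made up for illustration; this is not how Nim or Pyret actually implement it):

```python
# A trampoline: tail calls return a zero-argument thunk instead of
# recursing, and a driver loop keeps calling thunks until a
# non-callable value comes back, so the host stack stays flat.
def trampoline(f, *args):
    result = f(*args)
    while callable(result):
        result = result()
    return result

def countdown(n):
    if n == 0:
        return "done"
    return lambda: countdown(n - 1)  # a thunk instead of a direct recursive call

print(trampoline(countdown, 100000))  # "done", with no RecursionError
```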


It will, otherwise the semantics would depend on which C compiler is available, and ANSI C doesn't require TCO.


Tail calls are pretty easy to implement in a language that compiles to C, even if the C compiler doesn't cooperate. This can be done without bothering the foreign code. For instance:

int f(int a, int b){return f(b,a);}

compiles into

int f(int a, int b){start:; int t=a; a=b; b=t; goto start;}


It doesn't make sense that a language that compiles to C can't support nested functions, just because C doesn't.

All the transpiler has to do is invent a globally unique name for each function, for instance. Of course they might not have implemented it yet, but there shouldn't be any firm reason why it can't be done.


> All the transpiler has to do is invent a globally unique name for each function, for instance.

I would expect nested functions to close over their context. Still implementable in C of course (or ghc -fvia-c wouldn't work), but not as trivially.


Procedures can appear at the top level in a module as well as inside other scopes, in which case they are called nested procs. A nested proc can access local variables from its enclosing scope and if it does so it becomes a closure.

From the manual: http://nim-lang.org/manual.html#closures


Nested functions like this?:

  proc a =
    proc b =
      proc c =
        proc d = echo "Hello World"
        d()
      c()
    b()
  a()
Works fine.


If it is indeed compiled to C and then compiled down to platform-specific code (and I haven't been interested enough in Nim to investigate), then note that, unless they are implementing functions differently, nested functions are not a standard part of C; they are a compiler extension (which GCC supports).

Not necessarily a problem, but unless implemented as something other than nested functions in C, there may be some portability issues. Worth investigating.


As unwind suggested, they just get transformed down to global functions with unique names. Here's the actual C code generated: https://gist.github.com/def-/0fe87bf1d35102c62d3b#file-nest-...


Nice. Not the most compact code I've seen, but it makes sense considering it's the output of a compiler, and can thus take advantage of optimizations from clang/gcc/what_have_you.

EDIT: Actually this raises the question of how it handles closures, since the generated code does not seem to provide for preserving the surrounding environment data. Though I may just need to dive in now that it's catching my interest.



What about variables nested within these functions then? Is scope enforced? i.e. you can reach back to variable aa in function a() from nested d() but can't reach variable dd in d() from a()?


I have no idea how it actually works, but it would be easy enough to implement by having nested functions take a parameter that's a pointer to a structure containing the variables that are in scope in the outer function.

In your example, it could be something like this:

    struct variables_in_a_that_are_visible_in_d {
        int aa;
    };
    
    void d_nested_in_a(struct variables_in_a_that_are_visible_in_d *ascope) {
        // Variable dd, not accessible from a
        int dd = 42;  
    
        // Increment a's aa
        ascope->aa += 1;
    }
    
    void a() {
        int aa = 0;
    
        // Pack up variables for d()
        struct variables_in_a_that_are_visible_in_d ford;
        ford.aa = aa;
    
        // Actually call d()
        d_nested_in_a(&ford);
    
        // Unpack after calling d()
        aa = ford.aa;
    
        // Continue with a()
        // ...
    }
    
I'm a little surprised people are so hung up on this. They don't call C "portable assembly language" for nothing. If it can be done compiling to native code or some virtual machine assembly language, it can be done compiling to C.


If you know the stack layout, you don't need to do the pack/unpack. You can define the struct to match the stack layout and pass a pointer into the stack instead.

The struct may have holes containing variables that aren't used by the nested function, but that doesn't matter. You would also have to make sure values are flushed from registers onto the stack and, depending on the ABI, explicitly push arguments passed in registers onto the stack if the nested function needs them. (The struct would then also contain a return address, but that's not a problem either.)


  proc a =
    var x = 10
    proc b = echo x
    b()
  a()

Comes pretty close, the struct is further up: https://gist.github.com/def-/c496cd42774617fd0271#file-nestc...


Right, you could do that, it was more a question about how the closures work.


I assume so? That would be a static check of the nim code.


Dang, sorry. I skimmed the manual again, but couldn't find the exact feature(s) that triggered my conclusion. Maybe they have been plugged already.

Anyway, I think I'm overdue on reevaluating the language. I can't wait to replace Python/Go/C/Java with this.


Great post. I thought I'd comment just to say that the relative URL ../what-is-special-about-nim doesn't work as intended from the homepage, and the link to http://nim-lang/ should presumably be http://nim-lang.org

Seems a shame to make such a pedantic and insubstantial comment on such an interesting article about a language I'd not come across, but thought the author might like to know.


Oops, I thought I checked all links. Thank you!


Can someone point me to an explanation of "Nim is the only language that leverages automated proof technology to perform a disjoint check for your parallel code"? This is mentioned prominently on the main page... but scanning the docs, the FAQ, Dr. Dobbs, and Googling 'nim disjoint' didn't lead me to a detailed explanation.


Disclaimer: I'm the lead designer of Nim.

pcwalton's remark is excellent but "automated proof technology" is not a well defined term. What I mean by this is that it goes beyond what a traditional type checker can do. I don't think Rust can do exactly the same things via its borrow checking and its iterators, but I might be wrong. Note that the very same analysis also proves your index bounds are correct.

Nim's disjoint checker is so experimental that its docs are indeed very terse and we only have a couple of test cases for now. That said, the disjoint checking is restricted to the 'parallel' statement, so its complexity only affects this language construct and not the whole language. You can think of it as a macro that does additional checking.


> I don't think Rust can do exactly the same things via its borrow checking and its iterators, but I might be wrong

Reading http://nim-lang.org/manual.html#parallel-statement point by point (disclaimer, I think Nim is very cool, but safe parallelism is one of Rust's strongest points):

> Every location of the form a[i] and a[i..j] and dest where dest is part of the pattern dest = spawn f(...) has to be provably disjoint. This is called the disjoint check.

The type system guarantees disjointness when necessary: a mutable reference `&mut` is guaranteed to be the only way to access the data it points to at any given point in time and iterators over mutable references preserve this guarantee, so disjointness-for-writing is automatic.

> Every other complex location loc that is used in a spawned proc (spawn f(loc)) has to be immutable for the duration of the parallel section. This is called the immutability check. Currently it is not specified what exactly "complex location" means. We need to make this an optimization!

Rust generalises immutable to "safe to be used in parallel"; everything that is (truly) immutable satisfies this, but so do, for example, memory locations that can only be used with atomic CPU instructions, or values that are protected by a mutex. There's no way to get data races with such things, so they're safe to refer to in multiple threads.

This is captured by the Sync trait (types which can be used from multiple threads in a shared way implement it): http://doc.rust-lang.org/nightly/std/marker/trait.Sync.html

> Every array access has to be provably within bounds. This is called the bounds check.

Rust's iterators give in-bounds automatically, but there's also no restriction about requiring bounds checks or not. (What does this rule offer Nim?)

> Slices are optimized so that no copy is performed. This optimization is not yet performed for ordinary slices outside of a parallel section. Slices are also special in that they currently do not support negative indexes!

I'm not sure what this means in the context of Nim, but passing around a Rust reference never does a copy (even into another thread).

(Disclaimer 2: it's not currently possible to pass a reference into another thread safely, but the standard library is designed to support it, the only missing piece is changing one piece of the type system, https://github.com/rust-lang/rfcs/pull/458 , to be able to guarantee safety.)


Well, it will be hard to find a proof for it being the "only" language to do so, since as one of the commenters mentioned there is indeed at least one other. But http://nim-lang.org/manual.html#parallel-spawn seems to discuss it a bit. Or did you want more details? If so, what kind? Actually I don't know anything about Nim anyway, so whatever your question is I can't answer it.


Well I did my homework. When you find another language that does it in a somewhat similar fashion, I'll happily change the website. ;-) I didn't think Rust counts, but since it's constantly changing, I will have a fresh look at it.


I did not do my homework. I know neither Rust nor Nim. So I'm inclined to take your word for it :)


It doesn't seem like an accurate claim, as Rust also has the ability to reason about uniqueness of data. Rust uses a type system to do it—which, via Curry-Howard, qualifies as "automated proof technology" :)

It looks like Nim has some ability to reason about arithmetic identities natively, which is neat. In practice Rust uses iterators to encode the same sorts of common patterns in the type system.



I love how these new languages compile into a static binary and thereby avoid the deployment nightmares of Ruby/Python.

More of that please!


If Common Lisp, Scheme or Dylan had been adopted by mainstream programmers, dynamic languages with JIT/AOT compilers would already be common practice instead of being available only to the few who care to look around.


I came across [pex](https://github.com/pantsbuild/pex) a while ago. It basically compiles your package and all its dependencies into a single zipped module.

I never had the opportunity to try it out myself though.


If PEP-441[1] makes it to Python 3.5, pex like functionality will be a part of the standard library.

[1]: https://www.python.org/dev/peps/pep-0441/


It's not quite the same, I think. The format is very similar, yes, and I think it'd be nice if the PEX program could be made to work with this PEP. However, PEX bundles dependencies, which this PEP doesn't seem to be explicitly about.
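For what it's worth, PEP 441 did land in Python 3.5 as the stdlib zipapp module. A minimal sketch of the idea (the dependency-bundling part that pex does is out of scope here; you'd have to copy dependencies into the source directory yourself):

```python
import os
import pathlib
import subprocess
import sys
import tempfile
import zipapp

# Build a toy "application" directory with an entry point.
src = tempfile.mkdtemp()
pathlib.Path(src, "__main__.py").write_text('print("hello from a zipped app")\n')

# Zip it into a single runnable .pyz file.
target = os.path.join(tempfile.mkdtemp(), "app.pyz")
zipapp.create_archive(src, target)

# The resulting single file runs like a script:
out = subprocess.run([sys.executable, target], capture_output=True, text=True)
print(out.stdout.strip())  # hello from a zipped app
```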


py2app and py2exe are fairly solid in this regard, aren't they?


No experience with py2app, but I've used py2exe and pyinstaller in production.

They can be made to work, but you regularly stumble on more or less obscure bugs. The short of it is that you have no guarantee that the compilation step won't introduce bugs in your program, so you need a solid test suite for the compiled binary. It's feasible, but clunky, and you really get the feeling that the compilers aren't first-class citizens.


I just realized something: Nim is like a faster Python or a better Go lang

It's not really competing in quite the same space as say, D or Rust. It's like a statically typed scripting language.


Comparisons to D or OCaml would be interesting (it'd be good to see that "new language for 0install" post redone with nim as one of the candidates). I think people are shoehorning Rust into the "compiled, typesafe, but strict and impure general-purpose programming language" niche, just because there aren't as many nice options there as we'd like.


Why would you say it's not competing with D and Rust? Performance-wise Nim should be in the same ball-park, and you can do systems programming in it. For me Nim is a universal language which I can use for almost anything.


Nim does not have the memory safety without garbage collection features of Rust. If you want memory safety in Nim, you have to use the garbage collector (which, last I looked, is not thread safe either, although I see that Boehm is now an option).

It's easy to write (a) an unsafe language that has no GC, or (b) a safe language that relies on GC for safety. It's also easy to write a language that is in category (b) but lets you drop down to category (a) in an unsafe dialect. It's a very difficult problem to write a safe language that does not rely on GC, and I know of no other industry language that has done it. Unfortunately, people often lump Rust into categories (a) or (b), without realizing that what makes it interesting is that it isn't in either. Nim may be in category (b) (if the thread safety issues in the GC have been solved, or you use Boehm, or your app is solely single-threaded) with the ability to drop down to category (a), but it is not in Rust's category.


I think it is a matter of what is safe.

While Ada doesn't provide the parallelism safety mechanisms from Rust, it is pretty much a safe systems programming language, especially the SPARK dialect.

And I do concede that using RAII or memory pools is a bit more cumbersome than in Rust.

EDIT: Forgot to add that for me systems programming safety has for a long time been what Modula-3, Oberon and Ada offer. Only recently did it become clear to me that Rust's safety model is broader.


Yeah, Ada/SPARK is safe too. But as I understand it, it achieves that by removing deallocation (from a single untyped heap) entirely, which is pretty limiting, though sufficient for a lot of embedded work.


Yes in SPARK's case.

In general Ada code, deallocation is considered unsafe and requires instantiating the generic Unchecked_Deallocation procedure. This is because Ada 83 allowed for optional GC.

In Ada 83, the safe alternative was to use memory pools.

With the newer revisions, support was added for RAII and the ability to define custom reference-counted data structures, similar to how it is done in C++.

So speaking of Ada 2012, you get Ada's safety in terms of contracts, data types, numeric ranges, constrained types, and access types (Ada pointers).

For heap related safety, it is possible if RAII, memory pools or RC access types are used. But like C++, this is one area where the compiler doesn't force the developer to use it.


D maybe, since it has a GC, but Rust not as much. A lot of defaults are different, affecting how libraries and idiomatic code are written and what situations it can be used in without turning things off or avoiding features.


Rust is where you can write code that relies on a custom allocator. Or a program that doesn't allocate memory. It's usable for the really low level stuff. Because of this everything must be explicit. Nim is implicit. It doesn't require you to declare a main() function, for example.


There's one thing that can't be done in Nim that you can do in Go, though, and that is the goroutine system. Something like that needs to be explicitly baked into the language. You can try to emulate it with your own thread pools, but you will never get the same level of preemption.

However, few people will ever need this feature, and Erlang/Elixir probably does it better, though at the cost of speed.


But but but Go's concurrent scheduler just isn't preemptive. It's cooperative (and that's meh).


It is preemptive on function calls just like erlang.


Care to elaborate on why it's a better Go?


Just about everything is better than Go. Nim, Rust, OCaml, Haskell... Did you read the submission? And for further enlightenment, the link to the previous post on the blog about what makes Nim special? Here's a link to a rant comparing Go and Rust, which includes some links of its own highlighting further issues with Go: http://www.quora.com/How-do-Go-and-Rust-languages-compare


I like to think of Go as a niche language with excellent tooling for small-medium microservices and various forms of networking clients and servers. When you look at it in that light, it's not a bad language. It's an okay language missing a lot of features (many based on good intentions, some due to what I consider poor design) that could make it a good language.

Rust is objectively much better, but I suspect that for the next 5 years, Rust will only remain popular for systems programming but not application/web dev, and Go will only remain popular for what it's currently doing but not systems programming (with some semi-exceptions like Docker and Kubernetes, though that's not really systems programming).


I've read both the submission and the articles that exist inside that Quora link you posted and I am still not convinced. IMO what makes a language better is end products and not features. When Rust becomes stable, its community will produce great software but I don't think all of the rest languages you mentioned have produced (or will do) better software than Docker, Kubernetes, OpenShift, etcd, btcd, and a ton of other Go software.


> IMO what makes a language better is end products and not features.

It's useful to distinguish between "the language" and "the tool". Some programming languages are terrible from a language design perspective (PHP, JavaScript, MATLAB), but they still have great tools (IDEs, build tools, package managers) and libraries to help programmers create good and interesting software.


The tooling around Go is good but not that great. What makes Go better than all those languages is that it's more complete: a language that doesn't get in your way, good tooling, a great community, and more. IMHO saying that a language is better or worse based only on "the language" is silly.


Without pointing out specifics: Nim has comparable ambitions to Go but Nim actually has escape hatches for people that desire "more."


Better metaprogramming, significant whitespace, generics, etc.


> significant whitespace

That's not an advantage


From my personal perspective - it's a major plus when choosing a language - under the general auspices of "readability counts". I find Python-like languages much easier to read and maintain - and significant whitespace is a not-insignificant factor in that.


It's not like code in other languages isn't indented.


Indentation helps, but it's a pretty pointless thing when you make a lot of other decisions that hurt readability.


It's an advantage for me, not for you. I've had experience with both and I prefer significant whitespace. The only place where I found it to be a problem was in Jade templates since HTML is kind of white-space sensitive (spaces between tags cause spaces in the output). I do like programming with significant whitespace.


Well, not for the offshore and other coders around who can't / won't cleanly indent even if their lives depended on it.


> That's not an advantage

Yes it is. I have been programming Python, C++, Java and C# over the last 10 years.

Python's convention for white-space beats the other ones hands down both when writing and reading large code bases.


So says you. The entire python community disagrees, and that's not an inconsiderable number of folks.


No, the Python community consists of a mix of people who think that the manner in which whitespace is significant in Python (most languages have significant whitespace of some form) is advantageous, people who think it is neither advantageous nor disadvantageous that are attracted to the language for other reasons, and people who think that it is disadvantageous, but not so much as to negate other advantages they see in the language.

It is a mistake to conclude that because language X has feature Y, the entire community around language X thinks that feature Y is a positive feature.


> entire python community disagrees

I can tell by the downvotes. Also, you can't prove that.


I don't know if it is, I use Golang and I never used Nim, but the source code from Nim is way prettier: http://rosetta.alhur.es/compare/nimrod/go


As a Python programmer, I find the Nim version needlessly cryptic, glyph-laden and ALGOL-esque, with many of the unattractive traits of Python from 10-15 years ago. If I get the choice to start out in a codebase which looks like this from brand new, I'll certainly pass. That's sad when "prettier" is really all that's being offered here.


It has generics, for one?


Nim can't be a faster Python when it isn't a Python, but a much younger language which lifts a few cosmetic choices but intentionally breaks with Python in large ways. If it is trying to replace Python, many of these breaks are poorly chosen and reflect a lack of understanding of why Python and its ancestors were unique. Aside from this, many of the practical refinements of Python are dropped in favor of fancy and shiny features that are better for arguing on HN than actually using. If I really want macros, Lisp never went away, but they aren't doing Nim's readability any favors.

Go is much more mature than Nim and encapsulates a huge amount of thinking and experience in language and compiler design. Nim would like to be Go. But it isn't.


It really makes no sense that saying "Nim is better than Python" is an acceptable opinion, but "no, it isn't" is not an acceptable opinion to HN.


I tried it and it is easy and fast. But the ecosystem is still not very large. I think it should support building libraries directly accessible from Python, including passing numpy arrays directly to Nim. That would make a lot of people jump right in creating libraries, and thus extend its ecosystem.


Something like Julia's PyCall package that allows calling arbitrary Python packages from Julia would also be great.


NimBorg is what you're looking for, https://github.com/micklat/NimBorg


Just add autocompletion and syntax checking for the major editors and people will start adopting it.


That's on its way!


Doesn't nim already have autocomplete?


Built into the Nim compiler, yeah; it has an `idetools` feature that makes integrating it into an IDE far simpler. I've been playing around lately with getting it into Textadept, now that Textadept has a better "call some particular binary and get input and output from it" story.


I wonder if it can generate bindings for Python like Vala does[0]? That would make it even more interesting for Python developers.

[0] https://github.com/antono/vala-object


I haven't looked closely at the Vala stuff that you mentioned, but it is very straightforward to generate shared libraries in Nim with exported functions that you can access via ctypes. That's not quite the same as generating a python module that you can directly import, but it gets the job done.
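A sketch of the Python side of that ctypes approach. No Nim-built library is at hand here, so the C standard library stands in; calling a hypothetical libmylib.so built from Nim would use the exact same mechanism:

```python
import ctypes
import ctypes.util

# Load a shared library. For a Nim-built one this would be something
# like ctypes.CDLL("./libmylib.so"); libc is used here only so the
# example is runnable as-is.
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")

# Declare the signature of an exported function before calling it.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # 42
```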

You might also be interested in NimBorg, https://github.com/micklat/NimBorg , which supports embedding Python or Lua in a Nim program.


I perked up as soon as I read this:

    # Table created at compile time
    const crc32table = createCRCTable()
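For readers without a Nim toolchain, the table construction can be sketched in Python. Python has no compile-time evaluation, so this runs at import time; that createCRCTable uses the standard reflected CRC-32 polynomial is an assumption:

```python
import zlib

def create_crc_table(poly=0xEDB88320):
    # Precompute the 256-entry CRC-32 lookup table.
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

CRC32_TABLE = create_crc_table()

def crc32(data):
    # Standard table-driven CRC-32: init all-ones, final xor all-ones.
    crc = 0xFFFFFFFF
    for b in data:
        crc = (crc >> 8) ^ CRC32_TABLE[(crc ^ b) & 0xFF]
    return crc ^ 0xFFFFFFFF

# Sanity check against the stdlib implementation:
print(crc32(b"hello") == zlib.crc32(b"hello"))  # True
```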


This is not a unique feature today; you can do the same in D and Rust and even C++14, I think.


"Hey Nim, 2014 called... they want their language features back."

I jest.

But seriously, D, Rust, and C++14 are about as familiar to me as Nim.


To be fair, D has had it since 2007 :)


Rust is planning to add that eventually, but in the meantime:

    fn foo(n: i32) -> i32 {
        n * 2
    }
    const y: i32 = foo(3);
error: function calls in constants are limited to struct and enum constructors [E0015]


You can do it with a compiler plugin; I suspect that's what the parent comment was referring to. Admittedly compiler plugins are pretty experimental at this time.


Sorry about that. I haven't tried this myself, I was told by a Rust user that its CTFE was on par with D's. I guess he was mistaken.


Not even close, sadly.


Or Common Lisp, 30 years ago.


I keep coming back to it, and I really like it but wouldn't it be great if it had interfaces? I really want to restrict the generic type but there seems to be no way other than using templates[1].

[1]: https://github.com/Araq/Nim/blob/master/lib/pure/collections...


User defined type classes can be used for this, but they're still experimental: http://nim-lang.org/manual.html#user-defined-type-classes


I can't believe how I missed that. Maybe this was added recently? Anyway, thank you very much!


Is there a "ctags / etags" utility for nim yet?


That's something nim-idetools can do once it works properly: http://nim-lang.org/idetools.html


I played around with Nim and it has huge potential. It's early days, and I would like to thank the creators of the language. A decent IDE is the only issue for me. I tried all the IDE options and the picture was stark: idetools didn't work for me, and the unit tests failed on Ubuntu 14.04.

At the moment small projects and libraries should be fine but the lack of IDE would be an issue for bigger projects.


> That means you end up with a single binary that depends solely on the standard C library, which we can take for granted on any operating system we're interested in.

What exactly does that mean? Does the end-user of my binaries require a C compiler? Which operating systems are "we" interested in. I'm interested in Windows and specifically Windows CE.


No, binaries do not require a C compiler. Yes, Windows and Windows CE are included.


That means if programs written in C work on your OS, so do the ones in Nim.


Sorry if this is a silly question but I've always thought of C as generating machine code and it had compilers available in any imaginable platform besides perhaps really niche stuff.

Were my assumptions accurate?


Simplified, but yeah. It generates intermediate object code, which is transformed to machine code.

And as you say, pretty much every platform in existence has a C compiler targeting it. In many cases, you use what's called a cross-compiler, which lets you generate code for, say, Arduino using tools run on, say, Linux. But one way or another there'll be a way to compile for it.

Net result is that as long as the code isn't doing something that is specific to the platform (memory layout assumptions, asm blocks, Windows API, etc.) the same C code will run on many different platforms. You'll have to recompile it for each one, but you won't have to change the source (much).

If I understand correctly, Nim ultimately generates C source (among other options) which then goes through the above process. So it has the same level of portability.


I think you got confused when the C standard library was listed as a dependency. What is meant is that the resulting binary has a link to a dynamic library (libc). For example, if you are on a Mac, run "nm /usr/lib/libc.dylib"; those are the symbols it might be referencing in a compiled Nim program. I don't know the equivalent in Windows, but the dynamic library is a DLL. It doesn't have a runtime dependency on the compiler.


Nim has some good ideas but I can't get over the syntax:

  - Significant whitespace, but tabs are forbidden
  - No block comments, save for `discard """ ... """`
  - Identifiers are case and underscore-insensitive (FOO_BAR === fooBar === fo_ob_ar)


> - Significant whitespace, but tabs are forbidden

Here's the reasoning: https://github.com/Araq/Nim/wiki/Whitespace-FAQ#tabs-vs-spac...

If you still want tabs you can add this at the top of your files and they work:

    #! replace("\t", "  ")
> - Identifiers are case and underscore-insensitive (FOO_BAR === fooBar === fo_ob_ar)

This changed recently, now the first letter is considered case sensitive: http://nim-lang.org/manual.html#identifier-equality


> Here's the reasoning: https://github.com/Araq/Nim/wiki/Whitespace-FAQ#tabs-vs-spac....

Honestly, there are less drastic solutions to these problems. Some languages (forgot which) simply forbid mixing tabs and spaces when the tab size makes the code ambiguous. And their C example will generate an "ambiguous else" warning in D, which you can disambiguate by adding braces.


> - Identifiers are case and underscore-insensitive (FOO_BAR === fooBar === fo_ob_ar)

In the language design stages who in the world thought this would be a good idea? Even PHP doesn't do that.


The idea here, as far as I'm aware, is to allow for different coding styles without having to wrap another library: one author prefers camel case while another library uses underscored, lowercase naming.

For a new language, I feel like one could have just forced one style onto the users, but that's just me talking.


Let's be realistic. If you have somebody naming variables:

foo_bar = blah

foobar = blah

fooBar = blah

That's someone you really don't want to be coding anywhere near, because it leads to really, really confusing code. So Nim, as a language, is just enforcing that you can't do it.


I agree, but the problem is that you don't ever want to do this, whether those are three different variables (or is that two?) or all alternative forms of the same case/underscore-insensitive name. Nim is going out of its way to make the syntax you have above legal (with all of these equivalent), which strikes me as a bad move.

The best case for usability is that this "feature" is never used, which makes it an odd design choice. The worst case is that the feature is used frequently: there is no clear language norm, visually scanning for variables is harder, search and replace requires dedicated tools, and subtle bugs abound. Far better (in my personal opinion) to make identifiers case sensitive and enforce consistency with style guidelines and code "prettifiers".

The current Nim approach is called cs:partial (partial case sensitivity) and makes the first letter case sensitive but all others insensitive. This seems like an awkward compromise, and needlessly complicated. The best proposal I've seen suggests making "_x" equivalent to "X" (underscore is an alias for "next letter is capital"). But I don't see it as better than case insensitivity plus strong code conventions.
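For anyone who wants to see the cs:partial rule spelled out, here is a rough Python sketch of the comparison as I understand it from the manual (keep the first character as-is, then drop underscores and lowercase the rest); the function names are mine, not Nim's:

```python
def normalize(ident: str) -> str:
    """Normalize an identifier under Nim-style partial case sensitivity:
    the first character is kept verbatim, the remaining characters are
    lowercased and underscores are dropped."""
    first, rest = ident[0], ident[1:]
    return first + rest.replace("_", "").lower()

def same_ident(a: str, b: str) -> bool:
    """Two identifiers are 'the same' if they normalize identically."""
    return normalize(a) == normalize(b)

# fooBar, foo_bar and fo_ob_ar all normalize to "foobar"...
assert same_ident("fooBar", "foo_bar")
assert same_ident("fooBar", "fo_ob_ar")
# ...but FooBar is distinct, because the first letter stays case sensitive
assert not same_ident("fooBar", "FooBar")
```

Under the full case-insensitive scheme the first line of normalize would lowercase the first character too, and FooBar would collide as well.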


I think it would be a lot more palatable if Nim enforced one of those coding conventions per module, while letting users of that module reference it any way they liked. e.g. my team could exclusively use camelCase and the compiler enforced that in our modules, while the same code could be handed off to some other team that exclusively uses snake_case.


In practice, resolving symbols is not really harder. Once you understand the symbol rules, your mind is very good at finding the match. Plus the real solution is proper IDE support with "go to definition". IMO, the benefits of semi-case-insensitivity (allowing programmers to write how they want) outweigh the issues it causes.


It still means you can't reliably grep code for an identifier. Strikes me as a really stupid wart in an otherwise great language.

gofmt is a much better solution to the same problem.


You can with NimGrep: http://nim-lang.org/nimgrep.html


If a language's lexical structure is so tortured that I need a special tool to grep it, I'm going to skip it and go back to the sanity of C++.


Ridiculous.

We use "special tools" for every other language (Visual Studios, Eclipse, etc).. Calling a language feature, which give programmers style freedom, a "tortured lexical structure" is not an objective argument. Especially since we have a tool (nimgrep) which addresses the issue and takes < minute to learn. Once Nim has better IDE support, no one will be grepping in the first place.


I don't need special tools to just _search_ a C++ or Java program.

Sure, I can use nimgrep, but I can't use _my_ grep, _my_ freetext indexer, _my_ editing constructs, and so on. I'm not going to make my environment, which works fine for almost all programming languages, bend over backwards to support your special snowflake of a language.

I'd rather just use something else.


> to just _search_ a C++

Please, be serious.

> but I can't use _my_ grep, _my_ freetext indexer, _my_ editing constructs

Oh, but you can! These are _your_ tools; you can easily extend them by either scripting or modifying the source, no problems there. Really, how much of a problem is adding a switch for underscore insensitivity to your program? Because I assume you had already coded case insensitivity.

Oh, unless by "_your_ grep" you meant a tool that someone else wrote and you're using without any real understanding of how it works and without required skill or knowledge to modify it. Right, this can happen, you're a busy man, have many obligations and no time at all to fiddle with grep. I understand.

But that also makes you completely outside of a target group of early adopters of new programming languages. So maybe stop commenting on them?
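To make the "just extend your tools" point concrete, here is one possible sketch in Python: expand a single spelling into a regex that matches every Nim-equivalent spelling, which you could then feed to an ordinary grep. The function name and approach are mine, purely for illustration:

```python
import re

def ident_pattern(ident: str) -> str:
    """Build a regex matching any Nim-equivalent spelling of ident:
    the first character is literal, each later letter may appear in
    either case with an optional underscore before it."""
    first, rest = ident[0], ident[1:].replace("_", "")
    parts = [re.escape(first)]
    for ch in rest:
        if ch.isalpha():
            parts.append("_?[%s%s]" % (ch.lower(), ch.upper()))
        else:
            parts.append("_?" + re.escape(ch))
    return "".join(parts)

pat = re.compile(ident_pattern("fooBar"))
assert pat.fullmatch("foo_bar")
assert pat.fullmatch("fo_ob_ar")
assert not pat.fullmatch("FooBar")  # first letter stays case sensitive
```

Something like `grep -E "$(python expand.py fooBar)" *.nim` would then find all spellings, assuming a small wrapper script around the function above.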


How about distinguishing between public, private and protected properties? How about being able to quickly identify classes versus instances? How about explicit is better than implicit?


There are worse things than PHP out there :)

Many of the so-called 4th generation languages (i.e. business-oriented languages derived from Cobol with DB support bolted in) are totally crazy in this sense. Not only are identifiers often case insensitive, they can also be abbreviated!

Yet people make massive amounts of money with them.


It's really nice when you're talking to a C library that uses various prefixes and other stylistic choices that don't map well to the Nim code I want to write.


> - No block comments, save for `discard """ ... """`

I believe a different syntax for those will be added soon: https://github.com/Araq/Nim/issues/1535


Not that it's even really needed... Just select the section of text and use your IDE's shortcuts to comment it out (which now works fine in any IDE due to comments no longer being part of the AST)


You know, we used to mock the Java people for requiring fancy IDEs to work around deficiencies in their language. Now, in Nim, we're designing these warts into the language from the start?


Underscore insensitivity is unforgivable: it makes grep work poorly. I guess I'll have to strike Nim from the list of languages I might want to use.


> Nim libraries are statically compiled into our binary as well.

Does that mean one cannot use GPL libraries in non-GPL programs? With other languages you can get around that by linking dynamically.


Actually that only applies to LGPL libraries -- pure GPL doesn't allow dynamic linking, even. (That's the most common interpretation anyway, and the reason why LGPL exists.)


I'm assuming "staticExec" is not sandboxed in any way...

It's one thing to make your compiler get stuck while doing #include "aux" on Windows, but completely different to treat source code as a shell script.

(I see the point, and it's a great feature, but with too much power).


Sure, just don't use it then.



