I missed Nim (2014) (krampe.se)
143 points by mapleoin on Jan 21, 2016 | 128 comments



It seems to me that the evidence is that in the 2010s, it is exceedingly difficult for a new language to break into what I call the "B-class language list" (they aren't in the Nobody Was Ever Fired For Choosing (Java/C#) set, but they're languages you can use pretty much anywhere but Big Enterprise without anybody raising an eyebrow, like Python or Ruby) without some sort of serious corporate backing.

There hasn't even been much from the 2000s that has broken into that list. Looking at the Tiobe list for concreteness (yes, it may not be perfect, but it isn't completely wrong either, so close enough), you have to go down to #14 (Swift) to find something not from the 20th century. At #17 (Groovy) you debatably have something that wasn't pushed by a company (that's actually a complicated question), and you have to go all the way down to #21 (D) to find the first thing that clearly comes from a single person anytime recently. (I won't go any farther because the list is probably just noise below that.)

I hypothesize that it is because we expect so much from our languages now. If you don't have a solution for serving web pages from your new language, don't even hope to make that list. But it could also be because developers are nervous about choosing something so obscure and having it just fade away, when the competition that doesn't have that problem is already pretty good. Compared to 20 years ago we are spoiled for choice, so perhaps getting your head above the rest even in a niche is simply harder than it used to be. But I really don't know what it is... observationally, it seems to take a lot more than just "being a good language" nowadays to attain great success in the computer language area.

PS: This is the sort of thing that leads me to disagree strongly with the idea that the computer world moves quickly. When it takes more than half an average career for the dominant language to shift, that's not "fast".


Language adoption is pushed by platforms. New languages get adopted when there's a new platform that people wish to develop for, and existing languages aren't suitable for it.

That's behind Swift's rapid rise (it's being pushed as the "new official" language for iOS) and a lot of historically successful languages. BASIC was the language of the early microcomputer era. C was the language of the UNIX & Win32 era. PHP and Perl were languages of the Web 1.0 era, and Javascript, Ruby, and Python of the Web 2.0 era. Objective-C is for Mac and iOS development, and Swift is soon to replace it. C# is for .NET development. Java was pushed as the language for the web, but ultimately found adoption within the enterprise, and I think Go is headed toward a similar position.

The reason we've seen all the major languages of the past 10 years backed by large corporations is that all the major platforms are controlled by large corporations. The web was the last totally open computing platform to see widespread adoption, and there was a large renaissance of single-developer programming languages for it. We were in a similar position in the late 80s and early 90s, when C was completely dominant along with Windows & commercial UNIXes, and that pattern reversed itself in a few short years.


Nitpick: the defining sites of Web 2.0 were written in PHP and Perl.


I don't think the platform thing is that big of a deal.

Programming languages are pushed and promoted just like everything else in the world. But there are more programmers than ever before, more languages than ever before, more platforms, more everything. And reaching out to most of the programmers becomes much harder and convincing them to use a new shiny language even more so.

If anything, we are likely to never see any new language become mainstream again. And likely to see most of the mainstream languages fade away.


i don't buy it.

orgs coding in java or go or python or ruby on the backend can be coding in anything; it's not being pushed by a platform.

bazqux, the only good rss reader after the greader diaspora, is freakin' ur/web.

swift, while apple is trying mightily to support it like a first-class citizen, is really more like an adopted child, platform support wise. it's interesting because of the language, not the platform.


I think language adoption follows broader trends in the industry. In the early 90's, people were coding for desktops, so you could use C, C++, Pascal/Delphi, or Visual Basic. You had a lot of closed source languages.

The web brought us Perl, PHP, Python, Ruby, and JavaScript, which are all pretty darn similar if you squint. Java is sort of an outlier because it was commercial and free and had a lot of marketing.

Mobile phones led to a resurgence in Objective-C, and another big market for Java.

Since we're still making websites, we're still using the same languages for them. Languages naturally have a network effect that is hard to break. If you already have N modules, writing the N+1 module in the same language is the path of least resistance. And then module N+1 also makes the ecosystem more valuable.

C++ has an especially pernicious network effect, because it's so difficult to interface with it (in general). If languages were more interoperable, then this network effect could be broken.

But in practice it seems the main way to break the network effect is for a new generation of technology to come around. ALL languages are really domain-specific. And it is fantastically expensive to build a language ecosystem... so as long as the domain remains the same, the language will likely remain the same.

I think the hardest thing to break is going to be our foundations in C/C++. Mainly because every single other language is also written in C/C++ (Perl/PHP/Python/Ruby/JavaScript/Java).

An earlier special case of this is that it's easier to write an application in the same language that the operating system is written in, or a language written in that language -- which is of course always C or C++. The OS specifies its portable interface in C headers, and in C data types.

Possibly domains that will create new languages: AI, big data, distributed systems (Most things are built on top of existing languages, like Spark on Scala, Flume on Java, TensorFlow on Python, but there might be room for a first class language once we figure out what the heck we're doing!)


Clojure and Scala? I know they are backed by companies, but those companies were founded to support them (i.e. the companies grew with the language).

Elixir will be joining the B-class list really soon.


Not in my B-list, no. Clojure and Scala will still get a "Never heard of it" from a lot of developers, and right now the developer who has heard of Elixir would be the exception, let alone being able to say "I wrote that in Elixir" to your boss with no eyebrow raises. YMMV depending on your boss, of course, but I think that's a fair characterization of the "average boss". I don't even put Go in the list, despite Google's corporate support, since it still certainly produces eyebrow raises.

The B list mostly consists of the 1990s-style dynamic scripting languages right now; Python, Ruby, PHP, etc. There's a lot of up-and-coming alternatives but nothing leaps to mind that won't likely need to be defended.

There is, of course, no concrete definition of the B-list, so you are free to have your own; rather than arguing with me too much about what language falls where, if you want to propose a different definition for conversation, please do. It's a fun discussion, if ultimately pointless. I'd put what you mentioned in a separate C list, "the set of languages that are known about on HN and have been used for real projects but still generally need defending if you want to choose them". It's a much longer list. All the fun languages are on it. Erlang's still a C-list in my book, so first Elixir will have to pass Erlang before I'd even consider it in the B list.


Agreed. The problem with Clojure and Scala is that they naturally target Java developers, due to their dependence on the JVM and tight integration with existing Java libraries. Companies choose Java for, among other things, the relatively simple syntax and ubiquity of Java developers. This is not a group that is, as a group, longing for change, for something new and more expressive. Java's selling points are static typing, corporate backing, large-scale adoption, memory safety, verboseness, and write-once-run-anywhere. Clojure gets rid of static typing and you can easily write completely opaque, unreadable code in Scala[0]. They are DOA in most large organizations off the bat for these reasons. And for someone looking for native compilation or predictable memory management, they are also off the table.

[0] http://blog.fogus.me/2009/03/26/baysick-a-scala-dsl-implemen...


Not sure why this is being downvoted.

Even ClojureScript (which has JavaScript as a compilation target) runs on a JVM-based compiler and requires some buy-in into the greater Java ecosystem. Plus it is built around the Google Closure ecosystem which makes it largely incompatible with large parts of the modern JS ecosystem built around node (though there are ways to tap into these from CLJS but not vice versa).

Java itself is a very conservative programming language. It is getting new language features as new releases come out but it's very much not on the "bleeding edge" as language trends go. Its main target audience are big enterprises that need a reliable platform with a large market of competent developers (with qualifications available in the form of various well-established certifications).

That doesn't mean you can't do anything "cool" in Java or that there can't be any startup that starts out on Java, but it means the acceptance of non-Java JVM languages is a non-issue. That these companies already use the JVM as a platform just makes other JVM-based languages slightly more palatable.


The gist of flatline's post isn't necessarily wrong, but you must admit 2 out of 2 wrong assertions about Scala isn't a good sign.


Really? Your example of "unreadable" Scala code is a DSL specifically designed to look like BASIC, complete with line numbers?

Have you taken a look at the source code? It's extremely simple, short and easy to understand. It even says so in the blog post you linked to. The only way I can figure someone not understanding it is if they think "these don't look like Java keywords, I give up!".

Also, I don't know about Clojure, but Scala isn't DOA in large organizations.


My point being, Scala is a powerful and expressive language in which you can write something like a BASIC DSL. I personally think this is great, but a lot of Java shops will take one look at that level of flexibility and faint when trying to imagine a group of 10 developers of wildly varying ability and experience trying to manage a large code base in a language like this. Really, I've tried to introduce Scala to Java teams before, and was surprised at the pushback over just the concept of a different language running on the JVM. Groovy is apparently an exception, but I have not seen it used as the primary language over Java.


Yes, but you picked an example which is extremely easy to understand, on multiple levels. The Basic DSL is trivial. The Scala implementation is short and simple (don't trust me, take a look, assuming you're passingly familiar with Scala!).

Is your argument against DSLs in general? You don't need to write them at all if you don't want to. In fact, most Scala devs don't write DSLs.

So is your argument "the Java devs I worked with were afraid of languages with unfamiliar syntaxes"? Um. It's hard to argue with that. I don't know what to say, other than "some of the Java devs I currently work with embrace and love Scala, and others don't". Maybe change jobs and go work with more open-minded people?


In that case, Ruby, Python and Lua wouldn't be in your B-list of 1999 either. I think it's going to take time, that's all.


Is Lua in your 2016 B-list? Not trying to criticise, just curious (I don't know Lua or its ecosystem well).


In game development, I'd say it's on the A-list, and you'll be asked "Why did you do that?" if you chose to embed Python or Ruby into your commercial game instead of Lua, especially if your company is a large, enterprisy game development house.

Python & Ruby are languages widely used in web development, but only at the level below enterprise.


I see, thanks, this is interesting. I'm wondering why some languages become associated with specific fields (e.g. Python in scientific computing, PHP and Ruby in web development, etc.); I suppose it is a combination of language characteristics and ecosystem. For reference, here's a link to a list of game engines with Lua scripting: http://gamedev.stackexchange.com/questions/56189/why-is-lua-...


> Looking at the Tiobe list for concreteness [...] At 17 (Groovy)

No need to consider whether Groovy follows the trend or not. Tiobe gives a history of all Top 20 languages as a graph, just click on the language name in the top 20 chart. Groovy's ( http://www.tiobe.com/index.php/paperinfo/tpci/Groovy.html ) shows the most volatile movement in the rankings (e.g. it just rose from #82 to #17 in a mere 12 months), and Tiobe's comment "Scala might gain a permanent top 20 position soon" is probably a back-handed dig at the likely impermanence of Groovy's top 20 position. That language is being actively fiddled through the popularity rankings.


So this is an old article. I wonder if his perception of Rust has changed since its 1.0 debut.

> Rust is “C syntax” and Nim is Python-ish. Nim wins this one hands down for me.

This is always going to be a really personal choice for most people. I actually prefer all the clarity I get from the curly brackets.

> Nim has Exceptions with tracking (!) and Rust has explicit error handling via the return value. Personally… that feels like a big step backwards, much like error handling in Go. Again, for me, a clear win for Nim. I know others feel differently, but that is ok :)

Rust has made a plain distinction between expected error return behavior (like parse errors) that you can recover from, and panic!, which is the closest thing to exceptions. If you come from languages and code bases that suffer from huge amounts of confusion because of things like RuntimeExceptions vs. CheckedExceptions vs. Errors (Java), it's a blessing to have strongly typed return values that require doing something with the error case.

That said, errors in Rust could be easier to work with; they are a bit painful now. It looks like there is some stuff on the roadmap to do just this, based on a merge I saw recently.

> Rust is pushed by Mozilla, Nim is a grass root “true open source” language. Having a “backer” might be good, but I like grass root efforts since they tend to be more open and easier to participate in as a “full” member.

Mozilla is a saint for backing this effort. Love them or hate them, it's like Redhat's support of Linux. It's great that there are paychecks paying for this development.

The Rust community is really strong, and doesn't (IMO) suffer from the issues related to BDFL and has open transparent discussions around requested changes.


The main complaint, that "Rust is much more complicated to program in", is still very valid. The learning curve for Rust's memory handling is steep. But this is to be expected when comparing any GC language to a non-GC language. GC languages are always going to be easier for the programmer; that's the whole point of the GC. Rust's claim of being memory-safe without a GC is great, but it's not an advantage if you can afford the GC.


I think that's a bit oversimplified. The same mechanisms that allow memory-safe manual memory management also eliminate data races without having to use a race detector. They also provide a very flexible form of const correctness, if that's your thing.

I think the cognitive overhead of Rust's memory model outweighs its benefits only if all of the following are true for your application:

1. Your application can afford the cost of GC and a runtime.

2. Your application does not use concurrency, or you are willing to tolerate data race errors, or you can eliminate all data races with a runtime race detector (due to extensive test coverage or similar).

3. The help that compiler-enforced immutability affords to your ability to reason about the code isn't important enough to you to justify the cost.

I think that there are a lot of apps that meet (1), (2), and (3), don't get me wrong. But I also think there are a good deal of apps that meet (1) but not (2) or (3), and they can benefit from Rust as well.


I don't use Rust, so pardon my ignorance, but my impression is that Rust will be very annoying if you're doing something that does not require safety.

For example, these days I write a lot of data processing stuff that is closer to scripting than anything else, and is usually "one-offs" that get scrapped afterwards. But the code needs to be super fast, talk to databases, parallelize across multiple cores and nodes, which rules out many languages. On the other hand, safety is the absolute least concern. I've been using Ruby when it's not performance-sensitive, and Go when it is, and while Go is significantly less pleasant, it gives me decent performance, concurrency and access to libraries. Nim would probably also be ideal, if not for the lack of libraries. But Rust just seems like a terrible choice for this stuff because of the syntactical overhead of its abstractions, in particular memory.

That limits its usefulness. A single language for all my work would reduce context switching and allow code reuse.

In general, my impression is that Rust is good at the complex, heavy stuff, but it doesn't scale down.


Don't forget that memory safety is the building block of a lot of other things, like safe concurrency: http://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.ht...

"Safe by default" saves a lot of different kinds of bugs.


Well, since you mentioned performance, the stuff I mentioned also buys you performance. Rust is in a category of language performance that GC'd languages with runtimes or JITs never reach in practice. I don't believe Go will be an exception to this (especially not now, with its lack of an optimizing compiler).

In general, I think if you're looking for "one language to rule them all", that can scale up to the highest performance and down to one-off apps you would write in Rails, you will be disappointed. Even if you don't care about safety. That's because, no matter what people say, static and dynamic typing is a tradeoff. Rust is more statically typed than Go is. Go is more statically typed than Ruby is. Which you should use depends on your circumstances.

A lot of language designers and enthusiasts, from all communities, believe there's a "sweet spot" of language design where your language can scale up or down at will. I don't believe it. That experiment was tried with Lisp and Java and failed. Multiple languages are here to stay.

At the end of the day, all I can say is: If you're happy with Go, use Go. Rust was never designed with all use cases in mind, and I don't see that fact as a failure at all. I see it as just acknowledging reality: trying to make a language that scales up and down indefinitely to all use cases is chasing the impossible, and language enthusiasts who believe their language is the one language that can do that are all mistaken.


But Nim shows that it's possible to have a fast, statically typed language that is also terse and lean and "script-like".

I actually do believe there exists a sweet spot. I just think that during design, too little thought is given to the scalability of languages. And I don't think it's that difficult.

For example, type inference is something of a game-changer, capable of making a statically typed language feel nearly like a dynamic one. Yet it's only recently being applied in mainstream languages, despite the theory not being particularly new.

I also think too little thought is given to the cognitive overhead of syntax. Wirth famously designed his grammars to fit on a single screen. Go (which, like its earlier incarnations Alef, Limbo and Newsqueak, is very much influenced by Wirth) gets this syntactic simplicity right. I thought early Rust syntax showed a lot of promise, but I'm unhappy about the current jungle::of::punctuation.

> If you're happy with Go, use Go.

To be clear, I'm not happy with Go.


Nim sacrifices a lot. It's garbage collected with a non-thread-safe GC (last I checked). It has undefined behavior, which makes me not confident that people won't discover security problems. It wouldn't be suitable for the project I'm working on, which has to be as fast as it would be if it were written in C++, free of GC overhead (which does not just consist of pauses), and absolutely memory safe. (That doesn't mean I wouldn't use it for other projects.)

Fixing these issues in Nim would make it much more like Rust. In particular, it would have all the cognitive overhead you're criticizing Rust for.

If you want to show that it's possible to achieve all of the use cases of Rust without the cognitive overhead, just citing Nim won't cut it. You need to (a) explain how to do memory safety without garbage collection (reference counting being, of course, a form of GC) without introducing new concepts like Rust does, (b) explain why everyone who thinks they can't afford GC is wrong, or (c) explain why memory safety isn't important even for security-critical software. I don't think (a) is possible, and I don't think (b) or (c) are correct.


But this is not at all what you said in the parent post. You said "Rust is in a category of performance that GC languages never reach in practice".

This is not a claim about safety.

Every time I have benchmarked something, Nim has turned out faster than Rust - or anything else, for that matter. Yet, Nim does have a GC. But since one can allocate on the stack, and it is per thread, it is not a bottleneck.

About safety: you can tune it with compile time flags, or even annotate per function what checks to leave.
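Something along these lines (a rough sketch from memory, not checked against the current manual; the hotSum proc is just a made-up example):

    # whole-program: nim c -d:release app.nim   (release builds drop most runtime checks)
    # or locally, just around a hot path:
    {.push checks: off.}          # drop bound/overflow/nil checks for the proc below
    proc hotSum(data: seq[int]): int =
      for x in data:
        result += x
    {.pop.}                       # checks are back on from here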

So, please, stop spreading FUD about Nim.


- Non-thread-safe GC is unsafe. The way to make it safe is to make it thread-safe. Thread-safe GC does not have negligible performance overhead. (Actually, non-thread-safe GC doesn't either, not by a long shot, but it won't show up in small benchmarks as easily.)

- Just being able to allocate on the stack is not enough. You need to be able to freely make pointers to those stack objects. The ways to do that are to either (a) accept the lack of safety and admit dangling pointers; (b) use an imprecise escape analysis; (c) lifetimes and borrowing. Every choice has costs.

- Safety with runtime checks is not equivalent to what Rust does. Rust provides the safety without runtime checks. That yields better performance than safety with checks.

I'm not intending to spread FUD about Nim specifically. The more interesting question is whether it's possible to have a safe language with the same performance as C/C++ without any cognitive overhead. I strongly believe, after working in this space for over five years, it is not.

That's not to say Nim is a bad language. There are lots of things I like about Nim. It's just that it won't escape the basic tradeoff between cognitive overhead and safety/performance.


Nim threads don't share memory. Each has its own GC heap, with data exchanged through messages. The lack of thread safety is irrelevant when they can't normally get to each other's memory. This of course has performance implications for certain types of parallelism. Nim optimizes for a model where workers don't have to have high-speed, high-volume inter-communication. If you need shared memory, it is up to you to add it along with whatever safety measures you need.


Does Nim now deep copy all messages between threads and start every thread with a fresh environment? If so, it's safe. But it is really problematic for any parallelism. (asm.js tried this and developers felt it was unacceptable.) I think you really can't get away from thread-safe GC in practice.


I wasn't comparing Nim to Rust's goals, I was comparing it to my "sweet spot".

From everything I have read (I've followed the development quite closely since it was first announced), Rust's safety comes at a significant cost to the developer, in ways that prevent it from scaling down.

You guys are doing a terrific job, don't get me wrong. But from my perspective this looks like a costly design mistake. Looseness is the key to being able to scale down, and strictness is its antithesis. Rust just doesn't offer a forgiving mode that approaches the kind of ergonomy you get from a less strict, garbage-collected language.

I think Nim made a better, more flexible choice, in offering a kind of graduated safety — you can restrict functions (for immutability, memory safety, exception handling, etc.), but the default is wide open. In my experience, this correlates to the top-down structure of programs: You want the foundation (runtime, stdlib etc.) to be as hardened as possible, because you can afford to spend lots of time on it, since it's the most used code that needs to be the most stable over a longer period of time. Towards the top layers everything should be able to decide whether it wants to be sloppy (fast to develop, but unsafe) or strict (slow but guaranteed to be correct).
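To make that concrete, here is a rough sketch of the kind of restriction pragmas I mean (pragma names from memory; clamp01 is just an illustrative proc, not from the article):

    # hardened foundation: provably free of side effects, no exceptions may escape
    proc clamp01(x: float): float {.noSideEffect, raises: [].} =
      result = x
      if x < 0.0: result = 0.0
      elif x > 1.0: result = 1.0

    # sloppy top layer: no annotations, wide open by default
    proc main() =
      echo clamp01(1.7)   # prints 1.0

    main()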


> Rust just doesn't offer a forgiving mode that approaches the kind of ergonomy you get from a less strict, garbage-collected language.

Because it's impossible (without sacrificing Rust's basic goals). And Nim doesn't prove otherwise, because Nim doesn't share those goals.

You are framing safe vs. performant as a fundamental tradeoff. It is not, and that is the entire point of Rust. Nim does not allow you to remove that tradeoff. Rust does. That's why Rust is suitable for different projects than Nim is.

In other words, Nim's safety features are not equivalent to what Rust has. They have a performance cost in Nim and don't in Rust. So in order to make your "sweet spot" of a language that scales up and down a reality, then you're going to have to either convince me that I should accept the performance overhead of Nim's features or that I should give up safety.

I don't believe that it is possible to combine a loose, GC'd mode with a memory-safe, non-GC'd mode in the way you want. The best you can do is what Rust already does, with Rc and RefCell. What people observe with Rc and RefCell is that you do indeed get back your freedom to make aliases and mutate at will, but you still have to know how the borrow checker works. So Rc and RefCell get little use in practice, because once you know how the lifetime system works you might as well just use the ownership system instead of paying the cost of Rc.

Even if it acquires ownership and borrowing, Nim will not be able to escape this dilemma, barring some fundamental research advance.


From what I've seen, I'm pretty sure that over the past 8 years of development all usecases, features, and syntaxes were indeed kept in mind for at least some amount of time. Not all of them made it into the final design, though. ;)


> "On the other hand, safety is the absolute least concern."

In that case, you may be interested to know Rust permits unsafe code:

https://doc.rust-lang.org/book/unsafe.html


Unsafe code isn't for things that "don't require safety". It's for things that are intrinsically unsafe, like calling foreign functions, and the unsafe keyword is basically saying "I promise to be very, very careful".


Unsafe code in Rust is for anything the programmer wants to use it for. If someone is writing their own scripts (like the GP), and wants a strongly-typed system-level programming language, and has explicitly stated they care very little about safety, I see no reason why they couldn't use unsafe Rust code.


there is some work happening in the node+rust arena that's interesting in this regard


> The learning curve for Rust's memory handling is steep. But this is to be expected when comparing any GC to a non-GC language.

People forget that the learning curve for correct memory handling in C is steep.


Also

  >  I would however urge Rust people to add a section in their
  > documentation called “Error handling” or similar so that one
  > can find it without having to read the entire manual!
We have this today! http://doc.rust-lang.org/book/error-handling.html


That's great! :)


> It looks like there is some stuff on the road map to do just this, based on a merge I saw recently.

The "Trait Based Exception Handling" RFC is in final comments:

https://github.com/rust-lang/rfcs/pull/243


The comparison of Rust's errors-by-return-values to Go's is really off-base.


Exactly: between a powerful type system and macros like try!, in Rust you're basically rolling on the same level as Haskell and other advanced languages.


I'd say you should hold off on that claim until Rust errors are composable (RFC #243). I know it's close.

The macros aren't quite up to the job of safely coping with the tangle of different errors that different libraries can return without writing Go-like error handlers yet.


I disagree. The RFC you are citing is effectively just syntax sugar baking the `try!` macro into the language; it doesn't allow you to express anything you couldn't express today with `try!` and closures.


I also prefer Nim to Rust, but I haven't done anything extensively in either.

How does Nim do in the safety area? AFAIK, Rust has e.g. regions and linear types to achieve memory safety.


Last I looked:

- Nim had a lot of undefined behavior. Dereferencing a null pointer, for example, led to undefined behavior because of compiling to C.

- Nim seemed to have less flexible data race prevention (though the abstract interpretation stuff that Nim does to determine disjointness of arrays could be more ergonomic than Rust's split_at_mut, albeit more complex). Data races were also more of a problem in Nim because the lack of a thread-safe GC meant that races could result in use-after-free.

- Nim was in the typical "memory safety requires a GC; opt out of GC and you lose memory safety" category of languages (reference counting being a form of GC). Rust, by contrast, retains safety even when GC is not used (which it rarely is).

This was many months ago, so it may well have changed.


In most cases, though, the GC is light enough that it can stay enabled, and you'll rarely ever end up with raw pointers.

For threading, you can use the capabilities at:

http://nim-lang.org/docs/threadpool.html
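A minimal sketch of what that looks like (untested; assumes you compile with --threads:on, and slowDouble is just a stand-in proc):

    import threadpool

    proc slowDouble(x: int): int = x * 2

    let answer = spawn slowDouble(21)   # runs on a pool worker thread
    echo(^answer)                       # ^ blocks until the FlowVar is filled; prints 42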


The GC was not thread-safe at that time, so you did not need raw pointers to get the unsafety.


This forum thread might be of interest http://forum.nim-lang.org/t/1961


It's crazy that in 2016, people are still seeking the holy grail of statically typed, memory safe, no VM programming languages that retains most of the advantages of dynamically typed languages. My point is it should be there already!

Well, I guess language design is just hard... Yet the demand exists, and it's huge, no question.

Out of all of these, I still think D is the most interesting one (I like C#, I want generics...).

I don't agree with some choices Nim made, but it still looks interesting. Go is way too rigid for my programming style. Yet Go set some expectations when it comes to developer experience, so it was a step in the right direction.


Yes, of all the new-style AOT languages, Nim and D are by far and above my favorites.

Go and Rust just never appealed to me. Actually, I never liked Go for the same reasons as you...


> statically typed, memory safe, no VM programming languages that retains most of the advantages of dynamically typed languages

Those requirements contradict themselves in numerous ways. As always, engineering tradeoffs are paramount.

As for "statically typed, memory safe, no VM" -- C++ fits all those perfectly. (Provided you put in the effort of learning C++. There's another tradeoff you'd need to keep in mind.)


C++ is not memory-safe and cannot be coerced into being so.

You can write nice C++ in the C++14 style, but you can't turn off or ban the unsafe features. Until I can make my program fail to compile if there is a bare pointer in it (and how would that work with the standard library and system API?), it's not a memory-safe language.


There isn't any definition of memory safe, nor any published version of C++, for which this is a true claim.


  > Those requirements contradict themselves in numerous ways
I'd be interested in hearing more about how.


> It's crazy that in 2016, people are still seeking the holy grail of statically typed, memory safe, no VM programming languages that retains most of the advantages of dynamically typed languages.

I have my eye on Red and its system programming dialect Red/System

http://www.red-lang.org

http://static.red-lang.org/red-system-specs.html


Nim is great. It is fast and it works everywhere that C works or JavaScript works since it transpiles to either (and then compiles if transpiling to C). It is super fucking fast (usually faster than C) and it feels like it has none of the limitations that you normally find out there.

That being said, there are a couple of issues.

1. Like Crystal, Nim still doesn't have HTTPS for its webserver, and with both, the default instructions are insecure. (Download this thing and run it in your shell, pay no attention to any MITM that can trivially root your machine).

2. The community is really pro, but the arguments are too tiring. We had a huge argument over the equivalent of Ruby's chomp. Everyone is so afraid of bloat in the standard lib (which I don't really understand, since I've never found myself wanting fewer string methods).

3. While the syntax is way better than Rust, Go, et al., it borrows some annoyingness from Python where it could have looked to Ruby for better guidance. For example, sort vs sorted (see the sketch after this list). Sorted is the safer operation in Python: it returns a new sorted list. Now what normally happens in the real world is that programmers make the safe version first and later make the fast version. But imagine that the first version was called "sort" and it returned a new array; how are you to transition to a fast sort and a safe sorted? Error-prone patching. Ruby's array.sort vs array.sort! is much better during transition, more legible, just all around better. It means less mangling around, and it means that 99.9% of the time you know the method you're calling is safe if it doesn't have the bang after it.

4. There is still some instability here and there when you try to do cute things. I can't remember exactly what it was, but something along the lines of passing channels over channels segfaulted. This leaves you a little concerned about just how much abstraction you build up.
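Here's the sort/sorted sketch promised in point 3, as I remember it working with the algorithm module (untested):

    import algorithm

    var xs = @[3, 1, 2]
    let ys = xs.sorted(system.cmp[int])   # returns a new sorted seq; xs is untouched
    xs.sort(system.cmp[int])              # sorts xs in place
    echo ys                               # @[1, 2, 3]
    echo xs                               # @[1, 2, 3]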

That being said, I fucking love Nim. It's so fun to write, it's super fast, it's quite legible, and you can do literally anything. It's had shared objects (dlls) since I started using it, so Ruby <-> Nim is possible and fun. It's very debugable and once it hits 1.0 and gets more mainstream use I'm sure it's going to be a really popular language.


The deal breaker for me is the variable name normalization:

userName == user_name == username == uS_eR_nA_mE

It used to be totally case-insensitive; now it is case-sensitive for the first letter only, and underscores are ignored. Two identifiers are considered equal if the following algorithm returns true:

    # roughly the rule from the Nim manual; as runnable code it would need: import re, strutils
    proc sameIdentifier(a, b: string): bool =
      a[0] == b[0] and
        a.replace(re"_|–", "").toLower == b.replace(re"_|–", "").toLower
For me it is a capital violation of the "principle of least astonishment".
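Concretely (a tiny sketch), all of these spellings resolve to the same symbol:

    var userName = "alice"
    echo user_name   # same identifier as userName under these rules; prints alice
    echo username    # also the same; prints alice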


I actually love this. It makes it easy to write code in either a JS context (where you want camel case) or Ruby(ish) context. I really don't see how this is a deal breaker. It's one of the first things you learn when you start with Nim and I've never had a problem with it.

To me this is kinda akin to the people that complain about the magic in Rails or that Ruby includes so many things by default (like rand or sleep) without working on a couple of projects first. Those really aren't the problems that slow you down in Ruby and this isn't really something that is an issue in practice.


It's a huge problem because in many cases you have no idea that you are reusing a variable, because in every other language you wouldn't be reusing a variable.

This, plus the crazy global namespace pollution: if you import something, everything is imported from that module. I have no idea what to name things because everything collides with everything else.

In Python, things are simple. Classes are 'CapitalCase' and functions are 'lower_case'. If I did the same thing in Nim, they would collide. Also, in Python it is considered a terrible idea to do

    from some_module import *
in Nim, it is the way things are done, although there are alternative methods that don't do this.
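For reference, the alternatives look roughly like this (sketch):

    # default: every exported symbol from strutils becomes visible unqualified
    import strutils
    echo toUpper("nim")        # NIM

    # restricted: bring in only the module name, then qualify everything
    from os import nil
    echo os.getCurrentDir()

    # or cherry-pick individual symbols
    from sequtils import filterIt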

also, the weird

    var
        a, b = 0
Perhaps you get used to it with time. But still, namespace collision is always a problem for me, especially when working with others' libs.

However, it is currently the most painless way to write fast code and integrate with Python, which says a lot.

Edit: Looks like first letter matters now, so ignore that part of the comment.


I've never had a namespace collision problem. Not when reusing or when importing. When was the last time you used Nim?


Yesterday. But I ran into the issue last week.

I was reusing the "paramStr" variable. This is a built-in in the os module. I defined "param_str" in my module. Then I wrote another module that was using the first one. Then I realized I was overwriting an os module function.

I actually have no idea how such collisions are handled. What if I import another module that overwrites the OS module? What if a 3rd party lib uses some variable that another 3rd party lib also defined?

A whole set of problems would be gone if such importing was looked down upon. I mean it is exactly the same situation in Python (but not exacerbated by the name canonicalization), and people realized a long time ago that doing this is not a good idea.


Coming from Python, I'm also surprised by the way Nim imports, but I realised that it's not the same situation, because Nim uses unified function call (UFC) syntax to implement its method calls. Methods are just functions that, thanks to the UFC sugar, can be used as methods, so everything needs to be imported from a module in order for those methods to be visible in the calling scope. This works better than you might expect, since the correct function normally gets called thanks to static typing. Of course this doesn't fix variable name collisions. I'd need to program more in Nim to decide how much of a problem this is.
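For example (sketch; greet is just a made-up proc), these all end up as the same call:

    proc greet(name: string, punct = "!"): string =
      "Hello, " & name & punct

    echo greet("Nim")       # plain call
    echo "Nim".greet        # method-call sugar: the first argument moves in front
    echo "Nim".greet("?")   # works with further arguments too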


Having looked at the tutorial, I can see now that variable name collisions (i.e. if defined in two imported modules) are caught as errors by the compiler. Also only deliberately exposed symbols get imported into the namespace which reduces the namespace overlap considerably.


It destroys the ability to search. You have to use a crazy regex to search for all instances of a variable.


Because it's such a deal breaker that you can't have "userName" and "user_name" in the same scope. This was brought up in the other thread, and it's just as silly now. Rejecting an absurd naming scheme is a feature.


For those who would like to read the previous thread: https://news.ycombinator.com/item?id=10931800


Urgh. I wish languages wouldn't do this. Things which look different should be different.

At least it's case sensitive for the first letter, which means that the traditional lowercase-for-identifiers, capitalised-for-types naming scheme works.

I've tried programming in Ada, which has case insensitive identifiers; it was deeply painful.

Edit: Plus, of course, case sensitivity is locale dependent. Are I and i the same letter? Not if you're Turkish. (Upper case i is İ; lower case I is ı.)

Yes, this matters. http://www.theinquirer.net/inquirer/news/1017243/cellphone-l...


I would be interested to hear your experiences with Ada, what in particular made case insensitive identifiers painful. Can you give some examples?


So uppercase Turkish `ı` is the same as ASCII `I`?

Go and Haskell also have some special (unconventional) treatment of capital letters, which probably makes them biased towards the English language.


Yes. And uppercase Turkish `i` (which is same to ASCII `i`) is not ASCII `I` but Turkish `İ`. Obviously this behavior is very locale-specific.

> Go and Haskell also have some special (unconventional) treatment of capital letters, which probably make them biased towards the English language.

In Ruby an identifier is considered to start with an uppercase letter when it really starts with an uppercase letter or underscore; that probably is the best compromise for unicase scripts including CJK scripts.


It's a weird language bug. I wish English (and my own tongue) was as consistent with the dots as Turkish is. ;)


How does the garbage collection work when transpiling to C?


Probably just has a conservative Boehm-ish GC. Their website is a bit sparse on the GC details from what I can tell, but anything else would be rather surprising.


A bit more interesting:

"The basic algorithm is Deferred Reference Counting with cycle detection. # This is achieved by combining a Deutsch-Bobrow garbage collector # together with Christoper's partial mark-sweep garbage collector. # # Special care has been taken to avoid recursion as far as possible to avoid # stack overflows when traversing deep datastructures. It is well-suited # for soft real time applications (like games)."


The garbage collector looks to be written in Nim: https://github.com/nim-lang/Nim/blob/devel/lib/system/gc.nim


In fact, there are five (!) different GCs you can try:

--gc:refc|v2|markAndSweep|boehm|go|none


EDIT: I realized the post was written before Rust's governance was diversified (https://github.com/rust-lang/rfcs/blob/master/text/1068-rust...). Rust at that time was "Mozilla led", but this isn't really true now.

> Rust is pushed by Mozilla, Nim is a grass root “true open source” language. Having a “backer” might be good, but I like grass root efforts since they tend to be more open and easier to participate in as a “full” member.

> UPDATE 2014-11-08: I know both languages are open source and have communities. But it remains a fact that Rust is lead by Mozilla (I would be surprised if not) and Nim is lead by its community.

Whilst the majority (not all!) of the core team consists of Mozilla employees, the wider governance does not (http://rust-lang.org/team.html). Anyone can participate in RfCs, and IIRC people can also request to join subteam meetings (not exactly sure how this works), though the main _discussion_ is out in the open on the RfC anyway. Additionally subteam composition may grow or change over time.

Mozilla certainly is backing a large chunk of Rust development, but as time passes this is becoming less and less true, as large features are being written by community members. As far as decision making goes, it's pretty open now, with a defined and completely transparent process.

So "Rust is led by Mozilla" isn't exactly true anymore. The core team isn't 100% Mozilla, and the overall governance is even less so.


> Most advanced statically typed languages completely SUCK in the programmability department. Unless you are some genius of course.

Is this a common opinion? I've certainly heard the criticism that statically typed languages slow you down because you're fighting the compiler, but not that they're only for exceptionally smart people. That's a troubling conclusion and potentially a self-fulfilling prophecy.

I'm curious how the author came to this opinion. Every "advanced" statically typed language I can think of isn't exclusively complicated type constructs; it's very much possible to write programs with simple type constraints. I wonder if this opinion stems from a leak of Haskell's popular reputation.


I think the author was referring to programmability as in language extensibility. So the "geniuses" he references would be C++ template metaprogramming wizards. I don't think he was claiming that statically typed languages require you to be a genius to use.


Yeah, well, I actually meant both. One can't claim C++ is an "easy language" no matter what you are doing with it. And let's face it, most languages that are considered "easy to use" are dynamically typed. So yes, I do think it's a common opinion. And yes there are surely exceptions you can find :)


In my opinion, if a language really wants to compete with Go in my space (devops/infrastructure), then it absolutely _must_ have static-linked binaries. This feature has been such a huge win, and I continue to appreciate it every time I go back to something else.

As far as I can tell Nim doesn't have statically-linked and compiled binaries out-of-the-box. Even if it wins in other language-specific features, Go will continue charging on in the infra space.


Static linking is one of the worst possible things you can do in terms of system security. When you run N applications that are all statically linked against library Foo and a 0-day drops for Foo, how does a systems administrator identify the applications that statically link against a vulnerable version of Foo and then patch it? Suddenly that sysadmin is dependent upon N upstreams to provide security fixes for a library they don't even maintain! Static linking is only feasible in the "app" world of proprietary software that gets shipped to customers without any concern for their safety.

Go is a bold step backwards in so many other ways, but this is one of the most egregious, especially since support for dynamic linking appears to be only an experimental feature.


If you have a good CI pipeline, "patching" is all automatic.

A security bugfix just got pushed for Go 1.5. Since all my apps are dockerized based on the official golang image and are built by CI, as soon as the fix was published it was included in the next commit and pushed as part of that build. No intervention required. Similarly all dependent packages are pulled as part of the build so patches/fixes in them are automatically updated as well (note that vendoring would break this).


"as soon as the fix was published it was included in the next commit and pushed as part of that build"

Lovely for your developers, or if you're deploying it yourself as part of a service. Not so hot for people who are actually running the code you shipped them last week, on their own hardware. Monolithic statically-linked blobs aren't doing them any favors.


I would heartily disagree, as a guy that runs the code.

One of the benefits of these statically compiled binaries is that they are not coupled to each other. Everything doesn't have to be upgraded at the very same time.


There's good coupling and there's bad coupling, and a semi-famous quote about learning the wrong lesson.

"If a cat sits on a hot stove, that cat won't sit on a hot stove again. That cat won't sit on a cold stove either."

It sounds like you've been burned by bad coupling. Sure, upgrading one thing and having something else break sucks. However, having to upgrade a hundred things because that one thing that's statically linked into all of them (like your SSL library) had a bug also sucks. It's like Scylla and Charybdis. They're both bad, but it is possible to steer between them.

The dual nature of this problem was recognized decades ago. It's why we came up with things like API versions and even the symbol versioning that you probably don't realize is already supported by your system's loader. Responsible developers can use these things to make sure that compatible fixes take effect where they need to without relinking, while incompatible changes affect only new code compiled and linked with the new header files etc. That's good coupling - the kind that reduces risk and total work.

Unfortunately, responsible developers are in short supply. Worse, faced with a plethora of irresponsible developers who don't know how to make a change without breaking stuff, a lot of people are reaching for the ugly workaround - static linking, including the kind that's disguised as containers - instead of solutions that will last. They've veered away from Scylla, right into Charybdis's maw. I made that mistake once myself once - perhaps more than once - until I realized I was getting just as burned by the "solution" as I was by the original problem. I suppose more people will have to go through that same process before the situation really improves.


> Responsible developers can use these things to make sure that compatible fixes take effect where they need to without relinking, while incompatible changes affect only new code compiled and linked with the new header files etc. That's good coupling - the kind that reduces risk and total work.

I admit I do not know very much about this. Thanks for sharing! I admit I'm definitely not the most competent developer but I try.


I've yet to see any organization that is planning to be around for more than 10 years buying into this.

> all my apps are dockerized

That might be true in a 2-year-old startup with 5 employees. Technologies change every few years. Linux distributions have been providing upgrade paths for software for two decades by unbundling components and tracking dependencies.


I've seen security issues introduced by dynamic linking too, so I don't think this is so cut-and-dry. The huge number of package updates needed when a core library has a security vulnerability is a big downside, but having programs that actually work is the upside.

Have you looked at what Steam does to prevent system libraries from accidentally getting linked in and breaking games? This is an example of the downsides of dynamic linking.


Nim uses your C compiler to produce the final binary, so it can produce statically linked standalone programs everywhere a C compiler can (which is pretty much everywhere).


Exactly how self-contained do you want your binaries to be? Binaries built with Nim will be dynamically linked against libc, as well as any other C libraries you use (e.g. OpenSSL, PCRE), unless the Nim compiler gives you a way to pass the "-static" option to the linker. But Nim modules themselves, including the Nim standard library, are statically linked. So the resulting artifact is much more self-contained than it would be for, say, Python, Ruby, Node.js, or a JVM language.
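For what it's worth, the compiler does let you hand flags straight through to the C linker, so a fully static build looks roughly like this (untested sketch; assumes your libc of choice tolerates -static):

    nim c -d:release --passL:"-static" app.nim   # --passL forwards options to the linker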


Nothing prevents you from static linking AFAIK:

https://www.schipplock.software/p/static-linking-with-nim.ht...

Googling a bit finds more on the subject, for example to go very small:

http://hookrace.net/blog/nim-binary-size/


What about when Julia gets those things?


I thought Julia was interpreted and/or ran against a required pre-installed runtime?


For now...


I'm really excited to try Nim out in a project. It's been at the top of my list for a while, despite the evident rise of Elixir as the next hot new thing. Really love the Pythonish syntax and the ability to compile down to C and run as a native program.

Edit: just noticed this was posted on Oct 20th, 2014. Nim has been evolving a lot over its short lifetime and it's possible that some statements in this article are no longer pertinent.


Give it a shot! I finally sat down and did it recently, porting an old simple pygame: https://github.com/Jach/dodgeball_nim_pygame_comparison I'm glad I did.


Nice! I really like your analysis of Nim in the readme too. As somebody coming from Python, a REPL has been on my wishlist for Nim for a while now :)

One suggestion: you can get rid of the makefile and use `nimble build` to build instead. Take a look here for info on how to create Nimble packages: https://github.com/nim-lang/nimble#creating-packages. You should basically be able to execute `nimble init`, then add `bin = @["dodge"]` to your .nimble file.
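For reference, the resulting .nimble file would look something like this (everything except the `bin` line is a placeholder):

    # dodge.nimble (sketch)
    version     = "0.1.0"
    author      = "Your Name"
    description = "Pygame dodgeball port"
    license     = "MIT"
    bin         = @["dodge"]

    requires "nim >= 0.13.0"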


"I ended up sifting out the 5 most interesting in my not so humble opinion - Go, Rust, Dart and Julia."

Five?


There are only two hard things in Computer Science: cache invalidation, naming things, and off-by-one errors.


Now I'm going to have to link to one of my favourite sources of engineering wisdom, The Codeless Code (which deserves to be way better known than it is).

http://thecodelesscode.com/case/220


This is absolutely fantastic.

http://thecodelesscode.com/case/128


Or else there's some new language called ''.


An excellent April Fool's Day project idea. Wait, nevermind... https://en.wikipedia.org/wiki/Whitespace_(programming_langua...


Or intellectual property protection. Their whole concept is hiding the message in the noise. (paranoid voice) "Don't read the April-related stuff! Highlight the page to see the hidden comments! The real reason they created it!"


You haven't heard of and?


Ha! That's funny, no one has noticed that before you :) I wonder if I was thinking of a fifth or not...


I disregard languages that import into the global namespace by default, because such programs tend to create spaghetti-nightmare inception.


It's interesting to see how some URLs are already invalid. For example the ones pointing to nimrod-lang.org


I will fix it, Nimrod was renamed.


I tried Nim last year after getting burnt out on the Rust beta. My impression is that it has a lot of the same issues Haxe has, in that it is essentially a transpiler. The specific problem I hit was that returning a list or vector in a particular way caused strange C++ code to be generated.

It would be cool if they redid the backend so it functions as a real compiler with LLVM.

The transpiler issues + the smallish community pretty much scared me off. I am not sad I "missed it" even if it was a cool experiment.


The C code generator is definitely much more mature than the C++ code generator. Sorry to hear that you had problems with it. Did you report it as an issue on GitHub? If not, do you still have the code somewhere? I'd be more than happy to take a look and submit a bug report for you if you don't have the time.


Filed, and probably fixed now (I hope). As the sibling mentions, it sounds like there is LLVM support coming/available?

It is probably superstition instilled by my compilers course, but having a byte-code-producing compiler does tend to keep you honest.


I can however mention that with the Urhonimo project (currently dormant, but still quite cool) we had no issues with using the C++ backend. Although Andreas did make a bunch of improvements to c2nim and perhaps the Nim compiler too IIRC.

And oh, notice there actually is an LLVM backend brewing now!


Thats awesome. Sounds like it is time to take nim for another spin then if that is true.

Not knowing the syntax or semantics is a great way to fuzz a compiler so my guess is I was trying to do insane things. I had similar issues with the rust beta.


Rust with Nim syntax and I'm sold!


https://github.com/mystor/slag

(To expand on the README, this project was mostly made as part of a silly bet; you shouldn't actually use it.)


This looks awesome.


Rust with a more Nimish syntax would be good, but what I really think I want for day-to-day use is Nim/Go.

Nim's syntax regarding whitespace. Its easy const/let/var, if/when, and hopefully eventually proc/func distinctions. Its unified function call syntax. Its optional garbage collector. Sized arrays.

Go's handling of accessors for structures inside structures and structural typing.

But I would like to add Rust's use of '!' for macros and Python's slicing syntax.


> Go's handling of accessors for structures inside structures and structural typing.

Could you give an example of this?

> But I would like to add Rust's use of '!' for macros and Python's slicing syntax.

Have you seen the Nim `a[x .. ^1]` syntax?


It's been a while since I've used Go, but I seem to recall that you can say foo.baz instead of foo.bar.baz if baz is a unique name in that struct hierarchy, which makes choosing composition over inheritance nicer. As for structural typing, that just means you declare interfaces that functions can accept, and any struct that can fulfill that interface does; sort of like static duck typing, but all the used methods are checked at compile time. More generally, I'm not a fan of using inheritance as a way of achieving polymorphism, and I like Go's approach here more than Nim's.

No, I hadn't seen the 'a[x .. ^1]' syntax. Does that mean 'The items of a from x to the second from the last'? If so I'm glad that the semantics are there but Python's syntax is still nicer.


Have you seen Nim's concepts? [1] They sound similar to what you're describing.

The semantics are slightly different from Python. 'a[x .. ^1]' means 'The items of a from x to the last element'
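Concretely (sketch):

    let a = @[10, 20, 30, 40, 50]
    echo a[1 .. ^1]   # from index 1 through the last element: @[20, 30, 40, 50]
    echo a[1 .. ^2]   # ^2 is the second-to-last element: @[20, 30, 40]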


Oops, forgot to include link to concepts: http://nim-lang.org/docs/manual.html#generics-concepts


And if Nim replaces inheritance with concepts and composition I'd be quite happy. But I think they'll end up existing side by side and most things will use inheritance based polymorphism.


my list of 5 most interesting languages would be

    * Rust
    * Perl6
    * Clojure
    * F#



