Comparing Rust and Java (llogiq.github.io)
268 points by Manishearth on Feb 28, 2016 | hide | past | favorite | 114 comments



I love both Java and Rust, and this is a great write-up! I've been writing C++ recently, and when I run into unexpected undefined behavior at runtime I very much miss Rust and Java, since it wouldn't have happened in either. I'm just not as good at C++ as I am at the other two, so that explains it. It does feel freeing in C++ not to have to worry about borrow checking, but I know it will get me into trouble I would have avoided with Rust. I can see the use case for both: sometimes you just want some code without worrying about doing it perfectly, and sometimes you want to build a nice big application where things are fine-tuned and as bug-free as possible, in which case I'll probably choose Rust.

Of these three, though, nothing in my mind really beats the ecosystem around Java in terms of IDE support, tooling, and sheer simplicity in getting code out fast when you don't need low-level access to things. Netty brings great things to Java, and the many mature web frameworks for JVM languages are probably better than anything available for Rust right now. I'm not sure a sane person would write the backend for a web site in C++. :P

Anyway I love both Rust and Java 8 and it's great to see them thriving.


Speaking of tooling, correct me if I'm wrong, but one thing I miss when working in Java (I haven't worked much in Java) is an intuitive build/package manager. Java build tools seem painful to use outside an IDE. With Cargo, on the other hand, I can get started in a minute.


I have found the opposite to be true in my experience. IDEs have some level of trouble with the most popular Java build tools (Maven and Gradle), while the tools themselves are quite capable on the command line and know how to work well with IDEs (each has plugins to create the 'meta' files for the most popular IDEs).

What issues have you had with which build tool when using it outside the IDE?


IDEA's Maven and Gradle support is pretty good. I haven't had to use any Maven plugins to generate project files for an IDE since IDEA 7. But yeah, mvn and gradle are the definitive tools; the IDE support is a nice-to-have.


IDEA's native Gradle support is excellent. I haven't had to use Gradle's IntelliJ plugin (the Gradle bit that generates the IDE files) for at least two years now. All I have to do is (on Mac OS X):

    open -a /Applications/IntelliJ\ IDEA\ 15\ CE.app build.gradle
At work, we have a Gradle project with 346 submodules and about 200 engineers working on the codebase. The Gradle/IDEA combination works beautifully.

There are a few rough edges, especially when one uses custom source sets. IDEA 16 is going to get better support for that [1].

[1]: http://blog.jetbrains.com/idea/2015/12/intellij-idea-16-eap-...


> IDE's have some level of trouble with most popular Java build tools (Maven and Gradle)

Can't speak for Gradle, but NetBeans has worked excellently with Maven for years.

IDEA seems to work nicely as well (I haven't used it much in the last few years, but my coworker across the table seems to like it with Maven too).

As for Eclipse, I haven't used it regularly since 2012, so I don't know.


Same here. Maven works great from the CLI, and Eclipse works great as a dumb-ish consumer of the dependencies and Maven reactor stuff in which to write code. But I don't use Eclipse for real build / test / package tasks, nor is the IDE a very nice place in which to configure the project skeleton.


Package management with Cargo is one reason that the Rust ecosystem will evolve very rapidly. It reduces a lot of friction and makes it easy to test out new libraries and approaches. This will pay off in the long run as the community starts to fill in the gaps.

Another underrated part of the Rust ecosystem is their RFC process, the multiple development streams and the six week dot-release iterations.


Maven is the best build tool I've used, with any language. You have to really embrace its declarative, convention-over-configuration style, but once you do it's wonderful: just declare the groupId, artifactId and version, and any dependencies, and there, you're done.
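For anyone who hasn't seen one, a minimal POM in that style might look like the following; the coordinates and the single dependency are invented for illustration:

```xml
<!-- Hypothetical minimal pom.xml; everything else (source layout,
     build lifecycle, plugin defaults) comes from Maven's conventions. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>demo</artifactId>
  <version>1.0.0</version>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```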


I think SBT is better than Maven. It allows interactive development.


Maven allows interactive development too when using the IDE integration. An IDE is in a better position to have that kind of information and integrate it well (e.g. an IDE can run tests on save without having to watch the filesystem for changes).

SBT is not adequately documented, relies too much on symbol-heavy syntax, and allows arbitrary code in build definitions.


Symbol heavy syntax as opposed to symbol free xml? Could you elaborate?


XML uses three symbols: <, >, and /. And it's a well-known, documented standard. Looking at the last project I depended on, its build.sbt uses :=, %, %%, /, <+=, <<= (in addition to standard Scala symbols like +=, ++=, ++, =>, <=, >=, which an inexperienced developer wouldn't necessarily realise weren't part of SBT's own syntax). That's not an exhaustive list (I've used :+= and :++= as examples in another thread) and I have no idea where to find one. Heck, I've been a professional Scala developer for 5 years and I have literally no idea what <+= or <<= mean.


You forgot &entities;

And there is oft-documented overuse of custom operators among Scala programmers; however, I don't have sufficient expertise to gauge the resulting problems in practice.


> You forgot &entities;

I did; in fairness I don't think I've ever seen one in a maven pom.xml


You should try Cargo


Gradle is the best general-purpose build system I've seen. There's a bit of initial overhead, but as soon as you need something more than the basics you're very glad it's done the way it is. Not without flaws, but better than other "API" build systems I've used (WAF, CMake).


Assuming you have a quad-core and enough GB to spare.

I have yet to see Gradle perform as fast as Ant or Maven without fine-tuning and background-daemon tricks.


What are these fine-tuning and background-daemon tricks? I have yet to see Gradle run as fast as Ant, period.


What Gradle advocates present in their sessions: leaving a running daemon, configuring the JVM memory, changing the way the build is done, and so on...

Still, the end result is as you say.


I like Gradle too, although I very much recommend having a look at QBS (Qt Build Suite; unlike qmake it's not really bound to Qt and can be used for general build scripting). I used it about a year ago and was amazed to find what was probably the first sane build tool I'd used in years. Of course Gradle is rooted in (yet not limited to) the JVM ecosystem and QBS started as a C++ build tool, so they are not really equivalent, but comparing the ease of use, predictability, and the overall impression, QBS beats Gradle by a full grade in my opinion.


You do know Gradle uses Groovy as the syntax for its build files, right? The people behind Groovy were retrenched from their jobs working on it a year ago, and haven't found any business since then that will pay them to continue improving Groovy. Gradle doesn't mention Groovy much on its gradle.org website, perhaps out of embarrassment.


No matter what all the other commenters here are saying, Maven and Gradle are awful. Cargo is much more pleasant to use.


In my opinion cargo > maven/Gradle > most other build/dependency systems I've used, including setuptools, cmake, and others.


I didn't have terribly many issues with Gradle and consider it pretty good. What about it do you consider awful and how does Cargo solve those issues? What features does Gradle lack that stick out to you in Cargo?


Have you considered D? It's pretty much meant to be a successor to C++; it has a GC (which can be turned off) and is compiled as well. I feel like I'm writing compiled Java when I code in D, if that gives you a nicer perspective on it.


AFAIK, turning off GC in D is not feasible whenever using large portions of the standard library, but there were efforts underway to change that. Do you know if there's been much progress?


Lots of progress, but there's still work to do. If the GC is unacceptable for your application, slap `@nogc` on your `main` function and you're guaranteed to not use it.


I'm writing OpenGL code. I know lots of languages claim support for OpenGL, but in my opinion dealing with gpu stuff is something best written in C or C++. All the guides, all the answers to questions, all the popular books, everything out there assumes you're using C or C++ when you're working with GPU code. It's fine. I like using C++ for this. OpenGL in any other language and you're a second class citizen at best.



D, much like C++, interfaces with C; see:

https://dlang.org/spec/interfaceToC.html

Walter Bright, D's creator, had previously written a C++ compiler, but he decided to break backward compatibility with C syntactically so the language could evolve in ways that C restricts, while still allowing it to interface with C. Sorry if I worded this poorly, but you may want to reconsider D and see for yourself whether it feels like a second-class citizen. There are sample projects interfacing with C libraries (LuaD comes to mind).


I first heard about D from Gamedev.net, so you can certainly do OpenGL with it. I believe there have even been commercial games written in D.


Kenta Cho's shooter _Torus Trooper_ is written in D and might make a good example. (It's a damn good game, too.)

https://www.youtube.com/watch?v=PJMTQmqxBqg

http://www.asahi-net.or.jp/~cs8k-cyu/windows/tt_e.html

Also, ported to Linux: http://www.emhsoft.com/ttrooper/


What is the tooling and library ecosystem like for D compared to Java?


D programs can make direct (=zero overhead) calls to C functions (like we do in C++). Which basically means you have access to any C library.

The D standard library is very dense (it makes the C++14 standard library look ridiculously incomplete), so most things are available out of the box (threads, processes, filesystem, networking, random numbers, algorithms), in a portable way.


There's a Visual Studio plugin available, and I know VS Code has a nice plugin as well (you need to install certain tools yourself in that case), as well as other editors with their own feature sets. There is also Dub[0], a build tool for D that gives you access to all sorts of D packages. I'm still a beginner with D, so forgive me if my answer doesn't suffice.

[0]: https://code.dlang.org/getting_started


I recently had need to write some native Windows software, and I did want to use Rust initially, since I don't fully trust myself to write correct C++.

I ended up still writing it in C++ since Rust's support for writing COM objects/servers was not great, but I also ended up having to chase down a bug where I forgot to AddRef before storing a COM pointer in a map.



I did use CComPtr, but it only reduces the number of footguns; it doesn't get rid of them. In my case, I accidentally stored a reference to the inner pointer instead of calling Detach.

My point isn't that there aren't nice idiomatic ways of writing nice windows/c++ code that are not fragile, it's that unless you're well versed in them, you can easily screw things up pretty badly.

[EDIT]: In case it's not clear, I am not a windows C++ developer, I've read a decent amount of windows C++ code, so I sort of know what's going on, but I've written less than 5k lines of windows C/C++ in my life.


Fair enough. I've known Windows since the Win16 API days (I started with Windows 3.0).

I would run into similar issues when coding for Apple platforms, for example.


I also had to do that recently and used Nim instead after spending too much time on it in C. For anyone attempting to do the same, I suggest 32bit Nim + msys2 + nimble install oldwinapi.


> when I run into unexpected undefined behavior during run time I very much miss Rust and Java as neither would have happened in either.

You obviously haven't dealt with runtime dependency injection etc. in Java.

It happens regularly.


"Undefined behaviour" is a specific term of art in the context of C/C++, and is a euphemism for your whole program state potentially becoming corrupted. With crafted inputs, your program can start executing attacker-supplied arbitrary code.


Can you give examples? Dependency Injection seems to be the trend going forward as far as good practices go (Play Framework for instance is migrating to pure Dependency Injection for routers/controllers).


It might be a reference to debugging Spring autowire.


Thank you! I'm glad you like it.


Great writeup, especially that you are really fair to both and make good statements.

> Java has a lot going for it, and I probably will keep using it for some time.

Me too. And I will always keep an eye on Rust, especially since I also use Scala. ;) However, the one downside of Rust is that crates.io doesn't have things as powerful as Java has in Maven. Then again, comparing the libraries of a 20-year-old language against those of a language only a few years old is somewhat unfair. What I would like is Java <-> Rust FFI.


This may make you happy then: https://github.com/sureshg/java-rust-ffi


Heh, I wonder if I could write a Minecraft mod in Rust with this


I have written a native extension library for Java in Rust. It's really no different from doing it in C. Mind you, it was pretty simple. The only slightly bizarre bit is matching the Java types to the libc equivalents, and telling rustc that you don't want to enforce snake-case naming.
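For the curious, here is a minimal sketch of what such an entry point can look like with raw FFI and no helper crates. The `Demo` class and `add` method are invented for illustration; the Java side is assumed to declare `class Demo { static native long add(long a, long b); }`.

```rust
use std::ffi::c_void;

// JNI requires the exported symbol to be named Java_<Class>_<method>,
// which is why snake-case naming has to be suppressed here.
#[no_mangle]
#[allow(non_snake_case)]
pub extern "system" fn Java_Demo_add(
    _env: *mut c_void,   // JNIEnv* -- treated as opaque in this sketch
    _class: *mut c_void, // jclass -- also opaque
    a: i64,              // Java's jlong maps to i64
    b: i64,
) -> i64 {
    a + b
}
```

Compiled as a cdylib and loaded with `System.loadLibrary`, the Java side can then call `Demo.add` directly.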


Thanks, glad you like it.

> crates.io doesn't have things as powerful as Java has in Maven

Could you elaborate on this? I'm only using maven in a few projects and have only shallow experience with it.


Yeah, that was badly written. I meant that Maven has far more powerful libraries than crates.io. Most things on crates.io aren't really "stable" or "useful" (yet); they're at a really early stage, while in Java you have access to thousands of production-ready, tested libraries, e.g. PDFBox, Netty, Akka, etc.


Ah, yes, that's true. The Java ecosystem has had much more time to mature, and it shows.


> It may be harder to write Rust code than Java code, but it’s a lot harder to write incorrect Rust code than incorrect Java code.

This. 10x.

And, if Rust's type checker could be integrated with an SMT solver to statically verify array/vector indexing (instead of performing bounds checks at runtime, or using unsafe code), that 10x would become 100x.



Well, yes and no. You can prove all the properties liquid types can without actually using liquid types, just by running the algorithms that infer them. Either way (inferring with or without types), the problem of indexing is undecidable and cannot be verified in the general case (though it can in many common usages).


The point to using types is that they make static analyses more compositional. Types enforce invariants across module boundaries, without requiring the entire program to be checked in a single pass.


Yes, but they may also lose information in the process... As usual, it's a tradeoff.


Yep. That was exactly what I had in mind.


I'd personally like to find the time to write this – some of the infrastructure (like call graph analysis) is already available.

However, I think this is the nuclear option. I have some experience from university where I wrote some Eiffel code, contracts and all that jazz. It often takes some head scratching to figure out the right pre- and postconditions.

I'd actually rather see more safe interfaces for the common cases (like iterators and slices) that are ergonomic to use and may actually carry the proof in their types.


Eiffel contracts are still checked at runtime, which kind of defeats the point - the real payoff of enforcing an invariant statically is not having to check it dynamically! What I'm talking about is using a special kind of theorem prover called “SMT solver” to guarantee at compile time that your array indices are always valid, eliminating the need to perform bounds checking at runtime. In many cases, SMT solvers require little to no programmer-supplied annotations, which makes them more ergonomic than full-blown dependently typed programming languages like Coq, Agda, Idris, etc.


There are tools to do exactly that (I don't remember the name of the one we used at university, alas). And no, I do not want to run an SMT solver against anything but simple code, because my build is slow enough as it is. ;-)


Most index manipulation is fairly simple. For instance, if the result of `str.find(pat)` is `Some(pos)`, then you know that:

(0) `pos <= str.len()`.

(1) `pos` is actually the beginning of a character - it makes sense to split the string at the position `pos`.

These things are easy to check - there is not even any multiplication involved! - but a conventional type checker won't do. Much of the pain of using dependent types is the result of sticking to unification as the only basis for type-checking.
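Concretely, under those two guarantees the following helper can never panic; the key/value format is just an example:

```rust
// When `find` returns Some(pos), `pos <= s.len()` holds and `pos` is a
// char boundary, so `split_at` is guaranteed not to panic here.
fn split_on(s: &str, pat: char) -> Option<(&str, &str)> {
    let pos = s.find(pat)?;
    Some(s.split_at(pos))
}
```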


On the contrary, it's actually rather easy to use the type checker to encode this: Just make a wrapper type CharIdx(usize) that encodes the fact that the enclosed value is a valid char position and implement Index for it using unchecked indexing. Also implement Index for usize with the usual check for bounds and valid character position.
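A rough sketch of that newtype idea; the names are illustrative, and a real version would implement the indexing traits and use unchecked slicing internally:

```rust
// The boundary check happens exactly once, at construction; the type then
// certifies that the wrapped index is a valid char position.
#[derive(Clone, Copy)]
struct CharIdx(usize);

impl CharIdx {
    fn new(s: &str, i: usize) -> Option<CharIdx> {
        if s.is_char_boundary(i) { Some(CharIdx(i)) } else { None }
    }
}

// Safe indexing here for clarity; the real impl could use get_unchecked,
// since the invariant was already established by CharIdx::new.
fn tail(s: &str, idx: CharIdx) -> &str {
    &s[idx.0..]
}
```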


The problem of array overflow/underflow is undecidable in the general case. Various verification techniques (like SMT solvers) can only prove some usages, but not all.


> The problem of array overflow/underflow is undecidable in the general case.

Obviously.

> Various verification techniques (like SMT solvers) can only prove some usages, but not all.

Turns out this is good enough for most cases. Verifying 80% of your array index manipulations is better than verifying none of it.


Where do you get "good enough for most cases" or the 80% from? Let's see the hard facts rather than conjecture masquerading as such.

My personal experience points to opposite results of what you claim.


> My personal experience points to opposite results of what you claim.

Let's see the hard facts rather than conjecture masquerading as such.


My main use cases for manipulating indices directly are:

(0) Implementing lexical contexts and environments whose variables are identified with de Bruijn indices or levels.

(1) Implementing multidimensional arrays and operations on them (e.g., matrix multiplication).

In both of these cases, a solver for the theory of integer arithmetic can check that I'm using my indices right.


That's true for slicing and indexes, but hopefully you can use Iterator then for basic traversal, which does this.


Yep, when iterators do the trick, they're preferable to indexing. But some algorithms (e.g., the tortoise and hare algorithm for cycle detection) explicitly require indexing.


Not necessarily – you can chain iterators and skip elements on the hare.
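For example, here is one index-free sketch of Floyd's algorithm along those lines, zipping a single-step iterator against a double-step one; the function `f`, the start value, and the step limit are all illustrative:

```rust
// Tortoise advances one application of `f` per step, hare advances two;
// if they ever coincide (after the shared start), the sequence cycles.
fn meets_within(f: impl Fn(u64) -> u64 + Copy, start: u64, limit: usize) -> bool {
    let tortoise = std::iter::successors(Some(start), |&x| Some(f(x)));
    let hare = std::iter::successors(Some(start), |&x| Some(f(f(x))));
    tortoise
        .zip(hare)
        .take(limit)
        .skip(1) // both iterators begin at `start`; that's not a cycle
        .any(|(t, h)| t == h)
}
```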


You're right. I was thinking exclusively of the case where you have an array whose elements themselves are indices into the same array.


"so it's just-in-time compiled" and "As a fully compiled language, Rust isn't as portable as Java (in theory)"

Look, JIT and static compilation are just compilation mechanisms. You can apply them to pretty much any language you want. You can statically compile Java[1]. You can JIT Rust. You can compile Java with LLVM (Azul does it). You can interpret Java, Rust, C++, etc.

You can have hybrids where you start with an optimized statically compiled form and reoptimize it at runtime.

Don't ever tie current optimization mechanisms to comparisons of languages. If it's really important to gain performance, someone will do it.

For example, JIT of C++ is often not done because it's not worth it compared to automatic profiling and other reoptimization mechanisms. But it can be done :)

If you want to compare languages, compare languages. If you want to compare particular implementations of languages, compare particular implementations of languages. Don't say you are doing one and do the other, because it's confusing, and, well, if it matters enough, someone will come along and make you wrong.

:)

[1] If you are willing to guarantee no class loading. Otherwise, you can statically compile everything that is loaded, or run an interpreter, or do whatever you want with new code at runtime.


Most languages have a single implementation that everyone uses. Unless I'm interested in writing a new implementation (never), then I'm going to be interested in comparisons of languages based on their well supported implementations. Otherwise we get into discussions about sufficiently smart compilers (http://c2.com/cgi/wiki?SufficientlySmartCompiler). I get what you're saying, but there are tons of people who are justified in only caring about implementations.


Java, JavaScript, Python, C# — all have significant alternative implementations; never mind C++ and C.


But for the most part those alternative implementations have similar designs and performance characteristics to the most popular implementation. E.g. for Java, all major modern implementations use a JIT, and the language lends itself to a JIT runtime, as virtual-methods-by-default performs poorly when statically compiled.


Except for the reference JDK, all commercial implementations of Java also provide AOT compilers to native code.


> If it's really important to gain performance, someone will do it.

Counter-examples:

I love Python, but it's slow. People thought performance in Python was important and ... well, here we are years later and no one has made Python performant. PyPy is a splendid display of engineering and optimization, but it doesn't make Python truly performant on real world applications versus C/C++. It's more of a stop-gap to delay the inevitable (which is, if you find yourself needing performance, you'll eventually have to bite the bullet and re-write in C++).

The same goes for JavaScript. Massive companies have brought their engineering might down upon JavaScript to build what are perhaps the most impressive optimizing compilers of our modern age. Yet JavaScript is still slow, and now what are we doing? Turning to asm.js and WebAssembly...

So while it's true that any language could be performant, given a sufficiently adept compiler, the real question is whether such a compiler is actually practical. And more importantly, should we pour the world's engineering resources into building such a compiler, when we could just build a better language instead?


I think Lua + LuaJIT hits all the right bases.


Oh, come on. I think it was a nice article, and all the points you made were just pedantic. Of course we all know that, can't we just enjoy a nice article? Leave it to HN comments to say things like "Don't ever do this thing...".


In theory, a programming language only has a syntax and a semantics, and everything else is an implementation detail. In practice, programming languages are designed for specific use cases, and lend themselves to specific implementation strategies. For example:

(0) Type-checking C++ templates requires expanding them. However, nothing forces a C++ implementor to translate the individual template instantiations to machine code (or whatever target language is used). A C++ implementor could use a strategy similar to what is used in Java and C#, and re-instantiate every template on demand at runtime. Of course, nobody does this, because it's bad for the implementor (more work, because instantiating templates is more complex than instantiating generics, so better not do it at runtime if it's already been done at compile time), bad for the user (worse runtime performance), and good for no one (except perhaps C++ detractors).

(1) A Python implementor could use whole-program analysis to assign variables and expressions more useful static types than `AnyObject`, similar to what STALIN does for Scheme. But, again, this is bad for the implementor (more work), bad for the user (less interactivity and instant gratification, due to the constant rechecking of the whole program every time a change is made), and good for no one (except perhaps Python detractors).

Now, there exist language designs that are less biased towards a fixed implementation strategy. For example, Common Lisp, Standard ML and Haskell. But these languages are also markedly less popular, which perhaps suggests that programmers usually prefer languages that have a concrete story about what use cases their design optimizes for.


> A C++ implementor could use a strategy similar to what is used in Java and C#, and re-instantiate every template on demand at runtime.

Are you sure? Expanding templates can affect the parsing of subsequent code, as this example illustrates: http://yosefk.com/c++fqa/web-vs-c++.html#misfeature-3 You pretty much have to expand them at compile time.

(This sort of complexity illustrates why I prefer generics to templates, incidentally.)


What I meant is:

(0) Expand the templates at compile-time, in order to type-check the code.

(1) Don't generate target language code for each specific instantiation, though.

(2) Re-expand the templates (or some representation of them) at runtime.

It's a very silly implementation strategy, of course. But I don't see why it's fundamentally impossible to do this - it's just undesirable.

And, yes, I agree about generics being better.


Shameless plug, I wrote a blog article explaining the yosefk example above more explicitly: http://blog.reverberate.org/2013/08/parsing-c-is-literally-u...


I'd say that it's in a large part a question of marketing. SML and Haskell are largely academic languages, even if there is now Haskell running in production at different locations. Likewise, you can hardly accuse OCaml of having the kind of marketing muscle and reach that a company like Mozilla can bring to bear (also, it's pretty old, which makes it difficult to sell as the new shiny).


(1) has actually been done a decade ago, it was called "Starkiller"

http://people.csail.mit.edu/jrb/Projects/starkiller.pdf


> If you want to compare languages, compare languages. If you want to compare particular implementations of languages, compare particular implementations of languages.

Well, to be fair, it says this at the top:

This post compares Rust-1.8.0 nightly to OpenJDK-1.8.0_60

I think it's pretty clear it's comparing particular implementations of the languages.


The article compares the only currently available Rust implementation, in version 1.8.0 nightly, with OpenJDK 1.8.0_60, which is sufficiently similar to Oracle's JRE, the predominantly used VM; so I'm comparing the languages as they are used by the majority of users. If you think this unfair, I'd like to read your blog post where you enlighten us about your implementation of choice.

> ... if it matters enough, someone will come along and make you wrong.

I'll be gladly proven wrong if someone comes around and speeds up my JVM. :-D


Azul compiles Java with LLVM? Where can I read more about that? I thought I was familiar with their tech but I never heard about this.


Many great points. It is interesting to note that a Rust IDE, when it comes, is most likely to have a Java-based GUI.

> Both Rust and Java keep their generics to compile time.

I thought Rust generics are also available at runtime so they are different from Java.


AFAIK Rust generics are reified (where Java's are erased with typechecks inserted in various places) so the "generic type" exists at runtime, but there's no RTTI so whatever does exist is inaccessible at runtime.


So for users who mostly write enterprise type applications do Rust generics have anything better to offer compared to Java?


I don't think so, at least until specialisation lands.

When used as generic constraints they're statically dispatched, which might mean better performance, but I believe the JVM is pretty good at devirtualisation, so YMMV…

The only advantage I can think of is probably a lack of surprise when trying to use reflection and discovering your generic types are missing (but that's because there is no reflection in Rust).


> it is interesting to note that Rust IDE, when it comes, is most likely to have Java based GUI.

Why is that, exactly? I personally don't like Java-based GUIs; they are rarely well integrated with the base system. Qt comes closer.


Aside from Visual Studio, the most promising IDEs to host a Rust plugin are IntelliJ IDEA and Eclipse. There's a page with some more general info about the efforts to make Rust IDE ready/friendly/available:

https://www.rust-lang.org/ides.html


I'd prefer something like KDevelop, though to be honest I rarely use IDEs in general. Looks like they already have plans for it:

> there are plans to integrate basic Go and Rust support.

https://www.kdevelop.org/news/first-beta-release-kdevelop-50...


KDevelop doesn't have such good support for advanced type systems (for the perfectly sensible reason that the languages it's usually used for don't have them).


What exactly do you mean by "advanced type systems"?


Generics (templates work differently), HKTs (not in Rust yet, but something a long-term tool would need to be aware of), general ADT stuff (sum and product types - emulatable with generics and polymorphism).


Well, C++ templates are actually pretty advanced, so the fact that KDevelop was focused on C++ shouldn't cause this problem. Maybe it's just behind in that aspect, but not because of C++.


IntelliJ integrates pretty well on MacOS, at least.

Sometimes, when Apple changes up the theme, you notice it's not native because it still uses the old theme. But then eventually they update and it goes back to looking native. Or at least native enough that it looks good, which is ultimately what matters.


No, but you can have "trait objects" in Rust, which are boxed and use dynamic dispatch instead of static dispatch. However, they lose the original type (type erasure; another similarity with Java :)), so they aren't true generics.

For example, you could use the `Read` trait as a type:

    fn some_func(a: Box<Read>) { }
but then only the `Read` methods would be available on `a`, you wouldn't be able to access methods from the original type of `a`.


> so they aren't true generics.

Why does type erasure mean they aren't true generics?


They don't lose anything compared to monomorphized templates. You can only access methods of the trait in both cases.
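To illustrate: in both sketches below (using current `dyn` syntax), only `Display`'s interface is reachable, whether dispatch is dynamic through a trait object or static through a monomorphized generic:

```rust
use std::fmt::Display;

// Dynamic dispatch: one compiled body, calls go through a vtable.
fn show_dyn(x: &dyn Display) -> String {
    x.to_string()
}

// Static dispatch: a specialized copy is generated per concrete type.
fn show_gen<T: Display + ?Sized>(x: &T) -> String {
    x.to_string()
}
```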


Thanks. I need to spend time and learn Rust. The one thing stopping me is the lack of a small work-related project I could try in Rust.


If you don't have a hard requirement for manual memory management then there are good rust-like options on the JVM - Scala if you want java-like maturity and tool support, perhaps Ceylon if you want rust-like new-language elegance.


https://github.com/PistonDevelopers/VisualRust Rust development in Visual Studio.


I looked at this but never got past Hello World. Have you used it much?


A minor point on a great article, but "On the other hand, Java’s enum values are really singleton classes under the hood – so one can define an abstract method in the enum to implement differently in each value, something that would require a match expression (or hash-table, whose construction at compile time some enterprising Rustaceans have built a crate for) in Rust" isn't quite true -- you can implement methods on enums in Rust, too: http://is.gd/M3VJJR


I suspect the author was talking about defining methods for individual variants of an enum, something like

  enum Foo { A, B }
  impl Foo::A { fn method_on_a(&self) {} }


There is an RFC[1] for making enum variants first-class types.

[1]: https://github.com/rust-lang/rfcs/pull/1450


Exactly. I must admit that my wording was not as understandable as it should have been. I'll see if I can rephrase it to better convey the point.


One language I encourage people to try is myrddin http://myrlang.org/.



