EDIT 2: I tried to update the code, but it's really, really out of date, and it would be a ton of work. So I've just submitted https://github.com/kanaka/mal/pull/23 instead. :(
Well, I'm not going to remove it. But I will see if I can find some time to improve it in the next few days. I've been meaning to cycle back around. Of course, fixes from somebody who's actually a Rust expert would have been preferred :-)
I hope that you are misrepresenting Rust. I'd be pretty discouraged if I were learning a new language and one of the first interactions I had with a notable community member was a notice of take-down containing neither encouragement nor constructive criticism.
To be clear: I spent about half an hour working on updating things for head Rust. But when a language changes as much as Rust does, at some point, if you haven't kept up with it, it's basically impossible to update; it's simply easier to nuke it and start all over. This is why we have always recommended that people use the nightlies rather than the releases: even upgrading per point release is quite difficult. Not only that, since it's unclear which version of Rust this even compiles with, it's not as if I could install an old Rust and submit particular, concrete improvements for a Rust of that vintage. Starting over from scratch _is_ a concrete improvement, because it's the actual best path forward in this particular case.
Have I mentioned how much I can't wait until 1.0.0 actually ships?
I agree with you that this comment is quite harsh, but I'd like to point out that it's not a fair representation of steveklabnik's usual interactions with the community on reddit and IRC, as far as I can tell.
Klabnik is a big voice in the Rust community. But regardless of how he might come across here, he is almost always positive and constructive when talking about the language.
For that matter: the constructive part was in the other post where he suggested some improvements. I can understand if the pull request is seen as aggressive, though.
I'm quite happy to take PRs from an expert to address the issues and represent Rust better. :-)
UPDATE: I will point out that the README is pretty clear that this is Rust 0.13. That doesn't mean it's a good representation of Rust 0.13 either, of course, but it clearly isn't based on a recent version of Rust.
Yes, it was intended to be somewhat in jest. "The expert" was a subtle (perhaps too subtle) acknowledgement of his role. Although, I am serious that if somebody with better Rust chops than I have wants to fix up the code so that it represents Rust better, then I AM quite happy to pull those changes in and improve the implementation.
Don't feed the troll. :) He obviously has an axe to grind and isn't looking to have a rational discussion. He even chose to take the good fight to GitHub, on Steve's pull request to the kanaka/mal repo.
That did seem rather inefficient, but I wasn't able to discern a more efficient method at the time. I've kind of been waiting for Rust 1.0 to cycle back around. But I'll see if I might be able to address some of those issues and bump to the Rust 1.0 alpha in the next few days.
Note: the conversation about performance is kind of unfortunate. Those numbers should be considered VERY rough (they were just from a personal notes file of mine). Also, with --release, the numbers place Rust in the same range as the other compiled languages.
pcwalton: thanks to Alex Crichton, the implementation has been cleaned up and updated to the Rust 1.0.0 nightly. Yay! The feedback you noted has been applied to the code base. The performance has definitely improved some. Alex did some performance profiling, and it looks like HashMap performance may be the main bottleneck. The micro-benchmarks (and yes, I admit they aren't good tests) don't really test strings, so the improvement is probably down to the newer Rust version and the other cleanup that Alex did.
But if you want to take another pass through the code now that it's not based on an ancient Rust version, I'm always happy to take any advice from the master. :-)
Is the Lua implementation very new and/or incomplete? It seems to be missing from the benchmarks, and if it's Lua 5.1, maybe it'll work with LuaJIT?
[ed: Also interesting to note that the Clojure version is much slower than Scala/Java. If nothing else, I guess it's an indication of the performance gains that can be had by implementing parts of a Clojure program in Java (unless there's something off with the Clojure implementation, of course).]
I'm rather hesitant to post any sort of benchmarks (since the existing ones suck so bad), but using these, the Lua 5.1.5 equivalents would be: 1, 1, 293.
Lua does seem to be an odd one in that the short tests run quickly, but this does not translate into iterations for the longer 10-second test. Perhaps the mal implementation is triggering bad GC behavior or something, and that drags down the longer-running tests. That's just speculation, though. Again, please take the numbers with a mountain-sized grain of salt.
It's weird that people who make benchmarks don't investigate which flags to pass to get the most optimized build. Many compilers don't do maximum optimization by default. In particular, people making benchmarks with Rust code seem to either forget or be unaware of the `--release` flag.
Yeah, this is happening again and again in Rust: people publish Rust benchmarks without optimization on. In fact, we just decided this week to change "Compiling" to "Compiling (Debug)" in Cargo if optimization isn't turned on to address this problem. It's sad :(
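For what it's worth, the gap is easy to demonstrate with a toy program. This is a hypothetical micro-benchmark (not one of the mal tests), but timing the two builds of something like it shows how misleading an unoptimized run can be:

// Hypothetical micro-benchmark: build and time it both ways.
//   cargo build             (debug: unoptimized)
//   cargo build --release   (optimized)
fn main() {
    let mut acc: u64 = 0;
    for i in 0..50_000_000u64 {
        // cheap integer work that the optimizer can speed up dramatically
        acc = acc.wrapping_add(i ^ (i >> 3));
    }
    // print the result so the loop can't be eliminated as unused
    println!("{}", acc);
}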
If it helps, it doesn't happen only to Rust; it happens to all languages.
There is this misunderstanding about implementations versus languages, where people equate whatever they happen to have installed on their computer with all implementations of the language in question.
That, and not knowing about profilers and optimization flags.
Yeah, the performance benchmarks currently suck. They are neither statistically valid nor comprehensive, and they're often just run with default settings. That was really just a personal notes file that I shared with def- without clarifying that it really wasn't ready for publishing.
`cargo build --release` does more than just `-C opt-level=3`, actually. For example a regular `cargo build` also adds `-g`, and that's removed for `--release`.
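So roughly (treat this as a sketch; the exact flags vary by Cargo version):

$ cargo build              # rustc ... -g              (debug info, no optimization)
$ cargo build --release    # rustc ... -C opt-level=3  (optimized, no -g)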
My preference is to have unoptimized by default, unless there is only a negligible difference in compile-time between the two options. I mostly don't need optimized programs while I'm developing them.
Alternatively, don't have a default and let people opt-in to whatever default they like. That forces people to actually make a choice, instead of being lazy and publishing poor benchmarks without having even looked up what optimization knobs there are to turn on or off.
Can someone explain the huge performance difference between Nim and C? I thought Nim compiled to C, therefore I would think that good handwritten C would more or less be the upper performance bound for Nim.
One explanation could be "sufficiently advanced compilers". Reread your sentence in the context of C/asm:
Can someone explain the huge performance difference between C and assembly? I thought C compiled to assembly, therefore I would think that good handwritten assembly would more or less be the upper performance bound for C.
I agree with what you wrote: "I would think that good handwritten assembly would more or less be the upper performance bound for C." I doubt a good C (or assembly) programmer has bothered to play this particular benchmark game, so the results look strange.
Here's a similar, comically pathological example:
"Haskell compiles to C therefore good, handwritten C ought to upper bound performance for Haskell"
main :: IO ()
main = do
    -- x is bound to an unevaluated thunk; since nothing ever forces it,
    -- the list is never traversed and the program finishes immediately
    x <- return ([1..100000001] !! 100000000)
    putStrLn "done"
As Haskell is lazy and pure, it "automatically" eliminates `x` and returns almost immediately, avoiding all the work. The correspondingly written C code would do much more work, despite it all being wasted.
Of course, the human writing the C version might perform this optimization by hand, but similar things can arise in far less obvious places in more realistic code.
Any C compiler worth using will optimize out 'x' in that case. The details of why (laziness vs. code analysis saying it's unused) are pretty much irrelevant.
Saying "the human writing the C compiler" is an odd way to put it. It's "the human writing the Haskell compiler" deciding it shouldn't be evaluated in Haskell, too. Totally meaningless distinction.
Please elaborate. I don't see the difference between the person writing the compiler implementing laziness and the person writing the compiler implementing unused value removal.
It wasn't a direct port of the Python one. Instead, I mostly followed the guide and looked at the Python code when something was unclear to me. Nim being statically typed meant I had to create many MalTypes instead of just using the built-in Python ones. But nothing was particularly challenging.
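For anyone curious what that looks like, here's a minimal illustrative sketch in Rust (the thread's other statically typed language); this is NOT the actual mal code, just the general shape of the sum type you end up writing where Python would use its built-in objects directly:

// Illustrative only: a tagged union covering a few Lisp value kinds.
enum MalType {
    Nil,
    Bool(bool),
    Int(i64),
    Str(String),
    Sym(String),
    List(Vec<MalType>),
}

// Dispatch on the tag, the way a dynamic language dispatches on an
// object's runtime type.
fn type_name(v: &MalType) -> &'static str {
    match v {
        MalType::Nil => "nil",
        MalType::Bool(_) => "bool",
        MalType::Int(_) => "int",
        MalType::Str(_) => "string",
        MalType::Sym(_) => "symbol",
        MalType::List(_) => "list",
    }
}

fn main() {
    // (+ 1 2) represented as data: a list holding a symbol and two ints
    let form = MalType::List(vec![
        MalType::Sym("+".to_string()),
        MalType::Int(1),
        MalType::Int(2),
    ]);
    println!("{}", type_name(&form)); // prints "list"
}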