TruffleRuby on the Substrate VM (nirvdrum.com)
109 points by nirvdrum on Feb 15, 2017 | 25 comments



SVM is super cool, but there's a huge caveat: Not only is it closed source, but you can't even use things it outputs in production. The TruffleRuby wiki notes that the output of SVM is covered by http://www.oracle.com/technetwork/licenses/standard-license-... which says that you can only use it for testing and not production.

Essentially, you can only get good startup times with TruffleRuby for testing. If you want to use TruffleRuby in production, you can't use the SVM version.


A big showstopper for now, but I imagine they'll license the SVM once it's ready. Businesses will decide if it's worth the price. As a comparison, I never saw a small-to-medium Rails application running on Passenger Enterprise.


I am sure there must be use cases where users need high performance and low RAM with Ruby, but cannot move to Go/Rust/Java for critical parts and hence need something like SVM. I just can't think of those scenarios.


We're aiming to be 100% compatible with MRI. It's far easier to swap out your runtime than it is to rewrite production code. Of course, situations vary for everyone and each team needs to evaluate on their own.

Prior to joining Oracle Labs, I ran a company that went through TechStars (Boston, 2010). We started with Ruby to get the MVP out. I would even say overall it was architected fairly well. But as we started growing, we hit performance issues that we just didn't have with a smaller customer base. As a bootstrapped company trying to grow, putting everything on hold to rewrite subsystems wasn't a terribly appealing option. We finally decided that it was a necessary evil and were going to move from MRI to JRuby, at which point we'd start replacing pieces with Java. As it turns out, JRuby ended up meeting our performance needs and we didn't have to rewrite anything (modulo code changes needed to run on JRuby). I ended up winding that company down for a variety of other reasons, but I'm happy I didn't become another case study in code rewrites.


Good to hear. At my workplace we upgrade the Java version without changing our code to use newer APIs. So far it has worked reasonably well: we get the benefit of a newer GC and JRE without any code changes.


The only benchmark that matters is how fast it runs a real world Rails application. It doesn't matter much if it requires huge amounts of RAM and has a long cold start. Developers can use MRI and run CI and production with SVM, much like they do with JRuby now.


I think you might be missing the aim here. We already have a release of TruffleRuby that starts cold but has great peak performance. This is done by way of the GraalVM. What's new is we're now able to target a new virtual machine that can provide fast start-up as well.

By far, one of the most common complaints we hear is that start-up time is too slow. The split development model you've described works for some, but becomes problematic when you want to use extensions specific to one runtime or another. Additionally, a lot of people don't want to run completely different language implementations in development and production.

So, the idea here is to give you a choice. You get to keep the same language runtime but get to choose the VM you want to run it on based on what your current context demands.


Ok, understood. Then the other benchmark that matters in the typical Rails workflow is how fast rails c starts, and how fast tests run. If MRI runs tests twice as fast, it's hard not to use it. Think 30 seconds vs 1 minute. It really makes a difference. Or running a single rspec example.

Tests seem like a really cold-start scenario, even if only a bit of code changes between test runs. Is it possible to save (I don't know what, maybe the internal state of the VM?) for the unchanged parts between runs? Would that even work?


Agreed. That's why I presented numbers for running the language specs. It was something I could easily and fairly compare MRI vs TruffleRuby with. And it's representative of what a typical RSpec suite will look like.

With the SVM, we're now at 1s (MRI) vs 9s (TruffleRuby/SVM). Previously, we were at 1s (MRI) vs 68s (TruffleRuby/JVM). There are a lot of ways to look at that data. You could say TruffleRuby/SVM took 9 times as long as MRI and that rules it out. Or you could say it took 8s longer and that's acceptable for your entire test suite, especially if you don't have to juggle multiple versions of Ruby. I really can't say -- that's for each team to decide on their own. But we're looking to make the decision easier by further reducing that run time.

It could be possible to save the state of the parsed AST and embed that in the binary. It's actually what we do for the core library to help with start-up time. I'm not sure you'd really want to do that for individual codebases though. You would need to rebuild the static binary every time your code changes, eliminating a lot of the benefit of using a dynamic language in the first place. It would be far better for us to just make TruffleRuby and the SVM faster, which is what we're working on.
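To make the "parse once, cache the result" idea a bit more concrete, here's a rough toy sketch in plain MRI. The file names are made up and this is not how the SVM image build actually works (we bake the core library's parsed form into the image at build time); it just shows the general shape of pre-parsing ahead of time instead of at startup:

    require 'ripper'

    # Toy illustration only: parse a file ahead of time and cache the
    # resulting s-expression so a later run can skip the parse step.
    source = File.read('lib/my_core.rb')           # hypothetical file
    File.binwrite('my_core.ast.cache', Marshal.dump(Ripper.sexp(source)))

    # A later "startup" loads the cached AST instead of re-parsing.
    cached_ast = Marshal.load(File.binread('my_core.ast.cache'))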


I sincerely hope that you succeed!


Interesting to see that IBM has also open-sourced its JVM infrastructure as Eclipse OMR [0], which provides JIT infrastructure to any existing runtime.

[0] http://www.eclipse.org/omr/


It is an interesting piece of software, but to me it seems to say the critical business case for in-house JIT infrastructure is gone. IBM lately seems to be betting on Swift for apps and Go for blockchain-related infrastructure initiatives.


The Substrate VM is still not open source, correct? Are there plans to do so in the near future?


Yes, according to Chris Seaton, one of the main Graal developers (lead, even?) a month or so ago, Substrate VM will be released to the public, sometime Real Soon in early 2017: http://lists.ruby-lang.org/pipermail/jruby/2017-January/0005...

Of course it doesn't clarify the licensing. But it seems like basically all of Graal and Truffle is under the GPL, and with JVMCI, etc. making it into JDK 9, it seems like a pretty good bet that all of this will eventually be openly available. So I'm hoping the same will be true of SubstrateVM, though I think it's in a slightly different place than Graal (Graal/Truffle are really just .jar libraries you put on the classpath of your JVMCI-enabled JDK; they do not fork the JDK. But Substrate is clearly something else entirely, since it 'shrink wraps' your code with the JVM itself, in a way).

I actually just started playing with Graal yesterday, so I'm looking forward to seeing Substrate being released.


That's correct. The script used to generate the compiled binary is open and you can play around with that per the terms of the OTN license, but the SVM itself is closed source. There aren't currently any plans to change that.


Well, apparently I'm eating my words in my sibling comment. That's incredibly disappointing, really. (I suppose I'm already having enough fun with Graal though, so I'll temper my sadness!)

Just to be clear: when you refer to the 'compiled binary', what binary are you referring to exactly? Is this the 'aot-image' tool inside the Graal OTN builds I was looking at? It was not clear to me if that was actually the Substrate tool or not, because the documentation is very vague[1] -- can any Graal-based language use that tool to test better startup times, etc?

Clearly if SVM is going to remain closed source, I'll have to continue to mostly ignore it (as I've done since I followed Graal), but it would be at least nice to see it "In Action" and what it does to the warm-up time, with my own eyes...

[1] http://www.oracle.com/technetwork/oracle-labs/program-langua...


The "compiled binary" I was referring to your compiled interpreter. The aot-image tools in the GraalVM distribution is just a bash script that sets up a bunch of options to control the AOT compiler (also called the boot image generator). If you look at the end of the aot-image script, you can see how the boot image generator is ultimately invoked. The output product of this script is a static binary of a Truffle language interpreter (we don't AOT guest programs -- those will be run in the AOT compiled interpreter)

Out of the box, the aot-image script has support for building Graal.js and TruffleRuby images/binaries, but you could modify the script to build whatever Truffle language you'd like. This is just our first release of the SVM and those are the two languages we've tested. As the SVM is fairly young and under active development, it's possible you'll encounter missing functionality. But we do have at least one community member on the graal-dev [1] mailing list trying it out.

I can't say whether the SVM will remain closed or not. I really don't know. There just currently aren't any plans to open it.

[1] -- http://mail.openjdk.java.net/pipermail/graal-dev/


> I can't say whether the SVM will remain closed or not. I really don't know. There just currently aren't any plans to open it.

Then there are no reasons to expect it to be open, as long as it's commercially viable to sell it. This is Oracle we're talking about. Not a company known for its love for F/OSS.


>"Then there are no reasons to expect it to be open, as long as it's commercially viable to sell it. This is Oracle we're talking about. Not a company known for its love for F/OSS."

It'd be a real shame if Oracle couldn't see the benefits of open-sourcing SVM. New companies are far less likely to base their tech stack on closed-source components when more mature open-source competitors exist; there's too much risk involved.


Thanks, that makes a lot more sense. (I understood Substrate AOT'd the actual Truffle-based interpreter, not guest programs that run in that interpreter, but wasn't sure exactly what tooling you meant). I'll peek inside aot-image when I get a chance.

> I can't say whether the SVM will remain closed or not. I really don't know. There just currently aren't any plans to open it.

I'll keep my fingers crossed, then! SVM is honestly one of the most exciting components. So if I try it and I have any problems, I'll definitely let graal-dev know.


https://github.com/kostya/benchmarks

It looks faster than Ruby, but it also eats a lot of memory, and for some benchmarks it's not that fast :)

It will be interesting to see whether the results and memory usage stay the same with the Substrate VM.


What's the methodology? JIT warmup taken into account or no?

The memory use does seem excessive, though. Perhaps a little too much code bloat due to the polymorphic inline caches?
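For reference, "taking warmup into account" usually means something like the following minimal sketch (the method and iteration counts here are arbitrary): run the hot code enough times for the JIT to compile it before you start timing.

    require 'benchmark'

    # Arbitrary hot method for the benchmark.
    def fib(n)
      n < 2 ? n : fib(n - 1) + fib(n - 2)
    end

    WARMUP  = 50   # iterations thrown away so the JIT can compile the hot path
    SAMPLES = 10   # iterations actually measured

    WARMUP.times { fib(25) }

    elapsed = Benchmark.realtime { SAMPLES.times { fib(25) } }
    puts format('%.3f ms per iteration', elapsed * 1000 / SAMPLES)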


I'm not sure, but I'd guess not, so most of the time will be spent in interpreter mode.


We can finally combine the speed of Ruby with the light weight of the JVM!


But the entire point of this project is that it's a faster version of Ruby, combined with a much more lightweight version of the JVM.



