Hacker News
Why Wasn't Ruby 3 Faster? (fastruby.io)
216 points by stanislavb on Feb 10, 2021 | 171 comments



On the last section, where he says people "weren't avoiding Ruby for speed reasons", that's only true if you take speed very literally. Ruby has concurrency problems compared to many of the more recently popular backend languages like JS, Go, Elixir, or even Java. It has single-threaded performance that's fast enough for a lot of things, but out of the box its ecosystem and standard libraries don't default to evented IO, and the GIL messes up threading, both of which can be footguns for anyone trying to build a resilient and scalable service.

When I was a junior engineer, one of our demos to a client failed when a group of 20 people tried to use a server that had a high-latency connection to its database. Using synchronous IO meant a 50ms DB round trip for each executed query, with each request running maybe 5 or 10 queries, on a VM with 2 cores and one Rails process per core, so incoming requests queued up until they hit the 20-second mark and started timing out.

Bad concurrency from a lack of evented IO is not exactly the same thing as being slow, but it manifests as much the same developer experience, so it gets misinterpreted and Ruby gets a reputation for being slow.


There is something seriously wrong with your numbers (struggling to serve 20 people).

I do not have experience with Ruby, but I think it's similar to Python (in performance), and even if it's, let's say, 5 times slower (which I don't think it is), a webservice on a simple machine (a workstation) in Ruby can easily serve north of 1k (or more) requests per second (behind nginx or Apache or whatever), and in no way would 20 users be a problem.

It doesn't matter if that's async or not (note, not being async does not rule out concurrency or parallelism), but there was something seriously wrong with your application (if you have to serve long-running tasks, you have to make a msg queue anyway and shouldn't do that in your HTTP response directly). You may gain from async if you are IO bound, but with 20 users you are not IO bound, believe me.


We were "IO-bound" using a setup that executed a maximum of two requests at a time, completely blocked processing new requests until earlier ones fully completed, and took a long time to execute requests because of unexpected latency in a dependency, while 20 frustrated users sat in a room hammering the refresh button and dumping more tasks onto the queue.

You're absolutely right when you say there was something seriously wrong with the setup! The problem is that it was more or less what you'd get if you set up a default Rails app behind Passenger, and Passenger did this because some of the major Ruby gems weren't threadsafe (e.g. Rails, until 2008), which comes back to concurrency problems with the ecosystem.


If you were IO-bound you could have run many more Rails processes. There's no reason to limit yourself to one process per core. You can run many times that number of processes, since they are all just waiting on IO.


I'm sure the original commenter is loving all your 20/20 hindsight.


The original commenter offered his story as an issue with Ruby preventing X.

The 20/20 hindsight is not meant to retroactively solve their problem; it's meant as a counter-argument: the problem was their design, not Ruby.


The poster's point is that Ruby has surprising concurrency semantics compared to other solutions.


It has the same concurrency semantics w/r/t IO as any other language that isn't using an event based runtime. If you wrote your app in C and used a single thread you'd have the exact same issue.


Ruby should be able to execute more than one request at a time, because while it can only have one thread executing Ruby code at a time, it can run other threads while waiting on IO. It seems like for some reason Rails was limited to one thread, which is not how it should be configured.
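A minimal sketch of that point, assuming plain MRI, where `sleep` releases the GVL just like a blocking socket read to a database would:

```ruby
require "benchmark"

# A "request" that spends 0.5s waiting on IO; sleep stands in for a
# 500ms database round trip and releases the GVL while it waits.
def fake_db_call
  sleep 0.5
end

# Two requests back to back: the waits add up to ~1s.
serial = Benchmark.realtime { 2.times { fake_db_call } }

# Two requests on two threads: the waits overlap, ~0.5s total.
threaded = Benchmark.realtime do
  2.times.map { Thread.new { fake_db_call } }.each(&:join)
end

puts format("serial: %.2fs, threaded: %.2fs", serial, threaded)
```

Even with the GVL, the threaded version finishes in roughly half the time, because the GVL is only held while executing Ruby code, not while blocked in IO.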


What is Passenger exactly? Never heard of it before now. A quick Google brings up a marketing page that doesn't explain much more.


It's an application server. If you're familiar with the Python webdev ecosystem, Passenger is roughly a Ruby equivalent for something like uWSGI or Gunicorn. (IIRC, you can actually run Python apps with Passenger now, too)


Where now = for 5 years or more.


Something like mod_perl / mod_xxx in the early days... and they did some work on the concurrency problems with rails and ruby around the ruby 1.8 days iirc... In the meantime they expanded to more languages...


Not sure if you mean this page or not: https://www.phusionpassenger.com/ But as they say, it's an application server. Similar to mod_perl for apache/php, unicorn or indeed nodejs (which can function as an application server for Javascript).


It’s a Ruby web server that can be used with nginx or Apache.


We know nothing about the details, of course, but there's something fishy about the performance story. If I were to take a guess it sounds more like a problem with misconfigured software than having used fundamentally wrong tools for the job.

It sounds like the concurrency was set much too low. That's not limited to the configured max workers but there will be similar settings for the web server, database connection layer and the database itself.

A similar thing would have happened with C++ or Java if there was a connection pool of two and database queries that took 500 ms to complete. I've seen worse mistakes in production code.

The lesson to learn from this is probably not that goroutines makes everything faster, but that you have to understand the tools you use. Also, test demos. And always keep a backup plan. That's what Steve Jobs did, relentlessly, for all his demos.


He said there was high DB latency. This matters a ton when your language is single-threaded and not using async IO. 20 queries would tie up the whole server for a second at 50ms DB latency.

You can get around it by using async instead of blocking, using fibers which don't block, or using a thread pool so other threads can keep serving.

I'm not experienced with Ruby but it sounds like Ruby doesn't do any of those


> I'm not experienced with Ruby but it sounds like Ruby doesn't do any of those

Modern ruby can do any of those, though in 1.8 or earlier it could do fewer, and less well.


Ruby does all of those things. The story sounds ancient, and even then a consequence of naive production design far more than any emergent property of language.
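For instance, the thread-pool option from the grandparent comment can be sketched in a few lines of plain Ruby (a toy, not production code; `TinyPool` is a hypothetical name):

```ruby
# Minimal fixed-size thread pool: workers pull jobs off a Queue, so a
# slow IO-bound job ties up one worker instead of the whole server.
class TinyPool
  def initialize(size)
    @jobs = Queue.new
    @workers = Array.new(size) do
      Thread.new do
        while (job = @jobs.pop) # a nil job is the stop sentinel
          job.call
        end
      end
    end
  end

  def submit(&job)
    @jobs << job
  end

  def shutdown
    @workers.size.times { @jobs << nil } # one sentinel per worker
    @workers.each(&:join)
  end
end

results = Queue.new
pool = TinyPool.new(4)
8.times { |i| pool.submit { sleep 0.1; results << i } } # 8 fake IO waits
pool.shutdown
# With 4 workers, 8 x 100ms waits finish in ~200ms instead of ~800ms.
puts results.size
```

Real app servers like Puma use a (much more robust) version of this pattern.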


> You can get around it by using async instead of blocking, using fibers which don't block, or using a thread pool so other threads can keep serving.

The typical approach for Ruby, Python, and similar languages is just to run more processes.
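The multi-process approach can be sketched with `fork` (a simplification of what Unicorn/Passenger do, minus the shared listening socket; requires a Unix-like OS):

```ruby
# Pre-fork sketch: the parent forks N workers, and each child blocks
# on its own IO independently, so one slow request can't stall others.
pids = Array.new(4) do
  fork do
    sleep 0.2 # each child waits on its own fake database call
    exit! 0
  end
end

# Reap the children and collect their exit statuses.
statuses = pids.map { |pid| Process.wait2(pid)[1] }
puts "#{statuses.count(&:success?)} of #{pids.size} workers succeeded"
```

The trade-off, as other comments note, is memory: each process carries its own copy of the app unless copy-on-write preloading is used.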


Ah just like the 80's


> Ruby has concurrency problems [...] standard libraries don't default to evented IO, and the GIL messes up threading, both of which can be footguns for anyone trying to build a resilient and scalable service.

Yes, but this article is about speed, not concurrency, because speed is the one thing 3.0 didn't dramatically improve over the last 2.x release.

There's no need for a “Why didn't Ruby 3 improve concurrency/parallelism” article, because it did.

Fiber#scheduler addresses your IO issue, and Ractors (while still experimental) are aimed at the parallelism issue.
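Fiber#scheduler needs a scheduler implementation (e.g. the async gem) to do anything, but Ractors can be sketched with the standard library alone. A minimal sketch, noting that Ractors are experimental and the API may still change between Ruby 3.x releases:

```ruby
# Ractors each have their own GVL, so CPU-bound Ruby code can run in
# parallel across cores; data moves between them by message passing,
# not shared mutable state.
ractors = (0..3).map do |n|
  Ractor.new(n) do |i|
    (1..100_000).sum * i # CPU-bound work, runs in parallel
  end
end

totals = ractors.map(&:take) # blocks until each Ractor finishes
puts totals.inspect
```

Running this prints an "experimental" warning on current Rubies, which is an honest summary of where the parallelism story stands.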


Java's concurrency support is quite good. One of the best language level implementations I've used. It supports OS level threads and async. And they're adding support for fibers. At that point it will be the only mainstream language that supports all three popular models

JS has an equally bad concurrency story as Ruby. Maybe even worse. There are no threads; you can only use a single core per instance. In practice the problem is RAM: JS already uses a lot of RAM compared to Java, and running an instance for each core makes it way worse. IMO only crazy people run JS backends when there are so many other similar options.


> JS has an equally bad concurrency story as Ruby. Maybe even worse.

This is absolutely false. Yes, JS is single threaded, but all IO is async and happens via a worker pool. On top of that it is 3-10x faster than Ruby for most tasks due to the insane amount of development effort that went into the browser runtimes, it’s often within 50% of C speeds.

And then the entire ecosystem is async. This is why you’ll see JS frameworks near the top of TechEmpower benchmarks achieving massive concurrency numbers, and also why nobody recommends any kind of CPU-heavy workload in node.

Imagine Ruby came with Puma built-in, and all libraries only exported async interfaces.


By JS do you mean Node here?

Node has both blocking and async versions of a lot of IO operations, but async IO is certainly more commonly used. The real question might be whether a thread-pool-backed async IO implementation makes your program do concurrent IO better than a GIL plus blocking IO calls that release the GIL; it's not obvious to me.

(Doesn't Node have threads too nowadays?)


Almost no code uses Node's sync APIs unless explicitly asked to; it's ingrained in developer practices to the same level as not polluting the global scope.

Node’s threads are for cases you really want to do heavy computation without resorting to another language. All the I/O is still backed by LibUV and an event loop, threads don’t really offer any improvement there.


Makes sense. Better than Ruby I guess. It's still not really concurrency, it just handles being single threaded better.

JS is strange because of all the optimization. It's technically multi threaded with IO but in a way that you only control the "main thread". Reminds me of old UI work


>It's still not really concurrency

Well, it also has Workers, which is.


Concurrency != parallelism. You can of course run a multi-process cluster, but that's beside the point (and these days it's best to run a single process inside orchestration like k8s).

Basically almost every high performing server atm is running on some form of event loop / cooperative multi-tasking. Ruby Puma, Gunicorn, rust’s tokio and async/await, goroutines... Java is probably the big exception here (if you’re not using VertX et al) and we all know the pain of thread safety.


Very true. Go is an exception, and some day Java will be as well. They're both going the route of userspace threads instead of reactive APIs, thankfully.


C# has them all already. Java has been lagging behind C# for a long while


Java is very methodical about what features they add. Some of the sluggishness is intentional.

As far as threading, Java is adding userspace M:N threads which will be far superior in most use cases to C#'s async. And unlike async it gives an automatic performance boost to most existing code

C# has a long history of breaking changes. One of the reasons Java is more popular is 90% of decades old code runs fine on the newest version. Java's backwards compatibility is absolutely unmatched.


Project Loom’s virtual threads will be superior to the random function coloring with async in C#. With it, you can write sync code and it will automatically change to another “fiber” when it hits IO or other blocking operation.


> When I was a junior engineer, one of our demos to a client failed because we had a group of 20 people try to use a server that had a highish latency connection to a database, and using synchronous IO meant that a 50ms db round trip for each executed query, with each request running maybe 5 or 10 queries, on a VM with 2 cores and one Rails process per core, would queue up incoming requests until they hit the 20 second mark and started timing out.

I'm fairly sure the GIL is released when waiting for external IO; at least all the decent C database libraries do so. So in your case threads would have helped quite a bit.

In general yes, the GIL is imho a problem and I do not like it, but at the same time there are ways to configure applications so that they do not mind that much.

For non-thread-safe things (are there still any?) there is always the good old way of just running multiple processes (with preloading, the memory usage of even 10-15 Puma workers is often acceptable).


So you hadn't load-tested, and had misconfigured Rails.

I really, really dislike Rails for any number of reasons, but not having spent the time to understand how to set it up is not Rails' fault.


Defaults matter.


So does knowledge of your stack. Rails default settings are for development on one box, it expects you to look over the settings at least once before you push to prod. I can't blame Rails for not having "client demo for 20 people" as the default settings (and I also can't really blame GP for not thinking about Rails configuration too deeply as a junior engineer, that is on the more senior engineers on that team who apparently completely forgot to verify it worked before doing the demo).


This is the same reason I have been shifting away from python for anything more than a quick script.

Processors have only been going sideways for a while and python makes this a pain.

Elixir and Rust are a little less painful in that regard.


Can you really blame Ruby when the project didn't have the right resources?

The team could have booted up more processes. Isn't that the answer?

Or have I misinterpreted your comment?


I think you have missed where he said it had 20 people using it.

Not 200,000. A single process wasn't able to handle 20.

You'd be hard-pressed to intentionally write code that performs that slowly in a framework in another language.

Ruby by default has an incredibly long list of "you can but you shouldn't" and things you need to be aware of, especially around concurrency.

Other languages are mostly fast by default. When you follow some nodejs/express/whatever introduction tutorial and implement your first application in the most naive way possible, you'll end up with something that will easily scale to hundreds if not thousands of users out of the box.

Do the same with ruby (or even with some python frameworks, to be fair), and stuff can break at less than a hundred users.

And that's why yes, you can blame that on ruby. Or at least ruby how it is commonly taught.


There's nothing about the code being slow in the example. The example said the processes were IO-bound, so the speed of the Ruby code is irrelevant. If it was IO-bound, the demo system had 2 processes sitting 99% idle.

The example said there was 0.5 seconds of IO per request, so each process could serve 2 requests per second, and 2 processes would serve 4 requests per second. The mistake was running only one process per core: 10 processes would serve 20 requests per second, and the cores would still be mostly idle. With Rails you always run as many processes as memory allows, not one per core.

Now that Rails is thread-safe you can also have each process run a few threads. If you ran 10 processes and 4 threads per process, you'd be up to 80 requests per second.

If you really need massive concurrency, that isn't anything special. But it is more than enough for 90% of the web apps out there.
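The arithmetic above maps directly onto an app server config. A sketch of a `config/puma.rb` under those assumptions (the exact worker and thread counts depend on available memory and workload, not core count):

```ruby
# config/puma.rb -- 10 processes x 4 threads = 40 requests in flight;
# at ~0.5s of IO per request that's roughly 80 requests per second.
workers 10        # forked worker processes, bounded by memory, not cores
threads 4, 4      # min, max threads per worker
preload_app!      # share memory between workers via copy-on-write
```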


This is nonsense; there are boatloads of frameworks and CMSes whose main request can easily run into the 10s or 100s of milliseconds (WP, for example).

Serving only 20 users only has to do with how many workers were assigned.

I actually implemented a high traffic RoR payment provider on minimal infra ( 1 small VPS ), and benchmarked a couple of Python CMSes and compared them to WP.


> I actually implemented a high traffic RoR payment provider on minimal infra ( 1 small VPS ), and benchmarked a couple of Python CMSes and compared them to WP.

I'd like to see your results if you still have them.


There is a lot we don't know with the example given. It's unfair to blame Ruby.


But it's not unique to this example. It's common; if you don't start out concerned about concurrency in Ruby, and just do the naive thing, you are going to fall over at -extremely- low numbers compared with most other modern languages.


If it is so common as you claim, could you point us to some public examples of this?

I have never ever been concerned about concurrency in Ruby, or Python or PHP. Just deployed the number of workers which would be able to serve our peaks.

I do write high concurrency application though, but those are not web apps and are not written in these languages.


I've used Ruby and rails for close to a decade and have never seen/experienced something like what they're describing, so I doubt this is really that common.


The "naive" thing in Ruby, if you're serving HTTP requests, is to use some kind of app gateway that allows concurrency, such as puma or unicorn. It's just not all that viable to do something more basic than that unless you're willing to write all the request handling stuff by hand.

That aside, a ruby script that can't handle 20 req/s on 2 vcpus seems rather suspect. I'd be really curious what that code looked like. I'm doubtful that it has much to do with the language and not the code design.


You're absolutely right! The biggest things I learnt from that were:

- overprovision resources for big demos by a significant amount

- test your demo before you give it

- deploying pets (servers you ssh into and lovingly configure) instead of cattle (instance abc123 that's created automatically) means you can't scale them as easily

- if you have concerns about a big demo, particularly ones that could be fixed by spending $10 on EC2 instances, you should fix them yourself instead of emailing the tech lead and assuming he'll handle it

- spinning up processes with phusion passenger is awful[0] and we should have been using something threaded like puma

That being said, the same footguns apply at various points as you scale, because Ruby's GVL still exists, and because even though threads are lighter weight than processes, they're still relatively expensive to switch between and to maintain in memory compared to e.g. Go/Elixir lightweight threads or evented IO.

Unfortunately (or perhaps fortunately) the problems with threaded IO start applying well above 20 concurrent users, so I don't have any fun stories about failed demos involving Puma instead of Passenger.

[0] Specifically, Rails processes take up a lot of memory, none of it is shared, context switching between processes is pretty expensive, and so you're limited to a smaller number of processes that can become IO-bound very easily


> even though threads are lighter weight than processes, they're still relatively expensive to switch between and to maintain in memory ... context switching between processes is pretty expensive

This is a really common misconception. Switching between threads and switching between processes are both _extremely_ fast compared to what web applications do.

An exceptionally fast web response is generally <1ms. Linux thread context switching takes ~2 microseconds. Forking a new process takes ~20 microseconds.


Relatively expensive was probably not phrased well, but a goroutine context switch is ~200 nanoseconds and instantiating a new one takes ~1 microsecond, from memory and a quick google.


Doing nothing 10x as fast is not all that useful. In a real app it's unlikely the difference in context switching being 0.2% of your CPU time or 0.02% of your CPU time matters.

It's also not a fundamental restriction on threads vs Goroutines either. switchto is very nearly as fast as Goroutine context switching https://www.phoronix.com/scan.php?page=news_item&px=Google-U...


> threads are lighter weight than processes, they're still relatively expensive to switch between and to maintain in memory compared to e.g. Go/Elixir lightweight threads or evented IO

Not true. Context switching pthreads is orders of magnitude faster than any Go/Elixir concurrency mechanism.

So-called "lightweight" threads exist to fix the concurrency problem with garbage collection in scripting languages. Without garbage collection pthreads are fast enough for any problem.


Lightweight threads in Erlang/Elixir were created as a unit of fault tolerance. A lightweight thread can suddenly exit when it hits an error, and the VM mechanisms give you guarantees that any resource it owned will be freed (memory, sockets, registered name, ETS tables, etc.) Using signals and monitors you can extend this to any other type of resources, for instance DB connections in an SQL adapter library.

That's the original advantage over OS threads.


No, you don't need cooperative threads for this. Proper exception handling fixes this problem with OS threads for you.


Passenger can be configured to restart processes after they reach a memory threshold.

Another optimization is using jemalloc. Going to try this soon with Sidekiq workers.


This is the right answer. The demo failed because the dev team failed to understand the performance and scaling characteristics of the code they wrote. This was a failure of people, not technology.


Eh - it was a failure of the (people, technology) tuple. Different choices by the engineers on the team could have fixed that problem. And different technology choices by the engineers behind ruby (/rails) could also have fixed the problem. (Eg async io, much larger thread pools, etc).

Everyone dropped the concurrency ball hoping it was someone else's problem. It's the dev team's fault because everything is ultimately the dev team's fault. But it would also be great if it were fixed in Ruby and Ruby's ecosystem.


If something is a known-known, you have to take personal responsibility when you don't take that thing into account and get bit. It's not appropriate to even mention some person you have not met as part of the reason a demo failed. Personal accountability is key in this industry; do your due diligence and events like the ones described will not happen.


Easier to blame the seemingly nameless technology and completely change the conversation by using some other stack entirely that accounted for your lack of due diligence.


Yes but the point is that lots of people became disenchanted with ruby because it was notably harder to get this sort of thing right than it was for the alternatives with more built in support for in-process concurrency. Maybe it's not the case anymore, but back around 2010, the only people who knew what they needed to do to configure a rails app to scale decently were those who had battle scars of the kind in the parent comment. And nobody came out of those battles feeling all sunshine-and-roses about the platform.


Ruby had support for in-process concurrency via threads for the last 14 years.


And yet, see the comment that started this thread. It is simply true that concurrency was a pain point for rails for a lot of people for a long time. (I don't know if it still is, I've been out of that game for five years.)


By resources do you mean hardware resources or expertise or what?

Cause running more than 2 processes on a 2-core server just sounds weird.

I thought Node's doctrine was "One process per core", and (async) Rust and Go seem to do fine with managing their own thread pools. I haven't _tried_ to do a realistic benchmark of Hyper, but I've never seen it struggle with concurrency either, even on long-polling applications where concurrency was the norm for all clients.

More processes would work, but it doesn't seem like the right layer of abstraction for 2015, now that Nginx and Node have made the event model so popular.


More hardware resources.

The example given was a Rails app. I don't think one process per core is common for Rails deployments but I'm not sure.


Process per core with threading for parallel IO has been the default for almost a decade now.


Default in what app servers?


Puma is part of the default Rails Gems and was first released all the way back in 2011.


What year was that? I had Rails applications running on a dual processor Pentium 3 in 2007. An online sailing game, no problems with performances.

It's hard to remember what we had to do back then, but Passenger was probably there. My first thought: did you use a connection pool and a C driver to make parallel requests to the database?


Probably a little late to be second-guessing exactly what happened there, but a few comments anyways:

It's almost always worth doing some concurrency/scaling sanity checks on anything you're preparing to demo IME. There's plenty of ways any particular app, no matter what language or framework, could unexpectedly fall down when facing 20x traffic. If not from the language itself, perhaps from various external dependencies. It may be true, particularly at some previous times, that doing the most naive thing in Rails hosting was significantly worse than doing the most naive thing in Java hosting, but still, always worth checking.

The 50ms DB round-trip 10x per request smells funny. Most queries to a closely located DB with demo app data should be more like 5ms or faster. Maybe there's good reason for it to take so long, I'm not sure. Perhaps the DB also had some slowdown from so many of those queries being executed concurrently, or maybe it would have if you had had a faster Rails setup or used Java instead. All the more reason to always test it and not just assume it'll be fine because Java should be fast.

Each individual web request is still gonna be rather slow if it's gotta do multiple 50ms DB round trips. I don't know Java that well, but C# and Ruby both have pretty decent tools for making that IO concurrent in the scope of an individual request. You have to deliberately use it though, since only you know which queries actually depend on data from other queries.

I've been working professionally with Rails for several years. If I was deploying such an app now, it'd be simple and fast to say, I suppose I'll serve it with Puma. 2 cores, so 2 processes. 5-10 threads per process is pretty normal, maybe make it 20 here since I expect a lot of time waiting on DB IO. Now we've got 40 workers and it ought to serve traffic from 20 users fine. Assuming some other part doesn't fall down first.
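The "make IO concurrent within one request" point above can be sketched with plain threads; the `fetch_*` methods are hypothetical stand-ins for independent 50ms queries, simulated with `sleep`:

```ruby
require "benchmark"

# Hypothetical stand-ins for three independent 50ms database queries.
def fetch_profile;  sleep 0.05; :profile;  end
def fetch_orders;   sleep 0.05; :orders;   end
def fetch_settings; sleep 0.05; :settings; end

results = nil
elapsed = Benchmark.realtime do
  threads = [
    Thread.new { fetch_profile },
    Thread.new { fetch_orders },
    Thread.new { fetch_settings },
  ]
  results = threads.map(&:value) # Thread#value joins and returns the result
end
puts format("3 queries in %.0fms instead of ~150ms", elapsed * 1000)
```

As the comment says, you have to do this deliberately: only you know which queries are actually independent of each other's results.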


The first enterprise Java app I was a part of failed out of the gate because some external requests took too long and exhausted the thread pool (100 threads by default).


yeah, Heroku infamously had a really hard time getting Ruby to scale well.

Ruby traded speed for safety (the GIL), although it should be noted that Ruby's C interface does allow you to opt out of the GIL, which means if speed is absolutely paramount you can write a Ruby C extension that releases it.

I've done it in the past, but OTOH if you're to that point maybe Ruby no longer makes sense.


This problem is being addressed in Ruby 3.


Ruby 3 seems to be going the way of Perl 6, i.e. always about to be more efficient/faster sometime in the future. I hope not, but isn't there a limit to how much you can re-jig an interpreted language designed in the early 90s? The same applies to Python.


> Ruby 3 seems to be going the way of Perl 6, ie. always about to be more efficient/faster sometime in the future.

“Perl 6” hasn’t been a thing for more than a year.

As for Ruby 3, the concurrency improvements are here today. The Actor-model parallelism of Ractors is here, if experimental, today. JIT improvements are here today.

Yes, there's also more stuff on the horizon building on that, but it is simply not the case that the efficiencies from Ruby 3 are some kind of continuously receding mirage.


To be clear: "Perl 6" has been renamed to the Raku Programming Language (https://raku.org using the #rakulang tag on social media) and is very much a thing :-)


The same also applies to Javascript.


Yup, only the Node runtime is very fast: pretty much 2-3x faster than Python 3.6 and Perl 5.26, and 4-5x faster than Ruby 2.4 for non-IO operations (loops, pushing to arrays, setting hash keys). Only Lua and LuaJIT beat it. Raku/Perl 6 was still slower than Ruby last time I checked. My tests were crunching primes, and the Raku foreach loop had worse performance than the loop statement (the ugly C-style loop). For number crunching Raku is slower, probably because of its rational math.

Anyway, I like Ruby more than any of those languages, so hopefully speed in 3.0 is at least on par with Python and Perl now.


Now Ruby just needs someone putting Google-level resources into optimizing the runtime (since most of Node's performance comes from v8)


I don't think anyone put google-level cash into LuaJIT but it still manages to be pretty fast.


The article is centered around a quote from Matz, Ruby's creator:

> "Ruby 3 will be three times faster compared to Ruby 2."

The context is that he was referring to release-day Ruby 2.0 vs Ruby 3, not current Ruby 2.4 vs Ruby 3.

I think the lesson is that managing users' expectations is critical to major releases like this. Even if you are well intentioned, it's very easy for a promise to be misunderstood, and once it's out there, it's out there. It's very difficult to put the genie back in the bottle without giving the impression the project is having issues.


Matz also chose a really particular benchmark to target, and it wasn't Rails. A lot of people missed or forgot that.


They forgot because it wasn’t sold that way.

This whole thing reminds me of the whole KDE 3 to KDE 4 transition. “We never said 4.0 would be usable, but also it’s a big major public release, please use it, but also don’t complain about it being unusable”.

It seems that some developer communities always want to have their cake (the flashy marketing / sloganeering) and eat it too (faster ship cycles and shipping beta quality software as production ready because that’s faster).

EDIT: to clarify, in Ruby's case I'm not saying 3.0 was beta quality. But TFA goes through all sorts of gymnastics to try and avoid admitting that they tacitly let people assume there'd be a revolutionary speed bump, and it smells a lot like retreating while declaring victory.


Although KDE 4.0 was a disaster, it wasn't that developers were greedy or something.

KDE is mostly driven by volunteers. In the run-up to KDE 4.0, KDE hadn't had a release for a long time. That meant some volunteers were losing motivation, things were dragging on, etc.

As an example, KDE's Games collection was in fantastic shape and had been ready for a very long time. Should they all have waited until everything else was polished? That would've taken 1-2 more years, and the KDE Games developers were not gonna keep waiting that long.

While releasing KDE 4.0 was a mistake, the biggest mistake was distros rushing to ship it by default. KDE 4.0 should've been optional/sideloaded. That would've been the right balance.

However, the hype around KDE 4.0 was so much that distros rushed it, and the rest is history.


KDE 4.0 was pretty bad. For example, the Plasma desktop, which provided things like the taskbar and application launcher menu, was rather unstable and liked to crash. On its own, that wouldn't be a problem: it just automatically relaunched after the crash and that was that. Unfortunately, there was another bug where if you set your desktop wallpaper during the first launch (you know, like a new user of KDE 4 probably would) and then Plasma crashed, it would corrupt its own configuration file in such a way that you lost all the things it provided, like the taskbar and the application launcher. There was no way to fix this except manually finding and deleting the configuration file, because although the existence of the taskbar etc. was part of the configuration, there was no way to actually modify it from the GUI at the time. Made for a pretty terrible new-user experience.


It’s amazing the trauma that can be carried.

If you have children, and they have children, I hope that those grandchildren of yours grow up hearing stories about what a travesty the roll out of KDE 4.0 PLASMA Desktop was.


I know you’re being sarcastic but these kinds of stories are well worth passing on to future generations. Managing expectations isn’t about not hurting people’s feelings, it’s about preventing blowback that will at best distract and at worst demoralise the team. You can bet the plasma team wasted a lot of energy on dealing with all the hate they got. Energy that would be better spent improving plasma.


I was only sort of being sarcastic. I also remember the switch to KDE 4.0 very clearly.

Here’s my entry into the annals of KDE 4.0 Plasma Desktop Release circa 2008:

It basically worked for me. I was a sysadmin, so abused my authority at work to sudo apt install kde-desktop on my mandated Ubuntu work machine. My personal machine was a Gentoo thinkpad running fvwm2, but I did use the excellent Kate editor and AmaroK media player. I was a glutton for punishment at the time.

I think more than anything, seeing how salty people were about the transition taught me the psychology of the downstream user, and how a classic “politics of power” narrative can be applied to almost any situation for rhetorical effect, appropriately or not.

It’s kind of an impossible quagmire. The developers were mostly volunteers working on what they thought was interesting and worthwhile, most users were probably fine and silent, and a percentage of very loud users felt very personally wronged, as if something immoral had occurred. They’re not wrong to give feedback, but the situation is that they basically have to beg volunteer strangers. And, broadly speaking, most of the people complaining didn’t come from a background prone to begging or being told no.

Another thing I chuckle to reflect on, is that iOS has been shifting towards a “KDE 4 AIR” look and feel since 2015. The latest iOS innovation: KDE4 style “Widgets” ;)


I don't think people missed it.

People expect that if it could be 3x faster in those benchmarks, surely it isn't too much to ask for a 50% increase in my Rails apps.


I'd argue there's probably a lot of really good reasons for that... primarily, most people did not get the announcement from the horse's mouth.

https://youtu.be/LE0g2TUsJ4U?t=3242

If you watch that for 5 minutes, Matz makes it clear that a) it's a goal, not a promise, and b) he's basing this on ruby 2.0

I don't know how you can set expectations any clearer than that (other than maybe not to announce any goals ever, or chase down the 3rd parties reporting on your goals with a torch and pitchfork, demanding that they fix their lazy reporting)


I’ve come to see the worries about “speed” in Ruby as a bad meme that the Ruby community is kind of insecure about, but needs to shrug off. The language gets a little faster with each release, and as the author points out, the cumulative improvement over the last ten years is excellent.

Meanwhile, Ruby performance is right in line with Python[1], and in general people are not ragging on Python for its slowness.

If the Ruby world wants to keep growing and thriving what I think matters more is the ecosystem. These days Ruby is very mature, battle-tested, and has a rich ecosystem for web application development.

But this meme that Ruby is “too slow” has taken some of the air out of the balloon, as well as just becoming “boring old tech,” and combined there is definitely less active / fast-moving development work in Ruby libraries compared to ten years ago.

IMO the biggest problem with Ruby has always been the double-edged sword of Rails. Rails is the only thing that really made Ruby go mainstream, and if the language is going to really bloom it needs to expand beyond that. Ruby is a great high-level general purpose language, and it could be a good fit in all kinds of applications, but until one of those takes off we’re just going to keep reliving this meme of Ruby being too slow which came about around the peak of Rails popularity (circa Ruby 1.8.7 to 1.9.3) when it had much more truth to it than it does now.

[1]: I develop a lot of small applications in both Ruby and Python, and for curiosity’s sake often convert one to the other language and benchmark. There’s no perfect way to compare languages, but that’s a reasonably good one.
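For what it's worth, a minimal sketch of that kind of timing comparison on the Ruby side, using only the stdlib Benchmark module (the workload and iteration count here are made up for illustration):

```ruby
require "benchmark"

# Hypothetical workload: sum of squares, the kind of small task
# that ports almost line-for-line between Ruby and Python.
def workload(n)
  (1..n).sum { |i| i * i }
end

elapsed = Benchmark.realtime { workload(1_000_000) }
puts format("workload: %.1f ms", elapsed * 1000)
```

A line-for-line Python port timed with `time.perf_counter()` gives the other data point. As the footnote says, it's a reasonable comparison, not a rigorous one.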


Seems like a marketing problem. They did the "Ruby 3x3" thing years ago when Ruby 3 was first started. Unfortunately(?), 2.x development didn't stop for 3, and 3's performance features were backported.

It's a lesson on promises. They shouldn't have made the "3x faster" promise until 3 was at the RC stage. Making it as early as they did pretty much set them up for failure. Maybe they didn't predict that 3's development would take as long as it did, IDK.

Hype the features you have, not the ones you will have.


There was no 'backporting'. Those 2.x releases were the development towards 3.


Also they could have called it 2.8 and had an extra 2 years to make it happen


One thing I consistently love about the Ruby community is the kind vibe within development and communication. This is so nicely communicated; it brings me up to speed without talking down, it's conscientious of people's expectations, understandings, and diversity of needs. It doesn't try to be "done", and encourages ongoing community participation.

I know the Ruby community doesn't always live up to the "Ruby Is Nice" standard, but when it does, I just love the language that much more.


It’s because Ruby was meant to make programming fun again. Happy programmers means a kind vibe


Totes agree. One of the other ways I see this manifest is that other languages have prescribed rules for "writing good code" - Ruby just says: "Bring happiness".

Like, yes - all those rules make for good advice for doing that, but they're rigid and aren't the right choice for all situations.

(https://rubyonrails.org/doctrine/#optimize-for-programmer-ha...)


As someone who really wants Ruby to succeed: they were setting themselves up for failure by letting a narrative of "3x faster" circulate when they knew that wasn't even close to what people would experience.

I think the reality of ruby is that it's a language for people who care more about the feel of the language than its performance characteristics. This may sound like I'm ragging on it but I think there are many cases where optimizing for developer comfort is truly in the best interest of an early to mid stage startup.

But don't try to make it into something it's not, like you're competing with Rust or something. Ruby simply is not a performance-oriented language. That being said, it's performant enough for the vast majority of small and medium scale use cases.


It's the linguistic aesthetic of the language that you hear DHH raving about so much which I contend is inherited from Perl.


Maybe its time to dust off the OMR+Ruby approach? https://github.com/rubyomr-preview/rubyomr-preview

It hasn't been worked on for 3+ years now, but it's an alternative to using MJIT.

You'd need to rewrite most of the C function calls back into Ruby and have the JIT properly inline them to get equivalent performance.

You won't get an immediate jolt of performance but over the long tail, it will catch up.


> You'd need to rewrite most of the C function calls back into Ruby and have the JIT properly inline them to get equivalent performance.

What works really well for TruffleRuby etc might not work that well for a simpler optimizing compiler you could build and get merged into CRuby. LuaJIT and JSC are simpler than a lot of other "fast" JIT runtimes and that's not really the approach they take.


OMR isn't a simple optimizing compiler; it's the full-blown compiler suite, GC, and runtime components used by the OpenJ9 JVM [1], which has been in production for decades.

You're free to use the simple passes that the JIT there provides, but the more expensive optimizations are also available as the Java JIT has a tiered optimization model. [2]

All the optimizations found here [3] can be used along with the code-generator opts as well.

I guess the idea is to provide a full suite of pluggable components, pick and choose the ones that make sense for your purpose right now, and those needs can evolve over time.

[1] https://www.eclipse.org/omr/starter/whatisomr.html

[2] https://github.com/eclipse/omr/tree/master/compiler

[3] https://github.com/eclipse/omr/tree/master/compiler/optimize...


OMR is a super impressive project! I spent quite a lot of time looking into Matthew Gaudet and the rest of the team’s work before starting work on a tracing compiler for CRuby.


How does OMR relate to TruffleRuby? What advantage would it have over it?


TruffleRuby is the way to go. GraalVM is in the process of becoming the universal, faster VM for most high-level languages; for example, it is already the fastest for R, and once the Python implementation matures it should become the fastest Python runtime too.


Tried running Rails on it? It's like 10x slower than MRI still, in my experience.

It is the same issue I've seen with JRuby and Rubinius back in the day: they claim it is as performant or faster than MRI, but then you port a simple Rails app and it hardly even manages to start the server. Not sure what these people are doing for benchmarks, but it ain't representative.


I am not familiar with the Ruby world, but that should not be the case. How do you measure it?


But then you are paying oracle.


There is a reason why donation-funded open source doesn't achieve the quality of the JIT and GC implementations of commercial software.

Something like Graal would never succeed with FOSS, late nighters and weekend programming.


True, and if it had a clear pricing model it would be fine. Unfortunately Oracle's pricing is closer to extortion. If you're already under their boot it's fine, I guess, but anyone not under it is crazy to volunteer to do so.


I own a developer account since 2000, their prices are perfectly fine for the corporations paying for DB2, Informix, SharePoint, SQL Server, SiteCore, AXM, SAP,....

Licenses are usually not where it hurts on corporation scale projects.


You can use the free version of Graal just fine, can’t you?


Sure you can. Then when it becomes an integral part of your company's tech stack, Oracle gets dollar signs for eyes.


What are you talking about? GraalVM CE is free and open source, the EE optimizations are just an optional bonus


How much extra memory overhead does it come with, though?


Significant but as it's freethreaded you only run one large process. That alone allows for all kinds of additional optimizations that would be pointless with a lot of little singlethreaded processes in which sharing anything mutable between them has massive overhead.
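A tiny sketch of what that sharing looks like in plain Ruby (thread counts and iteration numbers are arbitrary): on CRuby the GVL keeps these threads from running Ruby code in parallel, while a free-threaded runtime such as TruffleRuby or JRuby can run them across cores against one shared heap; the Mutex keeps the shared counter consistent either way.

```ruby
require "etc"

counter = 0
lock = Mutex.new

# One thread per core, all mutating one shared object.
threads = Etc.nprocessors.times.map do
  Thread.new do
    100_000.times { lock.synchronize { counter += 1 } }
  end
end
threads.each(&:join)

puts counter  # nprocessors * 100_000 on any compliant runtime
```

In the multi-process model this kind of shared mutable state instead has to go through something external like Redis or the database, which is the overhead the parent comment is pointing at.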


> But let's be clear: nobody thought that literally every program written with Ruby would be three times faster. In fact, that's completely impossible.

Let's be clearer: some people will have thought that.

Techie: Here is my business case for moving some stuff to Rust/Go

Approver: But one of the Devs said Ruby 3 will be 3 times faster.

Techie: It's more complicated than that, for these eight reasons.

Approver: But still. 3 times.


Maybe the lesson here is that setting arbitrary, nice-sounding performance targets is a bad idea in general. Maybe it’s time for ruby to just stop worrying about performance. As a developer on small and medium-sized projects, I’d rather see the effort go towards better cohesiveness, versioning and documentation for the latest ruby tools.


It's not zero sum. A faster Ruby means it's applicable to more problems, which means more people using it, some proportion of whom will end up doing that other stuff.


I don't understand why people keep talking about Ruby's performance to this date. Shopify among hundreds of other unicorns in tech are collecting the money no problem.


I think Shopify now has two TruffleRuby developers on their team. That is the closest we could get to speeding up Ruby on Rails.


How do GitLab and GitHub handle the scale with Rails? TruffleRuby?


Brute force or normal scaling. And as far as I am aware, none of them use TruffleRuby. Even Shopify is still not running their Rails apps on TruffleRuby.

Often when people say Ruby on Rails doesn't scale, they don't mean in the literal sense that it can't scale; they mean it is comparatively expensive to scale. And depending on how you weigh various factors, the term "expensive" is subject to debate and discussion.


It's plenty fast enough already for most things.


This has been my experience. For the majority of applications that are developed using Ruby (which means Rails), the Ruby universe is performant and mature. I’m keeping an eye out for what happens in the Rust community, but for now Rails remains a solid choice for spinning up most any web application.


I could be mistaken, but don't Rust and Ruby target completely different types of problems?

Last I heard, Rust was targeted at being a "systems" language, i.e. an alternative to ASM/C/C++, whereas Ruby is basically a webserver/backend language.

What makes you think Rust will be used as a "Ruby on Rails" replacement anytime soon?


I can't imagine messing with pointers and ending up with something anywhere near as productive as Rails. Rust and Ruby target completely different domains.


Crates like actix-web, rocket, and warp do target web application development. They're certainly not Rails replacements yet (and maybe don't aim to be), but it's not unreasonable to expect a Ruby developer who has learned and likes Rust to do some web backend development using Rust.


Having done some Rust development myself... I love the language, but if I was tackling a problem that could be solved with something like Rails, I'd use Rails. Rust is awesome for performance critical stuff but there's a lot of memory management overhead for a CRUD app.


I've been writing Rust full time for years. Rust will never ever be as easy and productive as Ruby.


Never claimed it would be. But I've been bitten enough professionally with services written in Python and Ruby eventually costing the company a ton of time, effort, money, and operational risk to replace because those language runtimes just cost too much to operate at scale.

I would probably reach for Java (or ideally Scala) for that sort of thing these days, but would love it if something like Rust could compete in that space. Maybe it can't, but I wish it could.

I think one place where Rust could be great for these sorts of things is low-horsepower devices like routers and managed switches where you want to have a rich web interface but can't afford the overhead of an interpreted language. Being able to ship a device with a slower CPU or less RAM/storage because its web UI requires fewer resources would be an advantage in a market that's already very price-competitive.

Just to disclose my biases: I'm of the opinion that you shouldn't ever be writing anything in a dynamically-typed language if you expect it to turn into a large program. Your initial development will indeed be rapid, but later on you'll be spending more time writing tests than improving your software... or you won't be writing tests and will be too afraid to make large changes without breaking something.


As a C# dev, ASP.NET MVC mostly, I’ve always been curious about Rails. Why would you say someone should move from where I am now to RoR?


Rails' main strength is that the framework itself provides really good sensible defaults and organises itself (config, environments, routes, models, controllers, helpers, tests) in an easily-extendable manner.

I can usually familiarise myself with an app in a matter of minutes and can get a working prototype up and running within an hour.

All this makes me really productive, and writing Ruby is a joy.
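As a flavor of those conventions, here's a hypothetical three-line `config/routes.rb`; Rails' defaults map it onto a conventionally named controller, model, and views with no further wiring (names here are made up for illustration):

```ruby
# config/routes.rb (hypothetical app)
Rails.application.routes.draw do
  # One line wires index/show/new/create/edit/update/destroy
  # to app/controllers/articles_controller.rb by convention.
  resources :articles
end
```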


the newsletter modal on this site was positioned such that it couldn't be closed when trying to view it in landscape on a 2160x1080 mobile display. :\ article readability view saves the day


I think the article explains well how much faster Ruby got and compared to what.

I think it is important to talk about this topic and performance. I agree that Ruby is plenty fast enough for many things.


Wow that website is awful. Full screen popup, followed by fixed top and bottom bars.


"Please don't complain about website formatting, back-button breakage, and similar annoyances. They're too common to be interesting. Exception: when the author is present. Then friendly feedback might be helpful."

https://news.ycombinator.com/newsguidelines.html


have you tried something like uBlock Origin? that blocks a bunch of extraneous stuff... for the other stuff i use a small javascript bookmarklet that removes all the fixed things like headers and the like.

  javascript: function()%7B%20let%20i%2C%20elements%20%3D%20document.querySelectorAll
 ('body%20\*')%3B%20for%20(i%20%3D%200%3B%20i%20%3C%20elements.length%3B%20i%2B%2B) 
 %20%7B%20if(getComputedStyle(elements%5Bi%5D).position%20%3D%3D%3D%20'fixed'%20%7C%7C 
 %20getComputedStyle(elements%5Bi%5D).position%20%3D%3D%3D%20'sticky')%7B%20elements%5Bi
  %5D.parentNode.removeChild(elements%5Bi%5D)%3B%20%7D%20%7D%20%7D)()
seriously a really cool bookmarklet that makes everyday browsing less claustrophobic on some pages.


I put some newlines in your comment because the long JS snippet was borking the page layout. Sorry; it's our bug.


Show HN: the day I took down hacker news (only on one page and only the formatting)

Thanks dang for all your work! cheers


I am not seeing the same site as you at all. I have a fixed navbar (although it has a silly animation) and right side, and I didn't get any popup.


By resizing my window down to mobile-ish width, the "get your free e-book" thing on the right side covers the whole screen.


Unobstruct app on iOS does wonders for pages like that. Just a happy customer.


In desktop browsers I use this "kill sticky" bookmarklet: https://alisdair.mcdiarmid.org/kill-sticky-headers/


This is awesome - thanks for sharing. Sometimes it's the simplest things that are the most useful.


Well, that's what happens if you strive for adding "syntactic sugar" and other meaningless features to your language instead of performance.


Can you say why you think this is the case? Matz has very much made performance a focus: they've continued to improve the Ruby VM and added JIT, for example. Likewise, many of the non-performance related improvements in recent Ruby version have immediately been useful to me as a Rubyist. I'm not sure what meaningless features you refer to, but I'd be interested in seeing them enumerated.


Crystal is nearly at 1.0 and seems way faster than Ruby, in addition to having a Ruby-like syntax. I'd suggest you take a look at it; promising stuff there.

https://crystal-lang.org/

edit: did i win the fastest comment to the bottom on hn? what do I get?

jokes aside, care to explain why folks?

In all seriousness, the results speak for themselves, when comparing to Ruby in terms of speed.

https://github.com/kostya/benchmarks

https://github.com/kostya/crystal-benchmarks-game

https://github.com/the-benchmarker/web-frameworks


I suppose it's because Crystal isn't Ruby, and it's also a statically-typed ahead-of-time compiled language. Which, despite having similar syntax (and a lot of other positive aspects), make it very different from Ruby.

I'm not sure "Ruby but faster" is the point. I'd argue that it's more "cool new language that should be comfortable for anyone already familiar with Ruby".


It's odd how Elixir gets plenty of mind-share amongst Rubyists despite only surface similarities yet Crystal has a hard time gaining any traction when it's virtually a statically typed Ruby clone.


> a statically typed Ruby clone

Could be that. I've used Ruby for 10 years; I'm just used to a dynamic language. I also dealt with a typed Javascript monorepo and it was not very fun to deal with when the types weren't working for whatever reason.


The thing is, it's not a clone. The similarity is skin deep. Elixir never pretended to be anything it wasn't, but many pushed Crystal hard as a clone, and I think that has produced some resentment.


Like the blog post says, people have already made their minds up about whether or not Ruby is fast enough for them, and incremental speed upgrades, or even faster languages like Crystal that market themselves as Ruby-like are irrelevant in this decision process.

For the majority of us who use Ruby, we use Rails, and Rails is great. Crystal is irrelevant as a comparison to Ruby, because there is no Rails (I'm sure there's web frameworks, but they're not nearly as productive as Rails). Rails isn't perfect, and there's plenty of room for improvement, but switching languages and starting from scratch isn't the solution.

The biggest downside of Ruby compared to other languages (besides speed, but like I said, that doesn't matter here), is lack of static typing. That may or may not be a problem depending on your usecase.


> Crystal is irrelevant as a comparison to Ruby, because there is no Rails.

I wouldn't say it's irrelevant; Crystal is already being used by ex-Rubyists, and the ecosystem of Crystal libraries is being built by them as well.

After 1.0 you would see more of this happening.

Lucky [0][1] by Thoughtbot is the closest to Rails in the Crystal world.

> The biggest downside of Ruby compared to other languages (besides speed, but like I said, that doesn't matter here), is lack of static typing. That may or may not be a problem depending on your usecase.

Then I shouldn't see any complaints about Ruby's speed if it doesn't matter. Crystal solves all of this built in, with the trade-off of slower build times, but then again Rust has slower build times as well.

But to each their own.

[0] https://luckyframework.org/

[1] https://github.com/luckyframework/lucky


> edit: did i win the fastest comment to the bottom on hn? what do I get?

downvoted, apparently. As a Rails dev though, Crystal does look very promising, and as a TypeScript dev, it has optional strong typing with a syntax that is much, much more appealing than RBS.
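For anyone who hasn't seen the comparison being drawn here: in RBS (Ruby 3's signature format), types live in a separate `sig/` file rather than inline as in Crystal or TypeScript. A hypothetical example:

```
# sig/greeter.rbs -- hypothetical signature file for a Greeter class
class Greeter
  def initialize: (String name) -> void
  def greet: () -> String
end
```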


Crystal is schizophrenic. It was for a long time pushed by people as a faster Ruby, but it's diverged in seemingly pointless ways, to the point where the similarity is not very interesting.

If I was to migrate away from Ruby, Crystal would not be a candidate.


I tried it out at some point, and it's a nice language I guess, but man, the compile times. It's not worth it, just learn C or C++.


> It's not worth it, just learn C or C++.

And your productivity on your team plummets as a result.

Rust also has slow build times [0], but it didn't stop people from using it.

[0] https://pingcap.com/blog/rust-compilation-model-calamity


well... certainly it has stopped some people from using it, though for many the compilation costs are worth it


> for many the compilation costs are worth it

If the rust syntax and borrow checker wasn't masochistic enough...


If compiling 7 lines of code in release mode takes a few minutes in Rust, just like it does in Crystal (I don't know exactly how long, I got bored and gave up, so it's possibly more), then I wouldn't touch that language with a ten foot pole either. It's fine to make trade-offs, it's just not a trade-off I'm personally willing to make. As to the productivity thing, it's debatable I guess.


Minutes? No way! I have a Crystal code base with tens of thousands of lines and it takes like 10-30 seconds.

The solution is to start breaking up the monolith so you only compile sections at a time.


It doesn't, a few seconds in debug mode, maybe 10-15s for release. I actually have a couple scripts that use `crystal run` instead of a compiled binary.


> compiling 7 lines of code in release mode takes few minutes in Rust, just like it does in Crystal

So when did you start using a Nokia 3310 as your dev machine?


Getting sick of people pumping Crystal in every single Ruby posting. Just stop.


It is the same in any language post here or on reddit.

Language designers all optimise for different needs and if they find an audience they are doing something right for someone even if it isn't you.


The title of this post is complaining about Ruby's speed, and it only makes sense to provide an alternative solution.

It's good for people to know of alternatives.

So, No.


Crystal has a similar syntax. Does anyone else hate macros?


This argument has always confused me. Syntax is not, and never was, the point or beauty of Ruby. The semantics, deriving from the pure object model, are. Crystal's object model is just light years away from Ruby's. I suppose if one has never really achieved much proficiency in Ruby then they can be equated.

Personally, I'd sooner code in Go than in Crystal. The tooling is way better.

Mruby however would be my first choice for "Ruby but with compilation."

http://mruby.org/

If that won't work, then I'm looking to Go next.


Syntax very much matters to the Ruby community. Or we'd all just have used Smalltalk, which has near the same object model.

A pretty syntax on top of a clean object model is a key selling point of Ruby.

I agree with you about Crystal though. No interest in it.

