The fact that this is only a 50% time increase is exactly why casting the piece as an important optimization makes it illegitimate.
If it took 10x or 100x the time, then you'd have something to talk about. 1.5x? Not worth mentioning. It's crappy hyperbole that linkbaited a bunch of people into wasting their time reading it.
If you're worrying about 1.5x speedups in the non-tight-loop parts of C++ code, I'd contend you're probably looking in the wrong place. Even in a tight loop, 33% faster isn't much to write home about.
The thing about all this is: it doesn't make your web app load a "teensy bit faster". The time required to make millions of strings like this is still far, far below the threshold of human perception.
I'd contend that if accommodating this made the program's memory footprint any larger (with larger code pages), it could very well slow your startup time: a slightly bigger code page could cause a processor cache miss to load the extra module you had to write to handle artificially short strings.
This isn't handwaving at the issue of dynamic language runtime speed. That's a strawman of your own construction. This is me pissed off at such a drama-filled title being slapped on an inconsequentially small speed difference in Ruby strings.
I've been similarly pissed off in meetings where people spent 2 hours arguing for an "optimization" that would have saved a total of ~400 ms if we sold 100 million units and each ran for an average of 20 years.
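To make the absurdity concrete, here's the back-of-the-envelope arithmetic behind that anecdote (using the ~400 ms and 100 million figures above):

```ruby
# ~400 ms saved in total, spread across 100 million units,
# each running for an average of 20 years.
total_saved_ms = 400.0
units          = 100_000_000

# Convert ms to ns, then divide across all units.
per_unit_ns = total_saved_ms * 1_000_000 / units
# => 4.0 ns saved per unit over its entire 20-year lifetime
```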
This isn't about scaling. This isn't about optimization. This isn't about making dynamic languages faster. This is about saving functionally no time, ever, and usually wasting people's time and making the code slower with premature optimization.
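For reference, the kind of microbenchmark behind the disputed speedup can be sketched like this. It assumes a 64-bit MRI build, where strings of up to 23 bytes are embedded directly in the object slot while longer strings require a separate heap allocation; actual timings will vary by Ruby version and machine, and the difference only shows up over hundreds of thousands of allocations:

```ruby
require 'benchmark'

short = "a" * 23  # fits in MRI's embedded-string slot (64-bit builds)
long  = "a" * 24  # needs an extra heap-allocated buffer

n = 200_000
Benchmark.bm(10) do |x|
  x.report("23 bytes:") { n.times { short.dup } }
  x.report("24 bytes:") { n.times { long.dup } }
end
```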
Did you read the article? The very first sentence is:
"Obviously this is an utterly preposterous statement: it’s hard to think of a more ridiculous and esoteric coding requirement."
Or how about this:
"Don’t worry! I don’t think you should refactor all your code to be sure you have strings of length 23 or less. That would obviously be ridiculous. The speed increase sounds impressive, but actually the time differences I measured were insignificant until I allocated 100,000s or millions of strings – how many Ruby applications will need to create this many string values? And even if you do need to create many string objects, the pain and confusion caused by using only short strings would overwhelm any performance benefit you might get.
For me I really think understanding something about how the Ruby interpreter works is just fun! I enjoyed taking a look through a microscope at these sorts of tiny details. I do also suspect having some understanding of how Matz and his colleagues actually implemented the language will eventually help me to use Ruby in a wiser and more knowledgeable way. We’ll have to see… stay tuned for some more posts about Ruby internals!"
I truly do not understand the emotional reaction you were having when you wrote this comment. It sounds like you've had some issues in the past with your time being wasted debating pointless optimizations, and that's what you were reacting to.
The article is not advocating pointless optimizations. The article is simply exploring a cool little piece of MRI.
It sounds like you have a lot of experience with dealing with optimization, though. It'd be cool if you wrote something educational about optimization.