It's always fun to look at different (micro)benchmarks comparing languages/frameworks/systems to each other.
I'm curious about real-world examples where a change (preferably measurement/profiling driven) has led to a significant positive outcome in performance, code quality/maintainability, etc.
Did changing from Python to Go make it so that you could avoid horizontal scaling for the size of your app, reducing operational complexity?
Did switching from Dart to Rails speed up development because of the wide range of libraries available, speeding up time to market?
While most bottlenecks exist outside of languages/frameworks, I find it interesting when the language/framework actually made a difference.
An example I'll use is switching an internal library from C# to F#:
As designed, the module mutated 3 large classes through a pipeline of operations to determine what further work was needed.
I incrementally rewrote this module in type-driven F#, with 63 types modelling the data transformations so that the desired outcome was compiler-verified. In the process, 3 bugs were fixed and 12 additional bugs were discovered; while they were edge cases, a couple of them matched old tickets whose last comment in the ticketing system was "unable to reproduce".
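To make "type-driven" concrete, here is a minimal sketch of the style (the domain and all names are invented, not the actual module): each pipeline stage produces its own type, so the compiler rejects any flow that skips a step or feeds a stage data it was never meant to see.

```fsharp
// Hypothetical sketch of the style (invented domain, not the actual module):
// each pipeline stage returns its own type, so a later stage cannot be handed
// data that skipped or failed an earlier stage.
type RawOrder       = { Id: int; Lines: string list }
type ValidatedOrder = { Id: int; Lines: string list }   // only `validate` can create this
type PricedOrder    = { Id: int; Total: decimal }

type ValidationError =
    | EmptyOrder
    | BlankLine of index: int

let validate (raw: RawOrder) : Result<ValidatedOrder, ValidationError> =
    if List.isEmpty raw.Lines then Error EmptyOrder
    else
        match raw.Lines |> List.tryFindIndex (fun l -> l.Trim() = "") with
        | Some i -> Error (BlankLine i)
        | None   -> Ok { Id = raw.Id; Lines = raw.Lines }

let price (order: ValidatedOrder) : PricedOrder =
    // placeholder pricing rule, purely for illustration
    { Id = order.Id; Total = decimal (List.length order.Lines) * 9.99m }

// The only way to obtain a PricedOrder is to have gone through validation first.
let processOrder (raw: RawOrder) : Result<PricedOrder, ValidationError> =
    raw |> validate |> Result.map price
```

Scaled up to dozens of types, the same idea turns many invalid intermediate states into code that simply doesn't compile, rather than cases a test suite has to remember to cover.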
This could have been done in C#, and because I did it in F# it is most likely slightly more difficult for the other team members to jump into. It probably also uses more memory to represent the types than a C# version would.
In this case, however, the trade-offs were worth it, and I've been told the module has barely needed to be touched since.
The script, originally written in R and rewritten in Fortran, optimized the placement of samplers in a building in order to maximize the probability of detecting airborne pollutants, or to minimize the expected time to detection. The rewrite cut the runtime from 2-3 days to under an hour.
Some of the speedup was intrinsic to the interpreted/compiled divide. However, most of the speedup came from the greater control Fortran gave over how data was laid out in memory. This made it easier for the code to be explicit about memory re-use, which was a big help when we were iterating over millions of networks.
Re-using memory was helpful in two ways, I think. First, it avoided wanton creation and destruction of objects. Second, and more importantly, it allowed bootstrapping the work already invested in evaluating network `N` when it came time to evaluate a nearly-identical network `N+1`. Of course, I could have made the same algorithms work in R, but languages like C or Fortran, which put you more in the driver's seat, make it a little easier to think through the machine-level consequences of coding decisions.
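Purely to illustrate that reuse pattern (the original code was Fortran; the F# below and every name in it are invented for this sketch): allocate the scoring buffers once, up front, and let each candidate network update them in place, so evaluating the next network starts from the work already done for the previous one.

```fsharp
// Sketch of the reuse idea only; not the author's Fortran, all names invented.
let nScenarios = 10_000
let nLocations = 200

// Per-(location, scenario) detection score; allocated exactly once.
let rng = System.Random 1
let coverage = Array2D.init nLocations nScenarios (fun _ _ -> rng.NextDouble())

// Best score achieved for each scenario by the samplers placed so far;
// this buffer is reused and updated in place across all candidate networks.
let best = Array.zeroCreate<float> nScenarios

let addSampler location =
    for s in 0 .. nScenarios - 1 do
        if coverage.[location, s] > best.[s] then
            best.[s] <- coverage.[location, s]

// Growing the network sampler by sampler: each addition is an incremental
// update to the existing buffer, not a from-scratch evaluation.
[ 10; 42; 137 ] |> List.iter addSampler
printfn "expected detection score: %f" (Array.average best)
```

Allocating fresh copies of these arrays for every candidate network is exactly the kind of churn the rewrite was avoiding.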
That experience actually taught me something interesting about user expectations. When the Fortran version was done, my users were so accustomed to waiting a few days for their results that they didn't simply run their old problems faster. Instead, they greatly expanded the size of the problems they were willing to tackle (the size of the building, the number of uncertain parameters, and the number of samplers to place).