Did I write that it's impossible? Of course you can always reimplement code with C extensions. The fallacy I was talking about is the assumption that this will stay limited to a few hotspots. For non-trivial work that's either not true, or, if it is true, it can hardly be shown with high certainty before the implementation starts. In other words, you don't know in advance whether the faster time to a working prototype won't be offset by a longer period of optimization, and possibly by less maintainable code, because you now have two layers interacting with each other, with the interfaces drawn along the boundaries between performant and non-performant code rather than along logical ones (a minimal sketch of that pattern is at the end of this comment).

All I'm saying is that one should think a bit further ahead when choosing the implementation language / frameworks / persistence layer: How important is performance (e.g. how many requests per second do you want to serve per node)? How important is it to have a working prototype quickly? How important is fault tolerance? How much computation are you doing anyway, as opposed to I/O-bound work? Depending on the answers, the solution could be Python/Ruby, or Erlang, but it could also be Java/Scala or even C/Fortran. Always going the dynamic-language route first might hold up in 95% of cases, but the remaining 5% can hurt a lot (or be a big missed opportunity to do something with high impact, precisely because those 5% of cases are ones most programmers never touch).
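
For concreteness, here's roughly what the hotspot-as-C-extension pattern looks like. This is a minimal sketch, not anyone's actual code: the module name _hotspot and the function dot() are made up for illustration (imagine a profiler singled out an inner-product loop in a pure-Python prototype). The thing to notice is that the module boundary sits wherever the profiler says, not where the design would put it:

    /* _hotspot.c -- hypothetical CPython extension replacing a profiled
     * pure-Python hot loop. Build it as a shared module (e.g. with a
     * setuptools Extension) and import it in place of the Python version. */
    #define PY_SSIZE_T_CLEAN
    #include <Python.h>

    /* The hot loop, moved across the Python/C boundary. */
    static PyObject *
    dot(PyObject *self, PyObject *args)
    {
        PyObject *a, *b;
        if (!PyArg_ParseTuple(args, "OO", &a, &b))
            return NULL;
        if (!PyList_Check(a) || !PyList_Check(b)) {
            PyErr_SetString(PyExc_TypeError, "expected two lists");
            return NULL;
        }

        Py_ssize_t n = PyList_Size(a);
        if (PyList_Size(b) != n) {
            PyErr_SetString(PyExc_ValueError, "lists must have equal length");
            return NULL;
        }

        double acc = 0.0;
        for (Py_ssize_t i = 0; i < n; i++) {
            /* Every element still crosses the boundary as a PyObject:
             * the interface is dictated by performance, not by the
             * program's logical structure. */
            double x = PyFloat_AsDouble(PyList_GetItem(a, i));
            double y = PyFloat_AsDouble(PyList_GetItem(b, i));
            if (PyErr_Occurred())
                return NULL;
            acc += x * y;
        }
        return PyFloat_FromDouble(acc);
    }

    static PyMethodDef methods[] = {
        {"dot", dot, METH_VARARGS, "Dot product of two lists of floats."},
        {NULL, NULL, 0, NULL}
    };

    static struct PyModuleDef module = {
        PyModuleDef_HEAD_INIT, "_hotspot", NULL, -1, methods
    };

    PyMODINIT_FUNC
    PyInit__hotspot(void)
    {
        return PyModule_Create(&module);
    }

On the Python side you'd then call _hotspot.dot(xs, ys) instead of the original function. That's fine for one self-contained kernel; the maintenance cost shows up when the profile shifts and more of these boundary-shaped modules accumulate.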