This is pretty much the idea I had about replacing MPI for my HPC projects - I'll certainly check this out once I have Python bindings for my Fortran code (yes, I'm serious ;-) ). Are there any benchmarks available that compare this to pure Fortran+MPI or C/C++ + MPI (with the lsci version using wrapped versions of the same code)? I think Erlang's respawning of processes could be a great stability win if you implement proper checkpointing facilities that can be passed through to the Python code, as well as to C/Fortran code wrapped with Python. Stability is the big problem to solve for exascale applications - but when it comes to HPC, performance kind of needs to be proven first.
C bindings are compatible with Fortran; you mainly have to think about two issues:
* depending on the compiler and its settings, Fortran function names get one or two underscores appended, and module procedures additionally get their module name prepended.
* the index order for multidimensional arrays is reversed in Fortran (column-major, versus C's row-major).
Other than that, the data types basically just work. So as long as you have C bindings, that's not what I'm worried about; it's rather the overhead of the Erlang VM compared to MPI (although for typical HPC loads this probably wouldn't even be an issue as long as it's not an order of magnitude slower, since most of them are not network-bound if done right).