The thing is, you can write perfectly normal Fortran code and instantly gain speedups (CUDA, distributed computing with OpenMP, etc.) just by enabling some compiler flags. You can't do this in C/C++, where you have to deliberately write your program to use those technologies. Also, vector/matrix operations are first class in Fortran, so you don't need to rely on third-party libs.
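To give a rough idea of what "first class" means here (a minimal sketch, not taken from any particular project), whole-array arithmetic and matrix products are built into the language as intrinsics:

    program array_ops
      implicit none
      real :: a(3,3), b(3,3), c(3,3), v(3), w(3)

      call random_number(a)
      call random_number(b)
      call random_number(v)

      c = matmul(a, b)     ! matrix-matrix product, no external library
      w = matmul(a, v)     ! matrix-vector product
      c = a * b + 2.0      ! elementwise arithmetic on whole arrays
      print *, dot_product(v, w), sum(c), maxval(c)
    end program array_ops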
> The thing is, you can write perfectly normal Fortran code and instantly gain speedups (CUDA, distributed computing with OpenMP, etc.) just by enabling some compiler flags.
I'm not sure I understand you correctly. Can you give examples of such flags?
> Also, vector/matrix operations are first class in Fortran, so you don't need to rely on third-party libs.
It may be useful as long as you're hell-bent on not using libraries (which is somewhat contrary to one of the pro-Fortran arguments: that Fortran has lots of libraries that are tested and ready to use).
This is weak consolation, though, since anything complex enough ends up dealing with custom matrix/vector types for sparse matrices, or with data types used in parallel computations.
Not sure about gfortran, but commercial Fortran compilers support automatic parallelization (e.g. the Intel Fortran compiler's -parallel flag [1]). You can even go as far as parallelizing your program across a cluster of machines via OpenMP by simply sprinkling directives into your program to mark the code that must be parallelized. I remembered incorrectly about CUDA: the PGI Fortran compiler supports CUDA, but you still need to use it deliberately in your code, though there are projects that attempt to make this automatic (not sure if they've really taken off).
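As a sketch of the "sprinkling directives" part (the subroutine name is just an illustrative choice; the flags in the comment are the usual ones, -qopenmp for Intel and -fopenmp for gfortran), an ordinary loop only needs a directive around it, and the directive is treated as a plain comment when OpenMP isn't enabled:

    ! Compile with OpenMP enabled (e.g. ifort -qopenmp, or gfortran -fopenmp);
    ! without the flag the !$omp lines are ignored as ordinary comments.
    subroutine saxpy_omp(n, a, x, y)
      implicit none
      integer, intent(in) :: n
      real, intent(in)    :: a, x(n)
      real, intent(inout) :: y(n)
      integer :: i

      !$omp parallel do
      do i = 1, n
         y(i) = a * x(i) + y(i)
      end do
      !$omp end parallel do
    end subroutine saxpy_omp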
> It may be useful as long as you're hell-bent on not using libraries (which is somewhat contrary to one of the pro-Fortran arguments: that Fortran has lots of libraries that are tested and ready to use).
Yes, libraries are still used, but typically only for data input/output. For example, NetCDF is a popular data format, and many Fortran projects support it via a third-party library. But complex matrix computation is essentially what Fortran was made for, so it's not typical to use a third-party library for that. Most big Fortran projects in the area I was involved with (meteorology and air pollution) use a minimal amount of third-party libraries and mostly rely on built-in Fortran functionality, with optimization left to the compiler (typically Intel or PGI Fortran). There is definitely code reuse, but it's in the form of scientists collecting snippets of useful algorithms over the years and copying them into a project when needed.
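As a rough sketch of that I/O-only kind of library use, assuming the netcdf-fortran bindings (the file name "sample.nc", the variable name "temperature", and the array shape are made up, and error checking on the returned statuses is omitted):

    program read_netcdf
      use netcdf
      implicit none
      integer :: status, ncid, varid
      real :: temperature(10, 10)

      ! Open the file read-only, look up one variable, read it, close the file.
      status = nf90_open("sample.nc", NF90_NOWRITE, ncid)
      status = nf90_inq_varid(ncid, "temperature", varid)
      status = nf90_get_var(ncid, varid, temperature)
      status = nf90_close(ncid)

      print *, "mean temperature:", sum(temperature) / size(temperature)
    end program read_netcdf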
On a side note: having (semi)automatic parallelization with code generation for GPGPU would be very nice.
> There is definitely code reuse, but it's in the form of scientists collecting snippets of useful algorithms over the years and copying them into a project when needed.
Well, doing complex matrix calculations yourself in C/C++ without a third-party library is hard. Unless you write everything yourself or specifically use Intel's MKL library, the benefit of enabling automatic parallelization in C/C++ won't be as impactful as in Fortran, where it's common to do all calculations without any third-party math library.