A huge source of uncertainty in molecular dynamics comes from the force models, which are crude approximations of the underlying electronic structure. There is a great deal of active research on improved force models, many of which are implemented in AMBER (http://ambermd.org) and can be used from most popular MD packages. Other approximations, such as the cutoff separating short-range from long-range force evaluation, must also be validated.
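To make the cutoff point concrete, here is a minimal sketch of that kind of sanity check using OpenMM's Python API, which reads AMBER files directly. The "system.prmtop"/"system.inpcrd" filenames and the cutoff values are hypothetical placeholders; the idea is just that if the potential energy swings meaningfully as the cutoff moves, the approximation deserves a closer look.

```python
# A minimal sketch, not a production validation protocol.
# "system.prmtop" / "system.inpcrd" are hypothetical placeholders for an
# AMBER-parameterized system; the cutoff values are illustrative.
import openmm as mm
import openmm.app as app
import openmm.unit as unit

prmtop = app.AmberPrmtopFile("system.prmtop")
inpcrd = app.AmberInpcrdFile("system.inpcrd")

for cutoff in (0.8, 1.0, 1.2):  # nanometers
    system = prmtop.createSystem(nonbondedMethod=app.PME,
                                 nonbondedCutoff=cutoff * unit.nanometer,
                                 constraints=app.HBonds)
    context = mm.Context(system, mm.VerletIntegrator(2.0 * unit.femtoseconds))
    context.setPositions(inpcrd.positions)
    if inpcrd.boxVectors is not None:
        context.setPeriodicBoxVectors(*inpcrd.boxVectors)
    energy = context.getState(getEnergy=True).getPotentialEnergy()
    print(f"cutoff {cutoff} nm -> {energy}")
```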
Anton is built for a very specific configuration; the modeling assumptions may or may not be valid for a given scientific or engineering experiment. Its value is in being able to rapidly run experiments for which the modeling assumptions have already been validated (and to run those configurations for longer simulated time), not in questioning the modeling assumptions or developing better models.
This is a very underappreciated point. For example, in pharmaceutical drug discovery, what often matters most is the ability to play around with different modeling assumptions, and to do large-scale "search engine" work on different compound properties and how they might affect desired overall properties of the final product.
I wonder if one reason this highly specific computational architecture is deployed in only two places (as far as I can tell) is that it's not very applicable to mainstream molecular dynamics use cases. A lot of the business around pharma drug discovery seems to be pushing toward cloud computing solutions that offer better analytics APIs for engineers, rather than beefing up hardware for larger-scale simulations.
I don't see how this follows from the design. Anton is general enough that a wide range of force field terms can be implemented efficiently, so frankly, I'd use it to validate terms rather than to predict anything.
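As a sketch of what "validating a term" can look like at small scale (this uses OpenMM, not Anton; the two-particle argon-like system and parameter values are purely illustrative): re-implement the standard 12-6 Lennard-Jones term as a CustomNonbondedForce and check that it reproduces the built-in NonbondedForce energy before trusting any variant of it.

```python
# A minimal sketch: verify a hand-written force field term against a
# reference implementation on a toy system. Values are illustrative.
import openmm as mm
import openmm.unit as unit

sigma = 0.3 * unit.nanometer
epsilon = 0.5 * unit.kilojoule_per_mole

def two_particle_system(force):
    # Fresh System per force, since a Force belongs to exactly one System.
    system = mm.System()
    system.addParticle(39.9)  # argon-like mass (amu), purely illustrative
    system.addParticle(39.9)
    system.addForce(force)
    return system

# Reference: OpenMM's built-in nonbonded force (charges set to zero,
# so only the Lennard-Jones term contributes).
ref = mm.NonbondedForce()
ref.setNonbondedMethod(mm.NonbondedForce.NoCutoff)
for _ in range(2):
    ref.addParticle(0.0, sigma, epsilon)  # charge, sigma, epsilon

# Candidate: the same 12-6 functional form written out by hand.
cand = mm.CustomNonbondedForce("4*eps*((sig/r)^12 - (sig/r)^6)")
cand.addGlobalParameter("sig", sigma)
cand.addGlobalParameter("eps", epsilon)
cand.setNonbondedMethod(mm.CustomNonbondedForce.NoCutoff)
for _ in range(2):
    cand.addParticle([])

positions = [mm.Vec3(0, 0, 0), mm.Vec3(0.4, 0, 0)] * unit.nanometer

# Both should print the same potential energy.
for force in (ref, cand):
    context = mm.Context(two_particle_system(force),
                         mm.VerletIntegrator(1.0 * unit.femtoseconds))
    context.setPositions(positions)
    print(context.getState(getEnergy=True).getPotentialEnergy())
```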
It is an interesting time to bring out a new architecture. There was a time when any short-term advantage a custom architecture had over commodity chips would evaporate within four years, less than the depreciation time of the machine. However, as scaling commodity machines becomes harder, and as they scale in less useful ways (like the P4 debacle of focusing on video streaming performance), the economics once again shift toward more bespoke machines. Some of these designs recognize that memory throughput is just as important as instructions per clock, so you get more general-purpose architectures with the memory bandwidth of modern GPUs. That opens up some interesting opportunities for large-memory computation spaces.
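To see why the memory-throughput point bites, here's a rough sketch (plain NumPy, nothing Anton-specific; the array size and the GB/s arithmetic are illustrative, and a real STREAM run is far more careful) that times a large array copy to estimate sustained memory bandwidth. On typical commodity DDR this lands well below the headline bandwidth of a modern GPU.

```python
# A rough bandwidth probe, not a rigorous benchmark.
import time
import numpy as np

n = 100_000_000                 # ~0.8 GB of float64 per array
a = np.ones(n)
b = np.empty_like(a)
np.copyto(b, a)                 # warm-up: commit pages before timing

t0 = time.perf_counter()
np.copyto(b, a)
dt = time.perf_counter() - t0

# A copy streams each array once: n*8 bytes read plus n*8 bytes written.
print(f"effective bandwidth ~ {2 * a.nbytes / dt / 1e9:.1f} GB/s")
```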