I can see how, in the 70s, it could be taken as a given that the project would succeed if implemented. But that's a full 20 years after this machine: completely different technology, and probably an order of magnitude lower cost per unit of performance. So the question is, what workloads were so high-value that they justified both the risk of the project failing and the enormous cost per compute unit?