You validate a random selection of results with actual experiments. If you check 10k results and at most 1 of them contradicts the prediction, you're at roughly 99.99% accuracy.
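For concreteness, here's a minimal sketch of that arithmetic (my own illustration, not anything from the thread); it assumes Python with scipy available and uses the 10k-checks / 1-contradiction numbers above to bound the error rate of the whole result set from the random sample:

```python
from scipy.stats import beta

n_checked = 10_000  # size of the random validation sample
n_failed = 1        # validations that contradicted the prediction

# Point estimate of accuracy over the whole result set.
print(f"point estimate: {1 - n_failed / n_checked:.2%}")  # 99.99%

# Clopper-Pearson one-sided 95% upper bound on the true failure rate,
# i.e. how bad the unvalidated results could plausibly be given this sample.
upper_failure = beta.ppf(0.95, n_failed + 1, n_checked - n_failed)
print(f"95% lower bound on accuracy: {1 - upper_failure:.3%}")  # roughly 99.95%
```

Even the conservative bound stays above 99.9%, which is the point: a modest random sample tells you a lot about the accuracy of everything you didn't validate.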
10k experiments may seem like a lot, but keep in mind that if we can engineer nanobots out of proteins the same way we build engines out of steel today, the number of "parts" we'd want to build with such biological nanotech could easily run into the millions.
And this kind of AI may very well be as useful for such tech as CAD is today. Or rather, it could be CAD plus the engineer.
That’s the bottleneck the model was trying to avoid in the first place. The goal of science is to come up with models whose predictions we don’t need to validate before use, and getting there is inherently iterative.
Nanobots are more sci-fi magic than real-world possibility. In the real world we are stuck with things closer to highly specialized cellular machinery than some do-anything grey goo. Growing buildings from local materials sounds awesome until you realize just how slowly trees grow, and why.
> That’s the bottleneck the model was trying to avoid in the first place.
Some real-world validation is always needed, but if the validations that are performed show high accuracy, the number of required experiments drops sharply.
> Nanobots are more sci-fi magic than real-world possibility.
The underlying physics isn’t going to change in 100 years. Individual components can be nanoscale inside controlled environments, but to operate independently you simply need more atoms.