I don't think this makes much sense at all. How can a GPT-based machine learning solution arrive at a model that can be explained? Explanation is not simply understanding what the model knows about a system. If that were true, then SHAP values and partial dependence plots would be all we need. We also need to understand how a model arrived at a given solution, and not simply which neurons fired, but what the actual structure of the system under study is.

You could have a full 3D view of a flowing fluid with an unlimited number of trackable particles and a perfect computer to train a neural network that predicts where the particles will go. That model will probably perform very well, even with a high degree of turbulence in the system. However, you will be no closer to producing the Navier-Stokes equations than you were when you started. The model cannot tell you what those equations are, even though it can approximate them to high precision.

Why would adding an LLM to this process suddenly produce those equations? Because the LLM scraped the internet and will more or less figure out that it's a fluid and assume Navier-Stokes applies? What if we replace the system of study with the singularity in a black hole, where we don't understand the physics? How will the LLM explain that?
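For concreteness (and assuming the incompressible case, which I haven't specified above), the kind of closed-form structure the surrogate cannot hand back looks like:

\[
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu \nabla^{2}\mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0
\]

A network that merely predicts particle trajectories encodes none of these symbols or conservation laws explicitly; it only reproduces their consequences.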