
> It’s a little horrifying that Norvig doesn’t seem to understand these nuances.

Yes, it's quite incredible actually. It is one thing to speculate about brain / cognition domains that nobody knows much about using a pseudo-language of "learning" or "intelligence" (or even the original sin: "neural networks"), and it is another level altogether to expose publicly how much one is subject to "I have got a nice little hammer and to me everything is a nail" pathologies.

Conflating the statistical verification of physical laws / theories with the theories themselves reveals in much clearer terms how limited a conceptual model the AI crowd wants to inflict on us. We do know more about physical laws than we do about the brain, and they don't come about the way they say. Go back to school.




Newtonian gravitation is absolutely “we propose a model, we fit it, here is the result, here are the error bars on _g_”.

Einsteinian gravitation is a better model. It fits the data better, that’s how we know.

These are models, exactly in line with Breiman’s “first culture”, data modelling - the model has been designed according to intuition about the underlying mechanism.

You seem to be arguing that if a model is not derived from first principles it is not science. This is ahistorical nonsense. Science has always been full of phenomenological models which haven’t lent themselves to direct interpretation. That doesn’t make the model wrong, it makes it uninterpretable. Some of them become interpretable later, some are replaced by interpretable models, often they give key insight to underlying relationships which must hold which then guide the search for a mechanism, and nearly always they’re profoundly useful. Consider Lev Landau’s order parameter model.

“Physical law” is just a synonym for “extremely high-confidence interpretable model”.


> “Physical law” is just a synonym for “extremely high-confidence interpretable model”.

You are abusing the term "model" to the point of ridicule. What is the "model" in Newtonian gravity? An iconic relation to which you might (over)fit some data "AI style"? The associated concepts (action/reaction, momentum) that float somewhere around the "formula" and are absolutely necessary to understand and use it but are not reflected in the slightest in this quantitative expression? The drastically different ways you can re-express the formula, which nevertheless all contain exactly the same physical content? The explanatory (modeling) potential of the concepts (what happens when I split a mass in two?) that suggests a more profound link to what is actually being described?

At a very basic level it is true that we verify the conceptual constructs that we call physical laws by deriving certain statistically testable relations. But to argue that these derivations are all there is to it is simply... wrong.


> What is the "model" in Newtonian gravity?

F = G * ((m1 * m2) / r^2), plus, yes, the concepts of distance and inertial mass.

G (and the bounds on the precision of our measurement of G) is derived by fitting from experiment. It's a regression model which happens to have R^2 extremely close to 1, and that's why we can treat it as (nearly-)always true.
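To make that concrete, here is a minimal sketch of "fit the model, report the error bars" on synthetic data (the masses, separations, and noise level are all invented for illustration):

    # Toy version of "propose a model, fit it, report the error bars":
    # fit G in F = G*m1*m2/r^2 to synthetic measurements.
    import numpy as np
    from scipy.optimize import curve_fit

    def newton(X, G):
        m1, m2, r = X
        return G * m1 * m2 / r**2

    rng = np.random.default_rng(0)
    m1 = rng.uniform(1.0, 10.0, 200)     # masses in kg (invented)
    m2 = rng.uniform(1.0, 10.0, 200)
    r = rng.uniform(0.1, 1.0, 200)       # separations in m (invented)
    F_true = newton((m1, m2, r), 6.674e-11)
    F_obs = F_true + rng.normal(0, 0.02 * F_true.std(), 200)  # noise

    (G_hat,), cov = curve_fit(newton, (m1, m2, r), F_obs, p0=[1e-10])
    resid = F_obs - newton((m1, m2, r), G_hat)
    r2 = 1 - (resid**2).sum() / ((F_obs - F_obs.mean())**2).sum()
    print(f"G = {G_hat:.3e} +/- {np.sqrt(cov[0, 0]):.1e}, R^2 = {r2:.4f}")

The fitted G comes back with an uncertainty (the square root of the covariance) and an R^2 near 1 — exactly the "result plus error bars" shape of the claim above.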

It's absolutely a statement about observed behavior which we then _interpret_ (incompletely, but usefully, as Einstein showed) as an inverse square law. This is precisely a _model_ of behavior. That model can be derived from first principles or it can be purely phenomenological, and different models are useful for different intellectual tasks.

Once you have a statement which is nearly always true, you can ask _why_ it's nearly always true, and that's very useful, but "law" really _does_ just mean "statement to which we haven't found counterexamples yet".

But the model is just a model. Science is the process of building, interpreting and invalidating models, and different pieces of science live at different _points_ on this continuum. Large language models in linguistics sit at an extreme point of it, but even there, models have designed-in inductive biases (e.g. the attention mechanism in most LLMs) which reflect the modeller's hypotheses about the structure of the problem.
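For a sense of what such a designed-in bias looks like, here is a toy sketch of scaled dot-product attention — not taken from any particular LLM, with made-up shapes and data:

    # Toy scaled dot-product attention. The designed-in hypothesis is
    # that any position may draw on any other position, weighted by
    # query-key similarity.
    import numpy as np

    def attention(Q, K, V):
        scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise similarity
        scores -= scores.max(axis=-1, keepdims=True)
        w = np.exp(scores)
        w /= w.sum(axis=-1, keepdims=True)        # softmax over positions
        return w @ V                              # mix values by weight

    rng = np.random.default_rng(0)
    Q = K = V = rng.normal(size=(5, 8))           # 5 tokens, dim 8 (toy)
    print(attention(Q, K, V).shape)               # -> (5, 8)

Nothing in the data forces that all-pairs structure; it is an assumption the modeller built in, which is the point.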

You seem, like Chomsky, to want science to be much more Platonic and profound than it actually is. That's your choice.


Science consists of theories, not just models. This newspeak definition of science as just models is a form of scientism. It's reductionism, which is why Chomsky is against it, and attempting to rebut what is essentially an accusation of reductionism by calling it Platonism is a mistake.


You're confusing _scientific_ theories (rebuttable empirical statements, typically formalized through mathematical models) and mathematical theories (statements of fact derived from an axiomatized system).

Chomsky wants linguistics to be mathematics.


For some reason you want to hammer all of science into some sort of helpless pre-science: a more-or-less direct statistical fit of measurements to a nondescript, intrinsically meaningless mathematical formula — a procedure somebody might apply to an observational set using a generic family of formulae even with absolutely no clue of what is going on.

We know it doesn't work this way once we actually start building a real model and move beyond what is really just exploratory data science. We'd be nowhere with physical science if the mental models we construct (which have nothing particularly numerical about them in the first instance) did not actually have a coherence that is both amazing and extremely productive.

This is not some Platonic drift, nor remotely philosophical. This is how physics is done. Even casual familiarity with the history of physics and the writings of its key thinkers points to this.

Heck, in quantum mechanics the relation between the physical/mathematical model people came up with and the actual "measurement" is profoundly non-trivial and does not remotely fit your bizarre reductionist "programme".


Thank you.





