New evidence shows water separates into two different liquids at low temperature (birmingham.ac.uk)
169 points by voisin on Aug 21, 2022 | 94 comments



> The researchers used a colloidal model of water in their simulation, and then two widely used molecular models of water. Colloids are particles that can be a thousand times larger than a single water molecule. By virtue of their relatively bigger size, and hence slower movements, colloids are used to observe and understand physical phenomena that also occur at the much smaller atomic and molecular length scales.

> Dr Chakrabarti, a co-author, says: “This colloidal model of water provides a magnifying glass into molecular water, and enables us to unravel the secrets of water concerning the tale of two liquids.”

this seems backward to me? the colloidal model sounds like a higher level model of molecular water: the opposite of a magnifying glass. or is it that they model several molecules within the context of a single colloid, and by modeling only one colloid instead of a larger volume it becomes computationally feasible to use a more detailed model of the molecular behavior within that small space?


The key preposition is "into": it gives us the ability to peer into molecular water [from our macroscopic perspective].
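To make the "scaled-up proxy" idea concrete, here is a minimal, hypothetical sketch of the kind of simulation involved: overdamped Brownian dynamics of generic Lennard-Jones colloids in a periodic box. This is not the colloidal water model the researchers actually used, and every parameter below is an illustrative reduced unit; it only shows the basic ingredients (pair forces plus thermal noise) of a colloidal simulation.

    # Illustrative Brownian-dynamics sketch of a generic colloidal fluid.
    # NOT the paper's model; all values are made-up reduced units.
    import numpy as np

    rng = np.random.default_rng(0)

    N, L = 64, 10.0                  # number of colloids, periodic box length
    kT, gamma, dt = 1.0, 1.0, 1e-4   # reduced temperature, friction, time step
    eps, sigma = 1.0, 1.0            # Lennard-Jones energy and size scales

    # start the colloids on a 4x4x4 lattice to avoid initial overlaps
    g = np.linspace(0.5, L - 0.5, 4)
    pos = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T

    def lj_forces(pos):
        """Pairwise Lennard-Jones forces with minimum-image periodic boundaries."""
        f = np.zeros_like(pos)
        for i in range(N):
            d = pos[i] - pos                  # vectors from every particle to i
            d -= L * np.round(d / L)          # minimum-image convention
            r2 = np.einsum('ij,ij->i', d, d)
            r2[i] = np.inf                    # exclude self-interaction
            inv6 = (sigma**2 / r2) ** 3       # (sigma/r)^6
            fmag = 24 * eps * (2 * inv6**2 - inv6) / r2
            f[i] = (fmag[:, None] * d).sum(axis=0)
        return f

    def step(pos):
        """One overdamped (Euler-Maruyama) step: deterministic drift plus thermal kicks."""
        noise = rng.normal(scale=np.sqrt(2 * kT * dt / gamma), size=pos.shape)
        return (pos + dt * lj_forces(pos) / gamma + noise) % L

    for _ in range(1000):
        pos = step(pos)

Because colloids are large and slow, trajectories like this can also be watched directly under a microscope, which is the sense in which a colloidal system acts as a "magnifying glass" on molecular behaviour.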


In response to those commenters puzzling over the evidential status of computer simulations, you should know that this is currently an open question being debated in the field of Epistemology of Measurement, which aims to give a rigorous account of what it means to 'measure' something.

It is not at all obvious that simulations should be excluded from the category of 'evidence', especially when the simulation depends heavily on the input of observational data. Consider, for instance, using a simulation to interpolate the temperature in some location based on the actually observed temperatures in surrounding regions. Let's also not forget that modern precision measurement is highly dependent on making detailed computer models to predict how the measuring instruments function, blurring the line somewhat between simulation and measurement.
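As a toy stand-in for the temperature-interpolation example (not taken from any of the references), here is about the simplest possible model-mediated estimate: inverse-distance weighting of observed station readings. All station locations and temperatures below are made up.

    import numpy as np

    def idw_temperature(query, stations, temps, power=2):
        """Inverse-distance-weighted temperature estimate at `query`,
        driven entirely by the observed readings `temps` at `stations`."""
        d = np.linalg.norm(stations - query, axis=1)
        if np.any(d == 0):                 # query coincides with a station
            return float(temps[np.argmin(d)])
        w = 1.0 / d**power
        return float(np.sum(w * temps) / np.sum(w))

    # hypothetical station coordinates (km) and observed temperatures (deg C)
    stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    temps = np.array([14.2, 15.1, 13.8])
    print(idw_temperature(np.array([4.0, 4.0]), stations, temps))

Whether such a model-mediated number counts as a "measurement" of the temperature at that point is exactly the kind of question the references below examine.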

Some references:

[1] Nature: "Virtually a measurement", by Wendy Parker. https://www.nature.com/articles/s41567-020-01138-3

[2] Stanford Encyclopedia of Philosophy entry on "Measurement in Science": https://plato.stanford.edu/entries/measurement-science/#EpiM...

[3] "The problem of observational grounding: how does measuring generate evidence?" by Eran Tal. Video: https://vimeo.com/184697369 Article (open access): https://iopscience.iop.org/article/10.1088/1742-6596/772/1/0...



I'm curious about all the hate for computational science in this thread, of all the places on the internet. By all means critique the model and the simulation, but the idea of simulation itself as a scientific process?


https://youtu.be/OL6-x0modwY

That is the scientific method. And it is presented by one of our best ever thinkers.

Simulation fits right up front, in the "First we guess it" part of the method.

That is what happened here. Now don't get me wrong. It is a good guess, but that is all it is.

Next step is to compute the consequences.

Simulation applies here too. We can arrive at something we could compare to nature, the authority.

That has not happened. Until it does, words like "evidence" are well out of reasonable bounds.

Simulation as an input to the time tested, production proven scientific method is hardly controversial.

Saying that any kind of science has happened yet is and should be controversial!

No science has been done, yet we see words like "evidence" and "shows".

I do not know what "scientific process" means. When the sim work is framed this way, it feels a bit more like a P.T. Barnum-style upsell for potential money-getting than it does like science.

If you ask me, the people doing that are jumping the gun. The upsell is unnecessary.

Simulation suggests water may separate into two different liquids under these conditions...

The differences are small, but really important. Stuff like this tends to add up, and suddenly we will have a generation of people who are not well-trained skeptics able to do solid science.

Given the rapid pace of change, and how compelling this stuff can be, I feel it is important to hold the line on solid science.


By your logic, your “best ever thinker” did not do any science, ever, period - because he was a theoretical physicist.

> “I do not know what scientific process means”.

Maybe that’s the problem right there.


First, a bit of housekeeping: I did not say best ever thinker.

I did say one of our best ever thinkers.

The difference is significant!

Now, back to our discussion:

He [Feynman] would say the same!

My logic speaks to the value and importance of the scientific method. My take on that is influenced by Feynman, who is careful to make the difference between theory and the scientific method clear.

Here the method is very well presented by Feynman. His thoughts on the matter are clear and align well with this discussion.

In this case, the method has not yet been applied. Until it is, we really do not have a basis for "evidence" and "shows" being used as they are here.

That is my objection.

"Scientific process" needs some context to be useful.

Also, as written, it resembles "cheese food" rather than "cheese." And yes, I mean "scientific process" rather than "science." The cheese food people really want others to think it is cheese, but it just isn't. The scientific process people seem to really want others to think science, or "scientific method", but it just isn't.

Do you have some context to share, perhaps clarify where I have it wrong?

I will say the same thing here: this is the "don't get me wrong" part of my comment above.

The simulation results are intriguing and warrant the application of the scientific method. Let us query the authority properly and arrive at some understanding!

Once we have that, we may well be able to move the simulation from suggestive to predictive!



Well, yes!

That is the "don't get me wrong" part of my comment above.

My objection is the "upsell" in the press release, not the use of advanced numerical capabilities, nor what they suggest.


The problem is the use of the word "evidence" to describe the output of a computer simulation. That's wrong. Computer simulation outputs are not evidence. They can be good guides on how to proceed to get evidence, but that's not the same as being evidence.


I’d say it’s just a different kind of evidence. It’s obviously not on the same level as direct observation, but it can provide useful information we can use to refine our theories and home in on specific behaviours we can look for in direct observations and experiments. Saying that this information is not evidence, of any kind, seems to me to be terminological points scoring. It’s evidence for what to look for next experimentally.

I’m also annoyed by people saying things like simulation “isn’t science”. Well, obviously it’s not all of science; we can’t run a simulation and say all right, job done, we’ve finished the science. Nobody is claiming that, it’s a complete straw man. Simulations can absolutely be part of the scientific process.

I just don’t see the point in these “well actually” rebuttals of claims never actually made by these researchers, or the author of the article. It’s the same old story every time any science news gets posted here.


It's pre-evidence.

Computer simulations of real-world theoretical phenomena are propositions of potential outcomes that need to be verified through real-world testing to determine whether our modelling correlates with, or is skewing away from, reality.


> It's pre-evidence.

No, it's pre-dictions. That's what the rest of your post says. Yes, simulations are one way of finding out what your theoretical model predicts. But evidence is the information from the real world that you compare the predictions with. There is no such thing as "pre-evidence".


> I’d say it’s just a different kind of evidence.

No, it isn't. "Evidence" means something the real world tells you when you run some kind of experiment or make some kind of observation. Simulations are not the real world.

> it can provide useful information we can use to refine our theories

"Refine" in the sense of "improve the machinery our theories use to make predictions", sure. But predictions are not evidence. Evidence is the information from the real world that you compare the predictions to to see if they are correct.


I think you're probably right that it's better to call this a prediction rather than evidence, but there's no consensus on this. Right now I don't think it's reasonable to say this article is making a mistake using the term in this way. It's not wrong. Maybe it should be? Sure.


> Right now I don't think it's reasonable to say this article is making a mistake using the term in this way.

I disagree, because I think the ordinary lay person takes "evidence" to mean what I said and would agree that a computer simulation is not evidence. And this article is supposed to be for the ordinary lay person.


If HN can't agree, I see no reason to suppose all lay people would unite as one.


"Evidence" is anything that should cause you to update your beliefs. The simulations show that a certain possible phenomenon is consistent with known physics when this was not known before, and therefore they raise the likelihood that the phenomenon exists.


> "Evidence" is anything that should cause you to update your beliefs.

No, this is too broad. We are talking about testing scientific theories, not just updating a random person's beliefs. "Evidence" is what you compare theoretical predictions with to test theories.

> The simulations show that a certain possible phenomenon is consistent with known physics when this was not known before

More or less, yes.

> and therefore they raise the likelihood that the phenomenon exists.

No, this is a non sequitur. Knowing that a phenomenon is consistent with the laws of physics tells you nothing about whether that phenomenon actually exists. The set of phenomena that actually exist is too tiny compared to the set of phenomena that are consistent with the laws of physics for knowing something is a member of the latter to give any useful information about the former.


Your argument ultimately comes down to an argument from authority about the use of the word "evidence". Arguments about word definitions are uninteresting. What is substantively at stake is if a simulation like the one in the paper can provide information about a scientific phenomenon. Because a pure simulation can cause a change in beliefs, it does provide information.


> Your argument ultimately comes down to an argument from authority about the use of the word "evidence".

No, it comes down to an important substantive point about what the word "evidence" implies and that that implication is not valid. See below.

> What is substantively at stake is if a simulation like the one in the paper can provide information about a scientific phenomenon.

And the answer to that is obvious: a simulation can provide information about what a particular scientific theory predicts. It cannot provide information about what the real world actually does. Only the real world can do that. But using the word "evidence" to describe the output of a computer simulation obfuscates that vital distinction. That is why I objected to it.

For a real-world example of why obfuscating this distinction is not a good idea, see kadonoishi's post downthread.


> a simulation can provide information about what a particular scientific theory predicts. It cannot provide information about what the real world actually does.

If the simulation had come out in the negative, and it turned out this phase of H2O was totally inconsistent with all known physics, that would cause any reasonable person to assign a lower probability to the proposition that this phenomenon exists in the world. That's providing information about the world. Since the outcome would provide information in the negative, it also does in the positive.

I agree that there's an important distinction to be drawn between observational evidence vs. outcomes of simulations in terms of how they provide information, but they both provide information that was not accessible before.


> Since the outcome would provide information in the negative, it also does in the positive.

No, this doesn't follow. The two cases are not symmetric. Ruling out a phenomenon (if we assume for the sake of argument that a "negative" simulation result can actually do this--in reality things are quite a bit more complex, the simulation was not run as a binary yes/no test of a "phenomenon") rules it out. But showing that a phenomenon is possible doesn't tell you anything useful about whether it actually happens.


My problem with calling this evidence is that the simulation itself is taken as the theory being tested, without saying so. Nothing wrong with that, but it's an extremely specific theory. So specific, we can already call it wrong.


It's evidence if it can be used to falsify theories, imo. That is true of some computational science, i.e. if you use a computer to make predictions from a theory to compare the predictions with observations.


There is already a word for what you are describing, and that's proof. Evidence is a far broader term. It can be weak, strong, misleading or just indicative.

If I have a problem with the title, it's the word "shows": this evidence indicates a behaviour water may have, it doesn't show that it has it.


> It's evidence if it can be used to falsify theories

No, this is too broad, because falsifying theories takes two elements: a prediction, and the evidence that the prediction gets compared to. Only the second is evidence; the first is not. And simulations only give you the first.


Exactly, if you can falsify theories subject to the assumptions of the simulation, that’s still evidence.


> if you can falsify theories subject to the assumptions of the simulation, that’s still evidence.

No, the falsification itself is not evidence; it's a falsification. Evidence is what you compare the predictions of the simulation to, and then you call the theory falsified if the two don't match.


I spent some years doing computer simulations of cancer screening. The confusion between the output of the simulation, which is not evidence, and real evidence, led me into a nightmare of the Hall of Mirrors, where everything I attempted to learn about cancer screening turned out to be merely a reflection of my own assumptions. Everywhere I turned it was another funhouse mirror, where the computer warped my own ideas because the code I wrote was not a perfect flat reflection of what I meant, because my code introduced distortions, which took immense rigor to avoid.

The confusion between real evidence, and the output of my computer simulation which was not evidence but I _confused_ it with evidence, led me down a dark path lost in a labyrinth of funhouse mirrors of my own construction.

Listen to pdonis, people.


It's the tone of the press release.

The title style "Evidence shows water does X" tends to make people think "there is concrete evidence that physical water does X", rather than "an abstracted model of spheres intended to mimic some aspect of water predicts that water might do X." When people read the article and find out the methods within don't support the conclusions they feel are presented in the title, they understandably become somewhat hostile.

As other comments note, the actual people doing this work are typically ~very~ aware of the limitations of their models and that all results are at best interesting fictions, but this typically does not make it into public-facing science communications.

Why the press release would be designed to encourage readers to intuit stronger conclusions than are actually supported by the work is left as an exercise to the reader. How might this impact public trust in scientific institutions?


This place has lots of world class thinkers but is heavily overburdened with title janitors and tone police.

This is an actually interesting development but there’s zero discussion of it and 50 comments that want to split hairs about whether conclusions reached by simulations count as evidence or not.


Science discussion here is often lacking. I'm not sure where the right forum is for that :P

I think the topological bond analysis is actually really cool, and I think it might have wide-reaching implications in the study of liquids. I wonder, if you looked closely at liquid phases without "long-range order", whether you might occasionally see something resembling order if you looked through this lens rather than through the radial distribution function.
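For anyone who wants to poke at that question, the conventional "lens" being contrasted here, the radial distribution function g(r), is easy to compute from a snapshot of particle positions. A minimal sketch (the cubic periodic box and array shapes are assumptions for illustration, not details from the paper):

    import numpy as np

    def radial_distribution(pos, box_L, n_bins=100, r_max=None):
        """g(r): histogram of pair distances, normalized by the ideal-gas count."""
        N = len(pos)
        r_max = r_max if r_max is not None else box_L / 2
        edges = np.linspace(0.0, r_max, n_bins + 1)
        counts = np.zeros(n_bins)
        for i in range(N - 1):
            d = pos[i + 1:] - pos[i]
            d -= box_L * np.round(d / box_L)      # minimum-image convention
            r = np.linalg.norm(d, axis=1)
            counts += np.histogram(r, bins=edges)[0]
        rho = N / box_L**3                        # number density
        shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
        g = 2.0 * counts / (N * rho * shell_vol)  # factor 2: each pair counted once
        return 0.5 * (edges[1:] + edges[:-1]), g

In a disordered liquid the peaks of g(r) wash out quickly with distance; the topological bond analysis the parent comment mentions is a different way of asking whether some subtler order survives.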


Your own comment also fails to discuss the topic, instead adding yet another opinion to the meta-topic "evidence" debate (that it is "splitting hairs").


Maybe this is evidence this place does not in fact have "lots of world class thinkers"? It's classic bike-shedding.


HN can be a tough crowd, for some very good reasons. It's got a fair number of people with PhDs and real world accomplishments etc.


Simulation is a way to form a solid hypothesis. It’s not evidence though.


There's a lot of hate because the people here know how easily biased and misleading these models are. Simulation is fine; simulation with no ability to do it in real life or prove it, and then claiming it as evidence for anything, is probably very silly.


> There's a lot of hate because the people here know how easily biased and misleading these models are

If only they had thought to ask HN whether it was useful.

People doing this are EXCEPTIONALLY aware of the drawbacks and limitations of what they're doing. Nobody in the scientific world is under the impression that a simulation of a large approximation of molecules could be used to say "hey so water actually does behave exactly this way". If you go read the actual paper and not the press release, the abstract pretty much says "This phase transition is observed in a few distinct models of water, so it's probably useful to try and understand this further".

The self-aggrandising press release and statements from the scientists are pretty silly, sure, but the actual publication is not making claims about demonstrating that water does anything. And if you look at the statements quoted in the press release, most of them are talking about how the work might provide direction on where to look later.

It's just like the string theory or dark energy threads where all the software engineers who've read a pop science book pontificate about how all the physicists are just so obviously wrong and you need to do it in so-and-so way, bonus points if you get some sort of "maybe lambda calculus is the true representation of the universe"


Calm down, you’re attacking some kind of strawman representation of who is on HN. Has it occurred to you that a non-trivial subset of the HN commenters are scientists?


I believe it was a response to the comment, "...the people here know how easily biased and misleading these models are." That looks like a suggestion that HN by-and-large understands how this research is flawed.


> Calm down, you’re attacking some kind of strawman representation of who is on HN. Has it occurred to you that a non-trivial subset of the HN commenters are scientists?

Read literally any thread about science here, and it's stuffed full of people who will say "now I'm not a scientist but it's clear they're all doing it wrong". I'm not an idiot and yes, it has occurred to me that there might be some scientists here, but there's a much larger subset of arrogant software engineers who think if anybody would just listen to them all the world's problems would be solved.

This isn't specific to software engineers; at this point it's a running joke where a physicist decides "oh, if only <some other field> knew <some advanced math/physics concept>, all of their problems would be solved".


> Read literally any thread about science here, and it's stuffed full of people who will say "now I'm not a scientist but it's clear they're all doing it wrong".

This happens on literally any thread regarding computer science or programming in general as well. Perhaps you just don’t notice on topics you don’t care about?

This website isn’t moderated for wrongthink so you need to use your brain to parse what people are saying and contextualize it.


> There's a lot of hate because the people here know how easily biased and misleading these models are...simulation with no ability to do it in real life or prove it, and then claiming it as evidence for anything, is probably very silly...

... unless it's climate science, then you gotta believe it.


user "sidlls" posted the link to the actual Nature article.

https://news.ycombinator.com/item?id=32544794

The abstract makes it very clear the contribution explains the empirically observed anomalous behavior of water.


> of all the places on the internet.

Well, of all the places, I always expect HN to be the hardest on these topics, especially because being in the industry means you know 80% of press releases are buzzword-riddled PR bullshit.


Mostly a combination of Ultracrepidarianism and Dunning-Kruger, I would guess. Without doubt there are many very smart people here who know a lot about their respective fields and who have lots of opinions about other fields they may not know so much about.

See also the recurrent discussion of how “current hypersonics is nothing new”, or “a machine cannot learn, it is the programmer who gives the structure”, etc.


I hope they don’t accidentally invent ice-9.


So they tested a liquid made from very large molecules, and are guessing that if the large molecules do X, then water molecules will also do X. So interesting results, and there's more research to be done.


So you're saying there's a chance for ice-nine?


Ice IX already exists :) https://en.wikipedia.org/wiki/Ice_IX


I wonder if this has anything at all to do with the bottom of a pool of cold water being 4 degrees C (because at that temperature water has maximum density).


> New evidence ... The team has used computer simulations to help explain what features distinguish the two liquids at the microscopic level

I regrettably(?) don't have the physics background to interpret the actual paper (linked in the sibling comment), but saying "computer simulations" are "evidence" seems suspect to me


And Albert Einstein only had pencil and paper!

https://press.princeton.edu/books/paperback/9780691171074/ei...

"The book examines Einstein’s theory of general relativity through the eyes of astronomers, many of whom were not convinced of the legitimacy of Einstein’s startling breakthrough. These were individuals with international reputations to uphold and benefactors and shareholders to please, yet few of them understood the new theory coming from the pen of Germany’s up-and-coming theoretical physicist, Albert Einstein. Some tried to test his theory early in its development but got no results. Others — through toil and hardship, great expense, and perseverance — concluded that it was wrong."


I think it is a reasonable position to maintain, then, that there was insufficient evidence for GR until at least the Eddington experiment. That the derivation may have seemed so inevitable to Einstein once he saw it, or that its elegance was so pleasing, is not enough.

For a more modern example, String Theory is a very attractive hypothesis to many physicists, but as yet remains beyond the reach of experimental physics. Pursue it, by all means, if the elegance makes you think you're hot on the heels of the truth, but don't jump the gun and say we have evidence if the only experiment you can run is a computational thought experiment.

In the case of the water research, what's been shown is that some computational models (read "approximations") of how water behaves match this old hypothesis that's been posed. But no experiment was conducted to show that real live water behaves this way.

Granted, it looks like the researchers did this for three _different_ models and found corroboration among them, and are saying that gives the result more punch.

I don't think folks here are trying to say this computational approach is invalid. I think there's a justified quibble with calling it "evidence" of a real world phenomenon that has never been observed.


Also, as with GR, there’s an experimental observation that doesn’t fit with the simpler model. For GR, it was the Michelson-Morley experiment, which showed that the speed of light is constant; for this, it’s “the anomalous behaviour of its thermodynamic response functions upon cooling, the most famous being the density maximum at ambient pressure” (https://www.nature.com/articles/s41567-022-01698-6.pdf)

So, basically, there’s a reigning theory that correctly describes a lot of observations, but then a loose end is discovered, and theorists try to find the simplest theory that fixes that loose end.

I think string theory is different. There are no observations that show the existence of loose ends, only a desire to have a quantum theory of gravity, or even a Grand Unified Theory (https://en.wikipedia.org/wiki/Grand_Unified_Theory)


> For GR, it was the Michelson-Morley experiment, which showed that the speed of light is constant

The Michelson-Morley experiment did not show that the speed of light is constant, that was known for some time before; and it was the basis for Special Relativity, not General Relativity. The Michelson-Morley experiment showed that there is no detectable "aether wind" - that, if there exists a "luminiferous aether", the Earth's motion through it produces no measurable anisotropy in the speed of light, contrary to what would be expected for motion through any normal substance.

General Relativity was necessary as a theory to bring back gravity (and describe acceleration) in the context of SR (since SR requires the speed of light to be constant and finite, but Newtonian gravity is instant). It was experimentally confirmed 32 years after the Michelson-Morley experiment, by confirming the gravitational lensing effect during a solar eclipse.

> I think string theory is different. There are no observations that show the existence of loose ends, only a desire to have a quantum theory of gravity, or even a Grand Unified Theory

The lack of a quantum theory of gravity is a huge gap in our understanding of the world, not a simple desire. Quantum mechanics makes obviously wrong predictions when adding gravitational effects (even worse if trying to account for curved space-time). General relativity makes wrong predictions about the behavior of elementary particles. So, we are left today without a model of how the world works - and the precise missing piece is "quantum gravity".

Of course, it may well turn out that in fact we need a completely new theory that would replace both QM and GR - but what we know for sure is that QM and GR as they are right now are not correct, and the most obvious incongruity is quantum gravity.

You're right about a GUT though - that would be nice to have from a mathematical beauty point of view, but it's not in any way required, unlike quantum gravity.


"or that its elegance was so pleasing, is not enough."

Yes, see "Lost in Math"


Albert Einstein produced a new theory, not new evidence. I maintain that language continues to mean something.


But as far as I know, Einstein's General Relativity solved a very concrete problem: previous theories couldn't explain the orbit of Mercury.

General Relativity matched the observations; it wasn't just pen and paper or simulations.


And much more important to the discussion: the models Einstein produced matched existing reality, and also made predictions. Those predictions were eventually tested by other scientists with new experiments that matched the predictions of Einstein's models, so the outcomes of those experiments are now evidence that those models are correct.

This paper has a model, that hopefully and presumably, accurately describes real world phenomena, and it has been interrogated to produce a new prediction. Now someone has to devise an experiment to verify or reject this new prediction. Until that experiment is run, and the results found, we have no new information about how water behaves. All we have is a new way to verify or reject this model. If a later experiment confirms what this model predicted, then we will know something new about the world, and these simulations will be trustable in a new domain.


>And Albert Einstein only had pencil and paper!

And a slide rule, it's not like he needed to write every calculation.


Evidence is not proof, it is evidence. It is still valuable because it can guide the way to experiments that give us empirical proof, or refutation of the conjecture.


It behooves the authors to present the limitations of the study. Lay people, unfortunately the majority, will interpret this to be definitive.


>Lay people, unfortunately the majority, will interpret this to be definitive.

They're not the target audience of scientific papers.


On top of that, the linked article is simply a press release and not the research paper itself. Papers usually list limitations and such, that's very normal


This leads me down a different path. Some (most?) papers require an almost absurd amount of context; in the extreme cases, the audience with the knowledge to understand them is in the single digits.

Perhaps putting that context into publications would be more effective?


You’re describing a pop science book, and of course people write those too.

But research publishing is a traditionally internal dialog among people who ostensibly spent years being inducted into a common contextual foundation.

Post-web journalism and open access publishing might be challenging that tradition, but the answer is probably not to staple an introductory textbook to every paper.


> Perhaps putting that context into publications would be more effective?

It's called the introduction.


Then all scientific papers would be thousands of pages long.


Ahh, the dangers of writing a comment whilst distracted. I meant rather to include the context you need to have yourself; something akin to "you must be this tall to ride", "you need a doctorate in physics to have a chance of understanding this properly".


Studying the positions and movements of single molecules in a fluid is near-impossible; increasingly (as we've eaten all of the low-hanging fruit), more science is being done through simulation. Certainly, we should vet those simulations very carefully (as we should more direct experiments). But they're just another tool for understanding the world.


Computer simulations are not nothing either.


> The team expect that the model they have devised will pave the way for new experiments that will validate the theory and extend the concept of ‘entangled’ liquids to other liquids such as silicon.

Yes, they are very useful. Now they have something specific to look for in experiments.


Interesting. Some enterprising souls will read it as well. Next thing I'll be receiving spam mail offering me to buy some "entangled water" to treat all my ills.


(Cough) homeopathy (cough)

Give 'em five hot minutes...


A new phase of article has been found at Birmingham in which, at a pressure of 800 words, key interstitial details disappear from the lexical lattice. In one example, the summary of a topic about two different forms of liquid water was found to allude to evidence of their existence coming from a computer simulation, yet without giving away the slightest hint at the expected temperature and pressure ranges at which these forms are expected to actually occur.


> yet without giving away the slightest hint at the expected temperature and pressure ranges at which these forms are expected to actually occur.

If it makes you feel better, this information isn't even clearly presented in the paper :)

For computational studies like this, one should generally assume that any explicit 'Temperature' or 'Pressure' is a fiction. The important things are the shapes of the trends in behavior with T and P. But as this behavior is dependent on a number of other semi-arbitrary parameters, the actual values of T and P within the simulation probably do not map particularly well to real life.

Also, physicists like to use reduced unitless parameters (e.g. this paper uses T* = kT/e_BB, where e_BB is an energy parameter used in the colloidal model). This is convenient for math, and also keeps pesky reality separate from elegant models.
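To illustrate how arbitrary the absolute numbers are: the same reduced temperature corresponds to very different real temperatures depending on the energy scale you plug in. The value of e_BB below is invented purely for illustration and has nothing to do with the paper.

    # Hypothetical mapping from a reduced temperature T* = kT / e_BB back to Kelvin.
    k_B = 1.380649e-23     # Boltzmann constant, J/K
    e_BB = 3.0e-21         # J, made-up bond-energy scale
    T_star = 0.10          # example reduced temperature

    T_kelvin = T_star * e_BB / k_B
    print(f"T* = {T_star} corresponds to ~{T_kelvin:.0f} K for this particular e_BB")

Double e_BB and the "same" simulation sits at twice the absolute temperature, which is why the shapes of the trends in T* and P* carry the physics rather than the raw numbers.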


Evidence?

> computer simulations

> model

> model

> computational work

> computational evidence

> colloidal model

I'm sorry, theory's great and all, but I'm not interested. Call me when you see this in water not dreamed up in silico, okay? Yeah, I know, supercritical is hard, but... that's kind of the point, yeah? Why trust your model if we know this stuff is hard? Why care if it can never be realized? (Okay, you got me, I'm a constructivist too.)

(Must every press release by a British university be trash?)


> Call me when you see this in water not dreamed up in silico, okay?

The article, surprisingly, was pretty clear about this. The headline? Pretty sensationalist.

> (Must every press release by a British university be trash?)

This has been the bane of science for as long as science has been around: How do you take something that isn't particularly interesting to laypersons and help them understand how important this little micro-bit of progress might be... or not be... It's hard.


> This has been the bane of science for as long as science has been around: How do you take something that isn't particularly interesting to laypersons and help them understand how important this little micro-bit of progress might be... or not be... It's hard.

I mean, I've seen my own work go through the Science News Cycle ( https://phdcomics.com/comics/archive.php?comicid=1174 ). I know what it's like. I'm just pointing out that British universities have a reputation for being absolutely terrible about this, to the degree that there's no point trusting a single thing their PR departments say anymore.


This little micro-bit of progress simply is not important to laypersons. I'm sure it's interesting to people in the field, but otherwise it's nothing.


> This little micro-bit of progress simply is not important to laypersons

Yes. Unfortunately, a little press goes a long way in getting funded, so we'll all be reading these over-hyped science press releases for a long time.


It seems the article extremely clearly communicates (as your plentiful quotes aptly demonstrate) that the evidence in question is a computational model.

I do not get your point about this being bad science communication at all. Your own quotes demonstrate that the press release is crystal clear about the type of evidence.

Your beef is with the science, not the communication. And you shouldn’t mix those two up.


> Your beef is with the science, not the communication. And you shouldn’t mix those two up.

No, my beef is that communicating this was irresponsible because it does not represent meaningful progress in an accurate understanding of the physics of water, and thus no impression should be given that it does.

Science grows through communication. The two are not separable.


I am someone who regularly rails against in silico work, but come on. Treat this as a sanity check to start understanding:

1. if the colloidal thought model explains unexplained observables

2. if the colloidal model makes predictions not yet observed.

It might be too harsh to say this is not a meaningful advance. The system may be too difficult to make predictions about without a computer, so a thought model alone is of limited use. Now that we have a computer model, we can start doing experimental work that we wouldn't have chosen to do otherwise.


> Call me when you see this in water not dreamed up in silico, okay?

But it's you who's calling.


I couldn't agree more.

There is no evidence here. There is a model of water that apparently does something, like FIFA22 (the game) is a model of the football (soccer) teams. But the model behaving that way doesn't mean it can be translated to reality.

And at least with FIFA22 you can check against reality for accuracy!


> Evidence

>> computer simulations

But we live inside a computer, didn’t you know that? That the Universe itself is one big computer (and physics is computation)? A 42-bit one, unlike your fancy laptops…


Evidence comes from experiments, not simulations. You don't get to overload the word "evidence" in this way, not on my watch.

A model can't produce evidence, it can only predict its possibility, and give hints to the experimenter on where to find it.


Right on. "Facts or observations presented in support of an assertion".

Maybe it changes once we have a complete and exact model of the physical reality. Until then...


Models based on our understanding of the world provide a level of certainty lower than experimentation but still scientifically valid.

Discoveries are often statistical in nature, with explicit statements about how uncertain they are.


See also: heavy water (https://en.wikipedia.org/wiki/Heavy_water). Not so novel but still interesting



