Computational chemist here. The phys.org article is a pretty bad rendering of the original paper. The "theoretical interpretation" described in the headline is nothing new. The actual scientific advance described in the paper is an experimental (not theoretical) technique for measuring transition states of molecules, involving making them very cold so the transition states last longer. This lets us verify some quantum mechanics predictions, but we already had a lot of verification. Calculating small-molecule reaction paths via quantum mechanics has been a standard part of theoretical chemistry for decades. We can do it at a very high level of accuracy using millions of CPU hours, or at a passable level of accuracy using thousands of CPU hours.
Big molecules are still a challenge because the scaling of the accurate methods is very bad (N^4 or worse, where N is the number of electrons!). These calculations are something that quantum computers could theoretically do much better than classical ones.
Dude, your job looks awesome. I have been thinking a lot about the ramifications of chemistry computation since last year.
This comment https://news.ycombinator.com/item?id=16027737 about how to get around NP-hardness made me go "Oh, can you even ask such a question? Can you expect to get around this kind of problem?" and so I went researching.
I found the paper Complexity of Protein Folding https://www.gwern.net/docs/biology/1993-fraenkel.pdf and chapter 4 presents two possible realities: either nature solves NP-hard problems in polynomial time or it functions within a classic Turing machine model. I was like "Welp, I guess when you put it like this, talking about nature working within a machine model, I suppose it sounds rather silly, yes. It may be that nature just does her thing in her own way somehow." The paper says a protein with 100 amino acids may assume 8^100 conformations and nature solves for that in 1 second. Also, the spin glass https://en.wikipedia.org/wiki/Spin_glass model is apparently another thing that nature solves eerily fast.
Can we chain these natural occurrences in order to calculate a function with inputs in record time?
Many of the early computer science approaches to protein folding were laughably naive, and the assumption about folding described in the Fraenkel paper: "It is believed that the native folded three-dimensional conformation of a protein is its lowest free energy state, or one of its lowest." is not factually correct. In reality, proteins never reach their lowest energy state unless they are very simple proteins; instead, they rapidly reach a state of kinetic interconversion between several accessible states. We can simulate proteins and measure those state transitions.
Really, solving protein folding is about classical approximations and clever sampling.
> The paper says a protein with 100 amino acids may assume 8^100 conformations and nature solves for that in 1 second
There are some problems with this. Firstly, that paper is quite old. Not sure what the current state of the art is, but this might be helpful: http://folding.stanford.edu/dig-deeper
Secondly, the idea that the protein might adopt any of 8^100 (or some similarly large number of) conformations is not quite right. Consider a very simple model of just a chain of nodes connected together on a 2D grid. There are certainly a large number of possible ways to arrange this chain on the grid, but if you start off in one particular arrangement (call it 'unfolded') and then move towards a more compact one ('folded') then you will follow some particular path of arrangements.
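To make that lattice-chain picture concrete, here's a toy enumeration (my own sketch, not anything from the paper): counting self-avoiding walks on a 2D square grid, the simplest model of a non-self-intersecting chain. The count explodes exponentially with chain length, which is the kind of growth the 8^100 figure is gesturing at.

```python
# Count conformations of a short chain on a 2D square lattice by
# enumerating self-avoiding walks (a chain may not cross itself).
# Even this crude model shows exponential growth in chain length.

def count_saws(n, path=((0, 0),)):
    """Count self-avoiding walks of n steps starting from the origin."""
    if n == 0:
        return 1
    x, y = path[-1]
    total = 0
    for step in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if step not in path:  # forbid revisiting an occupied site
            total += count_saws(n - 1, path + (step,))
    return total

for length in range(1, 8):
    print(length, count_saws(length))
# lengths 1..7 give 4, 12, 36, 100, 284, 780, 2172 conformations
```

Of course, as the parent says, a real chain doesn't sample all of these; it follows one path through the space of arrangements.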
However, there are ideas around 'quantum biology' from Johnjoe McFadden (http://www.johnjoemcfadden.com/) that might be interesting.
My job is pretty awesome. But I had to get a PhD first, something that in my experience is risky for students' mental health. I don't recommend it to people unless they show a high tolerance for repeated failure (aka "research").
As for your question: NP-hardness refers to the worst-case performance, not the average case. Many individual instances of NP-hard problems are solvable. Stochastic algorithms do quite well, and protein folding in nature is a kind of stochastic algorithm (classically we would call it simulated annealing, although really it is quantum annealing). So as far as we know, there's nothing nature is doing to solve NP-hard problems faster than known algorithms. But it does do quantum annealing very fast, something we would really like to exploit using quantum computers.
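To illustrate the simulated-annealing point with a toy (this is a generic classical sketch, not a protein model): random local moves plus a Metropolis acceptance rule find a very good minimum of a rugged landscape without exhaustively searching it, which is why hard individual instances often fall to stochastic methods.

```python
import math
import random

def simulated_anneal(energy, x0, steps=20000, t0=2.0, seed=0):
    """Minimize `energy` by random moves accepted via the Metropolis rule.
    Temperature decays geometrically, so early on the walk can hop over
    barriers, while late in the run it settles into a low-energy basin."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for i in range(steps):
        t = t0 * 0.999 ** i                      # cooling schedule
        x_new = x + rng.gauss(0, 0.5)            # random local move
        e_new = energy(x_new)
        # Accept downhill moves always; uphill moves with Boltzmann weight.
        if e_new < e or rng.random() < math.exp((e - e_new) / max(t, 1e-9)):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# A rugged 1D landscape with many local minima; global minimum near x = 0.
rugged = lambda x: x * x + 2.0 * math.sin(5.0 * x) ** 2
x_min, e_min = simulated_anneal(rugged, x0=8.0)
print(x_min, e_min)
```

The worst case is still exponential; the point is that typical instances, like typical folding funnels, are much kinder than the worst case.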
For more like this - the intersection between computability and nature - I HIGHLY recommend Scott Aaronson's "Quantum Computing Since Democritus".
But, a quantum computer would not in general solve NP-hard problems in polynomial time (as far as we know). The known exponential speedups are for "easier" problems like factoring and finding quantum ground states, which are not NP-hard.
The article is poorly written but your comment is very clear. This looks totally fascinating. Is there some kind of introductory book you would recommend (graduate level)? One burning question that I would like to ask you is if computational chemistry has ever predicted reactions that had not been discovered before? That would be more than awesome.
Former/failed/flamed-out computational chemist right here. I can't give a recommendation, because I forgot which book I used way back when, but the answer depends somewhat on whether you're interested in "large" or "small" structures. Each will use different techniques.
Molecular mechanics, which is concerned with simulating big biomolecules and the like, ignores quantum mechanical effects, or uses a hybrid of quantum/classical mechanics, because the structures of interest are large enough that the QM effects can be approximated away.
Other subfields of computational chemistry use only QM, with some approximations to make the programs tractable.
Current computational chemist here (in QM). It depends on what specific areas you are looking at. Computational chemistry is, itself, a fairly large field with several subfields and specializations (as mentioned in a sibling).
I can't think of a particular example of an unknown reaction being predicted first, then verified, but it wouldn't surprise me. However, there are cases of computations being used to guide experiment (experimentalists have to have an idea of where to look for something).
Theory is generally used to explain rather than predict. I.e., given a reaction, what is the most likely mechanism? Or helping to explain the source of a particular property.
With increases in computing power, databases, etc, we may see computational chemistry start to predict novel materials (via machine learning, for example).
I think that the application of information theory to large systems of chemical reactions is likely to be fruitful. As your source says, it will be important for biological systems, with their myriad interlocking reactions. From my perspective, an issue is that this calculation takes as input the rate constants of the reactions, whereas my research has been at the level below this: finding the rate constants (and similar properties). This can be quite difficult, so my mind boggles a bit at the idea of a calculation which requires large amounts of such data.
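For a sense of what "finding the rate constants" means at my level: once a computed barrier height is in hand, transition state theory (the standard Eyring equation) converts it to a rate constant. The 80 kJ/mol barrier below is a made-up illustrative number, not from any paper discussed here.

```python
import math

# Eyring equation from transition state theory:
#   k = (k_B * T / h) * exp(-dG‡ / (R * T))
K_B = 1.380649e-23    # Boltzmann constant, J/K
H   = 6.62607015e-34  # Planck constant, J*s
R   = 8.314462618     # gas constant, J/(mol*K)

def eyring_rate(dg_activation_j_mol, temperature_k):
    """First-order rate constant (1/s) from an activation free energy."""
    return (K_B * temperature_k / H) * math.exp(
        -dg_activation_j_mol / (R * temperature_k))

# An illustrative 80 kJ/mol barrier at room temperature:
k = eyring_rate(80e3, 298.15)
print(f"k = {k:.3e} s^-1")   # on the order of 0.06 per second
```

Getting the barrier itself (dG‡) is the expensive quantum-chemistry part; a large-scale network calculation would need thousands of such numbers as input.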
Hijacking this comment to shamelessly plug my project. We're running a survey at https://docs.google.com/forms/d/1NXMpmD3DkDuAPgaOpLM2baJRyCi.... Our goal is to build the best platform for computational chemistry. It's still in the development stage, but I'd be glad to answer any questions. Also very appreciative of anybody who completes/shares this survey!
I'm a bit interested to see where this project is headed. My main question is what do you mean by "platform for computational chemistry"? A new package or an interface to existing packages (like WebMO)?
We're building an interface like WebMO, but modern. We are not re-writing any quantum chemical software for the moment, however we've written some as grad students. For now we are concentrating on replacing JMol and getting our API functional with NWChem.
Is it me or is this article especially poorly worded? It took me a little too much effort to figure out what this figure caption even means:
> Negative ions typically have a geometry that is very close to the transition state of the corresponding neutral reactions
The CH3OHF anion is prepared and de-ionized with laser light. It dissociates into methanol and an F radical, with only very slight changes to the involved molecular geometries...?
Phys.org is a roaring dumpster fire, and frankly the domain should never show up on HN. If it’s on phys.org, they found it somewhere, and that original source will be far better than the half-assed version on phys.org.
My point is mostly that the problem is larger than phys.org. UNM itself is the source of the lesser article.
That phys.org just aggregates material available elsewhere is an additional reason not to link it. The submission guidelines suggest linking original articles (in a different sense than you are using).
You’re certainly not wrong, and the endless parade of press releases and aggregators of press releases is indeed terrible. Having said that, however, I’ve never come across a site as popular and worthless as Phys.org (edit: in this space at least). The AAAS, IEEE and others maintain decent aggregators, or the original is best.
It does seem to be rather poorly written, e.g. the next paragraph
> they actually don't follow Newton's Law, they follow Schrodinger's Law so that theory is what we call quantum mechanics. The quantum mechanical interpretation tells scientists a lot of insights.
For a pchem thesis my advisor was seeing unexpected experimental results and asked me to try explaining from first principles. I started with Schrodinger's equation, let the math do the work, and wound up with a fairly reasonable match.
Looking back on it now, perhaps I had a few too many free variables (von Neumann: "with four parameters I can fit an elephant…").
In any case, to me, it was so cool to see how such a fundamental equation connects to very complex experimental processes.
This title has to be one of the most genericized press-release titles posted on HN. Almost every other experimental chemistry paper would technically fit it! Makes you feel for the copywriter trying to write a title for some complex chemistry paper during the holidays. Despite the bland press-release title, it’s pretty interesting experimental chemistry research, and the original paper reads well. It’s surprising how clear the (apparent) transition peaks are in the graph data. These reactions occur on such incredibly minuscule timescales it’s amazing to see them so clearly!
Interesting:
> The PES is fit using the permutation invariant polynomial–neural network (PIP-NN) method [49].
This caught my eye as I recall reading some earlier attempts at NN based polynomial fitting in QM models, but this sounds more advanced. Does anyone know if generalized neural net methods for fitting the potential energy curves are common in QM modeling now? If so, are there limitations to the method that’d prevent adopting it for fitting polynomials in other applications?
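To sketch the permutation-invariant idea (a cartoon of my own, not the PIP-NN method from the paper): if a fragment has two equivalent atoms, the bond lengths r1 and r2 should only enter the fit through symmetric combinations, so swapping the identical atoms leaves the fitted surface exactly unchanged. Here a plain polynomial least-squares fit stands in for the neural network, and the "true" PES is a made-up symmetric function.

```python
import numpy as np

def pip_features(r1, r2):
    """Symmetrized monomials: each feature is invariant under r1 <-> r2."""
    s, p = r1 + r2, r1 * r2
    return np.stack([np.ones_like(s), s, p, s**2, s * p, p**2], axis=-1)

def toy_pes(r1, r2):
    """Made-up symmetric potential standing in for ab initio energies."""
    return (r1 - 1.0)**2 + (r2 - 1.0)**2 + 0.3 * (r1 - 1.0) * (r2 - 1.0)

# Fit the invariant basis to sampled "energies" by least squares.
rng = np.random.default_rng(0)
r1 = rng.uniform(0.5, 2.0, 500)
r2 = rng.uniform(0.5, 2.0, 500)
coeffs, *_ = np.linalg.lstsq(pip_features(r1, r2), toy_pes(r1, r2),
                             rcond=None)

# Permutation symmetry holds exactly, by construction of the features:
e_a = pip_features(1.2, 0.8) @ coeffs
e_b = pip_features(0.8, 1.2) @ coeffs
print(e_a, e_b)
```

The appeal is that the symmetry is built into the inputs rather than learned from data, so no amount of fitting error can break it. My understanding is that NN-based PES fits in this spirit have become fairly common in the QM dynamics literature, but I'd defer to someone closer to that work on the current limitations.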