Graph Networks for Materials Exploration (deepmind.google)
241 points by reqo on Nov 29, 2023 | 42 comments



The linked paper from Lawrence Berkeley National Lab is arguably even cooler: automated wet-lab materials science experiments: https://www.nature.com/articles/s41586-023-06734-w


They look cool but are expensive, hard to set up and maintain, and rarely exceed what can be done for the same amount of money by a small number of well-trained chemists.

The arm in Figures 1-3 is probably $100K, before even talking about the support contract and site integration.


Grad students at Cal cost a professor ~$100k a year, and then leave after 2-5 years with any optimizations they might have personally made. They also only work 6-12 hours a day, and, having been said grad student, they get mind-numbingly bored after about 10-15 repetitive syntheses, spending lots of time on them when the (only) interesting part is the XRD pattern at the end... I would have absolutely advocated for such an arm if I was still there.


There are many reasons why wasting a grad student on this problem (rather than a tech) is bad. I say this with a lot of experience: I was that grad student and I was the guy automating the lab and the guy setting up the compute infrastructure.

I think core facilities are better candidates than individual professors' labs.


We already do this with, e.g., automated drug testing. I've worked with a couple of different machines (and worked on the development of another) that existed specifically to rapidly run certain chemical and biological tests, to parse through computationally or AI-suggested drugs. They run at about the same cost, with similar service contracts, and they're VERY common in the pharmaceutical industry. If your goal was to find a process to produce some precursor chemical necessary for material development (prior to heading to the foundry), it makes sense.


I work in pharma and most of the time when I visit labs, they don't run 24/7 and in fact run at about 10% or less of their total capacity.


Well, people are still buying them, otherwise I'd be out of a job. My experience with the Tecan and Hamilton machines that I got to interact with was that the setup seemed tedious as hell, but once it was off and running, it would rapidly outpace even the best pipetters.


Yeah, they are great for employment insurance.


How much do a small number of well-trained chemists cost to employ? I'd expect north of $100k a pop. Although I guess this doesn't help if you still need a chemist to interpret the output of the arm.


Indeed, the x-ray diffraction interpretation wasn't completely automated. From the experimental paper: "When the automated refinement gives a poor fit, manual analysis is performed"


I think the novelty here is in the automation of it? If you (or let's be real, some eccentric billionaire) set up 100 of these and hooked them up to run 24/7, they could generate a stream of test results. If you can scale this up maybe you'd hit economies of scale?

Shame there are no eccentric billionaires that love shiny projects with little hope of success. /s


This is already happening all over the world across multiple industries. It's typically called lab-in-the-loop.

I have been involved in projects with eccentric billionaires to build such things. It's challenging to make forward progress in a meaningful way (i.e., beyond a press-and-paper prototype), and often the reasons are entirely banal and provincial: many scientists in the field feel threatened by ML and automation; others just don't know how to work in a large-scale environment; others want to come up with the perfect experiment yet never actually run one; and still others want to use the automation as a quick-turnaround tool rather than an economy-of-scale one. Further, just getting the necessary support infrastructure to make the system run well can often be quite challenging.


You say it's challenging; are you implying there are actually any successful instantiations running at scale?


I don't think identification of possible new materials is a rate-limiting step for discovery of better catalysts, batteries, etc. The problem is not coming up with new materials -- it's coming up with new materials that _have desired properties_ and _can be cheaply synthesized_.

It's like if you asked a chemist to draw a few possible structures for organic molecules that have never been synthesized. They can do that. But not all of those possible molecules they came up with will be easy to synthesize. And neither they nor anyone else (without doing a lot of experimental work) will be able to tell you which of those possible structures, if any, would work as a painkiller or an oncology drug.

Still, I do think this is a nice demonstration of how more data enables very accurate predictions of energies that would otherwise require expensive DFT calculations. That part is definitely interesting.
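For anyone curious about the mechanics: the rough idea is to encode a crystal as a graph of atoms with edges between near neighbors, run a few rounds of message passing, and regress a total energy, which then stands in for a DFT calculation. A minimal NumPy sketch of that shape (not GNoME's actual architecture; every dimension and weight below is invented and untrained):

    # Toy message-passing energy surrogate (NOT GNoME), just the general shape of the idea.
    import numpy as np

    def predict_energy(node_feats, edges, W_msg, W_upd, w_out):
        """node_feats: (n_atoms, d) per-atom features; edges: directed (i, j) neighbor pairs."""
        h = node_feats
        for _ in range(3):                      # a few message-passing rounds
            msgs = np.zeros_like(h)
            for i, j in edges:                  # aggregate messages from neighbors
                msgs[i] += h[j] @ W_msg
            h = np.tanh(h @ W_upd + msgs)       # update per-atom states
        return float(np.sum(h @ w_out))         # sum-pool atoms into one scalar "energy"

    rng = np.random.default_rng(0)
    d = 8
    atoms = rng.normal(size=(4, d))             # 4 atoms with made-up d-dim features
    edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
    params = (rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d))
    print(predict_energy(atoms, edges, *params))  # untrained, so the number is meaningless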


I think a more interesting application of this process is to attempt to find easier, safer, more reliable, or more efficient methods of producing or processing existing materials.

See: the more or less accidental rediscovery of room temperature polyester/PET recycling (including separation from blended fabrics without damaging the cotton) using CO2 as a catalyst.

There exist quite a few cases of very simple solutions to very difficult problems where the start and end products are already known, but we just don't know how to effectively get from A to B without causing certain undesirable side-effects.


I'm certain you could build an embedding that provided a utility function for molecules based on price and synthesizability. That's an approximation of what the chemist's brain is doing.
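Something like the toy scoring function below, say; the weight vectors standing in for potency/synthesizability/price heads are entirely made up here, and in practice each would be its own trained model:

    # Toy "embedding -> utility" sketch; nothing here is a real model.
    import numpy as np

    def utility(embedding, w_potency, w_synth, w_price, lam=0.5):
        """Reward predicted potency and synthesizability, penalize predicted price."""
        potency = embedding @ w_potency        # stand-in for a learned activity head
        synth = embedding @ w_synth            # stand-in for a synthesizability head
        price = np.exp(embedding @ w_price)    # stand-in for predicted $/gram
        return potency + synth - lam * np.log1p(price)

    rng = np.random.default_rng(1)
    emb = rng.normal(size=16)                  # pretend this came from a molecule encoder
    heads = [rng.normal(size=16) for _ in range(3)]
    print(utility(emb, *heads))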

You wouldn't ask a chemist to evaluate the molecules (in drug discovery), though; you'd have a molecular biologist (really a lab tech) set up a screening campaign, and in many cases the biological readout that predicts something could work as a painkiller or oncology drug is relatively straightforward to implement experimentally at scale (high-throughput screening). Unfortunately, those readouts aren't super-predictive of the full biology.

I expect DeepMind or Isomorphic to announce, in the next five years, that they have made a model that can quickly identify whether a specific molecule would be likely to pass clinical trials and the rest of the FDA process. With a false negative rate ("predict that a drug would not get through to approval, but in reality it would have") below around 25%, we could easily save billions a year in failed drug costs.
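For a sense of the arithmetic (every number below is invented; real trial costs and success rates vary a lot):

    # Back-of-envelope, with entirely invented numbers, of why a pre-trial
    # "will this drug make it?" classifier could save money despite false negatives.
    candidates = 100          # drugs per year that would otherwise enter late-stage trials
    trial_cost = 100e6        # assumed cost per late-stage program, $
    true_success_rate = 0.5   # fraction that would actually succeed
    false_negative = 0.25     # successful drugs the model wrongly screens out
    false_positive = 0.10     # doomed drugs the model wrongly lets through

    would_succeed = candidates * true_success_rate
    would_fail = candidates - would_succeed

    trials_run = would_succeed * (1 - false_negative) + would_fail * false_positive
    failed_trials_avoided = would_fail * (1 - false_positive)
    savings = failed_trials_avoided * trial_cost
    print(f"trials run: {trials_run:.0f}, failed-trial spend avoided: ${savings/1e9:.1f}B")

    # Not counted here: the value of the approved drugs lost to false negatives,
    # which is what ultimately bounds how high an FN rate you can tolerate.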


I would be pretty surprised if Deepmind could automatically identify drugs that pass phase 3 trials in the next five years.

First, there’s a banal point that many trials take years to read out, so any prospective study would have to be beginning about now. I don’t think Deepmind or anyone else can do what you describe currently.

More importantly we just don’t understand human biology very well at all. Like there are phenomena that are critically important to drug and disease behavior that are just totally unknown. So machine learning systems trained on current knowledge just won’t have the necessary data.

But I’ve been very surprised before by ML advances so who knows?


100% agreed. This is primarily a breakthrough in using graph networks to show some promise on the task. It will take several more iterations for it to be transformative to the industry.


Are applications like batteries, semiconductors, solar panels, etc. bottlenecked by the number of available materials? Also, I wonder if the discovered materials are kind of "interpolating" between materials that are already known, or if they expand the convex hull in some way. (Though perhaps it's difficult to precisely define what the convex hull of materials is.)
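My rough understanding is that "convex hull" here means the lower convex hull of formation energy vs. composition: a structure counts as stable if it sits on that hull, and "energy above hull" measures how far off it is. A toy version for a binary A-B system, with made-up numbers:

    # Sketch of "energy above hull" for a binary A-B system; all numbers are invented.
    def lower_hull(points):
        """Lower convex hull of (x, y) points (monotone-chain method)."""
        pts = sorted(points)
        hull = []
        for p in pts:
            while len(hull) >= 2:
                (x1, y1), (x2, y2) = hull[-2], hull[-1]
                # pop the middle point if it lies above the segment from hull[-2] to p
                if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) < 0:
                    hull.pop()
                else:
                    break
            hull.append(p)
        return hull

    def energy_above_hull(x, e, hull):
        """Vertical distance from (x, e) to the hull segment spanning composition x."""
        for (x1, e1), (x2, e2) in zip(hull, hull[1:]):
            if x1 <= x <= x2:
                return e - (e1 + (e2 - e1) * (x - x1) / (x2 - x1))
        return 0.0

    # (fraction of B, formation energy in eV/atom) for known phases; values made up
    known = [(0.0, 0.0), (0.25, -0.40), (0.5, -0.55), (1.0, 0.0)]
    hull = lower_hull(known)
    print(energy_above_hull(0.75, -0.20, hull))  # > 0 means above the hull, i.e. unstable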


Nice trick, but it’s almost useless.

We have been using integer programming for decades to explore all the possible permutations, with hard constraints that include manufacturability.
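Roughly this kind of formulation (a toy sketch using PuLP; the scores, costs, and constraints are invented for illustration, not taken from any real screening setup):

    # Toy integer program: choose element substitutions under hard constraints.
    # Requires `pip install pulp`; everything numeric here is made up.
    from pulp import LpProblem, LpVariable, LpMaximize, lpSum

    elements = ["Li", "Na", "Mg", "Ca"]
    score = {"Li": 3.0, "Na": 2.0, "Mg": 2.5, "Ca": 1.5}   # made-up desirability
    cost = {"Li": 5.0, "Na": 1.0, "Mg": 2.0, "Ca": 1.0}    # made-up cost-per-site proxy

    prob = LpProblem("substitution_search", LpMaximize)
    x = {e: LpVariable(f"use_{e}", cat="Binary") for e in elements}

    prob += lpSum(score[e] * x[e] for e in elements)       # objective: total desirability
    prob += lpSum(x[e] for e in elements) == 2             # hard constraint: exactly two dopants
    prob += lpSum(cost[e] * x[e] for e in elements) <= 6   # hard constraint: manufacturability proxy

    prob.solve()
    print([e for e in elements if x[e].value() == 1])      # ['Li', 'Na'] for these made-up numbers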

Their references list is lacking, to say the least.


It would be great if you could provide a bit more detail on your criticism, especially pointers to relevant literature you think they missed.

Frankly, I don't really see how some discrete optimization thing solves the problem this paper is addressing, because evaluation of the constraints (i.e., thermodynamic stability of a crystal) is one of the most computationally challenging aspects of the problem.


The paper (and graphics inside) refer to 48,000 materials that have been discovered by previous computational methods. Is this what you mean?

Looks like the contribution here is an order-of-magnitude increase in the number of materials predicted to be stable with high probability.


Isn't the problem how to actually scale these discoveries to industrial processes? Yes, you can create some crazy materials at small scales, but scaling up the small-scale stable processes is difficult.


I put together a GPT that lets you ask questions about, and visualize, the materials discovered by the GNoME material-discovery project discussed here. A fun, quick little project.

https://chat.openai.com/g/g-5Kt4lhwvF-unofficial-gnome-mater...


There is an ongoing effort to simulate living cells using big computers. I wonder if deep learning will get there before other approaches do. A system like AlphaZero could start with small cells and keep going up in complexity, just as we imagine life came to be...


> The GNoME project aims to drive down the cost of discovering new materials. External researchers have independently created 736 of GNoME’s new materials in the lab, demonstrating that our model’s predictions of stable crystals accurately reflect reality.

It seems like a neat project.

I wonder, though, what does an unsuccessful prediction look like? They successfully created 736 of the materials. I’m sure they didn’t make 380000-736 bad predictions, hahaha!

Would it be interesting to know about materials in their set where fabrication was attempted but didn't work out? Or maybe it is much more complicated than that; maybe it is assumed that there are crystals in the set that are basically impossible to fabricate for complicated engineering reasons, but that's fine because it is just the beginning of the investigation.


For all the automation effort, there's always something that has to be done by hand...

From the experimental paper: "The XRD sample holders must be cleaned manually when the lab has depleted its stock"


A couple other observations on the experimental side:

They define success as being a sample with >50% of the target material. I guess that's success, but wow you can't test any actual properties (hardness, electrical conductivity, etc.) with samples like that.

As the reviewers noted, they're only making oxides (no alloys or intermetallics).


Likely that issue could be "solved" by using disposable plastic tubes or plates if you really wanted to.


[flagged]


"Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."

https://news.ycombinator.com/newsguidelines.html



It is though. Trademark protection doesn’t mean that nobody can use the name for any purpose. Otherwise, “ABC Window Repairs” couldn’t use the name because Microsoft has trademarked “Windows”. In the GroupOn example you cite, it seems like the GNOME foundation has a valid complaint. That doesn’t apply to this case, because the DeepMind system doesn’t have anything to do with desktop UI.


See https://news.ycombinator.com/item?id=38463234 ; the relevant class is "Computer and software products and electrical and scientific products", and these are both pieces of software.


Thank you.


It's tangential to the thing being presented. There's one of these comments in almost every thread about a project with a potentially colliding name and they are repetitive and uninteresting.


The abbreviation doesn’t even fit the name. The o in GNoME doesn’t match anything.

Only Google can be this bad at naming things.


Clearly it is the o in "for".


[flagged]


Doesn't similarity of the domain matter? A tablet is a lot more similar to a desktop computing environment than an ML tool for science labs is.


They're both software. Trademark classes are about distinguishing entirely different fields: a food brand called "GNOME" isn't a conflict with a piece of software, but another piece of software is.

Here's a list of the USPTO's trademark classes: https://www.legalzoom.com/articles/trademark-classes-the-com...

"Computer and software products and electrical and scientific products" is a single class.


They both fall in that same class, but they are unlikely to be confused with each other. To my knowledge, GNoME doesn't actually have a user interface, vs. GNOME, which is, well... the user interface. It is unlikely that anyone will confuse GNoME the material-synthesis lab project with GNOME the UI framework/user interface.

This compares to Groupon's Gnome, which was a point-of-sale endpoint and a user interface of its own. There could be an argument made that those products had conflicting names, but I don't think that's the case for GNoME.


Not that you're wrong, but that's a hilariously broad category (it also includes almost all electronics).


Gray goo next. Please stop this madness and quickly regulate this before we are doomed (:



