Computers Create Recipe for Two New Magnetic Materials (duke.edu)
112 points by type0 on April 19, 2017 | 66 comments



We did something pretty similar in a nearby field of materials science, using support vector machines rather than linear regression and focusing on a less specific target:

http://www.nature.com/nature/journal/v533/n7601/abs/nature17...
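
For a rough idea of what that kind of pipeline can look like, here's a minimal sketch (not our actual code; the reaction descriptors and labels below are made up) of training an SVM classifier to predict whether a proposed synthesis succeeds, using scikit-learn:

    # Hypothetical sketch: predict synthesis success/failure from
    # notebook-derived descriptors (reagent amounts, pH, temperature, ...).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.random((500, 4))                  # placeholder reaction descriptors
    y = (X[:, 0] + X[:, 2] > 1).astype(int)   # placeholder success/failure labels

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))

The StandardScaler matters here because RBF-kernel SVMs are sensitive to feature scale.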


Nice work! Any plans to branch out to other materials?


I'm not with the lab any more, but last I heard they were soliciting data from nearby but related subfields (our lab was doing mostly vanadium-templated tellurides, but there are a lot of other things to explore in the space of inorganic–organic hybrids, and there are labs working on them). The problem really is data sharing: the most active collaborators were willing to give us lab notebooks as long as we paid to digitize them, but it was very hard to convince other labs to digitize their own notebooks. TBH I think that's the hard part of a lot of this (collecting the data).


I know Moore's Law is basically dead, but I think that advances in applications of existing computing power like this will resurrect it, or something like it, before too long. Basically, there's a less dramatic version of the AI snowball where faster computers lead to even faster computers. Whether someone manages to build strong AI on top of that is a different question.


There's at least a 10x speedup potential in existing hardware if people stop doing silly things like Electron apps and shoehorning everything into web browsers.


What is your recommended tutorial/book for learning to build well-designed desktop UIs that work on Windows, OSX, and multiple different flavours of Linux?


1) documentation of any chosen cross-platform UI framework

2) any decent software book that teaches you to separate the presentation layer and platform layer from the rest of the program (that way, you have much less to port between platforms, and maintaining it is not as costly as people fear); see the sketch below
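
As a toy illustration of point 2 (the class names are mine, not from any particular framework): keep the core logic free of toolkit imports, so only the thin view layer has to be rewritten per platform or toolkit.

    # Hypothetical sketch of presentation/core separation.
    class TemperatureCore:
        """Pure application logic; no UI-toolkit imports here."""
        def to_fahrenheit(self, celsius: float) -> float:
            return celsius * 9 / 5 + 32

    class ConsoleView:
        """One of potentially many presentation layers (Qt, GTK, web, ...)."""
        def __init__(self, core: TemperatureCore):
            self.core = core

        def run(self) -> None:
            c = float(input("Celsius: "))
            print(f"{self.core.to_fahrenheit(c):.1f} F")

    if __name__ == "__main__":
        ConsoleView(TemperatureCore()).run()

Swapping ConsoleView for a Qt or GTK view doesn't touch TemperatureCore at all, which is the whole point.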


> of any chosen cross-platform UI framework

such as...

EDIT: my purpose in this line of questioning is to assert that if you are trying to persuade someone to not do a thing, you will be more effective if you can give them a straightforward alternative.

I likewise think that the folks trying to get people to stop writing python2 should pick a release of python3 to become an LTS release in the same way that in 2014 python2.7 was effectively declared an LTS release with support until 2020.


https://en.m.wikipedia.org/wiki/List_of_platform-independent...

EDIT to parent's edit:

I'm not really trying to persuade anyone; not anymore. I understand the incentives that push people toward wasteful solutions. This won't stop until we hit a resource limit. All I'm saying is that there's a lot more we can do with current hardware once Moore's law is definitely dead. The mostly untapped potential is conditioned on people not doing extremely wasteful things just to shave off a little development time.


That list seems to be useless.

Skia, for example, is not a GUI library. CEF is not a GUI library either; it's a WebView.

And there should be at least a column indicating whether the library has support for usual desktop applications and/or for full-screen multimedia things (such as games, or a movie player with OSD, custom graphical design and elements, and textures).

And the GUI debate is really about "where are the abstract libraries that compile to native apps?". After all, anything else is just for prototyping a "native app". And usually people just answer that Qt is nice enough, yet everyone uses Electron :/


Luckily it's a wiki, so you can improve it :)


Of that list, what is the one that you would recommend from your experience working with it and what book or tutorial would you recommend someone work through to get a solid mental model of it?


Qt. Learn Qt and you can write performant native applications for all the major desktop platforms.


I'm not complaining about Qt. It seems like a perfectly cromulent framework. But I have personally seen at least one application that would suggest that some people have tried to write Qt applications without first learning Qt.

Please, people, for the love of those whom your customer may hire after you, take the time to understand signals and slots before porting that godawful mess of garbage you got from your customer's previous contractor to Qt.
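
For anyone who hasn't seen the mechanism, here's a minimal sketch of signals and slots using the PyQt5 bindings (the counter example is mine, just to show the wiring):

    # Hedged illustration of Qt signals/slots via PyQt5.
    import sys
    from PyQt5.QtCore import QObject, pyqtSignal
    from PyQt5.QtWidgets import (QApplication, QLabel, QPushButton,
                                 QVBoxLayout, QWidget)

    class Counter(QObject):
        changed = pyqtSignal(int)      # custom signal carrying the new count

        def __init__(self):
            super().__init__()
            self._n = 0

        def increment(self):           # slot: an ordinary callable
            self._n += 1
            self.changed.emit(self._n)

    app = QApplication(sys.argv)
    counter = Counter()
    label = QLabel("0")
    button = QPushButton("Increment")

    button.clicked.connect(counter.increment)                  # built-in signal -> slot
    counter.changed.connect(lambda n: label.setText(str(n)))   # custom signal -> UI update

    window = QWidget()
    layout = QVBoxLayout(window)
    layout.addWidget(label)
    layout.addWidget(button)
    window.show()
    sys.exit(app.exec_())

The objects emitting signals don't know or care who is listening, which is what keeps a ported codebase from turning into another tangle of direct cross-widget calls.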


Fantastic. I have heard of Qt before, and I know I've seen Python bindings for it.

What is the go-to tutorial/book that humans who are knowledgeable in Qt recommend for efficiently learning it?


It's been a few years since I've touched Qt. The official documentation was quite good at the time.


FLTK http://fltk.org/ is simple and dead easy.

Juce https://www.juce.com/


Someone just bashed me for suggesting an old-looking laptop (2012 Core i5) because they need fast Photoshop work (average consumer skill level).

The natural reaction is to push for more hardware, not to look for and understand the reason behind the performance (or perceived performance) issue.

I want to run far away from the whole computer mainstream today because my brain is suffocating.


Why? Most of the processing can be done on the backend (in a fast language if required), and as long as you maintain 60 fps on the client, everything is perfectly smooth.

Given that this is doable in HTML/CSS/JS on a Nexus 5 (an almost 4 year old smartphone) it just doesn't make economic or UX sense to optimise the UI beyond the 60 fps target.


You know that modern operating systems are multiprocess; unless your program is a video game or is solving a computationally expensive problem, it should minimize its resource usage.


120 Hz, 144 Hz, 165 Hz, and 240 Hz monitors are all becoming more common.


While I don't like the "Web OS" approach, I don't think that many AI applications run in JavaScript in Electron.


Not yet. Anyway, I was just pointing out a mostly untapped speed gain that doesn't require resurrecting Moore's law.


If you're running production AI in JavaScript in your browser, I'm highly impressed.

If not, then the snark is unwarranted.


My snark was referring to Moore's law in general, not to its application in the domain of machine learning.


No one is using Electron for things where a 10x speedup actually matters.


Funny you say that, because "actually matters" is the crux of the issue here.

Does the fact that a glorified IRC client eats a big chunk of system resources matter to said program's developers? Probably not, it would seem. Does it matter to me as a user of such a program? Yes, it does, very much. The problem is, right now there isn't a good way for the market to signal this back to the developers so that they would care.


> right now there isn't a good way for the market to signal this back to the developers so that they would care.

Switch to a competitor's app?

Also, when people talk about Electron sucking, the one example that keeps coming up is Slack, which makes me wonder to what extent the core problem is Electron and to what extent Slack is just a badly written app.


There's also VSCode and its blinking cursor eating up 13% of CPU, then there's also:

https://josephg.com/blog/electron-is-flash-for-the-desktop/ (and associated discussion over at https://news.ycombinator.com/item?id=14087381).

As for switching to a competitor's app, you can't do that if it's a networked app using a proprietary protocol. The SaaS world has killed interoperability.


Which app is this? If it's Slack, you can use IRC or XMPP clients instead: https://get.slack.help/hc/en-us/articles/201727913-Connect-t...


I think we'll see something different. We're now at 'peak datacenter' again, and because there is only so much power, cooling, and bandwidth that you can haul to a DC, we'll see the pendulum swing back towards decentralization.

We've gone through several of those cycles and they're not in any fundamental way tied to Moore's law.


That's not really what I'm talking about. I'm just talking about new applications of datacenter computing power eventually driving research to the point where we get around some of the limitations that put the brakes on Moore's Law.


The limitations that put the brakes on Moore's Law are physical limitations; there is no getting around those (at room temperature, using silicon) in this universe at a cost level that consumers are willing to accept.


Nice set of caveats you've got there. ;)

Let's check back in ten years, see how things are going.


> Let's check back in ten years, see how things are going.

I'd have written the same thing 10 years ago and you would have written the same thing back then too.

So, another 10 years then?

What will happen on the chip level is more cores, not faster cores, possibly larger caches.


Luckily for Moore he didn't say anything about clock speeds.

He spoke of the number of transistors per area in a plane doubling every year. He didn't specify silicon. He didn't specify photolithography. He also said "at least for the next decade" in 1965.

In 1975 he revised it to every two years. In 2015 Gordon Moore himself said "I see Moore's law dying here in the next decade or so."

So let's let poor Gordon off the hook. He's being attributed things he never actually said.


No, ten years ago I wasn't so optimistic. There are a bunch of things that can and will happen to move things along incrementally but what I think, based on no real evidence, is that there will be one or more major breakthroughs at some point that will likely be computation-assisted and we will get back to something similar to Moore's Law.

Edit: typo


I'm super curious in what area you feel these breakthroughs will be.

Tunnel effects are real and very hard to reduce, even at lower temperatures. The band gap can't get much smaller, and supply voltages are about as low as we know how to get away with.

There are solutions in terms of exotic materials with even more exotic fabrication methods.

I linked to a nice video the other day, see if that interests you:

https://www.youtube.com/watch?v=NGFhc8R_uO4

That's the state of the art as of 2012; not much has changed since then, though there has been some incremental improvement and optimization, as well as larger dies for more cores.


Your logic is useless against my feelings. ;) Seriously, I really don't have a solid argument here, it's just a fuzzy idea.


3D circuits/chips is one possibility. There are others that are being researched. Then there are the unknown unknowns...


> 3D circuits/chips is one possibility.

Yes, but we've been aware of that one for decades, and it just isn't going to happen for anything other than maybe (and that's a small maybe) memory. Removing the heat is hard enough with 3D heat infrastructure and 2D chips; removing the heat from 3D chips is not possible at a level that keeps the interior of the chip below permissible values, unless you clock the whole thing down to where there are no gains.

> There are others that are being researched.

Yes, but nothing that looks even close to ready for the mainstream.

> Then there are the unknown unknowns...

They've been 'unknown' for a long time now.

Really, this is as far as I can see more hope than reason.


The human brain is 3D and has no problems with TDP and cooling, while being extremely fast at parallel processing of data like images or sound, to the point that the fastest supercomputers ever built still can't compete.


Things are already going badly for anyone who believes in Moore's Law. Intel themselves have shifted focus to getting more out of silicon because they don't believe in magic speed boosts in the future. That's part of why they bought an FPGA company, and it's also why AMD's semi-custom business was booming. Even large customers are realizing you have to do something other than a process shrink.


That's impressive. Co₂MnTi could be very useful. Cobalt, magnesium, and titanium are all easily obtainable. 665 °C Curie temperature, which is above neodymium magnets (400 °C) but below samarium-cobalt (720-800 °C). Any drawbacks to this material? Hard to compound? Low coercivity? Hard to cast or machine?


"easily obtainable" is relative. For instance, Cobalt is actually less common in the Earth's crust than either of the two rare earths we most typically associate with magnets, Neodymium and Cerium. With Cobalt demand reaching unprecedented levels due to Lithium battery demand reaching unprecedented levels, this discovery in and of itself is not a simple economic win.

Also, magnesium is 20-25 times more common than manganese. Just as well: production of magnesium metal is pretty small because it's so difficult to work with.


Mn is manganese, not magnesium (Mg).


Even more plentiful.


If they were able to predict the Curie temperature, I'd think they could predict the coercivity as well. However things like brittleness would probably fall outside of their domain.


With all the hype usually flying around, a little surprised that it didn't say "AI Creates Recipe..."


After all, every search problem is an AI waiting to break free.


Site has been hugged to death for now. http://archive.is/6ZZv1


When in doubt, use brute force.


What I think we are starting to see is the beginning of widespread use of computer-based discovery through essentially machine learning techniques. I think AI is pretty far off, but utilizing all the computing power we have to discover new materials and create useful things isn't very far off. I wonder what will happen if we can eventually tell computers to create us a better laptop, bike, phone, or lamp.


The title of the press release is misleading, there was no non-trivial machine learning/AI involved. "Computers Create Recipe for" translates to: the researchers picked a class of materials, ran DFT simulations (the usual way to simulate this sort of thing) for all combinations of elements in that class and fabricated the ones that were predicted to have interesting properties.

The regression mentioned in the press release was only used to predict one property (the Curie temperature) of the materials based on experimental data for similar materials.
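
As a rough illustration of that step (the descriptor values and temperatures below are invented, not taken from the paper): the Curie-temperature prediction amounts to fitting a simple regression on measured data for known, related compounds and reading off the value for a new candidate.

    # Hypothetical sketch: regress Curie temperature against a
    # composition-derived descriptor for known compounds, then
    # predict it for a new candidate.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.array([[26.0], [27.0], [28.0], [29.0], [30.0]])   # made-up descriptor values
    tc = np.array([400.0, 560.0, 700.0, 850.0, 985.0])       # made-up Curie temps (K)

    model = LinearRegression().fit(X, tc)
    candidate = np.array([[28.5]])                           # descriptor of a new candidate
    print("predicted Curie temperature (K):", model.predict(candidate)[0])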

It's still a really impressive piece of work, just nothing to do with AI.

The paper is open access, so you can read the full text here: http://advances.sciencemag.org/content/3/4/e1602241.full


I think we need to rename AI. It's not really intelligent. It probably won't be for a long while. The machines don't know, in an existential sense, what they are looking at or making. We need to call it Artificial Insight.

They spot facts in byte streams that we don't see. We can then contextualize the info into another part of the domain, or drive it deeper.


AI is whatever hasn't been done yet. - Douglas Hofstadter (1980)

... via my fortune clone @ https://github.com/globalcitizen/taoup


> I think we need to rename AI.

From what I understand, we already have; that's why you usually see it called "machine learning" in research and academia - and "artificial intelligence" everywhere else.

That isn't to say there's no crossover in both directions, but usually if you are being serious on the subject, you call it ML - if you're trying to hype it or build interest, you call it AI.

Note that this only goes back so far; prior to the early 2000s or so, the terms were used more or less interchangeably. At one point, the term "machine intelligence" was used, then died out, but I've seen it used again recently.


In what sense do you "know" what you're looking at but a machine doesn't?


Ask humans to tell you what "cat" means, and you'll receive as many answers as there are respondents. Some will derive from science, some from common experience; some will describe the relation of cats to their environment, others will talk about personal emotional connections with particular cats.

Ask a convolutional neural network what "cat" means, and the best you can get is a probability distribution of pixels on a grid. It's not intelligence, but just an encoding of facts provided by an actual intelligence.


Ask neural networks that were trained on different inputs what a cat is and you'll get as many answers as there are neural networks.


No, you'll get the same kind of answer. It's not like one of the neural networks will write me a poem in response, on its own initiative. The form of the answer was decided by the human intelligence that created the neural net encoding.


The form of the human's answer was decided by the genetic code that led to the formation of the brain and the experiences the brain was exposed to up to the question. The brain is more complex by many orders of magnitude than your garden variety artificial neural network, so it is only expected that the range of possible answers is also broader.


And that's the rub. Why call these simplistic systems intelligence at all? It's like calling a watermill a fusion reactor.


Because they do tasks that people think require intelligence. It's like calling a water mill and a fusion reactor both devices that can generate energy.


Intelligence is required for handling novel tasks, not for combinatorial optimization or search.

Once they stop tweaking Watson (for example) for every task, I'll declare it an AI.


Words mean things, and when a word is used in a sufficiently broad number of contexts its utility at conveying meaning is diminished.




