Parallel Supercomputing for Astronomy with Julia (juliacomputing.com)
161 points by SiempreViernes on Oct 11, 2018 | 61 comments



Oh hey, I worked on this. Paper with all the details here: https://arxiv.org/pdf/1801.10277.pdf. Happy to answer questions.


Finishing the whole job in under 15 minutes is impressive but also a bit suspicious, in a way. Back when I used a top 10 HPC facility at a national lab, I saw a lot of jobs in the queue that could have run just fine on business-class servers without the fancy, expensive interconnects needed for Grand Challenge problems. People didn't run their jobs on smaller machines because there was money in the budget to build a world-class Top 10 computing resource but not money to buy less exotic hardware matched to typical jobs. Individual research groups didn't really get their typical compute needs measured or surveyed.

These are obviously leading questions, but:

- Are there significant research advantages to super-fast turnaround (less than 15 minutes, enabled by massive parallelism) in this domain?

- Do you feel like a massively parallel system with Xeon Phi nodes is a good match for this problem? Or did the code get optimized to run at high scale on Cori Phase II because that's where you were given compute resources?

Finally, a bit less provocative:

Does this approach effectively scale down to e.g. a university that can afford a large storage array and beefy commercial servers (optionally equipped with Phi or other accelerators), but doesn't have HPC resources that are contenders for the Top 500? Or do you really need things that smaller systems can't deliver, like many terabytes of memory in distributed global arrays?


| Are there significant research advantages to super-fast turnaround (less than 15 minutes, enabled by massive parallelism) in this domain?

No, the 15 minute turnaround time is not important given the dataset we have at the moment, but showing that it was possible was considered important from a science perspective, because of the upcoming LSST telescope. LSST will generate an amount of data equivalent to our full dataset every 3-4 days, so being able to scale far enough to accommodate that, as well as future planned extensions to the algorithm, necessitated demonstrating scale. The actual science runs by the project are usually done on a few hundred nodes over a couple of hours.

| Do you feel like a massively parallel system with Xeon Phi nodes is a good match for this problem? Or did the code get optimized to run at high scale on Cori Phase II because that's where you were given compute resources?

Cori Phase II worked well for this problem, though I wouldn't be surprised if GPUs would have been a better fit (though harder to program of course, and at the time the Julia GPU infrastructure probably wasn't quite ready yet; even KNL was a struggle since LLVM was still in the process of completing support for it). The Celeste project is still ongoing (working on science goals more so than extra parallelism or performance improvements at the moment), but I wouldn't be surprised if there was an attempt to run on Summit at some point, especially now that Julia's GPU compiler is much more mature.

One of the biggest problems we failed to anticipate was actually getting the data from disk to the compute units quickly enough. Early in the project we crashed the interconnect on the machine, so for the challenge run we weren't allowed to do anything other than pull the data directly from disk (lest we bring down the machine again while other challenge runs were ongoing). I haven't really looked at the interconnect on Summit, so I can't say how well it would handle that.

| Does this approach effectively scale down to e.g. a university that can afford a large storage array and beefy commercial servers

Yes, it scales fairly well. In fact you could probably do it reasonably well with spot instances on a public cloud. The biggest thing would once again be getting the data to the compute units quickly enough. That's quite demanding on the network (and ideally you want to pre-stage the data in memory). It's certainly feasible to do this on a large-ish university cluster on the SDSS data set in a few hours. Probably less feasible on LSST data once that comes online, but maybe by that point improvements in computation and storage speed will have made up for it and it'll become feasible again.
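
To give a flavor of what pre-staging looks like, here's a minimal sketch using Julia's Distributed standard library (the file layout and the per-shard fit are hypothetical stand-ins, not the actual Celeste pipeline):

    using Distributed
    addprocs(4)                        # in practice, one worker per node

    # Pre-stage: each worker pulls its shard of the survey images into RAM
    # once, so the fitting loop never waits on the filesystem.
    # (The file layout is made up for illustration.)
    @everywhere begin
        using Distributed, Serialization
        const IMAGES = open(deserialize, "images_shard_$(myid()).jls")
    end

    # Placeholder for the real per-region inference, run over the local shard.
    @everywhere fit_shard() = map(sum, IMAGES)

    futures = [(@spawnat w fit_shard()) for w in workers()]
    results = reduce(vcat, fetch.(futures))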


>No, the 15 minute turnaround time is not important given the dataset we have at the moment, but showing that it was possible was considered important from a science perspective, because of the upcoming LSST telescope. LSST will generate an amount of data equivalent to our full dataset every 3-4 days, so being able to scale far enough to accommodate that, as well as future planned extensions to the algorithm, necessitated demonstrating scale. The actual science runs by the project are usually done on a few hundred nodes over a couple of hours.

I've heard that the folks working on the EHT array need months to crunch numbers. Could something like this be used to speed up that process? Or is there some other reason that would prohibit it?

P.S. I want pictures of black holes.


I don't know. I'd imagine that the folks working on the EHT are already making use of plenty of HPC for image reconstruction. It's quite a different problem from the Celeste application of course, so this work isn't directly applicable, but if they ever wanted to rewrite their code in Julia, they should give us a shout ;).


Very low-key question that might be answered in your paper, but I'm lazy:

Did you use one of the ahead-of-time compilation options for this, or just the JIT compilation (if direct compilation is even available; I'll be honest, I'm not that abreast of Julia developments)? One of the key things for computational simulation tools (as opposed to just analysis) is that the excellent compilers for C and Fortran code are a huge boon, especially on systems whose compilers improve on stock Intel (not to mention GCC) by a good factor. You lose that if you're just using a JIT for the sake of a "nicer" language, I guess.

I had similar sentiments to the other cat who asked about the 15 minute run, but I guess it's fun to show it "can be done." I was planning to play a little in Julia and this shows it can be worth it.


The JIT compiler was active, but most of the computationally heavy parts were compiled ahead of time. The compiler had a number of tweaks to increase the quality of the generated code (though at the expense of a good amount of compile time). Generating the ahead-of-time image probably took a good 5-10 minutes (much more if we had tried to do it on the KNL nodes that we actually ran on), so it would have been infeasible and a waste of time to try to do all of that in a JITted fashion. Some JIT compilation probably still happened (for code paths that weren't in the pre-compilation trace, e.g. logging code or exception code paths hit by a few processes).
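
For reference, this is roughly what that workflow looks like with today's PackageCompiler.jl (a sketch; the package and file names are illustrative, and not necessarily the exact tooling used for these runs):

    using PackageCompiler

    # Bake the heavy packages into a custom system image so every process
    # starts with the hot paths already compiled to native code.
    # `precompile_run.jl` exercises those paths so they land in the image
    # (package and file names are illustrative).
    create_sysimage([:Celeste];
                    sysimage_path = "celeste_sys.so",
                    precompile_execution_file = "precompile_run.jl")

Each process is then launched with julia --sysimage celeste_sys.so; anything not covered by the trace still JIT-compiles as usual.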


For some reason, my brain forgot you did this on KNL. I guess that even with cross compilation and handing things off to the platform toolchain, the fact that it performs well is a sign it's good. Anyway, it's great you guys got this through, especially with a tool like Julia. At least in my corner of academia, using "untested tools" unnecessarily raises the ire and skepticism of peers (for rather stupid reasons). I appreciate you guys "making the first move," which makes it easier for the rest of us looking to branch out and use these tools for our own codes.


Just wanted to clarify that all code generation was done through LLVM (as opposed to Intel's proprietary compilers).


Does this make it possible to get closer to David Hogg's and Dustin Lang's "theory of everything"? (Is that the same D Lang as in the Celeste paper?)

That is the idea that a catalogue is released as a complete model of the sky, rather than as a table of intensities and coordinates. (https://arxiv.org/pdf/0810.3851.pdf)


I don't feel qualified to speculate on the implications for astronomy because that's not my field. However, Celeste is indeed a generative model and you can draw an image sample from the Celeste model and compare it to the original (there are a few examples in the Celeste issue tracker; https://github.com/jeff-regier/Celeste.jl/issues/625 is one I found on a quick look, but there are more), so I think it qualifies in that sense. Of course you may want a more sophisticated model than Celeste for a complete model of the sky.


If I wanted to create a code challenge equivalent in difficulty to this problem, could I simply generate a series of images from 3D point lights and add noise? Is the changing location of Earth in space factored into these calculations?


The data is publicly available (as is the code), so you could just use it. The data itself is organized by region of the sky being imaged, so that calculation has already been done for us (and at the distances the stars/galaxies are at, the difference due to Earth's orbit doesn't matter). In the actual application, both the noise and the PSF are modeled based on the actual telescope, so just adding arbitrary noise is a bit of a different problem, but probably a good start. Do note also that Celeste is a generative model (including modeling the noise), so you could just use it to generate a bunch of images for you.
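
If you do want to roll your own toy version, the core of such a generative model is small: render each source's PSF onto a pixel grid and draw Poisson-distributed counts. A sketch with a Gaussian PSF and made-up parameters (far simpler than Celeste's actual model):

    using Distributions

    # Toy generative image model: render point sources with a Gaussian PSF
    # onto a pixel grid over a flat sky background, then draw Poisson photon
    # counts per pixel. (Parameters made up; Celeste's model is far richer.)
    function render(sources; H = 64, W = 64, σ = 1.5, sky = 5.0)
        img = fill(sky, H, W)
        for (x, y, flux) in sources, j in 1:W, i in 1:H
            r2 = (i - y)^2 + (j - x)^2
            img[i, j] += flux * exp(-r2 / (2σ^2)) / (2π * σ^2)
        end
        return [rand(Poisson(λ)) for λ in img]    # observed counts
    end

    obs = render([(20.0, 30.0, 500.0), (45.0, 12.0, 300.0)])

Fitting is then the inverse problem: find the source parameters that make the observed counts most probable under the model.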


I see you have the forward function to render out "color" given the properties of the stars that contribute to this region in space. All you have to do is find the most probable property values that would generate that set of pixels. Sounds extremely parallelizable, which is probably why you managed to get the most out of the Xeon Phi.


It's not quite that easy because of a) the noise (you're fitting distributions, not values) and b) the fact that multiple light sources affect a given pixel, so you have to be a bit clever. But yes, you can get parallelism by spatial partitioning.
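
Concretely, the partitioning step can look something like this toy sketch (hypothetical types and a placeholder fit, not the real Celeste code): bucket sources into sky tiles, then treat each tile as an independent subproblem.

    # Toy spatial partitioning: a source only affects pixels within some
    # radius, so tiles padded by that radius become independent subproblems.
    # (Types and the "fit" are placeholders, not the real Celeste code.)
    struct Source
        x::Float64
        y::Float64
        flux::Float64
    end

    tile_of(s, tilesize) = (floor(Int, s.x / tilesize), floor(Int, s.y / tilesize))

    function partition(sources, tilesize)
        tiles = Dict{Tuple{Int,Int},Vector{Source}}()
        for s in sources
            push!(get!(tiles, tile_of(s, tilesize), Source[]), s)
        end
        return tiles
    end

    srcs = [Source(1000 * rand(), 1000 * rand(), 100 * rand()) for _ in 1:10_000]
    work = collect(values(partition(srcs, 50.0)))

    fit_tile(tile) = sum(s.flux for s in tile)    # placeholder inference
    results = map(fit_tile, work)                 # swap map for pmap across nodes

(Sources straddling tile boundaries are where the "be a bit clever" part comes in.)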


Cool, I miss working in technical computing, sigh.

I recall when having a cluster of 17 superminis was a really big thing :-) Of course I monitored it using a dial-up 110 baud print-only portable terminal.

Don't miss the poor pay though


This is really neat. I used to do research in Bayesian networks and had no idea they were being applied to astronomy. Is their use widespread?


I worked on the computational side of this, so while I understand the science, I don't have a very good overview on the state of the art in astronomy. However, my understanding from the statisticians and astronomers on the project is that variational inference had not been attempted on this scale before (in Astronomy or otherwise).


I found it funny to compare a paper by my colleague Andrew with this talk. OK, we used Gibbs sampling and a teeny weeny sad old Hadoop cluster, and we were looking at phone lines and not stars, but Bayesian inference to make a single catalogue... Ours answers "what is this made of" and "where is this thing", rather than "is it a star or a galaxy".

Also ours doesn't run in 15 minutes!

https://rd.springer.com/content/pdf/10.1007%2F978-3-319-7107...


This is really a great achievement. I am wondering how much communication was necessary between the cores during the computation, or is the problem a so-called "embarrassingly parallel" workload where the work could be split into independent tasks?


I would be curious to know what would be the elapsed time on the same machine (and problem) using more "enterprisey" languages like Java, C#, ...


I'm not sure what the state of AVX-512 codegen was in the CLR/JVM back in early 2017, but given that LLVM's was still in buggy early stages, I would guess it was a WIP at best, so performance was probably not ideal (we had to work quite hard to get decent quality code generation). On KNL, you're either running AVX-512 all the time or you're pretty much dead in terms of performance.
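
(For anyone who wants to check what their own machine gets: Julia makes it easy to inspect the emitted native code, e.g. to see whether a hot loop actually uses the 512-bit zmm registers. A quick sketch:)

    using InteractiveUtils

    # Inspect the native code emitted for a hot kernel; on an AVX-512
    # machine a well-vectorized loop shows the 512-bit zmm registers.
    function axpy!(y, a, x)
        @inbounds @simd for i in eachindex(x, y)
            y[i] += a * x[i]
        end
        return y
    end

    x = rand(1024); y = zeros(1024)
    @code_native axpy!(y, 2.0, x)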


I work in HPC, and Java and C# are more or less irrelevant in that space. There is a little bit of Java (some people are using things like Hadoop or Spark), but otherwise, no.

"Hard-core" computing is almost all C/C++/Fortran (and of course CUDA for GPU's etc.). Python and R are fairly popular, but in those cases (hopefully) most of the heavy lifting is done by library code (again, C/C++/Fortran, or increasingly CUDA via ML libraries such as tensorflow) rather than the interpreter. Julia is very promising in this space, as it offers a solution to the "two-language" problem. I'm hopeful for Julia to make more of an impact, but it's of course a slow process.

(There is a (tiny) bit of ASM, but that's more or less exclusively for widely used performance-critical libraries like BLAS or FFT, not for application code.)


Is Julia a JIT compiler? How is Julia different from PyPy or even the JVM? There is also the new GraalVM in the landscape.

I don't see the added value of Julia compared to Python or Java. It will still be slower than C/C++, probably less portable, and all the legacy libraries have to be rewritten in Julia.

If Julia is only new syntax, Python already seems very simple to me. If Julia is a JIT compiler, why not contribute to the already available compilers?

Julia is still there, so I guess it adds value, but I don't know where to place that effort in the grand scheme of things.


These questions pop up every time Julia comes up on HN and have been answered time and again, for example by Chris Rackauckas [1]. At the end of the day there will always be pros and cons, and people unwilling to change from languages they're proficient in. Personally I've been using Julia for scientific computing for 4 years and I'm hooked.

[1] https://thewinnower.com/papers/9323-i-like-julia-because-it-...


This question implicitly presumes that language design doesn't matter, that you can just hook a JIT up to any language syntax and semantics and presto... it's as fast as C. But it doesn't work like that. You can't just take an arbitrary language and "rub some LLVM on it" to make it fast. The dozens of failed efforts to speed up Python and R with LLVM and other JITs over the years prove that the source language matters. (PyPy is the only one with any real success, and they had to give up on CPython ecosystem compatibility.) It's Python and R semantics that are the problem, and this can't be fixed with a JIT. Java is much better in terms of performance, of course, but still has issues. (The lack of value types is a big one, for example; being forced to go through a VM runtime is another problem at the highest levels of performance demands.) Julia can be seen as an experiment which proves that if you design a high-level dynamic language from the start to be compiled and run efficiently, then you can do much better than you can by bolting a JIT onto an existing slow dynamic language after the fact.
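
A minimal illustration of what those design choices buy you: because Julia's semantics let the compiler infer concrete types, a generic function specializes down to the kind of tight machine code you'd expect from C.

    using InteractiveUtils

    # A generic, annotation-free function...
    f(x) = 2x + 1

    # ...compiles to a separate specialization per concrete argument type.
    # For Float64 it's a couple of machine instructions: no boxing, no
    # dynamic dispatch, no interpreter anywhere.
    @code_llvm f(1.0)
    @code_native f(1.0)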


> How is Julia different from PyPy

Well, no GIL for a start, which is a pretty strong selling point when running on several dozen cores at once.
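
For example, a chunked parallel reduction across all available threads is a few lines (a minimal sketch; start Julia with e.g. JULIA_NUM_THREADS=32):

    # Chunked parallel sum across all threads; no GIL means the chunks
    # genuinely run concurrently.
    function threaded_sum(xs)
        nt     = Threads.nthreads()
        chunks = Iterators.partition(xs, cld(length(xs), nt))
        tasks  = [(Threads.@spawn sum(c)) for c in chunks]
        return sum(fetch.(tasks))
    end

    threaded_sum(rand(10^8))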

> I don't see the added value of Julia compared to Python or Java

I don't see how you can lump Python and Java together.

> It will still be slower than C/C++

Not by that much: https://news.ycombinator.com/item?id=17204750

> probably less portable

Who cares? 99.99% of HPC is Linux clusters anyway. And Julia runs on macOS and Linux, which covers the overwhelming majority of the users concerned.

> and all the legacy libraries have to be rewritten in Julia.

https://docs.julialang.org/en/v0.6.0/manual/calling-c-and-fo...
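
For instance, calling a C function is a one-liner with no wrapper-generation step:

    # Call the C math library's cosine; the symbol is resolved in-process,
    # with no glue code and no separate build step.
    c = ccall(:cos, Float64, (Float64,), 1.0)    # same result as cos(1.0)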

> I don't know where to place that effort in the grand scheme of things.

Given your questions, browsing their website would be a good start.

https://julialang.org


Thank you for the detailed answer. Here's my feedback on some of the points:

> I don't see how you can lump Python and Java together.

What I meant is that the trio Python/Java/C is ubiquitous for many people in both enterprises and scientific fields, from embedded to web servers. It allows for great reusability of code and of people's skills.

> Who cares? 99.99% of HPC is Linux clusters anyway. And Julia runs on macOS and Linux, which covers the overwhelming majority of the users concerned.

But will Julia be able to output the necessary instructions for future hardware accelerators, which could be totally different architectures? I'm thinking of all the new neural network cores, DSPs, FPGAs, and heterogeneous computing from rival vendors. It seems Julia is deeply dependent on LLVM.

> and all the legacy libraries have to be rewritten in Julia.

> https://docs.julialang.org/en/v0.6.0/manual/calling-c-and-fo....

If you have to reuse C and Fortran libraries, why not just use Python, which can do the same, or even Lua or Lisp? Python is already the de facto language for gluing libraries together into a higher-level algorithm.


> But will Julia be able to output the necessary instructions for future hardware accelerators, which could be totally different architectures? I'm thinking of all the new neural network cores, DSPs, FPGAs, and heterogeneous computing from rival vendors. It seems Julia is deeply dependent on LLVM.

Yes. Watch HN in the next week or two for an announcement that may interest you ;).


Oh gosh, you're making me impatient now :)


> What I meant is that the trio Python/Java/C is ubiquitous for many people in both enterprises and scientific fields, from embedded to web servers. It allows for great reusability of code and of people's skills.

That's true. But it's also Academia's role to try (and fail or succeed) at developing and evaluating new solutions; and in the case of Julia, I have to concede I'm pretty excited to see where they will be going. The solution of merging the "glue" and the "high-performance" languages into a single one, while still letting people call older C-ABI libraries, is a little revolution in this context.

> If you have to reuse C and Fortran libraries, why not just use Python which can do the same, or even Lua, or Lisp. Python is already the defacto language to glue libraries together onto a higher level algorithm.

Because Julia is far faster, and because there's no GIL.

Of course, as with every other tech, what floats my boat doesn't necessarily float yours, so maybe for your use case Python/Lisp/Lua/Ruby/... is better.


Tbh, I just don't see any usage of Java or C# on HPC systems. Python is used a bit, but in embarrassingly parallel mode. Part of it might just be cultural bias, but it just isn't used, especially since there isn't much need to create and manage the kind of complex software architecture that enterprise systems leverage.


"Peta floating point operations per second per second" XD


wow - 28 people on the about page and the only woman is the diversity director.


Yep, that's something we need to fix. So far we've done most of our hiring from the open source community, which is unfortunately overwhelmingly male. We've done some work to try to increase diversity in the community, and Jane graciously agreed to take a break from her PhD studies at Caltech to help with outreach as part of that (also thanks to the Sloan Foundation for funding the corresponding effort). Clearly a lot more work is needed on that front, but I'm hoping the work that Jane and others have been doing this past year has helped on the community side. Of course there's work to do on the company side independently, but I imagine the community will continue to be an important hiring channel, so I consider improving diversity there a necessary ingredient.


Nothing needs to be fixed. Make the opportunity available to women but do not enforce a gender quota. Candidates should only be selected on merit. Equal opportunity is good. Forcing equality of outcome is BAD.


Nobody is forcing anybody to do anything, but even compared to tech itself, the open source community tends to be less diverse. I think it's fairly clear that there is a large number of qualified people who are not adequately represented in the open source community at the moment. In order not to miss out on these people, we need to a) try to get more such people involved with the open source community through outreach, etc., and b) once we get to that point in our hiring, make sure that our hiring funnel is not restricted to the subset of the population that happens to already be active in the open source community. In my opinion, the key is just to have a sufficiently wide funnel to avoid injecting bias at the top.


Fair enough. Women are generally less interested in engineering, so there will always be a gap. James Damore was fired from Google for writing this memo on the tech gender gap. You may find it interesting. https://www.documentcloud.org/documents/3914586-Googles-Ideo...


He was fired for being sexist and an idiot, not for “telling the truth” as his sexist supporters would have it.


The James Damore memo is quite a complex subject. The story is nuanced, and at no point was Damore sexist. He goes the extra mile to say that women can be as good engineers as any man, and he proposes a different way to interest women in engineering jobs.

I do not know if you have read the memo, but its only objective is to find an effective way to hire more women.

Personally I liked the proposition in the memo because it would allow men and women to share child duties equally.

That said, I do not criticize Google's decision to fire him, as the overwhelmingly negative exposure the memo received was quite damaging to the company...


He was an idiot in the sense that you obviously should not write and distribute an essay at work on a sensitive topic if you are not an “expert” and were not asked to do so. As far as anyone reasonable could tell, the facts he cited are correct, but his opinions and recommendations were obviously politically incorrect.


Just a bit of a factual remark: he was asked to do so.

The diversity and inclusion committee at Google asks employees for feedback; the memo was feedback saying "current policies do not work as expected; maybe offering more part-time and family-friendly jobs would allow both the men and the women who do want to spend time with their families to like working at Google."


If you have a gender imbalance of 27:0 that is not predicated on self-evident physiological differences, there's a cultural problem that artificially limits the pool of potential candidates. It is in a field's best interest to fix that.


I was trying to hire specialists in a similar field, in Europe. Applicants were at least 50:1 m:f. Instead we focused on getting diversity based on other things - country of origin, industries worked in before, age etc. It has worked out really well, we have a dynamic team, and yes, once we started hiring for other roles the gender balance of applicants flipped completely.


Sure: when hiring, you work with what's available. But as a community, it doesn't hurt to try to figure out where such ridiculously large gender imbalances come from, and whether there's something that can be done about it.


Chimpburger... I think your heart is probably in the right place. But think about the wider canvas.

In terms of what might be best for me, were I looking for a job... equal opportunity is fair/good. Of course I would think that... I'm a white male... I am a historically privileged class of person in the workforce. Anything that doesn't adversely affect my job opportunities is good for me, right?

From the perspective of a company that is trying to build a high-output, innovative team... diversity is really, really awesome: diversity of thinking patterns, backgrounds, interests. I have experienced this first hand and am now 100% sold on the importance of diversity and of oversampling certain segments of the "talent pool" to reach diversity goals. I may miss out on some opportunities due to how I am categorized in the overall talent pool... but diversity is really good for the organization and society in general.

Some sports analogies apply: you can't make a basketball team out of all shooting guards or a soccer team out of all forwards. You need a lot of different kinds of talent to make a good team.

That is just my opinion... I would rather work on a diverse team and I'd rather live in a diverse city/culture.


Your comment seems to rest on the implicit idea that different sexes, and perhaps races, have inherently different abilities - that "diversity", as the word is used in these contexts, amounts to diversity in talents and abilities. But I don't see any reason to think that diversity of race or sex is equivalent to diversity in intellectual qualities. In fact, many people would consider that sexist and racist.


I am not sure how you got that out of my comment... you created a rather unfortunate strawman of my comment and are calling me sexist and racist? That hurts. Are you going to call me a Republican next?! Because that's where I draw the line.

I have worked in high tech for 20 years: biotech, a PhD @ Carnegie Mellon, NASA, some startups, back to biotech... I have worked with so many awesome people from all walks of life that notions of gender or racial superiority are long gone. Quite the opposite: I have experienced the tremendous benefit that usually arises with diverse teams: deeper group experience, less competition and more cohesion, stronger friendships and sense of connection, diverse technical experience and interests, really different and novel ways of approaching problems and developing solutions (things that blew my fragile little mind), etc.


No, I'm sure you are not a racist, please don't take my comment that way. But sometimes we can promulgate ideas that we don't actually agree with unintentionally. It's the pattern of thinking that I wanted to examine, that I seem to encounter everywhere lately. Others spoke of the lack of "diversity", by which they clearly meant diversity of sex and race or ethnic background. You added the idea that this type of diversity was beneficial because it's an advantage to have a diversity of interests and intellectual strengths in different areas. So you are equating a racial and sexual diversity with the latter. If that's not what you meant, I apologize for misconstruing your comment.


Lee... You are straw-manning my statements again. You are reading a bias into my statements and putting extrapolated arguments that I didn't write into my virtual internet mouth. I don't like it.

Yes, I did write of myself as a historically privileged class... which definitely falls along racial/gender divides. Hopefully the manner of that frank discussion indicates that this privilege really sucks for most everyone (we really all lose) and is short-sighted. Then I started a new paragraph, which signifies a thought break --> moving on to a different but related thought.

Nowhere in my post did I use the word "race" or the phrase "intellectual strengths", and I didn't use those phrases because I don't believe in them... I used "thinking patterns, backgrounds, and interests."

To address this: "You added the idea that this type of diversity was beneficial because it's an advantage to have a diversity of interests and intellectual strengths in different areas. So you are equating a racial and sexual diversity with the latter." I wrote none of that... I believe none of that. It requires multiple logical fallacies to get from what I expressed to what you wrote above. Rather than jumping to conclusions, why don't you ask me to clarify what I meant?

So what do I think about diversity and inclusion, in brief: diversity is highly multivariate... what team diversity means can vary from situation to situation. Hiring for a mechanical engineering team vs. a women's fashion design team would probably involve very different diversity hiring goals, but the desired benefit and improvement for the team is similar.

If people only look at diversity as race and gender, they are missing the boat entirely... how about personality, age, ethnicity, religion, educational background, work experience, world/life experience, family responsibilities, life aspirations, out-of-work interests, health concerns and disabilities, sexual preference... all factors which make us different/unique and interesting.

The flip side of diversity, which is inclusion and appreciation, is pretty simple: it is really important to try to understand people, appreciate them for who they are, do your best to allow them to be who they are in the workplace, and not pigeonhole them into a box.

Regardless of what I'm called next, I'm done replying to this thread. If you figure out how to construe this as "I must be the second coming of Hitler", more power to you.


I obviously misconstrued your comments, for which I apologize. It sounds from this last comment that we are largely in agreement about these things.


Suppose you had a brain tumor that could possibly be fixed by surgery. You are given the choice of two surgeons. One is a woman who graduated from her medical school to fill a gender quota at the expense of other male students. The other is a more qualified and experienced surgeon who just so happens to be male. If you are true to your ideology, then you should choose the female surgeon. Am I right?


That female surgeon is smarter and more capable than either of us... brain surgery is like 7 years of internship and residency AFTER med school. If she makes it through all that, she can operate on me. One decimal point on her pre-med GPA doesn't matter for shit at that point.


This is not how it happens. So don’t be disingenuous.


You completely missed the point.


Nobody said anything about quotas except you. The GP said they were working on sourcing their candidates differently.


@lern_too_spel: I know they didn't, but they did not say they were not considering a gender quota.


Wow, what an eloquent argument. Does that mean you intend to harass a woman, since you haven't denied planning that?


It wasn't an argument.


IIRC, Jane is a Caltech PhD scientist. I imagine she is making a strong contribution to the Julia ecosystem.

The general problem of diversity is ubiquitous in tech; it's sort of rough to call out a young startup over this issue. Most good companies strongly desire diversity in their teams because of the real benefits diversity provides.


Apologies to Jane if I implied that her skills weren't on par - that wasn't my intention.



