GPU vendor-agnostic fluid dynamics solver in Julia (b-fg.github.io)
224 points by moelf on May 8, 2023 | 104 comments



The authors have previously also shown off that this can do 3D visualization in real time: https://twitter.com/gabrielweymouth/status/16486827416201953...


Hmm, the 3D render is done in Julia as well, right?

I wonder if there's a way to share the data buffer across languages. Would be neat if it was feasible to use the real time model data in a game.


>I wonder if there's a way to share the data buffer across languages

This is pretty much what Arrow is made for.


I assume you're talking about Apache Arrow?

Interesting! Don't think I've seen this one.

https://arrow.apache.org/

Edit: I'm surprised not to find anything when I look for projects that use Apache Arrow in Unity 3D. There seems to be a lot of interesting potential in leveraging various simulation libraries in game-like applications.

Edit 2: Ah, probably because something like WaterLily would have to be rewritten quite a bit to use Arrow.


Success stories like this make a better argument for "Why Lisp?" than abstract blog posts.

We know macros are awesome, but if you're trying to convert others please provide code, screenshots, or even an interactive web demo.


> abstract blog posts

If you refer to the blog post that made Top HN yesterday, it is very much backed by actual experience (https://nyxt.atlas.engineer/) and quite a load of code (https://github.com/atlas-engineer/nyxt/tree/master/source).


> the blog post that made Top HN yesterday

Why Lisp? https://news.ycombinator.com/item?id=35852321


Good to know! I skimmed yesterday's article.


Siscog, a decades-old Portuguese company, uses CL for their products.

https://www.siscog.pt/en-gb/products

https://www.siscog.pt/en-gb/news/siscog-sponsors-the-2022-eu...


Just a little thing: this is in Julia.


It turns out that Julia is ~a lisp, just with a weird syntax. If you look at the metaprogramming facilities, all expressions are first turned into s-exprs while parsing. There is no problem having a LISP syntax for Julia, and in fact this has been implemented! (https://github.com/swadey/LispSyntax.jl)

https://docs.julialang.org/en/v1/manual/metaprogramming/
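
For instance, a quick REPL sketch (nothing specific to this package) showing that a parsed expression really is s-expression-shaped data you can inspect:

    julia> ex = Meta.parse("1 + 2 * 3")   # parsing yields an Expr, essentially an s-expr
    :(1 + 2 * 3)

    julia> ex.head, ex.args
    (:call, Any[:+, 1, :(2 * 3)])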


Some have said Python is "enough" of a Lisp, but it's really not. Julia is much closer to being a true Lisp. It has macros, sure, but it's the overall feel and flexibility of the ecosystem that feels Lispy. At least as far as I've dabbled with Lisp and its history.


Have you tried running `julia --lisp`? That's a full-blown Femtolisp interpreter built right into the REPL! I also recommend playing with `Meta.show_sexpr` which can take any Julia expression and represent it as an S-expression.

For example:

    julia> Meta.show_sexpr(:(f(x, g(y,z))))
    (:call, :f, :x, (:call, :g, :y, :z))

Lastly, this old doc page comparing and contrasting Julia with Common Lisp is a fun read: https://docs.julialang.org/en/v1.3-dev/manual/noteworthy-dif...


To put it in perspective, Julia is a Dylan like Lisp.

https://en.wikipedia.org/wiki/History_of_the_Dylan_programmi...

https://en.wikipedia.org/wiki/Apple_Dylan

Nowadays still alive as Open Dylan,

https://opendylan.org/


After clicking through to the repository, I found this part a bit perplexing: "running on a GPU requires initializing the Simulation memory on the GPU, and care needs to be taken to move the data back to the CPU for visualization."

The original purpose of GPUs was visualization, so that seems backwards to me. And GLMakie is used, which makes it even more counter-intuitive; isn't that specifically built for GPU visualization?


You are right. Ideally we would like to keep everything in GPU memory of course. But we have not been able to render CuArrays with Makie yet, and I'm not sure if that is actually implemented, as you suggest. If so, it would be great to see an example of this :)


As a card-carrying Python-stack scientist who has worked at the intersection of machine learning and the physical sciences for the last decade, and who is now working on an R package (pro tip: go where the money is, don't bring the money to you), can someone make a convincing argument for me to learn Julia? I would like to hear more than the typical "the code auto-differentiates" or "it's faster" or whatever it is that people have said in the past. I am really not trying to be flippant; I just don't see the added value of learning a new language unless it has interesting packages/functionality that my current toolset does not (e.g., this is why I am working on an R package).


I'll put it this way: I'm just an idiot engineer, not really a programmer, but I've written some blazingly fast code in Julia that would have taken me way, way longer to write and would have run way, way slower in the other languages I've played in.

I have to shout out Chris Rackauckas for being such a badass, helpful person too. He'll probably be in this thread any minute because he's the best damn advocate for Julia there is. :-)


Not so sure about that last part. He's definitely an incredible force in terms of coding, project management, stuff like that. The Julia package ecosystem wouldn't be half of what it is without him.

But "best damn advocate" is probably the opposite of how I'd describe him in terms of interactions here (and generally with people outside the core Julia circle). He very often comes across as dismissive, overly defensive, and passive aggressive in comments here. All of that is dwarfed by his package contributions tbh, in terms of impact on Julia. But still, probably half of the negative perception about Julia community that people have, come from reading these interactions.


Reading subtleties into internet comments is a bit too much, lol. Might as well do palm reading.


One reason to keep an eye on Julia if you are envisioning a long and varied career in the broader computational / data science world is that it's not at all clear which will be the leading platform going forward (Python, R, Julia, or something else altogether).

This ambivalence might sound absurd in the face of the recent exponential growth of Python (which apparently enticed even some people with serious mojo to get into the act), but take two steps back with me and look at the big (if still hazy) picture:

We are going through a remarkable period where complex algorithmic applications have left academia and research labs and are diffusing into mainstream society and the economy like never before. This process carries enormous risks and opportunities, which are currently basically... ignored (well, the risk side).

Despite its undeniable strengths and loveability, Python is actually a poster child of the move-fast-and-break-things phase. It is not necessarily best placed for the next phase. The next phase will invariably see a re-examination of all aspects of the stack, and the qualities that will be prized will be those that eliminate the frictions and risks associated with the large-scale deployment of algorithms. The stakes are high, which means there will be plenty of resources seeking to create reliable platforms. The future need not look like the past.

None of the usual suspects ticks all the boxes. In fact we don't even know all the boxes yet; it depends on how fast and how seriously models and algorithms get deployed at scale. Python, Julia, and R have been propelled forward by circumstances as the main algorithm-centric platforms, and they each have their various warts and blessings, but the near- and mid-term future will test how well they can deliver on aspects they may not have been designed for.


> its not at all clear which will be the leading platform going forward (Python, R, Julia or something else altogether). This ambivalence might sound absurd on the face of the exponential recent growth of Python

It sounds absurd because trends don't reverse overnight. You can be fairly confident that Python will be the top language in this space for a while and that R will never be the top choice for most applications.


Irrespective of whether Julia ends up the winner of the shift or if the shift happens, it is quite possible for trends to reverse very fast. See Perl or Objective-C for that matter.

https://www.tiobe.com/tiobe-index/perl/

https://www.tiobe.com/tiobe-index/objective-C/

In 2006 (for Perl) and 2014 (for Objective-C) it was clear they had the momentum in their particular space; however, their limitations were well known, and as soon as a better language came along the momentum flipped in an equally dramatic manner. Python is much more widespread, so it will remain strong in some areas, but you could see the flip in ML/DS given the challenges of productionizing across broad capabilities (not just doing NNs).

As the joke goes, Python is the second-best language for everything, if you only know two languages. With ML expanding beyond narrow big-tech domains, there will be a need for specialized languages like Julia (and perhaps others like Mojo, etc.).


Objective C was replaced single-handedly by Apple, it wasn't a natural trend reversal.


It was also boosted by Apple in the first place! So there's nothing natural about these kinds of trends. If Google and FB hadn't picked up Python for ML it wouldn't have taken off as much, which is also to say that if they (or another large player) back another language, you could see a similar decline in Python usage.


Google tried Swift for ML and it didn't make any dent in Python.


I think so too; in the short term (1-2 years at least) Python will gently move into the last stage of its adoption curve (even doing nothing).

But now is a time when, in various high places, people will say: "OK, you got my attention. What is this snake language you are talking about, and explain why I should bet the house on it."

And the answer is not simple.


What's your argument - that people who don't understand the space at all will dictate rewrites in other languages? It seems very unlikely.


you obviously have never worked in the corporate world


The corporate world will want to do $x because $x is in the news, they won't be making nuanced arguments about tradeoffs in a domain they don't understand. Least of all arguments that go entirely against the trends in the industry.


> society and the economy like never before. This process carries enormous risks and opportunities, which are currently basically... ignored (well, the risk side)

What risks are you talking about here?


Take the entire stack (including all dependencies, toolchains, etc.) and think about scenarios of accidental or malicious malfunction, but also reproducibility, auditability of outcomes, that sort of stuff. The overall ability to provide locked-down, performant, safe, secure deployments of high-quality, validated algorithms without breaking the bank. In other words, the risks (but also the frictions/costs) in the "productionising" of algorithms.


I do get where you are coming from. Indeed, it makes little sense to use Julia for lots of machine learning when PyTorch and Jax are just so good. And it sounds like you don't want to use Julia, so who am I to try and convince you? Python/R are capable languages.

But, there are still reasons I reach for Julia.

Interesting packages where I prefer Julia over Python/R: Turing.jl for Bayesian statistics; Agents.jl for agent-based modelling; DifferentialEquations.jl for ODE solving.
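
For a taste of that last one, here is a minimal ODE sketch (assuming DifferentialEquations.jl is installed; the problem and numbers are just illustrative):

    using DifferentialEquations

    # exponential decay du/dt = -1.01u on t in [0, 1]
    f(u, p, t) = -1.01 * u
    prob = ODEProblem(f, 1.0, (0.0, 1.0))
    sol = solve(prob)   # a suitable solver is picked automatically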

I would much rather data-munge tabular data in Julia (DataFrames.jl) than Python, though R is admittedly quite nice on this front.

Personally, I reach for Julia when I want to use one of the previous packages, or when I want to code something up from scratch, where base Julia is much preferable to me over numpy.


Three reasons: Julia feels more like math, there's a huge long-term commitment to the language because it's used for climate modeling, and package management is completely painless.

I love Python, but I can also see eventually doing everything in Julia over the longer term. Mind you, it's entirely possible that AI continues to improve and in 5 years any package will be available in any language, you'll look at code mainly for verification purposes in whatever language you happen to prefer.


For me personally, I just think it's really fun to write Julia code. Granted, I'm in neither machine learning nor physical science, but the fact that I can go through the whole stack and choose an abstraction that's right for the problem at hand (metaprogramming? regular struct-based abstractions? an external program? LLVM optimization? inline assembly?) and still be able to understand what's going on while getting good performance at the same time is just magical to me. Maybe that's not for everyone, but to me the ratio of dev time to run time is just really, really good.


I think the main idea behind Julia is to minimize the burden of doing the necessary but wasteful software engineering parts of scientific computation.

Let the language optimize more so you don't have to write a C++ library or figure out how to use it optimally. Don't waste as much time setting up your environment or worrying about platform compatibility. Don't worry about using multiple languages for different types of computation. And make the on-ramp fairly painless by being a convenient glue language.

It fills the gap of otherwise not having a managed and JITed language for general mathematical computation. If it's more burdensome for you to switch, then don't switch.


IMO, the biggest reason for me is that the code looks a lot more similar to the math than in python/R. This comes from a number of places (multiple dispatch, ability to use unicode symbols, you don't have to vectorize everything, etc), but the end result is code that looks a lot like the math you are trying to do (for examples, see https://discourse.julialang.org/t/from-papers-to-julia-code-...)
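
A contrived sketch of what that tends to look like in practice (illustrative names only):

    # literal coefficients, Unicode names, and broadcasting keep the code close to the math
    σ(x) = 1 / (1 + exp(-x))      # logistic function, written as you would on paper
    f(x) = 3x^2 + 2x + 1          # 3x^2 needs no explicit multiplication sign
    y = σ.(f.(randn(10)))         # applied elementwise, no manual vectorization needed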


If you work partially in the physical sciences and TFA doesn't entice you to try Julia (someone with no GPU programming experience implementing serial, parallel-CPU, and parallel-GPU Navier-Stokes, all without touching C, C++, or Fortran, in mostly similar code size/LOC, and achieving a 30x speedup), I can't imagine what would.


Here's my best short case for you:

If you're writing code that is fundamentally based on mathematical principles and models, even if you aren't personally using mathematics every day, its going to feel a lot better in Julia. That is: Julia looks a lot more like mathematics than Python.

__

Longer version:

Obviously some people are mostly writing websites or GUIs or whatever in Python and won't see the beauty in this.

But if the problems you are working on have, at their base, a mathematical foundation (even if you don't actively practice the math), it's much more beautiful IMO. So, simulation, data analysis/science and machine learning, statistics, etc...

Once you get used to using it for that though you'll realize it's actually quite nice for a lot of other things as well and the "mathematical mindset" it somewhat pushes results in cleaner solutions for other problems too. Just in general the syntax and patterns are nice.

Here are some quick things using randomness in Julia that would be a bit slower and more verbose in Python:

Generate a random number:

> rand()

Pick a random message:

> rand(["First message", "Hello", "Foo"])

Generate a random 3x3 matrix of booleans

> rand(Bool, (3,3))

Define a function and run it elementwise on a random matrix of bools:

> myprint(x)= x > 0 ? "Happy" : "Sad"

> B=and(Bool, (3,3))

> myprint.(B)

Returns:

> 3×3 Matrix{String}:

> "Happy" "Happy" "Sad"

> "Sad" "Happy" "Happy"

> "Sad" "Happy" "Happy"

And many, many more nice features... but the Julia design, meaning functions like rand() just apply how you expect regardless of the input type, is quite nice. rand(list of strings) *should* give me a random string and rand(range of numbers) *should* give me a random number in that range! No one would write an academic paper and define a new rand function for each input because, well... it's clear what the user wants: rand of something.


> > B=and(Bool, (3,3))

Just in case it confuses anyone else: this one is supposed to be `rand` as well, not `and`.


if you have to solve mathematical programming/convex optimization problems, JuMP as a frontend for free or commercial solvers is hugely better than any alternative.

likewise if you are solving differential equations, DifferentialEquations.jl is hugely better than any free alternative I know of and arguably better than paid packages. The broader SciML ecosystem that's built up around this has a lot of cool stuff in it too.

other than this it seems like you wouldn't care about the other potential advantages, and might be more put off than average by the disadvantages and occasional rough edges.


> I would like to hear more than the typical "the code auto-differentiates" or "it's faster" or whatever

> unless it has interesting packages/functionality that my current toolset does not

Multiple dispatch, code-specialization, JIT compiling, automatic loop fusion, broadcasting, many ways of doing compile-time optimizations.... I'm sure there's more.

But I guess you can dismiss these features the same way that you dismiss "the code auto-differentiates" or "it's faster".
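
Concretely, two of those in a toy sketch (made-up names, just for illustration):

    # multiple dispatch: one generic name, a specialized method compiled per argument type
    mynorm(x::Real) = abs(x)
    mynorm(v::AbstractVector) = sqrt(sum(abs2, v))

    # broadcast/loop fusion: the whole right-hand side fuses into one loop, no temporaries
    x = rand(1_000)
    y = similar(x)
    @. y = 2x^2 + 3x + 1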


If you frequently want to develop and maintain publicly used functionality that would otherwise require writing parts in a faster compiled language and then binding to an interactive one like R or Python, that's a strong case for Julia. Test coverage and multi-user maintenance are way easier when it's all just one language with a standard package manager.


Context: Coming from a statistics background, I learned a bit of R, then a bit of Python for data analysis/science, then found Julia as the language I invested my time in. Over time I keep up with R and Python enough to know what's different since I learned them, but don't use them daily.

What I always tell people is the following:

If you are writing code using existing libraries, then use whichever language has those libraries. The NN stack(s) in Python are great, and the statistical ML stack(s) in R are simple and include SOTA techniques.

If you are writing a package yourself, then I assume you know the core of the idea well enough to be able to write your code from the "top down" i.e. you're not experimenting with how to solve the problem at hand, you're implementing something concretely defined.

In this case, and tailored to your use, I would argue that Julia has more advantages than disadvantages, especially compared to R or Python. Here are a few comments:

1. Environments, dependencies, and distribution can all be handled by Pkg.jl, the built in package manager. There is no 3rd party tool involved, there is no disagreement in the community on which is better. This is my biggest pain point with Python.

2. Julia's type system both exists and is more powerful than that of Python (types or classes) and R (even Hadley's new S7(?) system). By powerful I mean generics/parametric types and overloading/dispatch built in. You can code without them, but certain problems are solved elegantly by them (see the sketch after this list). Since working heavily with types in recent years, I find this to be my biggest pain point in R, and I wouldn't want to write a package in R, although I like to use it as an end user.

3. New developments in scientific programming, programming ergonomics, hardware generic code (as in this post), and other cool features happen in Julia. New developments in statistics happen in R (and increasingly Julia), new developments funded by big companies happen in Python.

4. The Python and R interpreters start up faster than Julia. The biggest problem here is when you are redefining types, which is the only thing in Julia that can't currently be "hot reloaded", i.e. you need to restart Julia to redefine types.

5. Working with tabular data is (currently) far more ergonomic and effortless in R than Python and Julia.

6. Plotting is not a solved problem in Julia. Plots.jl is pretty easy and pretty powerful, Makie.jl is powerful but very manual. Time to first plot is longer than R or Python.

7. Julia has almost zero technical debt, R and Python have a lot. Backwards compatibility is guaranteed for Julia code written in >v1.0 and Pkg.jl handles package compatibility. If I send you code I wrote 4 years ago along with a Project.toml containing [compat] information then you could run the code with zero effort. (This is the theory, in practice Julia programmers are typically scientists first and coders second, ymmv.)

8. You can choose how low level you want your code to be. Prototyping can be done in Julia, rewriting to be faster can be done in Julia, production code can be done in Julia. Translating Python to C++ production might mean thinking about types for the first time in the dev process. In Julia, going to production just means making sure your code is type stable.
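
A tiny sketch of point 2, parametric types plus dispatch (made-up names, not from any package):

    struct Point{T<:Real}
        x::T
        y::T
    end

    # one generic method covers Point{Int}, Point{Float64}, ...
    dist(p::Point, q::Point) = hypot(p.x - q.x, p.y - q.y)

    dist(Point(0, 0), Point(3, 4))          # Point{Int}, returns 5.0
    dist(Point(0.0, 1.0), Point(1.0, 0.0))  # Point{Float64}, same method, specialized code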


You can have a nice foreign function interface between R->Julia and Julia->R. If you're already happy pulling slow functions out into Rcpp, then maybe there's no speed benefit. But there are some very nice, very fast libraries in Julia, so if you have a tight inner loop, it could be worth looking into.

It reads and writes a lot like Python (but nicer, IMO), so I don't think the learning curve is immense if you try it for small optimizations. And it's also not unreadable, so other people can verify your code.


> nice foreign function interface between R->Julia and Julia ->R

JuliaCall[1] and RCall[2].

Python<->Julia is similarly well exercised with PyCall[3], and recently PythonCall[4].

[1] https://non-contradiction.github.io/JuliaCall/index.html [2] https://github.com/JuliaInterop/RCall.jl [3] https://github.com/JuliaPy/PyCall.jl [4] https://cjdoris.github.io/PythonCall.jl/stable/
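
For a rough idea of the Julia->R direction with RCall, a minimal sketch (assumes RCall.jl and a local R installation):

    using RCall

    x = randn(100)
    @rput x                   # copy the Julia vector into the embedded R session
    R"mean(x)"                # run R code against it
    y = rcopy(R"rnorm(10)")   # pull an R result back as a Julia vector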


I don't get the money part. Do you mean the people with money have a need for R packages or how are the two related?


I mean, I never would have signed up to develop an R package on my own. But I was in the right place at the right time to work on a project with funding that is interesting, and it just happens to be in R. It's nice to learn a new tool (no matter what it is), but I would not have chosen R if it had been my choice.

Moreover, I think sometimes people get their PhD and think they deserve to use the tools they put in their toolbox, on the problems they focused on, and don't see that all they really did was get a ticket to the game. Most scientists have a PhD. Most scientists don't work on the thing their PhD is about ten years later. The sooner you open up to that, the sooner you will get out of the postdoc chase and get a job that is a lot more rewarding (both intellectually and financially). All this means that there may be problems you are going to learn about and focus on that you never thought you would at some point, and being open to that and seeing it as an opportunity will carry you further than not.


I think it's fine if you don't learn Julia. When I was in university some of the course work had to be done in MATLAB. I think Julia could definitely be used instead nowadays. Simply being free is reason enough. You could argue that python/numpy would be an option as well.


> more than the typical "the code auto-differentiates" or "it's faster" or whatever it is that people have said in the past

Why are you asking to be convinced if you don't want to be convinced?


Dope! Can anyone help me with the following? This has been 'floating' around in my head for nearly a decade:

Whales (the animal) often have barnacles on the leading edges of their flippers, which result in an eddy effect that increases efficiency/thrust.

Da Vinci was the earliest known documenter of eddy-based pumps and predicted the eddies in the ventricular system of the heart's pumping of blood...

What I would like to model is a toroidal propeller with leading edge bumps ('barnacles') while also having the dimpling pattern of a golf ball to reduce drag... and I want to measure if this idea holds water.

I just don't know how to model this using this tool...

help?



I have a really weird hypothesis.

So we look at the actions of the faster currents as the flipper cuts through, and where the current flows, such that you can direct micro-currents to the other bumps and manage overall flow... such that certain bumps 'feed' others...

and if they are malleable, and you can manage each...


Sooo, in terms of malleable wing sections there's some neat research done by MIT that might be interesting to you [0]. There's also some new work that popped up when I was searching for [0]; apparently the same researchers have been working on this idea for some time [1]. The idea is simple in concept: produce small building blocks and assemble large shapes from those building blocks. If you design your building blocks right and assemble them correctly you can achieve some pretty impressive macro properties, including compliant mechanisms [2]. And then there's this 3D printer I stumbled across yesterday that prints on successive sheets of carbon fiber [3]. This could be relevant for the manufacturing process (they currently use injection molding in [1]?) as it would offer composite material strength at high speed (they claim 17 seconds for a relatively complex part).

[0] https://news.mit.edu/2016/morphing-airplane-wing-design-1103

[1] https://news.mit.edu/2019/engineers-demonstrate-lighter-flex...

[2] https://www.science.org/doi/10.1126/science.1240889

[3] https://www.youtube.com/watch?v=0BKzikfssTM


Wonderful, thank you! So, when I said that I have a really weird design idea:

-

The best arch for dividing a sphere is a hexagon.

The idea for dimpling comes from the honeycomb sandwich design patterns we have for use in structurally solid airplane components, coupled with a micro design for stem-cell research from someone from University of San Francisco....

She was building a micro printed 'injector' where she would inject proteins to various stem cell pods.

She would then measure the various cells to see how she could get them to express in a desired outcome....

The hex is from some top secret shit I saw back in the day...

So... I am thinking that one can utilize the hex layout and, by slurping/pumping vacuum or hydro, manipulate the dimples on an interface... On a toroidal propeller in water, pressure is equalized in a certain way to allow live dynamic prop deformations...

In a helo blade it has to be gas-activated, but the material overlay has to be able to handle millions of deformations on an individual cell or a neighborhood of cells to reduce piping.

Leading-edge bumps inflate on the way out and release to trailing-edge bumps on exit...


What are the vortices created by the cells of a butterfly's wings? That is how they manage vortices: through complementary vortices.


Curious: are you invalidating or validating my comment?


I was providing information in case you didn't know this stuff was out there. If you're going to do some CFD, it might be useful to see what prior art there is.


Thank you so much, super surprised that info is out there along the same thought lines!!

I really think there is something here


Why use this tool? Why not simply build and test the propeller? Your results would be much more reliable - CFD is famously finicky, especially for a situation like a propeller where you have rotating flows.

The point of golf ball dimples is to act as vortex generators and improve flow attachment, so having both barnacles and dimples is redundant.


+1 to this - Though it has been many years, at one point I helped write CFD code for the US Navy. The disconnect between real-world measurements and simulations was vast for any structure remotely complicated (ie anything but an axially symmetric simple shape like a torpedo). While the CFD code has gotten way better (largely via Moore's law), I expect propellers are still quite hard to simulate, and your best path is to build a model and see how it performs.


But the CFD isn't to replace building the model...it's to give you a head start and reasonable starting location, and to find glaring errors. Otherwise you're going to be spending thousands and thousands running back to the wind tunnel every time you want to iterate..


Agreed, and that is exactly how we used it with the Navy - the CFD simulations narrowed the universe of options so we could be more efficient with the wind tunnels and water tanks. In this case, however, he has a specific shape that he wants to test. For that, going straight to a physical model makes sense.


Is building and testing an actual propeller going to be easier? I would have thought that setting up and running a simulation could be done in a moderate amount of hours, and could then be quickly iterated on. The only requirements are a laptop and an internet connection.

Building a model, on the other hand, could potentially involve a multi-year effort to re-educate yourself to learn how to build models, having to acquire hardware and materials, and setting up a lab for testing. And then building many different actual models.


Yep.

And I didn't give the following context:

Like JWST: "variable dimples and bumps via hydraulic mechanisms, for optimised flow." Now we have the hard part added in (dimple flexing), and there are more levels than that...


Trying to optimize across a bunch of different dimple/bump patterns is a great use case for CFD. However, you'd want to validate a CFD model like that against real world test data (either by running your own tests or by finding test data in the literature).

To be clear, I think adding vortex generators to boat propellers is a great idea. But to me the starting place would be building a physical propeller with vortex generators (even if they aren't the perfect vortex generators) and seeing if it has a noticeable impact on boat performance. That would justify the difficulty/time required to build a CFD model which you could use to optimize the design. (I'm saying this given that you don't have experience in CFD and it would be a large outlay of your own time to learn how to use it.)


for example you want to be able to shape your dimples in order to precisely control your eddies and currents...


No, that's backwards. Running a CFD simulation would involve a multi-year effort to re-educate yourself to learn how to build an accurate CFD model.

Building a physical model, on the other hand, merely requires 3D printing the desired shape, doing some sort of casting process to get that out of metal, putting it on a boat, and seeing if you get a noticeable performance impact.

It's one of these situations: https://xkcd.com/1425/


How do you figure that, when the CFD software is already built and available, and the premise in the question is that the asker is already in a position to start using it? It's pretty clear that the background of the asker is aligned with a software approach.

> Building a physical model, on the other hand, merely requires 3D printing the desired shape, doing some sort of casting process to get that out of metal, putting it on a boat, and seeing if you get a noticeable performance impact.

I love how you sneak the word "merely" in there, when for people like me, and presumably the asker, this would be a Herculean task :D


>the premise in the question is that the asker is already in a position to start using it

That's an incorrect premise. The asker is clearly not in a position to use an off the shelf CFD package because they don't have the solid basis in fluid mechanics required to interpret the results.

I think maybe you/the asker are thinking of CFD like a tool that simulates fluids. That's not what it is. It's a set of approximations which, sometimes, are applicable to specific circumstances. Even LES, the most general tractable model, requires you to make informed assumptions about the boundary conditions.

> I love how you sneak the word "merely" in there, when for people like me, and presumably the asker, this would be a Herculean task

I'm sure building a physical prop would be challenging if you had no experience, but it is the sort of thing a lot of people can learn from Youtube and do in their garage.


As long as you can define a signed distance function, and the function governing the motion of the propeller, WaterLily can simulate it!
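
A rough sketch in the spirit of the circle example from the WaterLily README (the constructor keywords may differ between versions, so treat this as pseudocode and check the docs):

    using WaterLily

    function circle_sim(n, m; Re=250, U=1)
        radius, center = m/8, m/2
        # signed distance function: negative inside the body, zero on its surface
        sdf(x, t) = √sum(abs2, x .- center) - radius
        body = AutoBody(sdf)
        Simulation((n, m), (U, 0), radius; ν=U*radius/Re, body)
    end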


What turbulence models do you have implemented? It's been ~ a decade since I played with CFD, wondering if DNS is computationally tractable these days.


There is no explicit LES model, i.e. it is implicit LES. All the additional dissipation required when running coarse meshes relies on the numerical dissipation of the implemented schemes.


DNS is not used much (at all?) for real world problems, but LES has become much more common.


I don’t think that’s true. What about cavitation?


Cavitation involves nothing special other than multi-phase flow, and implementing a VOF method is definitely on our roadmap. You can currently simulate it without cavitation to get a feel for the unsteady flow solution, or wait for us to implement VOF.


Cavitation is precisely what I would like to test...

Also, will 'WaterLily' work with the viscosity of air?

Meaning: toroidal props are a newer thing entering drones... and I'd like to find out if the above applies equally to fluid/air?


Then you should use a different solver that fills your needs. WaterLily is an incompressible-flow solver that works in non-dimensional units (assuming constant unit density). So you can change the viscosity of the fluid by modifying the Reynolds number (with a set characteristic velocity and length scale).
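
In code terms, the non-dimensional setup is just (illustrative numbers):

    # with unit density, pick a characteristic velocity U and length L,
    # and the kinematic viscosity follows from the target Reynolds number
    Re = 250
    U, L = 1, 32
    ν = U * L / Re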


On a related note, Julia [1] announced their v1.9 release.

[1] https://news.ycombinator.com/item?id=35861288


This is slightly premature. They only just tagged the release on GitHub several hours ago. While it does suggest the actual release is imminent, it's not really official until you see it here on Discourse:

https://discourse.julialang.org/c/announce/25


What? On the official page it's still on 1.9-rc3


The home page is not yet updated (probably waiting for the official binaries to be done, which are also not yet on the download bucket).


> probably waiting for the official binaries to be done

For 3 weeks? Because that's how old those release notes are.


There's a lot more than release notes going into a release - we've had 3 release candidates for a reason, and those regressions/bugs need to be fixed first.


We got it, but doesn't that suggest that just because release notes exist, it doesn't mean something has been released? Unless ksec has special inside info for his claim, I think his link doesn't show anything.


The release was just cut 9 hours ago, as shown on the releases part of the Github page (https://github.com/JuliaLang/julia/releases/tag/v1.9.0). That then starts the jobs for the creation and deployment of the final binaries, and when that's done the Julialang.org website gets updated to state it's the release, and when that's done the blog post for the new release goes out. You can even follow the last step of the process here (https://github.com/JuliaLang/www.julialang.org/pull/1875), since it all occurs on the open source organization.


There will also be a release blogpost, highlighting the new stuff. The release will likely come with that.


Really? Fluid solver with no pics?



I thought this might be related to the prior Julia to wasm fluid dynamics simulation, but it seems to be independent of that effort.

https://alexander-barth.github.io/FluidSimDemo-WebAssembly/


This is computational fluid dynamics, not colorful fluid dynamics /s


What is the difference?


This is cool but following some of the links it seems like there are a lot of immature parts of the ecosystem and things will not "just work". See for example this bug which I found from the blog post: https://github.com/odsl-team/julia-ml-from-scratch/issues/2

Summarizing, they benchmark some machine learning code that uses KernelAbstractions.jl on different platforms and find:

* AMD GPU is slower than CPU

* Intel GPU doesn't finish / seems to leak memory

* Apple GPU doesn't finish / seems to leak memory

Would also be interesting to compare the benchmarks to hand-written CUDA kernels (both in Julia and C++) to quantify the cost of the KernelAbstractions layer.
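
For context, a vendor-agnostic kernel in KernelAbstractions.jl looks roughly like this (a sketch; the launch API has changed between versions, so check the current docs):

    using KernelAbstractions

    # y .= a .* x .+ y, written once and compiled for whichever backend you pass in
    @kernel function axpy!(y, a, x)
        i = @index(Global)
        y[i] += a * x[i]
    end

    backend = CPU()                  # or a GPU backend from CUDA.jl, AMDGPU.jl, ...
    y, x = rand(1024), rand(1024)
    k = axpy!(backend, 64)           # 64 is the workgroup size
    k(y, 2.0, x, ndrange=length(y))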


I'm currently playing around with Oceananigans.jl (https://github.com/CliMA/Oceananigans.jl). Do you know how the two are similar or different?

Oceananigans.jl has really intuitive step-by-step examples and a great discussion page on GitHub.


Oceananigans is used for climate modelling and they use a different set of equations for this purpose (hydrostatic Boussinesq equations instead of Navier-Stokes equations). On the other hand, the numerical method both use is the same, finite volume, and the way we have CPU and GPU execution is using KernelAbstractions.jl in both cases too.


The title says "GPU vendor-agnostic". But in fact for AMD only professional (expensive) GPUs are supported (ROCm is officially unsupported on most consumer and integrated GPUs).

To be truly vendor-agnostic it needs to support OpenGL or Vulkan.

Also this is the first time I saw examples of Julia code and the syntax looks worse than C++.


You may be confusing front end APIs and the compiler backends.

Julia is flexible enough that you can essentially define domain specific languages within Julia for certain applications. In this case, we are using Julia as an abstract front end and then deferring the concrete interface to vendor specific GPU compilation drivers. Part of what permits this is that Julia is a LLVM front end and many of the vendor drivers include LLVM-based backends. With some transformation of the Julia abstract syntax tree and the LLVM IR we can connect the two.

That said we are mostly dependent on vendors providing the backend compiler technology. When they do, we can bridge Julia to use that interface. We can wrap Vulkan and technologies like oneAPI.

https://github.com/JuliaGPU/Vulkan.jl https://github.com/JuliaGPU/oneAPI.jl

As for syntax, Julia syntax scales from a scripting language to a fully typed language. You can write valid and performant code without specifying any types, but you can also specialize methods for specific types. The type notation uses `::`. The types also have parameters in the curly brackets. The other aspect that makes this specific example complicated is the use of Lisp-like macros, which start with `@`. These allow for code transformation as I described earlier. The last aspect is that the author is making extensive use of Unicode. This is purely optional as you can write Julia with just ASCII. Some authors like to use `∈` instead of `in`.
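
To make those pieces concrete, here is an illustrative snippet (not taken from the library):

    # `::` annotates types, `{T}` is a type parameter, `@` invokes a macro,
    # and the Unicode α could just as well be spelled `alpha`
    function scale!(a::AbstractArray{T}, α::T) where {T<:Number}
        @inbounds for i in eachindex(a)
            a[i] *= α
        end
        return a
    end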


Regarding syntax, I think there are too many weird character combinations (:., ==:, etc) like these:

    ex.head == :. && return union!(sym,[ex])
    start = ex.head==:(call) ? 2 : 1
    
Also, I don't like the |> combination because it is hard to type. Why not just use a single pipe character?


The colon is a bit overused (ranges, ternary, quoting), but this would be pretty clear with parentheses and spacing, i.e. ex.head == :(.) and ex.head == :(call). Now you can see it's a comparison against the symbols '.' and 'call'. Kind of like saying C has too many weird character combinations because there's a "--> operator".


Indeed, there are no special operators `:.`, `==:`, etc. They are just `:`, `.`, and `==`.

And macro definition code looks quite different from 'regular' code since it works so much with expressions and symbols.


> Also this is the first time I saw examples of Julia code and the syntax looks worse than C++.

For someone who writes both Julia and C++, the above comment comes across as an obscene joke.

Possibly, you object to the programming style in that library, the choice of identifiers or whatever? But that has nothing to do with language syntax.



> Also this is the first time I saw examples of Julia code and the syntax looks worse than C++.

That's surely an exaggeration, but just for context, most Julia code isn't nearly this macro-heavy. The first half of the final code showcase is all macros and expression manipulation, and those always look a bit weird. Usually, those comprise less than 10% of your code though; and if you're a regular user and not a package author, probably much less than that.


I would guess the majority of users never write a single line of macro definitions or expression evaluation.




