Edit: I'm surprised not to find anything when I search for projects that use Apache Arrow in Unity 3D. There seems to be a lot of interesting potential in leveraging various simulation libraries in game-like applications.
Edit 2: Ah, probably because something like WaterLily would have to be rewritten quite a bit to use Arrow.
It turns out that Julia is ~a lisp, just with a weird syntax. If you look at the metaprogramming facilities, all expressions are first turned into s-exprs while parsing. There is no problem having a LISP syntax for Julia, and in fact this has been implemented! (https://github.com/swadey/LispSyntax.jl)
Some have said Python is "enough" of a Lisp, but it's really not. Julia is much closer to being a true Lisp. It has macros, sure, but it's the overall feel and flexibility of the ecosystem that feels Lispy. At least as much as I've dabbled with Lisp and its history.
Have you tried running `julia --lisp`? That's a full-blown Femtolisp interpreter built right into the REPL! I also recommend playing with `Meta.show_sexpr` which can take any Julia expression and represent it as an S-expression.
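For example (REPL output from a recent Julia version; printing details may vary):

    julia> Meta.show_sexpr(:(f(x) + 1))
    (:call, :+, (:call, :f, :x), 1)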
After clicking through to the repository, I found this part a bit perplexing: "running on a GPU requires initializing the Simulation memory on the GPU, and care needs to be taken to move the data back to the CPU for visualization."
The original purpose of GPUs was visualization, so that seems backwards to me. And GLMakie is used, which makes it even more counter-intuitive; isn't that specifically built for GPU visualization?
You are right. Ideally we would like to keep everything in GPU memory of course. But we have not been able to render CuArrays with Makie yet, and I'm not sure if that is actually implemented, as you suggest. If so, it would be great to see an example of this :)
As a card-carrying Python-stack scientist who has worked at the intersection of machine learning and the physical sciences for the last decade, and who is now working on an R package (pro tip: go where the money is, don't bring the money to you), can someone make a convincing argument for me to learn Julia? I would like to hear more than the typical "the code auto-differentiates" or "it's faster" or whatever people have said in the past. I am really not trying to be flippant; I just don't see the added value of learning a new language unless it has interesting packages/functionality that my current toolset does not (e.g., this is why I am working on an R package).
I'll put it this way: I'm just an idiot engineer, not really a programmer, but I've written some blazingly fast code in Julia that would have taken me way, way longer to write and produced way, way slower results in the other languages I've played in.
I have to shout out Chris Rackauckas for being such a badass, helpful person too. He'll probably be in this thread any minute because he's the best damn advocate for Julia there is. :-)
Not so sure about that last part. He's definitely an incredible force in terms of coding, project management, stuff like that. The Julia package ecosystem wouldn't be half of what it is without him.
But "best damn advocate" is probably the opposite of how I'd describe him in terms of interactions here (and generally with people outside the core Julia circle). He very often comes across as dismissive, overly defensive, and passive-aggressive in comments here. All of that is dwarfed by his package contributions, tbh, in terms of impact on Julia. But still, probably half of the negative perception people have of the Julia community comes from reading these interactions.
One reason to keep an eye on Julia, if you are envisioning a long and varied career in the broader computational / data science world, is that it's not at all clear which will be the leading platform going forward (Python, R, Julia or something else altogether).
This ambivalence might sound absurd in the face of the exponential recent growth of Python (which has apparently enticed even some people with serious mojo to get into the act), but take two steps back with me and look at the big (if still hazy) picture:
We are going through a remarkable period where complex algorithmic applications have left academia and research labs and are diffusing into mainstream society and the economy like never before. This process carries enormous risks and opportunities, which are currently basically... ignored (well, the risk side).
Despite its undeniable strengths and lovability, Python is actually a poster child of the move-fast-and-break-things phase. It is not necessarily best placed for the next phase. The next phase will invariably see a re-examination of all aspects of the stack, and the qualities that will be prized will be those that eliminate the frictions and risks associated with the large-scale deployment of algorithms. The stakes are high, which means there will be plenty of resources seeking to create reliable platforms. The future need not look like the past.
None of the usual suspects ticks all the boxes. In fact, we don't even know all the boxes yet; it depends how fast and how seriously models and algorithms get deployed at scale. Python, Julia and R have been propelled forward by circumstances as the main algorithm-centric platforms, and they each have their various warts and blessings, but the near- and mid-term future will test how well they can deliver on aspects they may not have been designed for.
> it's not at all clear which will be the leading platform going forward (Python, R, Julia or something else altogether). This ambivalence might sound absurd in the face of the exponential recent growth of Python
It sounds absurd because trends don't reverse overnight. You can be fairly confident that Python will be the top language in this space for a while and that R will never be the top choice for most applications.
Irrespective of whether Julia ends up the winner of the shift, or whether the shift happens at all, it is quite possible for trends to reverse very fast. See Perl, or Objective-C for that matter.
In 2006 (for Perl) and 2014 (for Objective-C) it was clear they had the momentum in their particular spaces. However, their limitations were well known, and as soon as a better language came along the momentum flipped in an equally dramatic manner. Python is much more widespread, so it will remain strong in some areas, but you could see the flip in ML/DS given the challenges of productionizing across broad capabilities (not just doing NNs).
As the joke goes: Python is the second-best language for everything, if you only know two languages. With ML expanding beyond narrow big-tech domains, there will be a need for specialized languages like Julia (and perhaps others, like Mojo, etc.).
It was also boosted by Apple in the first place! So there's nothing natural about these kinds of trends. If Google and FB hadn't picked up Python for ML, it wouldn't have taken off as much, which is also to say that if they (or another large player) backed another language, you could see a similar decline in Python usage.
I think so too. In the short term (1-2 years at least) Python will gently move into the last stage of its adoption curve (even doing nothing).
But now is a time where at various high places people will say: "Ok you got my attention. What is this snake language you are talking about and explain why I should bet the house on it".
The corporate world will want to do $x because $x is in the news, they won't be making nuanced arguments about tradeoffs in a domain they don't understand. Least of all arguments that go entirely against the trends in the industry.
> society and the economy like never before. This process carries enormous risks and opportunities, which are currently basically... ignored (well, the risk side)
Take the entire stack (including all dependencies, toolchains, etc.) and think about scenarios of accidental or malicious malfunction, but also reproducibility, auditability of outcomes, that sort of stuff. The overall ability to provide locked-down, performant, safe, secure deployments of high-quality, validated algorithms without breaking the bank. In other words, the risks (but also the frictions/costs) in the "productionising" of algorithms.
I do get where you are coming from. Indeed, it makes little sense to use Julia for lots of machine learning when PyTorch and Jax are just so good. And it sounds like you don't want to use Julia, so who am I to try and convince you? Python/R are capable languages.
But, there are still reasons I reach for Julia.
Interesting packages where I prefer Julia over Python/R: Turing.jl for Bayesian statistics; Agents.jl for agent-based modelling; DifferentialEquations.jl for ODE solving.
I would much rather data-munge tabular data in Julia (DataFrames.jl) than Python, though R is admittedly quite nice on this front.
Personally, I reach for Julia when I want to use one of the previous packages, or for something I want to code up from scratch, where base Julia is much preferable (to me) to numpy.
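To illustrate the from-scratch point with a toy example of my own (not from any of those packages): plain loops are already fast in Julia, so you don't need to contort the code into vectorized numpy style.

    # Naive O(n^2) sum of pairwise distances, written as plain loops.
    function pairwise_sum(xs::Vector{Float64})
        s = 0.0
        for i in eachindex(xs), j in i+1:length(xs)
            s += abs(xs[i] - xs[j])
        end
        return s
    end

    pairwise_sum(rand(1000))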
Three reasons: Julia feels more like math, there's a huge long-term commitment to the language because it's used for climate modeling, and package management is completely painless.
I love Python, but I can also see eventually doing everything in Julia over the longer term. Mind you, it's entirely possible that AI continues to improve and in 5 years any package will be available in any language, you'll look at code mainly for verification purposes in whatever language you happen to prefer.
For me personally, I just think it's really fun to write Julia code. Granted, I'm in neither machine learning nor physical science, but the fact that I can go through the whole stack, choose an abstraction that's right for the problem at hand (Metaprogramming? Regular struct-based abstractions? External program? LLVM optimization? Inline assembly?), and still be able to understand what's going on while getting good performance, is just magical to me. Maybe that's not for everyone, but to me the ratio of dev time to run time is just really, really good.
I think the main idea behind Julia is to minimize the burden of doing the necessary but wasteful software engineering parts of scientific computation.
Let the language optimize more so you don't have to write a C++ library or figure out how to use it optimally. Don't waste as much time setting up your environment or worrying about platform compatibility. Don't worry about using multiple languages for different types of computation. And make the on-ramp fairly painless by being a convenient glue language.
It fills the gap of otherwise not having a managed and JITed language for general mathematical computation. If it's more burdensome for you to switch, then don't switch.
IMO, the biggest reason for me is that the code looks a lot more similar to the math than in python/R. This comes from a number of places (multiple dispatch, ability to use unicode symbols, you don't have to vectorize everything, etc), but the end result is code that looks a lot like the math you are trying to do (for examples, see https://discourse.julialang.org/t/from-papers-to-julia-code-...)
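A tiny made-up example of the point: with Unicode identifiers and no forced vectorization, an explicit Euler step can be written almost exactly as the formula xₙ₊₁ = xₙ + Δt·f(xₙ):

    f(x) = -2x                       # right-hand side of ẋ = f(x)
    euler(xₙ, Δt) = xₙ + Δt * f(xₙ)  # one explicit Euler step
    euler(1.0, 0.01)                 # 0.98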
If you work partially in the physical sciences and TFA doesn't entice you to try Julia (someone with no GPU programming experience implemented serial, parallel-CPU, and parallel-GPU Navier-Stokes solvers, all without touching C, C++, or Fortran, in mostly similar code size/LOC, achieving a 30x speedup), I can't imagine what would.
If you're writing code that is fundamentally based on mathematical principles and models, even if you aren't personally using mathematics every day, it's going to feel a lot better in Julia. That is: Julia looks a lot more like mathematics than Python.
__
Longer version:
Obviously some people are mostly writing websites or GUIs or whatever in Python and won't see the beauty in this.
But if the problems you are working on have, at their base, a mathematical foundation (even if you don't actively practice the math), it's much more beautiful IMO. So, simulation, data analysis/science and machine learning, statistics, etc...
Once you get used to using it for that, though, you'll realize it's actually quite nice for a lot of other things as well, and the "mathematical mindset" it somewhat pushes results in cleaner solutions for other problems too. Just in general, the syntax and patterns are nice.
Here are some quick things using randomness in Julia that would be a bit slower and more verbose in Python:
Generate a random number:
> rand()
Pick a random message:
> rand(["First message", "Hello", "Foo"])
Generate a random 3x3 matrix of booleans:
> rand(Bool, (3,3))
Define a function and run it elementwise on a random matrix of bools:
> myprint(x)= x > 0 ? "Happy" : "Sad"
> B = rand(Bool, (3,3))
> myprint.(B)
Returns:
> 3×3 Matrix{String}:
> "Happy" "Happy" "Sad"
> "Sad" "Happy" "Happy"
> "Sad" "Happy" "Happy"
And many, many more nice features... but the Julia design, where functions like rand() just apply how you expect regardless of the input type, is quite nice. rand(a list of strings) *should* give me a random string, and rand(a range of numbers) *should* give me a random number in that range! No one would write an academic paper and define a new rand function for each input because, well... it's clear what the user wants: rand of something.
If you have to solve mathematical programming/convex optimization problems, JuMP as a frontend for free or commercial solvers is hugely better than any alternative.
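For a taste, here's a minimal JuMP sketch of a small LP using the free HiGHS solver (any supported solver can be swapped in):

    using JuMP, HiGHS

    model = Model(HiGHS.Optimizer)
    @variable(model, x >= 0)
    @variable(model, y >= 0)
    @constraint(model, x + 2y <= 4)
    @objective(model, Max, 3x + 2y)
    optimize!(model)
    value(x), value(y)   # the optimal point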
Likewise, if you are solving differential equations, DifferentialEquations.jl is hugely better than any free alternative I know of, and arguably better than paid packages. The broader SciML ecosystem that's built up around this has a lot of cool stuff in it too.
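And a minimal DifferentialEquations.jl sketch, solving logistic growth with the automatic solver choice:

    using DifferentialEquations

    f(u, p, t) = u * (1 - u)                # du/dt = u(1 - u)
    prob = ODEProblem(f, 0.1, (0.0, 10.0))  # u(0) = 0.1 on t in [0, 10]
    sol = solve(prob)                       # algorithm picked automatically
    sol(5.0)                                # dense output: u at t = 5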
Other than this, it seems like you wouldn't care about the other potential advantages, and might be more put off than average by the disadvantages and occasional rough edges.
Julia is a good fit if you frequently want to develop and maintain publicly used functionality that requires writing some parts in a faster compiled language and then binding them to an interactive one like R or Python. Test coverage and multi-user maintenance are way easier when it's all just one language with a solid package manager.
Context: Coming from a statistics background, I learned a bit of R, then a bit of Python for data analysis/science, then found Julia as the language I invested my time in. Over time I keep up with R and Python enough to know what's different since I learned them, but don't use them daily.
What I always tell people is the following:
If you are writing code using existing libraries, then use whichever language has those libraries. The NN stack(s) in Python are great; the statistical ML stack(s) in R are simple and include SOTA techniques.
If you are writing a package yourself, then I assume you know the core of the idea well enough to be able to write your code from the "top down" i.e. you're not experimenting with how to solve the problem at hand, you're implementing something concretely defined.
In this case, and tailored to your use, I would argue that Julia has more advantages than disadvantages, especially compared to R or Python. Here are a few comments:
1. Environments, dependencies, and distribution can all be handled by Pkg.jl, the built-in package manager. There is no 3rd-party tool involved, and no disagreement in the community over which is better. This is my biggest pain point with Python.
2. Julia's type system both exists and is more powerful than that of Python (types or classes) and R (even Hadley's new S7(?) system). By powerful I mean generics/parametric types and overloading/dispatch built in. You can code without them, but certain problems are solved elegantly by them. Since working heavily with types in recent years, I find this to be my biggest pain point in R and I wouldn't want to write a package in R, although I like to use it as an end user.
3. New developments in scientific programming, programming ergonomics, hardware generic code (as in this post), and other cool features happen in Julia. New developments in statistics happen in R (and increasingly Julia), new developments funded by big companies happen in Python.
4. The Python and R interpreters start up faster than Julia's. The biggest problem here is when you are redefining types, which is the only thing in Julia that can't currently be "hot reloaded", i.e. you need to restart Julia to redefine types.
5. Working with tabular data is (currently) far more ergonomic and effortless in R than Python and Julia.
6. Plotting is not a solved problem in Julia. Plots.jl is pretty easy and pretty powerful; Makie.jl is powerful but very manual. Time to first plot is longer than in R or Python.
7. Julia has almost zero technical debt; R and Python have a lot. Backwards compatibility is guaranteed for Julia code written for v1.0 or later, and Pkg.jl handles package compatibility. If I send you code I wrote 4 years ago along with a Project.toml containing [compat] information, then you could run the code with zero effort. (This is the theory; in practice Julia programmers are typically scientists first and coders second, ymmv.)
8. You can choose how low level you want your code to be. Prototyping can be done in Julia, rewriting to be faster can be done in Julia, production code can be done in Julia. Translating Python to C++ for production might mean thinking about types for the first time in the dev process. In Julia, going to production just means making sure your code is type stable (see the sketch after this list).
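Here is that sketch, a minimal made-up example of points 2 and 8: a parametric type with dispatch, and @code_warntype as the quick type-stability check.

    # Point 2: a parametric struct; methods specialize per concrete T.
    struct Point{T<:Real}
        x::T
        y::T
    end

    norm2(p::Point) = p.x^2 + p.y^2            # generic over T
    describe(p::Point{Int}) = "integer point"  # dispatch on the type parameter
    describe(p::Point) = "generic point"

    describe(Point(1, 2))      # "integer point"
    norm2(Point(1.0, 2.0))     # compiled specialized for Float64

    # Point 8: inspect inferred types before calling code production-ready.
    using InteractiveUtils
    @code_warntype norm2(Point(1, 2))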
You can have a nice foreign function interface between R->Julia and Julia->R. If you're already happy pulling slow functions out into Rcpp, then maybe there's no speed benefit. But there are some very nice, very fast libraries in Julia, so if you have a tight inner loop, it could be worth looking into.
It reads and writes a lot like Python (but nicer, IMO); I don't think the learning curve is immense if you try it for small optimizations. And it's also not unreadable, so other people can verify your code.
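If you want to try that interop, here's a minimal RCall.jl sketch (calling R from Julia; the JuliaCall R package does the reverse direction):

    using RCall

    x = randn(100)
    R"y <- mean($x)"   # run R code with a Julia array interpolated in
    @rget y            # pull R's `y` back into Julia as a Float64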
I mean, I never would have signed up to develop an R package on my own. But I was in the right place at the right time to work on a funded project that is interesting, and it just happens to be in R. It's nice to learn a new tool (no matter what it is), but I would not have chosen R if it were my choice.
Moreover, I think sometimes people get their PhD and think they deserve to use the tools they put in their toolbox, on the problems they focused on, and don't see that all they really did was get a ticket to the game. Most scientists have a PhD. Most scientists don't work on the thing their PhD was about ten years later. The sooner you open up to that, the sooner you will get out of the postdoc chase and get a job that is a lot more rewarding (both intellectually and financially). All this means there may be problems you will learn about and focus on that you never thought you would, and being open to that and seeing it as an opportunity will carry you further than not.
I think it's fine if you don't learn Julia. When I was in university some of the course work had to be done in MATLAB. I think Julia could definitely be used instead nowadays. Simply being free is reason enough. You could argue that python/numpy would be an option as well.
Dope! Can anyone help me with the following? This has been 'floating' around in my head for nearly a decade:
Whales often have barnacles on the leading edges of their flippers, which results in an eddy effect that increases efficiency/thrust.
Da Vinci was the earliest known documenter of eddy-based pumps and predicted the eddies in the ventricular systems of the heart's pumping of blood...
What I would like to model is a toroidal propeller with leading edge bumps ('barnacles') while also having the dimpling pattern of a golf ball to reduce drag... and I want to measure if this idea holds water.
I just don't know how to model this using this tool...
So we look at the actions of the faster currents as the flipper cuts through, and where the current flows, such that you can direct micro-currents to the other bumps and manage overall flow... such that certain bumps 'feed' others...
and if they are malleable, and you can manage each...
Sooo, in terms of malleable wing sections, there's some neat research done by MIT that might be interesting to you [0]. There's also some newer work that popped up when I was searching for [0]; apparently the same researchers have been working on this idea for some time [1]. The idea is simple in concept: produce small building blocks and assemble large shapes from those building blocks. If you design your building blocks right and assemble them correctly, you can achieve some pretty impressive macro properties, including compliant mechanisms [2]. And then there's this 3D printer I stumbled across yesterday that prints on successive sheets of carbon fiber [3]. This could be relevant to the manufacturing process (they currently use injection molding in [1]?), as it would offer composite material strength at high speed (they claim 17 seconds for a relatively complex part).
Wonderful, thank you. So, when I said I have a really weird design idea:
-
The best arch for dividing a sphere is a hexagon.
The idea for dimpling comes from the honeycomb sandwich design patterns we use in structurally solid airplane components, coupled with a micro design for stem-cell research from someone at the University of San Francisco...
She was building a micro-printed 'injector' with which she would inject proteins into various stem cell pods.
She would then measure the various cells to see how she could get them to express a desired outcome...
The hex is from some top secret shit I saw back in the day...
SO... I am thinking that one can utilize the hex layout and, by slurping/pumping vacuum or hydro, manipulate the dimples on an interface... on a toroidal propeller in water, pressure is equalized in a certain way to allow live, dynamic prop deformations...
In a helo blade it has to be gas activated. But the material overlay has to be able to handle millions of deformations on an individual cell or a neighborhood of cells to reduce piping.
Leading-edge bumps inflate on the way out and release to trailing-edge bumps on exit...
I was providing information in case you didn't know this stuff was out there. If you're going to do some CFD, it might be useful to see what prior art there is.
Why use this tool? Why not simply build and test the propeller? Your results would be much more reliable - CFD is famously finicky, especially for a situation like a propeller where you have rotating flows.
The point of golf ball dimples is to act as vortex generators and improve flow attachment, so having both barnacles and dimples is redundant.
+1 to this. Though it has been many years, at one point I helped write CFD code for the US Navy. The disconnect between real-world measurements and simulations was vast for any structure remotely complicated (i.e., anything but an axially symmetric simple shape like a torpedo). While CFD code has gotten way better (largely via Moore's law), I expect propellers are still quite hard to simulate, and your best path is to build a model and see how it performs.
But the CFD isn't meant to replace building the model... it's to give you a head start and a reasonable starting point, and to find glaring errors. Otherwise you're going to be spending thousands and thousands running back to the wind tunnel every time you want to iterate.
Agreed, and that is exactly how we used it with the Navy - the CFD simulations narrowed the universe of options so we could be more efficient with the wind tunnels and water tanks. In this case, however, he has a specific shape that he wants to test. For that, going straight to a physical model makes sense.
Is building and testing an actual propeller going to be easier? I would have thought that setting up and running a simulation could be done in a moderate amount of hours, and could then be quickly iterated on. The only requirements are a laptop and an internet connection.
Building a model, on the other hand, could potentially involve a multi-year effort to re-educate yourself to learn how to build models, having to acquire hardware and materials, and setting up a lab for testing. And then building many different actual models.
Like JWST: "variable dimples and bumps via hydraulic mechanisms, for optimised flow". Now we have the hard part added in (dimple flexing), and there are more levels than that...
Trying to optimize across a bunch of different dimple/bump patterns is a great use case for CFD. However, you'd want to validate a CFD model like that against real world test data (either by running your own tests or by finding test data in the literature).
To be clear, I think adding vortex generators to boat propellers is a great idea. But to me the starting place would be building a physical propeller with vortex generators (even if they aren't the perfect vortex generators) and seeing if it has a noticeable impact on boat performance. That would justify the difficulty/time required to build a CFD model which you could use to optimize the design. (I'm saying this given that you don't have experience in CFD and it would be a large outlay of your own time to learn how to use it.)
No, that's backwards. Running a CFD simulation would involve a multi-year effort to re-educate yourself to learn how to build an accurate CFD model.
Building a physical model, on the other hand, merely requires 3D printing the desired shape, doing some sort of casting process to get that out of metal, putting it on a boat, and seeing if you get a noticeable performance impact.
How do you figure that, when the CFD software is already built and available, and the premise in the question is that the asker is already in a position to start using it? It's pretty clear that the background of the asker is aligned with a software approach.
> Building a physical model, on the other hand, merely requires 3D printing the desired shape, doing some sort of casting process to get that out of metal, putting it on a boat, and seeing if you get a noticeable performance impact.
I love how you sneak the word "merely" in there, when for people like me, and presumably the asker, this would be a Herculean task :D
>the premise in the question is that the asker is already in a position to start using it
That's an incorrect premise. The asker is clearly not in a position to use an off the shelf CFD package because they don't have the solid basis in fluid mechanics required to interpret the results.
I think maybe you/the asker are thinking of CFD like a tool that simulates fluids. That's not what it is. It's a set of approximations which, sometimes, are applicable to specific circumstances. Even LES, the most general tractable model, requires you to make informed assumptions about the boundary conditions.
> I love how you sneak the word "merely" in there, when for people like me, and presumably the asker, this would be a Herculean task
I'm sure building a physical prop would be challenging if you had no experience, but it is the sort of thing a lot of people can learn from YouTube and do in their garage.
What turbulence models do you have implemented? It's been ~ a decade since I played with CFD, wondering if DNS is computationally tractable these days.
There is no explicit LES model; this is known as implicit LES. All the additional dissipation required when running coarse meshes relies on the numerical dissipation of the implemented schemes.
Cavitation requires nothing special other than multi-phase flow, and implementing a VOF method is definitely on our roadmap. You can currently simulate the propeller without cavitation to get a feel for the unsteady flow solution, or wait for us to implement VOF.
Then you should use a different solver that fits your needs.
WaterLily is an incompressible flow solver that works in non-dimensional units (assuming constant unit density), so you can change the viscosity of the fluid by modifying the Reynolds number (with a set characteristic velocity and length scale).
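For concreteness, here's a rough sketch following the pattern in the WaterLily.jl README (exact signatures may differ between versions); note the Reynolds number enters only through ν = U·L/Re:

    using WaterLily

    function circle_sim(n, m; Re=250, U=1)
        radius, center = m/8, m/2
        # a signed-distance function defines the immersed body
        body = AutoBody((x, t) -> √sum(abs2, x .- center) - radius)
        Simulation((n, m), (U, 0), radius; ν=U*radius/Re, body)
    end

    sim = circle_sim(2^7, 2^6)
    sim_step!(sim, 1.0)   # advance to non-dimensional time t = 1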
This is slightly premature. They only tagged the release on GitHub several hours ago. While that does suggest the actual release is imminent, it's not really official until you see it announced here on Discourse:
There's a lot more than release notes going into a release - we've had 3 release candidates for a reason, and those regressions/bugs need to be fixed first.
I get it, but doesn't that suggest that just because release notes exist, it doesn't mean something has been released? Unless ksec has special inside info for his claim, I think his link doesn't show anything.
The release was just cut 9 hours ago, as shown on the releases part of the GitHub page (https://github.com/JuliaLang/julia/releases/tag/v1.9.0). That then starts the jobs for the creation and deployment of the final binaries; when that's done, the julialang.org website gets updated to state it's the release, and when that's done, the blog post for the new release goes out. You can even follow the last step of the process here (https://github.com/JuliaLang/www.julialang.org/pull/1875), since it all occurs in the open source organization.
This is cool but following some of the links it seems like there are a lot of immature parts of the ecosystem and things will not "just work". See for example this bug which I found from the blog post:
https://github.com/odsl-team/julia-ml-from-scratch/issues/2
Summarizing, they benchmark some machine learning code that uses KernelAbstractions.jl on different platforms and find:
* AMD GPU is slower than CPU
* Intel GPU doesn't finish / seems to leak memory
* Apple GPU doesn't finish / seems to leak memory
Would also be interesting to compare the benchmarks to hand-written CUDA kernels (both in Julia and C++) to quantify the cost of the KernelAbstractions layer.
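For context, a KernelAbstractions.jl kernel looks roughly like this (a sketch based on the package's basic documented usage; API details vary across versions). The same kernel body compiles for whichever backend the array lives on:

    using KernelAbstractions

    @kernel function mul2!(A)
        I = @index(Global)
        A[I] *= 2
    end

    A = ones(1024)
    backend = get_backend(A)       # CPU here; a CuArray would give a CUDA backend
    kernel! = mul2!(backend, 64)   # instantiate with workgroup size 64
    kernel!(A; ndrange=length(A))  # launch over the whole array
    KernelAbstractions.synchronize(backend)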
Oceananigans is used for climate modelling, and it uses a different set of equations for that purpose (the hydrostatic Boussinesq equations instead of the Navier-Stokes equations). On the other hand, the numerical method both use is the same, finite volume, and both achieve CPU and GPU execution through KernelAbstractions.jl.
The title says "GPU vendor-agnostic". But in fact for AMD only professional (expensive) GPUs are supported (ROCm is officially unsupported on most consumer and integrated GPUs).
To be truly vendor-agnostic it needs to support OpenGL or Vulkan.
Also this is the first time I saw examples of Julia code and the syntax looks worse than C++.
You may be confusing front-end APIs and compiler backends.
Julia is flexible enough that you can essentially define domain specific languages within Julia for certain applications. In this case, we are using Julia as an abstract front end and then deferring the concrete interface to vendor specific GPU compilation drivers. Part of what permits this is that Julia is an LLVM front end and many of the vendor drivers include LLVM-based backends. With some transformation of the Julia abstract syntax tree and the LLVM IR we can connect the two.
That said we are mostly dependent on vendors providing the backend compiler technology. When they do, we can bridge Julia to use that interface. We can wrap Vulkan and technologies like oneAPI.
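You can see that LLVM connection directly from the REPL:

    julia> f(x) = 2x + 1
    julia> @code_llvm f(1.0)   # prints the LLVM IR specialized for Float64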
As for syntax, Julia syntax scales from a scripting language to a fully typed language. You can write valid and performant code without specifying any types, but you can also specialize methods for specific types. The type notation uses `::`. The types also have parameters in the curly brackets. The other aspect that makes this specific example complicated is the use of Lisp-like macros, which start with `@`. These allow for code transformation as I described earlier. The last aspect is that the author is making extensive use of Unicode. This is purely optional, as you can write Julia with just ASCII. Some authors like to use `∈` instead of `in`.
The colon is a bit overused (ranges, ternary, quoting), but this would be pretty clear with parentheses and spacing, i.e. ex.head == :(.) and ex.head == :(call). Now you can see it's a comparison against the symbols '.' and 'call'. Kind of like saying C has too many weird character combinations because there's a "--> operator".
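For the curious, the main colon uses side by side (the last two lines check the expression heads mentioned above):

    1:5                           # a range
    3 > 0 ? "pos" : "neg"         # ternary
    :call                         # a Symbol literal
    :(f(x)).head == :call         # true: a function call expression
    :(a.b).head == Symbol(".")    # true: field access has head `.`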
> Also this is the first time I saw examples of Julia code and the syntax looks worse than C++.
That's surely an exaggeration, but just for context, most Julia code isn't nearly this macro-heavy. The first half of the final code showcase is all macros and expression manipulation, and those always look a bit weird. Usually, those comprise less than 10% of your code though; and if you're a regular user and not a package author, probably much less than that.