
See the about section: https://github.com/GetFirefly/firefly#about-firefly

> The primary motivator for Firefly's development was the ability to compile Elixir applications that could target WebAssembly, enabling use of Elixir as a language for frontend development. It is also possible to use Firefly to target other platforms as well, by producing self-contained executables on platforms such as x86.




Great! Nice. Very neat possibilities here.

I'd be very interested to hear a follow-up on how Firefly does actors. There are so many potential ways to target Wasm, but the high-concurrency spirit of BEAM has such a unique flavor. I'd love to read in and hear that that spirit is well preserved.


There are details on this also: https://github.com/GetFirefly/firefly#runtime

Generally it should be assumed that actors and their concurrency model are fully supported, as that is part of the core semantics of BEAM languages.


> as that is part of the core semantics of BEAM languages.

It's a part of the semantics of the runtime:

- an actor is all but guaranteed to not bring down the runtime

- an actor is all but guaranteed to never affect other actors

- the runtime knows how to put a process to sleep until a message it's waiting for arrives. This means all functions are re-entrant. Also, any process is put to sleep after a certain number of reductions, so that no process takes time away from other processes.

- the runtime all but guarantees that process errors are a) isolated and b) propagated. That is, when a process dies, all other processes that monitor it are guaranteed to receive a notification. That's why supervision hierarchies in Erlang are possible.
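As a minimal illustration of the last point (a sketch using the bare `spawn_monitor` primitive rather than OTP supervisors):

```elixir
# The spawned process crashes; the crash is isolated to it, and the
# monitoring process receives a :DOWN message with the exit reason.
{pid, ref} = spawn_monitor(fn -> exit(:boom) end)

receive do
  {:DOWN, ^ref, :process, ^pid, reason} ->
    IO.puts("monitored process died: #{inspect(reason)}")
end
```

Supervisors build on exactly this mechanism (via links and `:trap_exit`) to restart children when they are notified of a death.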


> - the runtime knows how to put a process to sleep until a message it's waiting for arrives. This means all functions are re-entrant. Also, any process is put to sleep after a certain number of reductions, so that no process takes time away from other processes.

This paragraph seems confused; re-entrancy doesn't have much to do with sleeping. Do you perhaps mean to say something about preemption? BEAM/ERTS is not really preemptive: a process can only be suspended at specific places, but one of those places is function calls (aka reductions), and BEAM languages don't offer looping constructs other than recursion, so it's hard to go for very long without calling a function. That makes it effectively/semantically preemptive, unless there are naughty things in NIFs you brought in, or in the native code provided by ERTS.

Processes being descheduled after a while doesn't mean they don't take time away from other processes: if you have one CPU and one process running an infinite loop, adding a second process takes CPU time away from the first, but both will get some time (if they're set to the same priority).
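A rough sketch of that scheduling behavior (illustrative only; on a multi-core machine you'd want to pin the VM to one scheduler, e.g. `elixir --erl "+S 1"`, to see the effect clearly):

```elixir
# A tight recursive loop: there is no explicit yield, but every call
# counts as a reduction, so the scheduler can suspend the process.
defmodule Spin do
  def loop, do: loop()
end

spawn(fn -> Spin.loop() end)

# The spinning process is descheduled once its reduction budget is
# spent, so this second process still gets to run.
parent = self()
spawn(fn -> send(parent, :still_scheduled) end)

receive do
  :still_scheduled -> IO.puts("second process ran")
end
```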


> It's a part of the semantics of the runtime:

This is implied. It would be absurd to have different static and runtime semantics; in fact, the core goal of formal methods is to be able to reason statically about runtime dynamics.

Just as it would be absurd to build a compiler for C where "+" is in fact treated as "-", it would be absurd to build a compiler and runtime system for, e.g., Elixir that is not able to execute GenServers.


> This is implied.

You'd be surprised how many people miss that the runtime drives this. I've seen many discussions where people claimed "you can implement all this in a library" :)

> it would be absurd to build a compiler and runtime system for, eg., Elixir that is not able to execute GenServers.

Well, Akka did it on top of JVM: https://doc.akka.io/docs/akka/current/typed/fault-tolerance.... Can't say about its limitations though.


> You'd be surprised how many people miss that the runtime drives this. I've seen many discussions where people claimed "you can implement all this in a library" :)

I understand; concurrency in general is definitely not easy to implement in "user land" and is something you want good primitives for, which is why I'm also amazed by this project, as they must have embedded that functionality (the scheduler) in the executable (which they also say they did).

> Well, Akka did it on top of JVM ...

> Akka is a toolkit for building highly concurrent, distributed, and resilient message-driven applications for Java and Scala.

Akka does not seem to claim that it built a new runtime for BEAM languages?


I would argue it's intrinsic to the language itself. In fact, I did, as a pedagogical tactic.

https://youtu.be/E18shi1qIHU


You can't have those things without runtime support. E.g., in Go or Rust a panic will kill the app. In Erlang, an equivalent catastrophic failure in a process will kill the process and notify the monitoring processes; the app will keep on running.
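A quick sketch of the contrast (illustrative): in Elixir, an unhandled raise in an unlinked process kills only that process:

```elixir
# The raise crashes only the spawned process; an error report is
# logged, but the current process keeps running untouched.
pid = spawn(fn -> raise "boom" end)

Process.sleep(100)
IO.puts("crashed process alive? #{Process.alive?(pid)}")
IO.puts("we are still running")
```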


Also, the BEAM bytecode compiler and runtime are incredibly slow and unoptimized. It doesn't matter much for process handling and IO-dominant workloads, but you would not want to run it for compute-heavy tasks.

An AOT compiler with better optimizations will run circles around BEAM on benchmarks.


Thankfully it is not slow! Last I measured, it and its immutable data structures would beat Scala's immutable data structures in benchmarks.


Scala is pretty slow, though :-).


> Also the BEAM bytecode compiler and runtime are incredibly slow and unoptimized

Can you provide any citations for the BEAM runtime being unoptimised? In my experience it has been very carefully optimised over many years, generally prioritising latency over throughput.


Citations? I've looked at it, being a maintainer of several fast and slow VMs myself.

I don't see careful optimizations; it's rather sloppy. More like Perl and Python, unlike Lua, PHP, or V8.


It's widely known that it's IO-focused and unoptimised around, say, maths use cases, as it was built to be a zero-maintenance telecoms platform.


> zero maintenance telecoms platform

More like zero downtime maintenance.


I'm more of an observer, since I'm not actively using Elixir or Erlang right now. I read that BEAM now supports JIT compilation. Doesn't this solve the performance issues for the most part?

EDIT: Apparently not LLVM JIT but that's beside the point.


LLVM? Pretty sure they scrapped that for being slow, BeamAsm is a JIT written from scratch.

Edit: It actually uses part of AsmJit, not quite from scratch, my mistake.


Updated my comment


BEAM does have a JIT on some platforms (IIRC, amd64 and aarch64), but it's not an optimizing JIT like you might be familiar with from Java's HotSpot and similar systems.

In BeamAsm, the design is for the whole VM to be either interpreted (the status quo) or native (JIT). In JIT mode, all loaded modules are translated to native code as they're loaded; this needs to be fast or startup times are delayed. IIRC, there is an optimization pass, but it's simple. There's no reoptimization of hot code paths later either; it's just a one-time process.

The main benefit of this process is to remove a specific part of interpretation overhead: instruction unpacking and dispatch are eliminated. This can be significant for some applications and not for others, but it's really the main target; any other optimizations that happen are a bonus.


It does JIT with AsmJit (not LLVM).


Updated my comment



