
In terms of performance, the way we build applications today sets such a low bar that IMO it opens the door for functional programming. Even if it is not as fast as C or raw assembly - if it is significantly faster than Electron, but preserves the developer ergonomics... it can be a win for the end user!

I created an Electron (TypeScript/React) desktop application called Onivim [1] and then rebuilt it for v2 in OCaml / ReasonML [2] - compiled to native machine code. (And we built a UI/Application framework called Revery [3] to support it)

There were very significant, tangible improvements in performance:

- Order of magnitude improvement in startup time (time to interactive, Windows 10, warm start: from 5s -> 0.5s)

- Less memory usage (from ~180MB to <50MB). And 50MB still seems too high!

The tooling for building cross-platform apps on this tech is still raw & a work-in-progress - but I believe there is much untapped potential in taking the 'React' idea and applying it to a functional, compile-to-native language like ReasonML/OCaml for building UI applications. Performance is one obvious dimension (the compiler's Flambda [4] variant helps here); but we also get benefits in terms of correctness - for example, compile-time validation of the 'rules of hooks'.
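To make the 'React' idea concrete, here is a minimal sketch in plain OCaml (hypothetical types, not Revery's actual API): the UI is a pure, typed function from state to a widget tree, so the compiler checks it statically.

    (* A toy model of "React in a typed functional language":
       the view is a total, side-effect-free function, with types
       and pattern exhaustiveness verified at compile time. *)
    type widget =
      | Text of string
      | Column of widget list

    type state = { count : int }

    let view (s : state) : widget =
      Column [ Text "counter"; Text (string_of_int s.count) ]

    let () =
      (* A toy "renderer" that just prints the tree. *)
      let rec render = function
        | Text t -> print_endline t
        | Column ws -> List.iter render ws
      in
      render (view { count = 1 })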

- [1] Onivim v1 (Electron) https://github.com/onivim/oni

- [2] Onivim v2 (ReasonML/OCaml) https://v2.onivim.io

- [3] Revery: https://www.outrunlabs.com/revery/

- [4] Flambda: https://caml.inria.fr/pub/docs/manual-ocaml/flambda.html




They already said they were working in games. None of what you said applies to that field.


I have a suspicion this is only semi-true.

For controlling what the CPU and RAM are doing? Yes. The graphics shader, on the other hand, is a pipeline architecture with extremely tight constraints on side-effects. That shader languages are procedural seems to me more an accident of history and association than optimal utility, and the most common error I see new shader developers make is thinking that C-style syntax implies C-style behaviors (like static variables or a way to have a global accumulator) that just aren't there.

The way the C-style semantics interface to the behavior of the shader (such as shader output generated by mutating specifically-named variables) seems very hacky, and smells like abstraction mismatch.
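To put that in miniature (a toy sketch in plain OCaml rather than a shader language): a fragment shader behaves like a pure function evaluated independently per pixel, so the static locals and global accumulators that the C-style syntax suggests simply don't exist.

    (* Toy model: each fragment is computed independently; there is
       no state shared between invocations to mutate. *)
    type colour = { r : float; g : float; b : float }

    let fragment ~u ~v = { r = u; g = v; b = 1.0 -. u }

    let () =
      (* The "pipeline" simply maps the pure function over pixels. *)
      for y = 0 to 1 do
        for x = 0 to 1 do
          let c = fragment ~u:(float_of_int x) ~v:(float_of_int y) in
          Printf.printf "(%.1f %.1f %.1f) " c.r c.g c.b
        done
      done;
      print_newline ()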


Not exactly shaders, but for GPGPU stuff, Futhark [0] seems to show that a functional paradigm can produce performant and readable code.

[0] https://futhark-lang.org/index.html
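For flavour, here is the functional data-parallel style in plain OCaml (not Futhark's actual syntax): a dot product written as map-then-reduce, the kind of composition a compiler like Futhark fuses and parallelises on the GPU.

    (* Dot product as a composition of map and reduce. *)
    let dot (xs : float array) (ys : float array) : float =
      Array.map2 ( *. ) xs ys |> Array.fold_left ( +. ) 0.0

    let () =
      Printf.printf "%f\n" (dot [| 1.0; 2.0; 3.0 |] [| 4.0; 5.0; 6.0 |])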


> is a pipeline architecture with extremely tight constraints on side-effects

That was true 10 years ago. Now they're just tight constraints, but not extremely so: there are append buffers, random-access writeable resources, group shared memory, etc.

> The way the C-style semantics interface to the behavior of the shader seems very hacky

I agree about GLSL, but HLSL and CUDA are better in that regard, IMO.


I would say "real-time graphics" is one of the niches FP is not well suited for; most business software doesn't need to work at the level of the machine.


There is certainly prior art for complex games running smoothly in Haskell: https://wiki.haskell.org/Frag

This particular solution used functional reactive programming: essentially a composition of signal/event-processing functions (automata).
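The core encoding is small enough to sketch (in OCaml here rather than Haskell): a signal function is an automaton that maps one input sample to one output sample plus its own continuation, and these compose like ordinary functions.

    type ('a, 'b) sf = SF of ('a -> 'b * ('a, 'b) sf)

    (* Lift a pure function; compose two signal functions. *)
    let rec arr f = SF (fun a -> (f a, arr f))

    let rec compose (SF f) (SF g) =
      SF (fun a ->
          let b, f' = f a in
          let c, g' = g b in
          (c, compose f' g'))

    (* A stateful automaton: running sum of its inputs. *)
    let rec sum acc = SF (fun x -> (acc +. x, sum (acc +. x)))

    let () =
      let rec run (SF step) = function
        | [] -> ()
        | x :: xs ->
            let y, k = step x in
            Printf.printf "%.1f " y;
            run k xs
      in
      run (compose (arr (fun x -> 2.0 *. x)) (sum 0.0)) [1.0; 2.0; 3.0];
      print_newline ()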


Ten years ago, that was the only substantial game written in Haskell. That you're citing that same game now is a bit telling.

Note the upload date:

https://www.youtube.com/watch?v=0jYdu2u8gAU


Ok here's a talk about making Haskell games that took place last week: https://keera.co.uk/blog/2019/09/16/maintainable-mobile-hask... I don't deny that making games in Haskell is niche, but it's certainly possible. Frag was just an example I remembered (ten years is recent for an old git like me).


If I remember correctly, in that thesis the author mentioned explicitly that the game didn't run very fast. If you watch the video from 2008, the in-game stats list framerates >60fps but the game itself is very laggy. Maybe there is a separate renderer thread?


Ironically the first CAD workstations were developed in Lisp, and Naughty Dog is famous for their Lisp/Scheme based engines.


Here's a talk on making real world commercial games with Clojure on top of Unity.

https://www.youtube.com/watch?v=LbS45w_aSCU


I think you are seriously overselling the talk, and what Arcadia is ready for.

you: Here's a talk on making real world commercial games with Clojure

video: dozens of game jam games have been made


Come on, the "games" showcased here have the complexity of a circa-2003 game, and they barely achieve 200 fps on modern hardware. When I look at similarly trivial things run with no vsync on my machine, it's >10000 fps.


That's just moving goalposts. The games showcased are the same complexity as plenty of real-world commercial games that are making good money in 2019. If you're doing triple-A game development, maybe you need to get down to the metal, but for tons of games you'll be perfectly fine with FP.

Also worth noting that the idea is to use FP around stuff like the actual game logic, and then handle rendering details imperatively.
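A minimal sketch of that split (hypothetical names, no engine assumed): the simulation advances through a pure step function, while rendering lives at a thin imperative boundary.

    type state = { x : float; vx : float }

    (* Pure game logic: trivially testable and replayable. *)
    let step dt s = { s with x = s.x +. s.vx *. dt }

    (* Imperative edge: a real game would issue draw calls here. *)
    let render s = Printf.printf "x = %.3f\n" s.x

    let () =
      let rec loop s = function
        | 0 -> ()
        | n -> render s; loop (step 0.016 s) (n - 1)
      in
      loop { x = 0.0; vx = 1.0 } 3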


> The games showcased are the same complexity as plenty of real-world commercial games that are making good money in 2019

I mean, fucking todo apps are making "good money" in 2019; it does not mean they are good examples. These kinds of presentations should improve on the state of the art, not content themselves with something that was already possible a few decades ago. No one gets into game dev to make money; the point is to make better things than what exists - be it gameplay-wise, story-wise, graphics-wise...


The Poker prototype could be from 30 years ago, and it drops to 15 FPS on any game action! Arcadia is a neat toy at this point, but run far away if you are looking to do real-world commercial development.


Even assuming that that's true (and it very well may be), the general topic wasn't games, and there are many places where "the norm" in programming as a whole differs from the norm in performance-sensitive areas.


Brilliant. Haskell stood outside the door not until it was good enough to be an industry standard, but until industry standards dropped so low that it became competitive!


Hey, and good luck with Revery :) I am doing something very similar, but I wouldn't ever consider any GC language for the low-level part.

I want to write UI in JavaScript because it's a really nice language for prototyping, but I also want it to be fast, and JavaScript is unpredictable. Now, this might not be the case with OCaml, but no matter what optimizations your compiler (or JIT interpreter) can do, you're still living in a lie: it's still some abstraction which is going to leak at some point.

I've recently removed quite a lot of Rust dependencies (wrappers) and the speedup is very noticeable; abstractions always come with a cost, and you can't just pretend you're living in a rainbow world.

BTW: you're not going to get much lower than 50MB. Cocoa has some overhead (10MB IIRC), Node does too (20MB), and the OCaml GC needs some heap as well. If you have any images, you need to keep them somewhere before sending them to the GPU, and a GC, to be fast, needs to keep some memory around so that allocations are faster than plain malloc.

BTW2: in the Rust world, it's common to see custom allocators and data-oriented programming because the cost starts to get noticeable, and this is hard to do if you can't reason about memory.
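Data-oriented layout in miniature (shown in OCaml here rather than Rust): a struct-of-arrays keeps each field contiguous in memory, which is kinder to the cache than an array of records.

    type particles = {
      xs  : float array;  (* positions: unboxed, contiguous *)
      vxs : float array;  (* velocities *)
    }

    (* Tight loop over one contiguous field at a time. *)
    let step dt p =
      Array.iteri (fun i x -> p.xs.(i) <- x +. p.vxs.(i) *. dt) p.xs

    let () =
      let p = { xs = [| 0.0; 1.0 |]; vxs = [| 1.0; 1.0 |] } in
      step 0.016 p;
      Printf.printf "%.3f %.3f\n" p.xs.(0) p.xs.(1)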

If anyone is interested, here's the repo: https://github.com/cztomsik/graffiti


I couldn't have said it better myself. Thx for those links! And, yes, the compiler's flambda variant is an exquisite delight.


Is it fast because it's native and typed, or fast for other reasons? The speed hierarchy I've found goes: dynamic types w/ GC = 8x slower than C; static types w/ GC = 3x slower than C; static types w/ hybrid memory management like reference counting = 2x slower than C.


Does Revery use a native toolkit (WinForms, GTK, etc.) or is it also a webview like Electron?

I've seen a couple of GUI toolkits in Rust following the Elm architecture, and I think it's an amazing idea. It would be great if I were able to create apps like this using something like Qt behind the scenes.
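For anyone unfamiliar, the Elm architecture fits in a few lines (a hypothetical sketch in OCaml, not any particular toolkit's API): all state changes flow through one pure update function.

    type model = { count : int }
    type msg = Increment | Decrement

    (* The only way state evolves: a pure function of message and model. *)
    let update msg m =
      match msg with
      | Increment -> { count = m.count + 1 }
      | Decrement -> { count = m.count - 1 }

    let () =
      let final =
        List.fold_left (fun m msg -> update msg m)
          { count = 0 } [ Increment; Increment; Decrement ]
      in
      Printf.printf "count = %d\n" final.count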


Revery does custom rendering on the GPU, just like Flutter & SwiftUI (and graffiti).



