How we got to LiveView (fly.io)
802 points by klohto on Sept 22, 2021 | 283 comments



We use Phoenix and LiveView to power all of our non-video interactions on Glimesh.tv[0], and the out-of-the-box features and performance are unmatched. LiveView allowed us to build a completely real-time channel page where streamers can edit their metadata (game, title, viewer count, etc.) and all of the viewers see it update in real time. Not to mention we implemented a distributed chat system that sends message updates in real time to both browser clients and API clients. Both of these features combined amount to less than 1000 lines of code and "just work" across multiple web nodes.
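For a rough sense of how little code that takes, here's a minimal sketch of the Phoenix.PubSub pattern behind this kind of cross-node fan-out (module and function names are hypothetical, not the actual Glimesh code; get_channel!/1 stands in for a context function):

    defmodule GlimeshWeb.ChannelLive do
      use Phoenix.LiveView

      def mount(%{"channel_id" => id}, _session, socket) do
        # Every connected viewer subscribes to the channel's topic.
        if connected?(socket), do: Phoenix.PubSub.subscribe(Glimesh.PubSub, "channel:#{id}")
        {:ok, assign(socket, channel: get_channel!(id))}
      end

      # Whoever edits the metadata broadcasts, e.g.:
      #   Phoenix.PubSub.broadcast(Glimesh.PubSub, "channel:#{id}", {:metadata_updated, channel})
      # Every subscribed LiveView, on any node, then re-renders its diff.
      def handle_info({:metadata_updated, channel}, socket) do
        {:noreply, assign(socket, channel: channel)}
      end
    end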

It can be daunting to jump into a world as strange as LiveView can look (Elixir syntax, OTP terminology, etc.), but honestly, once you dig in deeper, everything just makes sense. LiveView (and HEEx) continue to be easy-to-understand abstractions on top of the rock-solid OTP platform. It's a joy to build real-time applications with, and I very much appreciate the "developer experience" focus both Chris & Jose have for us Elixir devs!

I'm excited for the launch of Phoenix 1.6. HEEx is shaping up to be a complete replacement for the traditional SPA + backend API, and using one consistent language for your full stack is really freeing & powerful, especially for small teams!

[0] https://github.com/Glimesh/glimesh.tv/


Are you using LiveView for everything related to navigation too? For example, when you transition between pages, is that happening through LiveView?

In any case, I'd love to chat with you on my podcast about how you built and deploy Glimesh, if you're interested. It's at https://runninginproduction.com/; there's a "become a guest" button in the nav bar on the top right if you want to schedule a call to be on the show.


Yeah, the channel / category pages are all "live patched" generally, which means the user can navigate around without incurring a full HTTP roundtrip. Some pages like the account settings are "dumb views" though and do not offer the same advantages.
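For anyone unfamiliar, a minimal sketch of the "live patch" flow (helper and function names assumed): the patch updates the URL and re-invokes handle_params on the already-connected socket, so there's no fresh HTTP request or remount.

    # In the template:
    #   <%= live_patch "Games", to: Routes.live_path(@socket, ChannelsLive, category: "games") %>

    # In the LiveView; list_channels/1 is a stand-in for your own query:
    def handle_params(%{"category" => slug}, _uri, socket) do
      {:noreply, assign(socket, channels: list_channels(slug))}
    end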

Thanks for the offer to be on the podcast, I'll check it out!


I've used a similar technique with HTMX[1]. General navigation just patches the main "content" section, but actions where it's necessary to replace the entire page (e.g. login/logout) just do a full page "dumb" refresh.

[1] https://htmx.org


There's also a Clojure web setup based on htmx: https://whamtet.github.io/ctmx/


Nick! Completely unrelated to this discussion but I just wanted to say I've bought a few of your courses and love your work. What a pleasant surprise to run into you on HN.


Hi! Thanks a lot, I really appreciate it.


Slight difference from stock phoenix but yes. Only area I’m not 100% on is auth. https://hexdocs.pm/phoenix_live_view/live-navigation.html


Right, the normal way for Phoenix LiveView apps is to make the registration / login exist inside your regular controllers / actions since a LiveView cannot directly modify your cookies or global session (with some caveats).
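A sketch of that usual split in the router (names assumed): session endpoints stay plain controllers because they need to write cookies, while everything else is live.

    scope "/", MyAppWeb do
      pipe_through :browser

      # Plain controller actions: these can write the session cookie.
      post "/login", SessionController, :create
      delete "/logout", SessionController, :delete

      # LiveViews read the session at mount but can't write cookies mid-flight.
      live "/dashboard", DashboardLive
    end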


Hey great work on Glimesh! I stopped streaming for a while and when I came back hitbox.tv was no more. So I went searching and comparing providers and ended up selecting Glimesh, and I've been really happy with it. I didn't realize you were using Phoenix, but the snappiness makes sense now!


Creator of Phoenix here. I'm happy to answer any questions folks have about LiveView, Phoenix, or Elixir in general. We've had some big LiveView features land recently with uploads and HEEx so now's a great time to jump in!


Almost every time I see a discussion about LiveView there’s someone complaining about the issue of latency/lag, and how it makes LiveView unsuitable for real-world applications.

From what I understand, the issue is that every event that happens on the client (say, a click) has to make a roundtrip to the server before the UI can be updated. If latency is high, this can make for a poor user experience, the argument goes.

As the creator of LiveView, what’s your take on this? Is it a real and difficult-to-solve issue, or do people just not see "the LiveView way" of solving it?

I think LiveView looks amazing, but this possible issue (in addition to chronic lack of time) has made me a little unsure of whether it’s ready to use for a real project.

Thanks for creating Phoenix, btw!


These kinds of discussions miss a ton of nuance unfortunately (as most tech discussions do), so hopefully I can help answer this broadly:

First off, it's important to call out that LiveView's docs recommend folks keep purely client-side interactions purely on the client: https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.html#m...

> There are also use cases which are a bad fit for LiveView:

> Animations - animations, menus, and general UI events that do not need the server in the first place are a bad fit for LiveView. Those can be achieved without LiveView in multiple ways, such as with CSS and CSS transitions, using LiveView hooks, or even integrating with UI toolkits designed for this purpose, such as Bootstrap, Alpine.JS, and similar

Second, it's important to call out how LiveView will beat client-side apps that necessarily need to talk to the server to perform writes or reads, because we already have the connection established, there's less overhead on the other side since we don't need to fetch the world, and we send less data as the result of the interaction. If you click "post tweet", whether it's LiveView or React, you're talking to the server, so there's no more or less suitability there compared to an SPA.

I had a big writeup about these points, along with LiveView's optimistic UI features, on the DockYard blog for those interested in this kind of thing:

https://dockyard.com/blog/2020/12/21/optimizing-user-experie...


Thanks for the pointers and insights. I’ve been reading up on this tonight (local time), and this whole issue seems to be mostly a misconception.

Between things like phx-disable-with and phx-*-loading, and the ability to add any client-side logic using JS, there don't really seem to be any limitations compared to a more traditional SPA using (for example) React and a JSON API.
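For example, the loading states mentioned above need no custom JS at all; a minimal sketch:

    <%# While the submit round-trips, LiveView disables the button and
        swaps its text; the phx-submit-loading class lets CSS style the rest. %>
    <form phx-submit="save">
      <button type="submit" phx-disable-with="Saving...">Save</button>
    </form>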

I hope I haven’t added to the confusion about this by bringing it up, I was just very curious to hear your thoughts on it.


I think the big difference is that with React a lot of interactions can be completed entirely client side, with the server-side component happening only after the fact (asynchronously).

I’ll grant you that that isn’t often the case, and recovering from inconsistencies is pretty painful, but I can see how people would go for that.

I kind of like the idea I can just build all my code in one place instead of completely separate front and back-end though.


> LiveView will beat client-side apps that necessarily needs to talk to the server to perform writes or reads because we already have the connection established and there's less overhead

Don't modern browsers already share a TCP connection for multiple queries implicitly?


Yeah. The overhead I see that's being reduced from a performance point of view is the server not needing to query the session/user information on every message, compared to ajax. That's true for websockets in general. And then the responses might be slightly smaller because it is an efficient diff with just the changed information.


There's a funny story here. We created Fly.io, Chris created Phoenix. We met earlier this year and realized we'd accidentally built complementary tools. The pithy answer is now "just deploy LiveView apps close to users". If a message round trip (over a pre-established websocket) takes <50ms, it seems instantaneous.

This means moving logic to client side JS becomes an optimization, rather than a requirement. You can naively build LiveView and send stuff you shouldn't to the server, then optimize that away by writing javascript as your app matures.

What I've found is that I don't get to the optimize with JS step very often. But I know it's there if I need it.


How would you exactly 'optimize with JS'? Do you think this optimization can be done to the extent of enabling offline experiences? Might not be full functionality, but bookmarks/saved articles, for example.


Lots of answers here including one from Chris McCord himself, but I'll offer my take based on my professional experience developing web apps (though I've never used Phoenix professionally):

A large majority of businesses out there start off targeting one region/area/country. The latency from LiveView in this scenario is imperceptible (it's microseconds). If these businesses are so lucky as to expand internationally, they are going to want to deploy instances of their apps closer to their users regardless of whether or not they are using LiveView.

LiveView could be a huge help to these startups. The development speed to get a concurrent, SPA-style app up and running is unparalleled, and it scales really well. My guess would be that the people who are worried about the latency here (which is going to exist with any SPA anyway) are the ones who are developing personal pages, blogs, educational material, etc. that they are hoping the world is going to see out of the gates. In this case, LiveView is not the answer!!! And as I've stated elsewhere 'round here, LiveView does not claim to be "one-size-fits-all". If latency really IS that big of a concern, LiveView is not the right choice for your app. But there really is a huge set of businesses that could really benefit from using it, either because they are a start-up focused on a single area/region/country, or because they are already making tons of money and can easily afford to "just deploy closer to their users" and could benefit from LiveView's (and Phoenix's) extreme simplicity.


Pretty much this. Also, I’m not sure most people realise how incremental LiveView can be. You can use it for a little widget on any page and later swap in a react component if you truly need one (which most apps probably don’t).

It’s not designed to run the NY Times. But it is a super useful tool that will benefit a ton of apps out there.


Is microseconds correct? Even with a good connection in online games I’ve only seen ping latencies of 3ms or so, and a more common range on an average connection is 20ms-50ms.


Should be, though mileage may vary, of course. I'm having trouble finding a better example, but https://elixir-console-wye.herokuapp.com/ is made in LiveView. You can try it out and see what you get (I have no idea where it's deployed; it's a phoenixphrenzy.com winner, and there's plenty more there to browse through). Its payloads are a bit larger than some typical responses I have in my personal apps, and I'm seeing 1-2ms responses in Toronto, Canada (Chrome devtools doesn't show greater precision than 0.001 for websocket requests).


ms is milliseconds


Yep, which is what I meant in my comment you're replying to (as per my statement that devtools only report to the 0.001). But as pointed out by jclem, I'm probably wrong about microsecond response times anyway. I'm very likely thinking about the MICROseconds I see in dev, which of course doesn't count :) But with the heroku link above, I am seeing as low as 1-3 MILLIseconds in Toronto, Canada.


One light-microsecond is about 300 meters; this must be milliseconds.

Edit: Just saw that this was already pointed out. Apologies, didn’t mean to pile on.


I pointed out below that I actually DID mean microseconds but likely skewed by times I was seeing in dev. Hopefully it does not take away from my point that response times are still imperceptible when in roughly the same region (I'm seeing 1-3 milliseconds in the heroku-hosted LiveView app I linked below).


For a lot of the LiveView applications that I write (which is actually quite a few these days), I will usually lean on something like AlpineJS for frontend specific interactions, and my LiveView state is for things that require backend state.

For example, if I have a flag to show/hide a modal to confirm a resource delete, the show/hide flag would live in AlpineJS, while the resource I was deleting would live in the state of my LiveView.

This way, there are no round trips to the server over websocket to toggle the modal. Hopefully that example makes sense :).
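A sketch of that modal pattern (assumed names; note that mixing Alpine with LiveView typically also needs the documented onBeforeElUpdated DOM-patching tweak so Alpine state survives LiveView updates):

    <div x-data="{open: false}">
      <button x-on:click="open = true">Delete</button>

      <div x-show="open">
        <p>Really delete <%= @resource.name %>?</p>
        <%# Toggling is pure Alpine; only the actual delete hits the server. %>
        <button x-on:click="open = false" phx-click="delete" phx-value-id={@resource.id}>
          Confirm
        </button>
      </div>
    </div>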


I'm surprised to see so few mentions of AlpineJS. Personally, PETAL has become my de facto stack.


The main thing that's kept me from using Alpine in my serious projects is that it doesn't work with a strict CSP.


What is PETAL?


Phoenix, Elixir, Tailwind, Alpine, and LiveView.

https://changelog.com/posts/petal-the-end-to-end-web-stack


The PHP equivalent would be the TALL stack (Tailwind, AlpineJS, Laravel and Livewire). Although Livewire just communicates over AJAX. The original Websockets version didn't make it.

I just found out that Livewire was inspired by LiveView.


It’s telling that every answer is “just deploy servers near your users.”

One of YouTube's most pivotal moments was when they saw their latency skyrocket. They couldn't figure out why.

Until someone realized it was because their users, for the first time, were worldwide. The Brazilians were causing their latency charts to go from a nice <300ms average to >1.5s average. Yet obviously that was a great thing, because if Brazilians want your product so badly they're willing to wait 1.5s every click, you're probably on to something.

Mark my words: if elixir takes off, someday someone is going to write the equivalent of how gamedevs solve this problem: client side logic to extrapolate instantaneous changes + server side rollback if the client gets out of sync.

Or they won’t, and everyone will just assume 50ms is all you need. :)


> It’s telling that every answer is “just deploy servers near your users.”

This isn't the takeaway at all. The takeaway is we can match or beat SPAs that necessarily have to talk to the server anyway, which covers a massive class of applications. You'd deploy your SPA-driven app close to users for the same reason you'd deploy your LiveView application, or your assets – reducing the speed-of-light distance provides better UX. It's just that most platforms outside of Elixir have no distribution story, so being close to users involves way more operational and code-level concerns and becomes a non-starter. Deploying LiveView close to users is like deploying your game server close to users – we have real, actual running code for that user, so we can do all kinds of interesting things being near to them.

The way we write applications lends itself to being close to users.


Imagine how painful HN would be if you upvoted someone and didn’t see the arrow vanish till the server responded. Instead of knowing instantly whether you missed the button, you’d end up habitually tapping it twice. (Better to do that than to wait and go “hmm, did I hit the button? Oh wait, my train is going through a tunnel…)

Imagine how painful typing would be if you had to wait after each keypress till the server acknowledged it. Everyone’s had the experience of being SSH’ed into a mostly-frozen server; good luck typing on a phone keyboard instead of a real keyboard without typo’ing your buffered keys.

The point is, there are many application-specific areas where client-side prediction is necessary. Taking a hardline stance of “just deploy closer servers” will only handicap Elixir in the long run.

Why not tackle the problem head-on? Unreal Engine might be worth studying here: https://docs.unrealengine.com/udk/Three/NetworkingOverview.h...

One could imagine a “client eval” code block in elixir which only executes on the client, and which contains all the prediction logic.


You'd use the optimistic UI features that LiveView ships with out of the box to handle the arrow click, and you wouldn't await a server round-trip for each keypress, so again that's not how LiveView form input works. For posterity, I linked another blog where I talk exactly about these kinds of things, including optimistic UI and "controlled inputs" for the keyboard scenario: https://dockyard.com/blog/2020/12/21/optimizing-user-experie...

While we can draw parallels to game servers being near users, I don't think it makes sense for us to argue that LiveView should take the same architecture as an FPS :)


> Deploying LiveView close to users is like deploying your game server close to users – we have real, actual running code for that user, so we can do all kinds of interesting things being near to them.

Then why do you start running forward instantly when you press “W” in counterstrike or quake? Why not just deploy servers closer to users?

Gamedev and webdev are more closely related than they seem. Now that webdev is getting closer, it might be good to take advantage of gamedev’s prior art in this domain.

There’s a reason us gamedevs go through the trouble. That pesky speed of light isn’t instant. Pressing “w” (or tapping a button) isn’t instant either, but it may as well be.


> Then why do you start running forward instantly when you press “W” in counterstrike or quake? Why not just deploy servers closer to users?

You do both? The game client handles movements and writes game state changes to a server, which should be close to the user to reduce the possibility of invalid state behaviors? You really haven't seen online games that deploy servers all over the world to reduce latency for their users? What?

Both web apps and games do optimistic server writes. Both web apps and games have to accommodate a failed write. Both web apps and games handle local state and remote state differently.


I read his post as a criticism of how little optimistic updating is done in web apps, and how bad the user story is. Why can't it be easy to build every app as a collaborative editing tool without writing your own OT or CRDT?


Because an occasional glitch when the client & server sync back up is acceptable in a game. Finding out that my order didn't actually go through is much worse. Especially since click button, see success, and close browser is a relatively common use case.


Consider these two scenarios.

1. SPA with asynchronous server communication. A button switches to a spinner the moment you click it, and spins until the update is safe at the server. Error messages can show up near the button, or in a toast.

2. LiveView where updates go via the server. The button shows no change (after recovering from submit "bounce" animation) until a response from the server has come back to you. To do anything better, you need to write it yourself, and now you're back in SPA world again.

There's a reason textarea input isn't sent to a server with the server updating the visible contents! Same thing applies to all aspects of the UX.

EDIT: https://dockyard.com/blogs/optimizing-user-experience-with-l... talks about this. That'll handle things like buttons being disabled while a request is in flight, but it won't e.g. immediately add new TODO items to the classic TODO list example.


That's a deliberate UI choice, though, and it doesn't always make sense in non-transactional workflows. It's easy to wait for Google Docs to say "Saved to Drive", and going to a new page to save a document would be really disruptive to your workflow, for example.


I remember this story but can't find it anywhere. If I recall correctly they deployed a fix that decreased the payload size. However, in doing so they actually opened the door to users with slow connections that were unable to use it at all before. So measured latency actually went up instead of down.


That’s the one! Where the heck is it? It’s one of my all time favorite stories, but it seems impossible to find; thanks for the details.



YES! Thank you! I’ve seriously been searching for like five decades. What was the magical search phrase? “YouTube Brazil increase latency” came back with “How YouTube radicalized Brazil” and other such stories. (Turns out the article mentions “South America” rather than “Brazil”; guess my Dota instincts kicked in.)

Anyway, you rock. :)


Thank you! It was impossible to find anything on Google since any variant of "youtube", "latency" etc showed results for problems with YouTube or actual YouTube videos talking about latency.

The trick was to use HN search: "youtube latency" and select Comments. First result was a comment on https://www.forrestthewoods.com/blog/my_favorite_paradox/ which links the story in the "Bonus!" section.


> Mark my words: if elixir takes off, someday someone is going to write the equivalent of how gamedevs solve this problem: client side logic to extrapolate instantaneous changes + server side rollback if the client gets out of sync.

most games have the benefit that they're modeling the mechanics of physical objects moving around in the world and are having their users express their intentions through spatial movement. the first gives a pretty healthy prior in terms of modeling movement when data drops out and the latter can be fairly repetitive and thereby learnable and predictable.

whether or not user interaction behaviors can be learned within the context of driving web applications seems a little less clear, to me at least. it does seem like there are a lot more degrees of freedom.


Nothing so complicated. All that's needed is a local cache so that when you type a new message in the chat window, you immediately see it appear when you hit submit (optionally with an indication of when the message was received by the peer). But there's quite a bit of tooling required to reliably update the local cache and run the code both on the client and on the server.


Firebase does this brilliantly with Firestore queries. Any data mutation by the client shows up in persistent searches immediately, flagged as tentative until server acknowledges.


> server side rollback

server controlled client side rollback, you mean?


On the modern internet, with some assumptions, you can get roughly 2x faster (in my case) when sending data over an *already established* connection.

Example:

A fresh HTTP connection from client to first byte takes ~400ms (I'm in the US, the server is in Europe). This includes resolving DNS, opening the TCP connection, the SSL handshake, etc...

But if the connection is already established, it only takes ~200ms to first byte.

If I deployed the server in the same region, say US customer <-> US server, this came down to 20ms...

That means it's good enough.

Not super ideal, but it's a trade-off we're willing to make.


LiveView (I think) already achieves this optimisation as both the initial content and any future updates come over the same persistent websocket connection.


Not OP, but:

> If latency is high, this can make for a poor user experience, the argument goes.

Deploy auto-scaling servers closer to your users: Use fly.io (or any other competent edge platform, really).


A better balance would be to build the webapp in hybrid mode, where some logic can be run by client-side JavaScript. Only the event handlers that rely on data from the server need to be sent to the server.

In this pixel paint demo, the state and reaction of "changing the pen color" can happen locally: https://github.com/beenotung/live-paint/blob/dd3b370/server/...


I've never used LiveView, and this is tangential to the latency consideration here, but two applications that I think can be enabled by siphoning all events to the server are server-side analytics and time-travel debugging (or reconstruction of a session). I am so glad to learn of this tool and am definitely giving it a try in my next project.


Here's a demo that illustrates that delay: https://breakoutex.tommasopifferi.com/

Agree it's a super neat framework, and I hope a client-side implementation can be written to "stage" the change.


Chris said in a different comment that optimistic UI updates already exist…

Here’s the link: https://dockyard.com/blog/2020/12/21/optimizing-user-experie...


Here is an article about updating the UI without waiting for the roundtrip to the server.

http://blog.pthompson.org/liveview-tailwind-css-alpine-js-mo...


Hi Chris.

We've been using LiveView to build a new app at Precision Nutrition and are largely quite happy with it so far.

One concern we keep coming back to is the need for constant connectivity in order for the app to work. I'll throw up the disclaimer here that I've not spiked on how to handle network disconnects. That said, we've had a few of our internal users lose their connection to the web socket, and the app just beach balls. You can't do anything. Another example we keep coming back to is a user going onto a subway.

How do you envision spotty connections being handled long term?


LiveView will automatically recover the connection, but you are correct it requires a connection to allow interactivity – this isn't different from being unable to post a tweet while driving under the subway. The interesting thing about the subway use case is that even Google Docs, last I checked, will go into read-only mode when the connection is lost, so I don't consider this scenario particularly different from the status quo.

LiveView adds a `phx-disconnected` class, which is what I assume you mean by beachball, and interactions are paused while the user waits for the UI to tell them they are good. This is all driven by your CSS, and you can also hook into the connection life-cycle with a few lines of JS, so the app should be telling the user "Reconnecting..." vs it appearing broken – this is really up to your UX folks. By default, new phx apps make use of the topbar library, so any page loading event or dropped connection will show the loading bar/spinner up top.

As far as spotty connections go, when I try simulating 30% packet loss, my LiveView WebSocket connections show no perceptible degradation, so it's hard to say where the cutoff is. You can also fall back to long-polling, which may benefit users at the very edge, but I would only do so if absolutely necessary.

Having said all that, one thing we are upfront about in general is that LiveView is obviously not a good fit for applications that require offline support :)


> this isn't different from being unable to post a tweet while driving under the subway

In principle, your Twitter client could remember the state locally, let you keep working, and just keep trying to sync.

In practice... most apps are really bad at working offline, so just handling the disconnection and reconnection in a robust and consistent way is already much better than average!


> it requires a connection to allow interactivity – this isn't different from being unable to post a tweet while driving under the subway. The interesting thing about the subway use case is that even Google Docs, last I checked, will go into read-only mode when the connection is lost...

I think that 'read-only mode' and 'can't interact at all' mode are not the same.

I often have google news open when the train vanishes down a tunnel and keep scrolling down reading headlines until we're out the other side.

If a website freezes on me, I close it.

> Having said all that, one thing we are upfront about in general is LiveView is obviously not a good fit applications that require offline support :)

Not a good fit is rather charitable...

It becomes totally unresponsive and totally unusable in any offline or high latency situation (trains, stadiums, remote areas) right?

I know, and I've read your responses that 'well, all websites have to interact with a server eventually, so it's pretty much the same as that only better...' but, well... when you build websites like this, that's why product managers say "no, we don't want a website, we want an app".


> Not a good fit is rather charitable.

s/obviously not a good fit/non-starter :)

> I think that 'read-only mode' and 'can't interact at all' mode are not the same.

I agree, but it depends on the application. LiveView doesn't "freeze", but the content on the page is not going to continue updating or be interactive. This indeed limits applications that want to let the user continue editing a document, but your example of a news site absolutely still functions fine for read-only offline use.

> It becomes totally unresponsive and totally unusable in any offline or high latency situation (trains, stadiums, remote areas) right?

Yes, just like the vast vast majority of web applications today, including the vast vast majority of SPAs that could, in theory, work offline, but don't because of the added complexity on the client and server, state syncing, conflict resolution, etc. If working under the offline condition is a hard requirement, LiveView is out full stop. But even for SPAs, this is an opt-in feature today that few choose.


Thank you for your response, Chris. It's much appreciated!


I've been experimenting with some progressive-enhancement-style HTML. You can build forms that post the "normal" way, then get the full LiveView experience when the socket is mounted.

If an app can be reduced to just CRUD, I think it works very well.
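A sketch of that progressive-enhancement shape (assumed names; the controller action backing the plain POST is left out): the form is an ordinary POST until the LiveView socket mounts, after which phx-submit takes over on the same markup.

    <form action="/comments" method="post" phx-submit="save">
      <input type="hidden" name="_csrf_token" value={Plug.CSRFProtection.get_csrf_token()}/>
      <input type="text" name="comment[body]"/>
      <button type="submit">Post</button>
    </form>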


I've tried to get into Phoenix a few different times, but I've always been stymied by the unfamiliarity of Elixir's syntax, especially codebases that made heavy use of composition and guard clauses, meaning that the business logic was scattered across a half-dozen different files which had to be read in sequence to understand a single web request. This was a big let-down from the (to my mind) very straight-forward nature of MVC code and project structure that Rails pioneered. Do you have any tips for structuring Phoenix projects or approaching these kinds of codebases?


I'm not sure I follow. Phoenix is also MVC and its request model is far simpler than Rails'. In Rails we have controllers, we have callbacks (or "filters" as they are called now), and we have "middleware". That's three concepts right there! In Phoenix, we have a connection struct that flows through a pipeline of "plugs". Plugs are just functions that transform said connection. Each part of the request pipeline is implemented as a plug. You can create your own plugs and "plug" them in where you see fit. Read more here: https://hexdocs.pm/phoenix/plug.html
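A minimal sketch of a function plug slotted into a router pipeline (names assumed; the generated router already imports Plug.Conn):

    # A plug is just a function from conn (plus opts) to conn.
    def put_locale(conn, _opts) do
      assign(conn, :locale, get_session(conn, :locale) || "en")
    end

    pipeline :browser do
      plug :fetch_session
      plug :put_locale   # your own transformation, slotted in where you see fit
    end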

Of course, LiveView isn't MVC in the classic sense, but it still uses plugs. Its goal is to simplify the SPA, which I'd say it does a particularly good job of.

[Edited to fix part of a sentence I deleted by accident]


> Do you have any tips for structuring Phoenix projects or approaching these kinds of codebases?

Ignore the "View" modules. Put nothing in them. You can also eschew the "Context" modules for your database stuff and just use Ecto functions directly. Though if you do get into the habit of using Context modules, it will make your life easier to have a unified entry point that abstracts away, e.g., caching, or, if you decide to go full CQRS/event-driven, sending a copy of your data to a data lake, etc.

Actually LiveView tends to have less module-indirection than "deadview"s.

If you prefer to have simple APIs, you can just drop Phoenix altogether and stick to Plug. This will give you a ruby/sinatra-style experience that may be more your style and, IMO, a good way to learn Elixir. I did this for a long time before finally giving in and learning Phoenix. I turned out OK.
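The Plug-only route really is sinatra-ish; a minimal sketch along the lines of the Plug.Router docs:

    defmodule MyApp.Router do
      use Plug.Router

      plug :match
      plug :dispatch

      get "/hello" do
        send_resp(conn, 200, "world")
      end

      match _ do
        send_resp(conn, 404, "oops")
      end
    end

    # Started under your supervision tree, e.g.:
    #   {Plug.Cowboy, scheme: :http, plug: MyApp.Router, options: [port: 4001]}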


This most likely has to do with the codebase itself rather than with Elixir or Phoenix in your case.

And having to go through a few files to understand how a request is handled is not out of the ordinary in an app, especially if it's grown over the years.


Tbh Phoenix has a lot of boilerplate and some... opinions that make the code path slightly more complicated (often for good reason... that might not apply to all use cases) in the default project.

For example: what exactly is the distinction between an endpoint and a router?


Endpoint == an instance of the Phoenix webserver. Changing things in the endpoint gives you WAF-like control, and you can do some early footwork here and store useful data in conn.private for use later in the modules called from your router.

You can have multiple routers. I just built a thing where foo.com uses one router for the main site and *.foo.com is something else.


Point is, they are just plugs. You can put routes in an endpoint and it'll work just fine. The distinction is just an (often useful) prescriptive opinion by Phoenix. (Plug does not split between endpoints and routers.)


very true, but the abstraction is pretty useful


It's useful if your project gets very large. It is a non-negligible cognitive cost when your project is very small and you are new to Elixir and Phoenix.


Hi, really excited about the new release - HEEx and esbuild support is fantastic.

Do you think it is possible for newcomers to pick up Phoenix and at the same time learn how frameworks work? For example, FastAPI [0] is an extensive Python framework, yet its documentation is essentially just a long tutorial which explains basic web-development concepts while teaching its core. OTOH, almost all the Phoenix learning material I have come across so far is targeted at experienced devs (especially Rails devs).

I really love that Phoenix is opinionated but I feel like there should be some ways to learn about the rationale behind those choices, without having 10 years of development intuition.

[0] https://fastapi.tiangolo.com/


Take a look at these courses on Elixir/OTP[0] and Phoenix LiveView[1] from Pragmatic Studio. Together, I felt they gave a well-paced overview of Elixir servers and the Phoenix framework from a good starting point, even for the beginners you seem to have in mind. There's a combo price deal if you buy both courses.

[0] https://pragmaticstudio.com/courses/elixir

[1] https://pragmaticstudio.com/courses/phoenix-liveview


It's absolutely possible, but you are right that our docs and guides are more tailored to folks already familiar with web development. I would love to see more Elixir and Phoenix content for someone who has never written a program before. I do think we'll start seeing more of these kinds of efforts, but it's the nature of the age of the ecosystem and the adoption cycle that content is initially tailored to more advanced users.


This article really helped me. It basically builds Phoenix from the ground up, starting with "hello world" in the terminal.

http://www.petecorey.com/blog/2019/05/20/minimum-viable-phoe...


Hi Chris,

As someone who's familiar with Kotlin, I can see how Erlang's processes and mailboxes would work. The question I have is: you mention Phoenix maxed out available FDs where RoR would have struggled. I didn't quite get what the fundamental differences are that prevent RoR/Ruby from scaling up io-bound workloads as effortlessly as Elixir/Erlang, given you point out that sync.rb was built upon a similarly capable evented-IO lib.

Also, if I may, what do you make of your former employer 37signals' Turbo (Hotwire) with RoR? Does the vision you have for Phoenix with LiveView match what you see them doing with Hotwire? How do the solutions compare, if you have had a chance to take a look?

Thanks.


I am not Chris, but maybe I can answer the Hotwire and LiveView difference.

Hotwire is building stuff from regular HTTP requests (ala AJAX) and uses WebSockets only for Turbo Streams. LiveView is all on WebSockets.

You can think of Hotwire as being stateless and stateful when required. LiveView is stateful.

Hotwire exchanges HTML snippets of various lengths, whereas LiveView tries to do a very minimal diff for an update.

WebSockets would be faster for your regular updates, but you need to keep a connection open. In other stacks this might not be ideal, but it's what the BEAM is made for.


1. LiveView looks incredible! But Phoenix is also great at standard JSON APIs. How do you think about the decision between using LiveView vs a SPA with a Phoenix API backend? I'm not too sure about the limits or future vision of LiveView. Do you think there's a place for SPAs, or is the goal for LiveView to be able to replace them in almost all use cases?

2. What's your dev environment like? VSCode? What extensions, etc?

3. Excited to see you at fly.io! I think I saw a tweet about them wanting to make deploying Livebooks (built on LiveView) easy. Any news on that front?


1. Thanks! You are right, Phoenix is also great for APIs, and that story is pretty much baked as far as Phoenix-specific features go. There's absolutely a place for SPAs, and the JavaScript ecosystem has a number of great options. Anything requiring offline support is obviously out for LiveView, as well as highly complex UIs, tho that is pretty vague. For example, I wouldn't build Google Docs or Google Maps with LiveView, but as we've seen with Livebook, you can do a shocking amount of complex things with a LiveView application and a few escape hatches to JS. We're still finding out where the bounds are.

Our goal isn't to replace SPAs, but I do think we'll obviate them for a large class of applications.

2. VSCode with VIM keybindings. I bounce from vim/emacs/vscode (all w/ vim emulation), but vscode has stuck pretty well for the last couple years. Much to my chagrin, it's faster than terminal emacs.

3. No news quite yet, but we're working on it. Stay tuned!


Thanks for your work! One question I had is about expanding beyond the web: what are the solutions available with Phoenix? For example, if I'm making a web app using LiveView, do I need to make a second app for my API for an iOS app?


We structure Elixir applications differently than a lot of platforms. Phoenix is not your app – it's just one thing that slots into your Elixir application's supervision tree. So you can run one Phoenix web server or multiple in a single Erlang VM, and nothing changes. Your Phoenix endpoint isn't global, so it will happily sit alongside several others, and a single web server will happily serve any number of Phoenix routers serving different APIs.

So to answer your question directly, you could either add your JSON or GraphQL API directly in the same router that serves your LiveViews, or you could create a router specific to the API, or you could introduce a completely separate Phoenix endpoint and router that starts a 2nd web server. Both would boot as part of the app, serving different ports. Phoenix remains a great choice for native clients if we're talking JSON/GraphQL, and we also have native channels clients in objc/swift. Hope that helps!
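For the simplest of those options, a sketch of one router serving both (module names assumed):

    scope "/", MyAppWeb do
      pipe_through :browser
      live "/", HomeLive
    end

    scope "/api", MyAppWeb do
      pipe_through :api
      resources "/posts", PostController, only: [:index, :show, :create]
    end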


I was actually just searching for this same thing on Google. I recently started a new project that LiveView would have been great for, but the fact that I need a native iOS/Android app caused me to go with Socket.IO instead for the backend. It seems likely to me that there is a way to use Phoenix channels with web, iOS, and Android.. but there isn't a lot of good information on doing it that I've been able to find.

Edit: I realize you could just use regular web sockets with Phoenix to do what Socket.IO does. But my confusion comes from actually getting it all to play nice with a LiveView front end, as well as a native app front end.


Phoenix channels are similar to socket.io, except we multiplex the channels with their own events, vs socket.io, which creates a single bidirectional "channel" with events. So you can think of channels as namespaced socket.io, with the server side allowing isolated and concurrent message handlers.

The channel docs give a solid overview of the server side, and we have a listing of third-party channels clients for most platforms here:

https://hexdocs.pm/phoenix/1.6.0-rc.1/channels.html#client-l...
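The server side of a channel is pleasantly small; a minimal sketch (names assumed):

    defmodule MyAppWeb.RoomChannel do
      use Phoenix.Channel

      # One module handles a whole namespace of topics, e.g. "room:lobby".
      def join("room:" <> _room_id, _params, socket), do: {:ok, socket}

      def handle_in("new_msg", %{"body" => body}, socket) do
        broadcast!(socket, "new_msg", %{body: body})
        {:noreply, socket}
      end
    end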


Off the top of my head (just an amateur in Phoenix right now): you could just leverage the API endpoints in Phoenix and implement the iOS and Android frontends with them. The views are decoupled. So you can have the business logic, as you would normally do, inside Phoenix, then the API on top of this logic, and then you could either implement the whole LiveView workflow on top of the API or just directly on top of the business logic (if you're feeling kinky, but it invalidates the architecture Phoenix sets up for you).

The Getting Started guide on Phoenix has a lot of useful material even for your use case. I might be missing something crucial though :)


Hey Chris, excellent work. Very excited about 1.6 and LiveView. If you update your Programming Phoenix book, I would certainly pay again.


Yeah, would love to see an updated book to dive back in for 1.6.


Seconded!


Business-wise, what do you think the sweet spots are for LiveView, Phoenix and Elixir right now? I love the BEAM ecosystem (I've been using it since 2004, on and off), but for a lot of places, Rails is still a great place to start. What kinds of applications have you seen where all the BEAM features just make Phoenix et al not just a little snappier, but a clear winner?


The sweet spots for me are:

1. It is much easier to trace things through the entire Phoenix stack than it is in Rails. It is also much easier to add things to the Phoenix stack using plugs.

2. Elixir is concurrent, whereas Ruby is not, so when performing long-running processes, you can just do them in Elixir/Phoenix without having to rely on workarounds like Sidekiq, Resque, RabbitMQ, etc.

3. Writing multi-threaded applications is much easier in Elixir than many (if not all) OO languages.

4. Pattern-matching for variables and functions, and binary pattern-matching for parsing text.

5. Mix.

6. BEAM, OTP, and supervision trees.


Looking more at things like #2. Sidekiq isn't terrible though, and if you get big enough, you're going to want more management than just spawning a BEAM process to do something. What have people settled on for that with Elixir/Erlang?


You could use something like oban: https://github.com/sorentwo/oban
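A minimal Oban sketch, if anyone's curious what that looks like (queue and function names are assumptions); jobs are persisted to Postgres, so they survive restarts and deploys:

    defmodule MyApp.Workers.WelcomeMailer do
      use Oban.Worker, queue: :mailers

      @impl Oban.Worker
      def perform(%Oban.Job{args: %{"email" => email}}) do
        MyApp.Mailer.deliver_welcome(email)  # hypothetical mail function
        :ok
      end
    end

    # Enqueue from anywhere in your app:
    %{email: "user@example.com"} |> MyApp.Workers.WelcomeMailer.new() |> Oban.insert()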


Sidekiq is great, but it is a thing you have to think about (in particular, how you're shuttling state from workers to a place where the front-end can see it). The big win with Phoenix, as I understand it, is not having to think about it at all: it has a natural expression in the language and the platform, and that expression is performant.


If it's just a small fire and forget thing in Erlang, you can just spawn the process, sure. My point is that once people start to care about what happens with those background jobs, you probably need some infrastructure around them, and then the Erlang thing might start to resemble Sidekiq a bit more.


This.

"When we want to have more control, i.e. persistence on disk, a workers pool, retries, among other things, there are several popular solutions:

Oban: a robust PostgreSQL-based queue backend process system. It has a UI in its pro version Exq: a Redis-based Resque and Sidekiq compliant processing library. It has an open source UI exq_ui verk: same as Exq. Is is compatible with Resque and Sidekiq job definitions"

https://sipsandbits.com/2020/08/07/do-we-need-background-job...

So basically it sounds to me like Elixir is doing pretty much the same thing as Ruby. We usually do want more control and disk persistence, so...


Sidekiq is not a workaround. This is a myth. Most companies need to persist their jobs (what happens during deployments or when a process dies? Do you just let all the job info disappear?) and have some kind of queue system. Even Elixir has some Redis-backed queues. BEAM, OTP, concurrency, this whole thing is solved with the current fashion of devops teams and kubernetes. It doesn't really matter what tech stack you use anymore; it can quite easily scale. Our devops team is 2-3 people and they're scaling our Rails architecture easily. Our scale is quite big since we're a b2c company, and most of their effort isn't even on scale but on making deployments easier and troubleshooting all kinds of dev problems / PagerDuty alerts. They would do the same amount of work if they were doing it with Elixir. If you're hitting WhatsApp scale, yes, Ruby is less than ideal. Could we stop pretending like the challenges of 5 companies are what most devs need to go through?


In Elixir you'd use Oban instead of Sidekiq, and you get more performance and it scales out horizontally with your app. Each new instance of your app is essentially a new Sidekiq server as a bonus.

edit: > BEAM, OTP, concurrency, this whole thing is solved with the current fashion of devops teams and kubernetes.

This is a hilarious statement which I hope is satire


I don't see BEAM and OTP as being synonymous with Kubernetes. They have some overlaps, but they aren't drop-in replacements for each other. If anything, they are complementary technologies.

The advantages of BEAM and OTP are that I can spawn new concurrent processes throughout a cluster and then use the actor model to send direct messages to those processes from any other processes, regardless of where the sender and receiver happen to be. I can also easily configure Behaviours so they automatically run on either a single node or every node in the cluster, depending on what they are doing.

In both cases, if a node goes down, Elixir will automatically restart any affected processes on another node. When a process has an issue, I can surgically handle that issue at the individual process level. Kubernetes doesn't handle problems that granularly.

One advantage of Kubernetes is autoscaling. OTP and BEAM don't do that.

By using libcluster, you can combine Kubernetes and BEAM/OTP and get the best of both worlds.


I don't know about Oban. I don't think it's bad design for the workers to be separate from your main app like in Sidekiq (they do 'require' the app, but they are essentially separate from the servers). Anyway, presenting this point as a huge win for Elixir over Ruby seems really exaggerated to me.


It prevents unnecessarily moving data around. And if you're running the whole stack on the same server with Sidekiq, you're dealing with interrupts and memory copies between processes; in Elixir it's all within the same allocated memory and nothing gets copied. Plus it's another service you don't have to monitor, because it's automatically monitored by the BEAM.


What if you want to scale your job workers and your servers separately?


Oban will let you choose which servers the jobs can execute on, and you could do the same with your own app code

> Queue Control — Queues can be started, stopped, paused, resumed and scaled independently at runtime locally or across all running nodes (even in environments like Heroku, without distributed Erlang).


I think it's nice that you can run everything on the same server. If you're scaling and want to split everything up, that's a "nice problem to have", but it's convenient to just run everything in the same application/codebase when you're getting started.

I do agree with you that's not a huge win compared with Rails, but it is nice to have. I think you'd have to look more at something like "lots of concurrent, long-lived connections" for the real wins over Rails for the BEAM ecosystem. I mean, you can do that in Ruby if you want to, but it's going to be cleaner and simpler with Elixir/Erlang.


Just getting started with LiveView; what confuses me is the proper way to integrate JavaScript. The JavaScript interoperability documentation page isn't really helpful.

A simple case for me is using hotkeys to submit a form. I couldn't figure out a way to trigger a phx-submit from an onkeydown handler.


If you think the docs can be improved, please let us know. You can also find us on elixir-slack or elixirforum.com to ask for help. This should get you started:

    //<input type="text" name="body"  phx-hook="SubmitOnEnter"/>

    let Hooks = {}
    Hooks.SubmitOnEnter = {
      mounted(){
        this.el.addEventListener("keydown", e => {
          if(e.key === "Enter"){
            this.el.form.dispatchEvent(new Event("submit", {bubbles: true}))
          }
        })
      }
    }

    ...
    let liveSocket = new LiveSocket("/live", Socket, {hooks: Hooks, ...})


I might try setting a mounted hook on the element with something like

  // 'keydown' is the correct event name (there is no 'onkeydown' event),
  // and an arrow function keeps `this` bound to the hook
  this.el.addEventListener('keydown', (event) => {
      this.el.dispatchEvent(new Event('change', { 'bubbles': true }))
    }, true);


Hi.

I got a couple of things:

I seem to remember in the original announcement presentation there was a demo of SVG being updated inside a page 60 times per second, all from the server. Did this actually become feasible? I'm thinking graphs and maps with live data. I might not need animation that smooth there, though it could make for nice dashboards.

The other bit that interests me is web apps for long-running tasks. What's the story in Phoenix and Elixir land for handling external shell processes from web requests? I'm trying to do lightweight job control without investing in separate systems for processing pipelines, CI/CD, and the like.

Thank you for Live View. It continues to be an inspiration even though I haven’t yet had a chance to dive into server side of it.


The 60fps rainbow works as a fun stress test demo, but really you should not be pushing events down the wire every 16ms to animate something on the client :)

That said, you are right that SVG charts/maps are a surprisingly fantastic fit for LiveView. You could actually render a fully interactive and dynamically updating chart by only sending SVGs – and it will probably send less data than hydrating the same client-side chart with JSON! Check out the Contex charting lib for some examples. They have links to some running LiveView examples if you scroll down: https://contex-charts.org

> What’s the story in Phoenix and Elixir land for handling external shell processes from web requests

Great story here thru Erlang ports. I'm a big fan of the Porcelain library, which wraps ports with some nice features on top: https://github.com/alco/porcelain
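For a quick sketch of both options (the commands themselves are hypothetical):

    # Built-in and synchronous, fine for short tasks:
    {output, exit_status} = System.cmd("ffprobe", ["-i", "input.mp4"])

    # Porcelain layers niceties over raw Erlang ports:
    result = Porcelain.shell("make build 2>&1")
    result.out      # captured output
    result.status   # exit status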


Hey, thanks a lot. These are great tips. Contex charts look really good and lightweight to run. Porcelain seems very promising. Guess I'll start with one of the tutorials elsewhere in the thread. Good info here all around. Much appreciated.


I'm in love with LiveView. I haven't done much Elixir yet, but I do Django and have done ASP.NET in the past, and I think it's a brilliant concept. The only thing kind of holding me back from using Phoenix for personal projects and ideas is really the authentication stuff not being built in (or that was my impression when I looked). I'm curious: do you ever see Phoenix fully supporting all the vitals, or was your decision not to include authentication based on modern approaches differing, like JWTs and such? I am definitely going to use it in the future. As a backend nerd I always held love for Erlang, and I think Phoenix / Elixir are my real way into the ecosystem.


The Phoenix 1.6 release candidate has been out for a few weeks, and we now include a `phx.gen.auth` generator for a fully bootstrapped authentication solution: https://hexdocs.pm/phoenix/1.6.0-rc.1/mix_phx_gen_auth.html
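For reference, bootstrapping is a few commands (the context/schema names are yours to choose):

    mix phx.gen.auth Accounts User users
    mix deps.get
    mix ecto.migrate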


Nice! I'm glad to see this; I'll definitely try it out soon. So I guess it's really just a matter of time while Phoenix grows, although from what I've seen it seems to have everything else I need.


There's been some progress on this front, I believe:

- https://hexdocs.pm/phx_gen_auth/overview.html


If one wanted to learn LiveView today, what's the best resource to learn? Also, will there be improvements to the LiveView latency story (if it's even possible) when providing a service to a global audience? Thanks for all your work!


Someone else already mentioned this in a separate thread, but Pragmatic Studio's courses on Elixir and LiveView are outstanding. They're not free but they're worth the cost. You can get both for around $200 if you buy their Pro Bundle.

https://pragmaticstudio.com/courses/elixir


Non technical Fly.io founder here: LiveView and Elixir work very well distributed all over the world. Here's an example app I fumbled my way through to show it in action: https://liveview-counter.fly.dev/


"Non-technical Fly.io founder", who do you think you're kidding? You have more commits than me in the last week.


Haven't played around with Fly yet, but it seems amazingly well-suited to LiveView apps and y'all seem like great people in general. Is there a Fly solution for if someone needed a big swinging...database? I guess one fear is getting locked into Fly + LiveView given latency concerns with alternative hosting providers, but outgrowing Fly's database offerings.


Our Postgres is pretty great up to about 500GB. We'll be pushing that higher over the next year, but we have several customers using Crunchy Data (https://www.crunchydata.com/products/crunchy-bridge/) in conjunction with their Fly.io apps.

We intentionally give you super user access to your DBs so it's easy to migrate or spin up your own replicas. Our bet is that _most_ people won't outgrow our Postgres but having an escape hatch is still very comforting.

We previously worked on Compose.com. We, at least, understand how to manage huge databases when the time comes.


Sounds like customers are in good hands!


Will there be a simpler quick-start deploy process for LiveView coming soon? (Last I looked at the tutorial, it seemed a little tedious compared to other languages/platforms.)


Hi Chris - this is superb, really pleased to see LiveView getting so much attention, and thanks for all your hard work.

Can you talk a little about the current story with deployments and managing state across/during those?

Obviously Erlang/OTP supports hot upgrades, but those are hard to design correctly and not supported by container/VM environments like Heroku and (I presume?) Fly.io.


Hot upgrades are a powerful and essential feature – when you have hard requirements for them. As you said, hot upgrades aren't simple and introduce a lot of work to get right. I like to tell folks: imagine you are building a pusher.com competitor, so your customers are paying you to host active connections for their end-users. In this scenario, bouncing hundreds of thousands of connections for a cold deploy is a terrible offering, because your clients are paying you for that reliable wire to their users' browsers. Hot upgrades make total sense here. Equally, if you're building a game server in Elixir, you likely don't want to tear down and rebuild your world state and bounce connected players.

So you opt into this complexity when you have a clear requirement for it. The other key point is that in either the hot upgrade or the cold deploy case, you still have to design your stateful system to recover from a complete restart. Servers catch on fire and things go bad sometimes, so the cold-deploy approach of rebuilding state on restart is not only completely viable by itself – you're doing it anyway, even with hot upgrades. Hope that helps!


You can store state client-side. Say you have a view to edit users, with a modal dialog. A typical LiveView app will change the URL when you open the modal, so it's like /users?action=edit&user=bob. When a server goes down, the client reconnects, and the LV receives the URL params in its handle_params callback, so it can recover the earlier state. Forms recover automatically in a similar way.

With this and a DB to store permanent data, an LV app can be deployed just like any other standard container app without user impact.
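A sketch of that recovery path (assumed names; fetch_user/1 stands in for your own lookup): because the modal state lives in the URL, handle_params alone can rebuild it on any node after a reconnect.

    def handle_params(%{"action" => "edit", "user" => name}, _uri, socket) do
      {:noreply, assign(socket, modal: :edit, user: fetch_user(name))}
    end

    def handle_params(_params, _uri, socket) do
      {:noreply, assign(socket, modal: nil)}
    end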


If LiveView is backed by PubSub, then I think you can offload persistence to Redis.


LiveView in its standard configuration does not use Redis; it uses built-in Erlang distribution. But you can use Redis if you want to, or if direct connections between nodes aren't possible.


Hi Chris, two questions about phoenix and liveview.

I would like the web app to have good offline support and use localStorage for backing when the connection goes down (re-uploading when the connection is back up). Is it in any way feasible to add some middleware to LiveView's client connection handling so that this still plays well with its features?

I've read that it is possible to drop bits of JS here and there into a LiveView-powered page. Is it also possible to do it the other way around and drop LiveView "components" into an existing SPA? Throwing away my existing SPA code is quite a loss, but the LiveView appeal is certainly there for other parts of the website.

Thanks for the great work on Phoenix; it looks quite attractive.


Are you guys looking into the WebTransport protocol for the future? Right now you have to tunnel the websocket connections over HTTP/2, and it will probably be the same for HTTP/3, afaik.

I know there is this work in progress (https://w3c.github.io/webtransport/), and websockets are probably fine for a long time, but sooner or later (unless there is an update to websockets) it will probably be faster to just do normal HTTP requests and listen for server-sent events.

What are your thoughts for Liveview for the future? Will it forever stay on websockets or would you be open to change the underlying technology if / when new stuff becomes available?


Phoenix Channels (which LiveView is built on) is transport agnostic, so as soon as we can open a pipe from the browser's JS environment, and listen on the server, we could have a `Phoenix.Transports.WebTransport`. All the existing user-land code and LV apis remain unchanged :)
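
For context, a sketch of how transports are chosen per socket today; a hypothetical `Phoenix.Transports.WebTransport` would slot in alongside these (module names are your app's):

    # in MyAppWeb.Endpoint; channels code is unchanged regardless of
    # which of these transports the client negotiates
    socket "/socket", MyAppWeb.UserSocket,
      websocket: true,
      longpoll: true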


That is something I suspected. When I first saw your talk about LiveView on YouTube, my mind was completely blown.

It is such an awesome technology.


Hey Chris! First, a huge thank you for making this. I started playing around with LV v0.2 or something. It's definitely come a long way since then!

I've read the blog post about getting 2 million concurrent connections on a single server with minimal tuning. That's pretty mind blowing. How does that compare with real-world scalability with LV? Like, can I actually have 2 million people looking at a LV simultaneously? I think the worry I hear most is whether or not LV scales. I've personally never run into any problems at all, but the projects I've built with LV haven't been the highest-traffic apps.


My pleasure! LiveView is built on Phoenix channels, so it has the same scaling characteristics of a channels application. We had 2 million connections, but what that means is those 2 million websocket connections each started a channel to join the chat room, so we were running full-blown channels in that load test. For LiveView, the main consideration will be memory usage since you are likely to keep more state than something offloading some state to a JavaScript or native UI. That said, check out our docs on `temporary_assigns`, which allows you to specify which template state you hold on to, and which you only need the first time and can throw away (or fetch on demand) later.

The other thing to consider load-wise is the very state that you can now keep in memory allows you to reduce system load. Instead of fetching and authenticating the current user for every interaction, you do that one time for the lifetime of the visit. Database load is drastically reduced and you don't spend CPU cycles doing token verification. So while there's a cost to holding state, in general this will allow you to do much less work than a stateless application.
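
For example, a minimal `temporary_assigns` sketch (the Chat context is hypothetical):

    def mount(_params, _session, socket) do
      socket = assign(socket, messages: Chat.list_recent_messages())

      # :messages is rendered once, then reset to [] in the server's
      # state, so the process doesn't hold the full list in memory
      {:ok, socket, temporary_assigns: [messages: []]}
    end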


Thanks for your work, this really looks promising!

One question I keep thinking about. What if I want to still use React on the frontend? Does it still make sense to use Phoenix for the backend then, or am I throwing all the benefits overboard?


As a personal testament: I did this for a side project, and it worked perfectly. Phoenix does very well as "just" an API/backend environment.


Phoenix is still fantastic for JSON and GraphQL APIs. In fact, with GraphQL subscriptions, it's extremely well suited because of how well we handle WebSockets and pubsub. The community has a robust GraphQL toolkit which works with Phoenix. It has long been 1.0 and has had a book published around it, so it's quite solid: http://absinthe-graphql.org
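
For a taste, a minimal Absinthe subscription sketch inside a schema module (the field, arg, and :message type are hypothetical); Phoenix's pubsub handles the fan-out:

    subscription do
      field :new_message, :message do
        arg :room_id, non_null(:id)

        # subscribe this client to a pubsub topic scoped to the room
        config fn %{room_id: room_id}, _info ->
          {:ok, topic: "room:" <> room_id}
        end
      end
    end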


How do you handle issues of consistency once you need to scale your app server across multiple nodes? Under the assumption that in the past much of this would be handled in the database, are there any guidelines regarding what state to maintain in the app server in order to avoid any issues? Or is this just a non-problem?


What resources would you recommend for someone new to Phoenix, LiveView, Elixir to get started on this stack?


This is a pretty good liveview tutorial: https://github.com/dwyl/phoenix-liveview-counter-tutorial

We also have a hiring project that's designed to ease people in to Phoenix + LiveView. We extracted this from a real app and tried to make it as simple as possible to work on: https://github.com/fly-hiring/phoenix-full-stack-work-sample


What sort of experience level are you hiring for? I've been teaching myself elixir and liveview over the past year for my first job (building dashboards for a non-technical nonprofit) and I've been hoping to transition to working somewhere technical but I have no idea how to do it. Thanks!


Your experience level, actually. Here's the full post:

https://fly.io/blog/fly-io-is-hiring-full-stack-developers/


Thank you!


One question here; is there anything special about Elixir/Beam which makes Liveview on Phoenix a great fit?

Or can LiveViews be done on more performant languages like Go, Rust etc? I am just surprised why we don't see more LiveView implementations in other languages?


Laravel has LiveWire https://laravel-livewire.com which works in a similar way.


> I am just surprised why we don't see more X implementations in other languages?

X probably doesn't have the mainstream appeal some people think it should have.


There has been a large number of LiveView-like solutions pop up that are inspired by what we're doing. I tried to outline what makes Elixir uniquely suited to handle this kind of thing in the post, but I'll try to distill it here:

1. The concurrency and distribution model. Processes (lightweight green threads) are extremely cheap, isolated, and concurrent. Processes can message each other, and messaging is location transparent. So you can send a message to another process on another Elixir server using the exact same primitives as sending a message to a process on the local server. This allows for all kinds of things that are hard or not reasonably possible in other platforms:

- Start a process on a node in us-east1, and message it from Tokyo. This is simply built-in.

- Run a primary DB in us-east1, and RPC from your Tokyo instances to perform writes. The RPC mechanism is again built in. You have code running on another node, and you simply run your code over there and get the result. There's no marshaling of data structures, no deploying message queues, no protocol buffers, etc.

- Using Phoenix PubSub, broadcast a message from anywhere in the cluster, `PubSub.broadcast(MyApp.PubSub, "my_topic", {:new_msg, "hi!"})`, and it will arrive at all instances that are subscribed, anywhere

This kind of stuff can be made to work well in Go and Rust, but you need to bring in libraries and do more work. They are absolutely great at "network programs", but lacking the distribution primitives means libraries and solutions are usually more bespoke vs Elixir where everyone in the community simply uses what is provided out of the box. So there is no interplay of ops or dependencies to try to reconcile.
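
A minimal sketch of that location transparency (node names and the DB module are hypothetical; assumes the nodes are already clustered):

    # on node :"app@us-east1": register a process under a global name
    pid =
      spawn(fn ->
        receive do
          {:ping, from} -> send(from, :pong)
        end
      end)

    :global.register_name(:my_worker, pid)

    # from node :"app@tokyo": message it with the same primitive as a local send
    send(:global.whereis_name(:my_worker), {:ping, self()})

    # or run a function on the remote node and get the result back directly
    :erpc.call(:"app@us-east1", DB, :write, [%{amount: 100}])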

2. Processes support stateful applications. Most of the LiveView-like solutions still go over stateless HTTP because websockets and cheap concurrency aren't as viable. This drastically limits what you can do efficiency-wise. For example, our diffing engine requires state to live on the server so we know what changed. If you simulate LiveView over HTTP by sending all the state from the client for every interaction, you are sending way more data, doing all the HTTP authentication, fetching the world, then sending the entire template back.

3. Processes are preemptively scheduled and load-balanced on IO and CPU. This allows your LiveViews to perform blocking CPU-bound work and the scheduler will make sure every other user gets their fair time share. In other languages like Node where you rely on evented IO, any CPU-bound work blocks the entire program.

4. Processes are isolated. Back to the evented IO example: imagine your websocket handler in Node.js has an uncaught exception caused by a single user interaction. It brings down the connections for all connected users. In Elixir, all processes are isolated and garbage collected in isolation, so you aren't jumping thru hoops to handle these kinds of degradation modes.

Hope that helps!


I really appreciate the work and innovation you are doing, but these other alternatives exist because not everyone is starting a project from scratch, not everyone is able to rewrite their project, and not everyone is building a team with the required skills from scratch.

At a previous company I worked for, a decision was made to move to other platforms not because Elixir was bad, or LiveView was bad (it was good!) but because it was really difficult to hire for.

It is difficult to find people with Elixir experience, and we didn't want a production system built by people who were just learning a language, platform, and architecture on the fly.

Also, that's considering "backend" developers. It was *TOTALLY* impossible to hire frontend developers wanting to work in this setup. No single frontend developer wanted to learn Elixir and all the backend-related stuff. This doesn't happen, for example, with Node... yes, it might be a terrible choice for the backend, but most frontend devs are happy to work with it.

Regarding the solutions on other platforms not having the Erlang VM to power them, etc.: not everyone is at Google/Twitter/Facebook scale; 90% of companies out there can run perfectly fine on a run-of-the-mill Django/Rails/Laravel setup.

Premature optimization is as bad when building an unnecessary SPA as it is when using the Erlang VM where Rails would have been enough.


Phoenix 1.6 has been in alpha/release candidate for a few weeks now. Just curious, what issues are blocking the stable release of 1.6.0? I've checked GitHub but I haven't found any major outstanding bugs waiting to be ironed out.


No blockers. We're simply giving folks time to try things out and report issues. No show stoppers at the moment so I imagine final release won't be far off!


I'm wondering how this framework handles user feedback for network errors and delays?

It seems like your changes aren't really committed until they hit the database, but there are a lot of intermediate states.


We have optimistic UI features where we'll apply a CSS class to the interacted element, lock that DOM container for inbound updates, and only remove the class when that specific interaction is acknowledged by the server. We can also swap text in a similar fashion. The DockYard blog I linked upthread covers these things in detail.
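
For a taste, a minimal HEEx sketch of those bindings (the event name is hypothetical): the button text swaps immediately on the client, and the loading class can be styled until the server acknowledges the event:

    <form phx-submit="save">
      <%# the form gets a phx-submit-loading class until the server acks %>
      <button type="submit" phx-disable-with="Saving...">Save</button>
    </form>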


HEEx looks awesome. How does compilation work? Does it properly & contextually encode/escape to avoid XSS? I'd love to learn more about the inner workings.


> I created Phoenix to build real-time applications. Before it could even render HTML, Phoenix supported real-time messaging. To do that, Phoenix provides an abstraction called Channels.

I think it's really neat that Chris started out with real-time messaging. When I first started working with web frameworks, it definitely felt like real-time stuff was an afterthought—a layer built atop an older model that leaked a lot of the details.

Working with LV has been an absolute delight. Need to render a PDF and download it? Fire off a `Task.async` to render in a separate process on the backend. When it's done, that process can just `send` a message to the LV process to update the UI that the PDF is ready. So easy and painless. LV really hits a sweet spot for me: I'm primarily a backend dev, but with LV I can build really nice stuff on the front-end with minimal effort/headache from NPM.
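
A minimal sketch of that flow (the PDF module is hypothetical); `Task.async` reports back to the LiveView process as a plain message:

    def handle_event("download", _params, socket) do
      # renders in a separate process; the result arrives in
      # handle_info as a {ref, result} message
      task = Task.async(fn -> PDF.render(socket.assigns.report) end)
      {:noreply, assign(socket, task_ref: task.ref, generating?: true)}
    end

    def handle_info({ref, pdf_path}, %{assigns: %{task_ref: ref}} = socket) do
      Process.demonitor(ref, [:flush])
      {:noreply, assign(socket, generating?: false, pdf_path: pdf_path)}
    end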


This would make a great short blog post.


I've used LiveView as it was meant to be used in a couple of small personal projects but in my latest (much larger) project I'm using it in a way that (given what I just read in this post) would make Chris' head spin.

Basically I'm using it as a container wrapper for a large React app and keeping all the state on the client. This is only for the app pages on the site, the rest of the pages are either traditional "deadviews" or LiveViews.

Why?

1. I have a lot of state. It's an accounting app and I want the UI to be zippy. Because I don't want to keep all that state on the server (per user connection), I would have to use temporary assigns to keep the server state small, which means lots of queries and data shipping when searching/reordering/generating reports. Nothing beats only shipping the data once over the wire. And I do know about all the hacks to update local lists using components - that doesn't help when I need to use/display the same data in different ways.

2. While I love the expressiveness of Elixir the lack of a type system makes UI development/refactoring much slower for me as opposed to React+TypeScript. Note: I have several years experience in both Elixir and React+TypeScript over many projects so I don't come to this conclusion lightly.

3. Using LiveView is much nicer than using Channels since I can delegate common elements of the page to it instead of replicating it in React. Sort of a Russian nesting doll of rendering. Plus the LiveView is colocated with the other pages in the site which makes it more tidy.

4. I don't have to write an API - it's just LiveView messages.

5. I don't care about SEO for the app pages (and explicitly don't want it indexed).

6. I'm using Elixir/Phoenix/Ecto for its best parts - supporting lots of websocket connections and hosting the core logic (and the non-app pages). I shudder at the thought of running a fleet of node apps to do the same.

I'm not sure why I wrote this other than to let folks know that LiveView can be used in ways that might not be obvious from an initial look.


I don't know if I'd feel comfortable if my accounting app's state was mostly local; if my PC decided to take a nosedive, I wouldn't have the state on the server that I assumed I had.


The data would be in the server database, not in memory hogged by state for each client's active connection.

I seriously don't understand server-side rendering when toasters are more powerful than a mid-tier PC I bought 15 years ago. Why not do APIs and just send the data?


Personal anecdote: LiveView is absurd. It's the biggest change I've experienced in web development since Rails v1.

I've been able to build rich, interactive games without a single line of Javascript. It takes complicated server / API / front-end build projects and results in literally 1/10th the amount of code for the same result.

It's one of the few times in our world that the technology isn't just "new and cool" but "fundamentally better".


I really wanted to use it at my last job, where we had a Go backend and a React SPA and a sprawling, ad hoc RESTish API in between that was only used by the SPA.

We were mostly split between Reactors and Gophers, and the amount of time we spent arguing about the API and implementing it... I'd say it was easily one third of the work, if not more, that would have just gone away if we didn't have to worry about the details of the data we were moving back and forth.

My current job is a bit too front-end heavy for LiveView to make sense. But I still keep an eye on Phoenix. It really feels like it could be a secret weapon, at least until everyone else adopts a similar pattern.


The time spent on discussing data exchange details (protocol) between teams is regained in their ability to work largely independently of one another (without constant sync).


> HTTP almost entirely falls away. No more REST. No more JSON. No GraphQL APIs, controllers, serializers, or resolvers

In React / Typescript world you can get a little bit of this with Blitz - but not the live updating part, as far as I know. I found there was quite a lot to learn, but I’d also been out of the React world for a couple of years. Probably getting up to speed with TS was half of it, and obviously that’s not due to Blitz.

It has felt quite magical thinking only about the DB and React, and so far it’s just worked.

“Blitz is a batteries-included framework that's inspired by Ruby on Rails, is built on Next.js, and features a "Zero-API" data layer abstraction that eliminates the need for REST/GraphQL.”

https://blitzjs.com/


You kind of still need to think about your queries though, especially when building more complicated stuff.

I do agree the batteries included part of Blitz makes it really pleasant to use if you need both front and backend.


> You kind of still need to think about your queries though, especially when building more complicated stuff.

I’m not sure I follow - are you talking about client-side invalidation, which you may have to do manually with functions like `invalidateQuery`? I can see that being a gotcha in more complex Blitz apps. https://blitzjs.com/docs/mutation-usage#cache-invalidation


We used LiveView in a production app but have since rewritten it all in favor of React. The biggest issue with LV was the fact that users with poor connections were experiencing channel timeouts causing the page to completely refresh. This was unacceptable UX, and there was nothing we could do about it. Really a shame, because I was enjoying LiveView.


Haven't used liveview, is it not possible to just implement some kind of heart-beating and re-connection logic?


LiveView has built-in re-connection logic! I think he means if there's no internet connection there's nothing to reconnect to.


> We also shipped a live_redirect and live_patch feature which allows you to navigate via pushState on the client without page reloads over the existing WebSocket connection.

This is one LiveView feature I've deliberately avoided so far. The reason is that when you replace real page loading with client-side navigation using pushState, accessibility for blind users suffers. When a real page load happens, a screen reader knows when the load is complete, and can handle it in an appropriate way, e.g. by beginning to read the new page. But when client-side JS updates the DOM in-place, a screen reader has no reliable way of knowing that conceptually, a new page just loaded. The usual work-around is to have an invisible ARIA live region that says something like "navigated to [page title]". That's better than nothing, but still a regression from real page loads. Of course, SPAs have the same problem.
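
For reference, the workaround might look like this in a live layout (a sketch; the sr-only class and the @page_title assign are assumptions from a typical setup):

    <div aria-live="polite" class="sr-only">
      <%# announce client-side navigations to screen readers %>
      Navigated to <%= @page_title %>
    </div>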

This really ought to be fixed in ARIA, but until then, I'll keep doing form submission and page navigation the old way. Still, LiveView is really nice for real-time updates within a page.


It's unfortunate there's no standard way to dispatch this kind of thing ARIA-wise. Fortunately live navigation is opt-in, so folks can continue to use `<%= link "page", to: ... %>` vs `<%= live_redirect "page", to: ... %>` when needed. On the form submission side, we actually still have you covered within LiveView, because the server can instruct the client to do a plain redirect:

    def handle_event(save, params, socket) do
      ...
      {:noreply,
      socket}
      |> put_flash(:info, "It worked!"
      |> redirect(to: "..."))}
    end
So the server can do `redirect` or `live_redirect`, and flash works in both cases, which I believe broadens LiveView usage a bit for your data-entry requirements.

I'll also check out the ARIA region approaches. I think something like that would be drop-in for LiveView in the live layout, and we at the very least could include docs on it.


As far as I know you can't pipe that sort of tuple through those functions. Is that a future feature? It would be mildly convenient if assign({:noreply, socket}), ...) == {:noreply, assign(socket, ...)}


There is a misplaced `}` :) He actually wanted to do :

    def handle_event(save, params, socket) do
      ...
      {:noreply,
       socket
       |> put_flash(:info, "It worked!")
       |> redirect(to: "...")}
    end
which would work.


Ah. It's nonstandard formatting, though.


Agreed -- even if it's possible, I always feel dirty doing that kind of "magic", and prefer this way:

    def handle_event(save, params, socket) do
      socket =
        socket
        |> put_flash(:info, "It worked!")
        |> redirect(to: "...")

      {:noreply, socket}
    end
I feel like it's more readable this way.


If you're interested in contributing accessibility improvements to LiveView, we'll fund it. Feel free to email me, I think this is really important work.


This sounds like an optimised implementation of the design described in "The Future of Web Software Is HTML-over-WebSockets":

https://alistapart.com/article/the-future-of-web-software-is...

HN discussion: https://news.ycombinator.com/item?id=26265999

I find developments in this area very exciting as I'm sick of dealing with layers and the required tedious glue you have to write to join them together.


Same, but I already find AJAX-based apps very slow unless they implement optimistic updates; now with this system, you have to wait for a round trip for each interaction. Does this mean clicking a counter will round-trip before updating?


You can apply this to any interaction where you have to wait for a round-trip anyway. It saves you coming up with a JSON schema and endpoint for the request, and then gluing it all together.

It doesn't mean that you don't use javascript at all, you just use it in a limited fashion for enhancing specific bits of your website. Just like it was in the 2000s when making websites was easy.


If you want to save state for the counter to the server, you would do a round trip. Otherwise you'd use some lightweight JS lib; the most popular one for simple interactions is Alpine.js. So for simple UI JavaScript, use Alpine; something like form validation and showing errors will do a round trip and save you from writing validation twice (on the client and on the server).


Ok, so the trade-off is a less snappy UI as soon as you have state, because you can't do optimistic updates on the view, send the request, then deal with the error.

But you write the validation code, and the view, only once, so it's more productive, and the state is always consistent.

I assume this means no PWA mode, although I don't really miss this.


The round trip is very quick, websocket with a tiny payload in most cases. This video does a great job of discussing the advantages (skip forward to 6:50) https://youtu.be/8xJzHq8ru0M


Payload size doesn't matter that much; a ping is a ping.

Currently my ping to this website is 167 ms:

    ping news.ycombinator.com
    PING news.ycombinator.com (209.216.230.240) 56(84) bytes of data.
    64 bytes from news.ycombinator.com (209.216.230.240): icmp_seq=1 ttl=49 time=167 ms
    64 bytes from news.ycombinator.com (209.216.230.240): icmp_seq=2 ttl=49 time=167 ms
    64 bytes from news.ycombinator.com (209.216.230.240): icmp_seq=3 ttl=49 time=167 ms

Which means it's a cost I have to pay, before payload matters, before parsing matters, before rendering matters.

I'm at home, on fiber, on an ethernet cable, so ping should be very fast. And it can be:

    ping 8.8.8.8
    PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
    64 bytes from 8.8.8.8: icmp_seq=1 ttl=114 time=5.63 ms

Unfortunately, the network is not uniform.

It's of course, worse on Wifi, and even worse on mobile phone, on the go. Depending on your location, YMMV quite a lot as well. Then you have hardware quality playing a role, the user may use other software (torrenting, streaming, playing) that can also affect that.

Bottom line, you cannot assume the internet connection is snappy. Or stable (which is a problem with WebSockets: you have to implement robust automatic reconnection, which is slow and expensive, or your users cannot use your software).

So knowing the trade-off you make is important. On a local network in the corporate world, LiveView seems like a terrific deal. For an app centered on heavy mobile use, it will probably hinder your app's ergonomics.


Since I'm a persistence person and not a UI person, I am more interested in what is on the other side of the Phoenix/Elixir/BEAM VM server cluster: the database/persistence. I understand that currently apps mostly talk to PostgreSQL, like a Rails/Django app would?

Then, if something modifies the data in the database not by using the Phoenix app, the Phoenix app would not find out until it loaded the values from the database. And what would prompt it to do that?

But if your entire active state can fit in RAM in a cluster of BEAM VMs, you might turn the BEAM VM cluster itself into a distributed database, in which the only way to modify the data is by talking to the Phoenix app (using Liveview, regular REST HTTP API or something else, maybe even postgresql protocol). If this is the case, the app server can guarantee that no state change could have happened by something else modifying the database or whatever, and a client receiving updates via websocket would be sure that it has a complete and accurate picture of the data.

Of course, you would need snapshot isolation (versioning of tuples in RAM) and you would need to store the transaction log durably on disk and you would need to garbage collect (postgresql vacuum) the no longer needed records. Snapshot export and "live update pushing" to a traditional SQL database would be desirable for BI and stuff. But basically, if a modern distributed database was also a good application server, it would be neat.


> Then, if something modifies the data in the database not by using the Phoenix app, the Phoenix app would not find out until it loaded the values from the database. And what would prompt it to do that?

Just like Rails, phoenix is the doorway to your application. If there are data changes happening that aren't through your application, then you're doing it wrong.


> If there are data changes happening that aren't through your application, then you're doing it wrong.

This is super-unrealistic. State mutation legitimately happens through many channels. The trick is signaling to the application layer that state has changed and either the app needs to reload it or update it.

In the case of GP, I would either build in a web callback that can be used by outside processes, or put a message on a queue like SQS that can be consumed by the application. This isn't a phoenix thing so much as an enterprise app design thing.


> This is super-unrealistic. State mutation legitimately happens through many channels. The trick is signaling to the application layer that state has changed and either the app needs to reload it or update it.

13 years in professionally, many companies, many contracts, and many projects later I've yet to see a system where it wasn't the case. Small or large (16k/rps at the large end). Startup and "enterprise".

And why wouldn't it be the case? Why not have your data flow through a "central" business logic layer?


Some applications use a database which is shared by multiple applications, and you are not allowed to change the models (i.e. you can't ALTER TABLE). You code your application knowing that data can be changed by others. Your application coordinates with other applications through the database. Using separate services that each have their own database is not required and incurs overhead.


> Some applications use a database which is shared by multiple applications

The smallest piece of the pie possible. Even if you ignore things like WordPress.


Postgres can do a form of pub/sub with triggers on table change events, via pg_notify. For anything more complicated, get Postgres to publish all its changes using Change Data Capture to a Materialize instance. With Materialize, every query can push live updates to an app server whenever the results change. Debounce a little and push the HTML; ideally don't overwrite something the user is editing.

The Materialize folks made a blog post/demo of the materialize->app server->browser pipeline in Python+JS you could follow along with. They have a Postgres CDC implementation too, so hook it all up and it will go.

https://materialize.com/a-simple-and-efficient-real-time-app... (the images are 404 though)

https://materialize.com/docs/guides/cdc-postgres/
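
On the Elixir side, a minimal sketch of consuming those pg_notify events with Postgrex (channel name, trigger, and connection options are hypothetical):

    # assumes a trigger on the table calls pg_notify('records_changed', payload)
    {:ok, notif} =
      Postgrex.Notifications.start_link(hostname: "localhost", database: "my_db")

    {:ok, ref} = Postgrex.Notifications.listen(notif, "records_changed")

    receive do
      {:notification, ^notif, ^ref, "records_changed", payload} ->
        IO.inspect(payload, label: "row changed")
    end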


Phoenix apps tend to use Postgres, but not always. We've seen some really interesting use of Erlang's distributed mnesia database. There's even an mnesia Ecto adapter: https://hexdocs.pm/ecto3_mnesia/Ecto.Adapters.Mnesia.html


> But if your entire active state can fit in RAM in a cluster of BEAM VMs, you might turn the BEAM VM cluster itself into a distributed database

It's hard to imagine an active state that couldn't fit into RAM in a cluster of BEAM VMs. I've run clusters with thousands of nodes, with some nodes having 768 GB of RAM. With today's servers, 4 TB of RAM per node is approachable. The new pg module avoids pg2's dependence on (cluster) global locks that limited effective cluster size.

Of course, you have to want to do it, and mnesia is mostly key-value, so I don't think you'd have a good time if you need other kinds of queries (but I could be wrong). And you need to have readily partitioned data if you want your cluster to be operable; having all the nodes in one mnesia schema makes a lot of things hard.


Multiple points of entry to a SQL database is the root of all evil if I’ve ever seen it.


What's the difference between "the data is in the server memory" and "no one has write access to postgres outside the application servers"?


If you make sure no one can write to postgres except the app server, the difference is only efficiency.


> Sync.rb works like this: the browser WebSockets to the server, and as Rails models change, templates get re-rendered on the server and pushed to the client. HTML rendered from the server would sign a tamper-proof subscription into the DOM for clients to listen to over WebSockets. The library provides JavaScript for the browser to run, but sync.rb programmers don't have to write any themselves. All the dynamic behavior is done server-side, in Ruby.

It sounds like Stimulus Reflex is essentially Sync v2 (today's libraries are able to accomplish Chris's original vision)

https://docs.stimulusreflex.com/


I think that's true? More platforms should invest in the "sync" vs. "render" abstraction, especially the platforms that have strong concurrency already.

A Big Phoenix idea though seems to be that you can take this a big step further: now that you're syncing, keep the state that you'd be attaching to React components serverside, and let sync update the front-end. It feels like a lot of the benefit you'd get out of a carefully-designed GraphQL API, but with none of the plumbing work.


I've been thinking about this too after doing a few LiveView projects, but on occasion needing better interactivity on the front end than is easy with hooks and something like AlpineJS.

Many web front ends are built on the Elm architecture / Redux or my favourite, re-frame in Clojurescript land where the view is driven directly by the app state.

What I'd love to have would be the same LiveView connection, with one part of the state object on the server synced to one key in the global app state in the browser. You could keep the rest of the server data private, and the browser side can keep its own state that's not needed on the server out of the way, but one part would always remain up to date.

Not sure if it would be two-way sync, or, more likely, you'd send events back to the server and the updates would magically arrive on the synced part of the app db. Events from server to client would also be useful.

It should be pretty efficient if it was diffed too. Phoenix LiveSync ;-)


I'm leading the engineering effort of a stealth startup in Miami and we are using LiveView to power everything. It's very liberating. HEEx does have its warts and it is very new, so expect to do some forum spelunking - but Chris McCord and Jose Valim are both very active and care about the product they are building. Usually things get fixed quickly. They are the heart and soul of the Elixir/Phoenix ecosystem, so thanks a bunch for all their hard work.


I switched to Surface for my project a few months ago. Have you tried both? Do you think Heex has any major benefits to make it worth slowly porting to?


Just want to say - thanks Chris! LiveView is the most innovative web technology I've seen in the last 5-10 years, and it is an absolute joy to use.


Agreed. I haven't been this excited and motivated by any technology since I got into rails 10 years ago. It's been a great cure for the jadedness I've been feeling about our field in the past couple of years.


"Since I got into rails 10 years ago..." -- this is my line! Cheers!


I have never used LiveView, but I'd love to see the web moving towards this kind of interface. HTML over the wire is powerful! The deep integration between all the parts (Phoenix, LiveView, the HEEx engine...) is nice, and compile-time template validation looks really cool.

One thing that concerns me about doing everything over WebSockets is that it seems you now need to keep a connection with every connected client, even if they are not doing any "highly interactive" actions. I think (in most cases) it would be more efficient to do a bunch of AJAX requests to receive those HTML chunks. Now that we have CDNs, HTTP/{2,3}, etc., the benefits of using WebSockets for everything seem less obvious.

This is what HTMX[0] does by default, and then you can use WebSockets[1] if you need it. Another great thing about HTMX is that the backend can be whatever you like, as long as you can handle HTTP requests and return HTML.

In any case, I like both approaches and I would love to see the web development ecosystem moving back to sending HTML over the wire, regardless of the framework.

[0] https://htmx.org/

[1] https://htmx.org/attributes/hx-ws/


Elixir (the language that Phoenix, the framework with these LiveViews, is written in) is great with concurrency. You can have incredible quantities of parallel sockets open with basically no overhead. There have been various write-ups about it if you're interested.


> is that it seems you now need to keep a connection with every connected client

But seriously, what is the problem with this? I am guessing you don't really have much experience with the BEAM. It's completely unobjectionable: there aren't many stability issues, and the connections are very lightweight (maybe a couple of kB per process).

I believe (I have not done this myself) you can do LiveView over long-poll, since LiveView only cares about there being a Phoenix Channel, which abstracts over websocket/longpoll.

https://equip9.org/2020/01/06/phoenix-liveview-longpoll

One of the things I want to try for the gee-whiz factor is to build an Elixir WebRTC client and serve liveview over a WebRTC data channel. I see no reason why that shouldn't work.


Do Phoenix channels have server-side state though? LiveView has server side state, so long polling would only work if there were also some form of server pinning involved (or a single server). Websockets are persistent connections so no need to worry about pinning a server across several requests.


You can go with Server-Sent Events too, which are more suitable for notifications. WS may be blocked on corporate networks.


The Symfony community has been going for Server Sent Events rather than websockets, via https://mercure.rocks/. I don't understand the benefit when every other framework wanting interactivity is still going for websockets.

As you say SSE seem more suitable for notifications rather than bi-directional communication.


I too hope HTML-over-the-wire will be trending; I hate all the SPA and JS madness nowadays.

SPA/React/Flutter are great tools and technologies when you have a team for the backend and one for the frontend. If you are a little startup or a one-man project, that's quite a lot to maintain.


I've had such a hard time wrapping my head around how to think in LiveView - I've tried tutorials and building my own projects, but always end up thinking "am I doing this the right way?" How do other people structure their codebase, and what are the considerations they have? I think I'm trying to force my way of working with React and Laravel, and lack some kind of fundamental way of thinking to get my "aha!" moment.


I totally get the "Am I doing this the right way?" feeling, especially coming from Rails where everything was so opinionated and wanting to stay idiomatic.

Phoenix, while it does have opinions, is far less opinionated in the sense that it doesn't do its darndest to force you into certain conventions (for example, if your module name doesn't match your file name, Phoenix won't complain). Its generators do try and push you toward using good DDD practices (which in my opinion is a GREAT thing), but of course the generators are completely optional.

I don't have experience writing large LiveView apps but I would say that if you are familiar with any component-based frameworks (like React), I would take a look at SurfaceUI[1]. It simplifies a few "gotchas" in LiveView (though I would say they are very minor gotchas and worth learning about at some point) and gives you a component-rendering syntax more like React. Once you get going, you'll learn that LiveView doesn't have all the headaches that come with bigger React apps (like having to memoize functions or comparing props to avoid a re-render and whatnot). The recent release candidate for Phoenix 1.6 has made strides for a cleaner component syntax, but if you're having trouble with LiveView, Surface might bring some familiarity.

[1] https://github.com/surface-ui/surface


Influenced by your comment and others below, I created a shared spreadsheet where we can share our code structures & patterns informally & learn from one another.

https://docs.google.com/spreadsheets/d/10gCxVJyrme6Rv-LepqId...

Once there are enough contributions, perhaps a community member could write a blog article summarizing the major patterns - the use cases, patterns, benefits & drawbacks.

Don’t be shy!


Yes, I'm curious to hear about the patterns people use to structure their LiveView apps, particularly large ones.


I love LiveView, but I hate the GenServer behaviour, and LiveView inherits some of the messiness of the GenServer behaviour.

This is a pattern that I started using for my Liveview pages: https://www.youtube.com/watch?v=HA4h0cajgaA

Not much help if you are looking for full LiveView apps, though. I haven't checked it out yet, but I hope Surface addresses some of the structural organization issues.


I'm not a hard-core web dev, but it seems that from a high level, Phoenix is similar to the Smalltalk Seaside web framework that people were excited about 10+ years ago. Can anyone who is familiar with both speak to that? Are they really similar, or is there a key distinction I'm missing?

(I’ve become a big fan of Elixir as a former Smalltalker)


LiveView is a mirror that reflects how much accidental complexity is left in mainstream web tech stacks: it turns out people on your average React + Spring project spend the majority of their time on hassles instead of functionality.

With that being said, it's hard to replicate LiveView in most tech stacks in an as performant way. Blazor feels like LiveView at a glance but doesn't feel like it could handle as much as LiveView in production. The key difference is BEAM was designed like an operating system (processes, preemptive scheduling, etc) instead of a programming language runtime.


I will read your article more thoroughly because it does sound interesting, but I'm curious how well this server-side state works from a scalability/robustness point of view.

Statelessness is normally a feature: it doesn't matter which server the request hits, and we can elastically scale them up and down.

With a "session" being server-side stateful, you now have session affinity. Are you persisting/hydrating that session somewhere? Is it kept in memory and lost if the server goes down? How long does that session persist? Lots of questions...


Has anyone tried server-side Blazor in ASP.NET Core? How does that differ from the LiveView approach?


I've written multiple apps in server-side Blazor. For intranet apps it's fantastic. When the server is not located nearby, the delay gets more noticeable and it's not pleasant to use.

Currently on .NET 5 the lack of hot reload is the biggest issue. Otherwise it's really a breeze to build anything.


We use Laravel Livewire (inspired by LiveView).

It's been amazing to work with.

Thanks LiveView for inspiring Livewire!

It saves development time and solves lots of other problems along the way. I think it makes better web apps just by the way it works.

We've been using it on all our projects for the past year.

Alpine.js is a great complement to Livewire.

They call it the TALL stack. Tailwind, Alpine, Laravel, Livewire.

The performance is as good as or even better than Vue apps.


In Phoenix it's called PETAL :) Phoenix/Elixir/Tailwind/Alpine/LiveView


For anyone wanting a "SPA" like feeling of their apps without having to deal with all the SPA complexity, and not starting from scratch or being able to rewrite their applications, try Unpoly. It is really great and very underrated.

https://unpoly.com/


How does Unpoly differ from Hotwire (Turbo + Stimulus) or htmx + Alpine.js?

https://htmx.org/ https://alpinejs.dev/ https://turbo.hotwired.dev/ https://stimulus.hotwired.dev/


(My opinions)

Compared to Hotwire:

I think Hotwire is the best option if you're using Rails. From what I know, Hotwire needs a lot more backend "collaboration" than Unpoly, so you will need backend "hotwire" implementations. You can see these things being built for Django, etc.

Compared to htmx + Alpine:

Let me start with the disclaimer that I think Alpine.js is an aberration. Writing code inside HTML attributes is a fantastic way to break every editor's templating language plugin, syntax highlighting, etc.

Despite not liking Alpine, I think Unpoly handles a lot more use cases, and it feels a lot more "declarative" to me.

Compared to both:

Unpoly is kind of the "django" or "rails" in this category of tools. It gives you a lot of stuff: from a page-transition loading progress bar (a la Turbolinks), to ajax-replacing links, to modals, popups, sidebars, automatic block replacement (up-hungry), form posts, error-response handling, layers, polling... and it has an awesome way of writing your own "components" with the up.compiler API.

I honestly think the only reason Unpoly isn't the most popular solution in this space is because of bad marketing (well, no marketing at all...). I really think it is the best (if you're not using Rails, then I'd use Hotwire+Stimulus+etc...).


I’d try Inertia.js compared to Unpoly. It seems a more modern alternative. It’s like HOTwire Turbo (aka. Turbolinks), but with JSON over the wire. But no API needed.


Fly.io is doing amazing things. I'm really impressed with their work. If any of the Fly.io people read this, my hat's off to you.


I didn’t use elixir or Phoenix but I felt that this article is well written for people who don’t have much context on either of those.


This strikes me as sort of like Meteor, only without the suggestion that real time apps will be trivially easy to build.


I thought exactly this. I wonder if we can get author to elaborate on the differences with Meteor.


Not the author, but I can elaborate on the difference with Meteor since I have experience with both.

Meteor is not server generated but rather a SPA framework with a server side part to it. It helps you build a SPA app quickly and it also uses websockets for all the transport but usually you just send json data there.

I don't really like Meteor because the performance is not that awesome and there are problems with Node since it is single-threaded. Meteor still doesn't support worker threads or other similar solutions in Node, so if you have something blocking on the server side, the entire app will go down.

LiveView, on the other hand, is server-side only (except for the LiveView JS client). All the state is on the server, and when the state changes it will push out the changes in the template and rerender automatically. In Meteor the recommended stuff to use is pub/sub and Meteor methods, but you still have to rerender yourself just like in a normal SPA.

Meteor is thus simply a way to quickly build SPAs, while Phoenix and LiveView are a way to quickly build an SPA-like feel but with everything living on the server.

You can think of Liveview as Google Stadia but for websites. Everything is rendered on the server and you just get the updates.


If Liveview is Google Stadia, to what would you compare Meteor?


Not GP, but my take is that Meteor is more like Flash, if we're going to go with a game platform comparison...


Does this mean the entire state of each connected user is held on the server?

What's the resource cost?

And that adding an element to a list won't show up before a server round trip?

What's the latency cost?


Imagine building federated tech with this. ActivityPub stuff like Mastodon, Peertube, etc. The performance on small servers would be incredible.


https://pleroma.social is an ActivityPub server like Mastodon but in Elixir.


Would be awesome if a Pleroma front-end was written in this way. Just host one app that is ultra-performant.


When I was still working on Pleroma, I was stealth building a fully featured frontend on Hotwire + LiveView for some parts. Sadly this won't go further :-)


I've heard from career Elixir devs that the Pleroma codebase is not very idiomatic in terms of how it's all set up.

I have to wonder if a Phoenix / LiveView activityPub server+client could be built that would be compatible with it. Something that would appeal to existing Elixir devs.


Tbh, yes, it's not.

I had long-term goals to improve it, but long-term goals, especially when you're alone, don't improve anything on their own. Even more so given how much manpower was thrown at it by its sponsor; (personal opinion) it went too far downhill to improve its codebase.

Regarding ActivityPub servers: it's actually quite easy to build one -- but building one that is compatible with the current state of the fediverse is another story. Mastodon was built on OStatus, then moved to AP while keeping weird extensions and so on, complicating all of it (a lot). There's an attempt at this in Bonfire[1], which is loosely a Pleroma fork (started at Moodle's MoodleNet, forked by their workers).

You'd have better luck building a "Mastodon" client (which would work with Pleroma's client API) than wasting your time working on AP S2S for Mastodon.

[1]: https://bonfirenetworks.org


Wow bonfire looks like what I wanted!

> We're currently in the middle of a refactor to convert all components and templates from LiveView to Surface, which is a server-side rendering component library (built on top of Phoenix and LiveView) that inherits a lot of design patterns from popular JS framework like Vue.js and React, while being almost JavaScript-free compared to common SPAs.


> We're already exploring optimistic UI on the server, where we can front-run database transactions with Channels, reflect the optimistic change on the client, and reconcile when database transactions propagate

How far is this away, roughly? It's literally the sole feature I need to make the switch to literally any new (to me) web framework right now.


LiveView is great! I used LiveView recently while working on Zcash Block Explorer. Everything on the home page auto-updates in near real time (I didn't write a single line of JS!) https://zcashblockexplorer.com/


Couple of questions from someone who’s never tried Phoenix/Elixir:

1. How does this stack handle non-browser clients, like mobile apps or APIs?

2. Can this scale horizontally? i.e, can you throw in more servers when you need to? Since servers hold state in-memory (IIUC) I’m not sure about this one.


1. It's fantastic for non-browser clients. The community has native channel clients for ObjC/Swift/C++/Rust/Java, so you can do real-time things natively, and of course do standard JSON/GraphQL. 2. Yes, we scale vertically and horizontally. Because we have distributed pubsub and messaging baked in, the horizontal scaling story just works – you don't have to rearchitect your app to add more servers :)


You can have more servers and you route the client to the server that made the initial connection. Same as with any other web socket.


I didn't realize Fly hired Chris McCord. Wow!


How is it a good idea for every user interaction to hit the server?


Websockets are crazy fast and the BEAM has crazy fast IO (at the expense of slower CPU-bound tasks, but that is another story). LiveView also minimizes (to an obsessive degree) the amount of data that flows over the socket. The payload is almost exactly the size of the diff being rendered in the DOM (oftentimes this is simply the `innerHTML` of a node).

The advantage is stack simplicity. As stated in the beginning of the article, LiveView completely removes your need for any kind of API between your front and backend. It also makes it very easy (through JS hooks) to offload any interactions you want to the client if that makes more sense. But of course, if you end up offloading EVERY interaction to the client, you should be using a frontend framework--LiveView is very clear about not being suitable for every need--if you are building a super UI heavy app (like a text editor or painting app or the like), LiveView probably isn't going to cut it.


A server round trip is a cost. It's not a good idea or a bad idea.

I will pay the cost of a round trip if (a) it simplifies my life and (b) the cost is low enough. LiveView simplifies interactive app development (for me). Since I can run my Phoenix servers close to people, the round trip cost is usually very low.


This is not the idea. Interactions that don’t influence server-side state are suggested to be handled by JavaScript. Docs give an example for integrating AlpineJS, but you can use vanilla js or other libraries with little plumbing.


Whether it's a good idea or not really depends on your app, what you're trying to do, and who your users are.

Now if you want to avoid round trips to the server for interactions such as opening a modal, you can!

There are at least two ways to handle this - JavaScript hooks that let you attach JS to your DOM, or the light AlpineJS framework. The latter is a perfect fit for LiveView, and it's part of the unofficial go-to stack named PETAL - Phoenix, Elixir, Tailwind, AlpineJS, LiveView.
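
For example, a purely client-side modal toggle with Alpine never touches the server (a minimal sketch, Alpine v2 syntax):

    <div x-data="{ open: false }">
      <button @click="open = true">Open modal</button>
      <!-- shown and hidden entirely on the client; no round trip -->
      <div x-show="open" @click.away="open = false">
        Modal content
      </div>
    </div>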


The first 20-odd years of the web were mostly this way... somehow we managed.


It helps debugging and avoids any issues with caching (old JavaScript, unexpected JS errors, etc.).


This looks cool, but how does it work when your server is located 100ms away from the client?

Every interaction will take a minimum of 200ms to complete, which would become fairly noticable.


For things requiring the server to be there anyway – the exact same way an SPA would work, but a little faster round trip.

For things that should happen instantly on the client, you'd run some JavaScript on the client via a phx-hook (JS escape hatch), or use a tool like Alpine.js to handle purely client-side interactions.


Thanks, I read some of your other posts describing the same thing (and linked blog post), and I’m fairly confident that would indeed work in the majority of scenarios.

I’ll have to try it out sometime now ;) there is still some nagging sensation that I’ll run into a fairly major blocker somewhere, but right now I can’t articulate what I’m afraid of.


Dumb question: does Phoenix have out-of-the-box support for a development model where you can create a stateless web app?

I ask because it feels like Phoenix is rallying around the stateful approach of LiveView, and I just want to ensure that model isn't the only model of development Phoenix will support (short and long term).


Yes it does. By default, any Phoenix application works this way just as any other app.

One of the biggest perks to the BEAM VM is how well it handles managing state though.

Most other languages aren't designed to do this well, so stateless web apps focus on just passing information from the client to somewhere else that holds the state (database, redis, etc).

Nothing is really stateless, it's just a question of where you decide to hold the state.


Shameless plug. I built this game in LiveView recently as a side project https://robotrace.neillyons.io. I'm available for hire if anyone needs an Elixir dev :) Email on my HN profile.


A bit of a side track: if you want the LiveView feature but aren't ready to develop in a dynamically typed language (Phoenix LiveView is based on Elixir), and you're more familiar with the Node.js ecosystem, check out ts-liveview (which is based on TypeScript).


Does this kind of technology exist in other languages in a similar fashion? I am aware of Blazor for C#, but I believe that uses WebAssembly and ships the entire C# runtime/GC, making it very heavy.

It is so cool that this just has a small js layer client side.


Here is an awesome list of all such Liveview-like frameworks.

https://github.com/dbohdan/liveviews


Blazor has different modes/variants. I think the server-side variant is similar to liveview and doesn't need a .net runtime on the client.

> Alternatively, Blazor can run your client logic on the server. Client UI events are sent back to the server using SignalR - a real-time messaging framework. Once execution completes, the required UI changes are sent to the client and merged into the DOM.


Yes, Blazor server-side is similar to LiveView, there is also LiveWire for the Laravel / PHP community (from the alpine.js creator) and Django Unicorn for the Django / Python side.

Unsure on the Rails / Ruby side, maybe stimulus-reflex, but there is also Hotwire as well ofc.


As someone with loads of Python and Django experience, should I invest in learning Erlang first or dive straight into Elixir/Phoenix?

Would the experience be enlightening regardless if I choose to deploy apps with it and use it at work?


Depends what you are trying to accomplish.

If you are reaching for Django Channels in your Django project then yes I would investigate Phoenix, Phoenix.Channels, Phoenix.Presence, and Phoenix.LiveView.


Only if you want to go with LiveView. For Django, as I’m a Python dev too, I’m going with htmx+alpinejs. Or you can choose the Hotwire stack (Turbo+stimulus)


There is also Django Unicorn (https://www.django-unicorn.com/docs/), which is an implementation of LiveView for Django.


You don't need to learn Erlang first; you can dive directly into Elixir and Phoenix. The experience working with Elixir/Phoenix will be so good that it will be hard to go back to any other stack.


The only thing that prevented me from going fully into Phoenix was that there was no good library for authentication or an admin interface at the time.

Are there any good libraries like Devise and ActiveAdmin now?


There are good libraries around authentication and authorization. There was at one point an analogue to ActiveAdmin, but it looks to be a dead project now. I generally discourage the use of those kinds of interfaces but if you must, this is more current: https://github.com/mojotech/torch


Great write-up!

There's one small typo:

> Uploads us the existing LiveView WebSocket connection

"use"


How does LiveView compare to Blazor?


What is up with fly.io prices in India? It’s 3 times more than in Asia even.


"Do you remember when Ruby on Rails was first released? I do. Rails was also a revolution."

I remember and I disagree. It was dog slow, run by a guy who made slides that said "Fuck you." and was rife with memory leaks.


Technically it was probably leaking objects, not memory per se, but… I suppose that’s a bit of a pedantic distinction/memory leaked with the objects… but none of those things is incompatible with it being a revolution.


I worked for a gentleman[0] who, while not a programmer by trade[1], was an old Unix guy and a Linux early adopter. All of that to say that when he made a recommendation for me to explore this or that technology, I generally listened. He pointed me at Ruby on Rails.

I admit I came into researching Rails a little reluctantly, and pretty early on as well. That probably biased my opinion of things a lot more than it should have, but a few tutorials later, and after fooling around with a toy project over the weekend, I thought: "Neat. But either it's daaamn slow, or I'm doing something horribly wrong." I don't remember what I loved/liked, but I remember being annoyed at odd lagginess[2]. The more I investigated within the community, the more I felt like I was corresponding with "a bunch of kids". And I don't mean "kids" as in insulting-term-used-for-early-20-something-know-it-alls[3], but... like 12-year-old boys trying to be popular, and failing -- a bit immature with unnecessary drama. I ended up not exploring much beyond that weekend. It's not that I found the community to be unwelcoming; it really seemed like it was welcoming. It just wasn't a group I was positive I wanted to be a part of.

Of course, not too long after that, I recall discovering the term "Brogrammer" loosely tied to Ruby on Rails and I thought, "Yep! That's what I meant by 'kids'". I remember reading a rage-quit post from someone that hit the front page of HN[4] a little while later, and a few other things here and there. It seemed at a certain point, the things written about Rails that found my eyes were more frequently some form of drama/nonsense than anything telling me why I needed to look more closely at Ruby on Rails.

[0] Miss ya, Lou, if you're out there!

[1] And not in any way a middle-management guy -- he managed multiple development teams over the years and was put in charge of probably the hardest job in his career, managing 6 people with critical, mostly different, responsibilities.

[2] My apologies for the vagueness, it's out of concern that faulty recollection will result in me maligning something incorrectly. I did these exercises monthly, for about a decade. This one lasted about 5 days because I decided it wasn't worth further effort; I wasn't going to use it.

[3] Yeah, I was absolutely one of those and have found that calling out others faults is really stupid when you share them.

[4] Zed Shaw; not sure where the original is but if memory serves, calmer heads prevailed after a short time, he took down the original and I didn't Google so I'll shut up and share the link I found: http://harmful.cat-v.org/software/ruby/rails/is-a-ghetto


Looks like the brogrammers found you, friend. You’ve also been downvoted into oblivion. Know that I see you.


Thanks for the downvote, you Rails fan you. ;-) My comment is entirely factual.


> Imagine your boss asks you to display how many other visitors are viewing a nearly sold out item to convert sales

Yeah, you could say “no, such overt manipulation is definitely unethical”.

But on a more technical side, I like to be a progressive enhancement purist (though sometimes it's not the pragmatic option) and I don't see any good reason why client-side scripting should be required for an online store. Can LiveView be used as a progressive enhancement, producing traditional HTML with links, form submissions, and all the rest that will work without client-side scripting? If not, I don't think LiveView is suitable for that specific sort of site; and if it is, then surely you can't know how many visitors there are on a given item? To be sure, the estimate will still be more accurate than the likes of (30 + 20 * Math.random())|0, which is maliciously manipulative, but it's not correct any more.


> In the process of building Phoenix, I believe we've hit on some new ideas that will change the way we think about building applications in much the same way Rails did for CRUD apps.

I think that's the issue with Phoenix. An actual opinionated vision like the Rails one came as an afterthought.


That’s a little ironic since, in a real sense, Rails itself came as an afterthought, not a fully formed vision. DHH built Basecamp without a framework, and then refactored a lot and extracted the core of what he’d happened to build into a very general reusable framework. You could criticize Phoenix for not having the same pedigree I suppose, of being first and foremost a product for getting things done, and only a framework as a secondary decision, but saying rails started with an opinionated vision and Phoenix didn’t is a bit funny, or like the pot calling the kettle a water retaining vessel.



