One thing worth highlighting in the post is that Phoenix now uses Ecto 3.0. This latest version of Ecto separates Ecto's data mapping and validation from Ecto's SQL adapters. Practically this means you can use Ecto's changesets when you either don't need persistence or aren't using a SQL database.
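For instance, a schemaless changeset lets you validate plain user input with no Repo, no database, and no SQL adapter. A minimal sketch (the field names here are illustrative, not from any real app):

```elixir
# Validate a plain map of params with a schemaless changeset —
# no persistence layer involved.
import Ecto.Changeset

types = %{name: :string, email: :string}
params = %{"name" => "Jane", "email" => "jane@example.com"}

changeset =
  {%{}, types}                          # data + types stand in for a schema
  |> cast(params, Map.keys(types))
  |> validate_required([:name, :email])
  |> validate_format(:email, ~r/@/)

changeset.valid?
```

The same changeset pipeline you'd use against a database-backed schema works here unchanged, which is exactly what the adapter split enables.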
Been waiting for this release to start playing with Phoenix.
I'm super interested to see how LiveView turns out. Could it be a game changer and free us from the JS framework morass? Anyone know if there's a public LiveView repo or work to look at somewhere?
There's no public repo yet, but I'd like to open it up soon. I've been ironing out the details of the programming model before making it public – namely nested live views and how the remote client spawns and reconnects to the server processes. I made solid progress last week, and posted a sneak peek of the nested views and error recovery:
https://twitter.com/chris_mccord/status/1059273315666350080
I still usually use React or similar with Phoenix, but it's pretty easy to sprinkle things like that across SSR pages. It's a really great tool overall, and I find that once I'm in a good groove I get a ton done in Elixir. The tooling is amazing as well. My team runs Credo and Dialyzer alongside tests in CI, and the whole thing finishes in 2 minutes. It's really a pleasure to work with.
Thank you Phoenix team - changed my career for the better. My stress levels dropped drastically since moving to Elixir and Phoenix.
Immutable language, blazing fast predictable performance, super quick test suite, real-time capabilities that don't require gigabytes of RAM... the list goes on.
There aren't that many production Elixir jobs out there. I'm hiring full-time remote or NYC Elixir devs if anyone is looking for this kind of development experience. bill@stellaservice.com
Yay for webpack as default.
My perception of phoenix is not only that it is the best in class but that it also keeps getting better instead of bloated.
Not to take away from your excitement or the validity of the change, but non-frontenders find webpack rather bloated. To be fair, I've met more than one front-end developer with the same perception.
There's no panacea when it comes to front-end build tools, but the webpack 4 release vastly improved configuration and docs, which was a big reason for us to make the jump. We've maintained the old Brunch workflow for non-JS pros: you place JS in assets/js and CSS in assets/css and things Just Work™. So folks fond of the old Brunch way can continue to handle assets exactly as before, but webpack will be used underneath. I also like to think we now strike a nice balance between zero-hassle asset bundling and professional front-end engineers running phx.new and quickly getting to work.
As a professional front-end engineer I thank you. The old system was a constant pain point for me. Every time I started a new Phoenix project I agonized over how to integrate the front-end workflow I liked with Phoenix. This will be a big productivity win for me.
> Not to take away from your excitement or the validity of the change, but non-frontenders find webpack rather bloated. To be fair, I've met more than one front-end developer with the same perception.
Regardless it still seems to be the build system most major platforms and large orgs are choosing to integrate with (for better or worse). Which comes with its own benefits.
It might not be the best build system for a raw new js project but in terms of broad support, stability, adoption rates, 'no one ever got fired for choosing IBM', etc it seems to be the system of choice.
Those were two separate points; in retrospect, I should have used a line break there.
My point was that if you look at the changes and new features, all of it seems to me as stuff that improves core functionality, there's no feature creep.
Yay webpack because that's what I use so less work for me next time I start a new phoenix project (:
Elixir and Phoenix seem to have very fast release cycles. I don't know if it's a good or a bad thing. It's great to add a lot of features, or remove the ones that don't really work, but it also renders literature obsolete pretty quickly.
This book actually should be in great shape since Phoenix 1.3 included the larger re-org of project code. A book written for Phoenix 1.3 will be entirely relevant for Phoenix 1.4. I think we had a total of two deprecations, with no breaking changes. As another commenter said, our last major 1.3 release was 2017-07-28, so if anything, I would expect folks asking us to speed up the release cycle a bit :)
Yes the book will probably be mostly relevant technically. But from a marketing point of view, it's dead. If I know nothing about phoenix, I check the website and see that the current version is 1.4, am I gonna buy the book that covers up to 1.3?
Re: my comment about release cycle, you're right. It was probably more applicable to elixir than phoenix.
It's been over a year since the last, albeit big, point release. That's not necessarily fast compared to Rails, even in the early days [1]. I think the main problem is there aren't enough quality independent tech blogs like there were when Rails came onto the scene.
There aren't as many, true, but I'm not sure you necessarily need them most of the time. You aren't going to end up tangling with ActiveRecord, upgrading for two months, or running into bizarre class-loading issues. There's just less magic to worry about.
Fewer releases with breaking changes definitely feels like a feature to me at this point. It's also nice that many of the libraries Phoenix depends on are decoupled from the core.
This reminds me of Two Scoops of Django by Daniel Greenfeld and Audrey Roy. When they first started the series, they were trying to pump out a book every minor release, but Django development is pretty fast. After a few versions, they decided to only start writing for LTS releases.[1]
The Phoenix team has been very accessible for help and growth of the language. Very happy to work with it every day and to be able to contribute back occasionally!
Have a question for Phoenix veterans. The server performance in benchmarks like TechEmpower hasn't been stellar, yet concurrency is one of Phoenix's most touted features. If I wanted to build an app like Hacker News and host it on a cheap DigitalOcean droplet, how many concurrent users could I hope to serve with request latency under 100ms? A rough ballpark figure would help me visualise Phoenix's performance better.
Big thanks and props to the team on this, very exciting! Looking forward to pitching in on the guides or other areas where some contrib help may be needed.
When you talk about BEAM supervision magic, you forget that in the end the BEAM itself is just a regular OS process. If that process is killed, the supervision magic won't work.
In that respect, it's no different from a Go process.
Is there any benchmark comparing the two architectures, one using OTP, the other using containers + Kubernetes (let's say in Go or Java)? In terms of availability, performance, maintenance cost, etc.
You can use containers + Kubernetes with OTP; they are not mutually exclusive. Daniel Azuma gave a talk at the most recent ElixirConf on this subject.[0]
Fun fact: Firefox used to be called Phoenix. The rename happened over a trademark dispute with another product called Phoenix (a database, if I recall correctly).
They first renamed it from Phoenix to Firebird due to a trademark dispute with Phoenix, the BIOS company (which doesn't exist any more). They then promptly got called out by Firebird, the open-source database project, and eventually had to change the name again, to Firefox.
Yeah, you'd think that if you're forced to change the name of your product because it clashes with an existing software project or company, you'd take some time to make sure the new name doesn't also clash with one.
I wonder what the biggest deployment of Phoenix is at this point.
I've written a few cowboy apps, but my professional Erlang experience has only been with chat servers that do not use HTTP. I'm not sure if I'd choose Erlang to build web services in. Cowboy is a single maintainer project, and Go has had HTTP2 support for ages now.
Cowboy maintainership has not been an issue, and there are other options for web servers in the Erlang and Elixir ecosystem, e.g. elli, chatterbox, and Ace. And to be fair, chatterbox is an Erlang HTTP/2 server that has been around for three years now. Also, please consider that blatantly discounting the ecosystem and casting doubt on well-maintained projects isn't a very high-value addition to this discussion :)
For other readers: you might choose Erlang or Elixir for web services if you want high scalability with distributed communication for free, which is super important for example when building any real-time web service. It's what allows Phoenix to support millions of channel connections on a single server:
https://phoenixframework.org/blog/the-road-to-2-million-webs...
Erlang and Elixir's concurrency model also differs from what Go offers out of the box in that you build applications from supervised processes with built-in failure recovery.
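A minimal sketch of that model (module names here are illustrative): a one_for_one supervisor automatically restarts its worker whenever the worker process crashes.

```elixir
defmodule Demo.Worker do
  use GenServer

  def start_link(arg), do: GenServer.start_link(__MODULE__, arg, name: __MODULE__)

  @impl true
  def init(arg), do: {:ok, arg}
end

defmodule Demo.Supervisor do
  use Supervisor

  def start_link(arg), do: Supervisor.start_link(__MODULE__, arg, name: __MODULE__)

  @impl true
  def init(_arg) do
    # :one_for_one — if Demo.Worker exits abnormally, only it is restarted.
    Supervisor.init([Demo.Worker], strategy: :one_for_one)
  end
end
```

If Demo.Worker crashes, the supervisor respawns it immediately without the rest of the system noticing; that restart behavior is what "built-in failure recovery" refers to here.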
What does this built-in failure recovery mean, functionally?
I mean, consider e.g. some code in a standard Rails monolith raising an exception. The exception bubbles up and is caught by a middleware that displays a status 500 page. Isn't that basically failure recovery too?
Or for the same Rails monolith, one worker process hits bad code that eats memory until it is killed by an OOM-killer. Unicorn/puma notices that the worker process is missing and restarts it. Is that more analogous? In either case the request is failed but the server pretty much "recovers", ready to accept new requests.
So what is unique about the supervised processes with built-in failure recovery approach? Or am I viewing this at the wrong level of resolution or something?
Heroku uses cowboy for their routing layer. If you inspect the headers in a response from an app served on Heroku, cowboy is the server. I would guess Heroku is the largest cowboy deployment. As for largest Phoenix deployment, I'm unsure.
It's a legitimate concern. The only alternatives to Cowboy for HTTP/2 are chatterbox and Ace, which are both single-maintainer and, as far as I know, not in wide use (even by Erlang standards). For HTTP/1 you have elli, but it's also never been widely deployed, and it's been unmaintained for years.
Not really. As mentioned elsewhere, there are many contributors to Cowboy, and major companies use it as well: Heroku, AWS (in CloudFront, last I checked), and Incapsula too.
Perhaps, but Go's threading model uses a lot more memory than Erlang and Elixir's, and goroutines share memory, whereas Erlang and Elixir processes each have their own isolated private heap. Erlang and Elixir's supervision trees are also pretty damn great. OTP is solid and has been thoroughly proven. Elixir also handles dependencies much better than Go, in my opinion. I'm sure Go is nice too, but I don't think the backhanded compliment about Cowboy, Erlang, and Elixir is warranted.
Additionally there are a number of features that make Ecto 3.0 a great release: http://blog.plataformatec.com.br/2018/10/a-sneak-peek-at-ect...