Hacker News
Elixir, Phoenix, Absinthe, GraphQL, React, and Apollo (schneider.dev)
526 points by schneidmaster on April 21, 2019 | 153 comments



The stack described above is the one I’ve been working with professionally for the last year, and I wouldn’t recommend it.

The main reason is the absurd amount of complexity, with costs heavily outweighing the benefits gained from the solution.

For example, the simple task of adding a new entity consists of:

On the backend: creating a migration, creating the data entity (Ecto), writing the structure module (sanitization, validation, basic logic if needed), mounting queries, writing input mutation(s), writing output query/queries, and unit testing.

On the frontend: creating the component, creating the form, writing the GraphQL queries, writing the mutation, wrapping components/queries in components, connecting the query to components, providing action handlers for inputs, unit testing, and integration testing.

Now I have an authors list. And even though I’m full stack, I haven’t yet spent a single minute on getting proper UX design in place. Oh, do we need to add the author’s birthdate? Dang, let me adjust all of that.

In my opinion, technical debt accumulates faster than in other solutions. GraphQL is complex. React (done right) is complex. Apollo is complex (Elixir is simple, yet it’s only one cog). Deciding to do file uploads in GraphQL led me down a rabbit hole that took at least a week to dig out of.

When trying to find the source of all our development issues, my thoughts go toward GraphQL. Maybe it’s too complex, or we didn’t have enough experience with it? Yet it was really nice to work with once all the backend and frontend boilerplate was already written. It makes sense, even though it requires some heavy thought. Maybe it’s Apollo, which locks you into one of two specific trains of thought, or Absinthe, which requires side-hacks to get slightly more advanced features working with Apollo, like file uploads or query batching.

Looking at it from a distance, I’d say this is just too much. Every single part of this stack adds overhead to the other parts. Maybe with more specialized team members it would get easier, but there are only three of us, and with me, as a full-stack dev, constantly switching between all of it, it was a tiresome, low-productivity effort. Right now we’re disassembling the app into a classic REST app, and we’re seeing development speed increase on a week-to-week basis, even though we had to scrap most of the frontend.

I guess there would be some benefit in writing all of this up properly, since the above doesn’t even scratch the surface of a year of development issues with this stack, but even in this very "short" form it may serve as a word of warning that this stack is not necessarily something you want to take on.


Phoenix Live View[1] fills the gap between server-rendered HTML pages and JavaScript-rendered front-ends. If you’re after a responsive web UI without needing to learn so many disparate frameworks and technologies, it could be a good fit.

[1] https://github.com/phoenixframework/phoenix_live_view


Phoenix Live View is very new and not appropriate at all for real-world use yet.


> Phoenix Live View is very new

The technique isn't new; it's just server-side rendering over a websocket plus DOM patching (morphdom is also quite old).

> not appropriate at all

"At some" would be appropriate. Pretty much typical CRUD (real world) that needs to fire requests to server anyway (e.g. form, business logic validation).

The only reason not to use it _right now, today_ is that it's not 1.0 yet.
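The underlying technique is easy to sketch. Here's an illustrative Python toy of the static/dynamic template split LiveView uses (not its actual wire protocol): the server re-renders on state change, ships only the dynamic slots that differ, and the client patches the DOM morphdom-style.

```python
# Toy sketch of the LiveView technique (illustrative, not the real protocol):
# a template splits into static strings and dynamic slots; only changed
# slots travel over the websocket, and the client patches the DOM.

def render(count):
    # indexes 0 and 2 are static; index 1 is the dynamic slot
    return ["<p>Clicks: ", str(count), "</p>"]

def diff(old, new):
    # slot index -> new content, only for slots whose content changed
    return {i: b for i, (a, b) in enumerate(zip(old, new)) if a != b}

v0 = render(0)
v1 = render(1)
print(diff(v0, v1))  # {1: '1'} -- only the counter travels, not the markup
```

The static parts never need to be re-sent, which is why the payloads stay tiny even for large templates.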


I'm not one of those people who think a <1.0 version number is an absolute dealbreaker, but when the first commit is only 6 months old it's probably not the wisest move to make it a cornerstone of your stack.


We've also felt the complexity of React and Apollo. It works best when, as others mentioned, you've got distinct teams that can focus on each part. In situations where that isn't the case, the same decouplings that make it easier for teams to operate independently just add overhead and complexity.

We're in a similar boat these days, so in fact our latest projects are back to simple server-side rendering, but we're still making the data retrieval calls with GraphQL. It ensures that the mobile app and reporting tools we're also developing will have parity, and we don't need to write ad hoc query logic for each use case. The built-in docs and validations have simply proven too useful to pass up, and you really don't need a heavyweight client to make requests.


> Maybe if I had more specialized team members it would get easier

It really would be easier. We are using this stack, and have separate backend and frontend devs. Each loves their side of the stack. Being backend myself, I don't find it too time-consuming to get new entities going. However, I imagine if you were doing the whole stack and repeating the Ecto schema, Absinthe schema, and Apollo queries, it might get more tedious. I particularly enjoy how easy it is to modify the schemas once everything is set up. If we ever need to expose more data, it is usually done within minutes. There is a massive benefit in the forced standardization of GraphQL too. Being rigorous about standardizing APIs yourself (how you do filtering, sorting, embedding, nesting, and so on) is tiring and a waste of time; you end up writing mini-frameworks. Absinthe and GraphQL reduce this pain for us considerably.


That was one of the reasons I chose this stack. The plan was to hire a team just after the stack was set up. Unfortunately the financial plans toppled, and we (2 full-stack devs + a front-end UX person) were stuck with a very complex architecture designed for ~10.

If I were working on only one part, it would be great, but instead of parallelizing the effort it was sequenced, which kind of sucked.


It's premature optimization at the architecture level. I've seen it happen so many times.

On top of all you said, the GraphQL stack is horrible for caching. You will not have this problem until you reach really high traffic, but once you do, it will eat you alive.

Unlike with a REST endpoint, you can't cache by URL. You can't use HTTP headers. You don't know beforehand what GraphQL query will come, and even with batched queries for different types it's super hard to optimize. It's very easy to end up in N+1 query hell.
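The N+1 pattern is also why dataloader-style batching exists on the server side. Here's a minimal sketch of the idea in Python (illustrative only; Absinthe's Dataloader and the JS `dataloader` package add per-request caching on top of this): resolvers enqueue keys, and a single batch function fetches them all at once.

```python
# Sketch of dataloader-style batching: resolvers enqueue keys, then one
# batch function fetches them all, turning N lookups into a single query.

class BatchLoader:
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn  # takes a list of keys, returns {key: value}
        self.queue = []

    def load(self, key):
        # enqueue the key; return a deferred getter instead of fetching now
        self.queue.append(key)
        return lambda results: results[key]

    def dispatch(self):
        # one query for all distinct queued keys, not one query per key
        unique = list(dict.fromkeys(self.queue))
        return self.batch_fn(unique)

calls = []  # record how often the "database" is hit

def fetch_authors(ids):
    calls.append(ids)
    return {i: f"author-{i}" for i in ids}

loader = BatchLoader(fetch_authors)
getters = [loader.load(aid) for aid in [1, 2, 1, 3]]  # four resolver calls
results = loader.dispatch()                           # one database query
print([g(results) for g in getters])  # ['author-1', 'author-2', 'author-1', 'author-3']
print(calls)                          # [[1, 2, 3]] -- a single batched fetch
```

It doesn't solve HTTP-level caching, but it does collapse the per-field query storm inside a single request.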

The lesson I learned is to use the boring stuff until it really needs to scale up. A REST API with static HTML and some sprinkles of JS will get you to the phase where you actually need to start using React, GQL, etc.

GQL trades a lot of things for flexibility, but 99% of the apps don't need that in the first place.

But hey, on the upside everyone can put in their resumes that they used all the new hot shit :)


> it's super hard to optimize. It's very easy to have an N+1 query hell.

Both Postgraphile and Hasura deal with this. I have no idea about Absinthe.


Similar experience, but without GraphQL. We had server-side rendering with a Node server. Our production server became a Node farm, with Phoenix + PostgreSQL requiring less than 1 GB of RAM and Node using at least 8 extra GBs. We eventually ditched SSR: we now just send the React app and wait for it to render. We're back to 1 core and (mostly unused) 4 GB. It's a business application with a complicated UI; customers don't mind waiting a couple of seconds if they want to start from a bookmarked screen.

For a simple UI I'd generate HTML server-side with EEx and spare us the cost of front-end development. The front end is also a productivity nightmare: the amount of work needed to add a single form field with React/Redux is insane.


Just a quick one: why would you need Redux for forms? That is, in my opinion, total overkill.

My forms either have their own state or (preferred) just use Formik for all of this. In my stack, this means I just add a field in the GraphQL schema (backend), add it to the query, add the Formik field + Yup validation, and I'm done.


Some people would argue that if you're using Redux, also having local state logic is an anti-pattern.

That would mean that if you use Redux, a form also requires actions for form update/submit/success/error, and the form data should be stored in the Redux store.

That is one of the main issues I have with Redux, which I feel adds automatic complexity for simple things, but at the same time I'm not sure it's very good to have a mix of things happening from store/actions/reducers and others from local state/ajax.


> Some people would argue that if using Redux, also having local state logic is an anti pattern.

I won't disagree that this is a popular opinion, but there's little practical benefit to storing state that's truly local to a single component (or a very small tree) in Redux just because it's there.

Even the Redux maintainers say it's perfectly acceptable to use local state - https://redux.js.org/faq/organizing-state#do-i-have-to-put-a...


I don't see how you can blame something for adding complexity based on what other people _think_ is an anti-pattern.

In fact, most of the time you don't want to update your store before you know the data has been validated anyway. The store should always be the source of truth, but that also means it should be valid.

That's the approach I am going with in any case when working with some kind of global state.


Abramov advises against using Redux for forms.


Or use the browser's built-in form state management: https://medium.com/@everdimension/how-to-handle-forms-with-j...

Bonus: it's almost certainly more accessible than custom solutions.


Any idea why SSR used so much RAM? I wonder if the virtual DOM approach of React contributed substantially to it, and whether something like Svelte (https://svelte.dev/) would do much better.


We investigated it a little then decided that finding a proper solution wasn't worth our time. This is what we discovered.

We had a pool of 4 Node instances which is the default of the solution we were using (a patched version of https://github.com/hassox/std_json_io)

Each Node instance has a 1 GB memory limit (the heap? It seems Java-like). We failed to find a way to raise it, but again, we didn't invest too much into it beyond some googling. It seems there used to be a command-line option for that, but it doesn't work anymore.

Each hit to Node raises the memory usage until it gets to 1 GB and throws an error and gets recycled, which unfortunately translates to a 502/503 to the client. We can intercept those errors in Elixir and try again but it's far from ideal.

To get fewer errors we naively decided to increase the number of workers, but then we also had to increase the server's RAM. The first hit for each client gets served by Node, so eventually Node's resource usage dwarfed Elixir's. We felt like we were doing it wrong (I'm sure there is a way to get a saner setup) and decided to turn off server-side rendering. Nobody complained, and we're saving some $40/50 per month on that single server, plus our time, which is worth more than that.

I think that projects with little load should run on low tech uncomplicated solutions: a reverse proxy and an application server were enough in the 90's for the same scenario and are still OK now.


What load did you have? Was SSR used in private routes as well?

In my experience, SSR should not introduce more complexity than you already have.


This is the best part: "Main reason is the absurd amount of complexity with costs heavily outweighing benefits gained from the solution." Couldn't agree more! Thanks, xlii.

I will remember that one: "Right now we’re disassembling app to a classic REST-app and we’re seeing development speed increase on week-to-week basis". Thanks.


Thanks for this. I'm not a web dev, but yes, after seeing the 3rd or 4th "and then we do this" I've just started scrolling towards the end of the post and got bored fairly quickly. My thoughts exactly: this seems way too complex for creating just a few pages; too many dependencies, too many things to remember and update.


I want to say: any sufficiently complicated REST API contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of a GraphQL API.

Although complex, GraphQL (done right) is much easier than REST (done right).


The real key part of this phrase is "sufficiently complicated". GraphQL shines in certain scenarios, but not every API needs this complexity.


Unless I'm missing something, Hasura requires much less effort on the back-end than what you're describing with Absinthe. Hasura runs in its own process (or processes; it doesn't have any state of its own so it can scale), and it can deliver events to your back-end via webhooks, so it doesn't matter what language you use for the back-end.

As for file uploads, it seems to me that the best way to do that is to have the front-end upload directly to your cloud storage service. Both S3 and Google Cloud Storage have a feature called signed URLs, where your back-end can create an object, grant the necessary permission, then give the client a temporary URL to upload to that object. Then just store the URL in the database.
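That flow can be sketched in a few lines. The following is a hypothetical HMAC-based illustration of the signed-URL idea only, not S3's or GCS's actual signature scheme (in practice you'd call their SDKs, e.g. boto3's `generate_presigned_url`); all names here are made up.

```python
# Hypothetical sketch of signed URLs: the backend signs a short-lived grant,
# the client uploads directly to storage, and storage verifies the signature.
# Real services (S3, GCS) use their own signing formats; this is the concept.
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # shared with the storage service, never the client

def make_upload_url(bucket, object_name, expires_in=600):
    expires = int(time.time()) + expires_in
    payload = f"PUT:{bucket}/{object_name}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    # The client PUTs the file bytes straight to this URL, so your
    # backend (and your GraphQL layer) never touches the upload itself.
    return f"https://storage.example.com/{bucket}/{object_name}?expires={expires}&sig={sig}"

def verify(bucket, object_name, expires, sig):
    # What the storage service would check before accepting the PUT.
    payload = f"PUT:{bucket}/{object_name}:{expires}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and int(expires) > time.time()

url = make_upload_url("avatars", "user-42.png")
print(url.startswith("https://storage.example.com/avatars/user-42.png?"))  # True
```

The design point is that the mutation only returns a URL string, which sidesteps the whole multipart-upload-in-GraphQL rabbit hole mentioned upthread.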


Most of the complexity I've seen people complain about seems to come down to a lack of understanding or proper tooling, and, most of the time, the codebase itself.

Newer technology usually adds a ton of different patterns and abstractions that can hide a lot of things from you, so it becomes hard to understand unless the information is presented as a 101, or you invest all your time in reading the documentation for each individual part of everything, which, to be frank, nobody has time for.

I have used GraphQL with Apollo and React for the last couple of years on different kinds of projects, and what I have noticed is that the tools themselves, while quite abstracted, also try to support a lot of edge cases, which can make them hard to apply to your project. I had to either use other libraries or build my own that helped me abstract the things that were most commonplace but otherwise required a lot of boilerplate.

I have found tremendous value in following the philosophy of not pre-optimizing for everything until it becomes a burden to work with. It can be hard to determine when something is going to become so big that it will be difficult if not impossible to refactor, but you learn that as you go.


I agree! You shouldn't use every fancy new thing to build your application just because it exists.

We also took a step back, removed the GraphQL stack, and use a simple, clean REST API only. It has increased our productivity, and we don't have to divide our time across another module that must be maintained too. Before, we used both REST and GraphQL, because it made no sense to put everything into GraphQL.


I assume you're keeping Elixir/Phoenix as a RESTful API, with React on the front-end? I've heard Redux suffers from similar complexity issues, so what will you do?


Yes. Elixir and Phoenix are wonderful and still my favorite language/framework and they are still backing things.

For front end I wanted to move back to Ember since I had a lot of positive experience with it but that was impossible due to the business requirements.

As for Redux, I try to stay away from it, following the philosophy of "you don't need it unless you know you need it". Cargo-culting made Redux a default piece of the stack, and it's another piece that's very complex and sensitive to mis-design.

Instead of using Redux we rolled our own adapter, since we had to work with some data outside Apollo, and we are rather happy with it. Not sure how it will scale to React Native, though.


> Stack described above is the one I’ve been working on for professionally for the last year and I wouldn’t recommend it.

Any alternatives that you would like to suggest?


> *Right now we’re disassembling app to a classic REST-app*

Still with Elixir/Phoenix, or did you scrap even that part?


Nah. Elixir stays ;)


Hey folks! Co-author of Absinthe here, happy to answer any questions about it. The post here is good, although these days we recommend using Dataloader over Absinthe.Ecto. Dataloader extends the idea behind Absinthe.Ecto while providing in-request caching, pluggable backends, and easier query manipulation.


I've got an app with back-end OAuth-based login. I had a bit of a headache integrating the sessions with Absinthe and finally arrived at this in my Context module:

```

  # Plug callback: build the Absinthe context for this request.
  def call(conn, _) do
    context = build_context(conn)
    Absinthe.Plug.put_options(conn, context: context)
  end

  # Absinthe.Plug before_send hook: runs after execution, so we can
  # react to flags that resolvers/middleware set in the context.
  def before_send(conn, %Absinthe.Blueprint{} = blueprint) do
    if blueprint.execution.context[:logout?] do
      Auth.drop_current_user(conn)
    else
      conn
    end
  end

  # Pull the bearer token from the request and look up the current user.
  defp build_context(conn) do
    with ["Bearer " <> token] <- get_req_header(conn, "authorization"),
         {:ok, data} <- MyApp.Token.verify(token),
         %{} = user <- get_user(data) do
      %{current_user: user}
    else
      _ -> %{}
    end
  end

  # rest of the file
```

Then I made a Logout middleware that sets logout? to true in the resolution context.

Is digging into the Blueprint as in the code above necessary? Is there a simpler way of solving this?


Good to know, thanks! I wasn't familiar with Dataloader, I'll have to check it out :)


I can second this, we've been using dataloader in production for almost 2 years now. Fantastic library.


I just started a new app, and I decided to try Phoenix LiveView. It's really simple and easy to work with. It won't work well with all kinds of apps, especially those needing offline support, but for many, many apps/websites it is a good fit.

The last app I did was a mobile app written in Elm with Phoenix. I can't recommend Elm enough; it makes the JS side much easier to work with. For communication I used a simple REST-oriented API.

While I understand why GraphQL exists, I think it is a major addition in complexity for little to no benefit. I tried it and wrote half a project with it, but then removed it. When things get complex (for example, let's say the user.email field can be redacted under some conditions), your endpoint becomes really hard to manage. GraphQL certainly has a use for large APIs with graph-oriented data sources (well, like Facebook), but that is a specific use case.


Hm, I disagree on the last point. GraphQL's utility has nothing to do with whether you have a graph-oriented database. The "graph" part is merely talking about the means by which queries traverse the type system. Since REST APIs can easily be modelled as a GraphQL schema, they have a lot of overlap in utility.
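That mapping is easy to see in miniature. Here's a toy Python resolver walk (illustrative only, not a real GraphQL engine; the data and field names are made up) showing how a query simply traverses REST-shaped data along the type system:

```python
# Toy resolver walk (not a real GraphQL engine): the "graph" is just how a
# query traverses the type system, so REST-shaped data maps onto it directly.

users = {1: {"name": "ann", "post_ids": [10, 11]}}
posts = {10: {"title": "hello", "author_id": 1},
         11: {"title": "world", "author_id": 1}}

def user(uid):
    # roughly what REST /users/:id plus /users/:id/posts would return
    u = users[uid]
    return {"name": u["name"], "posts": [posts[p] for p in u["post_ids"]]}

def resolve(obj, selection):
    # selection: {field: sub_selection or None}; walk only requested fields
    if isinstance(obj, list):
        return [resolve(item, selection) for item in obj]
    return {f: (resolve(obj[f], sub) if sub else obj[f])
            for f, sub in selection.items()}

query = {"name": None, "posts": {"title": None}}
print(resolve(user(1), query))
# {'name': 'ann', 'posts': [{'title': 'hello'}, {'title': 'world'}]}
```

No graph database anywhere; the "graph" is purely in how the selection set walks nested fields.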

Basically, if you need an "API-driven" front-end, the only thing that might make GraphQL harder than REST for a given language is a lack of libraries. If you don't need API-driven, then just do things the traditional way, or try Phoenix LiveView.

The one caveat is that GraphQL (especially with Relay) maps exceptionally well to the component-based way of building things. So if you've chosen that paradigm, GraphQL might still be a good fit, even if you're just building a website.


I want to like Elm, but every time I try to pick it up it's always my biggest time sink.


Could you elaborate? I'm on the fence about Elm mostly because of 'ecosystem issues' (single dev, various complaints about communication, etc.), but more than once I've almost started a project because I figured the practical benefits would still be worth it.


Does anyone here who is working with elixir professionally have a sense of what kind of mastery is needed to jump into an elixir dev role? I've used it on and off for ~4 years at this point and in a few side projects (most recent one using everything in the title, weirdly enough) but can't really tell if I'm "qualified" to look for a job in it. It's this weird loop of "I've only done something as a hobby so I'm not qualified to do it professionally" vs "If I don't use it professionally I'll never be qualified to do so".

Same goes for react.


Smart teams hire great engineers and trust them to get good with the stack at hand. Don't get hung up on how much experience you have with a particular stack when looking for jobs.

I've hired Rails developers for Django roles and seen them get productive within weeks.

If you've used Elixir on-and-off for four years you have more experience than the majority of engineers who might be considered for an Elixir role.


> I've hired Rails developers for Django roles and seen them get productive within weeks.

What did you like about the Rails developers that made you choose them for the role? Basically, I want to know: irrespective of the language, how do I show my "quality as an engineer"?


oh it's simple, you just have to invert a binary tree


If you’re looking for an Elixir dev role, we’re growing our team at HIFI and hiring junior and senior engineers in NYC (also open to remote).

We’re building a business management and financial services platform for music creators and rightsholders. Our stack is GCP, Elixir, Typescript and React Native.

Feel free to email me at the address in my profile. Would be happy to chat.


> "I've only done something as a hobby so I'm not qualified to do it professionally"

Definitely don't let this hold you back. As long as you're up front about where you are, and are willing to learn, lack of professional experience with a particular language is by no means a deal breaker.


I think for Phoenix it's quite easy to deal with unless the project requires a lot of Erlang/OTP stuff.

If it's just a plain old request/response-style website (I think most projects fall into this bucket), and you can handle other frameworks like Rails or the Python ones, you can handle the same thing really well after some usage. Some basic knowledge of the language, MVC, and Ecto would do.

Though there are a lot of React developers already using React who don't know what they are doing. Using React actually requires a good sense of the ecosystem: which set of libraries to use, which patterns to follow, etc. Plus some basic concepts like state/props/immutability/HOCs.


I do find that I feel a weird kind of 'tension' as an Elixir programmer where on the one hand I think I'm pretty proficient with Phoenix and Elixir's syntax, conventions, and ecosystem, but way out of my depth when it comes to OTP, distributed systems, and so on.

It almost feels like these are two completely distinct worlds, but because they're part of the same language and community, it's difficult to feel proficient or even 'adequate' at times.

I never quite felt this way as a PHP/Ruby/JavaScript developer, as for the most part the challenge seemed to be knowing the right frameworks, new language features, or bundler/build tools to be 'good'.

In the most recent episode of Elixir Outlaws, the topic of overusing OTP/processes/Agents is discussed (https://elixiroutlaws.com/37), and it made me feel a bit better about my lack of experience in this area (and much as the podcast can be on the 'whiny' side of things, it's probably my favorite Elixir-focused one and I highly recommend it! If you're reading this: Hi Chris, Amos and Anna. being a friend of the show one day is on my bucket list :).)


I don't think deep mastery of the language is needed in order to jump into a dev role. At least from my POV, as long as you know the language well enough to be productive and are comfortable with going to the docs, that's enough.

The more important thing is whether you understand how the web works. I'd much rather work with a senior Rails or Laravel dev who had just been playing with Elixir for a few weeks than someone who had studied the language deeply but had little web development experience.

Gatekeepers are annoying but you can usually side step a few qualifications as long as you can do the work.


My main issue with Elixir dev jobs is that they are usually not just "request/response", because Elixir offers so much more with OTP.

At least that's what's holding me back.


> I also find a lot of joy in its pattern matching and its pipe operator. It’s refreshing writing heavily functional code that still approaches the beauty of ruby, and I find that it drives me to think clearly about my code and write fewer bugs as a result.

This.

I use Elixir every day. While it feels 'refreshing' to use pattern matching, sometimes it gets over-used, when a simple switch/case statement would be easier to read.

Same with the pipe operator. I've found newbie programmers tend to pipe for the pipe's sake and write functions just to form the 'shape' of a pipe, but then the data's shape changes are not easy to see and inspect.
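The same effect shows up in any language. Here's a Python analogy (illustrative, since the point above is about Elixir pipes): a dense chain hides the intermediate data shapes, while named steps document them.

```python
# Python analogy to the pipe-readability point: chaining every step hides
# how the data's shape changes; naming intermediates makes each shape explicit.

posts = [{"author": "ann", "words": 500},
         {"author": "bob", "words": 1200},
         {"author": "ann", "words": 300}]

# "Pipe for the pipe's sake": one dense chain, shapes invisible mid-stream.
dense = sorted({p["author"] for p in posts if p["words"] > 400})

# Same result with named steps: each variable documents the current shape.
long_posts = [p for p in posts if p["words"] > 400]  # list of dicts
authors = {p["author"] for p in long_posts}          # set of strings
result = sorted(authors)                             # sorted list
print(result)  # ['ann', 'bob']
assert dense == result
```

Neither style is wrong; the point is that when the shape changes at every step, naming the steps is what keeps it inspectable.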


Doesn’t pattern matching lead to either more verbose code or duplicated code, most of the time?


It could, when it's used "wrong", though that's usually a result of trying to fit pattern matching where there are other, more idiomatic ways to express the code. I wouldn't say "most of the time"; it's more of a common beginner's mistake.

Pattern matching allows you to express code in a more declarative way, and usually results in more succinct code.


Great article. One point though (not aimed at the author).

The majority of engineering articles I see today are along the lines of "X with Y, using Z", where X, Y, Z are specific products, frameworks or libraries, often trademarks.

I rarely see more generic engineering/architecture topics such as: [virtual DOM based / string based client side templating] with [JSON/Protobuf] over [REST/Websocket] with a [compiled / interpreted / JIT compiled / virtual machine based] backend runtime, built on [RDBMS/NoSQL/hybrid] etc.

Am I alone in feeling this way?


I'm actually writing a tutorial series aimed at programming beginners where we implement things in multiple languages/frameworks side by side.

I think it's a great way to learn concepts of software engineering instead of just memorizing conventions.


Yeah this actually sounds awesome, do you have anything live/bookmark-able yet?


I have the first few lessons 90% completed but not published yet. You can bookmark my personal site natemeye.rs since they will initially be posted there before I move them to their own location.


> we implement things in multiple languages/frameworks side by side.

please do post the link.


I like this idea. Especially if you give a high-level overview of software design concepts.


I agree, and I've been itching to write an article that is a bit like the 'generic' approach you describe. But I'd have to start a blog first :-/.

For many web developers, front-end, full-stack or otherwise, it's perhaps usually clear enough what the underlying approach of a particular framework is. But I imagine that especially for people who don't read up on the latest developments, it might be helpful to get an idea of the underlying fundamentals of particular stacks/frameworks.


I've just been trying to get back into elixir recently, myself. I'd done some basic crud 'helloworld' stuff when I first tried about a year into professional development.

I've since had the fortune to spend time learning about cloud-native apps, distributed service patterns, and supporting infrastructure (Spring Cloud, PCF, vanilla k8s, GCP), and am now returning to Elixir with at least a better understanding of what Erlang and OTP offer.

I'm super excited to see I'm not alone in finding this sort of stack worth fiddling with (although I tend to pick Vue when not using Angular for work).

Thanks for posting this!!

For those interested in what resources I'm leaning on: Manning's 'Elixir in Action' and the Pragmatic Programmers' GraphQL texts, along with Exercism.

Anybody else have any preferred resources for these technologies?


I almost exclusively learned the basics of Elixir / Phoenix by looking at the source code of https://changelog.com which is at https://github.com/thechangelog/changelog.com.

Major kudos to them for open sourcing their platform. It covers like 50+ common web dev features.

I pretty much skimmed the docs to get the ultra basics, looked at that source while I was building my own app, and then asked questions when I got really stuck. I even managed to sneak in some refactoring on their code base a few days into learning Elixir. It's super approachable without much more than that if you have some kind of programming background beforehand.


This is a sweet shout-out. Thanks!


If you're interested in the Phoenix Framework you should check out "Programming Phoenix: Productive |> Reliable |> Fast".

It's written by the creator of Phoenix (Chris McCord) and the creator of Elixir (José Valim), and it's a fantastic intro to Phoenix.



This has been on my radar. I'll have to add it to my queue on Safari. Thanks for the rec!!


I liked this "Elixir For Programmers" by Dave Thomas

https://codestool.coding-gnome.com/courses/elixir-for-progra...


I can recommend this one too! It skips right over the boring stuff and focuses on the (Dave-Thomas-flavored-) good parts. Definitely worth it if you're already somewhat proficient in Elixir.


I'm a big fan of both the Manning and Pragprog books.

Even if you've worked at it a bit already, you'll probably get something out of doing the first dozen or so challenges on my channel as well: https://youtu.be/G3JRv2dHU9A?t=595


I am pretty much in the same boat. My main resource has been Programming Phoenix, first edition. I am now going back and reading Programming Elixir, will then re-read Programming Phoenix 1.4, and then the GraphQL book by the same publisher.


Elixir and OTP are really nice, but I'm frustrated by the type system, even with typespecs. I finally gave in and started learning Haskell. I would still choose Elixir/Phoenix for some web apps though.


Expect a long journey if you’re trying to get to the state where you can make a web app. I am making a conjecture here, but from my limited-but-nontrivial experience it seems libraries in Haskell's application ecosystem apply the most advanced features their authors know of to solve their problems. That would normally be fine, but they do not abstract away this complexity; due to the type system’s rigor, these choices are propagated to you as the consumer of the library, because you have to consume the library in a way that satisfies whatever advanced type features they use. So the end result is that to get anything done, like make a basic Yesod web app, you have to be at an advanced level. I feel I benefited from learning the structural and type-theoretic aspects (monads etc.) of Haskell, but I am also glad I ditched it after some time, because the investment just did not seem worth it anymore after a certain point. Anyway, just my perspective; hope it's helpful if you run into similar difficulties. Right now I’m checking out Scala as an alternative, but I’ve just started, so I can’t offer any comparison other than that I know it is less rigorous.


> I am making a conjecture here but from my limited but nontrivial experience it seems libraries in the applications ecosystem of haskell apply the most advanced features they know of to solve their problems.

Apologies to others for the “me too” reply, but I wanted to let you know you aren’t alone, I also feel this way. It’s intellectual masturbation (without any reward/benefit), IMHO.


Haskell is a funny beast because I get a lot of joy from “conquering” the next level up (like a game) but in terms of productivity I’m not getting a lot done. OTOH in more pedestrian languages I’m using that brain BHP on getting stuff done. I really got into the Haskell thing a while back, going to meetups etc. but getting a job using Haskell is hard unless you’ve got experience already and want to take a pay cut.


Agreed - it certainly is rewarding in itself to get to the next level, but that's precisely it: the means become ends in themselves. I stopped using stuff like Arch Linux a while back for similar reasons.


Yep. Tools like Docker and Kubernetes, on the other hand, I have found give you a nice amount of leverage for much-easier-to-understand concepts. In a similar "workhorse" category I would put Git and TypeScript. All of these require some effort to get to know, but they pay off nicely in terms of productivity.

Haskell is a pretty decent general purpose programming language, and if used well you can produce nice programs, but the problem is that the library you need will use some clever type system stuff and suddenly you are spending hours trying to understand how it all works.

For example I'd be happy doing a lot of stuff in the IO monad with some pure bits to the side where needed. Avoid the free monads, monad transformers, type classes and all that jazz. So use it like an imperative language for the most part.


The problem I have with Haskell for getting anything actually done is that nothing I try to do actually works. I mean in the basic "try out the tutorial to learn the thing and be able to get to the end without an error I don't have enough experience to debug" sense. Stack should help with this, but it doesn't when the versions you get (and function signatures of those versions) have changed since the docs were written. Cloud Haskell is an example of this: the tutorial docs are incomplete (the very first command they have you write is wrong, among other things), and they don't specify a stack version so the Hello World example doesn't build with the current snapshot and there's a really odd change to a public API I just can't get my head round.

I've had the same experience with Yesod and Scotty too, although not recently - every time I decide it's been long enough for the ecosystem to mature a bit more, I bounce off issues like this which I just don't get elsewhere, and decide to give it another couple of years.


I'm not familiar with Cloud Haskell, but I can attest that documentation and information availability in general have become a huge priority for me recently after getting burned by lack of it in recent years. This is actually very tricky to nail a sweet spot in, because it almost requires that a technology be currently popular, yet stable. There are long-lived Haskell libraries that have poor documentation and Haskell is rare enough that there aren't often answers to common pitfalls on StackOverflow etc. Ruby on Rails still has ongoing releases, but its popularity has declined and apparently so has its sources of community information beyond the official docs, which only cover so much. And then you have the immature, fast-moving JS ecosystem. Ironically for these reasons I could easily see myself choosing Java for projects in the near future; I don't like the language, but I've learned that the problems I have with Java are not my biggest problems as a professional engineer anymore.


Every time I see how many variants (and flags) GHC has, I'm like "feck no, I'll check it out again next year". But I think it's a culture thing in that community and it won't ever change.

That's one of the things that's kind of making me give up on OCaml as well, although there things aren't that bad.


Thank you for the input. Maybe I will never get to the point where I can comfortably make a web app, but learning the language has been a fun experience so far. I've heard good things about Scala and Akka.


Have you checked out Gleam?

https://gleam.run


Why is Elixir always being paired with Phoenix? Can’t I just have a backend API running on Elixir and a javascript front end to interact with it? I’d prefer a simple React, Postgres, Elixir stack (REP).


I have an app like that, but I found it convenient to use Phoenix anyway as it provides test helpers, a nice router, an efficient template (EEx) compiler with escaping, etc. I just ignore the parts I don't need.

Phoenix LiveView looks super interesting for simplifying this further: https://dockyard.com/blog/2018/12/12/phoenix-liveview-intera...


Yep! Just use Plug. There’s no need to drag the entire Phoenix stack in.

https://hexdocs.pm/plug/readme.html


Then you'll need to create your own version of migrations and routing, configure an asset building pipeline, etc. Phoenix has everything in place, with no need to re-implement lots and lots of basics.


A bit pedantic, but migrations are provided by Ecto [1], not by Phoenix. You don't need Phoenix to use Ecto.

[1]: https://hexdocs.pm/ecto/Ecto.html


Well, you'll still need to add Ecto, configure it to use separate configs for prod/dev/test environments, etc. -- so the point still stands :)


Because Phoenix is the web framework for Elixir. It's easier to just use phx instead of rolling your own solution around Plug / Cowboy.


Same reason why Ruby web apps are usually written in Rails: familiarity with the tool plus you end up rewriting most of Rails as soon as your app is not a toy (tests, flexible router, associations between models, migrations). After some experiments with Sinatra years ago I always start with Rails now.


Plus it's much easier to remove stuff from Phoenix if all you need is something like Sinatra.


You can use Plug instead of Phoenix if you like, but why do you object to Phoenix? It's quite customizable.

Generate a new Phoenix app. Open up the endpoint file. Don't want routing? Comment it out. Don't want logging? Comment it out. Keep what you want, discard the rest.
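For illustration, here's a trimmed sketch of what a generated endpoint file looks like (the app name `MyApp` and the exact plug list are assumptions; real generated endpoints include more, such as static file serving and sessions):

```elixir
# Hypothetical, trimmed MyAppWeb.Endpoint -- each `plug` line is
# independent, so discarding a feature is just commenting out a line.
defmodule MyAppWeb.Endpoint do
  use Phoenix.Endpoint, otp_app: :my_app

  plug Plug.RequestId
  plug Plug.Logger          # don't want logging? comment this out

  plug Plug.Parsers,
    parsers: [:urlencoded, :multipart, :json],
    pass: ["*/*"],
    json_decoder: Jason

  plug MyAppWeb.Router      # don't want routing? comment this out
end
```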


If you want something simpler and more building-block, micro-framework style in Elixir, you'll want to take a look at Raxx[0]. You could also just build atop Plug itself, though Phoenix really is just a whole lot of Plug.

[0]: https://github.com/crowdhailer/raxx


You might also be interested in Raxx.Kit, a project generator that gets your project started as quickly as if you were using Phoenix: https://github.com/crowdhailer/raxx_kit


In this specific case it is also because Absinthe/GraphQL subscriptions are built upon Phoenix channels, so if you plan to use subscriptions, Phoenix is the easiest route.

Phoenix is also a pretty minimal layer over Plug, especially if you don't include the HTML library.


You can have Phoenix serve API endpoints and not include the phoenix html template package in your mix files.

> I’d prefer a simple React, Postgres, Elixir stack (REP).

That's not even a web stack. A web stack, LAMP for example, includes a web server (Apache) or something able to respond to client requests. Elixir is a language; it can't serve anything by itself. You need Cowboy or something and then build around it. Or you can just use Phoenix.


> You can have Phoenix serve API endpoints and not include the phoenix html template package in your mix files.

Incidentally, that's what I did for the project in the blog post. If you bootstrap with `mix phx.new --no-html --no-webpack` then you get a leaner version of Phoenix that's good for an API, without the HTML and frontend asset parts.


Oh wow I didn't even know there's a command for it. Thanks.


Don't let the history of other languages and frameworks fool you: Phoenix is a very minimal set of helpers over the much more bare Plug/Cowboy setup. And it's very easy to strip away parts you don't need.

The fact that you'll have 4-20 small(ish) boilerplate files on a brand new project isn't the end of the world and it gives you a lot of flexibility later.


I have just started learning Elixir and Phoenix. It seems like a solid replacement for Rails. But I keep getting hung up on performance. With React, it seems like you are developing more of an API back-end than a web server back-end (if that makes sense). But when you look at the performance compared to Go and the steeper learning curve, why not just use Go? Kubernetes really solves the share-nothing, let-it-fail aspect. Obviously, Erlang/Elixir are hands-down a good fit for fault-tolerant distributed systems where performance may not be as critical, or where you can use NIFs. But outside chat and a few other use cases, I'm not sure the majority of web services fall into this. However, the functional code is so nice. Anyway, any input on how to convince engineering leadership that Elixir might be better than Go would be helpful.


For non-stateful services, it seems very difficult to me to convince people to use Erlang/Elixir over Go. It also seems to me that Go is going to be more maintainable over time than Elixir. Also, Elixir has smaller corporate backing compared to Go.

fwiw, I work at a telecommunications company and I've had no success convincing anybody to use either Go or Erlang/Elixir over Java (& Clojure).


That’s ironic considering that Erlang was designed for telecommunications.

But I think statefulness is the right lens for choosing between the two.

Why do you think Go would be more maintainable?


I realize dialyxir exists, but I think a lack of static typing can lead to some maintainability issues over the long term. At least this is my experience working on a 6 year old JavaScript codebase.


You seem to list political / cultural / habitual phenomena here. Care to list technical arguments instead?


Do you think language adoption in a company is anything other than "political / culture / habitual"?

But in any case, Go is a very straightforward language if you're already familiar with popular languages like C/C++, C#, Java, or JavaScript. You basically just need to learn the concurrency features (goroutines/CSP/the select keyword). Elixir is a much larger conceptual leap in that it is functional, and the concurrency toolset is a lot more to grok. Deploying code with Go is also easier than with Erlang/Elixir. Static typing is also pretty huge (dialyxir seems more like a bolt-on).


> Do you think language adoption in a company is anything other than "political / culture / habitual"?

Sadly you are correct. But what I was getting at is that I'm interested in how you choose a language. We techies should strive to make technical decisions that aren't based on politics, wouldn't you agree?

> But in any case, Go is a very straightforward language if you're already familiar with popular languages like C/C++, C#, Java, or JavaScript.

In every single area of my life I've found that what is cheaper at the start always ends up hugely expensive later. But I am not here to argue life philosophy. Just a quick tidbit.

> Elixir is a much larger conceptual leap in that it is functional

True. I still found the investment very worth it though, even after I spent 14-15 years with OOP / imperative languages before that, mind you. But that's a matter of personal career choice.

> Static typing is also pretty huge (dialyxir seems more like a bolt on).

This is undeniably one of the big weaknesses of Erlang/Elixir (and the BEAM languages in general). It made me try and reach for languages like Go and OCaml.

Go is pretty easy but extremely verbose. Also, the imperative languages' tendency to always show you how you are doing things instead of what you are doing (namely the way of the functional languages) is something that has been poking my eyes out lately, for better or worse. Additionally, people use Go's typing escape hatch (`interface{}`) way too often, to the point that the type system feels completely optional.

OCaml I like quite a bit, and its typing system is world-class. Lightning-fast compilation times are a huge productivity boost, one I didn't expect. But lack of even basic parallelism wrappers is pretty off-putting (I am not counting convenient pthreads-like DSL, they are just that and nothing else).

All in all, I'd still go for Elixir for 90% of what I ever do due to its preemptive scheduler and very friendly parallelism and concurrency story. And higher-level tools -- like Broadway -- keep getting added.

The BEAM languages are not a panacea. That's a fact. They seem to fit excellently in web programming though.


putting aside all the other benefits of a typed language, would you say Elixir's "let it crash" philosophy makes this even slightly less of an issue? Or are they entirely unrelated?


Ok, for backend, I’m deciding between rust with actix, jvm (kotlin + akka/quasar) or elixir.

I kinda know the theoretical differences, but I'm curious what HN thinks. Has anyone actually deployed things with some of these?

I do have preference towards rust but maybe there’s something better.


Erlang/Elixir's VM is geared toward high uptime, low latency, and high concurrency.

https://stressgrid.com/blog/100k_cps_with_elixir/

What it's not good at is numerical computation.

As for the others I have no clue, but I am sure the JVM won't be as low-latency as Elixir's VM, since it doesn't have a preemptive scheduler.


I think a lot of folks writing low latency code in C, or for the JVM, would be surprised at the suggestion a lack of preemptive scheduler makes their work futile.


If those folks enjoy writing low-latency code in C for the backend... I'm not going to stop them. I'll sit in a corner somewhere and enjoy Elixir.

There's a JVM vs BEAM VM comparison paper (http://ds.cs.ut.ee/courses/course-files/To303nis%20Pool%20.p...).

I believe section 2.2 (ERTS) of the paper talks about the BEAM VM's advantage for low latency.

> The per process heap architecture has many benefits, one of which is that because each process has its own heap and there are presumably numerous processes, ...

update:

Ah... I think the relevant part is in 3.3 Lightweight threads.

> 3.3. Lightweight threads

> The effect is that Erlang is one of a few languages that actually does preemptive multitasking. The reduction count of 2000, which is sub-1 ms, is quite low and forces many small context switches between the Erlang processes.


Java will get lightweight threads soon. You can google "java fibers".


Fwiw we’ve had a lot of success with Golang (graph-gophers/graphql-go w/ dataloader). Running in prod for about a year. Worth noting is that any GQL server implementation not written in JS will play catch-up to the JS/Apollo counterpart. The JS ecosystem just moves so much faster.


Before having success with Go, did you try or consider any other languages for your GraphQL server? What about other GraphQL libraries in Go?


Our backend is mostly Go so we _had_ to make it work. Tried most of the available libraries, gqlgen etc. But eventually settled on this one because of the schema-first approach and nice resolver api. And the defacto golang dataloader lib is from the same author.


Makes sense. Thank you!


Dotnet Core is actually really nice as well; I think v3 is right around the corner. Something worth checking out if you're looking into backend technologies.

I've been using it lately, coming from Node.js for the last few years, and it's been a really nice change.


I've been on the fence about trying dotnet core for perhaps too long. What do you like about it?


Anything .Net can be replaced by Java.


That really depends on what you're building.


When would Erlang or the JVM be preferable over Actix?


They're all wonderful, established technologies and all very capable of making concurrency relatively painless. At this stage, you should go with whatever you would be most productive in: do you already know one of the languages? If so, go with that one (always be shipping!). If not, how much do you like the syntax of each? How much do you rely on static types? How mature are the third-party libraries you're going to be relying on?

I've been building software using the exact stack described in the linked article for the last three years. I don't think about my backend very much, it mostly just works. Most of the work is on the front end. Elixir/Phoenix/Absinthe have been almost completely frustration-free. I'm sure Rust and Kotlin are great too.


Rust is still figuring out async, so it's not as good a fit as, say, Go or Elixir.

If you are specifically making a GraphQL backend, idk why you would use anything but Node, since it has the most mature ecosystem.


I mean actix just hit 1.0 and I don’t think it impacted them one bit.


The ecosystem is not really mature.


The project I've been working on the past couple of months is a very similar stack to this—Phoenix, Absinthe, Apollo, React, Redux.

The main difference is I've been using TypeScript and the Absinthe resolvers are using Dataloader.


How's the integration of Apollo and TypeScript? Do you like the stack and would you recommend it?


Both apollo-client and react-apollo ship with definitions in their npm packages, so using TypeScript with Apollo isn't a problem.

I don't like the stack. It's way too heavy. If I were building everything again now, I'd go with a standard Phoenix app, ditch Redux for sure, and hold off on full-blown front-end frameworks and GraphQL.

I'd ship v1 with Phoenix, UJS and Turbolinks, and it would take 1/4 the time. Then, if the project grew to where it needed and could support a larger team, I'd bring in Absinthe, Apollo and then finally Vue.

My main recommendation would be to keep things as simple as possible for as long as possible. Use something like Phoenix or Rails or Laravel and get stuff shipped.


It always bugs me that, with Absinthe, you have to define the Ecto schema and the GraphQL schema separately, when most of the time they are very similar. Can Absinthe somehow figure it out from Ecto?
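To make the duplication concrete, here's a hedged sketch (module and field names are made up) of the same entity declared twice, once for Ecto and once for Absinthe:

```elixir
# Ecto schema -- describes the database table.
defmodule MyApp.Author do
  use Ecto.Schema

  schema "authors" do
    field :name, :string
  end
end

# Absinthe schema -- describes the GraphQL type, repeating every field.
defmodule MyAppWeb.Schema.AuthorTypes do
  use Absinthe.Schema.Notation

  object :author do
    field :id, :id
    field :name, :string
  end
end
```

Adding a new column means touching both modules (plus a migration), which is exactly the overhead being complained about.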


Always wanted something like that as well. I am guessing until we get something like `clojure.spec` -- true typing, even if gradual (but at least not a success-typing) -- then it's not happening anytime soon.


Good read. Curious about Elixir. Why would one use this over ‘plain old’ Erlang / OTP apart from the Rubyesque syntax, which might appeal to RoR devs?


It's 100% the syntax. It appeals to ruby devs and also I think it's a lot easier to grok in general. But you can write Erlang code directly in an Elixir file, you can use Erlang libraries directly in Elixir, and Elixir compiles to the same BEAM instructions that Erlang does, so there's functionally no difference in terms of capabilities or what happens when they run.


This is not totally true. It’s not 100% syntax. Elixir has one additional and widely used feature: macros.

In Elixir, as in Lisp, you can transform your code as a data structure into new code. Macros are used heavily in Ecto, the de facto data mapping library in Elixir, and in Absinthe, the library from the posted link for making GraphQL APIs.
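As a small, self-contained illustration of what macros buy you (the `unless_nil` macro here is hypothetical, not from Ecto or Absinthe):

```elixir
defmodule MyMacros do
  # A macro receives its arguments as AST (quoted expressions) and
  # returns new AST, so the caller's code is rewritten at compile time.
  defmacro unless_nil(value, do: block) do
    quote do
      case unquote(value) do
        nil -> nil
        _ -> unquote(block)
      end
    end
  end
end

defmodule Demo do
  require MyMacros  # macros must be required before use

  def describe(name) do
    MyMacros.unless_nil(name, do: "Hello, #{name}!")
  end
end
```

`Demo.describe/1` compiles to a plain `case` expression; no anonymous function or runtime dispatch is involved, which is how DSLs like Ecto's query syntax stay cheap.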


Protocols are also an important feature. `IO.inspect`, `Enum`, etc. are very nice additions.


In general the standard library organization is superb and well laid out. It's the only language where I don't have to constantly look up "how do I do X with a string" or "how do I do Y with a list".


Yeah, this is fair. I was mentally lumping certain language features under "syntax" but that's not really accurate.


* consistent standard library (huge win IMO)

* efficient UTF-8 binary strings (which actually conform to the UTF-8 standard), no mixup with charlists

* mix etc standard tooling

* macros

* enum stuff

* pipeline operator "|>", which by its mere existence encourages consistent input/output parameter ordering in user-written functions

etc
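The `|>` point is easy to demonstrate with a self-contained snippet:

```elixir
# Each |> feeds the result of the previous expression in as the first
# argument of the next call, so the transformation reads top to bottom:
result =
  "  hello world  "
  |> String.trim()
  |> String.split(" ")
  |> Enum.map(&String.capitalize/1)
  |> Enum.join(" ")

# result is "Hello World"
```

Because every standard library function takes its "subject" as the first argument, pipelines compose without glue code, and user-written functions tend to follow the same convention.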


Well, better Unicode support too.


It's not just syntax. There are semantics differences.

A big one is that Elixir allows rebinding of a variable. Erlang does not, which tends to lead to ugly names, since in Erlang you can't do common reassignment patterns like x = x * someConversionFactor.
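A minimal example of the difference (variable names are illustrative):

```elixir
# Elixir: `=` is a match, but names on the left rebind freshly,
# so the common reassignment pattern just works:
x = 100
x = x * 2

# Erlang: `X = X * 2` raises badmatch because X is already bound,
# forcing numbered names instead:
#   X0 = 100,
#   X1 = X0 * 2.
```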


* Unified tooling (`mix`).

* Tests inside documentation.

* Macros.

* Protocols.

* `mix xref` has a family of sub-commands that gives you good code analysis (callers, callees etc.)

* First-class Unicode support. All strings are UTF-8 by default (you can fallback to ASCII if you need it; there are also good transcoding libraries in both Elixir and Erlang).
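The "tests inside documentation" point refers to doctests; a small sketch (module name hypothetical) of how an `iex>` example doubles as a test:

```elixir
defmodule Temp do
  @doc """
  Converts Celsius to Fahrenheit.

  In a Mix project, an ExUnit test module containing `doctest Temp`
  runs the `iex>` line below as a real test, so the docs can't rot.

      iex> Temp.to_fahrenheit(100)
      212.0
  """
  def to_fahrenheit(celsius), do: celsius * 9 / 5 + 32
end
```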


Many folks seem to prefer Elixir to "plain old Erlang" because for those who come from an imperative and/or object-oriented background, picking up Erlang means taking on a significantly steeper learning curve.


Friendlier tooling and documentation are a couple of reasons.

Elixir's whole reason for existence is to make the Erlang programming model easier to use. Try it and see if it does.


> Why would one use this over ‘plain old’ Erlang / OTP apart from the Rubyesque syntax

The two really big features are macros and protocols.


> much friendier syntax for developers

This drives me batty. More familiar? Sure. “Friendlier” is very subjective.


Loved this deep dive - appreciate you going step by step and explaining each part.


Thanks! Glad you enjoyed it :)


great write up. shall we dub this the PAAGER stack?


APGEAR


parage (ˈpærɪdʒ) n 1. (archaic) lineage, family, or birth


PEA-RAG


When you get to the Absinthe part you lose me. It's the same gut feeling I have when I see Redux: it works, but it was created at the very beginning of this entire workflow discovery, and better tools/approaches have since come out.


Can you elaborate on what they are, for the curious?


Redux is still a solid choice. Apollo is used more often now because it handles a lot of GraphQL optimization but it's by no means a panacea. And sometimes, despite being an advocate of Apollo, I miss the simplicity of Redux.


Nice! This is 1 for 1 the exact tech stack we use at Distru :D



