Rails adds support for Fiber-safe ActiveRecord ConnectionPools (saeloun.com)
196 points by ksec on Feb 24, 2022 | 135 comments



This is really exciting news. Lots of folks slam Rails for being slow and inefficient despite there being massive sites using it at scale (Shopify, GitHub, etc.) with very commendable response times and uptime.

At the same time you can still run a single $20 / month server for "smaller" apps, even with Rails + Sidekiq + Action Cable + Postgres + Redis all running on the same server, to power your solo-developed SaaS app with thousands of customers while keeping a p95 response time of <= 100ms.

PRs like this just mean things are going to get even better than they already are, and this is probably a precursor to converting a bunch of Rails internals to use Fibers over time. The best part is that I, as an end user, don't need to know the details; I just know my CPU and memory usage will go down over time while runtime performance keeps improving.

I wonder how long it'll take before the pendulum swings so hard back into the direction of Rails that it shatters even the original growth spikes of Rails from all those years ago. With Hotwire Turbo being a thing now it's very possible to build very nice feeling apps without writing a ton of JS while leveraging good old HTML and HTTP with sprinkles of Stimulus and WebSockets.

The more I think about it, the more I talk myself into believing Ruby / Rails really are a once in a generation combination. It's truly that good for getting real shit done in a pleasant way.


I really hope some sanity comes back to this industry. Rails and Laravel are the best tools by far to build like... 90% of what we build on the internet (Other 10% being offline first apps and stuff like figma, etc).

It just hurts to see how my team, and previous teams I worked with, struggle with all the SPAs and microservices and Go backends and GraphQL nonsense, when what we're building is just fancy CRUD forms with maybe one or two really interactive widgets overall.

So much time and money wasted just for following fashion.


There is no "sanity" lost, nor are the masses "following fashion". That is unfair to those who put real thought, time and effort in their software.

Rails has its strong sides. And its weak sides. Trade-offs are to be investigated. An extremely weak side of Rails is how tightly it is coupled to the database. Which (its strong side) makes it perfect for simple CRUD applications. But it falls down quickly when you have much more event-driven or domain-logic-heavy applications.

And yes, I know Rails can be used for this. But just like I could, in theory, write a web app in Bash, it's not where it shines, and it will bring you pain.

For Rails, the pain quickly becomes evident when a lot of business-rules are spread out through "Validators". Or when migrations become a pain to manage. Or when you have more and more complexity moved into async jobs. All of these are signs that the CRUD-nature and/or the tight database-coupling is harming you more than helping you.

Choosing some other architecture that better fits your domain is not "following the latest fashion". It's sane, proper due-diligence.


>There is no "sanity" lost, nor are the masses "following fashion". That is unfair to those who put real thought, time and effort in their software.

It's also a forthright description. Developers chase fads, the new shiny, just like everyone else does. Neither are we immune to cargo-culting; I'd go so far as to say the absurd comparison to web apps written in shell script is itself an instance of this.

I've been behind the curtain on large Rails apps. Every issue you mentioned is either reasonably mitigated or exists more-or-less equally on other frameworks.


> exists more-or-less equally on other frameworks.

You are probably just comparing apples to apples here, then. Certainly: replacing Rails with Django or Symfony won't solve the core issues one has with Rails, because Django and Symfony follow the exact same pattern and architecture.

But architectures (please note that I deliberately don't use the term framework here) like CQRS, Event-sourcing, Event-buses, Actors, Reactive etc. all fit much better for many use-cases than MVC.

All architectures have downsides and upsides. The ones I mention above, with which I have experience, have severe downsides in other areas. Some of those downsides are solved in Rails, some in other architectures. It's always about weighing trade-offs and choosing what fits best for your use-case.

So please don't label everyone who made such trade-offs and chose to forgo Rails because it doesn't fit them as "chasing fads".

If anything, blindly choosing Rails is just as bad as "following fashion": both amount to making decisions without any basis.


Well, Rails wants to be on a relational database. That's a pretty strong constraint that - say - NextJS or Flask avoid.

I think it's true that it sometimes feels a little crazy that form posts are so much trickier in the SPA world. I sometimes wonder if NextJS has got this right: build a semi-old-fashioned web app with a frontend and a backend, so all the data transport between them is pretty light on code, and then use APIs to data services with nicely defined domain objects.

Still yet to put my money where my mouth is on NextJS though.


Most data is relational. It’s important to embrace the efficiencies and safety guarantees that come along with that.


> Most data is relational.

No. It's not. It may be in your domain. But it really very much depends on the domain and use-case.

I daresay that purely on information density and amount of gigabytes, most data you use in your business is both document-based and hierarchical: files, docs, directories. But obviously I don't know you or your business, so this is just a wild guess.

This misunderstanding that "data is mostly relational", I believe, comes from the fact that it is easy to make most data relational (edit: to clarify: turning something into X is not the same as something being X to begin with). Much easier than to make most data document-based, indexed, hierarchical, or into graphs. For one because the tools (RDBMSes) are mature and omnipresent. Practically: it's easy to put Word docs and directories in Postgres, but it's hard to have millions of CSVs with profile information accessible in a directory structure.

And, if what you say were true, that "most data is relational", then the best tool for most of our jobs would be a step-up in relationality from an RDBMS and we should be using graph-databases instead.


Isn't a hierarchy a relation? Item, item's parent, item's children. It suits itself well to a RDBMS.
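A toy adjacency list makes the point; these plain-Ruby rows stand in for a self-referential `items` table with a `parent_id` foreign key (all names illustrative):

```ruby
# Rows as an adjacency list: each item stores its parent's id,
# exactly as a self-referential foreign key would in an RDBMS table.
rows = [
  { id: 1, name: "root",  parent_id: nil },
  { id: 2, name: "docs",  parent_id: 1 },
  { id: 3, name: "img",   parent_id: 1 },
  { id: 4, name: "draft", parent_id: 2 },
]

# "SELECT * FROM items WHERE parent_id = ?" in miniature.
def children_of(rows, parent_id)
  rows.select { |r| r[:parent_id] == parent_id }
end

# Walk the hierarchy depth-first from the root(s).
def tree(rows, parent_id = nil)
  children_of(rows, parent_id).map do |r|
    { name: r[:name], children: tree(rows, r[:id]) }
  end
end

p tree(rows)  # nested structure: root -> (docs -> draft), img
```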


I'm talking as a whole. So yes, there are cases where it's not relational but on average most data is relational. A user has many projects. Those projects have many tasks. This is very common to most web-based software used today. Think of all the tools out there that work perfectly with this structure.

Most of our data isn't files, docs, directories. It's data given to us by our users or generated by our own processes. I'm not getting the reference to CSVs and word docs.

Yes, we store uploaded files but the metadata is stored in the database in a row that has some identifier that links to the directory where the file is stored. That works fine. We don't need a document database.


You're only classifying relational data as data. YouTube's petabytes of data are mostly video and audio. Perhaps some of its data should be stored relationally, but definitely not most of it.


Yes, a lot of data is relational (although probably most of the world's data is not relational). But not all things need to persist data or are data driven.


If all data belongs to a user or an organization, which is usually the case, and that user does some actions to some objects, is that not relational? It's easy to see how most web apps fall into this pattern.

Building your own API to join data is absolute torture.


Django has these same limitations in my experience, but to me the issues are largely caused by the common usage patterns which are encouraged by the official docs and community, and lead to absurd amounts of coupling/ball-of-mud apps.

You can be disciplined about clean architecture/DDD/hexagonal architecture, etc., but you have to completely ignore the patterns encouraged by Django, Django Rest Framework, etc.

Hide the ORM behind Repository interfaces

Marshall ORM objects into true Domain objects

Do not call to the database in the HTTP adapter/validators

Etc etc


> Hide the ORM behind Repository interfaces

> Marshall ORM objects into true Domain objects

> Do not call to the database in the HTTP adapter/validators

The trade-off for this is exploding the number of files, directories, and tests needed for the project. Which has its own cost and overhead.


I'm not experienced in Django.

But doing this in Rails is very hard. It goes against the strongest of opinions this opinionated framework has. So much so that if you strive for such a design (and you should, IMO), Rails is more in the way than helping you - I know, I tried.
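For what it's worth, the design being described (repositories that marshal storage rows into true domain objects) can be sketched in plain Ruby, framework aside; all names here are illustrative, not from any library:

```ruby
# Domain object: no persistence methods, just behavior.
User = Struct.new(:id, :email, keyword_init: true) do
  def gmail?
    email.end_with?("@gmail.com")
  end
end

# Repository: hides the storage representation (raw hashes standing in
# for ORM rows) behind an interface, and marshals rows into domain
# objects at the boundary.
class InMemoryUserRepository
  def initialize(rows)
    @rows = rows
  end

  def find(id)
    row = @rows.find { |r| r[:id] == id } or return nil
    User.new(id: row[:id], email: row[:email])
  end
end

repo = InMemoryUserRepository.new([{ id: 1, email: "a@gmail.com" }])
p repo.find(1).gmail?  # true
```

The cost the sibling comment mentions is visible even here: two classes and a translation step where Active Record would give you one class for free.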


Sure, writing boilerplate is always going to have overhead.

But in my experience the stricter-yet-more-verbose design strongly increases maintainability and velocity in the long term, which should be self-evident due to the massive reduction of coupling.


> An extremely weak side of Rails, is how it is very tightly coupled to the database.

You mean that Rails — being an MVC framework — is tightly coupled to (or defined in terms of, even!) the schema you've chosen for your business domain, as understood by your RDBMS.

If that's a problem for you, it's very likely because you're using your RDBMS wrong. (E.g., not taking advantage of abstractions like views+triggers to provide schema [API] stability atop storage [implementation] mutability.)

An RDBMS schema is an encapsulating API for your data, just like a GraphQL schema is an encapsulating API for your data. It's just not in fashion to use it as one — very likely because the spectre of "non-standard SQL" and inter-RDBMS portability has scared all the framework developers away from actually using RDBMSes for what they're for (instead using them as glorified indexed document stores.)


> it's very likely because you're using your RDBMS wrong.

Sorry. But no.

RDBMSes make poor tools for complex domain logic, business-rules, etc. If your solution to "my database and business-logic are too tightly coupled" is to move more of that business-logic into the database, "you're using your RDBMS wrong".

RDBMSes are near impossible to test. They have no source-control, no collaboration models (pull-reviews, CI/CD). They have no deployment strategies. Lack distribution.

Which is why in e.g. Rails all business logic commonly lives in the models, and not in your DB schema.

(and with "no", I do understand that it is possible, just that accessible tools, frameworks, or even books and courses are severely lacking).


> just that accessible tools, frameworks, or even books and courses are severely lacking

Right. Using an RDBMS "correctly" requires that you live in 1985. The result is actually better operationally (for model evolution, at least) than what living in 2022 gets you, though. If your product or service is "about" the data it captures/models in the DB, living in 1985 usually turns out to be worth the sacrifice.

The deeper problem is that because everyone uses RDBMSes wrong, the RDBMS vendors aren't being incentivized to invest into the developer-UX for the features that differentiate RDBMSes from other DBMSes. These features could give you a 2022 experience instead of a 1985 experience, but the vendors just don't care about the 2022 experience.

The exception to the incentives being analytical data warehouse services, like BigQuery and Snowflake. These companies do have these incentives, but they don't have the bandwidth to actually address them; nor do they operate on an open-core model where anyone outside the company could help them with their roadmap. (I say this in frustration; my own company is waiting on one of these vendors to implement support for arbitrary-precision integer columns that aren't actually just uint128s; and "lack of bandwidth to address features" is the common refrain I hear from them.)

And there's also the fact that the major RDBMS vendors are all stuck in a 1985 mindset, where their primary users are analysts manually typing SQL. I still have yet to see an RDBMS whose pitch is "this database is for machines to talk to; it [literally] optimizes for the case of predictable — or even pre-registered† — queries, at the expense of efficient execution of arbitrary one-off queries." (Because, who runs arbitrarily-complex reporting queries against their service's production DB, rather than shoving those off to some other data-warehouse system? "Production DB for a service" should be its own product category.)

---

† Picture: your applications would register versioned schemas in advance with the DB, including all the static templated queries the application plans to do; and then, in exchange, the application would get a set of persistent, cross-connection-sharable query-handles. Rather than allowing the user to define indices, the DBMS uses the knowledge of what queries the app will perform to build whatever indices best optimize its own query plans. Multiple app-versions can be registered + connected simultaneously, as versioned-API "portals" onto different projections of the same data. Etc. You know, the same 2022 stuff you expect to be handling yourself on the business layer — but exposed as part of the first-class meta-schema of your DB.
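As a toy illustration of that idea (everything here is hypothetical; this is not any existing database's API): the app registers its query templates up front, and the "DB" derives its own indexes from what was registered, handing back a reusable query handle:

```ruby
# Toy simulation of pre-registered queries. The "DB" builds an index
# for each registered template instead of the user defining indices.
class ToyDB
  def initialize(rows)
    @rows = rows
    @indexes = {}   # column => { value => [rows] }
    @queries = {}   # handle => column
  end

  # App declares a templated query; the DB indexes the filtered column.
  # Returns a persistent query handle.
  def register_query(name, column)
    @indexes[column] ||= @rows.group_by { |r| r[column] }
    @queries[name] = column
    name
  end

  # Execution only works through a registered handle.
  def execute(handle, value)
    column = @queries.fetch(handle)
    @indexes[column].fetch(value, [])
  end
end

db = ToyDB.new([{ id: 1, email: "a@x.com" }, { id: 2, email: "b@x.com" }])
h = db.register_query(:user_by_email, :email)
p db.execute(h, "a@x.com")  # the single matching row
```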


This overlooks the actual problem. In Rails, you have a db schema, models are tightly coupled to the db schema, then the community push is to put logic into these models.

Having two models pointing at the same table is dismissed as a bad idea.

Then, validation for form data is coupled directly to the db schema, thanks to models, not separated by scenario.

Then, the json rendering part is coupled to the model, because it's "convention over configuration" and the community pushes for DRY everywhere, without concern for the cost. So now, the json api is surfacing any change to the underlying STORAGE mechanism of the data, instead of being stable.

Now, add a React + TypeScript interface, so popular nowadays, on top of it, and the result is that an update to the DB schema causes changes in the model, in the controller, and in the json view, potentially breaking typings in the frontend.

This is the most common situation in Rails applications.

Now, let's take this situation 2 years in the future:

The json produced by this app is massive, there are many more new endpoints. Each endpoint has json rendering, but DRY is a hard requirement, without evaluating the risk, so every json rendering for a given model is reused in other json rendering objects.

Now one change of the underlying storage ripples through many, many API endpoints, potentially affecting hundreds of files.


I think you misunderstood the thrust of my argument.

A Rails model that talks directly to an RDBMS table, does indeed couple your business logic directly to your data storage representation. I'm not arguing with that.

There are two ways to fix this, though. And most people only consider one of them.

The "software-oriented" way to fix the coupling — and the one that software developers usually reach for — is to embrace domain-driven Hexagonal architecture on the inward edge: to create a domain layer within the business layer that defines its own non-DB-synchronized domain objects; and then to write an explicit DB gateway that can be Commanded and Queried with those domain objects, where that layer translates those domain objects into relational row-tuples.

(Note that this is separate in concept from embracing Hexagonal architecture on the outward edge. Gateway abstractions + internal-domain/external-view model isolation+translation are well-suited concepts to encapsulating the different rates-of-change needs of your system vs. the external systems it interacts with. That's what the Hexagonal-architecture concept was invented for, and why MVC separates Controllers from Views.)

The "database-oriented" way to fix the coupling, is to continue to model your business domain within the DB, but to model it in a faceted, denormalized way, such that each query-schema gets its own view, and each command-schema gets its own writable view driven by a trigger + stored procedure, and the tables where data actually lives (and what columns/types/indices they have) are an implementation detail of how those views were written. Just like in OOP, you hide the data, encapsulating it behind an API (the views.)

When your Rails models bind to these "schemas", each Rails model then inherently represents a Query-result or a Command-changeset — without having to complexify your business layer. These objects can just be your domain objects. You're getting your domain objects directly from the database, and sending them directly into the database, because the database itself contains the translation layer mapping the domain into the storage representation. Just as with the first approach, you're free to change these mappings at any time — you just add some DB schema migrations which add/update/remove the views backing the models, and re-point your models at the new views. (If you create these views using unique-per-version names, then different versions of your app can even live in peace together, with all their versioned bits-and-pieces of translation layer co-existing and being shared between where relevant.)
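A minimal sketch of what that could look like, assuming a hypothetical `reports` storage table and a model bound to a versioned view (all names invented for illustration; the view here is read-only, a writable one would add an INSTEAD OF trigger + stored procedure as described above):

```ruby
# Migration: the view is the stable query-schema. The table layout
# behind it is free to change, as long as a later migration re-creates
# the view to keep projecting this same shape.
class AddReportsApiV2View < ActiveRecord::Migration[7.0]
  def up
    execute <<~SQL
      CREATE VIEW reports_api_v2 AS
      SELECT id, title, recipient_email
      FROM reports;
    SQL
  end

  def down
    execute "DROP VIEW reports_api_v2;"
  end
end

# Model: binds to the versioned view, not the underlying table, so it
# represents a query-schema rather than the storage representation.
class Report < ApplicationRecord
  self.table_name = "reports_api_v2"
end
```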


Oh, yeah I definitely misunderstood your point.

This makes sense, and it's an idea that makes me excited; I wish there were a community drive toward that.

I do think that most Rails developers will dismiss it for some reason like "It's weird to have a Model backed by a view" or something along those lines. I wish the Rails community pushed (way) harder on dramatically increasing SQL knowledge, because it's so important to the whole framework.

Instead, most of that SQL knowledge is hidden behind ActiveRecord, making it incredibly hard to acquire.

There is one thing this approach doesn't solve, and that is the inherent flaw of ActiveRecord: having `save` on the business-logic entity makes returning such an object from a business-logic method dangerous, since any other code could then potentially write to the DB (and this is what usually happens, due to lack of software-design principles).

I suspect you are going to suggest something along the lines of "set some records readonly, and use command records only in specific spots", which could make sense!


> Or when you have more and more complexity moved into async jobs

How does this cause pain in rails specifically? Any 3 tier architecture is going to have the same headaches.


> How does this cause pain in rails specifically

As long as the jobs are self-contained, cohesive and decoupled, there's no problem.

They hardly ever are decoupled. Often, jobs will reach back into the model layer (rather than receiving all data with which to work). You are now coupling yet another concern tightly to your ActiveRecord models. In practice, e.g. you have 400 jobs waiting, which will do a `deliver_to(Report.find(report_id).recipient.email)`. What happens if your domain model Report changes to have `recipients` instead?

They hardly ever are cohesive. It's hard to orchestrate jobs - the tooling is simply not there in Rails. So having tiny jobs spawning other jobs and/or sending each other messages is not the common way: instead we build Giant Balls Of Sequential Code that perform a job. E.g. that ReportJob above will now (1) collect data, (2) prepare and filter it, (3) push it into a template, (4) build a PDF from it, then (5) upload that to storage and (6) build an email and attach it. That violates SRP on so many levels it hurts to write it down.

And they are hardly ever self-contained. As mentioned above, many jobs will reach back into the models rather than having all the data in the job. Worse: jobs in Rails lack their own state and management thereof. So they'll often push (intermediate) state through your models into your database. E.g. that ReportJob may trigger state-machine changes with things like `@report.transition_to(:pdf_generated).save!`. This causes all sorts of issues with transactional boundaries (what if an HTTP request comes in that updates the same Report's name while you are building a PDF from it?) and thus race conditions, locking, and other issues.

ActiveJob and the like are very good as an add-on to a CRUDdy app, to handle simple commands like notifications, emails, uploads, etc. But they fall severely short when your domain is mostly command-based. In those cases, architectures such as Event-sourcing or Actors are a much better fit.


> And they are hardly ever self-contained. As mentioned above, many jobs will reach back into the models rather than having all the data in their job

This is usually for a very good reason.

If you passed a user model into your job directly and then your job executed with that user, there's a race condition waiting to happen: the user might have been altered between when the user clicked the button to enqueue the job and the job itself executing.

This could have very bad side effects, like the user receiving an email they opted out of, or maybe the user doesn't even exist in the system anymore, or your job saves the user back to the DB with stale data, clobbering what the user updated since then.

It's super common, and a best practice, to send a unique ID of the user as a job argument and do the user lookup in the job at execution time. This is typically something you'd do in any system, not just Rails. You mention "actors", which hints at Elixir, but Oban (Elixir's most popular job processing library) also uses this approach of passing IDs, because this problem isn't related to your tech stack.

Also, you listed out 6 sequential steps in your ReportJob example. Based on what you wrote, it sounds like they need to be done sequentially and can't be further parallelized. With no other context, and it being a typical web app, I'd keep that as 1 job and have 6 function calls in the job with the implementation details for each step, so the job itself is very readable. I wouldn't make a jumbled mess of jobs-calling-jobs spaghetti there.
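A minimal plain-Ruby sketch of that pass-an-ID pattern (the in-memory store and job class are illustrative, not Sidekiq/ActiveJob/Oban APIs):

```ruby
# In-memory stand-in for the users table.
USERS = { 1 => { email: "old@example.com", opted_in: true } }

class WelcomeEmailJob
  # Receives only the ID; looks up fresh state at execution time,
  # so opt-outs and deletions between enqueue and execution are seen.
  def self.perform(user_id)
    user = USERS[user_id]
    return :skipped if user.nil? || !user[:opted_in]
    "sent to #{user[:email]}"
  end
end

# Job is enqueued while the user is opted in...
job_args = 1
# ...but the user opts out before the worker picks it up.
USERS[1][:opted_in] = false

p WelcomeEmailJob.perform(job_args)  # :skipped - the stale send is avoided
```

Had the whole user hash been passed as the job argument instead, the worker would have emailed a user who had already opted out.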


Calling it "fashion" is disingenuous.

I ran Rails in production years back and swore it off then. We had constant memory leaks that seemingly came from Rails itself, and the only solution we had was "just restart the server." We also had no typing then, so every bug was a runtime bug. Hopefully it's improved in the years since...

I've been happily running Go backends for the past 7 years now, and they're stable and fast and easy to refactor.


> I ran Rails in production years back and swore it off then. We had constant memory leaks that seemingly came from Rails itself, and the only solution we had was "just restart the server."

This hasn't been a serious problem in a decade.

> We also had no typing then, so every bug was a runtime bug. Hopefully it's improved in the years since...

The ecosystem has been introducing gradual typing, but even at high scale, types were not remotely the most common type of problem I ever ran into, and certainly not "every" bug.

(ex-Braintree engineer, we processed billions of requests on Rails)


> types were not remotely the most common type of problem I ever ran into, and certainly not "every" bug.

If you took that away from what I wrote, I apologize. I meant that without a compiler and type-checker, you would only find bugs at runtime. In my experience, the vast majority of these would be easily discovered by a compiler. Presumably that experience is shared by Ruby devs since they're now adding type-checking.

> This hasn't been a serious problem in a decade.

That may be true. I haven't had any need to revisit Ruby or Rails since I moved to Go. But it was a serious problem with no workaround, and I've never encountered any scenario like that since switching to Go.


While I do like static typing, I find more bugs arise because people have no clue about security - they think they can do better by just tying together a few libraries and storing a JWT in local storage, or they forget to handle Prisma exceptions, or they don't know what session fixation is, or they forget to consider a corner case of foreign-key exceptions from the database library, etc. - than because somebody passed a string where an integer was needed.

Giving up frameworks with 10+ years of hardening, documentation, libraries, support, etc. just because of coroutines or static types or nice syntax, or because "that's what Google does so I should do it too", makes absolutely no sense to me.


Yes. And new projects are even better started now with Rails than in the past (aside from the JavaScript hell still plaguing Rails).

Active storage and variants make life so easy.


> (aside from the JavaScript hell still plaguing rails)

It's not too bad now. Set up a Node environment and there's first-class support for using vanilla esbuild, or if you want to go without Node entirely there are import maps, plus lots of goodies at the Rails level to help you manage your JS dependencies. Technically Webpack is still supported too, but it's vanilla Webpack instead of Webpacker; that's all part of the new https://github.com/rails/jsbundling-rails abstraction for using a number of different JS bundling tools with their stock setups.

Personally I went with the esbuild + tailwind combo and it's been smooth sailing. I have an example app here https://github.com/nickjj/docker-rails-example.


If you're finding bugs in production that a compiler or typechecker could have found then there's something seriously wrong with your test suite. That's how Rails works, if you don't have 90%+ statement coverage it will be hell.


IME, adding that 90% statement coverage is much of the tedium and frustration of the job in Ruby-land--in particular for things that you just get solved for free with something like TypeScript.

It might have improved somewhat, but I find myself pretty comfortable with a much more pared-down test suite that focuses on correctness tests at logical module boundaries in TypeScript, rather than verifying things the computer can just do. I do look forward to seeing Ruby's gradual typing become more entrenched in the ecosystem, though, because I like the language--I just don't like using the language professionally because of the additional manual work I find myself doing.


> IME, adding that 90% statement coverage is much of the tedium and frustration of the job in Ruby-land--in particular for things that you just get solved for free with something like TypeScript.

I would argue that if your unit test is only testing things which would have been shown by the type system of another language, you are testing at too low of a level. In addition to being tedious, such tests are often very brittle.


Those tests are brittle, and they're also the thing that protects you at module boundaries when those boundaries are being hammered on by different groups of people.

Having them not be necessary is nice.


On the other hand, if you have a compiler/typechecker that finds things your test suite could find, I will always reach for the compiler/typechecker. No sense in writing tests for something a standard tool will find, aside from sanitizing data from your inputs.


My experience with ruby is often documenting the expected argument and return types with comments and then writing code and/or tests to enforce the types.

Having also used Go a fair amount, I very much prefer the real type system which both documents and enforces.

I enjoy both languages though.


I’ve worked on many large Rails apps at scale and large Java apps at scale. There has been no significant difference in the bug count between them, despite Java having static types.


Java has a bad type system that makes it awkward to express many well-typedness guarantees (too much boilerplate to create new types (classes or interfaces), no sum types (tagged unions), no null safety, exceptions are a mess especially with lambdas, and don't get me started about all the reflection madness of popular frameworks that throws type-safety out of the window, etc.). As a result you don't get much safety in exchange for all the boilerplate. Still, I find Java massively easier to refactor. You can remove a field from a class and verify that you've changed all the places that were using it (barring reflection, of course). In Rails, that requires at least running the whole testsuite which could be very slow (because Rails tests are often very slow). And even then, you have no guarantee you didn't miss a particular case.

Languages with better type systems do exist (Rust, Swift, Kotlin, from what I hear even typescript?, and all ML type languages including Haskell, ...). They are much better at preventing bugs. My life became better when I had to stop worrying about NPEs, for example.


I’ve had the same experience with Swift, Kotlin, and typescript. No clear difference in quantity of bugs or speed of development.


> [...] types were not remotely the most common type of problem I ever ran into, and certainly not "every" bug.

That's a typical response from dynamic-typing advocates, but the counterpoint is that in a language with a good type system, many more things can be type errors than would be in a dynamically typed one.

For example, from my time writing Ruby, trying to call methods on `nil` was an incredibly common error, but this is simply a type error in some more modern statically typed languages (including Kotlin and Swift).
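A small Ruby illustration of the point: nil flows silently until something calls a method on it, which is exactly the call a null-safe type system would reject at compile time (`first_word` is a made-up example function):

```ruby
# Safe navigation returns nil instead of raising when s is nil.
def first_word(s)
  s&.split&.first
end

p first_word("hello world")  # "hello"
p first_word(nil)            # nil, no error

# Without &., the same call is a runtime error, not a compile-time one:
begin
  nil.split
rescue NoMethodError => e
  p e.class  # NoMethodError
end
```

In Kotlin or Swift, a `String?` never reaches `.split` without the compiler forcing you to handle the nil case first.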


A substantial number of bugs I see where a method is called on nil are business logic errors that result in something not being where it's expected, and those bugs would just manifest differently at runtime with static typing. Every web app I've ever seen in a statically typed language has plenty of logic relying on "unwrap nullable or raise error" mechanisms.


That may be your experience. It's not mine. Null-safety forces you to think about the exceptional case (and whether it is supposed to occur at all). If you do so poorly, yes, you'll have errors too. But it's not something that can happen to you by accident.

For example, if you have a method A that gets a foo from the method B and then does something to that foo, you might get a runtime error if B returns nil. If this happens over dozens of method calls, it can start to become very hard to see where the error was introduced.

But if you know that B is always supposed to return foo and never nil, then the compiler can show you where the error in your definition of B is ahead of time.

The value of null-safety is not in making everything nullable and then just proceeding with unwrapping etc.; the value is that we can make a lot of types explicitly non-nullable.

I don't think it's controversial that a good type system allows you to verify more soundness guarantees at compile time. The only controversy is whether it's worth the effort.


I agree with you about the value of static typing and non-nullability, don't get me wrong. My experience is just that _web development_ and other use cases traditionally favored by dynamic interpreted languages tend to have a lot of unexpected nulls of a type that are not prevented by strict typing. There are other types of bugs that I absolutely expect to see less of with strict typing.


I agree, I think it’s a good reason to use the static typing in Ruby > 3.0.

Any static analysis tool you can use to catch bugs before runtime in production is something we want.


> I ran Rails in production years back and swore it off then. We had constant memory leaks that seemingly came from Rails itself, and the only solution we had was "just restart the server."

Nothing is perfect. In my book, restarting a server because of memory leaks is an acceptable trade-off versus all the craziness of having to decide on and maintain how to tie together a database, error validation, background jobs, translations, React, Redux, an API to interface with the backend, logging, etc, etc, etc. Also, Go is just part of your system; I'm pretty sure you also have a frontend stack, and complexity increases a lot, as I explained here. Compare that to Laravel + Livewire or Rails + Hotwire. Night and day. I'm pretty sure any serious business will take the restarts any day over the increased complexity and developer time.

> I've been happily running Go backends for the past 7 years now, and they're stable and fast and easy to refactor.

Yes, and I bet something built in 7 years with Go would have taken 7 months with Rails or Laravel.

On the static typing stuff, I'm with you on that. PHP (and Laravel) are a bit closer to that, but nothing is perfect. And for Web Development using Laravel or Rails is a good trade off.


Go is good and all, but it's odd to compare it to the vast feature set that Rails provides. The point of Rails is to give you standard tools so you don't have to consider / reimplement / configure / hook up e.g. background processes for every app you build.


As someone who seems to work with Go backends quite a bit, what's your preferred way of doing this? I've been playing with net/http compatible routers and I've been liking them, but interested to hear what someone with more experience uses. Any good way of dealing with common boilerplate that frameworks like Django and Rails help remove?


On small projects, I just write out the boilerplate. It's annoying but it's straightforward to read and revisit years later.

On larger projects (100k LOC+) I use chi for my router in combination with codegen via sqlc and moq, and I wrote a small program to generate the routes for me automatically with a config file.


sqlc is very interesting. Gives me sqlx (from Rust, not the Go package) vibes. Definitely will play around with this.


There's a variety of projects out there. gorilla/mux, chi, and httprouter are all pretty commonly used for just defining routes. If you want a full-on framework, gin and beego are also common. For the database (I'm unfortunately mainly familiar with postgresql-related projects) there's pgx, sqlx, gorm, and then code generators similar to sqlc, such as sqlboiler and entgo.

Golang encourages libraries and frameworks with a much smaller surface area, so there is indeed a boilerplate issue, but that's also an issue with Golang as a whole.


> I ran Rails in production years back and swore it off then. We had constant memory leaks that seemingly came from Rails itself

Curious - were these actual memory leaks or Rails memory bloat due to how ruby allocates memory and holds on to them?


This is my experience, too. I'm currently working on a TypeScript+Node service, which is decent, but Go is my preference these days.


I’ve seen it a few times where a company replaced one guy running a Rails monolith with jQuery with a 30-person team that is far less effective.


Also, note that when not going the microservices/SPA/Kubernetes route, the alternative is not "old-style reloads on every click and jQuery spaghetti"; I'd say that's equally bad.

Nowadays there are alternative middle ground solutions such as Livewire, Hotwire, LiveView, Unpoly, Htmx, etc which provide a great way to organize the code and keep it maintainable.


I think these are all bad ideas on the other extreme. Once you incur the cost of a round trip to the server the additional latency due to sending HTML instead of JSON is pretty close to 0. You really only need something like Turbolinks to avoid a full page reload/render.

Amusingly enough, at $Current_Job the JSON we send back is larger in size than the HTML it's rendered into. We'd likely have better performance doing all server-side rendering + Turbolinks.


Yeah, this is the case I was talking about with the 90% of cases I was referring to.

At a previous job, we had a many-thousand-line codebase of TypeScript, Redux, observables, epics, thunks, custom server API libraries, websockets, an Elixir backend, Kafka to communicate with other microservices, etc, etc... for... a frigging signup wizard.

Which then failed in so many stupid ways, had almost no server validation (everything unexpected was a 500), and it took days to do the smallest of changes. But hey, don't dare suggest that doing this with Laravel would take 2% of the effort, because you'd be crucified in the next frontend guild meeting.


THIS

I think that if we were more pragmatic and less focused on following the current fashion or trying to do what FAANG does (which is probably just the opposite of what everyone else needs), we wouldn't be in such high need of developers... which... I shouldn't be saying out loud, probably :-)


At the end, it's all trade-offs.

After working with Rails for 10 years, I ended up preferring the opposite end of the spectrum: using small frameworks like Express and writing our own boilerplate.

I agree with Elm's take on boilerplate: it's not boilerplate and glue code that makes application development hard, and I don't think it's always a net positive to build abstraction around it like you see in large frameworks. As a Rails app grows, I just find myself spending more and more time unrolling abstraction to figure out what's going on.

Most things, like authentication, I prefer to just read in the application code itself to see what's being written to the cookie rather than dissect, say, the highly-abstracted Devise gem in a Rails app to debug session issues.


Yes, me too. But it makes absolutely no sense business-wise. We like to play with tech, and learn and use the new shiny. While you're writing your frontend in Elm and building your GraphQL API with Apollo and your serverless functions on Kubernetes, you're causing a cost to your company which could have been avoided.

That's not engineering. That's playing with Legos just because we can.


I don't think this is a fair characterization of their post. They aren't playing with legos. Rails trades boilerplate code savings for abstraction. The GP's point is that boilerplate isn't hard to write, while abstraction makes everything harder. And I think it's a fair criticism. Large Rails projects are difficult to work in because of that abstraction. It can be hard to even figure out what code is running, let alone identify bugs there.


The complexity is still there even with microservices or a Go or a Clojure codebase, it's just that either you've created a mess or you've created your own framework.

In my experience, you either use one of these big frameworks, or you end up building one. I've seen that happen more than once, and these "home-made frameworks" are far worse: less documented, more buggy, and with like 6 different patterns depending on which generation of developers before me built them (and of course, I left my own opinions and mistakes in there too).

Writing microservices in Go or Flask or Sinatra won't magically make all the complexity related to the web go away. It's still there. Now it's distributed across services and implemented mostly from scratch.

If you are not FAANG and don't use one of these big frameworks, you have only two options:

1. Create a distributed spaghetti mess of random libraries tied together.

2. Build your own "big framework".

I've been at several companies that went through the process of breaking up the monolith into separate services. It never finished: every single service was just a proxy to the monolith, everything took twice the time to implement, and now both the services and the monolith have to be kept running and maintained. The only "positive" was the group of devs that wanted to play with new tech and somehow managed to become a team that only works on one of those services, so now they're "free" from the monolith hell. Business-wise it was a disaster; it would have been better to just let those engineers find another job with the tech stack they preferred and hire new ones willing to work on a single codebase and keep things simple. Again, these were not Google-scale companies; they were organizations of 50~100 engineers.


> Large Rails projects are difficult to work in because of that abstraction. It can be hard to even figure out what code is running let alone identifying bugs there.

I read this a lot, but I have never experienced it, even on projects that I have joined after they were already huge.

Rails conventions are pretty reliable. Most Rails developers choose predictable patterns that resemble the framework conventions. Obviously people coming from other frameworks will often build things differently (sometimes fighting Rails to do so), but a few minutes in the REPL makes everything clear.


>I read this a lot, but I have never experienced it, even on projects that I have joined after they were already huge.

You've never been in an IDE, tried a go to definition, and have it not work?


No, but I do not use an IDE to write Rails code.

I can think of a few cases where Rails architecture would be relatively difficult for IDEs: ActiveRecord callbacks, and of course metaprogramming and magic methods like "find_by_x".

In most cases this sounds like a mismatch between IDE and framework, but the "method_missing" chain could be a unique challenge.

I avoid using that construct in application code, and just accept that some of it exists in the framework -- but that it's almost certainly not the cause of any problem I'm debugging.
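
As an illustration (a toy sketch, not ActiveRecord's actual implementation), this is roughly how `find_by_x`-style magic methods ride on `method_missing`, and why there is no definition for an IDE to jump to:

```ruby
# A toy Record class where find_by_<attribute> is never defined anywhere;
# it is synthesized on the fly by method_missing.
class Record
  RECORDS = [{ name: "ada", email: "ada@example.com" }]

  def self.method_missing(name, *args)
    if name.to_s.start_with?("find_by_")
      attr = name.to_s.delete_prefix("find_by_").to_sym
      RECORDS.find { |r| r[attr] == args.first }
    else
      super
    end
  end

  # Keep respond_to? honest for the dynamic finders.
  def self.respond_to_missing?(name, include_private = false)
    name.to_s.start_with?("find_by_") || super
  end
end

Record.find_by_name("ada")   # => { name: "ada", email: "ada@example.com" }
```

"Go to definition" on `find_by_name` has nowhere to go, because no such method exists until the call happens.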


But sometimes it is. As an example, I've been trying to track down the code that runs for an uncaught exception in a controller. It's (probably) somewhere in ActionController, but there's no easy way to find it.


Throw a junior Rails developer into a big app and let them work. It's shocking the amount of overhead we get used to as Rails developers.

You get used to it, but it shouldn't be like that


I think this is a universal truth, with nothing specific to Rails.

What framework/language/environment can you think of where a junior dev can be dropped in to a big project, and not flounder?


It's probably a problem related to any framework with a lot of magic.

With Go and Phoenix, this problem is not there. They are very explicit, so when something unknown is found, what it does is written right there.

With Rails there are things going on at all times: write a record? Some stuff is written to the db, other stuff is triggered, some stuff depends on thread variables, then other stuff is magically saved in other tables, other stuff is updated (touched), which in turn might trigger more things. It is painful for a new developer because all of this happens implicitly.
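
To make that concrete, here's a hedged sketch (hypothetical model, associations, and service objects) of how much hidden work a single `Comment.create!` can trigger in Rails:

```ruby
class Comment < ApplicationRecord
  # touch: true bumps post.updated_at; counter_cache updates
  # posts.comments_count: two extra writes nobody asked for explicitly.
  belongs_to :post, touch: true, counter_cache: true

  after_create :notify_author        # runs inside the transaction
  after_commit :enqueue_index_job    # runs after it commits

  private

  def notify_author
    AuthorMailer.new_comment(self).deliver_later  # hypothetical mailer
  end

  def enqueue_index_job
    SearchIndexJob.perform_later(id)              # hypothetical job
  end
end
```

One `Comment.create!(post: post, body: "hi")` then writes the comment, updates two columns on the parent row, sends an email, and enqueues a job, none of which is visible at the call site.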


Coming from a systems background, I might have a higher tolerance for "implicit" actions. They're always there, and almost never covered in code, although your code depends on them to function.

But I think I understand your point. You can ask a neighbor to "run to the market to get bread and milk" and expect reasonable results -- but you'll need to be more explicit with a foreign visitor.

Rails optimizes for neighbors, but in fairness the phrasebook and guidebooks are excellent. :)


They are, indeed


The abstraction hasn’t hindered me much and I work on a monolith.

One problem I have faced is that when the number of database queries on a page explodes, it becomes hard to optimize without caching. I would prefer to avoid caching, but that doesn’t appear to be the Rails way (and for obvious reasons).

Also, at a company with a Rails app you do sometimes get other sources and processes polluting your Rails app. And when that happens you can’t utilize the efficiencies that rails provides.


How can it not make sense, if when there is an issue with Devise it takes weeks to debug vs 1 day with the hand-rolled code?


I’d add Elixir and Phoenix to that list and then agree with you.


I'd add Phoenix to the list when it gets its documentation in the same echelon as that of Rails or Laravel. It may be as productive if you already know how to use it, but good luck building anything non-trivial on your own if you don't.


I like Phoenix, I really really do...but you are correct regarding docs. The documentation in Phoenix is fine for "what does this function do?". However, if you want the Rails Guides experience - it's just not there.


What do you find lacking in the Phoenix docs? I've always been impressed with the docs in the major Elixir and Phoenix libraries.


There's a good deal that I feel is lacking, but a big issue is the lack of continuity of documentation between Phoenix's component parts. If I want to build a CRUD app with Rails, Django, or Laravel, then I can get all the information I need from their docs. If I want to do the same in Phoenix, I'll have to jump between the docs for Phoenix, Ecto, Plug, and HEEx/LiveView.

Rails, while it has its equivalent of these modules, presents them in a cohesive manner and makes it clear the role they play in the greater context of the application. Phoenix doesn't make it nearly as clear and instead of being able to gradually learn these various concepts as they become relevant you have to take on the burden of them all before being able to even begin to be productive.


Have you looked at the Phoenix Guides [1]?

[1] https://hexdocs.pm/phoenix/overview.html


Yes. They do a poor job of the above and I still stand by what I said.


Don’t you usually learn how to use your tools before you build with them? This feels like a strange response.


As a bronze league engineer who still pulls a decent wage, you absolutely do not need to understand your tools to build things.


I've always found the Phoenix docs to be more like references. There are "guide-like" aspects to them, but in my opinion they're nowhere near the level of the Rails docs. I agree with the person we're replying to: I almost always found myself having to research third-party sources when learning Phoenix (blog posts, IRC, etc.) to get answers to questions that I never had to ask about Rails, Flask, or Django, because their docs covered it very well.

The Rails doc feels like you have DHH at your side guiding you on exactly how to do something in the context of a practical application and whenever you think to yourself "that's great, but what if I want to do...", often times the very next sentence in the docs will answer your exact question.

The Rails routing docs https://guides.rubyonrails.org/routing.html are a great example of this when they talk about namespaces and scopes but there's a million other examples.
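
For instance, that section of the guide walks through the difference between `namespace` and `scope`; a sketch of the distinction it explains:

```ruby
# config/routes.rb
Rails.application.routes.draw do
  # namespace prefixes both the URL path and the controller module:
  # GET /admin/articles -> Admin::ArticlesController#index
  namespace :admin do
    resources :articles
  end

  # scope with module: keeps the path unprefixed but still routes
  # GET /articles -> Admin::ArticlesController#index
  scope module: "admin" do
    resources :articles
  end
end
```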

It feels like it was written by folks who have been through the thick of it and back 100 times over to extract out the exact questions folks would have when using various features of the framework. Each piece of the docs feels like it's a mini book written on par with any book on a technical subject and then there's a completely separate reference guide docs with more examples.

Even the styling of the page itself just feels good. It's really easy to skim and navigate. The Phoenix docs feel more like a big wall of text with a few small headers and code blocks. It's hard to explain, but personally my brain identifies the Rails docs as easy to mentally parse whereas the Phoenix docs are not, based on nothing more than the styling alone.

Overall the Rails docs feel like they are written with a ton of empathy around the person reading them, completely holding your hand from beginning to end to solve a practical issue which is exactly what you need when learning something new. The reference guides are always there if all you care about are a few quick examples.


That’s true. Rails docs are fantastic.

Don’t get me wrong here, I’m a big Rails fan as well. Been using it for a decade at this point. It’s the best development experience out there. But eventually, you’re going to run into issues that are harder to address with it. Happens on every big project I’ve run into.

But, they all became big projects thanks to the productivity of Rails.


I don't think it's coming back. The big players who started with Ruby during the peak of its 2008~ hype curve are having serious scaling pains and are looking to other more performant languages as a way out. Engineers at these companies are going to be taking these lessons elsewhere.

This isn't even mentioning the compelling environmental arguments against a computationally taxing dynamic language like Ruby, which are going to get louder in the next few years.


People too often cite "speed of the language" as the only reason to abandon Rails (and Ruby). But this is a bit of a stretch, because you'll hardly ever have this performance problem at all and if you do, paying a little more for hosting infra "solves it"¹. You'll hardly ever reach the scale at which this truly starts to matter.

I'm certain there are other, far more valid reasons to migrate to Java, .net, Go, Rust or whatever. Rails' opinionated setup might simply not fit your use-case. Lacking language features (Interfaces, Typing, etc). Poor libraries(in your domain). Complex deployment- and operating stack. Too few good and experienced developers who know and like Ruby around and so forth and so on.

There are numerous reasons to move away from Rails and Ruby. Speed is but one. And often the easiest to solve in other ways.

¹ Actually, in my previous role as a Rails consultant I often did performance tuning for Rails. 99.99999% of the time, neither the execution time, nor the GIL, nor the GC is the problem. The database is. Poor database design, in my experience, is the thing that makes most Rails apps slow. For which I blame Rails, because it makes it so damn easy to forego any sane schema design and just cobble together some `@user.devises.active.order(:used_at).messages.new` in your views, which fires the worst-ever query on your database.
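
A hedged sketch of what that kind of chain typically does in a view (hypothetical model and scope names), and the usual eager-loading fix:

```ruby
# Lazily walking associations in a view fires a query per step, per row:
@user.devices.active.order(:used_at).each do |device|
  device.messages.each { |m| m.body }   # one query per device: the N+1
end

# Declaring what you need up front collapses it into a handful of queries:
@user.devices.active.order(:used_at).includes(:messages).each do |device|
  device.messages.each { |m| m.body }   # already loaded, no extra queries
end
```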


Poor database performance is a thing at my current gig. But often the only solution is having a custom view made.


> But often the only solution is having a custom view made.

Sometimes.

But I'd urge you -and every Rails dev struggling with this- to look at how and where you can decouple and separate the concerns in your (quite likely) ball-of-models.

Most often that is a far more sustainable solution. One that helps you not only with performance.


It's not coming back to what it was in 2012 but it's also not disappearing. It's here to stay.


> compelling environmental arguments against a computationally taxing dynamic language

Cost of development has always been the more significant cost over compute power for these businesses. Considering the entire web uses under 10% of global energy, rewriting your entire stack in C++ for a small performance improvement is not going to make a dent in that.

Why would businesses do this for such a small environmental improvement when they could spend half the money putting in solar and produce significantly more green energy than their stack uses?


> The big players who started with Ruby during the peak of its 2008~ hype curve are having serious scaling pains and are looking to other more performant languages as a way out.

Interesting, how do you know about this? I was under the impression GitHub were still mostly happy with Rails.


Github has been rewriting performance-critical components in Go for a while now.

https://github.blog/2020-05-20-three-bugs-in-the-go-mysql-dr...

> Although GitHub.com is still a Rails monolith, over the past few years we’ve begun the process of extracting critical functionality from our main application, by rewriting some of the code in Go—mostly addressing the pieces that need to run faster and more reliably than what we can accomplish with Ruby.


So in other words Rails has been a success for them.


The context here is whether or not they're still happy with it.


> The context here is whether or not they're still happy with it.

In 2018 GitHub got acquired for 7.5 billion dollars. Rails got them to that point and they've been up and running since 2008. I'd say that's a very big success.

Given GitHub's contributions to Rails master and other activity around projects like https://github.com/github/view_component, from the outside it looks like they're still very happy with it. It's hard to say if we'd ever get a real answer on what they think internally, it would be pretty unlikely that their CTO is going to publicly write an official company blog post on "I wish we didn't choose Rails".


> Lots of folks slam Rails for being slow and inefficient despite there being massive sites using it at scale (Shopify, GitHub, etc.) with very commendable response times and uptime.

Github is moving away from activerecord for exactly these reasons:

https://github.blog/2020-05-20-three-bugs-in-the-go-mysql-dr...

> Although GitHub.com is still a Rails monolith, over the past few years we’ve begun the process of extracting critical functionality from our main application, by rewriting some of the code in Go—mostly addressing the pieces that need to run faster and more reliably than what we can accomplish with Ruby.

full disclosure, I'm the author of SQLAlchemy for Python. But I certainly wouldn't expect Github to migrate from Ruby to Python for speed reasons, it would be like trading up your Ford Pinto for a Dodge Dart - not worth a rewrite :). Right now if you have middleware that is to be crushed with billions of reqs per hour, interpreted languages are simply not going to cut it.


That's a good blog post, thanks for posting!

I do not think "moving away from activerecord" is an accurate summary of what you yourself quoted, "extracting critical functionality from our main application… mostly addressing the pieces that need to run faster and more reliably than what we can accomplish with Ruby."

Anyway, ruby and python are indeed pretty close performance-wise, for whatever reason python has become more popular (and it clearly has), it's not performance! Yet people always bring up performance when talking about why ruby isn't more popular... doesn't seem to be hurting python much.


I think it's because when they say Ruby they mean Rails, and the paradigm the Rails ORM uses is good for simple queries, but doesn't scale well. SQLAlchemy/Hibernate use a different paradigm, which scales a little better.


except NodeJS, apparently, and this was as far back as 2011, when LinkedIn switched from Rails to NodeJS

https://venturebeat.com/2011/08/16/linkedin-node/


> At the same time you can still run a single $20 / month server for "smaller" apps even with Rails + Sidekiq + Action Cable + Postgres + Redis all running on the same server to power your solo developed SAAS app with thousands of customers.

This is true but in a narrow sense. Rails and Ruby are inherently inefficient, both in the performance and memory profiles.

Rails is theoretically threadsafe, but not really - I've been hit by a threading issue just within the first hour of testing, and even if Rails was, the ecosystem isn't. Which means, one needs multiple processes, which requires lots of memory.

Speed isn't great, either. I can't really do apples-to-apples comparison, but I think it's realistic to state that equivalent, statically typed, applications are very significantly faster (without explicit optimizations), which requires more computing units.


Things are changing on this front.

We now have better fibers for IO-bound code and ractors for thread-safe CPU-bound code.

Personally I don’t think ractors are completely solid yet, but I don’t think it will be long until they are.

I think Ruby’s future is pretty rosy compared to a few years ago.

I would even say that small projects can avoid Sidekiq and Redis by using GoodJob and Postgres, making them even cheaper to start with.


Regarding the ractors, I'm a big fan: I believe true multithreading in a language with a GIL is a revolutionary change. On the other hand, I also believe that Ruby has always had a culture of single threading, so for the ecosystem to adjust (in practice, to have threadsafe libraries), it will take a very long time (if ever; I'm pessimistic and personally believe the ecosystem will never catch up, but of course, I wish it all the best :)).


I think there is a hacker news law that says that any post about a feature of a language or framework will eventually devolve into a general argument about whether that language or framework is good or not, and have nothing to do with the specific feature :D


Have you looked at Phoenix?

A ton of new Rails features/development you mentioned is inspired by work the Phoenix team built, who were also once core Rails contributors.

https://www.phoenixframework.org/


What does Phoenix have to do with it? The guy can't say anything good about Rails without us talking about Phoenix? Have you looked at Django? Have you looked at Laravel? Have you looked at Go? Rust? Node? We get it, everyone is doing web frameworks nowadays; Rails isn't unique.


Sheesh, why so much hostility? Elixir and Phoenix were created precisely to address Ruby's and Rails' shortcomings, and by noteworthy Rails contributors no less. They're incredibly good (paradigm-shifting even), the community is great, etc. They're also not mainstream yet, so it makes perfect sense to mention them. Just because you're tired of hearing about something it doesn't mean there aren't (lots of) people who have never heard about it. (See https://xkcd.com/1053/)

In fact, if you're tired of hearing about this, you might want to expand your bubble a little bit.

(I have no involvement with them, just a passerby.)


For starters, Phoenix has had "support" for the equivalent of fiber-safe database connection pools forever, and it's advanced enough that you can concurrently run tests that check out database "sandbox" transactions, and you can spawn a task off it "that runs in another fiber" which will still know which database sandbox to use. With about 10 LOC you can make it so your integration test can issue an HTTP request to the server and the "fiber" that handles the request operates in the same sandbox. Oh, and this pattern is part of the standard library, so it will work with just about any other thing that needs to share global state (like mocks), not just databases.


The reason he mentions Phoenix and Elixir is that their concurrency and memory models differentiate them from every other language and framework. Those models make hard things easy that you simply cannot do in other web frameworks.

That is why he mentions it. It’s not “another web framework”, it’s changing the expectations for what you should be able to do without all of the crazy plumbing that was mentioned earlier.

He replied with that because it’s quite literally the answer to the problems pointed out. Not “another web framework, k thanks”.


Phoenix is often mentioned in conjunction with Rails because Elixir's syntax is often described as "Ruby like" and the guy who made Elixir is(was?) a member of the rails core team.


As someone who uses and likes Elixir and Ruby, the syntax is more accurately described as Erlang and Ruby in a car crash.


Yeah I get it, thanks. I still think it would be nice to stop it.


Stop discussing technologies that were made specifically to address the short-comings of the topic of the article?


> Yeah I get it, thanks. I still think it would be nice to stop it.

It would be nice for the rest of us to have informed, considered responses in comments rather than little rants. But it seems as though no one's getting what they want today.


>while having a p95+ response time of <= 100ms.

Which is where the argument began. Emphasis on the 100ms. But let's not derail (no pun intended) into that again.

>PRs like this just mean things are going to get even better than they already are and is probably a precursor to converting a bunch of Rails internals to use Fibers over time.

And the fact this actually got merged. For the longest time watching rails I thought no one was interested. Samuel Williams seems to be the only one pushing for it.


We migrated a Rails app that needed in-memory data structures from Rails to Java. We went from needing rolling restarts and 100 AWS servers, all CPU-bound, to running at 10% CPU on 2 Java servers, never needing a restart.

Ruby and Rails are great for a PoC or a quick CRUD app, but they are not close to being performant, which usually doesn't matter but sometimes does.


It's hard to know what solved the problem here. I wouldn't assume that it's the Java bit, although it could be.


Oh got it, I should outline more of the problem we faced.

We had a high throughput server that needed a fair amount of in memory data to service the requests. The in memory data would update, one set of the data would update very quickly while the larger data set would update slowly.

Two things on this:

1) Ruby is multi-threaded but not truly parallel: the GVL means only one thread executes Ruby code at a time. So when Ruby starts up, you generally create a new process per CPU to ensure you can utilize all CPUs. This is generally fine for a CRUD app but does not work well if you really need to share in-memory data structures in a low-latency way: any change to memory is repeated in each process, so a 32-core machine is 1/32 as efficient with memory.

2) Ruby, at least at the time, had a non-compacting garbage collector. Over time this leads to memory fragmentation, which ends up causing Ruby to run out of pages, and with that you run out of RAM. This means that with Ruby you have to be careful as you get close to a machine's RAM limit while allocating and freeing large spans of memory: if freeing memory doesn't free the page, you can still run out of memory. It looks like a memory leak on the machine even though Ruby appears to still be using the same amount of memory. The only fix for this is rolling restarts. Java uses a compacting garbage collector, which helps alleviate this issue, though it does come with some pauses.

I also found that the Ruby scheduler is not pre-emptive. This might have changed, but at the time it meant that if you had a lot of threads waiting on external IO, the Ruby machine's performance deteriorated. Java's scheduler appeared to be more performant. Looking at this post makes me realize I should turn this into a blog post with links to the actual numbers to demonstrate each issue clearly.
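
For what it's worth, newer Rubies address the fragmentation half of this: Ruby 2.7 added a compacting GC that can be invoked manually (a sketch; exact statistics returned vary by Ruby version):

```ruby
# Ruby 2.7+: manually defragment the heap. GC.compact runs a full GC,
# moves movable objects together, and returns compaction statistics.
GC.start                 # full mark/sweep first
stats = GC.compact       # e.g. a hash with :considered and :moved counts

# Ruby 3.0+ can also compact automatically during major GC cycles:
GC.auto_compact = true if GC.respond_to?(:auto_compact=)
```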


Ruby has a severely lacking ecosystem, IMO, and since it is virtually a language that exists to support Rails, I find it hard to believe it will pick up any more traction than it already has.

Static typing, static analysis tooling, and autoformatting are sorely missed. Whenever I work on a Ruby/Rails codebase, I feel like I've gone a decade into the past. The DX is not that great, and many other ecosystems have comparable DX with a modern toolset.


This article is full of inaccuracies; I don't think the author really understands the subject.

> As Rails continues to replace the usage of threads with fibers

That's not what we're doing... Rails doesn't really use threads anyway. We just introduced an indirection so that the request state can be stored either on thread local or fiber local variables.

> An isolation level determines how database transactions are propagated to other users and systems.

The author is confusing `active_support.isolation_level`, which is either Fiber or Thread, with database transaction isolation levels. The concept is similar, hence why the name was chosen, but they are two very different things.
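
The distinction is visible in plain Ruby, where `Thread.current[]` is actually fiber-local while `thread_variable_get`/`set` are thread-local: those are the two places such request state can live.

```ruby
Thread.current[:request_id] = "abc"                      # fiber-local
Thread.current.thread_variable_set(:request_id, "abc")   # thread-local

result = Fiber.new {
  [Thread.current[:request_id],                          # nil in a new fiber
   Thread.current.thread_variable_get(:request_id)]      # "abc": same thread
}.resume
# result == [nil, "abc"]
```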

> Ideally, in the foreseeable future, we can expect good performance improvements to Rails I/O operations.

Not really, no. I/O will still be I/O. You may gain a tiny bit because of the cheaper "context switches", but it's going to be very marginal for the vast majority of Rails apps.

The goal is to allow the community to use fiber-based servers if they wish, but I (the author of most of these changes) don't expect any substantial performance improvement from it.


Nice - there's a demo repo of this on edge Rails here: https://github.com/machty/rails-edge-async-test


What do people think about Active Record outside of Ruby? I find it strange to couple the storage interface to the entities that you manipulate.

Is the active record pattern popular outside of Rails?


It is a reasonably common design pattern[0].

But it has serious downsides[1], which is a valid reason to forego it for applications that you already know will be complex in domain-logic, transaction-boundaries or where distribution and concurrency are crucial features.

Indeed, the tight coupling to the database is a real and felt problem. It makes migrations and deployments hard, and it makes layering your architecture almost impossible. E.g. I'm currently working on a Ruby (not Rails) application following the Hexagonal Architecture, where ActiveRecord powers the persistence adapter, tucked away on its own. AR is not happy in that role. And we use hardly any features of AR, so it is overkill and will be phased out soon (we didn't design it this way; we merely moved the AR out of an existing tangled mess into its dedicated layer/adapter, and the next logical step is to remove AR entirely).

[0] https://martinfowler.com/eaaCatalog/activeRecord.html [1] https://en.wikipedia.org/wiki/Active_record_pattern#Criticis...
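The port/adapter split described above can be sketched in plain Ruby. All names here are hypothetical; the in-memory adapter stands in for the ActiveRecord-backed one being phased out:

```ruby
# Domain entity: plain data, no persistence knowledge.
Car = Struct.new(:id, :model, keyword_init: true)

# Port: the narrow interface the domain layer depends on.
class CarRepository
  def save(_car) = raise(NotImplementedError)
  def find(_id)  = raise(NotImplementedError)
end

# Adapter: one swappable implementation. An ActiveRecord-backed
# adapter would satisfy the same interface without the domain
# layer ever seeing AR.
class InMemoryCarRepository < CarRepository
  def initialize
    @rows = {}
  end

  def save(car)
    @rows[car.id] = car
  end

  def find(id)
    @rows.fetch(id)
  end
end
```

The domain code only ever sees `CarRepository`, so swapping persistence out (or removing AR) never touches business logic.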


It's an extremely common pattern.

ActiveRecord is an example of an ORM. ORMs are a very common way to connect an object oriented language with a relational database. Since many languages are object oriented, and the databases many people use are relational databases, this is a common solution.


There’s no reason why you have to couple storage interface to business logic.

I’ve done this before. Develop business logic as a separate gem and then include that in a Rails project.

ActiveRecord is actually entirely optional in Rails, it’s just the default.

You can also use POROs as presenters and mixin ActiveModel if you want things like validations.

Then couple that with Sequel as an alternative to ActiveRecord to run data access objects.

Sequel is often regarded as a more permanent alternative to ActiveRecord. It’s not a drop in replacement though so that might be a consideration for team skills/support.


s/permanent/performant/


I believe it's quite common outside of Ruby world - for example both Python and PHP have popular active record implementations (Django ORM and Laravel ORM respectively).


In Rails you can replace the ActiveRecord layer with regular Ruby classes that include some ActiveModel modules.

In this architecture, you’re using these classes to replace models. Which gives you a layer of abstraction between the UI and the database.

Querying becomes more difficult and some libraries won’t work out of the box, but it’s quite easy to implement.

Most people don’t do this because this architecture isn’t the default, takes more time, and requires a larger application to justify doing a lot of work twice.


It's not quite coupled to the storage, in that you can use a number of different drivers with ActiveRecord for the actual storage portion. But I think you might mean coupling the model so tightly to its CRUD logic.

Inside the application you are actually trying to get all the benefits of having an Active Record model, not all the benefits of having a Car/Cat/Dog model. So you don't actually have a Car model; you have an Active Record model that represents a Car. Thinking about it that way sets the right expectations for what you're trying to achieve.


Can you realistically open thousands of fibers, though? You'll hit the DB connection limit pretty soon, no? I'm just wondering what the end goal was: do people hit the maximum thread count and need fibers?

EDIT: I might be wrong here; it could be that each fiber releases the connection when it's de-scheduled. I'd still like to get a grasp on how this improves performance...
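For intuition, the change being discussed largely boils down to which object a connection lease is keyed on. A toy sketch (all names hypothetical; the real ActiveRecord pool also handles blocking waits, timeouts, and reaping):

```ruby
# Toy model of what "fiber-safe" means for a pool: the lease is keyed
# on either the current thread or the current fiber.
class TinyPool
  def initialize(size:, isolation: :thread)
    @isolation   = isolation
    @connections = Array.new(size) { Object.new } # stand-in connections
    @leases      = {}
    @mutex       = Mutex.new
  end

  def checkout
    @mutex.synchronize do
      @leases[owner] ||= begin
        conn = @connections.pop
        raise "pool exhausted" unless conn
        conn
      end
    end
  end

  def checkin
    @mutex.synchronize do
      conn = @leases.delete(owner)
      @connections.push(conn) if conn
    end
  end

  private

  # With :thread isolation, every fiber on a thread shares one lease;
  # with :fiber isolation, each fiber gets its own.
  def owner
    @isolation == :fiber ? Fiber.current : Thread.current
  end
end
```

With `isolation: :thread`, two fibers multiplexed on one thread would silently share a connection (and any open transaction on it); with `isolation: :fiber`, each gets its own, which is what a fiber-based server needs.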


Do you mean DB connections on the DB server, or some connection limit in Rails? For the DB server, you can scale that out horizontally, so it doesn't have to be a hard limit if you need more connections.


The fiber can't release the connection when yielding if there's an open transaction.


Hey! So what's your take on this, is this something Sidekiq can benefit from or not so much?


Realistically I don't see any benefit at all. That's not to say it's a bad change, just not one that provides much of any real world benefit to a thread-based system like Sidekiq. I would love to be proven wrong though!


My biggest gripe with Rails has been the lack of real concurrency without a large number of cores and threads (where each worker consumes an entire Rails app's worth of memory). It doesn't take long before you're chugging through 12+ GB of memory.



