I've been using Elixir for the past 5-6 years for my startup. We use pg_notify extensively to broadcast changes between running nodes (basically, use Phoenix.PubSub locally in our apps, with a GenServer to subscribe+re-broadcast using pg_notify).
This has been a really elegant and low-complexity way to get distributed pubsub without the complexity of running a distributed erlang cluster (which seems a lil bit painful in a K8S+Continuous Deploy world)
There -are- some big downsides to be aware of though.
1. You can't use PgBouncer w/ LISTEN/NOTIFY. This has been really painful because of the high memory overhead of a pgsql connection + elixir keeping a pool of open pgsql connections. The tried and true method of scaling here is to just use PgBouncer. We've kicked the can on this by vastly over-provisioning our pg instance, but this has cost $10s of thousands on the cloud. Of course, it's solvable (dedicated non-pgbouncer connection pool just for LISTEN/NOTIFY, for example), but painful to unwind.
2. The payload has a fixed size limit (8KB, IIRC). This has bitten us a few times!
Even though I really like pg_notify, I think that if I were starting over, I'd probably just use Redis Pub/Sub to accomplish the same thing. Tad bit more complex if you're not already running Redis, but without the downsides. (Of course, w/ Redis, you don't get the elegance of firing a notification via a pg trigger)
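A common workaround for the payload cap is to keep small payloads inline and, for anything bigger, send only a key that subscribers dereference. A minimal sketch with illustrative names (notify/store/fetch stand in for pg_notify, an overflow-table or cache write, and the corresponding read; none of this is a real library):

```python
MAX_NOTIFY_BYTES = 8000  # Postgres caps NOTIFY payloads at just under 8000 bytes

def safe_notify(notify, store, channel, payload):
    """Send the payload inline if it fits, otherwise store it and send a reference.

    notify(channel, text) and store(payload) -> key are stand-ins for
    pg_notify and an overflow table / cache write.
    """
    if len(payload.encode("utf-8")) <= MAX_NOTIFY_BYTES:
        notify(channel, payload)
    else:
        key = store(payload)
        notify(channel, "ref:" + key)

def receive(fetch, message):
    """Resolve a message back to its full payload on the subscriber side."""
    if message.startswith("ref:"):
        return fetch(message[4:])
    return message
```

The subscriber pays an extra round trip only for oversized payloads, and small messages keep the low-latency inline path.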
We got around this at my company by pooling all of the LISTEN/NOTIFY streams onto a single database connection in software. Here's a sample implementation (Python-flavored; sql() stands in for issuing a statement on that shared connection):

    from collections import defaultdict

    listeners = defaultdict(list)  # channel -> list of callbacks

    def software_listen(channel, callback):
        if not listeners[channel]:        # first subscriber: start listening
            sql("LISTEN " + channel)      # quote the channel name if it's user-supplied
        listeners[channel].append(callback)

    def on_message(channel, data):        # invoked by the driver on each NOTIFY
        for listener in listeners[channel]:
            listener(channel, data)

    def unlisten(channel, listener):
        listeners[channel].remove(listener)
        if not listeners[channel]:
            sql("UNLISTEN " + channel)
For #1 I've been keeping a keen eye on pgcat [1], in particular https://github.com/postgresml/pgcat/issues/303, which implies that it should be possible to add support for LISTEN/NOTIFY in transaction mode.
Triggers can affect throughput quite a bit on busy tables.
And we didn't want people's schemas polluted with triggers.
But we also use Erlang distribution and Phoenix.PubSub because, with a global network, clients connected to the same node in the same region get normal broadcast messages to each other faster. If we ran them through Postgres or Redis, the added latency wouldn't work for a global thing.
> Postgres offers quick and simple Notifications that can help you react to changes in your database without much overhead. They are particularly interesting if you can’t use Phoenix’s PubSub, for example, if another non-Elixir application also makes changes to your database.
> Be aware that this listener can easily become a bottleneck if you have lots of messages. If you can’t handle the messages quickly enough, the message queue will fill up and crash your application. If you’re worried about this case, you could create one listener per channel or use a PartitionSupervisor to start more handlers and spread out the work.
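The PartitionSupervisor suggestion amounts to hash-partitioning channels across workers so one busy channel can't back up the others, while per-channel ordering is preserved. A minimal sketch of just that routing idea (plain Python, not Elixir; mailboxes stand in for the partitioned handler processes):

```python
import zlib

NUM_PARTITIONS = 4
# One mailbox per handler; in Elixir these would be PartitionSupervisor children
mailboxes = [[] for _ in range(NUM_PARTITIONS)]

def dispatch(channel, message):
    # crc32 is stable, so a given channel always routes to the same handler,
    # which keeps messages for that channel in order
    idx = zlib.crc32(channel.encode("utf-8")) % NUM_PARTITIONS
    mailboxes[idx].append((channel, message))
```

Each handler then drains only its own mailbox, so slow processing on one partition doesn't fill the shared listener's queue.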
Why not insert into an events table instead of using pg_notify? That way the events are recorded within the database itself, can be processed by any component, and the state of processing can be saved in the table, so even if a component dies it can resume (and can even fan out the actual processing to workers). Further, you have a record of all events, along with the flexibility of interacting with the event information via SQL; and with partitioning you get a clean way to manage performance plus the ability to easily archive past/processed events.
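A minimal sketch of that pattern, with SQLite standing in for Postgres (in Postgres, concurrent workers would typically claim rows with SELECT ... FOR UPDATE SKIP LOCKED):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        id           INTEGER PRIMARY KEY,   -- insertion order
        payload      TEXT NOT NULL,
        processed_at TEXT                   -- NULL until a worker finishes it
    )
""")

def emit(payload):
    # Writers just insert; no NOTIFY needed, the table *is* the queue
    conn.execute("INSERT INTO events (payload) VALUES (?)", (payload,))

def process_pending(handler):
    # Pick up everything not yet processed, oldest first, and mark it done
    rows = conn.execute(
        "SELECT id, payload FROM events WHERE processed_at IS NULL ORDER BY id"
    ).fetchall()
    for event_id, payload in rows:
        handler(payload)
        conn.execute(
            "UPDATE events SET processed_at = datetime('now') WHERE id = ?",
            (event_id,),
        )

emit("user:42:updated")
emit("order:7:created")
seen = []
process_pending(seen.append)   # seen == ["user:42:updated", "order:7:created"]
```

If a worker dies mid-batch, the unprocessed rows are simply picked up on the next poll, and past events stay queryable with plain SQL.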
If you want to listen to database changes, check out Debezium. Instead of triggers, it takes advantage of the more recent CDC functionality that most SQL databases have implemented today. The difference is that triggers run on every transaction, while CDC works off the redo log. This makes it possible to transfer database changes with minimal performance impact.
One of my clients uses a tool in the same space as Debezium, called Qlik Replicate, to listen for log-based CDC and replicate the data to a different database. The challenge we found was that not everything was recorded into the redo log, like DELETEs wrapped inside functions in one vendor's database. There was a long list of caveats we eventually uncovered beyond even the documented ones. This challenge arises even when performing CDC between two databases from the same vendor on the same version, somewhat narrowing the use cases where we could apply CDC.
If you must have bit-level accurate replication, catching gaps after the replication completes is, we concluded after a year of chasing that chimera, a deep rabbit hole with an event horizon that constantly recedes into the future. If you can tolerate some replication errors, and can tolerate not knowing where they happened without a lot of investigation, then CDC works great.
CDC gets us close, but I'm still looking for someone who is working on covering the edge cases that redo logs alone don't address.
If you want to listen to database changes in Elixir you can also get really good stuff done with Cainophile (https://github.com/cainophile/cainophile). Same mechanism. I don't know the details of Debezium, so I can't say if you'd be leaving fantastic things on the table, but I've had good fun with Cainophile. For example, I've used it in my videos on Electric SQL to react to changes in a Postgres database. It matches nicely with realtime-ish UI via LiveView, so it meshes really well with the current Elixir stack.
That library is great but pretty dated. I put together WalEx, which borrowed a good bit from Cainophile but uses the new WAL listening logic in Postgrex:
What is dated? WalEx is still using logical replication slots, right? Trying to bring myself up to speed. Currently am using cainophile and would like to understand what WalEx is offering above and beyond. Cainophile isn't actively maintained but I think that's because it is feature complete, at least according to the author's concerns.
That is interesting. So this would mean using what I already have when using Ecto, instead of the other Postgres client I think Cainophile uses. Sounds good to me.
Yeah I was gonna say, there are probably ways to use the PostgreSQL Write-Ahead Log (WAL) to stay up to date with every change, without having triggers. This CDC you mention sounds similar to that.
Yes. Debezium is the charm for this. If your systems are normal size, as in not Amazon scale, the embedded engine is a great addition to a codebase. It doesn't have the fault-tolerance of the full deployment, but it's suitable for many, many purposes.
An issue with triggers is that they don't really live in your codebase. It's too easy to forget they even exist, and maintenance and support will eventually become impossible. The benefit of Debezium is that everything lives in your codebase, and reasoning takes less cognitive overhead.
Being the scale of Amazon has nothing to do with whether or not fault-tolerance is important to an application. In fact, Amazon has far fewer real needs for fault-tolerance than many other businesses (it's a retail site!).
You should use embedded mode if you do not require fault-tolerance and can miss updates. Otherwise, don't. Regardless of scale.
Most popular databases have change data capture as a feature. They read backups of the transaction log and extract events they care about (dml operations against subscribed tables) and then store the results into shadow tables. They tend to be relatively low overhead compared with many hand-rolled solutions.
One other benefit is they capture all the changes to the underlying data, not just the net changes.
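To make the all-changes-vs-net-changes distinction concrete: if a row's balance goes 10 → 20 → 30 between two syncs, the log captures both updates, while a snapshot diff sees only the net result. A toy illustration:

```python
# Each entry: (operation, column, old_value, new_value)
log = [
    ("update", "balance", 10, 20),
    ("update", "balance", 20, 30),
]

def net_change(entries):
    """Collapse a run of updates into one: first old value, last new value."""
    return [(entries[0][0], entries[0][1], entries[0][2], entries[-1][3])]

# A CDC consumer sees both intermediate versions (len(log) == 2);
# a snapshot diff would only see the collapsed net effect:
assert net_change(log) == [("update", "balance", 10, 30)]
```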
It’s important to realize though that CDC records change information but isn’t a mechanism to move it anywhere. You would still have to devise a means to move the data to another system.
Debezium is a data movement tool that uses CDC for the underlying tracking.
Listening to your database — is there a way to create an audio stream so you can actually "listen to" reads (a low hum), writes (tinkling wind chimes), and failed transactions (BANG! CRASH! BOOM!)? That would be a useful ambient sound for a DBA's office, wouldn't it?
I thought this was going to be about using audio to literally listen to the database, along the lines of "What different sorting algorithms sound like":
https://www.youtube.com/watch?v=t8g-iYGHpEA
I actually have this exact problem right now with SQL Server on AWS RDS. Unless I want to pay for standard+ editions in my dev/stage/qa/etc. environments, I can't use the baked in CDC features. And because of the minimum instance sizes for Standard+ edition, it costs ~1700 bucks per month per database. This is fine for production, because I need features like High Availability, but paying a significant premium over web/express in those environments seems like lighting money on fire.
We're already tracking changes for the purposes of time travel queries and other auditing purposes using Temporal Tables (SQL:2011 feature). I'm thinking a cron job triggering a lambda every minute should be sufficient to read from the history tables and publish out change data events over a bus.
I'm not sure how this interacts with temporal tables, but I know that I've run into issues before just trying to track something with a high watermark, in that long running transactions might add records "in the past" behind your watermark that you track, so you need some mechanism to make sure you're capturing those (i.e., you can't just track an auto-inc primary key). Unfortunately, I think CDC is really the best way to handle this.
I agree that it is probably the best choice, I'm just not willing to >quadruple my server costs for the feature, unfortunately. I appreciate you taking the time to reply, and I did some further research.
It looks like as long as transactions are not taking excessive time to complete, that temporal tables will be sufficient, since the history tables get marked with the transaction begin times. I'll use a sliding window approach and dedupe.
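The sliding-window-plus-dedupe idea can be sketched as follows (illustrative names; begin_time stands in for the transaction begin time recorded in the temporal history table):

```python
def poll_new_events(rows, poll_time, seen_ids, overlap=120):
    """Scan history rows from (poll_time - overlap) onward and dedupe by row id,
    so rows that land 'in the past' via long-running transactions are still
    picked up by a later poll.

    rows: iterable of (row_id, begin_time, payload) tuples.
    """
    cutoff = poll_time - overlap
    fresh = []
    for row_id, begin_time, payload in rows:
        if begin_time >= cutoff and row_id not in seen_ids:
            seen_ids.add(row_id)
            fresh.append(payload)
    return fresh
```

The overlap just needs to exceed the longest transaction you expect; the seen_ids set (or an equivalent high-watermark table) keeps the overlapping windows from emitting duplicates.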
V1 requires running a sweeper procedure in the background, but an upcoming v2 version does without the sweep loop. Unfortunately it's too fresh for the README to be updated, but there are relevant lines in the tests to show it off:
The "in the past" problem can be solved in a number of ways (misc locking patterns). I've played with a number of such schemes in mssql-changefeed (see sibling post).
CDC has a 20-40 second delay on MSSQL, which makes it unusable for many purposes.
Debezium is a cool project, though fair warning: it does come with a fair amount of ongoing maintenance. And you should prolly be using Kafka and be comfortable with the JVM generally.
FWIW, we (estuary.dev) are an open-source and fully managed tool for building CDC pipelines out of Postgres from the WAL.
Pretty cool! Not the same thing, but for Flyway + Java I have a simple GitHub Action that, on push to develop, checks the diff for any *.sql files and mails them to the DA team, as they don't have access to our repo.
An approach I moved to from this was to use the DB trigger to write jobs directly to Oban's oban_jobs table.
Oban jobs run almost instantly when they are queued, so there's no perceptible difference in speed, but you get all the benefits of resilience that Oban provides: your webapp's connection can go down, such as during a deploy, and still catch up reliably.
It's also handy to be able to use the oban_jobs table for debugging purposes.
Very interesting project, thanks for linking. Was thinking about building something like Oban.Peer a while ago. We're not using Elixir. Might use Oban as an example :-)