Each language group across the globe has its own dominant messaging app: the US has Messenger, Korea has KakaoTalk, Japan took to LINE, China built WeChat, Russia picked Telegram, and so on. The Meta Facebook/Messenger/Instagram triad isn't the global default for social apps the way it might look to people from the US.
And I don't think it takes conspiracy theories to explain it: maybe users don't like platforms that aren't dominated by other speakers of their primary language, or maybe something else prevents an app experience from being optimized for two distinct cultures at the same time.
This isn't really true. WhatsApp was used pre-acquisition and continues to be dominant throughout LATAM, Africa, and Europe in addition to US/NA. Only in the APJC region and Russia do we see significant divergence in messaging apps.
Having traveled extensively in these places, I always theorized it was due to UX behavior aligning well with the local languages. While the countries WhatsApp dominates speak different languages, they all use the Latin alphabet. In Russia and APJC, many non-Latin alphabets are in use, and those languages may also be written and read in different directions than Romance and Germanic languages.
One advantage of Telegram over WhatsApp is that you don't have to display your phone number to your contacts and random people in group chats and blogs.
With some amusing exceptions: doctors are exclusively on WhatsApp; older (60+) people are often only on WhatsApp (and pre-Microsoft Skype before that).
LINE is very popular in Thailand for unclear reasons; I've heard the theory that its cute sticker packs set it apart in the early days. In the rest of SEA, WhatsApp is the most popular.
Listener callouts don't have to be part of the synchronous writes. If you keep a changelog, say, listeners can be notified asynchronously. These checks can also be batched and throttled to minimize call-outs.
I would suspect this is what Google does with Cloud Firestore.
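Roughly what I have in mind, as a minimal Swift sketch (the ChangeRecord/Listener/ChangelogNotifier types are made up for illustration, not Firestore's actual mechanism): the write path only appends to the log, while a background queue batches pending changes and throttles callouts to at most one flush per interval.

    import Foundation

    // Hypothetical change record appended to the changelog by the write path.
    struct ChangeRecord {
        let key: String
        let value: String
    }

    // Hypothetical listener interested in a set of keys.
    struct Listener {
        let id: Int
        let watchedKeys: Set<String>
        let notify: ([ChangeRecord]) -> Void
    }

    // Drains the changelog off the write path: writes only append to the log,
    // and a background queue batches pending changes and calls listeners
    // at most once per `interval` (the throttle).
    final class ChangelogNotifier {
        private let queue = DispatchQueue(label: "changelog.notifier")
        private var pending: [ChangeRecord] = []
        private var listeners: [Listener] = []
        private let interval: TimeInterval
        private var flushScheduled = false

        init(interval: TimeInterval = 0.1) {
            self.interval = interval
        }

        func addListener(_ listener: Listener) {
            queue.async { self.listeners.append(listener) }
        }

        // Called from the (synchronous) write path: just appends, no callouts.
        func record(_ change: ChangeRecord) {
            queue.async {
                self.pending.append(change)
                self.scheduleFlushIfNeeded()
            }
        }

        private func scheduleFlushIfNeeded() {
            guard !flushScheduled else { return }   // throttle: one flush per interval
            flushScheduled = true
            queue.asyncAfter(deadline: .now() + interval) { self.flush() }
        }

        // Batches all changes accumulated during the interval into one callout
        // per interested listener.
        private func flush() {
            flushScheduled = false
            let batch = pending
            pending.removeAll()
            for listener in listeners {
                let relevant = batch.filter { listener.watchedKeys.contains($0.key) }
                if !relevant.isEmpty { listener.notify(relevant) }
            }
        }
    }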
If you’re throttling, then your database can’t actually keep up with broadcasting changes. Throttling only helps ensure the system keeps working even when there’s a sudden spike.
Batching only helps if you’re able to amortize the cost of notifications, but it’s not immediately clear to me that there’s an opportunity to amortize, since by definition all the bookkeeping to keep track of the writes would have to happen (roughly) in the synchronous write path (and require a fair amount of RAM when scaling). The sibling poster mentioned taking read/write sets and doing intersections, but I don’t think that answers the question for several reasons I listed (i.e. it seems to me like you’d be taking a substantial performance hit on the order of O(num listeners) for all writes, even if the write doesn’t match any listener). You could maybe shrink that to log(n) if you sort the set of listeners first, but that’s still an insertion cost of O(m log n) for m items and n listeners (or O(mn) if you can’t shrink it). That seems pretty expensive to me because it impacts all tenants of the DB, not just those using this feature…
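To make that cost concrete, here’s a rough sketch of the sorted-listener matching in Swift (hypothetical KeyListener type, assuming each listener watches a single exact key): every write pays the O(log n) binary search whether or not anything matches, which is exactly the per-write overhead I’m worried about.

    import Foundation

    // Hypothetical listener that watches one exact key.
    struct KeyListener {
        let watchedKey: String
        let id: Int
    }

    // With listeners sorted once by watched key, each write costs O(log n)
    // to match instead of O(n); m writes then cost O(m log n) total.
    func matchingListeners(for writtenKey: String,
                           in sortedListeners: [KeyListener]) -> [KeyListener] {
        // Binary search for the first listener whose key is >= writtenKey.
        var lo = 0
        var hi = sortedListeners.count
        while lo < hi {
            let mid = (lo + hi) / 2
            if sortedListeners[mid].watchedKey < writtenKey {
                lo = mid + 1
            } else {
                hi = mid
            }
        }
        // Collect the (usually zero or few) listeners watching exactly this key.
        var result: [KeyListener] = []
        var i = lo
        while i < sortedListeners.count && sortedListeners[i].watchedKey == writtenKey {
            result.append(sortedListeners[i])
            i += 1
        }
        return result
    }

    let listeners = [
        KeyListener(watchedKey: "users/alice", id: 1),
        KeyListener(watchedKey: "users/bob", id: 2),
    ].sorted { $0.watchedKey < $1.watchedKey }

    // A write that matches no listener still pays the O(log n) search.
    print(matchingListeners(for: "orders/42", in: listeners).map(\.id))  // []
    print(matchingListeners(for: "users/bob", in: listeners).map(\.id))  // [2]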
They abandoned it at their lowest point to focus on Zen. Now they seem to be picking up the slack. The upcoming Instinct MI300 APU brings HBM as unified system memory, together with hardware cache coherency between CPU cores and GPUs, within and across NUMA nodes.
Each process has its own GCD queue hierarchy that is executed by an in-process thread pool. It does have some bits coupled with the kernel, though, for things like mapping queue/task QoS classes to Darwin thread QoS classes and, relatedly, handling priority inversion.
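A minimal illustration of that hierarchy using the public Dispatch API (the queue labels and work items are made up): child queues target a root queue, QoS classes propagate down to the pool threads draining them, and flags like .enforceQoS feed into the kernel-assisted priority handling mentioned above.

    import Foundation
    import Dispatch

    // A serial "root" queue; its QoS influences the Darwin thread QoS the
    // pool threads run at when draining it.
    let root = DispatchQueue(label: "com.example.root", qos: .utility)

    // Child queues target the root queue, forming a hierarchy: their blocks
    // ultimately funnel through `root`, executed by the shared thread pool.
    let parsing = DispatchQueue(label: "com.example.parsing", target: root)
    let network = DispatchQueue(label: "com.example.network",
                                qos: .userInitiated,
                                attributes: .concurrent,
                                target: root)

    parsing.async {
        // Runs on some pool thread whose QoS is derived from the queue hierarchy.
        print("parsing work")
    }

    // A higher-QoS block enqueued behind lower-QoS work is where the
    // priority-inversion handling comes in: the system may boost the
    // thread servicing `root`.
    network.async(qos: .userInteractive, flags: .enforceQoS) {
        print("urgent network work")
    }

    // Keep the process alive long enough for the async blocks to run
    // (only needed in a command-line sample like this).
    Thread.sleep(forTimeInterval: 1)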