I'd be fascinated to know what sort of things it is hard to write Postgres extensions for and how that interplays with their existing architecture choices, if you have the time to give some examples.
You know, we almost wrote TB as a Postgres extension!
I think it would have been the right thing to do 5 years ago, before some of the groundbreaking research that came out in 2018, like fsyncgate and “Protocol-Aware Recovery for Consensus-Based Storage”, which really changed how distributed databases need to be designed [1].
These days we also have io_uring, Deterministic Simulation Testing, and safer systems languages (Rust, Zig). And high availability, i.e. consensus, almost has to be part of the (distributed) database going forward.
Beyond this, in the case of TB, a Postgres extension didn't satisfy our storage fault model, or our design goals of tolerating tail latency from gray failure and of static memory allocation.
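To make the static memory allocation goal concrete, here's a minimal sketch (in Rust, purely illustrative rather than TB's actual code): every buffer is sized from config and allocated once at startup, so the hot path never touches the allocator and memory use is bounded up front.

```rust
// Illustrative only: all memory is sized and allocated at startup,
// so steady-state operation never calls the allocator.
struct Replica {
    message_pool: Vec<[u8; 4096]>, // fixed-capacity pool of message buffers
    journal: Vec<u8>,              // preallocated journal space
}

impl Replica {
    fn new(max_messages: usize, journal_size: usize) -> Self {
        Replica {
            message_pool: vec![[0u8; 4096]; max_messages],
            journal: vec![0u8; journal_size],
        }
    }

    // The hot path only hands out buffers that already exist.
    fn message_buffer(&mut self, index: usize) -> &mut [u8; 4096] {
        &mut self.message_pool[index]
    }
}
```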
What we're also excited about with TB is the vision that people will one day write their own extensions for TB, swapping out the accounting state machine for another, with TB doing all the distributed heavy lifting.
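To sketch what that could look like (a hypothetical interface, again in Rust rather than Zig, and not TB's actual API): consensus and the replicated log handle ordering and durability, and an extension only supplies a deterministic state machine that applies committed operations.

```rust
use std::collections::HashMap;

// Hypothetical extension point: any deterministic state machine will do.
trait StateMachine {
    type Operation;
    fn apply(&mut self, op: &Self::Operation);
}

// The built-in accounting state machine as one possible implementation...
struct Accounting {
    balances: HashMap<u64, i64>,
}

struct Transfer {
    debit_account: u64,
    credit_account: u64,
    amount: i64,
}

impl StateMachine for Accounting {
    type Operation = Transfer;
    fn apply(&mut self, op: &Transfer) {
        *self.balances.entry(op.debit_account).or_insert(0) -= op.amount;
        *self.balances.entry(op.credit_account).or_insert(0) += op.amount;
    }
}

// ...while the replication layer stays the same for any state machine.
struct Replicated<S: StateMachine> {
    log: Vec<S::Operation>, // consensus (elided here) agrees on this order
    state: S,
}

impl<S: StateMachine> Replicated<S> {
    // Every replica applies committed operations in the same order,
    // so replicas stay in sync no matter which state machine is plugged in.
    fn commit(&mut self, op: S::Operation) {
        self.state.apply(&op);
        self.log.push(op);
    }
}
```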