Thinking about it, with proper library design the 'custom allocator' should never be baked into a dependency, but instead injected by the user of the dependency, all the way from the top (and that user can then simply inject a different, more robust allocator). It's a design philosophy and convention though, so it's hard to enforce at the language level.
The language can't enforce it, but the standard library can, by requiring the allocator to be supplied to the containers at initialization. (Which Zig does, or at least did when I last looked at it a few years back.)
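Zig aside, the same shape exists in C++'s std::pmr containers, where the memory resource is handed in at construction and the container never decides for itself where its memory comes from. A minimal sketch:

```cpp
#include <array>
#include <memory_resource>
#include <vector>

int main() {
    // The caller owns the allocator and injects it at construction time;
    // the container itself has no say in where its memory comes from.
    std::array<std::byte, 1024> buffer;
    std::pmr::monotonic_buffer_resource arena{buffer.data(), buffer.size()};

    std::pmr::vector<int> values{&arena};   // allocator supplied at initialization
    for (int i = 0; i < 10; ++i) values.push_back(i);
}
```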
The problem with that approach is that you end up with a “function coloring problem” akin to async/await (or `Result` for that matter). Functions that allocate become “red” functions that can only be called by functions that themselves allocate.
Like `Result` and async/await, it has the benefit of making things more explicit in the code. On the flip side, it has a contaminating effect that forces you to refactor your code more than you otherwise would, and it causes a combinatorial explosion in helper functions (iterator combinators, for instance) if the number of such effects grows too high. So there's a balance between explicitness and the burden of adding too many effects like these (or you go full “algebraic effects” in your language design, but then the complexity budget of your language takes a big hit; that's unsuitable for either Rust or Zig, which already have their own share of alien-ness: borrowck and comptime, respectively).
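To make the coloring concrete, here's a hedged C++ sketch with made-up helper names: once the leaf function needs an allocator, every caller on the path has to take one and forward it, so the “red” color spreads upward.

```cpp
#include <memory_resource>
#include <string>

// Leaf function that actually allocates: it needs an allocator, so it takes one.
std::pmr::string make_label(const char* name, std::pmr::memory_resource* mem) {
    std::pmr::string s{"label: ", mem};
    s += name;
    return s;
}

// Doesn't allocate itself, but must still accept and forward the allocator.
std::pmr::string describe(const char* name, std::pmr::memory_resource* mem) {
    return make_label(name, mem);
}

int main() {
    std::pmr::monotonic_buffer_resource arena;
    auto label = describe("player", &arena);   // the "color" reaches the top
    (void)label;
}
```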
Odin solves that with an implicit context pointer (which, among other things, carries an allocator) that's passed down the call chain (but that also means all code is 'colored' with the more expensive color of an implicitly passed extra pointer parameter).
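Not Odin code, but a rough C++ approximation of the idea using a thread-local context (Odin actually passes the context as a hidden parameter, so this is only an analogy):

```cpp
#include <cstdio>
#include <memory_resource>

// Rough analogue of an implicit context: a thread-local pointer to the
// "current" allocator that callees read instead of taking a parameter.
struct Context {
    std::pmr::memory_resource* allocator = std::pmr::get_default_resource();
};

thread_local Context current_context;

void* context_alloc(std::size_t n) {
    // The callee picks up the allocator implicitly, no extra parameter needed.
    return current_context.allocator->allocate(n);
}

int main() {
    std::pmr::monotonic_buffer_resource arena;
    current_context.allocator = &arena;   // caller swaps the allocator in
    void* p = context_alloc(64);
    std::printf("allocated at %p\n", p);
}
```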
A global allocator is IMHO worse than either alternative though.
Also, from my experience with Zig, it's not such a big problem. It's actually a good thing to know which code allocates and which doesn't; it improves trust when working with other people's code.
I tend to like the more explicit approach in general (I typically like async/await and Result more than exceptions), but at the same time I acknowledge that there's a trade-off and that a language cannot make everything explicit without becoming unusable. So each language must make its own choice about which effects are going to be explicitly written down in code and which are not, and it's going to depend a lot on the expected use case for the language.
With Zig aiming at the kind of code you'd write in C, it doesn't surprise me that it works pretty well. (Also, I'm a bit sceptical about the actual future of the language, which IMHO came a good decade too late, if not two: I feel that Zig could have succeeded where D couldn't, but I don't see it achieving anything nowadays, as the value proposition is too low IMHO. Except as a toolchain for C cross-compilation, actually, but not as a language.)
Austral could use its linear types to deny any dependency the ability to allocate memory, just like it can deny IO, giving those dependencies access only through a provided type. They did it for supply chain safety, but it also lets you specify how your entire dependency tree handles all the system resources.
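C++ can't enforce linearity, but the capability idea can at least be sketched with a move-only token (hypothetical names, purely illustrative): a dependency that is never handed the token has no sanctioned way to allocate.

```cpp
#include <memory_resource>

// Hypothetical capability token: move-only, so there is at most one owner.
// A dependency can only allocate if the caller hands it this token.
class AllocCapability {
public:
    explicit AllocCapability(std::pmr::memory_resource* mem) : mem_(mem) {}
    AllocCapability(const AllocCapability&) = delete;
    AllocCapability& operator=(const AllocCapability&) = delete;
    AllocCapability(AllocCapability&&) = default;
    AllocCapability& operator=(AllocCapability&&) = default;

    void* allocate(std::size_t n) { return mem_->allocate(n); }

private:
    std::pmr::memory_resource* mem_;
};

// Without an AllocCapability parameter, this dependency couldn't allocate
// (at least not through the sanctioned path).
void* dependency_work(AllocCapability& cap) {
    return cap.allocate(128);
}

int main() {
    std::pmr::monotonic_buffer_resource arena;
    AllocCapability cap{&arena};
    dependency_work(cap);
}
```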
>A global allocator is IMHO worse than either alternative though.
Just FYI, console games settled on a global allocator, because everyone allocates and the game needs to run at, say, 7 out of 8 GB used consistently. That makes passing pointers to allocators around a complete waste of time and space. There are small parts of the code with explicit pooling and allocator pointers, but they're maybe 5% of the total code.
It's funny: when the C++17 standard got PMR allocators, which make the dream of explicitly passing allocators around come true, folks noticed that 8 bytes in every string object are not that cheap. There are only very small islands of PMR allocator usage in the library ecosystem.
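The per-string cost is easy to check: a std::pmr::string carries a polymorphic_allocator (one extra pointer) in every object on top of the usual string state. Exact numbers are implementation-dependent, but the size difference is visible directly:

```cpp
#include <cstdio>
#include <memory_resource>
#include <string>

int main() {
    // The pmr variant stores an extra memory_resource pointer per object;
    // exact sizes depend on the standard library implementation.
    std::printf("sizeof(std::string)      = %zu\n", sizeof(std::string));
    std::printf("sizeof(std::pmr::string) = %zu\n", sizeof(std::pmr::string));
}
```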
That doesn't make global allocators a universal truth; it just shows that the tradeoffs are different.
I really hope that there are no console games which pass *string objects* around, with or without an embedded allocator pointer ;)
In general, the idea that each individual object is uniquely allocated doesn't make sense, since objects of one type almost never come alone - especially in games - and you definitely don't want to carry allocator pointers around in each individual object; at most, pass them into explicitly called creation and destruction functions.
Games typically only have a few lifetime buckets (a frame, an active map region/zone, an entire map/game session, or static lifetime for the whole run of the game), and each of those can be handled by an arena allocator that is flushed at once without calling individual destructors (because 'objects' should really just be dumb data items).
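Roughly, such a bucket is just a bump arena that you reset wholesale at the end of the lifetime; a minimal sketch (names made up for illustration):

```cpp
#include <cstddef>
#include <vector>

// Minimal bump arena: allocation is a pointer increment, freeing is a
// wholesale reset at the end of the lifetime bucket (frame, zone, session).
struct Arena {
    std::vector<std::byte> storage;
    std::size_t used = 0;

    explicit Arena(std::size_t capacity) : storage(capacity) {}

    void* alloc(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
        std::size_t p = (used + align - 1) & ~(align - 1);   // align the cursor
        if (p + size > storage.size()) return nullptr;       // bucket exhausted
        used = p + size;
        return storage.data() + p;
    }

    void reset() { used = 0; }   // frees everything at once, no destructors
};

struct Particle { float x, y, vx, vy; };

int main() {
    Arena frame_arena(1 << 20);          // one bucket per frame
    for (int frame = 0; frame < 3; ++frame) {
        void* p = frame_arena.alloc(sizeof(Particle) * 1024);
        (void)p;                         // ...fill and use during the frame...
        frame_arena.reset();             // flush the whole frame's allocations
    }
}
```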
Most of this doesn't fit the memory-management-via-RAII idea of objects automatically destructing themselves when they are no longer referenced, of course (because then the object - or at least the smart pointer - does need to know how it was allocated).
I do know, though, that a lot of ancient game and game-engine codebases still do this OOP-inspired 'object spiderweb' (e.g. each object living in its own heap allocation and referencing other objects via smart pointers) - but that really hasn't been how it should be done since the late 90s, when memory latency became an issue.
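The latency point boils down to storage layout; a purely illustrative contrast between the two shapes:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

struct Enemy { float x, y, health; };

// Pointer spiderweb: every object is its own heap allocation, so iteration
// chases a pointer per element and scatters accesses across the heap.
std::vector<std::unique_ptr<Enemy>> spiderweb(std::size_t n) {
    std::vector<std::unique_ptr<Enemy>> v;
    for (std::size_t i = 0; i < n; ++i) v.push_back(std::make_unique<Enemy>());
    return v;
}

// Contiguous storage: one allocation, elements sit next to each other,
// and iteration is a linear scan the prefetcher can follow.
std::vector<Enemy> contiguous(std::size_t n) {
    return std::vector<Enemy>(n);
}

int main() {
    auto a = spiderweb(1000);
    auto b = contiguous(1000);
    float total = 0.f;
    for (auto& e : a) total += e->health;   // pointer chase per element
    for (auto& e : b) total += e.health;    // sequential reads
    return static_cast<int>(total);
}
```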
As much as I (as the engine/performance guy) would like to see strings gone, we have not eliminated them.
Passing allocators hit a brick wall because most of the code is threaded tasks, and to hand something off to another async API (like the GPU's) you need to allocate in a non-blocking way. That means each specialized allocation, and each non-specialized allocation (like something you need to pass to tasks further along the task graph), has to be non-blocking on a thread that isn't known statically. Having multiple global allocators doesn't make this easier to test and reason about; it just means that passing things as arguments isn't useful when you're mostly not 'calling code' synchronously. TL;DR: task graphs and async APIs make code look alien to people outside of gamedev. That's a fact of life.
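One common answer to "allocate without blocking from any task thread" is a bump allocator whose cursor is a single atomic; a minimal sketch (not any particular engine's allocator):

```cpp
#include <atomic>
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

// Lock-free bump allocator: any task thread can allocate with one
// fetch_add, nothing ever blocks; memory is reclaimed only by resetting
// the whole region (e.g. at the end of a frame).
struct AtomicArena {
    std::vector<std::byte> storage;
    std::atomic<std::size_t> cursor{0};

    explicit AtomicArena(std::size_t capacity) : storage(capacity) {}

    void* alloc(std::size_t size) {
        std::size_t offset = cursor.fetch_add(size, std::memory_order_relaxed);
        if (offset + size > storage.size()) return nullptr;   // out of space
        return storage.data() + offset;
    }
};

int main() {
    AtomicArena arena(1 << 20);
    std::vector<std::thread> tasks;
    for (int t = 0; t < 4; ++t) {
        tasks.emplace_back([&arena] {
            for (int i = 0; i < 1000; ++i) arena.alloc(64);   // never blocks
        });
    }
    for (auto& t : tasks) t.join();
    std::printf("used %zu bytes\n", arena.cursor.load());
}
```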
Object graphs are independent of that. I can't say gamedev has the resources to polish object graphs as much as in the old, smaller console days, or as much as embedded folks would like. I have to confess that lots of our objects aren't even in the C++ code anymore; they live in the runtime of the visual language the designers use... We live fast and ship mostly broken things... /end of rant