I don't know quite enough about the field, but it's possible languages like Python are the leaders because they're just "good enough" and they were the best options widely available when they were adopted and took off?
One of the arguments for Python is its exceptional support for automatic differentiation and GPU computing through deep learning libraries. Most Python-based PPLs focus on static models with differentiable log joints, which allows the application of HMC or variational inference. Unfortunately, support for efficient automatic differentiation in Julia is still in its infancy. But I hope that with some more work by the community and the Turing team, this will change sooner rather than later.
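To make that concrete, here's a minimal sketch of the pattern, using NumPyro as one example of a JAX-backed Python PPL (the model and data are invented purely for illustration):

```python
# A minimal sketch, assuming NumPyro (a JAX-backed Python PPL).
# The model and data are invented for illustration only.
import jax.numpy as jnp
import jax.random as random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def model(x, y=None):
    # Static model structure: priors and likelihood are fixed up front,
    # so the log joint is a differentiable function of (slope, intercept,
    # sigma) and gradient-based inference (HMC/NUTS, VI) applies.
    slope = numpyro.sample("slope", dist.Normal(0.0, 1.0))
    intercept = numpyro.sample("intercept", dist.Normal(0.0, 1.0))
    sigma = numpyro.sample("sigma", dist.HalfNormal(1.0))
    numpyro.sample("y", dist.Normal(intercept + slope * x, sigma), obs=y)

x = jnp.linspace(0.0, 1.0, 50)
y = 2.0 * x + 0.5 + 0.1 * random.normal(random.PRNGKey(0), (50,))

# NUTS gets gradients of the log joint from JAX's automatic
# differentiation, which is exactly the support described above.
mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000)
mcmc.run(random.PRNGKey(1), x, y=y)
mcmc.print_summary()
```

The whole log joint is one differentiable function of the parameters, so the sampler gets its gradients for free from the autodiff backend.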
I thought that with libraries like Zygote there was already some really nice stuff in Julia. I'd say it's still early days for good autodiff libraries in general, and I think we still haven't really explored what they can do.
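To give a flavour of the composability these libraries are aiming for (a toy sketch with JAX on the Python side, as a stand-in; Zygote offers the same style of higher-order derivatives in Julia):

```python
# A toy illustration: derivatives are ordinary program transformations
# that compose, so higher-order derivatives come from stacking grad.
import jax

def f(x):
    return x ** 3 + 2.0 * x

df = jax.grad(f)             # first derivative: 3x^2 + 2
d2f = jax.grad(jax.grad(f))  # second derivative: 6x

print(df(2.0))   # 14.0
print(d2f(2.0))  # 12.0
```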
The reality is that most of a modelling task is preprocessing your data before it can be passed to a probabilistic model, and postprocessing the results to make decisions with them. That code is usually written in R or Python, so there is strong pressure for your library to be in that language as well.
And being rough around the edges is an ok price to pay for not losing an ecosystem.
My take is that most modern PPLs have language bindings in JavaScript/Python/R because they are explicitly courting analysts/data-scientists/applied-statisticians, or they are taking advantage of modern technical stacks implementing auto-diff and co-processor routines.
Most pre- and post-processing (with the exception of visualization) should probably be a part of your model!
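As a hedged sketch of what that could look like (again assuming NumPyro; the priors and names are illustrative): missing covariates can be modelled as latent variables instead of being imputed in a preprocessing script, so the imputation uncertainty flows into the posterior:

```python
# A minimal sketch, assuming NumPyro; the priors and names are
# illustrative, not a recommended model. Missing covariate entries are
# latent variables, so the "imputation" happens inside the model.
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist

def model(x_obs, missing, y=None):
    # Population distribution of the covariate.
    mu_x = numpyro.sample("mu_x", dist.Normal(0.0, 1.0))
    # One latent value per row; only the rows flagged as missing are
    # actually used (the rest are harmless prior draws).
    x_latent = numpyro.sample(
        "x_latent", dist.Normal(mu_x, 1.0).expand([x_obs.shape[0]])
    )
    # The imputation step lives in the model, not in a pandas script,
    # so its uncertainty propagates to the posterior over slope/sigma.
    x = jnp.where(missing, x_latent, x_obs)
    slope = numpyro.sample("slope", dist.Normal(0.0, 1.0))
    sigma = numpyro.sample("sigma", dist.HalfNormal(1.0))
    numpyro.sample("y", dist.Normal(slope * x, sigma), obs=y)
```

You'd fit this with the same HMC/NUTS machinery as any other model, and the missing entries just come back as posterior samples.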