Modern smartphones have quite competent GPUs. They can run 3D games at 1080p/60fps. They could raymarch circles around an SVG icon.
And if that weren't enough, the renderings could be cached for later use.
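To make that concrete, here is a minimal sketch (in plain Python rather than a shader language, and with the function names my own) of the per-pixel signed-distance evaluation a fragment shader would do for a simple icon shape; each pixel is computed independently, which is exactly the kind of work a GPU parallelizes trivially:

```python
import numpy as np

def circle_sdf(px, py, cx, cy, r):
    # Signed distance from point (px, py) to a circle of radius r at (cx, cy):
    # negative inside the shape, positive outside.
    return np.hypot(px - cx, py - cy) - r

def render_icon(width, height):
    # Evaluate the SDF once per pixel, the way a fragment shader would,
    # and turn the distance into coverage with a ~1px anti-aliasing ramp.
    ys, xs = np.mgrid[0:height, 0:width]
    d = circle_sdf(xs + 0.5, ys + 0.5, width / 2, height / 2, width * 0.4)
    coverage = np.clip(0.5 - d, 0.0, 1.0)     # soft edge over about one pixel
    return (coverage * 255).astype(np.uint8)  # 8-bit alpha mask

icon = render_icon(64, 64)  # re-run at 256x256 for a sharper asset; same code
```

The same evaluation re-run at a higher resolution yields a sharper result with no new asset, which is the point of keeping the source resolution independent; the cached output is just an ordinary bitmap.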
TBH, not many people consider the SVG format worth the effort unless the graphic is made from scratch. An organization rarely rationalizes having a UI dev spend six-odd hours transforming an existing PNG into an SVG, or twelve hours completely replicating an existing design in Photoshop just so it can be exported as an SVG. I could be reaching here, but business or upper management sometimes get short-sighted on the small things. There's probably no rationalization for spending the extra effort to get all these graphics into SVG format, thus improving your application's load times and bandwidth utilization, versus pumping funds towards other easier, quicker solutions.
AFAIK, the majority of designs are created in Photoshop or Illustrator. No conversion necessary. In any case, I would consider it a liability if I didn't have my designs in an editable format. What if tomorrow I realized I wanted a variant in a different color? Or I decided I wanted to use a different background? If all I had was a bitmap logo, I would consider it worthwhile to have someone convert it into a scalable format anyway, even if I wanted to ship it as a PNG.
Also, numerous free tools exist to convert bitmaps to SVG. Using one of these, plus some manual smoothing and correction (a sketch of the workflow is below), I'm sure the conversion would take no more than half an hour.
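For example, assuming potrace is installed (it traces 1-bit bitmaps, so the image is thresholded first; the file names and threshold value here are placeholders):

```python
import subprocess
from PIL import Image

# potrace traces 1-bit bitmaps, so threshold the image first
# (a hard threshold traces much more cleanly than dithering).
img = Image.open("logo.png").convert("L")
img.point(lambda v: 255 if v > 128 else 0, mode="1").save("logo.pbm")

# Trace the bitmap and emit SVG; manual smoothing in a vector editor follows.
subprocess.run(["potrace", "logo.pbm", "--svg", "-o", "logo.svg"], check=True)
```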
So you just CPU-rasterize out to a buffer, and blit it to the screen. After the initial render, it's equivalent to the PNG. The application essentially decompresses its assets on launch.
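A minimal sketch of that decompress-on-launch idea, assuming the cairosvg package as the CPU rasterizer and made-up file names: render each asset once at the size the UI needs, cache the bitmap, and hand the buffer to whatever does the blit.

```python
import os
import cairosvg  # any CPU SVG rasterizer would do; cairosvg is one option

def rasterize_cached(svg_path, width, height, cache_dir="asset_cache"):
    # Render the SVG to a pixel buffer once, at the exact target size,
    # then reuse the cached bitmap on subsequent launches.
    os.makedirs(cache_dir, exist_ok=True)
    key = f"{os.path.basename(svg_path)}.{width}x{height}.png"
    cached = os.path.join(cache_dir, key)
    if not os.path.exists(cached):
        cairosvg.svg2png(url=svg_path, write_to=cached,
                         output_width=width, output_height=height)
    with open(cached, "rb") as f:
        return f.read()  # blit-ready bytes for the rest of the pipeline

icon_png = rasterize_cached("icons/settings.svg", 128, 128)
```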
Maybe SVG is not the best vector format after all. Maybe we need something simpler, where every item maps directly to GPU graphics calls; a binary encoding would also help.
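Purely to make the idea concrete, here is a hypothetical sketch of such a format: fixed-size binary records, each mapping to a single GPU-friendly primitive, with no curves that would need tessellation. All opcodes and field layouts below are invented for illustration, not an existing spec:

```python
import struct

# Hypothetical opcodes: each record is one GPU-friendly primitive.
OP_TRIANGLE = 1  # 3 vertices, flat fill
OP_RECT     = 2  # axis-aligned quad (two triangles)

def encode_triangle(x0, y0, x1, y1, x2, y2, rgba):
    # One 32-byte record: opcode (u8), 3 pad bytes, 6 floats, packed RGBA (u32).
    return struct.pack("<BxxxffffffI", OP_TRIANGLE, x0, y0, x1, y1, x2, y2, rgba)

def encode_rect(x, y, w, h, rgba):
    # One 24-byte record: opcode (u8), 3 pad bytes, 4 floats, packed RGBA (u32).
    return struct.pack("<BxxxffffI", OP_RECT, x, y, w, h, rgba)

# A two-record "icon": the file is just records a renderer can walk
# and translate 1:1 into vertex-buffer writes.
icon = (encode_rect(0, 0, 64, 64, 0xFF202020)
        + encode_triangle(8, 56, 32, 8, 56, 56, 0xFFFFAA00))
```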
GPUs don't have graphics calls anymore. At best they have fragment (pixel) shaders.
Converting a vector graphic into a good shader is not trivial compared to using the built-in texturing functions.
It gets even worse once you realize how limited older mobile GPUs are, and which incompatible subsets of features they support at decent performance.
If you want something simpler that works with GPUs, it needs to be built exclusively out of triangles and/or code that can operate on a single pixel independently of its neighbors.
Resolution-independent formats are inherently anti-triangle, and by the time you've hit triangles you already have a target resolution in mind.
Or put another way, GPUs really don't like vector graphics in general. That's not what they're built for or good at.
GPUs like vector graphics just fine; rasterizing vector graphics is literally what GPUs are designed to do. It's infinitely scalable vector graphics that they tend to dislike.
Once you've subdivided your curves and such into triangles/vertices/etc., the GPU ends up being a lot happier about its existence.
GPUs are not designed for vector graphics, they are designed for triangles and triangles only. Triangles are a subset of vector graphics, but are almost never the subset that people mean when they say "vector graphics."
Specifically, vector graphics typically include curves, which GPUs just don't do at all.
Once you've tessellated a curve into triangles you've already baked in a desired resolution. You can't have resolution independent vector graphics in a GPU friendly way.
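To see where the resolution gets baked in: flattening even a single quadratic Bézier into GPU-consumable line segments (and from there, triangles) means picking a tolerance in pixels up front. A minimal sketch, with the function names my own:

```python
from math import ceil, hypot, sqrt

def quad_bezier(p0, p1, p2, t):
    # Evaluate a quadratic Bezier at parameter t.
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

def flatten(p0, p1, p2, tolerance_px=0.25):
    # The tolerance is in *pixels*, so the segment count (and hence the
    # vertex data you upload) is tied to a target resolution from the start.
    # Max deviation of uniform flattening is |p0 - 2*p1 + p2| / (4 * n^2).
    dev = hypot(p0[0] - 2 * p1[0] + p2[0], p0[1] - 2 * p1[1] + p2[1])
    n = max(1, ceil(sqrt(dev / (4 * tolerance_px))))
    return [quad_bezier(p0, p1, p2, i / n) for i in range(n + 1)]

# A handful of segments suffices at icon scale...
pts = flatten((0, 0), (50, 100), (100, 0))
# ...but draw the same vertex list at 4x zoom and the polygon edges show.
```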