The author would make an even stronger case if they showed the results after compression. On the few examples on the home page, which I compressed with gzip -9 (zstd performs similarly):
In most cases, even after gzip compression TVG still has a substantial lead over SVG.
This is evidence that the size improvement does not come entirely from the binary format (it would be possible to devise a binary format for SVG without changing the language and semantics), but from the simplified graphic primitives as well. If it was just XML overhead, compression should mitigate most of it.
If you run tiger.svg through `svgcleaner`, you get a 57614-byte file (with no visual difference[1]) that compresses to 20924 with gzip or 19529 with zstd.
`svgo` gives 61698 bytes / 21642 gz / 20228 zstd, again with no visual difference.
Not really a need for TVG if you can clean your SVG as part of your deployment pipeline.
(You can go even further and trim the coordinates to 2 decimal places, which ends up at 52763 / 18299 gz / 17114 zstd at the cost of differences that are still largely invisible, though I've had SVGs where this level of cleaning did materially affect the output.)
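For anyone curious, roughly what that pipeline step could look like using svgo's programmatic API (file paths here are placeholders, not from the examples above):

```ts
// Rough sketch of an svgo build step: read the source asset, let svgo clean
// it (multipass re-runs the plugins until the output stops shrinking), and
// write the optimized copy to the deploy directory.
import { readFileSync, writeFileSync } from "node:fs";
import { optimize } from "svgo";

const input = readFileSync("assets/tiger.svg", "utf8");
const { data } = optimize(input, { path: "assets/tiger.svg", multipass: true });
writeFileSync("dist/tiger.svg", data);
```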
Somebody recently pointed me at a nice online GUI for svgo, so you can try it for yourself without installing anything: https://jakearchibald.github.io/svgomg/
I'm not sure, but it seems svgcleaner can remove unused and invisible graphical elements[1]. I don't know if TinyVG preserves them, but if it does, it's not a fair comparison.
Did you try converting svgcleaner processed SVG to a TVG?
While you're not wrong, I'm gonna put my graphic designer hat back on for the first time since high school and point out that sometimes you _do_ want those invisible elements still there, especially if you're gonna want to do further editing on the file later on.
> especially if you're gonna want to do further editing on the file later on.
I think you'd generally only use the cleaning / optimising step when deploying / packaging the asset - you'd leave the original as, well, the original for further editing (and to take advantage of better optimisations if they come about.)
Very true, and I'd expect graphic designers and most devs to know that.
I've worked with enough people who only had the optimized assets because "Well optimized is better, right?" [0] that I thought it was worth pointing out.
[0] I was working on some web stuff for them and they were curious if I could also do some graphics work, small local company
We are starting to miss the point of TinyVG with this discussion. The point is a simplified standard, so we don't end up with feature-incomplete implementations. I mean, just look at all the stuff Adobe Illustrator can do, but browsers can't. Final size is a nice-to-have that comes with a minimalistic approach to the standard.
Just tested this and brotli gives an extra 1-1.5k saving over the zstd versions (18329 vs 19529 for tiger-clean.svg, 15749 vs 17114 for tiger-prec2.svg)
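Roughly how you can repeat the comparison, if you want to (Node's built-in zlib only covers gzip and brotli, so zstd is left out; the filename is a placeholder for whatever your cleaned SVG is called):

```ts
// Minimal size comparison for an already-cleaned SVG ("tiger-clean.svg" is a
// placeholder). Uses only Node's built-in zlib, so zstd is omitted.
import { readFileSync } from "node:fs";
import { gzipSync, brotliCompressSync, constants } from "node:zlib";

const data = readFileSync("tiger-clean.svg");
console.log("raw    :", data.length);
console.log("gzip -9:", gzipSync(data, { level: 9 }).length);
console.log(
  "brotli :",
  brotliCompressSync(data, {
    params: { [constants.BROTLI_PARAM_QUALITY]: 11 },
  }).length,
);
```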
> If it was just XML overhead, compression should mitigate most of it.
Strong enough compression should mitigate most of it, but DEFLATE (and consequently zip and gzip) is not a strong enough algorithm.
For example, let's imagine that a particular format is available both in JSON and in a binary format and is entirely composed of objects, arrays and ASCII strings, so binary doesn't benefit much from a compact encoding. Now consider a JSON `[{"field1":...,"field2":...,...},...]` with lots of `fieldN` strings duplicated. DEFLATE will be able to figure out that `","fieldN":"` fragments frequently occur and can be shortened into a backreference, but that backreference still takes at least two bits and normally a lot more bits (because you have to distinguish different `","fieldN":"` fragments), so they will translate to pure overhead compared to compressed binary.
Modern compression algorithms mainly deal with this pattern in two ways, possibly both at once. The backreference can be encoded in a fractional number of bits, implying the use of arithmetic/range/ANS coding (cf. Zstandard). Or you can recognize different token distributions for different contexts (cf. Brotli). They do come with non-negligible computational overhead though and only became practical recently with newer techniques.
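A toy way to see the DEFLATE overhead in question (an illustrative sketch, not a benchmark):

```ts
// Toy illustration: repeated JSON keys still cost a back-reference each under
// DEFLATE, while a binary layout that states the schema once avoids them.
import { gzipSync } from "node:zlib";

const records = Array.from({ length: 1000 }, (_, i) => ({ field1: i, field2: i * 2 }));

const asJson = Buffer.from(JSON.stringify(records));

// Crude "binary" equivalent: no keys, just two 32-bit ints per record.
const asBinary = Buffer.alloc(records.length * 8);
records.forEach((r, i) => {
  asBinary.writeInt32LE(r.field1, i * 8);
  asBinary.writeInt32LE(r.field2, i * 8 + 4);
});

console.log("json   raw/gz:", asJson.length, gzipSync(asJson, { level: 9 }).length);
console.log("binary raw/gz:", asBinary.length, gzipSync(asBinary, { level: 9 }).length);
```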
I see much more of a gain from TinyVG in the CPU usage needed to decode and render an image. XML is definitely not the most efficient way to expose data that is not meant for human consumption.
That would be what I’d care about the most. Smaller file size, but not an order of magnitude difference? Meh.
Easier for the browser to process? Well that’s going to have a tonne of useful ramifications.
Honestly that’s what annoys me about web services in general. (Rant mode enabled). The human readability aspect is moot because conversion is cheap, yet everything these days is built on XML, JSON and YAML.
The increasing use of middleman services whose entire job is to parse these formats into native types, then process the data, then serialise back into the same inefficient format, makes the issue a whole lot worse.
I mean, sure, this stuff is used so heavily that some amazing work has gone into parsing with SIMD at ridiculously high rates, but this still costs orders of magnitude more time and effort for a CPU than the same thing in a native format. Even for things like strings, an actually-sensible representation like [length][body] would save all kinds of hassle by avoiding processing delimiters, searching for quotes, etc, and would make loading a value as simple as allocating the ALREADY KNOWN size and reading it.
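A sketch of what I mean by [length][body] (a made-up layout: 4-byte little-endian length, then UTF-8 bytes):

```ts
// Made-up [length][body] layout: 4-byte little-endian length, then UTF-8 bytes.
// The reader allocates exactly `length` bytes and never scans for delimiters.
function writeStr(s: string): Buffer {
  const body = Buffer.from(s, "utf8");
  const len = Buffer.alloc(4);
  len.writeUInt32LE(body.length);
  return Buffer.concat([len, body]);
}

function readStr(buf: Buffer, offset: number): [value: string, next: number] {
  const length = buf.readUInt32LE(offset);
  const start = offset + 4;
  return [buf.toString("utf8", start, start + length), start + length];
}
```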
Anyway, that’s my rant. The more parse-friendly formats out there the better.
XML, of course, is the opposite: it is a rather good way for humans to create data meant for computer consumption. Once the data are laid out, they can be transformed into a more efficient machine form in the same way a program is compiled, but for the web this is rarely done for any format, including pure machine-to-machine interaction. E.g. JSON is mostly used for machine-to-machine exchange and it is far from being efficient for this.
By far the biggest advantage of SVG on the web is DOM integration, thanks to its choice of XML. This enables, among other things,
- Trivial adaptive styling: the simplest case would be matching color of surrounding text;
- Easy and relatively cheap dynamic updates to individual components, trivially integrated with any DOM-manipulation tech, including declarative frameworks big and small.
I don't see a binary format easily replicating those.
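To make that concrete, a contrived sketch of both points above (selectors, ids and values are made up):

```ts
// Contrived example of the two points above (ids, classes and values are made up).
// Adaptive styling: the inline path inherits the surrounding text colour.
const icon = document.querySelector<SVGPathElement>(".icon path");
icon?.setAttribute("fill", "currentColor");

// Cheap dynamic update: retarget one element without touching the rest.
const needle = document.querySelector<SVGGElement>("#gauge-needle");
const percent = 42; // whatever the app is currently showing
needle?.setAttribute("transform", `rotate(${(percent / 100) * 180} 50 50)`);
```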
Edit: Of course it's okay to have more opaque binary formats.
SVG is firewalled off from js and css regardless of how you use it. The only way to practically interact with SVG content (i.e. make it actually more useful than a jpg) is to embed it directly, which requires stripping the `<?xml` and `<!DOCTYPE`, which requires an xml parser/serializer.
SVG is so bad people have been using fonts instead (which are binary).
And 99.9% of people who use SVG are doing it with illustrator or inkscape anyway, the binary vs text argument here seems totally irrelevant.
> The only way to practically interact with SVG content [...] is to embed it directly
Right, and this is actually pretty common -- you can use <svg> tags directly in JSX, and you can set up a build pipeline that lets you import .svg files directly into your .jsx/.tsx.
It's useful because you can do things like animate your SVG using CSS, driven from the JS/JSX code.
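Something along these lines (the component name and the `spin` class are made up):

```tsx
// Made-up example: the icon follows the button's text colour via currentColor,
// and a CSS rule like `.spin { animation: rotate 1s linear infinite; }` can
// animate it without touching the JS.
function SpinnerIcon({ size = 16 }: { size?: number }) {
  return (
    <svg className="spin" width={size} height={size} viewBox="0 0 16 16">
      <circle cx="8" cy="8" r="6" fill="none" stroke="currentColor" strokeWidth="2" />
    </svg>
  );
}
```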
Most of the time (and speaking from experience here), any “interaction” with inlined SVGs happens at compile time, setting the size or colors. Animating opacity or scale is done via CSS on the container element.
And for complex animations you resort to Lottie. So interacting with SVG via its DOM API is incredibly rare.
No I don't agree. Simple things like styling the inside of the SVG or animating the colors is pretty common and does not require reaching for Lottie. I don't think Lottie is all that commonplace outside of very large codebases in borderline enterprise environments.
You're probably right in general, but I had a need just the other day to animate something other than opacity and scale. I realised I would need to inline the SVG to make it work, and it was pleasantly straightforward.
> SVG is firewalled off from js and css regardless of how you use it. The only way to practically interact with SVG content (i.e. make it actually more useful than a jpg) is to embed it directly
You're contradicting yourself here.
> ... which requires stripping the `<?xml` and `<!DOCTYPE`, which requires an xml parser/serializer
That doesn't make sense either. Removing the XML declaration and the doctype declaration can be done using the same editor you're using for your HTML at hand - unless you're referencing the SVG as an external resource via href, in which case you don't need to remove them at all. The XML and doctype declarations aren't needed for XML conformance anyway, and your tool should offer an option to export without them.
Note that according to the WHATWG HTML spec (redacted snapshots of which used to be known as W3C HTML 5.x until recently), when embedded in HTML, SVG content is actually parsed using HTML/generic SGML rules rather than XML rules. For example, unlike in XML, element/tag and attribute names can be written in any mix of lowercase/uppercase chars, etc.
I write SVG by hand every day, but I imagine I'm in the 0.1% you are talking about. The ability to write simple or complex graphics in text form, inside either HTML or CSS, is a feature, and an incredibly powerful one.
Hold up. Are you blaming SVG because you don't know the <object> tag exists? I mean, yeah, you're right that CSS won't come through with an <img> tag. It will with <object> though.
Ah, I misunderstood what you were trying to accomplish. Yes, it makes sense that you have to embed the SVG in your HTML doc. SVG isn't an image format; it's a document format just like HTML is. Embedding the SVG puts it in the same document context as the HTML and therefore the same styling context.
Would you expect a parent's CSS to affect HTML in an iframe, a separate document context? That would be confusing behavior to be sure. Side effects galore.
No I wouldn't, because a) iframes are used to provide isolation, and b) nobody really wants that.
The use case for styling external SVGs with CSS is waaay more obvious. We should at least have the option to do it, even if it isn't enabled by default.
Are you really going to compare a technology like SVG that was supported by IE9 (and lower with a plugin) with one only just supported by the Blink engine in May 2022, in Chrome 101?
For all intents and purposes with regard to web development, fonts have been monochromatic (except for emojis). Anyone with sufficient knowledge gathered over decades can write/edit an SVG in the editor of their choice, graphical or textual.
How does one edit their own color font? Can you do it in a text editor? Do most font editing programs support it? Or were you speaking of fonts purely as a consumer rather than a producer?
> SVG is firewalled off from js and css regardless of how you use it.
That's a weird claim, since most of my SVG customizations are done using CSS.
> SVG is so bad people have been using fonts instead (which are binary).
No they're not, SVG is far more popular than icon fonts, which has been in steady decline for years.
> And 99.9% of people who use SVG are doing it with illustrator or inkscape anyway
False again. SVG is a popular export format for a myriad of desktop, mobile & online design & publishing tools and presentation software which I've used over the years, none of which were Illustrator or Inkscape. I've also hand-edited lots of SVGs to customize them to look how I want, which isn't feasible for icon fonts.
Are you sure you know enough about web development to make these claims?
All of my data viz code is React-controlled SVGs, for instance. I was able to implement generic zoom and pan on top of this. Also things like nested graphs look good in SVG
So I'm fine with new and actually improved technologies instead of the many GUI rehashes that some companies make money off delivering all the time.
I don't often use SVG, and am by no means an expert on it, but I dislike the experience every time. Like having to edit an .SVG file to fix a wrong displacement that places it a bit too far down. Trial and error, trial and error, until you randomly hit the spot that makes the thing begin to behave.
I think TVG popularity boils down to browser support and nothing else. I hope somebody with the skills picks up the task of integrating it into Chromium, then the rest of the world will bend and bow quickly.
Also, one might hope that a freeware GUI editor will see the light of day before too long.
Perhaps the authors should do a new single-protocol Mail/Calendar/Contacts/Notes/etc. standard as well? I recall IMAP being a horribly clumsy and complex protocol, and didn't Einstein say:
Inkscape allows you to modify and save SVGs in plain SVG format. You can also right-click an SVG and modify its properties in the inspector window.
SVG is a very impressive, extremely clear format with many bells and whistles such as text-to-path and animation built-in.
The only improvement I can see it might need is a better compression functionality that can selectively prioritize symbols to load first if the SVG is sufficiently large.
I love SVG, but there are a lot of improvements that I can think of: conic gradients, better (and faster) filters, a less awkward non-scaling stroke definition, better color spaces for gradients, lightweight symbols that are not copies in shadow DOM, meshes, etc.
Also a simplified font spec while we're at it? While there are libraries for it, it is apparently discouraging if you wanted to code your own parser for the existing font formats.
Unlike PDF, PostScript is a full Turing-complete programming language, capable of far more document complexity than PDF (though less interactivity). The PostScript Language Reference Manual is 900 pages, and the PostScript Language Reference Supplement is another 160.
PDF however includes (a subset of) Javascript, making it just as Turing complete.
The spec for PDF 1.7 (dated 2008) is 745 pages long. The more modern PDF 2.0 is not freely available. I'm not willing to spend hundreds of euros to get access to the document, but together with the long list of errata and additional documents linked from the standard body's website, I'm willing to bet it's at least equivalent in length to PostScript.
> PDF however includes (a subset of) Javascript, making it just as Turing complete.
Not really, because the JavaScript is quite limited in what it can do (e.g. forms and interactive features). It can't produce text or graphical elements. A PDF reader can show a view of a PDF that looks correct even if it doesn't implement any of the JavaScript features.
The lack of multipage support is the most obvious distinction, I think, but you could probably add the necessary metadata and render fake borders and boxes to simulate pages if you really wanted to. As far as I know, SVGs cannot contain forms for one example. PDFs can also be digitally signed according to the spec, and they contain DRM provisions.
PDF has a lot of features that I would never think of myself (pronunciation guides, for example) which would require designing a custom solution for in many other formats such as SVG.
If I wanted to render something to be printed and I wanted it to be printed exactly as specified, I would consider PDF (and PostScript) files to be much more reliable than SVG files. SVGs are great for images and icons, but they're simply not designed to do the things PDF was designed to do.
Conversely, PDFs are difficult to embed and require proprietary tools to use most of their less common features, so in many areas they're much worse than SVGs.
I'm going to sound like Richard Stallman, but being human-readable and Notepad-accessible is a huge advantage SVG has over this format and its binary nature.
It's optional, so if the need you identified exists, it will be implemented. Its essential selling point is compactness, so with that as the target, the path of least resistance would be to optionally allow a text form, not to build around it.
I think there is a place for both rich text and compact binary formats. You can then pre-render from the complicated source to produce a smaller and faster binary format. You would edit the SVG and then generate the TinyVG (and minified SVG) from it.
SVG is weird among image formats in that it is XML. It requires a lot of effort to parse and render.
What does "Notepad-accessible" mean? Surely you're not referring to notepad.exe in a sentence beginning with "I'm going to sound like Richard Stallman"?
Senseless, if you don't explain what will work with TinyVG and what won't.
Tried some real-life SVG files from a current project; all failed:
`Node has unsupported transform:` matrix, translate, Failed to translate color spec rgb, ... The transformed files are all larger.
All files are simple SVGs without any special content, exported from Affinity Designer.
Those sound like missing features in the importer rather than an actual gap in the spec. Transforms and different color spaces could be baked into the data at import time.
A lot of people are defending SVG by pointing out how it integrates with CSS, JS, and the DOM, and I'm like "yes, that's why a simpler format is better". SVG is too powerful to use as a simple image format.
Everybody trusts user-uploaded JPGs. Trusting user-uploaded SVGs is dangerous without detailed domain knowledge.
JSON vs XML already demonstrated how less is more.
There are good reasons to enjoy the idea of a vector graphics equivalent to HTML just like there are good reasons to enjoy a vector graphics equivalent to PNG.
I want to like this more, but it doesn't seem very compelling.
As others have noted, the size difference is essentially nil if you compare to a gzipped minified SVG (for which there are off-the-shelf tools).
No CSS means you can't easily integrate with CSS animations.
The spec also seems unnecessarily quirky -- I would have expected it to be really, really simplistic and minimal, along the lines of https://qoiformat.org. For example, there's a header flag that sets the "unit" size to 1, 2 or 4 bytes; but also a variable-length encoding for values, so it seems like small uint32 units would mostly fit into 1 byte anyway. Only sRGB colors are supported -- why not linear?
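(For reference, the usual variable-length scheme works roughly like the sketch below; I'm not claiming this is TinyVG's exact wire format.)

```ts
// Generic LEB128-style varint decoder (7 data bits per byte, high bit =
// continuation flag). Values below 128 take a single byte regardless of the
// declared "unit" size. Not necessarily TinyVG's exact wire format.
function readVarUint(bytes: Uint8Array, offset: number): [value: number, next: number] {
  let value = 0;
  let shift = 0;
  for (;;) {
    const b = bytes[offset++];
    value |= (b & 0x7f) << shift; // fine for values below 2^31
    if ((b & 0x80) === 0) return [value, offset];
    shift += 7;
  }
}
```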
I'd definitely prefer to write a renderer for TinyVG rather than SVG if I were working from scratch. But SVG exists and is pretty well-supported already so you don't have to. (And if I enjoyed working with XML I might actually prefer SVG.)
Possibly TinyVG would be good for embedded systems? If there were a really small and fast implementation, and if anyone has a need for portable vector graphics on a system that can't handle SVG.
> Only sRGB colors are supported -- why not linear?
sRGB is smaller for storage. All actual calculations (alpha blending, gradients) are already done in linear space.
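For the curious, the standard piecewise sRGB transfer function is what "done in linear space" implies per component (values in 0..1):

```ts
// Standard piecewise sRGB transfer function (components in 0..1), showing
// what converting to linear for blending and back again actually involves.
function srgbToLinear(c: number): number {
  return c <= 0.04045 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
}
function linearToSrgb(c: number): number {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * c ** (1 / 2.4) - 0.055;
}
```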
I'm not sure doing calculations in linear space is actually a good idea though. Most art programs and the web use sRGB, so accurate conversion isn't possible.
I don't think the size is a key advantage here, even if it were true. 96 kilobytes simply isn't very big to start with, considering most of the content that gets sent around.
Like you said, support is what matters. SVG is notoriously difficult to implement and it took forever to be supported everywhere, which also contributed to the persistence of Flash. TVG is supposed to be easy to implement, which seems to me to be the advantage.
JPEG, by contrast, is a pretty simple format with only a few minor quirks.
If someone already implemented an XML parser for you, SVG is easier to implement than TVG (if you are trying to extract the same type of content out of SVG as out of TVG).
>if you are trying to extract the same type of content out of SVG as out of TVG
But then you haven't implemented SVG. You can't say "we support SVG", and you lack a good way to communicate what users can expect from your implementation. It's much easier to communicate clearly when you can implement a whole standard rather than a haphazard subset of one.
Also, parsing overhead is such a tiny fraction of the overall effort (in either case) it doesn't really mean anything.
"SVG is a horribly complex format and an overkill for most projects"
Can this be given more substance..?
Does it also apply to SVG v1.1 (Tiny/Basic)? It seems to cover a similarly limited scope. From the landing page it's unclear if it's fundamentally doing something different. From my naive perspective it'd seem more sensible to just make a new way to encode SVG?
Just casually browsing the SVG spec[0], it looks like it covers a lot of ground. Different coordinate systems, CSS styling, paths, clipping, filters, animations, scripting, fonts, etc. There needs to be an open vector format that supports all of those things, but maybe it is inappropriate that this is what gets shipped by default.
SVG Native was supposed to fix that [1]. SVG Native uses only one coordinate system, no CSS styling, no filters, no animation, no scripting and no text or fonts. However, it didn't get much interest and it's starting to look like a dead version of SVG.
Maybe there isn't any need for a simpler vector graphics format because no one is using TinyVG, either.
I have been working on a Go/WASM canvas renderer for this subset, but the drawing overheads seem too high. I also tried rendering into WebGL, via creating an Image from a b64-encoded SVG. That works ok, until you try to manually (code, not CSS) animate any of the SVG attributes. Creating on-the-fly Images seems to leak memory (at least in FF) and trying to re-render into a texture seems to always involve a read-back. Overall I'm not sure if I'll continue down this route. Only video seems to avoid the read-back, but are we supposed to render SVG into frames of video for efficient rendering? Of course I could inject a 'naked' SVG into the DOM, but it would be nice to find a route to use SVG via canvas or WebGL. Maybe that would make the format more popular... But a binary format might be nice if rendering could be done via a shader :)
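(For context, the base64-Image route I mean is roughly the sketch below; whether it leaks or forces a read-back is entirely down to the browser.)

```ts
// The "Image from a base64-encoded SVG" route described above; browser
// behaviour around caching and texture read-back is what makes or breaks it.
// Note: btoa assumes Latin-1 text; use a UTF-8-safe encoder otherwise.
function drawSvgToCanvas(svgText: string, ctx: CanvasRenderingContext2D): void {
  const img = new Image();
  img.onload = () => ctx.drawImage(img, 0, 0);
  img.src = "data:image/svg+xml;base64," + btoa(svgText);
}
```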
>SVG Native was supposed to fix that [1]. [...] However, it didn't get much interest and it's starting to look like a dead version of SVG.
It's pretty bad marketing to call a different product by the same name. Particularly when it chooses a totally different side of the trade-off versus its homonym. Not only have I never heard of SVG Native, I'm confident that I'm going to forget what it is in a few days because I don't have a proper name to remember it by. Also, the adjective "native" here gives me no useful information as to what problem it tries to solve — how do I infer "simplified" from "native"?
“SVG Native is a subset of SVG 1.1, which removes animation, interactivity, linking, remote resource loading, scripting, and CSS ”
So anyone who supports SVG 1.1 supports this automatically. I guess it helps define/narrow a small SVG subset to target if you write a new renderer. For instance, rendering with JavaFX primitives would maybe be achievable.
And this new thing also sounds kinda like a subset of SVG...
I always found SVG non-readable for most images I've used. Yes, if you place a simple rectangle it works, but when you export an image it's almost impossible to edit it manually.
I like this edge TVG has, as a binary format is better for size and you're not losing much visibility in this case.
I believe TVG's popularity will depend on the final use cases. Having to use a polyfill in the browser delays the rendering of the image until JS is loaded. It would be interesting to compare both cases: downloading an optimized SVG vs downloading and rendering a TVG file using the polyfill.
oh yeah i don't edit paths by hand, just stuff like cleaning up groups, removing unused css, changing colors, etc. when i do need to edit paths i use https://yqnn.github.io/svg-path-editor/
There's already a very good binary encoded vector graphics format, which also supports animation among other things --- SWF. The descriptions in this specification actually look somewhat like it.
> The original SVG is 96,719 bytes large, while the optimized one is 85,806 bytes large. When converted to TinyVG, the file shrinks to 27,522 bytes. This means we only have 32% size of the optimized source data.
I have a tiger.swf which is 21381 bytes, and since there's another comment here about gzip'ing them, tiger.swf.gz turns out to be only 17296 bytes. In conclusion, I think a subset of SWF, of which much existing rendering code is available, will provide much better efficiency and compatibility than this.
SWF is flash. Vector graphics are just a subset of the format. Any tool claiming to support that format would have to support the rest of the spec, which is overkill for many vector applications.
Even though Adobe owns SWF, they don't support it in Illustrator outside of exporting.
You can always pull a marketing stunt like .webm vs. Matroska if you want to advertise support for a variant of Flash constrained to just the vector parts.
Plenty of existing tools only support a subset of SWF, mainly the vector graphics and animation parts. It's used for UIs in games and various other applications.
IMO binary encoded versions of files that are already small are more hassle than they're worth. Now you need a specialized tool to view and edit the file. I have, on many occasions, edited SVGs manually in a text editor. Losing that is a big devex issue.
Having worked a bit with protobuf and CBOR in the last year - the binary encodings are a big pain in the ass. I understand that for massive companies the payoff could be worth it (although I think the people that decide to use proto/CBOR don't calculate the hours wasted on an obtuse encoding scheme), but certainly for small shops you're wasting your time.
Its simpler format should make it faster to parse & decode, at least. Whether that might affect render times is likely to depend on the renderer itself.
Who is this project intended for? I was under the impression that browsers and Adobe products implement their own VG renderers (not client-side JS code provided by websites).
If that's correct, is this project trying to build a format that's better than SVG and parseable out-of-the-box by those renderers?
Or is it trying to get browsers/Adobe to extend their renderers to support TinyVG? If this is true and TinyVG isn't supported yet, how was the website able to render that Tiger picture?
As far as I understand, it's more geared towards embedded systems, game ui, and gui icons. Places where a full SVG renderer is absolutely overkill. I think they see the tooling being used to create SVGs as usual, but then converting those SVGs to the much more stripped down format of tvg.
I wish the website briefly listed the features that exist in the SVG format but don't exist in theirs, so I could see exactly what I'm missing. It would remove a lot of doubt about dipping my toe in if it turns out that I really don't need the missing features.
Been thinking about using WebAssembly to compile something like this, then you build the callouts to do the actual rendering... could potentially be even smaller. Compile TinyVG > Wasm > Wasm to bytecode.
Sounds a little like an early version of Flash. Size comparisons to SVG after compression would be interesting too as most Web assets will be delivered compressed.
I could conceivably see a use for this in constrained devices, i.e. embedded systems with small, low-resolution displays. It seems to be missing text rendering entirely, though.
It seems lots of people are discovering that SVG is overkill for most applications. Here (https://docs.google.com/document/d/1YWffrlc6ZqRwfIiR1qwp1AOk...) is a document by the flutter devs where they outline a new format. I hope we won't end up in a situation where there are 14 competing successor candidates for SVG.
Google also has Vector Drawables for Android [1]. I'm sure they'll also create the other 12 competing successors, because each of their platforms need a new separate format.
Facebook/Meta will create a 15th standard; people will decry it for not following one of the previous 14 standards, but everyone will adopt it. Then Google will re-release one of their 14 but change everything to look more like Meta's.