I am not sure how relevant this is, but I recently published a proof-of-concept library that converts SVG documents into simulated machines. See the link below.
The rendering in my library relies on OpenGL and the NanoVG library. My library is called Tiny Sim. All of these nano, tiny, and vg names seem to be colliding, adding a bit of confusion in my opinion.
This is actually just like... aesthetically pleasing to watch. As someone who loves watching/reading about Game of Life lifeforms, I'd love to see some sort of gallery where people can share their creations and/or build upon other people's
There are many other popular vector graphics formats beyond those mentioned here. Lottie has already displaced SVG for animation/motion graphics, and for static content IconVG exists and is backed by Google. There's also PDF, PostScript, Flash, the glyf format in OpenType, etc.
Animation is issue #2 on https://github.com/google/iconvg and I have some ideas but no code yet. I'm also midway through changing the current "version 0" format into a "version 1" format, dropping things like the ArcTo op (inspired by SVG) precisely with one eye on (future) animation support. The ArcTo large-arc-flag, like any boolean-typed value, is impossible to interpolate smoothly.
IMO as long as animation isn't tied to After Effects like Lottie is, you can't go wrong. The fact that the most popular open vector animation format is tied to an expensive, proprietary piece of software that isn't even primarily a vector art package makes me very sad…
Not to mention After Effects is an extremely poorly performing and highly bloated piece of software tied to mountains of spyware… It’s almost impressive how bad the software is by modern standards.
I find After Effects to be one of the most impressive pieces of creative software ever made. An entire industry had been built around it. That said, there is plenty to improve upon.
Frankly, somebody should do a TinyDF replacement for PDF... That's also a somewhat bloated format at this point.
Ideally though, the tiny* formats should be forward compatible (?) and interpretable as a valid non-Tiny* document. That would be quite nice for wider adoption (like JSON).
"somewhat bloated" is the understatement of the century. PDF documents can contain executable code in two different programming languages (JavaScript and PostScript), interactive content, digital signatures, their own form of encryption, and more. Why they thought adding JavaScript to an already infamous attack vector of a format was a good idea is beyond my understanding.
Javascript was added to PDF in order to implement interactive and dynamic forms, including input validation and normalization and dynamically adding/removing form elements based on user interaction. The contents of forms can also be directly submitted to a server. Basically, Adobe didn’t want HTML forms to steal their show, so they implemented the equivalent with PDF. Not saying that was a good idea, but here you go.
PDF can contain executable PostScript? That was new to me. I know the syntax is still PostScript, but without the programming language. Is there a nice description somewhere?
Old versions of PDF allowed embedding arbitrary PostScript for some area of the page, but this was always specified to have no effect unless the PDF was converted to PostScript. In particular, it was specified to be ignored by PDF viewers and non-PostScript printers, making it rather useless. In current PDF versions, this is no longer allowed.
It seems that my understanding of this was not correct. I should have checked this before posting, but I have a vivid memory of some PDF that rendered a different maze each time you opened it using PostScript, and that overrode everything else. Most of what I'm reading suggests that the Turing-complete aspects were removed in the design of PDF.
PDF is indeed a very bloated format, but documents are inherently complex. I don't think you can create anything that deserves the name "tiny" but can still handle anything you might want to send to a printer (and that's a much smaller use case than "anything you want to display on a screen").
As someone who has implemented generating PDF/A documents, I can say it is still a very complicated format, and I would be very happy if it was replaced with something a little more straightforward.
I have heard very good words about the EPS format from a vector-format nerd (he reads file format specification books because it's fun). I would love to hear more opinions about EPS from other developers.
How does EPS really compare with the other formats mentioned above?
EPS is just PostScript that's designed to be more embeddable. It's a PostScript program that contains some metadata like a preview image (so you can display a thumbnail when embedding it even if you don't have a PS interpreter) and a bounding box (since PS has an arbitrary coordinate system, in order to represent an embedded document without actually interpreting it, you need to know the bounds of what it's going to draw).
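For the curious, a minimal EPS file is just a few DSC comment lines on top of ordinary PostScript. A sketch (not taken from any particular tool) of what the structure looks like:

```postscript
%!PS-Adobe-3.0 EPSF-3.0
%%BoundingBox: 0 0 100 100
%%Title: minimal-example
%%EndComments
% Ordinary PostScript below; the DSC comments above are the metadata
% that lets an importer place the graphic without interpreting this code.
newpath 10 10 moveto 90 10 lineto 90 90 lineto 10 90 lineto closepath fill
%%EOF
```

The `%%BoundingBox` line is exactly the piece of information the parent comment describes: it tells the embedding application the extent of the drawing without running the program.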
The big problem with PS, EPS and Flash as simple graphics formats is that they're all Turing-complete, and you can author documents that will never terminate. When importing EPS, Illustrator used to have a timeout and would fail if the render didn't complete in a couple of minutes. I assume it still does, but I haven't tried it in years.
I've been avoiding EPS like the plague for several decades. Unless something has changed since I last looked it would be well below SVG on my preferred format list.
A small nitpick as the resvg author: the repo is located at https://github.com/RazrFalcon/resvg. I'm not sure why the author linked some random, outdated fork. If you're trying to beat SVG, you should have done better research.
But yes, SVG is extremely bloated and under-documented. Especially SVG 2. The core resvg codebase is close to 20 KLOC, while the whole package is like 50 KLOC.
On the other hand, resvg is an exception, because it doesn't rely on any system and/or 3rd-party libraries. 95% of the code in the final binary was written by one person (me). Not because it was strictly required, but because it was fun. resvg is basically the epitome of RIIR.
Oh, I didn't see this. Thanks for the more correct and real numbers; I will correct them in the article later!
> I'm not sure why the author linked some random, outdated fork.
because it appears way higher up in the Google search results if you search for "svg rendering library". Sorry, I didn't recognize that it's a fork!
Nice work then! I should check it out for SVG rendering and parsing.
How much work would it be to port over the C# SVG→TinyVG converter to Rust based on resvg? Considering that you already have a well done parser compared to mine...
>How much work would it be to port over the C# SVG→TinyVG converter to Rust based on resvg?
Probably a day, as long as you know Rust. I can take a look into it if you're interested. usvg (the SVG parser of resvg) is specifically designed to convert a real world SVG with all its quirks into a machine readable, minimal SVG/XML.
One thing to note is that usvg doesn't currently preserve text (it will be converted into paths automatically) or quadratic curves.
PS: I also have a longer, but still unfinished rant [0] over SVG complexity if you're interested.
> One thing to note is that usvg doesn't currently preserve text (it will be converted into paths automatically) or quadratic curves.
I mean, that sounds perfect to my ears, as TinyVG doesn't have text/font support anyway and you need to perform the conversion at one point or another.
> Probably a day, as long as you know Rust. I can take a look into it if you're interested
That would be rad! I'll try to do that myself as well, but my last experience with Rust is over two years ago, so I'd be happy to see a pro doing the work!
Apart from SVG, there are long-standing and well-established formats for vector graphics: Enhanced Metafile (EMF) and Encapsulated PostScript (EPS). Why not use them instead of a new file format?
IIRC EMF is specified as basically "whatever win32 GDI does"; it was never meant as a portable format. And PostScript is a Turing-complete programming language, which is not really a way to make anything simpler, let alone simpler than SVG.
Your effort to create a lightweight vector file format with these features is really appreciated, even if it can never replace SVG for the more general use case. Anyway, the tiger has rendering problems (I am quite obsessed with this tiger: even my rasterizer initially did not draw it correctly).
Oh, good catch. But this is actually not a problem in the rendering; it's a conversion problem from SVG to TinyVG. The converter doesn't handle hierarchical attributes yet, so if something uses a <g> to disable filling, it will still fill those elements.
It's common to think of vector art as 'ground truth' and bitmaps as sampled images, but SVG and TinyVG don't add that much over bitmaps. When you zoom into a 100-point polygon, you still reach the limit where there is no additional detail. Those 100 points are in fact samples, an approximation of the artist's intention. When boolean operations are performed on polygons, information gets lost and the shapes degenerate. The SDFs in 3D modeling seem like a purer way of defining and manipulating shapes than polygons and Bezier curves.
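To make that last point concrete, here's a hedged sketch (names and numbers are mine, not from any tool): boolean operations on SDFs are just pointwise min/max, and the original operands survive unchanged, whereas booleans on polygons emit a new, degraded polygon.

```python
import math

# A signed distance function: negative inside the shape, positive
# outside, zero on the boundary.
def circle_sdf(cx, cy, r):
    return lambda x, y: math.hypot(x - cx, y - cy) - r

# Booleans are pointwise min/max. (Caveat: the result is a valid
# implicit surface but only a lower bound on true distance near
# interior seams; fine for rendering and further composition.)
def union(f, g):     return lambda x, y: min(f(x, y), g(x, y))
def intersect(f, g): return lambda x, y: max(f(x, y), g(x, y))
def subtract(f, g):  return lambda x, y: max(f(x, y), -g(x, y))

a = circle_sdf(0.0, 0.0, 1.0)
b = circle_sdf(1.0, 0.0, 1.0)
lens = intersect(a, b)        # lens shape where the circles overlap
assert lens(0.5, 0.0) < 0     # inside both circles
assert lens(-0.5, 0.0) > 0    # inside a only, so outside the lens
```

Nothing about `a` or `b` is lost by composing them, which is exactly the property polygon clipping gives up.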
TinyVG doesn't implement node referencing. In SVG this is done with the `<use>` element (Inkscape calls these clones), which creates an SVG node that references the original shape. Using this you can build smart scenes with shadows, reflections, arrayed clones and other interesting effects. It's like the DRY principle for art; you can later modify the original shape and see the changes reflected in other parts of the scene.
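A minimal sketch of what that looks like in SVG (`href` is the SVG 2 spelling; SVG 1.1 needs `xlink:href`; the IDs here are made up):

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="120" height="40">
  <defs>
    <!-- edit this one shape and every <use> below updates with it -->
    <circle id="dot" r="8"/>
  </defs>
  <use href="#dot" x="20" y="20" fill="steelblue"/>
  <use href="#dot" x="60" y="20" fill="crimson"/>
  <use href="#dot" transform="translate(100 20) scale(0.5)"/>
</svg>
```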
TinyVG doesn't even seem to have object groups. If I wanted to draw a gauge and control the needle rotation at runtime, I would have to manually re-compute the path coordinates each frame. In SVG I would group all parts of the gauge needle, and in each frame I would update just the rotation angle in the group transform.
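For instance, a gauge sketched like this (IDs and numbers invented for illustration) only ever needs the angle in one transform touched per frame, e.g. via setAttribute from script:

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <circle cx="50" cy="50" r="45" fill="none" stroke="gray"/>
  <!-- all needle parts live in one group; at runtime only the
       angle in this rotate() changes -->
  <g id="needle" transform="rotate(135 50 50)">
    <polygon points="48,50 50,12 52,50" fill="crimson"/>
    <circle cx="50" cy="50" r="3"/>
  </g>
</svg>
```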
For a vector format to dethrone SVG I would expect it to be smarter than SVG while discarding legacy features. I'd like to see things like infinitely large dimensions for shapes (shapes for ground and sky, or a ray of sun), specifying design constraints, interactivity and procedural animations. IMO TinyVG takes vector graphics in the wrong direction, as a boring subset of SVG.
Edit: I like that TinyVG takes the form of a very readable lisp-like data description language. Maybe it could be used as a starting point for a smarter vector format, for authoring graphics and not just for distributing the end results.
Most uses of SVG today that I have seen in the wild are icons, graphs and diagrams, which TinyVG covers beautifully. Especially for embedded (the TinyVG author's motivation for this) you often need some icons.
Sure if you need something more complex keep using SVG, or create something new.
But for 80-90% of web and embedded use it's enough.
> TinyVG doesn't implement node referencing. In SVG it is done by the `clone` operation, which creates SVG node that references original shape. Using this you can build smart scenes with shadows, reflections, arrayed clones and other interesting effects. It's like DRY principle for art; you can later modify the original shape and see changes reflected in other parts of the scene.
TinyVG is a final format optimized for rendering. You can design your graphic with all those features, and then render it to TinyVG.
DRY doesn't apply, since you will not be writing it by hand anyway, but using a tool.
The author clearly states that this is not an authoring format, but a display format.
What you are looking for is a better authoring format.
> Edit: I like that TinyVG takes the form of a very readable lisp-like data description language. Maybe it could be used as a starting point for a smarter vector format, for authoring graphics and not just for distributing the end results.
That's not the TinyVG format. That is just the sample generator that comes with the TinyVG suite. TinyVG is a binary format, and you can't edit it with a text editor.
For web and embedded I would still need to dynamically control transforms of object groups. Otherwise the graphics is baked and I can just as well use PNG.
Just like bitmaps, vector graphics look best when displayed at the size they were designed for. Too small, and details just blur together; too large, and the design looks overly simple, with unnaturally sharp edges.
> For web and embedded I would still need to dynamically control transforms of object groups.
Looking at what people mostly use SVGs for on the web, most people don't agree with you. Like I said, most SVGs I see are static icons, diagrams or graphs.
Whether it would be better for them to be bitmaps is another discussion. Lately in embedded I often end up in a situation where I have CPU cycles to spare but bump into the available storage limit on the micro, so this might make sense in such a situation.
The whole point of this format is to be the 80%; that's why it has tiny in the name :) If you want to do dynamic content, this is not the format for you.
> Otherwise the graphics is baked and I can just as well use PNG.
that's what I use now.
But I will try TinyVG for my next project's dashboard icons, to see if the CPU/storage tradeoff is favorable.
I have to install a plugin to allow my WP sites to support SVG because they can potentially contain scripting, what a pain in the ass. All we wanted was vector images, what we got was vector images and a couple kitchen sinks. Bring on the alternatives.
SVG is basically HTML, but for graphics, with much the same ecosystem (JS, CSS, etc). In my opinion that's pretty awesome and opens up a whole world of possibilities. At some point it looked like SVG was poised to become the successor of Flash, but the authoring side never appeared, so we didn't get any successor to Flash.
Of course this has some drawbacks, particularly when talking about user-uploaded SVGs on a website (they probably shouldn't be on the same domain as the website at all). But that's a much narrower use case than what SVG was supposed to be or even than what SVG is used for today.
Really cool work - I've been looking for something like this for some time and have played around with HVIF and IVG to that end.
@ikskuh since I see you here in the comments, I have one nitpick: I don't think it's fair to compare your binary format to an uncompressed SVG file, since SVGs are most often used online (and often even offline) as gzip- or brotli-compressed resources. It would be nice to see an additional comparison between gzipped TinyVG and gzipped SVG to help even the playing field.
Yes, I've heard that from some people already. I will add this to the benchmark tomorrow. It's late here in Germany, and running the benchmark already takes roughly 30 minutes. With gzip, I expect it to go up to 45 or 60 minutes, as we all want good compression rates!
> How will this look in different browsers? Let's test!
Obviously: different font widths when rendering SVG text in different browsers/font stacks.
> That didn't go as expected. I thought that at least both files on my Linux machine look the same, but it seems like Firefox doesn't like the font-size specification, while Chrome and Edge do.
I asked the Render-A-Webpage-As-An-SVG-framework guy about this last week. He claimed over multiple years he hasn't had a single report of broken text rendering from users of his framework.
So what's the deal? Are OP and I the only devs who have ever run up against this issue in practice?
What’s the deal? The font-size declaration in the style attribute was invalid, and Chromium allowed it, in direct contravention of both the SVG 1.1 and 2 specs, while Firefox disallowed it.
SVG 2 removes the unitless number extension and defers to CSS Fonts Module Level 3 entirely for font-size, so that unitless non-zero numbers aren’t permitted even in a font-size attribute.
All up, it’s a bad example that doesn’t support the thesis in the slightest, because a rookie error was made. (There’s still something in the fact that such an error could be made so easily, but it’s not that big a deal, certainly not enough to suggest that text is unreliable beyond the fact that you don’t know what fonts are available.)
Text in SVG is just as good as text in HTML, except inasmuch as nothing implements runtime (font-dependent) line wrapping, so it’s kinda more like HTML with white-space:nowrap.
The interactions between presentation attributes and style properties are a bit fiddly and sometimes unclear. Take something like <rect x="2" y="2" width="calc(100% - 4px)" height="calc(100% - 4px)"/>; it’s not quite clear to me whether this is valid in SVG 1.1, though I think it should be in SVG 2. It works in both Firefox and Chromium, though Firefox logs a claim that it’s invalid in the dev tools. <rect style="x:2px;y:2px;width:calc(100% - 4px);height:calc(100% - 4px)"/>, on the other hand, is definitely fine.
> Text in SVG is just as good as text in HTML, except inasmuch as nothing implements runtime (font-dependent) line wrapping, so it’s kinda more like HTML with white-space:nowrap.
That's quite a caveat. How many current webpage designs depend on wrapping happening in exactly the same way across all browsers and device dimensions?
Huh. I'm afraid I don't know enough about Inkscape or the SVG specs to help with that, just enough to know it's a pain.
Off-topic question for you though: that looks like probably i3 or dwm. Did you make any modifications to Inkscape to make it more usable in a tiling WM? I always have to turn off tiling when I launch it.
I've seen impressive amounts of code written to get around one-line bugs that could be trivially fixed, if only reported. One time, the LOC of the workaround basically matched the LOC of the tool, with the workaround implementing a full parser to consume the output of the tool, rather than just directly loading the JSON the tool was consuming. I used to get frustrated at these things, but I've become a bit demented, reveling in the asinine, self-inflicted suffering that I sometimes witness. Watching people copy-paste code changes, rather than using git, is an infinite source of entertainment that's seemingly impossible to stop.
The problem I occasionally run into while developing is that I hit a series of errors, and in the end you stop wanting to report the fifth bug of the day and just want to fix things the best way you can. Sometimes that attempt goes sour; sometimes it gets me out of the problem in 5 minutes.
> if only reported
Reporting bugs can be super time consuming, especially if your project is large and the maintainer has a high bar for bug reports (e.g. "provide a codesandbox link"). I can't spend 20 minutes reporting a bug that will be auto-closed in 6 months when I can attempt fixing it instead, especially if I'm annoyed at the tool already. This used to happen a lot with browsers and nowadays it happens with build tools and libraries.
Very interesting! I like the idea. One sort of concerning thing: in the comparison chart on the page, the images in the middle column (the TinyVG renderer) look a little blurry to me. Is that just a limitation of the renderer? I suppose since the right column is clear, it means the spec is _capable_ of producing clear images, right?
I usually use weird non-ASCII bytes in file signatures of binary formats. Many tools will then correctly identify the file as binary data.
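PNG is the classic example of this trick. A rough sketch of the heuristic many tools apply, and why the PNG signature trips it:

```python
# PNG's signature: \x89 (non-ASCII), "PNG", CR LF, \x1a (DOS EOF), LF.
# The non-ASCII lead byte makes text-detection heuristics classify the
# file as binary, and the CR/LF bytes also catch line-ending mangling.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def looks_binary(data: bytes) -> bool:
    """Crude heuristic similar to what many tools use: any byte outside
    printable ASCII plus common whitespace marks the data as binary."""
    text_bytes = set(range(0x20, 0x7F)) | {0x09, 0x0A, 0x0D}
    return any(b not in text_bytes for b in data)

assert looks_binary(PNG_SIGNATURE)
assert not looks_binary(b"<svg xmlns='...'></svg>")
```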
> RGB 565
Not important, but I would remove that. In hardware, these 16-bit formats are a thing of the past. Nowadays, you're only saving 2 bytes per color compared to RGBA8, for a non-trivial complexity cost.
> with the color channels encoded in sRGB
I would add another field in the file header for color space. For RGBA8, sRGB is the only reasonable choice; for FP32 colors, however, a linear colorspace is very reasonable for some applications.
> VarUInt.. encoded as a variable-sized integer that uses 7 bit per byte for integer bits and the 7th bit to encode that there is ”more bits available”.
This means you need to parse the complete uint just to skip the field, which is less than ideal. MKV does it much better: https://www.rfc-editor.org/rfc/rfc8794.html#name-variable-si... There, you only need the first byte to find out the length. Note that all modern CPUs have a bswap instruction or an equivalent to load integers from memory while flipping endianness to little endian.
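To make the difference concrete, a hedged sketch of both schemes (function names are mine): the LEB128-style VarUInt forces you to touch every byte, while the EBML-style length is readable from the first byte alone.

```python
def read_varuint(buf: bytes, pos: int = 0):
    """TinyVG-style VarUInt: 7 payload bits per byte, high bit set
    means 'more bytes follow'. Even just skipping the field means
    scanning byte by byte until the high bit is clear."""
    result = shift = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not (b & 0x80):
            return result, pos
        shift += 7

def vint_length(first_byte: int) -> int:
    """EBML/MKV-style VINT: the position of the first set bit in the
    first byte alone gives the total encoded length (1..8 bytes),
    so a parser can skip the field after a single read."""
    for n in range(1, 9):
        if first_byte & (0x80 >> (n - 1)):
            return n
    raise ValueError("invalid VINT marker")

assert read_varuint(b"\xac\x02") == (300, 2)  # 300 = 0b1_0010_1100
assert vint_length(0x81) == 1   # 1xxx_xxxx -> 1 byte total
assert vint_length(0x40) == 2   # 01xx_xxxx -> 2 bytes total
```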
> Arc Ellipse
Even circular arc segments are relatively hard to implement. The only reason they're relatively hard and not insanely hard is the formulae created for SVG and similar: https://www.w3.org/TR/SVG/implnote.html#ArcConversionEndpoin... Pretty sure elliptical arc segments are going to be insanely hard to implement and debug. Another thing: AFAIK no authoring software supports these splines, so how are people supposed to create vector art with these things?
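For reference, the endpoint-to-center conversion those implementation notes describe boils down to the following sketch, which follows the W3C formulae but omits degenerate-input handling (zero radii, coincident endpoints):

```python
import math

def arc_center(x1, y1, x2, y2, rx, ry, phi, large_arc, sweep):
    """Convert SVG endpoint arc parameters to a center point, per the
    W3C implementation notes (F.6.5), with the out-of-range radius
    correction (F.6.6). Assumes rx, ry > 0 and distinct endpoints."""
    cosp, sinp = math.cos(phi), math.sin(phi)
    # Step 1: midpoint-relative coordinates in the ellipse's frame
    dx, dy = (x1 - x2) / 2.0, (y1 - y2) / 2.0
    x1p = cosp * dx + sinp * dy
    y1p = -sinp * dx + cosp * dy
    # Step 2: scale the radii up if they can't reach both endpoints
    lam = (x1p / rx) ** 2 + (y1p / ry) ** 2
    if lam > 1:
        s = math.sqrt(lam)
        rx, ry = rx * s, ry * s
    # Step 3: center in the primed frame; the flags pick which of the
    # two candidate centers to use
    num = rx**2 * ry**2 - rx**2 * y1p**2 - ry**2 * x1p**2
    den = rx**2 * y1p**2 + ry**2 * x1p**2
    coef = math.sqrt(max(num, 0.0) / den)
    if large_arc == sweep:
        coef = -coef
    cxp = coef * rx * y1p / ry
    cyp = -coef * ry * x1p / rx
    # Step 4: rotate/translate back to user space
    cx = cosp * cxp - sinp * cyp + (x1 + x2) / 2.0
    cy = sinp * cxp + cosp * cyp + (y1 + y2) / 2.0
    return cx, cy, rx, ry

# Quarter of the unit circle from (1,0) to (0,1): center at the origin.
cx, cy, _, _ = arc_center(1, 0, 0, 1, 1, 1, 0.0, large_arc=0, sweep=1)
assert abs(cx) < 1e-12 and abs(cy) < 1e-12
```

Even this "easy" circular case needs the radius-correction and sign-selection steps to be exactly right, which supports the point about elliptical arcs being painful to debug.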
For simple image generation, nothing beats SVG or EPS. For years, these two technologies have let me add cool stuff to PDFs and web pages without the need for expensive/complex/insecure libraries. It is going to be pretty hard to beat the ease of generating these formats.
No, BLC is not a concatenative language. It uses application rather than concatenation to combine programs, and its semantics is not described in terms of stack operations (the BLC self-interpreter happens to use a stack, though).
I did see that comment, but it was the PostScript code you mentioned in your post which evoked the concatenative question :-)
I have, perhaps, a naïve internalisation of the reduction steps in something like Forth having the same shape as function application as an elimination step.
Am I the only one who dislikes S-expressions and prefers XML or JSON or even YAML? The parentheses are way too confusing when editing manually, and the format doesn't have enough semantic information to always parse correctly into common structures in any language besides Lisp. I enjoy writing Lisp myself, but I really think it is a mistake to use S-expressions for common formats, everywhere that I've seen it tried.
I'm not a fan of the usual uses of S-expressions in data formats, but for quite the opposite reasons. People seem to have a thing for making up new syntax, so you can't just throw a normal S-expression parser at the problem, and be done with it. For example, the wasm2ps reader [1] has to know how to read a comment in the WebAssembly text format. I decided to leave the project at handling integer instructions and control flow only, as there are other oddities like using a single atom for alignment and offsets in load and store instructions [2]. Some people I've talked to wished that people would stop inventing ad-hoc syntaxes, and just use S-expressions or something. Using S-expressions but adding more ad-hoc syntax gives you the worst of both options.
While admittedly less of a problem for implementation, it is also annoying to see lists with dangling parens on their own lines, and symbols with underscores or camelCase in the names, once you are used to the normal way of formatting Lisp code.
You enjoy writing Lisp, but somehow find parentheses way too confusing when editing manually? That's pretty hilarious.
If you take the expression above and format it in any half-decent editor, it's pretty clear. YAML is a shitshow, JSON is yuck and let's not talk about XML.
Yes, I enjoy Lisp the language and its constructs, but I don't enjoy the parentheses or the syntax. What's hilarious about that? I'm sure you could list some aspects of Lisp that you don't like if you have a lot of experience with it.
>YAML is a shitshow, JSON is yuck and let's not talk about XML.
My issue with S-expressions as a data exchange format is that they're actually just the worst aspects of all three of those combined. If you're frustrated by XML having really deep trees and only strings and elements as datatypes, then S-expressions are just as bad: really, they only have strings and lists as datatypes, and the trees tend to nest even deeper than XML's. For JSON, the formats are equivalent if you remove everything from JSON except lists and strings, so S-expressions are strictly a worse subset; they're just as yuck. YAML is bad because it's complex and badly specified, but S-expressions are a still worse shitshow because there is no spec at all. You just have to hope the format you used is compatible with your Lisp implementation. And yes, this has caused me problems, where using `read` on S-expressions with certain characters in them, or in certain encodings, completely breaks on some Lisp implementations. If you're using some other language that isn't Lisp and you're rolling your own parser, then good luck having that be compatible with anything; most parsers I see just pick a random Lisp implementation and aim for compatibility with it, which is still not reliable.
The only good thing I can say about S-expressions is that they're quite easy to spit out from a bash script, but for this problem domain (vector graphics) there is an even easier option: use a very simple command-based format like postscript.
XML and YAML (including its JSON representation) are complexity nightmares. But especially for XML, the tooling makes it appealing to use as e.g. a distribution format with JS-less browser templating.
IMO that defeats the purpose of using this format if you have to write things that compile to it. If you do that, you might as well just compile to the binary format, since you won't be editing your canonical representation anyway. I'd imagine with image formats you'd want to pick a format that is the easiest possible thing to parse and consume while not harming its "editableness", and I don't think S-expressions fit that bill.
> without the need for expensive/complex/insecure libraries.
Maybe because in the formats named, the image format is the expensive/complex/insecure library? I know that sounds a bit snappy, but I felt the author of TinyVG made a pretty good case for what they're aiming at: a simple vector format that does not come with a Turing-complete kitchen sink (or even multiple, in the case of PDF).
You are 100% right. I will now throw out all of my work because despite being easy to generate, I could accidentally generate an image that does something bad (do you even halt bro?) and I don't want to do that. So I will throw out my simple well tested native solutions and replace them with a solution that needs to be compiled to a native solution.
The point is not the producer. The point is that complex formats require complex infrastructure (parsers, interpreters, etc.) on the consumer side of the format. And the issue with Turing-complete kitchen sinks in your image format is not so much that it may not terminate (that is a problem, but a more minor one), but that such facilities are a) often themselves an attack vector, and b) often a good way to leverage other vectors.
Look at log4j for a recent example on what unwanted complexity does. For something matching the subject matter at hand here, look at the various image parsing attacks that have gained large notoriety. Now force multiply by having PostScript, an actual programming language, in the path.
I am not really arguing because more image formats can only make my life easier but I have to wonder what is the point of this new format when I have to compile it into the existing complex-infrastructure format. If you tell me that we should add native support for this format, does that mean that we remove support for the complex-infrastructure formats or have we simply created a wider surface area for bugs? Finally, what makes this format preferable to the many existing format other than SVG?
It's a bit of a trade-off. An existing browser for example will likely just add the format, effectively creating more attack surface, yes. A new product however can choose to only implement the simpler one (and if no features of the more complex one are needed, will likely want to do so for a multitude of reasons).
Over time, the old format may potentially be removed if it falls in disfavor, or at least only be accepted in more and more restricted contexts, but this works better in closed systems of course. I just finished excising a complex format in favor of a much simpler one, and since it was internal to the organization cutting out the old one did not cause anyone problems. For browsers, I don't know. Deprecating Java Applets and Flash seems to have worked, but those are maybe "heavier" examples than SVG.
As for comparison to existing vector formats, I don't have the necessary domain knowledge there.
90+% of real-world SVG files display just fine with NanoSVG, just to name one of several small SVG renderers that are already available. And it's not one of those Microsoft Office scenarios where the 90% subset is slightly different for every user.
So I don't really see the point in introducing any new vector graphic formats. Just define a subset of SVG that supports the features commonly in use, give it a name, and adopt one of the existing lightweight renderers, forking it if necessary.
As others have pointed out, representation size doesn't matter because it'll be compressed anyway in any applications where size is important. Plain old text is fine.
Looks like TVG is so well packed that gzip compression doesn't help, and it still beats compressed and optimized SVG by a factor of two. We'll get better numbers when I find the time to improve the benchmark.
TinyVG was also designed to be used on constrained embedded systems with low RAM, and I have a proof of concept that it can render medium-complexity files with less than 32k of RAM, without memory optimizations in the reference implementation. NanoSVG doesn't seem to support streaming render events either, so it isn't suitable for low-memory profiles, as it creates a DOM that is then rendered.
Huh, nice. I always reach for SVG when I want a hand-editable format for some random project, but your points on the parsing and implementation complexity seem very reasonable.
Two questions: 1. Did you consider re-specifying SVG or a subset of it in another embedding language that's easier to parse? text/json+svg, application/bson+svg, etc. (Examples for clarity; not literally suggesting JSON/BSON are appropriate choices.)
2. SVG has broken gamma-correct blending. It was originally not specified at all, so all implementations did it wrong; I think the spec fixes it in 1.2, but the only people who care are the Inkscape folks, and they explicitly aren't implementing it because they don't want to author files that in all likelihood will never display properly in browsers.
So... the ship sailed in the wrong direction. It could be worth some thought at this early stage of your project; then you'd have a feature that you actually can't achieve in SVG.
Oh, that explains why all my rendering experiments looked different from SVG. I think the reference renderer already does gamma-correct blending.
This is something we should add to the specification for sure.
> 1. …
Even re-encoding the SVG crap would only remove XML from the equation, but that's not my main concern with SVG.
Zig has the Zen "only one way to do things". Meanwhile SVG has tens of ways to solve the same problem.
One example: you can set the opacity via CSS, fill-opacity, and opacity. What you cannot do: set the opacity via the color. But you can make 50% black by just specifying fill-opacity, as black is the default color. To render lines, you need to set fill to "none". Ugh.
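To illustrate, here are three ways (sketched from memory; there are more) to draw the same 50%-transparent black square:

```xml
<rect width="10" height="10" opacity="0.5"/>
<rect width="10" height="10" fill-opacity="0.5"/>
<rect width="10" height="10" style="opacity: 0.5"/>
```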
I wondered how it compares to the same subset of SWF, which is another vector format designed with a huge emphasis on size and decode efficiency. The example tiger.svg is 96719 bytes and tiger.tvg is 27522, but I have a tiger.swf which is 21381 and a tiger.pdf at 77377. So it seems to be a little worse than SWF, but much better than SVG and PDF. I agree with the author that SVG is bloated, but I'm not sure making another format is the answer, especially when there's already very good prior art.
(I have written SWF rendering and conversion code before, so I might be biased in saying that it's an example of a very well-designed vector format; too bad Adobe has been trying to kill it off.)
SWF is more of a painters/immediate-mode model and SVG/PDF/etc. is a retained mode that defines a set of objects. Different use cases for sure. I've written SWF->vector conversion tools, and the formats are quite different.
While I do appreciate the thought process and hard work, it does feel like most of the benefits could be obtained with a strict subset of SVG instead - something like SVG Tiny was meant to be, but with fewer bad decisions! That would allow for compatibility with the existing ubiquitous SVG ecosystem.
The problem with that is that you still need to implement a full XML parser. Even if you strip out any CSS and ECMAScript, it will still require the complexity of XML with all its glory, escaping rules, and ambiguous definitions.
And even if we rely only on XML, we get the DOM and hierarchical structures. If we forbid those, we have <svg><object /><object /><object /></svg> as a file format, and people will be confused because their other SVG files won't be supported there.
"XML is bad" is still one of the worst engineering arguments. Why is it bad? What is the tradeoff? Things that are bad are easy to quantify as they're measurable. What does it have? It has strong schema support built in. Any intelligent IDE allows for cmd-space completion by reading the XSD. I don't buy this argument as full XML parsers are not even as complicated as HTML5 parsers, which nobody seems to have an issue with. It's not the end-all-be-all of formats, but this is something that's oft-repeated and never quantified.
Your second point is actually valid though. If the TinyVG format takes advantage of non-tree based data-structures, I can certainly understand the motivation. I have a hard time conceptualizing how a tree based format would be beneficial to a vector format, other than describing metadata about certain "areas" of the image.
XML is bad for TinyVG as it is too large in several regards: XML more or less requires DOM parsing, XML is a text format (so the encoding is inefficient), and XML parsers need to be large before they can legitimately be called XML parsers.
All of these properties contradict an embedded world where you want to render vector graphics with 32 KiB of RAM on a chip that doesn't even have enough memory for a framebuffer.
Also, the implementation complexity of XML is so high that I gave up on implementing a correct parser. I don't want a half-assed parser that can't parse real XML, only "XML light", and writing a correct one is a huge amount of work which I don't want to spend my leisure time on.
I agree that SVG/XML is bad for embedded and I think this format is cool but you probably can agree it's unlikely anything will displace SVG unless it matches it in features, one of the benefits which is to manipulate SVGs using the DOM in browsers.
Yes, but SVG can and will do forward references, ergo we have to store some kind of DOM in order to resolve them, even if it's not in the form of the XML document.
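For example, SVG lets a `<use>` element reference a definition that only appears later in the document, so a streaming renderer can't resolve it at the point it is encountered (minimal illustrative snippet):

```xml
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink">
  <!-- forward reference: #dot is not defined yet at this point -->
  <use xlink:href="#dot" x="10" y="10"/>
  <defs>
    <circle id="dot" r="5" fill="black"/>
  </defs>
</svg>
```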
> "XML is bad" is still one of the worst engineering arguments.
It’s reached meme status. I think most people conflate complex XML-based _formats_ (e.g., WSDL, XSD) with XML _the format_. The rules of XML parsing can fit on a notecard. And a basic parser that supports all the core functionality plus namespaces is really not that complicated to implement. Now, implementing schema validation, Xlink, etc. in your parser is definitely not simple. But to make a simple XML parser those are optional bits.
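For a sense of scale: parsing well-formed, namespace-aware XML is a stdlib one-liner in most languages. A Python sketch (this says nothing about schema validation, entities, or the other optional bits):

```python
import xml.etree.ElementTree as ET

# A tiny namespaced SVG-like document as a string.
doc = """<svg xmlns="http://www.w3.org/2000/svg">
  <rect width="10" height="10"/>
</svg>"""

root = ET.fromstring(doc)

# Namespaced tag names come back in Clark notation: {uri}localname
print(root.tag)  # {http://www.w3.org/2000/svg}svg
for child in root:
    print(child.tag, child.attrib)
```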
As someone who a) personally had to deal with XML fallout, and b) actually looked into its history, I respectfully disagree. I adore typed semi-structured data, but XML is not an implementation thereof that is appropriate for, at least, nowadays (I go into just slightly more detail in a sibling comment).
I know where you are coming from, but in the case of XML, I really think it's bad. I'm tired today so I'm not going to write an essay, but XML's history alone (look up how exactly it derives from SGML) made it a good idea with a bad execution.
Semi-structured data is not bad. Namespaces are not bad. Schema are not bad. In fact, I also on the other hand very much lament that XML's backlash lead us to JSON, which is entirely untyped semi-structured data where everyone has to write an (often buggy and incomplete) ad-hoc typechecker for every single document.
Those features are not the problem, the problem is that they are embedded in the SGML-borne XML, that has many weird and intricate corners that only make sense in light of its history, and that lead to complex parsing, obscure behavior, and lots of potential for vulnerabilities--in the parsers or in code that just uses a (perhaps in itself safe) parser. DTDs, Entities, and Processing Instructions are just some of the more known warts.
XML is bad because it bloats the size by roughly 3x (compared to other text formats like s-expressions) and is difficult to implement properly. HTML is bad too; we're just more stuck with it, so it's less useful to complain.
The size is not really an issue for any compression algorithm. I've seen quite a few small XML parsers that don't bother with the full spec and only implement a subset.
Even when you compress XML, it's still around 10% bigger than JSON, and around 40% bigger than protobuf (not endorsing either, just examples). Furthermore, making the compressor work harder isn't free. The time to compress and decompress XML is roughly 2x higher than with JSON, and if you decompress the data before using it, XML will still hit you wrt RAM usage.
All of these would be fine if XML offered you some really good advantage over the alternatives, but as far as I can see, it doesn't. It just eats up CPU/memory/bandwidth/keystrokes for no reason.
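These ratios obviously depend on the payload, the schema style (attributes vs. child elements), and the compressor settings; a quick way to check for your own data is to compress equivalent documents side by side. A Python sketch (the numbers it prints are illustrative, not a benchmark):

```python
import json
import zlib

# The same list of points, once as element-per-field XML, once as JSON.
points = [(i, i * 2) for i in range(100)]

xml_doc = "<points>" + "".join(
    f"<point><x>{x}</x><y>{y}</y></point>" for x, y in points
) + "</points>"
json_doc = json.dumps({"points": [{"x": x, "y": y} for x, y in points]})

for name, doc in (("xml", xml_doc), ("json", json_doc)):
    raw = doc.encode()
    packed = zlib.compress(raw, 9)
    print(f"{name}: raw={len(raw)} B, compressed={len(packed)} B")
```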
XML does offer some really good advantages when it comes to storing strings and sub-documents. I would absolutely not want to write any kind of document in JSON, but XML has been used successfully for a great many document formats.
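The classic example is mixed content, which XML expresses naturally and JSON can only approximate with an awkward array-of-fragments encoding (illustrative snippet; the `#spec` anchor is a placeholder):

```xml
<p>TinyVG is a <em>binary</em> format; see the <a href="#spec">spec</a> for details.</p>
```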
Excluding animations kills this for me. We need more SVG animations on the web, but the state of animation in SVG is a headache unless you're using a third-party library.
If your argument is that animations in SVG don't work well then wouldn't it be best to use TinyVG in place of static SVG assets and make something new that is better suited for animation?
We've already had animated GIFs, the marquee tag, Java applets, and Flash animation, and they've all died out because 99.9% of the time the animation is obnoxious and terrible. Vector animation would be just as annoying.
Java applets died for technical reasons (and were replaced by Flash).
Flash died because it was proprietary and Adobe did not open it up (and Flash was vector animation, by the way).
Otherwise it surely would still be around. And in a way it is, as you can export Flash animations to the HTML canvas element, and some people do that (with quirks).
In other words, a simple, but powerful vector animation tool, is very much needed. The current state is a mess.
And you probably do not like vector animated advertisement, or websites that use animations for the sake of animations. Sure, no one wants that.
But how about games, or interactive graphics to, for example, show complex data in the context of a map? Or animations for didactic purposes? Cartoons?
A well-done animation is actually one you do not notice (but you would notice if it were missing).
Well, to me that is the same thing: because it was proprietary, Steve Jobs and Apple could not control it (and maybe improve it and adapt it to their standards), so they rather threw it out.
(not that Apple had a problem with proprietary tools, just with proprietary tools not under their control in a vital position)
If the Flash player had been open the way Chromium/WebKit is, with many top players working on it, it very likely would still be around and maybe even dominating, as it was way superior in terms of features and, more importantly, it was not a mess to work with, like HTML still is.
We do definitely need better vector graphics, including animations. Because then you can have the same crisp images and animations regardless of your resolution or screen size.
The biggest thing we could add to the standard to help people animate TinyVG files via a secondary format, without animating TinyVG files themselves, is the ability to tag an item with a reference. Maybe define commands 17-26 as (command n - 16) plus an optional 32-bit "reference" field on top. References would basically be "up to the implementor" to use however they want. You might also want a command 16 as an optional "group separator", followed by a 32-bit reference that is otherwise ignored by the TVG engine.
You could probably take a hard line that once you have references, there is no need for any other extension, those are doable using references.
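A rough sketch of what such a tagged command could look like on the wire. This is entirely hypothetical and not part of any TinyVG spec; the +16 offset and the 32-bit reference width are taken from the proposal above, and the one-byte command id is an assumption:

```python
import struct

def encode_tagged_command(command_id: int, reference: int) -> bytes:
    """Hypothetical encoding: the base command id shifted by 16,
    followed by a little-endian 32-bit reference field that the
    renderer is free to interpret or ignore."""
    if not 0 <= command_id < 16:
        raise ValueError("base command ids are assumed to be 0..15")
    return struct.pack("<BI", command_id + 16, reference)

# Tag a hypothetical command 4 with reference 0xCAFE (5 bytes total).
blob = encode_tagged_command(4, 0xCAFE)
```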
BTW, has someone worked on a JS polyfill that lets you load the file format in an image tag? I'd be happy to give it a shot with Zig/WASM.
The question is if we talk about decoding or rendering performance. Right now, the software renderer is slow as heck and not optimized at all.
But the decoding speed should be blazingly fast, as there is not much memory-heavy lifting to do, only a handful of int-to-float conversions.
As soon as I have a competitive renderer (aka Vulkan), I will add those to the benchmark. My guess is that rendering TinyVG should also be much faster than SVG, due to not having any matrix transformations or hierarchies in the format.
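For context on why decoding is cheap: TinyVG stores coordinates as fixed-point integers with a per-file scale, so each unit decodes with a shift and a division. A simplified Python sketch (field widths and the exact header layout are in the spec, not shown here):

```python
def decode_unit(raw: int, scale: int) -> float:
    # Fixed-point to float: the integer is interpreted as having
    # `scale` fractional bits, i.e. value = raw / 2**scale.
    return raw / (1 << scale)

# e.g. the raw value 384 with 8 fractional bits decodes to 1.5 units
print(decode_unit(384, 8))
```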
This looks nice. I was thinking of adding some vector support to my Little Forms[0] GUI toolkit at some point, but everything feels overcomplicated and bloated - this, however, looks quite nice at first glance. As I don't have any advanced vector drawing code at all (aside from the basic "draw line, draw circle", etc. needed to draw the GUI itself), it can work as a template for what to put in there.
The examples in his benchmark image are noticeably blurry compared to the SVGs. Is this due to his software renderer or something in the format itself? For me, most of the reason to use vector graphics in the first place is to maintain a crisp look regardless of resolution, and if the middle images are what I can expect, then I definitely wouldn't use the format.
Edit: Nevermind, this was answered in another comment, apparently it's a limitation of Chrome
@ikskuh - I do not see any mention of layers or grouping in the specification (unless it went by some other terminology I missed in my skim). In the blog post you said you wanted to avoid having an "authoring" file and a "redistributable" version. I am far from an artist, but I have hacked out a few .svg sketches, and layers/grouping are a requirement for creating new content.
I think you got me wrong then: I WANT authoring files and redistributables to differ. Nobody should ship SVG, XCF, or PSD files to consumers. We use PNG or JPEG for pixel graphics, so why not ship TinyVG instead of SVG, and keep SVG as the source file?
This is what I've done for a lot of test files until I had the TinyVG text form ready.
Also, we should not edit graphics files in a code editor. We have better tools for that.
The best streamable vector animation format was invented 20 years ago: Macromedia Shockwave Flash. It was killed by poor mobile processors and Steve Jobs' "Thoughts on Flash".
It was killed by its vendor's poor implementation (poor performance, battery eater on mobile and on laptops) and lack of effort to port to and support non-Wintel platforms.
I don't think even the first generation of iPhone (400MHz ARM) was too slow to run the average Flash games of the time, given that SWF was originally designed to work well on mid-90s PCs (200-300MHz Pentium/Pentium II).
I really appreciate the idea, but given the chosen subset I'd really like to know more about the use case. The only one I can think of is icons. Considering that there is no animation support, only two-point gradients, and no text support (as far as I understand the spec), that rules TinyVG out for pretty much 95% of what I use vector graphics for.
If I want to use those features, I now need two (or more) formats, which kind of defeats the point of a less complicated format, for size, complexity, and security reasons.
I was hoping to make an excalidraw exporter to TinyVG, but unfortunately it doesn't support text. That makes sense given that rendering text is horrible, but it doesn't work for this use case.
SVG is great. CSS is great. Both are not difficult to understand, relatively. There are existing libraries. Why reinvent the wheel? Does this reinvent CSS too?
Well yes, it’s true you’d have to implement basic XML parsing to implement a subset of SVG. At the same time, despite how unpopular XML may be it’s among the easiest textual languages for which to write a parser. This isn’t hyperbole. It’s so trivial I didn’t even consider that that’s what you actually meant.
https://www.getlazarus.org/videos/physics/blueprint/