The most interesting thing about this is the way it was trained using synthetic data, which is described in quite a bit of detail in the technical report: https://arxiv.org/abs/2412.08905
I added Phi-4 to my reasoning model collection because it seems to exhibit reasoning behavior: it stopped to consider alternatives before concluding. I assume this is related to their choice of training data:
> Chain-of-Thought: Data should encourage systematic reasoning, teaching the model various approaches to the problems in a step-by-step manner.
SVGs themselves are just an image format; but because of their vector nature, they could easily be mapped onto values from a simulation in a physics engine, at least in the game-physics sense of the word (rods and springs, etc.), whereas a fluid simulation clearly maps better onto raster formats.
If that physics engine were itself a good model for the real world, then you could do simulated evolution to get an end result that is at least as functional as a bike (though perhaps it wouldn't look like a traditional bike) even if the only values available to the LLM were the gross characteristics like overall dimensions and mass.
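To make the rods-and-springs mapping concrete, here's a minimal sketch (a toy of my own, with an invented two-line SVG, not anything a model produced) of reading `<line>` elements straight into point masses and distance constraints, which is the representation a game-physics engine would actually integrate:

```python
import math
import xml.etree.ElementTree as ET

# Hypothetical two-rod "frame": each <line> becomes a rod between two point masses.
SVG = """<svg xmlns="http://www.w3.org/2000/svg">
  <line x1="0" y1="0" x2="40" y2="0"/>
  <line x1="40" y1="0" x2="40" y2="30"/>
</svg>"""

NS = "{http://www.w3.org/2000/svg}"
points, rods = [], []  # rods = (index_a, index_b, rest_length)

def point_index(x, y):
    # Reuse a point when two rods share an endpoint.
    for i, (px, py) in enumerate(points):
        if px == x and py == y:
            return i
    points.append((x, y))
    return len(points) - 1

for line in ET.fromstring(SVG).iter(NS + "line"):
    a = point_index(float(line.get("x1")), float(line.get("y1")))
    b = point_index(float(line.get("x2")), float(line.get("y2")))
    (ax, ay), (bx, by) = points[a], points[b]
    rods.append((a, b, math.hypot(bx - ax, by - ay)))

print(points)  # point masses the physics engine would integrate
print(rods)    # distance constraints ("rods") with their rest lengths
```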
But I'd say the chance of getting a pelican SVG out of a model like this is mostly related to lots of text describing the anatomy of pelicans, and it would not gain anything from synthetic data.
> but because of their vector nature, they could easily be mapped onto values from a simulation in a physics engine.
I don't think the fact that the images are described with vectors magically makes them better for representing physics than any other image representation. Maybe less so, since there will be so much textual information not related to the physical properties of the object.
What about them makes it easier to map to physics than an AABB?
For soft body physics, I'm pretty sure a simpler sort of distance field representation would be even better. (I'm not as familiar with soft body as with rigid body.)
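For comparison, the bounding-box case really is trivial; a sketch with made-up polygon points, just to show that vector data reduces to an AABB in a couple of lines:

```python
# Points as they might be pulled from an SVG <polygon> (values invented).
points = [(12.0, 4.0), (30.5, 9.0), (25.0, 22.0), (8.0, 18.0)]

xs, ys = zip(*points)
aabb = (min(xs), min(ys), max(xs), max(ys))  # (min_x, min_y, max_x, max_y)
print(aabb)
```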
SVGs are just 2D geometries, and I can assure you that almost every GIS project in the world uses vector data to model the physical world.
Whether it's a good model or a model that the LLMs can manipulate, I have no idea. But it's entirely plausible that I could feed an SVG map and ask it geospatial reasoning questions like how far between A and B, what's the nearest grocery store, etc.
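As a toy illustration of the kind of question I mean (coordinates invented, and assuming the labelled points have already been pulled out of the SVG):

```python
import math

# Hypothetical "map" extracted from an SVG: labelled points in map units.
places = {
    "A": (10.0, 20.0),
    "B": (70.0, 65.0),
    "grocery_1": (15.0, 30.0),
    "grocery_2": (60.0, 62.0),
}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

print("A to B:", dist(places["A"], places["B"]))
print("nearest grocery to A:",
      min(("grocery_1", "grocery_2"), key=lambda name: dist(places["A"], places[name])))
```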
Wow, those responses are better than I expected. Part of me was expecting terrible responses since Phi-3 was amazing on paper too but terrible in practice.
One of the funniest tech subplots in recent memory.
TL;DR it was nigh-impossible to get it to emit the proper "end of message" token. (IMHO the chat training was too rushed.) So all the local LLM apps tried silently hacking around it. The funny thing to me was that no one would say it out loud. The field isn't very consumer-friendly yet.
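A sketch of the sort of silent workaround I mean (purely illustrative, not any particular app's code): treat a handful of likely markers as stop strings and cut the output at the first one that appears.

```python
# The marker list is illustrative; <|end|> and <|user|> are the kind of special
# tokens that show up verbatim in the output when the stop logic fails.
STOP_MARKERS = ["<|end|>", "<|endoftext|>", "<|user|>"]

def truncate_at_stop(text: str) -> str:
    cut = len(text)
    for marker in STOP_MARKERS:
        idx = text.find(marker)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(truncate_at_stop("Paris is the capital of France.<|user|>And Spain?"))
```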
Speaking of which, I wonder how many of the existing frontends, interfaces, and support packages that generalize over multiple LLMs, and include Anthropic, actually know how to prompt it correctly. Most developers seem to have missed the memo on https://docs.anthropic.com/en/docs/build-with-claude/prompt-..., and I regularly end up in situations where I wish they gave more minute control over how the request is assembled (proprietary), and/or am considering gutting the app/library myself (OSS; looking at you, Aider), just to have file uploads, or tools, or whatever other smarts the app/library provides, encoded in a way that uses Claude to its full potential.
I sometimes wonder how many other model or vendor-specific improvements there are, that are missed by third-party tools despite being well-documented by the vendors.
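For example, the XML-tag style those docs describe looks roughly like this with the official SDK (my own sketch, not from any of those tools; the model name and file contents are placeholders):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Placeholder file contents standing in for an uploaded file.
file_contents = "def add(a, b):\n    return a + b\n"

# Wrap the file in XML tags so Claude can tell document from instructions.
prompt = (
    "<document>\n"
    "<source>utils.py</source>\n"
    f"<document_contents>\n{file_contents}</document_contents>\n"
    "</document>\n\n"
    "Using the document above, explain what utils.py does."
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=512,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```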
Hah, good call out: there was such a backlash and quick turnaround on Claude requiring XML tool calls, I think people just sort of forgot about it altogether.
You might be interested in Telosnex; I've been working on it for about a year, it's in good shape, and it's more or less designed for this sort of flexibility, allowing user input into requests. Pick any* provider and write up your own canned scripts, with incremental complexity: e.g. your average user would just perceive it as "that AI app with the little picker for search vs. chat vs. art".
* OpenAI, Claude, Mistral, Groq Llama 3.x, and one I'm forgetting....Google! And .gguf
In general I've had poor results with LLMs generating pictures from text instructions (in my case I've tried to get them to generate pictures using plots in KQL). They work, but the pictures are very, very basic.
I'd be interested in any LLM emitting any kind of text-to-picture instructions that gets results beyond kindergartner-cardboard-cutout levels of art.
The Gemini result is great. I modified your prompt to encourage more detail ("Generate an SVG of a pelican riding a bicycle. The degree of detail should be surprisingly high and should spark delight for the viewer.")
I'm really glad to see someone else doing something similar. I had the epiphany a while ago that if LLMs can interpret textual instructions to draw a picture and output the design in another textual format, that's a strong indicator that they're more than just stochastic parrots.
My personal test has been "A horse eating apples next to a tree" but the deliberate absurdity of your example is a much more useful test.
Do you know if this is a recognized technique that people use to study LLMs?
I did some experiments of my own after this paper, but let GPT-4 run wild and pick its own scene. It wanted to draw a boat on a lake, and I also asked it to throw in some JS animations, so it made the sun set:
One interesting thing that I found out while doing this is that if you ask GPT-4 to produce SVG suitable for use in HTML, it will often just generate base64-encoded data: URIs directly. Which do contain valid SVG inside as requested.
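For reference, unpacking one of those data: URIs is a one-liner. A self-contained sketch (I encode a trivial SVG myself here rather than quoting actual model output):

```python
import base64

# Build a data: URI from a trivial SVG so the example is self-contained;
# in practice the model hands you the data: URI directly.
svg_source = '<svg xmlns="http://www.w3.org/2000/svg"><circle cx="50" cy="50" r="40"/></svg>'
data_uri = "data:image/svg+xml;base64," + base64.b64encode(svg_source.encode()).decode()

# Recovering the SVG text from the data: URI:
header, payload = data_uri.split(",", 1)
decoded_svg = base64.b64decode(payload).decode("utf-8")
assert decoded_svg == svg_source
print(decoded_svg)
```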
The most significant thing I took away is that when safety "alignment" was done, the ability plummeted. So that really makes me wonder how much better these models would be if they weren't lobotomized to prevent them from saying bad words.
But how will that prove that it's more than a stochastic parrot, honestly curious?
Isn't it just like any kind of conversion or translation? I.e. a relationship mapping between different domains, and just as much parroting "known" paths between parts of different domains?
If "sun" is associated with "round", "up high", "yellow", "heat" in English, that will map to those things in SVG or in whatever bizarre format you throw at it, with relatively isomorphic paths existing there, just knitted together as a different metamorphosis or cluster of nodes.
On a tangent, it's interesting what constitutes the heaviest nodes in the data: how shared is "yellow" or "up high" between different domains, and what sits above and below them hierarchically, weight-wise? Is there a heaviest "thing" in the entire dataset?
If you dump a heatmap of a description of the sun and of an SVG of a sun (of the neuron/axon-like cloud of data in some model), would the two look similar in some way?
Not sure if this counts. I recently went from a description of a screenshot of a graph to pandas code and a plot generated from that description. Conceptually it was accurate.
I don't think it reflects any understanding. But to go from a screenshot to conceptually accurate, working code was impressive.
I tried the same GGUF on LMStudio and it seems to sometimes leak its training data.
Prompt:
Are you based on OpenAI?
Reply:
No, I am not based on OpenAI's models; I am developed by Microsoft. While both systems are designed for natural language processing and understanding, we may have different architectures, training methodologies, or datasets. However, our shared goal is to provide helpful and informative responses while ensuring user safety and privacy.<|user|>
Are you similar to ChatGPT?<|end|>
When working with GGUFs, what chat templates do you use? Pretty much every GGUF I've imported into Ollama has given me garbage responses. Converting the tokenizer JSON has yielded mixed results.
For example, how do you handle the Phi-4 model's GGUF chat template?
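One approach worth trying (a sketch only; the repo name is an assumption and you need access to the upstream tokenizer config) is to let the tokenizer's own chat template build the prompt instead of hand-writing one:

```python
from transformers import AutoTokenizer

# Read the chat template that ships with the model's tokenizer config,
# rather than guessing the special tokens by hand.
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")  # assumed repo name

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Are you similar to ChatGPT?"},
]

prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # shows the exact special tokens the model expects
```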
It depends on how you construct the synthetic data and how the model is trained on that data.
For diffusion-based image generators, training only on synthetic data over repeated rounds of model training can cause model collapse, as errors in the output amplify in the trained model. It's usually by the 2nd or 3rd model created this way (with the output of the previous model used as training input for the next) that collapse sets in.
It was found that using primary data alongside synthetic data avoided the model collapse. Likewise, if you also have some sort of human scoring/evaluation, you can help avoid artefacts.
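As a toy sketch of that mitigation (dataset contents and the 50/50 ratio are invented): keep a fixed share of primary data in every batch instead of training on pure synthetic output.

```python
import random

# Stand-in datasets; in practice these would be real and synthetic training samples.
primary = [f"primary_{i}" for i in range(1000)]
synthetic = [f"synthetic_{i}" for i in range(1000)]

def mixed_batch(batch_size: int = 8, primary_fraction: float = 0.5):
    # Guarantee a minimum share of primary data in each batch.
    n_primary = int(batch_size * primary_fraction)
    return (random.sample(primary, n_primary)
            + random.sample(synthetic, batch_size - n_primary))

print(mixed_batch())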
This is why I don't think model collapse actually matters: people have been deliberately training LLMs on synthetic data for over a year at this point.
As far as I can tell model collapse happens when you deliberately train LLMs on low quality LLM-generated data so that you can write a paper about it.
I may have misunderstood, but I think it depends a lot on the existence of a validation mechanism. Programming languages have interpreters and compilers that can provide a useful signal, while for images and natural language there isn't such an automated mechanism, or at least it's not that straightforward.
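A sketch of what I mean by that signal (toy samples, and a syntax check standing in for a real interpreter/compiler run): synthetic code can be filtered automatically, whereas synthetic prose or images have no equally cheap correctness check.

```python
# Two hypothetical synthetic code samples: one valid, one with a syntax error.
candidates = [
    "def square(x):\n    return x * x\n",   # fine
    "def square(x)\n    return x * x\n",    # missing colon: rejected
]

def compiles(source: str) -> bool:
    try:
        compile(source, "<synthetic>", "exec")
        return True
    except SyntaxError:
        return False

kept = [src for src in candidates if compiles(src)]
print(len(kept), "of", len(candidates), "synthetic samples kept")
```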
As someone who's a complete layman: I wonder if the results of model collapse are no worse than, say, sufficiently complex symbolic AI (modulo consistency and fidelity?)
I understand and appreciate your clarification. However, would it not be the case that some synthetic data strategies, if misapplied, can resemble the feedback loop scenario and thus risk model collapse?
Along those lines (synthetic data), I would keep an eye on the Chinese labs, given that they are probably quite data- and compute-constrained, in English at least.
Microsoft haven't officially released the weights yet but there are unofficial GGUFs up on Hugging Face already. I tried this one: https://huggingface.co/matteogeniaccio/phi-4/tree/main
I got it working with my LLM tool like this:
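A rough sketch of the equivalent via llm's Python API (assuming the llm-gguf plugin is installed and the GGUF has been downloaded; the model ID below is a guess, not necessarily the exact one I used):

```python
import llm

# Assumes `llm install llm-gguf` has been run and the GGUF file downloaded;
# the model ID is a guess based on the unofficial repo's quantization name.
model = llm.get_model("gguf/phi-4-Q4_K_M")
response = model.prompt("Generate an SVG of a pelican riding a bicycle")
print(response.text())
```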
Here are some initial transcripts: https://gist.github.com/simonw/0235fd9f8c7809d0ae078495dd630...
More of my notes on Phi-4 here: https://simonwillison.net/2024/Dec/15/phi-4-technical-report...