More and more companies that were once devoted to being 'open' are now becoming increasingly closed. I appreciate that Stability AI releases these research papers.
It's hard to build a business on "open". I'm not sure what Stability AI's long term direction will be, but I hope they do figure out a way to become profitable while creating these free models.
Agreed but this isn't the same as an open source library; it costs A LOT of money to constantly train these models. That money has to come from somewhere, unfortunately.
Yeah. The amount of compute required is pretty high. I wonder: is there enough distributed compute available to bootstrap a truly open model through a system like SETI@home or Folding@home?
Distributing the training data also opens up attack vectors. Poisoning or biasing the dataset distributed to each computer needs to be guarded against... but I don't think that's actually possible in a distributed model (in principle?). If the compute is happening off-server, then trust is required (and trust isn't efficiently enforceable?).
Trust is kind of a solved problem in distributed computing. The various "@home" projects and Bitcoin handle this by requiring multiple validations of a block of work for just this reason.
How do you verify the work of training without redoing the exact same work for training? (That's the neat part: you don't)
Bitcoin solves trust because each new block depends on previous blocks. With training data there is no such verification: prompt/answer pairs do not depend at all on other prompt/answer pairs (if they did, we wouldn't need to do the work of training in the first place).
You can rely on replicating the work and discarding gross outliers (as you suggest), but that adds a lot of compute overhead, and it is still susceptible to bad actors (though much more resistant).
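To make the "replicate and discard outliers" idea concrete, here's a minimal sketch (the function name, tolerance rule, and toy gradients are all made up for illustration, not any real protocol): the coordinator hands the same batch to several workers and keeps the coordinate-wise median of the gradients they return, flagging workers who strayed too far from it. Note the overhead: four workers' worth of compute for one usable gradient.

```python
import numpy as np

def aggregate_redundant(grads, tol=1e-3):
    """Coordinate-wise median over replicated gradient computations.

    Workers whose submitted gradient sits far from the median are
    flagged as suspect. Illustrative only, not a real protocol.
    """
    grads = np.stack(grads)            # (n_workers, n_params)
    median = np.median(grads, axis=0)  # robust aggregate
    dist = np.linalg.norm(grads - median, axis=1)
    suspects = np.where(dist > tol * (1 + np.linalg.norm(median)))[0]
    return median, suspects

# three honest workers, one poisoner
honest = np.array([0.1, -0.2, 0.3])
grads = [honest, honest, honest, np.array([5.0, 5.0, 5.0])]
agg, bad = aggregate_redundant(grads)  # agg == honest, bad == [3]
```

With a majority of honest workers the median simply ignores the poisoned gradient; with a colluding majority it fails, which is the "still susceptible to bad actors" caveat above.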
There is no solid/good solution, afaik, for distributed training of an AI (Open Assistant is working on open training data, I think?). If one shows up, I'll sign up.
There has been some interesting work on distributed training. For example, DiLoCo (https://arxiv.org/abs/2311.08105). I also know that Bittensor and Nous Research collaborated on some kind of competitive distributed model frankensteining-training thingy that seems to be going well. https://bittensor.org/bittensor-and-nous-research/
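As I understand the DiLoCo idea (roughly: each worker runs many cheap local SGD steps on its own shard, and only the resulting parameter deltas are averaged and applied by an outer optimizer), a toy version looks something like this. The quadratic objective, shard values, and plain-momentum outer step are my own simplifications for illustration (the paper uses Nesterov momentum and real models):

```python
import numpy as np

def local_sgd(theta, data, lr=0.1, steps=20):
    """Many inner SGD steps on one worker's shard, no communication.
    Objective per worker: 0.5 * (theta - data)^2."""
    for _ in range(steps):
        grad = theta - data
        theta = theta - lr * grad
    return theta

theta = np.array([10.0])                      # shared starting point
shards = [np.array([1.0]), np.array([3.0])]   # each worker's data mean
velocity = np.zeros_like(theta)

for outer_round in range(10):
    # communicate only once per round: the averaged parameter delta
    deltas = [local_sgd(theta.copy(), shard) - theta for shard in shards]
    avg_delta = np.mean(deltas, axis=0)
    velocity = 0.5 * velocity + avg_delta     # outer momentum step
    theta = theta + velocity
```

After ten outer rounds theta sits near 2.0, the consensus optimum across both shards, despite only ten rounds of communication. That communication/computation ratio is what makes this family of methods interesting for volunteer-compute setups.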
Of course it gets harder as models get larger, but distributed training doesn't seem totally infeasible. For example, if we were talking about MoE transformer models, perhaps separate slices of the model could be trained asynchronously and then combined with some retraining. You could have minimal regular communication about, say, the mean and variance of each layer, plus a new loss term dependent on these statistics to keep each contributor's "expertise" distinct.
Forward-Forward looked promising, but then Hinton got the AI-doomer heebie-jeebies and bailed. Perhaps someone will pick up the concept and run with it - I'd love to myself, but I don't have the skillz to build stuff at that depth, yet.
>> but Y-Combinator literally only exists to squeeze the most bizness out of young smart people.
YC started out with the intent to give young smart people a shot at starting a business. IMHO it has shifted significantly over the years to more what you say. We see ads now seeking a "founding engineer" for YC startups, but it used to be the founders were engineers.
>> Training these big models is very very expensive.
Which is why they are not the future. A big model that can generate a picture of anything in response to any input makes for a great website. It generates lots of press. But it is not a reasonable tool for content generation. If you want to produce content in a specific area or genre, the best results come from a model trained or fine-tuned in that area. So the big generalized AI, if you use it at all, would only be the framework on which you build your specialized tool. Building that specialized tool, such as something dedicated to images of a particular politician, does not require huge amounts of computation. That sort of thing can be, and is being, done by individuals.
I am waiting for a tool trained on publicly-accessible mugshots. It wouldn't be a very big project but could yield a tool to generate very believable mugshots of politicians.
Depending on your background and circumstances, there are ways to opt out of the race to a greater/lesser degree. Moving to a cheaper city in your country, or a cheaper country altogether, is one of them. Finding a less stressful way of making less money is another.
It's just hard being reminded that there's no escape hatch - we've welded them all shut for eternity. Being reduced to choices within a system, where the choice horizon never extends to the system itself and won't within my lifetime, makes me feel trapped.
Maybe, but in image generation it's also hard to be closed.
The big providers are all so terrified they'll produce a deepfake image of Obama getting arrested or something that the models are locked down to the point they only seem capable of producing stock photos.
But they used to let you download the model weights to run on your own machine, whereas Stable Diffusion 3 is just in 'limited preview' with no public download links.
Both SD 1.4 and SDXL were in limited preview for a few months before a public release. This has been their normal course of business for about two years now (since founding). They just do this to improve the weights via a beta test with less judgemental users before the official release.
How is a closed beta anything out of the ordinary? They know they would only get tons of shit flung at them if they publicly released something beta-quality, even if clearly labeled as such. SD users can be a VERY entitled bunch.
I've noticed a strange attitude of entitlement that seems to scale with how open a company is - Mistral and Stability AI are on very sensitive ground with the open source community despite being the most open.
If you try to court a community, then it will expect more of you. Same as if you were to claim to be an environmentalist company: you would receive more scrutiny from environmentalists checking that your claims hold up.
That's… not really relevant to Stability AI at all. SAI isn't "claiming" anything. They show rather than tell (well, mostly). They give away for free a technology the likes of which everybody else keeps very tightly locked behind SaaS. Then people bitch about said free technology.
That's nothing new with Stability. Even 1.5 was "released early" by RunwayML because they felt Stability was taking too long to release the weights instead of just providing them in DreamStudio.