Hacker News
StableCode (stability.ai)
305 points by kpozin on Aug 8, 2023 | 107 comments



> People of every background will soon be able to create code to solve their everyday problems and improve their lives using AI, and we’d like to help make this happen

Yeah, this is not going to happen. Anyone who has ever tried to gather requirements for software knows that users don't know what they want (clients especially lmao). The language they use won't be detailed enough to create anything meaningful. Do you know what language would be? Code... Unironically, the best language for software isn't English. It's code. Should you specify what you want in enough detail for it to be meaningful, suddenly you're doing something quite peculiar. Where have I heard that before? Oh yeah, you're writing code.

These tools are all annoying AF. Developers don't need half-baked hints to write basic statements and regular people don't have the skills to cobble together whatever permutations these things spit out. Which rather begs the question: who the hell is the audience for this?


The reason people need to gather precise requirements is that the specifications -> product loop is long. Imprecision results in lots of wasted effort.

If that loop is shortened drastically, then trying, checking and tweaking is suddenly a much more viable design method. That doesn’t require a precise set of requirements.


> That doesn’t require a precise set of requirements.

this exactly.

If the AI could make something that semi-works, and you check the output, and repeat until you find the output satisfying, then it will be one of the biggest improvements to software development. Sure, you wouldn't use it to write mission-critical software such as avionics. But you'd use it to automate the sorting of your email, or write a quick auto-reply and auto mail merge, or bang out a quick site.


Then people will use it in production and hit edge case after edge case. Software engineers have spent their careers learning to spot these in advance and deal with them, while AI will just have to guess what to do, or just let the program crash.

Let me tell you, with some of the tickets I've had to deal with I do not think most people could actually describe the problem accurately enough to an AI to actually fix the issue.


Job postings in 10 years are gonna torture themselves not to name some random model as the only way to churn out bug fixes for some no-longer-supported model and its frozen-in-time language of choice.


> If that loop is shortened drastically, then trying, checking and tweaking is suddenly a much more viable design method.

No, you still need the skill of gathering precise requirements, otherwise you end up in endless churn of implementing the wrong requirements and then implementing the wrong corrections when you get bad corrections.

(Maybe we didn't know this before the general adoption of notionally-Agile development methods, which didn't have this as their premise but were focused on other benefits of a shortened spec->product loop; we certainly know it after the widespread adoption of those methods.)

A shortened development loop does mean that you are more likely to have the whole market/domain shift under you between the time the requirements are defined and when the system is implemented, though. That's a frequently-realized risk with big upfront design: it renders even requirements that were precise and accurate when gathered incorrect by the time they are implemented.


A better tool to improve production would be an expert system that would gather requirements. But the people who want software to do their job either can't specify what they do or don't want to invest any of their time in what they see as someone else's job.


You're missing the point: Natural language can be a much higher layer of abstraction than the programming languages we currently have. It's much faster to say "Add a button to download the output as a PDF" than write JS directly.

You'd be surprised by what regular people can build when you give them the power to create software. Here are a bunch of apps created using my tool/GPT-4: https://showcase.picoapps.xyz Most of our users have never coded before, and are able to build small tools to make their and their customers' lives better.


Sure, they can "build" it, but can they maintain it? Or can they only add more layers of mud? How do you refactor the statement "Add a button to download the output as a PDF"?

That's not a replacement for software engineering.


I thought about this for a while now, and I think for the class of problems GP is referring to, no maintenance will actually be needed; the tools will just be discarded and re-written -- at least in one-man projects where no person other than the author is needed to create the tool. Maybe in the future some kind of pseudo-code between real code and natural language will be established. This kind of pseudo-code could be as expressive as code but more concise than natural language. I am not certain it will ever replace actual software engineers for scopes beyond one-man projects.


That makes sense, but never underestimate how easy it is for a throwaway script or tool to become production-ized and a dependency for other production processes.

We've all seen that obscene production workflow built on a Google Sheet or Jupyter Notebook that now needs to support this or that new feature or integration... Add AI-generated tools to the pile.


What's hard about refactoring it? I sorta get the point you're trying to make, but codegen models don't just generate code; they also understand and explain code very well in plain language. I believe domain experts are smart enough to solve the problems that crop up and maintain their code with help from AI.


Hence, the usefulness of tools like UI.Vision RPA


Additional idea for why this won't be used as much as they hope: creating the software is just one part of the entire process of utilizing software. Small scripts could plausibly be written by AI, but actually using them could turn out to be challenging for normal users.

Anyone who has set up a coding project knows that actually creating the project structure, setting up dependencies, build scripts, and making the code compile/be interpreted are all problems that can have extremely obscure, frustrating errors, and they happen before you even start coding.

Then, not to mention, deploying the software. Even if you give someone code, they won't immediately know how to run it. End users get worried at the idea of opening a terminal and running a command in it, no matter how easy it is. Not to mention setting up the software to do so. (Is the right Python version even installed?)

As such, even if an AI could write a perfect script in code from standard text to, say, lowercase all of the words in a document, it would still be hard for non-developers to use because of the surrounding knowledge barrier, outside of the code itself. Although, yeah, it would be easier.


> These tools are all annoying AF. Developers don't need half-baked hints to write basic statements and regular people don't have the skills to cobble together whatever permutations these things spit out. Which rather begs the question: who the hell is the audience for this?

On the contrary: developers are exactly the people capable of handling those complex requirements you speak of. As a developer, getting a computer to handle basic statements is great and frees you to handle the big stuff.

Being able to write “// Retrieves second value from path” and have the computer spit out some string parsing method is great. All those little helper methods that slowly fill up projects are great candidates for an AI. Especially if it helps you break up code into smaller, more composable (and disposable) chunks. If an AI writes the code, and can easily do it again, maybe people would be more willing to delete stuff that isn’t needed.
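To make that concrete, here's a hypothetical sketch of the kind of exchange I mean (in Python rather than the //-commented language above; the function name and path format are made up for illustration). The comment is the prompt, and the helper below is the sort of completion you'd hope to get back:

    # Retrieves second value from path
    def second_path_value(path: str) -> str:
        parts = [p for p in path.split("/") if p]  # drop empty segments from leading/trailing slashes
        return parts[1]

    print(second_path_value("/users/42/orders"))  # "42"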


This is already happening.

It’s true that they won’t know how to exactly specify their needs. But they can share input and output examples and iterate on the solution.

I know folks without any programming background using ChatGPT to write code for them.

The code doesn’t work right off the bat but by iterating with the agent they can either get a solution or solve a portion of the problem.


If you mean things like "write me Python code to read a file and count all occurrences of a word", then it's all cool. Good luck with something like "write me some Java code for Android to send a message to an app user".


Agreed. The space of “read a file and X” problems is huge though and it’s been awesome seeing folks solve these themselves.


It’s about giving the domain experts who understand what the requirements should be a way to build something without having to have the domain knowledge of code.


I think the problem with that is the same reason why "no code" platforms struggle to succeed. Writing software without understanding control flow or libraries or APIs is practically impossible. Instead of being a liberating experience like it should be, it becomes a confining and frustrating one where you don't understand what is and isn't possible.

LLMs will work really well when developers know what they want and how to ask for it, same with many no-code platforms. If you don't understand programming though, you can't even know if your request is possible.


I've worked a fair amount with a client that was using a no-code platform (Bubble) and I agree. These platforms make it easy to get started, but their users quickly get stuck at problems that would take me 2 lines of code to solve; they just hit the glass walls of the no-code platform.

Now, is it still better that no-code platforms exists and give non-technical people the chance to get started? Yes, probably. But the transition path is not clear to me, since no-code platforms don't want their users to move on. So, naturally, they evolve to do more and more complex stuff, which in turn makes their whole platform more complicated and scares off their very target audience.

So now, you need to hire agencies and "no-code developers" to work on your no-code app. Back at square one.

I can see the same story playing out with AI-based coding. If you don't know coding, AI-based coding is just a layer on top of a no-code platform.


That is true. I have seen this first hand with BPMN tools such as Camunda, where the analysts simply cannot wrap their head around it.

However, over time you will need to describe less and less of the code for a large majority of use cases. I expect generative AI will be able to take more generic prompts based on a specific vendor and really generate more with less prompting, given the context of whatever you are targeting, i.e. Azure, Camunda, etc.


> However, over time you will need to describe less and less of the code for a large majority of use cases

Over the same time, the sophistication demanded of software will expand to more than offset this.

Source: this process of advancing tooling and advancing demands has been going on the whole time software has existed, with a consistent pattern.


Yes, this is the problem that has been “solved” a couple of times a decade since the 1970s, at least, and every time the new tools that do that end up mostly used by professional software developers, not domain experts. Honestly, the only thing that has come close to even minimally addressing this is spreadsheet software.

It hasn't been solved any better this time, either.


Sorry, but in my experience fish think about water more than domain experts think about requirements.


I agree the claim isn't going to happen.

> Which rather begs the question: who the hell is the audience for this?

In my opinion, the audience for code-generation AI is developers, not the general public. It's immensely useful to be assisted by AI to autocomplete and suggest my code, whether that is because I'm not familiar with a language's syntax or just don't have all of the language's API in my head.

The general public isn't going to have a clue how to put things together, and until AI can generate reliable and fully functioning code I doubt this is ever going to be for the general public. AI right now is essentially the combination of Google+StackOverflow for me, but at a much faster pace. Instead of browsing through tens of SO questions and Google links to get to the exact situation I'm in, I can just prompt the AI with all the details and get one response that has the answer to my problem, usually!


Shit, even if you give someone all the Lego blocks in the world and an infinitely accurate picture of the minimally complex final product, less than 1% would figure it out.

I bootstrapped dev learning by collecting all the necessary pieces of code, but at the end of the day I feel like I'm just writing a huge semitechnical novel, and the problems I encounter have nothing to do with the basic building blocks; it's entirely about code flow, data flow, entry points, race conditions and things you encounter after you hit 99% of test cases.

This stuff seems like new age "low code" environments.


The audience for this is management. They'll spend lots of their budget on it. They'll use this to put something together that does something, just not what they want. Then they'll show it to you and tell you to make it work the way they think they want. After all, if a manager can "do it" in 90 minutes with no training a developer should be able to make it perfect in a few days. And they'll make you use the new tool so you learn it and so they can justify the expense.


The point is the nuance of writing code is now no longer an elitist act that strikes the egos of those who understand the intricacies, it’s now democratized and in the hands of anyone. It’s not “good code” but it can be. It’s akin to hiring your nephew that says he can code but can’t really other than stdio stuff but at least has the right attitude and asks the right questions.

I do believe there will be a day where we communicate what we need and software is written on the fly to facilitate that need. For better or worse.


> The point is the nuance of writing code is now no longer an elitist act that strikes the egos of those who understand the intricacies, it’s now democratized and in the hands of anyone.

Insofar as anything like that was ever true, it still is.

Not that writing code has ever been the hard part of software development.


Assume that requirement quality is a constant, and that humans as a system have the ability to compile these high-level instructions into low-level code. Now imagine there exist systems which (A) augment human ability, making it more efficient, or (B) replace humans completely. The only reason this is valuable is that it might POTENTIALLY reduce the $/hr cost of the system.


Copium

More people will be able to express themselves, it doesn’t matter that your uncle won’t


Its metrics on HumanEval seem not particularly good (26.89 Pass@1 for it vs. 61.64 for PanGu-Coder2 15b). Is it targeting a very specific latency for responses? I'd think a quantized 15B model should run fast enough for most use cases? Even phi-1 1.3B has better performance at 50.6.


> People of every background will soon be able to create code to solve their everyday problems and improve their lives using AI, and we’d like to help make this happen

Just like every time people hyping a technology have said this with something else in place of “AI” but an otherwise identical claim: no, it didn’t happen last time, it’s not happening this time, and there’s a pretty good chance it’s not happening next time, either.


Is this a "product" that one could install and use or a model that one should expect an OEM to integrate into a product before programmers can use it? I'm asking because I don't see any links that would help me figure out how to try it out.


To be honest, you’d better buy GitHub Copilot and enjoy the productivity boost at a cheap price. Trying to download/install/setup/use StableCode is worth it only if you want to learn all those steps as well. If what you care about is the final result, just buy an existing service.


I have bought into Copilot, but I can’t say it’s that much of a productivity boost. More often than not it recommends something completely wrong. I guess it might be more useful if I did more Spring Boot or maybe Hibernate.

I’ve found ChatGPT to be more helpful in general. I can paste some code in and have a discussion about what I want it to fix for me.


I may put all my open source stuff on GitHub, but hell will freeze over before I willingly let Microsoft get a whiff of my private data, no matter how irrelevant it may be.

GitHub Copilot sounds pretty neat though, I will admit that.


I didn’t have a good experience with Copilot. It was OK for some auto completions, but I found it very distracting to correct it many times when it didn’t do well. It’s like it interrupted my flow. Maybe there is a shortcut to enable it on demand rather than always on, but it wasn’t obvious in the JetBrains plugin.


Yeah, why not upload all my employer's proprietary code to Microsoft? What could possibly go wrong?

I get that consuming an API is far easier than setting up your own inference backend, but there are legitimate issues to consider before going in that direction.


Ctrl-F for “Code for using StableCode Instruct to generate a response to a given instruction.” and you’ll see a super straightforward piece of code to copy to test it out for generating code


Thanks! The verbiage at the beginning of the announcement seems to go out of its way to not call StableCode a "model," which was confusing. By contrast, the recent release of SDXL 1.0 is described as a "model" in its announcement.


Yes, the model is available. However, it just released so no one has wrapped it in a plugin yet. I would expect that within the month there will be a nicely runnable local version, similar to llama2's wrappers.


Yet another site whose data privacy policy amounts to nothing more than an Accept button. Refuse to use such sites.


It's a model you download and run yourself, on your own hardware. No privacy policy needed.


He's referring to the actual website, which doesn't give you the option to reject profiling cookies (mandatory in Europe). I commented about another website posted here a few days ago. It gets me mad too.


Use uBlock Origin and then you won't have to see them ;)


But won't that just default you to agreeing to whatever?


The uBlock cookie filters will decline if it’s possible, I believe. IANAL, but I doubt that most of the things that websites pop up and get you to agree to would really hold up in court unless they had evidence that you specifically agreed to something. From what I’ve seen, most of them are like it or lump it: if you don’t agree, they exit you from the website anyway.


Hard to believe it can work that well when it only has 3B parameters, but I'd love to be proven wrong.


I was impressed enough by replit's 2.7B model that I'm convinced it's doable. I have a 4090 and consider that the "max expected card for a consumer to own".

Also, exllama doesn't support non-llama models and the creator doesn't seem interested in adding support for wizardcoder/etc. Because of this, the alternatives are prohibitively slow for running a quantized 16B model on a 4090 (if the exllama author reads this _please_ add support for other model types!).

3B models with Refact are pretty snappy, about as fast as GitHub Copilot. The other benefit is more context space, which will be a limiting factor for 16B models.

tl;dr - I think we need ~3B models if we want any chance of consumer hardware reasonably running coding models akin to GitHub Copilot with decent context length. And I think it's doable.


I'm fairly confident a coding-specific model should be a lot smaller - 3B should be plenty, if not 1B or less. As it stands, there are quite a few 7-13B model sizes that can predict natural language quite well. Code seems, at its surface, a much simpler language (strict grammars, etc.), so I wouldn't think it needs to be anywhere near as large as the NLP models. Right now people are retraining NLP models to work with code, but I think the best code-helper models in the future will be trained primarily on code and maybe fine-tuned on some language. I'm thinking less of a chat bot API and more of a giant leap in "intellisense" services.


> Code seems at its surface a much simpler language

When using GitHub Copilot, I often write a brief comment first and most of the time, it is able to complete my code faster than if I had written it myself. For my workflow, a good code model must therefore also be able to understand natural text well.

Although I am not sure to which degree the ability to understand natural text and the ability to generate natural text are related. Perhaps a bit of text generation capabilities can be traded off against faster execution and fewer parameters.


Understanding should be much easier, for the same reason humans (e.g. children, foreign-language learners) can always understand more than they can say: human language is fairly low-entropy, so if there's a word you don't understand, you can pick up most of the meaning from context. On the other hand, producing natural-sounding language requires knowing every single word you're going to use.


I'd really like to see smaller models trained on only one specific language, with its own language-specific tokenizer. I imagine the reduction in vocab size would translate to handling more context more easily?


I think simply having the vocab be more code-friendly (e.g. Codex) would make the biggest difference. Whitespace is the biggest one (AFAIK every space is a token), but consider how many languages contain `for(int i=0;`, `) {\n`, `} else {`, 'import ', etc.
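To make the whitespace point concrete, here's a rough illustration using the stock GPT-2 tokenizer as a stand-in (not StableCode's actual vocab; assumes the `transformers` package is installed):

    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")  # generic English-centric BPE vocab

    snippet = "    for (int i = 0; i < n; i++) {\n        total += values[i];\n    }"
    print(len(tok.tokenize(snippet)))
    # Runs of spaces and code punctuation tend to split into many small tokens here;
    # a code-oriented vocab (e.g. Codex's, which added whitespace-run tokens) can
    # represent the same snippet in noticeably fewer tokens.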

My understanding is that a model properly trained on multiple languages will beat an expert based system. I feel like programming languages overlap, and interop with each other enough that I wouldn't want to specialize it in just one language.


There are also just far more tokens to train on if you do multi-language. I'd guess only the most popular languages would even have enough training data to get a specialized version - but it would still be an interesting trade-off for certain use cases. Being able to run a local code assistant on a TypeScript-only project, for example, with a 32k context window would really come in handy for a lot of people. I don't know enough to understand the impact of vocab size vs context size.


It's worth noting that, from what I can tell, a model well trained in most languages would be able to learn the niche ones much more easily.

The vocab size of llama2 is 32,000. I guess I personally don't think that there's enough difference in programming languages to actually save any meaningful number of tokens considering the magnitude of the current vocab.


I wonder if you could train a model generally across a lot of languages, then specialize for a specific one with a different tokenizer / limited vocabulary? Here's the reference I've been using for llama 2 tokens:

https://huggingface.co/mlc-ai/mlc-chat-Llama-2-7b-chat-hf-q4...

It looks like if you just limit it to English it'd cut the count almost in half - further limiting the vocab to a specific programming language could cut it down even more. Pure armchair theory-crafting on my part, no idea if limiting vocab is even a reasonable way to improve context handling. But it's an interesting idea - build on a base, then specialize as needed and let the user swap out the LLM on an as-needed basis (or the front-end tool could simply detect the language of the project). 3B or smaller models with very long context which excel at one specific thing could be really useful (e.g. a local code completer for English TypeScript projects).


replit’s model is surprisingly good at generating code, even at following complex instructions that I was sure would confuse it. I have found it’s a bit weak on code analysis, for open-ended questions like ‘is there a bug anywhere in this code?’ that GPT-4 can answer.


exLlama is not the only viable quantized backend. TVM (as used by mlc-llm) and GGML (which is used by llama.cpp) are very strong contenders.

~7B-13B will work in 16GB RAM with pretty much any dGPU for help, and context extending tricks.

TBH I suspect Stability released a 3B model because it's cheap and quick to train. If they really wanted a good model on modest devices, they would have re-used a supported architecture (like Falcon, MPT, Llama, Starcoder...) or contributed support to a good backend.

Also, I think any PyTorch-based model is not really viable for consumer use. It's just too finicky to install and too narrow in hardware support.


Reminder that GPT-2 was considered “too dangerous” to be released at just 1.5B weights


My memory may be imperfect, but I thought it was more "we aren't sure and we want to promote a culture of safety" rather than "this is definitely unsafe… oh wait never mind"?


It's actually even less remarkable than that. It was an experiment in having a limited release, to shift the field toward a different release convention.

> Nearly a year ago we wrote in the OpenAI Charter: “we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research,” and we see this current work as potentially representing the early beginnings of such concerns, which we expect may grow over time.

> This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.

> We will further publicly discuss this strategy in six months.

https://openai.com/research/better-language-models


I had that thought at first too, but then the scope is really small (programming) compared to other models (everything), so it might not be that bad.


phi-1[0] is only 1.3 billion parameters and performs very well in coding tasks - small models have a massive amount of potential

[0] - https://arxiv.org/abs/2306.11644


But it does mean, hopefully, that it is easier to run on small hardware, making it much more accessible.


How does it compare to GitHub Copilot?


The model, source, etc. are available under permissive terms

https://huggingface.co/stabilityai/stablecode-instruct-alpha...

You can “run it locally”. Very handy if you do not trust automatically sending all your code to someone in the United States.


> to reproduce, distribute, and create derivative works of the Software Products solely for your non-commercial research purposes

I wouldn't call these terms permissive. It's in line with the recent trend in released AI models, but fairly restrictive in what you're actually allowed to do with it.


The Completion model appears to place the model weights under the Apache 2 license, which is a permissive license: https://huggingface.co/stabilityai/stablecode-completion-alp...

The Instruct model has that non-commercial restriction, but I'm not sure why. They say it was trained with Alpaca-formatted questions and responses, but I'm not sure if that includes the original Alpaca dataset.


Hmmm... so on that Hugging Face page there's a text box where you enter input and then click the 'compute' button.

So I asked it to "Write a python function that computes the square of the input number."

And it responds with:

     def square(x):
Which seems quite underwhelming.


I believe that is more related to how the default Hugging Face inference UI is prompting. Running locally with the correct prompt template, it gives decent completions, e.g.

    def square(x):
        return x*x
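For anyone who wants to try this locally, here's a minimal sketch of how one might run it with the `transformers` library. The model id and the ###Instruction/###Response template are assumptions based on my reading of the model card, so double-check them there:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "stabilityai/stablecode-instruct-alpha-3b"  # assumed id; see the HF page linked above
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Assumed instruct prompt format -- verify against the model card.
    prompt = ("###Instruction\n"
              "Write a python function that computes the square of the input number.\n"
              "###Response\n")

    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.2)
    print(tokenizer.decode(out[0], skip_special_tokens=True))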


When they don't voluntarily answer the question, you know the answer.


It's not easy to compare them, to be fair.

I guess you could come up with a thousand example prompts and pay some students to pick which output is better, but I can also see why you wouldn't bother. It probably depends on language, type of prompt, etc.


Sure it's easy -- you can use benchmarks like HumanEval, which Stability did. They just didn't compare to Codex or GPT-4. Of course such benchmarks don't capture all aspects of an LLM's capabilities, but they're a lot better than nothing!


One could team up with Hackerrank/leetcode, let the model code in the interface (maybe there's an API for that already, no idea), execute their code verbatim and see how many test cases they get right the first time around. Then, like for humans, give them a clue about one of the tests not passing (or code not working, too slow, etc.). Give points based on the difficulty of the question and the number of clues needed.

I guess the obvious caveat is that these models are probably overfitted on these types of questions. But a specific benchmark could be made containing questions kept secret from models. Time to build "Botrank", I guess.


On HumanEval, Copilot is 40+ on pass@1 compared to 26 for StableCode 3B.

HumanEval is abused, but this model is only good for its size; it is no match for Copilot … yet.
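For context, pass@k is the estimated probability that at least one of k generated samples for a problem passes its unit tests, so pass@1 is roughly the fraction of problems solved on the first try (26 means about 26%). The standard unbiased estimator from the Codex/HumanEval paper, given n samples of which c pass, looks like this:

    import numpy as np

    def pass_at_k(n: int, c: int, k: int) -> float:
        """Unbiased pass@k estimator: n samples generated, c of them passed."""
        if n - c < k:
            return 1.0
        return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

    # With one sample per problem, pass@1 is just the raw pass rate:
    print(pass_at_k(1, 0, 1), pass_at_k(1, 1, 1))  # 0.0 1.0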


> On HumanEval, Copilot is 40+ on pass@1 compared to 26 for StableCode 3B.

Can you put those numbers into context for those who haven't done HumanEval? Are those percentages so that 40+ means 40+% and 26 is 26%? If so does that imply both would be failing scores?


Is it good at algos?

From interviews:

Implement queue that supports three methods:

* push

* pop

* peek(i)

peek returns an element by its index. All three methods should have O(1) complexity [write code in Ruby].

ChatGPT wasn't able to solve that last time I tried https://twitter.com/romanpushkin/status/1617037136364199938
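For reference, one way to get there (a sketch in Python rather than the requested Ruby, and treating amortized O(1) as acceptable for push): keep an append-only list plus a head index, so pop never shifts elements and peek(i) is a direct index from the head.

    class Queue:
        def __init__(self):
            self.items = []  # append-only backing store
            self.head = 0    # index of the current front element

        def push(self, value):
            self.items.append(value)          # amortized O(1)

        def pop(self):
            value = self.items[self.head]     # O(1): just advance the head
            self.head += 1
            return value

        def peek(self, i):
            return self.items[self.head + i]  # O(1): index relative to the head

    # Note: popped slots are never reclaimed here; compacting occasionally keeps this amortized O(1).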


I tried using aider to work with GPT-4 on this problem. Initially it went for a solution based on `shift`. But when challenged, it realized that shift was O(n) and was able to come up with a dual stack solution. It considers this solution O(1) when amortized over many operations. I don't know ruby well, so I can't verify that.

https://aider.chat/share/?mdurl=https://gist.github.com/paul...


GPT4 made a mistake on its first try, but after asking what the complexity of pop is, it figured out its mistake and fixed it.

https://chat.openai.com/share/d527f65f-8a6d-4602-acab-4d80ed...


In what world is a hashtable lookup worst-case O(1)? Your own solution doesn't match your requirements.

If you want amortized complexity then a simple vector suffices.


1. I like toxic comments like that saying something is simple without actually solving the problem, you're the best.

2. The average complexity to search, insert, and delete data in a hash table is O(1); for interviews it works 99% of the time.

3. There is an alternative O(1) solution you're looking for; I'll leave this exercise to you, bro. As well as the other exercise of being less toxic and a bit more respectful to people you don't know online lol.


Pointing out a mistake is toxicity? Your ego is off the charts. Also, I gave you a solution that would probably be _better_ than yours in 99% of scenarios: a simple vector from any standard library, std::vector in C++, or Vec in Rust. This solution gives you O(1) worst case for peek, and amortized worst-case O(1) for the other operations.

To be actually toxic for a moment, you do know that amortized and average are different right?

There is an easy solution to get O(1) for all operations too, by allowing them to throw an exception: a simple array. There is no other O(1) solution for all operations that I am aware of. In fact it is probably not too difficult to prove that such a solution does not exist.


If your interviewer is asking for a solution and you keep insisting "I gave you a solution"... without actual code... Also saying my ego is off the charts. Good luck passing interviews with that attitude.


Average and big-O notation don't go together... Yes, it works 99% of the time, but the parent explained to you why you won't get that 1% of the offers. That, plus taking offence at his comment, which makes you not pass the behavioral part of the interview.


> which makes you not pass the behavioral part of the interview

I'm okay not getting 1% of the offers. I'm not a $100 bill, so not everyone has to like me.

I appreciate the attempt to educate me, though. I wanted to make clear that any discussion like that is useless without a solution to the problem. Post your solution, we'll discuss downsides. You can see it from my side, and I have another one. The parent commenter ain't got no solution, but keeps insisting he can implement that easily with this and that...

Good luck passing interviews with that attitude...


I can't seem to find a demo. If anyone has had a chance to test it, how does it compare to replit and wizardcoder?


Looks like there is one on the Hugging Face page: https://huggingface.co/stabilityai/stablecode-instruct-alpha...

Not very promising based on this lame test


I ran it locally and it seemed to do better. I switched Python to Bash and it also gave a good answer (nproc).


I have thought about how these tools can be useful quite a lot. I have a prompt I can feed ChatGPT and it will create whole feature "skeletons" with my naming rules and architecture quirks, taking a lot of the time out of getting started when building something new. But with chat it is still too inconvenient; having something like this integrated in the IDE via a script would be more convenient, but still a very specific use case.

I think what I want is this idea of "code completion" but not for writing the methods, which is the easy part. Instead the tool should structure classes and packages and modules and naming and suggest better ways to write certain things.


If I’m reading this correctly, this could be an open source model that may compete with the likes of Copilot?

That is something I’d be very interested in if they can get the compute requirements down to those of say a standard 13B model. Then I could fine tune (correct term?) it on my offline data and hook it into something like fauxpilot and my IDE.

I had a look at some of the recent code models (wizardcoder, strider, etc.) but it seemed that you need a really large model to be any good, and quite a few of them were trying specifically for python.


Trained specifically for Python*


AI cannot magically read minds. Having said that, it would be nicer to have complete solutions rather than code hints. Imagine writing a detailed prompt rather than choosing a prediction, something like: "Write a React/Node JS app that has authentication and a home page", and the AI model gives you a complete project as the output. It would be great if it generated deterministic output for the prompt. AI can really help increase the productivity of programmers.


> ~120,000 code instruction/response pairs in Alpaca format were trained on the base model to achieve this result.

Very curious where they are getting this data from. In other open source papers, usually this comes from a GPT-4 output, but presumably Stability would not do that?


Either way, the race to zero has been further accelerated.

Stability AI, Apple, Meta, etc. are clearly at the finish line, putting pressure on cloud-only AI models, which cannot raise prices or compete with free.


Lots of folks out there would rather skip the hassle of running their own models, and that's totally understandable. Similarly, you've got plenty of folks who'd rather pay for managed hosting services instead of dealing with the nitty-gritty of setting up everything themselves using free tools. This opens up exciting opportunities for successful companies to offer some real perks – think convenience, a smoother user experience, and lightning-fast speeds, just to name a few! All of these things save time and are worth paying for.


> Stability AI, Apple, Meta, etc. are clearly at the finish line

I'm very optimistic and expect them to catch up. I've used the open models a lot; to be clear, they are starting to compare to GPT-3.5 Turbo right now, but they can't compete with GPT-4 at all. GPT-4 is almost a year old from when it finished training, I think?

I expect open source models to stay ~1.5 years behind. That said they will eventually be "good enough".

Keep in mind too though that using and scaling GPUs is not free. You have to run the models somewhere. Most businesses will still prefer a simple API to call instead of managing the infrastructure. On top of this, many businesses (medium and smaller) will likely find models like GPT-4 to be sufficient for their workload, and will appreciate the built-in "rails" for their specific use cases.

tl;dr - open models don't even compare to GPT-4 yet (I use them all daily), they aren't free to run, and an API option is still preferable to a massive number of companies, if not most.


> Keep in mind too though that using and scaling GPUs is not free. You have to run the models somewhere.

Long or medium term these will probably be dirt cheap to just run in the background, though. It might be within 3-5 years, since parallel compute is still growing and isn't as bounded by Moore's law stagnation.


I get decent performance with my 4090, enough that 30B quantized LLMs with exllama are very usable. But we're severely VRAM limited, especially on lower-end hardware, which rarely sees > 10GB of VRAM.

I don't know how much slower it could be and still be useful, though. The big thing is we need more VRAM; 30B is context-length limited with only 24GB of VRAM, and I've only barely made it above 3.2k tokens before running out.

I hope you're right: that it becomes common for systems to have dedicated TPU-type hardware similar to smartphones, and that they absolutely load them up with VRAM (which I don't think is even that expensive?).

Models will also get smaller but I'm skeptical we'll get GPT4 performance with any useful context length under 24GB VRAM any time soon.


Open Source doesn't mean free. It costs a lot of money to run models and keep models up to date, and maybe a "good enough" model runs relatively cheaply, but there's always going to be a "state of the art" that people are willing to pay for.


Hardware is still a limiting factor.

Cloud AI providers get a big advantage from batching/pipelining and fancy ASICs. The question is how much they are willing to lower the tax.


As a user who cares more about the product, how does it compare to GPT-4's code capability? GPT-4 is good enough for me; if this works better than GPT-4, I would love to try it!


I love stability AI


Any performance metrics?


"People will never ..."

Ai: "Hold my beer".



