They are not close to GPT-4. Yet. But the rate of improvement is higher than I expected. I think there will be open source models at GPT-4 level that can run on consumer GPUs within a year or two. Possibly requiring some new techniques that haven't been invented yet. The rate of adoption of new techniques that work is incredibly fast.
Of course, GPT-5 is expected soon, so there's a moving target. And I can't see myself using GPT-4 much after GPT-5 is available, if it represents a significant improvement. We are quite far from "good enough".
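For a sense of what "run on consumer GPUs" actually demands, here's a rough memory back-of-envelope in Python. The parameter counts, quantization levels, and the 24 GB card are illustrative assumptions on my part, not figures from the thread:

    # Back-of-envelope: can a quantized model's weights fit on one consumer GPU?
    # All numbers are illustrative assumptions, not figures from the thread.

    def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
        """Approximate memory for the weights alone (ignores KV cache and runtime overhead)."""
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    consumer_vram_gb = 24  # e.g. a single RTX 4090-class card

    for params in (7, 13, 34, 70):
        for bits in (16, 8, 4):
            mem = weight_memory_gb(params, bits)
            verdict = "fits" if mem <= consumer_vram_gb else "does not fit"
            print(f"{params:>3}B @ {bits:>2}-bit: ~{mem:5.1f} GB -> {verdict} in {consumer_vram_gb} GB")

By this crude measure even a 4-bit 70B model overflows a single 24 GB card before you count the KV cache, which is why the prediction probably hinges on aggressive quantization or techniques that don't exist yet.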
Maybe it should be an independent model in charge only of translating your question into American English and the answer back, instead of trying to make a single model speak all languages.
I don't think this is a good idea. If we are really aiming at anything that resembles AGI (or even a good LLM like GPT-4), a good model is one that has world knowledge. The world is not just English.
There’s a lot of world knowledge that is just not present in an American English corpus. For example, knowledge of world cuisine & culture. There are precious few good English sources on Sichuan cooking.
>I think there will be open source models at GPT-4 level that can run on consumer GPUs within a year or two.
There are indeed already open source models rivaling ChatGPT-3.5, but GPT-4 is an order of magnitude better.
The sentiment that GPT-4 is going to be surpassed by open source models soon is something I only notice on HN. Makes me suspect people here haven't really tried the actual GPT-4 but instead the various scammy services like Bing that claim they are using GPT-4 under the hood when they are clearly not.
You're 100% right and I apologize that you're getting downvoted; in solidarity I will eat downvotes with you.
HN's funny right now because LLMs are all over the front page constantly, but there's a lot of HN "I am an expert because I read comment sections" type behavior. So many not-even-wrong comments that start from "I know LLaMa is local and C++ is a programming language and I know LLaMa.cpp is on GitHub and software improves and I've heard of Mistral."
LLMs are going to spit out a lot of broken shit that needs fixing. They're great at small context work but full applications require more than they're capable of imo.
I don't see the current tech making supply infinite. Not even close.
Maybe a more advanced type of model they'll invent in the next few years. Who knows... But GPT-like models? Nah, they won't write useful code applicable in prod without supervision by an experienced engineer.
It's weird for programmers to be worried about getting automated out of a job when my job as a programmer is basically to try as hard as I can to automate myself out of a job.
You’re supposed to automate yourself out but not tell anyone. Didn’t you see that old Simpsons episode from the 90s about the self-driving trucks? The drivers rightfully STFU about their innovation and cashed in on great work-life balance, and Homer ruined it by blabbering about it to everyone, causing the drivers to try to go after him.
We are trying to keep SWE salaries up, and lowering the barrier to entry will drop them.
Curious thought: at some point a competitor’s AI might become so advanced, you can just ask it to tell you how to create your own, analogous system. Easier than trying to catch up on your own. Corporations will have to include their own trade secrets among the things that AIs aren’t presently allowed to talk about, like medical issues or sex.
As someone who doesn’t know much about how these models work or are created, I’d love to see some kind of breakdown that shows what % of the power of GPT-4 is due to how it’s modelled (layers or whatever) vs the training data and the computing resources associated with it.
This isn't precisely knowable now, but it might be something academics figure out years from now. Of course, first principles of 'garbage in, garbage out' would put data integrity very high; the LLM code itself is supposedly not even 100k lines of code; and the HW is crazy advanced.
So the ordering is probably data, HW, LLM model.
This also fits the general ordering of:
data = all human knowledge
HW = integrated complexity of most technologists
LLM = small team
Still requires the small team to figure out what to do with the first two, but it only happened now because the HW is good enough.
LLMs would almost certainly have been invented by Turing and Shannon et al. nearly 100 years ago if they had had access to the first two.
That’s true now, but maybe GPT6 will be able to tell you how to build GPT7 on an old laptop, and you’ll be able to summon GPT8 with a toothpick and three cc’s of mouse blood.
What is inherent about AIs that requires spending a billion dollars?
Humans learn a lot of things from very little input. Seems to me there's no reason, in principle, that AIs could not do the same. We just haven't figured out how to build them yet.
What we have right now, with LLMs, is a very crude brute-force method. That suggests to me that we really don't understand how cognition works, and much of this brute computation is actually unnecessary.
Maybe not $1 billion, but you'd want quite a few million.
According to [1] a 70B model needs $1.7 million of GPU time.
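As a sanity check on that order of magnitude, here is a back-of-envelope sketch using the common 6·N·D approximation for training FLOPs. The token count, per-GPU throughput, utilization, and hourly rate are all assumptions of mine, not figures from [1]:

    # Rough reconstruction of a "70B model ~= $1.7M of GPU time" figure.
    # Every input below is an assumption for illustration; none come from [1].

    params = 70e9                 # model parameters
    tokens = 2e12                 # training tokens (assumed LLaMA-2-scale run)
    train_flops = 6 * params * tokens   # standard 6*N*D estimate

    peak_flops = 312e12           # assumed A100 bf16 peak, per GPU
    utilization = 0.43            # assumed model FLOPs utilization
    usd_per_gpu_hour = 1.0        # assumed bulk rate

    gpu_hours = train_flops / (peak_flops * utilization) / 3600
    cost = gpu_hours * usd_per_gpu_hour
    print(f"~{gpu_hours / 1e6:.1f}M GPU-hours, ~${cost / 1e6:.1f}M of GPU time")

With those inputs it lands at roughly 1.7M GPU-hours, i.e. the right ballpark, but small changes to utilization or the hourly rate swing the total by a factor of two or more.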
And when you spend that - you don't know if your model will be a damp squib like Bard's original release. Or if you've scraped the wrong stuff from the internet, and you'll get shitty results because you didn't train on a million pirated ebooks. Or if your competitors have a multimodal model, and you really ought to be training on images too.
So you'd want to be ready to spend $1.7 million more than once.
You'll also probably want $$$$ to pay a bunch of humans to choose between responses for human feedback to fine-tune the results. And you can't use the cheapest workers for that, if you need great English language skills and want them to evaluate long responses.
And if you become successful, maybe you'll also want $$$$ for lawyers after you trained on all those pirated ebooks.
And of course you'll need employees - the kind of employees who are very much in demand right now.
You might not need billions, but $10M would be a shoestring budget.
> And when you spend that - you don't know if your model will be a damp squib like Bard's original release. Or if you've scraped the wrong stuff from the internet, and you'll get shitty results because you didn't train on a million pirated ebooks.
This just screams to me that we don’t have a clue what we’re doing. We know how to build various model architectures and train them, but if we can’t even roughly predict how they’ll perform then that really says a lot about our lack of understanding.
Most of the people replying to my original comment seem to have dropped the “in principle” qualifier when interpreting my remarks. That’s quite frustrating because it changes the whole meaning of my comment. I think the answer is that there isn’t anything in principle stopping us from cheaply training powerful AIs. We just don’t know how to do it at this point.
>Humans learn a lot of things from very little input
And they also take 8 hours of sleep per day and are mostly worthless for the first 18 years. Oh, also they may tell you to fuck off while they go on a 3000-mile nature walk for 2 years because they like the idea of free love better.
Knowing how birds fly really doesn't make a useful aircraft that can carry 50 tons of supplies, or one that can go over the speed of sound.
This is the power of machines and bacteria: throwing massive numbers at the problem. Being able to solve problems of cognition by throwing 1 GW of power at them will absolutely get us to an understanding of how our brain does it with 20 watts faster than we would otherwise.
> Transistors used to cost a billion times more than they do now
However, you would still need billions of dollars if you want state-of-the-art chips today, say 3nm.
Similarly, LLMs may at some point not require a billion dollars; you may be able to get one on par with or surpassing GPT-4 cheaply and easily. But state-of-the-art AI will still require substantial investment.
Because that billion dollars gets you the R&D to know how to do it?
The original point was that an “AI” might become so advanced that it would be able to describe how to create a brain on a chip. This is flawed for two main reasons.
1. The models we have today aren’t able to do this. We are able to model existing patterns fairly well but making new discoveries is still out of reach.
2. Any company capable of creating a model which had singularity-like properties would discover them first, simply by virtue of the fact that they have first access. Then they would use their superior resources to write the algorithm and train the next-gen model before you even procured your first H100.
I agree about training time, but bear in mind LLMs like GPT4 and Mistral also have noisy recall of vastly more written knowledge than any human can read in their lifetime, and this is one of the features people like about LLMs.
You can't replace those types of LLM with a human, the same way you can't replace Google Search (or GitHub Search) with a human.
Acquiring and preparing that data may end up being the most expensive part.
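To put "more than any human can read in their lifetime" in rough numbers, here's a quick sketch. The reading speed, daily hours, lifespan, token ratio, and corpus size are all illustrative assumptions, not figures from the thread:

    # How much text can one person read, versus a large training corpus?
    # All inputs are illustrative assumptions, not figures from the thread.

    words_per_minute = 250        # brisk sustained reading speed
    hours_per_day = 4             # dedicated daily reading time
    years = 60                    # reading-adult lifetime
    tokens_per_word = 1.3         # rough English tokenization ratio

    lifetime_tokens = words_per_minute * 60 * hours_per_day * 365 * years * tokens_per_word
    corpus_tokens = 10e12         # assumed order of magnitude for a frontier training set

    print(f"lifetime reading: ~{lifetime_tokens / 1e9:.1f}B tokens")
    print(f"training corpus:  ~{corpus_tokens / 1e12:.0f}T tokens, ~{corpus_tokens / lifetime_tokens:,.0f}x more")

Even with generous reading assumptions the corpus comes out several thousand times larger, which is the "noisy recall of vastly more written knowledge" point in concrete terms.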