
If the barrier to entry is low enough for several players to enter the field this fast, I wonder what could raise it again? The models getting bigger, I suppose.



A few months (weeks?) ago I would've said this was already the case for language models. It's absolutely mind-blowing to me what is happening here, and the same goes for Stable Diffusion. Once DALL-E was out, I was sure there was no way anything like this could be run on consumer hardware. I'm very happy to be proven wrong.

In a way, though, things are still moving in this direction: 8 or so years ago it was still more or less feasible to train such models yourself to a useful degree, and I think we've since moved well past that point.


LLaMA can be fine-tuned in hours on a consumer GPU, or in a free Colab with just 12GB of VRAM (soon 6GB with 4-bit training), using PEFT.

https://github.com/zphang/minimal-llama#peft-fine-tuning-wit...
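For concreteness, here is a minimal sketch of what such a PEFT/LoRA fine-tuning setup looks like. The checkpoint name and hyperparameters are illustrative assumptions, not values taken from the linked repo:

    # Minimal LoRA fine-tuning sketch with the PEFT library.
    # Checkpoint name and hyperparameters are illustrative, not from the thread.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_name = "decapoda-research/llama-7b-hf"  # hypothetical checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        load_in_8bit=True,   # 8-bit weights (bitsandbytes) to fit ~12GB of VRAM
        device_map="auto",
    )

    # LoRA freezes the base model and trains small low-rank adapters,
    # which is what keeps memory within consumer-GPU range.
    config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically well under 1% of all params

From there, training proceeds as a standard transformers training loop; only the adapter weights receive gradients.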


Fortunately, there are still some possibilities for improving training efficiency and reducing model size, for example by doing more guided attentional learning.

This will make it feasible to train models at least as good as the current batch (though the big players will probably use those same optimizations to create much better large models).
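One concrete reading of "guided attentional learning" (my interpretation, not necessarily the commenter's) is attention-map distillation in the TinyBERT style: a smaller student is trained to reproduce a larger teacher's attention patterns on top of the task loss. A hedged sketch, with all names illustrative:

    # Sketch of attention-map distillation; assumes student and teacher
    # are Hugging Face-style models supporting `output_attentions=True`.
    import torch.nn.functional as F

    def attention_distillation_loss(student_attn, teacher_attn):
        # Both tensors: (batch, heads, seq, seq). Assumes matching head
        # counts; real setups often map several teacher layers onto one
        # student layer.
        return F.mse_loss(student_attn, teacher_attn)

    # Inside a training step (illustrative; `layer_ratio` is hypothetical):
    # s_out = student(input_ids, output_attentions=True)
    # with torch.no_grad():
    #     t_out = teacher(input_ids, output_attentions=True)
    # loss = task_loss + sum(
    #     attention_distillation_loss(s, t)
    #     for s, t in zip(s_out.attentions, t_out.attentions[::layer_ratio])
    # )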


Soon you'll need a government license to purchase serious compute.


Our saving grace seems to be the insatiable push by the gaming industry for better graphics at higher resolutions. Their vision for real-time path traced graphics can’t happen without considerable ML horsepower on consumer level graphics cards.


They can just slow down certain algorithms on gaming cards via firmware. I think they already did this to throttle crypto mining on some gaming cards.


Firmware locks aren't effective, though. Most of those locked cards have jailbreaks that allow full-speed crypto mining.


Yesterday's "serious compute" is today's mid-range PC.


The Vice Chairman of Microsoft has already said he is open to regulation, and the EU is also working on plans to regulate AI. So in the future you may only be allowed to use AI if it's approved by something like the FD(A)A.


Maybe I have a limited view of this, but I fail to see how regulation wouldn't harm more than it helps here. The truly dangerous actors wouldn't care, or would be based in some other country. Having a large diversity of actors seems like the best way to ensure resilience against whatever threats might arise from this.


What about the models that are out already? Will men with guns raid my home and confiscate my computer?



