Cerebras-GPT: Open Compute-Optimal Language Models Trained on Cerebras Cluster (arxiv.org)
97 points by cs-fan-101 on April 7, 2023 | 12 comments



Recently, we announced in this post (https://news.ycombinator.com/item?id=35343763#35345980) the release of Cerebras-GPT — a family of open-source GPT models trained on the Pile dataset using the Chinchilla formula. Today, we are excited to announce the availability of the Cerebras-GPT research paper on arXiv.
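
For anyone unfamiliar with the term, "compute-optimal" here refers to the Chinchilla-style rule of thumb of roughly 20 training tokens per parameter. A rough back-of-the-envelope sketch (the 20:1 ratio and the C ~= 6*N*D FLOPs formula are the usual approximations, not fitted values from the paper):

    # Rough Chinchilla-style sizing: ~20 tokens per parameter and the common
    # C ~= 6 * N * D estimate of training FLOPs. Illustrative approximations only.
    TOKENS_PER_PARAM = 20

    def compute_optimal_tokens(n_params: float) -> float:
        """Approximate number of training tokens for a compute-optimal run."""
        return TOKENS_PER_PARAM * n_params

    def training_flops(n_params: float, n_tokens: float) -> float:
        """Standard C ~= 6*N*D estimate of total training compute."""
        return 6 * n_params * n_tokens

    for n in [111e6, 1.3e9, 6.7e9, 13e9]:
        d = compute_optimal_tokens(n)
        print(f"{n/1e9:5.2f}B params -> ~{d/1e9:.0f}B tokens, ~{training_flops(n, d):.2e} FLOPs")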


Thank you for open sourcing these models!

I mentioned that the sizes of the models are relatively small (13B max). Is that an inherent limitation, or is training a bigger model possible and it just hasn't been done in this exercise?


Someone else can answer this better than I, so I'll probably end up deleting this in an hour or two. But I think the purpose of this research was not to create an excellent GPT model. I believe it was to explore the scaling effects on Cerebras hardware and determine a helpful framework for compute-optimal training regimes so that customers who might use Cerebras hardware can be confident that:

1) Standard AI/ML scaling assumptions still apply on this hardware.

2) They have a starting point for hyper-parameter estimation and can get better results sooner.
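
To make (1) a bit more concrete: by "scaling assumptions" I mean things like the loss following a power law in model size. A purely synthetic toy example of what checking that looks like (every number below is made up for illustration and has nothing to do with the paper):

    # Synthetic power-law scaling check: generate losses from
    # L(N) = a * N**(-alpha) + c plus noise, then recover the exponent
    # with a log-log linear fit. No numbers here come from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    a_true, alpha_true, c_true = 40.0, 0.19, 1.5   # assumed constants
    n_params = np.array([1.1e8, 2.6e8, 5.9e8, 1.3e9, 2.7e9, 6.7e9, 1.3e10])
    losses = a_true * n_params ** (-alpha_true) + c_true + rng.normal(0, 0.01, n_params.size)

    slope, _ = np.polyfit(np.log(n_params), np.log(losses - c_true), 1)
    print(f"recovered scaling exponent alpha ~= {-slope:.3f} (true value {alpha_true})")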


> But I think the purpose of this research was not to create an excellent GPT model.

Yes, understood. I feel that this point is really a response to the other commenter who suggested that Cerebras should release a ChatGPT-competitive model. I don't think that's easy, and I don't think it's a focus for a hardware maker such as Cerebras.

> I believe it was to explore the scaling effects on Cerebras hardware and determine a helpful framework for compute-optimal training regimes so that customers who might use Cerebras hardware can be confident that:

> 1) Standard AI/ML scaling assumptions still apply on this hardware.

This is my point. Is it possible to train a 100B model on Cerebras hardware? 500B? For the purpose of demonstrating capability, the quality of the resulting model is secondary to showing that training at that scale is possible at all.


> Maximal Update Parameterization (μP)

The use of μ (mu) as a sort of… pun acronym thing is pretty clever, nice one.


Thanks for publishing this. I quickly skimmed the paper and saw the impressive linear scaling as you scaled to 16 nodes. How long did it take to train the various models in wall-clock time?


If they can release ChatGPT-competitive open-source models, this will be their biggest proof-of-concept-backed marketing. After all, their business is selling hardware to a variety of businesses and institutions.


This would be an interesting step for the industry, as it would couple AI regulation to hardware sales and black-market law enforcement.


Very interesting that someone is finally trying out μP in the real world. Do I understand the usage correctly:

μP is only used to get around choosing a learning rate for each model size? I wonder how it compares to standard heuristics like the one in the original scaling-laws paper from OpenAI, and to tricks like rewinding a few steps after a loss explosion.

And for some reason μP was not trusted with the largest training runs? Why is that?
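
For context on what I mean by getting around the learning-rate choice, here is my rough mental model of μTransfer. This is a simplification (real μP also rescales initializers and output multipliers), not necessarily Cerebras' exact recipe:

    # Simplified sketch of muP-style learning-rate transfer: tune the LR once
    # on a narrow proxy model, then scale the hidden-weight LR by
    # base_width / width, while vector-like params (embeddings, norms, biases)
    # keep the base LR. A rough mental model, not Cerebras' exact setup.
    def mup_lr(base_lr: float, base_width: int, width: int, kind: str) -> float:
        if kind == "hidden":                 # matrix-like weights (attention, MLP)
            return base_lr * base_width / width
        return base_lr                       # vector-like parameters

    base_lr, base_width = 6e-3, 256          # hypothetical values tuned on the proxy
    for width in (256, 1024, 4096):
        print(f"width={width:5d}  hidden LR={mup_lr(base_lr, base_width, width, 'hidden'):.2e}"
              f"  vector LR={mup_lr(base_lr, base_width, width, 'embed'):.2e}")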


You can download these models on huggingface[0]

[0] https://huggingface.co/cerebras
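
For example, with the transformers library (the repo id below is the smallest variant listed on that page; double-check the exact names there):

    # Minimal example of loading one of the checkpoints with Hugging Face
    # transformers. Repo id assumed from the hub page above; verify it there.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "cerebras/Cerebras-GPT-111M"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer("Generative AI is", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30, do_sample=False)
    print(tokenizer.decode(out[0], skip_special_tokens=True))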


You all do realize that the O̶p̶e̶n̶AI.com founders, Sam Altman, Greg Brockman, et al., have hedged their bets and invested in Cerebras?

I can only see Cerebras becoming an acquisition target if they continue putting their AI models out there. The value in Cerebras is their AI accelerator hardware, and O̶p̶e̶n̶AI.com certainly needs that, since that is where the money is.


"Open"AI doesn't have time to muck around with hardware design, every month counts.



