nomel | 5 months ago | on: Llama 3-V: Matching GPT4-V with a 100x smaller mod...
It's Llama 3 training cost + their cost. Meta "kindly" covered the first $700M.
> We add a vision encoder to Llama3 8B
lanceflt | 5 months ago
They didn't train the vision encoder either, it's unchanged SigLIP by Google.
qeternity | 5 months ago
“We finetuned billions of dollars of research by Google and Meta.”
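For context on what "we add a vision encoder to Llama3 8B" usually amounts to in practice, here is a minimal sketch of pairing a frozen SigLIP encoder with a frozen Llama 3 8B through a small trained projection layer. This is not the Llama 3-V authors' actual code; the checkpoint names and the single linear projector are illustrative assumptions.

import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, SiglipVisionModel

# Pretrained pieces, both kept frozen (neither is trained here).
# Checkpoint names are assumptions for illustration.
vision = SiglipVisionModel.from_pretrained("google/siglip-so400m-patch14-384")
llm = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
for p in vision.parameters():
    p.requires_grad = False
for p in llm.parameters():
    p.requires_grad = False

# The only newly trained component in this sketch: a projection from
# SigLIP's feature width to Llama 3's hidden size, so image patch
# features can be fed to the LLM as "soft tokens" next to text embeddings.
projector = nn.Linear(vision.config.hidden_size, llm.config.hidden_size)

def forward(pixel_values, input_ids):
    # (batch, patches, vision_dim) -> (batch, patches, llm_dim)
    patch_feats = vision(pixel_values=pixel_values).last_hidden_state
    image_tokens = projector(patch_feats)
    # Embed the text tokens with the frozen LLM's own embedding table.
    text_embeds = llm.get_input_embeddings()(input_ids)
    # Prepend image tokens to the text sequence and run the frozen LLM.
    inputs_embeds = torch.cat([image_tokens, text_embeds], dim=1)
    return llm(inputs_embeds=inputs_embeds)

Under this setup, the only parameters that gradient updates touch are the projector's, which is why the thread frames the work as a thin adapter on top of Google's and Meta's pretrained models.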