Hacker News

Maybe someone more informed can help me understand why they didn't compare to Llava (https://llava-vl.github.io/)?



The purpose of this research is to compare large vision-language models where the vision component is pre-trained using different techniques, namely on image classification versus unsupervised contrastive pre-training (see OpenAI's CLIP). PaLI-3 also isn't an instruction-tuned model, so comparing it to Llava would be a little apples-to-oranges.
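For reference, the CLIP-style contrastive pre-training mentioned above trains image and text encoders so that matched image-text pairs score higher than all mismatched pairs in a batch, via a symmetric cross-entropy over cosine similarities. A minimal NumPy sketch of that objective (the function name, shapes, and temperature value here are illustrative, not taken from the paper):

```python
import numpy as np

def clip_contrastive_loss(image_embs, text_embs, temperature=0.07):
    """Symmetric contrastive (InfoNCE-style) loss over a batch of
    image/text embedding pairs, where row i of each array is a match."""
    # L2-normalize so dot products become cosine similarities
    image_embs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)

    # Pairwise similarity matrix, scaled by temperature
    logits = image_embs @ text_embs.T / temperature
    n = logits.shape[0]

    # Log-softmax over rows (image -> text) and columns (text -> image);
    # the diagonal entries (i, i) are the positive pairs.
    log_sm_rows = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_sm_cols = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    idx = np.arange(n)
    loss_img = -log_sm_rows[idx, idx].mean()
    loss_txt = -log_sm_cols[idx, idx].mean()
    return (loss_img + loss_txt) / 2
```

The loss is minimized when each image embedding is closest to its own caption's embedding, which is why the resulting vision encoder tends to behave differently from one pre-trained with classification labels.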


Maybe they just didn't know about Llava while conducting their research. It can take days to train a model sometimes.


Weeks to months at larger scales even.



