Hacker News new | past | comments | ask | show | jobs | submit login

Llama 3.2 vision models don't seem that great if they have to compare them to Claude 3 Haiku or GPT4o-mini. For an open alternative I would use Qwen-2-72B model, it's smaller than the 90B and seems to perform quite better. Also Qwen2-VL-7B as an alternative to Llama-3.2-11B, smaller, better in visual benchmarks and also Apache 2.0.

Molmo models: https://huggingface.co/collections/allenai/molmo-66f379e6fe3..., also seem to perform better than Llama-3.2 models while being smaller and Apache 2.0.




1. Ignore the benchmarks. I've been A/Bing 11B today with Molmo 72B [1], which itself has an ELO neck-and-neck with GPT4o, and it's even. Because everyone in open source tends to train on validation benchmarks, you really can not trust them.

2. The method of tokenization/adapter is novel and uses many fewer tokens than all comparable CLIP/SigLIP-adapter models, making it _much_ faster. Attention is O(n^2) on memory/compute per sequence length.

[1] https://molmo.allenai.org/blog


> I've been A/Bing 11B today with Molmo 72B

How are you testing Molmo 72B? If you are interacting with https://molmo.allenai.org/, they are using Molmo-7B-D.


It’s not just open source that trains on the validation set. The big labs have already forgotten more about gaming MMLU down to the decimal than the open source community ever knew. Every once in a while they get sloppy and Claude does a faux pas with a BIGBENCH canary string or some other embarrassing little admission of dishonesty like that.

A big lab gets exactly the score on any public eval that they want to. They have their own holdouts for actual ML work, and they’re some of the most closely guarded IP artifacts, far more valuable than a snapshot of weights.


I tried some OCR use cases, Claude Sonnet just blows Molmo.


When you say "blows," do you mean in a subservient sense or more like, "it blows it out of the water?"


yeah does it suck or does it suck?


How about its performance compare to Qwen-2-72B tho?


Refer to the blog post I linked. Molmo is ahead of Qwen2 72b.


What interface do you use for a locally-run Qwen2-VL-7B? Inspired by Simon Willison's research[1], I have tried it out on Hugging Face[2]. Its handwriting recognition seems fantastic, but I haven't figured out how to run it locally yet.

[1] https://simonwillison.net/2024/Sep/4/qwen2-vl/ [2] https://huggingface.co/spaces/GanymedeNil/Qwen2-VL-7B


MiniCPM-V 2.6 is based on Qwen 2 and is also great at handwriting. It works locally with KoboldCPP. Here are the results I got with a test I just did.

Image:

* https://imgur.com/wg0kdQK

Output:

* https://pastebin.com/RKvYQasi

OCR script used:

* https://github.com/jabberjabberjabber/LLMOCR/blob/main/llmoc...

Model weights: MiniCPM-V-2_6-Q6_K_L.gguf, mmproj-MiniCPM-V-2_6-f16.gguf

Inference:

* https://github.com/LostRuins/koboldcpp/releases/tag/v1.75.2


Should the line "p.o. 5rd w/ new W5 533" say "p.o. 3rd w/ new WW 5W .533R"?

What does p.o. stand for? I can't make out the first letter. It looks more like the f, but the nodge on the upper left only fits the p. All the other p's look very different though.


'Replaced R436, R430 emitter resistors on right-channel power output board with new wire-wound 5watt .33ohm 5% with ceramic lead insulators'


Thx :). I thought the 3 looked like a b but didn't think brd would make any sense. My reasoning has led me astray.


Yeah. If you realize that a large part of the llm's 'ocr' is guessing due to context (token prediction) and not actually recognizing the characters exactly, you can see that it is indeed pretty impressive because the log it is reading uses pretty unique terminology that it couldn't know from training.


I'd say as an llm it should know this kind of stuff from training, contrary to me, for whom this is out of domain data. Anyhow I don't think the AI did a great job on that line. Would require better performance for it to be useful for me. I think larger models might actually be better at this than I am, which would be very useful.


Be aware that a lot of this also has to do with prompting and sampler settings. For instance changing the prompt from 'write the text on the image verbatim' to something like 'this is an electronics repair log using shorthand...' and being specific about it will give the LLM context in which to make decisions about characters and words.


Thanks for the hint. Will try the out!




Join us for AI Startup School this June 16-17 in San Francisco!

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: