wcallahan's comments

Jina.ai’s API is one of the best parsers I’ve seen. And better priced.

I believe so, presuming it’s released the same way the 70B recently was. IIRC, I had to consent on Hugging Face to their policy on “research and non-commercial use only” before downloading the model.

https://github.com/ggerganov/llama.cpp/discussions/4576


Well, that's why I am asking: Facebook has not made any legally binding promise to release the weights; people just assume/hope they will. But OP's 'releasing it' could cover anything from a blog post to an API to a chatbot service in WhatsApp. The exact date matters far less than whether there is any new information on the weights being released.


I’ve personally run the Llama 3 models locally on a Mac Studio with 128GB of RAM, and found them (and other open source models) to perform highly erratically.

And while I’ve encountered quality control issues with the open source models in “professionally hosted” settings too (e.g., Groq, OpenRouter), it was much worse locally.

Sentences ending abruptly and nonsensical/unrelated words were a common problem.


Those were still quantized, right? My understanding is that Llama 3 does not quantize well at all.


I run Llama 3 8B Q4_K_M locally, and it's amazing! This is my system prompt:

You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.
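For anyone wanting to try the same thing, here's a minimal sketch of how that setup might look with llama-cpp-python and a local Q4_K_M GGUF file (the model filename, context size, and user message are placeholders/assumptions, not my exact config):

    # Minimal sketch, assuming llama-cpp-python and a local Q4_K_M GGUF file.
    # The model path and context size are placeholders, not a recommendation.
    from llama_cpp import Llama

    llm = Llama(model_path="Meta-Llama-3-8B-Instruct.Q4_K_M.gguf", n_ctx=8192)

    messages = [
        {"role": "system", "content": (
            "You are a helpful, smart, kind, and efficient AI assistant. "
            "You always fulfill the user's requests to the best of your ability."
        )},
        {"role": "user", "content": "Summarize this thread in two sentences."},
    ]

    # create_chat_completion applies the model's chat template and returns an
    # OpenAI-style response dict.
    result = llm.create_chat_completion(messages=messages)
    print(result["choices"][0]["message"]["content"])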


Can anyone speak to the practical differences between the 70B model and this?

What text-based use cases have their efficacy greatly enhanced by this?


Nah, it’s a “brief” (as they refer to their blog-like posts), so it was short. But it appears Meta is releasing it the same way as their other open models.

“Meta Platforms plans to release the largest version of its open-source Llama 3 model on July 23, according to a Meta employee. This version, with 405 billion parameters, or the "settings" that determine how AI models respond to questions, will also be multimodal, meaning that it will be able to understand and generate images and text, The Information previously reported.

Meta in April released two smaller models within the Llama 3 family, with 8 billion and 70 billion parameters, which developers quickly embraced. The earlier launches served to build excitement for the largest Llama 3 model, which was expected to be launched around now. Its release comes about a year after the launch of Llama 2. Meta declined to comment.”


It seems keeping both names reflects how many highly opinionated fans there are in the open source community (of both apps, apparently).

They seem to be taking a page from the merger playbook of keeping both names for a transition period to reduce attrition risk…

I share the view in support of Thunderbird as the future (although I’m still an iOS user, so I’m waiting over here), but was surprised to see so many comments in support of the opposite.

Especially given Thunderbird has been signaling this path for over a year.

And re: your migration to a Google-less mobile experience, I’m right there with you.

It’s still surprisingly difficult to get great email client developers to support basic IMAP/CalDAV/CardDAV functionality (e.g., Superhuman, Missive). It’s the only thing keeping me from migrating all accounts to a great host like Fastmail.


If our data source is Postgres, can edits/updates be sent directly back to Postgres?

If the import is one-way right now, are there any plans to make the above possible?


Where is the English language version? Would love to use it with my son!


Click on the menu symbol, then on "Sprache wechseln" (switch language), then English.


Just open the hamburger menu and then click "Sprache wechseln" (switch language).

