
How do you guys get it to run?

I tried with ollama, installed from Homebrew, on my M1 Max with 64GB RAM.

I downloaded the phind-codellama model using

  ollama pull phind-codellama
But when I give it a prompt, for example

  ollama run phind-codellama "write a production grade implementation of Sieve of Eratosthenes in Rust"
It prints the following error message

    Error: Post "http://localhost:11434/api/generate": EOF
and exits.

It worked fine when I ran ollama with some other models, though.

Is the version in Homebrew not able to run phind-codellama?
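
In case it helps narrow down the EOF: I assume the same code path can be exercised by calling the endpoint from the error message directly, something like this (I haven't tried it yet):

    curl http://localhost:11434/api/generate -d '{
      "model": "phind-codellama",
      "prompt": "say hi"
    }'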




Yes you'll need the latest version (0.0.16) to run the 34B model. It should run great on that machine!

The download (both as a Mac app and standalone binary) is available here: https://github.com/jmorganca/ollama/releases/tag/v0.0.16. And I will work on getting that brew formula updated as well! Sorry to see you hit an error!
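
In the meantime, once the formula is updated, upgrading the Homebrew install should just be the usual steps (nothing ollama-specific here):

    brew update
    brew upgrade ollama
    ollama --version    # should report 0.0.16 or newer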


Is there any chance you can add the phind-codellama-34b-v2 model? Or if there's already a way to run that with ollama, can you tell me how or point me to some docs? Thanks a ton!
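
If it does show up in the registry, I assume it would be pulled with the usual model:tag syntax, something like this (the tag name below is just my guess):

    ollama pull phind-codellama:34b-v2
    ollama run phind-codellama:34b-v2 "your prompt here"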


Is there a limit on the number of lines/tokens it can take as input? Can it be increased? The limit seems to be somewhere above 30 lines.

I'm on a MacBook Pro M1 Max 64GB.

    % ollama --version
    ollama version 0.0.16
    % ollama run phind-codellama "$(cat prompt.txt)"
    Error: Post "http://localhost:11434/api/generate": EOF
    % wc prompt.txt
         335    1481   11457 prompt.txt
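
A workaround I'm considering (untested): skip the shell quoting entirely and post the file straight to the API, letting jq do the JSON escaping:

    jq -n --rawfile p prompt.txt '{model: "phind-codellama", prompt: $p}' \
      | curl http://localhost:11434/api/generate -d @-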


This feature was surprisingly hard to find, but you can wrap a multi-line prompt in “””. So start off with “””, then type whatever you want, with newlines (Enter) wherever you need them, then close with “”” and it’ll run the whole prompt. (“”” is 3 parenthesis quote, the formatting from HN is making it look funny)
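
For example, a session looks roughly like this (from memory, so the exact prompt characters may differ):

    % ollama run phind-codellama
    >>> """
    write a function that reverses a string
    and explain its time complexity
    """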


>3 parenthesis quote

Do you mean triple double quotes?


That's what I've been using but still get that error.


Same here, would be interested to know if there's a solution!

I created an issue, if you have an example prompt to add that would be helpful! https://github.com/jmorganca/ollama/issues/422


Thanks for creating an issue! And sorry for the error folks… working on it!


Haha no need to apologise! Thanks for an amazing project :)


Datapoint here on an M1 Max MacBook Pro: I use the triple double quotes and I'm not getting that error.


This project is sweet and I fortunately have a Mac M1 laptop with 64GB RAM to play with it.

But I'd also like to be able to run these models on my Linux desktop with two GPUs (a 2080 Ti and a 3080 Ti) and a Threadripper. How difficult would it be to set some of them up there?


Not hard with text-generation-webui! https://github.com/oobabooga/text-generation-webui

I personally use llama.cpp as the driver since I run CPU-only, but another backend may be better suited for GPU usage. Beyond that it's as simple as downloading the model and placing it in the models directory.
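
The rough sequence, from memory (the exact flags and paths may have drifted since I set it up):

    git clone https://github.com/oobabooga/text-generation-webui
    cd text-generation-webui
    pip install -r requirements.txt
    # drop the downloaded model file into the models/ directory, then:
    python server.py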


    Failed to build llama-cpp-python
    ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
    Command '. "/Users/pmarreck/Downloads/oobabooga_macos/installer_files/conda/etc/profile.d/conda.sh" && conda activate "/Users/pmarreck/Downloads/oobabooga_macos/installer_files/env" && python -m pip install -r requirements.txt --upgrade' failed with exit status code '1'. Exiting...

I'm admittedly running the Sonoma beta, so that's probably why.

I will try it on my Linux machine later tonight (currently AFK with laptop)
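
If the Mac build keeps failing, the next thing I'd try is forcing a clean rebuild of the wheel (untested on Sonoma, and the Metal flag name is from memory):

    CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python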


My Threadripper has 64 cores / 128 threads, and I'm wondering whether any of these models can take advantage of CPU concurrency to mitigate at least some of the loss from not using a GPU.
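
I assume the knob to experiment with is the thread count, e.g. llama.cpp's -t flag (the model path below is just a placeholder):

    # -t sets the number of threads llama.cpp uses for inference
    ./main -m <path-to-quantized-model> -t 32 -p "hello"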


From my experience, llama.cpp doesn't exploit parallelism as well as it could. I tested this on an HPC cluster: increasing the thread count certainly increased CPU usage, but didn't meaningfully improve tok/s past 6-8 cores. Same behavior with whisper.cpp. :( I wonder if there's another backend that scales better.


I'm guessing the problem is that you're constrained by memory bandwidth rather than compute, and that's inherent to the algorithm, not an artifact of any one implementation.
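
Back-of-the-envelope with made-up round numbers: every generated token has to stream essentially all of the weights through the memory bus once, so

    tok/s ceiling ≈ memory bandwidth / bytes of weights read per token
                  ≈ 100 GB/s (quad-channel DDR4) / ~20 GB (34B model at 4-bit)
                  ≈ 5 tok/s, however many cores you throw at it

which would line up with throughput flattening out after a handful of threads.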


Do you think this can be made to work with https://github.com/jackmort/chatgpt.nvim?


Not exactly using ollama, but you can use LocalAI [1], set the OPENAI_API_HOST environment variable to localhost, and it should work.

[1] https://github.com/go-skynet/LocalAI
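
For the local part, roughly this (LocalAI's default port is 8080; whether the plugin wants a bare host or a full URL here is a guess on my part):

    export OPENAI_API_HOST=http://localhost:8080
    export OPENAI_API_KEY=dummy    # some clients insist on a key being set; LocalAI ignores the value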



