Hacker News

What’s everyone’s coding LLM setup like these days? I’m still paying for Copilot through an open source Xcode extension and truthfully it’s a lot worse than when I started using it.



I'm happy with Supermaven for completions, but only for the more popular languages.

Otherwise Claude 3.5 is really good and gpt-4o is OK with apps like Plandex and Aider. You need to get a feel for which one is better for which task, though. Plain questions go to the Claude 3.5 API through the Chatbox app.

Research questions often go to perplexity.ai because it points to the source material.


I gave up on autocomplete pretty quickly. The UX just wasn't there yet (though, to be fair, I was using some third-party adapter with Sublime).

It's just me asking questions/pasting code into a ChatGPT browser window.


I just pay the $20/month for Claude Pro and copy/paste code. Many people use Cursor and Double, or alternative frontends they can use with an API key.


I use Cursor and Aider, I hadn't heard of Double. I've tried a bunch of others including Continue.dev, but found them all to be lacking.


Can you please elaborate on how you use Cursor and Aider together?


I don't really use them together exactly; I just alternate back and forth depending on the type of task. If it's the kind of change that's likely to involve writing across lots of files, I'll use Aider. If the change only pulls in context from other files, I'll likely use Cursor.


Supermaven (VS Code extension) was quite handy at recognizing that I was making the same kind of change in multiple places, and it accurately auto-completed the way I was about to write it. I liked it better than Copilot.

I just wish these tools were better at recognizing when their help isn't wanted, because I would often disable it and then forget to turn it back on for a while. Maybe a "mute for an hour" option would fix that.


Neovim with the gp.nvim plugin.

It lets you open chats directly in a Neovim window. It also lets you select some text and run it through certain prompts (like "implement" or "explain this code"). Depending on the prompt, the result can appear directly inside the buffer you're currently working on. The request to the ChatGPT API is also enriched with the file type.

I hated AI before I discovered this approach. Now I'm an AI fanboy.


www.workspaicehq.com



