I actually had a discussion with Phind itself recently, in which I said that in order to help me, it seems like it would need to ingest my codebase so that it understands what I am talking about. Without knowing my various models, etc., I don't see how it could write anything but the most trivial functions.
It responded that, yes, it would need to ingest my codebase, but it couldn't.
It was fairly articulate and seemed to understand what I was saying.
So, how do people get value out of Phind? I just don't see how it can help with any case where your function takes or returns a non-trivial class as a parameter. And if it can't do that, what is the point?
I am using Phind quite a lot. It uses its own model alongside GPT-4 while still being free.
It is also capable of performing searches, which has led me - forgive me, founders - to abuse it quite a lot: whenever I can't find a good answer from other search engines I turn to Phind, even for things totally unrelated to software development, and it usually goes very well.
Sometimes I even ask it to summarize a post, or tell me what HN is talking about today.
I am very happy with it and really hope it gains traction!
I am not affiliated with Phind or any other AI company, but yes, this is definitely the case: you should assume that they are ingesting your code through regular web scrapes now (which gives the model extremely general knowledge about your library) and will soon be reading the library source code specifically (which is what you are asking about here). If you wanted to try this strategy, I would suggest providing the model with a large database of high-quality examples specific to your library (so, perhaps the examples section of your website, plus snippets from open-source projects that use the library). These will probably be the last things to be specifically ingested by general coding models.
Thanks for releasing Phind-CodeLLaMA-34B-v2, it's been helping me get up to speed with node and web apps and so far it's been spot on. :) Super impressive work.
Me too. For the past few weeks, I have been working on my AHK scripting with Phind. It has produced working code consistently and provided excellent command lines for various software.
I also use it for LaTeX. It is very helpful at suggesting the various packages I need, compared with hunting for more information through Google. I got a working tex file within 15 minutes; a similar task took me 3 weeks 5 years ago!
I’ve had some consistency issues with Phind, but as a whole I have no real complaints, just glitches here and there with large prompts not triggering responses and reply options disappearing.
As a whole, I think it works well in tandem with ChatGPT for bouncing ideas around or getting alternate perspectives.
(I also love the annotation feature where it shows the websites that it pulled the information from, very well done)
I've been playing with Phind for a while, and my conclusion is: the Phind model works well on long-established things like C++ libraries, but generally badly on newer things, such as composing LCEL chains.
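For context, LCEL is LangChain's Expression Language, where you build a chain by piping runnables together. A minimal sketch of the kind of composition I mean is below; the prompt text and model name are just placeholder assumptions, not anything specific I tested:

    # Minimal LCEL chain: prompt -> model -> output parser.
    # Requires the langchain-core and langchain-openai packages;
    # the model name below is a placeholder.
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser
    from langchain_openai import ChatOpenAI

    prompt = ChatPromptTemplate.from_template("Explain {topic} in one paragraph.")
    model = ChatOpenAI(model="gpt-4o-mini")
    parser = StrOutputParser()

    # The | operator composes the runnables into a single chain.
    chain = prompt | model | parser
    print(chain.invoke({"topic": "LCEL"}))

The pipe-based API is relatively new surface area, which is presumably why models trained on older corpora struggle with it.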
The first coding question I tested it on, it gave me something completely wrong, and it was pretty easy stuff. I'm sure it gets a lot right, but this just shows unreliability.