
But this is bound to happen at some point I think?

ChatGPT is a massive success, but that means competitors will jump in at all costs, and that includes open source efforts.




Bound to happen, so establish yourself as deeply as possible as quickly as possible. Once folks are hooked up to these APIs, there's a cost and friction to switching. This just feels like a land grab that OpenAI is trying to take advantage of by moving quickly.


Is there though? It's just a matter of swapping out $BASE_API_URL.
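To illustrate: if a provider exposes an OpenAI-compatible endpoint, the provider-specific part of a call really can reduce to the base URL. A minimal sketch, assuming a /v1/chat/completions endpoint and an API_KEY environment variable (both common conventions, not guaranteed for every provider):

```python
import os

def build_request(base_url: str, prompt: str) -> dict:
    """Assemble a chat-completion request; only base_url varies by provider."""
    return {
        "url": f"{base_url.rstrip('/')}/v1/chat/completions",
        "headers": {"Authorization": f"Bearer {os.environ.get('API_KEY', '')}"},
        "json": {"messages": [{"role": "user", "content": prompt}]},
    }

# Switching providers is a one-line change (URLs here are illustrative):
openai_req = build_request("https://api.openai.com", "Hello")
local_req = build_request("http://localhost:8000/", "Hello")
```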


Most of the clients I'm working with aren't interested in the base level of service. They are looking to further train the models for their specific use cases. That's a much higher barrier to switch than replacing an API. You've got to understand how the underlying models are handling and building context. This sort of customer is paying far more than the advertised token rates and are locked in more tightly.


Not really. Fine tuning generally just involves running tailored training data through the model - the actual training algorithm is fairly generalized.

For example, the Dreambooth fine tuning algorithm was originally designed for Google's Imagen, but was quickly applied to Stable Diffusion.


You have to rebuild all your prompts when switching providers.


If the superlative LLM can’t handle prompts from another provider, it just isn’t the superlative LLM.

This area by definition has no moats. English is not proprietary.

Use case is everything.


Switching to another LLM isn't always about quality. Being able to host something yourself at a lower or equal quality might be preferred due to cost or other reasons; in this case, there's no assumption that the "new" model will have comparable outputs to another LLM's specific prompt style.

In a lot of cases you can swap models easily enough, but all the prompt tweaking you did originally will probably need to be done again against the new model's black box.


Hosting something yourself also has educational value - just experimenting is how new applications and technologies get discovered and created.


Do you? They're natural language, right?


You don't have to, but they will have been optimized for one model. It's unlikely they'll work as well on a different model.


I can't wait for TolkienAPI, where prompts will have to be written in Quenya.


I can’t wait to hire Stephen Colbert to write prompts then


No problem, just ask ChatGPT to translate it into Quenya.


I imagine AI would be able to perform the translation. "Given the following prompt, which is optimized for $chatbot1, optimize it for $chatbot2".
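The idea above is just a meta-prompt wrapping the original prompt. A minimal sketch of what that template might look like (the $chatbot1/$chatbot2 placeholders and the wording are hypothetical):

```python
def translation_prompt(src_model: str, dst_model: str, prompt: str) -> str:
    """Build a meta-prompt asking an LLM to port a prompt between models."""
    return (
        f"Given the following prompt, which is optimized for {src_model}, "
        f"rewrite it so it works as well on {dst_model}, preserving the "
        f"original intent and constraints:\n\n{prompt}"
    )

meta = translation_prompt("$chatbot1", "$chatbot2", "Summarize this article.")
```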


Technically true, but given the way these prompts are (or can be) templatized, it should be relatively trivial to do so.


There would be less friction to switch if implementations (which are early enough) accounted for sending requests to multiple service providers, including ones that don't exist yet.
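One common shape for that kind of abstraction is a provider registry: callers target the registry, and a new provider - even one that doesn't exist yet - is just another entry. A minimal sketch; all names here are hypothetical, and the "echo" provider stands in for a real client:

```python
from typing import Callable, Dict

# Registry mapping provider names to completion functions.
PROVIDERS: Dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator that adds a completion function to the registry."""
    def deco(fn: Callable[[str], str]) -> Callable[[str], str]:
        PROVIDERS[name] = fn
        return fn
    return deco

@register("echo")  # stand-in for a real backend, e.g. a hosted or local model
def echo_provider(prompt: str) -> str:
    return f"echo: {prompt}"

def complete(prompt: str, provider: str = "echo") -> str:
    """Dispatch to whichever provider is requested."""
    return PROVIDERS[provider](prompt)
```

Application code calls `complete()` and never hard-codes a vendor, so adding a provider later means registering one function rather than touching every call site.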

OpenAI has a view few others do: how broadly this type of product is actually being used. That may be the real advantage - not just getting ahead and staying ahead, but seeing ahead.


And also, what people are actually asking it. Are people using it to generate cover letters and resume help, are they doing analysis of last quarter's numbers, or are they getting programming help? That'll help them figure out which areas to focus on for later models, or which areas to create specialized models for.


Yup. Moreover this type of model will only do certain types of things well, and other types of models will do other things much better.





