Hacker News new | past | comments | ask | show | jobs | submit | idosh's comments login

oh no doubt... I was amazed to see some people actually using my product


How does it compare to Replicache, WatermelonDB, and the rest?


Brilliant quotes that made me laugh so hard:

> If you’re unlucky, you might read this and think, “holy shit, no wonder I’m burned out”.

> building an operator for everything which is the proposed solution of some people who really just want to watch the world burn.

But actually, the takeaway is this:

> You want simplicity where it benefits the user, which often requires increased complexity for the developer.

Professionals should make things easier for downstream users. In the context of this article, that means platform engineers should abstract away the complexity so application developers can deploy their applications safely.


It's about time!


But I think that's the misconception today. Micromanagement is often confused with management in general. Terminology-wise, I agree with your term.


Can you elaborate on your plans for OpenPipe? Sounds like a very interesting project


Currently OpenPipe allows you to capture input/output from a powerful model and use it to fine-tune a much smaller one, then offers you the option to host it through OpenPipe or download it and host it elsewhere. Models hosted on OpenPipe enjoy a few benefits, like data drift detection and automatic reformatting of output to match the original model you trained against (think extracting "function call" responses from a purely textual Llama 2 response) through the SDK.

Longer-term, we'd love to expand the selection of base models to include specialized LLMs that are particularly good at a certain task, e.g. language translation, and let you train off of those as well. Providing a ton of specialized starting models will decrease the amount of training data you need, and increase the number of tasks at which fine-tuned models can excel.
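The capture step described above can be sketched as logging prompt/completion pairs to a JSONL file in the common chat fine-tuning format. This is a minimal illustrative stand-in, not OpenPipe's actual SDK; the function and file names are assumptions.

```python
import json
from pathlib import Path

def record_example(path: Path, prompt: str, completion: str) -> None:
    """Append one captured input/output pair in chat fine-tune JSONL format."""
    row = {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": completion},
        ]
    }
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(row) + "\n")

# Capture a pair produced by the large model, to later train the small one.
dataset = Path("captured.jsonl")
record_example(dataset, "Translate 'hello' to French.", "bonjour")
```

The resulting file can then be fed to whatever fine-tuning pipeline you use, with the smaller model learning to imitate the larger one's outputs.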


Thanks! I need to dive into the project and learn more. Sounds exciting


Any compliance certifications yet? HIPAA, etc.?


Congrats on the launch! Sounds like an exciting project. Do you also plan to store the raw data (input + output)? It can be relevant for fine-tuning, optimizing costs, etc. Since you already store metadata, I think it makes sense to be a one-stop shop.


Agree – Langfuse stores all prompts/completions, model configuration, and metadata. Currently the GET API can be used to export the data for fine-tuning, and we're building a wrapper to access a filtered sample via the Python SDK.


The goal eventually is to be able to create organs in lab conditions that can be implanted.


How does it handle flaky connections, or even a fully offline scenario? How does it work with service workers?


We're using Redis for vector search. It's pretty rad in terms of performance and other capabilities
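For context, a KNN query against Redis vector search takes the query vector as a packed float32 blob plus a RediSearch query string. Below is a minimal sketch of those two pieces using only the standard library; the commented client call shows roughly how they'd be used with redis-py, and the index name and field name are assumptions.

```python
import struct

def to_blob(vec: list[float]) -> bytes:
    """Pack a float32 vector into the binary blob Redis expects."""
    return struct.pack(f"{len(vec)}f", *vec)

def knn_query(field: str, k: int) -> str:
    """Build a RediSearch KNN query string over a vector field."""
    return f"*=>[KNN {k} @{field} $vec AS score]"

# With a real redis-py client it would look roughly like (not executed here):
# from redis.commands.search.query import Query
# results = r.ft("idx").search(
#     Query(knn_query("embedding", 5)).sort_by("score").dialect(2),
#     query_params={"vec": to_blob(query_vector)},
# )

blob = to_blob([0.1, 0.2, 0.3])  # 3 floats -> 12 bytes
```

Performance-wise, keeping vectors as raw float32 blobs avoids any JSON encode/decode on the hot path.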

