ankrgyl's comments

Cool to see the balance between UX and security. I really like the UI!


Thanks! I wanted anyone to be able to create notes and share them with others, so I tried to come up with the most secure way to accomplish that :)


Thanks for mentioning Braintrust!

We are very committed to the proxy :)

Although, to your point, we have seen less market pull for routing and more for (a) supporting the latest LLMs, (b) basic translation (e.g. the tool call API between Anthropic and OpenAI), and (c) solid infra features like caching, load balancing API keys, and secret management. So that's our focus.
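To illustrate the load-balancing point, here is a minimal sketch of rotating requests across a pool of API keys. The names (`KeyPool`, `next`) are hypothetical, not Braintrust's actual API; a real proxy would also track rate limits and errors per key.

```typescript
// Round-robin pool of provider API keys (illustrative sketch, not the
// proxy's real implementation).
class KeyPool {
  private i = 0;
  constructor(private keys: string[]) {
    if (keys.length === 0) throw new Error("need at least one key");
  }
  // Each call hands out the next key in the pool, wrapping around.
  next(): string {
    const key = this.keys[this.i];
    this.i = (this.i + 1) % this.keys.length;
    return key;
  }
}

const pool = new KeyPool(["sk-key-a", "sk-key-b", "sk-key-c"]);
// The fourth pick wraps back to the first key.
console.log(pool.next(), pool.next(), pool.next(), pool.next());
```

A production version would typically layer retry logic on top, skipping a key that just returned a 429.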


We absolutely can. The proxy already supports running in multiple environments, including your own AWS VPC deployed on Lambda, so I'm sure we could add support for Bedrock (Anthropic or other models).

Feel free to ping me at ankur@braintrustdata.com and we can chat more!


We think a lot about this. It's very important to us to provide building blocks (like the proxy) that foster a more open ecosystem and don't lock you too much into "our" way of doing things either.


Model routers are a "semantic" abstraction -- they analyze the contents of a prompt and automatically determine which provider to use. This proxy is (currently) much simpler: you pick the provider you want, and we add a bunch of nice features like caching and low latency on top.
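The contrast can be sketched in a few lines. This is an illustrative toy, not Braintrust's actual code: `viaProxy` and `viaRouter` are hypothetical names, and the routing heuristic is deliberately simplistic.

```typescript
type Provider = "openai" | "anthropic";

// Proxy-style: the caller explicitly picks the provider; the proxy just
// forwards the request and layers features (caching, etc.) on top.
function viaProxy(provider: Provider, prompt: string): string {
  return `${provider} <- ${prompt}`;
}

// Router-style: a heuristic inspects the prompt and chooses a provider
// automatically (real routers use far richer signals than length).
function viaRouter(prompt: string): Provider {
  return prompt.length > 1000 ? "anthropic" : "openai";
}

console.log(viaProxy("openai", "summarize this note"));
console.log(viaRouter("summarize this note"));
```

The key design difference is who holds the decision: with a proxy the application stays in control of provider choice, while a router moves that decision into the infrastructure layer.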

We will likely work with or build some routing capabilities in the future!


LiteLLM is a great project. The key differences (from what I can tell) are that:

a) This is hosted
b) Supports caching and load balancing
c) Can manage multiple providers behind a single API key
d) Implemented in TypeScript (vs. Python)

On the other hand, LiteLLM is a more mature project and supports significantly more model providers than we currently do!


It's not open source at the moment, but we're definitely open to it.


Totally pointless if it's not open source.


Thanks for all the support Bryan and team!


Who do you think is the sweet spot user for Rivet? Anyone building any LLM app, or a certain kind (e.g. agents)? Is there a use case for which you'd advise against using Rivet?


I think it's people building tool-using agent applications.

We've been collaborating with several amazing teams over the past few months, who have been pushing Rivet in various ways. We used it for a chat interface at Ironclad, but we've seen companies like Bento and Willow integrate it with different UX paradigms.

The commonality seems to be that we are all integrating LLMs into an application and want the LLM to somehow interact with that application (e.g. setting up search filters, or building a guide based on documentation).
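That "LLM interacts with the application" pattern usually boils down to the app exposing tools and dispatching the model's tool calls to real application code. A minimal sketch, with hypothetical names (`tools`, `setSearchFilters`) that are not Rivet's actual API:

```typescript
// A tool call as it might come back from a model's response.
type ToolCall = { name: string; args: Record<string, unknown> };

// The application registers the tools the model is allowed to invoke.
const tools: Record<string, (args: Record<string, unknown>) => string> = {
  // e.g. the model narrows a search UI on the user's behalf
  setSearchFilters: (args) => `filters set: ${JSON.stringify(args)}`,
};

// Look up the named tool and run it against the application state.
function dispatch(call: ToolCall): string {
  const tool = tools[call.name];
  if (!tool) throw new Error(`unknown tool: ${call.name}`);
  return tool(call.args);
}

console.log(dispatch({ name: "setSearchFilters", args: { status: "open" } }));
```

The UX on top of this loop (chat window, sidebar, inline suggestions) can vary widely, which matches the different paradigms mentioned above.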


I'm not super familiar with Dolt's SQL implementation, but I'm surprised that a simple `count(*)` query timed out:

https://www.dolthub.com/repositories/dolthub/transparency-in...


Queries on DoltHub need to go to S3 to fetch all the chunks, so this generally only works for databases under 1 GB. You will get much better performance if you clone the database locally.

