
Will vLLM be supported as well?



Hey it's Gal from Traceloop,

That's a good question, tbh. I wonder whether we should implement instrumentations for LLM "hosting solutions" or for specific LLMs (e.g. LLaMA/Falcon) and ignore the hosting solution (not sure that's even possible, though, since the hosting solution sort of dictates the inference API).

wdyt?
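The trade-off above can be sketched in code. Instrumenting at the hosting-solution layer means wrapping that solution's inference API, so the span attributes follow whatever shape that API exposes. Below is a minimal illustrative sketch (not Traceloop's actual code): `FakeVLLM` is a stand-in for a hosting solution's inference class, and the span list is a stand-in for a real OpenTelemetry tracer.

```python
# Illustrative sketch: monkey-patch a hosting solution's generate() method
# to record a span per inference call. FakeVLLM, instrument_generate, and
# SPANS are all hypothetical names, not part of any real library.
import functools
import time

class FakeVLLM:
    """Stand-in for a hosting solution's inference API."""
    def generate(self, prompts):
        # Pretend inference: uppercase each prompt.
        return [p.upper() for p in prompts]

SPANS = []  # collected "spans"; a real integration would emit OpenTelemetry spans

def instrument_generate(cls):
    """Wrap cls.generate so every call records timing and prompt count."""
    original = cls.generate

    @functools.wraps(original)
    def wrapper(self, prompts):
        start = time.perf_counter()
        outputs = original(self, prompts)
        SPANS.append({
            "name": "llm.generate",
            "prompt_count": len(prompts),
            "duration_s": time.perf_counter() - start,
        })
        return outputs

    cls.generate = wrapper
    return cls

instrument_generate(FakeVLLM)
outputs = FakeVLLM().generate(["hello", "world"])
print(outputs)     # ['HELLO', 'WORLD']
print(len(SPANS))  # 1
```

The downside, as noted above, is that each hosting solution dictates its own inference API, so every wrapper like this is solution-specific rather than model-specific.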



