Does your team do usability tests on the APIs before launching them?
If you got 3-5 developers to try to use one of the SDKs to build something, I bet you'd see common trends.
E.g. we recently had to update an assistant with new data every day and get one response back, and this is what the engineer came up with. It could probably be improved, but it's really ugly.
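For anyone curious, the shape of the flow was roughly this (a sketch, not the actual snippet; it assumes the openai-node v4 Assistants beta, and `assistantId` and the file path are placeholders):

```ts
import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI();
const assistantId = "asst_..."; // placeholder

// 1. Upload the day's data and attach it to the assistant.
const file = await openai.files.create({
  file: fs.createReadStream("todays-data.json"), // placeholder path
  purpose: "assistants",
});
await openai.beta.assistants.update(assistantId, { file_ids: [file.id] });

// 2. Create a thread, add the question, and kick off a run.
const thread = await openai.beta.threads.create();
await openai.beta.threads.messages.create(thread.id, {
  role: "user",
  content: "Summarize today's data.",
});
let run = await openai.beta.threads.runs.create(thread.id, {
  assistant_id: assistantId,
});

// 3. Poll by hand until the run finishes.
while (run.status === "queued" || run.status === "in_progress") {
  await new Promise((resolve) => setTimeout(resolve, 1000));
  run = await openai.beta.threads.runs.retrieve(thread.id, run.id);
}

// 4. Finally fetch the single response we actually wanted.
const messages = await openai.beta.threads.messages.list(thread.id);
console.log(messages.data[0].content);
```

That's four resources and a hand-rolled polling loop just to ask one question against fresh data.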
Just to add to this: it's not helped by the docs. Either they don't exist, or the SEO isn't working right.
E.g. my search term was "openai assistant service function call node". The first two results are community forums, not what I'm looking for. The third is seemingly the official one but doesn't actually answer the question (how to use the Assistants service with Node and function calling) with an example. The fourth is in Python.
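For reference, the example I was hoping the docs would show looks roughly like this (a sketch assuming the openai-node v4 Assistants beta; the assistant, its `get_weather` tool, and the stand-in implementation are all made up):

```ts
import OpenAI from "openai";

const openai = new OpenAI();
const assistantId = "asst_..."; // placeholder: assistant configured with a get_weather function tool

// Stand-in for your real local implementation of the tool.
function getWeather(args: { city: string }) {
  return { city: args.city, tempC: 18 };
}

// Create a thread and start the run in one call.
let run = await openai.beta.threads.createAndRun({
  assistant_id: assistantId,
  thread: {
    messages: [{ role: "user", content: "What's the weather in Paris?" }],
  },
});

// Poll; when the assistant requests a tool call, run it locally and submit the output.
while (
  run.status === "queued" ||
  run.status === "in_progress" ||
  run.status === "requires_action"
) {
  if (run.status === "requires_action" && run.required_action) {
    const tool_outputs = run.required_action.submit_tool_outputs.tool_calls.map((call) => ({
      tool_call_id: call.id,
      output: JSON.stringify(getWeather(JSON.parse(call.function.arguments))),
    }));
    run = await openai.beta.threads.runs.submitToolOutputs(run.thread_id, run.id, {
      tool_outputs,
    });
  } else {
    await new Promise((resolve) => setTimeout(resolve, 1000));
    run = await openai.beta.threads.runs.retrieve(run.thread_id, run.id);
  }
}

const messages = await openai.beta.threads.messages.list(run.thread_id);
console.log(messages.data[0].content);
```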
I'm sorry about your experience, and thanks very much for sharing the code snippet - that's helpful!
We did indeed code up some sample apps and highlighted this exact concern. We have some helpers planned to make it smoother, which we hope to launch before Assistants GA. For the streaming beta, we focused just on the streaming part of these helpers.
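For a taste of the direction, here's roughly what the run-streaming helper looks like in the current openai-node beta (a sketch; event names and details may still change before GA, and the IDs are placeholders):

```ts
import OpenAI from "openai";

const openai = new OpenAI();

// "thread_..." and "asst_..." stand in for an existing thread and assistant.
const stream = openai.beta.threads.runs.stream("thread_...", {
  assistant_id: "asst_...",
});

stream
  .on("textCreated", () => process.stdout.write("assistant > "))
  .on("textDelta", (delta) => process.stdout.write(delta.value ?? ""));

// Resolves once the run completes, replacing the manual polling loop.
const run = await stream.finalRun();
console.log("\nrun status:", run.status);
```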
Is there a technical reason why logprobs aren't available when using function calling? It's not a problem; I've already found a workaround. I was just curious, haha.
In general I feel like function calling/tool use is a bit cumbersome and restrictive, so I prefer to write the TypeScript in the functions namespace myself and just use json_mode.
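Roughly what that workaround looks like (a sketch; the prompt convention, `get_weather` signature, and model name are my own choices, not anything official):

```ts
import OpenAI from "openai";

const openai = new OpenAI();

// Hand-written "tool schema": plain TypeScript in the prompt instead of the tools parameter.
const systemPrompt = `You can call these functions:
namespace functions {
  // Look up the current weather for a city.
  type get_weather = (_: { city: string }) => any;
}
Reply with JSON only: { "function": string, "arguments": object }`;

const completion = await openai.chat.completions.create({
  model: "gpt-4-turbo-preview", // placeholder model name
  response_format: { type: "json_object" }, // json_mode
  logprobs: true, // available here, unlike with the tools parameter
  messages: [
    { role: "system", content: systemPrompt },
    { role: "user", content: "What's the weather in Paris?" },
  ],
});

const call = JSON.parse(completion.choices[0].message.content ?? "{}");
console.log(call.function, call.arguments);
```

Since this is an ordinary chat completion, logprobs come back too, which is the workaround I mentioned above.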
You can reply here or email me at atty@openai.com.
(Please don't hold back; we would love to hear the pain points so we can fix them.)