I agree with this. We’ve already gotten pretty good at JSON coercion, but this seems to go one step further by bundling decision making into the model, instead of junking up your prompt or requiring some kind of eval on a single JSON response.
It should also be much easier to cache these functions. If you send the same set of functions on every API hit, OpenAI should be able to cache that more intelligently than if everything was one big text prompt.
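To make the "same set of functions on every hit" point concrete, here's a minimal sketch of the pattern, assuming the `functions` parameter shape from OpenAI's function-calling announcement; `get_weather` and `build_request` are just illustrative names, not real API pieces:

```python
# One fixed list of function schemas, defined once and reused.
FUNCTIONS = [
    {
        "name": "get_weather",  # hypothetical example function
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

def build_request(user_message: str) -> dict:
    """Build a chat-completion request payload. Because the identical
    FUNCTIONS list rides along with every call, the provider can treat
    it as a stable prefix and cache it, unlike ad hoc prompt text."""
    return {
        "model": "gpt-3.5-turbo-0613",
        "messages": [{"role": "user", "content": user_message}],
        "functions": FUNCTIONS,
    }
```

Since `FUNCTIONS` never changes between requests, the only varying part of the payload is the messages, which is exactly what makes smarter caching plausible.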