We’re using the new OpenAI Assistants API with the code interpreter feature, which lets you ask questions of the model and have OpenAI turn them into Python code that runs on their infrastructure, with the output piped back into the chat.
It’s really impressive, and it removes the need to ask the model for code and then run it locally yourself. This is what powers many of the data-analysis product features appearing recently (we’re building one ourselves for our incident data, and it works pretty great!)
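As a rough sketch, the flow looks something like this using the beta Assistants endpoints in the `openai` Python SDK (v1.x). The model name, instructions, and prompt here are illustrative placeholders, and the network calls only run if an API key is present:

```python
import os

# The tool declaration that turns on server-side Python execution
# for an assistant.
CODE_INTERPRETER_TOOLS = [{"type": "code_interpreter"}]

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()

    assistant = client.beta.assistants.create(
        model="gpt-4-turbo",  # illustrative model choice
        instructions="Answer data questions by writing and running Python.",
        tools=CODE_INTERPRETER_TOOLS,
    )

    # Conversations happen on a thread: add a user message, then start a run.
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id,
        role="user",
        content="What is the mean of [1, 4, 9, 16, 25]?",
    )

    # create_and_poll blocks until the run (including any code execution
    # on OpenAI's side) has finished.
    run = client.beta.threads.runs.create_and_poll(
        thread_id=thread.id, assistant_id=assistant.id
    )

    # The assistant's reply, with the executed code's output folded back in.
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)
```

The key bit is that you never see or run the generated Python yourself: declaring the `code_interpreter` tool is what lets the model write and execute code in OpenAI's sandbox and feed the results back into the conversation.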