Hacker News

I'm interested in the cost of gpt-3.5-turbo-instruct. I've got a basic website using text-davinci-003 that I'd like to launch but can't because text-davinci-003 is too expensive. I've tried using plain gpt-3.5-turbo, but it won't work because I'm expecting formatted JSON to be returned and I can never get consistent output.



You need to use the new OpenAI Functions API. It is absolutely bonkers at returning formatted results. I can get it to return a perfectly formatted query-graph a few levels deep.


There is also Code Interpreter now in plugin beta, so that should improve its ability to output proper formats without hallucinations.


You can try to force JSON output using function calling (you have to use either the gpt-3.5-turbo-0613 or gpt-4-0613 model for now).

Think of the properties you want in the JSON object, then send those to ChatGPT as required parameters for a function (even if that function doesn't exist).

    # Definition of our local function(s).
    # This is effectively telling ChatGPT what we're going to use its JSON output for.
    # Send this alongside the "model" and "messages" properties in the API request.

    functions = [
        {
            "name": "write_post",
            "description": "Shows the title and summary of some text.",
            "parameters": {
                "type": "object",
                "properties": {
                    "title": {
                        "type": "string",
                        "description": "Title of the text output."
                    },
                    "summary": {
                        "type": "string",
                        "description": "Summary of the text output."
                    }
                },
                "required": ["title", "summary"]
            }
        }
    ]
I've found it's not perfect but still pretty reliable – good enough for me when combined with error handling.
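For the error handling, here's a minimal sketch of how I guard the output (the response shape follows the 2023-era ChatCompletion format; `parse_function_args` is just an illustrative helper name, and `write_post` matches the function defined above):

```python
import json

def parse_function_args(response, required=("title", "summary")):
    """Pull the JSON arguments out of a function-call response.
    Returns None if the model skipped the call or emitted bad JSON."""
    message = response["choices"][0]["message"]
    call = message.get("function_call")
    if call is None:
        return None  # model answered in plain text instead
    try:
        args = json.loads(call["arguments"])
    except json.JSONDecodeError:
        return None  # model produced malformed JSON
    if not all(key in args for key in required):
        return None  # model dropped a field we need
    return args

# Simulated API response for demonstration.
fake_response = {
    "choices": [{
        "message": {
            "role": "assistant",
            "function_call": {
                "name": "write_post",
                "arguments": '{"title": "Hello", "summary": "A greeting."}',
            },
        }
    }]
}
print(parse_function_args(fake_response))
```

If this returns None, I just retry the request rather than trying to repair the output.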

If you're interested, I wrote a blog post with more detail: https://puppycoding.com/2023/07/07/json-object-from-chatgpt-...


With the latest 3.5-turbo, you can try forcing it to call your function with a well-defined schema for arguments. If the structure is not overly complex, this should work.
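For reference, a sketch of the request body that forces the call (field names follow the June-2023 Chat Completions API; the `summarize` function and its schema are made up for illustration):

```python
# Hypothetical request body; "summarize" and its schema are illustrative.
payload = {
    "model": "gpt-3.5-turbo-0613",
    "messages": [
        {"role": "user", "content": "Summarize: the quick brown fox jumps over the lazy dog."}
    ],
    "functions": [{
        "name": "summarize",
        "parameters": {
            "type": "object",
            "properties": {"summary": {"type": "string"}},
            "required": ["summary"],
        },
    }],
    # Forces the model to call this function instead of replying in free text.
    "function_call": {"name": "summarize"},
}
```

Without the `"function_call"` field the model may choose to reply in plain text instead of calling the function.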


It's great at returning well-formatted JSON, but it can hallucinate arguments or values to arguments.


I've had it come up with new function names, or prepend a prefix to the names of functions. I had to add some cleverness on my end to run whichever function was close enough.
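That "close enough" matching can be done with stdlib fuzzy matching; a sketch (the function names and `resolve_function_name` helper are illustrative):

```python
import difflib

KNOWN_FUNCTIONS = ["write_post", "search_docs"]

def resolve_function_name(returned_name, cutoff=0.6):
    """Map a possibly-hallucinated function name back to a real one.
    Handles invented prefixes like 'functions.write_post' and near-misses."""
    # Strip any namespace prefix the model invented.
    candidate = returned_name.rsplit(".", 1)[-1]
    matches = difflib.get_close_matches(candidate, KNOWN_FUNCTIONS, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(resolve_function_name("functions.write_post"))  # write_post
```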



I'm assuming they will price it the same as normal gpt-3.5-turbo. I won't use it if it's more than 2x the price of turbo, because I can usually get turbo to do what I want, it just takes more tokens sometimes.

Have you tried getting your formatted JSON out via the new Functions API? It does cure a lot of the deficiencies in 3.5-turbo.


From what I can find, pricing of GPT-4 is roughly 25x that of 3.5 turbo.

https://openai.com/pricing

https://platform.openai.com/docs/deprecations/


In this thread we’re talking about gpt-3.5-turbo-instruct, not GPT-4


Sorry about that. Got my thread context confused.



