The major cost comes from continuing a conversation in a chat. You pay for the entire existing conversation on every turn, because the full history becomes the prompt for the next question. This tool doesn't reflect that.
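To illustrate the compounding, here's a minimal sketch in Python. The per-1K-token prices and turn sizes are placeholder assumptions, not the tool's actual rates:

```python
# Hypothetical prices, $ per 1K tokens -- placeholders only.
PRICE_PER_1K_PROMPT = 0.002
PRICE_PER_1K_COMPLETION = 0.002

def conversation_cost(turns):
    """turns: list of (prompt_tokens, completion_tokens) per exchange.

    Each turn's billed prompt includes the entire prior history,
    so early tokens are paid for again on every later request.
    """
    history = 0
    total = 0.0
    for prompt_tokens, completion_tokens in turns:
        billed_prompt = history + prompt_tokens
        total += billed_prompt / 1000 * PRICE_PER_1K_PROMPT
        total += completion_tokens / 1000 * PRICE_PER_1K_COMPLETION
        # Both the new message and the reply join the history.
        history = billed_prompt + completion_tokens
    return total

# Five turns of ~50 prompt / ~150 completion tokens each:
print(conversation_cost([(50, 150)] * 5))
```

Even at constant per-turn message sizes, the billed prompt grows linearly with turn count, so total cost grows roughly quadratically over a long chat.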
Not all use of GPT models is conversational. In fact, I'd put money on most uses where you're paying per token (read: not ChatGPT) not being conversational.
Apologies to anyone stuck on a "Loading" screen. The rate limit was hit much sooner than I expected, and it brought everything down while I slept. It's back up and working now.
It works now. It's a nice little tool. It would be great to be able to estimate the cost of a response as well, and optionally combine the input and output costs.
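Something like this sketch, perhaps. The function name and default prices are hypothetical, just to show the input/output split and the combined total:

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  prompt_price: float = 0.002,
                  completion_price: float = 0.002) -> dict:
    """Estimate input and output costs separately, then combine.

    prompt_price / completion_price are placeholder $ per 1K tokens.
    """
    prompt_cost = prompt_tokens / 1000 * prompt_price
    completion_cost = completion_tokens / 1000 * completion_price
    return {
        "input": prompt_cost,
        "output": completion_cost,
        "combined": prompt_cost + completion_cost,
    }

# e.g. a 400-token prompt and an 850-token response:
print(estimate_cost(400, 850))
```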