> It's literally a pattern matching tool and nothing else.
It does more than that. It understands how to do basic math. You can ask it what ((935+91218)/4)*3 is and it will answer it correctly. Swap those numbers for any other random numbers and it will answer it correctly.
It has never seen that during training, but it understands the mathematical concepts.
If you ask ChatGPT how it does this, it says "I break down the problem into its component parts, apply relevant mathematical rules and formulas, and then generate a solution".
It's that "apply mathetmatical rules" part that is more than just, essentially, filling in the next likely token.
> If you ask ChatGPT how it does this, it says "I break down the problem into its component parts, apply relevant mathematical rules and formulas, and then generate a solution".
You are (naively, I would suggest) accepting the LLM's answer about how it 'does' the calculation as a description of what it actually does. It doesn't do the calculation; it has simply generated a response typical of how people who can do calculations explain how they do them.
You have mistaken a ventriloquist's doll's speech for the 'self-reasoning' of the doll itself, an error that is being made repeatedly throughout this thread.
> It does more than that. It understands how to do basic math. You can ask it what ((935+91218)/4)*3 is and it will answer it correctly. Swap those numbers for any other random numbers and it will answer it correctly.
At least for GPT-3, during my own experimentation, it occasionally makes arithmetic errors, especially with calculations involving numbers in scientific notation (which it is happy to use as intermediate results if you provide a prompt with a complex, multi-step word problem).
How is this different from humans? What magic are you looking for, humility or an approximation of how well it knows something? Humans bullshit all the time when their pattern match breaks.
The point is, ChatGPT isn't doing math the way a human would. Humans following the process of standard arithmetic will get the problem right every time. ChatGPT can get basic problems wrong when it doesn't have something similar in its training set, which shows it doesn't really know the rules of math; it's just "guessing" the result via the statistics encoded in the model.
I'm not sure I care about how it does the work, I think the interesting bit is that the model doesn't know when it is bullshitting, or the degree to which it is bullshitting.
Cool, we'll just automate the wishful-thinking part of humans and let it drive us off the cliff faster. We need a higher bar for programs than "half the errors of a human, at 10x the speed."