Rarely will it do what it is asked without me coming up with a backstory.
q: What is the average price of x?
a: This is a language model that only has access to information before 2021
q: what was the price range in 2021
a: we don't have access to realtime pricing information
q: who does
a: we don't have access to that information.
q: How to do x
a: step 1: research x; step 2: follow the research; step 3: do something obvious
q: write me an app that does..
a: code from a basic tutorial
I don't understand what people with a higher than average IQ are getting from LLMs. The answers are basic and common sense. Its stories are like a student trying to fill the page with words to hit a word count. It writes code about as well as a tutorial does. Everything is surface-level information.
> I don't understand what people with a higher than average IQ are getting from LLMs
People with higher than average IQ are smart enough to ask detailed and specific questions like “Fix the race condition in this [giant block of code]” rather than asking dumb, overly general things like “build me an app!!”.
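For concreteness, here's a minimal sketch of the kind of racy snippet you might paste in. Everything below is hypothetical Python made up for illustration, not anyone's real code:

    # Two threads do an unsynchronized read-modify-write on a shared
    # counter, so updates can be lost.
    import threading

    counter = 0

    def increment(n):
        global counter
        for _ in range(n):
            tmp = counter      # read
            counter = tmp + 1  # write: another thread may have
                               # updated counter in between

    threads = [threading.Thread(target=increment, args=(100_000,))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Often prints less than 200000, though thread scheduling and the
    # GIL can mask the race on any given run.
    print(counter)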
Also, ChatGPT models with internet access are completely happy to help you compare prices. Go try Bing Chat, or ChatGPT plugins.
So it's limited to people smart enough to know there is a race condition, but not smart enough to fix it, and trusting enough to accept the answer (how could this group audit the fix?).

If you know enough to ask the right series of questions, you already know the answer.
I’m smart enough to fix a race condition, but why would I spend an hour debugging when ChatGPT can find the issue in 10 seconds? Sure, it might not be perfect, but having some initial code to fix the issue is a huge time saver.
Making ChatGPT write a whole app is problematic because small issues are hidden by the large quantity of generated code. Having ChatGPT fix an issue in your own code is comparatively easy to spot check for correctness.
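For example, the fix for a race like the unsynchronized counter sketched above is typically a few lines you can audit at a glance. Again a hypothetical sketch, same made-up names:

    # Serialize the read-modify-write with a lock.
    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n):
        global counter
        for _ in range(n):
            with lock:       # only one thread mutates counter at a time
                counter += 1

A three-line change like that is easy to verify even if you didn't write it; ten thousand lines of generated app are not.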
I guess you're not really seeing the actual benefits of LLMs.

Let's say I don't "learn it". I would have to make the _same_ mistake 120 times before it would actually be worth taking the time to "learn" it myself.

I'd figure someone somewhat smart will look at this error and, after the 2nd or 3rd time asking GPT, will see a pattern.

It's amusing how reductionist people become about LLMs.
If you have to go through the effort of fixing it once, you will likely write code that won't produce that error again, and you will warn others because of the pain you felt.
Knowing that there is an error can be as trivial as reading a failed test or a bug report: something even someone grossly underqualified for the task could do.

Fixing the problem could be a 5-minute or a 5-hour tour. For the subset of tasks that fall toward the hard end of that range, validating and/or adapting a proposed solution is liable to be faster than starting from scratch. I don't see how this could possibly be in dispute, with capability rapidly increasing.
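As a sketch of how cheap the "knowing" half can be, here's a hypothetical pytest-style test reusing the made-up counter example from upthread. The red X tells you _that_ something is wrong long before you know _why_:

    # A flaky failing test is enough to flag the race; fixing it is
    # the separate, possibly 5-hour, part.
    import threading

    def run_counter(workers=4, per_worker=100_000):
        state = {"n": 0}
        def increment():
            for _ in range(per_worker):
                tmp = state["n"]      # unsynchronized read-modify-write
                state["n"] = tmp + 1
        threads = [threading.Thread(target=increment) for _ in range(workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return state["n"]

    def test_counter_is_exact():
        # May fail intermittently depending on scheduling; that
        # intermittent failure is the trivial-to-read signal.
        assert run_counter() == 4 * 100_000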
It's usable as a limited search engine that currently has less junk than the main commercial ones, and less strict prompting requirements before you get buried under a pile of junk. I suspect both advantages are temporary, but enjoy them while they last.