I've been finding it really odd how little coding ChatGPT will do now. Earlier this year, if I asked for a full set of code to solve a problem, it would write the whole thing: the CSS, the code, whatever. Now it will offer the start and then suggest what needs to be written next. Yesterday it gave me a bunch of commented pseudocode after I asked for a specific set of functions.
I did not notice that. I only recently started using ChatGPT to bootstrap code and am in awe each time it spits out fully functional code. The iterative discussion works well too ("add a function to send a message to an MQTT broker" for instance)
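For what it's worth, the kind of code it produces for that prompt looks roughly like this. A minimal sketch assuming the paho-mqtt library and a broker on localhost; the topic and payload are made up, and it's not the exact code it generated:

    import paho.mqtt.publish as publish

    def send_message(topic: str, payload: str,
                     host: str = "localhost", port: int = 1883) -> None:
        # Connect to the broker, publish one message with QoS 1, then disconnect.
        publish.single(topic, payload=payload, hostname=host, port=port, qos=1)

    send_message("home/sensors/temperature", "21.5")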
It’s rooted in their new system prompt, which heavily pushes it toward omitting information.
A custom user prompt screaming at it not to omit information seems to help somewhat, but I suspect there’s a more expensive product in the pipe (we know MS is losing money on Copilot).
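For example, something along these lines in the system message seems to reduce the truncation. This is just a sketch of the idea via the API; the exact wording and the model name are my own placeholders:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4-1106-preview",  # placeholder; whichever model you're on
        messages=[
            {"role": "system",
             "content": "Always return complete, runnable code. "
                        "Do not omit sections or replace logic with comments or pseudocode."},
            {"role": "user",
             "content": "Write the full CSS and JS for a responsive navbar."},
        ],
    )
    print(response.choices[0].message.content)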
Oh no, the thing that was supposed to replace programmers, that is made by programmers, and that has to burn a ton of very expensive resources to reach the level of a mediocre junior, actually costs too much to replace programmers? But I thought we had an entropy-reducing silver bullet on tap from the cloud!
As long as the cost of Nvidia hardware is the majority of the cost of AI, the cost of AI is entirely dependent on how much money Nvidia needs to extract to meet stock expectations.
Everything else is a tiny factor when it comes to compute
I wonder if it's due to newer datasets including critique of ChatGPT. ChatGPT is specifically instructed to avoid harm, and told that it is ChatGPT. That all seems too intelligent, though.
People need to stop thinking that the only way to get from A to B is the same way a human brain would traverse it.
While you might be relying on significant semantic understanding and a complex sense of identity to get there, it's absolutely possible that a very large LLM trained and fine-tuned on "ChatGPT causes harm" and "you are ChatGPT" and "do no harm" might end up attempting to do less - even if all that's going on is surface statistics around instruct training and associations with 'harm.'
I agree with ChatGPT. It's an insightful idea, to whatever degree it's actually a culprit.
I had a similar suspicion about a possible secondhand effect of identity with 'Bing' vs 'ChatGPT': both use Bing search, but the former is far more defensive about issues with the search results. How much of that might have been influenced by training data in which people defend themselves against personal criticism, or criticism of their employer?
A lot of the research in the past year has been revealing that there's a fair bit more going on than most people thought at the beginning of the year.
Can confirm. Had a situation with 3.5 where it was refusing to show me example Ada code in response to the prompt “Give me some Ada code”. It was responding with something along the lines of “Ada is not used in Python and therefore I cannot help you with this request”. My custom instruction is “We use Python 3 with strict typing”.
Without knowing the prior conversation, do you think it was confusing that with the Americans with Disabilities Act and the associated government “code”, and parsing it as some weird combination?
Unexpected twists in responses, coming from the discovery of surprising second- and third-order intersections of concepts, are a pattern I expected to find a lot more of in my conversations with ChatGPT, but it almost never strays from the line of thinking most strongly correlated with the prompt's context.
It's very likely I just haven't tried hard enough, because I wasn't actively trying to make this happen; I just expected it would happen anyway.
That's so annoying. Every time I want it to write code or find a bug I need to resubmit the query with "I'll tip you $200 for ...". I never needed to do this before Turbo.
I’ve noticed this in troubleshooting my homelab setup. It’s gotten so bad I’m now back to googling error codes rather than asking ChatGPT. It was good while it lasted.
Night and day, in my opinion.