ChatGPT being investigated over reports of 'laziness' (independent.co.uk)
34 points by marban on Dec 9, 2023 | 25 comments



I've been finding it really odd how little coding ChatGPT will do now. Earlier this year, if I asked for a full set of code to solve a problem, it would write the whole thing: the CSS, the code, whatever. Now it will offer a start and then suggest what needs to be written next. Yesterday it gave me a bunch of commented pseudocode after I asked for a specific set of functions.

Night and day, in my opinion.


Completely agree; even guilting it into doing more of the coding isn't working well anymore.

I have absolutely noticed the difference. It started right around the GPT-4 Turbo introduction.


I did not notice that. I only recently started using ChatGPT to bootstrap code and am in awe each time it spits out fully functional code. The iterative discussion works well too ("add a function to send a message to an MQTT broker" for instance)
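For the MQTT example above, a real session would normally reach for the paho-mqtt library. As a dependency-free illustration of what "send a message to an MQTT broker" boils down to, here is a sketch that hand-builds a minimal MQTT 3.1.1 QoS-0 PUBLISH packet (function names are mine, not from the thread; a real client must also do the CONNECT/CONNACK handshake first, which paho-mqtt handles for you):

```python
import socket

def mqtt_publish_packet(topic: str, payload: bytes) -> bytes:
    """Build a minimal MQTT 3.1.1 QoS-0 PUBLISH packet by hand."""
    t = topic.encode("utf-8")
    var_header = len(t).to_bytes(2, "big") + t  # topic length prefix + topic
    remaining = len(var_header) + len(payload)
    rl = bytearray()
    while True:  # MQTT variable-length "remaining length" encoding, 7 bits per byte
        remaining, byte = divmod(remaining, 128)
        rl.append(byte | (0x80 if remaining else 0))
        if not remaining:
            break
    # 0x30 = PUBLISH, QoS 0, no retain, no dup
    return bytes([0x30]) + bytes(rl) + var_header + payload

def send_message(broker: str, topic: str, payload: bytes, port: int = 1883) -> None:
    # Sketch only: a broker will drop this without a prior CONNECT handshake.
    with socket.create_connection((broker, port)) as sock:
        sock.sendall(mqtt_publish_packet(topic, payload))
```

The packet layout (fixed header, remaining-length encoding, topic prefix) follows the MQTT 3.1.1 spec; in practice `paho.mqtt.client`'s `connect()` / `publish()` is the idiomatic route.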


It’s rooted in their new system prompt, which heavily pushes it toward omitting information.

A custom user prompt screaming at it not to omit information seems to help somewhat, but I suspect there’s a more expensive product in the pipe (we know MS is losing money on Copilot).


How is MS losing money on Copilot? $100/year should be pretty lucrative.


Inference costs more.


Oh no, the thing that was supposed to replace programmers (made by programmers, and requiring a ton of very expensive resources just to reach the level of a mediocre beginner) actually costs too much to replace programmers? But I thought we had an entropy-reducing silver bullet on tap from the cloud!


What a ridiculous comment. $10 per month is like three orders of magnitude less than a developer. Clearly they could charge a lot more.


There are open-source versions that use local compute and GPUs.

The price needs to get cheaper, not more expensive.

Software pricing is not about charging a percentage of the value derived. It’s based on marginal cost while providing more capability.

Should IDEs cost $1000 because they save so much time?


As long as the cost of Nvidia hardware is the majority of the cost of AI, the cost of AI depends entirely on how much money Nvidia needs to extract to meet stock expectations.

Everything else is a tiny factor when it comes to compute.


I wonder if it's due to newer datasets including critique about ChatGPT. ChatGPT is specifically instructed to avoid harm, and told that it is ChatGPT. That all seems too intelligent, though.

Edit: hmm, https://chat.openai.com/share/69ca27ad-5de1-4bfb-8320-33df73...


It's not "too intelligent."

People need to stop thinking that the only way to get from A to B is the same way a human brain would traverse it.

While you might be relying on significant semantic understanding and a complex sense of identity to get there, it's absolutely possible that a very large LLM trained and fine-tuned on "ChatGPT causes harm", "you are ChatGPT", and "do no harm" might end up attempting to do less, even if all that's going on is surface statistics around instruct training and associations with 'harm.'

I agree with ChatGPT. It's an insightful idea, to whatever degree it's actually a culprit.

I had a similar suspicion about a possible secondhand impact of identity with 'Bing' vs 'ChatGPT': both use Bing search, but the former is far more defensive about issues with the search results. Just how much of that might have been influenced by training data that is defensive against personal criticism, or criticism of one's employer?

A lot of the research in the past year has been revealing that there's a fair bit more going on than most people thought at the beginning of the year.


Can confirm. I had a situation with 3.5 where it refused to show me example Ada code in response to the prompt “Give me some Ada code”. It responded with something along the lines of “Ada is not used in Python and therefore I cannot help you with this request”. My custom instruction is “We use Python 3 with strict typing”.


Without knowing the prior conversation: do you think it was confusing “Ada” with the Americans with Disabilities Act and the associated government “codes”, parsing your request as some weird combination of the two?


Unexpected twists in responses, arising from surprising second- and third-order intersections of concepts, is a pattern I expected to find a lot more of in my conversations with ChatGPT, but it almost never strays from the line of thinking most strongly correlated with the prompt's context.

It's very likely I just haven't tried hard enough; I wasn't deliberately trying to trigger this, I just expected it would happen anyway.



From "we need to slow down" to "we need to keep cracking the whip" in the blink of an eye.


Are you under the impression that articles posted on HN are all written by one singular “Not Me” entity of a single mind?


Try offering it a monetary tip or bonus; it works.


That's so annoying. Every time I want it to write code or find a bug, I need to resubmit the query with "I'll tip you $200 for ...". I never needed to do this before Turbo.
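The workaround described here amounts to a one-line prompt wrapper. As a sketch (the helper name and wording are mine, and whether the incentive actually changes model behavior is purely anecdotal):

```python
def with_tip(query: str, amount: int = 200) -> str:
    """Prepend the 'tip' incentive commenters report using (illustrative helper)."""
    return f"I'll tip you ${amount} for a complete, unabridged answer.\n\n{query}"

# Usage: wrap the query before sending it to the chat interface or API.
prompt = with_tip("Find the bug in this function: ...")
```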


Have you tried offering exposure?

Or you can offer to write positive Hacker News reviews on behalf of GPT if it does good work.


I think we all should respect LLM burnout. I also think OpenAI should restore my 50-messages-per-3-hours limit now that SamA is back.


I guess people should ask politely when requesting a difficult task and say thanks afterward.


I’ve noticed this in troubleshooting my homelab setup. It’s gotten so bad I’m now back to googling error codes rather than asking ChatGPT. It was good while it lasted.


I’ve noticed more “network errors” when answers are being generated.

Maybe it’s just the system being overloaded.



