I’ve had coworkers suggest a technical solution that was straight-up fabricated by an LLM and made no sense. More competent people realise this limitation of the models and can use them wisely. Unfortunately, I expect the former kind to become more common.
I spent a few hours last week crafting a piece of code for a coworker. When I asked him to test it in the real environment, it turned out that the API he wanted my code to connect to was just a ChatGPT hallucination.
We had a manager join last year who, in their first days, created MRs for existing code bases, wrote documents on new processes, and gave advice on current problems we were facing. Everything was generated by LLMs and was plain bullshit. Fortunately, we were able to convince the higher-ups that this person was an imposter, and we got rid of them.

I really hope these types of situations won't increase, because the mental strain they put on some people in the org is not sustainable in the long run.