Hacker News

Breaking peers into groups and letting them compete on wargames is also a fun exercise. Not sure how this will be impacted by LLM-powered coding software, however.



I put the second puzzle into ChatGPT. It makes an extremely stupid mistake but gets lucky.

GPT gets JavaScript numerical semantics utterly wrong but the only non-integer operation is a red herring: "the result [of 14/3] is approximately 4.6667, but JavaScript will store it as 4 since we are not using floating-point numbers"
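For reference, a quick sketch of why that claim is wrong: JavaScript has no separate integer type in ordinary arithmetic; every number is an IEEE 754 double, so `14 / 3` keeps its fractional part unless you truncate it explicitly.

```javascript
// JavaScript numbers are IEEE 754 doubles; division is never
// silently truncated to an integer.
const result = 14 / 3;
console.log(result);             // 4.666666666666667, not 4

// Truncation only happens if you ask for it:
console.log(Math.trunc(14 / 3)); // 4 (explicit truncation)
console.log((14 / 3) | 0);       // 4 (bitwise ops coerce to 32-bit int)
```

So "JavaScript will store it as 4 since we are not using floating-point numbers" has it backwards: JavaScript is *always* using floating point here.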


Is this GPT 3.5, or GPT 4?


3.5


You should try with 4 and compare. 4 is leagues better.


I've yet to be wowed by anything from GPT 3.5 that I've gotten myself, or from 4 that I've seen others post, so I'm not gonna buy it.


Doesn't this strike you as an anti-scientific, close-minded / willfully ignorant attitude?

Why not just discover for yourself? I think you would do well to have your own opinions instead of relying on others.


No. There are a limited number of hours in a day and dollars in a month that I can spend on discovering things.

I spent about four hours looking into LLMs myself. I decided it was worth it for VSCode autocompletion and not worth it for anything more complex.

I'm working through a networking textbook right now in my learning time. So far my estimate is that that's a much more useful investment long term.

But I peek at HN posts about LLMs. People are always telling me that if I do yet one more thing I'll be wowed, but people are never posting concrete evidence of amazing things they've done. My heuristics tell me that means this* is vaporware.

*Not that LLMs are vaporware, but LLMs allegedly an order of magnitude more useful than Copilot are.

And the results of this specific question aren't interesting. GPT 3.5 getting the second question of the first lesson in a middle school intro to programming class wrong shows its limitations. GPT 4 getting it right merely shows it's at least as capable as your average middle schooler before they've been taught programming; it doesn't tell me anything interesting about its ability to help me.


This smells of the Dunning-Kruger effect. Four hours is not adequate time to research the current state of the art of LLMs. You're adopting an authoritative tone on a subject you readily admit you've only spent four hours learning about.

Exactly what evidence are you looking for? What is your bar of excellence? I've had more "wow" moments than I can count. So much research is going into this space. It's definitely worth it to do a bit more yourself before writing it off.



