Hacker News

I don't know if I completely agree with that summary; ChatGPT (and Claude and Gemini) also has some degree of memory and context. It can, at least to some extent, remember what I and it typed and adjust things based on that. I would consider that some level of "intelligence", even if it's a kind of baby intelligence.

ETA:

I should point out that it doesn't just "smoosh Google results together", and it can actually do pretty interesting transformations beyond simply grabbing the first result. If I ask ChatGPT to give me ten unique Java homework assignment problems, it will give me exactly that, and in my experience they really are unique; at least, they don't show up immediately when I search for similar things on Google. I can then ask it to give me those questions in LaTeX so that I can render them into something pretty. I guess you could argue that that is just an AST, so fair enough, but I think it's pretty easy to see why people are losing their shit over it.

For more than a year it has also been hooked into Wolfram Alpha, so in addition to usually parsing your problem correctly, it can send that parsed problem to a more objective source and get a correct answer.

ChatGPT has been an immense timesaver for me. It's been great to generate stuff like homework assignments, or to summarize long text into something more palatable. I don't automatically trust its output obviously, but it's considerably more useful than a Markov chain.




I'm always surprised by this kind of article, and by comments from people who don't know anything about how LLMs work or what they can do. The problem is that, as with most tools, there is a learning curve. Prompting is not always straightforward, and after using these models for a while, you start discerning what should be prompted and what won't work.

The best example I have is documentation that I wrote in Word and wanted to convert to Markdown for a GitHub site (see https://github.com/naver/tamgu/tree/master/documentations). I split my document into 50 chapters of raw text (360 pages) and asked ChatGPT to add Markdown tags to each chapter. Not only did that work very well, but I also asked the same system to translate each of these chapters into French, Spanish, Greek and Korean, keeping the Markdown intact. In a day I had 360 pages translated into these languages as GitHub-ready documents. The electricity consumption was certainly high for this task, but you have to compare it to doing the same work by hand over maybe a few weeks of continuous effort.
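The per-chapter loop described above can be sketched roughly like this. All names here are hypothetical illustrations, not the commenter's actual script; in particular, `call_llm` is a stand-in for whichever chat-completion API you use:

```python
# Sketch of a batch Markdown-tagging + translation workflow over chapter files.
# `call_llm` is a placeholder: wire it to a real chat-completion API.
from pathlib import Path

LANGUAGES = ["French", "Spanish", "Greek", "Korean"]

def call_llm(prompt: str) -> str:
    # Placeholder: send `prompt` to your chat model and return its reply.
    raise NotImplementedError("plug in a real API call here")

def markdown_prompt(chapter_text: str) -> str:
    # Ask the model to mark up one raw-text chapter.
    return ("Add Markdown tags (headings, lists, code blocks) to the "
            "following raw-text chapter. Return only the Markdown:\n\n"
            + chapter_text)

def translation_prompt(markdown: str, language: str) -> str:
    # Ask the model to translate while preserving the Markdown structure.
    return (f"Translate the following Markdown document into {language}, "
            "keeping all Markdown tags intact:\n\n" + markdown)

def process_chapters(src_dir: str, out_dir: str) -> None:
    # One tagging call per chapter, then one translation call per language.
    for chapter in sorted(Path(src_dir).glob("chapter*.txt")):
        md = call_llm(markdown_prompt(chapter.read_text()))
        (Path(out_dir) / f"{chapter.stem}.md").write_text(md)
        for lang in LANGUAGES:
            translated = call_llm(translation_prompt(md, lang))
            out_name = f"{chapter.stem}.{lang[:2].lower()}.md"
            (Path(out_dir) / out_name).write_text(translated)
```

The key design point is splitting the 360 pages into chapter-sized chunks, so each prompt stays well inside the model's context window.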


Every token emitted is a full pass through the network, with the prompt and all previous tokens (both yours and the AI's) given as input.

And I agree that there is certainly some capacity for reasoning, however flawed. There is plenty of evidence of AI solving novel problems zero-shot. Maybe not 100% of the time, but even if you have to run it 100 times and it gets pure reasoning problems right 75% of the time, it's doing far better than chance.
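A back-of-the-envelope sketch of that point (the 75% figure is the commenter's hypothetical, not a measured benchmark): if each independent run succeeds with probability p, the chance that at least one of k runs succeeds is 1 - (1 - p)^k, which climbs toward 1 very quickly.

```python
# Probability that at least one of k independent runs solves the problem,
# given a per-run success rate p (the "pass@k" idea).

def pass_at_k(p_success: float, k: int) -> float:
    return 1.0 - (1.0 - p_success) ** k

# With p = 0.75: pass@1 = 0.75, pass@3 = 1 - 0.25**3 = 0.984375.
```

So even a model that is wrong a quarter of the time on a class of problems is, under repeated sampling, nearly certain to produce a correct answer, which is very different from random guessing.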


I completely agree. I'm not philosophically-brained enough to know how to define "intelligence", but I do think that ChatGPT qualifies as at least "intelligence-lite".


Yeah, as a distiller of the collective knowledge captured over the first 30 years of the commercial internet, it does exactly what it should. That's not always "right", but it still provides huge value even if all it does is filter and distill, leaving you to nitpick or correct.

It doesn't have to be "magic" to displace a lot of things people spend time on every day.



