Hacker News

The claim that LLMs as they stand are somehow the answer is disturbing. ChatGPT's ability to be authoritatively wrong is a serious problem…

Just to see what it would do, I gave it a basic word problem the other day (two people drive towards each other) and it had the steps right, but buried in them was a simple logic error: it claimed that the two parties, traveling at different speeds, would cover the same distance in a unit of time.

That it was good enough to seem trustworthy made it worse…
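For the record, the setup is simple. With made-up numbers (the comment doesn't give the originals), the two drivers close the gap at the sum of their speeds, and each one covers a *different* distance before they meet — which is exactly the step the model got wrong:

```python
# Two drivers start 300 km apart and drive toward each other
# at 60 km/h and 40 km/h (illustrative numbers, not the original problem's).
distance = 300.0        # km between starting points
v1, v2 = 60.0, 40.0     # km/h

# They close the gap at the combined speed, so they meet after
# distance / (v1 + v2) hours.
t_meet = distance / (v1 + v2)

# In that time each driver covers a DIFFERENT distance —
# proportional to their own speed.
d1 = v1 * t_meet
d2 = v2 * t_meet

print(t_meet, d1, d2)        # 3.0 180.0 120.0
assert d1 + d2 == distance   # the two legs add up to the full gap
```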




Sure, but do you need Siri to solve logic riddles? I don't. I need it to reliably hit other APIs and shortcuts with some basic arguments. Check out OpenAI's ChatGPT plugins [0]. You can connect the LLM system to an external service with just an OpenAPI model and a natural language description of what it does. I want that, via voice.

[0] https://openai.com/blog/chatgpt-plugins
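As a rough sketch of what that plugin setup looks like: you hand the model an OpenAPI description of your service and it figures out when and how to call it. The service, paths, and field names below are entirely hypothetical — just the shape of a minimal spec, not anything from OpenAI's docs:

```yaml
# Hypothetical OpenAPI description for a reminders service (all names invented).
openapi: "3.0.1"
info:
  title: Reminders
  description: Lets the assistant create reminders for the user.
  version: "1.0"
paths:
  /reminders:
    post:
      operationId: createReminder
      summary: Create a reminder from a short text and a due time.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                text:
                  type: string
                due:
                  type: string
                  format: date-time
      responses:
        "200":
          description: The created reminder.
```

The natural-language `summary`/`description` fields are what the model reads to decide when the endpoint applies — which is why "via voice" is such a natural fit.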


Did you try that with GPT-4, or 3.5? GPT-4 has gotten better at this type of problem, it seems.

I think the point people are making is that GPT models are getting better at an exciting/disturbing pace, depending on one's point of view, while Siri has been around for years and has only gotten worse.



