People can communicate each step, and review each step as that communication is happening.

LLMs must be prompted for everything and don’t act on their own.

The value of the assertion is in preventing laymen from seeing a statistical guessing machine get something right and assuming that it always will.

It’s dangerous to put so much faith in what is, in reality, a very good guessing machine. You can ask it to retrace its steps, but it’s just guessing at what its steps were, since it didn’t actually go through real reasoning, just generated text that reads like reasoning steps.


> since it didn’t actually go through real reasoning, just generated text that reads like reasoning steps.

Can you elaborate on the difference? Are you bringing sentience into it? It kind of sounds like it from "don't act on their own". But reasoning and sentience are wildly different things.

> It’s dangerous to put so much faith in what in reality is a very good guessing machine

Yes, exactly. That's why I think it is good we are supplementing fallible humans with fallible LLMs; we already have the processes in place to assume that not every actor is infallible.


So true. People who argue that we should not trust/use LLMs because they sometimes get it wrong are holding them to a higher standard than people -- we make mistakes too!

Do we blindly trust or believe every single thing we hear from another person? Of course not. But hearing what they have to say can still be fruitful, and it is not like we have an oracle at our disposal who always speaks the absolute truth, either. We make do with what we have, and LLMs are another tool we can use.


> Can you elaborate on the difference?

They’ll fail in different ways than something that actually thinks (and doesn’t have some major disease of the brain going on), and often smack in the middle of appearing to think.


LLMs "don't act on their own" because we only reanimate them when we want something from them. Nothing stops you from wiring up an LLM to keep generating, and feeding it sensory inputs to keep it processing. In other words, that's a limitation of the harness we put them in, not of LLMs.

As for people communicating each step, we have plenty of experiments showing that it's pretty hard to get people to reliably report what they actually did, as opposed to a rationalization of what they did (e.g. split-brain experiments have shown that both halves of your brain will happily claim to have decided to do things they didn't do, if you give them reason to think they did).

You categorically cannot trust people's reasoning about "why" they made a decision to reflect what actually happened in their brain to make them do it.


> People can communicate each step, and review each step as that communication is happening.

Can, but don't by default. Just as LLMs can be asked for chain of thought, but the default for most users is just chat.

This behaviour of humans is why we software developers have daily standup meetings, version control, and code review.

> LLMs must be prompted for everything and don’t act on their own

And this is why we humans have task boards like JIRA, and quarterly goals set by management.
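
To make the chain-of-thought comparison above concrete, here's a trivial sketch (the prompt wording and send_to_model are invented for illustration, not any particular product's API):

  # Illustrative only: send_to_model is a hypothetical stand-in for whatever
  # chat UI or API is in use; the prompt text is made up.
  default_chat = "Why is my unit test flaky?"
  with_chain_of_thought = (
      "Why is my unit test flaky? "
      "List the possible causes one at a time and say how you ruled each in or out "
      "before giving a final answer."
  )
  # send_to_model(default_chat)            # what most users do by default
  # send_to_model(with_chain_of_thought)   # explicitly asking for the reasoning steps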


A human brain in a vat doesn't act on its own, either.


