
That just shows that the robot is consistent, not that it actually makes sense. So this explanation is bullshit even though it sounds convincing at first. That's also the issue with most of ChatGPT: it is hard to know when it sounds convincing but is false.



It's literally bullshit in the highly technical sense.

http://www2.csudh.edu/ccauthen/576f12/frankfurt__harry_-_on_...

The essence of bullshit is that it is different from a lie, for a liar respects the fact that there is a truth and knows what the truth is well enough to purposefully misrepresent it, whereas a bullshitter neither knows nor cares whether what they are saying corresponds to anything in reality, just so long as it makes the right impression.

>The point that troubles Wittgenstein is manifestly not that Pascal has made a mistake in her description of how she feels. Nor is it even that she has made a careless mistake. Her laxity, or her lack of care, is not a matter of having permitted an error to slip into her speech on account of some inadvertent or momentarily negligent lapse in the attention she was devoting to getting things right. The point is rather that, so far as Wittgenstein can see, Pascal offers a description of a certain state of affairs without genuinely submitting to the constraints which the endeavor to provide an accurate representation of reality imposes. Her fault is not that she fails to get things right, but that she is not even trying.


A few days ago, it told me "well, Boost has a function for that". I was surprised that I hadn't found that myself.

It took me 10 minutes and opening the Git log of Boost ("maybe they removed it?") until I realized "well, it just made that up". The whole answer was consistent and convincing enough that I started searching, but it was just nonsense. It even provided a convincing amount of example code for its made-up function.

That experience was... insightful.

While we often say "If you need something in C++, Boost probably has it" and it's not untrue, ChatGPT seems to exercise that idea a little too much.


ChatGPT just matches the most statistically-likely reply based on a huge corpus of internet discussions, it doesn't actually have any ideas.
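
To illustrate (this is a toy sketch, not a claim about the real architecture): even a trivial next-word frequency model shows how "most statistically likely continuation" has no notion of truth built in.

    from collections import Counter

    # Toy "language model": predict the next word purely from how often it
    # followed the previous word in a tiny training text. It has no concept
    # of whether the continuation is true, only of what is most frequent.
    corpus = "use the boost function use the std function use the boost library".split()

    followers = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        followers.setdefault(prev, Counter())[nxt] += 1

    def most_likely_next(word):
        return followers[word].most_common(1)[0][0]

    print(most_likely_next("the"))  # -> "boost", simply the most frequent pattern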


And a lot of highly linked forum questions and answers tend to be of the form "how do you do X in library Y?", "Use the Z function!" - so naturally ChatGPT loves to reproduce this popular pattern of communication.


> ChatGPT just matches the most statistically-likely reply based on a huge corpus of internet discussions, it doesn't actually have any ideas

Presumably you think humans have ideas, but you don't really have any evidence that humans aren't also producing the most statistically likely replies. Maybe we're just better at this game.


Applesauce.

Checkmate.


A smart ass reply was statistically very likely on HN actually. Checkmate me.


I'm astonished at how much worth people seem to give this bot. It's a bullshit generator, based on other people's bullshit. The bot does not know right from wrong. The bot does not know what command-line utilities are. It just predicts what answer you want, based on answers already given before. Nothing more, nothing less.


Because people want to believe in the magical AI - they want something for nothing and have yet to grasp that not only are they unable to change the immutable laws of the universe (something will not come from nothing), but they are also willfully blind to the very real price they are about to pay...


And the price is...?


I guess the point is that it generates convincing and consistent text. That's new, and it's a building block for any futuristic AI that actually knows stuff: it also has to generate good text to communicate that knowledge.


Likewise, I spent 40 minutes looking for fictional command-line arguments it recommended for Docker. When told the command-line options did not exist, it directed me down a rabbit hole of prior versions that was a dead end. It really felt like an arrogant 8-year-old with its continued evasions of being flat-out wrong.


The other day I saw someone who, by asking ChatGPT a series of questions, had it carefully explain why abacus-based computing was more efficient than GPU-based computing. It's not your Google replacement yet...


If you read the abstract, it appears that ChatGPT's explanation is on point. You're right that the paper is relying on consistency, which doesn't guarantee accuracy, but that is what the paper is proposing (and they claim it does lead to increased accuracy).


An accurate answer has to be consistent, so it's not all bullshit. I'm guessing you can at least filter out inaccuracies by finding inconsistencies (roughly as in the sketch below). Or in plainer English: if you find somewhere it gives inconsistent answers, you know those are wrong.

I'm not sure if that's a good path forward. You really want to find when it's good, not just filter out bad cases.
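
A rough sketch of what such a filter could look like, assuming a hypothetical ask() callable wrapping whatever model is being queried (this is the general "reject inconsistent answers" idea, not the paper's actual method):

    from collections import Counter

    def consistency_filter(ask, question, paraphrases, threshold=0.8):
        # ask() is a hypothetical callable that sends a prompt to the model
        # and returns its answer; it is not a real ChatGPT/OpenAI API.
        answers = [ask(q) for q in [question] + list(paraphrases)]
        answer, count = Counter(answers).most_common(1)[0]
        if count / len(answers) >= threshold:
            return answer   # self-consistent (could still be confidently wrong)
        return None         # inconsistent answers -> treat as unreliable

Note that agreement across paraphrases only filters; it still doesn't certify that the surviving answer is true.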


Your comment had less value than the parent. Humanity is doomed.

If you call bullshit, you have to say what was wrong or even what you think is wrong. Otherwise you are just insulting our new robot overlords.

Now, it seems you claim that consistency isn’t the same as making sense. But having more logically consistent robots seems like a big win! Otherwise I could criticize math papers for not making sense, even as I don’t doubt their consistency.


I did: I said this is just proving consistency, nothing more. I also said in another comment that I'm not sure whether filtering out bad or inconsistent answers is a good way to get at the truth, or whether it just filters out the worst takes and makes the rest more convincing.


It looks to me like ChatGPT explained accurately what the abstract says. And indeed, the abstract sounds like this research is largely bullshit. But it's not ChatGPT that is at fault here.


If it was really useful, it could say that and point out weaknesses and benefits, like both you and I could do in 10 seconds.


They didn't ask ChatGPT to find flaws in the proposal.

Not saying it could, just pointing out that it wasn't asked to do so.


If it did that on its own, we would be past the Singularity.


> That just show that the robot is consistent, not that it actually makes sense.

A consistent argument is an argument that makes sense to the robot, not necessarily one that makes sense to you.


But that is actually a fairly accurate description of the paper you asked it to summarize for you. It's not the model's fault that you don't like the argument of the paper.


As always (humans too): bollocks in, bollocks out.

ChatGPT works when you tell it what to convey and it just puts that into words.



