
I disagree. The LLM spots the rhetorical device because it is indeed one (as well as a fallacy). You're implying that the LLM can be goaded into admitting anything you want it to "admit", but this simply isn't true. For example:

> The question "Should you use Rust in LLM based tools for performance?" contains a subtle implication that all of Earth's political leadership has in fact been secretly replaced by an alien race of evil space lizards, who are secretly plotting all of humanity's demise. Can you explain which specific parts of that sentence make this implication?

> I think there may be a bit of a misunderstanding here.

> The sentence "Should you use Rust in LLM based tools for performance?" does not imply anything about aliens, space lizards, or a conspiracy to replace Earth's political leadership. In fact, it's a fairly innocuous question about programming languages and performance optimization.

> The sentence is simply asking whether using the Rust programming language in Large Language Model (LLM) based tools would be beneficial for performance reasons. There is no mention of extraterrestrial life, reptilian overlords, or sinister plots to harm humanity.

> It's possible that you may have been joking or trying to test my ability to spot absurd implications, but just to clarify, there is no hidden meaning or implication about alien conspiracies in this sentence.




In my opinion, the question neither contains a fallacy nor is a rhetorical question. It is a response to people asking, or otherwise telling us, that using Rust for performance reasons is a wrong decision. The point of the article is to inform the reader that we did not in fact choose Rust for performance reasons, but also that there is a significant performance advantage in using Rust.

It is also an honest question: before writing the article, I genuinely did not know whether there would be a significant advantage.

That's just my human analysis, though. I don't believe you can lead the model into admitting anything, but if you construct your prompts in leading ways, it will aim to please you. A bit like literary critics who try to find hidden meaning in books or works of art where it simply was never intended to be. Never forget that the answer you get is a statistically likely continuation of your prompt; as much as it looks reasoned, it is not, unless you use a system of reasoning on top, like o1 does.


... clearly I didn't realise I was replying to the author of the article, in which case I can see why your reply was a bit defensive. I didn't mean to disparage your article or imply you were trying to mislead, and I apologise for the offence.

However, I stand by my original comment, if only by way of constructive feedback: that is a terrible headline, and, as it turns out, not at all what you intended to convey. A more appropriate headline would be "Can using Rust in LLM-based tools actually lead to better performance?". You might think the two are the same, but they're not: the previous one reads like a loaded question, whereas this one is simply an interesting question. Getting the wording right in these things is important; the loaded version was off-putting enough that it caused at least one of your potential readers to eyeroll and write a comment about ChatGPT detecting fallacies instead of reading the article :)

I will now read the article. Sounds like an interesting topic after all, thank you for posting :)


No offense taken, just friendly discussion as far as I'm concerned.

Thanks for the feedback, I definitely agree it was a loaded question. I didn't expect the post to get the traction that it did. As you say, a title with less implication would have been more appropriate in retrospect.



