Hacker News

Man pages are pre-Internet Age help

StackOverflow is Internet Age help

ChatGPT is post-Internet Age help

In ascending order of helpfulness (modulo LLM confabulation)




There was a pre-internet and early-internet, pre-StackOverflow stage too, called O'Reilly books.

When learning to program, for example Perl, I had the Learning book, the Reference book, and the Cookbook. The Learning book I read once; the Reference had Post-its on it and lived on my work desk. The Cookbook was more the bathroom type: you read it on the toilet.

They had books on everything that were the industry standard: DNS, Apache, JavaScript, ... They revised them regularly; for JS I had the 1st, 2nd, and 3rd editions. I know this all sounds ancient, like real paper books, but even work had an O'Reilly subscription so you could order what you needed.

For me it was O'Reilly books and man pages. But let's not forget that when you installed Linux like RedHat (not Fedora yet), you could install the Linux HOWTOs. Oh boy, did I learn a lot from them too: https://tldp.org/HOWTO/HOWTO-INDEX/howtos.html

Finally there was IRC if you were stuck, but that was of course already the internet stage. I only went online to ask a question; dialup was expensive ;)


> (modulo LLM confabulation)

As of now, there is no LLM without confabulation, so your third item is effectively "hypothetical fantasy ChatGPT that we hope will be available some time in the future", not the actual tools we have right now.


And in descending order of correctness.

ChatGPT inherited "confidently wrong" and "misunderstanding the question and then answering the question it misunderstood" from StackOverflow.


I wonder if ChatGPT will start “closing” questions as “not a good fit for its Q&A format,” or “too opinionated,” or “a duplicate” of something that is actually unrelated.

Actually, this seems to be a case where ChatGPT is smarter than the humans.


ChatGPT will not be able to hallucinate options to find and grep and ls because they all use every single letter of the alphabet.


A recent example: I used ChatGPT 4 to draft an mkvmerge command - take two video files and merge them, copying only certain audio and subtitle tracks from the second file into the first.

The resulting command looked good at first sight, something like "mkvmerge -o output.mkv first.mkv --no-video -s 1 -a 2 -a 3 -a 4". The problem here is that there can only be one -a flag, so it should have been "-a 2,3,4" instead. But mkvmerge didn't really care and just discarded every -a flag except the last one. So I ended up with only one of the audio tracks copied over. I only noticed when I actually checked the resulting file and saw that it had fewer audio tracks than it was supposed to.
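For the record, a sketch of what the corrected invocation would look like. The file names and track IDs are the hypothetical ones from the example above (the second input file's name was never given, so "second.mkv" is a placeholder); the key point is the single comma-separated -a flag:

```shell
# Track-selection flags apply to the source file that follows them,
# so --no-video / -a / -s here act on second.mkv (placeholder name).
# One -a flag with comma-separated track IDs, not one flag per track.
mkvmerge -o output.mkv first.mkv \
  --no-video -a 2,3,4 -s 1 second.mkv

# Sanity check: list the tracks of the result to confirm all three
# audio tracks actually made it over.
mkvmerge -i output.mkv
```

Listing the output's tracks afterwards is exactly the check that caught the problem in the story above.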

This would not have happened to a human after studying the man page - the documentation is very clear about the -a flag and I have no idea what led ChatGPT to come to the conclusion it did.


The lesson here is not to anthropomorphize ChatGPT. It didn't "conclude" anything. Based upon a corpus that includes tonnes of humans writing rubbish on the WWW, it came up with plausibly human-appearing rubbish that can fool humans. GIGO would apply, except that, as we have now (re-)discovered, one can remix non-garbage into garbage with suitable statistical processes. (-:


I think in this case it wasn't GIGO but rather too weak a signal-to-noise ratio for mkvmerge. IMO there's a subtle distinction.



