Hacker News

A recent example: I used ChatGPT 4 to draft an mkvmerge command: take two video files and merge them, copying only certain audio and subtitle tracks from the second file into the first.

The resulting command looked good at first sight, something like `mkvmerge -o output.mkv first.mkv --no-video -s 1 -a 2 -a 3 -a 4`. The problem here is that there can only be one -a flag, so it should have been `-a 2,3,4` instead. But mkvmerge didn't really care and just discarded every -a flag except the last one. So I ended up with only one of the audio tracks copied over. I only noticed when I actually checked the resulting file and found it had fewer audio tracks than it was supposed to.
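One way to catch this class of error automatically is to verify the output file's track list against an expected count, using mkvmerge's JSON identification mode (`mkvmerge -J file.mkv`). A minimal sketch; the helper names and the expected count of 4 are illustrative, not from the original post:

```python
import json
import subprocess

def count_audio_tracks(ident: dict) -> int:
    """Count audio tracks in a parsed `mkvmerge -J` identification dict."""
    return sum(1 for t in ident.get("tracks", []) if t.get("type") == "audio")

def identify(path: str) -> dict:
    """Run `mkvmerge -J path` and parse the JSON it prints to stdout."""
    result = subprocess.run(["mkvmerge", "-J", path],
                            capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

# Hypothetical usage: fail loudly if the merge silently dropped tracks.
# expected = 4  # e.g. 1 track from first.mkv + 3 copied from second.mkv
# assert count_audio_tracks(identify("output.mkv")) == expected
```

A check like this would have flagged the single-audio-track result immediately instead of relying on a manual inspection.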

This would not have happened to a human after studying the man page: the documentation is very clear about the -a flag, and I have no idea what led ChatGPT to the conclusion it came to.




The lesson here is not to anthropomorphize ChatGPT. It didn't "conclude" anything. Based upon a corpus that includes tonnes of humans writing rubbish on the WWW, it came up with plausibly human-appearing rubbish that can fool humans. GIGO would apply, except that, as we have now (re-)discovered, suitable statistical processes can remix non-garbage into garbage. (-:


I think in this case it wasn't GIGO but rather too weak a signal-to-noise ratio for mkvmerge in the training data. IMO there's a subtle distinction.



