Isn't asking an LLM to find links that support an assertion equivalent to cherry picking sources? Also, just from a quick scan of some of these, it is apparent that the citations are not completely accurate.
IMO, this is kind of asymmetric lazy commenting that wastes other people's time. If you want to share something, just link to an article, and leave the LLM bullshit out of it.
I appreciate the pushback on this process, it made me think.
I actually asked the LLM for supporting or refuting sources, so I didn't think I was cherry picking. Looking at its response, though, maybe ChatGPT didn't pick up on the "refuting" detail, or maybe observationist was correct. Next time, two separate prompts, one to "find supporting" sources and another to "find refuting" ones, would help ensure coverage of both sides.
My value add in the human+AI workflow was checking the links. They seem high quality and directly applicable to the statements made. I took pressure off observationist to go find directly applicable links (and I saved time by not googling each separate fact). That said, I probably didn't need to requote ChatGPT in full. I liked the full answer because it assured me ChatGPT was responding to each claim, but the important thing was the links. So my yc comment could have been more efficient.