IMO, an LLM-generated summary is almost never a useful post without further comment.
I don't trust current LLMs to correctly summarize complicated and nuanced text. Now, if someone with the relevant expertise wanted to carefully read an article, feed it into an LLM for a summary, verify its correctness, and post that, I'd be alright with it.
Or if the summary is interesting in some other way - is it super wrong? does it make interesting leaps? is it startlingly correct? - then sure, share it, but also share why it's interesting.