
I'm not sure whether to be appalled or flattered by the ChatGPT comparison at this stage of language-model maturity, but I'm sure there are some Hiberno-English artefacts in my breakdown that will advocate for my analog autonomy!

The "Nam Shub" was a nice parallel, but I felt Stephenson was re-treading ground already covered in Neuromancer with things like Space-Station Babylon:

"We monitor many frequencies. We listen always. Came a voice, out of the babel of tongues, speaking to us. It played us a mighty dub."

or even the post-coital after-images of VR

"His vision crawled with ghost hieroglyphs, translucent lines of symbols arranging themselves against the neutral backdrop of the bunker wall."

My main contention with Snow Crash is that its much-lauded world-building often represents a feeble distillation of what came before it. Nowhere do you see the architecturally foundational yet throwaway references to things like the 'predatory-looking Christian Scientists', or the wonderful description of the subservient wives of the Sararimen dressed in Hiroshima sackcloth and adorned with faked signifiers of domestic abuse as a fashion statement.

Re: the Sprawl trilogy, I've already compared the post-Neuromancer books very unfavourably in other comments on the thread - what was done to Mary-Sue the Molly character was unforgivable, and the only properly developed innovation in the plot was the collage boxes in Count Zero.




I'm not sure if you are GPT.

But if you aren't, you have a rather deep understanding and can rattle it off quickly and succinctly.

(it does sound like GPT, but so hard to tell now)


You'll turn my head, sir. I'm privileged enough to have an English degree, a passion for Science Fiction, and a debating background. I'm also composed almost entirely of strongly held, and often contrary, convictions, so I revel in the thrust and parry of online debate. Particularly during office hours.

Re: ChatGPT - I'm guessing it would be more easily detected than you think. It's a paradigm leap from Markov chains and the like, but I'd imagine it would still hallucinate a quote or reference some non-existent book in making a comparison at some point.

If it ever gets adept at literary criticism or critical theory as opposed to just cargo-culting its way past the lower threshold of readers I'll probably be in despair. Let's keep it trained on JIRA rather than JSTOR just to be safe, lest something like Roald Dahl's 'The Great Automatic Grammatizator' come to fruition.


You're killing me.

I like the cut of your jib, good sir.

Kudos. I would read more of this.


Haha, you can sure pen a phrase. I’d be shocked if you’re a circa 2024 GPT.



