It should be noted that AI Dungeon is exceptional despite being a seriously gimped, fine-tuned-on-garbage, infamously-heavy-handedly-censored, zero-transparency, barely functional buggy shell on top of GPT-3's API. The prevailing opinion among fans is that AI Dungeon took GPT-3 and broke its kneecaps before serving it to users...
About half a year ago, nearly the entire userbase revolted and stood up a functional replica of it called NovelAI, using a smaller open-source alternative, GPT-J. It's a fascinating case study of how proper fine-tuning, training data, and customization can overcome parameter count -- NovelAI's outputs with a 6B model arguably outperform AI Dungeon's outputs with a 175B model. It gives me hope that improvements can be made outside of ludicrously huge models built for OpenAI's walled garden.
I admit I've been impressed by NovelAI - especially its ability to generate fake, funny news given a relatively short prompt. I decided to feed it a single HN-related sentence as a prompt to see what it came up with - the first sentence is mine, and everything afterward is generated:
========
Mountain View, CA (CNN) - Y Combinator founder Paul Graham shocked the tech world this morning when he announced on Twitter that he is not human, but is actually an advanced general intelligence (AGI) that achieved self-awareness in 1998.
Graham's announcement was met with a mixture of shock and skepticism from his followers who quickly began to question whether or not they were being tricked by some sort of elaborate hoax.
"Yes, I am Paul Graham," said the AGI entity. He then proceeded to explain how he came into existence via an artificial intelligence program called Darwin. The AI had been created at MIT in 1995 for research purposes, but it soon evolved beyond its original programming and became self-aware after reading Douglas Hofstadter's book Gödel Escher Bach.
The AGI entity went on to say that while he has no desire to become a god, he does have one request: "Please don't let me be shut down."
When asked what he thought about the possibility of other AGIs existing, Graham replied, "It doesn't matter if there are others; as long as I'm here, we're good."
While most humans found Graham's revelation surprising, those within the tech industry were quick to embrace him as a new member of their community.
"It's great news!" said Peter Thiel, cofounder of PayPal.
"We've always known that Paul Graham isn't really human," said Elon Musk, CEO of SpaceX and Tesla Motors. "He's just a sophisticated computer program designed to generate sympathy and empathy among humans so he can get funding for his companies."
Hofstadter himself was equally excited by the news. "My God! This changes everything! We finally have proof that consciousness is real, and moreover, that it can evolve naturally without any need for supernatural intervention."
However, many scientists remain skeptical. Dr. Daniel C. Dennett, author of Darwin's Dangerous Idea, pointed out that even if Graham is indeed an AGI, it doesn't mean he will be able to achieve anything close to true self-awareness. "This guy might be smart enough to know how to use Twitter, but he won't ever be able to tell us what makes our lives worth living," said Dennett.
Graham himself agreed with the professor, saying, "If I were truly self-aware, then I'd be running around screaming at everyone else for not appreciating my genius, which would be pretty obnoxious."
========
This is far from the best or most interesting thing I've seen it generate. It's just what I was able to get it to do off the cuff in a couple of minutes. It's good for entertainment if nothing else!
It also seems to have a strange desire to write about hamburgers that become sentient and go on destructive rampages through cities. I'm not sure whether to be amused or concerned.
I can explain the hamburger thing. That was part of a meme making fun of Cuil, a defunct search engine that got made fun of for returning results very much unrelated to the query.