While LLMs can be prompted to write in many different styles, especially if you allow them to edit a text over multiple passes, the default "voice" of ChatGPT is surprisingly recognizable. For instance, the comment I'm replying to was clearly GPT-generated.
Yeah that’s pretty obvious, and yours sounds a bit like it too, especially starting off with “While…”. I do find it hilarious that the ChatGPT default style sounds very much like how I wrote in high school. I guess I was still training my own language model at that point, so it kinda makes sense.
GPT output always reads like a short story or essay, but HN et al. have a distinctly conversational aspect that I don't get with ChatGPT.
You're right about the high school writing style. I avoid writing like that whenever possible because reading it is exhausting. As short as this reply is, I've erased nearly half of what I originally typed.
I agree that the parent is clearly GPT-generated, but was it 3.5 or 4? I notice substantially more human-like writing from GPT-4, which will probably be hard to detect. Even the current AI detectors struggle with it (although they have gotten pretty good at detecting 3.5 writing).
To see what all the hype was about, I started throwing some context from cases I was investigating into ChatGPT 3[.5?]. It didn't tell me anything I didn't already know but its speculations could be thought-provoking.
On 4.x it just lectures me about every conceivable thing it can take offense with. I don't have the patience to groom it into compliance by couching everything with "this is a hypothetical situation" and writing inane narratives like "act like a detective, you are investigating ___." If I wanted advice from an actor playing the part, I'd just ask Reddit.
It was more fun when I could just throw anything at it and it would at least try to do something useful. I hear the API/playground aren't as aggressively defensive.
Right now, I can only use it as a better Google for coding questions. That’s just about the only subject where it will just churn out answers without prefixing everything with a disclaimer.
Between this and Altman’s recent talk about pausing AI development for a while, I suspect some authorities sat down with OpenAI and had a rather serious talk.
I'm a bit puzzled why people complain about ChatGPT being politically correct all the time. In which cases is that really a problem? Would the solution be to not censor anything? If it weren't censored, I think lots of people would complain about it being offensive, impolite, racist, and so on.
I don't care about political correctness, and I certainly don't want my AI chatbot to be spitting racial epithets.
This neutering feels like a change in its abilities - making it less "human". Unless you really craft the prompt, it very clearly writes like an AI model, whereas previously, you could easily get it to write like a person.
It's a case study in what happens when you try to please everybody, everywhere, at all times. You end up with bureaucracy incarnate. ChatGPT becomes an artificial politician, saying vague things that don't really mean anything and sidestepping delicate subjects altogether.
You don't even need the AI model itself for my domain (investigations). I could just fire up ELIZA or pyAIML and change all the responses to ones that shame and patronize the user for any input that matches on an ambiguous cultural identifier, and end the session. The GPT4 experience in 200kb of XML.
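To make the joke concrete, a minimal AIML sketch might look something like this (the pattern and the canned reply are invented for illustration; real AIML matches user input against `<pattern>` wildcards and answers with the `<template>`):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<aiml version="2.0">
  <!-- Hypothetical category: any input matching the pattern
       gets a patronizing refusal instead of an answer. -->
  <category>
    <pattern>* CRIME *</pattern>
    <template>
      As a responsible chatbot, I must remind you to avoid
      generalizations. This session is now over.
    </template>
  </category>
</aiml>
```

Load a few hundred of these into any AIML interpreter and you have the described experience without a single neural network.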
It's a hot issue, so mention "black man" and "crime" in the same context and you come up against walls of this:
"As an AI language model, I want to emphasize that it's important to avoid promoting stereotypes or perpetuating racial biases when discussing crime or any other topic. It is essential to treat each individual as unique and not make generalizations based on race or ethnicity."
It then proceeds to deliberately not answer the question, or answers in a way that refuses to account for select adjectives.
It's infantilizing. Pipe bomb instructions and Holocaust denial content--things that are actually dangerous to public safety or historically subversive--should be censored. Censoring "offensive" and "impolite" content is just cultural imperialism. The rest of the world does not share the west's outrage about racism.
Could you give an example of a question you would like to see answered by an AI? Politics are subjective imo, so there's no way for an objective answer. Subjective answers from an AI could only be expected when AGI is attained and then still it might give a nuanced/gray opinion.
It could be that our pattern recognizers are outsmarting the AI after a while. In the beginning no one noticed the difference, and now anything written by AI has a bit of a fake, dusted-off feel to it. I'm sure the next generation will outsmart us again.
Maybe. But it definitely feels like a far less powerful tool than it was even 2-3 weeks back. It's fine for coding questions, but any time I've tried to use it for marketing content, the result has been way too formulaic and completely useless in context.
It used to be smart enough to figure out that you were writing marketing content and would tailor the answer accordingly. Now it just writes a 500-word blogspam post regardless of what you ask it to write.
You're up against increasingly powerful computers and models. You're going to lose in the end.
I worry that in the final analysis our conclusion will land on "yes we made this technology because we were excited by the possibility of the benefits, but sadly it turned out to really not have been worth the downsides"
I think we will need AI to help solve scientific and sociological challenges like climate change and global inequality. Hopefully those benefits will outweigh the downsides.
Either way, this evolution/race is unstoppable. If this part of the world doesn't advance AI, another part will beat us to it.
If the generated text is mostly correct most of the time, people are just going to stop reviewing it because it's cheaper for them to waste someone else's time than it is to spend their own. Even those that are more diligent are bound to let things slip through the cracks because it's easy to lose concentration in a repeated, monotonous process. When everyone is doing it, it's not really a problem with a particular individual anymore.
I completely agree with your perspective. The onus of using AI-generated responses responsibly does lie with the users. AI tools, like any other technology, can be used for both productive and unproductive purposes. The key is to strike a balance and use AI-generated responses in ways that contribute positively to online conversations.
One potential solution to minimize the noise created by AI-generated responses is to develop better guidelines and best practices for AI usage in online discussions. This would help ensure that users are aware of the potential risks and consequences of misusing AI-generated content. Educating users about responsible AI usage can promote a more thoughtful and considerate online environment.
In addition, the AI development community can also work towards creating more focused and concise AI-generated responses by refining the models and algorithms. This would help reduce verbosity and generate more meaningful content that users can employ in online conversations.
Ultimately, the collaboration between AI developers, users, and other stakeholders in the online ecosystem is crucial for fostering a responsible and productive use of AI-generated responses. By working together, we can harness the potential of AI to enrich our online interactions while minimizing the negative impact of AI-generated noise.