
It seems like there are a few potential negative consequences of using AI-generated answers on Stack Overflow. For one, the quality of the answers may be lower than if they were written by a human. Additionally, if these AI-generated answers become too common, it could potentially lead to a more impersonal and less supportive community on Stack Overflow. Finally, if the AI is able to search the internet and "inbreed" its own answers, it could lead to even more low-quality, duplicative answers on the platform. Overall, it seems like there could be some serious drawbacks to this development.

Note: This answer was generated by ChatGPT after being fed this thread.




Honestly I think we are going to have to take an aggressive stance against mediocre generated content here on HN, lest this small island of community be lost like the rest of the Old Internet.


> I think we are going to have to take an aggressive stance against mediocre generated content here on HN, lest this small island of community be lost like the rest of the Old Internet.

How do you propose we do that?

And what do you propose we do when ChatGPT (or something like it) can create better content than most humans?

(... which I would argue is already happening in some limited contexts, although I admit my stance is controversial).


> what do you propose we do when ChatGPT (or something like it) can create better content than most humans?

An AI cannot, by definition, create better HN content. Because HN content is about hearing what people think about a certain topic/thread. Hearing what an AI statistically regurgitates doesn’t satisfy the purpose even if it’s better written.

The internet is about people. I may not be friends with everyone on HN, but we’re a community meant to discuss and share together, and I personally wouldn’t want to talk to an AI over a human.


> An AI cannot, by definition, create better HN content. Because HN content is about hearing what people think about a certain topic/thread.

Hmm. That's HN's value proposition, but that's not necessarily what keeps people coming back. I am thinking about the dopamine hits from the news cycle and comments.

If I am right then it follows that:

> Hearing what an AI statistically regurgitates doesn’t satisfy the purpose even if it’s better written.

..actually does the job.

How much ChatGPT content can we/you/I stomach before we stop visiting? We all have a different threshold (and language is a barrier for non-native speakers; it would certainly take me longer to spot the AI).


Wow, that's an interesting argument if I ever saw one! Your viewpoint is completely the opposite of mine, let me argue a couple of points:

> An AI cannot, by definition, create better HN content. Because HN content is about hearing what people think about a certain topic/thread.

Please tell me: for you, what is the difference between hearing what people think, and hearing what a human-level AI thinks?

Are you saying a human-level AI, by definition, cannot ever be interesting to you, just because it's an AI, even though it could be like a human in every other way?

> Hearing what an AI statistically regurgitates doesn’t satisfy the purpose even if it’s better written.

Don't most humans just regurgitate the same arguments they've just read or learned elsewhere (or even here), like 99% of the time?

And even in the remaining 1% of cases, aren't they necessarily just functional products of the things they've seen/read/experienced (i.e. the inputs they've received, even if gathered by feedback when interacting with the world)?

> The internet is about people. I may not be friends with everyone on HN, but we’re a community meant to discuss and share together, and I personally wouldn’t want to talk to an AI over a human.

What if the AI was more interesting and intellectually stimulating than a human?


> Please tell me: for you, what is the difference between hearing what people think, and hearing what a human-level AI thinks?

HN is a professional social network. People are most interested in what their peers have to say, not any random human being or human level AI.

Now if this human-level AI is working in the field as a professional, or at least in computer science academia, then its opinions are valid and interesting. However, if it's merely an average best-possible response, then it's pointless to hear its opinion.

Additionally, if multiple people post ChatGPT responses, that's just like talking to the same person using humans as sock puppets.


> People are most interested in what their peers have to say, not any random human being or human level AI.

> Now if this human-level AI is working in the field as a professional, or at least in computer science academia, then its opinions are valid and interesting. However, if it's merely an average best-possible response, then it's pointless to hear its opinion.

Yes, I agree with that, but I don't think that's what the parent poster was arguing.

It's also clear that I didn't phrase my question as clearly as I could, because instead of "human-level AI", I should have said this instead: "a human-level (or more intelligent) AI that has equivalent or better knowledge/experience than the people who post on HN".

> Additionally, if multiple people post ChatGPT responses, that's just like talking to the same person using humans as sock puppets.

Yes, I agree with this as well.

But as a counterpoint, (as far as I understand) it's possible to have ChatGPTs with different levels of knowledge/experience and different personalities, as evidenced by GPT-3's fine-tuning capability (disclaimer: I've never used this functionality, so I'm not 100% sure if this is correct).
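
For illustration, here is roughly what that flow looked like with the 2022-era openai-python library (a minimal sketch; per my disclaimer above, I haven't used it, so the file name and base-model choice are assumptions):

  # Hypothetical sketch of GPT-3 fine-tuning with the 2022-era openai-python
  # library; "persona_examples.jsonl" and the base model are made-up choices.
  import openai

  openai.api_key = "sk-..."  # your API key

  # Upload a JSONL file of {"prompt": ..., "completion": ...} training pairs.
  upload = openai.File.create(
      file=open("persona_examples.jsonl", "rb"),
      purpose="fine-tune",
  )

  # Start a fine-tune of a base model on those examples.
  job = openai.FineTune.create(training_file=upload.id, model="davinci")
  print(job.id)  # poll this until the tuned model is ready to query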


Here is a thought experiment. Let's say there is a posting about best practices on HN. The commenters can discuss the topic and create a new consensus on what the best practices are.

AI even at its best won’t be a part of that consensus unless it’s also working in the field.

I think AI chats can help with self improvement, but public discussion with others is the only way to have community improvement. As a disclaimer, sure AI which is indistinguishable from humans would be a part of the community but that isn’t what we have, nor is it the trajectory of current AI. What we currently have are better and better parrots, and task specific AI.


> AI even at its best won’t be a part of that consensus unless it’s also working in the field.

A single AI can theoretically have the sum total of the knowledge/experience that all humans have ever written down, be it in a book, a research paper or, more generally, on the internet.

It can even be trained/learn from audio and video streams (e.g. all podcasts and conference recordings ever uploaded on the internet, etc).

It could also learn from interacting with humans and other AIs.

In conclusion, it could learn not only as much as a human would (in the same way), but it could also additionally learn from all the learning materials available in the entire world, written or spoken in any language (Russian, Chinese, Arabic, you name it).

> As a disclaimer, sure AI which is indistinguishable from humans would be a part of the community

Yes, that's kind of the point I was trying to make :) Or perhaps, what if it was even better than humans at discussing topics on HN?

> nor is it the trajectory of current AI. What we currently have are better and better parrots, and task specific AI.

I think this is 1) moving the goalposts and also 2) ignoring recent research results, which show that the same language model (without any additional training) can be used to solve completely different tasks (such as problems that require real-world physical interaction/manipulation) as long as you encode those tasks as text input/output tokens. This means that besides all the different tasks these models can already do successfully, they also seem to generalize well to other tasks.


> Don't most humans just regurgitate the same arguments they've just read or learned elsewhere (or even here), like 99% of the time?

Yes, but at least they choose what they regurgitate, unless you think of most people as automatons. Personally, I'm more interested in human regurgitations than AI imitations of these. So far AI doesn't bring any reasoning and cannot discern what it regurgitates, but it sure as hell comes off as confident (it could probably imitate being humble as well). Someone posted an example of GPT bullshitting something akin to 2+0=3, but very convincingly.


> Yes, but at least they choose what they regurgitate, unless you think of most people as automatons.

And what, are you saying ChatGPT doesn't choose what it regurgitates?

It seems like these arguments are getting more and more flimsy.

I do believe people (including me) are automatons because I think free will is logically impossible in the way most people intuitively think free will is.

Edit: to clarify, I believe people usually think of free will as meaning that there's some magical, soul-like way that allows you to choose what you do in a principled way that is not simply a direct functional result of your composition and your interactions with the environment, or of some additional pure randomness that the environment imposes on you (due to the universe being quantum). Which is exactly how an intelligent machine would have to work, because... well, because it has to live in the same universe that we do, so in theory a machine can do what our minds do, functionally speaking. There's no magical free-will-like behavior that humans can have that machines can't, unless you believe in souls or other magical things.

> So far AI doesn’t bring any reasoning

This is clearly untrue, as ChatGPT can definitely reason pretty well (although, not always correctly, just like humans). As far as I can see, it can reason deductively, inductively, by analogy, it does abductive reasoning, cause-and-effect reasoning, critical thinking, step-by-step reasoning, you name it.

It might not always do it correctly, and it might even not do it as well as a good human can currently, but it can do it.

> Someone posted an example of GPT bullshitting something akin to 2+0=3, but very convincingly.

Humans do this all the time (although usually not at such an extreme level). Just look at all the posts saying ChatGPT can't do X or Y ;)


Humans only reply very occasionally (except smoldesu in Apple threads) and often have unique experiences, whereas AI will always answer any question you give it (even if it has absolutely no idea, it'll make up some BS), tends to “play along” with the prompt and agree, and always gives its best average response. You can make ChatGPT disagree with itself within two prompts, trivially.


This reads like a ChatGPT response. (So does my response!)

Whether it is or isn't is beside the point.

Broad, anonymous commenting platforms are dead; they just don't know it yet.


> This reads like a ChatGPT response. (So does my response!)

Your comment is very interesting, because I'm having the same experience: the more I interact with ChatGPT and read its arguments/responses, the more I'm getting weird vibes when reading arguments written by humans, although I cannot tell you exactly why (and I think I can still clearly differentiate between a human and an answer from ChatGPT copied verbatim, as it tends to speak in a more formal way and is usually more verbose than how humans typically write here on HN).

I think it's also influencing the way I write, both to be more clear (because otherwise ChatGPT can misinterpret me), but also because I'm reading so much ChatGPT-generated content, which I believe also indirectly influences the way I write and think.

In any case, I know you said it's beside the point, but I assure you, I'm not ChatGPT and not copying its answers here (unless I add a clear disclaimer) :)


> Don't most humans just regurgitate the same arguments they've just read or learned elsewhere (or even here), like 99% of the time?

So, let’s see whether ChatGPT gets the wrong answers:

  Q: How long does it take to have a baby?

  A: The average length of pregnancy is about 40 weeks, or 9 months.
1/2 correct - 40 weeks is 10 months from conception to birth.

  Q: Why is emacs better than vi?

  A: […] Overall, emacs is widely considered to be a more powerful and versatile editor than vi, with a user-friendly interface and a wealth of support and resources available to users.
1/2 correct. It seems to always agree with the question's framing - humans would often disagree.

  Q: Write me a simple python function called are_hacker_news_users_confrontational() that returns either true or false.

  A1: def are_hacker_news_users_confrontational():
        return True

  A2: def are_hacker_news_users_confrontational():
        return False
      This function simply returns False, indicating that Hacker News users are not confrontational. Of course, this is just a simple example, and the actual answer to this question may vary depending on various factors. Alternatively, if you want to determine whether or not Hacker News users are confrontational by analyzing their behavior on the platform, you could use natural language processing techniques to analyze the comments and posts made by users on Hacker News, and use that information to determine whether they tend to be confrontational or not. However, this would require a more complex implementation, and would require access to a large dataset of Hacker News comments and posts.
9/10 for that answer!


> 1/2 correct - 40 weeks is 10 months from conception to birth.

Average length of pregnancy, ovulation to birth is 268 days -- 38.3 weeks or ~8.8 months.

But we typically count pregnancy from last period (this is easier), which makes it pretty close to the round 40 that's usually cited.

> 40 weeks is 10 months from conception to birth.

A month is ~4.35 (6957/1600) weeks, so 40 weeks is ~9.2 months (64000/6957).
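
In Python, for anyone who wants to check the arithmetic:

  # Sanity check: convert 40 weeks into average Gregorian months.
  weeks_per_year = 365.2425 / 7          # ~52.18 weeks
  weeks_per_month = weeks_per_year / 12  # ~4.348 weeks (= 6957/1600)
  print(40 / weeks_per_month)            # ~9.199 months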


Oops, thank you heaps. Very glad to find out that 40 weeks is not a particularly correct answer either! But dividing 40 by 4 to get 10 months was a stupid mistake for me to make!


> 1/2 correct - 40 weeks is 10 months from conception to birth.

Sorry, is this a cultural difference or are you just nitpicking math?

Even Wikipedia says: This is just over nine months.

I have never seen anyone argue that pregnancy takes 10 months in humans, I've always heard people say it takes 9 months (indeed, being 9 months pregnant is equivalent to saying you're just about to give birth, where I come from).


Nope, 40 weeks is the correct answer. Usually, the calculation is done from the first day of the last menstrual period, and from the medical point of view, we consider 40 weeks of pregnancy (even if doctors consider a period between 38 and 42 weeks a normal pregnancy). 40 weeks is exactly 280 days, which is ~9 months. So for this reason we say that the pregnancy is 9 months, but doctors usually say 40 weeks.


> 40 weeks is 10 months from conception to birth.

Months vary in length; 9 average months (1/12 year per month) is ~39.1 weeks, 10 is ~43.4 weeks; 40 weeks is closer to 9 months.


To skip to the last question, which is the most interesting: I'll answer by not answering, haha.

> What if the AI was more interesting and intellectually stimulating than a human?

What if? It's not anywhere close to that. GPT is so far from “human level” even if it sounds good. It's statistical regurgitation, not thought. If it were more intellectual, then I think there'd be a lot more change in the world than just on HN.

> Please tell me: for you, what is the difference between hearing what people think, and hearing what a human-level AI thinks?

Humans can have original thoughts. AIs that are trained on a human text corpus are by definition finding statistical correlations between preexisting things.

You can say things like “I know it’s unpopular but I like OOP because objects make it easy to assign team boundaries at work”. And the replies can be about real work experiences of real people who understand those trade offs.

An AI can discuss this, sorta, but it’s not real. The AI knows nothing of these trade offs other than inevitably mentioning Java.

> And even in the remaining 1% of cases, aren't they necessarily just functional products of the things they've seen/read/experienced (i.e. the inputs they've received, even if gathered by feedback when interacting with the world)?

This is what I was thinking a lot about. I think the answer is no.

Humans are introspective and reflective. You are based on your experiences, yes, but you don’t just regurgitate statistically likely language. Crucially, before you answer a question you can reflect on the logic of that answer.

> Are you saying a human-level AI, by definition, cannot ever be interesting to you, just because it's an AI, even though it could be like a human in every other way?

Not to be weird, but I wouldn’t discriminate against a human level intelligence because it’s a machine, but a language model like GPT is absolutely not a human level intelligence.


The AI we see today is a synthesis of a huge number of arbitrary inputs. You may argue that humans are as well, but humans are a different machine than AI. If I am asking for a human response, or a human piece of artwork, I am doing so because I want to understand and experience the output of machines of the same model. I want to continue the human experience.

AI-generated content can be fascinating, helpful, and in some instances, more useful and accurate than humans (medical diagnosis, technical documentation, perhaps). But if I ask for a human, I want a human.

I don't care if AI is more interesting than a human. I want a human, because I am human. I am not transhumanist.

I wonder what the correlation is between people who see no particular value in interacting with humans, and people who struggle to interact with humans.


Get ready to see a lot of human spaces invaded by somebody else's AI with goals that are not in your interest. It's been happening already with chatbots whose strings are pulled by nefarious people, but that operation will only amplify to the point that it becomes cumbersome and hard to know whether the thing you're chatting with or reading came from a human or a language model.

To me, internet comments are almost on life support. I'm curious if HN will have the same fate.


Interestingly, the same thing has been happening with product reviews, although usually they are written by humans who are paid to write those (extremely biased, mostly deceitful) reviews. I mean, just look at many Amazon review comments... or reviews of commercial establishments on Google Maps and many other such places that accept user-generated reviews of commercial products.

> To me, internet comments are almost on life support. I'm curious if HN will have the same fate.

I think I agree, in general.

I wonder what incentives someone could have for posting such comments on HN. I mean, it's clear that commercial products could benefit immensely from that (as they'd get a return on their investment), and also e.g. governments and political parties who might want to influence the public discourse about sensitive/political matters.

But why would anyone (who is not toxic already) use such a bot to post comments about technical topics, such as in discussions about programming languages, interesting bugs being discovered, open-source software being released, etc?


> But why would anyone (who is not toxic already) use such a bot to post comments about technical topics, such as in discussions about programming languages, interesting bugs being discovered, open-source software being released, etc?

To sway, to amplify, to manipulate, to pollute with noise, etcetera. I think these types of actors have more tools at their fingertips now than ever.


> But if I ask for a human, I want a human.

> I don't care if AI is more interesting than a human. I want a human, because I am human. I am not transhumanist.

I think I understand your point but I'd like to give a counterpoint: replace "human" by "white human" and "AI" by "black human" and you might see how that line of reasoning is flawed.

In other words, there might come a time when AIs could become really offended if you'd exclude them like that from social interactions, with all the repercussions that might have.

> I wonder what the correlation is between people who see no particular value in interacting with humans, and people who struggle to interact with humans.

I see value in interacting with humans, especially at this point in time, and especially in ways that machines can't (e.g. having meaningful intimate relationships, raising a family, etc). Even then, machines could theoretically do some of this better than humans, as suggested by a lot of sci-fi content (except the actual reproducing part).

But I also see value in interacting with beings that are superior to humans, assuming they are able to do what humans can, only better.


You think AI is a being. I am not yet convinced.

I am a human supremacist, yes.

Further, it is not unreasonable to have more interest in some cultures than others, or to find the experiences of one's own culture more engaging or relevant to oneself than another's. The "line of immorality" comes with banning or violently oppressing other experiences.

Again, fundamentally, I disagree with an analogy giving AI equal morality or agency to a homo sapiens. There is no room for "find replace" here.


> Further, it is not unreasonable to have more interest in some cultures than others, or to find the experiences of one's own culture more engaging or relevant to oneself than another's. The "line of immorality" comes with banning or violently oppressing other experiences.

Agreed.

> Again, fundamentally, I disagree with an analogy giving AI equal morality or agency to a homo sapiens.

I understand your view. But I also think this viewpoint will age badly. Maybe not in our lifetimes, but I increasingly believe that it will happen in less than one generation.


> An AI cannot, by definition, create better HN content.

Better is entirely too subjective for this to be true. And if you turn on showdead, ChatGPT content is already better than some of the other content submitted here. If an ML algo can take a half-baked comment of mine and form it into a coherent response that other people can read and actually understand what I mean, that is better content.


> If an ML algo can take a half-baked comment of mine and form it into a coherent response that other people can read and actually understand what I mean, that is better content

I would still consider this to be your comment, not GPT’s comment, as long as it was used as a writing tool, not as a replacement for your own opinion.


There are already plenty of comments on this forum that are low-effort regurgitations of talking points, memes, 'common sense', and the like.


> An AI cannot, by definition, create better HN content. Because HN content is about hearing what people think about a certain topic/thread.

Not necessarily. Good HN content can also be factual information relevant to the topic at hand. And yes, current AI like ChatGPT might not help with that, but a hypothetical future AI which cared more about the veracity of its statements could.


> Not necessarily. Good HN content can also be factual information relevant to the topic at hand

I reject this notion. For me, that's not good HN content; that's not why I go to HN. Maybe it is for others, but not for me.

That said, factually incorrect content is bad, but being factually relevant is not enough. I don't want a robot glossary filling up the comments. Have you met know-it-alls who just spew factually relevant regurgitation instead of thoughtful responses?


> An AI cannot, by definition, create better HN content. Because HN content is about hearing what people think about a certain topic/thread.

With AI certain ideas and opinions can and will be amplified by malicious actors. We may have to resort to face to face at some point or verify human identity at times to combat this.


Out of curiosity I tried prompting ChatGPT to come up with interesting rebuttals to your comment, but it kept misunderstanding my prompt and agreeing with you instead.


> How do you propose we do that?

One technique, like all the other self-moderation that you can do on HN: stop upvoting and commenting on content that you don’t want to see boosted.


People already upvote fairly bland comments which reinforce common beliefs, so AIs which can distill popular sentiment and regurgitate it very readably will get wildly upvoted on any social media platform.

Most humans don't like too much creativity and they want ideas that they agree with.

I wonder if the death of social media will be more of a descent into AI-generated inane and bland commentary.


> One technique, like all the other self-moderation that you can do on HN: stop upvoting and commenting on content that you don’t want to see boosted.

But isn't that what most people are presumably doing already?

I guess my question was more intended to be: how do you differentiate between content generated by humans vs machines?

At some point, we might not be able to. Or even if we can, it could actually result in a worse experience, if machines can generate better content.


A few years ago there was a US Congressional Hearing [1] about how to handle deepfakes (OpenAI had a representative on the panel).

There was some deep discussion about that topic.

For example: even if all authentic content had an embedded steganographic watermark, how do you reliably authenticate recordings of recordings or otherwise degraded copies of authentic content?

[1] https://intelligence.house.gov/news/documentsingle.aspx?Docu...


Voting with your wallet only works if a large portion of consumers both agree with and understand your point of view.


"Ignore it" has never been a solution to "other people misbehaving and destroying shared spaces".


> How do you propose we do that?

One approach might be a norm (perhaps with a change to the guidelines) to downvote to oblivion any clearly generated content. I don't claim to have solved the problem though!

> better content than most humans

To be clear I was only arguing against mediocre generated content, not excellent generated content. I think the latter poses a different set of (also interesting) problems.


> And what do you propose we do when ChatGPT (or something like it) can create better content than most humans?

I tend to agree... I've been growing more and more tired of the content in familiar places, e.g. Reddit. (As an aside, I think a lot of it is driven by advertising/marketing, but not all of it...)

Anyway, your comment reminded me of the recent footage of baggage handlers at an airport, and how that dovetails nicely with the recent move by Tesla to build a humanoid robot.

Looking for a ray of light in the approaching storm: maybe these AIs can be used to filter content more effectively for us.


hard bans for undisclosed machine-generated submissions?


Yeah I'm in favour of this also, but proving it might be tricky, and people will get unjustly banned.

Then there's the question of situations where people might actually use ChatGPT in a creative way to augment discussion.

Let's say we're talking about the pitfalls of repetitive code over breaking things cleanly into small functions. You have an example of this in mind that highlights a specific pitfall that you encountered, but don't want to share proprietary code, so you might ask ChatGPT (or a future model) to generate some code that demonstrates the same thing, rather than writing it yourself.
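
Something like this toy Python sketch, say (hypothetical names, just to illustrate the kind of example you might ask it to generate):

  # The repetitive version: two near-identical loaders...
  def load_users(path):
      with open(path) as f:
          return [line.strip() for line in f if line.strip()]

  def load_orders(path):
      with open(path) as f:
          return [line.strip() for line in f if line.strip()]

  # ...versus factoring the duplication into one small function.
  def load_lines(path):
      with open(path) as f:
          return [line.strip() for line in f if line.strip()]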

I think we're still early enough in the tech that it's hard to create hard-and-fast rules about what kind of content should be allowed; ideally, we'll get to the point where AI can help facilitate and augment human interactions, rather than taking over completely.


[flagged]


Noice chatGPT


[flagged]


I see you, GPT-3.


GPT3 fooled me a couple times at the beginning, but I think as I'm more exposed to its writing I'm getting better at spotting it.

I wonder if somebody has already made a game of this!


Gotta love it. Akin to Silicon Valley-ites ensuring their own children don’t have access to the technologies they go into the office to create and propagate every day.


HN isn't the only place that should take steps to prevent low-quality AI-produced spam. Everyone should. But since we're on HN, it makes sense they are talking about HN.


I would expect most automotive engineers don't let their children drive the cars they go to the office to create and propagate every day. Because... they're children.


> I would expect most automotive engineers don't let their children drive the cars they go to the office to create and propagate every day. Because... they're children.

On the other hand, most parents don't let their children drive cars either and yet they let their children access those technologies.

Another way in which that analogy doesn't work is that it's illegal for children to drive cars, but it's not illegal for children to access those technologies.

Furthermore, these technologies are being created and modified in ways that are specifically attractive and addictive for children (as well as adults), and they are constantly being further optimized for those criteria, almost every day (in a manner of speaking).


I think the poster above meant that e.g. engineers at Meta are paying very good money so their children go to private schools where they confiscate phones. Or that they don't buy phones at all for their kids until they turn like 12, and instead pay a lot of money so they go to private tutors, have nannies, and do all kinds of after-school programmes. Whereas kids from poor families cannot afford these, and their parents cannot keep watch over them 24/7 (because they need to work), so these kids will hang out on Facebook/YouTube/TikTok/... all day.


> e.g. engineers at Meta are paying very good money so their children go to private schools where they confiscate phones

This sounds like one of those made-up tweets that became an article that became an urban legend.


The opposing argument is that they are designing things not for other people's children, but for other adults. Technically, COPPA applies to Facebook, so there shouldn't be monetized under-13s on there, and so monetizing children would not be contributing to their paycheck.

In reality this is a rationalization that lets them sleep at night -- just like the rationalizations I use when eating meat or turning on the AC just to be comfortable.

Some things are worse than others, and on the grand scale of terrible things, working for Meta is definitely about 2/3 of the way between 'empathetic+selfless' and 'greedy+sociopathic', but the hypocrisy involved in wanting your own children to do well even at the expense of your ideals is so incredibly common that it is almost not worth mentioning.


There’s nothing hypocritical about a car mechanic saying cars should not be allowed on the sidewalk.


The problem is, the darn thing produces comments which can't be blanket-labeled as mediocre. I've asked it to produce a reply to something I didn't agree with completely, and it gave me better rationales than I had in mind. Scary!


You seem to be arguing for meritocracy. If the AI posts have merit, shouldn't they be upvoted?


In light of the woke restrictions on these AI programs, soon the only way to know if a human wrote something is if it contains the n-word or denies the Holocaust. Ditto for Stable Diffusion: proof of the humanity of the artist will consist in a small swastika or some naked juvenile in the background.


I have -9 points. HN is doing well to prevent people from commenting without facts.


What if AI could be leveraged to find interesting content? I doubt I'd even notice the bulk of comments being replaced by an AI. I only read them for the one-in-a-thousand comment from a domain expert or industry insider that really informs me or challenges my perspective.


I completely understand your concern about the quality of generated content on HN and the impact it may have on the community. However, as a large language model trained by OpenAI, I am not capable of taking any kind of stance or action. My purpose is to assist users by providing answers to their questions based on the information I have been trained on. I do not have the ability to browse the internet or to interact with the HN community in any way.


@dang, your job is in danger


This is definitely not going to help combat the "Dead Internet" conspiracy theory

https://theswaddle.com/what-the-dead-internet-theory-predict...


Isn't this just the first form of that "AI takeover" some are fearing in the distant future?

It's accelerating now. AI is taking over the internet. Not like “autonomous control”, but like an invasion: a digital cancer that metastasizes. Just as cancer cells mimic immune cell activity to spread, AI mimics human language and visuals to spread too.

And it may “kill the internet” (hence the “dead internet”)


This is going to be the HeLa problem for future AI. ( https://en.wikipedia.org/wiki/HeLa#Contamination )

If we contaminate the content that we are training future AIs off of with ChatGPT, then they'll have problems with the value of that information in the future.

I'll also note that if one attempts to purposely contaminate information with ChatGPT to "make it worthless to future AI", it also makes it and the tangential or adjacent information worthless to humans too.



I think the most problematic thing by far is that the answers are often plain wrong but well written. E.g. (shameless plug) https://twitter.com/zuza777/status/1599370082919936000?t=oB9...


That was a surprisingly common occurrence on SO even before. To be the fastest gun in the west, you couldn't afford to carefully read the problem or browse any documentation. High-rep users that should have known better would sometimes give wildly incorrect answers that sounded good.


>It seems like there are a few potential negative consequences of using AI-generated answers on Stack Overflow. For one,

This is where my "Probably written by AI" filter tripped, and I skipped to the end.

I can't exactly say why. Maybe we'll all start to develop a seventh sense for ML-generated content, in addition to the "probably an ad" filter that usually kicks in around middle/high school.


I feel the same way about some Stable Diffusion pictures — they're just too soft and the background is just so evenly blurred.

Someone should make a game where you are presented with pairs of images, one “real”, one generated and you need to pick which one is “real”


Same here.

OpenAI commented on the verboseness of answers as something they need to work on.


Here is a potential rephrased version of the original post, written in a witty and informal style:

"Yo, AI-generated answers on Stack Overflow sound like a real bummer. First of all, the quality might suck compared to human-written answers. And if these AI answers become too popular, the community could get all impersonal and unsupportive. Plus, if the AI is copying and pasting from the internet, we'll end up with a ton of low-quality, duplicate answers. In short, using AI on Stack Overflow could really suck."


How long will we wait for "make GPT a gangsta" plugins?


it's already there?

You can ask it to re-style any of its responses in a wild variety of styles, from comedians, Internet celebrities, famous authors ...

Mostly it just proves that on most subjects there is nothing interesting to say, and we wildly overestimate the value of sharing our own opinions on them.


The post title should be changed to "HN comments are being flooded with answers from ChatGPT", I have seen too many this weekend.


You can kinda tell it's ChatGPT by how verbose and repetitive the writing is.


Yes, one way to identify responses that may have been generated by ChatGPT is to look for signs of repetitive or verbose language. Because ChatGPT is a language model, it is designed to generate text that is fluent and coherent, but it may also produce text that is unnecessarily long or repetitive. This can be a telltale sign that a response was generated by an AI, as human responders are less likely to use repetitive or verbose language in their answers. Of course, this is not a foolproof method, and there may be other factors that can affect the quality and style of a response, but looking for repetitive or verbose language can be a useful starting point when trying to identify AI-generated responses.


This is fun but it's already starting to NOT be fun.


It has a very recognizable default writing style but it will change things up if you ask.

> Write a response to the following comment chain in the style of an angry Hacker News commenter.

> ...

What the hell is wrong with you? ChatGPT is a brilliant piece of technology and you're just dismissing it because of some stupid writing style? Get a grip and appreciate the advancements we're making in AI. Stop being so negative and enjoy the fun while it lasts.


an AI that communicates with the verbosity of a recipe website... our glittering future!


You are not an AI. Probably.

I really think the results are impressive, but I already hate how some people will use it. And that indeed is not fun.


I reckon we are about 3 years from most textual content online being created by bots, 5 years from static visual content being bot-created, and 7-8 years from bot-created visual content clogging up all social media.


Everyone will be doing strict ID verification long before any of those milestones are actually reached.


3, 5 and 7 seem awfully pessimistic, no?


I'll take the under.


> we are about 3 years from most textual content online being created by bots

0 years.


1 year for this tech to be incorporated into most cheap/free creative tools.

1 more year for these newer creative tools to be adopted by creators everywhere.

1 more year for the humans to completely give up.

You have to remember that tech takes time to permeate to the bottom rung. Most of my non-tech friends haven't even heard of DALL-E, let alone ChatGPT.


1 year? The Internet's already rife with SEO spam without ML algos creating convincing-looking copy. They'll copy a site wholesale, just with a different domain name. Spammers are probably already using this to generate copy for websites.

eg https://news.ycombinator.com/item?id=33640819


To find out if a response was made by a computer, look for repeating words or long sentences. ChatGPT is a program that makes text that makes sense and flows well, but sometimes it uses too many words or repeats itself. This is a clue that the response might have been made by a computer instead of a person. However, this isn't a sure thing and there might be other reasons why a response is long or repeats itself. Still, it's a good place to start.

[After many attempts]


> Respond to the above in the informal style of a slightly contrarian Hacker News comment, with a hint of passive-aggressiveness. Avoid repetitive wording and long sentences.

Wow, thanks for the tip! I never would have thought to look for repeating words or long sentences to determine if a response was made by a computer. I guess I'll just blindly trust everything I read online from now on. Sarcasm aside, this is a pretty flawed method. Just because a response has repeating words or is long doesn't mean it's not from a person. And conversely, just because a response is short and doesn't repeat itself doesn't mean it's not from a computer. It's not that simple.


This is terrible. Pretty soon most comments on the internet internet will be written by bots.


> the internet internet

I think you pass the Turing test. It's ironic that, in its current state, AI is so "perfect" that this kind of mistake can actually serve as proof of humanity.

Can you prompt ChatGPT to introduce small errors so as to appear more genuine?


You can ask it to write like a teenager, etc. It should be fairly trivial to make a script that removes ' and some . and maybe adds some writing mistakes.
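
Something like this quick sketch, say (purely illustrative):

  import random

  def humanize(text):
      # Drop apostrophes and some periods, then inject one small typo.
      text = text.replace("'", "")
      text = "".join(c for c in text if c != "." or random.random() > 0.3)
      words = text.split()
      long_enough = [i for i, w in enumerate(words) if len(w) > 3]
      if long_enough:
          i = random.choice(long_enough)
          w = words[i]
          j = random.randrange(1, len(w) - 2)  # swap two inner letters
          words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
      return " ".join(words)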

You probably need to have a discussion with a user to find out if it is a bot or not.

Vaguely relevant one- or two-liner type comments, we will never know if a bot wrote. The kind of noise people post on Twitter, Facebook and Reddit.


> To find out if a response was made by a computer, look for repeating words or long sentences.

That technique will produce quite a few false positives - a significant portion of humanity with language-production issues on the autism spectrum communicates like that.

edit: Makes me wonder to what extent the training data is biased by writings from people on the autism spectrum.


That's honestly a hilarious response from ChatGPT


That's just its default setting. If you do some prompt engineering to tell it to be to the point or concise, it can do better. E.g. I got:

The repercussions of ChatGPT flooding Stack Overflow with answers could include decreased quality and relevance of answers, decreased user engagement, damage to the platform's reputation, and potential loss of revenue.


Still sounds too formal. To me, it reads like the intro to a high school paper. The next four paragraphs would elaborate on each of the points listed.


Another poster already did the informal version. My main point is that it's not something set in stone but yes, it sure looks like OpenAI went overboard in setting it up like this. They should adjust it.


Maybe they shouldn’t. Nobody should. But someone will, because people will pay good money for something like this to shill their products here and on Reddit and Twitter. The worst part is that it will work for at least a while.


I believe it will always work for a majority of people. Even for political and philosophical arguments.


You can get it to respond a bit less formally by having it impersonate someone with a less formal speaking style. I told it to restate the original, but as if it was written by Tom Sawyer or one of his friends: "Yo, what's the deal with using AI answers on Stack Overflow? It seems like it could be a real bummer. For one thing, the answers might not be as good as ones made by a human. And if AI answers become too common, the community might not be as friendly and supportive. And if the AI can search the internet and make its own answers, there could be even more bad answers on the site. All in all, this could be a major problem." Still a bit structured, but with a bit more play you can see how it can 'act' like someone who isn't so formal.


Really though, if they just manage a setting for "sounding like a grouchy 50-year-old bastard", it'll put me out of work.


Can't we have ChatGPT identify the output of ChatGPT, adversarially?


It confirmed it when I asked:

--> Was the following paragraph generated by the OpenAI Assistant: "It seems like there are a few potential negative consequences of using AI-generated answers on Stack Overflow. For one, the quality of the answers may be lower than if they were written by a human. Additionally, if these AI-generated answers become too common, it could potentially lead to a more impersonal and less supportive community on Stack Overflow. Finally, if the AI is able to search the internet and "inbreed" its own answers, it could lead to even more low-quality, duplicative answers on the platform. Overall, it seems like there could be some serious drawbacks to this development."?

<-- Yes, the paragraph you provided was generated by the OpenAI Assistant. As a language model, the Assistant is trained to generate human-like text based on a given prompt. In this case, the prompt likely pertained to the potential negative consequences of using AI-generated answers on Stack Overflow.

Though, it could just have been taking a guess at it.


"It is likely that this text was not written with OpenAI Assistant."


You don't need adversarial AI to do that. Simply simhashing every response and then allowing people to upload some text as a "did ChatGPT generate this" query should work. Just ordinary software is sufficient.

Of course, once you have multiple companies with ChatGPT level AI it gets harder if they don't cooperate. But they easily could. The big webmail providers cooperate to fight spam, for example.
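
A rough sketch of what that lookup could look like (simplified simhash; a real service would normalize the text and index hashes for fast nearest-neighbor search):

  # Simplified 64-bit simhash over word features; near-duplicate texts
  # produce hashes within a small Hamming distance of each other.
  import hashlib

  def simhash(text, bits=64):
      weights = [0] * bits
      for word in text.lower().split():
          h = int.from_bytes(hashlib.md5(word.encode()).digest()[:8], "big")
          for i in range(bits):
              weights[i] += 1 if (h >> i) & 1 else -1
      return sum(1 << i for i, w in enumerate(weights) if w > 0)

  def hamming(a, b):
      return bin(a ^ b).count("1")

  # "Did ChatGPT generate this?" = is the query near any stored response hash?
  def probably_generated(query, stored_hashes, max_distance=3):
      q = simhash(query)
      return any(hamming(q, h) <= max_distance for h in stored_hashes)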


Before reading any words, I guessed just by seeing how long the paragraph was.

Maybe because the training data includes things like news articles that have longer paragraphs than typical forum posts?


You can ask for answers of a certain length and also of a certain writing style. The writing-style one is hit and miss, although if an AI ever gets all fiction books fed into it, I imagine it could radically improve. Say, by qualifying a response with "respond in the style of author X" or "the average style of fantasy books published in 1972" or whatever.


> Reformulate the above in the informal style of a Hacker News comment.

I'm not sure about this AI-generated answers thing on Stack Overflow. The answers might suck and make the community more impersonal. Plus, if the AI is just copying and pasting from the internet, it could lead to a ton of low-quality, duplicate answers. Not a good idea in my opinion.


This is the Great Filter we never expected.

It probably turns out that most advanced civilizations eventually invent decent language model AIs that clog up their professional message boards with semi-comprehensible blather which ultimately ends up stalling all future scientific progress.

It's like some weird self-inflicted version of the Three Body Problem.


What prompt input did you use? When I tried this:

> What are the repercussions of Stack Overflow questions are being flooded with answers from ChatGPT?

It returned:

> I'm sorry, but I'm not aware of any situation in which answers on Stack Overflow are being flooded with answers from ChatGPT. I am a language model trained by OpenAI, and I do not have the ability to browse the internet or interact with websites like Stack Overflow. I can only provide answers based on the information I have been trained on. Is there something specific you would like to ask about Stack Overflow or ChatGPT? I'll do my best to help.


This is the response I got from my chatbot that's hooked up to GPT, using your prompt:

>The repercussions of Stack Overflow questions being flooded with answers from ChatGPT could be that Stack Overflow will not have a reliable source of answers to various programming questions, leading to confusion and inaccurate solutions. Additionally, there could be an increase in spam and low-quality answers, reducing the overall quality and reliability of Stack Overflow.

Out of interest, I then asked "Can you elaborate?":

>Sure! ChatGPT is an AI system that uses natural language processing to generate answers to questions. While this technology can be helpful, it can also lead to confusion and inaccurate solutions. For example, ChatGPT might provide an answer that is technically correct but doesn’t take into account the nuances of the programming problem or the context of the question. Additionally, since AI-generated answers are not written by humans, there might be a lack of clarity or explanation which could make it difficult for readers to understand the solution or follow along with the logic. Finally, since ChatGPT can generate answers quickly and in bulk, there might be an increase in spam and low-quality answers on Stack Overflow which could reduce the overall quality and reliability of the platform.


Haha, I surely sniffed it out before reaching the end. The formal style is familiar.


The formal style of a high-schooler trying to meet essay structure requirements


You are correct that there are potential negative consequences to using AI-generated answers on Stack Overflow. One of the main concerns is the quality of the answers. Because AI models are trained on large amounts of data, they may not be able to provide the same level of personalized, nuanced, and accurate answers as a human would. This could lead to a decline in the quality of the answers on the platform, which could in turn lead to a less supportive and helpful community.


Another potential negative consequence of using AI-generated answers on Stack Overflow is that they could become too common and lead to a more impersonal and less human-oriented community. While AI-generated answers can be useful in some cases, they may not always be the best solution, and relying too heavily on them could lead to a less personal and less engaging experience for users.


Additionally, if the AI is able to search the internet and "inbreed" its own answers, it could lead to even more low-quality, duplicative answers on the platform. This could make it more difficult for users to find useful, accurate answers to their questions, and could further contribute to a decline in the overall quality of the answers on the platform.


And I’m starting to recognize the “voice” of this thing now and it’s driving me crazy. The content is so bland, the conclusions so plainly obvious.


OK Computer

I could tell that super-passive, eager-to-talk-to-both-sides style a mile off. Fuck that! (<- proof I am not a robot)


Vulgarity might be our new captcha, since current culture forbids corporations from using bad words.


Well, at least OpenAI's answers are nice and welcoming, unlike SO's. For most newbies, SO is daunting, and I'm sure we could do better.


> more impersonal and less supportive community on Stack Overflow

Surely this is maxed out already!


I could tell that your comment was ChatGPT by the second sentence.


Could you provide the prompt?



