
If you don’t want to have what an AI generates then don’t use it. I do agree with the sentiment that much of what gets branded “AI”, ranging from a relabeling of features that were already there to actual LLM integrations, is at the moment only somewhat helpful and often obtuse. But really, your new systems shouldn’t be thin front ends to gpt4; they should be something far more tangible.

Output dashboards, or reports, or aggregate data. I have my own project which is a thin shell over gpt 4, but I tried experimenting with an SMS UI that, while it only has question-and-answer dialogs, presents the information in a different way. Think of what that can enable.




> If you don’t want to have what an AI generates then don’t use it.

If only it were that easy. The flood of other people's AI-generated content clogs up anywhere it's not laboriously moderated out. DeviantArt, for example, has become more or less 99% AI content by volume over the last couple of years and is now basically useless if you're not interested in having a firehose of generic AI images blasted at you. I've seen people complaining that speciality groups for hobbies like crochet, interior design or car photography are overrun with fake AI images. Search engines are full of fake AI images and GPT-written SEO farms. Twitter is full of GPT-powered bots. It's everywhere, regardless of whether you deliberately engage with it.

Not that I think complaining is going to fix anything; we've irrevocably broken the signal-to-noise ratio of the internet by building an infinite noise generator, for relatively nebulous benefits in return.


Yep. Just because I don't use Copilot doesn't mean I'm not stuck reviewing a bunch of Copilot code.


"You may not be interested in AI, but AI is interested in you."


> your new systems shouldn’t be thin front ends to gpt4

> I have my own project which is a thin shell over gpt 4

Physician, heal thyself!


> If you don’t want to have what an AI generates then don’t use it.

The author is writing about the sort of AI outputs that other people and organizations are passing to him: chatbots, generated emails, phony presence at a meeting, and so on. Those use cases are a bit more like relatives who send their DNA to untrustworthy companies for analysis: you personally saying no for your own use doesn't actually mitigate, or even affect, the negative externalities imposed upon you by widespread general use.

I do agree that, for many use cases, personally opting out of junk generative AI is sufficient. But I'm not looking forward to a world so flooded with low-quality AI outputs that sifting through them becomes impossible to avoid in all areas of life.



