> I would love to see some kind of identity and reputation system where the "high-fives in the form of rep" could follow people across communities. It may not feel like much compensation if you've contributed over 2500 answers, but having reputation gained in your area of expertise grant you a high level of trust to interact in other communities could be valuable, at least in my opinion.

Honestly, I think that's an excellent idea - a rep "passport" of sorts that grants you a baseline level of trust across communities.
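
To make the "passport" idea concrete, here's a minimal sketch of how it could work, assuming communities publish signing keys somewhere discoverable (every name and field below is hypothetical, not any real Stack Overflow API): the issuing community signs a reputation claim, and any other community verifies it offline with the issuer's public key.

  // Hypothetical "rep passport": a community signs a claim about a user,
  // and any other community verifies it with the issuer's public key.
  // All names and fields here are invented for illustration.
  import { generateKeyPairSync, sign, verify } from "node:crypto";

  interface RepClaim {
    issuer: string;     // e.g. "stackoverflow.com"
    subject: string;    // e.g. "john.example.com"
    expertise: string;  // the area the reputation was earned in
    reputation: number;
    issuedAt: string;
  }

  // The issuing community holds the private key; the public key is published.
  const { publicKey, privateKey } = generateKeyPairSync("ed25519");

  const claim: RepClaim = {
    issuer: "stackoverflow.com",
    subject: "john.example.com",
    expertise: "postgresql",
    reputation: 2500,
    issuedAt: new Date().toISOString(),
  };

  const payload = Buffer.from(JSON.stringify(claim));
  const signature = sign(null, payload, privateKey);

  // A different community checks the claim without contacting the issuer.
  console.log(verify(null, payload, publicKey, signature)); // true

The appealing property is that the consuming community only needs the issuer's public key, not a central reputation authority.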

> Assuming they're making this move to protect against AI / LLMs, I think SO is in an impossible situation here. When all the ChatGPT hype started, one of my first questions was "what happens to the incentive for contributors and creators?" Why would I want to contribute on a platform if I know an AI model is going to come in, take my contribution, and regurgitate it back to the masses in a way that I can't control?

Sadly, I think this is an unpreventable outcome of what is happening right now. I don't think anyone will have any control over this at all. We can only hope that contributing as an actual human never becomes a worthless pursuit.

> Even if I get some attribution from the AI/LLM, do I even want it? If the LLM is blending content from multiple sources, which changes the context and presentation I put effort into, is the quality going to be high enough to match what I strive to achieve for myself when I'm trying to build a reputation as a high quality contributor? What if the AI is hallucinating objectively poor quality content and giving me partial attribution?

Another excellent point. The prospect of this being possible today - attribution pointing at a hallucinated version of a human's actual contribution - sounds freaking terrifying to me. Not a world I want to live in, to be honest.

> I think AI is going to be disruptive and the whole idea, for me anyway, behind disruption is that you break an existing system and then everyone is free to take a shot at claiming part of the new gold rush that occurs while trying to build the replacement. The problem with AI is that it's going to break a lot of services that do a good job of serving the community and shouldn't be broken. SO is a great example of a healthy community that doesn't need disruption, but the massive amount of high quality, curated content is going to make them a prime target for LLM training.

As will every single human-created/curated content source, IMHO. I think that "quality" will be really, really hard to objectively measure in the near future as the whole world of digital information becomes tainted with applied statistical models which can do a reasonably good job of predicting what people perceive to be high-quality reasoning, answers, content. I like the idea of underground speakeasies where there's no wifi, just humans.

> Personally I think the only solution is for "noai" variants of popular open source licenses so contributors have the ability to make it clear they don't want to contribute to AI/LLM companies. If SO had an option to flag contributions as CC-BY-SA-NOAI, I'd enable it on my stuff going forward.

That would be great, but I'm pretty sure no LLM corporation would care about those flags, even with strict government regulations in place.
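
Agreed that enforcement is the real problem - the mechanical part is trivial. As a sketch of just how trivial (the NOAI label and the field names are made up for illustration; no such CC license exists today), a crawler that chose to comply would only need a filter like this:

  // Hypothetical opt-out check a well-behaved training crawler could run.
  // "CC-BY-SA-NOAI" is an illustrative label, not a real CC license.
  interface Post {
    id: number;
    body: string;
    license: string;
  }

  const NO_TRAINING = new Set(["CC-BY-SA-NOAI"]);

  function trainable(posts: Post[]): Post[] {
    // Drop anything whose author opted out of AI/LLM training.
    return posts.filter((p) => !NO_TRAINING.has(p.license));
  }

  const posts: Post[] = [
    { id: 1, body: "Use a parameterized query.", license: "CC-BY-SA-4.0" },
    { id: 2, body: "Here's my regex fix...", license: "CC-BY-SA-NOAI" },
  ];

  console.log(trainable(posts).map((p) => p.id)); // [ 1 ]

Which is exactly the point: the filter is ten lines, so whether it ever runs is purely a question of incentives and regulation, not engineering.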




> I think that "quality" will be really, really hard to objectively measure in the near future as the whole world of digital information becomes tainted with applied statistical models which can do a reasonably good job of predicting what people perceive to be high-quality reasoning, answers, content.

That's the scariest thing I've heard today. Lol.

Even now, I think the proper use of grammar and spelling, alongside assertive language, has a lot of people fooled into thinking LLMs are actually intelligent. It's hard to explain to people how LLMs can know everything and understand nothing.

I've been bullish on the idea of using domains as identity for a long time. I think that by using them as a universal ID, you could build reputation and trust across the internet, which would help everyone assess the reliability of information. If you add in attestations of factual info, it gets even more interesting. Ex: GitHub attests that user @john.example.com has 1000 commits to the XYZ project. Suddenly you have a more reliable way of ranking John's comments about XYZ as a topic, regardless of where they show up (as long as those identities are validated somehow).
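
A rough sketch of how verification could work, assuming the attester publishes a signing key at a well-known URL (the path, field names, and flow below are all invented; nothing like this is standardized):

  // Hypothetical attestation: github.com signs a factual claim about a
  // domain-based identity, and a consumer verifies the signature against
  // a key the attester publishes. The .well-known path and every field
  // name are assumptions for illustration.
  import { createPublicKey, verify } from "node:crypto";

  interface Attestation {
    attester: string; // "github.com"
    subject: string;  // "john.example.com"
    fact: string;     // "1000 commits to project XYZ"
  }

  async function attesterKey(domain: string) {
    // The attester serves its public signing key from its own domain.
    const pem = await fetch(`https://${domain}/.well-known/attestation-key`)
      .then((r) => r.text());
    return createPublicKey(pem);
  }

  async function isAttested(att: Attestation, signature: Buffer): Promise<boolean> {
    const key = await attesterKey(att.attester);
    const payload = Buffer.from(JSON.stringify(att));
    // Ed25519 verification; true only if the attester really signed the claim.
    return verify(null, payload, key, signature);
  }

Ranking John's comments then reduces to collecting attestations about john.example.com from attesters you already trust.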

If you look at that as "ranking people" and judge it in the context of being a valuable input for LLMs/AIs, the big push for "better" identity systems like "Passwordless" starts to look like a hell of a coincidence. My cynical side wonders if we'll see a push for identity systems validated via government ID. Something as simple as a "real human from Canada" tag would provide immense value for AI training (and marketing).

No matter what, I think AI is going to change the way online identity and reputation work. If it evolves into some kind of system with domains as identity, it'll be decentralized and provide long-term benefit. If instead we get verified IDs controlled by the current big tech companies, it could devolve into something disappointing or even detrimental for the average user.



