OT: Are there any alternate HN front-ends that fix this? I don't want to create an account to see the thread. I know there are browser extensions, but for security reasons I only enable ones that are vetted by Mozilla. I think HN used to have a "web" link for paywall bypass, but I don't see it here.
Not quite the same. You'd need to open the comments and scroll to find it, since it's usually not the top comment, and sometimes it's missing entirely. Twitter links often appear in comments too, not just as the submission link.
There are extensions that will "nitterize" Twitter links, among other things, but TBH the one I have been using is now unmaintained and I need to find another one.
You would be accessing Twitter posts through a 3rd-party service, which would see everything you would see. Because, you know, it can't be any other way?
So your insistence on 'privacy' yet 'only vetted by Mozilla' is misguided at best.
I think you misunderstand. I'm not concerned about the privacy angle, I just don't want to create a Twitter account. Twitter no longer shows threads if you're not signed in.
Re: Mozilla-vetted extensions, I cited security, but I'd love to learn something new about privacy if you'd like to share more.
Unsurprisingly, it is on the blog of an AI company. The article contradicts itself multiple times about who the developer of each of k0s, k3s, and k8s is. Even if you are unfamiliar with these tools, a human reading it can easily see it's false information from the contradictions alone, which goes to show that this "nops.io" is publishing AI blog spam without even a final human proofread.
And the worst part is that Google's automatic algorithms are consuming it and deciding it is good content, making it the featured snippet.
It could be AI, but content-farm garbage has been around for a long time. I remember seeing one on how to calculate total addressable market, and the whole post was "find out the total market, and determine what percentage you can address." That was back in 2018 or so, so someone had to put in the effort to write it.
Only now it's more scalable to do content-farm garbage with AI; it's cheaper SEO on steroids.
I'm optimistic it will one day be possible to sift through AI generated garbage, but it will take time just like it did with email/spam. And the most likely outcome will be through paid services, either paid content or paid filtering, just like email works best to this day.
I remember the early email days, the early 2000s: pretty much anyone could set up their own email server (qmail/sendmail), there wasn't much spam to worry about, and it took real effort to make spam cost-effective. Fast-forward to today: even though you can still set one up, it takes a crazy amount of effort to ensure delivery in and out because of spam abuse. That, or you pay a transactional fee to a large provider, which is the easiest way to keep your email from being flagged as spam.
The information super-trash-way: the data is feeding on itself.
We're heading back to the information dark ages. I don't know if I'm glad or sad. The pendulum is swinging the other way: printed books and face-to-face learning will come back in vogue as ways to get vetted information.
Yes! I've noticed some decay in content quality in different places recently. Search results have become especially bad; there are so many content farms with AI-generated nonsense now. And what's funny is that helpers like Bing Chat are searching the internet to give answers.
There's an upside though, this makes me use the internet and my smartphone less and less. And I'm sticking to very specific sources now.
Time for me to stock up on some HDDs to create a personal archive of data that hasn't been touched by AI yet...
Crochet figures generally look 'pixelated', where a pixel is a single stitch. The eyes on these figures don't look stitched at all. The hair looks like waves of material that also lack a stitched look (compared with the faces). The curves of the lower lip cut across stitches (going by the size of the stitches on the adjacent skin) and don't have that pixelated look.
[Edit] Also, I don't know what these crochet sites generally show, but in the pictures I've seen, it's usual to show the figure against a plain backdrop or on a table, not as part of a scene with a detailed background (which, in this case, also lacks stitches).
The only thing that gives it away for me is that I can’t imagine how the lips would be done, and I’m not 100% sure it is impossible (I don’t know anything about crocheting).
There’s a lot of other giveaways - the hair on the left and middle doll would just be loose yarn or thread, there’s no stitching, and getting them to that kind of detail and consistency would be a nightmare. The noses all lack any stitch definition but simultaneously seem to blend into the face fabric. The sleeve on the left one is layered in a way that would be near-impossible imo in a crochet context. There’s a ton of gauge changes (the yarn size getting smaller/larger) which, while not impossible, would be very very difficult especially at the size it appears to be. The tiny hands would be painstaking at best. Then there’s some felting for the white “V” on the left dress and embroidery for the details on the dresses and crown. If anyone could actually make those to match the image it would be hundreds of hours and never look nearly that perfect.
The noses did look a little suspicious, come to think of it.
For the hair, I guess it didn’t really stand out, I’d believe it was possible to get pre-made doll wigs or something like that. Thinking about it, though, it does seem a bit odd that it would look like bundled yarn up top and then more like hair in the back.
For the hands, it is kind of funny, I’m not sure if it is an artifact of the typical “AI is bad at hands” thing or “intentional” (to the extent that the output of a ML model can be called intentional), but the hands do appear to have some mistakes.
Understanding it doesn't help much. Our entire civilization has been on a path of generating fake everything for some decades now, and this is the final step.
Is this really different from what we've always dealt with? We have ghost stories, drawings of monsters, fiction books, staged photos, etc. The history of fiction goes back as far as non-fiction.
As an industry and artifact, AI is rapidly filling the gap between what our brains “measure” (novel, obnoxious, sexy, …) vs. intended outcomes (learning, threat mitigation, healthy mates, …)
Culture leveraged that mismatch.
But that gap has now been colonized by a new species class: the uber adaptive, fast generation, artificially intelligent attentionvore.
The farming of people has just begun. The next Facebook is coming. :(
EDIT: Also, you don’t actually have to write that script now…
The sheer scale of how fast AI can put out content is terrifying.
Think about all the work humanity has thought up, written out, drawn, and so on.
We can probably create an order of magnitude more content than the entire history of humanity has produced within a few days or weeks.
This is truly revolutionary. That doesn't mean what happened in the past wasn't; those technologies had profound impacts on human life, again due to the scale of those impacts.
But keep in mind that the introduction of those technologies was very painful for the people of the time, and only slowly got easier. We are the people that AI is going to be extremely painful for.
The analogy of the past doesn't mean that the same process will happen.
The industrial revolution, after all, was unprecedented in the history of human civilization until it happened. Nobody had a roadmap for how it would all play out.
With increasingly better AI, not even techies will be able to tell apart what’s real and what isn’t by just looking at it. You’re probably in for the ride.
> There was this poor lady on a board who crocheted this damn homunculus when it was supposed to be this elaborate picture and she just got totally scammed! Waste of material and time and money!
I would love to see this. Both the instructions and the outcome. I have no clue what it would-- could!-- look like.
I think that tech companies will soon have to grapple with the fact that the majority of people currently have, or will have, predominantly negative opinions of generative AI.
We too often get lost in our tech bubble. Of course generative AI is useful for us. We also understand the nuances of the technology, so we can be more deliberate in its use. This is decidedly not what is happening with the typical person.
To put it bluntly, for all of the fanfare last year, how has generative AI had a positive impact on the average person? And how has it had a negative impact? I think that I can make a stronger case for negative impact (or no impact) than I can for positive impact, as it currently stands.
This is not inherent to generative AI; I can think of other technologies that followed a similar trajectory. But the difference is that generative AI leaders literally think that they are about to create a techno-utopia, and they may be sorely disappointed when others don't feel the same way.
A lot of the criticism of generative AI comes from techies, though. I am one of them. I know there are plenty of others, certainly on HN, and I think on Twitter as well.
Personally, similar to your stronger case, I've now crossed a line where I find it more likely that generative AI will crash and burn and leave people shaking their heads at the misconceived notions of "making the world a better place" (the techno-utopias you say).
We've had a year of crazy hype about GPT-4 and I still can't figure out what Gen AI is going to be good for, other than letting people rip off creatives, for fun or a very limited amount of profit; or for flooding academic venues with auto-generated work that will bog down the whole process of science.
There's a middle ground between utopia and scam-tech.
Generative AI is a powerful tool this very second, and it is improving at an incredible rate. It seems pretty obvious that it's going to automate tons of the white-collar workforce. The hype will crash, no doubt, but it's not going quietly into the night, nor is it zombifying like some other hyped tech.
Anyway, I view the whole AI vs. anti-AI debate as a distraction. It's coming, like it or not. The significant question is who will be in control... Will AI be local, distributed among small providers, or concentrated in a few pseudo-monopolies?
> We've had a year of crazy hype about GPT-4 and I still can't figure out what Gen AI is going to be good for
Natural language interfaces to perform business-specific tasks, in concert with a UI to tweak things after generation, is the current killer use case in enterprise software, and it's exactly what's been built (and is being built). OpenAI's 1B ARR isn't just from people who pay for ChatGPT+.
You're a techie and can't find legitimate usecases for GenAI? That honestly says more about you than it does GenAI. From coding to chat-with-your-data to all sorts of general data/text transformations, there is so much there.
Don't get me wrong, there are absolutely negative externalities, but if you can't see the useful cases as well then you're deliberately blinding yourself.
> how has generative AI had a positive impact on the average person?
Generative AI, for the most part, is a technology in search of a problem. For now. The tech is in a “demos well, productizes poorly” stage. Once you try doing anything at scale and with proper evals of success, you see that real world performance is pretty poor still.
Yes yes we have lots of totally-not-cherry-picked papers where researchers achieved something fantastic. Then you look at the detail and it’s either “we ran this once because expensive” or “it achieved this great result almost 35% of the time so it’s state-of-the-art best-in-class”
> create a techno-utopia, and they may be sorely disappointed when others don't feel the same way
Every techno-utopia I’ve ever seen in movies, books, etc. has always secretly been a dystopia. It looks nice and polished on the surface, but achieves that result through aggressive oppression and the disenfranchisement of dissenting voices.
Star Trek is perhaps the only one that didn’t follow that pattern. And even then, there are groups outside the Federation who are not super happy with how the Federation does things.
My take is that the tech isn't in search of a problem, we're just early. It solves a lot of problems. The challenge is getting the right data pipelines in place to allow it to do that. The good news is that in many cases, it's a good thing to have that in the first place.
I agree! GenAI is great at solving lots of problems right now, but not all problems as many are hyping. And for many use-cases it's still too expensive/slow.
I wasn't trying to say it's useless, I was getting at this sentence from your 2nd paragraph: "[did it work?] The short answer is... mostly."
That "mostly" part is the cold reality many products have run into this past year. It's been the biggest problem in my experiments at least. The current tech requires a lot of babysitting.
Agree with everything you say, except perhaps the last sentence:
> But the difference is that generative AI leaders literally think that they are about to create a techno-utopia, and they may be sorely disappointed when others don't feel the same way.
I would argue it is fairly common for people in tech to speak in messianic terms of the golden age our technologies will usher in. History is littered with this world view. Of course, there are also critics from within the tech sphere, but I would say the dominant mental model is one of naïve optimism that crosses over into arrogance.
> the difference is that generative AI leaders literally think that they are about to create a techno-utopia, and they may be sorely disappointed when others don't feel the same way.
No, the difference is generative AI guys believe that the content generated by generative AI will soon be literally (literal literally, not figurative literally) indistinguishable from those made by humans.
It's quite different from other techs. When people invented movie film, they (almost certainly) didn't think watching a movie would be literally the same experience as going to the opera.
Are they wrong? I don't know. But so far I'm inclined to agree with them.
If we are talking about the real world I have never encountered a person with a negative opinion about generative AI, and I'm not just talking about tech people but people that mostly use ChatGPT for stuff like "some ideas for hiking trails in X".
Same! IRL everybody I know is at least positive to neutral on gen AI, with a few who don't like it. In school, it is very very widely used for obvious reasons, and I've seen a few friends use it for emotional support as well. The most opposition I've seen is always from online strangers.
> But the difference is that generative AI leaders literally think that they are about to create a techno-utopia
What honestly baffles me is that not even those are presenting a positive vision for AI.
On the contrary, the main PR angle of "generative AI" leaders right now is the "AI is a critical danger for humanity" doomsday cult. The one "positive" spin on this seems to come from the e/acc faction of the cult who frame it as some sort of natural evolutionary progression: "sure, AI will wipe out humanity, but that's actually a good thing".
But both the e/alt and e/acc legs of the cult seem to agree that a) AI is a threat to humanity, b) It's inevitable that AI will become more powerful than humanity and c) we nevertheless have to build it for some reason.
If the two publicly visible mindsets of its thought leaders are "cold-hearted Manchester capitalist who wants to replace even more workers with machines" and "tech-pyromanic madman who fears his soul to be eternally tortured by an AI unless he brings that very AI into existence" then it's not exactly a surprise if the tech is received negatively.
I'm more and more thinking that a government-issued digital identity (like https://privacybydesign.foundation/irma-en/) is the way forward: one that can be used to prove you're human (and, optionally, other details about you) but that can't be traced back to an individual, and that, again optionally, can be used to create (multiple) online personas. I used to think of these things as dystopian, but fake content by fake persons is a bigger issue. Of course, real persons could create personas for a bot, but a (personal and/or community-based) blacklist mechanism keyed on the root account (the real human who created the persona) would go a long way.
> a lot of these photos lead to for-pay patterns that are also created by AI (or just stolen from actual artists) that won't even approximate what's in the photo (so scams)
We need software tools where anyone can design their own ambitious but reliable designs.
But add to that machines that can create instances of those designs and I can see human crafting getting completely devalued.
We expected AI Spock. We got Robo Leonardo da Vinci.
> a lot of these photos lead to for-pay patterns that are also created by AI (or just stolen from actual artists) that won't even approximate what's in the photo (so scams)
The issue with generative AI techniques in general is how low the barrier to entry is. Various forms of information that used to be difficult or resource intensive to create have suddenly become approachable and even trivial in terms of resource investment to create.
Overall, in any sort of cost/benefit analysis, the cost is now so low that the benefits don’t have to be much of anything, if anything at all. Entertainment value alone, boredom, or a passing curiosity to try something are enough to create false or misleading information and push it out to the public, creating noise that has to be filtered through. There are plenty of other, far stronger motives that make the problem even worse.
Misinformation and disinformation were already becoming an increasingly large societal issue IMHO. That is only going to get worse with wide access to generative AI. We already have a high degree of erosion in social trust where we pretty much have to consider motives and driving forces behind every transactional relationship we have these days and we could at least use costs to help sort that mess out: why would someone bother investing the resources to do this? Does it cost a lot to present me with false information and if so, is there enough potential motive behind that to make this information more likely to be false or misleading?
The answer is increasingly yes. It’s now far more difficult to start from a position of distrust and move to a point of trust, or likelihood of trust, and I think we’re going to see that even more in all sorts of aspects of daily life. I now have to assume most pieces of information out there are targeting me and attempting to manipulate me in some way (more than before). I fear we’re moving to a model of free speech that puts more weight on “authoritative” sources than in the recent past, on the theory that authorities face liabilities for presenting false, misleading, or inaccurate information. Liabilities that in many cases aren’t real, just perceived, granting authoritative information sources far more credit than is due.
Social validation. When you're narcissistic, legitimacy really doesn't matter so much as others believing you're amazing. And it works because it's hard to prove anything online. Maybe this means hobbyists will actually have to meet in person again. Or online crocheting communities need anti-AI AIs to detect and block impossible fakes.
Yeah. Even sites that encourage "real identities" like Facebook are full of would-be liars seeking validation.
...But anti-AI AIs are not going to work. Even if they work technically (and I strongly suspect they will not), the keepers of the internet don't have a financial incentive to pay for them.
Seems like the larger internet will be zombified, and users will go back to private online circles and meatspace-focused interaction. That's not so bad.
I think you’re right. Generally speaking things go through cycles of centralization and decentralization.
Generative AI is going to cause centralization of trust back to authorities and authoritative sources for now. It’s already occurring for the news and media, and it will occur for hobbyist communities as well.
Perhaps with a public/private key “is human” verification we might see continued success for creators and content on the Internet, but I think that using AI to detect AI is likely to falter because it’s just easier to meet in person or use a key instead. It’s a matter of category difference.
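The "public/private key 'is human' verification" idea above can be sketched concretely. This is a minimal illustration, not any real system's protocol: the attestation authority and the message format are invented for the example, and only Go's standard `crypto/ed25519` package is a real API. The authority signs a persona's public key, and any forum can then verify the attestation without learning who the human behind the persona is.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

func main() {
	// Hypothetical attestation authority (e.g. a government identity provider).
	authPub, authPriv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	// A persona keypair; the persona, not the real identity, appears online.
	personaPub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	// The authority attests "this persona is backed by a verified human"
	// by signing the persona's public key.
	attestation := ed25519.Sign(authPriv, personaPub)

	// Any site can check the attestation with the authority's public key,
	// without learning anything about the human behind the persona.
	fmt.Println(ed25519.Verify(authPub, personaPub, attestation)) // prints "true"
}
```

A real scheme (IRMA's attribute-based credentials, for instance) would use blinded issuance so that even the authority can't link a persona back to the person who requested it; this sketch deliberately omits that step.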
Definitely a problem, but it will open up value for other contributors as well. Someone willing to make time-lapse videos of their entire project, or someone who creates a site where contributors are verified by other means (like sending work to verified assessors who can confirm it's real), will be able to sell verified patterns at a good price.
I've been seeing this and fake furniture/rooms on Facebook ever since Midjourney got semi-realistic looking. It happens so often I got tired of calling it out. Some older people think it's real, but for the most part I think people know it's not real.
I’m really tired of these takes where people look at a community they aren’t a part of that’s struggling with AI-based issues, hand-wave some non-solutions (usually involving some reference to “reputation”) and then shrug and go “and if you can’t solve these problems that society has never successfully solved then I guess you deserve to fail”.
I am sympathetic, and while I'm not part of the crochet world, I am a professional ceramicist, and it's even worse for us. The generated pictures I've created of pots are indistinguishable from genuine ceramics, even to my eye.
There is no recourse, no genie back in the lamp.
Photos of strangers saying "I made this" and trusting that is over.
Web of trust implementations will solve this. Not immediately, but eventually. I'm sympathetic to the pain of losing something to Schumpeterian creative destruction, but I also embrace it as an inevitable part of life. My dad's going to die some day, and I will weep, but I will not ask "why, Lord?".
There will be a certain percentage of the population, though, which fails to distinguish artifice from reality and essentially drives themselves insane by obsessively consuming lies. See the millions of women on Instagram posting photos that are very obviously doctored and filtered beyond all credibility and yet somehow inflicting depression and eating disorders upon each other anyway.
Agreed, online spaces will adapt. And honestly, how great was a community to begin with if the only thing keeping it from becoming a hotbed of fake content was that the users didn't have the tools to fake it?
I'm sure very few people will agree when I say that nothing of value has been lost, but honestly I think this is more about people's illusions about the online communities they engage in becoming harder to maintain than about those communities becoming worse somehow. I get the impression that people feel like their online spaces are suddenly being invaded by phonies using AI. If your community is so full of fakers and liars that it is dominated by fake content the moment it becomes possible to generate that content, and that content isn't recognized by a critical mass of existing users and laughed off the platform, then you have been interacting with a lot of fakers and liars all along. That sucks, and maybe it was better psychologically to have that social outlet even if you could only have it because you were being deceived, but there it is.
> how great was a community to begin with if the only thing keeping it from becoming a hotbed of fake content was that the users didn't have the tools to fake it.
Do online communities need to be great by some HN metric in order to be deemed worthy of survival?
Some knitters had fun sharing their work online. Now that's ruined by ML. If it was ruined by cryptocurrency mining instead, would we be posting similar "they had it coming" defenses of this destruction? There is a heavy bias on HN toward excusing the externalities of generative AI.
My hypothesis is that the community is not the community the earnest users thought it was. There is no metric I'm proposing apart from whether the people complaining would actually value the community in the first place if they had an accurate understanding of the type of people the community was made up of.
Let's say you go to the local comedy club and do your act. You aren't a professional comedian, but you want to do comedy for fun, and it's a social experience where you get to interact with people. You work from home, and social outlets are very important!
You do this for years, getting feedback from other comedians and enjoying yourself. Later you decide to buckle down and watch the great comedians at work. You watch the top performances by the top comedians, and to your horror you notice something disturbing. The other 'comedians' at your social club are just reading lines they copied down from famous comedy acts they watched on Youtube, verbatim. In fact, out of hundreds of other amateur comedians you had been having drinks with, and being proud when they praised your act or gave feedback, maybe one or two of them were actually writing their own material. They are all faking it to get social credit in this club.
At what point was the club ruined, all along or when you found out?
> honestly how great was a community to begin with if the only thing keeping it from becoming a hotbed of fake content was that the users didn't have the tools to fake it
It only takes a few users willing to post fake stuff to drown out a much larger number of real ones.
> I'm sure very few people would agree when I say that nothing of value has been lost, but honestly I think this is more about people's illusions about the online communities they engage in becoming harder to maintain than those communities becoming worse somehow.
You could easily do the same thing with HN.
Note that real contributions and moderation don't scale whereas fake stuff does scale. So nothing is proof against an avalanche of fakes.
Everyone will have a chance to be the big fish in a small pond again, rather than being a drop in a global ocean. Everyone's self-esteem will improve as a result. Nobody will feel inadequate compared to everyone else on the planet. Nothing is more demotivating than seeing leaderboards dominated by random Japanese kids that you'll never have a chance of ever beating.
Yeah, there may be a knitting goddess in Ohio you can't reach out to but you'll foster a deeper relationship with the hobbyist down the street, which would arguably benefit you more anyway.
> honestly how great was a community to begin with if the only thing keeping it from becoming a hotbed of fake content was that the users didn't have the tools to fake it.
That's like saying, "how great was your house to begin with if the only thing keeping it from becoming smouldering embers was someone with a can of gasoline and matches?"
It is more like you hire someone to repair a pipe and when they open the wall they find mold everywhere. You had been happy living in the house for the last 15 years, but now you know about the mold and you feel grossed out being in the house.
I feel like that's not a good analogy either; it's not like there was a flood of AI-generated garbage on these forums for the past fifteen years; it's not like a person discovering that they've been chatting exclusively with bots, not humans, for the past decade.
I think these forums are going to discover that a different, more stringent kind of moderation is needed now that doing this kind of shit is cheap. I guess maybe it's more like you've been living your house happily for 15 years, but with the change in climate, suddenly your house is full of a kind of rodent that hasn't been able to survive in your region, and you're just shoveling out buckets of rodent poop every day, and trying to figure out what kind of traps to set out to stop them from coming in.
(Except the rodents aren't just appearing naturally; your neighbor started a rodent breeding farm, and is releasing them into your house so they don't have to pay for extra food or space, and is selling them for a profit while your house gets slowly filled up with rat shit.)
Again, you are assuming the people using AI here are barbarians invading the forum. I suggest they may be the same people who were there anyway, and have been frauds all along. Before, they posted other people's work (taking pictures of other people's work IRL, buying things at thrift shops to post about, stealing pictures off other forums); now they are using AI to fabricate it. The use of AI did not lead to any new fraud in my scenario; it actually made the fraud obvious to the actual experts who were previously fooled.
Please don't stink up HN comments with your fetish trash comments. This is one of the last places online where the comment section is still free from lazy puns, repeated jokes, one-word replies and sleaze.