> Please don't position it so that if I want to use AI I have to defend myself from accusations of exploiting labor and the environment.
You, personally, likely are not (apart from electricity use, but that's iffy). But the technology you want to use could not exist, and cannot continue to be improved, without those two things. That's not unclear in the slightest; that's just fact.
> I'm open to that conversation and debate, but diatribes like this make it far too black-and-white with "good" people and "bad" people.
I get that any person's natural response to feeling attacked is to defend oneself. That's as natural as natural gets. But if shit tons of people are drawing the same line in the sand, no matter how ridiculous you might think it is, no matter how attacked you might feel, at some point, surely it's worth at least double checking that they don't actually have a point?
If I absolutely steel-man all the pro-AI arguments I have seen, it is, at the very best:
- Using shit tons of content as training data, be it written, visual, or audio/video, for purposes its creators never granted
- Reliant on labor in the developing world that is paid nearly nothing to categorize and filter reams upon reams of data, some of which is the unprocessed bile of some of the worst corners of the Internet imaginable
- Explicitly being created to displace other laborers in the developing and developed world for the financial advantage of people who are already rich
That is, at best, a socially corrosive if extremely cool technology. It stands to benefit people who already benefit everywhere, at the direct and measurable cost of people who are already being exploited.
I don't think you're a bad person for building whatever AI thing you are, for what it's worth. I think you're a person who probably sees cool new shit and wants to play with it, and who doesn't? That's how most of us got into this space. But as empathetic as I am to that, tons of people alongside you who are also championing this technology know exactly what they are doing, they know exactly who they are screwing over in the process, and they have said, to those people's faces, that they don't give a shit. That they will burn those people's ability to earn a living to the ground, to make themselves rich.
So if you're prepared to stand with them and join them in their quest to do just that, then I don't think anyone is obligated to assuage your feelings about it.
Your "steelman" is embarrassingly bad. Why play devil's advocate if you're going to do such a bad job of it? Here's an alternative:
- As a form of fair use, models learn styles of art or writing the same way humans do - by seeing lots of examples. It is possible to create outputs that are very similar to existing works, just as a human painter could copy a famous painting. The issue there lies in the output, not the human/model.
- Provide comfortable office jobs for people in economically underdeveloped countries, categorizing data to minimize harm for content moderators worldwide. One piece of training data for a model to filter harmful content can prevent hundreds/thousands of people from being exposed to similar harmful content in the future.
- Reduces or eliminates unpleasant low-skill jobs in call centers, data entry, etc.
- Creates new creative opportunities in music, video games, writing, and multimedia art by lowering the barriers to entry for creative works. For example, an indie video game developer on a shoestring budget could create their own assets, voice actors, etc.
- Reduces carbon emissions by replacing hours of human labor with seconds of load on a GPU.
> As a form of fair use, models learn styles of art or writing the same way humans do - by seeing lots of examples.
“a lot” is doing very heavy lifting here. The number of examples a human artist needs to learn something is negligible compared to the humongous amounts of data sucked up by AI training.
> As a form of fair use, models learn styles of art or writing the same way humans do - by seeing lots of examples. It is possible to create outputs that are very similar to existing works, just as a human painter could copy a famous painting. The issue there lies in the output, not the human/model.
I've seen this analogy parroted everywhere and it's garbage. Show me a human being that, in an afternoon, can study the art of Rembrandt and from that experience, paint plausibly Rembrandt style paintings in a few minutes each, and I'll swear by AI for the rest of my life.
Absolute bunk.
> Provide comfortable office jobs for people in economically underdeveloped countries, categorizing data to minimize harm for content moderators worldwide.
... who do you think the content moderators are? It's the same people being paid pittance wages to expose themselves to images of incredible violence, child abuse, non-consensual pornography, etc. etc. etc.
No person should have to look at that to earn a GOOD living, let alone a shit one.
> One piece of training data for a model to filter harmful content can prevent hundreds/thousands of people from being exposed to similar harmful content in the future.
Yeah this is the exact nonsense that is spouted every time you criticize this shit. "Oh all we need to do is absolutely obliterate entire swathes of humanity first, and theeeeeen..." with absolutely zero accounting for the job that has to be done first. And again, I don't see any AI scientists stepping up to page through 6,000 jpegs, some of which depict unspeakable things being done to children, oh no. They find people to do that for them, because they know exactly how unbelievably horrible it is and don't want themselves being exposed to it.
If it's so damn important, why don't YOU do it? If you're going to light someone's humanity on fire to further what you deem to be progress for our species, why not at least have the guts to make it your OWN humanity?
> Reduces or eliminates unpleasant low-skill jobs in call centers, data entry, etc.
And where are those people going? Who's paying them after this? Or are you going to suggest they attend a weekend Learn-to-Code camp too? And who's paying their wages in the middle of that transition, when the skills they have become unmarketable? Who's paying for their retraining? Or are we just consigning entire professions worth of people to the poorhouses now without so much as a thought?
> Creates new creative opportunities in music, video games, writing, and multimedia art by lowering the barriers to entry for creative works.
Derivative works. No matter how much you want to hype this up, AI is not creative. It just isn't. It gives you a rounded mean of previous creations that it has been shown, nothing more. AI will never invent something, in a thousand years it will not. This is why people call AI art soulless.
> For example, an indie video game developer on a shoestring budget could create their own assets, voice actors, etc.
Have you seen those games? They're shit. They're lowest common denominator garbage designed to get hyperactive kids on iPads to badger their parents into spending money.
> Reduces carbon emissions by replacing hours of human labor with seconds of load on a GPU.
So like, this just straight up means you know damn well people are going to die from this. They will be displaced, their labor made worthless, and they will perish. That's just... what you said there, because otherwise the statement "reduces carbon emissions" makes no sense: if someone gets fired and gets a new job, their carbon emissions do not necessarily go down, and they certainly aren't eliminated.
> Show me a human being that, in an afternoon, can study the art of Rembrandt and from that experience, paint plausibly Rembrandt style paintings in a few minutes each, and I'll swear by AI for the rest of my life.
So it's okay to learn, but only if you do it very slowly? I surely don't need to point you to the existence of forgers - you know a human can study the art of Rembrandt and paint plausibly Rembrandt style paintings.
> are we just consigning entire professions worth of people to the poorhouses now without so much as a thought?
We have been doing that since the dawn of history - what makes this any different from cars obsoleting the horse drawn carriage? Computers have been automating people's jobs for decades - should we ban programming writ large?
Where, exactly, do you feel the line ought to be drawn?
> So it's okay to learn, but only if you do it very slowly?
No, it's a fundamentally different process with different results. An artist learns from previous artists to express things they themselves want to express. An AI digests art to become a viable(ish) tool for people who want to express themselves, as long as that expression resides somewhere in the weighted averages of the art the model has digested. Two fundamentally different things, apples and oranges, and not without its own set of limitations, either. Despite the rhetoric around this stuff that anyone can create anything, that's just not true: you can create anything for which you can find a suitable model, one that was itself trained on a LOT of material similar to what you want to create. Effectively automated ultra-fine scrap-booking.
Honestly, if creativity is your thing, even if you find creating difficult for whatever accessibility reason you feel like pretending you care about, you will find AI more frustrating than anything: the bounds of your creativity become the model itself, plus whatever safeguards the provider has decided are important to put in place. You've just exchanged one set of limitations you probably can't control for another set you definitely can't control.
> I surely don't need to point you to the existence of forgers - you know a human can study the art of Rembrandt and paint plausibly Rembrandt style paintings
Yes, and those are worthless once found out, just like AI art. And again, you've sidestepped the scale: Adobe Firefly can bash out 3 images at solid resolution in roughly 2 minutes. No human can even dream of getting close to creating Rembrandt forgeries at that rate.
> We have been doing that since the dawn of history - what makes this any different from cars obsoleting the horse drawn carriage?
Because cars cost a fortune when new and were toys for the wealthy, before Henry Ford came along some three decades later to fix that. And even then, the former farriers had time to retrain for new work. Not to mention, carriage builders stayed employed for decades into the rise of cars, because originally buying a "car" meant you got a chassis, suspension, engine, and the essentials, which you would then take to a coach builder to have a "skin," if you will, built around it. Hence the term "coachwork."
> Computers have been automating people's jobs for decades - should we ban programming writ large?
Is this the "debate" you were saying you were open to? Hyperbolic statements with zero substance? I can see why few want to have it with you.
> Where, exactly, do you feel the line ought to be drawn?
Consent. Tons of people's creative output was used to build machines to replace them, without their consent and, more often than not, explicitly against their wishes, under the guise of a "research project" rather than a monetized tech product. Once again a tech company bungles into the public square, exploits it for money, then makes us live with the consequences. I frankly think that question ought to be reversed: what makes OpenAI entitled to all that content, for a purpose it was never meant for, with zero permission or consent from its creators?
I'm not opposed to ML as a concept. It has its uses, even for generating images/writing/what have you. But these models as they exist now are poisoned with reams of unethically sourced material. If any of these orgs gave even the slightest shit about ethics, they'd dump them and retrain with only material from those who consented to have it used that way. Simple as.
May I just say, as a third-party simply reading this back and forth from the outside, that the tone of your writing and the implied attitude with which you are engaging in "debate", reads as very aggressive and uninterested in actually having a sincere discussion. To me at least.
I imagine you probably won't like this comment, but perhaps you might use it as an opportunity for reflection and self-awareness. If your interest is actually to potentially change someone's mind, and not just "be right", you might consider approaching it in a different way so that your tone doesn't get in the way of the substance of arguments you wish to make.
You aren't wrong in the slightest, apart from the impression you've gotten that I'm here for a debate. I'm not. I've been having this debate since the StableDiffusion blow-up in mid-2023. I've read these points restated by countless pro-AI people and refuted them probably dozens of times at this point, here and elsewhere, always ending in a similar deadlock where they just stop replying, either because they're sick of me or because I've "won," whatever that means in the context of online discussion.
Nevertheless I'm always open to be persuaded by actual arguments, and I have on numerous issues, but I have yet to see any convincing refutations on these points I've outlined here regarding primarily, but not limited to:
- The unethical sourcing of training data
- The exploitation of lesser-privileged workers in managing it
- The harm being done and the harm that will be done to various professions if they become standard
And not mentioned in this thread:
- These various firms' superposition between potentially profitable business and "research initiative," depending on whether they're trying to attract investment or abuse the public square, respectively
- The exploitative/disgusting/disinformative things these AIs are being used to produce, in a society already saturated with false information and faked imagery
But these discussions usually dead-end, like I said, when the other person stops answering or invokes the "well, if we don't build it, someone else will" line, which is also unpersuasive.
Relating specifically to your point about wanting to change someone's mind: in my first comment I do feel I put out an olive branch, with empathy for being excited about a new thing. But when the new thing in question is so saturated, beginning to end, in questionable ethics... I'm sorry, there's only so much empathy I can extend. If you (not you specifically, but the theoretical you) are the kind of person ready to associate with this technology at this stage, when its foibles and highly dubious origins are so well known, then I'm not overly interested in assuaging your feelings. This person came into this thread bemoaning the fact that so many people are calling them out on this and they're sick of it, and like, there's a great way to stop that happening: stop using the damn technology.
I will always extend empathy, but if your position amounts to whining about people rightfully (IMO) pointing out that you are using unethical tech, and wishing they'd stop? Like, sorry not sorry, man, maybe you shouldn't use it then. Then you get yelled at less and a clear conscience. Win/win.
But I do appreciate the reply all the same, to be clear. You aren't wrong. I've just had this argument too much, but also don't feel I can really stop.
> always ending in a similar deadlock where they just stop replying, either because they're sick of me, or I've "won"
My general experience on Hacker News is that threads rarely go beyond one or two replies, so I'll often tap out on the assumption that the other party isn't likely to actually read/respond to any thread more than a couple days old. As far as I know, there's not any indicator when someone replies to your comments, unless you go and check manually?
If I'm just using the site wrong, do please let me know!
Otherwise, I'd suggest you might want to update from "sick of me" to "never saw the reply due to the format of the site". For what it's worth, it took me a while to adjust to that.
> An artist learns from previous artists to express things they themselves want to express.
Ahh yes, that well known human impulse to produce stock artwork for newspapers and to illustrate corporate brochures. I can't imagine what the world would be like if we let cold, soulless processes design our corporate brochures!
I suppose this argument works for Art(TM), but why is it relevant to the soulless, mass produced art? Should it be okay to discard all the artists who merely fill in interstitial frames of an animation? Is "human expression" actually relevant to that?
> And again, you've sidestepped the scale
Pick one: either this is about speed or it isn't. Would you actually be fine with AI art if it was just slower? If not, then stop bringing up distractions like this. If this really is just about scale, it's a very different conversation.
> Because cars cost a fortune when new and were toys for the wealthy, before Henry Ford came along some three decades later to fix that.
Sorry, when did Rembrandt paintings stop being toys for the wealthy?
> And then, the former farriers had time to retrain for new work.
So, again, it's just that progress is moving too fast? If we just slow things down a bit and give the artists time to flee, that makes it okay?
> Hyperbolic statements with zero substance?
We haven't talked before, so I didn't know whether you were someone who was okay with automation putting people out of work. That's hardly zero substance. I'll assume this means you're fine with it, since you don't think it's even worth discussing.
> Consent
Okay, so, bottom line: you're saying that if they spend a few billion to license all that art, and proceed to completely replace human artists with a vastly superior product, you're OK with that outcome? (I'm not saying this is inconsistent, just trying to understand your stance - previously you were talking about the importance of artists expressing themselves and the speed at which AI can do things - what's actually important, here?)
> Ahh yes, that well known human impulse to produce stock artwork for newspapers and to illustrate corporate brochures. I can't imagine what the world would be like if we let cold, soulless processes design our corporate brochures!
As someone who works on the side in creative endeavors, I assure you that even the work I would prefer not to do carries with it my principles as a designer and a small piece of my humanity. Every last thing, even the most aggressively bland and soulless, contains an enigma of tiny choices built on years of making things, choices most people will never notice. Or at least, I always thought they didn't notice, until you start putting even bland corporate art next to AI-generated garbage. Then they do.
From the creative perspective, that's what I think lends it that... smoothed-over, generic vibe. An artist's "voice," even in something like graphic design, even in an oppressive and highly corporatized environment, is best characterized as a thousand tiny choices that individually don't impact the final product much, but together give a work its "humanity" that no machine can touch. When I, for example, design an interface: why do I consistently use similar dimensions for similar components and spacings? I honestly couldn't tell you. To me, it "looks nice," a word choice that undersells decades in my industry but is nonetheless the most fitting. And all of those choices are subject to change by committee later on, to be sure, but even so, they rarely are.
AI takes these thousands of tiny choices that contribute to this feeling and replaces them with a rounded mean of previous choices made by innumerable artists with different voices. It takes the "voice," as it were, and replaces it with a cacophony of conflicting ones, liable to change its tone with each pixel. This, IMO, is its core failing.
> I suppose this argument works for Art(TM), but why is it relevant to the soulless, mass produced art? Should it be okay to discard all the artists who merely fill in interstitial frames of an animation? Is "human expression" actually relevant to that?
For the love of everything, yes. And you ask "why is it relevant for soulless, mass-produced art," but we already know why it is: Disney spent billions of dollars showing us, with the MCU, what happens when the content mill becomes utterly and completely detached from the art it was meant to serve. The newer movies just... look like shit, and not because of AI (probably?) but because all the movies are made down to a formula, down to a process: no vision, no plan, just an endless remixing of previous ideas, no time for artists to put in actual work, just rushing from task to task, frame to frame, desperately trying to crank it the hell out before their studios go bust.
People rag on generic, popular art but even popular art is art, and if you take away the humans (or as Disney did, beat them into such submission they can no longer be human) people definitely notice.
> Pick one: either this is about speed or it isn't. Would you actually be fine with AI art if it was just slower? If not, then stop bringing up distractions like this. If this really is just about scale, it's a very different conversation.
It's relevant because you're bringing up industrialized mechanization as a comparison, and it's really an ill-fitting one. The printing press, MAYBE, could be an example on the scales we're talking about, and the main difference there is that mass-produced books basically didn't exist and literacy among common people was substantially rarer; ergo, the number of scribes whose skills were displaced was much lower.
But the vast majority of "technology replaces workers" arguments can be (and you have already invoked this) compared to the industrial revolution, and again, the difference is scale. They didn't build a horseshoe maker by analyzing 50,000 horseshoes made by 800 craftsmen and have it then produce 5,000 of the things per day.
And sure, those horseshoes all suck ass, they're deformed, they don't work well, and the horses are visibly uncomfortable wearing them, but the corporate interests running everything don't care, and so shit tons of craftsmen lose paying work, horses are miserable, and everything keeps on trucking. That's what I see, all around me, all the time these days.
> Sorry, when did Rembrandt paintings stop being toys for the wealthy?
I mean, the art market being a tax-dodge and money-laundering scheme is a whole other can of worms that we really shouldn't try to open here.
> So, again, it's just that progress is moving too fast? If we just slow things down a bit and give the artists time to flee, that makes it okay?
I'd be substantially more pleased with a society that cared for the people it's actively working to displace, yeah. I don't think any artist out there is dying to make the next Charmin ad, and to your earlier point about soulless corporate art, yeah, I'd imagine everyone would have a lot more fun making anything that isn't that. The problem is we have millions of people who've gone to school, invested money, borrowed money, and constructed a set of skills not easily transferable, who are about to be out of work. And in our society, being out of work can cost you everything from the place that you live, to the doctors that heal you, to the food that nourishes you. I don't give a damn, and I doubt anyone does, about maintaining the human affect in corporate art: apart from the fact that those humans still need to eat, and most of them are barely managing as it stands now.
> We haven't talked before, so I didn't know whether you were someone who was okay with automation putting people out of work. That's hardly zero substance. I'll assume this means you're fine with it, since you don't think it's even worth discussing.
On the whole, less work is a-okay by me. Sounds great! The problem is we as a larger collective of workers never see that benefit: Instead of less work, we all just produce more shit, having our 40-hour week stuffed with ever more tasks, ideas, and demands of management as they add more automation and cut more jobs and push the remaining people ever harder.
We were on the cusp of a 30-hour workweek in the 1970s and now? Now we have more automation than ever but simultaneously work harder and produce more shit no one needs than we ever have.
> Okay, so, bottom line: you're saying that if they spend a few billion to license all that art, and proceed to completely replace human artists with a vastly superior product, you're OK with that outcome? (I'm not saying this is inconsistent, just trying to understand your stance - previously you were talking about the importance of artists expressing themselves and the speed at which AI can do things - what's actually important, here?)
What's important is I want people to survive this. I'm disillusioned as hell with our society's ongoing trajectory of continuously trying to have more, to do more, always more, always grow, always produce more, always sell more. To borrow Greta's immortal words: "Fantasies of infinite growth." I see the AI revolution as yet another instance where those who have it all will have yet more, and those who do not will be ground down even harder than they already are. It's a PERFECT solution for corporations: the ability to produce more slop, more shit, infinitely more, as much as people can possibly consume and then some, for even less cost, and everyone currently working in the system is now subject to even more layoffs so the executives can buy an even bigger yacht.
If you don't see how this stuff is a problem I don't think I can help you.
And maternal mortality is creeping upward here in the States, thanks to the cost of healthcare and Republicans' ongoing efforts to control women's bodies.
> We had a world-wide plague, and far less than 10% of the population died.
An inordinate share of those deaths was concentrated in America, because we've industrialized and commercialized political radicalization for profit.
> We have computers. We have the internet. We have an infinite wealth of media.
We have devices in our pockets that spy on us (also powered by AI), about five websites, and infinite derivative shit.
> We fixed the hole in the ozone.
That one I'll give you. Though the biosphere is still collapsing, we did fix the ozone hole and that isn't nothing.
> We eliminated lead poisoning.
Eeehhhhhh.... mostly? Plenty of countries still use leaded gasoline, and tons of lower-income people are still living in homes with both lead and asbestos.
> We are constantly making progress against world poverty.
In the developing world, maybe, but that comes with a LOT of caveats about what kinds of jobs are being created and how well those workers are being paid. China has done incredible work lifting its population out of poverty, but not without costs that the CCP is only now starting to see the problematic side of. India is a similar story. And worth noting, both of those success stories, if you decide to call them that, are based heavily on some creative accounting and massive investment from the West. I don't think that's a bad thing, but I'm also guessing said investors expect to be paid back, and it's finite and unsustainable.
Meanwhile, in the developed world, workers are getting fucked harder than ever. Rent is now what, 2/3 of most people's income? People out here are working three jobs and still can't make a decent living.
> We got rid of kings and monarchs and tyrants.
Are we living in a different world here? We have an entire wave of hard-right strongmen making big splashes right now. Trump was far from an isolated thing. No, they're not dictators... YET... but they don't usually start that way, if you study your history.
> War is so rare, we don't even bother with the draft despite the army struggling massively with recruitment.
Uh, I think some Gazans, Ukrainians, Iraqis, and Rohingya might take issue with that statement?
> You simply CANNOT look back on history and think we don't have it better
I mean yeah, I'm not one of those lunatics who think we were better shitting in caves. But that doesn't mean our society as it exists is not rife with problems, most of which have a singular cause: the assholes with all of the money, using that money to make the world worse, to make more money.
All of your objections are nitpicking about small, localized setbacks compared to massive global gains. As far as I can tell, we agree that the world is consistently getting better, and that these gains all come from technological progress. As far as I can tell, we agree that while the world isn't perfect, and some technologies do more harm than good, "technological progress" is a net positive.
I don't think you want to go back to a 50% child mortality rate, even if it somehow convinced Republicans to drop their crusade against abortions. I don't think you prefer World War 2 to the Ukraine war. I certainly don't think you want to reinstate monarchy and fascism across Europe.
If I'm wrong, then go ahead and tell me what decade you want to rewind to - what progress are you willing to give up?
If I'm not wrong, then... how does this at all lead to "hence being pissed about AI"? What's so uniquely evil about AI that we should give up the gains there, and assume it's a net evil in the long term, compared to everything else we've done?
> All of your objections are nitpicking about small, localized setbacks
Small wars are still wars. No, we don't have any global conflicts with cleanly drawn sides like the Axis and Allies of World War II, true enough. But that's not because war is done or distasteful: it's because global hegemonic capitalism now rules all of those societies and makes certain such wars don't happen between the countries that matter. Which is why we had the "police actions" in Vietnam and Korea, why we had Operation Iraqi Freedom, why we nearly went to war with South America over the price of bananas, etc. The colonial powers have essentially unionized and now use the bludgeon of American military might to keep poorer, indebted nations in line. And if those nations fail to capitulate, a reason will be manufactured to unseat the power in that place, more often than not by force, more often than not with heavy civilian casualties and economic destruction, the rebuilding of which will in turn be financed by the West afterward, so the poorer countries never have a ghost of a chance in hell of standing on their own two feet and making their own fucking decisions about their resources and people.
That is not due to technical progress. Technical progress is, if anything, jeopardizing that balance because the information now is much harder to contain about how absolutely fucked everyone in the global south is at basically all times.
> As far as I can tell, we agree that while the world isn't perfect, and some technologies do more harm than good, "technological progress" is a net positive.
I would absolutely cosign that, if said technological progress weren't extremely concentrated in the wealthy nations on this planet, while the other ones make do scrapping our old ships in tennis shoes and smoking the cigarettes we export to them.
> I don't think you want to go back to a 50% child mortality rate, even if it somehow convinced Republicans to drop their crusade against abortions.
No I want Republicans to govern on conservative principles, not mindless culture war bullshit. And I'd also like the Democrats to stop governing on conservative principles because their opposition in the states is a toddler eating glue and screaming about pizza places on the floor of the fucking Senate.
> I don't think you prefer World War 2 to the Ukraine war.
All war is terrible, the scale is irrelevant.
> I certainly don't think you want to reinstate monarchy and fascism across Europe.
A lot of fascist-leaning voters in Europe might do it anyway though.
> If I'm wrong, then go ahead and tell me what decade you want to rewind to - what progress are you willing to give up?
I want the progress. I just don't want it hoarded by a particular society on our planet. We ALL deserve progress. We ALL deserve to earn a living commensurate with our skills, and we ALL deserve to be supported, housed, and fed, and we already have the resources to do the vast, vast majority of it. We simply lack the will to confront larger issues in how those resources are organized and distributed, and the fundamental inequities that we reinforce every single day. Largely because a ton of people currently have a lot more than they need, a small number of people have a downright unethical amount, and the latter group has tricked the former into thinking they can join it if they only work hard enough, while also robbing them blind.
> If I'm not wrong, then... how does this at all lead to "hence being pissed about AI"? What's so uniquely evil about AI that we should give up the gains there, and assume it's a net evil in the long term, compared to everything else we've done?
It's not uniquely evil at all. It's banal evil. It's the same evil that exists everywhere else: the tech industry inserts itself into economies it doesn't understand, creates something that "saves" work compared to existing solutions (usually by cutting all kinds of regulatory and human corners), sells it with VC money, crushes a functioning industry underneath it, then raises prices until it's no cheaper at all (maybe even more expensive). And now half the money made from cab services goes to a rich asshole in California who has never driven a cab in his life. It's just that, over and over and over. That's all Silicon Valley does now.
Okay, seriously? You don't care whether 100 or 100,000,000 people die? You don't see ANY relevant differences between those two cases? It must be perfect, or else we haven't made any progress at all?
I don't think I can help you understand the world if you really can't understand the difference there.
You take ONE SINGLE POINT out of that entire post just to bitch about me making the perfect the enemy of the good?
My point isn't that 100 people dying isn't preferable to 100 million people dying. My point is that the 100 people died for stupid, stupid, stupid reasons. Specifically, the ongoing flexes of the West over the exploited Global South.
Overall, I think you make a fairly convincing argument for all sorts of social changes - the problem is, that's not actually what you're advocating for.
> We ALL deserve progress. We ALL deserve to earn a living commensurate with our skills, and we ALL deserve to be supported, housed, and fed, and we already have the resources to do the vast, vast majority of it.
This is a great argument for UBI, or socialism, or... well, see, the problem is precisely that you never actually define anything actionable here. You've successfully identified a major problem, but your only actual proposal is "oppose AI artwork".
The problem is, "opposing one specific form of progress" doesn't actually do much at all to fix the issue. And indeed, if we had UBI or increased socialism/charity programs, then we wouldn't need to stop ANY form of progress.
And, of course, fixing the underlying issue is incredibly hard. We've tried Communism twice and proven that it's vastly more destructive. The Nordic Model seems to be doing well, but there's all sorts of questions on how it scales. And you're not actually proposing anything, so there's no room for the real, meaningful debate about those methods.
So if you're prepared to stand with them and join them in their quest to do just that, then I don't think anyone is obligated to assuage your feelings about it.