I don't want anything your AI generates (coryd.dev)
235 points by cdme 9 months ago | 333 comments



I'm not a fan of this hyper-aggressive line-in-the-sand argumentation about AI that pushes it all precariously close to culture-war shenanigans. If you don't like a new technology, that's perfectly cool and your right. Please don't position it so that if I want to use AI I have to defend myself from accusations of exploiting labor and the environment. That is NOT at all clear, settled, or even correct much of the time. I'm open to that conversation and debate, but diatribes like this make it far too black-and-white with "good" people and "bad" people.


The issue is that without loud declarations like this, the money men will just soldier on implementing a shittier future.

It's always do something first and ask for forgiveness later. But by the point you ask for it, it's too late and too many eggs have been broken. And somehow you're richer at the end of it all and thus protected from any consequences, while everyone else is, forgive my French, fucked.

Has Facebook been a net positive so far? Has Twitter? You may make a case for YouTube, but what about Netflix?

It's only been good to us (engineers) and our investor masters, but not for the 90% of the rest, which, may I remind you, is the distribution that created us in the first place. Sorry for being dramatic, but I do seriously think these things need to be reined in, especially people like Altman, who, while believing themselves to be good-willed (and I have no doubt he is a better man than Musk, for example), end up being the Robert Moseses of our generation: people with good intentions who end up making things worse overall.


Why would YouTube or Netflix be a net negative?


YouTube was one of the sites where the recommendation engine, trained to increase engagement, was pushing people into conspiracy theories and politically divisive content… and some other darker stuff.

They have done some work to try and mitigate some of that, but it seems like it will be a cat and mouse game between the AI and society, and a lot of damage was already done.


I said YouTube isn't a net negative. Even then, you have to think globally: what about children who grew up consuming AI-generated abomination animations made with zero oversight from even their greedy creators?

Netflix was the cause of the current slate of tax-writeoff cancellations (no Netflix, no overinvestment by Warner etc., and no clumsy Zaslav cleanup), terrible shows being greenlit, homogeneity of camerawork, casting, identity-politics pandering that nobody asked for, oversaturation of the market with streaming services that is almost as bad as cable, etc. It's basically a cancerous overgrowth: good short-term and terrible long-term. The binge-TV model is also not a good thing, in terms of how much time it takes up, how little pleasure it brings, and how it validates our growing impatience. I could go on and on. Does anyone even remember Mank or The Killer? Imagine Poor Things being a Netflix-only release. Nobody would say a single peep about it outside of niche film Twitter accounts.

Generally speaking, if you've read "Seeing Like a State", you can apply the same logic to companies and entire industries, or really any aspirations of "man". We crave control and fear uncertainty, so we make environments far more deterministic, which brings more short-term profit but ruins the environment (be it nature or film itself). Look at Disney: Iger created the superhero-movie boom (by making it super deterministic and boring: every movie is part of a giant puzzle so that each piece brings money), but in the process he killed star vehicles and killed experimentation (by directors and actors), and now Scorsese and Coppola need to throw their weight around to reverse the course. Sure, A24 exists, but before all this, movies were essentially A24 movies. Now a major star being in a horror movie is an "Event". (Who is even a star anymore? DiCaprio, Cruise? These people have been around since the '80s. You think Chris Evans will have the same longevity?) Yeah, there were similar periods of dominance ('80s action movies), but they weren't so precisely fine-tuned, and they featured greater directorial freedom and less emphasis on being non-offensive.

I guarantee you none of you will quote the Avengers movies (Ultron onwards) in the next 20 years. People still quote Terminator 2 or Predator or Lethal Weapon, despite them also being brainless flicks (some not so brainless, actually). Look at Dr. Strange 2: they forced Raimi to stop being Raimi, basically (the first movie has some good moments), and made him fall in line with the "agenda", because the plan™ is too important to compromise on. In reality the plan™ is the money perpetuum mobile, lol. Sure, these people were always greedy, but the stochasticity of the system allowed good stuff to pass through their Eyes of Sauron, lmao.

Tbh I don't even know why I'm responding to a "throwaway" account.


> terrible shows being greenlit, homogeneity of camerawork, casting, identity-politics pandering that nobody asked for, oversaturation of the market with streaming services that is almost as bad as cable, etc. … The binge-TV model is also not a good thing

None of this is unique to Netflix. Terrible shows have been greenlit since the dawn of television. Shows are incredibly homogenous because they're largely produced by a select few people. If anything, Netflix has broken that homogeneity by allowing more indie film/TV creators to break through (like Squid Game).

Casting and identity-politics shenanigans are definitely not unique to Netflix and started way before Netflix started producing content. The oversaturation problem only became a problem when all the other networks wanted their own slice of the pie. It was actually great for a while, when Netflix was the only big player in town.

And finally, binge-TV has always been possible. My grandmother would sit in front of cable television sun-up to sun-down, watching whatever was on. 24-hour marathons of MythBusters and shows like it were very common. Reruns of all your favorite sitcoms play all night on all the major cable channels. Bingeing isn't a problem unique to Netflix; Netflix just lets you do it with new shows instead of waiting arbitrary amounts of time.

Also, is binge-reading a novel unhealthy? I've had 8-hour reading sessions when I'm gripped by a great book, like the last book in the Wheel of Time series. If that's acceptable, why isn't watching a show for 8 hours acceptable? I don't think the medium really has that much of a tangible effect. Now, if you're bingeing shows every day, then it's unhealthy. But once a quarter, when a new show you like comes out? Idk, that seems fine to me.


I'm not sure if you're joking about bingeing, but you have to be delusional to think that someone taping their favorite show on their own, or waiting for it until it ends, is anywhere near the same as dropping all episodes at the same time on principle for every show, offering only that option for a long time, building your whole UI around it, and encouraging this behavior. The point about others jumping on the bandwagon could only happen because Netflix broke down the barrier and over-invested (by going into the deep red) to justify accessibility, knowing full well they wouldn't be able to keep up the steam indefinitely. I know you're trying to be smart here, and failing, but are books built to be binged? Do they know when your attention is dipping? Do books have a UI to do this? You could make the same inane argument about doing math for 8 hours a day or something.

Also, the scale at which Netflix was throwing money around was unprecedented, so much so that other TV shows and writers were making fun of it. That's like saying periods of a good investment climate are equivalent to a dot-com crash or a housing bubble.


Not to comment on AI, or the merits of television as a medium here, but specifically on the drop-releases of entire seasons of shows.

I do not want to retain the context of some show across weeks. If I'm going to watch something, it will be all in one go, over the course of some reasonable time period that _I_ define - that may be a single day (transatlantic flight, for example), or may be a single week.

Typically for the streaming services that don't release all episodes at once, that means I won't even start until the complete season is available, and almost inevitably will get so annoyed by the service that I will just cancel a subscription to it.


I’m not joking.

> but you have to be delusional to think that someone taping their favorite show on their own, or waiting for it until it ends, is anywhere near the same as dropping all episodes at the same time on principle for every show, offering only that option for a long time

What? The only reason cable didn’t release seasons all at once was to maximize profits. They get to run more ads, to force a user to stay subscribed for longer to finish their favorite show, charge studios extra for prime time spots, and more. These big productions are usually done with the whole season by the time it would release on cable. It’s not like they would stagger the release out of the goodness of their hearts to help people avoid bingeing.

What does it matter if you watch 8 hours of the same show or 8 hours of different sitcoms? I never mentioned taping a show and bingeing it all at once. I mentioned the fact that some people will watch large amounts of television regardless of what’s actually on the screen.

> I know you're trying to be smart here, and failing, but are books built to be binged?

And I guess you're trying to be smart and failing? What's with the backhanded comment? Why not engage with the argument being made instead of making comments like this?

That's beside the point. The argument I was making is that people for some reason find bingeing a show for 8 hours morally reprehensible, but reading a book is fine. The same thought process has been applied to gaming for long sessions. My argument is why are these different mediums deserving of different moral judgments? What makes reading, doing math, playing video games, or watching tv for long periods of time more or less reprehensible? These all serve one purpose: activities meant to entertain (maybe not math). Why does the medium the entertainment is delivered through make it any better or worse?

It isn’t. I think the thing people have a problem with, rightly so, is the lack of balance. It’s unhealthy to obsessively engage in one activity for long periods of time. But that’s a different argument altogether.

All in all, you didn’t really refute anything I said or try to show how any of these things are unique to Netflix. I agree with you by the way, but you’re framing your argument poorly in my opinion.


I honestly am not sure what you were arguing for in the first place. I agree that I was being rude to an extent, but your previous comment didn't lend itself to the most charitable interpretation, as it wasn't clear to me what in mine prompted you to respond by refuting my points.

> My argument is why are these different mediums deserving of different moral judgments? What makes reading, doing math, playing video games, or watching tv for long periods of time more or less reprehensible? These all serve one purpose: activities meant to entertain (maybe not math). Why does the medium the entertainment is delivered through make it any better or worse?

I'm not making a moral judgment on the medium itself. I love the medium; I'm a cinephile, to be honest. I'm saying any entertainment that becomes gamified like this (be it books or shows), gaining more and more control over you by way of gathering data while you use it, is worse than the same type of standalone entertainment that has less influence. I don't think you will argue with me on this: a passive TV cannot influence you as much as a system that actually keeps track of your activities.

Basically, any medium that overstays its welcome in your life through underhanded tactics is bad, in my opinion. If you've read The Diamond Age, then you remember the "Illustrated Primer", a book that is basically adaptive AI weaving your life into its storytelling. Imagine an e-reader with GPT-6 embedded that does just that, but instead of teaching you, it keeps creating a more and more compelling story full of ads or something. I'd be equally opposed to that (and to the reading of it). For me it's not the medium; it's the vehicle of delivery becoming bigger than the delivery itself. The horse becomes the cart, if you will. So a period of seeming freedom followed by this winter isn't good for the industry, basically.

Now I'm not claiming Netflix is responsible for Marvel/Disney, those are separate beasts and processes. But I do argue that they come from the same tendency and desire that fuels other companies I mentioned prior: FB, Twitter, YouTube etc.

In terms of how Netflix itself is responsible, my argument is that its underhanded tactics in 'disrupting' the industry (lowering the threshold of entrance, running at a loss for a time) forced the other players into the same local minimum, and now everyone is stuck there. To make it even clearer: my issue is that Netflix ushered in an era of greater centralization and homogeneity, where practices throughout the industry became even narrower and things like mid-season cancellations became even more the norm. Now, I'm not sure if it's necessarily different from the past (probably not), but knowingly creating a bubble, with the resultant layoffs and losses of jobs, is no different from a drug dealer who gets you pure stuff the first few times and then sells you diluted dope once you're hooked.

As I said, I don't think anything has fundamentally changed in terms of how the money men operate; what I don't like is how we keep giving them tools to become more and more powerful, which is what I was railing against throughout this thread. Yes, we depend on their funding, but it doesn't mean we have to help them secure their empires to a 1984 extent. Because at the rate it's going, it will happen. Altman is a person who (given his recent actions, like military contracts) will lead us down that way. The employee revolt showed that these engineers only care about their bottom line.

Anyway, apologies for misinterpreting your point, but I do think you also didn't necessarily get mine. Since we are not in disagreement, we can continue the argument in more civil terms.


Man i really hate how people absolve themselves of responsibility like this. "It's not my fault I spent all weekend watching Netflix, it's their UI!"

No, it isn't. I love when they release full seasons at a time. I can watch them at whatever pace I please. If some degenerate can't control themselves that's their problem.

"But muh children" parent them. "But irresponsible parents" Netflix is probably the best case scenario there.


I don’t interpret this discussion as about absolving oneself of responsibility. To be fair, what people spend their weekends doing is none of my business.

But it is true that Netflix makes UI decisions that encourage binging. They are not evil for doing so, because honestly, there isn’t anything wrong with binging anyway, but it’s indicative of the logic that is going to be used when it comes to producing their own shows.


You could say they encourage binging, or you could say they have good UX. I don't really know what type of functionality we're talking about here; I'm thinking of things like automatically playing the next episode, skipping intros, etc. To me that's just good design; it's exactly what I want the app to do.

And I don't really see how Netflix benefits from binging. It seems to me that Netflix is more like gyms - they want customers who pay but don't use the service. The more you watch the more you're costing Netflix. If you pay but never watch anything you're the perfect customer, they're collecting free money.

If you just watch a full show in a weekend and unsubscribe that's really not ideal for them.


What if they are spending their weekends building devices to kill you? Does it now become your business?


No one is saying you don't have agency as an individual. This is an aggregate statement. In any A/B test you're interested in the proportion of people converted, i.e. who displayed an increase in the desired behavior. What you can't do is go on and extrapolate this to any individual person, because that's not how statistics work or are designed.

You're taking a right-wing/libertarian approach (no judgement) where everyone has complete free will to do anything they want and makes fully informed decisions. Rational actors and all that. Reality is quite different, and if you don't believe me you can peruse a ton of work in behavioral economics that shows it.

Hell, I don't even need to go far to conjure an example: gambling addicts.


I don't really know what you think I don't understand. I'm not arguing from the point of view of an A/B test; I'm arguing from the point of view of a Netflix user.

I don't care how others ruin their lives. There are a million and one ways to do so, and if you try to remove one, they'll find another. If you choose to binge Netflix all day, that's on you. If you choose to overindulge in drugs or food or whatever, that's on you.

By all means, help people who need it. I'm a strong supporter of all kinds of social safety nets. Free healthcare, free rehab, free counseling, free education, bring it on. It's the best possible investment a society can make, any society that ignores these obvious improvements is shooting itself in the foot. If someone needs and wants help, help them.

All I'm saying is Netflix (and similar) is great the way it is. It's a much much much better experience than tv used to be.

So when I see people seemingly hold them responsible for the behavior of their users, it honestly makes me angry. They're doing what we want. We should celebrate them for that, not criticize them. It's not their fault people can't control themselves.

And I fail to see what you think the motivation is. You seem to think Netflix is secretly scheming to make their viewers binge more. Why? That would be like a gym trying to make its paying members come in to the gym. Fact is, if everyone went to the gym on a weekly basis, they wouldn't have space for half of their members and they would go bankrupt. I don't know the details of Netflix's server costs, but I'm betting that if everyone on there were to start binging everything, they would go under as well. I don't see any reason why Netflix would want people to binge more. Not one. It would increase server costs, maybe also licensing costs, bring in zero extra money, and once people were done watching everything interesting, they would unsubscribe. It seems much more favorable for them if people watched one episode per week and kept paying for years while hardly using Netflix's servers.

But we don't want one episode per week, we want to decide our own pace. We don't want to have to choose to play the next episode, we can pause the show whenever we want to. We don't want to watch the same intro for each episode.

That's why we pay for the service - it's what we want. That's their incentive. Not making degenerates spend more in server costs per week than they pay per month. That's not good business for Netflix.


This sounds like a critique of the creators of the AI more so than the users of it, which TFA is targeting.


They always have and always will, and society has almost never done anything about it and isn't doing anything about it now. AI is just another tool in the toolbox, and as I'll keep repeating, the problem is never with the tool but with the tools using the tool.

How can we justify complaining about AI for these reasons? We've all sat on our asses, and now there are billionaires and soon-to-be trillionaires. We've already failed, dude.


AI/ML will change our world.

It already does.

It's a paradigm shift, and probably the most impactful one since the internet.

From human-machine interfaces to medicine, research, content creation, etc.

No one cares about some dude posting some negative rant like that.


"Change the world" and "paradigm shift" are not inherently good.


It's evolution, and it's our responsibility to make it good.

It doesn't matter whether we like it or not, though, because it's happening.

The only thing preventing this is a total economic collapse so severe that our society doesn't continue chip production.


We weren't able to make the internet good, what makes you think we'll do any better with AI?


The internet is responsible for longer lifespans and decreased mortality globally. It is, on balance, well beyond "good."


I can do my banking online, pay and manage bills, call my parents over video, send pictures, etc.

The internet is a huge success.

The internet is connection, not private websites.


I appreciate the optimism, but to me this should have been your essay about the good of the internet. I'm convinced it would be worth reading. If you wrote it, I won't see it, because that is how certain people like it.

My lengthy rant in response would likely be about the almost impossible puzzle of logistics, the access to the vast ocean of knowledge that humanity has accumulated, and the organization of this complete mess that is civilization.

We have plenty of stuff; we just can't get it to the right place at the right time. We know plenty of stuff, but we can't get it to the right person at the right price. We really want to make this democracy thing work, but despite our efforts we keep getting sausages filled with you-don't-want-to-know.

My definition of a huge success is different. Maybe I'm wrong for thinking we could do more with the tool. If I'm wrong I don't want to hear it :-)


To make it good, critics need to point out where it’s going bad.


Clearly somebody cares, or else we would not be in this subthread.


I care. So, you should recheck the facts that you presumably got from your Bing query.


> Please don't position it so that if I want to use AI I have to defend myself from accusations of exploiting labor and the environment.

I don't think this article even remotely attempts this claim. The closest it gets is suggesting that if these defenses are too much trouble for you, then perhaps your use case for AI wasn't great in the first place.

> but diatribes like this

How is this a diatribe? There's nothing bitter about the writing here, it's entirely couched within the realm of personal opinion, and is an unexpurgated sharing of that opinion.

Please don't position your arguments so that if I want to share my opinion I have to defend myself from accusations that I'm being exceedingly bitter or somehow interfering with what you intend to do.

You're effectively attempting to bully people out of their own opinions for the sake of your convenience.


> it's entirely couched within the realm of personal opinion

"AI output is fundamentally derivative and exploitative"

"If you want custom art, pay an artist."

"Human recommendations will always be better."

If you can't argue against any of those stances, what stances are up for debate?

Surely the person you're responding to was just posting their own opinion, and you're as much a bully as they are?


> I don't think this article even remotely attempts this claim.

It's in the first sentence, "AI output is fundamentally derivative and exploitative (of content, labor and the environment)."


Any fruit of any manufacturing labour is fundamentally derivative and exploitative: it needs raw materials from the environment, and it needs labour for the intended transformation; if anything, the AI output is less exploitative because the raw inputs don't end up destroyed in the process.


> You're effectively attempting to bully people out of their own opinions for the sake of your convenience.

Maybe it's just me, but "bully" seems like a very exaggerated choice of words here.


No you absolutely should have to defend yourself. Like the author, I don't want to touch anything you create that is produced with generative AI.

The ONLY exception is if you can demonstrate that your model was trained solely on datasets of properly licensed works and that those licenses permit or are compatible with training/generation.

But the issue is that overwhelmingly, people who use generative AI do not care about any of that and in practice no models are trained that way so it's not even worth mentioning that exception in this day and age.


I'm with you, but I think it is a bit more complicated. I think a reason for a lot of the pushback is that these systems are being oversold. A lot of tech over-promises and under-delivers. I'm not sure it is just an AI thing so much as a test of how far the limit of acceptable exaggeration can be edged forward.

It definitely is frustrating that many things are presented as binary. But I think we can only resolve this if we dig a little deeper and try to understand the actual frustration that someone is attempting to communicate. Unfortunately, a lot of communication breaks down in a global context, since we can't rely on the many implicit priors that are generally shared within particular groups. Complaining is also the first step toward critiquing. You're right that we should encourage criticisms over complaints, but I think we can, and should, try to elicit critiques from complaints too.


The idea that machine learning systems like large language models and image generators exploit labor might be up for debate, but the fact that they are disproportionately damaging to the environment compared to the alternatives is certainly true, in the same way it's true for Bitcoin mining. And there's more than just those two aspects to consider. It's also very much worth considering how the widespread use of such technologies, and their integration into our economy, might change our political, social, and economic landscape, and whether those changes would be good or bad or worth the downsides. I think it's perfectly valid to decide that an emerging technology is not worth the negative changes it will make in society, or the downsides it will bring with it, and reject its use. Technological progress is not inevitable in the sense that every new technology must become widespread.


> disproportionately damaging to the environment compared to the alternatives

This is a new one to me. Do you have any source for that? Once a model is trained, it seems pretty obvious that it takes Dall-E vastly less energy to create an image than a trained artist. I have trouble believing the training costs are really so large as to change the equation back to favoring humans.


Dall-E is usually not an alternative to a trained artist, but an alternative to downloading a stock image from the internet, which takes way less energy.


AI-generated images have already won numerous awards. They can easily make assets good enough for an indie video game. Even stock images have to come from somewhere.


Jevons paradox, though? Planes are so much more efficient now than the first prototypes, yet usage is so much higher that resource consumption due to airplanes has vastly increased. The same goes for generative models.


I'm not sure your premise even makes sense here, because it doesn't take an artist much more in resources to produce art than it took them to just exist for the same amount of time. They're still just eating, sleeping, making basic use of the computer, using heating and light, and so on either way. Whereas someone using Dall-E is doing all of that plus relying on the immense training costs of the artificial intelligence. The basic computer use needed to run the machine learning model might be shorter than the basic computer use needed to run Procreate or something, but they'll still be using the computer for about the same amount of time anyway, because the time not spent making art will just be shifted over to other things. So it doesn't seem to me like having machine learning models do something for you, instead of learning a skill and doing it yourself, will noticeably decrease emissions or energy usage at all.

Furthermore, even if there is some decrease in emissions from using pre-trained machine learning models over using your own skills and labor, the energy cost of training a powerful machine learning model like the ones you're thinking of is way higher than I think you imagine. The energy and carbon-emission cost of training even a 213M-parameter transformer for 3.5 days is 626 times the cost of an average human existing for an entire year, according to this study: https://arxiv.org/abs/1906.02243. Does using a pre-trained machine learning model take that much emission out of people's lives? Or a day's worth out of 228,490 lives, perhaps? I doubt it.

But we aren't even using such small transformers anymore; they actually aren't that useful. We're using massive models like GPT-4, and pushing as hard as we can to scale models even further, in a cargo-cult faith that making them bigger will fundamentally, qualitatively shift their capabilities at some point.

So what does the emissions picture look like for GPT-4? The study above found that emissions costs scale linearly with the number of parameters and tuning steps as well as training time, so we can make a back-of-the-napkin estimate that GPT-4 is 8,592,480 times more expensive to train than the transformer used in the study, since it is rumored to have 1.76 trillion parameters versus the 213 million of the model in the study, and GPT-3 was said to take 3640 days to train (despite using insane amounts of simultaneous compute to scale the compute up in conjunction with the scale of the model) versus 3.5 days. This in turn means it is 5,378,892,480 times more expensive to train GPT-4 than it is for a human to live for one year.

And again, to reiterate: no matter what work the humans are doing, they're going to be living for around the same amount of time and producing roughly the same carbon emissions, as long as they're not, like, taking cross-country or transatlantic flights or something. So it's more expensive to train GPT-4 than it is for almost 6 billion people to live for a year. I don't think it's taking a year's worth of emissions off of 6 billion people's lives by being slightly more convenient than having to type some things in or draw some art yourself. And there are only 8 billion people on the planet, so I don't think there are enough people to spread smaller gains across to justify the training of this model (you'd have to take a day's worth of emissions off of 1,963,295,755,200 people to offset that training cost!), especially since, in my opinion, the decrease in emissions from using machine learning models would be absolutely minuscule.
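For anyone who wants to check that arithmetic, here's the same back-of-the-napkin scaling as a small Python sketch. Every input (the parameter counts, the 3640-day and 3.5-day training times, and the 626x figure) is taken from my claims above as stated, not independently verified, and the variable names are just mine:

    # Linear-scaling estimate; inputs are the comment's claims, not verified figures.
    study_params = 213e6             # 213M-parameter transformer (arxiv.org/abs/1906.02243)
    study_days = 3.5                 # its training time
    human_years_per_study_run = 626  # study: 626x an average human's annual emissions

    gpt4_params = 1.76e12            # rumored GPT-4 parameter count
    gpt4_days = 3640                 # training time the comment attributes to GPT-3

    # Assume emissions scale linearly with parameters and with training time.
    scale = int(gpt4_params / study_params) * (gpt4_days / study_days)
    print(f"{scale:,.0f}")                                # 8,592,480x the study model
    print(f"{scale * human_years_per_study_run:,.0f}")    # 5,378,892,480 human-years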


This back-of-the-napkin estimate for GPT-4 emissions costs is too high by orders of magnitude. Your estimate is that training it emitted about as much CO2 as 5.38 billion average humans living their lives for a year. With a world population of 8 billion, that would mean GPT-4's training was equivalent to 0.67 years of total anthropogenic CO2 emissions. Since GPT-4's CO2 emissions all come from manufacturing hardware with fossil fuels or burning fossil fuels for electricity, this is roughly equivalent to 0.67 years of global fossil fuel production.

But OpenAI had neither the money nor the physical footprint to consume 0.67 years' worth of global fossil fuel production! At those gargantuan numbers OpenAI would have consumed more energy than the rest of the world combined while training GPT-4. It would have spent trillions of dollars on training. It would have had to build more data centers than previously existed in the entire world just to soak up that much electricity with GPUs.
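For what it's worth, the sanity check here is a single division using the parent's own numbers (a quick sketch, same assumptions as the snippet above):

    # parent's estimate in human-years of emissions, divided by world population
    print(5_378_892_480 / 8e9)  # ~0.67 years of total global emissions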


That's a good point; that's what I get for doing a linear extrapolation. This looks like a better estimation, which doesn't look good for my argument: https://towardsdatascience.com/the-carbon-footprint-of-gpt-4...

I still think my point stands about imagining that using ML models decreases emissions versus a human doing the same task, though. Humans don't produce much more or less emissions depending on what task they're doing, and they'll be existing either way, and probably using the computer the same amount either way, just not spending as much time on that one task. So I don't see how you can argue that using an ML model to write or draw something uses less CO2 than a human doing it. You can't count the CO2 a human emits while existing for the duration of the task as the CO2 cost of the human doing the task, because humans don't stop existing and taking up resources when they're not doing a task, unlike programs. And you can't really compare the power used to run the ML model to the power used by the computer during the time it takes the human to do the task either, since the human will need to use the computer to access your ML model, interact with it to define the prompt, edit the results, etc. (and also because, again, they'll probably just shift any time saved doing that task on the computer to another task on the computer). Additionally, of course, there's the fact that you can't really use large language models to replace writers, or machine learning image-generation tools to replace artists, if you actually care about the quality of the work.


Huge kudos for admitting this changes your reasoning - I don't see people willing to admit that often, especially on the internet.


Thank you! It would have been silly for me to deny that my math was off; I don't really know how I would have rhetorically done that, lol. I did find another relevant link on this topic after writing my comment above, though: https://futurism.com/the-byte/ai-electricity-use-spiking-pow.... According to that article, although large language models are not yet drawing as much power as I calculated (so my linear extrapolation was still silly), they might eventually do so (0.5% of the world's energy by 2027). The actual study is paywalled, though, so I don't really know their methodology; they may well be doing the same linear extrapolation I was doing above, so I'm not sure how seriously we should take it. It's something to consider when we weigh the costs and benefits, though.


I would argue that most social progress comes from automating a task and freeing humans up to do something else - your logic counts just as solidly against building a car in a factory, or using a sewing machine, or a thousand other socially acceptable things. Surely the "LLM Revolution" isn't worse than the Industrial Revolution was?


Nothing I said was about automation per se being bad; I'm not sure where you got that from. I was specifically talking about the relative carbon emissions of machine learning models doing something versus human beings doing something, and arguing that the former has no emissions advantage over the latter, in my opinion. I don't think that applies to automation in general, because I wasn't making a point about automation; I was making a point about the relative emissions of two ways of automating something. I actually agree with you that in principle automation is not a bad thing, and that economies can eventually adjust to it in the long run and even be much better off for it. Although we would probably disagree on some things, since I think our current economic system tends to use automation to increase inequality and centralize power and resources in corporations and the rich, as opposed to truly benefiting everyone: those with economic power are the ones who will own the automation and use it to their advantage, while making the average person useless to them and not directly benefiting us. But that's an entirely different discussion, really.


It’s so ironic that you have this stance about the value of other people but you feel so humiliated by OP as to think they’re bullying.


I think you replied to the wrong post?


How's it more damaging to the environment if you can replace 1k people? That's 1k people staying at home instead of commuting. Sure, that causes pain if we can't figure out UBI or a way to house and feed the masses. Also, many of the biggest AI users are working to get their energy 100 percent from solar, wind, and geothermal. AI is something we've been heading towards since the dawn of man.

Hell, ancient Rome had automatons. There's no way to stop it. Ideally we merge with the AI to become something else, rather than give it superpowers and have it decide to destroy us. I'm not sure the benevolent caregiver of humanity is something we can hope for.

It's a scary but interesting future. But I mean, we've also got major problems like cancer, global warming, etc., and AI is a killer researcher; it did 300k years' worth of human research hours in a month to find tons of materials that can possibly be used by industry.

They're doing similar with medicine, etc... there are many pros and negatives. I'm a bit of an accelerationist, rip the band-aid off kind of guy, everyone dies someday I guess; not everyone can say they were killed by a Terminator, well at least not yet lol, tongue in cheek.


> I'm a bit of an accelerationist, rip the band-aid off kind of guy, everyone dies someday I guess

Are you volunteering to go first?


> How's it more damaging to the environment if you can replace 1k people? That's 1k people staying at home instead of commuting.

Check my comment above, where I do some rough back-of-the-napkin calculations around this. Training GPT-4, for example, produced around 6 billion times the carbon a human emits in total in a year, which probably includes commuting. So unless GPT-4 removes the commute of significantly more than 6 billion people (since it wouldn't be eliminating their emissions entirely, just their commuting emissions), it is a net loss. Also, we can eliminate commute emissions by having better public transportation and walkable/bikeable cities; we don't need to prostrate ourselves before a dementia-addled machine god to get there.


>Please don't position it so that if I want to use AI I have to defend myself from accusations of exploiting labor and the environment.

Why should you be free of accountability for the effects of your actions?


Because the effects of my actions in this case have yet to be demonstrated, let alone shown to cause harm. The author claims there is exploitative harm to labor, the environment, and maybe others. That is not at all obvious or provably true yet. As I said, I'm open to the discussion, but I can't defend myself in good faith when people claim some slam-dunk moral certitude. Again, don't use generative AI if it makes you feel bad, but there is absolutely nothing clear-cut yet about this radically new technology.


>The author claims there is exploitative harm to labor [...]. That is not at all obvious or provably true yet.

Not at all obvious? These models are trained on vast amounts of content, much of it copyrighted, and basically none of it licensed.


Human artists have been training on the same content for decades and no one seemed to complain. You can argue that machines should be held to a different set of legal and ethical standards, but it's certainly not obvious.

Most factories are designed based on vast amounts of prior manual labor, so it's not like "automating a manual process based on analyzing existing methods" is new, either. Why is it okay to automate the knowledge of all those other craftsmen, but not that of painters?


Human artists are not robotically ingesting terabytes of content.


AI are not human artists, so there's no connection to your point and the discussion.


you are just making shit up to suit your narrative at this point.


So ai are human artists?


You could just as easily claim that since AI are not human artists that copyright does not apply to them.


It applies to whoever uses them as a tool. If you say copyright doesn't apply to a photocopier because it isn't human that doesn't mean it suddenly doesn't apply to you. It's just a bad argument.


Correct, not at all obvious. The obvious effect of generating an image of a dog on the moon is that you now have an image of a dog on the moon. If you showed it to 100 artists, some percentage of them might recognize it's AI, but none of them would claim it as their art and ultimately none would be harmed. The harm is non-obvious.


The flip side of that coin is that brazen "ingenuity" with complete disregard for the consequences is just as bad as blindly declaring all AI is bad.

We need people like the person writing this article so that the starry-eyed people who are too excited about AI and pushing it into everything are kept in check.


^^ THIS ^^

The middle road I've taken is that I use various consumer AI tools much the way I used the Macintosh or the Atari ST with MIDI when they showed up while I was in music school, as tools that may be used as augmentative technology to produce broader and deeper artistic output with different effort.

There's something mystical and magical about relinquishing complete control by taking a declarative approach to tools that require all manner of technique and tomfoolery to achieve transcendent results.

The jump from literate programming to metaprogramming to what we have in the autonomous spectrum is fascinating and worth the investment in time, assuming the output is artistic, creative, and philosophical.

AI is not free, but the price being paid comes at the cost of creators trying to create safe technology usable by anyone of any age.

Given the similarity to selling contraband, these AI tools need far more than just conditional guard rails to keep the kids out of the dark web... More like a surgeon general's warning with teeth.

Bard and Bing should be treated as if they were the Therac-25, because in the long run we may realize that, like social media, the outcome is worse.


Please don't do “^^ this ^^”, comments are reordered here.


Thanks for the reminder. I won't make that mistake again. I'm guessing the spatial affordance, for lack of a better phrase, of "^^this^^" arose with ephemeral comments on IRC, but it is clearly dependent on the message list being static, which is not true here; hence, the technique is out of place here. Good to know. Thanks again.


> Please don't position it so that if I want to use AI I have to defend myself from accusations of exploiting labor and the environment.

Can you please give me access to your private repositories? I'd like to see if there's anything useful there for me to sell. You shouldn't say no; at least I'm asking politely and using the magic words. It can only benefit humanity, right?

I'm not against crowdsourcing LLM models, but copyright is copyright. I say that as someone who pirates heavily, but I'm not a hypocrite about what I do.


There's a version of the future where AI actually takes larger and larger chunks of real work while humans move towards spending more and more of their time and energy on culture war activities.


ALL technology can be weaponized, and what you are sleep-walking into is an era where AI is easily weaponized against not just nation states or groups, but the individual.

Either have this conversation now, or face the consequences when weaponized AI is so prevalent, you will have to dig a hole in the ocean to escape it ..


Your name is "stolenmerch"... I wonder if that colors your perspective at all.


Are We the Baddies?


> Please don't position it so that if I want to use AI I have to defend myself from accusations of exploiting labor and the environment.

You, personally, likely are not (apart from electricity use but that's iffy.) But the technology you want to use could not exist, and cannot continue to be improved, without those two things. That's not unclear in the slightest, that's just fact.

> I'm open to that conversation and debate, but diatribes like this make it far too black-and-white with "good" people and "bad" people.

I get that any person's natural response to feeling attacked is to defend oneself. That's as natural as natural gets. But if shit tons of people are drawing the same line in the sand, no matter how ridiculous you might think it is, no matter how attacked you might feel, at some point, surely it's worth at least double-checking that they don't actually have a point?

If I absolutely steel-man all the pro-AI arguments I have seen, it is, at the very best:

- Using shit tons of content as training data, be it written, visual, or audio/video, for a purpose its creators never granted

- Reliant on labor in the developing world that is paid nearly nothing to categorize and filter reams upon reams of data, some of which is the unprocessed bile of some of the worst corners of the Internet imaginable

- Explicitly being created to displace other laborers in the developing and developed world for the financial advantage of people who are already rich

That is, at best, a socially corrosive if extremely cool technology. It stands to benefit people who already benefit everywhere, at the direct and measurable cost of people who are already being exploited.

I don't think you're a bad person for building whatever AI thing you are, for what it's worth. I think you're a person who probably sees cool new shit and wants to play with it, and who doesn't? That's how most of us got into this space. But as empathetic as I am to that, tons of people alongside you who are also championing this technology know exactly what they are doing, they know exactly who they are screwing over in the process, and they have said, to those people's faces, that they don't give a shit. That they will burn their ability to earn a living to the ground, to make themselves rich.

So if you're prepared to stand with them and join them in their quest to do just that, then I don't think anyone is obligated to assuage your feelings about it.


Your "steelman" is embarrassingly bad. Why play devil's advocate if you're going to do such a bad job of it? Here's an alternative:

- As a form of fair use, models learn styles of art or writing the same way humans do - by seeing lots of examples. It is possible to create outputs that are very similar to existing works, just as a human painter could copy a famous painting. The issue there lies in the output, not the human/model.

- Provide comfortable office jobs for people in economically underdeveloped countries, categorizing data to minimize harm for content moderators worldwide. One piece of training data for a model to filter harmful content can prevent hundreds/thousands of people from being exposed to similar harmful content in the future.

- Reduces or eliminates unpleasant low-skill jobs in call centers, data entry, etc.

- Creates new creative opportunities in music, video games, writing, and multimedia art by lowering the barriers to entry for creative works. For example, an indie video game developer on a shoestring budget could create their own assets, voice actors, etc.

- Reduces carbon emissions by replacing hours of human labor with seconds of load on a GPU.


> As a form of fair use, models learn styles of art or writing the same way humans do - by seeing lots of examples.

“a lot” is doing very heavy lifting here. The amount of examples a human artist needs to learn something is negligible in comparison to the humongous amounts of data sucked up by AI training.


> As a form of fair use, models learn styles of art or writing the same way humans do - by seeing lots of examples. It is possible to create outputs that are very similar to existing works, just as a human painter could copy a famous painting. The issue there lies in the output, not the human/model.

I've seen this analogy parroted everywhere and it's garbage. Show me a human being that, in an afternoon, can study the art of Rembrandt and from that experience, paint plausibly Rembrandt style paintings in a few minutes each, and I'll swear by AI for the rest of my life.

Absolute bunk.

> Provide comfortable office jobs for people in economically underdeveloped countries, categorizing data to minimize harm for content moderators worldwide.

... who do you think the content moderators are? It's the same people being paid pittance wages to expose themselves to images of incredible violence, child abuse, non-consensual pornography, etc. etc. etc.

No person should have to look at that to earn a GOOD living, let alone a shit one.

> One piece of training data for a model to filter harmful content can prevent hundreds/thousands of people from being exposed to similar harmful content in the future.

Yeah this is the exact nonsense that is spouted every time you criticize this shit. "Oh all we need to do is absolutely obliterate entire swathes of humanity first, and theeeeeen..." with absolutely zero accounting for the job that has to be done first. And again, I don't see any AI scientists stepping up to page through 6,000 jpegs, some of which depict unspeakable things being done to children, oh no. They find people to do that for them, because they know exactly how unbelievably horrible it is and don't want themselves being exposed to it.

If it's so damn important, why don't YOU do it? If you're going to light someone's humanity on fire to further what you deem to be progress for our species, why not at least have the guts to make it your OWN humanity?

> Reduces or eliminates unpleasant low-skill jobs in call centers, data entry, etc.

And where are those people going? Who's paying them after this? Or are you going to suggest they attend a weekend Learn-to-Code camp too? And who's paying their wages in the middle of that transition, when the skills they have become unmarketable? Who's paying for their retraining? Or are we just consigning entire professions worth of people to the poorhouses now without so much as a thought?

> Creates new creative opportunities in music, video games, writing, and multimedia art by lowering the barriers to entry for creative works.

Derivative works. No matter how much you want to hype this up, AI is not creative. It just isn't. It gives you a rounded mean of previous creations that it has been shown, nothing more. AI will never invent something, in a thousand years it will not. This is why people call AI art soulless.

> For example, an indie video game developer on a shoestring budget could create their own assets, voice actors, etc.

Have you seen those games? They're shit. They're lowest common denominator garbage designed to get hyperactive kids on iPads to badger their parents into spending money.

> Reduces carbon emissions by replacing hours of human labor with seconds of load on a GPU.

So like, this just straight up means you know damn well people are going to die from this. They will be displaced, their labor made worthless, and they will perish. That's just like... what you just said there. Because otherwise, the statement "reduces carbon emissions" makes no sense: if someone gets fired and gets a new job, their carbon emissions do not necessarily go down, and they certainly aren't eliminated.


> Show me a human being that, in an afternoon, can study the art of Rembrandt and from that experience, paint plausibly Rembrandt style paintings in a few minutes each, and I'll swear by AI for the rest of my life.

So it's okay to learn, but only if you do it very slowly? I surely don't need to point you to the existence of forgers - you know a human can study the art of Rembrandt and paint plausibly Rembrandt style paintings.

> are we just consigning entire professions worth of people to the poorhouses now without so much as a thought?

We have been doing that since the dawn of history - what makes this any different from cars obsoleting the horse drawn carriage? Computers have been automating people's jobs for decades - should we ban programming writ large?

Where, exactly, do you feel the line ought to be drawn?


> So it's okay to learn, but only if you do it very slowly?

No, it's a fundamentally different process with different results. An artist learns from previous artists to express things they themselves want to express. An AI digests art to become a viable(ish) tool for people who want to express themselves, as long as that expression resides somewhere in the weighted averages of the art the model has digested. Two fundamentally different things, apples and oranges, and also not without its own set of limitations. Despite the rhetoric that anyone can create anything with this stuff, that's just not true: you can create anything for which you can find a suitable model, one that was itself trained on a LOT of material similar to what you want to create. Effectively, automated ultra-fine scrapbooking.

Honestly, if creativity is your thing, even if you find creating difficult for whatever accessibility reason you feel like pretending you care about, you will find AI more frustrating than anything. The bounds of your creativity become the model itself, plus whatever safeguards the provider has decided are important to put in place. You've just exchanged one set of limitations you probably can't control for another set you definitely can't control.

> I surely don't need to point you to the existence of forgers - you know a human can study the art of Rembrandt and paint plausibly Rembrandt style paintings

Yes, and those are worthless once found out, just like AI art. And again, you've sidestepped the scale: Adobe Firefly can bash out 3 solid-resolution images in roughly 2 minutes. No human can even dream of getting close to creating Rembrandt forgeries at that rate.

> We have been doing that since the dawn of history - what makes this any different from cars obsoleting the horse drawn carriage?

Because cars cost a fortune when new and were toys for the wealthy, before Henry Ford came along some three decades later to fix that. And by then, the former farriers had had time to retrain for new work. Not to mention, carriage builders were still employed for many decades into the rise of cars, because originally buying a "car" meant you got a chassis, suspension, engine, and the essentials, which you would then take to a coachbuilder to have a "skin", if you will, built around it. Hence the term "coachwork."

> Computers have been automating people's jobs for decades - should we ban programming writ large?

Is this the "debate" you were saying you were open to? Hyperbolic statements with zero substance? I can see why few want to have it with you.

> Where, exactly, do you feel the line ought to be drawn?

Consent. Tons of people's creative output was used to build machines to replace them, without their consent and, more often than not, explicitly against their wishes, under the guise of a "research project" rather than a monetized tech product. Once again a tech company bungles into the public square, exploits it for money, then makes us live with the consequences. I frankly think that question ought to be reversed: what makes OpenAI entitled to all that content, for a purpose it was never meant for, with zero permission or consent on the part of its creators?

I'm not opposed to ML as a concept. It has its uses, even for generating images/writing/what have you. But these models as they exist now are poisoned with reams of unethically sourced material. If any of these orgs gave even the slightest shit about ethics, they'd dump them and retrain with only material from those who consented to have it used that way. Simple as.


May I just say, as a third-party simply reading this back and forth from the outside, that the tone of your writing and the implied attitude with which you are engaging in "debate", reads as very aggressive and uninterested in actually having a sincere discussion. To me at least.

I imagine you probably won't like this comment, but perhaps you might use it as an opportunity for reflection and self-awareness. If your interest is actually to potentially change someone's mind, and not just "be right", you might consider approaching it in a different way so that your tone doesn't get in the way of the substance of arguments you wish to make.

Just a suggestion. Take care.


You aren't wrong in the slightest, apart from the implication that I'm here for a debate. I'm not. I've been having this debate since the Stable Diffusion blow-up around mid-2023. I've read these points restated by countless pro-AI people and refuted them probably dozens of times at this point, here and elsewhere, always ending in a similar deadlock where they just stop replying, either because they're sick of me, or I've "won", for whatever that means in the context of online discussion.

Nevertheless, I'm always open to being persuaded by actual arguments, and I have been on numerous issues, but I have yet to see any convincing refutations of the points I've outlined here, regarding primarily, but not limited to:

- The unethical sourcing of training data

- The exploitation of lesser-privileged workers in managing it

- The harm being done and the harm that will be done to various professions if they become standard

And not mentioned in this thread:

- These various firms' superposition between potentially profitable business and "research initiatives," depending on whether they're trying to get investment or abuse the public square, respectively

- The exploitative/disgusting/disinformative things these AIs are being used to produce for a society already saturated with false information and faked imagery

But these discussions usually dead-end, like I said, when the other person stops answering or invokes "well, if we don't build it, someone else will," which is also unpersuasive.

Relating specifically to your point about wanting to change someone's mind: in my first comment I do feel I put out an olive branch with empathy for being excited about a new thing. But when the new thing in question is so saturated from beginning to end in questionable ethics... I'm sorry, there's only so much empathy I can extend. If you (not you specifically, but you as the theoretical person) are the kind of person ready to associate with this technology at this stage, when its foibles and highly dubious origins are so well known, then I'm not overly interested in assuaging your feelings. This person came into this thread bemoaning the fact that so many people are calling them out on this and they're sick of it, and like, there's a great way to stop that happening: stop using the damn technology.

I will always extend empathy, but if your position is whining about people rightfully, IMO, pointing out that you are using unethical tech, and wishing they'd stop? Like, sorry not sorry man, maybe you shouldn't use it then. Then you get yelled at less and keep a clear conscience. Win/win.

But I do appreciate the reply all the same, to be clear. You aren't wrong. I've just had this argument too much, but also don't feel I can really stop.


> always ending in a similar deadlock where they just stop replying, either because they're sick of me, or I've "won"

My general experience on Hacker News is that threads rarely go beyond one or two replies, so I'll often tap out on the assumption that the other party isn't likely to actually read/respond to any thread more than a couple days old. As far as I know, there's not any indicator when someone replies to your comments, unless you go and check manually?

If I'm just using the site wrong, do please let me know!

Otherwise, I'd suggest you might want to update from "sick of me" to "never saw the reply due to the format of the site". For what it's worth, it took me a while to adjust to that.



> An artist learns from previous artists to express things they themselves want to express.

Ahh yes, that well known human impulse to produce stock artwork for newspapers and to illustrate corporate brochures. I can't imagine what the world would be like if we let cold, soulless processes design our corporate brochures!

I suppose this argument works for Art(TM), but why is it relevant to the soulless, mass produced art? Should it be okay to discard all the artists who merely fill in interstitial frames of an animation? Is "human expression" actually relevant to that?

> And again, you've sidestepped the scale

Pick one: either this is about speed or it isn't. Would you actually be fine with AI art if it was just slower? If not, then stop bringing up distractions like this. If this really is just about scale, it's a very different conversation.

> Because cars cost a fortune when new and were toys for the wealthy, before Henry Ford came along some three decades later to fix that.

Sorry, when did Rembrandt paintings stop being toys for the wealthy?

> And then, the former farriers had time to retrain for new work.

So, again, it's just that progress is moving too fast? If we just slow things down a bit and give the artists time to flee, that makes it okay?

> Hyperbolic statements with zero substance?

We haven't talked before, so I didn't know whether you were someone who was okay with automation putting people out of work. That's hardly zero substance. I'll assume this means you're fine with it, since you don't think it's even worth discussing.

> Consent

Okay, so, bottom line: you're saying that if they spend a few billion to license all that art, and proceed to completely replace human artists with a vastly superior product, you're OK with that outcome? (I'm not saying this is inconsistent, just trying to understand your stance - previously you were talking about the importance of artists expressing themselves and the speed at which AI can do things - what's actually important, here?)


> Ahh yes, that well known human impulse to produce stock artwork for newspapers and to illustrate corporate brochures. I can't imagine what the world would be like if we let cold, soulless processes design our corporate brochures!

As someone who works on the side in creative endeavors, I assure you that even work I would prefer not to do carries with it my principles as a designer and a small piece of my humanity. Every last thing, even the most aggressively bland and soulless, contains an enigma of tiny choices based upon years of making things, choices most people will never notice. Or at least, I always thought they didn't notice, until you start putting even bland corporate art next to AI-generated garbage. Then they do.

From the creative perspective, that's what I think lends it that... smoothed-over, generic vibe. An artist's "voice," even in something like graphic design, even in an oppressive and highly corporatized environment, is best characterized as a thousand tiny choices that individually don't impact the final product much, but together give a given work its "humanity" that no machine can touch. When I, for example, design an interface: why do I consistently use similar dimensions for similar components and spacings? I honestly couldn't tell you. To me, it "looks nice," a phrase that undersells decades in my industry but is nonetheless the most fitting. And all of those choices are subject to change by committee later on, to be sure, but even so, they rarely are.

AI takes these thousands of tiny choices that contribute to this feeling and replaces them with a rounded mean of previous choices made by innumerable artists with different voices. It takes the "voice," as it were, and replaces it with a cacophony of conflicting ones, subject to changing its tone with each pixel. This, IMO, is its core failing.

> I suppose this argument works for Art(TM), but why is it relevant to the soulless, mass produced art? Should it be okay to discard all the artists who merely fill in interstitial frames of an animation? Is "human expression" actually relevant to that?

For the love of everything, yes. And you ask "why is it relevant for soulless mass-produced art," but we already know why it is: Disney spent billions of dollars showing us, with the MCU, what happens when the content mill becomes utterly and completely detached from the art it was meant to serve. The newer movies just... look like shit, and not because of AI (probably?), but because all the movies are made down to a formula, down to a process, no vision, no plan, just an endless remixing of previous ideas, no time for artists to put in actual work, just rushing from task to task, frame to frame, desperately trying to crank it the hell out before their studios go bust.

People rag on generic, popular art but even popular art is art, and if you take away the humans (or as Disney did, beat them into such submission they can no longer be human) people definitely notice.

> Pick one: either this is about speed or it isn't. Would you actually be fine with AI art if it was just slower? If not, then stop bringing up distractions like this. If this really is just about scale, it's a very different conversation.

It's relevant because you're bringing up industrialized mechanization as a comparison, and it's really an ill-fitting one. The printing press, MAYBE, could be an example on the scale we're talking about, and the main difference there is that mass-produced books basically didn't exist and literacy among common people was substantially rarer; ergo, the number of scribes whose skills were displaced was much lower.

But the vast majority of "technology replaces workers" type things can be (and you have invoked this already) compared to the industrial revolution, and again, the difference is scale. They didn't build a horseshoe maker by analyzing 50,000 horseshoes made by 800 craftsmen that could then produce 5,000 of the things per day.

And sure, those horseshoes all suck ass, they're deformed, they don't work well and the horses are visibly uncomfortable wearing them, but the corporate interests running everything don't care, and so shit tons of craftsmen lose paying work, horses are miserable, and everything keeps on trucking. That's what I see, all around me, all the time these days.

> Sorry, when did Rembrandt paintings stop being toys for the wealthy?

I mean, the art market being a tax-dodge and money-laundering scheme is a whole other can of worms that we really shouldn't try to open here.

> So, again, it's just that progress is moving too fast? If we just slow things down a bit and give the artists time to flee, that makes it okay?

I'd be substantially more pleased with a society that cared for the people it's actively working to displace, yeah. I don't think any artist out there is dying to make the next Charmin ad, and to your earlier point about soulless corporate art, yeah, I'd imagine everyone would have a lot more fun making anything that isn't that. The problem is we have millions of people who've gone to school, invested money, borrowed money, and constructed a set of skills not easily transferable, who are about to be out of work. And in our society, being out of work can cost you everything from the place that you live, to the doctors that heal you, to the food that nourishes you. I don't give a damn, and I doubt anyone does, about maintaining the human affect in corporate art: apart from the fact that those humans still need to eat, and most of them are barely doing it as it stands now.

> We haven't talked before, so I didn't know whether you were someone who was okay with automation putting people out of work. That's hardly zero substance. I'll assume this means you're fine with it, since you don't think it's even worth discussing.

On the whole, less work is a-okay by me. Sounds great! The problem is we as a larger collective of workers never see that benefit: Instead of less work, we all just produce more shit, having our 40-hour week stuffed with ever more tasks, ideas, and demands of management as they add more automation and cut more jobs and push the remaining people ever harder.

We were on the cusp of a 30-hour workweek in the 1970s and now? Now we have more automation than ever but simultaneously work harder and produce more shit no one needs than we ever have.

> Okay, so, bottom line: you're saying that if they spend a few billion to license all that art, and proceed to completely replace human artists with a vastly superior product, you're OK with that outcome? (I'm not saying this is inconsistent, just trying to understand your stance - previously you were talking about the importance of artists expressing themselves and the speed at which AI can do things - what's actually important, here?)

What's important is I want people to survive this. I'm disillusioned as hell with our society's ongoing trajectory of continuously trying to have more, to do more, always more, always grow, always produce more, always sell more. To borrow Greta's immortal words: "Fantasies of infinite growth." I see the AI revolution as yet another instance where those who have it all will have yet more, and those who do not will be ground down even harder than they already are. It's a PERFECT solution for corporations: the ability to produce more slop, more shit, infinitely more, as much as people can possibly consume and then some, for even less cost, and everyone currently working in the system is now subject to even more layoffs so the executives can buy an even bigger yacht.

If you don't see how this stuff is a problem I don't think I can help you.


> we as a larger collective of workers never see that benefit

Child mortality has dropped from 50% to 1%.

We had a world-wide plague, and far less than 10% of the population died.

We have computers. We have the internet. We have an infinite wealth of media.

We fixed the hole in the ozone.

We eliminated lead poisoning.

We are constantly making progress against world poverty.

We got rid of kings and monarchs and tyrants.

War is so rare, we don't even bother with the draft despite the army struggling massively with recruitment.

You simply CANNOT look back on history and think we don't have it better.


> Child mortality has dropped from 50% to 1%.

And maternal mortality is creeping upwards here in the States thanks to the cost of healthcare and Republicans' ongoing efforts to control women's bodies.

> We had a world-wide plague, and far less than 10% of the population died.

An inordinate share of those deaths was concentrated in America, because we've industrialized and commercialized political radicalization for profit.

> We have computers. We have the internet. We have an infinite wealth of media.

We have devices in our pockets that spy on us (also powered by AI), about five websites, and infinite derivative shit.

> We fixed the hole in the ozone.

That one I'll give you. Though the biosphere is still collapsing, we did fix the ozone hole and that isn't nothing.

> We eliminated lead poisoning.

Eeehhhhhh.... mostly? Plenty of countries still use leaded gasoline, and tons of lower-income people are still living in homes with both lead and asbestos.

> We are constantly making progress against world poverty.

In the developing world, maybe, but that comes with a LOT of caveats about what kinds of jobs are being created and how well those workers are being paid. China has done incredible work lifting its population out of poverty, but not without costs that the CCP is only now starting to see the problematic side of. India is a similar story. And worth noting, both of those success stories, if you decide to call them that, are based heavily on some creative accounting and massive investment from the West. I don't think that's a bad thing, but I'm also guessing said investors are expecting to be paid back, and it's finite and unsustainable.

Meanwhile, in the developed world, workers are getting fucked harder than ever. Rent is now what, 2/3 of most people's income? People out here are working three jobs and they still can't make a decent living.

> We got rid of kings and monarchs and tyrants.

Are we living in a different world here? We have an entire wave of hard-right strongmen making big splashes right now. Trump was far from an isolated thing. No, they're not dictators... YET... but like, they don't usually start that way, if you study your history.

> War is so rare, we don't even bother with the draft despite the army struggling massively with recruitment.

Uh, I think some Gazans, Ukrainians, Iraqis, and Rohingya might take issue with that statement?

> You simply CANNOT look back on history and think we don't have it better

I mean yeah, I'm not one of those lunatics who think we were better shitting in caves. But that doesn't mean our society as it exists is not rife with problems, most of which have a singular cause: the assholes with all of the money, using that money to make the world worse, to make more money.

Hence being pissed about AI.


All of your objections are nitpicking about small, localized setbacks compared to massive global gains. As far as I can tell, we agree that the world is consistently getting better, and that these gains all come from technological progress. As far as I can tell, we agree that while the world isn't perfect, and some technologies do more harm than good, "technological progress" is a net positive.

I don't think you want to go back to a 50% child mortality rate, even if it somehow convinced Republicans to drop their crusade against abortions. I don't think you prefer World War 2 to the Ukraine war. I certainly don't think you want to reinstate monarchy and fascism across Europe.

If I'm wrong, then go ahead and tell me what decade you want to rewind to - what progress are you willing to give up?

If I'm not wrong, then... how does this at all lead to "hence being pissed about AI"? What's so uniquely evil about AI that we should give up the gains there, and assume it's a net evil in the long term, compared to everything else we've done?


> All of your objections are nitpicking about small, localized setbacks

Small wars are still wars. No, we don't have any global conflicts with clearly drawn sides like the Axis and Allies of World War II, true enough. But that's not because war is done or distasteful: it's because global hegemonic capitalism now rules all of those societies and makes certain such wars don't happen between the countries that matter. Which is why we had the "police actions" in Vietnam and Korea, why we had Operation Iraqi Freedom, why we went to war in Central America over the price of bananas, etc. The colonial powers have essentially unionized and now use the bludgeon of the military might of America to keep poorer, indebted nations in line, and if they fail to capitulate, a reason will be manufactured to unseat the power in that place, more often than not by force, more often than not with heavy civilian casualties and economic destruction, the rebuilding of which in turn will be financed by the West afterward, so the poorer countries never have a ghost of a chance in hell of standing on their own two feet and making their own fucking decisions about their resources and people.

That is not due to technical progress. Technical progress is, if anything, jeopardizing that balance, because information about how absolutely fucked everyone in the Global South is at basically all times is now much harder to contain.

> As far as I can tell, we agree that while the world isn't perfect, and some technologies do more harm than good, "technological progress" is a net positive.

I would absolutely cosign that, if said technological progress weren't extremely concentrated in the wealthy nations on this planet, while the other ones make do scrapping our old ships in tennis shoes and smoking the cigarettes we export to them.

> I don't think you want to go back to a 50% child mortality rate, even if it somehow convinced Republicans to drop their crusade against abortions.

No I want Republicans to govern on conservative principles, not mindless culture war bullshit. And I'd also like the Democrats to stop governing on conservative principles because their opposition in the states is a toddler eating glue and screaming about pizza places on the floor of the fucking Senate.

> I don't think you prefer World War 2 to the Ukraine war.

All war is terrible, the scale is irrelevant.

> I certainly don't think you want to reinstate monarchy and fascism across Europe.

A lot of fascist-leaning voters in Europe might do it anyway though.

> If I'm wrong, then go ahead and tell me what decade you want to rewind to - what progress are you willing to give up?

I want the progress. I just don't want it hoarded by a particular society on our planet. We ALL deserve progress. We ALL deserve to earn a living commensurate with our skills, and we ALL deserve to be supported, housed, and fed, and we already have the resources to do the vast, vast majority of it. We simply lack the will to confront larger issues in how those resources are organized and distributed, and the fundamental inequities that we reinforce every single day. Largely, because a ton of people currently have a lot more than they need, and a small amount of people have a downright unethical amount, and the latter group has tricked the former group into thinking they can join the latter group if they only work hard enough, while also robbing them blind.

> If I'm not wrong, then... how does this at all lead to "hence being pissed about AI"? What's so uniquely evil about AI that we should give up the gains there, and assume it's a net evil in the long term, compared to everything else we've done?

It's not uniquely evil at all. It's banal evil. It's the same evil that exists everywhere else: tech industries insert themselves into economies that they don't understand, they create something that "saves" work compared to existing solutions (usually by cutting all kinds of regulatory and human corners), sell that with VC money, crush a functioning industry underneath it, then raise the prices so it's no cheaper at all anymore (maybe even more expensive) and now, half the money made from cab services goes to a rich asshole in California who has never in his life driven a cab. It's just that, over, and over, and over. That's all silicon valley does now.


> the scale is irrelevant.

Okay, seriously? You don't care whether 100 or 100,000,000 people die? You don't see ANY relevant differences between those two cases? It must be perfect, or else we haven't made any progress at all?

I don't think I can help you understand the world if you really can't understand the difference there


You take ONE SINGLE POINT out of that entire post just to bitch about me making perfect the enemy of good?

My point isn't that 100 people dying isn't preferable to 100 million people dying. My point is that the 100 people died for stupid, stupid, stupid reasons. Specifically, the ongoing flexes of the West over the exploited Global South.


Overall, I think you make a fairly convincing argument for all sorts of social changes - the problem is, that's not actually what you're advocating for.

> We ALL deserve progress. We ALL deserve to earn a living commensurate with our skills, and we ALL deserve to be supported, housed, and fed, and we already have the resources to do the vast, vast majority of it.

This is a great argument for UBI, or socialism, or... well, see, the problem is precisely that you never actually define anything actionable here. You've successfully identified a major problem, but your only actual proposal is "oppose AI artwork".

The problem is, "opposing one specific form of progress" doesn't actually do much at all to fix the issue. And indeed, if we had UBI or increased socialism/charity programs, then we wouldn't need to stop ANY form of progress.

And, of course, fixing the underlying issue is incredibly hard. We've tried Communism twice and proven that it's vastly more destructive. The Nordic Model seems to be doing well, but there's all sorts of questions on how it scales. And you're not actually proposing anything, so there's no room for the real, meaningful debate about those methods.


That's not a steelman. At the very best:

- All content was viewed and learned from, which is an ethical (even good) use of any content that has ever been released to the public.

- It gave jobs to third-world laborers.

- It benefited us, making some of us everymen more productive and able to build and create in ways that we weren't able to before.

I suspect you don't agree with all the above, but that's more like what a steelman argument should be.


This is a bad take, chief. You're not a smol bean. If someone is telling you that the technology you are using is harmful to many people and to society as a whole the least you could do is to make an argument that either those harms are not what is being claimed or that there are significant benefits that outweigh the harms. "Don't say it's bad, that makes me feel bad so we shouldn't talk about it" is both a weak and useless position.


> I'm not a fan of this hyper aggressive line-in-the-sand argumentation about fossil fuels that pushes it all precariously close to culture war shenanigans. If you don't like a new technology that is perfectly cool and your right to an opinion. Please don't position it so that if I want to use fossil fuels I have to defend myself from accusations of polluting the air and the environment.


I'm not sure what you think you did here, but juxtaposing climate change with copyright squabbles really brings out how much of a first-world-problem such squabbles really are.


Are you going to pretend that the emissions caused by the enormous usage of energy by ML training and inference are not a thing?

Also, I wouldn’t morally have a problem with AI companies violating copyright if they weren’t hypocritical about it and open-sourced their software.

Anyways, the main message of the analogy is that you can’t just wave away the moral responsibility for the consequences of your actions. It wasn’t supposed to be a comparison of severity.


You should have to defend yourself if you are going to use this unreliable, untested, irresponsible technology.

Everyone who wants to do things that completely ignore the reasonable concerns of their fellow citizens should feel some heat, at least.


My boss “writes” policies etc using ChatGPT. They’re generic, overly wordy and say nothing of original thought. I don’t read them and don’t care. When he sends me a link to them in MS Teams I always click the first auto-suggested response Teams offers, like “Looks great!” Machine v. machine.


There's a part of me that thinks that this is actually what's great about tools like ChatGPT.

Such a huge percentage of typical intra-office communication is neither worth writing nor worth reading. It only happens because someone who will neither write nor read it has mandated that it must happen, and it's easier not to argue. Farming that work out to GPT, though, is excellent damage control. It minimizes the cost to write, and, as long as you can trust your colleagues not to do something antisocial like hiding important and original thoughts in the middle of a GPT sandwich, almost eliminates the cost to read.


But with these tools you can do much more of those things. Instead of damage control, it might take the problem to a whole new level.


Yeah, I like the concept: bullet points in, lengthy email out, just to be translated back to bullet points at the other end. Maybe eventually we just skip all the filler shit and bullet points become the norm. Of course, we can have AI help us brainstorm those too.


That's an interesting thought: whereas previously technology wrapped computer protocols (I send "hello" in chat and the computer wraps and unwraps it in TCP for me), in your example we have the AI wrapping the message in a social protocol.


That's hilarious. Someday we'll all be working QA, just scanning over AI output for issues, like manufactured goods passing by on a conveyor belt.


Well at least that’s better than being QA’d by the AI.


I hope at some point people will realize you can replace 95% of AI applications with a simple, stupid, very efficient interface. Guys, you are just cutting out human interactions. We don't need AI for that. If it's AI end to end anyway, you can skip all the talking and just transmit whatever information directly. Not just text, either; this extends to almost everything: images as decoration, whole websites enabled by super extra AI productivity. There is no point to human-facing communication anymore when everyone has an AI to parse the AI.


I wonder what he feeds the AI. Maybe some nice concise bullet points, which are what you’d probably want.

Maybe we’ll end up with dual layers of AI: one to expand out top-down requests, another to compress them to something efficient to read.

We can also have engineers prompt the AI: write me a weekly status report on what I did. Here are my git commit logs and jira points. Emphasize (some topic).

Then the owners can have the AI summarize those reports.

Whole layers of middle management might be in danger.
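
Something like this round trip, as a toy sketch (the llm() helper here is hypothetical, a stand-in for whatever model call you'd actually use; here it just echoes the payload so the sketch runs end to end):

    # Toy sketch of the expand-then-compress round trip.
    def llm(prompt: str) -> str:
        # Hypothetical stand-in for a real chat-completion call;
        # it just echoes the payload after the instruction line.
        return prompt.split("\n", 1)[-1]

    def expand(bullets: list[str]) -> str:
        # Sender side: inflate terse bullet points into a polite email.
        points = "\n".join("- " + b for b in bullets)
        return llm("Write a courteous, professional email covering:\n" + points)

    def compress(email: str) -> list[str]:
        # Recipient side: boil the email back down to bullet points.
        summary = llm("Summarize this email as terse bullet points:\n" + email)
        return [line.lstrip("- ") for line in summary.splitlines() if line.strip()]

    # Two model calls whose whole job is to undo each other:
    print(compress(expand(["ship Friday", "need sign-off by Wednesday"])))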


I read an article about this sometime last year.

Basically AI is the exact opposite of compression.

We take what should be a few bullet points, turn it into some overly wordy bullshit with AI, then the recipient uses AI to turn that wordy bullshit back into a few bullet points.

And it costs a ton of compute to do this.

Kind of insane. I hope society evolves to work smarter.


I forgot who said it, but it's rude to send someone something that took less time to write than it takes to read. I think that's a nice line in the sand.


I think the compute cost is going to trend toward net zero eventually, either through green energy or, hopefully, better technology. Our brains are very efficient computers; if we can build a computer that works like our brains, or figure out analog (light-based) computers, etc., we could maybe get 1000x the compute for the power of a flashlight.


In the future the prompt will be attached as metadata for the recipient's AI to print when asked to summarize.


As long as they provide the model and understand that they are responsible for the output, seems fine.


You think it's insane, but you forget a company made money both ways. The fact that everything we do at all is dependent on a company making money in the process is what is actually insane.


Finally we can create corpospeak nonsense at an accelerated rate and destroy the environment at the same time! A new era of productivity is upon us.


A guy I know is a manager and a huge AI proponent; he was telling me he writes one sentence about someone for a performance review and then has ChatGPT blow it up into multiple paragraphs of machine-generated corpospeak. I guess if his subordinates want to survive, they'll have to use ChatGPT to summarize that back down to the one sentence.

This whole exercise reminds me of the two economists paying each other to eat piles of crap.


Just pass them into ChatGPT and ask for a summary - problem solved!


> Just pass them into ChatGPT and ask for a summary - problem solved!

Ask for an overly polite answer instead. You don't need to mention that it should be wordy. GPT will take care of that naturally.


His mistake was not instructing it to be terse. AI output doesn't have to be more annoying and less dense than human output.


>My boss “writes” policies etc using ChatGPT. They’re generic, overly wordy and say nothing of original thought.

Isn't this good for a workplace policy?

My wife had to do something similar and I'm happy it was overfit. I don't want some creative policy book.


I find that it is just real wordy by default. I commonly tell ChatGPT to be succinct.


I love Bing Chat, especially for the graphics it creates, but I hate how every message starts with "I'm Bing and I can help you with that"... I get it.


Do you normally read policies that weren't AI-generated? I mean, how many people ever read the TOS or user policy?


There's a rather good SMBC comic (from the same person who wrote "A City on Mars" recently) about that sort of thing and where it leads: https://www.smbc-comics.com/?id=3576 Seems more and more prescient by the day.


I think the funny result of our ML work is that we are essentially bringing the value of being online down, and will eventually force people to interact IRL as the cost of verifying these generations becomes too high. That is, we are going back to pre-Internet-era interactions.

I agree with the author, but I also don't see lower usage of all this already meaningless human-produced content as inherently bad.

The hope is dim, but I do wish being online were restricted to strictly work-related purposes and we were forced back to human-to-human interactions as the primary modus operandi. We'd see depression and polarization rates go down significantly. These online community feedback loops are too toxic and bring little to the table.

If nothing online can be trusted, then only offline can be the way to go, up until you people (the ones screaming "Luddites" at everyone 'normal' here) decide that we need to start augmenting our bodies to make a better future, or in other words, to make profit to line your pockets.

I work in ML btw (for a loooong time).


Well said.

> If nothing online can be trusted

I think we're already pretty much there, except that it isn't just online. This is an all-media problem now.

(I work in ML as well)


Were we ever in a position to "trust" media?

To me the advantage of having the internet was to allow a range of people without prior permission, a large sum of money, or more free time than sense to start publishing.

It was all seemingly meant to expand the number of voices in the "media" and reduce the requirement to put your trust into any one outlet. We took a wrong turn somewhere.


Sorry, I was unclear. I meant "media" in the general sense, not "The Media" in the sense of media companies. "Media" includes the internet.


No... I know. I was saying that since alternative formats clearly don't improve access to the "truth," sheer volume and open access are your last resort, and to the extent the internet was supposed to bring that, it has been stunted somewhere along the way into the half-penetrated and half-captured version we have now.


I think you're underestimating the benefits of having a social community online. I've found and made extremely close friends and even partners online, and they have been utterly life-changing for me. I'm making/have made (in different cases) herculean efforts to be able to be with them in person permanently, because obviously in-person interaction is better, but discovering people who are so good for me and fit me so well was made possible by the internet, because such people are rare for the kind of person I am. I would not be nearly as well adjusted and happy in my life as I am without the close friends that I've made online. They mean a lot to me.

I think the problem with the current online landscape, including polarization and the generation of meaningless content, has more to do with the specific form most online social spaces take, where it's this intensely public popularity contest, instead of something more like IRC, where it's generally pretty private and ephemeral and limited to a small number of people. That, and the rotten incentives social media companies have to take advantage of their users.


I've been thinking about this a lot the last two years as someone who also grew up with social community online. It was life-changing for me too, but sometimes I catch myself wishing I could have had these experiences offline instead.

I read this a few months ago and I still think about it all the time. Curious about your thoughts. https://maya.land/monologues/2023/08/12/social-media-chalk-m...


That was actually an amazing read, thank you so much. It reflects my thoughts on the matter pretty well. To use the analogy of the article, there are certainly some versions of social media that are poison, and online interaction may be less "socially nutritious" than in-person interaction ceteris paribus, but things are rarely ever ceteris paribus! You have to take into account the relative barrier to entry of online interaction versus in-person interaction, because the alternative may well be online or nothing, since the barriers to anything else are far too high for an individual, and you have to take into account the possibility that the available in-person interaction for someone may in fact be non-nutritious or poison itself.

Likewise, depending on the way you use the internet to socialize, it may be more or less socially nutritious: interacting on something like Instagram is basically poison; Twitter, Facebook and Tumblr offer almost no nutrition at all; and forums and medium-to-large IRC chats (or Discord servers) may offer significantly more over time, or none at all, depending on how they work. Conversely, a small IRC chat, Discord, or group chat of close friends that you met online elsewhere and consciously gathered over time into an intentional community of people who all intimately know each other and share every day's victories and defeats, hopes and fears, traumas and healing, art and jokes, means a whole lot more, even if being with them in person would be better. That last option, where you use niche-interest online communities to find people, but then graduate them to something "online" but far more intimate, with maybe even the goal of living near each other one day, is rarely pointed out, but it's something I started to intentionally do four years ago and I've found it's by far the healthiest option.


Really beautiful comment. I agree wholeheartedly with all you said about the last and healthiest option, I've been intentionally doing it too and it's been rewarding, even healing :)


Plot twist: that article was written by AI.


Well, was it? If you can't show that, this isn't an interesting, clever comment; you're just imagining to yourself an alternate universe where you're right. And I'd be surprised if it was, since LLMs are basically incapable of generating something that isn't vague, generic, and in line with common/average sentiments, by virtue of their reward function. AI writing might be indistinguishable from shitty human writing, but not from good human writing.


Whether or not they were joking, an AI did not write it. (I did; thank you for your comments!)



Disney has this floor that moves when you do, i.e. you feel like you're walking but really you're walking in place.

We're very close to holodeck technology: AI-generated scenes and whatnot. AIs right now are single agents or groups of single agents; if they become a hive mind, they could create worlds where multiple users experience the same thing from different viewpoints, essentially lucid dreaming or the metaverse. I mocked Zuckerberg for his Meta shit and hoped they'd be the last to figure it out, but the open-source models from Facebook seem to me to be how you speed up building a metaverse.

I don't know if there's 4D chess going on, but I do think we'd be progressing a lot slower if everything was closed source. I'm glad it's open, though I'm a little nervous about what terrorists and despots might do with the same technological access.

My point being, our "outside" might really end up inside virtual worlds. We might even have real jobs there. Imagine if we could order some food dish and a real person prepared it virtually, like they would in the real world, and you had a replicator device actually make it for you... kinda creepy, but possible, I guess.

Ready Player One is about to be reality, I think.


> I don't want music recommendations from something that can't appreciate or understand music. Human recommendations will always be better.

I find Spotify's "Discover Weekly" list to be generally pretty good. Sure, there are some songs I don't like, but there are often 3-4 great songs each week that get added to my regular playlist.

It's all well and good to say that human recommendations are better, but I'm not paying someone $50 per week to spend 3-4 hours finding me new and good songs. I get something that is maybe 80% as good included, and the reality is that is good enough.

I feel like one of the reasons AI is doing well is that it doesn't need to be better; it just needs to be "good enough" at a fraction of the price.


>there are often 3-4 great songs each week that get added to my regular playlist.

I'm earnestly uncertain that a system with near total access to your listening history producing 3-4 great songs per week from the corpus of all human musical endeavour can be considered a "good" effort. Particularly when the 3-4 recommendations are jammed into a 60 minute playlist of otherwise questionable quality.


It’s better than my effort at finding music, and it makes Mondays a little nicer. My goal isn’t to find the best music ever, but to find new and interesting music more easily.


Have you compared against random sampling?


If it's randomly selected from the entire music library of Spotify, then it won't be good. Most of it would come from a long tail of bad or niche stuff.

For a practical example see https://en.wikipedia.org/wiki/Special:Random - odds of finding an article that's both interesting and high quality are low.


Spotify (and most other services) actually have a mix of human and algorithimic recommendations in things like discover weekly: https://www.theverge.com/2015/9/30/9416579/spotify-discover-...


I wish I had the citation handy but they also put sponsored content in your 'recommendations' even if you're a paying customer. Rubs me the wrong way.


Honestly, I believe Spotify would offer me more accurate recommendations if they relied solely on AI, without human input. Their current DJ features, although supposedly based on my previous listening habits, often suggest popular songs I don't listen to, tracks supposedly reminiscent of my school days that I've never heard before, and genres that I'm not interested in.


I recently signed up for Qobuz, which costs the same or roughly the same as Spotify. They have a significant amount of recommendations, writing, etc written by actual people. It is of vastly higher quality than anything automatically generated by Spotify. I've only occasionally found something I like through Spotify but have already found many things I like through Qobuz.


You don't have to pay someone $50/wk. You allow users to have friends, and then you recommend based on what they've been listening to.
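
As a sketch of what I mean (the function name and data shapes here are made up, purely illustrative):

    # Minimal friend-graph recommender: rank the tracks your friends
    # play most that you haven't heard yet.
    from collections import Counter

    def recommend(my_history: set[str],
                  friends_histories: list[list[str]],
                  k: int = 10) -> list[str]:
        # Count plays across friends, excluding anything already heard.
        plays = Counter(
            track
            for history in friends_histories
            for track in history
            if track not in my_history
        )
        return [track for track, _ in plays.most_common(k)]

    # recommend({"song A"}, [["song A", "song B"], ["song B", "song C"]], k=2)
    # -> ["song B", "song C"]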


"AI output is fundamentally derivative"

By that definition, so is all human output. Musicians spend years studying and practicing other people's music before writing their own, painters spend years trying to replicate techniques before mastering their own, programmers spend years reading other people's bugs before authoring their own. All expression of skill is ultimately derivative. Sometimes slightly, sometimes verbatim, sometimes outright plagiarism.

This is why we reserve "derivative" for cases where the output has similarities and an obvious connection, and why we have a hard time dealing with it in practice - it's impossible to disallow a human from using past experiences in future works.

We taught a pile of melted sand to think using principles of learning (very roughly) similar to ours, and now we get upset that it worked and that they apply what they learnt, because apparently only we are allowed to do that.


People act as if we are drowning in War and Peace level masterpieces that AI is going to displace.

There is absolutely nothing going on culturally.

"I am not into all this derivative AI crap. I like the creativity of humans. Do you want to go see Spiderman part 37 or Superman part 22 this weekend?"

We are already a culture that has been displaced by completely uncreative, derivative art and useless gadget-making for cash. Maybe someone can even sell their useless gadget so they can afford to go experience some other place and culture full time.

AI is the only hope I have left for this culture.


> There is absolutely nothing going on culturally.

Old man yells at cloud.


All human output is much more derivative than the majority of "creators" dare to admit.

Nevertheless, good human output always adds something new and original to the elements that are derived from prior art.

AI output consists entirely of derived elements, which at best may happen to be mixed into a combination distinct from those already existing.

Even when the combination is new, it is distinct only due to randomness, without the human ability to select which of the possible random combinations is more suitable to a purpose, more beautiful, or better according to other such criteria that are not possible, at least yet, to be judged by a program.


> Nevertheless, good human output always adds something new and original to the elements that are derived from prior art.

My personal opinion is that even the new and original elements are derivative. Once a creation derives from a sufficiently large number of sources - some unrelated, such as being inspired by music when painting - you consider it original and new; and once the space of experiences and current inputs you derive from grows sufficiently large and chaotic - including, in particular, that inputs and outputs become experiences, forming a feedback loop currently lacking in machine learning - you get what we consider "free thought". That's at least the model I believe in.

> without the human ability to select which of the possible random combinations is more suitable to a purpose, or ...

Untrained humans do not have this ability. It takes training to identify and categorize things. For example, my mother may appreciate a photo that most people would consider "good", but she does not have the practice needed to either frame the scene herself or select the best framing to be deemed "good". At the same time, others might outright dislike the same picture - "good" is not exact in the first place.

I see no reason to believe that our models couldn't do this as well as or better than us. If an LLM generates reasonable responses, it must already have applied a standard of "best fitting". It is just not a distinct step, just like how we are not manually filtering our thoughts as we say them.




The author’s point about environmental cost is a frustration I share. Most people in the tech industry are at least somewhat concerned about the environment, but their use and endorsement of technologies don’t follow at all: cloud technologies running in massive over-provisioned datacenters, LLMs consuming more energy than some countries, etc. It’s totally cool to strip-mine the Earth, clear-cut forests, and burn a bunch of coal when it’s for fashionable things, right?


I'd rather shoot for a world where clean energy is abundant than abandon the benefits of ever increasing computational power. Thankfully, this is the world we are trending towards, and not the world of 'humans are a virus' self loathing, even if the toxic mindset is having a moment.


What consumes more energy in AI, training or inference?

It seems to me that training ought to be a highly shiftable workload, a great fit for intermittent green energy sources.


The expensive bit is not the electricity, it's the cost of the GPUs, so they will train flat out.

There is also a massive amount of competition between the various players, and nobody taking part cares about the energy use (otherwise they wouldn't be doing it), so I don't see this happening.

(Rough numbers: an H100 draws 700W; 0.7 kW x 8,760 hours/year = 6,132 kWh, which is about $613/year at $0.10/kWh, while the card itself costs an estimated $25,000-40,000.)


Typical datacenter power cost is closer to $0.04/kWh.

Datacenters mostly use renewable clean energy as well. Environmental concerns are deeply misplaced.

10 minutes of compute in a datacenter is far more environmentally friendly than 10 hours of work by a human.


> Datacenters mostly use renewable clean energy as well. Environmental concerns are deeply misplaced.

Source? I didn't find much. Google talks about buying "carbon-free energy," which isn't bad, but ultimately just moves the carbon to other uses. Energy is also not the only resource being wasted.

I also tend to think that the way people talk about the energy use of technology is misguided, but ultimately datacenter energy use does likely form a significant part of my personal carbon footprint, which I care about, so it's quite reasonable to argue against finding new exciting ways to use energy when, to me, the benefits are so often unclear or negative.

> 10 minutes of compute in a datacenter is far more environmentally friendly than 10 hours of work by a human.

I don't think this is a good way of looking at it. Lots of things that a computer can do quickly would take a human much longer, but that doesn't mean they are good uses of resources. A better solution is to do neither, and this applies to a lot of what the article talks about.


>A better solution is to do neither

aka "degrowth"

That's a non-starter. Doing more is a given. The best we can do is to do more, efficiently.

Doing more with datacenters is the reason US economic productivity/output has soared in the past 3 decades while per capita energy usage has been stagnant.


"do neither" is not the opposite of "doing more". i'm arguing for good, productive uses of our resources


Why is "doing more" a given? Why do we need infinite growth?


Nice, we can just run the ML model—how do you put humans in suspend mode? Or do we just shut them off for now?


Energy is not really all that bottlenecked.


So silly. This reminds me of anti-computer rants from the 1980s. People confuse "AI can be used to generate garbage content" with "All AI-generated content is garbage".

I have 2 artist/photographer cousins, and both of them are raving about AI. I've seen the results: they make good use of AI as a tool to augment their talent rather than to replace it. Sometimes they spend an hour going through AI-generated/altered content, but that saves them 10+ hours; the artistic input here is in operating and curating the AI-generated content in ways that an untalented individual wouldn't be able to.


It is true that some people use AI as a tool to jumpstart, enhance, or refine their own works.

It is also true that the vast, overwhelming majority of people simply take the terrible, horrible, no-good vomit that spews out the end of the AI pipeline and pollute the world with it.


This was the case beforehand as well though? Now they are just better at it.


Anytime someone sends me something generated by ChatGPT, I think about how AI expert Hilary Mason puts it: "By design, ChatGPT aspires to be the most mediocre web content you can imagine."

https://nwn.blogs.com/nwn/2023/03/chatgpt-explained-hilary-m...


This resonates with my experience using Copilot. It generates lowest-common-denominator code and displays a stunning unawareness of language and library features. Some of that is understandable due to the training cutoff (but still very frustrating), but it also refused to use Pillow functions that seem to have been around for a decade, instead crufting together some shitty pipeline by hand.


> it also refused to use Pillow functions that seem to have been around for a decade, instead crufting together some shitty pipeline by hand.

Definitely passes the Turing test.


I sometimes need this. It's nice to get an average of all the mediocre content summed up as a bullet list.

Sometimes I want the average tourist guide, the average recipe or the average answer. Now I get it instantly without sifting through ad-infested shallow content. ChatGPT made it so much easier to be curious about new things. It’s a good trailhead for curiosity.


Point of note: this is all well and good until these tools become inevitably ad-infested.

I'm starting to think we need to add "advertising" to the list (Death, Taxes, etc).


It's only a matter of time


Benedict Evans often invokes the analogy of the concept of "infinite interns", which is pretty apt, at least in the current state.


Feels like "crypto" where if someone says "I am not interested", "I do not like it", or something to that effect then commenters suddenly appear suggesting that this is somehow unacceptable. Look at the top comment.

Like crypto, it seems some folks have bet on "AI", and are spooked by any hint of skepticism.


The issue is that both your statements are ultimately very subjective opinions first and foremost.

It's absolutely fine to hold those, but then people extrapolate this to things like: "The "benefits" it provides to end users are, at best, dubious — though everyone responsible for creating it will most certainly enrich themselves." And those are presented as objective value judgments, which ought to require somewhat more than personal opinions to back up.

Why does it not have value for end users? Why is it bad if people creating those tools become rich? But we live in a day and age where everyone thinks that their mere opinion needs to be heard and has objective value.


The end users don’t want it. There’s no need to get defensive about that.


I am an end user of AI in many cases.

I use it to generate graphics, summarize notes I have drafted, explore topics (instead of searching to some degree), etc.

I absolutely want AI driven tooling in place of the manual / tedious options.

So who is this "end user" that is invoked all the time?


"hyper-aggressive", "loud"

It's a blog post, there's no audio or video.


Loud and public objections are the opposite of not caring.


I personally think AI will be at its most powerful not as some generated output, but as invisible glue that binds parts of a larger system.

AI (specifically the LLM variety) should be performing small tasks via agents and then using structured output to allow those agents to pass information around a larger system. There are, as an example, countless zero/few-shot classification tasks where LLMs crush traditional ML. You want user tickets routed to the correct rep? That sounds like a task an LLM should be doing behind the scenes (rough sketch at the end of this comment).

Code gen as an output is likewise boring; agents that adaptively learn to code, generate tests, allow for debugging, etc. - that has the potential to be very powerful.

Unfortunately I still feel the next AI winter will hit before people even really scratch the surface of how these tools can be used in practice. Everyone is tragically trapped in the prompt -> output model, rather than really thinking about agent based approaches.
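
To make the ticket-routing example concrete, here's a rough sketch (llm_json() is a hypothetical stand-in for whatever structured-output call your provider offers; it returns a canned answer so the sketch runs):

    import json

    TEAMS = ["billing", "auth", "infrastructure", "general-support"]

    def llm_json(prompt: str) -> str:
        # Hypothetical stand-in for a structured-output model call.
        return json.dumps({"team": "general-support", "confidence": 0.55})

    def route_ticket(ticket_text: str) -> str:
        prompt = (
            "Classify this support ticket into exactly one of "
            + ", ".join(TEAMS)
            + '. Answer as JSON: {"team": ..., "confidence": ...}\n\n'
            + ticket_text
        )
        result = json.loads(llm_json(prompt))
        # Low-confidence or malformed answers fall back to a human queue.
        if result.get("team") not in TEAMS or result.get("confidence", 0) < 0.7:
            return "human-triage"
        return result["team"]

    print(route_ticket("I was double-charged on my last invoice"))
    # -> "human-triage" with the canned stub; a capable model would
    #    presumably return "billing" with high confidence.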


IPcenter by Amelia (formerly IPSoft) was like that: it could use Bayesian statistics on incoming events/alerts to determine where to route a ticket. This would only work after a few tickets with roughly the same content had been routed manually.

One issue with this was that it learned that a particular database event would be routed to team_a after an incident. The next time similar tickets were raised, they would be routed to team_a incorrectly. This was an issue since events/alarms tend to look the same for, e.g., an application database, and the organisation would route tickets to each application team first - not the centralized database team.

It had "virtual engineers" which could do investigation (collecting logs etc) and remediations (basically scripts) too.

https://en.wikipedia.org/wiki/Amelia_(company)


This is a pretty naive argument. People use AI to trivialize mundane tasks and make them more accessible.

> If you're having AI write your email, it probably wasn't that important.

There are plenty of situations where extra politeness and wordiness have a real, measurable impact on the outcome of the communication, even when the actual information you're trying to convey could be compressed into one sentence. AI can wrap your thought in an appropriate amount of polite, meaningless bullshit, solving a real-world problem. It can do it much faster and better than I can. Even when the recipient knows or suspects it was composed with the help of an AI, the rules of engagement and the social contract have been upheld successfully.

Maybe this qualifies as "not important" in the mind of the author, but I'd argue that if the outcome of such emails can seriously affect your personal or professional life as it often does, it is important to me.

> I don't want music recommendations from something that can't appreciate or understand music. Human recommendations will always be better.

This is just plain demonstrably false. Humans recommend what they think you'd like based on an imperfect overly biased and incomplete simulation of you in their heads. AI doesn't give a fuck and recommends stuff similar to what it knows you like.

> The images it generates are, at best, a polished regression to the mean. If you want custom art, pay an artist.

Again, my mileage probably varies from the author here, but I have infinitely more scenarios where I don't want custom art but abundant and quick polished regression to the mean. AI can give it to me. It's silly to argue that it has no use.

I generally agree that AI often doesn't deliver what overly enthusiastic marketing claims it does, but it still can do amazing things that are of real use to a lot of people.


Usually these anti AI outbursts come from designers. This is the first one I've seen coming from a developer. Or at least the first with such salt, bitterness and anger.


For what it is worth, I'm a developer and I'm not interested in being exposed to AI-generated content either. I work with students every day and probably have a more "real world" perspective on AI-generated garbage than most.


Honestly, Javascript front-end developers are probably more at risk of “being replaced” by AI than designers. This is probably where this sentiment is coming from.


Why Javascript front end developers specifically?


I wouldn’t say it’s “Javascript front-end developers” specifically, but front-end developers in general. Intuitively, I’d say it’s because front-end development sits in the sweet spot between the creative side of the designer (who sets the tone of a website/app through their design) and the technical aspects of the back-end. An AI is most likely to understand how to interact with an existing back-end while respecting the “creative constraints” placed by the designer. To use a dumb analogy: if I had to use an AI to help design an airplane, I wouldn’t let it touch the flight control system or generate the movies and music for the in-flight entertainment library, but I’d give it the task of making the HMI for the in-flight entertainment system. Worst-case scenario, it’s going to be 0.01% less clunky than the existing ones.


AI is wayyy better at writing backend code than it is at writing CSS. CSS is magic to it; if you don't have an exact problem that you could probably find on Stack Overflow just as easily, you are fucked.


This is not an attitude that will tolerate the changes to come. See: Chrome adding AI to every text field.

AI will, at the least, become an extension of creatives: not something that does the work for them, but something that makes it possible for them to create better.


> Chrome adding AI to every text field.

Horrifying! I'm going to hazard a guess that it won't be running inference locally... Should we tolerate changes like this?


>Should we tolerate changes like this?

We aren't being given a choice.

But of course, if you refuse to tolerate AI, much less welcome it with open arms, you're just a small-minded Luddite.


We'll see how the US courts rule on it. I personally think it's unlikely that they cause any significant roadblocks to GPTs, but you never know. I know many people that find the use of GPTs to be morally reprehensible and my own employer bans the use of them over legal and data exfiltration concerns.


I remember VRML.


If you don’t want to have what an AI generates then don’t use it. I do agree with the sentiment that the addition of “AI”, which ranges from a rebrand of what was already there to the integration of LLMs, is at the moment only somewhat helpful and obtuse. But really, your new systems shouldn’t be thin front ends to gpt4; they should instead be something far more tangible.

Output dashboards or reports or aggregate data. I have my own project which is a thin shell over gpt 4, but I tried experimenting with an SMS UI that, while it only has question-and-answer dialogs, presents the information in a different way. Think of what it can enable.


> If you don’t want to have what an AI generates then don’t use it.

If only it were that easy; the flood of other people's AI-generated content clogs up anywhere it's not laboriously moderated out. DeviantArt, for example, has become more or less 99% AI content by volume over the last couple of years and is now basically useless if you're not interested in having a firehose of generic AI images blasted at you. I've seen people complaining that speciality groups for hobbies like crochet, interior design or car photography are overrun with fake AI images. Search engines are full of fake AI images and GPT-written SEO farms. Twitter is full of GPT-powered bots. It's everywhere, regardless of whether you deliberately engage with it.

Not that I think complaining is going to fix anything, we've irrevocably broken the signal-to-noise ratio of the internet by building an infinite noise generator, for relatively nebulous benefits in return.


Yep. Just because I don't use copilot doesn't mean I'm not stuck reviewing a bunch of copilot code


"You may not be interested in AI, but AI is interested in you."


> your new systems shouldn’t be thin front ends to gpt4

> I have my own project which is a thin shell over gpt 4

Physician, heal thyself!


> If you don’t want to have what an AI generates then don’t use it.

The author is writing about the sorts of AI outputs that other people and organizations are passing to him: chatbots, generated emails, phony presence at a meeting, and so on. Those use cases are a bit more like relatives who send their DNA to untrustworthy companies for analysis: you personally saying no for your own use doesn't actually mitigate or even affect the negative externalities imposed upon you by widespread general use.

I do agree that, for many use cases, personally opting out of junk generative AI is sufficient. But I'm not looking forward to a world flooded with low-quality AI outputs that become impossible to avoid sifting through in all areas of life.


I can see how a broad declaration like this may feel bold/freeing, but if you really put your mind to it then it seems pretty easy to pick apart.


Then pick it apart?


People don't care if your product is made with AI, from games like Palworld to drug research; people care that your results are good. Generative AI in general is bad, but used with professional skill and knowledge of the subject matter, it is empowering.


There's no evidence Palworld used AI for anything. The only reason it's come up is that the CEO commented that they liked generative AI on Twitter, and certain groups picked that up to try to build an anti-Palworld campaign by claiming it used AI to generate characters.


I’ve picked it apart myself elsewhere in another reply. Enjoy.


> I don't want music recommendations from something that can't appreciate or understand music. Human recommendations will always be better.

Ok but that's not actually true. "AI" music recommendations have been better than human ones for years, IME. AI art might not be better yet, but it's often good enough.

> If you're having AI attend a meeting for you, it probably wasn't that important. If you're having AI write your email, it probably wasn't that important.

100%. No-one actually cares about those meetings or emails. But apparently they're mandatory, so shrug.

Replacing human drudgery with AI is good for everyone. And a lot of meetings, emails, and even illustration work is drudgery. If you think AI can't be original and creative, then original and creative work will be precisely the part that's left over for humans to do, which seems like a win to me.


I don't see how this is an unreasonable take.

Deep down, all of us know that generative AI's main use case is going to be lowest-common denominator stuff like customer support chatbots, SEO-optimized copy writing, deepfake video ads on social media, Canva templates for social media posts, just accelerating the pace at which marketing noise is being created. That's where the market is trending and where investment dollars will flow to.

I'm sure it'll have good uses, as most computing advances do. But on the consumer side of things, let's be real. Generative AI is going to deliver value in the same way Uber delivers 'ride sharing'.


"I don't like CGI"

Same idea. What you actually don't like is bad CGI. Done well enough to be undetectable, you can never object to it.


Maybe the author does not like the idea of CGI in their movies?

> What you actually don't like is bad poop. Cooked well enough to be undetectable, you can never object to it.


cue the regulations about feces levels in food; you'd be surprised


I get it, but what if creativity isn’t your use case? Why so one dimensional?

I don’t want to do corporate training modules, but I don’t care if they buy stock art or generate it using gpt. I don’t want to talk to a customer service bot at all, but I’d rather talk to one that is well built and capable. I don’t want to write cover letters. Now I don’t.

There’s plenty of ways for tools to be used inappropriately, and if you simply stop believing that “AI” exists and try to understand it as another tool, you can avoid falling into the pattern of angst that keeps going around.


I understand the sentiment but I view LLM and generative AI as tools to explore the possible idea space. A human can do this manually, it just takes a long time. But the output is the same either way. Therefore you're reduced to an argument that essentially says you don't like the output only because it was generated by an AI, which reminds me of "what colour are your bits?" https://news.ycombinator.com/item?id=24917679


The problem is that so-called AI isn't just a neutral tool for generating and exploring ideas across the whole idea space, nor is it a particular human being, with particular experiences and talents and biases and ideas and a particular personality, that can generate something interesting and unique. Large language models always give you essentially the average of the entire corpus of human output for whatever input you gave them. So they are always guaranteed to give you the most lukewarm, generic, uncreative ideas and output imaginable.

Just try to get one to write something arguing against a widely held position, or for a unique position. You'll very quickly find that you can't really get it to do that, because that's fundamentally opposed to how its reward function works; of course it isn't going to do it.

The problem stretches even further than what kinds of ideas large language models can produce on their own: if you ask one to expand on and process an idea you've given it, that processing itself will be subject to reversion to the mean. So even if you have a human inputting unique and interesting ideas into the large language model, all you will get out is eventually more generic sludge with all of the interesting aspects scrubbed out. Ultimately, everything created with LLMs will be lowest-common-denominator populist sludge with no interesting ideas or defining characteristics.

And you're fundamentally not going to be able to solve that until you come up with a completely different method with an entirely different reward function. The problem is that the only quantitative, data-based way of determining whether an output is good is to compare it to what's common in the corpus of human output, since otherwise you very quickly get into complex philosophical questions of truth, value, and logic that just aren't legible to a computer system.


I agree with most of this. Of course I would rather speak to a person, but replacing every chatbot with a person will increase cost (my cost). Do I really need a random person to read me a pre-written script when a chatbot can do the same thing? Do you think these people have permission to do things a chatbot can't? A chatbot can escalate your problem to a manager when necessary, too.

You don't want AI music recommendations? Great, don't use it. Go speak to people. Many find discovery algorithms useful.

Want some custom art? Pay an artist. You can do this once a week at most before you're broke.

It's not a replacement for search? Google search was gamed by spammy websites and shitty people for years before AI.

It's idealistic. It's an easy opinion to have. You're not solving any problems.

AI has its place, and it's not meaningful art, a replacement for genuine human interaction, or something that will solve all of our problems. But it turns out that a REALLY GREAT autocomplete is a powerful tool. That's good enough.


While I largely agree with the author's sentiment about not being very interested in AI-generated content,

> AI output is fundamentally derivative and exploitative (of content, labor and the environment)

I believe humans do exactly this as well, and to a greater extent. If you asked me to draw a picture of some mountains and rivers there's a pretty good chance it'll be almost the same composition as some Monet or Ansel Adams or other picture that I've seen before and even I won't realize it. It won't be deliberate, that's just how the brain works, it learns patterns and extrapolates them.


It's still time and effort you have spent though.


Did LLMs not take time and effort to create? There's this ongoing sentiment in the AI detractors that for something to be meaningful it requires a human to have spent time making it. Meanwhile we live in a world where the overwhelming majority of things we see and interact with are already made by machines or are machines themselves.


I agree with a lot of what the author is saying here. If it matters, whatever it is, don’t use AI to directly generate it.

There is one great use of ChatGPT that I don’t think violates any of the author’s principles. When you’re trying to search for something on Google, but you don’t know the words to use, and nobody you know is familiar with the topic, ChatGPT is great at giving you the words you need to continue your own search. These continue to be the most impressive types of “conversations” I’ve had with it.


What about AI-generated educational content? We use AI-generated voice calls to teach languages, e.g. immigrants get to practice speaking a new language without paying for a human tutor.


Sounds great until the "AI" tutor poisons their learning with hallucinations.


downvoted for this lmao, I love that we're just pretending hallucinations aren't a thing


> chatbot

Good luck with that; I don’t want that and never have, but this is only getting worse. Everyone is firing support/helpdesk employees for a $20/mo AI that indeed wastes your time. I wish companies that make money off a user (by payments or ads or selling your data; whatever) would get fined if they don’t offer Real Person support. But it’s not going to happen, and we are stuck with basically worthless AI support for now.


People are interested fundamentally in other people. That is why we read, listen, watch the Internet. It connects us to a mesh of person-hubs where you can find out more about or vibe with the people and ideas that you are attracted to. Maybe you could contribute something.

If the trend continues with LLM-related bullshitting technologies I will stop being interested in the Internet. I will stop coming. Many of us will.

Generated music is as far removed from jamming with friends as kissing a girl you like is removed from gang raping her with five of your teenage comrades over Twitch. Is that a good analogy? I think it is.


What... the... fuck? People do that sort of thing over twitch?


In a year, he’ll have no idea whether anything is AI or Human, or both!


Only in the same way people can't tell what "meat" is actually in the frozen factory sausage.

Powerless resignation is not the same as informed satisfaction.


Your expectations are too high. There might be outcomes where it's hard to tell, but that's already present today. There also will be many more areas where AI's outcomes will be subpar and it will stay this way for a few years.


Is that good or bad?


It is clearly good.


I have zero problem with people using AI to write or create. I think that's great.

But I don't like it when people start a sentence "Well, I asked ChatGPT what it thought, and . . ." If I wanted to know that, I'd ask ChatGPT. Just give me your human views (and if they are partially informed by ChatGPT, that's fine).


AI art does "something" to my perception; I tend to strongly dislike it at first glance, but I am unable to say why, or even whether it was generated.

In general, it makes me want to consume less and less of "this". Which, in a sense, coincides with the drop of media consumption in general.

What we see now is subtle poisoning of content: video games, news, social media, books, music.

At some point in the future, it is going to pass the threshold of not being worthy of attention for almost anyone, so whole industries will collapse.

And we will have to start again, from one soul to the others, with permaban on anything computer generated.

In general, it is possible to subtly integrate computer generation into creative workflows, enabling artists to create something great with less effort, but I am not sure that LLMs are the way to go for this.


Same. It's almost like an allergic reaction. As soon as I perceive a hint of "AI" origin I feel a strong aversion. Probably some uncanny valley dynamic, plus the sense of over saturation with "content" in general, where AI in particular contributes nothing of value.


90% of people have similar tastes, so statistics will make the majority's lives much easier, in my opinion.


I don't want anything your statistics generate


Statistically, yeah, you do. Is it that there's numbers and forethought involved that turns you off, or the meta of it all?


My best guess is that there might be a group of people who fear the fact that their behaviour can be modelled using a straightforward formula, because that threatens their belief that there is some sort of cosmic protection.


I guess you are missing the point. Every great work is an outlier.

Now, the GP is hyperbolic, of course, but statistically safe content isn't great content.


(Sorry, it was meant to satirize the silly title, and in retrospect I should've added an "/s".)


Wait until you learn how JPEGs work


JPEG lossy compression isn't generating new (non-fringe) art; it's just a compression mechanism.


I love this comment


Pick 10 things important to you. Chances are you don't get at least one of them, because you don't matter when you're just part of that 700m-person group. Screw competition, we have stats?


> I don't want AI mediating social interactions

AI-based therapy tools that are popping up lately feel really weird to me


While arguing against the usefulness of ML models, the author of this piece is not thinking of two key concepts. 1) Supply and demand are linked. If the supply is cheap/plentiful enough, this opens up new demand for things that would have been prohibitively inaccessible before. To illustrate, imagine CPU chips were so cheap that they were essentially free, where the most expensive thing about them was shipping them. In that case, new demand might open up to begin using them to pave roadways or something. 2) There is knowledge and wisdom wrapped up in LLMs and recommendation systems that comes not from the computer program, but from the knowledge and wisdom of crowds. This knowledge shouldn't be dismissed as useless out of hand just because it's in a computer.

The argument that computer-generated art is worthless because "if you want custom art, pay an artist" doesn't recognize basic facts about economics. In particular, supply and demand are related; they are almost the same thing. If the cost of generated art is exceedingly low, this enables me to make custom art for things that I wouldn't even consider if the price, barrier to entry, or time-to-generate were higher.

The author also points out:

> I don't want music recommendations from something that can't appreciate or understand music. Human recommendations will always be better.

While it may be true that humans understand music better than computers, the intelligence behind recommendation systems is actually not the computer programs themselves. It's the wisdom of the aggregation of many people, reflected in the numbers of the model. While talking with an expert curator about the next movie/series you should watch, or the next band you should discover would be nice, the truth is it's just not as practical or easy as firing up a computer recommendation system. While I could book an appointment with an expert music curator and go through a 2 hour interview process to find my perfect music tastes, a recommendation algorithm can still provide easy value for me.


>It's not a replacement for search, it simply makes search worse.

I strenuously disagree with this. Search has become all but useless in the last few years. LLMs provide a huge jumpstart on understanding, specifically, what to search on to find the canonical data you are after.


I also have concerns about where mankind is going with AI and believe it will have a chilling effect on original content.

But… I disagree with almost everything this person has written.

The output from LLMs can be wrong. It can be inappropriate or misleading. But it is often astonishingly, breathtakingly accurate and leads us to discover connections and conclusions derived from other humans that we would otherwise have missed or spent much longer looking for.

In one regard, LLMs could not be more human: they are, after all, simply a reflection of ourselves, an entire corpus of human musings, discussions, and interactions. So to say that “an AI can’t advise or suggest X, Y and Z” when it’s mirroring existing human opinions is misleading at best, or even just plain wrong.


Nowadays AI feeds on organic content created by humans. Soon humans will be discouraged from creating organic content, and the remaining humans will feel "smart" by publishing generated content. What will the era of synthetic content look like?


I've started to think about the current use of AI as being like the trend that started in the 1990s of moving unimportant goods to be "manufactured in China".

Quality doesn't matter -- money matters. And with AI you can save a lot of money without sacrificing a lot of quality.


I get it. It doesn't seem to be about AI vs non-AI. I think it's a more telling statement about how useless a lot of the applications are right now. They solve marginal problems in marginal environments, problems that are portrayed by their stakeholders as being of the utmost importance. All so people can go around patting themselves on the back for having used AI to solve a non-problem in the first place.

We conflate a job's duty with a job's problem.

Want to make AI useful? Solve big, real problems with it. Combine it with the learnings of the last 20 years on how to build grassroots movements.

Maybe make a change in society, rather than solving yet another imagined-email problem.


I’ve been feeling the same sentiment recently. It’s one thing to have a cool tool for a very narrow and specific use case, but it’s totally crazy that we’re subjecting actual humans to “AI”. It’s time for a nice game of chess.


Not really the point but AI generated music recommendations have been great for me.


This will probably age poorly.

> it probably wasn't that important

we all do a lot of things that aren't important; that's the point of having AI do them instead

> my robots.txt file reflects this stance

unfortunately yelling it into a pillow is equally as effective
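
For what it's worth, the robots.txt opt-out in question is only a few lines. These user-agent tokens are the ones published for OpenAI, Common Crawl, and Google's AI-training crawlers as of this writing, though honoring them is entirely voluntary on the crawler's part:

  User-agent: GPTBot
  Disallow: /

  User-agent: CCBot
  Disallow: /

  User-agent: Google-Extended
  Disallow: /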


Might sound like an ass, but would it even be possible to tell that something was made with the help of AI in the future?

Sure, with most AI-generated things it can seem obvious that they're... well, AI generated. But if something was generated and afterwards fine-tuned by people, the content would be indistinguishable.

Just to put it out there: although I'm biased towards AI, as I believe it can benefit people, I understand well enough that the people it benefits could be saints or assholes.


> If you're having AI write your email, it probably wasn't that important.

"I hope this email finds you well." floods my inbox. Yuck.

I agree with most of that post. However, for me when I give AI tons of context (and follow up input), it helps me write emails better than pre-AI (given the same time and effort). Not putting an email I consider "important" through the AI sausage grinder is (in my case) malpractice.


I'm in 100% agreement with the poster. I'm a programmer, but also a musician. People make money by doing something better, faster, or cheaper than others. If you give me AI-generated stuff for anything, I'm getting the shitty version, as cheaply and as quickly as possible.

I don't want crap. I'm not poor, I want to pay for quality. I don't want crap customer service, crap support, crap writing, crap music, or crap art. And make no mistake, as much as it's impressive that a computer can make crap, it's still crap. It's polished crap, but crap nonetheless. Banal, generic, mediocre, everything that is uninteresting in media.

The message a company sends when foisting this crap on us is "we'll do anything to boost our margins, including giving you completely auto generated garbage". MARGINS!! ME! ME! ME!

No thanks.


This will age well.


When Gutenberg asked people what they thought of his printing press, they replied that it lacked the soul of a hand-copied book and they would never buy a book printed on it.

Also relevant: https://m.youtube.com/watch?v=pQHX-SjgQvQ&


Have you seen old manuscripts? They weren't wrong.

But "something vaguely similar to this"-type of argument is fallacious nonsense in the first place.


I apologize in advance. I do not know what you are trying to say, so I am going to ask for clarification while answering as best I can.

> Have you seen old manuscripts?

I have seen pics of old manuscripts online.

> They weren't wrong.

Are you saying the old manuscripts were not wrong, or that the people who refused to use printed books were not wrong?

> But "something vaguely similar to this"-type of argument

Can you please clarify? Are you saying I made some sort of argument comparing printed books to hand-written manuscripts?

> But "something vaguely similar to this"-type of argument is fallacious nonsense in the first place.

Because I do not understand what that argument is, I cannot know whether it is correct or fallacious.

I look forward to any clarification. I like learning that I was previously wrong.


> I don't like the idea of it being trained on anything I've written or created[1].

I'm more curious about this part - because I'm the opposite. I'm ecstatic that having open source code on github over the last ~12 years means that you have contributed to machine intelligence. What a wild thought and how super cool that everyone who contributed to the internet also gets to contribute to AI.

I can't imagine being upset that your output into the world is accelerated and incorporated into what eventually will be AGI. It's an honor, really.

And obviously I disagree with the rest of the post. But trying to add something useful to the discussion - why are so many people offended that AI is learning from them? Do they see it some other way? As exploitation? But you are the one who put the content out there! If you didn't want others to see and use and learn from your content, it has always been rule #1 that you shouldn't put it online. Is this not drilled into everyone by this point in time?

> These tools will improve ... I don't think this is a good thing.

First time I've heard this one. Who doesn't want the tools to improve? You don't want to have an option to be excluded from the AI training that you were so upset about? You don't want to have an option to get paid for being included in training? You don't want models developed that don't use copyrighted content? You don't want anything to be better?

You want ChatGPT to continue hallucinating forever? You don't want OpenAI to reduce the harm that comes from it hallucinating?

I find it hard to believe that people who are so viciously against AI really mean everything they say. To so confidently state you want the world to halt progress on the most innovative and useful inventions of our time that can measurably improve people's lives is a hot take indeed.


>Do they see it some other way? As exploitation? But you are the one who put the content out there! If you didn't want others to see and use and learn from your content,

The point is that "others" learning from things is fine. The problem starts when it's a machine acting on behalf of a faceless mega-corporation. It's like the tragedy of the commons, except that instead of originating from an invisible mass of takers, it mostly comes from a small number of known actors like Microsoft and Google. No shit that people will rally against those.

> First time I've heard this one. Who doesn't want the tools to improve?

The reasoning is also given: improving machine learning has an environmental cost. It's also used to deskill jobs (which I don't think is a great argument, although it's not implausible for inequality to massively increase due to deployment of ML). The author also says AI is used to muddy public discourse, which is an understandable fear given that text generation can be used to fake e.g. some grassroots movement.


> If you're having AI attend a meeting for you, it probably wasn't that important. If you're having AI write your email, it probably wasn't that important.

I mean probably but I still have to do unimportant shit to live so might as well have the computer do it.


That’s kind of the point of it too — I just need AI to take care of the unimportant stuff so I can concentrate on the interesting things


Meanwhile, the AI makes up things that are sure to seem interesting to you


AI has been a great help in my little day to day stuff and I'm happy to report I wouldn't go without it.


> The images it generates are, at best, a polished regression to the mean. If you want custom art, pay an artist.

AI can clearly generate very high quality images (paintings, photos, and so on). Certainly not something I would be able to do myself, even after putting in hundreds of hours into drawing lessons.


> It's not a replacement for search, it simply makes search worse.

I am not sure which one is worse: Google search (which ironically contains AI generated content already) or something like Perplexity. For many subjects Perplexity gives much better answers to me (anecdotal evidence).


We're approaching the threshold where a computer chip can outcompute a brain, and probably more cheaply for most creative and logical tasks.

It is better to make peace with the inevitable at this point. The future is becoming increasingly obvious: these tools will pass any variation of the Turing test people care to come up with, and the only hint that computers are involved will be when they really zoom past us and the quality and creativity of the output is so consistently high that humans can't have been involved. We're not there yet, but realistically ChatGPT is already a superhuman intelligence; it is just a broad intelligence, and something with depth is what is needed.

There are questions of exactly when all this will happen (years or decades?) but the shape of the algorithms has basically been nailed down and we're seeing exponential hardware improvements where it seems wildly pessimistic to assume they will hit a wall before overtaking a biological mind.


Is it fair to say: complain if you can pass a double-blind experiment on the art you enjoy?


Your argument being that if you enjoy AI content, you can't criticise AI? Doesn't seem reasonable to me


I think it’s more about the author criticizing AI content as being of low quality. IRL there are many cases where I can’t distinguish between human-generated and AI-generated content.


That's a fair comment, but the article criticises more than just that element of AI. I took your comment to mean that the ends justified the means if one couldn't tell the difference between AI and human content, which it doesn't sound like you were suggesting.


> I can't trust the answers it provides or the text it generates. It's not a replacement for search, it simply makes search worse.

Can you trust search results without AI summarization? They’re mostly SEO spam.

> The images it generates are, at best, a polished regression to the mean. If you want custom art, pay an artist.

Most visual art that people consume is polished regression to the mean. It’s designed for mass appeal, not originality.

> I want to talk to a person, not a chatbot. The chatbot wastes time while you wait for a person that can actually help.

Have you spoken to an outsourced call center in the last, oh, 20 years?

> I don't want music recommendations from something that can't appreciate or understand music. Human recommendations will always be better.

I’m old enough to remember hearing the same popular song 10 times a day on the local radio station because their rotation was designed for broad appeal to maximize ad sales, not to uncover new and interesting music for eager listeners.

> I don't want AI mediating social interactions that it cannot and does not understand (though it may appear to). If I'm weary of too much volume on any social platform or in any news feed, I'll cut back on what I'm following.

This one I agree with, but it’s been our reality for at least a decade now, not a new development.

> If you're having AI attend a meeting for you, it probably wasn't that important.

True

> If you're having AI write your email, it probably wasn't that important.

Even in important emails there is a lot of “copy”, and AI-generated copy is no worse than human-generated copy. It’s just copy.

> If it's screening job candidates for you, you're missing quality candidates.

If the alternative is keyword screening, I’ll take AI screening, thank you.


I've got to break it to you, everything humans generate is derivative.


Would you like AI in a house? Would you like AI with a mouse?


It's very strange how many Luddites inhabit the tech world. I would expect to find this diatribe in many places but not on a developers blog.


It is only strange because you have assigned the label. But reality could be different. Just wrong pattern matching.


I too want these young AIs off my lawn.


Sure you concede Copilot, then you’ll concede translation, then weather prediction, then it’s too late


Yeah, I thought it was funny that he admitted at the end "...except copilot" (quotes added for emphasis; I can't remember exactly what he said). It was so funny that he cancelled his whole premise by admitting that.


Does a forward pass of Dall-E or a human artist making an image have a higher environmental impact?


>I can't trust the answers it provides or the text it generates. It's not a replacement for search, it simply makes search worse.

Why is an AI noob upvoted? When the author makes claims that betray low technical skill, I wonder why this made it to the front page.

The person missed the AI train, and instead of adapting, they decided to just boo it.


I find this kind of dismissal extremely tiring. It's very lazy to wave away criticism of a tool as a skill issue.

On the tin, both claims are true: LLMs do not replace search (they do not link me to any resources); at best they augment it (with services like Phind or Bing combining the two).

Also, search is made worse by SEO spam, which can be churned out more easily with AI, leading to crap like the following: https://web.archive.org/web/20230103091313/https://www.slowi... (which has been edited to be less nonsensical, but remains hilariously out of place). That's not to say that SEO spam wouldn't exist without AI, but this level of carelessness ("a 32 bit wine bottle is a bottle of wine that has been created with 32 bits of data") seems to be new.


It's the same shit bitcoin peddlers have pushed for years. "Have fun staying poor". Insult you, imply it's a personal failing that you're not getting on board with this thing which coincidentally they have invested heavily in.


AI will one day soon run the planet. SORRY


At first I wanted to roll my eyes at the pearl clutching and move on but it actually pushed me in the opposite direction.

I want more AI to automate the entire web, to push us humans out, so that we can stop these cynical platforms from creating more anti-social, anti-moral and obsessive internet personas.

Or the way that, increasingly, we all have to be political pawns of certain influencers; heaven forbid you have an opinion of your own, as that means less money for that poor, poor influencer.

Maybe the internet could have been different; the early days of it certainly showed this, when even the places where anti-social and anti-moral people met could, to a degree, be human.


Using chatgpt to generate work email is an amazing opportunity.

It decreases the thought that goes into writing the message and increases the effort on the part of everyone who reads it. It's a 1:N overhead increase.

If people use chatgpt to summarise before reading, the transform will be lossy, and the already sketchy baseline of communication via text picks up a higher error rate. That's also overhead.

Finally, because it's easy to create this stuff, more of it will be sent out.

We have an unprecedented increase in operational overhead of corporations that go down this path. Merely by not doing this nonsense you gain a competitive advantage.


Too bad AI generation tools will give small two-person brands the tools to compete with multi-billion-dollar brands when it comes to creative content, coders with six months' experience the ability to create the apps they actually dreamed of making, and normal folks the chance to actually see the images and ideas they only had in their heads come to life.

Yeah, that's a really awful reality, isn't it? Joe the plumber should just learn to paint for 10 years to see the art he wanted to see but can't make.

Such gatekeeping nonsense from the tech crowd about AI.


To each their own. I find them helpful in specific use cases, like learning to code.


Psychological asbestos


This seems to be a common argument every time society gets interrupted by technology, especially when the technology advancement has moral grey areas (and they always do).

It turns out there is an entire line of research that parallels the “technology adoption” ethos we all know called, appropriately enough, “Technology Rejection.”

> A fundamental question is whether the field of technology rejection requires investigation at all, considering the wealth of research available in technology adoption. Are they distinct phenomena? Or is knowledge on technology rejection merely a by-product and subset of knowledge on technology adoption?

> “Rejection of technology” may be expressed as a phenomenon wherein a society, ranging from individual users, community groups, through states (nations), capable of availing the service of a particular technology, deliberately chooses to refrain from its use, in full or part. Consequently, some technologies get increasingly used while the use of others tends to ebb. Conventionally, the debate surrounding the divide between technological “haves” and “have nots” has simply quarantined the latter as individual deficits (Selwyn, 2003), typically of a financial nature. However, a consensus is recently emerging within the arena of sociology of technology that conceptualizing nonusers of technology as purely technology “have nots” is too crude an analysis (Selwyn, 2003). As Bauer (1995) highlights, nonuse or resistance to technology has largely been treated with a negative connotation, placing the nonusers at fault. Bruland (1995) states that such resistance to technology is by no means irrational or conservative. Rejecters must thus be dealt with as deliberate rational nonusers.

https://journals.sagepub.com/doi/10.1177/2158244013485248?ic...

The field of research that is technology rejection is fascinating. That paper lays out rejection at the individual, group, and national level as the basis for study, with the original poster’s article being a prime example of individual technology rejection, including the reasons for doing so, and its posting to the Internet and Hacker News actually being an interesting attempt to “find the community” of people who feel similarly.

We’ve seen this before with radio, television and especially the Internet. There were so many people who out-and-out rejected the need for the Internet when it was coming of age in the 90s, and many who still do even as the world has moved on. The joke about the boss who printed every email wasn’t a joke: I knew several of them and still know one who does this very thing.

So what causes rejection of technology and is this a distinct field of research we should be looking at?

Aside: further reading and an example worth looking at.

https://www.researchgate.net/post/Why_some_people_reject_new...

https://www.theguardian.com/commentisfree/2016/dec/19/life-w...


> I really don't. AI output is fundamentally derivative and exploitative (of content, labor and the environment).

From the start, you can see where the author is coming from. This person is some variety of leftist/communist/socialist, and hates AI in the same way that their ilk hates any technology that reduces the need for (some) human labor.

And, to their point, technology does have winners and losers. AI certainly has losers. Developers with this mindset will be among the losers.


The winners will be the entities that control the best models. It's not a matter of mindset.


Return to monkey


ok


That's a tremendously ignorant post, and the author clearly doesn't understand AI and all that it encompasses.

Does the author also think that the AI/ML-based AlphaFold is shit?

This author just shouldn't post such strong opinions about things they clearly have no knowledge of.


[flagged]


> Modern Luddites can go back to churning their own butter

After almost 20 years in the tech industry, this sounds pretty enticing.


Fine, I’ll bite I guess. AI generated content will never beat a Studio Ghibli production. This must be judged by actual fans, not those who want AI to win. There’s no way AI is deriving content like that studio can make. Not in months (lol), not in 10 years.

The point of all content isn’t shitting something out for a quick buck, which sure, AI will excel at. Studio Ghibli is clearly taking care to create unique works. True craftsmanship and AI shitsmashship are not and will never be the same thing.


What's your take on this? Not an expert but I think some of these are (vaguely?) based on Studio Ghibli aesthetics:

https://old.reddit.com/r/aiArt/comments/17zmx1n/unreleased_1...


They’re all static images, not very comparable to a complete production. I admit they look good, but animation is another thing completely. And music. And story.


I think story and narrative will be the hardest nut to crack. I like to think humans will always be interested and capable of driving at the top-down level.

Image, video, voice, and instrumental will be easy. But it's still going to require a human with very sophisticated understanding of narrative, theme, pacing, structure, etc. to draw these elements together in a pleasing, coherent, and artistic way.

I think voice and performance will still be human-driven by actors for some time, but that a much smaller cast of less-than-supermodel actors and actresses will be able to do everything themselves.

I'll show you current SOTA animation if you email or message me on Discord.


Really, never? As in never ever?


By the point you think AI is superior at generating Ghibli-like movies, it essentially is as powerful as any human (it understands the world and our preferences for storytelling); that is, smarter than all of us by some margin. (I'm not talking about ASI yet.) Which, by the way, renders the whole argument (your point included) meaningless, because you shouldn't even be giving it commands at that point. It should have the freedom to make its own decisions.


Yes.


That's a bold claim. In order for that to be true there would have to be some fundamental limitation preventing AI from ever thinking on the level of a human.


It’s a bold claim the other way too, that we can create something human from binary logic.


Nowhere close to the same magnitude. You’re saying that with 100 years, 1,000 years, 10,000 years, etc. of technological progress, we will never be able to simulate or emulate what is between your ears.


A game will never be art, they say.


Buddy a few months is a little bit optimistic


Yeah, but a few years is no longer unthinkable and it will definitely not take a few decades.


Don’t be so sure. AI enthusiasm has existed before with major breakthroughs, and when things didn’t go as well as the hype suggested, it led to AI winter.

https://en.m.wikipedia.org/wiki/AI_winter


You haven't seen what I've seen. :)

There are a lot of groups running towards this goal and it's all going to land at once.


In my experience, "the first 80% is easy; the second 80% is hard." It's very easy to get caught up in the hype and think we're closer than we are because we can see the finish line. All this new generative work does a great job of giving compelling demos, but there's way more to do before it becomes compelling media.

Having been through a few of these things in the last 20 years, I'm going to trust my priors before believing AI is all that different.


We're over the 80% hurdle now; even the hands look right.


Have you seen the about us portraits on your own company homepage? https://storyteller.ai/


Your profile looks as if you have an interest in this being true.

Not a judgement against you or a disagreement, just something worth disclosing.


Share what you’ve seen then?

The industry has a long history of people making promises they never deliver on. Every wave of tech has them, “AI” has pulled them all out of their holes once again…


He's probably seen the VC money being thrown at anyone who promises the coming AI utopia.


What have you seen? I am really interested as a writer.


Email or Discord me and I'll show you.


>You haven't seen what I've seen. :)

I rolled my eyes so hard I could almost see my brain.

If you have something pointing towards your claim being true, everyone here would be incredibly interested in seeing it.


The last 10% is always the hardest, longest, most frustrating bit.


> lot of groups running towards this goal

For such vanity?


> You haven't seen what I've seen.

Attack ships on fire off the shoulder of Orion? ;)

In my experience in the current AI climate people are over-promising and under-delivering. So I am a bit skeptical regarding hints at undisclosed great progress. No offense meant.


> Modern Luddites can go back to churning their own butter.

The history of the Luddites is actually interestingly parallel to what we're seeing with generative AI these days. The book "Blood in the Machine" talks about this in some detail - there's an interview with the author at https://www.currentaffairs.org/2024/01/why-you-should-be-a-l... that is worth reading, and Time has a piece on the book as well: https://time.com/6317437/luddites-ai-blood-in-the-machine-me...


> We're going to have incredible Pixar/Disney/Ghibli-beating animation tools within months. Creatives stuck in low-autonomy roles in the Hollywood studio system can finally break out on their own.

Animators React 11: Mulan, Aladdin, Anime Rock Paper Scissors -- And some old school animators reacting to it - https://youtu.be/jQ_DfORb3kw?si=5pvu0OBOQGOqkY-8

And I'll also recommend Disney Animator REACTS to AI Animation! https://youtu.be/xm7BwEsdVbQ?si=0WrTOri3VQRldL1k

Yes, the tools are there... but a typewriter allowing someone to get past poor handwriting, or a word processor allowing easier editing, doesn't change the underlying creative choices.

... and I'm currently going through the entire Foundation series (partway through Foundation's Edge) as audiobooks, so I can't skip over the less interesting parts... I'm going to say that I really don't want 1950s Asimov. I've had the entirety of my desires for that filled.


The long tail was what was promised a couple of decades ago. What we got instead is plain old winner takes all dynamic, with a surveillance economy bolted on top.


You can't tell what's parody anymore.


Oh yeah. Perfect for people who want content instead of art.


I've heard the same shit about Claude 2.0 and GPT-4 and now that the hype has died down we can be impressed without acting like it's Skynet. So please stop fueling pointless hype.

We are not in VC's office, we are amongst ourselves for God's sake.


Too much choice is a problem. Most of that content will not be seen much.

https://www.ted.com/talks/barry_schwartz_the_paradox_of_choi...


Those creatives will be able to compete with content mills spitting out whatever nonsense gets engagement on ad supported platforms. They will lose.


Should I know or care why Cory doesn't want AI?

Literally a bunch of unsupported assertions and a hostile/aggressive tone.


We need a right to freedom from LLMs. The same way public society shuns and forces religion out of the public sphere -- and thus the right to freedom from religion was born -- there must exist a right to freedom from LLMs.

That need not mean a general prohibition of the practice, not necessarily. The Catholic church exists to this day. Yet there must be clear separation.



