
Specifically for this community, it will be interesting to see what the agreements were regarding AI. That was a major issue in negotiations for both sides. I think the decisions here could go a long way in setting a precedent for how that technology will continue to be developed and how it will be used in the future.

For example, if the writers get what they wanted, it means Hollywood has little motivation to put its political might behind getting AI work recognized as copyrightable. The agreement may also include guidelines on when and how a writer's work can be used to train new AI models, which would certainly have ramifications outside of Hollywood.




Agreed. I was a professional actor 20 years ago when digital and streaming were first coming into play, and the seeds sown in the agreements from that time were painfully lacking for working actors. The precedents set have continued to this day.

I'm torn on the AI/copyright issue. On the one hand, for actors, having your image and likeness digitally reproduced in perpetuity without compensation goes against the very profession of acting.

On the other hand, the act of writing is a more interesting gray area to me. There's a difference between physical property and intellectual property. If a machine can learn better than a human and put out compelling content, I'm conflicted as to how restricting that helps us progress as a society (just like I'm conflicted as to how allowing it helps us progress as a society).

So many gray areas. I'm just glad a tentative agreement has been struck, and hopefully it's equitable and forward-looking for all sides.


"The profession of acting" may go the way of the professions of milling, sewing, smithing etc - technology makes them redundant outside of the third world and edge cases.

I find it very strange that writers and actors think they're somehow unique in this. They'll be looked back on as a group of Luddites hampering progress for personal gain.


I don't understand how AI work is not copyrightable. It's like saying that paintings made with a brush are not copyrightable. I know it sounds like a stretch - hear me out. AI is a tool, just like a brush is a tool. As long as a human is required to guide it, it's still the product of a particular human. Tools can get more complex, e.g. prints. Then you've got non-AI computer-generated art, which is uncontroversially copyrightable. Why is generative AI fundamentally different?

Let's take one style of painting, say Jackson Pollock's. You wave your brush around over the canvas. It's copyrightable.

Now you create a frame that holds the brush and randomly shakes it with a shop motor over the canvas. Still copyrightable.

Next, you plug it into a computer and have the computer use an RNG to determine how to shake it. Still copyrightable.

Lastly, you have an LLM generate a sequence of angles and velocities with which to shake the brush. No longer copyrightable? Why?


This is the difference between the letter of the law and the spirit of the law. Copyright exists to incentivize creation and innovation, and the large-scale use of AI might reasonably work against that incentive. The law doesn't exist in a vacuum where it is the word of god and We Have To Obey It. It has a purpose, and we can choose to adapt it to better suit our needs.

A note: RNG means it's random. An LLM's output is not random; it's based on previously looked-at data (which is the same way humans function, if we're being honest).


>The law doesn't exist in a vacuum where it is the word of god and We Have To Obey It. It has a purpose, and we can choose to adapt it to better suit our needs.

And this is why this agreement is so important for the future of AI, even though it really only governs movies and TV. Hollywood is a major political force. We all know the story of how Disney has repeatedly pushed for extensions to the life of copyrights. This deal will likely give us a good indication of how actively, and in what areas, Hollywood will participate in shaping the law around AI.


> An LLM's output is not random; it's based on previously looked-at data

Just because the set of tokens considered for output next has weighted probabilities does not mean the one selected isn't "random".


How is the selected token "random" if it's chosen based on weighted probabilities, especially if you use a temperature of 0?


Because even with weighted probabilities, they're still probabilities. If you have "the cat sat on the", and the next possible tokens are "mat" with a probability of 90%, "floor" with a probability of 9%, and "dog" at 1%, how are you going to pick "mat" 90% of the time without consulting a (P)RNG?
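The point above can be sketched in a few lines of Python (a toy illustration with made-up weights, not any real model's decoder):

```python
import random

# Toy next-token distribution for "the cat sat on the"
tokens = ["mat", "floor", "dog"]
weights = [0.90, 0.09, 0.01]

# Reproducing "mat 90% of the time" requires consulting a (P)RNG:
random.seed(0)  # seeded only to make the sketch reproducible
draws = random.choices(tokens, weights=weights, k=10_000)
print(draws.count("mat") / len(draws))  # ~0.9
```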


They are only probabilities if you treat them as such. If you always just pick the highest probability option (temperature = 0), it's not random in any way - then it's just a confidence value.

You can use randomness, but it's in no way inherent to the process.


> If you always just pick the highest probability option (temperature = 0), it's not random in any way

Huh. That feels like a recipe for either outputting parts of the training data verbatim, or getting stuck in an output loop.

Although I earlier gave an example with 90/9/1 weights where the obvious choice is significantly higher than the others, I'd have thought that in a different scenario where you have, say, 51/49 or 34/33/32 options to choose from, always picking the one with only a fractionally higher confidence than the reasonable alternatives could lead to a "sameyness" in the outputs. Isn't there any value in having variance in the generated responses?


> Isn't there any value in having variance in the generated responses?

Sure, which is why the technique is used. But that doesn't mean "random" is a useful descriptor. I, as a human, try to add variance to my responses as well - does that mean my mind is "random"?

That's why my point is that there is no inherent randomness in the process. We use a bit of randomness to make it subjectively better, but it's fundamentally not random.
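For what it's worth, the knob that interpolates between these two regimes is the sampling temperature: the logits are divided by the temperature before the softmax, so a low temperature sharpens the distribution toward the argmax (approaching greedy decoding) and a high one flattens it. A hand-rolled sketch, not any particular library's API:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature, then normalize.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.9, 0.5]
print(softmax_with_temperature(logits, 1.0))   # first two options nearly tied
print(softmax_with_temperature(logits, 0.05))  # first option dominates, ~greedy
```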


Copyright is a special protection under law to incentivize people to produce creative works. Machines don’t need this incentive and thus don’t need the protection. It’s fairly straightforward.


You still prompt AI.


And prompts can be copyrighted, like computer code. That is not what the conflict over AI is about.


You can copyright the prompt but not the result generated by AI when fed the prompt


But your prompt is merely a suggestion to the AI model. You aren't the one making the image.


This stems from a conviction about where the difficulty in the creation process lies. That changes with technological advances. Automation often makes practical skills economically worthless.


Art is separate from economics. It existed before there were markets or even complex societies. Also, the amount of effort is not what determines art. However, I think art requires one to be the actor in the process that generates the work. The big issue I have with AI art is that you could have a very clear image in mind that you wish to generate, and spend lots of time tweaking your prompt to the model, but you will never get the image you had in mind, because there is a disconnect between you and the machine. Your imagination doesn't extend across the screen.


>The big issue I have with AI art is that you could have a very clear image in mind that you wish to generate, and spend lots of time tweaking your prompt to the model, but you will never get the image you had in mind, because there is a disconnect between you and the machine. Your imagination doesn't extend across the screen.

Couldn't you say the same about pen and pencil? Or any other medium? Turning imagination into something physical/digital/real always entails loss. This might be a matter of a few more iterations of better usability. If you are able to put the limits of the machine's representation of your imagination into words, they might iteratively vanish. You won't connect your imagination to a machine, but to words.

The posts were also about creativity and design in the context of copyright, thus the focus on economics. Art for its own sake as a personal pursuit doesn't really relate to that perspective. Unless you want to trade in other currencies (fame and number of impressions), in which case it's the age-old competition of what is able to speak to people on a deeper level. For that, the representation is often less important than the idea/perspective and how it resonates with people. The medium is just that: a medium.


I place take-out orders as well. Doesn’t make me a chef.


Given sufficient technological advances, some skills might become practically meaningless in the economy. The idea might be where it's at. Handloom weavers also thought they brought something worthwhile to the table.

Most takeout today is also warmed up according to a strict recipe. "Sandwich artists" aren't chefs either. And even the initial recipe is designed for mass reproducibility and supply-chain feasibility over taste. You still get food, but far fewer people are employed as actual chefs doing chef stuff.

edit: Removed a nonsensical comparison


> "Next, you plug it into a computer and have the computer use a RNG to determine how to shake it. Still copywritable. Lastly, you have an LLM generate a sequence of angles and velocities with which to shake the brush. No longer copywritable? Why?"

Consider an alternative image generation technique where the same computer is connected to a camera, and that video feed drives the brush strokes. Let's assume it imitates the Pollock style so well that if you place an image of a Pollock painting in front of the camera, the computer will produce what looks like a near-perfect painted replica of the original.

Clearly this technique would have a problem with copyright. Programming a computer to make copies doesn't absolve you of copyright.

In the eyes of the law currently, image diffusion models are seen as something closer to that hypothetical camera input that waves the brush, in the sense that the AI model has been trained directly on the works of specific artists and can recall stylistic aspects of their works on command.


Because that LLM does not exist without tremendous amounts of input created by actual human beings. These oversimplifications always skip this step, and pretend LLMs sprang into being through sheer force of will. They did not. They are built on the output of other people, and using a statistical approach to copyright laundering remains copyright laundering.


That is betting LLMs won't get enough non-copyrighted training material to be useful. Especially for text, that's likely not the case. Pretty sure it's a moot point anyway, as you have no way to prove what was in the training set and what wasn't.

I very much expect parallel construction if push comes to shove.


An LLM trained on public-domain or liberal-license material would be a great counterexample ... if anyone built it. The popular models in wide use do disclose what they train on (sometimes under NDA) and they all contained copyrighted work.


There is no need to build it, or to tell you, at the moment. Once that changes, I expect parallel construction.

Once you know what behavior you want from the net, recreating that from non-copyrighted material might turn out to be a lot less difficult.


It was my understanding that the debate was never about whether AI work is copyrightable. The debate is whether the work can be attributed to the AI. In your example, it would be like attributing a work to the brush, rather than to the artist.


The problem with reducing AI to a tool is that your hammer has no intelligence. There are more complex tools - you might consider your computer a tool, and once again it doesn't have intelligence, although it can achieve very complex tasks. I think your example of "you have an LLM generate a sequence of angles and velocities with which to shake the brush" is copyrightable. In that sense you used it as a tool, as you would use an RNG. But if instead the LLM created the whole outcome, then it shouldn't be copyrightable.

PS: I use the term "intelligence" loosely here for the sake of the argument


You might be too ahead of your time in this thinking, but I see the logic there.


I haven't followed this close enough to know, but I assume that writers will use AI heavily. It's a great tool for brainstorming and creating drafts.


>I assume that writers will use AI heavily

Yeah, from a high level that is the question. Will the writers use AI, or will the studios? Is AI a tool to make the writers more efficient, or is it a tool the studios will use to replace writers? The writers generally have no problem with the former, but they wanted protection from the latter.


Studios don’t want to replace writers. They just want to pay writers less by crediting AI if used. The writers want full credit so they receive full revenue.


I'm not sure if this comment was written out of naivety about the profit-focused nature of studios or out of ignorance of the writing process. Replacing writers and paying writers less are effectively the same thing. That "full credit" you mention isn't just about what those writers got paid; it tells you what work they did.

For example, a writer will often write a treatment of a movie which is effectively a short summary of a movie that is done before writing a script. Sometimes different writers will do the treatment and screenplay. This is generally the distinction you might see in "Story By" and "Written By" credits in movies (although these distinctions can and often are a lot more complicated). One likely goal of the studios is for an AI to write the treatment and then hand that off to a writer to turn it into a screenplay. That is one of the most practical ways AI can be used since treatments aren't an end product themselves and the AI's flaws can be corrected by humans when translating the treatment into a script. However, that is still effectively replacing writers by taking away one of their clearly defined jobs and in turn one of their paychecks.


If the AI does a better job writing said treatments (aka, does the same or more for less cost), why shouldn't it take away the job?


This question is irrelevant in the context of negotiating a collective bargaining agreement. The WGA's motivations are obvious. They are protecting the livelihoods of their members.

If a studio thinks AI can genuinely do a better job than human writers, that studio has the right to replace those human writers with AI. The human writers likewise have the right to refuse to work with any studio that does so. The fact that no studio has even hinted at taking this route during a 5-month-long strike suggests that no one believes AI would actually do a better job than WGA members.


No way AI could write another remake of a movie from the 80's.


I realize this is a joke, but it is important to remember that Hollywood doesn't produce remakes because that is what writers want to write. It produces remakes because that is what studios want to make. Writers are probably the strongest force for injecting originality into the process.


The list of things AI could never do, that it has already done, is getting rather long for a statement like that to read as anything but naive.

Eventually, given enough investment, yes it could.


Woosh!


And yet, the studios came to the bargaining table instead of doing just that. Sounds like they don't have that much confidence in such a plan.


> why shouldn't it take away the job?

Because in the context of the current generation of "AI", i.e. the ChatGPT cohort, all that it's doing is "given a very large number of examples of this text, generate one more that fits the prompt". It's nothing without the training data. And the training data is the output of people's jobs.

If you called it "distributed plagiarism" would that make the problem with the idea that AI replaces the need for the writers more apparent? It can not, but it might replace the need to _pay them_ for the contribution, as it's further from the end product.

https://www.zmescience.com/science/chatgpt-stocastic-parrot/

https://apnews.com/article/openai-lawsuit-authors-grisham-ge...

The same applies to the DALL-E cohort of image generators:

https://news.artnet.com/art-world/open-letter-urges-publishe...


> It's nothing without the training data. And the training data is the output of people's jobs.

so do you not claim that a student learning from textbooks or teachers is also doing "distributed plagiarism"?

> but it might replace the need to _pay them_ for the contribution

assuming the training data is paid for via a one time fee, or was using material that was publicly available to be learnt off, the contribution is already paid for.

There's no possibility of a royalty payment with AI generated content.


> so do you not claim that a student learning off textbook

No, these are not the same. A student has mental faculties that ChatGPT does not. The understanding-free predictive text processing of ChatGPT is the whole deal of that software. A student does not require 10 thousand textbooks as examples in order to understand a concept. Students do exhibit originality and reasoning that is not in the training data. You should not equate them.

> assuming the training data is paid for via a one time fee

Why would you assume that, when the links provided make it clear that this did not happen?


The human is far better at distributed plagiarism but given the definition the human is certainly as guilty.

If you removed all of humanity's writing and language and raised a million ignoramuses, it would be quite a while before you got Shakespeare v2: centuries at minimum.


> so do you not claim that a student learning from textbooks or teachers is also doing "distributed plagiarism"?

Make the AI pay taxes, sleep 10 hours a night, and start off with thousands of dollars in student debt, then we'll talk.


> Studios don’t want to replace writers

They will do it the very first second they think that they can get away with it without major consequences (such as profit loss from bad AI writing).


Disney evidently doesn't care about profit loss from bad human writing, so it's not clear that they'll care about the same from bad AI writing.


The plan eventually is to have AI churn out scripts, with a human script editor making whatever re-writes or changes are necessary to improve them.


Disney has already announced it will invest a billion in AI.


…in their continued pursuit of driving their stock price down as far as it will go.

If there was ever a company that needs to take that money invest it back into the kind of artists and work that made them such a powerhouse in the first place, it’s Disney. But, instead, they’ll pay out this money to some execs friend with a BS AI startup to procedurally generate another shitty Marvel or Star Wars thing.


> Is AI a tool to make the writers more efficient or is it a tool the studios will use to replace writers?

Increases in efficiency will still make people redundant. Automation means fewer people are needed. I don't think even the most optimistic executive thought you could get rid of writers entirely.


You might want to talk to writers first before making this assumption. Of the writers I know, there are two varieties:

1) AI is used essentially to generate grift/chaff, like marketing material for your actual work, or self publish dumb get rich quick books

2) AI is utterly repulsive to the creative process, as it cuts out the actual passion-building part, which is brainstorming and drafting; editing is usually the shittiest part of the job, actually


There is a difference between someone finding something distasteful and someone wanting something banned. The majority of writers may never want to use AI themselves, but the WGA's stance has not been for an outright ban on its usage. My comment was in the context of the union's negotiations so my use of "writers" was mostly a synonym for "WGA" and not "individual writers".


Does anyone have any input on a thought I've been kicking around?

The Artist's Oath:

AO0: I have used no AI

AO1: I have used AI in a research or teacher role

AO2: AI has been used for structural components, but not the art itself

AO3: AI has directly assisted in the creation process

AO4: AI generated this entirely from a prompt specifically engineered by a human

AO5: AI generated this entirely from a basic prompt

Slightly expanded thoughts and a few examples: https://cohan.dev/the-artist-oath/

The thinking is that once established creativity unions could specify at what level of AI they will accept, and for what compensation.

You could also stamp an [AO2] on the box/website as you would the content rating.

At the individual level as an artist you can specify the level you use for a work, and someone commissioning you could specify at what level of AI they'd accept.

The rating would be per-project; however, the draft includes the addition of an asterisk: [AO2*] indicates "and I've never used AI beyond this point in my art", and a + would indicate "however as soon as we reach sapience I welcome our new AI friends into the universe of creativity" :)

Would love to see what smarter people than I can come up with!


But why, though? It's like stamping on the product whether Photoshop or Blender were used, they're all just tools, and customers, for the most part, don't care how something is made as long as the product itself is good.


> customers, for the most part, don't care how something is made as long as the product itself is good

Most people may also not care about the ingredients in their food, but that doesn't take away the right of others to know what's in their food. If you think it would make no difference, what's the harm? Not even giving people a choice while claiming to know what they want seems fishy to me.

And art isn't just product. It's human communication, it means different things to different people, and "who made it and why" is part of that. You don't have a right to my attention and respect just by producing (or stealing) the same output as someone I care about. The stuff that can be quantified is not the stuff that matters.


Because people don't have strong opinions on photoshop and blender, basically. I got nothing better for you there.

For the record I agree with you that current gen AI is just another tool to be used. I even use photoshop and digital art as my example when this comes up too :)

A lot of folks don't see it that way though, hence they are striking. Just want to bring about clarity as to what people want in a way everyone can have an easier reference point :)

E: also, in fairness, they do stamp the products they use in the realm of TV and movies. It's in the credits, usually towards the end, where you see all the Dolby, Arri, THX, etc. logos. I'm almost certain I've seen the Blender logo in a non-Buck-Bunny film somewhere at least.


This is interesting in that my gut reaction as an artist is that it completely misses the mark on how artists work, create, and choose from all available media and tools. Yet when I tried to write a comment saying so, I kept thinking about how, yes, we do label works on gallery walls with the media used. We do sometimes create works deliberately using new tools, and sometimes put arbitrary constraints around ourselves.

So while I doubt this is going to become a standard in how artists present their work, and certainly won't be accepted as any kind of "Artist's Oath", there is some merit in helping communicate the choices made during the creative process.


I'm absolutely fine with my bit being thrown out if it inspires the real one! I'm not tied to the first draft :)

Chances are I've come at it from a far too technical standpoint for sure, it's not too far off looking like Geek Code haha

https://en.wikipedia.org/wiki/Geek_Code


One problem I see is that if compensation for the product depends on the level of AI used, then there will be an incentive to lie, making the self-rating pointless.

Also, it's doubtful the rating would carry through to the final product since it would be likely used for intermediate products used in the final movie/art creation, which is a different market than the end user viewer.


They already do, though not for the actual writing; rather, as a research method on unfamiliar topics.



