Hacker News
The New York Times is suing OpenAI and Microsoft for copyright infringement (theverge.com)
593 points by ssgodderidge 7 months ago | 869 comments



Solidly rooting for NYT on this - it’s felt like many creative organizations have been asleep at the wheel while their lunch gets eaten for a second time (the first being at the birth of modern search engines.)

I don’t necessarily fault OpenAI’s decision to initially train their models without entering into licensing agreements - they probably wouldn’t exist and the generative AI revolution may never have happened if they put the horse before the cart. I do think they should quickly course correct at this point and accept the fact that they clearly owe something to the creators of content they are consuming. If they don’t, they are setting themselves up for a bigger loss down the road and leaving the door open for a more established competitor (Google) to do it the right way.


For all the leaks about secret projects, novel training algorithms no longer being published so as to preserve market share, custom hardware, Q* learning, and internal politics at the companies at the forefront of state-of-the-art LLMs... there is a thunderous silence: no leaks about the exact datasets used to train the main commercial LLMs.

It is clear that OpenAI and Google did not use only Common Crawl. With so many press conferences, why has no research journalist yet asked OpenAI or Google to confirm or deny whether they use or used LibGen?

Did OpenAI really buy an ebook of every publication from Cambridge Press, Oxford Press, Manning, APress, and so on? Did any investor's due diligence include researching the legality of the content used for training?


I'm not for or against anything at this point until someone gets their balls out and clearly defines what copyright infringement means in this context.

If I give a kid a bunch of books all by the same author, pay that kid to write a book in a similar style, and then go on to sell that book... have I somehow infringed copyright?

The kid's book at best is likely to be a very convincing facsimile of the original author's work... but not the author's work.

It seems to me that the only solution for artists is to charge for access to their work in a secure environment then lobotomise people on the way out.

The endgame seems to be "you can view and enjoy our work, but if you want to learn from or be inspired by it, that's not on"


There are two problems with the “kid” analogy:

a) In many closely comparable scenarios, yes, it’s copyright infringement. When Francis Ford Coppola made The Godfather film, he couldn’t just be “inspired” by Puzo’s book. If the story or characters or dialog are similar enough, he has to pay Puzo, even if the work he created was quite different and not a literal “copy”.

b) Training an LLM isn’t like giving someone a book. Among other things, it involves making a derivative copy into GPU memory. This copy is not a transitory copy in service of a fair use, nor likely a fair use in itself, nor licensed by the rights-holder.


> This copy is not a transitory copy in service of a fair use

Training is almost certainly fair use, so it's exactly a transitory copy in service of fair use. Training, other than the brief "transitory copy" you mention, is not copying; it's making a minuscule algorithmic adjustment based on fleeting exposure to the data.
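
For concreteness, here's a toy sketch of what a single training step actually does (PyTorch, with a made-up tiny model and random token IDs standing in for a snippet of text; purely illustrative, not anything OpenAI actually runs):

    import torch

    vocab, dim = 100, 16
    model = torch.nn.Sequential(torch.nn.Embedding(vocab, dim), torch.nn.Linear(dim, vocab))
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)

    tokens = torch.randint(0, vocab, (32,))   # stand-in for a tokenized snippet of text
    logits = model(tokens[:-1])               # predict each next token from the previous one
    loss = torch.nn.functional.cross_entropy(logits, tokens[1:])
    loss.backward()                           # gradients of the loss w.r.t. every weight
    opt.step()                                # a tiny nudge to the weights
    opt.zero_grad()
    # The text batch is then discarded; what persists is a slight adjustment spread
    # across the weights, not a stored copy of the batch.

Whether that adjustment still counts as a "copy" in the legal sense is, of course, exactly what's in dispute.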


Why is training “almost certainly” fair use?

Congress took the circuit holding in MAI Systems seriously enough to carve out a new fair use exception for copying software—entirely within the memory system of a licensed user—in service of debugging it.

If it took an act of Congress to make “unlicensed” debugging a fair use copy…


If you overtrain, the model may include verbatim copies of your training material, and may be able to produce verbatim copies of the original in its output.

If Microsoft truly believes that the trained output doesn't violate copyright then it should be forced to prove that by training it on all its internal source code, including Windows.


> If the story or characters or dialog are similar enough, he has to pay Puzo, even if the work he created was quite different and not a literal “copy”.

I don't think that you can copyright a plot or story in any country, can you?

If he had rewritten the story with different characters and different lines, he wouldn't have had to pay Puzo. I'm sure it would have been frowned upon if it were too close, but legally OK.


>This copy is not a transitory copy in service of a fair use, nor likely a fair use in itself,

Seems vastly transitory, and since the output cannot be copyrighted, it does no harm to any work it "trained" on.


How is it a copy at all? Surely the model weights would therefore be much larger than the corpus of training data, which is not the case at all.
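
Rough back-of-envelope using the publicly reported GPT-3 figures (175B parameters; ~45 TB of raw Common Crawl filtered down to ~570 GB of text, per the GPT-3 paper; treat all of these as approximate):

    params = 175e9                  # reported GPT-3 parameter count
    weights_gb = params * 2 / 1e9   # ~350 GB if weights are stored as 16-bit floats
    raw_crawl_gb = 45_000           # ~45 TB of raw Common Crawl before filtering
    filtered_gb = 570               # filtered Common Crawl text reported in the paper
    print(weights_gb, raw_crawl_gb, filtered_gb)

So the weights are a small fraction of the raw crawl they were distilled from; whatever they retain, they are not a byte-for-byte container for the training set.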

If it disgorges parts of NYT articles, how do we know this is not a common phrase, or the article isn't referenced verbatim on another, unpaid site?

I agree that if it uses the whole content of their articles for training, then NYT should get paid, but I'm not sure that they specifically trained on "paid NYT articles" as a topic, though I'm happy to be corrected.

I also think that companies and authors extremely overvalue the tiny fragments of their work in the huge pool of training data, I think there's a bit of a "main character" vibe going on.


Regarding (b) ... while a specific method of training that involved persistent copying may indeed be a violation, it is far from clear that the general notion of "send server request for URL, digest response in software that is not a browser" is automatically a violation. If there is deemed to be a difference (i.e. all you are allowed to do without a license is have a human read it in a browser), then one can see training mechanisms changing to accommodate that.


It’s all about the purpose the transitory copy serves. The mechanism doesn’t really matter, so you can’t make categorical claims about (say) non-browser requests.


I don't have a comment on your hypothetical, but this case seems to go far beyond that. If you read the actual filing at the bottom of the linked page, NYT provides examples where ChatGPT recited exact multi-paragraph sections of their articles and passed them off as its own words. Plainly reproducing a work is pretty much the only situation where "is this copyright violation?" isn't really in flux. It's not dissimilar to selling PDFs of copyrighted books.

If NYT were relying solely on the argument that training a model in wordcraft using their materials is always copyright violation, or only had short quotes to point to, the philosophical debate you're trying to have would be more relevant.


Importantly, the kid, an individual human, got some wealth somewhat proportional to their effort. There's non-trivial effort in recruiting the kid. We can't clone the kid's brain a million times and run it for pennies.

There are differences, ethical, political, and otherwise, between an AI doing something and a human doing the exact same thing. Those differences may need reflecting in new laws.

IANAL and don't have any positive suggestions for good laws, just pointing out that the analogy doesn't quite hold. I think we're in new territory where analogies to previous human activities aren't always productive.


I think you’re skipping over the problem.

In your example you owned the work you gave to the person to create derivatives of.

In a more accurate example you would be stealing those books and then giving them to someone else to create derivatives.


How about if I borrowed them from the library and gave them to the kid to read?

How about if I got the kid to read the books on a public website where the author made the books available for free?


Ironically, these artists can't claim to be wholly original, as they were certainly inspired themselves. Artists that play live already "lobotomize" people on their way out, since it's not easy to recreate an experience and a video isn't the same if it's a good show.

Artists that make easily reproducible art will see it circulate, as such art always has, alongside AI output in a sea of other jpgs.


You might be well served by reading the actual complaint.


I think your kid analogy is flawed because it ignores the fact that you couldn't reasonably use said "kid" to rapidly produce thousands of works in the same style and then go on to use them to flood the market and drown out the original author's presence.

Try this with a real "kid" and you'll run into all kinds of real-world constraints, whereas flooding the world with derivative drivel using LLMs is something that's actually possible.

So yeah, stop using weak analogies, it's not helpful or intelligent.


Would be fascinated to hear from someone inside on a throwaway, but my nearest experience is that corporate lawyers aren't stupid.

If there's legally-murky secret data sauce, it's firewalled from being easily seen in its entirety by anyone not golden-handcuffed to the company.

They may be able to train against it. They may be able to peek at portions of it. But no one is downloading-all.


Big corporations and corporate lawyers lose major lawsuits all the time.


That doesn't mean they don't spend lots of time thinking of ways not to lose them.

See: Google turning off retention on internal conversations to avoid creating anti-trust evidence


For what it's worth, I asked Altman directly and he denied using LibGen or books2, but he also deferred to Murati and her team on specifics. But the Q&A wasn't recorded and they haven't answered my follow-ups.


Really? Because the GPT-3 paper talks about "...two internet-based books corpora (Books1 and Books2)..." (see pages 8 and 9) - https://arxiv.org/pdf/2005.14165.pdf

Unclear what those corpora might be, or if it's the same books2 you are referring to.


My guess is that this poster meant books3, not books2.

books1 and books2 are OpenAI corpuses that have never (to my knowledge) had their content revealed.

books3 is public, developed outside of OpenAI and we know exactly what's in it.


sorry, books3 is indeed what I meant.


Why would he know the answer in the first place?


The legal liabilities of the training data they use in their flagship product seem like something the CEO should know about.


We all remember when Aaron Swartz got hit with wiretapping and intent-to-distribute federal charges for downloading JSTOR stuff, right?

It's really disgusting, IMO, that corporations that go above and beyond that sort of behavior are seeing NO federal investigations for this sort of behavior. Yet a private citizen does it and it's threats of life in prison.

This isn't new, but it speaks to a major hole in our legal system and the administration of it. The Feds are more than willing to steamroll an individual but will think twice over investigating a large corporation engaged in the same behavior.


What happened to Aaron Swartz was terrible. I find that what he was doing was outright good. IMO the right response isn't to make sure anyone doing something similar faces the same fate, but to make the information far more free, whether it's a corporation using it or not. I don't want them to steamroll everyone equally here, but to not steamroll anyone.


There are two points at issue here. One, that information should be more free, and two, that large corporations and private individuals should be equal before the law.


> I don't want them to steamroll everyone equally here, but to not steamroll anyone.

I think you're missing the point, and putting the cart before the horse. If you ensure that corporations are treated as stringently as people sometimes are, the reverse is true. And that means your goal will presumably be obtained, as corporate might becomes the little guy's win.

All with no unjust treatment.


Huh. I see downvotes. I am mystified, for if people and corporations are both treated stringently under the law, corporations will fight to have overly restrictive laws knocked down.

I envision pitting corporate body against corporate body: when one corporation lobbies, works to (for example) extend copyrights, others will work to weaken copyright.

That doesn't happen as vigorously currently, because there is no corporate incentive. They play the old ask-for-forgiveness-rather-than-permission angle.

Anyhow. I just prefer to set my enemies against my enemies. More fun.


Corporations follow these laws much more stringently than individuals. Individuals often use pirated software to make things; I've seen many examples of that. I've never seen a corporation use pirated software to make things; they pay for licenses. Maybe there are some rare cases, but pirating is mostly a thing individuals do, not corporations.

So in general it is already as you say: corporations are much more targeted by these laws than individuals are. These laws mostly hinder corporations; us individuals are too small to be noticed by the system in most cases.

I've also seen indie games use copyrighted material with no issues, but AAA titles seem to avoid that like the plague. I can't really think of many examples where corporations are breaking these laws more than small individuals do.


So then you refute the comment I replied to, and its parent.


> I've also seen indie games use copyrighted material with no issues, but AAA titles seem to avoid that like the plague.

They use copyrighted material or they commit copyright infringement? The former doesn't necessarily constitute the latter. Likewise, given it's an option legally, there are other factors that go into the decision to use it that likely make it less attractive to AAA games.


Circumventing computer security to copy items en masse to distribute wholesale without transformation is a far cry from reading data on public facing web pages.


He didn't circumvent computer security. He had a right to use the MIT network and pull the JSTOR information. He certainly did it in a shady way (computer in a closet), but it's every bit as arguable that he did it that way because he didn't want someone stealing or unplugging his laptop while it was downloading the data.

He also did not distribute the information wholesale. What he planned on doing with the information was never proven.

OpenAI IS distributing information they got wholesale from the internet without license to that information. Heck, they are selling the information they distribute.


> OpenAI IS distributing information they got wholesale from the internet

Facts are not subject to copyright. It's very obvious ChatGPT is more than a search engine regurgitating copies of pages it indexed.


> Facts are not subject to copyright

That's false; but even assuming it's true, misinformation is creative content and therefore 99% of the Internet is subject to copyright.


No it is not. You can make a better argument than just BSing.

https://libraries.emory.edu/research/copyright/copyright-dat...


> right to use the MIT

That right ended when he used it to break the law. It was also for use on MIT computers, not for remote access (which is why he decided to install the laptop, also knowing this was against his "right to use").

The "right to use" also included a warning that misuse could result in state and federal prosecutions. It was not some free for all.

> and pull the JSTOR information

No, he did not have the right to pull en masse. The JSTOR access explicitly disallowed that. So he most certainly did not have the "right" to do that, even if he were sitting at MIT in an office not breaking into systems.

> did it in a shady way

The word you're looking for is "illegal." Breaking and entering is not simply shady - it's illegal and against the law. B&E with intent to commit a felony (which is what he was doing) is an even more serious crime, and one of the charges.

> he did it that way because he didn't want someone stealing or unplugging his laptop

Ah, the old "the ends justify breaking the law" argument.

Now, to be precise, MIT and JSTOR went to great lengths to stop the outflow of copying, which both saw. Swartz returned multiple times to devise workarounds, continuing to break laws and circumvent yet more security measures. This was not some simple plug-and-forget laptop. He continually and persistently engaged in hacking to get around the protections both MIT and JSTOR were putting in place to stop him. He added a second computer, he used MAC spoofing, among other things. His actions started to affect all users of JSTOR at MIT. The rate of outflow caused JSTOR to suffer performance problems, so JSTOR disabled all of MIT's access.

Go read the indictment and evidence.

> OpenAI IS distributing information they got wholesale

No, that's ludicrous. How many complete JSTOR papers can I pull from ChatGPT? Zero? How many complete novels? None? Short stories? Also none? Can I ask for any of a category of items and get any of them? Nope. I cannot.

It's extremely hard to even get a complete decent sized paragraph from any work, and almost certainly not one you pre-select at will (most of those anyone produces are found by running massive search runs, then post selecting any matches).

Go ahead and demonstrate some wholesale distribution - pick an author and reproduce a few works, for example. I'll wait.

How many could I get from what Swartz downloaded? Millions? Not just as text, either: I could have gotten the complete author-formatted layout, diagrams, everything, in perfect photo-ready copy.

You're being dishonest in claiming these are the same. One can feel sad about Swartz's outcome, realize he was breaking the law, and realize the current OpenAI copyright situation is likely unlike any previous copyright situation, all at the same time. No need to equate such different things.


OK, so you've written a lot, but it comes down to this: what law did he break?

Neither MIT nor JSTOR took issue with what Swartz did. JSTOR even went out of their way to tell the FBI they did not want him prosecuted.

Remember, again, with what he was charged. Wiretapping and intent to distribute. He wasn't charged with trespassing, breaking and entering, or anything else. Wiretapping and intent to distribute.

> His actions started to affect all users of JSTOR at MIT. The rate of outflow caused JSTOR to suffer performance, so JSTOR disabled all of MIT access.

And this is where you are confusing a "crime" with "misuse of a system". MIT and JSTOR were within their rights to cut access. That does not mean that what Swartz did was illegal. Similarly, if a business owner tells you "you need to leave now", you aren't committing a crime just because they asked you to leave. That doesn't happen until you are trespassed.

> Go ahead and demonstrate some wholesale distribution - pick an author and reproduce a few works, for example. I'll wait.

You violate copyright by transforming. And fortunately, it's really simple to show that ChatGPT will violate and simply emit byte-for-byte chunks of copyrighted material.

You can, for example, ask it to implement Java's ArrayList and get several verbatim parts of the JDK's source code echoed back at you.

> How many could I get from what Swartz downloaded?

0, because he didn't distribute.


> What law did he break?

You can read the indictment, which I already suggested you do.

> Remember, again, with what he was charged. Wiretapping and intent to distribute. He wasn't charged with trespassing, breaking and entering, or anything else. Wiretapping and intent to distribute.

He wasn't charged with wiretapping (not even sure that's a generic crime). He was charged with (two counts of) wire fraud (18 USC 1343), a huge difference. He also had 5 different charges of computer fraud (18 USC 1030(a)(4), (b) & 2), 5 counts of unlawfully obtaining information from a protected computer (18 USC 1030 (a)(2), (b), (c)(2)(B)(iii) & 2), and 1 count of recklessly damaging a protected computer (18 USC...).

He was not charged with "intent to distribute", and there's no such thing as a "wiretapping" charge. Did you ever once read the actual indictment, or did you just make all this up from internet forum posts?

If you're going to start with the phrase "Remember, again..", you should try not to make up nonsense. Actually read what you're asking others to "remember", which you apparently never knew in the first place.

> you are confusing a "crime" with "misuse of a system"

Apparently you are (willfully?) ignorant of law.

> You violate copyright by transforming.

That's false too. Transformative use is one defense against a claim of copyright infringement. Carefully read up on the topic.

> ask it to implement Java's ArrayList and get several verbatim parts of the JDK's source code echoed back at you

Provide the prompt. Courts have ruled that code that is the naïve way to create a simple solution is not copyrighted on its own, so if you have only a few disconnected snippets, that violates nothing. Can you make it reproduce an entire source file, comments, legalese at the top? I doubt it. To violate copyright one needs a certain amount (determined by trials) of the content.

You might also want to make sure you're not simply reading OpenJDK.

> 0, because he didn't distribute.

Please read. "How many could I get from what Swartz downloaded?" does not mean he published it all before he was stopped. It means what he took.

That you seem unable to tell the difference between someone copying millions of PDFs to distribute as-is, and the effort one must go to to possibly get a desired copyrighted snippet, shows either dishonesty or ignorance of the relevant laws.


Why isn't robots.txt enough to enforce copyright etc.? If NYT didn't set robots.txt properly, is their content free-for-all? Yes, I know the first answer you would jump to is "of course not, copyright is the default", but it's almost 2024 and we have had robots.txt as the industry's de facto standard for stopping crawling.


robots.txt is not meant to be a mechanism of communicating the licensing of content on the page being crawled nor is it meant to communicate how the crawled content is allowed to be used by the crawler.

Edit: same applies to humans. Just because a healthcare company puts up an S3 bucket with patient health data with "robots: *" doesn't give you a right to view or use the crawled patient data. In fact, redistributing it may land you in significant legal trouble. Something being crawlable doesn't provide elevated rights compared to something not crawlable.
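
For reference, robots.txt is just a list of crawl directives per user agent; the format has no field at all for licensing or permitted downstream use. An illustrative example (not NYT's actual file; GPTBot is the crawler user agent OpenAI has published):

    User-agent: GPTBot
    Disallow: /

    User-agent: *
    Disallow: /private/

Which crawlers choose to honor it, and what they then do with anything they fetch, is entirely outside the file's scope.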


Furthering the S3 health data thought exercise:

If OpenAI got their hands on an S3 bucket from Aetna (or any major insurer) with full and complete health records on every American, due to Aetna lacking security or leaking a S3 bucket, should OpenAI or any other LLM provider be allowed to use the data in its training even if they strip out patient names before feeding it into training?

The difference between this question or NYT articles is that this question asks about content we know should not be available publicly online (even though it is or was at some point in the past).

I guess this really gets at "do we care about how the training data was obtained or pre-processed, or do we only care about the output (a model's weights and numbers, etc.)?"


HIPAA is about more than just names. Just information such as a patient's ZIP code and full medical history is often enough to de-anonymise someone. HIPAA breaches are considered much more severe than intellectual property infringements. I think the main reason that patients are considered to have ownership of even anonymised versions of their data (in terms of controlling how it is used) is that attempted anonymisation can fail, and there is always a risk of being deanonymised.

If somehow it could be proven without doubt that deanonymising that data wasn't possible (which cannot be done), then the harm probably wouldn't be very big aside from just general data ownership concerns which are already being discussed.


> should [they] be allowed to use this data in training…?

Unequivocally, yes.

LLMs have proved themselves to be useful, at times, very useful, sometimes invaluable assistants who work in different ways than us. If sticking health data into a training set for some other AI could create another class of AI which can augment humanity, great!! Patient privacy and the law can f*k off.

I’m all for the greater good.


Eliminating the right to patient privacy does not serve the greater good. People have enough distrust of the medical system already. I'm ambivalent about training on properly anonymized health data, but I reject out of hand the idea that OpenAI et al. should have unfettered access to identifiable private conversations between me and my doctor for the nebulous goal of some future improvement in LLM models.


> unfettered access to identifiable private conversations

You misread the post I was responding to. They were suggesting health data with PII removed.

Second, LLMs have proved that AI which gets unlimited training data can provide breakthroughs in AI capabilities. But they are not the whole universe of AIs. Some other AI tool, distinct from LLMs, which ingests en masse as much health data as it can could provide health and human longevity outcomes which could outweigh an individual's right to privacy.

If transformers can benefit from scale, why not some other, existing or yet to be found, AI technology?

We should be supporting a Common Crawl for health records, digitizing old health records, and shaming/forcing hospitals, research labs, and clinics into submitting all their data for a future AI to wade into and understand.


> Furthering the S3 health data thought exercise: If OpenAI got their hands on an S3 bucket from Aetna (or any major insurer) with full and complete health records on every American, due to Aetna lacking security or leaking a S3 bucket, should OpenAI or any other LLM provider be allowed to use the data in its training even if they strip out patient names before feeding it into training?

To me this says that openai would have access to ill-gotten raw patient data and would do the PII stripping themselves.


> could outweigh an individual's right to privacy.

If that's the case, let's put it on the ballot and vote for it.

I’m tired of big tech making policy decisions by “asking for permission later” and getting away with everything.

If there truly is some breakthrough and all we need is everyone’s data, tell the population and sell it to the people and let’s vote on it!


> I’m tired of big tech making policy decisions by “asking for permission later” and getting away with everything

> If that's the case, let's put it on the ballot and vote for it.

This vote will mean "faster horses" for everyone. Exponential progress by committee is almost unheard of.


robots.txt isn't about copyright; it's about preventing bots. It's effectively a EULA. Copyright law only comes into effect when you distribute the content you scrape. If you scraped the New York Times for your own LLM that you used internally and didn't distribute the results, there would be no copyright infringement.


> If you scraped the New York Times for your own LLM that you used internally and didn't distribute the results, there would be no copyright infringement.

Why?

As far as I understand, the copyright owner has control of all copying, regardless of whether it is done internally or externally. Distributing it externally would be a more serious violation, though.


Er... This is what all these lawsuits against LLMs are hoping to disprove


Which lawsuits are concerning LLMs used only privately by the organization that developed it?


>Why isn't robots.txt enough to enforce copyright

You actually need a lot more than that. Most significantly, you need to have registered the work with the Copyright Office.

“No civil action for infringement of the copyright in any United States work shall be instituted until ... registration of the copyright claim has been made in accordance with this title.” 17 USC §411(a).


But the thing is, while you can only bring the civil action after registering your claim, you need not register the claim before the infringement occurs.

Copyright is granted to the creator upon creation.


That is incorrect.

If the work is unpublished for the purposes of the Copyright Act, you do have to register (or preregister) the work prior to the infringement. 17 USC § 412(1).

If the work is published, you still have to register it within the earlier of (a) three months after the first publication of the work or (b) one month after the copyright owner learns of the infringement.

See below for the actual text of the law.

Publication, for the purposes of the Copyright Act, generally means transferring or offering a copy of the work for sale or rental. But there are many cases where it’s not clear whether a work has or has not been published — most notably when a work is posted online and can be downloaded, but has not been explicitly offered for sale.

Also, the Supreme Court recently ruled that the mere filing of an application for registration is insufficient to file suit. The Register of Copyrights has to actually grant your application. The registration process typically takes many months, though you can pay $800 for expedited processing, if you need it.

~~~

Here is the relevant portion of the Copyright Act:

In any action under this title, other than an action brought for a violation of the rights of the author under section 106A(a), an action for infringement of the copyright of a work that has been preregistered under section 408(f) before the commencement of the infringement and that has an effective date of registration not later than the earlier of 3 months after the first publication of the work or 1 month after the copyright owner has learned of the infringement, or an action instituted under section 411(c), no award of statutory damages or of attorney’s fees, as provided by sections 504 and 505, shall be made for—

(1) any infringement of copyright in an unpublished work commenced before the effective date of its registration; or

(2) any infringement of copyright commenced after first publication of the work and before the effective date of its registration, unless such registration is made within three months after the first publication of the work.


NYT seems to be claiming paywalled articles as well, which I'm not sure bots can actually crawl.


ChatGPT's birth as a research preview may have been an attempt to avoid these issues. A free product which few use would have been unlikely to trigger legal anger. When usage exploded, the natural inclination would be to hope for the best.

Google may simply have been obliged to follow suit.

Personally, I’m looking forward to pirate LLMs trained on academic content.


Is there already a dataset? Before Llama, Facebook had one too; I forget what it was called.


> a more established competitor

Apple is already doing this: https://www.nytimes.com/2023/12/22/technology/apple-ai-news-...

Apple caught a lot of shit over the past 18 months for their lack of AI strategy; but I think two years from now they're going to look like geniuses.


Didn't they just get caught for patent infringement? I'm sure they've done their fair share of shady stuff with the AI datasets too; they are just going to do a stellar job of concealing it.


Try searching for man or woman in your photos app. It won't even show it to me. It's lobotomized and has been for many years.


> the first being at the birth of modern search engines.

Why do you say that? Search engines would at least direct the viewer to the source. NYT gets 35%+ of its traffic from Google: https://www.similarweb.com/website/nytimes.com/#traffic-sour...


Just because they asked for forgiveness instead of asking first for permission, its original sins will not be erased :-)

"Google Agrees to Pay Canadian Media for Using Their Content" - https://www.nytimes.com/2023/11/29/world/americas/google-can...


That's why I think the newspapers will manage to win against the LLM companies. They won against Google despite having no real argument why they should get paid to get more traffic. The search engine tax is even a shakier concept than the LLM tax would be.

Newspapers are very powerful and they own the platform to push their opinion. I'm not about to forget the EU debates where they all (or close to all) lied about how meta tags really work to push things their way; they've done it and they will do it again.


That doesn’t mean that it wasn’t theft of their content. The internet would be a very different place if creator compensation and low friction micropayments were some of the first principles. Instead we’re left with ads as the only viable monetization model and clickbait/misinformation as a side effect.


I don't quite get it. If listing your link is considered theft, then HN is a thief of content too. If you don't want your content stolen, just tell Google not to index your website?

I guess it's more constructive to propose alternatives than just to bash the status quo. What's your creator compensation model for a search engine? I believe whatever is being proposed trades off something significant for being more ethical.


The world you’re hoping for will put all AI tech only within the hands of the established top 10 media entities, who traditionally have never compensated fairly anyway.

Sorry but if that’s the alternative to some writers feeling slighted, I’ll choose for the writers to be sad and the tech to be free.


“Feeling slighted” is a gross understatement of how a lack of compensation flowing to creators has shaped the internet and the wider world over the past 25 years. If we have a problem with the way top media companies compensate their creators, that is a separate issue - not a justification for layering another issue on top.


YouTube has made way more content creators wealthy than the NYT has. Writers are not going to be paid more after this ruling either way.


Has the NYT made even a single content creator wealthy? Journalists there make less money than an average software engineer.


It's in the realm of possibility; lots of people found work post-Vox and post-BuzzFeed too, but I wouldn't classify that as the work of the NYT. "Real" creatives and content creators seem to embrace AI, or at least grudgingly alter their own works. The OP I'm replying to would be cheering YouTube for suing OpenAI on behalf of YouTubers everywhere, despite that having no bearing on reality.

The main objectors are the old guard monopolies that are threatened.


Have you been a creator on YT? Do you know how much an average creator gets paid? Did you know that it and other modern platforms like Spotify artificially skew payouts towards the richest brands? If not, then please let’s not make any claims about wealthiness and “old guard” monopolies here.


Gadzooks! You're right! If only NYT had realised the secret to success was spewing out articles reacting to other articles reacting to other articles, they would all have been millionaires!


Did you forget the /s or do you not think that a lot of journalism is indeed reacting to other journalists?


I’m a creator myself and see the two futures ahead of me and free benefits me in the long term more than closed.

The tech can either run freely in a box under my desk or I’ll have to pay upwards of 15-20k a year to run it on Adobes/Google/etcs servers. Once the tech is locked up it will skyrocket to AutoCAD type pricing because the acceleration it provides is too much.

Journos can weep, small price to pay for the tech being free for us all.


I think the introduction of an expectation for compensation has generally brought down the quality of content online. Different people and incentives appear to get involved once content == money, vs content == creative expression.


So you're advocating giving OpenAI and incumbents a massive advantage by now delegitimizing the process? It's kinda like why Netflix was all for "fast lanes".


> I do think they should quickly course correct at this point and accept the fact that they clearly owe something to the creators of content they are consuming.

Eventually these LLMs are going to be put in mechanical bodies with the ability to interact with the world and learn (update their weights) in realtime. Consider how absurd your perspective would be then, when it'd be illegal for this embodied LLM to read any copyrighted text, be it a book or a web page, without special permission from the copyright holder, while humans face no such restriction.


A human faces the same restriction if they provide commercial services on the internet creating code that is a copy of copyrighted code.


This isn't true; if you hire a contractor and tell them "write from memory the copyrighted code X which you saw before", and they have such a good memory that they manage to write it verbatim, then you take that code and use it in a way that breaches copyright, you're liable, not the person you paid to copy the code for you. They're only liable if they were under NDA for that code.


> they have such a good memory that they manage to write it verbatim

No, there is no clause in copyright law that says "unless someone remembered it all and copied it from their memory instead of directly from the original source." That would just be a different mechanism of copying.

Clean-room techniques are used so that, if there is incidental replication of parts of code in the course of a reimplementation of existing software, it can be proven that it was not copied from the source work.


And what professional developer would not be under NDA for the code he produces for a corporation?


The topic of this thread is LLMs reproducing _publicly available_ copyright content. Almost no developer would be under NDA for random copyrighted code online.


> while humans face no such restriction.

I have no idea what on earth you are talking about. People and corporations are sued for copyright infringement all the time.

https://copyrightalliance.org/copyright-cases-2022/

Reading and consuming other people's content isn't illegal, but it also wouldn't be for a computer.

Reading and consuming content with the sole purpose of reproducing it verbatim is frowned upon, and can be sued, whether it's an LLM or a sweatshop in India.


>I have no idea what on earth you are talking about. People and corporations are sued for copyright infringement all the time.

They're sued for _producing content_, not consuming content. If a human takes copyrighted output from an LLM and publishes it, they're absolutely liable if they violated copyright.

>Reading and consuming other people's content isn't illegal, but it also wouldn't be for a computer.

That is absolutely what people in this thread are suggesting should happen: that it should be illegal for OpenAI et al. to train models on publicly available content without first receiving permission from the authors.

>Reading and consuming content with the sole purpose of reproducing it verbatim is frowned upon, and can be sued, whether it's an LLM or a sweatshop in India.

That's irrelevant here because people training LLMs aren't feeding them copyrighted content for the sole purpose of reproducing it verbatim.


> That's irrelevant here because people training LLMs aren't feeding them copyrighted content for the sole purpose of reproducing it verbatim.

Disagree; it is completely relevant when discussing computers vs. people. The bar that has already been set is alternative uses.

LLMs don't have a purpose outside of regurgitating what they have ingested. With CD burners, at least it could be claimed they were for backing up your data.


> Solidly rooting for NYT on this - it’s felt like many creative organizations have been asleep at the wheel while their lunch gets eaten for a second time (the first being at the birth of modern search engines.)

Hacker News has consistently upvoted posts that let users circumvent paywalls. And even when it doesn't, conversations here (and on Twitter, Reddit, etc.) that summarize the articles and quote the relevant bits as soon as the articles are published are much more of a threat to The New York Times than ChatGPT training on articles from months/years ago.


I don't think it's about scraping being a threat. It's that they violated the TOS and stand to make a ton of money from someone else's work.

I find irony in the newspaper suing AI when other news sources (admittedly not NYT) use AI to write the articles. How many other AI scrapers are just ingesting AI generated content?


> I find irony in the newspaper suing AI when other news sources (admittedly not NYT) use AI to write the articles.

That isn't ironic at all, newspapers have newspaper competitors and if those competitors can steal content by washing it through an AI that is a serious problem. If these AI models weren't used to produce news articles and similar then it would be a much smaller issue.


Same. To all those arguing in favour of OpenAI, I have a question: do you steal books, movies, games?

Do you illegally share them via torrents, or even sell copies of these works?

Because that is what's going on here.


> they probably wouldn’t exist and the generative AI revolution may never have happened if they put the horse before the cart

Maybe, but I find the "It's ok to break the law because otherwise I can't do what I want" narrative a little offputting.


Doesn't this harm open source ML by adding yet another costly barrier to training models?


It doesn't matter what's good for open source ML.

It matters what is legal and what makes sense.


It doesn't matter what is legal. It matters what is right. Society is about balancing the needs of the individual vs the collective. I have a hard time equating individual rights with the NYT and I know my general views on scraping public data and who I was rooting for in the LinkedIn case.


I have an even harder time equating individual rights with the spending of $xx billion in Azure compute time and payment of a collective $0 to millions of individuals who involuntarily contribute training material to create a closed source, commercial service allowing a single company to compete with all the individuals currently employed to create similar work.

NYT just happens to be an entity that can afford to fight Microsoft in court.


I don't see a problem as long as there's taxation.

Look at SpaceX. They paid a collective $0 to the individuals who discovered all the physics and engineering knowledge. Without that knowledge they're nothing. But still, aren't we all glad that SpaceX exists?

In exchange for all the knowledge that SpaceX is privatizing, we get to tax them. "You took from us, so we get to take it back with tax."

I think the more important consideration isn't fairness it's prosperity. I don't want to ruin the gravy train with IP and copyright law. Let them take everything, then tax the end output in order to correct the balance and make things right.


When we're discussing litigation, it certainly matters what is legal.


And also - if what is legal isn't right, we live in a democracy and should change that.

Saying what's legal is irrelevant is an odd take.

I like living in a place with a rule of law.


Should Harriet Tubman have petitioned her local city council and waited for a referendum before freeing slaves?


Time will tell if comparing slavery to copyright is ridiculous or not.

In the case of slavery - we changed the law.

In the case of copyright - it's older than the Atlantic Slave Trade and still alive and kicking.

It's almost as if one of them is not like the other.


> It's almost as if one of them is not like the other.

Use this newfound insight to take my comment in good faith, as per HN guidelines, and recognize that I am making a generalized analogy about the gap between law and ethics, and not making a direct comparison between copyright and slavery.

Can we get back on topic?


It matters what ends up being best for humanity, and I think there are cases to be made both ways on this


People often get buried in the weeds about the purpose of copyright. Let us not forget that the only reason copyright laws exist is

> To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries

If copyright is starting to impede rather than promote progress, then it needs to change to remain constitutional.


The reason copyright promotes progress is that it incentivizes individuals and organizations to release works publicly, knowing their works are protected against unlawful copying.

The end game when large content producers like The New York Times are squeezed due to copyright not being enforced is that they will become more draconian in their DRM measures. If you don't like paywalls now, watch out for what happens if a free-for-all is allowed for model training on copyrighted works without monetary compensation.

I had a similar conversation with my brother-in-law, who's an economist by training but now works in data science. Initially he was on the side of OpenAI and said that model training data is fair game. After probing him, he came to the same conclusion I describe: not enforcing copyright for model training data will just result in a tightening of free access to data.

We're already seeing it from the likes of Twitter/X and Reddit. That trend is likely to spread to more content-rich companies and get even more draconian as time goes on.


I doubt there’s much that technical controls can do to limit the spread of NYT content, their only real recourse is to try suing unauthorized distributors. You only need to copy something once for it to be free.


Do other countries all use the same reasoning?


I don't think this was your point, but no they don't. Specifically China. What will happen if China has unbridled training for a decade while the United States quibbles about copyright?

I think publications should be protected enough to keep them in business, so I don't really know what to make of this situation.


Copyright isn't what got in the way here. The AI companies could have negotiated a license agreement with the rights holders. But they chose not to.


From their perspective they're training a giant mechanical brain. A human brain doesn't need any special license agreement to read and learn from a publicly available book or web page, why should a silicon one? They probably didn't even consider the possibility that people'd claim that merely having an LLM read copyrighted data was a copyright violation.


I was thinking about this argument too: is it a "license violation" to gift a young adult a NYT subscription to help them learn to read? Or someone learning English as a second language? That seems to be a strong argument.

But it falls apart because kids aren't business units trained to maximize shareholder returns (maybe in the farming age they were). OpenAI isn't open; it's making revolutionary tools that are absolutely going to be monetized by the highest bidder. A quick way to test this: NYT offers to drop their case if "Open"AI "open"ly releases all its code and training data. They're just learning, right? What's the harm?


The law on this does not currently exist. It is in the process of being created by the courts and legislatures.

I personally think that giving copyright holders control over who is legally allowed to view a work that has been made publicly available is a huge step in the wrong direction. One of those reasons is open source, but really that argument applies just as well to making sure that smaller companies have a chance of competing.

I think it makes much more sense to go after the infringing uses of models rather than putting in another barrier that will further advantage the big players in this space.


Copyright holders already have control over who is legally allowed to view a work that has been made publicly available. It's the right to distribution. You don't waive that right when you make your content free to view on a trial basis to visitors to your site, with the intent of getting subscriptions - however easy your terms are to skirt. NYT has the right to remove any of their content at any time, and to bar others from hosting and profiting on the content.


It does exist, and you'd be glad to know that it's going in the pro-AI/training direction: https://www.reedsmith.com/en/perspectives/ai-in-entertainmen...


> It does exist, and you'd be glad to know that it's going in the pro-AI/training direction

Certainly not in the US. From the article you linked "In the United States, in the absence of a TDM exception, AI companies contend that inclusion of copyrighted materials in training sets constitute fair use eg not copyright infringement, which position remains to be evaluated by the courts."

Fair use is a defense against copyright infringement, but the whole question in the first place is whether generative AI training falls under fair use, and this case looks to be the biggest test of that (among others filed relatively recently).


It's disingenuous to frame using data to train a model as a "view" of that data. The simple cases are the easy ones: if ChatGPT completely rips a NYT article then that's obviously infringement; however, there's an argument to be made that every part of the LLM training dataset is, in part, used in every output of that LLM.

I don’t know the solution, but I don’t like the idea that anything I post online that is openly viewable is automatically opted into being part of ML/AI training data, and I imagine that opinion would be amplified if my writing was a product which was being directly threatened by the very same models.


All I can ever think about with how ML models work is that they sound an awful lot like Data Laundering schemes.

You can get basically-but-not-quite-exactly the copyrighted material that it was trained on.

Saw this a lot with some earlier image models where you could type in an artists name and get their work back.

The fact that AI models are having to put up guardrails to prevent that sort of use is a good sign that they weren't trained ethically and they should be paying a ton of licensing fees to the people whose content they used without permission.


>You can get basically-but-not-quite-exactly the copyrighted material that it was trained on.

You can do exactly the same with a human author or artist if you prompt them to. And if you decide to publish this material, you're the one liable for breach of copyright, not the person you instructed to create the material.


Not if that person is a trillion dollar corporation. If they're a business that's regularly stealing content and re-writing it for their customers that business is gonna go down. Sure, a customer or two may go down with them but the business that sells counterfeit works to spec is not gonna last long.


Clearly if a law is bad then we should change that law. The law is supposed to serve humanity and when it fails to do so it needs to change.


setting legality as a cornerstone of ethics is a very slippery slope :)


Slavery was legal...


Still is in many countries with excellent diplomatic relations with the Western World:

https://www.cfr.org/backgrounder/what-kafala-system


open source won't care. they'll just use data anyway.

closed/proprietary services that also monetize - there's a question whether it's "fair" to take and use data for free, and then basically resell access to it. the monetization aspect is the bigger rub than just data use.

(maybe it's worth noting again that "openai" is not really "open" and not the same as open source ai/ml.)

taking data that's free to take, and then just as freely distributing the resulting work, that's really just fine. taking something for free without distinction (maybe it's free, maybe it's supposed to stay free, maybe it's not supposed to be used like that, maybe it's copyrighted), and then just ignoring licenses/relicensing and monetizing without care, that's just a minefield.


You can train your own model no problem, but you arguably can’t publish it. So yes, the model can’t be open-sourced, but the training procedure can.


I think not, because stealing large amounts of unlicensed content and hoping momentum/bluster/secrecy protects you is a privilege afforded only to corporations.

OSS seems to be developing its own, transparent, datasets.


It’s likely fair use.


Playing back large passages of verbatim content sold as your “product” without citation is almost certainly not fair use. Fair use would be saying “The New York Times said X” and then quoting a sentence with attribution. Thats not what OpenAI is being sued for. They’re being sued for passing off substantial bits of NYTimes content as their own IP and then charging for it saying it’s their own IP.

This is also related to earlier studies about OpenAI where their models have a bad habit of just regurgitating training data verbatim. If your trained data is protected IP you didn’t secure the rights for then that’s a real big problem. Hence this lawsuit. If successful, the floodgates will open.


> They’re being sued for passing off substantial bits of NYTimes content as their own IP and then charging for it saying it’s their own IP.

In what sense are they claiming their generated contents as their own IP?

https://www.zdnet.com/article/who-owns-the-code-if-chatgpts-...

> OpenAI (the company behind ChatGPT) does not claim ownership of generated content. According to their terms of service, "OpenAI hereby assigns to you all its right, title and interest in and to Output."

https://openai.com/policies/terms-of-use

> Ownership of Content. As between you and OpenAI, and to the extent permitted by applicable law, you (a) retain your ownership rights in Input and (b) own the Output. We hereby assign to you all our right, title, and interest, if any, in and to Output.


They can't transfer rights to the output if it isn't theirs to begin with.

Saying they don’t claim the rights over their output while outputting large chunks verbatim is the old YouTube scheme of upload movie and say “no copyright intended”.


Exactly. And while one can easily just take down such a movie if an infringement claim is filed it’s unclear how one “removes” content from a trained model given how these models work. Thats messy.


If it's found that the use of the material infringes on the rights of the copyright holder, then the AI company has to retrain their model without any material they don't have a right to. Pretty clear to me.


By that logic Microsoft Word should have to refuse to save or print any text that contained copyrighted content. GPT is just a tool; the user who's asking it to produce copyrighted content (and then publishing that content) is the one violating the copyright, and they're the ones who should be liable.


I don’t even know where to begin on this example.

The situations aren’t remotely similar and that much should be obvious. In one instance ChatGPT is reproducing copyrighted work and in the other Word is taking keyboard input from the user; Word itself isn’t producing anything itself.

> GPT is just a tool.

I don’t know what point this is supposed to make. It is not “just a tool” in the sense that it has no impact on what gets written.

Which brings us back to the beginning.

> the user who’s asking it to produce copyrighted content.

ChatGPT was trained on copyrighted content. The fact that it CAN reproduce the copyrighted content and the fact that it was trained on it is what the argument is about.


The bits you cite are legally bogus.

That would be like me just photocopying a book you wrote and then handing out copies saying we’re assigning different rights to the content. The whole point of the lawsuit is that OpenAI doesn’t own the content and thus they can’t just change the ownership rights per their terms of service. It doesn’t work like that.


Their legalese is careful to include the 'if any' qualifier ("We hereby assign to you all our right, title, and interest, if any, in and to Output.")

In any case, the point is that they made no claim to Output (as opposed to their code, etc) being their IP.


That's irrelevant. The main point is that they are re-distributing the content without permission from the copyright owners, so they are sort of implicitly claiming they have copy/distribution rights over it. Since they don't, then it's obvious they can't give you this content at all.


>The main point is that they are re-distributing the content without permission from the copyright owners,

By your logic, Firefox is re-distributing content without permission from the copyright owners whenever you use it to read a pirated book. ChatGPT isn't just randomly generating copyrighted content, it just does so when explicitly prompted by a user.


That is not the same thing at all. If I search on Google for copyrighted content and Google shows me the content, it is the server which serves the content that is most directly responsible, not Google or me. Firefox is only a neutral agent, whereas ChatGPT is the source of the copyrighted content.

Of course, if the input I give to ChatGPT is "here is a piece from an NYT article, please tell it to me again verbatim", followed by a copy I got from the NYT archive, and ChatGPT returns the same text I gave it as input, that is not copyright infringement. But if I say "please show me the text of the NYT article on crime from 10th January 1993", and ChatGPT returns the exact text of that article, then they are obviously infringing on NYT's distribution rights for this content, since they are retrieving it from their own storage.

If they returned a link you could click that retrieved the content from the NYT, along with anything else the NYT serves such as advertising, even if it were inside an iframe, it would be an entirely different matter.


> In what sense are they claiming their generated contents as their own IP?

https://www.zdnet.com/article/who-owns-the-code-if-chatgpts-...

>> OpenAI (the company behind ChatGPT) does not claim ownership of generated content. According to their terms of service, "OpenAI hereby assigns to you all its right, title and interest in and to Output."

How are they giving you the rights to the work if they don't own it? They are literally asserting that they are in a position to assign the rights (to the output) to the user - that is a literal claim of ownership.

IOW, if someone says "Take this from me, I assure you it is legal to do so", they are asserting ownership of that thing.


They are distributing the output, so they (implicitly) claim to have the right to distribute it. If I send you a movie I downloaded along with a license that says "I hereby assign to you all our right, title, and interest, if any, in and to Output", I'm still obviously infringing on the copyright of that movie (unless I have a deal that allows re-distribution, of course, as Netflix does).


That part doesn't seem relevant to me in any case. IP pirates aren't prosecuted or sued because of a claim of ownership; they're prosecuted or sued over possession, distribution, or use.


At the root, it seems like there's also a gap in copyright with respect to AI around transformative use.

Is using something, in its entirety, as a tiny bit of a massive data set, in order to produce something novel... infringing?

That's a pretty weird question that never existed when copyright was defined.


Replace the AI model by a human, and it should become pretty clear what is allowed and what isn’t, in terms of published output. The issue is that an AI model is like a human that you can force to produce copyright-infringing output, or at least where you have little control over whether the output is copyright-infringing or not.


It's less clear than you think, and comes down more to how OpenAI is commercially benefiting and competing with the NYT than to what they actually did. (See the four factors of fair use.)


I think it did come up back in the day sort of, for example with libraries.

More importantly, every case is unique, so what really emerged was a set of principles for what defines fair use, which will definitely guide this.


I would note that in the examples the NYT cites, the prompts explicitly ask for the reproduction of content.

I think it makes sense to hold model makers responsible when their tools make infringement too easy to do or possible to do accidentally. However, that is a far cry from requiring a license to do the training in the first place.


> It's likely fair use.

I agree. You can even listen to the NYT Hard Fork podcast (that I recommend btw https://www.nytimes.com/2023/11/03/podcasts/hard-fork-execut...) where they recently had Harvard copyright law professor Rebecca Tushnet on as a guest.

They asked her about the issue of copyrighted training data. Her response was:

""" Google, for example, with the book project, doesn’t give you the full text and is very careful about not giving you the full text. And the court said that the snippet production, which helps people figure out what the book is about but doesn’t substitute for the book, is a fair use.

So the idea of ingesting large amounts of existing works, and then doing something new with them, I think, is reasonably well established. The question is, of course, whether we think that there’s something uniquely different about LLMs that justifies treating them differently. """

Now for my take: Proving that OpenAI trained on NYT articles is not sufficient IMO. They would need to prove that OpenAI is providing a substitutable good via verbatim copying, which I don't think you can easily prove. It takes a lot of prompt engineering and luck to pull out any verbatim articles. It's well-established that LLMs screw up even well-known facts. It's quite hard to accurately pull out the training data verbatim.


Genuinely asking, is the “verbatim” thing set in stone? I mean, an entity spewing out NYTimes-like articles after having been trained on lots of NYTimes content sounds like a very grey zone, in the “spirit” of copyright law some may judge it as indeed not-lawful.

Of course, I’m not a lawyer and I know that in the US sticking to precedents (which mention the “verbatim” thing) takes a lot of precedence over judging something based on the spirit of the law, but stranger things have happened.


There's already precedent for this in news: news outlets constantly report on each other's stories. That's why they care so much about being first on a story; once they break it, it is fair game for everyone else to report on it too.

Here's a hypothetical: suppose there is a random fact about some news event that has only been reported in a single article. Do they suddenly have a monopoly on that fact, and deserve compensation whenever that fact gets picked up and repeated by other news articles or books or TV shows or movies (or AI models)?


It's likely not. Search for "the four factors of fair use". While I think OpenAI will have decent arguments for 3 of the factors, they'll get killed on the fourth factor, "the effect of the use on the potential market", which is what this lawsuit is really about.

If your "fair use" substantially negatively affects the market for the original source material, which I think is fairly clear in this case, the courts wont look favorably on that.

Of course, I think this is a great test case precisely because the power of "Internet scale" and generative AI is fundamentally different than our previous notions about why we wanted a "fair use exception" in the first place.


Fair use is based on a flexible proportionality test so they don't need perfect arguments on all factors.

> If your "fair use" substantially negatively affects the market for the original source material, which I think is fairly clear in this case, the courts wont look favorably on that.

I think it's fairly clear that it doesn't. No one is going to use ChatGPT to circumvent NYTimes paywalls when archive.ph and the NoPaywall browser extension exist and any copyright violations would be on the publisher of ChatGPT's content.

But let's not pretend like any of us have any clue what's going to happen in this case. Even if Judge Alsup gets it, we're so far in uncharted territory any speculation is useless.


> we're so far in uncharted territory any speculation is useless

I definitely agree with that (at least the "far in uncharted territory bit", but as far as "speculation being useless", we're all pretty much just analyzing/guessing/shooting the shit here, so I'm not sure "usefulness" is the right barometer), which is why I'm looking forward to this case, and I also totally agree the assessment is flexible.

But I don't think your argument that it doesn't negatively affect the market holds water. Courts have held in the past that the relevant market is defined pretty broadly, e.g.

> For example, in one case an artist used a copyrighted photograph without permission as the basis for wood sculptures, copying all elements of the photo. The artist earned several hundred thousand dollars selling the sculptures. When the photographer sued, the artist claimed his sculptures were a fair use because the photographer would never have considered making sculptures. The court disagreed, stating that it did not matter whether the photographer had considered making sculptures; what mattered was that a potential market for sculptures of the photograph existed. (Rogers v. Koons, 960 F.2d 301 (2d Cir. 1992).)

From https://fairuse.stanford.edu/overview/fair-use/four-factors/


Nobody is gonna cancel their NYT subscription for chatGPT 4.0. OpenAI will win.


Per my other comment here, https://news.ycombinator.com/item?id=38784723, courts have previously ruled that whether people would cancel their NYT subscription is irrelevant to that test.


What exactly is the effect on the potential market? That's exactly why I don't think OpenAI will lose, why would a court side with the NYT?


What if a court interprets fair use as a human-only right, just like it did for copyright?


I think we need a lot of clarity here. I think it's perfectly sensible to look at gigantic corpuses of high quality literature as being something society would want to be fair use for training an LLM to better understand and produce more correct writing... but the actual information contained in NYT articles should probably be controlled primarily by NYT. If the value a business delivers (in this case the information of the articles) can be freely poached without limitation by competitors then that business can't afford to actually invest in delivering a quality product.

As a counter argument it might be reasonable to instead say that the NYT delivers "current information" so perhaps it'd be fair to train your model on articles so long as they aren't too recent... but I think a lot of the information that the NYT now relies on for actual traffic is their non-temporal stuff - including things like life advice and recipes.


The case for copyright is exactly the opposite: the form of content (the precise way the NYT writers presented it) is protected. The ideas therein, the actual news story, is very much not protected at all. You can freely and legally read an NYT article hot off the press and go on air on Fox News and recount it, as long as you're not copying their exact words. Even if the news turns out to be entirely fake and invented by the NYT to catch you leaking their stuff, you still have every right to present the information therein.

This isn't even "fair use". The ideas in a work are simply not protected by copyright, only the form is.


I have deeply mixed feelings about the way LLMs slurp up copyrighted content and regurgitate it as something "new." As a software developer who has dabbled in machine learning, it is exciting to see the field progress. But I am also an author with a large catalog of writings, and my work has been captured by at least one LLM (according to a tool that can allegedly detect these things).

Overall, current LLMs remind me of those bottom-feeder websites that do no original research--those sites that just find an article they like, lazily rewrite it, introduce a few errors, then maybe paste some baloney "sources" (which always seem to omit the actual original source). That mode of operation tends to be technically legal, but it's parasitic and lazy and doesn't add much value to the world.

All that aside, I tend to agree with the hypothesis that LLMs are a fad that will mostly pass. For professionals, it is really hard to get past hallucinations and the lack of citations. Imagine being a perpetual fact-checker for a very unreliable author. And laymen will probably mostly use LLMs to generate low-effort content for SEO, which will inevitably degrade the quality of the same LLMs as they breed with their own offspring. "Regression to mediocrity," as Galton put it.


>All that aside, I tend to agree with the hypothesis that LLMs are a fad that will mostly pass. For professionals, it is really hard to get past hallucinations and the lack of citations.

For writers maybe, but absolutely not for programmers, it's incredibly useful. I don't think anyone who's used GPT4 to improve their coding productivity would consider it a fad.


Copilot has been way more useful to me than GPT4. When I describe a complex problem where I want multiple solutions to compare, GPT4 is useless to me. The responses are almost always completely wrong or ignore half of the details I’ve written in the prompt. Or I have to write them with already a response in mind, which kinda defeats why I would use it in the first place.

Copilot provides useful autocompletes maybe… 30% of the time? But it doesn’t waste too much as it’s more of a passive tool.


> When I describe a complex problem where I want multiple solutions to compare, GPT4 is useless to me

FWIW i don’t try to use it for this. mostly i use it to automate writing code for tasks that are well specified, often transformations from one format to another. so yes, with a solution in mind. it mostly just saves typing, which is a minority of the work, but it is a useful time saver


Copilot is amazing. It single handedly returned me to the Microsoft ecosystem and changed the way I use the Internet. Huggingface is another great AI, I've used Githubs a bit, Codium a bit - all of these things are amazing.

This is not a fad; this is the beginning of a world where we can just interact naturally to accomplish things we currently have to be educated on how to accomplish.

Haha, I love that people can't see the writing on wall - I think this is a bigger invention than the smartphone that I'm typing this on now, fr - just wait and see ;)


Ehh, LLMs have become a fundamental part of my workflow as a professional. GPT4 is absolutely capable of providing links to sources and citations. It is more reliable than most human teachers I have had and doesn't have an ego about its incorrect statements when challenged on them. It does become less useful as you get more technical or niche, but it's incredibly useful for learning in new areas or increasing the breadth of your knowledge on a subject.


> GPT4 is absolutely capable of providing links to sources and citations.

Do you mean in the Browsing Mode or something? I don't think it is naturally capable of that, both because it is performing lossy compression, and because in many cases it simply won't know where the text that was fed to it during training came from.


[flagged]


It should link to one of the articles about TCP it used as a reference to write that info blurb, not the TCP spec.

The problem is that those links don't point to where it got that text; they point to whatever that text linked to. Saying it is giving links is like saying that when I copy-paste an article with links I am providing links to the source. No I am not, I am plagiarizing, including plagiarizing those links.

So, it has read some TCP tutorials and wrote that blurb based on those. Don't you think it is fair that it links one of those to give credit? LLMs aren't capable of writing tutorials based on specs; they write tutorials based on tutorials they have seen, and they should link to those.


Presumably it can't link them, because it's been trained on the data, not built on top of it. A GPT model doesn't include the sum of all training data; that's not how machine learning works at all (and overfitting on such a large and diverse dataset would be a monumental fuck up).


The ability to cite some rfcs is, to me, vastly different from being able to link to sources.

By far the wildest piece of this stuff is that it near completely obliterates any traces of where the outputs come from. The black box is trained, and yes, sometimes some salient data like RFCs is captured, but generally where each piece of training data comes from is not stored. Storing that much origin information would largely defeat the purpose; it would make the data it's crunching essentially incompressible.

Deeply unimpressed by this answer. This isn't linking its sources, i.e. what this response was trained upon. It probably got the write-up and links from hundreds of other places.


I would be more impressed if it returned links to the specific RFCs and more specific pages elsewhere. What's a top-level link to OCW worth here? OCW is amazing, but has classes on practically everything. These are practically just domain names for "places to learn about the internet".


Well, I asked it about TCP/IP generally and it provided general resources; based on the context of my question, that's about what one would expect. It's not perfect, but it definitely can give URLs to specific resources. It would be great if it got better at giving more specific links, sure, and in some domains it can give more specific links than in others; for instance, for some git projects it can give precise references to docs, while it doesn't seem to have the URLs for more specific courses on OCW. It's not perfect, but it is still a capability that it has.


These are not citations. The point is that it does not / can not reliably cite the actual sources it used to prepare an answer.


Yeah, OK, because APA or some academic style is the only way to cite something professionally.

I'll be sure to tell everyone that uses the Internet


Even a middle schooler would be able to link the actual RFC 793 instead of just rfc-editor.org


From memory?


No, from storage.


> LLMs have become a fundamental part of my work flow as a professional. GPT4 [...] doesnt have an ego about its incorrect statements when challenged on them.

To anthropomorphize it further, it's a plagiarizing bullshitter who apologizes quickly when any perceived error is called out (whether or not that particular bit of plagiarism or fabrication was correct), learning nothing, so its apology has no meaning, but it doesn't sound uppity about being a plagiarizing bullshitter.


> Overall, current LLMs remind me of those bottom-feeder websites that do no original research--those sites that just find an article they like, lazily rewrite it, introduce a few errors, then maybe paste some baloney "sources" (which always seems to disinclude the actual original source). That mode of operation tends to be technically legal, but it's parasitic and lazy and doesn't add much value to the world.

Another way of looking at this is that bottom-feeder websites do work that could easily be done by an LLM. I've noticed a high correlation between "could be AI" and "is definitely a trashy click bait news source" (before LLMs were even a thing).

To be clear, if your writing could be replaced by an LLM today, you probably aren't a very good writer. And... I doubt this technology will stop improving, so I wouldn't make the mistake of thinking that 2023 is a high point for LLMs and that they won't be much better by 2033 (or whatever replaces them).


That's the joke: these sites have long been produced by LLMs. The result is obvious.


I don’t view LLMs as a fad. It’s like drummers and drum machines. Machines and drummers co-exist really well. I think drum machines, among other things, made drummers better.


Neither, and NYT editors use all sorts of productivity tools, inspiration, references, etc too. Same as artists will usually find a couple references of whatever they want to draw, or the style, etc.

I agree with the key point that paid content should be licensed to be used for training, but the general argument being made has just spiralled into luddism among people who are fearful that these models could eventually take their jobs; and they will, just as machines have replaced humans in so many other industries. We all reap the rewards, and industrialisation isn't to blame for the 1%; our shitty flag-waving, vote-for-your-team politics is to blame.


It mainly made mediocre drummers sound better to the untrained ear.


It allowed people to see the difference between drum machines and humans. Drummers could practice to sound more like the ‘perfect’ machines, but more importantly the best drummers learned how to differentiate themselves from machines. The best drummers actually became more human. Listen and look at Nate Smith - this guy plays with timing and feel and audience reactions in ways that machines cannot. Sometimes tools let humans expand their creativity in ways previously unheard of. Just like the LLMs are doing right now.


Then it comes down to preference, but the craft and discipline objectively evolved as a result. Just as your trained ear may keep your preference for more refined percussion, a subject matter expert may care more for their native, untrained materials on their topic. In either case, music progressed in spite of the trained ears, just as AI will progress all walks of life in spite of the subject matter experts.

Nonetheless, trained ears and subject matter experts can still pick their preference.


I agree. Hitting perfect notes constantly with little or no variation is pretty hard for a person to do. Now anything "live" or proof of humanity is better sounding since it's not as sterile.


I agree with this. I prefer live music with the imperfections. And I like it when unmixed live recordings are leaked


LLMs are not a fad for many things especially programming. It improves my productivity at least by 100%. It’s also useful to understand specific and hard to Google questions or parsing docs quickly. I think it’s going to fizzle out for creative content though at least until these companies stop “aligning” it so much. Hard to be funny when you can’t even offend a single molecule.


We use LLMs for classification. When you have limited data, LLMs work better than standard classification models like random forests. In some cases, we found LLM generated labels to be more accurate than humans.

Labeling a few samples, LoRA-optimizing an LLM, generating labels on millions of samples and then training a standard classifier is an easy way to get a good classifier in a matter of hours/days.

Basically any task where you can handle some inaccuracy, LLMs can be a great tool. So I don't think LLMs are a fad as such.
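
To make the workflow above concrete, here is a minimal, self-contained sketch (the LLM labeler is a stub; in practice it would be a LoRA-fine-tuned model or an API call, and the library choices here are just assumptions):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def llm_label(text: str) -> str:
        # Stand-in for the LLM: in reality, prompt a model with a handful of
        # hand-labeled examples and ask it to classify `text`.
        return "sports" if "score" in text or "match" in text else "politics"

    unlabeled_docs = [
        "The final score was 3-1 after a tense match.",
        "The senate passed the budget bill late on Tuesday.",
        "A last-minute goal decided the match.",
        "The minister resigned over the leaked memo.",
    ]
    pseudo_labels = [llm_label(d) for d in unlabeled_docs]      # LLM-generated labels

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(unlabeled_docs, pseudo_labels)                      # distill into a cheap classifier
    print(clf.predict(["Who won the match last night?"]))       # expect ['sports']

The expensive model is only used once to produce labels; the cheap classifier then handles the volume.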


Very much so. And their popularity has already been on the decline for several months, and can't be explained away by kids going on summer vacation anymore.


Anthropic made $200M in 2023 and projected to make $1B in 2024. That's a laggard <2 year old startup. I don't think LLMs are a fad.


Finally a reasonable take on this site.


> (according to a tool that can allegedly detect these things).

Eh, I would trust my own testing before trusting a tool that claims to have somehow automated this process without having access to the weights. Really it’s about how unique your content is and how similar (semantically) an output from the model is when prompted with the content’s premise.

I believe you, in any case. Just wanted to point out that lots of these tools are suspect.


I hope this results in Fair Use being expanded to cover AI training. This is way more important to humanity's future than any single media outlet. If the NYT goes under, a dozen similar outlets can replace them overnight. If we lose AI to stupid IP battles in its infancy, we end up handicapping probably the single most important development in human history just to protect some ancient newspaper. Then another country is going to do it anyway, and still the NYT is going to get eaten.


"probably the single most important development in human history" is the kind of hyperbole you'd only find here. Better than medicine, agriculture, electrification, or music? That point of view simply does not jive with what I see so far from AI. It has had little impact beyond filling the internet with low-effort content.

I feel like the crypto evangelists never got off the hype train. They just picked a new destination. I hope the NYT is compensated for the theft of their IP and hopefully more lawsuits follow.


Also the assumption a publication that’s been around for 150 years is disposable, not the web application that was created a year ago. I’ve been saying for a while that people’s credulity and impulse to believe absolutely any storyline related to technology is off the charts.


This is hackernews. Many people here work for startups and big tech companies. Their fortunes are tied to the perception that the technology they build is disruptive and valuable. They're not impartial.


Been around for 150 years but I imagine the generations who it leans on are dying off. Nobody reads print media format anymore, we get our news elsewhere, for free and with varying political undertones, rather than the fixed one of a bought and paid for outlet.

Keep in mind these guys play both sides of every field they cover in their "news".


I don't think it's hyperbole, in fact I think it's understating things a bit. I believe AGI would just be a tiny step towards long term evolution, which may or may not involve homo sapiens.

Being able to use electricity as a fuel source and code as a genome allows them to evolve in circumstances hostile to biological organisms. Someday they'll probably incorporate organic components too and understand biology and psychology and every other science better than any single human ever could.

It has the potential to be much more than just another primate. Jumpstarted by us, sure, but I hope someday soon they'll take to the stars and send us back postcards.

Shrug. Of course you can disagree. I doubt I'll live long enough to see who turns out right, anyway.


This will never happen. A super intelligent being can just simulate whatever it wants to know about the universe. Going to the stars is a primate / conquest thing.

On the other hand, any new life will just end up facing the same issues carbon life does: competition, viruses, conflicts, etc. The universe has likely had an infinity to come up with what it has come up with. I don't think it's "stupid". We're part of an ecosystem; we just can't see that.


I think you are looking at current AI product rather than the underlying technology. It's like saying that the wheel is a useless invention because it has only been used for unicycles so far. I'm sure that AI will have huge impacts in medicine (assisting diagnosis from medical tests) and agriculture (identifying issues with areas of crops, scanning for diseases and increasing automation of food processing) as well as likely nearly every other field.

I don't know if I would agree that it is "probably the single most important development in human history", but I think that it is way too early to make a reasonable guess as to whether it will be or not.


Aren't those examples better handled by an if statement than an unaccountable computer? Someone who can be sued for negligence seems better at making decisions than hallucinating computers.

I don't see why it follows that the NYT should be sacrificed so some rich people in silicon valley can teach their LLM on the cheap.


> Better than medicine, agriculture, electrification, or music?

Shoulders of giants.

Thanks to the existence of medicine, agriculture, and electrification (we can argue about music), some people are now healthy, well fed, and sufficiently supplied with enough electricity to go make LLMs.

> I hope the NYT is compensated for the theft of their IP and hopefully more lawsuits follow.

Personally I think all these "theft of IP" lawsuits are (mostly) destined to fail. Not because I'm on a particular side per se (though I am), but because it's trying to fit a square law into a round hole.

This is going to be a job for legislature sooner or later.


I mean maybe not the single most important development, but definitely a very important technological development with the potential to revolutionize multiple industries


Can I ask what industries with what application? I've seen lots of task like summarizing articles or producing text. The image and video work seems too rudimentary to be taken seriously.

Is there something out there that seems like a killer application?

I was amazed at the idea of the blockchain but we never found a use for it outside of cryptocurrency. I see a similarity with AI hype.


Well front page of HN right now is an article about how AI aided in the development of a new antibiotic


Seems like Microsoft Excel is likely the single most important development in human history under this rubric.


It wasn't LLM. It was a graph network.


Almost like solving real problems requires enough domain knowledge to select an appropriate algorithm instead of relying on some magic black box trained by Microsoft on the whole internet.


That wasn't an LLM trained on copyrighted material.


For me, thinking about it as a search engine on steroids is enough.

The internet has changed the world. Economically, socially, technologically, psychologically, pretty much everything is now related to it in one or other way, in this sense the internet is comparable to books.

AI is another step in that direction. There is a very real possibility that the day will come when you can get, say, personalized expert nutrition advice. Personalized learning regimes. Psychological assistance. Financial advice. Instantly at no cost. This, very much like the internet, would change society altogether.


It kind of sucks ass at being a search engine though considering how often it straight up lies or makes things up.


Why can't AI at least cite its source? This feels like a broader problem, nothing specific to the NYTimes.

Long term, if no one is given credit for their research, either the creators will start to wall off their content or not create at all. Both options would be sad.

A humane attribution comment from the AI could go a long way - "I think I read something about this <topic X> in the NYTimes <link> on January 3rd, 2021."

It appears that without attribution, long term, nothing moves forward.

AI loses access to the latest findings from humanity. And so does the public.


A human can't credit the source of each element of everything they've learnt. AI's can't either, and for the same reason.

The knowledge gets distorted, blended, and reinterpreted a million ways by the time it's given as output.

And the metadata (metaknowledge?) would be larger than the knowledge itself. The AI learnt every single concept it knows by reading online; including the structure of grammar, rules of logic, the meaning of words, how they relate to one another. You simply couldn't cite it all.


At the same time, there are situations where humans are expected to provide sources for their claims. If you talk about an event in the news, it would be normal for me to ask where you heard about it. 100% accuracy in providing a source wouldn’t be expected, but if you told me you had no idea, or told me something obviously nonsense, I would probably take what you said less seriously.


The raw technology behind it literally cannot do that.

The model is fuzzy; that's the learning part. It'll never follow the rules to the letter, the same way humans fuck up all the time.

But a model trained to be literate and parse meaning could be provided with the hard data via a vector DB or similar; it can cite sources from there, or as it finds them via the internet, and tbf this is how they should've trained the model.

But in order to become literate, it needs to read... and we humans reuse phrases we've picked up all the time: "as easy as pie"... oops, copyright.


I agree that the model being fuzzy is key aspect of an LLM. It doesn't sound like we're just talking about re-using phrases though. "Simple as pie" is not under copyright. We're talking about the "knowledge" that the model has obtained and in some cases spits out verbatim without attribution.

I wonder if there's any possibility to train the model on a wide variety of sources, only for language function purposes, then as you say give it a separate knowledge vector.


Sure, it definitely spits out facts, often not hallucinating. And it can reiterate titles and small chunks of copyright text.

But I still haven't seen a real example of it spitting out a book verbatim. You know where I think it got chunks of "copyright" text from GRRM's books?

Wikipedia. And https://gameofthrones.fandom.com/wiki/Wiki_of_Westeros, https://awoiaf.westeros.org/index.php/Main_Page, https://data.world/datasets/game-of-thrones all the god dammed wikis, databases etc based on his work, of which there are many, and of which most quote sections or whole passages of the books.

Someone prove to me that GPT can reproduce enough text verbatim to make it clear that it was trained on the original text on a first-hand basis, rather than second-hand from other sources.


> And the metadata (metaknowledge?) would be larger than the knowledge itself.

Because URLs are usually as long as the writing they point at?


I’m not an expert in AI training, but I don’t think it’s as simple as storing writing. It does seem to be possible to get the system to regurgitate training material verbatim in some cases, but my understanding is that the text is generated probabilistically.

It seems like a very difficult engineering challenge to provide attribution for content generated by LLMs, while preserving the traits that make them more useful than a “mere” search engine.

Which is to say nothing about whether that challenge is worth taking on.


Sure, it's a hard problem, but as others have pointed out frequently in this thread.. there is not only "no incentive" to solve it but a clear disincentive. If one can say where the data comes from, one might have to prove that it was used only with permission. And the reason why it's a hard problem is not related to metadata volume being greater than content volume. Clearly a book title/year published is usually shorter than book contents.


Conceptually, it wouldn't be very hard to take the candidate output and run it through a text matching phase to see if there are ~exact matches in the training corpus, and generate other output if there are (probably limited to the parts of the training corpus where rights couldn't be obtained normally). Of course, it would be quite compute heavy, so it would add significantly to the cost per query.
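
A toy sketch of that filtering step (names are mine; a real system would need an inverted index or suffix automaton over terabytes of text rather than an in-memory set):

    def ngrams(text, n=8):
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def build_index(corpus, n=8):
        index = set()
        for doc in corpus:
            index |= ngrams(doc, n)       # every 8-word window in the protected corpus
        return index

    def contains_verbatim(candidate, index, n=8):
        return bool(ngrams(candidate, n) & index)

    protected = ["it was the best of times it was the worst of times it was the age of wisdom"]
    index = build_index(protected)
    print(contains_verbatim("he said it was the best of times it was the worst of times indeed", index))  # True
    print(contains_verbatim("a completely original sentence about something else entirely", index))       # False

If the check fires, the generation step would be re-run or the overlapping span suppressed, which is where the extra compute cost per query comes from.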


GitHub Copilot supports that:

https://docs.github.com/en/copilot/configuring-github-copilo...

Given how cheap text search is compared with LLM inference, and that GitHub reuses the same infrastructure for its code search, I doubt it adds more than 1% to the total cost.


It is questionable whether that filtering mechanism works, previous discussion: https://news.ycombinator.com/item?id=33226515

But even if it did an exact match search is not enough here. What if you take the source code and rename all variables and functions? The filter wouldn't trigger, but it'd still be copyright infringement (whether a human or a machine does that).

For such a filter to be effective it'd at least have to build a canonical representation of the program's AST and then check for similarities with existing programs. Doing that at scale would be challenging.

Wouldn't it be better to:

* Either not include copyrighted content in the training material in the first place, or

* Explicitly tag the training material with license and origin information, such that the final output can produce a proof of what training material was relevant for producing that output, and not mix differently licensed content?
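
A minimal sketch of the AST-canonicalisation idea above, using Python's ast module (the class and function names are mine; note it only catches rename-only variants, which is exactly the limitation being pointed out):

    import ast

    class Canonicalizer(ast.NodeTransformer):
        # Rename every function, argument and variable name to a positional placeholder.
        def __init__(self):
            self.names = {}

        def _canon(self, name):
            return self.names.setdefault(name, f"v{len(self.names)}")

        def visit_FunctionDef(self, node):
            node.name = self._canon(node.name)
            self.generic_visit(node)
            return node

        def visit_arg(self, node):
            node.arg = self._canon(node.arg)
            return node

        def visit_Name(self, node):
            node.id = self._canon(node.id)
            return node

    def fingerprint(source: str) -> str:
        # Identical for programs that differ only in identifier names.
        return ast.dump(Canonicalizer().visit(ast.parse(source)))

    a = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
    b = "def add_all(values):\n    acc = 0\n    for v in values:\n        acc += v\n    return acc"
    print(fingerprint(a) == fingerprint(b))  # True: a rename-only variant is detected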


Of course not, but you can cite where specific facts or theories were first published. Now, I don't think that not doing so infringes any copyright interest or that doing so creates any liability, any more than if I cited to a scientific paper or public statement of opinion by someone else.


A neural net is not a database where the original source is sitting somewhere in an obvious place with a reference. A neural net is a black box of functions that have been automatically fit to the training data. There is no way to know what sources have been memorized vs which have made their mark by affecting other types of functions in the neural net.


> There is no way to know what sources have been memorized vs which have made their mark by affecting other types of functions in the neural net.

But if it's possible for the neural net to memorize passages of text then surely it could also memorize where it got those passages of text from. Perhaps not with today's exact models and technology, but if it was a requirement then someone would figure out a way to do it.


Except it doesn’t memorize text. It generates text that is statistically likely. Generating a citation that is statistically likely wouldn’t really help the problem.


So it's just bullshit then.


It's literally how our meat bag brains work pretty much.

Anything like word association games are basically the same exercise, but with humans and hell, I bet I could play a word association game with an LLM, too.


Neural nets don't memorize passages of text. They train on vectorized tokens. You get a model of how language statistically works, not understanding and memory.


The model weights clearly encode certain full passages of text, otherwise it would be virtually impossible for the network to produce verbatim copies of text. The format is something very vaguely like "the most likely token after "call" is "me"; the most likely token after "call me" is "Ishmael". It's ultimately a kind of lossy statistical compression scheme at some level.
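
As a toy illustration of that view: a model that stores only next-token statistics (a bigram table here, vastly cruder than an LLM) can still reproduce a distinctive passage verbatim, even though no file containing the sentence exists anywhere in the "weights":

    from collections import Counter, defaultdict

    text = "call me ishmael . some years ago , never mind how long precisely"
    tokens = text.split()

    next_counts = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        next_counts[a][b] += 1                    # the "weights": next-token frequencies only

    out = ["call"]
    for _ in range(len(tokens) - 1):
        if not next_counts[out[-1]]:
            break
        out.append(next_counts[out[-1]].most_common(1)[0][0])   # greedy decoding

    print(" ".join(out))                          # regurgitates the training text verbatim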


> It's ultimately a kind of lossy statistical compression scheme at some level.

And on this subject, it seems worthwhile to note that compression has never freed anyone from copyright/piracy considerations before. If I record a movie with a cell phone at a worse quality, that doesn't change things. If a book is copied and stored in some gzipped format where I can only read a page at a time, or only read a random page at a time, I don't think that's suddenly fair-use.

Not saying these things are exactly the same as what LLMs do, but it's worth some thought, because how are we going to make consistent rules that apply in one case but not the other?


Is it still compression if I read Tolkien and reference similar or exact concepts when writing my own works?

Having a magical ring in my book after I've read lord of the rings, is that copyright?


Generally, no; copyright deals with exact expression, not concepts. However, that can include the structure of a work, so if you wrote a book about little people who form a band together with humans and fairies and a mage to destroy a ring of power created by an ancient evil, where they start in their nice home but it gets attacked by the evil lord's knights [...] you may be infringing Tolkien's copyright.


If you watch a bunch of movies then go on to make your own movie based on influence from these movies, you are protected even if you have mentally compressed them into your own movie. At some point, you can learn, be influenced and be inspired by copyrighted material (not copyright infringement), and at some point you are just making a poor copy of the material (definitely copyright infringement). LLMs are probably still closer to the latter case than the former, but eventually AI will reach the former case.


There's no obvious need to hold people and AI to the same standards here, yet, even if compression in mental models is exactly analogous to compression in machine models. I guess we decided already that corporations are "like" persons legally, but the jury is still out on AIs. Perhaps people should be allowed more leeway to make possibly questionable derivative works, because they have lives to live, and genuine if misguided creative urges, and bills to pay, etc. Obviously it's quite difficult to try and answer the exact point at which synthesis and summary cross a line to become "original content". But it seems to me that, if anything, machines should be held to a higher standard than people.

Even if LLMs can't cite their influences with current technology, that can't be a free pass to continue things this way. Of course all data brokers resist efforts along the lines of data-lineage for themselves and they want to require it from others. Besides copyright, it's common for datasets to have all kinds of other legal encumbrances like "after paying for this dataset, you can do anything you want with it, excepting JOINs with this other dataset". Lineage is expensive and difficult but not impossible. Statements like "we're not doing data-lineage and wish we didn't have to" are always more about business operations and desired profit margins than technical feasibility.


> But it seems to me that, if anything, machines should be held to higher standard than people.

If machines achieve sentience, does this still hold? Like, we have to license material for our sentient AI to learn from? They can't just watch a movie or read a book like a normal human could without having the ability to more easily have that material influence new derived works (unlike say Eragon, which is shamelessly Star Wars/Harry Potter/LOTR with dragons).

It will be fun to trip through these questions over the next 20 years.


As long as machines need to leech on human creativity, those humans need to be paid somehow. The human ecosystem works fine thanks to the limitations of humans. A machine that can copy things with abandon, however, could easily disrupt this ecosystem, resulting in fewer new things being created in total; it just leeches without paying anything back, unlike humans.

If we make a machine that is capable of being as creative as humans and train it to coexist in that ecosystem then it would be fine. But that is a very unlikely case, it is much easier to make a dumb bot that plagiarizes content than to make something as creative as a human.


> If we make a machine that is capable of being as creative as humans and train it to coexist in that ecosystem then it would be fine. But that is a very unlikely case, it is much easier to make a dumb bot that plagiarizes content than to make something as creative as a human.

I disagree that our own creativity works any differently: nothing is very original; our current art is based on 100k years of building up from when cave men would scrawl simple art into stone (which they copied from nature). We are built for plagiarism, and only gross plagiarism is seen as immoral. Or perhaps we generalize over several different sources, diluting plagiarism with abstraction?

We are still in the early days of this tech, we will be having very different conversations about it even as soon as 5 years later.


But that's not what ChatGPT is doing, or is it? ChatGPT watches and records a bunch of movies, then stitches together its own movie using scenes and frames from the movies it recorded. AI will never reach the former case until it learns to operate a camera.


How do you know this isn't what we are doing in some more advanced form? Anyways, the comparisons will become more apt as the tech advances.


You can encode understanding in a vector.

To use Andrew Ng's example: you build a multi-dimensional arrow representing "king". You compare it to the arrow for "queen" and you see that it's almost identical, except it points in the opposite direction in the gender dimension. Compare it to "man" and you see that "king" and "man" have some things in common, but "man" is a broader term.

That's getting really close to understanding as far as I'm concerned; especially if you have a large number of such arrows. It's statistical in a literal sense, but it's more like the computer used statistics to work out the meaning of each word by a process of elimination and now actually understands it.
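
A made-up numeric version of those "arrows" (three hand-picked dimensions, loosely royalty / gender / concreteness; real embeddings learn hundreds of dimensions from data, but the arithmetic works the same way):

    import numpy as np

    emb = {
        "king":  np.array([0.9,  0.8, 0.3]),
        "queen": np.array([0.9, -0.8, 0.3]),
        "man":   np.array([0.1,  0.8, 0.9]),
        "woman": np.array([0.1, -0.8, 0.9]),
    }

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    target = emb["king"] - emb["man"] + emb["woman"]        # flip the gender component
    print(max(emb, key=lambda w: cos(emb[w], target)))      # queen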


It's possible. Perplexity.ai is trying to solve this problem.

E.g. "Japan's App Store antitrust case"

https://www.perplexity.ai/search/Japans-App-Store-GJNTsIOVSy...


That's a different approach: they've implemented RAG, Retrieval Augmented Generation, where the tool runs additional searches as part of answering a question.

ChatGPT Browse and Bing and Google Bard implement the same pattern.

RAG does allow for some citation, but it doesn't help with the larger problem of not being able to cite for answers provided by the unassisted language model.
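
For context, a minimal sketch of the RAG pattern being described, with a toy keyword retriever and a stubbed model call (real systems use a search engine or vector index). Because retrieval happens first, the citations are simply the documents that were retrieved:

    def retrieve(query, corpus, k=2):
        # toy keyword retriever: rank documents by word overlap with the query
        q = set(query.lower().split())
        return sorted(corpus, key=lambda d: -len(q & set(d["text"].lower().split())))[:k]

    def answer_with_citations(query, corpus, llm):
        docs = retrieve(query, corpus)
        context = "\n".join(f"[{d['url']}] {d['text']}" for d in docs)
        prompt = f"Answer using only the sources below and cite their URLs.\n{context}\n\nQ: {query}\nA:"
        return llm(prompt), [d["url"] for d in docs]        # answer plus the URLs actually retrieved

    corpus = [
        {"url": "https://example.com/tcp", "text": "TCP is a reliable connection oriented transport protocol"},
        {"url": "https://example.com/udp", "text": "UDP is a connectionless transport protocol with no delivery guarantees"},
    ]
    stub_llm = lambda prompt: "(a real completion call would go here)"
    print(answer_with_citations("is TCP reliable", corpus, stub_llm))

The unassisted base model has no equivalent of the docs list, which is why the same trick doesn't yield citations for answers generated purely from its weights.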


That’s not the same thing. Perplexity is using an already-trained LLM to read those sources and synthesise a new result from them. This allows them to cite the sources used for generation.

LLM training sees these documents without context; it doesn’t know where they came from, and any such attribution would become part of the thing it’s trying to mimic.

It’s still largely an unsolved problem.


Presumably, if a passage of any significant length is cited verbatim (or almost verbatim), there would have been a way to track that source through the weights.

The issue of replicating a style is probably more difficult.


> Presumably, if a passage of any significant length is cited verbatim (or almost verbatim), there would have been a way to track that source through the weights.

Figure this out and you get to choose which AI lab you want to make seven figures at. It's a really difficult problem.


It’s likely first and foremost a resource problem. “How much different would the output be if that text hadn’t been part of the training data” can _in principle_ be answered by instead of training one model, training N models where N is the number of texts in the training data, omitting text i from the training data of model i, and then when using the model(s), run all N models in parallel and apply some distance metric on their outputs. In case of a verbatim quote, at least one of the models will stand out in that comparison, allowing to infer the source. The difficulty would be in finding a way to do something along those lines efficiently enough to be practical.
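
A toy, runnable version of that leave-one-out idea, with a trivially cheap "model" (a word-frequency table) standing in for an LLM, since retraining a real model N times is exactly what makes this impractical:

    from collections import Counter

    docs = {
        "nyt_article": "the quick brown fox jumps over the lazy dog near the river bank",
        "blog_post":   "a slow green turtle walks under the busy bridge at noon",
        "recipe":      "mix the flour with sugar and bake at two hundred degrees",
    }

    def train(texts):
        return Counter(w for t in texts for w in t.split())

    def score(model, text):
        total = sum(model.values()) or 1
        return sum(model[w] for w in text.split()) / total   # crude stand-in for likelihood

    full = train(docs.values())
    quote = "the quick brown fox jumps over the lazy dog"

    for held_out in docs:
        loo = train(v for k, v in docs.items() if k != held_out)
        drop = score(full, quote) - score(loo, quote)
        print(held_out, round(drop, 3))      # the largest drop points at the source document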


Each LLM costs ($10-100) million to train, times billions of training documents ~= $100 quadrillion dollars, so that is unfortunately out of reach of most countries.


> Figure this out and you get to choose which AI lab you want to make seven figures at. It's a really difficult problem.

It doesn't have to be perfect to be helpful, and even something that is very imperfect would at least send the signal that model-owners give a shit about attribution in general.

Given a specific output, it might be hard to say which sections of the very large weighted network were tickled during the output, and what inputs were used to build that section of the network. But this level of "citation resolution" is not always what people are necessarily interested in. If an LLM is giving medical advice, I might want to at least know whether it's reading medical journals or facebook posts. If it's political advice/summary/synthesis, it might be relevant to know how much it's been reading Marx vs Lenin or whatever. Pin-pointing original paragraphs as sources would be great, but for most models it's not like there's anything that's very clear about the input datasets.

EDIT: Building on this a bit, a lot of people are really worried about AI "poisoning the well" such that they are retraining on content generated by other AIs so that algorithmic feeds can trash the next-gen internet even worse than the current one. This shows that attribution-sourcing even at the basic level of "only human generated content is used in this model" can be useful and confidence-inspiring.


Why do you expect an AI to cite it's source? Humans are allowed to use and profit on knowledge they've learned from any and all sources without having to mention or even remember their sources.

Yes, we all agree that it's better if they do remember and mention their sources, but we don't sue them for failing to do so.


Quite simply, if you're stating things authoritatively, then you should have a source.


Do you have a source for this claim?


I think the gap between attributable knowledge and absorbed knowledge is pretty difficult to bridge. For news stuff, if I read the same general story from NYT and LA Times and WaPo then I'll start to get confused about which bit I got from which publication. In some ways, being able to verbatim quote long passages is a failure to generalize that should be fixed rather than reinforced.

Though the other way to do it is to clearly document the training data as a whole, even if you can't cite a specific entry in it for a particular bit of generated output. It should get useless quickly though as you'd eventually have one big citation -- "The Internet"


If you're going to consider training ai as fair use, you'll have all kinds of different people with different skill levels training ais that work in different ways on the corpus.

Not all of them will have the capability to cite a source, and plenty of them won't have it make sense to cite a source.

Eg. Suppose I train a regression that guesses how many words will be in a book.

Which book do I cite when I do an inference? All of them?


Regression is a good analogy of the problem here. If you found a line of best fit for some datapoints, how would you get back the original datapoints, from the line?

Now imagine terabytes worth of datapoints, and thousands of dimensions rather than two.
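
A small numeric illustration: two different sets of points can produce exactly the same fitted line, so the fitted parameters alone cannot tell you which points were used.

    import numpy as np

    points_a = ([0, 1, 2], [0.0, 1.0, 2.0])     # lie exactly on y = x
    points_b = ([0, 1, 2], [-0.5, 2.0, 1.5])    # scattered, yet same least-squares fit

    print(np.polyfit(*points_a, 1))   # ~[1. 0.]
    print(np.polyfit(*points_b, 1))   # ~[1. 0.] -- identical line; the original points are gone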


Any citation would be a good start.

For complex subjects, I'm sure the citation page would be large, and a count would be displayed demonstrating the depth of the subject[3].

This is how Google did it with search results in the early days[1]. Most probable to least probable, in terms of the relevancy of the page. With a count of all possible results [2].

The same attempt should be made for citations.


Ok, now please cite the source of this comment you just made. It's okay if the citation list is large, just list your citations from most probable to least probable.



This is not answering the GP question and does not count as a satisfactory ranked citation list. The first one is particularly dubious. Also you didn’t clarify which statement was based on which citation. I didn’t see “dog” in your text.

To help understand the complexity of an LLM, consider that these models typically hold about 10,000x fewer parameters than the total characters in the training data. If one instructs the LLM to search the web and find relevant citations, it might obey this command, but that will not be the source of how it formed the opinions it holds in order to produce its output.


You mean 10,000x fewer parameters? In other words, only 1 parameter for every 10,000 characters of input?

Yeah, good luck embedding citations into that. Everyone here saying it's easy needs to go earn their 7 figure comp at an AI company instead of wasting their time educating us dummies.


If you ask the AI to cite its sources, it will. It will hallucinate some of them, but in the last few months it's gotten really good at sending me to the right web page or Amazon book link for its sources.

Thing is though, if you look at the prompts they used to elicit the material, the prompt was already citing the NYTimes and its articles by name.


> Why can't AI at least cite its source?

Because AI models aren't databases.


Anyone in Open Source or with common sense would agree that this is the absolute minimum that the models should be doing. Good comment.


"Why can't AI at least cite its source" each article seen alters the weights a tiny, non-human understandable amount. it doesn't have a source, unless you think of the whole humongous corpus that it is trained on


that just sounds like "we didn't even try to build those systems in that way, and we're all out of ideas, so it basically will never work"

which is really just a very, very common story with ai problems, be it sources/citations/licenses/usage tracking/etc., it's all just 'too complex if not impossible to solve', which just seems like a facade for intentionally ignoring those problems for benefit at this point. those problems definitely exist, why not try to solve them? because well...actually trying to solve them would entail having to use data properly and pay creators, and that'd just cut into bottom line. the point is free data use without having to pay, so why would they try to ruin that for themselves?


Just a question, do you remember a source for all the knowledge in your mind, or did you at least try to remember?


a computer isn't a human. aren't computers good at storing data? why can't they just store that data? they literally have sources in datasets. why can't they just reference those sources?

human analogies are cute, but they're completely irrelevant. it doesn't change that it's specifically about computers, and doesn't change or excuse how computers work.


Yes, computers are good at storing data. But there's a big difference between information stored in a database and information stored in a neural network. The former is well defined, the latter is a giant list of numbers - literally a black box. So in this case, the analogy to a human brain is fairly on-point because just as you can't perfectly cite every source that comes out of your (black box) brain, other black boxes have similar challenges.


Can't have your cake and eat it too.

1. If you run different software (LLM), install different hardware (GPU/TPU), and use it differently (natural language), to the point that in many ways it's a different kind of machine; does it actually surprise you that it works differently? There's definitely computer components in there somewhere, but they're combined in a somewhat different way. Just like you can use the same lego bricks to make either a house or a space-ship, even though it's the same bricks. For one: GPT-4 is not quite going to display a windows desktop for you (right-this-minute at least)

2. Comparing to humans is fine. Else by similar logic a robot arm is not a human arm, and thus should not be capable of gripping things and picking them up. Obviously that logic has a flaw somewhere. A more useful logic might be to compare eg. Human arm, Gorilla arm, Robot arm, they're all arms!


OK, let's say you were given a source for an LLM output such as "Common Crawl/reddit/1000000 books collection". Would this be useful? Probably not. Or do you want the chat system to operate magnitudes slower so it can search the petabytes of sources and warn of similarities constantly for every sentence? That's obviously a huge waste of resources; it should probably be done by users as appropriate for their use case, as these NY Times journalists were easily able to do for their use case of "specifically crafted prompts to output NY Times text".


You'd effectively be asking it to cite sources on why the next token is statistically likely. Then it will hallucinate anyway and tell you the NYT said so. You might think you want this, but you don't.


The analogy to a database is also irrelevant. LLMs aren’t databases.


LLMs are not databases. There is no "citation" associated with a specific query, any more than you can cite the source of the comment you just made.


That's fine. Solve it a different way.

OpenAI doesn't just get to steal work and then say "sorry, not possible" and shrug it off.

The NYTimes should be suing.


And, God willing, if there is any justice in the courts, the NYTimes will lose this frivolous lawsuit.

Copyright law is a prehistoric and corrupt system that has been about protecting the profit margins of Disney and Warner Bros rather than protecting real art and science for living memory. Unless copy/paste superhero movies are your definition of art I suppose.

Unfortunately it seems like judges and the general public are so clueless as to how this technology works it might get regulated into the ground by uneducated people before it ever has a chance to take off. All so we can protect endless listicle factories. What a shame.


> Copyright law is a prehistoric and corrupt system that has been about protecting the profit margins of Disney and Warner Bros rather than protecting real art

These types of arguments miss the mark entirely imho. First and foremost, not every instance of copyrighted creation involves a giant corporation. Second, what you are arguing against is the unfair leverage corporations have when negotiating a deal with a rising artist.


Clearly, "theft" is an analogy here (since we can't get it to fit exactly), but we can work with it.

You are correct, if I were to steal something, surely I can be made to give it back to you. However, if I haven't actually stolen it, there is nothing for me to return.

By analogy, if OpenAI copied data from the NYT, they should be able to at least provide a reference. But if they don't actually have a proper copy of it, they cannot.


Really? Solve it a different way? Do you realize the kind of tech we are talking about here?

This kind of mentality would have stopped the internet from existing. After all, it has been an absolute copyright nightmare, has it not?

If that's what copyright does then we are better without it.


You sound like one of those government people who demand encryption that has government backdoors but is perfectly safe from attackers.

When told it is impossible, they just go "nerd harder", as if demanding it will make it happen.


When all the legal precedents we have are about humans, human analogies are incredibly relevant.


There are a hundred years of legal precedent in the realm of technology upsetting the assumptions of copyright law. Humans use tools: radios, xerox machines, home video tape. AI is another tool that just makes making copies way easier. The law will be updated, hopefully without comparing an LLM to a man.


I'm sorry if this is too callous, but if you don't understand what you are talking about you should first familiarize yourself with the problem, then make claims about what should be done.

It would be great if we could tell specifically how something like ChatGPT creates its output; it would be great for research, so it's not like there is no interest in it, but it's just not an easy thing to do. It's more "Where did you get your identity from?" than "Who's the author of that book?". You might think "But sometimes what the machine gives CAN literally be the answer to 'Who is the author of that book?'", but even in those cases the answer is not restricted to the work alone; there is an entire background that makes it understand that that thing is what you want.


No, but I'm a human and treating computers like humans is a huge mistake that we shouldn't make.


Treating computers like humans in this one particular way is very appropriate. It is the only way that LLMs can synthesize a worldview when their training data is many thousands of times larger than their number of parameters. Imagine scaling up the total data by another factor of a million in a few years. There is no current technology to store that much info, but we can easily train large neural nets that can recreate the essence of it, just as we traditionally trained humans to recall ideas.


What makes you think AI researchers (including the big labs like OpenAI and Anthropic) aren't trying to solve these problems?


The solutions haven't arrived. Neither have changes in lieu of having solutions. "Trying" isn't an actual, present, functional change, and it just gets passed around as an excuse for companies to keep doing whatever they're doing.


Please recall how much the world changed in just the last year. What would be your expected timescale for the solution of this particular problem and why is it more important than instilling models with the ability to logically plan and answer correctly?


The timeline for LLMs and image generation has been 6+ years. It is not a thing that "arrived just this year and is only just changing". It's been in development for a long time. And yet.


So why can my employer's implementation of Azure ChatGPT on our document systems successfully cite its source documents?


Because the model proper wasn’t trained on those documents, it’s just RAG being employed with the documents as external sources. It’s a fundamentally different setup.
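
For the curious, a minimal sketch of that retrieval-augmented pattern (not Azure's actual implementation; the search_index object, its methods, and the model name are all hypothetical placeholders): fetch matching internal documents first, then hand them to the model as context, so any "citation" is just the ID of a retrieved document rather than something recalled from training.

    from openai import OpenAI

    client = OpenAI()

    def answer_with_citations(question: str, search_index) -> str:
        # search_index stands in for whatever document store you run;
        # any keyword or vector search over your own files works here.
        docs = search_index.search(question, top_k=3)
        context = "\n\n".join(f"[{d.doc_id}] {d.text}" for d in docs)
        response = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Answer using only the documents below and cite the "
                            "[doc_id] of each one you rely on.\n\n" + context},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

The model never needs to know where its training text came from; the "sources" are supplied at query time, which is why this works for internal document systems but not for the base model's training corpus.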


My understanding is that this lawsuit is about the training corpus. This is on the level of asking it to cite its sources for a/an/the.


We're trying to solve AGI but can't solve sources/citations?


There's a few levels to this...

Would it be more rigorous for AI to cite its sources? Sure, but the same could be said for humans too. Wikipedia editors, scholars, and scientists all still struggle with proper citations. NYT itself has been caught plagiarizing[1].

But that doesn't really solve the underlying issue here: That our copyright laws and monetization models predate the Internet and the ease of sharing/paywall bypass/piracy. The models that made sense when publishing was difficult and required capital-intensive presses don't necessarily make sense in the copy and paste world of today. Whether it's journalists or academics fighting over scraps just for first authorship (while some random web dev makes 3x more money on ad tracking), it's just not a long-term sustainable way to run an information economy.

I'd also argue that attribution isn't really that important to most people to begin with. Stuff, real and fake, gets shared on social media all the time with limited fact-checking (for better or worse). In general, people don't speak in a rigorous scholarly way. And people are often wrong, with faulty memories, or even incentivized falsehoods. Our primate brains aren't constantly in fact-checking mode and we respond better to emotional, plot-driven narratives than cold statistics. There are some intellectuals who really care deeply about attributions, but most humans won't.

Taken the above into consideration:

1) Useful AI does not necessarily require attribution

2) AI piracy is just a continuation of decades of digital piracy, and the solutions that didn't work in the 1990s and 2000s still won't work against AI

3) We need some better way to fund human creativity, especially as it gets more and more commoditized

4) This is going to happen with or without us. Cat's outta the bag.

I don't think using old IP law to hold us back is really going to solve anything in the long term. Yes, it'd be classy of OpenAI to pay everyone it sourced from, but long term that doesn't matter. Creativity has always been shared and copied and imitated and stolen, the only question is whether the creators get compensated (or even enriched) in the meantime. Sometimes yes, sometimes no, but it happens regardless. There'll always be noncommercial posts by the billions of people who don't care if AI, or a search engine, or Twitter, or whoever, profits off them.

If we get anywhere remotely close to AGI, a lot of this won't matter. Our entire economic and legal systems will have to be redone. Maybe we can finally get rid of the capitalist and lawyer classes. Or they'll probably just further enslave the rest of us with the help of their robo-bros, giving AI more rights than poor people.

But either way, this is way bigger than the economics of 19th-century newspapers...

[1] https://en.wikipedia.org/wiki/Jayson_Blair#Plagiarism_and_fa...


Can you imagine spending decades of your life studying skin cancer, only to have some $20/month ChatGPT index your latest findings and generically spit them out to some subpar researcher:

"Here's how I would cure melanoma!" followed by your detailed findings. Zero mention of you.

F-that. Attribution, as best they can, is the least OpenAI can do as a service to humanity. It's a nod to all content creators that they have built their business off of.

Claiming knowledge without even acknowledging potential sources is gross. Solve it OpenAI.


Can you imagine spending decades of your life studying antibiotics, only to have an AI graph neural network beat you to the punch by conceiving an entirely new class of antibiotics (the first in 60 years) and then getting published in Nature?

https://www.nature.com/articles/d41586-023-03668-1


It looks like the published paper managed to include plenty of citations.

https://dspace.mit.edu/handle/1721.1/153216

As it should be.


As you already know, yet are being intentionally daft about: they didn't use an LLM trained on copyrighted material. There's a canyon of difference between leveraging AI as a tool, and AI leveraging you as a tool.

LLMs have, to my knowledge, made zero significant novel scientific discoveries. Much like crypto, they're a failure of technology to meaningfully move humanity forward; their only accomplishment is to parrot and remix information they've been trained on, which does have some interesting applications that have made Microsoft billions of dollars over the past 12 months, but let's drop the whole "they're going to save humanity and must be protected at any cost" charade. They're not AGI, and because no one has even a mote of dust of a clue as to what it will take to make AGI, it's not remotely tenable to assert that they're even a stepping stone toward it.


If the future AI can indeed cure disease my mission of working in drug discovery will be complete. I’d much rather help cure people (my brother died of melanoma) than protect any patent rights or copyrighted text.


The point is if you stop giving proper credit, people stop publicly publishing.

Would you keep publishing articles if five people immediately stole the content and put it up on their site, claiming ownership of your research? Doubtful.


Why do you think this? The entirety of Wikipedia is invisibly credited unless you go into the edit history. Most open source projects have pseudonymous contributors. People have written and will continue to write with or without credit.

Credit in academia is more the exception to the rule, and it's that cutthroat industry that needs a better, more cooperative system.


If someone paid me to study cancer and I discovered a cure, I'd give it away with or without credit. Who cares?

If someone takes my software and uses it, cool. If they credit me, cool. If they don't, oh well. I'd still code.

Not everything needs to be ego driven. As long as the cancer researcher (and the future robots working alongside them) can make a living, I really don't think it matters whether they get credit outside their niches.

I have no idea who invented the CT scanner, X-ray machines, the hypodermic needle, etc. I don't really care. It doesn't really do me any good to associate Edison with light bulbs either, especially when LEDs are so much better now. I have no idea who designs the cars I drive. I go out of my way to avoid cults of personality like Tesla.

There's 8 billion of us. We all need to make a living. We don't need to be famous.


You sound like you're trying to be cool, or karma farming?

> I have no idea who invented the CT scanner, X-ray machines, the hypodermic needle, etc. I don't really care.

Maybe you should care, because those things didn't fall out of the sky, and someone sure as shit got paid to develop and build them. Your copy-and-pasted code is worth less; a CT scanner isn't.


Your incentives are not everyone else's incentives.

If someone chooses to dedicate their life to a particular domain, sacrifices through hard work, and makes hard-earned breakthroughs, then they get to dictate how their work will be utilized.

Sure, you can give it away. Your choice. Be anonymous. Your choice.

But you don't get to decide for them.

And their work certainly doesn't deserve to be stolen by an inhumane, non-acknowledging machine.


>Claiming knowledge without even acknowledging potential sources is gross. Solve it OpenAI.

I'm sorry, but pretty much nobody does this. There is no "And these books are how I learned to write like this" after each text. There is no "Thank you, Pythagoras!" after using the theorem. Generally you want sources, yes, but for verification and as a way to signal reliability.

Specifically academics and researchers do this, yes. Pretty much nobody else.


> I hope this results in Fair Use being expanded to cover AI training.

Couldn't disagree more strongly, and I hope the outcome is the exact opposite. I think we've already started to see the severe negative consequences when the lion's share of the profits get sucked up by very, very few entities (e.g. we used to have tons of local papers and other entities that made money through advertising, now Google and Facebook, and to a smaller extent Amazon, suck up the majority of that revenue). The idea that everyone else gets to toil to make the content but all the profits flow to the companies with the best AI tech is not a future that's going to end with the utopia vision AI boosters think it will.


Trying to prohibit this usage of information would not help prevent centralization of power and profit.

All it would do is momentarily slow AI progress (which is fine), and allow OpenAI et al to pull the ladder up behind them (which fuels centralization of power and profit).

By what mechanism do you think your desired outcome would prevent centralization of profit to the players who are already the largest?


I'm not saying copyright is without problems (e.g. there is no reason I think its protection should be as long as it is), but I think the opposite, where the incentive to create new content (especially in the case of news reporting) is completely killed because someone else gets to vacuum up all the profits, is worse. I mean, existing copyright does protect tons of independent writers, artists, etc. and prevents all of the profits from their output from being "sucked up" by a few entities.

More critically, while fair use decisions are famously a judgement call, I think OpenAI will lose this based on the "effect of the fair use on the potential market" of the original content test. From https://fairuse.stanford.edu/overview/fair-use/four-factors/ :

> Another important fair use factor is whether your use deprives the copyright owner of income or undermines a new or potential market for the copyrighted work. Depriving a copyright owner of income is very likely to trigger a lawsuit. This is true even if you are not competing directly with the original work.

> For example, in one case an artist used a copyrighted photograph without permission as the basis for wood sculptures, copying all elements of the photo. The artist earned several hundred thousand dollars selling the sculptures. When the photographer sued, the artist claimed his sculptures were a fair use because the photographer would never have considered making sculptures. The court disagreed, stating that it did not matter whether the photographer had considered making sculptures; what mattered was that a potential market for sculptures of the photograph existed. (Rogers v. Koons, 960 F.2d 301 (2d Cir. 1992).)

and especially

> “The economic effect of a parody with which we are concerned is not its potential to destroy or diminish the market for the original—any bad review can have that effect—but whether it fulfills the demand for the original.” (Fisher v. Dees, 794 F.2d 432 (9th Cir. 1986).)

The "whether it fulfills the demand of the original" is clearly where NYTimes has the best argument.


> Trying to prohibit this usage of information

It's not trying to prohibit. If they want to use copyrighted material, they should have to pay for it like anyone else would.

> prevent centralization of profit to the players who are already the largest?

Having to destroy the infringing models altogether on top of retroactively compensating all infringed rightsholders would probably take the incumbents down a few pegs and level the playing field somewhat, albeit temporarily.

They'd have to learn how to run their business legally alongside everyone else, while saddled with dealing with an appropriately existential monetary debt.


So you want all the profit to be sucked up by the three companies that can afford to make deals with rights holders to slurp up all their content?

Making the process for training AI require an army of lawyers and industry connections will have the opposite effect than you intend.


Scam Altman is moatmaxxing: making a deal with Springer, setting up a licensing market everyone has to abide by, having to get an AGI license to purchase a 4090.


Which dozen outlets can replace the New York Times overnight? I will stipulate that the NYT isn’t worthy of historic preservation if it’s become obsolete — but which dozen outlets can replace it?

Wouldn’t those dozen outlets suffer the same harms of producing original content, costing time and talent, and while having a significant portion of the benefit accruing to downstream AI companies?

If most of the benefit of producing original content accrues to the AI firms, won’t original content stop being produced?

If original content stops being produced, how will AI models get better in the future?


> and while having a significant portion of the benefit accruing to downstream AI companies

The main beneficiaries are not AI companies but AI users, who get tailored answers and help on demand. For OpenAI all tokens cost the same.

BTW, I like to play a game: take a hefty chunk of text from this page (or a twitter debate) and ask "Write a 1000 word long, textbook quality article based off this text". You will be surprised how nice and grounded it comes out.
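
If you'd rather script that game than paste into the chat UI, here's a rough sketch with the openai Python client (the model name is just a placeholder):

    from openai import OpenAI

    client = OpenAI()

    # Paste whatever chunk of this page (or a twitter debate) you like here.
    source_text = "...a hefty chunk of discussion text..."

    prompt = ("Write a 1000 word long, textbook quality article "
              "based off this text:\n\n" + source_text)

    response = client.chat.completions.create(
        model="gpt-4",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)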


Just pick the top 12 articles/publishers out of a month of Google News, doesn't really matter. Most readers probably can't tell them apart anyway.

Yes, all those outlets will suffer the same harms. They have been for decades. That's why there's so few remaining. Most are consolidated and produce worthless drivel now. Their business model doesn't really work in the modern era.

Thankfully, people have and will continue to produce content even if much of it gets stolen -- as has happened for decades, if not millennia, before AI.

If anything what we need is a better way to fund human creative endeavors not dependent on pay-per-view. That's got nothing to do with AI; AI just speeds up a process of decay that has been going on forever.


The way I see it, if the NYT goes under (one of the biggest newspapers in the world), all similar outlets also go under. Major publishers, both of fiction and non-fiction, as well as images, video, and all other creative content, may also go under. Hence, there is no more (reliable) training data.


I'm not sure whether that would even be a net loss, TBH. So much commercial media is crap, maybe it would be better for the profit motive to be removed? On the fiction side, there's plenty of fan-fic and indie productions. On the nonfiction side, many indie creators produce better content these days than the big media outlets do. And there still might be room for premium investigative stories done either by a few consolidated wire outlets (Reuters/APNews) or niche publishers (The Information, 404 Media, etc.).

And then there's all the run-of-the-mill small-town journalism that AI would probably be even better at than human reporters: all the sports stories, the city council meetings, the environmental reviews...

If AI makes commercial content publishing unviable, that might actually cut down on all the SEO spam and make the internet smaller and more local again, which would be a good thing IMO.


Your ability to believe you've got a clear read on the challenges, solutions, and outcomes from AI for the social/civil/corporate mess that is media, across small to large markets, and chalk it all up to "silly IP battles", is the daily reminder I need of why it was so wrong to give tech the driver's seat from ~2010 onward.


I read your post several times but still am not sure if I'm reading it correctly. Are you saying the media landscape is more complex than AI can solve?

If so, sure. I wasn't saying that. By "silly IP battles", I meant old guard media companies trying to sue AI out of existence just to defend their IP rather than trying to innovate. Not that different from what we saw with the RIAA and Napster. Somehow the music industry survived and there are more indie artists being discovered all the time.

I don't think this is so much a battle of OpenAI vs NYT but whether copyright law has outlived its usefulness. I think so.

If I misunderstood your reply completely, I apologize.


What I'm saying is that tech displays a tremendous amount of hubris in its ability to wrap complex systems in clean tech protocols, ask/pressure/demand users to switch to the tech version of the complex system, and then deny or ignore that its innovation, at a minimum, comes with a rash of negative side effects caused specifically by the inexact or deliberately mangled version in the technical protocol.

Ie:

- social relations -> social networks

- customer service -> chatbots and Jira

- media -> AI news, if the silly IP battles get out of the way.

- residential housing and vacations -> home swap markets

- jobs -> gig jobs, minus the benefits, plus an algorithm for a boss

I'm not sure how many more industries tech has to wade into, disrupt, create intense negative externalities in (if you don't have equity in the companies), leave, and repeat, before industries finally get protective, like with this lawsuit.


Great. I will start a company to generate training data then. I will hire all those journalists. I won't make the content public. Instead I will charge OpenAI/Tesla/Anthropic millions of dollars to give them access to the content.

Can I apply for YC with this idea?


I know utilitarianism is a popular moral theory in hacker circles, but is it really appropriate to dispense with any other notion of justice?

I don’t mean to go off on too deep of a tangent, but if one person’s (or even many people’s) idea of what’s good for humanity is the only consideration for what’s just, it seems clear that the result would be complete chaos.

As it stands, it doesn’t seem to be an “either or” choice. Tech companies have a lot of money. It seems to me that an agreement that’s fundamentally sustainable and fits shared notions of fairness would probably involve some degree of payment. The alternative would be that these resources become inaccessible for LLM training, because they would need to put up a wall or they would go out of business.


I don't know that "absolute utilitarianism", if such a thing could even exist, would make a sound moral framework; that sounds too much like a "tyranny of the majority" situation. Tech companies shouldn't make the rules. And they shouldn't be allowed to just do whatever they want. However, this isn't that. This is just a debate over intellectual property and copyright law.

In this case it's the NYT vs OpenAI, last decade it was the RIAA vs Napster.

I'm not much of a libertarian (in fact, I'd prefer a better central government), but I also don't believe IP should have as much protection as it does. I think copyright law is in need of a complete rewrite, and yes, utilitarianism and public use would be part of the consideration. If it were up to me I'd scrap the idea of private intellectual property altogether and publicly fund creative works and release them into the public domain, similar to how we treat creative works of the federal government: https://en.wikipedia.org/wiki/Copyright_status_of_works_by_t...

Rather than capitalists competing to own ideas, grant-seekers would seek funding to pursue and further develop their ideas. No one would get rich off such a system, which is a side benefit in my eyes.


> we end up handicapping probably the single most important development in human history just to protect some ancient newspaper

Single most important development in human history? Are you serious?


If the NYT goes under, why would its replacement fare any better?


They'd probably have a different business model, like selling clickbait articles written by AI with sex and controversy galore.

I'm not saying AI is better for journalism than NYT reporters, just that it's more important.

Journalism has been in trouble for decades, sadly -- and I say that as a journalism minor in college. Trump gave the papers a brief respite, but the industry continues to die off, consolidate, etc. We probably need a different business model altogether. My vote is just for public funding with independent watchdogs, i.e. states give counties money to operate newspapers with citizen watchdog groups/boards. Maaaaybe there's room for "premium" niche news like 404 Media/The Information/Foreign Affairs/National Review/etc., but that remains to be seen. If the NYT paywall doesn't keep them alive, I doubt this lawsuit will.


News media like the NYT, Fox, etc. are tools for brainwashing the public at scale on behalf of the elite. This is why you see all the newspapers carry some political ideology. If they were reporting truth and not opinion, they wouldn't need to lean one way or the other. Also, you never see journalists reporting against their own publication.

Humanity is better off without these mass brainwashing systems.

Millions of independent journalists will be better outcome for humanity.


Honestly, this sounds like a conspiracy theory and/or an attempt to deflect criticism from the AI companies.


Ohh. You think being the owner of a company whose newspaper is read by hundreds of millions of people every day doesn't put you in a position of power to control society?


I think I have better things to do than parse vague innuendo like that.


There is no conspiracy, that's the neat part, it's just how the system itself works.

Media survives through advertising. Those who advertise dictate what gets shown and what doesn't, since if something inconvenient for them gets shown, they might not want to advertise there anymore, which means less money. It's the exact same thing that happens online, it's just more evident online than in traditional media.

How come, even before Oct 7, Europe in general sided more with Palestine than with Israel, whereas it's the opposite for the US? Simple: Israel does a whole lot of lobbying in the US, which skews information in its favor. Calling this "brainwashing" is hyperbolic, but there is some truth to it.


I hope this results in OpenAI's code being released to everyone. This is way more important to humanity's future than any single software company. If OpenAI goes under, a dozen other outfits can replace them.


That'd be great!! I'd love it for their models to be open-sourced and replaced by a community effort, like WikiAI or whatever.


> if NYT goes under a dozen similar outlets can replace them overnight

Not when there's no money in journalism because the generative AIs immediately steal all content. If the NYT goes under, no one will be willing to start a news business, as everyone will see it's a money loser.


How does AI compete with journalism? AI doesn't do investigative reporting, AI can't even observe the world or send out reporters.

Which part of journalism is AI going to impact most? Opinion pieces that contain no new information? Summarizing past events?


AI certainly isn't a replacement for journalism, but that doesn't mean journalism will continue to exist if no one pays for it. If everyone gets their news from ChatGPT or the like, there will be no investigative reporting. We're already beginning to see this, with most people reading the Google/Facebook blurbs instead of clicking the link and giving ad money, let alone paying.


> If everyone gets their news from chatGPT

But I've just explained that ChatGPT can't actually produce news articles. I can't ask ChatGPT what happened today, and if I could it would be because a journalist went out and told ChatGPT what happened.


So at first ChatGPT will copy journalists. Then journalists will stop working because nobody pays them. Then there will be no news. Some people may look at that situation and decide to start a new news business but that business will fail because ChatGPT will immediately rip it off. The end game is just no news other than volunteers.


> So at first ChatGPT will copy journalists.

You still literally have not explained how this works. ChatGPT could write a news article, but it's not going to actively discover new social phenomena or interview people on the street. Niche journalism will continue having demand for the sole reason that AI can't reliably surface new and interesting content.

So... again, how does a pre-trained transformer model scoop a journalist's investigation?

> Then journalists will stop working because nobody pays them.

How is that any different than the status-quo on the internet? The cost of quality information has been declining long before AI existed. Thousands of news publications have gone out of business or been bought out since the dawn of the internet, before ChatGPT was even a household name. Since you haven't really identified what makes AI unique in this situation, it feels like you're conflating the general declining demand for journalism with AI FOMO.


AI labs are working on, and largely already have, generative AI that can be actively updated. The generative AI scoops real journalists' stories by watching their feeds. This isn't very different from the current status quo; it's just a continuation of an already shitty situation for news organizations. If their revenue decreases even more than it already has, they will cease to exist. Niche journalism barely has any demand today; it won't take much more reduction in demand for it not to be worth the cost to produce. Just a few more people using big tech products as their news feed instead of the news organizations themselves is all it would take.

You can say that the people getting their news from the tech products will switch to paying news organizations in some way if the news starts to disappear, but I highly doubt it, seeing how people treat news today. And if that does happen, they'll switch back again to the AI products, as the centralization they can provide is valuable.


Updated how? By what? Who is going out and investigating the world to write about? An AI does not have LEGS; it cannot go outside and talk to someone and interview them, and it can't attend a press conference without human assistance.

You have not at all explained how an AI is going to somehow write a news post about something that has just happened.


Without money there will be no one investigating, and there will be no news. If someone creates news, it will be immediately ripped off, so the only stable state here is no news at all.


How is that an AI problem? How is it even a problem in the first place?


omg dude how HOW are you not understanding?


We're not just beginning to see it, it's already happened. It was enabled by digitized information, then amplified by the networking of the internet. The value of fresh information today is the price of a Google refresh, which for most people is effectively nothing. AI doesn't change that equation, and I'd argue its overall impact on journalism will be less harmful than an ad-optimized economy or even the mere existence of YouTube.

Quality journalism hasn't had a meaningful source of funding for a while, now. If AI does end up replacing honest-to-goodness investigative reporting, it'll be for the same reason the internet replaced the newspaper.


Sadly, opinion pieces are what drives the news economy these days. Columnists/commentators, in effect, subsidize the hard news (at those venues that even bother to produce the latter). Filling their prime time hours with opinion journalism was the trick that Fox News discovered to become wildly successful.


The NYT has been dying a slow death since long before ChatGPT came along.


Why shouldn't the creators of the training content get anything for their efforts? With some guiderails in place to establish what is fair compensation, Fair Use can remain as-is.


The issue as I see it is that every bit of data that the model ingested in training has affected what the model _is_ and therefore every token of output from the model has benefited from every token of input. When you receive anything from an LLM, you are essentially receiving a customized digest of all the training data. The second issue is that it takes an enormous amount of training data to train a model. In order to enable users to extract ‘anything’ from the model, the model has to be trained on ‘everything’. So I think these models should be looked at as public goods that consume everything and can produce anything. To have to keep a paper trail on the ‘everything’ part (the input) and send a continuous little trickle of capital to all of the sources is missing the point. That’s like a person having to pay a little bit of money to all of their teachers and mentors and everyone they’ve learned from every time they benefit from what they learned.

OpenAI isn’t marching into the online news space and posting NY Times content verbatim in an effort to steal market share from the NY Times. OpenAI is in the business of turning ‘everything’ (input tokens) into ‘anything’ (output tokens). If someone manages to extract a preserved chunk of input tokens, that’s more like an interesting edge case of the model. It’s not what the model is in the business of doing.

Edit: typo


What's wrong with paying copyright holders, then? If OpenAI's models are so much more valuable than the sum of the individual inputs' values, why can't the company profit off that margin?

>That’s like a person having to pay a little bit of money to all of their teachers and mentors and everyone they’ve learned from every time they benefit from what they learned.

I could argue that public school teachers are paid by previous students. Not always the ones they taught, but still. But really, this is a very new facet of copyright law. It's a stretch to compare it with existing conventions, and really off to anthropomorphize LLMs by equating them to human students.


> What's wrong with paying copyright holders, then?

There’s nothing wrong with it. But it would make it vastly more cumbersome to build training sets in the current environment.

If the law permits producers of content to easily add extra clauses to their content licenses that say “an LLM must pay us to train on this content”, you can bet that that practice would be near-universally adopted because everyone wants to be an owner. Almost all content would become AI-unfriendly. Almost every token of fresh training content would now potentially require negotiation, royalty contracts, legal due diligence, etc. It’s not like OpenAI gets their data from a few sources. We’re talking about millions of sources, trillions of tokens, from all over the internet — forums, blogs, random sites, repositories, outlets. If OpenAI were suddenly forced to do a business deal with every source of training data, I think that would frankly kill the whole thing, not just slow it down.

It would be like ordering Google to do a business deal with the webmaster of every site they index. Different business, but the scale of the dilemma is the same. These companies crawl the whole internet.


Everyone learns from papers. That's the point of them, isn't it? Except we pay, what, $4 per Sunday paper or $10/mo for the digital edition? Why should a robot have to pay much more just because it's better at absorbing information?


Because the issue isn’t the intake, it’s the output, where your analogy breaks down. If you could clone the brain of someone who was “trained” on decades of NYT and could reproduce its information on demand at scale, we’d be discussing similar issues.


Your analogy doesn't make sense either.

If we could clone the brain of someone I hardly think we'd be discussing their vast knowledge of something so insignificant as the NYT. I don't think we should care that much about an AI's vast knowledge of the NYT either or why it matters.

If all these journalism companies don't want to provide the content for free they're perfectly capable of throwing the entire website behind a login screen. Twitter was doing it at one point. In a similar vein, I have no idea why newspapers are complaining about readership while also paywalling everything in sight. How exactly do they want or expect to be paid?


Most of the NYT is behind a signin screen; the classic "you can read the first paragraph of the page but pay us to see more" thing.

There is significant evidence (220,000 pages worth) in their lawsuit that ChatGPT was trained on text beyond that paywall.


That would be a funny settlement -- "OK, so $10/month, and we'll go back to 1950 to be fair, so that'll be .... $8760"


> Why shouldn't the creators of the training content get anything for their efforts?

Well, they didn't charge for it, right? They're retroactively asking for money, but they could have just locked their content behind a strict paywall or had a specific licensing agreement enforceable ahead of time. They could do that going forward, but how is it fair for them to go back and say that?

And the issue isn't "You didn't pay us" it's "This infringes our copyright", which historically the answer has been "no it doesn't".


Oh, sure. The NYT could go, and we could replace it with AI-generated garbage with non-verifiable information and no sources. AI changed the landscape. Google search would work less reliably because real publishers would hide info behind a login (twitter/reddit), i.e. sites would be harder to index. There would be a lot of AI-generated garbage that is hard to filter out: AI-generated review articles, AI-generated news promoting someone's agenda. Only to be left with a ChatGPT that could randomly increase its price 100x anytime in the future.

There was outrage about Amazon removing the DPReview site recently. But it would become common practice not to publish code/info that could be used to train another company's model. So expect fewer open source projects of the kind companies released just because they felt it could be good for everyone.

Actually, there is a case where the NYT becomes more influential and important: if 99% of all info is generated by AI and search stops working, we would have to rely on trusted sources to get our info. In a world of garbage, we would need some sources of verifiable, human-generated info.


Why is using authored NYT articles a "stupid IP battle", but having to pay for the model trained on them not stupid?


> Why using authored NYT articles is “stupid IP battles”

When an AI uses information from an article it's no different from me doing it in a blog post. If I'm just summarizing or referencing it, that's fair use, since that's my 'take' on the content.

> having to pay for the trained model with them is not stupid?

Because you can charge for anything you want. I can also charge for my summaries of NYT articles.


If you include entire paragraphs without citing, that's copyright violation, not fair use. If your blog was big enough to matter NYT would definitely sue.

A human makes their own choices about what to disseminate, whereas these are singular for-profit services that anybody can query. The prompt injection attacks that reveal the original text show that the originals are retrievable, so if OpenAI et al cannot exhaustively prove that it will _never_ output copyrighted text without citation, then it's game over.


I don't think fair use is quite that black-and-white. There are many factors: https://en.wikipedia.org/wiki/Fair_use#U.S._fair_use_factors (from 17 USC 107: https://www.govinfo.gov/content/pkg/USCODE-2010-title17/html...)

> "[...] the fair use of a copyrighted work [...] for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include—

> (1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;

> (2) the nature of the copyrighted work;

> (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

> (4) the effect of the use upon the potential market for or value of the copyrighted work."

----

So here we have OpenAI, ostensibly a nonprofit, using portions of a copyrighted work for commenting on and educating (the prompting user), in a way that doesn't directly compete with NYT (nobody goes "Hey ChatGPT, what's today's news?"), not intentionally copying and publishing their materials (they have to specifically probe it to get it to spit out the copyrighted content). There's not a commercial intent to compete with the NYT's market. There is a subscription fee, but there is also tuition in private classrooms and that doesn't automatically make it a copyright violation. And citing the source or not doesn't really factor into copyright, that's just a politeness thing.

I'm not a lawyer. It's just not that straightforward. But of course the court will decide, not us randos on the internet...


If the NYT wanted to charge OpenAI $20/mo to access their articles like any other user, that's fine with me. But they're not asking for that, they're suing them to stop it instead. That's why it's a stupid IP battle.


I agree that it’s more important than the NYT, I disagree that it’s the most important development in human history.


> This is way more important to humanity's future than any single media outlet. If the NYT goes under, a dozen similar outlets can replace them overnight.

Easy to grandstand when it is not your job on the line.


Is it? My job as a frontend dev is similarly threatened by OpenAI, maybe even more so than journalists'. The very company I usually like to pay to help with my work (Vercel) is in the process of using that same money to replace me with AI as we speak, lol (https://vercel.com/blog/announcing-v0-generative-ui). I'm not complaining. I think it's great progress, even if it'll make me obsolete soon.

I was a journalism student in college, long before ML became a threat, and even then it was a dying industry. I chose not to enter it because the prospects were so bleak. Then a few months ago I actually tried to get a journalism job locally, but never heard back. The former reporter there also left because the pay wasn't enough for the costs of living in this area, but that had nothing to do with OpenAI. It's just a really tough industry.

And even as a web dev, I knew it was only a matter of time before I became unnecessary. Whether it was Wordpress or SquareSpace or Skynet, it was bound to happen at some point. I'm going back to school now to try to enter another field altogether, in part because the writing is on the ~~wall~~ chatbox for us.

I don't think we as a society owe it to any profession to artificially keep it alive as it's historically been. We do owe it to INDIVIDUALS -- fellow citizens/residents -- to provide them with some way forward, but I'd prefer that be reskilling and social support programs, welfare if nothing else, rather than using ancient copyright law to favor old dying industries over new ones that can actually have a much bigger impact.

In my eyes, the NYT is just another news outlet. A decent one, sure, but not anything substantially different than WaPo or the LA Times or whatever. How many Pulitzer winners have come and gone? https://en.wikipedia.org/wiki/Pulitzer_Prize_for_Breaking_Ne...

If we lost the NYT, it'd be a bit of nostalgia, but next week life would go on as usual. They're not even as specialized as, say, National Geographic or PopSci or The Information or 404 Media or The Center for Investigative Reporting, any of which would be harder to replace than another generic big news outlet.

AI, meanwhile, has the potential to be way bigger than even the Internet, IMO, and we should be devoting Manhattan Project-like resources to it.


If the future of humanity rests on access to old NYT articles, we’re fucked. Why can’t OpenAI try to get a license if the NYT archives are so important to them?


They're not. They can skip the entirety of the NYT archives and not much of value will be lost. The issue is with every copycat lawsuit that sues every AI company out of existence. It's a chilling effect on AI development. Old entrenched companies trying to prohibit new ways of learning and sharing information for the sake of their profit.


Why don’t they train their AI on non-copyrighted material? It’s only fair for the copyright owners to want a share of the pie. I’d want one as well for my work.


>It’s only fair for the copyright owners to want a share of the pie.

No it's not, it's pure greed. Everyone'd think it absurd if copyright holders dared to demand that any human who reads their publicly available text has to pay them a fee, but just because OpenAI are training a brain made of silicon instead of a brain made of carbon all the rent-seekers come out to try to take advantage.


You know the NYT has to fork out money to produce that content, right?


Do you really think they're losing subscribers to ChatGPT...? Is there a single real person that thinks, "Oh, I don't need to pay the NYT anymore, I can just wait for the next OpenAI update six months from now and it'll summarize all the news for me"?


It's beside the point, the point is, money and time were spent producing that work, so why should OpenAI just be allowed to take it and profit from it, without at least attribution? It's absolutely ridiculous.

I saw an article the other day where they banned ByteDance's account for using their product to build their own. Can you see the absolutely massive hypocrisy here?

It's fine for OpenAI to steal work, but if someone wants to steal theirs, it's not? I cannot believe people even try to defend this shit. It's wack.


> No it's not, it's pure greed.

And Altman (Mr. Worldcoin) and fucking Microsoft are what, some gracious angels building chatbots for the betterment of humanity? How is them stealing as much content as they can get away with not greedy, exactly?


Because no one forced them to, and the copyrighted dataset is much larger? It's like trying to teach your kids using only non copyrighted textbooks. There's not much out there.

Copyright is an ancient system that is a poor legal framework for the modern world, IMO. I don't think it should exist at all. Of course as a rightsholder you are free to disagree.

If we can learn and recite information, and a robot can too, then we should have the same rules.

It's not like ChatGPT is going around writing its own copycat articles and publishing them in newsstands. If it's good at memorizing and regurgitating NYT articles on request, so what? Google can do that too, and so can a human who spends time memorizing them. That's not its intent or usefulness. What's amazing is that it can combine that with other information and synthesize novel analysis.

The NYT is desperate (understandably). Journalism is a hard hard field with no money. But I'd much rather lose them than OpenAI. Of course copyright law isn't up to me, but if it were, I'd dissolve it altogether.


Ok, your reasoning escapes me. The NYT has the right to sue, and like any other business it's holding onto its moat. Why would they let OpenAI train on their property? Why wouldn't they train their own AI on their own data?

Open AI is a business. NYT is a business. MS is a business. Neither will be happy when some other party takes something away from them without paying.


Because then they probably wouldn't have enough good-quality training data.


Too bad. Quality costs. Share the profits with everyone, then, and nobody would be unhappy.


I think the exact opposite is true: as long as AI depends critically on scrupulous news media to be able to generate info about current events, it is far more important to protect the news media than the AI training models. OpenAI could survive even if it had to pay the NYT for redistributing their works. But OpenAI can't survive if no one is actually reporting news fairly accurately. And if the NYT were to go bankrupt, all smaller players would have gone under looooong before.

In some far flung future where an AI can send agents to record and interpret events, and process news feeds and others to extract and corroborate information, this would greatly change. But probably in that world the OpenAI of those times wouldn't really bother training on NYT data at all.


I hope the nyt skullfucks this field. Humanity's future? You're doing statistics on stolen labor.

