Microsoft's AI boss thinks it's perfectly OK to steal content if it's on the open web (theverge.com)
70 points by avivallssa 72 days ago | 73 comments



From the beginning, it's seemed completely intuitive to me that training a computer made of sand on publicly available content and then generating art later should be fair use, so long as it's fair use to train the meat computer in your head on the same content and then use it to generate art later. There's no meaningful difference to me as far as the ethics of the act are concerned.


I think the reason behind it is similar to the case where you are allowed to watch a movie in a theater with your eyes, but not with a camera.


But you can have someone that watches the movie give you a plot summary, and I assume some sort of AI device that does the same would also be alright.


That's why, for example, fan-made Star Wars movies are totally OK, and Disney has no issue with them, right? In such cases not even the plot is based on anything Disney "owns".

Also, it's OK to make a movie out of a plot summary given to you by someone who read the book, right?

You really think such things become legal if you have some ML algos in the loop?

https://web.archive.org/web/20220307083651/https://fairuseif...


The meaningful difference is that one is an autonomous person and the other is a machine owned by a company.


I think the tricky bit here is “fair use”, an inherently subjective concept that’s a function of context. What’s fair in one context (meat computer training due to limitations of meat computer) may not be fair in another context (silicon computer trained by Big Tech – unencumbered by meat computer limitations).


There is a big difference in most people's minds between talking to/helping individuals and machines. One establishes a human connection, the other... well, quite the opposite when on top of it they're owned by a megacorp.

I believe it was legal for them to, but a breach of an implied license.


There is no such thing as "generative art". Art cannot be generated, by definition.


The problem is the word "training". It's too anthropomorphic. Humans absolutely do learn from their inputs, but they are also capable of recognizing those influences and adopting or rejecting them. Current generation AI technology doesn't do that - which is why Stable Diffusion loves to draw, say, incomprehensible mutations of the Getty Images watermark. Machine learning models are trained by rather simplistic fixed code - akin to the expert systems that ML replaced - and that code does not know the difference between the copyrightable and uncopyrightable features of a source image. e.g. the training set had a lot of Getty Images watermarks in it, so the model is updated to draw those watermarks.
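To make that concrete, here's a toy numpy sketch (not any real trainer; the data and "model" are made up for illustration): when the training objective just matches pixels and nearly every training image carries the same watermark, the learned output carries it too.

    import numpy as np

    rng = np.random.default_rng(0)

    # Fake "dataset": random 8x8 images, each stamped with the same 2x2
    # "watermark" patch (a stand-in for the Getty Images logo).
    images = rng.random((100, 8, 8))
    images[:, :2, :2] = 1.0  # identical watermark region in every image

    # Simplest possible "model": the pixel-wise mean, i.e. the image that
    # minimizes mean squared error over the training set. Nothing in that
    # objective distinguishes copyrightable pixels from any other pixels.
    learned = images.mean(axis=0)

    print(learned[:2, :2])    # ~1.0 everywhere: the watermark was "learned"
    print(learned[4:6, 4:6])  # ~0.5: ordinary content averages out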

In terms of ethics, there IS a difference: AI is cheap. It does not require food, shelter, or much of anything, really. If your fellow man learns how to draw off your work, he does not become an existential threat to your livelihood in the same way an AI model potentially would. This is not an inherent problem with machines that think[1], but with the rationales for investing in AI. Generative models probably can't currently replace real human artists or programmers, but that's what your boss is hearing. The capitalist class is chock full of people who would love nothing more than to fire all workers so that they can reap 100% of the benefits of automation. Economists talk of how robots and automation didn't put humans out of a job, just shifted the jobs around and made humans more productive, but the reality is that the people who owned the factories were hoping it would. And they're perfectly willing to take further swings at the "problem" of there being a working class.

Remember: Breaking looms was the Luddites' tactic, not their goal.

In terms of copyright (a system nominally intended to protect artists, though it doesn't do a very good job of it): the Internet is not Public Domain[0]. If you - an AI system or a human - train on a copyrighted work and produce something substantially similar to that same copyrighted work, you've infringed.

To put this in other words: if Microsoft's AI boss thinks it's perfectly OK to train on anything on the Internet, then he'll be OK with my LLaMA finetune on Windows NT source code leaks[2]. If I can get the model to output the source code to Windows, that means I own it now, right?

[0] no matter what Eric Bauman thinks

[1] We must negate the machines-that-think. Humans must set their own guidelines. This is not something machines can do. Reasoning depends upon programming, not on hardware, and we are the ultimate program! Our Jihad is a "dump program." We dump the things which destroy us as humans!

[2] I would love to see someone do this, get sued, and then claim Microsoft is estopped from asserting copyright infringement because they said training on other people's work is OK. It wouldn't work and Microsoft would ruin their lives but it'd be funny.


No he doesn't.

> I think that with respect to content that’s already on the open web, the social contract of that content since the ‘90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been “freeware,” if you like, that’s been the understanding.

> There’s a separate category where a website, or a publisher, or a news organization had explicitly said ‘do not scrape or crawl me for any other reason than indexing me so that other people can find this content.’ That’s a grey area, and I think it’s going to work its way through the courts.


> I think that with respect to content that’s already on the open web, the social contract of that content since the ‘90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been “freeware,” if you like, that’s been the understanding.

is literally

> I think it's perfectly OK to use content in arbitrary ways if it's on the open web

The only difference between this and the title is he doesn't think this behavior is called "stealing".


>I think that with respect to content that’s already on the open web, the social contract of that content since the ‘90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it.

Good thing it doesn't matter what he "thinks" the "social contract" is, copyright is automatic.


It's an incredibly fuzzy statement, really.

You can copy copyrighted material you download for your own use and the use of your friends. You can't copy it to a different website and distribute it further on the "open web". You can modify material provided for download in your own home for any purpose you wish but unless that modification is in the category "fair use" (a small part, a parody, etc) you can't distribute it freely on the web either.

His statement might mean this but it could mean a zillion other things too (what does "it is fair use" mean? etc)

Whether using randomly obtained copyrighted material to "train" an LLM and then selling the output of that LLM is "fair use" seems, so far, to be another "gray area", and the foggy statement seems oriented toward reducing awareness of that situation.


Legally you cannot actually do the first thing without permission. The fact that it was technologically impossible to stop, and the damages would be impossible to prove, didn't make it a legal right.


Most people would read the first quote as totally aligned with the article’s title.


That "separate category" isn't a explicit opt in. It's its two different mechanisms for indexing and copyright.

Index is your out. You can use our content is opt in.

Reproduction and recreation, especially when taken physically outside of the Internet or into products for sale has always been a against the rules. As mentioned by another post, torrents of music and movies solidified this stance legally.

Unindexed connect can be open source no strings attached.


> the social contract of that content since the ‘90s has been that it is fair use

This social contract was broken when Google and Facebook pushed remarketing and behavioral tracking, and then started pulling content directly onto their own pages to boot. That was over a decade ago, and it's the reason why every news site now bugs you about running out of "complimentary articles" and how you need to maintain 50 different subscriptions to get what used to be paid for by advertising years ago. The only reason why complimentary articles even exist is to avoid Google delisting them entirely and them not getting any search traffic (since Google doesn't link to shit that isn't free).


> No he doesn't.

Could you please help me see where you see the difference between the title and the quotes? Even after reading them it seems the title is substantially true?

Or to be curt while mirroring your comment’s style: “Yes he does.”


I mean, that seems to be exactly how he's defining "open web" here, actually. That which is - in the dichotomy presented by these two quotes - "the open web" is fair game for any use, and he defines things that use language that explicitly disallows all uses except indexing as the complement of this category. Maybe he'd accept any site that effectively declares a "whitelist" of acceptable uses in this category too, though this isn't explicitly stated.

His contention is an assumptive close: it wraps the assumption that anything not explicitly labeled otherwise must use a "blacklist" policy, where any usage not specifically forbidden is permitted, into "the social contract" that he claims is so obvious as to not permit challenge.

He would like the "grey area" of legal debate on this matter, as he explained quite clearly, to be exclusively about whether AI models can be enforceably barred from training on content for which such a narrow whitelist of acceptable uses has been defined. Naturally this would mean both that the courts could decide such a blanket ban can't bar msft (or anyone) from using this content to train AI models, and also that the court needn't, or maybe even can't, decide that failure to ban this use case explicitly (or adopt a similar "whitelist"-style blanket ban) makes acceptance of it legally implied. Hell, he even leaves room for explicit bans on this use to be rendered legally unenforceable.

I can see why he would want that to be the Overton window!


Yes, he does.

> content that’s already on the open web, the social contract of that content since the ‘90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been “freeware,” if you like, that’s been the understanding

Sure, it's his belief, but this statement is completely incorrect. The words that he's using mean specific things, and he's just got it wrong. You would expect he's smart enough to know this, but 'it is difficult to get a man to understand something, when his salary depends on his not understanding it'.

> That’s a grey area, and I think it’s going to work its way through the courts.

It's not a grey area. Putting up a robots.txt doesn't change copyright, and it certainly doesn't make it a 'grey area'.


I'll bet they don't consider the Windows and Office source code fair game for arbitrary reuse, provided the other party found the copy on the web. Even if the person found the copy on GitHub.


Isn't this discussion rather stupidly letting them control the goalposts? They have already gone far beyond this, thinking that everything someone does on their own personal computer, in their own home, without the slightest bit of consent, is going to be slurped up and recorded in case they want to query it someday.

This is like arguing that this guy who just murdered someone 10 minutes ago should actually be able to steal the candy from this child, since the child put it down on the park bench.


The more I read about this guy the more I get the feeling that he is an unscrupulous individual.

robots.txt is a "grey area" to him, instead of being a directive to keep moving? Wow.


What exactly is wrong with the statement he has made?


The Verge quotes: "I think that with respect to content that’s already on the open web, the social contract of that content since the ‘90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been 'freeware,' if you like, that’s been the understanding."

This statement is quite wrong. People have complained about content farms and plagiarists ripping off their content for ages. The only kinds of Web content I can think of where his statement could apply are open source software (under the more permissive licenses) and Stack Overflow posts. Almost everything else was posted with the intent of retaining copyright.


Open source software is also posted under copyright. Copyright is what enforces the license, which is what makes it open source. For BSD/Apache2 etc., the license still requires attribution.


When it's unstated, the copyright holder's intent is unknown and can be changed later. The legal solution to this is licenses.

A common casualty of unstated intent is attribution, which is missing (and not currently possible) with many AI models today.

The Internet is not detached from the rest of the world. Something with a license can appear there (with or without the holder's knowledge).

Note: I'm not intending to contradict you, but to elaborate on that last point, as assuming intent is dangerous.


Content on the web without any copyright notice is by default "all rights reserved" content. So you cannot use it to produce new content, unless you can prove that the old content was never reproduced across all invocations of the LLM.


You can totally study it and learn from a bunch of content on the web and then create a new work having looked at other works before. My writing is influenced by lots of copyrighted work I’ve read already, there is no mental blockade going on when I write something. The only question is what does it mean when a machine does that instead?

The old content can’t be reproduced by an LLM unless the content is provided as part of its prompt. At least, that’s what the AI companies claim and will have to show in court.


> You can totally study it and learn from a bunch of content on the web and then create a new work having looked at other works before.

But you don't do that by computing relative word frequencies in the content and then running an algorithm to generate further words with the same relative frequencies. When human efforts come too close to that it is called plagiarism. You don't get words from what you read and study, you get ideas, and ideas can't be copyrighted. You just have to express the ideas in your own words.
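To make the caricature concrete, here's a minimal sketch of exactly that procedure: a bigram Markov chain. (Real LLMs are of course far more elaborate than this; the corpus below is made up.)

    import random
    from collections import defaultdict

    def train_bigrams(text):
        # Record which words follow which, preserving relative frequency.
        follows = defaultdict(list)
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev].append(nxt)
        return follows

    def generate(follows, start, n=10):
        # Sample successors in proportion to their observed frequencies.
        out = [start]
        for _ in range(n):
            nexts = follows.get(out[-1])
            if not nexts:
                break
            out.append(random.choice(nexts))
        return " ".join(out)

    corpus = "the cat sat on the mat and the cat ran off"
    print(generate(train_bigrams(corpus), "the"))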

> what does it mean when a machine does that instead?

The machine is doing exactly the thing that, when human efforts come too close to doing it, is called plagiarism. The machine isn't operating with ideas; it doesn't even have a concept of "ideas". So it is not doing what you are doing--it is not taking ideas and expressing them in its own words. It's literally just shuffling the words.


> But you don't do that by computing relative word frequencies in the content

That's a lot of assumptions about human level intelligence. We do know that the more you read, the better you can write, and a lot of our text is "inspired" by words and styles that we read in the past. You claim this isn't an algorithm, but I don't think we really know how human intelligence works.

> You don't get words from what you read and study, you get ideas, and ideas can't be copyrighted.

That's completely oblivious to how humans actually read and study. Of course you get words; if you are lucky you might also get ideas, but at least you get words (assuming ideas aren't words to begin with).

The government is eventually going to have to make a decision on this one way or the other, or we will just cede the whole of advancement to China (who won't bother with this debate and just do what works the best).


> That's a lot of assumptions about human level intelligence.

I disagree, but I doubt we're going to get anywhere by discussing it. It seems to me that you are either seriously undervaluing human intelligence, or seriously overvaluing LLMs.


I just know that we don’t actually know. We assume we are special, but we could just be stringing words together based on what we experienced and learned with what we remember for prompting.


There's a lot we don't know about the details of how our brains work, but we do know that our brains are connected to the external world in a huge number of ways that LLMs are not. The only connections to the external world that LLMs have are their training data and the prompts they are given. That is an incredibly paltry level of connection compared with the connections that our brains have. Those connections make a huge difference.

As AI technology evolves, I would expect that the connections AIs have with the external world will become much more extensive, and that will make a huge difference as well. But current LLMs simply don't have that, and it shows.


It is absurd to act as if machines would do the same as human brains if you don’t even know what human brains do.


The article does a pretty good job outlining the actual legal rights of published content and how his statement is not supported by laws and precedent.


He compares fear of "AI" to fear of calculators. But "AI" cannot do math. Calculators do not "hallucinate". They are not correct "80%" of the time. They are correct 100% of the time. We know how they work. IIRC, in the 1970s someone at Bell Labs wrote a UNIX program that could generate fake academic papers. It might be a fun gag but it doesn't have much practical utility. No matter how "real" the papers might appear, or even if they are correct "80%" of the time, it is not an "invention", and it is certainly not comparable to a calculator.


Will this make people who earn indirect money through their content less motivated to publish that content on the Web? That might be arguable.

Maybe there should be a similar amount of openness in publishing the content used for training commercial models.

The copyright owner should have a privilege to ask for that content to be removed from training. This may also allow individual authors to gain their share with their own Advanced RAG applications, specially focused on the content they own and have published on the web.


Steelmanning against this, when you publish any form of content online you have to be prepared for the consequences of its digital proliferation. When Napster was all the rage people said the same thing about music, and before that they decried home taping as the death of the music industry. The music industry lived on, it just changed form as the shape of music did.

If the online newsletter community that Substack and Medium built consequently dies from sheepish author syndrome, very little will change. The content they made will be replaced, and the internet will survive fine without them as it did for dozens of years before digital subscription services were a realistic revenue stream.


>Steelmanning against this, when you publish any form of content online you have to be prepared for the consequences of its digital proliferation.

when you walk down the street in a skimpy dress you have to be prepared for the consequences too, right? Whatever you're trying to say, it has nothing to do with what rights companies have to use content.


The point is that you cannot claim damages to something that you give away for free. What did they take from you, notoriety? Content? Traffic?

When the Authors Guild pressed Google over their indexing of plaintext copyrighted books, they lost in court. Transforming freely-available content can't be gatekept just because the use isn't strictly what the author imagined. There is a degree of fair use that exists when you make anything public. Music, art, text, videos, all of it can be consumed in novel and unexpected ways. People haven't been concerned about the legal ramifications of abusing intellectual property since teenage Neil Cicierega made Mr Rogers fight Batman in 2005: https://www.youtube.com/watch?v=lrzKT-dFUjE


> you cannot claim damages to something that you give away for free

Maybe you can't claim damages based on loss of income, but that's not the only kind of damages.

For example, say I post an article and don't charge anything for reading it, and an LLM, based on its training data which includes my article, generates an article in my writing style. Sure, it hasn't cost me any money, but it might still affect my reputation if people think I wrote the article. Copyright is supposed to cover cases like that as well as cases where income is lost.


Maybe it’s better if people stop publishing for profit? The internet was better when everyone wasn’t trying to profit from it and it was a bunch of hobbyists.


>The copyright owner should have a privilege to ask for that content to be removed

Just so you know, privileges can be (and probably will be in this case) denied.

Rights on the other hand can't be denied.


Right, nobody on the internet has ever violated copyright and gotten away with it. /s


One thing is a robots.txt policy, meant mostly for search crawlers.

Another thing is the copyright of the content, terms of use policies, etc.

Abiding by a robots.txt policy doesn't make you immune to copyright, terms of service, the law in various jurisdictions, etc. If you think it does, you are probably a kleptomaniac.
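For reference, this is roughly everything a compliant crawler does with robots.txt (the bot name and domain below are made up). Note that the protocol can only answer "may I fetch this URL?"; it has no vocabulary for "index me but don't train on me", let alone anything about copyright.

    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # fetch and parse the (purely advisory) policy

    # The only question robots.txt can answer, per user agent:
    print(rp.can_fetch("SomeAIBot", "https://example.com/article.html"))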

Just create a robots.txt with "User-Agent: one billion asterisks" so that the crawlers die when parsing it.


It seems obvious to me that there is no such thing as AI without publicly training on the open web, and that any kind of licensing is an impossible feat.

Programs from my youth (Daria, Captain N) had music licensed only for their original broadcast, and that's all, because what else was ever going to be done with it? 20 years later, streaming with the music intact is an impossibility, because the kind of money necessary to license all of it is too much. And you have to make deals with dozens of companies.

Multiply that by several orders of magnitude and you start to see the scope of the problem.


> It seems obvious to me that there is no such thing as AI without publicly training on the open web, and that any kind of licensing is an impossible feat.

Yes, I think this is correct. Which means the business model of AI, or at least of AI as companies like Microsoft are currently implementing it, is fundamentally at odds with anyone who posts content on the Internet having copyright and the rights associated with it.


I don't care if it's expensive or difficult. If somebody wants copyrighted content to train an AI, they should buy the rights to it before creating derivative works based on it.


Part of the problem here is that the web has gone through lots of change as to what it is and how people understand it.

Some people think of it as billboards posted on the highway. Some think it’s a bulletin board. Some think it’s a newspaper. A television, a “zine”, a diary, graffiti. It has been all of these things, and is and isn’t. And people who publish are really bad at explicitly stating which one they are. But they expect you to know.


Plagiarizing any of those is just as much against the law.

Try storing copies of all television shows out there and see how much the entertainment industry will want to sue you. Why would a corporation doing the same to other content be any different?

This is just a "rules for thee not for me" thinking by corporations.


Except they’re not storing copies of any of these. They’re storing how likely it is that something from this relates to something from that.

Back when Starship Titanic came out, the website posted a copy of the associated novel, with the words in alphabetical order for convenience [1]. No copyright court would recognize this as a copy of the book, just as a dictionary is not a copy of any other work.

An observation about a work (how many words it has, which words, how frequently they’re associated with other words, who wrote them, when etc. ) is not the same thing as a copy of the work.
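In the spirit of that alphabetized novel, here's a sketch of such "observations": counts and frequencies from which the original expression can't be read back. (The text is a placeholder.)

    from collections import Counter

    text = "the quick brown fox jumps over the lazy dog"
    words = text.split()
    observations = {
        "word_count": len(words),
        "vocabulary": sorted(set(words)),  # the alphabetized-novel trick
        "frequencies": Counter(words),     # which words, and how often
    }
    print(observations)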

I get that there’s an ethical question here, but I really don’t agree that there’s a copyright question.


> Except they’re not storing copies of any of these.

Copyright isn't about storing copies. It's about making copies.

How do you think the AI companies extract all the statistical info about some content?

Spoiler: they copy that copyright-protected work onto their machines, where they do the data analysis.

At this point copyright has already been violated. They usually never had the right to make copies for the purpose of extracting statistics in the first place; "because it's on the internet" doesn't grant such a license.
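A sketch of the point (example.com standing in for any scraped page): even "just computing statistics" begins by making a local copy.

    import urllib.request

    # Fetching the page materializes a copy in memory before any analysis.
    html = urllib.request.urlopen("https://example.com/").read()

    # Only then do the "statistics" happen, on that copy.
    print(len(html.split()))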

The current LLM "AI" only exists on the grounds of massive copyright violations. It's the exact same situation as with Napster in the past.

No matter how often MS and friends hallucinate something about "fair use", it is not, and the only realistic legal outcome in the long run will be the same as for Napster: AI training is piracy, on a global "internet scale". (The only sad thing is that likely nobody directly involved will end up in jail, as happens, e.g., to people sharing torrents.)


> Except they’re not storing copies of any of these.

How do you think GPT(n+1) is trained?

We literally have people in the AI field admitting part of their training dataset is a pile of torrented books.


So we've now learned that copyright is determined by communications protocol. If you're using torrents it's copyright infringement, if it's the web then it's public domain.


> if it's the web then it's public domain.

Only if you're a big company. If you're an individual or small company, then it isn't.


Not sure why downvoted.


Use of humor on HN triggers down-votes quite often.

Why it's like that, IDK.


Hmm, hear me out: go to a public website and add black space below every video or picture, filled with random adjectives that are your satire review of that piece of art; then feed those into the AI model and tell it to ignore any text.


This is nothing but performative clickbait by the Verge.

It is classified as fair use; the term is transformative use, where those using the content are training models (that being their intention), if anyone wishes to Google it.

The end.


> the term is transformative use

Transformative use is not automatically fair use. It is just a factor to be considered in judging whether a use is fair use. [1]

(I would argue that AI outputs are not transformative uses anyway, since their point is not to add anything original but to provide a cheaper way of generating equivalent content.)

Two other factors that seem highly relevant to the case of training AI models on the entire web are the third and fourth factors described on the US Copyright Office site I linked to. Training AI models on the entire web seems to be at the "least likely to be judged fair use" end of the spectrum for both of those factors: they use the entire content, and the whole point of the unlicensed use is to harm the existing or future market for the original copyrighted work.

[1] https://www.copyright.gov/fair-use/


My point specifically speaks to training it.

Copyright is only concerned with the supposed copy in question, which would be the model.

Your interpretation is an extreme stretch under both of those last two criteria. At an abstract level, the information is used to help the AI model obtain a model of the world. This is transformative use. On a technical level, the data is merely used to adjust parameter weightings; again, transformative use. There is no corpus of copyrighted work stored in the model, merely a probabilistic model of that information; this is transformative use.

You will have an extremely tough time finding a neutral AI expert who would say that the whole point of a multi-billion-parameter AI model is to harm the existing market for the original copyrighted work, as this is not true. Can it be used for that purpose? Yes. But is that its purpose? Nearly always, no.


> My point specifically speaks to training it.

The training part by itself is irrelevant to copyright since nothing is published. But of course nobody just lets trained LLMs sit in private; they are used to produce output, and the output is published so it has to meet the requirements of copyright law.

> the information is used to help the AI model obtain a model of the world

No, it isn't. LLMs don't have a model of the world. They have a model of words and word frequencies. That's it. That's all they get from their training data.

> You will have an extremely tough time trying to find a neutral AI expert who would give a definition that in anyway says the whole point of a multi-billion parameter AI model is to harm the existing market of original copyright as this is not true.

Sorry, not buying it. The way Microsoft expects to make money with their AI is by having it replace humans at generating content. The reason for training the AI on existing copyrighted content is so that the AI can generate content that can replace the original copyrighted content. Sure, Microsoft itself might not be doing that, but the customers who buy their AI are.


- It's specifically relevant, as it's what the article is talking about. Of course the output can be used to produce copyrighted materials, but this relies on the user inputting specific variables. It's much like someone rewriting a copyrighted work in Word; this is just way easier to do, and ease of copying is not covered by copyright law.

- LLMs do indeed have a model of the world. They are no longer regression models that simply capture words and frequencies (although I'd argue even that would be transformative use); it's worth checking out just how powerful GPTs are and how they work.

- That argument would imply that any new, novel method that can replace a copyrighted method would be infringing. Ideally there should be multiple ways to do something, and the AI would produce its own median/mean way based on its learnings. Yes, I agree the customers can, but that is not what Suleyman is defending.


scraping the open web shouldn't be a crime[1], even if unsavoury people do it for unsavoury purposes

[1]: or even just an issue


It's not stealing content if the content is still in the original place. Stop trying to redefine words. It's copying.


Tell that to Hollywood.


I don't have contacts with that neighborhood.


If buying isn't owning, copying isn't stealing. This is a really tired argument.


Ah yes the implied social contract that it's ok because it happens all the time.

That's how society falls.


The open web's ethos since its inception in the 1990s has been one of unrestricted access and fair use. Content published openly online inherently invites broad consumption, reproduction, and creative reuse by the public. This is not merely custom, but a fundamental aspect of fair use doctrine as applied to the digital realm.

The four factors of fair use - purpose of use, nature of the copyrighted work, amount used, and effect on the market - overwhelmingly favor allowing free use of openly published web content. The transformative nature of most reuses, the public availability of the original works, the necessity of using entire works in many cases, and the lack of a traditional market for such content all support this interpretation.

This longstanding practice has been the catalyst for unprecedented innovation and information dissemination. It represents a tacit social contract between content creators and users, establishing a de facto "freeware" model for open web content. Any attempt to retroactively impose strict copyright limitations would not only stifle innovation but also contradict decades of established legal precedent and digital norms.

-As a side note, I’m not certain that training necessarily involves “copying.”

-Lastly, if anyone really thinks the Roberts court is going to knee-cap AI, you're soft in the head.


> Content published openly online inherently invites broad consumption, reproduction, and creative reuse by the public.

You're using "invites" as a weaselword here. Then you go into the law, as if the law is discussing "invitations."

Counterpoint: content published online is usually extremely hostile to reproduction, and the people who produce it are deathly afraid of other people copying their work and outranking them with it.

> -Lastly, if anyone really thinks the Roberts court is going to knee-cap AI, you're soft in the head.

If you think the Roberts court is going to be hostile to copyright, you're insane.


I am optimistic, but I feel sad when I remember that we went through all this with sampling 30 years ago, and now the music industry is more insular and controls every scrap of music it can find, so no one can sample anything; even if your song merely has similar "vibes" to another, well, now it belongs to them as well.

I'd wager that 90% of the people cheering on the RIAA (and copyright in general) now were singing a much different tune before they decided AI was a threat to their livelihoods. There are a lot of reasons not to be optimistic about the continued legal existence of freely available open source AI, because if the copyright holders have their way, they will be the only entities that control all the data needed to train it. And many people on the internet are all too happy to cheer this on without realizing that once the RIAA, the MPAA, and the publishers (when they finally figure out how to organize as effectively as film and music have) hold the reins, the AI will still exist, and it will still take all the same jobs, but all the open source and freely available AI that everyone could use, that could level the playing field for everyone... that's going to be illegal to use without paying the content rights holders. Just another way to keep it haves vs. have-nots, same as it's always been.



