The ignorance in this comment section is already giving me an aneurysm. Software licenses matter. Copyright matters. If megacorps like Microsoft can sue people into oblivion for violating their copyright terms, people can sue Microsoft into oblivion for violating theirs. I don't use MS Github, I have no skin in the game, but I hope there is at least a $1000 award for every instance of AGPL and GPL license violation, because what they're doing is unfair and illegal.
Software freedom matters, but I wouldn't expect the typical HN type to understand, since their money is made on exploiting freely-available software, putting it into proprietary little SaaS boxes, then re-selling it.
Microsoft (presumably) did train it on their open source repositories, since those repositories are public GitHub repos. They didn't train it on anybody's private repositories.
The point is, if they're sure they won't be recycling copyrighted code wholesale, why not include their own code in the training set? Surely their internal code is higher quality than the average git repo, which must be 80% abandonware (if my personal repos are anything to go by :P)
Probably because of the (very small) chance that Copilot could regurgitate something secret or embarrassing.
Which is not necessarily hypocritical. The amount of copying needed for something to be copyright infringement is not high… but it's still significantly higher than the amount needed to leak information. For that, just a few words will do, e.g.
// For Windows 12
or
// Fuck [company name]
or
long secret_key[2] = {0x1234567812345678, 0x8765432187654321};
But publicly accessible doesn't mean public domain. Microsoft has shared even some of their private code with others like governments. No doubt with strict licenses which they expect to be honored. AGPL and other licenses on publicly accessible code still matter.
Microsoft's apparent legal opinion is that training an AI on the data is the same as reading it, and doesn't require a license.
That as long as they have the right to read the data, they have the right to train an AI on it. The fact that the code is available under an open source license is irrelevant to them.
As for why they didn't use their own private code to train their AI, I suspect it was more of a non-malicious: "we don't need to, this public github repo dataset is big enough for now"
Personally, I think Microsoft should double down on this legal stance. Train the AI on all their internal code. And train it on any code they have licensed from other companies too.
I remember when some Windows code was leaked: people explicitly avoided reading it so they wouldn't get sued if they later worked on the Linux kernel or Wine. Reading code can most certainly lead to a copyright breach, and Microsoft of all corporations should know this.
> Microsoft's apparent legal opinion is that training an AI on the data is the same as reading it, and doesn't require a license.
How is that reconciled with the fact that a person who has read copyrighted code (not even the original source code, a mere decompiled version of it!) is forbidden from reimplementing it directly?
Clean room reimplementation is a way to prevent court cases, it's not a legal requirement.
If a company copies a competitor's product then the chance of getting sued is very high. If they can show that, in fact, there was zero copying at all, then they can get the case dismissed and save great legal expense.
If the sample of the Metallica song is insubstantial enough then you may well prevail in court.
It's unsurprising that copilot can reproduce the most famous subroutine of all time precisely, given that it occurs in hundreds or thousands of repos.
Also that code is not copyrightable. Pure algorithms are not copyrightable, copyright of code arises from its literary qualities.
E.g. I can copy an algorithm out of an ISO spec and that doesn't make my code a derivative work of the spec requiring me to pay royalties to ISO.
When you strip the algorithmic elements out of fast inverse sqrt, what are you left with? Single-letter variable names. That is certainly far below the threshold for copyright.
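For anyone who hasn't seen it, the routine being discussed is the famous fast inverse square root from Quake III Arena. A portable sketch of the trick follows (this uses memcpy instead of the original's pointer cast, which is undefined behavior in modern C, and my own names rather than the original's):

```c
#include <stdint.h>
#include <string.h>

// The "fast inverse square root" trick: reinterpret the float's bits as
// an integer, subtract a shifted copy from a magic constant to get a
// rough estimate of 1/sqrt(x), then refine with one Newton iteration.
float fast_rsqrt(float number) {
    float    y = number;
    uint32_t i;
    memcpy(&i, &y, sizeof i);        // portable bit reinterpretation
    i = 0x5f3759df - (i >> 1);       // magic-constant initial estimate
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - (number * 0.5f * y * y)); // one Newton step
    return y;                        // roughly 0.2% max relative error
}
```

Notice that once you describe the algorithm (the bit trick, the constant, the Newton step), there is almost no expressive text left over: the rest is variable names and arithmetic.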
Software licenses have barely been tested in court, let alone how they apply to code injected and combined with other code via machine learning. You're extremely overconfident about how this will actually play out.
For one, just because your code is covered by the GPL, it doesn't mean every single line in isolation is copyrightable. It has to demonstrate creativity. That's why you don't have to worry about writing for (int i = 0; i < idx; i++) {.
You're right that code has to demonstrate creativity for copyright. But that also means that an algorithm, even a transformative algorithm, cannot change copyright because an algorithm is not creative, by definition.
This means that the output of any algorithm on copyrighted code is still under the original copyright. I mean, we still apply the copyright of the original to the output of compilers, even though compilers can be transformative with inlining and link-time optimization, to the point that it mixes disparate code in the same way Copilot does.
In fact, I wrote some software licenses [1] that codify the fact that algorithms cannot change copyright.
You sound very confident about this, whereas copyright lawyers I've read discuss this issue seem much less confident overall, but lean toward thinking this would be fair use.
What makes you so confident that this would not be ruled fair use?
(And for people not familiar - if ruled fair use, it doesn't matter what the license is because fair use is an exception to copyright itself.)
I have a feeling you did not read the FAQ of the licenses. I don't blame you, but they explain my position.
Here's the relevant quote:
> GitHub is arguing that using FOSS code in Copilot is fair use because using data for training a machine learning algorithm has been labelled as fair use. [1]
> However, even though the training is supposedly fair use, that doesn’t mean that the distribution of the output of such algorithms is fair use.
My licenses say, basically, "Sure, training is fair use, but distributing the output is not."
The licenses specifically say that the copyright applies to any output of any algorithm that uses the source code as all or part of its input.
Now, I have not gotten a lawyer to look at my licenses yet (it's in the works), so don't use them yourself. But because everyone keeps saying that training is fair use, I'm fairly confident that only training is fair use.
Of course, it might not be, but that would take more court cases and more precedent. I wanted to poison the well now [2] to make companies nervous about using a model that was partially trained with code licensed under my licenses.
It's mildly interesting that you've decided to express your personal opinion about what is or is not fair use within in your license text, but the fact is that if a use of the work is deemed to be fair use under the law then the terms of the license you're offering are completely irrelevant. Your permission is not required to make fair use of the work, so no one needs to agree to your license.
> It's mildly interesting that you've decided to express your personal opinion about what is or is not fair use within in your license text, but the fact is that if a use of the work is deemed to be fair use under the law then the terms of the license you're offering are completely irrelevant. Your permission is not required to make fair use of the work, so no one needs to agree to your license.
You do not seem to get it. Yes, I understand that if fair use applies, my licenses don't matter. I get that. I promise I do get that.
The purpose of these licenses is to sow doubt that fair use applies to distributing the output of ML models.
Lawyers are usually a cautious lot. If a legal question has not been answered, they usually want to stay away from any possibility of legal risk regarding that question.
The licenses create a question: does fair use apply to the output of ML algorithms? With that question not answered, lawyers and their companies might elect to stay away from ML models trained with my code, and ML companies might stay away from training ML models on my code in the first place.
That is what I mean by "poisoning the well." The poison is doubt about the legality of distributing the output of ML models, and it is meant to put a damper on enthusiasm for code being used to train ML models, especially for my code.
It still amounts to an opinion statement in the license text which has no real bearing on the license. I was trying to be charitable, but your clarification makes it seem even more like you're just trying to spread unsubstantiated FUD in hopes of scaring people away from using your code as input to ML models even when that would be fair use. Which seems to me to be vaguely akin to fraud. Moreover, the license seems like a poor choice of venue to express your opinion since those you're most interested in dissuading (e.g. people using lots of different projects as input to their ML models, without investigating the details of each one) are also the least likely to bother reading it. In terms of raising awareness of how copyright might apply to the output of ML models you'd do better to post your opinions on a blog somewhere and leave the license text for things that can actually be affected by a license.
> The relevant part of the license is the definition of the covered work, which basically says that the output of any algorithm that uses copyrighted code as input is under the same license.
In other words, you are granting unnecessary additional permission to use the output of an ML algorithm trained on the copyrighted code under the terms of the same license, when your permission was not required if the use of that output was already covered by fair use. If the use is not considered fair use—if the output would be deemed a derivative work under copyright law—then this license is beneficial to the developers of ML systems like Copilot since it explicitly grants them permission to use the output under the same terms. In the best case it's fair use and your license is irrelevant, and in the worst case your license grants them a path to proceed anyway, with a few extra rules to follow. Under no circumstances can anything you write in the licence expand the reach of the copyright you have in the original code, no matter how "wonderfully broad and general" the license may be.
Reading through the licenses and FAQs on your site did not improve my opinion of them in the slightest. Especially the part where you attempted to equate what Copilot does with trivial processing of the source code, e.g. with an editor, to argue that classifying the use of any output from an ML algorithm trained on copyrighted inputs as fair use is equivalent to eliminating copyright on software. The reality is of course much more nuanced. Certainly if the ML algorithm merely reproduces a portion of its input from some identifiable source, including non-trivial creative elements to which copyright could reasonably be applied, then the fact that the process involved an ML algorithm does not preclude a claim of infringement, and it would be reasonable to apply something like a plagiarism checker to the ML output to protect the user from accidental copying. However, the purpose of an ML system like Copilot is synthesis, extracting common (and thus non-creative) elements from many sources and applying them in a new context, the same as any human programmer studying a variety of existing codebases and subsequently writing their own code. The reproduction of these common elements in the ML output can be fair use without impacting the copyrights on the original inputs.
The real question here is why I'm wasting my time attempting a good-faith debate with someone who thinks that "spreading FUD is not necessarily a bad thing…".
I never said that the synthesis process was creative; rather the opposite. The point of a tool like Copilot is not to come up with new, creative solutions, but rather to distill many different inputs down to their common elements ("boilerplate") to assist with the boring, repetitive, non-creative aspects of programming. When the tool is working as intended the output will bear a resemblance to many different inputs within the same problem domain and will not be identifiable as a copy of any particular source. Of course there have been certain notable exceptions where the training was over-fitted and a particularly unique prompt resulted in the ML system regurgitating an identifiable input text mostly unchanged, which is why I think it would be a good idea to add an anti-plagiarism filter on the results to prevent such accidental copying, particularly in cases where it might be less obvious to the user.
> When the tool is working as intended the output will bear a resemblance to many different inputs within the same problem domain and will not be identifiable as a copy of any particular source.
You would have a great argument, and I would actually not be so mad at GitHub, if they had only trained Copilot on such boilerplate/non-copyrightable code. However, they trained it on all of the code in all of the public repositories. That's why we see:
> ...there have been certain notable exceptions where the training was over-fitted and a particularly unique prompt resulted in the ML system regurgitating an identifiable input text mostly unchanged...
The fact that this happens is a sign that GitHub did not train it only on boilerplate; they trained it on truly creative stuff. And they expect people to believe that the output is not under copyright. The gall blows my mind.
But even if it were to take the most repeated pieces of code and only synthesize stuff from that. Would that solve the problem?
Not really because some of the best (i.e., most creative) code is forked the most, meaning that Copilot saw some of the best code over and over.
Here's an experiment you can do (if you have access to Copilot): Start a new C source file, and in a comment at the top, say something like:
// A Robin Hood open addressed map.
map_item(
And see what it gives you. I would bet that it will suggest something close to [1], which is my code. (Ignore the license header; the code is actually under the Yzena Network License [2].) Notice that there is no "ymap_item()" function in my code, so this would not be triggering Copilot's overfitting.
The reason I think so is that Copilot doesn't just suggest one line at a time, which if it did, an argument could be made for boilerplate. Instead, it suggests whole sections of code. A good percentage of the time, even maybe a majority of the time, that is not boilerplate.
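For readers unfamiliar with the prompt, here is a generic sketch of what Robin Hood insertion looks like (this is not the commenter's code, just the textbook idea: a probing entry evicts any incumbent that sits closer to its home bucket; fixed capacity, no deletion, for brevity):

```c
#include <stdint.h>

#define CAP 16  // power of two; no resizing in this sketch

typedef struct { uint32_t key, val; int used; } slot_t;
typedef struct { slot_t slots[CAP]; } map_t;

static uint32_t hash(uint32_t k) { return k * 2654435761u; }

// How far a slot's occupant has been displaced from its home bucket.
static uint32_t probe_dist(uint32_t h, uint32_t idx) {
    return (idx + CAP - (h & (CAP - 1))) & (CAP - 1);
}

static void map_insert(map_t *m, uint32_t key, uint32_t val) {
    uint32_t idx = hash(key) & (CAP - 1);
    uint32_t dist = 0;
    for (;;) {
        slot_t *s = &m->slots[idx];
        if (!s->used) {                       // empty slot: claim it
            s->key = key; s->val = val; s->used = 1;
            return;
        }
        if (s->key == key) {                  // existing key: update
            s->val = val;
            return;
        }
        // Robin Hood rule: if the incumbent is closer to home than we
        // are, steal its slot and keep probing with the evicted entry.
        uint32_t d = probe_dist(hash(s->key), idx);
        if (d < dist) {
            slot_t tmp = *s;
            s->key = key; s->val = val;
            key = tmp.key; val = tmp.val;
            dist = d;
        }
        idx = (idx + 1) & (CAP - 1);
        dist++;
    }
}

static int map_get(const map_t *m, uint32_t key, uint32_t *out) {
    uint32_t idx = hash(key) & (CAP - 1);
    for (uint32_t i = 0; i < CAP; i++) {
        const slot_t *s = &m->slots[idx];
        if (!s->used) return 0;
        if (s->key == key) { *out = s->val; return 1; }
        idx = (idx + 1) & (CAP - 1);
    }
    return 0;
}
```

Even a stripped-down version like this is clearly more than one-line boilerplate: the eviction rule and probe-distance bookkeeping are exactly the kind of multi-line structure Copilot suggests in whole sections.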
Licenses can't dictate what is not allowed unless the user wants to use it in a way compliant with the rest of the license. If you decide to not follow the license at all, then it's effectively like any other copyright where you can use it without the owner's permission under fair use.
> it doesn't mean every single line in isolation is copyrightable
Microsoft did not just copy individual lines. They fed whole repositories into their model, ignoring the license (if it exists) even though they knew from the start that information generated by the model will be publicly available. Available usually out of context, but nonetheless - the scope of the input and intent are very clearly "everything" and "redistribution".
Just adding a filter/ML model to the output shouldn't matter. I dare you to build a Copilot clone trained on leaked internal Microsoft code and then try to argue the output is just a bit mixed up.
Copilot was trained on leaked internal Microsoft code that's on GitHub at the moment. Anyway, everyone seems perfectly OK with training language models on copyrighted text.
Everyone is not perfectly OK with training language models on copyrighted text. It's just that evilCorps do it anyway, and there's nothing anyone can do to stop them. I can't do anything. At best, I could get a Twitter account and complain to the ether. The copyright holders can't do anything against the mighty evilCorps, but that doesn't make them okay with it. The fact you believe this is just sad, and exactly what evilCorps want from you.
This goes beyond fair use or satirical/comedic effect. They are training their models to output text in the style of the authors being absorbed. The style is exactly the artistic element that copyright protects.
My explanation will not be popular here on HN, but I'm never one to shy away. Especially when asked directly.
Buying a book, an audio CD, or a DVD/Blu-ray grants you permission to read, listen to, or view that product as a single instance. You can lend them out, but that's all you're really allowed to do with them. The text, audio, or video is not owned by you to do with as you please. People obviously do not like that, and argue that making copies/backups is their right. Maybe that's acceptable, but we can agree that posting them on torrents, or sharing in any other manner a copy made from the thing you have, is not.
That said, training a model on someone's copyrighted text is not part of the agreement governing the usage of said text, whether it's a copyrighted magazine, newspaper, or book. If the people doing the training reach out to the copyright holders and get specific permission to use their copyrighted material in such a manner, then go ahead. The fact that people feel like they can do anything without the common courtesy of asking for permission is troubling to me; we've lost something as a society. There's no acknowledgment that someone created something by their own work, so that the creator can do with it as they please. A large portion of people believe that because something was created, they deserve/should be able to do whatever they want with someone else's creation. Including getting paid for derivative works of the original creation.
> The fact that people feel like they can do anything without the common courtesy of asking for permission is troubling to me that we've lost something as a society.
I see this sentiment a lot in FOSS spaces but I don't really understand why. The role of judicial process _isn't_ to provide a guiding moral philosophy around social organization. Depending on the government in question that's either a role of government functions or isn't something that should be guided at all. The role of law often (and yes, not in all governments, but at least in the US) is to offer a contract between the state and the individual.
I understand the potential for abuse here in using Copilot to regurgitate licensed works without adhering to the terms of the work's license, but I'm not fluent enough in law to know if this is illegal or not. Calling out and applying strict limits to this practice is certainly something I'm sympathetic to, and I'm very curious to see what the courts come up with. But swayed by a moral argument I am not.
In the realm of FOSS, I feel like it's not the same comparison. The FOSS devs created the work and released it knowing full well that someone else could update/modify it. Writing/art/videos are rarely released under copyright terms that allow this kind of modification. That's a huge difference. There are some FOSS releases that allow personal/private use while restricting commercial use. That is closer to the books/movies scenario.
I mean sure, but these are both legally defined works with licenses that govern their use. The difference is in the style of license. FOSS doesn't get a special moral valence because individuals are authors and they offer their work for editing and remixing under narrow circumstances. I mean, if Jeff Bezos today were to release code he wrote by hand with GPLv3 and were to cry foul over Copilot, I doubt anyone would care (or he'd get made fun of online.) Why does FOSS get treated so differently?
> People obviously do not like that, and argue making copies/backups is their right.
In some jurisdictions this is in fact their right by law as long as they own the original (the music/film industry of course used this as an excuse to slap additional fees on every sale of any storage medium). Redistribution is different however.
> My explanation will not be popular here on HN
How is this better than ’bring on the downvotes’?
Moving on, I’ll put this to you: you claim training a ML model against copyrighted text is in violation of the ‘permission’ granted by the rights holder. However, flip this on its head for a moment – that’s basically all human brains do. Clearly, the greatest writers of our time haven’t written their works in a vacuum. Rather, that historical reading and inspiration becomes sufficiently obfuscated that we deem something adequately creative enough to be granted its own copyright.
Fundamentally, how does Copilot differ, other than perhaps being a poor implementation? Is it by not being ‘adequately creative’ enough?
Is there some future version you could envision that would be, or is it the principle you’re arguing against?
Human beings commit copyright infringement all of the time. People have been lifting riffs from music, sometimes unconsciously, forever. This is why clean room implementations are done sometimes when writing software.
Also, you're taking the machine learning metaphor literally. AI models do not "learn", they're just statistical models, they don't understand anything. There is no comparison to human learning that isn't superficial or metaphorical.
The real question is how Copilot is any different than a compiler, or lossy encoding or compression.
I don't agree with your premise. Humans can consume creative works and be influenced; this is not in question. Unless one is an impressionist, they aren't going to try to recreate exactly the works of the artists they have been influenced by. Even if an artist does something inspired or influenced by another, they have pretty much stated that. Musicians cite prior bands, as do writers, painters, etc.; they all credit their influences.
I'm probably just a curmudgeon, but I don't understand the point of Copilot. So I'm probably not the best to opine about it. However, I am very opinionated about copyright in a manner that typically flows against HN groupthink.
> > My explanation will not be popular here on HN

> How is this better than ’bring on the downvotes’?
I totally missed the non-wrapped question.
Because I don't give a crap about down-votes/up-votes. I just know from experience my views on copyright do not gel with the majority views on HN. I was just acknowledging that fact. Conversations can be had regardless of votes. My views on Napster/MP3 trading are in the same realm (and somewhat related with copyright issues). I was a co-owner of a small music site when Napster was in its heyday, and we saw direct repercussions of people not buying music because they got it from MP3 trading. Groupthink here is all "things for free when I want it, how I want it", yet I still have conversations. I'm not afraid of a measly -4 points because my thoughts are contrary to groupthink.
At the same time, if something like this gets your goat, how is asking how something is better any better in and of itself?
> Software licenses have barely been tested in court...
OSS licenses have been litigated and upheld. Can't supply details of my own experience for confidentiality reasons but plenty of plaintiffs have prevailed in suits about violations of OSS license terms. My guess is the numbers are higher than you might think because a lot of the cases end in non-public settlements.
A confidential settlement does not mean that a licence has been “tested in court” or “litigated and upheld.” It means the parties thought the risk of losing was high enough to justify a settlement. The state of the law remains uncertain because cases are getting settled rather than litigated.
What about non-traditional-FOSS licenses? There is a lot of source-available not-OSI-compliant licensed software on GitHub like MongoDB, CockroachDB, etc., and that's clearly proprietary. If this thing is trained on that and generates what amount to snippets of that code then it's clearly violating those licenses.
Then there's private repositories. If they included those in the training data set that's even more actionable.
Personally I think this is software piracy at an absolutely unprecedented scale. Machine learning is just information transfer from the training data into weights in a model, a close relative of lossy data compression. Microsoft is now reselling all its GitHub users' code for profit.
> You're extremely overconfident about how this will actually play out.
I'd argue Microsoft too, was/is overconfident about how this would play out. I would have expected a little more caution on selecting the training data.
While they are not well tested, anything other than accepting that software licenses are enforceable kills the idea of licensed software completely. There is lots of room to change details, but somehow copyright and the fact that code is copied into computer memory need to be reconciled.
I don't see how. It might kill specific ideological licensing of software code, but the idea it'd kill software as a whole is pretty unbelievable. Software is too valuable to society.
As we're seeing, there's VERY little software where the specific algorithms or ideas in the software are what's valuable. The value comes from the ability to sell a service based on the software and operate it at scale. Like you said, how much SaaS is mostly open source stuff packaged up? Android is (sort of) open source, companies pay lots of people a lot of money to contribute to the Linux kernel where they give away the code they developed with that money, etc etc.
A software license, like any license, is a permission to operate.
> it doesn't mean every single line in isolation is copyrightable
It is if you can prove reproduction apart from your own original work (fair use). Unlike patents copyright doesn’t protect uniqueness. It is only a shield from reproduction, and if reproduction is demonstrable to a court you are likely at risk.
Copyright certainly matters. It's a big deal legally and economically all over the world.
Suppose that it's just a bad idea and shouldn't exist. Does that mean that I should release my code into the public domain? I think you could make a good case that even being totally opposed to copyright morally or pragmatically or otherwise, given that it currently is enforced in many places it's worthwhile to play along. For example, some people would prefer a world without copyright, but GPL their code, because it might prevent a greater evil.
Exactly. The copyleft side of me says you can't copyright instructions on how to bake a cake, or a fast route across a city, or a beautiful way to display colored pixels in a grid, or an efficient compression scheme for video data... because it's all intellectual, and not physical, "property". But society disagrees so a nice hack on copyright that perpetually keeps any of the above from being stolen and locked down by profit seeking psychopaths just early enough to the scene to make a buck, seems like the best interim solution.
If you abolish copyright, that will only make it easier for for-profit corporations to use FOSS. There will be nothing stopping them from using FOSS, unless people stop sharing their code altogether.
While true, if you abolish copyright then there is nothing preventing me from installing Microsoft Office on as many machines as I want without ever paying Microsoft a dime....
This is a common misconception: without copyright, Microsoft would still have many legal means to force you to pay for every copy of windows, from contract law to patent licenses. Without copyright there would not be free software and copyleft as we know it.
There is zero mechanism under patent law to enforce what you are referring to.
Patent law is about selling items, not consuming them, so they could prevent me from selling a clone of Office, but they cannot prevent me from installing Office.
As far as contract law goes, that would be between two parties, so if I obtained a copy of Office somewhere and did not have a contract with Microsoft, I would not be violating a contract with Microsoft. Copyright is the only mechanism they use to stop unauthorized distribution of their software.
How so? The MIT license allows you to do everything with the code. It doesn't allow you to sue the author, but that's about it. Here it is: https://opensource.org/licenses/MIT
No, it's not clear, and I guess that's up to the courts to decide.
But in my (non-lawyer) opinion - if the reproduced code is substantial/unique enough to be deemed to be covered by the license, then it's also substantial/unique enough to be subject to that license requirement.
>I don't use MS Github, I have no skin in the game
You don't have to use Github to have skin in the game.
As long as someone has access to your open source code, no matter where it's hosted, anyone is free to upload it to Github. The open source license of your code allows that.
>I hope there is at-least a $1000 award to every instance of AGPL and GPL license violation
So much this. If a neural network is capable of regurgitating code verbatim (with comments!), it's not a stretch to say it's a derivative work of the GPL code used to feed it.
> Yes, he did wrong and gross things, but in the same breath he's brushed under the rug, so are his ideas.
His ideas being "brushed under the rug" had nothing to do with his public "cancellation" that happened in the past few years.
Stallman has always been an extreme purist that prioritized his ideological stance over anything else that matters to users. And his ideas were "brushed under the rug" just as much 5 years ago (before public revelations about his misdoings) as they are now. It might just feel like he has been increasingly "brushed under the rug" more recently because he has been becoming increasingly irrelevant and more of just a spokesperson.
Stallman was looking out for USERS, not developers. The problem was that developers thought they were the ones Stallman wanted to protect.
GPL and Libre Software are about keeping software open from the dev to the end user. Non-copyleft "Open Source" is about keeping libraries open for devs to exploit in their closed-source products...
There is a big difference, I support Free Software, not "Open Source"
RMS is not the dictator of FOSS. There are plenty of valid competing opinions of what "freedom" means, and not all of them include legally compelling everyone to share. The MIT license, for example, is both older and more popular than the GPL. There have always been a lot of people who do not agree with his opinions.
Most of the people who go nuts when you point these things out are FOSS zealots reacting to the idea that FOSS licenses should be adjusted to prevent billion dollar companies from co-opting it for profit.
Profit is fine. Building anti-competitive monopolies that don't share and that seek to own more and more of computing was an unanticipated side effect.
Their link to why you shouldn't use GitHub[0] takes you to a page where they criticize GitHub for complying with US export controls. The FSF is a US corporation, why do they think that US export controls don't equally apply to savannah.gnu.org? And unlike FSF, GitHub has actually done the work of applying for export licenses so that developers in US-sanctioned countries can access GitHub[1].
There is a different and more important criticism listed too: GitHub is nonfree.
But GitHub could easily establish a non-US entity to host export-restricted code. And as for Savannah, if anyone had code they were worried about export controls for, Savannah could quickly and easily have an independent person host that repo outside the US.
> You agree that any and all content.. that you provide to the public Network... is perpetually and irrevocably licensed to Stack Overflow on a worldwide, royalty-free, non-exclusive basis pursuant to Creative Commons licensing terms (CC BY-SA 4.0)
Technically, a lot of people who copy from Stack Overflow are breaking CC BY-SA 4.0, since it requires attribution AND requires distributing code that uses it under the same license (I think; I am not your lawyer).
Given how the racist twitterbot AI turned out, along with L4 autonomous driving by 2017, I suspect that Copilot is going to suffer most from an incredibly high velocity of churned out security bugs and bad code. SWEs are probably going to get fired for using it and companies will need to ban it, even if the legal problems don't take it down.
I think Copilot is the wrong application of AI. It spits out what most coders would write for a specific problem. First, if many people have the same problem, then libraries are the solution, not copy-pasting. Also, just because many people do one thing doesn't mean it is the right thing to do, and you sometimes get code with security vulnerabilities.
Instead, I would like a system telling me about obscure things, traps, vulnerabilities, performance issues, etc... like a machine learning linter. The way I could see it work is by matching my code with bugfix commits. For example, if several commits replace "printf(buffer)" with "printf("%s", buffer)" and I write "printf(buffer)", I want an AI to tell me "code like yours is often replaced in commits, it may be wrong", bonus points if it can extract the reason from commit messages ("format string vulnerability") and suggest a replacement ("printf("%s", buffer)"), mega-bonus if it can point me to a good explanation of the problem.
Pissing lines of code is easy, I can do it, anyone with a couple weeks of training can do it, I don't need a bot to help me with that. Thinking about everything while I am pissing my lines is hard, and I will welcome a little help.
A nice thing about that approach is that it is unlikely to result in worse code than what I would have written by myself, because it will be designed to trigger only on bad code.
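The commit-mining linter idea above can be sketched in a few lines. This is a made-up toy, not an existing tool: it counts line patterns that commits delete (the "-" lines of unified diffs) and warns when new code matches a frequently-removed pattern. The function names, diff handling, and threshold are all invented for illustration.

```python
from collections import Counter

def mine_removals(commit_diffs):
    """Count line patterns that commits delete ('-' lines in unified diffs)."""
    removed = Counter()
    for diff in commit_diffs:
        for line in diff.splitlines():
            # '-' lines are deletions; '---' is the file header, skip it
            if line.startswith("-") and not line.startswith("---"):
                removed[line[1:].strip()] += 1
    return removed

def lint(source, removed, threshold=2):
    """Warn about source lines matching frequently-removed patterns."""
    warnings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        count = removed.get(line.strip(), 0)
        if count >= threshold:
            warnings.append((lineno, line.strip(), count))
    return warnings

# Two hypothetical bugfix commits that replace a format-string vulnerability
diffs = [
    "--- a/log.c\n+++ b/log.c\n-printf(buffer);\n+printf(\"%s\", buffer);",
    "--- a/io.c\n+++ b/io.c\n-printf(buffer);\n+printf(\"%s\", buffer);",
]
removed = mine_removals(diffs)
print(lint("printf(buffer);\n", removed))  # [(1, 'printf(buffer);', 2)]
```

A real system would of course need normalization (whitespace, identifiers) and the commit-message mining described above, but the "code that looks like frequently-deleted code is suspect" premise fits this shape.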
I'm sure there's an IDE out there which will do that already without any AI. Just need to lint your code, highlight the bad stuff it finds and suggest a refactoring.
Most of them already do, personally, I use SublimeLinter for SublimeText, and LSP support.
But linters work with hand-crafted static rules, which is good, and the idea is not to replace them. The idea is to use big-data techniques to find unwritten rules based on commit histories, on the premise that we are more likely to remove bad code than good code. So if your code looks like code that is often removed, it is most likely bad, even if it doesn't match an explicitly written anti-pattern.
Sounds good, although it would have to be context aware. For example, code that often gets removed in a production environment might be dissimilar to code that is typically removed in dev or testing.
There are also other triggers of code removal and refactoring that are outside the code base, such as an organisation migrating to a different platform. An AI trained on a large public commit history could encourage a general shift towards already-established big players, punishing smaller organisations.
I agree with your objective, however it's obvious why Microsoft didn't do this: they wouldn't have been able to make good on their billion-dollar investment in OpenAI/GPT-3, which they REALLY want to justify.
I don't think Copilot is useless at all. Today it's actually been very helpful for me with interactive, notebooks-based programming. And it's also just an early beta right now; as the model improves and the tooling around it matures a little more, I suspect I'll be using it a lot for interactive stuff.
Notebooks programming has a flow of "execute a small bit of code, check the results, and iterate", and this fits perfectly with Copilot since you still need to check if the suggestions work.
Maybe this kind of programming is where Copilot finds a niche, maybe not. I don't know. I'm skeptical of its use in larger applications where you can't trivially check if the code you wrote (with its help) did what you want. I think there needs to be a lot more tooling built around that to really make it compelling for larger applications like that, likely in the form of more editor tooling integrations. But I think it's promising. I wrote about that a little more here: https://phillipcarter.dev/posts/four-dev-tools-areas-future/...
An interesting initiative from the FSF, though I suspect most of these questions will only be answered when someone attempts a similar project in a more traditional, copyright-restrictive area.
An example I would like to see is a Cosinger, where the AI is trained on songs from YouTube and streaming services. In the final product, a user starts to sing and the algorithm attempts to sing along, giving the singer suggestions for how the song should continue. I could see a lot of musicians being willing to pay good money for such a program, and removing any obligation to pay for the training set would make it much more feasible to create.
There are already AIs that create music (though probably not trained on proprietary training sets). A Cosinger shouldn't be too far from that.
I predict it is very likely we will see a court case where a smaller actor takes publicly available information as training data and gets sued for copyright infringement. It will be interesting to see if, just like in the Pirate Bay case, the courts get creative. In the TPB case, the accused were found guilty under a Swedish anti-biker-gang law that was written with the intention of shutting down biker bars.
When Copilot came out, one thing it reminded me of was the ethical considerations of face generators in animation. The output naturally has some similarities with the training data, and it is trivial to use a limited set of actors in order to create faces with uncanny similarities to those actors. A question people asked (here on HN, if I recall) was whether you need permission from those actors to use them in the training set, or whether this would allow anyone to "steal" the face of public figures and create semi-lookalikes that can then be used in anything from porn to advertisement.
Read the rest of the paragraph. They think it is unacceptable and unjust from certain perspectives that are trivial for them. However, there are other perspectives that are worth exploring, and that is what this is about.
Just because someone has formed strong opinions about some aspect of a subject, doesn't mean they can't be open minded about another aspect. They plainly state that they don't have clear answers about many of the questions that Copilot raises, and this isn't going to be the last time that those issues appear. It is these broader issues that they want to hold discussions about, not Copilot itself. I don't see any reason not to accept this interest as genuine.
No, they have a position and arguments to support it, but those have nothing to do with the machine learning aspects, just with the fact that the software is proprietary.
They are asking for views on the machine learning, which they do not have arguments or a position on.
Having tested Copilot, I find most suggestions are based on existing code in your opened file. Furthermore, most snippets tend to be relatively short, so it feels more like a Stack Overflow answer than existing code.
Of course it is possible to make the model generate longer pieces of code that are potentially GPL, but you would have to make a deliberate effort to do so. It also tends to adopt your coding style.
But maybe the fact that there are no guarantees makes it unfair.
The difference is that Stack Overflow has taken the legal responsibility of making sure any contributions to the site are licensed in a way that allows users to copy-paste them into their own works, and has the authority to do as much. GH does not have the authority to, without authors' permission, launder their code through an AI "tumbler" and spit out shiny suggestions stripped of all license concerns.
I just checked the Stack Overflow terms, and they still say that all user contributions are licensed under Creative Commons CC BY-SA 4.0, which means that copying them into your own codebase is likely a copyright violation. Lots of people do it, but it's a well-known legal problem.
I would think that to combine the models, the software would need some internal method to differentiate between the licenses of the various code sources it pulls its suggestion "ideas" from, and to check the compatibility between those sources' licenses and your own choice of license for the project you're creating.
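A minimal sketch of that license-filtering idea: tag each training snippet with its source license and only surface suggestions compatible with the project's license. The compatibility table below is a deliberately simplified illustration (not legal advice), and all the names are invented.

```python
# Toy compatibility table: project license -> source licenses whose code
# could be surfaced as suggestions. Real compatibility is far more nuanced.
COMPATIBLE_WITH = {
    "MIT":         {"MIT", "BSD-2-Clause", "BSD-3-Clause"},
    "GPL-3.0":     {"MIT", "BSD-2-Clause", "BSD-3-Clause",
                    "Apache-2.0", "GPL-3.0", "LGPL-3.0"},
    "Proprietary": {"MIT", "BSD-2-Clause", "BSD-3-Clause", "Apache-2.0"},
}

def filter_suggestions(project_license, suggestions):
    """Keep only suggestions whose source license fits the project license."""
    allowed = COMPATIBLE_WITH.get(project_license, set())
    return [s for s in suggestions if s["license"] in allowed]

suggestions = [
    {"code": "def quicksort(xs): ...", "license": "MIT"},
    {"code": "def levenshtein(a, b): ...", "license": "GPL-3.0"},
]
# A proprietary project would only see the MIT-licensed suggestion
print(filter_suggestions("Proprietary", suggestions))
```

The hard part, of course, is not the lookup table but reliably tracking which training examples actually influenced a given suggestion.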
E.g., cleanliness as described by different linters/static-analysis tools? Can we actually make better code suggestions by choosing examples which are known to have fewer super-obvious flaws?
I would imagine saying "Copyright (c) appropriate rights holders on planet Earth" wouldn't satisfy most license attribution claims if they were ever tested in court.
I wonder if they could retrain the model on BSD- or MIT-licensed code only. How much of the open source code out there is licensed GPL vs. more permissive licenses? Does anyone know?
Interesting that they want to charge for the use of Copilot; I guess we will see this business model more in the future.
Haven’t watched the video but this makes a lot of sense.
I assume there are quite a lot of LeetCode solution repositories on GitHub containing exact problem descriptions and LeetCode naming.
So essentially it's copying and pasting from these solutions.
> It requires running software that is not free/libre (Visual Studio, or parts of Visual Studio Code)
A little nitpicky, but the only proprietary part it requires is the plugin itself, not the IDE—Copilot runs just fine with the Free build of VS Code compiled from source from GitHub, after flipping a switch to enable WIP APIs.
The main point is: Assuming the New section is a queue, and that the very same link is posted twice, should the first link be re-queued in the New section again? It was clearly a duplicate, although it was not flagged as such.
>Is Copilot's training on public repositories infringing copyright? Is it fair use?
My money's on yes, but this isn't settled until SCOTUS says so.
>How likely is the output of Copilot to generate actionable claims of violations on GPL-licensed works?
This depends on how likely Copilot is to regurgitate its training input instead of generating new code. If it only does so IF you specifically ask it to (e.g. by adding Quake source comments to deliberately elicit Quake output), then the likelihood of innocent users - i.e. people trying to write new programs and not just launder source code - infringing copyright is also low. However, if Copilot tends to spit out substantially similar output for unrelated inputs, then this goes up by a lot. This will require an actual investigation into the statistical properties of Copilot output, something you won't really be able to do without unrestricted access to both the Copilot model and its training corpus.
>How can developers ensure that any code to which they hold the copyright is protected against violations generated by Copilot?
I'm going to remove the phrase "against violations generated by Copilot" as it's immaterial to the question. Copilot infringement isn't any different from, say, a developer copypasting a function or two from a GPL library.
The answer to that, is that unless the infringement is obvious, it's likely to go unpunished. Content ID systems (which, AFAIK, don't really exist for software) only do "striking similarity" analysis; but the standard for copyright infringement in the US is actually lower: if you can prove access, then you only have to prove "substantial similarity". This standard is intended to deal with people who copy things and then change them up a bit so the judge doesn't notice. There is no way to automate such a check, especially not on proprietary software with only DRM-laden binaries available.
If you have source code, then perhaps you can find some similar parts. Indeed, this is what SCO tried to do to the Linux kernel and IBM AIX; and it turned out that the "copied" code was from far older sources that were liberally licensed. (Also, SCO didn't actually own UNIX.) Oracle also tried doing this to the Java classpath in Android and got smacked down by the Supreme Court. Having the source open makes it easier to investigate; but generally speaking, you need some level of suspicion in order to make it economic to investigate copyright infringement in software.
Occasionally, however, someone's copying will be so hilariously blatant that you'll actually find it. This usually happens with emulators, because it's difficult to actually hire for reverse engineering talent and most platform documentation is confidential. Maui X-Stream plagiarized and infringed PearPC (a PowerPC Macintosh emulator) to produce "CherryOS"; Atari ported old Humongous Entertainment titles to the Wii by copying ScummVM; and several Hyperkin clone consoles feature improperly licensed SNES emulation code. In every case, the copying was obvious to anyone with five minutes and a strings binary, simply because the scope of copied code was so massive.
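The "five minutes and a strings binary" level of detection can be approximated in a few lines: fingerprint overlapping token windows of two files and measure how much of one appears in the other. This is a toy, loosely in the spirit of MOSS-style fingerprinting; the keyword list, identifier normalization, and window size are all invented for illustration.

```python
import re

# Tokens kept literally; everything else is normalized to "ID" so that a
# simple variable rename alone doesn't defeat the comparison.
KEYWORDS = {"int", "long", "float", "for", "if", "else", "return", "while", "void"}

def fingerprints(source, k=4):
    """Hash every overlapping window of k normalized tokens."""
    tokens = [
        t if t in KEYWORDS or t.isdigit() else "ID"
        for t in re.findall(r"\w+", source.lower())
    ]
    return {hash(tuple(tokens[i:i + k])) for i in range(len(tokens) - k + 1)}

def containment(needle, haystack, k=4):
    """Fraction of needle's fingerprints that also appear in haystack."""
    fn, fh = fingerprints(needle, k), fingerprints(haystack, k)
    return len(fn & fh) / len(fn) if fn else 0.0

original  = "for (int i = 0; i < n; i++) { total += data[i]; }"
renamed   = "for (int j = 0; j < count; j++) { total += data[j]; }"
unrelated = "while (head != null) { head = head.next; steps++; }"
print(containment(renamed, original))  # 1.0: a rename alone doesn't hide the copy
```

The crude normalization also produces noise on unrelated code, which is exactly why real infringement analysis can't be fully automated: tools like this surface suspicion, and humans still have to judge "substantial similarity".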
>Is there a way for developers using Copilot to comply with free software licenses like the GPL?
Yes - don't use it.
I know I just said you can probably get away with stealing small snippets of code. However, if your actual intent is to comply with the GPL, you should just copy, modify, and/or fork a GPL library and be honest about it.
To add onto the FSF's usual complaints about software-as-a-service and GitHub following US export laws (which, BTW, the FSF also has to do, unless Stallman plans to literally martyr himself for--- oh god he'd actually do that); I'd argue that Copilot is unethical to use regardless of concerns over plagiarism or copyright infringement. You have no guarantee that the code you're actually writing actually works as intended, and several people have already been able to get Copilot to hilariously fail on even basic security-relevant tasks. Copilot is an autocomplete system, it doesn't have the context of what your codebase looks like. There are way better autocomplete systems that already exist in both Free and non-Free code that don't require a constant Internet connection to a Microsoft server.
>Should ethical advocacy organizations like the FSF argue for change in copyright law relevant to these questions?
I'm going to say no, because copyright law is already insane as-is and we don't need to make it worse just so that the copyleft hack still works a little better.
Please, for the love of god, we do not need stronger copyrights. We need to chain this leviathan.
> most countries won't pay any attention to what the US Supreme Court decides
Copyright lawsuits across nation-state lines are pretty much non-existent and not worth it. What happens in the U.S. is about as far as anyone who cares about copyright is going to look.
But that just becomes a copyright dispute with the country. The only thing preventing Microsoft from just not showing up to your court case is that they have presence there and want to continue doing business in the country. Imagine you write a project and some solo developer in the U.S. (that's not Microsoft) violates your copyright - the only way you would get damages or injunctive relief is by suing them in the U.S. or hoping the U.K. passes a judgement on them and extradites or sanctions them. If they never plan to go to the U.K. and the U.K. doesn't extradite them for nonpayment or noncompliance of whatever judgement you have against them, there's not really much you can do.
> Imagine you write a project and some solo developer in the U.S. (that's not Microsoft) violates your copyright
how is this completely different situation relevant in the slightest?
we're talking about possible massive, premeditated, industrial-scale copyright infringement by Microsoft, a large multinational with a substantial UK presence
not some random guy in the US
if Microsoft don't show up to the court: I win by default
I can then send in the bailiffs to start seizing their property (and their staff will be arrested if they interfere)
It's your prerogative who you sue, but again, I'm just describing how the law works in relation to copyright suits. That situation matters because there's no international law that says whoever you sue has to fly to your country and show up to your lawsuit; the UK only has jurisdiction over the UK (and incidentally some jurisdiction over Commonwealth nations via AJA 1920). Imagine it weren't "some guy" but a US-only startup selling only to U.S.-based firms, valued tomorrow at $100B: regardless of their size, you wouldn't have a way of seizing their property, since the UK can't seize property in another country (I guess not without that country's permission). The only way they would be punished is if they're sanctioned and thus can never do business within the UK, or they somehow get extradited. Microsoft indeed will show up because they want to keep selling Windows and Office 365 there, but otherwise, as I've said, copyright lawsuits across nation-state lines basically don't happen.
Is this constructive?
I think it's reasonable to criticize an organization for its leaders, and question their actions accordingly. And he is on the board.
I'm using GitHub to publish my code, and honestly I don't care whether Copilot was trained on it. I published it, and in the end somebody can do anything with it without giving a damn about licenses, copyright, etc. That's the truth of open source.
This was the same mentality that brought copyleft to the masses in 1984. While you may not care, there are others who do care about the sanctity of license agreements. This is an argument where staying silent means you accept this approach. Of the millions of open source projects, a large portion of the contributors ARE speaking up because they don't find this acceptable. I personally think Copilot is the future, and all this discussion is going to do is bring a license-usage feature to Copilot (e.g. "I want only GPL code" or "I do not want GPL code" in my suggestions).
Please continue using GitHub as you were, but maybe consider acting on your words and either removing or changing the licenses within your code that do not represent your ideals. Nothing is preventing you from releasing code into the public domain, so do that!
> Of the millions of open source projects, a large portion of the contributors ARE speaking up because they don't find this to be acceptable.
Is this true? Is there really a large portion of contributors speaking up against this? I got the opposite sense, that it was a very small portion of contributors speaking up against this but I don't have any evidence one way or the other.
No, that's your opinion, which as it turns out also has no legal basis. For me, I want proper attribution from people who use my code. And for any code that I release that's under copyleft, I absolutely do want that license followed.
You seem to be fine releasing your stuff into the public domain, and that's great that you want to do that, but you don't speak for everyone.
This is why there are a multitude of different open source software licences. Because some people care more than others about the terms in which their code is used by others.
> We already know that Copilot as it stands is unacceptable and unjust [...]. Activists wonder if there isn't something fundamentally unfair about a proprietary software company building a service off their work.
> We will read the submitted white papers, and we will publish ones that we think help elucidate the problem.
Doesn't give me hope they're aiming for unbiased opinion.
I would be very surprised if any of the published papers don't closely align with the FSF's a priori position.
It sounds like they have a legal premise and they want to work out the implications, not to open up discussion to every quibble about the FSF's values. Having an opinion on the legal issues around their licenses and values seems sort of essential to what the organization does.
The word "unbiased" seems to be doing a lot of heavy work in your comment. The FSF is inherently biased towards its project -- how is that a problem?
> The word "unbiased" seems to be doing a lot of heavy work in your comment. The FSF is inherently biased towards its project -- how is that a problem?
That's a straw man; I never said (nor do I think) the FSF should not be biased towards its project.
However, I would be more willing to trust the results of this call if I had confidence that all solid arguments are presented, even if they're not aligned with FSF's agenda. Hiding them won't make them disappear - you might as well get as informed as possible about the issue, especially if you care deeply about the issue and agree with the FSF.
Anyone feel like the FSF has moved from being engineering idealists to a very lawyer-driven type of org?
The big GPLv3 push and development: plenty of attacks on folks actually shipping product on GPLv2 and building communities around that model (which keeps software free but allows users of the software to do what they want with it, pretty much including putting it in devices that are locked down: cars, TiVos, etc.).
Here's an opportunity to really advance in an interesting area with ML -> something that may open up programming to more people -> may advance computers' ability to program and modify their own programs in the long run.
And regardless of the FSF attorney stuff, places like China, or tiny little LLCs with no assets, will very likely use the wonderful amount of code on the web to develop solutions in this space, even if the FSF claims everything is a violation. Where is the vision from the FSF anymore?
One thing that's been sad about the FSF -> it's gone from what I would consider a forward-looking idealism sort of thing -> here's how we could do / make cool stuff that lets communities work together -> to now sort of a legal-compliance type org that really is focused on "actionable claims", "protected against violations", etc.
Question - do the Linux community and other successful larger open source communities welcome the FSF and their attorneys into the discussion? I can hardly imagine the BSDs or the Linux folks really connecting with them anymore.
Is there space for a different group, maybe a collection of actual developers shipping code in larger communities, to get together, no FSF / SFC lawyers present, to think creatively about the future? What should we be working for, what is fair to everyone, what helps society, what works around pro-social community building?
A tool that helps with cross language building blocks for common functions etc (stackoverflow on steroids) - just how bad is this?
This is more of a tangent, but I found this framing very interesting:
> which keeps software free but allows users of the software to do what they want with it pretty much including putting in devices that are locked down - cars / tivo's etc
The FSF considers the user to be the one using cars/tivo's/other devices. In their view, this was a design flaw of gplv2 that it allowed locking out end-users of their devices.
For Linux this was not the case. The important part was that modifications/extensions were shared (and maybe even upstreamed), while end-user access wasn't important.
The case of tivoization split the interest between the mostly moral "I want freedom for the end user" and the more immediately beneficial "If you use my code, I want reciprocity for modifications".
I personally believe that today the latter case won, even for a lot of non-gpl software that gets lots of contributions e.g. via github for lots of different reasons, but the moral case gets more dire.
Looking at security for older (or shockingly often even current) devices, right to repair, and lots of other issues concerning the effective loss of rights with more modern devices, the FSF's concerns were often accurate; but its increasingly hostile approach to "proprietary" IP made GPLv3 and similar licenses unpalatable to the larger open source community, which excluded them.
Right - FSF ended up with a user view. Problem was the developers are the one actually writing the code and picking licenses, and the FSF moved away from really talking with them. I think this was a big shift.
I’m all for advancing machine learning but given how much big corporations aggressively defend their IP, it’s a hard pill to swallow if someone shrugs off a potential misuse of open source code. The law is the law and if it’s ok for Microsoft to defend their copyrights then it’s ok for the FSF to defend my copyrighted code too. The fact that I licensed it GPL was intentional — if I didn’t give a crap what happened to the code then I’d have used BSD or similar. But I chose to place restrictions and I’m very much interested to see if training proprietary AI models are legally covered under those restrictions.
Sure, but GPLv2 was very freedom oriented. In practice, enforcement was relatively sparse and more educational, I thought. I.e., release the TiVo source code, but we don't care that TiVos are locked down.
Is anyone building strong communities on AGPLv3 / GPLv3? I feel the momentum shifted towards Apache / MIT style licenses unfortunately.
> Is anyone building strong communities on AGPLv3 / GPLv3? I feel the momentum shifted towards Apache / MIT style licenses unfortunately.
While the corporate momentum switched to Apache/MIT licenses, there are strong communities built on AGPLv3/GPLv3.
* Nextcloud - file hosting (AGPLv3)
* Source Hut - git hosting (AGPLv3)
* StreetComplete - OpenStreetMap editing (GPLv3)
* F-Droid - Free Software "app store" for android (GPLv3)
* NewPipe - alternative Youtube frontend (GPLv3)
While these aren't necessarily used by large corporations, their individual communities are thriving and strong.
The shift toward SSPL and Commons Clause licensing is another argument in favor of AGPLv3 licensing. Amazon/Google often won't touch your AGPLv3 code (and you can still sell proprietary licenses to other companies that can't/won't use AGPLv3).
(A)GPLv3 actually has seen some real growth on the corporate side -> it's commonly used by proprietary tech companies as a sort of poison license (MongoDB's SSPL takes the same idea even further).
The way this works is all contributors are required to sign a CLA -> the corporate developer can then use their code under ANY license, and most importantly can integrate it into proprietary products or sell it to others.
The code is then released as an AGPLv3 to be "open source" - but literally the only company with the "super" rights to license / make money off it is the corp dev.
It's kind of genius -> so I think we may see more (A)GPLv3 stuff coming this way. The corp developer can then offer for example a hosted version of the software WITHOUT releasing all the related code! But anyone else would have to release their code.
> The way this works is all contributors are required to sign a CLA -> the corporate developer can then use their code under ANY license, and most importantly can integrate into propriatery products or sell to others.
If a third party is contributing a lot of code that is highly relevant, the third party is under no obligation to sign the CLA. The third party is entirely within her rights to refuse to sign the CLA and distribute an AGPLv3-only fork of the software.
If this fork is significantly better than the original, the original authors are out of luck when it comes to proprietary relicensing.
This is what happened with OwnCloud/Nextcloud. OwnCloud was AGPLv3 but required a CLA. OwnCloud became OpenCore and started distributing "enterprise" features as proprietary upgrades. Some developers were unhappy with this and forked OwnCloud and started developing Nextcloud. All contributions to Nextcloud are AGPLv3 only and cannot be re-licensed by Owncloud. Interestingly enough, any new code released under AGPLv3 by Owncloud can still be used by Nextcloud.
> But anyone else would have to release their code.
Which I think is perfectly fair: you are getting a full product, and you can do with it as you please (including profit off of it), as long as you publish your changes too!
The fact that the original copyright holder has the rights to close it off for future developments is completely natural, and if you do not want to allow them to do that, don't sign a CLA and fork. Oh, there's a cost in maintaining a fork? Pick your poison then :)
To me what matters is that once you get the software, you have freedom to use and modify it. I am ok if you do not have the "freedom" to close it off. If you start being a bigger contributor than the original company, you avoid all of the problems with a fork, but you can't say you did not benefit from the original AGPL release.
> The code is then released as an AGPLv3 [...] but the only company with the rights to make money off it is the corp dev.
Actually anyone that has the AGPL code can sell and/or make money from it. People regularly buy GPL software and pay monthly subscriptions to hosted AGPL software.
If you can't compete without having some code as "trade secrets"; that's your failed business model, not a fault of the license.
Qt has switched to GPLv3 and is going pretty strong as a community. Can't find the figures for the official forum, but an unofficial one has 75k members.
> The big GPLv3 push and development - plenty of attacks on folks actually shipping product on GPLv2 and building communities around that model (which keeps software free but allows users of the software to do what they want with it pretty much including putting in devices that are locked down - cars / tivo's etc).
The users of the software are the owners of the devices. The distributors are the ones locking down the devices to prevent the users from modifying the software (often so that the distributors can control something else the users are doing).
GPL is about end-user freedom (as opposed to software distributor freedom). This is why GPLv3 exists.
GPL used to be targeted at DEVELOPERS of software - the share and share alike model. These developers would in some cases use the GPL'ed software in locked down devices (many / most android devices are pretty locked down - but developers contribute to a GPL kernel).
So yes, FSF created GPLv3 to focus on USERS freedoms, but the users are not writing the software - so it remains the devs who pick licenses.
>And regardless of the FSF attorney stuff, places like china, ....
So your argument is that if China does not care about licenses, neither should we? The thing is, I am fine with that: the Windows source code has leaked, so let's train an AI on it too.
I think the fact that MS did not train on proprietary code is a clear sign that doing so is either not legal or not safe. So the question is why GPL or other licenses would be safe; I think you need the authors or the licenses to give you permission to use the code as training data in black-box, locked, proprietary algorithms.
This isn't ML, it is a ripoff and is violating clear software licensing terms. https://news.ycombinator.com/item?id=27710287