What amazes me is how predictable(?) all of the recent issues were.
Don't get me wrong, the folks behind Copilot are clearly, without any doubt, smart, creative, and capable. But then... none of these issues (reproducing licensed code verbatim, non-compiling code, getting semantics wrong, and now this) are 0.01% edge cases that take specialized knowledge to see or trigger. I remember some of them being called days ago in the initial HN thread by people who didn't even have beta access.
I really wonder what this announcement/rollout looked like on the management side of things. Because a) these shortcomings must have been known beforehand, and b) backlash from people who feel their jobs are threatened or their open source work has been "stolen" was (I guess) foreseeable? I've already read calls to abandon GitHub for competitors; this can hardly have been an acceptable outcome here.
Nevertheless, Copilot is still one of the most innovative and interesting products I've seen in a while.
I’d be very surprised if management didn’t, at the very least, have their heads in the sand about the potential failure modes. There are often “must deliver” dates at large companies because someone made a promise about a deadline, and now heads will roll for missing it whether anyone actually cares or not. So long as middle management thinks the C suite is watching them, they are desperate to meet quota.
Hilariously, this results in stuff like Copilot getting released to great big legal problems. Only then does the C suite actually notice the project and get upset that it is a legal nightmare for them.
I think the real secret to winning in big tech is that your job is just to keep your head down and keep the money rolling in without causing headaches for higher ups. Increase sales, make customers happy enough to keep paying, maybe release a cool product. But more importantly, don’t cause a major outage, burn the PR team, or get caught up in a legal kerfuffle.
You make a good case for innovation through acquisitions rather than in-house development. Once the derisking aspect is factored in, acquisitions suddenly look a lot more attractive.
I saw this type of thing coming a mile away and left GitHub as soon as they were bought by Microsoft. TBH even despite my inherent distrust of Microsoft this is way beyond the hypotheticals I had in mind when I deleted all content from my GitHub account. Now I’m worried about VSCode as another potential vulnerability vector. Has anyone done a recent independent audit of what is sent across the wire to Microsoft from VSCode?
Yes, it looks like unfinished work. They could have:
- implemented plagiarism detection to attribute code to its source (where possible), then presented the result together with a link. This would make Copilot the same as Googling your answer and then copy-pasting the code: you are fully responsible
- implemented some regexes to filter out secrets, or even better, changed the secrets to random values in the training data (a rough sketch of this is below)
- implemented a robots.txt-like system so people have a way to ban the Copilot spider from their code
If they had done these things before release it would have been so much better. And they are simple fixes, so I see no technical obstacle.
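To illustrate the secret-filtering idea, here is a minimal Python sketch under the assumption of a tiny hand-picked pattern list; the SendGrid/AWS/GitHub patterns are purely illustrative, and a real scanner ships hundreds of provider-specific rules:

import re
import secrets
import string

# Illustrative patterns only; real scanners use large, provider-specific rule sets.
SECRET_PATTERNS = [
    re.compile(r"SG\.[A-Za-z0-9_\-]{22}\.[A-Za-z0-9_\-]{43}"),  # SendGrid-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),                            # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                         # GitHub personal access token
]

def randomize_secrets(source: str) -> str:
    """Replace anything matching a known secret pattern with random filler of the
    same length, so the model still sees a plausible token shape but no real key."""
    def filler(match: re.Match) -> str:
        alphabet = string.ascii_letters + string.digits
        return "".join(secrets.choice(alphabet) for _ in match.group(0))
    for pattern in SECRET_PATTERNS:
        source = pattern.sub(filler, source)
    return source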
Should we really be forgiving one of the world's richest corporations for launching a marketing campaign with expansive claims for a half-baked product because, in the fine print, they call it a technical preview?
Let's widen this a bit (possible slight hyperbole ahead, but generally this is my feeling now):
It is more the rule than the exception that any service using AI is less usable than the previous solution. That is, unless you measure how usable they are for extracting money from gullible investors or for making a laughing stock of their users and/or developers.
In fact, while I'm certain they exist, I cannot right now come up with a single product that I use for anything other than fun or creativity (games, painting) that has been improved by recent AI additions.
Come to think of it, maybe Google Translate qualifies, but that depends on how you define recent.
Oh, and by the way maybe there is something that qualifies as AI in some of the new translation web applications I've seen recently.
It depends on whether they fix the problems before launch and whether the issues found during preview cause Real Problems (eg, secrets found this way resulting in significant cybercrime).
DuckDuckGo.com has been rising exponentially for years. And yes, mathematically exponentially, not cool-kids-speak or journalistically exponentially: https://duckduckgo.com/traffic
What do you suggest that will make a difference in yet another human era of aristocratic capture of our lives and agency?
They do not have an information advantage, just a political one.
And we can see how concerned the general public is with taking control of politics for its gain. It very clearly prefers to be hands off and let a minority manipulate public agency for their gain.
Presumably you can get your money back if you don't like the results.
edit: well, this appears to be unpopular. It's a preview release; nobody is using this in production. They are offering the tech for free while they iron out the bugs and determine where things don't work as everyone expected. The fact that this is doing things they might not have expected suggests that this part of the process was necessary.
If you expected this to be production-ready, then you've misunderstood the purpose of a preview release. This applies to MS the same as it does any other developer.
The point of GP is that the grand PR campaign doesn't really state it's an unfinished product and that they are looking for free testers and security and legal audits.
Absolutely. I also believe that Copilot is getting more flak than appropriate at the moment.
To rephrase my comment above: I don't want to blame the team behind Copilot for not getting everything right on the first try. Neither am I in a position to do so, nor would I want to live in a world where smart people aren't allowed to make mistakes.
What irritates me is that there are two possible scenarios here:
1) They knew about potential issues and decided to release it anyway (without at least addressing them verbally).
2) They didn't.
And frankly, I don't know which one I like less. Even though it's still a beta/preview, either option seems to signal a degree of negligence that feels unnerving given the potential impact of such a system.
That being said, if we do live in scenario 1), then I am certain that better framing could have prevented the PR fallout that we're seeing right now (at least partially). IMHO, GitHub (the platform) is still a great product, after all.
Unfortunately this is something large corps like AWS have been getting away with for a while now: releasing half-baked product clones as GA when in fact they're still clunky and probably beta at best.
This is a good point. There is a lot of outrage now, but the product when finished might have every single wrinkle removed.
This one, for example, seems like it should be pretty easy to fix. You could even make a hack that replaces ALL sufficiently long and sufficiently random strings with garbage/zeroes at the point of recall. The difference from the case of regurgitating GPL sources is that the fact that something looks like an API key can be deduced from Copilot's output alone, so you don't need to track it through the system like you would with a system of attribution.
You don't. The logic is unchanged if the data changes. A snippet of code would be unchanged, apart from the data.
// Add an arrow icon
var arrow_icon = base64decode("00000000000000000...");
add_image(arrow_icon);
That is: the prerequisite for this approach being viable is the assumption that "code" and "data" are distinct, and that data can be seen as irrelevant placeholders. In the example above, I was after the code to add the icon, not the icon payload itself.
There are obvious borderline cases, like large numeric constants that are actually a core part of the logic. E.g. a method that multiplies by Pi to 14 digits wouldn't work very well if those digits were replaced by zeroes. So most likely numerical constants would need to be left alone.
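For what it's worth, here is a minimal Python sketch of that recall-time hack; the length and entropy thresholds are guesses, and since it only rewrites string literals, numeric constants like a 14-digit Pi are left alone by construction:

import math
import re
from collections import Counter

STRING_LITERAL = re.compile(r'"([^"\\]*)"')  # naive: double-quoted literals only

def shannon_entropy(s: str) -> float:
    """Bits per character of the string's empirical character distribution."""
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def scrub_suggestion(code: str, min_len: int = 24, min_entropy: float = 4.0) -> str:
    """Zero out string literals in a suggestion that are long and look random,
    i.e. plausible API keys or other secrets."""
    def replace(match: re.Match) -> str:
        value = match.group(1)
        if len(value) < min_len or shannon_entropy(value) < min_entropy:
            return match.group(0)  # leave short or low-entropy strings untouched
        return '"' + "0" * len(value) + '"'
    return STRING_LITERAL.sub(replace, code)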
Oftentimes secrets are numerical constants. In your own example, the icon is a base64-encoded number. How would you tell secret numbers apart from the rest?
Base64 isn't numeric, it's alphanumeric. The only reason this is reasonable (again) is that almost all secrets like API keys or complex passwords maximize their information content and are therefore alphanumeric (or better). Base64-encoded data does too, and is an innocent casualty of that censorship.
They meant that a number written in hex (base16) is still a number, even though you use some letters. Similarly, a number written in base64 is still a number.
Kite does only a fraction of what Copilot is currently doing. It is great at suggesting function names and parameters, but it does not really suggest complete code or generate somewhat new code.
I don't know if that's true. Just because you are "smart, creative, and capable" does not mean you can predict every possible outcome or are incapable of missing the obvious.
I have been on both sides of that: I have had to point out obvious flaws in an idea to very smart people, and have had clearly obvious flaws pointed out to me in one of my ideas...
I think it is completely possible that some or even all of the issues co-pilot is facing were unknown at the time of release, even if they are obvious to some
Though they could have proofed the small number of handpicked examples on copilot.github.com to check that they compiled and didn't blow up on first run. Or gone one further, and checked that they did what they were supposed to, in a somewhat reasonable way.
Unintentional copyright violations and “leaking” of secrets people accidentally committed to public repos aside, my main issue with Copilot is that I don’t think it actually makes coding easier.
Everyone knows it’s usually far easier to write code than to read code. Writing code is a nonlinear process: you don’t start from the first character and write everything out in one single pass. Instead, the logic of the code evolves nonlinearly—add a bit here, remove a bit there, restructure a bit over there. Good code is written such that it can be mostly understood in a single pass, but this is not always possible. For example, understanding function calls requires jumping around the code to where the function is defined (and often deeper down the stack). Understanding a conditional with multiple branches requires first reading all the conditional predicates before reading the code blocks they lead to.
Reading, on the other hand, is naturally a linear process. Understanding code requires reconstructing the nonlinear flow through it, and the nonlinear thought process used to write it in the first place. This is why constant communication between partners during pair programming is essential: if too much unexplained code gets dumped on a partner, figuring out how it works takes longer than just writing it themselves.
Copilot is like pair programming with a completely incommunicative partner who can’t walk you through the code they just wrote. You therefore still have to review most of it manually, which takes much longer than writing it yourself in the first place.
However, I've worked with people who struggle to write in English without introducing random punctuation, and don't "see" (or care about) the text on the screen enough to go back and fix it.
I think Copilot will be a great benefit to the lazy programmer, who understands the semantics, but just can't be bothered to get the indentation or other syntax correct.
I 100% agree, but that’s exactly what code linters do, which have been around for decades.
That said, a more sophisticated linter might be useful in catching non-idiomatic, but syntactically/stylistically valid code that would thus be flagged as “valid” by current linters’ simple automata.
My main concern is it will give non-technical managers funny ideas about what goes into writing code. Writing out what you want to do is the easy part. Figuring out how to do what you want to do in context of the broader environment is the main challenge and that requires time to think and reason.
Copilot, as far as I know, also does not seem to factor in the greater context of the application/code you're in when auto-completing these tasks.
To me, this is a huge part of modern-day development. It's not only about producing functionally correct code, but also code that integrates well and is semantically relevant to the broader context of the application itself.
That doesn't mean Copilot's input will have no value, but it just means that developers will generally need to refactor that code in a way consistent with the app they're building.
I think if your code requires reconstruction of nonlinear pieces to read it, you’ve written bad code. Fundamentally, a program is a list of instructions for a computer to run. The more linear that list is, the more efficiently the computer can run it. Linear code is also much simpler to read and understand for us humans as you pointed out. Iterate on the code until you find the linear path through it, otherwise you’re going to be in a world of pain if you need to understand it in the future.
In principle I agree, but it often can’t be avoided. Function calls, loops, and even things as trivial as Boolean short circuiting are all examples of essential, unavoidable nonlinear code.
The question is easily answered by checking their FAQ:
> What data has GitHub Copilot been trained on?
> GitHub Copilot is powered by OpenAI Codex, a new AI system created by OpenAI. It has been trained on a selection of English language and source code from publicly available sources, including code in public repositories on GitHub.
It's such an issue that I believe AWS scans every commit pushed for secrets and disables them.
I know because I accidentally pushed a secret myself. Mistakes like that can happen super easily when it's a simple project and you've not set up all your things correctly.
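A tiny pre-commit hook can catch most of these accidents before they ever leave your machine. A rough Python sketch, with purely illustrative patterns (dedicated tools like git-secrets or truffleHog do this far more thoroughly):

#!/usr/bin/env python3
"""Naive pre-commit hook: block the commit if staged changes look like they
contain an API key. Illustrative only; prefer a dedicated secret scanner."""
import re
import subprocess
import sys

PATTERNS = [
    re.compile(r"SG\.[A-Za-z0-9_\-]{22}\.[A-Za-z0-9_\-]{43}"),  # SendGrid-style key
    re.compile(r"AKIA[0-9A-Z]{16}"),                            # AWS access key ID
]

def main() -> int:
    # Only inspect lines being added in the staged diff.
    staged = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in staged.splitlines():
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for pattern in PATTERNS:
            if pattern.search(line):
                print("Possible secret in staged changes:", line[1:].strip(), file=sys.stderr)
                return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())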
Can we please stop (mis)using the term "AI"? It just does not live up to most people's expectations.
Copilot is a glorified Markov-chain autocomplete sitting on a huge dump of data. It is not aware of constructs such as "licenses" or "secrets" that most people would have expected from an AI. To prevent it from spilling secrets everywhere, a developer ~~should teach the AI a concept of secrets and the meaning of licenses~~ has to implement a filter. A regexp-based one will do, I guess.
Yeah, this. Modern AI is not AI - AI is synthetic machine life and essentially always has been, in fiction and non-fictional idealism.
Deep neural networks have stuck us in hope-fueled uncanny valley, and very smart people tend to become very confused about their technology when they're subject to it.
This technology definitely has its place in heuristic programming, but it is not AI.
Someone will definitely be quick to reply with the "AI is a moving goalpost, things stop being called AI when they work, for example..." - so I'll offer my counterpoint up front. These things were never AI except in marketing lingo and in connection to the research in machine learning. The common folk definition of AI doesn't change - it's still the same vision of computers from science fiction, with which you can converse, and which can think better than you (except in some specific ways in which they are super-dumb - this is necessary for the story to have any plot).
Right. And just to be clear I love the field and most everybody in it, specifically for their idealism - I'm a practitioner myself - clearly everyone involved in the modern field of AI wants to leave a legacy of beneficial impact, and sees AI as their tool to do so. I just think if we're looking for life we won't find it in the gates of a transistor.
Prior to GPT-2, I would have agreed. With that and GPT-3, I think most people from the 70s familiar with science fiction versions of AI would think this was pretty close to how they conceived of AI, at least until they were educated on the specifics of how it works.
I said this having GPT-3 in mind too. To be fair: my mind was absolutely blown when I first got to play with it, and I'm still deeply impressed by it. The experience altered my beliefs on human intelligence. I've always appreciated the joke that sometimes humans can be hard to distinguish from a Markov chain, but GPT-3 actually made me take this seriously - the quality of output on a 1-5 sentence range is astonishing in how natural it feels.
Still, it doesn't take more than a few sentences from GPT-3 to realize there's nothing on the other side. There's no spark of sapience, not even intelligence. There's the layer that can string together words, but there's no layer that can reflect on them and check whether they make any kind of sense.
To be fair, "sapience" may be a bit too high for the lower bound of what is an AI. I still think the bound is on that control / reflexivity layer. It's the problem that stifled old-school symbolic AIs. To this day, we can't even sketch an abstract, formal model for this[0]. Maybe DNNs are the way to go, maybe they aren't. But GPT-3 isn't even trying to solve that problem.
I'm not sure when I'll be ready to call something a proper AI, but I think it would first need to demonstrate some amount of on-line higher-level regulation and learning. I.e. not a pretrained, fixed model, but a system where I could tell it, "look, you're doing this wrong, you need to do [X]", and it'll be able to pick up on a refined pattern with a couple examples, not a couple hundred thousand. For this, the system would need to have some kind of model of concepts - it can't be just one big blob of matrix multiplications, with all conceptual structure flattened and smeared over every number.
--
[0] - I think the term of art here is "metacognition", but I'm not sure. It looks like what I'm thinking about, particularly metacognitive regulation.
> SendGrid engineer reports API keys generated by the AI are not only valid but still functional.
> GitHub CEO acknowledges the issue... still waiting for them to pull the plug
I agree this is an issue for Copilot as well - but it's really on SendGrid to invalidate keys that are known to be leaked?
Yes, that's inconvenient for the affected customers - otoh they won't get billed for other people's usage - or dinged for someone spamming using their keys...
Most SendGrid customers are in their public IP pool. They don't tolerate spam on those IPs because it's difficult to manage with the spam lists. So they are definitely proactive about killing leaked keys.
I leaked one when I accidentally left it in a repo I was making public. It took 15 minutes for SendGrid to drop it after the repo went public.
It does not generate secrets. The Twitter conversation does not mention that word.
Most certainly, it regurgitates secrets it has seen on crawled repos. Can the title be adjusted, please?
I think ‘generates’ means produces this output in this context. It’s the correct term for the technology being talked about. Nobody is confused that it’s producing new cryptographic tokens.
I was confused. "Copilot generates valid secrets" sounds like it can be used to generate new secrets that are valid in format or something. The headline is misleading, even if you personally weren't misled. The secrets are not being generated, they are real secrets appearing in generated code.
They are being generated. It's a Generative Pre-trained Transformer. It's generative. It generates things. That's what this technical term means. It's correct.
I think a generative transformer can be said to generate text. It doesn't generate new words. The words it uses to generate the text are copied verbatim from the input. What's emergent is the combination of those words into text.
The generated output has building blocks (characters, words) which themselves are not "generated" in the sense that they are novel.
A random number generator generates numbers. It doesn't generate the digits used to represent the random numbers. Those are taken from a set (such as 0..9) and just "used".
The issue is whether the headline can be easily misunderstood, not whether it is technically correct. Why not make it unambiguous? “GitHub Copilot output includes secrets” or something.
Terms have different meaning in different context, I think both terms are valid in this case, thus a qualifying word to remove ambiguity is the normal solution.
> I'm going to assume it will also regurgitate malicious code.
Well, if it never replicated or even randomly generated malicious code, that would imply that Copilot could somehow solve the Halting Problem or - at the very least - understand the intent and purpose of both its output and its training material.
Keep in mind that the very definition of "malicious code" is highly subjective, plus the intent and purpose aren't necessarily encoded in the program itself. If the latter were the case, there would be no need for documentation, requirements or specs.
When I say "malicious code," what I really mean is some well-known patterns of malicious code, not all malicious code in general. Just like we are surprised about "secrets" being regurgitated when we mean "API keys."
Training on code that unintentionally has vulnerabilities is a problem, but I'm even more worried about bad actors intentionally putting code with vulnerabilities on GitHub with the hope that it will become training data. Bad actors might learn how to disguise code to sneak it into Copilot (if disguise is even necessary) and introduce backdoors, etc. It could be especially dangerous because of the "stamp of approval" Copilot has from GitHub/Microsoft. People who would not copy/paste code from the web might feel a false sense of security using Copilot.
I totally expect this will happen. I bet this is already happening. As long as they continue to train Copilot on third-party code that wasn't thoroughly, manually vetted by them, this vector of attack will remain open - and mitigating it falls into the domain of... spam filtering.
It's really kind of comical at this point. The more this copilot bs continues to be a thing, the more it's making Github seem irresponsible/careless at best.
The cynic in me thinks that marketing folks at Microsoft/GitHub are giggling endlessly at all the stories giving them free and extreme publicity. This is reinforced by the recent post by GitHub 'analyzing' Copilot's code regurgitation, instead of retracting it and retraining it on a more carefully defined subset of codebases.
This thing is worthless at best, annoying at everything it does, and terrifyingly capable of destroying every programmer's productivity at worst. But it will stay, and it will grow, because stupid execs will keep dreaming of replacing their engineering talent with it, and Microsoft will laugh all the way to the bank.
There's no way they didn't expect some backlash on this, so I think it's partly a marketing gimmick. I'm sure there are people at GitHub, though, who really think they're making something valuable, unfortunately. Sadly, what they're not aware of is that they've become part of the experiment themselves.
It's like expecting that an AI trained on Shakespeare novels would be able to "help" a writer write Shakespeare-like novels. Sure, they might get something that might fool some people, but are they a writer? I think software is a lot more like "writing" than it is like "building".
What mostly annoys me is that this is a win-win for github regardless of the outcome. If people buy into it (even for just a while, and it currently seems like some really smart people are buying into it) they'll carve a huge piece of a new market. If it fails, they'll make it seem like an experiment into the whole ethical gray area of what should and shouldn't be used for training, and that they just wanted to draw attention to it.
I'm kind of astonished that this project got greenlit, given Microsoft's previous experiences with embarrassing AI projects (thinking particularly of Tay and Zo).
Microsoft isn't a person. Individuals at organizations make decisions, and it's very possible the person greenlighting this never heard of Tay and Zo. It's almost certain they weren't the same person.
My comment wasn't about reasonableness. My comment was about organizational reality.
How many times have you seen the two coincide in large orgs, especially ones as complex as Microsoft?
People treat organizational decisions from large orgs as if they were made by a particularly stupid, incompetent individual, but that's not what happens. They're made by organizational processes, incentive structures, and emergent behaviors.
That's not cynicism -- structuring over 100,000 people to collectively act in ways one might consider reasonable is a genuinely hard problem. We tend to blame unreasonable people or malicious behavior, but every person in an organization can be smart, ethical, and reasonable, and stupid stuff will still happen.
If there were one bad apple, like Enron, it'd be fine to blame the people there. If it's nearly every large organization in the history of humanity -- including formerly good ones like Google -- it's reasonable to look for a more systemic explanation than unreasonable people.
Put yourself in the position of a richly-compensated engineer.
Your boss: "We need to launch this, or our division is shut down."
Your web search: Famous AI failure
You would like to stay richly compensated. What do you do?
This is one scenario, and waaay oversimplified, but the way you get richly-compensated is by doing what's required to be richly-compensated. That usually has more to do with corporate politics than long-term corporate outcomes.
I see this as a problem with the developers who are committing code, not a problem with Copilot. If you make your secrets accessible, they might be accessed. Also, if you rotate your keys regularly, that would mitigate these issues. This is a problem with humans failing to follow known security best practices, not malicious AI doing something insidious.
The fact that Copilot recreates API keys that still work makes me wonder if they come from a semi-public place, because SendGrid is usually quite fast at blocking API keys that were accidentally made public.
I wish he had tried to track down whether the keys were in a public repo before asking SendGrid about them. If they turned out to be only in GitHub private repos, that would be new and interesting info.
Not that I'm saying putting keys in a private, but third-party-hosted, repo is a terrific idea.
I don't consider this a problem. Copilot was trained on public repos, so these secrets had to be checked into public repos. They were already totally public, and should have been invalidated/replaced and redacted. Copilot might result in previously undiscovered published secrets being found, but that's not much worse than anyone finding one under normal circumstances.
Grand Source Code theft. A permanent stain on Github?
They should scrap it, and Microsoft should be ordered to sell GitHub because they have a conflict of interest.
For example, Microsoft has access to your private repos and can do things like Copilot with your data.
Who knows maybe your code powers Windows 11 now.
The only time I would consider this a valid security issue is if those were tokens that were previously not public. But that should not be the case, right?
Sure, if someone checked in a secret to a repo that at some point was public, and got crawled by co-pilot - they should cycle that secret, so it's no longer valid - rather than only mark the repo private and/or nuke the secret from the repo history.
But there's another side to this - if you write code using Copilot against a popular API - and Copilot gives you a valid key - and you access data or a system you aren't supposed to - would you be liable under the various draconian anti-hacker laws?
If you pick up a key card from the street, and enter someone's home - you'd be trespassing after all..
That is a good question, and I think you should be. After all, you are still the person who writes and produces the code, just with the help of a tool. Similar to a lockpick. (I hope that makes sense.)
Let's hope so... I expect that these were accidentally committed to a public repo.
However, while the keys are then already leaked, you'd still have to go search for them. Copilot suggests you use them right in your editor. That is not quite the same, IMO.
It goes from deliberately searching for and using leaked keys to having them handed to you without context. I feel it is a bit like finding an unlocked bike: if you take it, it is still stealing. But here, let's say there is a guy at the bike parking handing out bikes to anyone passing by. Not the best analogy, but I think it covers my point ;)
The stuff you write doesn't have to be committed to GitHub (or am I missing something?), so this argument makes no sense. Copilot clearly scanned and autocompleted third-party secrets; that is in no way acceptable behaviour.
Deliberate, repeated prompt engineering to make it regurgitate something is not proof for the general case. The same way that humans being able to recite text when asked to is not proof that they're incapable of creating new content.
To clarify, my issue was not that it output the Quake code when prompted -- of course that's obvious; it's clear the AI had no intention of outputting the code until the user provided the function Q_.
The issue is that it's impossible to tell whether it has done that or not, short of googling each line it comes out with.
Anyway I've determined my reactions are now based on disappointment rather than thought, a bias that should be factored in
I do feel for the people behind Copilot, even though they'll have known it was coming. They produce something absolutely friggin' amazing that can change the world, and for the next few days all everyone does is pile on and pull it to pieces... yes, of course these are valid issues, but can we please look at the big picture and appreciate what an achievement this is?
I agree with this; I'm very confused by what appears to be a very strong visceral reaction to this experimental feature. I don't know what impact it will have on programmers/programming, but I'm curious to see. Personally, I see something like copilot as a terrific search engine. Searching for code in Github is kind of difficult; being able to search by writing a descriptive comment is really cool!
> you pretend this glorified markov chain is something more
I hate this belittling attitude that HN has towards projects in certain fields. It's like if I dismissed any progress in graphics as "just glorified triangle-drawing".
> They produce something absolutely friggin' amazing that can change the world
It won't change anything. It's madlibs for code. The 90% of programmers who are mediocre will drown out the 9% who are competent and the 1% who are talented.
So GitHub Copilot has inherited all the bad practices of many StackOverFlow and GitHub side projects and generates them in front of you as 'assistance'.
All the API keys are still working, and who knows, someone might complain about a huge bill right here because they forgot to revoke one. Only time will tell.
I am certainly going to avoid this contraption. No thanks and most certainly no deal.
Downvoters: So are you saying GitHub Copilot DOES NOT do the following:
Leak working API keys in the editor.
Generate broken code AND give you the wrong implementation if you add a single typo?
Copy and regurgitate copyrighted code verbatim.
Guess right only 1 out of 10 tries.
Send parts of your code when you type in the editor.
The content of your editor is sent by Copilot to its cloud service (the FAQ says: "The GitHub Copilot editor extension sends your comments and code to the GitHub Copilot service"). So yes any editor content is leaked, including sensitive information.
But is this content sent to other Copilot users? AFAIK, no. The FAQ says the model (OpenAI Codex) was trained on publicly available sources.
They advertise an English-to-French translation service powered by AI. But it appears that nobody who is a native French speaker has even reviewed the service's presentation. When the marketing material is just a joke, what can you expect in production if, as a customer, you use the service?
Just this example tells me a lot about the internal organization of the company.