Maybe I'm a bit paranoid, but I find it harder to trust the binaries generated by such a project than the ones provided by the Big Techs (even though they are the biggest stalkers).
I'm afraid that such projects, with much less manpower and skin in the game, are more vulnerable to supply chain attacks, or to a hostile takeover if a dev sells the project.
I'm not very knowledgeable in this area. Could someone share some resources here on how such projects avoid this? Or maybe my concerns are valid :)
Your concerns are absolutely valid. Without casting any judgement on this project, the process of taking, modifying, and redistributing software is hard to do in a trustworthy way. Big companies go to extreme lengths to ensure a chain of trust for the software they build, and app distribution platforms[^1] go to extreme lengths to ensure a full chain of trust end-to-end[^2] from developer to apps running on users' devices.
There's basically no way to do this without trusting the redistributor in addition to the original publisher, but some things can reduce the amount of trust required. A verifiable build process that checks signing keys and hashes, that re-signs, and so on, all helps. If end-users can theoretically produce exactly the same build themselves, it's easier to trust it.
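That "checks hashes" step can be as simple as a signed checksum file. A minimal sketch, with invented file names and a local file standing in for a real downloaded artifact:

```shell
set -e
# Simulate a downloaded release artifact and the publisher's checksum file.
printf 'pretend this is a release tarball\n' > editor-1.0.0.tar.gz
sha256sum editor-1.0.0.tar.gz > SHA256SUMS

# In a real release you would first verify the signature on SHA256SUMS
# against a key you already trust, e.g.:
#   gpg --verify SHA256SUMS.sig SHA256SUMS

# Then check the artifact itself; any modified byte makes this fail loudly.
sha256sum --check SHA256SUMS
```

The point is that trust moves from "the server that happened to serve the file" to "the key that signed the checksums", which is a much smaller thing to audit.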
The best option however is for the source project to produce unbranded builds themselves, and potentially for a project like VSCodium to become "config only" for things like disabling telemetry, and therefore not requiring re-signing. The Chromium project distributes its own Chromium binaries which lack the Google-specific stuff for example.
[^1]: Mostly thinking of app stores, but the same is basically true of things like Debian package repositories.
[^2]: App stores typically re-sign in the middle, so you do have to trust the store, but see (1), these companies go to great lengths to ensure trustworthiness there.
Trusting "big companies" because we presume they have guard rails seems like a bad idea; it is how we got the SolarWinds supply chain attack.
When I worked for one of the largest voting machine companies, I was astounded that the final build of the software was done just before it was sent to the customer. It was built on a Jenkins server that several people had access to change and update with minimal oversight, and the devops person had the freedom to do anything with no oversight at all. For the year I worked there, this one man could have changed dozens of elections if he chose to, and figuring it out would have been very hard. I am sure a large enough investigation could have figured out such a thing, but by then the damage would already be done. If he changed only municipal elections that couldn't afford such an investigation, it might never have been detected. I have no reason to claim this happened, but we certainly can't prove it didn't, and this is in one of the most paranoid and regulated spaces in software development.
I'm speaking from experience at Google, and honestly our software integrity processes are like nothing I could have imagined coming in. Not everything is perfect and it has clearly evolved over time, but basically for a modern internal Google codebase I don't think I could have much more trust that it is what it claims to be.
I don't know if other big tech is quite the same, but from the little I've heard, I suspect that Microsoft, Amazon, Apple, and others in that ilk are similar.
I'm very much in the camp of "trust but verify" here. You're right that SolarWinds was an utter disaster. It should never have been able to happen.
To be honest, when an attack like this happens at a big company, our expectations for mitigating it are much higher than if it's a project maintained by a small group of people.
How does this even happen at a big company like Microsoft? Then you look at a small company of fewer than 100 people, and their security policies are much stricter than Microsoft's. And even if the same thing happened to them, we wouldn't have such high expectations, because they have a small team.
Honestly, I prefer trusting a small project over a big company's project. Look at Terraform now: HashiCorp pulled the plug and made it a proprietary, source-available app, requiring you to sign a CLA to prevent them from being sued later on. Red Hat is bending the GPLv2 to force people to buy their subscriptions.
EDIT: and how can we forget what GitHub was like before Microsoft bought it in 2018? GitHub was much friendlier to open source; nowadays they just use any open source code available on their service to train their Copilot AI without respecting the terms of the licenses of the millions of projects hosted there. They killed Atom. They scan your private and public repos for "vulnerabilities". They literally read everything in your repo and feed it into Copilot. If GitHub hadn't been sold, I'm pretty sure the situation would be different.
Call me crazy or dumb, but I strongly believe that the bigger a company is, the worse its policies are. They neglect security for the sake of higher profits (ahem, looking at Facebook, which allowed people to mine their users' data via an exploit that was only fixed after Facebook was called out for selling users' data).
Trusting "open source" is also how we get tons of other kinds of vulnerabilities, but that doesn't mean open source is unworthy of trust.
The reason organizations turn to other companies to deliver this is that there's a "throat to choke" when things go wrong, and a financial incentive to fix the inevitable issues when they come up.
> There's basically no way to do this without trusting the redistributor in addition to the original publisher, but some things can reduce the amount of trust. Having a verifiable build process, that checks signing keys and hashes, that re-signs, etc, that all helps. If end-users can theoretically produce exactly the same build themselves it's easier to trust it.
Really, what else could you ask for?
It only takes one trusted friend to verify the codebase, and from there trust can spread. Luckily there are a lot of talented people on the Internet whom several people can vouch for, and they're easy to find. It's pretty much a non-problem in the age of the Internet.
The same natural logic applies to anything else really... like cars - "yo what car do you trust?"
It's just the nature of living. If you don't have the skills, you need to find someone you trust who does. And no, just because it comes from a company doesn't mean you can trust it; that's a fallacy. "Safety" doesn't necessarily equate to "profits", which is what a business needs to live. It's an organism.
I'm certainly not equating safety to profits, but I think that a company such as Microsoft with decades of experience distributing cryptographically verified software to a range of demanding consumers (e.g. governments) and being big enough to be able to have teams running things like their certificate authority infrastructure, is much more likely to produce secure binary distributions than some GitHub Actions and shell scripts.
I don't think your car analogy quite fits, as there are ongoing updates, and it's not like the end-user perception of regular VSCode, or backdoored VSCode, are going to be any different until someone spots the backdoor (and we can't rely on that).
Meanwhile, in the real world, Microsoft is actively screwing up: losing control of their signing keys, failing to implement even the basics of Azure authentication, destroying Azure customer VM security, and so on.
Ya, I used to buy into that idea about using vendor products, with the rationale that "at the very least they wouldn't want to get sued for negligence, so they have to have some basic security in place". Then that huge bug came out where you got root access in Azure if you simply didn't send an AuthN header.
"Yeah, but you can sue" is not really that viable. How's that going for anybody? Not everyone can dedicate their time and savings to gambling on getting compensated. At the very least, the open source community is vigilant and quick to ostracize bad actors. Big tech companies get tax breaks to screw us over.
I think it is worth mentioning here that Microsoft does not care about user privacy.
I don't know if this is because Microsoft took down the blog posts and press releases, or because Google web search sucks now (different conversation), but I had trouble finding the source for Microsoft's statement. I'm talking about the incident when Microsoft looked into the contents of a user's Hotmail email.
> However, to determine the identity of the leaker, CNET reveals, Microsoft actually looked through the Hotmail email of a French blogger who was in contact with the leaker and had made available online early copies of Windows 7 and Windows 8, as well as means that allowed users to circumvent activation protection for Microsoft and Windows.
I wouldn't give Microsoft any benefit of the doubt. While I have utmost respect for the people (who I know and) who work there, the senior leadership has always been poopy at best. I would never trust a word they say.
You can use a commercial (paid-for) Linux distro if you really want the legal protections of being a customer, while staying with Free and Open Source software.
As Capricorn2481 points out though, suing isn't really on the cards for most individuals.
I've never tried this, but in theory, if these projects produce builds on GitHub Actions and you trust Microsoft's security, then you should be able to just look at the commit hash that got pulled in the action, diff that against upstream, and study the diff. If the diff looks safe [1], then I think that implies the binary can be trusted.
[1] Of course this includes ensuring it doesn't e.g. pull anything from the internet whose integrity you can't similarly guarantee...
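That audit boils down to a single `git diff`. A sketch, simulated here with a throwaway local repo (with a real project you'd fetch the upstream and fork remotes and diff the exact commit the Action checked out):

```shell
set -e
# Simulate upstream: one release, tagged.
git init -q forkaudit && cd forkaudit
git -c user.name=u -c user.email=u@e commit -q --allow-empty -m "upstream release"
git tag v1.0.0    # the upstream tag the fork's build claims to track

# Simulate the fork: a branch whose entire delta is one config tweak.
git checkout -q -b fork
printf 'telemetry=off\n' > product.json
git add product.json
git -c user.name=u -c user.email=u@e commit -q -m "disable telemetry"

# The audit: everything the fork build differs from upstream by.
# If this diff looks safe, the fork's *source* deserves the same trust
# as upstream's (the binary still depends on the build pipeline).
git diff v1.0.0 fork
```

The names and the one-line delta are invented for illustration; the mechanism is the same for any fork whose build pins a specific commit.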
Somebody please explain why this is downvoted. The reasoning seems perfectly valid. Of course, there are a lot of ways somebody could fuck up, but in theory automated builds on GitHub have the same trustworthiness guarantees as Microsoft's very own VSCode binaries.
> there is now an open source remote mode extension
URL please? Open VSX lists several but it's hard to evaluate if they're complete & stable from just the READMEs. For example, https://github.com/xaberus/vscode-remote-oss has existed for a long time, but one would need to spend a few hours to know if it works well enough.
With how the current tech world is, and after working in it, I'm the opposite.
I have absolutely zero trust in any big corp doing things properly and not cramming their applications full of spyware in the name of telemetry. I would rather trust a bunch of insane, dedicated people trying to make FOSS software.
Yes, I also trust the original company more, for now. It would be nice to have more choices.
Reproducible builds would help here, like Go did for their newest SDK [1].
This would be particularly true for a fork when the source changes are fairly minimal, since you could just verify the patch.
Linux distros are fairly similar; why do we mostly trust Debian packages? Because there’s an umbrella organization and a process. That’s less true of a standalone organization.
Maybe eventually there will be an umbrella organization that just does reproducible builds, and people will use that to distribute binaries. (And independent verification could keep them honest, like happens with domain registrars.)
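Independent verification of a reproducible build amounts to "rebuild and compare hashes". A toy sketch, where `build.sh` is an invented stand-in for a real deterministic build step:

```shell
set -e
mkdir -p reprodemo && cd reprodemo

# Stand-in for a real deterministic build (real ones must pin toolchains,
# strip timestamps, sort inputs, etc. to achieve this property).
printf '#!/bin/sh\nprintf "deterministic output" > out.bin\n' > build.sh
chmod +x build.sh

./build.sh && sha256sum out.bin > official.sum     # the "official" build
rm out.bin
./build.sh && sha256sum out.bin > independent.sum  # an independent rebuild

# Reproducible means the artifacts match bit-for-bit.
diff official.sum independent.sum && echo "builds match"
```

Any third party can run the rebuild half of this, which is what lets independent verifiers keep a binary distributor honest.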
Good point. That's one problem that might be well solved by something like BitTorrent, which hashes all the files as part of the protocol. Once a binary gets analyzed and trusted (the current binary, not the process or every new upgrade), it can be safely used.
Trust is always relative to a threat model. I trust that the people who work on VS Code won't do all sorts of nefarious things that hackers might want to do if they can get code running on your system. I trust that their telemetry settings do what they say they'll do. And if someone finds a security bug, they'll fix it.
I don't assume that there won't be security holes, but in effect, I'm assuming they probably won't affect me.
Example Arch AUR but similar for Nix, Guix, Gentoo, others:
On first install:
- read AUR file
- audit build & patching framework
- audit patches
On upgrade:
- review patch changes
- treat upstream changes like you would otherwise
On upgrade with AUR file change:
- review AUR file changes
- review build & patching framework changes
If you've bothered to set up your own repos and build pipeline integrating your patches already (and you can get a lot of that for free), the additional overhead isn't as large as it may sound.
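The "review AUR file changes" step above is cheap because AUR packages are plain git repos. A sketch, simulated locally with an invented package (for a real one you'd clone `https://aur.archlinux.org/<pkgname>.git` and diff from there):

```shell
set -e
git init -q aurpkg && cd aurpkg

# Old build recipe.
printf 'pkgver=1.0.0\nsha256sums=("aaa111")\n' > PKGBUILD
git add PKGBUILD
git -c user.name=u -c user.email=u@e commit -q -m "upgpkg: 1.0.0"

# New recipe lands with an upgrade.
printf 'pkgver=1.0.1\nsha256sums=("bbb222")\n' > PKGBUILD
git -c user.name=u -c user.email=u@e commit -qam "upgpkg: 1.0.1"

# The upgrade review: exactly what changed in the build recipe,
# which is usually a handful of lines.
git diff HEAD~1 HEAD -- PKGBUILD
```
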
vscodium is doing the much larger work of cross-checking vscode and in exchange I do the smaller work of cross-checking theirs - and putting in my own so I have zeroconf installs with prebundled extensions and whatnot on new machines.
Same for browser (ungoogled-chromium, vanadium, librewolf or whathaveyou)
I'm not worried about the government spying on me or ad tracking, I'm worried about a random hacker.
It's ironic that for a lot of people the concept of privacy has been inverted: the closer someone is to you, the more you want to hide from them.
Facebook showing my online status to acquaintances (or the lack of an online status, which could also create suspicion that I'm hiding something) bothers me more often than a random Google employee listening in through my alarm clock.
It's not ironic; it's concern over the influence big tech wields over society with that data, unlike your concerns, which are scoped to you and immediate.
The majority of people seem to have immediate-scoped concerns, I suppose.
Most of the really scary big-data possibilities seem more like political issues than data issues; the data is just something they could use to do the things people are worried about.
It's not great, but it's competing for mind share with a lot of other possible horrors people are afraid of, and people are rather fatigued.
The more subtle stuff they're already doing seems like it would be a problem with even the most basic data.
The biggest issue I see is personalized content that causes fights, hate, wasted time, and all that, because the algorithms want you to engage with it as much as possible, and it shits on real journalism by giving all the attention to clickbait.
I suspect every time I've responded to such clickbait might be more problematic than all the data they get from all my smart devices in a year...
Protecting one's privacy because of societal harm is a worthy thing but also a pretty big effort and I sure don't have a solution, or even any confident prediction of how big the problem is or where it's coming from...
But I probably won't truly be worried unless one of those horrid encryption bans gets close to passing.
> The majority of people seem to have immediate-scoped concerns I suppose.
Well, like you write, aggregated data is already being used to influence populations, which makes it immediate. A hypothetical hacker is not.
> Most of the really scary big data possibilities seems more like political issues than data issues, the data is just something they could use to do the things people are worried about.
I don't follow. Not giving away the data removes the need for politics. Crime is illegal; nevertheless it exists, because it is possible to commit.
> It's not great, but it's competing for mind share with a lot of other possibile horrors people are afraid of and people are rather fatigued.
Not everyone shares the same concerns, and nobody likes problems, but that doesn't mean they should be accepted.
> The more subtle stuff they're already doing seems like it would be a problem with even the most basic data.
All the more reason not to give them more.
The personalized hate content is a direct consequence of big data aggregation.
All in all, seems like more of an issue than a possible compromise of a FOSS project.
On the one hand there is 100% validity to what you say. On the other, people download entire trees of dependencies from pypi, npm, homebrew, independent non sandboxed apps, chrome extensions, etc.
So don’t lull yourself into thinking the supply chain on everything on your system is any better :) It’s likely substantially worse.
It's not paranoid; the scepticism is completely warranted, as this is security theatre. People are afraid Microsoft will screw them over with a text editor, but run random binaries provided by people for whom a thousand bucks may as well be enough motivation to ship you malware.
It's like buying your heart medication off an anonymous guy on Craigslist to stick it to big pharma.
I would have assumed the win here would be that your Linux distribution or package manager on another OS can compile "VSCodium" themselves -- and you already trust them for all your other software, so this simplifies the trust chain somewhat.
In reality I think distributions that are willing to ship binaries (NixOS, Homebrew on MacOS) ship VSCodium, and other distributions (Alpine) have packages called things like `code-oss` that are basically the distribution's internal compiled version of VSCode and have nothing (?) to do with VSCodium.
I think skin in the game is the big one, much more so than manpower. I use a lot of things that are probably mostly one-person projects, but if that person poured their love and time into it, it seems unlikely they'd permanently tarnish not just their reputation but also the thing they spent all this time on. If you're just setting a couple of compile flags and changing a name, I am a little more suspicious.
You're missing the point. GP is discussing binaries. I.e., the assumption is that published source code (on Github) is perfectly fine, and then either the (malign) developer is changing the code before compiling it, or the (malign) third party is somehow using some vulnerability to replace the official binaries with compiled code of their own. And then you think that you are using "VSCode without the telemetry code", but in reality you're running a botnet.
The signing key is the important bit though right? This project breaks the chain of trust, as it clones the source repo, [does some shady stuff?] builds a new release, and uploads it to GitHub.
It's unlikely that this project knowingly does shady stuff – I've heard of it before, seems to be a long running legit project – but there are lots of unverified factors in that, and the lack of signing (I think?) on the final binaries also means it's hard to know if they get tampered with at some other step.
Having had a quick look through this workflow it seems to miss most opportunities to ensure a safe build.
- Downloads binaries for use in build with no hash/signing verification.
- Doesn't pin shared actions.
- Uses Yarn to install dependencies (which can involve downloading/executing arbitrary code from anywhere)
- Doesn't sign the final binary.
None of this is necessarily wrong; addressing all of it would make maintenance harder in the long run. But it means this project is really about removing MS branding and some telemetry, and that there is a security trade-off to get those benefits.
> - Downloads binaries for use in build with no hash/signing verification.
It downloads them using TLS.
> - Doesn't pin shared actions.
The shared actions are just @actions/checkout and @actions/setup-node. They're official. I wouldn't pin them - YAGNI.
> - Uses Yarn to install dependencies (which can involve downloading/executing arbitrary code from anywhere)
It downloads/executes code based on the carefully chosen dependencies
> - Doesn't sign the final binary.
That's platform dependent I think. For Mac OS X it does.
Seems like FUD, which you might be able to recognize because you say "None of this is necessarily wrong". Especially the part about pinning first party GitHub Actions. There would be nothing wrong with that but it is much more useful to pin third party GitHub Actions, and IMHO suboptimal to pin first party actions.
Isn't the whole point of this comment thread "vulnerabilities to supply chain attacks"?
>> - Downloads binaries for use in build with no hash/signing verification.
> It downloads them using TLS.
If the binary is updated to a shady version, sure, no one will be able to tamper with the download, they're certain to have received the correct shady stuff.
I don't think it's quite FUD, but I do agree none of these are strictly necessary, all can be rationalised as unnecessary and for many users this project probably provides a perfectly reasonable security posture. However the fact that there's so little explicit acknowledgement of the security concerns, and that 2 minutes looking at the repo turned these things up, suggests that security is not a priority of the project. Again, not the wrong thing to do, but maybe not the trade-offs all users will want.
Pinning actions is so low effort/high reward that even the low risk makes it worth it for a project like this in my opinion. Official actions are certainly much safer, but ultimately it's still just human review and PRs being merged.
Downloading over TLS reduces the need for hash/signature verification, but it would be a nice extra layer. You're otherwise putting a lot of trust in the combination of DNS + CDN + hosting. I've seen hijacked sites due to IPs being re-used on cloud providers, for example. Unlikely, but again it's easy to add and high-impact in the rare situation where it is taken advantage of.
Yarn dependencies may be carefully chosen, I'm not familiar with the VSCode practices. I bet that official binaries however are not built like this – I'd bet that there are allowances for specific network connectivity and binary execution, and that everything else is locked down. To my knowledge GitHub Actions have open internet access. I wouldn't even say this is low risk either, the NPM ecosystem is so deeply nested that I'm sure malicious code could be snuck in somewhere. This is a lot harder to solve for this project, and certainly the most debatable aspect as to whether it's worth it or not.
The yarn.lock includes checksums, if yarn is not checking checksums properly then that affects every project in Node.js, not just this one.
Malicious code with the correct checksum? The VSCode team is not auto-updating dependencies, but I also doubt they are reviewing the source code of every package they update. I've never worked anywhere that does. So yeah, "gulp-vinyl-zip" (or any other package used at build time) could add some code that secretly triggers when run in the VSCode repository and makes malicious source code changes. But it's still going to be the same code in VSCode and VSCodium, unless the attacker decides to use specific logic to target one or the other.
It's possible, and common, for NPM packages to download additional binaries at install and/or run time. These are not covered by the hashes and are essentially untrusted, often hosted on random S3 buckets. I'd hope there are checks in the official project to ensure these aren't being used, but there aren't any such checks in this project.
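One cheap way to surface this is installing with lifecycle scripts disabled, since install scripts are where those extra downloads usually happen. A sketch using npm's `--ignore-scripts` flag, with an invented package whose postinstall stands in for "fetch an extra binary":

```shell
set -e
mkdir -p scriptdemo && cd scriptdemo

# A package whose postinstall stands in for "download an extra binary".
cat > package.json <<'EOF'
{
  "name": "demo",
  "version": "1.0.0",
  "scripts": { "postinstall": "echo extra-binary > fetched.bin" }
}
EOF

# With scripts enabled, install would create fetched.bin; with them
# disabled, nothing beyond the declared package contents runs.
npm install --ignore-scripts >/dev/null 2>&1
test ! -f fetched.bin && echo "postinstall suppressed"
```

Builds that genuinely need native binaries will then fail, which is the point: you see exactly what would otherwise have been fetched silently.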
My employer uses artifactory as a mirror of npm and one step of adding a package to the mirror is installing it on a VM/container with no network access. If the package has a "postinstall" script requiring internet access then it will fail. There's also code scanning that will find at least casual attacks like "download and run this crypto ransomware".
Also, it depends how determined your attacker is. If they write code to detect whether they're being installed in the vscode project, have access to commercially available security scanning tools to ensure they evade detection, etc...
I'm not sure I follow why the "official" actions under the actions org are materially different to others? I agree they're likely to have better processes around contributions, but they are fundamentally an open contribution model with human review, and human review is fallible. Unless they had extra sandboxing, pinning requirements, materially better testing, etc, I'm not sure there's a huge difference.
However, the Linux one grabs two from third-party GitHub repos. These seem relatively safe, coming from official systems-level projects (snapcraft and docker). I'm going to suggest to Codium that they add a SHA to pin those, just as I would suggest they unpin first-party GitHub Actions if they had them pinned. Sorry, I was looking at the Mac one because I have a Mac, even though the link up the thread was for Linux.
That's the key value of open-source projects. You don't have to release a binary, just source code and a build guide. It's also one of the reasons why I have such high respect for OS distributions like BSDs and Slackware. They give you a good base that you can build upon if you know what you're doing.
The problem is, many PC users don't really know what they're doing.
Can't tell you how many times I've had to explain how environment variables work to developers, and that's a pretty simple concept compared to many other things in an operating system.
MS eliminated any competition from this direction by limiting their main extensions to the official release only. I'm personally using the remote extensions quite a lot and if someone can provide an alternative there, it would be great.
“Open Remote - SSH” by “jeanp413” aka: @ext:jeanp413.open-remote-ssh
works just like the closed source one, at least for me.
I’m using the vscodium AUR package under manjaro. I got the extension from whatever store vscodium defaults to. I’m not sure if it is available in Microsoft’s store.
The extension didn’t work for me under Code - OSS (there was an apparent configuration error, and I didn’t bother tracking it down).
Mine broke a few weeks ago (MacOS - Open Remote - SSH v0.0.42). What version are you running and are you connecting to your remote hosts via your ssh config file or using username@ip?
Just curious, since I cannot get it to connect to any hosts.
You can switch the extension marketplace, but Microsoft uses DRM in their extensions to prevent them from running with non-proprietary builds [1]. Pylance, their Python LSP, is one notable example [2]. Their earlier Python LSP was open source, but the community forks have lost a lot of the wind in their sails because such a large portion of developers use VS Code without realizing or caring that the LSP is DRM-laden and closed source. I believe the traditional term for this is embrace, extend, extinguish.
This alone, to me, means that MS is up to something shady. Maybe not right now, maybe not tomorrow, but it's surely on their roadmap.
I don't see why they'd put out an open-source app, whose most useful extensions only work if you accept a closed binary known to phone home.
I can see them trying to win goodwill from devs and steer them to their Azure/Windows offerings by releasing such an editor. But why insist on this telemetry-laden distribution, which is still free-as-in-beer?
This is separate from the telemetry, but I suspect that part of the roadmap is for LSP extensions to become monthly subscription services eventually. We currently see hints of that from two directions: 1) the introduction of GitHub Copilot as a paid service, and 2) the aforementioned move towards proprietary and DRM protected LSPs. It's not hard to imagine how these two might converge in the future. I'm sure that the performance of these LSPs will be extremely impressive and that it will be rational for many individual developers to pay for them. This will in turn pull mind share and community involvement away from FOSS solutions, and the gap between the two will widen over time as a result.
Oh, I'm not at all against companies making a buck selling software. I'm a happy JetBrains customer for my IDE needs, and I'm not even a "professional" developer.
But why would they force the use of their "official" VS Code build for this? Couldn't they just charge for their "impressive" plugins, regardless of the edition of VS Code used? The JetBrains "community" IDEs (open source and gratis) can use paid plugins from their marketplace.
I'm not against selling developer tools either, but I do have a problem with EEE as a strategy. As far as I know, JetBrains has never engaged in this behavior, and they have coexisted in a healthy way with fully open source alternatives. The point of Microsoft's strategy isn't to only produce a better product, but to actively hurt open alternatives by driving down their adoption through insidious and disingenuous means. They're not just trying to compete in the market, they're trying to monopolize it.
The idea behind LSP and Microsoft's initial open source work on LSP were both excellent. That launched seven years ago, and I don't think that we would have seen the near universal adoption of LSP among open source editors nor the dominance of VS Code among developers if they had been paid products from the start. Now that they have a large enough market share, they can make the LSP engines proprietary without most developers even noticing. The gap between the proprietary and open source solutions can now be widened both by the open source community shrinking and by Microsoft pumping money into improving their LSP engines. The more that gap widens, the more people migrate to VS Code from open alternatives. That becomes a self-reinforcing loop.
Once VS Code is significantly better than open source alternatives and they have a huge market share, Microsoft is in a very strong position to start collecting rent. Switching costs on an editor are nontrivial to begin with, and are enhanced by the induced atrophy of open source alternatives. Despite the fact that this strategy takes more than a decade to execute, I would guess that it ends better for Microsoft overall than if they were to start charging for VS Code back in 2016.
So do you think their goal is to reel back in the people who had already left OG VS and also a bunch of new ones by slowly boiling the frog with the closed distribution of vscode, which will gradually have all the best features and, maybe, in the end, become paid again? I would have thought that they were merely trying to push their own cloud and cloud-adjacent ecosystem, which I expect to be the real cash cows.
You know the "Editor Wars" meme? That was a thing because vim and Emacs were the canonical editors; you could expect just about every serious hacker (in a Unix environment) to main either one or the other.
Today, there is only one canonical editor: Visual Studio Code. 75% of professional programmers use it. Microsoft was trying to get the open source crowd back into their tooling fold -- and they've succeeded! Which means now they have a hugely expanded developer base to sell tools and services to. Remember their money is now in cloud, not desktop software -- so think things like GitHub Codespaces. (The entire devcontainer ecosystem comes from Microsoft and is oriented specifically around Visual Studio Code's model of remote development.)
Sure I do, and I enjoy flaming the occasional Emacs user I cross paths with.
But why does selling those features, especially the ones requiring a separate cloud subscription, require a specific build of an app which is otherwise open source?
Flip it on its head, the question is, why even do an open source release? The answer is, it's pretty much a sop, much like Apple's Darwin releases or the AOSP. Good PR for the open source community to build trust and good will, while still controlling the whole platform from soup to nuts making EEE possible. If they opened up their dev tools -- their LSPs and cloud service connectivity -- to just anybody they risk losing that control.
Honestly, the more I learn about Visual Studio Code the more I understand why Emacs is the way it is. Stallman was trying to keep it fully hackable forever.
I admit, I didn't think about it under that angle.
But I do wonder how much VS Code being "kinda open source" mattered. I may be in an MS-centric bubble, but most people I interact with couldn't care less about that. They're using closed-source software all over. Their main reasons for switching to VS Code seem to be the free-as-in-beer part, and that it's more practical than the OG VisualStudio. They're 99% Windows devs.
Yeah, they were going after the Vim and Emacs using, Linux/Mac web dev crowd. Instead of fighting Linux they're co-opting it into yet another Microsoft development platform.
Mark me down as someone using vscode because it has vscodium. Prior to vscode, I was using Geany for anything that wasn't Java (IntelliJ CE there, and prior to that, netbeans).
Upsells like Copilot, telemetry data mining, investing in the goodwill of an entire generation of programmers, and presumably further plans related to enterprise products.
I know that unfree software is bad, but really a lot of good is being done in the process here. VS Code is making programming significantly better and more accessible to a large audience, including relatively advanced features like remote and containerized development. It also gave us the Language Server Protocol, which would be nowhere near as successful as it is today without Microsoft building it into VS Code and open sourcing the language servers.
I’ve started using Remote OSS [0] with some custom scripts to install and run the remote code host. Works great on remote servers and even with devcontainers on remote hosts. You can use a script like the one here to handle automatically installing and setting up the tunnel [1]. I’m working on a PR that adds a way to automatically run custom scripts to set up the tunnels as well [2].
I think the alternative is just not using VS Code. I use it at work, because it is officially supported by the IT dept and I don't care about tracking. But for my personal work I still use Sublime Text.
The remote extension on JetBrains is not even remotely comparable to just the base Remote SSH in VS Code, much less Remote Containers, Dev Containers, and Tunnels (plus stuff like one-click integration with Azure).
What's super cool is that the dev container extension works over the other remote connections. So I can remote SSH from my Chromebook to my Ubuntu box, and then open the dev container on Ubuntu. And everything works as expected.
I hope that more people try Emacs after having had exposure to vscode. It may be slow to get to know the new environment, and reading the manual only takes one so far, but everything becomes nicer after the training phase. Text-based remote editing has been mature in this platform for decades, and it keeps working well with the addition of all the language servers and tree sitters and so on. It is now snappy again in all architectures, it has built-in support for SQL and still uses the powerful Lisp for all its extensions.
Yup. I tried Codium and gave up after I couldn’t connect to the docker containers for debugging. It’s pretty much the only use for VS Code I have and Codium can’t do that.
I don’t know about JetBrains but I have tried to replicate my VSCode + Remote Containers extension workflow in vim and Nova (which I absolutely love) and not got anywhere near.
Would you mind describing a few of the things you like about Nova IDE? I never tried it, but it has my interest. I'd be happy to hear what you like and dislike about it.
I don't really know how to describe it - it just "feels" better. Which isn't very objective I know, but is important for an app that I spend several hours a day in.
There's some visual stuff:
The default views, font choices and colours are really nicely put together (and I've tried tweaking VSCode to match but I can't get it to look as nice). For example, when viewing a git diff, it displays sweeping curves, which move and bend as you scroll, connecting the changes on both sides.
There's some Panic stuff:
I like the integration with Transmit (which I use all the time). And I used to use the "publish" function all the time (develop on macOS and push to a Linux box to run tests etc).
And there's the fact that it's a native Mac app - I'm an old-school Mac user and there's a definite difference in "feel" between Mac apps and others. I'm not a purist about it (I have tons of Electron stuff installed), but when I use a Mac app I notice it. Plus I often get VSCode randomly slowing down to a crawl causing lots of annoying type-ahead, which doesn't happen with Nova - which I guess is a combination of Electron, JS and extensions combining.
But Panic has a reputation for sweating the details and it shows throughout the UI. Plus I feel good supporting a tiny company working in a niche market.
But functionality-wise it falls way short of VSCode - especially with extensions and dev-containers. The way VSCode + Remote Docker Containers feels like you're developing locally when actually everything is happening in a container on a Linux box somewhere halfway across the world is quite amazing.
So, this is basically what you would get when you download and build Visual Studio Code's source code.
The name is probably a word play similar to Chrome -> Chromium.
A mini rant, if I may:
Microsoft's (and Google's, lol) practice of "open-sourcing" their products, but releasing a product which has "an added closed-source functionality" is dishonest at best and worthy of a giant lawsuit at worst.
You're 100% correct. Google and Apple started this trend, and MS perfected it: just say you're creating "open source" products, but release a closed source version. These mega companies still benefit from the work of clueless software engineers who donate their time to the open source product, but what the mega corps deliver to end users is the closed source version.
Nothing "clueless", I don't know what you are talking about. If people release/contribute to code under the MIT license, they are very well aware that anyone in any project -- open source projects, proprietary software -- can use their code as long as they include a copy of the copyright notice. Otherwise they should release it under GPL or something similar, or spend their time elsewhere. It's all clear and fair game and working as intended for decades.
I wasn't clueless when I built a tiny functionality into Code-OSS and I'm fine with Microsoft (or M$?) slapping telemetry on VSCode. You know, the crash reporter and usage statistics are the foundation of this incredible product/software. People use that software for free and they can disable telemetry at any time.
I'd rather work a week on VSCode for free than spending a day looking at Jetbrains' Java font rendering or waiting for Visual Studio to start.
Apple released OS X/Darwin open source, but kept the moneymaking frameworks (Cocoa, Carbon) closed source. This generated positive PR while keeping the secret sauce needed to ship polished applications under lock and key.
Google released Android open source, but then made unlocking bootloaders and flashing devices as difficult as possible for laymen. So they get to point at AOSP and say "open source" but in practice the vast majority of users end up running proprietary builds with big G's telemetry baked in
Unlocking and flashing Androids is (in general) much easier today than it was 10 years ago, mostly due to Google's efforts with fastboot and other software infrastructure. Modern Pixels are really easy to hack on.
> in practice the vast majority of users end up running proprietary builds with big G's telemetry baked in
The vast majority of users are always going to do this, regardless of how easy it is to install a custom OS.
What is dishonest about it, and what would you sue them for? I don't think you can sue for "open sourced a product and released it for free but it's not what I like".
This is literally what the MIT license is for: it lets people take the source code, modify it, and then distribute that. As a paid product even, if that's what you want to do.
This isn't fraud, this is literally MS going "here's the MIT licensed version, and here's our own variant of that, based on obeying that license." And then they go one step further and say "We're not going to tell you that only our product exists, we are explicitly telling you where to get the MIT licensed source code, which isn't even an MIT license requirement".
This is exactly what good open source practices look like.
It's a typical and completely compliant way to use the MIT license, but good open source practice? I don't think so. The MIT license permitting this sort of thing is why I and many others consider the MIT license to be a "cuck license":
> A Cuck License is a permissive software license that that does not enforce the freedom of derivative works. This means that anyone can take software licensed under a Cuck License and turn it into proprietary software, effectively cucking the original author.
> Examples of Cuck Licenses are the MIT license and BSD license.
> Cuck License consequences:
> There have been instances where developers' usage of Cuck Licenses has backfired. One notable example is Andrew Tanenbaum's MINIX, which got taken by Intel and turned into spyware called the Intel Management Engine. Tanenbaum went on to say:
> "Many people (including me) don't like the idea of an all-powerful management engine in there at all (since it is a possible security hole and a dangerous idea in the first place), but that is Intel's business decision and a separate issue from the code it runs. A company as big as Intel could obviously write its own OS if it had to."
> However, Tanenbaum maintains that he made the correct choice licensing MINIX under the 3-clause BSD License.
Ok, the MIT license can backfire and damage the author. But it doesn't damage those who receive the software. So why do you resent Microsoft releasing their own software with the MIT license? They expose themselves to the risk that someone else takes their code, adds some extra feature and sells it for a profit. Microsoft acts against its own interests, so what's dishonest in this?
No, any copyleft license is a better open source license. In fact, proprietary licenses are better as well; with MIT or BSD licenses you are writing code that a corporation will make proprietary, effectively writing proprietary code for them, except you don't get paid for it. It would be better to use a proprietary license and get paid than to use the BSD/MIT license, have your code turned into that same proprietary product, and not even get paid for it.
Even giving your code to the public domain is better than an MIT or BSD license; corporations will still be able to make it proprietary but at least it clears the air around the 'interesting question' of mixing MIT/BSD code into a copyleft project and distributing the whole lot under a copyleft license.
That's pretty naive. If we are talking about individual developers, big companies like Microsoft can blatantly use your GPL code in their proprietary code and you will have no money to fight them in court anyways. If it's big company deliberately choosing MIT/BSD for their open source project, they don't need your lecture. They have their own lawyers to tell them what license benefit the most.
> Even giving your code to the public domain is better than an MIT or BSD license
Maybe in the US, but much of the world doesn't recognise waiving away all author rights voluntarily. The closest thing to worldwide public domain is probably the Creative Commons Zero licence.
Pretty much only corporations get themselves in a twist over that technicality, so what's the problem? Community FOSS projects don't worry about buying a license to SQLite, but the corporations with their lawyers do. This makes public domain even better.
I'm serious, how many German individuals do you think would refuse to install sqlite on their personal computer because they lack a technically valid license for it? A few such weirdos doubtlessly exist but you can safely ignore them.
The risk is not "I don't have a license because public domain doesn't exist", the risk is "the rights I have are revokable on a whim because they haven't properly waived their right to do so".
SQLite has a good track record but I can imagine another strongly religious developer taking advantage of authorship rights to damage some small EU-based social organization they strongly disagreed with; just an example.
As for the weirdos, as I said SQLite has a good track record, but I'd think twice over a much smaller dependency. However, I've seen enough disregard for FOSS licenses in business to know the weirdos are few. I guess you either care about license minutiae or you don't.
Either you care about licenses, in which case you care about that technicality, or you don't care about licenses, in which case Microsoft using the correct license for their own code and adhering to that license is not something you care about either.
The source is available. You have the right to fork it. If there’s something in the proprietary release that’s harmful, you might have a case - but all Microsoft is doing here is counting on people being willing to consume the builds rather than coming together to use the source code.
Basically: Microsoft is betting that people care more about free as in beer than exercising software freedom. So far, they seem to be correct.
I don't see this as asshole behavior. They're giving away an overwhelming amount for free, which the community or a competitor can build upon at any time. This project being an example.
For those who find the closed capabilities and extensions distasteful there is opportunity to make their own.
I've got several decades of experience with the things Microsoft has given away for free. I consider this to be an (entirely on-brand and consistent) asshole move, part of the same EEE cycle we've been through a dozen times with a dozen product lines. It's enough to keep me off even the open-source rebuild of the product.
Imagine a chocolate producer gives you two options to acquire their chocolate.
First option is a pile of ingredients, a plain plastic wrapper and a recipe. You are obviously required to make the chocolate yourself, but what you get is the original chocolate as advocated by the company.
Second option is a finished product that has a pretty decorated package and tastes a bit different than original chocolate. You can't quite put your finger on why, but at least you don't have to make it yourself!
Maybe "fraud" is a bit too strong word. We could go with "consumer deceit" instead.
Sorry, but we face this every single day, with every single industrial food that includes extra ingredients (e.g. preservatives) compared to old granny's classic recipe, and we don't call it "consumer deceit".
It would be a fraud if the label on the product you buy omitted some ingredients, but it's not the case here: Microsoft doesn't claim its binaries are the result of a vanilla make on their source!
Their marketplace, not the marketplace. Their extensions. Microsoft's extensions for Microsoft's editor shared in Microsoft's marketplace. And if it's dishonest, where did they say otherwise?
Wow, that's a bold claim. Unless vscode and other products are improperly using GPL-licensed or not following license requirements otherwise -- which I am not aware of -- Microsoft is doing nothing wrong here, just like countless other commercial companies that release proprietary software based on open source projects.
It is a platform play, where the platform increasingly becomes incompatible with the open core stuff.
The value of a lot of software comes from interoperability, not from the code itself.
For instance, try running AOSP and 100% open source system libraries on your android phone for a month. You will find that you cannot perform basic financial transactions, like paying for parking, or hailing an uber/lyft, fast-charge your car, attend concerts without paying an extra fee, etc, etc.
I don’t even want my phone to support any of the above crap, but I don’t get to dictate how the US economy is structured, so I have no choice but to need all that stuff to work.
What used to be useful extras are now considered essentials (see how many on this thread say they need the SSH, Docker or WSL extensions and can't use VSCodium).
Plus they're now moving Python, .NET and other specific extensions towards closed source. Yeah, you can use the open source versions, but they don't have mindshare any more so source code written with one might not navigate/display as well with the other.
Also, new features only land when MS has a closed source idea that can use it, and are locked to MS extensions only. Innovation for me, not for thee. (Copilot/Continue on Codespaces/Live Share/VS Code Web/...)
"Plus they're now moving Python, .NET and other specific extensions towards closed source."
They are also adding Python support to Excel, which means that the layman now has an intuitive way to use a Python function to plot graphs inside their favorite workbook.
In other words, now you no longer need those extra machine learning Python devs, when Bob the accountant can plot fancy graphs for you.
Talk about extending an open source project just to incorporate it into the Borg that is closed-source Excel.
Pretty sure they use "open source" in a fair way here. Others are able to grab their sources and legally publish builds that are pretty much identical (up to the telemetry...).
What you're looking for is the "anti-features" distinctions, like the F-Droid store does it. E.g. "This app depends on other non-free apps." or "Promotes or depends entirely on a non-free network service."
Maybe you think this way because you assume the open source code is meant to be a product.
I think they do it mostly to make it easier to build an ecosystem of developers. Plenty of companies do this (Adobe is another one). They release some basic part or engine of their product so developers can work with it without compromising their business.
Summary: Microsoft releases Visual Studio Code with telemetry built in, so other projects (e.g. VSCodium) have spun up to release custom builds without telemetry. But these projects can't use the same marketplace because they are not licensed by Microsoft, hence the "fracture" in the title of this article. It goes on to say that other proprietary IDEs are also problematic, and GitHub is a trap to capture and fracture developers.
This drama with Microsoft and open source reminds me of the Halloween documents:
The main point that post eventually gets to is that Microsoft made the best IDE and the best extensions... so now it's hard to not use Microsoft with all the bad things that implies.
Um... OK?
I mean... Use it or don't. I'm not sure, "it's just too good not to use" is a legitimate complaint.
Emacs is definitely designed to fracture. Pretty sure I can write the same article with %s/vscode/emacs/g. The only thing emacs is missing is a commercial offering, and it's definitely going to fracture in the same way.
Android is designed to fracture, Linux is designed to fracture, Clang is designed to fracture, Windows is designed to fracture.
Take GCC for example. It was philosophically designed to _not_ fracture due to how its plugin system was designed. As far as I know, there are no such limitations on emacs plugins. Plugins and their interface are what enables fracturing as this article defines it.
Just had to switch away from an open source build yesterday because after updating, Pylance seems to have stopped working due to DRM. Whenever it booted up, it just displayed a big warning about how using it with non-MS-sanctioned builds is a violation of the license, and promptly locked up.
I used this for ages, and asked my team to use it, but I gave up in the end; without the remote SSH capabilities and devcontainers, Microsoft have a tight hold on using VSCode to the max.
On top of that, they seem to have relented on the telemetry and it can be fully disabled now (though I haven't tested the truth of "off").
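For reference, recent stock builds expose a single setting intended to gate all telemetry, with older builds using separate booleans; a minimal `settings.json` fragment (setting names are accurate for recent VS Code releases, but may change between versions):

```jsonc
{
  // "off" disables usage data and crash reports in current builds
  "telemetry.telemetryLevel": "off",

  // Legacy settings honored by older versions
  "telemetry.enableTelemetry": false,
  "telemetry.enableCrashReporter": false
}
```

Of course, whether "off" actually means off is exactly the trust question being debated here.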
I will say that the official devcontainer extension uses the open source devcontainer CLI to do the bulk of the work, so it can be used outside of any IDE. Plus, with Remote OSS, you can use the VSCodium remote host in a dev container. See my comment here for more https://news.ycombinator.com/item?id=37386025
That seems pretty elaborate just to maintain two different environments.
Doesn't VSCodium have something like browser profiles, where you can customize all kinds of things only in that profile? I use this in Firefox all the time, windows look different, etc.
I'm not sure if VS Codium has profiles like that, but Nix can be used to create them. There is support for configuring an installation of VS Code in the Nix language and using it as a package.
They recently added profiles in VSCode for this very use case. I have a bunch of them configured for different tech stacks (they only differ by extensions)
> So to use an extension in VSCode, it has to be published in Microsoft's store?
You can always download the extension (or build it yourself) and install it manually - using Code or Codium. You can use the Open VSX registry with Code, but you have to configure it: https://github.com/eclipse/openvsx/wiki/Using-Open-VSX-in-VS....
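The linked wiki essentially boils down to editing the `product.json` file shipped with your VS Code install so the gallery endpoints point at Open VSX instead of Microsoft's Marketplace. A sketch of the relevant keys (the exact file location varies by platform and the key names may shift between versions, so treat this as illustrative):

```jsonc
{
  "extensionsGallery": {
    // Open VSX endpoints replacing the default Microsoft Marketplace URLs
    "serviceUrl": "https://open-vsx.org/vscode/gallery",
    "itemUrl": "https://open-vsx.org/vscode/item"
  }
}
```

VSCodium ships with this configuration out of the box, which is why it pulls from Open VSX by default.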
So it technically does not have to be in MS' Marketplace, but 99% of Code users will not find or know of your extension if it isn't in the Marketplace.
IIRC there was a ticket somewhere in the issue tracker where someone explained in the comments how to get Pylance to work. Wasn't particularly difficult.
From my experience, the UI is very slightly different, with some options/functionality missing here and there (from what I remember, for example, you couldn't enlarge the font size of the menus).
The plugins won't work out of the gate, but you can install them manually and most of them will work.
It's not a 1:1 replacement, and from my limited experience, the UX of VSCodium was worse than that of VSCode. But, if you value not being tied to MS... That's the price you're gonna pay.
Are you sure you weren’t using an older version of one? I see no practical difference between code on macos and vscodium on Linux (except that I prefer Linux’s font renderer, and the ctrl key moved).
Same here ;) But now there's "8 GB and constantly swapping": Eclipse! I logged once on a dev server to test something, it was slow and I checked what was going on. On this 64 GB machine, 8 Eclipse users, each Eclipse using a tad over 8 GB, led to significant swapping. What a world!
Vscode is super easy to configure yourself. I'd argue probably easier to customize fully than emacs for the average user. That's the whole point of using it versus a fully fledged opinionated IDE.
The average user can barely hit the space bar with their forehead. Creating programming tools designed for the average user will always be a study in mediocrity.
Configuring emacs isn't a test of intelligence, it's a test of investment. I wish the mindset that conflates intelligence and investment would go away.
Software is going to have a bigger and bigger influence on everyone's life. The majority of this software is going to be written by engineers of average intelligence. Having tools that are easier for everyone to use will make your life better down the line too.
Learn emacs once and you're done - your customizations will stick around for years, decades. With the proprietary MS solution, you have no guarantee of this.
Le sigh, I guess some people want to be subject to the whims of Redmond for life.
Microsoft has always been terrible at naming things.
Their word processor is called Word. Their SQL server is called SQL Server. Their IM is called MSN Messenger/Windows Live Messenger/Messenger/Skype/Lync/Skype For Business/Teams
I'm not sure about Word, I think it's not such a bad name and it's pretty much a synonym for 'text editor' at this point, not sure you can do much better than that. Certainly not a worse name than 'Writer' (sounds a bit clunky), maybe on par with Pages.
It's not even Visual Studio proper - it's a completely different product spun out of MonoDevelop, which they acquired with Xamarin.
Anything Xamarin related (MAUI, VS for Mac) is an extreme level of garbage - such a low standard of quality is really doing the .NET/Visual Studio brands a disservice.
Seems that they've pretty much just bought Xamarin to kill it (besides actually properly porting .NET itself to other platforms which they could've done anyway). It seemed like a pretty cool product showing some promise 7-8 years ago.
> Even though we do not pass the telemetry build flags (and go out of our way to cripple the baked-in telemetry), Microsoft will still track usage by default.
This project is mostly useless, and I say that as someone who uses ungoogled chromium as their primary browser.
The only apparent practical reason for this to exist is to disable telemetry. But the telemetry built into VSCode can already be completely disabled by configuration. And if you don’t trust that configuration to do what it says on the tin you should also not trust a piece of software built from the same source.
With VSCodium you also aren't exposed to all the closed source extensions (e.g. Pylance). MS aggressively pushes proprietary extensions on everyone, which do heaven knows what to your data and machine. VS Code is for all practical purposes open core surrounded by a lot of closed source. This despite the fact that they like to pretend they altruistically love open source.
Same old MS as ever. Only trust what you can verify with that lot.
Interesting point, I’m using Codium exactly for this wrong reason then. In the end trusting or untrusting without the intellectual capacity required to verify is no different from faith. That’s maybe why Hundred Rabbits' approach to computing with uxn is so appealing.
I think trust is a strict synonym of faith, capacity to verify or no. Trust means to believe someone's claims, without certain 100% demonstrative proof. It ceases to be trust when you verify the claims yourself, but the capacity to verify makes no difference to what it is.
Not that trust is bad or anything. I remember hearing from someone that it used to be safe to keep house doors open, because people trusted each other, and that it's safe for a 6 year old girl to travel by train alone in some places. Can't help but feel we have lost something.