
This is one of the reasons I am *very* hesitant with VS Code extensions and JetBrains plugins. The absolute minimum is strictly enforced on all my machines. Ditto for project dependencies (npm, PyPI, Gradle, etc.).

However, the way things are going, news of these vulnerabilities / incidents will be used to push Codespaces (IDEs in the cloud) on enterprises -- and many companies will fall for it.

I guess software engineers and technical experts cannot be trusted anymore to keep their machines safe. :-/




I think if it's a large org you should treat engineer machines as threat vectors by default, PoLP and all that jazz.

Someone already posted here how they were able to use pip to hijack Google developer machines, because the default on those machines was to resolve packages from the public repo first (even for private packages). Google just closed/ignored the issue because it was the engineers' problem and the official build was set up to resolve correctly (this is my from-memory summary).
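
For anyone unfamiliar, that class of attack (dependency confusion) works because resolvers like pip will happily consult a public index alongside a private one and take whichever has the higher version. A rough sketch of the client-side mitigation, with a placeholder internal URL:

    # Placeholder URL; a real setup points at a private index (or a mirror
    # that proxies only approved public packages).
    pip install --index-url https://pypi.internal.example.com/simple/ some-internal-package
    # Avoid --extra-index-url: pip treats all configured indexes as equal and
    # takes the highest version it finds anywhere, which is the confusion vector.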


If you apply the principle of least privilege to developers, then ideally you should have a whitelist of every software package that they need to use. What happens then when a productive developer, instead of developing from scratch, searches for a solution to some problem and discovers that there is already a module that may solve it? Do they go to some central committee to get approval to add it to the whitelist? What are their criteria? How long will it take them to approve it?

Suppose it takes a couple of days. Then the developer tries the module. Discovers immediately that the module doesn't solve the problem. That's two days wasted for nothing.

Suppose the module has dependencies on a huge list of other modules. How long will it take now?

I'm not saying PoLP isn't valid. But is it practical?


> Do they go to some central committee to get approval to add it to the whitelist? What are their criteria? How long will it take them to approve it?

Yes, depends on the org, depends on the org.

Introducing third party dependencies should not be a single person decision.


Oh, yeah. That's how I have to treat large orgs as a contractor. :) And trouble starts almost immediately, because they apply PoLP only to you...

The first thing with a new client is usually some form of VPN access. Even with open protocols, it's challenging to secure. E.g. by default, running openvpn with a random config provided by a third party allows that third party to push any network setup they want remotely. There's no whitelist, etc. It takes quite a bit of effort to run openvpn as an unprivileged user and make it do all network setup via a trusted setuid helper that can whitelist allowed network configurations.
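
For what it's worth, stock openvpn does have a few client-side knobs that blunt the worst of this, even without a setuid-helper setup; a rough partial-mitigation sketch (the config file is whatever the client provided):

    # Accept the tunnel but refuse routes and gateway redirects pushed by the
    # server, and drop privileges after startup.
    openvpn --config client-provided.ovpn \
            --route-nopull \
            --pull-filter ignore "redirect-gateway" \
            --script-security 1 \
            --user nobody --group nogroup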

And oftentimes the VPN has to be some closed-source garbage. Management daemons for these require high privileges and take remote commands on how to reconfigure the network and god knows what else. They also can't deal with any non-basic networking setup. The first such VPN solution I had to briefly deal with (before telling the client that we'd want something secure for both sides, and that it was going to be a fixed wireguard config) was some Linux binary blob that I checked in ghidra before running. One of the first things it did was scan the system for USB devices, and it had other remote management functionality, hidden to the end user (probably not to the buyer), that was absolutely irrelevant to VPN software.

Devs need to apply PoLP to their clients, too. Otherwise it's quite easy to accidentally route the networks of multiple different clients together via a dev's machine.


I use a VM for each client. I host it on my desktop and then when I need to work from a laptop I just SSH into the VM.

This way I never "forget to turn off the client VPN" and similar BS, and my client files don't get mixed up, etc.


That works for a client or two. I'm already at ~20 and there will eventually be hundreds. Managing all that via random VMs and VPN solutions (I've had some require a smartphone app, plus one-time codes and PINs just to connect to the VPN) would be sheer craziness if every client were allowed their own VPN solution and network setup.


>I guess software engineers and technical experts cannot be trusted anymore to keep their machines safe. :-/

When could SEs ever be trusted?

We run so much 3rd party code that it would be insane to expect SEs to verify it.

The security industry is also heavy on bullshit.

Instead of performing reviews, they run some "scanners" and fill in checkboxes.


My God, the list of npm dependencies some projects I've worked on had.

Endless.

Anyway, it could have been me. I don't inspect vim plugins before install, generally.

Security is hard. Even if you're an expert, it's a lot of work.


As a bare minimum security measure, when using plugins (all 9 of them), my Vim runs in a bubblewrap sandbox with only my project folder mounted as writable. Network and IPC access is completely disabled. It is secure enough to stop practically all non-targeted attacks.
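
Not my exact setup, but a minimal sketch of what such a sandbox can look like with bubblewrap (paths are examples):

    # Read-only system and vim config, writable project dir only,
    # no network or IPC (--unshare-all covers both).
    bwrap --ro-bind /usr /usr \
          --symlink usr/bin /bin \
          --symlink usr/lib /lib \
          --symlink usr/lib64 /lib64 \
          --ro-bind /etc /etc \
          --proc /proc --dev /dev --tmpfs /tmp \
          --ro-bind "$HOME/.vim" "$HOME/.vim" \
          --ro-bind "$HOME/.vimrc" "$HOME/.vimrc" \
          --bind "$HOME/projects/myproject" "$HOME/projects/myproject" \
          --unshare-all \
          --die-with-parent \
          vim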

Generally I try to install plugins whose authors I know. And whenever I update them (once a year) I re-read the entire source code. Some small plugins I just integrate in vimrc directly.

I hate to say this, but Vim isn't the most secure editor, considering features like modelines, which some environments enable by default, and an aggressive plugin-installing culture.


So the answer is Emacs, but we want features and not to be locked in. Don't run proprietary crap, trust Elisp repos like ELPA and NonGNU ELPA, and you will be mostly safe.


Isn't MELPA just serving the latest git master of whatever the repo happens to contain at the time package-refresh-contents was called? With MELPA Stable likewise just serving the latest tag? That doesn't spell trust.


Using Emacs is not going to help you avoid supply-chain attacks per se. What it might do, however, is give you unparalleled power to inspect your environment - calls and source. If you run untrusted code you are exposed, and that's that. Development tools should assume that you, a programmer, know what you are programming.

Emacs and Lisp are focused on providing power, not security. The two often do not go hand in hand.


> What it might do, however, is give you unparalleled power to inspect your environment [...]

The "read the source" argument. It doesn't scale. I don't have 17 lifetimes to study a single release of every bit of software I run.

I really do appreciate Emacs for the introspection capabilities, but it's not a solution to the trust chain issue.


It scales to "don't run untrusted code if you are concerned about security"


> don't run untrusted code

The entire point of this thread is how a chain of trust should be maintained. "Don't run untrusted code" is skipping from the question straight to a hypothetical world where an answer has already been established.

"How to live long" - "don't die".


MELPA is not ELPA.


I always give source code a glance, unless it's by a sufficiently prominent and reputable maintainer.


Do you check _their_ dependencies though? And do you check every file?


I don't check every file but I use very sophisticated proprietary heuristics such as "intuition" and "hunch" for how far to dig.

I use vim, so dependencies are explicit. But when using npm packages at work I give the dependencies a look before I look anywhere else. An unfamiliar dependency gets looked at. It's easier since npm's website allows inspecting package code.

It's a very imperfect process.
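
The same kind of glance also works from the CLI without touching node_modules; the package name below is just an example:

    # See what you'd be pulling in, and who publishes it.
    npm view left-pad dependencies maintainers
    # Fetch the published tarball without installing it, then read the code.
    npm pack left-pad
    tar -tzf left-pad-*.tgz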


Have you ever caught anything?


No, that would be a different story :) I ended up not using dozens of plugins and libs after a look at their dependencies and code, though.


Emacs has deps on libraries such as pdf-tools (mupdf) and telega (tdlib), but these are installed from the OS repos, so they are more trustworthy.


Only a handful of Vim plugins have dependencies, and even then you need to install them explicitly.


> I guess software engineers and technical experts cannot be trusted anymore to keep their machines safe.

Were they ever? It's a big set of folks, and while some of them were and are competent, an even larger subset isn't.


To add salt to the wound, their machines often have more rights/access too, making the impact that much greater.


There must be a huge market for "audited and validated" subsets of the major package managers. For a monthly fee you have access to a secure version where all dependencies are checked (manually or automatically) for vulnerabilities, and where no new packages or versions can be added without a human having eyes on them.

Throw in a credits or fees system where you can request, for a cost, that an unaudited package be added to the subset, but then it's available for everyone.


Sure, but the business model for the entity providing that sucks. Practically infinite amounts of possible exploits and extremely finite resources to detect them. Either that or you are back to where you started with a web of trust.


I agree that would be a tough business model. Even for a relatively small package set like VS Code plugins there must be many thousands of releases to check every year and the potential market of paying customers for the tool is limited. Maybe it could work if some of the tech giants sponsored it?

For the wider problem of depending on external packages and managers like pip or npm I don't see how anyone could realistically keep up with the scale of releases that would need to be checked. You would need far fewer packages from far fewer sources with far less frequent releases for this to be a viable strategy. That might be nicer for developers for other reasons as well but it's not the world we live in today.


> Maybe it could work if some of the tech giants sponsored it

It's not about them sponsoring it, that frames it wrong. They need to use it; they have security budgets in the tens of millions, and they will already be doing some auditing of their own. A vendor can provide that service to the wider market.


I could be a "risk assesment" service. For a given package it could run an automated web-of-trust on it together with an analysis of past history of vulnerabilities and of its mantainers.

You can also add watchers to check who is allowed to publish new versions and see when that list changes.

Even without looking at the code you could gather a useful report.
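
The watcher part in particular is mostly scripting around data the registries already expose; a rough sketch against npm (the package name is just an example):

    # Snapshot who is allowed to publish and diff it against the last snapshot.
    npm view express maintainers --json > maintainers.new
    diff maintainers.old maintainers.new || echo "publisher list changed"
    mv maintainers.new maintainers.old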


All it takes is one tired/careless/unlucky dev or IT engineer to get their machine owned, at minimum resulting in an extensive and tiring incident response and forensic verification to confirm nothing else happened, bearing in mind that once the attackers get a foothold they'll try to blend in.


> I guess software engineers and technical experts cannot be trusted anymore to keep their machines safe. :-/

I think we never could in the first place? While we are more cautious than the average user, we might occasionally shoot ourselves in the foot. That's part of our job.

The extensions shown in this example would not have ended up on my machine, simply because of the red flags they come with.


Are we more cautious? We might not fall for the old scam of extension bars in the browser and approving spam notifications, but I'm sure plenty of people would blindly follow a tutorial to run commands in the terminal and install dependencies to run code.

The most recent example was probably Win 11 replacing the status bar and people recommending all kinds of anonymous software on GitHub. It's open source and works, so it must be alright.


Plenty of popular developer-friendly tools have installation instructions that involve sudo, curl, and piping to sh. That says everything we need to know. But if it didn't, then the way many developers will casually install packages from untrusted third parties, when the installation scripts themselves could do almost anything, says the rest.

In addition, developer PCs often have more privileges than a typical office worker's. That's legitimately useful for our work, but it also means compromising a developer machine is a bigger risk. We're a nightmare for any organisation that wants proper IT security.



