
If you were Pocket how would you handle the vulnerability created by having internal services hit user-supplied URLs?

Some ideas:

- Move the service doing the fetching to an untrusted network. At least it would be unable to access any internal services and any compromises there would be hopefully limited. You still have the problem that the local machines there could potentially be compromised.

- Validate / verify the URL to ensure it's not hitting anything internal. This sounds hard. Pre-resolve the name and check whether the IP is in an internal range? Seems easy to get out of date as your network changes. Make sure to repeat for any redirects? Is there a better way to validate?

- Ensure that all internal services require authentication. This also sounds hard and easy to miss something.
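A rough sketch of the second idea in Python (names here are illustrative, not anything Pocket actually runs): pre-resolve the hostname and refuse anything that doesn't land on a globally routable address. As the list above notes, this alone doesn't cover redirects or a hostile DNS zone.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def resolved_ips(url):
    """Resolve the URL's hostname to the set of IPs it points at."""
    host = urlparse(url).hostname
    if host is None:
        raise ValueError("URL has no hostname: %r" % url)
    infos = socket.getaddrinfo(host, None)
    return {ipaddress.ip_address(info[4][0]) for info in infos}

def is_safe_target(url):
    """Reject URLs whose hostname resolves to any non-public address."""
    try:
        ips = resolved_ips(url)
    except (ValueError, socket.gaierror):
        return False
    # is_global excludes loopback, RFC 1918, link-local, etc.
    return all(ip.is_global for ip in ips)
```

Checking every resolved address (not just the first) matters, because an attacker can mix a public A record with an internal one.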




This isn't new territory or groundbreaking research. This is Pocket not performing basic validation. jerf provided excellent information. Maintaining a small blacklist of internal, non-internet-routable, and private hostnames/IPs will work fine.

It actually isn't very difficult, but the repercussions of getting it wrong are scary.


Yep, we use HTTP::Tiny::Paranoid with additional blocklists for all external requests at FastMail. We still have to allow requests to hit our own external IPs, because our customers could be fetching public data from other customers - but those requests route out to the external network, and our machines don't trust the external network any more than they trust the rest of the internet.


As a FastMail customer, I'm glad to hear that.


Regarding point 2 (Validate), won't blocking the entire private IPv4 address space (10.x, 172.16.x-172.31.x, 169.254.x, 127.x, and so on) suffice?
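For what it's worth, you don't need to maintain those ranges by hand; Python's `ipaddress` module (or the equivalent in your stack) already knows them. A minimal predicate, as a sketch:

```python
import ipaddress

def is_internal(ip_str):
    """True for addresses a URL-fetching service should refuse to contact."""
    ip = ipaddress.ip_address(ip_str)
    return (
        ip.is_private        # 10/8, 172.16/12, 192.168/16, fc00::/7, ...
        or ip.is_loopback    # 127/8, ::1
        or ip.is_link_local  # 169.254/16 (EC2 metadata lives here), fe80::/10
        or ip.is_multicast
        or ip.is_reserved
        or ip.is_unspecified
    )
```

Note it's 172.16.0.0/12, not all of 172.x, and that IPv6 has its own loopback, link-local, and unique-local ranges to cover.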


It would, but it's quite easy to miss something in the actual implementation:

- How do you extract the hostname from the URL? If the algorithm isn't the same as the one used by your network lib, it might be possible to trick your check into checking the wrong hostname.

- You'd have to check for redirects.

- If you pre-resolve DNS hostnames for your check, and then let your network lib open another socket to the host, it might resolve to another (internal) IP, because the attacker might control the DNS zone of that host, returning 127.0.0.1 on every other request. You'd have to make sure to open a socket to the IP returned during the check.
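To make "open a socket to the IP returned during the check" concrete, here's one way to do it in Python with `http.client` (a sketch only: real code would also need TLS/SNI handling, timeouts tuned, and redirect handling, all of which are messier):

```python
import http.client
import ipaddress
import socket
from urllib.parse import urlparse

def fetch_pinned(url):
    """Fetch a plain-HTTP URL, resolving DNS exactly once and connecting
    to that vetted IP, so a rebinding attack can't swap in an internal
    address between the check and the actual request."""
    parts = urlparse(url)
    host = parts.hostname
    ip = socket.getaddrinfo(host, None)[0][4][0]
    if not ipaddress.ip_address(ip).is_global:
        raise ValueError("refusing to fetch internal address %s" % ip)
    # Connect to the IP we just checked; send the original name in Host.
    conn = http.client.HTTPConnection(ip, parts.port or 80, timeout=10)
    conn.request("GET", parts.path or "/", headers={"Host": host})
    return conn.getresponse()
```

The point is that no second DNS lookup ever happens: the socket goes to the address that passed the check, and the Host header carries the original name for virtual hosting.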

The safer option would be to work with iptables: https://news.ycombinator.com/item?id=10079554


There were also the addresses of the other internal services.

I think the best way here is to put the "fetch random URLs" service out in its own subnet, where it cannot access any other internal services, like the EC2 metadata service. You'll also have to validate the URLs (nothing but HTTP or HTTPS) and prevent things like the redirect attack from working.


Your last idea seems like the obvious approach to me. Don't blindly trust stuff just because it's local. That seems completely insane. I'd hope it wouldn't be easy to miss something, because you'd set up everything to require authentication.


Re: point number 2, libraries like https://github.com/bhuga/faraday-restrict-ip-addresses (Ruby) help with that.



