A little tip for people trying to armor themselves against this problem: If your app reaches out to do network transactions, it really ought to block localhost. However, bear in mind that localhost isn't "127.0.0.1"... it's "127.0.0.0/8" (or "127.x.x.x" if you don't casually speak CIDR). Ping 127.2.88.33 on your console now... you'll see replies.
On the flip side, if you're doing a security test like this, I've gotten mileage out of convincing apps to access local resources with things like 127.88.23.245, precisely because the developer blocked 127.0.0.1 specifically and thought they were done.
You should also generally block all of your own network's internal and external IPs, though especially in the cloud this can get tricky. Still, you should.
Also worth noting: don't just regex-check the URL with (localhost|127.*) or something similar. Any hostname could point to 127.0.0.1.
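For anyone implementing this, a minimal sketch (Python, hypothetical function name) of doing the check on the resolved address rather than on the URL string:

```python
import ipaddress
import socket
from urllib.parse import urlsplit

def is_loopback_url(url):
    """Resolve the URL's hostname and check whether any resulting
    address is loopback (127.0.0.0/8 or ::1)."""
    host = urlsplit(url).hostname
    if host is None:
        return True  # unparseable URL: treat as unsafe
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # unresolvable: treat as unsafe
    for info in infos:
        addr = info[4][0].split("%")[0]  # strip IPv6 scope id if any
        if ipaddress.ip_address(addr).is_loopback:
            return True
    return False

# A regex like (localhost|127\.0\.0\.1) misses both of these:
print(is_loopback_url("http://127.2.88.33/"))  # True: still in 127/8
print(is_loopback_url("http://[::1]:8080/"))   # True: IPv6 loopback
```

Same caveat as below in the thread, though: if the fetching library resolves the name again after this check, a hostile DNS zone can still answer differently the second time.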
iptables with --uid-owner denying traffic to local/private IP space (plus infrastructure-specific stuff like EC2's instance metadata service) would probably be the best option.
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
# Create a chain for outgoing proxy traffic
-N PROXY_OUT
-A OUTPUT -m owner --uid-owner proxy -j PROXY_OUT
# Allow replies to the requestor
-A PROXY_OUT -p tcp --sport 8080 -j ACCEPT
# Prevent proxy from talking to anything non HTTP{,s} (and DNS)
-A PROXY_OUT -p tcp -m multiport ! --dports 80,443,53 -j DROP
# Allow DNS udp
-A PROXY_OUT -p udp --dport 53 -j ACCEPT
# Allow the proxy to reach specific private IPs (demo servers)
-A PROXY_OUT -d xxx.xxx.xxx.xxx/32 -j ACCEPT
# Prevent proxy from talking to anything private
-A PROXY_OUT ! -o <%= @public_iface %> -j DROP
-A PROXY_OUT -d 10.0.0.0/8 -j DROP
-A PROXY_OUT -d 172.16.0.0/12 -j DROP
-A PROXY_OUT -d 192.168.0.0/16 -j DROP
# Prevent proxy from talking to services via public ips
<% @aws_public_ips.each do |name, facts| %>
# <%= name %>
-A PROXY_OUT -d <%= facts['ec2_public_ipv4'] %>/32 -j DROP
<% end %>
Anything I missed? Blocking outgoing ports is to taste.
That seems to cover everything in IPv4, but if you have IPv6 enabled, you probably want fd00::/8, fe80::/10 and ::1/128 as well (disclaimer: my knowledge of IPv6 is rather limited.)
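Putting the v4 and v6 lists from this subthread together, a sketch of the address-side check in Python (note fd00::/8 sits inside fc00::/7, the full unique-local block; this list is not exhaustive -- see the IANA special-purpose registries):

```python
import ipaddress

# Ranges discussed above: IPv4 loopback/private/link-local, plus
# their IPv6 equivalents.
BLOCKED_NETS = [ipaddress.ip_network(n) for n in (
    "127.0.0.0/8",     # loopback (all of it, not just 127.0.0.1)
    "10.0.0.0/8",      # RFC 1918 private
    "172.16.0.0/12",   # RFC 1918 private
    "192.168.0.0/16",  # RFC 1918 private
    "169.254.0.0/16",  # link-local (includes EC2 instance metadata)
    "::1/128",         # IPv6 loopback
    "fc00::/7",        # IPv6 unique local (contains fd00::/8)
    "fe80::/10",       # IPv6 link-local
)]

def ip_is_blocked(ip_str):
    """True if the address falls in any of the blocked ranges.
    Membership tests across IP versions simply return False."""
    ip = ipaddress.ip_address(ip_str)
    return any(ip in net for net in BLOCKED_NETS)
```

You would still add your own public IPs on top of this, as the iptables rules above do.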
that is how I got around all the internet filters in our high school. nothing makes you feel more like a hacker than punching some numbers on a calculator while all your friends watch, and bam, you're browsing forbidden websites while they all say "whoaa!"
localhost.fbi.gov is a great example. (Also a great way to exfiltrate fbi.gov cookies on a multi-user system, but that's another discussion completely.)
Great point. Also, possibly all reserved IPv4 spaces (see the Apache proxy mentioned in the article), not just 127/localhost (and don't forget 169.254/16 as the article points out). It's hard to do this right, since there might be use cases where local networks should be allowed, but better to err on the side of caution.
This seems a little like playing whack-a-mole security. Is there a comprehensive list of what should be blocked by default unless your app specifically needs it?
If you run an open proxy, then you're choosing to give away any privileges you've received on the basis of your network position or IP. By default, one should avoid running open proxies.
They should have run the proxy on a host that they didn't give any special IP-based privileges to. The other nodes shouldn't honor X-Forwarded-For headers from the IP hosting the proxy.
The next big mistake is configuring your packet filters and application ACLs to only filter requests coming in from the internet; they can still make requests to 54.175.131.207 or whatever the IP is and the source of the request packets will be from the loopback interface.
And the IPv6 equivalent is "::1". Did they even set up basic packet filtering or ACLs for IPv6 requests? If they have no public IPv6 address, they probably never did.
Quick question. Is this a field that layman web developers/script developers should know? Or is this more of the domain of information security/network security? I don't make apps as a product, but I do like to tinker with stuff and make projects that sometimes my friends use. Not really sure what counts as a network transaction.
> Ping 127.2.88.33 on your console now... you'll see replies.
People's results here might vary depending on their operating system. For me, these pings were returned on a vagrant box running ubuntu, but dropped (not blocked) on the host system.
I don't think this is true on Windows. The hostname is tied to the IP which is tied to the loopback adapter and it defaults to the IPv6 addr anyway. The hosts file has it commented out as the literal 127.0.0.1 for those who need it on IPv4.
"Pocket does not provide monetary compensation for any identified or possible vulnerability."
Cheapskates. This could have cost them money if somebody abusive had discovered it first. He deserves a monetary award.
[edit] Should we be concerned about the massive number of people listed on that page who have found security problems with Pocket? I counted 153 separate people...
It's a freemium service that doesn't depend on ad revenue. Maybe they're not made of money as Facebook and Google are and that's the reason for not compensating.
This isn't some free service being run out of the goodness of somebody's heart. This is a company that wants to make money. If a company has that many employees, they can afford a bug bounty program too.
I didn't say it was being run out of the goodness of someone's heart. That's why they need money, to pay the bills. Having infrastructure to deal with that many users on a popular service doesn't come cheap.
Mozilla claims that they don't get money for the integration, and I don't see a good reason why I shouldn't believe them. Pocket very likely doesn't have that much money lying around, compared to Mozilla's budget.
It sounds a lot more like the situation was "Pocket was friends with some guys at Mozilla, and they made up a bunch of bullshit about how it was a benefit to users to push their buddies' project"
"Mozilla bundles crapware for financial gain" v. "Mozilla bundles crapware without any financial incentive" is a more accurate comparison, and highlights that something is indeed wonky over in Mozillaland.
Yes, but Pocket is integrated with Firefox. Pocket and the Mozilla Foundation formed a partnership that almost certainly includes some sort of compensation or resources in exchange for Pocket's service.
Yes. Notice the corollary: if they had a bug bounty, we wouldn't be learning about this today, because someone who wants to make money probably would have already found them.
You shouldn't do work based on "they haven't had a bug bounty before now but maybe they will make one just for me."
Exploits in a proprietary cloud service like this are harder to sell; at least some acquisition programs are expressly uninterested in them. I wonder what effect that has on bug bounties.
They also don't really seem to put much effort into releasing bugfixes in a timely manner. For a month and a half, you could reboot any iPhone in the world remotely via text message.
Heaven forbid we ask of a corporation that they compensate us for a job they should have done and preventing huge losses and a PR nightmare.
Had I been OP I would have sold the exploits with no remorse, we don't need companies like that on the market, rewarding corporations for bad behaviour with free labour is fucked up on every level.
Maybe they should have a bounty program, but they don't, and they made that clear. They didn't ask anyone to perform "free labor," and I don't think they have any obligation to pay for labor they didn't ask to be performed. I don't see anything unethical in their behavior. Your beef seems more to be with the researcher who decided to perform free labor for this company.
Selling exploits to the black market does strike me as unethical, though.
>Selling exploits to the black market does strike me as unethical, though.
If I gave them away and they get exploited is it unethical? If I gave them away publicly vs privately does it make a difference? If I responsibly reported to the manufacturer a month previously?
I guess intent matters, and I'm ok with that being the line. In the end though I feel the rewards for fast production far outweigh the consequences of security bugs. My firefox updated last night and added pocket and I was reminded that I couldn't even remove the useless bloody thing.
> If I responsibly reported to the manufacturer a month previously?
I'm not an expert on responsible disclosure, and I think reasonable people can disagree on the best course of action. I think selling exploits on the black market is always unethical.
> I couldn't even remove the useless bloody thing.
I'm a photographer and freelance software developer, I know all about not doing work for free and what that does to our industries.
This isn't doing work for free, this is a hobby. The time spent by this person was not requested by Pocket and they shouldn't be maligned just because they're not paying him.
Also, "sold the exploits with no remorse" really makes you sound like a troll. So I'm feeding.
If you're that worried about compensation, the correct answer is to determine up front whether there's a bounty program, and if not, move on. Not to do work you had every reason to believe would not be compensated, then complain when it isn't compensated.
You can ask them for a bounty program in the future, but you don't really have a right to get upset when they don't honor a bounty that was never promised. It's like demanding a reward for returning a dropped wallet.
Okay, so forget the word felony. "You would have aided/abetted a malicious criminal and profited monetarily by doing so?" My point is that selling an exploit doesn't become morally okay just because the company doesn't offer a bounty.
You say it's subjective and then the reason you give is that some people would, knowing that it's criminal, still commit a crime.
It looks like their view of the activity isn't that it's fine to do, rather that they don't care that it's not fine. This implies that they know and accept that it's not fine.
Yet another service discovered which was built/deployed with no regard for security whatsoever. I'm beginning to realize - this is the norm. Security is the least important thing for most of the IT companies.
I guess the DevOps trend (i.e. not hiring sysadmins) should take its share of blame. Or maybe it's the other way around - you don't care for security, so there is no point in hiring security experts?
If team principals are mostly young and don't have exposure to very specific kind of experiences dealing with threat modelling, then the security work will always take a backseat to feature-building. It's how humans work - it's hard to cut time from nice visible things and dedicate it to a hypothetical (at that point) abstract goal like security. Everyone agrees that the latter is important, but it just never ends up getting any oxygen. What's worse, if the team does not know what tightening systems feels like, they don't even know where to start. No "muscle memory" for it.
Maybe at some point we'll get more baseline collective wisdom about it throughout the industry, but it will also take the people signing the cheques (CEO, investors) having a bit more (justified) paranoia and respect for these priorities, and consequently requiring them from the outset. And DevOps has little to do with it - security has to be established as a priority from the leadership levels on down anyway because it necessarily means reducing time spent on more immediately-visible things.
We are not talking about some complex interactions between multiple components which lead to a security vulnerability. This is some trivial stuff like "don't give your passwords to anybody" or "don't run everything as root".
The most complex vulnerability mentioned in the article is with proxying. If you have opened /etc/squid/squid.conf at least once you should have noticed the to_localhost ACL and the comments which explain why it is important. So is the Pocket team building a multi-million user service which has a proxying component without trying to configure squid once? Absolutely!
Also, I think it's too optimistic to hope for the situation to improve - for now it's steadily moving in the opposite direction.
It's not the DevOps philosophy that's to blame. That philosophy is primarily about the utilization of automation to drive deployments. This is a great way to get better security, but you still have to prioritize security when you're writing your automation. The problem is that investors, managers, and developers all prioritize features. People are graded and rated on bullet points not on attack surface.
> It's not the DevOps philosophy that's to blame. That philosophy is primarily about the utilization of automation to drive deployments.
In theory it is. In practise it sometimes seems to be seized on by people who are sick of listening to those fuddy-duddy infrastructure types bang on about security and firewalls and ug, so boring bro. Crush code!
Oh Mozilla, why couldn't you resist the money. Your recent so called "services" are not welcome. You know it. But well, money makes the world go around.
How much did Telefonica pay you for the Hello integration?
When we decided to add a reading list to Firefox, we had two options: build and maintain our own service and integrations, or partner with an established player who had sane privacy and data access policies. We chose the latter.
Pocket began life as a Firefox add-on, and is now integrated into literally hundreds of applications. Embracing that is a reasonable choice in terms of sustainability and value for users.
For what it's worth, I do not believe any money changed hands in the Pocket deal, but I don't know that authoritatively.
And that's what it should have remained! Promote it all you like on your websites, but please let it, and all features like it, be separate from the main web browsing functionality, because far from every one of your users wants or needs this!
I, and I'm sure many other Firefox users, am deeply disappointed by the Mozilla track record of including questionable features such as this. That's why I have already started [1] figuring out how to switch to the ESR channel by default on all systems that I have control over: I want the security updates, but I don't have the time and effort necessary to browse through all the crap features Mozilla introduces and figure out how to tweak and/or disable them on a six-week basis.
I couldn't care less about their being a third party--we already have that with the search bar (and it's been promised that Pocket integration was just a step towards a more general Reading List API, though I don't know how well that's been followed through with). What annoys me is that baking features like Hello and Pocket into the browser doesn't strike me as "promoting openness, innovation & opportunity on the Web," it just seems like a desperate attempt to regain market share.
I believe the root of this change-in-mission is well expressed in a post by David Rajchenbach-Teller [1]:
> While I personally want a browser that is fast, small, reliable and trustworthy, we have market research that shows us that you and I are a minority. More precisely, we have market numbers that shows that users want a Pocket-like feature and are not going to bother checking if there are add-ons that implement it.
I would be very surprised if the kind of users who won't check for addons are also the kinds of users who would go out of the way to change their browser (... or even know what a browser is).
And as a long-time user of Firefox, this is what got me to stop using Firefox.
There have been tons of people asking for the removal of Pocket from Firefox, including hundreds of posts in the Mozilla governance forum asking for its removal, but Mozilla sits there doing nothing, claiming "sane privacy" and "oh you can disable it by going to your about:config and finding this key and setting it to false". Package it as an extension that can be uninstalled, or get rid of it completely.
That is in fact the plan now, after the feedback Mozilla received on this:
> "Folks said that Pocket should have been a bundled add-on that could have been more easily removed entirely from the browser. We tend to agree with that, and fixing that for Pocket and any future partner integrations is one concrete piece of engineering work we need to get done." https://mail.mozilla.org/pipermail/firefox-dev/2015-July/003...
> There's been tons of people asking for the removal of Pocket from Firefox
That's true. However, we have surveyed large swaths of our users and found that for all the people that dislike the Pocket integration, several times as many actively do like it. They're not you or me, but barring a significant sampling error, it means the dissenters are a vocal but small minority. Of course, both classes are overshadowed by the proportion of users who don't care one way or another.
> "oh you can disable it by going to your about:config and finding this key and setting it to false"
That's false. From day one, you could disable Pocket entirely by right clicking on it and choosing "Remove from Toolbar" or by dragging it off the toolbar in the customization mode.
Baked-in browser add-ons shouldn't be a popularity contest. I'm sure a Facebook add-on would make tons of people happy as well. Yet you don't have that built in.
I'm sorry, but this logic doesn't make much sense when you consider the fact that it's a third party service which may not be there next quarter or even next year. How about you just do the right thing and unbundle the extension? Make it an opt-in by default. Not an opt-out.
I'd like to add that there's another issue with bundling third party services like Pocket with a browser. You can't audit the internal security of the service. So, if another security vulnerability is found in Pocket you can't be sure if it's just the extension code (this you can audit easily) or the service itself (this not so much).
Instead of bundling it as baked in to Firefox why not just make it part of a "recommended extensions" section of the installer? Not only do you make the users that do use Pocket happy since it can be installed by default (just make us non-Pocket users uncheck the install extension(s) check box) but it gives users that don't want to use Pocket an option to not install it. And it gives developers a chance to vet extensions they feel may be good to add to Firefox to streamline the install experience such as suggesting password manager extensions like LastPass, Dashlane, or 1Password. Or any other highly useful extension (Privacy Badger, uBlock Origin).
It just seems to me the whole idea of making Firefox a convenient browser has become the kitchen sink approach to the problem rather than focusing on what is the essential web (supporting HTML, CSS, and JS standards). The rest is literally optional to use the web.
I'm confused by your argument that because the service's status may change, it shouldn't be bundled. Couldn't they update to remove Pocket if the service goes away? It's not like Firefox is going to stop development anytime soon.
Pocket is not Facebook. Pocket can go out of business far sooner than Facebook ever will. Bundling Pocket actively promotes Pocket and gets it more users, which is why some Firefox users suspect there was a money deal involved. After all, Firefox doesn't make Yahoo the default search in North America without Yahoo paying at least what Google was paying.
> However, we have surveyed large swaths of our users and found that for all the people that dislike the Pocket integration, several times as many actively do like it.
Do you have any information to share publicly regarding that survey? Like the sample size, how it was done and things like that.
(Personally, I've disabled the Pocket Integration in about:config)
The specific numbers were shared during a keynote at Mozilla's June 2015 all-hands, but I can't seem to find a public recording or data from that session. My apologies.
And as another long-time user of Firefox, I'm grateful that you keep adding useful things like Pocket. Any news on when the integration ships in non-US locales? That one had me stumped for a bit where I got Pocket on one box but not the other :)
I use Firefox partly because I thought Mozilla's definition of "sane privacy and data access policies" involved client-side encryption. My parents understood that Mozilla couldn't see their bookmarks; now they understand that Mozilla can see some of their bookmarks but aren't sure which. Even if I didn't care about my personal privacy, I still couldn't use Pocket at work because of company policy.
Bloat like Pocket/Hello is a trap. The original Mozilla suite had e-mail, a newsgroup reader, IRC chat and HTML development tools all embedded. Sounds useful, but who used it? Only power users and enthusiasts. I imagine a share of users are enthusiastic about Pocket, but it also subtly makes the whole browser less attractive.
Complexity intimidates and scares off users and makes software harder to use. All of Firefox's competitors are extremely streamlined and simple, and that works. Firefox got popular initially because it was simple. Chrome became popular fast because it was faster and more streamlined than Firefox. And by integrating all these extra services, Firefox makes itself less streamlined and less usable.
I stick with firefox because chrome has some dubious privacy defaults, but that's really the only advantage firefox has over its competitors from my perspective.
fwiw, I actually use Firefox (I say that because I assume most of the complainers don't) and was thinking of making my own reading-list service, but Firefox integrating with Pocket saved me the trouble. I use the integration many times a day and it's great. Not sure why this gets so much negative attention (it's not like it's spying on you if you don't use it).
No money changed hands. Mozilla integrated Pocket because they wanted such a feature in their browser (apparently, so did users), and Pocket already existed (better than reimplementing it from scratch)
That actually makes it worse. At least if there had been a transaction between the two companies, I would've understood Mozilla's decision. Now it makes zero sense.
It makes sense to me. It was a feature they wanted to add to Sync, but they decided not to reinvent the wheel.
As far as "third party services" in Firefox goes, this isn't different from the search bar. Both don't do anything until you use them, and both talk to proprietary services. It's just that search is used more than link-saving is.
That's true to an extent, but with the search bar you have a choice. I can use Google (or Yahoo Search, since that's been the default option for some time now) to get better results but less privacy, or I can use DDG to get slightly worse results but much more privacy. I can also opt-in or opt-out for autocomplete. With Pocket you have zero options. You can't even completely remove the so called feature from your browser.
I don't think Firefox users were complaining about the lack of read-later feature in Firefox. "Reader View" is a pretty great feature without Pocket and as far as I know it's all done locally without the risk of leaking your data.
Also, I think most people understand that the Google icon in the search bar and the Google result page mean that Google sees their search terms. The Pocket branding is less distinct, and everyone I've talked to outside of Mozilla expects the reading list to be like history or bookmarks, not searches.
To be fair, though, I think a reading list is a very useful feature.
IIRC this was basically the "other half" of reader view, though I could be wrong.
Yeah, I'd like Pocket to have switchable backends too. I've heard some positive things on this, but I don't recall anything concrete.
"Completely remove" is a pretty nebulous concept, really. You can drag it out of sight to the customizable UI holding area, and the code is lazy-loaded, so it's pretty much gone. One could also argue that any extension isn't gone because one can open the addons page and install it again. Yes, that's different and the addons page is on the Internet while customizeable UI isn't, but it illustrates the point that "completely remove" is nebulous and not a useful metric. For all practical purposes, you can remove Pocket. Does it matter if there's still Pocket related code in the Firefox binary? (which isn't being run or even accessed?) Probably not.
One way would be to keep forking Firefox's codebase for bug fixes and new features while customizing it to fit the needs of the privacy conscious users. It may not be "ethical" but it will serve the main premises of open source and is also doable as a community. A new name would also be required as Firefox is trademarked by Mozilla (See Iceweasel Browser [1]).
FWIW: Reading this on Pale Moon which is basically a rebranded fork of Firefox before Aurora.
Can't say anything about Pale Moon's security, but I like the old design better, the team seems more responsive, and they haven't yet (had the chance to) give in to hypocrites and fire Brendan Eich.
Really interesting write up! I'm surprised they are still running in EC2-Classic. However, even if they are, security groups should still be restrictive enough to prevent some of the things discussed. For example, bypassing the load balancer shouldn't be possible. A security group applied to the back end instances should only allow HTTP/S traffic from the load balancer group. SSH security groups should only allow inbound traffic from known IPs (like the office network), etc. Unfortunately, not enough people do this, and once you can query instance meta data or obtain an SSH key, it's game over.
If you were Pocket how would you handle the vulnerability created by having internal services hit user-supplied URLs?
Some ideas:
- Move the service doing the fetching to an untrusted network. At least it would be unable to access any internal services and any compromises there would be hopefully limited. You still have the problem that the local machines there could potentially be compromised.
- Validate / verify the URL to ensure it's not hitting anything internal. This sounds hard. Pre-resolve the name and check to see if the IP is in an internal range? Seems easy to get out of date as your network changes. Make sure to repeat for any redirects? Is there a better way to validate?
- Ensure that all internal services require authentication. This also sounds hard and easy to miss something.
This isn't new territory or groundbreaking research. This is Pocket not performing basic validation techniques. jerf provided excellent information. Maintaining a small blacklist of internal, non-internet routable, and private hostnames/IPs will work fine.
It actually isn't very difficult, but the repercussions of getting it wrong are scary.
Yep, we use HTTP::Tiny::Paranoid with additional blocklists for all external requests at FastMail. We have to allow requests to hit our external IPs still, because our customers could be fetching public data from other customers - but the requests route out to the external network, and our machines don't trust the external network more than they trust any other part of the internet.
It would, but it's quite easy to miss something in the actual implementation:
- How do you extract the hostname from the URL? If the algorithm isn't the same as the one used by your network lib, it might be possible to trick your check into checking the wrong hostname.
- You'd have to check for redirects.
- If you pre-resolve DNS hostnames for your check, and then let your network lib open another socket to the host, it might resolve to another (internal) IP, because the attacker might control the DNS zone of that host, returning 127.0.0.1 on every other request. You'd have to make sure to open a socket to the IP returned during the check.
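A sketch of that last point in Python (plain HTTP only, hypothetical helper name): validate the resolved IP, then hand that exact IP to the HTTP client so the name is never resolved a second time:

```python
import http.client
import ipaddress
import socket
from urllib.parse import urlsplit

def fetch_checked(url):
    """Resolve once, validate the IP, then connect to that exact IP --
    never let the HTTP library resolve the name again, or a malicious
    DNS zone can answer differently next time (DNS rebinding)."""
    parts = urlsplit(url)
    if parts.scheme != "http":
        raise ValueError("only plain http shown in this sketch")
    host = parts.hostname
    port = parts.port or 80
    ip = socket.getaddrinfo(host, port)[0][4][0]
    addr = ipaddress.ip_address(ip.split("%")[0])
    if addr.is_loopback or addr.is_private or addr.is_link_local:
        raise ValueError("refusing to fetch internal address: %s" % ip)
    conn = http.client.HTTPConnection(ip, port, timeout=10)
    # Send the original hostname so virtual hosting still works.
    conn.putrequest("GET", parts.path or "/", skip_host=True)
    conn.putheader("Host", host)
    conn.endheaders()
    resp = conn.getresponse()
    # NOTE: redirects must be re-checked through this same function.
    return resp.status, resp.read()
```

HTTPS is trickier with this approach, since certificate validation and SNI are tied to the hostname rather than the IP.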
There was also the address of the other internal services.
I think the best way here is to put the "fetch random URLs" service out in its own subnet, where it cannot access any other internal services like the EC2 status service. You'll also have to validate the URLs (no non-HTTP-or-HTTPS URLs) and prevent things like the redirect attack from working.
Your last idea seems like the obvious approach to me. Don't blindly trust stuff just because it's local. That seems completely insane. I'd hope it wouldn't be easy to miss something, because you'd set up everything to require authentication.
This is the problem with services that store user information: it is highly probable that vulnerabilities like these exist. Security is rarely given the time and attention it requires.
I'm not trying to single out Pocket; they are just the latest evidence that even in the few cases where "you can trust us with your data" is said honestly, it isn't a promise that can be kept in practice.
One more vulnerability which is possible when you can request an instance's metadata: any IAM roles which have been given to that instance (for example to enable S3 access or decrypt data using AWS' key service) would be visible.
These keys are rotated relatively frequently, but it opens up a whole new level of exploits against the company which runs those AWS servers.
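To see what's exposed, the temporary credentials live at a well-known metadata path. A sketch (hypothetical function names; the fetch itself only works from inside an EC2 instance, or through an SSRF hole like the one in the article):

```python
import urllib.request

# Link-local metadata service; reachable only from the instance itself.
METADATA_BASE = "http://169.254.169.254/latest/meta-data"

def role_credentials_url(role_name):
    """URL that returns the temporary AccessKeyId / SecretAccessKey /
    Token JSON for the given instance role."""
    return METADATA_BASE + "/iam/security-credentials/" + role_name

def fetch_role_credentials(role_name, timeout=2):
    # Succeeds only where 169.254.169.254 is reachable.
    with urllib.request.urlopen(role_credentials_url(role_name),
                                timeout=timeout) as resp:
        return resp.read().decode()
```

Requesting `/iam/security-credentials/` with no role name lists the role names themselves, which is why an attacker with an SSRF primitive doesn't need to guess them.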
Is there a name for this sort of attack? We were just protecting against some similar attacks earlier this week, and it would have been nice to have a short name to refer to them as instead of "that attack vector where we make unrestricted HTTP requests based on user input".
Most developers don't think (or know) about security issues like this, and running something as root avoids a whole class of deploy problems, so I understand why this can happen.
It's certainly not right, and indicative of the need for either a system administrator with an eye for deployment best practices, or an explicit security position who sets up audits to look for these kinds of flaws.
This was a stronger argument in the past when multi-user/app systems were the rule. If you have only one app running on a box and it can access all of the data which matters, what precisely does getting root give you which getting that app user does not?
Yes, someone could install a rootkit but these days the way to deal with a compromise is to replace the entire system and in either case it's most likely that that process would be initiated by some external clue.
(edit: to be clear, I still don't deploy as root but that's more for other reasons like isolation and I'd be surprised if that was the most pressing security concern on many sites as opposed to things like insecure local services, overly-broad, chainable credentials, etc.)
The ability to go around the host firewall. Accessing data from sub-services that otherwise might run isolated under their own users. The ability to change application source code. Not applicable in all cases, but probably often enough.
My point wasn't that those aren't good but that they're hard enough to do effectively that most places won't see much benefit until they've done a bunch of other things first.
e.g. how many places use least-privilege auth credentials vs. having something like AWS keys or shared database credentials which have access to a ton of shared resources? I'd want to compartmentalize something like that well before changing the UID which code runs under since it's available without any further exploits.
It's certainly possible that something will be logged and it's even possible that someone is actually watching the logs but it's still a gamble that the attacker does something which draws attention: if they have a solid exploit and don't do something which disables legitimate service it's unlikely to be noticed at most organizations.
> a system administrator with an eye for deployment best practices
Many companies call this position "DevOps Manager" or something similar. They usually own their servers, manage builds and deploys, maintain the repos, and communicate with management and developers.
The short version: Super Meat Boy for PC connected directly to a MySQL database to upload levels created in the level editor. The DB address, username, and password were all stored in plaintext in the binary. The DB user had UPDATE and INSERT permissions, but not DELETE, so the game author figured there was no harm in making that user public.
I've seen a serious multitenant business app that manages financial info do this. Except the credentials were root. And also valid for SSH. Users would run a Java applet, which would SELECT from users (after connecting as root) to determine if their login was successful. 7-8 figures per install and this was a "known issue".
It's not thread-specific. Individual boards periodically switch from freely viewable to requiring registration to "encourage" people who have been reading for a while to pay up.
I'm really not a fan of EC2 exposing instance metadata as a RESTful HTTP API running on a link-local IP address. If it's only supposed to be queried locally, why isn't this just environment variables? Perhaps they're dynamic and that won't work, but come on!
At the very least, run it on localhost:10101 or something. Don't give us another range to have to filter!
One of the biggest selling points of EC2 (at least for me) is that you get a real VM with a real kernel and userspace, and not some "user-friendly" thing the provider made with loads of alien services running as root. So EC2 can't define environment variables inside your instances, and as you said, the data can be dynamic.
localhost:SOMETHING will also not work for the same reason - they would need to run a service inside your instance.
There is one more popular solution to the metadata problem - providing it as an emulated CD-ROM, which would also be vulnerable in this case. And it can't be dynamic.
That IP for the metadata is internal to Amazon's AWS. Under normal circumstances it isn't exposed to the outside network. Because Pocket is remembering URLs and fetching them for us, it leaks its own private network in the responses.
Both of those solutions require Amazon to do "something" to the instance itself and thus limit the kinds of machines you can run on EC2. Custom kernels, FreeBSD, etc..
Their current metadata approach works across OSes.
And, yes, the data are dynamic. Things like AWS access keys change over time and can be accessible via the metadata if you've given the instances IAM profiles. I'm surprised the author didn't mention this.
I agree that the approach feels uncomfortable in general but it seems like the best approach for the functionality they wanted.
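Since the metadata address lives in link-local space, a fetcher can catch it (and 127/8, and RFC 1918 ranges) with a generic address-class check rather than a hand-maintained blocklist. A minimal sketch using Python's `ipaddress` module; the helper name is hypothetical, and this alone isn't a complete SSRF defense (it doesn't handle redirects or DNS rebinding):

```python
import ipaddress
import socket

def is_safe_target(hostname):
    """Resolve a hostname and reject any address in loopback, private,
    or link-local space (the latter covers 169.254.169.254)."""
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_loopback or addr.is_private or addr.is_link_local:
            return False
    return True

print(is_safe_target("169.254.169.254"))  # False: link-local (EC2 metadata)
print(is_safe_target("127.8.8.8"))        # False: anywhere in 127/8
```

Checking the resolved addresses, rather than the URL string, also closes the "any hostname can point at 127.0.0.1" hole mentioned above.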
If you want a laundry list of SSRF methods you should protect against, a great place to start is this slide deck from a talk at ONsec a couple of years ago:
Security researchers out there: on what side of "the line" do you view this kind of exploitation to be? It was not done for nefarious purposes, but it did involve intentionally accessing resources that were clearly not intended to be accessed, like /etc/passwd. Would you worry if you did this that the company might call the police instead of thanking you?
The one thing I don't like about this article (and indeed, much of the discourse around the Pocket integration) is its characterization of the Pocket integration itself. It calls it an “opt-out non-removable [extension]”. The truth is that you can easily disable it, just as you can easily disable many other things that Firefox includes by default. In fact, if you use Classic Theme Restorer (I use it not because I dislike Australis, but because I really do not want a navigation toolbar), it has an option baked in to disable Pocket along with WebRTC, et al.
Admittedly, I suppose it would be nice if Firefox actually packaged Pocket as a real extension that could be removed from the Extensions menu, but they have already integrated several things without using that schema.
I still use firefox, just with more and more things disabled, because none of the other browsers out there even come close to having what I need in a GUI browser (though, I would note that I'm evermore tempted to abandon GUI browsing altogether).
Either way, the write-up is great, and everything in the article other than that one characterization (which rubbed me a bit the wrong way in the wake of all the fevered discussions around the Pocket Integration) was a truly enjoyable read. Not to mention, it's great that the Pocket devs fixed things quickly; that's always a plus!
>you can easily disable it just as you can easily disable many other things that Firefox includes by-default
>it would be nice if Firefox actually packaged Pocket as a real extension that could be removed from the Extensions menu
These two statements you made seem to corroborate his characterization of it being "opt-out" (meaning on by default, but capable of being disabled) and "non-removable" (baked into firefox as opposed to an extension). Not sure what you find wrong with his characterization.
“opt-out” is certainly correct (though, as I understand it, Pocket, as with all parts of Firefox, is only loaded when it is actually used, so “opt-out” does not seem to tell the whole story to me). I do disagree with the “non-removable” bit. Fully disabling it, to me, counts as removable (though I can understand why someone would disagree).
Perhaps I just read something into the author's tone (probably due to all the vitriol from the typical discourse around the integration) that wasn't really there. If that's the case, and all the author meant by that statement was that the code itself could not be completely erased from Firefox as-packaged, then that seems to be factually true, and I simply read it incorrectly.
Happy to do it. The last thing I want is for my post to have come across as vitriolic or argumentative; I just wanted to voice the one case of unease the article left me with :)
>Grab ssh private keys from autoprovisioned EC2 user’s home directory using 301 redirect to file URI (after all, we’re running as root, we can read them).
This is not a fair assumption to make. Maybe they are running a LSM like AppArmor.
More importantly, the existence of SSH private keys on the EC2 instances is unlikely. If there are SSH private keys there, they would most probably be SSH deploy keys for some private repo (configuration management, software).
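Whatever is on disk, the redirect-to-file-URI trick in the quoted step only works if the fetcher follows a Location header into a non-HTTP scheme. A sketch of the check a fetcher could run on every redirect hop, not just the initial URL (the function name is hypothetical):

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def redirect_allowed(location):
    """Reject any redirect target whose scheme isn't plain HTTP(S).
    A 301 from an http:// page can point its Location header at
    file:///etc/passwd or another scheme entirely."""
    scheme = urlparse(location).scheme.lower()
    return scheme in ALLOWED_SCHEMES

print(redirect_allowed("https://example.com/page"))  # True
print(redirect_allowed("file:///etc/passwd"))        # False
```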
I prefer using offline bookmarks; the bookmark manager in the Opera browser in particular is impressive.
And for online bookmarking there is raindrop.io.
And don't forget IPv6.
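Right - IPv6 has its own routes to localhost that an IPv4-only filter misses: `::1` itself, and IPv4-mapped addresses like `::ffff:127.0.0.1`. A minimal check, assuming the address has already been resolved to a literal (the function name is hypothetical):

```python
import ipaddress

def reaches_loopback(addr_str):
    """True if the address is loopback, including IPv6 loopback and
    IPv4-mapped IPv6 forms of 127.0.0.0/8."""
    addr = ipaddress.ip_address(addr_str)
    # Unwrap an IPv4-mapped IPv6 address to the embedded IPv4 address
    if addr.version == 6 and addr.ipv4_mapped is not None:
        addr = addr.ipv4_mapped
    return addr.is_loopback

print(reaches_loopback("::1"))               # True
print(reaches_loopback("::ffff:127.0.0.1"))  # True
print(reaches_loopback("127.4.5.6"))         # True
```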