"I don't see why this was a security problem in the first place. No personally identifiable data was disclosed. What does it matter if you can view anonymous traffic graphs from other customers?"
I hate it when people say this sort of thing. It just indicates that they can't see a potential exploit, not that one doesn't exist. Attackers, regardless of their morality, tend to look at things from viewpoints others haven't imagined. It's best to give them as few avenues as possible.
Apart from risk to individual customers, it means a competitor ISP could keep a running track of how many customers they have and how fast they are growing/shrinking.
Also how many customers on which products - some graphs will top out at 4Mb and some at 25Mb, etc.
And it implies there wasn't a thorough security review of that server-which-has-access-to-customer-data, so a malicious attacker might turn more attention to it as a way into more access.
Cperciva says the access required a valid login; it's plausible that the login could be customer number + password, which would make brute forcing easier with a list of current customer numbers.
Hmm
If they're big enough to have customers across Canada, usage patterns could reveal timezones. And whether the residence is unoccupied during work hours / what typical work hours are.
Longer term usage patterns could reveal if they tend to go out at nights, on weekends, over holiday seasons.
What else... maybe usage drops correlate with major sports events for some profiling?
Case A: An attacker knows person A has account number 123. They also know that this person has a tablet, a phone, and an Apple notebook that A always takes along when leaving home. A uses Twitter and a couple of other social networks that those devices sync with at regular intervals, and syncs to iCloud (which can happen even while the notebook is closed).
I'd be surprised if this constant chatter isn't visible on the graphs.
That means the attacker can track rather well whether the person is at home or not, or at least make informed guesses. They can plan a robbery during a holiday. Or, much more malicious: some attack on the person's other web properties, like their iCloud and Amazon accounts, as presented a while ago.[1]
Case B: Consider that the attacker doesn't have the account number. Another leak in their platform allows them to link personal information with account numbers. In itself, that info is also not very interesting and would also not be "dangerous".
Because the leak of the traffic graphs exists, they go on and plan the biggest heist in Toronto ever :).
A good exploit is often the combination of multiple seemingly innocuous flaws. [1] is also a good example of that.
You could send large amounts of data to an IP in sporadic bursts, find which graph correlates, get their account number and use that as a starting point for social engineering the helpdesk staff.
Elaborate, sure, but plausible. The more important part of the leak is all the account numbers anyway :).
I'm pretty sure that "have access to someone's account number once" => "have access to past and future traffic usage forever" is not good security practice, regardless of whether there are obvious vulnerabilities, and you really don't have to think very hard to construct a scenario where it can be abused.
A good one mentioned in the OP comments is deducing when people are on vacation from their traffic usage (or rather the lack thereof). Also, we don't know what other information may have been listed there. What if they decided to add your name and IP address to the graph?
That's with the assumption that the data was anonymous - an assumption we don't know to be true. Was there any account/customer number, IP address, naming scheme, or similar in the graphs or file names?
As mentioned, a great five minute project for when you get back to work after the holiday is adding a /security page to any website you control which handles user data. All it needs is a monitored inbox, a promise to get back to security researchers, and a PGP key.
If you want to make it a 15 minute project, write a bit of customer-facing "We take your security seriously. That's why we encrypt all data with bank-grade security..." copy above or adjacent to the researcher-focused payload.
Good examples (I picked their disclosure pages rather than the security marketing pages) include:
I wholeheartedly agree. I go as far as to put my PGP key on my contact page and to have a public policy regarding security vulnerabilities in all of my projects related to my website:
I'll admit to some confusion regarding your disclosure page. I accept that full disclosure is an option and it's clear that you prefer it, but having read the page I'm left with the impression that your primary goal is that I publicize any vulnerability to other people, and the secondary goal is that I tell you.
It seems that even if I'm practicing full disclosure, telling the author at the same time I tell the world is a 1st order goal.
> I'm left with the impression that your primary goal is that I publicize any vulnerability to other people, and the secondary goal is that I tell you.
That is correct. If the world knows, then Scott will also be informed.
> It seems that even if I'm practicing full disclosure, telling the author at the same time I tell the world is a 1st order goal.
That is your choice, you are not obligated to do so with Scott.
If he makes a mistake, he wants hackers to call attention to it. He wants people to see his mistakes and how he responds. Maybe follow his example: to accept mistakes graciously, and immediately issue a patch that adequately addresses them.
If he gets burned in the process, that's the price Scott is willing to pay to improve.
Or maybe he's just cocky and is bluffing everyone because he thinks he's too good of a programmer to make a security-affecting mistake. Only way to find out is to audit his open source code and drop 0days onto Full Disclosure ;)
I understand wanting people to call attention to mistakes, and wanting that call-out to be clear and loud. I understand accepting that feedback.
But if it were me, my step #1 would be "disclose this via FD or whatever dispersal method you feel is appropriate, and tack my-address@example.org onto the CC line. That way we make sure I get your feedback and can address it". The goal of getting the author involved as part of step 1 isn't to hide from mistakes, it's to make sure the author doesn't get left in the dark just because they miss a mailing list digest line.
> It's to make sure the author doesn't get left in the dark just because they miss a mailing list digest line.
I intended to address this with my comment here:
> If he gets burned in the process, that's the price Scott is willing to pay to improve.
If Scott gets left in the dark, he feels that it is his fault for making a coding error in the first place. At this point, he no longer deserves to be enlightened. If the vulnerability discoverer feels like being nice and sharing this information first or simultaneously, wonderful. But if they botch it or maliciously post it everywhere else in the world, then no hard feelings. If public knowledge, eventually the problem will be fixed.
The key motive here is that at no point are third parties bound to regulate their behavior or self-censor. At no point will rudeness and/or publicly disseminating exploit code lead to any sort of criminal liability, so long as the targets include Scott, Scott's code, and any systems solely under Scott's control.
Scott carries no legal stick. As a third-party security researcher with no business relationship with Scott, you should be empowered to give Scott as much advance/simultaneous notice as you feel is appropriate. With no requirements.
Let me frame it another way: The very act of publishing a security vulnerability benefits two parties: The publisher/vendor/author of the code that contains the vulnerability, and the public. In the case of Scott's open source software, the interest that matters most is the public interest. The public should be informed so they can decide whether or not they wish to continue to trust the code quality that Scott produces.
So what if Scott's servers get rooted and rm'd? He'll wipe them and write better code next time.
At no point will Scott impose any restriction on what you decide to do with your ideas that were inspired by reading his work. Even if your mind goes to dark places. All he asks is, just don't hurt the public. He's not exactly in a position to waive the right for the general public to press charges if you hack into their systems.
You don't need a PGP key; in fact, you can easily filter the traffic to your security@ inbox with a simple rule: is it encrypted with the PGP key? You can delete it, it's probably bullshit.
Experienced reporters will recognize that the flaw they are reporting is already publicly visible, and that the most important thing is getting attention on the issue and fixing it; secrecy won't help. You will get garbage reported to you encrypted, stuff like "I can see that your website uses PHP because there are URLs that end in .php".
Merry Christmas Colin! Nice to see this worked out. I'm confused as to why you're surprised they had working whois contacts; most real businesses do. It's usually the scammers, the spammers, and people up to no good that use whois privacy; you rarely see it used by someone who genuinely needs some protection.
If a company uses whois privacy I don't do business with them as a rule.
As other people have said, I've encountered lots of startups using whois privacy; I think it's often a decision made in the early days and never revisited. This is one of the reasons I wrote about this -- to point out to people that having non-"private" whois actually serves an important purpose.
I think "never unchecking the checked-by-default whois privacy checkbox when purchasing the domain" is more accurate than calling it a decision.
While a company should definitely have a whois page, most startups make it abundantly clear how to get ahold of someone at the company with large "Contact Us" buttons.
I've never contacted a startup to report a security issue, gotten a response (some don't respond), and had it not be from someone in a position to have the issue worked on immediately.
I use whois privacy on my domain because it is for my personal site and I'd rather random people not get my home address, especially since people sometimes get angry with me and try to find data about me.
... and that's fine, because it's a personal site and it's unlikely someone will need to contact you urgently to report a problem. The tradeoffs would be different if you were a company providing a service to a large number of customers.
It's not uncommon for the whois contact to be a non-private but unhelpful number for the company. If you call any of the numbers in, say, `whois google.com` or `whois dropbox.com`, I'd expect that at best you'd reach front-line technical support, and more likely a phone tree or a voicemail box, not a live human who runs their infrastructure.
Google's whois tech contact number is different from the one for their main contact, and the DNS admin contact has yet another.
Dropbox uses the same number for all three.
Neither company uses whois privacy. Agreed that for plenty of companies you'd end up in an IVR system but that seems to go for every kind of company these days, not just internet companies.
I assume this only applies to internet tech companies that should know better. If the non-technical guy outsourcing a company's IT got upsold on whois privacy, I'm not about to hold it against them.
Real businesses do not hide. Whois privacy for any real business is a dumb decision: your customers need to know who you are, where you reside, and how they can reach you even when (or especially when!) your website is down.
I don't really understand how ICANN currently reconciles their 'whois data should be accurate' policy with the 'we allow the use of anonymization services' exception to that policy.
If the number leads to a phone center the business contracted with and there's no real way out of that to the people who make decisions, is that not a form of hiding?
100% agree. As a registrar (and as someone who has personally been in the business since the mid-90's) we don't even offer it. Yet not a day goes by (ok, an exaggeration) without a legitimate business that thinks it needs privacy because it's somehow going to protect them from spam or from getting their domain stolen, or because they've read that it's the right thing to do. The large registrars push this as a profit center and/or as some kind of benefit to enhance their offerings.
There are reasons to want privacy of course but not in the case of a business with a business address that most likely (say the local cake shop?) already puts their address on their website.
As a business, why wouldn't you want your contact info to be public? It's another piece of marketing.
Even more absurd to me are people who own domains they clearly want to sell (say, listed on SEDO or Afternic) yet have privacy on their whois record. Get a PO box if you don't want to use your home address and don't have a business address; if your domain is that valuable, or if you own many, we're talking approximately $100 per year for an address. Use a Google Voice number for the phone number.
Lastly, lack of public info on ownership (so no trail of ownership in whois history) makes it much harder to prove you own the domain if something happens at the registrar. You are depending on them to have all their records in order. If they get hacked, go out of business, and so on, you could have a problem proving ownership. (However small the chance, it's not worth the risk.)
ICANN does require registrars to escrow whois data (with Iron Mountain); however, since we don't offer privacy, I'm not sure whether they require the underlying ownership (if you want to call it that) to be escrowed. Also, many of the early privacy programs actually put ownership in the registrar's hands but had a separate contract with the actual registrant (not sure if that is needed anymore).
> 100% agree. As a registrar (and as someone who has personally been in the business since the mid-90's) we don't even offer it. Yet not a day goes by (ok, an exaggeration) without a legitimate business that thinks it needs privacy because it's somehow going to protect them from spam or from getting their domain stolen, or because they've read that it's the right thing to do.
Personally, I don't think WHOIS privacy is something a business ought to be using, but it's perfectly legitimate for private individuals to use it.
> The large registrars push this as a profit center and/or some kind of benefit to enhance their offerings.
It is, because it's something the registrar can do at little or no cost. Given that registrars work on thin margins, there are good reasons why even smaller registrars tend to offer WHOIS privacy.
> Lastly, lack of public info on ownership (so no trail of ownership at whois history) makes it much harder to prove you own the domain if something happens at the registrar. You are depending on them to have all their records in order. If they get hacked, go out of business and so on you could have a problem proving ownership. (However small it's not worth the risk).
That's precisely what data escrow through Iron Mountain is intended for, and why ICANN have been enforcing RDE since the failure of RegisterFly.
The key thing to keep in mind is that what is shown in WHOIS isn't necessarily what needs to be escrowed. A registrant is meant to provide accurate data to the registrar, and it's this data that has to be escrowed, but this data doesn't have to be what's published in WHOIS. This is why the only way to legitimately do WHOIS privacy is through the registrar of record. Anybody who's dumb enough to use a third party is asking for trouble.
If you're not escrowing the underlying ownership information (that is, the real registrant, admin, tech and billing contact details), then you're in breach of the RAA.
> Also many of the early privacy programs actually put ownership in the registrar's hands but had a separate contract with the actual registrant (not sure if that is needed anymore).
That can be solved by making it clear in the WHOIS details that the details displayed are for an agent acting for the actual registrant. The registrar I work for also forwards any mail sent to the email address published in WHOIS (with WHOIS privacy on) to the actual contact behind the scenes. And again, the registrar is required to escrow the real details.
Doubtful. I know people in the ICANN registrar constituency, and to the best of my knowledge there have been no discussions of this.
Now, what is the case is that there's been a crackdown, due to pressure from law enforcement agencies, on incorrect details being set on domains. However, this actually has no effect on what's published in WHOIS because the only actual requirement is that the registrar have the correct details for the domain on record and escrowed. So long as the registrar has correct details, the registrar can mask the detail in WHOIS however is applicable, so long as it's clear that WHOIS privacy is in place.
They're not alone in that: for instance, .eu and .be domains with registrant contacts without an organisation name set have only very minimal details published in WHOIS.
Then again, it's totally up to ccTLD operators to decide what details are published in WHOIS. gTLDs (those longer than two letters) have to stick to ICANN's policies.
Granted, this is Canada, so it's a bit different from where I live; but I personally would still, at this point, fear malicious prosecution just enough that I would not have taken action. That's a little sad all by itself.
I'm not a lawyer, but I'm confident that I could not be successfully prosecuted under Canadian law. The law in Canada states that access to computer systems must be "fraudulent and without colour of right" in order to be criminal.
I'm not familiar enough with US law to know if a prosecution would be possible there, but regardless of what the law might say, I think juries can be influenced by the clear intentions of the accused.
A few months ago, working on a system, I noticed that the URLs seemed to be based directly on a fixed, sequential user ID. It was a system that, in this case, was being used by An Important Company You Have Probably Heard Of.
I was about to see what happened if I changed the number, when I realised that I am a foreign citizen in a country with notoriously unpleasantly drafted laws about computer security.
And then I stopped. It's almost certain that nothing would go awry, but the mere act of gently poking might be interpreted in an unfavourable light. Losing a work visa is very easy and I very much don't want to.
If you're accessing a system a certain way (i.e. accessing certain parts of an application), then your traffic abruptly stops and a Tor exit node IP picks back up, you're hosed.
Fair point. But the post I responded to was saying to just give up.
Tor may not be enough. To really be safe, use the precautions in http://pastebin.com/cRYvK4jb (this guy hacked finfisher and got away with it, so they know what they're doing.) Mostly whonix, truecrypt, and tor.
(Come to think of it, why hasn't that post been taken down? Is there nothing against pastebin's tos in there? They took down all the sony leaks.)
I would never go back and forth on the same page between tor and clearnet. I assumed that level of caution was obvious, especially to a fellow hacker (we're talking about vulnerability-noticers here, after all.)
> but regardless of what the law might say, I think juries can be influenced by the clear intentions of the accused.
Indeed. In a sane country a judge would most likely throw it out in pretrial; in a less sane country you could spend tens of thousands of dollars getting a jury to do the same.
> I'm not familiar enough with US law to know if a prosecution would be possible there, but regardless of what the law might say, I think juries can be influenced by the clear intentions of the accused.
Possibly true, but US juries are explicitly instructed (ad nauseam) not to do this, and potential jurors are often removed from the pool if they seem unlikely to follow through. So, you know, be careful. (Addressed less to cperciva than to other readers.)
Why not just store each user's information in a file whose name is based on HMAC(some secret, user's account number)? That should be secure against enumeration attacks.
It's a good idea, but if you do that properly you should have a system in place to rotate the secret; and if shell scripts work with those graph files, they might also need access to both the hashing function and the secret.
I wonder if instead one could use mod_rewrite (assuming Apache), and check the URL against %{REMOTE_USER}.
I'm not certain why one would want to rotate the secret on a regular basis; it'd only need to be changed in case of exposure. A truly robust system would have a way to easily say 'rotate the secret' rather than requiring a manual change.
Basically, it's a poor man's capability system: once one has a URL to one's graphs, one can easily share them around, but if one doesn't have the URL and doesn't have the secret, one cannot determine the URL, thanks to the power of a decent hash function in the HMAC.
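To make the HMAC-named-files idea above concrete, here's a minimal Python sketch. The secret value, the account-number format, and the `.png` suffix are illustrative assumptions, not details from the actual system:

```python
import hashlib
import hmac

# Hypothetical server-side secret; in practice this would live outside
# the web root and be loaded from configuration.
SECRET = b"example-server-side-secret"

def graph_filename(account_number: str) -> str:
    """Derive an unguessable graph filename from the account number.

    Without the secret, knowing (or even enumerating) account numbers
    does not let an attacker predict any customer's graph URL.
    """
    digest = hmac.new(SECRET, account_number.encode(), hashlib.sha256).hexdigest()
    return digest + ".png"
```

Note that whatever generates the graphs and whatever serves them both need the secret, which is exactly the complication the replies above point out.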
No, it's rather that all idiots are PHP programmers; or rather, that PHP attracts idiots.
You can certainly be a non-idiotic PHP programmer, and there may well be just as many good PHP programmers as there are good programmers for any other language. But you, and they, are outnumbered by the bad "this is my first language and first project" programmers, who tend to adopt PHP for that purpose. Because of that, if you know nothing else about a programmer, knowing that they primarily program in PHP should tend to be Bayesian evidence against their programming skill/experience, just like knowing someone lives in India should tend to be Bayesian evidence against the likelihood of them being wealthy, or knowing someone likes {pop, classic rock, modern country} music should be Bayesian evidence against the diversity of their taste in music. It's not that there aren't tons of counterexamples; it's that there's one huge overlapping subgroup that happens to have that property and dominates any statistical effects. (Respectively: "people learning their first language"; "people born in poverty with no way out"; and "people who only listen to music provided by their single local radio station.")
Of course, if some other language were easier to learn than PHP, the dominating subgroup of "people who know 0.5 programming languages" would become attached to that language instead of PHP. So the fact that this group is attached to PHP is actually a point in favour of PHP as a platform. It's still a point against its statistical median user.
On the one hand, there is truth in what you say about the statistics. On the other hand, treating individuals as though they were statistics tends to be boorish and obnoxious behavior. If you were running a forum for discussing sports cars, for example, you probably wouldn't consider it acceptable to say 'Indians are not welcome on this site, they haven't got any money so they wouldn't know anything about sports cars'. Similarly, if you encounter a bad programmer who uses PHP, it's fine to criticize him for being a bad programmer. It's not okay to insult PHP programmers in general.
I'm a PHP programmer too. I've also been a C, C++, Pascal, WFL, COBOL74, 68k assembly, etc. etc. etc. programmer for the last 30ish years (and paid off and on for it for almost 20 now, by all kinds of different places). I also founded a local developer group, and I certainly wasn't the least knowledgeable guy there when it comes to application security.
I get that you didn't intend it as an insult, and you probably didn't have people like me in mind when you made that comment. However, the relatively recent "PHP programmers are stupid" thing has so turned me off to any kind of programming discussion that I now habitually avoid any discussion of programming at all -- for the first time in all my years as a programmer.
Thank you! I just sent a long email to the HN moderators about this very issue. This pervasive anti-PHP mentality is rampant and actively discourages me (and other PHP devs) from contributing.
I can't believe I have to say this to you, but suggesting there's a "strong correlation" between PHP developers and idiots is a complete insult. It's actually bigotry in action. To further suggest you held back a better solution because you assumed the PHP developer was too stupid to understand file hashing is both another insult and bad practice.
It doesn't matter if there's 10 or 10 million PHP developers, it's still insulting. You sell a product online, and you've now alienated a large developer community. I'll never recommend or sign up for tarsnap now.
> To further suggest you held back a better solution because you assumed the PHP developer was too stupid to understand file hashing is both another insult and bad practice.
It wasn't a better solution. It was one of many alternatives which came to mind; but I offered them the best solution (serve the graphs from a PHP script) and the easiest solution (run a cron job to delete the old graphs).
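As a rough illustration of that "easiest solution", a cron-driven cleanup could look something like this sketch; the directory path and the one-day retention window are assumptions for illustration only:

```python
import os
import time

GRAPH_DIR = "/var/www/graphs"   # hypothetical location of the graph images
MAX_AGE = 24 * 60 * 60          # one day, in seconds

def purge_old_graphs(directory: str = GRAPH_DIR, max_age: float = MAX_AGE) -> int:
    """Delete .png graphs older than max_age; return how many were removed."""
    removed = 0
    now = time.time()
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if name.endswith(".png") and now - os.path.getmtime(path) > max_age:
            os.remove(path)
            removed += 1
    return removed
```

Run periodically from cron, something like this bounds how long a leaked graph URL keeps working without touching the PHP code at all.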
The implication in your word choice came across as unfair and elitist.
> EDIT: To elaborate: I'm definitely not saying that all PHP developers are idiots. But PHP is a much easier language to learn than many others, so it gains a lot of users at the low end of the ability spectrum. There are just as many talented and security-aware Windows users as there are talented and security-aware Debian users, but there are far more untalented and security-unaware Windows users.
Okay, that's a fair statement. I thought you were calling ME an idiot :P
It's like you don't even know you're doing it. I, Mike Gioia, am defensive. Not "PHP developers".
Would you not be defensive if you were called an idiot because of the language you choose to program in? Every time PHP comes up in these forums someone makes a smart-ass comment about how PHP developers are somehow lesser. THAT is why I get defensive, because of how celebrated and rampant it is here to treat PHP developers like complete shit.
Your account is just as old as mine and while I don't know how active you are on PHP-related threads, it's something I see time and time again here. It's a cancer in this community and when you see it and hear it as much as I do it becomes a serious burden.
No other group of programmers is stigmatized like PHP devs are for seemingly no other reason than how many bad ones there are. Yes there are bad PHP developers, but the blanket dismissal is, in fact, bigotry!
> No other group of programmers is stigmatized like PHP devs are for seemingly no other reason than how many bad ones there are. Yes there are bad PHP developers, but the blanket dismissal is, in fact, bigotry!
Woah. Deep breath.
PHP is a legitimate tool. Great for some things. Less great for others. Knowing how to use a tool can't make you an idiot, or bad at what you do.
If you only know how to use one tool, you're likely bad at your job, and/or the n00b that we're talking about as the butt of the "PHP developer" joke: The guy who implements something by following a tutorial and modifying small bits, until he gets used to the language.
Except, guess what - we're all "that guy" when we're in a new language. We start at README.md, a.k.a. step 1. We're making fun of ourselves when we first tried to do $x in $y without realizing that (naturally!! dumb noob!) in $y you actually do $x via $asdf. And even further back, when we started out with.. BASIC? PHP? Something inviting. Come to think about it... A lot of really good developers, who later may have transitioned to other languages, have probably started out with PHP.
But I digress. I've always thought that the php-devs-are-bad joke is just a more instantiated version of $noobs-are-bad.
I identify as a PHP developer. I owe PHP. I've worked with PHP extensively and would implement in it again, if and when it were appropriate. I don't get mad at PHP dev jokes. If I'm called a bad developer simply because of a language I know (and it's not a joke to lighten the mood or something), then the wielder of the argument is truly the idiot[1]. You may as well say "long division is retarded" because you just do it on the iPhone, man, or "French speakers are idiots".
PS: This is my read on it. I try to live by an expanded Hanlon's razor ("Never attribute to malice that which is adequately explained by... $x") and also happen to believe that when you get upset/offended over trivialities, you "give power away". Being overly sensitive only ends up hurting yourself.
[1] And you have to wonder - does he even know that the hammer can also remove nails?
That's great for you that you don't take offense to the comments, but you shouldn't belittle the fact that I do. I've taken many deep breaths about this, and I'm trying to vocalize my feelings about an issue I see here all the time.
Obviously you're right in what you said, but I definitely do not see it falling under Hanlon's Razor, and I definitely don't think everyone who bashes PHP (or even most people who do) sees it the way you do.
> That's great for you that you don't take offense to the comments, but you shouldn't belittle the fact that I do.
I'm sorry that I offended you, that was certainly not my intention. I'm puzzled as to how that got through, I thought I was careful in my tone. Fair enough.
> Obviously you're right in what you said, but I definitely do not see it falling under Hanlon's Razor
I said "expanded Hanlon's Razor", meaning that I've found many more things get misattributed to malice. In this case, the bad-PHP-dev meme[1] has permeated our sub-culture. That has already happened. It is done. Can't undo it. Can't take it back.
If you attribute usage of that meme to malice every time, and escalate it into bigotry in your head, you're "going to have a bad time" [sic]
I see where you're coming from, but calling it discrimination & bigotry is one step too far in the opposite direction.
[1] And again, yes it is a bad meme, but it is actually about one-trick-pony noob devs. Not you, not me. Keep that in mind.
"First, they were using widely used open source code; if I hadn't been familiar with it, I wouldn't have noticed the problem. Bad guys will always dig deeper than unpaid good guys; so if you're going to benefit from having many eyeballs looking at what you're doing, it's much better if your bugs are shallow."
That is a really interesting reason to go with the popular open source solution. I guess I don't always follow that advice, but I wonder why I didn't think of it as a security decision.
I'm not sure I do either. It does make quite a lot of sense, but then you run into the "how do new things get adopted" problem. I would probably weight it much more heavily if I were a manager at a large company.
Novus is great; they offer some of the highest speeds available in the areas where you can get access.
My biggest gripe with Novus (besides limited coverage) is that they have no unlimited bandwidth offering like my current provider does. Despite that, I'm still considering switching to them for those awesome upload speeds, and I'm probably going to sign up.
Last I heard, when you went over your bandwidth limit with Novus they cut you off to prevent overages. I really liked this because you have the freedom to call them, confirm that you're fine with additional charges, and be immediately reconnected, but I'm curious whether they still do this. Any current customers able to clarify?
According to their website you'll get an email when you hit 80% utilization, and you can upgrade plans (or buy extra bandwidth for $5 / 20 GB) at any time. Also if you tell them in advance they can set your account to automatically buy more bandwidth rather than shutting off when you hit the limit.
Personally I've used about 20 GB in the past week, so I'm not worried about hitting my 500 GB cap.
I think TekSavvy, also available in Vancouver, has a feature whereby data used between 2 am and 8 am doesn't count towards your cap. That's downright reasonable, and it makes me wonder why none of the other providers are honest enough to offer it. Data used in that time bracket is going to be far less than in the peak hours, even accounting for people who save their heavy bandwidth use for then, so usage at that time really doesn't cost them any money.
Novus doesn't appear to offer that.
So personally, I'd switch to Teksavvy instead, and I will if I ever have any problems with Telus (but so far I haven't).
I don't think that feature (Zap the Cap) is available in Western Canada though (I might be wrong). I think that's Eastern Canada.
(Happy TekSavvy customer intentionally paying more than Novus would cost because they were doing nice things politically, and the 25Mb plan is ~$5 difference. Might switch if I get a faster plan, though.)
Well, I don't have TekSavvy at the moment, so I can't say for sure, but I did notice that when I gave them my postal code, all the plans they showed available for my area did have that feature.
There's a need for extremely private registrations that still have a feedback channel. A guy I know uses such a registrar, but I doubt there's any means of contacting any site's operator for technical, legal, or other matters. (The whois postal address always lists somewhere in Europe, but there are absolutely no published details. So it's a registrar that's about as private as allowable.)
My only concern here is how quickly support was making changes to production code. Should changes like this be made to a functioning system within 10 minutes of getting a verbal bug report?
I'd agree with you if they were making changes to their product code, but they weren't. This was just a monitoring interface, quite separate from their actual product (which is internet access, not RRDtool graphs).
I was expressing concern about the process, not the particular changes made. I am pretty gung-ho, and as the sole person who makes changes to any customer-facing areas, even I would take longer than this to address anything that was functioning.
We have pushed severe/security fixes to production nearly as fast for a major bank. Granted, we had the attention of an entire team, but this ISP seems to be much smaller and these tools probably see a lot less use.
Out of curiosity, how long do you think it would take to where you wouldn't be concerned? It seems hard to me to be really concerned without knowing exactly what changes were made, how many people were involved, and what kind of process they have in place for pushing code out the door.
I agree if the issue is critical that you want to get the fix out as soon as you can. I also agree with you that we don't really know the full background here, but I am concerned when this is being held up as a good way for a company to behave.
I can only speak for myself, but I never push any non-critical changes out on a customer facing application in less than a day. I like to think about what I have done overnight just to make sure I am not breaking something. Even doing this I have still managed to break things on occasion :)
Use a cron job to delete all the generated graphs every minute.
That doesn't strike me as all that great of a solution, since (at least naively) this is going to result in a small (but presumably non-trivial) number of 404s on initial load.
I guess if it's just a stopgap until a better solution could be implemented then it's fine.
If you mean the race between "create graph" / "graph is requested by browser" and deletion, that can be mitigated by not deleting any files which have been modified recently.
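That mitigation is easy to script. A minimal sketch (hypothetical directory and grace period; the original post doesn't say how they implemented their cron job): delete generated graphs, but spare anything modified within the last minute so a graph can't vanish between being rendered and being fetched by the browser.

```python
import os
import tempfile
import time

def prune_graphs(graph_dir: str, grace_seconds: int = 60) -> list[str]:
    """Delete generated graph files, sparing recently modified ones
    to avoid the create-graph / browser-fetch race."""
    removed = []
    now = time.time()
    for name in os.listdir(graph_dir):
        path = os.path.join(graph_dir, name)
        if now - os.path.getmtime(path) > grace_seconds:
            os.remove(path)
            removed.append(name)
    return removed

# demo on a throwaway directory
d = tempfile.mkdtemp()
open(os.path.join(d, "old.png"), "w").close()
os.utime(os.path.join(d, "old.png"), (time.time() - 300,) * 2)  # 5 minutes old
open(os.path.join(d, "new.png"), "w").close()                    # just created

removed = prune_graphs(d)
```

Run from cron every minute, this keeps the window in which a leaked URL stays valid to roughly the grace period.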
A short time later they started checking Referer headers; but as I pointed out to them, sending a fake Referer header is easy, so that only helps against the most careless attackers. I suggested several options for solving this, and they chose the simplest: Use a cron job to delete all the generated graphs every minute.
And it's a relatively common issue with auth on the web. I think Facebook has (or had) this problem too: you can generally right-click and "copy link" to get a .jpg URL, and send around people's pictures without any auth.
Basically the problem is when there are two web servers, a "dynamic" one with auth, and a static one that serves images. The static one is often a CDN.
Macaroons are basically a simple technique for decentralized auth, involving HMAC chaining. In this setting, the static server would first give the dynamic server a macaroon M authorizing ALL pictures.
At serving time, the dynamic server authorizes the user for a particular request. That request will have an <img src="acct123.jpg"> link. The dynamic server will take the macaroon M, and add a CAVEAT that the file must be "acct123.jpg", yielding Macaroon M2.
The client gets the restricted macaroon M2 with the HTML, and sends it back to the static server to retrieve the .jpg. The server can 1) verify that M2 is derived from the original M, and 2) read the caveat from the dynamic server, proving that the user was authorized for the image acct123.jpg (and only that image). The HMAC chain is constructed so that the client can't remove the caveat and get access to all pictures.
Basically what happened is that the static server DELEGATED auth logic for its resources to the dynamic server. In figure 2 of the paper, the static server would be TS (target service), and the dynamic server is IS (intermediate service).
The static server still needs extra code for Macaroons, which existing CDNs and static servers don't currently have. It would be cool to have an Nginx plugin that does this. But the key point is that it is preserving the original intention behind the static/dynamic split: performance.
In a less performance-sensitive context, you would have a web server perform custom auth logic and then just read() the static file from disk and serve it over HTTP. This is likely to be many times slower than, say, Nginx. With macaroons, you can authorize ONCE in the dynamic server, and then PROVE to the static server that the auth decision was made. So all the .jpg requests can be fast and hit only the static server. The HMAC calculations are just hashing, so they are cheap. It is symmetric crypto, with a shared HMAC secret.
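The flow above can be sketched in a few lines. This is a toy first-party-caveat macaroon (hypothetical names, toy key handling; a real deployment would use something like libmacaroons), showing how caveats can be added by anyone holding the macaroon but never removed:

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# Shared secret known only to the static (target) server.
ROOT_KEY = b"hypothetical-root-key"

def mint(identifier: bytes):
    """Static server mints the root macaroon M, authorizing everything."""
    return (identifier, [], _hmac(ROOT_KEY, identifier))

def add_caveat(macaroon, caveat: bytes):
    """Dynamic server restricts M. The new signature is an HMAC keyed
    by the old one, so caveats can be added but never stripped."""
    identifier, caveats, sig = macaroon
    return (identifier, caveats + [caveat], _hmac(sig, caveat))

def check_caveat(caveat: bytes, request: dict) -> bool:
    # toy predicate language: b"key=value" must match the request
    key, _, value = caveat.partition(b"=")
    return request.get(key.decode()) == value.decode()

def verify(macaroon, request: dict) -> bool:
    """Static server recomputes the HMAC chain from the root key and
    checks every caveat against the incoming request."""
    identifier, caveats, sig = macaroon
    expected = _hmac(ROOT_KEY, identifier)
    for c in caveats:
        if not check_caveat(c, request):
            return False
        expected = _hmac(expected, c)
    return hmac.compare_digest(expected, sig)

# Dynamic server: narrow the all-powerful macaroon M to one file (M2).
m = mint(b"pictures-service")
m2 = add_caveat(m, b"file=acct123.jpg")
```

The client sends M2 back with its image request; `verify` succeeds only for `acct123.jpg`, and dropping the caveat invalidates the signature.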
The paper has some other use cases and is definitely worth a read. I'm thinking about using this technique for a project. I'm interested in opinions from crypto/security folks.
Why not just give every file a name that's a guid or a hash of the content? This works fine with any CDN, and it's impossible to guess or forge. You can only obtain a hash if you have the rights, and while you could share it with others, it doesn't seem a bigger security problem than sharing the picture itself instead of the link.
The only problem I see with such approach is lack of expiration policy (other than re-loading the file under a different name every now and again).
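For concreteness, here's a sketch of the two naming schemes being suggested (the secret and the `.jpg` extension are illustrative assumptions): a content-addressed name, and a keyed HMAC over the account id that's unguessable without the server-side secret.

```python
import hashlib
import hmac

SECRET = b"hypothetical-server-side-secret"

def content_name(data: bytes) -> str:
    """Content-addressed: same bytes always yield the same URL."""
    return hashlib.sha256(data).hexdigest() + ".jpg"

def account_name(acct: str) -> str:
    """Keyed: HMAC(secret, account id); unguessable without SECRET."""
    return hmac.new(SECRET, acct.encode(), hashlib.sha256).hexdigest() + ".jpg"
```

Either way the name itself is the credential, which is exactly why it can't be expired short of renaming the file.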
This is basically like the HMAC(secret, acct#) suggestion further up in the thread. Macaroons are an extension of this idea -- a chain of HMACs instead of a single one.
The idea is to limit the power of the credential, so that when it's stolen, the negative impact is mitigated.
In the paper, they compare macaroons to plain cookies. When you steal a cookie these days, you can take control of an entire account -- maybe even change the password, etc.
In this situation, with the simple HMAC / guid / hash, once somebody "steals" the URL, then it can be leaked to everybody, forever. It can't be fixed without "breaking" the app.
The macaroon technique can also solve the expiration issue. In my post, I described how you would add a caveat for the filename. You can also add a time-based caveat, so stealing the macaroon would only authorize the attacker for a limited time, even a single request (this is talked about in the paper).
A legitimate user can constantly be minted new macaroons based on his login, but the attacker can be locked out by the expiration. So the attacker's problem is elevated from stealing a single cookie/macaroon to stealing either the login or ALL macaroons, which is generally harder. With macaroons, you don't have full-power credentials constantly traveling back and forth over the network.
That is my understanding anyway... actually implementing it will probably reveal some more insights.
I don't think performance is the issue. A heavy static-file-serving workload (like a CDN) is either I/O bound or memory bound, depending on whether the data set fits in cache and on the network situation.
The HMAC verification is just computing MD5/SHA1 hashes and comparing for equality, which is all CPU (and really CPU, not CPU + memory, since the data is so small). It's tens of microseconds of CPU implemented in JavaScript (Table II). I'm sure it would be single-digit microseconds or less implemented in C, so that's at least 100K - 1M req/s. The CPU cost will be negligible compared to the rest of the workload.
The performance issues with naive solutions for auth of static files are pretty different: hitting databases for auth checks, copying data through two processes, context switches, etc. Those are the things likely to slow down a static file serving workload.
There is some extra "complexity", but I think it's almost the simplest solution you can think of for auth, even ignoring performance. It's a lot simpler and more robust than say putting a database in the request path.
The implementation isn't as big as it may appear. There is an open source library that is less than 2K lines of C:
What I described is probably 200 lines of C or less. Again you are just verifying an HMAC chain. And the code for hash functions is stuff that probably already appears in all web servers anyway. It's using only the simpler "first party caveats" and not the "third party caveats".
Probably right; sorry for jumping to conclusions too hastily (a common antipattern on the ol' internets). Certainly if someone deleted a parent comment, there are several design decisions for comment services on how to handle child threads. One would hope that a deleted parent comment would leave a placeholder like HN does, but it's doubtful that that's the only design branch taken.
"I don't see why this was a security problem in the first place. No personally identifiable data was disclosed. What does it matter if you can view anonymous traffic graphs from other customers?"
I hate it when people say this sort of thing. That just indicates they can't see a potential exploit, not that it won't be one potential aspect of an attack. Honestly, attackers - regardless of their morality - will tend to look at things from a viewpoint others haven't imagined. It's best to give them as few avenues as possible.