There's nothing wrong with publishing your findings immediately. It's not up to a security researcher to do a bunch of unpaid, extra work. Finding a bug is enough.
Which is better: Finding a bug and bringing attention to it, or someone malicious finding the bug? The latter is objectively worse, but people keep trying to punish researchers for not following the third path of "Report it in private, following strict and lengthy procedures, and make no mention of it until a timeline of their choosing."
It's a professional courtesy for someone to do that for free, but not a requirement, and it's certainly not unethical to tweet about it.
While I'm not sure how keeping the bug secret compares to revealing the bug immediately in terms of ethicality, I think we can all agree that keeping it secret for a reasonable amount of time while the company fixes it (unless there is active evidence of exploitation, or you have significant reason to believe there is) is strictly more ethical than either of those options.
No, no it's not strictly more ethical. It's not even strictly safer, which should be an even easier question to answer. The baked-in assumption in your logic is that users have no options other than waiting to patch. But, obviously, they do, and keeping vulnerabilities secret deprives them of those options.
Full disclosure feels very much like pointing at my house and saying "This house here has a lock which you can get past using a Bic pen if you insert it with the rounded end and turn it clockwise. You could then steal all the things inside."
I get what you mean about "responsible disclosure" leaving users in the dark here but I'd feel much better if you'd just tell me there's a problem with my door before you put it in the neighbourhood news bulletin.
To make it worse, the vendor has the means to inform every user whereas you have the means to inform some tiny subset.
I see this as a discussion of trade-offs whereas you seem to see this as an obvious truth on your side of the argument. That's the only part that concerns me.
> Full disclosure feels very much like pointing at my house and saying "This house here has a lock which you can get past using a Bic pen if you insert it with the rounded end and turn it clockwise. You could then steal all the things inside."
No, it's like going on TV and telling everyone that a certain brand of lock, which you happen to use, is easily circumvented with a Bic pen. Feel free to get a new lock, or do something else to increase your security if you feel it's warranted, as the lock isn't good protection, and it wasn't prior to the announcement either.
You might find this[1] relevant. This is how bad locks can be. Would you rather use this lock in ignorance, or immediately replace it? Particularly of note is the 20 seconds or so starting at 4:10, where he makes it very obvious just how bad this lock is. Also note how he says "Geez, keep sending me stuff like this and you'll get me sued by Master Lock again."
The point is the "lock" can't be immediately replaced, by the user, in most software exploit scenarios. That's why you delay, so the company can ship a new "lock" so that when malevolent forces who didn't know of the exploit learn of it the new "lock" is ready to do duty.
It depends on the specifics of what's being exploited, but in general I think a short delay between informing those who can fix it and informing those who will exploit it is more moral.
That really isn't an option for most users. People who use a password manager are pretty much forced to use that password manager, at least if they're using it properly. Most users won't know how to export it to a CSV and then use that without a browser extension and/or import that to another password manager.
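For what it's worth, the CSV fallback is clunky but workable in a pinch. Here's a minimal sketch of what "use the export without a browser extension" might look like, assuming the export has name/url/username/password columns (which may vary by version), and keeping in mind that an unencrypted export sitting on disk is its own risk:

    import csv
    import sys

    # Look up one entry in a LastPass-style CSV export.
    # Column names (name, url, username, password) are assumed here and may differ;
    # securely delete the export file when you're done with it.
    query = sys.argv[1].lower()
    with open("lastpass_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            if query in row.get("name", "").lower() or query in row.get("url", "").lower():
                print(row.get("username", ""), row.get("password", ""))
                break
        else:
            print("no match found", file=sys.stderr)

Most users will (reasonably) never do this, which is rather the point.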
Isn't that the right thing to do for truly sensitive passwords?
I'd much rather transcribe my financial account passwords to paper, or not log in for a couple days than have multiple months where a vulnerability might be exploitable
Exactly. My front door has a lock we open with keys, which is all we rely on normally. It also has deadbolts, which we use when we're out of town. Tell me my keyed lock --- tell everyone my keyed lock is unsafe! We'll just set the deadbolts until we can replace it.
What I don't want is you secretly working with the vendor of my lock to fit some kind of solution to their problem into their product shipping schedule.
You're not telling everyone, though, are you? As a matter of fact, you're probably not even telling a small number of people who have the lock.
You won't be setting your deadbolt because, guess what, the guy who found the problem told your local Linux Users' Group, not you. And you don't attend your local Linux Users' Group.
If you were telling everyone that would be another thing.
We're talking about someone posting on Twitter about bugs, not someone giving secret presentations at LUGs.
I don't like telling vulnerability researchers what to do. Of all the ethical issues involved in vulnerability disclosure, the most important to me is the fact that these researchers are doing free work for the vendors, who bear all the obligations of ensuring that they're not shipping vulnerabilities in the first place. I think these debates about "disclosure ethics" mostly serve the vendors, who want to deflect from discussions about how they manage to ship vulnerabilities that third parties can find.
But if I had to have a problem with any disclosure process, it's the "secret cabal" model, which has always been a problem for me, all the way back into the 1990s with things like the CORE list.
I'm not going to argue against your position on security experts' freedom to act because I do not think that any law (or societal taboo) should be made abridging their right to disclose in any way the results of their research. If there is any withholding, I want them to do it of their own accord because they are convinced of the merits of withholding, not because of anything else. So let's set that argument completely aside because there's no disagreement on that front.
Twitter is the equivalent of your local LUG in the scale of things. My mum is not reading Twitter for security news, or even at all. What my mum is doing is having her stuff update. And my dad's no different.
It might as well be some presentation at DEFCON. I want my vendor to disable functionality that's vulnerable. And I want them to be given a chance to do that. If you feel you are adequately warning users by fully disclosing via a Twitter post then so be it. I think I've made all the arguments I have to make on the topic. If they are insufficient, then that is where I must leave it.
To anyone reading afterwards who may not be sure about my position, I feel positively about how this vulnerability was handled.
> Twitter is the equivalent of your local LUG in the scale of things.
Not really. Any serious vulnerability in a widely used product disclosed by twitter announcement makes it into the mainstream media in no time and prompts a response by the vendor (if one had not been forthcoming already). Your local LUG, less so.
Well, we're not comparing "not knowing" to "knowing". We're comparing "vendor tells all customers" vs. "you tell a small fraction of customers".
I would rather have the former. Also, I didn't pick the Bic pen thing randomly. Certain Kryptonite locks were once famously demonstrated to be Bic pen unlockable.
In the case of a lock you have to go out and buy a new one. A difference with software is that your lock can improve its security without you having to buy a new one.
If a vulnerability in your lock is found and the lock manufacturer buys you a new lock within a week, do you stop trusting that manufacturer?
So, because some companies are good at fixing things in a timely manner, we should make the public wait for all vendors? What about problems that are more architectural or design specific that aren't easily fixed, or can't be fixed by the nature of the discovery (math proofs)? What about vulnerabilities that seem fairly benign, but when used in conjunction with other benign vulnerabilities result in something greater (e.g. the ability to influence timing attacks)?
What is a serious problem is subjective. The right amount of time to wait is subjective. In both cases, there will be people that are unhappy with how they were or weren't informed. Instead of some select group of people deciding how important the fix is and when it will be fixed, announcing immediately empowers the public.
>So, because some companies are good at fixing things in a timely manner, we should make the public wait for all vendors?
Of course not. In our analogy here, if the lock maker isn't issuing even a statement then hell no, you shouldn't trust them. But in this case the locksmith fixed and reissued the lock within a week.
> What is a serious problem is subjective.
Yes and no. There is a gray area. But what isn't is: lives, A TREASURE BOX OF PASSWORDS, anything that can cause physical damage, and anything that can significantly compromise a user's safety.
To be clear: in another post I said you should ALWAYS (and I can't emphasize that enough) publicly disclose, so that others don't make the same mistake. But obscurity can be useful in the short run.
Announcing immediately DOES NOT empower the public. The public is not rational and is quick to fear. ALL companies (every single one) have vulnerabilities. What is important is whether they fix them in a timely manner or not. SEEING that a company does fix a problem quickly empowers the company AND the user, because users find they have a company they can trust. Immediate disclosure only empowers the adversary.
Obviously, announcing immediately does indeed empower those parts of the public who will decide to simply stop using the vulnerable software. Vendors don't like to acknowledge this, and prefer instead a set of premises where the only recourse to a vulnerability is an officially-approved patch (and, by implication, the continued loyalty of their user base). But it is nevertheless true.
Your statement would be true IF it were possible for software to be invulnerable. Since ALL software (and hardware) is vulnerable it is perfectly reasonable to give the owner of said software/hardware a reasonable time to fix the issue before releasing to the public. I fully believe you should always release to the public, but obscurity is a good temporary measure.
I think your stance is reasonable, but I disagree with it.
Remotely turning off a pacemaker is the ultimate example of empowerment. If I had a pacemaker, I would want to be informed immediately of a vulnerability, along with everyone else. Assuming it's a proximity-based attack, I'd have the option of driving to a rural area and booking a motel until the vendor deploys a fix.
This is obviously an extreme example, but addressing the most extreme example might be the best way to address all the rest.
How would I hear about it? Because HN, Reddit, and Twitter would all blast the information to everyone, along with "warn anyone you know who has a pacemaker." Even if I weren't technologically literate, someone close to me would tell me.
This is strictly preferable to the alternative: If the vulnerability is kept secret, my life would be in the hands of a select few who (a) know about the vulnerability and (b) may or may not use this information to their advantage.
When an independent security researcher informs a company about a vulnerability, the researcher has almost no transparency into how the company is responding to the report or what the timeline is for a fix. (HackerOne has made this a bit better, but most companies still don't have public bug bounty programs.) In the case of a pacemaker, it's very likely that the only thing the security researcher will hear is "Thank you for informing us. We are working urgently to deploy a fix" followed by a month of silence. This leads to the researcher doing increasingly desperate things to bring attention to the issue, such as emailing other security researchers in private. But an independent security researcher has no way of knowing whether the person they're emailing is trustworthy, nor can it be guaranteed that their conversation isn't being spied on. The researcher is also likely to tell someone (their husband/wife, their colleagues, etc) who are all trusted to keep it secret.
The responsibility for the pacemaker being able to kill me is squarely on the vendor, not the researcher. I would argue that the researcher has an ethical obligation to inform me, first and foremost, so that I can protect myself from the threat as quickly as possible.
When my life is on the line, giving me a chance to save it is much better than leaving it up to someone else to secretly guard it.
Some people might feel differently. For example, maybe they'd want to remain unaware that their pacemaker could kill them. But the point is that it's not a clear-cut issue.
Honestly, in that situation I would probably do nothing, since the chance that anyone would maliciously target me with a proximity-based attack is so small that it's more productive to be worried about being struck by lightning. But what if it wasn't proximity-based? Let's ratchet up the severity: What if somehow the pacemaker was connected to wifi by design, and someone gained control over whatever central server it was connecting to? In that case, I would want to be informed so that I could disable my wifi rather than trust that the server remains secure while a fix is deployed.
This all may seem unrealistic, but computing is still in its infancy. What about self-driving cars? I'd want to know immediately if someone could remotely gain control over my car so that I could avoid getting in one. Ditto for planes.
My mind is open on the topic, so I assure you that if you try to persuade me otherwise, it will be a productive conversation. But it's hard to imagine a scenario where I'd rather not know of a threat. Can you think of one?
The only thing that comes close is if an entire city could somehow be destroyed remotely by any random troll on the internet. But in that case it might still be better to warn everyone immediately so that they can flee from the threat that was somehow constructed in their back yard, rather than keep them ignorant ostensibly for their protection and tell them later "By the way, you and your family were in serious danger."
The analogy with the pacemaker would work better if disclosure were just announcing "your life is at risk, please keep yourself isolated". That would warn the public, but it would also lack the specifics of how to harm people. If you announce the vulnerability and disclose how it can be used, you increase the number of people who could do damage, and the victims don't have anything to defend themselves with. Timing here is also very important.
Ok I'll present a scenario. Let's say you discover that there is an easy to reproduce hack that allows someone to remotely control a car of a specific, but popular, company; others aren't affected. All you need is an SDR and a computer. This hack was not so easy to find, which is why it was missed by the developers in the first place. It takes high technical skills to find, but is easy to execute. (BTW, this kind of hack would actually make it difficult to track down who executed the attack)
Currently there have been no cases seen in the wild, how do you move forward? Do you give the company a chance to fix the problem before it is seen in the wild? I think so, and here is why.
If you fully disclose you WILL see the hack in the wild, because it is easy to execute and there are malicious people out there. What does the public do? Not go into work for a month? People can't do that. Buy a new car? People can't just buy new cars. They are too expensive, and it isn't like a dealer will want to do a trade-in for a dangerous car.
Let's say you just tell the public that there is an easy to reproduce hack, and you fully disclose to the company. This can still be a dangerous thing, because now people know there is a powerful hack and they will be looking. Malicious actors will be quick to use it too, because they know it will be patched soon. It isn't unreasonable to think that the hack can be found before it can be fixed and a patch issued.
Now let's say that you just disclose to the responsible party. There aren't many people looking for the hack. Malicious actors that might know about it (definitely few at this point) will be wary to use it, because if it is used then people will find out about it. In the meantime the company patches it, issues an OTA update, and we never see the attack in the wild (or maybe once or twice). We then tell the public so that no one makes the same mistake. By doing it this way you minimize the chance of the hack being used. This method IS protecting your life better than you could do on your own.
We are presented with a problem if the responsible party brushes off the researcher. It then becomes the researcher's responsibility to partially disclose to the public. So now we're in the second case presented above. But now there is more power to the public. From a legal standpoint you can prove recklessness or malice by the responsible party, and a lawsuit will be much easier. The company can be required to replace the vehicle, provide a temporary replacement, or buy it back. Whatever happens, it will go much smoother in court than if one of the first two events happened.
The key here is that we need trust. No system is invulnerable and it would be naive to think there is one. What the trust comes from is that the company producing the product you are using fixes the issue quickly. The trust is that they are doing what they can to ensure the safety of the users.
> If you fully disclose you WILL see the hack in the wild, because it is easy to execute and there are malicious people out there. What does the public do? Not go into work for a month? People can't do that. Buy a new car? People can't just buy new cars. They are too expensive, and it isn't like a dealer will want to do a trade-in for a dangerous car.
Of course they have options. It's up to them to choose whether to get into a dangerous car. They can arrange to ride-share, or book a taxi, or ask a favor from a friend. And even if they have no options, the vulnerability researcher didn't make them less safe by disclosing immediately. The car company made them unsafe by shipping a car that could kill them.
Tangentially related, but you may find that the term "responsible disclosure" has covertly influenced your thinking, as it did mine: https://news.ycombinator.com/item?id=12308246 It's easy not to notice.
> We are presented with a problem if the responsible party brushes off the researcher. It then becomes the researcher's responsibility to partially disclose to the public. So now we're in the second case presented above. But now there is more power to the public. From a legal standpoint you can prove recklessness or malice by the responsible party, and a lawsuit will be much easier. The company can be required to replace the vehicle, provide a temporary replacement, or buy it back. Whatever happens, it will go much smoother in court than if one of the first two events happened.
The company will undoubtedly be required to fix the vehicle anyway, if only to save their brand. Just because someone revealed the existence of a problem doesn't absolve them of any responsibility.
You make some good points, and the conclusions are logical and well-reasoned. Unfortunately I think we disagree on the central point: "This method IS protecting your life better than you could do on your own." Under no circumstances would I want to get into a car that an SDR could control -- this is the ultimate protection -- and the only way to ensure that is to be informed immediately.
So I think we have a different fundamental problem. I do not think a system can be fully secure. I think the vast majority of security researchers would agree with me too. A common saying is "given enough time and enough will power, someone will break your system."
What we're talking about here is the discovery of a vulnerability. Not that the company knowingly shipped the product with this issue.
Security is a constant cat and mouse game. Security through obscurity is an effective method in the short run. A company cannot and will not be held responsible for shipping a car that could be hacked unless security researchers can show that there was gross negligence.
>Under no circumstances would I want to get into a car that an SDR could control
I suggest never entering a car with a computer in it. This likely means the car you own. Basically if your car has power steering and power braking then it can be hacked.
Really great discussion. To my complete layperson mind, perhaps a good middle ground is that if you are the person who discovers a big security bug, you first go to the company in private, but with the caveat that you insist on being informed, perhaps daily(?), of progress.
If you feel that the vulnerability is big, and the company is taking it seriously enough, you give them an honest chance, otherwise if they slack or hum and haw, you go to the public.
The nuance here is that we trust the person who discovered the hole to make subjective judgements about the severity of it and the company's response, and we force the company to be more transparent than usual to the person who discovered it about how they are working/nature of the code fix. But maybe that could be a good thing?
>If you feel that the vulnerability is big, and the company is taking it seriously enough, you give them an honest chance, otherwise if they slack or hum and haw, you go to the public.
This. I fully agree with this. Obscurity is a good form of security in a temporary sense. But vulnerabilities should ALWAYS be disclosed to the public, for a number of reasons. The question is when. Disclosing does a couple things: 1) We see a company fixes them in a timely manner, which gives us trust in services that we use (and the opposite if they fail to) 2) other companies don't make the same mistakes because now the vulnerability is googleable and quickly becoming public knowledge.
"Hey neighbor, nevermind how I know this, but I can open your door with a Bic pen and steal all your stuff if I wanted to"
"Well no one is going to try that. So I'm safe"
Post instructions
But if the person asks for help to prevent it, and especially if you're being paid (like Tavis) or a good neighbor, help them before you post instructions, so that the instructions you post aren't useful for breaking into your neighbor's house.
And if you know 5, or 10, or 15 people with this lock, what then? Do you spend time helping them all? What about all the people that aren't your friends? You are helping your friends at their expense because you assume you are the only person to know about this. Is it more ethical to help your friends over the public at large?
> And if you know 5, or 10, or 15 people with this lock, what then?
You tell them.
>>Do you spend time helping them all?
No
>>Is it more ethical to help your friends over the public at large?
It is more ethical to help the major players. We're talking about a company that holds a significant portion of users' data. If we found Master Lock had a vulnerability, we'd expect them to fix it (we wouldn't expect them to replace every user's lock, for a number of reasons that are different from a software vulnerability, but it wouldn't be unreasonable for them to notify users and/or provide a coupon). It also wouldn't be unreasonable to wait for Master Lock (who controls a significant portion of the market) to fix it before making it public. Making it public means everyone can fix it. But let the majority have an attempt to fix it before letting the public, and malicious factions, know.
Would you agree that the following series of actions is the most ethical way of reporting a vulnerability?
1. Inform the company of the vulnerability
2. Offer a limited disclosure of the vulnerability to the public/whomever you want, without giving away details of the vulnerability, as these would make an attack from an adversary more possible while the company fixes the bug.
3. After the company patches the bug, do whatever you want.
edit: I guess it would be important to ask this as well so we're not just going in circles: of the three general categories of ethical systems, would you say you subscribe to virtue, consequentialist, or deontological ethics?
What about publicly posting that there is an X-level vulnerability and privately communicating what it is? Then people can make their decisions while still reducing the chance the vulnerability immediately becomes an exploit.
Well, the controversy in this case is that Tavis announced the existence of a bug (no details). Not sure why that would upset anybody. You thought it was perfectly bug-free?
It sounded like the parent was arguing for it not being wrong to do immediate full disclosure. I was responding to that. Perhaps the parent was merely talking about stating there is a vulnerability?
That is much more ethical. However, I am not sure that I believe it is something that would pass my threshold for clearly ethical (not your words, mine). You would need to weigh the damage to the company and the consumer knee-jerk reaction against the ability for consumers to make an informed decision about what they're willing to risk. In this case, maybe LastPass deserves for consumers to know immediately. They seem to have an extremely buggy program with a number of disclosed vulnerabilities. It's hard to know whether other programs are necessarily better though. Another factor that you should account for against disclosing is that stating the nature of the vulnerability could potentially make it easier for someone to exploit. Do you think Tavis could find a vuln easier if I told him it was an RCE vs a CSRF bug? I do.
I suspect if you told Tavis there was an RCE or CSRF bug, he would find one of each. I mean, I've found bugs by telling myself they existed. If I told myself it was a different bug, I'd find a different bug. Software is buggy.
I disagree. There's always a possibility that someone else already knows about it and isn't disclosing it. Waiting to disclose will naturally lead a company to take longer to fix the issue.
Immediately disclosing allows customers to take action to protect themselves in case someone else is already exploiting the bug. Waiting to disclose is being peddled by the corporate agenda as "the ethical thing to do" because it makes vendors look bad.
Here's typically what happens. You disclose a bug, the company fixes it for the next release and puts a footnote in the release notes. Nobody ever looks to see if it was exploited because the instinct is to bury it. Customers aren't widely notified and the seriousness is downplayed because "the bug is already fixed". In the meantime the software was vulnerable for up to three months when it didn't have to be.
If you disclose immediately there's a temporary panic as everyone takes mitigating measures (which is how it should always be done!!!). The company is under tremendous pressure to put out a patch in a matter of days, which they usually do. Then you get yelled at by the company for making them look bad and "putting their customers at risk" even though the customers are provably safer because they were only vulnerable for a few hours.
You're forgetting the biggest factor: immediate disclosure also informs malicious parties.
What's really more dangerous, an extra week with a vulnerability that might be known, or two hours with a vulnerability everybody knows about?
Who's really more likely to see that disclosure on your personal Twitter account, every single (potentially non-technical) user of software you aren't even related to, or a few black hats who know you like to hack and brag?
Yes, it also makes companies look better, but in this case my anti-corporate agenda needs to take a back seat.
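To make that concrete, here's a toy back-of-envelope in Python. Every number in it is invented, and it deliberately ignores the other side's argument that disclosure lets users mitigate, so treat it as a sketch of the trade-off rather than evidence for either position:

    # Toy comparison of expected "attacker-hours of exposure".
    # All inputs are made up; change them and the conclusion flips.
    def exposure(p_attackers_know, attackers_if_known, hours):
        return p_attackers_know * attackers_if_known * hours

    quiet_extra_week = exposure(0.05, 10, 7 * 24)   # undisclosed, maybe known to a few
    loud_two_hours   = exposure(1.00, 500, 2)       # disclosed, known to everyone

    print("quiet extra week:", quiet_extra_week, "attacker-hours")
    print("two loud hours:  ", loud_two_hours, "attacker-hours")

Whether the quiet week or the loud two hours comes out worse depends entirely on what you assume about who already knows.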
You're taking an unknown known and making it a known known.
I'd rather an exploit stay secret so there's a chance that someone doesn't use it against me, rather than telling everyone the exploit and hoping someone fixes it fast enough.
Disclose it to the company, and give them a hard time limit.
The opinions of people who work in the industry, whose reputations are on the line, are strongly aligned toward immediate disclosure for fairly persuasive reasons (see elsewhere in the thread). It makes us all safer to do so, for example, because you have the option to stop using the affected software.
> I think we can all agree that keeping it secret for a reasonable amount of time while the company fixes it (unless there is active evidence of exploitation, or you have significant reason to believe there is) is strictly more ethical
Here are a bunch of tptacek comments on the topic:
--
"Responsible disclosure" is a term of art coined by a group of vendors and consultants with close ties to vendors. Baked into the term is the assumption that vendors have some kind of proprietary claim on research done by unrelated third parties --- that, having done work of their own volition and at their own expense, vuln researchers have an obligation to share it with vendors.
Many researchers do share and coordinate, as a courtesy to the whole community. But the idea that they're obliged to is a little disquieting.
If vendors want to ensure that they get some control over the release schedules on their flaws, they can do what Google does and pay a shitload of money to build internal teams that can outcompete commercial research teams. Large companies that haven't come close to doing that shouldn't get to throw terms like "responsible disclosure" around too freely.
--
"Responsible disclosure" is a marketing term. Linus may be wrong about the importance of security flaws relative to bugs, but that doesn't validate the self-aggrandizing omerta of security researchers.
Vendor "coordination" of security flaws often works to the detriment of users. For one thing, cliques like vendor-sec gossip and share findings with the "cool kids", ensuring that every interested party but the operators knows what's coming a week before the advisories are published. For another, it substitutes the judgement of people like you --- who, no offense, don't run real world systems or make real world risk assessments about real assets --- for the judgement of the people who are not like you, but who could potentially disable or work around vulnerable systems far in advance of "coordinated patches".
--
The process of "responsible disclosure" gives product managers latitude, because it effectively dictates that researchers can't publish until the vendor releases a fix. The vendors almost always decide when to release fixes.
When a researcher publishes immediately, vendors are forced to fix problems immediately. A small window of vulnerability is created ("small" relative to the half life of the vulnerability, which depends on all sorts of other things) where less-skilled attackers can exploit the problem against more hosts.
On the other hand, in the "responsible" scenario, many months will invariably pass before fixes to known problems are released. During that longer window, anybody else who finds the same problem (and, obviously, anyone who had it beforehand) can exploit the vulnerability as well.
Furthermore, full disclosure creates a norm in which vendors are forced to allocate more resources to fixing security problems, instead of waiting half a year or more. This costs vendors. But the alternative may cost everyone else more. It depends on how well-armed you think organized crime is.
Finally, there's the issue nobody ever seems willing to point out. If you disclose immediately, lots of people can protect themselves immediately: by uninstalling or disabling the affected software.
--
You are unlikely to find anyone in the "community of responsible security researchers" to say anything negative about Tavis Ormandy. It is way over the top to imply that he's not "halfways mature".
You will, on the other hand, find plenty of people with real reputations in the industry at stake (unlike yours, which is influenced not one whit by anything you say about disclosure) who will be happy to explain why "responsible disclosure" is damaging the industry. It's not even a hard argument to make. The dollar value of a reliable Windows remote is too high to pretend that bona fide researchers are the only people who will find them. Meanwhile, because product managers at large vendors are given the latitude to fix problems on the business' schedule instead of the Internet's, people get to wait 6-18 months for fixes to trivial problems.
Personally, without wading into "responsible" vs. "full" disclosure, I will point out that vulnerability research has made your systems more secure; the manner in which the vulnerabilities were uncovered has very little to do with it. You are more secure now because vendors and customers pay to have software tested before and after shipping it.
I think there are sufficiently many persuasive arguments that it's very difficult to claim that someone who informs the world of the existence of a bug is doing more harm than good, regardless of how that information is made public. And if they're doing more good than harm, it's probably ethical.
If you want to control disclosure of a bug why not offer large sums of money in exchange for an NDA? "Report it to us, keep quiet for 30 days and we'll pay you 100k." Would be a motivator few would turn down.
That's not a question easily answered though. Assuming complete objectivity, it would mean that you might save an order of magnitude more lives in the future discovering a cure for cancer, for example, at the expense of lives today. Then you're stuck with questions on how to quantify the value of life etc., but the base reality of the fact that you DO save more lives, and therefore, keep more families intact/happy doesn't change. Who matters more? People who died in the process? Or the future ones who will die as a result of inaction in the present?
Not right. Of course in practice that situation doesn't come up, because you can't reliably tell whether you'd save many more people by sacrificing people now.
Where in the article/blog post is this sentiment stated? I can only find praise and thanks for taviso and white hat researchers from LastPass; the fact someone on Twitter disagrees shouldn't be used to suggest an entire company takes this approach.
I strongly disagree. Right now you are just talking about vulnerabilities in general, but there are varying levels of impact. Are these vulnerabilities OK to tweet?
- able to see someone else's recipes on a cooking app without permission
- someone has the ability to transfer money away from your account
Because of how close technology is to our lives, there will always be sensitive things with someone's code or design as the only defense. It's not always information at risk. There are bugs it would be unethical to publicly expose.
I think there is an ethics in how you release. I 100% believe you should disclose the bug to the responsible party. Give a reasonable time for them to fix, check, and then disclose to the public.
If you check and they are working on it then give them more time (offer to help? Doesn't have to be free). If they push you off then post it everywhere and well... we know how that goes. If they do fix it then add it to your resume and everyone is happy.
In my opinion, all vulnerabilities should be disclosed to the public. This helps prevent the same mistake from being made twice (or in reality, thousands of times). BUT there is a reasonable way to disclose. Some things are more vulnerable than others (like a cache of everyone's passwords or a device that controls someone's heart...). Public disclosure makes more people aware of this. We say security through obscurity isn't security, but that is only true to an extent.
> Which is better: Finding a bug and bringing attention to it, or someone malicious finding the bug? The latter is objectively worse, but people keep trying to punish researchers for not following the third path of "Report it in private, following strict and lengthy procedures, and make no mention of it until a timeline of their choosing."
He put a lot of details out there that simply weren't necessary, and I think this is where the problem lies for a lot of people. Please explain to me how making the distinction between "There's an RCE" and "There's an RCE that's exploitable by messing with <feature X> in the API" benefits anybody besides nefarious actors (when the software developer is already working on a fix, as LastPass was at the time)?
Pointing out that structures of power are not what they appear - be they public or corporate - is in the best spirit of a free press. And reporting a security vulnerability in a widely-used product is just that.
I clearly recall people debating this in relation to IIS vulnerabilities under Windows NT4, which therefore must have been some time pre-2000. Unfortunately, I don't see this nicety in the future.
Ethics come into play when a company is paying people to hack other companies' software. Google's Project Zero is a brilliant PR play, but it's walking a fine line ethically. Google pays some engineers to hack other companies' software, then publicizes the results.
Given its recent troubles, Uber could benefit from hiring a few gray-hats, pointing them at Google, and releasing some of the resulting bugs under an arbitrary "responsible disclosure" policy.
It's only a fine line if you think everything they find would remain unfound without them.
It's sort of like saying Consumer Reports walks a fine line if they alert the public about any dangerous behavior they see in the products they test. The danger is always there, telling people about it so they can avoid it and it can be fixed is to the public's benefit.
I don't think Consumer Reports rates other product review sites with which it directly competes. Project Zero is more like politicians paying for opposition research. You can argue that it benefits the public to have dirty deeds exposed by any means necessary, but it's hard to call it altruistic.
That's one way to look at it. Another way is that they are subsidizing the security teams of their competitors.
Also, Project Zero gives 90 days or until you've released a fix[1]. If you can't roll out a fix for your problem in 90 days, you're either playing fast and loose with your customers' security, or you've lost the talent to competently fix the problem, or you've made such a poor design choice that it's hard to fix and nobody should use your program or service anyway.
Any company or project unwilling or unable to roll out a security fix within 90 days doesn't deserve your support or money.
I've been trying out lastpass for a few weeks. I downloaded the extension that their website directed me to. Because of this discussion, I did a version check and lo and behold, it defaults to NOT auto-update.
Luckily I've been using it only for a few unimportant sites. They've had two security issues disclosed since I started my trial. I'm impressed with the functionality. I'm decidedly unimpressed with the security experience.
I am a heavy LastPass user, but I've stopped using the browser plugins, and just copy and paste into my browser (or look up on my phone and manually type on my laptop/desktop). C&P'ing is also a bit risky as LP seems to lack a clear-clipboard option on Android.
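On the desktop side, a rough stopgap for that workflow is to wipe the clipboard yourself after a delay. A sketch, assuming the third-party pyperclip package (it does nothing for the Android case):

    import time
    import pyperclip  # third-party: pip install pyperclip (desktop only)

    def copy_with_timeout(secret, seconds=30):
        # Copy a secret, then wipe the clipboard unless something else replaced it.
        pyperclip.copy(secret)
        time.sleep(seconds)
        if pyperclip.paste() == secret:
            pyperclip.copy("")

    copy_with_timeout("correct horse battery staple")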
I have a month left on my paid subscription. I think I'll be leaving for a competing product shortly.
If they take security seriously they will stop distributing their addon from their website, update the deprecated version from the addon store, and start behaving like a company that has access to every single password their customers have.
I just realized I'm running on 3.x.x too, there was no update notification. If that's how this company treats security, then this is game over for me. I've been planning to switch to 1Password for a long time; seems like the time is now.
I want to ask a serious question: considering there have been some major issues with LastPass, like this one, showing up in recent memory, why do people (at least here on HN) stick with it despite these major issues? There are other options out there, depending on your use case (I use 1Password, which seems to have fared much better as far as vulnerabilities), so why keep the same software when vulnerabilities continue to be found in it? Is there a special feature in LastPass?
I'm comfortable using software that has been thoroughly vetted by expert security engineers. I am even more comforted by the fact that LastPass responds rapidly to Tavis Ormandy's reports (and presumably others). I'll use the guys who are extensively studied and do all right over the guys who are less so.
It's like choosing a 4 ⭐️ product with 5000 reviews over a 5 ⭐️ product with 3.
Every security expert I've spoken to about password managers recommends against lastpass. Typically they recommend 1password, or passwordstore or sometimes even keepass.
The more apt comparison is a product with 100 2 star reviews vs one with 1000 4 star reviews.
1password has more recommendations by security engineers, and has been reviewed in significant depth.
Lastpass has been reviewed in depth, but is rarely recommended by the same engineers.
1password has changed recently. Are security experts recommending their newer cloud subscription service or the older version that stored the vault on Dropbox or another storage service (or both)?
Thank you for that. Would any security engineers here who have successfully published vulnerabilities like to chime in? I would appreciate strongly informed advice.
How do you tell that LastPass has been vetted better than the alternatives? "Experts looked at X and found no major issues" normally doesn't get very much attention, and potentially isn't published at all, or just reflected in recommendations.
(Some of it of course also depends on the threat model: something with no browser integration (or, with more caveats, an explicitly activated one) is way less likely to be vulnerable to attacks launched from a website, reviewed or not.)
As an enterprise customer, I previously tried getting people to use some Keepass databases available only on the local network. Compliance was roughly zero. The UI frightened people, it didn't integrate well into existing workflows, and, as an admin, I had no good way to segregate users into groups and only give them access to specific password sets.
Since moving us to LastPass, compliance is through the roof. Using the browser extension makes it much easier for everyone. Administration is much simpler too. I'm actually considering moving some personal Keepass files to a personal LastPass account, as I now find myself comparatively frustrated when I have to use my Keepass files.
The only good alternative everyone seems to agree on is Keepass. Keepass is useless if you need something that works in a group of people, especially across different devices.
I can't speak to the vulnerabilities in 1Password, but the reason we decided against 1Password is that the user experience for LastPass (Teams) is much better. There's no point in having a secure tool without compliance, and bad UX is a show-stopper.
Also the vulnerabilities so far have been mostly non-concerning:
1. The vulnerabilities of the browser extensions generally require the user to access malicious URLs, so they still require some degree of interaction.
2. The actual hacks of LastPass didn't result in any passwords or confidential information being leaked, so they're not very interesting.
That 1Password hasn't seen similar disclosures doesn't mean 1Password doesn't have any vulnerabilities, nor that it didn't have them in the past. No software is without faults and so far LastPass has always reacted promptly. That 1Password doesn't have as many recent disclosures may simply mean that Tavis hasn't gotten around to looking at it yet.
For me, it is because I have shared folders for credential management for various jobs and it would be way too hard to get everyone to migrate. In fact, I am not aware of any other solution that matches LastPass functionality.
I have changed my behavior to just using lastpass-cli and a self-written Go/Qt GUI wrapper over the CLI - https://github.com/alexzorin/lpass-ui - (only builds on Linux sorry) to protect myself from the problems of the browser extension.
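For anyone curious what that looks like minus the GUI part, here's a rough sketch of shelling out to lastpass-cli from Python. It assumes lpass is installed and you've already run "lpass login"; the exact flags may differ between lastpass-cli versions, so treat it as illustrative:

    import subprocess
    import sys

    # Fetch one password via lastpass-cli. Assumes "lpass login <email>" was run
    # already; "lpass show --password" is the flag I believe current versions use.
    def get_password(entry_name):
        result = subprocess.run(
            ["lpass", "show", "--password", entry_name],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    if __name__ == "__main__":
        print(get_password(sys.argv[1]))

The nice property is that nothing here runs inside the browser, which is the attack surface these bugs keep hitting.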
LastPass user here; the reason why I choose LastPass is that it's convenient, _not_ because it's safe. A common misconception is that software like LastPass/KeePass should be extremely secure and resistant to any kind of hacking/exploiting, so that I can store all my passwords in it. Let's face it, that's not realistic. The practical way is: 1) for non-critical accounts, use the most convenient and relatively safer option, like LastPass/OpenID/etc. 2) for critical ones (like your bank account or major email account), burn a strong password into your brain and combine it with 2FA. Do not trust anyone/any software when something really matters to you.
Why not!? So they mess up. I figure all the other password managers have similar issues that either weren't disclosed or were fixed before the issue was publicly found. No software is bug-free.
While they have issues -- at least this is an issue I know that's been addressed.
"This password manager is bad, so all the others must be just as bad"... yeah.
Other password managers, like 1Password, are more frequently recommended by security engineers.
They're also not built entirely as browser extensions and interfaces, which massively increases attack surface.
Maybe the reason other password managers aren't tire-fires is because they're designed better and are more secure, not because they have more unknown issues.
I use lastpass with a yubikey for 2fa and I feel safe enough. I briefly looked at other password managers when I was evaluating lastpass, but convenience won out for me. I haven't looked at moving out of lastpass, but 3 years ago no other password manager came close to the mobile and platform support that lastpass had.
I considered keeping my password DB local, but the inconvenience of needing it and not having it wasn't worth it to me. Do I know I'm making a tradeoff? Yes.
I also have a nasty habit of setting things up on a local server and then never updating it. Instead I decided that lastpass was worth the $12/year so I wouldn't have to manage anything locally.
So while lastpass may not be perfect security, I'm a lot better off than I was three years ago when I used the same few passwords everywhere.
It also helps me share passwords with my wife, so that's nice.
Cost: 1Password is at least 3 times the cost. And I use a 2nd factor on most sites, plus the LastPass website only (no app and no browser extension). I think used this way it is the same as 1Password when it comes to security, at a third of the cost.
For me it's the ease of adding new sites, which is frequent. As a long-time KeePass user I know the pain of generating passwords, which often required navigating two programs for every sign-up.
In light of recent revelations I'm considering going back. And possibly telling friends and family to maintain a separate LastPass for their important stuff, in a separate browser.
I would check out 1Password. I can't compare LastPass and 1Password since I have never used LastPass, but adding a new login/password for a website is pretty easy in 1Password (just open the app and choose generate password, as long as the extension is installed on the browser it understands the website). It also prompts you if you login without generating a password.
I used LastPass for two years. When my student license expired I tried KeePass, 1Password and Dashlane. Dashlane used over 1.2 GB of memory, was slow and made no sense. You could add sites in the standalone client but NOT passwords. So you had to use the browser extension to create the password and copy-paste it into the client.
1Password might be amazing on Mac, but on Windows it was so barebones that I could just use LastPass without the extensions and still get more features.
KeePass was fine, just a bit cumbersome to integrate, and browser integration would sometimes stop working.
That being said, the UX in LastPass is also horrible and getting worse. It is so slow now with the fancier graphics. I'm still surprised they are all so bad, but LastPass is the least annoying to use.
The alternative is either going back to remembering passwords, sharing passwords between sites, or writing them down, or switching to a similar product that has fewer features and an unknown number of bugs, with likely worse vulnerabilities because there are fewer people looking for them.
Out of curiosity, what LastPass features are you missing in 1Password[0] or Dashlane[1]?
These two products are very feature-complete (at least for my usage) and have as much attention as LastPass (with for example Tavis Ormandy working on both). They even publish information about their security model[2][3], which LastPass does not.
LastPass has proven repeatedly that it is not robust enough for storing passwords. Fewer or less important vulnerabilities reported does not equate to less security.
A similar question I have is why use a browser extension for your password manager? I use 1Password as well, and if I need a password I just head into 1Password Mini. It's convenient enough, simple, and completely eliminates this attack vector.
Because it's extremely convenient. When this bug was first reported a few days ago, I disabled the Lastpass browser extension and started copying and pasting passwords from the vault, basically as you describe. It's been a very noticeable inconvenience, to the extent that I probably would have turned it back on soon even without a fix.
I'd argue convenience isn't really the point of password managers. Extremely convenient would be using the same password everywhere or letting your browser save them.
I've never used LastPass, but 1Password Mini is pretty convenient. It's always ready to go in the menu bar, and once you enter your password it stays unlocked for a good while.
Yes. That one. No one claims they didn't have any bugs of their own, but they tend to fare better. That bug also only affects Windows users and is only locally exploitable, compared to LastPass which is pretty much always RCE.
Just b/c they also have a bug doesn't make them any worse than anyone else or diminish their track record in general. And contrary to just about everyone else LastPass doesn't care to notify you of updates or have the auto-update mechanism on.
>why do people (at least here on HN) stick with it despite these major issues?
Before I switched to LastPass, I had maybe 10 different passwords I shared across 100 sites. This was terrible, and now that I use LastPass, every password is unique. So I use LastPass because it's an incremental improvement.
Is 1Password an incremental improvement over lastpass? I'm not convinced. It might be a tiny bit more secure, but nowhere near the gain in security I had from 10 re-used passwords -> all unique randomly generated 20+ char passwords. The cost of switching is high and so it would have to be enough of an improvement to justify the switchover.
I recently switched. There's a LastPass import option, and it took about 20 minutes to get all my devices synced up. The cost of switching may not be so high, depending on how you use LastPass (if you use their Enterprise features, for example, it would be higher).
LastPass is terrible, but there's no alternative, at least if you want cross-platform.
People keep bringing up KeePass, but apart from the bad UX, its developers seem to be even more incompetent concerning basic security design. That leaves 1Password, which also hasn't covered itself in glory security-wise, but appears to be marginally more competent. No Linux client though.
And most of the convenience. I don't need a password manager if all I'm using is a single machine - an encrypted file in my homedir would do. It's the syncing of passwords across all devices that is the most useful.
What if they're never on the same network? I don't bring my work machine home, or my personal laptop to work, and yet they have to sync passwords somehow.
My most sensitive passwords aren't in LastPass (or anywhere else for that matter) and don't overlap with any that are, but an awful lot are for less important sites.
One thing I like about this is Tavis' comment on Twitter on 3/25 that it's a major structural issue and they have 90 days to fix it. 6 days later, resolved and confirmed so.
I can't comment on Deployment, but the defaults are not secure (defaults to autofill info), and it's not secure by design - surface area is much too large.
For me to trust lastpass, I will have to see real changes on those.
Security isn't an absolute thing. Just the fact that you use the internet indicates that you're willing to sacrifice a significant amount of security, which is clearly worth it to you. Presumably most LastPass users prefer autofill browser integration.
It's a local db only, you're in full control of where it is, where it goes. It's one of my favourite pieces of software actually: Simple enough that my aunt is using it, secure enough that I'm using it for my company's accounts.
xc is a fork of the 2.x series. I would possibly recommend it, but it's less battle-tested and they're aiming to enable the HTTP server by default which I think is a mistake. I'm using it, but disabling that stuff.
(Either way the databases are compatible with one-another, so it's up to you what client you use)
And you can always store your DB in a "cloud" drive. This is what I do and I'm fairly happy with it. Granted it doesn't have the features and ease-of use of online password managers, but my workflow allows me to live with that tradeoff.
I have quite a few sites I need to share credentials to with my wife, but we each have other credentials we don't need to share. Is there straightforward workflow for that with any of the KeePass variants?
Are there any special considerations that need to be made with regards to synchronization? I'd expect the DBs to be mostly read-only, so maybe it won't matter much. But I'd hate for a write conflict to leave things in an inconsistent state.
Edit - Thinking about it more, I suppose it really depends on what each cloud provider does.
Yeah you gotta make sure you're syncing the db properly when you write to it. It works ok with two people, but it doesn't scale much further than that.
As I finally decided to move away from LastPass (giving Enpass a shot) and tried to delete my account, I noticed in the advanced settings that the option "Keep track of login and form fill history" was automatically turned on. This may be just to show you the "most recent logins" in the app, but nonetheless I think this setting should've been a bit easier to access.
Why on earth do people who read HN keep their passwords in a cloud service? pass/gopass, stored in git over ssh on a server I control, and I have access to all my passwords everywhere while only relying on the security of ssh and gpg...
No. If gpg does its job you could even host the passwords in a public github repository. Several pass users do this; I used to, but my server has better availability than github over the last few years so I switched after being frustrated at not being able to sync new passwords to my phone a couple of times.
So, to be fair, I misspoke when I said I rely on the security of gpg AND ssh. Really, it all comes down to the security of gpg. I feel pretty okay about that.
I have a security background and I've found (minor) security bugs in lastpass before and yet I still use it.
I use lastpass because it is safer than what I was doing before using a password manager and most inconvenience I'm willing to put up with for my day to day computing.
I like to think of it as a convenience vs security chart of password schemes and for me lastpass is on the pareto frontier of it and happens to be the most convenient option on that frontier.
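To illustrate that mental model, here's a toy sketch with completely made-up, purely subjective scores:

    # Toy illustration of the convenience-vs-security "pareto frontier" idea.
    # The (convenience, security) scores below are invented.
    schemes = {
        "same password everywhere": (10, 1),
        "sticky notes on the monitor": (6, 1),
        "browser-saved passwords": (7, 3),
        "cloud password manager": (8, 6),
        "local KeePass file": (5, 7),
        "pass + gpg + git": (4, 8),
    }

    def on_frontier(name):
        c, s = schemes[name]
        # Dominated = some other scheme is at least as good on both axes
        # and strictly better on at least one.
        return not any(
            c2 >= c and s2 >= s and (c2 > c or s2 > s)
            for other, (c2, s2) in schemes.items() if other != name
        )

    for name in schemes:
        print(f"{name:30s} on frontier: {on_frontier(name)}")

With those (again, invented) numbers, the sticky-note and browser-saved schemes come out dominated, while everything else is a legitimate trade-off point; which point you pick is the convenience question.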
How exactly is giving a third party access to your plaintext passwords "on the pareto frontier"? There are numerous options which do not involve doing so, some of which have better convenience than pass, so I'm genuinely curious how distributing your plaintext passwords more widely could possibly be at the frontier of security...
Which options are more convenient? I'm always looking for something better.
What do you mean about distributing passwords in plaintext? Can you be more specific for me?
It's not the pareto frontier of security, it's the pareto frontier of security versus convenience. It's not exactly a "thing", it's just a mental model I use for thinking about security trade-offs.
It's really not that bad. There are extensions but I decided that auditing their code was harder than just using one of the terminals I already have open.
Autocompletion is a security nightmare, and 2FA really doesn't help, you're blindly trusting LastPass.
Too late, I decided to take the opportunity to renew my passwords and store them offline. I'm giving Enpass a try before reverting to the well-known KeePassX.
LastPass is a tire fire. How many exploitable vulnerabilities in a password manager do people need to drop as 0day to kill this company? Why is it that we continue to tolerate products that routinely violate the sales claims they make? Start actively telling your friends to stop using LastPass and switch to a better password manager.
Except on Android, where it requires you to use a poor custom keyboard. Which means that you have to either build your own keyboard switching functionality or go through a ridiculous dance every time you want to use it.
Indeed. It's hard enough convincing family members to use a password manager in the first place. Getting them to shell out $60/year to do so on top of that is a non-starter. Unfortunately, it falls in the same sort of category as paying for backup solutions in my experience. The cost is high enough that they're willing to roll the dice.
That's not so say 1Password's pricing model is wrong -- they need to run a business after all. But LastPass makes for an easier sales pitch to family members, particularly those that need to share credentials and thus would fall into a family plan.