Good to know, thank you. What, in your opinion, is the best way for someone to submit a security vulnerability (the sort that could land them in legal trouble: they don't have permission to be penetration-testing, it could be viewed as malicious, etc.), or is it simply not worth the risk?
You need to clarify. Are you talking about submitting a vulnerability in someone else's website, or in a product you installed on your own computer?
In the latter case, it's pretty straightforward. Your legal liabilities in that situation are (so long as you don't demand money) civil: you may have violated a click-wrap license, which will probably prove toothless against security research. You can tell the vendor directly if you like (from experience, this isn't fun; you'll probably spend a couple of hours in tier 1-2-3 tech support hell). If it's an important target, you can also talk to bug bounty programs.
In the former case, it's not straightforward. Start by scouring the target's website to see if they have a disclosure program, which will probably amount to permission to look for flaws in their site. If they don't, it's very possible that you've broken the law in whatever process you used to find the vulnerability. Submit anonymously and carefully. If they prove themselves to be cool, you can always take credit later.
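One concrete place to look while scouring: many sites publish their disclosure policy as a security.txt file (RFC 9116) under /.well-known/. A minimal sketch for checking that is below; the hostname is a placeholder, and neither the presence nor the absence of the file tells you anything beyond what the policy itself actually says:

    # Minimal sketch: look for a published disclosure policy (security.txt, RFC 9116).
    # "example.com" is a placeholder; a missing file doesn't mean testing is allowed,
    # and a present file doesn't grant permission beyond whatever it actually says.
    import urllib.request
    import urllib.error


    def fetch_security_txt(host):
        """Return the site's security.txt contents, or None if it isn't published."""
        for path in ("/.well-known/security.txt", "/security.txt"):
            url = f"https://{host}{path}"
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    if resp.status == 200:
                        return resp.read().decode("utf-8", errors="replace")
            except OSError:  # covers URLError, HTTPError (404 etc.), and timeouts
                continue
        return None


    if __name__ == "__main__":
        policy = fetch_security_txt("example.com")
        print(policy or "No security.txt found; check for a bug bounty or disclosure page instead.")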
You would think they would focus more on vulnerabilities in websites and the underlying technology, since Google is an internet company (though they have morphed into a wider spectrum). Most recent public security outcries have related to internet services, so I assumed that would be their focus.
Part of me hopes they tread in this gray area so that we are forced to address issues with the CFAA and how it's presently enforced.
It doesn't matter if you are sitting on a bean bag chair at a Google lab: you have no defense in criminal court if you are not authorized to test for a vuln in someone's system. Disclosing a flaw is sufficient grounds for a felony.
This isn't a "grey area". It's illegal to test web applications run by other people for security vulnerabilities. The examples you've seen of above-board security research targeting web apps fall generally into these buckets:
(a) Web apps run by other companies but which are available for download to run on one's own machines
(b) Web apps run by other companies that have published bug bounties or other forms of permission for testing
(c) Web apps tested carefully and, at first, usually anonymously (or, if not, then by researchers working from jurisdictions where CFAA is hard to enforce)
Right, they mentioned researching bugs like Heartbleed. Discovering and disclosing bugs in underlying open source technology hasn't been the target of CFAA prosecution in the past, so I follow you there.
If a prosecuting attorney wanted to, couldn't they charge the researchers that discovered the bug? CFAA doesn't specify intent to do harm...
If someone used the disclosure of the bug to do harm, you would have assisted in unauthorized access to information.
I don't understand what you're asking. The Heartbleed research they did probably didn't have any CFAA implications for them.
The distinction isn't between "open source" and "closed source". It's between "software running on machines you own" and "software running on other people's machines".
Hundreds of thousands of vulnerabilities have been discovered in the past decade and a half. None of those researchers have been prosecuted for disclosing the vulnerabilities. If you disclose a bug and someone unrelated to you breaks the law with it, CFAA does not say you're liable.
Haha, not to drag this conversation out, but here's a hypothetical scenario:
I download and install Drupal on my web server. I find an SQL injection vulnerability in the login form. I post the vulnerability on a public forum, where someone proceeds on their own to deface a government website using that knowledge. You don't think they would charge you with assisting?
No, they would not. The equivalent of this scenario happens all the time. In the one case I'm aware of where the developer of exploit code was found criminally liable for its use, that developer had a direct relationship with the person who actually did the exploiting (for commercial gain).
> I'm aware of [one case] where the developer of exploit code was found criminally liable
You're probably referring to Stephen Watt (a.k.a. "Unix Terrorist"), who wrote the POS sniffer. His friend, Albert Gonzalez, then used it to steal 170 million credit cards from TJ Maxx and other vendors back in 2006-2007.
The law doesn't require a relationship or commercial gain. I can't share my story (and I'm sure there are others out there), but I can assure you: if you mess with the wrong people, the CFAA has no bounds.
Look at it this way: what if Drupal's own security committee posted a description of a SQL injection vulnerability (as they often do when issuing patches), and another person takes that info and uses it to attack an unpatched Drupal government site? Would the security committee members be charged?
I agree, this was probably a bad example because involvement seems a bit of a stretch (although posting on a known black hat forum might change some variables).
What if you had only told your friend of the particular vulnerability and he used that information to write an exploit?
Writing an exploit is also not illegal. Using the exploit is a problem.
If you tell your friend about a vulnerability, and he writes an exploit and then uses it to break into a retail chain and steal credit card numbers, then you have a problem. Not because finding the vulnerability is illegal, but because there is now a chain of evidence that might link you to something that is unambiguously illegal.
You won't be held liable for unwittingly enabling the crime; you'll be accused of sharing the vulnerability with the express purpose of enabling the crime (actually, maybe, technically, with the purpose of enabling any crime; conspiracy laws get weird). That intent will be something that needs to be proved in court.
That liability is still a stretch, doubly so if, the moment you find out that your friend is doing something crazy, you inform the authorities.
> This isn't a "grey area". It's illegal to test web applications run by other people for security vulnerabilities.
Scanning for Heartbleed is a good example of why it may well be a grey area: through a normal, authorized connection, it becomes apparent whether the implementation is vulnerable.
Or are you referring specifically to sending a malformed heartbeat in the context of an authorized connection?
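To make the distinction concrete, the passive end of that spectrum is just asking whether the server advertises the TLS heartbeat extension at all; no heartbeat, malformed or otherwise, ever gets sent. Here's a rough sketch of that check. It shells out to openssl s_client with -tlsextdebug, the hostname is a placeholder, the local openssl build has to offer the extension itself for the server to echo it, and the dump format varies across OpenSSL versions, so treat it as an illustration rather than a real vulnerability test:

    # Sketch: does a server advertise the TLS heartbeat extension? This never sends
    # a heartbeat (malformed or otherwise); it only reads the extension list that
    # s_client dumps with -tlsextdebug. Caveats: "example.com" is a placeholder,
    # the local openssl build must itself offer the heartbeat extension for the
    # server to echo it, and the dump format differs across OpenSSL versions.
    import subprocess


    def advertises_heartbeat(host, port=443):
        result = subprocess.run(
            ["openssl", "s_client", "-connect", f"{host}:{port}", "-tlsextdebug"],
            input=b"",              # close stdin so s_client exits after the handshake
            capture_output=True,
            timeout=15,
        )
        # Typical -tlsextdebug line: TLS server extension "heartbeat" (id=15), len=1
        return b'"heartbeat"' in result.stdout + result.stderr


    if __name__ == "__main__":
        host = "example.com"
        if advertises_heartbeat(host):
            print(f"{host} advertises heartbeat support (which is not the same as being vulnerable).")
        else:
            print(f"{host} does not appear to advertise the heartbeat extension.")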
Sorry, I should have been clearer: I meant the former case of finding vulnerabilities in a website/API/web service/etc. Your answer was comprehensive in either direction, though, and thank you.