Hacker News
Ethical considerations of access to the HackerOne community (hackerone.com)
200 points by waffle_ss on May 4, 2017 | hide | past | favorite | 62 comments



Good to see that they are taking a clearly communicated, carefully considered stance on a messy ethical issue. I don't really have a strong opinion on this case, but I think it's refreshing that HackerOne is dealing with a case with no clear best answer in a principled way.


Selecting your customers is always tricky.

On one hand, you have your right to run your business as you see fit and to respect your principles.

On the other hand you have discrimination of all kinds.

Think about the recent cases of a small baker with strong religious views refusing to create cakes for gay couples.

Think about CloudFlare protecting ISIS sites.


To me, it seems to come down to:

1. there are evil people, but

2. those people frequently have more social power than nice people, and

3. the evil people will use their social power to paint nice people as evil (i.e. "bullying.")

If you're defining the laws for a community or society, or the Terms of Use for a piece of infrastructure for such a community/society to use—then it behooves you to consider that any "hammers" built into your system will mostly be used by those with power against those without it, regardless of which side is "correct."

So: If you let people speak freely, the powerful will shout down the powerless. But if you let people silence others, then the powerful will silence the powerless.

Morally, it really comes down to a choice of which kind of hammer hurts wronged innocent powerless people the least. (Which can often mean offering no hammer that can truly be used to "deal with" obviously-evil people.)


I understand what you're writing, and what HackerOne wrote, but to me it pretty much seems like "we won't run a security service for spyware companies".

At Matasano, we wouldn't do work for the USG or arms manufacturers. We didn't have a coherent framework to fit that decision into. We just wouldn't do it. I worry that we may be overthinking things here.


I see a parallel to the harassment/CoC discussion:

Because that's mostly my stance on the question "should a project have a code of conduct?": it's overspecifying things, out of a belief that the more we spell things out, the better we can control them.

I'd prefer a blanket "we're against harassment of any kind and will act if harassment comes to our attention", over many paragraphs trying to spell out what exactly we consider actionable, leaving loopholes for language lawyers (and we nerds have this tendency!) all over the place.

I know that you're strongly in favor of CoCs. Do you see a qualitative difference between those things?

I'll grant that harassment is a much more explosive issue today, but "who is allowed to participate in a bug bounty community" also seems to have potential for bitter quarrels.


It's good to have an explicit statement that the spirit of the law is important, and that if you're trying to language-lawyer your way around a code of conduct, you're missing the point.

But that doesn't mean there's no value in having clearly laid-out principles, and in particular, clear descriptions of proscribed behavior and protected groups. Because in the absence of that, the same kind of people who would language-lawyer in the presence of a code of conduct will try to slickly excuse their behavior as acceptable in the absence of one.

Some of the most insidious people around will be superficially nice to someone's face (some of the time), while taking the time in a policy process to calmly and politely inquire if it would be reasonable to treat people like them as subhuman, with ever so much justification and honeyed words. Head it off in advance, set a line for what you expect, and don't assume that "be nice to each other" will make everyone feel safe and welcome.


> Some of the most insidious people around will be superficially nice to someone's face (some of the time), while taking the time in a policy process to calmly and politely inquire if it would be reasonable to treat people like them as subhuman, with ever so much justification and honeyed words.

This much is true. Do codes of conduct help or hinder such efforts, on the whole?

When two people are in conflict (which is really the only case that matters), popular codes of conduct seem to favour the person who can best convince others that the conflicting behaviour on their part is part of their identity, while that on the other person's part is just them choosing to be mean. I'm not at all convinced that this consistently favours the person we should be favouring.


As soon as you try to give your process the air of "due process" you have lost. You must never get into a debate about it (internally, sure, externally never).

You're not a court of law, you're an organization, a club, whatever. This club has officers or a president.

Put your foot down and make a dictatorial decision that is only announced, not discussed.

You're not recognizing their "right to argue". They cannot "lawyer" if there is no venue open to them. Ignore their complaints on Facebook or whatever.


> As soon as you try to give your process the air of "due process" you have lost.

It's worth remembering that people (rightly, I think) get far angrier about breakdowns of due process than the absence of it. Some of the people upset about the Drupal mess recently didn't seem to care much about the specifics of what happened, but were incredibly disturbed that the official process appeared to have been circumvented. At that point it would have been far better to say up front "we make decisions at our discretion, in the manner we choose".


I see these kinds of arguments as being something that any message board nerd of sufficient experience can generate in any situation; it's just catnip for flame wars. If you try to generate an all-encompassing framework for the ethics of your project, bored nerds will chase you all the way back to Gödel with nitpicking inconsistencies.

To me --- and, let's be honest, to pretty much the overwhelming majority of all other people --- it's enough to say "we've decided we're not going to work with spyware companies", just as for the most part it's fine to say "we're not going to work with people who publicly lobby for racism or misogyny"† without connecting the dots on that statement all the way back through Rawls and John Stuart Mill.

I think at present it's a courtesy to announce those kinds of beliefs, preferably in the simplest way possible, just so nobody is shocked when you exercise your prerogative to enforce them. I obviously don't think the resulting "code of conduct" statements need to be formally impregnable.

You could make the opposite statement as well, as I think at least one project has sort of done implicitly. I don't begrudge anyone that right; why would I bother? If you want to pour kerosene over your professional reputation and light a match, who am I to stop you?


> I'd prefer a blanket "we're against harassment of any kind and will act if harassment comes to our attention", over many paragraphs trying to spell out what exactly we consider actionable, leaving loopholes for language lawyers (and we nerds have this tendency!) all over the place.

Thank you, you succinctly worded my main issue with CoCs and why I much prefer "Community rules: Don't be an asshole" over "Community rules: [3 pages]".

I have yet to see a single situation faithfully resolved with the latter that couldn't have been with the former.


> We just wouldn't do it. I worry that we may be overthinking things here.

I agree that going with your gut feeling is very likely good enough, but I don't agree with the term "overthinking". I think codifying the decision a bit is beneficial, as any of us is likely to run into a case that falls into a gray area, and having more fleshed-out decision lines will be helpful in that case.

I think it's the same in ethics vs morals. The ethics you inherited from society will be good enough for the vast majority of decisions, but sometimes it's good to logically deduce morality for yourself. You might realize you used to believe things that didn't stand up to scrutiny.


The only question I have in this particular case is: what about the victims of the original app? Are they possibly subject to re-victimization if the app on their device is compromised further by third parties?

Though I completely agree with HackerOne's moral stance here, does this particular scenario complicate things?

EDIT: I do see they took this into consideration in their writeup... still curious.


If other people are using your brand as a means to sell then you definitely need to take a stance on what you stand for.


I wasn't so much suggesting a decision-making framework, as a model that can be used to understand decisions people make intuitively. "Overthinking" is the whole point. :)


Discussion in response to this comment elsewhere.

https://www.reddit.com/r/rational/comments/69f7nw/d_friday_o...


Really neatly said. Sometimes it's worth noting the value of the analytical thinking that the HN user base is able to produce. This (in my humble opinion) produces more valuable insights than what I'd find on cable news, or even sometimes in respectable publications.


"Think about CloudFlare protecting ISIS sites."

I'm pretty sure the Feds are happy to have traffic to ISIS sites be routed (unencrypted) through an American company's service...


> On balance, if someone is infected with spyware they're probably better off infected with secure spyware.

That's a pretty amazing sentence. It illustrates just how messy this whole situation is.


I looked through the task manager of a corporate-issued laptop and saw tasks belonging to very similar companies, installed as part of the disk image IT uses.

The corporation likely has a license for the software, as well as conditions for all their employees to expect monitoring.

A formalized bug bounty program would enable the software producer to have secure software.

Why exactly is HackerOne drawing a distinction with this software producer? I read the whole article and still miss what the controversy with this producer is.

Is all monitoring software now banned from HackerOne under the guise of a moral high ground HackerOne just created?


>Why exactly is HackerOne drawing a distinction with this software producer? I read the whole article and still miss what the controversy with this producer is.

FlexiSpy specifically marketed itself as a tool for spying on your spouse. Their front page used to include "read your partner's sms" https://web.archive.org/web/20060402200643/http://flexispy.c...


So the issue is about how oblique they make their marketing message? If they rebrand and use lots of innuendo, then it's OK?


You're being obtuse if you think this use-case wasn't a huge part of their thinking while designing the software and offering support to customers.

>I asked a FlexiSpy salesperson a simple question: If I wanted to, could I use their spyware to snoop on my wife's cellphone without her knowing? The answer each time was yes. When asked if it was legal, they responded with a canned disclaimer explaining it was necessary to get the permission of the target. But what if I didn't want my wife to know? They could help me anyway.

https://www.forbes.com/sites/thomasbrewster/2017/02/22/flexi...


I have no doubt it is what they're selling. But the question is: if the tool does the same thing, should the marketing message really be relevant? If a FlexiSpy clone comes along that is less direct about spouse-spying and plays a stronger line of "you need authorization", then it's suddenly OK?


>the marketing message shouldn't really be relevant

my point was that it isn't just marketing, this use-case was a huge part of their thinking while designing the software and offering support to customers. [repeating my comment almost verbatim]

If a clone came along that wasn't designed so that it can be totally hidden while installed, then privacy advocates would be less angry.


Many tools have legitimate and illegitimate uses. There is rarely a clear sharp line; we have to make a judgement as to whether a given tool does more harm than good, on the whole, and there are no shortcuts. IMO it's entirely right to weigh the marketing message in that balance; after all, the marketing will by design affect what kind of people buy the tool with what kind of intentions.


In some respects, yes, it's about the messaging. You'll attract certain customers, and they might be willing to press for certain things that might not be legal, depending on the message.

There's a difference between a gun company that markets their products as protection and sports, and one that markets them for revenge. In both cases, the usage of the gun is up to me, but in one case it looks like the company might be willing to assist me, or point me towards helpful information that assists me, in circumventing safeguards put in place on the purchase of weapons.


That's helpful feedback on missing context from our post. Thanks.

This series by VICE articulates the sometimes subtle distinctions between legitimate monitoring software built for enterprises and parents vs this particular software (which they deem "stalkerware").

https://motherboard.vice.com/en_us/article/inside-stalkerwar...


There are a lot of dubious companies on HackerOne. Why did taking a stance on this one have a perceived more positive outcome than taking any stance at all?

Pretty much zero of the companies on HackerOne are part of any social responsibility index, shariah-compliant index, or trendy B-corporation index. And even for the exceptions, entire dissertations could be written weighing the ethical considerations of doing any business with them.

So why even make a stance at all?

The time it takes for the arbitrary nature of your ethical decisions to become apparent is simply longer than it will take for your runway to deplete.


shariah compliant index?

Did you include that just to question the objective nature of morality?


B corporations and socially responsible investing are shariah-compliant investing rebranded for an islamophobic audience.

Standard & Poor's operates shariah-compliant funds right over in Toronto, and they are also very popular in many markets.

Maybe it doesn't mean what you think it means; maybe you'll learn.


> B corporations and socially responsible investing are shariah compliant investing rebranded for an islamaphobic audience.

Shariah compliant investing follows specifically and explicitly from a religious moral basis.

SRI seems much more concerned with issues of social justice and human welfare, in ways not always in line with shariah principles.

If you say they are similar, the burden is on you to demonstrate.


There is significant overlap in the concepts, such as avoiding companies involved in firearms, companies profiting from conflict zones, sometimes tobacco, and other vices.

That's the demonstration. Shouldn't bother you that much.


Why exactly is HackerOne drawing a distinction with this software producer?

The truth is: because a H1 rep went on Risky Business and did not deliver a very good performance.

Patrick, who is absolutely okay with H1 having Five Eyes clients like the US DoD, has a very serious problem with them also servicing an obscure spyware application provider. Because, I suppose, being murder-droned by a panopticon hegemony is much better than getting yelled at by an angry spouse?


The purpose of the DoD is not to spy on people, it is to protect people. That some actions by some programs and departments may cross the line legally during certain periods is not the same as an entity whose sole offering, or the majority of whose goods or services, are for, or are marketed as being for, an illegal action.


"The purpose of the DoD is not to spy on people, it is to protect people. "

To protect the elites, as the other commenter said. Start with the claims of a man who led a bunch of wars after getting the Medal of Honor:

https://www.ratical.org/ratville/CAH/warisaracket.html

https://en.wikipedia.org/wiki/Smedley_Butler

The same patterns kept happening over and over from there. The politicians got to play politics. The big war contractors made billions; their CEOs, millions. After the drafts, the eventual "volunteer" army of mostly poor or working-class kept dying for their BS about protecting "freedom" and "democracy." Look up Operation Ajax and compare to how that event is treated today to know plenty about what the DoD does in the world. ;)


In the first case, you have an entity that has a proven record of breaking the law (on purpose) using technology. I can also argue that the purpose of DoD now is to protect the elites, from the people, but that's another story.

In the second, the legal line is not crossed. It may be crossed at some point by an adult person that can bear responsibility for his actions.

I would not work with either; I can understand how one can avoid being a hypocrite by choosing to work with the latter and not the former, but not the other way around.

Is it the right moral choice to protect the privacy of a cheater? Maybe, I don't know, I'm struggling to answer that to myself, let alone judge others.


> Is it the right moral choice to protect the privacy of a cheater?

Is this spyware used to find out if someone is cheating? If so, it means you'd install it, and violate their privacy, without knowing if they are a cheater, so the point is moot.


I was not referring to the app, but in general: to discover a cheater you'll most likely violate their privacy.

My point is that privacy in a relationship is a relationship thing, and the moral choice for me would be to not interfere in other people's relationships. This includes not judging you if you use spyware on your wife.


> to discover a cheater you'll most likely violate their privacy

I'm not sure this is true. There are often clear boundaries, like secretly observing them in public versus accessing their private phones.

> privacy in a relationship is a relationship thing

but it's also a privacy thing. Is domestic abuse a relationship thing? That would also interfere with a relationship.


What's so refreshing about their post is that they admit to not having an unassailable "moral high ground". They highlight the strongest arguments, including those arguing against their decision.

They do this because they recognise that decisions often have competing trade-offs, nuances, ambiguity. There is, unfortunately, almost no recognition of this fact in public these days. You're expected to pick a side, defend it, and attack others, using whatever rhetorical tool is available.

Among those tools of destructive debate: reducing any ambiguity in your favour, i.e., "They are banning Z, and I don't know Z, so I'll argue that Z is like A and banning A would be wrong". Or, equally bad, the slippery slope: "Z may be bad, but you can't give me an algorithm that unambiguously distinguishes Z and Y, nor Y and X, or, by transitivity, Z and A. Therefore, you can't ban Z without also eventually banning A, and that would be bad".


There are plenty of corporate applications; the majority of them are obvious and the user knows what's happening (in my country at least [UK] any employee would need to be specifically told and likely have to sign something before it could be used). Monitoring isn't the issue.

The problem is spying. Enabling people to more reliably spy on others is a problem.


Making the decision seems a lot easier to me when the monitoring software explicitly markets itself to and supports domestic abusers.


While I applaud this move, I suspect H1 will continue servicing government and law enforcement clients of all kinds.

A consistently applied policy would see ties with ALL surveillance entities severed.


This is going to earn me huge downvotes, but not all surveillance is equally illegal or equally unethical.

To me it seems that groups that run spy satellites and look out for nuclear missile launches are in a different ethical category than people who make software for perpetuating domestic abuse.

Clearly, I picked two extremes. That was just to show that not all surveillance is equally bad and that some can be better than others. I will leave which other kinds of surveillance are just or unjust for another discussion.


> people who make software for perpetuating domestic abuse

That's a bit like saying the authors of Wordpress perpetuate fake news.

I've used similar products to monitor usage on teenagers' devices and I can attest to their usefulness far beyond "perpetuating domestic abuse".


The makers of wordpress don't say "great for fake news", but the makers of this software say "great for watching your partner".

They advertise reading your wife's SMS messages as a feature!

That, and Wordpress would only be so-so for making a fake-news page; I mean, it could work, but you would be at a competitive disadvantage.


They literally make tools to spy on people without their knowledge. How is there any situation where that is ok outside of law enforcement? Parents monitoring their kids don't need to be secret about it.

(And to be honest even that sounds super creepy. I thank god my parents didn't know my complete internet history and track my every movement 24/7. I can only imagine the helicopter parenting horrors that modern technology is enabling.)

Hell this software goes well beyond that. It records every text, every phone call, every keypress, hijacks the webcam and microphone to secretly record them, etc.

It's extremely common for domestic abusers to use this. I'm having trouble finding it, but I recall an article on HN about domestic abuse shelters requiring victims to turn off their phones because it's become such a problem.


Patrick Gray interviewed the CTO of HackerOne in the latest Risky Business security podcast [1] on this topic. Patrick is a friend of Alex, but that doesn't stop a hearty debate. Highly recommended podcast for those interested in infosec in general.

[1] https://risky.biz/


That font-weight/color is nearly unreadable on my Windows 10 machine.


> Companies should defer judgement to the courts rather than make arbitrary moral judgements.

Uh, no. Please no. I do not want the courts to arbitrate morality. That's a far far far more dystopian world than one where corporations do (supposing I accept their false dilemma). Companies can, in theory, be created by any person, with any moral alignment. That is not the case with governments (minus authoritarian ones, which function in the context of moral-defining as more or less the same as a company).

Additionally, deferring to the courts also leads to the ever terrible "this is moral because it is legal" and "this is immoral because it is illegal."

There is not a correct authority on morality to which you can defer. You cannot offload such decisions and wash your hands. Any moral decision you make, including deferring to some other moral-decider, is entirely your responsibility.

Note: I'm doing the naughty thing of morals=ethics. I know this is pedantically not the case, but I'm 99% sure that is what the article means. And, in general, this is also what everyone means outside of targeted discussions.

> ...if someone is infected with spyware they're probably better off infected with secure spyware.

I think this is a great ethical issue within the security community. There are many arguments against working for a company you have ethical disagreements with, but it becomes much more grey when it comes to security. Sure, I might not agree with the government's mass surveillance, but wouldn't I rather help the NSA not leave piles of malware sitting around on C&C servers than let it be exposed to even more malicious actors?

Security could use a Hippocratic oath.

> FlexiSPY has not published a vulnerability disclosure policy or committed to no legal action against hackers. Both protective steps would be required should their program be hosted on HackerOne.

I'm surprised HackerOne doesn't have a policy surrounding this already. Are hackers who submit issues to HackerOne not protected?

> We will not take action against them based exclusively on moral judgements.

Hooray, kinda. I think this is a maxim that HackerOne could extend to not making moral judgements relevant at all, and to instead institute policies that reflect HackerOne's current morals. This increases transparency and allows HackerOne to say "We reject you because your company's goals/actions/whatever explicitly contradict our policy that everyone wear unicorn hats on Tuesdays".

> Their business conduct is not in line with our ambition to build a safe and sound internet where the sovereignty and safety of each participant is respected.

I think now would be a good time for HackerOne to write this stuff down. A very brief look at their site and the only thing I can see relating to this is the tagline "Make the internet safer together." From which sovereignty implications can be drawn, but having such policies explicitly stated and publicly available not only allows for transparency in decisions, but also works as an advertisement, "Oh hey, this company wants to protect my digital sovereignty, neat!"


> I'm surprised HackerOne doesn't have a policy surrounding this already.

I think you misread that part. They require this, so FlexiSPY joining their program would mean the situation would improve.


Thanks tetrep. I agree with your statement "would be a good time for HackerOne to write this stuff down".

We just discussed it this morning internally. If you have suggestions on how to formulate such a policy, please email me at marten@hackerone.com.

Thinking out loud, HackerOne stands for and supports the security and integrity of every piece of software code, for transparency and openness, for the sovereignty of each human being connected online, and for fair and equitable principles for all online activity. And probably some other aspects that I didn't think of this exact second.

If anyone has thoughts on this, we are all ears.

Marten


Don't get suckered into trying to write a 'clear set of guidelines' or a 'comprehensive community policy' or whatever they want to call it. 10 times out of 10, the people asking for such things are either looking to pin you on your own texts through language lawyering or are incapable of independent thought - not the sort of people you want to deal with anyway. The whole faux 'justice' (of this sort) rhetoric is just that - the upholding of an illusion of 'fairness', where that 'fairness' is a juvenile understanding of 'equal treatment no matter what', just like those who think that majority decisions are always right because they're 'democratic'.

The correct response is that of the SCOTUS when people tried this trick on it by asking 'what is porn'. There, and here, the correct answer is: "I can't define it, but I know it when I see it." This of course is a deeply unsatisfying answer to people who can't (or won't) think for themselves, and doubly so for the aspi types that inhabit the interwebs in disproportionate numbers.


Coming soon, a public market in bugs?

How soon can I buy futures in Windows vulnerabilities?


The bugs you're talking about are already worth 5-6 figures. Their prices are so volatile and their outlook is complicated enough that no sane person would enter into a forward contract on one.


[flagged]


Personal jibes like this are not welcome on HN.


This sounds like a very interesting theme for a novel (or novels).

Could even make it kinda like a subgenre of cyberpunk or like a modern version of it at least.

I mean, I can imagine some interesting plots regarding people trading bugs in an underground market, where the product is something that can be more dangerous than a gun, but it fits in a USB drive.

Maybe they can even be based on real stories, just changed a bit to protect identities and such. However I do think you would need to be fairly involved in this world to be able to write something interesting and based on something real.

Hopefully someone involved in the security world gets bitten by the writer's bug and writes something some day. I know Brian Krebs has written interesting articles but a book would be very cool.


Nah, the bugs are already traded underground. The nature of the situation means the bets might be done underground, too.


To settle those bets, you have to have a reliable spot price. The only way to know how much a Windows RCE is worth is to actually sell it.


Good point.


Sounds like a job for Markit Group (https://en.wikipedia.org/wiki/Markit)...



