Hacker News

All well and good, but are these kinds of 'middleboxes' unequivocally unethical? For example, some ISPs might want to block highly illegal content - let's use the typical examples, e.g. child porn sites, malware domains, and so on. It's not inherently unethical (or, at least, there are plenty of reasonable people who would say it is ethical) to install a middlebox that makes it more difficult for users to access such sites.

So now your company has a content blocker installed. What exactly are your network engineers meant to do? Refuse to implement any additions to the list of sites these boxes block? That seems highly unlikely to happen, and how would it even work in practice? Are all the engineers meant to vote on blocks, with only unopposed sites added to the list?

Can ethical network engineers usefully oppose content blocking?




> let's use the typical examples, e.g. child porn sites, malware domains

A futile game of whack-a-mole that only serves to make politicians feel good and lets them claim they're "doing something" about social threats.

malware domains can be adequately addressed at the application level through things such as: https://www.google.com/search?channel=fs&client=ubuntu&q=goo...


Whack-a-mole can be a highly successful strategy if there is a cost to having the mole appear somewhere else and the whacker has more resources than the one controlling the mole.


Successful does not imply sustainable. There is an argument to be made for doing manual strategies like whack-a-mole until you have a generic solution, but if you don't have the generic solution coming down the pike, it's time to go back to the drawing board.


Devil's advocate response: If these content blockers are just 'futile games of whack-a-mole', then why are you getting up in arms about their existence? They should be easy to avoid if you truly believe what you say.


Stuff like what the UK is trying to do with a DNS-based "blacklist" of bad things on the internet? A futile game of whack-a-mole.

Authoritarian regime that forces all ISPs in a country to run networks funnelling all traffic through a government run central point where they do DPI and flow analysis on it (Chinese GFW for instance)? More of a real threat.

For instance there is one ASN in Iran that has transit connections to the outside world. All ISPs are forced to be downstream of it. https://bgp.he.net/AS12880


Well, yes, the original article is about funneling traffic through a centralised DPI content blocker, and not a trivial DNS blacklist.

Agreed, the UK's DNS filtering is definitely whack-a-mole that anybody can defeat (e.g. thepiratebay.org is blocked? Oh no! Just google for Pirate Bay and pick one of the many, many unblocked mirrors).

But the kind of DPI-based forced blocking these middleboxes perform is certainly a step above that, to the point that most people will not be using measures like a VPN to bypass the block.


Because they create tools that governments will use to restrict legitimate speech and freedoms, as history has shown they do.


Create and normalize. "What's one more site to denylist?"


...So, again, they can't be 'futile games of whack-a-mole', if they work, then?


I think it's more like taking the whack-a-mole bat and going and beating up people all around the arcade.


There are plenty of measures that would cause the closure of your small business if used against you personally, yet would not stop a single drug dealer, let alone all drug dealing.


Because whacking even 5% of the moles looking for ways around government[0] censorship is an unconscionable travesty.

0: e.g., China, Iran, etc.; if you don't think people getting caught is a problem, then I question your basic human decency; if you don't think 5% of people getting caught is a problem, then I question your sense of scale and/or ability to multiply numbers by other numbers.


> some ISPs might want to block highly illegal content

There is no way - not even a theoretical way - to allow blocking of illegal content (for any definition of illegal) that won't allow for blocking of any other arbitrary content. Censorship is binary. You can accept either none of it, or all of it.


At some level, everywhere has some form of censorship. For any country you could name, there is, or could easily be, content in some kind of media - books, audio, video, games, whatever - so abhorrent that it would either not be published, or would be shut down as soon as possible.

So if censorship is binary, as you say, it's already here, and has been forever. But I would guess that few people believe censorship is truly binary like that.


Censorship is fine if it's opt-in. I don't use Facebook, and censor myself from it. I opt in to using a pihole and adblocker. It filters many things I otherwise would see.

You often can't choose your ISP, which makes it extra important for censorship of any kind to be opt-in rather than forced.
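The opt-in filtering described above can be sketched in a few lines. This is a hypothetical toy, not Pi-hole's actual code: a resolver that consults a user-chosen denylist (the domain names here are invented) and sinkholes matches with 0.0.0.0, forwarding everything else upstream.

```python
# Minimal sketch of Pi-hole-style opt-in filtering: the user's own
# resolver sinkholes domains on a locally chosen denylist and forwards
# everything else to an upstream resolver. Denylist entries and the
# upstream address below are hypothetical examples.
DENYLIST = {"ads.example.com", "tracker.example.net"}

def resolve(domain, upstream):
    """Return a sinkhole address for denylisted names, else ask upstream."""
    # Normalize: DNS names are case-insensitive and may carry a trailing dot.
    if domain.lower().rstrip(".") in DENYLIST:
        return "0.0.0.0"  # sinkhole: the user chose to block this
    return upstream(domain)

# The user opted in, so the block applies only to their own queries:
print(resolve("ads.example.com", lambda d: "93.184.216.34"))   # sinkholed
print(resolve("news.example.org", lambda d: "93.184.216.34"))  # passed through
```

The key property is that the denylist lives on the user's own box: removing an entry, or the pihole itself, is entirely the user's choice, which is exactly what forced ISP-level blocking takes away.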


My kids will not opt-in to censoring "Thomas the Train" videos when they should be doing their school work.

I think I just got everyone to take a step down the slippery slope.



> I think I just got everyone to take a step down the slippery slope.

Hi! Counterexample here!


> There is no way - not even a theoretical way - to allow blocking of illegal content (for any definition of illegal) that won't allow for blocking of any other arbitrary content.

Well, there is: take the person or body that ultimately determines whether content is illegal, and have them review each request and proposed response and decide whether to allow the content through to the requester.

For slightly better scalability, have that body review all content outside of any request-response cycle before it can be published and sign any approved content, then block any content they haven’t signed.

Somewhat more generally, as long as the specific blocking methodology is itself part of the definition of what content is legal, any blocking method can meet the standard of “allows blocking illegal content without allowing blocking of other content”, since any content blocked by the method is, ipso facto, illegal.
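The sign-before-publish scheme described above can be sketched as follows. This is a hypothetical illustration: a real deployment would use public-key signatures, but here an HMAC with a key held by the reviewing body stands in for the signature, and all names are invented.

```python
import hashlib
import hmac

# Stand-in for the reviewing body's signing key (a real system would use
# an asymmetric key pair so middleboxes never hold signing capability).
REVIEW_BOARD_KEY = b"review-board-secret"

def approve(content: bytes) -> bytes:
    """The reviewing body signs content it has reviewed and approved."""
    return hmac.new(REVIEW_BOARD_KEY, content, hashlib.sha256).digest()

def middlebox_allows(content: bytes, signature: bytes) -> bool:
    """The blocker forwards only content carrying a valid signature."""
    expected = hmac.new(REVIEW_BOARD_KEY, content, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

page = b"<html>harmless cat pictures</html>"
sig = approve(page)
print(middlebox_allows(page, sig))          # approved content passes
print(middlebox_allows(b"unreviewed", sig)) # anything unsigned is blocked
```

Note how this makes the grandparent's point concrete: the scheme "works" only because everything not explicitly approved is blocked by default, which is exactly the property that makes it unacceptable as general-purpose internet policy.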


How do you propose to determine that a new, never-seen-before URL hosts child porn? In the US, viewing child porn is a strict-liability offense, meaning you are guilty of a serious felony just by looking at a page with the image(s) on it.

There are also civil and criminal liability concerns at the corporate level by assuming the responsibility for constructing and/or maintaining these filters.


ISPs should help LEAs find illegal content all day long, but developing this sort of blocking technology is not the same thing, and neither is using it. There are legitimate uses for it, though - in employers' networks, for example.



