The Democratization of Censorship (krebsonsecurity.com)
182 points by rfreytag on Sept 25, 2016 | 58 comments



Are DDoS attacks really a problem for journalists getting information out to the public, or are they just a problem for journalists who want to distribute content via their own website? Wanting to run a website is reasonable, as it also enables journalists to make money on ads, but does the inability to run your own website truly hamper free speech?

This seems more like an architecture problem to me, not with regard to the Internet, but with regard to how articles are distributed: by contacting a server that responds with the article. If I really wanted to get some information out, I would upload it to Google Drive, Dropbox, Amazon S3 and any other free/cheap hosting service I know, and spread the links on Twitter, Facebook, HN, Reddit, etc. Would it really be possible for any attacker to take down all these services, thus preventing the information from getting out? Or is this more about distributing articles via a web app?


To be protected against DDoS it sounds like you need to be hosted by Google, CloudFlare, or Akamai. Yet these companies are so influential and critical to the Internet that journalists need to feel that they are free to criticize them.

AFAIK Google Drive, Dropbox, Amazon S3 etc. will drop you in a second with a "we're not getting paid enough to deal with this" error message if you bring a DDoS down on them.


This is such an important point. And it is exactly why, when we launched Project Galileo, CloudFlare's initiative to protect politically or artistically important work online, we decided it was critical that CloudFlare wasn't the one deciding what was "politically or artistically important." Instead we rely on the input of civil society organizations like the EFF, CDT, ACLU, Access, etc. If one of the partner organizations says something meets the criteria then we have committed to protecting it, for free and no questions asked.

https://www.cloudflare.com/galileo/

We offered to protect Brian's site under Galileo for free. Had he taken us up on our offer, which remains open, I would hope he would continue to be as critical of CloudFlare as he always has been.

And I hope we've established some credibility in not abusing the position of trust we occupy. For instance, when we protected Spamhaus from a large DDoS attack years ago we specifically made it clear we'd never ask them to treat us any differently than any other organization they monitor. And we haven't. And, to this day, Spamhaus remains one of our biggest critics. And they remain a CloudFlare customer.

Here's a talk I gave last year at Blackhat about the risks of ideas being suppressed on an Internet that is increasingly controlled by a small handful of providers:

https://youtu.be/V-Pj0lrr168

It's something we worry about all the time and I'm glad more people are beginning to discuss the risks it poses.


> Instead we rely on the input of civil society organizations like the EFF, CDT, ACLU, Access, etc.

You still decided implicitly what's politically or artistically important, by hand-picking these groups as authorities. Even more so if you just rely on their input and have the final word yourselves. Not that I personally have anything against these organizations, but ceding power isn't as easy as that.


Does Project Galileo cover political expression you consider problematic, though? It's easy to support free speech with which you agree. The real test is whether you support it when it's reprehensible.


Yes. There are lots of things that I personally believe are incredibly abhorrent that use our network. I'm not proud of them, but I am proud that we don't censor them.


It seems to me that a vast majority of the for-profit DDoS attack sites (https://www.google.com/#q=ddos+booter) use Cloudflare to protect the component of their operations that takes the payments to attack people like Brian Krebs, including the one he wrote about that triggered this entire thing (http://krebsonsecurity.com/tag/cloudflare/).

Forget "state sponsored actors", people executing these attacks are just as likely teenagers with a spare $20 that can use these DDoS attack services to execute a serious DDoS attack in minutes, just for the hell of it.

The point Brian is trying to make in this article is that DDoS attackers have become the true censors of the web. By protecting their ability to profit from DDoS attacks, are you protecting their "free speech rights", or are you instead protecting the real censors of consequence here?

I know you take action on malware and phishing attacks being propagated from your service. A lot of us out here in the NOC world struggling to keep the internet running (and now starting to fail at it) would really appreciate it if you added "DDoS attack sites" to that list.


Yes, you can see Brian's critique of us here:

https://www.youtube.com/watch?v=wW5vJyI_HcU

Skip to minute 19:35. Then skip to the Q&A at minute 45:00 to hear my response.


I'm glad to hear about your concern about "the risks of ideas being suppressed on an Internet that is increasingly controlled by a small handful of providers". I share the same concerns. But I think we differ on the best way to improve this problem.

Your approach is to try to avoid taking down any content proxied behind a single consolidated service, which is controlled by a single organization and is managed under ASNs and IP addresses assigned to you and under your control.

The approach I prefer is to improve the problem by decentralizing control of autonomous systems - putting more ASNs, IP blocks and SSL terminators into the hands of independent operators, which makes it harder for governments to single out organizations for things like mass-scale wiretapping via FISA court orders. ASNs are also where legal precedent has generally accepted that autonomous service providers exist for purposes of handling legal issues, and they are the point where an operator's strong control over its "Terms of Service" generally begins (though the IP transit provider will usually set a few anti-network-abuse policies, including DDoS-related ones - see https://he.net/tos.html - recognizing that protecting speech has to be balanced with maintaining the health of the internet).

The point I want to make is that adding "DDoS-for-hire" sites to your list of protected speech directly harms the latter approach of improving decentralization and diversity of ownership, in a way no other service has ever done before. By making it so that independent groups need an enormous amount of routing equipment and bandwidth to run their own services without risk of DDoS attacks, or are forced to hide their autonomous systems behind another autonomous system (like yours), I strongly believe that your organization is not only directly contributing to the consolidation problem on the net, but that, by enabling these attackers, it may even be the leading cause of it.

I have no problem with your anti-censorship policy. I don't think anyone in here does. I would even defend your right to proxy a terrorist web site. But even IP transit providers make exceptions related to DDoS for the health of the internet itself. If it comes down to a choice between protecting DDoS-for-hire sites and protecting the internet itself, which one is the right choice?


I'd take your point if they were actually using our bandwidth to launch attacks. That's not what's happening here. Instead, they're putting marketing sites behind us. I'm not sure where the line gets drawn:

• A site that actively encourages 3rd parties to launch attacks

• A Twitter account that takes requests on sites that people want DDoSed

• A phone number that you can call and request attacks get launched

• A blog that provides instructions on how to launch attacks

• A company that sells boxes that facilitate oppressive regimes censoring the Internet

• The political sites of the oppressive regimes themselves

• A search engine that includes DDoS for hire sites in its index

I agree that if something is per se harmful then the choice is easy. What's hard is that the universe of per se harmful content is pretty small.


Thank you for the thoughtful reply, I see where you're coming from.

I personally think the line can be drawn, not perfectly, but pretty comfortably at the point where they are taking payment to execute the attack. In this context, they would either be accepting money with intent to commit a crime, or committing fraud by taking payment to not conduct that activity. Either way, it is highly obvious that a crime will be committed there.

I've seen parallels to this approach in hitman-for-hire stings. Payment for the assassination is usually the point at which criminal intent to murder is established - they don't wait for someone to actually be killed to determine whether it's "the real deal" or not.

Your follow-up will probably note that this involves law enforcement, and I understand why you don't want to go there. I run a service with 90,000 hosted sites and I run into similar issues all the time. But outsourcing this to public law enforcement is a really tall order. They're simply not going to have the resources to approach this problem. And even if they do, they're expected to focus those resources on the more "critical" issues of our time (terrorism, murder, etc.).

There is also, IMHO, a general understanding that (forgetting the illegal NSA dragnets for a moment), as a trade-off for the government generally keeping its hands off the internet, we police ourselves voluntarily in situations where it's necessary for the network to function. Up until now, we've done a pretty good job at that. After watching DDoS attack capabilities triple within a year, I'm not so sure anymore.

I fear what might happen if we cannot figure out how to come together to take on the DDoS problem (for everyone, not just a few large autonomous organizations), and we start to see more government intervention in this space to address the problem. Any such legal intervention would likely also contain a bunch of wonderful earmarks by the lobbyists-of-the-moment, further constraining our ability to provide people with free speech protections.

Anyways, I'll stop here because I think we've both made our points. I don't think the line is as blurry as you do, but I agree that it's a blurry line. Thanks for chiming in.


It's hard to say exactly what "protecting the internet itself" really means. And besides, if Cloudflare can offer this protection to both bad apples and to legitimate journalists, then they never even have to make that choice. It'd just be a false dichotomy.

I appreciate your concerns in the sense that the web is no longer decentralized. You might be interested in http://zeronet.io/, which is at least an interesting attempt in encouraging decentralization. But let's face it. Cloudflare is hardly forcing the rest of the web into centralization. They're just helping to protect people that sign up for it, regardless of who they are. They're not here to judge who stays up or not. I feel like that is more in the spirit of the internet than anything.

If Krebs had worked with Cloudflare when they made their offer, I don't think his website would have gone down. He's using Project Shield now, and that's fine too.


If I were subject to multiple insults, on stage, by the Cloudflare CEO, I'd sure as hell stay away. No good can come from a dialogue with such a bad actor.

And I can only combine his direct insults on stage (whereas Krebs's criticism was directed at the service, not the man) with CF's insistent take on Tor. They are a bad actor, bar none.


> multiple insults, on stage, by the Cloudflare CEO

Interesting. Do you have a source for that? I'd like to check it out.


eastdakota (CEO of cloudflare) posted this:

____________________________________________

Yes, you can see Brian's critique of us here:

https://www.youtube.com/watch?v=wW5vJyI_HcU

Skip to minute 19:35. Then skip to the Q&A at minute 45:00 to hear my response.

____________________________________________

He repeatedly badgers Krebs with "why didn't you respond to my emails to meet", to the point that they nervously laugh/cough on stage.

"Who needs to actually ask questions, as a journalist?", said eastdakota (https://youtu.be/wW5vJyI_HcU?t=2887). This was what got me. I expect better composure from a CEO than childish and churlish jabs.

(Edits were purely for formatting and separating eastdakota's writing from mine.)


The particulars matter, but I don't think you are under any ethical obligation to host 'incredibly abhorrent' content.

While I realize that having an expansive policy regarding acceptable content is laudable, what is the argument for being expansive enough to include 'incredibly abhorrent' content?



Would you host openly racist white supremacist content? (E.g., an American Renaissance mirror?) I understand that you believe you're promoting free speech, but I'm curious where your limits actually are.


That's incredibly impressive, and well done. I'm glad there is at least one organisation out there protecting even the speech they don't like.


Is that the 'real test'? I think this is an example where the use of the term 'free speech' confuses the discussion.

Free speech can mean 'limited interference by government' (i.e. U.S. First Amendment rationale) and then there is another meaning, 'hands-off editorial policy'.

The two things are related but are not at all the same and I often see the rationale for one being applied to the other. In this case you are suggesting that a non-governmental actor should have a hands-off editorial policy with regard to 'reprehensible content'. To me this is an attempt to apply a 1st Amendment rationale to a private actor, and it doesn't make sense.

I have no problem at all with private entities crafting their own editorial/business policies such that they don't facilitate or participate in enabling 'reprehensible' content/activities. Down thread someone asked if 'openly racist white supremacist content' would be hosted. I hope not and I would not think less of a hosting company that refused to host that content.

It is a mistake to insist that private entities must engage in the same hands-off behavior regarding content and free-speech that we rightly expect of the government.


When your views are very radical or nonconformist, companies with conservative policies would choose to reject them. Why would a company risk reputation loss for no real gain?

But this argument is the very cause of censorship. If there's a policy of no editorial interference, the problem won't occur, and the conservative company has an argument to fall back on: "it's our policy not to hinder free speech".


I could also ask why companies would risk reputation loss for hosting 'reprehensible' or 'abhorrent' content.

Your comment also illustrates the confusing notion that a restrictive editorial policy is 'censorship'. The term 'censorship' was once reserved for government restrictions but is now unfortunately wielded as a weapon against the editorial policies of private entities. Neutering the term makes it much more difficult to talk with clarity about the two different concepts. Government censorship is much more problematic.

Obviously the marketplace ultimately decides if a private entity is making wise editorial decisions. Public discussion of those policies is reasonable but that discussion can't be founded on the unqualified notion that 'censorship' is bad otherwise there would be no room for editorial decisions at all -- which is an absurd outcome.


This is kind of an oblivious response to

> we decided it was critical that CloudFlare wasn't the one deciding what was "politically or artistically important." Instead we rely on the input of civil society organizations like the EFF, CDT, ACLU, Access, etc. If one of the partner organizations says something meets the criteria then we have committed to protecting it

which is in the first paragraph of the comment you're "responding" to.


CloudFlare is saying all the right things in this relevant blog post:

https://blog.cloudflare.com/cloudflare-and-free-speech/

I'm fairly optimistic that they mean it -- and that if they don't, playing censor would be way more trouble for them than it could ever be worth.


Is it common for the hosting company to know which client is being targeted by the DDoS?


Judging by the earlier HN commentary, Krebs was hosted by Akamai.


The decentralized nature of the web is critical to its power for free speech. The whole point is that you shouldn't have to be dependent on a handful of media giants in order to publish.


I'm not saying free speech isn't important; I'm only saying that perhaps it's not reasonable to expect that distributing information that is highly damaging to certain parties can be done free of charge.

The web exponentially decreases the cost of distributing information, but can we reasonably expect it to be zero, always, for any type of information?

Historically, free speech has meant that you can say whatever you want without fear of violence. It has never been synonymous with the right to have your opinions made available to everyone, free of charge.

I think free speech is immensely important, which is why I think we shouldn't confuse it with the ability to cheaply serve damaging information from a website with 100% uptime.


The only way to make it not-free in a way that isn't censorship is to make it equally not-free for all information. Otherwise it's a content-based restriction, i.e. censorship.

> Historically, free speech has meant that you can say whatever you want without fear of violence.

That is untrue. For example, the government can't tax Democratic magazines and not Republican magazines (or vice versa).


This is really where I think p2p technologies like torrents can be a big help. There are obvious drawbacks, such as verifying you're getting the original version, for example. But if we give up the idea that the creator's website should be the gateway to that information, we have the methods to work around DDoS attacks.


A torrent with one million public keys of the most prominent journalists wouldn't be big (32 MB if we use Ed25519), and would allow them to host their content via torrents with few seeders, without fear of being crowded out by malicious nodes serving wrong data. Of course, we'd have to make sure that this particular public-key torrent is not taken over either, but it's a lot easier to secure a single torrent than to secure the torrents containing all articles of all journalists.

krebsonsecurity.com basically works as Brian Krebs' public key: any information you get from that domain via HTTPS is considered signed by Brian Krebs. But it constitutes a fairly crude PKI system, where the public key is tied to a domain that simultaneously has the job of distributing the signed information. The solution requires a separation of public key and information distribution: information must not be tied to a location, and the location of the public key should differ from that of the signed information.

All in all, I would say that (D)DoS is a problem of the transport protocol, not of the internet. Distributing information of high negative value to certain parties requires a different protocol than serving cat pictures does, which is why the BitTorrent protocol appeared in the first place: to counter the takedown of information (copyrighted content) whose publication is deemed of negative value to certain parties (copyright holders).
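To make the separation concrete, here is a minimal sketch, assuming Python with the `cryptography` package (the article bytes and key handling are hypothetical stand-ins): the article is signed once, and any reader can verify it against the well-known 32-byte Ed25519 public key, regardless of which mirror or torrent actually delivered the bytes.

    # Sketch only: verify an article against a journalist's published Ed25519
    # key, independent of where the bytes were downloaded from.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # Publisher side: generate a keypair once; the 32-byte public key is what
    # would live in the well-seeded "key torrent" described above.
    private_key = Ed25519PrivateKey.generate()
    public_key_bytes = private_key.public_key().public_bytes(
        Encoding.Raw, PublicFormat.Raw)

    article = b"<html>...the article bytes...</html>"   # stand-in content
    signature = private_key.sign(article)               # 64 bytes, shipped alongside

    # Reader side: it doesn't matter which mirror, pastebin or torrent served
    # the bytes; only the signature check against the known key matters.
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        public_key.verify(signature, article)
        print("authentic")
    except InvalidSignature:
        print("tampered with, or not from this publisher")

The design point is that trust attaches to the key, not to the location, which is exactly the separation the HTTPS-per-domain model doesn't give you.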


krebsonsecurity.com doesn't use Brian Krebs' public key, it uses Google's public key (and before that, Akamai).


Yes, and this is old news, actually. We observed the rise of DDoS as a form of censorship in Russia roughly 10 years ago. People usually think that DDoS attacks are mostly used for extortion purposes, but in Russia it was routinely used to suppress some independent news outlets since 2006. Of course, now (since 2014) they have full-blown Internet censorship in Russia, so they don't need it anymore.


Democracy isn't separate groups of individuals having, as here, total power to take a website, a datacentre, or even a CDN offline. What's actually happened is the consumerisation of weapons of mass destruction.

Glamourising stuff like this isn't useful. Everybody involved is doing something wrong. We should be doing more at network level to remove botnets by removing (reporting then blocking) infected computers and servers. Continuing to ignore them isn't working.


Seconded. "Super-powered individuals" are the opposite of democracy, whether they are in that position because of jockeying and electioneering, or by buying malware services.


He mentions:

"There is every indication that this attack was launched with the help of a botnet that has enslaved a large number of hacked so-called “Internet of Things,” (IoT) devices — mainly routers, IP cameras and digital video recorders (DVRs) that are exposed to the Internet and protected with weak or hard-coded passwords."

How was it ascertained that the sources were compromised IoT devices? The only way I could imagine being able to determine that is by looking at the vendor bits in the MAC addresses of the source. But given that IoT devices are generally on a LAN with some RFC 1918 address, you wouldn't have that information. You wouldn't even have the MAC address of the default gateway that routed it.

Can anyone comment on this?


You can ascertain information pointing towards specific IoT devices from things like HTTP header information. I saw a blog post a couple of months ago detailing how the author identified an IoT DDoS botnet, which I can't find now, but here is a similar one: https://blog.sucuri.net/2016/06/large-cctv-botnet-leveraged-...
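As a rough illustration of the idea (not the exact method from that post; the IP and the signature strings below are just examples of the kind of banners embedded web servers expose), fingerprinting a suspected source often comes down to grabbing its web interface and matching on telltale DVR/camera strings:

    # Rough sketch: fingerprint a suspected attack source by the banner its
    # embedded web server exposes. IP and signature strings are examples only.
    import http.client

    SIGNATURES = {
        "Cross Web Server": "generic DVR web interface",
        "GoAhead-Webs": "embedded IP-camera firmware (GoAhead httpd)",
        "Boa/": "older embedded Linux httpd (Boa)",
    }

    def fingerprint(ip, timeout=5):
        conn = http.client.HTTPConnection(ip, 80, timeout=timeout)
        try:
            conn.request("GET", "/")
            resp = conn.getresponse()
            server = resp.getheader("Server", "") or ""
            body = resp.read(2048).decode("latin-1", "replace")
        finally:
            conn.close()
        for needle, label in SIGNATURES.items():
            if needle in server or needle in body:
                return label
        return "no known IoT signature (Server: %r)" % server

    try:
        print(fingerprint("203.0.113.10"))   # TEST-NET address, illustrative only
    except OSError as exc:
        print("no HTTP response:", exc)

Done across a sample of attacking IPs, that kind of check is usually how "it was mostly cameras and DVRs" claims get substantiated, without ever needing a MAC address.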


> The only way I could imagine being able to determine that is by looking at the vendor bits in the MAC addresses of the source. But given that IoT devices are generally on a LAN with some RFC 1918 address, you wouldn't have that information.

You're not going to have that even if the device has a public IP, unless it is a public IPv6 address on a device not using privacy extensions.

MAC addresses are link local only and are not transmitted beyond their local layer 2 network.

There is device fingerprinting you can do based on the peculiarities of individual IP stack implementations, but honestly, without solid proof or explanations from Krebs, the way he is holding himself out as a martyr over this leads me not to believe the sensational claims he's making about the event.


> BCP38 is designed to filter such spoofed traffic, so that it never even traverses the network of an ISP that’s adopted the anti-spoofing measures. However, there are non-trivial economic reasons that many ISPs fail to adopt this best practice.

So, it costs too much to run clean internet pipes.

As the internet is a major part of the economy, as well as access to government (as well as government access to surveillance), it's probably time to regulate ISPs and related players for healthy operation, like water utilities.


No, no, no. There are probably ~100K ISPs, many of them in judicially weak countries. This will never work. ISPs are not the problem. It's a fundamental oversight in the routing architecture of the internet.

BCP38 is a Best Current Practice, not a protocol requirement. Once it becomes a requirement, this will be solved in a week.

The 5 RIRs[1] are the non-profit organizations that allocate IP addresses and regulate ASNs. I don't know how, but we (or the big internet boys) should petition them to force ASNs to fix their routers.

[1]: https://en.wikipedia.org/wiki/Regional_Internet_registry
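To be clear about what "fix their routers" means: the check BCP38 asks for is conceptually tiny. A packet arriving on a customer-facing port must carry a source address from the prefixes assigned to that customer, or it gets dropped. Here is a toy sketch of that decision (the port names and prefixes are made-up examples); on real hardware it is typically strict uRPF or an interface ACL rather than code, but the logic is the same:

    # Toy model of BCP38 ingress filtering: a packet entering on a customer
    # port must use a source address from that customer's assigned prefixes,
    # otherwise it is spoofed and should be dropped. Ports/prefixes are made up.
    import ipaddress

    CUSTOMER_PREFIXES = {
        "cust-port-1": [ipaddress.ip_network("198.51.100.0/24")],
        "cust-port-2": [ipaddress.ip_network("203.0.113.0/25")],
    }

    def permit_ingress(port, src_ip):
        src = ipaddress.ip_address(src_ip)
        return any(src in prefix for prefix in CUSTOMER_PREFIXES.get(port, []))

    print(permit_ingress("cust-port-1", "198.51.100.7"))  # True: legitimate
    print(permit_ingress("cust-port-1", "8.8.8.8"))       # False: spoofed, dropped

Spoofed source addresses that never leave the edge network can't be used for the reflection and amplification attacks the article describes.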


> No, no, no. There are probably ~100K ISPs,

Good point. I was thinking in a US-centric way.

Still, I'd like my ISPs in the US to be the internet equivalent of lead-free water.


In the current lobbying environment you're more likely to end up with a statutory minimum of lead in your ISP (surveillance, filtering, anti-competitive measures, anti-net-neutrality, etc)


He's also put up a blog post about the incident. Apparently it's back up under Google's "Project Shield":

https://news.ycombinator.com/item?id=12575047

https://krebsonsecurity.com/2016/09/the-democratization-of-c...


You may be thinking of this discussion, "KrebsOnSecurity is now up and hosted on Google Cloud":

https://news.ycombinator.com/item?id=12574428


That's exactly what I'm thinking of. And actually, that's where I meant to leave this comment. I must have had both open in two tabs for copy/pasting the URL and picked the wrong one.

Much less consequential tab-tastrophe than the time I dropped a table in production because I thought I was in my Dev tab. Backups saved my job that day. And I never kept two SQL servers up at once after that day. :-D


Ha. I swear this comment was on another post to begin with.


Your migration is interesting, since it now censors me out because I live in Iran, so I cannot read your article. Here is my browser capture: tab 1 is your article, tab 2 Twitter, tab 3 Blogspot, tab 4 SourceForge, and tab 5 Nvidia! https://my.cloudme.com/d358b17/Capture


In other news, we now know it takes at least ~700 Gbps to shut down companies hiding behind Akamai.



Kudos to Google for stepping up here! Krebs is a valuable voice of the free internet.


When I first saw the headline I thought this was going to be about YouTube Heroes.


Something very weird is going on - krebsonsecurity.com is resolving to 127.0.0.1 . Could this be an attempt by someone's DNS servers to make the machines in the original attacking botnet DoS themselves?


You still have the "old" (1 day old) DNS records. Try to flush your DNS cache.


My ISP's resolver has them too. Apparently it's somewhat common for ISP-run resolvers to impose minimum TTLs (the nominal TTL on the record I get is 5 minutes).


I'm seeing the same thing. IIRC it was a mitigation measure by Akamai, perhaps to prevent new bots from joining the attack.


From the post:

> I asked Akamai to redirect my site to 127.0.0.1 — effectively relegating all traffic destined for KrebsOnSecurity.com into a giant black hole.

Since Akamai was going to drop the "shields" on the site, instead of smashing the hosting provider with the attack, DNS was pointed at localhost.


This seems like an ineffectual measure. Instead of letting the individual nodes in the DDoS resolve the domain, I'd resolve it once and pound the IP until it changes.

A simple script could curl the page and check the content to confirm it's pointed at the right server, ignoring unroutable or inane IPs returned by DNS.


In the article he mentions that was done before moving from Akamai to Google Project Shield.



