What's up with HN?
236 points by kogir on March 12, 2013 | 121 comments
I'll provide more details in a full writeup later.

We suffered a DDOS. The volume of traffic was sufficient to keep us from handling it in Arc like we always have before. Simply accepting and dropping all requests not from our office required 45% CPU utilization.

Now nginx is helping with some of the work. Ironically the transition was planned for today anyway, except it was meant to happen at night with no downtime. So it goes.

I'm fixing things as I find they're broken. Please let me know if I've missed anything.

Edit: Yes, I know about and will fix all the SSL resources. Like yours, my Chrome window was also a portal to the '90s for a bit.

Edit Again: Your SSL resources should now be happy. Let me know if I missed any.




Hi, it seems all the major sites that have been hit recently have been using "DDoS" as a catch-all term, which really doesn't provide any insight for those of us who are trying to protect our own sites and understand wtf has been happening lately with all these high-profile attacks.

A site could, because of its own deficiencies in handling normal traffic, call any outage a "DDoS attack". I'm not saying this is the case with HN, but you see what I mean.

Could you at least specify: is this a massive scrape, which would indicate an attempt to pirate or steal information, or a SYN-flood-type attack (not a ton of GETs), which would indicate an attempt not to steal information but to disable the site?

I believe some more insight into what is happening with all these major site attacks will help us protect our own sites better. Thx.


General practice in the industry is not to discuss the details of attacks publicly.

As much as it sucks, the rule of thumb is that you need every advantage you can get when it comes to being attacked. You gain little by talking about these kinds of things publicly and stand to lose much (by giving away how you're mitigating the problem, for example, possibly leading the attackers to adjust their attack). It's just generally safer not to.

If you run a web site, once you hit a certain size where you have to worry about DDoS attacks, you will certainly have the kind of industry connections where you can talk about the issue privately and get help and/or help someone. Below a certain size you generally just don't have to worry about it -- and if you do get attacked, the response will mostly be handled by your provider, as there's not usually a lot you can do if you're running just a few servers.


> the rule of thumb is that you need every advantage you can get when it comes to being attacked.

Nah. Well, at least people shouldn't feel that way; publishing your solutions helps us all.

I just don't think that someone sitting on gigabits and gigabits of zombie throughput needs any help figuring how to hose you down.


Anyone can rent botnets; no technical skills required.


The industry should reconsider. I'm not saying one should disclose one's countermeasures, but if keeping secrets amongst industry-connected VIPs is the wise choice, it's not working very well.


If someone is connected then they can just talk to their connections and get the information and help they need.

I'm just saying it's not good to post these postmortems publicly. "We got hit by X. We did Y." Now when Q comes along to attack you, they know what not to do and also know how you mitigated X so they can more efficiently attack you. The EV from posting attack postmortems is just not there.


If you post the Ys publicly, eventually you will cover most of the possible attack cases, and DDoSing anyone may not be worth it anymore.


The problem is that it is asymmetric with the advantage to the attacker since in general they are stealing the computing resources that they use, while the website actually has to pay for its side of those same resources.


How is it not working very well?


This sounds a lot like Security through Obscurity to me. Why does it work with DDOS, but not source code?


Security through Obscurity is actually a valid tactic, in most arenas. It can't be relied upon in isolation, which is what many people tried to do. If you already have a robust defense system, it adds an additional layer.

Additionally, there are different trade-offs for DDoS vs. source code. With source code, you give up obscurity in order to get a well-tested and well-vetted implementation. With DDoS, you're relying on ops, not code. All your responses are custom-crafted anyway, so there is no well-tested implementation for you to gain. The benefits of transparency are much smaller, and the benefit is the same.


Just in case anyone else was confused: The cost is the same.


It doesn't. The only thing it does is help the attackers prey on another unsuspecting website. Open business practices anyone?


Low hanging fruit principle.


Publishing detailed information about the attack itself doesn't give attackers any knowledge they don't already have.


Well, it doesn't give the original attackers any knowledge they don't already have.

Any other malicious parties might find it useful.


Yes, agree, but this is *Hacker* News.


I couldn't agree more.


Most attacks these days come in two styles:

- Slowloris or HTTP spam attacks on dynamically generated webpages

- Reflected DNS attacks to fill up your bandwidth.


There's a third class as well: single-byte UDP and/or TCP SYN floods, designed to overload the routers and/or switches close to your machine (and if it's a VM, the hypervisor as well). TCP SYN can also end up overloading the OS's TCP stack, although syncookies (net.ipv4.tcp_syncookies=1 on Linux) are an incredibly easy defence against that.


Write-up coming. Thanks, HN!


Bug report for kogir: all items with an id less than 5 million are 404'ing.

E.g. this loads: https://news.ycombinator.com/item?id=5000000

But this 404s: https://news.ycombinator.com/item?id=4999999


Looks like anything above ID 8000000 returns a 404 too.


Together, the two of you can probably guess the regex responsible :)
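
For the curious, here's a guess at the shape of such a bug (purely illustrative, not the actual rule; the reported 8000000 cutoff suggests the real pattern differed a bit):

    import re

    # A numeric whitelist like this 404s every id below 5000000:
    ITEM_ID = re.compile(r"^[5-8][0-9]{6}$")

    def is_served(item_id):
        # True only if the id survives the (guessed) validation regex.
        return bool(ITEM_ID.match(str(item_id)))

    assert is_served(5000000)      # loads, as reported
    assert not is_served(4999999)  # 404s, as reported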


Does this mean that old content will be permanently inaccessible? One of my (rarely exercised) pastimes used to be typing a few random five- or six-digit IDs into the address bar, then following the chain up to the top-level article. I found some really interesting stuff that way.


I suspect that's intentional; older items are less likely to be cached.


Regex pattern gone wrong?


I guess the rule is that you know you're making it as a startup when somebody sues you. You know you're making it as a website when they DDOS you.

Geesh.


You'd be surprised. Sometimes it just happens for no reason whatsoever. One of my sites, for example, has a curse of forum spam bots from China clearly intended to spam a different site. They're using the wrong paths and everything. It's like I'm constantly being invaded by a horde of blind, dumb chickens trying to peck my eyes out by hitting their heads against a fence. It's sad, really.


I'm consistently impressed by how many bots are smart enough to crawl the web and submit forms and even evade some captchas... but are still too dumb to realize that rapidly trying to submit comment spam to my search form isn't doing anyone any good.


IMHO, they fail for the same reason most software does: failure to check return values.

Assuming the happy path is the road to ruin, even for spammers.
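
A sketch of the point (hypothetical bot code; the URL and success marker are made up):

    import urllib.request

    def post_spam(url, data):
        # The difference between a bot that checks return values and one
        # that spams a search form forever is roughly one "if".
        resp = urllib.request.urlopen(url, data=data)
        if resp.status != 200 or b"comment posted" not in resp.read():
            return False  # wrong form, captcha, or a search page: stop wasting requests
        return True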


and you know you're making it (not necessarily profitable, but at least getting some level of awareness) as a desktop product if it gets added to warez sites. :)


Today the Google Alert I have set up for mentions of my Mac app lit up with a new crack on a warez site. They had a full ASCII art header and everything! Put a smile on my face.


Congrats on your success!


What app?


Reddit Notifier, on the Mac App Store... or less scrupulous places, apparently, if you can't spare 3 bucks!

http://itunes.apple.com/us/app/reddit-notifier/id468366517?m...


Probably one linked in his profile?

The keyboard is a very interesting idea. There's a free version too.


Oh the memories : )

Would put a smile on my face too, but... I decided to do part of the computation server-side, so that my desktop software can't be cracked without rewriting what the server side does. My desktop app only makes sense when the Internet connection is up, and it doesn't "phone home" too often.

If users don't like that they can GTFO.

If pirates don't like that it puts a smile on my face ; )


You work for Maxis? :)


Please, don't blame Maxis for EA-management-tier crappy ideas.


https://twitter.com/simcity/status/310490053803646976

"Hey, this is on Maxis. EA does not force design upon us. We own it, we are working 24/7 to fix it, and we are making progress"


I feel that's only PR correctness. They possibly have to claim that it's their own fault.

But well, I may be wrong, and Maxis is simply no longer the company that it used to be.


A DDoS of HN is quite strange indeed. You'd expect some kind of ransom to be involved, but seeing as HN doesn't make money (I assume) from these forums, there is nothing to ransom.


Why would it have to be for gain? I bet there's plenty of bored people with idle small- to medium-sized botnets who'd take down a somewhat popular site just for lulz.


For the people doing it, the gain can be trying out their skills, relieving boredom, and earning the bragging rights of having DDoSed "Hacker" News.


So if they then brag about it and it gets posted on HN, should it get upvoted? (Not a rhetorical or cynical question; I'm really curious.)


I'd upvote it if it was interesting and they talked about how they did it (and it didn't involve LOIC).

Which is to say the blog post about it probably wouldn't be worth reading :)


I'm inclined to suspect HN's design actually makes it fairly easy to DoS.


How come? Can you explain your point a bit further?


Sure. I didn't want to put a how-to manual in this thread on the day of the attack though.

HN has a big table of closures representing actions a user can take. Loading certain pages, such as the reply link on a comment, creates more of them. They time out after a while, which is what gets you "unknown or expired link". Intentionally creating a few million of them ought to fill up the server's memory, and would be a more effective way to impair the server's functionality than simply requesting the home page a few billion times.
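
A minimal sketch of that mechanism (an assumption about the general pattern, not HN's actual Arc code):

    import secrets, time

    PENDING = {}        # fnid -> (expiry, closure)
    TTL_SECONDS = 3600  # guessed timeout

    def register(closure):
        # Every page view registers closures under random keys; the server
        # holds them in memory until they expire, so mass-loading "reply"
        # pages inflates this table without ever touching the home page.
        fnid = secrets.token_urlsafe(8)
        PENDING[fnid] = (time.time() + TTL_SECONDS, closure)
        return fnid     # embedded in the page as a link target

    def invoke(fnid):
        entry = PENDING.pop(fnid, None)
        if entry is None or entry[0] < time.time():
            return "unknown or expired link"
        return entry[1]()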


Someone upset that they were rejected from YC?


HN is a significant asset for YCombinator, and YC, presumably, makes significant money.


Who would you DDOS if you wanted to see if your botnet works?


or someone upset their comments were down-voted? :D


For what it's worth, http://ycombinator.com/images/grayarrow.gif is being referenced by pages on https://news.ycombinator.com/, leading to mixed (SSL vs. not-SSL) content fun in browsers.

Also, if SSL is now a permanent thing for HN, it would be a nice bonus to see "add_header Strict-Transport-Security max-age=31536000;" in the nginx configuration block for the https server...
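
A quick way to check whether that header is actually being sent (a trivial sketch):

    import urllib.request

    # Prints the HSTS header if the server sends one, else None.
    resp = urllib.request.urlopen("https://news.ycombinator.com/")
    print(resp.headers.get("Strict-Transport-Security"))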


Not sure if this works everywhere, but: ▲


I think the arrow should be replaced with raw CSS or converted to base64; since it's a very small image, it won't be big as base64.
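
Roughly like this (a sketch that assumes you've saved grayarrow.gif locally; the printed line is what would go in the stylesheet):

    import base64

    with open("grayarrow.gif", "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    print("background-image: url(data:image/gif;base64,%s);" % b64)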


Or at the very least set a far-future Expires header on it...

I doubt the arrow is going to change much, and if it does, just use a new reference!


It would be really nice if there was an official Twitter account with status updates for things like this. Might reduce the amount of refreshing.


Or we could step away from the keyboard for some fresh air :-)


I was getting some actual work done for once

Glad that's over with


not possible.


Fresh air? What's that?


I think it's a new Javascript framework. I hope it is REST-compliant.


Oh, yeah! Well, that makes two of us.


there is no IRL, only AFK


It's that funny, but oddly pleasant smell when you get outside the office.


You mean cigarette smoke?

Yes, I live in NYC.


it's a new webscale nodejs framework


+1 to that. Or a status.news.ycombinator.com


Thanks for spending your time on a site that's essentially one big favor to us all.


It (HN) is also an effective way to get quality beta testing of new startups.


For what it's worth, HN is loading extremely quickly now. Is that the nginx transition? Usually it takes a few seconds to load, say, my user profile. Not anymore. Nice work!


Chrome is still throwing a small fit about insecure elements. Change the favicon URL to a protocol-relative one (e.g. //news.ycombinator.com/favicon.ico).


It's surprising that there still isn't a sure-fire defense against a DDOS attack.


That's a bit like asking for a truck-proof bicycle.


Not quite. There's actually one sure protection from DDoS: scale. For example, Facebook or Google are practically immune to DDoS, since you cannot really overwhelm an infrastructure of that size, even with 100,000s of bots. If you have 10,000s of servers, IP anycast to multiple datacenters, and multiple 10Gbps uplinks, you are immune to most DDoSes; they are simply handled like normal traffic fluctuations.


To extend the metaphor: Google and Facebook have 1000000 bicycles, and you can't take them all down with a single truck.


Your second metaphor ain't correct. Google and FB's normal way of operating doesn't consist of sending regular traffic to a bicycle (server) until it crashes and then re-dispatching that legit traffic to non-crashed servers.

Sure, a server does crash once in a while, but it's not because they purposefully overloaded it.

The DDoS attempt's traffic is dispatched left and right just like regular traffic and doesn't affect normal behavior.

It's more like the single truck can hit any of the 1000000 flying bicycles.

: )


Are you saying that Google and Facebook respond to DDoS attacks by just having enough capacity to serve all the attacking requests, all the way to rendering the pages as though they were requests from legit users? And if so, do you have first-hand knowledge to back that up?

Consider the fact that each of those companies has many services, which can vary widely in usage and capacity.


That's like the episode of South Park where the cure for AIDS is massive piles of cash.


There's never a sure-fire anything in this world, but here are some tips:

* http://stackoverflow.com/a/14599129/178651

* http://stackoverflow.com/a/1029613/1395668



>The volume of traffic was sufficient to keep us from handling it in Arc like we always have before.

>Now nginx is helping with some of the work. Ironically the transition was planned for today anyway, except it was meant to happen at night with no downtime. So it goes.

Does this mean that HN no longer uses Arc, but nginx? Or are you using nginx+Arc now?


Arc is a programming language; nginx is a web server. HN is still written in Arc, with nginx now sitting in front of it to handle some of the work.


Why not use CloudFlare? It is probably the easiest way to deal with that kind of situation.


So is the redirecting-to-HTTPS thing going to be permanent then?

I guess I'll have to figure out how to hack some Chrome extensions that haven't been updated in 3 years, since the creators seem to have hard-coded the HTTP URLs.


Well, it's lightning fast now. I don't believe I've ever seen it this fast.


Is HN having SSL problems? Firefox is complaining that HN's SSL cert has not been verified. A screenshot can be seen here: [1]

It used to be fine up to today.[2]

[1] - http://img1.imagilive.com/0313/hn-cert-130311.png

[2] - Just FYI, for me, news.ycombinator.com resolves to: 184.172.10.74


I had this problem the other day with a site I administer, in Firefox only. It turns out that Chrome and other browsers have robust and complete certificate chains built in, which allow them to trust certificates that Firefox overzealously assumes are not verified if you don't explicitly define a certificate chain file linking your certificate's issuer to a root certificate that Firefox does trust.

So, long story short: Firefox demands a chain file for some certificate issuers while other browsers just trust the certificate.


I bet what actually happened was that your chain was set up incorrectly, but it was cached in Chrome so it appeared OK. If you've already hit another site that uses the same root certificate as your chain, then your site would appear fine. If you haven't (i.e. when testing your site in a browser you don't use much), then you'd see an SSL warning. So it might not have been anything different about Firefox; Chrome just trusted the chain because it had already verified the root for another site with the same chained SSL certificate issuer.


That actually could be. I use Chrome primarily and rarely use Firefox so Chrome probably trusted my Network Solutions SSL certificate because it had already encountered and cached a chain file linking Network Solutions back to a trusted root certificate, but Firefox had not cached that chain file yet.


At ${PREVIOUS_JOB}, I despised that certificate caching behavior. It did nothing but wallpaper over configuration errors that shouldn't be wallpapered over...

Me: "Your site at https://blah.otherdept.myemployer.edu/ is causing visitors to see SSL errors because your web server isn't sending the certificate chain. It probably got messed up when an updated certificate was installed the other day."

Other sysadmin: "It works fine for me. Try clearing your browser's cache."

Me: "No, really, it's not that. Here's the openssl s_client output showing that your server is only sending its own certificate and not any intermediate certificates."

Other sysadmin: "I just tried from another computer in the office, and the site's working fine. You should call the university's helpdesk since the problem is obviously on your end."

Me: profanity
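
(For anyone who wants to reproduce this kind of check without a browser cache in the way, here's a sketch; the hostname is the placeholder from the story above:)

    import socket, ssl

    host = "blah.otherdept.myemployer.edu"  # placeholder from the story
    ctx = ssl.create_default_context()      # fresh context, no cached intermediates
    try:
        with ctx.wrap_socket(socket.create_connection((host, 443)),
                             server_hostname=host):
            print("chain OK")
    except ssl.SSLCertVerificationError as e:
        # A server that sends only its own cert, with no intermediates, fails here.
        print("incomplete chain:", e)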


Funnily enough, I had the same conversation just last week.


>Me: profanity

Thanks, now my screen is full of coffee^^


Had that same thing happen to me. There are a bunch of free SSL verifiers you can use to debug this in the future, http://www.digicert.com/help/ being the one I used (some of the other ones required Java, I liked that this one didn't).


Looks like it's fixed now.


SSL on HN no longer supports RC4 ciphers; see the BEAST attack, and the full report at

https://www.ssllabs.com/ssltest/analyze.html?d=news.ycombina...


While you're in there, can you please increase the default font size on this site!?


Try ihackernews.com for mobile HN browsing.


It would be nice next time to direct to a static status page with info on what is going on, if possible, and keep that site up all the time, e.g.: status.news.ycombinator.com.


Why do people bother with DDoSing? It always goes away eventually. I can't imagine it being anything other than kids, since it achieves no aims in particular.


Not true: it can be used effectively to distract from a primary, more targeted attack...


The //news.ycombinator.com comments links in the RSS feed aren't working from Pulse for Android. Can we put the https scheme in the RSS feed permanently?


I have a suspicion that average karma isn't updating.


Can you let us know if this was done by a "large state actor"? Any reason why you think this could have happened? Any specific target?


I can't submit; it just says "Please try again". It's been like that for a few hours now.


What about the RSS feed? All the links point to file://.


Aha, it's a problem with NetNewsWire not liking https:// URLs. That's a problem.


Who did it?


Since attacks are distributed amongst several infected zombie machines, which themselves have no motive or intent, it can be difficult to pinpoint the true source of the attack.


We really need to standardise a protocol for this so at least victims know who is taking them down

/half-s


To the IETF!


Don't forget to tell them to turn on the Evil Bit:

http://www.ietf.org/rfc/rfc3514.txt
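
(Scapy even names that reserved bit "evil", so a compliant attacker might look roughly like this; 192.0.2.1 is a documentation-only address, and sending requires root:)

    from scapy.all import IP, TCP, send

    pkt = IP(dst="192.0.2.1", flags="evil") / TCP(dport=80, flags="S")
    send(pkt)  # a SYN with the RFC 3514 evil bit set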


Thanks


Interesting info, happy the site survived.


Thanks for the update. But how is that ironic? Do you mean coincidentally?

Sorry, I don't mean to be a pedantic jackass, but there, I said it.


Irony = poetic injustice. I hope that helps.

Now, please stop being a PJ (pedantic jackass). Thanks.


I'd say that depends on what type of irony you're considering: http://en.wikipedia.org/wiki/Irony#Irony_of_fate_.28cosmic_i...

I imagine you're referring to situational irony, but there are a few more literary ways to use it. Ignoring that, it really isn't hard to see how it's ironic that they had planned to do this today anyway, but instead of everything going smoothly, there was a severe unforeseen outage they weren't prepared for.


When did the whole trend of "irony has only one meaning" start? Every dictionary I've looked at includes something like "incongruity between the actual result of a sequence of events and the normal or expected result". I've got an old Webster's New World from the '40s that gives the example "it was an irony of fate that the fireboat burned and sank," which is exactly the kind of thing a certain type of person claims is "coincidence, not irony".

Is it all down to that one episode of Futurama?


I'd definitely say it's coincidental rather than ironic. At least it's not raiaiaaaaaaaaiiiiinnnnn on your wedding day.



