Founder and CEO of M5 Hosting here. We did have a network outage today that affected Hacker News. As with any outage, we will do an RCA and we will learn and improve as a result.
I'm a big fan of HN and YC in general; we host several other YC alumni, and I have taken a few things through YC Startup School. During this incident, I spoke with YC personally when they called this morning.
We have been using M5 Hosting for one of our servers since 2011. They have been extremely reliable up until today. Based on what was posted about the Hacker News server setup, we have something similar. We have a "warm spare" server in a different data center. We use Debian, not FreeBSD.
We are in the process of slowly moving to a distributed system (distributed DB) that is going to make failover easier. However, that kind of setup is orders of magnitude more complex than the current (manual failover) setup. I really wonder if the planned design is going to be more reliable in practice. Complexity is almost always a bad idea, in my experience. Distributed systems are just fundamentally very complicated.
Oh hi! Thank you for the kind words. I can't tell who you are by your name here, but if you've been with us since 2011, we have certainly spoken. Are you using our second San Diego data center for your failover location? If you and I aren't already talking directly, ask to speak with Mike in your ticket.
I had used M5 some years ago to host an online rent payment / property management app. Have nothing but positive things to say about that experience. We once had an outage that was our own fault on our single server and they had someone go in, in the middle of the night, to reboot it for us and we weren't even on an SLA.
Thank you for sharing your positive experience! We can power cycle power outlets remotely and can connect a console (ip kvm)... and we are staffed 24x7.... in case you need another server. Thanks again!
HN is one of the few sites I always keep zoomed-in (around 200%), which led to me finding an interesting bug in Chrome while HN was down: Chrome's internal "This site can't be reached" page uses the zoom level of the site you would be visiting (if it were up), rather than Chrome's default zoom.
Chrome used to store the zoom level for URLs even if you were in incognito mode, and in plain text. Not sure if it still does...
(if you changed the zoom level for a site while in incognito from the default, it would save the value and the associated URL).
I've observed a related issue with much amusement for a few years now: when loading a new resource (specifically: spinner going anticlockwise, waiting for TTFB), Chrome will invisibly switch the renderer over to the font size settings of the to-be-loaded resource, then carefully inhibit repainting the view.
But, if said destination resource is very slow to hit TTFB, you switch to a different tab, then back to the loading tab, you'll see the current page at the destination page's zoom settings.
My guess is that the interstitial system that injects error pages, Safe Browsing warnings, etc, doesn't hit the code path that says "we loaded a new (regular) page, go find its zoom settings".
Demo/PoC:
1. Run $anything that will serve a webpage on an arbitrary port - even an error page or directory listing. eg, python3 -m http.server, php -S 0:8000, etc.
2. Open the resource you just set up in a new tab, zoom in or out as preferred (eg, to a crazy level), copy the URL (for convenience), then close the tab.
3. Stop the server in (1), then run `nc -lp 8000` (or netcat, ncat, or $anything that will listen but never respond; a Python equivalent is sketched after step 4).
4. Open a new tab, navigate to a valid website (eg here :), example.com, etc), then once it's loaded, paste the URL you copied. With the page spinning and waiting for netcat (et al), navigate away from the tab, then back to it again.
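If nc isn't handy for step 3, here is a minimal Python stand-in (a sketch; port 8000 assumed, same as above) that accepts connections but never responds, so the tab just spins waiting for TTFB:

    import socket

    # Listen on port 8000, accept connections, and never send a byte back,
    # so the browser waits on TTFB indefinitely.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", 8000))
    srv.listen()
    conns = []
    while True:
        conn, addr = srv.accept()
        conns.append(conn)  # keep a reference so the socket stays open and silent
        print("connection from", addr, "- staying silent")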
Think I noticed this for the first time a couple years ago. Seems harmless enough.
Using the browser's default zoom for the error page would make the zoom level you see jump up and down depending on whether you lose or regain your connection; the current behavior is less jarring.
You could say that Chrome is designed to tie the zoom level to the viewport, but I wouldn't count on this behavior falling out of the underlying design and implementation rather than being a deliberate choice for the user experience.
I'm not sure I follow. If the page is constantly bouncing to the no-connection page, it is jarring, period. If my "no connection" page changes because of the address I'm trying, that is jarring when the problem is on my end.
That is, consider your network is down. You try to go to an address. It doesn't load, so you try another address, the page changes; but it is the same content.
Is it important that the no-connection message be an HTML document treated like any other in the web browser? If browsers did model it that way, and you saw the behaviors that come with switching to a new webpage, that would be just as arbitrary, and in this case would cause more disruption.
> I'd expect the zoom to be associated with the site.
That's what the GP comment said happened: the zoom level was the one associated with what they previously had set on HN, and they expected it to be the opposite, the default zoom level for the browser.
But your browser's connection-failure page is considered to come from the HTTP Origin of the site. It's like when browsers receive a specific HTTP status-code (e.g. 500) with no body, so they render a default HTML error document.
In both cases, those are the browser supplying a resource representation, while still technically being on the resource specified in the navigation bar. The thing you're seeing is an overridden representation of the server's response. (Which, in this case, just happened to be "no response.")
It's almost exactly the same as how the server sending a 304 gets the browser to load the document from cache. The server's actual response was a 304; but the browser's representation of that response is the cached HTML DOM it had laying around from the last 2xx resource-representation it received "about" the same resource.
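A small sketch of that revalidation flow in Python (using http.client; whether example.com actually sends an ETag is an assumption here). The point is that a 304 carries no body and the client keeps showing its cached representation:

    import http.client

    # First request: fetch the page and "cache" the body plus its ETag.
    conn = http.client.HTTPSConnection("example.com")
    conn.request("GET", "/")
    resp = conn.getresponse()
    cached_body = resp.read()
    etag = resp.getheader("ETag")
    conn.close()

    # Revalidation: present the ETag; 304 means "your cached copy is still good"
    # and arrives with no body, like the browser reusing its cached DOM.
    conn = http.client.HTTPSConnection("example.com")
    conn.request("GET", "/", headers={"If-None-Match": etag} if etag else {})
    resp2 = conn.getresponse()
    if resp2.status == 304:
        body = cached_body      # reuse what we already have
    else:
        body = resp2.read()     # fresh 200 (or no ETag): replace the cache
    conn.close()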
I mean, I get the argument. But it falls easily, from my pov. If the styling is for all content from the target site, 304 still works logically. If the argument is "all content in the browser", it fails as it depends on the address.
And I can see the argument for either. If I increase my terminal's font and run a curl, the response is scaled up. That makes sense, I scaled up my terminal.
To that end, it is odd that scaling up is per document origin. I'm assuming that is configurable?
I think a font size around 105-110% would be perfect, but the default one is fine as well. It's definitely the smallest default font I've seen on a popular website, but it's workable for me.
> Which begs the question: Does anyone feel the default font is just perfect and wouldn't want it to be bigger even by a tiny bit?
I think it's perfect. What is your screen DPI (or rather angular pixel size from your normal viewing position) and is your browser set up to do any scaling based on that? Maybe it should be.
I really dislike the trend of giant fonts and whitespace.
Are you using a high dpi monitor but not using > 100% display scaling in your OS or something? It's roughly the same size as most other sites for me.
(And pretty much all browsers have a zoom function for exactly this, it feels like a totally separate frontend would be more hassle to use than just ctrl + scroll wheel once)
I’ve come to enjoy using a high DPI monitor without display scaling as a way to counteract the huge amount of whitespace in modern UIs, coupled with content zooming so words are still actually readable :) https://addons.mozilla.org/en-US/firefox/addon/zoom-page-we/
As a designer, I've been thinking a lot about this "huge amount of whitespace in modern UIs" thing. I personally hate it, I want most things to be always in reach.
Two of my hypotheses are:
(1) some designers are working on huge screens themselves, and don't test enough in usual resolutions
(2) it's easier to achieve good visual composition by using a lot of whitespace (at the expense of hiding things below the fold or in triggerable containers)
It's the only site I have problems with, tbh. Stylesheet says it's supposed to be 10pt (with comment text dropping down to 9pt), which is even smaller than the too-small 12pt font that gets recommended a lot.
I've found Linux to handle scaling pretty inconsistently; I've got a 4K television I connect my computer to and if I tell it to scale 200% in the monitor configuration most things get scaled nicely, but random stuff (especially proprietary stuff) doesn't know what to do.
It worked much better to just tell it to output 1080p and let my television scale it... less graphics memory too. I still need to scale HN up relative to other sites in order to read it though.
If I compare the text of your comment to the text of an article on npr.org it seems like about the same as the difference between 9pt and 12pt, and they are using a serif font that seems to be a lot easier to read.
It's a style choice I guess? It seems like it would work best on a large 1080p display, so maybe that's just what the person who designed the layout was using.
While you are there, can I request a feature: bring the upvote icon/button to the end of the comment? Right now that triangle is at the beginning of the comment. Sometimes a comment is long and interesting, and I want to upvote it because it's relevant, interesting, and correct, but I have to scroll back up.
I consider that a feature, not a bug. I typically do all browsing zoomed in somewhat and I expect the "page can't load" to also be zoomed. Or am I misunderstanding what you're saying?
EDIT: People who disagree, care to explain? I zoomed in, so why would I expect it to zoom out just because it's a different page? What am I missing?
This actually got me thinking. Do we really need a CDN? It's one of those things we take and use without actually thinking about whether we could do without it.
CloudFlare will proxy whatever site you configure, even if it is static.
Static websites will get the best speed boost from locally served assets (much reduced latency from the local POP) because the page itself can be cached (presuming headers on origin site are correctly set). Especially for page requests from international users.
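As a concrete (hypothetical) illustration of "headers on origin site are correctly set": a toy origin in Python that marks its page cacheable so a CDN edge is allowed to serve it from the local POP (the max-age and port are arbitrary):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"<html><body>a static page</body></html>"

    class CacheableHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            # "public, max-age=300" tells a CDN/proxy it may cache the page
            # at the edge for five minutes.
            self.send_header("Cache-Control", "public, max-age=300")
            self.send_header("Content-Length", str(len(PAGE)))
            self.end_headers()
            self.wfile.write(PAGE)

    HTTPServer(("", 8000), CacheableHandler).serve_forever()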
Sorry, I wasn't clear. What I meant was that since HN is dynamic its dynamic content is not usually cached. I mentioned specifically "dynamic" sites on my parent comment because as of this month Cloudflare can host static pages.
(Just so nobody misinterprets my question, nothing wrong with FreeBSD, I know other stuff also runs on it like Netflix’s CDN. Still always interested to hear why people choose the road less travelled)
RTM, PG and I used BSDI (a commercial distribution of 4.4BSD) at Viaweb (starting 1995) and migrated to FreeBSD when that became stable. RTM and I had hacked on BSD networking code in grad school, and it was far ahead of Linux at the time for handling heavy network activity and RAID disks. PG kept using FreeBSD for some early web experiments, and then YC's website, and then for HN.
FreeBSD is still an excellent choice for servers. You may prefer Linux for servers if you're more familiar with it from using it on your laptop. But if you use Mac laptops, FreeBSD sysadmin will seem at least as comfortable as Linux.
Do you think this influenced early YC companies more generally? For example, reddit's choice in picking FreeBSD over Linux?
It's interesting that they might still be on Lisp if they hadn't picked FreeBSD (a chiefly cited concern was that spez's local dev environment couldn't actually run reddit, which seems like it wouldn't have been a problem with Linux, since Linux & OS X both had OpenMCL (now known as CCL) as a choice for threaded Lisp implementations at the time).
Lisp was indeed a hassle on FreeBSD. Viaweb used CLisp, which did clever things with the VM system and garbage collection that weren't quite portable (and CLisp's C code was all written in German for extra debugging fahrvergnügen.)
I don't know how Reddit came to use FreeBSD, but if you asked which OS to use around university CS departments in 2005 you'd get that answer pretty often.
Yeah, absolutely; wasn't criticizing the choice of FreeBSD more generally (short of elegant maybe, but the only real UNIX systems available these days are illumos and xv6, and they're short of elegant, too), just thought it odd for that specific use case.
Thanks for answering! That's really interesting about clisp; I've always found it a more comfortable interactive environment than any other Common Lisp, but it definitely sacrifices portability for comfort in more ways than one (lots of symbols out of the box that aren't in the HyperSpec or any other implementation, too, for example). I'm now really thankful I've never been tempted to look to its source!
How do you define UNIX? Don't use "a system licensed to use the trademark," as that's boring and includes many things that are definitely far from it. It's hard to pin down! I'd say it's easiest to define what isn't: massive systems.
Massive systems miss the design intent and, to a great extent, nearly every benefit of using UNIX over VAX.
This excludes many of the operating systems licensed to use the trademark "UNIX." In this regard, even though Plan 9 is obviously not UNIX, it's a lot closer to it than (any) Linux and FreeBSD.
> Massive systems miss the design intent and, to a great extent, nearly every benefit of using UNIX over VAX
I take it you meant to say "VMS" here, not VAX.
I don't think the size of a system is essential to whether it counts as "UNIX" or not. The normal trajectory of any system which starts small is to progressively grow bigger, as demands and use cases and person-years invested all accumulate. UNIX has followed exactly that trajectory. I don't see why if a small system gradually grows bigger it at some point stops being itself.
I think there are three main senses of UNIX – "trademark UNIX" (passing the conformance test suite and licensing the trademark from the Open Group), "heritage/genealogical UNIX" (being descended from the original Bell Labs Unix code base), "Unix-like" (systems like Linux which don't descend from Bell Labs code and, with rare exception, don't formally pass the test suite and license the trademark, but which still aim at a very high degree of Unix compatibility). I think all three senses are valid, and I don't think size or scale is an essential component of any of them.
UNIX began life on small machines (PDP-7 then PDP-11), but was before long ported to some very large ones (for their day) – such as IBM mainframes – and the operating system tends to grow to match the scale of the environment it is running in. AT&T's early 1980s IBM mainframe port [0] was noticeably complicated, being written as a layer on top of the pre-existing (and obscure) IBM mainframe operating system TSS/370. If being small is essential to being UNIX, UNIX was only a little more than 10 years old before it was already starting to grow out of being itself.
Embarrassing slip in this context (I was just reading the CLE spec, too!), but yes.
> UNIX has followed exactly that trajectory. I don't see why if a small system gradually grows bigger it at some point stops being itself.
Adding onto something (and tearing down the principles it was created on, as Linux and most modern BSDs do) doesn't always preserve the initial thing; a well-built house is better as itself than reworked into a McMansion. Moissanite isn't diamond; it's actually quite different.
An operating system that has a kernel with more lines of code than the entirety of v7 (including user programs) is too much larger than UNIX, and too much of the structure has been changed, to count as UNIX in any meaningful sense of the word.
> If being small is essential to being UNIX, UNIX was only a little more than 10 years old before it was already starting to grow out of being itself.
Correct, which is why many of the initial UNIX contributors started work on Plan 9.
> the only real UNIX systems available these days are illumos and xv6
And then when I ask you what makes those "real UNIX systems" you say:
> I'd say it's easiest to define what isn't: massive systems.
But I don't see how illumos doesn't count as a "massive system". Think of all the features included in illumos and its various distributions: two networking APIs (STREAMS and sockets), DTrace, ZFS, SMF, Contracts, Doors, zones, KVM, projects, NFS, NIS, iSCSI, NSS, PAM, Crossbow, X11, Gnome, IPS (or pkgsrc on SmartOS); the list just goes on. illumos strictly speaking is just the kernel, and while much of the preceding is in the kernel, some of it is user space only; but, to really do an apples-to-apples comparison, we have to include the user space (OpenIndiana, SmartOS, whatever) as well. Solaris and its descendant illumos are just as massive as Linux or *BSD or AIX or macOS.
I will grant you that xv6 is not a massive system. But xv6 was designed for use in operating systems education, not for production use (whether as a workstation or server). If you actually tried to use xv6 for production purposes, you'd soon enough add so much stuff to it, that it would turn into just as massive a system as any of these are.
> Think of all the features included in illumos and its various distributions: two networking APIs (STREAMS and sockets), DTrace, ZFS, SMF, Contracts, Doors, zones, KVM, projects, NFS, NIS, iSCSI, NSS, PAM, Crossbow, X11, Gnome, IPS (or pkgsrc on SmartOS); the list just goes on.
Much of what you mention isn't actually necessary/isn't actually in every distribution! Including X11 and GNOME as a piece of it is a bit extreme, don't you think? I also think it's a bit extreme to put things that are obviously mistakes (Zones, doors, SMF, IPS) in with things that actually simplify the system (DTrace and ZFS, most importantly) as reasons for why illumos is overly-complex.
I mostly agree with the idea that we have to include user space; even then, it's still clear that illumos is much closer to sane UNIX ideals than Linux is. I'm not going to claim that the illumos libc is perfect (far from it!), but the difference in approach between it and glibc highlights how deep the divide runs here. illumos, including its userspace, is significantly smaller than most Linux distributions, massively smaller than macOS, and slightly smaller than FreeBSD (and much better designed). All of these, though, are of course much smaller and far more elegant than AIX, so in that way we all win.
I don't actually know how much more I would add to xv6. If anything, I'd start by removing things. Mainly, I hate fork. Of course, its userspace is relatively small, but v7's userspace is more or less enough for me (anecdotally, I spend much of my time within it via SIMH and it's pretty comfortable, although there are obviously limits to this), so it wouldn't take many more additions to make it a comfortable environment.
Again, I'm not claiming Linux is bad (I love Linux!), simply that it isn't UNIX and doesn't adhere to the UNIX philosophy.
> simply that it isn't UNIX and doesn't adhere to the UNIX philosophy.
I talked earlier about three different definitions of UNIX – "trademark/certified UNIX", "heritage/genealogical UNIX" and "UNIX-like/UNIX-compatible". Maybe we could add a fourth, "philosophical UNIX". I don't know why we should say that is the only valid definition and ignore the validity of the other three.
The fact is that opinions differ on exactly what the "UNIX philosophy" is, and on how well various systems comply with it. The other three definitions have the advantage of being more objective/clearcut and less subject to debate or differing personal opinions.
Some would argue that UNIX itself doesn't always follow the UNIX philosophy – or at least not as well as it could – which leads to the conclusion that maybe UNIX itself isn't UNIX, and that maybe a "real UNIX" system has never actually existed.
It is claimed that one part of the UNIX philosophy is that "everything is a file". And yet, UNIX started out not treating processes as files, which leads to various problems, like how do I wait on a subprocess to terminate and a file descriptor at the same time? Even if I have an API to wait on a set of file descriptors, I can't wait on a subprocess to terminate using that API since a subprocess isn't a file descriptor.
People often point to /proc in Linux as an answer to this, but it didn't really solve the problem, since Linux's /proc was mostly read-only and the file descriptor returned by open(/proc/PID) didn't let you control or wait on the process – this is no longer true with the introduction of pidfd, but that's a rather new feature, only since 2019; Plan 9's /proc is much closer, due to the ctl file; V8 Unix's is better than the traditional Linux /proc (you can manipulate the process using ioctl) but not as good as Plan 9's (its ioctls expose more limited functionality than Plan 9's ctl file); FreeBSD's pdfork/pdkill is a good approach but they've only been around since 2012.
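To make the pidfd point concrete, a small sketch in Python (Linux 5.3+ and Python 3.9+ assumed; the sleep child and the pipe are just stand-ins) of waiting on a process and an ordinary file descriptor in the same poll call:

    import os, select, subprocess

    child = subprocess.Popen(["sleep", "2"])
    pidfd = os.pidfd_open(child.pid)   # the process as a file descriptor

    r, w = os.pipe()                   # any other fd we also care about

    poller = select.poll()
    poller.register(pidfd, select.POLLIN)  # readable once the child exits
    poller.register(r, select.POLLIN)      # readable when the pipe has data

    for fd, _ in poller.poll():            # one wait covers both
        if fd == pidfd:
            print("child exited with", child.wait())
        elif fd == r:
            print("data arrived on the pipe")

    os.close(pidfd); os.close(r); os.close(w)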
> I don't know why we should say that is the only valid definition and ignore the validity of the other three.
For "trademark UNIX": very few of the systems within are small, comprehensible or elegant.
For "heritage/genealogical UNIX": Windows 10 may have the heritage of DOS, but I wouldn't call it "DOS with a GUI."
For "UNIX-like/UNIX-compatible": nothing is really UNIX-compatible or all that UNIX-like. Do you define it as "source compatibility?" Nothing from v7 or before will compile; it's before standardization of C. Do you define it as "script compatibility?" UNIX never consistently stuck to a shell, which is why POSIX requires POSIX sh which is in many ways more limited than the Bourne shell.
I personally take McIlroy's view on the UNIX philosophy:
A number of maxims have gained currency among the builders and users of the UNIX system to explain and promote its characteristic style:
* Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features."
* Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input.
* Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
* Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.
Throwing out things that don't work is a good idea, which is why the modern backwards-compatible-ish hell is far from UNIX (in this regard, I'll admit illumos doesn't qualify).
I fully agree with you that Plan 9 is closer to UNIX than Linux and FreeBSD!
Would the original authors of Unix agree with your opinions on how to define the term?
Does AT&T's c. 1980 port of Unix to run on top of IBM's TSS/370 mainframe operating system [0] count as a real Unix? It appears that Ritchie did think it was a Unix, he linked to the paper from his page on Unix portability [1].
So is your definition of "Unix" broad enough to include that system? If not, you are defining the term differently from how Ritchie defined it; in which case I think we should prefer Ritchie's definition to yours. (McIlroy's maxims are explicating the Unix philosophy, but I don't read him as saying that systems which historically count as Unix aren't really Unix if they fall short in following his maxims.)
> McIlroy's maxims are explicating the Unix philosophy
This is why I used the quote, not for this reason:
> but I don't read him as saying that systems which historically count as Unix aren't really Unix if they fall short in following his maxims.
I'd say yes, a port of v7 is fine, because it's not meaningfully more complex. It can still be comprehended by a single individual (unlike FreeBSD, Linux, everything currently called Certified Commercial UNIX trademark symbol, etcetera).
> I'd say yes, a port of v7 is fine, because it's not meaningfully more complex
I think AT&T's port of V7 (or something close to V7, I guess it was probably actually a variant of PWB) to run on top of TSS/370 really is meaningfully more complex because in order to understand it you also have to understand IBM TSS/370 and the interactions between TSS/370 and Unix.
Pragmatic engineering: What will this change enable me to do that I cannot do now? Does being able to do that solve any of my major problems? (If no, spend time elsewhere)
Can I ask a question that's half facetious half serious (0.5\s): does hackernews use docker or any containers in its backend? With 6M requests per day, if it didn't use containers, HN might be a good counter example against premature optimization (?).
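(For scale: 6,000,000 requests spread evenly over 86,400 seconds works out to roughly 70 requests per second on average.)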
Nope, nothing like that. I don't understand why containers would be relevant here though? I thought they had to do more with things like isolation and deployment than with performance, and it's not obvious to me how an extra layer would speed things up?
I was trying to point out in my original comment that some people may be prematurely optimizing for scale, and letting tooling drive decision-making rather than the problems at hand. And a good logical short circuit to that would be: "if Hacker News serves 6M requests per day without containers, then using Docker for a small CRUD app is probably overkill".
That being said, if modern websites were rated by utility to user divided by complexity of tech stack, I must say Hacker News would be one of the top ranked sites compared to something similar like Reddit or Twitter which at times feels... like a juggling act on top of unicycle just to read some comments. :)
We use Nginx to cache requests for logged-out users (introduced by the greatly-missed kogir), and I only ever look at the numbers for the app server, i.e. the Arc program that sits behind Nginx and serves logged-in users and regenerates the pages for Nginx. For that program I'd say the average peak rps is maybe 60. What I mean by that is that if I see 50 rps I think "wow, we're smoking right now" and if I see 70 I think "WTF?".
That would be the natural next step, but it's a question of whether it's worth the engineering and maintenance effort, especially compared to other things that need doing.
For failures that don't take down the datacenter, we already have a hot standby. For datacenter failures, we can migrate to a different host (at least, we believe we can—it's been a while since we verified this). But it would take at least a few hours, and probably the inevitable glitches would make it take the better part of a day. Let's say a day. The question is whether the considerable effort to build and maintain a cross-datacenter standby, in order to prevent outages of a few hours like today's, would be a good investment of resources.
My vote is no. We will all be fine for a day without HN, as today proved. There have to be so many other ways HN can be improved, that will have more of an impact for HN users, in the remaining 364 days of the year.
> For failures that don't take down the datacenter, we already have a hot standby. For datacenter failures, we can migrate to a different host (at least, we believe we can—it's been a while since we verified this).
> Question: what is the other things that need doing?
I'm currently working on fixing a bug where collapsing comments in Firefox jumps you back to the top of the page. I'm taking it as an opportunity to refine my (deliberately) dead-simple implementation from 2016.
> But this forum has seen little change over the years and it's pretty awesome as is.
That's an illusion that we work hard to preserve, because users like it. People may not have seen much change over the years but that's not because change isn't happening, it's because we work mostly behind the scenes. Though I have to say, I really need more time to work on the code. I shouldn't have to wait for 3 hours of network outage to do that (but before anyone gets indignant, it's my own fault).
Does that mean it might get more performant? On my mobile the time it takes seems to scale with the number of posts on the page, not the number of posts it actually collapses
Yes I certainly hope so. The dead-simple implementation first expands all the comments and then collapses the ones that should be collapsed, so your observation is spot on.
I had a lot of help today from one of the brilliant programmers on YC's incredible software team. And there are other people who work on HN, just not full-time since Scott left.
That depends on how much Racket's garbage collector will let us (edit: I mean without eating all our CPU). Right now it's 1.4GB.
Obviously the entire HN dataset could and should be in RAM, but the biggest performance improvements I ever made came from shrinking the working set as much as possible. Yes, we have long-term plans to fix this, but at present the only reliable strategy for getting to work on the code is for HN to go down hard, and we don't. want. that.
arclanguage.org hosts the current version of Arc Lisp, including an old version of the forum, but HN has made a lot of changes locally that they won't disclose for business reasons.
The application is multi-threaded. But it runs over a green-thread language runtime, which maps everything to one OS thread.
That's a significant distinction because if you swap the underlying implementation then the same application should magically become multithreaded, which is exactly the plan.
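A rough analogy in Python, not the Arc/Racket code itself: cooperative tasks all share one OS thread, while threading gives you kernel-schedulable threads; the idea above is swapping the layer underneath the same application code.

    import asyncio, threading

    # Green-thread style: all of these tasks are multiplexed onto the single
    # OS thread running the event loop; only one runs at any instant.
    async def green_worker(n):
        await asyncio.sleep(0.1)
        return n * n

    async def main():
        return await asyncio.gather(*(green_worker(i) for i in range(4)))

    print(asyncio.run(main()))

    # OS-thread style: each Thread is a real kernel thread that can be
    # scheduled on its own core (Python's GIL caveats aside).
    results = []
    threads = [threading.Thread(target=lambda i=i: results.append(i * i))
               for i in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(sorted(results))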
I've been waiting to see a comment like this somewhere. Just a hugops from the internet and a reminder to all who see this to get your backups fire-proof and off-site.
When going to two nodes you probably need to handle split brain somehow, otherwise you end up with a database state that is hard to merge. So you'd better get three, so that two can find consensus, or at least an external arbitration node deciding who is up. At that point you have lots of complexity... while for HN, being down for a bit isn't much of a (business) loss. For other sites that math is probably different. (I assume they keep off-site backups and could recover from there fairly quickly.)
I haven't run a ton of complicated DR architectures, but how complicated is the controller in just hot+cold?
E.g. some periodic replication + an external down detector + a break-before-make failover that brings up the cold node, accepting that any unreplicated state will be trashed, and rendering the hot node inactive until manual reactivation.
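Something like this naive sketch of the "external down detector" piece (hostnames, thresholds, and the fencing step are all invented for illustration):

    import subprocess, time

    PRIMARY = "hn-primary.example.net"     # hypothetical hot node
    FAILS_BEFORE_FAILOVER = 3

    def is_up(host):
        # One ICMP ping; a real check would hit the application itself.
        return subprocess.run(["ping", "-c", "1", "-W", "2", host],
                              capture_output=True).returncode == 0

    def promote_cold_standby():
        # Break before make: fence the hot node first (pull its DNS record,
        # cut its power port, etc.), then start the app on the cold node.
        # Any writes since the last replication run are accepted as lost.
        print("fencing primary, promoting standby")

    failures = 0
    while True:
        failures = 0 if is_up(PRIMARY) else failures + 1
        if failures >= FAILS_BEFORE_FAILOVER:
            promote_cold_standby()
            break
        time.sleep(30)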
Well, there you have to keep two systems maintained, plus keep synchronisation/replication working. And you need to keep a system running which decides whether to fail over. This triples the work. At least.
A wise colleague recently explained to me that if you build HA things HA from the start, it's only a little more than 2x. If you try to make an _existing_ system HA, it's 3x at best. HN is not a paid service, they can be down for a few hours per year, no problem. We're not all going to walk away in disgust.
This is the newest version of their architecture I've seen [0]. Compare to an overview from 2009 [1].
tl;dr StackOverflow's architecture is fairly simple: mostly vertical scaling (more powerful machines) and bare metal servers rather than virtual servers. They also recognize that their use patterns are read-heavy, so there's a lot of caching, and they take advantage of CDNs for static content, which offloads that traffic from their main servers entirely.
Not sure why HN would still be hosted at a 3rd-tier provider. A few EC2 instances (multi-zone) behind an application load balancer should do the trick.
I've played Universal Paperclips from start to finish 4 times! I loved it. In fact, I loved it so much the last time around that I wanted to have another game "somewhat like it" in the background -- that's where the recent Cookie Clicker tab came from.
I always recommend Universal Paperclips to people who don't like cookie clicker games, because I fell in love with it the first time I tried it (heard of it from the Hello Internet podcast)
I'm not sure if this is still a thing, but at one point you could open up a JS console on cookie clicker and run game.ruinTheFun() to unlock everything. :)
I kind of feel like we need a "Year Zero" clicker game where once you get up to 1.7m "clicks" you'll see Pol Pot start dancing in the corner. Then as you accumulate more clicks, you'll see Hitler, Stalin, and Mao make appearances as well. Then, finally, once you've overflowed the 32-bit integer, the year resets to 1970 and Dennis Ritchie and Brian Kernighan start dancing in the corner as well.
Had me quite confused because I'm also having home internet issues. I was trying to get my laptop to switch to my mobile hotspot and HN is one of the sites I used to test connectivity because a) it's almost always available and b) loads very quick.
A bit of a mindfuck trying to assess my actual internet connectivity via a site that was also down : )
Ditto. HN is so reliable and light on JavaScript that I typically use it to test my connection. I thought my connection was down earlier but guess this was the rare case where it was HN.
(Other comments suggest it was a network outage at M5 where HN is hosted.)
Me too. I was trying to browse HN on my phone earlier and my first instinct was that my WiFi was having a moment. It's a testament to how reliable HN is.
I was trying to read some news while training in the basement, where I don't have very good Wi-Fi. Usually HN is one of the pages that work better down there, haha.
my fingers automatically just start typing in "news.y" when I'm idle, I definitely didn't know what to do when greeted with a 404!
Is there any way to put the HN homepage on an edge cache so at least the homepage shows up? Or am I admitting that I'm addicted to checking HN too many times a day?
I used to use a web browser with Emacs keybindings, so visiting a URL was the same keystroke as opening a file. I'd type "C-x C-f news.ycombinator.com" quite regularly, and my fingers still go to that "n" when I visit a file in Emacs.
LOL, same. In fact, every site I visit often is one char + enter in the browser. With the exception of W, being east of the Mississippi every station starts with W.
That got me to thinking about 'first letter advantages.' If a site has a first letter not currently in use, I'm much more likely to visit it more often(mostly out of boredom, sure).
V and X are still available if anyone is wondering. Zillow got Z!
I never expect HN to be down... I asked my wife - hey is the internet down? She said - no, it's working for me. I clicked on another site and my mouth dropped.
TIL HN uses "mirrored magnetic for logs (UFS)". Is there a privacy policy posted anywhere? What's in these logs? Magnetic is for long term storage. How far back does it go?
The section on "HN Information": does this include e.g. IP address under "any submissions or comments that you have publicly posted to the Hacker News site"? My naive reading of that would say "no". But is that correct?
"If you create a Hacker News profile, we may collect your username (please note that references to your username in this Privacy Policy include your Hacker News ID or another username that you are permitted to create in connection with the Site, depending on the circumstances), password, email address (only if you choose to provide it), the date you created your account, your karma (HN points accumulated by your account in response to submissions and comments you post), any information you choose to provide in the “about” field, and any submissions or comments that you have publicly posted to the Hacker News site (“HN Information”)."
"Log data: Information that your browser automatically sends whenever you visit the Site (“log data”). Log data includes your Internet Protocol address, browser type and settings, the date and time of your request, and how you interacted with the Site."
Then there's also this section:
"Online Tracking and Do Not Track Signals: We and our third party service providers may use cookies or other tracking technologies to collect information about your browsing activities over time and across different websites following your use of the Site."
I would assume that "other tracking technologies" includes IP addresses.
Seeing HN unreachable was a bit weird, as it is the best-performing site I visit on my low-bandwidth phone, so several times I thought the problem was on my end. But in the end, having it down helped me realize how frequently I dial into HN. I have a bit of a problem, and I think I need to turn that no-procrastination flag on.
Definitely. My thought was "HN is hosted on Azure"? So I went looking into their hosting provider, and lo, they were down too. M5 might be Azure hosted... couldn't confirm that.
Unrelated issues, but I did hear from our other clients that O365 was having issues at the same time as our network outage affected HN and many others.
The one time I'm actually reading HN for actually relevant information for actual work, it's down for half a day. Made for a great excuse to take a nap.
Wow, looks like a wealth of knowledge. Forked it for myself, only for reference, hope that is ok. Just seems like a ton of great info that I’d love to comb through myself. Cheers.
Yes, but it would be nice to read some "official" numbers backed by HN's monitoring (although I'm on HN quite/too often, I would not notice every downtime).