Hacker News
Facebook's new front-end server design (facebook.com)
210 points by banderon on March 11, 2016 | 43 comments



TL;DR - FB has historically always used a dual-socket setup. They noticed a trend in Intel chips: each generation was only getting marginally faster yet using vastly more power. They needed to change that curve, so three years ago they started working directly with Intel. The result was the Xeon-D chip. FB's current setup is a single-socket Xeon-D, which is faster than the old dual-socket setup and uses half the power.


Thank you for your service! I would pay for a TLDRAAS.


http://smmry.com/ produces:

We have been using traditional two-socket servers for more than five server generations now at Facebook.

Because our web servers are heavily compute-bound and don't require much memory capacity, the two-socket servers we had in production had several limitations.

Mono Lake is the server embodiment of Xeon-D and the building block for our SOC-based server designs.
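smmry's actual algorithm isn't public, but extractive summarizers like it can be sketched roughly like this: score each sentence by how many frequent words it contains, then keep the top-scoring sentences in their original order. Everything below is an illustrative sketch, not smmry's real implementation.

```python
# Minimal frequency-based extractive summarizer sketch.
# Not smmry's real algorithm; just the general idea behind
# extractive TL;DR services.
import re
from collections import Counter

def summarize(text: str, n_sentences: int = 2) -> list:
    # naive sentence split on terminal punctuation
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # word frequencies over the whole document
    freq = Counter(re.findall(r'\w+', text.lower()))
    # score each sentence by the total frequency of its words
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r'\w+', s.lower())),
        reverse=True,
    )
    top = scored[:n_sentences]
    # emit the selected sentences in their original order
    return [s for s in sentences if s in top]

print(summarize(
    "Facebook uses single-socket Xeon-D servers. "
    "They are faster than the old dual-socket servers. "
    "The weather is nice."
))
```

Real services layer a lot more on top (sentence position, title overlap, stopword filtering), but the core loop is usually this simple.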


PyTeaser, is that you?


This should be a HN feature. Ability to submit TL;DRs to a post in a way similar to commenting. An option to always show the top voted TL;DR, and a link to dive in to other interpretations.

A wiki-style service for TL;DRs could work too.


Slowly thinking about building one: https://github.com/simonebrunozzi/MNMN


> uses 1/2 the power.

Actually, the power use per rack is the same; it's the performance per rack that improved for their workloads, relative to what they'd have gotten by keeping dual-socket servers with this CPU generation.
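The two claims are compatible: if the rack's power budget is fixed but the new design packs in twice the throughput, energy per request halves even though power per rack doesn't move. A quick arithmetic sketch (all numbers are made up, not from the article):

```python
# Hypothetical numbers illustrating how "same power per rack" and
# "half the power per unit of work" can both be true.
RACK_POWER_W = 11_000  # assumed fixed rack power budget

# old design: fewer dual-socket servers per rack
old_servers_per_rack = 30
# new design: more single-socket Xeon-D servers in the same budget
new_servers_per_rack = 60
requests_per_server = 1_000  # req/s, illustrative

old_throughput = old_servers_per_rack * requests_per_server
new_throughput = new_servers_per_rack * requests_per_server

# energy per request falls even though rack power is unchanged
old_joules_per_req = RACK_POWER_W / old_throughput
new_joules_per_req = RACK_POWER_W / new_throughput

print(f"old: {old_joules_per_req:.3f} J/req, new: {new_joules_per_req:.3f} J/req")
```

So "uses half the power" is best read as per unit of work, not per rack.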


I guess it is very much workload dependent.

The new E5-2650L, from the EP line they're comparing against, is 2x faster on many workloads. Not sure about power.


Also of interest is the fact that they settled on using a multi-host NIC to share a single NIC between four single-CPU servers.


Isn't it ironic that all these great engineering advances (they're influencing Intel) are based on running the much-hated PHP more efficiently?

In the future, maybe the first step to building a highly scalable, green web-based startup will be choosing PHP for your code. ;)


Well... they wrote their own PHP-to-C++ compiler (which is both insane and incredibly impressive), then their own PHP tracing JIT compiler, and their own PHP type checker....

At what point does it become their own language?

Also a good amount of their backend services use better languages like Haskell, D, Erlang, and whatever else they need.


There's a reason the combination of type checking, proper containers and generics, inline XML, and asynchronous processing is all called Hack, and the runtime is called HHVM. At this point, new code using all these features only vaguely resembles PHP, but it's far more robust and maintainable, and vastly more performant.
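As a loose analogy (in Python, not Hack, so the syntax differs), the combination of static type annotations and first-class async that the comment describes looks roughly like this:

```python
# Loose Python analogy for two of the Hack features mentioned above:
# checked type annotations and async/await. Names here are invented
# for illustration, not from any Facebook codebase.
import asyncio
from typing import List

async def fetch_scores(user_ids: List[int]) -> List[int]:
    # stand-in for an async I/O call (e.g. a backend service fetch)
    await asyncio.sleep(0)
    return [uid * 2 for uid in user_ids]

result = asyncio.run(fetch_scores([1, 2, 3]))
print(result)
```

In Hack the annotations are enforced by the typechecker and at runtime by HHVM, rather than being optional hints as in Python.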


When they started calling it "Hack" instead?

More seriously... I'm pretty sure they no longer consider it a PHP code base. I could be wrong though since I don't work there.


How soon before Intel is selling FB custom SKUs with PHP/HHVM accelerating instructions? They could name it Love Canal Technology.


This is legitimately a nightmare scenario for some people I know.


Why would that be? Just wondering.


Dear god no.


PHP with their own created VM and JIT ;) (and for new code they switched to Hack)


Facebook has spent a lot of time and man-hours working around the issues most people have with PHP. Throw enough talented engineers at it (or any language), and they'll come up with some mighty creative solutions. It's not like they can do a rewrite, nor should they when they've obviously managed to make it work for them. Plus, their investors would kill them. Very, very slowly, for such a boneheaded idea.

Anyhow, most new startups don't really have the resources to do anything remotely like that. Best to choose a well-established language that works well for your needs with manageable downsides.


Is there any potential for Facebook to get into the cloud services market? It seems like Facebook has the experience to build a high-performance, reliable cloud platform. Using something like Kubernetes could make it easy for customers to try out FB.


They just got out of the cloud services market, see Parse.com


There is definitely potential but they don't want to make money that way. So, don't expect it. Source: I was driving this internally for a while.


I'd rather like to see a FB free and especially free of ads, financed by selling cloud hardware/software.


They just closed $18B in revenue in the past fiscal year. Even if they successfully began selling hardware and were making twice that, they'd be idiots to give up that huge, huge income stream. Their investors would kill them, and for good reason.


They shut down Parse.com, they are not interested in selling that kind of product or don't know how to do it...


Those in-market cloud providers are very secretive about their solutions. But given that they are running and expanding businesses comparable to or bigger than Facebook in size, there is no reason to believe they don't have something highly tailored to their own needs.


There's a lot that goes into building a public cloud that in-house solutions won't do because it's unnecessary complexity. A lot of it is security-related, which ends up being very difficult to bolt onto an existing system.


It sure does sound like they could potentially build an AWS/Google Cloud competitor, but with Oculus just getting started I guess it's not a priority for them at the moment.


Facebook lost their AWS keys just a few months ago. Until recently, it was also possible to trivially brute-force your way into any user account. Cloud services containing the same bugs would not remain standing for very long. I'm not sure these facts would prevent them from trying, though.


Interesting that they worked directly with Intel on this. I wonder if the cost savings from a power perspective were worth all this engineering effort? It's impressive in its own right to go to the lengths of customizing the hardware in such a way, but I'm not sold on the business case.

It seems like you'd get a lot more for your money just by finding cheaper sources of energy (or building your data centers near a cheap, renewable source) and using commodity low-power chips that have already been proven to work well for server workloads.


Facebook is already using the cheapest energy they can find. If Facebook is buying, say, $20M worth of Intel processors per year then you can imagine how it's possible to get ROI on a few million worth of engineering effort.
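A back-of-the-envelope version of that ROI argument: only the $20M/year CPU spend comes from the comment above; the engineering cost and efficiency gain below are assumed purely for illustration.

```python
# Hypothetical ROI sketch for custom-silicon engineering effort.
cpu_spend_per_year = 20_000_000   # $/yr, figure from the comment above
engineering_cost = 3_000_000      # assumed one-off engineering spend
efficiency_gain = 0.10            # assume 10% reduction in hardware/power cost

savings_per_year = cpu_spend_per_year * efficiency_gain
payback_years = engineering_cost / savings_per_year
print(f"payback in {payback_years:.1f} years")
```

Even a modest single-digit-percent efficiency gain pays back a few million dollars of engineering within a couple of years at that scale.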


Their hardware has been custom before as well, and I'd guess the CPU work was mostly Intel getting input from Facebook to help figure out how best to counter eventual ARM server chips. It's not like the Xeon-D is a limited design custom-made for Facebook; it's going to be an important part of Intel's lineup.

And even if your electricity is cheap, you still have to get the energy out of the data center again.


They're probably taking a pretty long view on the cost savings and considering future expansion. Based on the article, I'm thinking Intel ate some of the costs in order to recoup them through other sales.


Amazing to think you can have problems of scale that can only be fixed by building a better CPU. But at FB's scale, everything is a bottleneck, including power. I think it would be fun to think about ways to fix problems of this magnitude with virtually unlimited budgets. Of course, having too much money can also be a bottleneck and let you come up with too many wrong answers as well.


The next step in saving power would be adding 32GB of HBM2 on package.


getting a 500 with the given link: https://i.imgur.com/w8cPBa2.png




Sites these days love their whitespace, don't they? But now the content is fading too. At this rate, in 7 years most websites will just be pointless white rectangles. Still, that should save on server costs, as it'll cache well.


Funny, your comment is the lightest one on this page.


Yes, they do that to make it harder to read.


I don't understand your complaint. Out of all the articles I've read lately this page seems to be easiest to read. It focuses on just the content and formats it well.


The text is grey instead of black. It probably looks fine on most monitors, but I can see how it might exacerbate readability issues. It's become some kind of received wisdom that body text shouldn't be black anymore, I just don't understand it.


My complaint is that grey text is harder to read than black. People could have made text grey years ago if it were better; it doesn't require new hardware or protocols. It's just a trend. It'll die out, but until then - given browsers don't give the user much control over how content is presented - it's annoying.



