The Evolution of Container Usage at Netflix (netflix.com)
208 points by kiyanwang on April 21, 2017 | 151 comments



Netflix seems so over-engineered to me. They basically have a catalog of a few thousand movies that are negotiated months in advance of actual use. Basically, they just need to encode them and put them on a box and ship them to edge caches. Caching immutable data scales incredibly well. I would also bet that 99.99% of the movies people actually watch on Netflix would fit on a single box.

In regard to the analytics, they have 100 million subscribers. Let's say each subscriber watches an average of 100 episodes/movies a day. For each watch you record subscriber ID, movie ID, start time, and stop time, and get 32 bytes * 100 * 100,000,000 = 320 gigabytes of data per day total. I am pretty sure that you could get a commercial database and business intelligence package that could support the type of analytics you need (mainly clustering analysis) at that scale. A national grocery chain probably has a similar amount of data ingestion and a similar analytics need. In addition, I have subscribed to multiple Netflix-type services and I have never weighed the quality of suggestions very heavily, giving much more weight to the functionality of the client, lack of ads, and a large catalog of good movies.
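
For what it's worth, that arithmetic checks out; a quick shell sanity check (the 32-byte record and 100 plays/day are the assumptions above):

    # 32-byte record x 100 plays/day x 100M subscribers
    echo $((32 * 100 * 100000000))   # 320000000000 bytes, i.e. 320 GB/day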

As evidence that this is a solved engineering problem, just look at the number of similar movie services: Amazon, Google play, Hulu, PlayStation Vue, Pureflix, Crackle, etc.

Google, Facebook, Baidu, Amazon, and the self-driving car companies are doing cutting-edge stuff in terms of scalability and analysis, but not Netflix. The complexity of their operations seems mainly to be of their own doing and not intrinsic to the service they provide.

So I look at stuff like the article here and see a bunch of very smart engineers who are bored with the (solved) core problem and spend their time making cool stuff, which is actually a pretty good thing.


At one point, Netflix video streaming accounted for something absurd like 20% of all Internet traffic in the country. Regardless of whether it "scaled incredibly well," I would imagine there are still novel issues with that much scaling.

Reliability also matters differently for video than it does for normal web traffic. It's one thing to shuttle 5GB (or whatever) of data over the course of an hour. It's another thing to shuttle 5GB of data with no hiccups for an hour. Detecting and routing around machine or network issues fast enough that real-time video playback is not impeded sounds to me like a difficult challenge.
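
Rough math on that example (decimal units; 5 GB over one hour):

    # sustained bitrate needed to move 5 GB in one hour, in Mbit/s
    echo $((5 * 8 * 1000 / 3600))   # ~11 Mbit/s, continuously, with no hiccups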


"Reliability also matters differently for video than it does for normal web traffic. It's one thing to shuttle 5GB (or whatever) of data over the course of an hour. It's another thing to shuttle 5GB of data with no hiccups for an hour."

Speaking of which ...

I notice that while YouTube continues to buffer video while paused, Netflix (and many, many other online video players) does not.

So while Netflix will auto-adjust quality for you in response to a bad net connection, you can't just pause it, go brush your teeth, and come back with a big enough buffer to avoid skips through the entire video.

When did video players stop buffering on pause? Why was that choice made?

On the other end of the spectrum is whatever video player Showtime uses online: it does not adjust quality and it does not buffer on pause. Basically it was built for perfect Internet connections and nothing else.


Probably too many cases where they deliver a bunch of content while it's paused and then the content never gets played. More than half the people who pause it probably end up closing the tab without finishing the video, so one can save a lot of bandwidth by only delivering the data when it's needed. A totally unsupported hypothesis.


I arrived at the same conclusion when I realised that YouTube no longer buffers the video all the way through to the end on mobile. I miss being able to buffer a (long) video for offline playback when riding public transport.


Ditto; for this case I youtube-dl beforehand and watch in VLC on mobile.


> I notice that while YouTube continues to buffer video while paused

Not anymore, or at least not for long periods of time.


Beat me to it.

I am sure a large part of Netflix's success is how seamless its product is: there's minimal to no loading time, and the video stream is virtually always smooth, including during scenes with lots of action/high frame rates.

Creating a video streaming platform is one thing, but creating a content delivery platform that is effectively as reliable as TV, and cost efficient too, is no small task if you ask me...


Bullcrap. The tech is good enough (but not that much better beyond that level). >90% of the success comes from the content creation/licensing strategy/execution and marketing. I.e. non-tech.

At this point it's a bit like arguing that Disney's great success in Bluray sales comes from their encoding/Bluray mastering expertise.


You seem to be stumbling into the distinction between functional and non-functional requirements. Netflix spends massively more on content than tech, so that's consistent with what you're saying. To use your Disney example though, yes, it would affect their success if they weren't able to deliver their content because their Bluray mastering expertise was lacking.


Exactly. The tech division, while overfunded and overengineered, is fulfilling its role in a good-enough way.


The tech part has been "good enough" for the last 20+ years; it's why most companies call their tech teams the "IT department".

There is, however, something about that last 5-10% where moving a few bits around the right way gives you a business advantage over places like Disney.

That said, it does seem like some of this Netflix stuff is overbuilt, although I don't need to run tens of thousands of EC2 instances, so maybe it's worth it.


Tens of thousands of EC2 instances, none of which served any customer, if you read the article carefully.


> Regardless of whether it "scaled incredibly well," I would imagine there are still novel issues with that much scaling.

I would imagine there are not. When you've mastered something at 1% of that scale, you can do 20% without too much hassle. (Let's remember that Netflix is only serving a couple of videos to a lot of customers; that is embarrassingly parallel and cacheable.)

About reliability: video is one of the things that has the least need for reliability. You can have thousands of hiccups during the hour; it doesn't matter, no one will notice any of them.

I expect most of the tech challenges to be in cost optimizations, not in delivering the service.


That is the core of their product. However, there are a lot of by-products and back office software needed to run the company. As a consumer, you are most likely only perceiving a small percentage of the technology running the company.

- Applications for all devices Netflix is on (TVs, Phones, Displays, Roku, Apple TV, etc.) and all the support around it.

- Recommendation and analytical software within Netflix. Not even accounting for other areas of the company where ML can be applied.

- Billing and financial software for Netflix users and partners

- Proprietary/internal customer support software

- Content management system, along with all the international and legal challenges around that.

- Any/all software used by their marketing teams


* Operations and control plane for all of the above that is robust enough for SREs to deal with million-to-one-chance occurrences happening every few minutes.

* Complexity multiplier (variations of contracts, catalogues, infrastructure, language, currency, payment methods, taxation, government regulation, corporate governance, market structure, customer preferences) of offering service in >100 countries.

The suggestion that building a service like Netflix is a "solved problem" is naive to the point of idiocy.


Another thing worth noting is that they still support basically every piece of hardware they ever did. Think built-in apps on old smart TVs or early Blu-ray decks that haven't been patched in YEARS, devices from before ~2009 even.

You can still watch Netflix on an original Wii, which hasn't gotten any OS or software patches since around 2011. That's a long tail of legacy clients to have to deal with.


For the record, the official Netflix client for PS2 (only legitimately released in some South American countries) was discontinued and is unsupported. I'm guessing they moved exclusively to H.264 at some point (no idea what codec they served prior to that, but to be playable on the PS2 it would have had to be either MPEG-2 or MPEG-4 ASP/Xvid).


Given that streaming is basically competing with discs, it makes sense not to drop any devices ever; a DVD produced today is expected to work on a DVD player from however long ago, provided it's still operational.


Most of this stuff is also done by large corporate IT departments, such as those at banks. They all consider it pretty routine and don't write engineering blog posts about it.


Maybe the engineering community as a whole would be better off if so much duplicated work wasn't going on behind closed doors.

Sure, you don't want to publicize things which could be a competitive advantage to your company, but I bet these banks of yours wish they weren't sitting on huge, heavy, custom-rolled COBOL codebases right now...


This sounds like one of those developer estimates that takes 10x longer than stated.


"Netflix is over-engineered, it's just videos lol"

Classic HN hubris.

For example, Netflix dynamically adjusts video quality on demand. Pretty easy feature. The top HN comment will explain how they would implement that in five minutes -- hypothetically, of course!


"Responsible 20% of the entire us internet traffic? Solved problem mate, i could do it on a single machine with mongodb. Overengineered!"


Forget about the joke for a minute and let's be realistic.

It is entirely possible and reasonable to prototype a streaming service serving 20% of US traffic with a single machine and MongoDB.

Well, a single machine, MongoDB, AND Akamai.


Well of course, mongodb is web scale don't ya know


MongoDB is shit. No need to use it.

What's important is really the distribution and caching layer, and Akamai has been offering that for a very long time for all the big internet companies.


Prove it.


Just think how much more they could do if they stopped using MySQL. /s


Anyone up for the "make Netflix in a weekend challenge"?


Rails + Devise + Paperclip + JWPlayer = who's got Bob Iger's phone number?


How does the fact that other companies have streaming video services lead to Netflix not having to solve that problem? This seems a bit like saying that Walmart doesn't need to worry about managing their supply chain because Costco has already solved that problem.

Also if you've ever tried a TV channel's streaming service you'll realize that recording the start and end time won't tell you much about the quality of the experience.


In the end, Netflix is one of the places people would like to work because of stuff like this. The sheer scale of operations at Netflix combined with modern technology is very interesting to many engineers, so it also kind of works in their HR's favour.


I get the impression they're running analytics not just on the users but also the content (which would take more horsepower). Guessing they use the analysis to model scripts and generate ideas for their Original Series. Thinking of what is needed to do this, you need to analyze themes, characters, story lines, visual effects, music, many components, and then tie them to user types and viewing patterns. Then they create series with either wide appeal or strong appeal to a very small niche. On top of that, these shows load almost instantly while you're still determining whether to watch or not.


http://www.smbc-comics.com/comic/2011-12-13

If we make a scale from 1 to 4 out of those four panels, I'd say Google is around a 2.5 and Netflix is somewhere above 3. It's tough to assign numbers much past 3 because the technology becomes indistinguishable from magic pretty quickly.


> Google, Facebook, Baidu, Amazon, and the self-driving car companies are doing cutting-edge stuff in terms of scalability and analysis, but not Netflix.

I think you're missing a quantifier of some sort. Perhaps "consumer-facing"? Of any AWS customer, Netflix is far and away the leader in understanding and making use of AWS. It pushes the limits more often than not.

Their scalability, failure/recovery handling and (assumed) cost-effective deployments are _the_ standard in AWS and cloud computing. They even have an open-source toolset[0] for testing your infrastructure against failures and recovery that's quickly being picked up by several other companies to harden their infrastructure.

Then you realize that regularly, throughout the day, they're intentionally bringing their own infrastructure down and they can still operate with nary a hiccup. I wouldn't call that over-engineered.

[0] https://github.com/Netflix/SimianArmy


I can see this kind of over-engineering in all the companies that hit the VC/IPO lottery. They all hire the best engineers like crazy and have to keep them busy doing challenging engineering stuff. And managers want to protect their turf, so they join in. So you have ever more complex designs in anticipation of growth, side projects, detailed analysis, blogs with pretty graphs, etc. A combination of Parkinson's Law and the Second System Effect.


For sure there is a bit of self-flagellation. However, what it doesn't tell you is how cost effective they are, or, as they allude to, how improved their developer velocity (a stand-in for productivity) is.

Those other guys may also be offering streaming, but how quickly can they adapt?


Don't they also have some kind of recommendation engine? I bet that takes a bit more data and processing (who watched what, did they finish, maybe even more, like what kinds of scenes did they stop watching or watch again, etc.)


> Basically, they just need to encode them and put them on a box and ship them to edge caches. Caching immutable data scales incredibly well. I would also bet that 99.99% of the movies people actually watch on Netflix would fit on a single box.

This is exactly what they do. The video-serving part isn't that complicated; it's the rest of the site and APIs.

Although those are a tad over-engineered considering the final output is often a slow and unwieldy UX.


> A national grocery chain probably has a similar amount of data ingestion and a similar analytics need.

It's off-topic, but having interacted with national-level grocery chains before, I think you grossly over-estimate their technological capacity and savvy. They're probably one of the slowest industries to adopt new technological advancements, for better or worse.


While the contracts may be negotiated months in advance, the videos aren't delivered until very close to the release date; the same goes for Google and Amazon. In some cases it may be a few days early, but production schedules are tight, and even if the post house has it done early, contracts and controls forbid them from sending it to Netflix etc. too early.


>The theme that underlies all these improvements is developer innovation velocity

I can't wait until this becomes the buzzword du jour and startups start using it in their product descriptions. Then someone will need to start talking about products that "enable developer innovation acceleration" to outpace these crufty companies stuck at 25 kph.


Yup. I think further down someone will 'democratize innovation velocity and acceleration'


"unlock innovation potential"


You have to do it holistically though.


Holistically-caused emergent innovation velocity and acceleration?

...The scary thing is I think that makes sense, actually. I can understand that sentence as a thing I would want - Twiddle with your company culture so that individuals come up with (and make) new ideas, in such a way that as time goes on, their ability to do that grows...?

Makes sense to go at it from a systems view (holistic), rather than components view.

How about -

Homeopathic synergies for creative empowerment and evolution?

That sounds properly almost, but not actually, sensical.


But watch out for the innovative jerks. They may mess up the acceleration.


As long as they include direction and magnitude it's OK with me.


There's a whole list of higher-order developer innovation derivatives for marketing types to explore - jerk, snap, ...


They're killing performance (one of the main reasons to use containers) and adding a massive extra layer of management if they're running containers on EC2.

I suspect Netflix is too wedded to AWS (which is weird, as Amazon is their biggest threat), but Triton or Red Shift (both of which actually isolate containers, using SmartOS and SELinux respectively) make way more sense for other people who want the blazing-fast IO of containers on bare metal.


It seems they value developer productivity and innovation over machine performance. I find this enlightening.


I value developer productivity over machine performance too, but they're doing the opposite here: developer productivity is not helped with an unnecessary second layer of containment.

With containers on top of VMs, you now have to manage which containers run on which VMs (and the cloud provider worries about the physical boxes); with a pure container solution you just spawn containers (and the cloud provider worries about the physical boxes).


I've been developing for a while with a simple Dockerfile in the root of a given project... from there I rarely care about it. In dev, I can mount the app path into the container and use environment variables for any other services that are needed. I'm much more productive with Docker for (Windows|Mac) than I've ever been with the likes of Vagrant or whatever bastion of technology manages VMs... I don't have to think about the VMs; that's generally the IT/Infrastructure/DevOps job. Yeah, sometimes I wear that hat, but not when I'm coding.
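
A rough sketch of that loop (image name, paths, and env vars are made up for illustration):

    docker build -t myapp .
    # mount the working tree into the container; point at other services via env vars
    docker run --rm -it -v "$PWD":/app -e API_URL=http://dev-svc:8080 -p 3000:3000 myapp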


It is helped if the second layer further automates deployment. I would imagine Netflix is better at understanding its velocity than we are.


Yep, but Netflix also has a decade of large-scale architecture welded to AWS that we don't.

They might not be deploying containers on VMs on bare metal because they want to, but because they /have/ to.


Mind quantifying this overhead you assume containers-on-metal vs VMs will solve?


It's not assumed; it's widely known that VMs have poor IO performance, and that's one of the main points for containers in the first place. Short ver: GIYF.


If you can't provide numbers, then please talk about what technologies you're assuming are involved that create this overhead.


Compute is cheap. People are expensive.


> Compute is cheap.

Not on AWS.


Word on Hacker News is that Netflix is one of AWS' premiere showcase customers and gets discounts that we couldn't dream of.


If you have dynamic workloads and need to be HA, scalable, and multi-region, AWS is worth every cent.

If you are running a webapp and a database with a known workload, not so much.


The refrain of shitty developers everywhere.


Performance has never mattered. That's why Internet Explorer's map access implementation was exponential instead of logarithmic (in the size of the map).

That one piece of lazy coding held back JavaScript on their platform for years, and I'd argue that it was the most significant cause of their browser's market share plummeting (or at least tied with the lack of adblockers).


Seems fine for server-side development. For client-side, please don't kill my battery.


Could you elaborate on why containers don't perform on EC2?

I'm not running that combination myself so I wouldn't really know, but I'm not aware of problems with that specific combination or can think of anything obvious.


I don't know about performance specifically, but you lose a lot of flexibility in managing containers when running them on the public cloud. E.g. there are rate limits, so a lot of multicast protocols can't be used effectively as they quickly saturate those limits (IMO).


Containers use LXC in the Linux kernel underneath. Overhead is even lower than HVM virtualization which is a couple percent for most things. Containers are really just a smarter way of dividing resources between users on a shared linux box, something that's been going on since the dawn of time.

It might add another 1% overhead for most tasks to run containers on HVM virtual machines.

The one giant exception is network performance. The network is usually virtualized at the VM level, unless you have an "enhanced networking" VM with SR-IOV enabled. For containers it's virtualized a second time.

This makes the combo potentially terrible if you're trying to run high-bandwidth stuff on low-performance VMs.

I still like the combo because it allows you to give the big ole finger to AWS if they try to lock you in one day. Since your containers are isolated from the VM you can easily spin them up pretty much anywhere else.
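
If you want to measure that network cost yourself, a rough sketch (assumes a second host at $SERVER running `iperf3 -s`, and the third-party networkstatic/iperf3 image):

    iperf3 -c "$SERVER"                                              # baseline from the VM itself
    docker run --rm --net=bridge networkstatic/iperf3 -c "$SERVER"   # through docker's bridge/NAT
    docker run --rm --net=host networkstatic/iperf3 -c "$SERVER"     # bypassing the container network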


There is a lot of incorrect information in this post.

- Containers are a combination of namespaces, cgroups, and chroot (maybe). You don't need LXC to use containers. Docker doesn't even use LXC.

- There is no overhead for running processes in containers.

- There is no requirement to virtualize networks for containers. They can be configured to use the host's network directly, at which point you are bound by the host's network capabilities. Otherwise it is typically a combination of bridges and overlay networks for which the benefits outweigh the performance concerns for most workloads.
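
A minimal illustration of the first and third points, assuming a Linux box with util-linux and docker installed:

    # a "container" with no container runtime at all: just kernel namespaces
    # around an ordinary process (run as root)
    unshare --pid --fork --mount --uts /bin/sh
    # inside: `hostname test; ps` shows an isolated view; same kernel, no VM

    # and host networking: no bridge, no overlay, no NAT
    docker run --rm --net=host alpine ip addr   # prints the host's interfaces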


I agree that the typical NAT or SDN setup around container networking could impact performance or at least require additional resources.

But I don't see how that would be any worse on EC2 compared to bare metal or any other hypervisor/provider.

Maybe I'm just reading too much into the OP's wording, and he did not mean that it's a specific EC2 issue.


I've mentioned this in another comment but the short answer is: rate-limiting.

Now, Netflix, being a priority customer, may get higher limits and such. But average joe public cloud user should keep that in mind before trying to use EC2 for running containers.


Even if that claim wasn't wrong, it's an unrelated question. If there were rate-limiting problems, they'd apply to using EC2 at all even without involving containers.


I think you're ignoring the fundamental issue when deploying container-based services vs. services on multiple VMs. Usually, the architecture for containers involves spinning up a bunch of VMs and deploying some kind of layer on top of that (either K8s or Swarm or something else). When you deploy containers, they may not be on the same VM, or the overlay network itself may require some kind of communication with a container on another VM. This usually creates a lot more communication between hosts, and rate limiting becomes the bottleneck.


Do you have any evidence of this rate-limiting showing that it's that much of a problem? People have been running clustered apps on EC2 for over a decade and it's not like you hear people saying you can't run Cassandra, ElasticSearch, etc. on EC2 because the network is limited.

Similarly, do you have any data showing that a container system has such incredible overhead compared to the actual application workload? I mean, if that was true you'd think the entire Kubernetes team would be staying up nights figuring out how to reduce overhead.


You run a compute pool; you don't spin up EC2 instances on demand for this kind of application. You scale the pool based on target utilization metrics.
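
A hedged sketch of that with the AWS CLI (group and policy names made up; target tracking is one way to express "scale the pool on utilization"):

    aws autoscaling put-scaling-policy \
      --auto-scaling-group-name batch-pool \
      --policy-name target-cpu-60 \
      --policy-type TargetTrackingScaling \
      --target-tracking-configuration '{"PredefinedMetricSpecification":
        {"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":60.0}'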


I was wrong about docker; back when I was playing with it, it did use LXC, and it appears to have started out as a project to make a specialized version of LXC. You're right that Docker has its own container runtime now.

The overhead for running containers is usually very low but real. The OS needs to partition low level resources that are normally shared and the scheduling introduces some overhead.

I disagree about network performance. The virtualization adds a somewhat small but non-trivial overhead here (the overhead for other stuff could probably be considered trivial)

Here is a paper I dug up that gives results to back up my ranting. It's a bit old now but probably still holds mostly true. http://domino.research.ibm.com/library/cyberdig.nsf/papers/0...


I'd need a citation that a process running in a namespace adds overhead.

My point about network virtualization is that it is not required to use linux containers. Yes, some container tools do create network abstractions that add overhead, but they aren't required and most tools allow you to optionally bypass the abstraction and sit directly on the host's network stack.


AFAIK the early Go versions of docker used LXC underneath, but that was ~3+ years ago.


Sorry. It's hard to keep up these days; I'm 112 in JavaScript years.


Wow. Your knowledge of computers is excellent for only being 4 months old.


Containers are simply processes, so NOT even 1% overhead! No overhead at all, though I get that that's hard to believe.


Late edit: I've said 'Red Shift' above, I mean Red Hat's OpenShift.

Tech has too many edgy code names.


...so Kubernetes.


Kubernetes on bare metal, with additional tech to isolate containers from each other safely. See the link elsewhere in the thread.


SELinux is NOT isolation. The main issue is the large kernel attack surface, and SELinux, while important, only solves a small part of that.


That's true, but a lot of the classic containers-not-containing issues (/sysfs hacks to get into the parent kernel etc) are prevented by SELinux policies.

See https://blog.openshift.com/securing-dockers-future-with-seli...


Agreed.

But people keep selling SELinux or AppArmor as a solution for multi-tenant container environments, which is just plain false.

The real solution is efforts like Intel's Clear Containers and Hyper's runV.


We get fast performance when we need to, and there are a number of EC2 technologies that help with that.


Or use something like GKE where the containers should be running directly on hardware, so you only have one layer.


"Bare-metal" means running directly on hardware. But yes, GKE would also join Triton and Red Shift (provided it has similar container isolation capabilities).


Actually, as 0x74696d mentions below, GKE is also not bare metal.

Doing a little research: GKE runs on https://cloud.google.com/container-optimized-os/docs/ which is designed for GCE, which runs on KVM: https://cloud.google.com/compute/docs/faq

So it looks like just Triton and Red Shift.

Disclaimer: I used to work at Red Hat, which means I like the people behind Red Shift, but that also means I hate the people behind Triton as part of the early 2000s Linux/Solaris wars.


GKE clusters run on VMs, not on bare metal.


Containers don't actually run directly on hardware with GCE - there's still a virtualization layer in-between.

I'm 99% sure that Google runs one VM per container because that's the only way to make it safe.

Anything else would be insane.


> I'm 99% sure that Google runs one VM per container

I'm 100% sure you are wrong. You might as well just use VMs.

Containers are not only about safety, you know.


I stand corrected; it's a bunch of VMs per customer, but still no multi-tenancy.


1 container per VM? What's the point? If you can bypass the container sandbox, it's very likely that you can do the same with a VM.


> If you can bypass container sandbox it's very likely that you can do the same with VM.

Hypervisors are much, much harder to break out of than a Linux container.


The Netflix engineering team amazes me. They literally took all the available APIs and built their own platform, even though some of the features are already in the AWS offerings. I suppose they did it mainly because the native services aren't flexible and robust enough for their use cases.

BTW, their open positions are always prefixed with a "senior" title, but I guess that makes sense; Netflix builds pretty much everything from scratch under time constraints.


I asked about that in an interview with Netflix: why they were building their own vs. using open source. The answer I got was "It doesn't work at the scale that Netflix operates." Not quite sure what that means, but I didn't press him further.


I work at Cloudflare; since joining almost 3 years ago I've seen multiple pieces of technology get deployed and then replaced because "it just doesn't work at our scale".

It feels kinda weird to say that, but then I see what our DNS servers are doing, or I learn about some customers that struggle to consume their own logs due to the speed at which we produce them, and things you wouldn't think are an area of concern become one... when you have enough traffic flowing through your systems.

Pretty much the only thing that has really stood out is Kafka. Kafka does work at scale.


It's rather rare to see off the shelf technologies (open or closed) that scale to the top 5%. This makes sense because the techniques required at that level are pretty different, and don't apply well to other situations.

So, making an off the shelf product that actually does scale to the top 5% rarely makes sense, since, at best, the customer base will be limited. In the average case, though, the customer base will be almost nil, because of the per-site quirks and customization that are always present.

I experienced this on a weekly basis in the late 1990s and early 2000s at WalMart, where we were centrally managing a network with well over 10 million nodes.

Every vendor ever constantly tried to get in with us, so we had to come up with a triage system or an enormous portion of our time would be spent evaluating.

Even the best of breed would almost never work for us, because of our unprecedented scale (at the time).

In the rare case where an off the shelf product was selected and successful, it had to be heavily customized.

I am of course talking more about 'framework' kinds of tech, or, I guess, things that work at scale. A lot of off the shelf tech was and is used that isn't scale related.


Funny you should mention Walmart. I went to visit them in ~1992, when they were the biggest user of Teradata at the time. Never used Teradata after that, but I'm interviewing with them in the UK next week...


Yup. WalMart was doing big data analytics, mixed in with a lot of real-time processing, many years before those became a thing in the rest of the industry.

If you don't mind me asking, what kind of position are you looking at in the UK next week?


Devops.

Yes, Teradata was awesome at the time. Their secret sauce was the hashed-index stuff, which I think was the source of their major patents.

Our PM was not amused when I pointed out that, although hashed indexes were awesome, they did nothing when you were doing a wildcard search, at which point you're doing a table-space scan. Nobody had thought about this wrinkle...


Devops with WalMart in UK...so with ASDA then?

Re: Teradata: I sat next to that group for half a year, but I had no other direct exposure to it. I did what one might call 'devops' for Network Engineering.


No, this was for WH Smith. We went to Walmart to see how they used it. It was awesome, but misunderstood.


Most (but not all) open source projects aren't built or tested with huge scale in mind, and often you have to do things differently enough that it's not worth the effort to change an existing project when you could just build it yourself, especially if you have a lot of custom environment things to integrate with. A simple example: projects using MySQL rarely have support for separate read and write servers, but that's a pretty common way to scale out, and plumbing that through later is more painful than doing it right in the first place.
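
To illustrate the read/write split in its simplest form (hostnames and schema hypothetical):

    mysql -h primary.db.internal -e "INSERT INTO plays (user_id, title_id) VALUES (1, 2)"   # writes go to the primary
    mysql -h replica.db.internal -e "SELECT COUNT(*) FROM plays"                            # reads fan out to replicas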

The other thing is 'early optimization': when you know millions of users are going to use something, you have to build it right to begin with.


They specifically only hire seniors; they don't want to provide positions for juniors, as they would prefer to pay more for seniors than pay indirectly for training juniors.

I think this reflects poorly on them, just as much as companies that use OSS but don't contribute anything back.


I guess "experience discrimination" is a thing now?

Seriously, if a company only wants to hire experienced folk, then so be it as long as they don't exercise REAL discrimination.


How does that reflect poorly on them? They are willing to sacrifice potentially great hires because they'd rather make velocity their focal point. It's a trade-off.


I personally find it short-sighted since I find mentorship opportunities to be a hugely rewarding part of my job. I wouldn't want to take a job in an environment with no junior developers since it would deprive me of that perk. I'd imagine Netflix misses out on other senior engineers for the same reason.

It's also interesting considering a quote I remember from Google about their preference for junior engineers..."fewer bad habits to break."


I think it's a refreshing change from the ageism of much of the rest of the industry. It probably gives them a compounding competitive advantage.


> Today, we are in the process of rebuilding how we deploy device-specific server-side logic to our API tier leveraging single core optimized NodeJS servers

Is this the core Netflix API? Have they moved off Java? Previously, their entire open-source contribution was around Java (https://netflix.github.io/). The Hystrix repo was updated barely a day ago.

For me, this is more interesting than the VM part.


It is a splitting of the device-specific stuff from the general coordination stuff. So the "core API", as you put it, remains in Java, while the device teams get to write their code in Node (many of our UI engineers are experts in JS already).

Hystrix is important and won't be going away any time soon.

More info: https://www.slideshare.net/mobile/KatharinaProbst/the-new-ne...

Disclaimer: I'm on paternity leave and not on those teams, but we've talked publicly about this stuff recently.


This is very interesting, and very indicative of convergence. If someone at the scale of Netflix has a strong drive towards cross-pollination of talent from device to server, then I suppose the JS ecosystem is far more successful than I thought.


They use a mix of everything, but most of the back-end microservices are JVM tech: Java, Groovy, and I heard they were using some Kotlin a while ago.


> We implemented multi-tenant isolation (CPU, memory, disk, networking and security) using a combination of Linux, Docker and our own isolation technology.

Curious what their 'own isolation technology' does that Docker doesn't.

Also, what does Fenzo do that Marathon doesn't? It looks like Fenzo sits on top of Marathon and sends it some sort of recommendations for scheduling. I need to find a good example of what it's actually doing.


Fenzo is a Java library for deciding how to allocate tasks when offered resources. It makes implementing Mesos frameworks easier, because it turns out the question "Given an offer of X resources, and a list of tasks that need running, what is the best use of the offered resources?" is actually quite hard (3D-knapsack hard).

"Apache Mesos frameworks match and assign resources to pending tasks. Fenzo presents a plugin-based, Java library that facilitates scheduling resources to tasks by using a variety of possible scheduling objectives, such as bin packing, balancing across resource abstractions (such as AWS availability zones or data center racks), resource affinity, and task locality." [1]

Marathon is a Mesos framework for scheduling long-running applications (like REST services) and keeping them running. If what you want to do is serve HTTP traffic, then Marathon does the job (although the stand-alone UI is now deprecated and will only be bug-fixed for "the next few months", so you'd better like the full DC/OS offering).

Titus appears to combine the functionality of Marathon, plus the ability to run batch jobs. I wondered if Titus was a fork of Marathon with new bits, but that doesn't appear to be the case. I believe it deals with one glaring flaw in Mesos, which is that frameworks all independently calculate the best use of the offered resources. When compute becomes available, Mesos makes offers to frameworks, with some basic logic such as making offers to frameworks currently consuming the least. But that means that there is no way to customize the resource allocation across different use cases (e.g. between REST APIs, one-off tasks and Spark clusters). It'd be great if Fenzo did "sit on top of Marathon", so I could customize how it schedules based on the "bigger picture". Titus avoids the problem because it schedules everything.

[1] https://github.com/Netflix/Fenzo


>Curious what their 'own isolation technology' does that docker doesn't.

Probably related to that "and security" part. That isn't currently docker's strong suit.


Justifying high engineering salaries; I see this a lot in great companies.


Netflix is the last place I'd expect to do this. They're famous for letting people go when they're no longer needed.


That sounds like a great incentive to build big proprietary systems that depend on your presence...


Yeah, boo on those evil developers who want to make money and work on interesting stuff. What are you, their CFO?


How does this compare to k8s? Seems like it has everything plus a layer of scheduling and batch jobs on top?


Doesn't kubernetes run an overlay network (at least in some scenarios)?


An overlay network is not a requirement. The only requirement is that pods (collection of containers) should be able to communicate with each other directly without NAT. Each pod gets an IP address in the container network.

Overlay network technologies (flannel, weave, calico, etc) are popular but they aren't mandatory. You can implement it using hardware switches and VLANs if you wish.
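
A quick way to see that model on any cluster (pod names made up; alpine's busybox provides ping):

    kubectl run a --image=alpine --restart=Never -- sleep 3600
    kubectl run b --image=alpine --restart=Never -- sleep 3600
    kubectl get pods -o wide                  # note each pod's own IP
    kubectl exec a -- ping -c 1 <ip-of-b>     # direct pod-to-pod, no NAT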


Sorry, but what are you implying by hardware switches? Something that's not an overlay? Something that switches VXLAN in hardware?


That is true; it has more fancy network stuff, but less scheduling stuff.

This space has gotten busy in the last few years.


> In each of these examples, a key to the success of Titus was deciding what Titus would not do, leveraging the full value other infrastructure teams provide.

This. So much this.


This is some amazing scale:

-------------------

We run a peak of 500 r3.8xl instances in support of our batch users. That represents 16,000 cores of compute with 120 TB of memory. We also added support for GPUs as a resource type using p2.8xl instances to power deep learning with neural nets and mini-batch.

In the early part of 2017, our stream-processing-as-a-service team decided to leverage Titus to enable simpler and faster cluster management for their Flink based system. This usage has resulted in over 10,000 service job containers that are long running and re-deployed as stream processing jobs are changed. These and other services use thousands of m4.4xl instances.

While the above use cases are critical to our business, issues with these containers do not impact Netflix customers immediately. That has changed as Titus containers recently started running services that satisfy Netflix customer requests.
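
Those figures line up with the published r3.8xlarge specs (32 vCPUs, 244 GiB RAM each):

    echo $((500 * 32))    # 16000 vCPUs
    echo $((500 * 244))   # 122000 GiB, i.e. roughly 120 TB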


"We run a peak of 500 r3.8xl instances in support of our batch users. That represents 16,000 cores of compute with 120 TB of memory."

I get that they are bragging about their implementation, but what number for peak batch processing instances would they be embarrassed to divulge?


What has happened to HN? This thread is so filled with misinformation that it's clear there is very little understanding of containers.

A container is just a process, really no different from any other. So caches, shared libraries, etc. all work the same, with just a little care.


I'm looking at the Netflix repos at https://github.com/Netflix. 5 pages x 30; that's a lot of repos.


Is anyone else having issues accessing this?

    $ curl http://techblog.netflix.com/2017/04/the-evolution-of-container-usage-at.html
    <!DOCTYPE html>
    <html><head>
    <meta http-equiv="content-type" content="text/html; charset=UTF-8" />
    <title>Access Denied</title>
    <style type="text/css">body {margin:0;font-family:verdana,sans-serif;} h1 {margin:0;padding:12px 25px;background-color:#343434;color:#ddd} p {margin:12px 25px;} strong {color:#E0042D;}</style>
    </head>
    <body>
    <h1>Access Denied</h1>
    <p>
    <strong>You are attempting to access a forbidden site.</strong><br/><br/>
    Consult your system administrator for details.
    </p>
    </body>


Work filtering out netflix.com? My old place used to do it, and it bugged me that I couldn't read their tech blog at work.


Yup that appears to be it. Didn't even realise my workplace filtered anything.


A previous workplace of mine does block Netflix, and consequently their tech blog.


I would've imagined Netflix had their own hardware, both compute and storage.


At least part of their content delivery network is based on their own hardware/software (see https://openconnect.netflix.com/en/ and the PDF referenced there). It's called the "Open Connect Appliance".

They use AWS for video transcoding (see http://techblog.netflix.com/2015/12/high-quality-video-encod...) and probably other activities (analytics, business).


Adrian Cockcroft, who now works for AWS, is known for his very early, forward-thinking approach to distributed applications and was responsible for their cloud architecture. I highly recommend checking out his Medium and Twitter; he's a brilliant man.


I've heard they're looking at 'architecture in a box' solutions, which would allow them to abstract designs from specific cloud providers and perform cloud arbitrage; eg, if Cloud Provider X can run the app at 14c/hour, and Cloud Provider Y can run the app at 22c/hour, then they can just deploy the entire architecture on Cloud Provider Y.


Another simple problem that's solved: auto-scaling of stateless services across thousands of machines, in multiple datacenters and cloud providers all around the world.

https://www.nomadproject.io/

There are also Kubernetes and Mesosphere to do similar things, but they are harder to use; you can't learn them over a weekend.
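
For what it's worth, trying Nomad really is weekend-sized; a minimal session (commands as of the 2017-era CLI):

    nomad agent -dev &        # throwaway server+client in one process
    nomad init                # writes example.nomad, a sample job spec
    nomad run example.nomad   # schedule the job
    nomad status example      # watch the allocations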


Why would they choose the more expensive one?


Because, er, I made a typo and it's too late to edit. Good spot!


On the off-chance that it becomes the cheaper one. Competitive markets change.


They were fairly early and fairly vocal proponents of moving to cloud-based systems. I haven't looked in a while, but last I checked everything but their CDN was running on AWS.


I thought their CDN was FreeBSD machines and everything else was on AWS.


I'm not sure why you're being downvoted, as the GP is wrong and your post is on the money.

Netflix runs their OpenConnect appliances at every big ISP that will let them put one there:

https://openconnect.netflix.com/en/

They run FreeBSD, and all of the web services (thousands of them) run in AWS on Linux.

https://www.rapidtvnews.com/2016031942170/netflix-moves-all-...


Yeah that was my impression too.


I wonder how this compares with the recently launched AWS Batch.



