I kind of love Meta for all the seemingly unnecessary internal stuff they do. They have so many projects that are absolutely not critical for them, maybe not even net positive, but they spend who knows how many hours building and maintaining them.
Is it? It seems like 90% of what Netflix is (from a technical PoV) is a CDN + video playback. There's a lot more value in the content library they've negotiated and the business agreements with ISPs than there is in the software stack.
Apologies if this response is delayed; 6 posts today is "too fast."
I'm not sure anyone has access to the real data, but I've had a suspicion that Netflix is able to remain a lot more profitable due to their superior tech. Cloud hosting and streaming (not to mention labor) can get very expensive, and I think that while it's easier to set up nowadays (compared to when they started), a lot of the other companies are burning cash to try to keep up. HBO Max (just Max now?) has always had poor streaming quality compared to Netflix, and I imagine they're paying a lot more for it.
Surely that has more to do with having a fleet of edge nodes that mirror content close to consumers? There are only so many ways to ship video bytes across the internet. Best way to save money is to move fewer bytes.
TBH the value of Bento over other notebook offerings was almost entirely how well it played with the rest of the data and infra stack within Facebook. It was super easy to go from raw data (entire DE and DI orgs responsible for ETL and cluster maintenance) to a cleaned-up table (usually built by DEs) to an ad hoc table supporting a specific use case, which could then be accessed via Bento, analyzed, and published/shared with anyone in the company.
If you use JupyterLite, you're using the same thing. Bento is just the internal Meta version, and the only potential benefit is the internal integration.
Probably not. It's written in Hack, and heavily tied to internal frameworks, so it'll be practically impossible to extract into a standalone package, unless they do a "clean room" implementation (like they did for Sapling UI https://sapling-scm.com/docs/addons/isl/).
But it has some cool features that notebook developers can take inspiration from.
"Oh that's cool.", "It'd be interesting to work on problems like that.", "That's a neat solution"
If anyone's on the fence about applying, that could be enough to nudge them in that direction. If anyone's worked in similar areas, it could be worth applying and looking at the team, etc.
The original "Block Editor" (that Jupyter modeled itself after) is the one that's now called "Quanta", and has been around for decades in various forms and incantations:
I meant the "block editor" aspect, like how individual chunks of text and images can be independently selected and moved around or even shared with their own URL.
I've long believed some system like that could and should some day replace even HTML and the web, and that it'll only happen if the Semantic Web ever takes off in a big way where chunks of stuff are "typed" (like a Type-Safe Web). Even Tim Berners Lee has been dreaming of this for decades, but the world is still stuck in HTML-land for the foreseeable future.
Yes, but it's closer to Sage (browser and Python based).
I don't know what quantadev is thinking of, but Quanta seems totally different and not a programming notebook at all. Its README also claims "Quanta is a new kind of platform with a new kind of architecture", while quantadev claims it "has been around for decades".
Having used both Bento and, later, Colaboratory a few years ago, I think I liked the latter a lot more. Google's internal tools are usually much more polished and better designed, perhaps because they've been around for much longer.
A bit off-topic, but my problem with any notebook type of tool (i.e. you create a document that mixes code, the output of that code, and text/media) is that they always feel like they're meant to be these quick, off-the-cuff ways to present data. But when I try to use them they just feel awkward and slow. (I tried doing a Jupyter notebook with the VS Code plugin, and while everything was very polished, it felt like I was ponderously coding in Word or something. The same was true for R notebooks in RStudio. Maybe it's a better experience if you have a decently fast laptop.)
I will confess that I found Mathematica kind of neat back in the day. I never got as good with it as peers did. I'm curious if that would be different for me today.
That video cannot be seen without watching Jeremy Howard's rebuttal: I Like Notebooks. I also believe this was the video that got him kicked out of a conference(?) for being too confrontational, which was just ugly for a guy who clearly loves being an educator.
I have the exact opposite experience — VS Code notebooks are much snappier and are possibly the best Jupyter implementations I’ve ever used (better and more responsive than vanilla Jupyter or Jupyter labs).
VS Code notebooks also support LSPs with refactoring, typing, etc. Black is supported. Step-by-step debugging is supported. Venv support is built in.
There are so many conveniences in VS Code that whenever I have to use JupyterLab I feel like a lot of stuff is missing.
I agree with you that the VS Code experience feels superior. It integrates a lot of the other IDE widgets into the notebook experience: code formatting, variable definitions, spell checker, non-garbage-tier code hints, etc. The little timer noting how long a cell takes to run is, by itself, a huge boon.
My only complaint is how whitespace-heavy the VS Code layout is by default. It can probably be customized, but I have never dug into it.
Hitting Escape in normal mode takes you out of editing the cell and into "notebook manipulation mode" instead. This is so counter to the way Vim normally works - Esc should leave you in normal mode no matter where you started - that I found it almost unusable until I realised I could just remap that binding. I made it Shift-Esc and am very happy with it now.
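For anyone wanting the same setup, here's roughly what that remap looks like, assuming VS Code's notebook editor (notebook.cell.quitEdit is the command VS Code binds to Escape by default; adjust the "when" clause to taste):

```json
// keybindings.json
[
  // Unbind plain Escape from leaving cell-edit mode...
  { "key": "escape", "command": "-notebook.cell.quitEdit" },
  // ...and move that action to Shift+Escape instead.
  {
    "key": "shift+escape",
    "command": "notebook.cell.quitEdit",
    "when": "notebookEditorFocused && inputFocus"
  }
]
```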
Also, I always think it's a little sad that Jupyter was one of the best shots for Julia to get more mainstream attention, and instead the notebooks people write are basically exclusively Python.
Also, the Julia people wrote their own notebook system called Pluto, which is so on-brand for them. It might be technically better, but they miss out on the whole Jupyter ecosystem, further isolating the language.
I love that notebooks started as a student hacking together a Python fork and now they're core infrastructure for all these places trying to make sense of GenAI.
The internal tools at Meta are incredible tbh. There’s an ecosystem of well-designed internal tools that talk to each other. That was my favorite part of working there.
Polar opposite of my experience. To achieve the technical equivalent of changing a lightbulb, spend the entire day wrangling a dozen tools which are broken in different ways, maintained by teams that no longer exist or have completely rolled over, only to arrive at the finish line and discover we don't use those lightbulbs anymore. Move things and break fast.
IMO there's a mix of a few really good, widely used, well-supported tools as well as a long tail of random tiny tools where the original team is gone that are cruftier.
Yeah 100%. I found it immensely frustrating to be using tools with no community (except internally), so-so documentation, and features that were clearly broken in a way that would be unacceptable for a regular consumer product. If you have a question or error not covered by an internal search or documentation, good luck, you'll need it. Literally part of the reason I left the company.
People probably think you’re exaggerating but it’s true. Sometimes when I would get blocked the suggestion was to “read the source code” or “submit a fix” on some far flung internal project. Huge fucking waste of time and effort, completely unserious.
That’s how open source already works by default. The difference is if an OSS tool is broken my boss doesn’t imply landing a fix is my responsibility on top of my regular job duties.
> being able to land a diff to fix the issue is awesome imo.
Yes, if it's a one-off. But for my last project that would involve spinning up many "XFNs" (multi-team chat fests) to argue that actually they don't want that change, because of reasons x, y, and z.
At which point you just give up and make a stupid fucking hack.
So much is not about engineering excellence; it's about trying to get people to accept change.
Doesn't sound like your type of company, tbh. The flip side is that a "serious" company will often have broken bs too, except now nobody is going to look at your contribution/fix.
Yes, lmao. The number of times I would start off on some nominally useful task only to find out 3 weeks later that there was actually already a solution to it, created by team XYZ that nobody in my reporting chain had ever heard of… (3 weeks was the optimistic case; I remember a team member getting like 2 months into some new data pipeline before finding out some tables already existed that did what he needed…)
Welcome to Meta, where everything is a murder mystery!
Except you're not really sure if there has been a murder, or sometimes you wonder if you're the murderer, because at every turn you're told that you've been a bad dev for trying x, y, and z.
Same as Google. Many internal tools have painful interfaces and poor or no documentation, because the hiring bar was high and it was acceptable to assume that the user's skill level is high enough to figure it out. That attitude becomes a bigger problem when trying to sell tools to the public (e.g. Google Cloud Platform).
As an outsider, I was always under the impression that Google had a tradition of engineering excellence (robust tools, clean and well-tested code following strict guidelines), while Meta has more of a hacker culture (move fast and break things).
Agreed. I often get my work done using open source build instructions and tools, and then when everything works I port it to internal infra. Other people are the opposite, though, which for open-source-based codebases has a nasty side effect: the work has no upstreamable tests!
But you're both talking about different things. The tools are indeed often left to fall into disuse, lacking documentation, etc. But they also have really tight integration with each other that allows for unparalleled visibility into, and control over, enormous systems with many moving parts.
My opinion: Many Meta tools and processes seem like they were created by former Googlers that sought to recreate something they previously had at Google, during the Google->FB Exodus, but also changed aspects of the tool that were annoying or diverged from their needs. This is not a bad thing.
Since Bento doesn't appear to be usable by the public, a parallel version of this where people can get a feel for cross-tool integration would be Google's Colaboratory / Colab notebooks (https://colab.research.google.com/), which have many baked-in integrations driven by actual internal use (i.e. dogfooding).
For any kind of general Python/C++ work, it's a _massive_ pain.
The integrated debugger rarely works, and it's a 30-minute recompile to figure that out. The documentation for actually being efficient in build/run/test is basically "ask the old guy in the corner". You'd best hope they know and are willing to share.
The code search is great! The downside is that nobody bothers to document stuff, so that's all you've got. (Comments/docstrings are for weaklings, apparently.)
You want to use a common third-party library? You'd best hope it's already ingested; otherwise you're going to spend the next few days trying to get it into the codebase. (Yes, there are auto tools; no, they don't always work.) Also, you're now on the hook to do security upgrades.
One of the crazier things an L4 Meta colleague of mine told me, which I still don't entirely believe, is that Meta pretty much has their own fork of everything, even tools like git. Is this true?
That decision is also illustrative of why they end up forking most things: Facebook's usage patterns are at the far extreme end for almost any tool, and things that are non-issues with fewer engineers or a smaller codebase become complete blockers.
Yes. When I used to talk about this with interviewees, I described how every tool people commonly use sits somewhere on the Big-O curves for scaling. Most of the time we don't really care whether a tool's cost is n or 10n or whatever.
At Meta, N tends to be hundreds of billions to hundreds of trillions.
So your algorithm REALLY matters. And git has a Big-O that is worse than Mercurial's, so we had to switch.
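To make that concrete, a toy back-of-the-envelope in Python (the 10 ns/op constant and the workload sizes are invented purely for illustration):

```python
import math

def hours(n_ops, ns_per_op=10):
    """Wall-clock hours to perform n_ops operations at ns_per_op ns each."""
    return n_ops * ns_per_op / 1e9 / 3600

# A "normal" workload vs. a Meta-scale one.
for n in (10**6, 10**12):
    print(f"n={n:.0e}: O(n) ~ {hours(n):.2e} h, "
          f"O(n log n) ~ {hours(n * math.log2(n)):.2e} h")

# n=1e6:  both finish in well under a second.
# n=1e12: O(n) is ~2.8 hours, and the log factor alone multiplies that by ~40,
#         so the same job with an O(n log n) tool takes ~4.6 days.
```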
I'm gonna disagree with you there. The difference was with stat patterns, and the person at Facebook who ran the tests had something wrong with the disk setup that was causing it to run slowly. They ignored multiple responses that reproduced very different results.
The nail in the coffin on this was a benchmark GitHub ran two years ago that got the results FB should have gotten: git status within seconds.
Facebook didn't use Mercurial because of Big-O; they used it because of hubris and a bad disk config.
> Facebook didn't use mercurial because of big O, they used it because of hubris and a bad disk config.
Half-remembering a blog post I read: the git maintainers also wouldn't give Facebook the time of day on code changes to accommodate FB's requirements. Mercurial was more amenable. This also disproves the "Facebook has a fork of everything" claim, because they attempted to upstream the changes they wanted.
I should probably just write it up into a post, but the git mailing list at the time is the source (I remember reading it from the side a few months after convincing our VP R&D to switch from svn to git). We were chuckling around the same time that FB had to reallocate the stack on Galaxy S2 phones because they were somehow unaware of proguard or unable to have it work properly with their codegen.
I recall there being a message from someone either at AirBnB or Uber who mentioned that they have a similar monorepo but without the slow git status, but can't seem to find it now - it's likely on one of the other mailing list archives but didn't make it to this one.
Point being that painting this as "the community was hostile" or "git is too slow for FB" is just disingenuous. The FB engineer barely communicated with the git team (at least publicly), and when there was communication, it was pushing a single benchmark that was deeply flawed, then ignoring feedback on how to improve the performance of slow blame and commit by repacking checkpoint packfiles (a one-off effort), and also ignoring feedback that the benchmark numbers didn't make sense in absolute terms.
If git is blocking you, you are using it wrong. Lotta instances of people treating it as an artifact repository. Use it correctly with a branching strategy that works for your use case and it's bulletproof.
Plenty of other customers with the same magnitude problems as Meta are using Git perfectly fine.
Yep. Zeus is a fork of Zookeeper, Hack is a fork of PHP, etc. It's usually needed to make it work with the internal environment.
The few things that don't have forks are usually the open source projects like React or PyTorch, but even those have some custom features added to make it work with FB internals.
Google also maintains a monorepo with "forks" of all software that they use. History diverges, but is occasionally synchronized for things like security updates etc.
Few companies experienced the explosive growth FB did, though many will claim to have done so. As I recall, Hack made the existing PHP codebase scale to insane levels while the overall company was still reaching the escape velocity needed to even attempt transitioning away from, or shrinking, the PHP codebase (I was an SRE, not a dev).
My memory is hazy, but when I started I was called an SRE. Then someone made up the term "app ops". Then it was production engineering. Then I called myself an SRE again when I was interviewing to leave FB.
Oh, and I was called a DBA before being acquired by Google.
Meta doesn't use git; it uses Mercurial. It does fork it, because they have a huge monorepo. They created the concept of stacked commits, which is a way of working without branches: each commit sits in a stack and is then merged into master. Lots of things there are built for scaling.
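If you want a feel for the stacked workflow without internal access, Sapling (the Mercurial-derived client Meta later open-sourced, https://sapling-scm.com/) exposes roughly the same model; a sketch using commands from its public docs:

```sh
sl commit -m "step 1: add new schema"   # each change is one commit in a stack
sl commit -m "step 2: migrate callers"  # stacked on top; no feature branch
sl prev                                 # move down the stack to step 1
sl amend                                # rewrite it; descendants are restacked
sl next                                 # move back up to step 2
sl                                      # bare `sl` shows the smartlog (the stack)
```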
Left-pad broke things because the creator pulled the code from the public registry, not because of a destructive code change.
I assume all of the big tech companies host internal mirrors of every single code dependency + tooling. Otherwise they could not guarantee that they can build all of their code.
A friend of mine is doing his PhD while being an intern at Meta. He does not share your excitement... at all. To summarize his complaints: a framework written a long while ago with design flaws that were cast in stone, which requires exorbitant effort to accomplish simple things (under the pretense of global integration that usually isn't needed, but even if it were needed, would still not work).
> A friend of mine is doing his PHD while being an intern at Meta
I interned thrice as a PhD student at FB. Your friend isn't entirely wrong, but he also just doesn't have enough experience to judge. All enormous companies are like this. FB is far and away better than almost all such companies (probably with the only exception of Google/Netflix).
Agreed. I'm reading some complaints in the thread about being told to "just read the source code" for internal tools at Meta. When I worked at Apple we didn't even get the source code!
I don't see why saying that Facebook's tools are bad should be invalidated by saying that Google's or others' tools are bad too. Google being bad doesn't vindicate or improve Facebook's tools. There's no need for perspective: if it doesn't work well for what it's designed to do, then that's all there is to it.
lol bruh, read my response again - FB's and Google's and Amazon's tools are lightyears ahead of #ARBITRARY_F100_COMPANY's. You haven't a clue what "bad" means if you've never worked at a place that has > 1000 engineers.
How long has he been interning? Is it long enough for him to have learned the timescale big-tech roadmaps operate on? If he wants a feature, he'd better write it himself (if his PR doesn't conflict with an upcoming rewrite, coming "soon"), or lobby to get it slotted for the second quarter of 2026.
He started right about the time COVID started, so... about four years now, I think. I'm not sure if those were contiguous though.
I'm not sure what your idea about PRs and features has to do with the above... he's not there to work on the internal infra framework. He's there for ML stuff. Unfortunately, the road to the latter goes through the former, but he's not really the kind of programmer who'd deal with Facebook's infrastructure and plumbing.
The point is, it's inconvenient. Whether it's inconvenient because Facebook works on a five-year-plan basis or for whatever other reason doesn't really matter. It's just not good.
I also have no problem admitting that all big companies (two in total, one being Google) I've worked for so far had bad internal tools. I don't imagine Facebook is anything special in this respect. I just don't feel like it's necessary to justify it in any way. It's just a fact of life: large companies have a tendency to produce bad internal tools (but small ones often have none whatsoever!). It's a water-is-wet kind of thing...
> I'm not sure what your idea about PRs and features has to do with the above... he's not there to work on the internal infra framework.
My idea is that if he's not making the monorepo codebase changes himself, he's going to wait an awfully long time for any non-trivial improvements he'd like, because the responsible teams have different priorities sketched out for the next calendar year. It's a function of organization size: unless you have the support of someone very high up the org chart, ICs can't unilaterally adjust another team's priorities.
Looking at some of the bureaucracy in their open source projects, I'd say that they need less tooling and more thinking. These tools help to keep spaghetti code bases from imploding totally.
Uuuh, can you tell us a bit more about wasabi, the Python LSP? I saw a post years ago and have been eager to see whether it'd be open-sourced (or why it wouldn't be).