Software Engineering at Google (2020) (abseil.io)
344 points by nvahalik on Aug 14, 2023 | 337 comments



Cool. Could someone, maybe an ex-googler, comment on which parts of these work well and which don't?

A lot of other companies get into trouble trying to cargo-cult what Google does when they are operating in very different environments wherein those practices aren't optimal. E.g. different levels of scale.

Additionally, critics of Google may point out that their engineering culture may not be great on its own terms -- every time Google launches a new feature, people post links to the Google product graveyard.


In my experience as an ex-Googler, here are the stages of denial about something Google does that is good but that the industry doesn't yet embrace.

Stage 1: "We're not Google, we don't need [[whatever]]";

Stage 2: The foreseeable disaster that [[whatever]] was intended to address happens;

Stage 3: Giant circus of P0/SEV0 action items while everyone assiduously ignores the [[whatever]];

Stage 4: Quiet accretion, over several years, of the [[whatever]] by people who understand it.

And the [[whatever]] ranges from things that are obviously beneficial like pre-commit code review to other clear winners like multi-tenant machines, unit testing, user data encryption, etc etc. It is an extremely strange industry that fails to study and adopt the ways of their extremely successful competitors.


Strong disagree. In my experience, this is not commonly why competitors don't adopt Google's practices. The main reasons I've seen are:

1. Money. Google essentially has a giant, gargantuan, enormous, bottomless pit of money to build a lot of this tooling (and also to take the risk if something ends up not working out). I think you might be able to say that other companies are just being short-sighted if they don't implement some of these things up front, and that may be true, but (a) that's pretty much human nature, and (b) given that very few other companies have a bottomless pit of money like Google, that may just end up being the right decision (i.e. survive now and deal with the pain later).

2. Talent. This is closely related to #1, but few other companies have the engineering talent that Google does. If there is one thing I've seen in my experience with ex-Googlers, it's that most of them are fast coders. So when you go to your boss and say "I'd like to implement engineering/tech-debt improvement XYZ", at other companies it's a harder decision if (on average) it would take 9 months to implement vs. 2 or 3.

3. Related to both of the above, but your 4th bullet point, "Quiet accretion, over several years, of the [[whatever]] by people who understand it.", is actually other companies just waiting for more evidence to see what "shakes out" as the industry-standard, optimal way to do things.

4. Finally, your stage 1, "We're not Google, we don't need [[whatever]]" is actually true in tons of cases. Many of Google's processes are there to handle enormous scale, both in terms of their application/data capacity, as well as the sheer number of engineers they need to coordinate. Very, very, very few companies will ever hit Google's scale.


I will add one more category - companies that have or need scale but nowhere near the technical talent that Google has.

Eg - most telco companies in the US run at a scale similar to Google. They need most of the software engineering best practices, internal tools teams, etc. They used to have all that in the Ma Bell days, when they had a cash cow. That cash cow no longer exists, and they're left with the scale plus points 1-4 described above.

This in general leads them to outsource to lowest bidder contracting firms that compound the shitty software problem. In the end it’s a miracle that all of it works together :)


Eh, money is only a factor when it comes to scale. That is, Google can afford to hire 30 engineers to support their CI infra, you can't.

Everything else isn't. Unit tests aren't a luxury that Google's infinite riches allow it to have - they pay dividends whenever code exists for more than a few weeks.
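To make the "dividends" concrete, here's a minimal sketch of the kind of cheap test that keeps paying off. The function and its edge cases are hypothetical, and it assumes pytest as the runner:

  # test_pricing.py -- hypothetical example for illustration
  import pytest

  def apply_discount(price_cents, percent):
      """Return the discounted price, rounded down to whole cents."""
      if not 0 <= percent <= 100:
          raise ValueError("percent must be between 0 and 100")
      return price_cents * (100 - percent) // 100

  def test_apply_discount():
      assert apply_discount(1000, 0) == 1000
      assert apply_discount(1000, 25) == 750
      assert apply_discount(999, 50) == 499  # integer division rounds down

  def test_apply_discount_rejects_bad_percent():
      with pytest.raises(ValueError):
          apply_discount(1000, 150)

Six months later, when someone "simplifies" the rounding, the third assert catches it before it ships.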

You can bet your ass Google engineers don't write unit tests for throwaway code.

CI saves time, and while Google can maintain a whole team for it, you can at least afford to pay for Jenkins or GitHub Actions, because not paying for them is more expensive, if your company is to survive for more than 3 months.


CI can totally cost time, especially if it doesn't have a team of good engineers keeping it running sanely and ensuring it tells you useful things on failures.

A CI bot which waits 24 hours then says "no", with a text file that crashes your browser and ultimately only contains the information 'exit code nonzero', and which fails for reasons totally unrelated to your code change, is dubious as a value-add system.

If that bot is also a non-negotiable gate on shipping things you get a bunch of other antipatterns, like massive code patches to decrease how often you have to roll the die and a tendency to hit retry every day or so until the probability that it's actually your patch that's broken gets high enough that you try to debug it locally, at which point you may be unable to reproduce the blocking error anyway.

The real question is whether that pathologically rubbish implementation is still better than shipping without CI, which rather depends on whether your engineers ship code that works without the guide rails, which to a fair approximation they do not.

Thus it might still be a net win for product quality but saving time is harder to see.


Setting up a decent CI pipeline on GitHub with GitHub Actions is super, super easy - less than a day's work for a basic initial implementation.

Of course, the difficult part about managing a CI pipeline is writing quality tests, ensuring your tests don't take forever, deciding the right balance between mocking out 3rd party runtime service dependencies vs. calling their dev versions in your tests, etc.

But this is why I argue that the bare minimum should just be to have the CI pipeline created. If you don't have that, you are definitely going to screw over future you. Once that's there you can balance the cost/benefits of how much to invest in your test suite and test coverage.
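To sketch that mocking trade-off (all the names below are invented): stub out a third-party client in unit tests with unittest.mock, and save calls against the provider's dev/sandbox environment for a smaller set of integration tests.

  # Hypothetical example: CheckoutService depends on a third-party
  # payment client; the unit test substitutes a mock for it.
  from unittest import mock

  class CheckoutService:
      def __init__(self, payment_client):
          self.payment_client = payment_client

      def checkout(self, cart_total_cents):
          result = self.payment_client.charge(amount=cart_total_cents)
          return "ok" if result["status"] == "succeeded" else "failed"

  def test_checkout_success_with_mocked_gateway():
      fake_client = mock.Mock()
      fake_client.charge.return_value = {"status": "succeeded"}
      svc = CheckoutService(payment_client=fake_client)
      assert svc.checkout(1299) == "ok"
      fake_client.charge.assert_called_once_with(amount=1299)

The mock keeps the suite fast and deterministic; the cost is that it won't catch the provider changing its API, which is what the occasional dev-environment test is for.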


As I said in another comment, I think folks are just disagreeing on terms. I absolutely don't consider things like unit tests or CI to be any kind of "Google-specific" engineering advice - they're just standard good engineering practices.


True, but what I was trying to introduce into the discussion was what another sibling commenter astutely labeled the anti-cargo-cult: the industry's feeling that anything done at Google is an anti-pattern, even when the practice has long been firmly established among other successful software developers. And in my experience comprehensive unit testing is one of those things that I have sometimes heard waved away.


I don't think Google is the only one writing unit tests.


Just discussing your point #1, I hear this, but what I see is that the companies I have direct experience with spend much more and move more slowly with their we-are-not-Google hacks. People move fast and break things into a corner where their entire project is a haunted graveyard with no tests and no comments, that has never been reviewed, and at that point nobody is allowed to change anything.


Perhaps, but to me everything you put in your comment above just sounds like bad engineering practices in general and not something particularly related to Google processes.

E.g. things like "do feature work on a branch and then code review/run PR checks before merge", "have unit test coverage (being a hard balance to judge what is sufficient coverage)", "have useful comments" - absolutely none of these things I associate with "Google engineering practices", and many of them definitely predate things that were specifically done at Google.

Things I think of when I think about Google practices are things like ensuring data is infinitely horizontally scalable, monorepos, etc. Those things are all scale-specific.


Monorepos actually work better at small scale than at Google scale. I think it's nuts that individual startup founders actually consider microservices; if you are validating out a software product idea, write a computer program, the very simplest one possible, to prove that you can do it and get the general shape of the architecture before you start dividing it into microservices.

I usually see the pressure to split into microservices appear around 20 engineers, just as your single repo is starting to get unwieldy. Knowing that the big companies use a monorepo is pretty important information here, because it may prompt you to invest in tooling to make that one repo less unwieldy rather than splitting into many small repos that will be very difficult if not impossible to merge back together again.

Google doesn't actually plan for infinite horizontal scalability in data. The framing I've found most useful is [Jeff] Dean's Law: "Plan for a 10x increase in scale, but accept that sometime before you reach 100x, you will have to rewrite the system with a different architecture." The reason for this is shifting bottlenecks: as the system gets larger, different aspects become the bottleneck to future scalability, and each time the bottleneck changes you usually need a different architecture. But by planning for an order of magnitude growth, you ensure that you're not artificially introducing bottlenecks, and that you have enough headroom to actually discover the new bottleneck.


Re: monorepos, I think we're talking about 2 different things. I usually hear the term "monorepo" discussed in the context of how it is practiced at places like Google and Facebook: having the code for all the company's services (micro or not) stored in a single source control repository.

A monorepo really doesn't have anything to do with how code components are deployed - your comment seemed to be contrasting a monolith architecture with a microservices one.


I was fast & loose with terminology, but I'm thinking of the organizations where every binary and every library is its own GitHub repository, and you make copious use of git submodules to build anything. I think that's the same thing you're talking about, right?

It's impractical (particularly when the project is young) for the same reason having separate binaries is impractical: it makes it very difficult to do refactorings that cut across repos while still keeping atomic checkouts and rollbacks.


Yep, we're talking about similar things but in slightly different ways:

1. When you're small enough, and only have a few teams and/or deployed binaries, you keep everything in a single repo.

2. As you grow with more teams and more products/binaries, often companies will split into having separate repos per team/product, and then use some sort of dependency management tooling (either something like the git submodules you discuss, or a private package repository, e.g. GitHub's Package Registry, Nexus, or Artifactory). I totally agree with you that a lot of companies do this prematurely.

3. What distinguishes "monorepos", in my opinion, is that this was an innovation at either Google or Facebook I think (not sure who was first), where they realized exactly what you point out - it makes it really hard to do refactorings across lots of dependent projects in different repos. So they decided to keep everything in a single source control repository, with a single commit history. But in order to do that and have things be sane with lots of teams and thousands of developers, they needed to invest a ton in custom tooling to make these giant monorepos usable by individual developers and teams. E.g. Facebook has Sapling, Google has Piper, and there are open source tools like Lerna for JS monorepos.

So, in my experience, just having everything in a single repository but without any special tooling (because you're small enough to not need it yet) is just a repo. Monorepo IMO implies that you've grown to the point where it's difficult to keep everything in a single repo without special developer workflow support tools.

All that said, I definitely agree with your main point - a lot of companies can just keep everything in a single repository a lot longer than they think they can, even without special tooling beyond some separate build targets.


Yea--I've found it surprisingly difficult to get plain vanilla medium-sized companies to adopt obvious, time-tested best practices. I'm not talking about "Google engineering practices" but basic table stakes practices like using source control and a bug tracker. So-called "Joel Test"[1] items. The most common excuses are: "We don't have time/money to do infrastructure/process, we need to write shipping code!" and the usual "We've always done it this way".

1: https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...


It is honestly baffling, totally baffling to me that in this day and age there are companies that don't use source control or a bug tracker (but trust me, I believe you).

With these companies I think it's just best to walk the other way. Some of them may state that "yeah, we know we need to improve our engineering practices, so we're willing to learn" but they usually just have so much of a different mindset of what it takes to actually run a software company that it's just a waste of time. There are many more companies that have a modicum of understanding about what they're doing.


These rarely are "software companies". They're companies in other industries, that happen to need some software. Sometimes it's a pretty plum gig: Good, but not great, pay, often in relatively lower cost-of-living areas, a relatively light workload, a good amount of autonomy as one of maybe a handful of software devs, and in their blindness to good practices (like source control), they're also untouched by common bad practices in software, like whatever bastardized version of agile/scrum that your bosses heard through an extremely lossy telephone game.

But there's also the bad: Software isn't the company's focus, so you aren't the company's focus. That means no "Senior FAANG" salaries, no best practices to keep things sane, and often you find yourself working on a codebase that was originally hacked together in a week by a chemist who may or may not have been deliberately huffing reagent fumes.


Tests, comments and code reviews are not something unique to Google; they're commonly accepted practices. There might be some dark corners, just like there are people who do version control with the ctrl+c, ctrl+v technique, but it's not the norm. I don't think that many people would argue against basic software development rules. However, being Google is much more than writing tests and doing code reviews.

Being Google means having a team which writes source control management system for you.


One conspicuous omission from the ex-Googler commentary is reflection on killed products like Google Wave, Plus, Glass, etc.; for many of them the [[whatever]] was the gross imbalance of Eng owning the product while ignoring the userbase.

What ex-googlers often fail to grapple with is the product lifecycle (how short it may be) and the value of having diversity in the loop of product testing. Google is designed to be a safe place to focus, and that’s not what the real world is like outside the plex.


Actually I think it is never-Googlers who have the wrong perspective here. The fact that Google constantly produces and destroys products demonstrates that it is extremely easy for that company to churn out code, and validates their software development methodology. It's incredibly easy to just dash off a product building on their gigantic foundation of source code, infrastructure, and launch process.

The fact that Plus and Glass got canceled and Wallet has been canceled sixteen different times is merely a consequence of the fact that leadership and product is often led by imbeciles. That's an organizational problem and I hope nobody is out there cargo-culting Google's org (even though I know they are, with OKRs and Perf being widely copied).


Exactly, Google engineers deflect user issues and product failures to the leadership and non-engineers. That happens in any large team, but Google has sweetened the situation for engineers to keep them focused on engineering rather than the larger consequences. E.g. credit cards are just fine, nobody actually wants to see ads, etc. It’s the user’s fault for failing to see the esoteric details behind the thing.


Google's penchant for killing promising products is 100% the result of poor incentives. People are incentivized for launching challenging projects, but they are generally not responsible for the bottom line (which is going to be dwarfed by Search Ads revenue anyway) or for user happiness & brand loyalty (which is challenging to measure). As a result, lots of promising and exciting products are brought to market and then killed, as the easiest way to bring new products to market is to cannibalize the stuff your predecessors did and show how great your alternative is instead.


I'm not sure why your comment was previously downvoted. I've often heard, and it's not hard to find these comments from ex-Googlers on HN, that Google's "promotion-oriented development" is one of their biggest factors in some of their cultural shortcomings. That is, launching a big new product is seen as one of the best ways to get promoted, while working on the little nits (which in my experience, especially with some of Google's enterprise products, can languish for years, even though they can be really important but "boring" issues to fix) is not seen as high-value work.


Clearly there is a missing link in your experience: companies where you don't have control over a lot of the things you think are needed, and where the business wants to push a product out as fast as possible and cut as many corners as possible.

More and more companies will be like this to cut costs.


I think it's also the ability to have a deep bench of coding talent who just get to work on the toolchain. Most companies ration that talent to the product, shipping features that drive revenue.


Off-topic for this thread, but one of the most poignant quips I remember about Google culture was that the performance-review process was really good at rewarding hard, challenging work that didn't produce much value and not very good at recognizing work that produced lots of value but was not astoundingly difficult. I think you were the one who first noted this.


I recently explained Google's perf process to an employee of the US federal government, and was told that the performance review and promotion processes in the government were simpler and less wasteful.


Is the government getting good results out of their process? Remember when a bunch of ex-Google engineers had to step in and save healthcare.gov? If their simple promotion process works, why didn't they curate that talent in-house?

People at L4 might not really like Google's process, but if some Distinguished Engineer shows up at your design review you're pretty much guaranteed to get some sort of valuable feedback. That is not a given in other organizations.


Good people don't work for the government because it pays poorly, not because its promotion process is wonky. The usual strategy in any high-paying field for government-track people is to work there for a few years to build credibility, then transition to being a consultant for a 100+% raise (or move to private industry completely).


Wasn't healthcare.gov outsourced to contractors in the first place? I'm not sure that government actively maintains much in the way of dedicated employees or teams to build stuff like this.

The problem lay with picking the cheapest contractor bids. If anything, the FAANGs should establish consulting arms for this kind of work.


The performance review process has a small impact on salary.

The promo process is not based on value or difficulty, but on the size of the organization that one is running. This is also true for higher level ICs, except they do not manage people, but rather manage/lead projects (which then have a certain amount of people involved).

Here's a rough breakdown:

  - L4 -> 1 person
  - L5 -> 1-3 people
  - L6 -> ~7 people
  - L7 -> ~25 people
  - L8 -> ~70 people
The approximate 3x difference between the levels is also found in other organizations, for example in armies: division ~= 3 brigades, brigade ~= 3 battalions, battalion ~= 3 companies, company ~= 3 platoons.

Misunderstanding this is the source of almost all frustration with the promo process. This process is designed to build and expand the organization, not reward awesomeness. There are of course deviations from the simple schema I listed here, but this is the hard reference point.


Perf feeds into promotions, which are the real way to raise your long-term salary (both inside and outside of Google).


This is a terrible process, unfortunately. Raising the salary should be related to the usefulness of the person to the company, and not the breadth/impact of their work. This leads to terrible things like gaming the system to get high-impact/leadership projects to get raises, which comes with huge side effects, like projects getting abandoned fast or being deprecated in favor of new shiny promo-bringing things.

But this is not just a Google specific issue, and it is quite widespread in the industry. Google however suffered from this especially due to its obsessive culture of pay-for-perf and by ignoring simple facts:

- Inflation means that salaries should rise regardless of performance. If you tail the market by adjusting salaries only when the market changes, then you are at least a year late. This isn't a problem in an economy with low inflation, but it is a huge problem in one with much higher inflation.

- There is a significant number of people needed to maintain projects that won't show large impact. Those people need to be at the very least recognized and compensated.

- Making new products is great, but it requires huge amounts of resources to do at Google scale from the get-go. A wider strategy is much needed, which Google obviously lacked for almost a decade.


Perf is almost irrelevant for promotions beyond level 5.


That was not my experience. A long run of good perf scores is clearly not sufficient for a promotion at that level, but it is necessary.


There's a lot of external complaining about perf at Google. My experience is that most of these complaints are wrong. I've personally had two reports fail to get promoted to L6 off of projects that were very difficult and executed well but for various reasons did not have the impact that we expected.


The problem is at the VP level, not the L6 level. It's not that impact isn't considered, it's that impact is relative to the current org goals of the moment, because it's evaluated by your peers that have presumably all bought into the current org goals of the moment (if they haven't, they will probably be fired or sidelined soon). However, there's very little feedback between things users care a lot about and things executives care about. You're rewarded for doing things that your VP deems important. Your VP is almost certainly not going to sweat the small stuff (although I do know a couple that try). It's impractical for someone with 2000 reports to keep up with every little bug in their product area, and they would be a terrible micromanager if they did. So what usually happens is that they call out the few annoyances they happen to see when using the product, everybody in their org jumps on fixing those because that will get them promoted, and everything not noticed or not specifically called out by a VP languishes.

How would I fix it? Not get to the point where the shots are being called by people with 2000 direct reports, for one. Software has distinct diseconomies of scale, where you have a much smaller loop between "Notice a problem. Identify who can solve the problem. Solve the problem" in a small organization than a big one. But that ship has sailed.

Failing that, I think orgs need to adopt quantitative measures of impact (eg. X tickets closed, X customers helped, X new sales generated) along with backpressure mechanisms to ensure that those metrics are legitimate (eg. you can't just create new bugs to solve them; you can't just help customers only for them to need to return tomorrow; you can't generate new sales that are unprofitable).


I do agree with this criticism. There are impactful things that go unrewarded because of org mismatch and that's dumb. My team has been affected by this very problem and it is very frustrating to look at a list of things that I know to be very valuable and be told "don't do that, it is misaligned with org priorities."

But this is "some impactful stuff is unrewarded" rather that "hard stuff that isn't impactful is rewarded", which was the complaint in the post above mine.


The "hard stuff that isn't impactful is rewarded" is relative to the overall marketplace, including conditions outside of Google. Starting a new chat client that replaces Google's other half dozen chat clients may be challenging, but it has little benefit to either users or Google.

Good entrepreneurs go into markets where the existing alternatives suck, and make them radically better. Most Google executives and PMs (most executives, period) go into markets that their company and all of their competitors already have pretty decent alternatives for, so the true impact is pretty limited.


I miss you too, raldi.


The tools, design and manpower needed to build a skyscraper are different from those needed to build a 1-story wood house. It's not that the ones that build the wood house are failing to study and adopt the ways of their extremely successful competitors.

Now, some of the things you mention, like unit testing and user data encryption, are ones that I've never seen associated with the "We're not Google" mindset, so maybe people have started using that phrase for anything now.


"were not google" is usually good for things where people are using cargo cult. I saw at one company that went open floor plan because google did it. No one was happy about that. Retention became very low and everyone bailed out. Emulating google does not fix process and management issues. As what may be at google for a good reason may be an utter failure at another company. There are things all shops can adopt that google does that would help them. But many of the ones I have seen adopted were little more than showy garbage instead of the things that would actually help.

Also sometimes you just need a simple tool to get something done. As engineers we like to build things, so sometimes we make them way more complex than they really need to be. For someone like Google that may be fine. For others a minimum viable product may be in order. Do not worry about optimizing for the 3-million-users-per-day case when you have 10 total users a month. Add logging and keep an eye on it. Then worry about whether you need to scale. Building for scale takes time and thought, and many times you do not need it at all.

As your company/group grows you will take on more and more of the things 'google does' because you will need to, or you will go nuts chasing everything. You could probably even make stages out of the different times to do/evaluate things. To do it early could actually harm what you are doing. You need to evaluate what you are doing and why. Just copying someone else does not always lead to a good outcome and you could be wasting effort when you could be making product.


> "were [sic] not google" is usually good for things where people are using cargo cult

It's no different to cargo culting. That should not be the reason for not doing something, any more than the opposite should be a reason for doing it. Just see if the practice makes sense in your context and decide that way.


"We are not Google" is the answer to "I saw on that blog here that we should do X", "everybody is doing X, we should too", or "you have to follow this good practice here" where the practice is only "good" because it's hyped.

Those kinds of demands happen exactly because they saw them at Google, and an outright refusal is exactly how they should be dealt with. Once the unreasonable person is cut out, you can look at your context and decide what's the best way to solve the problem.


I think you wanted to reply to the same person I replied to? Since I'm saying basically the same thing you do I believe


I'm not really talking about artisanal 3-man software shops, I'm talking about mid-sized companies with thousands of engineers, who don't realize they are already larger than and facing the same problems as Google was when they started adopting these practices. And to be clear, rejecting something as proven as pre-commit code review is not only to reject the example of Google and many other very successful enterprises, but also to ignore decades of developer productivity knowledge before Google existed. It's almost like the fact that Google adopted a long-standing best practice makes modern engineers reflexively revolt against those best practices. This can only be seen as a structural advantage for Google.


> I'm talking about mid-sized companies with thousands of engineers

Can you name such mid-sized companies with thousands of engineers? If you hit thousands of engineers headcount, you are not a mid-sized anymore.

Theoretically, Google does 100 things right and pays for those 100 things, but Google also has tons of cash: if Google doesn't release a product in Q1, no worries, they will release it in Q3.

Now consider a startup with 50 engineers: if you don't release the feature in Q1, you might need to stop the project, because the customer with whom you signed the contract just goes away and you will be laying off 5 people.


mid-size with thousands of engineers? Wow, mid-size for me is around 100-200 people :)


Curious how pre-commit code review worked, could you please elaborate a bit?


Every change is reviewed by someone other than the author before it lands in the repo. At Google they take this a bit further. Every change has to have been either written or reviewed by a designated owner of the code (designated by subdirectory) and one of the participants must be a qualified user of the languages used in the change ("readability"). And they have technical measures in place to ensure that programs running in production descend exclusively from reviewed and committed code.

Pre-commit review is common but not universal in the industry. Some shops practice post-commit review or no review. Some believe review consists entirely of quibbling.
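For a flavor of the "designated owner" part, here is a hypothetical OWNERS file in the style used by Chromium and similar open source projects (all names invented; Google's internal syntax differs in its details):

  # OWNERS file for a payments/ subdirectory -- changes under this
  # directory need approval from one of the owners listed here.
  alice@example.com
  bob@example.com
  # A narrower rule for one sensitive file:
  per-file crypto_util.cc=security-team@example.com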


Oh, I see - by "pre-commit" it doesn't really imply it in "Git commit" sense - the change is still propagated to others (presumably by committing it as a sort of "draft"), it's just not committed to the mainline - is that correct?

I'm very familiar with CR at other companies, but tbh since most use Git, I wouldn't call that "pre-commit" but "pre-merge", if you will - unless I misunderstood and it really is pre-commit at Google (i.e. the changes are not even _committed_ to the repository - and then I'm confused once again at what exactly that means..)


Google (for $reasons) doesn't do long-lived code branches and doesn't use git, at least for the main repo. So in that context every commit is reviewed pre-commit, but you'd get the same workflow elsewhere with trunk-based development, small pull requests, CI, and automated and human review of all PRs before they're merged.


Right, I know it’s not on Git and hence was my question - and it sounds like this is more about terminology and less about technology. I.e. what Google does in this case is not that different from just “Code Review” in the traditional sense, as most other companies (with good engineering practices) do - reviewing code before it enters production (+CI/CD, as you mentioned).

Edit: as OP mentioned, it does seem to differ in technical sense from traditional CR, in that the changes live only on developer machine, not in source control.


> Edit: as OP mentioned, it does seem to differ in technical sense from traditional CR, in that the changes live only on developer machine, not in source control.

Yes and no. There's a decent whitepaper on Piper and citc you can find by searching for it (or actually I will :P [0]). As far as Piper is concerned they aren't checked into source control, but the vast majority of development happens in "citc" workspaces, which aren't source control, but also are source control in a sense that every save is snapshotted and you can do point-in-time or point-in-history recovery, and I can open up a readonly view of someone else's workspace if I know the right incantations, and most of the developer tools treat the local citc workspace enough like a commit that it's transparent-ish.

[0]: https://cacm.acm.org/magazines/2016/7/204032-why-google-stor...


OK, thanks for these details and the link! Somehow I remember the title, but not the content :-) That'll be an interesting read.


> and I can open up a readonly view of someone else's workspace if I know the right incantations

So that's not just a basic feature?


It's the equivalent of me being able to view the local, unpushed changes you have in whatever directory you git cloned into.

If that sounds somewhat magical, yes, correct.


The normal workflow is to create a changelist and then have somebody patch that changelist into their client rather than accessing their client directly.


> as OP mentioned, it does seem to differ in technical sense from traditional CR, in that the changes live only on developer machine, not in source control.

Which seems worse.


"On the developer's machine" isn't correct.

They are actually saved within a special file-system called "citc" (Clients-in-the-cloud). It saves literally every single revision of the file written during development. If you hit save, it is saved in perpetuity, which has saved me a bunch of times. Every single one. No need for any kind of commit or anything else.

Further, these saved revisions are all fully accessible to every engineer within the company, any time they want.


Yeah, there are no direct translations between git and perforce concepts. The right term within Google would in fact be "pre-submit" not "pre-commit". Before a change is submitted in the Perforce-derived flow it exists only in the author's client and isn't really part of source control in the way that git users are accustomed to pushing their branch to origin.

NB: At that company there are also users of git-compatible and hg-compatible tools, but I am discussing the main perforce-derived flow.


Oh I see, thanks! I was under impression Google has migrated away from Perforce towards an in-house system a while ago, but looks like I was mistaken (or do you mean that system is derived from Perforce?). Edit: I guess its name is Piper..

It’s quite interesting/mind-bending to think of work-in-progress that’s still somehow synced between peers (in fact this is one of those “missing nice-to-haves” I wish Git had, and can only be approximated with wip branches..)


The synced-between-peers features are built atop a thing called CitC, or Client in the Cloud. An author's client isn't on their machine, it's hosted in prod.


OK got it, thanks for clarifying it


The real problem with code review is that if people don't do it/just hit sign off, it's worthless. Your whole company has to believe.


Pre-commit means before committed to the canonical repo, not before commit locally.

The SPDK project has an elaborate pre-commit review and test system all in public. See https://spdk.io/development . I wouldn't want to work on a project that doesn't have infrastructure like this.

Even mailing lists with patches are really a pre-commit review system, as are GitHub pull requests. Pre-commit testing seems more elusive though.


Or, alternately phrased:

As a company grows and matures, their software development processes evolve to meet the business needs.


I just wish more places would adopt `third_party`; I would also love reproducible builds but I'll settle on third_party.


There is a risk of selection bias here. The companies that run into [[whatever]] are the ones that made it far enough to have run into it. What you're not seeing are all the companies that tried to do what Google does at scale, built a complex code base that doesn't serve its customers' needs, couldn't innovate fast enough, and are now dead.


> pre-commit code review

Unless you're referring to automated precommit hooks, this sounds baffling. What's wrong with reviewing pull requests? What if I want to push a WIP while I switch to another branch, I still need a review? Is the final PR reviewed again at the end?


What they mean is code review prior to merging into what you'd call main or trunk or master or release, not for committing your WIP changes or whatever (unless you want those merged at that stage).


That's a git user's perspective and Google doesn't use git or anything analogous. Under their system, and generally under Perforce, it is never necessary to "push a WIP" because your client just contains whatever edits it contains. You never need to manually checkpoint, stash, or commit. People with multiple changes in flight will usually use two different clients, one for each change, although that is not strictly mandatory and in the perforce model you can have disjoint sets of files in multiple changes in the same client.

Anyway, TL;DR, the problems you suggest are git-specific and one solution to them is not using git.


Readability doesn't help Google or anyone else, it's a pure "inmates running the asylum" artifact.


I'm sorry, are you saying Google invented multi-tenant services, unit testing, or user data encryption?

I'll give you "pushed the WEB industry to have transport-layer encryption for the entire industry by default".

I'll even give you "code reviews".

But not the first 3.


label the tradeoffs?


"Readability" works terribly when your company is acquired and your team enters all at the same time.

Google has (or had, ~10 years ago) a thing called "readability" for each language, where in order to be allowed to commit code to the central "google3" repo, you needed to have written some large amount of code in that language and have a readability reviewer sign off on your code. The process is designed for slowly on-boarding junior people into a team and introducing them to Google coding style and practices. E.g., the senior, mentoring folks on the team do the reviews and bring the new person up to speed. I imagine it must work well in that context.

However, this breaks down when your entire team is new. How do you find somebody to review the code? All several million lines of the product that was acquired? Especially when it is written in multiple languages.

So we were basically locked out of the main corporate repo, unable to do anything productive. We finally figured out that there was a paved path with a git repo used by the kernel team (and android?) that had none of these hurdles, where we could put our code and get productive immediately.


"Readability" is very much still a thing. It's a mess and would be one of the worst things to take from Google. If you can't enforce the code style you like through autoformatters and linting, it's not worth enforcing.


I kind of disagree in the sense that readability indirectly forces someone who has been at Google for a while/ is more experienced to have to sign off on new people’s code. Without it, you could have some very junior members with OWNERS reviewing other very junior members’ code.

And there is more to style than just linting, IMO. For example in C++ there are some complex macro-based test predicates that are hard to learn and use but which greatly simplify/improve on naive testing. Part of the point of C++ readability is that people who understand this stuff teach new people how to use them, or at least introduce them to the concept, during code review.


> I kind of disagree in the sense that readability indirectly forces someone who has been at Google for a while/ is more experienced to have to sign off on new people’s code. Without it, you could have some very junior members with OWNERS reviewing other very junior members’ code.

Exactly. It is very likely that a lot of junior engineers will be working with other junior engineers, and they will in fact have the most specific knowledge of the part of the project that they are implementing. And human nature makes it so people are afraid of being judged by their "superiors". Readability breaks that barrier, guarantees that a more senior engineer will be involved, and teaches Nooglers the ropes of writing readable, maintainable code.


I dunno, I am pretty sure I got Java readability the second month I was at Google and was already in the OWNERS file.

I was a readability reviewer and most of the readability CLs were the first project a person worked on at Google, often rather unnecessary but redone strictly to meet readability requirements (largely new code, more than X lines, etc.). I would go back and forth for quite a while to turn 1000 careless lines of throwaway code into 50 lines that were actually good, but I basically had to grant readability after that one interaction, and it never felt great to me.

The most hated readability process at Google was Go's process (at least in the early days; k8s is obviously not using it), but I think it was actually one of the best. It took me a long time to get Go readability, but after going through the process I feel like I'd write the same Go code as anyone on the Go team. When I look at people's open source projects I think to myself "don't they know that that Simply Isn't Done?" But of course they don't; Go readability can only be experienced, not explained. People didn't like that process, and I am sure I said nasty things about it at the time, but in retrospect I really like it.


As someone who quit Google in large part because of all the stuff like readability that I ran into there (red tape everywhere in sight, low productivity due to process, zero urgency bc everyone is fat off the money-printer, no deadlines for the same reasons, etc.), I was about to strongly disagree with you, and write yet another excoriating take on why readability is AlwaysBad(tm) and etc, etc. I did already snark elsewhere here...

After taking a walk and reflecting, though, I'm remembering something that my manager said to me when I gave notice. Google is not for everyone, for a lot of reasons, and especially a lot of people who came up in startups really have problems (which is ironic since so many startups come out of people leaving Google). How you feel about readability may actually be a pretty good test of whether you will fit in at Google in the current era: it's not a small, scrappy company anymore that gets shit done quickly using whatever tool is most efficient RIGHT NOW and ships it as fast as possible to see if it gets product/market fit. It's a behemoth that runs one of the most prolific money-printing machines that has ever been built, and fucking that up would be a DISASTER. It'd be better to have half the engineers at the company do literally nothing for 10 out of 12 months in the year than to let someone accidentally break the money-printing machine for a day while they figure out how to fix it.

And obviously, it's better if everyone is productive even as they're shuffled around from project to project (which they will be, a lot), which means that you want as little "voice" as possible in their code. At a lot of companies you can tell exactly who wrote a line of code just by the style (naming, patterns, etc.), without even checking git blame, but at a place like Google individual styles cause problems. So the goal is to erase as much individual voice/style/preference as possible, and make sure that anyone can slot in and take over at any point, without having to bother the person that originally wrote the code to explain it (they might be at another project, another division, another company, and even if they're still at Google there is a very strong sense that once a handoff is complete, you should not be bugging people to provide support for stuff they've moved on from).

In that sense coding at Google is a lot closer to singing in a choir than being the frontman in a band: you need to commit to and be happy with minimizing what makes you unique or quirky, rather than trying to accentuate it and stand out. Some top-tier singers just can't force their vibratos down, or hide their distinctive timbres, or learn to blend with a group, and are absolute trash in a choir; it's not their fault or some ego failure, it's just that there are some voice types that don't work in groups, and that's fine, you just don't add those people to a choir.

At least below director-level (or maybe L7 equivalent on an IC track), Google doesn't need individuals to come in and shake things up, bust apart processes and "10x" a codebase. That's startup shit, and even if it might sometimes be worth some risk for the high payoff, it's too dangerous for them to allow for the thousands upon thousands of (still quite senior, sometimes 15+ years of experience) L4 or L5s at the company. The same process that prevents that from happening also makes sure that the entire machine keeps humming along smoothly. If being a part of that smoothly functioning machine while painting within the lines is exciting, then Google can be one of the best places on the planet to work; if you would be driven crazy because you can't flex and YOLO a prototype out to users in a couple days, then it's really not going to be for you.

I'm in the latter camp, I couldn't handle almost anything about the process and was so desperate to move quickly that I started talking to investors to line up my own funding a few months after I joined, but even as a quick-quit (<1 year), I have the utmost respect for the company and the people, and highly recommend it to almost everyone who applies there (the exception being people like me who TBH should just be doing their own startups). Everything they do has a pretty well thought out reason, even if I don't like following those rules myself.


Readability is far, far more than formatting and linting. I hate the current system a lot, but no linter or autoformatter knows if an identifier is appropriately named or if a function is properly decomposed.


Autoformatters and linter presubmit checks are used extensively at Google. Readability has nothing to do with those. It exists for everything else - ensuring that code is structured properly and idiomatically. Readability talks about structuring code, using the proper tools and containers where possible, and more. Everything from "that method should be named differently" to "you can use this function to do that thing you just wrote code for" to "this could be done with Immutable containers if you A, B, and C" and so much more.


The way it's supposed to work is that acquired teams get lots of support on integration, including readability. This helps your team get integrated into writing Google-style code. Not sure why that didn't work out in this case?

(Left Google a year ago)


This. Someone dropped the ball.

There's a form to get your corner of the codebase exempted from readability temporarily. This gives your team a quarter or two to build up readability.


A quarter or two isn't going to be enough for a drastic realignment of a large codebase. It's a start, but only a start.


Readability is usually applied in an incremental way. You don't have to fix all the code to make it conformant. If there are concerns about consistency, the style guide actually encourages people to prefer consistency over its own rule.


The 1-2 quarters is just to "realign" the SWEs, not the codebase.

Old code can usually be left as-is unless there's some particularly egregious security hole or the like.


"Readability" requirement is still a thing, but it isn't for every single piece of code in G3, and I haven't worked close enough to it to think about the exact mechanism of how it applies.

My previous team - pretty much any python submission was hitting me with a python "readability" requirement, and it was a bit painful, because only a single person in my entire group of teams (roughly 15 people total) had the "python readability expert" status. My current team - already submitted quite a few significant C++/TS/Java pieces of code to G3, and not a single "readability" requirement triggered.


There’s now an explicit safeguard against that.


In my opinion, the monorepo, global presubmit, testing culture and the Beyoncé rule (if you liked it then you should have put a test on it) are basically a superpower for infrastructure teams. Without these things it'd be utterly impossible for certain kinds of infra refactors to be done, and many more would be very, very painful.

In the open source world I see a fair amount of "tests are always red, don't worry" and "we can never edit this interface because who knows who it breaks." These problems aren't intractable at Google.

This approach does have its own set of challenges and I do suspect that the monorepo has contributed in some ways to Google's inability or refusal to maintain some older products. But holy cow the ability to do something like move everybody in the company to different vocabulary types is powerful.
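A minimal sketch of the Beyoncé rule in practice (everything below is invented for illustration): a team that depends on the exact output of a shared helper pins it with a test, so an infra team's refactor that changes that output fails the global presubmit instead of breaking them silently.

  import json

  def encode_event(name, payload):
      # Shared helper owned by some infra team (hypothetical).
      return json.dumps({"event": name, "payload": payload}, sort_keys=True)

  def test_event_wire_format_is_stable():
      # The downstream team "liked it", so they put a test on it.
      got = encode_event("login", {"user": 42})
      assert got == '{"event": "login", "payload": {"user": 42}}'

Conversely, behavior that nobody bothered to pin with a test is fair game for the refactoring team to change.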


On the other hand, most weeks someone else breaks my system and I have to track down the culprit.

Google's emphasis has always been to make things easy for library developers, at the expense of library clients. For people who value backwards compatibility over long timespans, Google's practices could be better.


I don't think it's fair to classify code review and test coverage as "the Google way." Should evaluate more by the unique things Google does or the things they specifically invented (not code review and testing).

And of course volunteers working on open source projects have lower standards. Let's instead compare Google to companies which say "we aren't Google."


What I am describing is not code review and test coverage. What I am describing is the ability to run all of the tests for the entire company in one go so you can safely make absolutely massive changes to the codebase.


So having a monorepo?


A monorepo and TGP plus a culture where changes are okay when they don't break tests.


Engineering culture has somewhat collapsed at Google. The things that made engineering great didn't really survive the last couple rounds of internal coups.


Interesting -- having not had any experience inside Google I'm having difficulty painting a picture, could you give an example or two of some of these internal coups?


I thought so too, but since then I moved over to Cloud and things are a LOT better.


I always hear about Cloud having the worst culture tho? Has that not been the case for you?


People work hard in cloud but there are no MBAs in sight. It’s all very technical work, often very bottom up driven.

A lot of the overall goals of cloud are more ambitious than AWS offerings. Reliability is prized more than it is in other areas of Google as well, because customers are so technical and often notice.

Not a place to coast, but I'd say most people who want to get good reviews and a fat bonus do a solid 45 a week.


> A lot of the overall goals of cloud are more ambitious than AWS offerings.

In what sense?


Not at all, it's a breath of fresh air. It's much less of a check buganizer, check email, write code, push CL cycle. Work is very project focused with high flexibility, my current team isn't even pushing to g3 and using different build systems entirely just because we wanted some more flexibility for example, and it doesn't matter as long as we are getting results.

The problems are a lot more technical though and I don't see a lot of L3s being able to work in the environment as it requires a lot more intuition and experience.

I usually work 45 hours a week, but I don't mind it. Plus I'm 100% WFH here cause my management isn't dealing with in-office bs.


Some teams in Cloud suck but the core engineering teams have some top talent and solve some very hard problems. Keep in mind Borg and Spanner are both “cloud”, but so are many field sales teams with an average tenure <2y


Nice bait


The “policies don’t scale well” section is inaccurate.

There are plenty of policies floating around that don’t scale well, and plenty of migrations that are still forced on internal users rather than handled magically by the migrating team. The reality is that Google is such a big company most of these fly under the radar of whichever person actually enforces these policies, and it becomes a whole thing to escalate up to whoever enforced them, and then there’s potentially a political battle between whatever director or VP is in charge of the bad actors and the enforcer (ideally they get away with not allocating HC to the internal migration and amortize it across all their users, so that HC can work on flashier stuff).

I think one reason Google has a proliferation of bureaucracy and red tape is that they do not "review" postmortem action items very formally. They are only reviewed as part of the larger incident postmortem review process, and the tooling is way overengineered, such that performing that review beyond a perfunctory once-over isn't easy to do. So you end up in a situation where "we need to do something" and whichever person handled the incident has to suggest a way to make sure it doesn't happen again - the easiest of which is to introduce some CYA process. The other reason is that non-coding EMs introduce processes to show some kind of impact on their team.

Also, the existence of the monorepo, global test runs, forced migrations, etc makes it so maintaining a mothballed project incurs some inherent engineering costs - IMO it’s a non-negligible reason Google kills products that could instead simply exist without changes. It also makes it so Google doesn’t really “version” software generally speaking.


DISCLAIMER 1: Current Googler here, but opinions are my own.

DISCLAIMER 2: I think from a hands-on-keyboard SWE perspective there is a lot of useful stuff here. What you mentioned about Google's culture of killing products and such I am not gonna talk about.

I recommend the chapters about testing first and foremost. Among all the codebases I've seen (both open source and proprietary), Google's tests are the most comprehensive and reliable. However, if you are in a startup-like environment you should pick and choose and not try to follow every single principle listed, as they could sink your velocity drastically in the short run.

Other interesting points (IMHO) are Monorepo, Build System, and Code Reviews.

As for the monorepo, I discovered I'm a huge fan, although I was initially skeptical. The sad thing is that it's a rather niche practice and tools like Git don't play ball very well (i.e. each time you pull you have to retrieve changes for the whole codebase, even files you never saw/heard of, managed by other teams). I think there's no nice off-the-shelf offering for running monorepos out there. However, not having to fight with git submodules, library versions, ... is great. If the change I am submitting breaks something else in the company, I am immediately aware and can act accordingly (e.g. keep the old implementation alongside the new one and mark it as deprecated so the other team will get a warning next time they do anything).
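A minimal sketch of that deprecate-in-place pattern (hypothetical names): keep the old entry point working, but emit a warning so the downstream team notices on their next build or run:

  import warnings

  def fetch_user_v2(user_id):
      """New implementation that callers should migrate to."""
      return {"id": user_id, "schema": 2}

  def fetch_user(user_id):
      """Deprecated: use fetch_user_v2 instead."""
      warnings.warn(
          "fetch_user is deprecated; migrate to fetch_user_v2",
          DeprecationWarning,
          stacklevel=2,  # attribute the warning to the caller's code
      )
      return fetch_user_v2(user_id)

Because everything lives in one repo, the author of the change can also see every caller that will start receiving the warning.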

The build system is a bit more controversial. I learned to love blaze/bazel but, admittedly, the open source version is a bit messy to set up. Additionally, being so rigorous about the build rules felt like a massive chore at the beginning, but now I appreciate it a lot. I can instantly find the contacts of all the teams that use a build rule I declared, and hence warn them about bugs and the like. I can create something experimental with private visibility so only my team can use it, and only later expose it to the wider world with just a one-liner.
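That visibility one-liner looks roughly like this: a hypothetical BUILD file, written in Starlark (Bazel's Python dialect), with invented package names:

  # BUILD file for an experimental library. While it's experimental,
  # only targets under //myteam/... may depend on it:
  py_library(
      name = "experimental_cache",
      srcs = ["experimental_cache.py"],
      visibility = ["//myteam:__subpackages__"],
  )
  # Promoting it company-wide later is a one-line change:
  #     visibility = ["//visibility:public"],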

Finally, code review, AKA Critique. Google has the best review tool I've had the joy to use, hands down. It's clear about what happens and what stage the review of a particular section/file is at, and it's focused on discussion. The evolution of each change is easy to follow along. These are things I really miss when using the GitHub/GitLab PR view; that tooling is incredibly confusing to me. Luckily (I am not affiliated in any way) an ex-Googler (I believe) is working on an alternative that works with GitHub (https://codeapprove.com/).


I am a big fan of Anki, and for various reasons I wanted to build it on a machine I have with an uncommon architecture (it has a graphical desktop). I had all of the components (rust, typescript, qtwebengine, etc.) installed and working. I invested some time in trying to convince bazel that the required dependencies existed, to no avail. Rules broke left, right, and centre, and every time I found a solution, other things broke. I think it insisted on pulling stuff from the internet, including definitions of other stuff I needed to change. I can't remember much more than that, as I gave up and haven't thought about it much since.

Thing is, pkg-config would've picked up the dependencies just fine - they were literally all there. I even built Rust from source on my weird machine with musl variant, before realising musl has some issues on my architecture.

I suspect Bazel may work well inside Google for infra/server-side stuff (I never worked there). I'm a lot more skeptical of more complex builds, like desktop applications across various platforms. Chrome still uses "gn" to generate ninja files, and then ninja to build. For my own stuff, I won't touch it.

I probably wouldn't have commented, except that, to my surprise, it seems the Anki developers have also decided life is too short: https://github.com/ankitects/anki/commit/5e0a761b875fff4c9e4...


Anki's build seems particularly problematic for some reason - both Arch and NixOS have given up on updating their from-source builds and just repackage the first-party builds.


Sadly, first-party builds aren't available for my platform. I suspect the complexity lies in the number of languages they're trying to use simultaneously, plus the complexity of qtwebengine, which requires a working chromium port - which outside of {x86-64, arm64} is not a given.


As I mentioned, bazel is a bit messy to set up. I think it's still doable and it brings a lot to the table, but it requires an upfront investment in learning Starlark (formerly Skylark) and/or fighting with the various rules provided. Blaze (the internal version) is just great, unless you want to do shady things that you most likely shouldn't do anyway.


> I think there's no nice off-the-shelf offering for running monorepos out there.

I think Git works perfectly well for 99% of monorepos, though. It just doesn't work for the massive ones. I think it's a perfect example of something most codebases shouldn't follow Google on.


Maybe if you only allow shallow clones/pulls. I'm not sold on vanilla git handling monorepos well - if anybody in the company pushes a huge blob, you mess up everyone else, and so on. Git plus some modifications, perhaps.
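(For what it's worth, stock git has grown some mitigations here. A hedged sketch - behaviour depends on your git version and on the server supporting partial clone, and the repo URL and paths are made up:

    # Blobless clone: commits and trees come down, file contents are fetched lazily
    git clone --filter=blob:none https://example.com/monorepo.git

    # Sparse checkout: only materialize the directories you actually work on
    git sparse-checkout init --cone
    git sparse-checkout set myteam/service

With that setup, a huge pushed blob mostly costs the people whose checkouts actually need it.)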


> If anybody in the company pushes a huge blob you mess up everyone else and so on

Again, this is really only a problem at larger scale. The number of people who pull this large blob at Google may be higher than the entire engineering team at other companies.


Thanks for the CodeApprove shoutout! If anyone here in the comments wants to try it out, just let me know!


No need to thank me. I haven't tried it yet, but we chatted months ago in another HN post, so once in a while I check it out. Any plans to include GitLab as a first-class citizen?


No plans for Gitlab at the moment, but maybe one day!


Personal story from an ex-Googler: after getting exasperated at yet another internal tool launched with great fanfare and almost no testing, let alone documentation, I suggested to the Internal Tools group that we have a contest for the BEST internal tool.

Not "worst" since that would be too hurtful. The hope was to recognize excellence, motivate people to be better, and maybe shame the people whose tools received no votes. This suggestion was summarily dismissed.

There were, indeed, some truly excellent tools: Dremel comes to mind. And lots of tools that were nearly unusable.


I left Google around six months ago. I worked in medium and small companies, currently at a startup with ~30 devs.

I would say the vast majority of it works well, some you just don't need until you hit scale (here, scale in the number of developers).

For example, policies work if you have <20 engineers, probably don't really work otherwise.

Blaze/Bazel I miss a lot. Just wrangling the dependencies between shared packages is a mess (though we might just suck at configuring Poetry - at any rate it's not intuitive). Building and deploying is much more involved.

Another thing I miss is code review the Google way. Google asks that you review within 24 hours, reviews by (the equivalent of a) commit and not by PR, and strongly advises keeping commits small. The GitHub PR workflow is terrible in comparison:

1) it nudges you into batching commits into large PRs

2) Is the PR message informative? Is each commit's? What about squash-and-merge - how many people edit that message? At Google, reviewing the commit message is part of the code review. With squash-and-merge, the message is written post-approval, so you can't even do that.

3) Hidden conversations? What the actual fudge

4) How many comments have I not addressed yet? For that matter, how many PRs are waiting for my attention and when were they sent?


Of all the Google dev tools, I miss Critique the most. GitHub is terrible at giving enough context to efficiently review a PR on a second or third pass.

I think coupling commits with review progress was a mistake.


Shameless plug but if you're missing Critique and working on GitHub, try CodeApprove (https://codeapprove.com) which brings as much of the Critique magic to GitHub as possible.


Off-topic, but aren't you scared that GitHub improves its PR workflow and puts your product out of business?


Yes that's a real risk! CodeApprove is not (yet) anyone's full time job so it's also an acceptable risk.

However, I think the biggest issue with the landscape of code review tools is that 99% of developers use the default system that ships with their VCS host - so on most teams, that's GitHub. People should be actively choosing their code review tools just like they choose their VCS, IDE, CI/CD platform, issue tracker, etc. It's one of many tools that make up your SDLC "stack".


They haven't done it yet. What's been stopping them?

As long as CodeApprove and Reviewable remain niche they've got nothing to worry about.


This seems very similar to Gitlab's MRs. Does anyone know all of them well enough to highlight their strengths and drawbacks?


https://reviewable.io is the earliest full-powered Critique alternative for GitHub.

It supports some cool things Critique doesn't/didn't, such as reviewing multi-commit branches (even across history-rewriting force-push cleanups) and indicating exactly the nature of a comment (just an FYI, or something you want changed before you'll give your approval).

(I was an intern in the initial making of Critique, and subsequently got interested in finding an out-of-Google alternative. I contributed a bit to other review tools such as ReviewBoard, and actively used Gerrit, Crucible, Phabricator, and GitLab reviews.)

When I was looking for a Critique alternative for my startup, reviewable.io had just appeared and ticked all the boxes, and we have used it successfully for many years. The drawbacks are that it's GitHub-only and isn't free software.


Small-commit reviews sound miserable. You have no context for the rest of the branch unless you go looking for it, and no idea what's in the author's head for future commits (I can imagine some devout YAGNI follower rejecting a commit because a function argument is unused, when the author planned to use it tomorrow). It also sounds like it would encourage minor nitpicking when there's not much to review - as opposed to a whole-branch PR, where I can see the entire feature at once and how it comes together.


For a large feature you work off a design doc or an issue-tracker issue (the Jira-ticket equivalent). If you're going to call the function tomorrow, tell your reviewer you're going to call it tomorrow. We're all adults.

Re: not much to review, it's the exact opposite. When people get 400-line code reviews they tend to nitpick on style. When they get a 100-line CR, they critique naming, organization, and consistency.


> comment on which parts of these work well and which don't?

I don't think there is a visible distinction between the parts that work and the parts that don't. In fact, in most cases each practice has pretty strong rationales. The problem is that when you take everything as a whole, the cumulative complexity and cognitive overhead tend to go wild, and almost no one can understand the whole stack once its original writers/maintainers leave the team.

In fact, this might play a certain role in the Google-graveyard narrative: it's not that the engineering culture is bad, but sometimes the standard is too high for many cases, so it's nearly impossible for newcomers to keep it up, especially when there are external pressures you cannot ignore. Even if you make an eng team of 3-4 people for a small product, they'll likely suffer through tens of migrations/deprecations/mandates over the years.


Readability is hit and miss. It's very nice to have everything written to the same standard; it makes it much easier to navigate through any project. The downside is that it's pretty rough for more peripheral teams, or teams working in a language that's a small component of their product. I remember that for one of DeepMind's big launches, the interface was all in files ending in .notjs, presumably because they didn't have anyone on the team with Javascript readability. This was 5+ years ago, though, so some of the downsides may have been mitigated.


> Cool. Could someone, maybe an ex-googler, comment on which parts of these work well and which don't?

TBH most of this stuff is transferrable and even "common sense" at most of the companies you've worked for. Similarly, Google's SRE book is a very good collection of battle-won experience on how ops can keep systems more reliable and running.

The book is written in a way that you can easily throw away advice that you don't think useful.


> Additionally, critics of Google may point out that their engineering culture may not be great on its own terms -- every time Google launches a new feature, people post links to the Google product graveyard.

It is personally scary when they develop new products. What if it is a brilliant idea, one I cannot live without? If Google develops it, then I am looking at this stillborn thing, mewling for life when I know its horrible fate.

The trouble here is that Google employees (and perhaps even its upper management) want to believe that they are a company that invents things. But they are not that at all. They are an advertising company. Advertising companies should not and do not want to invent things... inventions are worse than burdens; inventions are weird alien objects that appear valuable but are quite expensive and do not help sell ads at all.

So they hawk the inventions like they were freaks in some carnival sideshow to move traffic past their billboards. Until the traffic dwindles (or until they get tired of it). And then they take it out back behind the woodshed and put an end to it.


The inventions are to keep the talent stream coming ... to work on ads.

The inventions are the small tax they pay so they can pretend to candidates that they might work on inventions, when the vast majority of them will be "allocated" to ads.


It's also Google licking the cookie. They maintain a moat around ads by doing just enough to threaten to destroy anyone that gets close to their ecosystem.

Facebook survived because the G+ product vision was so out of touch with reality, and because FB wasn't part of the anti-poaching hiring nonsense, so they managed to poach a lot of good people.


You're telling me this stupid, bizarre thing: that Google's major innovation was an HR process.

That's fucked-in-the-head just enough that you've made me wonder if it's true.


"The best minds of my generation are thinking about how to make people click ads"

https://quoteinvestigator.com/2017/06/12/click/


I doubt they did anything like that intentionally. My impression during my time there was that they were constantly cargo-culting themselves: X obviously works, so keep doing X, even if it doesn't look like it makes any sense. And X was absolutely everything.


Mind me asking why you moved on?


You need Shiny Inventions so you can divert the talent stream away from your competitors, more than to actually work on the ads.

I'm sure there's some Shiny Invention Corner in the ads business -- let's call it "AI" -- and some of the top people can be motivated to work there.

But isn't the ads business by its very firehose-of-money nature something that will get on fine with that average level of talent that is sufficiently motivated by cash and doesn't need Inventions?

And isn't the top talent able to make the same money doing interesting things elsewhere? (I keep hearing this is happening with AI, but I hear it on Xwitter so who knows.)


>A lot of other companies get into trouble trying to cargo-cult what Google does when they are operating in very different environments wherein those practices aren't optimal. E.g. different levels of scale.

Any prominent examples?


I could not care less about what Google practices. They operate on enormous scale and have vastly different goals and values.


For all of this, Google doesn't create very good products anymore. This guide came into being _after_ Google was successful; it's not _why_ Google became successful.

Yes, if you have a mountain of money and a horde of underutilized employees, it’s easy to gold plate your engineering and navel-gaze at your biases.


Whenever I watch/read interviews with people who were successful in some way, they usually downplay the ugly hacks and shortcuts they took to get there, and are quick to say that they "should have done it <the right way>". It's really hard to get any insights because of this inherently unreliable narration.


There was a great (IMO) article posted here on HN the other day about this very topic: https://news.ycombinator.com/item?id=37100226


Obligatory: https://xkcd.com/1827/

I'm also reminded of something written by Scott Adams (decades ago, when his reputation was different.)

> Most people won't admit how they got their current jobs unless you push them up against a built-in wall unit and punch them in the stomach until they spill their drink and start yelling, "I'LL NEVER INVITE YOU TO ONE OF MY PARTIES AGAIN, YOU DRUNKEN FOOL!"

> I think the reason these annoying people won't tell me how they got their jobs is because they are embarrassed to admit luck was involved. I can't blame them. Typically, the pre-luck part of their careers involved doing something enormously pathetic.

-- "The Dilbert Future" (1997), by Scott Adams


Survivorship bias is a hell of a drug.


It depends on how you look at it. A lot of products created by Google were good/high quality but were killed anyway. It's sad to see things killed because they were not "big enough".


The ones that were not killed are atrociously shoddy and have been now for, you’d think I was going to say years, but it’s actually decades.


I feel like even their main products are shoddy. How can they recommend good, relevant ads if they can't even recommend good news articles, music, or videos? If I do any search in Google on a topic, even just to get a tiny bit of info about it, Google decides I'm interested in it and will make recommendations based on that one, single search. Or sometimes I'll buy a product and start seeing ads for similar products afterwards. That's not good targeting, IMO.


IMO bang for buck is to invest in:

* reproducible fast dev environments - anyone should be able to build anything pretty easily fairly quickly

* culture of design reviews, testing, and code reviews.

* CI/CD, static analysis, PaaS, dependency management.

You can do all of these without Google-level (or really any bespoke) tooling. A lot of what Google builds is for operating at Google scale - distributed builds and running of huge applications - and even then the tooling simply enables working at that scale; it comes with a lot of cost (speed/complexity) and jank you don't need at smaller software shops.


(2020). Earlier discussions:

https://news.ycombinator.com/item?id=31224545 (304 points, 179 comments)

https://news.ycombinator.com/item?id=22609807 (222 points, 70 comments)


Thanks! Macroexpanded:

Software Engineering at Google (2020) [pdf] - https://news.ycombinator.com/item?id=31224545 - May 2022 (178 comments)

Software Engineering at Google - https://news.ycombinator.com/item?id=22609807 - March 2020 (69 comments)

Similar sounding but different:

Software Engineering at Google (2017) - https://news.ycombinator.com/item?id=18818412 - Jan 2019 (309 comments)

Software Engineering at Google - https://news.ycombinator.com/item?id=13619378 - Feb 2017 (156 comments)


Unless you have a monopoly on the ad market that's going to mask all of your managerial, strategy, and culture problems, I'd advise steering well clear of copying Google. They make money in spite of their day-to-day practices, not because of them.


From someone who’s seen how the sausage gets made, I both agree and disagree.

A lot of Google's practices really are good software engineering practices - provided you have the money to invest in replicating them to a high degree of quality, which could be substantial and better spent elsewhere. When you have one of the most lucrative business models of all time, you definitely have the money to invest in making your software as stable to maintain and easy to add value onto as possible - so it was worth it for Google in many cases - but every other company will have to weigh the costs and benefits for themselves.

Replicating Blaze and Forge seems really expensive and hard to get right (though it can be tremendously valuable for development on a large codebase). Postmortems, containerization, servers-as-cattle, gradual non-global releases… those aren't as expensive to set up and have great cost/benefit ratios. It'd be stupid to skip those just because Google does them (and in some cases invented or popularized the practice).


Yup - just look at Stadia. This is how they handle stuff that's not related to ads.


As a counterpoint to the engineering/product/culture comments providing context on the book, I would point out that Urs Hölzle has recently stepped back from uber-manager to individual contributor in the infrastructure space.

This is the guy who built the HotSpot JIT that added decades to the life of Java, and who engineered Google's data centers and GCP. He obviously doesn't need more money or glory or experience, so... why?

There are a million examples of things gone wrong, but it may be worth studying one example of how someone could have such an impact and still just love what he's doing.


Are we thinking about the same Urs? His reputation was not particularly good when I was in the TI group at Google recently. He also had nothing to do with this book.


Did he have any involvement in this book?

He’s not an author and not in the acknowledgments: https://abseil.io/resources/swe-book/html/pr01.html#_acknowl...


Probably a bit off-topic, but since I'm a bit triggered by the 'abseil' in the domain name:

I wish Google would relax their 'guidelines' when it comes to software that's also published outside of Google. Case in point: the Dawn C++ library (Google's native WebGPU implementation) has a dependency on abseil, and from what I've seen when glancing over the code, the only reason seems to be some minor string-related stuff.

I can only assume that there must be some internal NIH rule inside Google to use abseil in place of the C++ stdlib (of course I would prefer that Dawn used neither abseil nor the stdlib, especially since the only components used seem to be string-related, and strings are definitely not the focus of a 3D API).

...and then there's of course the use of 'Google Depot Tools', and of course they use their own build system [1] (though at least there the Dawn team rebelled and also provides cmake build files).

All those Google specifics make it incredibly difficult to integrate Google C++ projects into any non-Google project, and because of this "Google C++ bubble" I would seriously hesitate to take any advice from them about software engineering as gospel, at least when it comes to C++.

[1] https://chromium.googlesource.com/chromium/src/tools/gn/+/48...


> I can only assume that there must be some interal NIH rule inside Google to use abseil in place of the C++ stdlib

FWIW my understanding is that this is exactly backwards: Abseil exists because the internal code and toolchain evolved to use features that hadn't landed in the standard yet, and releasing the support this way allows that dependent code to be used in open-source releases. It's not about features Googlers "can't" use; it's that they[1] could always use better stuff, and this is a way to get the better stuff released so non-Googlers could use it at all (and then, apparently, complain about it).

Obviously, looking at this in hindsight from the outside, from a project using gcc13/clang15, it seems needlessly different. But when it was written, it was forward-looking.

[1] "We", I guess, though I work in ChromeOS and not in this world.


I cannot more strongly endorse this.

It would be tremendously beneficial if their software dropped the abseil dependency, especially where it is almost entirely unused. Hell it'd be better if they simply vendored the bits they need.

Having to use Bazel and having to manage an additional dependency like abseil can be hellish for small projects with uncomplicated build systems.

The worst part is that abseil leaks through interfaces and you end up being coupled to it as a consumer of a library. It's bananas. I don't need yet another stdlib.


Just glancing at Dawn, which I had never heard of until now, it appears their use of absl is similar in purpose to the way other Google open-source projects use it: faced with the choice between requiring C++20 (or 17, or 14) and requiring only C++11 while using absl as a kind of polyfill, they chose the latter.


Many of the authors of abseil are on the C++ committee and contribute to its progress - this is especially true of the string library, where abseil convinced the standards committee to adopt string_view.

The dependency here almost certainly predates C++ adopting the features it has had for a decade.


For string-related stuff, there's not much in the stdlib that can replace what's in abseil. (Haven't looked at Dawn in particular, though.)

E.g. absl::StrFormat — no equivalent until std::format was standardized in C++20.

absl::StrCat: you could use streams, but, ew. Also, StrCat is optimized to reserve sufficient size in advance, so it is more efficient than appending to a string or using a std::stringstream.

absl::Cord. No equivalent in the stdlib.
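To make the difference concrete, here's a minimal C++ sketch (assuming a project that already depends on abseil; the function names and values are made up):

    #include <sstream>
    #include <string>

    #include "absl/strings/str_cat.h"
    #include "absl/strings/str_format.h"

    // Pre-C++20 stdlib style: construct a whole stream just to build a string.
    std::string WithStdlib(int id, double score) {
      std::ostringstream oss;
      oss << "user-" << id << " score=" << score;
      return oss.str();
    }

    // Abseil style: StrCat computes the total size up front and reserves once.
    std::string WithAbseil(int id, double score) {
      return absl::StrCat("user-", id, " score=", score);
      // Or printf-style but compile-time type-checked:
      //   return absl::StrFormat("user-%d score=%.2f", id, score);
    }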


The C++ stdlib didn't have string_view for ages. Also, until recently, C++ sucked at things like converting to/from strings and string buffers. std::stringstream is awful.


The string stuff in abseil is mostly a historical byproduct of what Google was doing in 1998: manipulating lots of strings. At the time, the C++ standard library string implementation (mainly the GNU one) was immature and slow. The string library was written in the early days for performance reasons, as well as reliability (at the time, libstdc++ was so bad that many string operations just produced garbage, not strings). And then it got too expensive to change the entire codebase.

I remember Sanjay Ghemawat or Jeff Dean mentioning that one of their big "optimizations" was to inline short strings into the string object: instead of a string that was "size_t len, size_t capacity, char *data", anything less than 24 bytes was stored directly rather than behind a pointer. When you're running mapreduces with trillions of small keys, this makes a big difference!
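For illustration only, here's a sketch of the general small-string-optimization idea - not Google's (or any real library's) actual layout; the class name and the 24-byte threshold are just chosen to match the anecdote:

    #include <cstddef>
    #include <cstring>

    class SmallString {
      static constexpr std::size_t kInlineCap = 23;  // + NUL terminator = 24 bytes
      std::size_t len_;
      union {
        char inline_buf_[kInlineCap + 1];  // short strings live right here
        char* heap_buf_;                   // long strings go to the heap
      };

     public:
      explicit SmallString(const char* s) : len_(std::strlen(s)) {
        if (len_ <= kInlineCap) {
          std::memcpy(inline_buf_, s, len_ + 1);  // no allocation at all
        } else {
          heap_buf_ = new char[len_ + 1];
          std::memcpy(heap_buf_, s, len_ + 1);
        }
      }
      ~SmallString() {
        if (len_ > kInlineCap) delete[] heap_buf_;
      }
      const char* data() const { return len_ <= kInlineCap ? inline_buf_ : heap_buf_; }
      std::size_t size() const { return len_; }

      // Copying disabled for brevity; a real implementation needs copy/move.
      SmallString(const SmallString&) = delete;
      SmallString& operator=(const SmallString&) = delete;
    };

A key like "user:42" then never touches the allocator, which matters when a mapreduce creates trillions of them.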


Part of the pressure behind abseil is that perf and promotions are correlated with open source (Tensorflow, Chromium, TFX, etc) and it would be essentially impossible to translate internal projects for public release without a public library like abseil.

In contrast, Facebook's Folly has much less overall clout, because engineers there have more incentive to build from scratch - which can include simply not using C++.



If this document is intended as a how-google-does-it for applicability outside of Google, then the leadership section (Ch9) should include a heavy dose of your-mileage-may-vary. It is very good advice, but requires great cultural and executive support.

All the adages listed here make perfect sense, but they don't succeed in a vacuum. Servant leadership is great, until individuals disagree with product strategy and priorities. Addressing low performers is great, until a company's HR policies make doing so onerous.

I could go on, but there are often many things outside of your control as a leader that directly affect your capability to manage. It's all part of the job and something to account for, but set your expectations accordingly.



I'd say it's a tradeoff. If you are entirely driven by tests that include your deps, they will be slow. Unit tests are good at catching basic behavior issues that would also show up in an integration test, but they make it easier to see the cause.

I'm of the opinion that both are needed, but don't put all your eggs in unit tests (they don't need to be perfect). The extra time spent on integration tests tends to be better for maintaining system health.


Google's massive distributed build system also runs tests, so most projects run their dependent tests in parallel on thousands of machines.


With an optional --runs_per_test=1000


Is there a reason to emulate Google if your company is not trying to become an adtech behemoth? Google just sells ads and has not originated a successful product besides search. They have an endless stream of failed pursuits, so this could be a guidebook on inefficiency and on exploratory tangents that don't produce value. That's why they have laid off 12,000 employees since this guide was written. If anything, companies should try to follow Apple's workflow: Apple hasn't had layoffs and has a track record of elegant software that produces value in new areas.


What a narrow-minded view. This post and book have nothing to do with the product side of Google. You can argue about their products failing to gain traction in the market, but it's clear that Google is a leader in the software engineering industry. They have launched tons of tools and paradigms that have changed the way the entire industry works.


I've been listening to the audiobook and definitely recommend it to others.

Also, it's free. [0]

[0] https://www.audible.com/pd/Software-Engineering-at-Google-Au...


I'm stumbling into this thread right after experiencing what appears to be a pretty catastrophic failure of Google's main product. As I write this, the search results for "Google stock" (among other queries) return zero results ("Your search - google stock - did not match any documents").

I'm not really sure what to make of these discussions about how good Google engineering is in this or that respect while the production service is broken for an end user like me.


The decline of Google's search engine has really been dramatic. It honestly is just a poor product at this point, and I find myself having to use Yandex, Bing, and a variety of other tools to find what I am looking for.

My guess is that between SEO companies and Google trying to maximize ad profits, the product is in terminal decline.


They really don't give a shit how many search engineers they drive away with 50+ hour weeks and endless criticism. When it became uncool to have Google Search on your resume in 2018, I left.


Searching for "Google stock" shows me correct results. A stock chart followed by various search results.

There's no news of a widespread Google failure. Maybe you have a browser extension interfering? Or there's some kind of very localized hiccup.

In any case, your experience right now isn't even close to representative. For its scale and complexity, Google search is probably one of the most reliable services ever built.


Small update: it's definitely not extensions; it gives the same result on two different devices (mobile and laptop). I've narrowed it down to something going on with being logged into specific accounts. On my work account I get no results (for a query that returned results under the exact same setup just last week). Trying an old personal Gmail account, I get the UI localized to what seems to be Mandarin, for who knows what reason (I don't speak Mandarin and don't even use that account on a regular basis).

As for why this happens, I have no idea. I've had Google Maps completely black out on me and then eventually magically fix itself many months later.

As for reliability, I would probably have agreed if it were a "simple" system (which the original Google was). Today, I'm not so sure. I at least understand that Google today is made up of a large number of subsystems, and subsystem failures like the ones I'm experiencing (and bad search results, as others have reported) do in fact erode my trust in the product. "Your 99.9% is not my 99.9%" feels like an apt quote here.


Possible. I noticed recently that Google search no longer works with NoScript. It used to work. Not sure when this changed, since I don't often enable NoScript.


[Deleting -- I thought I was replying to the same commenter. Never mind, bradley13! Thanks.]


I'm not the original commenter. I was just tossing in a hypothesis based on my experience.


Don't know why you are getting downvoted - search quality has declined drastically.

I've had multiple occasions where Google reproducibly failed to find exact matches in a page title (no problem for Bing). This cannot be explained by mysterious AI ranking or Unicode issues, since Google gave me zero results, the website is non-political, and the title is just plain ASCII.

This never happened ten years ago. Whatever they are doing now, they are seriously screwing things up.


Are you really sure your machine isn't running malware that intercepts queries?

The query [ google stock ] would never return zero results.

If this is really happening to you, please post an actual screenshot demonstrating that a standard browser in incognito (not logged in) mode on a standard OS returns no results for [ google stock ].

I'm not saying Google's core product hasn't slipped but that's one query I run every day.


Search has significantly declined in quality in the past two years.


Found this comment through Google's "filter by latest 24 hours" search.

Currently logged into a Google account, and indeed the "google stock" search shows "Your search - google stock - did not match any documents".

It happens with other searches too; not all of them, but some.

No solution found at the moment except logging into another account / not using an account. No extensions installed either.


I’ve been unable to “Mark as read” in GMail for two years now.


That's pretty interesting.

I can't tell, right offhand, but is this an official Google publication?



See also: "Software Engineering at Google: Lessons Learned from Programming Over Time", 1st edition, by Titus Winters, Tom Manshreck, and Hyrum Wright (O'Reilly).



Google is not what it used to be but this book is great. So much distilled wisdom.


Most people will misinterpret what this book really is. They will think, "Hmm, if I read this book I can start another Google." Nothing could be further from the truth. This is a book about how to run a gigantic software organization once it gets huge, rich, and sclerotic. It has no insight into how to start another Google - only into where you will end up if your startup becomes huge, rich, and sclerotic, like Google.


How have layoffs impacted group psychological safety?


Not good.

I've seen people rapidly shift to career protection, wagon-circling, and empire building. I've seen more competition for leading junior people since there are fewer new hires coming in. But mostly I've seen the company become more cynical towards executives.

Depending on your feelings on AI, you might see new excitement and opportunity opening up or further meddling and messy product management in the future. I'm not sure where I land here yet.

I think Google is still a very good place to work. Pay is high. WLB is great (at least where I sit). Tooling is very good. I haven't had any asshole managers or directors. But I definitely find it hard to get people excited about taking a big risk, or about maintaining a project that is important but somehow misaligned with what your VP cares about.


I think they hit pretty hard. I remember some hardcore arguments with buddies who work at Google when I told them that, based on Google's behavior, layoffs were going to happen (this was 3-6 months before Google did the first layoffs in its history). I was brushed off with "Google would not do that" and "our culture is different". It was heartbreaking to hear how their view of Google transformed after the layoffs did happen.


A book on software engineering that totally misses how decisions are made about which features to build and which bugs to fix. I would have loved to read more on how Google engineers prioritise work and what makes them more creative in building innovative features and products. I also would have loved to read about how UI engineering is done while working with designers, etc.


What you’re describing sounds more like a book on product management?


Yeah, but ideally some PM spirit should also be built into a good developer culture so developers can fill any PM gaps when needed, imho.


To add to your point, I think what they said about technical writing might also apply to product management:

> It introduced a perverse incentive: become an important project and your software engineers won’t need to write documents. Discouraging engineers from writing documents turns out to be the opposite of what you want to do. Because they are a limited resource, technical writers should generally focus on tasks that software engineers don’t need to do as part of their normal duties. Usually, this involves writing documents that cross API boundaries. Project Foo might clearly know what documentation Project Foo needs, but it probably has a less clear idea what Project Bar needs. A technical writer is better able to stand in as a person unfamiliar with the domain. In fact, it’s one of their critical roles: to challenge the assumptions your team makes about the utility of your project. It’s one of the reasons why many, if not most, software engineering technical writers tend to focus on this specific type of API documentation.

PMs are a scarce resource. A lot of eng teams within Google don't have a dedicated PM and need to carve out product-market fit for themselves. When you have a dedicated PM it's easier to offload all PM ideas and responsibilities to that person.

(I'm a technical writer at Google. The boilerplate "all opinions my own" is important in this convo because I think a lot of TWs and PMs will strongly disagree with these ideas.)


Super important point. What seems to be missing at Google today is caring and initiative in fixing anything.

Maybe it’s more of a PM culture thing.

But as a dev (not at Google) I’m used to stepping into self-driven mode when PMs are slacking, and it’s a shame for Google that Googlers don’t exhibit this behavior.


I've always found it weird that people seem to obsess so much on how Google does things. From hiring to writing code and tooling.

Google is so unlike any other company, especially a 20-employee startup. Trying to apply their recipes to 99% of companies means solving problems that don't exist and most likely creating new ones you don't need to deal with.



Google is not a good example to look at if you are thinking about enterprise software, as enterprise products need to be supported long term and Google is not very good at that. They have a history of making breaking changes and discontinuing products.

Microsoft is a much better example for business software as they are (were?) paranoid about backward compatibility.


You're confusing software engineering with corporate product support. You can have top-notch ongoing lifetimes for trash products. See for example SAP or anything Oracle.


Product support is an integral part of enterprise software engineering. Product management does not know what adding a new feature or deprecating an old one entails; it is the responsibility of engineering to provide the dependency matrix.

For example, engineering usually tells product, if you change feature A, then it will also affect feature B, C and Z. Otherwise you may end up with contract breaches and SLA violations.

Product lifetime and incremental features are a big reason why SAP and Oracle have been successful in the enterprise space and why people still pay a lot of money for them.


> SAP and Oracle have been successful in the enterprise space

Yes, but are they well-engineered software?


Here's how the Google engineering career ladder is structured: https://labs.revelo.com/template/google-google-software-engi...


That's mostly propaganda. Like most companies, Google has a policy of something like 5x-10x fewer people at each level as you go up the management chain; they enforce that policy above all and use these sugar-coated, candy-cane level-guidance documents to hide the sleaze.



People seem to blindly worship Google too much. There have always been good ideas in engineering but because Google has the money to implement them at scale they get all of the credit.

Testing, reproducibility, etc. have always been common sense. It's just that people only adopt them when the big guys do, because we implicitly give authority to those with money and power.

It's no different from when you see folks making videos of "How a millionaire structures his time." Like, wtf. Those ideas existed before the millionaire. Many people already practice them. And they alone are not what made the millionaire.


When are we going to advance as an industry and stop worshiping these people?

If you have a decade in control of the greatest internet business ever, go ahead and do exactly as Google has done. For anyone else, look inward for inspiration!


I thought that a higher-status company would indicate a better technology stack and better quality of engineering... but all the high status seems to do is pull in people who are really good at politics and at promotion-driven development, which is kind of counter to the science aspect of computer science.


I've consistently gotten the impression that the difference between high performing organizations like Google and less high-performing organizations is that Google doesn't just _say_ they do this stuff, they actually do it, too. (Or, at least, they used to).


> "High performing"

I haven't really seen Google do any innovation in a decade or more. What have they done of significance since Maps and Android, over a decade ago?

Apple is stomping them in hardware, OpenAI is stomping them in AI, AWS is stomping them in cloud, Nvidia is stomping them in game streaming.

Google has a monopoly on a big ad network at its core, and that's not high-performing or innovative.


The "3. Thesis" section [0] seems empty. Anyone know what's up with that?

[0] https://abseil.io/resources/swe-book/html/part1.html


It's just a separator page. Each of the parts is like that: https://abseil.io/resources/swe-book/html/part2.html

This is an HTML copy of a physical book. In a book, that's nothing unusual.


Is the culture section being 100% blank a very wry joke about there being no culture or a mistake?

https://abseil.io/resources/swe-book/html/part2.html


Neither. Each part -- Thesis, Culture, Processes, Tools, etc. -- has its own heading.

The formatting of the table of contents does not convey it well.


> Part I. Thesis

...and then it's blank.

Am I missing something?



Great book, a modern classic IMHO


Cool, is this available as a PDF?


"https://www.ebooks.com/en-us/book/209970024/software-enginee..." includes PDF and EPUB formats.

(I bought it, intending it as interview prep for an experienced-engineer interview, but it seems they are doing student-startup-style Leetcode interviews.)


Is there an epub version?


I wrote some code last week to generate one: https://github.com/captn3m0/swe-ebook


thanks! hmm it's a 404


"As far as this outsider can tell, the systems and processes for writing code at Google must be among the best in the world, given both the scale of the company and how often people sing their praises."

Is this really the case? In high school I sure thought Google was a magical software heaven us mortals could but dream of working for. But now (and increasingly as of late) I'm strongly under the impression that 20 years ago they discovered unobtanium in the form of a very under-served search and ads market, plopped down camp on top of it, and turned it into a giant money firehose. To this day they are simply operating that ridiculously high-margin business while desperately trying to find another vein of unobtanium - because when you already have one, that's the only thing that even registers on your radar.

"Revenue from Google Network ads hit $7.5 billion (10.7%), and YouTube ad revenues registered at $6.7 billion (9.6%). In other words, at $54.5 billion, the total ad revenue from Google constituted 78.2% of the company's total revenue in the quarter." https://www.oberlo.com/statistics/how-does-google-make-money

78% between Google and YouTube. And given that YouTube isn't nearly as high-margin as the search product (can you even imagine the cost of essentially storing the video history of the world?), I'm hesitant to call YouTube their second vein.

So yeah. Turns out that when you have essentially a monopoly in a high-margin business, you can do basically anything - including building an entire private software ecosystem - without going bankrupt.

I interpret "how google does it" to simply mean "look at all this cool stuff you can do when your product is this high-margin".


The tooling is great in some ways, but possibly not in the way you'd expect.

There is a tool that will do almost anything, but the tools are often just a bit janky. Rather than thinking of the toolchain as some polished, tightly integrated, perfect system, think of it more like an internal open-source ecosystem: things that work together but are made by individuals, with limited resources, and still have bugs.

This is not to downplay what we have, it's amazing in many ways, but most tools are exactly the same sort of thing that you'd create in any other place, we just have more of them that cover almost every aspect of development. So much of the time I run into neat little systems and then discover they're a few Python scripts running under someone's personal quota. Sometimes these are 20% time projects, sometimes they are bits of tooling teams build for their own needs, sometimes they are more concerted efforts.

There is a core, tightly integrated set of systems that work well and are more "productized", but most of those aren't that much more special than what you can buy at any other company. If you've used Datadog then you'd feel at home in much of the modern infra tooling. Spanner is great, but anyone can pay to use it. Our CDN is very good, but there are other CDNs and from a developer perspective there isn't a whole lot of difference.


I have yet to be impressed by internal software tools developed by one team and consumed by another. You're likely being friendly and understating the degree of jank which is present.


It's janky, and when you do choose to use a best-effort tool with no dedicated support, you absolutely have to plan for it being a pain point and the likelihood of eventual migration.

That's true at Google, just as it's true anywhere. But what makes Google a much better experience than most places is a willingness (and ability) to pay for internal dev tooling of every sort with dedicated headcount. And that results in a consistently better experience than pretty much anywhere (I understand Meta has a comparable culture and ability to fund it, and I wouldn't be surprised if some or many of their internal tools surpass Google's). It's also unfortunately not replicable as a strategy, because you need a giant money spigot to make it work.


Is this at Google, or elsewhere? If it's at Google I'd be happy to try to persuade you via chat :)

If elsewhere: I think there are two kinds of tools. Some are developed by individuals or small groups in informal ways; some of these are bad, but much as with open-source tools, the good ones survive and become popular. The others have more formal backing - dedicated dev/design/UX/PM resources - and are good at surveying users to figure out the right problem to solve, so they're normally pretty good; and when they're not, they still accept fixes (I regularly contribute fixes to other teams' codebases when I find issues).


> If it's at Google I'd be happy to try to persuade you via chat.

I would be curious how you view projects that are staffed entirely by 20% time.


Feel free to reach out internally! But in general, it seems to work OK for many popular tools. I haven't attempted to do it myself - maybe it sucks as the maintainer?


> most tools are exactly the same sort of thing that you'd create in any other place, we just have more of them

Why? Why reinvent the wheel 10 times?


Google has 180,000 employees. There's an org that is responsible for internal tooling, but they are an org with priorities and challenges like any other. So sometimes Search needs a thing and Core won't do it so some people in Search build a tool. Maybe later somebody in Ads needs a similar tool but has never heard of the thing that Search built. So there is an organic force to this sort of complexity.

There are systems to resist this and people whose whole job is to focus on unifying systems, but they aren't perfect. In general, I'd say that Google is way way way more uniform than most other companies because of the monorepo, centralized tooling, and (mostly) uniform development process across the company.


It should be easy nowadays to have a chat-based AI which, given some provided requirements, suggests similar tools built by other SBUs - even if only to look at the code.


Maybe.

But this only solves a visibility issue (and codesearch/moma is already pretty good at this). It doesn't help you when the tool supports a different language or the team that owns it isn't willing to commit to the SLO that you need or whatever.


Most engineers create tools, Google is not special in that way.

But even when there's a similar open-source tool, it may be difficult to integrate with internal systems. There's a lot of benefit to a consistent set of technologies (data formats, storage systems, etc) so that's often a reason to write internally.

There is a lot of open source software in use at Google so I wouldn't say there's a lot of reinventing the wheel. And when there is an internal clone, the requirements are often necessarily different enough to warrant writing an internal version.


I did not interpret their statement as meaning they have many variants of the same tool, but that they have tools that cover a wider variety of situations.

I found the same thing moving from a very large company to a smallish company. Both had good tooling, the latter just had significantly less and there were gaps in coverage. Whenever I encountered one of those areas I often found myself wishing I had access to some niche tool I had gotten used to at my prior company.


In many cases there was no wheel available at the time


> I'm strongly under the impression that 20 years ago they discovered unobtanium in the form of a very under-served search and ads market, they plopped down camp on top of it and turned it into a giant money firehose

In 2010, when I interned there, one of my mentors said something that stuck with me: "Google found a hose that money pours out of, and its name is 'online advertising'. All we do is optimize that hose, and search for another one."


Yes, and just like all tech companies, Google steals - from smaller companies to the dreamers they inspired to invent and create.

Sonos: https://www.theverge.com/2023/5/26/23739273/google-sonos-sma...

An MIT student: https://news.ycombinator.com/item?id=18566929

I met them (they were horribly unprofessional) in 2013, around the time Sonos did, to discuss my audio-syncing tech. Best to steal tech rather than bother investing in R&D. The quickest way to make more money: steal - spend less and make more.


I've been told by dang that I shouldn't talk about this anymore on Hacker News.

I haven't mentioned my experience in years because it's old news, but the Sonos court victory is new, relevant news that bears on my experience meeting Google R&D that same year.

What I can say for sure is that they were extremely and laughably unprofessional to me, while the other tech companies I met with at the time were very professional and polite.


I can't think of a single C++ library coming out of Google that I've had to deal with where I didn't have massive problems integrating it into my own C++ projects, because of all the Google specialties baked into those projects (their own weird build tools, which don't seem to have changed much since the early 2000s, and the use of other Google dependencies like abseil).

Maybe it works well inside Google, but definitely not in the real world.


Off the top of my head I used FlatBuffers in quite a few projects many moons ago and that was pretty seamless.

Arguably it was a bit thin on documentation but it just worked.


Google search was 1998. Google Maps was 2005. Gmail was 2004. Android was 2008. Youtube was 2005. Chrome was 2008. Docs was 2006. Translate was 2006.

Yet in the last decade, they really haven't had many successes (perhaps with the exception of Google Photos - 2015)

One would imagine that with nearly 200,000 employees, at least one of them would have a good enough idea for a new product people like. But the management and culture, with 'more wood behind fewer arrows', are no longer conducive to good ideas getting launched.


This is a common meme, but it depends how you define "product". There are companies with millions in VC backing that would just be a feature on a Google (or other big tech company) product as most people define them.

I would argue that taking a product from 100m to 1bn users is a whole new level of success, and that has been done multiple times in the last decade.


Why should scaling matter? I thought Google tried to solve problems regardless of scale? Or maybe you're commending Google for its ability to more effectively commodify datacenter labour over the past decade?


I don't think we can expect businesses to solve problems without any business upside. I don't hold that against any company. That means needing some amount of scale in order to justify the work and opportunity cost.


'free' ad supported products usually have engineering costs greater than compute costs, even up to billions of users.

Compute scales with the userbase. Engineering scales with the product featureset. That means a small userbase in general cannot support much engineering and therefore cannot have much complexity/features if it wants to be profitable.

That in turn puts big companies at a massive advantage. And in fact we see that - the vast majority of my time using the web is spent on big products (google, youtube, reddit, twitter), and only a small fraction on small sites (perhaps HN being the one exception here). Those big products won the battle for my attention.


You have the wrong impression of Google Translate. Translate started in 2006 and it sucked: 60% accuracy at best, janky incoherent sentences. Then, in 2016, Translate revolutionized the entire field of language translation. Seriously. It went to 94% accuracy when Jeff Dean joined the project and spurred the team to train against all languages at once. Translate is now better than most high-school students after several years of study. I consider Translate to be one of the top-5 accomplishments Google has ever had:

https://translatepress.com/is-google-translate-correct/

Another breakthrough is picture recognition. Did you know you can do an image search for "motorcycles" and there is ZERO text associated with many of the resulting pictures? They really are mapreducing those pictures and running image recognition against a 17M-image open-source library to assess the "motorcycle-ness" of all those pictures! They started talking about picture recognition in the early 2000s, but it really began happening in the late 2010s after they hired Fei-Fei Li, the mother of image recognition...

https://profiles.stanford.edu/fei-fei-li


Translate did not suck in 2006; it was a big leap over the competitors of the time. It was really the first time a company had launched productized statistical machine translation at all. Before that there was AltaVista's Babelfish, which was rules-based.

Going neural definitely made a big difference, but internally the infrastructure called Translate "statml" for years - maybe still does - because just using statistics and having training at all was a big deal back then.


If you compare the Android of 2008 with the Android of 2023, you'd soon agree that within those 15 years the whole thing was re-invented multiple times.

And I'm not talking about the UX, which has also changed substantially; I mean the underpinnings.


Sure. But as far as the user is concerned, it is much the same: touchscreen apps, an app store, camera, wifi and phone abilities, an app switcher and a back button. The user doesn't care that the wheel has been reinvented three times along the way... There haven't really been any revolutionary new features in Android since shortly after launch.

There have been revolutions in apps. For example, pre 2017 I couldn't just tap a button and get an Uber. Or tap a button and have food delivered. Or do a video call with all my friends.

But the OS can't really claim credit for most of that innovation.


> There hasn't been any revolutionary new features in android since shortly after launch really.

Not visibly. But the tech stack has changed substantially.

I'd call it quite an achievement that the UX has largely stayed the same while things under the hood got modernized over and over again and kept up with the times, subtly bringing innovations to the UX without people realizing. It's a feature that things don't look substantially different every two years.


Android Auto was 2015.


It hasn't really seen widespread acceptance. I bet that in a sample of 100 cars on the freeway in the USA, perhaps only 25% would have CarPlay or Android Auto connected, and outside the USA adoption is far lower.


Are you sure?

https://appleinsider.com/articles/23/05/23/carplay-android-a...

> A report from Straits Research found that 98% of newly produced vehicles were compatible with either CarPlay or Android Auto. Meanwhile, 80% of prospective car buyers strongly preferred having these smartphone-based infotainment systems in their new vehicles.

The same research also shows Asia-Pacific to be the biggest market for these products, though North America is the fastest-growing.


"connected" vs "compatible with"

Tbf, 25% connected is huge. If you just jump into your car to quickly do groceries or pick up your kids, you may not be interested in connecting your phone to your car, even if you really like that feature. So there could be a natural ceiling for the "connected" number, and 25% actually feels close to that ceiling.


My older car requires a hardwire connection for Android Auto, but my newer car will automatically connect over wifi so I no longer need to take my phone out of my pocket unless I want to charge.


If 80% of buyers express a strong preference for having Carplay/Android Auto I don't think it's reasonable to say that they haven't seen "widespread acceptance."


I read that as "strong preference for having Carplay/Android Auto over the car manufacturers own UI".

And everyone is just frustrated with laggy UIs in cars. But in reality they'll probably still just use Waze with a $10 phone holder suction-cupped onto the windscreen.


Why on earth would you do that instead of using Waze on the larger, built-in screen? Especially if you are a buyer who "strongly prefers" having that in the first place? Nobody is using that kind of holder on a car with Android Auto or Carplay.


Security-conscious people like to keep the complexity of the things involved in their day-to-day activities low. Having your car mingle with your phone is kind of the opposite of that.


I don't think this describes any significant portion of users.


Maybe, maybe not. But it describes an answer to your "why on earth" question.

People may have different preferences from yours.


The proposition in question is whether Android Auto/CarPlay is “widely accepted,” so implicitly what I’m being asked to believe is that there is a large portion of car buyers who strongly prefer a car that supports these features but then don’t actually want to use them because of security concerns. I think the idea is absurd on its face.


Not sure where you're being asked to believe that beyond a single thread contribution. I've argued your point earlier in this thread and agree that they are "widely accepted", so you may be barking up the wrong tree.

Nevertheless, some people prefer not to accept them while not being entirely anti-tech either. You may not be agreeing with choices other people make, but it goes a little far to call them "absurd on its face", don't you think?


This sounds like a result of how often cars are replaced. What if you sampled 100 cars released in the last five years? That proportion would be far higher. Pretty much every car review I watch mentions support for either or both.


Uber was founded in 2009 and was pretty broadly available by 2014. Grubhub was founded in 2004; I was ordering from there all through college (2007-2011). Google actually put out Hangouts in 2013, and Skype could do group calls in 2010.

Not trying to downplay what happened after this: it was incremental change that made the products scale and be more usable to more people. I would say the things you describe as revolutions were actually evolutions of earlier products.


> Google search was 1998. Google Maps was 2005. Gmail was 2004. Android was 2008. Youtube was 2005. Chrome was 2008. Docs was 2006.

> Yet in the last decade, they really haven't had many successes

I don't think this kind of analysis is really giving a good answer. I mean, by these standards, look what a disaster of a dinosaur legacy vendor Apple Computer is! Haven't had a successful product launch since the iPad 13 years ago! (Perhaps with the exception of Apple Watch - 2015).


> perhaps with the exception of Google Photos

Which is somewhat offset by Google dropping Picasa.


Also Maps, Android and YouTube were started by other companies which were acquired.


K8s

TPU

Colab

TensorFlow

hmmm.


Oh yes - Google still has great technology - but those aren't consumer products.


Apple has undergone a similar ossification.

I think it's just something that happens to companies that get a certain size. The Process becomes more important than the Product.

When I was younger, I wanted so badly to work at Apple.

A few years ago, they actually approached me, and I found out that the culture seems to have drastically changed.

They ended up not wanting me anyway, so maybe it's just sour grapes, on my part.


Last I talked to people about Apple's internal tooling, it was still pretty fragmented between different organizations.

I was told about e.g. multiple internal CI/build systems because one org didn't want to rely on the other, so they both built their own little software stacks.

That's a bit different from what I've heard about Meta/Google/Amazon, who seem to be more open to running centralized internal services.


Amazon famously mandated such internal services at one point:

https://gist.github.com/chitchcock/1281611

This was rather a long time ago now so I wouldn't be surprised if things have changed since then, though having never worked at Amazon I wouldn't know one way or the other.


The main thing that changed is that now the service approach also generally enforces a particular framework (Smithy/Coral)


Has it really? The transition to Apple Silicon was pretty impressive, and something that Apple could have simply decided not to do at all while still remaining hugely profitable. I don't know anything about the company's internal culture, but it's not exactly resting on its laurels just yet.


The hardware people certainly aren't: I think they're reveling in their newfound freedom to make the best, not just the thinnest, computers.

But the software side? There seems to be massive variance in both ability and giving-a-shit across the software orgs, and no leadership filter on the output of these disparate groups.

Perhaps they have the opposite of the resting-on-laurels problem: a smattering of very senior people who are talented and care about the product/users, and a wide base of careerists who aren't very good and just want the money and the CV, users be damned.


The Apple Silicon transition also involves software (Rosetta), which works absurdly well given the complexity of what it's doing.


Exactly my point: the same company wrote Rosetta and the new Settings. Clearly not the same level of talent and commitment in those two teams.


I think it would be difficult to imagine a company the size of Apple not having variance in talent and commitment between different teams.


When you have the golden goose, you don't actually know how it makes the gold. You think you know, and you aren't completely clueless, but you don't really know, because it's complicated.

The obvious thing to do then is to not change too many things around it. Or at least document the changes.

So then you get a bunch of processes that are designed to not rock the boat too much, in case the magic dies.


> I found out that the culture seems to have drastically changed.

Can you explain in detail?


I'm not going to go into detail, as I don't think that it's helpful.

But I don't think that someone like me would be very welcome there.

That is both good and bad. I am certainly not God's Gift to Programming, so they may well be better off without folks like me. They certainly seem to be making a lot of green.


> I'm not going to go into detail, as I don't think that it's helpful.

Wouldn't it be helpful to other people who are interested in working there?


I don’t think so.

This is a professional venue, and I don’t feel that slagging companies is a particularly professional thing to do. I like Apple, and wish them the best. I have no interest in throwing shade on them.

When I do say less-than-positive things, I find it best to stay vague, so people can easily write off what I say. I think that folks here are perfectly capable of making their own decisions, and that I'm best off letting them do so, unencumbered by my opinions.


Steve isn't there, for one.


Everyone knows that Steve Jobs died. That's not an answer to my question.


> The Process becomes more important than the Product.

I don't know if the process is more important, but at a certain size codified process becomes a requirement.


> Apple has undergone a similar ossification.

> I think it's just something that happens to companies that get a certain size. The Process becomes more important than the Product.

Why change it if it's working well? Keep doing what you are good at, keep being the best in the world/market at this, and the customers will reward you.

I seriously don't understand the common theme in these threads saying that Google is basically stagnating. Why do they need to churn out a major new product every other year to stay relevant? They continuously revamp their entire stack, their data centers, their networks. Nothing ever stays as it is. As a user I actually find that things change too much. (But I'm not what counts; it's ad revenue that counts...)


I don't work there, but I'm curious: can you share what you saw?


Google software engineering may not be perfect, but that's probably because we all have high standards. I've worked at both FAANG and non-FAANG companies, and while the sample size is small and not representative of ALL non-FAANG companies, I will say that in my experience at FAANG companies the engineers there do get to dream big and come up with ingenious engineering solutions that make engineering work at non-FAANG look like college senior undergrad projects. I think we should celebrate what Google has done for software engineering instead of nitpicking every little detail that they got wrong.


> I will say that in my experience at FAANG companies the engineers there do get to dream big and come up with ingenious engineering solutions that make engineering work at non-FAANG look like college senior undergrad projects

My own experience at FAANG and non-FAANG is that __some engineers__ at FAANG get to dream big and demonstrate ingenuity, whereas at non-FAANG (especially startups I've been at) everyone can do this, though not all take the opportunity.

At my last FAANG where I was a manager, my opinion is that performance management and the focus on bullet points, metrics, and peer feedback for review time is a huge limiter for everyone else. I spent a lot of time creating cover for people on my team to actually try something new and innovative. More senior management constantly wanted to know why person X on my team was working on a new thing instead of trying to move some incremental metric somewhere else. Based on my experience during performance reviews for senior ICs, risk taking wasn't particularly encouraged at L6 (or even L7) and below as the cost of failure was substantial.


I used to work there and feel about the same. Google’s internal tech is really good where it needs to be, and barely good enough where it doesn’t. Borg, Blaze, d, google3, codesearch, etc. are all very good. Almost everything else falls by the wayside.

Generally programmers at Google are better than programmers I've worked with outside the company, but not by leaps and bounds. And in my opinion it matters little how good a programmer you are when your pace of development is so much slower.


You're not wrong. Google gets to sidestep a lot of problems other lower-margin companies have because they've spent their whole history being near market-leader on search and ads. Things like "deadlines" were foreign to the company for a long time; when you're already the pack leader, you release new features when you feel like it, not to play catch-up with competitors. You can see them struggle in spaces where that's not true (Cloud, for example).

There is one underlying technology special sauce they refined to an art form, which is building a reliable system on incredibly unreliable components. This comes from their inception when the two guys building out the company knew how to write code but not build custom machines; their initial attempts at racks (hard drives and motherboards mounted to flexible plywood) are hilarious-looking by modern standards and broke down all the time, so they had to learn how to write software that was fault-tolerant at every link in the chain. That resulted in building an infrastructure that was extremely supportive of experimentation (if the system can survive half of it disappearing, it can survive a software bug crashing half the machines), and the rest of the shape of the company kind of flows from that.
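To make that concrete, here is a minimal sketch of the "assume any component can fail" style, in Python; the replica list and the fetch RPC are invented placeholders, not Google's actual stack:

    import random
    import time

    REPLICAS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical backends

    class TransientError(Exception):
        pass

    def fetch(replica, key):
        # Stand-in for an RPC to an unreliable machine: fails half the time.
        if random.random() < 0.5:
            raise TransientError(replica)
        return f"{key}@{replica}"

    def reliable_fetch(key, attempts=6):
        # Treat every backend as expendable: retry against a randomly chosen
        # replica with exponential backoff until the attempt budget runs out.
        delay = 0.05
        for _ in range(attempts):
            try:
                return fetch(random.choice(REPLICAS), key)
            except TransientError:
                time.sleep(delay)  # back off so a sick cluster isn't hammered
                delay *= 2
        raise TransientError(f"all {attempts} attempts failed for {key!r}")

    print(reliable_fetch("doc-42"))

The point isn't the retry loop itself; it's that once every caller is written this way, losing half the fleet to a bad push or a bad batch of hardware degrades the system instead of killing it.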

It is worth repeating a lot that most companies do not operate at the scale or constraints Google does, and their approach to engineering is emphatically not one-size-fits-all.


I think people forget that the industry standard back then was very high-margin servers with redundant everything (PSU, network/NIC, disks, etc.).

Going to (eventually) mostly dependable simple servers with no redundancy was a big leap.


There were a couple of places in Google's internal infrastructure where they had picked up traditional "big iron" architectures (mostly via acquisitions), and it was funny to observe the shear plane between their regular best practices and those systems (when they hadn't yet been replaced).

One high-ticket item relied on a huge Oracle database. Every other aspect of that product was patch-on-the-fly, silent-release new versions, forward-and-backwards compatibility of software... But the whole product had regular scheduled outages on the weekends because the Oracle part could not be patched on the fly, had a rigid schema baked into its relational architecture, and had to be brought offline for updates and migrations.

It was like everyone was zipping around in racecars and then they all had to pause for Grandpa to cross the street.


I don't understand your argument - why is one-trick-pony revenue generation incompatible with high-quality systems and processes for writing code?

Both can be true at the same time, and my feeling is that they are, having worked there. The business is not very diversified, and the tools are great.


> I don't understand your argument - why does the lack of a diversified business imply a lack of quality in systems and processes for writing code?

That wasn't the argument. The argument was simply that having a profitable monopoly doesn't imply the presence of quality.


How did they achieve monopoly if it weren't for the quality?


The quality of what exactly?

Google's massive internal processes and systems are the result of having already been huge for many years. Google needed the time and resources to build them up. They're not the cause of Google's becoming huge in the first place.

Of course the quality of Google's external search results helped Google achieve a monopoly. Ironically, many people say that Google's search results have been getting worse, and I'd have to agree with that sentiment.

See also the list of mergers and acquisitions by Alphabet: https://en.wikipedia.org/wiki/List_of_mergers_and_acquisitio...


I think you hit on a key part of the debate in this thread. There clearly is some sort of functional quality floor, in the sense that poor enough quality means something doesn't actually work to do its job. But beyond that, what is "quality"? Some would consider it to mean elegance of algorithmic and architectural design, or readability of code, or some other more abstract measure. Some consider it suitability to purpose, with low-bug count.

I'll give one example I observed commonly early in my career: the unexplained memory leak. Some process is running and its memory usage continues to grow; eventually it will use the entire memory of its machine and die. You have a few options: 1) debug the issue and address the root cause; 2) debug the issue and work around it in some way; 3) give up and rewrite the code using some other kind of tooling; 4) wake people up when the process dies and have them restart it; 5) write a cron job that restarts the process periodically.

What is the right answer? The best from a QA perspective is probably to identify the root cause and fix the underlying issue. Tools like Valgrind have made this much easier in recent years, but it can still be a challenge. Pragmatically, my own answer (speaking generally; there are different cases for different contexts) would be to time-box an investigation and fix, and if that wasn't achievable in reasonable time, just write the cron job and move on to the next problem. You can imagine very successful operations filled with kludges like that. Is that low quality?
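A minimal sketch of option 5's kludge, as a script you'd point cron at; it assumes the third-party psutil package, and the process name, memory limit, and restart command are all hypothetical:

    #!/usr/bin/env python3
    """Kludge watchdog: restart a leaky process when its memory grows too big."""
    # Example crontab entry:  */5 * * * * /usr/local/bin/leak_watchdog.py
    import subprocess

    import psutil  # third-party: pip install psutil

    PROCESS_NAME = "leaky-worker"                            # hypothetical process
    RSS_LIMIT_BYTES = 2 * 1024**3                            # restart past ~2 GiB RSS
    RESTART_CMD = ["systemctl", "restart", "leaky-worker"]   # hypothetical unit

    for proc in psutil.process_iter(["name", "memory_info"]):
        if proc.info["name"] == PROCESS_NAME:
            if proc.info["memory_info"].rss > RSS_LIMIT_BYTES:
                # Papers over the leak; the root cause stays unfixed,
                # which is the whole point of the kludge.
                subprocess.run(RESTART_CMD, check=True)
                break

Whether that counts as low quality depends on what the time saved gets spent on instead.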


It's not incompatible, but making money hand over fist can hide many problems.


If Google's code writing process really was superior, you'd expect them to consistently produce killer products in other fields.

But I can't think of anything like that in the last 15 years.


I don't think this is true.

It means that you wouldn't expect them to have products that fail because of unresolvable tech problems. You can see plenty of cases like Stadia where the tech was solid but the product strategy and leadership follow through was garbage.


Unfortunately good code does not mean a good product, and Google is anything but a good product company.


If anything, these are orthogonal.

I had a front-row seat to some really revolutionary ideas in Google getting to the prototype stage before being squashed in the gears of "We're chasing a market and that's not how users see this product working." Stuff where, if it caught on, it'd be a paradigm shift... But it turns out users don't want every paradigm shift that comes down the lane.

Because Google has (traditionally; this has changed in recent years) a real push-pull in authority between management and engineering leadership, the company can't commit fully to building a quality implementation of a status quo. Nor can it commit fully to chasing entirely new ways of doing things that could shake up an established market. In general, this... Actually kind of works out fine for them, more fine than critics often realize, because neither of those answers are always correct. Sometimes you get Gmail. Sometimes you get Google Drive. And sometimes you get iGoogle or Wave. And sometimes you get the stuff in between, like Reader or App Engine (really popular among the users, but the users don't have the money to make it profitable to commit to it).


> If Google's code writing process really was superior, you'd expect them to consistently produce killer products in other fields.

This is a very simplistic notion of what enables the creation of killer products. That has much more to do with understanding users' needs and identifying market opportunities. Good code writing processes are about code maintenance and scaling engineering effort, not dreaming up the next killer app.


Well, you're of course right about that.

I'll retreat to saying that claims about the superiority of Google's code writing process remain unsubstantiated.

During my 3 years at Google, I observed little brilliance. Not that my little corner of the giant org is a statistically valid sample.

I did see a fair amount of "Google is The Best!" sentiments and rejection of anything invented outside the company.


I was there for well over a decade, and my read on it is that the tools are great for building solutions for all sorts of problems, including web applications and big data analysis systems. The biggest issues with the ability to launch a killer app are IMO:

- The risk aversion to anything that threatens a big existing successful product.

- The product/feature approval processes that implements the above.

- The concern about launching something experimental or half-baked under the brand (vs a startup which does that by default).

Personally, I saw a lot of product and engineering creativity, but it was often stifled or watered down by the above.


Don't mistake publicly visible products with internal products. There is a lot of amazing infrastructure internally that has no public presence at all.


I’d be interested in the perspective of a high schooler/fresh college grad on this too. I was in college when Google appeared on the scene and my impression was very much that it was an engineering paradise (20% time had a lot to do with that!), in the many years since most of the interactions I’ve had with Google have changed that perception a ton. But I’m not sure what it looks like to someone just starting out.


I used to work there. The tooling is pretty good.


I’ve not worked there, but I’ve worked with many ex-Googlers. They tend to think Google * is the best in the world, that they created it before anyone else and everyone else with similar capabilities copied them, and that Googlers are smarter. However, every company of their size and technical complexity has built almost exactly the same tooling and processes, but doesn’t carry the same attitude around. At a certain point I’ve learned to smile and nod as they explain and take credit for things I’ve developed myself at other FAANG and adjacent highly technical megacorps. The final thing of note is they seem to be unable to replicate their magic outside of Google, and blame the org they’re now in for being deficient in some way: “At Google we had X and didn’t have to worry about Y; without Y we can’t do X,” where Y is some nebulous qualitative feature of Google culture. After I finish smiling and nodding I just go build X.

N.b., I’m not saying nosefrog is this person, or even everyone at google or ex-google is like this. This was just my experience working with senior principal or other very senior engineering staff.


Great comment. Google and others blazed a trail, but by 2010 you could find similar, often specialized tooling and practices emerging at most Fortune 500 tech companies.


Today you can almost universally get a better off-the-shelf solution with a SaaS product than what Googlers are able to access internally.


I think this is broadly correct, but I do think their cloud services business holds some promise. Although a distant third at best behind AWS/Azure (and maybe fourth behind OCI), I have seen some pretty large shops with a ton of volume move all of their compute to GCP. It does seem like the immense infrastructure Google had to build to power this firehose of money has the skeleton of a great product for the cloud workloads of many other businesses, in ways that can compete with AWS on their home turf. But this is conjecture on my end; I haven't used GCP in the heat of production.


Probably all of the above is true.

From an outsider's perspective, it looks like all of FAANG has scaled to the point where delivery, perf, and managing the business have become quite cumbersome. This, I suspect, is why they've all used the economic climate to cut back, as they are largely unable to see the benefit of the additional hiring.

I have no doubt most of the talent is top tier, but being a strong engineer doesn't deliver profits or cost savings on its own.


Hiring is always profitable as it lowers the average cost of your engineers. Layoffs are a way to accelerate that cost averaging.


It's possible that such techniques are unreasonable in low-margin companies, and it could also be said that these engineering techniques have perhaps been a factor in allowing Google to keep its margins high as it scaled up by many orders of magnitude. Google in 2003 was making less than $1bn/year in revenue... 200x growth in 20 years.


The Beyoncé Rule

More colloquially, this is phrased as “If you liked it, you should have put a CI test on it,” which we call “The Beyoncé Rule.” From a scaling perspective, the Beyoncé Rule implies that complicated, one-off bespoke tests that aren’t triggered by our common CI system do not count.


Pet peeve: there's an anti-corollary, though, which is that tests written solely for the requirements of an elaborate CI system tend to hurt rather than help. Tests need to be trivially executable and inspectable by developers running them on their own systems with minimal support. If they aren't, then all you know is that "It's Broken!" and not how to fix it.

Good CI is good because it easily integrates "one-off bespoke tests", not because it outlaws them.


This is a non sequitur, I think. A bad CI system is independent of the mandate that, functionally, "unless we can verify we don't break you, you can't complain when we do".


No? The two statements at hand equate to: "Do not add features to the product without adding validation to CI" and "Do not write validation steps in CI just to add features to the product". Those are pretty clearly related concepts.

And the reason for the disconnect is that the former centers the requirements of integration and not development, which is exactly backwards. CI isn't "the place where you run tests". The place where you run tests is underneath the software developers' fingers. CI is where you integrate tests developers are already running. And to the extent that is not true, CI stops being a useful tool for development and turns into just another roadblock to evade.


> No? The two statements at hand equate to: "Do not add features to the product without adding validation to CI" and "Do not write validation steps in CI just to add features to the product". Those are pretty clearly related concepts.

Only if your CI and developer workflow are different. To a first approximation, at Google, I run `blaze test //foo` to test `foo`, and CI invokes the same command when testing `foo`.

There are caveats here, but they don't apply to the Beyoncé Rule in practice, which is centered on unit tests that can be run hermetically on presubmit. Your weird 3-hour integration test won't be run.

Edit: Actually it seems like you're saying "once you've tested a feature once, you can release it", which... no. You need some way of ensuring you don't regress, and the CI suite is how you do that; otherwise no one will know when the new release of some random open-source library that you depend on breaks your application.
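As an illustration of "developer command == CI command" (not Google's tooling; a generic pytest sketch where `apply_discount` is a made-up function under test):

    # test_pricing.py -- runnable as-is with `pytest test_pricing.py`,
    # locally or in CI, so a red CI run reproduces identically on a laptop.

    def apply_discount(price_cents: int, percent: int) -> int:
        """Hypothetical function under test: integer-cents discount."""
        return price_cents - (price_cents * percent) // 100

    def test_ten_percent_discount():
        assert apply_discount(1000, 10) == 900

    def test_zero_discount_is_identity():
        assert apply_discount(1234, 0) == 1234

The Beyoncé Rule then just says: if that file isn't wired into the common CI run, the feature it guards is fair game to break.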


Clearly we agree. I'm just saying that this "Beyoncé Rule", naively applied, leads to cargo-cult CI disasters (I've seen this happen both within and outside of Google, FWIW, but more outside than in, to be fair). The correct philosophy isn't "all tests must be integrated in CI"; it's "CI must be able to integrate all tests".


Can't speak to the content, but this is possibly the worst website layout and navigation I have seen. How are you even supposed to read this thing?

Is there a version with links from one chapter to the next? Or a PDF?


Just buy the O'Reilly book


When I saw the font and color contrast scheme on that page, I noped out fast.

And don't tell me I can override it in my browser. Neither my Android phone nor Android Chrome honors my expressed wish to override everything with a high-contrast, dark theme. A few apps honor it; many apps and docs do not. That article does not.



