[DISCLAIMER: I used to work at Google, but not at Google Cloud]
I'm not sure whether this has been discussed here before, but I'd like to use this thread to share an angle from the tech side of things:
IMO, Google is _cursed_ to keep deprecating its products and services. It's cursed by its famous choice of a monorepo tech stack.
It makes a lot of sense and has real benefits, but at a cost: we had to keep every single line of code in active development mode. Whenever someone changed a line in a random file three steps up your dependency chain, you'd get a ticket to understand what changed, make the corresponding changes, and run every test (and fix them, in 99% of cases).
Yeah, the "Fuck You. Drop whatever you are doing because it’s not important. What is important is OUR time. It’s costing us time and money to support our shit, and we’re tired of it, so we’re not going to support it anymore." is kind of true story for internal engineers.
We once had a shipped product (which took about 20 engineer-months to develop in the first place) in maintenance mode, yet it still required a full-time engineer to deal with those random things all the time. It would have saved 90% of that person's time if it were on a separate branch and we only needed to focus on security patches. (NO, there is no such concept of branching in Google's dev system).
We kept doing this for a while and soon realized there was no way we could sustain it, especially after the only people who understood how everything worked switched teams. Thus, it just became obvious that deprecation was the only "responsible" and "reasonable" choice.
Honestly, I think Google's engineering practice is somewhat flawed for the lack of a good way to support shipped products in maintenance. As a result, there are only massively successful products in active development, or deprecated products.
I have also worked at Google (in an unrelated department) and completely disagree with this. Maintaining old code & services is a problem everywhere. Monorepo vs multirepo, monolith service vs microservices etc. all have nothing to do with it. There will always be a broken dependency, a service/API/library you rely on about to deprecate, new urgent security patches, an outage somewhere upstream or downstream which you have to investigate, an important customer hitting an edge case which was hidden for years. You will always need a dedicated team to support a live product regardless of how it is engineered.
The problem at Google was (and maybe still is) with lack of incentives at the product level to do any of this. You don't get a fat bonus and promotion for saying that you kept things working as they should, made an incremental update or fixed bugs. When your packet goes up to the committee (who don't know you and know nothing about your team or background), the only thing that works in your favor is a successful new product launch.
And as an engineer you still have multiple avenues to showcase your skills. That new product manager you just hired from Harvard Business School who is eager to climb the ladder does not. And due to the lack of a central cohesive product strategy, this PM also has complete control of your team's annual roadmap.
The "you must make something new to get promoted" was a common meme (literally) at Google, but I never saw that myself. I got promoted, and I sat on promotion committees, and it didn't seem that important. I did sort of start a new project to get from 4 to 5 (rather a prototype was handed to me by my more senior team members), but it was clear to me that the path to 6 was not starting a new project -- it was increasing the reach and value of my existing project. I left before that happened (Google Fiber got canceled, found a new team but it wasn't really my thing), so I'll never know for sure, but I didn't feel any pressure to make something new for the sake of making something new. There was, of course, pressure to make the thing that you did work on good, and nobody was really going to stop you from making something new.
Basically, that whole eng ladder thing is really important. I looked at that a lot for my own promotions and for evaluating candidates for promotions. Just dealing with churn isn't really on there, so it's probably not something you should focus too much on. I'd say that's true at any job; customers aren't going to purchase your SaaS because you upgraded from Postgres 12 to 13. They give zero fucks about things like that. You do upgrades like that because they're just something you have to do to make actual progress on your project. Maybe unfortunate, but also unavoidable. Finding a balance is the key, as with anything in engineering.
The biggest problem I found with promotions is that people wanted one because they thought they were doing their current job well. That isn't promotion, that's calibration, and doing well in calibration certainly opens up good raise / bonus options. Promotion is something different -- it's interviewing for a brand new job, by proving you're already doing that job. Whether or not that's fair is debatable, but the model does make a lot of sense to me.
Things could have changed; I haven't worked at Google for 4 years. But this was a common complaint back then, and it just wasn't my experience in actually evaluating candidates for promotion.
"The biggest problem I found with promotions is that people wanted one because they thought they were doing their current job well. That isn't promotion, that's calibration, and doing well in calibration certainly opens up good raise / bonus options. Promotion is something different -- it's interviewing for a brand new job, by proving you're already doing that job. Whether or not that's fair is debatable, but the model does make a lot of sense to me."
Thanks for articulating this distinction so clearly; it's a simple enough idea, but it seems to elude so many.
> Promotion is something different -- it's interviewing for a brand new job, by proving you're already doing that job.
Every large corporation has a concept of levels. It makes sense to use levels as a progression (they are numeric after all) rather than a new job each time. That’s what job titles/roles are for.
I’m not convinced by this summary, and it seems anecdotal rather than realistic.
The reason things like this elude so many people is that they never get properly explained by anyone. What jrockway explained might be simple, but it is quite rare to see such an explanation.
With respect to your experience, the impact of promotion chasing was heavily felt by product teams and I wouldn't expect it to be that visible to people on the promo committees. I watched multiple fellow Googlers rush project work and cut corners in order to be able to "ship" and put the project in their promo package (and be frustrated when they missed promo). In some cases I got to watch them abandon the project and move on to something else even though it badly needed additional maintenance and cleanup due to all the corner-cutting. In one specific case, all the corner cutting led to multiple significant exploit chains (one of them delivered persistent root on Chromebooks)
> I watched multiple fellow Googlers rush project work and cut corners in order to be able to "ship" and put the project in their promo package (and be frustrated when they missed promo)
This kind of confirms my point -- the committee isn't looking for "created a disaster area a month before promo packets were due". They want a consistent track record of success at the next level.
I definitely encountered this problem at Google (there was a reason it was a meme), but it was far more prevalent at the EM/PM/director level, and so still directly affected the overall product strategy for the org and what you as an IC got to work on.
>Promotion is something different -- it's interviewing for a brand new job, by proving you're already doing that job
I've worked a couple places where getting a "meets expectations" on your annual review was expected
Their review processes were calibrated such that you should [almost] never get a 5 ("always exceeds")
A handful of 4s ("sometimes exceeds") was good - but not a requirement ('too many' 4s indicated you were in the wrong role, so titles/pay/etc would be adjusted)
More than one 2 ("sometimes doesn't meet") was reason for extra mentoring, one-on-ones, etc
There were no 1s ("fails to meet") - if you would otherwise have earned a 1 in any category, you'd've been let go already
I think monorepo makes it easier to update downstream dependencies atomically as part of an upstream change, and thus encourages a culture of unstable APIs.
That is, careful evolution of internal APIs is not given much weight, so modularity - in the sense of containing change - suffers.
I don't think monorepos must necessarily go this way, but expressing dependencies in terms of build targets rather than versioned artifacts has a strong gravitational effect. Change flows through the codebase quickly. That has upsides - legacy dependencies get killed off more quickly - and downsides - you end up wanting to delete code that isn't pulling its weight because of the labour it induces.
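To make that concrete, here's a minimal sketch (hypothetical target and package names): in a Bazel-style monorepo a dependency is a live build target built from head, while a versioned-artifact setup pins a release that you bump deliberately.

```
# BUILD file in a Bazel-style monorepo (hypothetical targets).
# The dependency is a live build target: you always build against whatever
# //common/auth looks like at head, so upstream changes reach you immediately.
py_library(
    name = "billing",
    srcs = ["billing.py"],
    deps = ["//common/auth:client"],  # no version; always head of the repo
)

# Versioned-artifact equivalent (e.g. a pip or Maven pin): an upstream change
# only reaches you when you choose to bump the version.
#   common-auth-client==2.4.1
```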
[I currently work at Google but I've only been here a few weeks. I certainly don't speak for the company.]
I think this is absolutely it. A lot of best practices go down the drain by the time the compiler has produced the bytecode. As such, a lot of best practices are about people, not the computer. APIs can be just as stable behind a network as behind a library, but way more people are on board with "never break your APIs" than with "never break your function".
Concerning the promotion thing, I hear this a lot from Googlers but isn't it the same everywhere else? Most tech companies (big tech at least) will promote on product achievements, not maintenance.
The main difference was/is that at Google your immediate manager, director, PM, peers and everyone else in your product unit (who you work with every day) have almost zero say in whether you get promoted or not. You have to essentially summarize everything you did in bullet points and send it over to an anonymous committee who don't know who you are. They will base their decision on this piece of paper without any additional background or context.
This does help in various ways – the process is more objective, there is less bias, less departmental/managerial politics etc. The drawback is that a lot gets lost in translation. There is too much burden on you as an engineer to pick and choose what you spend your time on so it looks good to the committee.
In other companies I have worked at getting promoted was a byproduct of doing a good job. At Google getting promoted is the job.
There have been some changes that make this entirely untrue for earlier promos and partially untrue for promotions to L6/Staff. There's considerably more locality at this point.
That’s not true anymore unless you are going for 6/7 (depending on your org). Now, committees are based in your org and members are expected to be somewhat familiar with your work.
This sounds like any large organization, where each engineer is only one tooth on a very large gear.
I'm guessing, but do not have enough anecdotal experience myself, that just about any large tech company employee here is reading your description and thinking "sounds like my company."
I'm curious how sound my hypothesis/guess is. Can other large organization employees answer with a claim that this does NOT describe their situation?
This makes sense. I do not work at Google, we do not have monorepo, and yet many problems feel similar.
The guys who maintain the company infrastructure introduce some changes, send an e-mail notification, and call it a day. The maintenance you need to do at your existing project to keep up with these changes does not count as important work, because it is not adding new features. Therefore it is important to run away from the project as soon as it stops being actively developed.
I'm working at Google for the third time; 12+ years in total. This is the first time this issue has been brought to my attention. I think the issue exists, but it's not due to the monorepo, it's due to internal APIs changing.
I learned how to avoid the google3 tax you mentioned, where the old thing is deprecated and the new one is not working yet.
Surprisingly, the answer for me was to embrace Google Cloud: its APIs have stability guarantees. My current project depends on Google Cloud Storage, Cloud Spanner, Cloud Firestore and very few internal technologies.
I believe that this is in general a trend at Google: increasing reliance on Google Cloud for new projects. In this sense, both internal and external developers are in the same boat.
As for the monorepo - it's a blessing, in my perspective. Less rotten code, much easier to contribute improvements across the stack.
I worked as a SWE at Google and also at Google Cloud. I both disagree with this and find it a very perplexing angle.
I think the issue is a mis-aligned (financial) incentive structure.
With the right incentive structure, challenges in either monorepo or federated repo can be overcome.
With the wrong incentive structure, problems will grow in both monorepo and federated repo.
The choice of repo simply manifests the way in which the thorns of the incentive structure arise, but it's the incentive structure which is the root cause.
I think monorepo has an effect, but the bigger effect is dependencies expressed as build targets rather than versioned artifacts. When your dependencies are in the eternal present, you're forced to upgrade, or you're a blocker. And I don't think you can do dependencies as build targets well without a monorepo.
I think this is a little inaccurate. Google's monorepo does have branching but it is a second class citizen with minimal support that most engineers aren't aware of unless you've carried a product through many releases.
That being said, every release can stand on its own and be iteratively changed without taking on changes from the rest of the company.
The highest possibilities for breakages to be introduced are at boundaries where your long-running services depend on another team's service(s), but this problem is not unique to Google.
Google can choose to maintain a long running maintenance project or deprecate it, and I won't claim to know what plays the biggest factor in that decision (it's likely unique to every team), but having a monorepo definitely is not part of the equation.
We actually tried that route (i.e. using a branch) before deprecating the project, but the infra team told us something like: the branch can only live for about 6 months before they have to deprecate the toolchain that supports branches more than 6 months old.
How does multi-repo codebase solve that problem? You would still need to keep up with your infra at minimum unless you run everything yourself too. Now you have another problem...
Imo mono repo has little to do with it and it’s more just an eng culture of shipping above all else (heavily influenced by their promo process)
I don't know exactly what's going on at Google, but the key feature request seems to be that one chunk of code be able to depend on a consistent version of the library interfaces. If it weren't a monorepo, you could specify a dependency as a particular version of the other repository. But if everything is in the same repository, and one directory of code depends on a past version of library code, then everything falls apart. Keeping code that doesn't work in the monorepo, with its tests left failing, is worse than deleting the code at the point that the API change breaks the other chunk of code.
Maybe in the Java world it was different, but when I wrote C++ and Go code there, breaking existing APIs was extremely frowned upon, and if people had to do it they usually sent you automated code-edit PRs for it.
I can attest. I work at a similar megacorp with a very large megarepo. If you commit a change that breaks any kind of test, anywhere, that shit is getting reverted very rapidly. If you MUST make a breaking change, congratulations, you get to update all of your users' code too.
This is about company culture, not the programming language. When someone breaks their API and someone else's code stops working as a consequence, will the first person get told to fix their API, or will the second person get told to fix their application?
Some companies may have a consistent policy about this, in other companies it may depend on which team happens to have more political power.
At least during my time there, nobody was introducing breaking (at compile time) API changes willy-nilly. What did happen was people would deprecate (or "sunset", as PMs loved to call it) a runtime API that people depended on - i.e. shutting down some servers. So splitting the monorepo would do nothing here unless you're willing to run those services yourself.
It's easier to park a codebase and have it rot when it's not in a monorepo where everything is assumed to be consistent/working all the time.
I'd have thought you could just pull maintenance-mode products out of the monorepo tree and stash them somewhere else. Let them rot by choice. That's basically what everyone else does; it lets you perform maintenance tasks on your own schedule, not on the other monorepo participants' schedule.
I would assume multi-repo also means dependency on packages rather than code. So code changes that are not backwards compatible yield a new version of the package that doesn't have to get applied across all uses of that repo.
I can tell you that having a multi-branch code management system doesn't make this easier. You will only pay the tax at a different point in time.
In the monorepo you are forced to update things immediately if something breaks. In the multi-branch system things will go unnoticed for a while. Until you have to update dependency A (either due to a bug, a security issue, or because you want a new feature), and then observe that everything around it moved too. Now a lot of investigation starts into how to update all those changed packages at once without one breaking the other. I experienced several occasions where those conflicts required more than 2 weeks of engineering time to get resolved - and I can also tell you that this isn't a very gratifying task. Try starting a new build which just updates dependency D and then noticing 8 hours later that something very very downstream breaks, and you also need to update E and F, but not G.
I actually would have preferred, on multiple occasions, for changes to cause breakages earlier, so that the work to fix them would be smaller too. So that's the contrarian view.
Overall, software maintenance will always take a significant amount of time, and managers and teams need to account for that. And similar to oncall duties, it also makes a lot of sense to distribute the maintenance chores across the team, so that no single person ends up doing all the work.
If you make changes to a large shared module, is it your responsibility to chase down each and every usage of it? For example, if you are upgrading a dependency due to a somewhat breaking security issue, such as Jackson 2.8 -> 2.12.
At Google it mostly _is_ your responsibility to do that, yes.
There is substantial tooling to assist with this, and it's common to make changes by adding new functionality, writing an automated transform to shift usage from old to new, sharding out the reviews to the suitable OWNERS, and finally removing the old functionality.
Very heavily used common code tends to be owned by teams that are used to this burden. That said, it does complicate upgrades to third party components.
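As a toy illustration of that "automated transform" step (not the actual internal tooling; the API names here are made up), the shape is roughly: add the new API, mechanically rewrite callers, then delete the old one.

```
# Toy large-scale-change script: mechanically move callers from a hypothetical
# old API to its replacement across a source tree. Real tooling is AST-based
# and shards the result into per-OWNERS reviews.
import pathlib
import re

OLD_CALL = re.compile(r"\bLegacyAuthClient\(")  # hypothetical old API
NEW_CALL = "AuthClient("                        # hypothetical replacement

for path in pathlib.Path("src").rglob("*.py"):
    original = path.read_text()
    updated = OLD_CALL.sub(NEW_CALL, original)
    if updated != original:
        path.write_text(updated)
        print(f"rewrote {path}")
```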
You do not chase them down; the build system detects all affected modules and runs their tests. That's the advantage of a monorepo - continuous integration that includes all dependent modules.
It's also a disadvantage, to be clear. Tests take longer to run when you need to rebuild your dependencies and not just your own code. There's no easy way to put something in maintenance mode and only take changes for bug fixes, because maintaining forks isn't really a thing. Thus downstream dependents must pay not just for bug fixes but also for feature improvements, deprecations, etc.
It works well enough if everything is making money and is being actively developed.
> Yeah, the "Fuck You. Drop whatever you are doing because it’s not important. What is important is OUR time. It’s costing us time and money to support our shit, and we’re tired of it, so we’re not going to support it anymore." is kind of true story for internal engineers.
Oh man that explains everything, I can totally relate to that.
The monorepo makes everything worse, but it's only part of the problem, I believe.
The big problem is forcing everyone to keep everything “updated”.
What is really needed is a way, given a certain state (branch, etc.), to reliably reproduce the build artifacts, AND a way for your software to depend on those packages at specific versions.
This way you can make an informed decision about when or if you upgrade something, and you know for a fact that (setting security issues aside) you will not have to touch the code and you can keep running it forever.
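For what it's worth, that's roughly what lockfiles and hash-pinned dependencies give you outside Google; a minimal pip-flavoured sketch (package names and hashes are placeholders):

```
# requirements.txt in pip's hash-checking mode: installs only succeed if the
# exact pinned artifacts match, so builds are reproducible and you upgrade
# only when you decide to. Install with:
#   pip install --require-hashes -r requirements.txt
somepackage==1.4.2 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
otherpackage==0.9.0 \
    --hash=sha256:1111111111111111111111111111111111111111111111111111111111111111
```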
Look at virtually any modern programming language. The way packages work makes or breaks the language. I never understood why Google seems to believe they are special and basic stuff does not apply to them, but it does.
Also, IMHO there's a huge difference between how things are run and work inside Google and how things work "in the wild".
> It would have saved 90% of that person's time if it were on a separate branch and we only needed to focus on security patches. (NO, there is no such concept of branching in Google's dev system).
I wonder if this is why you have so many different programming languages being used under the hood at Google? Essentially people using a programming language as a branch. If you're working on a completely different language in theory you could shelter your team's product?
I'm not sure I agree. Almost everything major is still written in Java or C++. And I'll disagree that the issue is with libraries at all. It's with other services changing out from underneath you.
I've been in GCP support for over 4 years. My opinions are my own. I try to stay as impartial as I can about my employer. I know there are a lot of valid criticisms to be made about GCP. As one of the people often bearing the brunt of the fallout whenever there is a painful outage or deprecation, I share some of them.
But it never gets easy to read posts like this. This one appears to be a collection of old hacker news posts. And I can't help but think about all the posts that are never written, submitted, or upvoted about every time someone had a good experience with support. No one talks about their GCE VMs with years of uptime.
I'll spend hours on video calls with customers, going through logs, packet captures, perf profiles, core dumps, reading their code, conducting tests. Unpacking the tangled web until the problem is obvious. It's always a good feeling when we get to the end, and you get to reveal it like the end of a mystery novel. For me, that's the good part. Sometimes it takes a couple of hours. Sometimes weeks. Months even. And then the customer goes on with their life, as they should.
That's how it always should work. But no one talks about when a process works the way it's supposed to. People want to read about failures. And trade their own analyses about why that failure happened and how Google is fundamentally broken for these N simple reasons.
I don't want to diminish the negative stories as they are about people who went through real pain. I also realize that I'm just one person, and I can only work with so many customers in my time here. I'm not sure where I'm going with this.
I guess what I'm trying to say is, keep an open mind. This is a highly competitive field. There are strong incentives for GCP to listen to its customers.
I am on GCP; the greatest fear is not them increasing prices or disabling a product, it's getting flagged and not being able to do anything. The support has been helpful, that's true.
I guess this fear comes mostly from people on HN reading these stories so often, and from how they get resolved: by knowing someone at Google. I don't know anyone, and I should not have to.
There should be some contact you can reach for disabled accounts. I keep looking at AWS, which does not have this problem, but GCP is so much easier to use.
The only viable alternative to knowing someone at Google, is getting your story to the front page of HN / Slashdot / similar, and hoping someone from the other side of the wall reaches out.
> no one talks about when a process works the way it's supposed to. People want to read about failures.
No, people don't want to read about failures. People expect services to work as advertised. People write when something doesn't live up to the standard it should have, even if 99% of the others are fine.
Years of uptime is expected, so of course no one writes about that, but if something goes beyond one's expectations, like being able to run a server for 10 years or more without downtime, I'm sure people start to feel like writing positive stories.
Hey, thanks for all that you’ve done. My experience with GCP has been an incredibly positive one. GCP documentation has always seemed fantastic. Our TAMs were very responsive.
GCP support has by far been the best support experience. I have to say that the initial days it seemed to suck. The UI was some 90s google group clone which wasn’t even accessible through the GCP console, it was its own separate site which I always found amusing. But over time, the UI and quality of support became more streamlined and predictable, and I consider it one of the best SaaS support experiences today.
One particular incident I’ll never forget is a support person arguing with me why network tags based firewalls are better overall for security than service accounts based firewalls. I expected to have a very cut and dry exchange but the support engineer actually did convince me that tags are superior to using service accounts. I did not ever expect to have had such a discussion over enterprise support tickets.
Ahh I actually love those conversations, when they express the knowledge in a way that actually helps build your own understanding about the product and use it better.
Dear HN reader, if you ever did that to really help a customer, you are truly an MVP :P
The thing that always gets me is when this is presented as some sort of huge commitment -- this is the bare minimum that enterprise support plans provide and have provided for decades.
> No one talks about their GCE VMs with years of uptime.
This is literally what you sell. It's like a restaurant owner complaining about the bad reviews, saying "nobody talks about all the people that we fed and never got food poisoning!". Yeah, I only need to hear about a few of those to be concerned about going there, I don't care it's less than 1% of your customers that get food poisoning.
On the contrary, I think it is quite normal to leave a positive review when you've had a good time at a restaurant. When I look for restaurants, I certainly read both positive and negative reviews.
You make this almost binary by using food poisoning as your metaphor (either you get it or you don't), but normally there is a much more nuanced range of experiences.
I never said they do, I said they sell "GCE VMs with years of uptime" i.e. "just pay for VM and let us worry about maintenance, you can simply assume it works for all intents and purposes".
But, the kind of stuff that people are worried about is not VM uptime. It's "honest customer being banned with no recourse and having their entire livelihood destroyed". You don't hear Google bragging that "we have an SLA of 99.9% service to honest customers - we only destroy 1 out of every 1000 businesses that choose us!". You can bet that their SLA there is 100%; even if it's not achievable, 100% is absolutely their target. Just like the target for Facebook is 100% security, and you won't hear Zuck bragging that "we are 99.9% secure, we only leak your private data once every 3 years!".
> There are strong incentives for GCP to listen to its customers.
I really want to believe this, but my experience as a Google customer (not GCP, but Suite, Fiber, Fi, GMPAA, etc.) leads me strictly away from considering a business dependency on Google.
I want to love the products but they vanish. I want to love Google but they're not around when I need them most.
Idk, sample size of 1, but I've seen Cloud SQL issues almost bring a startup to its knees. We wanted to move to AWS but migrating the data was too cost/time/risk prohibitive. The support engineer was helpful, but ultimately the issues we faced were not prioritized because we weren't Google. Being not too specific for privacy.
This crowd also harshly criticized the Coinbase founder / Bitcoin when he was looking for a co-founder, and now his share of Coinbase is worth $10 billion or so.
I don't know, it seems to be mostly filled with speculation and anecdotes. I can find similar anecdotes of bad behavior by Amazon, for example claims of using AWS to steal its users' business ideas[1].
Admittedly, the original author's title of "Why I distrust Google Cloud more than AWS or Azure" much better describes their position than the editorialized title of the HN submitter ("Why Google Cloud is less trustworthy than AWS or Azure").
> Admittedly, the original author's title of "Why I distrust Google Cloud more than AWS or Azure" much better describes their position than the editorialized title of the HN submitter ("Why Google Cloud is less trustworthy than AWS or Azure").
I thought the new title was better, but I did not want to change the original (because I had already shared the link with friends), so I just changed it when submitting. Should've gone with the original...
I can't speak to Azure, but AWS has a policy of never discontinuing services or features, even if they are replaced by something else, whereas GCP does discontinue entire services (although if a service is generally available they have to give 12 months' notice according to the terms of service).
RBE was discontinued during alpha. They tried a thing, it didn't work out for some reason, so they decided not to bring it to market. This hardly fits the bill of the typical Google deprecation.
I'm one of those "you'll get my bare metal and systemd out of my cold dead hands" kind of guys.
But I have reasonable exposure to both AWS and GCP, and I can say that, by far, Google Cloud is easier to reason about. As a consequence, it's much harder to misconfigure. The two large AWS deploys I've seen have, at best, had billing issues no one really understood (incl. AWS), and at worst, security issues.
Complaining that Maps prices went up, in the context of cloud hosting, is to me like complaining that Amazon raised the price of the Kindle - i.e., not particularly relevant.
I used to think the problem with AWS was pricing and hidden costs etc. But in reality it’s because companies just let developers run wild without restriction on AWS and end up over provisioning or pulling in expensive services to solve dead simple problems.
The issue is definitely not AWS. It's always the developers. You really need a gatekeeper for AWS to question why you need a service and to ask for a price estimate on cost and usage.
It's not that one-sided. On AWS, you have to go hunt to find the pricing for everything. On GCP, it's right next to the instance that you're starting. The GCP dashboard also provides recommendations to down-size VMs if they are too large. On AWS it's also super easy to spin up a VM and never see it because it's in a different region. These little things add up.
I'd implement a good cost attribution strategy before trying a gatekeeping approach. Companies generally have at least semi-functional mechanisms for managing department budgets. Once there's a clear picture how much each service/webapp/product costs to run then they can feed that into the existing budget infra and let things shake out.
Until you know both halves of the ROI calculation it's difficult to focus effort on trimming the right things. e.g. It seems silly for a team to spend $2k/mo on naive/managed solutions for simple things but maybe it's worth it if it helps them avoid hiring another $10k+/mo engineer.
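One low-effort way to start on that attribution (a sketch assuming AWS and boto3; the tag keys are just examples): tag resources with a team and cost-center, then activate those keys as cost allocation tags so the spend shows up broken down in billing reports.

```
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Tag an existing instance so its spend can be attributed to a team.
# "team" / "cost-center" are example tag keys, not anything standard.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # placeholder instance id
    Tags=[
        {"Key": "team", "Value": "checkout"},
        {"Key": "cost-center", "Value": "cc-1234"},
    ],
)

# The same keys then have to be activated as cost allocation tags in the
# billing console before they appear in cost reports.
```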
Funny, since AWS has much more granular control over IAM roles and users than GCP does, so that the infrastructure/security group should be able to provision devs with the ability to roll their own IAM in a scoped way to prevent issues.
In my experience trying to configure AAD policies, AWS IAM and (to a very limited extent GCP IAM), it does not generally require a large investment in time. It does require a development account in which the developer has full access to IAM/AAD.
At my employer, we have a gatekeeper team who are terribly overworked and hard-pressed to push back too much when business outcomes are at stake. One of the more successful things they've done is create a Terraform repo anyone can contribute to. They will review PRs and manually apply changes for production accounts. What's great is that these folks can take my PRs that are 80% right and help me achieve least privilege better than I could on my own. However, other devs really don't care about least privilege, and they tend to go for large, open policies.
AWS's IAM policy is far and away the most sophisticated and granular, and even has a nice UI now. Trying to achieve this in Azure is next to impossible because you must have extremely high permissions to even be able to make new roles/policies that are super granular.
Also, permissions boundaries are specifically made for the use case of "IAM teams delegating some control to devs".
IAM team creates a "developer admin" role/user that can only create users/roles that have a permissions boundary on it. That way, no matter what policy the dev admin grants, the dev user can only do what the permission boundary allows.
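Roughly, with boto3 (ARNs and names are placeholders), the pattern looks like this - whatever policies get attached later, effective permissions are the intersection with the boundary:

```
import boto3

iam = boto3.client("iam")

# Placeholder ARN for the boundary policy maintained by the IAM team.
BOUNDARY_ARN = "arn:aws:iam::123456789012:policy/dev-permissions-boundary"

# A user created by the "developer admin"; the boundary is attached at creation.
iam.create_user(
    UserName="feature-team-bot",
    PermissionsBoundary=BOUNDARY_ARN,
)

# Even if a very broad policy is attached later, the user can still only do
# what the boundary policy allows.
iam.attach_user_policy(
    UserName="feature-team-bot",
    PolicyArn="arn:aws:iam::123456789012:policy/team-wide-open-policy",
)
```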
(A) Not necessarily, and (B) if so, okay so what? The whole point of the parent comment was about keeping devs in check, and I submitted an anecdote that AWS in fact has better tooling to keep people from doing things they aren't supposed to. Not related to billing oversight, but permissions.
Is that true? Google Projects correspond to AWS accounts. You can have as many AWS accounts as you like. If I'm not mistaken (very much possible), you can even inherit permissions to AWS accounts comparable to Google's org/folder hierarchy.
What does a billing issue look like? Is it something trivial, like they charged you for $X+X, but you only used $X (e.g., they double billed you -- should be solvable with a phone call)? Or more complex, e.g, they charged for more egress than you actually used (kind of hard to prove or disprove after the fact)?
Not GP, but in my experience, AWS bills run away from you if you're not careful. They don't have great tooling (or, at least, accessible or intuitive tooling) to determine what your bill is going to be, or to set limits.
Pair that with a misconfiguration because of their horrendous web interface, and you're in for a surprisingly large bill at the end of the month.
Google, on the other hand, has some of the best tooling in the industry when it comes to billing and cost management. I dislike Google as much as the next guy but I'd feel more comfortable with them over AWS if I ever needed to choose.
The last startup I was at repeatedly ended up with $25k AWS bills due to runaway elastic search clusters or dynamodb. The only reason we resolved them was due to us having hired our former AWS account rep.
I got my fair share of those from customers while at GCP, but I agree that in the past several years GCP has gotten much better at billing infra given all the problems we heard of...
Honestly there are tons of examples of runaway bills on both, and neither provides much better of a way to handle visibility of cloud billing than the other. We could discuss the limitations of AWS' billing estimation systems ("only visible when you look!") or GCloud's budgeting system (which has notoriously questionable "limitations" https://www.theregister.com/2020/12/10/google_cloud_over_run...), but neither of the two are particularly better than the other at avoiding surprise billing.
There is a bias effect, common not only to this part of the thread but to the entire thread, and perhaps to any discussion of "which cloud is better", where people who are clearly invested in one platform or another show biases that help them justify their (or their company's) lock-in decisions.
This is not to say that cloud itself is a bad call, but it's crazy how many people out there don't realize how their situation and fear of "making the 'wrong' decision in the past" affects how they discuss the options (or even how they reinvest in a particular option later!), and how they claim "actually that vendor is worse than mine"
I have larger development investments in both AWS and in Google Cloud. They each have pros and cons but runaway billing is a gotcha of minute-by-minute rental billing of compute, storage and network services (the "cloud") and how we use it, and not really something specific to one vendor or another. It's just something that you have to be constantly aware of, constantly monitor, and work to avoid.
It's 100% our mistake(s). It's only AWS fault indirectly, in that AWS is complicated and requires a lot of non transferable knowledge.
As a small example, we currently pay $750 for Route53. We don't know why (it isn't traffic). It has something to do with Route53 resolvers that our "lead sre" setup before leaving. AWS support doesn't understand how it's setup, and since $750 is relatively small, we've just left it.
I just completed a migration from AWS to GCP. My experience was previously entirely AWS, but GCP has been really nice. GCP has fewer features (e.g., no scheduled filestore backups), but GKE is far and away better than EKS and the overall console UX is far better as well. There’s also generally less to understand and thus misconfigure, and far fewer half-baked features to wade through to find the happy path (I’ve spent way too much time using CloudFormation).
I haven't used Kubernetes on either platform - so there may be more to that.
One thing I really dislike about GCP is how expensive it is for personal or hobby use. I burned through $300 for a simple vm on GCP in a few weeks because their cheapest instances are so expensive.
That is incorrect. It costs $75/mo and they give you that as credit. Also, why use GKE if you're trying to learn Kubernetes? A single-instance kubeadm cluster is perfectly fine for that purpose (even better).
If you roll your own cluster without GKE, I believe you have to configure your own load balancers, ingress controllers, etc. Having some of that ready to go out of the box allows you to learn Kubernetes concepts more gradually.
Yeah, navigating either platform is tough for hobbyists. I can get more lambda invocations than I’ll ever use for free but a single load balancer is like $30/month, never mind instances.
I see the Google hate on HN is strong enough that an article that just regurgitates the contents of a few articles and blog posts can be upvoted to #1 in short order.
The issue here is that the majority of criticisms apply to Google products and services _other than_ GCP. By and large, many commenters here (myself included) have had very good experiences with GCP. Less so with Google Reader.
> I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry,” she continued. “Google Cloud has been building our theme on Democratizing AI in 2017, and Diane and I have been talking about Humanistic AI for enterprise. I’d be super careful to protect these very positive images.
Not being judgemental about whether defense is right or wrong, but it's fair to say we are in the running for it. What is not okay is saying "we care for humanity and freedom of speech" and then having backchannel discussions about how dissent can be quelled with government agencies.
So to me the distrust is about the company, possibly, and probably not so much about the reliability of its services.
And the odd thing is that Google has been significantly better for consumers than other FAANG companies.
Apple has historically been anti consumer and anti developer with a huge marketing budget to wash it. Facebook intentionally makes us sad. M$ and their anti competitive practices should be well known.
What? Apple has been pro-consumer to the detriment of everyone else. Developers are still screaming that they aren't allowed to install malware on my iphone.
"Let" Russia? I am pretty sure that Russia is the one firmly in charge of that decision, not Apple.
It isn't inconsistent to push back on government overreach where it is legal to do so, but conform where they have no other legal choice except dropping an entire market.
To be honest, since I've heard literally all of these stories/anecdotes, and in some cases have been affected by them, and yet there is still nothing new in this article to rehash, it actually makes me feel good having gone with GCP.
I certainly have felt what I thought were missteps by GCP in the past, but over the past couple of years I have been an extremely happy customer, and I still feel I've architected my applications so that if worse came to worst I could migrate off GCP if needed.
> The clock is ticking for Google Cloud. The Google unit, which sells computing services to big companies, is under pressure from top management to pass Amazon or Microsoft - currently first and second, respectively, in cloud market share - or risk losing funding. While the company has invested heavily in the business since last year, Google wants its cloud group to outrank those of one or both of its two main rivals by 2023, said people with knowledge of the matter.
If they pulled this off, they would be hailed as gods of marketing for eons to come.
Tactically, what would this look like? Getting large numbers of AWS or Azure customers to move over? Companies have higher priorities than changing their cloud provider, they want to focus on growing their business.
Indeed - in fact Microsoft recently tried to buy Pinterest just to get a large company that they can move from AWS to Azure:
> The deal, which would have been Microsoft’s largest acquisition to date, confirms that the tech giant is continuing to pursue an acquisition strategy aimed at amassing a portfolio of active online communities that could run on top of its Azure cloud computing platform. Pinterest - which boasts more than 320 million active users - currently relies on Amazon Web Services (AWS) as its infrastructure provider.
If they could onboard themselves on Google Cloud they'd probably be the biggest cloud. Seems like that's what they should be trying to do, it'd also future-proof them if they have to split up the business
What left a bad taste in my mouth is any quota increase getting rejected unless you got on a phone call with sales and listened to an upsell speech. I'm trying to give you money and you're putting roadblocks in front of me.
I don't know why all the cloud companies make you jump through so many hoops to get more quota - but Google has consistently been the fastest for me. Fill out some random Google Docs form (why does it have to be so dodgy?), receive quota within 24 hours.
Contrast:
Microsoft - flat out refused me more quota despite spending 10k/mo with them. Required me to convert to invoice billing, and then wanted a bunch of proof of incorporation and when my trading name didn't match my registration name they were unable to proceed.
Oracle - took 3 months of escalations and deliberations, required me to explain on the phone to a VP why I needed the quota.
AWS - frequently requiring me to write up a spiel about what I'm going to do with the quota before they approve it, increasing the RTT to 72hrs+ - do they actually verify this? How would they? Why do they care so much? We've spent 50k+ and always paid the bills, what's the issue?
Wait, really? Quota are recorded by code that must be changed by pull request, not an entry in a db? That sounds like an insane waste of engineer time.
Not generally. There is a quota service for all modern gcp APIs that handles per region, per user, and per project quotas that internally you just need the right permissions to update for a customer request, no PR required.
It's not as bad as you think, the PRs are written on their own and the engineer in question probably blanket approves all of them every morning while they're checking email (quickly scanning them looking for automated red flags).
- creates a PR with the project ID and the requested value as an exception in a file (this requires OWNERS approval, so at a min one eng/pm to approve)
- file would update a DB the next time it's picked up
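Purely to illustrate the shape of that workflow (an entirely hypothetical format; I have no knowledge of the real file), the checked-in entry might look something like:

```
# Hypothetical quota-override entry, reviewed like any other change and
# synced into the serving database the next time the file is picked up.
QUOTA_OVERRIDES = {
    "projects/example-customer-project": {
        "compute/gpus_t4_per_region": 8,  # made-up quota key
    },
}
```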
There are several valid reasons to do this - concerns about customer solvency, and making sure you ain't gonna crash their house-of-cards architectures, to name a few. The bigger problem is when those requests get stuck in bureaucracy hell, and all three major cloud provider companies are known to be extremely bureaucratic.
Microsoft was the most baffling - happy to give me 25k+ credits for free in their startup program, unable to let me pay them for the same setup going forwards.
My quota requests weren't outrageous - never more than 20 VMs, albeit very large VMs.
It is not your quota that is the problem, it is the credit risk.
If your paperwork is not up to spec, they are running the risk of credit exposure if you, the customer, don't pay. The risks are higher for any provider when there is no legal entity to sue etc., which is why they want you to convert to invoice billing.
I had it happen three times on the same account including a hidden quota that the UI didn't even list. Sales person said that it might get better if we moved to invoice billing instead of a credit card (why?). After that I just gave up and moved back to AWS.
edit: It's also not a new account, very consistently paying for some services (maps, etc.) for years before I decided to ramp up.
edit2: It was also a tiny quota increase from the default one so not like we suddenly asked for five hundred instances.
I was really impressed by how simple it is to increase quota on AWS. It was just a simple Elastic IP quota increase and a few things related to VPC quotas. I haven't used Google Cloud as much, so I can't comment on it.
If you don't mind me asking, what quota increase was rejected?
A relatively small GPU increase so we could do some machine learning project testing. We asked for I think 8 T4s which are like $180/month each if run 24/7.
I had a similar problem. We wanted to get on a call to resolve it quickly, but were instead forced into an email chain where I received a reply every 3 weeks.
While I'm not arguing the general point of the article, I will counter point one thing.
> Will Google Cloud even exist a decade from now?
This seems wildly speculative, and the likelihood of GCP, or its core offerings, not existing any time soon is next to zero. Google would have to royally fuck up for this to be the case, but even if it ends up being the case, there will be a string of lawsuits lined up that will likely cost the company more than keeping the product.
I've worked at billion-dollar companies that aren't shy about dropping a lawsuit and that have gone all in on GCP, with contracts worth millions of dollars. Forcing such a company off their product seems reckless at best, malicious at worst. Such a big decision would drive Google into the ground, maybe not via the consumers, but certainly via the lawsuits that would inevitably ensue.
Keeping GCP running to satisfy their contractual obligations and continuing GCP as a product sold to the general public are very different things. Shutting down any enterprise product tends to involve ending sales long before you actually shut it down.
Cloud computing hasn't existed for all that long though - eventually a cloud provider is going to fold for one reason or another, and that'll change the industry pretty thoroughly when it does happen.
Lots of stuff "won't happen" until it does and the big speculation at the moment is that Google might eventually convince itself that the adtech business is the only business worth being in.
Can HN please add a filter for these increasingly lame "Google cancels all the stuff" posts?
Yes, Google has cancelled services, but they've all been free things that they had every right to decide would never increase revenue. Why should Google have to keep everything they ever built running for ever?
If you pay for services from Google, then it's a completely different story. We've used Appengine for 12 years now, and every time they've decided to deprecate services, there's always plenty of notice, a superior replacement, and usually lower costs.
>If you pay for services from Google, then it's a completely different story. We've used Appengine for 12 years now, and every time they've decided to deprecate services, there's always plenty of notice, a superior replacement, and usually lower costs.
Really? I've had the complete opposite experience on AppEngine as a paying customer.
I was using Python2 AppEngine with ndb and the Users API. Cloud Datastore + ndb automagically cached your data and worked pretty nicely. When they moved to Firestore, they dropped that feature and recommended you buy your own Redis DB and manage caching yourself. They got rid of the Users API entirely and forced apps onto OAuth, which is much more complicated to integrate.
The old AppEngine emulator worked really nicely as well, in that you could emulate a pretty full AppEngine environment locally. When they moved from Python 2 to 3, they dropped most of the emulator's features. True, AppEngine apps require less AppEngine-specific code, so there's less need for an emulator, but it's still useful for testing certain scenarios. I checked recently and it seemed like they had improved their emulator, but I believe there was about a year where there was no Admin UI for their emulators like there had been for AppEngine Python 2.
It's all caused me to move away from AppEngine and rely more on vendor-agnostic stacks.
This sounds like a mis-characterization of the situation. It is not Google who have moved from Python 2 to 3, it's you. Google still offers and supports the legacy Python 2.7 runtime for App Engine, and will continue to do so indefinitely. The same is true about ndb and Firestore. It may be that you moved from NDB to Cloud NDB (Firestore), but nobody forced you to do it.
But I think it's not such a "free choice" when Google announces a service is deprecated given that they're notorious for shutting off their deprecated services. Once Google announces a deprecation, I think it's fair to assume an EOL could come at any time, and I don't want to be caught on the back foot when that happens with not enough time to migrate.
I see that Google now has clear messaging that they will support Python 2.7 AppEngine indefinitely,[0] but I don't recall seeing that messaging in 2019. Internet Archive only has snapshots of that page[1] going back to April 2020, which makes me think they hadn't made it clear until then that this was their policy.
In 2019, I just remember seeing scary warnings everywhere in AppEngine docs of "we strongly recommend you get off of Python 2.7." I talked to Google DevRel folks at PyGotham 2019 and asked them what was going to happen to Python 2.7 AppEngine. They said it was going away but they hadn't picked an EOL date yet.
Woof, sorry. I worked on the Users API deprecation a while ago (2018-2019, prior to any announcement), and there were a few features that just couldn't be migrated reasonably to OAuth (e.g. the `admin` functionality). We did consider things like moving to Cloud IAM (e.g. what we did for GCF and Run) as well as Firebase Auth, but couldn't replicate everything :/
Hi-5 fellow "got caught across the boundary" traveller. This happened to me when I moved jobs to be devops lead on a new product for a company - the google recommended contractor implemented regular AppEngine with ndb, and about 6 months later we were staring down the barrel of everything being deprecated (and had to do the same: add a redis instance to the stack) and then hope that the new AppEngine was going to be ready when we actually went live.
It eventually came together but we ended up having to do a whole lot of refactoring while we were on a tight launch schedule.
The thing is, they change/deprecate/retire paid-for services too (in brutal contrast with the competition).
Two quotes from one of the posts referenced in the submission (from Steve Yegge):
...I know I haven’t gone into a lot of specific details about GCP’s deprecations. I can tell you that virtually everything I’ve used, from networking (legacy to VPC) to storage (Cloud SQL v1 to v2) to Firebase (now Firestore with a totally different API) to App Engine (don’t even get me started) to Cloud Endpoints to… I dunno, everything, has forced me to rewrite it all after at most 2–3 years, and they never automate it for you, and often there is no documented migration path at all. It’s just crickets. And every time, I look over at AWS, and I ask myself what the fuck I’m still doing on GCP. ...
... Update 3, Aug 31 2020: A Google engineer in Cloud Marketplace who happens to be an old friend of mine contacted me to find out why C2D didn’t work, and we eventually figured out that it was because I had committed the sin of creating my network a few years ago, and C2D was failing for legacy networks due to a missing subnet parameter in their templates. I guess my advice to prospective GCP users is to make sure you know a lot of people at Google… ...
On the first, I've worked as a PM on Firebase, App Engine, and Endpoints.
Nobody has been forced to migrate from the Firebase RTDB to Firestore (and AFAICT the Firestore API hasn't deprecated anything?), App Engine deprecations (https://cloud.google.com/appengine/docs/deprecations) are basically "you can't do new things using these old things, but the old ones will continue to run" (though other deprecations I've done have provided clear explanations of why we're deprecating and how someone can work around it), and Endpoints is still around despite being comically out of date (it's even getting a managed version!).
>We've used Appengine for 12 years now, and every time they've decided to deprecate services, there's always plenty of notice, a superior replacement, and usually lower costs.
That doesn't remove the cost and time of updating your code and migrating.
100% this. Deprecating things your customers use always sucks for your customers. No matter what. It doesn't matter how long the notice is. It doesn't matter how good the replacement is. You have made work for your customers that they wouldn't otherwise need to do.
That should be a decision for the customer to make and not one that is forced on them. Many larger companies tend to have legacy systems that are basically black boxes to them. The cost of updating one can be orders of magnitude more than any cloud costs for running it.
>>but they've all been free things that they had every right to decide would never increase revenue
If my service provider were to hold this opinion, I would not be able to trust them; actually, I would start searching for an alternative immediately. It sounds like a service provider who is okay with turning customers into lab rats to experiment on, and, once done, just discarding them.
> every time they've decided to deprecate services, there's always plenty of notice
Ah, but with AWS, if something is deprecated, generally they tell you you should use something else, but the old way will continue to work indefinitely. You can switch over on your own timeframe.
Just got kicked off of google music as a paying customer, so nope, got a shitty youtube music replacement.
As a paying Google Fi customer I got transferred to Hangouts, then that got canceled, and apparently I need to change my phone number if I want to make an outgoing VoIP call again because ??? Google.
> Just got kicked off of google music as a paying customer, so nope, got a shitty youtube music replacement.
YouTube Music isn't available in my area. Got kicked off Google Play Music with a "download your content, we're deleting it" and couldn't pay even if I wanted to.
Announced discontinuation in August, full deletion of my Music Library in February.
> On 24 February 2021, we will delete all of your Google Play Music data. This includes your music library, with any uploads, purchases and anything you've added from Google Play Music. After this date, there will be no way to recover it.
Same here. I ended up on Spotify instead, which is still shitty, although less so than I remember (no music player should ever display the notice "can't play the current song"; playing the current song is its core feature).
However, I don't think the two can be compared: switching music providers doesn't require launching a giant project to make serious infrastructure changes.
Just responding to the OP "Yes, Google has cancelled services, but they've all been free things that they had every right to decide would never increase revenue. Why should Google have to keep everything they ever built running for ever?"
To be clear, they are not responsible for running all their services forever, but it puts the lie to the idea that Google is fine with stability in a service: they'll take something commercially successful enough and sacrifice it for something else in the hope that it attains "hypergrowth".
At the end of the day Google doesn't give a flip about its customers; the structure of Google will always incentivize new services over fixing and maintaining existing stuff, and anyone who has a Pollyannaish view of this needs to wake up.
What about all the data they gobble up for free from their users during the time a product is active?
Google has never been free; its bread and butter is the data we give it. We were its product.
Look where the cash cows are. For MS and Amazon, their cloud platforms are top of mind for senior management. Even more so with Amazon's new CEO having been with AWS since inception.
For Google, anything not driving search and ads is a side show. Does anyone think Sundar's staying up at night worrying about Asian egress pricing when he's about to spend the next day being accused of censorship and election influence by performatively outraged senators?
This is very real. After a production outage caused by excessive health check failures the day after a massive GCP outage (Sept 2020) -- where we quickly hit our already-oversized quota (quotas, another GCP issue) during a traffic spike -- we've moved all of our sensitive workloads to AWS.
We continue to use GCP for less sensitive workloads and for GKE, but our entire ops team has an unspoken distrust. And this is purely an infra-specific opinion, ignoring the fact that we've had to rewrite apps entirely after breaking changes from Google products.
GCP has a great UI, the project structure makes much more sense, and billing is way easier, but after having a massive outage during a pretty standard scaling event, we just can't justify the risks.
Google generally does not come off to me as having a great customer-support culture. To appreciate the scale of the issue, take a look at the hoops and black boxes merchants who want to be listed on Google Shopping have to go through, and all the horror stories online. I did notice on Google Workspace and Google One that they do seem to be trying to improve the support experience.
Gotta love a typical hackernews comment begging for links instead of just searching for it yourself. On the contrary I don't think your comment is useful either.
I’m always considering the potential for Google to deprecate GCP but, for us, it’s the best option today.
We’re heavily invested in GCP but aside from BQ, I feel we can lift and shift to another provider if need be with some pain. Even our BQ work, while extensive, is mostly SQL and would likely work with effort but nothing earth shattering.
That said, I still prefer GCP to AWS by far, but there’s no way they’re going to surpass AWS by 2023 unless something big changes.
It probably would have been better for the author to take the "don't sell out to a single cloud vendor API" angle, but I guess that approach doesn't get the clicks. I've been screwed over by both AWS and Azure, and never by GCloud, but that doesn't mean I trust any of them.
The best (silliest) business model I've seen is the new thing of vendors promising to make you cross-cloud...via locking you into the platform offered by the vendor (which is usually entirely hosted on one specific cloud provider anyway).
There's valid reasons to support multiple providers, but that is definitely not one of them.
Indeed. The thing that caused me to actually move stuff off AWS to GCP was that you could deploy generic Node Express apps as Firebase Cloud Functions (and then Cloud Run). I knew I could move those anywhere if I needed to.
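The same escape-hatch reasoning works in any language. As a purely illustrative sketch (Python/Flask here, not the Node stack mentioned above), an app with zero provider-specific imports is exactly what keeps that option open:

    # app.py -- a vanilla Flask app with no GCP-specific imports, so the same
    # container image runs unchanged on Cloud Run, Kubernetes, or a plain VM.
    import os

    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "hello from a portable container\n"

    if __name__ == "__main__":
        # Cloud Run injects PORT; fall back to 8080 for local runs.
        app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))

The only provider-specific piece is the deployment command, which is exactly the part you can afford to rewrite.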
Honestly, it's kind of brilliant. Convincing people that they need to pay you outrageous sums to not change the fact that they're locked-in at the same level is really clever.
Kinda scummy? Probably. But brilliant nonetheless.
GCP doesn't really support IPv6 at all. Sure you can terminate IPv6 on a load balancer and then proxy the connections over IPv4 back to your instances, but you can't get native IPv6 on an instance.
It's that type of shortcoming that leads me to believe Google does not see a future in this product.
The network virtualization scheme and likely many other parts of the stack need to change in order to support v6. It's not that there aren't investments in this area, it's just a non-trivial effort.
Were they surprised that IPv6 existed? It's not as though this is a new technology, or that the necessity hasn't been obvious since around the time Google was born.
The best argument against GCP is indeed the unpredictability with which Google turns down services. On the other hand, you have Microsoft, which still supports - to give an extreme example - Silverlight (!). If Silverlight were a Google product, all support would have ended years ago.
“Reliable”. In addition to the major outages that normal DCs have, they also have dozens of small ones that never make it to the status pages. Even for simple stuff, more so for managed stuff. There are some good reasons to pick a public cloud, but if you're picking it for reliability you will be disappointed.
The only reason I didn't try Google Cloud is payment. I can't even run a couple of VMs or a test DB without using a credit card. This means that if someone hacks me, if the documentation isn't clear enough, or if I make a mistake, I will pay for it in full (and immediately). That is incredibly daunting for beginners and for people who cannot afford such scenarios.
With AWS at least I can use a prepaid card: if something bad happens for any reason, at least I know I can afford to eat the day after.
This seems like a particularly strange reason to not try their service and to me is akin to saying "I won't use CarRentalCompany because they require a credit card. What if I'm new to driving and total the car by wrapping it around a tree?"
If anything, this is just AWS being overly generous and forgiving.
I'm shocked that any cloud provider lets you use prepaid cards... all of the major providers have problems with crypto mining and abuse, so it's crazy that AWS would allow prepaid cards that might be laundered, etc.
And IMO, if you're a real customer, all the providers are fairly forgiving, provided you can get in touch with a real human who works there.
While I agree with the automation piece, guessing first.last@google.com is going to get someone a large % of the time. You can reach an engineer or a PM with about 10 minutes of stalking LinkedIn.
I hate Google and would NEVER use GCP for anything remotely meaningful (I'd use Azure and even Oracle first). Nevertheless, I had a domain issue (through Google Domains) and was pleasantly surprised by the customer support. Almost as good as AWS's paid support.
OTOH, I find AWS quite straightforward, but I've been using it for several years now. Their support is worth its weight in gold.
Also, if you're paying GCP any reasonable amount of money, you have an account manager who will respond in < 24 hours to connect you with the right PM to deal with the issue. Google deals with lots of random humans, GCP mostly deals with actual businesses. As much as I hated the support offshoring Google (and sometimes GCP) did, most actual businesses could get a human fairly quickly.
Answering tickets is only part of the kind of support I look for. Azure will get my customer's account managers connected to me, and both AWS and Azure are willing to come on customer calls with me for a large enough deal. I never got that kind of support from Google.
Seems pretty thin. A handful of price hikes that made the news is not something that would bother an ordinary company. Maybe a little mom and pop that's running on tight margins, but they would be better served by squarespace anyway. Everyone in business understands that sometimes you raise prices.
The only advantage of Google Cloud is the TPUs--if you're not running massive machine learning workloads, AWS is almost always the better choice on features, service, and reliability.
The *link between compute and storage* is not even officially a production product:
"Please treat gcsfuse as beta-quality software. Use it for whatever you like, but be aware that bugs may lurk, and that we reserve the right to make small backwards-incompatible changes."
https://github.com/GoogleCloudPlatform/gcsfuse/
If a supposed cloud platform can't even produce a reliable way to access your data, then they have no basis being used in any halfway serious setting.
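To be fair, the FUSE mount isn't the only route to the data: the Cloud Storage client libraries are GA. A minimal Python sketch (bucket and object names are made up):

    # Read an object through the Cloud Storage client library instead of a
    # gcsfuse mount (bucket/object names are illustrative).
    from google.cloud import storage

    client = storage.Client()                    # uses application default credentials
    blob = client.bucket("my-data-bucket").blob("exports/2020-08-31.csv")

    data = blob.download_as_bytes()              # or blob.download_to_filename(...)
    print(f"fetched {len(data)} bytes")

That covers object-style access, but it does nothing for software that expects a POSIX filesystem, which is exactly the gap gcsfuse is supposed to fill.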
This definitely applies to Google's PaaS offerings. Google App Engine looks like a great solution, except your app is now entirely stuck on a constantly changing platform. The drop-in components they offer keep getting deprecated and re-architected with no clear upgrade path. For example, many of their original drop-in components were custom (Memcache, Taskqueues, NDB) and are now deprecated with no interoperability with the now-recommended third-party components. If you depended on these components, you're either existing in a precarious purgatory or you need to rip out and replace every use of those libraries, which completely reneges on the PaaS value proposition.
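To make the rip-and-replace concrete, here's roughly what migrating a single legacy Memcache call site looks like. This is only a sketch; the Memorystore host and the load_profile helper are made up:

    # Before (first-generation App Engine runtimes, bundled API):
    #
    #   from google.appengine.api import memcache
    #   profile = memcache.get("profile:%s" % user_id)
    #   if profile is None:
    #       profile = load_profile(user_id)
    #       memcache.set("profile:%s" % user_id, profile, time=300)
    #
    # After: a third-party Redis client against Memorystore. Nothing is
    # drop-in, so every call site changes and values now need explicit
    # serialization.
    import json

    import redis

    cache = redis.Redis(host="10.0.0.3", port=6379)   # Memorystore IP is made up

    def get_profile(user_id):
        key = f"profile:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)
        profile = load_profile(user_id)                # hypothetical loader
        cache.set(key, json.dumps(profile), ex=300)    # 5-minute TTL
        return profile

Multiply that by every Memcache, Task Queue, and NDB call in an app and the "drop-in" framing stops making sense.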
I recently switched from AWS to Google due to the complexity of managing AWS. However, I am not willing to use ANY proprietary GCP services or any tools at any level. Firebase looks amazing, not a chance I’ll use it.
GCP is great if you’re going to stick to containers and Cloud SQL. You can pick up your toys and leave if Google tries some stupid shenanigans.
But for the time being I am saving money directly by hosting on GCP, and saving even more money by not needing as much DevOps investment.
Honestly I think people are so used to AWS that they don’t realize how much of a complicated mess it’s become.
> Most business can't rationally avoid picking a cloud provider option - and that often means choosing between AWS, Azure or Google Cloud.
It looks like this lie has been repeated long enough that it became a reality for some people. Yes, you can perfectly well avoid using a "cloud provider", as millions of businesses worldwide already do, from small companies to the largest tech businesses (for drastically different reasons, though).
I have little experience with GCP, but AWS does an AMAZING job supporting old products. When they need to deprecate/remove something you get 12+ months notice and lots of support.
I thought this was going to be written by someone who uses it at large scale.
From a customer service point of view, is Microsoft the best?
Is Google the most secure?
Is Amazon the simplest to enter?
Google cancels products so frequently that no matter how cool a product may seem, I rarely even bother checking it out - I can't stand becoming reliant and then having the rug pulled out from under me.
GCP does deprecate products (former App Engine PM here, deprecated many APIs), but it's definitely less frequent than the "Google deprecates everything" memers want you to believe, and there's a minimum 12 month deprecation period before literally anything is deprecated.
This week it’s ICE and Exxon-Mobil. Next week, who knows. It’s not worth the risk when Google has a demonstrated history of catering to its employees’ demands to drop customers.
I'd like to hear arguments on this topic that aren't just moronic. If you'd launched your company on Google App Engine 10 years ago, what harm would have come to you in the meantime?
I'm not sure what the author (or Yegge, for that matter) expect a good timeframe to be for deprecating services.
Anecdotally, Google announced the changeover of Cloud logging API versions in October of 2016, with a 5-month ramp (October to March) to switch from the v1 beta API to the v2 beta API. Five months is nearly two quarters, which is quite a long window for a beta API IMHO.
That having been said, Google's habit of leaving things that are pretty much mission-critical in beta is unwise, but it should be unwise for them, not end-users. End-users that need reliability and low churn shouldn't be developing on beta-anything.
About deprecation: the gold standard is what AWS is doing with SimpleDB: essentially, never.
The thing here is that if you run hundreds of services in production - many of which work smoothly and which you don't need to touch often - you will find that Google's habit of forcing you to change how you use their tooling generates a huge burden...
They discourage you from using it and make it clear that for every use case some other AWS tool would be better, and they have been doing so for several years now... They won't even list it anymore under https://aws.amazon.com/products/databases/ ..
Still, they support it ( https://aws.amazon.com/simpledb/ ) because there are customers with legacy systems that depend upon this service.
When I was doing deprecations on GCP, it was minimum 12 months for any deprecation. Five seems shockingly fast!
As for the beta thing: GCP's definition of beta was basically everyone else's definition of GA, since the GA requirements were so insane (e.g. 99.999% internal availability) that getting there would take literal years (see the GCF beta to GA taking like 18 months?). I totally agree that it's weird that things would stay in beta for so long, as opposed to hitting industry standard levels so users can have confidence in them, but setting GA as far higher than the industry was part of Google Cloud's plan.
Giant changes like that are worth capturing in long-term planning processes, and then you need time to get ramped up on the new stack, design and implement the replacements, run all your backfills, and still have a couple of years to figure out the parts that don't migrate nicely. With enough time available, you also don't have to drop actual business-related improvements, even if your progress slows down a bit.
I thought Yegge was rather specific about what he thought. He specifically mentioned a bunch of things (Java, emacs, AWS, Android, browsers) that take the approach to deprecation that he wished to see.
I would sum up his ideas as: "If you offer a service I pay for, and deploy code on, you should not break my code while you still offer the product."
I also don't think he'd have bothered to mention one or two minor things. His examples were rampant and incredibly bad, like breaking their own offerings.