I no longer work at Firebase / Google, but two points:
1. There may be issues with the GCP integrations & UX/DX, but GCP integration is good for many customers and necessary for the future of the business.
One of the common failure modes for the 2011-2014 crop of Backend-as-a-Service offerings was their inability to technically support large customers. The economics of developer tooling are a super-power-law. So, if you hope to pay your employees you'll need to grow with your biggest customers.
Eventually, as they become TheNextBigThing, your biggest customers end up wanting the bells and whistles that only a Big Cloud Platform provides.
This was a part of the reason we chose to join Google, and why the Firebase team really really really pushed hard to integrate with GCP at a project, billing, and product level (philosophy: Firebase exposed the product from the client, GCP from the server) despite all the organizational/political overhead.
While this seems reasonable, surely there’s a caveat here:
If you have a successful popular product (firebase) and a not particularly successful or well loved product (GCP), does mixing A into B make sense?
It might make technical sense to have the robust engineering capabilities of B to support A.
…but if it’s driving customers away from A, because it’s starting to look like what they don’t like from B…
All I can say is that there seems to be a lot of pressure for GCP to succeed, and I’m pretty skeptical that the changes to firebase are being made for the sake of making that product better.
Betty had a bit of butter, but the butter was bitter, so she mixed the bitter butter with the better butter to make the bitter butter better but it made the better butter bitter…
It's from Google: they deprecate things every six months, including APIs your app is using, and if you don't keep up your app will be down pretty quickly. AWS almost never makes breaking changes.
AWS GameSparks.
Being shut down end of November and bringing down one of my favorite games with it, because the devs were foolish enough to believe what you believe, that AWS sticks with its products.
Its AWS page says Preview, meaning it never reached an official stable release. It's like using a beta/RC product: there is an explicit warning that this is not the final version and it might not go anywhere.
But since Firebase is from Google now, anyone who worries about Google deprecating products has equal worries whether Firebase is based on GCP or not.
Outside of HN, I find developers (at least in Europe) often prefer GCP and businesses have full faith in the offering because of the Google stamp.
For Azure I have yet to find a developer who likes it, but businesses are drawn to it due to the packaging with other Microsoft services (e.g. you buy Office 365, Teams, Active Directory, and Azure together in an Enterprise package) and businesses have full faith in the Microsoft stamp of approval.
Same, of all the people I've interacted with, it's AWS and GCP which are liked (AWS mostly for features, GCP more for UX/DX); Azure is at best accepted, and the only reason anyone uses it is "we were already an MS partner/shop".
> If you have a successful popular product (firebase) and a not particularly successful or well loved product (GCP), does mixing A into B make sense?
Firebase is more successful than GCP? In what way?
> Betty had a bit of butter, but the butter was bitter, so she mixed the bitter butter with the better butter to make the bitter butter better but it made the better butter bitter…
> > Betty had a bit of butter, but the butter was bitter, so she mixed the bitter butter with the better butter to make the bitter butter better but it made the better butter bitter…
> ?
They are saying that taking a good thing and combining it with a bad thing doesn't make the bad thing good. It makes the good thing bad.
I have no "dog in this fight" (side note, any better phrases to use there?) but just explaining what they tried to convey.
This is interesting, is this the fate of every upstart PaaS, to be acquired by an Amazon, Google, Microsoft, Oracle, etc.? I see it might be necessary, but it also feels like a long con. You got popular by not being the stuffy, complex, corporate thing. Then you slowly become one.
This is why more and more I will err on the side of FOSS. I am investing my learning time into Linux tools (Bash and tmux, e.g.) even when using Windows. Maybe even get back into Vim. Because these tools will probably be the same after I die, or at least heavily backward compatible.
Windows got in the way of my web development and I only learned that when I switched to Linux. Years later, I don't need Windows anymore and I'm so glad to escape that bloatware and spyware of an OS.
Now I do gaming, video editing and programming all on Linux (ZorinOS) and it feels and looks so good.
It inherently creates the need to make sure that the games can run on Linux, even if on a compatibility layer. This will only incentivize more people to try Linux and devs to create better tools and support systems.
It's not the utopia we expect but it does get us close to it.
I have a very similar take (although I’ve been using Linux since it was distributed on floppies :), and I definitely prefer working with indies, but I will say that one of the things I really like about Supabase is that everything is available open source.
So even if they are merged with one Borg or another, I would expect to have sufficient time to deploy my own infra if it looks like it’s all going to go pear shaped.
I'm the founder of WunderGraph (https://wundergraph.com) and I'd like to mention it as an open source alternative as well.
What makes WunderGraph different is the focus on integrations. You can easily integrate internal and 3rd party APIs, and choose your own client. We generate code e.g. using swr, so you can build on top of existing ecosystems.
We're also currently building a cloud version, which uses fly.io machines to be able to scale functions to zero and even run a SQLite database that can "sleep". I'd love to hear people's opinion.
Hi James. Thanks for commenting. Just wanted to let you know that your HN profile is outdated based on this comment. It says you are "Now leading product for Firebase at Google."
Thanks James, I remember Firebase coming to Citrix in Santa Barbara to discuss being acquired. Wonder if you participated in those meetings? So glad they did not buy and ruin you! Ha
Thanks! Fwiw, we do plan to open source in the future :) For now we want to iterate with a small group of users. Feel free to drop me a line if you want to hack with us.
It's funny because Google pretty much invented this model that users want and love. App Engine was launched in 2008, before GCP, before docker and before Cloud was a term, basically. It had a globally replicated persistence layer, job queues, auth, scale to 0, a generous free tier, a fully fledged local dev server and a single command to deploy, and all config lived in a single file. At the time it was really innovative, and attracted a lot of hip startup- and college grad types.
The reason people liked it is, imo, because it was a single product with multiple features. GCP OTOH, has been about decoupling (or loosely coupling) the product suite, and users are supposed to pick and choose from a giant array of half-products. Simply being aware of all products, their interoperability and predicting the cost is a full time job, easily (it's mostly large enterprise who can absorb that cost). There's nothing wrong with the pick-and-choose model, if your products are simple, low level, and interoperable. But the product suite has grown organically, seemingly without coherence, so there's very little in terms of paved paths to walk – it's so confusing. When a shining star like Firebase comes along (2014) Google clearly knows not to mess with it too much. But even Firebase isn't strong enough to resist the gravitational pull of the GCP behemoth.
> The reason people liked it is, imo, because it was a single product with multiple features. GCP OTOH, has been about decoupling (or loosely coupling) the product suite, and users are supposed to pick and choose from a giant array of half-products.
"There are two ways to make money, bundling and unbundling." - Jim Barksdale
The quote is mostly provided in jest, but at the same time there are many customers who want something totally bundled that solves 90% of their problems in 20 minutes but literally can't solve the last 10%, and there are other customers that want generic building blocks that can solve 100% of their problems, even if it takes years to do.
I think it makes sense that Google offers customers both.
> But even Firebase isn't strong enough to resist the gravitational pull of the GCP behemoth.
Part of the reason we got acquired is because of the above: we knew that we had a number of limitations (e.g. infra scalability) that could be solved, and we knew that GCP had DX problems offering a bundled mobile offering (App Engine wasn't "mobile" focused).
And that acquisition really paid dividends: we were able to build a _ton_ of "bundled" services for Firebase by 2016, and continue deeper integrations with GCP (Storage, Functions, Firestore, Auth, etc.). There are definitely UX differences (some of them intentional, due to different audiences), some not (e.g. it seems like not having an integrated log viewer is a DX miss, though my assumption is that staffing on the Firebase side is down so the decision was made to just send folks to the GCP console :/).
Easy things should be easy, and complicated things should be possible.
The problem with GCP is that a bog-standard setup is considerably more complicated than it was before. Maybe the config files are roughly the same, but there isn't a single dashboard where you can see everything from storage to instance usage to queue delays anymore. Instead there are a bunch of dashboards, each with way too much detail for getting a broad overview of your metrics.
> I believe it was the inventor of Regex who said:
> Easy things should be easy, and complicated things should be possible.
Was he being serious when he said that? I'd hardly call regex a technology where easy things are easy and complicated things are possible (there are plenty of impossible things in regex implementations e.g. backtracking in RE2).
I think (as applied to technology) it is originally an adage from Alan Kay and he worded it as “Simple things should be simple, complex things should be possible.” It wasn’t about any specific technology but rather for what they were building at PARC.
This is an engineering choice by RE2 to not support backtracking or lookahead in exchange for runtime guarantees. PCRE2, which is not much more complex, supports both of these, but doesn't give the same runtime guarantees as RE2. For someone who is operating on vast quantities of unknown, unsanitized data, eg. Google, RE2s limitations are a reasonable tradeoff to make to avoid DOS situations from malformed or malicious inputs.
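The tradeoff is easy to see from JavaScript's own backtracking engine (a sketch; RE2 itself isn't used here):

```javascript
// Backreferences like \1 are legal in PCRE-style engines but can't be
// matched in guaranteed linear time, which is why RE2 rejects them.
const backref = /\b(\w+) \1\b/; // matches a repeated word
console.log(backref.test("the the cat")); // true

// Nested quantifiers are fine in RE2 (still linear time), but a
// backtracking engine can take exponential time on a crafted near-miss:
const nested = /^(a+)+$/;
console.log(nested.test("aaaa")); // true
// nested.test("a".repeat(40) + "b") // would effectively hang here
```

That second case is exactly the DoS scenario RE2's guarantees rule out.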
Yeah, I was there. It was great because it was, apparently, impossibly cheap. Then they decided to take it out of beta and jack up the prices. People who were spending $10/mo suddenly had a $1,000/mo bill.
Yeah I don't think PaaS should ever be proprietary in terms of API surface. It's just perverse incentives, you get subsidized until market saturation, and then they jack up the prices. The cloud providers should, imo, compete on uptime, resilience, performance, etc. Kinda like the VPS or VPN providers of today.
Let's take a quick example of S3: GCS was able to take the same API and they both compete on uptime, performance, etc. Price/GB has remained stable or decreased with new tiering options over time. Managed K8s clusters, SQL databases, etc. are all competing in similar ways.
If folks want to build proprietary APIs, and people choose to use them because they are faster/better/cheaper/more suited to a particular use case, I'm not sure why that's a bad thing.
Engineering is tradeoffs, proprietary API surface is one of many things people consider.
True, I didn't mean to come off as a purist. I think it depends on the service provided and its maturity level. If you need basics like auth, persistence, etc., I think those parts should be open, or at the very least have a stable API with more than one provider. (The SRE analogy would be a SPOF.)
For things like push- and email notifications, it makes more sense that a provider is involved (although APIs should probably still be open). New innovations can always start out proprietary, and the inventors can always offer support and managed infra for OSS projects.
Why? I believe the health and future innovation of the tech sector rely on healthy competition, and imo that means the majority of standard services should have multiple providers to choose from. The only way to migrate from one provider to another is through standardized, or open, APIs. VPSes are a pretty good example of this: I can get the premium option from GCP or AWS, with uptime, bandwidth, and a choice of DC from all over the world. OTOH, I can get a cheaper, local one, depending on my needs.
Let's continue that S3 example. There are many products branded as "S3 compatible", but they all have slightly different definitions of what that compatibility means. Not to mention that the S3 API itself is a moving target, and doesn't even have a version number you can reference, so the best you can do is list which features you do or don't support. Some of these products actually have native APIs which IMO are better than S3's, partly because they can learn from mistakes in S3's API.
The other thing that is really nice about Firebase is that it pretty seamlessly integrates with GCP.
For example, with your #2, it's really, really easy to spin up a postgres DB on Google Cloud SQL and then access that DB from a Firebase Function. While it can get confusing sometimes, I think Firebase has done a good job "overlaying" their functionality set on top of GCP infrastructure, e.g. "Cloud Functions for Firebase" is really just a very thin layer on top of GCP Cloud Functions, Firebase Auth is basically the same thing as Google Identity Platform, etc. The author of the blog post sees that as a negative in some areas, but I don't.
FWIW, I looked through the authors list of Firebase "cons", and as someone who has used Firebase for years, none of those have really been a concern for me. My biggest concern is that there are areas of Firebase that have been calling out for love for years, particularly Firebase Auth, but they've gotten virtually no updates. Apparently nobody at Google sees making these straightforward but important improvements as a path toward getting a promotion.
> For example, with your #2, it's really, really easy to spin up a postgres DB on Google Cloud SQL and then access that DB from a Firebase Function
God no. The last time I checked, Firebase Functions had terribly bad latency for me. So bad that I decided to learn AWS Lambda to get my work done. And yes, Lambda just blew Firebase and GCP functions out of the water.
[cloud functions for firebase manager] Lambda is indeed an innovative and solid product. Cloud functions for firebase has been getting a lot better recently too!
The biggest innovation lately has been our release of v2. V2 is built on Cloud Run and can support concurrent executions in the same container. This dramatically reduces the number of cold starts, and makes min-instance reservations even cheaper if you want to eliminate cold starts for a given workload. Plus, Firebase lets you configure CPU separately from memory in v2, so you can give functions extra oomph if they need it. Docs are at [1]
Finally, Firestore’s SDK was slower than we liked in GCF so we’ve done a number of improvements there. About half a year ago we redesigned the SDK so we could lazy load the networking layer. This lets you handle Firestore events without loading the bulk of the SDK. We have a more extreme update in the works that will let you configure the Firestore SDK to prefer REST over gRPC so you can avoid heavy dependencies in latency sensitive/event driven environments like GCF.
I don't know when you tried this, but I don't think this is valid anymore:
1. If cold start times are a big deal, you can now set a minimum number of instances with Cloud Functions so they don't scale to 0. You can also now set concurrency on Cloud Functions so a single function instance can handle multiple requests simultaneously.
2. There are tons of other serverless options if you want to, say, have a Firebase front end but a backend API served in your language of choice that uses the Firebase Admin API to do whatever you want, e.g. App Engine or Cloud Run.
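As a rough sketch of point 1, the v2 runtime options look something like this (shown as a plain object; in real code you'd pass them to `onRequest` from `firebase-functions/v2/https`, and the current docs are the authority on exact option names):

```javascript
// Hedged sketch: runtime options that address cold starts in v2.
const runtimeOpts = {
  minInstances: 1, // keep one instance warm (billed even when idle)
  concurrency: 80, // one instance may serve up to 80 requests at once
};

// e.g. exports.api = onRequest(runtimeOpts, (req, res) => res.send("ok"));
console.log(runtimeOpts); // { minInstances: 1, concurrency: 80 }
```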
What I mean is mobile push notification.
Once I felt tempted to replace FCM with nats.io, but it turned out there were several cases that needed to be handled. Oh well, back to FCM. At least it's free.
Firebase is super convenient and works very well for the most part but my heart races every time I receive an e-mail from them. A silly mistake can cost you so much money and they tend to word the e-mails strongly.
> A silly mistake can cost you so much money and they tend to word the e-mails strongly.
They tend to word emails strongly to effect action.
As for the cost overruns: I don't know anyone who wrote one of those blog posts (or emailed support) who didn't get their bill refunded.
We can argue about spending caps or hard limits (which people generally dislike for their production apps), but the people side of the business is (or at least was) solid.
Source: worded emails strongly to effect action at Firebase and refunded a number of bills.
I am not convinced it is: frameworks on Node.js, Ruby, PHP, etc. can give you most of it pretty simply for fewer hours of effort. At least for any project that doesn’t care about webscaling, which is 99% of them.
Sure, you can do everything yourself, but when you use Firebase all that is no longer your concern. It works as if you are interacting with a 3rd-party API: you only care about the requests working, not how they work behind the scenes, which means you can focus on the actual stuff you are building.
Firebase does have some issues but the core premise is solid and IMHO everything will be like that as we go forward.
I am not sure doing this yourself is that problematic. You can get a managed Postgres, and that is most of the ops work done. The rest is npm i this/that. The effort is approximately the same as the effort to get to know Firebase.
I might be speaking as a jack of all trades, so it seems easier to me. For a frontend-only dev who needs a backend it might be different.
This is the main selling point of Supabase. Indeed, sometimes you don't need SQL. But in most cases you need SQL. Or you make awkward Firestore workarounds that are prone to introduce bugs and will definitely introduce complexity.
Hi there, I’m the manager of Cloud Functions for Firebase. I wanted to note that Firebase is getting better for large deployments and we’re continuing to invest in the area.
First, the article might have been written before our recent feature to skip deployments of unmodified functions. If the source, env, and secret metadata SHA has not changed, we skip deploying that function. At minimum, this means large deployments that fail can be retried and make progress (more on that below).
Second, we’ve started investing in a feature called “codebases.” At its simplest, codebases are multiple folders of functions. This makes it easier to deploy single codebases, but it also means you’re likely to skip deploys of functions in codebases other than the ones you’re working on. If you want to invoke functions in another codebase, consider a Pub/Sub, Task Queue, or Eventarc Custom Event (coming in a few weeks) function depending on the feature set that suits you best. Codebases are just getting started and we have hopes to develop this feature in the future.
Finally, we’ve set aside budget in Q4 to rewrite part of the core of our deployment logic to improve the way we batch, backoff, and retry function deployments in the face of quotas (yes, they are retried multiple times, even when we give up due to quota errors). We hope to substantially raise the reliability of 100+ function deployments.
Two things that would make firebase-functions awesome:
1. Let us use Cloud Run instead of Firebase Functions. It's so easy to test (just a Docker image), and the limitations on Cloud Run are far fewer than on Cloud Functions (timeout, CPU, etc.).
2. Please develop a file system-based routing framework (like next.js) that way we can just create a file named "my-endpoint.ts" and it would be automatically mapped to "/my-endpoint" on the server...
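The routing half of point 2 is mostly a naming convention; here is a hypothetical sketch of the filename-to-path mapping (`toRoute` and the directory layout are invented for illustration, not a real Firebase feature):

```javascript
// Hypothetical next.js-style mapping of function filenames to URL paths.
function toRoute(filename) {
  // "my-endpoint.ts" -> "/my-endpoint", "users/index.ts" -> "/users"
  const noExt = filename.replace(/\.(ts|js)$/, "");
  const route = noExt.endsWith("/index") || noExt === "index"
    ? noExt.replace(/\/?index$/, "")
    : noExt;
  return "/" + route;
}

console.log(toRoute("my-endpoint.ts")); // "/my-endpoint"
console.log(toRoute("users/index.ts")); // "/users"
```

A real framework would then glob the functions directory and register each file's handler at its computed route.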
Loved Firebase a few years ago, but I've moved away to simpler solutions like Vercel's offering
That's awesome to hear there's ongoing investment in the Firebase/GCP functions space! I'm in touch with many game developers who use or considered Firebase Functions/GCP Functions for their backends/player data web services. Figure I'd pass along the primary gotchas/complaints I hear, since it sounds like many of them are being actively addressed in the work you mentioned:
- Time from deployment to availability feels very long compared to alternatives. Firebase Functions (even for 1-3 functions) can be 1-3 minutes long. I've heard it's much longer for more functions (/ maybe region dependent?)
- Cold Starts are a major pain to deal with. Workarounds like minInstances are expensive/subvert the scale to $0 value proposition/don't solve latency in the scale-up case, and are charged per function. Some devs refactor their backend to be a single function endpoint to work around this and minimize cost which seems to contradict the small functions development style demonstrated in the docs.
- It'd be nice to have more serverless-friendly datastore primitives within the GCP ecosystem that can (1) scale to $0/mo base for pricing, (2) handle high write throughput (including per entry) and (3) support serverless connections well. RTDB, Firestore, Datastore, Memorystore, Spanner, Alloy etc. don't quite nail all those points. Something based on Spanner or an elastic sort of Memorystore that really scale down to $0 for cost could be amazing.
Some are migrating to Cloud Run to have concurrency per function, though it sounds like Cloud Functions v2 gets very close to that use case once it's available across all regions.
I love Firebase Functions since it really nails the use case of: (1) start from $0, prototype your application quickly, (2) scale up to withstand practically infinite traffic at reasonable cost without having to change any code, (3) seamlessly graduate to using more of a high quality cloud platform without having to change any code. It's rare to find and other efforts haven't matched the overall dev UX. Outside of cold start spikes the platform is very stable and hands-off, and has incredible logging/metrics/alerting available from the GCP side.
Separately I worry the GCP ecosystem is missing a story around cheap/fast edge functions and integrations with next-gen frontend tooling which often rely on many quick API calls. (It would be interesting to see something like a Firebase acquisition & integration targeting that world of tooling)
If there were a scale to $0 edge datastore like PlanetScale/UpStash + scale to $0 edge functions offering with the simplicity and GCP-integration of Firebase it would be awesome.
Thank you for the feedback. I can’t comment about future roadmaps, but we expect v2 to severely reduce cold start problems with concurrency support.

WRT deploy times, I’m not a fan either, but this is the cost of standardizing on Docker. I don’t honestly see that decision being reversed soon. That’s why we’ve instead decided to invest in an emulation suite. How has that worked out for you?

And I’m curious why the Realtime Database and Firestore don’t meet your needs. The Realtime Database requires manual sharding, but tools for that have dramatically improved (e.g. in v2 functions, a single function can listen to all databases in a region). Firestore is built on Spanner. It prohibits queries that fall apart at scale, but it’s a planet-scale database that you’ll never have to shard.
As a side-project Firebase user, I empathize with it being slowly consumed by GCP. I don’t have the time or inclination to learn GCP for a side project. I opted for Firebase because it had a great developer experience.
As time goes on and more and more Firebase features are redirected to GCP equivalents, the DX value prop is being drained out of the product and I’m left not even understanding my own infrastructure. I’m thankful the author pointed out Supabase, it seems like a great alternative.
[cloud functions for firebase manager] I understand that the decision to shuffle users to GCP for logs was controversial. It wasn’t decided, as some have said here, because some director had an OKR to fluff up; it was because our UX team couldn’t keep up with the sheer amount of innovation in GCP’s observability suite. Check out Daniel Lee’s talk next Tuesday on observability and cloud functions for firebase for some cool tricks. Did you know, for example, that you can jump into the trace for a log line in GCP? That you can create custom metrics with alerts? That you can filter by structured log segments?
I think a tutorial could smooth over the transition, but I think this decision was for the best. If you want a super simple logs reader for in-the-moment analysis, try the CLI command “firebase functions:logs”
I appreciate your reply and I believe every word you have said. Thank you for the hard work and thought you and your team have put into your products, and I'm sorry to hear if anyone is casting you as faceless corporate politicians reaching for meaningless goals. I know exactly how that feels.
To be frank with my feedback, I am a person who uses Firebase for a side project. I don't have the mental capacity to watch an ~hour talk for each piece of GCP infra I don't understand. I liked Firebase because I could understand it intuitively through its UX.
I'm sure the GCP features are very helpful and innovative and I'd get a great ROI from the talk if I worked on them for my full-time job. But some weeks, an hour is all I have. I'm an app developer and I would just like to deploy my app please, and have the rest fade into the background, or at least integrate with my existing workflows.
Thanks for all your hard work. I understand if "side project developers" is not the top of your funnel. But I hope you hear my feedback.
Ha, there was a man named Steve in our industry who made a pretty good career out of making technology ‘usable by mere mortals’; he comes to mind right now.
What if 741 new features are introduced to Cloud Logging, will these be forced onto the Firebase user?
Oppression of choice is a real problem. Firebase must cherry pick the best of the new features and wrap it in their own UX in order to stay relevant.
Being booted into a completely new and unfamiliar UX with dozens of knobs and dials is a jarring experience.
It might not have been a manager’s OKR, but it’s very obvious that in an effort not to lose the “whales” you’re casting the net too widely and potentially killing a lot of smaller fish.
I’ve been using Firebase and Firestore extensively in past projects, because I was attracted by their simplicity at first sight.
Then there’s the day you need some relational queries. It’s somehow possible, but you need to rethink your data structure.
Then you figure out that the realtime part of Firebase is CPU-intensive and slows down your browser.
Call me nuts, crazy or whatever - I’ve replaced most of these applications with SQLite (WAL enabled) and use either server-sent events (SSE) for almost-realtime notification or the HTTP streaming API.
It works perfectly for my needs and is pretty simple to deploy.
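A minimal sketch of the SSE half of such a setup (the SQLite change detection is elided; `sseFrame` just follows the text/event-stream framing):

```javascript
// Frame a row/notification as a text/event-stream message: optional
// "event:" line, one "data:" line per payload line, blank-line terminator.
function sseFrame(event, data) {
  const body = JSON.stringify(data)
    .split("\n")
    .map((line) => `data: ${line}`)
    .join("\n");
  return `event: ${event}\n${body}\n\n`;
}

console.log(sseFrame("score", { user: "ada", points: 42 }));
// event: score
// data: {"user":"ada","points":42}
```

On the server you'd write these frames to a response with `Content-Type: text/event-stream`; in the browser, `new EventSource(url)` handles reconnection for free.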
You should check out supabase -- it takes this pattern (with a Postgres backend instead) and packages it up in an open source, cloud hosted but also self-hostable model
Exactly!
Or in some cases I create a view - for example a highscore list - that is queried by a simple server-side script and sends the data as JSON to the client.
I thought the same way, but I think now Supabase is much better than Firebase. Simply because you get a lot more functionality from the postgresql DB compared to the nosql DB.
Fair, I am not saying it is the only option. Parse was really neat before I ever heard of Firebase; Meta acquired them, but at least they released it somewhat open source.
“Intended” according to whom? The firebase landing page makes a couple references to being scalable, but otherwise it doesn’t appear that using Firebase at a smaller scale is outside its intended scope at all.
What does “intended according to the technology” even mean? Just because software is built to scale up, doesn't mean that using it at a smaller scale goes against its intention.
Marketing copy aside, it's not as if the technical docs discourage people from using it for non-scale use cases.
The scalability argument does not fly with me. Realtime Database is not scalable, and you will eventually need to use sharding to get it to work with reasonable amounts of data. Cloud Firestore's pricing model is structured to extract as much money from you as possible: you are compelled to make unnecessary reads and duplicated writes to perform comparable operations. If you want a scalable NoSQL database, why wouldn't you go for something like MongoDB's cloud offering, which is far less likely to burn your startup's pockets?
The most definitive use case for Firebase is its Realtime capability, which makes it useful for gaming particularly but also team-based apps. But Supabase is just getting started with their Realtime capability - I expect to see more progress from them while Firebase deteriorates under the cloud of Google (pun intended).
I enjoyed reading this having battled with firebase on a personal project. It has just enough quirks that I now prefer postgres unless I need all of the firebase features desperately.
> It seems that GCP is cannibalizing the Firebase developer environment.
Yeah, as a latecomer I got this impression and rolled my eyes. I imagined a debate in the office between a passionate Firebase OG and a Google exec holding an OKR portfolio, the passionate engineer trying to delay the inevitable gobbling up of Firebase!
For me Firebase has a lot of cuts, maybe not 1000, but things like:
* No formal definition of rules, just some very scant documentation. Much of it is undocumented or can only be found in SO answers.
* Emulator fatigue! Like Azure storage, the emulator is a good approximation to the real thing but you end up chasing down bugs locally when it is an emulator quirk. Postgres OTOH you run the same thing in prod. Same for mongodb, mssql, mysql etc.
* The Firebase admin SDK has a completely different API from the normal SDK to do all the same things! I get it: smaller JS bundles, but why not just reuse a lot of those changes in admin?
* You have to have a local file with authentication secrets to a cloud db in order to run the emulator! I just have a test deployment for this but it feels unnecessary.
* I don’t think the model of letting users upload JSON directly to collections is secure - unless you are a genius with Firebase rules. This might be more of a general criticism of NoSQL over HTTP. CouchDB might have the same issues, I guess.
* Rules bugs: for example, if you do a query with clauses, your rules don’t have access to the entire object, just the queried fields. You need to add ghost fields to the query to make the rule run properly.
Probably more things I forgot about too.
I am even tempted to reverse engineer rules to write a proper guide. If that sounds interesting let me know.
Finally I prefer open source stuff for a lot of the usual reasons. So closed source stuff has to be beyond excellent to make sense to me to use.
> I am even tempted to reverse engineer rules to write a proper guide. If that sounds interesting let me know.
No need to reverse engineer them, I built them, so I'm happy to answer any questions you may have (to the best of my memory).
> * I don’t think the model of letting users upload json directly to collections is secure- unless you are a genius with firebase rules. This might be more of a general criticism of nosql over http. CouchDB might have the same issues I guess.
It's a general criticism of any "access your database/storage bucket/etc. directly from the client" product--everyone has to build _some_ authZ mechanism that lives in between the client and the server, whether it's Firebase Rules, Postgres Row Level ACLs, Lambda Authorizers, etc. Otherwise you're writing your own authZ code on your own server, which has its own set of issues.
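The tradeoff is the same regardless of vendor: some policy check has to sit between the untrusted client and the data. A minimal sketch in Python (all names and collections here are hypothetical) of the kind of per-document ownership check that rules engines express declaratively:

```python
# Hypothetical sketch of the authZ layer every client-to-database product
# needs; Firebase Rules, Postgres RLS, and Lambda Authorizers encode the
# same kind of checks declaratively instead of in server code.

def can_write(user_id, collection, doc):
    """Return True if user_id may write doc to collection."""
    if collection == "profiles":
        # Users may only write their own profile document.
        return doc.get("owner") == user_id
    if collection == "posts":
        # Posts must be authored by the writer and not set admin-only fields.
        return doc.get("author") == user_id and "featured" not in doc
    # Deny by default: unknown collections are not client-writable.
    return False


def handle_write(user_id, collection, doc, db):
    """Gate every client write through the policy before touching storage."""
    if not can_write(user_id, collection, doc):
        raise PermissionError(f"write denied to {collection}")
    db.setdefault(collection, []).append(doc)
```

The hard part, as the comment notes, is that the declarative versions of these checks get subtle fast once queries and partial documents are involved.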
> * No formal definition of rules, just some very scant documentation. Much of it is undocumented or can only be found in SO answers.
I don't think https://firebase.google.com/docs/rules is the best documentation ever, but I'm not sure "scant" is the word I'd use to describe it. What are the main areas you think are missing?
I always thought a rules "learning" mode in the emulator would be useful. While the mode is switched on, the rules are set to allow read/write everywhere. Run your app and simulate a typical end user interaction. After you're done, the learning mode writes a set of the most restrictive rules possible based on the data read/writes from the simulated interaction.
I think this would make a good starting point for new apps.
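One way the idea could work, sketched in Python (purely hypothetical, this is not an existing emulator feature): record every path and operation touched during the simulated session, then emit an allowlist restricted to exactly what was exercised:

```python
# Hypothetical sketch of a rules "learning mode": while enabled, the emulator
# records every access, then emits the most restrictive allowlist that still
# covers the observed traffic.

class LearningMode:
    def __init__(self):
        self.accesses = set()  # observed (path, operation) pairs

    def record(self, path, op):
        """Called by the emulator on every read/write while learning is on."""
        self.accesses.add((path, op))

    def generate_rules(self):
        """Emit allow rules only for the paths actually exercised."""
        rules = {}
        for path, op in sorted(self.accesses):
            rules.setdefault(path, set()).add(op)
        return {path: sorted(ops) for path, ops in rules.items()}


lm = LearningMode()
lm.record("/users/{uid}/profile", "read")
lm.record("/users/{uid}/profile", "write")
lm.record("/posts", "read")
print(lm.generate_rules())
```

The output would then be translated into the rules language as a starting point, with the developer tightening the conditions (ownership checks, field validation) by hand.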
First and foremost I appreciate the time you spent putting together the article. Feedback is a gift and always welcome.
I'll address your main point: Firebase and GCP.
Over the past decade, developers have consistently turned to Firebase as the fastest way to take their idea to market, thanks to an innovative client-first model and a singular focus on developer experience. Our goal is to provide a comprehensive and "opinionated" platform that just works, saving considerable development and maintenance time.
We consciously build some of our services on top of GCP primitives like Cloud Storage and Cloud Functions. This allows us to best apply our team in continuing with our mission, while also making sure that our users never outgrow the platform in any dimension (security, scalability, compliance, etc).
You bring up excellent points as far as our UX and better integration opportunities. My goal is not to necessarily push developers to GCP. The way I see it is that both platforms serve users with different use cases and preferences. Some will use one over the other, and many will use both.
I would love to continue the conversation and I hope you don’t mind if I try to reach out.
Firebase has been a huge influence but with the rise of serverless there are a new generation of platforms people should be checking out instead, such as Convex, Supabase, etc.
I have to also mention https://directus.io/ here. Coming from Strapi, Feathers.js, and Supabase, I have to admit it is the closest competitor to Supabase I know. There are things Supabase does better, but the admin interface and customization capabilities of Directus are just unmatched.
Supabase has distributed consistency issues and isn't suitable for multiplayer/offline-first use. I am annoyed at Firebase for real technical reasons, like Firebase Auth delaying page load by several seconds. So the real alternative to Firebase (on the web) is Replicache, IMHO.
> We love PostgreSQL which Supabase utilizes. We plan to do more research on scalability, since column-based databases can’t grow as big as their NoSQL counterparts.
I would argue there are very, very few types of problems NoSQL solves better than relational DB, and the “benefit” of quickly deploying schema changes is actually a nightmare later on. That it took so long for these devs to realize this is… perplexing
There are over a dozen distributed SQL database projects out there, and as many middleware layers to shard SQL databases. Some of the world's largest properties, e.g. Meta, largely run on MySQL. E.g. https://engineering.fb.com/2016/08/31/core-data/myrocks-a-sp...
A firehose of ingestion is a specific problem that can be solved. But basing your entire application on a nosql data store is a mistake - quick and dirty now for pain and tears later.
>Authentication out of the box is nice. (Built-in Firebase email-verification is, in our opinion, a poor experience though).
>Firebase mandates Google / GSuite sign-in
Is Firebase authentication different from GCP Identity Platform? I've read the docs for both Firebase and Identity Platform and the lines seem blurred..
Do these issues extend to Identity platform? I haven't heard of many people using it, but it looks very feature-rich.
Yes, they are essentially the same (again, more confusing branding).
Firebase Auth is great, with the notable exception that I am worried it will become Google project #2898 to die on the vine. It has received virtually no substantive updates for about 2.5 years. The last big one was they added SMS 2FA in early 2020. Given that Google themselves has long ago stated SMS is insecure for 2FA, it's ridiculous they have no other options, nor have they said anything about adding options in their official channels. Here's a post from Dec 2021 in the google groups forum, and again there have been no real substantive updates since: https://groups.google.com/g/firebase-talk/c/RBRdDHPybC8
> Is Firebase Authentication dead?
> I apologize for the inflammatory subject line, but I think this is an important matter that the Firebase and Google team should be transparent about.
> What does the future of Firebase Authentication look like? Is it in maintenance mode? Is there any roadmap for future development?
> I migrated my user base to the Firebase platform a few years ago, mainly based on their Authentication platform. It looked like it had a lot of promise. There was a lot of development going on with it at the time. Although it didn't have all features I wanted, the pace of development seemed to show a good trend line going in that direction. At the time, they said MFA was on the roadmap. I considered other AaaS options, but chose Firebase Auth based on where I thought it was going in the future.
> Fast forward a few years, and it seems like there is no activity any more with Firebase Auth. I'm still waiting for TOTP and WebAuthN MFA. SMS is a non-starter, and even Google has said that SMS is not secure MFA. There is no way to control the password strength policy, nor lock-out behavior when incorrect passwords are entered.
> Now I need to make a decision: whether to stick with Firebase Auth, or to move on to a different AaaS platform. Will Firebase Auth pick up the feature development pace again, or is it on a long death spiral? I'd appreciate insight from Google what their plans are in a concrete way.
> We love PostgreSQL which Supabase utilizes. We plan to do more research on scalability, since column-based * SQL databases can’t grow as big as their NoSQL counterparts. Nonetheless, Supabase came at the right time.
> * Edit: poor choice of words
Where do people get this idea that sql cannot scale or grow? Ever since this fad of “NoSQL” we have ended up with these false claims everywhere.
That’s what’s called successful marketing, at least with a segment of developers that never looked hard at databases, and bought into the NoSQL claims.
The decades long market debates over network, hierarchical, object vs SQL databases are also a distant memory for most. There’s good technical reasons SQL remains dominant, but again, a segment of folks aren’t incentivized to learn why.
I see Supabase's Realtime hasn't reached the production milestone yet. How far away is that? I'm interested in using it for an offline first app. Watermelon sync would be nice.
It’s pretty close - we just added Presence and Broadcast. As soon as we are confident that they are stable at scale then we will consider it GA (probably a few months)
Don’t do that! Firebase CI tokens are not a good way to authorize with Firebase anymore. Use a GCP Service Account which you can scope very precisely and remotely adjust permissions / track usage.
> Google Cloud Console dashboard…GCP is cannibalizing the Firebase developer environment. From an ops perspective, that makes sense. But axing the simplified cloud experience of Firebase removes much of its value
> GCP favoritism… Why Firebase Hosting requires Cloud Function list authorization confounds me
Someone gained promotion credits for these “migrations”. It’s turtles most of the way up. The bricklayers justify projects that are aligned with some director level’s visionary initiative to unify and consolidate GCP. If there was real vision, the fragmented and disjointed documentations and code packages should be normalized, but that is out of scope when you’re operating on promotion-cycle level timescales.
On an orthogonal note about Cloud Functions permissions: we wanted to list our cloud functions, and since we used OAuth2, the most granular OAuth2 scope we could request involved the ability to write to GCP as a whole. Point being, there's a pattern with GCP of forcing people to migrate to features that are half baked if not abandoned (again, promotion).
The CLI complaints, while relatable, are pretty minor. It seems they believe shell scripting is a dirty hack. But in that case, why not just write a clean program to pull the values in the form they want from the API or SDK?
Part of my evolution with regard to Single Responsibility Principle has been to start exposing command lines for some modules. It makes it a lot easier to write integration tests for one, but it also helps people know the interaction boundaries of a particular module. Because if they aren’t in the dependency list, either the program doesn’t work or it doesn’t talk to that code. It’s also a lovely bit of friction to Kitchen Sink Syndrome, because it’s just that little extra pain in the ass to cross link everything to everything instead of gating it through nexus modules versus leaf modules.
The CLIs usually stay as debugging tools, but in some cases they have gotten incorporated into more complex tools, such as adding more hints and hyperlinks to our deployment management tools.
I dunno. There are nice CLI interfaces and there are ones that make things annoyingly hard to do. I haven't ever used Firebase, but I have had that problem before.
Is there anything like Firebase that lets you use SQL like schemas instead of their data store? That would fill a nice niche for some use cases I have been working with.
It's really easy to add a Google Cloud SQL database in GCP which then can be accessed by, for example, Firebase Cloud Functions. It's also very easy to mix/match parts of Firebase with GCP. For example, you can very easily deploy a server-side API on a plethora of GCP technologies (i.e. App Engine, Cloud Run, a dedicated Compute instance, etc.), deploy a front end with Firebase Hosting, use Firebase Auth for user authentication, then when you call your backend API you can verify the authentication using the Firebase Admin APIs.
I really miss firebase function logs. Even though it was a bit glitchy at times, it was the perfect simplified overview of everything on my backend, and fitted well with the simple UI of firebase. I'm sure I'll get used to Google Cloud logs - but it just feels really out of place now.
> it is impossible to do anything remotely similar to a SQL join. Therefore, developers must embrace the ethos of NoSQL by distributing relational data ahead of time.
Can someone help me translate this?
Are they saying that since there's no joins they store their aggregates denormalised upfront? Or what?
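That's my reading: with no joins, you write related data in pre-joined (denormalized) form at write time. A small Python illustration of the tradeoff, with hypothetical data:

```python
# Relational style: author data lives in one place, the "join" happens at read.
authors = {"a1": {"name": "Ada"}}
posts_normalized = [{"id": "p1", "author_id": "a1", "title": "Hello"}]

def post_with_author(post):
    # Read-time join: look up the author record referenced by the post.
    return {**post, "author_name": authors[post["author_id"]]["name"]}

# NoSQL ethos: duplicate the author's name into every post at write time,
# so reads need no second lookup...
posts_denormalized = [{"id": "p1", "author_name": "Ada", "title": "Hello"}]

# ...but now an update must fan out to every duplicated copy.
def rename_author(author_id, new_name):
    authors[author_id]["name"] = new_name
    for p in posts_denormalized:
        p["author_name"] = new_name  # touch every post that embedded the name
```

The "distributing relational data ahead of time" phrasing is exactly this: the join result is materialized up front, trading read-time work for write-time fan-out.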
TL;DR: We don't want to use GCP and we don't like that Google is integrating Firebase into it. As someone who is running on GCP, I can't wait for them to fully merge Firebase into the GCP dashboards. Not a high-value article, really...
From the article: "Being closed-source, you don’t have the implicit assurance that Firebase will always be around (like Parse), nor can you reliably depend on a specific API version."
Firebase is amazing, but I'll never use it for anything that's meant to last more than a year. I don't trust the API to remain stable, and I _especially_ don't trust Google to keep it running for the long term.
The RTDB APIs have remained stable and available since ~2012. Storage has existed in the same way since ~2016. Do you have some specific examples of build APIs changing on you in ways that break your applications?
The first application I wrote for Firebase was back in 2015, before it was acquired. I don't remember the specific issues, but I know I spent about the same amount of time trying to keep it running post-acquisition as I did building the app in the first place.
I tried again in 2019 and got burned by Cloud Functions Node runtime changing underneath me.
Edit: I should point out that averaging a single breaking change every five years means that an application has a 20% chance of being affected each year. I'm not willing to roll the dice, generally speaking.
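The 20% figure reads one breaking change per five years as a uniform yearly rate; modeling the changes as a Poisson process gives a slightly lower per-year probability (a sketch, assuming the rate estimate itself is right):

```python
import math

rate = 1 / 5  # one breaking change per five years, on average

# Naive reading: a flat 20% chance per year.
naive = rate

# Poisson model: P(at least one breaking change in a given year) = 1 - e^(-rate)
poisson = 1 - math.exp(-rate)

print(f"naive: {naive:.0%}, poisson: {poisson:.1%}")  # naive: 20%, poisson: 18.1%
```

Either way the parent's point stands: a nontrivial chance every single year of unplanned maintenance work.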
> The first application I wrote for Firebase was back in 2015, before it was acquired. I don't remember the specific issues, but I know I spent about the same amount of time trying to keep it running post-acquisition as I did building the app in the first place.
Nit: Firebase was acquired in Oct 2014. New functionality was launched in early 2016, and the major change that would have affected you at that point was the auth system, but that should have been a one time change with low ongoing maintenance.
> I should point out that averaging a single breaking change every five years means that an application has a 20% chance of being affected each year. I'm not willing to roll the dice, generally speaking.
What technologies are you using on a routine basis that fall into a category you consider acceptable?
> What technologies are you using on a routine basis that fall into a category you consider acceptable?
Can't speak for them, but some LTS GNU/Linux distros have a decent track record, for example, I've had fairly few issues with Debian, Ubuntu or even CentOS back in the day. The worst I've had was CentOS's xrdp package breaking for no good reason, or unattended upgrades in Debian breaking GRUB in my homelab once, here's a rant about it: https://blog.kronis.dev/everything%20is%20broken/debian-and-...
Some languages out there have a pretty long and boring release cadence, for example, in the case of Java, JDK 8 has been around for almost 10 years, and JDK 11 and JDK 17 will be around for about 5 years each. Migrating between versions can be pretty painful, but within the bounds of a version, generally there are fairly few bugs and issues to be found. Even the feature updates, like new GC implementations, are largely optional and you don't get the rug pulled out from under your feet.
Some databases out there have also been supported for a long time, for example, MySQL 5.7 and MySQL 8 won't quite make it to 10 years of being supported, but will get pretty close. They remain compatible with the drivers out in the wild pretty well, and even though there can be things like performance regressions, I can't recall that many problems that necessitated changes due to any breakages, more like non-critical slowdowns and such until you get a patch.
I think that there are a few software packages like that in any domain, whether you're talking about operating systems and kernels, programming languages or databases. That said, things can get murky depending on how you use them: something like Apache httpd or Apache Tomcat can be regarded as relatively stable, but that stability gets worse, the more modules/plugins you install. Same with operating systems and running a huge amount of different software on them.
Focusing on stable software isn't something that we as an industry do that much, admittedly, everyone just wants greater velocity and more features to meet whatever business goals are relevant, which doesn't fare too well for the folks who care about stability and keeping things running long term.
I've not used Firebase, but seeing as it's not, as far as I can tell, "core" Google (i.e., not related to search or advertising explicitly), I'd be concerned about relying on it in any substantial, business-critical, way.
I’m not a VP and there is no way to promise that we’ll “never” be shut down, but Firebase is a successful product and additionally drives a lot of Cloud usage (which is certainly “core”!). I just don’t see any reason why a VP would need to sunset firebase to balance any books.
FWIW, it mentions Parse is open source, but Parse wasn't open sourced until the service was shut down. Also, AFAIK, the open source version wasn't what was running internally in production.
Same here. I’m always curious as to what things Google is building and providing, but I’m so afraid that things will break within the next 18 months, because Google suddenly decides to ditch it again.
Very tangentially.. In the early days of DigitalOcean, I was fortunate enough to spend time with most of the devtool CEOs, Leibert, Polvi, Hykes, Newcomb, Collison, etc etc etc. All very smart guys.
However, James (and Sara!) was one of the most brilliant thinkers I came across. It's always been somewhat of a shame to me that he didn't keep building firebase independently. Knowing him (shyguy), I'm not surprised he sold it to google, but I feel like we need to make more space for shy/guys/gals/*.
How many of you think Dokku is awesome? But are we doing enough to support folks like Jeff Lindsay (shyguy)?
Yeah, firebase was awesome when James was still working on it, people like him and Jeff Lindsay (@progrium) are surely brilliant thinkers.
You did a great job James. Yes Andrew and Sara and the team were pivotal, but you CEO'd it... I'll never forget when you showed me that weird arcade game y'all used to demo, the paradigm shifting was high and you did a fantastic job of explaining why. Hope you're well old friend. :)
Author here: poor choice of words, thanks for pointing it out. I was getting at the fact that SQL systems don't scale as well horizontally (it's difficult to distribute the same column across multiple machines), and I inadvertently used a technical term connoting something else.
I'd caution against believing that NoSQL databases as a category are more scalable than relational databases.
We have decades of experience scaling relational databases at the moment, including horizontally - Flickr was using read-replicas and sharding MySQL back in 2005.
> I was getting at the fact that SQL systems don't scale as well horizontally
Google, for example, claims that BigQuery, which uses SQL, scales horizontally. Funnily enough, it is also column-based. Row-stores, like MySQL Cluster, can also scale horizontally and use SQL.
But no matter what you are at the mercy of CAP theorem. Pick your poison.
SQL is a protocol. I'm not sure I would consider that "under the hood". What happens under the hood is up to the implementer, and implementations can vary widely (BigQuery is quite unlike Postgres, for example). It is the visible part that makes up a particular public API designed for interacting with databases.
Conceivably even Firestore could expose SQL as one of its APIs. In fact, I once built a tool to do exactly that so I could perform ad-hoc Firestore queries using SQL. Worked beautifully.
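A toy version of that idea: translate a restricted SQL SELECT into the (collection, fields, filters) shape a Firestore-style query API expects. This sketch handles only a tiny grammar and is purely illustrative, not the commenter's actual tool:

```python
import re

def sql_to_query(sql):
    """Translate a tiny subset of SQL into a Firestore-style query spec.
    Supports: SELECT <fields> FROM <collection> [WHERE <field> <op> <value>].
    """
    m = re.match(
        r"SELECT\s+(?P<fields>[\w\s,*]+?)\s+FROM\s+(?P<coll>\w+)"
        r"(?:\s+WHERE\s+(?P<f>\w+)\s*(?P<op>=|<=|>=|<|>)\s*(?P<v>\S+))?\s*$",
        sql, re.IGNORECASE)
    if not m:
        raise ValueError(f"unsupported SQL: {sql}")
    fields = [f.strip() for f in m.group("fields").split(",")]
    query = {"collection": m.group("coll"), "fields": fields, "filters": []}
    if m.group("f"):
        # Firestore spells equality as "==" rather than SQL's "=".
        op = "==" if m.group("op") == "=" else m.group("op")
        query["filters"].append((m.group("f"), op, m.group("v").strip("'")))
    return query

print(sql_to_query("SELECT name, age FROM users WHERE age > 21"))
```

A real version would need a proper parser, multiple WHERE clauses mapped to chained `where()` calls, and client-side handling for anything the underlying store can't express, but the access-model point stands: SQL is the interface, not the storage engine.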
Correct, a common mistake people make is conflating these things. I wrote this several years ago about MongoDB:
One thing that helps is if people stop referring to things as SQL / NoSQL as what ends up happening is various things get conflated.
When talking about stores, it's important to be explicit about a few things:
1. Storage model
2. Distribution model
3. Access model
4. Transaction model
5. Maturity and competence of implementation
What happens is people talk about "SQL" as either an NSM or DSM storage model, over either a single node, or possibly more than that in some of the MPP systems, using SQL as an access model, with linearizable transactions, and a mature competent implementation.
NoSQL when most people refer to it can be any combination of those things, as long as the access model isn't SQL.
I work on database engines, and it's important to decouple these things and be explicit about them when discussing various tradeoffs.
You can do SQL the language over a distributed k/v store (not always a great idea) and other non-tabular / relational models and you can distribute relational engines (though scaling linearizable transactions is difficult and doesn't scale for certain use cases due to physics, but that's unrelated to the relational part of it).
Generally people talk about joins not scaling in some normalized form, but then what they do is just materialize the join into whatever they are using to store things in a denormalized model, which has its own drawbacks.
As to the comment above you, SQL vs NoSQL also doesn't have anything to do with the relative maturity of anything. Some of the newer non-relational engines have some operational issues, but that doesn't really have anything to do with their storage model or access method, it just has to do with the competence of the implementation. MongoDB is difficult operationally not because it's not a relational engine, but because it wasn't well designed.
Just like people put SQL over non-tabular stores, you can build non-tabular / relational engines over relational engines (sharding PostgreSQL etc.). In fact major cloud vendors do just that.
It seems like there are a few different terms being tossed around here, and it's hard to tell which one is being discussed.
* How is the data stored? "Columnar" (column oriented) vs "Row Oriented" vs Document
* How do you query the data? SQL vs json (elastic search/mongo)
With any discussions on performance, while it is true that any database with ACID compliance will struggle at very large scale, if the solutions being discussed are Firebase or Supabase, it feels like that is an issue which is not relevant yet. If you are having scaling issues with Supabase, I would look at a few different things (indexes, application caching, work_mem, replicas, etc.) way before I'd look at a different database engine.
> You also therefore can’t truly run Firebase locally.
To me this is just incredible. As an old timer who learned the ropes in the late eighties it never crossed my mind (what with computers always getting faster) that we would once again face the situation that developers can't run their software locally. It's like a bloody throwback to the sixties and the days of punchcards and timesharing systems. Thanks but no thanks!
It's such a friction factor that all other benefits of the platform get overshadowed by this. It's the same reason I'm vehemently opposed to my team using AWS Lambdas for anything non-trivial. And no, SAM is not the answer here.
> It's the same reason I'm vehemently opposed to my team using AWS Lambdas for anything non-trivial. And no, SAM is not the answer here.
I’ve had to start using Lambda at work and this is my biggest problem. Waiting minutes to see if it was a typo or if there’s a larger problem is terrible. I miss Docker.
We really need completely open source stack for the applications themselves, and few enough deps that they can run and be debugged locally. Not only that, but one shouldn't need to provision a cluster and configure certs just to run some business logic that's unrelated to infrastructure.
Honestly curious, do you have examples or thoughts of what this might look like? What is the core primitive upon which you want to build that is zero config but highly scalable?
We had this in the early 2000s. It was called J2EE application servers. All external dependencies were specified in terms of interfaces. Granted a lot of them were needlessly convoluted but the premise was good.
Tbh I don't have the experience to give advice or reviews.
That said, check out Nats.io. It's basically just a messaging system but it's beautifully abstracted, horizontally scalable and runs locally from a small binary. It largely removes the need for much middleware like load balancers. They've also recently added persistence features for streams and KV stores.
It doesn't solve every problem but I do think a message system is a very good core abstraction to build other things on top of.
I had a customer with a big ("big": >10TB) database used just internally; they paid thousands of dollars monthly for "cloud". We installed 2 redundant servers with backups, snapshots, just everything. Performance increased 5 times, cost decreased 10 times, reliability increased 2 times (not really reliable internet access there), and the system has now been running for 7 years with nearly no intervention. Cloud is not the answer to everything... but for some workloads it is... the right tool for the work.
While I feel the same way about local dev, I think the industry as a whole is indifferent.
Every place I've worked that's adopted into AWS/Azure gave up on running apps locally. And everything becomes harder because of it.
And now there's a push to not even run your editor locally - the next trend is your whole development experience happening in the browser connected to a cloud dev environment.
Most of firebase’s backend services have local emulators. Some even use the same codebase as production (real-time database). We’re also rewriting the functions emulator to use the functions-framework like prod (our emulator predated functions framework).
In the 2 instances where I really needed emulation, they completely failed for me. In one scenario, I needed to run a cloud trigger upon writing an object to storage. Nope, that doesn’t work.
In the second scenario, I wanted to replicate our dev database to local so I could run locally with the same conditions, rather than an empty store. Spent hours on that before giving up.
I’m not aware of any feature that allows you to load a db dump into the database emulator, though it’s two simple curl commands if this dev database is small enough. The storage emulator does trigger the functions emulator though and has for years.
Correct me if I'm wrong, but you can run Lambdas locally with Firecracker. You lose all the benefits of serverless (you are self-hosting), but you can do it if you need to.
Google's internal developer hazing and poor DX surely have something to do with this. I've talked with multiple ex coworker developers at Google about the 3 to 6 months it takes to actually figure out how to deploy something. Also the constant overhead of approvals and oddball systems (gerrit!) thereafter.
Eventually good DX seems unimportant and even associated with failure. Real programmers grovel to weird dependencies and find stuff out the hard way.
There's a better alternate universe where Firebase and 2010 era Google App Engine are the template for GCP.
> I've talked with multiple ex coworker developers at Google about the 3 to 6 months it takes to actually figure out how to deploy something.
In typical developer environments, you end up with a bell curve of "easy things are relatively easy (host a static site, 30 seconds), but some things end up being totally impossible (build a global CDN with 99.999... availability)."
Developing at Google is not at all like this: building anything at Google is medium-hard. It takes longer to do simple things, but at the same time, basically nothing is technically impossible. In many cases, doing the hard things is much easier than elsewhere because, yeah, sure, you have to set up 20 different config files, but those config files abstract damn near anything a production system needs to operate.
Example: I wanted to add improved image serving functionality (e.g. imgix style URL params), and I was able to integrate the existing infra Photos uses in about 15 lines of code and a few hours. I don't think there are too many places in the world where it's possible to provide that functionality in an afternoon. Why it never shipped is a separate story involving non-technical reasons (cross-PA politics).
> Also the constant overhead of approvals and oddball systems (gerrit!) thereafter.
IIRC only the Android codebase used Gerrit, the rest of Firebase and GCP were on Google3. The approvals and other stuff... true.
> Eventually good DX seems unimportant and even associated with failure.
I think this comes down to a definition of "good DX". I think the HN definition (indeed the default one) is "how quickly can I solve an issue using the happy path." Indeed, Firebase solves that for a good number of problems (which is why people use it, and why there are people who love it).
At some point though, you can't solve for _all_ use cases along the happy path (see e.g. SAML, ABAC, any $ENTERPRISE_FEATURE). If you want to expand the business, you need to solve for some things that don't have a perfect happy path, and that's where things start getting messy.
IMO, the goal of Firebase's acquisition was to keep Firebase solving the 80-90% of things that had a happy path, and GCP was the remaining 10-20% where folks had to break out of to do arbitrary things. It seems like a lot of folks in other threads here are complaining about that because the abstractions leak out--I think it's a fair criticism, but I also am not entirely sure how to solve for that.
DX to me is reduced cognitive load. We are all operating at the limits of our abilities; better DX expands the scope of what we can accomplish in our weeks and months.
Often things sold as DX are absolute garbage dead end schlock, like a Flash to iphone app builder or something. So I get some of the indifference (google's not yours). But I don't think we should think of it that way.
Just on your last paragraph, I think it started that way. That Firebase would be the competent general problem solver that you might need to break out of. Firebase and App Engine have tended to weaken over time, so that you needed to move to GCP more often. Ideally Firebase would get more powerful, cover more cases, and have higher quota limits. Instead it feels like they are being degraded or abandoned rather than embraced.
I've never gotten the impression that at google DX was lionized. Examples of doing this well culturally are pre-Salesforce Heroku, Github, vscode, and Stripe.
A github/heroku style culture doing GCP would crush all competitors. Instead if anything GCP is a little worse than AWS.
> Just on your last paragraph, I think it started that way. That Firebase would be the competent general problem solver that you might need to break out of. Firebase and App Engine have tended to weaken over time, so that you needed to move to GCP more often. Ideally Firebase would get more powerful, cover more cases, and have higher quota limits. Instead it feels like they are being degraded or abandoned rather than embraced.
I think I mentioned this on another thread, but the "weakening" you mention is "strengthening integrations with other parts of the platform." I think the general thought here is that an App Engine or a Firebase is a local maximum, and in exchange for trading off deeper integration, there is a platform wide global maximum that is achievable.
Pre-acquisition Heroku, GitHub, Firebase, etc. all were able to achieve local maxima, which led to acquisition, and subsequent attempts to globally optimize.
Dev tools is a brutal business: if you're lucky, 80% of your customers cost you money, 15% break even, and 5% are where 80-90% of your revenue comes from. A better DX typically only gets you more customers in the bottom 95% bracket, because it's easier to onboard people who can get from 0 to 1 than from 10 to 100 (or 100 to 1000). Deeper integrations into the features on the far side of the curve tend to get prioritized and look a lot like what folks here are discussing as negatives.
> I've never gotten the impression that DX was lionized at Google. Examples of doing this well culturally are pre-Salesforce Heroku, GitHub, VS Code, and Stripe.
It's not, for the reasons others have pointed out about people being smart enough to figure out the solution regardless of the DX, etc.
I'm curious whether you think DX must be so cloud-specific. Could someone (maybe my company, withcoherence.com) create a broadly "good" engineering workflow for different kinds of cloud apps and map it onto different cloud substrates? I don't have your degree of experience with the internal culture at GCP, but I'd have to agree it seems less customer-obsessed than the examples you give.
It's clear that AWS is more popular, especially with startups, but my experience is that GCP is better, if viewed through a user experience and product quality lens, outside of a few notable exceptions such as SSL certs (ACM) and some NoSQL DB products.
People with a high pain threshold create systems that require a high pain threshold. That filters out a lot of people with a low tolerance for bullshit.
Then the “indispensability” of people who built job security into the system filters again.
I still have no idea how to navigate a Gerrit PR. I sometimes land on one trying to investigate a bug in Chromium or something like that. I’m sure Gerrit’s approach has some benefits, but it definitely fails at showing a simple before and after view, i.e. “here’s what changed.”
When change is hard people start over. Anything in your system that adds friction to modifying existing code (be it source structure, lack of static analysis tools, build shenanigans, diff tool) encourages people to copy instead, or reinvent.
Not OP, but I'm guessing they're referring to the "readability" approval process, which can sometimes be pretty brutal. (Or at least that was the case a decade ago, don't know how it's evolved since.)
Googlers deploy software on production systems on Day 2 of their on-boarding training. I've never worked anywhere that had as straightforward and usable developer tools.
Running `borgcfg up` is very different than spending weeks writing monarch configs :P
I totally agree that Google's tooling is _legendary_ when you understand it all, but I also agree that having to learn 15-20 different config languages to get a service running in production was pretty damn painful when we only had three months to do it. There's also been a good amount of work in the world outside of google to get tooling to a similar place (and in many cases, without 20 years of baggage).
I guess the trick is to understand that it's all protobufs anyway, and the config langs are just ways of generating protobufs.
Outside of Google, it is useful to understand that, for example, Helm is not part of Kubernetes, in the same way that borgcfg is not part of borg and many people use borg without borgcfg.
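A toy sketch of that mental model (this is not borgcfg or any real Google tooling; every name here is invented): a config language is just a program whose output is a message, and only the message survives.

```typescript
// Toy illustration only: "Job" stands in for a protobuf message type,
// and the helper below plays the role of a config language that expands
// defaults and templates into the final message the system consumes.
interface Job {
  name: string;
  replicas: number;
  args: string[];
}

// The "config language" layer: evaluate some code, emit a message.
function job(name: string, overrides: Partial<Job> = {}): Job {
  return { name, replicas: 1, args: [], ...overrides };
}

const prodJob: Job = job("frontend", { replicas: 20, args: ["--env=prod"] });
// Once serialized (textproto / binary proto in the real world), this
// message is all the scheduler ever sees; the config language is gone.
```

The same lens applies to Helm: the templates are a convenience for generating Kubernetes objects, and Kubernetes itself only ever sees the rendered objects.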
I’ve been pretty critical of Firebase on HN, perhaps too strongly. My experiences with it have all been the same, it goes like this:
Client has a new-ish codebase that was written by relatively inexperienced front end developers. They aren’t confident enough in the backend, and are attracted to Firebase because it lets them write everything from the front end.
Then they proceed to create the most convoluted DB model and a spaghetti backend that’s split between the front-end repo and various cloud functions.
In practical terms it’s been a nightmare every time, for me personally.
Firebase is a large suite of products - are you talking about the Realtime Database? The answer is it depends on what "high scale" is. There are parts of the RTDB that didn't scale well when I worked on it (indexes are built in memory on demand so building the initial index is slow), but there are projects that could fix that.
I absolutely agree. Moreover, with any Google product you never know when they will shut it down or hike the prices, as they have done countless times.
In my opinion one should refrain from using any managed service that can't be self-hosted.
It's not clear to me why the authors don't use managed Postgres. Supabase is close enough, I suppose. Personally I don't see what you get with Supabase vs. an ORM and vanilla managed Postgres.
The appeal for firebase is that you can get away without your own backend for quite a while. It's perfectly feasible to consume databases directly from your frontend app without having to deal with a lot of caching or networking concerns, the firebase SDK does that for you. You basically get a complete API layer without having to write a lot of code yourself. It is very similar with stuff like auth handling, which is also tightly integrated into the database offering.
This makes development incredibly easy, especially in early stages and if you're starting out as a solo/small scale project. It is even better if you don't have an in depth knowledge about backend development/architecture and just want to build a product.
You don't get any of those benefits with a regular managed Postgres offering, at least none that I'm aware of. Supabase comes a lot closer to Firebase than a regular managed Postgres + ORM setup would, and you also get an open-source project that you could potentially self-host.
From what I've heard, self-hosting Supabase is possible but a rather complex undertaking. There is some documentation around it, but it is definitely a lot more involved than using their hosted offering.
No matter how good Firebase or any of Google's cloud offerings are, I could never trust them enough not to just pull the plug one day, like they eventually do with every product they launch.
They've made themselves untrustworthy, and not just because of the usual privacy issues.
I've been using Firebase since 2016 in production. Still have a couple of projects there but I will not start another project with Firebase.
I would only use it for quickly prototyping something that I'm 100% sure will be trashed later. Otherwise there are just too many issues with it for anything remotely serious.
The databases are completely useless for anything other than trivial demos.
Cloud functions are plagued by cold starts, and a deploy usually takes minutes. This is by far the worst cloud function offering on the market right now.
Storage is fast but it is extremely overpriced.
The JS client used to be super heavy. I think they've improved that with tree shaking but still, if you're using all their services, you will end up with huge JS bundle just for the Firebase stuff.
Your whole app becomes deeply locked into Firebase. It's designed so that you have to invest a huge dev effort if you want to leave. Firebase could have adopted standards like GraphQL, or maybe added a Postgres offering like Supabase, but no.
The static hosting is probably one of the slowest options in the market. At least that was the case in 2020 when I did these benchmarks.
I ran into unexpected difficulties from Firestore - the queries only allow 1 range clause, the transactions only allow up to 500 updates, and you can either search 1 collection or all of them with the same name.
This was a problem for us when applying a template to the schedule - templates could have over 500 appointments (making it difficult to apply a template in one transaction), and not being allowed two ranges meant we could filter on an appointment's start but not its end. The collection issue came up because we had appointments under users under orgs: it was impossible to search all the users within one org/tenant the way we set it up. Either search within 1 user, or across all users in all orgs.
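The usual workaround for the 500-write cap looks something like the sketch below: split the template into groups of at most 500 and commit each group as its own batch, accepting that you lose single-transaction atomicity. The `chunk` helper and the `appointments` shape are hypothetical; `writeBatch` (in the commented usage) is the real Firestore SDK call.

```typescript
// Hypothetical helper: split a template's appointments into groups of
// at most 500, Firestore's per-batch / per-transaction write limit.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Usage sketch with the Firestore SDK. Each batch commits on its own,
// so a mid-way failure can leave the template partially applied:
//
// for (const group of chunk(appointments, 500)) {
//   const batch = writeBatch(db);
//   group.forEach((a) => batch.set(doc(db, "appointments", a.id), a));
//   await batch.commit();
// }
```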
We ditched all of firebase and I’m happy we made that decision early enough.
My recollection is that progress on Firebase stopped when Google bought it, and I haven't followed it for almost a decade. Does anyone know if these basic features were ever added?
1) It used to download the whole state (write history) upon startup. Did they ever update it to send a snapshot, or a partial view of the data with lazy-loading for additional nodes?
2) Did they ever provide a simple push notification service? That was pretty much the only thing missing to be able to run Firebase without another server.
3) Were node aliases/shortcuts ever implemented, so a node could be a reference to another node? That would have greatly reduced the need for a SQL JOIN alternative.
Also, after having done some cloud dev ops work, I can't endorse it from any provider at this time, as the burden of securing things like user permissions is beyond what humans are generally capable of, so I consider them an anti-pattern. A better pattern is token-based authentication for everything, using standard web metaphors and REST APIs, so that every service behaves like a sandboxed root user with limited abilities granted. I also wouldn't use any cloud service without Terraform or a similar declarative infrastructure manager. Meaning that the best cloud service is basically the best Terraform configuration, providing a higher level of abstraction like Heroku, which largely defeats the purpose of direct cloud hosting in the first place.
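The token-based pattern described above can be sketched roughly as follows: every service call carries a scoped bearer token, so each service behaves like a sandboxed user with only the abilities the token grants. `makeAuthHeaders` is a hypothetical helper and the endpoint is invented.

```typescript
// Hypothetical helper: standard web metaphors only - a bearer token in
// the Authorization header scopes what this request is allowed to do.
function makeAuthHeaders(token: string): Record<string, string> {
  return {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  };
}

// Usage sketch against a plain REST API (endpoint is invented):
//
// await fetch("https://api.example.com/v1/objects", {
//   method: "POST",
//   headers: makeAuthHeaders(scopedToken),
//   body: JSON.stringify(payload),
// });
```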
So.. the cloud is no substitute for Firebase, and if it goes that route, it will be no surprise if customers jump ship.
> My recollection is that progress on Firebase stopped when Google bought it, and I haven't followed it for almost a decade.
Firebase launched an alpha in April 2012 and was acquired in 2014, so I'm not sure how you could have stopped following it for almost a decade - that's about as long as it has existed.
> 1) It used to download the whole state (write history) upon startup. Did they ever update it to send a snapshot, or a partial view of the data with lazy-loading for additional nodes?
Yes, that feature shipped in 2014.
> 2) Did they ever provide a simple push notification service? That was pretty much the only thing missing to be able to run Firebase without another server.
Firebase integrated with Google Cloud Messaging in 2016. Cloud Functions (which was also 2016-17) allowed operation without a server.
> 3) Were node aliases/shortcuts ever implemented, so a node could be a reference to another node? That would have greatly reduced the need for a SQL JOIN alternative.
No, the guidance was to duplicate data across paths (once Cloud Functions existed, folks used them for the fan-out).
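That duplication guidance can be sketched as a multi-location write: the RTDB's `update()` does accept a map of path-keyed values and applies them atomically, while `buildFanout` and the path names here are hypothetical.

```typescript
// Hypothetical helper: build one multi-location update that writes the
// same fields to every tree location that duplicates the data.
function buildFanout(
  userId: string,
  fields: Record<string, unknown>,
  paths: string[] // every location that keeps a copy, e.g. a search index
): Record<string, unknown> {
  const update: Record<string, unknown> = {};
  for (const base of paths) {
    for (const [key, value] of Object.entries(fields)) {
      update[`${base}/${userId}/${key}`] = value;
    }
  }
  return update;
}

// Usage sketch with the RTDB SDK (one atomic call updates both copies):
//
// update(ref(db), buildFanout("u1", { name: "Ada" }, ["users", "search-index"]));
// // writes users/u1/name and search-index/u1/name together
```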
I no longer work at Firebase / Google, but two points:
1. There may be issues with the GCP integrations & UX/DX, but GCP integration is good for many customers and necessary for the future of the business.
One of the common failure modes for the 2011-2014 crop of Backend-as-a-Service offerings was their inability to technically support large customers. The economics of developer tooling are a super-power-law. So, if you hope to pay your employees you'll need to grow with your biggest customers.
Eventually, as they become TheNextBigThing, your biggest customers end up wanting the bells and whistles that only a Big Cloud Platform provides.
This was a part of the reason we chose to join Google, and why the Firebase team really really really pushed hard to integrate with GCP at a project, billing, and product level (philosophy: Firebase exposed the product from the client, GCP from the server) despite all the organizational/political overhead.
2. I'm excited to see the current crop of app platforms emerge. It has been 10 years since we launched (https://news.ycombinator.com/item?id=3832877) and there are now some great innovations in the space. I like the way Supabase (https://supabase.com/) has exposed Postgres, and InstantDB's (https://www.instantdb.com) graphdb+realtime approach is really promising.