
In my experience, the issue isn't that Google will jack up the costs but that they'll deprecate their infrastructure and push the migration work onto you, often forcing you to reimplement major features.[0]

One notable example: their NDB client library used to handle memcache caching for you automatically, but they got rid of that in the Cloud NDB library and forced clients to implement their own caching.
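For illustration, here's roughly what that wiring looks like now (a sketch assuming the google-cloud-ndb package and a Redis instance reachable via a REDIS_CACHE_URL environment variable; with the legacy NDB library, none of this was necessary because memcache was used automatically):

    # Cloud NDB sketch: caching is opt-in now. Assumes google-cloud-ndb
    # and a Redis URL in the REDIS_CACHE_URL environment variable.
    from google.cloud import ndb

    class Account(ndb.Model):
        email = ndb.StringProperty()

    client = ndb.Client()

    # Without an explicit global_cache, every lookup goes to Datastore;
    # the old App Engine NDB cached entities in memcache automatically.
    with client.context(global_cache=ndb.RedisCache.from_environment()):
        account = Account.get_by_id(1234)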

The sequence of datastore APIs I've seen during my time with App Engine is:

* Python DB Client Library for Datastore[1], deprecated in favor of...

* Python NDB Client Library[2], deprecated in favor of...

* Cloud NDB Library[3], still supported, but they ominously warn new apps to use...

* Datastore mode client library[4]

[0] https://steve-yegge.medium.com/dear-google-cloud-your-deprec...

[1] https://cloud.google.com/appengine/docs/standard/python/data...

[2] https://cloud.google.com/appengine/docs/standard/python/ndb

[3] https://cloud.google.com/appengine/docs/standard/python/migr...

[4] https://cloud.google.com/datastore/docs/reference/libraries




If you're using the App Engine flexible environment, it's easy to stop worrying about vendor lock-in, or really even deprecation, much at all. For example, it's easy to run a basic Node, Python, or Java backend in App Engine Flexible against a MySQL or Postgres database in Cloud SQL. You don't have to manage servers at all, and you get all the benefits of automatic scaling without the semi-nightmare of running your own Kubernetes cluster. Even if App Engine went away entirely, you'd just have a normal Node, Python, or Java app running against a MySQL or Postgres database, which is pretty trivial to migrate to another platform.
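To make the portability concrete, the data-access code can stay completely generic. A minimal sketch, assuming the psycopg2 driver and a DATABASE_URL environment variable (Cloud SQL speaks stock Postgres, so nothing here is GCP-specific):

    import os

    import psycopg2  # stock Postgres driver; nothing GCP-specific

    # DATABASE_URL is an assumed env var, e.g. postgres://user:pass@host/db.
    # Point it at Cloud SQL today and at any other Postgres host tomorrow.
    conn = psycopg2.connect(os.environ["DATABASE_URL"])
    with conn, conn.cursor() as cur:
        cur.execute("SELECT version()")
        print(cur.fetchone())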


Are you still with them? If yes, would love to hear why. Otherwise, what made you jump?


I still use GCP, but I avoid locking myself into their proprietary infrastructure when I'm writing new stuff. I feel like Google is far too cavalier about deprecating services and forcing their customers to do migration work.

It's hard to replace GCP's managed datastores because I really don't want to maintain my own database server (even as a managed service where someone else handles upgrades for me). So I've stuck with Google Cloud Datastore / Firestore, but I've been experimenting a lot with Litestream[0], and I think it might be my go-to choice in the future instead of proprietary managed datastores.

Litestream continuously streams data from a SQLite database to an S3 backend, which means you can design your app to use SQLite and then sync the database to any S3 provider. I designed a simple pastebin clone[1] on top of Litestream, and I use it in production for my open-source KVM over IP. It's worked great so far, though I'm admittedly putting a pretty gentle workload on it (a handful of requests per day).
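The application code stays plain SQLite; Litestream runs alongside as a separate process (e.g. litestream replicate store.db s3://my-bucket/store, with the database and bucket names made up for this sketch):

    import sqlite3

    # The app only knows about a local SQLite file. A Litestream sidecar
    # process tails the WAL and streams changes to S3.
    conn = sqlite3.connect("store.db")
    conn.execute("PRAGMA journal_mode=WAL")  # Litestream replicates the WAL
    conn.execute(
        "CREATE TABLE IF NOT EXISTS pastes (id TEXT PRIMARY KEY, body TEXT)")
    with conn:
        conn.execute("INSERT OR REPLACE INTO pastes VALUES (?, ?)",
                     ("abc123", "hello"))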

[0] https://litestream.io/

[1] https://github.com/mtlynch/logpaste


>I feel like Google is far too cavalier about deprecating services and forcing their customers to do migration work.

Having worked with quite a few ex-Googlers, I can say this is a pretty standard Google engineering pattern.


You don't want to maintain your own database server, even one managed by GCP, but with SQLite you have to maintain state on GCP persistent disks and back it up to S3 using Litestream. Why do you think this is easier?


I don't have to maintain state on GCP persistent disks. I can blow away a server without warning, and I'll only lose a few seconds of data.

True, I have to maintain state on S3, but there's not much work involved in that.

If I were maintaining my own database server, I'd have to manage upgrades, backups, and the complexity of running an additional server. With Litestream, I don't have to manage upgrades because nothing bad happens if I don't upgrade, whereas there are security risks in running an unpatched MySQL/Postgres server in production. Litestream has built-in snapshots and can replicate to multiple S3 backends, so I'm not too worried about backups. And there's no server to maintain.
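For illustration, recovery on a brand-new machine is a single restore from the replica before the app starts. A sketch (the database and bucket paths are made up, and the flags are Litestream's restore command as I understand it):

    import subprocess

    # Pull the latest copy of the database down from the S3 replica,
    # then start the app; Litestream resumes replication from there.
    subprocess.run(
        ["litestream", "restore", "-if-replica-exists",
         "-o", "store.db", "s3://my-bucket/store"],
        check=True,
    )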

What operational complexity do you see in Litestream?


SQLite is really great. By using it, you don't have to install and maintain another service, and you don't have to think about things like network security. From that point of view, that's clearly simpler.

But it also introduces a few challenges. It's not as easy to connect to your database remotely to inspect it, the way you can with something like Sequel Pro for MySQL. It's not possible to create an index or drop a column without blocking all writes, which can be annoying if your database is large. And database migrations in general are harder with SQLite because ALTER TABLE is limited.[1]
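For schema changes that ALTER TABLE can't express, the workaround documented at [1] is to rebuild the table. A sketch with made-up table and column names:

    import sqlite3

    # Rebuild-the-table pattern for changes ALTER TABLE can't do,
    # e.g. dropping a column on older SQLite versions.
    conn = sqlite3.connect("app.db")
    conn.executescript("""
        PRAGMA foreign_keys=OFF;
        BEGIN;
        CREATE TABLE users_new (id INTEGER PRIMARY KEY, email TEXT);
        INSERT INTO users_new (id, email) SELECT id, email FROM users;
        DROP TABLE users;
        ALTER TABLE users_new RENAME TO users;
        COMMIT;
        PRAGMA foreign_keys=ON;
    """)
    conn.close()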

One last thing about losing a few seconds of data: if you use something like a Google Cloud regional persistent disk, your data is replicated synchronously in two different data centers, which means you can lose your server, start another one, and not lose any data. That can still be combined with Litestream for backups to S3 with point-in-time restores.

[1] https://sqlite.org/lang_altertable.html


Yeah, this is the saner approach. Just use Google's replication/durability, and export to S3 when you want or need to change vendors. In that case, you wouldn't even need Litestream. Just SQLite.


If you can lose the last few seconds, then yes, that's fine. But for most applications I've worked on, we didn't have that flexibility (committed means durable).

I don't see any operational complexity with Litestream.io. I think it's an awesome tool. But it's not that different from managing PostgreSQL backups with something like WAL-E.

The complexity of managing your own database server only exists if you don't use a managed service. With a managed service, there is no server to maintain, and the provider does all the things you mentioned for you.



