As a project maintainer, I like it because it gives prospective users a quick and cheap way to test out service deployment, and it required nothing more from me than the already existing Dockerfile.
That is fascinating. It's one of those "feels so obvious" kinds of things. I wonder how hard it will be to track versions of things?
I guess the "safest" workflow would be to fork the repo before you click run but the article doesn't say how to handles repeat clicks... multiple environments or if it prompts you? Off to test!
To save anyone else some time if they stumble across my comment... It creates multiple revisions, even from distinct forks, based on what appears to be the repo name (cloud-run-hello).
First run from a fork:
This created the revision "cloud-run-hello-00001" of the Cloud Run service "cloud-run-hello"...
After changing the button's HTML to point to my repo on my fork:
This created the revision "cloud-run-hello-00002" of the Cloud Run service "cloud-run-hello"...
There appears to be more vendor lock-in with this one, requiring Heroku-specific files...
I'm a bit behind on best practices with Heroku so the heroku.yml[1] config is new to me and it says it doesn't replace app.json. This is where I feel like Cloud Run supporting "plain" docker files or build packs is great. I wonder if Heroku will follow suit and make it a bit easier to deploy "just a container"?
Cloud Run doesn't recognize app.json natively. However, this project (Cloud Run Button) accepts an app.json that lets the developer define environment variables the user will be prompted for.
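If I remember the Cloud Run Button README correctly, the env prompting part looks roughly like this (field names from memory, so treat it as a sketch and check the project's README for the exact schema):

```json
{
  "name": "my-service",
  "env": {
    "API_KEY": {
      "description": "API key the service should use",
      "required": true
    },
    "GREETING": {
      "description": "Message shown on the home page",
      "value": "Hello, world!"
    }
  }
}
```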
Just last Saturday, I tried to take an application that prominently featured a "Run on Heroku" button and test a bug locally.
For the life of me, I could not figure out how to go from the button-based (apparently automatically inferred by Heroku?) deployment to a local Procfile-based deployment.
It was faster to rely on the alternatively provided Dockerfile to start a local deployment and hope that it was as up-to-date as the Heroku setup.
I find the idea of one-click deployments really appealing, but Heroku's vendor-lock-in-based implementation really turned me off their service.
All of the Heroku button stuff is configured in an `app.json` file in the root of the repository. There's also still a Procfile used. The `app.json` just helps with the initial configuration such as environment variables that need to be set, scripts to run, etc.
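For a concrete picture, a minimal app.json for a Heroku button looks roughly like this (a sketch from memory of Heroku's schema, not copied from the rss-bridge repo; addon plan and migrate command are just illustrative):

```json
{
  "name": "example-app",
  "description": "One-click deploy example",
  "env": {
    "SECRET_KEY": {
      "description": "Session signing secret",
      "generator": "secret"
    }
  },
  "addons": ["heroku-postgresql:hobby-dev"],
  "scripts": {
    "postdeploy": "bundle exec rake db:migrate"
  }
}
```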
I'm not sure how much you know about Heroku, but their runtime environment is based around so-called buildpacks, which provide the required executables, packages, configuration, etc. for various languages (ruby, php, js, ...). Once you push code to Heroku's git remote it will analyze your code and select the proper buildpack to run the code on.
So you are absolutely right that you can't simply go from an app.json file to a local development system in one step, as you (very likely) don't have the required infrastructure on your system.
I see - so the idea behind checking in a Procfile is that you can run this locally using "heroku local", but the idea behind an app.json is that you explicitly can't?
The Procfile describes your application given some base installation (e.g. PHP 7.3), while the app.json, in this case, describes nothing in particular?
Any application requires both a base environment and instructions for application deployment. And that is how the Dockerfile for rss-bridge is constructed, too.
So what besides vendor lock-in is the advantage of Heroku's approach?
Heroku's buildpacks make some default assumptions regarding an app's startup. For a Ruby on Rails app, for example, it would simply start the web server using `bundle exec rails s` unless you define something else in the Procfile. I'd assume there is a similar procedure for PHP apps, probably starting an nginx instance and pointing it at the app's index.php or something like that.
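So a Procfile only needs to exist if you want to override those defaults. For a Rails app it might be as small as this (a sketch using Heroku's usual process-name conventions):

```
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq
```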
The way I understood it, app.json is there to customize the environment for the application to run in – providing ENV variables, required addons (think databases, memcache, etc.) and pre/post deploy scripts. A sort of configuration-as-code if you will. It is explicitly not used to define the actual processes that should be started when the app is deployed; that's what the Procfile is for, as long as you're running in an environment that supports Procfiles.
I'm not really sure what to tell you regarding vendor lock-in. Apart from the app.json the repo itself looks completely vendor-unaware, as it is simply a PHP application. It doesn't seem to make many assumptions regarding your (local) infrastructure but rather assumes you know how to get a PHP application to run on your server/computer. The presence of the app.json file is just an affordance to those who would want to try out the app without having to configure anything themselves.
On the contrary, now that I think of it. I always found Heroku to be rather non-locking, as you can just take your code and run it somewhere else. You need to provide some additional tooling around your deployments yourself in those cases, but that's true for all PaaS providers, isn't it? Heroku Addons are nice features, but usually simply services provided by third parties that are made available using automatically generated ENV variables, which you could simply copy over to wherever else your app is running.
There are a couple of options for making a "deploy to Azure" button. Both rely on ARM templates (declarative, parameterized deployments powered by JSON files):
1. Follow the steps at https://deploy.azure.com. This one greases the wheels for linking from a GitHub repo README for code that can be deployed straight to an Azure web app - you can just link to the site and it gets the repo URL from the referer header and uses a premade template to deploy it. You can also provide your own templates with custom parameters.
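The README side of this is just a badge link; for the custom-template variant the usual pattern is roughly the following (the repo/template URL below is a placeholder for your own azuredeploy.json, URL-encoded):

```markdown
[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fyour-org%2Fyour-repo%2Fmaster%2Fazuredeploy.json)
```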
I love this. Would be cool if the cloud run button (i.e. the image in the readme) updates and shows whether it's running and some status info after it's started (maybe it does already, didn't try it yet)
I am a bit disappointed: I skipped the details, and now that I've tried it hands-on I see that it hardcodes the link to the repo in the button's HTML. So even after you fork it, the button still points to Google's repo.
That makes sense, of course, but if that step were gone (maybe by checking referrer?) it would be a lot slicker!
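For anyone else who skipped the details like I did: the snippet you paste into the README is roughly the following, so after forking you have to swap in your own repo URL yourself (exact URL and parameter names may differ, check the project's README):

```markdown
[![Run on Google Cloud](https://deploy.cloud.run/button.svg)](https://deploy.cloud.run/?git_repo=https://github.com/GoogleCloudPlatform/cloud-run-hello.git)
```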
This looks pretty slick; will have to see which parts/ideas we should copy for https://mybinder.org, which is an open-source project that lets you do something very similar, mostly based around people wanting to share notebooks. Colab, but with automatically detected environment/build instructions.
What's the best way to handle a button like this if you intentionally keep certain dependencies out of Git/GitHub? (e.g. my gpt-2-cloud-run repos [https://github.com/minimaxir/gpt-2-cloud-run] have a 500MB dependency)
Can you do a conditional in the Dockerfile, e.g. download a remote file if using Docker to build with certain parameters?
RUN cache is invalidated when the text (the whole RUN line) itself changes, which can be bad if you update a remote zip archive that you download with `RUN curl ...` and then expect the image to be updated after a simple `docker build`. This also goes for `RUN apk add ...` where the package might have received critical security updates but you're not getting them into your image because the cache is used.
COPY and ADD caches are invalidated when the hash of the actual file content that's added changes, which is usually what you want.
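One workaround for the `RUN curl ...` case is to thread a build arg through the line so you can force a cache miss on demand, e.g. `docker build --build-arg CACHE_DATE=$(date +%s) .` with something like (URL is a placeholder):

```dockerfile
# Changing CACHE_DATE at build time invalidates the cache for the RUN below,
# forcing the archive to be re-downloaded.
ARG CACHE_DATE=unknown
RUN echo "cache bust: ${CACHE_DATE}" && \
    curl -fsSL -o /tmp/archive.zip https://example.com/archive.zip
```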
You don't put the docker image in git, so you don't need a conditional in the Dockerfile. You'll only download the dependencies when you build the image, assuming you've made them available somewhere.
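For the gpt-2-cloud-run case, that could just be a download step in the image build, something like this sketch (hypothetical URL and filename; the ARG is only there because the question asked about conditionals):

```dockerfile
FROM python:3.7

WORKDIR /app
COPY . /app

# Hypothetical toggle: pass --build-arg FETCH_MODEL=false to skip the big download locally.
ARG FETCH_MODEL=true

# Pull the ~500MB model from external storage at build time instead of keeping it in git.
RUN if [ "$FETCH_MODEL" = "true" ]; then \
      curl -fsSL -o model.bin https://storage.example.com/models/model.bin; \
    fi

CMD ["python", "app.py"]
```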
I want to love all these Cloud Run / App Engine / Cloud Functions type things (and the Amazon equivalents), but it seems like stuff takes like 30 years to start up if it hasn't been touched for a little bit, even for super simple functions... is this stuff just near unusable unless you're patient or, like, super baller with billions of hits per second?
Similarly, if you're using Node or Python, you might want to see if any of your dependencies are enormous with lots of files and slow startup time -- you can check this locally by timing how long it takes to get to the initial listen() call and just print that wall time as you adjust dependencies.
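A low-tech way to measure that (module and function names here are hypothetical; the point is just recording wall time before and after your imports/app setup):

```python
import time

_start = time.monotonic()

# All the heavy lifting happens in these imports / app construction.
from myapp import create_app  # hypothetical module
app = create_app()

# Time elapsed before we're actually ready to listen.
print(f"startup took {time.monotonic() - _start:.2f}s")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```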
If you're building Golang and you're seeing slow cold starts... I have no idea how you're doing that. For development, a lot of us on the open source http://knative.dev/ side are using Go http servers that take tens of milliseconds to start up, so there's probably some other initialization that's slowing you down.
Any thoughts on auto-detecting the language/framework so that it's just git push and you don't even need a Dockerfile? For a Zeit- or Heroku-like experience.
It might require a lot of conventions (which might not be worth it finally) but as a quick deploy and experiment solution, it’d be super awesome.
I'm about to launch a new website and would like to start using AWS or GCP. Between Fargate and Cloud Run, which one would you recommend? (It's a simple React + Django + Postgres + Redis project) Thanks!
I personally make heavy use of classic App Engine tbh, the experience is pretty similar to Cloud Run but it's much more of a mature platform (if you can deal with the downsides).
Plus, I just finished making a GitHub Actions workflow last night that auto-deploys to App Engine, so I can just push and it deploys automatically if all tests pass!
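In case it's useful to anyone, the workflow is roughly the following (a sketch; it assumes the google-github-actions actions and a service-account key stored as a repo secret named GCP_SA_KEY, your setup may differ):

```yaml
name: Deploy to App Engine
on:
  push:
    branches: [master]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Test steps would go here (or in a preceding job); omitted for brevity.
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      - uses: google-github-actions/setup-gcloud@v2
      - run: gcloud app deploy --quiet
```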
I used App Engine years ago, but didn't even consider it nowadays exactly because it's mature and I fear Google might sunset it soon. Will have a look now, thanks!
Cloud Run is great if you don't need something that's always on (e.g. websockets). For the Postgres part you still need Cloud SQL. It is also much simpler compared to Fargate (at least perceptually; I haven't actually tried Fargate, nor would I want to, as a happy GCP user).
Cloud Run is not for hosting websites (edit: in most cases, see reply below), it's for bespoke API commands with a bit more flexibility than Lambda/Cloud Functions due to greater environmental control.
Cloud Run is a perfectly reasonable choice for hosting websites -- it's a serverless HTTP platform that uses containers as the base packaging and runtime infrastructure.
Unlike something like Fargate, it supports automatic scaling of containers based on requests, so it will run zero containers if you get no traffic, and 100 containers if you get (for example) 1500 requests per sec. The fully-managed version has a pay-per-100ms of execution model, while the GKE-hosted version uses an existing GKE cluster you provide.
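For the fully-managed version, those knobs live on the deploy command, roughly like this (service, image, and region are placeholders):

```sh
gcloud run deploy my-service \
  --image gcr.io/my-project/my-image \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated \
  --concurrency 80 \
  --max-instances 100
```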
It's stateless, which won't allow many user authentication/CRUD workflows, although recently they added integration with Cloud SQL which is interesting.
Don't think you could use Django on Cloud Run without issues, though, particularly with how it handles Sessions.
If you're doing any modern architecture with microservices using containers, you're primarily doing stateless things (even if it's your web frontend) and pushing the state off to somewhere like Redis/memcached/database.
You basically implied that web frontends don't run in a load-balanced multi-replica set up, which is not true.
Similarly from what you said one might think people don't deploy web frontends to Kubernetes (where containers come and go all the time as they're ephemeral, due to events like crashes, autoscaling), which is also not true.
If you’re writing anything that scales (i.e. has multiple replicas), then you don't actually store any significant state wrt logins/sessions in your app; you push it out to external storage. Most web frameworks offer libraries or middleware that let you persist this "state" in external storage.
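In Django's case, for example, pointing sessions at Redis is a couple of settings (a sketch; this assumes the third-party django-redis package and a placeholder Redis address):

```python
# settings.py (excerpt)

# Store sessions in the cache instead of on the instance.
SESSION_ENGINE = "django.contrib.sessions.backends.cache"

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://10.0.0.3:6379/1",  # placeholder address
    }
}
```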
As a Cloud Run user myself, I suggest adding examples for that use case explicitly in the Cloud Run docs, as those workflows are harder to build a mental model of on Cloud Run than on, say, Kubernetes orchestration.
It's stateless, but it can connect to stateful services just fine to power things like authentication/CRUD. If the stateful services are also serverless, you can get a completely scalable, stateful system that scales to zero.
We use it in conjunction with Google PubSub and Cloud Storage to evaluate ML models in production and are really happy with it.
Actually, you can run Django on Cloud Run without issues.
Cloud Run is basically gVisor + Docker (which can run on GKE or on Google-managed servers); it's essentially built with Knative. Btw, even the new App Engine can run Django.
(You can't run background stuff, or at least you shouldn't.)
Conceptually cool. I wonder though if this would work for those of us whose application consists of multiple interdependent services living in multiple repos.