Cloud Run Button: Click-to-deploy your Git repos to GCP (cloud.google.com)
241 points by eloycoto on Aug 21, 2019 | hide | past | favorite | 82 comments



Back in June a Googler added a cloud run button to one of my open-source repositories (https://github.com/typpo/quickchart#deploy).

As a project maintainer, I like it because it gives prospective users a quick and cheap way to test out service deployment, and it required nothing more from me than the already existing Dockerfile.


That is fascinating. It's one of those "feels so obvious" type of things. I wonder how hard it will be to track versions of things?

I guess the "safest" workflow would be to fork the repo before you click run, but the article doesn't say how it handles repeat clicks... does it create multiple environments, or does it prompt you? Off to test!


Heroku has had this for years; it just wasn't cheap.

https://elements.heroku.com/buttons


To save anyone else some time if they stumble across my comment... It creates multiple revisions, even from distinct forks, based on what appears to be the repo name (cloud-run-hello).

First run from a fork:

This created the revision "cloud-run-hello-00001" of the Cloud Run service "cloud-run-hello"...

After changing the button's HTML to point to my repo on my fork:

This created the revision "cloud-run-hello-00002" of the Cloud Run service "cloud-run-hello"...


Cloud Run requires a deployed container image somewhere. That can be Docker Hub or GCR, but the image version is what it actually operates on.


Heroku has had this for years. I'm surprised others have not followed suit more often.


I didn't realize that. Saving anyone else a search https://devcenter.heroku.com/articles/heroku-button

There appears to be more vendor lock-in with this one, since it requires Heroku-specific files...

I'm a bit behind on best practices with Heroku, so the heroku.yml[1] config is new to me, and it says it doesn't replace app.json. This is where I feel like Cloud Run supporting "plain" Dockerfiles or buildpacks is great. I wonder if Heroku will follow suit and make it a bit easier to deploy "just a container"?

[1] https://devcenter.heroku.com/articles/build-docker-images-he...


`app.json` serves a different purpose from Dockerfiles (which Heroku also supports). Google's Cloud Run also makes use of `app.json`.


Cloud Run doesn't recognize app.json natively. However, this project (Cloud Run Button) accepts an app.json that lets the developer configure environment variables to prompt the user for.

Disclaimer: I've worked on this project.


My mistake for confusing Cloud Run and Cloud Run Button.


Just last saturday, I tried to take an application which prominently featured a "Run on Heroku" button and test a bug locally.

For the life of me, I could not figure out how to go from the button-based (apparently automatically inferred by Heroku?) deployment to a local Procfile-based deployment.

It was faster to rely on the alternatively provided Dockerfile to start a local deployment and hope that it was as up-to-date as the Heroku setup.

I find the idea of one-click deployments really appealing, but Heroku's vendor-lock-in-based implementation really turned me off their service.


All of the Heroku button stuff is configured in an `app.json` file in the root of the repository. There's also still a Procfile used. The `app.json` just helps with the initial configuration such as environment variables that need to be set, scripts to run, etc.
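For illustration, a minimal Heroku-style `app.json` might look like this (app name, env vars, and addon choices are hypothetical):

```json
{
  "name": "my-app",
  "description": "Illustrative Heroku button manifest",
  "env": {
    "SECRET_KEY": {
      "description": "Session signing key",
      "generator": "secret"
    },
    "API_URL": {
      "description": "Upstream API endpoint",
      "required": false
    }
  },
  "addons": ["heroku-postgresql"],
  "scripts": {
    "postdeploy": "bundle exec rake db:migrate"
  }
}
```

The process definitions themselves still live in the Procfile; this file only drives the one-click setup wizard.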


I would be very glad if you could demonstrate both of these facts based on the project's app.json [0].

[0] https://github.com/RSS-Bridge/rss-bridge/blob/acc0787b001613...


I'm not sure how much you know about Heroku, but their runtime environment is based around so-called buildpacks, which provide the required executables, packages, configuration, etc. for various languages (ruby, php, js, ...). Once you push code to Heroku's git remote it will analyze your code and select the proper buildpack to run the code on.

So you are absolutely right that you can't simply go from an app.json file to a local development system in one step, as you (very likely) don't have the required infrastructure on your system.


I see - so the idea behind checking in a Procfile is that you can run this locally using "heroku local", but the idea behind a app.json is that you explicitly can't?

The Procfile describes your application given some base installation (e.g. PHP 7.3), while the app.json, in this case, describes nothing in particular?

Any application requires both a base environment and instructions for application deployment. And that is how the Dockerfile for rss-bridge is constructed, too.

So what besides vendor lock-in is the advantage of Heroku's approach?


Heroku's buildpacks make some default assumptions regarding an app's startup. For a Ruby on Rails app, for example, it would simply start the web server using `bundle exec rails s` unless you define something else in the Procfile. I'd assume there is a similar procedure for PHP apps, probably starting an nginx instance and pointing it at the app's index.php or something like that.

The way I understood it, app.json is there to customize the environment for the application to run in: providing ENV variables, required addons (think databases, memcache, etc.) and pre/post deploy scripts. A sort of configuration-as-code, if you will. It is explicitly not used to define the actual processes that should be started when the app is deployed; that's what the Procfile is for, as long as you're running in an environment that supports Procfiles.

I'm not really sure what to tell you regarding vendor lock-in. Apart from the app.json the repo itself looks completely vendor-unaware, as it is simply a PHP application. It doesn't seem to make many assumptions regarding your (local) infrastructure but rather assumes you know how to get a PHP application to run on your server/computer. The presence of the app.json file is just an affordance to those who would want to try out the app without having to configure anything themselves.

On the contrary, now that I think of it. I always found Heroku to be rather non-locking, as you can just take your code and run it somewhere else. You need to provide some additional tooling around your deployments yourself in those cases, but that's true for all PaaS providers, isn't it? Heroku Addons are nice features, but usually simply services provided by third parties that are made available using automatically generated ENV variables, which you could simply copy over to wherever else your app is running.


I also think Azure has preset deployments on their repos.


There are a couple of options for making a "deploy to Azure" button. Both rely on ARM templates (declarative, parameterized deployments powered by JSON files):

1. Follow the steps at https://deploy.azure.com. This one greases the wheels for linking from a GitHub repo README for code that can be deployed straight to an Azure web app - you can just link to the site and it gets the repo URL from the referer header and uses a premade template to deploy it. You can also provide your own templates with custom parameters.

2. Link to the Azure portal ARM template deployment wizard using a deeplink that preloads a template from a public URL. I've never seen any official documentation for this, but https://www.noelbundick.com/posts/deploying-arm-templates-fr... covers it.


I love this. It would be cool if the Cloud Run button (i.e. the image in the readme) updated and showed whether it's running, plus some status info, after it's started (maybe it does already, didn't try it yet).


I am a bit disappointed: I skipped the details, and now that I've tried it hands-on I see that it hardcodes the link to the repo in the button's HTML. So even after you fork it, the button still points to Google's repo.

That makes sense, of course, but if that step were gone (maybe by checking referrer?) it would be a lot slicker!


Disclaimer: co-author of the project.

It's already on our list: https://github.com/GoogleCloudPlatform/cloud-run-button/issu... Ideally you'd just link to a URL like deploy.cloud.run and we'll figure out the rest from Referer.


Awesome! Thanks for the quick response on that!


This looks pretty slick; will have to see which parts/ideas we should copy for https://mybinder.org, which is an open-source project that lets you do something very similar, mostly based around people wanting to share notebooks. Colab, but with automatically detected environment/build instructions.


Looks awesome. Loved the deploy to Heroku button in the past.

Now I'm just waiting for websocket support to move my backend to Cloud Run as well.


If you want to support websockets from a not-always-on backend, you can use Fanout or Amazon API Gateway.


If you want to build your own micro PaaS that goes from Git to deploy (on a Kubernetes cluster), check out https://rio.io (disclaimer: happy user).


What's the best way to handle a button like this if you intentionally keep certain dependencies out of Git/GitHub? (e.g. my gpt-2-cloud-run repos [https://github.com/minimaxir/gpt-2-cloud-run] have a 500MB dependency)

Can you do a conditional in the Dockerfile, e.g. download a remote file if using Docker to build with certain parameters?


Off-topic but I think Dockerfile allows downloading remote URLs, e.g.:

     ADD http://example.com ./file.txt


I've been using wget like a sucker?!


I think using RUN wget/curl is actually cached better than ADD [URL], which will invalidate your build-step cache more often, unless I'm mistaken.


It's also easier to clean up, i.e. download a file, do something with it, and then remove it. With ADD this is impossible.
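That cleanup pattern can be sketched in a Dockerfile fragment (URL and paths illustrative):

```dockerfile
# Fetch, extract, and delete the archive in a single RUN so the
# downloaded file never persists as a layer in the final image.
RUN curl -fsSL https://example.com/assets.tar.gz -o /tmp/assets.tar.gz \
 && tar -xzf /tmp/assets.tar.gz -C /opt/app \
 && rm /tmp/assets.tar.gz
```

With ADD, the file becomes part of a layer before any later RUN can remove it, so deleting it afterwards doesn't shrink the image.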


If I remember correctly it's like this:

RUN cache is invalidated when the text (the whole RUN line) itself changes, which can be bad if you update a remote zip archive that you download with `RUN curl ...` and then expect the image to be updated after a simple `docker build`. This also goes for `RUN apk add ...` where the package might have received critical security updates but you're not getting them into your image because the cache is used.

COPY and ADD caches are invalidated when the hash of the actual file content that's added changes, which is usually what you want.
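The contrast can be sketched like this (URL illustrative):

```dockerfile
# Cached by the text of the instruction: this line won't re-run on
# rebuild even if the remote archive has changed upstream.
RUN curl -fsSL https://example.com/pkg.tgz -o /tmp/pkg.tgz

# Cached by content: Docker fetches the URL at build time and busts
# the cache whenever the downloaded file's checksum changes.
ADD https://example.com/pkg.tgz /tmp/pkg.tgz
```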


Doesn't ADD do the download unconditionally but only invalidate the cache if the contents change?

The only good use for ADD that I've found is for invalidating git clones from GitHub:

    ADD https://api.github.com/repos/<user>/<repo>/git/refs/heads/<branch> version.json
    RUN git clone --depth=1 --single-branch --branch=<branch> https://github.com/<user>/<repo>.git


Over http, without a checksum!?


You don't put the Docker image in Git, so you don't need a conditional in the Dockerfile. You'll only download the dependencies when you build the image, assuming you've made them available that way.


I want to love all these Cloud Run / App Engine / Cloud Functions type things (and the Amazon equivalents), but it seems like stuff takes like 30 years to start up if it hasn't been touched for a little bit, even for super simple functions... is this stuff just near unusable unless you're patient or super baller with billions of hits per second?


It depends on the language you're using and how you manage your dependencies.

If you're using Java, see this writeup for how different frameworks can affect startup time: https://medium.com/google-cloud/java-frameworks-performances...

Similarly, if you're using Node or Python, you might want to see if any of your dependencies are enormous with lots of files and slow startup time -- you can check this locally by timing how long it takes to get to the initial listen() call and just print that wall time as you adjust dependencies.
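A rough way to measure that locally, sketched in Python (the stdlib import here stands in for your real, heavier dependencies):

```python
import time

t0 = time.perf_counter()

# Imports under test go here; json stands in for heavier dependencies.
import json  # noqa: F401

# ...app construction would go here, up to (but not including) listen()...

startup_ms = (time.perf_counter() - t0) * 1000
print(f"startup took {startup_ms:.1f} ms")
```

Run this as you add or remove dependencies and watch how the wall time moves; that time is roughly what every cold start pays before your container can serve a request.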

If you're building Golang and you're seeing slow cold starts... I have no idea how you're doing that. For development, a lot of us on the open source http://knative.dev/ side are using Go http servers that take tens of milliseconds to start up, so there's probably some other initialization that's slowing you down.


App Engine Flex has to spin up a Compute Engine VM so deploys are slow. Cloud Run deploys containers to Borg so it's significantly faster.


The free tier seems pretty generous but I'm not sure I'm making sense of all these numbers [1].

Anyone has insight on how this compares to Heroku in terms of pricing/performance?

[1] https://cloud.google.com/run/pricing


Is this different from how Netlify does it?


Any thoughts on auto-detecting the language/framework so that it's just git push and you don't even need a Dockerfile? For a Zeit- or Heroku-like experience.

It might require a lot of conventions (which might not be worth it finally) but as a quick deploy and experiment solution, it’d be super awesome.


Disclaimer: working on this project.

When there are no Dockerfiles, we use CNCF Buildpacks (https://buildpacks.io) similar to Heroku buildpacks.

Some apps without a Dockerfile are already deployable this way, as the buildpacks detect the language/framework.


> similar to Heroku buildpacks.

In fact Heroku is one of the founding contributors for Cloud Native Buildpacks (along with Pivotal, where I work).


I'm about to launch a new website and would like to start using AWS or GCP. Between Fargate and Cloud Run, which one would you recommend? (It's a simple React + Django + Postgres + Redis project) Thanks!


I personally make heavy use of classic App Engine tbh, the experience is pretty similar to Cloud Run but it's much more of a mature platform (if you can deal with the downsides).


AppEngine is great!

Plus, I just finished making an auto deploy to AppEngine Workflow with GitHub Actions last night, so I can just push and auto deploy if all tests pass!


If it's open source, could you please point to it?


https://gist.github.com/ZeroCool2u/c818f89906c9b46d28cb7de85...

The project I created it for isn't public yet, but I made a gist that provides an example workflow and the steps required to make it work.


If anyone happens upon this and finds it useful, I've created a public repository that demonstrates how to auto deploy if unit tests pass: https://github.com/ZeroCool2u/Python-Deploy-To-Google-App-En...


Thank you


Absolutely, happy to help!


I used App Engine years ago, but didn't even consider it nowadays exactly because it's mature and I fear Google might sunset it soon. Will have a look now, thanks!


App Engine is still GA and is still having features added.

Disclaimer: I work on both Cloud Run and App Engine at Google.


Again, false perception issues: GCP is not a Google consumer product. GCP rarely sunsets products already in GA (for now).


Cloud Run is great if you don't need something that's always on (e.g. websockets). For the Postgres part you still need Cloud SQL. It is also much simpler compared to Fargate (perceptually, haven't actually tried Fargate nor would want to, happy GCP user)


Thanks, it's a simple CRUD with a couple of cronjobs.


Yes, good point. It can do cron jobs with the help of Cloud Scheduler and Pub/Sub.
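For the HTTP-trigger flavor, that wiring is roughly one Cloud Scheduler command (job name, service URL, and path are hypothetical):

```shell
# Hit the service's /tasks/nightly endpoint every day at 03:00 UTC.
gcloud scheduler jobs create http nightly-job \
  --schedule="0 3 * * *" \
  --uri="https://my-service-abc123-uc.a.run.app/tasks/nightly" \
  --http-method=POST
```

The service then just exposes a normal handler at that path; Cloud Run spins up a container on each scheduled request and scales back to zero afterwards.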


Cloud Run is not for hosting websites (edit: in most cases, see reply below), it's for bespoke API commands with a bit more flexibility than Lambda/Cloud Functions due to greater environmental control.


Cloud Run is a perfectly reasonable choice for hosting websites -- it's a serverless HTTP platform that uses containers as the base packaging and runtime infrastructure.

Unlike something like Fargate, it supports automatic scaling of containers based on requests, so it will run zero containers if you get no traffic, and 100 containers if you get (for example) 1500 requests per sec. The fully-managed version has a pay-per-100ms of execution model, while the GKE-hosted version uses an existing GKE cluster you provide.


It's stateless, which won't allow many user authentication/CRUD workflows, although recently they added integration with Cloud SQL which is interesting.

Don't think you could use Django on Cloud Run without issues, though, particularly with how it handles Sessions.


I disagree.

If you're doing any modern architecture with microservices using containers, you're primarily doing stateless things (even if it's your web frontend) and pushing the state off to somewhere like Redis/memcached/database.

You basically implied that web frontends don't run in a load-balanced multi-replica set up, which is not true.

Similarly from what you said one might think people don't deploy web frontends to Kubernetes (where containers come and go all the time as they're ephemeral, due to events like crashes, autoscaling), which is also not true.

If you're writing anything that scales (i.e. has multiple replicas), then you don't actually store any significant state wrt logins/sessions in your app; you push it out to external storage. Most web frameworks offer libraries or middleware that let you persist this "state" in external storage.
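In Django, for example, pushing session state out to Redis is a couple of settings. A sketch, assuming the third-party django-redis package; the Redis host name is illustrative:

```python
# settings.py (fragment)

# Store cache data in Redis rather than in-process memory...
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://redis.internal:6379/1",
    }
}

# ...and keep sessions in that cache, so any replica (or any fresh
# Cloud Run container) can serve any user's session.
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
```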


As a Cloud Run user myself, I suggest adding examples for that use case explicitly in the Cloud Run docs, as those workflows for Cloud Run specifically are harder to mental-model than say Kubernetes orchestration.


It's stateless, but it can connect to stateful services just fine to power things like authentication/CRUD. If the stateful services are also serverless, you can get a completely scalable, stateful system that scales to zero.

We use it in conjunction with Google PubSub and Cloud Storage to evaluate ML models in production and are really happy with it.


Hmm, that might be crazy enough to work.


Actually, you can run Django on Cloud Run without issues. Cloud Run is basically gVisor + Docker (which can run on GKE or on Google-managed servers); it's basically built with Knative. BTW, even the new App Engine can run Django (you can't run background stuff, or at least shouldn't).


Hi, thanks for the clarification, I'll give it a try then!


I read somewhere that it was comparable to Fargate and got confused. Thanks for clarifying it!


Conceptually cool. I wonder though if this would work for those of us whose application consists of multiple interdependent services living in multiple repos.


Exactly, going to production is 80%. This button while cool is only 20% of the story.


That's fine; sometimes you just want to try something out, test it, play around.


This is excellent. Any way to pass environment variables?


Disclaimer: I work on this project.

Yes, you can create a file named app.json that prompts the user for environment variables that are set on the deployed application. See https://github.com/GoogleCloudPlatform/cloud-run-button/
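For illustration, such an app.json might look something like this (variable names and values are hypothetical; see the project README for the documented fields):

```json
{
  "name": "my-service",
  "env": {
    "DATABASE_URL": {
      "description": "Connection string for the backing database",
      "required": true
    },
    "LOG_LEVEL": {
      "description": "Verbosity of application logging",
      "value": "info",
      "required": false
    }
  }
}
```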


Is this free? How do I know Google won't terminate it?


Does this work with private repos? I don't see documentation on this.



> sped up for your viewing pleasure...

Has anyone tested how long this button actually takes?

Last time I checked, it's at least a 7-minute wait [1] to deploy a "Hello World" to GCP...

[1] https://stackoverflow.com/questions/40205222/why-does-google...


You're linking to a question about a different service (App Engine) from 3 years ago.


Yup, OP's time is in reference to App Engine Flex, which is much slower to deploy.

I normally see 1-2 minute deployment times depending on the application.

(I work for GCP)


OP here, sorry my mistake. But what is this button deploying to then?


Cloud Run. Serverless containers

https://cloud.google.com/run/


Interesting, looks like it's still in beta, but I'll check it out.

We use Firebase and GCP at our company and have had a good experience aside from slow deploys with App Engine. What parts do you get to work on?


I am using this in production for https://www.pagedash.com

It's holding up pretty well so far.

> What parts do you get to work on?

Not sure if this answers your question, but I also use Cloud SQL, Cloud Build, GKE, GCE, Stackdriver, BigQuery, etc.



