Hacker News
Google announces Cloud Build, its new continuous integration platform (techcrunch.com)
227 points by AnatMl2 on July 24, 2018 | 65 comments



Maybe I'm missing something, or maybe they haven't rolled out all the new features yet, but this product looks identical to the GCloud Container Builder that's been on GCP for at least a year. It's even got all the old container builds from that product, and the UI is identical.

Is there anything actually new here? I can't find any indication in their documentation that it's different; it seems like it's just a different name.


It's the same product -- it was just renamed.


"its new continuous integration platform." Pretty misleading title, if you ask me. I'd suggest it be titled, "Google renames its old build system."


This may be a nice time and place to ask ... I have docker problems.

We have a monorepo in GitLab and that works nicely. Many Dockerfiles are present in this repo in various folders. They all build private docker images pushed to AWS ECR. They all depend on each other and some have multiple parents, so this sort of forms a family tree.

It is a major pain to know which docker images are stale and need to be rebuilt, and to ensure that their parents are also up to date. We do all this manually and it sucks.

Some images take hours to build and we only want to rebuild them when necessary. It would be great if the build server cached build-steps so that the resulting images could share more layers with previously downloaded images, saving hugely on disk space, time and bandwidth for people and servers.

We make sure to have one Dockerfile per folder, with an adjacent file called "destination.txt" indicating the final image name; this allows scripts to easily build the entire tree of images and parents by scanning our repo.

We want nice automation. I don't even know what the best practices are here.

What should I do?
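One way to automate the "which parents must build first" part is to derive the dependency graph from the FROM lines themselves. Below is a minimal sketch assuming the one-Dockerfile-per-folder layout you describe; the function names are mine, and it deliberately omits cycle detection and staleness checks:

```python
# Sketch: topologically sort a monorepo's Docker images so that parent
# images always build before their children, by parsing FROM lines.
import re

def parse_parents(dockerfile_text):
    """Return the external base images named in FROM lines (multi-stage aware)."""
    stages = set()
    parents = []
    for line in dockerfile_text.splitlines():
        m = re.match(r"\s*FROM\s+(\S+)(?:\s+AS\s+(\S+))?", line, re.IGNORECASE)
        if m:
            base, alias = m.group(1), m.group(2)
            if base not in stages:  # skip references to earlier stages
                parents.append(base)
            if alias:
                stages.add(alias)
    return parents

def build_order(images):
    """images: {image_name: dockerfile_text}. Return a safe build order."""
    deps = {name: [p for p in parse_parents(text) if p in images]
            for name, text in images.items()}
    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for p in deps[name]:  # build all in-repo parents first
            visit(p)
        order.append(name)

    for name in images:
        visit(name)
    return order
```

Feeding this the Dockerfile contents found by your repo scan gives an order you can walk, rebuilding an image (and everything after it in the order) only when its folder or one of its ancestors changed.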


Several hour builds? I have to ask what kind of app this is for, because I haven’t ever had a build take more than 2 minutes. I’ve been using docker heavily in CI/dev for about 2ish years and the past couple months been getting some production stuff going.

My first thought, since you say you’re having issues with build caching: you might want to look into multi-stage builds[1], if you haven’t already. I haven’t gotten any “heavy” use from these since I mostly do web stuff in non-compiled languages, so I can’t benefit from this feature as much as a compiled language that can just copy binaries around. But it has been useful for things like compiling JS: since that’s done in a separate stage, I a) never have to contaminate my final image with node_modules at any point, and b) can rebuild the image and use the cached stage if there are no changes in the front end, and that’s usually the longest part of my builds anyway.

However, if you’re in a compiled language (which from the sounds of it, I have a feeling you might be — long builds and multiple dependent components that have to be rebuilt), you can compile some core component in the first stage (A), then have multiple stages (B, C) that depend on that component’s compiled files, and then maybe a final stage (D) that depends on B & C. So then, if you make changes to just C, you’ll only have to rebuild the C & D stages, while the other two are cached. Each stage itself is just an intermediary image, so you can use them as you would the final, and you can even tell Docker to just build up to stage B, if you only need that image.

[1] https://docs.docker.com/develop/develop-images/multistage-bu...
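The A/B/C/D pattern described above can be sketched as a single Dockerfile; all stage names, paths, and base images here are hypothetical:

```dockerfile
# Stage A: compile the core component once
FROM gcc:8 AS core
COPY core/ /src/core
RUN make -C /src/core

# Stages B and C depend only on A's build output
FROM gcc:8 AS libfoo
COPY --from=core /src/core/build /opt/core
COPY libfoo/ /src/libfoo
RUN make -C /src/libfoo

FROM gcc:8 AS libbar
COPY --from=core /src/core/build /opt/core
COPY libbar/ /src/libbar
RUN make -C /src/libbar

# Final stage D: ship only the binaries, no toolchain
FROM debian:stretch-slim
COPY --from=libfoo /src/libfoo/build/foo /usr/local/bin/
COPY --from=libbar /src/libbar/build/bar /usr/local/bin/
```

Changing only libbar's sources leaves the core and libfoo stages cached, and `docker build --target libfoo .` builds just up through that stage if you only need the intermediate image.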


Ahh yes -- some of our dependencies are huge compiled C++ projects that need some customization.

We're starting to use multi-stage builds here and there. They seem like an essential building block.

The main pain point seems to be knowing when an image needs to be rebuilt and to ensure any dependent images also get rebuilt.


Seems like a build pipeline may be the way to go. Check out Concourse CI or Jenkins Pipelines.

Those allow you to define multi-stage builds with custom triggers, which should allow you to trigger rebuilding dependent images.


I would make build triggers that are based on specific Dockerfiles changing in the repo. Many CI systems can watch a specific file being changed as the task trigger.

So if container A is a dependency for container B, then make container B's task trigger be whenever the Dockerfile for container A changes.
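Since the original poster mentioned GitLab, here's roughly what that looks like with GitLab CI's `only: changes:` feature; the paths and image names are placeholders, not from the thread:

```yaml
# Hypothetical .gitlab-ci.yml fragment: image B rebuilds whenever its own
# Dockerfile OR its parent image A's Dockerfile changes.
build-image-a:
  script:
    - docker build -t $REGISTRY/image-a images/a
    - docker push $REGISTRY/image-a
  only:
    changes:
      - images/a/Dockerfile

build-image-b:
  script:
    - docker build -t $REGISTRY/image-b images/b
    - docker push $REGISTRY/image-b
  only:
    changes:
      - images/a/Dockerfile   # parent changed, so B is stale too
      - images/b/Dockerfile
```

The downside is that each child job must list every ancestor's path by hand, which is why generating this config from a dependency scan can be worthwhile for a deep tree.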


This is Google Cloud Container Builder rebranded, which is a good thing. I've been abusing Container Builder as a general CI system for a while; I hope the rebranding indicates there will be more focus and feature development on general CI functionality.


I have been using this for several years now. Great stuff! :) The only part that was missing from this was something that can update your deployments on image build. That's when I built Keel https://github.com/keel-hq/keel.

Basically, I have one trigger to always build from master but the rest are triggered by tag, for example, "1.2.5" and it builds the same image, then Keel updates affected workloads. So far I have never had to pay for it, apparently managing to stay in the free tier :)

It works way faster than automated Docker cloud builds, too.


I understand how great this is, but does anyone else get a little upset when GCP, AWS, and Azure are able to buy their way into a market and are pretty much guaranteed to hurt the small players in the infra tooling market?


I take it as a sign that the value bar keeps getting raised for small players to exist. CI is becoming so commodity that it doesn't really matter if you get it from Google or Codeship. If a slow-moving corporate machine can replicate what you do...maybe you need to move faster.


Understandable. I suppose as time goes on any product is going to be centralized among a handful of major players.


> does anyone else get a little upset when GCP, AWS, and Azure are able to buy their way into a market

There's benefits to smaller, hungrier teams building things, but unless they have paying customers on day one they're also buying their way into the market one way or another...


Yes. I have no skin in this game personally, but I feel bad when Google or one of the other behemoths of tech decides to offer a service in a space where small players are already thriving. Google can compete with a scale of development that can easily drive the small guys out of business. What's worse is that they then get bored with the product at some point, or a pointy-haired boss looks sadly at a revenue chart, and the thing suddenly disappears with no recourse and the small players are gone.


I understand this; my company is currently dealing with a similar situation. We have a small vendor that hosts things, but I am pushing to drop them for Azure because it offers so many more features and tools than the vendor can match. It's partly an acquisition thing: we acquired a company that deals with this vendor, and we were already using Azure.

It's much harder to go off the cloud than to go on it, imo. You just lose so many great features when you move off of it. It's sad, but at the same time it's becoming impossible for smaller players to compete in a cloud world, and I don't think that's going to change. My biggest fear is that once all the small players are killed off, Google, Microsoft, and AWS will agree in some back room to not compete on price to protect their margins. I have a feeling that something like that is merely a decade away, if not less.


This is the reason companies build platforms. If you own the platform, you get a big advantage over third parties building tools on it. They aren't buying their way into the market with money; they're buying their way in with years and years of work building the platform. I don't think it's anything to get upset about.

Also, the big cloud platforms aren't shy about acquiring smaller players, and that's an exit that a lot of the people building tools for cloud platforms are looking for. I think it's safe to say that without GCP, AWS, and Azure there'd be a lot fewer companies building CI tools.


As much as big players move into green pastures - so do open source projects. It's more about low-hanging fruit than anything else.


I was really excited at first, but it's worth noting this does nothing for App Engine users. If you want CI there you still need to go to a third party. Disappointing considering how they're pitching this.


I know next to nothing about this, but there seem to be instructions on how to deploy to App Engine. See the App Engine tab under "Deploying artifacts" [1]. Is there more that's needed?

[1] https://cloud.google.com/cloud-build/docs/configuring-builds...


Ah you’re right. It was buried and non-obvious (to me).


How do you figure this? There's certainly a `gcloud` step available, you can integrate with any GCP services here.


Nice, all CI should be per minute.


Bitbucket Pipelines charges per minute as well: https://bitbucket.org/product/features/pipelines


Why? I'd have thought CI is a place where per-second billing would be an advantage? A CI job is likely to take somewhere between 1 and 15 minutes, per second billing could be a significant saving for many users.


GCB prices are described in terms of dollars-per-minute, but are actually calculated per-second.

The price is actually $0.00005/second. :)

disclosure: I'm an engineer on the GCB team


Sure that would be great, but the state of the art is actually worse. Some providers charge by concurrency limit, which means you're always paying for peak capacity.


Oh really?

I never really considered that. Perhaps it'd be profitable to actually use that capacity for something when it's idle...


I can't see any reason to favour this over Concourse.


I've been introducing Jason and Christopher from the Cloud Builder team to Concourse over the past few months. Jason & Co also developed Knative Build which closely tracks the Cloud Builder design.

I am hoping to see Concourse come in through the Pipelines proposal on Knative, but it will be necessary to digest a Concourse-on-Kubernetes epic first.

(Please come and sing the Concourse-is-amazing hymn with me on Knative, we both know it's for the good)

Disclosure: for those who don't know, I work for Pivotal and I'm a bit of a Concourse fan.


This is their container builder project re-branded.

Pretty nice product, but missing some laughably basic features.

Specifically:

- Ability to start builds based on github pull requests

- Ability to send messages to slack on successful / failed builds

- Ability to update github PRs with build status

- Conditional build steps AT ALL

- Ability to start parameterized builds from GUI ( What if I want to deploy to a specific environment? )

- Any outside integrations AT. ALL.

- No story on how to store secrets

I've been running this product for about a year. I have a Jenkins job that detects github PRs, and then launches these builds. I would LOVE to delete that Jenkins VM, but for some reason a lot of basic functionality has been ignored.

edit:

People have informed me that Github PR building is in alpha! PRAISED BE THE GOOGLE!

https://cloud.google.com/cloud-build/docs/run-builds-on-gith...


Hi, member of the Google Cloud Build team here. Appreciate all the feedback on the launch! Yes it's both a rebrand as well as an update.

When we first launched Container Builder a year ago we always had a plan to support more CI use cases. With the launch today we've added a few new features such as:

- Built-in support for pushing non-container artifacts to Google Cloud Storage

- Filepath triggers for invoking builds only on changes to certain subdirectories or files

- Updates to the Cloud Console UI and of the Cloud SDK (from `gcloud container builds...` to `gcloud builds...`)

We have more updates in alpha now including built-in support for GitHub pull requests, status/Checks API support, and programmatic triggers, which we agree is one of the biggest missing pieces for many people. That GitHub app is the first step with more granular control over PR triggers coming soon.

We try not to comment on roadmap items but looking ahead a little, this rebrand is also an indication of the product's focus on broader CI/CD use cases, which many of our users are already using Cloud Build for, and an evolution towards bringing DevOps and Continuous Integration best practices to Google Cloud users. Feature requests like built-in conditional steps and notifications are on our radar and we always appreciate hearing from users what they'd like to see us prioritize.

Release notes are published here for anyone interested in updates on new features. https://cloud.google.com/cloud-build/release-notes

There's a public Slack channel as well where GCB users and the Cloud Build team discuss features and different use cases. Happy to see anyone on there. https://googlecloud-community.slack.com/messages/C4KCRJL4D/c...

We appreciate all the positive feedback on the thread as well. We're excited about what's ahead for Cloud Build and that it's helping people be more productive!


Awesome to see this product getting more love! I'm looking forward to the Github status updates.

A few suggestions from my experience using the product for the last few months:

- Setting the machine type on a per-step basis instead of for the entire job. For example, I'd like to use a large box to compile my scala, but can use a smaller one to test each package.

- Showing the status of each step before everything ends.

- Showing the elapsed time for a job on the page for the job in addition to the start time.

- Failing a single step without stopping the build (for example, a single project in a ci build fails.)

- Speeding up the web view (it's frustratingly laggy when you have significant logs in your build.)

- Using the dataflow visualization for cloud builder.

- JUnit support


Please change your username, which is misleading (makes think you represent Google Cloud Build in some capacity). @dang


Do you even support trivial features like sending an email if a build fails? Please don't answer "write your own pub/sub component." If you don't have that, you shouldn't have bothered rereleasing this.


It doesn't appear to be a rebranding, but an upgrade or a rewrite, since the site mentions the first two points (that I checked):

https://cloud.google.com/cloud-build/

"Set up triggers to automatically build, test, or deploy source code when you push changes to GitHub, Cloud Source Repositories, or a Bitbucket repository."

https://cloud.google.com/cloud-build/docs/configure-third-pa...

I looked at Container Builder before and, like you, was not impressed, but this is something else.


Container Builder has been able to trigger based on changes to GitHub repos for a while (tags and merges to branches). What it (and the new branding) can't do, and what the parent comment is referring to, is trigger builds when PRs are opened, and update the PR with the build status.


Don't you need an existing branch to open a PR? I get your second point about updating the PR itself: everything needs to go through PubSub.

Edit: I found the changelog (https://cloud.google.com/cloud-build/release-notes), which confirms that this is both a rebranding and update, but also points to a new alpha GitHub app: https://cloud.google.com/cloud-build/docs/run-builds-on-gith...

"Observe that the Google Cloud Build app builds your code on creating a pull request."

"Go to the Checks tab. You'll see that Cloud Build has built your changes and you should see that your build has succeeded. You'll also see other build details such as the time it took to build your code, the build ID, etc."

There might be hope...


> Ability to send messages to slack on successful / failed builds

That should be easy enough: request a new Slack cloud builder. Currently this is possible, though, using the curl cloud builder and constructing a custom curl request to the Slack API.
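A sketch of such a step, assuming a Slack incoming-webhook URL passed in as a user-defined substitution (`_SLACK_WEBHOOK_URL` is a placeholder name):

```yaml
# cloudbuild.yaml fragment: post a message to Slack via the curl builder.
steps:
  # ... your build steps ...
  - name: 'gcr.io/cloud-builders/curl'
    args:
      - '-X'
      - 'POST'
      - '-H'
      - 'Content-Type: application/json'
      - '-d'
      - '{"text": "Build $BUILD_ID finished"}'
      - '$_SLACK_WEBHOOK_URL'
```

The catch is that a step only runs if the previous steps succeeded, so this notifies on success but not on failure; failure notifications need the Pub/Sub status topic instead.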

> No story on how to store secrets

Agree, we need a way to mark variables as sensitive/secure: don't show them in the build history in plain text, make the input field a password type, and encrypt them somehow when stored as well.

> Conditional build steps AT ALL

Agree, this is the biggest missing piece.


Encrypted environment variables and file contents are supported using Cloud KMS.
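Roughly, the build config references a KMS key and a ciphertext produced by `gcloud kms encrypt`; the project, key ring, and key names below are placeholders:

```yaml
# cloudbuild.yaml fragment: decrypt an env var with Cloud KMS at build time.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: ['-c', 'docker login --username=me --password=$$DOCKER_PASSWORD']
    secretEnv: ['DOCKER_PASSWORD']
secrets:
  - kmsKeyName: projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key
    secretEnv:
      DOCKER_PASSWORD: 'CiQA...base64-ciphertext...'
```

Note the `$$` escaping so the variable is expanded by the step's shell rather than treated as a build substitution.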


Is there an example/documentation of using a value from KMS in a cloudbuilder.yaml file?



The demo I just saw certainly seemed to have all of those things (except possibly PR status updates and outside integrations). I don't think it's a rebranding but an update.

We've been using https://drone.io/ for some time, which is very, very similar, but if we weren't I'd give it some serious consideration.


How does the free tier work? Can you sign up so that you only use the free tier, and builds beyond the limit are queued for the next day or don't occur at all?

Or do you need to have a "card on file" to run the free tier at all, with the chance that you'll have to pay if you somehow exceed the free tier (stuck build, etc)?


How does this work for booting up a database or other systems to run integration tests and such? I don't see anything about that sort of thing. Is the idea to bake everything into a single docker image and run that? Also, what are the resource limits (really, just interested in ram) for that listed price?


Tried a simple curl cloud builder with passed in variables and it works pretty nice.

The only feature request I have is a way to mark variables as sensitive/secure and not show them in the build history step in plain text (mask the value) and change the input type from text to password.


That console, though. Most players in this space seem to invest heavily in a very feature complete configuration GUI but in this case, there's absolutely nothing. Just a build history page and a small wizard to set up triggers.


Is there any plan to make this a partner product?

It's limited to 10 concurrent builds for obvious reasons, making it impossible for startups that want to create CI services on top of this product.


Does this replace traditional providers such as CircleCI or TravisCI?


It did for us; it's really easy to create a trigger on a GitHub branch, run tests, build the executable, and deploy (in our case to k8s). Super quick builds too.


How do you deploy to k8s? Do you run kubectl from within cloudbuild.yaml, as an additional step at the end?


Yeah, just run kubectl from within your cloudbuild.yaml

https://cloud.google.com/cloud-build/docs/configuring-builds...
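For reference, a final deploy step with the kubectl cloud builder looks roughly like this; the cluster name, zone, and manifest path are placeholders:

```yaml
# cloudbuild.yaml fragment: apply a manifest to a GKE cluster after building.
steps:
  # ... build and push the image first ...
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['apply', '-f', 'k8s/deployment.yaml']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
```

The env vars tell the builder which cluster to fetch credentials for; the build's service account needs permission on that cluster.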


That's awesome! I'll have to spend more time looking into it then


Not if you want to build on Mac or Windows.


How are you able to do Windows or ARM builds with only docker?


I suppose re-enlivening build-bot is out of the question now?


Do they have mac images for building with XCode?


I can't wait until they discontinue it!


Maybe they'll just rename it in 18 months. That would be an improvement, and someone can still get a promotion. Now they just need to think up a new name for Reader.


You know this comment has become a cliche?


So have the actual discontinuations


My thoughts exactly. Too much risk to commit to Google platform. I will wait for Mozilla cloud instead.


Because Mozilla have never cancelled a project...


Because Mozilla cloud will be open, so if they drop it, another firm or individual can continue to support or host it. Rust is the ideal language (so far) for cloud software, so (if the talks materialize) Mozilla cloud software in Rust/Cargo will be much easier to write, support, and host yourself, if necessary. Non-profit organizations are slower, but better in the long term. Only a non-profit organization can liberate us.


The three most expensive words in software development are "Generous free tier"



