This is highly confusing to me. Pipelines in Jenkins are fragile. You can spend entire days trying to do very common things and run into tons of friction. The differences between scripted and declarative are severe, in terms of how errors and continuation are handled, and the way plugins interact depending on how you've nested stages and steps. Even things like environment vars and CWD can be a pain to coordinate. Trivial pipelines take far, far, far too long to set up and get working well.
This just appears to be a layer on top of the still-limited declarative pipelines. I'm not sure trading a groovy DSL for an extra layer (of yaml, no less) is a good deal.
It doesn't solve the weirdness around multibranch, it doesn't address the mismatch between having a parameterized pipeline where that parameterization has to live outside the Jenkinsfile (or if it is encoded in the Jenkinsfile, it acts really oddly, like running the Jenkinsfile actually reconfigures the job that invoked it, etc). This is why everyone has to continue using Groovy DSL and/or JJB to reinstantiate parameterized jobs or handle jobs that deal with multiple Jenkinsfiles in a project.
I know Jenkins isn't going anywhere, but its legacy shows a lot.
First, full disclosure, I'm the creator of Jenkins.
I'm sorry to hear that you had a bad experience with Jenkins Pipeline. I can see that you know a lot about it.
There's actually no difference between how continuation is handled between scripted and declarative, so I'm curious to know more about what hit you, because I suspect it's something else (though obviously equally frustrating!). I'm similarly curious about improvements to error reporting, because that's something the Pipeline team cares about, and it's one of those areas where how errors are made in the real world is always more interesting than what we can imagine. I used to work on compilers, so I know the frustration of a poor error message pushing you down the wrong lane, only to discover a few hours later that all it took was a one-line fix! Modern improvements in Pipeline (like declarative and this one) are in no small part motivated by making those error checks more thorough, easier, and more upfront. So I think this is a change in the right direction.
My perception has been that parameterized jobs are in decline, in part because more people are triggering automation implicitly through commits, and not through an explicit "run this" button. Parameters are more often implied from the context (branch, commit message, creation of tags, etc.), as opposed to explicitly given from the UI.
Stepping back from those specifics, I think regrettably software has bugs, and there are always more usability improvements that can be made, so we are just working on those one at a time, which kinda summarizes my entire journey with Jenkins :-) So in that spirit, I want to make sure we learn from your suffering.
Just chiming in to say that I use parameterized builds! I use them for controlling the version stamp in custom one-off builds, and also in giving devs the ability to build from their custom branches.
Thanks for the response. Mine is a bit terse as I'm on mobile for a few days.
For one, the always/finally block in declarative pipelines flatly doesn't work in scripted mode. You have to set/maintain job status yourself and throw/catch. It's very unpleasant.
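To make the contrast concrete, here's roughly what I mean (just a sketch; the script and report paths are made up). Declarative gives you the cleanup block:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh './build.sh'
                }
            }
        }
        post {
            always {
                // runs whether the build passed or failed
                junit 'results/*.xml'
            }
        }
    }

whereas in scripted mode you own the status bookkeeping yourself:

    node {
        try {
            stage('Build') {
                sh './build.sh'
            }
            currentBuild.result = 'SUCCESS'
        } catch (err) {
            // you have to record the failure and rethrow yourself
            currentBuild.result = 'FAILURE'
            throw err
        } finally {
            junit 'results/*.xml'
        }
    }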
I am shocked to hear that parameterized builds are in decline. Every project I've seen or touched that used Jenkins had multiple pipelines in a repo that needed to be executed with a number of different configurations. If there was better native support for pipelines calling other pipelines, this might not be such a problem.
Example: a kube-related project. It has a Jenkinsfile for release builds and another for general checkin/PR builds. Each of those needs to be tested with two versions of kube, with flannel and calico, and with and without RBAC. I don't know of a clean way to do that without Job DSL/JJB and parameterized jobs. In my experience, this type of scenario is not uncommon. We need visibility at the config level. It's not helpful to shove all of that into a single job with coarse-grained, rolled-up reporting.
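To sketch what I mean (the repo URL and script names are made up), the Job DSL glue ends up looking roughly like this, with the configuration matrix living outside the Jenkinsfiles themselves:

    // One parameterized pipeline job per kube/CNI combination.
    ['1.9', '1.10'].each { kubeVersion ->
        ['flannel', 'calico'].each { cni ->
            pipelineJob("e2e-${kubeVersion}-${cni}") {
                parameters {
                    booleanParam('ENABLE_RBAC', true, 'Run the RBAC variant')
                }
                definition {
                    cpsScm {
                        scm {
                            git('https://example.com/my-kube-project.git')
                        }
                        scriptPath('Jenkinsfile.e2e')
                    }
                }
            }
        }
    }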
Even if you put parameters aside, there is just a fundamental mismatch, because the pipeline only (today) encodes steps. What about triggers? Having them be part of the pipeline, and having the pipeline modify its own job in Jenkins, feels very, very, very wrong to me. There's a reason Job DSL and JJB kept those separate. In fact, if it weren't for the Pipeline visualization, I could've gotten the same functionality with JJB/Job DSL alone, plus some bash scripts, with considerably less heartache.
I don't know that I know a "lot" about Jenkins, but I do know that I strive for easily repeatable setups that need git triggers, need to post back status, and need to run bash scripts. This most recent attempt was the third time owning it all-up, and it was still very tedious. And it was really much easier to achieve these simple requirements with k8s's Prow... even when having to dive in and hack some of Prow's plugins.
All of the mature, repeatable Jenkins setups I've seen are small mountains of groovy scripts with their own workarounds embedded. There needs to be a system config DSL too, because maintaining huge blobs of XML and writing groovy.init.d scripts keep me awake at night.
Anyway, Jenkins is crazy good stuff, especially for its age, but for newer, smaller projects that don't need the 10,000 Jenkins plugins, it feels unwieldy. I had hope that Jenkins X was going to tackle some of these things, but so far I'm not sure.
Thanks again for taking the time, I hope my further feedback is constructive.
I need to find someone from the Pipeline team to pull into this, so in the meantime I'm just responding to what I can contribute.
Just to make sure, I'm not saying parameters are disappearing. I'm just making an observation that fewer people seem to be using them. Take your example of release vs. general checkin/PR builds. I see more and more people doing releases as automation that kicks in after creating a tag, or by cutting a release branch. Or the master branch is always deployable.
I agree with you that reporting capability needs to be better in order for one Jenkinsfile to pack lots of different test cases. I believe the team totally gets how important this is.
The "system config DSL" you mention has evolved into "Jenkins config as code", and I've referred to it in my other comment. I think we are on the same page that it's a crucial part of a repeatable mature Jenkins setup. I think I also totally get what you mean by "unwieldy", and Jenkins Essentials in that comment is making steps to attack that challenge.
I'd love to hear where Jenkins X fell short for you, because I think it should speak to some of these challenges by embracing certain best practices.
First of all, this is a Google Summer of Code project. Abhishek, who is driving this work, is doing great work, so I hope people can give him encouragement and feedback to push him forward. I'll make sure he sees that feedback and will stop by to answer questions you might have.
This is one of the efforts pushing the envelope of Jenkins to solve problems people have had with it. Reading some of the reactions here, I wanted to use this opportunity to introduce other, bigger efforts going on currently in Jenkins that I think address various points raised in this thread.
* Jenkins Essentials is aiming to be the kind of "readily usable out of the box" Jenkins distribution that is a lot less fragile, because it's a self-updating appliance that has sane defaults and an obvious path to success.
* There's an architecture effort going on to change the deep guts of Jenkins so that data won't have to be on the file system, and can instead go to managed data services.
* Jenkins Configuration as Code lets you define the entire Jenkins configuration in YAML and launch Jenkins as a Docker container to do immutable infra. Jenkins Pipeline lets you define your pipeline in your Git repo, so that's the other part of immutable infra, and between modern Pipeline and efforts like this one, there's no need to write Groovy per se. It's just a configuration syntax based on brackets, like nginx, which happens to conform to Groovy syntax, so that when you need to do a little bit of complicated stuff you can, but you don't need to.
* Finally, Jenkins X is focused on making CD a whole lot easier for people using and developing apps for Kubernetes. It's a great example of how the community can take advantage of the flexibility and OSS nature of Jenkins for the benefit of users.
* A few people mentioned container-based build environments, which are very much a central paradigm with modern Jenkins (and thus obviously with Jenkins Essentials and Jenkins X). See the very first page of the tutorial! https://jenkins.io/doc/pipeline/tour/hello-world/
Jenkins Essentials sounds great. I've become very good at not breaking Jenkins installations. Jenkins is a powerful tool, but it's hard to recommend when that's a skill you must acquire.
I remember when I installed the git plugin, which forced the credentials plugin to update, which caused runtime failures in communication with some other core plugin, so we reverted the install/update, which broke the whole system because the update had renamed fields in the configuration and the old version didn't understand them...
Since then, I have always updated all core plugins together in lockstep. Based on the name and description, that sounds like what Jenkins Essentials would do too. If so, that's a good sign.
Simpler, more reliable administration is exactly what Jenkins needs and you seem to have a credible way of achieving it, so I'm excited to see the results.
Looks interesting, although it seems very similar to another project, Jenkins Job Builder (https://docs.openstack.org/infra/jenkins-job-builder/), which can describe jobs as YAML or JSON. It's designed one level higher, as a meta-job creator that can generate many jobs with slight variations, with all of the config kept in version control (no manual fiddling with the Jenkins GUI, other than troubleshooting). With JJB, most jobs aren't pipelines, but traditional Groovy-based pipeline scripts can be included and run.
Also highly recommended is Jenkins Job DSL, which, despite also using Groovy like Jenkins Pipeline, takes a far different approach.
In Job DSL, you write Groovy scripts to declare your job. These scripts, in turn, write the config.xml that Jenkins uses to actually define and execute jobs.
One of the big benefits to us is that it is _not_ an alternate execution engine like Jenkins Pipeline is. It just takes over the job of writing the job XML for you, instead of editing it through the GUI.
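For a flavour of what that looks like (the job name and URL are made up), a Job DSL script is just Groovy describing an ordinary job, and the plugin writes the same config.xml you'd otherwise have produced by clicking through the GUI:

    job('nightly-build') {
        scm {
            git('https://example.com/app.git', 'master')
        }
        triggers {
            // scheduled build, spread out around 2am
            cron('H 2 * * *')
        }
        steps {
            shell('./build.sh')
        }
        publishers {
            archiveArtifacts('dist/**')
        }
    }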
We had a large, complex, manually 'versioned' job chain that did not lend itself to being re-written in Jenkins Pipeline, as it made heavy use of plugins that were not supported by Pipeline at the time.
By using Job DSL, we were able to incrementally replace the manually maintained jobs with script-generated ones. Importantly, with nothing breaking for end users! All download links, URLs, etc of the new jobs exactly matched the old ones, so it was a very painless, incremental adoption that people did not even notice!
If you've got an existing job chain you are trying to get under control, Job DSL comes highly recommended.
We've been moving away from people authoring jobdsl directly. Instead we have a custom format suited to our policies. We still use jobdsl underneath to take care of knowing the configuration details of all of the plugins we use.
We have a custom format for
- Decoupling server configuration (version, plugin versions) from build configuration. This is especially important because jobdsl regularly breaks compatibility. You can't just check your build description into your repo and live a happy life.
- Reusing a lot of build steps across our different repos. We have a lot of common steps specific to us, and we wanted to reduce our boilerplate.
I am looking into moving off of jobdsl to jjb. In theory, jjb supports generating for multiple plugin versions. As a step towards aligning two distinct build farms, we want our custom format supported on both. Unfortunately, our build farms are on different versions of plugins. I could have our tool support generating for different jobdsl versions but am instead looking at relying on jjb's ability to do this.
Having worked with both, it's also interesting to know that Job DSL is aware of jobs that have to be removed, while Job Builder is not. That was one of the reasons we chose Job DSL as our configuration method.
Besides all the special cases in normal use of Apache Groovy (e.g. `==` is defined differently than in Java), you also have to remember all the Jenkins-specific special cases (e.g. Groovy's collection methods don't work). I wouldn't call it simple or easy at all.
Are you sure you're not thinking of Jenkins _Pipeline_?
Jenkins _Pipeline_ definitely has these issues. Jenkins _Job DSL_ has always worked fine, including all the special collection method operations (grep, find, collect, each...)
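To illustrate the kind of thing that bites people in a Jenkinsfile (the registry and service names are made up):

    import com.cloudbees.groovy.cps.NonCPS  // usually auto-imported in a Jenkinsfile

    def names = ['api', 'web', 'worker']
    // Closures passed to Groovy collection methods have historically
    // misbehaved or been rejected under the CPS transform that Pipeline
    // applies to your script.
    def images = names.collect { "registry.example.com/${it}:latest" }

    // The usual workarounds are a plain for-loop, or hoisting the logic
    // into a helper annotated @NonCPS so it runs as ordinary Groovy.
    @NonCPS
    def toImages(List serviceNames) {
        serviceNames.collect { "registry.example.com/${it}:latest" }
    }

None of that is an issue in Job DSL scripts, which run as plain Groovy.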
I think it totally makes sense to do stuff like this as long as you keep a very thin layer of abstraction between your YAML declarations and Jenkins config. Like, someone familiar with Jenkins and the naming conventions should easily be able to come in and map what buttons and boxes they fill in the UI to what is declared in the YAML. I’ve worked with several build systems that try to get way too clever with this and all you end up with is huge amounts of config files that only a few people are able to understand and manage (even if there is nice documentation!). Build systems are just as important for following the Principle of Least Astonishment as codebases are.
This is just lowering themselves to the same level as every other CI tool. For sure YAML is a lower barrier to entry, but it's also a low ceiling.
YAML doesn't let you transcribe logic easily compared to any programming language, and build logic always tends to grow. Programming languages also allow you to create abstractions to remove some of the complexity from the users.
My experience with yml-based builds and deployments is basically using the yml file to tell the build system what code to run to do the build - usually shell scripts or some other scripting language that can also live in the same git repo.
"This is just lowering themselves to the same level as every other CI tool" you mean making a stable simple and repeatable interface that can be saved in source control?
I've been thinking this recently as well. Some problem spaces are definitely very amenable to being transformed into a purely declarative language like YAML but, while builds look like that at first glance, I find that turing-complete-code customization is required just a little too often for it to be worthwhile.
YAML is a way better way of doing it than the current jenkins way though.
I am so grateful for this. As a non-Java person, I found using the Groovy pipelines incredibly painful to incrementally fix on the server. Nothing like pushing 20 commits in a row and forcing a build, only to hit an odd syntax error each time.
I had the same problem. Luckily Jenkins provides an API with which you can validate your Jenkinsfile. I set up the deploy to always lint before attempting to run.
That's another problem with the existing Pipeline. There are so many slightly different ways of setting them up. They're all similar enough to make tutorials and documentation confusing, but different enough that it's a problem when you mix them up.
All I want to do is poll an SVN repository for changes on any branch. If there is a change on a branch, I want to check it out, run the build script, and archive the resulting binary on success or send an email on failure.
I spent a few hours trying to figure out how to do it in their Groovy DSLs but I eventually gave up. That was mostly easy with the old Jenkins script jobs (though I had to copy it for each branch), and I find Travis CI to be a bit limited but very straightforward, so I was kinda disappointed.
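For reference, the closest I got for a single branch was roughly the declarative shape below (the address and paths are made up); what I never figured out was a clean way to fan that out across arbitrary SVN branches:

    pipeline {
        agent any
        triggers {
            // poll the repository roughly every five minutes
            pollSCM('H/5 * * * *')
        }
        stages {
            stage('Build') {
                steps {
                    checkout scm
                    sh './build.sh'
                }
            }
        }
        post {
            success {
                archiveArtifacts artifacts: 'out/*.bin'
            }
            failure {
                mail to: 'team@example.com',
                     subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                     body: "${env.BUILD_URL}"
            }
        }
    }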
I'm happy to see they're working on making it easier to use Jenkinsfiles. There's potential there, but the UX needs improvement. This might be it.
Still have to do that to build the thing in the first place. I do not understand at all how every CI solution out there makes it entirely impossible to trivially test a pipeline. Here we are building something whose entire point is that it can run wherever, but the one place it can't run is my local machine? It's like they are doing it out of spite.
Sure that could be the case, but I think most people that set up and use CI just pick Jenkins or whatever on a whim, then we all read about horror stories and bad experiences because the solution they picked is a square peg and they're forcing it in a round hole.
Why do I need all of that bespoke complexity? My yml file is just there to tell the build system the scripts or shell commands I want to run, which are also in my git repo.
Fundamentally, a CI system for a simple application just has to gather dependencies, build a package/run automated tests and store the artifact somewhere.
A deployment system just has to copy the deployment package to a server/group of servers, and do some type of installation process based on an automatic (Dev/integration environment) or manual approval process (who ever is responsible for pushing to any other environment has to approve the release). There are a million ways to skin the cat but if your build process or deployment process is too complex, maybe it’s more of a question of the maturity of your framework or tooling.
My build process for .Net web apps is:
- nuget restore
- msbuild
- run nunit
- zip artifacts based on version.
My deployment process is
- run a CloudFormation file to configure AWS resources
- deploy code to a VM
- run one or two commands to configure IIS or install a Windows service.
- run a few AWS CLI commands to start autoscaling, reconfigure API gateway, etc.
No build servers to maintain, just an agent running on the target VM for deployment and a slightly custom Docker image for builds.
---
For Lambda functions and scripting languages it’s even simpler...
Build:
- import dependencies
- create zip file
Deployment:
- run CloudFormation YML file to deploy lambda
The CF file can be the deployment step. If I need something programmatic, I can create a custom Lambda-backed resource that runs when the CF file is run.
Hah. I have a short perl script that trivially tests individual components of gitlab pipelines (parses the yaml, extracts the script components, saves them to a temp file then executes the temp file as bash script). The code is short but a little contorted. Works well though. Will be releasing into the wild in a couple of months.
We have Jenkins connected to ECS as a build agent pool. Every team has its own Jenkins instance running, secured with AD login.
Pipelines are using Jenkinsfiles.
My only critique is that it's not fully CI-as-code, because you need to manually create the pipeline, connect it to the git repo, etc.
It may not work for you, but "organization folders" (a feature of multibranch) were designed to discover projects from Bitbucket, GitHub, etc. (i.e. a new repo appears with a Jenkinsfile, and a new project gets configured automatically). Some people have had luck with them (they don't always work how you want, but when they do, you don't need to manually create the pipeline other than dropping a Jenkinsfile into a new repo).
A few years ago, at one of my first gigs in an enterprise environment, I used Jenkins for the first time to test and build my stuff. When I needed a newer version of my compiler or specific linter, some fellow had to install that on the VM that was running Jenkins.
Later, working in a more modern environment, we started using GitLab CI. It was bliss. I specified the Docker image with my favourite tooling and my stuff got built in there. When some tooling changed, I updated my image.
At my current gig, again an enterprise, it is Jenkins everywhere. They do the most complex things with it, orchestrating entire releases, integration tests, etc. I don't know what to think of this yet.
Not sure if I can speak for the HN crowd overall. I've never used Gitlab CI, so I can't comment on how I like it. But my experience with Jenkins has been good overall.
Some thoughts:
- Jenkins is FOSS, which I like.
- If I want commercial support, I can buy it.
- Jenkins works effortlessly with Active Directory.
- Jenkins gives good group-level access control.
- I don't like the UI very much, but I can live with it.
- Jenkins is designed to work with remote build agents, which I like a lot.
I think Jenkins' greatest strength is also what some people hate about it: everything is configurable. Its flexibility is a burden if you've got simple build needs. But it's perfect if you've got weird build flows.
I do embedded Linux work. For me, that means I have to deal with weird cross-compilation issues, obscure toolchains, and lots of glue logic between different parts of the build. Jenkins gets the job done better than Travis or Circle, and lets me do it on remote build agents (which is very expensive with Bamboo). I could maybe use Buildbot, but that needs too much customization IMO.
Yes, it's a little crusty. And yes, the UI isn't as pretty as some other CI tools. But it gets the job done with a functional UI and great access control. And you can't beat the price.
The biggest thing is that Jenkins isn't a CI tool, it's a job execution tool. Whether those jobs are scheduled, manually triggered, triggered by a webhook or otherwise... that's it. It runs the job, records the output and the exit status, and makes it easy to read the log of any of those jobs, as well as handling log rotation for them. It can also manage multiple nodes, distribute those jobs across them, or isolate certain jobs to certain nodes.
All a CI server does is install dependencies, run the job, and then do something with the output. There are tons of plugins for Jenkins to handle those specialized bits of output.
If you want to use Docker images for it, you can do that. If you want to use it to run cron jobs across multiple servers over SSH but centralize the logging and error notification...you can do that. If you want to schedule scaling jobs for expected traffic...you can do that. If you want to trigger Ansible execution...you can do that.
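For a taste of the non-CI side (the node label and playbook name are made up), a scheduled maintenance job is just another pipeline:

    pipeline {
        agent { label 'ops' }
        triggers {
            // nightly run, like a centrally logged cron job
            cron('H 3 * * *')
        }
        stages {
            stage('Nightly cleanup') {
                steps {
                    sh 'ansible-playbook cleanup.yml'
                }
            }
        }
    }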
Trigger, Run, Track, Parse. Jenkins does it general purpose, for free, with flexibility to handle many different types of work and different auth tools in one place.
The UI could be better...but it's also hard to tune the UI unless you are specialized for a certain type of work.
The real question is...why bother with a special CI-only tool when you can use Jenkins for that and a whole lot more?
I'd like to find something friendlier than Jenkins, but the trade-offs with the other CIs have made them not worth it, or just plain not possible.
If you want or need to host your CI internally, a lot of the shiny new ones are off the table. Additionally, paying per user or per job for our CI is really unpalatable. We host our Jenkins on a few different nodes in our VPC and use them for WAY more than just building and deploying code. Content and database migrations, temporary stack creation and tear down, etc. We have code we build once and ship to multiple environments with different configurations and content, so we have different pipelines that run based off the commit branches, etc. We push to our private Docker, Maven, and npm registries, and auto-deploy via bastion nodes where necessary. On top of that it’s hooked in to our LDAP for auth, which is usually an ‘enterprise’ option that jacks up the price a ridiculous amount.
There’s not much out there that’s mostly batteries included that can handle what we’re doing. Where possible, it would be nice for a team or company to have a Jenkins focused person (back in ye olden days it would’ve been an SCM or build engineer position) because the UI and configuration can definitely be complex.
If you don’t need the complexity, services like BitBucket Pipelines, Travis, DeployBot, and their ilk are certainly much more friendly and likely less error prone. Jenkins definitely still has a valuable place in my book, though.
We're using it because it costs nothing and it's what everyone already knows. We're making a big architectural shift to microservices and I made a pitch for the Travis / CircleCI style workflow, but after Jenkins pipelines were discovered, that was the compromise that was made.
We've only got our toes in it now, but from what it looks like, you theoretically can use a Jenkins pipeline (with a Jenkinsfile) to get some of the benefits of those systems, but the problem is that it also lets you throw the rest away. Your Jenkinsfile can assume certain plugins are installed, assume other jobs are configured on the same Jenkins instance... basically all the things that led to Jenkins becoming what it has: a carefully configured sacred cow that must be meticulously backed up and that everyone is scared to update.
Having builds trigger on push isn't easy if you're not using GitHub or BitBucket, and having a series of pipelines that trigger off of each other is... not clean. You can certainly trigger another job as a "post" action just like you could in any other Jenkins job, but now your upstream job contains the logic for whether or not a downstream job is triggered. What if your downstream project (like a VM image) only wants builds from a certain branch? Or should hold off on new builds from a certain microservice while QA completes their testing? I guess you'll need to edit the Jenkinsfile for the upstream project (likely someone else's project) and be careful not to break it.
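Concretely, the upstream Jenkinsfile ends up owning that decision, something like this (the job and parameter names are made up):

    // At the end of the upstream (microservice) Jenkinsfile:
    if (env.BRANCH_NAME == 'master') {
        build job: 'vm-image-build',
              parameters: [string(name: 'SERVICE_VERSION', value: env.GIT_COMMIT)],
              wait: false
    }

So the branch filtering, and any "hold off while QA tests" logic, lives in a repo that usually belongs to another team.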
>> Your Jenkinsfile can assume certain plugins are installed, assume other jobs are configured on the same Jenkins
I think what worked for us in this case was using a Jenkins shared library[1]. We provide a common template for the common stacks and expose only a few configurable options. This really helps in maintaining sanity across the Jenkins env, and since you maintain the shared lib, you can control the dependencies.
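As a rough sketch (the library and step names are made up), the library exposes a step that wraps the whole template, so a service's Jenkinsfile collapses to a couple of lines:

    // vars/standardBuild.groovy in the shared library
    def call(Map config = [:]) {
        pipeline {
            agent any
            stages {
                stage('Build') {
                    steps {
                        // fall back to a default build command if none given
                        sh(config.buildCommand ?: './build.sh')
                    }
                }
            }
        }
    }

and in the consuming repo's Jenkinsfile:

    @Library('our-shared-lib') _
    standardBuild(buildCommand: 'make all')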
We use it because of legacy. It's fragile, un-updatable and virtually un-automatable. Use plugins? They'll break. Use JJB to make things repeatable? Someone has made a change in the GUI. At last count we had something like 70+ Jenkins masters (because we couldn't share slaves, because people kept on cocking up the master).
The rise of the CircleCI-style YAML interface is wonderful. The build script is there in the repo, for all to see. Yes, there is less logic, but that's a good thing. Build, test, deploy, alert if fail.
GitLab's runner is also good. (Just don't use the SaaS version, as the only thing it does well is downtime.)
A few things: 1) plugins. They routinely break, the API they provide breaks, functionality changes, or the API they rely on changes. It's a minefield.
2) bootstrapping a _secure_ Jenkins server to the point where it's able to accept jobs is a monumental faff. (GitLab runner is far, far simpler if you want uber-free; CircleCI if you don't mind paying.)
3) it's just so much _effort_
Jenkins was great compared to the field in 2010/11 (after the fork from Hudson and the name change); ever since, it's failed to move in the right direction.
It overcomplicated what is essentially a cron/at daemon with a web GUI.
> At my current gig, again an enterprise, it is Jenkins everywhere. They do the most complex things with it, orchestrating entire releases, integration tests, etc.
You just answered your own question -- people use Jenkins because it can do all of those things.
It's possible to use Jenkins to do all of those things. It's also enormously costly in terms of manpower, etc.
It's truly absurd just how much shepherding Jenkins requires once you have build slaves, etc. The pipeline stuff is heaps better, but unfortunately it doesn't really work for the workflow we're using in our shop, so... sad faces all round :(.
Yes, it is harder than it should be - there is a project underway to try to make this config and plugin pain go away: https://jenkins.io/blog/2018/04/06/jenkins-essentials/ (not sure if that name will stick; it may just be Jenkins at some point in the future). Hopefully that will help one day and be a far more efficient use of time.
I don't like it any more than you do, but I'm not aware of a competing enterprise-grade solution that has all of the necessary features and won't take a year to migrate to.
Jenkins' strength is also its weakness. It's like a Swiss Army knife that, especially combined with plugins, can do just about anything in any way. This makes it hard to find support, best practices, etc.
Anyway, you can also have your jobs run completely inside containers, but apparently they chose not to do that at your first job (or containers were simply not a thing yet). See: https://jenkins.io/doc/book/pipeline/docker/
I'm by no means a devops expert, and while I have used Jenkins before, the only times I've had to design a CI/CD system from scratch I used either Microsoft's hosted version of TFS with git (VSTS) or AWS's CodePipeline.
With AWS Codepipeline, you can also specify a custom Docker image with your build environment and all of the tools you need - it’s in fact required if you want to build on Windows with the .Net framework (not .Net Core) and it’s also orchestrated with yml files.
Ha, YAML. Whenever it's mentioned, it reminds me that a look at the spec[1] shows things can derail badly even without a large committee.
There are technologies that are built using COD - consultant-oriented development. One of the principles is that only a trained consultant should be able to configure it, and that requires knowledge impossible to find in freely available documentation. It also needs a multitude of gotchas that can't be solved using common sense. Jenkins, I believe, is one such tool. We have completely moved away from it.
I guess rather than having to deal with the agony of moving away from a sufficiently large and complex Jenkins setup, people would rather delude themselves that it's actually good software.
Just the amount of obtuse setup that thing requires is crazy, and it always ends in a mess. I have no idea why there seem to be so many people praising it in this thread. I don't fault Jenkins though; it's just old, obsolete software that should be replaced.
Jenkins has a declarative pipeline generator; it should be relatively straightforward for it to output YAML rather than the standard Groovy-based declarative pipeline DSL.