> Unlike some other CI tools, pipelines in Vespene are easily and graphically configured, and there is no custom DSL (“Domain Specific Language”) to learn and debug.
Does this mean that the CI config cannot be checked into some sort of version control? That was one of my biggest issues with Jenkins. Someone would change something somewhere and suddenly my builds would fail, and even if I tracked down what setting was causing the problem, I had no idea why it was changed, who changed it, what would happen if I put it back, or even what its previous value was.
Additionally, it made duplicating a given workflow an extremely tedious, manual process.
Vespene is pretty new so a lot of things are up for grabs. Pipelines were implemented in about a day, so the potential to make rapid upgrades is pretty big.
This is a JSON file (vespene.json), and it can define what pipeline a project is part of and what stage of that pipeline it is in.
So it's like "this project is part of the analytics pipeline, and it is part of the deployment stage".
But you still have to create the analytics pipeline in the GUI right now. All that means is saying that the analytics pipeline has the following stages, in order: A, B, C, and then D.
I think it's quite possible to have the definition OF the analytics pipeline in the vespene.json too, but it's a bit of a question of which one wins if there are conflicting definitions.
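To make that concrete, the project-side file might look something like this (a sketch based on the description above; the exact key names are spelled out in the Vespene docs, not guaranteed here):

```json
{
    "pipeline": "analytics",
    "stage": "deployment"
}
```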
If you want to stop by talk.vespene.io, this would be a great thing to bounce ideas around on!
Yes, and those are great when the people managing the Jenkins server are willing to install those plugins and get everyone to rewrite their stuff to use them. They did eventually install that at $WORK, but they have a bunch of configuration for it for various libraries, which they make backwards-incompatible changes to without warning.
Fortunately, we're mostly moving to GitLab CI which doesn't have most of this bullshit.
At an earlier job we installed a VC history plugin (I don't know which one) to at least have basic history of configuration changes, but it had several problems:
- It would often get into an unrecoverable state, and we would have to SSH into the box to clean things up to allow it to continue (and in the meantime we would lose all history).
- Trivial automated changes from another plugin would flood the log. Not this plugin's fault, of course, but Jenkins is unusable without lots of plugins, most developed in isolation from others.
Everyone I've talked to about this prefers the GitLab model of telling the server everything relevant to your build at build time. Where Jenkins would need copies of jobs or synchronized configuration updates, GitLab just assumes that each commit has the job configuration it needs.
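As a concrete point of comparison, a minimal .gitlab-ci.yml checked in next to the code is all GitLab needs, so every commit carries its own job configuration:

```yaml
# .gitlab-ci.yml - lives in the repo, so the job definition is
# versioned together with the code it builds.
build:
  stage: build
  script:
    - make go
```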
> When organizations start to have too many projects, they need an easy way to share values and code snippets between those projects. To support this we have Jinja2 templating of build scripts
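To make the quoted feature concrete, a templated build script looks roughly like this (a sketch only; the variable and snippet names are invented, and the real syntax is in the Vespene docs):

```sh
#!/bin/bash
# The Jinja2 placeholders below are filled in by the CI server before
# the script runs; every name here is illustrative, not real syntax.
{{ security_team_preamble }}
make build VERSION="{{ version }}"
make deploy TARGET="{{ deploy_target }}"
```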
I would implore everyone implementing CI/CD to do the opposite. If you have a small project, maybe use a build script to get it off the ground quickly. If you have a large organization with complex, disparate tech projects, do not use build scripts.
Use CM or orchestration tools. Use pre-rolled containers/images. Use configuration options for existing solutions/plugins/deployment tools. Write only open source, composable extensions/modules/plugins, with YAML/JSON/XML configs to control them.
The open source bit is actually crucial. By making all your build/test/deploy plugins open source, you force yourself not to include proprietary information, which is always a dark pattern. Hard-coding IPs/hostnames, credentials, product names, regions, etc. just makes your work less composable/reusable. Open source all your pipeline code except for your configs. Then everyone in your business can find everyone else's work simply by going to GitHub and looking up a tool. Because they won't be scripts, they can be picked up and used immediately by any team, and won't have to be forked by every team that needs a modified build script.
To do all this properly requires discipline and training and research and time, which big companies have. The point is not to be fast, but to be reliable, to prevent muda from sapping your productivity.
Compiling C code with puppet or whatever seems a little weird to me, to be honest :)
If you want to keep your data externally, there are some good options for this. I'd read the docs about variable plugins, which can easily source variables from things like etcd or Consul.
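For a rough idea of what such a plugin amounts to, here's a hypothetical sketch (not Vespene's actual plugin interface - check the plugin docs for the real one):

```python
# Hypothetical variable-plugin sketch: pull a value out of Consul's KV
# store and return it as a build variable. Only the Consul HTTP endpoint
# (GET /v1/kv/<key>?raw) is real; the class and method names are made up.
import requests

class ConsulVariableSource:

    def __init__(self, consul_url="http://localhost:8500"):
        self.consul_url = consul_url

    def compute(self, project_name):
        """Return a dict of extra variables for this project's builds."""
        key = "builds/%s/settings" % project_name
        response = requests.get(
            "%s/v1/kv/%s" % (self.consul_url, key),
            params={"raw": "true"}, timeout=5)
        response.raise_for_status()
        return {"consul_settings": response.text}
```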
A script for a given repo could be nothing more than "make go", and keeping that in source control is actually a pretty good practice.
Still, you need something to build your code, and to have a good place to see the status.
Ultimately, code still needs to be built, and orgs do want to avoid images they cannot easily recreate. That's the role of a build system and making sure the process to create those artifacts is in source control.
I should also mention that your build script CAN be sourced from source control.
In your .vespene file, just say "script: <path>".
However, the variables in that file are evaluated with Jinja2, so for instance your security team could set up a snippet everybody could use, or your feature flags could be defined from there.
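For example (hedging on the exact filename and keys, which the docs spell out; the path here is made up):

```json
{
    "script": "ci/build.sh"
}
```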
Another cool feature of Vespene is that in each project's build root, all those variables appear in a vespene.json file, so it can be a good tool for launching all kinds of automation from a web console where you want to record results.
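So anything the build launches can pick those up trivially - for instance (the "feature_flags" key is invented for illustration):

```python
# Read the variables Vespene writes into the build root as vespene.json.
import json

with open("vespene.json") as f:
    variables = json.load(f)
print(variables.get("feature_flags"))
```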
I'd strongly recommend adding the ability to implement pipelines as code - i.e., you check the CI/CD configuration into the repo (or another repo containing CI/CD configurations). Having to configure pipelines manually is nasty, and build failures are often caused by misconfiguration of the pipeline. Having this checked in allows easy versioning.
I also like the idea of pipelines as code. I'd love to see it go a step further and become pipelines as testable code. I've yet to see a pipeline of non-trivial complexity that stays easy to reason about. I predict the next wave of CI/CD (or pipeline anything) will make pipeline testing & sanity checking simple as heck. If this already exists, please tell me about it, because I don't see people using it ;)
How does this compare to something like Buildbot (https://buildbot.net/)? I share a lot of your frustrations with Jenkins, and this is where we landed.
I don't know, I'm really not trying to make one of those cliched checkbox X vs Y type grids, but when I started building Vespene I was most concerned about being able to make builds more consistent when you had hundreds of microservices developed by a lot of different teams. This is why I did http://docs.vespene.io/variables.html#snippets
I also saw a lot of people trying to do ops-style stuff from tools like Jenkins, which is why Vespene has built-in SSH automation so it can hold on to SSH keys and use them on your behalf, and has some really cool built-in RBAC so you can decide who can run what, which can be different from who can edit what.
Mostly I just want to build an architecture that can go interesting places - what is here today I think is usable, but it's just a starting point. My thinking is that if you make an architecture that is really pluggable, and the code is easy to read and add to, it can go to some really neat places.
I'd just recommend taking a look at the various chapters of http://docs.vespene.io/ to get a feel for features, and if you like it, spin up a copy using the setup instructions and see what you think.
Snippet support is really missing from a lot of platforms. I use GitLab to store parts of our builds in one location instead of duplicating build code in every repo (and we have lots of them) for stuff like building RPMs, where they all use a common model. Making this first-class is a good move.
Hello, Community Advocate from GitLab here. Thanks for sharing your solution with other users - we are happy to hear how you use GitLab to make your work more convenient.
Configuration management is best with operating system packages, so in creating "Ansible", not only did you completely miss the ball, you created a monster. People now hack ad hoc YAML files instead of designing clean OS packages to manipulate the configuration.
It's fine if you don't like it - there are things I don't like about it, but ultimately single packages can't express multi-node configurations very easily. We need something.
While I "grew up" as it were, believing in RPM, we live in a very post-distro, multi-language kind of world, and there's a need for things to glue that together. How many times have I tried to campaign against "wget tarball" as a deployment mechanism, I don't know. It's rough and yes, there's a lack of discipline in ops that needs to improve.
Immutable systems are a VERY interesting way to solve that, but they don't work for certain stateful things, and you always need something to deploy the undercloud.
I'd encourage you to try to build your own experimental project to try to find different ways to do it, as this is the only real way that technology ever gets ahead.
Funny you should mention RPMs, since that's one of the packaging formats we perform configuration management in.
I have my own configuration management framework which can add and remove configuration in the files that each package brings along. It's written in AWK. No special knowledge is required to use it - configuration packages deliver just excerpts of their configuration files. In addition to being able to template them with regular shell variables, self-assembly is supported as well.
How is that different from what Puppet, Chef, CFEngine, or any number of other configuration management tools do [and did long before Ansible]? How do you propose to have a package that requires dynamic configuration supplied by the system (something from DHCP, for example)? Are you conflating package management and configuration management? I'm genuinely curious.
It's not any different; they all suck. Read my reply above to the author on how it was solved cleanly without needing to hack anything: just deliver a config excerpt via an OS package and it works. It's not open sourced yet, mainly due to lack of spare time to spend on computers and my chronic exhaustion.
The first proof-of-concept prototype was developed in 2007, based on insights gained from packaging and system engineering at two different companies, one a software company and the other a very large financial institution.
The refined framework (oh how I hate that word now, thanks to web developers!) went into production in 2009 on Solaris 10, depending on advanced AT&T SVR4 packaging features. It was subsequently ported to CentOS / RHEL RPM and went into production there around 2011. It's been the cornerstone of configuration management for the entire infrastructure, particularly production, ever since.
Ugh. Conveying tone on the internet... I mean it sincerely: I do wish you good luck. Please post about your project on HN when you release it - I'd love to see it. I think it's deceptively hard, but config/immutable state are fascinating areas.
I think we owe Jenkins a TON of credit for being a free solution that has a lot of great traction.
I'm used to seeing pages of checkboxes and can't stand how obscure some of the configuration is, but mostly I wanted to write something that could handle configuration differences between hundreds of projects, so that is why there are things like Snippets in Vespene.
Plus, I wanted to make something that was a little better for ops use cases (so people don't also have to go pick up something like Rundeck), so that's why there are things like the SSH integration and http://docs.vespene.io/launch_questions.html
It's more about capabilities and future capabilities than the frustrations IMHO.
Though I do have a bunch of friends who are frustrated with plugin compatibility, plugin hunting (we're doing more of a "batteries included" approach like I ran with Ansible - just with far fewer modules), and stuff like that.
I was also able to add in some stuff like container build isolation really easily, and that's all included stock.
So back around 2003 I was working at Adaptec (an IBM spinoff) and we were trying to put together a build system. I think I ended up front-ending it with DamageControl, which was a Ruby port of CruiseControl, except I had to hand-code a fake "SCM" module to run timed (versus commit-based) builds. Our builds took like 8 hours! At that time, I really wanted to help people with build environments, but never really did it.
Then, what sort of happened is I kept running into overly complex Jenkins installs. The typical miles of config checkboxes was overwhelming for me, but a larger problem was that organizations would have like 200 microservices projects, and all the build scripts would be slightly different.
The idea was then to take the templating system from Cobbler - which allows lots of variables to be merged in at various levels - plus Jinja2, and allow reuse of build scripts through variables and snippets.
There was some other frustration with Jenkins - the fact that it was a ginormous codebase and still not database-backed, and that you had to hunt for good plugins.
Ansible was pretty successful at getting everybody together to contribute to common modules (it maybe took on too many, though) and making sure you didn't have to go on too much of a plugin hunt. So could we possibly start a "batteries included" build system?
Finally, I wanted to try merging the functionality of a build system with something like Rundeck, so you don't need two tools. I also found Rundeck complicated to set up, and if you want to run some self-service automation, most people do that within their build system anyway. So I figured I could add some SSH and Q&A interactivity in there to easily get that going.
My other idea was to write a VM/container controller, and while I think there still needs to be a simple one as Kubernetes is getting to be a beast, that's too much work to bite off without a lot of help :)
Honestly the main thing is I just wanted to work with a lot of the same open source folks again (and a bunch of new ones), so I'm really looking forward to that!
Maybe it's just me, but I always have trouble figuring out what apps are by reading the documentation. I'd usually understand more if there was an architecture diagram, which also clues me in that I need to do one for Vespene :)
One other reason I didn't build something is I don't have a current need to use something like that, so I'm probably not the one that should be developing it. There's a lot of datacenter use cases I'm ultimately not super familiar with.
I wanted another Science Fiction reference to something I loved.
Vespene gas is very important in build order.
Probably my worst name was Cobbler (the PXE server), for those that are into cockney. It was originally named "Shoe", and the idea was that a cobbler makes boots.
I'm pretty sure it's going to happen, or at least having Docker build files where you can run your own (on the workers, you'd want to install what you wanted anyway). I've had a couple of conversations with folks wanting to do that, and it seems like it would be pretty easy - just making the entry point supervisor vs. systemd.
This looks really great to me, and I can't wait to try it. I've been using Buildbot as an extremely lightweight CI/CD server that basically just triggers Ansible scripts on git changes. Not at all the way Buildbot was designed to work, but very quick and easy to manage.
This looks like a tool that could be a lot closer to what I want to do and designed to do it that way.
Does every machine running a worker have to be a full node with Django? Workers in a complex CI setup can often be e.g. raspberry pi machines, iOS emulators, Windows machines without admin access deep inside a firewall that make outgoing connections but not incoming, etc.
So, you might consider a protocol for builders to report their results (e.g. over SSH) without being a full node, if that doesn't exist. Looks good!
Any node would typically run the webserver because it's not that heavy, but it doesn't technically have to - it could just run one worker process. Right now the workers do need database access, and I suspect that will stay that way for the short term.
Windows support needs to happen, but I'm not sure when. (Also a good topic for list discussion.)
I'm a bit curious about your raspberry pi use case - maybe a good topic for the forum? http://talk.vespene.io/
How do you manage normalizing webhook events from each source control provider, i.e. GitHub, GitLab, BitBucket, etc?
I have been interested in building a general-purpose CI/CD tool, but this part seems like the most "grunt work". Is there an open source library that standardizes webhooks? Or would you be interested in publishing that portion as a library?
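For a sense of scope, the grunt work mostly amounts to mapping each provider's payload onto one common shape - something like this (field names follow the providers' public webhook docs, but treat it as an unvetted illustration, not a library):

```python
# Normalize a push webhook from GitHub, GitLab, or Bitbucket Cloud into
# one (repo_url, branch, commit_sha) tuple. Field names are taken from
# each provider's public webhook documentation.
def normalize_push_event(headers, payload):
    if headers.get("X-GitHub-Event") == "push":
        return (payload["repository"]["clone_url"],
                payload["ref"].rsplit("/", 1)[-1],   # refs/heads/<branch>
                payload["after"])
    if headers.get("X-Gitlab-Event") == "Push Hook":
        return (payload["project"]["git_http_url"],
                payload["ref"].rsplit("/", 1)[-1],
                payload["after"])
    if headers.get("X-Event-Key", "").startswith("repo:push"):
        change = payload["push"]["changes"][0]["new"]  # Bitbucket Cloud
        return (payload["repository"]["links"]["html"]["href"],
                change["name"],
                change["target"]["hash"])
    return None  # not a push event we recognize
```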
Cool project! It strikes me that Django might not be ideal for this project - have you thought about that? This logic [1] looks shoehorned into the framework.
While you're welcome to that opinion, Django has been freaking awesome. This whole thing got written in 3 months, not even working full time. I wrote pipelines in like a day.
It seems like you're saying I should have used a many-to-many there rather than 7 FKs. Probably, but shrug... it can be fixed later if it ever becomes a problem.
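For anyone following along, the trade-off being discussed looks roughly like this (hypothetical model names, not Vespene's actual schema):

```python
# Option A: one nullable FK per pipeline slot - simple, but capped at a
# fixed number of stages. Option B: a many-to-many with an explicit
# ordering column, which allows any number of stages.
from django.db import models

class Stage(models.Model):
    name = models.CharField(max_length=100)

class Pipeline(models.Model):
    name = models.CharField(max_length=100)
    # Option A (the "7 FKs" pattern):
    stage1 = models.ForeignKey(Stage, null=True, blank=True,
                               related_name="+", on_delete=models.SET_NULL)
    stage2 = models.ForeignKey(Stage, null=True, blank=True,
                               related_name="+", on_delete=models.SET_NULL)
    # ... up to stage7.
    # Option B:
    stages = models.ManyToManyField(Stage, through="PipelineStage",
                                    related_name="pipelines")

class PipelineStage(models.Model):
    pipeline = models.ForeignKey(Pipeline, on_delete=models.CASCADE)
    stage = models.ForeignKey(Stage, on_delete=models.CASCADE)
    order = models.PositiveIntegerField()

    class Meta:
        ordering = ["order"]
```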
This looks cool. Unfortunately it is Commons Clause licensed and therefore not Open Source. My organization would never consider a Commons Clause licensed tool.
The Anti-Commons Clause. I agree, this is a non-starter. No sane company would introduce themselves to the liabilities this presents and no sane open source project would utilize a nonfree CI.
There really aren't any liabilities any more so than in violating the GPL, and people have been over that for eons. Don't sell it without joining the partner program, more or less. Rates are posted, it's cheap, and helps support the upstream.
At least, I don't think I'm evil. Some people might not know me, but I don't feel evil :)
I'd argue the definition of what is open source is up for debate, but I had some reasons for this and thought about it for weeks.
I hope people know my history and how I've run projects in the past.
But what this does for me is keep me from releasing anything open core. With Ansible, Ansible itself was open source but the GUI was not, and we had to hold something back.
The traditional business models for OSS are to make a support business, which somewhat encourages making buggy or hard-to-install software, or a consulting business (IMHO, same), and all of those detract from building a product, collectively, with some of your favorite people on the internet.
So basically Vespene is going to be like a pseudo-foundation.
The structure for this is described here - http://docs.vespene.io/partnership.html - which keeps consulting and support totally free for small shops, while encouraging larger ones that could potentially make millions from Vespene to give back a very small amount.
What normally happens is large companies close six figure contracts supporting open source, and they don't give back a dime, and my goal here is to essentially make a developer's salary off of this project by having those who make more give back a very small amount.
This seems pretty fair to me, but I also recognize that some people don't agree with it.
That all being said, it's not a "no-commercial" clause in any way, and still free for small consultancies, so I'm not anticipating any major problems.
Most of the time, software gets installed in a place of business for that business.
It's not something I've considered lightly, but this keeps MORE software open, and for a small one-person shop, I think that's more noble than trying to hold some code back and not release it at all.
Some people just want 100% free, gimme gimme, etc - and fine - you're entitled to pick one of those things. That's ok.
I personally view this a little more pragmatically, through a lens of fairness, and have also taken steps (in the CLA, etc.) to guarantee that trust is never going to be abused.
For instance, the license reverts to pure Apache if Vespene were ever to change hands.
On the flip side, I can't get screwed over by IBM or Amazon while I'm trying to crank out free software for everyone. That seems fair too, given the time investment that running an open source community and working on the code takes.
"Open Source" has always had a strong definition. I respect your work and your reasons for choosing a non-open source license. I actually read about the partnership program on your site. I think it is an interesting business model.
If you are interested, there are two big reasons why my organization would not consider adopting it:
1) We are culturally and organizationally aligned with the open source movement.
2) (More widely applicable and practical) Risk management. If IBM takes Ansible in a direction that is not aligned with our needs, we can fork it. More importantly, we can pay someone else to fork it, or sponsor a community of developers who want to fork it. Those developers would be under no obligation to ask IBM for a Partnership agreement to be able to support the fork.
You stated that "the license reverts to pure Apache if Vespene were ever to change hands". I did not see those terms, and that is definitely something to consider, but even with that, your organization may still make decisions that aren't aligned with a subset of the community's interests.
As an aside, thank you very much for creating Ansible, it is a fine product! Good luck with Vespene, I hope you find great success with it.
You have a lot to say here, but I'd like to respond to just this:
>I'd argue the definition of what is open source is up for debate
It's not up for debate. It hasn't been up for debate for years. There are two generally accepted definitions of FOSS (addressing the F and the OS respectively): the Free Software Definition and the Open Source Definition.
No one disagrees with these who isn't trying to sow discontent in the open source community. Disagreeing with the OSD or FSD is akin to denying climate change.
> No one disagrees with these who isn't trying to sow discontent in the open source community.
No. Everybody who disagrees with these has been driven from your ideological bubble. "Open source" has become a general term that is part of the natural language; nobody has the right to demand everybody use their preferred definition. Many people who use the term will never have visited gnu.org or opensource.org. Many others will have, but will have concluded that the strictness of those definitions is silly. It seems quite reasonable to disagree with a definition of "open source" that makes it practically impossible to make a living off of your work. If you want to have a debate on the merits, fine, but running around telling people they're using English wrong because they disagree with you is just foolish.
No, it hasn't. This is a lie which serves the interests of those who would harm open source. This language is important. You can't take the argument of "every word means what I want it to". The burden is on you to be understood, and by misusing terminology like "open source" you are either ignorant or deceitful. I wouldn't expand the definition of decongestant to include adderall because the former sells better and I'm a pharmacy that needs to get rid of my stock.
You might disagree with the principles of the definition, or think that the definition isn't useful. But that doesn't make it any less of the definition. If you want to do something different, you need to call it something else or you are lying to people.
> You might disagree with the principles of the definition, or think that the definition isn't useful. But that doesn't make it any less of the definition. If you want to do something different, you need to call it something else or you are lying to people.
This argument can be applied to your definition just as well as it can to my definition. Which underlines the pointlessness of arguing over the definition of "open source", and therefore the pointlessness of telling somebody that their thing is "not open source".
Not so. I'm just saying that it's valid to disagree on the definition of a thing, and that when that happens you need to have debate about the real underlying disagreement, rather than making a pointless and foolish declaration of your correctness by your definition.
Although I'm not sure why I bothered typing that, given the current readings I'm getting on my troll detector.
Thanks so much. I've had this conversation with a lot of founders and we all feel we are in a race to the bottom of sorts where the power dynamic is all with the big corps.
We don't like open core but proprietary software is the best way to make money, so I'm trying to push that just a tiny amount.
There's a fair amount of that available in http://docs.vespene.io/importing.html ... if you have more ideas can you stop by https://talk.vespene.io/c/ideas ? ... I don't want to make the syntax more complex than the YAML that is there now, but I think we can add other fields if there is something you want or a capability that might be missing.