It's "funny" that for a deployment system, its own deployment follows all this crappy standards people have been running by lately.
Config in $HOME, startup with a random shell script, install is a compilation of python/bash/etc scripts that do magic, ... I mean look at this:
https://github.com/spinnaker/spinnaker/blob/master/gradlew#L... (or actually, read the whole script, be scared)
Software nowadays... a bunch of shell scripts with hacks all over, which few actually know how to write :/ (writing bash scripts well does take quite a bit of knowledge)
/rant over, send me your downvotes.
I'm increasingly frustrated by discovering major projects where this sort of "spaghetti code" is considered acceptable. Few of them treat it as a priority to remediate, or even just document, the mess. Of particular note is the lineage of build-system scripts first used by Gentoo, then mutated by the chromeos team, and finally adopted and dramatically simplified by the coreos team, who forked off from chromeos entirely and are cleaning up the mess left by the chromeos team hacking away on those bash scripts. There are dozens of places where simple hashbang python scripts would have cleaned up hundreds of lines of bash boilerplate.
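To make that concrete, here's roughly the kind of thing I mean. This is a made-up sketch (flag names and all), not code from any of those repos:

    #!/usr/bin/env python3
    # Hypothetical sketch -- not taken from the Gentoo/chromeos/coreos scripts.
    # A couple dozen lines that replace the usual bash boilerplate: argument
    # parsing, "is the tool installed" checks, and running a command with
    # real error handling.
    import argparse
    import shutil
    import subprocess
    import sys

    def main():
        parser = argparse.ArgumentParser(description="build one target")
        parser.add_argument("--board", required=True)        # made-up flags
        parser.add_argument("--jobs", type=int, default=4)
        args = parser.parse_args()

        # In bash this is usually a pile of `command -v` checks and exits.
        for tool in ("git", "make"):
            if shutil.which(tool) is None:
                sys.exit("missing required tool: " + tool)

        # check=True gives you the `set -e` behaviour without the foot-guns.
        subprocess.run(["make", "-j", str(args.jobs), "BOARD=" + args.board],
                       check=True)

    if __name__ == "__main__":
        main()

Argument parsing, tool checks and error handling all come more or less for free, where the bash equivalent is usually a screenful of boilerplate.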
I'm incredibly frustrated by the unwillingness to decide that the outcome is what matters and that the build scripts should be a testable part of the release process. A growing number of places I've found have instead wound up treating the build scripts as part of their release tests, and therefore as something that should change as infrequently as possible.
Not even going to start talking about programs that don't have standalone build/install instructions because they always generate builds inside some kind of harness, like a Debian or RPM source package.
Lines 140-149... I have mixed feelings about that. On one hand, the software developer in me keeps screaming "DRY! DRY! DRY!". On the other hand, I find it really concise and understandable. I mean, can you look at this code and NOT know what it does? Now try to make the code better: do you still get the same effect? Probably not.
(disclaimer: I am arguing about this specific case, not about such code smells in general)
UPDATE: ...and if I missed something besides that the code is repeated, please correct me. Bash is not my strong point.
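To spell out the tradeoff, here's a toy illustration in python (since bash is not my strong point); it is not the actual gradlew lines, and the variable names are made up:

    import os
    import sys

    def check_repeated():
        # Repeated version: wordy, but every line can be read on its own.
        if os.environ.get("JAVA_HOME") is None:
            sys.exit("JAVA_HOME is not set")
        if os.environ.get("GRADLE_HOME") is None:
            sys.exit("GRADLE_HOME is not set")
        if os.environ.get("ANDROID_HOME") is None:
            sys.exit("ANDROID_HOME is not set")

    def check_dry():
        # DRY version: shorter, but you have to unpack the loop before you
        # know exactly what gets checked.
        for var in ("JAVA_HOME", "GRADLE_HOME", "ANDROID_HOME"):
            if os.environ.get(var) is None:
                sys.exit(var + " is not set")

Both do the same thing; the question is just which one a stranger can read faster.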
This is not the way it is deployed inside of Netflix. The scripts were a collaboration between Kenzan and Google to make it easy to try out and install Spinnaker. There is also a docker-compose configuration to make it easy to try. Internally, we deploy each microservice to a separate AWS instance that is heavily monitored.
Anyone know why this wasn't published under github.com/Netflix, as the rest of the Netflix OSS tools are, but rather in a new account at github.com/spinnaker? They've also gone to the trouble of doing more PR and setting up spinnaker.io for this.
I'm interested in what this says about Netflix's plans to stay on AWS, move off of it, or simply diversify their infrastructure. Maybe we'll see a Chaos (whatever's bigger than Kong) that models knocking out an entire provider: i.e., they don't have to rely on Amazon's WAN anymore because they also have Cloud Platform/Azure to supply things.
Indeed. It's not a good sign that Netflix weren't aware of this very high-profile long-standing project with the same name. It suggests that they aren't much in touch with the academic CS community.
Should names be globally unique? Spinnaker Software [0] was around in the early 1980s and can claim the throne for longest use of the name in the computer space.
I just think it's unlikely they would have chosen the name if they were aware of Manchester's project, which means they didn't know about it, which means they don't read much about CS research.
We had looked at Asgard, but didn't use it because it used dedicated load balancers (e.g. separate EC2 instances) instead of ELBs (IIRC).
Our deployments were already using ELBs and we wanted to stay with that.
Does anyone know if this has changed with Spinnaker?
I would not be surprised if Spinnaker's canary/etc. features require finer-grained control over the traffic than an ELB with just coarse-grained "add/remove instance" gives you, so it probably makes sense.
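A rough sketch of what I mean by finer-grained; this isn't how Spinnaker or ELB actually work internally, just the idea, with made-up instance names:

    import random

    BASELINE = ["prod-1", "prod-2", "prod-3", "prod-4"]  # made-up instances
    CANARY = ["canary-1"]

    def elb_style_pick():
        # Register the canary alongside the fleet and it immediately takes
        # roughly 1/(N+1) of traffic -- 20% here -- with no way to dial it down.
        return random.choice(BASELINE + CANARY)

    def weighted_pick(canary_weight=0.01):
        # Weighted routing: send an explicit 1% to the canary no matter how
        # many baseline instances there are.
        if random.random() < canary_weight:
            return random.choice(CANARY)
        return random.choice(BASELINE)

With only add/remove instance to play with, the smallest canary slice you can get is bounded by your fleet size; weighted routing lets you dial it to whatever percentage you want.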
Curious how this compares to the Jenkins Workflow plugin suite. At first glance, Spinnaker seems to be simpler and has a prettier UI. Workflow has a lot more customization possible, with the crappy Jenkins UI to go along with it. I might look at Spinnaker anyway, just to keep from gouging my eyes out on the Jenkins "UI".
I have mixed feelings about this. I mean, how do you Deliver Spinnaker itself? The project is fucking complex; if something goes wrong you are pretty much done.
My reading is that this project aims to do more than just OpsWorks. It basically manages your deployment and release cycle, including releasing your infrastructure changes, if you make it so.
You can basically do all of this in Jenkins today, but you don't have the fancy GUI; that's my impression so far.
A replacement for Jenkins as the CI Pipeline management tool is my understanding.
It now makes sense why they stepped away from the Job DSL Jenkins plugin that they helped create a while back (though there's still support for Jenkins steps in there which'll be a great help with porting existing pipelines).
JobDSL was more along the lines of a templating engine with Groovy customization that tends to break Jenkins masters. The Workflow plugin would be more akin to Spinnaker than JobDSL.
I'm not saying it's a like for like replacement, I'm saying they stopped working on JobDSL because they were working on Spinnaker.
This feels like a Jenkins replacement rather than a Jenkins plugin replacement, or at least something that sits atop Jenkins in driving your build and deployment activities (it's got support to call and watch Jenkins jobs, so it's sensible to assume they use both).
I'm interested to hear that JobDSL is painful for Jenkins masters; do you know what the pain points are?
>I'm interested to hear that JobDSL is painful for Jenkins masters; do you know what the pain points are?
In a multi-tenant environment that we run using Enterprise Jenkins from CloudBees, the default option for JobDSL is to create jobs at the root level, which is problematic if you don't allow teams at that level. We've also had issues with jobs just getting "stuck", which even a restart of the master doesn't clear out. Plus, the fact that it encourages bad behavior among teams by allowing them to create thousands of jobs that never get cleaned up is just another reason we've banned it from other installations we've set up. Jenkins itself is pretty terrible, and even the proposed 2.0 changes don't appear to do anything to change things for the better. If Spinnaker can replace Jenkins, more power to them. But I don't think that's the initial aim.
Regarding your other comment about "something that sits atop Jenkins in driving your build and deployment activities": that's just about exactly what the Workflow plugin ecosystem does. However, you're still stuck with Jenkins and its baggage.
JobDSL came from an engineer who collaborated with some others at a conference. It's not anything we own or drive the development of. It's used in some places internally but is not pervasive. The Spinnaker guys were not responsible for JobDSL, totally unrelated.
Yeah, sorry, perhaps I was too broad. Not OpsWorks specifically, but I'm wondering if it provides anything AWS services in general don't, like CodePipeline/CodeDeploy. Think I just need to read a bit more or try it out.
It's "funny" that for a deployment system, its own deployment follows all this crappy standards people have been running by lately.
Config in $HOME, startup with a random shell script, install is a compilation of python/bash/etc scripts that do magic, ... I mean look at this: https://github.com/spinnaker/spinnaker/blob/master/gradlew#L... (or actually, read the whole script, be scared)
Software nowadays.. a bunch of shell scripts with hacks all over which few actually knows how to write :/ (writing bash scripts well does take quite a bit of knowledge)
/rant over, send me your downvotes.