Not really. Go doesn't use shared libraries (yet), so you're only really recompiling the end binaries.
If you're using Go, you should already have the infrastructure to do this. Everything we have gets rebuilt for each Go release anyway, so this is no different, just a little more immediate.
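Concretely, it's on the order of this (a minimal sketch; the package tree and deploy step are placeholders for whatever you actually ship):

```
# Install the patched Go toolchain first, then:
go version          # confirm the patched toolchain is the one on PATH
go install ./...    # rebuild every binary under the current package tree
# ...then redeploy through whatever pipeline you already have.
```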
> If you're using Go, you should already have the infrastructure to do this.
Unless you're using packages in an APT or Yum repo. You instead get to wait for packages to come downstream, almost certainly not in anything approaching synchronicity. Awesome.
We got away from statically linked monoliths for a reason.
This is a problem with relying on APT for security fixes anyways.
It's been particularly bad in the past for things like nginx, where we've known about memory corruption bugs for multiple days and had clients who were incapable of patching because they only had infrastructure for downloading packages.
You should have tools in place to build, from source, anything that might be a significant security issue.
If my clients would pay me for it, I certainly would. But `package 'foo'` is a lot cheaper than that, turns out.
And while I 100% agree about the optimal case, in practical terms you're going to see a lot better turnaround from the maintainers of Debian's OpenSSL than, say, the third-party Docker repo. A sane system not reliant on static linking 'til the cows come home is not a cure-all, but it's better than the Go situation, and is a part of why I'm super not thrilled with Go-as-infrastructure. (Go-as-app, whatever, people take that problem on themselves.)
The problem isn't really with APT (or deb). Building a package from source is trivial. Anyone can do it, and it's completely automated. Building software from upstream source can be a lot more daunting.
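For the curious, "trivial" here really is a handful of commands (using `foo` as a stand-in for the package you care about):

```
apt-get source foo              # fetch the packaging + upstream source
sudo apt-get build-dep foo      # pull in its build dependencies
cd foo-*/
# (apply whatever security patch you need here, if upstream hasn't)
dpkg-buildpackage -us -uc       # build unsigned .deb packages
sudo dpkg -i ../foo_*.deb       # install the rebuilt package
```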
Feeling your pain right now. I have a service written in Go that runs on multiple machines, and this will keep me busy for the night. Worst is that I can't have a sysadmin do the whole process. Now QA has to sign off on it... It's no wonder things stay broken for a long time. It is a lot of work. Sorry for the rant. :)
That's why you write tests. Not that that always works either, but it's not like a QA team of point-and-clickers running over a manual set of those same tests will always get it right.
Imagine the nightmare of having to hunt down tens of shared-library versions, and even having to rebuild and patch them against the correct version of the language/APIs so that they continue to work.
How does that square? I mean, I have one copy of a shared library on any of my machines. I don't even necessarily know off the top of my head which of the Go tools my developers found bloggable enough to want in the stack might be impacted by this bug. (Or rather, I do, but that's because it's my job, and I'm actually good at my job. I don't have the same high hopes with regards to most other infrastructure folks I've worked with.)
>How does that square? I mean, I have one copy of a shared library on any of my machines.
That's all well and good, but that's just you. Real systems often end up with multiple copies/versions of shared libraries. It's also known as DLL hell on Windows. In Java land you could just as easily end up with 20 versions of some jars, all installed locally by Maven and needed by this or that lib/framework/etc.
>Or rather, I do, but that's because it's my job, and I'm actually good at my job.
I haven't seen that on a decently-maintained Linux system--certainly not any I'd call a "real system"--in a really long time, unless they're side-by-side packages from the OS.
What "Linux system"? We're not talking about your distro userland packages here.
We're talking about businesses deploying multiple server apps and such. Neither Java nor .NET shops for example rely on "OS packages" for their server dependencies when it comes to Jars, dlls, etc.
Yes, actually, I was talking about distro userland packages, but whatever. If you're deploying a Java application, everything should be coming out of Maven (and thus avoid any sort of shared library hell) in the first place, no?
>Yes, actually, I was talking about distro userland packages, but whatever.
Then you were off topic, since we were discussing Golang packages for deployment, which aren't userland packages either. But whatever.
>If you're deploying a Java application, everything should be coming out of Maven (and thus avoid any sort of shared library hell) in the first place, no?
You still get multiple copies of jars. And you still need to replace them. And you might still have older projects pinned to particular versions of a jar that you need to update somehow when such a far-reaching issue is discovered (and upstream might not do that at all).
> You simply need to rebuild whatever binaries you run
"Stack", not "app". That includes, say, Docker. Or nsq. Which are packaged by the operating system (because, as noted in a reply to `tptacek, paying me to build out what they get from the operating system is generally not something a client is going to want to do). And which will not be updated as conscientiously as I can expect a core library on my system to be updated.
> As an aside, do you have any idea how pretentious you come across?
Pretension, frustration, whichever. My low opinion of Go aside, it's my job to deal with the details of my clients' stacks, make a coherent experience for their developers, and deal with the system-level security concerns they don't think about. If getting annoyed that Go and its community make my life suck a little bit more in ways it need not suck is pretension, I'm really gonna be okay with that. I mean, jeez, it's 2016. "Rebuild the entire world when a mouse farts" is fragile, dangerous garbage that puts the lie to the idea of "engineering" in this profession. We should be better than this. (And Go's first iteration, lovingly called Java 1.4, was. We're going backwards.)
I can't understand why someone making a statement like that ("bloggable..") which is untrue can be upvoted while you're downvoted. Worse, he even bragged about being an excellent engineer who knows everything there is to know when he clearly doesn't in this case.
It's trivial to do when you know which binary to replace (and you have the source code available).
But it's probably not so trivial to hunt down every single old binary on every system. There's also a continuous risk that you may receive old binaries from third parties in the future.
For example, GitLab has started to ship a component written in Go in recent versions. Things like this are easy to forget about.
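One rough way to hunt them down is to scan for the version string the Go runtime embeds in every binary (a heuristic sketch, not a guarantee; the search paths are just examples):

```
# Flag binaries that carry a Go runtime version marker like "go1.5.3".
for f in /usr/local/bin/* /opt/*/bin/*; do
  v=$(strings "$f" 2>/dev/null | grep -m1 -o 'go1\.[0-9][0-9.]*')
  [ -n "$v" ] && echo "$f: built with $v"
done
```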
If you can't track the binaries you are running, you have a bigger problem than recompiling some of them. Security issues aren't a once-in-a-decade thing; if you can't easily locate which code has to be updated, you are in trouble already...
I'd say, as a general overzealous rule, that unless at least two people in the company know the code, it shouldn't be publicly accessible if its being compromised could cause you financial harm.