> Nothing in this diatribe argues that encryption at rest is creating a net negative, outside of it being represented as a be-all and end-all security measure. When I say encryption at rest is a scam, I’m talking about it from the eyes of the purchaser. And given that it’s their data at risk, this is the standpoint that matters.
The point is that, while it's “not creating a net negative”, is it still creating the net positive that providers claim and in some cases want you to pay for?
Significantly: there is a whole host of risks that it doesn't mitigate, that it is not intended to mitigate at all, and that people who don't know any better might assume are dealt with when things are pushed as secure “because the data is encrypted at rest”. If you read TFA you'll see that it details some of these concerns.
> I needed something that would manage all the containers without me having to ever log into the machine.
Not saying this would at all replace Harbormaster, but with DOCKER_HOST or `docker context` one can easily run docker and docker-compose commands without "ever logging in to the machine". Well, it does use SSH under the hood, but this seems to be more of a UX issue, so there you go.
Discovering the DOCKER_HOST env var (it changes which daemon socket the client talks to) has made my Docker usage much more powerful. Think "spawn a container on the machine with the bad data", à la Bryan Cantrill at Joyent.
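For anyone who hasn't played with it, a quick sketch (user and host names are made up):

    # one-off: point the local client at a remote daemon over SSH
    DOCKER_HOST=ssh://deploy@build-box docker ps

    # or keep a named context around instead of exporting the variable
    docker context create build-box --docker "host=ssh://deploy@build-box"
    docker --context build-box run --rm -it debian:bookworm bash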
Hmm, doesn't that connect your local Docker client to the remote Docker daemon? My goal isn't "don't SSH to the machine" specifically, but "don't have state on the machine that isn't tracked in a repo somewhere", and this seems like it would fail that requirement.
You could put your SSH server configuration in a repo. You could put your SSH authorization key in a repo. You could even put your private key in a repo if you really wanted.
For me, I don't define any variables via the CLI; I put them all in the docker-compose.yml or an accompanying .env file, so that deploying is a simple `docker-compose up`.
Then I can track these files via git, and deploy to remote docker hosts using docker-machine, which effectively sets the DOCKER_HOST env var.
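In practice that's just the following (repo URL and host are illustrative; docker-compose picks up the .env file sitting next to the compose file automatically):

    git clone git@example.com:me/my-stack.git && cd my-stack

    # either point docker-compose at the remote daemon directly...
    DOCKER_HOST=ssh://deploy@prod-host docker-compose up -d

    # ...or let docker-machine export the equivalent variables
    eval "$(docker-machine env prod-host)"
    docker-compose up -d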
While I haven't used it personally, there is [0] Watchtower which aims to automate updating docker containers.
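If I remember its docs correctly, the basic setup is just a container that watches the local daemon through the socket; a sketch, not something I've run myself:

    # Watchtower polls for newer images and restarts containers whose image changed
    docker run -d --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower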
The killer feature of harbormaster is watching the remote repository. Can docker-compose do that? If it can, I should just leverage that feature instead of harbormaster!
The nice thing about harbormaster here seems to be that there are ways to use the same code as a template, into which specific differences are dynamically inserted by harbormaster. I'm not aware of how you could use docker-compose (without Swarm) to accomplish this, unless you start doing a lot of bash stuff.
I also appreciate that harbormaster offers opinions on secrets management.
You run what's supposed to run the same way you would anything else. It's the same for the environment variables.
How would you track what's supposed to run and what's not for Docker? Using the `DOCKER_HOST` environment variable to connect over SSH is the exact same way.
Chef is a configuration management system. It lets you define lists of things to do called "cookbooks" (analogous to Ansible "playbooks" etc.).
To "converge" is to run something until it is stable. This terminology, I think, comes from the early configuration managemnt system CFEngine, where you write your configuration in declarative(-ish) "make it so this is true" steps, instead of imperative "perform this change" steps the way that a shell script would do. See e.g. https://www.usenix.org/legacy/publications/library/proceedin...
chef-solo is a command that executes Chef's client - the thing that actually makes configuration changes, that is to say, "converges cookbooks" - in a way that does not require a server component. The normal way of deploying Chef is that a server runs things on clients, the machines being configured, but chef-solo is appropriate for the case where there is no such distinction and there's just one machine where you wish to run Chef.
> Chef Infra is a powerful automation platform that transforms infrastructure into code. Whether you’re operating in the cloud, on-premises, or in a hybrid environment, Chef Infra automates how infrastructure is configured, deployed, and managed across your network, no matter its size.
In an imprecise nutshell: You specify what needs to exist on the target system using Chef's DSL and Chef client will converge the state of the target to the desired one.
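To make that concrete, a server-less run looks roughly like this; a minimal sketch, with hypothetical cookbook and recipe names:

    # point chef-solo at a local cookbook directory
    cat > solo.rb <<'EOF'
    cookbook_path "/var/chef/cookbooks"
    EOF

    # the node's desired run_list, i.e. which recipes to converge
    cat > node.json <<'EOF'
    { "run_list": ["recipe[my_cookbook::default]"] }
    EOF

    # converge locally, no Chef server involved
    chef-solo -c solo.rb -j node.json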
I have, and it's really good, but it needs some investment in creating packages (if they don't already exist) and has some annoyances (e.g. you can't talk to the network, to preserve determinism). It felt a bit too heavy-handed for just a few processes. We also used to use it extensively at work for all of our production, but migrated off it after various difficulties (not bugs, just things like it having its own language).
You can talk to the network, either through the escape hatch or provided fetch utilities, which tend to require checksums. But you do have to keep the result deterministic.
Agreed on it being a bit too heavy-handed, and the tooling isn't very helpful for dealing with it unless you're neck-deep into the ecosystem already.
Not (I think) the exact talk/blog post GP was thinking of - but worth watching IMNHO:
"Debugging Under Fire: Keep your Head when Systems have Lost their Mind • Bryan Cantrill • GOTO 2017"
https://youtu.be/30jNsCVLpAE
Edit: oh, here we go, I think?
> Running Aground: Debugging Docker in Production
Bryan Cantrill, 16 Jan 2018
Talk originally given at DockerCon '15, which (despite being a popular presentation and still broadly current) Docker Inc. has elected to delist.
You mention different goals (from GA) and a will to open source (RK).
The parent user seems to have found a way for you to publicize your business efficiently: a Starlark syntax on top of GitHub Actions through your runtime, maybe brought in by the OSS community, giving you ample opportunity to capture this market of developers.
People looking for better Developer UX would pay for faster processes, right?
I'm on this route myself, trying various things out at https://github.com/fenollp/reMarkable-tools
Handwriting (in and out) support is very important IMO. Also being able to draw DAGs.
I'd like an e-ink device with a high frame rate and hardware powerful enough to run some models locally, or with good enough connectivity and sensors that e.g. Computer Vision tasks can be offloaded to the user's smartphone.
Feel free to share your ideas there :) I welcome Open Source discussion!