The benefit of Docker for Home Assistant is the packaging, not the isolation. You can always run the container with host networking and privileged mode so that it can access everything it needs, the same as if it were running directly on the host.
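For example, a minimal sketch using the docker Python SDK (the image tag and restart policy are just illustrative):

    import docker  # pip install docker

    client = docker.from_env()

    # Host networking plus privileged mode gives the container the same
    # network and device access it would have running directly on the host.
    client.containers.run(
        "ghcr.io/home-assistant/home-assistant:stable",
        network_mode="host",   # no Docker network isolation
        privileged=True,       # full device access (Zigbee/Z-Wave USB sticks, etc.)
        detach=True,
        restart_policy={"Name": "unless-stopped"},
    )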
If you have some extra length, just coil it somewhere in the wall and don't bother splicing or re-terminating it. You can also use keystone jacks or couplers at both ends so you have flexibility later without re-running it through the conduit.
I used the VLAN "trick" for connecting my cable modem to my router for a few years in an old multi-floor apartment with pre-wired ethernet, but it's not ideal because the router can no longer detect the modem's link state. For example, if you unplug a cable modem and plug it back in, the link on the router would normally go down and then come back up, and when the link returns the router will attempt to fetch a new DHCP lease.
If you have a static IP, it should be fine, but this became an annoyance the couple of times the IP changed when I was living there.
This is of course not the purpose of your post, but since you're interested in this topic, I wanted to mention that you can now create memory-backed files on Linux using the memfd_create syscall without using any filesystem (or unlink), and you can also execute them without the /proc/self/fd trick by using the execveat syscall. In glibc, there is fexecve, which uses execveat or falls back to the /proc trick on older kernels.
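A minimal sketch in Python, which exposes both pieces (os.memfd_create needs Linux and Python 3.8+; /bin/echo is just a stand-in payload):

    import os

    # Create an anonymous, memory-backed file: no filesystem, no unlink needed.
    fd = os.memfd_create("demo")

    # Write an executable into it (here, a copy of /bin/echo as a stand-in).
    with open("/bin/echo", "rb") as f:
        os.write(fd, f.read())

    # Execute straight from the file descriptor, replacing this process.
    # Under the hood, glibc's fexecve uses execveat on modern kernels and
    # falls back to the /proc/self/fd trick on older ones.
    os.fexecve(fd, ["echo", "hello from a memfd"], os.environ)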
You're entitled to your opinion, of course - language is subjective, after all - but the meaning of the word "hacking" as I'm using it comes from the MIT hacking community as described by Richard Stallman [0] [1].
I agree. I was looking into how you start a child process in C++ recently and I was surprised and not at all surprised to find that the answer is still fork and execve. Ridiculous.
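For reference, the pattern you end up writing looks like this (sketched in Python, which wraps the same primitives):

    import os

    pid = os.fork()                        # clone the entire process just to...
    if pid == 0:
        try:
            os.execvp("ls", ["ls", "-l"])  # ...immediately replace the child
        except OSError:
            os._exit(127)                  # exec failed; don't fall through into parent code
    else:
        _, status = os.waitpid(pid, 0)
        print("child exited with", os.waitstatus_to_exitcode(status))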
Do you have any references to specific bugs here? We depend pretty heavily on containers, and I'd love to look into these and see whether we are impacted and whether we should carry these patches.
FWIW, on AWS all Nitro instance types can boot as UEFI if the AMI is set to use UEFI, and all arm64 instances only support UEFI. So if you stick to modern instance types, you can happily use UEFI in AWS.
GCP looks like it supports UEFI as well, and Azure Gen2 instances also use UEFI.
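For example, setting the boot mode when registering an AMI with boto3 (the names and snapshot ID are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Register an AMI that boots via UEFI on Nitro instance types.
    resp = ec2.register_image(
        Name="my-uefi-image",                                 # placeholder
        Architecture="x86_64",
        VirtualizationType="hvm",
        EnaSupport=True,
        RootDeviceName="/dev/xvda",
        BlockDeviceMappings=[{
            "DeviceName": "/dev/xvda",
            "Ebs": {"SnapshotId": "snap-0123456789abcdef0"},  # placeholder
        }],
        BootMode="uefi",                                      # the relevant knob
    )
    print(resp["ImageId"])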
memfd is a tmpfs file descriptor, but it does not use any mounted tmpfs filesystem. It works no matter what filesystems are mounted or what access you have.
It's truly great for situations where APIs refuse to take anything other than files and you don't want to worry about cleanup, e.g. loading certs from memory into a Python OpenSSL context.
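Something like this sketch (assumes the cert and key are already PEM bytes in memory; the /proc path trick is still needed here because load_cert_chain insists on a path):

    import os
    import ssl

    def context_from_memory(cert_pem: bytes, key_pem: bytes) -> ssl.SSLContext:
        """Build an SSLContext from PEM bytes without touching disk."""
        fd = os.memfd_create("certs")             # anonymous in-memory file
        os.write(fd, cert_pem + b"\n" + key_pem)  # combined cert+key PEM
        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
        ctx.load_cert_chain(f"/proc/self/fd/{fd}")
        os.close(fd)                              # contents vanish with the fd
        return ctx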
I don't think the DNS exercise would behave the same, although that probably depends on how the container was set up; Docker usually controls /etc/resolv.conf. Another exercise is "try to figure out if you're in a container or VM", so that'd definitely be different.
The question is not whether the exercises would behave identically, but whether you can test the objective in a container. For example, you can totally test, screw up, and fix DNS in a container. I would think that "try to figure out if you're in a container or VM" would be exactly the same as it is right now.
User namespaces have resulted in multiple new container breakout CVEs in the last year. Some guides actually recommend disabling user namespaces because they are still somewhat new and perilous.
You're talking about creating new user namespaces inside a container, not running a container in a user namespace. Running a container in a user namespace is strictly a security improvement over running it in the host user namespace.
Also, all container runtimes already block unshare(CLONE_NEWUSER) with seccomp by default (unless seccomp has been disabled, which I'm not sure whether Kubernetes still does).
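You can check what you actually get from inside a container with a quick ctypes sketch (Docker's default seccomp profile returns EPERM for blocked syscalls):

    import ctypes
    import errno
    import os

    CLONE_NEWUSER = 0x10000000  # from <linux/sched.h>

    libc = ctypes.CDLL(None, use_errno=True)
    if libc.unshare(CLONE_NEWUSER) == 0:
        print("now in a fresh user namespace; unshare is NOT blocked")
        os._exit(0)
    e = ctypes.get_errno()
    print("unshare(CLONE_NEWUSER) failed:", errno.errorcode.get(e, e))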
What are the ones in the last year? They provide security benefits as well. I mean, you could say the same about the Linux kernel, the Windows kernel, and pretty much anything that has ever had a CVE. You can also limit them to specific users if that is a major concern.
If you use CRI-O as the runtime along with an OpenShift container registry, it will actually verify signatures at the runtime layer. In addition to CRI-O, Podman and anything else based on containers/image support this too.
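For reference, the knob on the containers/image side is /etc/containers/policy.json; a sketch (the registry name and key path are placeholders):

    {
      "default": [{"type": "reject"}],
      "transports": {
        "docker": {
          "registry.example.com": [
            {
              "type": "signedBy",
              "keyType": "GPGKeys",
              "keyPath": "/etc/pki/containers/registry.gpg"
            }
          ]
        }
      }
    }

With a default of "reject", only images from registries with an explicit signedBy requirement (and a valid signature) will run.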
Really that just means a registry that sends back a header indicating it supports signatures and serves up the right signature endpoints. It's shocking this isn't more common.
But if you just want to check signatures at the cluster's point of entry, you can use an admission controller to block the pods from being created with unsigned images.