
The benefit of Docker for Home Assistant is the packaging, rather than the isolation. You can always run the container with host network mode and privileged mode so that it can access everything it needs, the same as if it were running directly on the host.
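For what it's worth, the incantation looks roughly like this (image name and config volume are from the Home Assistant docs; the config path is a placeholder for your own):

    docker run -d --name homeassistant \
      --network host --privileged \
      -v /path/to/your/config:/config \
      ghcr.io/home-assistant/home-assistant:stable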

You don't need a fiber splicer for this. You can order a pre-terminated fiber cable online.

On fs.com you can order fiber in custom lengths. An armored pre-terminated 350ft OS2 duplex cable costs $128: https://www.fs.com/products/20720.html Non-armored would be as cheap as $40: https://www.fs.com/products/50147.html?attribute=58053&id=17...

If you don't have a conduit, you can buy direct-burial cable. Two strands at 350 feet would be $590: https://www.lanshack.com/2-Strand-CustomLine-Corning-ALTOS-O... 6 strands at 350 feet would be $687: https://www.lanshack.com/6-Strand-CustomLine-Corning-ALTOS-O...

If you have some extra length, just coil it somewhere in the wall and don't bother splicing or re-terminating it. You can also use keystone jacks or couplers at both ends so you have flexibility later without re-running it through the conduit.


I used the VLAN "trick" for connecting my cable modem to my router for a few years in an old multi-floor apartment with pre-wired ethernet, but it's not ideal because the router can't detect the modem's link state. For example, if you unplug a cable modem and plug it back in, normally the link on the router would go down and then come back up, and when it returns the router will attempt to fetch a new DHCP lease.

If you have a static IP, it should be fine, but this became an annoyance the couple of times the IP changed when I was living there.


The way I handle this is to have a cronjob on the router that automatically resets the (virtual) interface when it can't successfully ping the modem.
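Roughly like this, assuming the modem answers on its usual management IP (192.168.100.1 for most cable modems) and the VLAN interface is eth0.100; both are placeholders for whatever your setup uses:

    # crontab on the router: bounce the VLAN interface if the modem stops responding
    */5 * * * * ping -c 3 -W 2 192.168.100.1 >/dev/null 2>&1 || { ip link set eth0.100 down; sleep 5; ip link set eth0.100 up; }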


This is of course not the purpose of your post, but since you're interested in this topic, I wanted to mention that you can now create memory-backed files on Linux using the memfd_create syscall without using any filesystem (or unlink), and you can also execute them without the /proc/self/fd trick by using the execveat syscall. In glibc, there is fexecve, which uses execveat or falls back to the /proc trick on older kernels.
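A minimal sketch in Python, which exposes both calls (3.8+ for os.memfd_create); /bin/echo just stands in for a binary you might only have as bytes in memory:

    import os

    # read an ELF into memory; in practice the bytes might never touch disk
    with open("/bin/echo", "rb") as f:
        payload = f.read()

    # anonymous in-memory file, no filesystem (or unlink) involved
    fd = os.memfd_create("demo", os.MFD_CLOEXEC)
    os.write(fd, payload)

    # fexecve uses execveat on modern kernels, /proc/self/fd on older ones
    os.fexecve(fd, ["echo", "hello from a memfd"], os.environ)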


Looks like memfd_create is from Linux 3.17 (2014), which was after I wrote the function. I sort of miss the days when simple stuff was hard.


>I sort of miss the days when simple stuff was hard.

What? What's the point?

For me, the most annoying thing is when simple stuff is hard, because why should it be?


Hacking is the art of doing things with software that don't seem possible.

In other words, it's just fun :)


Imo hacking is a different thing


You're entitled to your opinion, of course - language is subjective after all - but the meaning of the word "hacking" as I'm using it comes from the MIT hacking community as described by Richard Stallman [0] [1].

[0] https://youtu.be/D7PVrK58iGw?t=58

[1] https://stallman.org/articles/on-hacking.html


I agree. I was looking into how you start a child process in C++ recently and I was surprised and not at all surprised to find that the answer is still fork and execve. Ridiculous.


Wouldn't that be OS specific anyway? Like, Windows has no concept of forking processes but instead it uses the CreateProcess function.


Depending on how hacky you want to get, you can also just re-invent a non-relocating ELF loader


Do you have any references to specific bugs here? We depend pretty heavily on containers, and I'd love to look into these to see if we are impacted and whether we should carry these patches.


Landed in v6.2: https://lore.kernel.org/lkml/20221105005902.407297-3-longman...

Set to land in v6.4: https://lore.kernel.org/linux-mm/20230330191801.1967435-1-yo...

To see if you are affected run ‘sudo perf top’ and see if ‘blkcg_rstat_flush()’ shows up.

blkcg_rstat_flush is quite slow on some kernels, and it disables IRQs on the CPU, which blocks NIC queues on the same CPU.


FWIW, on AWS all Nitro instance types can boot with UEFI if the AMI is set to use it, and all arm64 instances only support UEFI. So if you stick to modern instance types, you can happily use UEFI in AWS.
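For example, the AWS CLI exposes this as a boot mode attribute on the AMI (the AMI ID here is a placeholder), and register-image accepts a --boot-mode uefi flag when you build your own:

    aws ec2 describe-images --image-ids ami-0123456789abcdef0 \
        --query 'Images[].BootMode'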

GCP looks like it does UEFI too, and Azure Gen2 instances also use UEFI.


memfd is a tmpfs file descriptor, but it does not use any mounted tmpfs filesystem. It works no matter what filesystems are mounted or what access you have.

It's truly great for situations where APIs refuse to take anything other than files and you don't want to worry about cleanup. Ex: loading certs from memory into a Python OpenSSL context.
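A sketch of the cert case (the PEM bytes are a placeholder, and this assumes /proc is mounted so OpenSSL can reopen the fd by path):

    import os
    import ssl

    pem_bytes = b"-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"

    # in-memory file holding the CA bundle; nothing is written to disk
    fd = os.memfd_create("ca-bundle")
    os.write(fd, pem_bytes)

    # the ssl module only takes paths, so hand it the /proc path to the memfd
    ctx = ssl.create_default_context()
    ctx.load_verify_locations(cafile=f"/proc/self/fd/{fd}")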


If the goal of the test is to debug a sad Linux server, containers are going to severely limit the ways in which the server can be sad, aren't they?


Can you give me an example of some of the severe limitations you're mentioning?


I can give you a bunch of things that can't be simulated in a container:

* Boot problems, such as: GRUB config/install errors, kernel parameters, init startup errors, blocking processes

* Many network scenarios, such as: PXE issues, multipath, load-balancing, anything requiring configuring network interface settings, firewall configuration.

* Resetting an unknown root password

* Booting directly to bash

* Filesystem mounts through fstab or systemd mounts

There's probably more I could think of, but I think that's a good list.


I don't think the DNS exercise would behave the same, although that probably depends on how the container was set up; Docker usually controls /etc/resolv.conf. Another exercise is "try to figure out if you're in a container or VM", so that'd definitely be different.


The question is not whether the exercises would behave identically, but whether you can test the objective in a container. For example, you can totally test, screw up, and fix DNS in a container. I would think that "try to figure out if you're in a container or VM" would be exactly the same as it is right now.


User namespaces have resulted in multiple new container breakout CVEs in the last year. Some guides actually recommend disabling user namespaces because they are still somewhat new and perilous.


You're talking about creating new user namespaces inside a container, not running a container in a user namespace. Running a container in a user namespace is strictly a security improvement over running it in the host user namespace.

Also, all container runtimes automatically block unshare(CLONE_NEWUSER) with seccomp already (unless they've disabled seccomp, which I'm not sure if Kubernetes still does).
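For example, podman can give each container its own user namespace with an automatically allocated UID range:

    podman run --rm --userns=auto alpine cat /proc/self/uid_map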


What are the ones in the last year? They provide security benefits as well. I mean, by that logic you could say the Linux kernel is also dangerous, and the Windows kernel, and pretty much anything that has ever had a CVE. You can also limit it to specific users if that is a major concern.


If you use cri-o as the runtime along with an OpenShift container registry, it will actually verify signatures at the runtime layer. In addition to cri-o, podman and anything else based on containers/image support this too.

Really that just means a registry that sends back a header indicating it supports signatures and serves up the right signature endpoints. It's shocking this isn't more common.
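On the client side this is driven by containers/image's policy.json, roughly like this sketch (registry name and key path are placeholders):

    {
      "default": [{"type": "reject"}],
      "transports": {
        "docker": {
          "registry.example.com": [
            {"type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/containers/registry.gpg"}
          ]
        }
      }
    }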

But if you just want to check signatures at the cluster's point of entry, you can use an admission controller to block the pods from being created with unsigned images.

