
If they could open it, send pictures, and dump its firmware, I'm sure there would be people around the world curious enough to reverse-engineer it, and with that knowledge available, they could (at least potentially) get some support.


Only if the user base is large enough to gather sufficient interest. I'd expect someone to reverse-engineer popular devices like the Nintendo Switch, but not bionic eyes used by a few hundred people.


Never underestimate the motivation of empathetic nerds.

Reading this has me horrified and wanting to help with the effort.


You mean something like a new FOSS project called "OpenEye"? :-)


The way I prefer is slightly different: I work for as long as I like, then I break for as long as I like.


I found that I'm a lot happier doing things "according to need" instead of forcing my human body to follow machine patterns.

However, forcing myself to take a break every hour helped a lot with chores, and with eating and hydrating properly.


If you are curious about how small an ELF binary can be, you might like the following amusing article: https://www.muppetlabs.com/~breadbox/software/tiny/teensy.ht...
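For a sense of the baseline the article starts from, here's a rough sketch (sizes are illustrative and vary by distro and architecture):

    # Minimal C program; the return value becomes the exit code.
    cat > tiny.c <<'EOF'
    int main(void) { return 42; }
    EOF

    gcc -o tiny tiny.c            # dynamic linking plus C runtime startup
    wc -c tiny                    # typically ~16 KB on x86-64 Linux
    gcc -Os -s -o tiny tiny.c     # optimize for size, strip symbols
    wc -c tiny                    # smaller, but still kilobytes of overhead
    readelf -h tiny               # the 64-byte ELF header the article's tricks revolve around

The article gets all the way down to a 45-byte executable by hand-crafting and overlapping the ELF structures themselves.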


A book about this topic which I enjoyed is "Learning Linux Binary Analysis" by Ryan O'Neill.


If I'm not mistaken, the pre-allocation of I/O ranges in PCIe bridges is needed only if you intend to hot-plug devices that were not present in the first enumeration. But in VMs the hardware is known from the start, and PCIe enumeration can assign I/O ranges only to bridges whose downstream devices actually need them... is there a reason why hot-plugging is needed in VMs?
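For what it's worth, a hedged way to check this from inside a guest (output format varies across lspci and kernel versions):

    # Bridges that reserved an I/O window show a port range; ports with
    # nothing to serve show the window as disabled.
    lspci -vv | grep -E 'PCI bridge|I/O behind bridge'

    # The kernel log also records the windows assigned during enumeration.
    dmesg | grep -i 'bridge window'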


Cloud customers love it when they can just attach stuff to their VMs without having to recreate them or even reboot them.


Isn't the cloud notoriously worse about hotplugging anything than on-prem systems are? For example, vSphere supports hot adding CPUs and RAM to VMs, but Azure doesn't.


Seems unsurprising. On Azure, if it goes wrong, the various tenants aren't all working at the same company.


> is needed only if you intend to hot-plug devices that were not present in the first enumeration

Correct. I regularly use VMs with more than 14 statically configured PCI devices using QEMU with libvirt, without having to resort to qemu:cmdline.
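For reference, the statically assigned addresses are visible in the domain XML (domain name is hypothetical):

    # Each device element carries the fixed PCI address libvirt assigned.
    virsh dumpxml myvm | grep "address type='pci'"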


Author here.

Have you got it working with PCI or PCIe? PCI devices attached to the top-level bus do not request I/O ports unless they need to, and if they do, they request only a small slice.

QEMU also allows one to put 8 static PCIe devices into a single "multifunction PCIe device", so it requests 4K I/O ports per 8 devices, giving bigger headroom. The downside, of course, is that all eight devices lose individual hotpluggability and can only be added/removed en masse.
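A sketch of what that looks like on a raw QEMU command line (disk images, IDs and the chassis number are made up):

    qemu-system-x86_64 -machine q35 -m 2G \
      -drive if=none,file=disk0.qcow2,id=disk0 \
      -drive if=none,file=disk1.qcow2,id=disk1 \
      -device pcie-root-port,id=rp0,bus=pcie.0,chassis=1 \
      -device virtio-blk-pci,drive=disk0,bus=rp0,addr=0x0.0x0,multifunction=on \
      -device virtio-blk-pci,drive=disk1,bus=rp0,addr=0x0.0x1
      # functions 0x2 through 0x7 follow the same pattern; the whole
      # slot can only be hot-added/removed as a unit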

The biggest problem is hotplug slots, each taking 4K I/O ports unless told otherwise in a way libvirt does not support, as I described in the article.
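With raw QEMU the reservation can be shrunk per port; a sketch, assuming the generic pcie-root-port device and its io-reserve property:

    # Keep the slot hotpluggable, but reserve no I/O ports behind it
    # (fine for devices that only use MMIO, like modern virtio or NVMe).
    -device pcie-root-port,id=rp1,bus=pcie.0,chassis=2,io-reserve=0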


There are 14 hardware-based PCIe devices (a mixture of NVMe drives and NIC VFs) along with various other emulated PCI devices (virtio block, serial, etc.).

I have not tried to hot [un]plug devices with this configuration. It looks as though I’m likely to be disappointed if I try. Thanks for the explanation.


Author here. As correctly guessed in other comments: cloud infrastructure.

To make public IPs and volumes hotpluggable without a guest agent running inside every VM, one has to manage them in a way the guest OS can handle through its regular hotplug mechanisms. For volumes that's PCIe storage hotplug; for public IPs it's PCIe network card hotplug.
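With libvirt the flow looks roughly like this (names are hypothetical); the guest just sees ordinary PCIe hotplug events:

    # Hot-attach a volume as a new virtio disk while the VM runs.
    virsh attach-disk worker-vm /var/lib/libvirt/images/pvc-1234.qcow2 vdb \
      --driver qemu --subdriver qcow2 --live

    # Hot-attach an extra NIC, e.g. to back a newly bound public IP.
    virsh attach-interface worker-vm --type bridge --source br-public \
      --model virtio --live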

If a VM is used as a Kubernetes worker, a couple dozen attached volumes and public IPs is not an unlikely situation.


It’s not a common use case, but I could see it being useful for sharing hardware that requires exclusive access, like GPUs/ML accelerators.

Currently, if you need GPUs, they come with the instance itself, meaning you need to boot your VM from scratch, do the work, and then shut it down to relinquish the GPU.

With hot-plug you could have continuously running VMs that attach/detach GPUs only as needed, no longer paying the overhead of a full cold boot/shutdown every time.
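A hedged sketch with libvirt and VFIO passthrough (PCI address and names are hypothetical; the GPU must already be bound to vfio-pci on the host):

    cat > gpu.xml <<'EOF'
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x65' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    EOF

    virsh attach-device ml-worker gpu.xml --live   # borrow the GPU
    # ... run the job ...
    virsh detach-device ml-worker gpu.xml --live   # give it back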


Adding emulated NVMe storage would be one.
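For example, from the QEMU monitor (a sketch; IDs and paths are made up, and rp1 must be a hotpluggable root port created at boot):

    (qemu) drive_add 0 if=none,file=/var/lib/images/scratch.qcow2,id=nvd0
    (qemu) device_add nvme,id=nvme1,drive=nvd0,serial=scratch-0,bus=rp1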


Devices passed from the host to the guest?


Hot plugging refers to adding devices to the VM while the VM is running. Passing host devices through is commonly accomplished without hotplug.


Right. I should have been clearer: you can hot-plug a host device and then pass it in. Admittedly, this is typically USB rather than PCIe. And the PCMCIA days are over...


