Not to mention the onboard LAN hangs off the USB connection and is capped at 100 Mbps, so no high-throughput uses will work over the network.
Alternative SBCs have started adding onboard eMMC, USB 3.0, and GigE; while they're still not desktop PCs, having faster IO makes tinkering so much less annoying (for many applications). If anything, I hope the next Pi has Gigabit.
Me too. However, the board is cheap, properly supported, and conforms to a lot of standards. If you stick to branded stuff then for less than £100 you get a thin client capable of 1080p, in a decent case that can be bungeed to the back of a monitor, with a power supply that won't blow up and a decent HDMI cable. The WiFi and Ethernet are good enough for 1080p streaming and for good-quality RDP. Modern low-cost SD cards are pretty quick as well.
I have rather a lot of them, at home and work. For me, quality and standards count.
Nowadays Pis are made in Wales (by Sony). They are designed by a British foundation. To all intents and purposes they are a British computer that fulfils a particular niche rather well. No, the Pi3 does not have GBE or SATA but it is (in my opinion) a piece of kit that you can trust to not set fire to your cat.
The fastest low-cost networking board right now (that I can find) is the ESPRESSObin, built around the Marvell Armada chipset. It's got three full-duplex GigE ports that can actually run at full rate, as well as PCIe and SATA. Unfortunately it's oriented toward routers and the like, so it lacks HDMI and is no good for home entertainment.
Problem with these: I can never be sure what level of acceleration for video playback is supported, and in what quality. The Pi is pretty rock solid in that department. Also, the Pi has by far the biggest community and extensive documentation, and I can rely on there still being Pis available in years, compared to the Chinese outfits where no one knows how long they'll stay on the market.
Concur. I've played with the ODroid C2, Beaglebone Black, and the three "full flavor" Raspberry Pi models. The RasPis have, far and away, the best supportability. The ODroids are great when they work - love that GigE - but they aren't maintaining an OS distribution with the same care and manpower that Raspbian is getting.
I tried to get a Pinebook from those Pine64 guys but never heard back from them. Not sure I'd recommend them since they didn't even send a confirmation email or anything of the sort.
I agree, though maybe as a “Raspberry Pi Pro” or “Plus” model.
(Aside from that, I'd really like it if all models had a serial console over USB, and perhaps a small amount of built-in flash memory to avoid the need for a MicroSD card in some cases.)
This is a bit misleading, in that it's not necessarily about netbooting (which, per the linked article, just works).
As someone who's played with Raspberry Pis on "real" networks I can tell you that netbooting one is not as reliable as you think. It may work on a small home network, but once you have 50+ devices TFTP-ing, something will show its limits.
The firmware that netboots is also not as reliable as one would think. If it fails it just hangs. A proper netboot/PXE client will retry forever.
The approach I followed was to build a minimal ramdisk image, which I've placed on the SD card. When that starts, it creates an in-memory file system and downloads the actual image into it. The SD card is in read-only mode after that.
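Roughly, the first-boot step does something like the sketch below. This is a minimal illustration, not my exact script: the image URL, paths, and tmpfs size are placeholders, and the real setup finishes by switching root into the tmpfs.

    # Rough sketch of the ramdisk's first-boot step; everything here is a
    # placeholder for illustration.
    import os
    import subprocess
    import urllib.request

    NEWROOT = "/newroot"
    IMAGE_URL = "http://imageserver.local/rootfs.tar.gz"   # assumed image server

    os.makedirs(NEWROOT, exist_ok=True)
    # Hold the downloaded root filesystem entirely in RAM.
    subprocess.run(["mount", "-t", "tmpfs", "-o", "size=512m", "tmpfs", NEWROOT],
                   check=True)
    # Fetch the actual image and unpack it into the tmpfs.
    urllib.request.urlretrieve(IMAGE_URL, "/tmp/rootfs.tar.gz")
    subprocess.run(["tar", "-xzf", "/tmp/rootfs.tar.gz", "-C", NEWROOT], check=True)
    # From here on the SD card is only ever read; the system runs out of RAM.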
I don't disagree; it reads a bit like submarine marketing for the clustering hardware. But the difficulty in net booting is for the Pi 1 or Pi 2. On the Pi 3 it "just works."
That said, net booting used to be a thing. When I joined Sun Microsystems in 1986 it was all about the 'diskless workstation.' At the time that was the brand new Sun 3/50.
"The presence of a notch, and the presence and position of a tab, have no effect on the SD card's operation. A host device that supports write protection should refuse to write to an SD card that is designated read-only in this way. Some host devices do not support write protection, which is an optional feature of the SD specification. Drivers and devices that do obey a read-only indication may give the user a way to override it."
No, there is no reason for the SD card to have to be writable. You can put / and other filesystems into RAM disks. You can disable swap or run it over NFS, and you can get other filesystems over NFS as well, so there are lots of options.
If you pass the correct options to the mount command, you get read-only mode (funnily enough, it is the ro option).
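For example, something along these lines covers both cases (the device name and server address are placeholders):

    # /etc/fstab entry keeping the SD card's boot partition read-only
    /dev/mmcblk0p1  /boot  vfat  ro,defaults  0  2

    # kernel command line for a read-only root filesystem over NFS
    root=/dev/nfs nfsroot=192.168.1.10:/srv/rpi-root,ro ip=dhcp ro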
It seems like another benefit of your method is that it will allow the subsequent image to be downloaded over WiFi, is that right? Or are you also limited to ethernet?
The real issue with Raspberry Pis is that Ethernet is actually on the USB 2 controller, so it's capped at around 80 Mbps AFAIK, and that bandwidth is also shared with any USB storage.
I hope the Raspberry Pi 4 has Gigabit Ethernet and that netbooting becomes a first-class function.
Netbooting is one of those problems that turns out to be much more difficult than its inherent complexity warrants. Setting up DHCP and TFTP doesn't cause a full psychosis yet, but then we enter the "no win zone" of NFS and filesystems. One of the persistent frustrations with any sort of Linux filesystem is that their makers feel you would only ever want to use them with a full-blown Linux kernel, too. There is no libext; the only userland tools you get are a bunch of 10,000 LoC C monsters that an autoconf behemoth ensures can only be built on a modern Linux glibc system. And this for an abstraction that should work perfectly fine with just read(), write(), and seek().
My ideal "netbooting" setup is a faux SD card that translates block IO into UDP packets on a gigabit link, with a little software on my computer reading/writing a binary blob. I can still mount it if there's something mountable in the blob. Being an SD card, it solves the other big problem with netbooting, which is that it's not transparent: the bootloader needs to know it's netbooting, the kernel needs to have support for NFS, and init needs to know you are netbooting to mount the differently-named FS. And then when a service decides on startup to reset the network connection (hello Android), you're still fucked.
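The host side of that could be something as dumb as the sketch below. The one-block-per-packet protocol, port, and image path are made up purely for illustration; a real design would need writes, retries, and checksums.

    # Toy read-only block server: answer "send me block N" requests out of a
    # disk image over UDP. Purely illustrative.
    import socket
    import struct

    BLOCK_SIZE = 512
    IMAGE_PATH = "disk.img"     # the binary blob backing the fake card
    PORT = 9999                 # arbitrary port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PORT))

    with open(IMAGE_PATH, "rb") as img:
        while True:
            # Each request is a single big-endian 64-bit block number.
            data, addr = sock.recvfrom(16)
            (block,) = struct.unpack("!Q", data[:8])
            img.seek(block * BLOCK_SIZE)
            sock.sendto(img.read(BLOCK_SIZE), addr)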
> Setting up DHCP and TFTP doesn't cause a full psychosis yet, but then we enter the "no win zone" of NFS and filesystems.
Linux kernel + initramfs, there you are. Why reimplement stuff when it can be done cheaper in kernelspace?
> One of the persistent frustrations with any sort of Linux filesystem is that their makers feel you would only ever want to use them with a full-blown Linux kernel, too
That's true of any modern general-purpose filesystem, for all OSes - there's a reason why embedded devices often enough speak only FAT32, and some even only FAT16.
That's a very good start. If you install a DHCP server on Linux, they usually assume it's because you're trying to run a federated thousand-machine network, not just to stub out some initial config values, and so the complexity matches that expectation.
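To be fair, dnsmasq keeps that part small; something along these lines is roughly what the Pi 3 netboot guides suggest (the subnet and paths below are placeholders):

    # minimal dnsmasq config: proxy DHCP alongside the existing DHCP server,
    # plus TFTP for the boot files
    port=0                          # disable DNS, we only want DHCP/TFTP
    dhcp-range=192.168.1.0,proxy    # proxy mode: don't hand out addresses
    log-dhcp
    enable-tftp
    tftp-root=/srv/tftpboot
    pxe-service=0,"Raspberry Pi Boot"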
But the biggest pain point remains the whole NFS mess. I'm not sure you can even do something like SELinux over NFS, and even if you could, you still need to configure the right permissions and attributes on the host computer just so things match on the netbooting machine.
"But the biggest pain point remains the whole NFS mess"
That's a little unfair. NFS has been around for a very long time and yet it is still modern and being developed. You do have to do things the Unix way to get the most out of it, so you need to standardise Unix uid/gid across your network, for example. Samba's winbind with the rid backend for idmap is a rather easy way to turn AD users into full Unix users with consistent uid/gid.
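As a rough illustration, the idmap_rid setup boils down to a few smb.conf lines like these (the domain name and ranges are placeholders):

    [global]
        security = ads
        workgroup = EXAMPLE
        realm = EXAMPLE.COM
        # local and builtin accounts
        idmap config * : backend = tdb
        idmap config * : range = 3000-7999
        # deterministic uid/gid for domain users, consistent across hosts
        idmap config EXAMPLE : backend = rid
        idmap config EXAMPLE : range = 10000-999999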
If you get your time, DNS, and user IDs right, then NFS can be pretty easy and very, very reliable. On modern Unix systems you also have Samba to play with, and other things. Lots of choice, and with a bit of thought and research you will find a decent solution.
Netboot is most effective when the system is minimal and configuration-driven; see Tiny Core Linux, or CoreOS Container Linux with Ignition.
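For a flavour of the Ignition style, a config can be as small as something like this (the spec version and key are placeholders; in practice you would add storage and systemd units too):

    {
      "ignition": { "version": "2.2.0" },
      "passwd": {
        "users": [
          {
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"]
          }
        ]
      }
    }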
NFS is a mess, so design your solutions not to need it. I see no reason why the pixieboot approach isn't capable of scaling to thousands of machines on a network.
I mean, we are in agreement. There are just systems, like mobile SoCs running platforms like Android, where you desperately want some sort of network-booting scheme for development: you are constantly rebuilding parts of the software, the storage options for the part are highly constrained, slow, and unreliable (like the RPi), and the performance and architecture necessitate cross-building.
I'm pretty curious about using RPis for a Kubernetes cluster. Is there a benefit to using an RPi cluster compared to a simple server, or is this purely for research purposes?
Depending on the number of nodes you want, it's usually better to just grab an Intel NUC and virtualise. Much smaller than an RPi cluster and much more powerful.