UK resident here. The original version gave me the push I needed to get a Raspberry Pi 2B+, subscribe to a VPN, and use it as a wifi AP that routes all my home traffic through the VPN.
Can you trust a VPN that says it doesn't log? No, but you can trust it more than an ISP that might be legally required to log at any moment, without you ever finding out.
Also, I will never start a tech company in the UK, because I will never put myself in a position where I can be forced to add backdoors to a product.
One is a VPN router and wifi AP, and it also runs Uptime Kuma. I need this one to be reliable and rarely touch it except to improve its reliability. It runs:
- OpenVPN
- hostapd
- Uptime Kuma (in docker)
- A microservice invoked from Uptime Kuma that monitors connectivity to my ISP's router (in docker) - sketched below
- nginx (not in docker), reverse proxying to Uptime Kuma
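For the connectivity-check microservice, here's a minimal sketch of the idea (the router IP and port are assumptions, not my actual setup): it exposes an HTTP endpoint that pings the ISP's router, and Uptime Kuma polls it with an ordinary HTTP monitor.

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

ROUTER_IP = "192.168.1.1"  # assumed address of the ISP's router


class PingCheck(BaseHTTPRequestHandler):
    def do_GET(self):
        # One ping with a 2 second timeout; exit code 0 means the router answered.
        ok = subprocess.run(
            ["ping", "-c", "1", "-W", "2", ROUTER_IP],
            capture_output=True,
        ).returncode == 0
        # Uptime Kuma's HTTP monitor treats non-2xx responses as "down".
        self.send_response(200 if ok else 503)
        self.end_headers()
        self.wfile.write(b"up" if ok else b"down")


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PingCheck).serve_forever()
```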
The second acts as a NAS and has a RAID array consisting of disks plugged into a powered USB hub. It runs OpenMediaVault and as many network sharing services as I can set up. I also want maximum reliability/availability from this pi, so I rarely touch it. All the storage for all my services is hosted here, or backed up to here in the case of databases that need to be faster.
The third rpi runs all the rest of my services. All the web apps are dockerized. Those that need a DB have it hosted here too. Those that need file storage use kerberized NFS from my NAS. This rpi is also another wifi AP. It keeps running out of RAM and crashing, and I plan to scale it up when rpis get cheaper or I can repair some old laptops. It runs:
- Postgres
- hostapd
- nginx
- Nextcloud
- Keycloak
- Heimdall
- Jellyfin
- n8n
- Firefly III
- Grist
- A persistent reverse SSH tunnel to a small VM in the cloud to make some services public (sketched after this list)
- A microservice needed for one of my hobbies
- A monitoring service for my backups
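On the reverse SSH tunnel: a minimal sketch of the idea (hypothetical host and ports; autossh or a systemd unit with Restart=always does the same job) is just a loop that re-dials an `ssh -N -R` forward whenever it drops:

```python
import subprocess
import time

REMOTE = "tunnel@vps.example.com"  # assumed user/host for the small cloud VM
FORWARD = "8443:127.0.0.1:443"     # expose local nginx on the VM's port 8443

while True:
    # -N: no remote command, just forwarding; ExitOnForwardFailure makes ssh
    # exit (so we retry) if the remote bind fails instead of hanging around.
    subprocess.run([
        "ssh", "-N",
        "-o", "ServerAliveInterval=30",
        "-o", "ExitOnForwardFailure=yes",
        "-R", FORWARD,
        REMOTE,
    ])
    time.sleep(10)  # brief back-off before reconnecting
```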
TBH I haven't had a problem with SD card corruption so far. If I did, it wouldn't really matter, since all the important data is on the RAID array and the OS can be reprovisioned if needed.
Performance proved to be an issue with SD cards, though, when attempting to host Nextcloud and Postgres. I did what teh_klev is talking about and selected the fastest USB stick I could find, a Samsung FIT Plus 128 GB Type-A 300 MB/s USB 3.1 Flash Drive (MUF-128AB), and it gave me a huge speedup.
Unfortunately Jellyfin is not really fast enough on an rpi, and I have no solution for that.
I have been thinking of setting up a pi4 as a wifi AP. Can you comment on the hardware performance? I'm worried that the range or throughput might be poor, and I'm thinking I might need to use an Intel AX200 or similar.
I've done this before and it works in a pinch, but I didn't think it was reliable enough to use on a permanent basis. I added a USB WiFi interface, which helped with the signal quite a bit. Setting up the AP and networking isn't trivial (but is certainly doable if you're familiar with Linux networking).
My use case was connecting my family's devices to an AirBNB network. I used the rpi as a bridge to the host WiFi, so I could keep a common SSID/password and didn't have to reconfigure all of my kids' devices. It kinda worked.
However, it wasn't very reliable and had poor range and performance. The rpi was meh with one client attached, and bleh with more than one. I quickly replaced it with a cheap dedicated AP that I flashed with OpenWrt. Much easier, and cheaper hardware-wise too.
To me it's been obvious ever since the term SaaS was coined that it would be worse for users. Not only is it more expensive, you don't get control over your data or how you use the product. The idea of cloud computing is similar - you pay more to use someone else's computer. Granted, SaaS and cloud computing make sense if you're an organization, and can have advantages in terms of scale, reliability, etc.
But also, when business interests get involved in producing software, it often causes problems: ads, worse interop, performance treated as unimportant, marketing emails, DRM, the software not working after the company is acqui-hired or fails. However, producing software takes time, which costs money. So commercially produced software can only exist at the intersection of there being a business model and the software being useful. The condition is that the usefulness must be enough to be worth paying for, and the result is what we have now.
Imagine rewinding to 1990 with unlimited borrowed VC funds, hiring every person employed in tech full time until 2023, and building a massive suite of useful software for individuals, companies, and governments, with a few different alternatives for each use case (like we have now), except that they communicate via a set of well-defined, public APIs. The entire software stack would be developed this way, for maximum usability, performance, interoperability, features, etc. Once we reach the feature set we actually had in late 2022, we pause the thought experiment, note the date, and split the cost between the users. Ignoring the various practical issues with this experiment, I bet it would be possible to get to where we are sooner and far cheaper per user.
Long story short, I don't think the goal of making money as a business is well aligned with the goal of producing really good, long-lasting software, even if the users are willing to pay, and this is a real problem. For personal use, I won't tolerate ads, DRM, etc., so I now self-host.
> Why bother with docker for a home server other than for the fun of it?
I do this. Over time you forget how each service was configured, or simply don't care. Adding more and more stuff to a home server increases the complexity and the attack surface more than linearly in the number of services.
I run nearly all my home services in docker, and I have a cookie-cutter approach for generating SSL certs and the nginx config for SSL termination (not dockerized). Provisioning is automated through Ansible, so my machines can be cattle, not pets, as far as that's possible on 3 raspberry pis.
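As a rough illustration of the cookie-cutter part, something along these lines (hypothetical service names, ports, and paths, not my real template) stamps out one nginx SSL-termination server block per dockerized service:

```python
from pathlib import Path

# Hypothetical mapping of service name -> local container port
SERVICES = {"nextcloud": 8081, "jellyfin": 8096}

TEMPLATE = """\
server {{
    listen 443 ssl;
    server_name {name}.home.example;

    ssl_certificate     /etc/ssl/home/{name}.crt;
    ssl_certificate_key /etc/ssl/home/{name}.key;

    location / {{
        proxy_pass http://127.0.0.1:{port};
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }}
}}
"""

for name, port in SERVICES.items():
    # Write one vhost per service; nginx terminates SSL and proxies to the container.
    conf = TEMPLATE.format(name=name, port=port)
    Path(f"/etc/nginx/conf.d/{name}.conf").write_text(conf)
```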
Same, but I haven't started with anything like Ansible yet; I'm only beginning to learn it at work.
Running all my services in Docker keeps it all clean because I'm a very messy person when it comes to Linux. Change, change, change, it works, forget about it, it breaks, find something I did years ago tripping me up now, change, change, change, it works, forget about it.
With docker every service is contained and neatly separated. I can rip something out and replace it like slotting a new network switch into the rack. Delete the container, delete the image(s), and delete the volume if I want to start over with something completely fresh.
I can move everything to a new server by moving bulk hard drives over, restoring docker volumes from backup and cloning docker-compose configs from git. Haven't tried any distributed volume storage yet.
> Haven't tried any distributed volume storage yet.
Having tried Gluster, Ceph/Rook, and Longhorn, I strongly recommend Longhorn. Gluster is kind of clunky to set up but works, albeit with very little built-in observability. Ceph on its own also works but has some fairly intense hardware requirements. Ceph with Rook is a nightmare. Longhorn works great (as long as you use ext4 for the underlying filesystem), has good observability, is easy to install and uninstall, and has lower hardware requirements.
Its main drawback is that it only supports replication, not erasure coding, which tbf is a large contributor to its ease of use and lower hardware requirements.
Longhorn has no authentication at the moment, so any workload running in your cluster can send API requests to delete any volume. I think they're working on it, but it might not be the best solution unless you deploy a network policy that blocks access to the API pod.
I also found it very weird, but here's my intuition.
There are 2^k > 512 spheres stuck to each other across k dimensions (so pretend k = 10). The line from the centre of the box to the point where the inner sphere touches one of the outer spheres has to shortcut through all k dimensions to reach that sphere.
That distance gets massively inflated by the number of dimensions. But the distance to the edge of the box isn't inflated - it stays constant - so the inner sphere breaks out.
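Concretely, with the usual setup (a box of side 4 centred at the origin, and 2^k unit spheres centred at every (±1, ..., ±1)):

```latex
\left\| (\pm 1, \dots, \pm 1) \right\| = \sqrt{k}
\quad\Rightarrow\quad
r_{\text{inner}} = \sqrt{k} - 1,
\qquad
\text{distance from the centre to a box wall} = 2
```

At k = 9 the inner sphere has radius √9 − 1 = 2 and just touches the walls; at k = 10 it has radius √10 − 1 ≈ 2.16 and pokes out.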
The Fourier transform is the same for all time, and the time series contains all frequencies, so it makes sense to plot them on orthogonal axes. It also makes sense for them to share the third axis for magnitude.
However, the Im parts are just bolted on orthogonally to the Re parts and to the magnitude. The Im part of the time series shares an axis with frequency only because the author ran out of dimensions.
Since a commercial fusion reactor hasn't been developed yet, you can't argue that it is or will be the same cost per megawatt as fission power, because you can't predict the advances in technology that might make fusion easier or reduce the capital cost.
The authors argue that ITER will get there and that it's a matter of time, funding, and politics. SPARC might be able to get there at roughly the same time. Neither will be hooked up to the grid, but they will demonstrate the tech needed to make a viable fusion power plant. Unfortunately, none of the other exciting fusion projects out there will be able to get off the ground, due to fairly fundamental limitations.
If you ignore any tedious jokes about when fusion power will be ready, and assume it will be ready in a few decades, it's still a process that converts reasonable quantities of seawater into power, with no CO2 emissions, and is relatively safe compared to fission.
It won't be ready in time to reduce emissions enough to prevent catastrophic climate change. However, it can be ready soon enough to power the devices we'll need to sequester CO2 from the atmosphere once we've reduced our emissions to the point of diminishing returns.
After the concept has been demonstrated, there's plenty of scope for improvement which will make it better and cheaper. On the other hand, the price of oil will increase as emissions taxes are introduced (I hesitate to say that we'll run out of reserves).
There's a lot more mileage in fission technology than what's commonly deployed for power, but the new types of reactors needed make it far easier to produce weapons (*). Also, although fission technology is mature and safe, the human factors around it are not, which will still lead to accidents, contamination, and deliberate theft.
(*) Fusion reactors would also make it possible to breed fissile material, since they are a neutron source, but it would be slower and easy to detect.
Why could an airship not be made using a vacuum instead of a lifting gas?
A quick google reveals that someone has thought of this before: https://en.wikipedia.org/wiki/Vacuum_airship and it seems like it might not be possible with currently available materials.
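For a sense of scale (taking ρ_air ≈ 1.29 kg/m³ and ρ_He ≈ 0.18 kg/m³ at roughly 0 °C and 1 atm), buoyant lift per unit volume depends only on the density difference:

```latex
\frac{L}{V} = \left( \rho_{\text{air}} - \rho_{\text{gas}} \right) g,
\qquad
\frac{L_{\text{vacuum}}}{L_{\text{He}}} \approx \frac{1.29 - 0}{1.29 - 0.18} \approx 1.16
```

So a vacuum buys only about 16% more lift than helium, while the hull has to resist the full ~101 kPa of outside pressure without buckling, which is presumably why it looks out of reach with current materials.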
Would it be possible to reduce the density of helium at the same pressure by electrostatically charging it so the molecules repel, effectively turning the airship into a giant capacitor with alternating +ve and -ve sections?
I once had to port a Siamese neural network from TensorFlow to Apple's CoreML to make it run on an iPhone. Siamese neural networks have a cross-convolution step, which wasn't something CoreML could handle. But CoreML could multiply two layer outputs element-wise.
I implemented it using a Fourier transform (not a fast Fourier transform), with separate Re and Im parts, since a Fourier transform is just a matrix multiplication and convolution is element-wise multiplication in the Fourier domain. Unsurprisingly, it was very slow.
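A toy version of the trick (1-D, with hypothetical sizes, not the actual model): the DFT is just a matrix multiply, so with the Re and Im parts kept as separate real tensors the whole thing reduces to matmuls plus element-wise multiplies - at the cost of the result being a circular convolution and the O(n²) transform being slow.

```python
import numpy as np

n = 8  # hypothetical signal length
k = np.arange(n)
ang = 2 * np.pi * np.outer(k, k) / n
F_re, F_im = np.cos(ang), -np.sin(ang)  # DFT matrix split into Re/Im parts

def dft(x):
    # Forward transform of a real signal: two plain matrix multiplies.
    return F_re @ x, F_im @ x

x, h = np.random.randn(n), np.random.randn(n)
xr, xi = dft(x)
hr, hi = dft(h)

# Element-wise complex multiplication written with real ops only.
yr = xr * hr - xi * hi
yi = xr * hi + xi * hr

# Inverse DFT (also just matrices); the result is the circular convolution of x and h.
Fi_re, Fi_im = np.cos(ang) / n, np.sin(ang) / n
y = Fi_re @ yr - Fi_im @ yi

# Sanity check against a direct circular convolution.
direct = np.array([sum(x[m] * h[(j - m) % n] for m in range(n)) for j in range(n)])
assert np.allclose(y, direct)
```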