Off-Grid Cyberdeck: Raspberry Pi Recovery Kit (back7.co)
649 points by g0xA52A2A on Nov 27, 2019 | 144 comments



As I've been playing with Raspberry Pis and BeagleBones and stuff lately, it's been driving me nuts that EVERYTHING I do needs to be apt-gotten off the internet; the base image doesn't even include basics like screen/tmux.

If those repos are inaccessible for any reason, I have a bunch of hardware that's very hard to do anything useful with.

I know there are such things as apt caches and squid caches and stuff, but I could really use a thing that goes through every apt-get I've ever done and the top 50,000 packages on GitHub, stuffs 'em all onto an SD card, and shows me how to use them from my command line.

OP mentions this as a future direction for the project, but I think it's one of the most important ones.


On Debian-based distros there are several tools.

squid-deb-proxy can do most of this - it fetches and locally caches lists and packages. Clients install squid-deb-proxy-client which uses multicast-DNS to discover the local proxy.

Packages are fetched and cached by the proxy the first time they're requested and thereafter served locally (subject to lifetime/space etc. squid configuration).

There are ways to pre-populate the cache but you're always going to have situations where package updates have to pull from remote archives.
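
Roughly, the setup is (a sketch for Debian/Ubuntu-family systems; the package names are the ones in the standard archives):

    # On the box that will act as the cache:
    sudo apt-get install squid-deb-proxy

    # On each client on the same LAN:
    sudo apt-get install squid-deb-proxy-client

    # From then on, apt on the clients discovers the proxy over mDNS and
    # fetches/caches packages through it with no further configuration.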

apt-mirror is designed for a similar purpose. It lets you create a subset of, or an entire mirror of, one or more upstream archives.

Clients can then be pointed at the local mirror which could be on the same host or LAN.
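
A rough sketch of the workflow (paths are apt-mirror's defaults; the archive line is only an example, in apt-mirror's own mirror.list syntax):

    sudo apt-get install apt-mirror
    # Edit /etc/apt/mirror.list and list the suites/architectures you want, e.g.
    #   deb-armhf http://archive.raspberrypi.org/debian buster main
    sudo apt-mirror        # populates /var/spool/apt-mirror by default
    # Point clients at the result over HTTP, or with a file:// source such as
    #   deb file:/var/spool/apt-mirror/mirror/archive.raspberrypi.org/debian buster main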

Generally, 'packages' on GitHub will be source code only, so you'd need a quite complex build server, or at least per-package specific links in your local cache rules for pre-built binaries.


Update: In case anyone finds this later, I just ran across this page of someone else doing EXACTLY what I wanted to do, for exactly the same reason:

https://blog.thelifeofkenneth.com/2018/01/off-grid-raspbian-...


> it's been driving me nuts that EVERYTHING I do needs to be apt-gotten off the internet,

One solution is to do it once only: grab the base image, install all the packages you need, and then make an image of it so you don't start from scratch again next time.
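
On a Linux host the "golden image" approach is roughly this (device names are examples; verify with lsblk before running dd):

    # Dump the configured card to a file once:
    sudo dd if=/dev/sdX of=pi-baseline.img bs=4M status=progress
    # Write it back to any fresh card later:
    sudo dd if=pi-baseline.img of=/dev/sdX bs=4M status=progress conv=fsync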


But next time I'll be doing something different. That's like saying "the only internet you need is that which is already in your browser history".

No.

What I want is something closer to Kiwix already on an SD card so even if everything is down, I can read about something I've never been interested in before. But for software.


This is my plan (in action) for an offline GPS nixie tube clock. I'm not sure what the right choice of options is. Many are nice-to-have features for people like me, who would be the most likely to want a clock like this (SNMP to host drift, jitter, and temperature MRTG logs), but they come at a mild cost to security.

Will those who receive this bother to connect it to networks? Should I even enable network/GUI for those who want to tinker easily? How should the password be handled?

I'm considering two extremes:

1. A raspbian lite headless system with all unnecessary peripherals disabled and no network features.

2. A full installation, GUI and all with HDMI and network enabled so it's easy for people to play with it if they want.

I'm leaning towards 2 because I think it would be nice to have a clock that auto-updates leap seconds and the tzdata db. If I go for 1, that opens up the possibility of using a Raspberry Pi Zero (non-W) for real BOM and power savings.

As for the password: I'm thinking of having something short and simple other than "raspberry" that the user can/should change. This seems like standard practice with many enterprise systems.


This is what I do, and it works well.


>the base image doesn't even include basics like screen/tmux.

It's easily arguable that the RPi isn't made with such use cases in mind. I'm all in favor of removing unnecessary bloat if only a small percentage of the user base is ever going to use it, especially when the software is readily available.

>If those repos are inaccessible for any reason, I have a bunch of hardware that's very hard to do anything useful with.

How so? They are just Linux boxes; you can just download the source code and compile the binaries you need. Pre-built packages are not necessary for a functional OS.


> How so? They are just Linux boxes; you can just download the source code and compile the binaries you need. Pre-built packages are not necessary for a functional OS.

What about dependencies? If you have the internet access required to download the source code I'd say you'd be better off just using the repos.


Download the source code, from the repos?


NixOS solves all of these problems.

a) define your entire system state and dependencies in a single, declarative file

b) prebake an image based on this file

It's all done and ready, available now, no hacking around required.
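
For (b) specifically, the NixOS-on-ARM wiki describes building an SD image straight from your configuration. Very roughly (exact module paths and attribute names vary between nixpkgs versions, so treat this as a sketch):

    # configuration.nix imports the Pi sd-image module and declares every
    # package/service you want baked into the image.
    nix-build '<nixpkgs/nixos>' -A config.system.build.sdImage \
        -I nixos-config=./configuration.nix
    # The resulting image under ./result/sd-image/ can be dd'd straight onto a card.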


This is precisely what every linux distro does. Nix isn't providing novel functionality for bootable OS images.

But a bigger question for the ergonomics of NixOS: Are NixOS and Nix prebuilding for ARM now?


> This is precisely what every linux distro does. Nix isn't providing novel functionality for bootable OS images.

None of them provide native, single-source-of-truth declarative configuration that is easy to reason about, pure, and guaranteed to deliver sane results every time (vs. something managed via a classic CM system). Oh, also one that is symmetrical to the way the distribution itself is built and managed.

> But a bigger question for the ergonomics of NixOS: Are NixOS and Nix prebuilding for ARM now?

Yes. https://nixos.wiki/wiki/NixOS_on_ARM/Raspberry_Pi

Ports to other ARM devices are also very easy.


> None of them provide native, single-source-of-truth declarative configuration that is easy to reason about, pure, and guaranteed to deliver sane results every time (vs. something managed via a classic CM system).

Firstly: the Nix language isn't pure. And neither are some base Nix library functions.

Secondly: You may find it easy to reason about. Many of us have not had that experience. Trying to do work as a developer, I felt it was absolutely miserable the instant I needed to depend on a new package or a new runtime. Every language had slightly different conventions and rules. You had to relearn how any specific package worked to integrate it with another, because there often wasn't sanity. And if you DID need to somehow interface with something outside of Nix (say, a vendor binary not in Nix) you had to use an unreliable environment hack.

And of course, the tutorials and docs didn't actually cover the majority of concerns on how to add new stuff folks will inevitably have, except for a trivial C executable.

In one case, after spending a week working out how to add a package to enable a Haskell binding to said package correctly, I submitted package updates that took MONTHS to propagate into the main repo, so I had to start pushing my fork of the nix repo from machine to machine via github on my own to manage multiple machines. It was pretty ridiculous and I regretted my choices.

I like the Nix philosophy. I respect a lot of the people on the project. But I am not a fan of the "it is all fine and well-baked and I'm sure you can use it too" approach a lot of Nix proponents decide to take.

You could absolutely arrive at a solid installable image for ANY major Linux distro.

> Yes. https://nixos.wiki/wiki/NixOS_on_ARM/Raspberry_Pi

That's great! I'll have to try it again there if it has packages I want for SDR work that are not comically ancient.


I really like the idea of NixOS and Guix and tried NixOS for half a year on my laptop. Then I changed back to ordinary package-based distributions (Arch, Debian and Gentoo).

The issue with NixOS for me was probably quality control. When I did an update and, instead of fetching the files from the binary cache, it started to build stuff on its own, I could be reasonably sure that some build error would happen. Coupled with sparse documentation, I was very often at a loss as to how to fix it and just had to remove the package for a while and try again a couple of days later.

Another point is NixOS's path mangling... Do we really need that? Can we not use namespaces and OverlayFS etc. to let each process assume it has a normal FHS file hierarchy while in reality its 'root' is cobbled together from multiple package installation directories? Instead of patching the paths of every package, letting the kernel do the path resolution seems less intrusive.


You're wrong about builds in NixOS; the pure functional approach makes it easy to guarantee that builds that passed in CI will also pass on your machine.


In theory you are right, but somehow changes made it into their channels that weren't cleared by the CI, so the binary caches didn't have the artifacts in them and I had to build them myself, which failed.

So in practice updating a Gentoo system is more reliable than updating NixOS.


(A) is also possible with Debian. OP could just make a preseed file for debian-installer (rough sketch below).

(B) is probably accomplished with any OS.
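
For (A), something like this hypothetical preseed fragment would bake extra packages and a mirror choice into the installer (the keys are from the standard preseed documentation; the package list is just an example):

    cat > preseed.cfg <<'EOF'
    d-i mirror/country string manual
    d-i mirror/http/hostname string deb.debian.org
    d-i mirror/http/directory string /debian
    d-i pkgsel/include string tmux screen git build-essential
    EOF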


.. but if you discover you need something, you have to go back to (a).


Your prebaked image will contain the definition (a), and you can incrementally continue building upon it, then regenerate yet another image if needed.


Yes, software accessibility is an issue in the most pronounced disaster scenarios. But anybody who looks at a device of such exquisite craftsmanship, which is evidently already set up to function, and frets about downloading software after some apocalyptic scenario has missed at least half the point.


Since these are small devices, I think a minimal image is a necessity. What you will use is not what someone else will use.

As to caching, I tried setting one up and found out a few things.

I originally tried apt-cacher-ng:

https://www.unix-ag.uni-kl.de/~bloch/acng/

but had trouble (can't remember exactly what) and muddled through with polipo instead.

I found out:

- it was a lot easier and faster than copying cached packages around

- when doing "apt-get update; apt-get upgrade" the cache really sped up multi-machine updates. The first machine was slow, the rest were fast.

- By far, the majority of cache traffic was for updates to base stuff: foo-1.2, foo-1.2.1, foo-1.2.2, etc.

- second was prerequisites for packages I needed.

- the few specialized packages I used were not updated as frequently.


You may want to look at debos for pre-baking debian/ubuntu images.

https://github.com/go-debos/debos


Yep, a big problem with the repo model that Linux insists on using for everything. Offline is a fifth-class citizen because of course everyone lives in SV or a university dorm.


I see the repo model as the closest we've come to a solution. Not only are there mirrors in case the main repo drops; because all software is in one repo it's extremely easy to cache/sync/download. Offline mirrors are even an officially supported use case with many package managers. Have enough space? Mirror the whole thing. If not, select only the interesting package groups. Missed one? You can probably get it from another machine it's installed on.

Want to have an offline copy of software on Windows? Go download all of the installers from their respective sites. Want it to constantly update? That's gonna be about 100 lines of Python code. And better make sure they're the "offline" installers, not those idiotic stubs that just download the latest version from a server. Also don't forget about all of the 20 different VC++ runtimes that are sometimes not packaged in the installer.

I have a feeling it would be even harder on macOS.

(all of this is of course ignoring the legal issues of distributing those files - which is generally not an issue on Linux)


> because all software is in one repo

All software is most definitely not in one repo. If it were, people wouldn't have to use PPAs or compile from source and deal with all the problems those things cause.

> better make sure they're the "offline" installers, not those idiotic stubs that just dl the latest version for a server

Oh, you mean the ones that aren't just acting like apt.

> I have a feeling it would be even harder on macOS.

Depends, but MacOS software that uses Application Bundles should be fine. Even a lot of Windows software works fine if you just copy the installer contents to a directory. Self-contained application directories (or single files) are an old as dirt concept that Linux communities never got behind, preferring instead to use overly complicated schemes like package management that come with a bunch of their own problems. So many, in fact, that people are now often distributing applications in Docker images.


I had a feeling PPAs would come up, but I've never had more than maybe 7 PPAs on any of my Ubuntu machines, which is still better than each program coming from a different site with a different installer. Including PPAs in caches/backups is trivial.

The way I see it there are 2 different use cases here:

1. You want a few individual apps kept safe on a USB stick in case you need them offline - installers on Windows, AppImage on Linux.

2. You want a constantly updated cache of the programs you use in case of the apocalypse - a pain in the ass on Windows, trivial on most Linux distros.

Windows only seems superior here because no. 2 is barely possible, so everyone has gotten good at no. 1.


> AppImage on Linux

Oh, if only there were more than a handful of applications on Linux distributed as AppImage, and any support for them at all in file browsers, life would be so much simpler. The Linux community hates the desktop so much they completely ignored an embedded ELF icon standard and have consistently poo-pooed every non-repo way of dealing with applications ever.

> You want a constantly updated cache of the programs you use in case of the apocalypse

Why would I want that? So I could discover that something I rely on was broken by a recent update only after I have no ability to do anything about it? Well, let's assume so. On Windows this is relatively simple: just keep a copy of Program Files and most applications will still work fine. It's not ideal, but I'll take it over package managers not even being able to install an application to a different disk.


> just keep a copy of Program Files and most applications will still work fine

In some mythical parallel dimension where the Windows system directory, user profile directories and the registry don't exist, sure.


Applications rarely copy dlls to the system directory these days, as that led to horrifying DLL hell just like Linux has, where conflicts abound unless some central authority carefully manages everything. I routinely run about a hundred different applications portably from a thumb drive. Try that shit with Linux (it only works with AppImage!).

User profile and registry are for settings. You can easily keep backups of those as well, but it isn't really necessary unless you have highly tweaked configurations or something.

Have you not touched Windows since 1995?


As opposed to which OS that gets new software while being offline?


Linux repos can be hosted on a local disk. Or you can order a CD/DVD through the post with mirrors of the repos.

Which OS handles this better, in your opinion?


DOS, classic MacOS, NeXT, RiscOS... Single file or folder applications that are entirely portable was the norm in those OSs.


Or just not behind an enterprise proxy it seems.


FWIW, provided you have the kernel and firmware, basically everything can be built from source.

I've built most of the base repository of Arch Linux on my RPi4 without cross compiling. I'm buying a few more so that I can go 'full Gentoo' and pretty much rebuild the world.

Have a recording of a container booting from a rebuilt userland: https://asciinema.org/a/283303

It might feel like the ARM packages are somehow special, but really only the blobby bits like the RPi firmware are. Everything else is just your bog-standard armv7/aarch64 ELF.

So yeah, back up a GCC binary or bootstrap, I guess? I can probably email you a tmux binary in a pinch, plus I have a full local mirror of the repo for the apocalypse? :P


Debian provides a physical artefact containing all the packages that can be used without internet access. I don't know whether this also works for Raspbian: https://www.debian.org/CD/


This is why projects like yocto and buildroot exist.

They help you maintain a stable software stack over time, and generate a fully self-contained software image that can be flashed without needing the internet afterwards. (All downloads happen during compilation.)
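
With Buildroot, for example, it looks roughly like this (run inside a Buildroot checkout; the defconfig name is whatever the tree provides for your board -- `make list-defconfigs` shows the options):

    make raspberrypi4_defconfig
    make menuconfig     # add the packages you want baked into the image
    make                # fetches all sources up front, then builds offline
    # The flashable image ends up in output/images/sdcard.img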


Thank you for filling in a gap in my understanding!

I'm pretty severely on the noob end of the scale when it comes to software, so I don't know if maintaining a new distro for myself would make sense, since all the tutorials I'm following assume I have Raspbian and all its built-ins available to me.

But maybe in the post-apocalypse, some hero with a prebuilt Rasp-yocto will rescue all our useless boards.


How often are the repos inaccessible? I've come around to the opinion that, unless you have a very specific use case, networking is an essential element of any (Linux) computer system. Once you have that, you can benefit from a world of free software, with updates all handled automatically.

Personally I find it almost magical that I can install and update almost any software I need with a short command.


You can build your own image/rootfs.

Imagine that you can build your rootfs in a Docker container and export it to an .img.
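
A very rough sketch of that idea (my-custom-debian is a placeholder image name, and this skips the Pi-specific partitioning, firmware and kernel, which are the fiddly part):

    docker create --name rootfs my-custom-debian    # placeholder image with your packages baked in
    docker export rootfs > rootfs.tar               # flat tarball of the container's filesystem

    truncate -s 2G rootfs.img
    mkfs.ext4 -F rootfs.img
    sudo mount -o loop rootfs.img /mnt
    sudo tar -xpf rootfs.tar -C /mnt
    sudo umount /mnt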


I don't think I've ever seen the rpi apt repository offline, so I'm not sure why you're so worried. Embedded computers tend towards lean rather than heavy and that is the mindset you're going to encounter.


Can't you rsync down your own repository?


Why would you even want to do that?


to keep a local copy to work from
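
For instance, assuming the mirror you pick actually exposes rsync (many Debian mirrors do; worth checking for whichever Raspbian mirror you use), something like this pulls only the armhf and architecture-independent packages to keep the size sane:

    # mirror.example.org is a placeholder -- substitute a mirror that offers rsync.
    rsync -avm \
        --include='*/' --include='*_armhf.deb' --include='*_all.deb' --exclude='*' \
        rsync://mirror.example.org/debian/pool/ ./local-mirror/pool/
    # You'd still need the matching dists/ metadata (Release, Packages files)
    # before pointing apt at the local copy.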


Most of the important stuff is in busybox and gcc (maybe Python). Cache those and their dependencies and you have most of what you want.


This site cannot be viewed without third-party JavaScript enabled (it shows a blank page on both Firefox and Chrome), and this is happening more and more on HN links. I think it's pretty sad that so many websites are adding a client-side dependency on third-party code just to view the website. In this case, this doesn't seem intentional, as the code is full of <noscript> tags.

I also cannot begin to understand how a <body> tag's class attribute can take 4,400 bytes. In what kind of situation do we need to apply 146 CSS classes to the <body> tag?


It's a squarespace site, and this particular behavior is pretty universal to squarespace sites, near as I can tell.

Since it's due to the squarespace, I wouldn't really characterize this as "third-party" javascript per se, but I agree it's annoying the page needs javascript to even render. Boo on squarespace.

If you don't want to enable all third-party scripts, try uMatrix (https://chrome.google.com/webstore/detail/umatrix/ogfcmafjal...). It has very fine-grained control over what assets you allow from where (it's why I knew offhand this was squarespace). A warning though: it's got a bit of a learning curve, and depending on how restrictive you want things, you will probably end up spending a fair amount of time un-breaking the internet.

Needing third-party scripts isn't necessarily evil in my mind though--aside from squarespace-like cases where a page loads scripts from the underlying platform (squarespace, or custom domains on top of medium), the other common case I see is loading scripts straight from cdnjs or similar. Is it really evil or insecure to load jquery from cdnjs?


Strange, it works for me on an ancient version of Safari (9.1.3) that's kitted out with ghostery and JS blocker. Most modern sites don't work on it, but this one did.


That's probably because that version of Safari ignores this little bugger:

    .site.page-loading { opacity: 0 }
I used the developer tools in Firefox to disable that one and the page was instantly viewable without JavaScript.

That's right, the site uses CSS to hide all the content and then presumably re-enables it somewhere in the gobs of JS, maybe after it's loaded whatever other analytics/tracking crap there is. Absolutely vile.


It makes me sad that this is (currently) the 2nd-top-voted toplevel comment on this post.

Can we stop whining about how people choose to present their work and instead discuss the (in this case, really cool) work instead?


People are rightly complaining about the accessibility (or lack thereof) of the content --- it needs to be nothing more than a static page, and in fact would be perfectly readable as such, if it weren't for one line of CSS that hid the content unless JS was run.


By "third-party", I presume you mean anything originating from other than the site's domain?

I haven't actually checked the site in question (on mobile, kinda tricky), but isn't this OK as long as a subresource integrity hash is used?


My main gripe is that JS is required to view the website. It doesn't work on my mobile phone (which is rather old, I admit), and it takes a huge amount of CPU time to load on my X60.

I really like that I can still use HN on this phone, but it's more and more frequent that I cannot open the links themselves.


This is why Hacker News, despite all its problems, is such a joy. There are people doing some amazing stuff that I can learn so much from (while trying not to think too hard about how lazy I am). It's not only a cool project, but it's really well photographed and documented. Thanks for posting.


Projects like this remind me of why we need to use public money for space exploration. The extreme constraints on space exploration drive new innovations and ideas that can flow down into our everyday lives.


I wish everyone believed this. It feels so inspiring to every generation of engineers.


We used public money to go to the moon and our government recorded over all of the video footage. Can you name a single piece of video footage that is more valuable than that which was recorded on the Apollo missions? I can't. Governments are either incompetent or corrupt - usually both.


The video footage may be inspiring and I agree that it's a huge shame that it was erased but it really is of very little scientific value compared to the other research and data gathered on these missions.


If the footage from the Apollo missions is worthless that would support my claim beyond the point I made in the first place.


I genuinely envy the creator — as in "envy is the sincerest form of flattery". I love everything about this: that the maker perceived an entirely legitimate use-case, that he had the skills necessary to assemble off-the-shelf components into a device with such a professional finish, and that he shared so much detail with us. It's the kind of project I'd support on Kickstarter for the twin delights of helping somebody create something truly niche but exquisite and for the thrill of having such a unit myself.

Yup, this is why I come to Hacker News.


Thank you! I'm new here but have more stuff in the works. Right now I'm mid-project on some 3D printer enclosures but I've got some more stuff planned in the months ahead.


I really like this project! But I speak from authority, having fully integrated a wireless and wired network and charging station into a carry bag for work uses:

That Netgear switch will cut through your onboard battery like a bullet. You'll absolutely need a larger battery if you flip that switch on.


In terms of power usage, the best solution would probably be a passive Ethernet hub.


You can find crummy little 100 Mbit switches on AliExpress that actually power from micro USB. They're almost but not quite as bad as you think they'll be, but for only $8...

https://www.aliexpress.com/item/33047436686.html


I tried one of these and found it was constantly drawing 100 mA+ of current. They're really not made with portable efficiency in mind.

In the end, I just found portable WiFi solutions to be more power efficient.


OP could use something like a LAN9354 Ethernet switch chip and build their own switch. It probably needs a third PHY chip to interface with the Raspberry Pi, as the Pi doesn't expose raw MII to attach to the LAN9354.



What do you have there? I'm a little bit out of touch with the HAM scene lately


I once worked on a project trying to isolate WiFi signals. We ended up purchasing a metal box with a conductive gasket, and were able to detect wireless signals through the box until we screwed the lid down to spec. Just a point of reference, I've never done EMP work... But I'm not sure how much the copper foil will buy you. Regardless, beautiful build!


An ungrounded Faraday cage is a re-radiator.

If it's grounded you need very little material. The most important factor is the largest gap in the cage, which dictates the longest wavelength that is passed. Look at microwave oven meshes to see a size of gap that blocks slightly above 2.45 GHz.


>But I'm not sure how much the copper foil will buy you.

It'll do very little. There is a paper authored by the US Army discussing various EMP protection measures. The minimum reliable protection there is a 12 mm thick (about half an inch) mild steel container with a lead seal.

This is why, if there is ever a nuclear war, vacuum tube equipment will be the only stuff that survives.


> Numerous holes in the case would make ingress of water or moisture even more common

Salt in the air from moisture is also a major issue. I have lost many RPis to the sea air.


A bit OT, but my brother lives by the sea, literally at the top of a cliff. He's a biker, and has owned 5 motorbikes in 10 years - every single one has ended up badly corroded because of the salty sea air, resulting in hefty bills.

After the first bike, he started putting a cover over them, but it didn't help much.

Any ideas on what might help reduce corrosion? (he doesn't have a garage, and building one isn't an option).


Afaik the traditional corrosion protection method is a decent layer of paint, or variations thereof.


Indeed.

For some reason cars seem to be mostly OK there; it's just motorbikes that are badly affected. Obviously the parts that can be painted, the parts that aren't stainless steel, are painted.


Would some sort of sacrificial anode work?


Galvanic rust protection requires the entire structure be immersed in an electrically conductive fluid. Works great for boats and hot water heaters, less well for cars and motorcycles.


ACF-50 can help. Living right next to the sea your brother will need a lot of it.


On the Pi at least, once you've got a card set up you can `dd` it onto another card. Saves a lot of time when rebuilding. I tape them to their respective cases so they're easy to find when needed.
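
For card-to-card copies it's a one-liner (device names are examples; check lsblk first):

    sudo dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync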


I love the integrated switch! Is it meant to stand up a small cluster in a hurry? Thinking more, what is the use case?


I was thinking a lower-power USB 3 hub might be more useful; you could do USB Ethernet dongles if really needed, or other options... It may not be as high a throughput, but that probably isn't the goal.


Really cool! I especially liked the external EMP shielding box. I wonder if it's been tested, and how one would test those? Put it in a microwave and see if anything breaks?


You put a spectrum analyzer inside and transmit to it.

This guy shows that a box of aluminum foil drops EM by at least 40 dB. Chicken wire is more like 20.

https://youtu.be/9M6h1U9jgWs


A simple test would be to stick a few wireless devices in the box and check that they all lose signal once the box is closed. If you want to get fancy, any place that does EM interference testing of consumer goods should have the room and equipment to test the box across a broad spectrum.

My intuition would be that the lid doesn't seal that well as it is. For long-term storage that's easily fixed by putting the shielding on the outside and gluing it shut with copper tape. Maybe you could instead add some copper-lined magnetic flaps that close the gaps.


The box has flaps lined with copper (https://www.instagram.com/p/B3Uwc9SHzot/?igshid=5dgayhtfayad). The box was a bit of a joke since I have no background in EE and couldn't find good reference material.


I think various electronics companies have test rooms for this, and governments have actual EMP test rigs.


This is not too healthy for the microwave


Put a glass of water in the microwave as dummy-load to prevent high VSWR from reflecting back into the magnetron. Standard procedure any time you're doing dumb shit with a kitchen microwave.


Good point, but it's probably the most accessible test device for high levels of RF flux. Based on a quick Wikipedia tour, though, just testing on the ~2.4 GHz band of a microwave oven would probably not be enough to verify the shielding.


This might pair well with an Othernet kit, supposing you're in a coverage area.

https://othernet.is


What is this? No offense, but this site looks like a concept mock-up. There's no actual information besides general ideas and stock photos, and there are no further links or reading, just an email address.


I can assure you, it's real! I got mine in the mail on Monday!

Caveat: I have not attained reception yet, because of obstacles.

The North America kit comes with:

- Low Noise Block Downconverter (LNB) antenna, capable of Ku-Band reception of the SES-2 satellite. Must be pointed very accurately: elevation, azimuth AND rotating for polarization

- Crappy little tripod, good enough to get started

- An acrylic laser-cut LNB collar to hold the antenna on the tripod, which I immediately snapped.

- A 1 GHz ARM-based board with an embedded software-defined radio.

- An 802.11n dongle.

Here's a youtube overview of the desktop environment: https://www.youtube.com/watch?v=9TuNVC0Vw2Y&t=281s

Basically, the board receives compressed tar files in the data stream and caches them on a secondary microSD card. You access that content with various Linux desktop apps served over a web desktop.


I’ve got one of installed back home in the outskirts of Milan (near lake Como). It’s the sort of thing you don’t expect to ever need but that might be useful in the one in a billion chance.

It also receives APRS signals, which can be really handy in situations of truly catastrophic disaster.


My only issue with othernet is, from what I've been able to tell, you only get to see what they determine you can see.

In other words, they're shaping the perceptions for you, rather than giving you the raw data to decide for yourself.

You can view some folks online feeds[0] to get a feel for what data they let you see.

[0] https://forums.othernet.is/t/online-othernet-feed/6186


Can I get an explainer on how this would help in a disaster recovery situation?


I was thinking it could be pre-loaded with tons of relevant media and reference materials. Plus being a general purpose tool. I could see distance sailboat cruisers packing one away, similar to how they would pack some redundant spares of key electronics, or an industrial sewing machine like a Sailrite, for enhanced self-reliance in challenging environments. Pack a 3d printer in there too!


> "I could see distance sailboat cruisers packing one away..."

...where the high humidity and salt air of the ocean would corrode the heck out of it. The author mentions that it has air vents.

A Panasonic Toughbook would be a far better choice.


I'm pretty sure he didn't mean he added vents to the Pelican case, since he specifically states that this time he did not make holes in the case.


He has to open the case to use it, no? Quoting from the article:

> "I also added cooling vents- the internal Pi 4 has a fan on it, but it needed vents too- so if you look close you can see vents above the connector panel and above the display."


I assume one could use it to store and read back an offline dump of wikipedia[0] in order to help recreate society.

[0]: https://en.wikipedia.org/wiki/Wikipedia:Database_download


In all the ways that having a known-good / not-hacked working Linux computer with all your diagnostic and recovery tools installed would help.

Plus it's not reliant on external power or peripherals, and it's very water resistant and environmentally sealed (the main / only benefit over just a laptop).


A Raspberry Pi doesn't make a good candidate for that, as it boots into a proprietary, closed-source system before loading Linux. It's hard to beat a Toughbook, or even a ruggedized tablet, for any such scenario, especially as they are still sealed when in use. The BOM cost for this project is also relatively high - I estimate it at around $600.


Well obviously at the end of the world, SOMEONE has gotta bring HN back online!


don't be such a buzzkill! :D


For the keyboard, a random question... does anyone know of a real laptop keyboard that's available for general purchase and includes a datasheet on how to interface with it? I don't really want to go to the lengths OP did and build a keyboard from individual switches.



This is really neat, and timely -- I was just watching some of those "buying and opening a Titan II missile launch facility in Arkansas" videos on YT. Seems like a useful addition to one's underground lair.


That needs connectivity via PJON: https://github.com/gioblu/PJON :)


I'd go with an x86 or x86-64 SBC because, although someone compiled Linux to run on ARM, a lot of programs still can't be.


Isn't the whole point of C that you can compile it for whatever architecture you wish? What programs do you know of that cannot be compiled to run on ARM?


Any program that's closed source. But also it's just kind of a pain to do.


this is just cool as hell. congrats OP.


I wonder about the keyboard though -- why ortholinear? I guess if you prefer it...? I find it very unwieldy. Or maybe all the staggered 40% kits out there wouldn't fit?


Probably an aesthetic choice.

Have you ever used ortholinear layouts? (I have not). I've heard they aren't as weird as you first expect, and you can get used to it fairly quickly.


Is this the new thing now, where everything is going to be a cyber-thing, built for a post-apocalyptic world, with bulletproof shielding, waterproofing, and EMP proofing?


I actually would really like that as the theme for 2020. The perception of an increasingly hostile world pushes designs to reflect rugged and reliable qualities. It would be a breath of fresh air compared to the crappy planned obsolescence of the 2010s. Can’t wait for Apple to create a phone with sharp square corners!


That would be planned obsolescence of clothes.


The shape vaguely reminds me of an oscilloscope.

The raised lettering looks nice at first, but I think it might be rather prone to damage; inset/engraved would be preferable.


I call this a fake. Look at the photos and focus on the keyboard. What is it missing? The space bar. How the hell do you use a keyboard without a space bar?


Love that tiny keyboard, but where is the space bar?


Bottom row. On mine I have space left of centre and enter right of centre. It's a full keyboard: you use the next two keys outwards on the bottom row to shift layers for access to all the punctuation. They're actually very ergonomic. See olkb.com for more on the original.


:O

Just looked more closely at the photo. That is a pretty unorthodox layout for a 40% keyboard! (having a number row on the default layer)


Certainly missing other often used keys, like colon, semicolon, comma, quotes, /, ?, <, >. Guessing he must have some way of dealing with all that.

Higher res picture of the keyboard: https://images.squarespace-cdn.com/content/v1/5caa4adb92441b...


This style of keyboard is called "40%" and, just as you would press shift+1 to input an exclamation mark, they're invariably set up with additional modifier keys (or chorded sequences) to input the values of all the missing keys.

It's hard to say exactly how this one is configured, because it looks like a custom layout. The kit can be found here: https://5z6p.com/products/plaid-through-hole/


The keys chosen in the primary layout in your link seem much more sensible to me. Trying to imagine what vim is like on the OP's keyboard :)


Small keyboards like this one have programmed layers. You use a key combo to switch between layers, which changes what the keys do.


It's a planck style keyboard, you can find out more here https://olkb.com/planck


It's certainly not using that layout. There is no space key on the default layer. https://images.squarespace-cdn.com/content/v1/5caa4adb92441b...


Many crucial keys are missing. That does not look functional.


That's slick. Gives me a lot of ideas, but also makes me regret passing by those milspec connectors on a pile of surplus.


Looks very cool. Is it comfortable to use?


I don't think comfort is within scope for this device, given its use case.


The back panel is beautiful.


This is beautiful and I am now motivated to make my own as well. And I've figured out what I'm going to do with it.

Since 1999 I've been 'printing to PDF' all the great stuff I've read on the Internet. As a result, I have a 36-gigabyte archive of PDF files.

I'm going to make one of these ultimate boxes, and put my archive out in the wilderness, up a mountain, behind my hut in a deep, dry hole in the rock.

That way I'll always have something great to read when I get up the scrag. ;)


Reminds me of a personalized version of the Long Now Foundation's "Manual for Civilization"


That would be pretty sweet to share as a doomsday bathroom reading collection. Any chance you might throw it up on archive.org?


Oh I dunno, it'd take a fair bit of time to vet for personal data .. I think I'll just make it more of a family/friends heirloom installation, and for those 'in the know' about how to get up to the hut. There's a ton of great stuff in the archive, but I don't think it's something of public interest as much as just my personal tastes in interesting shit ..


No connectivity in TFA. I'd suggest IRLP or better [0]

[0]. https://en.wikipedia.org/wiki/Internet_Radio_Linking_Project


APRS would probably be better, IMO (at the very least, throw a small TNC in the box wired up to the Pi so you've got options).


What does "TFA" stand for?


The fine article. In the same way that rtfm means “read the fine manual”.

At least, that’s the sfw version.


I always read the F as "featured"



Not “Falcon”?



