Open Hardware on Its Own Doesn’t Solve the Trust Problem (bunniestudios.com)
244 points by sabas_ge on Dec 27, 2019 | 76 comments



Excellent work. I completely agree that today the easiest way to develop trustworthy hardware is to use FPGAs.

Two things to look forward to:

   1. Usage of open source FPGA synthesis and implementation tools
   2. Usage of open source FPGA chips :)
I've already seen some traction happening for open source FPGA tools, but open source FPGA chips are only in my head (as far as I know).

I'm a chip designer myself, and for years I have been thinking about kickstarting something to pay for a tapeout of an open source FPGA. If anyone is interested, let me know; I live in Ontario, Canada.


How will you know the mask of your FPGA hasn't been changed?

I think the idea behind all of this is sound, but at some point we will have to accept that there will always be some remnant of insecurity, unless you are willing to create your own fab or build your CPU up out of discrete transistors, which can be (1) exhaustively tested and (2) are too simple to contain anything nefarious, given that there is no way to know where in a circuit a given transistor will end up.


A couple of things:

   1. FPGAs are easier to verify because they are regular structures.
   2. How do you insert a backdoor into an FPGA in the supply chain if you don't know the exact logic that is going to be loaded onto it?


> How do you insert a backdoor into an FPGA in the supply chain if you don't know the exact logic that is going to be loaded onto it?

Popularity of certain open core designs might be one way to gain advance knowledge of how an FPGA might be used.

That suggests an interesting option: scramble the input to an FPGA in such a way that the device still works but it is even less predictable how its internal connections will be used (otherwise you could take a number of open core designs and arrange for your attack to work with those configurations, which might be detectable in hardware or in the toolchain).

Better yet, scramble the bitstream on every boot (but what would do the scrambling?).


TFA says “We rely on logic placement randomization to mitigate the threat of fixed silicon backdoors“ and goes on to talk about the tools they are working on, but there is still some way to go.


Ah good catch. I must have picked it up while skimming, but it also seems kind of obvious.


From the article:

> The placement of logic within an FPGA can be trivially randomized by incorporating a random seed in the source code. This means it is not practically useful for an adversary to backdoor a few logic cells within an FPGA. A broadly effective silicon-level attack on an FPGA would lead to gross size changes in the silicon die that can be readily quantified non-destructively through X-rays. The efficacy of this mitigation is analogous to ASLR: it’s not bulletproof, but it’s cheap to execute with a significant payout in complicating potential attacks.
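
In practice, with the open iCE40 toolchain that randomization can be as simple as feeding a fresh seed to place-and-route on every build. A minimal sketch (assuming yosys and nextpnr-ice40 are installed; the design file names are hypothetical):

    import secrets
    import subprocess

    # Synthesize the design to a JSON netlist (hypothetical design.v).
    subprocess.run(["yosys", "-p", "synth_ice40 -json design.json", "design.v"], check=True)

    # A fresh random seed per build shuffles placement/routing decisions, so a
    # backdoor targeting specific logic cells can't rely on where the
    # interesting logic will land.
    seed = secrets.randbelow(2**31)
    subprocess.run([
        "nextpnr-ice40", "--up5k",
        "--json", "design.json",
        "--pcf", "design.pcf",
        "--asc", "design.asc",
        "--seed", str(seed),
    ], check=True)
    print(f"built bitstream with placement seed {seed}")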


There is active research around the topics of logic locking and logic obfuscation. You can get a feel for the state-of-the-art by following HOST: http://www.hostsymposium.org/


Yep. Great times are coming :D

I think if you can enforce that the user resynthesizes the design on their own, and you make sure the synthesis/placement is different each time, you are way ahead of any other ASIC alternative in terms of trust.


But that would automatically limit the deployment to a very, very small portion of the public, those tech savvy enough to do that. A few thousand to tens of thousands of people worldwide. Unless they were to become ambassadors of sorts - and assuming anybody else would even care - that would still leave the rest wide open.

Cracking that is a difficult problem: you would have non-tech-savvy consumers who need - or at least, that's what we think; your average consumer doesn't care at all - to gain access to secure devices. It would require a very large, visible and super embarrassing event to change the typical 'privacy is dead, get over it' mindset into 'give me that secure hardware'.

Right now the only people who would be interested are those that rightfully have something to fear from nation state level actors (spies, dissidents, politicians, would-be whistleblowers). And using a device like this would make them stand out like sore thumbs.


> that would automatically limit the deployment to a very, very small portion of the public, those tech savvy enough to [resynthesize their CPU from source]

Every time someone opens the Facebook web page, their browser recompiles the Facebook application from JS source, typically using ASLR. Facebook's user interface is vastly more complex than a CPU. Yet Facebook is not limited to a very, very small tech savvy portion of the public.


Facebook's user interface is at least a few orders of magnitude less complex than a CPU. What would lead you to think otherwise?


Simple measures of complexity lead me to think otherwise. For example, the size of the 6502 design team (8 people, four of them circuit designers, several of whom were doing things like design rule checks, working from August 1974 to June 1975, about 10 months) versus the size of Facebook's frontend team (within a factor of 3 of 1000 people, with 15 years of work behind them), the number of transistors in the 6502 (3,510) versus the number of bits in Facebook's minified frontend code (on the order of 100 million), and the line count of the Verilog designs for James Bowman's J1A CPU and Chuck Thacker's thing for STEPS (both under 100 lines of code) versus the line count of just React (a small part of Facebook's code).

You can try to convince people that software that has had 250 times as many people working on it for 18 times as long is "a few orders of magnitude less complex than" the 6502, but I think you're going to have an uphill battle. Three orders of magnitude less complex than the 6502 would be four transistors, a single CMOS NAND or NOR gate.


That's when you could do layout by hand. I thought you meant a modern CPU, which is what that UI runs on. Good luck using Facebook on a 6502, which really brings up the rest of the subsystems that collaborate with the CPU.


Yes, that's when you had to do layout and design rule checks by hand, and you didn't have SPICE, so you couldn't run a simulation --- you had to breadboard your circuits if you weren't sure of them. Before that, you had to do the gate-level design because you didn't even have Verilog, much less modern high-level HDLs like Chisel, Migen, and Spinal. Before that, you had to desk-check your RTL design, because mainframe time was too expensive to waste finding obvious bugs in designs that hadn't been desk-checked. That's why the 6502 took eight talented people ten months. Nowadays you don't need to do any of that stuff, so it's much easier now to design a CPU than it was in 1974.

It's true that you need a faster CPU than a 6502 to run Facebook, but that's a matter of transistor count and clock speed much more than logical complexity. To a great extent, in fact, since both transistor count and clever design can improve performance, you can trade off transistor count against logical complexity if you're holding performance constant. (As a simple example, a 64-bit processor can be the same logical complexity as a 16-bit processor --- you can even make the bit width a parameter in your Chisel source code. An 8-bit processor needs to be more complex because an 8-bit address space is not practical.) Such a tradeoff is not an option for Intel, who need the best possible price-performance tradeoff in the market, which involves pushing hard on both transistor count and logical design.
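
To make the parameterized-width point concrete, here's a minimal sketch in Migen (one of the HDLs mentioned above); the module and signal names are purely illustrative. The description of a 16-bit and a 64-bit datapath is literally the same code, only the parameter changes, so the logical complexity stays constant while the transistor count scales:

    from migen import Module, Signal

    class Accumulator(Module):
        """Adds its input into a register every clock cycle; width is a parameter."""
        def __init__(self, width):
            self.din = Signal(width)
            self.acc = Signal(width)
            # Same description regardless of width; only the transistor count changes.
            self.sync += self.acc.eq(self.acc + self.din)

    acc16 = Accumulator(16)  # "16-bit" datapath
    acc64 = Accumulator(64)  # "64-bit" datapath, identical logical complexity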

Even if we take Intel's current designs as a reference, it's absurd to suggest that they're even as complex as Facebook's user interface, let alone multiple orders of magnitude more complex. Do you literally think that Intel has hundreds of thousands of employees working on CPU design? They don't even have multiple hundreds of thousands of employees total. Do you literally think that the "source code" for a 64-core, 64-bit 30-billion-transistor CPU like the Amazon Graviton2 --- thus less than half a billion transistors per core --- is multiple gigabytes? Like, several bytes per transistor?

Let's look at a real CPU design it's plausible to run Fecebutt's UI on. https://github.com/SpinalHDL/VexRiscv is an LGPL-licensed RISC-V core written in SpinalHDL, an embedded DSL in Scala for hardware design. The CPU implementation is a bit less than 10,000 lines of Scala, but only about 2500 of that is the core part, the rest being in the "demo" and "plugin" subdirectories. There's another few thousand lines of C++ for tests. (There's also 40,000 lines of objdump output for tests, but presumably that's mostly disassembled compiler output.) You can run Linux on it, and you can run it on a variety of FPGAs; one Linux-capable Artix 7 configuration runs 1.21 DMIPS/MHz at 170 MHz.

This is not terribly atypical; the Shakti C-class processor from IIT-Madras at https://gitlab.com/shaktiproject/cores/c-class (1.72 DMIPS/MHz) is 33,000 lines of Bluespec, according to

    # count non-blank, non-comment lines of Bluespec source
    find c-class/ -name '*.bsv' -print0 | xargs -0 cat |
        sed 's,//.*,,' | grep -P '\S' | wc
Shakti and VexRiscv are about two orders of magnitude more complex than a simple CPU design like the J1A or Wirth's RISC, but they are full-featured RISC-V CPUs with reasonable performance, MMUs, cache hierarchies, and multicore cache-coherency protocols, and they can run operating systems like Linux.

In summary, a simple CPU is about a hundred lines of code and is reasonable for one person to write in a day or a few days. A modern RISC-V CPU with all the bells and whistles is about ten thousand lines of code and is reasonable for half a dozen people to write in a year. Facebook's UI is presumably a few million lines of code and has taken around a thousand talented people over a decade to build. Intel's and AMD's CPUs presumably represent around the same order of magnitude of effort, but much of that is the verification needed to avoid a repeat of the Pentium FDIV bug, which both doesn't add to the complexity of the CPU, and also isn't necessary either for Facebook's UI or for a core you're running on an FPGA.

Ergo, a full-featured modern CPU is about two or three orders of magnitude less complex than Facebook's UI, and a CPU optimized for simplicity is about two or three orders of magnitude less complex than that.


Aren't you ignoring a whole host of physical design complexities? Power, clock speed, signal integrity, packaging, manufacturability and yield? Yes, implementing the design in an FPGA solves some of those, but not all.

I guess your overall point is that it could be possible to provide people with source code, have them push one button, and get a working bitstream out (just the same as we simply browse to facebook.com and get a working app). That assumes that the designers know the target FPGA and work extra hard to make sure that their design meets timing/power/etc. budgets with any randomized placement and routing for that FPGA. Hmm, yeah, I guess that probably still is easier than creating Facebook's UI, as long as we can assume some constraints.


> it could be possible to provide people with source code, have them push one button, and get a working bitstream out (just the same as we simply browse to facebook.com and get a working app).

Right.

> packaging, manufacturability and yield

Using an FPGA solves those problems.

> signal integrity,

When we're talking about digital computing device design, rather than test instrument design or VHF+ RF design, there's a tradeoff curve between how much performance you get and how much risk you're taking on things like signal integrity, and, consequently, how much effort you need to devote to them.

> know the target FPGA

> timing/power/etc. budgets

> Power, clock speed

Similarly, those are optimizations. Facebook actually has a lot of trouble with power and speed, I think because they don't give a flip --- they aren't the ones who have to buy the new phones. They have trouble delivering messaging functionality on my 1.6GHz Atom that worked fine on a 12MHz 286 with 640K of RAM, so they have something like three orders of magnitude of deoptimization. (The 286 took a couple of seconds to decode a 640x350 GIF, as I recall, and Facebook is quite a bit faster than that at decoding and compositing JPEGs --- because that code is written in C and comes from libjpeg.)


Not really. The complexity of a basic open source CPU can easily be far less code than a massive UI application.


Presumably you'd start with the standard sort of attacks, e.g. compromising any kind of hardware random number generation present.


In solving this problem, I think there is no perfect solution right now, just steps in the right direction: making attacks harder, rather than impossible.

The article is long so it is normal that many people did not read it to the end, which is a shame because I think the conclusion is really important:

"I personally regard Betrusted as more of an evolution toward — rather than an end to — the quest for verifiable, trustworthy hardware. I’ve struggled for years to distill the reasons why openness is insufficient to solve trust problems in hardware into a succinct set of principles. I’m also sure these principles will continue to evolve as we develop a better and more sophisticated understanding of the use cases, their threat models, and the tools available to address them."

It is a quest. It will be made of a lot of partial solutions. FPGAs are just easier to inspect, and their functions are harder to backdoor if you don't know what they will run. Harder, but by no means impossible. But at this stage, if we can make things 50% harder in 50% of the cases, that's progress.


I wonder if you can monitor energy usage (with an external chip) and compare it to what is expected to catch major changes.

So for the FPGA you could load it with a RISC-V core and then run that core through some performance load. If the energy usage has changed a lot it may well be doing something nefarious. Bonus points if you can have a (set of) reference FPGAs in the cloud that you can compare arbitrary workloads on, so that it is harder to predict and be stealthy about nefarious activities.

Use side-channel sources of information, where possible, to drive down the scale of changes possible.


I think that at some point in the future 'zero trust' will extend all the way down to the hardware level with individual components exchanging keys or otherwise nothing will happen. There simply won't be a safe perimeter within which you can trust another piece of hardware. And that's probably as it should be because a modern computer is better thought of as a network of - hopefully - collaborating processors than a single CPU with some RAM and peripherals.

Any one of those can be turned against you.


This design does actually have a second, external FPGA, which is in the "Untrusted" domain. It's an iCE40UP5K, and it acts more as the power management IC that turns the secure domain on and off.


> How will you know the mask of your FPGA hasn't been changed?

https://spectrum.ieee.org/nanoclast/semiconductors/design/xr...

This actually sounds reasonable with an open source model. Masks are open, so a third party could xray chips coming out of various fabs. Since that's a nondestructive process, identical chips could be tested by multiple parties, and the community can compare notes.


Some of the mask changes suggested in this thread would have pretty serious security implications and would be very hard to detect so I'm not sure if that holds.


Okay, yeah. Read some papers about different attacks that avoid detection through such means. OTOH, those attacks do seem to rely on the simplistic nature of consistency checks. If this were used to make an open FPGA, it seems like we could run a very rigorous set of test structures that would exhaustively test the operations of various devices on-chip.


> unless you are willing to create your own fab or build your CPU up out of discrete transistors [...]

It’ll happen someday. I think there are enough hobbyists interested in home manufacturing (of all sorts of kinds) that we’ll eventually have low barrier to entry home semiconductor fabs. They’ll probably sacrifice performance for simplicity — I can’t imagine a home fab ever being cutting-edge — but for most applications that’s fine.


Not in the foreseeable future. The chemicals used are not for home use.


I'm planning a free and open source Path Programmable Cell Array or as you describe it, an open source FPGA.

This solves both the hard #1 tools problem and the easier #2 problem of the $25k-$100k mask cost.

I contacted you on Linkedin to see if we can work together on releasing this.


I've seen things like this (https://www.crowdsupply.com/tinyfpga/tinyfpga-bx) crop up on CrowdSupply. Not sure how close it is to what you have in mind.


IMO #1 is the Hard Problem. #2 seems, to me, like a minuscule fraction of the effort. In this day and age, with modern tooling, something similar to a Spartan-3E could, I would think, be done practically by a one-person team. The only obstacle would be money for the masks.


> open hardware is precisely as trustworthy as closed hardware. Which is to say, I have no inherent reason to trust either at all

I think the sentence should be rewritten as "closed hardware is precisely as untrustworthy as open hardware", meaning that open reveals its limits and assumptions while closed pretends there are none and everything is secure, when it's precisely not.

The trust, IMO, is not in the product; it is in the company or team building the product.

> I’m a strong proponent of open hardware, because sharing knowledge is sharing power.

This is what, I think, makes a company or team more trustworthy. Not just making a product (even if it's really great and has the ambition to protect millions) but also sharing knowledge with the ambition that more can learn.


One might quibble over implications, but I think yours is basically the right take. As Bruce Schneier said: "security is a process, not a product".


One idea on how to verify equivalence between a design and a physical chip:

If we have the design, could we generate instruction sequences (or, in general, input sequences) and deterministically predict the time required and power consumed to execute those instruction sequences? Then we could fuzz the chip with a bunch of generated code and measure that the consumed time and power matches what we expect. Any backdoor would throw off the measurements.
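
A very rough sketch of what that fuzz-and-compare loop might look like (run_on_chip and predict are hypothetical callbacks, and as the reply below points out, choosing the tolerance is the hard part):

    import random

    def check_chip(run_on_chip, predict, n_trials=1000, tolerance=0.05):
        """Fuzz the device with random instruction sequences and flag any whose
        measured time/energy deviates from the model by more than the tolerance."""
        suspicious = []
        for _ in range(n_trials):
            seq = [random.getrandbits(32) for _ in range(256)]  # random instruction words
            t_meas, e_meas = run_on_chip(seq)  # measured seconds, joules
            t_pred, e_pred = predict(seq)      # model-predicted seconds, joules
            if (abs(t_meas - t_pred) > tolerance * t_pred or
                    abs(e_meas - e_pred) > tolerance * e_pred):
                suspicious.append(seq)
        return suspicious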

Can anyone who knows hardware better comment on whether there are other kinds of attacks that this wouldn't cover?

This idea is inspired by a paper I read once which used a somewhat similar approach for verifying that a hardware system hadn't been infected by persistent firmware malware. The authors had the system compute a function which had a known memory-optimal implementation which required the use of all the persistent memory available in the system (including firmware, etc.). Unfortunately I can't find the paper now.


While power signatures are repeatable on a single-chip basis, from chip-to-chip the variation in individual transistor performance will require a tolerance to be applied on the threshold criteria for determining if a given power signature is correct or not across a population of chips. Furthermore, power consumption is also dependent upon temperature and voltage so that will need to be carefully controlled for the measurement. The wider the tolerance on the power measurement, the larger the implant that can be buried.

Unfortunately transistor performance can vary quite widely for a single design; recall that a single CPU mask design is often sold in several speed grades. These aren't different designs, they are all the exact same design but then tested to pass at a given frequency and power profile.

My intuition is that a simple implant would probably escape power characterization, and larger implants can be left off until a trigger condition is met. That condition could be made so that fuzzing is highly unlikely to hit it, e.g. a particular 64-bit number pattern has to appear in a given register to power on the implant.

The technique may be suitable to detect gross anomalous execution in a single well-characterized chip but I feel any robust criteria against false positives would also leave a sufficiently large margin for small implants to slip through undetected.


Thanks for the in-depth reply! It makes sense that the primary flaw is that nondeterministic attributes of a particular physical chip made from a design cause its individual characteristics to vary too widely.

It's interesting - there is a loose analogy to reproducible builds chasing down and squashing every source of nondeterminacy in software builds. I'm guessing it's not possible to do that for chip fabs, for deep physics reasons plus reasons of efficiency, but maybe some other trick (recording the "random seed"?) could be used... And the thought that verified chips require a verified chip fab process is appealing! But I know nothing about chip fabrication processes, so I'll leave it at that.


Yeah, state space is also a huge problem: even something as simple as two random numbers in two different 64-bit registers gives you 2^128 possible inputs. That's basically impossible to stumble across randomly, so fuzzing would never show it.
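
Back-of-the-envelope, assuming a (very generous) 10^9 fuzz inputs per second:

    trigger_space = 2 ** 128     # two hidden 64-bit register values
    rate = 1e9                   # fuzz inputs per second (optimistic assumption)
    years = trigger_space / rate / (3600 * 24 * 365)
    print(f"{years:.2e} years")  # ~1.1e22 years to sweep the whole space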


Well, math-wise it's kind of impossible to know if a black box implementation is malicious from outputs alone. It's in a similar vein to the halting problem. Whether side channels such as power work, I would have to find the paper, but what you're suggesting is impossible, at least in general, from my understanding. I have also read a few papers using EMF emissions. They all certainly would increase attack difficulty, though.

There's also the problem of margin of error. Could an attack hide within the allowed tolerances? And if the tolerance is too small, a lot of genuine/unaltered devices may trigger false positives.


Every chip design created also has a large number of test vectors created with it. If the test team has done their job well then those will include random fuzzing and power monitoring like you mention.

As I read this article I thought about this as well. Every chip is tested as part of the manufacturing process, but as Bunnie points out, to be really secure you can't just trust the manufacturer when they say the tests all passed.

If there were some way to run all the tests on the finished silicon product myself, would I trust it? It's very hard (expensive, time consuming) to test everything in a modern chip; however, it would certainly make it harder for a malicious mask change to go unnoticed.


Not an expert, but AFAIK that is how most "remote attestation" schemes work, so it seems like a viable approach.


Related project: https://betrusted.io

(announced towards the 2nd half of the post)


That may be so, but if I have to choose between open source software and closed software I know which one I trust more, and I would assume the same goes for open hardware, in spite of the difficulties of transferring trust at the hardware level because of the parties over which you have no control. Of course, if your threat model includes parties changing the masks of your chips then all bets are off, but in general more openness is better, and my implicit trust in people working on open source software and hardware is at a different level than the alternative.

If only because the former seem to be driven by altruistic motives, while the alternatives have already been shown many times to be willing to sell their - and your - soul to the devil for a price.

Open source hardware would need a verification and inspection method that reliably determines whether or not the manufacturer delivered what they said they would if you want that level of trust. And even then those tools could be compromised and so on.

To put it more simply: between, say, Intel or AMD and 'Bunnie's chip factory' I know which one I would trust more, because with Intel and AMD I know for sure that there will be a bunch of misery included, and with Bunnie I would at least know that he didn't include it himself and would do his best to prevent others from doing so. Trust, eventually, is always going to be in people.

I'd love to see his 'non destructive method for the verification of chips' become a reality. It would be at least an interesting exercise to compare that what we have with what we should have.

And if funding is an issue, this is exactly the sort of thing where I would be very happy to throw money at a kickstarter.


> Trust, eventually, is always going to be in people.

Funny thing about trust: you can trust your enemies to behave poorly too. Trust seems to be about predictability first and foremost, and great mathematical work is being done to factor humans out of it. Autonomous vehicles are subject to a formalized version of trust already, and engineers are working to get dependent type theory to do verification for them.


I just got rid of my Google Home devices; I gave them away as Christmas gifts and noticed half of those in the gift exchange didn't want any spying devices either.

Overall I really enjoyed using them, especially the digital picture frame, but as long as they are spying devices to show you creepy ads (created by Google and Amazon) then I'm not interested. I wish Apple would offer comparable tech at a normal price point. If not Apple, then another company that is all about privacy, where the device only connects to the Internet to download and store daily weather and traffic info. It would be a home-network-only device that listens when you summon it and alerts you (via email or text) that a hacker is trying to hack it if its Internet channel for downloading info is compromised.


Obviously, any device that communicates with an untrusted 3rd party whenever it's turned on is not even trying to be trustworthy. Watching for any unexpected network traffic would be a very basic step in assessing the trustworthiness of a device.

I suppose, lacking the kind of efforts that Bunnie is making, the best you can do is start with relatively simple mass produced hardware, purchase it in a way that it can't be tampered with specifically for you, install an open-source OS which doesn't communicate with 3rd parties, check the software checksums.
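
The checksum step is the easy part to automate; a minimal sketch (the file name and published digest are placeholders):

    import hashlib

    def sha256sum(path, bufsize=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    # Compare against the digest published by the OS project (placeholder value).
    expected = "..."
    assert sha256sum("install-image.iso") == expected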

Some applications, like vote counters, may not even need an Internet connection. Creating a secure system to count votes isn't something I'd like to be asked to do.


I think there's always another layer of trust issues no matter how trustworthy your system is.

It might be that you shouldn't even trust the physical environment; even the power supply can be used to do evil things. Or radiation, or even ambient temperature.

Don't forget code itself can affect the environment in unobvious fashion.

EDIT: My point is that we need to be aware that security will never be a solved problem. We also can't treat security as a software-only issue; we have to take a holistic view of the whole system.

There's of course the point where risk mitigation is not worth the cost. That's another matter.


That's true, but if I'm reading the subtext of your post correctly, you're saying that this is pointless, and I don't think it is.

For a lot of people, we're well below the level of security where hardware security is relevant. The average person is still willing to run software that downloads/installs/runs malware on their machine (browsers with Javascript enabled). So in that sense, we've got a lot of work to do to get developers and consumers to practice better security at the software level before the hardware level is even a significant concern.

However, I do hope for a future where software security is both a fairly solved problem and the norm, and if that time comes, I want to be ready for the inevitable arms race when hackers move to hardware vulnerabilities.

Recognize that yes, there are always more layers, but each layer of vulnerability has decreased capabilities. It's a lot easier to read someone's hard drive using an installed app with full filesystem permissions than it is to read the same hard drive using their power supply.

Also recognize that these layers require more capability from your attacker. A Russian hacker can write an app that reads people's hard drives from anywhere in the world. The power grid simply lacks the networking infrastructure for long-distance remote access in many cases, requiring more physical locality.


> That's true, but if I'm reading the subtext of your post correctly, you're saying that this is pointless, and I don't think it is.

I'm not saying this is pointless at all! Just that so far there seems to always be another layer that can be used to circumvent the protections.

I guess what I'm really saying is that we should never take security for granted. Instead, we should look at the whole system and try to find the easiest attack vectors, which might not be purely software issues anymore.

For example, at some point we might need to consider the physical effects of the code being executed or how the physical environment can affect it.

When the software is sufficiently strong, the attacks will be directly or indirectly against the hardware.


I think we're in agreement, then. I'm sorry for accusing you of saying something you didn't say!


So far though, software has been the soft underbelly of secure systems. The hardware is assumed to be compromised at a level that is shockingly bad and nobody seems to care, we just accept it and move on (IME, PSP, TrustZone).


I once read an article about computer security that talked about why your passwords are displayed as asterisks but omitted mention of Van Eck radiation.

https://en.wikipedia.org/wiki/Van_Eck_radiation


You are right, but within the scope of chip design we can solve all that. What we cannot solve is supply chain trust (how do you know your temp sensor has not been tampered with?).


Or your temperature sensor being rendered useless via a clever attack?

Perhaps by something as simple as a clever pattern of temp sensor reads causing your code to think the temperature is in the safe range for your application?

Or causing your code to execute extra multiplies to generate heat to hide a momentary temperature drop.

My point is even if you could 100% trust your supply chain and 100% validate your silicon matches your design, there are still residual issues.

For example, power supply glitch attacks [0] are nowadays well known, including techniques to make them sufficiently reliable.

[0]: https://www.google.com/search?q=power+glitch+attack


You should be able to exhaustively test a temperature sensor; it is typically a two-lead passive device.


It was just a simple example. There are countermeasures WAY more complex than a simple two-lead sensor. I was part of a CC-certified chip design, and I can tell you we had to implement tests for every single countermeasure in the chip. But you still have no means to check whether your sensor/countermeasure has been tampered with altogether in the supply chain.


> There are counter-measures WAY more complex than a simple two lead sensor.

Fair enough.

Has there to your knowledge ever been documented evidence that a mask had in fact been tampered with in a way that caused a system to be compromised?


During a CC certification, the design house, mask shop, and fab are certified to reduce the chances of the chip being tampered with. The certification ensures that all those places have decent security practices and protocols. It helps, but it is quite far from completely mitigating the risk.

I don't have any reference of a mask being modified to give you, but it is so easy to do that we don't actually need evidence to be worried.

If you think about it, just changing the implantation parameters of the transistors that form a ring oscillator used for random number generation can bias it (and this does not even require modifying the mask).


Ah yes, that makes good sense. I once built a hardware RNG and it was surprisingly hard to keep it stable over the longer term to satisfy the certification criteria. In the end I managed but it was a lot of analog voodoo and I can see how easy it would be to tamper with that in a way that would not be detectable unless you monitored the device continuously.

RNGs are a weak spot. Thank you for that example.


Randomly pick samples every now and then, at randomized intervals, and test them?


Ideally, any piece of open hardware would be made by multiple competing companies and they would all be perfectly interchangeable. If one was discovered to be tampered with, you could easily switch, the offending company dies, and maybe some people are actually held accountable so others would be discouraged to try it again.

While the industry is still innovating, such an ideal world may not arise, but when things settle down and just about anyone can make a competitive version of any type of chip, it could, but only if we demand open hardware now to allow that competition to start forming.


The modularity of the supply chain is part of the challenge, as Bunnie illustrated. You could have interchangeability at the chip level or OEM level, but still have the same bad actor tampering at the foundry, chip packaging house, part distributor, SMT factory, FATP factory, 3PL, retailer, or any of the shipping companies between those.


> open hardware is precisely as trustworthy as closed hardware

Nice work but the conclusion is absolutely unwarranted:

Security is all about mitigating risk. With closed hardware, as with closed software, it becomes much easier to implement backdoors and also to hide who did it and when.

Unsurprisingly, a lot of closed source comes with spyware functions - look at the phone "app market".

By all means an FPGA is better than trusting a SoC, but this does not mean that all hardware is the same.

Management Engine is a good counterexample.


Single-source ISAs of the past relied on general industry verification technologies and methodologies, but open-source ISA-based processor users and adopters will need to review the verification flows of the processor and SoC https://semiengineering.com/will-open-source-processors-caus...


Great post. A well distilled argument as to the perceived virtues of open hardware and the actual root of the trust problem. The post strikes a great balance between showing the problem of complexity and yet highlighting a feasible step in the right direction. Kudos for not ignoring supply chain issues.


Wow, that talk on silicon implants was super interesting.

I commend him for continuing in this quest for trustable hardware. I more or less gave up on it. I fear it won't really be possible until the tech has moved so far that we can produce silicon and PCBs at home on open hardware machines...


I like this subset of the Hardest Problem: trusting other human beings to get enough things right.


I trust my sliderule. (This probably sounds like a snarky joke, so let me qualify it by saying I'm pretty serious.)

I'm an "Apocalyptic": I literally believe that these are the "End Times" and that our global civilization is about to tank (in ~10-50 years.) So I'm not so much worried about spooks in my chips as I am about being able to compute effectively at all, at all.

In that context, I think pretty seriously about what computer hardware will be available in a post-apocalyptic scenario. In that case chips will likely be worthless due to unavailability of datasheets.

The things that will work are sliderules and nomographs[0], henges and other geophysical "calendars", clockwork, fluidics, and relays. You might be able to make discrete transistors.

See the Clock of the Long Now mechanism: https://en.wikipedia.org/wiki/Clock_of_the_Long_Now#Design (Although a large pitch-drop[1] "water" clock[2] would be more reliable and much simpler.)

- - - -

Note that, if civilization doesn't collapse, the Trust Problem will become much worse as IoT advances, and, in the limit, nanotech... and you can't trust anything. Within the limits of physics, reality will become permeated with "ghosts", we will haunt the world with our own daemons. The Daemon-Haunted World. ("'The Demon-Haunted World: Science as a Candle in the Dark' is a 1995 book by astrophysicist Carl Sagan, in which the author aims to explain the scientific method to laypeople, and to encourage people to learn critical and skeptical thinking." https://en.wikipedia.org/wiki/The_Demon-Haunted_World )

- - - -

Maybe we will develop massive interlocking computer networks that respect (your idea of) your human rights, but that's certainly not a solid projection from current trends, eh?

[0] Image search for nomograph: https://duckduckgo.com/?q=nomograph&t=ffcm&atb=v60-1&iax=ima...

[1] https://en.wikipedia.org/wiki/Pitch_drop_experiment

[2] https://en.wikipedia.org/wiki/Water_clock


I'm an "Apocalyptic": I literally believe that these are the "End Times" and that our global civilization is about to tank (in ~10-50 years.)

Why do you think that?

Despite the pessimism in the US and some other parts of the developed world, global civilisation has never been healthier.

It's true that nuclear war is a possible civilisation destroying event (as it has been for ~70 years) but beyond that there are no threats that haven't been there for millennia (eg, large asteroid strikes).

Things like global warming will cause huge destruction in some parts of the world, disrupt and perhaps kill millions of people (eg, Bangladesh isn't in a great way), and probably lead to wars that kill many more.

But even this is far from a global civilisation killer.


I think that total ecosystem collapse and the extinction of wildlife is a possible scenario this century (perhaps even likely, if we continue with business as usual rather than addressing the climate crisis).

You can see how it would happen with e.g. insects dying off: https://news.ycombinator.com/item?id=18541536

Followed by birds that eat insects dying off: https://news.ycombinator.com/item?id=21018916

Followed by the death of everything that depends on birds and insects, which is probably a large percentage of the remaining wildlife. Similar extinction scenarios are plausible with ocean life.

I guarantee that this would result in the collapse of civilization if it were to occur.

This is one reason it's absolutely vital to halt greenhouse gas emissions. Loss of habitat and pesticides are also huge problems for insects and other wildlife, so dealing with global heating is necessary but perhaps not sufficient to save them, but we have a clear deadline for when we must reach carbon neutrality, so that seems like the top priority.


I think that the decrease in biodiversity is dangerous, but not threatening an extinction event yet - or at least I've never seen a credible scientist who works in that field making that claim.

(I'd also note that both of these things seem to be caused primarily by pesticide usage and other agricultural practices, not climate change)


FWIW, In re: "decrease in biodiversity"

I once saw the Monarch migration. I can try to describe it but I can't tell you what it was like. Maybe if I was a poet. Can you feel it if I say "1/m^3"? I was within them. The whole world was butterflies.

It doesn't happen anymore. We killed them. We took their food. They weren't hurting anyone. We didn't mean to kill them. We did it by accident. We weren't paying attention.

A world has already been destroyed.


First, thanks for asking.

Next, let me qualify "global civilization is about to tank..." by adding "...unless we have some sort of Spiritual transformation of some kind." I hope that will happen, but this isn't the forum to get into what are essentially religious beliefs, eh?

Further, I believe that we actually have all the technology we need to enact a wonderful quasi-utopia. I'm a Bucky Fuller fanboy, and he basically laid out the math (he was an engineer) to show that, if we applied our technology efficiently to our problems, we could create a kind of secular utopia "without disadvantaging anyone". That's even before you get into things like applied ecology ("Permaculture" et. al.), or fusion power, etc. We even have the psychological science now to reprogram our minds. If we chose "be better people" as a goal there is no time in history more amenable.

Now then, to answer your question,

> Why do you think that?

The simple answer is that I read some books about ecology as a child and realized that our civilization is predicated on the destruction of life. We literally destroy living things, everywhere, all the time.

(As an aside, I've never understood how people compartmentalize the world as "the environment" and folks who care about other [non-human] lives as "environmentalists", as if the environment is some object or fetish. Every breath is a bond.)

The only reason we haven't tanked already is due to the truly massive inertia and resilience of the living world. We are strip-mining the oceans of biomass and returning plastic. Our agriculture "mines" the soil and causes erosion.[0] Together with our meat animals we outweigh all other terrestrial animals by an order of magnitude.[1] I could go on, at great length.

Since my childhood, I've seen that the ecological destruction has only gotten worse. And, while some people have started to change their attitudes towards Nature, the bulk of humanity seems to be just as self-centered and destructive as ever. And now they are distracted by their phones.

Also, we are absurdly not-quite-smart. E.g. refrigerators that open like cabinets instead of drawers. Every time you open a fridge, the cold air spills out and is replaced by some warm air, and the fridge then has to cool that air off when you close the door. I can't begin to imagine the energy we could conserve just by changing that one stupid detail.

Everything is like that. I could go on, at great length.

You mention nukes...

> The most recent officially announced setting - 2 minutes to midnight - was made in January 2018, which was left unchanged in 2019 due to the twin threats of nuclear weapons and climate change, and the problem of those threats being "exacerbated this past year by the increased use of information warfare to undermine democracy around the world, amplifying risk from these and other threats and putting the future of civilization in extraordinary danger.”

https://en.wikipedia.org/wiki/Doomsday_Clock

So, um, yeah...

We have massive, serious problems and I don't see enough people and nations making the comprehensive changes that would be necessary to avert them.

[0] "Soil and Civilization" https://archive.org/details/SoilCivilization/page/n3 https://www.goodreads.com/book/show/14060428-soil-and-civili...

[1] https://xkcd.com/1338/

> Earth's LAND MAMMALS By Weight [[A graph in which one square equals 1,000,000 tons. Dark grey squares represent humans, light gray represent our pets and livestock, and green squares represent wild animals. The squares are arranged in a roughly round shape, with clusters for each type of animal. Animals represented: Humans, cattle, pigs, goats (39 squares), sheep, horses (29 squares), elephants (1 square). There are other small, unlabeled clusters also. It is clear that humans and our pets & livestock outweigh wild animals by at least a factor of 10. ]] {{Title text: Bacteria still outweigh us thousands to one--and that's not even counting the several pounds of them in your body.}}


> ...the Trust Problem will become much worse as IoT advances, and, in the limit, nanotech... and you can't trust anything. Within the limits of physics, reality will become permeated with "ghosts", we will haunt the world with our own daemons.

I am reminded of the Qeng Ho "localizers" from "A Deepness in the Sky"[1]. They're nanotech devices that provide useful sensor functionality, but they also have hidden features known only to a "programmer archaeologist" who exploits them to advance his plans.

[1] https://en.wikipedia.org/wiki/A_Deepness_in_the_Sky


Exactly! BTW "locator dust" IRL is already becoming a thing: "smartdust" https://en.wikipedia.org/wiki/Smartdust


Slide rules are very limited computers though. They mostly let you do fairly rapid multiplication and division up to limited precision. Also look up trig values and logs (obviously). Probably a few other things depending on the model and its scales.

But really the only thing it buys you is some speed over just pen and paper. (At least assuming you have trig, log, and probably some other tables from a standard math handbook. Which could presumably be derived in a manner I don't recall off the top of my head but likely very painfully.)


You bring up an important point: How much computation do you really need for the good life?



