
That's a worthy goal, but so are "A computer should not betray me if someone else is a user of the computer temporarily" and "I should have full freedom to modify a computer I own." And I think you can't satisfy all three at once.

You can satisfy "do not betray the user" and "do not betray the owner" if you have a locked-down computer like a Chromebook that only accepts signed firmware and signed OSes. That way, no user of the computer, owner or otherwise, can make permanent changes to it that will compromise other users; guests are protected from the owner just as much as the owner is protected from guests.

You can satisfy "do not betray the user" and "have full freedom" if you have a computer with no protections against OS modification (the traditional PC architecture), but that is susceptible to physical attacks, where a user can subvert the owner. And every user needs to subvert the machine afresh in order for the computer not to betray them to the previous user; this is a risky model for things like shared computer labs.

You can satisfy "do not betray the owner" and "have full freedom" with an architecture where the owner gets a password/key that unlocks management functions or BIOS reconfiguration, and physical access (short of desoldering) is insufficient to access those functions.

I'm a fan of that last model. Of course, I do have the resources to own a computer of my own for my personal computing, which biases my preferences. There's a good argument for building a more just world for people whose only computers are shared ones (libraries, work-issued laptops, etc.), but I think smartphones are becoming ubiquitous enough that we can expect everyone to soon have a device they own for their own computing.




> You can satisfy "do not betray the owner" and "have full freedom" with an architecture where the owner gets a password/key that unlocks management functions or BIOS reconfiguration, and physical access (short of desoldering) is insufficient to access those functions.

The problem with this model is that it destroys freedom as soon as the user isn't the owner. That's roughly what corporate IT departments are looking for, but it's even more problematic when a person goes to the store and brings home a device that some corporation has already appointed itself "owner" of. In other words, the third model devolves into the first.

And it doesn't really buy you anything. If you use FDE and erase the decryption key from memory when you're not using the computer, then you don't need hardware to protect your data; the math is already protecting it. Past that, all that's left to prevent is compromise of the boot loader, but there detection is just as good as prevention, which makes prevention an unnecessary evil.
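To make the "math is already protecting it" point concrete, here's a toy sketch (assuming Python's hashlib and the third-party cryptography package; the passphrase and data are placeholders, and Python can't truly zeroize memory, so it's only illustrative): once the derived key is discarded, an attacker with full physical access holds nothing but salt, nonce, and ciphertext.

    import os
    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

    # Derive the disk key from a passphrase; the key never touches the disk.
    salt = os.urandom(16)
    passphrase = b"correct horse battery staple"  # placeholder
    key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000, dklen=32)

    # "Full disk encryption" for one block of data at rest.
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, b"user data at rest", None)

    # When the machine isn't in use, drop the key. What's left is protected
    # by AES-256-GCM and the KDF, not by any tamper-resistant hardware --
    # and no hardware in the attacker's hands helps them recover it.
    del key, passphrase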

And lab environments don't need this either. There you set the flash write-protect jumper, put padlocks on the computers, don't even install hard drives in them, and boot from the network. It's the same situation -- detection is more important than prevention. Someone can cut the lock, but then you know that computer is compromised.


It's a large topic and you're right, we will have to make trade-offs. For the most part I'm okay with giving up "I should have full freedom to modify a computer I own" in cases where I voluntarily allow others to use it.

One possibility I like to think about for PCs is replacing usernames with a root of trust (a hash). A trusted ROMed hypervisor downloads the matching configuration (or loads it from cache) and netboots it.
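A minimal sketch of how that could look, assuming a SHA-256 hex digest as the identity; the cache path, config server URL, and load_config function are all hypothetical, not an existing protocol. The hash both names the user's environment and lets the hypervisor verify whatever it fetched or cached before netbooting it:

    import hashlib
    import urllib.request
    from pathlib import Path

    CACHE = Path("/var/cache/hypervisor")           # hypothetical local cache
    CONFIG_SERVER = "https://config.example/blobs"  # hypothetical config/netboot server

    def load_config(root_of_trust: str) -> bytes:
        """Return the boot configuration named by its SHA-256 hex digest."""
        cached = CACHE / root_of_trust
        if cached.exists():
            blob = cached.read_bytes()
        else:
            with urllib.request.urlopen(f"{CONFIG_SERVER}/{root_of_trust}") as resp:
                blob = resp.read()
        # Accept the blob only if it matches the hash the user presented;
        # a tampered cache or server is detected here rather than prevented.
        if hashlib.sha256(blob).hexdigest() != root_of_trust:
            raise ValueError("configuration does not match its root of trust")
        if not cached.exists():
            CACHE.mkdir(parents=True, exist_ok=True)
            cached.write_bytes(blob)
        return blob  # handed to the hypervisor, which netboots this environment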

But ultimately it's bigger than PCs. Embedded computing is becoming pervasive, and it makes it easier to monitor and control what people can and cannot do. Today people accept that owners or manufacturers can set whatever policy they like, even if it's user-hostile.



