
I'm generally willing to take statements from Google at face value; they've done unpopular things (killing Reader, etc.), but in my experience haven't been deceptive about them.

I hope they realize how valuable a reasonably secure, "stateless" OS like Chrome OS is for many applications. Chrome OS is far from perfection in this space (and there are a lot of easy ways to make it better, and some hard ways to make it vastly better), but it is head and shoulders above anything else for a lot of deployment models right now.

Hand me a Chromebook with reasonable assurances that the "dev switch" hasn't ever been switched, and that's one of the few general-purpose full-TCB devices I'd accept from someone and mostly trust (modulo hardware hacks, or the switch being thrown, the firmware reflashed, and the switch disabled/reset). It's also a cheap/awesome way to manage fleets of machines in education and corporate/kiosk/etc. environments -- you CAN achieve that with Mac OS or Windows, but it requires a lot of work.

I hope they don't kill it in the future.




> Chrome OS is far from perfection

In fact, I would say that for some users, Chrome OS is near perfection. Take my uncle, for instance, who for years kept getting himself infected with malware, requiring regular visits from me or Geek Squad to wipe his system. When he was ready for his most recent upgrade, I tried him on Mint, but he didn't like the interface. Then I suggested a Chromebook. He LOVES IT. He says it's the best machine he's ever owned. One year later, and not a single issue with malware.

I don't worry about him getting infected. If instead of a Chromebook he had gotten some Android tablet with an OS stuck at 4.x, he'd likely be infected by now.

Android and Chrome OS serve different use cases. Even if Android manages to fix its upgrade issue, I doubt it will ever be as secure as Chrome OS. The blog's promise of "guaranteed auto-updates for five years" sounds like a few people at Google get this. Fingers crossed.


The "perfection" I want is "not having to trust Google as an operator, only as a developer (and, open source!), and being able to plug in my own enterprise as root of trust for XXX-OS, with my own choice of backend services too -- which may include google apps".

I'd also like some non-defeatable indicator in the hardware/display/etc. that the box is in a "good" state. The dev switch doesn't retain state.

These probably don't matter much for individuals but do matter for enterprise. If this existed, even if it were just keys to approve/sign updates and it still required Google services for a lot of the functionality, it would be awesome.


> "not having to trust Google as an operator, only as a developer (and, open source!), and being able to plug in my own enterprise as root of trust for XXX-OS, with my own choice of backend services too -- which may include google apps"

That's a good idea I wish they'd implement. Basically allow you to choose your own sync/updates server instead of having it hardcoded to Google's.

I imagine I could then basically make any modifications to my own personal Chrome OS fork and have them stick.


You're seeing the contradiction between having a verifiably "good" device and the ability to change the update/sync server at will, right?


Write-once provisioning of an enterprise key. If you buy 10k units, you should be able to put your own key in there. Or, have some kind of "we just check keys and add a layer of indirection as a service" for boot and boot-signing keys. A batch of devices (identified by a range of manufacture-time keys) is registered to a specific enterprise, and then that enterprise's keys are allowed to boot. That involves some kind of network service at boot time, or some kind of signed second-stage loader delivered to the devices telling them they're authorized to boot only from a third stage signed by that registered key.
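To make the indirection concrete, here's a minimal sketch in Python (ed25519 via the cryptography package). The stage names and helpers are hypothetical; real firmware would do this in C against a hardware-protected key slot, not like this:

    # Hypothetical sketch: a write-once enterprise key signs a "delegation"
    # naming which key may sign the OS; the device then boots only a third
    # stage signed by that delegated key.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )

    def verify_boot(enterprise_pub, delegation_sig, os_pub_raw, os_sig, image):
        try:
            # The write-once key only ever signs "this key may sign the OS".
            enterprise_pub.verify(delegation_sig, os_pub_raw)
            # The delegated key must sign the actual third-stage image.
            Ed25519PublicKey.from_public_bytes(os_pub_raw).verify(os_sig, image)
            return True
        except InvalidSignature:
            return False

    # Demo: the enterprise delegates to an OS signing key; a device verifies.
    enterprise_priv = Ed25519PrivateKey.generate()  # would live in an HSM
    os_priv = Ed25519PrivateKey.generate()          # OS release-signing key
    os_pub_raw = os_priv.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    delegation_sig = enterprise_priv.sign(os_pub_raw)
    image = b"third-stage OS payload"
    assert verify_boot(enterprise_priv.public_key(), delegation_sig,
                       os_pub_raw, os_priv.sign(image), image)

The point of the indirection is that the burned-in key never signs OS images directly; rotating the OS signing key only requires issuing a new delegation.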


I don't quite understand the need for that level of security. Surely the cases where someone elaborately going into the BIOS and changing such settings leads to an infection are minuscule. It's certainly worth the tradeoff of not relying on an external entity for verification of what's "good".


It's a big deal if you have fleets of machines which travel outside your control. It lets you treat them as identical (modulo hardware damage).

It's also a big deal because on a conventional machine, malware can remotely download, root, reflash, and persistently doom you; no physical access needed.


Why wouldn't you be able to treat them as identical?

How would being able to change the sync/verification server make Chromebooks vulnerable like those conventional machines?


> Why wouldn't you be able to treat them as identical?

If a given machine can have arbitrary software installed on it, then that machine can behave differently than other machines in the fleet.

If all machines in your fleet can only install and/or run software signed with the company key, then the company can ensure that the software load for all machines remains the same and -thus- all machines behave identically.

> How would being able to change the sync/verification server make Chromebooks vulnerable...

If the software repo and/or verification server can be changed by a third party, and the trusted keys installed in the machine can be changed, then it's trivial to pwn such a machine. If only the servers can be changed, then it requires loss of control of one's signing keys to pwn such a machine. [0]

[0] Or -obviously- a sufficiently bad privilege escalation bug can pwn such a machine.
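To make that distinction concrete, a toy sketch (hypothetical names; a real update client looks nothing like this). The server URL is an ordinary setting, but the verifying key is immutable, so pointing the device at a hostile server gets an attacker nothing without the signing key:

    # Sketch: the update server is configurable, the trusted key is not.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    TRUSTED_KEY_RAW = bytes(32)  # placeholder: burned in at manufacture time
    update_server = "https://updates.example.com"  # freely changeable setting

    def apply_update(image: bytes, signature: bytes) -> None:
        key = Ed25519PublicKey.from_public_bytes(TRUSTED_KEY_RAW)
        try:
            key.verify(signature, image)  # a hostile server fails right here,
        except InvalidSignature:          # unless the signing key itself leaked
            raise SystemExit("update rejected: bad signature")
        # ...flash the verified image...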


Where do they make money on that? The margins on the devices are so slim and go to the OEMs, which Google mostly isn't. I don't think they charge any money for OEMs licensing the OS, and if they do it's likely a pittance. The only way Chromebooks make money for Google is as a gateway to Google services, and allowing people to fork ChromeOS like that puts that revenue source in jeopardy.


> I'd also like some non-defeatable indicator in the hardware/display/etc. that the box is in a "good" state. The dev switch doesn't retain state.

Isn't the verified-boot portion of the firmware impossible to reflash without hardware modifications[1]? Even if the dev switch has been flipped before, a verified boot is a verified boot, and the machine should be in a good state.

I haven't looked at that since the early ChromeOS machines, so has something changed about that?

[1] https://www.chromium.org/chromium-os/chromiumos-design-docs/...


Yeah -- it's supposed to be physically Read Only.

There are some corner cases requiring physical access (you could physically replace the RO firmware, or fake the switch). I don't think it does full remote attestation validation of the local OS before talking to cloud services, although it does some other stuff. I should look into that more.

But really, what I want is the ability to put my own enterprise key in there, in a write-once area, and do exactly what Google does now.


> he'd likely be infected by now

What kind of infections would these be, viruses or malware? The other day a security expert mentioned how bad Android security was, but didn't cite any specific cases. My dad uses a 4.x device, and many others do as well (in India, the low end is 4.x devices). But I don't see these users complaining about viruses or malware, even though they'd usually complain about a virus affecting their Windows OS. Is there any way to know if the devices are infected?


Even Android has a much better security model than your traditional desktop OS. Unless the malware includes a root exploit (in which case all bets are off), traditional "malware" on Android is limited to basically what the user allows it. Malware including root exploits should be rare, at least if the user only installs applications from Google Play. Using applications from other sources, especially to get "free versions" of paid applications, can be really bad.

Unless the user is a complete moron and enables Accessibility access, a malicious app can't, for example, read everything on the user's screen or capture what the user types (the user would need to explicitly allow that by turning on keyboard services). This means that malware can't sniff login information or read the user's e-mail. However, it can still get valuable information (if the user didn't read the permissions Android asks for before installation), like the user's contacts.


> Hand me a Chromebook with reasonable assurances that the "dev switch" hasn't ever been switched

Why would you distrust it if the dev switch has been switched? As long as the write-protect screw hasn't been turned, you can trust the root of trust when the dev switch is switched back.


Have they said whether Android TV is dead or not? Because the Nexus Player has already hit the clearance bin.


what is "TCB"?


https://en.wikipedia.org/wiki/Trusted_computing_base

Consider one thing, though: "trusted computing" as a concept comes from the military. It is a computer the generals can put into the field and trust not to leak classified info (be it to the enemy, or to soldiers learning that their lives are being wasted).

So when you use such a computer, you have to ask yourself "trusted by whom?", because it is also a lovely starting point for the *AA's to cram the file-copying cat back into the bag.


In the War on General-Purpose Computing, a trusted computing base is a powerful weapon, and there are multiple sides that want to wield it against you. The company you work at will want to own your work machine. OEMs will want to own your private machines. The MAFIAA will want to own whatever you use to consume media. Most applications of TC seem to be harmful to society. I've yet to hear a reason why I, as a user, should want a TC platform.


> I've yet to hear a reason why I, as a user, should want a TC platform.

When you are in control of the trusted keys loaded in your machine, this allows you -the computer user- to defend against several classes of attacks, including many Evil Maid attacks.

Many Linux distros and most open source security software projects sign their releases for a really good reason.

"Trusted Computing" takes this a step further by pushing this signature verification into a tamper-resistant part of the computer, and verifying everything on the system, including (on some systems) the bootloader and the UEFI firmware.

It's only when the end-user doesn't have control of the trusted keys loaded into his computer that TC becomes something that can be used for great evil.
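As a toy version of that end-user-controlled setup (names hypothetical, and using Python's cryptography package rather than real firmware): the owner enrolls their own key, and each boot stage must verify against it before anything runs, which is what defeats an Evil Maid's swapped bootloader or kernel:

    # Sketch: owner-enrolled key; each boot stage verified before handoff.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    owner_key = Ed25519PrivateKey.generate()     # enrolled by the machine owner
    stages = [b"bootloader blob", b"kernel blob"]
    sigs = [owner_key.sign(s) for s in stages]   # signed at install time

    def chain_ok(pub, stages, sigs):
        # Fail closed: refuse the handoff if any link in the chain is bad.
        for blob, sig in zip(stages, sigs):
            try:
                pub.verify(sig, blob)
            except InvalidSignature:
                return False
        return True

    assert chain_ok(owner_key.public_key(), stages, sigs)
    # An Evil Maid swaps the kernel; the chain now refuses to boot it.
    assert not chain_ok(owner_key.public_key(), [stages[0], b"evil"], sigs)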


Gotta love how mucking around with the boot sector has come back in vogue in -sec circles.


This is always the issue with mechanisms of control like "trusted computing" and cryptography in general. The question is, "who is in control?"

Cryptography can be used to control you (for example, signed BIOS), or you can use it to be in control (encrypted email). I think that "trusted computing", when properly implemented, could be a powerful tool for users.

In this case, the concept of a "trusted computing base" is different from "Trusted Computing".




