Remote Attestation is coming back (gabrielsieben.tech)
320 points by gjsman-1000 on July 29, 2022 | 246 comments



The problem isn't the capability of remote attestation. The problem is who's using it, i.e. who's defining what "security" means. As noted above, for a company, "security" often means intentionally inhibiting my freedom, not actually securing anything I care about.

We would benefit from a better public discussion of what "security" encompasses. Else, we risk conflating "what MS wants me to do with my computer" with "preventing hackers from stealing my credit card number".

Imagine a world where you could submit personal information to a company, with the technological assurance that this information would not leave that company... and you could verify this with remote attestation of the software running on that company's servers.


> Imagine a world where you could submit personal information to a company, with the technological assurance that this information would not leave that company... and you could verify this with remote attestation of the software running on that company's servers.

That's a classic "road to hell paved with good intentions". The approaching reality is more like:

Imagine a world where to be allowed to use the Internet you will be mandated to run certain software, which reports your personal information to a company you are obligated to use, and whose use of that information is absolutely something you do not want.

Yes, the problem is indeed "who's using it". Unfortunately you aren't going to be able to decide either, and it will certainly be used against you.


Indeed! My imagination was tongue-in-cheek; we're never going to get that.


> The problem is who's using it, who's defining what "security" means?

Ask that question every time you see the word "security" written. There is no such word as bare security.

- security for who?

- security from who?

- security to what ends?

Much of the time security is a closed system, fixed-sum game. My security means your loss of it.


Can you give some examples?


Let's go with online games, which are mentioned in the article. This focus will specifically be on Riot, but any game publisher could do this.

> - security for who?

Riot Games

> - security from who?

The users of their software.

> - security to what ends?

Ensuring that a device (A) is running Windows, (B) is running unmodified Windows system files, and (C) doesn't have a rootkit installed that replaces syscall behavior.

All of this is an effort to prevent cheats that wallhack/aimbot or otherwise give the player an unfair advantage - at least, it ensures cheats aren't loaded so early that the anti-cheat can't detect their influence on the game process.

While I say 'Riot Games' is who benefits, it's all at the request of their users; you can search for 'hacker' or 'cheats' on r/leagueoflegends and see tons of posts from years ago complaining about cheaters scripting (automatically using abilities in the best possible way) and gaining an unfair advantage against them. Every post's comments boil down to "Riot really should figure out how to stop these cheaters". It's a cat-and-mouse game, but it'll be a lot easier to catch the mouse once they can safely enable the remote attestation requirement and only lose 0.1% of their players.

On the less moral side, this can also be applied to single-player games to reduce the chances of a game's anti-piracy protections being cracked.


We don't need to replace trust with complete submission to a third party. Just build trust. Game servers don't need to be a single festering pool of every user who purchased the title. You can just play games with a community that has accountability.

It's like putting a camera network and automated tranq drones in every playground so kids don't play tag 'wrong'.

This insanity of conflating complete submission to a third party with trust or security, when in reality it provides neither because that party is an adversary, is a society-wide mental illness.


But via the same argument, you are also free to look at Riot Games products and say "No thanks, I'm not willing to submit".

I play some games like Valorant which use Ring 0 anti-cheat mechanisms, and to do this I have a Corsair i300 which I bought basically exclusively for FPS, flight simulators, and other games that I enjoy. I'm actually equally unhappy with corporate-provided Mobile Device Management and "Endpoint Protection" technologies being on personally-owned devices, but one clear solution is to just physically partition your devices by purpose and by what restrictions you're willing to tolerate on them. "But I can't do what I want with the hardware that I own" is a bit of a mischaracterization: you can, you just might not also have the right to participate in some communities (those that have 'entry requirements' which you no longer meet if you won't install their anti-cheat mechanisms).

Why tolerate Riot Games, why not "play games with a community that has accountability"? It's simple for me: in the extremely limited free time that I have for this activity, my objective is to click <PLAY> and quickly get into a game where my opponents are 'well balanced' (matched against my own abilities) and servers which are not infested with cheaters.

Without any question in my mind, cheaters utterly ruin online multiplayer games: Team Fortress 2 has been a haven of bots and cheats for several years, and Valve is only recently starting to take steps to address it.

I have exactly zero desire to spend time "locating communities with accountability". I want a matchmaking system provided by Riot Games which simply doesn't tolerate cheating, period. I'm willing to be in that community even with its 'entry requirements'. You may not be willing to submit to those entry requirements and that's okay. You should advocate that games support your desire to launch without anti-cheat protections, and restrict you to playing on 'Untrusted Servers' outside the first-party matchmaking community, where you will enjoy no anti-cheat protection, and you can gather freely with your own "communities with accountability".


While I'm right there with you on physical partitioning as a practical matter of mitigating the damage, it is most definitively not a solution to dealing with the looming threat of remote attestation.

The premise of personal computing is that my computer works as my agent. For any remote party that I'm interacting with - their sphere of influence ends at the demarcation point of the protocol that we interact with. Attempts to dictate what software my computer can run when interacting with them are unjust, and ultimately computationally disenfranchising. Despite the naive references littered throughout this thread to users being able to verify what software companies are running, it will never work out that way, because what remote attestation does is magnify existing power relationships. This is why so many people are trying to fall back to the usual crutch of "Exit", as if going somewhere else could possibly tame the power imbalances.

Practically what will happen is that, for example, online banks (and then web stores, and so on) will demand that you only can use locked down Apple/Windows to do your online banking. This will progress somewhat evenly with all businesses in a sector, because the amount of people not already using proprietary operating systems for their desktop is vanishingly small. Which will destroy your ability to use your regular desktop/laptop with your regular uniformly-administered OS, your nice window manager, your browser tweaks to deal with the annoying bits of their site, your automation scripts to make your life easier etc. Instead you'll be stuck manually driving the proprietary Web TV experience, while they continue to use computers to create endless complexity to decommodify their offerings - computational disenfranchisement.

I'll admit that you might find this argument kind of hollow with respect to games, where you do have a desire to computationally disenfranchise all the other players so it's really a person-on-person game. But applying these niche standards of gaming as a justification for a technology that will warp the entire industry is a terrible idea.


Magnifying power relationships is the entire point of capitalism - consumers have always been at the whim of larger organizations and their wishes, with their only agency being when they decide whether to purchase a product or not. If both Product A and Product B are amazing and so prevalent that you must purchase one to be as productive as others in society, but you don't like certain terms they impose on you, then you don't have any options and must decide to either deal with it or go without using that product. Saying otherwise is effectively suggesting that companies be forced to make products in a certain way to accommodate your requests.


> Magnifying power relationships is the entire point of capitalism

Only if by "entire point of capitalism", you mean the philosophical paradigm that highly centralizing corporations market to gain more power and ultimately undermine the distributed sine qua non of capitalism.

> Saying otherwise is effectively suggesting that companies be forced to make products in a certain way to accommodate your requests.

You're missing market inefficiency and the development of Schelling points based on the incentive for uniformity. In this case specifically, the inability of a company to investigate what I am running on my computer creates the concept of protocols, and keeps each party on a more even footing. Remote attestation changes that dynamic, undermining the Schelling point of protocols and replacing them with take-it-or-leave-it authoritarianism extending further into our lives.


I'm willing to accept remote attestation of an unaltered multiplayer client with one condition: The company stops pushing their kernel level spyware and trusts the operating system. Anticheat providers won't let this happen because there's just too much money involved and they will insist on selling perceived extra value over that provided by the OS.

Earlier this spring, Easy Anti-Cheat crashed the Windows 11 Insider kernel and a good number of games were unplayable for weeks.


Remote Attestation is only one piece of the puzzle; all it verifies is that the user hasn’t tampered with the Windows boot process; it doesn’t stop kernel drivers from being loaded. Since it isn’t a turnkey anti-cheat solution provided by Microsoft, the game developer still has to detect and watch anything that injects code into the process, and look at the list of kernel drivers to see if the player is cheating via new cheat software or maybe even cheat software they made themselves. Effectively, Remote Attestation gives the anti-cheat assurance that its process detections are accurate.
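A toy sketch of that division of labor (every name, digest, and driver here is invented for illustration, not any real anti-cheat or TPM API): RA answers "was the boot chain the expected one?", while a separate anti-cheat check still has to inspect what loaded afterwards.

```python
import hashlib
import hmac

# Hypothetical "known good" boot measurement. In a real deployment this is a
# vendor-published PCR digest verified against a signed TPM quote, not a
# value recomputed locally like this.
KNOWN_GOOD_BOOT = hashlib.sha256(b"bootloader|winload|ntoskrnl").hexdigest()

# Hypothetical denylist the anti-cheat maintains on its own; RA says nothing
# about which signed kernel drivers were loaded after boot.
DRIVER_DENYLIST = {"cheatdrv.sys", "kdmapper.sys"}

def boot_attested(reported_measurement: str) -> bool:
    """RA step: did the machine boot the expected chain?"""
    return hmac.compare_digest(reported_measurement, KNOWN_GOOD_BOOT)

def flag_drivers(loaded_drivers: set) -> set:
    """Separate anti-cheat step: inspect the driver list RA doesn't cover."""
    return loaded_drivers & DRIVER_DENYLIST

machine = {"measurement": KNOWN_GOOD_BOOT,
           "drivers": {"ntfs.sys", "cheatdrv.sys"}}

clean_boot = boot_attested(machine["measurement"])  # boot chain intact
flagged = flag_drivers(machine["drivers"])          # still catches cheatdrv.sys
print(clean_boot, sorted(flagged))
```

The point of the sketch is the last two lines: a machine can pass attestation and still be running a denylisted driver, which is why the attestation result only backs up the anti-cheat's own detections.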


> But via the same argument, you are also free to look at Riot Games products and say "No thanks, I'm not willing to submit".

Informed consent requires that the consenter understand what is happening, know what the implications are, and agree. Riot Games' anticheat software doesn't pass the first two, and is largely irrelevant to the conversation because this use case is a trojan horse anyway.

Community and social graph is a finite resource. I can't just go get another one if you colonise mine.

This is exactly the same argument libertarians have against food safety and labelling regulations. I can't go get baby formula without melamine in it if every brand has it because they price dumped to bankrupt the competition and I don't have a chemistry lab to test for it.

I can't go find another bank if they all switch to requiring attestation. I can't go buy another government. I can't go find a new social graph if everyone on it is on facebook.

Operating systems and CPUs are utilities with natural monopolies, as is communication software. Treating an ecosystem, a community, and a social graph as a fungible good is a blatant lie.


One solution is to detect cheating through behavior and match cheaters with other cheaters. It’s probably way easier to classify players as “def not cheating” vs “maybe cheating” vs “obviously cheating.”

Any cheater will probably still do really well against another cheater while a human won’t have a chance. I think this is kind of like shadow banning?
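A minimal sketch of that bucketing idea (the score source, thresholds, and names are all hypothetical; a real system would derive the score from behavioral statistics):

```python
def pool_players(players):
    """Bucket players by a hypothetical behavioral cheat score in [0, 1].
    Matchmaking then only pairs players within a bucket, so obvious cheaters
    quietly play each other, shadow-ban style. Thresholds are invented."""
    pools = {"def_not": [], "maybe": [], "obvious": []}
    for name, score in players:
        if score < 0.2:
            pools["def_not"].append(name)
        elif score < 0.8:
            pools["maybe"].append(name)
        else:
            pools["obvious"].append(name)
    return pools

pools = pool_players([("alice", 0.05), ("bob", 0.50), ("mallory", 0.95)])
print(pools)  # {'def_not': ['alice'], 'maybe': ['bob'], 'obvious': ['mallory']}
```

The "maybe" middle bucket is what makes this workable: you only need high confidence at the extremes, and borderline players keep getting normal matches while more evidence accumulates.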


I'm not GP but I'm not sure what you'd need examples for. One meaning of security is similar to that of protection. Protection is directional, so to speak.

A company, an organization or an individual can have security guards, security procedures, etc. Security can protect the organization from objectively malicious threats, but security can also mean protection from any real or perceived threat to someone's interests.

Security can also protect an organization from the leakage of embarrassing or potentially incriminating information. An authoritarian regime has security to prevent it from being challenged. Security guards at an industry might stop activists from getting to the grounds to gather evidence of harm to the environment or people. Indeed, security staff would stop unauthorized people regardless of those people's intentions.

All of those are examples of security even if other people's legitimate interests were in conflict with it.

Security is for someone, and from someone or something.


"Imagine a world where you could submit personal information to a company, with the technological assurance that this information would not leave that company... and you could verify this with remote attestation of the software running on that company's servers."

That world already exists, it just doesn't get used much. You can do this with Intel SGX and AMD SEV.

The obvious place for this is blocking cloud providers from accessing personal data. For example, it could be used to resolve concerns about using US based services from Europe, because any data uploaded to such a service can be encrypted such that it's only processed in a certain way (this is what RA does).

RA gets demonized by people making the arguments found in the sibling comment, but they end up throwing the baby out with the bathwater. There are tons of privacy, control and decentralization problems that look intractable until you throw RA in the mix, then suddenly solving them becomes easy. Instead of needing teams of cryptographers to invent ad-hoc and app specific protocols for every app (which in reality they never do), you write a client that RAs the server to check that it's running software that won't leak your private information as part of the connect sequence.
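The connect-sequence idea can be sketched like this (all measurements and names are hypothetical; a real SGX/SEV client verifies a signature chain from the CPU vendor over the quote rather than comparing bare digests):

```python
import hashlib
import hmac

# Hypothetical measurement of an audited server build known not to leak data.
TRUSTED_BUILD = hashlib.sha256(b"audited-server-build-1.2").hexdigest()

def connect(server_quote: str, payload: bytes):
    """Client side of a hypothetical connect sequence: attest first, then
    release data. Returns a stand-in ciphertext, or None to refuse."""
    if not hmac.compare_digest(server_quote, TRUSTED_BUILD):
        return None  # unknown build: send nothing
    # Real code would encrypt to a key bound inside the attestation report,
    # so only the attested software can ever decrypt the payload.
    return b"encrypted:" + payload

ok = connect(TRUSTED_BUILD, b"my-private-record")                 # released
refused = connect("not-the-trusted-build", b"my-private-record")  # None
print(ok is not None, refused)
```

The key design point is that refusal is the default: the client never sends plaintext, and a server running unknown software gets nothing at all rather than a warning it can ignore.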


> The obvious place for this is blocking cloud providers from accessing personal data. For example, it could be used to resolve concerns about using US based services from Europe, because any data uploaded to such a service can be encrypted such that it's only processed in a certain way (this is what RA does).

This will not work, because the concerns about US-based services are legal ones stemming from US government access requirements, which cannot be solved by technical restrictions while still complying with those requirements.


The US Gov requirements don't require that cloud services make as much data available to themselves as possible, only that they provide access to what they do have (otherwise end to end encrypted messengers would already be illegal and shut down).


> The US Gov requirements don't require that cloud services make as much data available to themselves as possible, only that they provide access to what they do have (otherwise end to end encrypted messengers would already be illegal and shut down).

The US gov can walk into any company and demand everything and anything they want while making it illegal for anyone at that company to say a damn thing to anyone about it. This includes taking over parts of that company's facilities and taking a copy of every last bit of data that goes in and out (see room 641A - they've been doing it for ages).

"secure" enclaves can't save us here because the companies who develop them are subject to the same government who can insist on adding backdoors in their products. Even without explicit support of the companies involved we've already seen side-channel attacks that allow access to the data in enclaves.

As for end to end encrypted messengers, it's reasonable to suspect that once they gain enough popularity they will be compromised in some form or another. Signal, for example, had gotten a lot of attention followed by another huge jump in popularity after WhatsApp changed their privacy policy.

Signal also suddenly started collecting and storing sensitive user data in the cloud, they ignored protests from their users about it, were extremely shady in their communications surrounding that move, and have never updated their privacy policy to reflect their new data collection practices. Does that mean that Signal has been compromised? In my opinion, probably (refusing to update their privacy policy is a huge dead canary), but even if it hasn't it absolutely means the government can march in and take whatever they want including data they'd have to use a backdoor or an exploit to access.

Lawmakers have been trying to ban or control end to end encryption for years. (See https://www.forbes.com/sites/zakdoffman/2020/06/24/new-warni... or https://www.eff.org/deeplinks/2020/07/new-earn-it-bill-still... or https://www.cnbc.com/2020/10/12/five-eyes-warn-tech-firms-th...) and while they've so far been kept at bay eventually they'll succeed in sneaking it past us in one form or another.

For now, it's perhaps better in their view to let us think our communications are more secure than they are. (See https://www.zdnet.com/article/australias-encryption-laws-use... and https://gizmodo.com/the-fbis-fake-encrypted-honeypot-phones-...)


It can be used to heavily track users around the web.


Is there any way we can make Remote Attestation providers liable for any losses incurred while using their services? Can we make it so that banks, record companies, and individuals can sue Microsoft or Google if their system doesn't deliver on the promise? If we still see cheating in on-line gaming even though all machines are attested, can we get our money back?

I feel like part of the problem is that Remote Attestation providers get to have their cake and eat it too: they make a theme park, set up boundaries, and charge admission under the premise that it's safer to play in their walled garden than in a public park.

But if a bad actor slips through their gate and picks a few pockets or kidnaps a couple children, the operators get to say "not our problem, our services have no warranty -- read the EULA".

I feel like in the real world, if a park operator explicitly bills itself as "a safe place to play" it's their problem if someone goes on a crime spree on their property -- there is some duty to deliver on the advertised safety promise.

But somehow, in the software world people can control admission, control what you do and somehow have no liability if things still go off the rails. It's just a sucker's game.

Of course, I'd rather not see remote attestation happen, but maybe part of the reason it keeps creeping back is exactly because there is zero legal downside to making security promises that can't be kept, but incredible market advantages if they can sucker enough people to believe in the scheme.


> If we still see cheating in on-line gaming even though all machines are attested, can we get our money back?

The concept of remote attestation isn't somehow safer if it works perfectly, and it isn't clear to me that this is actually impossible to build (within an acceptable and specified liability constraint) as opposed to merely exceedingly difficult. I do relish the schadenfreude, though ;P.

> Of course, I'd rather not see remote attestation happen...

Interestingly, the CEO of MobileCoin told me earlier this year that they were "going deeper on discussions with [you] to design a fully open source enclave specifically for [their] use case" (which, for anyone who doesn't know much about this, currently relies on remote attestation and encrypted RAM from Intel SGX to allow mobile devices to offload privacy-sensitive computations and database lookups to their server). I wrote a long letter to you a few days later in the hope of (after verifying with you whether that was even true or not) convincing you to stop, but then decided I should probably try to talk to Kyle and/or Cory first on my way to you (and even later ended up deciding I was stressed out about too many things at the time to deal with it)... does this mean you actually aren't, and we are all safe? ;P (I guess it could be the case that this special design somehow doesn't involve any form of remote attestation--as while my core issue with their product is their reliance on such, I went back through the entire argument and I didn't use that term with THEM--in which case I'm very curious how that could actually work.)


Huh...maybe I didn't parse your comment correctly, but I just checked and I don't think I ever got an email from you on the subject? Totally possible I just bungled it, I'm terrible with names and my inbox is a dumpster fire :P

It's also interesting to see how the game of "telephone" works out when the message comes full circle. MobileCoin did reach out to me, initially to see if I would write a whitepaper on SGX. After I told them I would be frank about all my opinions, the conversation pivoted to "well, if you could make something that fixed this problem what would it be?". Which I entertained by saying I think the problem may not be solvable, but whatever it was, it had to be open source; and "oh by the way let me tell you about my latest projects, perhaps I could interest you in those". To which it trailed off with a "I'll have my people call your people" and that was that, modulo a podcast I did for them about a month ago which surprisingly didn't touch on SGX.

So: long story short, no, I'm not creating a solution for them, and I think remote attestation is both a bad idea and not practical. Is it worse than burning some hundreds of terawatt-hours of energy per annum to secure a cryptocurrency? That is a harder question to answer: is climate change a bigger problem than remote attestation? The answer is probably obvious to anyone who reads that question, but no two people will agree on what it is.

To your point on RA being not impossible but possibly just exceedingly difficult – you might be right. My take on it is that remote attestation is only "transiently feasible": you can create a system that is unbreakable with the known techniques today; but the very "unbreakability" of such a scheme would cause ever more valuable secrets to be put in such devices, which eventually promotes sufficient investment to uncover an as of yet unknown technique that, once again, breaks the whole model.

Which is why I’m calling out the legal angle, because the next step in the playbook of the corps currently pushing RA is to break that cycle -- by lobbying to make it unlawful to break their attestation toys. Yet, somehow, they still carry no liability themselves for the fact that their toys never worked in the first place. I feel like if they actually bore a consequence for selling technology that was broken, they’d stop trying to peddle it. However, if they can get enough of society to buy into their lie, they’ll have the votes they need to change the laws so that people like you and me could bear the penalty of their failure. With that strategy, they get to decide when the music stops – as well as where they sit.

I'd like to see a return to sanity. Security is fundamentally a problem of dealing with people acting as humans, not of ciphers and code. Technology tends to only delay the manifestation of malintent, while doing little to address the root cause, or worse yet -- hiding the root cause.


> Huh...maybe I didn't parse your comment correctly, but I just checked and I don't think I ever got an email from you on the subject? Totally possible I just bungled it, I'm terrible with names and my inbox is a dumpster fire :P

Ah, yeah: I was really tired when I wrote that last night and the sentence complexity was brutal ;P. I wrote the letter, but it felt weird to send "out of the blue" as we don't ever actually talk; and I wasn't even sure I could trust that anything was going on at all, but had written this sad sad letter (lol) and I was just like "I shouldn't send this; maybe I should first have a meeting with Kyle about it, and maybe Kyle can decide how to approach you", and then I managed to overthink it so hard that I just gave up because I was dealing with something else (and I even wasn't sure if Cory, who also started to get injected into my overly-complex strategy, would agree with me, which made it seem even more difficult).

> [everything else you said]

<3


lol -- I've done similar, I know the feeling (-_-). Feel free, tho, to reach out anytime in the future. I'd value hearing your opinion, especially on matters like this!


IMO this just seems like bargaining and hoping for a just world where the law actually applies equally and constrains too-big-to-fail actors. What would actually happen is various limits/exceptions would get written in, like as long as you used "proper" software (read: microsoft) and did "proper" audits (read: tediously check moar boxes) then you could pass that liability onto someone else or have it be "nobody's fault". We'd likely end up with the same software totalitarianism even faster, because companies would be even more incentivized to deploy cookie cutter centralizing solutions to escape the additional liability.

Never mind that you can't really put a dollar value on personal information to substantiate damages or even personal time spent dealing with the fallout from someone else's negligence, which is like one of the fundamental problems with our legal system.

(There's also the elephant in the room that one of the main industries clamoring for ever more "security" still continues to insist that widely-published numbers (ssn/acct/etc) are somehow secret.)


Perhaps incentivizing better methods is more helpful than incentivizing outcomes in this case.


"Is there any way we can make Remote Attestation providers liable for any losses incurred while using their services?"

RA is a use-case neutral hardware feature, so it doesn't really make sense to talk about making providers liable for anything. That's an argument for making CPU manufacturers liable for anything that goes wrong with any use of a computer.

The sort of companies that use RA are already exposed to losses if RA breaks, that's why they invest in it to start with. Console makers lose money if cheating is rampant on their platforms for example, because people will stop playing games when they realize they can't win without cheating.

So what you're saying is, let's incentivize these already incentivized people to use RA even more, and moreover, let's strongly incentivize companies that don't use it to start doing so. Because if you think governments will say "oh, you didn't use the best available tech to protect the kids, fair enough no liability" then you're not very experienced with how governments work! They will say "you should have used RA like your competitors, 10x the fine".


That feels like making a bike lock manufacturer liable if someone uses an angle grinder and steals your bike anyways.

In practice, both a good bike lock and remote attestation raise the bar against attacks significantly, without providing 100% security.


Unpopular opinion:

Hardware-based attestation of the running software is an important security feature, especially in a world where data leaks and identity theft are rampant. Let's say I'm a healthcare provider, and I'm about to send sensitive medical data to a third party vendor. Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?

If the vendor wants to install some self-built OS that they trust on their computer and not update it for 5 years, that's their business, but I may not want to trust their computer to have access to my personal data.

Remote attestation gives more control to the owners of data to dictate how that data is processed on third-party machines (or even their own machines that may have been compromised). This is useful for more than just DRM.


> Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?

No.

Contrarily unpopular opinion: You cannot own data except what resides on your own property. Once you give someone a copy, it is theirs to do with as they wish. They may tell you what they will and will not do, but it is entirely on you to trust them.

...and that's the peril of things like remote attestation and other "zero trust" crap. They replace the nuanced meaning of trust that holds society together (and has literally done so since the beginning of life) with absolutes enforced by an unrelenting machine controlled by some faceless bureaucracy which is also partly under the command of the government. There should already be enough dystopian sci-fi to convince everyone why that is a really bad idea.


Strongly agree.

We've already seen shades of this in banking. After chips were added to credit cards, people started having their chargebacks denied because "our records show the card was physically present" (even if the charge originated in another country).

How long until companies try to deny responsibility for data leaks because "our records show Windows was fully up-to-date and secure"?


The error in logic there is that chip usage is stronger proof, but not infallible. How was the account started? Cards can be stolen from mailboxes and purses. Some smartcard manufacturers have poor key handling security; Gemalto emailed keys before writing them to SIMs. Some EMV chips were vulnerable to replay attacks due to shoddy implementation.

This is why consumer protection laws are more important than any technical means of financial security. Having a super duper hardware wallet to store your cryptocurrency doesn't negate the irreversible nature of transactions.

Raw data is even harder to secure than money. Money in a banking system can be clawed back or frozen. Data can't be un-leaked.


That seems to be an argument for damned if you do, damned if you don't. Yes, people need some incentive for deploying security upgrades and being able to say "we are sure it wasn't us" in disputes is part of that incentive. Otherwise why bother? If people get treated the same whether they made a genuine good faith effort to be secure, or do nothing, then you're just rewarding the companies that ignored security to focus on other things.


> You cannot own data except what resides on your own property. Once you give someone a copy, it is theirs to do with as they wish.

Completely agree. These outdated notions of information ownership are destroying free computing as we know it. Everything "hacker" stands for is completely antithetical to such notions.


Strongly agree, but even if you are wrong on all those points, I still don't want to be forced to run the same exact monoculture software as everyone else.

Good luck getting your x86-64 Windows kernel + Chrome JavaScript exploit chain to run on my big-endian ARM64 running Linux and Firefox.

(Also, the existence of competition like that regularly forces all the alternatives to improve.)


It's not necessarily "you" vs. "someone else". You could be one person with two computers and want one computer to be able to attest to the other computer something about its software. (Imagine it's not two computers, but a thousand computers that are exposed to both physical and network attacks.)


Right, but if they aren't going to follow best security practices and prove it (via a signed hardware attestation of the running software that includes the transport key they want me to use to send them the data), then I'm not going to send them the data. That's my choice.


> if they aren't going to follow best security practices and prove it (via a signed hardware attestation of the running software that includes the transport key they want me to use to send them the data)

You can mandate whatever remote attestation you want, and they'll follow whatever security practices they damn well feel like and you can't do a damn thing about it. So, you've given up your ability to run software that doesn't spy on you, and they're operating business as usual because they don't have a single goddamn reason to care what you think remote attestation means in the real world.


> best security practices

As defined by whom? Some government organization (which one)?

This will end up making everything more ossified and less secure.

But also once that is in place, various organizations and governments will be able to force you to use whatever spyware they want, in order for your attestation to go through.


"as defined by whom? Some government (which one) organization ?"

As defined by the user.

RA doesn't care what software you run. In fact RA is better supported by Linux than any other OS! And, although the discussion in this thread is about RA of entire machines, that's actually pretty old school. Modern RA is all about attesting the tiniest slice of code possible, hence the "enclave" terminology. The surrounding OS and infrastructure doesn't get attested because it can be blinded with encryption. This is beneficial for both sides. I don't actually necessarily care how you configure your OS or even if it's up to date with security patches, if the security model treats the entire OS as an adversary, which is how Intel SGX works. You just attest the code inside the enclave and I send/receive encrypted messages with it.
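To make the enclave flow above concrete, here is a hedged toy model of quote verification. An HMAC stands in for the hardware's asymmetric attestation signature (so in this sketch the verifier shares the key; real SGX quotes are verified with only public material), and all names are illustrative:

```python
import hashlib
import hmac
import os

# Toy model: the "hardware key" never leaves the CPU in real systems.
HARDWARE_KEY = os.urandom(32)
# The verifier knows the hash of the enclave code it is willing to trust.
EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave-code-v1").hexdigest()

def hardware_quote(enclave_code: bytes, report_data: bytes) -> dict:
    """What the CPU does: measure the enclave, then sign (measurement, data)."""
    measurement = hashlib.sha256(enclave_code).hexdigest()
    mac = hmac.new(HARDWARE_KEY,
                   (measurement + report_data.hex()).encode(),
                   hashlib.sha256).hexdigest()
    return {"measurement": measurement, "report_data": report_data, "mac": mac}

def verify_quote(quote: dict) -> bool:
    """What the remote verifier does: check the signature, then check that
    the measurement matches the code it expects. Nothing else about the
    host OS is (or needs to be) attested."""
    expected_mac = hmac.new(HARDWARE_KEY,
                            (quote["measurement"] + quote["report_data"].hex()).encode(),
                            hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_mac, quote["mac"]):
        return False
    return quote["measurement"] == EXPECTED_MEASUREMENT

# A quote over the expected code verifies; a modified enclave does not.
good = hardware_quote(b"enclave-code-v1", b"ephemeral-pubkey-hash")
bad = hardware_quote(b"enclave-code-v1-backdoored", b"ephemeral-pubkey-hash")
assert verify_quote(good)
assert not verify_quote(bad)
```

The `report_data` field is where an enclave would bind an ephemeral key for the encrypted channel, which is how "I send/receive encrypted messages with it" connects to the attestation itself.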


Best security practices as defined by Microsoft, but if I want, I can still create my own arbitrary security requirements and enforce them via software/audits.


Best security practices as defined by Microsoft = "You can't have a computer if your country is under US sanctions". Important word: US, a single country. I don't want to punch such a huge hole in any of my systems.


Microsoft. The same company which strongly pushes a spyware-filled, user-hostile OS. "best"? Really?

but if I want, I can still create my own arbitrary security requirements and enforce them via software/audits

Try doing that to your bank or whatever other large company you interact with...


You can't have your cake and eat it too. Everyone has agency, to decide who they interact with and who they give money to or, on the other side, who they sell products to/provide services to, and there are remarkably few exceptions to this rule (most based on things that the victim can't control). If a company wants to require you only use their products, or only use an allowlist of approved products, they can do that, just as you can decide not to use their services if they charge too much, perform unethical actions, or even if their company name contains the letter 'Y'.


Corporations are an artificial construct that we as a society let exist. We can decide to add additional restrictions to that existence like requiring them to not discriminate based on what software you run on your own devices.


An S&P 500 corp I often work with has a security department filled with mindless drones, who say things like "regular enforced password changes are a well regarded best practice".

You almost certainly use software that calls their server at some point. Hope you will enjoy their vision of security. I'm moving into the woods if they can define how my _personal_ computer behaves.


In the US healthcare industry, providers are legally mandated to share patient data with certain other organizations. You don't have a choice.


They actually aren't. The only reason it's necessary in practice is that A) Medicare/Medicaid integration is strongly predicated on EMR, and B) our damn insurance model is cripplingly dependent on it.

There's nothing that keeps a medical provider from going old school.

Unless I'm completely overlooking something... It may have snuck in with ACA.


Yes you are overlooking a variety of more recent federal laws and associated interoperability regulations, some of which apply even to providers that only accept direct payments from patients and don't bill third-party payers (insurers).

The basic guiding principle in force since HIPAA in 1996 is that patients, not providers, control access to their medical records regardless of whether those are stored on paper or in an EHR. If the patient authorizes sharing those records with another healthcare organization then the provider can charge a small fee for that service but they can't introduce additional spurious technical requirements on the receiving system.


> Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?

I trust myself more than I trust anyone or anything else. It's as simple as that. I don't even slightly trust Microsoft, Google, or Apple.

Your logic is built on an invalid premise that these companies can, in fact, be trusted.

> Remote attestation gives more control to the owners of data to dictate how that data is processed on third-party machines (or even their own machines that may have been compromised).

This is exactly what I want to avoid. It's my device. It should only ever serve me, not anyone else, including its manufacturer and/or OS developer. It should not execute a single instruction that isn't in service of helping me achieve something.

Also, the concept of ownership can simply not be applied to something that does not obey the physical conservation law, i.e. can be copied perfectly and indefinitely.


If I want to buy a device that can generate a proof I can share with others to increase their trust in me, you shouldn't be able to stop me. Implemented properly, these machines can still boot whatever custom software you want; you don't have to share the proof of what booted with anyone.


I'm not saying that secure boot is inherently a bad idea. It's a good idea but only if all signing keys are treated equally. Right now, they aren't. AFAIK modern motherboards, those of them that use UEFI, come with Microsoft keys preloaded — and that preferential treatment is the part that's not okay. In an ideal world, all devices that support secure boot should come with a completely empty keystore so that you could either trust Microsoft keys or generate your own key pair and trust that. Possibly re-sign the Windows bootloader with it even.

It's much, much worse with mobile devices. You can re-lock the bootloader on a Pixel with your custom key, but you still can't touch TrustZone and you'll still get a warning on boot that it's not running an "official" OS build.


This logic works for software signing, but not remote attestation. For remote attestation, the "tamper-proof-ness" is the root of the trust chain, and the signing keys are individually baked into the specific piece of hardware and not controlled by a third party. You seem to be hoping that we can disrupt that chain of trust by having manufacturers not record the public keys associated with each piece of hardware (such that individuals could create their own signing keys on open hardware), but that's just not going to happen.


You are correct, but I think that misses the point: Neither you nor I should be forced to buy only devices that run specified software as determined by a third-party. You are making this out to be a choice, that if it's available and you want it, you should be able to buy it. However, the worry is not over a choice to buy such devices, but over a mandate that only such devices be available and no others.


All cryptography is important. Hardware attestation is no exception. The problem is who's using it, who owns the keys, who's exploiting who.

It's totally fine if it's used to empower and protect us, normal people. If we can use this with our own keys to cryptographically prove that our own systems haven't been tampered with, it's not evil, it's amazing technology that empowers us.

What we really don't need is billion dollar corporations using cryptography to enforce their rule over their little extra-legal digital fiefdoms where they own users and sell access to them to other corporations or create artificial scarcity out of infinite bits. Such things should be straight up illegal. I don't care how much money it costs them, they shouldn't be able to do it.

The problem is we have chip makers like Intel and AMD catering to the billion dollar corporation's use case instead of ours. They come up with technology like IME and SGX. They sell us chips that are essentially factory-pwned by mega-corps. My own computer will not obey me if it's not in some copyright owner's interest to do so and I can't override their control due to their own cryptographic roadblocks. Putting up these roadblocks to user freedom should be illegal.


The problem is who's using it, who owns the keys, who's exploiting who.

The governments know this all too well; that's why they've been trying to ban cryptography, and it was (and I believe still is in many cases) classified as a munition.


But that's not a good argument because SGX isn't something that empowers the Big Guys over the Little Guys. In fact it's the other way around - they took it out of their consumer chips and now it's only found in their server class chips. So companies can create RA proofs and send them to users, but not the other way around.


> Let's say I'm a healthcare provider, and I'm about to send sensitive medical data to a third party vendor. Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?

No, because this still doesn't mean my data is secure. A human can still go into the third party vendor's system and see my data, and if that human wants to steal it, they can. No amount of remote attestation will prevent that.

> Remote attestation gives more control to the owners of data to dictate how that data is processed on third-party machines

Oh, really? So every time my health care provider wants to send my data to a third party, a remote attestation confirmation box will pop up on my phone so I can say yes or no, or ask more questions about the third party vendor's computers?

Ultimately the problem here is trust, and trust is a social problem, and as the saying goes, you can't use technology to solve a social problem. But you sure can pretend to be using technology to "solve" a problem in order to get people to give up more and more control over their devices.


Even if we assume that the features will be basically unbreakable, your world will still end up looking like the following.

Entities (ab)using remote attestation in order of 'screws over those below them':

Government > Cyber criminal groups > Large organizations > Normal people.

Do you want to live in a world where a large corp can dictate which $VERSION of $APPROVED_SOFTWARE you should be running? I think fundamentally it's just not the direction we should be going. I don't actually doubt that proper remote attestation eventually would be possible, but before then it will be possible to bypass it in countless ways. Probably eventually you'd end up with only a single software stack, assumed to be flawlessly secure.

I think, luckily, this will severely limit the usability of technology that works this way. Developing for this stack will be a pain, and the machine will have all sorts of super annoying limitations: can't use that display, its driver is not vetted; can't use that USB webcam, it might have DMA; etc. That will hopefully harm the uptake of such technologies.

Like often in tech remote attestation in your case is a technical fix for a social problem. If the problem is sharing sensitive data with institutions you don't trust then you need to build that trust, or transform the institutions so that they can be trusted. Transparency, laws, oversight, that type of stuff.


Unless you're including faang/ms in government, large organizations belong at the top of the list.

Who needs espionage or lobbying when you have an undetectable root shell on every computer in the country?


IMO the entire remote attestation push is an obfuscated dance about who has root, control, and ultimately ownership over devices.

If vendors were plain about it, "attestation" wouldn't be a big deal: you do not own the devices, we do, and you lease them from us, maybe for a one-time fee.

But companies know it won't actually fly if you're plain about it, ESPECIALLY with large corporations and governments, who will outright refuse to buy your services or equipment for many key things if they are not the ultimate controllers of the machines, for multiple reasons.


"Hardware-based attestation of the running software is an important security feature"

I understand the mechanics in a "lies to children" way but who exactly is attesting what? Let's face it: MS isn't going to compensate me for a perceived flaw in ... why am I even finishing this sentence?

I recently bought some TPM 2.0 boards for my work VMware hosts so I could switch on secure boot and "attestation" for the OS. They are R630s which have a TPM 1.2 built in but a 2.0 jobbie costs about £16.

I've ticked a box or three on a sheet but I'm not too sure I have significantly enhanced the security of my VMware cluster.


"Attestation" of a VM is such a fraught concept... Isn't whole idea of virtualization, to outright lie to the "guest" operating system?

Yes, dear Windows, you're running on a dual-core Xeon Gold 6326 with i440BX chipset. Don't ask how this is possible, just trust me...


And so isn't this basically the flaw in the whole idea? You can always emulate a TPM. You can always boot with a stock kernel and have the host patch the memory afterwards. Software can try to detect whether it's running in a VM, but the VM can lie. Last I heard, blocking VMs didn't go so well when nVidia tried it.

Am I wrong about the effectiveness of this? I'll readily admit I don't understand most of the underlying tech here.


The emulated TPM will not contain the TPM manufacturer's private key that is used to sign responses.


Which is why the comment which started this sub-thread mentioned buying extra physical TPM 2.0 chips. They contain the correct keys, and since they're external devices, it's trivial to lie to them, pretending to be the physical CPU doing a normal boot.

Of course, that only works until they start rejecting external TPM chips, and accepting only the built-in "firmware" TPMs found in more recent CPUs.


Yeah, Pluton "fixes" this because it's inside the CPU.


nit: the TPM contains its own internally-generated private key. That private key never leaves the TPM, and has nothing intrinsic to the manufacturer.

The manufacturer then signs the public portion of that TPM key, creating the ability for everyone to assert that said key was generated internal to their hardware (and thus couldn't be used by an emulator).

You yourself could also sign the public portion of the TPM key, or even generate a new one and sign it, but that wouldn't affect the perverse incentive generated by the manufacturer's assertion. It would just enable you to assert that you trust the TPM key is internal to the TPM without trusting the manufacturer's records.

We're dealing with something like the dual of software signing here.


>Am I wrong about the effectiveness of this?

Partially. For online attestation you'd be missing the most important part: the vendor-signed keypair that is insanely hard to extract from the device.


I'll extract them for 40k a pop all day long. I've got the hardware in storage from an old contract. Side channel power analysis is fun.


lol If I had USA money I'd go for it for 40k.

I once read about the hardware tricks DRM dongles use in the silicon itself. Doesn't sound like a 40k job :^)


The hardware attests the hypervisor, the hypervisor attests the OS, the OS attests the app, etc. It all works as long as you chain down to the unique key in hardware.


Implemented properly, the idea is that you have a chain of certificates (rooted by the CPU vendor's public key) that can identify all the different bits of software that have executed on the machine, along with an ephemeral public key. The hardware guarantees that the associated private key can only be wielded by the software versions that the chain attested to. So when you initiate your TLS connection with this machine, you can validate the cert chain and understand exactly what software the machine is running, assuming that you trust the CPU vendor and all the versions of the software that were attested to.
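The measurement part of that chain can be illustrated with the TPM's actual PCR-extend rule: each boot stage hashes the next before running it, folding the digest into a running register. A minimal sketch (component names are made up):

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """TPM PCR-extend rule: new_pcr = SHA-256(old_pcr || SHA-256(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

# PCRs start zeroed at power-on; each stage extends before handing off control.
boot_chain = [b"firmware-v12", b"bootloader-v3", b"kernel-6.1", b"app-v1.0"]
pcr = bytes(32)
for component in boot_chain:
    pcr = extend(pcr, component)

# A verifier replays the chain from the event log. Substituting any one
# component anywhere in the chain yields a different final value, because
# each extend irreversibly commits to everything before it.
replayed = bytes(32)
for component in [b"firmware-v12", b"bootloader-v3", b"kernel-6.1-patched", b"app-v1.0"]:
    replayed = extend(replayed, component)

assert pcr != replayed
```

The attestation signature then covers this final PCR value, which is how one short signed blob can vouch for every stage that ran.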


> … assuming that you trust the CPU vendor and all the versions of the software that were attested to.

That’s a huge caveat.

You also cannot verify your trust is deserved, and that it will continue to be deserved, because such a system by its very nature must be opaque to untrusted parties (which means you).


such a system by its very nature must be opaque to untrusted parties

No, you can attest to a completely open source system. Nobody's actually doing that, but it's possible. The private keys have to be secret and non-extractable, but that's it.


Plenty of people do it. I use tpm2-totp for it. There is a key sealed in my TPM, that will only unseal for known boot stacks (firmware/bootloader/kernel). I have the same key stored in my Yubikey's TOTP application. After boot I can verify my stack by comparing a TOTP code generated by my Yubikey with one generated by the TPM.

Caveat is that security only extends into the kernel image, so for my use case I embed the initrd in the kernel image and have all the filesystems and swap on a dm-crypt volume.

I also have to unseal and reseal when performing upgrades of the initramfs and above, but I'm fine with that.
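For the comparison step, both devices are just computing RFC 6238 TOTP over the same sealed secret, so the codes match for the same 30-second window. A minimal stdlib sketch of that computation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically
    truncated to a short decimal code."""
    counter = int(time.time() if t is None else t) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Two devices holding the same secret produce the same code for a window:
secret = b"12345678901234567890"      # the RFC 6238 test secret
assert totp(secret, t=59) == "287082"  # matches the RFC's SHA-1 test vector
```

Since the TPM will only unseal the secret when the PCRs match a known-good boot, a matching code is evidence (to the limits discussed below) that the measured boot stack was the expected one.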


> After boot I can verify my stack by comparing a TOTP code generated by my Yubikey with one generated by the TPM.

But if you're not sure whether the system booted cleanly, then it might be compromised. If it's compromised couldn't your tools simply lie about the codes generated by both the TPM and the Yubikey so that they always match?


Yes. Intel is willing to lend me all the equipment and logic analyzers I need to analyze their products, access to their internal design docs, access to their engineering team to answer my questions, etc, etc...

Do you realize how daft and unrealistic your assertion is?

Tell ya what. You get Broadcom, Intel, AMD, Nvidia, etc... to go full transparent, and we'll talk.


The fundamental security mechanism upon which the entire system hinges is opaque.


Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?

This is a pretty bad example. The attack vector is rarely, if ever, the technical way the encrypted file is received or where it is decrypted. The attack vector is what happens after it's decrypted. You've given an encrypted file to a computer you've confirmed knows how to decrypt it "securely" (whatever that means). And after that, that clean OS image with all the latest security patches still enables, by design, the decrypted data to be used (read, manipulated, do whatever it is you sent it to them in the first place) and sent someplace else or copied to removable media.


> Let's say I'm a healthcare provider, and I'm about to send sensitive medical data to a third party vendor.

Let's say I'd like mandatory disclosure on shenanigans like that, so I can avoid this healthcare provider.


Can you even do paper prescriptions any more? I've only had digital ones my entire adult life.

Quick edit to answer my own question: In my home state paper prescriptions are only legal in a few situations (if it's for an animal, glasses, justifiable emergencies). However in some parts of the country they're still possible. Even if I had a choice, I prefer the convenience of sending the data digitally; once you actually fill the paper prescription, CVS or whoever is still gonna be able to glean sensitive medical info, so you're just delaying the inevitable.


Yes! You can!

And in fact, if your provider is doing ePrescribing, odds are they are contributing to supporting a monopoly by Surescripts, who has cornered the market with anti-competitive business practices!

DEA still issues serialized paper prescription pads.

https://www.ftc.gov/news-events/news/press-releases/2019/04/...

Every time an ePrescription goes over the wire, this one weird company based out of Virginia is likely shotgunning your personal info, as collected by PBMs/health insurers, between all parties involved (with the obligatory copy for themselves, probably "anonymized" and repackaged for monetizable exposure to research groups), with contractual terms requiring that people in the network not make arrangements with anyone else for the service.

As a common victim of the perniciousness of this arrangement, I'm more than familiar with how this nonsense goes.


Go to any emergency room here in Winnipeg and yeah they will send you home with a handwritten prescription.


Yes, but they're inconvenient as can possibly be. Pharmacies here seem like they have digital prescriptions available the same day, while I almost always have to return another day when dropping off a paper prescription.


Electronic prescriptions are only faster if the pharmacy prioritizes them over paper orders. The bottleneck is waiting for the pharmacist to verify the prescription. If they're not too busy, the process can run to completion while you travel from the doctor's office. If they're busy due to short staffing or heavy demand, your escript is part of a several-hundred-long queue.


What's stopping them from making a cert and encoding it in a QR code? It was perfectly possible with covid vaccination certificates.


Verify as in check for drug interactions and catch any medication errors by the physician.


You already do have mandatory disclosure on shenanigans like that in the US. It's the boilerplate HIPAA agreement you sign when you first see a provider.

Good luck finding a provider that doesn't ship your sensitive medical data out to an EMR company though.


Right?! It's telling that that's the use case: "what if we want to securely exchange The New Oil for some sweet cash, without the chance of some other asshole horning in on our racket or the little people hearing about it?"


The reality though is none of this is "secure" except by extensive, massive collusion and centralization in society - and such a thing is implicitly able to be used against the people as much as it might be used for them.

The only reason such hardware is secure is because the resources required to hack it are large.

Basically, a sane system would be: two parties exchange their own TPM keys which they generated on device themselves. They agree to a common set of measurements they will use with their TPMs to determine if they believe the systems are running normally. They then exchange data.

What's happening instead: a large company uses its market position to bake in its own security keys, which the user can't access or change. They then use their market position to demand your system be configured a specific way that they control. Everyone else suborns to them because they're a big player and manufacturing TPMs is complicated. They have full control of the process.

The essential difference is that rather than two individuals establishing trust and agreeing to protocols for it, secured with the aid of technology, one larger party seizes control by coercion, pretends it'll never do wrong, and allows people to "trust" each other as mediated by its own definition. Trust between individuals ceases to exist, because it's trust provided you're not betrayed by the middle-man.

Weirdly enough, this is actually a big god damn problem if you work for any organization that's doing government or security work, because the actual processes of those places tend to hinge on whether or not they believe large corporate providers doing things like this are actually doing them well enough, or can be trusted enough, to be a part of the process. So even if you're notionally part of "the system" it doesn't actually make anything easier: in an ideal world, open-source security parts would enable COTS systems to be used by defense and government departments with surety, because they'd be built from an end-user trust and empowerment perspective.

So even the notional beneficiaries tend to have problems, because a security assessment ends up at "well we just have to trust Microsoft not to screw up", and while the Head of the NSA might be able to call them up and get access, a random state-level government department trying to handle healthcare or traffic data or whatever cannot.


> Let's say I'm a healthcare provider, and I'm about to send sensitive medical data to a third party vendor. Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?

I'd rather be able to access it without google or microsoft sticking their nose in.

I'd rather be able to combine it with my other data in whatever ways I see fit.

I'd rather be able to back it up in whatever way I see fit.

I'd rather be able to open it on a device that doesn't have a backdoor provided by the US government.

Because it's not microsoft's or qualcomm's data, it's mine.


I actually don't disagree with you. As I mention in the article:

> I cannot say how much freedom it will take. Arguably, some of the new features will be “good.” Massively reduced cheating in online multiplayer games is something many gamers could appreciate (unless they cheat). Being able to potentially play 4K Blu-ray Discs on your PC again would be convenient.

However, I'm more worried about the questions the increased deployment of this technology will bring, such as: will Linux users be doomed to a CAPTCHA onslaught as the untrusted devices, or worse? Important questions that, unless raised, risk us just "going with the flow" until it is way too late.


> Massively reduced cheating in online multiplayer games is something many gamers could appreciate (unless they cheat).

I have a rooted Android phone and I had to spend effort spoofing attestation in order to even launch some of my games which don't even have multiplayer. Allow me to be the one to tell you that I do not appreciate it.

I don't even care enough to cheat at these games but if I wanted to cheat it would be merely an exercise of my computer freedom which nobody has any business denying.


IMHO I don't see a CAPTCHA onslaught happening - if only because at that point the CAPTCHAs will be practically useless at stopping bots. They will just ban untrusted devices.

The current landscape of CAPTCHA technology is pretty bleak. It's pretty easy to use ML to learn and solve the early first-gen CAPTCHAs that just used crossed-out words. Google reCAPTCHA relies primarily on user data, obfuscation, and browser fingerprinting to filter out bots, but that only works because of (possibly misplaced) trust in Google. It falls back to an image recognition challenge (which hCaptcha uses exclusively) if you don't have a good data profile - which can also be solved by automated means.

I don't see desktop Linux being fully untrusted off the Internet, if only because Google won't let it happen. They banned Windows workstations internally over a decade ago and they are institutionally reliant upon Linux and macOS. What will almost certainly happen is that Linux will be relegated to forwarding attestation responses between Pluton, some annoying blob in Google Chrome, and any web service that does not want to be flooded with bots in our new hellscape of post-scarcity automation.


Another "doomsday scenario" is that a distinct market for FOSS hardware will arise, albeit much shittier hardware than what everyone else uses. Those users will become fringe and progressively isolated.


This is already the case for FOSS communications software and social networks.

In the past you could use your FOSS client to communicate with your ICQ, AIM, MSN Messenger-using friends. Today, not using the official client will likely get you banned from the network.


Unfortunately, it does seem likely that many services will require that your machine run a kernel/web browser signed by an entity they trust before they give you access to what they consider sensitive data. That will suck for those of us who want to build our own kernels/web browsers and use that software to interact with sensitive data from large corporations, but that's their choice to make (IMHO). And it's my choice not to use their service.


Often it's not your choice, when e.g. all banking apps have this requirement, and banks require an app to allow you access to your account at all. Or when it's a health service because the data is so "sensitive". Today, platforms like Discord and Twitter very often want your phone number despite not having any technological need for it. Will they in the future require this thing as well so that they are sure that you are not using ad blockers? Will you be unable to communicate with most of society through these "optional" services if you don't have one of these "trusted computing" devices?

This is way more than just about not watching movies in 4k that you could also pirate. This is about turning people who don't have "trusted computing" devices that track every behaviour of theirs into societal outcasts.


So how do you solve this? Get the government to ban CPU vendors from implementing hardware-rooted remote attestation? I can assure you that this technology is used inside corporations for their own internal security, and such a ban would weaken our ability to survive a cyberwar.


Using this technology to secure non-private infrastructure, including corporate networks, makes total sense. And yes, it has some helpful properties to secure that infrastructure. But don't be mistaken, configuration mistakes still exist, as do zero days. Attestation helps against persistence, and this is valuable, but it's only one link in the chain.

That being said, extending it to everyone in a way that curtails individual control of computing devices creates an environment that is dangerous in many ways. I don't want to be in a world where only "approved" software is allowed on my computer or something. This can go wrong really quickly, and a lot of the application of attestation technology for consumers is really just about removing their freedoms.

The place where the government should step in IMO is not to ban CPU vendors from implementing this, but to pass anti-discrimination laws, i.e. ban companies from requiring remote attestation to unlock some specific feature. They should maybe endorse it, or be allowed to warn you, but they should still allow full access regardless.

For the B2B setting there are obvious dangers of monopoly abuse, here the government just needs to enforce existing laws. Microsoft dropping the requirement that the signing key for third parties has to be trusted is IMO a major antitrust violation.


> Get the government to ban CPU vendors from implementing hardware-rooted remote attestation?

Get the government to regulate the corporations requiring it. Classify any attestation requirement as discrimination or something. They're excluding people without good reason.


Discord and Twitter want your phone number to limit how many accounts you are allowed to sign up for.


That's only part of it, Twitter is also in the ad business, and in the ad industry, phone numbers are used as identifiers to correlate users between datasets.

If it's just about limiting access, Cloudflare imposes a similar limitation on the number of accesses you can make to a website via remote attestation. I think once remote attestation becomes more prevalent, it might become useful in the ad business too, e.g. to prevent you from using ad blockers, or similar things.


It's not your choice. The choice will be taken away, and you will not have the choice to control your computer any longer.


This crap hasn't worked and it will never work. Console vendors have built much stronger things than remote attestation but you know one thing stays true: if you sign crappy vulnerable code, you are just signing any code.


No, that would be horrible for data portability, which is the reason many hospitals are locked into shitty EPR systems. If you are going to design a way to transfer data across healthcare providers, it better be as easy as sending a fax. Sheesh.


You'd only be verifying the machine you sent the data to was up to date, but that is likely to be a file server or router of some sort. You'd need to validate the entire network that will touch the data.

While it might be theoretically possible to assert the whole network is up to date, the hospitals will definitely fail that check. There's all sorts of equipment the hospitals can't update for various reasons, such as aging radiology machines.


>Hardware-based attestation of the running software is an important security feature, especially in a world where data leaks and identity theft are rampant. Let's say I'm a healthcare provider, and I'm about to send sensitive medical data to a third party vendor. Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?

I'd prefer it not be run on a computer which has already been compromised with a UEFI rootkit, which is what trusted computing has gotten us so far.


These primitives can be used to tell you when you're talking to a machine that has not been compromised with a UEFI rootkit.


Which is meaningless. Bad actors can use machines that have not been compromised with a rootkit.


I don't think the attestation mechanism is wrong, but the ability to perform remote attestation is going to be abused to lock in consumers (even if in some circumstances there's improved security using it).


> Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?

Yeah, because transferring that data into another machine is an impossible task.

That's the stupidest argument I heard today...


Admittedly it is a simplified scenario for the sake of argument. You'd need to have a full attestation of everything that controls access to the data (including ACLs) to get much of a guarantee.

But in a future world it's not hard to imagine the vendor software running in some sort of SGX-like environment that is very difficult to manually extract the data from.


Where are the pro-consumer advocacy groups from last time? EFF? FSF? They haven't even briefly mentioned the issue at hand.


Both groups are shells of their former selves, either spouting unjustified clickbait or staying silent on the dangers that matter.


For them, the battle starts when you cannot add your own root or when attestation happens silently. Before that, this is a reasonable feature.


It's not just a war on general purpose computing, it's not just the mainframe ascendant: the mainframe seems intent on taking over & coopting every computer it can.

Google's anti-rooting regime (SafetyNet) has been painful to experience. I'm not sure what's next with the new Play Integrity API but it's hard to have hope users will see wins here or anywhere.


Remote attestation or not, "software freedom" fighters should understand that things happen based on some user-base need. Somebody needed this and they added it, and whoever needs it doesn't care if they can't run Linux on it. If the user cares about running anything else on the hardware, they will add a way to disable the feature. It is all about the user need.

if you are a secondary priority user on some hardware, the way to fix it is to focus on becoming important enough to be prioritized instead of fearing some technology will limit things.


> "Software freedom" fighters should understand that things happen based on some user base need.

I wish that were true. However, I think the movie Tron (1982) sums this up very nicely.

From the movie Tron:

> Dr. Walter Gibbs: That MCP, that's half our problem right there.

> Ed Dillinger: The MCP is the most efficient way of handling what we do! I can't sit here and worry about every little user request that comes in!

> Dr. Walter Gibbs: User requests are what computers are for!

> Ed Dillinger: Doing our business is what computers are for.

We are now moving toward a world where all computers have an "MCP". No, it is not there to solve user problems; it is there to do the business of the corporations that designed it.


> Somebody needed this and they added it, whoever needs it doesn't care if they can't run linux on it.

Originally, the ones who "needed" features like this were the big content distributors. Without these features, it's too easy for normal people to extract content and give copies of it to their friends and family.

As a parallel development, another one who "needed" features like this is Microsoft, for a different reason. They were taking reputational damage from malware, and needed a way to prevent malware from running before their operating system kernel (malware loading after the operating system kernel could be contained by the normal security mechanisms like ACLs).

These two development threads had enough in common that they ended up merging together, and those who want to prevent copying content can now point to security as an excuse. And yes, neither of these two groups care if you can't run Linux on your own devices.

> if you are a secondary priority user on some hardware, the way to fix it is to focus on becoming important enough to be prioritized instead of fearing some technology will limit things.

I fully agree that this is our best defense. In fact, the only reason we can still run Linux on our desktops and notebooks is that, when SecureBoot was developed, Linux was already important enough. However, this could only happen because Linux had time to grow and become important enough (while being a "secondary priority user" of the hardware) before things started to become limited. Had SecureBoot come before Linux became important enough, running third party operating systems would not have been allowed, and Linux would not have had a chance to grow and gain importance.


> malware loading after the operating system kernel could be contained by the normal security mechanisms like ACLs

If this were true, how would the malware ever get itself to the point where it is loaded before the kernel is?


> things happen based on some user base need

Of course not. Things happen based on what investors and developers want. Users are very much secondary. They're a nuisance. If things did really happen based on some user-base need, would we have had Instagram or Facebook in their current form?


There is some kind of balance since both of those are free. I'm sure they prioritize user engagement and any loss of users is inadvertent.


I think so? FB exists in its current form because they're worried about users preferring TikTok


See, it's them being worried about it. Users are totally fine with there being different forms of social media. That is what users want. No one ever asked other social media companies to add TikTok-like functionality — they already have actual TikTok for that.


This. All these comments (and this article) worried that this is MS coming to take their Linux or whatever are missing that this is something their biggest customers want.

We need this in our corporate client device fleet to counter specific threats. We need this in our servers for the same reason — we do remote attestation today for Linux servers in semi-trusted locations. We’ve conveyed to our vendors that this is a desired capability in next-gen network equipment.

We’re not doing this to control data once it’s on an end-user’s computer. We’re doing it because we have a regulatory (and moral) obligation to protect the data that is entrusted to us.

We’re not Intel/AMD/NVIDIA/etc’s largest customer, but when we defer orders or shift vendor allocation it gets mentioned in their quarterly earnings reports. They tend to listen when we ask for features, and when our peer companies (not to mention governments) ask for the same thing because we have similar data security requirements?

Cloud and Business products is what, ~2/3rds of Microsoft’s revenue at this point? This isn’t being driven by the MPAA or whoever looking for better ways to screw over consumers.


If owners of devices have ultimate control over the root key/credential that determines attestation, I don't think people care about that.

So in your case, for devices you buy, you set up your corporate TPM key as the root owner, and then you send the device to employees, vendors, etc. The ownership chain is clear and you can send attestation requests. The corp is the owner of the device, and that is fairly obvious.

The issue is when people and corps buy devices, they do not have effective root. Microsoft, Apple, Google, etc. hold the TPM root key, and you as a corporation actually do not have root yourself. They can force you to do things you don't want to do. It makes you more vulnerable, because if it is in MSFT's interest (or they are coerced by the state to do so clandestinely), a lot of threats can materialize, and they don't even need an 0day to do so!

If it starts becoming status quo, the freedom to do the things you need to your devices starts going away.


> The issue is when people and corps buy devices, they do not have effective root.

Totally!

The answer isn't to ban the tech, it's to ensure that everyone can set their own root if they so choose.

Want your system to only load firmware that's been signed by Debian to ensure that unfree blobs aren't sneaking in? Great! That's basically what we want too (s/Debian/our own supply chain integrity folks/g but same concept).


The thing is, it's effectively impossible to politically guarantee that, so better to ban attestation than walk the fine line of "you're not allowed to do that".


> We need this in our corporate client device fleet to counter specific threats

Can you please expand on what you verify via remote attestation and against which attack vectors this protects you?

Does this protect you against the usual attack vectors of your employees logging in on phishing sites, downloading malware, running office macros etc? Stealing your data usually does not need any root/kernel access.


>We’re not doing this to control data once it’s on an end-user’s computer. We’re doing it because we have a regulatory (and moral) obligation to protect the data that is entrusted to us.

>Cloud and Business products is what, ~2/3rds of Microsoft’s revenue at this point? This isn’t being driven by the MPAA or whoever looking for better ways to screw over consumers.

Except... yes, it is. When your ur-business case was "do computation on someone else's computer, but ensure the operator cannot have full transparent access to their own computer's operational details", you are in the end casting the first stone. Just because I don't have an LLC, Inc., or other legal fiction tied to my name doesn't mean I'm not bound by the same moral imperatives you claim to be. More importantly, I am not willing to sell everyone else's computational freedom up the river for a pile of quick bucks.

Get your collective heads out of your arses. Get back out in the sun. This nonsense is taking every last bit of potential computing ever had and ripping it out of the hands of the lay consumer unless they dance the blessed dance of upstream.

You do not know best. You think you do. You've gotten where you are without that which you seek to create, and once created, that which you make can and will never be taken back. It creates and magnifies too much power asymmetry.

My god, have you never really stopped to think through the ethical implications? To really walk down the garden path?


We’re doing it because we have a regulatory (and moral) obligation to protect the data that is entrusted to us.

The same insane regulations that were probably the result of corporate lobbying are now the excuse for these hostile features? WTF?


The regulations that were the result of lobbying by privacy and digital sovereignty advocates, contrary to the lobbying interests of Big Tech(tm)?

> are now the excuse for these hostile features

These features may be hostile if you don't control your own root of trust or if your vendor burns fuses prior to selling a device to you. If you were expecting otherwise, in that context they sold you a defective product.

Those same features are beneficial if you run your own root of trust. They help maintain control over your devices and increase confidence that they have not been coopted by your adversaries.


So yes. Now the bar for rudimentary, independent computer use requires an understanding of the finer points of cryptography.

Do you not understand how insane that prospect is?


> This isn’t being driven by the MPAA or whoever looking for better ways to screw over consumers.

Then they should prove it. I'm sure they have lots of expensive lobbyists under their employ, have them go to the government and tell the politicians the computer industry needs regulation to make it illegal to screw over users by depriving them of their computer freedom. If effective rules make it into law, I will trust their intentions.


Let me put it like this: it is nowadays much bigger than MPAA-related topics. They will happily join the party and bring snacks.


"Somebody" means billion dollar corporations that already have way too much power over people. Their ability to want and actually realize this bleak attestation future needs to be regulated out of existence.


Somebody wanted this, but it might not be a user, and it might not fill any user need whatsoever. If I am the primary user of the hardware, my wants take precedence over anyone else, including the manufacturer and the software vendors.


agreed, your wants take precedence. If there are enough buyers like you who care about hardware freedom they'll drop this feature.


I was not making a market-based argument, as the preferences of other buyers are irrelevant. Having purchased the hardware, my wish to run software on it is the only justification required.

This is not something that is amenable to market-based solutions, as the market contains bad actors pushing for infringement of my ability to run software.


It may make web scraping even more difficult, or anonymity online more broadly.


It will make web scraping impossible. You will not be able to install any software that can do that, because it won't be let into app stores (i.e., it will never get the right keys and permissions, and websites won't even respond to it without attestation).

If you somehow try to work around this and use your "attested" machine and user ID to do it (because websites will require attestation and your script can't have it, but maybe it can run under your user account, for example), monitoring systems will soon block your account for "suspicious activity", and it will be next to impossible to reinstate because Google and Microsoft don't provide any human support, unless you are some 1MM+ influencer on Instagram and manage to start a ruckus on social media.

The outlook is quite bleak :-(


> Being able to potentially play 4K Blu-ray Discs on your PC again.

We cannot do this now because user-hostile vendors have locked the functionality away from us. I think this is a perfect microcosm for the whole tradeoff: lock away the rest of your userspace and we might let you watch the new Batman DVD you already bought.


> we might let you watch

This choice of words perfectly captures the arrogance of these copyright corporations. Who are they to dictate how our computers work just to maintain their irrelevant business model? They're ones who should be playing by our rules, not the other way around. It makes me wish piracy was as bad as they make it out to be, to the point it kills them.


It feels like they've been trying to shove variations on the celestial jukebox down our throats forever. Supply chain integration is definitely how they could win. I can see it now, DIVX on the desktop.


Why do you assume artists cannot define how you consume the media they created? Yes, they royally screwed up, but it is their product. They define how they sell it.

Just to be clear: I hate how movies are currently distributed.


> Why you assume

It's not an assumption. It's the reality of the information age. The only way to have complete control over information is to not publish it. Once data's out there, there's virtually no way to control what will be done with it. Creators started from a lost position: they want to publish their works and yet they want to somehow control what happens to "their" data. The level of tyranny necessary to accomplish such an end requires the destruction of free computing as we know it. We're already seeing shades of it today: computers that aren't really ours, they only do some corporation or government's bidding. It's only a step away from such digital copyright enforcement nonsense to far more serious matters like cryptography regulation.

So I'm not the one assuming anything. It's creators who live under this notion that they own their creations. The truth is public domain is the default state. Intellectual property laws are responsible for bending reality and introducing this assumption that you can even own ideas to begin with. That was workable in the age of printing presses but not in the 21st century where everyone has multiple powerful globally networked computers at home. I for one think computers are a far more important innovation than almost everything humanity has ever created and I don't think enabling creators to continue living under such illusions is important enough to cripple the potential of computers. I want society to eventually reach a post scarcity state in the real world, mirroring the digital world. I don't want corporations creating artificial economies where there are none.

All creations are just data, and data is just bits, and bits are just numbers in base two. All intellectual work comes down to humanity discovering a really big unique number. How is it even sane to claim ownership over such a thing?


Why should an artist be able to dictate that? If I sell you a coffee, should I be able to dictate what you do with it afterwards? What? You want to compost the grounds? That’s a violation of my vision and our sales agreement!


> but it is their product. They define how they sell it.

It's not "theirs", they have only been granted a limited-time monopoly on it in order to incentivize the initial creation. If they abuse that monopoly, we (i.e. society) CAN take it away.


> limited-time monopoly

If only... Copyright monopolists lobbied governments to the point they extended the duration of that monopoly to multiple lifetimes. About 5 years of copyright protection is more than enough for creators to make their money back and then some but for some reason these people saw the need for it to last centuries. Copyright is functionally infinite, we'll all probably die before the works we enjoyed enter the public domain.

In effect, we've all been robbed of our public domain rights. The social contract was: we'll all pretend the creator's works are scarce for a while so they can make money, and then it will enter the public domain. These monopolists aren't really keeping up their end of the bargain so why should we keep ours? The second we stop pretending, they're done.


Just to be clear, this little subthread is a complaint about how Microsoft is implementing a "feature" which is harmful to users and happens to be in line with a decades-old recording industry business wish. There are no content creators in this discussion. This is predominantly because the toxic capitalism of the recording/movie industry is untethered from content creation--e.g. Disney buys the content then extracts value from consumers without consulting the creators.


I can watch 4K Blu-rays just fine on my PC, with hacked firmware for my Blu-ray drive and the proper decryption keys.


Prior thread, https://news.ycombinator.com/item?id=32234561

> On-premise, open-source, customer-owned remote attestation servers are possible. Avoid outsourcing integrity verification to 3rd-party clouds.

With owner-operated OSS MDM & attestation servers, PCs can have diverse, owner-customized OS and configs, reducing monoculture binary blobs.
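An owner-run attestation check can be sketched in a few lines: the verifier sends a fresh nonce, the device returns a quote binding that nonce to its current PCR value, and the server checks the quote against the owner's own allowlist of golden measurements. This is a toy model only; the shared HMAC key stands in for the device's asymmetric TPM attestation key, and the OS names and golden values are invented:

```python
import hmac, hashlib, os

# Hypothetical stand-in: real quotes are signatures by a TPM-resident key.
DEVICE_KEY = os.urandom(32)

# Owner-maintained allowlist of approved boot configurations -- not a vendor's.
GOLDEN_PCRS = {
    "debian-12-custom": hashlib.sha256(b"debian-12-custom-boot-chain").digest(),
    "nixos-23.05":      hashlib.sha256(b"nixos-23.05-boot-chain").digest(),
}

def device_quote(nonce: bytes, pcr: bytes) -> bytes:
    # Device side: bind the current PCR value to the verifier's fresh nonce.
    return hmac.new(DEVICE_KEY, nonce + pcr, hashlib.sha256).digest()

def verify(nonce: bytes, pcr: bytes, quote: bytes) -> bool:
    # Owner-run server: check quote freshness and allowlist membership.
    expected = hmac.new(DEVICE_KEY, nonce + pcr, hashlib.sha256).digest()
    return hmac.compare_digest(expected, quote) and pcr in GOLDEN_PCRS.values()

nonce = os.urandom(16)
good_pcr = GOLDEN_PCRS["debian-12-custom"]
assert verify(nonce, good_pcr, device_quote(nonce, good_pcr))

unknown_pcr = hashlib.sha256(b"unknown-config").digest()
assert not verify(nonce, unknown_pcr, device_quote(nonce, unknown_pcr))
```

The point of the sketch is that whoever curates `GOLDEN_PCRS` decides what "trusted" means; with an owner-operated server, that's the device owner rather than a platform vendor.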


PCs can have diverse, owner-customized OS and configs,

...which won't be able to interact with any of the walled gardens which will be enabled by these same technologies.


That cuts in both directions. If sufficiently large customers run their own attestation servers, the discussion moves from binary yes/no attestation to the details of interoperable measurements, single-purpose OS components and provable security vs vendor lock-in.

Walled gardens care about including their large customers, so it's not as simple as locking them out. There is also an ongoing EU legislative effort to mandate digital platform interoperability, which will likely apply to attestation.


...and the large customers are going to treat users as the attackers to be secured against, so I don't think that's going to help one bit.


Many owner-defined OSes would be a bit better than a handful of vendor-defined OSes being imposed on the entire planet. Influencing device owners to provide sensible policies would be the next step, but at least there would be the possibility of competition, and the voices of multiple economic stakeholders.

Attestation can also be entirely local, e.g. between a device and a USB key with OSS software that is configured by the owner.


Once again: to meaningfully exist separate from digital overlords, you must be able to grok the nuances of cryptography. That is dead on arrival for 95% of the populace.


Do you consider Let's Encrypt to be a digital overlord? Many are using this service successfully without being cryptography experts.

Why can't there be a "local attestation server" equivalent to Lets Encrypt, e.g. offering the Top 10 most-requested OS configurations which are not being addressed by digital overlords?

Cryptographer priests are scarce, but not numerically capped or fully monopolized by digital overlords.


Let's Encrypt is spawned by digital overlords, btw.

Might be overseen by a neutral group, but it was spawned out of them.

And I'm sorry, but no. Absolutely not. If I have to teach someone to do a damn certificate signing request just to get a kernel tweak done, or (nightmare mode) just to run a self-written hello world, because the powers that be have decided that nothing less than perfect non-repudiation of every binary ever built from now on is acceptable (the logical terminus of "apply cryptography to programming until top-down control is realized")... I'm not even completing the thought. This is a bad, bad, bad, bad, bad idea.


Very reasonable points but we are sadly entering a world with many legal+tech differences from the last three decades. The convergence of foo with bar is occurring in multiple societal stacks, in both top-down and bottom-up directions. Short of halting these flywheel transformations, OSS EEE is a safety valve.

If you are in the US, take a look at the recently approved UCC changes for CERs (controllable electronic records, e.g. blockchains and CBDCs), which will now proceed to US state legislatures, https://www.clearygottlieb.com//news-and-insights/publicatio...


DRM is separate from security when the user has control of what exactly they want to run and how much data it may collect.

RA is a shortcut companies are taking to market for privacy and security while not mentioning lock-in, network effects and DRM. When pressed, they will ultimately still say privacy and security aren't 100%.

That said, could a Linux distro come out with the same thing as these PATs using IMA/EVM/TPM/ME/PSP? Probably (Graphene has some support for RA, but I haven't looked in depth), and as long as Cloudflare had multi-platform support, and the implementation still allows letting me have control, I wouldn't see the problem. Bugs can be exploited by anyone with the know how, and I see no reason why I should give up control because companies want to take a shortcut instead of designing proper systems.

We see Epic has already said they won't support Linux because of anti-cheat, so that scapegoat exists, but there are other games that didn't go that route! Porting costs, and size of install base are real excuses among others, but anti-cheat is a BS excuse. No shortcuts by companies and no short cuts by platforms, but we know lock-in, network effects and DRM are too good to let go.


To be clear, any OS can attest they're running signed kernel/modules, it says nothing about how determined an attacker is. If you just want some confidence a user isn't a bad actor, there are other ways of doing that.


Attestation is great for cloud computing. Horrible for user devices.


Attestation is great for consolidating power and eliminating competition. As far as the cloud goes - you will only be able to use AWS, Google or Azure, because they will have proper certificates and attestation. Browser support will be built for "big three". For all the others there will be a fee for said certificates which will increase barriers to entry, and there will be even less competition and choice. Existing cloud providers will be the only game in town, just like it is with mobile operators and cable companies.


Here is an example of a contact discovery protocol by Signal using remote attestation [1]. The client itself can verify that the code running on the server is authentic. It is computationally intensive, but beneficial to the end user.

[1] https://signal.org/blog/private-contact-discovery/
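For context on why an attested enclave is used at all: the naive alternative of uploading hashed phone numbers protects nothing, because the phone-number space is small enough for a server to brute-force every hash it receives. A toy demonstration (using a made-up 10,000-number range; real numbers have more digits but remain feasible to enumerate):

```python
import hashlib

def h(number: str) -> str:
    # Naive "privacy-preserving" upload: hash the contact's phone number.
    return hashlib.sha256(number.encode()).hexdigest()

uploaded = h("+15550001234")  # what the server receives from a client

# A curious server simply hashes the whole space and inverts the upload.
rainbow = {h(f"+1555000{i:04d}"): f"+1555000{i:04d}" for i in range(10000)}
assert rainbow[uploaded] == "+15550001234"
```

Because hashing alone can always be inverted this way, Signal instead runs the matching inside an enclave whose code the client remotely attests, so the operator never has to be trusted with the raw (or hashed) contact list.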


I really wonder if we need to be concerned at all about this. In the end people are always able to vote with their feet. We've seen this happening with SACD (Super Audio CD), the successor to the CD which had unbreakable encryption. People ended up simply ignoring it. Today, you can buy HiRes audio files without DRM of any kind online. These files have mostly been ripped from SACD's with their unbreakable encryption. The record companies thought they might as well sell them online, lest they become worthless.

If people feel they're being hampered too much by Windows' DRM they'll most likely switch over to some RISC-V processor from some Chinese company. I'll bet you that if that happens Microsoft will flush Pluton down the toilet before you can say "Clippy." It simply isn't worth it to them. Sure they want to lock in their customers even more, but as soon as they start leaving in droves they'll swiftly move to stem the outflow.


The only reason SACD failed and you can buy DRM-free music today is because people were concerned and pushed back hard against that crap.


Has this ever been challenged by reporting it to the FTC and consumer protection agencies in any country?


I totally agree. I don't want Microsoft controlling my devices. If Cloudflare starts offering this feature easily, like with the click of a button, almost all websites will start using it, making the web more difficult and useless to navigate.


> Published by Gabriel Sieben

> Gabriel Sieben is a 20-year-old software developer from St. Paul, MN, who enjoys experimenting with computers and loves to share his various technology-related projects. He owns and runs this blog, and is a traditional Catholic.

Well isn't that something, someone who writes blog posts proselytizing the importance of individual freedoms while also making sure his readers know he's a "traditional Catholic".

The level of cognitive dissonance is impressive.


Why do I need Remote Attestation to enjoy 4K content when I can more conveniently pirate it?


See: https://xkcd.com/488/

Also, since a lot of different movie streaming services (e.g. Hulu, Disney+) have launched, a lot of content has moved off of Netflix, leading to a higher piracy rate.


without any negative meaning whatsoever, I wish I had 1% of the confidence today that this 20-year old person has.

I do not agree with this person's conclusions on any article they've self-promoted on this site, but their self-confidence is unparalleled.


This is exactly how 3rd party apps and clients will be locked out of the services they depend on. In a world where AR and "metaverse" platforms take off, this will ensure that they're accessed only from approved clients.

This is also how OS, app distributors and platforms will ensure that they get their 30%+ cut of all revenue generated using their products, as well.

Similarly, this is how OS providers will ensure that apps built for their platforms can't run on other operating systems. You already can't run SafetyNet-enabled Android apps on other platforms despite Android support existing on Android and Linux.


We've gone from open Windows and Linux PCs where you could install anything to walled gardens everywhere.

We need to bring the giants down to a level playing field and stop this nonsense.


Completely agree. How could such a thing be achieved?


Anti-trust enforcement.

While there is a need for complex criteria, there is also a need for something simple to base enforcement on.

The simple criteria should be something like having more than some number of customers/users gets you automatic scrutiny and forces you to have things like customer service people along with government required metrics (10K-type things) and some larger number of customers/users forces you to break up.

It's the uncontrolled aggregations of users and data that are the problem.


You probably don't want to hear this, but one way it could happen is the Russians will get locked out of all this stuff because of sanctions, so they'll have to create their own separate open world. Yandex will build the open platform like they are releasing big language models and are not scared of the consequences of the bad people, like the Russians, getting ahold of them. Yes, the open platform will be lame compared to what the majors are offering, but Linux on the Desktop was never spectacular either.


> You probably don't want to hear this

Why? Part of what you said is something I already believe in. I think it's only a matter of time before the international network we enjoyed in the early days of the internet is destroyed by governments and their eternal struggle to impose their own incompatible laws on it. One day the internet will fracture into several regional networks with well delimited and filtered borders.

I for one am glad to have known the internet. It was truly one of the most wonderful creations of humanity.


You should read about North Korea's linux distro if you believe that they will be forced to create an open computing world.


China is already creating their own world and it's even worse.


legislation of openness (of protocols used) and interoperability. In the same vein that reverse engineering a compatible, competing product is protected by law.

outlaw any use of methods by which clients are discriminated against, including using remote attestation to do so. Similar language has been used in the DMCA legislation to similar effect (i.e., the software anti-circumvention provisions).


This is a fairly popular take around here, but is it compatible with the EU’s new DMA? It seems if anything legislation is moving in the opposite direction of strengthened walled gardens.

They just made it illegal to forbid sideloading for example. On its face it’s going to be illegal for Apple/Android to use attestation to lock down their devices further, indeed they are now legally required to open up in the EU.

Maybe they would go in one direction in the US and the opposite outside? Seems unlikely to me though…


I don't know. What's happening now with Android apps is that sideloading works just fine, it's just that SafetyNet apps refuse to work on other platforms like Windows, Linux or even un-blessed Android distributions. I don't know how compatible that is with EU legislation, or if technicalities like that side step it or not.

> Maybe they would go in one direction in the US and the opposite outside? Seems unlikely to me though…

Companies seem to have no problem conforming to particular legal regimes all over the world and not importing/exporting them elsewhere. I believe companies will try everything in their power to protect their cash cows in the US for as long as they possibly can.


Windows* and Linux, not Android and Linux.


I'm convinced Cloudflare is a giant anti-privacy man-in-the-middle attack on the entire web experience.

It would be a conspiracy theory to say they were created by a three letter government agency, but if I was running one of those three letter agencies, this is exactly the kind of company I'd set up and control. People just give them their TLS keys lol

If you use a VPN or just like browsing in privacy mode, it will make your life as difficult as possible by having you fill out multiple captchas. And even then, it will sometimes not let you through.

If you're running a website, please stop using Cloudflare.


I hate Cloudflare with burning passion. Not only is it a US company that centralizes internet infrastructure around itself, it also terminates TLS for way too many websites and sees their cleartext traffic. It then actively meddles with said traffic and punishes you for something you have no control over (your IP address). It's a dystopian nightmare only second to online ads. But ads at least can be blocked — you have zero agency in case of Cloudflare.

Whenever I see the "one more step" crap, I just close that tab.

Cloudflare needs to stop existing, and it needs to do so yesterday.


> It's a dystopian nightmare only second to online ads. But ads at least can be blocked — you have zero agency in case of Cloudflare.

About that, I imagine the millisecond that you can validate using remote attestation that a client has no adblockers, Cloudflare will add a remote attestation "gateway" (like the one they have now with the captcha) that will, overnight, give every Cloudflare customer (so half of the internet) the ability to block users that may have adblockers.

It's simply too juicy of a service for these people.


The scary thing though is that if Cloudflare decides to pivot into targeted ads... You get the idea. No single company should ever be allowed to have this much control over the internet.


While I tend to agree with all of your post, I also have to admit that as a service provider it's nice to be protected from malefactors. If I had to point to the moral problem here, it's not Cloudflare's existence, but rather the lack of awareness that service providers have about the trade-offs they are making - very similar to the problem we have with "free" services like Google Analytics, or CDN hosting. In each case the programmer is trading away end-user privacy on an ongoing basis for transient developer convenience and a slightly cheaper-to-operate runtime with more failure modes. It's usually a bad trade, I think.


While Cloudflare is a big player, pretty much every service today has some form of third party MITM machine - things like Fastly, Akamai, Google Cloud Load Balancer and CloudFront all provide reverse proxy capabilities on the same level as Cloudflare.

Even if you run your own proxy and caching, you can’t trust your cloud provider not to DMA your keys unless you’re using trusted computing[0] (which ironically requires remote attestation if a company wants to verify it’s active on their CPU), and then chances are a dedicated three-letter-agency has exploits at the ready if they really need to extract information.

If a company isn’t running their own bare metal, nothing is safe.

0: https://aws.amazon.com/blogs/security/confidential-computing...


> While Cloudflare is a big player, pretty much every service today has some form of third party MITM machine - things like Fastly, Akamai, Google Cloud Load Balancer and CloudFront all provide reverse proxy capabilities on the same level as Cloudflare.

the normal way to do this is to run your static content through CDNs and allow your dynamic content to hit origin.

you're not saved from DDoS of course, but you'd be surprised at how much cookies on static content can add to your CDN bill; usually people use a separate cookieless domain.


Cloudflare does have the ability to front your website without providing your private key to them, it's called keyless SSL: https://blog.cloudflare.com/keyless-ssl-the-nitty-gritty-tec...


They are still terminating the encrypted connection and get to see all data in cleartext so this is irrelevant to the concerns stated in gp. The only thing this scheme buys you as the website owner is that you can prevent CF from accepting new SSL connections without revoking the certificate entirely.


We don’t use CloudFront, but we use a competitor. Before we used it there would be millions of attempted malicious requests on our API. It doesn’t block everything, and it makes things a pain for 1% of users, but not using it would put tons of users’ data at risk.


>If you use a VPN or just like browsing in privacy mode, it will make your life as difficult as possible by having you fill out multiple captchas. And even then, it will sometimes not let you through.

I use a VPN and private browsing, and the worst I've been subjected to is getting IP/ASN blocked, which to be fair can be implemented without Cloudflare. I've had to fill out captchas, but that happens a few times a month at most, and it's never the captcha loop you mentioned.


How are you "convinced" by something when you have zero evidence for the claim? It sounds like you wish it were true.


> Some video games are already requiring Secure Boot and TPM on Windows 11.

Great...

Just now that we have Proton, Vulkan and other technologies allowing us to seamlessly run an incredible number of Windows games, we are going backwards and locking down again. That's even assuming we'll be able to run other operating systems at all. Because why stop at remote attestation? Force Secure Boot because, hey, no one is using other operating systems anymore since all cloud providers are requiring remote attestation.


This "security" feature confuses the hell out of me. HOW in the sam hill are they going to prove every process running with privileges is in a "secure" state, how can that be possible? Even with constant wasteful memory scans there could be data files that evade micro-brother. So what if it booted in an authentic manner, the user could have tampered with any number of privileged processes, and the second they opened one of THREE web browser engines and ran some unknown 3'rd party advertisement code all bets are off. Not even going to get into malicious firmware/drivers that microsoft will happily sign (see: lenovo superfish, or razer driver menu privilege escalation for recent examples). This is more security theatrics from the same old players, when will it stop? When will all you supporters see what is happening? This is more about operational control than security; it makes me sick to see so-called hackers supporting such an obvious trap. Yall need to dump your shares and join the right side of history because it only gets worse until the weight of oppression ignites a revolution. If you're still holding, think about what is the next "security" frontier after you let them have this? Require every program on the OS, and then every website to sign javascript with cryptographic keys they don't have root control over? GTFO board room members, you're drunk. You want to throw out our freedoms just because microsoft and intel designed a garbage (UEFI) spec that doesn't have a way to physically prevent writes, and damn near requires you to use an ancient FAT filesystem? WHY WOULD YOU TRUST THEM NOT TO BOTCH THIS ONE TOO?


"Secure" doesn't mean secure for the user, it means that the interests and revenue streams of platform providers are secured.


I'm only here to say that I think Pluton is a completely ridiculous name.


It's a mythological reference: "In ancient Greek religion and mythology, Pluto (Greek: Πλούτων, Ploutōn) was the ruler of the Greek underworld." (https://en.wikipedia.org/wiki/Pluto_%28mythology%29). As someone else remarked in a related thread elsewhere, we already have his dog (https://en.wikipedia.org/wiki/Kerberos_%28mythology%29) as the name of a security protocol.


[flagged]


Please don't do this here.


[flagged]


Is calling anyone who disagrees with you a "corporate mouthpiece" similar to how some people argue anyone who doesn't say "ACAB" is a boot licker?

Here's my personal take: No one is forced to use remote attestation merely to use their device. It's only if you want to use some other service. So - if you don't want to play by the requirements of the service, then just don't use the service? It seems pretty simple. There's a country club that I want to eat in, but they require me to wear a suit jacket. I don't want to wear a suit jacket - so what do I do? Simply not eat at that country club and go get food somewhere else that doesn't require a suit jacket, or do I start a PR campaign about how evil suit jackets are, and call anyone who doesn't mind wearing a suit jacket a shill for Big Golf?


I don't like knowing that my image is recorded by surveillance cameras when I go out in public.

A few decades ago someone might have said, "Well, you can choose to just not go to those places which have surveillance cameras," and that would have been a perfectly valid suggestion to solve my immediate problem at the time.

A few decades later, in 2022, the city that I live in has cameras and facial recognition everywhere. Avoiding them means never going to the supermarket, visiting the CBD, driving practically anywhere.

I don't think the hypothetical person was a shill for "Big Camera". It seems much more likely that they don't believe or haven't considered the possibility that the proliferation of surveillance could lead to a future where stepping out your front door results in your image being sent to Amazon by your neighbour's doorbell. Or that they're fine with that future but have no sympathy for people who aren't, I guess.


[flagged]


You are, of course, welcome to make whatever judgement you like about my preferences. "Abnormal psychology" seems a bit extreme though.


Because to them, what they want "normal psychology" to be is a docile, obedient, bootlicking sheep.


The "use/build your own something else" argument is no longer valid when the ones imposing such requirements have effectively become a monopoly.


Can you name a single important service that will use this technology which has a monopoly on that type of service, where there is no alternative?


Everything on every single app store in existence today. Even Android apps now have the ability to require hardware attestation and lock out access to rooted phones.


No app store forces remote attestation. Some offer APIs to do it.


And eventually everything essential will be using those APIs and we'll be locked out.


>There's a country club that I want to eat in, but they require me to wear a suit jacket.

Bad example. Remote attestation is more like ordering takeout to eat in your own home, but the delivery guy refuses to hand you the food you ordered unless you provide cryptographic proof that you will be eating while doing a handstand. Never mind the fact that it's none of his business how you eat in your own house.


Ok, much better analogy. But it still points to a simple solution: Just don't order food from those people.


The solution is ok while you have options. But when all banks require attestation to access your account, you can no longer choose.


> Here's my personal take: No one is forced to use remote attestation merely to use their device. It's only if you want to use some other service. So - if you don't want to play by the requirements of the service, then just don't use the service?

Completely and utterly disingenuously wrong. SafetyNet is already required by most banks, and this will be followed by medical and government services and all forms of media.

This free market 'vote with your wallet' bullcrap only works if there's an alternative to vote for.


Please reread my statement. SafetyNet may be required if you want to use online banking with most banks. If you don't want to use online banking, then you aren't subject to the requirements for SafetyNet, and you're free to roll your own Android.


I read your statement. It's still disingenuous nonsense.

And what happens when all the physical branches within 100s of km of you shut down because 'online is so convenient' or your doctor starts using an app that requires it, or it's mandatory to do your university exam or log into your government's tax portal or the next covid where you need it to enter a shop?

Or what about when every piece of hardware you can buy has a locked bootloader, or your local phone providers decide to require using their app that puts ads on your lock screen?

Or your local taxis get price dumped out of business by uber which requires it?


It's still disingenuous nonsense.

How interesting that my original comment which called them out on this blatant shilling has been flagged. While it's not traditional politics, it certainly gives one an appreciation for the effort with which they're attempting to hide their true motives.


At that point it would be worth investing in a secure device which you can use to do things that require security while you can do your own thing on your insecure device.


The fact that you're calling it "insecure" is a very telling reflection of how much you support this authoritarian dystopia.


Let's say I'm a chat service. In order to authenticate a connection with me you need to give me an access token. When you successfully log in, I want to give you an access token which you can use in the future to prove who you are. I don't want this token to be stolen by someone else and used to send malware to everyone on your friends list. To avoid this I want to make sure that this token is always securely handled. Remote attestation allows you to prove that you are able to offer this secure environment, so I can trust you with this token.

You may say that you want to be able to steal this token for yourself to make your own chat client. Unfortunately, if you can steal the token, then so can malware. It's a trade-off between freedom and security. Platforms that suffer from being abused may opt to focus on security over freedom to offer a better service to their users.
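For concreteness, here is a toy sketch of the gating flow described above. All names are hypothetical, and an HMAC over a shared key stands in for the vendor's asymmetric attestation signature; real schemes (SafetyNet/Play Integrity, TPM quotes) use certificate chains rooted in the hardware vendor, not a shared secret:

```python
# Toy model: the server only issues an access token to clients that can
# produce a valid attestation statement over a fresh server nonce.
import hashlib
import hmac
import secrets

# Stand-in for key material held by the client's trusted hardware.
ATTESTER_KEY = b"stand-in-for-secure-element-key"

def verify_attestation(nonce, statement):
    """Check that the client's secure element 'signed' our fresh nonce."""
    expected = hmac.new(ATTESTER_KEY, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, statement)

def issue_token(nonce, statement):
    """Issue an access token only to clients that pass attestation."""
    if not verify_attestation(nonce, statement):
        return None
    return secrets.token_urlsafe(16)
```

A client that can't produce a valid statement over the server's nonce simply never receives a token, which is exactly the "whitelist of allowed clients" described further down this thread.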


To avoid this I want to make sure that this token is always securely handled. Remote attestation allows you to prove that you are able to offer this secure environment, so I can trust you with this token.

That's called overreach. Absolutely massive overreach. To go one step further, do you also want to prove that my house has no windows, so "attackers" can't see what you show me?

Trust is trust, not proof. Asking someone to prove to you something is to say that you are not trusting them! It's like asking your spouse to prove that he/she is not cheating on you --- and we don't find that acceptable in the physical world either. The whole idea of trusting someone is that you do not have to constantly monitor and enforce what they're doing. I elaborated more about this "destruction of trust" here: https://news.ycombinator.com/item?id=32283134

You may say that you want to be able to steal this token for yourself

The fact that you're calling it "stealing" is also insane. As soon as that token leaves your system, it is no longer yours.


>Trust is trust, not proof

While I can trust most of my users not to try to steal an authentication token, there is always a small number of people whom I can't. Being able to prove to me that you can't steal a token improves the security of my service.

>Asking someone to prove to you something is to say that you are not trusting them!

Unfortunately, I do not trust everyone in the world. While there are some people I can trust, my service is marketed to a large number of people whom I may not. If my service was just for my friends, I wouldn't need proof and could just trust them.

>As soon as that token leaves your system, it is no longer yours.

No, in the real world, when you take your company laptop home it doesn't automatically become your property. Just because I send a token to a client running on your device, it doesn't mean that you can do whatever you want with that token.


That's not security. That's submission.


Please elaborate.


Your entitlement to control what your users do in their house or on their device is sickening. Your right to control ends when you send the packet.

Requiring that everyone have a telescreen watching them at all times, wear a shock collar that goes off if their heartrate spikes and sing about how they love big brother all day would stop assaults, but that doesn't make it a secure or safe world -- it makes it a totalitarian hellhole.

No one pushing these features cares about security, or the firmware would be open and the keys would be changeable. They don't care about user rights or a better experience either, or it wouldn't be coming from Microsoft, and you would be able to register a public key taking responsibility for your own device that any locally provided service would be required to accept, or choose your own CA. You just want the ability to sell removing the nitro button, or having a slightly less unusable UI, or getting the ads off of the lock screen, for $9.99 a month.


>Your entitlement to control what your users do in their house or on their device is sickening

I'm not controlling what you do. I'm just making a whitelist of allowed clients for my service. You are free to do whatever you want with your computer, but if you want to use my service you need to use an allowed client.

>Your right to control ends when you send the packet.

Again, I'm not controlling what you are doing. I'm just requiring certain software to use my service. You are free to run whatever you want on your machine, but if you can't prove to me that you are running an allowed client I can choose to not let you use my service. You can't force me to serve you if I don't want to.

This isn't spying on you, or forcing you to do something against your will. I have designed a system whose safety is better since I can trust clients. Asking you to use a client I trust not to compromise the safety of my service should be a reasonable request.

>No one pushing these features cares about security or the firmware would be open and the keys would be changeable

TPM providers have many interests. They have an interest in developing a secure device, but they also have an interest in protecting their IP. Just because they care about protecting their IP, it doesn't mean they don't care about security. As for making your own chain of trust, go ahead, but I won't trust you.

>You just want the ability to sell removing the nitro button or having a slightly less unusable UI or getting the ads off of the lock screen for $9.99 a month.

Yes, just like how DRM stops people from ripping content you have invested money into. This can prevent people from leeching by using a modded client that removes ads for free. People spend a lot of money building services, and they rely on ad revenue in order to make money. If you choose to mod advertisements out of your client, I can choose to stop responding to your requests.


The danger is one day there might be no options that don't require remote attestation on Windows and Chrome.

Your bank doesn't allow Linux? Choose another bank! ... but what happens once every bank has those requirements?


I agree with you. In my country, almost every bank mandates that users install "online security modules" on their own computers before they'll allow them to log into their account via browser. These software packages were causing massive instability and low performance on the computers of everyone I know. I reverse engineered one of these things and found kernel mode software intercepting every single network connection and doing god knows what with that information.

"Just choose a company that doesn't abuse you" is absolutely invalid since abusing us gives them competitive advantage and they will compete to see who can get away with abusing us the most. Abuse should simply be illegal and the companies doing it shouldn't even be afforded the right to exist to begin with. People should not have to suffer abuse in order to learn their lesson and go elsewhere, abuse should simply not happen to begin with and any corporation violating this assumption needs to get fined out of existence.


Right, “just don’t use that service” rapidly becomes unreasonable when the service is your bank, the DMV, your university’s web portal, etc.


You can just bank in some other manner, like in person, or over the phone. You don't have a right to online banking on your mobile device. It's a convenience.


Sure, and you can grow all food yourself as well, grocery stores are a convenience as well.


They are far less of a convenience and far more of a necessity than online banking is.


What if every restaurant within 50 miles of your home required a jacket to eat (or take out) from there... and also required that you had purchased that jacket from the very same restaurant, at a 30% markup?


Then I would probably start cooking more often.


> Simply not eat at that country club

Your argument boils down to trivialising the imposition on the victim of not being able to use a service. How do you justify the presumption that it's trivial? What if I need the service to access health care, a bank account, education, welfare, etc.? What if there aren't alternatives? And as victims of discrimination everywhere will say, why should I have to sacrifice things, even if you think they are trivial, just because I have a preference?

The real argument should be around what is the legitimate need for attestation. Where there is a genuine need, a discussion could be had. For example, if it will threaten the bank's security or liabilities (not mine), then it's reasonable for them to at least evaluate the cost of me not providing that attestation and build it into their estimate of the cost of transacting with me (even up to the point of not doing business with me). That doesn't seem to be the discussion people want to have, though.


Everything we've done as a society up until now has not needed remote attestation.

That we aren't yet in a computational dystopia is proof enough that the absence of it has kept computing from getting so locked in that nobody can compete or navigate the barriers to entry.

As soon as we start seeing remote attestation getting implemented, that's gone. Period.


We all carry GPS trackers that send our personal info to large companies constantly. This has been normalized to the point where opting out is too expensive, even for people who would like to.


Asking from a place of ignorance, not argument:

Is opting out of physical location tracking expensive? I just checked on my Pixel and was able to find a setting which ostensibly can disable location services after a few seconds of searching through settings.

What am I missing here?


Even if you shut off GPS, your cell provider can trivially triangulate your position with acceptable precision based on your signal strength and ping response times.

(Also, many Cell Service Providers have a backchannel API that websites can use to geolocate the source of a connection given an IP + Port)
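To make "trivially triangulate" concrete: given estimated distances to three towers (derived from signal timing), the position falls out of simple algebra. A toy sketch with made-up coordinates and no noise model; real network localization has to average many noisy timing estimates:

```python
# Toy trilateration: subtracting the three circle equations
# (x - xi)^2 + (y - yi)^2 = di^2 pairwise cancels the quadratic terms
# and leaves a 2x2 linear system in (x, y).
def trilaterate(p1, p2, p3, d1, d2, d3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero if the towers are collinear
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# Towers at (0,0), (10,0), (0,10); distances measured from point (3, 4).
x, y = trilaterate((0, 0), (10, 0), (0, 10), 5.0, 65 ** 0.5, 45 ** 0.5)
# recovers approximately (3.0, 4.0)
```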


You'll need to turn off the cellular modem to prevent network level location triangulation. At which point it's not really a cellphone any more.

