Hacker News

Is there any way we can make Remote Attestation providers liable for any losses incurred while using their services? Can we make it so that banks, record companies, and individuals can sue Microsoft or Google if their system doesn't deliver on the promise? If we still see cheating in on-line gaming even though all machines are attested, can we get our money back?

I feel like part of the problem is that Remote Attestation providers get to have their cake and eat it too: they make a theme park, set up boundaries, and charge admission under the premise that it's safer to play in their walled garden than in a public park.

But if a bad actor slips through their gate and picks a few pockets or kidnaps a couple children, the operators get to say "not our problem, our services have no warranty -- read the EULA".

I feel like in the real world, if a park operator explicitly bills itself as "a safe place to play" it's their problem if someone goes on a crime spree on their property -- there is some duty to deliver on the advertised safety promise.

But somehow, in the software world people can control admission, control what you do and somehow have no liability if things still go off the rails. It's just a sucker's game.

Of course, I'd rather not see remote attestation happen, but maybe part of the reason it keeps creeping back is exactly because there is zero legal downside to making security promises that can't be kept, but incredible market advantages if they can sucker enough people to believe in the scheme.




> If we still see cheating in on-line gaming even though all machines are attested, can we get our money back?

The concept of remote attestation isn't somehow safer if it works perfectly, and it isn't clear to me that this is actually impossible to build (within an acceptable and specified liability constraint) as opposed to merely exceedingly difficult. I do relish the schadenfreude, though ;P.

> Of course, I'd rather not see remote attestation happen...

Interestingly, the CEO of MobileCoin told me earlier this year that they were "going deeper on discussions with [you] to design a fully open source enclave specifically for [their] use case" (which, for anyone who doesn't know much about this, currently relies on remote attestation and encrypted RAM from Intel SGX to allow mobile devices to offload privacy-sensitive computations and database lookups to their server). I wrote a long letter to you a few days later in the hope of (after verifying with you whether that was even true or not) convincing you to stop, but then decided I should probably try to talk to Kyle and/or Cory first on my way to you (and even later ended up deciding I was stressed out about too many things at the time to deal with it)... does this mean you actually aren't, and we are all safe? ;P (I guess it could be the case that this special design somehow doesn't involve any form of remote attestation--as while my core issue with their product is their reliance on such, I went back through the entire argument and I didn't use that term with THEM--in which case I'm very curious how that could actually work.)
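(For anyone unfamiliar with the model being described: the client checks a "quote" attesting to the exact code the server's enclave is running before handing over anything sensitive. The sketch below is a toy illustration of that check only; the names are hypothetical, and an HMAC over a shared key stands in for Intel's quote-signing infrastructure, which in real SGX involves Intel's attestation service rather than any secret the client holds.)

```python
import hmac
import hashlib

# Toy stand-ins -- NOT a real SGX API. In actual SGX, the quote is signed
# by an Intel-provisioned key and verified via Intel's attestation service.
ATTESTATION_KEY = b"stand-in for the quote-signing key"
EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave code v1.0").hexdigest()

def enclave_quote(measurement: str) -> bytes:
    """Server side: produce a 'quote' binding the enclave's code measurement."""
    return hmac.new(ATTESTATION_KEY, measurement.encode(), hashlib.sha256).digest()

def client_offload(query: str, measurement: str, quote: bytes) -> str:
    """Client side: only release the sensitive query if attestation checks out."""
    expected = hmac.new(ATTESTATION_KEY, measurement.encode(), hashlib.sha256).digest()
    if measurement != EXPECTED_MEASUREMENT or not hmac.compare_digest(quote, expected):
        raise ValueError("attestation failed: refusing to send query")
    # In reality the query would be encrypted to a key bound into the quote.
    return f"(encrypted) {query}"

# Honest server running the expected code: the query goes through.
ok = client_offload("contact lookup", EXPECTED_MEASUREMENT,
                    enclave_quote(EXPECTED_MEASUREMENT))

# Server running modified code: the measurement differs, so the client refuses.
bad_meas = hashlib.sha256(b"enclave code, backdoored").hexdigest()
try:
    client_offload("contact lookup", bad_meas, enclave_quote(bad_meas))
except ValueError:
    pass  # client walks away
```

The whole argument in this thread is about what happens when that refusal can be forged, i.e. when the hardware root of trust behind the quote is broken.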


Huh...maybe I didn't parse your comment correctly, but I just checked and I don't think I ever got an email from you on the subject? Totally possible I just bungled it, I'm terrible with names and my inbox is a dumpster fire :P

It's also interesting to see how the game of "telephone" works out when the message comes full circle. MobileCoin did reach out to me, initially to see if I would write a whitepaper on SGX. After I told them I would be frank about all my opinions, the conversation pivoted to "well, if you could make something that fixed this problem, what would it be?". Which I entertained by saying I think the problem may not be solvable, but whatever it was, it had to be open source; and "oh by the way, let me tell you about my latest projects, perhaps I could interest you in those". The conversation trailed off with an "I'll have my people call your people" and that was that, modulo a podcast I did for them about a month ago, which surprisingly didn't touch on SGX.

So: long story short, no, I'm not creating a solution for them, and I think remote attestation is both a bad idea and not practical. Is it worse than burning some hundreds of terawatt-hours of power per annum to secure a cryptocurrency? That is a harder question to answer: is climate change a bigger problem than remote attestation? The answer is probably obvious to anyone who reads that question, but no two people will agree on what it is.

To your point on RA being not impossible but possibly just exceedingly difficult – you might be right. My take is that remote attestation is only "transiently feasible": you can create a system that is unbreakable with the techniques known today; but the very "unbreakability" of such a scheme would cause ever more valuable secrets to be put in such devices, which eventually attracts sufficient investment to uncover an as-yet-unknown technique that, once again, breaks the whole model.

This is why I'm calling out the legal angle: the next step in the playbook of the corps currently pushing RA is to break that cycle -- by lobbying to make it unlawful to break their attestation toys. Yet, somehow, they still carry no liability themselves for the fact that their toys never worked in the first place. I feel like if they actually bore a consequence for selling technology that was broken, they'd stop trying to peddle it. However, if they can get enough of society to buy into their lie, they'll have the votes they need to change the laws so that people like you and me would bear the penalty of their failure. With that strategy, they get to decide when the music stops -- as well as where they sit.

I'd like to see a return to sanity. Security is fundamentally a problem of dealing with people acting as humans, not of ciphers and code. Technology tends to only delay the manifestation of malintent, while doing little to address the root cause, or worse yet -- hiding the root cause.


> Huh...maybe I didn't parse your comment correctly, but I just checked and I don't think I ever got an email from you on the subject? Totally possible I just bungled it, I'm terrible with names and my inbox is a dumpster fire :P

Ah, yeah: I was really tired when I wrote that last night and the sentence complexity was brutal ;P. I wrote the letter, but it felt weird to send it "out of the blue" as we don't ever actually talk; and I wasn't even sure I could trust that anything was going on at all, but had written this sad sad letter (lol) and I was just like "I shouldn't send this; maybe I should first have a meeting with Kyle about it, and maybe Kyle can decide how to approach you", and then I managed to overthink it so hard that I just gave up, because I was dealing with something else (and I wasn't even sure if Cory, who also started to get injected into my overly-complex strategy, would agree with me, which made it seem even more difficult).

> [everything else you said]

<3


lol -- I've done similar, I know the feeling (-_-). Feel free, tho, to reach out anytime in the future. I'd value hearing your opinion, especially on matters like this!


IMO this just seems like bargaining and hoping for a just world where the law actually applies equally and constrains too-big-to-fail actors. What would actually happen is various limits/exceptions would get written in, like as long as you used "proper" software (read: microsoft) and did "proper" audits (read: tediously check moar boxes) then you could pass that liability onto someone else or have it be "nobody's fault". We'd likely end up with the same software totalitarianism even faster, because companies would be even more incentivized to deploy cookie cutter centralizing solutions to escape the additional liability.

Never mind that you can't really put a dollar value on personal information to substantiate damages, or even on personal time spent dealing with the fallout from someone else's negligence, which is one of the fundamental problems with our legal system.

(There's also the elephant in the room that one of the main industries clamoring for ever more "security" still continues to insist that widely-published numbers (ssn/acct/etc) are somehow secret.)


Perhaps incentivizing better methods is more helpful than focusing on outcomes in this case.


> Is there any way we can make Remote Attestation providers liable for any losses incurred while using their services?

RA is a use-case neutral hardware feature, so it doesn't really make sense to talk about making providers liable for anything. That's an argument for making CPU manufacturers liable for anything that goes wrong with any use of a computer.

The sort of companies that use RA are already exposed to losses if RA breaks; that's why they invest in it to begin with. Console makers lose money if cheating is rampant on their platforms, for example, because people will stop playing games when they realize they can't win without cheating.

So what you're saying is: let's incentivize these already-incentivized people to use RA even more, and moreover, let's strongly incentivize companies that don't use it to start doing so. Because if you think governments will say "oh, you didn't use the best available tech to protect the kids, fair enough, no liability", then you're not very experienced with how governments work! They will say "you should have used RA like your competitors, 10x the fine".


That feels like making a bike lock manufacturer liable if someone uses an angle grinder and steals your bike anyways.

In practice, both a good bike lock and remote attestation raise the bar against attacks significantly, without providing 100% security.



