Hacker News

Whenever this conversation appears in the news and I listen to various law enforcement and political folks defend some sort of need for breakable encryption, I always jump to the logical end. Or at least one possible one.

In the future there WILL be brain-computer interfaces. Memory offloading, recording of visual and auditory cortex data, and other more mundane uses. This may seem like fantasy/sci-fi. But barring some societal/technological collapse, this will eventually happen. If the precedent is set for breakable encryption/back doors, this will absolutely be used to "subpoena" people's "brain information" in that context. The invasiveness will have no end.




> If the precedent is set for breakable encryption/back doors, this will absolutely be used to "subpoena" people's "brain information" in that context.

I believe this would be a terrible idea because people's recollection of events isn't going to magically improve just because someone connects to their brain directly instead of asking them questions under oath. Add dreams and the other literally crazy things that go on in people's minds, and there's a big filtering problem. Judicial systems that take "brain dumps" to be the truth would remove the need for human judges, and so we'd end up with a "Minority Report" plus "1984" plus "Skynet"/"Terminator" situation: people's brains being dumped (or "hacked") all the time and any information in that dump used against them to detect pre-crime and stop it. The logical conclusion is that all humans in the future are going to be in prisons run by robots, unless they're wiped out like we've seen in many books and movies already.


Of course it's a terrible idea, but they're gonna do it.


The scary thing is that a large number of people (sometimes in Australia I feel like it's more than 50%) _would_ give access to their brain, vote for it even. People crave feedback, and they can be scared of themselves, so something in their heads monitoring everything could actually be reassuring in an almost religious sense.

Australians also don't think a lot in general (many exist who do, but as a general populace we're culturally weak and short on history), so the long-term effects of such reassurance just never really make it into the discussion.

Anecdotally there’s been a slight uptick in my social circles recently about people genuinely caring about privacy, so maybe there’s hope


The difference between the data you could get via a smart phone or smart watch and "brain information" is already quite thin.


Yes! This is a point I always try to bring up. Asking for someone’s smartphone password is like asking for a key to unlock their memories. A smartphone is just another substrate for our consciousness.


This is the argument I've made to people over the last few years, and I think it is far better and more substantial than the technically based ones that so typically come up (and already have in the comments here, the whole "well, a key will of course leak" thing). It's risky and unnecessary to go to technical arguments for something that's a moral matter, because technology changes. If you base your entire opposition around the key, what if at some point someone does have an entirely formally verified stack and strong measures, and can reasonably argue that keys aren't going to leak? Take Apple's master key, for example: its becoming available would be an enormous thing for both criminals and ordinary owners of iOS devices. Yet while there have been plenty of flaws exploited to bypass the need for it, the private root key remains unleaked.

The real question that I would want to see some member of Congress put to agencies is:

>"Do you believe there should be any inherent limits at all? If we developed the technology someday to read people's minds, should it be permissible to go through their brains with a warrant? It would certainly let you find those guilty of some 'crimes', where for 'crimes' we should keep in mind that gay sex and interracial relationships were felonies in the near past."

I mean, that's the real thing: if security agencies could root through people's brains, I see no need to beat around the bush that likely at least a few truly horrific crimes would be stopped or solved. There would be children saved, terrorists stopped, murderers caught. But I think not just the abuse of it, but even the use of it to eliminate any gray area for a human society would be so horrific that it's just plain not worth it. That yes, some children will be abused/killed, some murderers will escape, some terrorists will succeed, and that really is the price we need to pay. That we should try to reduce it as much as possible, but only in balance with strong privacy and an inviolable personal sphere. And that should include artificial augmentations to our minds, which typical mobile devices are arguably already at the point of being.

The incentive structures right now for law enforcement agencies and intel agencies remain geared always towards more, more, more, and towards paying attention to singular big harms rather than small harms spread across enormous swaths of the population. It arguably hasn't evolved much from decades and centuries past. I think that's the ground to fight on though: will they argue that total erasure of the private sphere is worth it? Will the public agree? I think the answer is no, and with that established it's a lot easier to argue back against the typical "think of the children/terrorists/drugs" attack.


The next question (assuming they feel it should be allowed) needs to be: do they feel there should be allowances for someone to be above this law? In our current climate we are seeing criminal-looking behavior in the US by people within the Executive branch of the government. They have been citing Presidential Privilege, but if mind reading became legal there would be multiple issues here.

This law would rid us of the last safe harbor: your own mind. At that point, allowing some individuals to skirt this law would create a class distinction of the ultimate scale: those who are allowed to keep their own thoughts to themselves, and those who aren't.


Well put. To wander a bit off topic: I feel this is a specific example of a general problem in free societies. There is no perfect safety without perfect surveillance. In my opinion, it is necessary to be at peace with the fact that there will be a certain level of bad outcomes in exchange for the protection of general freedoms. There will be crime, murder, kidnappings, embezzlement, death by neglect, etc. It is unavoidable absent a perfect surveillance state. I, for one, am willing to accept the risks. Others will disagree. But I think that in a perfect surveillance society you are also perfectly stagnant.


There will be no perfect safety. Period.

There will be no perfect surveillance. Period.

Humans are humans, and inherently imperfect.

Until all humans are eliminated, there will be no perfect ... anything.


> It's risky and unnecessary to go to technical arguments for something that's a moral matter, because technology changes.

True.

> If you base your entire opposition around the key, what if at some point someone does have an entirely formally verified stack and strong measures and can reasonably argue that keys aren't going to leak?

Irrelevant, since formally proving the stack says absolutely nothing about compromising the system at a human level, and 'reasonably argue' does not mean 'proving that the key cannot leak'.

The key will always leak. It's just a matter of when. There will never be any technical solution to that, and it's not a technical problem. It is a hole which fundamentally cannot be plugged, and the hole is people. Technology has nothing to do with this.

>>"Do you believe there should be any inherent limits at all? If we developed the technology someday to read people's minds, should it be permissible to go through their brains with a warrant? It would certainly let you find the guilty of some 'crimes', where for 'crimes' we should keep in mind that gay sex and interracial relations were felonies in the near past."

But that's always a question of legality, not technical capability. Legislation is all about where we draw lines.

However, it is entirely true that legislation (or rather, the 'undefined behaviour' in it) tends to be abused a great deal before it can be shut down.

The argument that the key will leak is not technical, and strictly speaking, nor is the argument that 'technology may one day do this and then where would we be?'. But one is a lot less speculative than the other, and admits no mitigations.


>The key will always leak. It's just a matter of when. There will never be any technical solution to that, and it's not a technical problem. It is a hole which fundamentally cannot be plugged, and the hole is people. Technology has nothing to do with this.

Ok, so again, what would your response be if some law enforcement official said, "Well, what about Apple then?" The iPhone is now 13+ years old; why hasn't their master key leaked? Microsoft or Google signing keys, same boat. Or what about SSL in general, for that matter: the entire HTTPS paradigm depends upon root private keys not leaking, or at least not often. It would certainly be worth a lot to certain adversaries if they could simply spoof major banks, not through some CA hack but by literally just getting their keys. Yet broadly, the effort to keep that information protected seems to have been fairly successful, for better and for worse.
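To make the stakes concrete: the reason a leaked signing key is catastrophic is that in public-key signatures, anyone holding the private key can produce signatures that verify under the published public key, so they can impersonate the key's owner outright. Here's a toy textbook-RSA sketch (illustrative only, with tiny insecure parameters, not real PKI) of that asymmetry:

```python
# Toy textbook RSA (NOT secure, tiny parameters for illustration only).
# Whoever holds the private exponent d can sign; anyone with the public
# exponent e can verify. Leak d, and the "bank" can be spoofed.

p, q = 61, 53            # tiny demonstration primes
n = p * q                # public modulus (3233)
phi = (p - 1) * (q - 1)  # Euler's totient of n
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent: modular inverse of e mod phi

def sign(message_hash: int, priv: int) -> int:
    # "Signing" is exponentiation with the private exponent mod n.
    return pow(message_hash, priv, n)

def verify(message_hash: int, signature: int, pub: int) -> bool:
    # Verification needs only the public exponent and modulus.
    return pow(signature, pub, n) == message_hash

h = 1234                 # stand-in for a message digest (must be < n)
sig = sign(h, d)
print(verify(h, sig, e))       # legitimate signature verifies: True
print(verify(h, sig + 1, e))   # a tampered signature fails: False
```

Real HTTPS uses certificate chains, padding schemes, and much larger keys, but the trust model is the same: the whole edifice rests on `d` (the root CA's private key) staying secret.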

I mean, that's kind of my issue: it's not hard to raise plausible counterexamples, and once we get into the weeds about "how likely" and "when" vs. "how many will be saved in that time, huh?", I think we may have already lost. It distracts from the real debate, which is about how valuable a zone of private space is and the harm that comes from infringing upon it. That's the real cost. Just because you can doesn't mean you should.

>But that's always a question of legality, not technical capability.

I don't think that's quite justified by the history of law and privacy. There are plenty of practices law enforcement/intel can and do engage in now that were simply inconceivable in the relatively near past, when much of the law that still governs us was written. Not everything automatically adapts; it's worth watching out for things that are simply taken for granted because they're considered inviolable, and in turn lack explicit legal protection. There should be legal protection; what I'm saying is that winning it requires a frank discussion about harm tradeoffs and trying to get people to more generally grasp new kinds of costs, like emergent effects.

>The argument that the key will leak is not technical

I think it really is. Even in your argument, where you say "the hole is people", you're making an implicit technological assumption that people remain involved. Do your assumptions make the same sense if we imagine human-equivalent or better AI? I'd rather just talk about why we shouldn't, as a matter of ideals, even if we could, and even if it means some things we don't like go unstopped.


I think the whole concept that the law should be enforced at all times is flawed. People should have more ways to resolve conflicts between themselves on their own, without cops getting involved, even if the law was broken.


Beyond that, I would expect legal precedent to be set to sanitize inappropriate thoughts. I don't see any of this as far-fetched. People will do what people can do.


That is very unreliable, because humans can easily invent memories of events that never occurred. And it may even fool any lie-detecting technology, if they truly believe it themselves.

For example, I was 100% sure that some event from 10 years ago had happened, because I was there and saw it. Then a friend of mine said that it did not happen. I did not believe him and checked the recording, and indeed, it did not happen. What I thought I remembered actually happened at another time, with a completely different person.


Shoutout to Black Mirror. They've done a phenomenal job dramatizing this and other similar concepts. Do give it a watch.


Indeed! I enjoy many of the episodes. But the first time I had these thoughts was after reading Charles Stross's Accelerando. https://en.wikipedia.org/wiki/Accelerando

There's a character in there who has his "brain drive" stolen and is disabled by not being able to access all that memory.

That part of the story got me thinking about how that data would be securely stored.

And then, more recently, in "Fall; or, Dodge in Hell" there is a brief discussion about the uploaded people's data being encrypted in such a way that only they would know their own "thoughts", while those on the outside could observe, via metadata, the goings-on in the digital realm.


Honestly, once spying at a distance is perfected, there won't be any need to have broken encryption.



