Suspect can’t be compelled to reveal “64-character” password, court rules (arstechnica.com)
244 points by sxp on Nov 23, 2019 | 207 comments



What would happen in the case where the perpetrator does not know the password? For example, I don't know the passwords to most of my accounts because they are random and stored in my password store. If I lost access to my password store, what would happen if the court demanded I provide the password to one of my accounts or encrypted files on my computer?


This exists in Australia as law; you are considered guilty just for not providing it. Max penalty is 10 years' jail, from memory. No other crime needs to be committed.

We are the electoral proving grounds for US/UK/CA laws.

It's totally coming for you if you think this is some silly Australian thing.

After the laws came in, I deleted a bunch of old VeraCrypt drives that I'd lost the passwords for and had clung onto, hoping I'd work them out one day. They've done incredibly well to make encryption seem criminal; it's terrifying and China-esque.


So if I wanted a relative in jail, I'd just ensure they use a password manager, then do something from their wifi, or plant a KeePass database on their computer.


Going to have to raise the bar a bit. This is Australia, we all do things on wifi.


From their wifi, e.g. go to your enemy's house, connect to their WiFi, and frame them with an encrypted file that they don't have the password to.


If only reasonable doubt still got bad cases tossed.


The problem with that is that in the US, such laws would be unconstitutional under the 4th, 5th, and 1st Amendments.

Just like Australia's Gun Ban is unconstitutional under the 2nd Amendment


First, the 5th amendment protection against revealing a password seems completely wrong to me, i.e. I'd expect it to get overturned at some point. The point of the 5th amendment was to prevent forced confessions, not to somehow block access to evidence (which the justice system is otherwise entitled to).

Second, the protection doesn't apply if you're given immunity (because, again, it's not about blocking access to evidence). So if the court provides immunity and then orders you to reveal the password and you claim you can't because you don't know... the 5th amendment will not help you.


>First, the 5th amendment protection against revealing a password seems completely wrong to me, i.e. I'd expect it to get overturned at some point. The point of the 5th amendment was to prevent forced confessions, not to somehow block access to evidence (which the justice system is otherwise entitled to).

This slope is arbitrary and slippery. There's a reason to stand on the extreme principle here. There are very few violations of a person's right to remain silent that cannot be weaponized.

I'd rather not empower the next arbitrary group of political appointees to wield that kind of power just because you think "the justice system is entitled to" evidence.

That sort of short term thinking without long term consequences does massive damage. Think harder.


Otherwise entitled in this context means that there's a warrant, court order, reasonable suspicion, etc. that establishes the existence of evidence and that the defendant is completely able, even if not willing, to produce it.

Of course, where the line gets blurry is, as you rightly note, that beyond a point no justice system can be certain the defendant really still has access to whatever evidence, be it some part of their mind's contents (a memory, the location of a body, a password, a face). And even if we somehow know they had access but now don't, it'll be impossible to determine intent. (So destruction of evidence cannot really stand when it comes to forgetting things. Unless, maybe, the defendant caused brain damage to themselves intentionally.)


> ... establishes the existence of evidence ...

This phrase doesn't sit well with me. They cannot establish the existence of evidence without having found it. They can't find it without a proper search. They can't execute a proper search without a warrant or reasonable suspicion.

The existence of evidence is established once it's found and documented.

Edited to add: if they can arbitrarily establish the existence of evidence, and then don't find it, then "obviously" the suspect is a no good dirty rotten evidence destroyer.


That's why I said it's always a matter of certainty. It's up to the judge, the (grand) jury, the tribunal, etc.

It's not arbitrary. It depends on other pieces of legal proof, like statements from witnesses, other material evidence, etc. For example, in the case of hidden partitions, people usually get busted because forensics finds data directly referring to them (e.g. in caches, metadata, logs, thumbnails).


>It's up to the judge, the (grand jury), the tribunal, etc.

The judge has the power to let them search your property.

Nobody has the power to let them search your mind, and they can only do so by imprisoning you or otherwise depriving you of your rights.

The distinction between a search warrant and self incrimination is a big one. You should always have the right to say nothing.

If forensics and data show for sure you did something, the stuff in your mind is unnecessary. Either they have the weapon or not, so to speak. You shouldn't be placed in a position where the justice department "is entitled to" the knowledge in your head.


>there's a warrant, court order, reasonable suspicion, etc. that establishes the existence of evidence

Warrants don't establish the existence of evidence. They just authorize the state to override your rights, because it has convinced a judge. All of this is weaponizable.

The way to establish evidence is to have evidence. If you don't have evidence, you haven't established it. Forcing you to divulge something in your head is forcing you to give up evidence against yourself, and it is the ultimate trespass.

I can't think of a situation I'd authorize someone else to do that.


No, warrants authorize searches/seizures.

I'm not agreeing with the practice; I just explained how pieces of evidence (and testimony) can legally establish that there's evidence the defendant has the means to produce.


What you're suggesting is even worse: a court order to force someone to divulge some information under threat of jail. Unauthorized searches are a violation of rights, but what you're suggesting is another level, and on the same authority.

I dont want to live in your society. I don't want to empower anyone to do that to me, even if they claim they're only going to do it to criminals and bad guys.

The terms of service change.


I'm not "suggesting". It's the current reality. Numerous articles over the past years demonstrated it. (This was already linked here https://en.wikipedia.org/wiki/H._Beatty_Chadwick )


> First, the 5th amendment protection against revealing a password seems completely wrong to me, i.e. I'd expect it to get overturned at some point. The point of the 5th amendment was to prevent forced confessions, not to somehow block access to evidence (which the justice system is otherwise entitled to).

There's already established case law that the 5th protects safe combinations; this is the reason why American safes use combinations while those everywhere else in the world use physical keys. Encryption keys are very similar on their face.


What happens if important evidence is inside a safe, and the combination was written on a piece of paper that is now lost or destroyed? I imagine eventually the safe would be destroyed in order to access the evidence, but is the owner of the safe liable for withholding the evidence even if they don't know the combination?


This is onto something, as encryption is basically a practically unbreakable safe (which can only store information).

I find it completely natural that you cannot put people in jail just for possessing something which may possibly be an unbreakable safe, just because unbreakable safes happen to exist.


The police can employ people to "crack" or otherwise physically open the safe if they have a warrant.

The owner is not liable for withholding evidence, even if they know the combination, as they are under zero legal requirement to tell the police the combination.


> The point of the 5th amendment was to prevent forced confessions, not to somehow block access to evidence (which the justice system is otherwise entitled to).

It's not just about forced confessions. It's about not putting people -- including innocent people -- into a situation where they have a powerful incentive to lie because silence is prohibited. Even innocent people may have secrets they don't want in the public record, and allowing them to say nothing is better for all parties than dishonesty. Revealing a password is no different. It saves everybody a lot of trouble, crime, and immorality to be able to say nothing rather than make up some nonsense about how the device belongs to some fictional person that law enforcement would then have to waste resources trying to track down, or worse yet to some real innocent person whom the person being compelled to testify would then have an incentive to divert blame to.

Moreover, the 5th amendment regularly does block access to evidence which the justice system is otherwise entitled to. If you're accused of committing a murder then the body of the murder victim is evidence which the justice system is otherwise entitled to. They can get a warrant to search your property and seize it if they find it. But the 5th amendment still doesn't allow them to compel you to tell them where it is.

> Second, the protection doesn't apply if you're given immunity (because, again, it's not about blocking access to evidence). So if the court provides immunity and then orders you to reveal the password and you claim you can't because you don't know... the 5th amendment will not help you.

Immunity has very little to do with not knowing. It makes no logical sense that providing immunity could allow you to tell them something you don't actually know. Consequently it makes no sense that providing immunity should allow them to punish you for not knowing. It could allow them to require you to tell them that you don't know, but what then? How do you expect them to prove that you didn't actually forget?


>the point of the 5th amendment was to prevent forced confessions

No, the purpose of the 5th amendment was to prevent people from being forced to be a witness against themselves. That is why the 5th amendment says

"nor shall be compelled in any criminal case to be a witness against himself"

Forcing you to reveal what is in your mind (aka a password) is a clear and obvious violation of the 5th amendment, and I do not expect it to be overturned.

>Second, the protection doesn't apply if you're given immunity

If they give you immunity, then there is no longer a case for which they can compel you to give up anything in the first place. Unless they are going after someone else, in which case the 5th amendment did not apply in the first place.

I fail to see why this is relevant to the discussion.


> to prevent people from being forced to be a witness against themselves

Also known as a "forced confession"?

> If they give you immunity then is no longer a case for which they can compel you to give up anything in the first place. Unless they are going after someone else in which case the 5th amendment did not apply in the first place.

Two people committing a crime together.

Honestly and fully testifying against one's partner in a jointly commissioned crime would constitute a confession. Immunity allows that testimony.


> Also known as a "forced confession"?

Not really -- you could be compelled to testify against yourself and not confess to a specific crime, the very fact there is even a trial presupposes one plead "not guilty".


This is why the whole debate is whether being forced to reveal a piece of information in your mind (aka the pass phrase) is testimonial in nature. Yes, the court may well be able to demand the evidence, but they cannot require you to testify against yourself.


There are situations the fifth amendment protects against other than forced confessions.

Here's a hypothetical example: you are accused of murdering your ex-lover. You witnessed the crime and know that the real killer is the chief of police. While disclosing those details could potentially implicate him, they will certainly prove that you know details of the crime, which implicates you. You don't think anyone will believe you, and the other evidence against you is weak (you didn't do it, after all), so you don't want to say anything until you're acquitted.


Forced confessions... like providing the key to an OTP that makes it seem like your whole drive was filled with CP.

Even then, most encrypted data is indistinguishable from random data. Doing dd if=/dev/urandom of=childporn.aes bs=1024 count=12345678 on your own machine should not be a one-way ticket to jail. (And that's not to mention the cases where you have forgotten the password.)
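That indistinguishability claim is easy to check yourself: a file of OS-provided random bytes has no header or structure, and its byte entropy sits at the same near-maximum level as good ciphertext. A rough illustrative sketch in Python (not any particular forensic tool):

```python
import math
import os
from collections import Counter

# 1 MiB of OS randomness, standing in for either "just noise" or the
# output of a modern cipher -- from the outside they look the same.
data = os.urandom(1024 * 1024)

# Shannon entropy in bits per byte. Uniform random data (and good
# ciphertext) both come out very close to the maximum of 8.
counts = Counter(data)
n = len(data)
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())

print(f"{entropy:.4f} bits/byte")
```

An examiner measuring the file sees near-8-bits-per-byte noise either way; nothing in the bytes says whether a key ever existed.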


We have that in the UK already. Technically it was on the statute book before Australia's, although not activated until 2007. https://en.wikipedia.org/wiki/Key_disclosure_law#United_King...


In the U.K. you can be convicted for not being able to decrypt data in your possession which the prosecutors think is encrypted. I.e. if I filled a hard drive with random bits and sent it to you, you could be found guilty of not knowing the decryption keys for that disk.


> if I filled a hard drive with random bits and sent it to you, you could be found guilty of not knowing the decryption keys for that disk.

Technically true but misleading. It's true in the same sense as "if I plant evidence to frame you for murder, you could be found guilty of murder". I mean, sure, but that skips over the bit where the prosecution has to prove to a jury, beyond reasonable doubt, that you did actually commit the murder. Or in this case, that the hard drive is filled with encrypted data and that you have the key to it.

In particular, the implication that if the police find some random data that they think is encrypted, you can be convicted just on the assumption that it's encrypted and that you have the key to it (unless you can prove otherwise) is false. If there's enough evidence to raise a question about whether you have the key to something that could be encrypted data, the burden of proof is on the prosecution to prove you do have the key to it (and therefore that it is encrypted data) beyond reasonable doubt: s.53(3) http://www.legislation.gov.uk/ukpga/2000/23/section/53

(to be clear I definitely do agree it's a bad law, just not for burden-of-proof-reversal reasons)


The problem with this is that proving that someone didn't forget something beyond a reasonable doubt is basically impossible. People forget stuff all the time. Especially once you throw in the stress and change in environment of a legal proceeding.

The result is that either the law has no practical effect because anybody can claim they forgot, or the courts fudge the requirement to prove that beyond a reasonable doubt in order to give the law effect, and then you put innocent people behind bars because they really did forget.


Everyone start sending your representatives encrypted thumbdrives and sd cards. Contents? A document asking for reform of this law.


Then I would XOR it with any plaintext of the same length and use the resulting "key" to satisfy my obligation under the law to enable "decryption" by the one-time pad algorithm.


That would imply you're smarter and have more power than the law, which is not the case.

They'd ask you to provide actual readable files, pictures, etc, or else would reject your "decryption".


It is true but not really relevant that they're smarter and more powerful than me. Since they don't trust me, they won't let me decrypt the disk D but will demand the key K to do it themselves. Recall K = D XOR P for some readable plaintext pictures or files, etc., I've already chosen. The "decryption" yields the perfectly readable output P = D XOR K.


Well, I read the original comment as being a "here, K is the password, D XOR K is your result" idea -- based on that, any D XOR K is technically a decryption, regardless of whether random stuff comes out of the process.

If you intend K to provide "perfectly readable output", then

(1) if you actually produced the contents of D yourself to hide data, then you need to come up with some scheme so that K provides "perfectly readable output" and some alternative V provides D XOR V (or another decryption scheme) that gives you your actual secrets. I don't think it's that easy to have "perfectly readable data" on D XOR K plus have your secrets with another key. Except if you mean through steganography, but then K is not needed at all.

(2) if you were just sent a random-noise drive to "frame you", then you need access to the drive to come up with a K so that D XOR K decrypts to valid data.


This is a one-time pad. Given a ciphertext D it's trivial to come up with a key K that "decrypts" D to whatever plaintext P you want (K = D XOR P). This is why one-time pads are immune to brute-force search.
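The forged-key construction is only a few lines of Python; the "ciphertext" here is made-up random bytes standing in for the drive contents:

```python
import os

# Stand-in "ciphertext": random bytes playing the role of a disk image
# the authorities believe is encrypted.
D = os.urandom(32)

# Any innocuous plaintext of the same length will do.
P = b"Nothing to see here, officers!!!"  # exactly 32 bytes
assert len(P) == len(D)

# Forge a key so that D XOR K == P. This always works for a one-time
# pad, which is exactly why OTPs are immune to brute-force search:
# every possible plaintext corresponds to *some* key.
K = bytes(d ^ p for d, p in zip(D, P))

# The "decryption" yields the chosen, perfectly readable plaintext.
assert bytes(d ^ k for d, k in zip(D, K)) == P
```

Since every same-length plaintext has a corresponding key, handing over K proves nothing about what, if anything, D originally contained.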


"We are the electoral proving grounds for US/UK/CA laws."

... and I would say you're falling down on the job ...


The US is still a mostly constitutional republic where mob rule can’t just do as it pleases. There’s been a recent uptick in efforts to delegitimize the courts, but historically that goes nowhere.

I.e. the first amendment is still solid protection against this kind of insanity.


VeraCrypt has a hidden volume mode where an inner encrypted volume is stored within the contents of an outer volume, and the inner volume is indistinguishable from random data when the outer volume is decrypted, for plausible deniability.


IIRC the configuration that allows hidden volumes is distinguishable from the basic configuration, so they could tell if there's the potential for a hidden drive to exist. If so, they could assume the hidden container exists and throw you in jail if you don't produce a password that unlocks one.

If the format schemes are indistinguishable that's good news.


If it is distinguishable, then it has no point.

If it is indistinguishable then you run the risk of losing data in the nested container if you copy enough data (accidentally) on the outer container.

Not sure which is the case with veracrypt though.


It's indistinguishable but there's a way to put in both passwords while using the outer container so it prevents overwriting the inner one.


IIRC free space on the outer volume has to be contiguous, normally at the end of the fs. I.e. the volume will look like it's not used too extensively. Which I guess may or may not look suspicious depending on your claimed use-case.


The recommended practice is multiple hidden volumes. "Oh no, you made me decrypt this volume that contains inappropriate photos!"


Or say authorities demanded access to a long-lost forum account from many years ago that you haven't used and whose password you don't remember. Can they use this as a loophole to detain people: just find accounts you don't use anymore and demand access to them all? Seems a dangerous law to allow.


Yes, it can be used against you. I agree it is a "slippery slope" type of law.


This is part of why democratic norms are as important as law: norms act as a kind of cartilage, leaving the law to be relatively simple and strong. If you lose the cartilage, it's painful bone-on-bone (that is, you are forced to try to pass unambiguous laws with the norms baked in, which tries to replace norms with laws, and the downside is a complex, slow system, which is its own sort of peculiar horrible injustice).

There are too many examples in the US and Australia where LEOs ignore all norms, seeing them as an unjust limit on just power, and manipulate the (asymmetrically well-known) law to retroactively justify any action. Even when LEOs are legally challenged (a rare enough act of courage, and one that risks further victimization), time and again they get away with it anyway.

And yet, the people who should be clamoring for stricter, fairer justice against LEOs are LEOs, because every time they get away with it, they win the battle but lose the war for legitimacy. When LEOs lose legitimacy, it leads to a cynical world where laws like this password one are expected to be pushed to their absolute limit by LEOs, and it will be, once again, up to the courts to enforce the norm.

I fear that the modern smartphone is just too big of a temptation for centralized authority, because of their universal appeal and potential for as close to a (retroactive AND realtime!) panopticon as we can get without the palantírs from the Tolkien universe.

Why don't we force people to put cameras in every room of their home with full access given to law enforcement? And if you resist this, do you have something to hide? I think it's interesting how strong this argument is, even in its most obviously oppressive form. Coming up with an equally satisfying counter-argument is our #1 priority as privacy advocates.


The law is capable of taking into account probabilities.

Claim to have forgotten the password to a forum that you haven't used in a very long time? That's quite believable.

Claim to have forgotten the password to, say, your password manager that you successfully have typed from memory several times a day every day for the last 10 years? That's a lot less believable.


I reckon authorities have much lower-hanging fruit to exploit in order to detain you. This method seems to require too much trouble and too much competence to be useful for them.


Why cannot the authorities just require the forum to hand over the data?


Obvious possibility: it's out of their jurisdiction.


It depends on whether or not the court believes you. If not, you could be held in contempt of court for a very long time, similarly to Beatty Chadwick: https://en.wikipedia.org/wiki/H._Beatty_Chadwick.


I probably have some holes in my understanding, but most of the responses here seem to miss a few key distinctions.

This very much depends on the jurisdiction. Outside the US, some require you to provide electronic keys unconditionally in which case "I forgot" isn't a valid defense. Some let you off the hook if you forgot, but only if you can convince them that it's really true.

Then there's the US. Here, AFAIU the court can require you to hand over physical objects which it is certain actually exist. Failure to comply is contempt, and most states don't have a maximum for that (https://psmag.com/news/a-most-uncivil-contempt-3464).

For a password that's not written down, "I forgot" is indeed a valid defense here last I checked. Note that it doesn't apply if the court has a convincing reason to believe you were lying, which they would have in this case obviously.

However! It's more complicated than that here. Whether the court can demand things you know (e.g. passwords) as opposed to physical objects has historically been contentious. The trouble is that the fifth amendment protects you from being required to testify against yourself, and verbally providing a password seems an awful lot like testimony, as it will presumably be used to incriminate you. It's gone both ways, and at some point the Supreme Court stated that a password was roughly equivalent to a physical key and so didn't qualify for protection. This ruling goes against (?) that, stating that it's equivalent to incriminating testimony and so can't be compelled.


In what jurisdiction is “I forgot” not a valid defence? If you’ve forgotten the password, you can’t comply with a disclosure order any more than if you were unconscious. The law doesn’t normally need to expressly state that it’s not a crime to fail to do something impossible. Of course, if there was evidence that satisfied the judge or jury beyond reasonable doubt that you were lying about forgetting, the defence would fail.


Well now I'm not sure. I'm certain I saw a few reported years ago as assuming you had the key if they could show that you had encrypted something. I looked just now and couldn't find record of such key disclosure laws though.


Can you cite the SCOTUS decision that said encryption keys were equivalent to physical keys?


As disclaimed, it turns out there were holes in my understanding. :)

The Ars article being discussed is actually where I read that. However, upon attempting to find the original case it looks like SCOTUS hasn't yet heard such. From this Lawfare post, (https://www.lawfareblog.com/fifth-amendment-decryption-and-b...), SCOTUS Fisher v. United States is relevant but doesn't actually involve passwords. Also from that page, the Eleventh Circuit seems to think that government knowledge of incriminating device contents permits forced decryption while the Third Circuit argues that merely knowing you possess the password is sufficient.

In both cases the precedent is that you can be forced to turn over a password; it's only the details of the prerequisite government knowledge that are under debate. As far as I can tell, the current PA case goes against that.

Referenced PA Supreme Court case: https://law.justia.com/cases/pennsylvania/supreme-court/2019...

Related PA Superior Court case that was appealed: https://law.justia.com/cases/pennsylvania/superior-court/201...


Yes - there has been no SCOTUS decision on this. The current circuit split between 3d and 11th circuit will probably put a case in front of SCOTUS eventually.


The EFF seems hopeful (https://www.eff.org/deeplinks/2019/11/victory-pennsylvania-s...) that an even larger split might arise due to this PA case in addition to a couple pending elsewhere. IMO ideally passwords should receive complete and unconditional fifth amendment protection.


I agree that passwords are testimonial in nature and should be fully protected, but I have no idea which way SCOTUS will go. There are a lot of things where one can with some reliability predict the general outcome of a case at that level due to previous rulings and precedent -- this one I just don't know that anyone has a good idea which way they'll go.


You get charged with obstruction of justice or contempt of court and you will be found guilty of that crime and you will be sentenced for it.

This is the slippery slope of law where basically if they want to get you then they will get you for something.

Of course there is a simple solution. If you have "super secret" type files, then you need to encrypt those files and upload them through TOR to a cloud hosting service. Of course, per standard opsec protocol, make sure you don't use an e-mail address that is connected to you (make the e-mail account through TOR). This is how you prevent yourself from being put in a situation where you are compelled to release a password.


That simple solution is basically impossible to pull off. I can't think of any cloud provider that'll let you sign up without having any identifying details about you. They'll need a payment method to begin with, and unless you usurp someone's identity and use stolen cards, you won't be able to anonymously pay for the hosting. Just getting an email account via Tor without providing any identification is hard.

Every step along the path of doing something anonymously online is difficult. There's nothing simple about it.


>They'll need a payment method to begin with,

Cryptocurrency, Visa/Master Card Gift Cards, "Free" Hosting (things like GitHub Pages), Free Cloud Storage Accounts....


Most of these free things will still require some ID before letting you store any data. Using gift cards will flag and probably lock your account for further review before you can publish anything or use any resources. Same if you created your account or logged in from a known Tor or VPN IP address. Same if you pay in cryptocurrency.

There's probably ways to do it, but I wager they're not "simple" at all. All these companies usually are required by law to make sure they can trace their customers in one way or another (Know Your Customer) and are further incentivized to prevent anonymous usage because such usage is usually done by abusive actors who'll cause problems to the platform.

I'd be very curious to see someone who did it write up the steps they took (that anyone else could take) to actually host anything online in a truly anonymous and _simple_ manner.


I am aware of zero "Know your customer" laws in the US for web hosting, email or other services like that

Know your customer is a banking law that banks have to follow for money laundering purposes


I might be thinking of the wrong thing, but the payment gateways will need to know who the payer is, and the service provider doing the payment will need to ensure the payment isn't fraudulent (or be exposed to chargebacks), which means the service provider in the end needs to validate your ID, or someone in the chain will in any case.

I'm not pretending to know every platform out there and every laws, but I think it's fair to say that accomplishing the things GP said is not simple, and that most service provider will need some form of ID somehow. Even ProtonMail will require you to make a payment or provide your phone number if you try to create an account from Tor.


You don't know any because you did not try to google any. There are many. But yeah, it's not simple for everyone, I agree.


Held in contempt indefinitely?


https://news.ycombinator.com/item?id=18907924

Francis Rawls has been for years now.


I always thought this is why things in your mind are safe. The reason never seemed philosophical to me, simply that there is no way to prove the difference between not knowing and not telling.


> I always thought this is why things in your mind are safe. The reason never seemed philosophical to me, simply that there is no way to prove the difference between not knowing and not telling.

This seems like a natural but dangerous argument to make in a world where we are attempting to determine the states of people's minds externally. Even if we could tell the difference between lying and ignorance, is nothing sacred?


I see it more as not letting constraints persist simply because removing them wasn't possible in the past. Previously, surgeons considered chest cavity surgeries forever out of reach because of the near-certain infection risk even with clean tools; antibiotics changed that.

There are still good reasons to object. Not going mind-diving willy-nilly has many other justifications: such invasiveness is both horrifyingly abusable and unduly stressful to the subject.


What happens if the police ask you to turn over the key to a safe and you’ve lost it?


They can apply an angle grinder to a safe.

The whole point is that they can not be allowed to apply it to your head.


What would be interesting to me to see in a court of law is instances where there is a recovery option. If the challenge/responses are not freely given, would the courts permit guessing based upon known information about the accused in order to reset the password?

Second, how many recovery processes actually protect the challenge/response information and even require it to generate a new password?

Now, this won't work for devices secured by a password that don't have an outside reset, but for online accounts, what prevents law enforcement from spoofing the system?


> now this won't work for devices secured by a password that don't have an outside reset, but for online accounts, what prevents law enforcement from spoofing the system?

The power to subpoena evidence from the service is strong, and it is the tool likely to be reached for in such a case. Unless you mean an online account storing encrypted info?


Can you be “convinced” to click on the forgot password link?


Don’t forget about that Philadelphia police officer still in jail for not wanting to provide a password for some encrypted Mac drives.


They'd get you access to your password store, I assume.


They are talking about situations where the password is so strong it cannot be brute-forced.


Though it does make me wonder if online password managers like lastpass or 1Password could hand that info over if authorities demanded it.


1Password publicly documents their security model, if you want to verify it inspecting the HTTP requests is fairly straightforward. I assume Lastpass does the same, but as I don’t use it I haven’t bothered checking.


I hope not. If they demonstrate that capability and word gets out then that will end their business. If they can give law enforcement access to your passwords then the employees of the company have access to the passwords too.


I appreciate the “hope”, and I sympathize with the general view but I think the probability of a law abiding corporation to divulge this info to the American government approaches 100% across a few years, provided said corporation actually has the data (cf iPhone access codes).


I really don’t want to have to ask myself that question, which is why I’d rather use Keepass and sync the store myself.


That’s why you should put a passphrase on it. Firefox syncs my passwords but has to ask for my master password at launch. Same goes for Google Chrome’s passwords: they’re encrypted at rest and can’t be displayed online for that reason; you have to sync with Chrome.


Brute force? Nothing brute force about bringing you your laptop/phone so you can unlock your password store.


That quote where the self-avowed pedophile says ‘No fucking way’ is going to appear in every bill to increase law enforcement’s powers from now on. It’s an irresistible soundbite. The scumbag would have done better to just shut the hell up.


Which is always true, at least in the US. The police are allowed to lie to you during questioning. They are allowed to claim they have evidence that they do not have. No amount of lying on their part will invalidate any "confession" they obtain from you. But if you even inadvertently tell the police something that is untrue, you can end up in prison for that, even if you committed no other crime. NEVER talk to the police, if there is any chance they suspect you are involved with a crime.


I’ve never understood why the police need to lie. In Sweden, the police are not allowed to lie but may withhold information or ”speculate” about how the circumstances and evidence will be interpreted at trial. The prosecutors still win 90% of cases.


They should fix that, then: the police should not be allowed to lie to you (though they could withhold information, be deceptive, or lie when off duty and out of uniform), while you should be allowed to tell the police untrue things, as long as it doesn't constitute filing a false police report, accusing someone else of a crime they have not committed, or a few other exceptions (e.g. abusing the emergency service).


Actually, I missed a few things. Both sides should be allowed to refuse to answer a question (except that the police must show you a copy of a warrant or their police identification on request, though they can refuse to let you touch the identification), or to claim not to know or to have forgotten something (even if that is untrue). And if someone nevertheless says something untrue in a way that is not permitted, but there is a reasonable chance it was inadvertent, they are forgiven (and if they are the police, they may be allowed to try again later, if the person they are interrogating is still a suspect, even though the first attempt at an arrest didn't count).


> Which is always true, at least in the US. The police are allowed to lie to you during questioning. They are allowed to claim they have evidence that they do not have. No amount of lying on their part will invalidate any "confession" they obtain from you. But if you even inadvertently tell the police something that is untrue, you can end up in prison for that, even if you committed no other crime. NEVER talk to the police, if there is any chance they suspect you are involved with a crime.

While police are allowed to lie, are they allowed to lie to you with regards to what rights you may or may not have? Naively, it would seem so but I'm not aware of any language explicitly allowing deceit on the part of police.


You're looking for the wrong thing. There's no language explicitly DISallowing deceit on the part of the police.


Why wouldn't he, when the penalty for pedophilia is worse than murder? You can kill someone, do your time, and be out in as little as 5-10 years having paid your debt to society. Sex offenders are branded for life, and are reduced to living in the modern-day equivalent of leper colonies.

https://www.nytimes.com/2011/08/21/opinion/sunday/sex-offend...


This seems like a case where the prosecutors are making things hard on themselves. Or, my tinfoil hat wants me to think, deliberately testing the waters with an interpretation they’d like to turn into precedent.

If the contents of the disk are a foregone conclusion (and assuming the article’s framing of the “foregone conclusion” idea is correct, and I’m not even close to being a lawyer so I have no idea) then it seems like a slam dunk that they would be able to compel him to produce the incriminating contents of the disk - but not the password that unlocks the entire disk, presumably including things that don’t have that “foregone conclusion” status. Or they could likely just use his statement as evidence of the content of the disk. Either of those would almost certainly get the conviction they want with far less drama.


Of course he would have. This is a fantastic example of how everything you say, can and will be used against you.

But I think criminals being interrogated aren’t really of sound mind.


They're not proven to be criminals at that stage. They're suspects, maybe, but no court has convicted them.


Fair enough. But that doesn’t really change my argument?


> Fair enough. But that doesn’t really change my argument?

I think the point (which bears repeating) is that suspects are not criminals until proven so in a court, and it's important to be careful in our language, especially in our modern times where the court of public opinion operates swiftly and without due process.


It’s one thing if they deny the accusations, but if they agree with them...


> The scumbag would have done better to just shut the hell up.

It also would've been better had he not been a pedophile and yet... we are here.


We need to change the analogy on these topics from “a passcode to a wall safe” to the encrypted computing device being seen as an extension of the person’s mind.

When we encrypt information with a key known only within our mind, we are creating a cyborg like extension of our own mind.


> extension of the person’s mind

That's exactly what our computers are. However, it's a mistake to think the government cares or can be persuaded to care.

People in power simply cannot tolerate the notion that something is out of their reach. Encryption is just mathematics but it can render entire governments and militaries powerless. The fact common citizens have access to something like this is an affront to their power.

What's happening in practice is a politico-technological arms race: governments make laws, people make technologies that neutralize those laws. With every iteration, the state must become more invasive and tyrannical in order to maintain the same amount of control over its population.


One more thought to clarify: in the future imagine we can repair brain damage with silicon-based implants. Will courts force their extraction or enumeration in criminal cases? My point above is that we’re already augmenting our minds with electronic devices, they’re just not implanted physically yet.


With the recent advances in brain imaging and particularly interpretation of that data in post-processing, I suspect we're not terribly far off from what we might as well call mind-reading machines.

The right to mental privacy will be a great battle of the 21st century.


Courts can already require fingerprints, iris scans, genetic material, etc for forensics.

I don't see why a machine organ would be any different.


Invading someone's mind has to be treated differently. I am not any of those things - but I am my mind.


The difference is not because the mind is different, but because we cannot infer its content otherwise reliably. (That's the problem with polygraphs, they are extremely unreliable relative to the importance of their purpose.)

If it were possible to do it with some sort of scan (reliably!), eventually courts would order those scans.


But that's kind of the point: the idea that sanctity of one's thoughts is not ensured, that really it's just a technological limitation, should be extremely troubling.


I know, but also that's reality. We were (and still are) pretty brutal when it comes to getting our way with the powerless.


Seems a lot more productive to use silicon as a feedback system for the brain via electro-magnetic radiation. Then it could function as scaffolding. Silicon isn't magically becoming more organic than a wet, soft brain full of DNA and other complex organic compounds.

I mean, that's one of the things we've been doing human experiments with for 30+ years now, via the wireless phone network and wifi. "Hold this radiation emitter next to your brain!" "New radiation patterns for the masses!"


How is that not analogous to a wall safe with my journal in it?


In my view it depends on two philosophical questions: does the journal exist while it rests in the safe, and does information exist while it is encrypted?

To simplify it, let's imagine a one time pad encryption. Does the information exist, or does it only exist in potentia? I think there is a good argument in favor of defining it only in terms of potential information as any one time pad can represent any information of the same length (or smaller if we accept padding).
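A toy sketch of that property (my own illustration, with made-up strings): for any one-time-pad ciphertext, there exists a key that "decrypts" it to any plaintext of the same length, so the ciphertext bits alone pin down nothing.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    # Byte-wise XOR of two equal-length byte strings
    return bytes(x ^ y for x, y in zip(a, b))

secret = b"meet at dawn"
pad = os.urandom(len(secret))      # the real one-time pad
ciphertext = xor(secret, pad)

# Construct a second pad that makes the SAME ciphertext decode
# to a completely innocent message:
decoy = b"grocery list"
decoy_pad = xor(ciphertext, decoy)

assert xor(ciphertext, pad) == secret
assert xor(ciphertext, decoy_pad) == decoy
```

An interrogator holding only the ciphertext cannot say which of the two pads is "the" key; both decryptions are equally consistent with the bits on disk.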

With Schrödinger's cat experiment we could create a similar setup for the journal but it's questionable if we can still apply the perspective that the journal only exist in potentia. From the cat's perspective it knows if it exists or not. The encrypted information however has a more metaphysical environment and it is more questionable if it can be said to have a similar perspective.


We're talking about a 64-character password, not a one-time pad. Unlike a one-time pad, you can tell whether you've undone the encryption correctly. There is real information there that a third party can uncover given enough effort.


For simplicity's sake we can see the password as representing an infinitely long stream from a one-time-pad generator, with the seed being the hash of the 64-character password.

The way programs detect whether decryption succeeded is usually by looking at the first bits of the output, on the assumption that a random collision is unlikely to produce the expected pattern. Not all decryption systems do this, however; some just give you whatever data the given key produces.

Both are, however, just technical details of how to turn the potentia of the random-looking encrypted data into information.
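A sketch of that header check (a toy construction of my own, not how any real product derives keys): stretch the password hash into a keystream, and accept a guess only if the decrypted output starts with an expected marker.

```python
import hashlib

def keystream(password: str, n: int) -> bytes:
    # Toy keystream: hash(seed || counter). NOT a real KDF or cipher,
    # just an illustration of "password as infinite pad" above.
    seed = hashlib.sha256(password.encode()).digest()
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

MAGIC = b"VOL1"  # hypothetical format marker at the start of the plaintext

def try_password(password: str, ciphertext: bytes) -> bool:
    # "Decrypt" and look for the expected header; without such a
    # marker, every guess yields equally plausible random bytes.
    plain = bytes(c ^ k for c, k in
                  zip(ciphertext, keystream(password, len(ciphertext))))
    return plain.startswith(MAGIC)

ciphertext = bytes(c ^ k for c, k in
                   zip(MAGIC + b" payload", keystream("hunter2", 12)))
assert try_password("hunter2", ciphertext)
```

A format without any such marker is the "just gives you the data" case: the attacker has no oracle telling them whether a guess worked.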


I'm not sure what you're trying to simplify here. If a third party can tell whether they've successfully guessed the right password, then the encrypted data holds real information that the third party can learn with sufficient effort.

If it's just random bits that you need a one-time pad to decode, then there isn't any information without the decryption key.

If you're encrypting a hard drive, most encryption methods give you full certainty that you've correctly decrypted the text, in the same way that you'd have full certainty that you've correctly opened a safe and found the journal inside.


>If you're encrypting a hard drive, most encryption methods give you full certainty that you've correctly decrypted the text

I was thinking about mentioning it when I wrote the above comment, but it was already becoming lengthy.

Truecrypt (now Veracrypt) is one of the more popular disk-encryption programs and figured in at least one US lawsuit over revealing passwords. Truecrypt supports a technique called hidden volumes. It exploits the fact that free space is indistinguishable from encrypted data, so an attacker can never be fully certain whether they have decrypted all of the data or just part of it.

An older and similar concept was/is used by the Freenet project. There, data gets one-time-pad encrypted using existing encrypted data blocks of the same size. Each encrypted block then acts as both key and data from the perspective of the encryption scheme, and the same block can be reused multiple times as one side of the operation for any number of decrypted files. To decrypt a given file you first download the map that identifies which blocks form the two sides of the one-time-pad operation, then the blocks themselves (which combined are twice the size of the decrypted data), and then perform the operation. Freenet theorized that since any block could be the key/data for any other block, you could never be certain what information you have stored by looking at a single block. The block is just information in potentia.
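A toy model of that Freenet idea (my simplification, with made-up file contents): one stored block can simultaneously serve as the "key" for several unrelated plaintexts, so no single block has definite content.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

shared = os.urandom(16)        # a random-looking block already in the network

file_a = b"secret report A!"   # 16 bytes
file_b = b"holiday photos B"   # 16 bytes
stored_a = xor(file_a, shared) # what actually gets stored
stored_b = xor(file_b, shared)

# The same shared block recovers both files, and each stored block is
# equally the "key" for the other side of its pair:
assert xor(stored_a, shared) == file_a
assert xor(stored_b, shared) == file_b
assert xor(file_a, stored_a) == shared
```

Inspecting `shared` in isolation, there is no fact of the matter about which file it "contains"; it is one half of arbitrarily many pairs.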


Do you have any reason to think that what you're describing is what happened here? It seems unlikely to me, as it would mean that the guy could have given out a decryption key that would exonerate him.


This case has very little to do with the sliding scale between certainty and potentia. The legal system usually does not care about the technicalities of technology but rather about the interpretation of legal arguments. Here the foregone conclusion doctrine is not about the existence of the data. The idea is that the government must already know a certain amount, and the dispute is over how much it knows. In the case of the journal, some courts have found that the government needs to know about the journal and that it contains authentic evidence in order to argue a foregone conclusion, while other courts think it is enough that the government knows about the wall safe. Courts that demand that the government know about the contents usually reject the government's argument, and the opposite for the latter.

It is also important to understand that it's not the journal itself that gets subpoenaed. In the latter case it is the wall safe, with the government arguing that every locked wall safe must contain an unlocked wall safe. The existence of an unlocked wall safe is thus a foregone conclusion, with the contents being irrelevant to the argument.

The court in this case looked at that and said, "The Commonwealth is seeking the password, not as an end, but as a pathway to the files being withheld". This basically means it did not accept the argument that the government can demand the password based simply on the fact that an encrypted disk exists. The focus thus shifts away from the container and onto the information within, and here the court did not think the government had enough information to establish a foregone conclusion.


I'm a bit confused about what you're saying here. We've already established that a court can't force you to provide the password to open a wall case. Why does it matter whether they know about the container or the contents?


There is nothing that is similar. The journal is a physical thing you can lay your hands on; the password exists only in the individual's mind. The act of producing it is both inherently testimonial and testimony to ownership of the data on the machine, which is distinct from ownership of the machine.

It is further impossible to distinguish between won't and can't unlock.

Even if you believe that you can subpoena the contents of someone's skull, you arrive at a situation where every criminal "can't remember".


Giving out the code to a wall safe demonstrates my ownership of the safe contents in exactly the same way that giving out the password to a computer demonstrates my ownership of the contents of the computer.


It seems so. It seems unlikely that such an order would be successful so they would drill the safe.


The point of bringing up wall safes is that courts have already ruled that it's unconstitutional to require someone to give up a wall safe combination.


Isn't there a right not to testify against oneself?


This was the point of the article.


No, the point was that there was a ruling on it from a highly divided court that only applies to Pennsylvania.

The article didn't really argue anything like that. It only reported on a court case and didn't really say whether the author agrees with the decision or not.


In my view, it’s analogous to a journal that will only exist once you have written (some of) it down in front of them. The encrypted content can only exist in decrypted form with the information in your head.


This is how wall safes are supposed to work too, which is why you can't be compelled to reveal the combination.


Nice.

This is yet another reason why I avoid biometrics for authentication, and I'm glad that the court backed up my assumption that it is more secure. In this particular case, I hope this perpetrator is convicted regardless (assuming he's guilty, that is), but I'm glad the courts agree that nobody should be compelled to give up information. This is a big win for privacy.


How will he be convicted if the primary evidence is on his computer?

My geek half, which believes in strong 4th and 5th Amendment rights and crypto, thinks this is a fine decision. My forensics half bets this guy has gigabytes of atrocious child pornography on his system, and that CP cases are the motivating factor in 99 out of 100 cases where police with a search warrant want to decrypt a hard drive.

These cases are not abstractions; every pic is of a child who was abused and the perpetrators organize themselves into networks that include distributors and abusers/“content producers.” How is justice served if these networks can’t be rolled up due to full disk encryption?

I like encryption. I don’t want to be compelled to give up my password to a tyrannical government. But I think the tech community is quick to take an absolutist view of the right to strong crypto, and quick to discount that harm can happen as a result. I also don’t think that view is shared by most people... so it seems like a problem worth solving.


Don't we have other forms of surveillance, installing of cameras/microphones in people's homes and sting operations that could help catch people?

I think that demanding citizens incriminate themselves is an especially lazy form of justice.


I don’t think such techniques are readily and cheaply available to most police departments.


That's a good thing. It forces the police to actually do their homework rather than invade people's privacy whenever they want.


Schneier et al.'s Keys Under Doormats paper tackles that problem from many angles. You might find it interesting.

https://www.schneier.com/academic/paperfiles/paper-keys-unde...


> How is justice served if these networks can’t be rolled up due to full disk encryption

they can be. it just takes more law enforcement effort. they want their jobs to be as easy as possible, even at the expense of civil liberty.


I’ve worked with a lot of LEOs who handle these cases. I’m not sure I’d characterize them as wanting things to be as easy as possible even at the expense of civil liberty.

How do you prove possession of CP if all the data is on encrypted drives? What sort of additional effort do you think is necessary?


Biometrics are a userid, not a password.


Biometrics are not usernames or passwords. They are a separate thing that has some similarity to both. A unique key is likewise neither a username nor a password but could act as both in the right circumstance.

The discussion of biometrics is not advanced by parroting this hollow statement again and again. Please stop. There are compelling arguments why biometrics can be problematic. “It’s username not a password” is neither compelling nor truthful.


I found their comment to be a useful way to easily explain it to those who are less technical. In that way I find it has value.

Your comment, however, has quite an aggressive feel. I don't think it was warranted.


It seems a bit harsh to me as well but since you read the "fingerprints are just user names" mantra way too often on Hacker News, maybe the author of the comment thought it was time to become a bit louder.

Here's my take on it:

If you enter a passcode anyone close to you can see you enter it. It's much easier to figure out what you're typing on a smartphone than on a keyboard, and the oily residue on the touchscreen makes it even easier.

A fingerprint on the other hand cannot be observed. Someone has to follow you or already know where you live and take fingerprints off doorknobs or something like it. Takes a lot of time and failure rate is still high. Then they have to use it on your device, so they have to have access to the device as well.

Fingerprint scenarios make sense for _targeted_ attacks, when you are far more likely to be hit by a scalable attack.

It is much easier to brute force your password on a poorly protected website or to find it in a leaked password database. That scales very well and needs no physical access where no 2FA is enabled.

One could argue that most users still will use weak passwords in combination with biometrics.

Still, it pushes them to at least have a password.

And if you're a more professional user the combination of a strong and random password in a password manager plus fingerprint for convenience seems like an okay trade-off, especially since you will know how to deactivate it temporarily (reboot device or press power button 10 times on iPhone for example).


> A fingerprint on the other hand cannot be observed.

This is not correct [1].

It won't be that long before someone gets around to training some sort of ML system to scour photographs to extract fingerprints and start building a database of everyone's fingerprints. These databases will only expand in coverage/accuracy and the quantity leaked will only increase. Fingerprints for authentication will not survive the next decade.

[1] https://www.csoonline.com/article/3268837/busted-cops-use-fi...


Interesting, though the thing in the article is still a targeted attack. You will not remember the fingerprint of the guy unlocking his phone next to you.

But I agree that ML scale attacks can definitely change what I wrote in the future. They could also be used for CCTV evaluations of people entering passcodes.


Some “finger print” tech doesn’t use actual prints, but the deeper layout of the capillaries that can’t be easily observed. I believe Apple uses this on their devices that support “fingerprint” auth.


It is an inaccurate explanation and therefore low value if truth is the measure of value.

I also believe my comment was entirely warranted. This same misleading talking point comes up nonstop. It is unreasonable that dissent be expected to be buried under a pile of politeness. There was nothing particularly aggressive about the comment except that it was a clear statement of disagreement.


Not on iOS or Android phones. On most recent phones, they're both.


Which is why I will never enable fingerprint or face unlocking ever. I'm fine with physically typing in my password. I like real security.


I have an iPhone with Face ID and an 8 digit passcode. If I were to be arrested or whatever, all I need to do is press the power button 5 times in quick succession and Face ID is disabled and my passcode is required.

So I can choose:

- Unlock my phone with ease for 10’s of years and then quickly lock it once
- Struggle to unlock it for 10’s of years to avoid having to quickly lock it once


Rebooting does that by default on android: it always requires your pin. No need to remember to press the x button n times. Just shut it down.

A good habit anyway: you don't want them to be able to poke the ram.


Rebooting resets the passcode requirement on iOS as well.


If they tell you to unlock it but you 5 tap instead, then it wouldn't surprise me if it counts as obstructing justice.

You'd need to do it quickly before the cuffs come on. And make sure the officer doesn't mistake you reaching for your pocket as you grabbing your gun.


If you enable voice control in accessibility you can reboot the device using a voice command “reboot this device”.

After which you will need your passcode/word to unlock the device. Handy if you foresee a time you won’t be able to reach your device and tap the unlock button 5 times.


Physically typing in a passcode is less secure than you think, given the wide range of cameras everywhere. All I have to do is observe you entering it once with a camera (likely from any angle where your movements are visible) and your security now belongs to me. FaceID cannot be recorded in any way. Of course you have to occasionally enter the PIN, so it's still not perfect.


For the average user whose password is 0000, maybe face unlocking IS more secure for them.


unfortunately for that user, face unlock doesn’t disable passcode unlock


Not really both. If anything, they are a temporary quick-unlock key. After a restart, before an extra-sensitive operation, and also after a certain amount of time/unlocks you can’t use your fingerprint to unlock things anymore and need to use your normal passcode/pattern.


Another good point is that biometrics aren’t secure. They invest millions and billions in fingerprint and eye scanner systems that get fooled by scotch tape. Just not effective security.


It's effective against nosy people which is probably the main threat for most people.


It's better than no security. Most people aren't going to use a secure pin code on their phone.


Exactly. I have accidentally witnessed so many 4-digit PINs typed into phones by friends and casual acquaintances. It’s kind of unavoidable if you spend any reasonable amount of time in person with them. Very few people in my experience go the extra mile to set a long fulltext passphrase on their phone.


Depends on the device.


It doesn't.


I hope he doesn't. We don't need more lives ruined for victimless crimes/thought crimes.


Child porn is victimless?


To me the possession of child porn is victim-less, just as the possession of media of murders is victim-less, or the possession of media of any other crime is victim-less. The purchasing of media of a crime is dubious, as it may serve to reward people who commit that crime, and the committing of these crimes almost always has a victim (excluding cases for example someone wanting to be murdered or something).

The prosecution of these cases seems to look a lot like crimes of morality than crimes that directly impact someone else's life and happiness. Sometimes I've heard people say that the mere fact of media existing of a crime that they were in hurts them, and hurts them more when people watch it. But to me, prosecution of these cases, if anything makes one more aware of people watching media of these things, not less.

The thing that bugs me the most about these sorts of prosecutions of "thought crimes" is that the trail from the "thought criminal" to the actual crime is always hazy, and definitely not 1-1.

E.g Imagine someone who was really into collecting child porn as some kind of bizarre stamp collecting type of thing. They were never into it per-se, it was just some kind of messed up hobby. They'd trawl forums on the dark web and download every image they see, maybe write a script to do it. It would be very hard to trace a direct line to a victim there. More likely than not, particularly if no money ever changed hands, there probably wasn't a direct victim. Someone may have paid for it at some time in the past, but not the stamp-collector. It seems like these are crimes that convict people who hurt others, but also those who do not hurt others.


Meanwhile politicians regularly go with "I don't remember" if challenged in court.


I don't understand what's being defended here. The guy already admitted what's on the computer. Like, the EFF saying

> "We store a wealth of deeply personal information on our electronic devices. The government simply should not put individuals in the no-win situation of choosing between disclosing a password—and turning over everything on these devices—or instead defying a court order to do so."

This is deeply baffling to me as applied to this case. The personal information being ‘protected’ is not his property.


Yeah, this is an extreme case. I mean, he reportedly said:

> It’s 64 characters and why would I give that to you. We both know what’s on there. It’s only going to hurt me. No fucking way I’m going to give it to you.

It seems like a totally dumbass thing to say. The standard advice is politely declining to answer, and requesting an attorney.

But at least he didn't lie.

And yes, it's true that the content isn't likely his, in the sense that he generated it or that it's about him. But it would -- as he admitted -- incriminate him. And the court affirmed that he has the right to not incriminate himself.

What's especially interesting is that the court didn't agree that "We both know what’s on there." mooted the protection against self-incrimination.


I bet he did lie and it's not 64 characters. He was probably saying that to make them not even try to break his password12345.


But they haven't broken it. And I assume, not by failing to try.


Worse still is the threat of quantum computers that will make cracking his vault trivial. Apparently in 20-30 years people will be able to fit a quantum computer into a laptop form factor and break certain types of crypto at will, regardless of the keysize or key complexity.


At the moment, quantum computers have only been shown to give exponential speedups over classical computers on problems that can be reduced to the special case of the hidden subgroup problem for finite abelian groups [1]. This is only really an issue for public-key crypto at the moment (RSA, Diffie-Hellman, and ECC), since secret-key crypto tends to need less structure in the underlying problem. The best you could do to speed up cracking AES with modern quantum algorithms would probably be Grover's algorithm, which only gives a quadratic speedup (an O(N) search becomes O(sqrt(N)) instead). [2]
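The quadratic speedup works out to halving the effective key length, which is the back-of-envelope reason AES-256 is considered safe against Grover in practice:

```python
# Grover: an O(N) brute-force search becomes O(sqrt(N)) queries,
# i.e. a k-bit symmetric key retains roughly k/2 bits of security.
def effective_bits_under_grover(key_bits: int) -> int:
    return key_bits // 2

assert effective_bits_under_grover(128) == 64    # borderline for the long term
assert effective_bits_under_grover(256) == 128   # still comfortably out of reach
```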

[1] https://en.wikipedia.org/wiki/Hidden_subgroup_problem

[2] https://en.wikipedia.org/wiki/Grover%27s_algorithm


> quantum computers that will make cracking his vault trivial

Disk encryption tends to use something like AES. Key derivation is usually built on top of hash functions, but a 64-character password has more bits than most people use for AES, so key derivation might not matter.

The implications for AES aren't known yet, beyond effectively reducing the key length[1]. You're probably thinking about prime factoring and RSA, which will be weakened by quantum computing.

[1]: https://security.stackexchange.com/questions/116596/will-qua...
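Back-of-envelope numbers for that claim, assuming (hypothetically; we know nothing about the actual password) the 64 characters are drawn uniformly from the 94 printable ASCII symbols:

```python
import math

charset = 94           # printable ASCII: an assumption about the password
length = 64
password_bits = length * math.log2(charset)

print(round(password_bits))   # 419 bits of entropy
# Far more than the 256 bits AES-256 actually uses, so the key
# derivation step, not the password itself, becomes the ceiling.
assert password_bits > 256
```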


Well, if that is true, then we will just be able to use the same quantum processing to make encryption strong enough that it can't be cracked for X million years or whatever.

You could argue, though, that things encrypted today might be easily decrypted later.


> But at least he didn't lie.

> We both know what’s on there

To be fair, they _could_ claim he's lying there, since neither of them know 100% "what's on there". There are certainly things on the disk that neither one of them are aware of. Sure, it's a technicality, but it's known that the authorities are perfectly happy to use such technicalities against suspects.


OK, sure.

But it's pretty clear that he meant child porn. I mean, he even shared some of his favorite themes.

The point, though, is that he didn't need to say any of that.


What's being protected is the contents of his head and only tangentially his computer.

If the computer required a physical key he would be forced to provide it.


> The personal information being ‘protected’ is not his property.

The information being protected isn’t the pictures. It’s everything else on the PC.

Though I guess in this case it’s about providing information in his head?


And in the case of the pictures, what if you manage to memorize the hex codes of the pictures and just retype them if the computer data is lost? That is difficult, but some people might do such things.
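The underlying point round-trips cleanly: a file is just bytes, and a hex transcript reconstructs it exactly (toy illustration with made-up bytes):

```python
# A file's contents survive a lossless round trip through a hex string,
# so a (superhumanly) memorized hex dump would rebuild the file exactly.
data = b"\x89PNG\r\n\x1a\n...pretend image bytes..."
hex_form = data.hex()               # what you'd "memorize"
restored = bytes.fromhex(hex_form)  # what you'd retype later

assert restored == data
print(len(hex_form), "hex digits to memorize for", len(data), "bytes")
```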


Just as there are signs of the "drug war" finally faltering, "child pornography" is primed to continue the erosion of generally held rights in service of a moral panic. This case itself is good news, but the everpresent problem is that the attackers are paid with public funds to try again and again until they succeed. Criminalizing mind altering substances seems downright practical compared to criminalizing patterns of bits.


This "yOu cAn't mAkE nUmBeRS iLlEgAl" thing is one of the most annoying memes I've ever encountered.

It's literally self-defeating - you could just as easily say that he has no right to privacy for the contents of the harddrive because the contents of his harddrive is "jUsT aN iNtEgEr" and no integers are secret.

Some patterns of bits are clearly and rightfully illegal.


> This "yOu cAn't mAkE nUmBeRS iLlEgAl" thing is one of the most annoying memes I've ever encountered.

If you take any reference to a concrete phenomenon - the extreme difficulty of policing private data storage and communications - and pigeonhole it into a simplistic caricature, then it's no wonder you see that everywhere?


Perhaps I misunderstood what you were saying but you appeared to be the one describing a complex situation as "criminalizing a pattern of bits". IMO it's a level of reduction below what is useful, kind of like arguing against verbal harassment laws as "criminalizing vibrations of air".


I wasn't making a moral argument about the uh, merits, of child pornography, but rather the practical effects of trying to police patterns of bits stored on private computers and transmitted between consensual peers. This level of complexity is relevant, as it describes the scope of the activity, and implies what is required to actually control it.

If most child pornography enthusiasts are caught due to having their computer fixed at Best Buy, have you actually criminalized viewing child pornography, or would it be more appropriate to say that you've criminalized patronizing Best Buy? As stated, that's obviously hyperbolic as bona fide CP enthusiasts are an extreme minority. But we can imagine tripping up innocent people with the letter of the law (eg botnets, teen sexting, unregistered pornography), as well as similar dynamics around much more common "patterns of bits" like say pirated Hollywood movies.


> you could just as easily say that he has no right to privacy for the contents of the harddrive because the contents of his harddrive is "jUsT aN iNtEgEr"

I would say that this is true: the police should be free to try to brute-force his password or crack the encryption via some other method that does not involve forcing the person to act (such as disclosing his password). Although in that world the police would have no reason to attempt to decrypt the data, as possessing it would be legal.
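To make "free to try to brute-force" concrete, here is a minimal sketch (hypothetical toy target, lowercase-only alphabet) showing why exhaustive search works for tiny passwords and is hopeless for 64-character ones:

```python
import hashlib
import itertools
import string

# Pretend the system stores a SHA-256 hash of the password.
target = hashlib.sha256(b"zz").hexdigest()

def brute_force(max_len):
    """Try every lowercase string up to max_len; return the match or None."""
    for n in range(1, max_len + 1):
        for combo in itertools.product(string.ascii_lowercase, repeat=n):
            guess = "".join(combo)
            if hashlib.sha256(guess.encode()).hexdigest() == target:
                return guess
    return None

print(brute_force(2))  # found after at most 26 + 26**2 = 702 guesses
# A 64-character printable-ASCII password would mean ~95**64 guesses.
```

(Real disk encryption also uses a deliberately slow KDF rather than one SHA-256 call, making each guess far more expensive.)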


Is it the patterns themselves, or the uses to which what they represent is put, that should be illegal?

I'd always argue the latter; I can kill someone with a rock, but making rocks illegal seems unreasonable.

> contents of his harddrive is "jUsT aN iNtEgEr" and no integers are secret.

Right, but the pattern is secret, right? If someone happens to derive it by brute force then, well, the secret's out.


To play devil's advocate, aren't drugs just “patterns of molecules”? And can't patterns of bits be used for “mind-altering” purposes (broadly speaking, such as in this case)?


Devil's advocate would be to engage with the core of an argument, rather than nitpicking my characterizations. The larger point is that neither of those actions has any effect outside of some isolated context. This necessitates using invasive methods to police them, and also uneven enforcement ultimately based on other circumstances. Each of these aspects causes terrible results that undermine the rule of law itself.


I wonder what would happen if a criminal picks a passphrase that admits his crime. Since the defendant cannot be compelled to incriminate himself, wouldn't that be a perfect defense?


Courts and the law don't work like that. (Compare the funny problem of which court would have jurisdiction if a murder happened in some strange part of a forest that is Federal land and also intersects state borders, where at first glance the currently valid statutes say the trial should be held somewhere with no judge, because of some lines on a map.)

But could you give an example of what kind of string could self-incriminate someone?


> But could you give an example of what kind of string could self-incriminate someone?

You can conjure up the previous scenario but not one for this simple case?

“I am guilty of XYZ offense”

Seems like a pretty obvious example.


How would that mean anything? It's just a password. Not everything you utter counts as testimony; context matters.


> A testimony is not anything you utter without context.

Go ahead and get arrested and say incriminating statements without context. Let us all know how it goes!


If you get to trial and the judge orders you to produce the password, then and there is the time to say it. It becomes a matter of public record, and you don't even have to say it as one continuous phrase; you can spell it out, or whatever.


You could run a "Who's on first" and set the password to "I plead the fifth"?


Hah. I may or may not have used something vaguely similar on my phone in the past.


The perp can't be compelled to reveal the password, but would the "foregone conclusion" doctrine still empower prosecutors to make him unlock his device?


I would probably agree with the verdict if the defendant wasn't in a child pornography case and hadn't stated (FTA) "we both know what's on it, why would I [reveal my password]".


> Nevertheless, this constitutional right is firmly grounded in the “realization that the privilege, while sometimes ‘a shelter to the guilty,’ is often ‘a protection to the innocent.’” Moreover, there are serious questions about applying the foregone conclusion exception to information that manifests through the usage of one’s mind.

I think they got it right. And as long as the prosecution didn't base its whole case on getting the password, it will probably still prevail.


I’ve worked in the digital forensics space for many years and am not so confident. How do you prove possession of child pornography when there’s no evidence of possession?


The defendant made the remark “we both know what’s on there”.

I assume the prosecution has evidence.


They likely have probable cause that they used for a search warrant. I doubt there's much evidence for possession.


> I would probably agree with the verdict if the defendant wasn't in a child pornography case and hadn't stated (FTA) "we both know what's on it, why would I [reveal my password]".

Which is the problem, they only try to push the boundaries on "indefensible" cases (because nobody cares about a pedophile) but then use their newly minted powers on everyone else.


So what is the reason they are not accepting that preexisting confession?



