Cellebrite asks cops to keep its technology secret (techcrunch.com)
319 points by pg_1234 on Aug 19, 2023 | 215 comments



I’d think defense attorneys would be all over that. “We’re supposed to take some ‘hacking company’’s word for it that they’re doing everything forensically correctly, and the prosecution expects us to just trust them?”

Were I such an attorney, I’d campaign to taint the reputation of Cellebrite and make the evidence it generates completely untrustworthy.


>“We’re supposed to take some ‘hacking company’’s word for it that they’re doing everything forensically correctly, and the prosecution expects us to just trust them?”

You'd be surprised. The courts are not scientific debate arenas. Burn-pattern (arson) forensics? Pseudoscience. Bite mark forensics? Pseudoscience. Lots of DNA-based forensics? Totally unreliable. Firearms ballistics forensics? Laughable pseudoscience. Polygraph tests? Laughable pseudoscience. Field sobriety tests / "drug recognition expert" certifications? Lol, lmao even.

All of this nonsense has been successfully used to convict people in court. Turns out, putting someone with PhD after their name or a white lab coat or a phony certification on the stand tends to work for prosecutors.


You are right on the money. Many of the tools law enforcement uses are straight up hocus pocus, and they lack the training or ability to question their methods.

New York's main crime lab completely blew DNA testing, and thus rendered all of their results worthless. This went on for many years. Instead of rectifying their mistakes, they swept it under the rug and forgot about it. To this day, thousands of people are in jail based upon wildly incorrect "science".

It is rather unsettling to know that nearly everything the criminal justice system uses is no better than voodoo, and you are not far away from getting thrown behind bars for little more than the whimsy of people who are empowered to do so.


The pseudoscience of those forensics is powerful because it has a lot of technobabble that makes it sound well supported by evidence, and because it can pass as a science to the jury and even judges. The crime series that people eat up rely on dishing that technobabble step by step so the average person feels like they're being educated in detail about something that can apparently be replicated by anyone because "it's science". Science is for everyone and people can peer behind the curtain to see how the sausage is made.

But Cellebrite is unambiguously a private, for-profit enterprise. They haven't established themselves as a "science"-like authority in the average person's eyes; they actively avoid explaining the processes. They're just a company paid by a customer to deliver what the customer wants. So in the average person's eyes the difference should be obvious: this isn't science, it's not for everyone (just for the police), and the people are not allowed to peer behind the curtain. And as authorities like to say, "honest people don't need to hide things".

So as far as jury trials go there may still be a fighting chance.


> Many of the tools law enforcement uses are straight up hocus pocus, and they lack the training or ability to question their methods.

I suspect that the real issue is the lack of incentive; it doesn't matter whether they know (or are capable of figuring out) that their science is bunk if there are no consequences for letting the status quo continue.


> Instead of rectifying their mistakes, they swept it under the rug and forgot about it.

Yeah, that should be a crime. Their fraud led to false imprisonment.


"Forensics" is pseudo-science. Little, if any, of the "science" is peer reviewed. And what is peer reviewed is not convincing to anyone who has undergraduate experience in a chemistry, biology, or physics classroom and lab. Circumstantial evidence is a lot more convincing than physical evidence, but the TV shows convince us otherwise.

There's really no incentive for forensic science to be a science at all, at the end of the day. You're either working for cops, and paid for evidence to convict their suspect, or working for the defense, and paid to convince us the cops are wrong. There's no one paid to tell us what reality we're in.


Peer-reviewed science has less than a hundred years of history.

Even today, large-scale experimental science seldom gets reviewed from outside its own lab. (Because the lab literally hires every expert in that sub-field.)

Calling something "not science" because it is not peer reviewed is just ignorant.


The entire edifice of modern science is based on the bedrock of every result being reproducible. In fact it is supposedly the only way any discovery becomes acceptable. The fact that it's no longer considered necessary says more about the state of the industry than it does about the reliability of the scientists.


This misses the point. Peer review is just one of many processes that can be used to make good science more likely, through intentional external analysis and critique. And the idea here is that forensics is not based on good science to begin with; the point is not that the lack of peer review is what makes it bad science, but that peer review might have caught the falseness before it entered mainstream judicial use.


Peer-reviewed studies have a roughly 50% replication rate… that’s not very good either.


Polygraph is inadmissible in court. It’s used for interrogation as a prop.


The US Federal government still uses polygraph in the security clearance process today. In 2013, they indicted several people for teaching others how to beat the test. Here’s the DOJ press release (prepare to lose brain cells):

https://www.justice.gov/opa/pr/owner-polygraph-indicted-alle...

Clearly our government and legal system think highly enough of the technique to include it in interview processes. The NSA even has an entire production video rebuking the criticisms of polygraph. It’s not just a tool (in the eyes of the government) used to trick dumb criminals into spilling the beans.


Not a justification and only pure speculation, but what if there's another reason? You can't think that the NSA and CIA are stupid. They know it's bunk science that has been disproven 1,000 times over. But the psychological effects of being on the machine are very, very measurable.

Perhaps it's being used as a tool to evaluate a person in other ways. For example, you're generally not arrested for admitting to crimes like smoking weed or stealing candy bars from a gas station. But admitting to crimes, or moral divergence (such as adultery), can mean that with enough effort a smart intelligence organization could manipulate you.


They're not stupid but they have to maintain the shroud of "this works", both internally and externally, lest the word get out that "the intelligence community doesn't even believe in their own polygraphs"


The reason they're doing it is because it's national security and espionage. It's objectively better to deny 1,000 solid applicants because something on a polygraph doesn't look right than to allow in one foreign agent, at least that's the idea. So they add this filter; it will filter out some good people but maybe it'll filter out a bad one or two. That one or two makes it worth it in their eyes.

It has nothing to do with "maintaining the shroud of 'this works'" because it does work for the purpose they have. It doesn't need to be 100% accurate or admissible in court.


They don’t have to maintain anything. Admit it was a stupid practice and stop doing it. Maintaining appearances makes them look dumb AND stubborn AND unwilling to admit prior mistakes. Yeah, sign me up.


Interview processes and courts are different, though. If it is used as a vetting tool, a false positive is not a big deal as long as it also identifies at least some true positives. If you have enough candidates, even modest accuracy may be worth it.


So Ouija boards, astrological sign analysis, and a random number generator should be on the table too right? After all, if it identifies at least some true positives...


Talking about the interview process... I know you are talking about police interviews of suspects, but job interviews are another space where crackpot pseudo-science and shit proprietary tech such as AI pre-screening have a prominent say as well, for the worse.


What does “this works” mean? You’re assuming it means to detect lies. Perhaps that’s not the value it brings.


> You can't think that the NSA and CIA are stupid

Au contraire


You’d be surprised what you can’t get a man of Yale to believe when honor is on the line.


As demonstrated in Oppenheimer, security clearance processing is not a process where the US Government must obey any court or evidence rules. They can deny secret access for any reason at all.

Did that case against polygraph.com ever go to trial? Sounds like a slam-dunk free speech case, but I imagine the DOJ was willing to settle for anything to avoid that being fought in open court.


It did, the defendant was sentenced to two years. https://www.lawfirmofoklahoma.com/blog/polygraph-com-owner-s...


One of the major reasons I would never work in defense. If I have to prove my worthiness to a clearance system that’s not far off from witchcraft and magic spirits, yeah, I’m gonna pass. Good luck finding intelligent people to work that kind of job.


Agreed. Don’t mind them talking to my former neighbors or doing forensic accounting on my bank accounts to make sure I’m not a foreign adversary. Definitely do mind being strapped to a polygraph and being forced to pretend that it’s not a complete waste of time for everyone involved. They’re filtering far more smart people than they’re filtering dummies who don’t know the device is just a prop. Can’t make stupidity a part of the hiring process when you’re offering government salary bands.


I managed a group of very intelligent programmers with the highest clearance available, including taking polygraphs regularly.

They know it's just part of the system, and the routine.

These folks have a slightly different mindset, and, I think, view procedures as important to weed out people who might cut corners, lean into their ego (how dare they make ME take a polygraph), or be careless with information. A nice side effect of this process is it selects for humility, openness, and conscientiousness.

And they view the work they do as hugely important for the security of their nation. Including those who laugh at them for their chosen profession.

They were some of the most professional, conscientious, diligent, and excellent technologists I've ever worked with. Definitely on par with anyone at the FAANGs or startups that I've worked at.


>A nice side effect of this process is it selects for humility, openness, and conscientiousness.

Or perhaps it selects for submissive, controllable automata that will go along with anything, with no limit on absurdity. Your rationalization of this stupidity is embarrassing.


You're entitled to that opinion, and I don't think it's a great system, but I think it's not the lunacy you're implying. Military life, for example, involves a lot of bullshit routines, which have ancillary benefits of creating discipline and coherent mindset. I don't mind disagreeing about how good or bad this is.

But I do want to dispel the idea that people are less intelligent, less capable, or somehow mindless just because they disagree with you (or OP) about this, and choose to be part of it.


Being skilled != being intelligent


Not sure what you're implying or responding to, but that _is_ why I listed capable and intelligent separately.


I don’t believe that’s an accurate representation of the situation on either side


I'm sure they were technically brilliant, but it really sounds like it was weeding out people who would not be compliant. It's a small jump from that to realise it's most likely weeding out people who have 'egos' that might make them object to morally dubious orders. Arbitrary requirements (like polygraphs which are literally bunk) are the most classic way to select for compliant people.


There are plenty of cleared jobs that don’t do polygraphs. I work for a DOE contractor. Q-cleared, no poly. The vast majority of positions where I am don’t have a polygraph associated with them.


to sibling:

> They were some of the most professional, conscientious, diligent, and excellent technologists I've ever worked with. Definitely on par with anyone at the FAANGs or startups that I've worked at.

such a shame they chose the most absolutely evil actor to use those qualities for. which brings me to

> conscientious

yeah.... I think it's pretty much impossible not to know what evil lives openly in those places


The vast majority of defense work does not require a polygraph. It's really only a requirement for certain SCI/SAPs which are programs that have their own additional security requirements on top of Top Secret.


They don't use readings from the polygraph. It's more like a prop they use to try to get to your honest/embarrassing truth.

You can get false confessions on a polygraph interview but you can also get them in a normal police interview eg with the Reid Technique.


This is not true. It's a requirement in many states' probations that probationers undergo regular polygraphy. If they fail the polygraph enough times, they get kicked out of state-mandated therapy and are violated.

Thousands of people are in prison right now over failed polygraphs.

In places with strong courts that won't put up with this, probation officers simply tell the therapist to write up the probationer for some other, vague non-compliance issue should they fail enough polygraphs.


Reminds me of the old urban legend about using a photocopier as a polygraph:

https://youtu.be/rN7pkFNEg5c?t=95


Don’t forget that eyewitness testimony is also pseudoscience, especially cross-race identification.


Hell why not add juries themselves to the list? A widely cited 2011 study[^0] found that the probability of a jury granting parole was around 65% at the start of a session but dropped to nearly 0% right before lunch. After having lunch the probability again jumps to ~65%

What kind of justice system do we have where whether or not people have had lunch is the biggest factor in outcomes?

[^0]: https://www.pnas.org/doi/full/10.1073/pnas.1018033108


1. The study you link is the famous one about judges. It doesn't mention juries, nor are juries involved in the study.

2. Whether or not people have had lunch is not the biggest factor in outcomes. The most important thing you can read about this study is Daniel Lakens' commentary:

> I appreciate that people have tried to think about which mechanism could cause this effect, and if you are interested, highly recommend reading the commentaries (and perhaps even the response by the authors).

> But I want to take a different approach in this blog. I think we should dismiss this finding, simply because it is impossible. When we interpret how impossibly large the effect size is, anyone with even a modest understanding of psychology should be able to conclude that it is impossible that this data pattern is caused by a psychological mechanism. As psychologists, we shouldn’t teach or cite this finding, nor use it in policy decisions as an example of psychological bias in decision making.

> If hunger had an effect on our mental resources of this magnitude, our society would fall into minor chaos every day at 11:45. Or at the very least, our society would have organized itself around this incredibly strong effect of mental depletion. Just like manufacturers take size differences between men and women into account when producing items such as golf clubs or watches, we would stop teaching in the time before lunch, doctors would not schedule surgery, and driving before lunch would be illegal. If a psychological effect is this big, we don’t need to discover it and publish it in a scientific journal - you would already know it exists.

> I think it is telling that most psychologists don’t seem to be able to recognize data patterns that are too large to be caused by psychological mechanisms. There are simply no plausible psychological effects that are strong enough to cause the data pattern in the hungry judges study. Implausibility is not a reason to completely dismiss empirical findings, but impossibility is.

( https://daniellakens.blogspot.com/2017/07/impossibly-hungry-... )

If you do read the other commentary, it is clear that the reason the rate drops to zero before breaks is that cases are scheduled with an eye toward not running into the break. The 0% grant rate causes the just-before-the-break timing, not the other way around.


So, they were measuring that there were zero grants just before lunch breaks?


The courts don't start a long, difficult case at 11:30, they schedule the difficult case after lunch and fill the time before lunch with simple, slam-dunk cases.


And yet I bet that study would fail to reproduce, so it too is pseudoscience!


Haha that would be fitting, but it's an observational study not an experiment


How is ballistics forensics unreliable? Genuinely curious, I’ve never heard that.


It’s pretty much just complete non-replicable BS. The claim is that the rifling on gun barrels differs from gun to gun, and that they can examine grooves made in the bullets by the rifling and compare them to other rounds shot through the same barrel and find a match. It doesn’t stand up to independent scrutiny, and I would imagine that with modern machining the gun-to-gun differences in rifling on the same mass-produced guns are minimal. Perhaps there are differences in fouling in well-used guns, but the notion that they’re gonna match a flattened bullet to a specific barrel is ridiculous. Basically a few kooks across the country wear white coats and claim that they can do this “science”, and prosecutors pay them to be expert witnesses. Juries eat it up. It’s really nuts.

The only legitimate analysis they’re doing is ammo brand matching. “Hey, there was a Sellier & Bellot 380 Auto casing at the murder scene. The suspect had S&B 380 in his gun safe. We got him.”


You should see the games they play with statistics.

Find some 1 in 1000 match. Find another 1 in 1000 match. Claim they're independent, so the chances of both of these are 1 in a million.

What they don't say is that there are 5000 things you can test that each have a 1 in 1000 chance, so you should expect to find ~5 at random if you test them all. The 1 in a million chance is if you choose 2 of the 5000 at random, test only those and they both match; not if you systematically run tests that only notify you when they find one of the matches.
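The multiple-comparisons arithmetic above is easy to check. A minimal sketch, using the 1-in-1000 and 5000-test figures from the comment (everything else is standard binomial math):

```python
from math import comb

p_single = 1 / 1000   # chance that any one trait matches by coincidence
n_tests = 5000        # number of traits actually screened

# Expected number of coincidental matches when you screen everything:
expected = n_tests * p_single   # 5.0

# Probability the screen turns up at least 2 matches somewhere --
# the pair an expert could then present as a "1 in a million" event:
p_0_or_1 = sum(comb(n_tests, k) * p_single**k * (1 - p_single)**(n_tests - k)
               for k in (0, 1))
p_at_least_2 = 1 - p_0_or_1     # ~0.96, nowhere near 1e-6
```

So under these assumptions, finding two "1 in 1000" matches is not a one-in-a-million fluke; it is the expected outcome of the screening procedure itself.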


This reminds me of another bit of pseudo-science bullshit: claims that bullet wounds can be matched to an exact bullet size, like saying someone was hit by a 9mm vs .45 cal vs .40 cal when there is no bullet to recover.


There's really no way to relate rifling/etc to a specific gun. Unless it was forged in some bespoke process that can also be accessed and replicated, the rifling pattern will look like 100,000 different guns. Other analysis is also bunk (blood spatter, gunpowder stuff, etc). Ammo matching can provide circumstantial evidence but is likely defeated by just shutting your mouth.

Most criminals subject to this are probably caught because they loaded the magazine without gloves and left a fingerprint on the casing that gets completely etched into it on firing. Or, you know, police do actual police work and pressure the right people into narc'ing on the suspect.


Radley Balko has a great overview of the fundamental flaws in ballistics forensics[1]. The tl;dr is that the core claim of ballistics forensic analysis, that it is possible to reliably match casings and bullets to the specific gun that fired them (as distinct from other guns of the same model), has never been proven to be valid.

[1]: https://radleybalko.substack.com/p/devil-in-the-grooves-the-...


Great read, had no idea it was so incredibly bad! Definitely like pseudo-science, with extremely poor reproducibility. And a lot of dishonest massaging of words to make it seem much more certain than it is, benefiting the prosecution only. Imagine being innocent and having an “expert” take the stand and say it’s a “practical impossibility” that the bullet was fired from any other gun. The analogy with the tarot cards seems appropriate.

Seems like markings can be exculpatory at best, for instance that casings or bullets clearly don’t match, but that the “unique markings” of an individual barrel is complete BS, especially with precision mass production of modern firearms. But in practice, it seems to rarely exonerate suspects, even if the evidence exists.


Some of it (e.g. what bullet type) is real, but angles and distances are largely impossible to determine reliably.


It is IMHO a very bad adverse effect of a good thing after all: a world where people decide is probably a better one than one where only science and technology decides (I guess there are enough dystopian novels about this one). The judicial system represents society: I do not think the belief in pseudoscience is any worse or better.

Still, there are certainly problems relating to the fact that people without money cannot easily produce counterarguments and let "real" scientists present counterhypotheses. Further, there are incentives to sell products to law enforcement that produce simple truths (I would buy a product from a critical scientist). Also, I think it is a human rights problem if states do not have to fully compensate the damages produced by applying known faulty technology, as many other actors would have to.


You forgot about the dogs.


While I appreciate a fair dose of skepticism, why should I trust your claims that these are all pseudoscience as opposed to the people that claim to be experts in those fields? How are you sure that your understanding isn’t simply a Dunning-Kruger effect and you truly do have knowledge that debunks all these fields?

Once again, I’m skeptical of many supposed sciences as well, but I’m just as skeptical of internet commenters that claim entire fields of expertise and science are all pseudoscience and made up.


It should not be about who is presenting the argument, but the weight of the argument itself. For example, for ballistics: the debate falls onto whether each individual weapon's barrel leaves a uniquely matching trace that can be analyzed from the fired casing. The argument against this method is that machining precision today is so high that there will be no meaningful differences between any two barrels, which undermines any claim that a match is unique enough to count as evidence.

For all the other claims (e.g. polygraph), you can easily search for more information online.

The point is that authority, or lack thereof, should not be a part of your evaluation of the claims, on either side.


I can just as easily make the counter claims to all the proposed areas of pseudoscience and then make the same argument you just did: “it’s not about the authority of the person making the argument, it’s about the weight of the argument itself”. And then I can tell you how you can just look up the counterclaims and find supporting evidence (which you probably can). Just because you can find supporting evidence for your view doesn’t mean it’s right, and I like to know all sides of the story before I make a decision.

The point I’m making is I’m definitely not qualified to analyze all these claims that you’re making in support of these being pseudoscience. Sure your claims sound reasonable, but people also thought the earth being the center of the universe sounded reasonable. The thing I like about hacker news is people usually don’t just make unsubstantiated claims, and when they do they usually get gut checked by opposing views, which is a good thing.

And OP didn’t really make any claims, they just listed a bunch of fields and said they were all pseudoscience. I appreciate the fact that you’re at least giving an explanation as to why it’s not considered science.


Well you can just... look them up. Some of these were new to me (e.g. firearm ballistics). But when it comes to stuff like the unreliability of drug tests or bite mark forensics, it's not at all a secret. Any recently published meta analysis would suffice. In fact I'd be shocked if you could find any in support of these methods.

Why put the onus of quoting a bunch of articles on the OP when these are all very easily verifiable with a tiny bit of effort?


Drug metabolites are measured repeatedly and precisely, as is forensic bite analysis. Not sure how you can claim those are unreliable when they are used daily. GC/MS is a good start for drug testing.


> Drug metabolites are measured repeatedly and precisely

Drug metabolites are measured repeatedly and precisely, assuming no one mislabeled anything and the machine is well-calibrated and the technicians are honest even though they know who chooses which lab to use and the sample was collected fastidiously and not contaminated with anything.

But drug metabolites aren't drugs. Eat a bagel with poppy seeds and you can test positive for opioids. Take certain decongestants and you can test positive for meth.


"Used daily" is not proof of efficacy.

Law enforcement does all sorts of sketchy things in order to get a conviction. Why is it a surprise that they use pseudoscience that juries eat up because they've seen it used in their favorite police procedural TV show hundreds of times?


A somewhat similar thing happens often with DNA evidence. The prosecution submits evidence where the report was generated in proprietary software, and a company employee testifies that the result is correct and that the chance of a false positive is 1 in 10 billion. The defense asks for access to the source code to validate the claims; the company says it's a trade secret, and the judge decides not to grant access and does not dismiss the evidence.



Ha. I got to see Dr. Mullis speak back in university. Fascinating guy.


a diss in jest, and demonstrating a possible court tactic.

yes that was a very colorful person.


I have wondered if there are matches between different people in DNA databases and, if so, how often it happens.


Depends on the lengths of the sequences being compared but the birthday problem indicates that it's more common than you'd expect.
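The birthday-problem effect can be sketched quickly. The numbers below are illustrative assumptions, not real database figures: a per-pair random-match probability of 1 in a billion and a database of 65,000 profiles.

```python
import math

def expected_coincidental_matches(n_profiles, p_pair):
    """Expected number of profile pairs that match by pure coincidence."""
    n_pairs = n_profiles * (n_profiles - 1) // 2
    return n_pairs * p_pair

def prob_at_least_one(n_profiles, p_pair):
    """P(at least one coincidental match), Poisson approximation."""
    return 1 - math.exp(-expected_coincidental_matches(n_profiles, p_pair))

# 65,000 profiles => ~2.1 billion pairs, so even a 1-in-a-billion
# per-pair match rate yields a couple of matches in expectation:
expected_coincidental_matches(65_000, 1e-9)   # ~2.1
prob_at_least_one(65_000, 1e-9)               # ~0.88
```

The point is that pairwise comparisons grow quadratically with database size, so "1 in a billion" per pair does not mean "1 in a billion" for the database as a whole.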


At the very least there will be identical multiple births in some of them.


All the ammunition you need for the last part is right here: https://signal.org/blog/cellebrite-vulnerabilities/


This is quite frankly a farce of a PR stunt. “Oops we hacked the UFED we stole” is not going to fly in any reasonable court. There are proper ways to get evidence thrown out and trying to use the most petty way possible since it sells well is not it.


It's way dumber than a PR stunt.

Plenty of Signal users live in totalitarian dictatorships, where the law doesn't care if you're actually guilty. Even in democracies you can't always rely on the courts. "We're planting incriminating evidence on your phone" should lead to mass uninstall of Signal.

Except nobody really believes they did that. Signal would be banned from all App Stores, and moxie would go to jail on so many charges. It's one of those nerd law ideas where 'you can't 100% prove it' is the golden defence except it isn't really - the law can call a bluff, especially when millions of users and multiple app stores already did. It says reasonable doubt, not any doubt.


They are, they are just expensive.

I was on a jury where the defense counsel eviscerated an expert witness from a red light camera company. The police used the cellular/GPS time on the camera to splice together 10-12 videos from various cameras whose times weren’t in sync.

Having impeached the time; the whole case fell apart and her client’s manslaughter charge was dismissed.


How is this type of evidence tampering not a slam-dunk case of perjury?


It wasn’t tampering; the rhetorical skills of the attorney and the dumbass ADA's failure to rein it in bamboozled a poorly prepared expert.

In fairness, the guy was there to talk about a camera, not to describe how NTP on the cellular network works. The defense had the judge questioning the nature of time and space!


In general, defense attorneys who have used the software are not inherently opposed to it:

1. most cases are about sms text contents and timestamps. these can be externally validated from the phones themselves, other software, or from having both sides of the conversation

2. other data, like location, is less certain, needs to be validated, and there are debates about what the evidence means,

3. Cellebrite software is available to people who pay for it on both sides and their training videos encourage LEO to actually go out and validate the results they see. the constantly changing cat and mouse nature of cellphone extraction and parsing requires double checking and validation. this is the big issue. LEO generally have no tech skills and are just armchair phone extractors. they do not validate the data. that is a problem, but not the tool itself.


Cellebrite apparently had vulnerabilities and was poorly patched. Seems possible to me.

https://signal.org/blog/cellebrite-vulnerabilities/


It's your expert witnesses versus the state's, and the state can always outspend you. Judges and juries are going to default to trusting them, as well.


There's a nerd law idea where everything needs to be 100% scientifically proven and reproducible for a criminal conviction. That's not how the law works at all. The law works probabilistically, and when the doubts are low enough, it's off to jail, even in criminal cases.

The prosecution would present the Cellebrite 'evidence' to the judge and (in our scenario) it would move the needle towards the prosecution. Then the defence will (always) try to doubt the evidence and move the needle back, but it would take a lot more than 'something might have happened, dunno how' to do that fully. When the needle is tilted enough, the prosecution has their conviction. Note that in our reality, judges have a lot of trust in prosecutors, and the needle always starts tilted...
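The "needle" framing above is essentially Bayesian odds updating. A toy sketch with made-up likelihood ratios (none of these numbers come from any real case):

```python
def update_odds(odds, likelihood_ratios):
    """Multiply prior odds by the likelihood ratio of each piece of evidence."""
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds):
    return odds / (1 + odds)

prior_odds = 1.0  # needle starts even; the comment argues it really starts tilted

# Prosecution presents two pieces of evidence, e.g. a phone-extraction report
# (LR=10) and a witness (LR=4); the defence lands one partial challenge (LR=0.5):
posterior = update_odds(prior_odds, [10, 4, 0.5])
odds_to_prob(posterior)   # 20/21, ~0.95
```

A vague "something might have happened, dunno how" is a likelihood ratio close to 1: it barely moves the needle back, which is the comment's point.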


"Nerd law idea" or not, this should bother us. Some "expert" saying some software found incriminating evidence, but, oops, you'll just have to trust us that it works as we're telling you it does... no, no, no, that's not justice at all.


Are you suggesting they can't hack phones, and just fake entire address book/location history/all messages? That's obviously false. Is the suggestion that there was some sort of massive conspiracy planting evidence*? I strongly suspect judges won't assign much credence to that.

* Such a conspiracy would have to include Cellebrite, the police and Apple/Google. After all, hacking the phone likely means you have the iCloud keys and can download the backups to verify. Furthermore, the conspiracy would always be at risk from the accused choosing to restore the backups to show some evidence wasn't there. So someone at Apple/Google would have to be complicit as well to modify the backups.


>law works probabilistically

Science also works probabilistically, like most of it, but the law works "probabilistically", as in eyeballing a rough estimate using folk theories kind of probability.


Let's say you're charged.

They seize your phone and pull photographs, text messages, messenger logs, maps data, etc. The messages are inculpatory.

What do you want to say? That the extraction method is unreliable, that the Police have incorrect logs? Imply that the text messages have been extracted with errors, somehow?

That simply isn't the case. The data extracted is both reliable and probative. This is a copy and paste.


The data extracted is both reliable and probative. This is a copy and paste.

Debatable.

https://signal.org/blog/cellebrite-vulnerabilities/


Ouch, that's pretty nasty.

Also, I couldn't help but be amused by the cheekiness:

"By a truly unbelievable coincidence, I was recently out for a walk when I saw a small package fall off a truck ahead of me. As I got closer, the dull enterprise typeface slowly came into focus: Cellebrite. Inside, we found the latest versions of the Cellebrite software, a hardware dongle designed to prevent piracy (tells you something about their customers I guess!), and a bizarrely large number of cable adapters."


Chain of custody matters and if there's an untrustworthy link in that chain it calls into question the validity of the evidence itself.


Sure, but if the phone company says there was an SMS, when you open the phone it looks like there is an SMS, and Cellebrite extracts one, which you then present in a nice PowerPoint to the court... then your accusation of tampering with the evidence, and your account of where in the chain of custody it occurred, had better be good, or all you are going to do is annoy the court (speaking as someone with dozens, maybe a hundred-plus, Cellebrite reports in various court records). This is what armchair lawyers, and especially coders, tend to forget. The law is not code. The judge is not a compiler. When the law does not represent reality (this guy sells drugs and uses his phone to do it being a common reality), that is when it becomes newsworthy.

But normally people just look at the reality and go "oh yes, this tool extracts the stuff on a phone and turns it into a PDF/HTML, how convenient". 99.99% of the time, a drug dealer alleging he has no knowledge of the hundreds of deals on his phone is about as believable as your 5-year-old nephew with cake smeared on his face denying he ate the last of the cake... and is treated as such. Should the act of selling drugs be a crime? Completely different topic.


We have imprisoned 600 people in Britain because electronic transaction records contained errors (the Post Office Horizon scandal).

You cannot assume a random company's tech works reliably, without any proof, when someone's life is at stake. If they have cloud-upload shenanigans they could be mixing up records of different people.

Evidence from a network provider is a completely different matter, but if SMS records were enough you would not need this crap.


I don’t believe that’s the case at all. Cellebrite uses shady techniques kept in a black box. I see no reason to believe that it’s reliable at all. They’ve given us zero reason to.


Try your hand with this next time you're arrested and have your phone dumped, I guess. Tons of police departments across the US use them, and many courts have accepted data extracted from devices.


People have been let off for crimes they obviously committed because the rock-solid evidence against them was illegally gathered or handled. I assume that's the sort of argument that should be leveled here: your evidence is illegal, you can't include that.

And I believe that's what the argument against secrecy with these systems is. How can you know whether legal lines have been crossed if the system is shrouded in secrecy?


That's what Parallel Construction is for.


This is where you're wrong. You can't just get up in front of a criminal court and decide that a company is a "hacking company". The judge is not going to allow it.


Except that they literally are. They use undocumented zero-day exploits to hack into locked devices.


If you know anything about biology, you'd realise the jury-based court process is fundamentally flawed anyway.


Defense attorney here. Client of Cellebrite. Look, it’s the gold standard in smartphone processing. Why would anyone want to taint something that works well? For BOTH sides. Don’t assume a tool always favors one side. The data is the data. A tool reveals it. Don’t want it found? Don’t use an easily processable app…


How do we know that the tool doesn't alter the data?


It's a defence often used in CP cases - 'my computer was hacked! I had no idea the computer had CSAM!' - and it always fails. Because hacking is atypical, courts place the burden on the defendant to prove hacking, and there's usually no good way to do that. Same thing here, except the defendant is in an even worse position.

In theory, a CP felon would have reason to generically hack some other computer (since CSAM storage itself is illegal), and desktop OSs are less secure. In the phone case, you likely need to assume a specific criminal conspiracy aimed entirely at convicting the defendant, one that must involve the police*, and that's an extremely high bar to meet.

* The police are the ones using the tool; the only way to effectively plant non-generic evidence is to have the corroborating data first and that requires police assistance.


Or perhaps even more likely - process fake data injected by a malicious 3rd party trying to spoof an app.


How good is it? Can it extract secure enclave keys or only bypass the rate limiting?


I just served on a jury for a case that made extensive use of Cellebrite, GrayKey and cell tower triangulation. It was crazy how routine this is in law enforcement, and how much they got from an iPhone. (Everything.) The case has now concluded, and it concluded because of that evidence.


Please tell us more! This is likely the best way for that information to come out.


What information are you looking for? I didn’t know there was such a lack of information on what law enforcement agencies get from Cellebrite. I can only speak to my personal and professional (non-LEO) experience, but I’ll give it a shot.


Which phone model/version? Did they need anything from the owner? From Apple? Physical access? USB? Brute-force bypass? Anything you can describe about the capabilities and techniques would be interesting to know.


I reckon that if the case is big enough, they'll get access to the cloud backups. The UFED product page states that it can bypass device locks; this sounds like physical access and would only require the device to be confiscated.


So just normal regular cops got "everything" off a locked iPhone.

Please share what you know, because that'd be huge news. The last info we had is that even the FBI had trouble getting into a locked iPhone 5C, and the protection has improved significantly since then.

After that the info was that Graykey(?) could bypass the attempt limit on passcodes, but by having an alphanumeric code you could make brute forcing take longer than our Sun has life left in it.
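That "longer than the Sun has left" claim is easy to sanity-check. A back-of-envelope sketch, assuming the attempt limiter is bypassed but each guess still pays the roughly 80 ms key-derivation delay Apple has described for the Secure Enclave (the exact per-attempt cost is an assumption here):

```python
# Worst-case exhaustive-search times for different passcode shapes,
# at an assumed ~80 ms per attempt enforced in hardware.

ATTEMPT_SECONDS = 0.08
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_exhaust(alphabet_size: int, length: int) -> float:
    """Years to try every passcode drawn from the given alphabet."""
    return (alphabet_size ** length) * ATTEMPT_SECONDS / SECONDS_PER_YEAR

print(f"6-digit PIN:       {years_to_exhaust(10, 6):.4f} years")   # under a day
print(f"10-char alnum+sym: {years_to_exhaust(70, 10):.2e} years")  # billions of years
```

A 6-digit PIN falls in hours; ten characters over a ~70-symbol alphabet lands in the billions of years, comfortably past the Sun's remaining ~5 billion.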


That's not as simple as it sounds. If the device is locked after a reboot (or locked through the SOS button combo) then it's in a state that cannot be cracked without a series of zero-day vulnerabilities chained together to get in. And the moment Apple learns of any of those vulnerabilities they start patching them, undermining the feasibility of Cellebrite and GrayKey.

In many cases this is not what happens. An average iPhone doesn't run the very latest iOS, the model might be old as well, and pass code may be super primitive.


You do understand that everything you know about iPhone's privacy comes from Apple's mouth, right?

Snowden has revealed they are more than happy to let agencies into their supply chain.


> You do understand that everything you know about iPhone's privacy comes from Apple's mouth, right?

No, we know that from the words of independent security experts.


Apple does not exist in a vacuum; they are subject to national and federal law like anyone else, and every time they make you believe otherwise it's a publicity stunt whose nuances clearly escape you.


I think you are confusing several different aspects of security and privacy here, like iCloud data backups and encrypted local iPhone storage. Apple does have access to iCloud data, but they do not and cannot have access to locally encrypted stuff which is locked behind your personal PIN code. Apple does share iCloud data with law enforcement.

Knowing this allows users to improve their protection and privacy at the cost of convenience. For instance, one can avoid using iCloud altogether, or choose to encrypt iCloud backups.


There is a history of intelligence agencies infiltrating the supply chain to implant malware on a whole batch of devices in the hope of catching a single person, with the silent endorsement of Apple of course. Their security is just like everyone else's.


Do you have a source for this? I remember the Snowden leaks showed this occurs with individual target devices via the NSA's "Tailored Access Operations" unit,[1] but I haven't seen anything suggesting it's been done to batches of devices that include non-targets.

[1]: https://arstechnica.com/tech-policy/2014/05/photos-of-an-nsa...


You don't waste supply chain level attacks on regular joes.

Apple devices get so much attention that the risk of tainting huge batches of devices at the supply chain level and having one of those end up in the hands of a security researcher is too big.

All of the security professionals I know use iPhones just because it's really fucking hard to get anything out of them without resorting to rubber-hose crypto, which is illegal in most of the world.

The last iOS zero-day was burned on none other than Jeff Bezos.

Joe Schmoe is perfectly safe, the local PD won't have access to tooling like that.


The handling of Apple on HN is a sore point for me on this otherwise pretty factual forum.

The amount of shilling, the purely positive news, the acceptance of all their PR without question, the ignoring of everything that happened in the past... not all members, but some very highly voted opinions make me think this place simply can't be objective on this topic.

Probably hundreds of Apple employees come here quietly and stay quiet on relevant topics, but folks either paid or simply fanatical about this specific brand make any serious discussion impossible.

I don't blame the company; it's a for-profit mega-corporation just like the rest, doing the same things as everybody else on the market (i.e. not paying taxes where it should to support local development, happy with Chinese child labor at its contractors, etc.), but some people refuse to see that. It's not unique to electronics, but one would expect a bit more rational and balanced discussion.


Completely agree. Don't forget the all-American phenomenon of customers defending corporations to the death


> The last info we had is that even the FBI had trouble getting into a locked 4S and the protection has improved significantly.

IIRC the feds complained because they wanted Apple to give them access for free rather than having to use/buy third-party tools.


Ah you're correct, Apple denied them the tools and afterwards changed the design so that there are no tools they can give.


I replied to my first comment.


Sorry, I just saw there was interest in this. They were able to get everything off of an older iPhone, I think two generations prior to the current one (I'm sorry, I can't remember the exact model), but for updated (current) iPhones they were not able to get a full disk image, only "some" data (again, I can't give specifics, but it would be metadata). Essentially we were told GrayKey keeps updating its software to find new ways to extract the data. We were also told it didn't work if the phone had been powered off. For the phones it did work on, it extracted everything from the disk and then Cellebrite went through the data to present it to the user.

The GrayKey device plugs into the phone and requires that the phone not have been powered off. The untrusted-USB setting also helps prevent it from working.


You can’t leave us hanging with “…from an iPhone (everything)”!!


[flagged]


WTF is with your account? Are you trying to impersonate dang, the official moderator?


We used to have those in every Verizon store for transferring contacts and whatnot to people's new phones. I used one for a cop one time and he said, "Oh, you're using a Cellebrite, huh? Those things are pretty incredible, aren't they?" I had no idea at the time that this was all a big deal. I actually thought they were cumbersome to use and slow. Transferring photos off an old flip phone could take hours; we refused to do it on smartphones once we started getting them. You also couldn't do anything if the phone was locked, and on Android you had to enable USB debugging, which is now hidden in settings.


Cellebrite has many offerings. The lower-level ones do data backup for stores like yours. The advanced ones break into modern iPhones.


How do they break current iPhones?


They buy and build unpublished exploits and hide them so that they don't get patched. That's why the key software is tricky to get even for government agencies; some things, like breaking fully patched recent phones, Cellebrite will only do in-house so that the customer can't leak and 'burn' the quite expensive exploit that was used.


I'm pretty sure they do the same thing as a manual iTunes backup. A lot of the older phones with chips before A13 also seem to be variably vulnerable to being jailbroken with CheckRa1n where they can gain root access. It seems likely that Cellebrite uses CheckRa1n for a lot of older phones. There's also the issue of protection levels where if your phone has been unlocked once since boot (AFU), various keys are kept (possibly transparently) in RAM where they can be dumped somehow.

Also, iCloud Backup and several other dangerous "features" like iMessage are enabled by default whenever you sign in to iCloud, so there are a few backdoors already standard for anyone who uses iCloud. If you sign in in Settings, it automatically signs/opts you into iCloud, even if you don't want it.


True, iMessage and FaceTime have a long history of vulnerabilities, and it's somewhat unfounded to assume they're safe now.

Cloud backups are also very problematic, because they contain a lot of sensitive information and are not end-to-end encrypted by default.


This isn’t the case if you enable Advanced Data Protection:

> If you choose to enable Advanced Data Protection, the majority of your iCloud data — including iCloud Backup, Photos, Notes, and more — is protected using end-to-end encryption. No one else can access your end-to-end encrypted data, not even Apple, and this data remains secure even in the case of a data breach in the cloud.

Which requires you to first set up 2FA, which itself requires two keys.

https://support.apple.com/en-us/HT212520

I’d be more worried about those OTA security patches they can send out whenever, and how easy it might be to MITM that process and push down exploited "security patches". I’m sure it’s already happening in the wild.


The fine print for ADP is that all the filenames, checksums/hashes, and various manipulation dates remain StandardDataProtection, meaning Apple retains access. Think of ADP like a safe made of bulletproof but highly transparent glass: an enormous amount of (meta)data is still available to Apple, or to anyone who can hack them or compel cooperation.


Here we go with the metadata again. Viewing metadata means nothing, as it can be easily changed by the user while retaining the actual data: rename the file, change the creation date. Instead of naming the folder “classified military pics” you name it “Susan’s 5th birthday party”.

Metadata means nothing.


There's an option in Configurator to disable OTA PKI updates. Is that the same thing?


Nice try, Apple employee!


If I were an Apple security employee I would be checking the Cellebrite devices with my team and working on fixing any issues: reverse engineering and debugging.


Yes, assuming you can get access to one. Do you think Cellebrite would knowingly sell one to Apple?


Perhaps I’m just naïve, but it seems unlikely that Apple couldn’t figure out some way or another to get ahold of them given the resources at their disposal. They could employ a full team whose only purpose is to reliably source Cellebrites by whatever means necessary and the cost wouldn’t even be a rounding error.


I would be mind-blown if Apple didn't have any and all of those devices (assuming they're actually interested in security).


IIRC you don't even buy them; they're all subscription-based.


A company the size of Apple has ways of acquiring the devices.


Indeed. Impersonating the cops in order to obtain hardware that they want is well-trodden ground for Apple: https://www.wired.com/2011/09/iphone5-lost-in-bar-more/


Cellebrite won't sell that thing to most police forces (they might sell the service to them, in a way where they don't get their hands on the actual exploits), so I'd assert that it is tricky for Apple to get this. This is also supported by the fact that Apple has systematically not been able to timely patch the vulnerabilities used by Cellebrite.


If college campuses routinely have their own "police department", surely Apple HQ security could expand to include a law enforcement agency?


They could get lucky and find one that fell off of a truck.

https://signal.org/blog/cellebrite-vulnerabilities/


Exploits.


OK, so I assume they have a short life. I worked in the exploit business for other platforms/apps, and you need to invest a lot to find one. Once an exploit is in use and the 0-day is discovered, the vendor fixes it.

I assume the advantage here is the difficulty Apple has in getting access to these devices?


I do not think they are discovered, usually. Cellebrite does not operate like other 0-day vendors, because they don't need to deploy their exploits.


They only have a short life when the maker isn't constantly adding new attack surface with every year's iOS release. Cellebrite will always have a steady pipeline, with or without the maker's direct cooperation.


It's interesting how the guys behind commercializing the Tegra X1 exploit (Nintendo Switch) were nailed to a cross, yet the people commercializing these exploits are given a green light at every step of the process...

Copyright and corporate secrets apparently trump personal privacy and secret keeping in our world.


Unfortunately Cellebrite probably isn’t violating the DMCA here.





This was amazing and made me even more happy to use Signal. Incredible read


If you think Cellebrite can't extract all your Signal messages you are very much mistaken.


If the messages are encrypted locally (an option on iOS), and Cellebrite fails to brute-force your login code, then yes, they cannot extract Signal messages.


"Bypassing" the screen lock is a selling point on the product page, though.


Keep in mind the default, and what most people stick to, is a simple numeric passcode, which can easily be brute-forced because you can just iterate through all the numbers. If the replay protection doesn't work they will be able to get in; it's just a matter of time. If everything is working as it should and you use a proper alphanumeric/symbol passcode, I think that's far less feasible, if not downright impossible.
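"Just iterate through all the numbers" really is this trivial once the attempt limiter is out of the way; a toy sketch, where `unlock` is a stand-in for whatever the tool actually calls against the device:

```python
# Exhaustive numeric-passcode search, assuming the retry limiter and
# wipe-after-10-failures protections are bypassed. unlock() is a
# hypothetical callback standing in for the real device interface.

from itertools import product

def brute_force_numeric(unlock, length=6):
    """Try every numeric passcode of the given length; return the hit."""
    for digits in product("0123456789", repeat=length):
        code = "".join(digits)
        if unlock(code):
            return code
    return None

# Demo against a toy "device" whose code we happen to know.
found = brute_force_numeric(lambda c: c == "271828", length=6)
print(found)  # 271828
```

With only a million 6-digit codes, even a hardware-throttled attempt rate gets through the whole space quickly; an alphanumeric passcode blows the loop bound up astronomically.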


I'd say the number of people who have the "delete data after 10 failures" feature enabled is vanishingly small, hence every passcode can be brute-forced.


The login code is used to encrypt this data, so merely "bypassing" it wouldn't help. It needs to be brute-forced.


If I can try all the combinations, I have indeed bypassed the need to know the correct one.


How would the court even accept that with a broken chain of custody? That being said, for iPhone users: make sure to pair-lock your device so that it cannot be cloned for later investigation.

https://arkadiyt.com/2019/10/07/pair-locking-your-iphone-wit...


Couldn't this all be accomplished by simply setting an encrypted-backup password and making sure you have an alphanumeric password with symbols for maximum entropy? It's my understanding that Cellebrite (maybe not their super-secret version) is really just a fancy iTunes backup machine, and that it requires no fewer conditions than whatever is required to back up your phone manually at home...


Of course, a vulnerability would allow for bypass of such a thing.


You go to court with facts, not a compelling story of how you acquired the facts. If the defense believes the facts are not correct they can challenge the method of collection, at which point the expert witnesses are called to describe to the court how collection is performed.

Not that any of this matters anyway. Cellebrite doesn't let you extract from modern phones anymore. You send the phone to a lab and get back an image you dump into Physical Analyzer. The actual exploitation of the phones is done by them, so they don't have to worry about leaks.


> You go to court with facts, not a compelling story of how you acquired the facts.

That’s not correct. Check out parallel construction[0] to see the lengths cops will go to in order to make it look like they have a plausible process by which they acquired the evidence.

[0]https://en.wikipedia.org/wiki/Parallel_construction


There are plenty of indictments you can read on the internet. It is rare for one to state how evidence was collected unless it is material to the facts of the case.

The defense can question or challenge the methods, but that is not a material part of the prosecutions case.

As for parallel construction, Herring v. US opened it up as a legitimate investigative process.


No it didn’t. It only allowed it in case of a good-faith mistake, which is very different from intentionally using a Cellebrite to capture evidence illegally.


Cellebrite’s internal processes would still be subject to discovery. Defendants have a right to show that the extraction process was flawed otherwise it could lead to wrongful convictions.

Secret / Magic / black box processes can’t generate admissible evidence in criminal proceedings in the US.


Defendants do have that right, and in practice Cellebrite has a handful of expert witnesses they will send out to answer the court's questions. In this case there is no "black box", because Cellebrite has been litigated previously and is known to the court.

If you want to challenge the reliability or accuracy of the process you need to have some basis to show the judge there is a thread to pull on and it isn't just a fishing expedition.

For an idea of how courts handle this, look at breathalyzers. People occasionally get the bright idea to ask to inspect the source code of the machine, and it is routinely denied because it is not in the possession of the prosecution. https://digitalcommons.law.uw.edu/cgi/viewcontent.cgi?articl...


>People occasionally get the bright idea to ask to inspect the source code

Isn't this a reasonable ask?


Reasonable for the average computer geek. Not reasonable for a judge who wants to get things done and be on time for lunch.

It is shocking and disgusting how ignorant the legal profession is about any real kind of science and engineering.


Absolutely. It's disgraceful that they can opt-out of discovery simply by outsourcing part of the investigation to a third-party.


Generally, no. As I mentioned up thread you need to have some basis to show the court there is some chance you'll change the outcome of your case if you are granted access to the source code. For example if a blood draw at the hospital yielded wildly different results, or the machine froze up multiple times during the test.


> Secret / Magic / black box processes can’t generate admissible evidence in criminal proceedings in the US.

Speed cameras?


Speed cameras are usually tested and certified by an independent third party. With proper cause, there's probably a pretty good chance you could get an expert to review the design files after signing an NDA.


Traffic violations have, I believe, a lower standard of evidence than criminal cases. I am very much not a lawyer, though.


Traffic typically has lower standards because it's in county court with a clown presiding as magistrate. It becomes too expensive to appeal, and if you aren't an attorney you will immediately lose, because they know you are unlikely to appeal again.


>You go to court with facts, not a compelling story of how you acquired the facts.

False, chain of custody is needed, or .. cough cough .. bad cops will set you up.


The vast majority of these exams are done under the authority of a search warrant issued by a valid court. The police have the authority to access the data on the device; their method of doing it is not really relevant. One of the reasons companies like Cellebrite or Grayshift are so secretive with their exploits is that as soon as Apple or Google get a grasp of how they are bypassing security mechanisms, they push an update to block it. It's a constant cat-and-mouse game.


Regardless of how the legal establishment views it, the access method definitely matters if the data is to remain unmodified. Things like file system metadata can be affected.


So they are asking for security through obscurity and actually think that is going to last.

The article mentions a training video that demos the product, yet didn’t include it as far as I can tell. Why is that not being published? I don’t want a transcription of a company telling cops to protect their property, I want to see what this company is able to access when they illegally access my property.

Pretty sure if I access a computer system that I’m not authorized to, aka, hacking, I just committed a federal offense, yet a company is legally selling the ability to do that very thing, at scale, to law enforcement and telling them to keep it a secret. Last I checked, a law was a law, and it applied to everybody, not just us common folk.

Also, “certainly mention premium” is rich. So don’t disclose the illegal hacking internals, but do give a shout-out and free marketing for our premium product, so others in government and law enforcement will see what you got from them and go sign up. Shit, do they have a referral bonus too? Get on the stand and mention Premium and the referral code WHOWATCHESTHEWATCHERS for 60% off your first month.

What the fuck? How are these companies allowed to do this? Oh right, because their customers are the ones who would normally be arresting them, so it’s fine to break the law.


It's funny to me that Apple and the FBI coordinated to spin the whole iPhone decryption thing in the media after the San Bernardino shooting as Apple vs FBI (while in reality the FBI had all the access to all the data they wanted, via iCloud, via an encryption backdoor that Apple explicitly preserved for them).

https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...

Now Apple no doubt would love to do some RE on these devices to identify and patch any exploits they use, and many of the local PDs that can get Cellebrite to sell them these boxes probably will abide by these terms, having previously been convinced via Apple/FBI propaganda that Apple is opposed to cooperating with law enforcement.

It's a little bit funny, but I think Apple security will win out in the long run as it only takes one instance of a leak for Apple to patch a vulnerability, and they do have a lot of money (and the cooperation of the FBI/DHS et al). The text of the training quoted in the article makes it seem that Cellebrite knows they are fighting an ultimately losing battle there, as information never becomes more secret over time, only less so.


IIRC that's not how any of it went down. The authorities wanted access to the device itself and were asking for a "good guys only" backdoor into the device to obtain anything that might not have been in the iCloud backup; Apple refused to do this and that is where the controversy / news story came from.

Apple was very open about the fact that law enforcement would have access to the iCloud backup, which is even mentioned in the article you provided. And there was no "backdoor explicitly preserved" for the FBI, the iCloud backup was just not e2ee. This was also (somewhat) common knowledge at the time and what was often suggested was to do backups through iTunes because there was still an option available to encrypt the backup.


Cellebrite keeps the good stuff on their premises. You send the phone to them and they crack it for you.


So all Apple needs to do is add a GPS check that destroys the Secure Enclave if it detects that it is on a Cellebrite campus.


LEOs already use faraday bags and cages for devices they confiscate to prevent remote wipes. Some of the products even include chargers to ensure devices don't go dark (searching for signal eats up battery life, after all) and flush valuable data from memory. How could Apple rely on GPS when the device will be totally offline?

https://www.teeltech.com/mobile-device-forensics-equipment/m...

https://tritechforensics.com/digital-forensics/df-faraday-pr...

https://arrowheadforensics.com/products/evidence-packaging/f...


Sounds like a feature that wipes the phone if it loses signal for X minutes could be useful.


I would think a more reliable method would be heuristics to detect tampering, triggering enclave destruction when the confidence level is high or the activity continues long enough. There are exceptions, but the actions that enable deeper exploits rarely resemble normal usage in any way and should be simple to differentiate.
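A minimal sketch of that heuristic idea: accumulate a tamper score from signals that rarely co-occur in normal use, and act only when it stays high for several ticks. Every signal name, weight, and threshold here is invented for illustration:

```python
# Hypothetical tamper-confidence accumulator. Signals and weights are
# made up; a real implementation would live in firmware, not Python.

TAMPER_WEIGHTS = {
    "usb_host_unknown": 0.3,        # attached to an unrecognized USB host
    "repeated_unlock_failures": 0.2,
    "debug_interface_probe": 0.4,   # something poking debug/service pins
    "no_user_input_hours": 0.1,     # handled for hours, never unlocked
}

WIPE_THRESHOLD = 0.7

def tamper_score(active_signals: set) -> float:
    return sum(TAMPER_WEIGHTS.get(s, 0.0) for s in active_signals)

def should_wipe(history: list, sustained_ticks: int = 3) -> bool:
    """Destroy keys only if the score clears the threshold for several
    consecutive observation ticks, to avoid false positives."""
    recent = history[-sustained_ticks:]
    return (len(recent) == sustained_ticks
            and all(tamper_score(t) >= WIPE_THRESHOLD for t in recent))

# A forensic-rig-style session: unknown host plus debug probing, sustained.
session = [{"usb_host_unknown", "debug_interface_probe"}] * 3
print(should_wipe(session))  # True
```

The sustained-ticks requirement is the interesting design choice: it trades reaction speed for not bricking a phone over one odd charger.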


I’m sure Cellebrite never buys new real estate.


You don't believe Apple would find out about them? They could know before the lease is signed.


Where would they put the code to do this? How would it function if Cellebrite put devices in a Faraday cage before transport?


Indeed you can’t rely on GPS. But…the absence of GPS for more than a few hours is pretty weird. An automatic reboot and maybe nuke could be appropriate for some users. I guess then the faraday cage also needs a fake gps signal to defeat this.


If you have the power budget, then you can compare accelerometer data against the GPS to check its validity, dead-reckoning style.
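Roughly like this: double-integrate the accelerometer into an expected displacement and compare against what GPS claims. The sample data and the 50 m tolerance are made up for illustration:

```python
# Dead-reckoning cross-check sketch: if the accelerometer says the
# device moved but a (possibly spoofed) GPS says it didn't, flag it.

def integrate_displacement(accel_samples, dt):
    """Double-integrate 1-D acceleration (m/s^2) into distance (m)."""
    velocity = 0.0
    distance = 0.0
    for a in accel_samples:
        velocity += a * dt
        distance += velocity * dt
    return distance

def gps_plausible(accel_samples, dt, gps_delta_m, tolerance_m=50.0):
    expected = integrate_displacement(accel_samples, dt)
    return abs(expected - gps_delta_m) <= tolerance_m

# Device accelerates briefly then coasts, but the GPS claims it never
# moved -- consistent with a fake GPS signal inside a Faraday cage.
samples = [1.0] * 10 + [0.0] * 290   # 1 m/s^2 for 10 s, then coast
print(gps_plausible(samples, dt=1.0, gps_delta_m=0.0))  # False
```

Real dead reckoning drifts badly over hours, which is why the tolerance would have to grow with time; the check only catches gross mismatches like "drove across town while GPS says stationary".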


Ooo great point


Apple also has their tracking network: AirTags plus every Apple device.


Probably not useful if you’re in a faraday cage.


Except Apple now has E2EE for iCloud, and Apple still doesn't want others getting into their firmware and secure enclaves.


The fact that we have to call it "their firmware and secure enclaves" is a bit telling, though. The majority of consumer-facing devices today have a layer of inscrutability that we leave up to the manufacturer to implement properly. For instance, Microsoft has Bitlocker, but I couldn't confidently say that isn't backdoored either.

Apple has whitepapers, but they're about as verifiable as LK-99 in practice. Their security model entirely revolves around the Apple-issued root of trust, and if you can't trust them then you have to hit the bricks. If you don't own both ends, the end-to-end encryption shtick is a theatrical farce.


If BitLocker had a backdoor it would have leaked by now. I don't believe they could keep an "alphabet boys only" backdoor secret for that long.

What would that backdoor even look like? Some sort of master key that can decrypt every BitLocker-encrypted drive? Imagine if something like THAT got leaked.

Who would have access to that key? Microsoft themselves? The NSA? The FBI? What about the UK, Australia, or other US allies? Do their alphabet boys also get access?

Do small-town cops also get access?

And I'm pretty sure lots of security researchers have tried to find vulnerabilities in BitLocker. If there were something fishy going on, they would have noticed.


It can potentially be a theatrical farce, but you've only created hypothetical scenarios in your mind for why that might be the case. Even if you did "own both ends", you still have to rely on other parties to contribute the pieces that you don't watch over, unless you're going to be E2EE from scratch (again, because you can hypothetically just say "well what if the encryption scheme itself is backdoored").


Both of those conceits apply to Apple's scheme as well, though. Transparency enforces accountability, Apple themselves wouldn't be using things like OpenSSL and bash if they weren't open enough to scrutinize and modify.


sure, all clouds are other people’s computers; laypeople just want things to work and don’t really care about whose root of trust they use.

living in a world where you run everything yourself sounds good in theory but then you can’t communicate with anyone else.

i’d like for the firmware to be open source and for the cloud to be federated, but it’s a pipe dream and i’m busy so i just gotta trust apple in the end


e2ee for iCloud is opt in (ie disabled by default) so approximately 0% of iCloud users use it.

Until it's on by default for new iCloud accounts and previous non-e2ee iCloud accounts get automatically upgraded to use it, the fact that it is offered is basically irrelevant.

Additionally if you are using it to protect your iMessages, it's ineffective, as your iMessages are stored on Apple servers twice: once for each end of the conversation. Unless both you and the other end of your conversation have both explicitly opted in to e2ee for iCloud, a single party enabling the setting does nothing for the security of iMessage, given that approximately nobody uses the feature today.


If you’re that paranoid, maybe don’t send super sensitive information using a chat application.


This has nothing to do with "paranoid", or with me personally. I don't use iMessage at all.

I also didn't live in Nazi Germany under the SS and Gestapo but I can still recognize and identify that police surveillance over the private communications of all (or even a significant fraction of all) members of society silently and discreetly creates a world which sucks for everyone in it, as things like new political parties or new labor unions (or any other threat to the status quo) will be detected and defeated before they ever gain popular attention or critical mass.

Did we already forget OWS and the tea party? How about MLK?

Please stop using defined psychological disease terms to describe people who seek basic human rights to privacy from corporations and the state. It's disrespectful to both people who suffer from paranoid delusions as well as sane people who desire human rights.


> “And anyone testifying about those products under oath must not hide important information that could help exonerate a criminal defendant solely to protect the business interests of some company,”

What? To exonerate means to clear someone of guilt. It specifically has the nuance of finding someone not guilty after careful consideration of the facts of the case. Someone who gets off on some technicality has not been exonerated.


:D I could imagine that they buy 0-days for their products to work - and the more that get revealed, the lower the profits


They develop them in-house.


Did they say magic twice...

Throw that shit out the window.


Apple also likes to keep it hush hush.


Proprietary garbage shouldn't be used for this kind of public-affairs stuff


Can FOIA requests expose this stuff?


FOIA has an explicit exemption for trade secrets whose disclosure would harm the competitiveness or business interests of a company. Zero-days obviously fall under that, since Cellebrite spends a huge amount of money on R&D to find and develop exploits.




