Data on Storj is by default end-to-end encrypted with keys only the data owner controls (with optional support for sharing features). Only the data owner can decide who to share the keys with and who can see the data. Put another way, Storj can’t access data without the data owner sharing keys and access!
However, if the owner shares the encryption keys and provides access to others, the data can be further distributed by those others. Storj does not allow illegal content per our terms of use. If someone has stored potentially illegal content and shared it with others, law enforcement may seek to obtain information by way of a subpoena, warrant, or other legal process. As you probably know since you're reading this thread, often such inquiries are confidential and the recipients may be prohibited from disclosing their existence.
If you're interested in our encryption and security design decisions, there are a lot more details over at https://www.storj.io/disclosures. Glad you're all paying such detailed attention!
"As you probably know since you're reading this thread, often such inquiries are confidential and the recipients may be prohibited from disclosing their existence."
Yes, that is exactly the kind of thing you're supposed to be taking a stand against and resisting.
In fact, warrants like this are not "often" confidential - that is an aberration and an abomination - and a relatively recent one.
We - all of us - should publicly oppose these measures and work to resist them.
EDIT:
I think I have misunderstood - the HN title is incorrect/misleading.
Storj did not remove their warrant canary, they failed to update it.
I'm usually a strong advocate for holding citizens of democratic nations morally accountable for their nation's actions. Asking software authors to martyr and incriminate themselves on behalf of a minuscule fraction of the public, even though they are part of the few doing anything at all, seems exactly backwards. Law reform should start and end with the electorate and their chosen representatives.
Ok, so the hosting company’s management goes to jail, the service goes down, the general public are aware “a warrant was served”, the person hosting illegal content goes to host it somewhere else… who has benefitted from this situation, and how?
Encryption is all well and good, but only when paired with anonymization. It’s worth comparing the Storj privacy stance linked above, which describes numerous ways in which IP addresses might be logged and associated with accounts by their analytics providers, as opposed to e.g. https://www.privateinternetaccess.com/vpn-features/no-logs-v... .
Of course it’s a different business model, and I can’t vouch for PIA actually standing by those commitments. And I empathize with wanting to use best in class tooling to optimize your site experience. But prioritization of privacy, and commitments to minimizing log retention, are things you should consider revising to the extent you are legally able to do so. Don’t feel you need to respond here, of course, to that point!
I think his response actually DID address the canary disappearing. By law he cannot say “Yes, we have been issued a secret warrant” if they had been served, but if they had NOT been served he could legally say so.
He directly acknowledges that this post is about the canary disappearing and doesn’t immediately explain why. If the canary vanished due to a script failing or a person forgetting to update it, he would have loudly and clearly stated that they have not been served and the reason why the canary was not updated.
Those facts, imo, HIGHLY suggest they have been served with a secret warrant.
Because by law it is forbidden to say whether they were forced to give access. The removal of the canary, together with a message that doesn't address it directly, can be seen as a strong hint that they were forced to give access.
I work in security, am subject to audits, and audit others. This is the way we talk: painstakingly explicit. That this rubs some the wrong way, I can see. Charm is not a currency in this domain.
I of course understand what you mean, but I don't feel like excusing myself for pointing out logical or factual flaws.
I can say that while this is essentially a security discussion and not a dinner party conversation, I am nice to talk to :).
It looks like the EFF had one. [1] Though, the website currently does not load, and from archive.org, it looks like they killed the project sometime before July 2021. [2]
It could also just be a glitch, as it appears a script generates the .txt (and fetches external text / news as content).
In NSLs, you are not allowed to reveal the existence of the request.
Removing the warrant canary reveals the existence of the NSL.
Sometimes courts also prohibit you from revealing that you gave away user account information to the police.
In theory, using the (incorrect) logic of the warrant canary, you could publish a list of all user IDs and say "The police never requested the user information for these IDs below:", but this seems very gimmicky in front of a judge.
At the end of the day, a company that is actually subject to NSL requests has very little reason to remove a warrant canary.
1) They cannot be sued for lying in their warrant canary, as this was a properly formed court request.
2) It is good marketing for them.
3) They risk significant criminal charges for no benefits.
Given the way this warrant canary works, in that it’s published daily, wouldn’t the government instructing that you cannot opt to stop publishing the canary equate to compelling speech, and be a fairly clear path to a First Amendment violation? I ask this as a total outsider to both the United States, and US law.
I wonder if any companies have considered publishing warrant canaries as part of their public securities filings (for example, inserting it into the risk factors sections).
There is a fairly reasonable argument one could make for doing so:
- The company's value is tied to its reputation for securing its users' data
- An NSL or similar would risk their users' security and the company's reputation
- This will affect the value of the enterprise and therefore the existence of an NSL ought to be disclosed to investors
However, at the meta-level, this would be a substantial escalation, since an order to continue publishing a canary is no longer just compelled lying but compelled securities fraud, effectively pitting one arm of government (the national security apparatus) against another (the SEC).
Concrete case: you are the owner of an encrypted chat app, let's say "WhatsSignalGram".
Tomorrow, the government asks you to capture the messages of some users who are planning a terrorist attack.
By default your app does not capture these messages, but technically you could do it with a specific update.
The lawyers have already challenged the decision; they confirmed the request cannot be avoided.
You end up pushing a backdoor targeting specific users ("a law enforcement custom update").
The court explicitly asks you not to disclose the existence of this special update.
As a business owner, why would you reveal it?
You'll go to jail (or struggle in court at least) for a few years, have a horrible reputation and end up poor because your company is going to lose all its user base :/
This sounds like an insane decision.
The users, upon learning you got backdoored, are going to leave, because the competitors "Telegram", "WhatsApp", etc. will not have removed their canary, or will simply never have claimed anything :)
Perhaps on paper the law cannot force you, but if you don't comply you are cutting the branch you are sitting on.
The alternative is just to leave the warrant canary up and live happily ever after.
Perhaps you even made the world better after all and actually prevented an attack.
The sort of banana republic that can generate a court order for you to do that is the same kind that would simply put a gun to your head instead. In the US, no court would order that type of equitable relief.
But, pretending that's not the case:
> As a business owner, why would you reveal it?
Because you have principles? Backed up with at least a little bit of spine?
> You'll go to jail (or struggle in court at least) for a few years, have a horrible reputation and end up poor because your company is going to lose all its user base?
It's not clear that you would go to jail. You can simply shut down. [1] For those that actually care about privacy, your reputation would only increase.
Not a single doubt that the authorities (everywhere in the world!) have plenty of ways to coerce businesses and their owners to collaborate if the cause is important enough.
Blockchain-based OSS regularly gets negative comments here on HN, but they do implement such workarounds (personally I think xx Network is well-protected), as do non-blockchain based Open Source projects. The problem for the latter is no funding for devs and independent, decentralized infrastructure.
> The alternative is just to leave the warrant canary up and live happily ever after.
Another alternative would be to implement Binary Transparency, and make the app only download updates whose hashes appear in an independently-run jurisdictionally-decentralized append-only log. (Rolling out such a change might take too long to help the target of the current NSL, but it would protect future users, and announcing such a feature would itself be sending quite an important message).
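A minimal sketch of that client-side check, assuming the log ultimately boils down to a set of approved release hashes (a real design would serve Merkle inclusion proofs from several independently run mirrors; everything here is illustrative):

```python
import hashlib

def update_is_logged(update_bytes: bytes, logged_hashes: set) -> bool:
    """Refuse to install any update whose hash is absent from the transparency log.

    `logged_hashes` stands in for whatever the independently run, append-only
    log actually serves; here it is simply a set of SHA-256 hex digests.
    """
    return hashlib.sha256(update_bytes).hexdigest() in logged_hashes

# Simulated log contents and two candidate updates:
log = {hashlib.sha256(b"signed release v1.4.2").hexdigest()}
good_update = b"signed release v1.4.2"
targeted_update = b"signed release v1.4.2 + law enforcement custom patch"

assert update_is_logged(good_update, log)
assert not update_is_logged(targeted_update, log)
```

The point is that a targeted build, served only to the NSL's subject, would either have to appear in the public log (making it detectable by anyone watching) or fail this check on the target's device.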
I suppose if your business relies on keeping its source code secret, then you could just put an "if userName == the_target_mentioned_in_the_NSL" branch into the code, so that all your users receive the same update, but hopefully someone out there would be able to reverse engineer that code (perhaps after an anonymous tip-off).
Perhaps the government would be willing to pay for a software engineer to obfuscate the code enough that this malicious branch won't be detected in time, but I think that would put selective pressure on software companies to not distribute obfuscated binaries.
I'm late to add to this conversation, but one of the reasons Storj is open source is so that our customers can audit our code and end-to-end encryption to confirm that no backdoors exist.
> wouldn’t the government instructing that you cannot opt to stop publishing the canary equate to compelling speech
We compel speech from companies all the time. We force them to disclose ingredients and add labels to their products. We force them to hand over financial information and employee records. Forcing them to lie is something I haven't seen, though.
I don't see how the government could be stopped from forcing a company to hand over their encryption keys and just continuing to publish the canary on the company's website themselves though.
> I don't see how the government could be stopped from forcing a company to hand over their encryption keys and just continuing to publish the canary on the company's website themselves though.
The government could force the handover of encryption keys, with the caveat that if a set of keys controlled by the provider can compromise your security, it's a trash system.
Forcing the existing canary to remain would be straightforward as well.
Compelling false speech, i.e., continuing to publish a time-based canary, is a huge leap from either of those things.
I thought it was more that you can't be ordered to lie.
Not saying something is one thing, saying something false is something else.
It boggles my mind. "Why do I have to comply with this order?" "Because not complying with a legitimate authority is breaking the law." "Can you really use an argument based on the sanctity of the law to justify ordering me to break the law?"
Imagine you are a 5-year-old child, and the judge is your parent. Do you think this argument would work on them?
Trying to logic-bomb your way out of this is just asking for a summary judgment against you. It doesn't matter if it's not logically consistent; most rulings aren't! They will simply ignore this argument.
The counter to the "compelled speech" argument is that the government is not the one that forced you to start doing warrant canaries! You started doing warrant canaries; the gov't wouldn't need to ask for a remedy of you putting up continued canaries. You put the onus of lying on yourself, and if you don't do it you'll just be charged with revealing the facts.
The government compelling your speech is not what would happen.
I would take odds that the current supreme court would rule the government forcing you to continue to publish the canary was compelled speech in violation of the 1st amendment.
NSLs are already on shaky legal ground in the first place, with a good part of the court looking for ways to curb them. Which is why, when push comes to shove, the government often backs down when its authority is challenged rather than allowing the question to get up to the Supreme Court.
They, like common thugs, exist because of the threat and because most people lack the resources to fight them, so they just give in.
It's not a 'logic bomb'. US law sets a high bar for prior restraint, an even higher one for compelled speech, and has never compelled false speech as far as I'm aware.
And no, most rulings are, in fact, logically consistent.
I still contend that "compelled speech" is not what's happening when you get charged for publishing the existence of the NSL through a bed of your own making. Though I do agree that the feeling is very different compared to regulation of other commercial speech because of the ... lying aspect.
If we take the inverse, and we are talking about prior restraint, I have a really hard time imagining courts not siding with the executive. The courts side with law enforcement on so many more controversial things; "do not tell people who we are investigating" feels like such an easy win (and honestly much more acceptable to the general public than anything).
I feel like there's some fundamental argument here about negligence. In what way is the government responsible for you making a promise you can't keep to your customers?
But... ultimately there's no "real" answer except what the case looks like when it gets in front of judges and how they feel about it. And I will admit arguing it's not compelled speech takes a hell of a lot more effort (even if I believe it's true!).
If compelling this speech is a First Amendment violation, wouldn't the prohibition on outright saying you got the warrant also be a First Amendment violation?
No. US law makes a distinction between 'prior restraint' and 'compelled speech'.
Prior restraint is the government forcing you to _not_ say something.
Compelled speech is the opposite, forcing you _to_ say something.
Both have a high bar to meet, and meeting one does not mean that you meet the other. In the case of compelled speech, I don't believe a US court has ever ordered someone to lie, which is what would be required here.
It's always felt like engineers thinking they're outsmarting a much more mature legal system with humans built-in to slap away foolish gotchas.
The EFF seems to think warrant canaries would work, though, under the premise that you can say "I've received some number of national security letters", just not "I've received this precise number of NSLs." That's an _entirely_ different line of thinking from every argument for warrant canaries I've read.
> It's always felt like engineers thinking they're outsmarting a much more mature legal system with humans built-in to slap away foolish gotchas.
And then the engineers, unsatisfied with the messy world of human intervention, create a system without humans, where the "code is the contract" (ethereum). Then rapidly abandon that tenet at the first inconvenience and return to human discretion (DAO fork).
How you describe it is very similar to how a senior legal counsel and a couple C-Levels described it to me when I suggested we create one. There are merits to the ability to challenge on compelled speech but no company will take that risk. If anything a company wants the government on its side and will not take such a risk even if it's in a grey area. This was in a publicly traded company so perhaps that is also a factor. I've seen a lot of back scratching both ways.
If it was a glitch, I'd expect that they'd have fixed it by now to avoid the implications and potentially lose customers over it. Surely over the course of the month+ that it's been missing, they would have noticed it or had it brought to their attention?
"It could also just be a glitch as it appears it is a script that generates the .txt (and fetching external text / news as content)."
I don't like this ...
If you're publishing a warrant canary, updating it should be a manual process that a human is involved in.
The whole point of the headlines and the baseball/basketball scores is to prove that the messages weren't pre-generated and pre-signed in advance.
What's the difference between pre-signing a stack of future canaries vs. script-generating them as time goes by? Either way, you're muddying the negative-statement aspect of it ...
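For what it's worth, here is a minimal sketch of the "fresh canary" pattern being discussed: embed material the signer could not have known in advance (today's date plus a headline), then sign the whole statement. The Ed25519 key and wording are illustrative, not any provider's actual scheme:

```python
from datetime import date
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def publish_canary(signing_key: Ed25519PrivateKey, headline: str):
    """Produce today's canary text plus a detached signature over it.

    Embedding the date and an externally verifiable headline is what defeats
    pre-signing a stack of future canaries: the signer could not have known
    the headline ahead of time.
    """
    statement = (
        f"As of {date.today().isoformat()}, we have received no warrants "
        f"or national security letters.\n"
        f"Proof of freshness: {headline}\n"
    ).encode()
    return statement, signing_key.sign(statement)

# Usage sketch with a throwaway key and a placeholder headline:
key = Ed25519PrivateKey.generate()
text, sig = publish_canary(key, "<today's front-page headline goes here>")
key.public_key().verify(sig, text)  # raises InvalidSignature if tampered with
```

A script can still assemble the text automatically; the negative-statement value comes from the private key staying under a human's deliberate control, so nothing can be signed ahead of time or without someone acting.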
There must be some level of something to the idea that you can't be ordered to commit a crime, and so can't be ordered to lie, aka fraud, perjury, etc.
"What hasn't gone away are the nondisclosure provisions of National Security Letters that were amended by the USA FREEDOM ACT of 2015 and the 9th Circuit Court of Appeals' ruling that "the nondisclosure requirement does not run afoul of the First Amendment."
...
"... and so we will continue. We will also continue to mirror internationally to CH and HK. A false, or coerced, publication will require cooperation across multiple continents, languages and legal regimes - all in seven days or less since we publish every Monday morning ..."
Has the use of warrant canaries ever been tested in the US courts, especially with an NSL (or whatever they are called these days)? I don’t ever remember a case, but maybe I missed it.
"National security" are magic words that nullify the effect of the Constitution within their AoE when used. People who do not comply with an NSL's order forbidding disclosure that they received an NSL may be found guilty of a federal crime with e.g. secret evidence brought against them that their defense team doesn't have access to.
What would "tested in the US courts" look like? Would the federal government take an entity to court, demanding that they put their warrant canary back up, as a form of compelled speech?
No, I don't think there's been an example of that happening, but perhaps we would expect that if the government wanted to bring such a case, they would use a secret court, or an NSL, which would likely result in the warrant canary being added back to the site before anyone noticed. (An injunction against removing the canary could even be included in the initial secret warrant).
Removing the canary could potentially be classified as an illegal disclosure of the government request, so uh, criminal charges on that front. Of course that assumes that the secondary case isn’t also classified I guess.
Judges in regular courts really don't like "one weird trick" style legal arguments. I'm guessing judges in secret national security courts like them even less. Hence the skepticism that, in this circumstance, they wouldn't just order the site to update their canary and threaten the operators with charges themselves if they refuse.
Judges in the United States also don't like compelling speech. So compelling a private entity to repeatedly update a warrant canary is unlikely to be legal.
I am not a lawyer, but my understanding is that this is the stated legal opinion of the lawyers at the EFF.
Indeed, the bar is (supposed to be) very high for this kind of thing. But at the same time, it feels like exactly the kind of "gotcha" that rarely flies, because the situation was so deliberately constructed. But without any (public?) litigation, it's anyone's guess what may happen.
But if it were that simple, couldn’t a provider have a page (say /canary/<user>.txt) that makes a similar attestation on a per-user, minute-by-minute basis, thus directly disclosing not just that there was a warrant, but for whom? If the legal basis is protection against compelled speech, then logically that should be “ok”, as the government “can’t” compel you to update it. I realize this question is bordering on reductio ad absurdum, but not intentionally.
If you were manually updating said document, sure. The government can't compel your speech...
But if you had a script doing so, which is likely the case, you would have to modify it or direct it to stop updating, which would be a violation again.
> What would "tested in the US courts" look like? Would the federal government take an entity to court, demanding that they put their warrant canary back up, as a form of compelled speech?
The warrant canary exists because disclosing such warrants is illegal and carries some penalty. I imagine the federal government would bring a case to apply that penalty, and the courts would have to decide whether "removing a canary" === "illegally disclosing a secret subpoena." If so they can freely apply the penalty, and the penalty will carry legal precedent for being applicable to warrant canaries, and it will have a chilling effect on sites that wish to use one.
But the question here is: what counts as removing? Is inaction on your part considered removing?
If i smoked every day, and chose to stop smoking, but the act of me stopping smoking is a signal to some third party that is deemed illegal, can the gov't compel the continuation of smoking?
At the point that the US government prosecutes you, the cat is out of the bag and they don’t care about making you put the canary back. What they care about is punishing you, with jail time or a large fine or some other penalty, to discourage other people from attempting the same “loophole.”
The injunction against removing it wouldn't compel them to update it, though, right? In this one they promise to post a new one every month, so even if you're forbidden from removing the current one, unless they force you to create new updates, people would still notice when the current one "expires" without having been updated (and if that turns out to hold legal water, no reason you couldn't post new ones arbitrarily often -- daily, say).
> The injunction against removing it wouldn't compel them to update it, though, right?
In the US the government can take over parts of your facility, and that could mean installing whatever equipment they want or even setting up camp and running ongoing operations on location. They'd have no problem updating the canary of a company who refused to keep doing it themselves.
Did you not see Storj's canary? They used a cryptographic signature. [1] For the government to continue ongoing operations, they would either have to physically seize the relevant private keys, or compel release of them if they couldn't locate them. As far as I am aware, as long as the government fails to physically seize them, compelling is still not allowed under the Fifth Amendment. [2]
> The Fifth Amendment to the United States Constitution protects witnesses from being forced to incriminate themselves
Although, with Storj, the signatures didn't expire, so in the event that the government did setup operations, they could have just continued using one of the older signatures. (And only would have been unable to create new ones on request)
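A way to close that gap is for verifiers to treat a canary as valid only if the signature checks out AND the signed statement is recent. A minimal sketch (generic, assuming the statement format from the signing sketch earlier in the thread; Storj's actual canary format is not reproduced here):

```python
from datetime import date, timedelta
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def canary_is_valid(public_key: Ed25519PublicKey, statement: bytes,
                    signature: bytes, max_age_days: int = 7) -> bool:
    """A canary only counts if it is both genuine and fresh; otherwise an old,
    still-valid signature could simply be replayed after a seizure."""
    try:
        public_key.verify(signature, statement)
    except InvalidSignature:
        return False
    # Assumes the first line reads "As of YYYY-MM-DD, ..." as in the earlier sketch.
    first_clause = statement.decode().split(",")[0]
    signed_date = date.fromisoformat(first_clause.removeprefix("As of "))
    return date.today() - signed_date <= timedelta(days=max_age_days)
```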
I hadn't seen the old one, just the empty one they have published now. It was a lot more specific than other comments suggested!
> Although, with Storj, the signatures didn't expire, so in the event that the government did setup operations, they could have just continued using one of the older signatures. (And only would have been unable to create new ones on request)
That's my guess. They could either compel the company to turn over the keys or they could get them themselves from wherever they are stored/used.
This is why you have to protect your Hardware Security Module with a passphrase that is kept only in your brain. Compelling the disclosure of that passphrase is much more likely to violate the Fifth Amendment, especially if you set your passphrase to something like "I killed him and buried the body under my garage".
Of course such an important passphrase shouldn't only exist in the head of one person, and instead should be distributed between multiple members of the company (so perhaps the HSM could require N of M passphrases to unlock the master secret, using Shamir's Secret Sharing), which means creating a very complicated on-going criminal conspiracy, with new hires forced to further the crime in unique and creative ways, so that their individually-chosen passphrases can't be guessed.
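The N-of-M part can be done with Shamir's Secret Sharing. A minimal, illustrative implementation over a prime field follows (a real deployment would use a reviewed library, or the HSM's own quorum feature if it has one):

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a ~128-bit master secret

def split_secret(secret: int, threshold: int, total: int):
    """Return `total` shares; any `threshold` of them reconstruct `secret`."""
    assert 0 <= secret < PRIME and 1 <= threshold <= total
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, total + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# 3-of-5 example: any three officers together can unlock the HSM master secret,
# but no one or two of them can, and no single person can be compelled alone.
master = secrets.randbelow(PRIME)
shares = split_secret(master, threshold=3, total=5)
assert recover_secret(shares[:3]) == master
assert recover_secret([shares[0], shares[2], shares[4]]) == master
```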
It would be interesting to see a company implement this on the individual account level, rather than for the service as a whole. As it stands, while certainly interesting from a legal standpoint, I'm not sure this achieves much other than confirming that secret warrants are in use, and perhaps giving some vague indication of their frequency.
I believe they can't, for legal reasons: it would remove the secrecy of the warrant. The canary allows a company to imply they got searched without breaching that secrecy.
In this case, information got removed, which is not illegal because the information removed is not required by law.
It's not so much that the information got removed, but rather that they decided not to update the message. The absence of a new update is the signal that something happened.
So if they're allowed to reveal they got searched, why the need for the canary? And if it works on a company level, why does it not work on a user level?
I dislike the phrasing here. The proper design of a warrant canary is to actively publish new messages at some regular interval. No one should have actively pulled the canary; they just didn't publish a new message.
This distinction, from my understanding, is important for legal reasons.
What utility does that provide? In time the other provider will be issued a different warrant. The user threat model for every online service ought to include legal and illegal data seizure. Any imaginary nation which doesn't perform this legal function would simply be giving a mandate to adversarial actors to obtain the data by force and espionage.
I guess it's more about protesting and trying to fight the existence of secret warrants than any practical benefit for users? Which I think is fine if it's the best they can do.
1. The native Storj "uplink" command. Using this interface, a Go utility called uplink is run on the local client machine. It contacts a Satellite Node (the non-decentralized aspect of Storj) to retrieve a list of Storage Nodes that will accept the upload, then the file is split up and encrypted by the local uplink client code and sent to Storage Nodes recommended by the Satellite Node. In this case, the Satellite Node knows about the various pieces making up a file, the Storage Nodes have encrypted pieces of the file (but do not know how they relate to each other), and neither the Satellite Node nor the Storage Nodes could reconstruct the original file, even if working together, because the encryption key is stored on the local client machine only.
2. There is an S3 Gateway that gives Storj an S3-compatible interface. To use this, a Storj user would register a user account on the S3 Gateway, giving them an access key (login name) and secret key (password). When files are uploaded using the S3 Gateway, the access key and secret key are used to validate that the user has access to the specified bucket, but there is no client-side encryption happening. When data is received on the S3 Gateway, the Gateway uses the uplink technology to split and encrypt the file and send the pieces to Storage Nodes. When a file is retrieved using the S3 Gateway, the Gateway does the reverse and sends the original, unencrypted file back to the S3 client.
Storj customers using the Storj network with the native Storj uplink client should have nothing to worry about as long as their local Storj key isn't disclosed.
For Storj customers using the S3 Gateway, it seems to me that by using data stored on the S3 Gateway, authorities could reconstruct files that were uploaded.
For HashBackup (I'm the author), both interfaces are supported, though the S3 interface is recommended. Since HashBackup encrypts everything locally before doing any uploads, backups stored on Storj using either interface cannot be reconstructed without a copy of the HB backup key, which is only stored on the local client machine, is not part of the backup data, and is never uploaded anywhere.
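For reference, the pattern both of those paths rely on (encrypt on the client with a key that never leaves the machine, then upload only ciphertext) looks roughly like the sketch below. AES-GCM is used purely for illustration; it is not a claim about the exact cipher or key handling in uplink or HashBackup:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_upload(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt locally; only the nonce-prefixed ciphertext ever leaves the machine.

    Whoever ends up storing the result (storage nodes, a gateway, an S3 bucket)
    cannot reconstruct the plaintext without `key`, which stays on the client.
    """
    nonce = os.urandom(12)  # 96-bit nonce, unique per object
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_after_download(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)  # kept only on the local machine
blob = encrypt_for_upload(b"backup contents", key)
assert decrypt_after_download(blob, key) == b"backup contents"
```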
For our hosted S3 Gateway (called the Gateway MT), we have more details about how it works on this page: https://www.storj.io/disclosures, in the section titled "Encryption for Gateway MT"
The summary is that while the S3 gateway does, by protocol necessity, have temporary access to unencrypted data during transit, it does not keep the keys necessary to do this outside the context of a request.
Is there a particular reason to think any of this involved national security? The "standard" warrant canary is extremely broad, and would appear to cover any kind of warrant at all.
Do you think they would be in trouble if, say, 3 months in the future, they started saying they had not received any NEW warrants in the past 3, 4, 5, X months?
The argument goes that the government might be able to compel you not to speak, but they cannot compel you to speak against your wishes (and, in particular, to lie).
The precedent supports that to some degree, but it's really not clear.
Warrant canaries need to be restated and re-signed regularly, daily in this case. That is what makes them canaries, as opposed to something like a transparency report.
The government probably can NOT compel you to re-sign and update an expired canary, at least according to the lawyers at the EFF.
Exactly: they (the government) can ask you to continue to update, you decline, and they sue to force the issue. It's highly unlikely the courts would rule quickly enough to prevent at least one day going by without an update. That missed update is the signal something has happened.
I understand it's an open source project with p2p encryption. So for the government to snoop, they must modify the code and have users download and run the now-insecure application. So in this case the GitHub code itself is a warrant canary.
Can we please repeal the Patriot Act or Freedom Act or whatever it is called now? It’s been a one-way ticket down the toilet since this travesty was foisted on the stupid scared populace of 2001. I really frame my life in America in two sections, before and after the planes hit the Twin Towers. Not because of the tragedy which it was, but because that is when the America that I know changed and started becoming twisted. Optimism gone, privacy gone, thinking gone, replaced by fear and ignorance and lowest common denominator politics and TV.
The most important parts of the Patriot Act have already been repealed, during the Trump administration. Major parts of the bill weren't renewed back in 2020, IIRC.
Any politician running on a platform to repeal the Patriot act et al would have my vote, regardless of party and almost without regard to any other positions they hold.
When the act was first passed, most of us really wanted some kind of sunset provision to be included, at the very least a “once the wars in Afghanistan and Iraq came to a close” clause. You could already, even then, see federal agencies salivating at the carte blanche lack of oversight.
We had been driving this country on fear and ignorance for a long time before that. You might be recalling a brief period of post Cold War dotcom boom, but even that was marked by a deep divisiveness fomented by the 24 hour news cycle and a disdain for bipartisanship.
The TSA is the result of a brief moment of unity where we decided we feared someone else more than each other. Eliminating it might save money and simplify travel, but it won't touch the fear and ignorance.
Whatever optimism we had died during Vietnam, if not long before that.
The Patriot Act was a renamed law (several laws, actually) that the FBI/CIA/NSA had been trying to get passed for decades. It was a wish list for them that was rubber-stamped, shredding what few protections were left in the Constitution.
I do think that we crossed some lines post 9/11. It's likely that the US had always been violating the constitution by spying on American citizens, or were engaging in torture or secreting people off to black sites and indefinitely detaining them without trial, but after 9/11 it was all out in the open. It was sanctioned by the government. It became policy.
In the decades before 9/11 the US government had the decency to lie to the American public about it. They told us that America was so great because it would never do those kinds of things. Those types of activities were held up as examples of horrible atrocities communist nations subjected their citizens to.
Even as evidence came up from time to time showing that the US government didn't always live up to those ideals they continued to be expressed as what this country stood for. Post 9/11 that was no longer the case. This might be a more honest American government, but I can't help feeling like we've lost something.