One of the main problems I ran into while researching status page services (and I saw many people here with the same doubt) was where to deploy the status page itself. Although this market has a lot of players, even the biggest one had problems when the big S3 outage happened last year (see ref-1).
What about not depending on centralized infrastructure to deploy your status page, and keeping it always alive?
This project aims to deploy your status page on decentralized infrastructure, IPFS (see ref-2). After installing it, you will be running a status page service on top of a local IPFS node, so you'll be able to publish your status pages on IPFS while being part of the network.
I think this use case fits perfectly in a decentralized environment.
You can deploy this service on a VPS for 1/4 of the price you pay for your current status page service provider.
See an example of a status page deployed using D StatusPage:
IPFS nodes don't really rehost content for any substantial period of time (especially the gateway), so you're still stuck with some major problems:
1. You're still hosting off your IPFS node. This isn't worse, but it isn't better. You need to have a node and it needs to have connectivity.
2. IPNS resolution is glacial, and it's a known issue without a fix currently. So any gateway trying to resolve the current version of your IPFS-hosted status page via IPNS can end up waiting seconds (sometimes even tens of seconds) for name resolution, giving the impression of a downed status page.
Sadly, IPFS is more of a decentralized presentation and perhaps caching framework. It doesn't really achieve the goal of decentralized storage until there is some reliable way to persist the data on the network beyond immediate use. Pinning services exist, but most seem quite expensive to me.
1. They're hoping that FileCoin will resolve that issue. Akin to Storj and Siacoin, people will offer to host (aka pin) content on their IPFS gateway node in return for FileCoins, and the market will decide the price.
2. In the meantime, you can set up a script to update your DNS TXT record to point to the most recent IPFS hash. I've got a static site generator that does this upon the completion of a build.
It's really too bad that people still don't recognize FileCoin as a deeply flawed (if not outright scammy) endeavor. The only reason it isn't more widely decried is that the IPFS folks have a lot of goodwill in the community.
But it's a bad coin.
As for your #2, I don't use that solution because of DNS propagation times. Instead, I use an nginx proxy that rewrites incoming requests on a specific path to the site root on my IPFS node. When I rebuild, I regenerate that site config.
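For reference, the proxy rewrite described above might look something like the following nginx snippet. This is a sketch, not the commenter's actual config: the path, the root hash, and the port are placeholders (the go-ipfs daemon serves a local HTTP gateway on 127.0.0.1:8080 by default).

```nginx
# Placeholder root hash; a rebuild script would regenerate this block
# with the new hash and reload nginx.
location /status/ {
    rewrite ^/status/(.*)$ /ipfs/QmExampleRootHash/$1 break;
    proxy_pass http://127.0.0.1:8080;
}
```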
But tbh, I'm going to undo that. I get absolutely nothing for being part of IPFS and there's effectively no reason to host content there. It's a DHT and while that's cool, it's actually substantially less efficient than alternatives.
I've been enthusiastic about IPFS because it's a neat white paper, but after using it for months I've concluded it's a tech demo with no real direction to go other than a deeply flawed cryptocurrency.
If I understand correctly, you still need a gateway VPS to proxy the IPFS site out to the internet over HTTP. Isn't this a single point of failure, since your gateway server will be pegged? Or am I misunderstanding the implementation? You could have a VPS in multiple regions, but then we're back at square one.
There are at least two ways of fixing this. The first is for browsers to implement IPFS and reshare the website when you visit it.
The second would be for paulogr to include js-ipfs in the webpage, so that when users visit the page, they also reshare the website (if there are enough resources / not on battery / $other_criteria). Users would send the website's data between themselves and just verify the data's signature.
Sounds like IPFS needs more ubiquity before this is something you can rely on. I know IPFS support is coming to Firefox soon, but I suspect it's much further off for other browsers.
Not exactly, as far as I understand it; you still need to manually install an addon, it's just that the addon can now handle ipfs:// links when you click on them.
Anybody can gateway the entire IPFS network, so not really. The officially maintained gateway is just one gateway. If you don't want to rely on it, you can run your own.
I'm brand new to the IPFS concept, this looks really cool!
I was surprised to see that the page served via IPFS supports HTTPS, do you happen to know how the secret key is securely shared among nodes in the decentralized environment?
The HTTPS version is served through a proxy. This is how you make your status page visible to the HTTP world. The gotcha is that anyone on the IPFS network could have a copy of your page and serve it on your behalf.
> anyone on the IPFS network could have a copy of your page
Yes, but critically they can't modify it thanks to content addressing [0].
> and serve it on your behalf.
Yes, over IPFS. So anyone with an IPFS client will have a robust way to view your status page. If people want to view it via HTTPS, they hit ipfs.io, which is an IPFS/HTTP gateway. While it's possible for other people to run gateways, I believe ipfs.io is the main one. It could theoretically be a bottleneck.
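Content addressing is what makes this safe: the link itself is derived from a hash of the bytes, so any peer can verify a copy it receives. A minimal, purely illustrative sketch of the idea (real IPFS encodes addresses as multihash-based CIDs, not raw hex SHA-256):

```python
import hashlib

def address(content: bytes) -> str:
    # A content address is derived from the bytes themselves.
    # (Real IPFS uses a multihash-based CID, not raw hex SHA-256.)
    return hashlib.sha256(content).hexdigest()

def verify(content: bytes, addr: str) -> bool:
    # Any peer can recompute the hash and reject tampered copies.
    return address(content) == addr

page = b"<html>All systems operational</html>"
addr = address(page)
assert verify(page, addr)
# A modified copy no longer matches the address it was fetched under:
assert not verify(b"<html>Everything is down</html>", addr)
```

This is why untrusted peers can serve your page but not silently alter it: the address pins the content.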
IPFS has a so-called IPNS system that uses mutable hashes. So when you publish using this piece of software, you always have a permanent link to share with your users.
IPNS has a huge flaw, as I understand it - there is no way to prove, as a consumer of an IPNS name, that you’ve got the latest data. A malicious node in the network could present you with outdated information and you’d have no way to tell.
You can't prove that you got the absolute latest data (the same is true of DNS, by the way), since it's distributed. However, a malicious node in the network can't present you with outdated information or false information, as the IPNS record is signed with the peer's key.
If the IPNS record wasn't signed, it would indeed be a huge flaw as it wouldn't be tied to a key from a peer. That would defeat the entire purpose of IPNS. Luckily, we don't have that flaw in IPNS :)
> However, a malicious node in the network can't present you with outdated information or false information, as the IPNS record is signed with the key from the peer.
False information - no. Outdated information - why not? What you've described in this comment doesn't solve it. If I signed that the name N points at hash H1 yesterday, and then signed that the name N points at hash H2 today, why can a malicious node not simply keep telling people asking for N that it points at H1?
Do IPNS signatures expire in a similar way to DNSSEC signatures? (Some poking around github says "maybe".) If so, does the owner of the IPNS name have to regularly connect to the network to refresh them? This would suggest that IPNS records can very easily disappear with no way to reinstate them, even if other nodes are keeping the data they point to up. Is this documented somewhere? Can I set a much shorter expiration time (e.g. 5 minutes for quickly-updating information)?
IPNS records have an optional and user-configurable expiry time, but more importantly, they contain a sequence counter.
So unless an attacker can completely disconnect you from everybody else who's interested in a particular IPNS address (and in that case you're lost anyway), they can't hoodwink you into going back to an old version.
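The sequence-counter check described above can be sketched as follows. The record structure and names here are illustrative, not go-ipfs internals; the point is only that a resolver which remembers the highest sequence number it has seen cannot be rolled back by a replayed (but validly signed) older record:

```python
from dataclasses import dataclass

@dataclass
class IpnsRecord:
    value: str  # the /ipfs/... path the name currently points to
    seq: int    # monotonically increasing sequence counter

class Resolver:
    """Remembers the best record seen per name and refuses to go
    backwards, so replaying an old record cannot roll the client back."""
    def __init__(self):
        self.best = {}

    def accept(self, name: str, rec: IpnsRecord) -> bool:
        current = self.best.get(name)
        if current is not None and rec.seq <= current.seq:
            return False  # stale replay: ignore it
        self.best[name] = rec
        return True

r = Resolver()
assert r.accept("some-ipns-name", IpnsRecord("/ipfs/QmOld", seq=1))
assert r.accept("some-ipns-name", IpnsRecord("/ipfs/QmNew", seq=2))
# A malicious peer re-serving the old record is ignored:
assert not r.accept("some-ipns-name", IpnsRecord("/ipfs/QmOld", seq=1))
```

Note the caveat the thread raises: a client seeing the name for the first time has no baseline, which is exactly the "attacker controls all your connectivity" case discussed below.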
I see. So if they can disconnect you from everybody else (for example, if they control the internet connection you're connected to), you have no way of telling whether they're replaying IPNS records to you.
The traditional internet solves the problem of not being able to trust your internet connection (say, in a coffee shop) with public key infrastructure so that the most a rogue internet provider can do is DoS you (they can't get a certificate for google.com and TLS is protected against replay attacks), so this sounds like a downgrade in actual security.
I think it's a bit unfair to pretend that TLS replay would be the same as having a not entirely updated IPNS record. The threat models are very different.
The corresponding attack against IPNS would be if the attacker could make your perspective of the world go backwards, and that is prevented by the sequence number.
Indeed, but TLS doesn't have the problem of replay at all (we hope), so IPNS is by design susceptible to a threat that the traditional internet is not in the case that your connection is untrusted.
But since the content can include timing and date information, it's pretty straightforward to work around this. IPFS bridges to HTTP, but is fundamentally a very different protocol that gives very different guarantees. Application developers need to recognize and mitigate these.
Being able to recognise the protocol's limitations and guarantees depends on those limitations and guarantees, as well as best practices for developing applications using the protocol, being openly documented.
Indeed I am - can you find me a document which describes whether IPNS is or is not currently vulnerable to replay attacks, in which scenarios its assumptions are broken, and/or best practices for handling any shortcomings of IPNS?
This isn't even a well-formed question: IPNS clearly documents what it does do. It provides superior guarantees to cached HTTP.
What's more, calling a node fault in a distributed quorum a "replay attack" suggests that application logic is hosted on IPFS. Since it is not (and cannot be), and redundancy is ultimately the responsibility of the storing agent, this seems like at best a misapplication of the term and at worst a disingenuous scare attempt.
In either event, IPNS is still considered second tier, less complete than other "beta" parts of the protocol. It's not as experimental as pubflood, but less reliable than pinning.
It's all a moot point anyway, since IPNS is so slow as to be unusable in all but the least interesting cases.
There is no clear specification of IPNS in the specs repo, never mind documentation in any of the repos I browsed. So no, it doesn't document anything useful to a user or application developer interested in knowing what they have to watch out for, including malicious nodes presenting outdated information.
While replay attack might not be exactly the correct terminology (although I think it is), the result is that you cannot trust any information pointed to by an IPNS record to be up to date. There are fairly trivial attacks I can think of that revolve around this - for example, if a git repository is hosted on IPFS with an IPNS record linking to it, you might actually get an older version of the code with known security flaws. This just isn't something you think about if you were using a more traditionally hosted git repository hosted on a trusted developer's server (or someone they trust, etc).
> This just isn't something you think about if you were using a more traditionally hosted git repository hosted on a trusted developer's server (or someone they trust, etc).
You don't until github accidentally rolls back your content, which they have done.
Unlike the github scenario, particularly popular content will have more than one node relaying it, so you can form a consensus. It's also the case that only one value can be at consensus in the DHT at any given time, so the proper content is verifiable from many content sources.
Now, do the clients DO this? No. They don't.
But in general this is so far down the list of IPNS concerns as to read odd. They have bigger fixes to make besides concerns about highly visible attacks like this.
Not if the page itself contains the time it was last updated (IMO all status pages should have this, IPFS or not). Then you could still get outdated information, but you'd be able to tell.
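A hypothetical staleness check along those lines, assuming the page embeds an ISO-8601 last-updated timestamp (the field format and threshold are made up for illustration):

```python
from datetime import datetime, timedelta, timezone

def is_stale(last_updated_iso, max_age, now=None):
    # The page is assumed to embed a timestamp such as
    # "2018-03-01T12:00:00+00:00" (format chosen for this sketch).
    updated = datetime.fromisoformat(last_updated_iso)
    now = now or datetime.now(timezone.utc)
    return now - updated > max_age

# With a fixed "now" for demonstration:
now = datetime(2018, 3, 1, 13, 0, tzinfo=timezone.utc)
assert not is_stale("2018-03-01T12:30:00+00:00", timedelta(hours=1), now)
assert is_stale("2018-03-01T09:00:00+00:00", timedelta(hours=1), now)
```

This doesn't prevent a peer from serving an old copy, but it lets the reader (or a monitoring script) notice that the copy is old.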
Procedurally, since updating IPNS records is free, it's pretty straightforward to continuously deploy a tree.
What's obnoxious about that is that existing IPFS daemons aren't really good at managing multiple identities, so if you have multiple trees to maintain you're left writing custom software or using docker containers.
The result would be a link like /ipns/yourcompany.com: you'd have a TXT record pointing at your node's IPNS hash (or an IPFS content hash directly), and you'd update it either by updating what your node's IPNS hash points to, or by editing the DNS record to point to the latest IPFS content hash.
No, if you put an IPNS hash in the TXT record you just update your IPFS node.
If you put an IPFS hash in the TXT record then you need to update that every time. I personally do this (domain name jes.xxx) because it means you don't need to leave your IPFS node running constantly in order for your IPNS name to be resolvable.
The record is:
jes.xxx. 300 IN TXT "dnslink=Qme12vJPtMpeUwmG2NLG11Q47jy2unSonegNJxQb9QgYax"
And I have a small shell script to update it automatically.
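A sketch of how such a script might build the record, e.g. as input to a DNS provider's API (the helper function is hypothetical; the `dnslink=/ipfs/<cid>` form follows the DNSLink convention, while older records like the one above also worked with a bare hash):

```python
def dnslink_txt(domain: str, cid: str, ttl: int = 300) -> str:
    """Render a DNSLink TXT record in zone-file syntax.
    The DNSLink convention is a TXT value of "dnslink=/ipfs/<cid>"
    (or "dnslink=/ipns/<peer-id>" for a mutable pointer)."""
    return f'{domain}. {ttl} IN TXT "dnslink=/ipfs/{cid}"'

# After each build, a deploy script would run `ipfs add -r` to get the
# new root CID and push this record through the DNS provider's API.
record = dnslink_txt("jes.xxx", "QmExampleNewRootHash")  # placeholder CID
assert record == 'jes.xxx. 300 IN TXT "dnslink=/ipfs/QmExampleNewRootHash"'
```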
I believe that's what the previous poster means, you get a namespace that points to the canonical version of your resource, whatever that may be. Kind of like how HEAD is an alias for the latest SHA on a branch in git. But I don't know, this is just how I understood the previous comment.
https://gw1.dstatuspage.net/ipfs/QmePTzsSVae8BK8antLHfV2xWfE...
http://gw2.dstatuspage.net/ipfs/QmePTzsSVae8BK8antLHfV2xWfEf...
What are your thoughts?
https://www.dstatuspage.net
https://github.com/paulogr/dstatuspage
This software is still in alpha state with basic status page service functionalities, feel free to ask for a feature or address any issue you have:
https://github.com/paulogr/dstatuspage/issues
- ref-1: https://blog.statuspage.io/a-birds-eye-view-of-the-amazon-s3...
- ref-2: https://ipfs.io
The software will be distributed for free and open source under the MIT license.