
I find this pretty exciting for (hopefully; someday) wider adoption of alternative protocols:

Starting with Firefox 59, several protocols that support decentralized architectures are available for use by extensions. The white-listed protocols are:

+ Dat Project (dat://)

+ IPFS (dweb:// ipfs:// ipns://)

+ Secure Scuttlebutt (ssb://)


Hey! I made that patch! :-D

So, basically, the explanation is simple: there is a whitelist of protocols that your WebExtension can take over.

If the protocol you want to control is not on that whitelist, such as a hypothetical "catgifs:" protocol, you need to prefix it, like "web+catgifs" or "ext+catgifs", depending on whether it will be handled by the add-on itself or by redirection to another web page. This makes it inconvenient for a lot of decentralization protocols, because many other clients are already using URLs such as "ssb:" and "dat:" (e.g., check out Beaker browser). In essence, this patch lets us implement many cool new decentralization features as add-ons now that we can take over those protocols. You could be in Firefox browsing the normal web and suddenly see a "dat:" link; normally you'd need to switch to a Dat-enabled client, but now an add-on can display that content in the user agent you're already using.
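
For anyone curious what this looks like in practice, handlers are declared via the "protocol_handlers" key in manifest.json. A rough sketch (the "catgifs" scheme and the example.org URLs are made up; the template can also point at a page bundled with the add-on, as mentioned elsewhere in the thread):

  "protocol_handlers": [
    { "protocol": "ssb",
      "name": "Scuttlebutt viewer",
      "uriTemplate": "https://example.org/view?uri=%s" },
    { "protocol": "ext+catgifs",
      "name": "Cat GIFs",
      "uriTemplate": "https://example.org/catgifs?uri=%s" }
  ]

The first entry only works because "ssb" is on the whitelist now; the second has to keep its "ext+" prefix.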

Still, there is another feature we need before we can really implement decentralization protocols as pure WebExtensions: TCP and UDP APIs like the ones we had in Firefox OS. As an example, Scuttlebutt uses UDP to find peers on the LAN and its own muxrpc protocol over TCP to exchange data; Dat also uses UDP/TCP rather than HTTP.
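
To give an idea of what I mean, this is roughly what the old Firefox OS TCP socket API looked like (written from memory, so details may be off; the handshake and parsing helpers are placeholders):

  // roughly the old Firefox OS TCP API; nothing like this exists for
  // WebExtensions yet, which is the gap I'd like to discuss
  var socket = navigator.mozTCPSocket.open("192.168.1.42", 8008); // a LAN peer
  socket.onopen  = function ()      { socket.send(handshakePayload); }; // placeholder
  socket.ondata  = function (event) { processBytes(event.data); };      // placeholder
  socket.onerror = function (event) { console.error(event); };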

I have been building little experiments in Firefox for interfacing with Scuttlebutt, which can be seen at:

https://viewer.scuttlebot.io/%25csKtp9VmxTjJoKy17O7GA6%2F3S8...

https://viewer.scuttlebot.io/%25uBev5w8m8iZGVbQDo9fpr%2BCXLB...

I hope to start a conversation in the add-ons list about TCP and UDP APIs for WebExtensions soon :-)


Decentralization and safer protocols are needed. Plain HTTP and even HTTPS are really not the best one could come up with. There are better alternatives, but using them is not always easy. So, thanks a lot for your work on making that easier!


Thanks a lot for the kind words! :-) Talking about safety and integrity, have you seen how Scuttlebutt works? Check the secret handshake part of:

https://ssbc.github.io/scuttlebutt-protocol-guide/#handshake

In essence, your connection to a given peer is encrypted in a way that only the two of you have the keys. Even if someone breaks one of those keys (say, the peer turns out to be a bad actor), it would not compromise your connections with other peers, since those use different keys. It is quite an awesome protocol.


It's nice but the fact that you can never delete a message in your feed means that it doesn't really work as a social media protocol. Some people see that as a feature since it is theoretically uncensorable, but that's not how humans like to interact.


I have been using Patchwork[1] as my main social network client for Scuttlebutt. In my experience, the fact that messages are not removable makes me more careful when writing and has led to much better and more meaningful interactions on the network.

Also remember that a message being in the feed doesn't mean it is displayed. Scuttlebutt is quite flexible: there are clients that support "chess messages" so their users can play chess; Patchwork doesn't support those messages, so I don't even see them. There is git-ssb[2], which lets people host and contribute to code directly inside the feed; not all clients show those messages, but they are all there.

A new message type could be added for flagging a message id as deleted, and clients could honor it and stop displaying that message. It would still be in the feed, much like version control systems still give us access to deleted files (as long as no one rewrites history).
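
Just to illustrate (this message type does not exist today, it is purely hypothetical), the content of such a flag message could look something like:

  {
    "type": "delete-request",
    "target": "%<id of the message to hide>.sha256",
    "reason": "author asked for removal"
  }

Clients that understand it would stop rendering the target message; clients that don't would simply ignore it, the same way Patchwork ignores chess messages.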

I enjoy how permanent things are there, because as a side effect it makes people care more about the ecosystem and culture: what you put out there is permanent. Check out the essay "the future will be technical"[3] about the culture on Scuttlebutt; you'll see it is quite different from other social networks. But I agree with you: your experience may vary, and what I consider an advantage others may see as a reason not to use it.

[1]: https://github.com/ssbc/patchwork/

[2]: https://github.com/noffle/git-ssb-intro

[3]: https://coolguy.website/writing/the-future-will-be-technical...


Erasure of data, on demand, is a requirement of the GDPR[0], which becomes enforceable in Europe on 25 May 2018.

Note that erasure of the data itself is required, not just removing it from display.

IANAL, but the above appears to be contrary to GDPR.

[0] https://en.wikipedia.org/wiki/General_Data_Protection_Regula...


I am not a lawyer either, but doing a quick read of the scope section, I found this in the first sentence:

  "The regulation applies if the data controller (an organization that collects data from EU residents) or processor (an organization that processes data on behalf of data controller e.g. cloud service providers) or the data subject (person) is based in the EU."
And this might not apply, as there is no data controller, organization, or company. Your data is on your computer, and it is replicated to friends and friends of friends. There is no cloud or service involved; it goes from one machine to the other. Someone with real knowledge of legal matters and p2p should chime in; I am also a bit lost regarding this.


I contacted an e-chum who is a lawyer specialising in the field of IT, IP and media law [0]. He pointed me back to Article 17 of the Regulation, Right to erasure ('right to be forgotten'), which is contained in [1].

This states that:

"The data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay and the controller shall have the obligation to erase personal data without undue delay ... where .. the data subject withdraws consent on which the processing is based..."

Paul also expressed the view that: "It’s certainly been written in a way that should require systems to be created to allow for the deletion of personal data, though!"

That intent is key here. Basically: if someone asks you to remove their data and you refuse (or fudge, etc.), then don't be surprised if the EU comes knocking.

[0] https://www.uea.ac.uk/law/people/profile/paul-bernal

[1] http://data.consilium.europa.eu/doc/document/ST-5419-2016-IN...


Surely that means that the people who replicate it from you ("your friends" and "your friends of friends") would be required to delete the (personal) data?


So I could require anyone inside the EU to delete any mail they received from me? (I'm in the EU)


Yeah, email was the first thing I thought of as well. Surely that doesn't apply.


Thank you very much for the contribution. The decentralized web projects sound really exciting and it's awesome to see support from Mozilla.

However, I find the concept of the scheme whitelist pretty strange in the context of WebExtensions.

Of course it makes sense that http:, https:, data:, blob:, etc should be off-limits for extensions. But on the other hand, I'd think that the primary use-case of registering protocol handlers in extensions is to handle existing links - something that the ext+/web+ rule seems to be particularly designed to disallow.

Apparently, the actual handler whitelist for WebExtensions can be found at [1]. It looks pretty... arbitrary?

For example, it's amazing to see that magnet:, irc:, ssh: and gopher: are on there, but then why not ftp:, ws: or even steam:? There could be some useful extensions intercepting those as well.

Likewise, while the inclusion of Scuttlebutt, IPFS, etc. is very good news, what makes them eligible now that didn't earlier?

Generally, I'd like to know by what process that whitelist is managed and what the criteria are to get on it.

(I understand it's derived from the official HTML5 whitelist [2]. To be honest, that one looks even more arbitrary - but of course has different security considerations, as it defines an API for the open web. The two lists also seem to have some differences, as e.g. gopher: isn't on the HTML5 whitelist while openpgp4fpr: is not on the extensions list.)

[1] https://developer.mozilla.org/en-US/Add-ons/WebExtensions/ma...

[2] https://html.spec.whatwg.org/multipage/system-state.html#cus...


Is there a chance we could get MAFF: (and MHT:) protocols as well and let our WebExtension handle those (single-file archived web pages, if anyone wondered)?

This would allow us to (again) associate the Firefox browser with, for example, .maf files and have them displayed in the browser via the extension. (I'm sure there must be thousands of archived pages out there that are currently a hassle to view - it would be great to get support for those.)


Would it be possible to implement ssh:// ?

I know it's theoretically possible to make some browsers respect ssh:// urls. But to actually do so requires a lot of work.

Presumably this is to avoid some security problem that I'm not immediately aware of. Or maybe just plain caution?


Theoretically? Yes. (You'd need to add it to the whitelist, but you could do that.)

Practically, though, you still have the problem of actually making the connection; there's no API for that yet.


Ah, OK. That's probably the sticking point.

I was thinking it could just invoke some external program.


That sounds great! Could you give us a pointer on how to add a protocol like freenet: to Firefox (or ext+freenet:, as long as it's not in the whitelist)?


Please excuse the off-topic question, but this one has been bugging me for many years: will there ever be some kind of session manager for Firefox Mobile that can save and recall your browsing session (a group of tabs) to/from a local file, similar to how OneTab or MySessions work?


Yeah, this is pretty great. AFAIK it's limited to redirecting to a handler website, but that's a big step forward for using these protocols in links and the like.

We're still hoping to get full protocol-handling with an API similar to https://github.com/electron/electron/blob/v1.8.2-beta.3/docs...
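
For reference, the Electron-style API lets the handler answer the request directly instead of redirecting to a website; roughly like this (fetchOverDat is a placeholder for the actual protocol code):

  const { app, protocol } = require("electron");

  app.on("ready", () => {
    protocol.registerStringProtocol("dat", (request, callback) => {
      // answer the request with generated content instead of a redirect
      fetchOverDat(request.url).then((body) => {
        callback({ mimeType: "text/html", data: body });
      });
    });
  });

Something equivalent in WebExtensions would let the add-on itself produce the response for a dat:// or ipfs:// page.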


Maybe I'm reading this wrong, but it sounds like they've made it so you can develop a WebExtension that handles dat:// or ipfs:// URLs. They didn't make ipfs:// URLs redirect to http://ipfs.io/ipfs/$hash.


Sort of. I think this is based on the `registerProtocolHandler()` mechanism, and if so, you can redirect it to an internal page in your extension. See https://addons.mozilla.org/en-US/firefox/addon/overbitewx/ (disclaimer: authored by yours truly).

The more pressing concern is how you would speak these protocols, because there's still no direct socket support. In my case, the add-on just acts as a history and redirect shim, and my server does the actual access and translation. I'm actually thinking of cobbling together a native-messaging-based extension as a stopgap, since obviously this is suboptimal.


Hi, I did just like you said: I use the Native Messaging API to communicate with a local companion app over standard input/output, and that app handles the socket communication. That is how I am doing it for the Scuttlebutt experiments I mentioned above (with some URLs, if you want to check them out). I hope to start a conversation soon on the add-ons list about the need for UDP and TCP APIs in WebExtensions. It is suboptimal, but it works today...
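
The extension side of that is quite small; something along these lines (the "ssb_helper" name and the message format are just what I picked for the experiment, nothing standard):

  // connect to the local companion app registered via a native manifest
  const port = browser.runtime.connectNative("ssb_helper");

  // replies from the helper arrive as JSON over its stdout
  port.onMessage.addListener((msg) => {
    renderMessage(msg); // placeholder for whatever the UI does with it
  });

  // requests go to the helper as JSON over its stdin
  port.postMessage({ cmd: "getFeed", limit: 20 });

The helper process then opens the UDP/TCP sockets and speaks muxrpc, which the extension cannot do on its own yet.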


Is this bug 1247628?


Oh! I was not aware of that bug! Thanks a lot for pointing it out to me. I will add myself to the CC list there and join the discussion.


They made that before the protocol was on the whitelist; now they could ship a native app with a bundled WebExtension that handles all of it without using that redirect.


Why does it require a whitelist? If I make my own, do I have to contact Mozilla or is the user prompted when first access is attempted?


You can make your own, but it will need a prefix like "web+catgifs:" or "ext+catgifs:", depending on how the custom URL is handled (redirect vs. WebExtension). To be able to use custom URLs without the prefix, you need to be on the whitelist. That's why that little patch is cool: it lets us build add-ons that don't require a prefix for the custom protocols already used by popular decentralized-web projects.


I think the whitelist on what you can register as a protocol handler is a security measure. E.g., if PayPal uses "paypal:" links which are handled by an OS-native application, you can't write an extension that hijacks those links.


So how do I make an extension that supports Firefox 57 or later and handles mycustomprotocol:// without a prefix? Do I have to Rowhammer my way through Firefox's code? I do not see how it is within Mozilla's rights to restrict what a user can do with the user's own browser.


What the hell does the prefix have to do with user rights? This is a matter of convenience, not capability.

You want arbitrary protocols? The prefix is available to you. You want blessed protocols? The non-prefixed form is available to you.

There's no restriction in place. And anyway, since it's open source, you can bless your own protocol with a custom build. But just because you want it blessed doesn't mean it should be forced onto everyone else.


I follow Stallman's school of thought that software should never restrict what the user can do.

A user should not be protected from themselves. If a user wants to install an add-on to intercept paypal:// requests, for whatever reason, that should be their choice, and it should not require them to make custom changes to the system.


You have the source code and can create a build that works like that. The current approach, with a whitelist, makes it a lot harder for malware to hijack a protocol, and that is, in my opinion, a good thing.

Remember that Firefox is used by millions, and a whitelist is a measure that adds a bit more safety for the millions out there who are not as tech-savvy as you but still want to browse the web.


For somebody who blindly follows Stallman's school of thought no matter what the consequences, I find it odd that your password isn't the same as your login name.

https://www.gnu.org/philosophy/stallman-kth.html

>But gradually things got worse and worse, it's just the nature of the way the system had been constructed forced people to demand more and more security. Until eventually I was forced to stop using the machine, because I refused to have a password that was secret. Ever since passwords first appeared at the MIT-AI lab I had come to the conclusion that to stand up for my belief, to follow my belief that there should be no passwords, I should always make sure to have a password that is as obvious as possible and I should tell everyone what it is. Because I don't believe that it's really desirable to have security on a computer, I shouldn't be willing to help uphold the security regime. On the systems that permit it I use the “empty password”, and on systems where that isn't allowed, or where that means you can't log in at all from other places, things like that, I use my login name as my password. It's about as obvious as you can get. And when people point out that this way people might be able to log in as me, i say “yes that's the idea, somebody might have a need to get some data from this machine. I want to make sure that they aren't screwed by security”.


Back then, Stallman worked in a secure space with trusted people, where physical locks stood between him and anyone wanting to wreak havoc. Not using a password was a way of not restricting well-meaning people.

Blindly asking people to apply that to today's computers dodges the need to distinguish between very different situations.

Your message is not just an ad hominem ("blindly"), but also an attempt to play someone for an idiot.


Actually, it wasn't quite as secure a physical or virtual space as you imagine.

It was quite easy for anyone to get into the 9th Floor at 545 Tech Square simply by thumping on the door in the elevator lobby, because anyone on a Lisp Machine within earshot would just press Terminal-D to buzz open the door without getting up from their chair [1]. (And they could summon an elevator by pressing Terminal-E.) And many students and non-laboratory people (referred to as "random turists") knew the series of digits to tell a locksmith to make a key to those locks, or somebody who could make them a master key in the robot machine shop. And even if you didn't have your own key, there was always the MIT Lockpicking Guide. [2]

[1] http://dspace.mit.edu/bitstream/handle/1721.1/41180/AI_WP_23... (figure 5-1 and 5-2)

[2] https://www.lysator.liu.se/mit-guide/MITLockGuide.pdf

The physical locks on those doors certainly never stopped me from showing up unannounced, gaining physical access, and wreaking havoc by playing around with the Lisp Machines [3] and PDP-10's [4], spying on other people's sessions with the Knight TVs, printing out and Velobinding reams of documents on the Dover laser printer, and sleeping on the beanbag chair in the "Lounge Lizard Lispmacho" office.

[3] http://donhopkins.com/home/catalog/images/cadr.jpg

[4] http://donhopkins.com/home/catalog/images/mc-console.jpg

And then there was this thing called the ARPANET that you could use to log in without even knocking on any doors, picking any locks, or being physically present in Cambridge Massachusetts. Breaking into the ARPANET wasn't as difficult as depicted in The Americans "ARPANET" episode [5] (S02E07) where the KGB agents had to actually break into campus and murder somebody to gain access.

[5] https://www.youtube.com/watch?v=hVth6T3gMa0

The dial-up TIPs themselves actually had no passwords, and BBN would mail you a free copy of the "Users Guide to the Terminal IMP" [6] if you asked nicely -- including "APPENDIX A: HOST ADDRESSES". Then all you had to know was a phone number (301 948 3850 for example) and what to type (E, @O 134, :LOGIN RMS, RMS), and you were in.

[6] http://www.walden-family.com/dave/archive/bbn-tip-man.txt

You didn't even have to know RMS's password. If you tried to log in to MIT-AI with an unknown user name, it would ask you if you wanted to apply for an account, what your name is, and why you wanted to use the system, etc. If you answered sensibly (like "learning LISP"), you'd have your very own free off-hours "tourist" account [7] within days.

[7] http://www.art.net/~hopkins/Don/text/tourist-policy.html

Steven Levy wrote about the Hacker Ethic and MIT-AI Lab culture in his classic book, "Hackers: Heroes of the Computer Revolution" [8].

[8] https://murdercube.com/files/Computers/Heroes%20of%20the%20C...

The basic acquisition of every lock hacker was a master key. The proper master key would unlock the doors of a building, or a floor of a building. Even better than a master key was a grand-master key, sort of a master master-key; one of those babies could open perhaps two thirds of the doors on campus. Just like phone hacking, lock hacking required persistence and patience. So the hackers would go on late-night excursions, unscrewing and removing locks on doors. Then they would carefully dismantle the locks. Most locks could be opened by several different key combinations; so the hackers would take apart several locks in the same hallway to ascertain which combination they accepted in common. Then they would go about trying to make a key shaped in that particular combination.

It might be that the master key had to be made from special "blanks" unavailable to the general public. (This is often the case with high-security master keys, such as those used in defense work). This did not stop the hackers, because several of them had taken correspondence courses to qualify for locksmith certification; they were officially allowed to buy those restricted blank keys. Some keys were so high-security that even licensed locksmiths could not buy blanks for them; to duplicate those, the hackers would make midnight calls to the machine shop, a corner work space on the ninth floor where a skilled metal craftsman named Bill Bennett worked by day on such material as robot arms. Working from scratch, several hackers made their own blanks in the machine shop.

The master key was more than a means to an end; it was a symbol of the hacker love of free access. At one point, the TMRC hackers even considered sending an MIT master key to every incoming freshman as a recruitment enticement. The master key was a magic sword to wave away evil. Evil, of course, was a locked door. Even if no tools were behind locked doors, the locks symbolized the power of bureaucracy, a power that would eventually be used to prevent full implementation of the Hacker Ethic. Bureaucracies were always threatened by people who wanted to know how things worked. Bureaucrats knew their survival depended on keeping people in ignorance, by using artificial means like locks to keep people under control. So when an administrator upped the ante in this war by installing a new lock, or purchasing a Class Two safe (government-certified for classified material), the hackers would immediately work to crack the lock, open the safe. In the latter case, they went to a super-ultra-techno surplus yard in Taunton, found a similar Class Two safe, took it back to the ninth floor, and opened it up with acetylene torches to find out how the locks and tumblers worked.

With all this lock hacking, the AI lab was an administrator's nightmare. Russ Noftsker knew; he was the administrator. He had arrived at Tech Square in 1965 with an engineering degree from the University of Mexico, an interest in artificial intelligence, and a friend who worked at Project MAC. He met Minsky, whose prime grad student-administrator, Dan Edwards, had just left the lab. Minsky, notoriously uninterested in administration, needed someone to handle the paperwork of the AI lab, which was eventually to split from Project MAC into a separate entity with its own government funding. So Marvin hired Noftsker, who in turn officially hired Greenblatt, Nelson, and Gosper as full-time hackers. Somehow, Noftsker had to keep this electronic circus in line with the values and policy of the Institute.

[...]

They went wherever they wanted, entering offices by traveling in the crawl space created by the low-hanging artificial ceiling, removing a ceiling tile, and dropping into their destinations - commandos with pencil-pals in their shirt pockets. One hacker hurt his back one night when the ceiling collapsed and he fell into Minsky's office. But more often, the only evidence Noftsker would find was the occasional footprint on his wall. And, of course, sometimes he would enter his locked office and discover a hacker dozing on the sofa.


Regardless of what RMS thinks, users should be protected from code downloaded from the Internet. There have been too many problems with malicious extensions not to do this.


I hate this; it's my system, let me do what I want.

Besides, there's already a browser for the mainstream audience, and it's called Chrome.


Restricting what users can do for their own "safety" is what gives us walled gardens such as iOS.

I use Firefox precisely because I can (or, rather, could) override basically all browser functionality with addons, without having to set up a custom build server and apt repo myself.


Where by "safety" you mean "the most secure mainstream computing platform currently available, setting a standard that other platforms approach only asymptotically and with significant end-user effort".

Obviously, I can't settle this debate, such as it is, in a single comment (or likely at all, with an acolyte!). But someone should get the message into the tech-message-board bubble that, in the modern software security world, the controversial argument to make would be that iOS isn't the most secure OS out of the box.


The safest computing platform is an airgapped 386; that doesn't make it useful.

In the same way, iOS may be safe, but entirely useless for many people’s use cases.

Additionally, safety is entirely orthogonal to restricting choice. You can have a walled garden that is insecure, e.g. the Amazon Kindles, or you can have an open system that is secure, e.g. OpenBSD.


AviD's rule of security: Security at the expense of usability comes at the expense of security.


Power users can hack and compile their own build. But you just want to complain on the internet.

So I don't see why the rest of us should have to endure your specific, less secure world view on software.

For example, your pitch certainly doesn't entice me, yet you act like its necessity is self-evident. Maybe it's just time for you to find a browser that suits you instead of replying to every post that disagrees with you.


The software is less restrictive than the last version.


The software is more restrictive than it was several versions ago.

I’ll also refer to AviD's rule of security: Security at the expense of usability comes at the expense of security.


Simple. You download the source, patch it, and build your own version.


The protocol handler support is restricted to a whitelist to prevent an addon from taking over, say, "http://" (or other protocols which could be used to insinuate unsafe content).


It still seems safe, though, to set up rules where domain ownership could automatically grant ownership of a protocol, e.g. facebook.com:// or just facebook:// if the .com were allowed to be dropped.


How do they handle collisions between apps that can manage the whitelisted protocols, then?


When you navigate to a custom-protocol URL, there is a popup asking how to handle it; it shows all the matching add-ons in a list (and maybe native apps too). You can select your chosen one and tell it not to ask again.


Honest question: who's using ssb://?


Secure Scuttlebutt, check it out at http://scuttlebutt.nz


My question was actually: who's using Secure Scuttlebutt?


I'm using it daily. So are about 2,300 other people, but that's just how many my own local installation holds; there may be more that I'm not aware of.

