Hacker News
Peer-to-peer social networking with Rotonde and Beaker (louis.center)
181 points by jimpick on Oct 13, 2017 | 71 comments



This is cool. It's using a pattern called the 'self mutating site'.

During setup, you git-clone the application code and put it into a dat site you create. The app code publishes by modifying files within its own dat. When you visit somebody's profile, you're actually visiting a new site, so there's no uniform software layer on the social network. A lot like indie Web software.

There's a new 'application' pattern we're working on for the upcoming 0.8 release. In that pattern, the app dat and the data dat will be separated. This won't replace self-mutating sites, but it has some benefits as an alternative: apps will be able to update independently of the data, the browser will provide install and sign-in flows, and apps can provide persistent UIs.

There are a lot of other ideas we're kicking around (intents) and some kind-of-out-there experiments (the app scheme [1]), so we'll need lots of feedback, especially from folks like @neauoire. A pre-release of 0.8 should happen by the new year.

1. https://github.com/beakerbrowser/beaker/wiki/App-Scheme

EDIT: updated for clarity EDIT EDIT: and brevity


It's a neat idea. I don't use Facebook, but I'd be open to a sort of social network based around the idea of a local profile that each individual sets and controls. Why is it a single big brick of a service that everyone must connect to and publish through?

Why not make profiles something modular that can be easily published and shared online, but also in a 'mesh network' setting where you publish to and receive from devices in your general vicinity?

The profile should be a bundle of data about the individual, published by the individual. More of a standard than a service. There could be third-party services to scrape and store what people publish. There could be third-party services to produce nice cookie-cutter profiles, like resume templates. Services for hosting, collating, searching, and viewing profiles. And people could have loads of different profiles for different parts of their lives, and separate them as much or as little as they feel like. But at the core of things, the profile would belong to the user.


Check out Scuttlebutt (https://www.scuttlebutt.nz/concepts/gossip.html). It really decouples the content from the device, and there are some services out there that will take all your content and help broadcast it, but the real flow of information is from friend to friend.


Really awesome demo - the Beaker/Dat/Hypercore ecosystem is amazing. Real-time p2p sync of anything you can imagine.

I was able to build a 'Slack clone' as a one-day hack, using Hypercore + Electron: https://github.com/lachenmayer/p2p-slack-clone-poc

I think we'll be seeing a lot of these kinds of projects soon :)


Hypercore has realtime sync but is limited to log-structured data. Dat can do diffs on data, but it is meant for large datasets, not realtime changes. It sounds like it would be difficult (or not scalable) to build something like a wiki, social-network features, Trello, or other apps (i.e., anything with shared mutable state). How would you do this?

The reason I ask is that Mafintosh, Juan at IPFS, Dominic with Scuttlebutt, Feross at WebTorrent, Substack, I, and others all met back in 2014 for our different P2P projects. All of us had slightly different approaches. Juan and others seemed mostly interested in hash addressing, which I think is great but doesn't solve the end problem of data sync. It seems like Dat deals with that fairly well, but not for highly mutable data (versus large scientific files). Meanwhile, we ( https://github.com/amark/gun ) tackled that problem first, because it seems like CRDTs are the most relevant for killing traditional centralized Facebook/Twitter/Reddit/gDocs-like apps, and hashing is more applicable for killing centralized YouTube/imgur-like apps. Both are necessary, but it certainly seems harder/easier based on which underlying P2P tools you use.

You are one of the few people actually jumping in and building end apps (thanks for sharing your chat app!), and we need more people doing that (not just all of us who are trying to rebuild the underlying architecture). So I would be curious to hear your experience comparing the different use cases behind dat/IPFS/gun/WebTorrent/etc. It would make for an interesting and informative comparison article. Thoughts?


The real-time-syncing, immutable, append-only log that hypercore provides can also be accessed randomly, allowing for lots of cool parallel/distributed log-processing architectures (Kappa architecture), similar to Kafka (which is also based on immutable, append-only logs). We focused on syncing a filesystem first because we had a strong use case there, but you could totally build a CRDT on top of an append-only log.
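A minimal sketch of the "view on top of a log" idea in plain Node (no hypercore; the log entries here are hypothetical): fold the append-only log into a materialized key/value view, with a last-writer-wins rule by sequence number.

```javascript
// Hypothetical log entries; a real hypercore feed would supply these.
const log = [
  { seq: 0, key: 'title', value: 'hello' },
  { seq: 2, key: 'body',  value: 'edited' },
  { seq: 1, key: 'body',  value: 'first draft' },
];

// Replaying the whole log rebuilds the current state from scratch (the
// "back-scan" cost discussed in this thread). Caching the view and
// applying only new entries amortizes that cost.
function materialize(entries, view = {}) {
  for (const { key, value } of [...entries].sort((a, b) => a.seq - b.seq)) {
    view[key] = value; // last writer (highest seq) wins
  }
  return view;
}

const view = materialize(log);
// view is { title: 'hello', body: 'edited' }
```

This is only the simplest possible register-per-key CRDT; concurrent writers across multiple logs would need a richer merge rule.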


Hey Max, long time no chat since Hash the Planet! Was it you or Mathias that demoed running an Operating System while streaming it live from remote peers? Those were some insanely sick demos.

Is random access something you guys have added since then? Or could you clarify how it reduces the overhead for apps that have shared mutable state (even if it is composed of immutable streams)? Don't you still have to back-scan through the log to recompose the current view? That wouldn't scale for read performance. Or is there an alternative approach?

This was one of the main things I was discussing with Dominic Tarr about Scuttlebutt back then. We had done a lot of event sourcing (immutable logs) at my previous startup, but had problems with compaction and with generating views/state from those streams, which is what led me to the CRDT approach as the base, not as the thing on top. I know Tim Caswell/Creationix is using Dat for one of the products he is building with @coolaj86 .

But that was four years ago now that Dominic and I were talking about those problems. Does random access solve that? I would love to read about it; shoot me some links! Also, Dominic and I are starting to have Web Crypto meetings with some folks at MIT (anvil/trust, ex-Riak/Basho), and they are in town (the Bay Area) this week. If you are around, we should all get together again to discuss identity, security, crypto, and P2P!


For those with an Android device who want to browse the Dat-based P2P Web, there is an experimental Android browser called Bunsen Browser in development. And we could use your help :)

https://github.com/bunsenbrowser/bunsen


What kind of help is needed?

Also is there any chance of switching to Janea Systems' NodeJS builds so that we get 32-bit and x86 Android support?

http://www.janeasystems.com/blog/announcing-node-js-mobile-a...


I'm happy to see more P2P projects, and if JavaScript is the conduit that gets people to invest time and money, then I'm all for it. Software doesn't get investment because it is good; software gets good when it gets investment. And I don't mean just money. The greatest regret I have from building Firestr (http://firestr.com) is not using Electron. I'm only half joking.


I think JS is critical infrastructure for this, because the only way this stuff takes off is if users flock to it. And so the UX can't regress too much from what they're used to, and the modern web is what they're used to.

In time, I think that a lot of the backend plumbing around Dat and decentralized-web stuff will not necessarily be JS, and if there are native mobile apps (if Google and Apple allow for them), those will be written in the native languages. But the front end is almost certainly going to be JS and the modern web stack.


I'm not a big fan of the JS ecosystem or bloated browsers, but honestly, this is a perfect fit for JS. Distribute a static app which is a JS payload. The page knows how to join the distributed network and send/receive messages, which makes it trivial to build things like a social network on top.

Just make sure the protocol doesn't depend on weirdness from JS land to allow other clients in and you're good to go.


This works on top of Beaker Browser, and since BB is not available for Windows, neither is Rotonde. Since BB is a main platform for dat browsing, I am really interested in how secure it is. Poking around the BB website, I could not find whether it is a fork of WebKit or Blink or something else. Another question: is it possible to enable the dat protocol via an extension for Chrome/Firefox?


I believe it’s built with Electron.

The developers do plan to support Windows, but there are some missing dependencies that need to be built first.


I'm wondering that too, about implementing DAT browsing as an extension!


SAFE Browser [0] is a lightly modified fork of the Beaker Browser for the SAFE Network [1]. It supports the same goals as the standard Beaker Browser but uses SafeCoin to incentivize app creation and hosting. It's also cool in that MaidSafe builds their core libraries with Rust.

[0] https://github.com/maidsafe/safe_browser

[1] https://safenetwork.wiki/en/FAQ


yay! the many nascent and forthcoming deX networks and protocols are gonna make the internet fun again


How is this Rotonde different from the RSS Reader client that Paul Frazee wrote? https://hashbase.io/pfrazee/rss-reader

Also, is it really necessary to git-clone the Rotonde repository? Why not go to a starter Dat URL and use Beaker to fork your own copy?


Interestingly, I once built a project very similar to Rotonde, but all based on RSS/OPML files with some namespace extensions to add a social communication layer. It was federated, not true p2p (circa 2010), but the concepts were the same. The app layer did not matter, only constructing properly formatted feeds. I think even today it would be good to base this off RSS, or at least have RSS as a tandem feed.


Yeah, I presume you could actually just fork someone's rotonde instance... Though, some files are ignored in the .datignore which implies to me that some important files will not get copied via a fork??


I think you missed the initial sentence stating "This code is bleeding edge, so you need to install manually".

Any POC project is going to start out with a highly manual install process.


Are there any networks like Beaker that are not only distributed but also truly anonymous? There is no point in being peer-to-peer if you can't be anonymous as well. Well, there IS a point, like with the pro-Catalonia websites, but it would be even more useful if you could be anonymous.

Also, how secure is beaker? Are there any security audits?


P2P and anonymity are orthogonal concerns. Maybe I want to peer-wise distribute content to well-known friends of mine, to whom I am well-known. Alternatively, perhaps I want to contribute an anonymous voice to a globally centralized conversation?

There is even a third element which is missed by this false dichotomy, which is that "blanket anonymity" is not the same as "managing how much of my identity I want to reveal".

Without the latter, you actually end up with a digital commons whose social dynamics asymptotically approach the chans or Youtube comments.


Freenet is anonymous and is similar to Dat/IPFS. ZeroNet can be anonymous if you configure the node to always use Tor; in that configuration, the node will spawn Tor onion sites to seed the ZeroNet sites.


We are working toward that with Peergos [1][2]. Your username is public, but the network can't see your social connections or your physical location. Note that this isn't true yet, though, as we haven't started using Tor hidden services.

[1] https://github.com/peergos/peergos

[2] https://peergos.org


How is that different from Dat?

https://docs.datproject.org/security


* Peergos is based on IPFS

* Everything is encrypted at rest

* We have fine grained capability-based access control

* If I share a file with two people, they can't see that I've shared it with the other person

* Access revocation with key rotation (on the same fine grained level)

* Peergos is resistant to quantum computer based attacks (unshared files are already safe, and shared ones will be)

* Peergos doesn't rely on out of band sharing of keys (we use TOFU on a publicly auditable append only PKI)


Freenet and I2P both fit these criteria.


Is dat more-or-less in the same domain as IPFS? (I know the technology is different, but as I see they are being used mostly for the same purposes.)

If so, which one is winning?


at least strongly related domains, yes.

I don't think you can decide "who is winning" right now. They are both still in development, and both have strengths and weaknesses resulting from different development focuses, which do not have to remain as they are.


Agree on that. They are both BitTorrent variants. I liked Dat's design a little better, and it's more focused on mutable data streams which is better for Web content. IPFS is good though and may end up getting supported in Beaker again at some point. Dat's the protocol we chose to start with.


It looks good, but it doesn't work on my work network, and it didn't work when I tried it through TunnelBear VPN.


I set one up for myself here:

dat://72671c5004d3b956791b6ffca7f05025d62309feaf99cde04c6f434189694291/

Nobody follows me yet though...


Give us some time. We're still playing around with Beaker setups. Hell, I'm installing Ubuntu bash on my Windows box just to see if I can get this running today :)

PS: I think this would pick up more steam if you changed the title to "Decentralized Twitter clone with Beaker and Dat Project"


See the issue in the BB issue queue about the Windows build; it will help point you in the right direction: https://github.com/beakerbrowser/beaker/issues/55


I do follow you now.

Here is my feed: dat://73bf68c7e480d53f231d0f077e2865afa098d0b6b1bd3eb65364b9b7cb149d0c


For some reason when I follow you, my feed doesn't load. Maybe there's a bug with offline dats?

edit, a quote from @neauoire:

>>> Good morning, I will fix up the client now. Make sure it catches the timeouts, and also make @ names clickeable by everyone.

nice!


Just followed you! Online now: dat://d9ab6634eda283a21c774fe3be8d2860ab9ce826a7d62d0dc33c3d152578efbc/


I added you!

mine: dat://1d09fb13964569d6d03b90c9c2944f3f34bc5aebd1e5a02d3a259b347789c982/


Awesome, I added you back!


This is cool!


How is it similar to and/or different from the posting feature that is part of Beaker 0.8?


Is dat better than ZeroNet?


It's hard to find a comprehensive comparison. I think Dat is something like IPFS, while Beaker is something like ZeroNet but uses Dat instead of a custom transfer protocol. I don't get why Dat and IPFS are developed separately; it seems like unnecessary fragmentation and duplicated effort. I'd love to hear opinions on this.


From Dat's FAQ https://docs.datproject.org/faq

How is Dat different than IPFS?

IPFS and Dat share a number of underlying similarities but address different problems. Both deduplicate content-addressed pieces of data and have a mechanism for searching for peers who have a specific piece of data. Both have implementations which work in modern Web browsers, as well as command line tools.

The two systems also have a number of differences. Dat keeps a secure version log of changes to a dataset over time which allows Dat to act as a version control tool. The type of Merkle tree used by Dat lets peers compare which pieces of a specific version of a dataset they each have and efficiently exchange the deltas to complete a full sync. It is not possible to synchronize or version a dataset in this way in IPFS without implementing such functionality yourself, as IPFS provides a CDN and/or filesystem interface but not a synchronization mechanism.

Dat has also prioritized efficiency and speed for the most basic use cases, especially when sharing large datasets. Dat does not make a duplicate of the data on the filesystem, unlike IPFS in which storage is duplicated upon import. Dat's pieces can also be easily decoupled for implementing lower-level object stores. See hypercore and hyperdb for more information.

In order for IPFS to provide guarantees about interoperability, IPFS applications must use only the IPFS network stack. In contrast, Dat is only an application protocol and is agnostic to which network protocols (transports and naming systems) are used.


> It is not possible to synchronize or version a dataset in this way in IPFS without implementing such functionality yourself

But why not build version control on top of IPFS?

It has been discussed here: [1]. It seems to be a matter of correctly working with immutable data structures (which IPFS provides storage for). And it seems a bit silly to rebuild that storage layer from scratch.

[1] https://github.com/ipfs/faq/issues/83


From what I've read and seen in the docs, faq, presentations, etc., it appears to me that IPFS really pulls in a lot of stuff about the network layer, because it's trying to be, well, an Interplanetary (distributed) File System.

DAT is fundamentally a portable, self-contained data repository. Replicating DAT archives across a broad network and whatnot is definitely a problem that needs to be solved, but IMO that should be solved at a different layer, without rolling in all sorts of complecting concerns such as network ports, routing, and payments for storage and whatnot.


Yes, agreed. Of course, overlap among projects is inevitable, and it would be nice if there were more effort to coordinate and collaborate, as this would allow for potential dev efficiencies (not guaranteed, though). In this case, Dat is a practical and pragmatic approach, whereas IPFS is more hinged to the crypto/blockchain/token space, which can be a turn-off and add unnecessary baggage. The Dat and Beaker teams don't want to add that noise and have different philosophies, so it is better for now that these projects are independent; in the future, a new project can assess them, treat both as prior art, and converge the best parts. And around we go.


Ya, we've thought about that. Dat's storage is pretty flexible, and we have a content-addressed storage library. A lot of our users do not want or need to store data in IPFS, though, so it adds unnecessary complexity to do that by default.

Someone could build a storage backend that uses IPFS, similar to our dat-http storage [1].

[1] https://github.com/datproject/dat-http


I would love to hear a comparison of Dat and ZeroNet. The only difference I'm aware of is that ZeroNet uses blockchain technology, and the Beaker Browser / Dat folks are of the opinion that blockchains won't scale for the P2P web.


ZeroNet does not use blockchain tech beyond the same cryptography as Bitcoin (and, by extension, Bitcoin addresses). It does not use a blockchain for anything core.

You could probably swap this question for "what is the difference between Dat and BitTorrent," since ZeroNet also uses BitTorrent and Dat is similar tech... but this has been answered already.

Outside of this, Dat is agnostic and modular, while ZeroNet is focused on p2p websites. So a comparison of ZeroNet and Beaker (rather than Dat) would make more sense, since both have the same focus. Beaker's approach is to use a web browser wrapper (Electron/Chrome), while ZeroNet is headless and lets you use your existing web browsers. Beaker is able to offer more/different features because of its tight relationship with a browser app, and this affords a powerful path forward beyond just serving static files as web pages. However, ZeroNet also lets you leverage the browser database and the ability to make simple webapps.

It's really just different approaches to accomplish more or less the same goal of a p2p web. You could say Beaker is more cutting-edge because it uses the newer Dat and is open to integrating IPFS etc., versus being bound to BitTorrent (older tech).

The nice thing is, you can run ZeroNet and Beaker at the same time and enjoy both p2p web networks ;-)


From what I can tell (and I am not an expert): ZeroNet and IPFS are really fixated on solving the P2P secure content distribution problem. They convolve the solution of that transport-level problem with what I believe to be the central problem, which is a distributed content-integrity system. Dat is a technology that is primarily focused on that latter component and is relatively independent of transport.

I think that things like p2p transport via Tor, bt, etc. are all red herrings. The robust computing infrastructure for the next-gen, distributed information system that the world needs, should not be tied to transport layer concerns like that. It should work reasonably well via flash-drive-sneakernet as it does over fiber and LTE.


Dat works great in the sneakernet flash-drive scenario, as you can P2P sync over offline WiFi, and dat clients like Beaker Browser will automatically verify crypto signatures. It makes distributing a verified offline and online P2P web very accessible for non-technical folks. It's like SSL for the offline web.


Yep, that's why I like it. :-)


ZeroNet doesn't use a blockchain. Its only bitcoin related technology is that the site addresses are compatible with bitcoin addresses. You can import the private key into bitcoin and receive funds sent to the address used for the site.


Is Beaker based on Firefox?


> Is Beaker built entirely from scratch?

> No, it’s a fork of the Chrome browser.

https://beakerbrowser.com/docs/inside-beaker/faq.html#is-bea...


Thanks. The UI seems to resemble Firefox more, though, which is strange.


Hey! I’m one of the creators of Beaker. Beaker is built with Electron, a wrapper around Chromium, and it doesn’t include any UI elements like tabs or the URL bar. We made those ourselves, and have taken inspiration from a number of browsers.

What you see here is actually the “old” version of Beaker - a lot has changed in the upcoming release. Here are a few tweets with screenshots if you want to see a preview:

https://twitter.com/taravancil/status/906583737789542400

https://twitter.com/pfrazee/status/910988429881675782

https://twitter.com/taravancil/status/918232881641779201


You are doing a great job on the aesthetics of the browser and exposing powerful network features in an elegant way.


As far as I understand, "the hard part" of a web browser is not the front end but the rendering engine. I imagine that's where most of the current code is.


[flagged]


- DAT is not about decentralization; it's about content integrity

- Beaker is an app framework built on Electron/Chrome because without gorgeous apps you don't get users. But all of the data is stored within DAT, and there's nothing that prevents alternative apps (even textual ones) being built on that data.

- Three sentences into your incoherent rant, I lost the antecedent, but if you think involvement with something like 18F invalidates someone's interest in this kind of tech, then I think you're a Russian troll bot unless you can prove otherwise. QED.

- The authors of Beaker are very well aware of the history and original purposes of the project.

- DAT is not geared for centralized anything. You have no idea what you're talking about. Content-based addressing is the way past the centralized transport networks of today, which masquerade as information networks, and they are the single biggest threat to existing powers that gate/throttle/control the internet. Full stop.


Your wall of text would be more effective with some added clarity and organization. I feel like the "they" you refer to is vague and unclear throughout.

If I had to, I'd guess you're referring to Google as "they" at first, then the Dat project, with some GitHub "they" mixed up in the middle. It's honestly hard to tell.

I'm not sure why you think Dat is "centralized decentralization" and how using it would lead to "big brother" getting your research data. What part of Dat is centralized? This seemingly poorly informed hot-take on Dat leads me to question the other assertions in your screed as well.


Noted lol, I've had caffeine now. Sorry, wow, that was disjointed, but the comment aged past edit. I know the big red privacy flag is difficult to see this far off, so let's take off our serious hats and I'll explain this bit of nonsense and paranoia. You should probably dismiss all of this, however.

TL;DR

I see freedom and privacy as something which cannot be combined with this concept as the project currently stands, due to reasons which are not immediately apparent but which I believe have at least enough substance to raise an eyebrow and question things.

I am left with the following questions after examining SEC documents, SM accounts, financial relationships, and company activities of parties involved and technologies used:

   1) Do I want to build on a platform which can never be truly safe
      because the stakeholders have a compelling interest in undermining
      its anonymous usage? (See explanation below)
   
   2) Why do things smell fishy...

        2c) Realizing I personally equate P2P with privacy, free speech, etc.,
            I wonder, why Chrome? Then I think of all of my compatriots. How 
            many of them would like using hacked-chrome to access sites? Why
            not mainline it on Chrome?
                Google doesn't do privacy <flag> hmm.

        2d) Where the heck is Firefox in this... or anything free/open...?

        WHAT KIND OF PEOPLE ARE THESE?!!! ZOPMG?!
Let's find out...

[Exhibit A] The guy who designed the protocol this depends on says in his paper on the subject that he offers an alternative to GitHub, then they build this derivative project on Electron and host on GitHub lol. o.O Okay, not by itself suspicious but weird and it stuck in my head, spurring more curiosity about individuals/projects/affiliations/home planets.

[Exhibit B] An ex-Mozillan building on a Chrome fork. Huh? Okay. It's a free world, but odd nonetheless. This makes me imagine where the project will go in the future. Will this get mainlined and become a feature in Chrome? What might prevent that? What if I don't wanna... Where's the alternatives? I don't want a Chrome-fork of ill repute on my systems to create more security vulnerabilities. Who reviews their changes? How quick do they roll out patches from upstream? Ack... Hang on a minute.. Google wouldn't want a P2P distributed web.

[Exhibit C] A handful of logos, a little namedropping... That makes me question who/why. Okay, let's see what their actual affiliation is. Code for Science turns out to be legit, and cool, but a tiny group so funding is... personal donations? The others seem to be foundations granting them some cash. Let's see who they are...

[Exhibit D] Upon looking up the Knight Foundation's recent dealings, I find they're now owned by a media company making its money from advertising, according to their SEC filings. Woah now, not friends of privacy, or P2P. What gives? Maybe the company has nothing to do with the foundation's activities, so I dig. Well, they're not in a position to spend money on bleeding edge tech, holy cow they're hemorrhaging money and have been for a while. Let's Google em and see why... Googling turns up fiascoes with the NSA, undermining counter-terrorism activities at a level the Inspector General's office deemed greater than all of the leaks by Edward Snowden. Wow that's a lot of heat, it can change a place - and who runs it. $1,000,000,000 USD/yr is a big fucking crowbar to leverage a company with. Susceptible to control? Yes. Motives to control? Yes. Opportunity to infiltrate? That reminds me that I haven't Googled the rest of the staff. This yields information that an adviser on the project is a GSA employee, in 18F - data. By itself that means little, but...

[Exhibit E] Giving their Fed (lol can't resist, sorry Jay-quith, it's meant in good fun) the benefit of the doubt, I Google him and find his anti-Trump tweetfest. Lol, ok, but you're a fed right? So why the Hillarsque feed? When I was in service, I wouldn't have undermined POTUS publicly, but kids these days are different, still seems like a weird fed. So I look up the 18F department handbook, hiring policies, and what kinds of people work there. He wouldn't fit in for a second by the sound of it, and... what is this? Don't they need clearances? Yes... For Open Data, we need an SF85a/SF86 do we? Huh, okay. Wtf? Moving on... Secretly Open Data?

Ok, so basically what I meant to say this morning is that the software, the project, its apparent contributors, and purpose all seem very nice, open, pro-freedom and sharing, targeted at people interested in decentralization and P2P sharing. Cool, they've got ex-Mozilla people and they're 100% javascript buzzword compliant. They've got inspiring LinkedIns and professionally written bios. What hacker-for-public-good has traditional academia roots, gov ties, and likes Google/GitHub and Big Data _TM_ but aligns with Mozilla in a past life? Kinda strange, not incriminating, but those cool-looking people are dependent on organizations and technology which they (Beaker/Dat/codeforscience.org) do not control. These forces have agendas which oppose the goals of this project.

One adviser is employed by the US government in an agency concerned with these matters, which seems fine, but I don't like single government anything really <tin foil hat>. Where is everyone else at the party? Curiouser still: When does gov+P2P anything mix? Who is accountable when I serve pirated media content I am unknowingly hosting via P2P using beaker? In some places using such software is illegal for that reason. Who takes down the page when I serve up bomb plans? There's one strong reason privacy may be intentionally broken, or at least cast aside. Deniability for people hosting the mirrored content is there, but it leaves nobody accountable for a DMCA notice or law enforcement action right? Unless they can come kick my door, then it's fine. See why they might not wanna have any kind of anonymity on such a network? Call it paranoia if you wish - whatever. It demonstrates a conflict between the design, and the objectives of involved parties. There are dozens of reasons why gov+p2p typically have nothing to do with one another, which would give some compelling reasons for a gov to want to put some boots on the ground, maybe manipulate the playing field a little. At least, they're solid grounds for gov to be anti-(beaker+privacy) combos.

One company which owns a foundation supporting the project makes its money primarily in an industry which is infamous for tracking, privacy invasions, selling and mishandling of user data, and exploiting user browsing behavior, but they are asking me to trust their modified browser and server (you need to run a modded httpd to serve "legacy browser" users with normal DNS, etc.). I was under the impression that the contemporary cybersecurity concerns of users and governments were focused on improving privacy, not creating monetary partnerships with media companies.

So, wondering what the biz model is, where the money flows and why, and why government (read: THATS _YOU_ FED! lol) _may_ be interested and might present challenges to using it in the way I would like, for anonymous and open exchange of data. If you've been involved in research, defense, or fedgov the reasons are apparent. Well, doesn't mean they _are_ involved, or even _care about it_, but they may at some point care a lot, if history is an indicator. GitHub stands to lose a little here, maybe, so I doubt they'll jump to the front with their credit card in hand to help. Google sure won't benefit, and that sure is a lot of work for such a small team to tackle, so how are they gonna maintain this? Is this gonna be a forever-separated fork of Chrome? Will Google get shitty and try to break compatibility or prevent usage of Beaker or its features to protect their investments? Doubt they'll help at any rate.

Summary: It seems like they're a project which is working for open data and an open web with the very people who want to prevent this at any cost, and they are in a position to be forced by those people to alter their behavior. The software this is built on is not privacy-focused or even privacy-aware, and the project itself in no way ensures privacy or anonymity, and it is controlled by parties who have interests counter to the goals of the project. So why would I invest my time-money in helping something which is at best naive, and at worst doomed to fail? I love the concept, but WTF, how is _this_ the way to accomplish the goals of Dat, Beaker, or the pro-P2P community? By building in anti-privacy technologies and stakeholders?

I hope this makes more sense. Thanks!


Beaker and Dat are actually two different groups that work together. We discuss things all the time but decide on things separately.

Beaker uses Electron. We chose that because we're from the nodejs world and it allows us to move really quickly.


Okay :) I find node very tempting myself but then it feels like it'll take all of my concentration to be any good with node, then I'll wanna make everything node, and that's how _you people_ happen lol :) How did you meet?


I believe the way Dat works is: if you don't have the public key for a Dat repository (a.k.a. the Dat URL), then you'll never find it, even if other people are sharing it, because peers will only respond if they are asked for that specific public key. However, as soon as you have that public key, you can see the IP addresses of everyone sharing it. This seems like a privacy problem with the Internet itself, one the Tor network is attempting to at least help solve.

- https://docs.datproject.org/security

- https://blog.datproject.org/2016/12/12/reader-privacy-on-the...


Beaker uses Google's rendering engine, yes.

The Knight Foundation has given Dat money to pay attention to a specific use case (sharing of large data sets).

You have to be clearer if you want to insist that either of those things compromises dat (or beaker, although beaker is only a client for dat) as a project and makes it unsuitable for its other goals.


I have an issue with Beaker, and it's that it's built on HTML/JS/CSS, the unholy trinity.

edit: my bad, there's also markdown support.

It also needs to have a wasm-backed canvas.


Well, it's a web browser, so I think Electron is justified here :)


I'm sure things will/should change once Chromium has improved wasm support.



