Beaker Browser is now archived (github.com/beakerbrowser)
106 points by pfraze on Dec 27, 2022 | hide | past | favorite | 25 comments



What actually is Bluesky? All of their public-facing marketing material and promotion that I've read sounds like they're reinventing ActivityPub, but they carefully avoid actually mentioning AP or comparing themselves to prior art in the distributed social media field. I would expect a comparison table of features/tradeoffs on the project's page so it can differentiate itself from competing protocols.


Also relevant, from https://github.com/bluesky-social/atproto/issues/255

> Activitypub already exists. Why not just work on that? Why is this needed?

Response:

> There are a lot of differing design decisions.

> Account portability is a major reason why we chose to build a separate protocol. Signed data repositories and DIDs are both unique to ATP and not terribly easy to retrofit into ActivityPub.

> There are smaller things as well: a different viewpoint about how schemas should be handled, a preference for domain usernames over AP’s double-@ email usernames, and the goal of having large scale search and discovery (rather than the hashtag style of discovery that AP favors).

> Meta:

> The AP community has always been somewhat suspicious of Bluesky, for reasons that are understandable. AP’s community is volunteer driven and the users joined to escape Twitter. So, here we come, a startup funded by Twitter, with strong opinions on how the tech should be done. I’ve heard concerns that we’d “embrace, extend, & extinguish.” I’ve heard concerns that we’d force through changes that people don’t want by dint of being a funded company. You can criticize us for going a different direction but I think it’d have been a difficult collaboration if we chose to use AP, especially since we weren’t willing to compromise on some of the decisions above.
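For a concrete sense of what DIDs and domain usernames buy you: in atproto, a handle like alice.example.com resolves to a DID either via a DNS TXT record at _atproto.<handle> or via an HTTPS well-known endpoint. A rough sketch of the HTTPS path in Python (the function names are my own inventions, and error handling is minimal):

```python
import re
import urllib.request

# Loose shape check for the two DID methods atproto currently uses.
DID_RE = re.compile(r"^did:(plc|web):[a-zA-Z0-9._%-]+$")

def parse_did(text: str) -> str:
    """Validate the DID string returned by a resolution method."""
    did = text.strip()
    if not DID_RE.match(did):
        raise ValueError(f"not a valid atproto DID: {did!r}")
    return did

def resolve_handle_https(handle: str) -> str:
    """Resolve a handle via the HTTPS well-known endpoint.
    (The DNS method instead queries a TXT record at _atproto.<handle>
    whose value looks like 'did=did:plc:...'.)"""
    url = f"https://{handle}/.well-known/atproto-did"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return parse_did(resp.read().decode("utf-8"))
```

The point of the indirection is account portability: the handle is just a pointer, and the DID document it resolves to can be repointed at a new hosting provider without the identity changing.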


Have you seen their protocol overview?

https://atproto.com/guides/overview



Godspeed!

I met Paul at the CCC in Leipzig in 2019 and invited him out to a New Year's party. It was interesting getting to interview him in person. I set up a Beaker node on a cloud machine for a while but never ended up using it much.

So much love for projects like these, glad to see the work continues.


The huge mistake everybody in dweb is making right now is thinking it's about the protocol, when it's really about the interface. Nobody cares about HTTP; it's a badly designed CRUD protocol with spelling errors in the spec, and it doesn't matter. It's the interface (AKA the web browser) that gives the web its magic. All of the dweb attempts that have focused on protocols instead of interfaces have seen far less actual user adoption than the Beaker project did, and I hope nobody is taking the wrong lessons from this archival. The idea was fundamentally correct.

The group or person that internalizes this and finds the funding, timing and momentum through an interface is going to succeed. Until then, I expect the trend of random protocols and crypto-currencies variously named Donkey Coin claiming to be "Web 3.0" for publicity to continue.

In my opinion, Beaker remains the closest, and in many ways the first, attempt at an actual, working distributed web that we have tried so far, and if the history books overlook it to focus on protocols and where the piles of money ended up, don't read those books.


This is a really, really good postmortem. It’s frank, it includes what worked along with what didn’t, and I’m likely to refer to it again when I start something new.


I'm sad to see this go, a remnant of another web which could have been. I spent a fair bit of time playing with Beaker and hacking it up for my own purposes.

We actually had a discussion a few years ago where I made a suggestion about change to the default behavior. At the time, you made a perfectly valid response and declined my suggestion, but I'm curious if your thinking is the same today, given how things played out: https://github.com/beakerbrowser/beaker/issues/1444


Hey Mizza, wow that issue is a throwback.

I think I stand by my position. My concerns are still the same, and I don’t think your proposal would’ve changed the outcome for Beaker. I always felt the purpose of the project was to enable more open and hackable networked computing, and so adopting some adverse risk to improve availability felt like the wrong choice. I can understand why other folks with different missions would make a different choice.


HTTP is the new TCP. I wonder why all of those projects (IPFS, SSB, Dat/Hypercore) developed their own protocols rather than piggybacking on HTTP, thus requiring desktop daemons, specialised browsers, or web gateways.

I think the issue is the lack of separation between peer-to-peer synchronisation (e.g., nodes sharing messages with each other) and peer-to-browser communication (e.g., a browser requesting a file). I get that the former may require a specialised protocol, but the latter should be available through plain HTTP.
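That split can be sketched concretely. Below is a minimal, hypothetical HTTP gateway in Python: the browser speaks plain HTTP to it, while the `fetch_from_swarm` stub stands in for whatever specialised p2p layer (Dat, IPFS, SSB) actually replicates the data. Everything here, including the in-memory store, is invented for illustration.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the p2p layer: a real gateway would ask the local
# sync daemon (or the swarm directly) for the requested content.
FAKE_STORE = {"/hello.txt": b"hello from the swarm"}

def fetch_from_swarm(path: str):
    return FAKE_STORE.get(path)

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = fetch_from_swarm(self.path)
        if body is None:
            self.send_error(404, "not replicated here")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve_gateway(port: int = 0) -> HTTPServer:
    """Start the gateway on a background thread; port 0 picks a free one."""
    srv = HTTPServer(("127.0.0.1", port), GatewayHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

Because the browser-facing side is plain HTTP, any stock browser can fetch p2p content through it; the specialised protocol stays confined to the synchronisation layer behind `fetch_from_swarm`.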


NAT traversal. Until IPv6 is universal, you can't just point your browser at another person's computer that doesn't have a public IPv4 address.


HTTP and TCP aren't very comparable.

Anyway, the whole OSI model has problems. If it were up to me, I'd redo IP to include named ports and see what follows.


Of course, I was alluding to HTTP's ubiquity.

If you want your protocol to be used in browsers, you need to speak their language. I'm claiming that this isn't something to overlook, especially since asking people to install and run additional software alongside the browser is impractical.


Wonderful postmortem, Paul. Appreciate you taking the time to reflect and share what you learned.


There are now so many visions of the "next web" that I wonder if anybody has the full list...

One would think that ruthless Darwinian selection would single out the approach best fit to the market. Yet it has been a while now and little seems to be standing out. What if the "market" is so thoroughly broken that even solid ideas can't really get attention and traction?


This happens at the end of every hype cycle. People are dissatisfied, everybody is looking for the next big thing, and a bunch of people and companies throw spaghetti against the wall to see what sticks. A winner will eventually emerge, but it can take some time. A good example from the last generation: who remembers OpenLaszlo or YUI?


Raises hand

Just their (reasonably detailed) documentation, though; I didn't do anything productive with either project. Also Ext JS, a descendant of YUI.


I’m keeping my eyes on Farcaster and Nostr. Bluesky, AKA the "AT Protocol", seems too bloated to me. It really feels like everything in the architecture overview could have been half as complex.


Trying to read the crystal ball of "next" keeps leaving me with a vast feeling of unfulfillment: the attempts are almost always far too conservative. I have a hard time seeing most would-be competitors acknowledge just how flexible and powerful the web is.

Almost all new ambitions are decidedly lesser ambitions in many dimensions. The web is medium plus transport. Yet in the quest to reshape transport (the source of many of the web's limits, what roots it in request/response client/server asymmetry), many protocols don't accommodate or consider the medium itself, or present significantly less. New protocols so often end up transporting lo-fi pure data, cutting free the malleable, flexible, self-modifying hypermedia canvas (the HTML + JavaScript + CSS trinity). They almost never expand the medium, make new connectivity possible within the page, or present a new rich self-running hypermedia alternative to the webpage.

Beaker Browser embraced Dat, which was notably one of the rare attempts that kept the web page itself intact; that made it far more featureful than many would-be "next" upstarts!

I personally still struggled with understanding a post-page model, though. Abstractly, a p2p database on the page might constitute its comment section, but how we could build co-interactive systems, and how the original post might expand to link to or encompass newly added comment entities, for example, was still a sticky point about "what are resources" that I remained somewhat unclear on in Dat. The transport semi-seamlessly pulled together content, but the model of resources never gelled for me the way it did for my understanding of the web; Dat's "expansion of the medium" and "new connectivity" were not as seamlessly usable as the web's native HTTP transports.

For sure, there's lots of good reason to listen to those who advocate for doing less. They'll almost certainly come up with some interesting data points, interesting modes from the pioneers away from broad and flexible hypermedia. Even systems like RSS or, more so, ActivityPub can somewhat be classified in this camp: a regularization of data and protocols, a step away from general, malleable media toward a transport for specific style-less activities. There are also hypermedia systems that simply espouse lo-fi, such as Gemini, which I personally am not very excited about but which I hope form stable communities and become a practice we can look at.

Personally I think there's a lot of mileage left for the web, and that the web itself might be heading towards a point where radically different things start happening. WebRTC, with hope, seems semi-fated to gain some QUIC transports, at which point we can potentially think much more intensely about multiple streams of data or connections being open between clients. There's still no client-side address scheme visible, but one could emerge, and we could try to make it tenable across host origins/authorities over time, hopefully.

I'm getting way into the future, but systems like Web Bundles/Web Packaging crossed with the more client-ful experience suggest client-authored content bundles, potentially authorable offline, then relayed over potentially-local p2p networks. The webpage can itself be an extremely potent navigator of many pieces of web content, and I believe we have only started exploring how much agency we can give users, only started exploring how interconnected a single page might be.

I tend to think what emerges here will constitute a "next" web, even living in part inside, and being hostable from within, the current web. And I think it'll be a long time before the crystal-ball gazing produces anything that looks like a well-defined destination; it's still all very early.


For those in need of a replacement, Agregore Browser (https://github.com/AgregoreWeb/agregore-browser) supports the BitTorrent, IPFS, Hypercore, Gemini, GUN, and Scuttlebutt protocols.


Thank you and contributors for all the work!

Out of curiosity, is there any infrastructural portion that also gets shut down? Or, if someone fired up the last released version or a dev instance, could they still access a hyperdrive (provided it is still hosted)?


That's a very honest and well-written postmortem. The "bloated MVP" blog post is interesting, and it seems like a useful concept, albeit a bit tautological (a bloated minimum-viable product is a contradiction in terms).

I spent some time last night pondering whether my current company (https://hydraulic.software/) shipped a bloated MVP. Conveyor simplifies packaging and distributing desktop apps, which, as you'll know if you've ever wrestled with the existing tools, is usually a really unpleasant experience. There were quite a few features in the first release that weren't strictly necessary, like parallel/incremental builds and self-signing support, but in the end I figure this "bloat" was acceptable because it makes it so much faster to test and iterate on the product itself, as well as making the tool more pleasant for end users. Liron's value prop test is also easily passed: there are clearly defined users (devs) who derive unambiguous value from the product (it's much easier to use than other approaches). So, having slept on it, I think it's not (too) bloated of an MVP. The hardest part with this sort of developer tool is just letting the right people know it exists, not lacking a clear use case.

Speaking of value props, decentralized apps were actually one of the original (smaller) use cases for Conveyor. I spent a lot of time on the early years of Bitcoin, back before "tokens" were even a thing, and the tech-focused part of that community hit similar problems. Some decentralized projects have succeeded in a big way like the internet/web/email etc, which encourages people to try it with new things too, but they date from a different time and the world has changed.

The biggest problem looks like insufficient incrementalism. Decentralized things are expensive to create due to technical challenges, so they become over-ambitious in an attempt to justify the high baseline development costs. Then because they're so expensive to create the ability to iterate quickly is gone, so they end up advertising how they work as the primary benefit vs tangible end user value props.

How to increase incrementalism? Here's one proposal. Some decentralization projects could, on close inspection, drop a lot of stuff often considered fundamental, like P2P networks and cryptography. Instead you'd re-orient around nicely designed desktop apps that can remotely control cloud resources, allowing people to fit inside free tiers and deploy stuff to the cloud without needing any technical skills. The app itself would take care of signing up for accounts, using the cloud APIs to instantiate serving resources, obtaining and wiring up domain names, and even moving it all between clouds. This may not sound much like classical decentralization, but it is actually how the internet did things originally: a competing market of commercial providers for connectivity and hosting which you can switch between easily. Although there's no TCP/IP equivalent for all cloud services, there are some close equivalents, like SFTP and the S3 API for file hosting. The "microfeed" project currently on the HN front page is an example of this, albeit without the end-user-targeted app.
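As a sketch of the core of that idea, a desktop app driving an S3-compatible host mostly needs to diff local files against the remote object listing and upload only what changed. This is illustrative only: `plan_sync` and the dict-shaped remote listing are invented for the example, and a real app would obtain ETags from a ListObjects call against whichever provider the user picked (AWS, MinIO, Backblaze, ...).

```python
import hashlib
from pathlib import Path

def etag(data: bytes) -> str:
    # For single-part uploads, the S3 ETag is just the MD5 hex digest.
    return hashlib.md5(data).hexdigest()

def plan_sync(local_dir: Path, remote_etags: dict) -> list:
    """Return the object keys that need (re)uploading.

    remote_etags maps object key -> ETag, as any S3-compatible
    listing API would report them for previously uploaded objects.
    """
    to_upload = []
    for path in sorted(local_dir.rglob("*")):
        if not path.is_file():
            continue
        key = path.relative_to(local_dir).as_posix()
        if remote_etags.get(key) != etag(path.read_bytes()):
            to_upload.append(key)
    return to_upload
```

Because only the generic listing shape is assumed, the same plan works against any S3-compatible provider, which is exactly the "switch between commercial providers easily" property described above.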

Because the management app is running locally, you get a lot of stuff like good levels of privacy for free (and it can be upgraded with reproducible builds, audits, threshold signed updates etc).

Exploring that approach requires it to be very easy to create, ship and update desktop apps. Which thanks to Conveyor and the new wave of desktop app frameworks, it now is. Hopefully at some point someone will try this and we'll get some meaningful level of independence from big SaaS providers.


To be fair, I didn’t read the full postmortem. Once I saw "decentralized" I started to tune out, then the reference to BitTorrent brought back memories of Napster, LimeWire, etc. Curious how we got from decentralized music services, where everyone has a copy, to Bitcoin and NFTs. A topic for another day, but am I off base to think we went off the rails quickly in the world of decentralization?


You're completely off-base here. Check this: https://webtorrent.io

And Beaker was all about p2p sharing of HTML over a torrent-like protocol.


What are you even talking about



