Urbit user guide, hosted on Urbit (urbit.org)
150 points by urbit on Oct 22, 2015 | 69 comments



From[1]: "Anyone can run the Urbit VM, of course. But the %ames network is officially invitation-only. Not that we're antisocial -- just that we're under construction."

So, is there support for running a separate p2p network? Or is the only option to wait for urbit to stabilize?

[ed: Never mind, it's covered under "Launch Instructions": If you don't have an invitation, pick a nickname for your comet, like mycomet. Urbit will randomly generate a 128-bit plot]

[ed2: Hm, looks like the ability to spawn an alternate universe would make playing with urbit more interesting, as I understand it - without an invite, most of the best parts are inaccessible: "The fanciest way to control your urbit is through Urbit itself: a moon, or satellite urbit. Sadly, only planets can have moons."]

[1] http://urbit.org/docs/user/intro


It's entirely possible to spawn an alternate universe. In fact, it's recommended to spin up an offline version of ~zod to develop on so you don't end up crashing your actual planet. The `-F` flag makes it so your ships only connect over localhost instead of looking up the top-level galaxies from *.urbit.org.

It's also possible to hard-fork the network. The galaxies' public key hash is hard-coded in the source, so if you change it your ship will accept that the new key is the actual galaxy. Of course, there is no guarantee that any OTHER ship you talk to will agree with you...


Oh, thanks for the update. But this doesn't appear to be documented anywhere? So far I've only been reading the documentation, not the source code -- perhaps it's more obvious from the source code that such functionality already exists (eg: the -F flag)?


Our documentation is still very young, but it is mentioned in our contributing guide:

http://urbit.org/docs/dev/contributing

There's much more to cover in terms of how the network works. It's coming! For the time being some things are just folk knowledge. We're always happy to answer questions though.


Just to clarify: you can substitute 'planet' for 'ship' in the above. We changed the naming scheme.


Another interesting tidbit that maybe someone can shed some light on: "Reactive auto-updates are a particular speciality of %clay. We use them to drive auto-updates of code at every layer. A normal Urbit user never has to think about software update."

Is this another way to say that you can never know what version of urbit you're running? So when someone compromises the upstream keys/ids, urbit becomes one big botnet?


Yes. And the same is probably true of the browser you used to post this. Also, the OS it's running on. It's the price of being "evergreen."

It's worth thinking about why we've accepted this tradeoff. The cost of evergreen software is that we put all our eggs in one basket, and watch the heck out of that basket. The benefit is that we solve a huge set of system administration problems that would otherwise be ridiculously impractical.

One metaphor I like to use is the difference between the Soviet and American design styles in aerospace. The Soviet way was to build systems with loose tolerances that worked okay even when parts were a little out of spec. The American way was to build systems with precise tolerances that worked perfectly when everything was right, and failed catastrophically when it wasn't.

There's much to be said for the Soviet style, and indeed it might be summed up well in Postel's law. But as the problems you're trying to solve get harder (like keeping all the world's browsers updated), it doesn't scale very well. If we compare the problems we can solve with manual upgrades and Postel's law to the problems we can solve with automatic upgrades and rigorous protocol validation, there's no contest.


Thank you for taking the time to respond.

> Yes. And the same is probably true of the browser you used to post this. Also, the OS it's running on. It's the price of being "evergreen."

The OS, yes, to a certain extent. I don't think I've set up apt/cron-apt to automatically pull in stuff on (any of) my desktop(s) yet -- they tend to have a couple of bleeding edge repos enabled, and I often do not want even security updates at surprising times. Nothing like firing up your laptop on an airplane just to discover 3d acceleration no longer works because of a kernel security update (frequently for a local-only crash/exploit).

As for browsers, I'm mostly familiar with FF, and that usually prompts before update? I think you can set it to automatically update, though?

I do accept that trusting a single group of people to maintain the OS can be a good trade-off -- I trust Debian's Security team to do that. Sure, if they are compromised (or more likely, make a mistake) I'll suffer. But I'm not interested in having the small chance of key compromise be multiplied with all the (complex) software packages I use.

Also, for context, the same documentation clearly states "Urbit is not (currently) secure in any way" (or something to that effect), and in passing "if urbit runs as root". Well, apt-get does run as root, but a) it only runs automatically if I tell it to, and b) it's built on rather well-tested primitives (GnuPG etc).

So, having Urbit be notified of changes and optionally update automatically sounds great; I'm not sure "always automatically update" sounds quite as great. Especially if the stuff on which trust is built (encryption etc.) is still considered unstable.

[ed: To be clear, I like the last bit: "A normal Urbit user never has to think about software update." Key word being "normal". As Urbit is unstable, and everyone is a developer and/or tester, there aren't (yet) any such "normal" users?]


In the general case, we think evergreen updates are an excellent idea. It's important to note, though, that it's easy to stop syncing from the upstream repository. Notification plus manual update is totally possible. Most people won't want to do that, but some will.

With SaaS web apps, it is of course impossible to turn off updates, which annoys the heck out of me.


I am not convinced that urbit is not just some sort of ironic performance-art criticism of computer science. Perhaps it would be elucidating if someone would be so kind as to explain what the point is.


Urbit is a reinvention of the Old Internet with a lot of the principles from ancient computer lore thrown in (e.g. Lisp Machines).

It reintroduces the social computing environment, emphasis on the word computing. The Old Internet and timesharing systems like ITS and Unix were based around this model:

https://medium.com/message/tilde-club-i-had-a-couple-drinks-...

So you have a standard identity layer, instead of a million little fiefdoms of identity competing for sign-ups. You have proper one-to-one connections between individual users on the system, with modern cryptography in between.

You have a functional, very very small kernel out of which the rest of the system is built. This kernel and core concept is a sort of distillation of the ACID concept from database systems, so that your computer has transactions and forgives mistakes.

It bakes the social layer and the community layer of the Internet into the protocol. Urbit tries to be community-aware and politics-aware and handle this gracefully. This sort of formal acknowledgement of necessary human factors in technology is certainly in line with Curtis Yarvin's previously expressed views on social organization. (And is mostly what people are talking about when they say he 'baked neoreaction' into Urbit. This accusation strikes me as intellectually dishonest on multiple levels, but I'd rather not digress on it.)

If nothing else it's a very interesting piece of new research in computer science, you should be excited about it.

EDIT: Curtis I know you're lurking, can I get an invite?


Why do you think it is a good idea to make things "social" at the OS level? That seems like a very strange abstraction policy.

And how is this system "politics-aware"? I'm not even sure what that would entail from what basically seems to amount to a collection of esolangs.


>Why do you think it is a good idea to make things "social" at the OS level? That seems like a very strange abstraction policy.

Social means a bit more than just Facebook. It also means primitives for things like collaborative editors and coworking spaces, video conferencing, etc. IMO the ideal social system would look a lot like a mix between the old timesharing systems, the early Internet and Douglas Engelbart's "Mother of All Demos": https://archive.org/details/XD300-23_68HighlightsAResearchCn...

>And how is this system "politics-aware"? I'm not even sure what that would entail from what basically seems to amount to a collection of esolangs.

Well, it comes with the community. The way Urbit is structured, your personal cloud/ship/etc. is intended to be linked up to a community of other users. Politics are part of being social and part of communities. If you don't have a structured way of handling it, it'll happen regardless and can be made less ugly with official support.

"The part that is stable we are going to predict, and the part that is unstable we are going to control." - John Von Neumann, 1948


Thanks! The appropriate authorities have been contacted :-)


Fair! We do a pretty good job of generating this response.

The point is that we ought to have a permanent, immutable home for our personal computation that's universally available on the network. Not an app, not a service, but a general-purpose tool that I trust and can program. I have one on my desk, but I want one in the cloud that doesn't feel like flying a 747 (aka being a unix sysadmin).

Our approach is simple: the reason this doesn't exist is not because it's not a good idea, but because existing old-school system software is too complicated.


This reasoning is great and makes perfect sense. The thing that makes it look like performance art is where Urbit appears to replace that complexity with something incredibly obscure that does not look like it will do anything helpful.


I think this is just borne out of the stark and daunting difference between the way Urbit works and everything else we're familiar with.

I'm daunted by it, but I'm intrigued by it. I'm also pretty convinced that if I devote some time to it, it will make sense. And as a bonus I'll probably understand the rest of my computing a little better as well. Which is why I'm cloning the repo right now to have a play.

PS. Big repo. 400mb and counting.


Just because a system is unfamiliar does not mean that it hides some new insights, though the reverse is often true. Urbit's problems are much bigger than that -- it is full of redundancies and poor choices of abstractions, like using "jets" to optimize away Peano numbers.


Ugh! So sorry about the repo size. We can archive some old binaries to get that down.


If the big files are in git history then I think the only way to remove them is to do a full git filter-branch.

https://git-scm.com/docs/git-filter-branch

But this will sever history with all cloned copies, much like rebasing master would.


I'm not involved with the project, but here's what I got:

- Programming/computing is very new.

- Many mistakes were made.

- Due to human stuff, mistakes stick around.

- It's pretty clear that computing as it is today is very far from optimal.

- What can computing/programming look like if we start from a clean slate?


No. I did a project on Urbit for an OS course. It is deliberately obfuscated. For example, take the specification of Nock, the lisp-like "assembly code":

https://github.com/cgyarvin/urbit/blob/master/doc/book/1-noc...

The symbol evaluation is designed to take whoever is trying to program in Nock in circles around the specification. It makes even the most basic operations unnecessarily verbose and hard to memorize. To increment a number, you must put it adjacent to the expression [4 0 1]. So 42 [4 0 1] evaluates to 43, after several steps. There is no reason that an "assembly code" should have a decrement operator that must be evaluated more than once in order to get to the final expression.

Hoon, the "high-level" language, is worse. A function to decrement an input by one is specified by the expression:

(a=. =+(b=0 |-(?:(=(a +(b)) b $(b +(b))))))
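For the flavor of what that does, here's a rough Python transliteration (my sketch, assuming the expression is the counting-up decrement from the Urbit docs; the name `dec` is mine):

    def dec(a):
        # Increment is the only arithmetic primitive available, so
        # decrement counts b up from 0 until b + 1 equals a: O(a).
        b = 0
        while b + 1 != a:
            b += 1
        return b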

Like a postmodernist artist asks "what makes art?" and answers that there can be no objective answer, Urbit asks the question "What makes good software engineering?" Unfortunately there are many objectively bad design decisions you can make in software engineering. Forcing even the most basic operations to be incredibly verbose and unintelligible is one of them.


Well, normally, if you want to decrement a, you write (dec a). But yes, if you want to write your own implementation of decrement, you could do it this way. It's just as bad an idea in Urbit as in any other language.

Nock is a functional assembly language, so it's not meant as a human interface. I think you mean "increment" instead of "decrement." Yes, [4 0 1] is an increment formula in Nock; 4 is the increment operator, 0 dereferences a tree address in the subject, 1 is the root of the tree; so [4 0 1] means "increment the subject," where your subject is 42.

If you know Lisp, you can think of Nock as Lisp without symbols or an environment; instead of having this hardcoded key-value store, the environment, you have a subject which is referenced with tree addresses (1 is the root, 2n is the left child, 2n + 1 is the right child).
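To make those two rules concrete, here's a toy Python evaluator covering just opcode 0 (fetch from the subject by tree address) and opcode 4 (increment). This is a sketch of the two reductions described above, not a full Nock interpreter:

    def slot(axis, noun):
        # Tree addressing: axis 1 is the whole noun; for any axis n,
        # 2n is its head (left child) and 2n + 1 its tail (right child).
        for bit in bin(axis)[3:]:          # walk the bits after the leading 1
            noun = noun[0] if bit == '0' else noun[1]
        return noun

    def nock(subject, formula):
        op, arg = formula
        if op == 0:                        # *[a 0 b] -> /[b a]
            return slot(arg, subject)
        if op == 4:                        # *[a 4 b] -> +(*[a b])
            return nock(subject, arg) + 1
        raise NotImplementedError("only opcodes 0 and 4 in this sketch")

    print(nock(42, (4, (0, 1))))           # "increment the subject" -> 43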

Is this horribly arcane? It doesn't seem that way to me, but de gustibus non disputandum.


Nock just seems deliberately obtuse. The website boasts that "Nock is also the easiest thing in the world to learn" and that the spec "gzips to 340 bytes", but I can't seem to make heads or tails of it. Sure, you can defend Nock by saying it's not meant to be used directly, but can you give any advantages Nock offers over anything else? Because lambda calculus is even easier to implement and is more intuitive.


I don't know anything about urbit.

I stared at the nock specification for about 3 minutes, just the symbolic specifications, not any explanation in english language, and I easily understood it. After just 3 minutes of staring at some symbols, I know everything there is to know about nock.

There is no other assembly language with this property. It seems to me it's not deliberately obtuse; in fact it's very clear, simple, and understandable.

I haven't looked at hoon and other things yet.


So I get that Nock is hard to wrap your head around. But that's like complaining that your CPU's machine code is deliberately obtuse. You aren't going to be writing Nock unless it's for fun.

Hoon may be a little deliberately different, which can be construed as obtuse, but it actually takes less time than you might think to get up to speed in it.


I still don't understand why this thing was chosen to be the basis for all of Urbit. What advantages do you get from building on top of Nock vs. lambda calculus or SKI combinators or anything else?


I don't know that anyone has tried to build a practical system on top of SKI combinators, which are certainly simpler than Nock. It's probably quite hard.

You can build a practical system around the lambda calculus, which is arguably as simple as Nock. But you don't build a practical system by building layers on top of a simple lambda interpreter; you do it by extending the simple lambda interpreter, until it's no longer simple.

The critical feature of Nock is that it's very simple and you don't extend it, you layer on top of it. So, for instance, it's very easy to upgrade Hoon's syntax or semantics over the network in a live system, because the interpreter is a Nock interpreter and doesn't know anything about Hoon.
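A toy sketch of the distinction (illustrative only; nothing here is Urbit's actual mechanism): if the compiler is ordinary data held by a frozen kernel, a language upgrade is just a state update, deliverable over the network like any other data.

    def kernel_eval(formula, arg):
        return formula(arg)                # frozen kernel; knows nothing of Hoon

    state = {"compile": lambda src: (lambda x: x + 1)}   # language, version 1
    print(kernel_eval(state["compile"]("inc"), 41))      # -> 42

    # An "over-the-network upgrade" replaces the compiler *as data*;
    # the kernel evaluator above is untouched.
    state["compile"] = lambda src: (lambda x: x * 2)     # language, version 2
    print(kernel_eval(state["compile"]("dub"), 21))      # -> 42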


I have no idea. I'm not Curtis. However, have you ever just wanted to burn it all down and start over? I mean all of it. The lambda calculus, the von Neumann architecture, all of it?

Urbit is a little bit like that. When he says clean slate he really does mean clean slate.

Which in a way is what makes it fun. It's so far out of your normal experience that it's delightful to play with. At least if you are someone like me anyway.


Starting over is fine, but if you make choices that are deliberately awkward because the only reasonable ideas you have were done before, that's not a clean slate. That's a twisted mirror of existing designs.


Seems like (as I've said in every thread on Urbit) it just needs more developers and users so that we get the Urbit flavoured higher level languages.


I can give you one advantage from the generic perspective of a creator, in that I experience unbridled joy from using my own creations.

From my point of view, no other justification need be made, since no one is in any way forced to use, understand, or even acknowledge my creations.


> Lisp without symbols or an environment

Why is that an advantage? The ability to give things names seems like a big win to me.


Naming things is pushed up to a higher layer. The lower layers of the Internet don't care about names; why should the lower layers of a computation system?

To be practical, though, I think the point here has to do with the idea that languages come and go, and their package namespaces along with them, but VMs stick around as long as useful libraries run on them... unless the VM is tied in some way or another to a language that has gone out of fashion.

Think of the JVM: even if you're writing Clojure, you call Java functions by their Java names. Effectively, you have to know a (tiny) bit of Java to call those functions. Why? Because Java syntax assumptions are baked into the JVM's module/function naming rules. Clojure functions end up with Java-like names too, after compilation. Clojure hides it from you, but to call a Clojure function from Scala, you can't write a tiny bit of Clojure; you have to write a tiny bit of Java. Naming rules have made Java the only first-class JVM citizen.

Nock, on the other hand, defers the concept of naming up to the language/platform level. Effectively, each language running on Nock needs to make up a naming rule, and decide the canonical name (canonical to the language, not to the VM) for every available function itself. This means that there would be no de-facto "Urbit library ecosystem"; there couldn't be, as a particular arrangement of Urbit functions—a taxonomy—wouldn't be guaranteed to survive in any sensible manner between languages. Not all languages would have the concept of a "module", or a "package", etc.

This could be seen as bad: every Urbit-derived language would need to effectively create its own ecosystem of "library taxonomies", manifests mapping from its module system out to function-space, making each language maintainer roughly like a Linux distro maintainer, consuming functions from "upstream" and packing them.

But this lack of forced naming also has the potential to be very good: it creates the opportunity for taxonomies to be created separately from any particular language, and then consumed voluntarily by multiple languages. A taxonomy becomes its own first-class object, above "runtime" but below "language", where languages can pick a runtime+taxonomy to support. This, further, moves the creation of a "standard library" from the language authors to the taxonomy authors; languages "on" the same taxonomy become thinner bundles of syntax and compile-time features, while sharing all their stdlib algorithms, "primitive" data structures, and language "features" like GC (because that's mostly up to whether the data structure "primitives" provided by the particular taxonomy are implemented that way.)


> The lower layers of the Internet don't care about names; why should the lower layers of a computation system?

Because the whole point of computing is to create value for humans, and humans think in terms of names. The fact that the lower layers of the internet don't care about names is not a feature, it's a bug. The internet runs on 32-bit IPv4 addresses because in the 1970s when the ARPAnet was invented that's all we could afford. But it's not 1970 any more, it's 2015, and computing and storage are many orders of magnitude cheaper today than they were then. We can afford to give names to things that we couldn't before.


> The fact that the lower layers of the internet don't care about names is not a feature, it's a bug. The internet runs on 32-bit IPV4 addresses because in the 1970s when the ARPAnet was invented that's all we could afford.

We had the chance to "fix" this with IPv6... but we decided to give IPv6 addresses structure as well. I'm pretty sure having addresses that can be broken into prefixes to assign to different ASes and refer to in BGP routing tables is a feature.

(In IPv6, there are such things as Cryptographically Generated Addresses that are "identity-like"—but these still only exist within the last 64 bits of the IPv6 address, leaving the first 64 bits to function as a hierarchically-assigned routing prefix.)
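A quick Python illustration of that 64/64 split (the boundary is the conventional one from SLAAC/CGA usage, and the address below is just documentation space):

    import ipaddress

    addr = ipaddress.IPv6Address("2001:db8::c0ff:ee")
    n = int(addr)
    routing_prefix = n >> 64                # hierarchically assigned half
    interface_id = n & ((1 << 64) - 1)      # the "identity-like" half
    print(hex(routing_prefix), hex(interface_id))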


> We had the chance to "fix" this with IPv6... but we decided to give IPv6 addresses structure as well.

Who is this "we" of which you speak? IPv6 was designed by a committee, and committees get things wrong all the time, even in 2015.

> I'm pretty sure having addresses that can be broken into prefixes to assign to different ASes and refer to in BGP routing tables is a feature.

You are conflating two different things here. Yes, it's good to be able to build routing tables that are very efficient. But having a number that describes the route to a machine is a very different matter than having a number that determines a machine's identity.

The way things work today -- even with IPv6 -- is that a machine's identity is determined by a number (its IP address) and we have a namespace layered on top of that (DNS). The assumption that the machine's identity is determined by its IP address (a number) rather than by its host name is baked deeply into the fabric of today's standards. This leads to all sorts of horrible non-orthogonalities, like the "Host" header being required in HTTP 1.1.

I'm not saying this was an unreasonable design tradeoff, just that it was a tradeoff, not something that was desirable for its own sake.


> There is no reason that an "assembly code" should have a decrement operator that must be evaluated more than once in order to get to the final expression.

Doesn't this ignore the concept of Urbit's "jets"? You write code that does things in a stupid-but-canonical way; it's then the runtime's responsibility to take canonical patterns and provide optimized implementations.

The cool thing is that you get to write a naive interpreter for all existing code in a couple-hundred lines, while also being able to write a good interpreter that makes the same code run quickly. Like regular Lua vs. LuaJIT, but much moreso.
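As a toy sketch of that division of labor (hypothetical names; not Urbit's real jet machinery): the runtime keeps a table mapping recognized canonical code to fast native implementations, and falls back to naive evaluation when there's no match.

    def naive_dec(a):                       # canonical but O(a) decrement
        b = 0
        while b + 1 != a:
            b += 1
        return b

    JETS = {naive_dec: lambda a: a - 1}     # recognized formula -> native version

    def run(formula, arg):
        jet = JETS.get(formula)             # match against the jet table
        return jet(arg) if jet else formula(arg)

    print(run(naive_dec, 10**6))            # hits the jet: instant, same answer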


Jets are pretty ignorable. Pattern-matching an incredibly obtuse language against a distributed database of machine language is not in any way comparable to LuaJIT, nor do I see any good way for it to do any meaningful optimization beyond e.g. replacing peano numbers with machine integers.


I would note that Nock is a representation of an ISA for an abstract machine, not a language. Aren't loop-unrolled x64 code, LLVM bitcode, BEAM bytecode, etc., also "obtuse"?

In fact, the best thing to compare it to, in my mind, would be RarVM bytecode: another ISA designed to be "forward-compatible" in the sense of ensuring that even the first interpreter will be able to run code from years in the future. Nobody ever sees RarVM bytecode; nobody even realized it existed for decades. It was just there, an implementation detail of the RAR encoding/decoding process, doing its job of enabling the RAR format to change its algorithm over time.


Abstract ISAs are still languages.

And at least loop-unrolled x64 code describes what it's doing straightforwardly, instead of throwing around redundant mathematical abstractions and hoping the interpreter will magically pattern-match them away.


I get the sense that the (unspoken) point of encoding basic operations in an extremely redundant way is, in fact, forcing "production-quality" Urbit interpreters to support jets. It's not really a "magic hope" for this pattern-matching to happen if the language's stdlib is written to rely on the presence of it; any more than it's a "magic hope" that a Prolog interpreter will support backtracking, when all Prolog code relies on that fact; or a "magic hope" for an Erlang interpreter to support tail-call elimination, when all Erlang processes are idiomatically written using terminal self-calls.

As an aside, though:

> Abstract ISAs are still languages.

Not always true. A lot of current abstract ISAs are (because they were designed to be programmed in, as well as compile targets), but many aren't. Most code read into a modern VM gets chewed on a bit more from its "canonical" form; the internal representation that results, with threaded code and JIT profiling hooks and tracing et al., basically looks like Nock: a graph of numbers. For most VMs, those numbers are just pointers to structs it has allocated, so they can't really "live" outside the VM. Nock just goes a bit further and says "but if you give those struct-pointers persistent wire-representable identifiers ala CapnProto, you can serialize the whole internal-VM-state graph in a portable way -- and then that can serve as the VM's ISA."
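A toy version of that last step in Python (illustrative names; no real VM works exactly this way): in-memory pointers get replaced by stable integer ids, after which the whole graph is wire-representable.

    import json

    def flatten(node, pool, index):
        # Give each node a persistent id (its index in `pool`) so the
        # graph can be serialized outside the VM. Cells become id pairs;
        # atoms are stored directly.
        key = id(node)
        if key in index:
            return index[key]
        slot = len(pool)
        pool.append(None)                   # reserve the id before recursing
        index[key] = slot
        if isinstance(node, tuple):
            pool[slot] = [flatten(node[0], pool, index),
                          flatten(node[1], pool, index)]
        else:
            pool[slot] = node
        return slot

    pool, index = [], {}
    root = flatten((4, (0, 1)), pool, index)
    print(json.dumps({"root": root, "nodes": pool}))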


I'm not disputing the idea that the standard library's reliance on a feature will force its support, I'm disputing the idea that jets are a workable solution to anything at all.

And no matter the form you turn a language into, it's still a language. Speech and text are generally considered equivalent in that sense, for example -- it's the abstract form that matters here.


Yes, Nock is a "language" in at least one sense. But you're arguing that it's a "language" (n., 1. linear encoding of semantic information) and then using that to argue that it's bad, because it's not a very good "language" (n., 2. method for sequential communication of thoughts between humans.)

A Smalltalk live-image, a core-dump of a process, an SQLite database, etc. are all "languages" by definition 1, but obviously not "languages" by definition 2. Nock's ISA is more like those: a format for machines to generate and other machines to consume, and for humans to use tools to introspect; not a format targeted at direct human production or consumption, even through a 1:1 mapping ala disassembly.


Nope. I'm not arguing Nock is bad because it's hard for humans to understand- live images, core dumps, databases, etc. are great. I'm arguing Nock is bad because it relies on jets, which are nonsense.


Is your project online anywhere?


It is not. It might still be on the school's network, but I did it back in the spring and our department's network admin might have wiped my account since I graduated. Didn't accomplish too much to be honest, as might be hinted at by the frustration underlying my post.


You can certainly blame our (old) doc for that. Sorry for the bad experience -- we're definitely still at the stage where I wouldn't expect people to be playing with Urbit and not asking for handholding. Even more so this spring.


At this point I think it's part performance-art criticism and part a serious attempt at rethinking the entire computing stack from the bottom up.

Either way it's undeniably fun to play with right now.


It is performance art. Any language that brags that addition is O(n^2) is pure performance art.


There is no point.


I was late last time, so this is my chance to ask again. Is the Urbit team familiar with Ted Nelson's work? For some context, see my question from the previous Urbit thread at https://news.ycombinator.com/item?id=10286521.


How could anyone not be familiar with Ted Nelson's work? In the broad sense, anyway.

I think the crucial layer that we need to implement... a lot of things... is a global immutable (aka referentially transparent) namespace. Urbit is one project building such a thing; another one is IPFS. (Urbit names are addressed by identity; IPFS names are content-addressed; so they're complementary and not competitive.)

One of the reasons the Web seems like such a poor imitation of Xanadu is that it rests on this rickety foundation of a mutable binding from name to resource. Once global immutable namespaces -- Urbit, IPFS, anything -- are more widely deployed, I think Xanadu would be wise to use such a thing as a layer.
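A minimal sketch of an immutable binding in Python, content-addressed in the IPFS style (Urbit's identity-addressed scheme works differently): because the name is derived from the bytes, a name can never quietly come to mean something else.

    import hashlib

    store = {}

    def put(data: bytes) -> str:
        name = hashlib.sha256(data).hexdigest()   # the name *is* the content hash
        store[name] = data
        return name

    name = put(b"as we may think")
    assert store[name] == b"as we may think"      # this binding cannot be rebound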

But to paraphrase a famous saying: grant me the serenity to accept the code I cannot rewrite, the courage to rewrite the code I can, and the wisdom to know the difference :-)


>I think the crucial layer that we need to implement... a lot of things... is a global immutable (aka referentially transparent) namespace.

Theodore Nelson has said this himself. I can't remember the exact source, but I think it was in his Google talk that he said you need permanent addressing for Xanadu to work. He at least reiterates the concept (though with less principal importance) here:

http://xanadu.com/XanaduSpace/btf.htm

"STABILIZED ADDRESSES

Imagine that everything you type is given a permanent, immutable address. Then to refer to a given sentence, or paragraph, you would refer to its permanent address span (start, length). This would have many benefits.

This is not the way things are ordinarily done, but in this system we simulate such permanent addresses in order to get these benefits."
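A tiny Python sketch of the quoted scheme, assuming an append-only text pool (the names here are mine): every piece of text gets a permanent (start, length) span, and spans never break because nothing is ever overwritten.

    pool = ""

    def append(text):
        global pool
        start = len(pool)
        pool += text                        # append-only, so addresses are permanent
        return (start, len(text))           # the span *is* the reference

    start, length = append("Hello, Xanadu. ")
    print(pool[start:start + length])       # resolves verbatim, forever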


> How could anyone not be familiar with Ted Nelson's work? In the broad sense, anyway.

Computer Science is not renowned for its diligent study of history. And that's just for those who actually study it in some form of institution, not all the people who "practice" it without any formal training.

That said, the fact that Ted Nelson/the publisher have stubbornly refused to publish, for example, "Computer Lib/Dream Machines" free on the (inferior) web, or at least as a DRMed ebook or merely a dead-tree re-print, makes it unnecessarily hard for people to read up on the concept(s).


"Computer Lib/Dream Machines" was in print once upon a time. I bought a copy circa 1992, the copy I have is the revised edition which was published in 1987, these are photos of my copy (it's a wee bit aged looking now):

http://imgur.com/a/ghtEb

You can still find copies on Abe Books but they're not cheap.

http://www.abebooks.com/servlet/SearchResults?isbn=091484549...

Glad I held on to this little bit of dead tree history even if it is the revised version.


Ted Nelson had some great thoughts, but fell short on implementing them. Nelson's Xanadu stands for the impossibility of implementing a perfect system. One cannot implement a perfect system, because anyone's idea of perfection must necessarily confront reality. Nelson was never willing to compromise to reality. I get the sense that Mr. Yarvin is totally willing to compromise in the face of reality, and has already done so in many respects to get this far with urbit.


Another important lesson is not to take decades before trying things out end to end. According to a secondhand report, the API to connect a UI to Xanadu was very complicated -- way too complicated to expect wide adoption.

Another lesson is to avoid too long a stealth period.


I'm not sure that Xanadu's stealth period was intentional. Maybe only a few people really understood what Nelson was talking about, at least until the Web came along and it was clear that Nelson was really on the right track philosophically. He could've run a Super Bowl ad every year until 1990 -- most people would've shrugged their shoulders and wondered why this crackpot was wasting so much money on a silly piece of performance art!


I love that you emailed him!

Urbit is 100% open source, and (as you mention in your other post) probably a good fit for implementing Xanadu on top. Our filesystem satisfies a lot of the requirements.

Want to build it? Great. We'll happily help.


We also put up a new homepage that's hosted on Urbit: http://urbit.org/.


Suggest adding a reminder that when saying "urbit" as "herb it", you need to not say the "h". Unless you are actually supposed to say the h, of course. Perhaps it's the opposite of a silent letter.


Sorry, this is a ridiculous parochialism on our part. In America, which is where English was invented and is also of course the biggest bestest country in the world with giant atomic bombs and stuff, we don't say the "h."

But apparently there's some little islands or somewhere where they do. Will fix.


Honestly, I was a little sad to see you mention Latin out of the corner of my eye -- which would mean you'd properly pronounce it "oorbit" -- only to then find out you're being as English about it as the internet usually is.


It would be nice if I could read the docs without needing JS.

It is nice to see sanely written documentation though. Cheers on improving that. :)

Edit: Useful, thanks! vvvv


Fair enough.

All those resources are actually markdown files that are being built for use in our doc browser — but you can also browse the raw .md.

Here's an example of how to retrieve raw .md:

http://urbit.org/docs/user/intro
http://doznec.urbit.org/home/pub/docs/user/intro.md


Or for that matter http://urbit.org/home/pub/docs/user, which is a simple-html rendering of the OP link. Some of the pages require JS to list children components, however.


Thanks, because something is not working with the main site. You go to the user introduction, click on the install manual or dojo or something, and nothing happens. PaleMoon, latest.


Isn't this one of those Mencius Moldbug things?



