Urbit: an operating function (urbit.org)
212 points by privong on Sept 25, 2015 | 169 comments



For a more accessible alternative approach to all this madness, see this mirror project:

https://github.com/tibru/tibru


They do say noitatimi is the highest form of yrettalf...


You're such a snoc.


I do so enjoy seeing the casual vivisection of that which tends towards attracting mystical appreciation; what remains intact is all the more deservedly numinous.

For an exactly diametric plunge from the same origin into the rarefied esoteric, there is: https://github.com/mnemnion/ax/blob/master/commentary%20on%2...


Rarefied esoteric what?


"Esoteric" as in "the esoteric", its nominalized adjectival [0] guise.

[0] https://en.wikipedia.org/wiki/Nominalized_adjective


The architecture is exciting. The problem is that the aesthetics of the tooling seem deliberately designed to alienate new users. For example, the whitepaper gives new names to every punctuation symbol!

And poking around the repo, there are gems like this:

"A...variable name is a random pronounceable three-letter string, sometimes with some vague relationship to its meaning, but usually not..."

"Nock, for mysterious reasons, uses 0 as true...and 1 as false..."

The architecture is really cool, but the syntax is terrible.


Traditional punctuation names suck. "Ampersand", "at sign", "caret"? For a language like hoon where we use a lot of symbols, that gets really tiring. Having one-syllable names for glyphs is really convenient, and they tend to roll off the tongue. Everyone's free to use whatever names they want, of course; we just find these useful. Most people love these once they get used to them.

I don't think you can judge syntax by looking at it -- you have to use it for a little while. Hoon's syntax looks hard, but it's actually rather pleasant to use.


This right here is what I mean when I suggest Urbit is unserious. "Green is just yellow and blue. Why do we call it 'green'? That name sucks. From now on, we're going to call it 'yellue'."

There's overreaching first-principles boil-the-ocean dorm room rethinking of concepts, and then there's renaming the ASCII characters.


This definitely counts as a highbrow dismissal!

As I age, I find it harder and harder to remember how easy and natural it is for young people to learn new things: ideas, theories, languages, and yes -- names. Alas, we can't rejuvenate our brains. I know of only one (partial) cure: have your own kids. You'll feel jealous all the time, but it's worth it.

It's painful to admit that I'm probably too old to learn other people's new languages. But any new language isn't and can't be designed for 42-year-old silverbacks. It has to be designed for kids -- or at least, people who are kids now. And believe me, teenagers love this kind of stuff...

[Edit: tptacek, when you edit a comment after posting it, I think it's good etiquette to mark it with an [edit].]


FWIW: I wrote the first half of that comment, then 5 minutes later, feeling it was too snarky without explaining itself, added the sentence: "There's overreaching first-principles boil-the-ocean dorm room rethinking of concepts, and then there's renaming the ASCII characters."

I'm a parent of two teenagers, neither of whom seems particularly interested in new names for punctuation.


I understand. And it did improve the comment, so I forgive you. But it's just irritating when someone hangs a fat fastball over the plate, you whack it, then find out that the pitch magically turned into a curveball and all you've hit is a ground-ball single.

An XS Nock shirt is a dress for my daughter, but she loves the "code." I guess all kids are different. But you never know what they're ready to learn until you try to teach them.


As for having kids, I have to say this: it's a wonderful experience, but it definitely doesn't make you smarter! Maybe that's the sleep deprivation talking though...


I wonder if it's hard because we don't try enough, because reality and pragmatism bolt us to dry reuse rather than wheel reinvention.

Digressing even further, I'm very curious if young brains have an innate sense of 'new' in the most complex and abstract sense. It's as if they smell it.


A better analogy would be a modern day SSB [0] proposing a constitutional amendment to officially rename federal offices by their standardized acronyms, e.g. "President of the United States" to "POTUS", "Supreme Court of the United States" to "SCOTUS", etc., especially as this would be an equally superficial distraction from this hypothetical neo-SSB's real goal of replacing all Congressmen with AIs and relocating the capital to Kansas City.

As it happens, "green" exists linguistically, so one is not compelled to wax artfully about the "yellow-blue blades [of grass] laced in dew". However, it used to be that the only way to talk of "orange" was as "yellow-red" or similar, until the simplifying convention of a new word for a distinct entity took hold.

Edit: Further, Urbit's equivalent of the Sunflower State Congress plan gloms enough together that an atomistically novel domain specific vocabulary is at least as useful.

Edit: As one effectively lay to the many disciplines of the professional field of computation, I use this forum to advertise my continued interest in a critical audit of Urbit. If Keean Schupke and Thomas Lord can probe the depths of Carl Hewitt's ActorScript [1] (the true standard for any so accused "obscurantist" software), then a fortiori Urbit's Anathem barrier [2] can be only semi-impermeable.

[0] https://en.wikipedia.org/wiki/Simplified_Spelling_Board

[1] http://lambda-the-ultimate.org/node/5243

[2] https://xkcd.com/483/


Well you should drop the OTUS, since it's redundant and implied. So you have the P, the SC (pronounced scee - with a long s), etc.


You know someone's going to pop up and talk about the Russian names for blue, and how changing the word allows us to change the way we see things, right?

http://www.pnas.org/content/104/19/7780.full


Also, the three-letter variable names aren't part of Hoon -- they're just a convention that works well if (and only if) you have short, simple functions.

I've actually taken to using TLVs with a Hungarian suffix in C -- it works well if (and only if) you have short, simple functions.

One way to think about variable names: declaring a variable is a way of saying "I couldn't quite get this into point-free form." Names should be the exception, not the rule. Again, this is much more true in a functional language.


> For a language like hoon where we use a lot of symbols

Languages with lots of symbols tend to be incredibly difficult to read at first. If you have a programming language where a typical program looks like line noise, it means the programming language is likely going to have a very steep learning curve (because a lot of those operators are probably not going to have their standardized meaning).

Think about Python. A well-written Python program is similar to the pseudocode you would write on a napkin. For a concrete example, look at the whitepaper's definition of I1 in pseudo-code. The "pseudo-Hoon" implementation of I1 looks totally different from the pseudocode, and to even begin to read the Hoon code, I would have to look up the meanings of approximately 10 different operators.

I guess there are two different camps of language design -- the Perl / Ruby / shell camp where having lots of non-standard operators in your syntax is desirable, and the C / Python / Lua camp which prefers a small set of operators with meanings close to their "standard" mathematical ones.


Well, C has ++, which is different from +, and it has &&, which is different from &. It also has ?:, and >> and <<, and * for pointers as well as multiplication, and // to mean something incredibly different from /, and...

To me, C syntax is obvious, because I've been using it for 30 years. It may be a small set of operators, but "meanings close to their standard mathematical ones" may be a bit of a stretch.


Yeah, I balked at C's inclusion as well. And that's just the signs. The way those signs are combined and the rest of the syntax are also often pretty gross. Try allocating an array of function pointers on the heap and you'll see what I mean.
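To make that concrete, here's a minimal C sketch (hypothetical names) of heap-allocating an array of function pointers -- the declaration has to be read inside-out:

    #include <stdlib.h>

    /* "handlers" points to the first element of a heap-allocated array of
       pointers to functions taking an int and returning an int. */
    int example(void) {
        int (**handlers)(int) = malloc(8 * sizeof (int (*)(int)));
        if (handlers == NULL)
            return -1;
        /* handlers[i] = some_function; (*handlers[i])(42); ... */
        free(handlers);
        return 0;
    }

A typedef (typedef int (*handler_t)(int);) tames it, but the raw spelling is exactly the kind of thing I mean.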


And then there's Haskell, where it seems to be customary to define new operators extremely often; so much so that I can't keep up with inventing and memorizing how to read them subvocally, unfortunately.


This is one thing we don't do in Hoon -- there are no user-defined macros, operator overloading, etc.

If you're going to put a lot of energy into binding syntax to sound to semantics, you only want to do it once (per language, at least.)


> Traditional punctuation names suck. "Ampersand", "at sign", "caret"?

Hence why geeks and programmers rarely call them by name. See http://www.catb.org/jargon/html/A/ASCII.html .


There are a lot of folk workarounds for this problem. They're mostly fine for languages that use ASCII lightly. But it'd be a pretty confusing mess if we applied the "pick something in the jargon file" approach to something like the Hoon syntax.

Everyone who's learned our ASCII dictionary, which admittedly is not a whole lot of people, applies it compulsively.


Is the list in the whitepaper correct? It shows in part:

  nap [
  pan ]
  lep (
  pel }
If that's not a typo, using mirrored words for ( and } is downright malicious.


Doh! That serves me right for tweaking these right before shipping the whitepaper. Yes, "pel" is ).


but isn't pel a rip?


Clearly we should be using Victor Borge's phonetic punctuation: https://www.youtube.com/watch?v=6bpIbdZhrzA


I admit that I have a fondness for learning esoteric languages. That said, I've actually gone far enough down the urbit rabbit hole to write a set of tutorials[0].

urbit has a high startup cost and is hampered by a lack of documentation, a desire to reinvent terminology, and a fast-changing API to write to. All of which means that it's more suitable for the curious and hobbyists right now.

I'm withholding judgement on whether it will manage to succeed despite these.

[0]: http://hidduc-posmeg.urbit.org/home/pub/hoon-intro/


This has been done before: https://en.wikipedia.org/wiki/INTERCAL

"The INTERCAL manual gives unusual names to all non-alphanumeric ASCII characters: single and double quotes are "sparks" and "rabbit ears" respectively. (The exception is the ampersand: as the Jargon File states, "what could be sillier?") The assignment operator ... is in INTERCAL a left-arrow, <-, referred to as "gets" and made up of an "angle" and a "worm"."


Slightly less insanely, Forth did this too. The ANS Forth spec defines official pronounceable names for all words:

http://lars.nocrew.org/dpans/dpans6.htm#6.1

The names are a mix between the visual (* is 'star', and ; is 'semicolon') and the semantic (<> is 'not-equals', and ! is 'store'). Some are a mix (+! is 'plus-store').


Well, ++ is "increment" in C/C++. The glyphs themselves can be described as "plus plus", but everybody calls it increment, because of what it does. Also "->", the arrow operator. Everyone calls it arrow, rather than "minus greater than". It makes sense to call the glyph or collection of glyphs by what they do, rather than spelling them out.


Actually, everyone I know calls it 'plus plus'.

I'm with you with 'arrow', though.


Arrow's still rather on the 'plus plus' side of things anyway - maybe not 'minus greater than', but generally describes the look of it. If one would call ++ 'increment', one should call -> 'field' or something similar, I suppose.


The inversion of 0/1 for true/false is common in good C APIs. 0 as true and 1 as false makes a lot of sense from a perspective of error handling. 0 means everything's okay, while nonzero means it isn't. Various forms of nonzero can then represent different kinds of fail. Variations on this theme are common enough in programming that the inversion is reasonable, and it does no harm in the simple boolean case.
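A minimal C sketch of that convention (hypothetical names and error codes):

    #include <stdio.h>

    /* 0 means success; every nonzero value names a distinct failure. */
    enum { OK = 0, ERR_NOT_FOUND = 1, ERR_PERMISSION = 2 };

    static int open_widget(const char *name) {
        if (name == NULL)
            return ERR_NOT_FOUND;
        /* ... do the work ... */
        return OK;
    }

    int main(void) {
        if (open_widget("frobnicator") == OK)
            printf("fine\n");
        else
            printf("one of many possible failures\n");
        return 0;
    }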

The architecture is no more esoteric than Unix, C, C++, or a submicron-scale polished turd like HTML5+CSS+JavaScript+React+JSX+NodeJS or most other "stacks."

It's just unfamiliar because it's new, and that's okay, because it is not making any pretensions about not being new or about being a baby-step hack on top of our existing heap of hacks. If you're going to try to reinvent the OS and the platform, then go full Yoda and do or do not.


> it is not making any pretensions about not being new or about being a baby-step hack

And, with its Kelvin (count down to zero, then freeze forever) revisioning scheme, Urbit admits its own mortality (or so it seems to me). Any change that can not be managed as an intrasystemic, in place upgrade must necessarily - and righteously - detonate the Urbit ecosystem as thoroughly as mature Urbit obviates Unix. Future competitors are thereby encouraged to be equally generalist, thus hardening Urbit against dissolution via tepid, partial reforms.


Don't blame Nock, Bash does that too. Return code 0 = true.


I call it the Tolstoy model:

    "All happy families are alike; each unhappy family
    is unhappy in its own way"
You only need one value for success/true. But you need many to indicate the modes of failure.


How about not using numbers for Booleans?


CPUs grok numbers. This would be inefficient.


Oh, you can use numbers behind the scenes. Just make sure your type system hides the ugly truth.


The aesthetics are what make this thing possible. Thinking with the old words would not allow the development of new things.

I don't know if this is good or bad, but it is sincerely trying to do new things, so a new language is appropriate.


New things? Is Urbit based on Unix/Linux and the Von Neumann architecture? If so, it is just cosmetic coating on top of the same old foundation. I really like what's going on with people trying to create something like Urbit or old Lisp Machines on new hardware (not Von Neumann based). Interim OS, PilOS (PicoLisp on bare metal). I am still not sold on immutable data being the way forward either. I am reading a 1991 book by Peter M. Kogge, "The Architecture of Symbolic Computers", playing with Shen, and interested to see what becomes of vector-based computing hardware with a language like J/APL. Lisp Machines still enthrall me, and I think a new take on them can be just what's needed to recreate computing. Not just a copy of the old Lisp Machines, but an entire rethinking of them.


For another Kogge devotee, see Loper OS: http://www.loper-os.org/?p=8


Yes, I am familiar with his blog and writings. I would love to see what comes of Loper OS. He has examples of what he does for his day job - Lisp and robotics for one thing. He certainly puts his money where his mouth is, a very productive guy.


New things are developed every day with old words. There are existing words for just about everything Urbit does.


You didn't get it. Not radically new things. There are existing words, but using them wouldn't allow the people from Urbit to think about combining concepts in the radically different way they are doing.

In fact, they would end up saying: well, all we are trying to do already exists, so let's just write a library here, another there. You can argue this would be better, but their goal is to do a completely different thing, even if it looks like it could be done with a new library for something.


No... I got what you meant. But there are plenty of new languages and even reinvent-the-world projects that reuse existing terminology just fine. There's nothing about using existing words that makes it harder to "develop new things"- in fact, it makes it easier.


Indeed, even a Post Res mathematical structuralist will admit the instrumental utility of common words for similar entities; would you care to list a few, and their mappings between the jargon of Earth and Mars?


The concepts expressed in the first section "Obstacles" are interesting and make for good reading, but the implementation strategy in "Definition" is nuts. The authors completely disregard human nature and usability when designing Hoon. Random variable names? Sigils-only syntax? If they want to replace Unix and the Internet, they should remember why Unix won: because it was easy to get things done.

In Urbit new developers not only have to learn all the new concept words, they also have to suffer through a brainfuck/perl syntax and a hostile programming style. Success seems impossible.


I'm not exactly unbiased toward Urbit, but I found Hoon as usable as they say it is. It seems like everyone that hears about Hoon's rune syntax comments about how insane it is, but it really is easy to get into.

I'm partial to calling it "Chinese Lisp" - Hoon runes are converted directly into AST nodes, but instead of using friendly words it uses weird digraphs. The fact that the runes are grouped into families makes it much simpler, however. You don't have to know what "|*" does exactly, just know that all runes that start with bar (|) create the equivalent of functions, so it has to be related to that. Instead of memorizing 100+ runes that are all completely different from each other, most of them are just variants of others and are even macros to other runes.

While it may look like garbage, programs such as a brainfuck vm[1] are easy to scan when you can get the gist of the program structure very easily.

While some of the names are quite a pain, a good portion of the stdlib's arms are tenuously named in relation to their subject, or are very easily grouped with their function. snag is index, scag is prefix, slag is suffix.[2] The docs for the stdlib, along with examples of how to use it, are shipped with every planet at http://localhost:8080/home/tree/pub/doc/hoon/library, although the initial page generation takes a bit. I'm not that big of a fan of the CVC variable names, however.

1: https://raw.githubusercontent.com/chc4/sample-apps/master/bf... 2: http://doznec.urbit.org/home/tree/pub/doc/hoon/library/2b#-s...


Apparent usability isn't the same as actual usability. Pretty much everyone who learns Hoon is surprised by how easy it was, which may be a good thing or a bad thing depending.

There are about a hundred runes (digraphs) in Hoon, but you mostly see only 10 or 15. Also, they're organized by internal structure (all | runes do the same kind of thing), and most runes are macros which resolve to about 20 built-in forms. It's a couple of orders of magnitude easier than learning Chinese, which again may be a good thing or a bad thing.

Variable names designed to be memorable rather than meaningful are pretty normal in both math and functional programming. Math uses Greek letters for the same purpose, for instance. Also, as we note explicitly, this is a style that's optimal for simple code - in any language, you'd probably write add(a, b), not add(left_argument, right_argument).

Actually, Perl originally won because it was easy to get things done. It's had problems since, but for different reasons...


Every time I look at Hoon I think "this looks like the same sort of cliff-steep startup followed by 'woah' as vi", but I can never quite get past the cliff as yet.


That's exactly what we want you to think. :-)

Don't worry, we'll put up some ropes...


The random three-letter variable names are only used in the kernel, and they're meant to only be used in code that's small/simple enough that they're warranted. In my opinion, we use too many of them, and I've been converting many of them to longer, more descriptive names. The rule that "punctuation is syntax, text is content" is useful, and it becomes very natural with a bit of practice.

Hoon is very much not designed to be apparently usable, it is designed to be actually usable. Personally, I find it to be one of the most usable languages I've ever used.


Awesome.

'For example, %= sounds like "centis" rather than "percent equals." Since even a silent reader will subvocalize, the length and complexity of the sound is a tax on reading the code.' This is great. Clever how some of the sounds are reminiscent of existing readings.


"Since even a silent reader will subvocalize..."

Actually, no. I don't.


"Subvocalize" has a literal meaning (micro-activation of the vocal cords) which good silent readers avoid. But you're still activating the vocal areas of the brain [0]. If you read much poetry, you'll see that the connection between reading and sound is pretty inseparable.

[0] http://www.ncbi.nlm.nih.gov/pubmed/23223279


I'm afraid I'm having a great deal of trouble understanding your point here.

> "Subvocalize" has a literal meaning (micro-activation of the vocal cords) which good silent readers avoid.

Right; so I'm not subvocalising.

> But you're still activating the vocal areas of the brain...

Well, sure. I see the symbol 'cat' and both the memory-complex representing a cat and the audio complex representing the spoken version of the symbol will be activated. That's how memory associations work. The written version of the symbol and the spoken version of the symbol will be strongly associated, because they represent the same concept.

But that doesn't mean I have to wait for the audio to finish playing before I move on to the next symbol. That's a misconception of what's actually happening.

(Plus, of course, hardly anyone reads a word at a time. It's nearly always complete phrases. Frequently not even in the right order.)

> If you read much poetry, you'll see that the connection between reading and sound is pretty inseparable.

Well, no. Poetry is mostly intended to be read aloud; it's supposed to be subvocalised. It's unrelated to prose (or computer programs).


It definitely doesn't mean you have to wait for the audio to finish playing. It does mean that your brain thinks the audio. Which has a lot of consequences, including the energy it takes to think...

Most people do read poetry silently, in the same way they read prose (I don't literally subvocalize), and the sound still is everything. Try reading these two poems silently:

http://www.mcgonagall-online.org.uk/gems/the-tay-bridge-disa... http://www.poetryfoundation.org/poem/174183


> It does mean that your brain thinks the audio.

I'm sorry, that's simply not true. At least for me. It may be true for you.

From the things you've said, I suspect you're a word-at-a-time reader, treating words as a sequential symbol stream, processing them as if they were speech. This is just one of the several different styles of reading. Others exist.

I am, as I mentioned above, a phrase-at-a-time reader. I take in multiple symbols at a time, and not necessarily in sequence. If I had to think through the audio of a phrase, this simply wouldn't work. It also means that I would be unable to read symbols that didn't have an audio equivalent.

Right now I'm working with Smalltalk. One of its operators is ~~. How is this pronounced? Don't know, don't care. It's just ~~. When I perceive it, I don't perceive 'tilde tilde'; it's a ~~.

(Also, if you aren't reading Scotland's worst poet aloud, you are wasting him.)


I read more like a page at a time. Some might accuse me of being a page-at-a-time writer.

Compare the way you perceive "~~" to the way you perceive "++". I hear these as "sig sig" and "lus lus". (Or rather, as "slus," because that's a further Hoon abbreviation, but never mind.) You don't hear the former at all; but you hear the latter as "plus plus," don't you?

This is because "tilde tilde" is so heavy your brain doesn't want to do the work of hearing it out. But overriding that connection doesn't save energy, which is why you do hear "plus plus." Your brain has to think the very complicated little thought, "squiggle I don't want to pronounce." It would much rather have a sound.

It's torture enough to read McGonagall silently. Out loud? Who would try that? It's tantamount to suicide.


> You don't hear the former at all; but you hear the latter as "plus plus," don't you?

I really do not. A short silence, both of them. I suppose it depends on how you normally code. I have never had a need for speaking out code and am not well versed in it.


I experience reading similarly to you. I will often become familiar with written words before I know how to pronounce them. When I want to say the word for the first time, I will have to pause and think about how it would sound out loud.

This is especially true of code and symbols. For example, consider this poem

> is it already too dark
> to play tennis with a racket
> i asked?
> while I code with [

and contrast with this one

> The house filled with laughter
> from mother and daughter
> Both were fiends
> but neither friends

or even better, combine the two

> I never used a ~
> as well as Oscar Wilde


Yes, this is precisely my experience!

And the weird thing is that the little silences where symbols should go are all completely distinct.


Do you like, read code from left to right? You don't parse it visually straight into an AST? How do you subvocalize the parentheses in (x + y) * z?


In my head I pronounce it as "snake snake" which I think is OK for specialized operators like that. But on the whole I prefer to use Smalltalk keyword-syntax for more understandable method names. I think binary operators make sense for commonly used operations which are FREQUENTLY used. If they are not, better to use English. #*$!:-)


I wonder how much of that is pronunciation and how much is grammar and parsing. The latter don't require sounds.


Neither do I. One of the very first things that guides on how to speed-read tell you is to stop yourself from internally reading aloud.


For code, I've never even needed to stop myself; it's just intuitive. The idea that one reads code as one reads prose just seems weird. Code has so much structure (and so few operators, usually) that I can just directly map the symbols to the semantics, bypassing the composing characters.


Interesting stuff, but in my usual 15 minutes of attention-span for things like this, utterly impenetrable. Can anyone who has had the privilege of being invited to the Urbit network enlighten us as to just how useful it is shaping up to be in light of, say, the situation with IPFS by comparison? (http://ipfs.io) Because to me, it seems that IPFS may well be ahead in terms of actual applicability right now. Am I mistaken?


>Interesting stuff, but in my usual 15 minutes of attention-span for things like this, utterly impenetrable.

It's pretty much designed to be as hard to understand as possible.

Look at the source/demo videos - it seems like it's designed to be obfuscated.

https://github.com/urbit/urbit

http://urbit.org/preview/~2015.9.25/materials/part-i

Personally, I can't get over the really made up words.


> Personally, I can't get over the really made up words.

The idea of having new names for everything is that when you use a name you've already seen before in another context, you carry forward any ideas you have about things with that name based on their implementations in those other languages.

There's a specific meaning for the words that are used for the introductory language concepts: arm, gate, battery, sample, core, rune, glyph, ... twig, jet, and so on.

Many of these are either new concepts, or new arrangements of old concepts. A gate is not a lambda or a function, neither is a core, and if either were called those things anyone who hadn't seen them explained before would probably go ahead and take the big word to their nearest search engine, only to become even more confused by idiosyncratic and sometimes conflicting explanations of those ideas that appear slightly differently in hundreds of other languages.

I think that Hoon hopes to be the first programming language for a lot of people one day, so they won't usually be coming expecting familiar things to have familiar names.


Interestingly, this same objection comes up when people try to read so-called (normally French) poststructuralist theory from the 60s and 70s. Derrida himself, or Rorty writing on Derrida, supposedly advanced a similar argument, that using new terms allowed one to avoid falling back onto old concepts.

I couldn't find a good source quickly, but it's mentioned here: https://en.wikipedia.org/wiki/Jacques_Derrida#Criticism_from...


Fascinating side-chain .. thanks for that!


You mean that out of all the things in this combinator [0] flavored abstract rewrite system [1], programmed via a symmetrical [2], forward-inferenced [3] typed language, exposing modula-2 style per-code-block compilation control [4], all designed to support a content centric [5], natively networked [6] global computing environment with sovereignty-hard siloing capabilities [7], the most difficult part to understand is the naming scheme?

Edit: In the early Urbit docs [8] there was a nigh ad nauseam emphasis on the "stupidity" of the project. What the epiphany of careful inspection revealed was that this stupidity was not that of intellectual deficiency, but rather opposition to "cleverness" of the kind that tends to foster (and infest) Urbit's peer group of deep stack [9] rebuilds. One of the most insightful comments I've heard after springing Urbit on unsuspecting PL professionals was (paraphrased): "It [Urbit] does all the things we've said we wanted, in the worst possible way." All words are equally made up until grounded in referents. The question is, are the Whorfian shorthands new words provide worth the cost [10] of expanding the symbol table, as compared to the interpretation overhead of translating new concepts into old? Under the burden of internalizing all the content linked below and more, I judge new vocabulary justified.

[0] http://www.ucombinator.org/

[1] https://en.wikipedia.org/wiki/Abstract_rewriting_system (ARS being the most mechanical of the pure computational models outside reversible logic)

[2] http://tunes.org/wiki/symmetric_20lisp.html

[3] https://en.wikipedia.org/wiki/Forward_chaining

[4] https://en.wikipedia.org/wiki/Modula-2#Description (see Definition/Implementation modules)

[5] https://en.wikipedia.org/wiki/Content_centric_networking

[6] http://netcentriccomputing.org/

[7] https://en.wikipedia.org/wiki/Capability-based_security (not yet fully implemented)

[8] http://moronlab.blogspot.com/2010/01/moron-lab-goals-princip...

[9] I won't say full stack, because the reshaping blade only penetrates OS deep. For a hardware-grounded reset, see good old Loper OS (http://www.loper-os.org/?p=8).

[10] Because words are context dependent, this expansion is only logarithmic with respect to the increase of immediately referable concept space.


That's not at all the thing that I have the hardest time understanding, you're right - it's just the thing, along with CY's past, that makes me stop taking it seriously, although it is an interesting idea if it were implemented in a way that didn't attempt to be as arcane as humanly possible.


Arcanity is relative to experience, so the ASCII phonemes can indeed be the most stymieing element of this experiment in futurity. However, what do you think of the phonetic numeral system? Compared to other protocols that map identity to a unique point in sha256 space, "~hex" is no worse than "46", and "~fantyv-ralpen" is obviously better than whatever is its corresponding number; Urbit's vocalizable number-to-name mapping is possibly its most obviously correct (to me) and portable innovation.
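To illustrate the idea only -- this is a toy, not Urbit's actual @p algorithm or syllable tables -- here is a C sketch that maps each byte of a number to a pronounceable consonant-vowel-consonant syllable:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy scheme: 15 consonants x 5 vowels x 15 consonants gives every
       byte value its own distinct CVC syllable. */
    static const char cons[]   = "bdfghklmnprstvz";
    static const char vowels[] = "aeiou";

    static void byte_to_syllable(uint8_t b, char out[4]) {
        out[0] = cons[b % 15];
        out[1] = vowels[(b / 15) % 5];
        out[2] = cons[b / 75];        /* b/75 is at most 3 for a byte */
        out[3] = '\0';
    }

    int main(void) {
        uint32_t n = 0xDEADBEEF;      /* pretend this is a hash or address */
        char syl[4];
        for (int i = 3; i >= 0; i--) {
            byte_to_syllable((n >> (8 * i)) & 0xFF, syl);
            printf("%s%s", syl, i ? "-" : "\n");
        }
        return 0;
    }

Transposing a digit in "75555294111..." is easy; mishearing a syllable across the table is much harder.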


Yes.


Then you are of inestimable value to the project and its intellectual antecedents! You will perhaps find the phonetic encoding for numerals more obviously correct - many's the time I've wanted to share some long hash identifier with a friend across the table, only to have to reiterate when they or I transpose "75555294111..." into "75552941111..." or the like. Even if nothing else from Urbit is adopted, the phonetic numerals are a demonstrably superior human readable format.

Hopefully the tractability of the number base will help bootstrap your familiarity with the ASCII alt-names. When you've poked around Urbit enough to see the ideas described in the link storm above, I would be very interested in discussing them with you (or even to discuss them in a general, non-Urbit context, both are standing offers).


I love IPFS, and it's definitely ahead in terms of applicability.

IPFS is solving a different problem -- it's a storage network, not a personal server. IPFS is more comparable to Freenet or BitTorrent. Urbit is more comparable to Sandstorm, although of course they're technically very different.


Like I said, I'm yet to finish boning up on what, exactly, Urbit is .. Sorry - not trying to be pedantic - but how is IPFS not a personal server? I can serve content with it right now, in fact - it's what I mostly use it for at the moment.

I guess Urbit is more of an 'operating environment' that allows applications to be built, then?


Yes -- Urbit is a "server" in the sense of "computer," which you control and which runs your own apps.

IPFS stores and serves your data; so does Dropbox, Cloudflare, etc. In a sense the best way to see IPFS is as a distributed CDN, of course with an immutable namespace that actually works.


Do you foresee Urbit making use of IPFS?


When both succeed, certainly. :-) We certainly don't try to solve the CDN problem that IPFS is solving.


Is Urbit comparable to Ethereum?


Ethereum is distributed: the network acts as a single organism, and computation only co-occurs with (perhaps can only be considered a side-effect of) consensus among nodes.

Urbit is [de|para|un]-centralized, with each node capable of independent (possibly antagonistic) action.


This seems to come up at regular intervals. It's worth noting that the architects of this system are the "fascist teenage Dungeon Master[s]" of http://thebaffler.com/blog/mouthbreathing-machiavellis


That doesn't seem worth noting.


Front page with demo videos: http://urbit.org


Currently under delightfully heavy load, it would seem, since I can icmp but not http it.


Sorry, whenever we push updates we have a brief period of unavailability. Should be working now.


The backup plan is to typeset and publish the mailing list as a collaboratively written science fiction novel in the event that the main project fails...


This is a really interesting read. I'm excited for urbit now!


The best tagline for the project is what is written on the whitepaper, that a planet in Urbit is "the browser for the server side."


The flaw in this white paper is that it starts by explaining the solution instead of the problem. So everyone should start by skipping to the bottom and reading the conclusion:

>In 1985 it seemed completely natural and inevitable that, by 2015, everyone in the world would have a network computer. Our files, our programs, our communication would all go through it.

>When we got a video call, our computer would pick up. When we had to pay a bill, our computer would pay it. When we wanted to listen to a song, we'd play a music file on our computer. When we wanted to share a spreadsheet, our computer would talk to someone else's computer. What could be more obvious? How else would it work? . . .

>The Internet didn't scale into an open, high-trust network of personal servers. It scaled into a low-trust network that we use as a better modem — to talk to walled-garden servers that are better AOLs. We wish it wasn't this way. It is this way.

This is the problem Urbit aspires to solve: Why does everything suck? Why aren't we living in the future we were promised in 1985, where everything is easy because all software plays nice together? Why do we have a planetary series of Rube Goldberg machines instead of the fun version of the Borg?

The answer is that we've been building everything on top of leaky abstractions, piling band-aids on top of band-aids instead of just starting with a stable foundation. Urbit's solution is to replace every OS, file system, and communications protocol on Earth with a single application: Urbit.

The project is meant to be the biggest pain in the ass ever. It's an attempt to fix everything that is wrong with computers by rebooting the entire information age.

The beauty is that it doesn't all have to happen at once. Urbit can function like a kernel for all non-Urbit systems: They get to call Urbit to receive its super-reliable data, but it never makes system calls so they can't inject side effects into the shiny new Urbit ecosystem. Over time, if Urbit works as intended, it will assimilate everything else.

Once you understand the stakes, Nock and Hoon no longer look like cruel jokes. It would be madness to expect coders to invest all that effort to learn 'just another language.' It's only pseudo-madness to make the same demands while promising that their code will still work a million years from now, because all computers will still be running Urbit.

Instead of caviling about details like how to pronounce the code when reading it aloud, let's delve into the issues that really matter:

I. Does modern computing suffer from a leaky abstraction problem that needs to be solved?

II. If so, is the solution to build an overlay network that provides a global static functional namespace?

III. If so, is Urbit a viable implementation of that solution?

IV. If so, what is the likelihood of Urbit reaching the critical mass necessary for mass adoption?

The last question is the one that interests me the most. In order to overcome the massive inertia behind legacy computing, Urbit needs a killer app: Something that exploits the new system's intrinsic advantages to accomplish feats that were previously impossible.

So... uh... Any suggestions?


A hint: consider the difference between calling web APIs from a cloud appliance controlled by a third party, versus calling web APIs from a personal server controlled by the user.

A new network has no Metcalfe's law effect by definition. So an important early skill is parasitizing existing networks...


I've always heard: do one thing and do it right. It's the Unix philosophy, before it was totally disregarded.

Urbit seems to take the total counterpoint of this advice, and I wonder whether it's a good idea.

What I don't get is what it is trying to achieve. The stated goals are too abstract. What are some use-cases? Is it really necessary to scrap the whole OS and languages to achieve these goals?

I do sympathize with the idea of rewriting everything from scratch, and I too feel that "almost everything is terrible" in software-land. But I doubt rewriting everything without having a firm grasp on the underlying issues is the way to go. It is a fun exercise, and inspiring besides, but I'm unsure it can be more than that.


Isn't this basically the same idea as Plan 9, but with some more opinions on "decentralised" networks?


To the depths of my familiarity, Urbit is basically the same, to some shade or degree, including but not limited to, the ideas listed here: https://news.ycombinator.com/item?id=10279931


Loosely, yes, they're both network operating systems. And of course, Plan 9 is brilliant. :-) All the details are different....


If things are really so screwed up, and we need to recover the dreams of the past, why not "recurse on the idea of the computer", and invest in something like Pharo Smalltalk instead?


Reading anything from them always reminds me of TempleOS. There are some hilarious gems if you skim it, like "Urbit is highly intolerant of computation error, for obvious reasons, and should be run in an EMP shielded data center on ECC memory."


TempleOS is definitely more of a religious experience. They're also way ahead of us in CGA graphics.

I think pretty much all but the most fly-by-night data centers use parity memory, but I'm always concerned by the failure to spend a couple extra bucks on chicken wire for EMP shielding. The Carrington event actually did happen.


I wouldn't be so sure of that -- none of Google's clusters use ECC, for instance.


Really?

"This paper studied the incidence and characteristics of DRAM errors in a large fleet of commodity servers. Our study is based on data collected over more than 2 years and covers DIMMs of multiple vendors, generations, technolo- gies, and capacities. All DIMMs were equipped with error correcting logic (ECC) to correct at least single bit errors"

from conclusion 1.

"The conclusion we draw is that error correcting codes are crucial for reducing the large number of memory errors to a manageable number of uncorrectable errors. In fact, we found that platforms with more powerful error codes (chip- kill versus SECDED) were able to reduce uncorrectable er- ror rates by a factor of 4–10 over the less powerful codes."

DRAM Errors in the Wild: A Large-Scale Field Study : http://research.google.com/pubs/pub35162.html


Urbit confuses me to frightening levels. Psychedelic substances levels. Kudos to them.


I always think of TempleOS as well. Both project maintainers have created some impressive and/or interesting technology that is hindered by inflated self importance and being different for the sake of being different. Another common point - HN has a soft spot for both and upvotes most content related to these projects.


It'd be interesting to hear a more detailed description of what you'd classify as "different for the sake of being different."

There's certainly plenty of code in this world that's the same for the sake of being the same. It's not terribly interesting to use CGA graphics for the sake of being different. On the other hand, I do wonder whether my children will grow up learning 1970s programming for the sake of being the same.


One example is that they flipped the meaning of 0 and 1 in their new language.

>We should note that in Nock and Hoon, 0 (pronounced “yes”) is true, and 1 (“no”) is false. Why? It’s fresh, it’s different, it’s new. And it’s annoying. And it keeps you on your toes. And it’s also just intuitively right.


I'll give you that one! It's an old mistake and not very costly in practice, but the intuitive rightness isn't worth the pain in the butt. But the pain in the butt also isn't painful enough to match the difficulty of fixing it.

I actually got this bad idea from Unix: !strcmp(), etc. It's certainly easier to overload error codes into a 0=true scheme, although Urbit doesn't actually do that.
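The idiom in question, for anyone who hasn't run into it:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char *user = "root";
        /* strcmp returns 0 on equality, so the test reads "if not different":
           0-means-yes hiding in plain sight. */
        if (!strcmp(user, "root"))
            printf("equal\n");
        return 0;
    }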


"It's an old mistake and not very costly in practice, but the intuitive rightness isn't worth the pain in the butt."

Why is it intuitively right?


"a string can be slack in many ways, but taut in only one"

0 is true, everything else is false. At least that's my intuition.


Huh! Pretty similar to my comparison to Tolstoy's quote on happy and unhappy families.

https://news.ycombinator.com/item?id=10282933


This is exactly the way it's used in shell scripting, and honestly, it's more useful that way for us because it makes booleans default to true instead of false. If we're going to redo everything, we may as well do it right this time.


> This is exactly the way it's used in shell scripting

You say that like it’s a good thing.


Well, as I see it, the design of Hoon is definitely different for the sake of being different, syntax-wise and considering the pronunciation guide. Using new terminology where existing terminology will suffice (off the top of my head, twig and span) is different for the sake of being different.


> being different for the sake of being different

Urbit has brilliant technical insight into perhaps the hardest problem our industry currently faces: why does everything always turn into a big ball of mud?

Urbit may turn out to be incredibly important. The parent comment is unjustified, uninformed, poor in tone, and ad hominem.


On the contrary, I'm quite interested in and impressed by the technical quality of the project. I'm pointing out that throwing all first principles out of the window, even when it's clearly unnecessary (renaming the ASCII characters), makes it harder to approach for no good reason.


Sorry, it looks like I read your comment uncharitably.


[flagged]


Their opinions about things unrelated to technology are, of course, highly relevant to their technological insight.


Urbit embodies Moldbug's politics.

Speaking of politics, one of the most interesting practical user-facing things about Urbit that I saw in an earlier video demo was the user choosing his or her political affiliation when registering. The claimed affiliation then acts as a mandatory filter for political conversations. You can opt out of participating in any political discussion but not opt in to more than one camp. I think this is a brilliant idea.


Finding the video (https://vimeo.com/75312418) I see that the four political associations are far-left, left, right, and far-right.

Now I need to know where the Urbit developers are based, because how these terms are interpreted is very, very, very different depending what country you're in...


America. Far right is monarchism/fascism.


The idea that technical aptitude and social/humanistic aptitude are orthogonal is bizarre, especially in the context of someone designing something to be used by other people. The intentional obtuseness and unfriendliness of the urbit ecosystem should tell you something.


Meh, I prefer not to work with bigots. Good to know this fact; I was considering getting involved.


I'm curious. Did you do some research into what was meant, or did the phrase "post opinions about black people" carry some hidden meaning I am not equipped to detect?

There's a heavy implication, of course, but you mentioned a fact. I'm wondering what fact you reference.


2nd hit on Google is an interesting article from Slate: http://www.slate.com/articles/technology/bitwise/2015/06/cur...

His opinion on colonialism in South Africa is "if it ain't broke, don't fix it": http://blogs.swarthmore.edu/burke/blog/2008/11/07/on-its-sto...

Here's a nice post of his where he literally all but claims to be a white nationalist: http://unqualified-reservations.blogspot.com/2007/11/why-i-a...

He might have interesting software engineering ideas, but I'm not interested in ever working with him.


Okay, I read the third post, and it seems like he's saying that he is not a white nationalist but is not afraid to read them.

Okay, fine. I don't see why this is so big and scary.

Personally I think the reason Curtis's writings provoke so much hategasm around here is... well... "methinks you doth protest too much."

If hackerdom and Greater Silicon Valley were bastions of diversity and tolerance, nobody would give a shit about Curtis's alter-persona or any eccentric views he might hold. But they're not bastions of diversity. They're really really really really not.

Back when my own venture was getting off the ground, I did this as part of recruiting research:

https://github.com/adamierymenko/headhunter

I ended up with a monster list of GitHub profiles and projects in the SoCal area, so I started going through them. White dude, white dude, white dude, asian dude, white dude, white dude... I saw probably 2/3 white dudes, 1/3 asian/Indian dudes, and I could count on one hand the number of women I saw without running out of fingers. I mean there are practically no women on GitHub. This is not the result of bigotry on my part. I applied no filter. They are just not there. Why?

There is no significant statistical IQ difference between women and men. There are loads of women in biotech (another field I've studied), and trust me it is no easier than this one. There are more women in aerospace, in actual rocket science. There is no biological or neurological reason there should be no women who hack. There's something wrong here.

Dig a bit and it's not hard to see. Hackerdom is really misogynistic and perhaps also somewhat racist, though the misogyny is in my experience by far the dominant -ism. I mean I am a card carrying penis owner and it annoys me at times.

I think the tech field is self-conscious about this. To cover it over we've applied this thin bullshit veneer of twee politically correct nonsense, and anyone who openly refuses to toe that party line gets scapegoated and blacklisted. But that stuff doesn't work. All it does is drive the bad stuff underground, creating today's spectacle of a field that pays endless lip service to equality and diversity but is made up almost entirely of young white and asian men.


Unfortunately, at this point bigot is a distinctly Orwellian term. Consider, for example, the statement "Italians are more intelligent than Germans." By itself, this statement can't be bigoted, because it simply represents a possible fact about reality. At worst, it's false, or perhaps simply ill-posed (if, for example, one rejects the notion of quantifying intelligence). It's categorically distinct from things like "I don't like Germans because they're dumb" or "She can't be smart, she's German!" or "We don't serve Germans here." Any use of bigot that groups the first statement with the other three creates a false category, using language to attack (possible) reality for particular social and political purposes.

In the colloquial sense of not liking people simply because of their ancestry (or other such categories), I can assure you that the creator of Urbit is not a bigot. But, as with all Orwellian terms, bigot tinkers with the tools you use to think, so the word is best avoided.


[Disclaimer: The following is a digression into political and rhetorical philosophy. Caveat lector.]

Of course, such a rigorous delineation of denotation and connotation is the mark of a decidedly literal mind, and the neoreactos do laud the socio-linguistic adroitness to see both accident and essence. Thus this appeal to pure dialectical interpretation (which I affirm as my preferable norm) of politically sensitive statements seems vulnerable to attack in ways resonant with their assault on arguments for universal human rights - the ideal which is being discussed is not the practicality which the victor hopes to influence.


It's too bad, really, because someone should do a serious general-purpose overlay network project, and it's clear that Urbit isn't it.


You're probably capable of something more interesting than a "middlebrow dismissal..."


I think Urbit spends a lot of time framing overlay networks in obscurantist and obfuscatory terms, tightly coupling the simple, useful idea of layering a more flexible network on top of restrictive IPv4 networks with less useful ideas like a series of new programming environments with their own ASCII pronunciation guide.

I'm not sure what I gave was a middlebrow dismissal; it was just terse. I find Urbit unserious. I think that's unfortunate, because someone should do a serious, pragmatic overlay network.


We'll call it a terse highbrow dismissal, then.

My terse highbrow dismissal: separating the programming environment from the protocol results in serious, pragmatic problems. For instance: protocols specified in English, in which messages are validated by hand and not by a type system.


The notion that Urbit believes it can solve all the problems, rather than tackling a single important problem, is one of the things that makes it unserious to me.

The inflexibility of the Internet service model is a serious problem. Poor formal methods for verifying protocols is a serious (albeit not lucrative) problem. Programming languages that make it difficult to express correct programs: serious problem. It is not clear to me how the solution to all three of those problems is the same startup.


At a certain scale, it's often a lot easier to build one system that solves all the problems.

Building a building: hard problem. Building 1/3 of a building: impossible problem. Building a cow: hard problem. Building 1/3 of a cow: impossible problem.


Well, some types of buildings are designed to be modular. Cows are not.

I'm also in favor of a single system that solves a bunch of problems rather than a hodgepodge nightmare. That was the vision behind NLS, the Mother of All Demos, Smalltalk, Plan 9, etc. And Doug Engelbart can be found favoring the easy to master but complex at first glance approach https://www.youtube.com/watch?v=VeSgaJt27PM instead of the easy to get started approach of Mac GUIs that won out. The problem arises when the system makes too many weird decisions and people fork it to make their own all-encompassing systems. Then it takes forever for crippled versions of these systems to be adopted. It doesn't help when even the niche audience is divided. Things that can be disagreed on should be minimized.

So there is no technical need to throw the complex at first glance syntax at new users these days. You could convert it to a more verbose syntax and back again.

Terse syntax looks clean and easy to scan when you use it daily. But what if you don't? Then it's not self-documenting; you have to look up commands to read code. If I don't use the command line for a few months, I remember the basics like mkdir, ls, cd, but I forget the nuances like ls -a and the billion other options. So it becomes a chore of looking things up constantly. It could be fixed with natural-language autocomplete for writing code, but not for reading. If you considered such use cases, you wouldn't be so sure about the terse syntax.

So many made-up words are also asking too much. They'll certainly be worth learning if the system is proven. Haskell has its own way of doing things, does a lot of things well, but seems to make many things difficult in practice. So it remains a niche that very few learn beyond the surface curiosities. Same for APL, J, and K.

And I wrote most of this before seeing https://news.ycombinator.com/item?id=10280348


I mean, this is pithy and all, but it obviously doesn't rebut my point. You could apply the same logic to any disparate collection of engineering problems.


No, you're right, it's a highbrow dismissal. Let me be a little more detailed.

Once you've built a system that's both a protocol and an OS, you realize that putting an abstraction barrier between them is like building a cow by building two halves of the cow, then sewing them together. It's incredibly hard and the resulting cow doesn't work very well.

Take a problem like identity. Is this an OS problem, or a protocol problem? Earlier I mentioned validation of message / content types. Is this an OS problem, or a protocol problem? Is the spinal column part of the cow-frontend, or the cow-backend?

These are just the most obvious examples. For instance, the standard way of building protocols assumes that message processing isn't transactional, and the OS is a dual-level store that loses its mind all the time. If your OS is a single-level store, you can essentially use persistent sessions. Which means you can get exactly-once messaging. Which is a very desirable feature that's impossible to achieve if you assume that the endpoints can lose their minds.

So the appropriate comparison is: build the front-end of a cow, and stitch it to the back end of an alligator, creating the mighty alligow; or, build a whole cow.

Granted, Urbit isn't perfect and I'd hardly call it done, but the whole stack (counting apps!) is only 25K lines of code. So the "whole cow" doesn't seem appalling. The alligow -- I wouldn't even try.


The notion that the OS and the network are essentially the same thing, a series of procedure calls somehow knitted together either with a stack in memory or as a series of serialized frames on a network, strikes me as a very 1980s way of framing computer science. I feel like the last 20 years of network software development have been a repudiation of that idea.

So yeah, I find it frustrating that Urbit insists that an overlay network is "half the cow", and that to have a "whole cow", we must also adopt an excruciatingly idiosyncratic bespoke programming language.

Mostly, for me, this is coming from a place of deep respect for the concept and potential of overlay networks, and of frustration with designs that seem to sabotage that potential in order to make philosophical/political statements.

(That's my only attachment to Urbit as a thing worth discussing; by way of bona fides:

https://hn.algolia.com/?query=author:tptacek%20overlay%20net...)


I actually like that we're having a substantive discussion in a snarky way.

But 1980s? Not to make it personal, but I finished Brown in 1992 and dropped out of Berkeley in 1994 (where I arguably invented ASLR [0], though the basic idea was Larry Peterson's). My field was OS in general and networking in specific; I took Mark Weiser's ubiquitous computing seminar from Mark Weiser, and if there are pieces of paper entitling me to talk about anything, it's '80s networking. "And you, sir, are no '80s networking."

'80s networking as I remember it: at Brown, it was all about TCP/IP versus OSI and crap like that. At Berkeley, the emphasis was more on ATM. And crap like that. Congestion control algorithms, like garbage collection, can be invented indefinitely. Frankly, the only thing more boring and irrelevant than '80s networking: '90s networking. At least in the '80s people still wrote new transport layers and stuff, in the vain delusion that they might get adopted.

And at least in the '90s they still invented their own IETF application-layer protocols that got deployed, or that you could imagine getting deployed. Like that great winner, XMPP. (I spent a particularly depressing afternoon as a fly on the wall in a pre-XMPP working group at some IETF in '97 or '98.) And ACAP? Wherefore art thou, ACAP? And all the calendaring stuff? Various bits of things that were designed to be open networks got fitted into various proprietary protocols of the 2000s.

(Nothing is ever new in CS, and if you're a grad student looking for new ideas about networking, this pattern suggests that the best place to look is '60s and '70s papers. Have fun digging! Or does no one do that these days, either?)

So anyway, we were talking about cows. I get it. You're not interested in cows, only in cow heads. Obviously I share your feelings about cow heads. Love 'em.

So, your problem is: the Internet is full of firewalls and crap. It's a restricted-routing network. This sucks. I remember the Internet when it had no firewalls. You do too. We remember when the Internet was a social network. What is the Internet now? Like we say on the intro page, it's a fing modem, which you use to log in to fing AOL.

So let's create an overlay network with unrestricted routing. Some kind of P2P scheme. (I hope you know Adam Ierymenko's ZeroTier -- he's api on HN. I believe he's basically solved this problem as you define it.)

Great! You have a near-perfect VPN. What else is an overlay network? Have you recreated the Internet as a social network? You know, the Internet that had a distributed Reddit called Usenet -- which, as a digital society, was as far above Reddit as Reddit is above YouTube comments?

Dude, you haven't even come close. All you have is another VPN. Why were those restricted networks put in place? Nobody had a "firewall" when I was at Brown.

To put it another way: what is your overlay network doing? Since you're delegating layer 7 to the OS, you're providing the same basic service as '80s networking, '90s networking, and of course now networking. You're sending datagrams or streams or something between Unix processes on different Unix machines.

For starters, how do you identify these endpoints? A ZeroTier address is an identity in a sense, but it provides no useful information. There's no way to tie it to any identity you're actually interested in.

Who needs a new network for Unix processes to receive data from an effectively infinite set of anonymous untrusted identities?

We already have one of these networks. We call it the Internet. It was a great social network when everyone with an IP address was an institutionally trusted entity. Once that became untrue, we put firewalls on it and it turned into a modem. Anyone can see that this will happen to your overlay network. ZeroTier can be used as a public network or a VPN -- it's a great VPN. With due respect to api, I would focus on that side of the business :-)

But of course there's a bazillion VPNs. You want something different -- you want a cow head. You don't care about the cow body. That's fine. We don't all have to be generalists.

Your theory is: why hasn't someone built a cow head and stitched it onto the alligator body yet? My theory is: (1) if you stitch any head to an alligator and it sticks, it's probably an alligator head; (2) our alligator already has a head, and nothing will be gained by cutting it off and stitching it back on again.

Now, you can see how this process of inferring the rest of the cow goes. You need an identity system, or something, for your network. A PKI. The cow neck. Who holds these identities? Processes on Unix servers? O rly? You're going to infer that person X signed document Y, because person X is connected -- in some way -- to a Unix process with access to key K? Sure, I guess we do that for HTTPS servers, but... for individual human beings?

Ok, you're going to build access to K as a separate component of each application? One keystore per process? Or since it's 2015, per container or whatever? Or the whole computer has access to K? O rly? This is good, this is really good.

So you decide to do what the browser did: create a new opaque layer, above Unix proper and isolated from it, to manage your applications. Imagine if JS "apps" could make system calls through the browser. There'd be no such thing as a Web app. HN would be like a Java applet or something.

An opaque layer (imagine a node that couldn't make system calls, for instance) gives you two wins: it lets you standardize the semantics of a network node precisely, and it lets you run untrusted code with perfect encapsulation. And it offers the even more intriguing possibility of running other peoples' code automagically, which is incredibly useful in a distributed system -- for example, to disseminate protocol validator updates. Thus, "the browser for the server side."
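
(One way to picture that opaque layer, as an illustrative Python sketch with made-up names rather than the actual Arvo interface: a node is nothing but a pure transition function from state and event to new state plus requested effects, so untrusted node code never touches the host OS directly.)

    # Illustrative only: a node whose semantics are exactly a pure transition
    # function, so the host can run untrusted node code with full encapsulation.
    # The node never makes system calls; it only *requests* effects.
    def node_step(state, event):
        kind, data = event
        if kind == "poke":
            return state + [data], [("send-ack", data)]
        return state, []

    def host_run(events):
        state, outbox = [], []
        for event in events:
            state, effects = node_step(state, event)
            outbox.extend(effects)   # the host decides how (or whether) to perform these
        return state, outbox

    print(host_run([("poke", 1), ("poke", 2)]))   # ([1, 2], [('send-ack', 1), ('send-ack', 2)])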

And you go on cow-engineering thus. Until you have the full cow. Then you can drink milk every day and laugh at your strange detour into cow-alligator Frankenstacks.

[0] http://bit.ly/1YGuvcC


It must be evident at this late juncture, Chancellor Yarvin, a full half decade since your Durdenesque doppelganger deemed the denizens of earth prepared for his prolix proclamation of Martian supremacy, that the more Carlylean your Urbit apologetics, the deeper your detractors' disposition for derisory dismissal of your loving labors as the grandiloquent, malformed products of, inter alia, luddite nostalgia for the apocryphal elegance of antediluvian computing, a nocturnal-emission inducing fetish for a fascist technocratic telos, viz automation of global autocracy, or a perniciously persuasive strain of neo-Swiftian satire, which advocates Kafkaesque madness with Kaufmanesque earnestness.

That confuses the hell out of me, because I think your plan is brilliant. What gives, broseph?

My only formal CS-education was at summer camp learning Logo when I was nine. In my early 20s, I taught myself LISP from PG's book purely so I could start wasting a few weekends a year coding an urGAI. The first practical coding I've ever done was this year, when I finally bit the bullet and learned Python and Java.

Is your vision convincing to me because I'm clueless about the harsh realities of programming useful things? Or am I able to discern its splendor because I avoided exposure to the false conventional wisdom that blinds other coders to your insights?

If you put a gun to my head, I'd have to pick door number one. Actually, if you put a gun to my head I'd say Urbit was the best thing since Filmer and then apologize for that over-the-top pastiche. But you see my dilemma.

[Edit: Serious question. You came up with Watt/Hoon 5+ years ago, and you claim it's not much harder to use than Lisp. If that's true, where is all the useful Hoon-coded software?

If I gave you five years to write things in Lisp, you could produce an incredible library of offerings... No?]


Hoon didn't really work properly until 2012 or so. Actually I've paid very little attention to it since 2013.

Most of the time since mid-2012 (me for about a year, then me and a few others for two more) has gone into writing Arvo, a purely functional operating system. Arvo is not enough like anything else to be seen as anything but research.

This is a fairly typical timeline for CS research. What's unusual is just the depth of the stack. I would also point out that if you look at the normal cost of developing any sort of operating system, even to the alpha level, the metrics are pretty good.

Where the project looks really unproductive is in objective measures of content production. For instance, Nock took me roughly from 2002 to 2008, which is something like a bit and a half of output per day.


Library gap aside, Lisp has an FFI. If you want to use libpng from Lisp, you just make some C calls. If you want to use libpng from Hoon, you first have to re-implement the functionality of interest in Hoon just to have a sound basis for using the C library (as a drop-in optimization, or "jet").

This obviously makes for slower going, especially near the beginning.
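
(For readers who haven't met "jets" before, the workflow looks roughly like this -- Python standing in for Hoon and C, and apart from the classic decrement example the names are invented: the slow pure function is the specification, and the fast native routine is only a drop-in replacement that must agree with it.)

    # Rough illustration of a "jet": the naive pure definition is the ground
    # truth; the accelerated version is an optimization that must give
    # identical answers.
    def decrement_reference(n):
        i = 0
        while i + 1 != n:   # deliberately naive: count up to n - 1
            i += 1
        return i

    def decrement_jet(n):
        return n - 1        # stand-in for the optimized C routine

    for sample in (1, 2, 17, 1000):
        assert decrement_reference(sample) == decrement_jet(sample)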


I have added you to the dossier, Mr. Ehrlich - do not fear, this is a good thing.


Duly noted. Feel free to shoot a message to my Gmail.

My address is my first name, followed by the first initial of my last name, then the letter Q.


And now it hits me, for the first time, shamefully: the sense that Urbit is Pynchonesque... it's often said of Pynchon that nostalgia for the 60s is the animating theme of most of his later work, and here it is made explicit: the chain Usenet -> Reddit -> YouTube. (Clearly, I think this is a good thing.)

Do I miss usenet? I have to say I only ever liked it from within emacs, with glowing characters on a black background. Seeing it on a web page, dark on light, is just not the same. Les neiges d'antan (the snows of yesteryear).

Once upon a time, it was fun to see what you could learn about an IP address that you found in a weblog. Who bothers anymore?

Perhaps it is snowing, in a way that we are too sinful to perceive!

There are plenty of numbers. We could all have one, forever; or until we die, and then we can, like Charlemagne, will them to our children, who will factor them at leisure.


Moreover, the idea that Urbit thinks of the network as "a series of procedure calls somehow knitted together" is a very, very gross distortion. (I guess if you think of "'80s networking" as RPC-heavy, you might have sort of a point about that '80s thing. For me, RPC is already in the OS layer.)

Granted, the high-level interface you want to present to the programmer is something like RPC. The two application-level communication paradigms in Urbit are a transactional "poke" and publish-subscribe.

A key difference is that a successful poke (a) contains no return data and (b) is piggybacked on the packet ack. Actually the whole transaction is piggybacked on the packet ack ("single acknowledgment" or "E2E acknowledgment"). Oh yeah, that's another feature that crosses OS/protocol lines.

This lets us produce a particularly non-leaky network abstraction, not over procedure/function calls, but over Arvo's stacked event calls (if normal events are a lot like GOTOs, Arvo events are more like GOSUBs). So it's not quite RPC. But it's fair to be reminded of RPC.
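
(A sketch of that calling convention in illustrative Python, with invented names: a successful poke carries no return data, and the transaction's outcome, positive or negative, rides entirely on the single acknowledgment.)

    # Sketch: the packet ack *is* the transaction result. Success returns no
    # data; failure is reported as a negative ack.
    def poke(state, message):
        try:
            new_state = apply_message(state, message)   # may raise
            return new_state, ("ack", None)             # success: no return data
        except ValueError as err:
            return state, ("nack", str(err))            # failure rides on the ack

    def apply_message(state, message):
        if message is None:
            raise ValueError("refused")
        return state + [message]

    state, ack = poke([], "hello")   # -> ("ack", None)
    state, ack = poke(state, None)   # -> ("nack", "refused")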

But at the actual packet layer, a network is a bus for sharing large (but not too large) unsigned integers, which may or may not arrive anywhere. (Network programming is quite a bit easier, by the way, given a bus-width-independent language that can just model packets and blobs as big atoms, then operate on them functionally.)

Effectively, a packet you hear is something that someone said, not something that someone told you to do. Hearing it is learning something. Idempotence at the packet level is crucial, because if someone tells you something twice, it's the same as telling you once. The protocol exists to answer the question: what happens to me if I learn this number?

It's only a short step from here to defining the entire state of an endpoint as a permanently fixed function of the list of numbers you've learned. The computer's state is a function of its packet log. What could be more natural? Why would anyone define a computer in any other way?
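
(Concretely, and still only an illustrative Python sketch: if every packet heard is deduplicated into a permanent log, the endpoint's state is literally a fold over that log, and replaying the log from scratch reproduces the same state every time.)

    # Illustrative: the endpoint's state as a fixed function of the packets
    # heard. Hearing a packet twice is the same as hearing it once.
    from functools import reduce

    def hear(log, packet):
        return log if packet in log else log + [packet]

    def state_of(log):
        return reduce(lambda acc, pkt: acc + pkt, log, 0)   # state = fold over the log

    log = []
    for pkt in [3, 7, 7, 11]:    # the duplicate 7 changes nothing
        log = hear(log, pkt)

    print(state_of(log))          # 21 -- and replaying the same log always yields 21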

But of course, defining a computer as a pure function of its packet history requires you to define its VM and OS as part of defining the protocol.

And a general rule of protocol design is that your chance of achieving a compatible protocol is inversely proportional to the square of the length of the specification. It's also inversely proportional to the extensibility of the protocol.

So you're going to squeeze all of Unix into your RFC? Or even all of JS, bless its soul? And again, you wind up looking for something like Urbit. You may just be interested in the cow head, but you can't get to it without building the whole cow.


>At a certain scale, it's often a lot easier to build one system that solves all the problems.

I have been wondering, are you familiar with Project Xanadu and Ted Nelson's work in general? Project Xanadu took a "whole cow" approach to building a network centered around publishing documents and linking between them. The stated goals for Urbit and Xanadu seem to overlap [1] enough that Xanadu would be easier to implement on top of Urbit than on top of anything else. It could then be Urbit's superior alternative to the World Wide Web for publishing hypertext documents. Last year I emailed Nelson asking whether he knew about Urbit (secretly hoping he might already be working with or for you) but he replied he didn't.

[1] https://en.wikipedia.org/wiki/Project_Xanadu#Original_17_rul...


Without addressing the merits of Urbit, these examples aren't valid. A third of a building is just a building in progress (foundations laid, I-beams installed, etc.). A third of a cow is a calf.

It might be the case that you get synergistic effects, and/or that Urbit's vision isn't possible without solving multiple problems at once. But it could equally be that you've bitten off more than you can chew by choosing to work on multiple orthogonal problems.


Thanks. Divide et impera (divide and conquer). This is the whole idea behind modularity. Solve the network-abstraction problem the best way you can. Then solve the programming-language problem the best way you can. Do we need a "network programming language"? Maybe, but general-purpose programming languages are better for other things, and they too should be able to talk to the network "layer". I probably missed a lot of the article; it was very long.

Of course, if you have new concepts, they need names of their own. But that only applies if the concepts really are new. If they are the same old concepts, use the same old, commonly understood names. If they are somewhat different, use a derived name, like "Big Potatoes": they are LIKE potatoes, but really big. If they are totally different, try to invent a name that EXPLAINS THEM BY COMBINING the names of previously existing concepts. How many people program in Brainfuck and get paid for it? There must be a reason there are not many professional Brainfuck programmers around. Nevertheless, I applaud this project for thinking outside the box; if that is possible, they have done it.


A third of a cow is dinner!


[deleted]


I'm not associated with this project, but your questions are answered in the Urbit white paper: http://urbit.org/preview/~2015.9.25/materials/whitepaper


Really, you didn't read the whitepaper.


Hey, don't delete your comment like that!


Why not? You were right, I didn't read the paper, I just skimmed it, so my original comment was completely bogus. Deleting it seemed to be the best way to clean up the mess.


And... this is why I keep coming back to HN.

We should get lunch again sometime, Ron...


Sure, as soon as I've had a chance to read the white paper. :-)

(Send me an email if you're serious.)


I think it's important to note that Urbit, a very interesting piece of work that deserves more attention, would've been presented at Strange Loop if the PC crowd hadn't made Strange Loop disinvite Curtis Yarvin (the creator of Urbit) for reasons unrelated to Urbit. Using a throwaway for obvious reasons.


I obviously find this a very interesting subject. However, I suspect there are better times and places to discuss it...


I don't mean to take attention away from the real conversation here. However, there are a lot of people who are unaware of the back story, and this seems like a good place to make people aware of it.


I like how your rhetoric removes all agency from the conference organizers, like they had no choice but to do what that horrible PC crowd wanted.


JavaScript needed to be able to read a text document. Next.


I got as far as the first mention of Hoon, and then I thought to myself, "Oh fuck, didn't I read this a year or two ago?" It's all downhill from there.


I do computing daily, i.e. I use the internet like anyone else, and I use Emacs for feeds, mail, writing, programming little tools in elisp, etc. Now, after having read this, I cannot see how it is supposed to improve my daily computing routine. I think this Urbit thing is indeed revolutionary, in that it is as opaque and impractically idealistic as any other revolution.


"Do users want their own general-purpose personal cloud computer?"

No, they do not.

General-purpose computers, as such, are useless and boring. They only become useful when converted into one or more appliances. Modern OSes are very good at this. General-purpose cloud compute already exists; it's a niche serving the community of appliance builders, because nobody has any direct use for it.


"in the cloud, all users have is a herd of special-purpose appliances".

The argument of the whitepaper is that our current internet and Unix environment does not allow for anything other than a cloud full of special-purpose appliances, but that does not mean this is the desirable situation, either for the appliance builders or for their users.



