Urbit is a weird thing that feels like it comes from a parallel universe of computing. It's a bit hard to wrap your head around it at first, but it has some neat ideas.
I like this article which introduces it:
Urbit is perhaps how you'd do computing in a post-singularity world, where computational speed and bandwidth are infinite, and what's valuable is security, trust, creativity, and collaboration. It's essentially a combination of a programming language, OS, virtual machine, social network, and digital identity platform.
"Despite messages between ships being encrypted, the founders state that they've purposely designed the network to make it as easy as possible for governments to regulate and control. It's not entirely clear why this is supposed to be a good thing."
I definitely wouldn't endorse this description - it's a bit secondhand.
What I'd say is that Urbit is not designed to be a darknet. It is not Tor and it's not Bitcoin. Also, it is not designed for external governments, or even its mysterious and sketchy founders, to govern - it is designed to govern itself. (Through user-level reputation mechanisms which have yet to be built, but are relatively easy to build - because the limited supply of identities controls Sybil attacks.)
In particular, Urbit should be quite good at enabling anonymous free speech beyond the reach of governments. A self-hosted node in your closet is quite practical and effective. There is no anonymization / onion routing (and you can't trivially route Urbit over Tor, because Urbit uses UDP), but someone could build that.
Although I don't want them to. I want to enable anonymous, or better yet pseudonymous, free speech. I don't want to help people buy drug$ or childpron through the mail. Fortunately or unfortunately, there are already much better tools for that.
My take on that is that this design choice arises from the anti-democratic, authoritarian sentiments which the developers (or at least one of them) apparently hold. At least they have repeatedly expressed these sorts of sentiments in their rather long-winded blog(s).
Have you considered that democracy is not actually immune to authority, and it's really a reflection of this, rather than a window into the designs of one crazy person? (hi Curtis)
Maybe every system that comes along and says "impossible to be subverted by centralized governments" is in reality just saying "come and try to stop me, Federales". Do you think that is what we all should be doing instead?
Actually, I have. I understand that there is a complex tension between fundamental human rights (like freedom of expression, freedom of association, & privacy) and the realities of the intrusion & abridgment of these rights carried out by various state security agencies and corporations.
I agree that any project which claims "impossible to be subverted by centralized governments" is just saying "come and try to stop me, Federales". Moreover, I feel that such claims are extremely difficult to substantiate (and so not particularly credible) and for the most part made by groups I'm not really motivated to associate myself with.
Speaking in a very general sense (having kept up with the news & goings-on related to technology, privacy, corporate activities, and state security activities for the past few decades), it's my opinion that this tension is not only not in an ethically defensible or sustainable state, it's not in the state that most people in the western capitalist nations believe & expect it to be.
So I find assertions like "Despite messages between ships being encrypted, the founders state that they've purposely designed the network to make it as easy as possible for governments to regulate and control. It's not entirely clear why this is supposed to be a good thing." to be as worthy of skepticism, and as disconcerting, as the "impossible to be subverted by centralized governments" claims. My feeling is that this statement translates fairly directly into "all user data is available to any corporate entity or state security agency for any purpose", and that this includes purposes like backchannel monetization through data aggregation and industrial-scale warrantless surveillance & data collection.
It's pretty obvious to me that you are quite invested in this project, while I've only spent a few hours perusing the various docs, repositories, and blogs, so I'm fairly hesitant to make any sweeping statements or claims of certainty. With those qualifications in mind, my conclusion after my perusal is that this architectural design choice is unlikely to be a reflection of some sort of pragmatic policy coming from a non-ideological position of social responsibility that strives to minimize any intrusion & abridgment of rights while still allowing for the realistic needs of businesses and state security agencies.
Frankly, I personally find a number of the abstract concepts and architectural design choices in the Urbit project fascinating and compelling. I've spent far too much time dredging through the information that is currently publicly available... time I honestly can't afford. So it's really disappointing to find that there is a pervasive toxicity deeply intertwined with the project. My conclusion is that this toxicity makes the project broadly unusable (seriously) and possibly even unfixable.
Elsewhere in this thread I saw someone claim that there were big things coming in a few months. If this is accurate, I look forward to it. Perhaps there will be changes and new developments which will prove my current assessment inaccurate. Honestly, that would make me pretty happy... but from what I've seen so far, I suspect that's really, really unlikely.
> My feeling is that this statement translates fairly directly into "all user data is available to any corporate entity or state security agency for any purpose"
Speaking as a semi-informed bystander who's been following Urbit for a while now, it certainly seems like one of the problems it's trying to solve is the typical notion of "user data" as something controlled by third parties. Eg [1]:
> Where is Joe's financial data in mint.com? In, well, mint.com. Suppose Joe wants to move his financial data to taxbrain.com? Suppose Joe decides he doesn't like taxbrain.com, and wants to go back to mint.com? With all his data perfectly intact? [...] Imagine the restfulness of 2020 Joe when he finds that he can have just one computer in the sky, and he is the one who controls all its data and all of its code.
That said, the current implementation has been explicitly called out (in past incarnations of the docs, at least) as not-remotely-trustworthy with sensitive private data.
I know you said you are out of time, but that mention above of "the founders state" was actually responded to by the founder, and he said he didn't agree (to my reading, it actually sounded more like "I never said that").
I don't claim to know everything about how Urbit works. I do think that, superficially, Urbit is better able to respond positively to a demand from a state that sounds like "we think your network is being used to recruit radicals, and a subversive element that threatens the nation is using it to communicate operational details and plan their attacks; can you shut it down" than, say, BitTorrent...
I mean, it's a centralized network where the leader is able to push out updates to the software, and in a future version you may not even need to ask for them to be downloaded or approve them before they replace your running kernel. This will be considered a feature by anyone who comes from at least a managed Windows domain.
So, it has the potential to carry out a "poisoned updates" type attack, like Apple could do to an iPhone. And the more that I think about it the more ways I can imagine that ~zod can fuck you up. It's true I am interested in this project, more than passively, I am a kind of stakeholder who owns a large part of the namespace. The only thing that keeps my ownership safe is a line of text in the git repository under ames.hoon where my public key is stored, generated by an app called :pope and interpreted by a crypto suite that I cannot audit, simply for lack of time and understanding.
So, if I've led you down the wrong path or led you to believe something about the code that simply isn't true, I apologize! I feel I have to admit this is possible, I may have grave misunderstandings or mischaracterizations about the current state of the software, and to add to it, things are also always still changing now. It's in active development, pre-alpha, not yet sure who the customers are. YMMV, take with a grain of salt.
First of all, this is staggeringly brilliant. You should pay attention to it in the coming months. I am not sure if it's destined to be the future, but I sure as hell hope it is.
I had the privilege of interning at Tlon, the company working on Urbit's development, this summer. It owns most of the namespace (personal cloud computer IP addresses, essentially) and is where the architect of the system works full-time. They are funded well enough, by VCs you know of. Urbit is not really launched yet, though - we spent the summer doing a lot of work getting the network, file system, protocols, and application layer into "industrial grade" shape, and I believe more of that is happening this fall.
Because the system is still unlaunched and the docs are being retooled, I imagine these pages are discombobulating. That's... expected. Urbit has a lot of odd ideas that take time to appreciate. However, if you do take the time to understand the motivation behind the design of everything from Hoon's appearance to the network protocol replacement for TCP to the vision for future social networks, you'll find some of the best and most complete computer science research done in decades in networks, systems, and functional programming. The essential idea is not an outlandish one - We need a new basis of computing and networking to build digital identities with, and 1970s system software is not up to the task.
It's unfortunate that ambition and a sense of humor can be misinterpreted as a joke today. For now, you'll just have to take my word for it [1] that these guys are deadly serious and have the technical chops to back up their ambition. Future documentation and applications built on the OS should soon make that more immediately evident.
There are hundreds of lines of noise. This makes perl and forth look absurdly readable. What on earth do the directory and file names mean?
I dunno how familiar people on HN are with him, but the original author of Urbit is Mencius Moldbug, a neoreactionary blogger. His style of writing is absurdly obfuscated and purposely impenetrable, if containing some interesting ideas. The exact same thing is true of Urbit. Interesting ideas, but obfuscated to the point of utter inaccessibility.
Here's sort of the wider group of blogs he philosophically aligns with:
His views are likely unpopular around here, but I am a fan of a bunch of those other blogs, and I've tried to dive into Moldbug's blog Unqualified Reservations a few times. It's kind of insane—the guy is clearly very intelligent—yet he writes in his own discursive style that is absolutely opaque.
Stepping away from the Algol keyword tradition is obviously a risk. At the same time, after using a keyword-free syntax for a while, reserved words feel really weird.
Someone just emailed and pointed out that he couldn't check out Urbit on Windows, because it has a file con.c. Oh, right, reserved filenames. How are reserved words different? The difference is - you're used to them.
Also, perceived usability (while it matters for marketing) is not the same thing as actual usability. Actual usability is something you only find out when you try to actually use something. We have a good bit of experience inflicting the syntax on coders of various abilities and backgrounds, and we're pretty satisfied with the result.
It helps that the syntax is quite a bit more regular than it appears at first glance. It's much more regular than Perl, and it also comes with a vocalization model that makes it easy to say out loud.
For instance, "=+", which more or less declares a variable, is "tislus." A lot of things like "=+" also start with "=", ie, "tis." You wind up learning about a hundred of these digraphs or "runes," which is a lot less than, say, Chinese.
Speaking of Perl, have you heard - Larry Wall is a Christian? I don't think this makes Perl a Christian programming language, though. One of the joys of our engineering inheritance from the 20th century, which was really humanity's golden age of political insanity, is that we get to stand on the shoulders of loons. We fly Christian astronauts into space on Nazi rockets programmed by Communist physicists. And it works...
> it also comes with a vocalization model that makes it easy to say out loud.
> For instance, "=+", which more or less declares a variable, is "tislus." A lot of things like "=+" also start with "=", ie, "tis." You wind up learning about a hundred of these digraphs or "runes," which is a lot less than, say, Chinese.
Uh... you realize this isn't helping to convince anyone?
The project may well be brilliant rather than ridiculous, but if so I suspect you're gonna have to come up with a better pitch to get people interested enough to invest the time to see it.
Your made up language where every punctuation mark is a phoneme or something... it's not helping.
And I should hope any programming language would indeed be easier to learn than Chinese (or just about any natural language).
Don't worry - the test of a system is what it can do. If we were ready to show you what it can do, we'd have released it intentionally instead of unintentionally.
It's very difficult for a language, good or bad, to compete on its own merits or even demerits. C is a better language than Pascal, but people didn't start programming in C because of that - they started programming in C because the native language of Unix was C, and they wanted to program in Unix. Why did they want to program in Unix? Because, relative to its competitors in the minicomputer OS market, Unix was a better way to solve their problems. This is the normal way that new languages achieve adoption - they ride on the back of a new platform, which rides on the back of new solutions to (typically) new problems.
Today you see a lot of apps like Buffer and Slack, which are very successful in terms of userbase and even revenue. From a certain perspective, the value these apps are adding is very minimal - Buffer is... what, cron? And yet, there is a considerable distance between an AWS box in the cloud and a Buffer instance. A lot of value actually is being added.
Most of the code in a product of this type is actually the (usually somewhat) general framework that sits between the Linux ABI and the application logic/UI. Essentially there's a type of Greenspun's Law in play. If you build a layer like this, but as a true operating environment rather than an internal framework, the result is a VM in which the distance from a virgin instance to many of these kinds of apps is much smaller. In a familiar Valley pattern, they then become commoditized and have more trouble extracting rents.
If it's possible to provide this kind of value, which we certainly haven't yet demonstrated, I can assure you quite confidently that a flashcard or two is not an obstacle. I'm in my 40s, so I know how hard it is for a middle-aged dog to learn new tricks. But it's not that hard.
On the other hand, if people had found C very difficult to program in, it probably would have hurt the adoption of unix.
In fact, if I understand the history, developers and sysadmins generally found C an improvement on their previous tools, and generally saw how it was so fairly quickly, without a lot of convincing.
You keep insisting that the... oddities of your language aren't really barriers, aren't as difficult as they seem, really aren't much worse than the conventional way to do things. But I think I've missed it if you've tried to explain why they are supposed to be better, what the justification is for such an unconventional approach, what it's supposed to actually accomplish to be worth it.
(beyond no reserved keywords in the domain of a-zA-Z, only reserved punctuation instead... which I don't think is convincing many people, who haven't really had much of a problem with reserved keywords).
(And the 'reserved keywords' thing doesn't even apply to your "lapidary" thing, which you insist really isn't that hard to deal with once you get used to it, which may or may not be true for people... but what makes it _better_, what's the point? Why not just use meaningful variable names everywhere, instead of taking an extra step to decide if it 'needs' meaningful variable names, or if obfuscated 'lapidary' variable names are sufficient? Maybe they're sufficient, maybe they aren't, but what's the point? What's the harm in just using meaningful variable names in case they are helpful for a future reader?)
If you look below I've made arguments for all these things, but let me try to address them here. Compare
++  add
  |=  [a=@ b=@]
  ^-  @
  ?:  =(0 a)
    b
  $(a (dec a), b +(b))
to
attribute add {
  function(left-operand: atom, right-operand: atom)
  produce atom
  if equals(0, left-operand) {
    right-operand
  } else {
    recurse(left-operand (decrement left-operand),
            right-operand (increment right-operand))
  }
}
As for the variable names, I can think of very few programmers who would insist that "left-operand" and "right-operand" are better names, in this context, than "a" and "b".
Using "a" and "b" accurately expresses the simplicity of the algorithm, and nothing could be easier to read. OLV in "add" says: there is nothing interesting here. "left-operand" and "right-operand" says: there is some semantic meaning here, which you have to trouble your brain to understand. But actually, it's just pointless wordiness - in this case.
As for the syntax, you no doubt can read my wordy example better. But a programming language is a professional tool, not a toy. Anything new demands a genuine investment to learn - this investment has to be paid off by actual productivity.
And people are, as I've said, much better at learning random associations than they think. Again, we've taught this syntax to everyone from Haskell jocks to JS cruft-jockeys to innocent CS 101 victims. The biggest problem - and it is a problem - is not that it is hard, but that it seems hard.
The nonsense words are all names. This is entirely a style choice. The actual name syntax is roughly Lisp's - for instance, (my-long-function arg1 arg2) does the same thing (roughly) in Hoon as in Lisp.
In the "lapidary" Hoon in which most of the kernel is written, facets (variable names, roughly) are meaningless TLV strings, usually CVC (consonant-variable-consonant).
The user experience is much the same as with Greek letters in math: you remember the binding between symbol and semantics if you know the code you're looking at. If you are learning the code, your first task is to learn that binding. Once you know the code, there is no quasi-semantic intermediary - a meaningful name - between the symbol and the concept.
I think our small sum of experience with this daring idea is that sometimes, it works better than other times. Basically, TLV naming is appropriate for code that is simple, but not trivial. (Trivial code, like "++add", is "ultralapidary" and gets OLV names - like "a" and "b".)
Ideally, all of the Arvo kernel and Hoon self-compiler would meet the TLV standard of quality; all of it is written in this style, though not all of it lives up to that standard. In general, if you should be writing lapidary code, you know it, and if you don't know, you shouldn't be (and should use more or less normal names, as in any Lisp).
In the "lapidary" Hoon in which most of the kernel is written, facets (variable names, roughly) are meaningless TLV strings, usually CVC (consonant-variable-consonant).
The user experience is much the same as with Greek letters in math: you remember the binding between symbol and semantics if you know the code you're looking at. If you are learning the code, your first task is to learn that binding. Once you know the code, there is no quasi-semantic intermediary - a meaningful name - between the symbol and the concept.
Have you been reading a lot of Heidegger or was this an independent decision?
I'm not sure much strictly non-lapidary hoon exists, but for samples marginally less lapidary than the kernel, I would like to point you to a simple algorithm translation at https://gist.github.com/MichaelBlume/17a227cc839f52f68c97, and the twitter library /main/lib/twitter/core.hook in the standard distribution.
I think this is not at all an unusual data structure for one to find in an HTTP server, no?
Now, this style (four-letter lapidary arm names, usually not nonsense strings but nonsense words) works in some places but not others. Frankly, %eyre is not exactly a triumph of the lapidary style...
This Hoon code is very pretty but the English content of the names is extremely low.
I mean.. reading a programming language I don't know, okay.. I don't expect to be able to understand much. But syntax and semantics aside the names are a language of their own with no attempt at referencing anything that might even be a little bit similar.
It's as though every time you needed to name a concept you came up with a unique word for it. This reads like poetry in a language I will probably never understand.
The docs are written in English but I don't see a lot of signal in here and no real explanation of this vocabulary you've created.
I am having a hard time believing any of this. I would need to see an example of non-code communicative writing in this language. An email or chat excerpt, something with someone saying "call pimp with the seam to get the marl" or however you would phrase whatever it is. Do you verb these nouns? Do you use English prepositions and adjectives when discussing them?
I mean this is not just reading a programming language I don't know, it's reading a programming language with all the names in a foreign language. So even if I could understand broadly what's going on here.. I cannot.
Searching around the repo to try to find out wth `marl` is I found this:
> The fundamental hoon structure used to generate XML is a manx. A manx is composed of a marx and a marl, representing an XML node.
Yeah.. this is like esolang-level opaque. I'm afraid to even run this on my computer.
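For what it's worth, the quoted definition describes an ordinary tree. A rough Python rendering of one reading of it (illustrative only; the names come from the quoted doc, the representation is guessed):

    from dataclasses import dataclass, field
    from typing import List

    # Guessed rendering of the quoted doc: a manx (XML node) pairs a marx
    # (the tag) with a marl (a list of child nodes, themselves manxes).
    @dataclass
    class Manx:
        marx: str                                         # tag name
        marl: List["Manx"] = field(default_factory=list)  # child nodes

    page = Manx("html", [Manx("head"), Manx("body", [Manx("p")])])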
Frankly, it always surprises me that people are willing to randomly run other people's C code! It's really quite plausible that I'm just out to steal your Bitcoin wallet.
The above are all data structures - not even functions. So we say "a pimp" or "a marl" or whatever. For precision, we might say "slam a gate on a pimp," but everyone will understand you when you say "call a function on a pimp."
Then, we refer to the comments (written in English) to see what a pimp, etc, is. Documentation is admittedly a problem, but it's not any more or less a problem than in any other language, I feel. Obviously, we could have more of it!
It's amazing how quickly words you know lose their meanings, or rather acquire new and distinct ones, in a functional context. When the iPad was released, I remember distinctly the number of people who thought the name was funny, because it reminded them of a female hygiene product. It reminded me of a female hygiene product. But it doesn't anymore...
The whole "ideology" is basically just fascism and institutional racism dressed up as something else. Despicable. I would never use any software from some Hitler-reboot type of person. Don't empower and feed such people. They are usually narcissistic megalomaniacs who think they are fit to rule and tyrannize others.
Before accusing someone of being a "Hitler-reboot type of person", perhaps you should, you know, actually read what they have to say about Hitler. For example:
> Since most people are neither historians nor philosophers, the fact that Hitler was on the extreme Right, and this Reaction is also on the extreme Right, raises some natural concerns. Again: the only way to face these concerns is to (a) provide a complete engineering explanation of Hitler, and (b) include an effective anti-Hitler device in our design.
In other words, an explicit design goal is to avoid Hitler-like outcomes (because, duh). Thus, the accusation that he's a "Hitler-reboot type of person" reflects either simple ignorance—which you can fix by doing a little reading—or outright mendacity—for which, I'm afraid, there may be no cure.
I mean look, I understand what the philosophy is about, and I think it would never work in the real world. We've learned time and again that entrusting a lot of executive power to one person is a horrible idea - extremely exploitable and subject to corruption.
Looks almost like a product of a deranged autistic mind. I have no interest in such asinine "philosophies".
> I have no interest in such asinine "philosophies".
I bet no one does... the real issue is how to decide what is "asinine". After all, why are your views and beliefs any more valid than anyone else's?
Normally I'd just tell you that I don't subscribe to moral and philosophical relativism, but for the sake of this discussion, I will oblige in explaining further.
First off, what is the definition of valid and correct? What is the metric that makes a certain belief more "valid" than some other belief? Well, over the millennia, we humans have arrived at certain definitions of what's good and what's bad in the context of preserving life, civilization, society and liberty. In that sense, things that harm those concepts can be defined as bad, and those that further those goals as good. History has shown, time and again, that empowering one person with a lot of executive power inevitably leads to tyranny, oppression, genocide, slavery and other completely horrible things that we seem to have been able to get rid of (more or less).
So when someone rejects thousands of years of historical evidence as to what happens with such regimes and where they eventually lead, and proposes instating one again, I can only call it either asinine (if the author is well-intentioned) or evil (if they simply want power over others).
In more practical terms, liberty, self-determination and self-ownership are sacrosanct to a lot of people. In order to take those things away, you will need a lot of people, guns and body bags.
I thought it was pretty common to think that a benevolent dictatorship was a great way to run a country. The problem is in making sure your dictator is benevolent. Nobody knows how to do that, so we try non-dictatorial systems instead.
That is quite actually one of the creepiest things I've read in a while. I've always been fundamentally uncomfortable with a certain wing of Valley thought, but man, even I didn't think it went that far and in such lofty halls.
I haven't read all of Unqualified Reservations, but what I have read so far (Dawkins, Open Letter, Gentle Introduction) has not been difficult to read at all. Perhaps long and winding, but not opaque. In fact, I find Moldbug to be much more accessible than his sources of inspiration (which are standard fare if you want to even begin with political theory).
Yeah, people who think Moldbug is opaque should try reading a little Carlyle. A few years back I spent a week powering through Carlyle's main political works (Chartism, Latter-Day Pamphlets, Shooting Niagara, and the Occasional Discourse), and it took a couple of days just to get acclimated to his writing style. By comparison, Moldbug is a breeze.
Is this better? Arguably, it's easier to learn. But I'm not sure I would regard it as better.
Also, for your convenience we've assigned CVC nonsense names to all the ASCII characters. Pasting from the source:
++  ace  (just ' ')
++  bar  (just '|')
++  bas  (just '\\')
++  buc  (just '$')
++  cab  (just '_')
++  cen  (just '%')
++  col  (just ':')
++  com  (just ',')
++  doq  (just '"')
++  dot  (just '.')
++  fas  (just '/')
++  gal  (just '<')
++  gar  (just '>')
++  hax  (just '#')
++  kel  (just '{')
++  ker  (just '}')
++  ket  (just '^')
++  lus  (just '+')
++  hep  (just '-')
++  pel  (just '(')
++  pam  (just '&')
++  per  (just ')')
++  pat  (just '@')
++  sel  (just '[')
++  sem  (just ';')
++  ser  (just ']')
++  sig  (just '~')
++  soq  (just '\'')
++  tar  (just '*')
++  tec  (just '`')
++  tis  (just '=')
++  wut  (just '?')
++  zap  (just '!')
One of the problems with using these handy little glyphs in syntax (and an even worse problem with using Unicode glyphs in programming, btw) is that "semicolon" is not convenient to say. The length of the vocalization matters a lot to how you think about a symbol, even if you don't often say it.
> Is this better? Arguably, it's easier to learn. But I'm not sure I would regard it as better.
For what criteria of 'better'?
I can't really present an argument either way since I'm not familiar enough with the language, but it does seem like it requires a fairly high amount of cognitive overhead for... I'm not sure exactly. Keystrokes?
There is something to be said about the efficiency of glyph-based writing systems like kanji, so APL might've been on to something, but overloading common ascii characters and creating meaningless words? That seems like a lot of noise for brains without extraordinary working memory to filter out on a regular basis.
I'm sure thinking and writing in it are fine (even brainfuck isn't difficult to write), but for something intending to be the backbone of the internet, I would've assumed maintainability to have been the most important design goal.
Essentially, I'd say most people's brains are better at associative memory than they think they are.
When you see a digraph like -> or ?: in C, do you think "hyphen greater-than" or "question-colon"? Your brain long ago learned to treat these as individual symbols, not strings of two characters. And this despite not even having easy-to-say names for them. (In Hoon they are "hepgar" and "wutcol" respectively.)
The digraph operators are not a large problem for me. I'm used to Perl, so I'm sure I can master them. But what is a lew.wod.dur, and what is lax supposed to mean? What happened to variable names that made at least a nod to descriptiveness? Why do you think adding a two-word comment to a function is enough to make it all crystal clear?
As a coder of "various ability and background", I found that Hoon source code was a relaxing joy to read and write after a few weeks of dedicated study.
It is, believe it or not, designed with readability as a first-class priority. However, a common meme in language design today is that "code should read like prose." This sounds good, because it means languages are easy to learn. Who wants to learn a bunch of symbols and patterns and junk? I want to code (Python, JavaScript, etc.) now!
But when we program, we aren't dealing with the trivial knowledge English is efficient at exchanging in an email, or the subtlety it portrays in a novel. We are talking to a computer. The ideas we explain to a computer are often much more complex than we can explain precisely with English in any reasonable amount of time, so complex we often lose ourselves in them.
Mathematicians and physicists gave up on making human language explain specific technical ideas long ago. Have you ever read a paper on cosmic topology or quantum field theory? There's a comforting padded bumper of English framing the ideas, but the reason the paper is published is in symbols which would appear to the uninitiated as "hundreds of lines of noise." The English is usually just an introduction.
I think Hoon is wonderful [1] because it has both power and predictability. Power lets you express complex ideas concisely and easily, while predictability lets you read and understand someone else's code without frustration. Perl is powerful because it's dense, but that density comes at the cost of predictability. Mathematical symbols are terribly powerful, but wildly unpredictable [2]. Python is sort of powerful and sort of predictable. The seductiveness of English-simulacrum programming languages is that you often don't realize how much power you're giving up, because it's just so relaxing not to think about symbols and patterns when your code has the predictability of an email.
Hoon's power is in the zoology of runes (digraphs), and its predictability is rooted in its homoiconicity. Every symbol on the keyboard is utilized to build powerful runes, each of which in turn is utilized to append predictable structure onto your AST. When you understand the basic patterns of these constructions and reductions, it's not hard to read someone else's library and understand it in an afternoon. I can't say the same for the Java SDKs I come across. [3]
I'm not saying it isn't inaccessible at first and doesn't take time to learn. I am saying it is worth it.
[1] It's my favorite language - It is just plain fun.
[2] That's why the English frame around a physics paper is there - It tells you what each symbol represents in this paper and what they're trying to do.
[3] The language design also includes several decisions to avoid the pitfalls of other homoiconic languages (primarily the readability of Lisp), but those are explained in the docs.
Wait, so they wrote their own bytecode, functional language (complete with compiler and libraries), functional operating system, TCP replacement, and distributed storage protocol? And will presumably write all their own apps, or some kind of interface layer that will allow non-Hoon code to run on their system efficiently? That has to be one of the most ridiculously ambitious projects I've ever seen. Good luck to them I guess, but I've always been a believer in not reinventing too many wheels at once
Perhaps it makes a bit more sense if it's explained that the original goal was to keep the whole codebase (outside of the C support layer, which adds no semantics) at 10Kloc. Unfortunately this is now rapidly slipping toward 20.
Of course, Kloc is a deceptive measure of algorithmic simplicity in a functional language, if compared to procedural. Also there is about another 15Kloc of C in the interpreter and another 10K that wraps an event layer on libuv.
Bear in mind that Urbit can also be seen as an ACID database on the log-and-snapshot principle - events are transactions. You can of course export application state to some other storage system, but you don't have to. (If you change the source code, your app reloads - you do have to write a function that maps your old state to your new state, though.) So there is a lot of strange futzing around with mmap at the bottom.
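For anyone unfamiliar with the log-and-snapshot pattern, here's a minimal sketch of the general idea in Python (illustrative only, nothing like Urbit's actual on-disk format): append every event to a durable log before applying it, so current state can always be rebuilt by replaying the log, with a snapshot merely checkpointing the replay.

    import json

    # Minimal log-and-snapshot sketch (illustrative; not Urbit's format).
    # Events are transactions: append to the log first, then apply.
    class EventStore:
        def __init__(self, log_path):
            self.log_path = log_path
            self.state = {}

        def commit(self, event):
            with open(self.log_path, "a") as log:    # durable append first
                log.write(json.dumps(event) + "\n")
            self._apply(event)

        def _apply(self, event):
            self.state[event["key"]] = event["value"]

        def replay(self):
            self.state = {}                          # or start from a snapshot
            with open(self.log_path) as log:
                for line in log:
                    self._apply(json.loads(line))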
Peter Thiel is known for his financial support of both neoreaction and various types of out-there futurism; I wouldn't be surprised if he's involved, and I'm sure he's not the only jillionaire with similar interests.
On the other hand if someone told me that a certain angel is funding Urbit, I would probably have guessed that it was Peter Thiel. It seems like just the kind of ambitious/ridiculous and politically loaded project he would be interested in.
I have no idea whether Urbit is as significant an idea as Mr. Moldbug suggests, but it's too interesting not to try out at least.
Sure. He funds Eliezer Yudkowsky through MIRI; "friendly AI" futurism is at this point a branch of the "Dark Enlightenment". Thiel himself has famously said "I no longer believe that freedom and democracy are compatible".
The first sentence above is completely false. Eliezer Yudkowsky has explicitly stated that he has nothing to do with the reactionaries, and the latest stats on the friendly AI community (as gathered at lesswrong.com) show about 2% of members are neoreactionaries. Further, with FHI at Oxford, FLI at MIT, CSER at Cambridge, and the Superintelligence book being endorsed by everyone from Elon Musk to Stephen Hawking to Morgan Freeman, calling MIRI/friendly AI a branch of the reactionaries can't be anything but a deliberate attempt to smear people's reputations.
I'm on the record as having several severe criticisms of EY/MIRI, but being neoreactionaries is not one of them.
EY has made a point of saying that he has nothing to do with Moldbug, but ideologically there isn't much room between Moldbug-style techno-monarchism and the 'friendly AI' utopia.
I'm referring more generally to the "Dark Enlightenment" than just neoreactionaries. In that group I include pretty much any group focused on alternatives to liberal democracy, with carve-outs for Marxists and fascists. EY fits comfortably into that category (see his "Politics Is The Mind-Killer" tract).
EY definitely fits comfortably in the gerrymandered definition of Dark Enlightenment that you just created, as does a vast number of people, including the Pirate Party, for instance. EY himself would agree with you on this.
Whether this is what most other people mean with the terms you just used, is another point entirely.
"It's definitely air travel. It's not exactly flying."
This phrase has an Alan Kay-ish quality, and was enough to pique my interest. Anyone seriously trying to execute on a project that addresses this concern seems worth following. Who knows, you might even succeed :)
i find it an interesting exercise to explain to non-technical folks that i have several virtual computers running in the cloud... the looks on their faces say it all.
I've been watching this since it started. It has some magical atmosphere around it and you never know whether it's serious or not; whether it's a real technical project or some kind of artistic performance.
Earlier versions of the site featured a larger quantity of Moldbug's baroque, trollish prose style. Plus there was an extended argument about language design, due to the author being present in the thread.
Frankly, I'm disappointed. It used to be clearly the personal work of a lunatic. Now it feels more like a quirky open source project.
The choice of 0 for true and 1 for false is _wrong:_ under the Curry-Howard correspondence (and above), a type corresponds to a false sentence when it is not inhabited, like 0, and to a true sentence when it is. The Curry-Howard correspondence is fundamental, and so that is the correct way to go, absent evidence in the other direction.

I don't know whether Nock is a better foundation for a programming language than the Lambda Calculus, but it is at least as elegant—I'm glad someone is seeing where it leads.

I'm a fan of internet freedom, which often requires anonymity, and so I have my doubts about an OS that preserves everything as a feature of the design.
IANAM, but really, I thought that if there was one great lesson of 20th-century mathematics, it's that nothing (ie, no system of axioms) is fundamental and superior to all others. For instance, Church-Turing equivalence does not tell you that Church's model of computing is more fundamental than Turing's, or vice versa.
If you look at where lambda comes from, it comes out of this same project of metamathematics that originates in the 19th century with people like Frege - how do you describe math in terms of math? Then it was discovered that this project could be repurposed as a definition of computing. Reusing old standards is a thing that happens, but sometimes you'd come up with a different approach if you were starting from scratch. As I've pointed out elsewhere on this thread, the shared theoretical foundation of lambda does not lead to a family of languages that are compatible in any meaningful sense.
It was probably a bad decision to make 0 true and 1 false, but not for mathematical reasons. There is always a lot of C coupled to Urbit, and what ends up happening is that the easiest way to handle this problem is to write C in a sort of dialect in which 0 is true and 1 is false. This is slightly cumbersome, and the perceived elegance of 0 == true isn't at all worth it. However, it's also nowhere near awful enough to revisit the question, which would be quite tricky considering the self-compiling nature of the system!
It's been sorta mentioned elsewhere on the thread, but there is another (IMO simpler) mathematical intuition behind 0:1 :: false:true that doesn't involve any lambda fundamentalism. It's the algebraic analogy disjunction:conjunction :: addition:multiplication :: union:intersection :: ... which also turns up pretty often in computing.
For instance, if you've got anything like regular expressions, then you've got something with a structure where the "unit" (trivial match) is an identity for sequencing, and the "zero" (failed match) is an identity for disjunction and a zero for sequencing. It's not exactly a formal argument for preferring booleans to loobeans, but a failed regex match sure feels like a "false" to me.
I don't doubt that it's not worth changing at this point, but don't throw the semiring baby out with the lambda bathwater.
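The analogy is easy to check mechanically. A throwaway Python spot-check (with True as 1 and False as 0, "or" tracks addition saturated at 1, and "and" tracks multiplication):

    # Boolean semiring spot-check over all four input pairs.
    for a in (0, 1):
        for b in (0, 1):
            assert (bool(a) or bool(b)) == bool(min(a + b, 1))   # or ~ +
            assert (bool(a) and bool(b)) == bool(a * b)          # and ~ *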
Aside from the zero/one thing, if I'm understanding this right, Nock is based on the SKI combinator calculus, where the combinators are
S: λxyz.xz(yz)
K: λxy.x
I: λx.x
From there, you can express anything from the lambda calculus, and vice-versa. So I think it's reasonable to say that Nock is just a machine-friendly way of expressing the lambda calculus.
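Whatever the exact lineage, the core reduction rules are small enough to sketch in a screenful of code. Here's a naive, illustrative Python evaluator for the basic opcodes as I read the published spec (no hints or jets, no error handling; atoms as ints, cells as right-nested pairs) - a toy, not the real runtime:

    # Naive Nock evaluator sketch (opcodes 0-5 only; illustrative).

    def slot(axis, noun):                 # / operator: tree addressing
        if axis == 1:
            return noun
        if axis in (2, 3):
            return noun[axis - 2]
        return slot(2 + axis % 2, slot(axis // 2, noun))

    def nock(subject, formula):           # * operator: evaluation
        op, arg = formula
        if isinstance(op, tuple):         # [a [b c] d] -> cell of results
            return (nock(subject, op), nock(subject, arg))
        if op == 0:                       # slot into the subject
            return slot(arg, subject)
        if op == 1:                       # constant
            return arg
        if op == 2:                       # compute a new subject, evaluate
            return nock(nock(subject, arg[0]), nock(subject, arg[1]))
        if op == 3:                       # cell test
            return 0 if isinstance(nock(subject, arg), tuple) else 1
        if op == 4:                       # increment
            return nock(subject, arg) + 1
        if op == 5:                       # equality of a computed pair
            head, tail = nock(subject, arg)
            return 0 if head == tail else 1
        raise ValueError("opcode not implemented in this sketch")

    assert nock((42, 17), (0, 1)) == (42, 17)  # [0 1] is identity
    assert nock(41, (4, (0, 1))) == 42         # increment the subject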
I'm just speculating, but the HN upvote/downvote arrows are tiny, and it's easy to click on the wrong one accidentally. (This is especially true on tablet devices.) Your comment clearly doesn't deserve a downvote, though, so I've thrown you an upvote to help make up for it. :-)
Nock works perfectly, of course, but it's poorly optimized. The various "jets" (optimized implementations) in our runtime haven't been adequately tested and there are probably a few corner cases, errors, etc, where the behavior doesn't match. Even the new Nock runtime (I just rewrote more or less the whole interpreter) is probably a little too bound to Urbit specifically.
But the system has been self-hosting for quite some time, and there's certainly no doubt that the general approach works...
It's probably the second time that I come across Urbit and found it attention worthy, but... how hard would it be to explain at least some of the concepts in plain English and with some nice intuitive drawings/schemas on the side, really?
I mean, this http://doc.urbit.org/doc/hoon/tut/1/ is supposed to be a "tutorial" but it's definitely not what anyone else would call a "tutorial". It's more like a "philosophical introduction" to a PhD paper, which would be ok if labeled as such. A tutorial should be about "how do I do X using Y, without bothering to really understand Y" because this is what a tutorial is about, a "mostly wrong" "mental shortcut" that you take in order to gain some kind of "feel for how something works" before actually delving deeper and reading up on the theory.
Our documentation has advanced since this was posted. I'd be interested to talk more out of band, but can't find your email. If you're up for it I'm galen at tlon.io.
This is fun stuff. Reading around a bit I found this:
"While nowhere near the simplest such automaton known, and certainly not of any theoretical interest, Nock is so stupid that if you gzip the spec, it's only 374 bytes. Nock's only arithmetic operation is increment. So decrement is an O(n), operation; add is O(m * n)... "
Don't they mean 'subtraction' rather than 'add'?
edit: This is so fascinating, it has me totally enthralled. Think Smalltalk meets Lisp meets some wild-eyed programmer who knows just how to appeal to the general frustration most programmers should have (do they?) about the state of our art.
Best post on HN in a long time, very curious how this one will turn out in the long term. May all your ships come in ;)
edit2:
Digging around a bit more: Peter Thiel and a bunch of others have apparently invested in this through a vehicle called 'Tlon', https://angel.co/tlon (the Thiel reference is that Thiel backed John Burnham, who is a co-founder of Tlon).
I'm not sure on the specifics here, but I'd have thought that subtraction would be O(n). To perform n - m on a simple register machine with only increment, you just count from m up to n (and count from 0 up to n, outputting 0 if you reach n, since subtraction is partial).
EDIT: It seems they are going for a naive implementation where subtract just repeatedly calls decrement, so yes, that's going to be O(m*n).
The n is just n: it's the number you are subtracting from. Of course, it's impractically slow, but I believe it's just example code in an opening tutorial for the Nock language.
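To make the cost concrete, here's the tutorial's counting-up idea sketched in Python (illustrative, not actual Nock): decrement finds its answer by incrementing a counter from 0 until the successor matches n, which is O(n).

    # Naive decrement using only increment and equality tests: O(n).
    def dec(n):
        assert n > 0
        i = 0
        while i + 1 != n:
            i += 1
        return i

    assert dec(42) == 41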
There are only 10 operators in Nock, and when you get to the last one, you see there's a space for a "hint" to the compiler. This is how Nock becomes fast in Hoon. You start with an extremely slow, naive implementation, and then you write something called a "jet" which is proven (I think usually a weak proof is considered sufficient, like by fuzzing) to be semantically equivalent to the nock that it replaces. Then you do your best to make sure you use that exact expression anywhere you mean that, and don't reinvent it again later.
Jets are currently written in C, since the vere platform is written in C. Learning how they work and how to write them is on my bucket list.
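The dispatch idea itself is simple, even if the real mechanism is more involved. A toy Python sketch of the concept (hypothetical; not Urbit's actual runtime interface): a hint names a computation, and if a native implementation is registered under that name, the interpreter uses it instead of the slow formula.

    # Toy jet dispatch (hypothetical; not Urbit's real interface).
    def naive_dec(n):                      # slow O(n) "formula"
        i = 0
        while i + 1 != n:
            i += 1
        return i

    JETS = {"dec": lambda n: n - 1}        # fast native replacement

    def eval_hinted(hint, naive_fn, arg):
        # The real system must verify the jet is semantically equivalent
        # to the formula it replaces; here we simply trust the registry.
        jet = JETS.get(hint)
        return jet(arg) if jet is not None else naive_fn(arg)

    assert eval_hinted("dec", naive_dec, 42) == 41   # jet path
    assert eval_hinted("sub", naive_dec, 42) == 41   # fallback path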
Is it just me, or is anyone else reminded of the story of Project Xanadu? (A great read[1], if you have an hour to spare.)
Especially this part:
> Shapiro also discovered that the group had been working together so long it had developed a kind of private slang. It took months to comprehend what the programmers were talking about. Most of them were book lovers and trivia mongers who enjoyed developing a metaphor based on obscure sources and extending it via even more unlikely combinations. For instance, the object in the Xanadu system that resembled a file was called a bert, after Bertrand Russell. With files called bert, there had to be something called an ernie, and so in the Xanadu publishing system, an ernie was the unit of information for which users would be billed. To understand the details of Xanadu, Shapiro had to learn not only the names for things, but also the history of how those names had come to be.
They had me going in the Nock spec until "We should note that in Nock and Hoon, 0 (pronounced "yes") is true, and 1 ("no") is false. Why? It's fresh, it's different, it's new. And it's annoying. And it keeps you on your toes. And it's also just intuitively right."
There is one ground truth, origin, fixed point like the North Star: zero. There are an infinite number of possible falsehoods: all nonzero numbers. But Urbit chooses 1 as the canonical false.
But on POSIX systems, there's a reason for that. There's only one success (Since "success" should do the same thing every time), but many types of errors which can be indicated by the return value. You could argue that 1 should be success, and >1 should be failure, but that's a minor quibble.
Conversely, here it's just because "it's different". I feel that this is a bit of a shame - some of the other parts of the project appear quite interesting, but making fundamental decisions in downright wrong ways just to mess with expectations comes across as silly, to say the least. Why deliberately increase the learning barrier and drive people away?
I absolutely understand your reaction but, believe it or not, I remember wondering as a child why it was the opposite. At the time it seemed completely intuitive and natural to me that 1 should be "false" and 0 "true".
However, years later I was introduced to boolean algebra, where 1 should be true and 0 false, if we want multiplication to be "and" and addition "or". And it feels right, because the intersection of two sets is (intuitively) multiplication, and the union addition.
So, yeah, after all these years it doesn't seem like a good idea to me either.
Compare to probability: the probability of an element being in one set AND another distinct set is the product of the two probabilities (which are, in reality, actually defined by sets).
"Pairs" (and, respectively, the cartesian product) are a bit more complex, and not so closely related to the deductive side of boolean algebra.
x ∈ (A ∩ B) = (x ∈ A) * (x ∈ B), if "x ∈ A" equals 1 when x is in the set A and 0 when x is not in the set A. Using "indicator functions" like this also gives you a nice formulation for probability and integration, etc., that falls apart if you use 1 to represent x ∉ A.
edit: I should add that I'm not claiming "you can't build measure-theoretic probability from this formulation of booleans" is a strike against the project. Just addressing the math question.
I don't see how you lose boolean algebra, you just need to flip * and + in your equations.
x ∈ (A ∩ B) = (x ∈ A) + (x ∈ B) (and)
x ∈ (A ∪ B) = (x ∈ A) * (x ∈ B) (or)
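A quick mechanical check of those flipped identities (a throwaway sketch; 0 reads as true, any nonzero result as false):

    # Loobean spot-check: with 0 as true, "and" tracks addition and
    # "or" tracks multiplication.
    for a in (0, 1):
        for b in (0, 1):
            assert ((a == 0) and (b == 0)) == (a + b == 0)   # and ~ +
            assert ((a == 0) or (b == 0)) == (a * b == 0)    # or ~ *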
which is useful when you define integrals and expectations:
E g(Y) = ∫ g(y) f(y) dy = ∑ᵢ g(cᵢ) P(y ∈ Aᵢ)
where Y is a random variable with density function f. Any integrable function can be approximated as the limit of step functions, so this is a well-behaved way to get a general theory of integration.
Of course, one could replace (y ∈ Aᵢ) with 1 - (y ∈ Aᵢ) if one wanted to use "0" to represent the event (y ∈ Aᵢ) and "1" to represent its complement and not affect the truth of the math, but then there will be lots of terms floating around just to convert the notation into the terms that you need for the math.
The project is based on a fundamental insincerity, which makes me suspicious. All material about Urbit makes a big point of their minimal spec, again so in the linked piece: "The spec fits on a T-shirt and gzips to 340 bytes."
What do people expect when they read a thing like that? I don't know about you, but I'd expect that I could ignore the obfuscated strangeness of their higher-level languages etc. and just implement stuff to that minimal spec.
So you expect you can do that. But you can't. That minimal spec might as well not exist; you can't just read it and go on to implement, say, a programming language on top of Urbit.
"But you can!" you might object. Try it. Read nothing but that spec and implement, say, a tiny BASIC on top of it. Compile some simple programs. Run them. Now, what happens? Nothing, for now. But look, my processor core is heating up. Something must be happening! Just wait. Wait. Wait. No, still nothing. Wait some more. Just a few years, maybe? No, still nothing (except heat). Etc.
So no, that elegant spec doesn't give you anything but a way to heat your computing cabinet.
Realizing this, I put Urbit in the "suspicious, may be a con" bin. Hasn't escaped from there, yet.
> The project is based on a fundamental insincerity, which makes me suspicious. All material about Urbit makes a big point of their minimal spec, again so in the linked piece: "The spec fits on a T-shirt and gzips to 340 bytes."
Last I read something like this (I don't know if it was about Urbit or something else), it turned out that there was no IO included in that spec. So, useless for any real-world purpose, and as you said, insincere.
The IO is events, and the events come from Unix (for example: signals, files and sockets).
Urbit is not meant to replace Unix. "They call us, we don't call them" is the fundamental precept in play here. I assure you there is I/O in Urbit: the vanes include an HTTP server, a Hoon-interpreting shell which accepts keyboard input from a terminal, and a UDP layer for facilitating ship-to-ship communications. All of that is written in Hoon, which compiles into Nock.
Some long term plans are crazier than others. I would like to have a machine that runs nock directly on the bare metal, but that doesn't mean it fits into the current architecture.
So far to my knowledge everyone running Urbit is doing so on a Unix system of one flavor or another. The point of "they call us, we don't call them" is not that Urbit never calls into Unix, it's that the set of reasons to reach out to Unix for one of the existing OS facilities is limited, restricted from the list growing any longer than absolutely necessary.
If your persistent disk filesystem evaporates from under you in the middle of operation, maybe Urbit can be prepared to deal with this; but as long as you have it there, you might as well use it to allow the user to muck around with your internal representations in a way that is already familiar.
One of the problems Urbit doesn't currently seek to tackle is "a new text editor." Surely this is for the best, not because "a new text editor" is not an interesting problem, but for reasons that I think are already obvious to you.
The terminal (dill), HTTP server (eyre), socket UDP layer (ames), and shell (batz) are really expressed using the language in the spec. The filesystem (clay) has a reflection in your unix filesystem, but it's also really clay. In a very real way, even if they are not fully expressed by the Nock spec, they are built using the primitives that are entirely laid out there in Nock 5K. Hoon compiles to nock.
I feel like I'm trying to argue against "no true scotsman" but I don't know how else I'm going to convince you that the code "is really from mars," which seems to be what I want to tell you whether or not it's what you're really asking here.
This goes even deeper. Let's just go on and assume that Urbit's spec-bearing T-shirt includes everything you need for IO. Now consider this promise from Mr Yarvin, found in the older Urbit thread someone linked to: "And actually, if you don't like Hoon [Urbit's strange, obfuscated high level programming language] you can build your own language on this platform. So long as it compiles to Nock [purportedly documented on that T-shirt]"
So you hook up your new Nock back end to your favorite compiler, painstakingly hand-crafted to generate Nock code perfectly conforming to the T-shirt spec. You compile a few programs. You run them. Heat radiates from the CPU... and nothing else happens. You wait, patiently at first, then way beyond patience, yet nothing (except heat). Don't you feel cheated, just a little?
Why would you write a new Nock back-end at the same time as a compiler for a new language? Even if you did, you'd want to test the new interpreter with the old language, and the new language with the old interpreter, before going any further.
There is certainly no free lunch in this system. If your CPU is radiating heat, it is spending a bunch of time in Nock formulas which could be better optimized. For instance, maybe you rolled your own decrement, so the interpreter can't match it with the jet it uses for stock Hoon decrement.
That's okay, because the jet matching mechanism is not in any way specialized to the code. All you have to do is (a) get your compiler for the new language to emit a proper jet hint for your things-you-want-to-be-fast, (b) write correctly matched C implementations, and (c) add them to a static array.
Or to put it more briefly: a well-designed Nock implementation is not in any way coupled to the Hoon or Arvo layers.
One of the not-quite-finished things about our interpreter right now (other than that it interprets the tree directly and so gets about 1.5 Nock mips, which is pathetic) is that jets have to be compiled into the kernel and aren't opened by dlopen. Ideally we'd even load something like a PNACL/LLVM object, and we'd get it the same way we get the Hoon source, ie, via Urbit itself.
A given Nock implementation is intimately coupled to a matched Hoon compiler.
Attempts to write anything just to T-shirt spec and run on a given Nock-based platform are almost certain to fail. The real spec you'd be required to write to is implied by the platform implementation. Marketing problem: That real spec is not very elegant and it doesn't zip down to a handful of bytes. It doesn't fit on a T-shirt. It's not a stable specification. It doesn't describe an open platform. Yet Urbit marketing copy wants me to believe all those things.
If you s/is/could be/, I would agree. I would say s/is/should not be/. That is, it is possible to screw this up and create "intimacy" (high coupling), but obviously I recommend against it.
Put it this way: when it's properly done, the jets (specific optimizations) are coupled to your compiler and libraries about the way your display server is coupled to your graphics driver. As a programmer you can't, don't and shouldn't know what if any GPU is executing your GL. Or at least, if you do - mistakes have been made.
Added to this is the fact that it's possible to obtain at least decent performance just by using standard basic math, eg, decrement. Yes, you can roll your own decrement - but at this point you are just being perverse.
So T-shirt spec is like the written law of Urbit; strangers can read it, we can't help that. But of course there are the unwritten laws and rules of polite speech, here in Urbit just as everywhere else. There are special occasions (you just know!) where you must use exactly the right phrases (you just know!)
Ahh, but she slips, uses the wrong phrase. Now everybody knows: she's a pervert! Rolling her own decrement, the little... Quick, freeze her out of the conversation. She shall speak no more.
But look, isn't she hot? Really, it could be nice... but alas, there's just no way around Urbit gov's cryptographic chastity belt locks...
My Binary Lambda Calculus is specified in a few paragraphs in http://www.ioccc.org/2012/tromp/hint.html and includes basic IO. It blc-zips to only 29 bytes (size of self-interpreter).
The 25-lines-of-obfuscated-C interpreter is on the very shirt that I wear on my (clickable) homepage picture.
Except that some jurisdictions in the world don't have the concept of public domain, so, in those places, this code is legally in a very weird place. It's much better (IMO) to assert copyright, and then release under the most permissive license possible.
It just sounded as if being public domain somehow makes the project more ambitious (that is, more difficult to complete than if it weren't public domain), so I probably just misunderstood.
I definitely was thinking about something like this, but not exactly this. I mean, yes, the only way to save ourselves from damnation is to finally announce "enough of the '70s!" and rewrite everything from scratch, so the idea is dear to me - but this actual implementation of the idea is… maybe "weird" is the right word. I'll continue reading, but I already have a feeling that once I'm completely familiar with it, I'll be hoping it doesn't succeed.
I hope it will succeed, but not as the only thing that succeeds - I hope some will take inspiration from it to build something even better. Challenging the status quo is a useful act in itself, and showing that you can in fact reboot is useful as well.
That said, the more I read about it (a couple of hours so far) the more I'm thinking it will not succeed because of some of the weirder philosophical decisions that have already been cast in stone.
It's basically a variation on the 'landgrab' theme. Think bitcoin or the domain name system with half the space grabbed up by the ruling corporation, with an arbitrary 'ingroup' and a very large 'outgroup'.
You'd think that by now the web, with its unbridled success, had rammed home the point that open is better - but I guess the mobile walled gardens have whetted the appetite of future wannabe corporate overlords.
Anyway, I still wish them the best of luck; that's an economic move on my part, since it costs nothing, and I suspect they are self-limiting enough that they will not achieve the world domination they are dreaming of. See also: the singularity.
I prefer to think of it as: you can be "free as in beer" or "free as in speech," but not both. Sith Lord Zuck has set you free as in beer, but not in speech. Urbit will set you free as in speech, but not as in beer.
The difference is simply that while there are a limited number of Urbit identities (or at least memorable identities, ie, 32-bit "destroyers") in the world - and therefore they can't possibly be made free as in beer - neither I nor my evil minions have any practical power over a destroyer once we create it.
Moreover, if you genuinely think of any individual or organization involved with Urbit as evil, there are plenty of independent carrier-holders you can get a destroyer from. (~del, for instance, will ship destroyers when we do, I think. He's some guy in Rochester whom I don't know from Adam. So, we'll let him serve the first 16 million people who want to escape from our evil dictatorship.)
Yes, by default the normal way to get onto Urbit will feel like the normal way to get onto Facebook. However, even if you are initially issued and hosted by us, you can move your image at any time to any other host, or to your own machine. Currently there's no way to stop using your issuing carrier for network services, such as firewall hole punching, but this is obviously a 1.0 feature. By design, there is no sort of leverage you can't escape from.
So Urbit strikes a sort of balance between governed and ungoverned networks. The limited address space will hopefully make it economically impractical to abuse the network for profit - a spammer is always a Sybil attacker. But as a destroyer you are not ruled by any master you can't easily escape from. The only thing you can't escape is your own reputation. Which you shouldn't be able to escape from.
We should probably highlight this difference a little more in the doc...
I will hawk destroyers, although I don't know if I can sell them, for reasons I can't go into. Also offering Yachts, to anyone who thinks that a destroyer is just too militaristic.
I can't decide if it's a science fiction novel written in code form, a programming environment, a game, a radical creative take on computing and networking ... Whatever it is, it's genius.
It seems like a work of genius, but it may be many years before an artificial intelligence can fully interpret its unique significance.
In fact it reminds me of a massive book on cellular automata that I once picked up, which went to incredible lengths interpreting the potential of dynamic patterns in 2D space. It was a monument to extreme intellectual rigour and dedication, if nothing else. If only I could remember the title.
I know enough to be interested. Not enough to know if it's nonsense.
Although, I read this:
"A pier is an Urbit virtual machine that hosts one or more Urbit identities, or ships. When you run bin/vere -c, it automatically creates a 128-bit ship, or submarine. Your name (a hash of a randomly-generated public key) will look something like:
It's fairly distinct. A destroyer is like owning a bunch of submarines, in a way. This is the one point where the model kind of breaks down at this layer, though.
Basically, it can't run out of address space because of the abundance of subs.
At present it's a bunch of programs that run on UNIX systems. I guess the Hoon interpreter is a C program that uses readline, talks to the network over sockets and the like. But everything on top is quite self-contained and in principle one might remove the layers underneath.
But I'm not sure that's necessarily the aim. As I understand the project's aims (and I'm not involved, I just read all the docs a few months back), it's more important that the Internet of the future is not a tangled mess of technologies that are insecure on different layers. Then it doesn't matter so much what OS people's local machines are running.
I hope the essay at the URL below is still relevant, it seems to describe the aims rather well. "The result: Martian code, as we know it today. Not enormous and horrible - tiny and diamond-perfect."
Or as I sometimes say: "on the bottom it's a new kind of math, on the top it's a new kind of social network." This is kind of shameless hype but not as much as you might think.
Sounds like a 10-year research project, right? Actually I started working on Urbit in 2002, as a sort of "unsupervised PhD thesis." I figured it'd be done by 2008 or so.
We seeded a startup last fall and are busy turning a prototype into a product. You can now write a pretty decent web app in Urbit, but it's still too raw to stand the light of day. For instance, the network works, but we often run global flag days when we shut the whole planet down and change the protocol.
One of the main problems with Urbit is that extraordinary claims demand extraordinary evidence. If you're going to ship something this crazy, it has to work extremely well right out of the box. We had a sort of abortive non-launch last year when the project kind of got accidentally leaked, and decided we had to "unlaunch" and go back into quiet mode. I hate selling things I can't quite deliver.
But can you please explain why you couldn't use existing math (language and compiler) on the bottom layers (1 and 2) before replacing them by your own? That would help me understand the level of your ambition :)
One, as we know from Church-Turing equivalence, lambda and infinitely many other models of computing have the same expressive power. That doesn't mean they have the same practical utility, though.
Lambda in its Lisp incarnation is actually a rather poor substrate for something like a Lisp machine, I think, because it doesn't layer very well. You can define quite a simple Lisp model, but when you want to turn it into a practical Lisp, you don't add another layer - you grow hair on top of the existing layer. You grow a little hair, you get Scheme; you grow a lot of hair, you get Common Lisp.
I've never seen a lambda model (Qi/Shen perhaps a partial exception, but even there the underlying model is not very simple) that layers a complex language on a simple kernel. I think this is because lambda defines abstractions like symbol tables and function calls, which are user-level language features, in the computational model. The bells and whistles get mixed up with the nice clean math.
Another example is the fact that a modern OS should present itself to the programmer as a single-level store, meaning effectively an ACID programming language in which every event is a transaction. So, you're not constantly moving data across an impedance mismatch from transient to persistent storage, each having its own very different type system and data model.
But, if you're building persistently stored data structures designed to snapshot efficiently and remain maintainable, you really want your data model to be acyclic and not require GC. This goes in a very different direction from almost all the dynamic language work of the last 50 years.
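For what it's worth, here's a minimal sketch of the single-level-store idea in Python - entirely my own illustration, not Arvo: state is a pure function of a durably appended event log, so every event is a transaction and there's no transient/persistent divide to marshal across. And because the state value is an acyclic nest of dicts, snapshotting it is trivial.

  import json, os

  LOG = "events.log"

  def apply_event(state, event):
      # Pure transition: new state from old state plus one event.
      state = dict(state)
      state[event["key"]] = event["value"]
      return state

  def commit(event):
      # Append the event durably *before* acting on it; a replay
      # after a crash reconstructs exactly the same state.
      with open(LOG, "a") as f:
          f.write(json.dumps(event) + "\n")
          f.flush()
          os.fsync(f.fileno())

  def replay():
      state = {}
      if os.path.exists(LOG):
          with open(LOG) as f:
              for line in f:
                  state = apply_event(state, json.loads(line))
      return state

  commit({"key": "greeting", "value": "hello"})
  print(replay())  # {'greeting': 'hello'}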
Or, for instance, if your system is designed to work and play well in a network world, it really ought to be able to be good at sending typed data over the network. And validating it when it gets to the other side. Your type system ought to be able to do the same job as an XML DTD or JSON schema or whatever. Well... this wasn't exactly a design requirement when people designed, say, Haskell.
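As a toy of what "validating it when it gets to the other side" can look like (again my own sketch, nothing Urbit-specific): the receiver rebuilds untyped bytes off the wire into a declared type, rejecting anything that doesn't conform - the job a JSON schema does, but inside the language's type system.

  import json
  from dataclasses import dataclass

  @dataclass
  class Message:
      author: str
      body: str
      seq: int

  def validate(raw: bytes) -> Message:
      # Untyped bytes off the wire -> a typed value, or an exception.
      data = json.loads(raw)
      if not isinstance(data, dict):
          raise TypeError("expected an object")
      try:
          msg = Message(author=data["author"], body=data["body"],
                        seq=data["seq"])
      except KeyError as e:
          raise TypeError("missing field: %s" % e)
      if not (isinstance(msg.author, str) and isinstance(msg.body, str)
              and isinstance(msg.seq, int)):
          raise TypeError("field has wrong type")
      return msg

  print(validate(b'{"author": "~del", "body": "hi", "seq": 1}'))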
I could go on - there's a lot of stuff like this that is built the way it is because it made very good sense in the 60s, 70s, 80s or 90s. But the requirements really have changed, I think.
Can you, for example, spell out what you mean by "you really want your data model to be acyclic and not require GC"? What is cyclic about the "data model" in e.g. Go or Lua?
I can field that one. If A has a reference to B, B has a reference to C, and C has a reference to A, then we have a cycle. Cycles play merry hell with garbage collectors and reference-count destructors. Languages that force references into a strict tree or DAG get cheap destruction in return - see C++ without pointers, for example: I pop a local object off the stack and it's gone, along with all its children - no non-deterministic GC pause or delay, no Pythonic cycle check. Rust is built somewhat on this principle.
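You can watch this distinction directly in CPython (a small demonstration using only the standard gc module): an acyclic object dies the instant its last reference does, while a cycle lingers until the cycle detector runs.

  import gc

  class Node:
      def __init__(self, name):
          self.name = name
          self.next = None
      def __del__(self):
          print("freed", self.name)

  # Acyclic: freed immediately by reference counting.
  a = Node("a")
  del a                  # prints "freed a" right away

  # Cyclic: x -> y -> z -> x. Refcounts never reach zero.
  x, y, z = Node("x"), Node("y"), Node("z")
  x.next, y.next, z.next = y, z, x
  del x, y, z            # prints nothing yet
  gc.collect()           # only now are x, y and z freed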
Right. The way I'd put it is: in system software, the iron law is that you don't make the programmer pay for anything she doesn't need to buy.
What you pay (GC) for the privilege of having pointer cycles in your data is grossly disproportionate to the benefit. Yes, there are a lot of things that are O(k) with pointer cycles and O(log n) without them. If what you buy from this optimization is worth the cost of GC, you are doing a lot of these things... a lot.
So "you really want your data model to be acyclic" means, "you want your programming language to forbid construction of cyclic data structures"? So, for example, connected graphs are not objects you can represent?
You can - but arguably being forced to define the distinction between direct hierarchical references and symbolic links helps constrain your data model and make it more rigorous.
Note that relational databases don't have back pointers either. All references between tables are symbolic. Last century, people tried to make databases that were general pointer graphs (google "network database" or "object database") - broadly speaking, it was a disaster.
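So, to answer the connected-graph question above concretely: you can still represent an arbitrarily connected, even cyclic, graph - you just store the edges symbolically, the way a relational database would, and the value itself stays acyclic. A quick sketch:

  # The graph being described is cyclic (a <-> b), but the
  # representation is a plain acyclic value: tables keyed by
  # symbolic ids, resolved on demand like a join.
  nodes = {"a": {"label": "first"}, "b": {"label": "second"}}
  edges = [("a", "b"), ("b", "a")]

  def neighbors(node_id):
      return [dst for src, dst in edges if src == node_id]

  print(neighbors("a"))  # ['b']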
Here's an idea: maybe anyone accusing someone else of being "too ambitious" should share one of their own projects that demonstrates exactly the right level of ambition.
No, some projects can be too ambitious, including even recklessly or dangerously ambitious. But I'd rather hear that judgement from someone with a reputation for appropriate, successful ambitiousness, and with some supporting reasoning.
The grandparent comment comes from a pseudonym linked to no evaluable projects. It offers a costless, totally-generic pooh-poohing of a real project as "too ambitious". But that project is actually shipping code that works, with 126 contributors, many with a known history of contributions in related spheres.
Against that, the comment even uses an appeal to "HN standards"! As if we should all be discounting this sort of stuff on its face.
I'd prefer "HN standards" encourage such ambition, backed with code – not casually mock it with a ascii-smiley.
Again, the contributor list in github is WAY misleading, because it accidentally somehow pulled in all the contributors of the libraries we bundled (eg, libuv). We need to fix this, I think.
> But that project is actually shipping code that works
I get that this is more than just vaporware, but getting past the vaporware stage doesn't necessarily prove that something isn't too ambitious. When something is attempting to enact a paradigm shift, the final goal is adoption/usage, not just a working/functional product.
You seem to be using "too ambitious" as a synonym for "unlikely to succeed". But that's something different.
"Too ambitious" implies someone shouldn't even be trying for this goal. That's a corrosive attitude, and I'd like it kept-in-check by a requirement that sources of such negativity show their reasoning/experience/work.
Never mind the fact that the parent post believes it's possible to quantify "exactly the right level of ambition"; the poster believes that the only people who should be "allowed" to comment on whether or not something is too ambitious are people currently undertaking ambitious projects themselves.
This smacks of privilege. Not everyone is or can be in a position to pursue ambitious projects. To say that these people should be barred from commentary is ridiculous.
I've not suggested anyone be 'barred' from commentary.
Rather, if you want to dump generic negativity on a real, delivering project, you should justify why your negativity is relevant, for example by documenting your own related efforts.
If you want your cynicism to be respected, then yes, that's a privilege that needs to be earned. You can always say it, but it should be called-out as empty bullshit until backed with detailed reasoning or experience.
And what is the reason why an existing language and compiler doesn't suffice for 1 and 2?
For example, Haskell and GHC took decades to become what they are now (a powerful language and compiler). It seems quite wasteful to rebuild that work. If absolutely desired, Haskell can also be converted into combinator form, and vice versa, so why not use it? Seriously.
Because it's incredibly fun to work on? It's a brilliant VM once you wrap your brain around it, and the hackers behind it got to really try something new?
Judging by the documentation, Urbit looks to be less about leveraging pre-existing work and more about reinventing computer science and software engineering from the ground up.
They already basically have all of the above, as a working tech demo. They actually had it like a year ago or more, but bits are missing or in progress still.
Interesting distribution of contributors [1]. Usually what I see on most projects is one or two guys contributing most of the code since the start of the repository, with minor contributors coming and going. This contributors chart seems to show different guys taking the helm at different points in time, though.
BTW, since I know how much the hackernews crowd appreciates baseless speculation (sorry!), I'd add this: I wonder if Ryan Dahl is involved with this project. Working on something like this would be compatible with some of the things he wrote about before disappearing [1].
When I read through the urbit git page and it finally clicked to me what this is and what it could mean, I literally said out loud in my room "Oh, that's really fucking cool."
"More specifically, Urbit is a personal cloud computer. Right now, the cloud computers we use run OSes designed for minicomputers in the '70s. An ordinary user can no more drive a Linux box in the cloud than fly an A320. So she has to sit in coach class as a row in someone else's database. It's definitely air travel. It's not exactly flying."
I doubt very much that the average person wants to either fly their own airplane or manage their own cloud computers, no matter how simple you try to make either of them.
What's so easy about driving a car? Over 30,000 people die per year in the United States due to automobile accidents. And that's after requiring every driver get a license, and teaching basic driving skills as part of our high school system. Driving a car isn't all that easy, and we're terrible at it. There's a reason that Google is inventing self-driving cars, not personal airplanes.
And the amount of possible simplicity is constrained by the problem you're trying to solve -- you can't make flying a plane as simple as driving a car, because the car only has to navigate in two dimensions while the plane has to navigate in three. If a car stops, the car's just sitting there, whereas if a plane stops, gravity is going to pull you down in a likely-fatal incident. You can't make a plane as simple to use as a car without changing it into something other than a plane.
I suggest the car / airplane analogy was already strained before we got here.
Driving a car is easy, as evinced by the large number of people capable of it. Whether driving a car successfully (ie, without dying) is easy is obviously a matter for debate. With 0.000015% failing in this severe way, it might be argued that driving a car and dying is not particularly common, and could (no disrespect to the people involved) be thought of as a rounding error.
We have no evidence that Google isn't trying to invent self-flying or personal airplanes.
Try suddenly stopping your car on the autobahn, and see how tranquil the 'just sitting there' experience is.
Anyway, perhaps we are all missing the underlying point - an overly complicated solution (to an otherwise straightforward problem) has been the received wisdom for decades, and a re-think from fundamentals is almost definitely worth the effort.
"Anyway, perhaps we are all missing the underlying point - an overly complicated solution (to an otherwise straightforward problem) has been the received wisdom for decades, and a re-think from fundamentals is almost definitely worth the effort."
I don't know that the solution is overly complicated or that the problem is straightforward. Think of every human being as their own little Y Combinator startup -- everybody is their own little Me, Inc. Or Me, LLC. Whatever. Most people follow SOME version of "outsource everything that isn't a core competency." Like, it varies a lot from person to person -- some people go to McDonalds, some people are making their own meals from ingredients picked up at the local farmer's market, but very few people (not even most farmers) are fully self-sufficient farm-to-table for most of their meals.
So as a result of this, most people don't think of their computer in terms of it being a computer. And that's the most obviously computer computer they interact with! There's all sorts of even more abstracted away computers they deal with -- smartphones, cloud computers, etc. Most people don't care about computers, and don't in and of themselves want computers. They want to do things like "write a document," "share some family pictures with friends and relatives," "play a video game," so on and so forth. For those people, not only don't they CARE about HOW the computer is accomplishing those things, they get very upset whenever they see the wizard behind the curtain. They want all of those things abstracted away from them. From the point of view of the most typical use case, trying to make computers easier to use by making sure the specification fits on a t-shirt seems to be rather beside the point.
Sure, I entirely agree that there's a large spectrum of user types (or people) out there.
You used the phrase 'most people' three times there to describe a quite likely common user type for off-the-shelf consumables. I am quietly confident the guys working on this are not targeting 'most people'. Or if they are, they are quite candid about it not being ready for them yet.
> You don't think the average person would want to fly their own airplane if it were as easy as driving a car?
Most people don't enjoy driving. Presumably (for most people) flying would be just like driving - exciting, interesting, and liberating at first, but would eventually become a dull and boring chore.
This sentence made me laugh more than it should have. Or maybe exactly as much as it should have, given that I spent a year drawing a Tarot deck that riffs on the Crowley/Harris Thoth...
This seems super interesting. One of the people behind it is David Irvine [1], who apparently "was Designer / Project Manager of one of the World's largest private networks (Saudi Aramco, over $300M)." [2]
MaidSafe looks more similar to the original idea of Wuala. But they noticed they basically needed storage servers anyways, because the p2p nodes on people's computers and laptops had too much churn to keep enough copies online.
"Comedy writer" isn't quite how I'd describe it, but I'm surprised no one has mentioned that Urbit is run by the guy who writes/wrote under Mencius Moldbug, who is perhaps one of the more controversial people to achieve internet fandom. If you read through all the archives on his site and have no opinion (probably somewhere in the "I hate him" or "I find this reasonably convincing" statistical clusters), then your sense of outrage has been diminished, I would say.
Note: I myself am unconvinced by his writings, and find him to be just another instance of the ironic nature of American culture, eager to avoid the appearance of naivete by abandoning all hopes that are fragile. Do with that what you will.
> A painting is a form of language, but can a painting be said?
I suppose if you use a very broad definition of language, that might be true, but in the sense of natural languages, paintings are not generally a form of language. They may be a form of symbolic expression, but that doesn't make them a language.
The programming languages you're familiar with don't consider themselves to be similar to natural languages. This one apparently does, and why not? Why do we even use the word "language" to refer to our computer UIs when we balk at mundane language phenomena like context-sensitivity?
Don't really get what this is technically, but conceptually I do understand it and would love to back a company like this. Technologies like this could kill Microsoft (Windows) and many others.
The only bad thing seems to be that everyone is dependent on a server run by one company. Why is there a need for this kind of registry? Wouldn't a distributed system be better for this?
Man, I just wrote you a big long explanation of how you can update your filesystem even if your parent ship is gone, with shell command examples, but then my battery died before I had a chance to post it, and I'm totally not rewriting it now because I'm sure you don't care that much, and the real punchline is all of this information is about to be obsolete anyway.
The long and the short of it is: there are carriers (I have one), who are enumerated in the source code (by public key) and are not dependent on each other, except for the sake of being able to route to child ships. So if ~zod is down, and everyone depends on ~zod, the network is indeed hosed. Everyone with active links can continue talking to each other, but nobody new can find anyone, and nobody can find anyone new. Every ship depends on a carrier for routing.
By convention, every time you launch from your pier, you get a new random port number and any ship that was looking for you at your old address needs to hear about your new address from a higher-up ship.
So it is distributed, but finitely distributed. There are only 256 carriers allowed, and they are the only ships that are really independent. Though there has been talk about fleshing out how ships can be separable from their "sein" domain, or parent ships, currently almost everyone is from the same lineage and thus depends on the same pair of servers for pretty much everything regarding the network.
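If I've read the era's docs correctly, the hierarchy falls straight out of the address widths (8-bit carriers, 16-bit cruisers, 32-bit destroyers, and so on), and a ship's parent is just its own address truncated to the next-smaller class. A sketch, with a made-up address:

  def parent(addr, bits):
      # A ship's parent lives in the next-smaller address class:
      # e.g. a 32-bit destroyer's parent is its low 16 bits.
      if bits <= 8:
          return None          # carriers have no parent
      return addr & ((1 << (bits // 2)) - 1)

  d = 0x12345678                  # hypothetical 32-bit destroyer
  print(hex(parent(d, 32)))       # 0x5678: its cruiser
  print(hex(parent(0x5678, 16)))  # 0x78: its carrier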
This "centralized" design is indeed a core feature of the network, and it even relies on DNS to be bootstrapped, another centralized system. Now, can you show me a decentralized system that doesn't need any of that? I don't know exactly how magnet links or tor, or freenet work, but I am skeptical that they are honestly less centralized than this when it comes to bootstrapping.
> The first time a client joins the DHT network it generates a random 160-bit ID from the same space as infohashes, and then bootstraps its connection to the DHT network using hard-coded addresses of clients controlled by the client developer.
This sounds really quite like the process of creating a submarine, and reaching out to the carrier ~zod. The fact that the init process is hardcoded to trust ~zod doesn't change that you can really init your ship's filesystem from any other ship, provided you can reach it.
Carriers are all found through the existing DNS infrastructure. Other ships can be reached "in-band" by asking a carrier for an introduction. Without such a seed or hardcoded list, a brute-force search, or some kind of broadcast mechanism, bootstrapping a distributed network is not actually possible.
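The bootstrap logic being described reduces to something like this (a generic sketch, not Urbit's actual ames protocol; all names are illustrative): a new node knows nothing but a hardcoded seed, asks it for peers, and only then can reach anyone else.

  # Generic peer bootstrap; the seed plays the role of ~zod or a
  # DHT's hardcoded router. ask_for_peers stands in for a real
  # network request.
  HARDCODED_SEEDS = ["seed.example.net:13337"]
  known_peers = set()

  def ask_for_peers(address):
      return {"peer1.example.net:13337", "peer2.example.net:13337"}

  def bootstrap():
      # Without at least one reachable seed (or a broadcast
      # mechanism), there is no one to ask: you stay alone.
      for seed in HARDCODED_SEEDS:
          known_peers.update(ask_for_peers(seed))

  bootstrap()
  print(sorted(known_peers))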
... ~zod is your parent ship. Without him, you simply have no forwarding address.
If your batz wizardry is sufficient and ~zod is not there (neither of which we are counting on being true when you run Urbit for the first time), you will still get the same result in your terminal: you can reach out and contact other carriers and their friends, even without ~zod. They just can't find you. And you'll need to find another way to bootstrap your clay (filesystem) if you care about having local clay.
Think about the difficulty of coordinating multiple stakeholders, compared to the ease of explaining the "benevolent dictator" model that most of computing is really based on today. If the only thing that ~zod needs to do is point submarines at one another and facilitate the existence of other signing ships under his domain, that's very easy. Doing those things without ~zod's cooperation will require some further thought and more explanation.
I am a carrier owner (~del) and I count on ~zod to make it easy for new subs to find me. It's very convenient this way. But, anyone who knows how to type:
:~del/main=/bin/hi ~dalnel
... can also reach out to my carrier and run "hi" (become my neighbor) to find ~dalnel with my help, and exchange keys with him (~dalnel is the cruiser, like ~zod's ~doznec).
~zod never needs to get involved. There is, however, no magic under the hood. My carrier ~del works exactly the same way, except he is not preferred in the same way by the submarine client implementation.
Uber-hackers of the future will write code for nanobots that cause them to self-assemble in to an android, the android will then write a program that generates a system like this that describes a new programming paradigm.
Or, something.
This reminds me of the Newton-Leibniz debate on creation.
I always find the forceful use of female gender pronouns to be very odd. If you are conscious of, and disagree with, the general use of male gender pronouns, wouldn't you want to push the use of gender-neutral pronouns?
Personally, I'd rather see the use of some of the new made up neutral pronouns than thinking swapping male to female pronouns is a viable long-term solution.
All of that stuff works now. If you're asking how long until it's supported, I'm not an employee of Tlon, so don't ask me. I can tell you that most of this stuff is still very slow, even if it works.
Check out the demo app: you can click a button and increment a number on the page, and it generates an event that gets reflected (through something like AJAX) through the server, back out to as many subscribers as are currently viewing the page in their javascript-supporting browsers. It happens almost instantly.
I have to apologize that I have no short instructions to get you to this demo; it exists, but I suspect it may rust further before building on it becomes a priority. I'm not sure if you're meant to be impressed by this demo, but there's also a Twitter API client you can use to tweet, which is more obviously useful if not quite as whiz-bang.
You'll find the code in your pier/ship/main/app/demo/core.hook
To reset the counter (wipe the app state), at your command-line, do:
:wipe %demo
I just tested it, I can see that it still works, but like I said none of this is guaranteed to go on working. It's all very alpha.
This is called a %gall app. It's always running because the %gall vane runs the main/app/ apps in the background without intervention, like radio/core.hook that runs the server for the :chat app. See you in :chat maybe, if you have questions about how this works you will certainly be welcome.