Are “jets” a good idea? (2017) (lambda-the-ultimate.org)
97 points by networked on July 17, 2018 | 75 comments



It occurs to me that asm.js (before WebAssembly) was full of jets. That is, they wrote weird idioms that the interpreter could understand and optimize, while technically keeping the semantics of the language itself (JavaScript). The most visible example was someVar|0, which was a way of asserting that the expression would be an integer. Not really a type declaration, since it didn't ensure someVar was a number, only that the expression itself was an integer.
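A minimal sketch of the idiom, assuming only standard JavaScript semantics (TypeScript annotations added for readability; not a complete asm.js module):

```
function addInts(a: number, b: number): number {
  // |0 coerces the value to a 32-bit integer (ToInt32), so an engine that
  // recognizes the idiom can use integer arithmetic throughout, while a
  // plain engine still computes the same result.
  a = a | 0;
  b = b | 0;
  return (a + b) | 0;
}
```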

In the examples though, clearly the goal is to have some provability in the core language, make that provability feasible with a small language, and keep that as the language becomes further optimized. It seems subtly different than an optimizing compiler.


Somebody mentioned "failure of full abstraction" on HN a few months ago (actually they said something a bit different), which I tracked down to a late 90s paper by Martin Abadi. It refers to a situation where the object code is exploitable because it doesn't fully reflect the semantics of the source code.

One use of jets, in the secure code setting, is to provide a core language that is so simple that it can obviously be compiled in a "fully abstract" way.

That is my understanding, anyway. The notion of fully abstract compilation seems to be quite hot, and it comes originally from Abadi's paper: http://www.hpl.hp.com/techreports/Compaq-DEC/SRC-RR-154.pdf


Both BuckleScript and PureScript use the someVar|0 trick.


This is already a standard piece of how JITs work today. See the discussion of Java intrinsics and Array.fill() here [0].

> An intrinsic function is applied as a method substitution [...] and the compiler will replace it's apparent source-level implementation with a pre-cooked implementation. In some cases intrinsics are sort of compilation cheats [...]

> The intrinsics are all set up in vmSymbols.hpp and if you look, you'll see Arrays.fill is NOT on the list. [...] Because it is something like an intrinsic... [...] What is it then? Arrays.fill is a code pattern substitution kind of compiler shortcut, basically looking for this kind of loop:

```
for (int i = 0; i < arrayB.length; i++) {
    arrayB[i] = b;
}
```

And replacing it with a call into the JVM memset implementation.

[0]: http://psy-lob-saw.blogspot.com/2015/04/on-arraysfill-intrin...


That has bugged me since the first time I read about Nock. JITs use intrinsics to work around inefficiency of generated code when the maintenance burden of the intrinsic is worth the performance boost, and the grand plan is to make a VM so terrible that you need a shitload of intrinsics for programs to terminate before the heat death of the universe? Years later I still haven't gotten past that initial bafflement. It just makes no sense at all.


> JITs use intrinsics to work around inefficiency of generated code when the maintenance burden of the intrinsic is worth the performance boost, and the grand plan is to make a VM so terrible that you need a shitload of intrinsics for programs to terminate before the heat death of the universe?

Put it this way: the JVM is perhaps the most sophisticated JIT on the planet, maybe a million lines of code, the product of decades of programming research and billions of programmer-hours, and it still needs intrinsics to improve performance. So how about we skip the billions of programmer hours and just use lots of intrinsics?

I don't know if it's going to work, but it's interesting enough to be worth a try.


I'm baffled by your conclusion and that of your parent post.

> So how about we skip the billions of programmer hours and just use lots of intrinsics?

This is very nearly the equivalent of: Why don't we skip writing a compiler and just USE glibc?

My post was about the opportunity to forgo the use of an intrinsic and instead do pattern recognition as part of compilation. The point is that a "jet-like" pattern-matching phase already exists in many compilers today; my example was from the HotSpot JVM's JIT compiler, but GCC will match the for loop and INTRODUCE a call to memset too.


I was using "intrinsic" inappropriately, I think, to mean both substitution of method calls and substitution of bytecode patterns. Yeah, the JVM already does the second thing. It's not roses for them, either[0]. The thing I've been looking for in all this discussion is an acknowledgement that the JVM team has been doing this and has encountered engineering challenges to the point that they're willing to change javac to simplify a jet, plus an explanation of how these weird VMs are so different that none of that matters and using a shitload of jets is perfectly fine.

[0] http://openjdk.java.net/jeps/280


I made this case in 2013.

Tlon didn't want to hear it.


I wanted a reminder of Nock, and looked for an example, but the first hit is the enormous pile of hoops to jump through to make "decrement" work! https://urbit.org/docs/nock/example/

Reminds me of nothing so much as INTERCAL.


Have you taken a math course at any point in your life?


If you assume the program is being passed through a JIT (a thing that is already common for interpreted code, and which also tries to guarantee that it doesn't change the resulting program's semantics), then one can think of jets as simply explicit hints built into the JIT about how to optimize particular idioms that are well known in the corpus of programs the JIT was designed to work with.

This is exactly how GPU drivers get gradually more optimized for existing games: the driver's shader compiler actually has an ever-growing library of shader-source-code patterns, with hardware-vendor-optimized replacement sections.
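As a toy illustration of that approach (the pattern below is invented, not taken from any real driver's table):

```
// Hypothetical driver-side table of known shader source patterns and
// hand-optimized replacements, applied before compilation.
const REPLACEMENTS: Array<[RegExp, string]> = [
  // rewrite pow(x, 2.0) into the cheaper x * x
  [/pow\(\s*(\w+)\s*,\s*2\.0\s*\)/g, "($1 * $1)"],
];

function preprocessShader(source: string): string {
  return REPLACEMENTS.reduce(
    (src, [pattern, replacement]) => src.replace(pattern, replacement),
    source
  );
}
```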


"Are optimizing compilers a good idea?"

Look, if you replace one representation with another that is _correct_ (to within the human author's reasoning) but faster, you have the identical discussion, and an answer (yes).


Well, there's another aspect: "Is it OK to design a language that will be unusable without an optimizing compiler?"

Running any normal language without an optimizing compiler is totally fine and routinely done. Here we're talking about languages that might be thousands or millions of times slower.


Compared to an optimizing compiler, a fresh implementation of a language would probably also be thousands/millions of times slower, unless the codebase implements the optimizations itself.

And then you end up speeding up the compiler to reach a decent state, to speed up the code.

But the time to reach an initial compiler is much faster with a jet-based one (there's barely anything to the language).

And then you end up adding jets to reach a decent state.

I'm not sure there's a significant difference in this regard. The main question, I think, is which is easier/simpler/faster to optimize into a proper state and, if compiler optimization turns out to be better, whether it's sufficiently better to beat the ease of initial implementation for a jet-dependent language.


It would be great if optimizers made code 1000x or 1000000x faster, but apart from some edge cases like the removal of ineffectual loops, disabling optimization in any compiler shows that ~10x is more realistic.

My day job is writing compilers (for traditional languages), and I have no doubt that it's significantly easier/simpler/faster to implement the jet based one.

However, the jets only apply to known algorithms, so it's really more like an optimized library than a compiler technique. Code that uses the "library" will be really fast with little effort, but anything that requires functionality not directly found in the library will get the 1000-1000000x overhead, while the traditional compiler will run it all with ~10x overhead.

I totally agree that it's the perfect fit for long-lived, highly heterogeneous, correctness-first applications like blockchain, but I don't see how it'll be useful for general-purpose languages.


This is the problem I have with Lisp, but exponentially worse. Languages like this try and stretch ever-closer to an imaginary "perfectly pure language". It's an interesting exercise, but in the process they get further and further away from concepts that are palatable to the human mind, and end up esoteric and minimalist beyond usefulness. We sometimes forget that programming languages are not designed for computers, but for humans who want to express ideas to computers.


These really minimal languages aren't intended for programmers; they're compiler targets for higher-level languages. So the debate about jets is really about how best to implement language runtimes.


I wasn't aware of that, thanks for the clarification


Arguably, that "perfectly pure language" has existed longer than programming has - lambda calculus.


There aren't any concepts here that are "palatable to the human mind." It's ok to have a bias, but don't be so foolish as to pretend it is universal.


There are some very abstract common threads in how the mind conceptualizes the world. See Gestalt Psychology: https://en.m.wikipedia.org/wiki/Gestalt_psychology

There's also a fascinating study that investigated the ways people intuitively try to articulate algorithms when they haven't been biased by experience with existing programming paradigms: https://dl.acm.org/citation.cfm?id=373422


>There are some very abstract common threads in how the mind conceptualizes the world.

Of course, this means nothing pertaining to a preferred language, formal or natural, for encoding such notions.

>a fascinating study that investigated the ways people intuitively try to articulate algorithms

Try to articulate algorithms. And generally fail, because they don't have any experience expressing complex systems in their shoddy, careless notation.

Naturally, when the physicist sits down to express her algorithm using Lie theory and tensor calculus, she's really a moron, because one should obviously grab an inexperienced layperson and have them dictate your notation for you. The natural consequence of this is that you'll see things you couldn't have seen before and discover many novel and interesting structures in the language, through a process of lay magic.


There's no need to be rude.

Just as using a more computer-compatible language (assembly, at the extreme) more optimally utilizes the computer's available resources, using languages with cognition-compatible paradigms more optimally utilizes the brain's available resources, reducing cognitive load and enabling anyone - intelligent engineer or layperson - to express and mentally parse greater and more complex concepts than they otherwise would have been able to.

The power of the Lisp syntax comes from the fact that everything looks the same, but this also makes it harder for a human to parse. For example, using different symbols to wrap function arguments than you do to wrap procedural blocks gives the brain something to latch on to, reducing the cognition necessary to differentiate between the two. This is also why code editors make the same types of tokens the same color.

Two big takeaways from the linked study were that event-based programming and set-based operations are intuitive. These aren't layperson schlock, these are real paradigms. Every notation, every language, is made-up. There's nothing fundamentally more true about one or the other; each offers different metaphors for expressing complex concepts. The question is what those metaphors should be, and in my experience many programmers don't think about general human cognition when deciding which to prefer.


When my brother was a little kid he had trouble learning to tie his shoes. He wanted to use velcro shoes, so he got them. Does he wear velcro shoes today? No! He learned to tie his shoes, even though it wasn't an intuitive process. Partly out of necessity, but also because tying knots is a generally useful skill (aside from attaching shoes to feet) and because learning new things is an important part of life.

If you stick to intuitive operations, you'll suffer a lack of perspective. You never expand your intuition. Better metaphors are those that more closely encapsulate the facts of the problem at hand, not those that more nicely fit human mental models. The human mental model is wrong by default. Aristotle's intuition about gravity was wrong, and a significant amount of damage would be done by forcing discussion of gravity to match what he found intuitive.

People can learn new notations. They do it every day. People can adapt to complex metaphors. But complex problems will never simplify themselves to fit human preconceptions and intuition.

If you're dealing with a set-like problem, then a set-based language is good. If you're dealing with an event-like problem, then an event-based language is good. Human intuition is not an important factor, save for our bias toward seeing a given problem as set-like or event-like. An arbitrary problem may be better modeled using another frame of mind. But you won't see that if you're requiring that problems be solved using 'intuitive' language.


And yet wildly different paradigms are used to solve the same problems every day. A video game (for example) can be programmed imperatively, or reactively, or with OOP, or following a compositional pattern, and so forth. All of these metaphors are broad enough to express computation in general. What I'm saying is, when designing general languages, it's worth consciously factoring in the general ways in which the human mind works. It's not the only factor, but it's one that's often ignored.

To be doubly clear: I'm not saying that unintuitive, highly-formalized syntaxes are useless, just like assembly isn't useless. Each is crucial for certain uses. What I am saying is, they shouldn't be necessary for (and often aren't even well-suited to) solving the average programming problem. We don't write software in assembly any more, but we haven't evolved as far beyond its paradigms as we like to think.

There is a huge number of problems that software engineers work on every day, whose domain (what's being modeled) is fully understood by many laypersons (or nontechnical subject-matter experts). And yet those laypeople lack the ability to express their ideas to a computer, and those engineers waste huge amounts of effort translating those simple ideas into needlessly esoteric code. This is a fundamental failure in language design, and it needs to be addressed.


> fully understood by many laypersons

Lol, like what? Go have a layperson check if a file exists. Or pass a substring of UTF-8 to a C function. Or check equality of two floating point calculations. Or multiply two signed integers together. Or fix anything that doesn't work due to performance problems.

And when they do all these wrong, ask them to show you how to debug and correct a useful but 'simple' program in production.

Lay people can't express their ideas to a computer because their ideas don't work in a computer. The code is esoteric because logic is esoteric, and the human brain is just not good at reasoning about edge cases without years of practice and experience.

>I'm not saying that unintuitive, highly-formalized syntaxes are useless, just like assembly isn't useless.

Ok, good. So why did you object to the existence of LISP? Why would you say it is a mistake?

In any case, my overall point in all of this is that there's a difference between simplifying a language and simplifying the learning experience. We should always be looking to make things easier to learn, but that doesn't actually require changes to the language. You can do that by finding better ways of teaching, better documentation, and better explanations for how and why things work.


Do you mean Scheme, specifically? Because I have a hard time seeing that in lisp in general. Racket's small but fairly pragmatic, Clojure is pretty big, and Common Lisp is frankly a bit of a kitchen sink language.


Not your parent, but I personally didn't find S-expression syntax to be superior. Treating code as data and vice versa is metaprogramming, and it should be something you know how to do when you need it: easy enough that you're not really going out of your way to do it when you do need it, but weird enough that, when you come across it, you notice it and immediately see that you're going to need to put on your thinking cap to proceed further down the road of understanding the program.

I've yet to see a better approach than Ruby's.


Amusingly, I've yet to hit as much frustration as I have when using a few ruby packages at work. Holy hell is it confusing to figure out where things came from, sometimes.

I don't necessarily care for the data as code, most of the time; but I do like that most everything has a sensible way to serialize to a file. Even better, unless I am wanting to get optimized, the round trip can take care of itself.

Curious what the syntax is for a basic list of key value pairs? Well, it is the same as any list of pairs. Which is the same as any list, unless you are trying to get fancy.

This freedom really is liberating, at times. Yes, some folks have made macros so that they have a syntax to have a different literal for different styles of maps/tables/etc. By and large, life is easier if you don't use those and just stick to the minimal syntax necessary to do this.

Which really helps with your concern. When I can see what the transformation is, I can literally substitute into my mental model what the macro is doing, so that I don't have to juggle a bunch of crap around.

And all of this, oddly enough, lands you into not caring if you are calling to a function or a macro. I have more trust in calling things in most lisps I've worked with, because I can understand both ways in terms of themselves easily enough. With ruby, I'm often wary of using a new method in a source file, because I really can't trust all of the interactions that are assumed for it to work out.


I'm not sure how much of our respective experiences is due to familiarity. For example, I have REPL power tools and a really good understanding of Ruby's object model to help me figure out what something's doing. So that almost never concerns me all that much.

Whenever I pull in a new gem, I usually budget a few minutes to wrap my head around the semantics of how the gem is expected to be used. Occasionally the documentation isn't good enough and I need to source dive.

What I never have to deal with is new syntax. No matter how Ruby is being used, the syntax is always the same comforting, pleasing-to-read near-prose. What changes when you use metaprogramming in Ruby is semantics. You can't add new syntax to the language unless you actually change the language.

Like you, I find people misusing metaprogramming. For me usually this is fairly easy to detect. If I look for the source of a method and it's dynamically generated, then showing the source will take me to where it was generated.

I won't pretend to have chosen Ruby over Lisp because I liked the language better. I chose it because I figured it would help me get a job better. But as time went on I found myself at home with Ruby.

I don't think it would take me all that long to get used to any given Lisp. But I don't believe it actually gives anyone superpowers, at least, not superpowers you can't have with Ruby. I have yet to hear a lisper actually articulate a solid advantage for code as data.

When there isn't any functional reason to choose between alternatives, all that's left is the aesthetic. Which is where all those parentheses finally become important.


Hmm, you seem to be indicating that people use macros to create new syntax all of the time. I have not actually found that to be the case that often. In particular, there are only two macros I know I use somewhat regularly that do this. LOOP and FORMAT. And even those always follow the same general syntax of lisp. Unless you get crazy, it is not usual to actually use things that drastically change the full syntax of lisp.

But again, in lisp, you probably make use of more metaprogramming than you'd realize, precisely because it is indistinguishable from normal programming.

(I'm tempted to contrast this with some of the DSLs I've seen in the likes of scala and ruby... I have a hypothesis that all of the hoops you have to jump through to make a macro like construct cause people to go all in when they do so.)

Now, I'm also not claiming lisp gives superpowers. Again, I'd almost go the other route. Most lisp is typically quite simple, because you can accomplish quite a lot that way.

Which does bring it back to aesthetics some. The parens just flat out don't bother me; they are a bemusing joke that never actually made sense. Too often, the difference is minimal compared to most C programs: Foo(bar) becomes (Foo bar), which isn't that big of a deal. The switch from infix to prefix for basic math takes a bit of getting used to. However, I find it much more pleasant to deal with generics like (+ matrix1 matrix2) as opposed to matrix1.plus(matrix2) in Java and the like. Of course, I was also a huge fan of my HP calculator in college. RPN for the win! :)

So, yes, I do think it comes down to preference. I don't think your preference is wrong/bad. I do think your view of lisp sounds colored more by bemusing jokes than actual problems in lisp code. But, I fully grant my experience with ruby may be colored by an overly ambitious project written in it.


In ANSI CL, the macro related to formatting is called formatter; format is just a function.

The format function can take a function as an argument in place of the format string.

The formatter macro can compile a format string into such a function.

    (format nil (formatter "(~a-~a)") x y)
formatter spits out a lambda that is passed to format; format then passes it the stream, and the remaining arguments.


Indeed, apologies for messing that up. :) My point was more on the rarity of "syntax breaking" macros. And, even that one doesn't really change the syntax much. It is just a very complicated function, for all intents and purposes.


> My point was more on the rarity of "syntax breaking" macros.

Why don't you consider macros like DEFUN, DEFCLASS, COND, etc to be "syntax breaking"?


Not speaking for grandparent, I would say that cond isn't syntax-breaking because it doesn't have to employ grammar-driven parsing to recognize and delimit the clauses and their constituent forms. The nested list structure of cond that you see is the real syntax.

defun could be like this, though its ANSI CL specification is quite screwed up.

The syntax is given as:

   defun function-name lambda-list [[declaration* | documentation]] form*
where the double square brackets are described in "1.4.1.2.1 Splicing in Modified BNF Syntax" which denotes that a list of elements is to be spliced into a larger structure, and the elements may appear in any order. The description of this idiotic bullshit is convoluted, but an example section gives the gist of just how idiotic:

"For example, the expression

  (x [[A | B* | C]] y)
means that at most one A, any number of B's, and at most one C can occur in any order. It is a description of any of these:

  (x y)
  (x B A C y)
  (x A B B B B B C y)
  (x C B A B B B y)
but not any of these:

  (x B B A A C C y)
  (x C B C y)
Note how the B elements generated by B* can be interspersed! So in the case of defun it means that we can have variations like:

  (defun name (args ...) decl doc decl decl decl body...)

  (defun name (args ...) doc decl decl decl decl body...)

  (defun name (args ...) decl decl decl decl doc body...)
There can be at most one docstring, and an arbitrary number of declarations, and they can be mixed in any order.

Note that if we interpret A|B*|C as a simple regex notation, it wants all the B's clumped together; someone really had to work hard to concoct this counterintuitive BNF extension.


That all said, I can't imagine that most folks are surprised by the general shape of most defun statements. The complexities that go into how it is implemented are crazy, but it is not at all like the first time you see a LOOP invocation, is it?


> I can't imagine that most folks are surprised by the general shape of most defun statements.

The general shape of a LOOP expression is just a list of symbols and lists. A complicated LOOP expression that uses many features at once doesn't end up looking very Lispy, but a shorter one like (loop :repeat 10 :do (print 'hello)) shouldn't look at all foreign, especially if you write the keywords as keywords, which many people do.

I guess when people talk about the regularity of Lisp syntax, I think they usually mean that (op ...args) applies the operator OP to the list ARGS. LOOP obeys this very well: the only LOOP syntax I can think of that breaks it is `using (hash-key k)` or with HASH-VALUE. Universally accepted macros break this rule all the time, though. By shoving everything into lists, they totally abandon trying to keep with the (op ...args) form. If you see that regularity as a great advantage of Lisp's, then it seems like you should be much more bothered by the more "mundane" macros than you should be by LOOP.

    (loop :for i :below 10 :do (print i))
Is less "syntax-breaking" than:

    (dotimes (i 10) (print i))


Commonly used loop syntax is replete with syntax that breaks "op args ...":

  when <condition> do
  for <var> in <list>
  for <var0> = <init0> then <step0> and
    for <var1> = <init1> then <step1>
  collecting <expr> into <var>
The normal macros that don't have a flat argument structure, but shove everything into lists are keeping with the idea of leveraging the nesting so that the construct is "parsed on arrival" (POA?) into the macro. All that remains is to access its structure.


Syntactically, those are things that could have been implemented as operators, but aren't; they're just LOOP keywords, and lots of functions take keyword arguments (edit: even if LOOP's are a little different from &key parameters). LOOP avoids using unquoted lists for things other than calling operators.

edit: Oh, right, destructuring, too! I guess that also counts.

Out of curiosity, how would you feel about a macro like this (ignoring that it doesn't enforce any keyword order)?

    (defmacro for (var &key = (then =) while do)
      `(do ((,var ,= ,then))
           ((not ,while))
         ,do))


I was taking "syntax" to be "ubiquitous syntax". To that end, those don't really pose a big surprise.

Cond might. But really, if I said to prepare a list of test-to-action pairs, it has the somewhat obvious syntax: execute the first one with a true test.

Pcond is a bit crazier. But you can go a long way without ever touching that one.

Defclass is powerful, though again, you can go a long way without it. And if you are in a codebase using it, it is probably a touch on the ubiquitous side.

Loop is just famous for having a ridiculously powerful language inside it. Much more so than the others, which, quite frankly look like normal function calls most of the time.


> (I'm tempted to contrast this with some of the DSLs I've seen in the likes of scala and ruby... I have a hypothesis that all of the hoops you have to jump through to make a macro like construct cause people to go all in when they do so.)

I don't think we have any points of contention left to argue over, but I will say that many Rubyists just aren't familiar enough with Ruby's object model and scoping to be able to do DSLs cleanly. It's certainly possible, but there's a learning curve.

Actually I can articulate how to do DSLs in a paragraph. Create a DSL class where all instance methods are DSL calls that set instance variables. The instantiator takes a block, the DSL input, and simply runs instance_exec on the block so it's executed in instance scope. Hand the DSL object with all the ivars to something else to actually do things with. Nothing in DSL behavior should do anything other than set ivars. The only metaprogramming is instance_exec.


Apologies if you took my posts as trying to argue you out of a position. I think the developer happiness is a fairly important point. And aesthetics being what they are, different strokes for different folks. So, I was just chiming in that I have an amusingly contrasted experience.

To that end, thanks for keeping the discussion going! This is a social site, so it is fun to flat out be social. :)


Oh I certainly don't see you doing that, that was why I said "I don't think we have any points of contention left to argue over." It's rather unfortunate that in a textual medium, tone must be inferred from prose. I just wanted to add a bit of extra color.


To be fair, there's a lot of poorly designed, hard-to-read Ruby metaprogramming out there.

I find that the language is flexible enough that I never really reach for metaprogramming when tackling problems.


I've always been amused by how quickly and deeply some people reach for meta programming. To the point that I really just don't get why.


Some Lisps indeed do away with the "minimalist" aspect when it comes to functionality, although I was mainly referring to syntax.


Unrelated, but have LTU's site admins just disappeared? I signed up for an account probably over a year ago and it still hasn't been verified.


Same. Maybe you just have to hijack them at ICFP.


There were some issues with the signup process over the past year. You should email them directly:

http://lambda-the-ultimate.org/node/5535


> Urbit's Nock, which, other than the jet concept, I honestly think is either a strange art project or snake oil, so I won't discuss this further.

Urbit is very real, though whether it will go anywhere or not is a different question. You can run the code today, the language is.... weird as, but there it is, running and doing what it does.


Snake oil != vaporware.


I think the standard term is Idiom Recognition.


Did this end up going anywhere interesting? As written up, it sounds like an attempt to simplify programming that would just turn into a "turtles all the way down" problem.


They haven't made a lot of public gains yet but the project has only recently raised enough money to grow beyond a few developers.


What about debugging? Without an IDE and/or a stepping debugger you're in for a lot of no fun.


Are you referring to debugging in insanely simple languages or to debugging in general? Because I have written code in a text editor and debugged with PRINT (or whatever the language offers) for decades. It was a lot of fun!


The language specification should guarantee that when you implement an algorithm from your algorithm book, you get the same time and space complexity as described in the book, with reasonably small constants. You should be able to hit your big-O target without knowing which "jets" get special treatment by the compiler. That's all I ask for, and it's not too much to ask.


This is basically asm or microcode at a higher level.


Yeah I was pretty sure I've seen this in both Nim and Common Lisp, but don't compile-time macros in general allow for this? A comment even mentions source-text replacement with C functions. So it seems like the "jet" is just a type of macro that is formally specified to behave as a pure transformation...


If macros are "GOTO", a jet is a "COMEFROM". A programmer knows they're calling a macro, and expects that code to change. A jet, meanwhile, affects code that was written without knowledge of the jet.


It sounds like aspect-oriented programming.


It's the opposite.


That's a stretch. If the only dimension you care about is whether or not the language spec is guaranteed to be invariant, sure, but that's essentially nitpicking. One more layer of abstraction gets you an asm->asm transpiler that never changes.


Take microcode: one human-visible instruction gets implemented by many microcoded instructions.

With jets, a string of many very simple human-visible instructions gets translated to just a few actual machine instructions.

So it's the opposite in the sense that a microcoded system expands each instruction you write, while a jet system contracts strings of the programmer's instructions into more efficient code.

As one of the comments says: "The point is that jets create a particularly vicious abstraction inversion whereby a programmer must simultaneously think inside the object language and a kind of metalanguage."

To take advantage of jets, the programmer has to get the shape of the code right as well as the semantics. It's a bit like writing poetry, with the extra requirements of rhyme and meter in addition to the semantic requirements of straight prose. You have to keep an eye on what jets might match the code you write if you are going to get the required performance.

ASM and microcode have none of this intricacy.


> To take advantage of jets, the programmer has to get the shape of the code right as well as the semantics.

Not true at all, because jets are almost never something that exist in the language the programmer is writing in. They exist in the language the compiler is targeting.

Here's a concrete example (something otherwise missing in this thread): the keccak256() hash function in Solidity.

The keccak256() function just looks like any other function, from the perspective of writing code in Solidity.

But when you compile your Solidity program containing a keccak256() call, it compiles not to an inline implementation of keccak256(), or to a single intrinsic EVM opcode for "doing keccak256", but rather to a call to a keccak256-implementing smart contract at a "known" address, created alongside the particular Ethereum network.

That smart contract does have the plain EVM code in it to compute the keccak256 hash for a given input, and if you implemented a "dumb" Ethereum VM for your Ethereum network node, that's what would happen. It would be very slow and expensive to run, but it would work just fine.

But, instead, in less-naive EVM implementations (including the reference EVM), there's a jet: the EVM opcode sequence for "call the known keccak256 smart contract" is pattern-matched, and instead of actually doing that, a native keccak256() function is called instead. (Or, alternately, the definition of the "CALL" op checks the call address, and in the case of the known addresses, calls the native function instead. Same difference.)

The Solidity programmer remains completely unaware of this. Instead, it's a contract between the developers of Solidity, and the developers of (some) EVM implementations, that

1. Solidity will emit code in a structure that the EVM can pattern-match; and that

2. the EVM devs will ensure that such code is valid on all EVM implementations, with or without the jet (in this case by working with the networks to ensure there's a smart-contract in place at the known address that does keccak256 hashing.)

That's all that's required to get a jet working: mutual knowledge of the jet between the developer of a compiler targeting the ISA, and the developer of the interpreter/VM for that ISA.


> it compiles not to [...] a single intrinsic EVM opcode for "doing keccak256",

Sorry, no. That is exactly what it compiles to. Opcode 0x20 (erroneously called SHA3) computes a keccak256. See the yellow paper.

Maybe you meant SHA256 or RIPEMD160, which are indeed implemented as so-called "precompiled contracts".

> That smart contract does have the plain EVM code in it.

It does not. No EVM reference implementation is provided for the precompiled contracts. Pre-compiles do things the EVM is not able to do. It is not possible to have fallback EVM implementations.

> the EVM opcode sequence for "call the known keccak256 smart contract" is pattern-matched

No it's not. Precompiles are handled like any other call and result in a call to something like `evm_do_call(address, input)`. This function then special-cases the addresses of precompiles. AFAIK no one does any kind of pattern matching.

> [conclusions]

The whole thing has little to do with pattern matching or (in)formal agreements between Solidity and EVM devs. Such a thing would be quite annoying.

A much better analogy is that the EVM comes with a built-in library of basic functions, much like `libc` does. And Solidity is aware of this library and offers them using a function call syntax, much like `printf`. The way they are called is not much different from how user-written libraries would be called.
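A minimal sketch of that kind of dispatch, with invented names (this is not actual client code):

```
import { createHash } from "crypto";

// Hypothetical CALL handler: known "precompile" addresses dispatch to
// native code; everything else goes through ordinary interpretation.
type Precompile = (input: Uint8Array) => Uint8Array;

const PRECOMPILES: Map<string, Precompile> = new Map([
  // address and implementation are illustrative only
  ["0x02", (input) => createHash("sha256").update(input).digest()],
]);

function evmDoCall(address: string, input: Uint8Array): Uint8Array {
  const native = PRECOMPILES.get(address);
  if (native) return native(input);          // built-in "library" fast path
  return interpretContract(address, input);  // ordinary EVM execution
}

// Stand-in for the ordinary interpreter path.
function interpretContract(address: string, input: Uint8Array): Uint8Array {
  throw new Error(`no precompile at ${address}; would interpret bytecode here`);
}
```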


> Maybe you meant SHA256 or RIPEMD160, which are indeed implemented as so-called "precompiled contracts".

Oops, yep, that's the one. I misremembered SHA256 as Keccak's SHA3_256. (I've been writing EVM ABI codegen lately, there's a lot of going back-and-forth between them.)

> Precompiles are handled as any other call and results in a call to something like `evm_do_call(address, input)`. This function then special cases the addresses of precompiles.

Yes, that's what I said:

> Or, alternately, the definition of the "CALL" op checks the call address, and in the case of the known addresses, calls the native function instead. Same difference.

There are really two entirely-separate implementation techniques for "jets", and I sort of smushed them together, trying to explain the one using the other. I apologize. Let me be more explicit:

• There's one implementation technique that uses pattern-matching during the instruction decode pipeline stage of a VM or CPU. The EVM doesn't do this one.

• But there's another implementation technique, which the EVM does do, to the same effect. It's also the technique shared by Urbit's Nock VM (and Urbit made up the term "jet", so it definitely applies here.) Under this technique, functions (or in the EVM's case, smart contracts) are loaded into a virtual memory address space (contract address space) at predictable addresses given their content (in the EVM's case this is only true for the precompiles, but in Nock it's for everything†), and then the VM is written to rely on those predictable jump-target addresses (contract addresses), using either a LUT or hard-coded special-casing in the CALL op to find VM-intrinsic native code to jump to when the instruction-pointer would otherwise move to a "jetted" jump-target.

This call-site technique is essentially the same‡ as the pattern-matching case, in terms of its effects on the VM's interpretation speed, the constraints it places on the code, and the level of support a compiler-author must provide if they want to trigger the optimized behavior.

The only difference is that, in the second implementation, you name your bytecode sequences at some level (i.e. give them particular, predictable addresses), such that, rather than pattern-matching on the bytecode itself, you just have to pattern-match on these names when you're calling/jumping, in order to get the "jet" effect.

---

† Nock is not actually a bytecode VM, but a raw AST-level interpreter where the things being "named" with predictable addresses are the AST nodes loaded into the interpreter's memory. Thus, the Nock interpreter goes a lot slower than a regular bytecode VM by default, but in exchange has the ability to "jet" any AST node at random with a native replacement. Given this, Nock could actually add an arbitrary-sub-function-level JIT (expression-level JIT?), despite being an AST-level interpreter that never generates intermediate bytecode. This essentially makes Nock equivalent in potential performance to the "instruction-decode-stage-jetted bytecode VM" implementation, but with fewer, more general "patterns" to recognize, since ASTs are more normalized than bytecode is. (It's like Erlang's parse-transforms, where you're pattern-matching a Core Erlang expression AST and replacing it with a NIF call; but instead of happening as a macro-expand step at compile-time, it happens at expression beta-reduction time for runtime-eval'ed code. This would be costly to pattern-match if you had to look at the expression as an AST tree; but, like I said, Nock gives the AST nodes predictable names based on their content—I think using a cryptographic hash of the subtree with variables hygenized—so you just have to pattern-match on the name.)

‡ The call-site technique is also very similar to a JIT in terms of how the VM's bytecode-level CALL/JMP op is implemented. The difference comes down to how the LUT is being populated—by the JIT from optimized code, vs. by the VM's "special knowledge" of known intrinsic names that un-optimized code will predictably refer to. If you don't know your native-function "names" until you have a look at the code (or a precompiled function cache that came from looking at the code), you've got a JIT. If you know the "names" in advance (like in Ethereum!), you've got jets.
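To make the † footnote's "match on the name" idea concrete, here is a hypothetical sketch; it is emphatically not Nock's actual representation or hashing scheme:

```
import { createHash } from "crypto";

// Toy AST evaluator that "names" a subtree by a content hash and, if that
// name is registered, runs a native routine instead of reducing the
// subtree step by step. Everything here is invented for illustration.
type Ast = number | Ast[];

const nameOf = (node: Ast): string =>
  createHash("sha256").update(JSON.stringify(node)).digest("hex");

// Registry maintained by the VM author: subtree name -> native implementation.
const JETS = new Map<string, (node: Ast) => Ast>();

function evaluate(node: Ast): Ast {
  const jet = JETS.get(nameOf(node));
  if (jet) return jet(node);   // native fast path, same observable result
  return reduceOneStep(node);  // otherwise fall back to plain reduction
}

// Stand-in for the interpreter's ordinary reduction rules.
function reduceOneStep(node: Ast): Ast {
  return node;
}
```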


By your description, a jet is an intrinsic written by people who don't know about intrinsics. Plus the bonus of "if you don't trigger the intrinsic it literally costs you money" which is the kind of language-gotcha innovation I would expect from crypto-whatever.


That’s the way you’d look at it if the jet was there before the code got there. The usual point of jets is to make existing code go faster, without recompiling it (such as in the case where all code is immutable forever.)


Um yes JIT intrinsics do that. Well, the JIT is recompiling as needed (or more often), but your bytecode isn't changing.


Jets exist without a JIT. In fact, the whole point of distinguishing "jets" as an idea is that they're a potential feature of naive bytecode interpreters, rather than of JITing interpreters or of (potentially optimizing) compilers.

I mean, you can think of an interpreter with jets, but without a JIT, as an interpreter which passes the code it loads through a specific kind of JIT that only does native codegen for explicitly specified patterns, and has no "generic" path, instead leaving everything else as calls back into the interpreter.

But that's not actually what's happening, because there is no JIT in such interpreters. JITing necessarily happens either during code-loading, or asynchronously with a profiling thread. Jets, meanwhile, happen within the instruction-stream decode stage of a VM, where the VM has enough of a read-ahead buffer that it can decode a long sequence of plain instructions that fits a given pattern, to an intrinsic. Jets are "recognized" within the VM's instruction pipeline itself, each time the pipeline's state matches a given bytecode-sequence pattern. They're a register-transfer level optimization.

In essence, jets are an implementation technique for bytecode interpreter optimization, alternative and complementary to a full JIT.
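A rough sketch of that decode-stage recognition, with invented opcodes and a toy pattern (not any real VM):

```
// Toy bytecode interpreter that checks a small read-ahead window against
// known "jet" patterns during decode and runs a native routine instead of
// interpreting the sequence opcode by opcode.
type Op = number;

interface VM {
  code: Op[];
  pc: number;
  acc: number;
}

interface Jet {
  pattern: Op[];            // exact opcode sequence to recognize
  run: (vm: VM) => void;    // native replacement with the same effect
}

const JETS: Jet[] = [
  // e.g. an invented three-op "clear the accumulator" idiom
  { pattern: [0x01, 0x02, 0x03], run: (vm) => { vm.acc = 0; } },
];

function step(vm: VM): void {
  for (const jet of JETS) {
    const window = vm.code.slice(vm.pc, vm.pc + jet.pattern.length);
    if (window.length === jet.pattern.length &&
        window.every((op, i) => op === jet.pattern[i])) {
      jet.run(vm);                    // native fast path
      vm.pc += jet.pattern.length;    // skip the whole recognized sequence
      return;
    }
  }
  interpretOne(vm);                   // otherwise, plain interpretation
}

// Stand-in for ordinary one-opcode-at-a-time interpretation.
function interpretOne(vm: VM): void {
  vm.pc += 1;
}
```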

---

Also, I'm speaking about VMs here, but jets apply as a thing you can do just as well in a hardware CPU design, too.

An example of a common "hardware jet" is recognizing a multi-byte no-op sequence (that is, not a multi-byte NOP intrinsic, using a long instruction, but rather a sequence of single-byte NOPs), and making it have the same effect as a multibyte no-op intrinsic of the same size (i.e. given a NOP sequence of length N, the replacement would free up the ALU and other later pipeline stages for N-1 cycles.)

(There's probably another name this technique is known by in the hardware world—I'm not a hardware guy. I'm just highlighting the equivalence.)

And this isn't just a particular kind of microcode expansion, either. Microcode expansion is effectively a kind of cheap, throw-away JIT: CPUs expand their ISA to microcode not during the decode stage of their pipeline, but rather when the instruction pointer's movement causes the CPU to copy a new chunk of a code page into a cache-line. The cache line is expanded as a whole, and the result stored in a per-core microcode buffer. This works for regular instructions, since regular instructions are necessarily cache-line aligned; but the sort of composite, multi-instruction patterns a CPU might want to recognize aren't guaranteed to be cache-line aligned, so they won't get caught by this pass. A "hardware jet", on the other hand—since it happens at the register-transfer level—can optimize these instructions just fine. So CPU designers use both.


Thank you (seriously) for the detailed explanation.

JITs already do this, to my mind. What else are you going to call it when HotSpot sees bytecode for a loop initializing an array and replaces it with a call to memset? Special-casing instruction patterns is applicable whether or not you're doing native codegen. The big difference is that you have these VMs which have decided not to JIT so they need to fix their perf problems somewhere else, because (and Nock is especially bad about this) a naive interpreter is unacceptably slow.


I was talking about someone programming in the very simple language (e.g. Simplicity, as discussed in the LtU conversation) in which the jets do exist. I guess for you that would be a compiler implementor.

Edit: NB I'm drawing on this particular comment: http://lambda-the-ultimate.org/node/5482#comment-95188


What's the best way to say this...

It's a masochistic method of programming where you insert a randomly generated layer of abstraction between the hardware and the programmer, and tell the programmer to make sure to add enough hints to his code to make sure that it runs quickly.

The benefits are unclear.



