Klisp – An implementation of the Kernel programming language (klisp.org)
105 points by ycmbntrthrwaway on March 16, 2016 | 74 comments



> Try that in your programming language ;)

Short-circuiting in Nim seems simple enough:

    template myAnd(a, b: bool): bool =
      if a: b
      else: false


This doesn't quite hit the mark. The example given in Klisp will work with an arbitrary number of arguments. What looks to be the second of two named arguments, e, is actually the dynamic environment from which $and? is called.
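
For reference, the operative on klisp.org is presumably along the lines of the Kernel report's library derivation of $and? (a sketch from memory, not a verbatim copy of the page):

    ($define! $and?
      ($vau x e                                       ; x: the whole unevaluated operand list, e: caller's environment
        ($cond ((null? x)         #t)                 ; ($and?) => #t
               ((null? (cdr x))   (eval (car x) e))   ; last operand, evaluated in tail position
               ((eval (car x) e)  (apply (wrap $and?) (cdr x) e))   ; true so far: recurse on the cdr
               (#t                #f))))              ; a false operand short-circuits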


> e, is actually the dynamic environment from which $and? is called.

I.e. this and operator is an anti-deluvial beast once known in the Lisp world as a FEXPR.

If a compiler is developed, every such operator will have to be reimplemented as a macro.


Context:

Special Forms in Lisp [0] by Kent Pitman (1980) is about FEXPRs vs. MACROs:

> It is widely held among members of the MIT Lisp community that FEXPR, NLAMBDA, and related concepts could be omitted from the Lisp language with no loss of generality and little loss of expressive power, and that doing so would make a general improvement in the quality and reliability of program-manipulating programs.

> There are those who advocate the use of FEXPR's, in the interpreter for implementing control structure because they interface better with certain kinds of debugging packages such as TRACE and single-stepping packages. Many of these people, however, will admit that calls to FEXPR's used as control structure are bound to confuse compilers and macro packages, and that it is probably a good idea, given that FEXPR's do exist, to require compatible MACRO definitions be provided by the user in any environment in which FEXPR's will be used. This would mean that a person could create a FEXPR named IF, provided he also created an IF MACRO which described its behavior; the FEXPR definition could shadow [12] the MACRO definition in the interpreter, but programs other than the interpreter could appeal to the MACRO definition for a description of the FEXPR's functionality.

[0] http://www.nhplace.com/kent/Papers/Special-Forms.html


But, of course, if you just write the macro IF, it's pointless to then write a FEXPR. This is because the interpreter can use the IF macro just fine. Nobody is going to write every operator twice in a large code base, first as a FEXPR and then as a macro. It's extra development work, plus extra work to validate that the two behave the same way and are kept in sync.

Kent is writing very theoretically there and being very generous to the idea.

Single-stepping through macro-expanded code is perfectly possible. There is no debugging disadvantage to stepping through a macro-expanded control-flow operator versus one which is interpreted. In both cases, the single-stepping interpreter can know the source code location where the argument expressions came from and jump the cursor there, providing visual stepping.

Not to mention that compiled code can be stepped through in a source code view; countless programmers have been doing this in C and similar languages for decades. Given that we can write an if statement in C, compile it and step through it in gdb, the position that we benefit from an FEXPR to do the same thing in a Lisp interpreter is rather untenable.


But not a macro in the sense Common Lisp knows them: FEXPRs are first-class objects. They can be passed around as values of variables, whereas macros cannot.


FEXPRs operate on a second-class representation of code at run time: raw source code made of nested lists, which is selectively evaled.

Speaking of which, eval is also a first-class function; that doesn't mean it's a good idea to use it all the time.


The point isn't that eval or fexprs are good to use all over the place. It's interesting as an exercise in language design to see what is possible when they are available.

As for the fact that fexprs operate on second-class data: it's still a win that they are first-class objects. It means you can dynamically pick which fexpr (or applicative operator) to call on a set of arguments, which, like you said, can be selectively eval'd.
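
A minimal Kernel sketch of that dynamic choice (choose is just an illustrative name; this assumes the standard library operatives $and? and $or? are in scope, and isn't tested):

    ; choose returns an operative as an ordinary run-time value
    ($define! choose
      ($lambda (flag) ($if flag $and? $or?)))

    ; the chosen combiner still receives its operands unevaluated
    ((choose #t) #t #f)   ; dispatches to $and?  =>  #f
    ((choose #f) #t #f)   ; dispatches to $or?   =>  #t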


As far as the FEXPR itself is concerned, exactly the same thing is possible with (some-fexpr env arg1 arg2 ...) as with (some-function env 'arg1 'arg2). It's just that you have syntactic sugar there in not having to quote the arguments to suppress their evaluation.

The enabler of interesting semantics is not the FEXPR but the env: that the environment is available to the program itself, reified as an object. We can write code which somehow receives this env as an argument and then use it in eval. (Then it's basically an afterthought that we can put such code into functions, hook them to operator names, and have the interpreter dispatch them for us, and automatically pass them the environment.)

Given access to the environment, we can explore questions like, "what if we dynamically build a piece of code, say, based on some external inputs, and then evaluate it in the environment where it can see the local variables of the current function?"
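
In Kernel that experiment fits in a few lines. A sketch (the names demo and code are mine; Kernel has no built-in quote, and get-current-environment is derivable with $vau if an implementation doesn't already provide it):

    ($define! $quote ($vau (x) #ignore x))                    ; return the operand unevaluated
    ($define! get-current-environment (wrap ($vau () e e)))   ; reify the environment at the call site

    ($define! demo
      ($lambda (n)
        ; build code, as data, that refers to the local variable n by name...
        ($define! code (list * ($quote n) ($quote n)))
        ; ...then evaluate it where that local binding is visible
        (eval code (get-current-environment))))

    (demo 7)   ; => 49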

Ultimately, this sort of thing is entertaining bunk, which could be why it disappeared: the evaluation-semantic equivalent of Escherian impossible waterfalls and such puns and ironies. (I just coined a term: trompe d'eval.)

Or, maybe the ancient Lispers were wrong; was there a tiny baby hiding in the bath water? Was it really just chauvinism (our main program is research into better compilers, and whatever gets in our path is to be pushed aside)?

Possibly, the Algol people and lexical scoping had an influence: lexical scopes encapsulate and protect. You don't want to reveal run-time access to the environment, which breaks the doctrines of lexical scoping, allowing a function to peek into or mutate another's environment, if only it receives that environment as an object. That would have been repugnant to the Wirths and Dijkstras of that heyday.

We have a less powerful version of this in the lexical closure, which binds a specific piece of code to a specific environment, without revealing that environment as an object. The closure is reified; the environment isn't, being considered something lower-level that remains hidden under the hood (and subject to a myriad of implementation strategies which make it hard to model as a cohesive object).


Great response. I admit I am interested in fexprs because of the syntactic sugar; not having to quote arguments means code can look more like words at the top level, and smaller functions can deal with how they are interpreted.

As far as the search for compilers is concerned, I think what is considered powerful notation should be kept around, even if it's tough to compile at the moment.


If you have a compiled Lisp with an interpreter also, adding fexprs creates the interesting possibility the fexprs themselves may be compiled.

Suppose the Lisp is bootstrapped in some other language, like C or assembler. The special operators in the interpreter are written in C. If you write the IF operator in C, and that operator itself needs an if operator, it uses the C if statement or ternary operator. (Obvious, right? No level confusion.)

If you add FEXPRS, they are interpreted code themselves: interpreted code controlling the interpretation of code. If you write an IF FEXPR and it needs an if operator, and you use IF, then you get infinite regress/recursion: while trying to interpret IF, the IF FEXPR calls itself, and then runs into the same situation, calling itself again, ...
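
In Kernel terms the situation looks like this (a sketch; $my-if is a hypothetical user-level conditional): its body has to bottom out in a conditional that is not itself interpreted through $my-if, here the primitive $if:

    ($define! $my-if
      ($vau (test then else) e
        ; The body itself needs a conditional. If this used $my-if instead of
        ; the primitive $if, interpreting any $my-if form would re-enter
        ; $my-if, and so on forever.
        ($if (eval test e)
             (eval then e)
             (eval else e))))

    ($my-if (<? 1 2) 10 20)   ; => 10, and the else operand is never evaluated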

If the Lisp has a compiler and macros, then you can write an IF macro and compile the IF FEXPR (the compiler expands the IF inside its body via the macro). Then, when the interpreter evaluates an IF form, it dispatches a compiled function. When that function needs IF, it's just running compiled code, not recursing any more; the IF FEXPR is only for interpreted code.

FEXPR's can do some "impossible things", and if you want to do those things fast, compiled FEXPR's could be useful.


I think you mean antediluvian


> The example given in Klisp will work with an arbitrary number of arguments

So will this (Racket):

  (define-syntax my-and.v2
    (syntax-rules ()
      [(_) #t]
      [(_ a) a]
      [(_ a b ...) (if a
                       (my-and.v2 b ...)
                       #f)]))


In fact, that ought to work not just in Racket, but in any Scheme implementation that conforms to R5RS or later (or even R4RS plus appendix).

The Klisp authors picked a pretty bad example for demonstrating the power of fexprs. You can think of them as being first-class macros in a way [0].

Joe Marshall demonstrated that fexprs can be divided into two distinct classes: safe and unsafe [1]. He showed that all safe fexprs could be implemented as macros with no loss of expressiveness. (An unsafe fexpr is one that relies on metacircular fixpoints (whatever that means)).

[0]: That's not exactly true. Macros are syntactic transformers whereas fexprs are procedures that can syntactically modify and selectively evaluate their arguments in a given environment. Despite this semantic difference, there's a very large overlap in their use-cases.

[1]: https://www.brinckerhoff.org/scraps/joe-marshall-on-FEXPRS-a...


The first of the two named arguments gets eval'd -- what could go worng?


Easy, and this code helps create the computer that will run your code. It is a 0th-order class language.

  library IEEE;
  use IEEE.STD_LOGIC_1164.ALL;

  entity and_or_top is
      Port ( INA1 : in  STD_LOGIC;    -- AND gate input
             INA2 : in  STD_LOGIC;    -- AND gate input
             OA   : out STD_LOGIC );  -- AND gate output
  end and_or_top;

  architecture Behavioral of and_or_top is
  begin
      OA <= INA1 and INA2;    -- 2 input AND gate
  end Behavioral;


Funny, I just nabbed the IEEE VHDL standard from IEEExplore yesterday.


IO:

    myand := method(
        if(call evalArgAt(0), call evalArgAt(1), false)
    )

    myand(true, 3 println)  # prints 3 to stdout
    myand(false, 3 println) # nothing happens
This should also be possible in TCL. Scala has some kind of lazily eval'ed arguments, too. And, of course, Haskell has this built-in.

EDIT: better example code.


(forgot to mention, this IO code generalizes to any number of arguments)


Cool! I read Shutt's thesis on Kernel years ago, but I didn't know there was an implementation available yet.

Vau interpreters can actually be simpler than lambda interpreters, and make a great playground for language design. I wrote a blog post back in 2012 about converting a simple Python Lisp interpreter into a vau interpreter (http://gliese1337.blogspot.com/2012/04/schrodingers-equation...), and that eventually grew into an undergraduate capstone paper on designing the denotational semantics of and implementing another vau-based language called Vernal (https://github.com/gliese1337/CS598R).

I don't expect any of these will ever become mainstream industry languages, but they have a theoretical elegance and simplicity that makes them fun to play with for the theoretically-minded.


There's also wat.js and my wat-pl port to perl5.

They're vastly fun for prototyping complicated logic in.


Kernel is what I really want Scheme to be. It's like Scheme, but better. And if you're wondering why I say better: more first-class objects is always a good thing.


That's a pretty simplistic explanation. There are lots of ways to mess up Lisp in an attempt to enable the style of meta-programming that Kernel supports. What distinguishes Kernel from these other approaches (many of which have been seriously considered in the past) is mostly what you're not able to do in Kernel. That is, the vau construct still allows encapsulation in a useful way. Kernel may have "more things first class" than Lisp, but that's not all that sets it apart.


That's certainly one way of looking at it, and it IS one of the reasons I like Kernel. Fexprs > macros because of this.


When you get first-class fexprs, you pay for it in performance. They're nearly impossible to compile, and even interpreting them efficiently is a challenge.


Read Shutt's paper, then decide whether he fixed the performance problem. I haven't gotten around to it yet, so I'm not sure, but apparently $vau-calculus has some interesting properties.


I read Shutt's thesis back when he first published it. I'll admit I'm due for another reading. The vau calculus is indeed very interesting, but there are fundamental reasons why fexprs can't be implemented efficiently (at least, not on commodity hardware). Of course, depending on your use-case, they might be fast enough, which is what really matters in the end. (Shutt's homepage offers the "official" interpreter that goes along with the thesis, and even he admits that it's painfully slow).


Indeed. Thanks for letting me know.

So is the problem with fexprs the fact that subexpr elimination can't happen because you don't know what will be a fexpr at optimization time? Or is it something else? Because if it's just that, then why can't a fexpr just be an ordinary function whose canonical name is actually a macro that `quote`s its arguments? I'm assuming I'm missing something, because somebody must have tried that by now.

Or you could just `quote` the call yourself, like TCLers do.


> So is the problem with fexprs the fact that subexpr elimination can't happen because you don't know what will be a fexpr at optimization time?

That's part of it. But the bigger issue is, I think, that you can't even generally compile[0] the arguments before runtime, because you don't necessarily know what expressions they'll be by the time they get evaluated.

> Because if it's just that, then why can't a fexpr just be an ordinary function whose canonical name is actually a macro that `quote`s its arguments?

That's exactly what they are, along with the environment at the call site. Then, when (and if) you need the arguments to be evaluated, you do it explicitly with a call to `eval`. But keep in mind that the fexpr body is free to modify those arguments as data before evaluating them.
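
A small sketch of that last point in Kernel notation ($swap-args is a made-up example): the operand expressions arrive as plain list structure and can be rearranged before anything is evaluated:

    ($define! $swap-args
      ($vau (f x y) e
        ; f, x and y are unevaluated operand expressions; rebuild the call
        ; with the operands swapped, then evaluate it in the caller's environment
        (eval (list f y x) e)))

    ($swap-args - 3 10)   ; evaluates (- 10 3)  =>  7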

> Or you could just `quote` the call yourself, like TCLers do.

Tcl is similarly a very difficult language to compile and/or optimize.

[0]: I mean ahead-of-time compilation here. I don't see any reason why a JIT compiler couldn't be effective.


Okay. So this is essentially the "eval is slow" problem all over again. If that's the problem, then it's endemic to all Lisps. Fexprs just encourage that style of programming.

According to the summary of the Wand paper, though, another problem is that optimizations can't happen without full-program analysis. And now the question becomes: if fexprs can be trivially implemented in Lisp using macros, or just by hand-quoting args, then why don't non-fexpr Lisps have this issue? And if they do, why don't we just add fexprs and first-class environments, and eliminate macros entirely?


> more first-class objects is always a good thing

Only if they're not enforcing some (actually, the worst possible) evaluation strategy.


True, I suppose.


Fexprs do not offer that much vs. mere static compile-time macros, but they make compilation impossible, limiting an implementation to a lowly and buggy interpreter.


Can't build it on Debian, getting:

    /usr/include/features.h:374:25: fatal error:    sys/cdefs.h: No such file or directory
    #  include <sys/cdefs.h>
(I installed uuidcdef, for what it's worth).

Also I don't understand the code for the operator and, isn't it gonna return #t whenever x is null?


Builds just fine on Arch Linux.

Maybe this article might help you: http://askubuntu.com/questions/470796/fatal-error-sys-cdefs-...


Yep, it works, thanks for pointing it out!


Yes, it is. But x isn't one of the operands -- it's a list of all the operands. And (and) should evaluate to #t for the same reason that (+) should evaluate to 0 and (*) should evaluate to 1. (And also why 0! = 1.)


Ha got it, thanks! But what's e then?


The environment object which eval needs to resolve variable references.

This operator definition defines a function which interprets calls to the and operator.


One of Shutt's advisors (on his thesis) was Shriram Krishnamurthi who certainly understands Scheme deeply. Makes me curious to read the thesis in an effort to understand what's new and interesting here.


In Swift:

   func myAnd(lhs: Bool,
              rhs: @autoclosure () -> Bool) -> Bool
   {
     return lhs ? rhs() : false
   }
(Heavily inspired by https://medium.com/swift-programming/facets-of-swift-part-5-.... Not tested; any bugs likely are mine)


I don't know Swift, but it seems[0] that it is not possible to apply this with multiple arguments, like in the original example (e.g. (and a b c d)).

[0] https://stackoverflow.com/questions/29750244/variadic-autocl...


> Kernel is a Scheme-like programming language by John N. Shutt in which all objects are first-class.

Ok, but what problems is this programming language trying to address?


Looking at Shutt's thesis (I've just started), he makes the point that Scheme is pretty first-class, but not entirely: it has a few reserved forms, e.g., if, lambda, quote. This never particularly bothered me, but perhaps this is what Shutt is attacking.


What problems was a Turing machine designed to "address"? Sometimes the things that have the most value are the things that we can't wrap our minds around yet. Think forward.



http://clojure.org/ used to have a macro definition of either (and) or (or) somewhere on one of its basic pages. They reorganized it recently, though, so I can't find it. But yeah, I'm not really sure what Klisp offers over Clojure so far. This sensationalist version of a hello-world isn't really anything new or special.


You cannot pass macros as values, as they do not exist at runtime. (So macros cannot be passed to map/reduce, as opposed to fexprs.)

P.S. msimoni has written about fexprs (and a Lisp dialect with fexprs on JS, wat.js) if you want to read about them some more.


Long ago, when I was learning Clojure and thought "it's so inconsistent that I can't pass `if` to a function," I would have found fexprs interesting. But I've since thought about it more and realized there's absolutely no need for it at all.


> Try that in your programming language ;)

Does this count?

    function and(a,b) { return (a ? b ? b : false : false) }


Now try to run this:

    and(false, console.log("This shouldn't run"))
A critical part of the semantics of the Kernel operator, and of the JavaScript built-in &&, is that they _short-circuit_: they only evaluate the second value if the first value evaluates to true. This is very important if you're programming with side effects.


This is from the SBCL source, and I think it's pretty elegant:

    (defmacro and (&rest forms)
      (cond ((endp forms) t)
            ((endp (rest forms))
             ;; Preserve non-toplevelness of the form!
             `(the t ,(first forms)))
            (t
             `(if ,(first forms)
                  (and ,@(rest forms))
                nil))))


Right, what's interesting about klisp is that you can define that as something that can be called at runtime, rather than having to be implemented as compile-time macro expansion.

I sort of regard operative lisps as "lisp, but even more so", as it were (and for sheer brain-melting power when prototyping weird logic they're even more fun than normal lisps, at least to me).


I just like them because fexprs are in some ways more elegant and easier to understand than macros.


His function will short circuit. It'll expand to:

false ? console.log("This shouldn't run") ? console.log("This shouldn't run") : false : false

The ? operator checks the left hand side, branching on whether it's true or false. So it'll only execute b if a is true. Otherwise, it jumps to the false section, which returns false, short circuiting. However, there is a problem with that code in that it'll run the b command twice if they're both true. So

and(a, b) = (a ? (b ? true : false) : false)

will both short circuit, and will only evaluate each variable once. It gets to the first test, a?, and checks a's value. a is not true, so it jumps to the false section, which returns false, never executing b.

Another way of rewriting it would be

if (a) { if (b) { return true } return false } return false


> His function will short circuit.

Not in JavaScript. This is prevented by 11.2.3(3), which specifies that a function call evaluates its arguments, and which occurs before the function itself is evaluated. Note that the actual call to the function to have its code run doesn't occur until 11.2.3(8), so by then the full argument list has already been evaluated. Since this is part of the specification, it is not possible for a JIT to completely elide the argument evaluation unless it can prove that there are no side effects (which may or may not be something that individual engines do).

"Let argList be the result of evaluating Arguments, producing an internal list of argument values (see 11.2.4)."

"Return the result of calling the [[Call]] internal method on func, providing thisValue as the this value and providing the list argList as the argument values."

This is also super easy to test:

  > and(false, console.log("This shouldn't run"))
  This shouldn't run
  false


Okay. I was testing it in C# where it does short circuit, and functions aren't called until the conditions require them to be called. This is also how the Kernel language seems to operate. I have...misgivings about javascript's pre-evaluation of functions.


Unfortunately, you are also wrong about C#[0].

http://csharppad.com/gist/0408e33984025e970013

This behavior is defined (in the C# 5.0 specification document) at section 5.1.4 "Value Parameters"[0].

"A value parameter comes into existence upon invocation of the function member (method, instance constructor, accessor, or operator) or anonymous function to which the parameter belongs, and is initialized with the value of the argument given in the invocation."

[0] - https://www.microsoft.com/en-us/download/details.aspx?id=702...


C does the same thing. I didn't know C# followed the functional train all the way to lazy evaluation of arguments. But given the fact that C# isn't purely functional, that seems like a bad design decision.


C# does not have lazy argument evaluation. Arguments are fully evaluated for each call.


Yeah, I thought as much. Haskell and (some?) LISPs are the only languages I can think of that actually advertise this (but there are probably others). C# isn't purely functional, so lazy argument evaluation would be a very wonky feature.


Only functional lisps, and those are few and far between.


I had just realized that prior to coming back here. The example given would evaluate its arguments. I could probably do the same with some operator overloading to emulate lazy evaluation, but it would be a bit more complicated.


I'm pretty sure the specification for C# disallows any instance where the arguments to a function are not evaluated at the time the function is called, i.e., lazy evaluation. This, of course, is the whole point of the statement 'try doing this in your language' that the klisp website makes. In other words, it's not possible to create an 'and' function in C# (or JavaScript) that does not evaluate its arguments before executing. You can certainly hard-code that type of logic (using && in both languages), but a generic function with a signature of and(a, b) is out of reach. As a specific example, imagine B is a function that evaluates to a list of all even whole numbers and A is always false. C#/JavaScript/etc. will never complete, whereas klisp (and other languages with lazy evaluation) will have no problem returning false immediately.


No, because when calling your and(a, b) you will evaluate both a and b, while I expect the one on that page to short-circuit.

PS: Also, I'm not sure because I don't know Kernel, but it could be that the function on that page can take any number of arguments equal to or greater than 2, e.g. (and a b c).


> it could be that the function on that page can take any number of arguments equal to or greater than 2, e.g. (and a b c)

It can. That is what line 5 is for. If eval returns true, $and? is applied to the rest of the arguments, i.e. the cdr.


Why not just:

    function and(a,b) { return (a ? b : false) }


Still won't work. See comment above for why.


Yes, I posted before seeing the other comments; the problem is that you can't force lazy evaluation of function arguments in JavaScript. On the other hand, it can easily be written in languages that support it, for example in Scala:

  def and(a: Boolean, b: =>Boolean) = if (a) b else false
And with Scala it can be easily made to work as an infix operator:

  implicit class AndOp(a: Boolean) {
    def and(b: =>Boolean) = if (a) b else false
  }
Verification:

  def foo = { println("foo"); false }

  scala> false and foo
  res2: Boolean = false

  scala> true and foo
  foo
  res3: Boolean = false


Hey, neat. Now can you rewrite the code you call and with, inside of and? I kid.


#define and &&

heh


iso646.h already `#defines and &&` for you: https://en.wikipedia.org/wiki/C_alternative_tokens


Yep, had the same thought. #define AND(a,b) a && b


See answer above about arbitrary number of arguments.



