Show HN: SectorLISP now fits in one sector (justine.lol)
323 points by jart on Oct 30, 2021 | 75 comments



Every time I read something by Justine Tunney, I am reminded of my mediocrity.

I once reached out to them personally to tell them I hope I can be like them when I grow up ;^)

On a serious note, this is one of the coolest things I've ever seen.

The person mentioned as having helped them go from 700 -> 500 bytes, the so-called "Assembly Heisenberg", appears to be entirely obscure on GitHub too.

Seems like the right set of eyes just happened to find their way to the project at the right time:

https://github.com/agreppin

I wish I was born like this.


Agreed, Jart's work is jaw-droppingly incredible, on the order of Fabrice Bellard's achievements.

However, I don't see it as "I wish I was born like this" -- their work encourages me to think and do lots of crazy shit. My previous limitations were just a lack of imagination and creativity.

SectorLISP also encourages the "do anything" viewpoint. The original version was published even though it was way larger than a sector. Other people contributed, sometimes only by publishing a related project (SectorFORTH). By having an idea and publishing something, Justine created something unique.

Lastly, the above page shows the "run a program and watch its memory in real time" technique, which is incredible in itself. The tool is called Blinkenlights: https://justine.lol/blinkenlights/


It's a nice story about how sometimes the best way to ultimately succeed is to not be afraid to admit when something was tried and failed. By publishing SectorLISP in an unfinished state, I attracted a lot of smart people to the problem. Now that it's complete we can look back and say we did it.


I was already impressed by this person's work when I read about APE (Actually Portable Executable)... Something that should really be a thing IMHO


> According to The History of LISP, the same was true for LISP 1.0. Native integer support was added later as a performance optimization.

I feel like this just distills the ethos of LISP so perfectly, I'm gonna use this to explain it to people now.


jart makes me realize that my high school counselor Stella Sawicki was absolutely right when she said I wasn’t living up to my potential


In your defense, Justine's been at this for a long time! Give it time and effort, and you too can do amazing technical things. Potential isn't a concrete thing, it's a set point you can move around.

Throw yourself at the computer, and the computer will do interesting things with you, too.

If it's not working, go at it from a different angle. Sometimes it's helpful to go back in time to come across things. I'd argue that 90% of cool tech phenomena arise from studying the past to make something cooler in the future rather than any force of sheer will.

A lot of Justine's projects seem to fit this mold (if I remember right, she came across the idea for Cosmopolitan after finding that a legacy UNIX feature let you avoid specifying your shell; SectorLISP, McCarthy's metacircular evaluator, etcetera).

She even cites someone whose entire deal is doing precisely that: Nils M. Holm.


Yes, Justine's amazing. I have also been following and reading Nils M. Holm's books. I program in J/APL, and I picked up Nils' book on Klong, but I have not fully worked through it yet. It gives me a perspective on J/APL. His Lisp stuff is great.


Nils's works are excellent; I always buy them all as they come out.


He's very prolific and the content is rich. I always feel I need to step up my game. People like Justine and Nils inspire me to keep at it!


Thank you all for your kind words! Now I almost want to write another book.


I also wish to thank everyone here for the kind words. (Justine here btw.) Also Nils, awesome book! Glad I had the opportunity to cite it in the blog post.


Please do!


Thinking about it, but writing a book is a large effort, I am not getting any younger, and, to be honest, it is a bad idea from an economic point of view. My bestsellers, if you can call them that, sell about 30 copies per year.


> Sometimes it's helpful to go back in time to come across things.

Yeah, I remember her posting about the original Unix source code. I've never read it but I probably should. I bet there's plenty to be learned.

Sometimes I come across papers from the 70s and 80s too and they blow me away. It's amazing how much our predecessors achieved. Sometimes it feels like we're trying to rediscover or reinvent old technology.


If that were true then modern authors would just be guys trying to reinvent Shakespeare. As programmers, the programs we write can be art as much as they can serve a business utility. Some things progress, but some things never change, and each generation finds a way to express those things in a new and beautiful way. While learning to become a better programmer, I've found it helpful to read and be inspired by things like the UNIX source code, or texts written by guys like Vannevar Bush or John McCarthy, just as fiction authors might find inspiration reading the classics.


There's a lot of knowledge hidden in plain sight: it's accessible, and therefore assumed to be known, and therefore people don't read through and learn it.

If it were hidden and secret, maybe people would take it more seriously.


How old are you? Is it too late for you to live up to your potential?


What made you reject the alternative hypothesis, maybe a priori more likely, that you just have less potential?


Why is this quoting the top-level lambda expression arguments? Unless I'm mistaken, that's not how JMC did it. For example, to define a binding, the cited resource http://www.paulgraham.com/rootsoflisp.html uses:

    (defun null (x) (eq x '()))
Replacing the defun with an env binding, I believe the value translates to:

    (lambda (x) (eq x '()))
I don't think this should be quoted? Even though lambda isn't typically assumed in the "seven primitives," it _is_ required by the runtime. Are your function calls implicitly dropping quotes when evaluating arguments?


Author here. The quotes are indeed needed. Emacs does the same thing. For example:

    ((lambda (x)
       (message "%S" x))
     (1))
    ;; fails: no such function `1'

    ((lambda (x)
       (message "%S" x))
     (quote (1)))
    ;; prints (1)
The original evaluator is defined so that, if the first item of a list is itself a list whose first item is LAMBDA, it calls Evlis() and Pairlis(), which together work sort of like a glorified version of Python's zip() function, to attach all the subsequent list items to the parameter name list:

    ((LAMBDA (ARG1 ARG2 ARG3)
      <YOUR PROGRAM GOES HERE>)
     (QUOTE VAL1)
     (QUOTE VAL2)
     (QUOTE VAL3))
This is the "venus fly trap" that the blog post mentions. The process of evaluation causes the expression to clamp in on itself: the args get digested into an association list, which can be referenced while the payload is evaluated. This was all non-obvious to me, even after using LISP casually for years, since we normally don't write code that way.
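
Roughly, those two helpers work like this (an illustrative sketch, not the exact sectorlisp.c source; Cons, Car, Cdr, Eval, and NIL are the interpreter's own):

    int Evlis(int m, int a) {
      // evaluate each argument expression in the environment a
      if (m == NIL) return NIL;
      return Cons(Eval(Car(m), a), Evlis(Cdr(m), a));
    }

    int Pairlis(int names, int values, int a) {
      // zip parameter names with argument values onto the alist a
      if (names == NIL) return a;
      return Cons(Cons(Car(names), Car(values)),
                  Pairlis(Cdr(names), Cdr(values), a));
    }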

Now it's possible to add extra code to the evaluator so that, if a naked lambda is encountered, i.e. it sees (LAMBDA ...) rather than ((LAMBDA ...) ...), it just lets it pass through, so you don't need the quotes. We did that originally, but deemed it non-essential and removed it, since it would have made it much harder to hit the 512-byte goal.
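
Concretely, that pass-through amounts to one extra check in Eval(), something like this sketch (not the actual sectorlisp.c code; Assoc and ATOM_QUOTE are illustrative names, and COND handling is omitted):

    int Eval(int e, int a) {
      if (ISATOM(e)) return Assoc(e, a);          // variable lookup in the alist
      if (Car(e) == ATOM_QUOTE) return Car(Cdr(e));
      if (Car(e) == ATOM_LAMBDA) return e;        // naked lambda passes through
      return Apply(Car(e), Evlis(Cdr(e), a), a);  // everything else is a call
    }

With that check, (LAMBDA ...) evaluates to itself, so arguments like (LAMBDA () 42) no longer need the QUOTE wrapper.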


By Emacs do you mean Elisp? Your session there is not surprising to me. I don't have Emacs handy, but what does this do?

    ((lambda (f)
      (f))
     (lambda() 42))
You might need to throw a 'funcall in there or something, but my hypothesis is that this will work (return 42) without needing the top-level argument to be quoted. 'lambda is a deep special case, basically.

I understand that functions always zip through their arguments, evaluating each one. But when you quote the args in the call, I tend to expect the arguments to not be evaluated within 'evlis.

I'm surprised that quoting args makes the interpreter smaller! That's pretty interesting. But I think this interpreter will have behavior that's surprising to anyone used to extant Lisps, for examples like this:

    ((lambda (sym)
       (cons sym 3))
     (quote a))
Here there's no binding for 'a. I expect this to return (a . 3) in JMC's original Lisp.


The lambdas are quoted because they’re lambdas, not because they’re arguments.

In a more fully-featured Lisp, the code (LAMBDA () 42) would evaluate to a special function data type with no direct representation in terms of cons cells. That’s a luxury this implementation can’t afford, so its data representation of a function is the same as its code representation: a cons cell whose car is the atom LAMBDA. With this design, the code (LAMBDA () 42) would be self-evaluating: it would evaluate to the same thing as the code (QUOTE (LAMBDA () 42)). But since the latter syntax works, there’s no need to waste bytes supporting the former syntax.

In the absence of that special support, the default meaning of the code (LAMBDA () 42) is to call a function named LAMBDA, just like the code (FOO () 42) calls a function named FOO.


I see. Kinda. The 'lambda in the first line isn't quoted. And the eval itself seems identical to JMC. So this is just the substrate level and only when evaluating args? Does that seem right?


The first item in the list is evaluated, just like the other items in the list. For example:

    ((LAMBDA (ARG1 ARG2 ARG3)
       <YOUR PROGRAM GOES HERE>)
     (QUOTE VAL1)
     (QUOTE VAL2)
     (QUOTE VAL3))
Is the same syntax as a normal function call:

    (PROGRAM
     (QUOTE VAL1)
     (QUOTE VAL2)
     (QUOTE VAL3))
The only difference is that, since it's the root-level program, the binding of the name PROGRAM can't have happened yet. So it's simply inlined into the expression. The Apply() part of the evaluator has a special case for recognizing this, which causes evaluation of the first argument to terminate:

    int Apply(int fn, int x, int a) {
      int t1, si, ax;
      if (ISATOM(fn)) {
        switch (fn) {
        case ATOM_CAR:
          return Car(Car(x));
        case ATOM_CDR:
          return Cdr(Car(x));
        case ATOM_ATOM:
          return ISATOM(Car(x)) ? ATOM_T : NIL;
        case ATOM_CONS:
          return Cons(Car(x), Car(Cdr(x)));
        case ATOM_EQ:
          return Car(x) == Car(Cdr(x)) ? ATOM_T : NIL;
        default:
          // recurse to turn (FUNC ARG1 ARG2) into ((LAMBDA ...) ARG1 ARG2)
          return Apply(Eval(fn, a), x, a);
        }
      }
      // evaluate ((LAMBDA ...) ARG1 ARG2)
      if (Car(fn) == ATOM_LAMBDA) {
        t1 = Cdr(fn);                 // fn is (LAMBDA (params) body); t1 is ((params) body)
        si = Pairlis(Car(t1), x, a);  // bind params to the already-evaluated args atop alist a
        ax = Car(Cdr(t1));            // the body expression
        return Eval(ax, si);          // evaluate the body in the extended environment
      }
      return UNDEFINED;
    }
The Eval() function which calls Apply() has already evaluated all the list items except for the first one, which is left to Apply() to figure out. The reason things are this way is that, in order to save space, the REPL loop doesn't maintain a persistent set of global variables. Due to the NIL here, each expression is its own hermetic job:

    void Repl(void) {
      for (;;) {
        Print(Eval(Read(), NIL));
      }
    }
If you changed that NIL to an alist populated by things like defun / setq / etc. at the global scope, the usability of the language would improve dramatically. We didn't do it due to the singular focus on size. But at the same time that simply means we left fun opportunities for improvement to anyone wishing to hack on the codebase and make it their own.
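
As a rough sketch of that idea (again not the actual sectorlisp.c code; ATOM_DEFINE is an invented name for a hypothetical definition form), the loop would just thread one alist through and extend it whenever it sees a definition:

    void Repl(void) {
      int e, globals = NIL;
      for (;;) {
        e = Read();
        if (!ISATOM(e) && Car(e) == ATOM_DEFINE) {
          // (DEFINE NAME EXPR): bind NAME to the value of EXPR globally
          globals = Cons(Cons(Car(Cdr(e)), Eval(Car(Cdr(Cdr(e))), globals)),
                         globals);
          Print(Car(Cdr(e)));
        } else {
          Print(Eval(e, globals));
        }
      }
    }

Every byte of that bookkeeping counts against the 512-byte budget, which is why it stayed out.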


Ok that makes sense. Thanks!


> Why is this quoting the top-level lambda expression arguments?

I suspect mostly because the implementation stumbled into the dynamic binding/lexical binding/FEXPR traps that hit all the early Lisps.

> Here it becomes clear that, in its most bare essential form, beneath the macros and abstractions, LISP actually has an unpleasant nature where name bindings (or assignments) look like a venus fly trap.

Ayup.

A lot of these problems were addressed by the late John Shutt in his thesis about vau calculus in the Kernel programming language:

FEXPRS: https://en.wikipedia.org/wiki/Fexpr

John Shutt's vau calculus thesis: http://www.wpi.edu/Pubs/ETD/Available/etd-090110-124904/unre...

John Shutt's Kernel Language--"Revised-1 Report on the Kernel Programming Language" https://core.ac.uk/download/pdf/47187352.pdf

The "magic" is that the "$vau" operative closes over the environment when it is defined so that it can execute later in that environment--this is the "static/lexical environment". However, when you actually execute the "$vau" operative, you also pass in the "execution/dynamic" environment so that the code that the "$vau" operative runs can choose to do what it wants with the arguments--do nothing, evaluate them in the lexical environment, or evaluate them in the dynamic environment.

It's a powerful concept, but difficult to compile, so it got more or less dropped from the Lisp/Scheme family.

For something which seems to implement it properly, see the "Oh" UNIX shell:

https://github.com/michaelmacinnis/oh

https://www.youtube.com/watch?v=v1m-WEZz46U

I would mumble that the fact that "list" is defined as "cons pairs with a final nil()" is also a pain in the ass. I understand why it was originally done this way, but it makes the idea of a "sequence" abstraction so very much more painful. It's one of the reasons why Clojure dropped it.


I love chatting about Kernel :D Here's my most recent post: https://lobste.rs/s/d0hogq/problem_with_macros#c_nozcrm

Thanks for showing me Oh! It really has f-exprs?! I didn't immediately see it in https://github.com/michaelmacinnis/oh/blob/main/doc/manual.m...


See the video @ 28:51: https://youtu.be/v1m-WEZz46U?t=1731

It's such a shame, but "Kernel" and "Oh" sort of drive home the fact that good ideas also need good marketing to propagate. "Oh!" is also a poster child for "videos suck for searchability and discoverability--post a transcript and your slides."

It's particularly bad with "Oh" because he has some truly nifty hacks. The pipeline that reconfigures itself as a filter while it's filtering is stunning. The "transmit the code in one place and execute it in another" trick is also clever.

The thing that dragged me to Kernel was that I needed a small language to be able to debug embedded systems on the fly. So, the interpreter couldn't be hamstrung and I never had a compile step available. I needed both small code and small data.

Kernel, the idea, fits the bill. Kernel, the implementations, all really obscure the point for reasons I'm not particularly clear on.

Maybe once I've got it all clear in my head, I'll try to do an implementation in Rust. That should help keep things clean since you can't rely on "metacircular" tricks to implement things.


I'd love to chat more about it. Email address in my profile. I spent a few years getting inspired by Kernel when I built my fexpr-based Lisp (http://akkartik.name/post/wart), so I'd be happy to answer questions as you have them. Perhaps we can pore over Shutt's dissertation together.


Not really on topic, but I found myself looking around this website, and it's easily the most interesting personal site I've seen on HN. And from reading all the angry posts by the right people, Justine also seems like the most interesting personality I've seen on here.


I also agree with you. My specialties are machine learning and knowledge graphs, very different from what Justine works on, yet I find digging into her posts fascinating.


Justine here. I used to work on TensorBoard and I contributed a few hundred thousand lines of changes to TensorFlow. These are my hobby projects so we might not be as dissimilar as you think :-)


Agreed, as far as site content that gets posted goes. I also like rachelbythebay.

But for commenters, there are lots of interesting people.


I didn't remember an FS register... it was added on the 386. I wonder what would happen if this were run on an old IBM XT?

I realize that something had to go in order to fit it in the size limit; compatibility with 8088 CPUs isn't the worst choice.

[Edit] Sorry if this sounded too negative... I'm astounded that it's even possible to do this in a sector at all.


Intel 8086 and 8088 did not have invalid instruction exceptions.

The opcodes that were not documented were either redundant, doing exactly the same thing as other documented opcodes, or they crashed the system.

Later, in 1982, 80186 and 80286 introduced invalid opcode exceptions, so starting with the IBM PC/AT any program intended for later CPUs should abort with an error message.

Most IBM PC/XTs were made with the Intel 8088, so the behavior of a program using 80386 instructions is unpredictable.

Some PC/XT clones were made with 80186 or 80188, or, more frequently, with NEC V20 processors. All these had invalid opcode exceptions, so they should behave like a PC/AT.


It's not too negative, because to be honest it's something I felt guilty about. It looks like we actually have enough room left to fix that, since we're not currently using %bp. We even have room to add the necessary push/pop for %bp to work around a bug in the original IBM PC BIOS implementation. If we make that change then, as I've just confirmed with xed -r -isa-set -i sectorlisp.o, it's pure i8086 and should work. One of us should pull out an old IBM PC and record a video of sectorlisp running on the original hardware. Anyone want to volunteer? I have one but it's difficult for me to use since I only know how to type dvorak.


These sorts of minimalist bare metal abstract PL articles are so thrilling to me. APL comes to mind.

I wonder how high you could go with only, say, 1MB.


Assuming you mean static code size, my Mu project currently supports an interpreted Lisp (macros, fewer parens, infix) implemented in a compiled memory-safe low-level language without _any_ C in 450KB. The "runtime library" (mostly in the safe low-level language) requires 120KB. Without unit tests, the binary sizes drop down to 200KB and 30KB respectively, which gives you some sense of the space devoted to tests in each case.

As the tests hint, Mu doesn't do anything to try to reduce code size. It's small just by focusing on the essentials. What's left is written for ease of comprehension.

(An example of "essentials": Mu still cannot free memory. I run my programs in Qemu with 2GB which seems to suffice.)

More details: https://github.com/akkartik/mu


Author here. I assume you're referring to https://github.com/akkartik/mu/blob/main/shell/evaluate.mu? It's hard for me to tell, since there's 819k lines of code in Mu according to find . -type f | grep -v '\.git' | xargs wc -l. Does it have lambda and support metacircular evaluation? If not, then why do you think it's LISP-like? If you do nothing to reduce code size, then why do you call it Mu? (I assume by "Mu" you're referring to what UNICODE calls the MICRO SIGN.) 30kb is two orders of magnitude larger. I don't see how it's relevant to SectorLISP.


Sorry to trample over your thread! Mu isn't too related to SectorLISP; I was just responding to the side-discussion about what we can build in under 1MB.

To answer your question, evaluate.mu does support lambda (I call it `fn`). My estimates of size were based on `ls -l a.bin` after `./translate shell/*.mu`.

I actually didn't really think of the micro interpretation of 'mu' until years after I started the project. The interpretation I had in mind was https://en.wikipedia.org/wiki/Mu_(negative)#%22Unasking%22_t...


In that case it'd be really cool if you implemented John McCarthy's metacircular evaluator in Mu LISP syntax to demonstrate it's a LISP. Sort of like how, when John McCarthy shared the idea for LISP, one of the first things he had to do was show that it could do the same things as a Turing machine.


No upvote needed :) but sure I'll post it here.

(I haven't done this so far because I find metacircularity to not be very interesting. It was interesting when JMC proved it could be done. Mu's whole reason for existence is linear rather than circular bootstrapping. We can disagree over whether I get to call it a Lisp or not, but if it has the full power of macros I'm happy.)


And Justine, if you want to join our ever continuing argument about bootstrapping processes and trust in programming language toolchains, Kartik is the right person to talk to :p


He probably was just responding to my 1MB limit. Don't be too harsh.


The variable/register idea is neat; I always wanted some veneer on top of raw registers... almost like a CPU monad.


Major points for having a thorough test suite; it has saved my a** on my own projects countless times, and I still too often encounter projects that have barely any test suite, if any at all.


1 MB is huge.

Keep in mind that the IBM PC/XT had only 640 kB, yet compilers and interpreters for just about any language were available for it.

Moreover, before the IBM PC, a CP/M computer with a Zilog Z80 or Intel 8080 usually had only 32 kB or 48 kB, but you could use BASIC interpreters and Pascal, Fortran, COBOL, and PL/M compilers, among many others, without problems.

However, in order to fit in 32 kB, the compilers themselves were typically written using a macro-assembler, and not in a high-level language. The C language became popular for such tasks somewhat later.


The original Macintosh had 128k and managed to load a graphical OS and an app in that space. The ROM certainly helped, but it was quite a feat.


> Keep in mind that IBM PC/XT had only 640 kB

And that's the maximum (without bank switching schemes like EMS). The first version had only 64 KB. There were even plans for a 16 KB version with no disk drive but I don't think it was ever released.


The base model of the ZX Spectrum had only 16 KB, but that was enough for it to run BASIC.

And then there's FORTH, which is tiny even in a naive implementation, but can be taken to extremes as well: https://pygmy.utoh.org/3ins4th.html


k9 [0], an APL-like language, is under 160KB and includes a database in that.

[0] https://shakti.com


I shall gather all these projects on a wiki.


https://www.red-lang.org/p/about.html

Almost as batteries-included as Python.


I love Red and want to use it more. Sadly my professional life keeps giving me hardware that I can't run it on, so Red is getting marginalized. I wish I had more skill and time to help them out with the 32 to 64-bit transition. [1]

[1] https://gitter.im/red/red?at=5e09152cd5a7f357e6a1af83


Ha, I didn't know the whole distribution fit in 1MB. Thanks


L - an embedded subset of Common Lisp used in the Roomba - ran in 1MB of RAM total, AFAIK, including the compiler and support OS.


1MB is really a lot if you know what you're doing. Which languages actually can't fit into 1MB of RAM due to inherent specifics of the language itself, rather than because the interpreter/compiler doesn't optimize for such cases?


Oh wow, Roombas still run this subset of CL??

PS: ohh, that paper was on the front page not long ago; I just didn't realize it was used in Roombas.


A lot; we only had 640KB to play with on most MS-DOS, Amiga, and Atari computers, and 8-bit machines had 64-128 KB.


Hmm, but were there high-level languages usable on these? I remember a CLISP impl on some Atari (but I think the dev had a RAM add-on).

All in all, I'd really love to have or make a 1MB minimalistic shell with a tiny Lisp/Prolog/Smalltalk.


Plenty of them; here is a non-exhaustive list:

C, C++, Modula-2, Pascal, AMOS, Clipper, FoxPro, Turbo Basic, Quick Basic, Turbo Prolog, PC-Lisp, Native Oberon,...


Most then-existing HLLs were very usable in 640 KB. If you go down to 8-bit machines, while translators for many of these did exist, their utility was rapidly diminishing. You'd essentially have the system's native version of BASIC plus an assembler for anything practical. The rest were more parlor tricks.


In that context Forth was quite practical. That was its heyday.


INTERLISP was available for the Atari 800 (which max'd out at 48KiB RAM).


I wrote a full CAD/CAM suite for lathe/mill work in a few hundred K; working RAM was another 500K or so...


Honestly, you and people writing similarly small applications should get together and write a book.

I don't know if I'm an alien, but whenever I see a frugal yet non-trivial application my brain rejoices.


This was back when 1 MB was considered an amazing amount of memory. It took two years to write this: about 50K lines of high-level code and another 10K or so of assembly. Today you would approach such a project in a completely different way; you'd spend much more time making it look pretty, and just that part would probably be much larger than the whole package that I wrote.

The funny thing is that, even so many years later, the original software is still in use in some places, and there is a whole company centered around that core, which has been rewritten a couple of times to keep it up to date and to expand its functionality.

I highly doubt the present-day version would fit in something that small. What's interesting to me is that that old stuff tended to be super productive to work with: zero distractions, just some clearly defined task in a clearly defined environment, and if it worked it was bulletproof. No hackers, no SaaS, no million connectivity options, and no eye candy. Just that one job to be done, and done as well as the hardware would allow.


But that's exactly my point. I think we forgot to aim at super productive, zero-distraction, lightweight software. And it would be of high educational value to see this sort of work again, both for the small-scale details and for the pragmatic value.

The sad part to me is that most web apps reenact the same functions, but in a CSS-transition-capable DOM. Functionally, I'm not sure you get more.


I've tried to adhere to the same principles when building pianojacq.com; even so, the line count is huge for such a limited amount of functionality.


Does this pass any tests to prove it is a general-purpose Lisp? (I suppose writing a Lisp evaluator in it might be such a test, but I'm not 100% sure.)


It was still spooky downloading and executing the same Blinkenlights binary on Windows and Linux (WSL). Unfortunately, it doesn't actually work in either environment:

    Error: your CPUID command does not support command 0x80000006 (AMD-style L2 cache information).
AMD Ryzen 7 1800X


Author here. What are you executing? It works fine on AMD Ryzen for me. That error message doesn't appear in the blinkenlights.com binary, and it doesn't perform that CPUID check. The only place I can find that error message online is for a fizzbuzz codegolf. Perhaps you're confusing sectorlisp with another project? Also, Blinkenlights should work fine in the Windows 10 Command Prompt, so WSL isn't required, but it does run on that too. https://justine.lol/sectorlisp/spooky.png


Oh, silly me. I hadn't noticed you need the `-t` flag to open the TUI, and assumed that, since the TUI didn't open, the error was from Blinkenlights.


Glad I could help. Hope you enjoy Das Blinkenlights. Press '?' for help. Useful shortcuts for controlling execution are 'c', 'C', Ctrl-C, Ctrl-T, and Alt-T. Memory panels can be scrolled with the mouse wheel. You can also zoom the memory display in and out with Ctrl + mouse wheel, although that might not be available in CMD.EXE and you might need to use something like KiTTY, XTerm, or PuTTY.



