Metaprogramming for Madmen (2012) (fgiesen.wordpress.com)
68 points by lifthrasiir on Nov 12, 2020 | 20 comments



See also Terra, http://terralang.org/

> Like C/C++, Terra is a statically-typed, compiled language with manual memory management. But unlike C/C++, it is designed from the beginning to be meta-programmed from Lua. The design of Terra comes from the realization that C/C++ is really composed of multiple “languages.” It has a core language of operators, control-flow, and function calls, but surrounding this language is a meta-language composed of a mix of features such as the pre-processor, templating system, and struct definitions.

> In Terra, we just gave in to the trend of making the meta-language of C/C++ more powerful and replaced it with a real programming language, Lua. The combination of a low-level language meta-programmed by a high-level scripting language allows many behaviors that are not possible in other systems. Unlike C/C++, Terra code can be JIT-compiled and run interleaved with Lua evaluation, making it easy to write software libraries that depend on runtime code generation.


Before you read the story, I suggest that you look at the result in awe:

https://youtu.be/2NBG-sKFaB0

That whole game is 96k, meaning it's roughly 0.07% of the size of an empty Electron app.


Electron's got so much built in. A comparison that hits me a bit harder is that a 96k demo is about half the size of enwiki's landing page. Just the HTML.


If you ever think you're great at programming, something like this comes along. This is insane.


This is insane! Wow


I’ve been thinking about how to package macros across a variety of programming languages. Macros let you generate code structures, which are then compiled and executed. For example, [“setnull”, “x”] might be compiled to x = null in JavaScript, which is then evaluated. Notice you cannot do this with functions — you can’t access unevaluated arguments, like the name “x”, or the lexical environment in which it appears (to say nothing of dynamic scope).
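
To make that concrete, here is a minimal sketch in TypeScript; the compile function and the "setnull" form are invented for illustration:

    // Sketch: compile a list-shaped code structure into JavaScript
    // source, then evaluate it. "compile" and "setnull" are invented.
    type Form = [string, ...string[]];

    function compile(form: Form): string {
      const [op, ...args] = form;
      if (op === "setnull") return `${args[0]} = null;`; // ["setnull", "x"] -> "x = null;"
      throw new Error(`unknown form: ${op}`);
    }

    let x: number | null = 1;
    eval(compile(["setnull", "x"])); // direct eval sees this lexical scope
    console.log(x);                  // null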

It’s a hard problem. Clojure has some prior art in this space, but the packaging system relies on creating “its own world”: classes in Clojure aren’t really plain Java classes. More specifically, protocols are a completely different thing from what normal Java applications are written with.

I don’t think building one’s own programming world is a good thing when you bootstrap macros on top of an ecosystem. The most powerful use of this technique would be, for example, to have native access to NumPy with Python + Lisp macros. You can’t really do that with a runtime that has to be embedded with all of your modules.

pg’s Bel has some interesting things to say about the topic, if you focus strictly on the interpreter. The ability for functions to be represented by lists — along with their lexical environment being represented by those same lists — is something that appears in elisp, but nowhere else I’ve seen. Self, maybe?

It’s an ongoing process. The reason I bring it up is that macros are one of the finest tools of metaprogramming available, so it’s a problem worth solving.


Rust procedural macros are just programs that take source code as input and produce source code as output.

These programs can do anything that the user running them can do.

You can compile them to WASM, ship them over the network, and run them on a WASM interpreter with restricted permissions, like "no file-system access, no random-number generator access, no network access", etc.

WASM runs very fast (it's compiled to native code), is supported on every platform that has a web browser or a WASM interpreter (of which there are many open-source implementations), and has no undefined behavior: WASM modules either fail to validate or trap during execution, but they don't "read out of bounds" or "segfault".
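
As a sketch of what that looks like from the host side, in TypeScript ("macro.wasm" and its "expand" export are hypothetical):

    // Sketch: the import object is the module's entire capability
    // surface, so passing {} grants no filesystem, network, or clock.
    // "macro.wasm" and its "expand" export are hypothetical.
    const bytes = await fetch("macro.wasm").then(r => r.arrayBuffer());
    const { instance } = await WebAssembly.instantiate(bytes, {});
    const expand = instance.exports.expand as (ptr: number, len: number) => number;
    // A misbehaving module traps; it cannot touch host memory or state.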

So to me at least this is a solved problem.

I guess one can do the same with Common Lisp macros, but nowadays, at least, the infrastructure for WASM is just much better.


Check out Walker's ATLAST (the idea more than the specific implementation, I mean).

https://www.fourmilab.ch/atlast/

> Most programs start out as nonprogrammable, closed applications, then painfully claw their way to programmability through the introduction of a limited script or macro facility, succeeded by an increasingly comprehensive interpretive macro language which grows like topsy and without a coherent design as user demands upon it grow. Finally, perhaps, the program is outfitted with bindings to existing languages such as C.

> An alternative to this is adopting a standard language as the macro language for a product. After our initial foray into the awful menu macro language that still burdens us, AutoCAD took this approach, integrating David Betz’ XLISP, a simple Lisp interpreter which was subsequently extended by Autodesk to add floating point, many additional Common Lisp functions, and, eventually, access to the AutoCAD database.


This is a remarkable find. Thank you so much for pointing it out. I'm not sure I could've hoped for a better reference to dig into.


Cheers! It had a big effect on my perspective when I read it.


You really need to read the partial evaluation book:

https://www.itu.dk/people/sestoft/pebook/


Thank you for the reference. I've started working my way through it.


Common Lisp solved macro programming a long time ago.


Sure, but Common Lisp has its own packaging system. Suppose you want to ship macros with Python, or any other language; that’s the challenge.

It may seem strange to say that you can have macros in other languages, but you can. The key is to compile lists of expressions into code, and then evaluate the resulting code. But the hard part is “then what?”
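
To make the “then what?” concrete in JavaScript terms (a sketch, reusing the “x = null” example from upthread):

    // Sketch: the same compiled string behaves differently depending
    // on how, and where, it is evaluated.
    let x: number | null = 1;
    eval("x = null;");           // direct eval sees the local scope: x becomes null
    new Function("x = null;")(); // runs at global scope: the local x is untouched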


That’s true. I guess other languages are more popular, so it would be useful to solve this problem for them.


I wonder if they could have used existing profiling tools to collect the information needed for lekktoring.


8 years ago, there weren't that many existing profiling tools.


The article is from 8 years ago; they wrote the game in 2004. Regarding profiling tools, plenty existed both 8 and 16 years ago. We used a tracing profiler in our commercial work to identify unexercised code paths, which either told us the code couldn’t be reached dynamically (even though static analysis said it should be reachable) or that our tests were incomplete. I’m not sure what the status of free tools was at the time.

The 00s weren’t the dark ages.


I think they were short of time (as the post mentions, everything Lekktor happened within a single week; this is the most impressive part of the entire post) and had rather special build environments that are unsuitable for many profilers. For example, the kkrunchy executable packer [1] used by Farbrausch doesn't pack the DLL sections at all, so any instrumentation relying on them will be ruined. Such things.

[1] http://www.farbrausch.de/~fg/kkrunchy/


I get that, but even the simpler side of the instrumentation tools we used wouldn't have been a problem for them to use. The most basic one simply (with knowledge of the language) inserted outputs at function entry/exit and at branch points (if, else, loops). However, it was a commercial tool that did this for us as part of a suite of tools. I have no idea what the status of open-source/free tools for doing the same thing would have been in 2004.
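
A sketch of what that kind of instrumentation amounts to, in TypeScript (the hit helper and probe IDs are invented):

    // Sketch: coverage probes at function entry and branch points.
    // "hit" and the probe IDs are invented for illustration.
    const hits = new Set<string>();
    const hit = (id: string): void => { hits.add(id); };

    function f(x: number): number {
      hit("f:enter");
      if (x > 0) { hit("f:then"); return 1; }
      hit("f:else"); return -1;
    }

    f(5); f(7);
    // Probes never recorded mark branches this run never exercised:
    // candidates for removal, Lekktor-style.
    console.log(hits.has("f:else")); // false: the else branch never ran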



