Hacker News
Show HN: “Crafting Interpreters” chapter, control flow and Turing-completeness (craftinginterpreters.com)
116 points by munificent on June 26, 2017 | hide | past | favorite | 40 comments



A bit of nitpicking:

"the function to compute the truth value of a given proof" should be "the function that returns the truth value of a given statement" (the first change because it is confusing to use the word "compute" in two different ways; the second is simply an error correction)

"equivalent in power" should not link to the Church-Turing thesis, as the former is a mathematical statement, proved in any introductory course on computability, and the latter is a philosophical one.

w.r.t. the Neil Gaiman quote: https://www.goodreads.com/topic/show/1407558-misattributed-q...


More nitpicking:

> You can prove that by writing a simulator for a Turing machine in your language. Since Turing proved his machine can compute any computable function, by induction, that means your language can too.

That's not by induction, that's by composition or something.

> We almost have the third too. You can create and concatenate strings of arbitrary size, so you can store unbounded memory.

That's not true! Eventually you'll run out of memory & disk space. Turing machine tapes are infinite, you can't fit one in this puny universe.
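To make the "strings as unbounded memory" claim concrete, here's a minimal sketch (my own illustration, not from the chapter) of a Turing-machine tape backed by a growable Python list. It's "unbounded" only in the practical sense: it grows on demand until the host machine runs out of memory, which is exactly the loosening being debated here.

```python
# A Turing-machine tape that grows on demand in either direction.
# Names (Tape, move, read, write) are my own; nothing here comes from the book.
class Tape:
    def __init__(self, blank="_"):
        self.blank = blank
        self.cells = [blank]
        self.head = 0

    def move(self, step):
        """Move the head left (step=-1) or right (step=+1), growing the tape."""
        self.head += step
        if self.head < 0:                     # fell off the left edge: grow leftward
            self.cells.insert(0, self.blank)
            self.head = 0
        elif self.head == len(self.cells):    # fell off the right edge: grow rightward
            self.cells.append(self.blank)

    def read(self):
        return self.cells[self.head]

    def write(self, symbol):
        self.cells[self.head] = symbol
```

A real simulator would add a state table on top of this, but the tape is the only part where the "unbounded memory" question shows up.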

EDIT:

> Syntactic sugar has a bad rap among the PL intelligentsia.

Um, what? I presume the author isn't aware of the large PLT group at Northeastern that continuously writes papers about macros, i.e. user-defined syntactic sugar, and develops Racket, a language built almost entirely out of syntactic sugar. Or maybe they're defining "syntactic sugar" to not include macros: if so, they should be more clear about that.

Also, I'm doing a PhD focusing on syntactic sugar, so I probably have authority to say: syntactic sugar has a very good rap among the PL intelligentsia. Languages with a needlessly complicated semantics have a bad rap, but that's different from having a large grammar. PL researchers are a lot more concerned about semantics than syntax.


> That's not by induction, that's by composition or something.

Good point. I'll fix that.

> That's not true! Eventually you'll run out of memory & disk space. Turing machine tapes are infinite, you can't fit one in this puny universe.

Sure, but in practice, most people loosen that requirement. Otherwise, no implemented language is Turing-complete.

> Um, what? I presume the author isn't aware of the large PLT group at Northeastern that continuously writes papers about macros, i.e. user-defined syntactic sugar, and develops Racket, a language built almost entirely out of syntactic sugar.

The mini-essay is about what syntactic sugar the language designer chooses to add to the language. Language features that let users extend the syntax are outside of the scope of that.

I do note:

> Lispers famously claim their language “has no syntax”, while Smalltalkers proudly show that you can fit the entire grammar on an index card. This tribe has the philosophy that the language doesn’t need syntactic sugar. Instead, the minimal syntax and semantics it provides are powerful enough to let library code be as expressive as if it were part of the language itself.


> The mini-essay is about what syntactic sugar the language designer chooses to add to the language. Language features that let users extend the syntax are outside of the scope of that.

What counts as part of the language? If you've coded in a lisp, you've used `and`, `or`, `cond`, `let*`, `letrec`, etc. All of those are defined by macros, but I don't think it's realistic to exclude them from being considered part of the language.
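The classic expansion of `and` into nested `if`s can be sketched without a macro system at all. Here's a hypothetical desugaring pass in Python, using nested lists as stand-in s-expressions (the representation and function name are my own illustration, not Racket's actual expander):

```python
def expand_and(expr):
    """Rewrite ['and', a, b, ...] into nested ['if', ...] forms, the way a
    lisp macro would: (and a b c) => (if a (if b c #f) #f)."""
    if not (isinstance(expr, list) and expr and expr[0] == "and"):
        return expr          # not an `and` form: leave it alone
    args = expr[1:]
    if not args:
        return True          # (and) with no arguments is true
    if len(args) == 1:
        return args[0]       # (and a) is just a
    # (and a b ...) -> (if a (and b ...) #f), recursing on the tail
    return ["if", args[0], expand_and(["and"] + args[1:]), False]
```

A real macro expander also has to worry about hygiene (the topic of the Flatt paper mentioned downthread), which this sketch ignores entirely.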


> What counts as part of the language?

Very simple: the arbitrary set of requirements documented in the reference manual for the language and (consequently) provided in the implementation.


> Sure, but in practice, most people loosen that requirement. Otherwise, no implemented language is Turing-complete.

The common hand-wave for this is to assume that (1) the machine has big enough storage for the problem, otherwise you could not have the problem in the first place and (2) the algorithm can have an arbitrary time/space trade-off, so if it needed more memory than the physical bound, you could just have it run longer. Note that (1) is a non-sequitur, since the input tape is usually not included in the machine memory in the first place.


Common Lisp has a whole bunch of syntax and syntactic sugar. e.g. ' (quote), #', &rest, &key, &optional and more.


I just wanna add that there is also a dedicated group at Utah working on hygiene and macros for Racket, headed by Matthew Flatt (who's somewhat relevant within the scope of Racket).

(Not contradicting you; just wanna throw my school in where I can!)


The colored sets?


I think you mean Flatt's "Binding as Sets of Scopes" [1]. Yeah, that's definitely something he's working on. There are a couple PhD candidates in his group who are also working on macros and hygiene, but I don't follow their research directly (we're just often in the same room so I hear about it).

[1] http://www.cs.utah.edu/plt/publications/popl16-f.pdf


Cool, from what I recall Flatt said there were issues in his process so I'm curious about what's new.


Ah, I wish I could tell you! He's recently taken on some more teaching duties, so I dunno to what extent that is impacting his research abilities. (Hopefully not too much!)


May I ask for your team's website, if there's one? I'm curious about your research.


Thanks for this! When I get home this evening, I'll tweak this. You're right on all counts.

Except for the Neil Gaiman quote. I like Gaiman's wording better than Chesterton's and I think it's different enough that it makes sense to attribute it to him.

Also, man, what was up with that Petêr guy? Who peed in his Cheerios?


> "It turns out that the answer to both is “no” and, in astonishingly, the two questions are deeply intertwined."

Did you forget a word here?


Related to that, the two questions are

> “Can all true statements be proven?”, “Are there functions that we can define but not compute?”

And the answers are "no" and "yes", so to fit with "both answers are 'no'", the second question should be "Can we compute all functions we can define?".


Oh, God, you're right. Thank you! You wouldn't know it, but I actually edited this chapter twice.


Yes, yes I did. Would you believe I did three drafts of this?

I'll fix it tonight. Thanks!


Haha I wouldn't worry about it.

I've seen more typos in professionally edited articles on major online newspapers.


I just want to thank Bob for the amazing "Game Programming Patterns" book. I can't recommend it highly enough. And since I'm also interested in compilers, I will buy "Crafting Interpreters" without hesitation.


Thank you!


I'm reading it too, bit by bit! I've been writing file importers for an old game that I'd like to recreate, but I've never built a game engine before, and I'm getting to a place where I need to start tying things together to build the cohesive whole.


Which game?


Ultima Underworld. About 5 birds with one stone: game dev, reverse engineering, real-mode x86 assembly (covered a subset of ARM in college), technical details of DOS, OpenGL graphics, and a dip into writing software renderers too.

I consider it a long-term hobby; it's not exactly a quick solo project.

I'm also looking at another one called Solar Winds. It came out at a similar time, but the whole game is smaller than just UW's binary, with a lot more of the game's content encoded directly in the binary. UW tends toward deep call stacks, and significant portions are written in C. SW is pretty flat with very few function calls, and I think that about 1/3 of the code is made of Sound Blaster drivers linked into the binary.

I took a look at Lemmings, but that was going to be kind of annoying. Most of the binary was encrypted and would be decrypted on the fly, sometimes in a 2-step operation, where it would be decrypted then copied backwards into a new location. There are already engine implementations for that, so it didn't seem worth the effort.


Some typos:

"cook it down to a small number of primitives that the "back end" can do its magic on" → backend

"Most other syntax trees always evaluate their subtress." → subtrees

Good read, thanks for sharing.


Thank you! I'll upload a fix for "subtress" when I get a chance. I believe "back end" as two words is still the most common formulation:

https://books.google.com/ngrams/graph?content=backend%2Cback...


The author asks why && and || need different precedence (unless I misunderstood).

This is because `a && (b || c)` has a different truth table than `(a && b) || c`
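A quick brute-force check (my own illustration) confirms the two groupings really do disagree. Enumerating all eight input combinations, they differ at, e.g., a=false, b=false, c=true:

```python
from itertools import product

def table(f):
    """Truth table of a 3-input boolean function, over all 8 inputs."""
    return [f(a, b, c) for a, b, c in product([False, True], repeat=3)]

# The two possible groupings of `a && b || c`:
left  = table(lambda a, b, c: a and (b or c))
right = table(lambda a, b, c: (a and b) or c)

assert left != right  # the grouping matters: these are different functions
```

(As a sibling comment points out, differing truth tables alone doesn't force *different* precedence levels; it only shows the grouping has to be pinned down somehow.)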


&& having higher precedence than || makes sense because && is boolean multiplication and || is boolean addition.
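The analogy can be checked mechanically (my own sketch): over {0, 1}, AND coincides with multiplication and OR with saturating addition, which is one way to motivate && binding tighter than ||, mirroring * over +.

```python
from itertools import product

# Over the two-element boolean domain {0, 1}:
#   a AND b  ==  a * b          (boolean "multiplication")
#   a OR  b  ==  min(1, a + b)  (boolean "addition", capped at 1)
for a, b in product([0, 1], repeat=2):
    assert (a and b) == a * b
    assert (a or b) == min(1, a + b)
```

Note this is the Boolean-algebra reading; in the field F2, "addition" is XOR instead, which is a different (and also interesting) correspondence.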


This makes so much sense but I never thought of it this way before so explicitly. Neat!


Has anyone ever found a link between boolean (F2) and "normal" base (F[n], or N) arithmetic?


That doesn't explain why they would need a different precedence. `a + b - c` is not the same as `a - b + c` even though the precedences of + and - are the same (in most syntaxes).


Granted, the intersection of precedence and associativity is a great source of confusion (and any number of annoying Facebook memes).

I'm not a formally trained mathematician, but one way to rationalize + and - having the same precedence is to consider them as inverses of each other, so your two examples can be written as

a + -b + c

And

a + b + -c

At which point precedence and associativity no longer matter. The same can be done with multiplication and division.
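This rewrite is easy to spot-check (my own illustration): once every subtraction becomes addition of a negation, only the associativity of `+` is left, and the two original examples are visibly different expressions rather than a precedence puzzle.

```python
import random

# `a - b + c` is really `a + (-b) + c`, and `a + b - c` is `a + b + (-c)`.
# Spot-check the rewrite on random integers.
for _ in range(100):
    a, b, c = (random.randint(-10, 10) for _ in range(3))
    assert a - b + c == a + (-b) + c
    assert a + b - c == a + b + (-c)
```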


This is not a Show HN.


It is because it's a substantial project (a book), posted by the author, and free for everyone to 'try out' (i.e. read). It's fair to have different standards for different categories of project.

It's overdoing it a bit to post the same project every few weeks instead of every few months, but that's a separate issue.


Is the posting of a book chapter different from a blog post? The Show HN rules say blog posts can't be Show HNs. And the reason given is they "can't be tried out".


Yes, it's different. A book (assuming it's a real book and not something slapdash) is a major piece of work. Obviously some of the finer distinctions (e.g. what counts as a book chapter vs. an article) can get pretty arbitrary. But the broader lines aren't. The intent of Show HN is for people to share what they're working on. If we applied the strictest definition, the only thing anyone could easily 'try out' would be web apps, and that's much too narrow.


> A book (assuming it's a real book and not something slapdash) is a major piece of work.

It is, I hope. But if you'd prefer I post these without "Show HN", I'm happy to. I thought it was worth doing to make it more obvious that I was self-posting, but I can stop if that's what others would prefer.


> to make it more obvious that I was self-posting

Not really what Show HN is for, though, so if you skip it for just chapter updates (rather than the whole book) you'll probably get less guff.


It's fine. We always intended books to count as valid Show HNs.


I like and appreciate it.



