> An interpreter is still producing x86 instructions at some point, right?
Not dynamically. They just call predefined C (or whatever the interpreter was written in) functions based on some internal mechanism.
> Or else what does the CPU execute?
Usually either the interpreter is just walking the AST and calling C functions based on the parse tree’s node type (this is very slow), or it converts the AST into an opcode stream (not x86-64 opcodes, just internal names for integers, like OP_ADD = 0, OP_SUB = 1, etc.) when parsing the file, and then the interpreter’s “core” looks something like a gigantic switch statement with case OP_ADD: add(lhs, rhs) type cases, “add” in this case being a C function that implements the add semantics in that language. (The latter approach, where the input file is converted to some intermediate form for more efficient execution after the parse tree is derived, is more properly termed a virtual machine, and “interpreter” strictly refers only to the AST-walking approach. People tend to use “interpreter” pretty broadly in informal conversations, but Python is, strictly speaking, a VM, not an interpreter.)
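For a concrete picture, here’s a minimal sketch of that switch-based core in C. The opcode names, the stack layout, and the `add`/`sub` helpers are all made up for illustration, not taken from any real VM:

```c
#include <stdio.h>

/* Hypothetical internal opcodes -- just small integers, not x86-64 opcodes. */
enum opcode { OP_PUSH, OP_ADD, OP_SUB, OP_PRINT, OP_HALT };

/* A tiny hand-written "opcode stream": push 2, push 3, add, print, halt.
   Operands for OP_PUSH follow the opcode inline. */
static const int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };

/* Predefined C functions implementing the guest language's semantics. */
static int add(int lhs, int rhs) { return lhs + rhs; }
static int sub(int lhs, int rhs) { return lhs - rhs; }

int main(void) {
    int stack[64];
    int sp = 0;               /* stack pointer */
    const int *ip = program;  /* "instruction pointer" into the opcode stream */

    /* The VM's "core": a giant switch over opcodes. The only machine code the
       CPU ever runs is whatever the C compiler emitted for this loop -- nothing
       is generated dynamically. */
    for (;;) {
        switch (*ip++) {
        case OP_PUSH:
            stack[sp++] = *ip++;
            break;
        case OP_ADD: {
            int rhs = stack[--sp], lhs = stack[--sp];
            stack[sp++] = add(lhs, rhs);
            break;
        }
        case OP_SUB: {
            int rhs = stack[--sp], lhs = stack[--sp];
            stack[sp++] = sub(lhs, rhs);
            break;
        }
        case OP_PRINT:
            printf("%d\n", stack[sp - 1]);
            break;
        case OP_HALT:
            return 0;
        }
    }
}
```

Running this prints 5: the loop just walks an array of integers and dispatches to ordinary, already-compiled C functions.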
In either case, the only thing emitting x86-64 is the compiler that built the interpreter’s binary.
> Am I totally misunderstanding how interpreters work?
You’re confusing them with JITs.
If every interpreter had to roll their own dynamic binary generation, they’d be a hell of a lot less portable (like JITs).