Hi, I'm the author of that intro. The talks which Ivan has been giving - there are links in that intro - go into everything in much more detail. But here's a quick overview of your specific questions:
1: we manage to issue 33 operations / cycle. This is easily a world record :) The way we do this is covered in the Instruction Encoding talk. We could conceivably push it further, but it's diminishing returns. We can have lots of cores too.
2: it's process agnostic; the dial goes all the way up to 11
3: the on-chip cache is much quicker than in conventional architectures, as the TLB is not on the critical path, and we typically see ~25% fewer reads on general-purpose code due to backless memory and implicit zero. Main memory is conventional memory, though; if your algorithm is zig-zagging unpredictably through main memory, we can't magic that away
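For concreteness, here's the kind of read-before-write pattern where implicit zero would help (ordinary C, nothing Mill-specific; whether a conventional OS already avoids some of these DRAM reads via shared zero pages is a separate question):

    /* Hypothetical illustration (not Mill code): a freshly allocated,
     * never-written buffer.  On a conventional machine the loads below
     * eventually pull zero-filled cache lines in from the memory system;
     * the Mill's claim is that lines with no backing store ("backless")
     * and implicitly-zero bytes are materialised in cache without a DRAM
     * read at all, which is where some of the saved reads come from. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        size_t n = 1 << 20;                 /* 1 Mi elements              */
        long *buf = calloc(n, sizeof *buf); /* allocated, never stored to */
        if (!buf) return 1;

        long sum = 0;
        for (size_t i = 0; i < n; i++)      /* read-before-write pattern  */
            sum += buf[i];

        printf("%ld\n", sum);               /* always 0                   */
        free(buf);
        return 0;
    }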
> the on-chip cache is much quicker than in conventional architectures, as the TLB is not on the critical path
I would really like to know your reasoning that the TLB is a major bottleneck in conventional CPUs. CPUs execute the TLB lookup in parallel with the cache lookup, so there is usually no added latency except on a TLB miss.
Basic research on in-memory databases suggests that eliminating the TLB would improve performance by only about 10%; that certainly isn't a realistic use case, and most of the benefit can be obtained simply by using larger pages. So I don't really know where your claim of 25% fewer reads comes from in relation to simply getting rid of virtual memory.
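For what it's worth, "simply using larger pages" is something you can try today on Linux; a minimal sketch (Linux-specific flags, huge pages must be configured, e.g. via /proc/sys/vm/nr_hugepages):

    /* Minimal sketch of requesting 2 MiB huge pages on Linux to cut TLB
     * pressure; if no huge pages are available the mmap fails and we
     * fall back to ordinary 4 KiB pages. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define LEN (64UL << 20)  /* 64 MiB */

    int main(void) {
        void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            /* Fall back to ordinary pages if no huge pages are free. */
            p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }
        }
        memset(p, 0, LEN);   /* touching the range needs far fewer TLB entries */
        munmap(p, LEN);
        return 0;
    }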
Right, most modern caches use the virtual address to get the cache index and the physical address for the tag comparison[1]. Since on x86 the bits needed for the index are the same between the virtual and physical address (they fall within the page offset), the entire L1 lookup can proceed in parallel with translation, though on other architectures like ARM you may need to finish the TLB step before the tag comparison[2].
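A quick back-of-envelope of why that parallel lookup works, using assumed (typical, but not universal) x86 L1 parameters of 32 KiB, 8-way, 64-byte lines and 4 KiB pages:

    /* Back-of-envelope: the set index plus the line offset together
     * address line_size * num_sets bytes.  If that span is <= the page
     * size, every index bit lies inside the untranslated page offset, so
     * the cache can start its index lookup from the virtual address while
     * the TLB produces the physical bits for the tag compare. */
    #include <stdio.h>

    int main(void) {
        unsigned line = 64, ways = 8, size = 32 * 1024, page = 4096;
        unsigned sets = size / (line * ways);   /* 64 sets -> 6 index bits      */
        unsigned span = line * sets;            /* bytes covered by offset+index */
        printf("sets = %u, offset+index span = %u bytes, page = %u bytes\n",
               sets, span, page);
        printf("index bits inside page offset: %s\n", span <= page ? "yes" : "no");
        return 0;
    }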
But while I think the Mill people are overselling the direct performance benefits here, the single address space lets them do a lot of other things, such as automatically backing up all sorts of state to the stack on a function call and handling any page fault that results the same way it would be handled had it come from a store instruction. And I think their backless storage concept requires it too.
Part of the reason the TLB is so fast is that it is fairly small, and being small it misses fairly often. Moving the TLB so it sits in front of DRAM means you can afford a 3-4 cycle TLB with thousands of entries.
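A toy average-latency model of that trade-off (all rates and cycle counts below are made-up placeholders, not Mill or x86 figures):

    /* Toy model: a small, fast TLB consulted on every access vs. a larger,
     * slower translation step paid only on the path to DRAM.  All numbers
     * are illustrative placeholders, not measurements. */
    #include <stdio.h>

    int main(void) {
        double l1_hit = 0.95, tlb_miss = 0.01;   /* assumed rates             */
        double l1 = 4, dram = 200;               /* assumed latencies, cycles */
        double small_tlb_walk = 30;              /* page-walk cost, assumed   */
        double big_tlb = 4;                      /* thousands of entries      */

        /* Conventional: translation is hidden when the TLB hits, but a
         * miss stalls the access with a page walk. */
        double conv = l1_hit * l1
                    + (1 - l1_hit) * (l1 + dram)
                    + tlb_miss * small_tlb_walk;

        /* "TLB in front of DRAM": the cache is accessed without waiting on
         * translation; the big TLB's latency is paid only on the miss path. */
        double mill = l1_hit * l1
                    + (1 - l1_hit) * (l1 + big_tlb + dram);

        printf("conventional ~%.2f cycles/access, translate-at-DRAM ~%.2f\n",
               conv, mill);
        return 0;
    }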
33 operations/cycle requires memory with (at least) 66 ports: 33 for reads and 33 for writes; otherwise some of those slots are effectively NOPs. For two-operand instructions the count goes to 99 = 33*3 ports, and for three-operand instructions (ternary operations) it goes to 132 = 33*4.
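Spelling that arithmetic out (a trivial sketch; the one-result-write-per-op assumption is mine):

    /* Port-count arithmetic behind the objection: each of the 33 issued
     * operations needs one port per source operand read plus (assumed
     * here) one port for its result write. */
    #include <stdio.h>

    int main(void) {
        const int slots = 33;
        for (int operands = 1; operands <= 3; operands++)
            printf("%d-operand ops: %d ports\n", operands, slots * (operands + 1));
        return 0;   /* prints 66, 99, 132 */
    }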
As far as I know, Elbrus 3M managed to achieve about 18 instructions per clock cycle, with VLIW and a highly complex register file whose design slowed the overall clock frequency to about 300MHz on a 0.9um process. To put everything in perspective, a plain Leon2 managed about 450MHz in the same process, without any tweaks or hand work, and Leon2 is not a speed champion.
So the question is: do you have your world record in simulation, or in real hardware such as an FPGA?