
>We are now to the point where we have almost more gates than we know what to do with,

To me this points to the limits of the von Neumann architecture (and its underlying mathematics of recursive functions) as a mental framework for our thinking about computing.




You've piqued my curiosity; could you elaborate please?


I shall try, but this stuff gets really hairy, really really fast.

The von Neumann architecture is what almost all computers use today: you have (very roughly) an ALU (arithmetic logic unit) hooked up to a memory bank that stores both the program's data and the instructions the program consists of.
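
To make "stores both data and instructions" concrete, here is a toy stored-program machine in C (the mini instruction set is entirely made up for illustration): a single flat array holds the program and the data it operates on, and a fetch-decode-execute loop walks it.

    /* A toy stored-program machine: one memory array holds
     * both the instructions and the data they act on.      */
    #include <stdio.h>

    enum { LOAD, ADD, STORE, HALT };       /* made-up opcodes */

    int main(void) {
        /* program lives in cells 0..7, data in cells 9..11  */
        int mem[16] = {
            LOAD, 9,        /* acc = mem[9]                  */
            ADD, 10,        /* acc += mem[10]                */
            STORE, 11,      /* mem[11] = acc                 */
            HALT, 0,
            0,
            2, 40, 0        /* data: operands and result slot */
        };
        int pc = 0, acc = 0;
        for (;;) {                      /* fetch-decode-execute */
            int op = mem[pc], arg = mem[pc + 1];
            pc += 2;
            if      (op == LOAD)  acc = mem[arg];
            else if (op == ADD)   acc += mem[arg];
            else if (op == STORE) mem[arg] = acc;
            else break;                 /* HALT */
        }
        printf("mem[11] = %d\n", mem[11]);   /* prints 42 */
        return 0;
    }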

Now you can add a couple of cores to that, but you pretty soon start to run into problems -- threads that try to access the same data at the same time, race conditions, etc.
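
For example, here is the classic lost-update race sketched in C (a minimal sketch; compile with gcc -pthread): two threads increment the same counter without synchronization, and updates get lost.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;       /* shared, unprotected */

    static void *worker(void *arg) {
        for (int i = 0; i < 1000000; i++)
            counter++;             /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Expected 2000000, but typically prints less. */
        printf("counter = %ld\n", counter);
        return 0;
    }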

But the biggest problem is that under the von Neumann architecture all memory is shared, so any thread can access any other thread's memory. Keeping that shared view consistent across cores puts rather drastic limits on how much benefit you can get from adding new ones.

You also run into issues like the limited speed at which the main memory banks can be accessed. Caches can compensate for this to some degree, but they have their own problems.

But the fundamental problem with these designs is that they are from, and of, an era when clock speeds kept increasing and increasing.

Today we have a situation where transistors get smaller and smaller, but clock speeds no longer climb with them. So if you use these new transistors to build a traditional CPU, all the shrinking gets you is a really small chip.

What we need is an architecture inspired by something else. Personally I am kinda hoping it will be some form of message passing -- you run a lot of small (green) threads which each have their own private memory as well as the ability to send and receive packets of information to/from the other cores on the CPU.
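
In software terms, a toy version of that could look like the C sketch below (the mailbox API is my own strawman, not a real library): each "actor" is a thread with private state, and the only way to affect it is to send it a message.

    #include <pthread.h>
    #include <stdio.h>

    #define CAP 16

    typedef struct {               /* a tiny blocking mailbox */
        int buf[CAP], head, tail, count;
        pthread_mutex_t lock;
        pthread_cond_t not_empty, not_full;
    } mailbox;

    static void mbox_init(mailbox *m) {
        m->head = m->tail = m->count = 0;
        pthread_mutex_init(&m->lock, NULL);
        pthread_cond_init(&m->not_empty, NULL);
        pthread_cond_init(&m->not_full, NULL);
    }

    static void mbox_send(mailbox *m, int msg) {
        pthread_mutex_lock(&m->lock);
        while (m->count == CAP)
            pthread_cond_wait(&m->not_full, &m->lock);
        m->buf[m->tail] = msg;
        m->tail = (m->tail + 1) % CAP;
        m->count++;
        pthread_cond_signal(&m->not_empty);
        pthread_mutex_unlock(&m->lock);
    }

    static int mbox_recv(mailbox *m) {
        pthread_mutex_lock(&m->lock);
        while (m->count == 0)
            pthread_cond_wait(&m->not_empty, &m->lock);
        int msg = m->buf[m->head];
        m->head = (m->head + 1) % CAP;
        m->count--;
        pthread_cond_signal(&m->not_full);
        pthread_mutex_unlock(&m->lock);
        return msg;
    }

    static mailbox inbox;          /* the adder actor's mailbox */

    static void *adder(void *arg) {
        long sum = 0;              /* private state: never shared */
        for (int i = 0; i < 10; i++)
            sum += mbox_recv(&inbox);
        printf("sum = %ld\n", sum);
        return NULL;
    }

    int main(void) {
        mbox_init(&inbox);
        pthread_t t;
        pthread_create(&t, NULL, adder, NULL);
        for (int i = 1; i <= 10; i++)
            mbox_send(&inbox, i);  /* communicate by message, not shared memory */
        pthread_join(t, NULL);     /* prints sum = 55 */
        return 0;
    }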

You can have access to a (comparatively large but slower) shared memory bank too (like RAM today).

I like it because it works well with how you would design a cluster of computers (where you cannot afford the illusion of shared memory), with how computation is organized under the actor model (which I prefer to threads), and because it could be implemented without all that many changes to the CPU.


That wasn't a "try" - that was a success. Thank you!

If I may attempt a paraphrase: CPU caches stop being a bandage for slow access to RAM and become a valuable first-class citizen for each core of the CPU when coupled with the actor model.

Did I understand you correctly? Again, thank you.


Well, you could do that today if you as a programmer could manually tell the system "please load addresses x, y and z into the cache".
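
Something close to that already exists as a compiler hint: GCC and Clang provide __builtin_prefetch (and x86 has _mm_prefetch). The catch is that it is only a hint -- the hardware is free to ignore it, and you still can't pin or partition the cache:

    #include <stdio.h>

    #define N (1 << 20)

    int main(void) {
        static int data[N];          /* zero-initialized */
        long sum = 0;
        for (int i = 0; i < N; i++) {
            if (i + 16 < N)          /* stay in bounds */
                /* hint: start fetching the line we'll need soon */
                __builtin_prefetch(&data[i + 16], /* rw = */ 0, /* locality = */ 3);
            sum += data[i];
        }
        printf("sum = %ld\n", sum);
        return 0;
    }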

But if the cores of the CPU start to communicate via the actor model, then you wouldn't be using the memory close to the cores as a cache but as a storage area for messages that haven't been sent/processed yet, as well as possibly for thread-local storage.



