
I cool the wafer-scale integration by immersing it in a liquid that boils at 43 °C. The bubbles of the boiling liquid must not damage the surface layers of the wafer through cavitation, of course. The boiling liquid is in turn cooled by water and a sonic heat pump that moves the heat into a water tank, where the stored heat is used for showers or cooking [1].
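
A rough heat-balance sketch of the boil-off rate; the latent heat of vaporization is my assumption (142 kJ/kg is typical of engineered dielectric coolants such as 3M Novec 7000, which boils near 34 °C; the actual 43 °C fluid will differ):

    # Back-of-envelope: how much coolant must boil off to carry away the heat?
    LATENT_HEAT = 142e3   # J/kg, assumed (typical engineered dielectric coolant)

    for power_w in (700, 1e6):   # normal use vs. worst-case 1 MW TDP (see below)
        print(f"{power_w/1e3:8.1f} kW -> {power_w/LATENT_HEAT:6.3f} kg/s boil-off")

At 700 W the liquid barely simmers; at the worst-case 1 MW you would have to condense about 7 kg of vapor per second, which is why the activity factor below matters so much.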

Given 45 trillion transistors (45×10^12) at 3 femtojoules (3×10^-15 J) per switch and a 1 GHz clock (10^9 switches/s), you get about 135 megawatts if every transistor toggled every cycle; with a realistic activity factor of around 1%, that is on the order of 1 megawatt. These are ball-park numbers, back-of-the-envelope calculations. In reality I run full physics simulations and electrical SPICE simulations of the entire wafer on a supercomputer, namely the wafer-scale-integration FPGA prototypes and the wafer supercomputer itself.
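
The same envelope in code (the 1% activity factor is my assumption, to reconcile the full-toggle figure with the ~1 MW number):

    # Switching-power envelope for the full wafer.
    transistors = 45e12          # 45 trillion
    energy_per_switch = 3e-15    # 3 femtojoules
    clock_hz = 1e9               # 1 GHz

    full_toggle_w = transistors * energy_per_switch * clock_hz
    print(f"every transistor, every cycle: {full_toggle_w/1e6:.0f} MW")   # 135 MW

    activity = 0.01              # assumed: ~1% of transistors switch per cycle
    print(f"at {activity:.0%} activity: {full_toggle_w*activity/1e6:.2f} MW")  # 1.35 MW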

The EDA (Electronic Design Automation) software tools we write ourselves in Smalltalk and OMeta, and these also need our own supercomputer to run on. Of course the feedback loops are Bret Victor style visualizers [3][2]. Apple Silicon and this small company [4] demonstrate that only with custom EDA tools can you design the ultra-low-power transistors that keep our wafer from melting.

The FPGA prototype is a few thousand Cyclone 10 or PolarFire FPGAs with a few terabytes/sec of memory bandwidth, or a cluster of Mac Studio Ultras networked together in a Slim Fly network that can double as a neighbourhood solar smart grid [5]. You need a dinosaur egg to design a dinosaur, or is it the other way around? [6]

A TDP (Thermal Design Power) of 1 megawatt from a 450 mm disk is huge; it would melt the silicon wafer. But not all transistors are switching all the time, and we have the cooling effect of the liquid.
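
The heat flux makes the point (a minimal sketch comparing the worst case to the normal-use draw quoted further down):

    import math

    # Heat flux over a 450 mm diameter wafer.
    area_cm2 = math.pi * (45.0 / 2) ** 2        # ~1590 cm^2
    for power_w in (1e6, 700):                  # worst-case TDP vs. normal use
        print(f"{power_w:9.0f} W -> {power_w/area_cm2:7.2f} W/cm^2")

Roughly 629 W/cm² at 1 MW, far beyond anything a cold plate handles, versus a mild 0.44 W/cm² at 700 W.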

We must power the wafer from a small distance, inductively or capacitively, best with AC. So we need AC-DC converters (rectifiers) on the wafer, plus self-test circuits that find defects from dust and contamination, isolate those parts, and reroute the network on the wafer.
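
A toy sketch of the reroute idea (my own illustration, not the actual on-wafer network): treat the wafer as a 2D mesh of tiles, mark the defective tiles found by self-test, and route around them with a breadth-first search.

    from collections import deque

    def reroute(size, defects, src, dst):
        """Shortest path on a size x size tile mesh, avoiding defective tiles."""
        frontier, seen = deque([(src, [src])]), {src} | defects
        while frontier:
            (x, y), path = frontier.popleft()
            if (x, y) == dst:
                return path
            for nxt in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                if all(0 <= c < size for c in nxt) and nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))
        return None   # unreachable: isolate that region of the wafer entirely

    # Two tiles killed by a dust particle; traffic routes around them.
    print(reroute(4, {(1, 1), (1, 2)}, (0, 0), (3, 3)))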

[1] https://vimeo.com/731037615 (at 21 minutes)

[2] https://youtu.be/V9xCa4RNfCM?t=86

[3] https://youtu.be/oUaOucZRlmE?t=313

[4] https://bit-tech.net/news/tech/cpus/micro-magic-64-bit-risc-...

[5] https://www.researchgate.net/profile/Merik-Voswinkel/publica...

[6] Frighteningly Ambitious Startup Ideas (dinosaur egg): https://youtu.be/R9ITLdmfdLI?t=360 and http://www.paulgraham.com/ambitious.html




OK, one thing I don't understand. You're talking about a ~1 MW supercomputer. With $100K funding you could just about pay for the electricity of this thing for 3-4 weeks (at US electricity prices). Actually building it would cost on the order of tens, if not hundreds, of millions. I gathered from one video that you're an independent research group - how is this all being funded?


I am an independent researcher; my funding is zero and I am therefore rather poor. I get paid for technical work on the side, like programming or building custom batteries, tiny off-grid houses, or custom computer chips (to charge batteries better). I am for hire.

Solar electricity prices can be below 1 cent per kWh [1]. I generate 20 kW of solar in my own garden and store some of it in my own custom battery system with our own charger chips. The prototype supercomputer warms my room. I hope to move to an off-grid tiny house of my own design in a nature reserve in Spain or Arizona, for 2.5 times the energy yield, an even lower cost of living, and cheaper 10 Gbps internet.

If you only run the computation during daylight and then move it with the sun to wafers in two other time zones when those locations have sunlight, you stay below 1 cent per kWh. Some supercomputers do this already. In contrast, running 24/7 from batteries raises the cost to almost 2 cents per kWh, still far below bulk electricity prices in datacenters. Batteries turn out to be more expensive than having three solar supercomputers in three time zones. The lesson is that energy costs dominate the cost of compute hardware, even at our cheap transistor prices. Hence our ultra-low-power transistors: not just to keep the wafer from melting, but mostly to make compute cheaper (for the cloud).
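
The battery-versus-three-sites trade-off, as a hedged sketch (the $100K machine cost is from further down; the battery pack price and the 16 dark hours are my assumptions):

    # Follow-the-sun (3 sites) vs. battery-backed 24/7 (1 site), for a 1 MW machine.
    load_kw         = 1000
    hours_per_year  = 8760
    solar_ct_kwh    = 1.0          # direct solar, cents/kWh
    battery_ct_kwh  = 2.0          # solar stored and retrieved, cents/kWh
    machine_cost    = 100e3        # one 180 nm wafer supercomputer (see below)
    battery_usd_kwh = 200          # assumed pack price
    storage_kwh     = load_kw * 16 # bridge 16 dark hours

    three_sites = 3 * machine_cost + load_kw * hours_per_year * solar_ct_kwh / 100
    one_site = (machine_cost + storage_kwh * battery_usd_kwh
                + load_kw * hours_per_year * battery_ct_kwh / 100)
    print(f"3 sites, daylight only: ${three_sites:,.0f} first year")
    print(f"1 site + batteries:     ${one_site:,.0f} first year")

Under these assumptions the single battery-backed site costs roughly nine times more in its first year, almost all of it in the battery pack.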

The wafer-scale integration at 180 nm costs around $500 per wafer to manufacture. The mask set is a one-time cost of $100K, amortised over the wafers you mass produce (say 1,000 of them), which is how you get to roughly $600 per wafer for 10,000 cores at >1 GHz.
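
Spelled out (the 1,000-wafer run size is my assumption; the source gives only the $100K, $500, and $600 figures):

    mask_set_usd  = 100e3   # one-time mask cost at 180 nm
    per_wafer_usd = 500     # marginal manufacturing cost per wafer

    for run in (100, 1000, 10000):
        total = per_wafer_usd + mask_set_usd / run
        print(f"{run:6d} wafers -> ${total:,.0f} each")
    # 1,000 wafers -> $600 each: the quoted figure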

These $600 wafer supercomputers draw 100-700 watts in normal use, because not all transistors switch all the time at 1 GHz. They use asynchronous ultra-low-power transistors (no global clock wasting 60% of your energy and transistors), and you don't touch all SRAM locations all of the time. The larger 3 nm wafer-scale integrations won't use 1 MW either, just a few kW: well under a watt per core.
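
The implied per-core power budget (taking "a few kW" as 3 kW for the 3 nm case; both numbers are simple divisions of the figures above):

    for label, watts, cores in (("180 nm", 700, 10_000), ("3 nm", 3000, 1_000_000)):
        print(f"{label}: {watts/cores*1000:6.1f} mW per core")
    # 180 nm:   70.0 mW per core
    # 3 nm:      3.0 mW per core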

Actually building these supercomputers will cost $100K at 180 nm, $3 million at 28 nm, or around $30 million at 3 nm. The FPGA prototypes cost $10 per core, similar to GPU prices. This includes the cost of writing the software, the IDE, the compilers, etc.

You can run x86 virtual machines unchanged on our 10,000 - 1,000,000 manycore wafer-scale integrations at 1 cent per kWh. This is by far the cheapest hyper-scale datacenter compute price ever, and it may come to outcompete current cloud datacenters, which currently consume more than 5% of all the world's electricity. And by locating our wafer supercomputer in your hot-water storage tank at home [6], you monetise its waste heat, so the compute cost drops below 1 cent per X cores (dependent on the efficiency of your software [5]). Another place you need these ultra-low-power wafer-scale supercomputers is in self-driving cars, robots, space vehicles, and satellites: you can't put racks of computers there, and you need to be frugal with battery storage.
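
A sketch of the waste-heat economics in the spirit of [6] (the 10 ct/kWh value of displaced hot-water heating is my assumption):

    # A wafer in the hot-water tank: the heat it displaces can be worth more
    # than the electricity it consumes.
    wafer_w        = 700
    kwh_per_day    = wafer_w * 24 / 1000    # 16.8 kWh of heat per day
    electricity_ct = 2.0                    # battery-backed solar, ct/kWh
    heat_value_ct  = 10.0                   # assumed value of hot water, ct/kWh

    cost  = kwh_per_day * electricity_ct / 100
    value = kwh_per_day * heat_value_ct / 100
    print(f"daily electricity cost: ${cost:.2f}, heat credit: ${value:.2f}")
    # the net compute cost goes negative if the heat is fully used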

These CMOS wafer-scale integration supercomputers are themselves prototypes for the carbon solar cells and carbon transistors we will grow from sunlight and CO2 a decade from now [2]. Then they will cost almost nothing and run on completely free solar energy.

Eventually we will build a Dyson swarm around our sun and have infinite free compute [3], called Matrioshka Brains [4]. To paraphrase Arthur C. Clarke: if you take these plans too seriously you will go bankrupt; if your children do not take these plans seriously, they will go bankrupt.

[1] https://www.researchgate.net/profile/Merik-Voswinkel/publica...

[2] https://web.pa.msu.edu/people/yang/RFeynman_plentySpace.pdf

[3] https://en.wikipedia.org/wiki/Matrioshka_brain

[4] https://gwern.net/docs/ai/1999-bradbury-matrioshkabrains.pdf

[5] https://youtu.be/K3zpgoazRAM?t=1602

[6] https://tech.eu/features/7782/nerdalize-wants-to-heat-your-h...



