I remember seeing a demo where A.K. used an experimental OS, built with ~1k LOC using the STEPS approach, to actually run his slides. I never found the link to it again (if someone has it I'd appreciate it), but even more importantly, I'd love to know what happened with that OS. It would seem like a great research OS going forward if it really had a GUI, networking and a FS expressed in such a small amount of user code. It also seems to me the project coming closest to Engelbart's vision (as their NLS also did everything just by meta-programming an assembler with increasingly high levels of abstraction).



Alan Kay addresses Qualcomm https://vimeo.com/82301919


Thank you! Is he actually running their own OS here, or is it just a scripted slide application? What I saw was more of a smaller talk given to students, if I remember correctly, where he goes a bit into the technical details of his setup.


I am one of three people who have this code running live. It is way more amazing than you think; it is not scripted at all. It's a full OS/GUI personal computer in 20 KLOC, with no external libraries. The graphics, for example, are just 435 lines of code (versus millions for Cairo).


Have you considered creating e.g. a YouTube series going through it? Or contacting e.g. Computerphile? This is way too awesome not to share with the world. How did you get involved? Have you been working for Alan Kay?


I got involved when I was 17 years old, back in 1981, reading about the Alto and Smalltalk in Byte magazine. Alan Kay and Dan Ingalls at Xerox PARC had built this amazing GUI, programming language and virtual machine [4]. By 1985 I was building my first Smalltalk Transputer supercomputer and typing in the code listing of the Blue Book. Byte magazine even invited us to publish this supercomputer on their cover as a DIY construction kit for their readers.

Things got really interesting in 1996 when Alan and Dan released Squeak Smalltalk with Etoys as free and open source with this almost metacircular virtual machine.

In 2008 we had progressed to designing SiliconSqueak, a Smalltalk wafer scale integration: a 10,000-core manycore microprocessor with the late-bound, message-passing Squeak Smalltalk as its IDE and operating system, and the RoarVM virtual machine with adaptive compilation. We are still working on that; it costs $100K for the mask set that you send to the TSMC chip fab, and you get back 180nm wafers with the 10-billion-transistor supercomputer for $600 apiece. Getting funding for mask sets at smaller nodes, like $3 million for 28nm or over $50 million for the most advanced 3nm node with a million cores, is a life's work.

We have not been directly working for Alan Kay, Dan Ingalls or David Ungar but we exchange emails, write scientific papers [2], give lectures [1] and meet in online video sessions [1] with the vibrant Smalltalk community.

When these researchers release the source code like the STEPS project, RoarVM or the Lively Kernel we try to port it to our SiliconSqueak supercomputer prototypes and of course we develop our own Smalltalk of the Future, parallel adaptive compilers, virtual machines and hardware X86 emulators.

So to answer your first question: yes, there are hundreds of lectures and talks on YouTube, and we share all this work with the world. Bret Victor's, Dan's or Alan's lectures are just a small part of that.

The hard part of our research is getting the $100K funding together for the 10,000-core supercomputer; a $2,000 wafer scale integration (WSI) computer is a little too big an amount for a crowdfunding project.

So I still hope Y Combinator will fund me, but they have this silly 'no single founder' restriction. You seem to be a researcher at ETH Zurich; why don't you join me as a cofounder?

We make a 3-cent Smalltalk microcontroller (an Alto on a chip) and a $1 version with 4 MB and gigabit Ethernet. With Smalltalk, Etoys and Scratch built in, you get a superior Raspberry Pi/Arduino successor that 5-year-old children can program, because Smalltalk and Etoys were designed with children in mind.

Our Morphle WSI would be a great desktop supercomputer, but the real advance would be the 3nm wafer scale integration at a $20,000 retail price: more than 40 trillion transistors, a runtime-reconfigurable pool of 1 million cores, and the full Smalltalk language, IDE, GUI and OS in 10,000 lines of code, running at exaflops. Way more advanced than CUDA on a GPU. I gave a 2-hour talk on that:

[1] https://vimeo.com/731037615

[2] https://scholar.google.nl/citations?user=mWS92YsAAAAJ&hl=en&...

https://scholar.google.nl/citations?hl=en&user=6wa49gkAAAAJ

[3] https://web.archive.org/web/20140501222143/http://www.morphl...

[4] https://youtu.be/id1WShzzMCQ?t=519


Super interesting stuff, will go through it! Somewhat unfortunately, I've mostly departed from research and defected to the financial industry. I actually recently gave a talk about Engelbart and his ideas to my colleagues, in case someone here finds this interesting:

https://www.youtube.com/watch?v=jIlzXEaOH1I


You seem to be in a perfect position to advise Bret Victor or us about financing options for this work, especially the non-research parts. For example, we apply our wafer scale technology and Smalltalk software to energy systems and energy production at 1 cent per kWh, around 60 times lower than European grid prices. That should interest the financing sector and asset management.


... holymoly.. that's certainly a kind of perseverance, fortitude and probably obsession :)

how are you going to cool the wafer? what's the TDP? :o

$100K sounds very doable for crowdfunding, or maybe you just need to find one eccentric multimillionaire.


I cool a wafer scale integration by immersing it in a liquid that boils at 43 °C. The bubbles (cavitation) of the boiling liquid should not damage the surface layers of the wafer, of course. This boiling liquid is further cooled by water and a sonic heat pump that moves the heat into a water tank, where the stored heat is used for showers or cooking [1].
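
For scale, a rough boil-off estimate in Python (a sketch; the heat load and the ~100 kJ/kg latent heat, in the range of common fluorinated coolants, are my assumptions, not figures from the actual design):

    # Rough estimate: how much coolant must boil off to carry away the heat.
    heat_load_watts = 5_000        # assumed "a few kW" realistic load
    latent_heat_j_per_kg = 100e3   # assumed ~100 kJ/kg fluorinated coolant

    boil_off_kg_per_s = heat_load_watts / latent_heat_j_per_kg
    print(f"{boil_off_kg_per_s * 1000:.0f} g/s of vapour")  # ~50 g/s, recondensed by the water loop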

Given 45 trillion transistors (45×10^12) at 3 femtojoules (3×10^-15 J) per switch, toggling at 1 GHz (10^9 Hz), the absolute worst case is over 100 megawatts; with only a small fraction of the transistors switching in any cycle you get around 10^6 joules/sec = 1 megawatt. These are ball-park numbers, back-of-the-envelope calculations. In reality I run full physics simulations and electrical SPICE simulations of the entire wafer on a supercomputer, i.e. on the wafer scale integration FPGA prototypes and the wafer supercomputer itself.
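
The same estimate in Python (my sketch; the activity factor is an assumption chosen to land on the ~1 megawatt figure above):

    # Back-of-the-envelope switching power for the 3nm wafer.
    transistors = 45e12          # 45 trillion transistors
    joules_per_switch = 3e-15    # 3 femtojoules per transistor switch
    frequency_hz = 1e9           # 1 GHz

    worst_case_w = transistors * joules_per_switch * frequency_hz
    print(f"every transistor switching: {worst_case_w / 1e6:.0f} MW")  # 135 MW

    activity = 0.0075            # assumed fraction of transistors switching per cycle
    print(f"at {activity:.2%} activity: {worst_case_w * activity / 1e6:.0f} MW")  # ~1 MW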

We write the EDA (Electronic Design Automation) software tools ourselves in Smalltalk and OMeta, and these also need our own supercomputer to run on. Of course the feedback loops are Bret Victor style visualizers [3][2]. Apple Silicon and this small company [4] demonstrate that only with custom EDA tools can you design the ultra-low-power transistors that keep our wafer from melting.

The FPGA prototype is a few thousand Cyclone 10 or PolarFire FPGAs with a few terabytes/sec of memory bandwidth, or a cluster of Mac Studio Ultras networked together in a Slim Fly network that can double as a neighbourhood solar smart grid [5]. You need a dinosaur egg to design a dinosaur, or is it the other way around? [6]

A TDP (Thermal Design Power) of 1 megawatt from a 450 mm disk is huge; it would melt the silicon wafer. But not all transistors are switching all the time, and we have the cooling effect of the liquid.

We must power the wafer from a small distance inductively or capacitively, best with AC. So we need AC-DC converters on the wafer, plus self-test circuits that find defects from dust and contamination, isolate those parts, and reroute the network on the wafer.
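
A toy illustration of the isolate-and-reroute idea in Python (the grid size, defect set and shortest-path routing are my assumptions, purely illustrative):

    from collections import deque

    # Toy model: route around tiles that failed self-test on a 4x4 tile grid.
    N = 4
    defective = {(1, 1), (2, 1)}  # tiles disabled after self-test

    def route(src, dst):
        """Shortest path over working tiles, or None if unreachable."""
        queue, seen = deque([(src, [src])]), {src}
        while queue:
            tile, path = queue.popleft()
            if tile == dst:
                return path
            x, y = tile
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nxt in seen or nxt in defective:
                    continue
                if 0 <= nxt[0] < N and 0 <= nxt[1] < N:
                    seen.add(nxt)
                    queue.append((nxt, path + [nxt]))
        return None

    print(route((0, 0), (3, 3)))  # detours around the defective tiles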

[1] https://vimeo.com/731037615 at 21 minutes

[2] https://youtu.be/V9xCa4RNfCM?t=86

[3] https://youtu.be/oUaOucZRlmE?t=313

[4] https://bit-tech.net/news/tech/cpus/micro-magic-64-bit-risc-...

[5] https://www.researchgate.net/profile/Merik-Voswinkel/publica...

[6] Frighteningly Ambitious Startup Ideas (dinosaur egg)

https://youtu.be/R9ITLdmfdLI?t=360

http://www.paulgraham.com/ambitious.html


OK, one thing I don't understand. You're talking about a ~1 MW supercomputer. With $100K funding you could just about pay for the cost of electricity of this thing for 3-4 weeks (using US electricity prices). Actually building it would be on the order of at least tens, if not hundreds, of millions. I gathered from one video that you're an independent research group; how is this all being funded?


I am an independent researcher, my funding is zero and I am therefore rather poor. I get paid for technical work on the side, like programming or building custom batteries, tiny off-grid houses or custom computer chips (to charge batteries better). I am for hire.

Solar electricity prices can be below 1 cent per kWh [1]. I generate 20 kW of solar in my own garden and store some of it in my own custom battery system with our own charger chips. The prototype supercomputer warms my room. I hope to move to an off-grid tiny house of my own design in a nature reserve in Spain or Arizona to get 2.5 times more energy yield, an even lower cost of living and cheaper 10 Gbps internet.

If you only run the computation during daylight and then move the computation with the sun to two wafers in two other timezones when those locations have sunlight, you stay below 1 cent per kWh. Some supercomputers do this already. In contrast, running 24/7 from batteries raises the cost to almost 2 cents per kWh, still far below bulk electricity prices in datacenters. Batteries turn out to be more expensive than having three solar supercomputers in three time zones. You learn from all this that energy dominates the cost of compute, even with transistors as cheap as ours. Hence our ultra-low-power transistors: not just to prevent the wafer from melting, but mostly to make compute cheaper (for the cloud).
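
A minimal sketch of the follow-the-sun idea in Python (the site list, UTC offsets and the 8-hour solar window are my assumptions):

    from datetime import datetime, timezone

    # Three hypothetical wafer sites roughly 8 timezones apart.
    SITES = {"Spain": 1, "Arizona": -7, "Japan": 9}  # illustrative UTC offsets

    def sunny_site(now_utc):
        """Pick the site whose local time is inside a 10:00-18:00 solar window."""
        for site, offset in SITES.items():
            if 10 <= (now_utc.hour + offset) % 24 < 18:
                return site
        return None  # gap in coverage: fall back to batteries

    # The computation (and its state) migrates to whichever wafer has sun.
    print(sunny_site(datetime.now(timezone.utc)))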

The wafer scale integration at 180nm costs around $600 per wafer to manufacture: the mask set costs $100K once, amortised over a run of wafers that cost about $500 each to produce, which is how you get to roughly $600 for 10,000 cores at >1 GHz.
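
The amortisation arithmetic as a quick check (the 1,000-wafer production run is my assumption, chosen so the numbers land on $600):

    mask_set = 100_000  # one-time 180nm mask set, USD
    per_wafer = 500     # marginal manufacturing cost per wafer, USD
    run_size = 1_000    # assumed production run

    unit_cost = per_wafer + mask_set / run_size
    print(f"${unit_cost:.0f} per 10,000-core wafer")  # $600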

These $600 wafer supercomputers use 100-700 watts in normal use, because not all transistors switch all the time at 1 GHz. They use asynchronous, ultra-low-power transistors: there is no global clock wasting 60% of your energy and transistors, and you don't touch all SRAM locations all the time. The larger 3nm wafer scale integrations won't use 1 MW either, just a few kilowatts, which works out to a few milliwatts per core.

Actually building these supercomputers will cost $100k for 180nm, $3 million at 28nm or around $30 million at 3nm. The FPGA prototypes cost $10 per core, similar to GPU prices. This includes the cost to write the software, the IDE, compilers, etc.

You can run X86 virtual machines unchanged on our 10,000 to 1,000,000-core wafer scale integrations at 1 cent per kWh. This is by far the cheapest hyperscale datacenter compute price ever, and it may come to outcompete current cloud datacenters, which currently consume more than 5% of all the world's electricity. And by locating our wafer supercomputers in your hot water storage tank at home [6], you'll monetise the waste heat, so the compute cost drops below 1 cent per X cores (dependent on the efficiency of your software [5]). Another place you need these ultra-low-power wafer scale supercomputers is in self-driving cars, robots, space vehicles and satellites: you can't put racks of computers there, and you need to be frugal with battery storage.

These CMOS wafer scale integration supercomputers are themselves prototypes for the carbon solar cells and carbon transistors we will grow from sunlight and CO2 a decade from now [2]. Then they will cost almost nothing and run on completely free solar energy.

Eventually we will build a Dyson Swarm around our sun and have infinite free compute [3], called Matrioshka Brains [4]. To paraphrase Arthur C. Clarke: if you take these plans too seriously you will go bankrupt. If your children do not take these plans seriously, they will go bankrupt.

[1] https://www.researchgate.net/profile/Merik-Voswinkel/publica...

[2] https://web.pa.msu.edu/people/yang/RFeynman_plentySpace.pdf

[3] https://en.wikipedia.org/wiki/Matrioshka_brain

[4] https://gwern.net/docs/ai/1999-bradbury-matrioshkabrains.pdf

[5] https://youtu.be/K3zpgoazRAM?t=1602

[6] https://tech.eu/features/7782/nerdalize-wants-to-heat-your-h...


The OS (in 20K lines of code) is called "Frank"; in the talks where Alan uses it for his slides, at one point he zooms out and you can see a cartoon Frankenstein monster in the top-left corner.

You might find this list of Kay's talks interesting:

https://tinlizzie.org/IA/index.php/Talks_by_Alan_Kay


Please see the comments on this Morphle HN account for those Alan Kay talks, or mail morphle &at& ziggo &dot& nl for all those Alan Kay links and student lectures you remember.



