Rosette: a solver-aided programming language that extends Racket (emina.github.io)
93 points by fitzwatermellow on April 2, 2016 | 8 comments



This is seriously impressive stuff. "Solver-aided programming language" might sound esoteric, but it's the natural evolution of the same trend towards higher-level, more declarative programming that replaced 'goto' with 'for', and 'for' with 'map' (a minimal sketch of what that looks like follows this comment). You can judge the merit of a programming approach by how much it punches above its weight, and the list of applications written in Rosette is short but impressive[1] (especially considering it was only released in 2013). I'm particularly impressed by Chlorophyll, a "synthesis-aided programming model and compiler for GreenArrays GA144, a minimalist low-power spatial architecture":

>We used Chlorophyll to compile an accelerometer-based hand-gesture recognition application for GA144. This application occupied 83 nodes out of 144 nodes. In this demo, GA144 was running on 1.8 V, and the application drew 0.24 mA. This means we should be able to run this application using two lemons![2]

If you've never heard of the GA144, it's an amazing chip designed by Chuck Moore (inventor of Forth) that provides computational grunt comparable to an i7 for a few milliwatts, on a 180nm process no less, at the cost of an awkward architecture: an array of 144 tiny cores, each with very little RAM, whose machine language is Forth. Because of this architecture, and despite considerable interest, few have actually managed to program the GA144 to do anything interesting. This demo of an application built with Rosette might just be the most interesting thing anyone's ever gotten a GA144 to do. On that basis alone, I'd say they're on to something.

Aside: imagine a "demoscene" of who could run the most advanced application on a lemon!

[1] http://emina.github.io/rosette/apps.html

[2] http://pl.eecs.berkeley.edu/projects/chlorophyll/
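
To make "solver-aided" concrete: here's a minimal sketch of the kind of synthesis query Rosette supports, assuming the current Rosette API (the toy constraint is illustrative and has nothing to do with Chlorophyll):

  #lang rosette
  ; Ask the solver for a constant c such that, for every integer x,
  ; (* c x) equals (+ x x).
  (define-symbolic x c integer?)
  (define sol
    (synthesize #:forall (list x)
                #:guarantee (assert (= (* c x) (+ x x)))))
  (evaluate c sol)  ; => 2

The programmer states the specification; the solver fills in the missing piece of the program. Chlorophyll applies that same idea to generating code for the GA144.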


This does seem very interesting! Because of your comment I started reading the architecture document for the GreenArrays[0] chip, and it's totally different to anything I've seen: a network of processing units, and the whole thing without a clock anywhere!

I'm interested in what makes you say it "provides computational grunt comparable to an i7 for a few milliwatts", though - could you elucidate? Do you mean in terms of performance-per-watt?

[0]: http://www.greenarraychips.com/home/documents/greg/PB002-100...


> I am interested what makes you say it "provides computational grunt comparable to an i7 for a few milliwatts"

I've been quite interested in GreenArrays, but I'm quite sure that's basically nonsense. The GA144 can do lots of interesting stuff, but comparing it with an i7 is ill-advised, if for no other reason than that you have to provide your own DRAM interface, while the i7 typically comes with that in the form of chipset support.

The GA144 can do MD5, but I'm not convinced it can do it as well as, let alone better than, 4 or 8 i7 cores: http://www.greenarraychips.com/home/documents/pub/AP001-MD5....

In short, I think such a claim misbrands the GreenArrays chip, setting potential users up for disappointment while at the same time selling the chip short.


Interesting - it reminds me a bit of the game TIS-100[0], which is built around a similar idea: an array of very simple units interacting through I/O ports, blocking until a read/write to another unit can complete. I didn't know there was a real architecture like that!

edit: It seems the transputer[1] was similar; I hadn't heard of that either.

[0]: http://www.zachtronics.com/tis-100/

[1]: https://en.wikipedia.org/wiki/Transputer#Links


The transputer actually has a successor called the XCore[0], produced by XMOS, a company founded by the same fellow who came up with the transputer (David May).

It's a similar concept again: a multi-core chip for embedded applications whose cores communicate over an on-chip network. Each core is like a regular CPU with a clock, though - the GreenArrays approach seems to be completely asynchronous.

[0] http://xmos.com/


Exactly. Why would I write a small, incomplete, and much slower custom solver for my compiler myself when I can just use a fast SAT solver (CryptoMiniSat and friends) that's complete and has an acceptable license?

Look at Scala, Pony, Haskell - but really any modern language working on dependent types: Python, PHP, cperl, Ruby, ...
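
As a sketch of that "reuse an off-the-shelf solver" idea - here using Rosette, which delegates to an SMT solver such as Z3 rather than to CryptoMiniSat directly, and an illustrative peephole rewrite rather than any real compiler:

  #lang rosette
  ; Check a peephole rewrite for soundness over all 32-bit values:
  ; is (x * 2) always equal to (x << 1)?
  (define-symbolic x (bitvector 32))
  (define original  (bvmul x (bv 2 32)))
  (define optimized (bvshl x (bv 1 32)))
  ; verify searches for a counterexample; (unsat) means the rewrite is sound.
  (verify (assert (bveq original optimized)))

Writing the equivalence check is a one-liner; writing a solver that could answer it well is a research project.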


Interesting, but I wanted to read an easier example; see, for instance: http://emina.github.io/rosette/doc/rosette-guide/ch_essentia...
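
For a taste without following the link, the simplest kind of query looks roughly like this (a minimal sketch, assuming the current Rosette syntax; the constraint is just an illustration):

  #lang rosette
  ; Declare a symbolic integer and ask the solver for a value
  ; satisfying x * 2 = 10.
  (define-symbolic x integer?)
  (define sol (solve (assert (= (* x 2) 10))))
  (evaluate x sol)  ; => 5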


Something very similar for Haskell, as a library: http://leventerkok.github.io/sbv/



