I really love GreenArrays and that entire social circle. It's an older group of folks who bought a historic house in Wyoming and do chip design, programming, everything with their own home-brew minimal set of software. It's an alternative universe of programming. It's almost a software and hardware design religion. I think everyone should know about it and know that it's possible to work this way, even if you wouldn't personally want to. You may even find some of their arguments compelling. I love it.
Has anyone used the GreenArrays technology for any real-world applications? It seems to fit somewhere between an FPGA and a microcontroller, but the benefits aren't obvious to me. Maybe it uses less power than a microcontroller, but I can't find any clear-cut evidence that this is the case for a real-world application.
You may find the following presentations on using the GA144 entertaining, even if a bit of a stretch. As Chuck Moore once said (I'm still trying to find the reference), Forth is about sharing ideas rather than code. These videos are not pure coding sessions (though they have their share of coding) but a mix of engineering and problem solving.
- echolocation like bats
- heart monitor
- digital image processing
- building a GA144-controlled robot
- low-speed USB host
I think it's more or less a continuation/extension of the ideas of FORTH. You should read their app notes and the writing of Chuck Moore, Greg Bailey, Jeff Fox, Charley Shattuck, and other FORTH people to understand the ideas. I've also found them very friendly to talk to over the phone.
From your original comment I got the impression that there was a Forth collective somewhere on a commune in Wyoming. I was VERY interested in that but will settle for the wisdom of the Forth luminaries.
Here I was wondering how I never heard of the famed Forth collective of Wyoming... Still might visit them sometime just to see what cool things they might be doing. I wonder if they have a discussion venue there.
I just warn you... I think the impulse is to look at this software and think "ok this stuff is kinda rough around the edges. I know, I'm gonna write my own IDE and Compiler and all that in my own language of preference, and use that instead." Do not do this. Learn their tools, their way of doing things, and really try to immerse yourself if you want to learn. Write your own FORTH. It really is a religion with strict opposition to unnecessary abstraction and complexity to a degree that will put most contemporary programmers off balance. If you go in with an open mind, you'll get a lot out of it.
Greetings from Greg Bailey in Cheyenne, Wyoming where GreenArrays can say, paraphrasing Steve McQueen at the end of the superb movie Papillon, "We're still here, you Bastards!!!"
As an experiment, I will be happy to address any question that anyone might have, including those appearing in this thread, at the GreenArrays shop and lecture hall in Second Life. I propose to stand there, virtually, at 0800 PDT / 1500 UTC / 1700 CEST / midnight in Sydney, AU, on Wednesday, 13 May and again on Saturday, 16 May. The SLURL for the shop is http://maps.secondlife.com/secondlife/Deneb/80/228/27 and the name of the shop is "GreenArrays Ventures". My name in SL is GregBailey with UUID f64abbc5-7136-40db-9dd0-a174d93d8742, and I will wait for one hour before logging off if no one has shown up.
The experiment will be interesting, whether or not it produces positive results. Failing that, I may find time to reply on these threads, and we do reply to questions sent to the sales mailbox at greenarraychips.com.
So, see you Wednesday or Saturday, perhaps. Advice: to minimize wasted time, kindly go to http://www.greenarraychips.com and do a little studying, or perhaps audit some of the lectures at the arrayForth Institute, which is linked from that site, or preferably both.
For the questions about what it is good for: anyone who can multiply the average power of a typical edge device in the "Internet of Things" by whichever fanciful power of ten the Visionaries expect those edge devices to number (we are up to 12 now, are we not?), and then contemplate the obscene amount of average power that product represents, should be able to grasp that by wasting orders of magnitude fewer resources, including energy, than others do, we ought to be obviously good for something there if nowhere else :)
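For a rough sense of scale (the per-device figure below is an assumption of mine, purely illustrative):

    10 mW average per edge device × 10^12 devices = 10^10 W = 10 GW of continuous draw

Shave even one order of magnitude off the per-device number and the savings are on the order of several large power plants.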
And now back to work on a software defined GPS receiver. I look forward to meeting anyone who might make it on Weds or Saturday!
I stood around for an hour and ten minutes this morning; Daniel could not make it due to a family afternoon outing in CZ. End of experiment, with negative results. However, thanks for your interest to whoever made the Second Life account RawNerve but could not make it as "early" as 0800 PDT. Back to emailing us if you have any questions. We do answer.
Daniel Kalny and I waited the full hour in Second Life but no one else showed up. Daniel of course had never heard of Second Life but had no problem creating an account, making an avatar and finding the shop yesterday. Perhaps on Saturday I will see someone from this rather ephemeral thread :) Why ask questions but not come for answers? Best - Greg
Reminds me a lot of the Parallax Propeller: many simple cores optimized for bit-banging I/O and programmable logic. Of course this is much lower power and has many more cores.
I think that many core architectures can actually be simpler to program in an embedded context than the traditional single core systems. It's akin to a hardware implementation of the actor model rather than an interrupt state machine or interrupt event loop. Being able to dedicate cores to single I/O tasks means you can bitbang almost anything. Here's HDMI running in a few lines of code on a Propeller 2 with no external components but the HDMI connector: https://forums.parallax.com/discussion/comment/1475075/#Comm...
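To make the pattern concrete, here is a sketch in plain C of what "dedicate a core to a single I/O task" looks like. pin_read(), mailbox_write(), and the pin number are all hypothetical stand-ins for the real primitives (cog I/O on a Propeller, node ports on a GA144), stubbed so the sketch compiles and runs on a desktop:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical primitives, stubbed for the desktop; on real silicon
       these would be cog I/O (Propeller) or node ports (GA144). */
    static uint32_t pin_read(int pin)     { return (uint32_t)(pin & 1); }
    static void mailbox_write(uint32_t v) { printf("%u\n", v); }

    /* The dedicated core's entire program: poll one pin, hand every
       sample to a consumer core. No interrupts, no scheduler. */
    static void sensor_core_main(void)
    {
        for (;;)
            mailbox_write(pin_read(7));
    }

    int main(void) { sensor_core_main(); }

The point is that the "driver" is just a loop: timing comes from the loop itself rather than from interrupt latency, which is what makes bit-banging fast protocols feasible.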
The chip is closer to AsAP [1] and XMOS xCore [2]. I remember that these two competing chips already existed when I was programming the predecessor of the GreenArrays GA144. By the way, it was great fun to program these Forth-based processor arrays!
The DEFCON 22 badge was done on a Propeller. I did a little poking around on it--very interesting architecture. But they don't all run at once... It's just a way to switch contexts instantly.
The shared hub memory is only accessible to one core at once and which one has access is switched between in round robin fashion, each core getting a turn to access it once every 16 clock cycles. Perhaps that’s what you were thinking of?
They do all run at once. All eight "cogs" exist as discrete hardware within the chip, and can perform operations on their cog RAM and I/Os simultaneously.
What doesn't happen all at once is access to main RAM. That's handled by a round-robin arbiter.
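To put numbers on that round robin, a quick back-of-envelope (assuming the classic 80 MHz P1 system clock):

    #include <stdio.h>

    int main(void)
    {
        double clk_hz   = 80e6;            /* classic Propeller 1 clock */
        double window_s = 16.0 / clk_hz;   /* one hub slot per cog per 16 clocks */
        printf("hub window every %.0f ns per cog\n", window_s * 1e9);
        return 0;                          /* prints: hub window every 200 ns per cog */
    }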
I bought a propeller 1 as a teenager but never really did much with it... I recall that the propeller 2 is coming out soon, so perhaps it's time to jump back in!
The grid layout, with each core communicating with its four neighbors, reminds me a lot of TIS-100 [1]; I'm wondering if the game was inspired by this chip.
You can place orders for chips and evaluation kits right on the website; click Product Offerings in the menu on the left side of the page, and you will see this page: http://www.greenarraychips.com/home/products/index.html
Strange that it is all so software-like. Is there any hardware to see, to buy and try once or twice, an emulator to play with, etc.? Neither the video nor the YouTube videos by other people seem to show any such thing.
I saw this talk live at Strange Loop 2013, and I'm going to be a contrarian: I wasn't impressed. I don't mean to bash Chuck Moore. Forth is cool, and this chip looks neat, but my impression is that this is a solution looking for a problem. Chuck Moore is having a lot of fun working on it, but he fails to convey why this is useful.
The impression that I had is that the whole talk was Chuck Moore going "look at all the cool assembler tricks I can do!". At no point did he ever demonstrate how his chip could be used to solve anything resembling a real-world use case. Instead, what he demonstrated was that his architecture was very esoteric and impractical to use. Some time could have been spent explaining how this chip can do some common computation (matrix multiplication, image processing, anything) more efficiently than an ARM chip or something we're already familiar with.
He didn't really provide any benchmarks of anything, be it performance or power consumption, and my understanding, based on the Q&A session, was that at the time the talk was given he had no customers buying said chips (I wonder why?).
They're making their second-generation chip right now, and I think there are a bunch of problems that this chip is an obvious solution for.
Chuck Moore is bad at presentations to outside audiences. This should surprise no one familiar with the man. This doesn't make the tech less useful, though.
The most obvious one is AI. These things are incredibly parallel & entirely self-contained, and a consumer desktop PSU could power a huge number of them without a second thought. At $2,000 for 100 chips ($20/chip), you can start to do some really interesting stuff for the cost of a consumer laptop.
It's natural that AI is a problem this chip excels at, given Moore studied under McCarthy and all.
If you feel morally okay with playing with proprietary software, I really recommend playing a little with the chip's simulator. Once you get an intuition for what one or two cores can do, you'll probably see the possibilities much better after an hour or so of play.
I seriously doubt that. The cores on that chip are extremely memory-limited: 64 words of 18-bit RAM per core, about 144 bytes, which works out to roughly 20 KiB for the whole chip! And if you want SDRAM, you have to implement the controller in software and bit-bang the interface. Gee, I wonder what that does to throughput and latency...
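The arithmetic, taking the documented 64 words of 18-bit RAM per F18A node:

    64 words × 18 bits = 1,152 bits = 144 bytes of RAM per node
    144 nodes × 144 bytes = 20,736 bytes ≈ 20 KiB per chip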
This chip's competition isn't GPUs and AI engines, but very low-cost, low-performance FPGAs, and it would probably lose. Given the extremely constrained environment of the individual cores (no cache, extremely little memory, a stack-based architecture), FPGAs are probably even easier to program.
Are they? You mean a generation following the GA144? That sounds really interesting! I see tantalizing tidbits on http://www.greenarraychips.com/: "Latest developments:
As of early 2020, shipments of the EVB002 evaluation kit and of G144A12 chips continue. Work continues on improvement of the arrayForth 3 integrated development system. Design of a new chip, G144A2x, continues; this will be upward compatible with the G144A12, with numerous significant improvements."
Chuck Moore talked about it in a EuroForth interview a year or two ago; a company with a use in mind funded them, and he seemed really enthusiastic. They've updated the website a few different times since; that update is really recent. If you check archive.org you'll probably find interesting little hints. I agree, it's really cool!
Which problems, for those of us who don't find it obvious? I'm a big fan of Chuck and Forth in general, but I haven't done any embedded work since college, so I can't think of much personal use for this kind of system.
It'd be unimaginative to consider only embedded applications: there are terribly practical uses for it. Consider that most big companies buy hardware based on performance per watt, then consider that the GA144 blows most things out of the water on that front.
GA also has a few recommendations (ripped from their site):
APPLICATIONS
- Energy harvesting applications
- Portable devices
- Remote sensing and data reduction
- Ideal for parallel and/or pipelined work
- Image processing
- Complex control systems
- Cryptography
- High speed signal processing
- Simulation and synthesis
- Inexpensive, massively parallel systems for research and education
- Artificial intelligence, neural nets
Has there been any concrete information released about their second-generation chip? The GreenArrays website only contains a few vague hints, suggesting, for example, that it might have a 32-bit datapath.
An ARM SoC has a lot of stuff inside: USART(s), SPI controller(s), I2C controller(s), an SDRAM controller, sometimes an LCD controller, a USB controller, an Ethernet controller, a crypto coprocessor, graphics acceleration... of which you only use a fraction.
It has logic to multiplex all that I/O onto GPIO pins, and sometimes you have to choose between a second USB controller and a third UART.
And because all these gates eat a lot of current, it has a lot of logic to power off parts of the chip.
In a way, popular ARM SoCs are a lot like bloated software.
I think the GA144 can replace a lot of this, without all the hardwired logic: fewer gates, less power, less heat dissipation, and probably fewer pins.
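As an illustration, a "peripheral" on a chip like this is just a loop running on one node. Here is a sketch of a bit-banged UART transmitter in plain C; pin_write() and bit_delay() are hypothetical names, stubbed so it runs on a desktop (on the chip they would be the node's real pin and timing primitives):

    #include <stdint.h>
    #include <stdio.h>

    /* Desktop stubs for the hypothetical node primitives. */
    static void pin_write(int level) { putchar(level ? '1' : '0'); }
    static void bit_delay(void)      { /* wait one bit time at the chosen baud rate */ }

    /* 8N1 framing: start bit low, 8 data bits LSB first, stop bit high. */
    static void uart_send_byte(uint8_t b)
    {
        pin_write(0); bit_delay();        /* start bit */
        for (int i = 0; i < 8; i++) {
            pin_write((b >> i) & 1);      /* data bits, LSB first */
            bit_delay();
        }
        pin_write(1); bit_delay();        /* stop bit */
    }

    int main(void) { uart_send_byte('A'); putchar('\n'); return 0; }

A node running this IS the UART; an unused peripheral simply doesn't exist on the die, rather than sitting there leaking.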
Regarding his way of presenting it: I did cringe a little at times too. But at least he doesn't present it as a silver bullet that can do anything easily with a C compiler and a Web programmer. Clearly the chip is as much of an oddball as its creator, so it requires some commitment.
And there's nothing wrong with that. I think programming is fun, and hardware design can be too. However, if you're a company trying to sell a product based on some specific claims, you probably should do more to convince potential users.