I am always a little surprised by efforts like these coming from big companies. There have been multiple efforts in the past to put languages meant for the web on small footprint devices - especially JavaScript.
- Tessel (closed down, domain used to be tessel.io)
- Toit Lang: https://toitlang.org/
- Moddable: https://www.moddable.com/
- Espruino: http://www.espruino.com/
- mBed tried it too: https://os.mbed.com/javascript-on-mbed/
- Microvium: https://github.com/coder-mike/microvium

I am sure I am missing a few here..
Note that some of these projects are over a decade old! Maybe I am the "old man yelling at the cloud" meme, but I don't see embedded developers, who have to maintain a project for many, many years, using a programming language that changes often.
It's obviously exaggerated and reductive, but you have to admit that not knowing how to use React properly is massively prevalent in frontend, let alone being able to recognize _when_ to use React.
Most FE engineers I know have a very deep understanding of React's inner workings. It's the full-stack engineers who think it's JS with extra HTML sauce and call it a day.
>with thousand page data sheets and programming manuals.
Why aren't those just a library? There is no way everyone duplicates the same work for interfacing with a chip for every single project that wants to use it.
There are plenty of libraries: from low-level hardware abstraction layers to high-level libraries like you'd find in the Arduino ecosystem. Those closer to the former (the HALs) require more understanding, and thus more reading; the latter expose less of the power of the chip because they target the lowest common denominator.
However, when you're making a mass produced piece of hardware with a $10 BOM cost and it turns out that you can save even 25 cents by using 90% of the power of a cheap chip instead of 5% of the power of an expensive chip, you're going to have to dive down into the documentation to figure out how to get every last ounce of processing power out of the cheaper chip.
That usually means exploiting the unique configuration of peripherals on a chip, which can't be cleanly abstracted away by a software library. In my first few projects, for example, I had to work around silicon errata in the family of chips the projects had chosen: STM32F4s couldn't run both DMA channels simultaneously at max speed when outputting to different peripherals. I wouldn't have been able to figure out what was going wrong at any level of abstraction without reading the documentation.
That's why I'm now in frontend instead of embedded.
There are many libraries, but the peripherals on modern microcontrollers are wildly configurable and the odds of any particular library supporting every single mode possible is essentially nil. And even if it did, then the library documentation would approach the chip documentation in terms of complexity.
I work at an investment bank that builds, operates, and sells solar parks, and while the main bulk of the work we do in the development department is in the fields of finance and asset management, we do sometimes work with the solar inverters and similar hardware. Being a non-tech enterprise means we're a rather small IT department in a large company that doesn't really care about tech beyond it working. As a result, we've adopted TypeScript as our "main" language. I've still built stuff in C and Rust, and our ML/AI employees do some of their work in Python, but overall we tend to use TypeScript when and where we can, because it's easier for our developers to switch around and support each other if we work in the same language.
If we can use a TypeScript subset maintained by Microsoft on the embedded devices at the solar plants, it'll be absolutely amazing. Regardless of what came before.
Don't get me wrong. I wouldn't mind if something different took over. In many ways it would be much easier for us if we could do everything in Rust, but I don't think anyone on our team believes we'll see a realistic alternative to JavaScript on the frontend in our careers, so well...
I see things like MicroPython as a nice tool for rapidly prototyping the hardware side (if you just need to test some SPI/I2C stuff in a more complex way than an I2C interface allows) and for projects simple enough that you won't ever use most of the CPU and can afford to spend money on wasted RAM/flash.
Like, you just want to flip some bits based on some inputs in as simple way as possible.
But then, eh, Arduino is easy enough for the same thing...
I'm functionally oriented and being able to use TS instead of Arduino would be a blessing in this regard especially. I like using TS for functional coding because it forces me to really define my input/output types and helps with composability.
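A minimal sketch of what that composability looks like; the `Pin`/`Level` types and helper names here are invented for illustration, not from any real embedded API:

```typescript
// Hypothetical pin model: explicit input/output types on every function.
type Level = 0 | 1;
interface Pin {
  id: number;
  level: Level;
}

// Small pure functions with well-defined signatures...
const setHigh = (p: Pin): Pin => ({ ...p, level: 1 });
const toggle = (p: Pin): Pin => ({ ...p, level: p.level === 0 ? 1 : 0 });

// ...compose cleanly, because the compiler checks that the types line up.
const compose =
  <A, B, C>(f: (a: A) => B, g: (b: B) => C) =>
  (a: A): C =>
    g(f(a));

const pulse = compose(setHigh, toggle); // drive high, then toggle back low
console.log(pulse({ id: 13, level: 0 })); // { id: 13, level: 0 }
```

If `setHigh` or `toggle` changed its signature, `compose` would fail to type-check at the call site, which is exactly the kind of guarantee that's hard to get in plain Arduino C++ without extra discipline.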
These tiny languages for MCUs are, by design, not for "significant complexity" to start with; even Lua might be too large and complex for "tiny IoT devices".
Other comments have put it better than I could, but I'll add that what people call ecosystems nowadays is completely unthinkable in embedded. Sometimes you have critical applications, and you bet you need to certify every single line of code that goes into your application. Small embedded devices are more geared towards engineering than other kinds of development, and in my opinion that's why those applications work well, in comparison with the mess we have on the web nowadays, for example.
Nicer from which side? Not OP, but I do like modern languages and great type systems - I’ll be the first to say that TypeScript is fantastic and maybe one of my favorite typed languages.
However, embedding Lua and writing bindings for it is really easy to do. The entire thing fits in a nice, neat single directory of a few ANSI-stone-age-C files and “Just Works”. You can drop it into your codebase, write a couple of simple bindings and be off to the races in no time. And you can easily fit its world model into the model of your application. Lua’s simplicity makes it a very, very powerful tool.
Ecosystems are as much a burden in embedded development as they are a blessing. Either you put in the time to support much of it across all your Tier 1 platforms or you punt on the problem, defeating the purpose of using a language with an ecosystem in the first place.
Take Javascript, for example. Which APIs are you actually committing to support? Are you going to support the Date class? BigInt extension? Node's fs, path, and crypto libraries? The browser's local storage, indexed DB, and other high level APIs? What about setTimeout and setInterval? Or are you only supporting pure-JS libraries, forcing the user to do a manual review of each dependency they consider?
A lot of microcontrollers - probably the vast majority in circulation - wouldn't have the ROM to even support most of those APIs, if they made any sense on an MCU to begin with. Look at MicroPython to see the kind of tradeoffs they have to make - it isn't pretty.
JavaScript is the most common language for building interfaces, and it is event-driven by nature. These two aspects make it uniquely suited for hardware applications.
But even as a JavaScript (and specifically not TypeScript) zealot, I cannot get behind JavaScript on hardware. The reason we use C or C++ is specifically the control over very limited memory, and anything that requires a runtime or manages its own memory (à la garbage collection) is a non-starter. Firmware is written once and seldom updated. A Node.js app is not a bad way to handle the server-side component of IoT, given its event-driven nature and access to low-level constructs.
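A rough sketch of that event-driven server-side shape, using Node's built-in EventEmitter; the gateway class and the line-oriented wire format are invented for illustration:

```typescript
import { EventEmitter } from "events";

interface Reading {
  deviceId: string;
  celsius: number;
}

// Hypothetical IoT gateway: parses lines like "sensor-1:21.5" coming off
// devices and fans valid readings out to listeners, Node-style.
class DeviceGateway extends EventEmitter {
  ingest(raw: string): void {
    const [deviceId, value] = raw.split(":");
    const celsius = Number(value);
    if (deviceId && Number.isFinite(celsius)) {
      this.emit("reading", { deviceId, celsius } as Reading);
    } else {
      this.emit("malformed", raw);
    }
  }
}

const gateway = new DeviceGateway();
gateway.on("reading", (r: Reading) => console.log(`${r.deviceId}: ${r.celsius} C`));
gateway.ingest("sensor-1:21.5"); // prints "sensor-1: 21.5 C"
```

The point is that the firmware stays a dumb, stable sender, while all the frequently-updated logic lives in the event-driven server process.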
What do you consider "very limited memory"?
The processor I generally use for my hobby projects has 16 MB of flash and 4 MB of RAM. This isn't the '80s.
> Firmware is written once and seldom updated
Not these days where everything must have an App as an alternate frontend. Device Firmware Update was such a frequent request that we just made it a fixed-cost line item on our quotation forms when I worked at an engineering services company. Projects where the customer didn't want DFU were far rarer than the ones that did.
My most recent project uses an STM32F030C8, which has 64 KB of flash and 8 KB of SRAM. This is actually a beefier STM32 than I usually go for, but it was cheaper and more available at the time of the design.
I didn't say it was never updated. Every IoT project or company I've worked on or with has the ability to update firmware, and a lot of development effort goes into making sure that can happen. I have not worked at any place that pushed more updates per year than I can count on a single hand. Everything goes through integration testing (often manual, ad nauseam in some cases) and field testing before it's even considered as a release candidate. Compare this to the release cadence of a CI-driven development team for the web and server applications it connects to, which sometimes deploys multiple times per day.
OK. My professional projects tend to be very large, so an 'F030 would be rather small. One of my current projects is based on an STM32F769, although I think we're using the smaller 1 MB flash version.
DFU shouldn't get used a lot, I agree, but when it's needed it's a lifesaver.
Not to forget Java ME (or whatever it is currently called). I'd argue that it is by far the most successful of all these failed attempts, because it is still on many (maybe even most?) smartcards.
MicroPython has managed to find its niche and be moderately successful, and I don't see why this couldn't manage a similar status. But in the same vein, I don't think it will become the default choice for embedded anytime soon.
I wonder if the compiler could be adapted to target other platforms, such as desktop, mobile, WebAssembly, or WASI?
TypeScript is a wonderful language to develop in, and it would be awesome to be able to build TypeScript code down to small apps that don't need a JS runtime to execute. I'd definitely be up for sacrificing some of the dynamic part of JS for being able to build small native executables.
Now that I've bought and renovated my home, I've become interested in some nice experiments with IoT devices, or in building my own stuff.
What would be an interesting starting point? I honestly don't even know what I want to build, to be fair, but I've always been curious about the possibilities of these kinds of Pico boards, up to the Raspberry Pi (which is in a totally different range).
Wasm is just a lobotomized assembly. On an embedded device your code is likely the only thing running so just use normal asm? Your free copy of GCC will happily compile c to it. Try.
I think it's a bit simplistic to assume native, already compiled code is the only payload worth considering.
The portability of wasm will be pretty excellent, and over time there may be a great cross-language ecosystem surrounding wasm that native may not match.
Wasm also seems like a potentially better target than what we have here, which seems interpreter-focused. Wasm, on the other hand, might actually be able to JIT/AOT compile down (which the mentioned wasm3 interpreter eschews, with a list of good reasons, for anyone looking for counterarguments). And it will likely have more invested in the ecosystem for doing optimizations in general.
> I think it's a bit simplistic to assume native, already compiled code is the only payload worth considering.
WASM IS compiled code
> The portability of wasm will be pretty excellent, and over time there may be a great cross-language ecosystem surrounding wasm that native may not match.
1. They said that about java too.
2. The problem is... you need to compile something to WASM, and currently a lot more somethings are compilable to native than to wasm, and I doubt this will ever change, since compiling to wasm is a strict superset of compiling to native in terms of the work involved in making a compiler. (This is not true for old esoteric architectures, but I do not see you offering me a WASM runtime for the 8051 either.)
> Wasm otoh might actually be able to jit/aot compile down
You know what else does that? Any compiled language ... to native code... In fact, we "AOT" it from the start in a process we call ... compilation. Your local free copy of gcc can do this for you. Check it out.
> And likely will have more invested in the ecosystem in doing optimizations in general.
You know what else does good optimization for a given target? Your local free copy of gcc. Check out the "-march", "-mmcu", and "-O" flags.
Point is: I buy the use of interpreted code on IoT: so people who cannot program can still make a light blink. But as soon as you go to compiled code, might as well compile for your actual target, and not a pointless IL (which is what WASM is until you show me your HDL for a WASM cpu)
That way, you get the portability of C, the optimisation power of GCC or your favourite C compiler, and the portability and determinism of WASM! Obviously there's some overhead, but there are definitely situations where this is a good option, especially where there's a compiler available for WASM but not for whatever obscure platform you want to use.
I feel like you are super trapped in a very small way of thinking.
Say I run an IoT system across a variety of embedded systems. I could dynamically load small behaviors & scripts to all targets with this. User scripts could target all devices.
Your claim that native code can target everything seems pretty limited. A lot of languages can't or won't invest in wide microarchitectural and embedded support.
I feel like you are confining yourself to a very, very narrow position, and refusing to see possible middle grounds or uses. We needn't adopt such stark framing.
> The portability of wasm will be pretty excellent, and over time there may be a great cross-language ecosystem surrounding wasm that native may not match.
I'm old enough to have lived the Java experience: "Write Once, Run Everywhere".
Interesting choice to build a brand new JS engine from the ground up. Wondering if there's a life for that outside of the project as an embeddable JS / TS engine (especially after Microsoft abandoned Chakra/ChakraCore a few years back).
There are many choices we have made that wouldn't make sense on a desktop-class CPU... 5 orders of magnitude of difference in RAM size is somewhat significant for design choices :)
I know companies are desperate to replace every programming language with javascript so they can just hire cheap junior web devs to do everything, but this is ridiculous.
I've been having a lot of fun with https://kalumajs.org/ recently for silly JS embedded things. It's been great so far, and has a cool little community!
It's a runtime built on top of JerryScript, which has been pretty neat to look into as well: https://jerryscript.net/
Very cool, looking forward to seeing more integration with the IDE for debugging (eg: serial interface monitors, register watches, interrupt control, logic trace analyser etc).
I've been using PlatformIO [1] (albeit with C++) in VScode, it would be a good DX baseline for a comparison.
I have personally been trying to find either a language that transpiles to C or C++, or one with near-native performance, to run on an ESP32. It would be nice to also make use of existing C++ libraries. I've seen NodeMCU, go lite, mini python, you name it. Any suggestions? I am considering Rust or Zig...
Would be cool if they could host the compiler and the runtime on the target device, like in Forth. Then I could do my hardware hacking and experimenting over a serial port, without the edit-compile-flash-reset cycle..
I love the idea of the code in my grandma's pacemaker having dozens of exploitable vulnerabilities from all the fucking JavaScript code in there! Yaass, slay (old people and people in elevators and heavy machinery)!
IoT includes all kinds of big industrial stuff. One would hope the people building it do a good job keeping the connected bits away from the critical bits, but past decades of that industry don't make one too hopeful about it. (Although something like this specifically would be less likely to be used, yes)