DeviceScript: TypeScript for Tiny IoT Devices (microsoft.github.io)
204 points by glutamate on May 24, 2023 | hide | past | favorite | 101 comments



I am always a little surprised by efforts like these coming from big companies. There have been multiple efforts in the past to put languages meant for the web on small footprint devices - especially JavaScript.

- Tessel (closed down, domain used to be tessel.io)

- Toit Lang: https://toitlang.org/

- Moddable: https://www.moddable.com/

- Espruino: http://www.espruino.com/

- mBed tried it too: https://os.mbed.com/javascript-on-mbed/

- https://github.com/coder-mike/microvium

I am sure I am missing a few here..

Note that some of these projects are over a decade old! Maybe I am the "old man yelling at the cloud" meme, but I don't see embedded developers who have to maintain a project for many many years, using a programming language that changes often.


But being able to fish in the ocean of JS devs for embedded is the dream of every manager.


Frontend developers can barely be arsed to read the short React docs once a year to catch up on any updates.

There is no way in hell we're going to be able to function in an industry with thousand page data sheets and programming manuals.


Lol, I know that people like to joke around that frontend engineers are dumb, but this is just unnecessary.


It's obviously exaggerated and reductive, but you have to admit that not knowing how to use React properly is massively prevalent in frontend, let alone being able to recognize _when_ to use React.


I daresay that not knowing how to use React is even more prevalent outside of frontend.


Most FE engineers I know have a very deep understanding of React's inner workings. It's the full-stack engineers that think it's JS with extra HTML sauce and call it a day.


>with thousand page data sheets and programming manuals.

Why aren't those just a library? There is no way everyone duplicates the same work for interfacing with a chip for every single project that wants to use it.


There are plenty of libraries: from low-level hardware abstraction layers to high-level libraries like you'd find in the Arduino ecosystem. Those closer to the former (the HALs) require more understanding, thus more reading; the latter expose less of the power of the chip because they target the least common denominator.

However, when you're making a mass produced piece of hardware with a $10 BOM cost and it turns out that you can save even 25 cents by using 90% of the power of a cheap chip instead of 5% of the power of an expensive chip, you're going to have to dive down into the documentation to figure out how to get every last ounce of processing power out of the cheaper chip.

That usually means exploiting the unique configuration of peripherals on a chip, which can't be cleanly abstracted away by a software library. In my first few projects, for example, I had to work around silicon errata in the family of chips the projects had chosen: STM32F4s couldn't run both DMA channels simultaneously at max speed when outputting to different peripherals. I wouldn't have been able to figure out what was going wrong at any level of abstraction without reading the documentation.

That's why I'm now in frontend instead of embedded.


There are many libraries, but the peripherals on modern microcontrollers are wildly configurable, and the odds of any particular library supporting every single possible mode are essentially nil. And even if it did, the library documentation would approach the chip documentation in terms of complexity.


And it may be a foolish one because that ocean is filled with sea monsters that even TS can’t tame.


I’d rather say plankton than sea monsters.


I’ve never seen plankton sink a ship or halve its performance but sure. One can only take a metaphor so far.


Rust (or Zig) is a much better pathway for a "user friendly" language on embedded devices than JavaScript of all things.

And developers love Rust, advertise a job for an embedded Rust developer and you'll need to build a moat to keep the hordes out.


My best guess would be that Rust developers skew away from UI and interaction expertise.


And remains a dream far away from reality.


/s?


I work at an investment bank that builds, operates and sells solar parks, and while the main bulk of the work we do in the development department is in the fields of finance and asset management, we do sometimes work with the solar inverters and similar. Being a non-tech enterprise means we're a rather small IT department in a large company that doesn't really care about tech beyond it working; as a result we've adopted TypeScript as our "main" language. I've still built stuff in C and Rust, and our ML/AI employees do some of their work with Python, but overall we tend to use TypeScript when and where we can because it's easier for our developers to switch and support each other if we work in the same language.

If we can use a TypeScript subset maintained by Microsoft on the embedded devices at the solar plants, it'll be absolutely amazing. Regardless of what came before.

Don't get me wrong. I wouldn't mind if something different took over. In many ways it would be much easier for us if we could do everything in Rust, but I don't think anyone on our team believes we'll see a realistic alternative to JavaScript on the frontend in our careers, so, well...


I see things like MicroPython as a nice tool for rapidly prototyping the hardware side (if you just need to test some SPI/I2C stuff in a more complex way than an I2C interface allows) and for projects simple enough that you won't ever use most of the CPU and can afford to spend $ on wasted RAM/Flash.

Like, you just want to flip some bits based on some inputs in as simple way as possible.

But then eh, Arduino is easy enough for same thing...


I'm functionally oriented and being able to use TS instead of Arduino would be a blessing in this regard especially. I like using TS for functional coding because it forces me to really define my input/output types and helps with composability.
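For instance, a minimal sketch of the style I mean (the sensor-conversion pipeline here is hypothetical, purely to show typed composition):

```typescript
// Hypothetical sensor pipeline: explicit input/output types make each
// stage independently checkable and easy to recompose.
type RawReading = { millivolts: number };
type Celsius = number;

const toCelsius = (r: RawReading): Celsius => (r.millivolts - 500) / 10;
const clamp = (lo: number, hi: number) => (t: Celsius): Celsius =>
  Math.min(hi, Math.max(lo, t));

// Generic left-to-right composition of two functions; the compiler
// rejects any pairing whose types don't line up.
const pipe = <A, B, C>(f: (a: A) => B, g: (b: B) => C) => (a: A): C => g(f(a));

const readTemp = pipe(toCelsius, clamp(-40, 85));
console.log(readTemp({ millivolts: 750 })); // 25
```

Swapping `toCelsius` for a different conversion stage is a one-line change, and any type mismatch is caught at compile time rather than on the device.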


The truth is: we, actual embedded developers, would much rather compile a LUA interpreter, stick it in there, and be done with it.


Yikes, not true. I can't imagine developing anything of significant complexity with Lua. Also it hasn't been all-capitals for a while.


These tiny languages for MCUs are by design not for "significant complexity" to start with; even Lua might be too large and complex for "tiny IoT devices".

For me, Zig and C will do it just fine.


Zig and C are much better at managing complexity because of their type systems.


Wouldn't a modern language with a great typesystem and a huge ecosystem be nicer to work with?


Other comments have put it better than I could, but I'll add that what people call ecosystems nowadays is completely unthinkable in embedded. Sometimes you have critical applications, and you bet you need to certify every single line of code that goes into your application. Small embedded devices are geared more towards engineering than other kinds of development, and in my opinion that's why those applications work well, compared with the mess we have on the web nowadays, for example.


Nicer from which side? Not OP, but I do like modern languages and great type systems - I’ll be the first to say that TypeScript is fantastic and maybe one of my favorite typed languages.

However, embedding Lua and writing bindings for it is really easy to do. The entire thing fits in a nice, neat single directory of a few ANSI-stone-age-C files and “Just Works”. You can drop it into your codebase, write a couple of simple bindings and be off to the races in no time. And you can easily fit its world model into the model of your application. Lua’s simplicity makes it a very, very powerful tool.


Ecosystems are as much a burden in embedded development as they are a blessing. Either you put in the time to support much of it across all your Tier 1 platforms or you punt on the problem, defeating the purpose of using a language with an ecosystem in the first place.

Take Javascript, for example. Which APIs are you actually committing to support? Are you going to support the Date class? BigInt extension? Node's fs, path, and crypto libraries? The browser's local storage, indexed DB, and other high level APIs? What about setTimeout and setInterval? Or are you only supporting pure-JS libraries, forcing the user to do a manual review of each dependency they consider?

A lot of microcontrollers - probably the vast majority in circulation - wouldn't have the ROM to even support most of those APIs, if they made any sense on an MCU to begin with. Look at MicroPython to see the kind of tradeoffs they have to make - it isn't pretty.
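A toy illustration of that commitment problem: even a "pure JS" helper ends up probing for host APIs it cannot assume a tiny runtime provides (the `delay` helper below is hypothetical):

```typescript
// Sketch: a "pure JS" utility still has to guard host APIs (setTimeout,
// Date) that a microcontroller runtime may or may not ship.
function delay(ms: number): Promise<void> {
  if (typeof setTimeout === "function") {
    return new Promise((resolve) => setTimeout(resolve, ms));
  }
  // Fallback: busy-wait on Date.now(), itself an API a stripped-down
  // runtime might omit -- at which point the library simply breaks.
  return new Promise((resolve) => {
    const end = Date.now() + ms;
    while (Date.now() < end) { /* spin */ }
    resolve();
  });
}
```

Multiply that guard by every host API a dependency touches and you get the review burden described above.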


Note that Toit has nothing to do with JavaScript or the Web. It was from the beginning designed to run on embedded devices.


TypeScript is popular and has static types. The others don't.


JavaScript is the most common language for building interfaces, and it is event-driven. These two aspects make it uniquely suited for hardware applications.

But even as a JavaScript (and specifically not TypeScript) zealot, I cannot get behind JavaScript on hardware. The reason we use C or C++ is specifically the control over the very limited memory, and anything requiring a runtime or managing its own memory (a la garbage collection) is a non-starter. Firmware is written once and seldom updated. A node.js app is not a bad way to handle the server-side component of IoT, given its event-driven nature and access to low-level constructs.
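To be concrete about what "event-driven" buys you, here is a minimal hand-rolled typed emitter ("button" and the LED handler are illustrative, not a real device API):

```typescript
type Handler<T> = (payload: T) => void;

// Tiny typed event emitter -- the pattern node.js servers (and hardware
// abstraction layers) are built around.
class Emitter<T> {
  private handlers: Handler<T>[] = [];
  on(handler: Handler<T>): void { this.handlers.push(handler); }
  emit(payload: T): void { for (const h of this.handlers) h(payload); }
}

const button = new Emitter<{ pressed: boolean }>();
button.on((e) => { if (e.pressed) console.log("toggle LED"); });
button.emit({ pressed: true }); // prints "toggle LED"
```

The same subscribe/emit shape maps naturally onto interrupts and message queues, which is why the model feels at home on both servers and devices.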


What do you consider "very limited memory"? The processor I generally use for my hobby projects has 16M Flash and 4M RAM. This isn't the 80's.

> Firmware is written once and seldom updated

Not these days where everything must have an App as an alternate frontend. Device Firmware Update was such a frequent request that we just made it a fixed-cost line item on our quotation forms when I worked at an engineering services company. Projects where the customer didn't want DFU were far rarer than the ones that did.


My most recent project uses an STM32F030C8, which has 64 KB of flash and 8 KB of SRAM. This is actually a beefier STM32 than I usually go for, but it was cheaper and more available at the time of the design.

I didn't say it was never updated. Every IoT project or company I've worked on or with has the ability to update firmware, and a lot of development effort goes into making sure that can happen. I have not worked at any place that pushed more updates per year than I can count on a single hand. Everything goes through (often manual) integration testing (ad nauseam in some cases) and field testing before it's even considered as a release candidate. Compare this to the release cadence of a CI-driven development team for the web and server applications it connects to, which sometimes deploys multiple times per day.


OK. My professional projects tend to be very large, so an 'F030 would be rather small. One of my current projects is based on an STM32F769, although I think we're using the smaller 1 MB Flash version.

DFU shouldn't get used a lot, I agree, but when it's needed it's a lifesaver.


https://twitter.com/manas_saksena/status/1662320073661640706...

TIL that Whirlpool used JavaScript on their ESP32 based products and shipped millions of devices


Not to forget Java ME (or whatever it is currently called). I'd argue that it is by far the most successful of all these failed attempts because it still is on many (maybe most even?) smartcards.


Java by itself was supposed to be a simple and compact language for embedded systems.

AFAIK Java Micro Edition is mostly dead except for retrocomputing enthusiasts. Java Card is a reasonably big thing.


Right, Java Card was the thing I meant. ME was the other thing we had in the half-smart phone era.


AssemblyScript can be used for that too, though it's not its main purpose.


A (once?) popular one was BoneScript for BeagleBone.

Remember also that although the language may change frequently, that doesn't mean you have to keep up with those changes.


If there are no breaking changes, is it really a disadvantage for a language to add new capabilities frequently?


I wouldn’t put JS on a device, but I’d sure as shit put TS on a device.


It's a subset of TypeScript: https://microsoft.github.io/devicescript/language

Microsoft did something similar before (without a VM) called Static TypeScript: https://www.microsoft.com/en-us/research/publication/static-...


Same people :)

We tried to make DeviceScript closer in semantics to real JS (prototypes etc) and easier to port. You pay with performance.


MicroPython has managed to find its niche and be moderately successful; I don't see why this couldn't achieve similar status. But in the same vein, I don't think this will become the default choice for embedded anytime soon.


.NET Micro Framework was also an interesting attempt.


FWIW, this lives again as .NET nanoFramework:

https://www.nanoframework.net/

Enterprising folks have even managed to run it on the Raspberry Pi Pico...


I wonder if the compiler could be adapted to target other platforms, such as desktop, mobile, WebAssembly, or WASI?

TypeScript is a wonderful language to develop in, and it would be awesome to be able to build TypeScript code down to small apps that don't need a JS runtime to execute. I'd definitely be up for sacrificing some of the dynamic parts of JS to be able to build small native executables.


AssemblyScript might be what you're interested in; it's a TypeScript-like language that compiles to WASM: https://www.assemblyscript.org/


The DeviceScript project already provides a WebAssembly VM: https://microsoft.github.io/devicescript/api/vm


On that site, cmd+f makes the video full screen. yuck


I have found the browser extension StopTheMadness to be invaluable to browsing the web in 2023. https://underpassapp.com/StopTheMadness/

It even works on Chrome: https://underpassapp.com/StopTheMadness/support-chrome.html


TIL about the `ping` attribute on HTML anchor elements.


I have always been interested in IoT devices, and now that I've bought and renovated my home I'd like to try some nice experiments or build my own stuff.

What would be an interesting starting point? I honestly don't even know what I want to build, to be fair, but I've always been interested in the possibilities of these kinds of Pico boards, up to the Raspberry Pi (which is in a totally different range).


I've been doing home automation for many years.

From what you described, using Home Assistant at the center would be the way to go (it works fine on a Raspberry Pi, in the beginning at least).

For the devices, I mainly use Zigbee sensors and lights etc., and a bunch of fully DIY stuff running mostly on ESP32 with ESPHome.

For starters, you can skip the Zigbee devices altogether and experiment with Home Assistant + ESPHome.


Note that the RPi is way overpriced for what it is currently; just get a mini PC with real storage, unless you need the GPIO.


M5Stack (https://m5stack.com/) could be a good starting point, especially if you do not know exactly what you wanna do.


Access control with NFC (sticker on the phone) is a big upgrade. I’ve developed some hardware around this https://instanfc.com


"TypeScript is a superset of JavaScript."

"DeviceScript is a subset of TypeScript."

We need to go deeper.


TypeScript is a superset of JavaScript.


It'd be nice if AssemblyScript + wasm could compete here too. Quite the effort! They made their own small VM interpreter.


It can; wasm3 is a wasm interpreter ported to a lot of bare-metal microcontrollers: https://github.com/wasm3/wasm3


Or you can compile it to C with wasm2c or w2c2 for much better performance


Wasm is just a lobotomized assembly. On an embedded device your code is likely the only thing running, so just use normal asm? Your free copy of GCC will happily compile C to it. Try.


I think it's a bit simplistic to assume native, already compiled code is the only payload worth considering.

The portability of wasm will be pretty excellent, and over time there may be a great cross-language ecosystem surrounding wasm that native may not match.

Wasm also seems like a potentially better target than what we have here, which seems interpreter-focused. Wasm, on the other hand, might actually be able to JIT/AOT compile down (which the mentioned wasm3 interpreter eschews, with a list of good reasons, for anyone looking for counterarguments). And it will likely have more invested in the ecosystem for doing optimizations in general.


> I think it's a bit simplistic to assume native, already compiled code is the only payload worth considering.

WASM IS compiled code

> The portability of wasm will be pretty excellent, and over time there may be a great cross-language ecosystem surrounding wasm that native may not match.

1. They said that about Java too.

2. The problem is... you need to compile something to WASM, and currently a lot more somethings are compilable to native than to wasm, and I doubt this will ever change, since compiling to wasm is a strict superset of compiling to native in terms of the work involved in making a compiler. (This is not true for old esoteric architectures, but I do not see you offering me a WASM runtime for the 8051 either.)

> Wasm otoh might actually be able to jit/aot compile down

You know what else does that? Any compiled language ... to native code... In fact, we "AOT" it from the start in a process we call ... compilation. Your local free copy of gcc can do this for you. Check it out.

> And likely will have more invested in the ecosystem in doing optimizations in general.

You know what else does good optimization for a given target? Your local free copy of gcc. Check out the "march", "mcu", and "O" flags.

The point is: I buy the use of interpreted code on IoT, so people who cannot program can still make a light blink. But as soon as you go to compiled code, you might as well compile for your actual target, and not a pointless IL (which is what WASM is until you show me your HDL for a WASM CPU).


> but i do not see you offering me a WASM runtime for the 8051 either

As long as there’s a C compiler for it, just use https://github.com/WebAssembly/wabt/blob/main/wasm2c/README.... or https://github.com/turbolent/w2c2

That way, you get the portability of C, the optimisation power of GCC or your favourite C compiler, and the portability and determinism of WASM! Obviously there's some overhead, but there are definitely situations where this is a good option, especially where there's a compiler available for WASM but not for whatever obscure platform you want to use.


I feel like you are super trapped in a very small way of thinking.

Say I run an IoT system across a variety of embedded systems. I could dynamically load small behaviors & scripts to all targets with this. User scripts could target all devices.

Your claim that native code can target everything seems pretty limited. A lot of languages can't or won't invest in wide micro-architectural & embedded support.

I feel like you are confining yourself to a very, very narrow position, and refusing to see possible middle grounds or uses. We needn't adopt such stark framing.


"WASM IS compiled code"

Yes, but to a VM, so the same code can run wherever there is a wasm runtime (and the number is growing).


But you can compile that very same code to a much larger number of architectures. Using gcc


It's not the same because with Wasm you only need to distribute a single binary to support all current and future platforms.


Say it with me:

Jaaaaa Vaaaaaaa


The amount of WebAssembly push, ignoring all the bytecode-based formats since the late 1950s, is incredible.


> The portability of wasm will be pretty excellent, and over time there may be a great cross-language ecosystem surrounding wasm that native may not match.

I'm old enough to have lived the Java experience: "Write Once, Run Everywhere".


I believe Moddable's XS was the first to compile JavaScript to bytecode on a computer for embedded deployment: https://www.moddable.com/faq#xs-in-c

(XS is a complete ECMAScript 2020 implementation, though.)


Interesting choice to build a brand new JS engine from the ground up. Wondering if there's a life for that outside of the project as an embeddable JS / TS engine (especially after Microsoft abandoned Chakra/ChakraCore a few years back).


There are many choices we have made that wouldn't make sense on desktop class cpu... 5 orders of magnitude of difference in RAM size is somewhat significant in design choices :)


Hey, desktop apps that embed JavaScript engines would also love to get an option 5 orders of magnitude less memory intensive :)

Even ES5 (the rest can be polyfilled).



I’m interested in that too. There are a lot of simulators out there, but they tend to be either under-featured or very user-unfriendly.


I know companies are desperate to replace every programming language with javascript so they can just hire cheap junior web devs to do everything, but this is ridiculous.


I've been having a lot of fun with https://kalumajs.org/ recently for silly JS embedded things. It's been great so far, and has a cool little community!

It's a runtime built on top of JerryScript, which has been pretty neat to look into as well: https://jerryscript.net/


https://jerryscript.net/getting-started/ => "Currently, only Ubuntu 18.04+ is officially supported as primary development environment." :(


I've been using Espruino with TypeScript code for some years now, but this looks quite interesting too. Time for a new IoT project to test this out.


Very cool, looking forward to seeing more integration with the IDE for debugging (eg: serial interface monitors, register watches, interrupt control, logic trace analyser etc).

I've been using PlatformIO [1] (albeit with C++) in VScode, it would be a good DX baseline for a comparison.

[1] https://platformio.org


I have personally been trying to find either a language which transpiles to C or C++, or one with near-native performance, to run on an ESP32. It would be nice to also make use of existing C++ libraries. I've seen NodeMCU, go lite, mini python, you name it. Any suggestions? I am considering Rust or Zig…


It would be cool if they could host the compiler and the runtime on the target device, like in Forth. Then I could do my hardware hacking and experimenting over the serial port without the edit-compile-flash-reset cycle...


Relevant: ESPHome is a firmware for esp8266 / esp32 series IoT chipsets that can be configured mostly with a web gui or yaml file.

https://esphome.io


I've used C++ on hobby ESP32 projects, so this is appealing to me. I could not find details on resource usage in the documentation.


What's the minimal reasonable footprint in terms of memory requirements and processing power for such a 'tiny IoT device'?



For good and for bad, let's not forget it won't support (at least a big portion of) existing npm modules.


What did those poor devices do to deserve such malady placed upon them...


Looks nice, but I didn't see info on which devices are supported yet?


Looks like at least the Pi Pico and ESP32 are supported: https://microsoft.github.io/devicescript/devices


When do we get a TypeScript compiler for Windows/Linux?


I love the idea of the code in my grandma's pacemaker having dozens of exploitable vulnerabilities from all the fucking JavaScript code in there! Yaass, slay (old people and people in elevators and heavy machinery)!


I'll just leave this here on the doorstep:

https://www.fda.gov/medical-devices/digital-health-center-ex...


Nothing you mentioned falls into the realm of IoT. This is not for life- or safety-critical devices.


IoT includes all kinds of big industrial stuff. One would hope the people building it do a good job keeping the connected bits away from the critical bits, but past decades of that industry don't make one too hopeful about it. (Although something like this specifically would be less likely to be used, yes)


What about C?



