This looks like a cool project but there's something I'm confused about.
Is the goal of this project to allow people to run realtime software? If so, isn't using Lua a problem because of its memory management causing GC stalls?
It doesn't appear to be addressed in the README on GitHub. The other explanation is that I'm missing the point, and it's just using an RTOS as a small embeddable OS on which to run Lua. Is that the case?
I think anything truly real-time would be interrupt-driven and written in C/assembly; the scripting layer would act as the glue between that critical code. I don't feel like we've hit peak embedded development yet. I love the idea of rapid embedded development that deploys scripts over a network or RS232, but most scripting languages aren't great for this. I would like to see a statically typed actor system with no GC, or an optional per-actor GC that is pre-allocated and can only cause a single actor to fail in a predictable way.
There are languages invented for this purpose, such as Ada. I'd look at that instead of C when you're doing real-time systems. That said, there are many different kinds of real-time systems, so it depends on the goal. Soft real-time systems aren't as time-critical as hard real-time systems, so the way you program and reason about them is a bit different.
How real-time do you need to be? Some real time means you need to react within milliseconds or bad things happen; humans will notice a 10ms lag in real-time audio. Some real time means you need to react within seconds: a machine might need several tens of seconds to get up to speed, so controlling the motor only needs an update once a second or so for the speed to remain within tolerance. And everything in between.
Depending on where you fall on that spectrum of control, different technologies can work for you.
I'm probably wrong, but isn't real time not about how fast you can react, but whether you can react within a defined time constraint? Or is there a time threshold beyond which you can't consider it real time any more, like 1 day?
Edit: just like you said, 10ms, 1s, etc.
The time depends on your problem. Real time is more about bad things happening if you don't respond on time. If you click on a webpage and your browser times out loading it, that is a real-time failure, but it isn't bad enough that anyone thinks of browsers as real time, because the failure isn't really harmful.
From what I see in the discussion, it seems to be more a practical decision not to use real-time resources and techniques for much longer timeframes. For 1 day, for example, it would be a waste of development time to go real time; you could use alerts and background jobs to fix issues. But as you reach shorter and shorter timeframes where bad things can happen, a real-time approach starts to make more sense.
Now, a parent comment mentioned an RTOS. Maybe for a real-time OS there would be a practical hard upper bound, but for real-time systems in general this upper bound would be totally domain-specific.
Those examples were naval ships. When you can float a huge data center in water and dump all the waste heat you want, it doesn't matter that the PERC VM used is 2.5x as slow as the Sun VM, or that it uses even more RAM than a normal Java VM, which is already a lot. This project is running on a microcontroller.
Both fair points. Hard real-time is absolutely doable with GC if the collector is deterministic. You pay throughput and memory penalties to get low, predictable latencies, but it is regularly done. I have written soft real-time Java myself, and other than avoiding garbage, as you would with any low-latency code under GC, it was idiomatic. But that one change does increase the cognitive burden to the point that I didn't find it any more productive than C++. If I had needed reflection I might have felt differently.
I've done lots of hard real-time C++. It certainly has development-time overhead, but the memory usage was the same as idiomatic C++; everything was just preallocated at max usage. No throughput hit either; if anything it was faster because of no allocations.
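For anyone who hasn't seen the pattern, here's a rough C sketch of what "preallocated at max usage" tends to look like: a fixed pool sized for the worst case at init time, so the hot path never touches the heap. The names and the capacity are made up for illustration.

    #include <stddef.h>

    #define MAX_MSGS 256              /* worst-case number of in-flight messages */

    typedef struct { int id; char payload[64]; } msg_t;

    static msg_t pool[MAX_MSGS];      /* all storage reserved up front */
    static msg_t *free_list[MAX_MSGS];
    static size_t free_top;

    void pool_init(void) {            /* called once at startup */
        for (size_t i = 0; i < MAX_MSGS; i++)
            free_list[i] = &pool[i];
        free_top = MAX_MSGS;
    }

    msg_t *msg_alloc(void) {          /* O(1), no heap; NULL means the budget is exceeded */
        return free_top ? free_list[--free_top] : NULL;
    }

    void msg_free(msg_t *m) {         /* return a message to the pool */
        free_list[free_top++] = m;
    }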
I've done safety-critical work that wasn't MISRA. But you have reminded me that for years we were leaving optimization off so we could verify full code and branch coverage in the assembly, at which point Java is almost certainly faster, though we never would have fit in memory with Java. Eventually we started turning optimization on; it was harder to verify, but not impossible.
Lua's garbage collector can be driven 'manually' quite easily.
That is, one can start it, stop it, run it to completion, run a 'step', tune the size of the step and the aggressiveness of collection, all from within Lua.
It's true that you can't get hard realtime guarantees while using Lua naively; you do have to be aware of what you're doing. If you need to be 'cycle perfect', probably use something else.
But there are an enormous number of applications where what Lua offers is just fine, and there's no reason a Lua program should have GC 'stalls', if that means unexpected and lengthy pauses in execution.
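To make that concrete, here's a minimal sketch of driving the collector from the host side with the Lua 5.3-style C API; the handler name "tick" and the loop are placeholders, and the same thing can be done from Lua itself with collectgarbage("stop") and collectgarbage("step", n).

    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>

    int main(void) {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);

        lua_gc(L, LUA_GCSTOP, 0);             /* no automatic collection from here on */

        luaL_dostring(L, "function tick() local t = {1, 2, 3} end");

        for (int i = 0; i < 1000; i++) {
            luaL_dostring(L, "tick()");       /* the time-critical work */
            lua_gc(L, LUA_GCSTEP, 1);         /* pay the GC cost in a slot we choose */
        }

        lua_close(L);
        return 0;
    }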
All the real time guarantees would only ever happen in the libraries you are calling out to, not in the scripting language. That's just glue code. If the scripting language has the equivalent of eval() there is no way it can be made real time anyway.
I can't speak for this project specifically, but you can do GC in a real-time system. IBM's Metronome garbage collector is a real-time garbage collector.
I do soft real-time in .NET5 without any problems.
I find that if I completely abduct a high-priority thread and never yield back to the OS, things on the critical execution path run incredibly smoothly. I am able to execute a custom high precision timer in one of these loops while experiencing less than a microsecond of jitter. Granted, it will light up an entire core to 100% the whole time the app is running. But, in today's world of 256+ core 1U servers, I think this is a fantastic price to pay for the performance you get as a result. Not yielding keeps your cache very toasty.
Avoiding any allocation is also super important. In .NET, anything touching gen2 or the LOH is a death sentence. You probably don't even want stuff making it into gen1. Structs and streams are your friends for this adventure.
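The .NET details aside, the underlying pattern is just a dedicated high-priority thread that never yields. A rough POSIX C sketch of that idea (the priority value and the 1 ms period are arbitrary, and this is not the original poster's code):

    #include <pthread.h>
    #include <sched.h>
    #include <time.h>
    #include <stdint.h>

    static uint64_t now_ns(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    static void *spin_loop(void *arg) {
        (void)arg;
        const uint64_t period_ns = 1000000;       /* 1 ms tick */
        uint64_t next = now_ns() + period_ns;
        for (;;) {
            while (now_ns() < next) { }           /* busy-wait: trades a core for low jitter */
            next += period_ns;
            /* do the time-critical work here; no allocation, no syscalls */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_attr_t attr;
        struct sched_param sp = { .sched_priority = 80 };   /* arbitrary RT priority */

        pthread_attr_init(&attr);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);     /* may require elevated privileges */
        pthread_attr_setschedparam(&attr, &sp);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

        if (pthread_create(&t, &attr, spin_loop, NULL) != 0)
            pthread_create(&t, NULL, spin_loop, NULL);      /* fall back if RT attrs are refused */
        pthread_join(t, NULL);
        return 0;
    }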
I'm curious about this as well. And it's not just GC: just allocating memory is not real-time safe unless you're using something like a bump allocator. Lua seems very much like the wrong language for this.
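For reference, a bump allocator really is about as cheap as allocation gets: a bounds check plus a pointer increment, with deterministic failure. A tiny sketch (the arena size is arbitrary):

    #include <stddef.h>
    #include <stdint.h>

    static uint8_t arena[64 * 1024];      /* fixed region reserved at startup */
    static size_t  offset;

    void *bump_alloc(size_t n) {
        n = (n + 7u) & ~(size_t)7u;       /* keep 8-byte alignment */
        if (offset + n > sizeof arena)
            return NULL;                  /* deterministic failure, no fallback path */
        void *p = &arena[offset];
        offset += n;
        return p;
    }

    void bump_reset(void) {               /* free everything at once, e.g. per cycle */
        offset = 0;
    }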
If your heap is on the order of 100kB, the GC stalls may not be so bad. A bigger problem may be pulling your code from external SPI flash: typically you will need to put all your real-time code in RAM, and you only have so much of it.
Can you disable the GC? In my last role we had a large C++ application with embedded Lua. I didn't touch it much, but I would have thought that while most of the stuff it did was calling out to our C++ API, the Lua objects and tables etc. would still be created and need to be garbage collected as normal.
For such a dynamic language Lua is quite good about avoiding hidden dynamic allocation. Creating a new closure (or plain Lua function), coroutine, string, or table will, of course, allocate a new object. But all of those are either explicit or fairly obvious. Lua's iterator construct is careful to avoid dynamic allocation--I believe it's one reason why iterators take two explicit state arguments. And Lua has a special type for C functions (as opposed to Lua functions), allowing you to push, pass, and call C functions without dynamic allocation.
Likewise for lightuserdata (just a void pointer), and of course numbers (floats and integers) and booleans--no dynamic allocation.
Nested function calls could result in a (Lua VM) stack reallocation. But Lua guarantees tail call optimization. And the default stack size is a compile-time constant.
Finally, Lua is very rigorous about catching allocation failure while maintaining consistent state. Well-written applications can execute data processing code from behind a protected call or coroutine.resume (which is implicitly protected) and still keep chugging along in the event of allocation failure anywhere in the VM. The core of the application, such as the event loop and critical event handlers, can be written to avoid dynamic allocation after initialization.
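As a sketch of that last point, here's roughly what catching an allocation failure from the host looks like with the Lua 5.3-style C API; "process_sample" is a hypothetical Lua handler, not anything from the project.

    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>
    #include <stdio.h>

    static void run_protected(lua_State *L) {
        lua_getglobal(L, "process_sample");
        int rc = lua_pcall(L, 0, 0, 0);             /* protected call */
        if (rc == LUA_ERRMEM) {
            /* allocation failed somewhere in the VM; the state is still
               consistent, so skip this cycle and carry on */
            fprintf(stderr, "out of memory in handler, skipping cycle\n");
            lua_pop(L, 1);
        } else if (rc != LUA_OK) {
            fprintf(stderr, "handler error: %s\n", lua_tostring(L, -1));
            lua_pop(L, 1);
        }
    }

    int main(void) {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);
        luaL_dostring(L, "function process_sample() end");
        run_protected(L);
        lua_close(L);
        return 0;
    }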
For a usage like that, no, as you'll probably rack up lots of allocations and need to GC eventually. But if this is your goal from the outset, there are ways to do it, as the other post mentions. If you never create more than a fixed number of Lua objects, closures, or unique string values, you can certainly postpone the GC indefinitely.
Execution can be interrupted, though, through debug hooks. It could be set up to yield every N instructions. [1]
There are a few caveats, though: the hook will not be called while you're inside C code. That is, you will only yield while executing code in the Lua interpreter.
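For the curious, the setup looks roughly like this with the C API. Lua 5.2+ allows count hooks to yield; BUDGET and the test chunk are made up, and note that lua_resume gains an extra out-parameter in 5.4.

    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>

    #define BUDGET 1000   /* instruction budget per slice */

    static void budget_hook(lua_State *L, lua_Debug *ar) {
        (void)ar;
        lua_yield(L, 0);  /* count hooks may yield, with no values */
    }

    int main(void) {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);

        lua_State *co = lua_newthread(L);
        lua_sethook(co, budget_hook, LUA_MASKCOUNT, BUDGET);
        luaL_loadstring(co, "local x = 0; while true do x = x + 1 end");

        int status = lua_resume(co, L, 0);
        for (int slice = 0; status == LUA_YIELD && slice < 10; slice++) {
            /* the host regains control here every ~BUDGET instructions;
               do the time-critical work, then let the script continue */
            status = lua_resume(co, L, 0);
        }

        lua_close(L);
        return 0;
    }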
I thought there was something wrong with that page: I wanted to try the IDE but just got a video filling the page and a single red button, "Sign in with your Google account".
Why does an IDE for microcontrollers require a Google account?
Guess I'll never know.
Very interesting RTOS project, and hopefully it can support the new RISC-V-based ESP32-C3 MCU [1].
I wonder if the performance could be significantly improved if this were ported to the Terra language, a systems programming language meta-programmed in Lua [2]. It's going to reach a stable 1.0 version real soon.
https://github.com/crmulliner/fluxnode follows the same idea, providing a JavaScript runtime for application development; it runs on the ESP32 and supports LoRa. Fewer features, as it is a hobby project.