> Ultibo core is a full featured environment for embedded or bare metal (without an operating system) development on Raspberry Pi (all models). It is not an operating system itself but provides many of the same services as an OS such as memory management, threading, networking and file systems.
The more I read about it, the more interesting it seems.
So I'm not 1000% clear on what this is. Is it a package to use a RPi booted into a normal Linux environment to do embedded development on a different chip/board connected to the Pi? Or is it to kinda turn the RPi into more of an embedded chip, with a bare metal OS?
It's closer to bare metal than Linux, but it's not a real-time operating system capable of precise timing like you'd see on a microcontroller. I see some configs that allow for pinning a thread to a core, which they claim can be used for real-time workloads, so it'll be a lot more predictable than a Linux kernel. However, the RPi has a memory controller with a caching layer, so it'll still be different from using an STM32 or Atmega with or without an RTOS (e.g. with Ultibo you can use malloc to allocate memory, and you'll have to watch out for the cache if you're touching DMA).
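To illustrate that last point, the pattern looks roughly like this. It's only a sketch: the cache calls are the ones I believe Ultibo exposes in its Platform unit, so double-check the wiki for the exact names and signatures.

    program DmaCacheSketch;

    {$mode objfpc}{$H+}

    uses
      Platform;  { Ultibo unit that, as far as I recall, exports the cache routines }

    const
      BUFFER_SIZE = 4096;

    var
      Buffer: Pointer;

    begin
      { Plain heap allocation works under Ultibo much like on a hosted OS }
      Buffer := GetMem(BUFFER_SIZE);
      FillChar(Buffer^, BUFFER_SIZE, $AA);

      { Before a peripheral reads the buffer via DMA, flush the CPU data cache
        so the device sees what we wrote rather than stale RAM contents }
      CleanDataCacheRange(PtrUInt(Buffer), BUFFER_SIZE);

      { ...kick off the DMA transfer here... }

      { After a device has written into the buffer via DMA, invalidate the range
        so the CPU re-reads RAM instead of its own stale cache lines }
      InvalidateDataCacheRange(PtrUInt(Buffer), BUFFER_SIZE);

      FreeMem(Buffer);
    end.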
From the FAQ: "We fully intend for Ultibo to support other boards, which ones and when depends on what the community shows interest in."
This project is freaking interesting. I would welcome some support for smaller Nano/Orange Pi boards and others with Allwinner/Amlogic/Rockchip/Sitara etc. SoCs. All of those chips are already employed in TVs, media players, cellphones, industrial boards etc., as opposed to the Broadcom SoCs, which are used only in the Raspberry Pi, so there's already a huge load of iron this environment could become useful for writing software for.
The Oberon OS is an unequaled pinnacle of user interface design, IMHO.
(In addition to the Oberon language, which is great (in context), and the internal organization of the Project Oberon OS, which is an education in itself. (Prof. Wirth has kindly made the book into PDFs for distribution, available here: http://www.projectoberon.com/ This book is soooo good for computer system design).)
I'd say that Wirth himself was/is more a fan of the "80%" part than the "Rust" part, as making things fool-proof would complicate language design and implementation too much, with all the follow-up effects this has.
I can't think of anything "modern" that comes close to that, most languages these days are quite feature sink-y. And yes, that includes Go, if we consider Oberon to be somewhat of a baseline.
Heck, even his successor's language (Eiffel) is more minimal than what's hip these days.
Back when I was learning programming as a kid, I read a quote in a Pascal book:
> Pascal generally has one way to do a thing, the right way.
Right up through Object Pascal I always felt that was the case, and in 2018 it's only now that VS2017 feels nearly as productive as Delphi 6 did nearly 20 years ago.
But beyond Eiffel, most developers ended up moving mostly to C++ (like myself back then), as it provided C compatibility with most of Pascal's safety, if one cared to use the C++ features.
Then Java took over the show for security-minded developers for a couple of years, until we finally arrived at Go, D, Swift, Rust, .NET Native, SubstrateVM.
IMO they weren't successful because you actually need good 'marketing' to make a strict and safe language popular. For most people it will feel like having to fight the compiler to accept your code. The Rust community does a really good job informing developers that it is definitely worth the trouble.
It's too complex for that, from what I've seen. Two key parts of Wirth's philosophy in judging language design are favoring simplicity of compiler implementation over application implementation, and using the heuristic of how long the compiler takes to compile itself. Although useful, I think both of these are wrong when considering performance optimization, high-productivity approaches like DSLs, benefits of stronger typing, and so on.
Nim is a good counterexample to Wirth's philosophy showing program readability, dev speed, and runtime performance can all go up if we increase language/compiler complexity where it makes sense. So, the reason Nim is a great project is that it's not a Wirth language. Look at Component Pascal if you want to see one like he'd want for industrial use. It was even commercialized with Blackbox at one point.
On top of other answers, I'll add that he balanced language power/complexity to make his languages easy to learn and super-fast to compile. The latter boosted productivity by almost eliminating the waiting time compiles would give you in something like C++. Then, they'd crash instead of getting hacked on many common errors. And if you needed to, building a compiler for them was easy enough that it was a project for compiler students in colleges. Free Pascal, although more complex than those, is likely so portable partly due to the simplicity/power ratio of the Pascals.
Aside from Free Pascal with Lazarus, the closest thing to that experience today is the Go language, with its Oberon-like style, safer-by-default semantics, and fast compiles. One of its inventors stated that a design goal was recapturing the fast-paced, smooth experience he had with Oberon-2 in the past.
Wirth languages lost to C in part because they weren’t flexible enough.
It’s true that C’s flexibility comes at the expense of safety - but during the “Cambrian explosion” of personal computers, Wirth took too long to address limiting factors.
E.g. open arrays, IIRC, were added in Delphi; the functionality makes life significantly easier. As a result, Pascal people had to break the language shackles (and safety guarantees), and doing that created a just-as-unsafe-but-more-cumbersome situation compared to C.
Wirth languages sucked at variable length dynamic memory.
Open arrays were in Delphi from the beginning, but dynamic arrays were added much later (circa Delphi 4, IIRC).
Object Pascal was always a lot more free with setting the bounds of an array too - you could specify any arbitrary range or even an enumeration. That's one thing I miss in C-like languages: being able to define an enumeration, set the array bounds to that type, and then access it via the enumeration's values.
On a side note, I also miss sets. They were a first-class part of the language, not a set of bolted-on classes in the runtime library. I remember that I would quite often define an enumeration, an array that used that enumeration as the subscript, and a set to allow parts of the enum to be turned on and off when accessing the elements of the array. That was pretty cool. It made certain things a lot simpler - like defining errors that were raised by an action and then getting their textual error messages - all without having to do any conversion or anything fancy.
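From memory, it looked something like this (a quick sketch with made-up names, not code from any real project):

    program EnumArraySets;

    {$mode objfpc}{$H+}

    type
      { One enumeration serves as both the array index type and the set element type }
      TLoadError  = (leFileMissing, leBadHeader, leChecksum, leTruncated);
      TLoadErrors = set of TLoadError;

    const
      { The array bounds are the enum itself - no magic numbers to keep in sync }
      ErrorText: array[TLoadError] of string = (
        'File not found',
        'Header is invalid',
        'Checksum mismatch',
        'File is truncated'
      );

    var
      Errors: TLoadErrors;
      E: TLoadError;
    begin
      Errors := [leBadHeader, leChecksum];   { collect the errors an action raised }

      for E := Low(TLoadError) to High(TLoadError) do
        if E in Errors then                  { first-class set membership test }
          WriteLn(ErrorText[E]);             { array indexed directly by the enum }
    end.

No conversions or lookup tables needed - the compiler keeps the enum, the array and the set in step for you.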
Yeah I loved sets, enumerations and sub-ranges as well.
You can achieve some parity in C++ and other languages with similar type systems, but it does require some tricks and I am not sure if everyone would appreciate it.
Too late, when Windows 3.1 was slowly getting adoption?
Outside expensive UNIX workstations C was just yet another systems language.
In those days, 16-bit software was still mostly written in assembly by those who cared about performance, including games and OSes.
On my region of the globe using C on MS-DOS was to bring work home.
At my technical school we shared one Xenix workstation among the whole class, meaning we took turns sitting at it.
On OS/2, Windows, Mac OS and later BeOS, C++ was already winning ground as the main language for application development, even though the kernels were being written in C.
It was the rise of free UNIX clones that helped C turn the tables in its favor.
I guess it's a regional thing then. In my corner of the world, Pascal was already dying commercially in 1992, Delphi gave it a final breath of life, but not enough to save it.
Windows was starting to get adopted, Unix was not yet a thing in the homes (but enthusiasts DID start Linuxing by 1994-1995), Assembly was getting irrelevant except for demos, games, and the occasional tight loop, 16-bit was seriously dying -- everything new was 32-bit with DOS extenders, Watcom was the best compiler ever, and DJGPP was common -- C and the shiny-new-but-not-yet-convoluted C++ were kings.
And there was essentially no viable Pascal for PCs other than Turbo Pascal -> Borland Pascal -> Delphi, so it doesn't really matter what the standard said.
The first idea that struck me was to try writing, or porting, some bare-metal emulators.
E.g. extend an Atari TT/Falcon emulator to use as much as possible of the RasPi's resources -- all the RAM, an emulated blitter & FPU, the SD card as a big hard disk. There are several FOSS OSes for the ST now; this would make an interesting selection of old ST OSes accessible to a new audience on the cheap.
The only FOSS Amiga OS I know of is AROS and they're already working on a native port, but a bare-metal Amiga emulator would be fun to have, too. Classic MacOS would also be great. :-D
Not exactly for the same use case, but I created info-beamer hosted (https://info-beamer.com). It's a digital signage platform, so it's more for visual results than interfacing with hardware, but it's fully programmable in Lua. You can see the Lua API here: https://info-beamer.com/doc/info-beamer. On top of that, info-beamer hosted provides a complete operating system that can be trivially installed on a Pi (https://info-beamer.com/doc/installing-hosted), and you can configure and run your code on any number of Pis and control everything through a dashboard or API (https://info-beamer.com/doc/api). As an example of what a full "package" looks like, have a look at this minimal example: https://github.com/info-beamer/package-cec-test. Everything is fully programmable and you can even 'git push' code to the system. Let me know if you have any questions.
Hey there! I know this is unrelated, but I have been trying to learn about programming embedded systems, mainly for IoT. I started with the Pi, and right now I want to create a minimal OS for my Pi. One of the challenges I have faced is updating without corrupting or bricking the device. I have managed to learn and understand a bit about SWUpdate, which is built for this. But I am totally lost when it comes to which OS I should start with: Yocto vs Buildroot. Also, I would love to hear your views on Ultibo.
I don't use Yocto or Buildroot, so I can't really comment on either. I also put together my own updating mechanism which uses A/B booting (so it boots from partition A, updates into B, makes that active and then tries to boot into B; once successful, it switches A and B and the cycle starts again). It's deeply integrated into the rest of the system (like the success condition for updates and the timeouts waiting for that condition). The fear of accidentally bricking all live devices motivated me to build something I truly understand and that is minimal (its core is at most 200 lines of Python).
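Roughly, the switching logic boils down to something like this (a hypothetical sketch of the A/B state machine with made-up names, written in Pascal to match the thread - it's not my actual Python code):

    program ABUpdateSketch;

    {$mode objfpc}{$H+}

    { Illustrative only: a real tool persists this state on the boot partition
      and flips the bootloader's root-partition setting; those details are omitted }

    type
      TSlot = (slotA, slotB);
      TBootState = record
        Active: TSlot;    { slot the device normally boots from }
        Trial: Boolean;   { True while a freshly written slot is on probation }
      end;

    function OtherSlot(Slot: TSlot): TSlot;
    begin
      if Slot = slotA then Result := slotB else Result := slotA;
    end;

    procedure ApplyUpdate(var State: TBootState);
    begin
      { Write the new image into the inactive slot, mark it as a trial boot,
        then reboot into it }
      WriteLn('Writing update into inactive slot');
      State.Trial := True;
    end;

    procedure ReportBootResult(var State: TBootState; Healthy: Boolean);
    begin
      { Promote the new slot only once the success condition is met in time;
        otherwise the next boot falls back to the previous slot }
      if State.Trial and Healthy then
        State.Active := OtherSlot(State.Active);
      State.Trial := False;
    end;

    var
      State: TBootState;
    begin
      State.Active := slotA;
      State.Trial := False;

      ApplyUpdate(State);              { boot from A, update into B, try booting B }
      ReportBootResult(State, True);   { B reported healthy, so it becomes the new A }
      WriteLn('Active slot: ', Ord(State.Active));
    end.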
Ultibo seems pretty nice, although I've only read a bit about it and didn't play around with it. I'm a big fan of small systems and fondly remember writing Pascal back in the Borland Pascal days.
The advantage of Object Pascal is that it is compiled to a binary, while Lua is generally interpreted. I guess you could implement Lua execution on top of this, so you could make it happen. It depends on whether the Free Pascal compiler can link in the Lua interpreter for this platform, and what level of integration Lua has into the Object Pascal language.
I don't see a problem with that, when you write your core logic in Lua and provide a binary "runner" (Lua bindings or a plain Lua executable). Your point about getting Lua to run there in the first place stands, of course.