> As we expand the universe of software we wish to run on Fuchsia, we are encountering software that we wish to run on Fuchsia that we do not have the ability to recompile. For example, Android applications contain native code modules that have been compiled for Linux. In order to run this software on Fuchsia, we need to be able to run binaries without modifying them.
This seems to be new (circumstantial) evidence that Fuchsia is supposed to eventually replace Linux in Android.
Well, it’s what has always been said about anything that wanted to replace Windows: a Windows replacement must either be able to run Windows applications and games (like running WINE on Linux) or have officially supported native versions of those applications and games (as macOS mostly does).
What we can clearly see here is that an Android replacement must either be able to run Android apps and games or have officially supported versions of those same apps and games (like Adobe Photoshop Express or Pokémon GO being available for iOS, Android, or the ‘New Platform’).
The first route is the quickest way to support the existing Android app ecosystem from day one; the alternative is to wait for those apps to be rewritten for the new OS platform, which takes years. The former was the obvious choice for them.
Windows 95 ran on machines with 4MB of RAM (8 recommended). NT 3.1 required 12 (16 recommended). That extra 8MB was a significant amount of money in 1995. NT 3.1 also required twice the hard drive space (75MB vs 35MB). Windows 95 offered better support for direct DOS mode access, VxDs, etc., which was crucial at the time for games and multimedia applications, in a manner that NT simply couldn't match.
I've seen exactly that argument used for why Purism and Pine phones should not be able to run Android apps. But I'm sorry, as much as I would like to migrate to one of those, it's simply not going to happen if it means ending up with a phone that cannot run various important apps.
Yes, supporting apps for a competing platform gives some developers a convenient excuse to not develop for your platform specifically. But you still get the benefit of their work for the other platform. And not supporting those apps gives users a solid reason to not use your platform at all.
This has been stressing me out this week. I have had the Pinephone and Purism shopping cart tabs open for a week, and I just can't bring myself to do it.
I could maybe justify the current iteration of Pinephone as an experiment, try and make an app and buy it through my company as a freelancer, but Purism would need to have quite a few major apps before I could justify the cost. My heart wants me to support these projects. It almost feels like a moral imperative if we want to actually own and control our mobile hardware ever.
But it almost feels like going off to live in the woods to save the environment; the alternative needs a few more of the mod cons for any kind of serious adoption.
Sure, but I think a more apt comparison in this context would be to Windows NT, given that Android and Fuchsia are both made by the same company: the ability to run win16 apps was crucial to its initial success. If every existing Windows 3.1 app had to be rewritten and rebuilt to work on NT at all, it would have been a much more difficult migration.
OS/2 was really only ever used in IBM shops: big enterprises that bought all their IT infrastructure, from Mainframes to network technology to printers to desktops from IBM.
OS/2 was never really successful outside of that domain.
Neither mobile platforms nor MacOS are really trying to "replace" Windows, never have been. They have some overlapping markets, but the target has really been very different all around.
On the other hand, Apple has been very careful to include this sort of "backwards compatibility" in all their systems for a while. It's usually not maintained for as long as Microsoft maintains theirs, but from m68k -> ppc -> i386 -> amd64 -> armv8 they've always supported at least the previous architecture and runtime, so early OS X could run apps from System 7/8/9.
Android putting graphics drivers outside the kernel and not under the GPL has created enough of a disaster. I don't look forward to the day when everyone is running a kernel that doesn't force SoC manufacturers to share even the most basic drivers.
We're already there, anyway. Proprietary drivers are super common not just in Android, but across embedded devices everywhere.
Some companies just flagrantly violate the GPL, and little ever comes of it.
Others ship minimal shims in the kernel, and put the proprietary bits in userland code or some such. This seems to be more common, where vendors will technically release their kernel fork, but the code will be obfuscated or just generally useless without the proprietary blob that goes with it.
I'd much rather the kernel just have a stable ABI so these kernels could at least be freely updated, instead of users being stuck with a hacked-together fork and its proprietary shims.
If you are aware of a particular Linux kernel violation, please report it to the Software Freedom Conservancy, who recently announced a new strategy for achieving GPL compliance.
> We're already there, anyway. Proprietary drivers are super common not just in Android, but across embedded devices everywhere. Some companies just flagrantly violate the GPL, and little ever comes of it.
Sometimes I wonder if Linus could go back and do it over, if he would license Linux under the BSD license instead of GPL. He's never seemed like a particular stickler for free software ideology, and has welcomed connections between Linux and industry. I don't know that it bothers him that much to see Linux used in proprietary software.
Not trying to start a flame war over licenses here, just speculating. And maybe this could be easily refuted by a link to one of his email rants, who knows.
His own words: "I love the GPL and see it as a defining factor in the success of Linux"
So I very much doubt he would license it under BSD or similar.
Having said that, he has also stated that he is glad he stuck to GPLv2, as he is not a proponent of the TIVO clause, IIRC his words were something to the effect of 'I just want the code contributed back to the kernel'.
My own impression has been that he does not really care one way or the other about tivoization. If that clause had been present in the GPL since the beginning, it would not have stopped him from using it.
However, he sees adding a clause like that as a pretty significant change to the GPL, and does not like fundamentally changing the deal. If GPLv3 had consisted only of wording clarifications to ensure the license worked properly in more regions, and perhaps of changes to allow Apache license compatibility, he would presumably have had no objections; of course, he would have been unlikely to be able to relicense the kernel anyway, given the kernel's numerous contributors and the lack of an "or later version" clause.
It's not a problem in the Windows world that drivers are not open source. I think rather than forcing them to put the drivers under GPL, we should fix the fact that embedded systems always need custom images.
On x86, you can just grab a Windows or Linux .iso and it will enumerate all hardware via PnP and install the appropriate drivers. This is something that is sorely missing in the Arm world. Instead you get semi-closed BSPs, source code thrown over the wall, and you have to write your own device-tree files and often kernel code to get something to run. SBSA is a start, but it only targets the server market.
Maybe it's not a problem for you if you upgrade your hardware frequently (and don't care that it stops working on newer versions of Windows because the vendor couldn't give a shit about keeping the driver up to date), and buy exclusively high quality (and high price) parts. I had enough fun with Realtek / GPU drivers (from all three major vendors) on Windows to hate the very concept of a proprietary driver to the end of my life.
The very same hardware somehow works perfectly fine on Linux. No kernel panics (not a single one in 12+ years of running it on Realtek crap), no intermittent connectivity problems, no weird GPU bugs, nothing of the sort.
I forget that you have to hedge every comment on HN :-P
Of course there are not zero problems on Windows. I had a bunch of trouble when Windows 10 came out with a laptop I bought just before, I never got the touchpad working as expected. With Linux the touchpad works fine, but then yellow looks like mustard on the screen...
What I mean is: I can go to the store and buy a random thing, and I can just assume there will be a driver at all and my PC will find it.
I work for a company which manufactures industrial PCs with touch interfaces, and one thing we noticed with our X86 line is that we can just stop caring about drivers. Although everything down to the touch panel and the mainboard is custom, you can just throw a windows OR a Linux (Ubuntu) iso at it and it will work 99%. This is not about Windows vs. Linux, but about PnP vs. no PnP.
My netbook still has working drivers for Windows 10 with DX11 and GL 4.1 support, whereas the super cool open source AMD driver doesn't offer anything better than GL 3.3 with hardware video decoding that kind of works on full moons.
Well yeah, trying to use modern hardware on ancient software is going to suck. It's like trying to run a RTX3090 on Windows XP. Debian stable is not a sensible choice for new shiny things, especially GPUs, and that's not really a "Linux" problem, it's a Debian problem.
> It's not a problem in the Windows world that drivers are not open source
Yes, it is. Leaving aside the inability to study the source code and the other benefits of open source, there are old drivers that are no longer maintained and can't run on newer versions of Windows, whereas hardware no longer maintained by its manufacturer still works on Linux, because the open source driver can be adapted to run on newer kernel versions.
I'd say the reason generic kernels/images are missing in the ARM world is because people generally don't run their own software on the ARM hardware they own. And the reason for that in turn is that often people don't have the ability to run their own software on their ARM hardware. Which brings us back to the GPL - this is standard "Tivoization", something that the GPLv3 tried to prevent, but unfortunately Linux is GPLv2... Still, for drivers that are open source and upstream, a practical consequence is typically that people are allowed to run their own kernels on the associated devices - which I think would give the PnP, generic image world you want.
This is all just to say, I don't think "GPL drivers" and "PnP generic images" are as separate as you're suggesting.
I have working hardware that is no longer usable under Windows because the drivers no longer work. Hardware companies have no interest in supporting old hardware, and they drop support within a few years. I recently got a new computer at work with a part that is already considered "legacy" on the vendor's web site.
Hmm, makes me wonder if it's feasible (for really popular hardware where there is sufficient interest) to just port the open source Linux drivers over to Windows kernel and release it under GPL. Maybe even charge some money for the effort/to be able to download the installation kit (as I think there may be costs involved in getting access to the Windows kernel driver development kit that would allow to build the driver).
Spoken like someone who is not struggling to run older hardware on modern Windows. Windows 10 likes to screw up my audio (Creative X-Fi Titanium) every major update. No problems on Linux with updating the kernel almost every week for many years now.
There are arm phones you can buy now that will boot the aarch64 defconfig kernel from mainline. If a phone doesn't do that it's because the manufacturer doesn't want it to, not because there's something special about arm that prevents it.
Android has been a mostly positive thing for Linux, but it could have been much more if Google so desired. Google had the influence and market position to force many important drivers to become blob-free or open source (GPU, VPU, GSM modems, wifi, bluetooth), but it looks like they just didn't care.
Android has been one of the most negative contributors to general computing.
We entered the new millennium with the concept that most computers could use most extension hardware with (relatively) little effort, and you could upgrade your computer's basic operating system at will; even change it to another one entirely.
I'd love to update my still-decent Android phones that have long since stopped receiving security updates, but I can't. This is objectively inferior, a massive regression from general computing.
> We entered the new millennium with the concept that most computers could use most extension hardware with (relatively) little effort, and you could upgrade your computer's basic operating system at will; even change it to another one entirely.
> I'd love to update my still-decent Android phones that have long since stopped receiving security updates, but I can't. This is objectively inferior, a massive regression from general computing.
this is some of the smelliest BS i've read on hn in the last year or two. phones before android were not upgradable AT ALL. if you were lucky, you could put in a bigger memory card, and that was it.
nowadays you can access most of the source of the os for your average android smartphone, and you can rebuild the os and reflash it yourself. doing such things was basically just a wet dream in the early 2000s.
> phones before android were not upgradable AT ALL.
They weren't general computing devices. They were for making phone calls, sending SMS messages, and tracking contacts. The security risk profile did not include access to social media accounts and/or online banking.
For many folks, and a growing many folks, their phones are now their primary computing devices. For many folks, their primary computing devices no longer receive security updates.
Please tell me about an Android phone that, upon flashing, will not null out the proprietary camera firmware and other things. I want a truly open OS, but Android just hides behind AOSP being open, and it can only exist on certain phones, with quite a few negatives.
If you show me such a device, I would instantly ditch my iPhone; pretty much the only reason I have one is the great hardware and the lack of Google supervision. I even bought a Pinephone, but it needs a lot of work (and a new hardware iteration) before it can become truly usable.
The Nokia N900 got its last official update in November 2010, if I recall correctly. It had only been launched a year and a half before. Yes, there was then a community effort to improve some of the userspace, but the device was eternally stuck on a 2.6 kernel due to blobs. I loved my N900, but let's not pretend Nokia made their hardware future-proof.
> We entered the new millennium with the concept that most computers could use most extension hardware with (relatively) little effort, and you could upgrade your computer's basic operating system at will; even change it to another one entirely.
And who enabled that? Not everyone wants to admit it, but it was IBM together with Microsoft which made this possible. This openness was thanks to a proprietary vendor wanting to make one operating system that ran on a wide variety of hardware.
It had the opportunity to be a positive thing for Linux but I would argue it really hasn't been. Your comment makes a good point: Google had the power to fix a lot of this but didn't.
Very interesting. So it's very similar to WSL on Windows (not WSL2), where it translates Linux syscalls into the appropriate Zircon ones.
From the proposal:
> An important cautionary lesson we should draw from WSL1 is that the performance of starnix will hinge on the performance of the underlying system services that starnix exposes to the client program. For example, we will need to provide a file system implementation with comparable performance to ext4 if we want Linux software to perform well on Fuchsia.
Windows NT was also originally implemented with translation layers for OS/2 and POSIX.
The WSL is a revival of the POSIX layer.
"Broad software compatibility was initially achieved with support for several API "personalities", including Windows API, POSIX, and OS/2 APIs – the latter two were phased out starting with Windows XP."
Even WSL1 is technically very different from the old POSIX layer. The old POSIX layer used classic NT processes and POSIX APIs were routed in user space to NT syscalls via ntdll. WSL1 uses lightweight picoprocesses and implements the Linux syscall API in-kernel, rather than doing POSIX API to NT syscall mapping in user-space. Windows NT POSIX (and Interix/SFU/SUA) are conceptually much closer to Cygwin than to WSL1.
> So it's very similar to WSL on Windows (not WSL2), where it translates Linux syscalls
I wonder whether, if they go this route, they will eventually end up falling back to a real Linux kernel running in a VM, as WSL1 -> WSL2 did.
These userspace runners seem pretty revolutionary, especially their ability to catch syscalls. As I understand it, this is what WINE is trying to add to Linux (so that they can correctly emulate Windows calls for some DRM/anti-cheat).
This is very, very cool stuff. It positions Fuchsia as something approaching a lingua franca OS.
At the end of the day, x86 is x86. Different operating systems use different loaders/dynamic linkers, different filesystem layouts, different APIs, but still the same machine code and still the same calling conventions. WINE, in a nutshell, merely presents the Windows environment (by wrapping Linux calls in Windows calls) to an executable that is running otherwise natively, there isn't a virtual machine or an emulator involved. There is no sandbox or security (as that is a non-goal).
The problem is that various DRM and anti-cheat products use undocumented Windows syscalls to do their job, and syscalls are something that you can't just wrap with a function. This extension will allow WINE to ask Linux to send it unhandled syscalls (via SIGSYS), so that it can emulate them (making it a misnomer for the first time).
Unfortunately you can no longer create syscalls in NT subsystems (XP was the last time that was possible). I assume you have to be Microsoft, or a specially privileged partner with source access, in order to do that.
I'm surprised they're not following Microsoft WSL2 and taking the virtualization approach. Ultimately the virtualization approach has better performance (fewer context switches) and much better compatibility. AFAICT one of the big reasons Microsoft gave up on WSL1 is that it's an infinite amount of work to reimplement all Linux kernel features. People kept running into obscure Linux features that were not supported.
Yes, virtualized Linux is somewhat less well-integrated, e.g. at the filesystem level, but that is getting better with tools like https://virtio-fs.gitlab.io. Most importantly, it's a path that others are also taking, so you can share their work. This Fuchsia-specific approach won't benefit from anyone else's work.
They even mention that they're going to support virtualized Linux as well, so here they're just proposing to do a lot of extra work.
I guess Google doesn't have to care how much extra work they create for themselves.
Every time I hear "something something run unmodified linux programs something something" I get an ambiguous feeling. Part of me fears EEE and part of me hopes that people will simply just target Linux.
I get the opposite fear: that people will simply just target Linux, and we'll have another millennium of UNIX+POSIX monoculture with no room for alternative designs...
So if I understand this design doc correctly, a Linux program invokes SYSCALL to call mmap(). The Fuchsia kernel throws an exception to signal a separate starnix userspace process, which peeks into the Linux process to read its arguments, then issues even more SYSCALLs indirectly via the Fuchsia vDSO to ask the kernel to manipulate the PML4T page table radix trie that controls the Linux process memory, before finally issuing yet another SYSCALL to signal the process to resume running.
Since the overhead of each SYSCALL is ~1000 cycles, and this overhead just keeps getting worse thanks to things like Spectre, I'd say that's a real high performance operating system you've got there, Google. History shows us that developers never choose microkernels. The only reason people use them is because big corporations force us to, like how Intel preinstalled MINIX on its chips without the consent of its customers. Where are the early authentic Google values that technology should be simple, fast, and stay out of the way? Your o/s is bad and you should feel bad, for all the reasons Linus explained to Tanenbaum back in 1992.
If Google were smart they would support something people love which makes perfect technical sense, like Cosmopolitan, which enables normal POSIX programs to boot as unikernels that needn't incur any syscall overhead at all. The Actually Portable Executable format is also a valid disk image that can boot on bare metal in Google's Cloud. This is perfect for our brave new world of cloud services, where instances only really need sockets to talk to network services like bigtable and therefore don't care about the overhead of a timesharing system. Google created this new world, yet it fails to understand its own creation if we consider how Fuchsia is dragging us into the past by resurrecting designs that were too slow even back when multitenancy on a single machine mattered. See https://github.com/jart/cosmopolitan and https://github.com/jart/cosmopolitan/issues/20#issuecomment-... to learn more.
One of the goals of microkernels is to get rid of a whole class of security issues. In some cases that trades performance for security. In some cases you can get better performance, but you have to rewrite your software to take the microkernel into account.
Regarding Cosmopolitan, one issue is that different executables can't talk to each other efficiently, because they are in separate unikernels.
If they are running in the same unikernel then you have essentially zero security, unless your unikernel is actually running another kernel!
In other words, unikernels aren't great for consumer devices.
What Google is doing is hobbling anyone who doesn't want to rewrite their app to be locked in to Fuchsia's arbitrarily incompatible system interface. "Muh security" isn't an argument. You should adopt Windows: it's got a real Orwellian ACL system that's had a lot of impact annoying people.
At one point Fuchsia was using parts of gVisor, a userland Linux kernel implemented in Go (to secure containers), as part of their network stack. Does anyone know if gVisor is being used for this as well?
The idea is to run a binary in userspace called 'starnix' that will work as glue between the Linux syscalls and the Fuchsia kernel.
This is possible because they designed it in a microkernel-like fashion, where things are more modular and a lot more can be done straight from userspace without crossing the expensive ring 0 barrier.
From the docs:
> Rather than running Linux binaries in a virtual machine, `starnix` creates a
_Linux runtime_ natively in Fuchsia. Specifically, a Linux program can be wrapped
with a _component manifest_ that identifies `starnix` as the _runner_ for that
component. Rather than using the _ELF Runner_ directly, the binary for the
Linux program is given to `starnix` to run.
> In order to execute a given Linux binary, `starnix` manually creates a
`zx::process` with an initial memory layout that matches the Linux ABI. For
example, `starnix` populates `argv` and `environ` for the program as data
on the stack of the initial thread (along with the `aux` vector) rather than
as a message on the bootstrap channel, as this data is populated in the Fuchsia
System ABI.
More or less. As far as I know, gVisor works with a host process that serves as a proxy for the OS syscalls; the child process dispatches the calls to the parent process through IPC, which in turn executes them.
This is basically also how Chrome works, except that it serves other APIs.
On Fuchsia, apparently the OS attaches a binary (a program in its own right) to the binary that is executing; the component's "manifest" says it will need a Linux syscall proxy, which is this starnix that gets stitched to the executable, doing all the calls in the same process.
In Fuchsia's case this should end up much more efficient, because there would be no IPC communication involved, just jumps within the code segment.
Dalvik is a JVM, just not fully compatible with OpenJDK.
Android is slowly migrating toward OpenJDK. I expect Java 17 to put them enough to shame to have to react.
Most of the Android ecosystem is on Kotlin at this point, so Java language changes don't really matter for Android. JVM changes matter even less, because Google implements their own runtime.
I always thought that Google would do this, but with gVisor instead. Why would they prefer a new development over an existing one that they already control?
The interesting thing about Linux is that, since it has focused so much on backwards compatibility, it's a great target for emulation. A few years ago there was a great paper and presentation by Jon Howell et al from Microsoft research that's still worth getting familiar with, even more so now that WebAssembly is solid.
The memory model is similar (a flat, linear address space). The OS APIs are quite different. Fuchsia's OS APIs are designed around an object-capability discipline. For example, almost all the syscalls take a zx_handle_t as their first argument, which means they are operations on an object the process has previously been explicitly granted access to. In Linux, many OS APIs operate on the "ambient" environment rather than on a specific kernel object.
Not really; you can be a microkernel hobbyist or just curious. I've had some idea of how Zircon (Fuchsia's microkernel) works for about a year now, and I'm very much in the above categories.
It's fairly different; the OS is built on capabilities and exposing its system API via libc. By default applications get nothing. Resources are objects. IPC is done using a wire format called FIDL.
Fuchsia's not a real thing and it's never going to be. You don't have to worry about it, it's just a make-work project for some people at google to feel cool.
More likely, Google have done threat modelling of their entire business and worked out that if there was ever some legal problem with using Linux (like the infamous SCO case), the amount of economic damage it would do to them would be more than whatever resources they are spending on coming up with an alternative (which will hopefully never need to be used).
Also, just as Chrome gave them a seat at the table of web standardisation, and a chance to try new ideas that didn't have to go through Mozilla's approval process, having a kernel of their own allows them to influence the OS design space in ways that are more aligned to their business goals, and without needing Linus's approval.
They can use BSD to build whatever they want if they're worried about the GPL.
I think that someone at Google built Fuchsia for the sake of building it. Their core business is still search; that's all Android is. If they launch their own Fuchsia-based handset, people are unlikely to switch, and OEMs are unlikely to switch. It's just a failed concept.
I'm talking about BSD the operating system concerning the OP's assertion of managing risk.
Last I checked (years ago), the userspace was implemented partially with musl, which reads
> 'musl, pronounced like the word "mussel", is an MIT-licensed implementation of the standard C library targetting the Linux syscall API,
Now, they're announcing running Linux programs on Fuchsia. Sure seems to me they're implementing a Unix-like system, even if their kernel was written from scratch.
The BSD world doesn't consider the userspace very separate from the kernel space, it's all part of a single system.
I would argue it's very much 'based on Unix' as soon as they start adding POSIX APIs. They're adding POSIX APIs because they work.
I can take it from your username that you feel that Fuchsia is a threat to your beloved Linux Desktop to the point of having complete denial that the project is not real. It’s OK to be scared.
But I’m afraid you have to someday accept that some of Big Tech is ‘using’ Linux as a stopgap, to either create their own proprietary subsystems around it or build a completely new OS compatible with Android, all while pretending ‘they care’ about Linux.
They only care when it serves their interests, as we can evidently see with Fuchsia’s Android support and WSL2’s GPU hardware acceleration support.
Just the opposite. I don't think Fuchsia is much of anything at all, let alone a threat.
As far as Big Tech, EEE is a legitimate concern.
As far as Google, they've demonstrated countless times they're not to be trusted. If someone does business with them, they are being foolish. Fortunately, outside of search and chrome, they're not very competent, so they're not much of a threat.
I’m not sure how you can argue Fuchsia isn’t much of anything at all, considering it’s mostly open source, you can watch its development, and it has come a long way since it was announced. You may not want it to be much of anything, but it is very real.
The amount of human resources dedicated to improving Linux is orders of magnitude higher than the number of devs working on Fuchsia. That, plus the fact that Linux has been actively improved for decades.
It's quite obvious it's dead on arrival to anybody who allocates a few neurons to the economics question.
People would have said the same thing about Linux compared to proprietary Unix or even early Windows in 1993-1994. Also Linux has millions of lines of legacy that people are having to maintain and work around while Fuchsia and their team gets to start from scratch with all the knowledge learned.
As for your ad hominem at the end there: perhaps you should learn to engage with the level of respect presented to you by your peers, as that kind of discourse does little but put on display how your harsh words are little more than projection. A better type of conversation is expected here.
> People would have said the same thing about Linux compared to proprietary Unix or even early Windows in 1993-1994.
I don't think the comparison applies; Linux had a unique value proposition at the time by being welcoming to open source and free. Fuchsia, in comparison, is redundant the same way the BSDs are nowadays.
> Also Linux has millions of lines of legacy that people are having to maintain and work around
This claim lacks concrete evidence. The number of lines of code that are there for legacy reasons is unknown, but one million seems like an exaggeration. More importantly, most legacy code is confined (it does not affect non-legacy code). Of course, because of backwards compatibility Linux has many suboptimalities, and developer productivity could be higher without it, but this argument isn't enough to outweigh the order-of-magnitude difference in human resources (and in expertise, e.g. from hardware makers).
> with all the knowledge learned
All? I think most of the Fuchsia devs had no significant role in the Linux kernel codebase; it's not like they hired Greg Kroah-Hartman. More fundamentally, Fuchsia has no access to the useful knowledge that is only encoded in comments in the Linux kernel, and the knowledge about past mistakes is lost unless you spend years reading the Linux mailing lists and commit history.
Hence the economics point still stands. Of course a world without my ad hominem would be better, but such a hypothetical world is precluded as long as a community like HN is unable to see such striking NIH issues.
Fuchsia isn’t redundant, because the literal king of the internet is putting its weight behind it, so your economics point doesn’t just not stand, it’s almost a fallacy.
As for the point about Fuchsia not having knowledge because they didn’t personally build Linux, that isn’t my point. What I’m saying is that they can look at what Linux (and other OS’s) have done and skip over lots of pathways that have been proven over the last three decades to be dead ends or inefficient, while mimicking those systems and conventions that are proven to work. I’m sorry but you seem to be going out of your way to ignore the fact that Fuchsia is, whether or not you want it to be or think it should or will be.
One approach I didn't see mentioned in the documentation is running Linux programs in a virtual machine to handle isolation and system call trapping (but not actually run the Linux kernel in the guest–just the program), then in the hypervisor translate and forward all the calls to the Fuchsia kernel, in essence making the VM "dumb computing" that would exit to the handler whenever the Linux program made a syscall.
I assume it means coming up with a reasonable emulation of signals, fork(), etc. Something similar to how WSL1 works, since Fuchsia is significantly different from POSIX in some areas.
Yes, but aren't they reusing Microsoft's put-it-in-a-container strategy for handling the differences between their specific kernel and how the Linux kernel and Unix POSIX standards are implemented?
There are some similarities. The main difference is that Wine is mostly in-process, whereas this design is mostly out-of-process. I'll add a section on Wine to the "prior art" section of the doc.