Fuchsia: a new operating system (lwn.net)
503 points by rbanffy on March 31, 2017 | 317 comments



What makes Fuchsia different from so many other attempts at writing a new OS? They aren't writing a new OS, at least, not in the complete sense.

They are using the IPC system developed in and extracted from Chrome. They are drawing everything in userspace with a fast graphics renderer, with the logic for all system components written in Dart via the Flutter project. They use musl for the libc. They are using LK (Little Kernel) as the core kernel.

As a long time Linux desktop user myself, I'm really excited about this project. A secure desktop without tons of system calls? Userspace graphics? Not HTML/JS based? But could still be used for development? Yes Please!

It's really easy to compile and get it running. Try it out!


I wonder if they intend to keep Flutter as the primary UI toolkit, or if that was just used out of convenience. If this really ends up being an Android & Chrome OS replacement, and a large portion of code being written for it also ran natively on iOS, that'd be fantastic. Maybe even cool enough to get me to use Dart ;)


Is there a reason not to use Flutter? From what I've seen of Flutter it seems like a very competent cross platform UI toolkit. In fact, I wouldn't be surprised if they announced that you can start using Dart/Flutter to write cross platform Android/iOS apps at I/O 2017.



(disclaimer: I work on the Flutter team.)

You can use Flutter today to write an app that runs on iOS and Android. :)


How is it for writing cross-platform desktop apps? There seems to be a disappointing trend of UI toolkits only going for iOS and Android, leaving out the still very essential need for toolkits for cross-platform desktop apps in languages other than C++.


Are there any Google apps made using Flutter at the moment?


How is it in terms of speed?

On one hand, the website says that it's compiled to native code (so it can be the same speed as or faster than Java), but on the other hand, it's based on a soft-typed language, which makes optimization difficult (even with V8, JS is still slower than native code).

Side question:

I understand that Dart was soft-typed because it was supposed to replace/compile to JS, but what advantage do soft types have over "implied types" (like auto or Go's :=) combined with operator overloading (to allow string+int, for example)?
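Not from the thread, just to make the terms concrete: a tiny standard C++ sketch of what "implied types" means here (a fixed type inferred at compile time), versus a soft/dynamic type whose operations need runtime checks.

  #include <string>

  int main() {
      // "Implied" (inferred) types: the compiler deduces a fixed static type.
      auto n = 42;                      // n is int, decided at compile time
      auto s = std::string("hi");       // s is std::string

      // n = "oops";                    // would not compile: n's type is fixed

      // A soft/dynamically typed value, by contrast, can hold anything at
      // runtime, so every operation needs a type check before it can run,
      // which is part of what makes it harder to optimize.
      return n + static_cast<int>(s.size());
  }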


Even the original, untyped, Dart was designed to be easier to optimize than JavaScript (e.g. it's class based, not prototype based, so the runtime doesn't have to work as hard to optimize object access).

Recently Dart has added a strong mode (https://github.com/dart-archive/dev_compiler/blob/master/STR..., http://news.dartlang.org/2017/01/sound-dart-and-strong-mode....).

One of the reasons for strong mode is to enable better optimizations (https://www.dartlang.org/guides/language/sound-faq#why-stron...).


I'm going to save these links for the next time I get downvoted for saying that the JS object system and dynamic typing suck for performance :).

When JS performance threads come up I always mention that JS will never be as fast as Go/Java. Some people have a tantrum when I explain these issues. The sources I usually cite are dense and people just continue to downvote without reading them.


You might want to watch this talk on AOT compiling Dart, I'll link right to the perf benchmarks: https://youtu.be/lqE4u8s8Iik?t=9m28s

I think it's a work in progress, but the main benefit, at least at the time, was faster startup.


AOT compiling was initially all about iOS (where JIT isn't allowed), but you can now do that everywhere with "application snapshots" which were recently added with Dart 1.21.

Unlike the script snapshots, application snapshots also include CPU-specific native code in addition to the serialized token stream of the source code.

So, you don't just skip the parsing stage, you also skip the warmup phase. Your application starts instantly and it runs at full speed from the get-go.

https://github.com/dart-lang/sdk/wiki/Snapshots


I tried out their demo apps on Android and they were pretty fast but there were still some hiccups. It's definitely not as fast as good Java apps but it is pretty close. Also the animations are on the level of iOS apps - far superior to Java Android apps.


Dart has strong mode (this is enabled by default for its Angular2 projects) - Check this talk on sound Dart for more details https://www.youtube.com/watch?v=DKG5CMyol9U


Dart supports strong typing.


I read on the Flutter site that text input still wasn't implemented, so I'll probably wait some more.

I didn't find a doc on how low-level this framework goes. Or to put it differently, where does it plug into the iOS stack? OpenGL? CALayer? UIView? WebView? etc.


I was just looking into this two days later..

It's fairly low-level. It uses Skia, which is a 2D rendering engine Google bought 12 years ago that powers Google Chrome / Chrome OS, Android, Firefox, and Firefox OS. Skia is built on top of OpenGL, but they've been working on a Vulkan backend that is close to feature parity.

This seems to be how they get the high performance animations. It also explains how they are able to do cross compatibility between devices (You really only need C++ and OpenGL).

Here's more info from their FAQ: https://flutter.io/faq/#what-technology-is-flutter-built-wit...


From what I watched of the Dart conference in Munich last year, Flutter was still very much a WIP; not sure how much it has progressed since then.


Not that I'm aware of, although the site still says it's a technical preview.


Dart has really picked up a lot of traction in the past few years. If you do end up using Dart, I've got the perfect server-side framework for you to try out... ;)

https://github.com/angel-dart/angel/wiki


I think this definitely qualifies as a new OS. It's true that not everything is completely reimagined compared to what has come before. So what? Great artists steal.


>They are drawing everything in userspace with fast graphics render...

Dumb question, does this mean that it's limited to software rendering only? You need to go through the kernel to talk to the GPU, right?


What happens these days is that GPU buffers are mapped into user space, where they are filled with textures and drawing commands. Then, they are submitted to the kernel GPU scheduler for execution. See e.g. https://lwn.net/Articles/283798.
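A rough sketch of that flow in C++, with the caveat that the device path and the submit call below are placeholders rather than a real driver ABI (the real interface would be something like Linux DRM/GEM):

  #include <fcntl.h>
  #include <sys/mman.h>
  #include <unistd.h>
  #include <cstring>

  int main() {
      int gpu = open("/dev/dri/card0", O_RDWR);   // GPU device node
      if (gpu < 0) return 1;

      // 1. The kernel exposes a buffer object; user space maps it directly.
      const size_t size = 1 << 20;
      void* cmds = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, gpu, 0);
      if (cmds == MAP_FAILED) { close(gpu); return 1; }

      // 2. Textures and drawing commands are written straight into the
      //    mapping, with no kernel round-trip per command.
      std::memset(cmds, 0, size);

      // 3. One call hands the finished batch to the kernel's GPU scheduler
      //    for execution (GPU_SUBMIT is hypothetical, not a real request):
      // ioctl(gpu, GPU_SUBMIT, &submit_args);

      munmap(cmds, size);
      close(gpu);
      return 0;
  }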


These days you don't even submit them via a kernel call; you've got your own GPU MMU context, so you can kind of just go to town within your own process.

That's sort of the whole point behind Mantle/Vulkan/Metal/DX12.


In addition to the other comments, I would be excited about a graphics driver not being able to take out my system.


Does this actually happen with enough regularity to care?

In the cases where it does, I imagine it's a situation where you're actually doing 3D-accelerated renders of the user interface. In which case, when the graphics subsystem craps out, hasn't your running system been rendered effectively unusable anyway?

I get that this is a nice idea in theory, but does it actually improve anything in practice?


Yes? I haven't played games in a while, but when I did, crashes were frequent enough to be annoying, and ~100% of the time it was video drivers.


The Linux i915 driver manages to hard freeze my laptop every few days without any 3D games or even a compositing window manager ... It would be lovely to take it out of the kernel and isolate it with IOMMU.


Right, but that's my point. You're playing a game and the display crashes — I guess it's nice in theory that the rest of your system stayed up, but you're still more than likely just going to reboot, no?

If not, how exactly do you plan to restart the graphics process?


Windows automatically restarts the GPU driver after a crash; the screen goes black for a few seconds and then all the windows come back up. It can also upgrade the GPU driver without a reboot. It's not used often but it's pretty handy.


I think that was the moment I realised I liked Windows 7 more than XP. Installed new graphics drivers, expected "you must reboot your system", instead the screen went black (I had a moment of 'oh crap') and then came back up with 'installation complete.' Very impressive if you're used to XP.


This impressed me a lot a couple of months ago, when my GPU driver started crashing about every 5 seconds while playing a game, and Windows kept restarting it, without even killing the game. The performance was abysmal of course, but still pretty neat.


Lots of options here:

ssh in from my phone and restart the driver.

Go to a fallback UI and restart the driver.

Have a system util notice the driver is misbehaving, and restart it.


If the OS itself doesn't automatically recover the graphics process, you could always fire up your OS's built-in screen reader and use it like a blind person until you can get the graphics back up and running.


"The graphics drivers for Fuchsia also exist as user-space services. They are logically split into a system driver and an application driver. The layer of software that facilitates communication between the two is called Magma, which is a framework that provides compositing and buffer sharing. Also part of the graphics stack is Escher, a physically-based renderer that relies on Vulkan to provide the rendering API"


Depends on the OS design. It's really not a good design, but you could run everything in ring0 (x86), supervisor mode (ARM), etc. However, now any fault (user or OS) could halt your system and you have no memory protection or process separation.


You also have Singularity-style software-isolated processes, where the OS statically verifies memory safety so that it can safely run all of the code in ring 0.


You only have to run the privileged process that talks to the GPU in ring0, right?


No. GPUs typically work over the PCIe bus, and one can talk to PCIe via user space as well. In legacy systems like Linux, the mapping of virtual to physical addresses and the generation of scatter-gather lists (SGLs) resided in the kernel. If one moves the same functionality to user space without loss in performance (which is what Magenta seems to do), there's no benefit to kernel GPU drivers.

Then there's the whole "GPL mafia" in the Linux world who'd like to force vendors to open up their drivers by moving as many of the critical pieces into the kernel as possible. In theory, you cannot write a kernel driver without violating the GPL. Fuchsia will have no such impositions. If someone wants to open their driver up, they could. If they believe their offering is superior, and a secret sauce needs protecting, they can keep it closed.


Having control over the virtual to physical address mapping and scatter-gather lists for GPUs is effectively equivalent to kernel-level access anyway, though, because it lets you carry out DMA to and from arbitrary physical memory addresses. Some proprietary drivers for mobile GPUs have even given this level of access to untrusted user processes in the past leading to privilege escalation to root.


Not with IOMMU (Intel VT-d and similar).


And GPUs these days have their own MMUs too.


Unfortunately, the GPL mafia is right, and hardware vendors will continue to abuse their users with proprietary blobs unless they are forced into change.


How long has the GPL Mafia been saying this?

Do you see many vendors being forced into change?


AMD is finally changing. Unfortunately, the amdgpu-pro limbo right now is not ideal... but I am very excited for the future.

AMDGPU is already very usable. I remember ~5-10 years of waiting to use my Radeon 9600 with a recent kernel until the free Radeon driver was finally up to snuff. Catalyst and fglrx were awful.

Proprietary drivers just don't make sense. They are practically unmaintainable.


For GPUs, AMD is changing and even nVidia is using Nouveau for some chipsets.

For non-GPUs, only some WiFi drivers are out-of-tree (Realtek and Broadcom).


Typically only big vendors can afford not to change


You make open source drivers sound like a bad thing.


No, but pushing that agenda via architectural level changes is not exactly a way to get a good architecture.


An architecture is a structural design made to achieve certain goals. They just happen to have another goal to achieve.


It's even bad by that standard. When you compromise the function of the system to achieve that kind of goal, you end up losing out on both.


Offering no alternative but "open source" on Linux is certainly not the most business-friendly way to go about it.


It certainly is. "Business" just needs to internalize the fact that baking a bigger cake gets you more cake than fighting like a starved peasant over crumbs.


Why would I or anyone care if it's business-friendly? The people who just want to sell hardware should have no qualms about open source.


Being business-friendly is not a goal, and shouldn't be a goal.


> Being business-friendly is not a goal

Sure, it is, for lots of people. Even, apparently, the FSF, hence the reason non-consumer products are not subject to anti-tivoization rules in GPLv3.


I wonder why the BSDs haven't caught on for embedded appliances and phones while Linux did with a more restrictive and less business friendly license. Sure some companies like iXSystems and Juniper adopted BSDs, but the vast majority used Linux.


The manpower behind Linux is superior. Not sure if it's the "vast majority", but for many business cases following the GPL(v2) rules simply is not a problem.

Both the PS4 and the Nintendo Switch run on FreeBSD, by the way.


>Both the PS4 and the Nintendo Switch run on FreeBSD, by the way.

Not exactly, from what I've read: the PS4 runs 'Orbis OS', which is based upon FreeBSD 9, while the Nintendo Switch uses the network stack from FreeBSD, but also stuff from Android and the custom OS they wrote for the 3DS.


Linux was available under a free license earlier (and alone) and as such developed a huge lead in community that no free OS has caught up with; that has quality, choice, and skill availability impacts that generally dwarf licensing issues for businesses.


Being business-friendly is not and should not be anyone's goal in licensing software. Being business-friendly is shit.


In my opinion, any particular license can only be friendly or unfriendly towards particular business models, but not towards business in general.

Some businesses feel threatened by some open source licenses and other businesses are using the same open source licenses to do the threatening.

The funding for many important open source projects comes from global corporations that use it as part of their strategy, sometimes dominating entire industries based on open source.

A large number of small consulting businesses are based entirely on open source software as well.

Open source is not meant to be anti-commercial, nor would that make any sense, because "commercial" is what most of us do to make a living. That is not what takes away anyone's freedoms.

What does take away many freedoms is the fact that none of the widely used open source licenses fulfill their original purpose.

Linking rights to distribution once meant that users of the software got those rights. Now, in the age of the data center, end users get no rights at all and software can be modified freely by those who run the data centers without granting anyone access to those modifications.

Software has become more opaque than it has ever been since the dawn of the PC age. Even without access to source code we had more control over Microsoft Excel than we now have over anything that runs in a data center, regardless of whether it is nominally "open source" or not.


As to your last point, this is what the AGPL is intended to solve.


I know, but almost no one uses it.


"Being business-friendly is shit". --> Who pays your bills mate? Fairies or a business who employs you, pays you a salary and you know, the inherent Darwinist biological urge to move up the food (value) chain to survive built into all living beings?


The "GPL mafia", how dare they demand their software's license be respected!?


  > They aren't writing a new OS,
  > at least, not in the complete sense.
Neither was Linux. It was based on Unix.

Neither was Unix. "The success of UNIX lies not so much in new inventions but rather in the full exploitation of a carefully selected set of fertile ideas . . . " --- Dennis M. Ritchie and Ken Thompson, "The UNIX Time-Sharing System"


Maybe Google got tired of "System D" trying to take too much control ...LOL!

OR it could be a UEFI experiment...

"This project contains some experiments in software that runs on UEFI firmware for the purpose of exploring UEFI development and bootloader development." -https://github.com/fuchsia-mirror/magenta/blob/master/bootlo...


Oh they're using Dart. That's really cool. Dart is surprisingly pleasant to work with.


Yes I'm quite excited as well.

Building userland OS's is a fascinating (if not new) idea that I really love to tinker with myself. It really would be nice if they can pull it off.

I read through what I could find of the IPC code a while back. Currently it seems workable but a bit "baroque"; then again, if, as you say, it is working inside Chrome, I guess it is battle-tested.

Messaging has always been, and still is, the future for very loosely coupled systems, and driving it all the way down would be really great.

Been following this for a while and can't wait to give it a whirl. Plus everyone likes to root for shiny new OS design projects.


Curious about the advantages besides being secure. I normally have garbage computers, so I like to run lean on stuff. Not to the point of using Arch, but Ubuntu with i3. Also, will "regular" programs still run? As a developer I need VS Code, FileZilla, a file manager, Kate, and a LAMP stack installed.

Are there concerns about non-dedicated graphics cards, or running on, say, an ARM-based processor?

Also I'm not sure what you mean by '...easy to compile...' How do you compile this? I'll have to read up on it. Get that checksum bruh.

I remember trying to use Slackware and for me that was a lot of work.

Edit: Oh I see at this time it is limited to three physical machines. Interesting on the Pi 3 part. I wonder if I could try it in Virtual box. But why try it in the first place?


Not often someone says lean and VS Code in the same statement.


Egacs. Eight Gigabytes And Constantly Swapping.

Moore's Law. Give it another decade and it'll be Etacs.


It’s not very fast, but it’s surprisingly easy on the resources, especially memory.

Compare using VS Code w/ Typescript Language Service to any type of dev stack that includes the word “Scala.”


> It’s not very fast, but it’s surprisingly easy on the resources, especially memory.

Open VS Code and then Sublime and be amazed.

Atom and VS Code can never be 'easy on the memory' because they have to load an entire Electron instance just to idle.


> Open VS Code and then Sublime and be amazed.

At what? Using 5% more total resources? Great tradeoff!


It costs MONEY!!! Noooo.... haha


Scala with Emacs + Ensime


Do you have any experience using it with scala.js?


It's no joe, but it sure as hell beats NetBeans.


If you tell me your beef with Netbeans I might be able to help.

I have significant experience with all the big Java IDEs, 3 months full time with Visual Studio (in addition to testing it on and off for years). Use Sublime regularly. Can use vim productively for server config. Have used emacs enough to get a taste of it.

But lately I find myself getting back to NetBeans whenever possible, not because of price but because of features, sensible defaults and stability.


It runs like a slug and sits atop a mountain of RAM that it hoards like a jealous dragon.

OTOH, my hobby programming is done on an HP Mini 101.


Ok I guess even I would possibly go with something else if I was seriously resource constrained.


Yeah I used to use Atom. I know Vim is what I should use for "lean" but it's not easy to use. I did see some pretty themes 'space wrap' or something like that.


> I know Vim is what I should use for "lean" but it's not easy to use.

It's easy to use, but it's not easy to learn.


The ol' learning curve


Vim is well known for its bloated, messy codebase. I'd hardly call it "lean".


It's a messy C codebase, which is orders of magnitude leaner than a clean electron codebase.


I wasn't aware; I just figured that since it's command line, it must be "lean", right? No GUI. I don't know.


There are plenty of "mid-range" editors and IDEs: mousepad, gedit, geany, kate..


I recently tried Geany as the chromebook I bought is ARM-based so VS code wouldn't work on it. It's alright.


I always recommend Kate to people who want a programmer's editor but don't want to deal with the esotericness of Vim.

(I personally alternate between both Kate and Vim, depending on whether I'm in more of a GUI or a CLI mood)


s/Vim/EMACS/ and you're correct. Unless you want to learn Vi, Vim is not what you want; rather EMACS is what you want, and I will agree wholeheartedly that it is "not easy to use."


"The little kernel" appears to be a popcorn brand FWIW.


The modular approach seems interesting. Call by meaning?


Capability-based operating systems must be the future. If they are not, then we are all doomed to continue to exist in a messy world where security problems crop up every minute. Capability-based access controls are one of the best options for getting out of our current mess, but they're also the type of thing that must be implemented very low in the system in order to work.

Hopefully, when we start ripping out *nix before 2038, Fuchsia and other capability-based OS's can take over.


> If they are not, then we are all doomed to continue to exist in a messy world where security problems crop up every minute.

Side-tangent, but security enthusiasts need to calm down on the "world is ending" talk. Those of us who lived through Windows ME, where logging on IRC basically gave you a 25% chance of having your computer hijacked by some random script kiddie, think it's laughable to say that security is anywhere near as bad as it used to be. Night and day.


> Those of us who lived through Windows ME, where logging on IRC basically gave you a 25% chance of having your computer hijacked by some random script kiddie, think it's laughable to say that security is anywhere near as bad as it used to be. Night and day.

Those were people just messing around. Annoying and harmful to your local machine certainly, but the harm that was caused was localized.

Security vulnerabilities are now industrialized, with large-scale public attacks and widespread silent compromises used to gather and sell your information for profit. The situation is still bad, it's just bad in a different way.


Individually, computers today are more secure than they've ever been.

But there are zillions more computers, doing more things, with more interconnections than ever before. So the problems are compounded and I think the situation is worse overall.


> Individually, computers today are more secure than they've ever been.

Unfortunately, that only holds if you take the view that IoT appliance != computer.

Because let's be honest - those things are dangerous to general network hygiene and wellbeing.


True, IoT devices are less secure than their internetless predecessors. But they're more secure than Windows 95 computers.


But there are significantly more devices out there these days, that we don't traditionally think of as "computers". Cable boxes/DVRs, webcams, vacuum cleaners, mobile phones, routers, hell a directory traversal bug was found in a dishwasher.

Many of these devices aren't patched after they're sold, and can be compromised within minutes of being assigned a publicly routable IP - this is what happened with Mirai.

Security is better for desktop computers, but if you include everything with an operating system, it's much, much worse.


But it still feels like it. Security news should be “someone managed to breach one of the 20 layers that protect you,” not “someone got root, again.”

At the very least you should be able to stay secure if you know what you are doing. But even that’s not enough. You also need to be lucky && uninteresting && not actually using computers.


Security practices may be better now, but applications are a lot more complicated (meaning much more attack surface area) and there's a lot more at stake now.


For a while I had a Linux machine running just to use bitchX; in '96 IIRC. :D


And they've been such since the 70s: http://homes.cs.washington.edu/~levy/capabook/

(This book is well worth reading...)


The future? The System/38 (aka AS/400 aka iSeries aka System i) had capabilities from the beginning, almost 40 years ago.


I sometimes wonder if, in addition to an Ethics class, Computer Science students need to take a Computer Archaeology class. It might not be a bad thing to bring up a lot of the concepts that have been tried but aren't in the mainstream anymore.


Surely, then younger generations would learn how Algol and PL/I were used to write OSes, how Xerox PARC and ETHZ managed to write OSes without a single line of C code, and that UNIX wasn't the genesis of operating systems.


I definitely support that. Many tips I give to people for their projects came straight out of 1960's-1980's CompSci or industrial work. It has been a ridiculous amount of effort finding all of it, though. So many silos. It needs to be integrated on a per-topic basis, cleaned up, given an executive summary, and have references available for follow-up. Preferably with FOSS tools if it's an analytical method, language, etc.

Then, people might quit reinventing the wheel or missing obvious things as much as they do now.


We certainly should. We've been through a lot of ideas that were not practical at the time that could be practical again in the future.


The Future of Programming would be a good introduction: https://vimeo.com/71278954

It's both inspiring and depressing.


It's not a new idea, but that doesn't mean it isn't perhaps the future of somewhere it hasn't been before.


Looks like it's written in C++. Regardless of how "clean" the code is right now, it's only a matter of time before it becomes roughly as bad as any other OS written in C++ in terms of vulnerabilities found in it per million lines of code.


Android and ChromeOS also have several components written in C++, yet the majority of userspace is only available to Java or Web applications.

On modern OSes that aren't plain UNIX clones, C and C++ are being pushed down the stack just to provide some hardware abstraction layer, with everything else being done in safer, more productive languages.


Capabilities? FreeBSD, and other Unix flavors have had caps for decades.

No one uses them. Security is useless if no one uses it. Capabilities are too hard, too complex, to manage.

Great idea. Horrible implementation.


Unix 'capabilities' (before Capsicum, iirc) were a different thing with the same name. Confusing. Unix actually does have a different construct that is like a classical capability: an open file descriptor that you can send to another process over a Unix domain socket.


FreeBSD got Capsicum only in 10.x, and many projects have started using it.


One interesting thing built on top of that is CloudABI:

https://nuxi.nl/ https://www.bsdcan.org/2015/schedule/attachments/330_2015-06...


I'm curious; do you have any details on exactly why the existing implementations are too complicated to be usable, and on whether this looks likely to be just some specific design flaws or inherent in the concept?


Not at the kernel level, is it?


I've been waiting for this to be released. I suppose everyone has been.

Capabilities. Like fine grain locks, these are very powerful and very hard to get right. That's the lesson from Hydra, the 432, .... No, it's not a hard mechanism for the microkernel to get right; it's a hard policy for the application programmer to get right. However, that's probably more of an opportunity rather than meant as a criticism. Our tools are massively more evolved than they were in the 70s. It'll be interesting to see what happens with capabilities.

C++. Oh lord. Why are you writing a microkernel in C++? If there was anything they learned from L4 (Xen, Linux, ...) it is that C is sufficient. Why do you want to implement something small with something that is large? This one is a real head scratcher.

Provably secure. They did this with the seL4 microkernel, so this is a doable thing. If they're going to hang their hat on security (capabilities, microkernel, ...) there's no excuse for not having done this already and delivered a provably secure microkernel out of the box.


To be fair, this is not the entirety of C++ but C++ following the Google C++ Style Guide, which limits use to a more manageable subset of the language. For instance, exceptions, which can be especially hairy in a language with manual memory management, are prohibited. Take a look at the Fuchsia codebase. The code is really quite clean and readable.


Random muse, but God damn I love Google's code quality.

Compared to the midnight bamboo forest my company built. A challenging codebase, not by its merits but because you need to be clever just to find your way through.

Anyone know if Google is hiring webforms developers?


  Why do you want to implement something small with
  something that is large?
Consider the size and amount of equipment involved in making watches. Or microchips, for that matter.


Do you carry that watch-making equipment on your wrist along with your watch?


Of course not. I only need them in my workshop/foundry/dev machine.


>C++. Oh lord. Why are you writing a microkernel in C++? If there was anything they learned from L4 (Xen, Linux, ...) it is that C is sufficient. Why do you want to implement something small with something that is large? This one is a real head scratcher.

Because if there was anything proven from years and years of using C, it's that it is woefully insecure and should not be trusted to write a kernel with.

You can still fuck up with C++, sure, but std::shared_ptr offers more guarantees than *.

EDIT: HN stop eating my stars
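To make the claim concrete, a minimal sketch (plain standard C++, nothing Fuchsia-specific) of the guarantee being referred to: a shared_ptr encodes ownership and lifetime in the type, while a raw pointer says nothing about either.

  #include <memory>
  #include <vector>

  struct Buffer { std::vector<char> data; };

  // A raw pointer: nothing in the type says who frees it or when, so
  // use-after-free and double-free are one refactor away.
  void legacy_use(Buffer* b);

  // A shared_ptr: the buffer stays alive as long as any holder exists and
  // is freed exactly once when the last reference goes away.
  void modern_use(std::shared_ptr<Buffer> b) {
      b->data.push_back('x');           // guaranteed valid for this call
  }

  int main() {
      auto buf = std::make_shared<Buffer>();
      modern_use(buf);                  // reference count: 1 -> 2 -> 1
      return 0;                         // count hits 0 here, freed exactly once
  }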


Yes, we know that C is not secure. We also know that C++ is not secure, and moreover it definitely isn't C, now with security. It's just bigger C. So, to use your phrase, you can still fuck up with C++. You can just do it in more clever and insidious ways.

seL4, a third-generation microkernel is 8,700 lines of C code and 600 lines of assembler.

http://web1.cs.columbia.edu/~junfeng/09fa-e6998/papers/sel4....

So we also empirically know that you can formally verify a microkernel written in C. Now if the Fuchsia folks had gone ahead and formally verified their microkernel, this issue would be moot. But they haven't. So we are instead left with "std::shared_ptr offers more guarantees than *". While this isn't nothing, it isn't much either.


I wouldn't use seL4 as an argument. It is completely unreasonable to build a production kernel the way they did - it took them ages to get what they had. Google is probably looking to develop features / move code a lot faster than is possible using their technique.

Good coding practices take no time to implement; you learn them once and you write that way. They are not bulletproof, but neither are capabilities. Still, at least they're practical.


I think you miss the point of a microkernel! The point is to keep all your "features" outside of it, and to use it only to implement the core set of functionality necessary to have secure shared access to the hardware. The general L4 concept has proven industry use, so there's no reason you can't take advantage of the already accomplished work on seL4 to bootstrap your own secure OS.

It's true that seL4 is not a magic safety or security bullet; in particular, their FAQ says that they haven't completed extending their proofs to DMA safety via VT-d or SystemMMU on ARM, so DMA-accessing services would have to be separately verified. And its particular feature set may not be appropriate for all situations. But if its API does work for you, it would be foolish not to at least consider using it.

It is really unreasonable to dismiss seL4 simply because it took them a lot of effort to create it. That effort is now done and can be re-used and magnified by further effort.


I didn't realize you were recommending to use sel4, I assumed you meant they should have proven theirs using similar methods.


How about Muen separation kernel in SPARK Ada with automated prover eliminating many classes of undefined behavior? Or C with tools like Frama-C and Astree Analyzer (like SPARK for C)? C++ can't reach the safety of its competition due to more complexity and less tooling.


To be fair, if Frama-C is an option, then as someone on the C++ side of the fence (C vs C++) I would advocate that High Integrity C++ is also an option. :)

http://www.ldra.com/en/software-quality-test-tools/group/by-...


I thought about it. The reason I left it off is a lack of static analysis, automatic generation of tests, certified compilers, etc. There's tons of FOSS and commercial vendors for doing such things in C with quite a bit for Ada. Whereas, I could find only one or two products for C++ that seemed like it would be really helpful as opposed to just kind of. So, I pivoted my recommendations to go with languages that have huge, tooling ecosystems in academic R&D and commercial. You can certainly use it but might get less value in long term.

Embedded, real-time Java could also be on the list since Java ecosystem has a verification tool for about everything. CompSci loves Java for some reason (probably mandated classes). Yet, verifiable C and SPARK are closer to the metal than embedded Java plus no Oracle risk. So, left off Java as well.


I guess so. I just know it from reading about it, so it might be that when actually using it, it won't be as suitable as I think it could be.

On the Java side you can probably check what Aonix used to do, they are now part of PTC.

http://www.ptc.com/developer-tools/perc

Also there are the guys from MicroEJ.

http://www.microej.com/products/device-software-development/...

There is no Oracle risk with Java, when companies play by the rules.

PTC, Aicas, IBM, MicroEJ, Excelcior JET, HP, Cisco, Ricoh, Azul, Red Hat and so many other companies are selling JDKs with their own set of features, without having had any issue with Sun or Oracle.

Only Google has an issue with Oracle, because they decided they were the cool kid, better than everyone else, to whom the rules don't apply.


I'm only commenting on seL4's approach not being very viable. I'm not advocating for C++ in the kernel either.


Fair enough.


"folks it's ok, we're using shared_ptr's. We're safe. Problem solved."


Shared_ptr is slow. It has an atomic variable for the reference counter, since it is supposed to be thread-safe...


Are there enough clues to figure out if this is intended to eventually displace Android, ChromeOS, the Ubuntu distribution most googlers use on their company machines, or the Linux they run their data centers?

As far as I know, Google hasn't telegraphed the purpose of it. I can't tell if it's targeted at all, none, or some of the above.


More likely this exists simply because Google could afford it. If something comes out of it, fine; if not, no problem. When money is no object, why not let some engineers play with experimental tech.



Thanks, but I would sooner recommend my third article: https://techspecs.blog/blog/2017/3/15/fuchsias-hypervisor.


The link in the bottom of your first article was very helpful, and seems to confirm ChromeOS and Android would be the likely targets.

https://fuchsia.googlesource.com/magenta/+show/master/docs/m...


Thanks. Yeah, they really could not be more transparent about that. Most of the bring-up is on mobile + PC silicon as well.

There's no need to kill Chrome OS for the education market, mind.


Looked at it again, and it could be an old comment about magenta, and maybe not applicable to fuchsia?


Well, the OS is most strictly the kernel, right? It's honestly all together. It's an SMP system that you can already run on normal ARM and x86 devices.

The team has stated that everything in the kernel is true, and that there's quite a lot there to infer from. What they can't say publicly is the how and why, e.g. how would they hypothetically implement backwards compatibility with Android apps?


I'd just like to interject for a moment. What you’re referring to as Fuchsia, is in fact, GNU/Fuchsia, or as I’ve recently taken to calling it, GNU plus Fuchsia. Fuchsia is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.

Many computer users run a modified version of the GNU system every day, without realizing it. Through a peculiar turn of events, the version of GNU which is widely used today is often called “Fuchsia”, and many of its users are not aware that it is basically the GNU system, developed by the GNU Project. There really is a Fuchsia, and these people are using it, but it is just a part of the system they use.


It's built with Clang and has a mostly BSD userspace.


If I had to guess, this looks to be a strong candidate for the embedded OS market. There are still lots of folks running VxWorks, QNX, ThreadX, Mentor Graphics' Nucleus, Green Hills' Integrity.

In fact, it looks a lot like the same general design as Integrity, a microkernel capability-based architecture with as much as possible in user space.


Its modern rendering pipeline would suggest it has uses for far more than the embedded OS market.


I'm not so sure. Perhaps "embedded" is the wrong term, but more and more devices have full touchscreens on them, and if you are selling a high end device like a refrigerator, having the UI be fluid and feel high end is important as well. Not to mention TVs, car UIs, security system pads, and more.


Its rendering pipeline uses Vulkan which requires a very capable GPU. I'm sure you could put an expensive SoC into a refrigerator or TV, but it's going to cost you.


Can't wait to play games on my fridge!


The one thing that stood out to me is the name of former Be employee Travis Geiselbrecht. If you are unaware, he created NewOS, which was forked years ago and became the kernel for Haiku.

I didn't realize that he was working for Google.


I don't understand what the value is in writing a new original microkernel from scratch in this day and age when seL4* is free and open source, has been performance-tuned for 20-ish years, is security hardened, and is provably correct for both security and features.

This doesn't seem like a wise path to take.

* See http://sel4.systems/


I know nothing about the differences between Fuchsia and seL4, but it would be rather strange if something as complicated as an OS kernel didn't have many parts where you have to make trade-offs. People compete fiercely in the field of todo applications; I don't see why there shouldn't be competition in the microkernel space.

Things changed in hardware in last 20 years and maybe we've learned something about software as well.

Not to mention that seL4 is GPLv2, which is kind of problematic for a commercial company.

In the end, the micro-kernel is a very small part of the whole stack.


I think the author meant L4 kernels were performance-tuned for 20 years to be the fastest. Then, one applying such lessons was mathematically verified for correctness down to assembly. It's proprietary or open source depending on what your end product will be. They'd probably even add features to it for a Google-specific version that leveraged as many proven components as possible.


And most of us have a mobile phone running OKL4 on the radio CPU.


seL4 is still working on multithreading. It is probably not good enough for a user-interfacing OS yet.


That's true in its current state. Remember, though, that they're working on multithreading in a way that ties into their proof from high-level spec down to the code down to the assembly. Google's people are just putting together a kernel with review & testing. They could take seL4, modify it for concurrency, check it with concurrency-related tooling, and still get quite a bit of assurance from the original work. The proof no longer applies, but most things it applied to haven't changed. And model checkers for concurrency are among the easiest tools to use in formal verification, with TLA+ getting adoption even by mainstream companies.


No kidding. All kinds of robust operating system design went into seL4. It's simply crazy not to make use of it.

I can see potential problems already from looking at the Magenta kernel docs. For instance, they say that all syscalls are non-blocking except for explicit wait and sleep calls, which means the kernel is very likely vulnerable to various DoS attacks, which aren't a problem for L4 kernels.


The micro-kernel used by Fuchsia is not built from scratch. It's based on a heavily modified version of the LK kernel.


You are asking why NIH is a problem for a project that seems to be at least partly driven by NIH?


That whole idea of the 'capability' system, especially its implementation with 'handles', sounds exactly like Win32 to me. I haven't looked at the source; can anyone confirm or explain what exactly is different?


Roughly put: in a capability based system, if you have a valid handle for a service, then you can use that service. But the only way you can get a valid handle is to ask your parent process for one --- handles are unforgeable. So your parent gets to check that you're legitimate.

...but your parent, in turn, has limited permissions, because the only way it can get a handle is to ask its parent. And when you ask your parent for a filesystem handle, your parent doesn't have to give you the handle it has. Your parent might give you a limited filesystem handle which only allows writes to /tmp.

...and when you, in turn, start a subprocess --- say, an instance of sqlite for use as a database --- you have to give it handles for the things it wants to do on your behalf. But your filesystem handle only allows writes to /tmp. That means that your sqlite instance can only write to /tmp too.

There's more to it than that, because in real life you also want to forward capabilities to other running services as part of RPCs, but you tend to end up with a natural container system with multiple redundant layers of isolation, all running at minimum necessary privilege, and all hardware-mediated.
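A toy sketch of the attenuation pattern described above, in ordinary C++ rather than any real kernel API (all the class names are invented): the parent wraps its filesystem capability so that the child, and anything the child spawns, can only write under /tmp.

  #include <memory>
  #include <stdexcept>
  #include <string>

  // The "capability": holding a Filesystem reference is the only way to write.
  struct Filesystem {
      virtual void write(const std::string& path, const std::string& data) = 0;
      virtual ~Filesystem() = default;
  };

  struct RealFilesystem : Filesystem {
      void write(const std::string&, const std::string&) override {
          // actually touch the disk here
      }
  };

  // Attenuation: wrap an existing capability and narrow what it allows.
  struct TmpOnly : Filesystem {
      explicit TmpOnly(std::shared_ptr<Filesystem> inner) : inner_(std::move(inner)) {}
      void write(const std::string& path, const std::string& data) override {
          if (path.rfind("/tmp/", 0) != 0)
              throw std::runtime_error("handle does not permit this path");
          inner_->write(path, data);
      }
      std::shared_ptr<Filesystem> inner_;
  };

  // The child (say, an sqlite instance) never sees the full filesystem,
  // only whatever handle its parent chose to pass down.
  void child_process(const std::shared_ptr<Filesystem>& fs) {
      fs->write("/tmp/db.sqlite", "...");     // allowed by the handle
      // fs->write("/etc/passwd", "...");     // would throw: not in the handle
  }

  int main() {
      auto root = std::make_shared<RealFilesystem>();
      child_process(std::make_shared<TmpOnly>(root));   // parent attenuates first
  }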

Another really interesting capability-based OS is Genode:

https://genode.org/about/screenshots

https://genode.org/documentation/general-overview/index

It'll allow you to run complete OS kernels as capability-restricted Genode processes, effectively allowing virtual machines as first-class citizens.


That's actually really beautifully simple.

Thanks for this explanation, it really helped the idea "click"


And for another beautiful convergence: nowadays most software development is done in languages that naturally express capability patterns, namely memory-safe languages. That is, if you have a reference to an object or value, you have the authority to invoke any of the methods on that object or call any functions that accept such a value. So object references are capabilities.

Most such languages then go a step too far by allowing "ambient authority", whereby any component in your program has the ability to turn a reference to a value that carries little authority into one that carries significantly more authority. For instance, consider the file open API: you're essentially turning a string, which exposes no unsafe or security-critical methods, into an object that can literally destroy your computer. And literally any library or component you use can open files. It's madness!

To make a memory-safe language capability-secure, you simply remove all sources of ambient authority. This mainly consists of globals that permit transitive access to mutable state, like file system APIs, global static variables that can reference transitively mutable data, etc.

These insecure APIs are then replaced with equivalent object instances which you can pass around, eg. you can only receive the authority to open a file if you're given an instance to the directory object which contains that file. The entry point of your program would then change to accept various capabilities instead of just string parameters.


And as an even deeper correspondence: Once your language is memory-safe and capability-secure, you don't even need a kernel, let alone a microkernel. Rather than use hardware address spaces to separate and modularize components (the primary idea behind microkernels), you just pass capabilities around to your components. One component can't crash the whole system, because all it can access is the APIs you pass in to it. If you want to add isolated processes to your OS, just implement actors in your language: They're equivalent.

Of course, you can always have a bug in your APIs that allows an attack or bug to propagate. But that was always the case even without capability-safety. Capabilities can't give you everything for free. :)


JX Operating System does something like this within a JVM. It runs on a microkernel but I think the mediation is app/language level.


That was the approach taken by Mesa/Cedar, Modula-2+ Topaz, Modula-3 SPIN OS, Oberon Native Oberon, ....


> To make a memory safe language capability secure, you simply remove all sources of ambient authority.

Isn't this more or less what Sun tried to do with Java applets, which in practice turned out not to be so simple while providing a rich API?


> Isn't this more or less what Sun tried to do with Java applets, which in practice turned out not to be so simple while providing a rich API?

I'm not familiar with the Java applet model specifically, but Java's general security model is based on stack inspection, which is nothing like capabilities.


http://www.cs.cornell.edu/home/chichao/sip99.ps

http://www4.cs.fau.de/Projects/JX/

Note: JX takes a different route but shows POLA can be done simply and performant.


> That is, if you have a reference to an object or value, you have the authority to invoke any of the methods on that object or call any functions that accept such a value.

Except private methods, right? That's where I feel that this analogy breaks down


Yes, you can only invoke the public interface of an object. It's not an analogy; object references really are capabilities.


It's a simple concept, but flexible and performant implementations of it can be hard. It usually takes geniuses to come up with one that works well in concept and practice. It's why Butler Lampson always thought they were a waste of time. I don't. So, here are some examples of successful application. :)

http://www.cl.cam.ac.uk/research/security/ctsrd/cheri.html

https://en.wikipedia.org/wiki/EROS_(microkernel)

Note: Definitely see KeyKOS as it was commercially deployed with both POLA and persistence of app data.

https://www.combex.com/tech/darpaBrowser.html

http://www.cs.washington.edu/homes/levy/capabook/index.html

Note: Especially see System/38 which became AS/400 & IBM i. Still selling. And they run and run and run. Architecture & POLA go a long way there.


So, going back to Win32, it's as if OpenFile also took a HANDLE that represented your abilities (or capabilities, if you will) within the security model, with the explicit ability to forward these handles (or new handles that represent a subset of the original's capabilities) to other processes if you choose.


In Win32, this handle is called the AccessToken https://msdn.microsoft.com/en-us/library/windows/desktop/aa3... and the calling thread's current access token is used by OpenFile to grant or deny the requested access.


I'm no Windows expert, so I didn't know that windows handles were used as security primitives.

I thought they were just a bit like file-descriptors or X11 window ids, or indeed pointers. Such handles do have a kind of role in authorization: once a process has convinced the system to give it some resource, then the (handle, processid) pair is all the system needs to check access.

However you typically gain the handle through something like `open()`, i.e. an ACLed request for a named resource. But with true capabilities you just inherit authorisation from some other capability -- possibly one granted to you by a different process.

That said, the difference from existing systems might be small. Namespaces are really useful, and are probably here to stay. But as long as things can be accessed by names, the access will need to be controlled by something like an ACL.


> Namespaces are really useful, and are probably here to stay. But as long as things can be accessed by names, the access will need to be controlled by something like an ACL.

Not necessarily. If your system supports first-class namespaces, then you can just build a namespace consisting of only the objects to which a program should have access. No need for any further access control.

A file open dialog from a program is simply the program's request for you to map a file into its namespace.


>Not necessarily. If your system supports first-class namespaces, then you can just build a namespace consisting of only the objects to which a program should have access. No need for any further access control.

Unfortunately this is usually quite heavyweight. In Unix-like systems, and in Plan 9, the canonical way to do this is to implement a filesystem. "Filesystem" is the universal IPC layer. But implementing a filesystem usually takes quite a lot of effort.

Do you know of any systems where this can be done easily enough for it to be used ubiquitously?

One would much prefer to just put together a hierarchy of objects at the language level and automatically expose it... or something like that anyway.


> "Filesystem" is the universal IPC layer. But implementing a filesystem usually takes quite a lot of effort.

It needn't be. A file system is just a set of nested hashtables. The complexity of traditional file systems comes from the durable representation, but that isn't necessarily needed for first-class namespaces. You can just serialize and deserialize a namespace as needed on top of an ordinary file system.

> Do you know of any systems where this can be done easily enough for it to be used ubiquitously?

Plan 9 obviously. The Plash capability secure shell [1]. There are probably a couple of others, but not too many overall.

[1] http://www.cs.jhu.edu/~seaborn/plash/plash-orig.html
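A tiny sketch of the "nested hashtables" framing in C++ (types invented for illustration): a namespace is just a map from names to either leaf objects or further maps, so granting access is nothing more than handing a process the sub-map it should see.

  #include <iostream>
  #include <map>
  #include <memory>
  #include <string>
  #include <variant>

  struct File { std::string contents; };
  struct Namespace;  // a directory is just a map from names to entries
  using Entry = std::variant<File, std::shared_ptr<Namespace>>;
  struct Namespace { std::map<std::string, Entry> entries; };

  int main() {
      // Build a small tree: / -> { etc -> {...}, tmp -> { scratch } }
      auto root = std::make_shared<Namespace>();
      auto tmp  = std::make_shared<Namespace>();
      tmp->entries["scratch"] = File{"hello"};
      root->entries["tmp"] = tmp;
      root->entries["etc"] = std::make_shared<Namespace>();

      // "Granting access" is just handing a process the sub-map it should see;
      // there is no global name it could use to escape back up to root.
      std::shared_ptr<Namespace> sandbox_view = tmp;
      std::cout << std::get<File>(sandbox_view->entries["scratch"]).contents << "\n";
  }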


I explicitly mentioned Plan 9. Plan 9 doesn't do anything to make it fundamentally easier to write a filesystem, it just takes the (wise) approach of standardizing a simple filesystem protocol. (Which is a big help practically of course.) Nevertheless, actually implementing that protocol is still difficult. While it would be nice to represent a filesystem as a nested hashtable, 9P doesn't give you that for free: you need to keep track of fids you hand out.

Plash uses capabilities aggressively and only gives subprocesses access to what they need to access. But it doesn't make it any easier to write new applications which construct namespaces and proxy access.


Hmm, that sounds a bit awkward -- every capability on the system would, at least conceptually, need a separate copy of the pruned name tree. There might be good ways of implementing that, but it's not obvious.

On the other hand, I can see how capabilities would mix with ACLed namespaces to improve security. Essentially it would define the semantics of things like cgroups and chroot jails.

In fact, sometimes I think that mainstream virtualisation is slowly reinventing -- in an incremental way -- what various alternative OSes advocated in a radical way. E.g. Linux cgroups and "one process per container" sound quite a lot like Plan 9's per-process view of the system.


> Hmm, that sounds a bit awkward -- every capability on the system would, at least conceptually need a separate copy of the pruned name-tree.

First-class namespaces are an implementation of capabilities. You wouldn't necessarily need capabilities on top of that. You'd pass around namespaces or perform namespace mappings to grant or revoke authorities.

> On the other hand, I can see how capabilities would mix with ACLed namespaces to improve security.

You can use ACLs to implement capabilities [1], which the capability folks did with Windows XP as a proof of concept with a modern OS [2] (I don't know why Microsoft didn't just adopt Polaris, to be frank). But in general you shouldn't use ACLs to try to augment capabilities. They restrict the class of expressible security policies without actually adding security (and in fact, they introduce insecurity).

[1] http://www.webstart.com/jed/papers/Managing-Domains/

[2] http://www.hpl.hp.com/techreports/2004/HPL-2004-221.html



Lots of interest in Microkernels since everyone got tired of kernel vulnerabilities. So who won the Tanenbaum–Torvalds debate? It is too soon to say (Zhou Enlai said that of the French revolution - almost 200 years after the fact)

https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_deb...

The article says that the focus is on 'PCs, tablets, and high-end phones'. Wouldn't a more secure OS be relevant to server environment? Is the performance cost of a microkernel considered to be too high for a server OS? Is it too difficult to do?


It was the only thing that worked without lots of physical redundancy in high reliability. It's the only thing that worked against good pentesters in high security. It's widely deployed in embedded. The monolithic OS's started copying some of its traits for their benefits but kept things in kernel mode for performance. Why? They're running on CPUs optimized for monolithic rather than microkernel designs.

So, I'd say the evidence leans in favor of microkernels being better. The cutting edge isn't that, though. It's hardware/software combinations that give more reliability and security with better usability & performance than microkernels. There's lots of work in languages, compilers, and CPU extensions. CHERI and CheriBSD are probably the top example, with SPIN OS, JX OS, or Redox OS on something like a Watchdog-Lite CPU being representative of the language-oriented work.


Given the amount of shipped embedded systems and the hybrid designs from OS X and Windows, I would say Tanenbaum won.

The majority of embedded OSes have a microkernel design.

Also, Windows and OS X have a kind of hybrid design, even if not a proper microkernel.

In Windows' case, there are now a sandboxed kernel and drivers.

https://channel9.msdn.com/events/Ignite/2016/BRK4010

https://channel9.msdn.com/Blogs/windowsserver/Device-Guard-i...


Even with new features like Device Guard, Windows is anything but a microkernel; parts of the GUI's high-level primitives (like fonts) are in WIN32K.SYS. Even OS X moved a lot of drivers into kernel space; it's not a pure microkernel design like the Hurd.

If anything, the closest thing to a microkernel that is in wide use is Xen, or Hyper-V.


If you mean wide use on a desktop, sure. QNX is definitely microkernel-based, though, and it's used widely in automotive head units and now Blackberry devices. Way nicer to write device drivers for than Linux!


Which is why I said hybrid; I didn't say it is a microkernel.

In any case it is better than Linux will ever be.

Regarding Xen and Hyper-V, there is a systems paper that states hypervisors are the revenge of microkernels.


I am not sure what's left of the original microkernel design in the NT kernel. You could say Linux is hybrid too because of FUSE and VFIO.

But yes, I was thinking exactly of that paper when I mentioned hypervisors. It only applies to type 1 hypervisors though: not KVM, bhyve, or OpenBSD vmm. Even VMware ESX is more of a hybrid kernel.


Windows 10, thanks to UWP, pico processes taken from the Drawbridge project, and the MinWin rearchitecture, is probably closer to that model than the NT 4.0 descendants were.

Regarding hypervisors, actually I think only type 1 hypervisors make sense.

The type 2 ones were just a workaround due to lack of hardware support.


With Xen you have Linux running as dom0, not exactly a microkernel; what am I missing?


Having Linux as dom0 is just a matter of convenience, so the Xen project doesn't have to write everything themselves; they could eventually get rid of it.

The point of the paper is that a hypervisor behaves just like a microkernel, with guest systems running exactly the same way applications would on a microkernel OS.


> The default Fuchsia filesystem, called minfs, was also built from scratch. The device manager creates a root filesystem in-memory, providing a virtual filesystem (VFS) layer that other filesystems are mounted under. However, since the filesystems run as user-space servers, accessing them is done via a protocol to those servers. Every instance of a mounted filesystem has a server running behind the scenes, taking care of all data access to it. The user-space C libraries make the protocol transparent to user programs, which will just make calls to open, close, read, and write files.

Plan 9 is not dead; its ideas live on in other projects.
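
To make the quoted mechanism concrete, here is a toy, self-contained sketch of the idea in plain POSIX C, with a socketpair standing in for a kernel channel and an invented message format; Fuchsia's actual remote-I/O protocol is different. The point is that "open" is just a thin client-side wrapper that speaks a protocol to a server process holding the real filesystem logic.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/wait.h>

    /* Invented wire format -- purely illustrative. */
    struct fs_request { char op[8]; char path[64]; };
    struct fs_reply   { int status; };

    /* What a libc-style open() could look like: marshal, send, wait for reply. */
    static int my_open(int channel, const char *path) {
        struct fs_request req = { .op = "OPEN" };
        strncpy(req.path, path, sizeof(req.path) - 1);
        write(channel, &req, sizeof(req));      /* send request to the server */
        struct fs_reply rep = { -1 };
        read(channel, &rep, sizeof(rep));       /* block until the server answers */
        return rep.status;                      /* e.g. a handle, or an error */
    }

    int main(void) {
        int ch[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, ch);    /* stand-in for a channel */

        if (fork() == 0) {                          /* "filesystem server" process */
            struct fs_request req;
            read(ch[1], &req, sizeof(req));
            printf("server: %.8s %s\n", req.op, req.path);
            struct fs_reply rep = { .status = 3 }; /* pretend file handle */
            write(ch[1], &rep, sizeof(rep));
            _exit(0);
        }

        int fd = my_open(ch[0], "/data/hello.txt"); /* looks like a normal open() */
        printf("client: got handle %d\n", fd);
        wait(NULL);
        return 0;
    }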


You are confusing things that derive naturally from a microkernel architecture with Plan 9.


I don't understand your point.

I may be mistaken, but my understanding is that user-space servers accessible through fopen/fread/fwrite and the other standard file calls were an idea that originated in Plan 9.

Furthermore, some people who worked at Bell Labs at the time now work at Google.


I'm not sure Plan 9 was first with it. It was inspired by UNIX, which was a watered-down version of MULTICS; that was a microkernel-based system with generalized I/O via open/read/write:

http://multicians.org/rjf.html

Now, the question is when the same concept was applied to user-space servers. It was possibly Mach, which was UNIX-like with all processes communicating through ports that acted like standardized pipes.

https://en.wikipedia.org/wiki/Mach_(kernel)

You got user-space components to open them, read them, write them, and so on. That was 1985, whereas Plan 9 hit universities in 1992. Both models had security and performance issues that led reliability- and security-oriented OSes to deliberately go different routes. Plan 9 was a step up from UNIX but a step down from high-assurance systems (e.g., KeyKOS, NonStop) and high-flexibility systems (e.g., SPIN OS, LISP machines).


The memory mapping model is really interesting, since it moves a lot of that out of the kernel and into user-space, but it seems like it has more disadvantages than advantages. What am I missing?


Are you asking about VMOs or VMARs?

In general: it gives a process a huge amount of flexibility in terms of how it sets up its own address space, communicates with other processes, and in how it can communicate with the kernel.


Big advantage is better security.


And it can be a lot faster. Jumping in and out of the kernel is slow, shared memory is relatively fast.
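
For the curious, a rough sketch of the flow being discussed: create a chunk of memory as a kernel object (a VMO), map it into the process's address space via the root VMAR, and (not shown) hand the same VMO handle to another process so both share the pages with no kernel transition per access. This is written from memory with the later Zircon-style names and signatures (zx_*); at the time of this thread the kernel was called Magenta and used mx_* prefixes, and the exact options and flags may differ.

    /* Sketch only: Zircon-style names; actual signatures/flags may differ. */
    #include <stdio.h>
    #include <zircon/syscalls.h>
    #include <zircon/process.h>

    int main(void) {
        zx_handle_t vmo;
        if (zx_vmo_create(4096, 0, &vmo) != ZX_OK)   /* one page, no options */
            return 1;

        zx_vaddr_t addr;
        zx_status_t st = zx_vmar_map(zx_vmar_root_self(),
                                     ZX_VM_PERM_READ | ZX_VM_PERM_WRITE,
                                     0,              /* let the kernel pick the spot */
                                     vmo, 0, 4096, &addr);
        if (st != ZX_OK)
            return 1;

        /* From here on, plain loads and stores touch the mapped pages directly;
           no syscall is needed for each read or write. */
        ((volatile char *)addr)[0] = 42;
        printf("mapped VMO at %#lx\n", (unsigned long)addr);

        zx_vmar_unmap(zx_vmar_root_self(), addr, 4096);
        zx_handle_close(vmo);
        return 0;
    }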


I know that I am excited that they will be providing soft realtime threads. I've felt this is a necessary addition for consumer media operating systems for a long time.


> This is a new take on open-source development where it is out in the open, yet secret.

Hasn't this always been the way Android operates? Developed in secret, source thrown over the wall every release?


I think this is more, that things are being developed without an explanation of its purpose. There's no "here are the new features in Android O" blog post


In this case the source isn't just being thrown over the wall, though. It's being developed out in the open, just without any indication of what it's for.


I can see that a major point of using a microkernel is to decouple upgrades of the kernel from upgrades of other services (like drivers); it would also allow much easier integration of non-open-source drivers and services, which can be a pain with Linux since it doesn't provide a clean way to do this without rebasing (which could explain why Android upgrades lag so far behind for existing phones). But don't get me wrong, I totally support the Linux philosophy.


Fuchsia on RISC-V. That would be interesting.


Is there an ISO?


It's relatively straightforward to build a bootable image:

https://fuchsia.googlesource.com/docs/+/master/getting_start...


superb prose


Fuchsia sounds awesome. Allowing user space processes to do more of their own work frees up the kernel from providing standardized interfaces to hardware.

This makes it significantly easier to build a closed platform with unbreakable barriers between processes, and this is a great thing in terms of security and fine grained access controls for each process. Individual process isolation is extremely important for most of todays use cases where only a single user is logged into each system at one time and most running code is trusted.

In practice this means you can prevent user-space processes from accessing anything you don't want them to touch while still giving them substantial low-level access. This will be a boon for device makers because it allows them to prevent a user's apps from compromising the carrier experience. Companies like Google will also have less concern about users installing malware like ad blockers. The movie and music industry will also greatly appreciate an operating system finally designed for 21st-century IP protection. This will even be embraced by hardware manufacturers, since they no longer need to provide open-source drivers for their hardware that could be ported to other platforms. Overall a win-win for everybody.

In the end we can trust that this will result in a better user experience with more secure apps and devices.

Did anyone hear the whispers of Xooglers a few years back talking about "big changes" coming to Android that were absolutely horrible for users and done to placate industry? Hmmmm... This Fuchsia thing looks pretty suspicious.


This is a complete misunderstanding/misrepresentation of what a microkernel is. Even GNU is developing their own microkernel, the HURD.


I may be overly paranoid, but the majority of what Fuchsia is trying to accomplish is extremely bad for open platforms.

The outcome of moving drivers to user space will be a proliferation of binary blobs and black-box drivers. If you think binary GPU drivers are bad now, imagine an "open source" OS where every single driver is a binary blob. It will become impossible to run any other operating system on Fuchsia devices because you'll have no drivers, sealing off the Android platform permanently.


>If you think binary GPU drivers are bad now, imagine an "open source" OS where every single driver is a binary blob.

The current situation is they're tied to a version of the kernel, and typically abandoned by the vendors. With userspace drivers and driver APIs, the "android upgrade problem" would be solved.

It'd then be a matter of reversing these drivers, which should be far easier when they're running in userspace and completely bounded.


People that question the existence of Fuchsia need only remember why Chrome was created. A lot of people thought Google was wasting their time by building a browser, including Eric Schmidt, and look how that turned out. Now, I'm not saying that Fuchsia will have the same success as Chrome, but it's clear that they think that having an OS that they can control the direction of is important to them.


And let's not forget this fantastic article about Android from back in 2007.

https://www.engadget.com/2007/11/05/symbian-nokia-microsoft-...


Thanks for that link. These quotes are golden:

Palm's not the only company that isn't afraid to speak out on the Open Handset Alliance. Nokia, Microsoft and Symbian made it most clear today that they don't perceive danger from the new initiative and corresponding Android OS, with Nokia stating it quite bluntly: "We don't see this as a threat." Microsoft was a bit more on the defensive. "It really sounds that they are getting a whole bunch of people together to build a phone and that's something we've been doing for five years," said Scott Horn, from Microsoft's Windows Mobile marketing team. "I don't understand the impact that they are going to have."


Chrome was based on WebKit, and they bought Android (already based on Linux). Neither one was made from scratch.

Google certainly has the resources to do something like this. But neither of those projects (others mentioned Android) were started from scratch at Google.


And Fuchsia is using code from Chrome, Android and other open source projects. Not much these days is really made from scratch. iOS wasn't nor was MacOS or even MS-DOS for that matter.


They've built the WebKit sandbox from scratch, AFAIK. That's not exactly rebranding.


The security of Chrome was inspired by the OP web browser and SFI schemes. Clever, but actually weaker than its predecessors in order to boost speed.


It's also interesting that there are a lot of long-term web people on the team, almost all of them ex-Chrome (and some who were working on Firefox at Google before Chrome was a thing), many of them pretty senior by the time they left Chrome.


Two of the Fuchsia members also worked on BeOS if I recall correctly.


ChromeOS is only a major success in US schools; I have yet to see someone use it here in Europe.


Then what about ChromeOS? Do you think they're trying to build a "full-fledged" OS for the modern era, with safety and security in mind, but not as lightweight as ChromeOS?


It's likely to be lighter than ChromeOS.

ChromeOS was a locked down userspace on top of Linux. Fuchsia gets rid of the Linux.


Linux was the only remotely useful, sane, and lightweight part of ChromeOS.


It was also the primary vector for security vulnerabilities.


Linux may be hacky, but it isn't exactly heavy.


I think this is a symptom of internal rivalries at Google. Seems like a combination of Dart (arch-rival to Go) and ChromeOS (arch-rival to Android).

That's not necessarily a bad thing -- Google's M.O. has always been to try lots of different things at once. But it may mean they literally don't have a solid long-term plan for it yet.


"arch-rivals"? That's overly dramatic. Go and Dart are hardly competing with each other, as are ChromeOS and Android.


I was being a little tongue-in-cheek, sure, but only slightly. They're both fairly recent languages, both representing a vision of how to fix the mistakes of the past. I'm sure the leaders of the teams see each other as rivals.


Yup, and the end user/developer suffers.

Why should I rewrite my Android app just because Google can't work out their internal politics?

And yes, I know that you'll probably be able to run Android apps on Fuchsia, but what about bitrot? Will JVM-language based Android be put on Life Support?


Yeah, look how that turned out. Now we have Chrome, which is slow, uses shitloads of memory, needs to be restarted constantly, has been bad for web standards, is controlled by Google, constantly phones home, etc.


I see great advantages to offering a POSIX compatibility layer. It can be sandboxed to the app's context instead of being a system-wide dependency. Getting existing apps to run out of the box, and then convincing them to adopt a leaner, narrower set of system calls, is probably more valuable than requiring from-scratch development.


Google has repeatedly promised, and failed, to build a lightweight open OS. Every time they have produced one, it was full of bizarre political constraints and closed-source spyware. I would be shocked if this time were any different.


I don't recall this happening; are you referring to ChromeOS and Android? If not, can you share some links?


I don't want or need any new OS or product from this company. Everything they touch, initially looks super exciting and positive, until they grab you by the balls once the competition is out.


Specific examples might help to evaluate this generic claim.

(I work for open-source projects at Google, and don't see your claim being universally or even mostly true)


How many antitrust cases are pointed at Google in the EU alone? It's one thing to work on open-source projects; it's another to orchestrate the entire charade that's about to be unleashed on users.

Everything at Google, from its original product (the search engine) to embedded devices, medical research, and the entire Alphabet portfolio, is aimed at total dominance of every area of people's lives. It's an evil company in my eyes; it's been a long time since I retained any trust in this company and its vision.


I heard somewhere about a Google X OS. Is this it?


Was anyone else bothered by the passive voice in this article? I found it really distracting.


While this is a cool project, I can't really see it making financial sense...

> Lets throw away the last 20 years development on the linux kernel by thousands of people, and rewrite our own.

> How much will it cost?

> Ooh - I dunno - If you lend me 1000 engineers, we should be done in about 10 years, cos we're really smart and don't need to implement legacy SCSI support...


> How much will it cost?

Good thing it's funded by someone who has lots and lots of money!

> Lets throw away the last 20 years development on the linux kernel by thousands of people, and rewrite our own.

To suggest that linux is as good as it gets is a mistake. Operating systems are challenging to write, but certainly not impossible. If you're Google, you see big security risks involved in Linux's design. Google tries to mitigate some of those by funding oss-fuzz and a ton of other great options that narrow the opportunity for exploits. But what if you could write your own OS and make particular security guarantees as a design goal?

Google has already designed and/or reimplemented almost every aspect of the OS excluding the OS kernel.

The biggest drawback for starting a new OS is generally the cost of porting existing software to the new OS.


Linux isn't as good as it gets, but it moves significantly faster than pretty much any other software project. The majority of the code is hardware driver/support, and it's relatively well understood in terms of performance now. Linux today is not much like Linux 10 years ago.

Making something better than Linux in some way isn't hard. Making something consistently and sustainably better in enough ways, at the rate Linux moves, is a very hard task.


I don't believe the intention of Fuchsia is to take on Linux and become the jack of all trades OS but rather the master of some trades.


UNIX, as a viable OS model, isn't the last word in operating system design. It has too many design decisions and too much other baggage rooted in technology dating back to the 1970s. As Rob Pike, one of the people who worked on UNIX, said: "Not only is UNIX dead, it's starting to smell really bad."

On top of that, Linux essentially sucked all the air out of the UNIX development space by killing off all the commercial UNIXes. Carrying Linux forward rests largely on the shoulders of the kernel team and whatever contributions other businesses feel like making.

So where is the next major OS coming from? Google and Microsoft are the last two companies with the talent to build a future OS and enough resources and clout to push it enough to get traction. (In theory, it could also come from the community but people like Linus only come along once in a generation.)


> Google and Microsoft are the last two companies with the talent to build a future OS and enough resources and clout to push it enough to get traction.

I think that is a stretch. Apple has significant investment in Darwin and are a huge contributor to LLVM. Also, don't count out Amazon ... they have their fingers in so many pies these days ...


Amazon is way too business and money oriented to make such a long term and risky investment.


That's a pretty ridiculous statement. Amazon won't dump money into BS just because, but they are absolutely willing to take big risks if the payoff is there.


Both big risk and long term? I don't think so. They're not a company with much philosophical vision. They provide good, useful, services to make money. That's not enough to start a successful operating system.

I don't mean to bash Amazon. They do well what they do, at the scale they have. I only think the project would not fit the company as it is now.


I'm saying you fundamentally misunderstand Amazon. The entire thing is based on long-term thinking. It's why they spent decades building out a massive logistics infrastructure instead of turning a profit. It's why they spend tons on robotics, drones, and grocery stores that can be run without people. They have been manufacturing tablets for years. They built the Kindle. They built a (disaster of a) phone.

Amazon doesn't waste money just because. But if they saw the potential return in developing an OS (say, deciding they want a custom OS as the base for AWS), they are absolutely the type of company that would pursue it.


Maybe you're right and I'm mistaken.

That said, I'd like to note that a custom OS for AWS (servers), and a user OS (desktop or mobile) are extremely different things.


> Apple has significant investment in Darwin and are a huge contributor to LLVM.

Apple could make the next OS for Apple (presuming they were willing to take the leap to do something other than incremental improvements to their existing stable of OSes), but they aren't likely to make OSes for anyone else, so they aren't really relevant to the discussion.

> Also, don't count out Amazon ... they have their fingers in so many pies these days ...

Amazon seems to be pretty consistently plucking the low-hanging fruit from their current position. They might have the right talent to build a future OS, but, for a slightly different reason than Apple, it doesn't seem to be consistent with their orientation.


"Not only is UNIX dead, it's starting to smell really bad."

That's not an opinion necessarily shared by others who worked on UNIX and still do in some cases.

Certainly, there are commercial UNIX releases today that have relatively recent innovations and capabilities that still aren't available in other operating systems.


> Linux essentially sucked all the air out of the UNIX development space by killing off all the commercial UNIXes.

Only because they have done a really good job.


Linux wasn't really good before it became popular.

I think it won because it offered neutral grounds (a place where lots of players who don't have huge market power can meet without fear of being swindled) and, partly, out of luck (you could call it 'good timing', but I don't think Linus considered that)


What OS was really good before quite a few iterations?


Eh. The biggest thing Linux had going for it in terms of winning market share was running on commodity x86 parts in a time period when the commercial Unices weren't touching it, and x86 made really strong gains in beating everybody else at performance per dollar.


> The biggest thing Linux had going for it in terms of winning market share was running on commodity x86 parts in a time period when the commercial Unices weren't touching it

From the 1980s to 1993 there were:

    v7 ports: Microsoft Xenix (later became SCO), Venix, Coherent
    System III ports: PC/IX (later 386/ix)  
    SVR3 ports: official Intel, ESIX from Everex
    SVR4 ports: Dell UNIX, Novell UnixWare, Microport


Strictly speaking, Coherent wasn't a V7 port – they wrote the code from scratch rather than using any of AT&T's code, and they never paid AT&T any Unix license fees.

AT&T was suspicious, but even after a careful investigation by Dennis Ritchie himself, they couldn't prove any of their code had been copied.

Given both the V7 source code and Coherent source code have now been released, you can compare them yourself and form your own opinion, if you'd like.

V7 source code: http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7

Coherent source code: http://minnie.tuhs.org/cgi-bin/utree.pl?file=Coherent4.2.10


I am actually on the lookout for Coherent 2 or earlier sources. It would be very interesting to see how it worked on the 8088.


Also AIX (ran for some time on PS/2 machines), NeXTSTEP (3.1+ ran on PCs)... The list is, actually, quite extensive.


Microsoft abandoned Xenix when AT&T decided to commercialize Unix with SVR4.

OS/2 was, in fact, created to be Microsoft's Xenix replacement (and then when Microsoft fell out with IBM, we got NT as Xenix's replacement's replacement).


"Microsoft abandoned Xenix when AT&T decided to commercialize Unix with SVR4."

They not only kept supporting it via the SCO Group but even used that company to bankroll an attempt to kill Linux in court. IBM, dependent on Linux, promised a billion-dollar battle. The company eventually went bankrupt, but its "real UNIX" is still in use by Fortune 500 companies as legacy systems. That they sold inventory-management solutions on top of SCO Server means quite a few are mission-critical. The money still coming in is probably why they keep fighting over whether it should stay bankrupt or not.

https://en.wikipedia.org/wiki/SCO_Group

Microsoft truly abandoned UNIX when they went with OpenVMS:

http://windowsitpro.com/windows-client/windows-nt-and-vms-re...

Note: Fun to remind UNIX/Linux users about that when they joke about an OpenVMS desktop. One dominates the market. ;)


Okay? Red Hat Enterprise Linux wasn't released until 2000, so I don't know what Microsoft's late-80s UNIX distribution has to do with it. Google was running Linux from the early days, and they launched in 1998, which is nearly a decade after Xenix stopped being updated.

EDIT: Yes, Linux has its origins sooner, and if you want to know why Linux beat the BSDs to mindshare, what was going on in 1993 is very important. But Linux wasn't competing for mindshare with Xenix, it was stuff like Solaris (which did have an x86 port, but was mostly running on SPARC), AIX (which was mostly IBM big-iron and POWER), HP-UX (which was PA-RISC and IA-64), etc.


If you want to talk about post-1993, saying that Linux won because there weren't any commercial x86 Unices makes even less sense. Xenix did not stop being updated, it became SCO UNIX. SCO bought and continued to sell UnixWare from Novell in 1995. x86 Solaris came out in 1993, but Sun actually had an x86 Unix since 1991 when they bought ISC.


Red Hat was rather late to the game. Both Debian and Slackware came out 7 years earlier in 1993.


The first time I saw Linux in a work environment was in '96 as a Samba file server on our office LAN, which was otherwise dominated by Windows NT machines. I think it was set up partly for novelty (someone wanted to play with Linux), and partly to save money on not buying one more Windows license.


Just like Linux we're (Samba) still around :-). Only now we can be an Active Directory Domain Controller as well as just a fileserver !


You make it sound like Linux was the only option on x86.


More like because they've done a good-enough job and because most distributions are free of charge.

(Don't get your knickers in a knot. I'm not banging on Linux.)


Actually, the free-of-charge aspect limited uptake of Linux in the commercial sector for a long time. It has its faults, but Linux won by being great software.


>On top of that, Linux essentially sucked all the air out of the UNIX development space by killing off all the commercial UNIXes.

For good reason too. The commercial UNIXes were absurdly expensive, and they were all incompatible with each other too. There's a reason all the software developers abandoned UNIX and went to Windows. UNIX killed itself with ridiculous license fees and fragmentation. Linux saved it by making it free, open-source, and unifying it and using common standards instead of a bunch of vendors trying to make their own hoping to gain marketshare and ending up losing everything to MS. Of course, this hasn't prevented Linux from having its own share of fragmentation (the different DEs chiefly), but it's nothing like the UNIX Wars of the 80s.


Yes, because we all know how compatible the GNU/Linux distributions are among themselves.


A lot of differences are pretty superficial.


You mean like package formats, audio subsystems, configuration files that didn't exist on original UNIX, window managers, init systems, ...


Let's start with package formats: people discuss this matter as if it were like building an app for iOS vs. Android. 99.999% of the work is the application. Package formats are just different sets of instructions for building the same source code, which ultimately works on any system so long as its required libraries are present.

Continuing on with audio, virtually everyone uses pulseaudio. JACK is pretty much reserved for audio production and is its own animal.

The majority of configuration files are similar/the same, most differences are minor. For good or ill most distros are adopting systemd.

Window managers and desktop environments are just components, virtually all of which can be run on any distribution.

Got any more?


No, package formats aren't just different sets of instructions, because on each distribution certain files might land in different places.

Plus someone has to keep track of those instructions for every single distribution.

Finally, supporting the same format isn't enough; for example, an RPM for SuSE isn't the same as an RPM for Red Hat.

Well, apparently you forgot there are people still using ALSA and OSS.

When doing desktop applications, anyone who cares about the UI/UX of their users wants to integrate with the menu system, toolbar notifications, context menus, the window manager's drag-and-drop, printing, ...

So it isn't just components.

Of course, if the goal is to have a plain twm experience, then forget about what I am saying.


The percentage of desktop users not using Pulse is a rounding error; Firefox doesn't even work without Pulse anymore. Notifications and menu systems are standardized. Printing doesn't require special work across distros. Drag and drop just isn't part of what a window manager does, period; it's more what your file manager does.

Your valid issues are pretty much limited to the fact that software must be packaged for several distros in order to be suitable for distribution on even most systems, and that file manager integration still requires you to integrate with both GNOME and KDE to support most users.


Apple?

Amazon does release a modified Android. They could probably spend the money and get there.

Doesn't Oracle do a fair bit of Linux work?

Red Hat as well. They at least have the knowledge to build a new OS.

IBM?


Have you ever used a Microsoft product? Have you ever tried to get support for a Google product?

You do not want these companies involved in your operating system.


Who does that leave to help with your OS? Apple? Their support is pretty bad too.


What about, the people that made it in the first place?

The support I've had from companies like Red Hat is unparalleled.


If you pay a monthly subscription.


Pretty cheap if it ends up saving the Android ecosystem.

Android is maybe using 10% of the functionality in the Linux kernel but is paying all the overhead and friction of maintaining a branch for each and every device.

I think Google wants to heavily encapsulate the hardware vendors' drivers and customizations and provide a stable API, so Google can pretty much update devices on its own without much interaction with the hardware manufacturers (which are very complacent about updating).


Why can the Linux kernel support lots of CPU architectures in the mainline tree, yet somehow a fork is required for each handset made by all these home-appliance manufacturers? If you let all these vacuum-cleaner companies fork Fuchsia, they'll fork it and put their "unique features" right into the microkernel.


It's because all the stupid vendors refuse to share anything, and won't publish their sources for the various device drivers needed. So every time there's some new flash chip or whatever, there's a custom closed-source device driver for it which doesn't get mainlined or updated for newer kernel revisions, and devices with that chip are forever stuck on an ancient kernel version.

There are only two ways around this: 1) somehow force companies to open-source their driver code, or 2) have a static, unchanging (or at least very backwards-compatible) ABI for device drivers, and then just put up with shoddy vendor-written drivers causing all kinds of glitchy behavior, as Windows was infamous for over so many years. (A sketch of what option 2 looks like in practice follows this comment.)

This doesn't happen with the mainline tree because it's all open-source and gets debugged and inspected by others, but that just doesn't happen with the cellphones/Android.
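
For the curious, here's a hand-wavy illustration of option 2, the stable driver ABI. The names are invented and don't correspond to any real Windows, Linux, or Fuchsia interface; the point is just that the OS calls drivers only through a frozen, versioned table of entry points, so a binary driver built against an old version keeps loading on newer kernels.

    /* Invented interface for illustration; not a real driver ABI. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    #define DRIVER_ABI_V1 1u

    /* The frozen contract: the OS only ever calls a driver through this table.
       Later versions may append new entry points, but existing fields are never
       reordered or removed -- that's what keeps old binary blobs loadable. */
    struct block_ops_v1 {
        uint32_t abi_version;
        int (*read)(void *dev, uint64_t lba, void *buf, size_t len);
        int (*write)(void *dev, uint64_t lba, const void *buf, size_t len);
    };

    /* OS side: refuse a driver built against an ABI this kernel doesn't know. */
    static int register_block_driver(const struct block_ops_v1 *ops) {
        if (ops == NULL || ops->abi_version == 0 || ops->abi_version > DRIVER_ABI_V1)
            return -1;
        /* ... hook ops into the block layer here ... */
        return 0;
    }

    /* Vendor side: a dummy closed-source driver implementing the v1 entry points. */
    static int dummy_read(void *dev, uint64_t lba, void *buf, size_t len) {
        (void)dev; (void)lba;
        memset(buf, 0, len);                /* pretend we read zeros from the device */
        return 0;
    }

    static int dummy_write(void *dev, uint64_t lba, const void *buf, size_t len) {
        (void)dev; (void)lba; (void)buf; (void)len;
        return 0;
    }

    int main(void) {
        struct block_ops_v1 ops = { DRIVER_ABI_V1, dummy_read, dummy_write };
        printf("register: %d\n", register_block_driver(&ops));
        return 0;
    }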


> It's because all the stupid vendors refuse to share anything

I'm not going to defend the vendors (much). But it's not free to upstream your code. It's work. And I'd argue that it's an investment that will pay off, but you can't fault others much for seeing it differently.


I disagree. It's not free (usually) to upstream your code yourself, but there's nothing preventing you from publishing it for anyone to use. With a popular mobile phone, that'll very likely end up with someone else grabbing it and upstreaming it for you.

There's no guarantee some unpaid volunteer is going to do this kind of work for your company, but the chances are much higher if it's some kind of popular device.


For all your complaints about lack of a stable ABI, when you run Linux on a PC most of the hardware just works, most of it without out-of-tree drivers. (In some areas like graphics proprietary drivers give you a better experience, but even there typically there is some baseline support in tree.)

The reason this is possible on the PC and not on phones is that the PC has standard hardware interfaces that don't exist in the ARM world. The lack of those is why phone vendors need to do a lot of custom work.


It's also because ARM doesn't have a stable, standard BIOS the way x86 does. ARM is otherwise great, which is why people use it, but it has that one big downside. Every phone has a different hardware configuration, and getting that hardware working is always a fiddly bespoke job.


> won't publish their sources for the various device drivers needed.

And how is Fuchsia supposed to help that?


It will go down the Windows model of having a stable kernel ABI (I assume). And suffer/enjoy the same trade-offs as Microsoft.

That model is not perfect, but it worked for MS -- and Google is in a similar enough position.


Just because there's a stable API doesn't mean the manufacturers will actually upgrade.

Unless there's some legal requirement, it's going to be the same fragmentation


If they have a stable driver ABI, Google can be responsible for the upgrading. They just have to require that OEMs allow them to do so.


Not only Windows; the majority of operating systems are like that, even the old commercial UNIXes. Only GNU/Linux forces an unstable driver ABI on developers.


Linux and *BSD.

FreeBSD will break ABI every major version, roughly every 2 years.

https://wiki.freebsd.org/VendorInformation

OpenBSD even goes so far as to break the user-space libc ABI in minor releases:

http://www.openbsd.org/papers/eurobsdcon_2013_time_t/


Ah, OK, I thought the *BSDs followed the old UNIX school of stable ABIs.

Besides toying around with FreeBSD in the late 90's, I never used it properly.


Except that the "stable ABI" thing doesn't work in practice. Look at all the people who have to throw away perfectly good hardware every time a new Windows version comes out, because the hardware maker doesn't care about updating their drivers and just tells customers to buy a new version.


It works if it's architecturally designed to work. Mainframes and the AS/400 do that. On the System/38 (the AS/400's predecessor), apps compiled down to a microcode-like intermediate form against standardized interfaces, which special compilers then converted to the actual hardware. That's why apps written decades ago still run. You also got modern features as updates came in.

https://homes.cs.washington.edu/~levy/capabook/Chapter8.pdf

OpenVMS gave customers binary translation tools and let clusters run with multiple CPU architectures. That migration wasn't as painless, but it at least happened. Microsoft eventually applied this strategy at the application level, combined with OpenVMS-style cross-language programming, in the form of the .NET CLR. It can work at a much lower level, though, as IBM showed in the '70s and '80s.


What will happen with custom ROMs?

Right now you can bootstrap a ROM because the manufacturer has to open-source their kernel (GPL).

With the kernel being proprietary, getting a custom ROM will probably be as hard as getting Android to run on an iPod.


That would be awesome. Like AOSP for everyone


> > Ooh - I dunno - If you lend me 1000 engineers, we should be done in about 10 years, cos we're really smart and don't need to implement legacy SCSI support...

Correct, they don't:

> From piecing together information from the online documentation and source code, we can surmise that Fuchsia is a complete operating system for PCs, tablets, and high-end phones.

Their baseline isn't hardware from 40 years ago; it's that of today, and only a subset of it. That means you can forgo an enormous number of things that are all potential sources of problems. The world has evolved a lot, and new theories around OS design and security have come and gone.

Just because Linux serves us well doesn't mean we shouldn't explore other options, or that those other options need to support every old data exchange format and hardware port.


>Lets throw away the last 20 years development on the linux kernel by thousands of people, and rewrite our own.

Chrome has more code than the Linux kernel.

The numbers I see for 2012 (way back) are: "4,490,488 lines of code, 5,448,668 lines with comments included, spread over 21,367 unique files."

Linux, on the other hand, was ~200K lines of code for the kernel proper (the rest is 7 million for drivers, which the new OS doesn't need to concern itself with, as it will run on custom hardware) and 2 million lines for various architectures (ditto).

Don't overestimate the complexity of building a complete kernel -- even teams as resource lacking as NetBSD can do it.

And with other parts of the stack, they already have tons of experience (e.g Skia for graphics etc).


But how does it compare to Emacs? :)


Linux is humongous. I don't know what Google is trying to achieve with Fuchsia, but if you're willing to make trade-offs, I believe you can make an OS that is multiple orders of magnitude smaller than Linux (or other Unixes, or Windows) and still makes a lot of practical sense.


Reducing the attack surface of the kernel is one of the primary benefits. Companies like Qualcomm are a constant source of exploits due to the number of buggy drivers they introduce into the kernel each year.


A few days ago somebody posted a link to some very old UNIX documentation from the early 70s. Something that struck me was how similar it was to modern UNIX systems. A lot of abstractions underpinning Linux, the BSDs, and OSX have not changed much in the 45 years since UNIX V1 was developed. It wouldn't hurt to take a fresh look at things.

Your argument could have been used against almost every project Google's worked on. Email, search, Go, phones, Chrome, maps, etc. all had established proprietary and open source competitors.

It might be expensive, but it's cheaper, more practical, and more likely to succeed than something like self-driving cars or Google Glass.


Of course that's largely because those abstractions were good ones.

Someone in a comment above noted that Rob Pike thinks Unix is obsolete. But then he was probably thinking about Plan 9, which was indeed better than UNIX -- because it was even more unixy. Filesystems all the way down.

Capabilities are the first new abstraction I have heard of that really go beyond the Unix model.


You need to read more. A lot more. (-:

For starters, read the headlined article that observes that capabilities pre-date the UNIX model.

Even the "classic authorities" on this stuff, including Bach, Comer, Deitel, and Tanenbaum, pointed to many things that went beyond the UNIX model. And this was almost thirty years ago.

There is a wealth of experimentation and diversity that already exists and that has been done over the intervening many years. In addition to the operating systems already mentioned on this very page, there's Helios which was a capability-based system with a POSIX layer on top from the late 1980s/early 1990s. The BSD 4.4 Log-Structured Filesystem had global wear levelling in the filesystem in 1990. OS/2 1.0 went beyond terminal escape codes and multiple mouse protocols to a device-neutral video, keyboard, and mouse paradigm for TUI applications in 1987. People have done operating systems where everything really is a file, "object oriented" operating systems, network distributed operating systems, microkernel operating systems with multiple "personalities", ... all sorts.


They get to control everything with this and answer to no one. They can learn from past mistakes. And they aren't bound to Linux legacy support.



