Zircon Kernel, Core Drivers and Services (googlesource.com)
200 points by ra7 on Sept 15, 2017 | 80 comments



Note: It's not a new kernel. This was previously known as Magenta, and it's now been renamed [1] to Zircon.

[1] https://www.phoronix.com/scan.php?page=news_item&px=Fuchsia-...


That's helpful. I was trying to figure out whatever happened to Magenta. So it's just a rename.


That is most likely because Magenta is now a TensorFlow project; talk about communication inside the same company!

https://magenta.tensorflow.org/


That TensorFlow Magenta looks like a totally unrelated and separate project from the Magenta OS, though.


Of course it is, but imagine referring to Google's Magenta project without context.


Now if only we knew what the hell they are for.


From here [1]:

"Zircon targets modern phones and modern personal computers with fast processors, non-trivial amounts of ram with arbitrary peripherals doing open ended computation."

It's likely that this is what Google wants to replace Linux with on Android phones. The kernel/OS both seem to be designed to scale down to very small devices and all the way up to full GUI desktops.

[1] https://fuchsia.googlesource.com/zircon/+/master/docs/zx_and...


The name immediately makes me think of Mr. Zurkon from the Ratchet & Clank franchise. Wonder if it's mere coincidence or if somebody is a fan.


I suspect it's named after Zirconium [1], similar to how Chromium/Chrome are also named after a metal.

[1] https://en.wikipedia.org/wiki/Zirconium


Other pieces of the system are called Garnet and Topaz, so I assume Zircon is more about the gemstone.


Oh yeah, zircon is also a mineral (zirconium silicate): https://en.wikipedia.org/wiki/Zircon


Surely Google should realise that Topaz and Garnet clash with font names on AmigaOS?


I'd guess, though, that it has to do with circles, as in ring 0 from OS theory, and maybe recycled (pun not intended) from Google Plus circles. With the other names, creativity might just have left them.

Garnet is the composer for the UI. What is Topaz?


I'm pretty sure that when the OS itself gets released it will also be renamed.

I mean, the name Fuchsia is super unintuitive to pronounce.


Wouldn't be surprised if they just call it "Android 10.0" or something. If they keep Java/NDK compatibility, the rest doesn't matter to most people.


Simple to pronounce. Spelling is hard.


I know multiple languages and to be honest I still don't know how to pronounce it :)


Few-shah.


I suppose you are getting close to "future".

I'd naturally say "fucks-ya", informed by the German, but /fjuːʃə/ seems to be informed by the French, although Leonhart Fuchs, after whom the plant was named, was German. Fuchs means fox, neither fudge nor few, by the way.


Don't blame me, I don't make the rules. That's at least how we Americans pronounce it. Then again, we mispronounce quite a few things, so it may have a more localized pronunciation in its native language regions. I can't speak for those, as I wouldn't know.


BTW, this googlesource.com is probably the fastest Google website, and much faster than the typical website: 7.5 KB of HTML, 5.2 KB of CSS, tiny images, and even an 8.7 KB font. And no JavaScript at all!


the way the internet should be.


And it uses Markdown instead of HTML; that's pretty neat.


Here's the project for it, which is also on the same site: https://gerrit.googlesource.com/gitiles/


There are two parts to googlesource.com: the ones without the "-review" suffix (like the link in the OP) are on Gitiles, and the ones with the "-review" suffix are on Gerrit.


How does that work? Are they using CSS to render and format it?


It's interesting, the timing of this rename to Zircon going through and the Fuchsia team's attempt to get Fuchsia support into Go [0].

EDIT: sounds like the rename was in the works for a while and the Go port was just holding off until it completed:

> "this change has been a few months in the works..." "...I delayed this email until it was almost done."

[0] https://groups.google.com/forum/#!topic/golang-dev/2xuYHcP0F...


> The fuchsia kernel is running on GCE, with support for network and disk coming very soon. That's how I intend to run builders for build.golang.org.

Wow!


That's where I got the link for Zircon :) I thought about posting the Go discussion initially, which is really interesting on its own.


Speaking of unconventional system stuff in Go, Google has a Linux boot FS, which, instead of featuring the usual BusyBox, ships with an init, the Go toolchain and a source tree. The /bin replacements are symlinks as expected, but the twist is that they point to a binary that is compiled on first boot.

http://u-root.tk/


This reminds me of TCCBOOT.


Thanks for the link.

> In particular, the network stack is implemented in Go, so for any program to make a TCP connection on Fuchsia, we need the Go port to be working:

It looks like systems programming to me. :)


This is part of their move to rename layers of the OS after gemstones.

Zircon

Zircon is the operating system's foundation: it mediates hardware access, implements essential software abstractions over shared resources, and provides a platform for low-level software development.

For example, Zircon contains the kernel, device manager, most core and first-party device drivers, and low-level system libraries, such as libc and launchpad. Zircon also defines the Fuchsia IDL (FIDL), which is the protocol spoken between processes in the system, as well as backends for C and C++. The backends for other languages will be added by other layers.

Garnet

Garnet provides device-level system services for software installation, administration, communication with remote systems, and product deployment.

For example, Garnet contains the network, media, and graphics services. Garnet also contains the package management and update system.

Peridot

Peridot presents a cohesive, customizable, multi-device user experience assembled from modules, stories, agents, entities, and other components.

For example, Peridot contains the device, user, and story runners. Peridot also contains the ledger and resolver, as well as the context and suggestion engines.

Topaz

Topaz augments system functionality by implementing interfaces defined by underlying layers. Topaz contains four major categories of software: modules, agents, shells, and runtimes.

For example, modules include the calendar, email, and terminal modules, shells include the base shell and the user shell, agents include the email and chat content providers, and runtimes include the Dart and Flutter runtimes.


The docs refer to 'LK' (Little Kernel) as an inspiration for the lower levels of the design. It's another project started by Travis Geiselbrecht of NewOS fame.

https://github.com/littlekernel/lk


> Travis Geiselbrecht of NewOS fame

To save others a bit of googling: apparently NewOS eventually became the basis for the Haiku kernel, Haiku being a FOSS re-implementation of BeOS. Travis also worked on some kernel stuff for BeOS, so that lineage makes some sense. Not sure if I missed anything?

Also see this comment thread: https://news.ycombinator.com/item?id=12271839


It is more than inspired by LK, isn't it? I'd seen it characterized more as a fork that evolved.

For example...different, but the lineage seems there:

https://fuchsia.googlesource.com/zircon/+/master/kernel/kern...

https://github.com/littlekernel/lk/blob/master/kernel/mp.c



I had never heard of LK before, but apparently it is used in the Android bootloader, so it makes sense that Zircon is based on it.


LK is also Qualcomm's Little Kernel bootloader; it might also have something to do with that.

(PDF) https://developer.qualcomm.com/qfile/28821/lm80-p0436-1_litt...


IIRC he works on Zircon now.


Looks like this is just a rename of the Magenta kernel: https://github.com/fuchsia-mirror/zircon/commit/f3e2126c8a8b...


100 syscalls for a microkernel sounds like a lot. Is that standard? I thought the reasoning behind microkernels was to dispatch to userland processes to handle things instead of needing a lot of syscalls.


Here's a list [1]. I count 131 syscalls. Most of them are related to hardware (PCI, timers, CPU, framebuffer); a bunch are related to logging/tracing. There's a small set that addresses kernel "objects" (processes, ports, etc.).

[1] https://github.com/fuchsia-mirror/zircon/blob/master/system/...


Some more detailed info here: http://zircon.fyi/syscalls.md and http://zircon.fyi/concepts.md

The PCI syscalls and a lot of the other DDK scaffolding stuff will be going away in time (and are not accessible to general userspace processes).

I'm hopeful that we'll hit 1.0 with fewer than 100 public syscalls and hopefully not many more than 100 total.


I'm curious what the trade-off is between a hard real-time kernel like LK and a kernel that only offers soft real-time guarantees.

I would presume there are some costs associated with providing hard real-time guarantees ... otherwise every OS would be a real-time OS, right?

And if there are trade-offs, why did Google choose a hard real-time kernel to be the basis of its new OS, given that Linux and BSD seem to be fine bases for a mobile OS (as evidenced by iOS/Android)?


I guess a very big German Telecom objected to 'Magenta'?


Other than using a microkernel, I'd be interested in reading in what ways Google has focused on security with Fuchsia vs Android, if at all.


Fuchsia has a capabilities-based permissions model, as reported in https://lwn.net/Articles/718267/

That article has a good overview.

The best place to read about this in the official docs is the page on filesystems [1], and the page on sandboxing [2]:

https://fuchsia.googlesource.com/docs/+/master/filesystems.m...

https://fuchsia.googlesource.com/docs/+/master/sandboxing.md


I'd be interested in reading any sort of documentation about intent or design goals, but I can't find anything?

There's a little about some aspects of it here: https://fuchsia.googlesource.com/docs/+/master/book.md


Is this bad news for Linux?


I don't know the context very well, but from what I've read, Zircon seems quite tuned for its job, so I bet it's technically superior to Linux there. So, does that mean that Linux, like any other technology, could die of obsolescence?


Why would you think that?


Because I think that Google is a big user of, and contributor to, Linux, and the fact that they are developing their own kernel probably means their participation will decline on both counts.


Funny. I also have a program named zircon (https://github.com/Hexworks/zircon), which is a text GUI library.


One thought on reading those design documents...

...files?


I do guess that files are probably not handled by the microkernel, and dealing with them is likely done through passing messages to some sort of VFS server. But I haven't looked into it in depth.

Update: https://fuchsia.googlesource.com/docs/+/master/filesystems.m...

"Like other native servers on Fuchsia, the primary mode of interaction with a filesystem server is achieved using the handle primitive rather than system calls. The kernel has no knowledge about files, directories, or filesystems. As a consequence, filesystem clients cannot ask the kernel for “filesystem access” directly."


To expand on what snvzz said, microkernels are typically only responsible for:

- Booting up

- Processes

- Message passing

- Page allocation

- API for manipulating ring 0 resources (drivers)

Everything else is implemented in userspace services and interfaced with via message passing.
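
As a rough illustration of that message-passing style on Zircon itself, here's a minimal sketch using the channel syscalls (zx_channel_create / zx_channel_write / zx_channel_read). The exact signatures live in the Zircon headers and docs, so treat this as illustrative rather than canonical:

    /* Illustrative only: create a channel (a pair of handles) and pass a small
       message from one end to the other. In practice the two ends would live in
       different processes, with FIDL defining the message format. */
    #include <stdio.h>
    #include <zircon/syscalls.h>
    #include <zircon/types.h>

    int main(void) {
        zx_handle_t a, b;
        if (zx_channel_create(0, &a, &b) != ZX_OK)
            return 1;

        const char msg[] = "hello from userspace";
        if (zx_channel_write(a, 0, msg, sizeof(msg), NULL, 0) != ZX_OK)
            return 1;

        char buf[64];
        uint32_t nbytes = 0, nhandles = 0;
        if (zx_channel_read(b, 0, buf, NULL, sizeof(buf), 0, &nbytes, &nhandles) != ZX_OK)
            return 1;

        printf("got %u bytes: %s\n", nbytes, buf);
        zx_handle_close(a);
        zx_handle_close(b);
        return 0;
    }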


Looks like filesystems can be accessed via thinfs (Go implementation of FAT12/FAT32)

https://fuchsia.googlesource.com/thinfs/

My guess is that apps will be encouraged to use Fuchsia's object store

https://fuchsia.googlesource.com/ledger/


Files have awful semantics in just about every way. We should get rid of them ASAP.


I'd gladly read up on alternative concepts to file systems if you'd be so kind as to supply some searchable terms.


Look at object and capability machines. Back before OSes became so homogeneous there were a _LOT_ of ideas that didn't map to the modern concept of a file. Some of these machines still exist. For example, the AS400/iSeries doesn't really differentiate between RAM and storage with its object storage, which means it's a perfect fit for a modern non-volatile RAM machine. The original PalmOS had a similar concept.

Of course, all the rage the last couple of years is key/value stores, which in old terminology one might call KCD (key, count, data) or, to rearrange it a bit, CKD, aka the technology used for persistent disk storage on IBM mainframes.

This is actually one of the things that has gotten a lot easier on the internet in the past few years as book scanners have become more common. There now seems to be an effort to preserve old Burroughs (and similar) manuals online rather than letting them collect dust in people's attics.


http://research.cs.wisc.edu/adsl/Publications/ibench-sosp11....

> We analyze the I/O behavior of iBench, a new collection of productivity and multimedia application workloads. Our analysis reveals a number of differences between iBench and typical file-system workload studies, including the complex organization of modern files, the lack of pure sequential access, the influence of underlying frameworks on I/O patterns, the widespread use of file synchronization and atomic operations, and the prevalence of threads. Our results have strong ramifications for the design of next generation local and cloud-based storage systems.

> The iBench tasks also illustrate that file systems are now being treated as repositories of highly-structured “databases” managed by the applications themselves. In some cases, data is stored in a literal database (e.g, iPhoto uses SQLite), but in most cases, data is organized in complex directory hierarchies or within a single file (e.g., a .doc file is basically a mini-FAT file system). One option is that the file system could become more application-aware, tuned to understand important structures and to better allocate and access these structures on disk. For example, a smarter file system could improve its allocation and prefetching of “files” within a .doc file: seemingly non-sequential patterns in a complex file are easily deconstructed into accesses to metadata followed by streaming sequential access to data.


File systems are just NoSQL databases: hierarchical key-value blob stores. There are obviously a ton of other ways to model databases that could be used. At the other extreme, I think Oracle DB runs quite happily on raw disks, or at least did so at some point.

Of course, I'm not sure whether the parent meant files as a way to structure/store data (having that hierarchical blob store) or as a way to access data (something you `open`, `read`, `seek`, etc.), as they are slightly different things.

For a more real-world example, take a look at how mainframes, especially the AS400 (edit: meant System/360 successors), manage data. At least AFAIK they fundamentally work at a more structured level.


Oracle DB's preferred method of data storage is for you to hand it disks for Automatic Storage Management, ASM. It then takes care of replication and storage by itself.

In practice, this might be a little more performant but incurs significant manageability costs. If you're a committed Oracle shop, it's worthwhile. If you just want one or two database servers and you already have preferred storage methods, use those. (Or, more realistically, use PostgreSQL.)



The Newton had a type of object database called soups in place of the file system. It supported queries and frames. The coolest feature was that when you removed storage, it still worked with whatever was left.


Link to a rant answering why files are a bad abstraction? I remember reading here that they're not amenable to metadata, for one thing.


Try https://danluu.com/file-consistency/ for one point: the filesystem isn't really even viable for data persistence.

The filesystem's "you can only persist uninterpreted bytes" policy means software can't maintain any kind of data invariant across runs; everything has to be revalidated if your process ends.
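
To give a concrete flavor of the persistence problem from that article, here's a minimal POSIX sketch (my own illustration, not code from the article) of the write-temp, fsync, rename dance an application has to get right just to replace a file without risking a torn result after a crash:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Replace dir/name with new contents so that, after a crash, a reader sees
       either the old file or the new one, never a partially written mix. */
    static int replace_file(const char *dir, const char *name, const char *data) {
        char tmp[256], dst[256];
        snprintf(tmp, sizeof(tmp), "%s/%s.tmp", dir, name);
        snprintf(dst, sizeof(dst), "%s/%s", dir, name);

        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return -1;
        if (write(fd, data, strlen(data)) < 0 || fsync(fd) < 0) { /* flush the data itself */
            close(fd);
            return -1;
        }
        close(fd);

        if (rename(tmp, dst) < 0) return -1;          /* atomic replacement on POSIX */

        int dfd = open(dir, O_RDONLY | O_DIRECTORY);  /* fsync the directory too, or the */
        if (dfd < 0) return -1;                       /* rename itself may not survive a crash */
        fsync(dfd);
        close(dfd);
        return 0;
    }

    int main(void) {
        return replace_file("/tmp", "example.conf", "key=value\n") == 0 ? 0 : 1;
    }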

ACLs (e.g. unix permissions) are widely regarded as a mistake.

File locking is broken: https://gavv.github.io/blog/file-locks/
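
One of the classic examples in that vein (my own sketch, not taken from the linked post): POSIX fcntl() record locks belong to the process, not to the descriptor, so closing any other fd that happens to reference the same file silently drops them:

    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        int a = open("/tmp/lockdemo", O_RDWR | O_CREAT, 0600);
        int b = open("/tmp/lockdemo", O_RDWR);  /* unrelated second fd, same file */

        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET, .l_start = 0, .l_len = 0 };
        fcntl(a, F_SETLK, &fl);                 /* write-lock the whole file via fd a */

        close(b);  /* closing fd b releases the lock that was taken via fd a... */

        /* ...so another process could now grab the lock even though fd a is still
           open and this process believes it still holds the lock. */
        close(a);
        return 0;
    }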

File metadata is easy to accidentally mangle (e.g. atime) and hurts performance (even "relatime" is slower than not causing a write for every read).

The filesystem is used both for users to organize their data files and for sharing machine-interpreted data between programs (e.g. shared libraries and system configuration). Humans need human-readable names, and machines get confused by humans renaming things (and should probably be addressing by content, whether cryptographically or in terms of type signatures or specifications).

There are no asynchronous syscalls for interacting with the filesystem itself (e.g. `stat()`); for file contents things are only slightly better.

Probably I'm still forgetting a number of problems, but these come to mind offhand.


What is it with this open-source + patents bullshit? Why can't we just work towards abolishing software patents altogether? They hold back progress on the state of the art.


I'm not a lawyer, but after reading the patent file, it seems to me that it only covers the kernel implementation, unlike Facebook's patent clause, which covers everything.


It covers the entire Fuchsia OS.


Right, Facebook wants to be able to do anything it wants to your sister (or IP), and you can't ever sue FB if you dare use their code. Google just wants to trade any rights you have to the specific implementation of the code you've decided to use: they won't sue you, nor you them, regarding that specific code. Not the same universal IP grab at all.

On the other hand, change one iota and you have no patent grant from Google of any kind, I believe.


I don't get why they wouldn't just use the Apache v2 license at this point; it seems like the same thing as MIT + patent grant & invalidation-on-suit.


Ouch. That would be a worse patent-grab.


The GigaBoot20x6 bootloader speaks a simple network boot protocol (over IPV6 UDP) which does not require any special host configuration or privileged access to use.

It does this by taking advantage of IPV6 Link Local Addressing and Multicast, allowing the device being booted to advertise its bootability and the host to find it and send a system image to it.

Uh oh. Totally insecure remote boot with discoverability. What could possibly go wrong? If this shows up in some IoT device, trouble.
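
To underline how low the bar is on the host side: discovering such devices is just an unprivileged IPv6 UDP multicast listener, roughly like the sketch below (error handling omitted; the group and port here are placeholders, the real protocol constants are defined in the Zircon netboot sources):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        /* Placeholder group/port; the actual netboot protocol uses its own
           well-known link-local multicast group and port. */
        const char *group = "ff02::abcd";
        const int port = 33330;

        int fd = socket(AF_INET6, SOCK_DGRAM, 0);  /* no root required */

        struct sockaddr_in6 addr = { .sin6_family = AF_INET6, .sin6_port = htons(port),
                                     .sin6_addr = in6addr_any };
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));

        struct ipv6_mreq mreq = { .ipv6mr_interface = 0 };  /* 0 lets the kernel pick;
                                                               a real tool would name the link */
        inet_pton(AF_INET6, group, &mreq.ipv6mr_multiaddr);
        setsockopt(fd, IPPROTO_IPV6, IPV6_JOIN_GROUP, &mreq, sizeof(mreq));

        char buf[1500];
        ssize_t n = recv(fd, buf, sizeof(buf), 0);  /* wait for one advertisement */
        printf("received %zd bytes from the local link\n", n);
        close(fd);
        return 0;
    }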


Using GigaBoot20x6 is intended to support development / debugging.

This whole project is still in a very early stage, but it seems unreasonable to assume GigaBoot20x6 will be the preferred / only bootloader when this is ready for production.



Wouldn't be the first hacky thing done to support development temporarily. At one point during Fuchsia development (might still be there for all I know), it took unauthenticated keyboard input over the network connection, or something like that, in case they broke the keyboard driver.


Verified boot support will arrive prior to 1.0, and as an earlier reply speculated, leaving the ipv6 boot path enabled at all is a development feature.


Somebody will build some dumb IoT device with that feature enabled.



