Google’s not-so-secret new OS (techspecs.blog)
634 points by techenthusiast on Feb 15, 2017 | 538 comments



It was unfortunately obvious that the writer had insufficient tech chops when he used the phrase

"a post-API programming model"

But pressing on, he somehow manages to blame the lack of updates to Android phones on the modularity of the Linux kernel. The joke of course being that Linux is monolithic and Google's new OS is a microkernel, ergo more modular.

The quote is "...however. I also have to imagine the Android update problem (a symptom of Linux’s modularity) will at last be solved by Andromeda"

It's hilarious that he can, defying all sanity, somehow ascribe Android's update issue to an imagined defect in Linux. Android phones don't get updated because ensuring their pile of hacks works with a newer version of Android would represent a non-trivial amount of work for the OEM, who already has your money. The only way they can get more of your money is to sell you a new phone which they hope to do between 1-2 years from now.

In short, offering an update for your current hardware would simultaneously annoy some users who fear change, add little for those who plan to upgrade to a new model anyway, decrease the chance that a minority would upgrade, and cost them money to implement.

It's not merely not a flaw in the underlying Linux kernel; it's not a technical issue at all.


There's one design decision in Linux that makes this slightly harder than it needs to be in this situation: Linux's lack of a driver ABI.

At the moment, phones include all sorts of custom drivers for very specific versions of the hardware. The OEMs ought to send these upstream, but don't want to. You can't build your own kernel and upgrade without breaking all the binary-only drivers.

Android falls between two stools. Google owns the userland, but the OEMs are responsible for updates, and the SoC manufacturers (mostly Qualcomm and MediaTek) are responsible for the closed-source drivers. Arguably the cleanest and least achievable way out of this is trying to have an OS-only phone.


"Linux's lack of a driver ABI."

Red Hat Enterprise Linux actually solved this with their kABI. This allows vendors to ship binary driver RPMs for kernels within the same RHEL major version (e.g., RHEL 4, RHEL 5, etc.). However, this entails a major effort on Red Hat's part, as it forces them to backport improvements from upstream slowly and carefully, so as to avoid breaking binary compatibility.

The RHEL kABI model was quite nice to deal with as a 3rd-party NIC vendor (and you had to do it, even if you upstreamed your drivers, as they were frequently not backported to RHEL/CentOS at a rapid clip, so RHEL customers would be using ancient buggy drivers unless you put together a driver RPM for them).

However, the source changes to support all the backports were something else entirely (which you needed if you wanted support for newly backported features). For a 10GbE driver that was roughly 2000 lines of C, I had a roughly 1000-line hand-made configure script, roughly half of which was checks made necessary by RHEL's backports. Checks that could have been a simple test of the Linux kernel version were complex spaghetti, trying to detect how many arguments some function took. This is because EVERY kernel was 2.6.18 for RHEL 5, even if it had backports from 10 or more versions higher.
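To make that concrete: the usual trick is to compile a tiny throwaway C file against the target kernel's headers, key a HAVE_* macro off whether it builds, and hide the difference behind a compat shim. A minimal sketch - the function name, argument counts and macro here are all hypothetical, purely to illustrate the shape of those checks:

    /* conftest.c - compiled (never run) against the target kernel's headers.
     * If this builds, the configure script defines HAVE_FOUR_ARG_FOO_XMIT;
     * if it doesn't (wrong arity, or the symbol isn't there), it doesn't.
     * foo_xmit() is a made-up name standing in for any backported API. */
    #include <linux/netdevice.h>

    static int (*probe_fn)(struct sk_buff *, struct net_device *, int, void *) = foo_xmit;

    /* compat.h - the shim the driver source actually uses, so the ~2000 lines
     * of real driver code never need to care which variant this kernel ships. */
    #ifdef HAVE_FOUR_ARG_FOO_XMIT
    #define drv_xmit(skb, dev) foo_xmit((skb), (dev), 0, NULL)
    #else
    #define drv_xmit(skb, dev) foo_xmit((skb), (dev))
    #endif

Multiply that by every backported interface, times every distro doing its own backports, and you end up with the thousand-line configure script.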

I'm guessing that Google felt that it was better to invest in building an OS they could control from the ground up, rather than hiring a building full of people to backport upstream patches in a binary compatible way.


So, taking a tangent here... If you fork (and that includes backporting security fixes that upstream won't take), you need a new version.

Obviously, you can't just take .28, apply a fix, and call it .29... but you have to do something, ideally something that indicates binary ABI compatibility with upstream .28 (so software vendors can just decide: you're .28, you get .28 features).

Maybe something like SemVer needs a model for versioning of forks? (I'm not a SemVer fan or hater; it's just the best-known effort in this area.)


It has been several years, but I'm sure there was a version. However, part of the problem was that it wasn't just RHEL that backported stuff. Other distros (like SLES) did similar things. In the end, it was just easier to do a configure-ish script to see if a function foo() took 3 or 4 arguments than it was to track N different version numbers.

And you're right, if you always build on the base version, then you have greater compat. However, you also lose out on the backported features, many of which were important for performance.


> There's one design decision in Linux that makes this slightly harder than it needs to be in this situation: Linux's lack of a driver ABI.

This is just an excuse. The reality is the manufacturers still have the mentality that once something is sold their responsibility ends. We see the exact same results on certain OSes that do have stable ABIs. You've probably made a transaction recently on a device that uses an old and unpatched version of Windows CE.


> You've probably made a transaction recently on a device that uses an old and unpatched version of Windows CE.

I'm still shipping software for WinCE 4.2 devices. They're supposed to be on their own little LAN segment, not bridged to the internet, with a border PC managing them. The upgrade path would almost certainly be both an expensive nightmare and possibly infeasible - they have exactly enough flash for the OS they shipped with.

It's really a cost-benefit tradeoff; all these software updates cost money. Who pays for that, under what circumstances, and why?


> The reality is the manufacturers still have the mentality that once something is sold their responsibility ends.

Not my experience. I have three hardware devices that worked on Windows, Linux and FreeBSD 5-10 years ago, and now only work on Windows and FreeBSD. (Amusingly enough, one of them is a Windows CE device.)


Isn't that another example of manufacturers not supporting their hardware? Are the windows/FreeBSD drivers still maintained or do they just still work?


I think I quoted the wrong line. They're not maintained, but they still work because Windows and FreeBSD have stable driver ABIs.


WebGL and WebUSB make that a big security risk.


So if we had a stable ABI, the manufacturer writes a driver, ships it, and forgets about it, and the driver just continues working when I update the OS.

Thus, with what you describe, the argument can be made that the Linux driver model doesn't appropriately take into consideration the incentives and needs of hardware manufacturers.


Until there is a security flaw found in the driver. Then what do you do? The Linux driver model is much better for users and security.


>The OEMs ought to send these upstream, but don't want to.

Isn't this a rampant GPL violation? Why do we put up with this?

We are about to lose the war for general purpose computing due to insufficient GPL enforcement.


  Isn't this a rampant GPL violation?
Eh, it could just be that there isn't sufficient overlap in the incentives of the vendors and upstream.

For example, an OEM can bash out low quality code which is hard to maintain, secure in the knowledge that they'll only have to maintain it for the year or two that phone is getting updates, and it won't have to survive any big kernel updates or be compatible with anything that isn't on the phone.

Upstream, on the other hand, wants to maintain device support forever so they want good quality code that they can get through kernel updates with a minimum of fuss, and they can't accept anything that breaks existing code.

So upstream won't accept the OEM's shitty patch, and the OEM won't pay for the engineering time to make patches that upstream will accept.


In a lot of cases, the OEMs publish their kernel source, with their drivers and changes, so it's not a GPL violation. But it takes a tremendous amount of work to get that kind of thing into the mainline kernel, not least because of the atrocious quality of a lot of OEM kernel code. And the incentives are absolutely not there to get them to do it.

(Of course, some OEMs absolutely do violate the GPL, and then there's also the issue of binary drivers and blobs that violate the spirit but not the letter.)


Dealing with the upstream Linux community is a painful, time-consuming process. A lot of hardware vendors would rather just not bother, and release their source code to maintain GPL compliance, but never attempt to upstream anything because the process is so time-consuming, with little to no immediately apparent upside. This is especially true if they are a parts supplier with a handful of large customers who don't care about upstream support.


No, it's not a "rampant" violation. Some people believe it is a violation and some don't, Linus Torvalds being one who thinks it's fine. See e.g. https://lkml.org/lkml/2006/12/14/218


From the link:

> But if the module was written for other systems, and just ported to Linux, and not using our code, then it's very much debatable whether it's actually a "derived work". Interfaces don't make "derived works" per se.

Interesting; I wonder where this puts Android's Dalvik VM (is the VM called that too? I don't remember). Edit: Not sure how the JVM that Dalvik is based on is licensed, though.


The Software Freedom Conservancy has been saying this for a long time; most of the kernel devs, the Linux Foundation, etc. all seem to want to treat the Linux kernel more like BSD than GPL.

They do not want to piss off their corporate masters...


> Why do we put up with this?

Who is "we"? As in, who will pay the lawyers?


Just for correctness' sake, the quote from the article:

"a post-API programming model"

looks like it came directly from the source repo.

https://fuchsia.googlesource.com/modular/+/master

/----------------- Modular

Modular is the application platform of Fuchsia.

It provides a post-API programming model that allows applications to cooperate in a shared context without the need to call each other's APIs directly.

/-----------------


> The only way they can get more of your money is to sell you a new phone which they hope to do between 1-2 years from now.

The thing is, this only works in countries similar to the US, where most people are on contracts.

In the rest of the world, where people are on pre-paid, we use our phones until they either die or get stolen, which is way more than just 1-2 years.


Bump it to 3-4 years then. Bloated manufacturer updates combine with bloat in most popular applications and the regular web bloat to make the phone unusably slow after a few years.

Between that and the fragility of smartphones (mechanical damage, water damage), most people are bound to replace theirs every few years anyway.


> Bump it to 3-4 years then. Bloated manufacturer updates combine with bloat in most popular applications and the regular web bloat to make the phone unusably slow after a few years.

My S3 is 4 years old now, and it works perfectly fine.

When it dies, I will most likely adopt one of my Lumia devices as my main one, or buy a second-hand Android device, instead of giving money to support bad OEMs.


My Nokia N900 is 8 years old and still working well. Since a lot of it is free software, there are still some bugfixes being made. Workarounds for quirks are well known and tested. A community is still there.

I just fitted it with a new battery, very easily, for €7, for the years to come.


My SO's S3 is more or less the same age and it's so slow that it's barely usable now. I still can't track down why - she is not a power user, and she wasn't installing apps beyond the few things I installed for her and the OS updates. My old S4, currently used by my brother, suffered the same fate, being slow even after a factory reset. I wonder where this comes from?


I'm surprised that the factory reset doesn't resolve the issue, it always has for me, turning my slow old phones back into shiny new ones again.

However do be aware that your perception of what is slow changes as you use newer devices which perform better, making older ones feel slower than they did before - much like a shiny new work PC makes your home PC feel slow (or vice-versa) - so that may partly explain it.

Finally, I've just breathed new life into my ageing Nexus 7 (2013) tablet by installing the custom ROM "AICP" on it, which has made a huge difference to its performance & battery life. I recommend giving it a whirl on your SO's S3 - download from http://dwnld.aicp-rom.com/?device=i9300 (though you'll need to read up elsewhere about installing custom ROMs if you've never done it before)


My S3 is also about the same age and is running mostly fine, except for two things: 1- the USB port is damaged and the phone sometimes doesn't charge or charges very slowly; I know it's probably a soldering problem, but I can't be bothered to fix it, and 2- the Facebook app frequently crashes and reboots the phone. I suspect this has more to do with Facebook being awful on Android, since I've seen it happening in other phones as well.

Point is, I won't change my S3 until it crumbles to dust, if I can help it. I don't understand the obsession with buying the latest mobile every 1-2 years.


Personal experience suggests that Android (or perhaps Linux?) slows down as the internal storage fills up.

Also, the GC on older Android devices is a mess. It does not take many apps hanging out in the background before things slow to a crawl.


I had some performance issues, but they got sorted out when I swapped the battery for a new one.


A battery? How would that help? Not doubting you, just honest question. I feel like I'm missing something very important about how smartphones work.


Mmm... it also happened to me with my (now defunct) Galaxy S1. I can't explain it, but at some point its battery developed the "bloated, about to explode" look and the phone worked but it was very slow and crashed frequently. I changed the battery and everything was ok. Later it died of unrelated causes.


I guess the processor wasn't being fed properly with electricity, meaning it wasn't executing at the speed it was supposed to, or there were other side effects.


It's Google services. It's a huge library that's only growing larger.


> My S3 is 4 years old now, and it works perfectly fine.

Is it still getting security patches? If not then it's not running perfectly fine.


No and I don't care, because even Google doesn't care about us.

2 years of updates + 1 of security patches, for mobiles that cost more than a computer, is not something I am willing to pay a premium for.


So if your phone was part of a botnet, wasting your battery and data cap, would you care?


You should install a LineageOS nightly to get the most up-to-date security: https://download.lineageos.org/i9300


Thanks, but I am against rooting devices; I never saw the value in risking bricking my expensive devices.


It might be running perfectly fine for his use case, you never know. The benefits are almost always weighed against the financial cost, and they vary for each of us.


Sorry but you are nowhere near the mean. We can't use your experience as a stand-in for the mean.


Sure, but we cannot use your words either.

Do you have a study available that I can refer to?


Same here. My iPhone 5s is simply not giving me any reason to update. My guess is it will work fine till next year's model.


> Bump it to 3-4 years then

My smartphone's getting on for 9 years old; I don't think I'm a key demographic ;)


But you may be the leading edge of what could easily become a "key demographic": people who don't upgrade for many years. It's possible that "things I could do with a new phone that I can't with my existing phone" will be worth less than the cost of a new phone for an increasing number of people. Or not. I don't think anyone knows for sure how that will evolve over the next decade or so.


Given the shift toward streaming services and away from relying on phone-local storage, do you think it's possible that phones could move from an ownership model to a rental model?

If so, then there might be greater pressure from phone rental agencies on the manufacturers to stabilise and fix bugs so that they can extract maximum value from the hardware.


Replace 2 years with 7 years and the logic doesn't change.


Not really. There's a considerable amount of people that regularly change their phone every 1 or 2 years because they like to have the latest model even if they don't really need it.


Is there any research on how many do that across Europe, including new devices versus 2nd hand ones?


My anecdotal evidence from even poor European countries suggests phones are renewed quite often (~2 years). The horrible manufacturing quality of most modern devices ensures the market for second-hand devices stays small, because these things break easily.

To be honest, subjected to a "mildly aggressive & negligent" usage pattern, like mine, even an iPhone will barely last more than 3 years, and it will be in "far from good" condition after 1 year of usage!

Modern smartphones are simply not built to last unless you take exceptionally good care of them. Or maybe it's just that I and the people I know tend to be "exceedingly violent" with our smartphones, dunno...


Personally, 5 of the 6 phones I've used in the last 14 years were still in great condition after the ~2 year contract period that I used them for (plus the one I'm currently using, but I don't get credit for that yet as it's only 4 months old). The one exception got dropped on a bus, and still works fine, but has a cracked but still functional screen.

I think most people are more like me, but there are indeed a non-negligible segment of the population like you (and my sister) who just destroy their phones. Based on everything I've seen, it's definitely more of a you thing than a modern phone thing.


I heard about that on TV some years ago but I don't remember the source. I suppose you can google for it and find the relevant information. I'm not sure if that research included second-hand devices, but I'm pretty sure detailed information on this topic exists. Edit: typo


> until they either die or get stolen

...or the screen cracks or the phone gets soaked in water (but these problems will diminish when IP68 and Gorilla Glass 5 become more mainstream).


They say that about every new Gorilla Glass: it'll always be super resistant to anything. Yet the newest iPhones still scratch if you just have them in the same pocket as your keys for more than a day (personal experience). And it's still very easy to destroy the screen if the phone drops once (even though less of the screen will crack nowadays).

We are very far away from phones that you could use without screen protection and not see damage after 1-2 years. For a device that is supposed to be carried around all the time, modern phones show little resistance against scratches. But maybe that's just not possible to achieve.


Scratches are one thing; you can work around them with disposable screen protectors. But we're talking about a device that's constantly carried around and used (I probably have my phone on me more often than my wallet). Gorilla Glass or not, it falls from a meter onto a hard surface, and you have a screen to replace. You accidentally sit on it, and you may have a screen to replace. And given the prices of replacement parts, it often makes sense to just live with the crack[0] until you get a new phone, which is probably why I see so many people with spiderwebs on their phones every day.

--

[0] - Unless you want to take a risk and get someone to replace just the broken glass for you, a process which involves some manual fumbling with heat guns and UV-hardened glue. Or unless you live in Shenzhen, where they'll replace the whole phone front (electronics and all) for cheap, if you give them your old phone front (which they presumably fix up later and resell to the next person).


> Gorilla Glass or not, it falls from a meter onto a hard surface, and you have a screen to replace.

I dropped my Xperia Z1 Compact a bunch of times from that height, the screen is still fine...

> You accidentally sit on it, and you may have a screen to replace.

Solution: Don't ever put your phone in your back pocket.


That's an interesting insight into what happens in Shenzhen. If the cost is 2x or more lower, I wonder why that is not a thriving business, even across countries.


Yeah. I fixed my S4 in Shenzhen this way about a year ago. The official way to repair broken glass is to replace the whole screen, which then went for around $170. A repair shop once asked me for somewhere around $60 for just replacing the glass manually (heat & UV glue method, glass cost included). In Shenzhen, the girl replaced the whole phone front with all electronics and replaced my back camera for a total of ~$20, in exchange for my broken phone front.

I used the occasion to ask for a front for an S3. She sold me one for somewhere around $50 - because I didn't have the broken one for exchange - which I later used to fix my SO's phone myself. It's not that hard after you've seen how the Chinese do it, though a little stressful - I had to use a needle to punch through a speaker channel, which for some reason wasn't hollowed out properly :).

I guess the reason it works in Shenzhen is because that's the place where the phones are made and recycled - they have tons and tons of parts for every model imaginable, both from factories and from broken phones. Given how many different phone models are out there, I doubt any city except a major metropolis could sustain this type of market.


Also it only works for Samsung and Apple. As they are the only ones who actually profit from phones.


> It was unfortunately obvious that the writer had insufficient tech chops when he used the phrase

That's a direct quote from the linked page. Unless you are suggesting that the authors of Fuchsia have "insufficient tech chops".


They can be caught up in the same functional stupidity as everyone else at a large corporation.

If someone important enough says "decouple" and "monoliths are bad" often enough, those arguments will be used to support more or less all changes, as they already have leverage in the organisation.

The actual reasons could be something else, possibly as simple as they would like to control the OS, and possibly also that it's cool to write OSes.


>The only way they can get more of your money is to sell you a new phone which they hope to do between 1-2 years from now.

Right, but they will never get my money if they don't deliver timely OS updates for the old model.

You're right of course that this issue has absolutely zero to do with Linux.


> The only way they can get more of your money is to sell you a new phone which they hope to do between 1-2 years from now.

Which is why I would absolutely love to see a hardware vendor adopt paid updates, with a mission statement along these lines:

"we cannot believably promise a reliable progression of software updates, because too many before is have tried and failed. What we can do, however, is rewrite the rules so that we will have a much stronger incentive to follow through than or predecessors".


Could that backfire? There are cases where humans judge taking no action (because it avoids costs) as inherently better than taking action to bring in profit, even when others are in general better off. I could see a lot of people getting mad at being charged for updates who wouldn't get mad when there are no updates at all, because the company doesn't want to lose money making free ones. It's almost like a section of society judges not helping as better than helping for some price, even though anyone who doesn't want to pay is no worse off than if they were never given the option. I see this trend outside of business as well.


It would certainly not be good business for a consumerism-centered brand like Samsung and it would even be suicidal for Apple to suddenly start charging for updates. But for an Android brand built around sustainability, like e.g. Fairphone, it could be a considerable credibility gain if they replaced a promise of future gifts with a proposal for future business.

No matter how updates happen, if they do, the effort will be paid for with money coming from the customer, one way or the other. Any payment scheme other than pay-on-delivery requires trust, and that specific form of trust evaporated a long time ago.


Funny, today my co-workers and I were talking about the new Samsung ads... "Samsung A5 - a new Samsung every year", "Samsung A5 - more than you need". Ridiculous...


> It's not merely not a flaw in the underlying Linux kernel; it's not a technical issue at all.

I believe he's talking about having a driver API. Fuchsia also runs its drivers in userspace, which is the proper design in a post-Liedtke world. (principle of minimality)


Even when talking about driver updates, the argument is nonsensical. Android updates already require a full reboot of the device at the end of the update process, so new drivers can already be (and are) loaded into the kernel.

The update problem on Android is indeed a policy issue, not a technical one.


Based on what I've seen elsewhere in these threads, the idea is that a stable driver ABI means that manufacturers/chipmakers need not rewrite drivers just because there is a kernel update, removing that as a blocker to doing said kernel update. The willingness of Qualcomm to do the driver updates for a given chip to work with the latest kernel in Android has been a big determiner of which phones have any chance of getting the latest Android, for example.
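For anyone wondering what a "stable driver ABI" looks like in practice: the usual shape (in Windows-style and microkernel designs) is a frozen table of entry points plus an ABI version the OS checks at load time. This is purely an illustrative sketch with hypothetical names, not Fuchsia's or anyone's actual interface - Linux deliberately offers no such contract:

    /* Illustrative only - hypothetical names, not a real kernel interface.
     * The OS promises never to reorder or remove fields within a major ABI
     * version, so a binary driver built against 1.0 keeps loading on every
     * later 1.x kernel; new entry points may only be appended at the end. */
    #include <stdint.h>

    struct device;                            /* opaque handle owned by the OS */

    #define DRIVER_ABI_VERSION 0x00010000u    /* major 1, minor 0 */

    struct driver_ops {
        uint32_t abi_version;                 /* loader rejects a major mismatch */
        int  (*probe)(struct device *dev);    /* hardware appeared               */
        void (*remove)(struct device *dev);   /* hardware went away              */
        int  (*suspend)(struct device *dev);
        int  (*resume)(struct device *dev);
    };

The flip side, as others in this thread point out, is that a frozen ABI also freezes whatever bugs ship in the binary blob: the vendor can ship and forget, and nobody else can fix the driver.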


> Android phones don't get updated because ensuring their pile of hacks works with a newer version of Android would represent a non-trivial amount of work for the OEM, who already has your money. The only way they can get more of your money is to sell you a new phone which they hope to do between 1-2 years from now.

The fact that you need the Android device OEM's support is precisely the problem here, and that's mostly (mostly - admittedly not entirely) to blame on the way that manufacturers need to update their Linux kernels. Linux's monolithic nature makes this a major pain, especially since component manufacturers (for the SoC, touchscreen, fingerprint reader, camera sensor) implicitly require maintaining some non-mainline Linux. The manufacturer now serves as the central hub coordinating this 'mess' to update their devices. Mostly (again, not entirely) due to Linux, Android updates depend on manufacturers compiling update packages which are essentially full system upgrades. Each time they make an upgrade for a phone, they need to build an entire new device image.

The manufacturers have clearly shown themselves to be incapable of handling this responsibility, both in terms of their abilities and their motivations. A more modular OS would go a long way in taking away these responsibilities, since it provides the technical means by which responsibilities can be given to a better-motivated third party. Clearly we should be looking towards a model more akin to Microsoft Windows, where there's a more stable driver API/ABI that allows the OS's original manufacturer to update critical components and introduce new features without depending on an incapable or unwilling manufacturer. Make system updates not full system upgrades, but make the system able to update some components with reasonable confidence that things will keep on working. With such a system Google or the "Open Handset Alliance" - or, depending on the openness of this new Andromium, others like the LineageOS community - can take care of updating phones instead of the device OEM.

In such a scenario you might still be stuck on graphics drivers from several years ago with serious rendering bugs, but at least security issues like Stagefright [1] and Dirty COW [2] can be effectively dealt with in a matter of days. That's a huge improvement over the current situation, where the majority of Android devices are still affected by both issues, years or months after their publication.

Imagine how unlucky we would be if we were fully dependent on Asus or LG for software support on our laptops. But that's exactly the situation on our phones.

[1]: https://en.wikipedia.org/wiki/Stagefright_(bug)
[2]: https://en.wikipedia.org/wiki/Dirty_COW


I absolutely agree. I installed Windows 10 on an old Core Duo clone I bought off Kijiji. Everything worked flawlessly, including its old Nvidia video card. It received the latest updates and security patches directly from Microsoft.

This is what we need with phones.


> would represent a non-trivial amount of work for the OEM, who already has your money

This is it; unless crucial apps (or some other necessity) only work with a newer Android version (which could push OEMs to update), they never will. Much better for OEMs to hope that you'll "update Android" through the purchase of a new phone.


The carriers bear some blame as well. Apple negotiated for the right to upgrade the OS without carrier review, but most Android device OEMs don't. So the carriers drag their feet as well, hoping you'll buy that new device from them and be locked in for another 2 years.


And even if they upstream their changes, whatever versions of Android they are trying to get in at launch would still need the changes backported to whatever kernels those run. Plus you're not guaranteed that Android will move to whatever kernel version the upstream changes landed in.


Not only the OEM hacks but also supporting various hardware


Both of you are half-right. It's not an underlying technical Linux issue, but it does come from Linux's "everyone can have one" ecosystem model, whereas on Windows and macOS there is only one master, so only one master can be responsible for the updates.

So either Google fully controls Andromeda, as it does with Chrome OS, so it can update it (Google could probably still release an open source "Andromedium" version later on, like it does with Chromium OS), or it somehow forces all OEMs to update on time right from the beginning.

But the latter sounds like a real pain from Google's perspective, as it would probably have to one day completely retract Samsung's license, for instance, or sue it, if Samsung doesn't comply, and this could create all sorts of PR problems for Google. Or Google would have to compromise and allow the OEMs to delay updates from, say, 1 month to 3 months or more. And then we'd be right back to square one.

I don't think there's a real practical solution to the update problem other than Google fully controlling the codebase.

> The joke of course being that Linux is monolithic and Google's new OS is a microkernel, ergo more modular.

If you're suggesting that because the microkernel is modular, "it should solve the update problem", I disagree. I don't think it would be much better than what we have now. Sure, it may be easier for Google, or say Samsung, to update their modules faster. But what about the other modules in the market? Will they be updated just as fast by no-name OEMs? Probably not.


It bothers me that Google does not seem particularly interested in doing the one thing that would make their Android platform absolutely dominant: Allow Chrome to run Android apps on Mac and Windows.

Google has already done 90% of the necessary work by adding Android apps to ChromeOS. Two and a half years ago it created "App Runtime for Chrome" which demonstrated that Android apps could run on Windows and Mac in a limited, buggy way [1]. If Google had put meaningful effort into developing such a strategy we would by now have a relatively simple way to develop software which runs on 99% of laptops and 85% of smartphones and tablets. Developers would now be targeting 'Android first' instead of 'web app first then iOS then maybe Android'.

Sundar, if you're reading this - do it!

[1] https://arstechnica.com/gadgets/2014/09/hack-runs-android-ap...


Sun tried that, back in the day. Maybe you heard about Java applets, maybe you didn't. They were the slowest thing about the web, insecure even with a sandbox, and just an overall pain. Short of having a JVM always running on your machine, the performance of Android-via-Chrome will completely turn people off the Android ecosystem.


IMHO, the history was different. Java applets were initially secure in a sandbox and faster than what was possible with the JavaScript of that era.

Applets became slow to start many years later, when bloated "enterprise" applications were produced in abusive ways.

The security of applets started to deteriorate a bit, slightly before the death of Sun. It only became a security hell once it was in the hands of Oracle.


The startup time was horrible. The speed, once it was running, was good, but the startup time made it completely unusable on the web. The ugly default UIs in Java did not help.

I did my fair share of applets back in the day, starting with the very first public versions and have very vivid memories of the loading screen :)


You should also take into account that internet speeds of that era were not as great. Today we download multiple megabytes of JavaScript; back then, downloading the same multiple megabytes of Java was slow because of the network.

At least that was the case for me...


I always wonder -- if Sun had forced people to build their own loading screen UI per applet -- how the perception of Java (and Java Applets) would be different.


It's worth remembering that Applets were active at a time when the Internet, as a whole, was a lot slower.

Although yeah, even with our newfangled fibre connections applets are still pretty bloated.


I recall loading webpages in IE where the browser would completely stop for 30 seconds while the JVM fired up.

JavaScript at the time was mainly used for snowflakes or annoying mouse trails on the web page, as I recall. Oh, and maybe rollovers. I remember Microsoft pushing DHTML and seeing IE4 as a leap forward compared to IE3, but I do recall Java being slow. And slow.


I remember two issues with Java Applets:

* Java itself was slow for a long time

* The Browser would hang while loading an Applet

The first is no longer an issue. They can just use a modern just-in-time compiler and it won't run slower than Java on Android. Chrome already has one to deal with JavaScript-powered Web 2.0 applications.

The second was, as far as I can tell, an API issue. Applets would block everything by default until they were loaded. A really bad idea in a single-threaded environment when you had to send several MB over low bandwidth and the JVM itself took a long time to start. Just making the load async with a completion callback could have solved this issue, and I remember a few applets that actually used an async download to reduce the hang.


You missed the biggest issue: "write once, mediocre everywhere." Windows, Mac, and X were all different, and Java Applets were necessarily bad at emulating all of them. While there are fewer Unices today, there are more GUIs, and cross-platform apps suck at least as much.


"Write once, mediocre everywhere" was a problem with Sun's implementation, not with the concept of cross platform code. There are tons of webapps which are very successful, despite being written 'once'.

In any case, Google doesn't need to be as strict as Sun was. It is free to implement "write 90% of your code once and 10% customised for each platform".


> There are tons of webapps which are very successful, despite being written 'once'.

Actually they suffer from most of the same problems, only computers have gotten faster (masking performance issues) and our expectations have lowered. How many of these web apps obey the native OS theming, for instance?


1) The fact that webapps can run relatively well strikes me as hopeful, considering how much more inefficient using HTML/CSS/JS is compared to Java applets. Or is the latter not the case (honest question)? 2) I'm not sure if our expectations have lowered much. Perhaps it's more that mobile interfaces are generally simpler and thus easier to make 'native enough'?

Although I think there's more going on in regards to 2. I was never bothered so much by the UI of a java applet looking different. What bothered me was that even very fundamental stuff like input fields and scrolling felt both alien and shittier than native. And while it's certainly possible to make a web app just as shitty, if you rely on 'stock' html elements, a lot of the subtle native behavior carries over.

Just a few weeks ago, for example, I built a web-app for mobile devices. It felt off immediately because the scrolling didn't feel right. All I had to do was turn on the momentum scrolling (with a line of ios-specific css), and the scrolling suddenly felt native. Had I used a hypothetical Java applet equivalent, I might've had to either go for a non-native-feeling scroll or build it myself.

While I of course can't prove any of this, I think what people care about is that things feel native, not the 'skin' used to display it.


> 2) I'm not sure if our expectations have lowered much. Perhaps it's more that mobile interfaces are generally simpler and thus easier to make 'native enough'?

I think people finally called the bluff that users have any expectations. And even if they had, what they don't have is choice. The current market is that everyone is building a walled garden around their selling proposition, so if a company decides to make a web app instead of a native one, then that's all you have. Nobody will make a better one and risk getting sued. If a service doesn't want third party applications, then they won't happen.

As you note in your comment, if one sticks with default, "stock" elements in their web app, things look and behave OK on a given platform. But nobody does that, for some reason everyone has to screw this up with tons of CSS and JavaScript that make the whole thing maybe prettier, but also noticeably slower and without all the native idiosyncrasies.


> considering how much more inefficient using HTML/CSS/JS is compared to Java applets. Or is the latter not the case (honest question)?

It's a really interesting question actually, because it's so hard to compare the two. On any objective measure, today's web apps are much better than applets in terms of responsiveness, etc. But then again, an applet could run on machines with 16MB of RAM total. I think you'd be hard pressed to get a plain HTML page in a modern browser to run on a machine like that. Either way, in both cases we had a much better solution in native apps.

> 2. I was never bothered so much by the UI of a java applet looking different. What bothered me was that even very fundamental stuff like input fields and scrolling felt both alien and shittier than native.

Modern web apps can score better here, but quite often they don't. The more complex they become, the less native they get; scrolling, text input, etc. are generally OK (unless you're an arsehole that overrides scroll behaviour), but HTML still doesn't have an equivalent of native table views and the goodies (navigation, resizing, performance) that come with them.

For me the skinning does matter though, I have a beautiful, consistent desktop that browsers (not even Electron apps) shit all over. When something doesn't look quite right from the second you open it, it magnifies all the other differences.


> Modern web apps can score better here, but quite often they don't. The more complex they become, the less native they get; scrolling, text input, etc. are generally OK (unless you're an arsehole that overrides scroll behaviour), but HTML still doesn't have an equivalent of native table views and the goodies (navigation, resizing, performance) that come with them.

Oh yeah, complex UI stuff is definitely a good reason to avoid web apps.

But for many, probably even most apps it's precisely scrolling, text input, and other 'basic' stuff that matters, and in those cases a web app's 'default' will be more native.

> For me the skinning does matter though, I have a beautiful, consistent desktop that browsers (not even Electron apps) shit all over. When something doesn't look quite right from the second you open it, it magnifies all the other differences.

I agree on a personal level, but I suspect we're outliers. Can't substantiate that at the moment though, so I might be wrong.


> How many of these web apps obey the native OS theming, for instance?

Forget the theme - how many of these web apps obey the native OS GUI features? TAB-navigation, arrow navigation (in e.g. lists), accelerator shortcuts, editing shortcuts, not to mention a lot of visual idiosyncrasies that together make the interface feel "right"? Ironically, if you use default HTML controls, most of the things will be OK on a decent browser. But no, designers and developers absolutely have to make it worse by applying tons of CSS and JavaScript.

This applies to web apps pretending to be mobile apps, too. You can quickly tell one from another; the web app is the one with mediocre UI that behaves "wrong" in more or less subtle ways.


> Actually they suffer from most of the same problems, only computers have gotten faster (masking performance issues)

If an issue no longer affects anyone in any way, is it still an "issue"? Odds are that all the code you've ever written would have been considered criminally bloated at some era of computing history, but it hardly matters now.


> If an issue no longer affects anyone in any way, is it still an "issue"?

I said it was masked, not gone. It still causes a lot of issues for people on resource constrained machines.

> Odds are that all the code you've ever written would have been considered criminally bloated at some era of computing history, but it hardly matters now.

For much of computing history we were making clear gains with newer hardware. Up to the '90s software was getting more bloated, but it was doing more. Most apps today really aren't doing much (if any) more than we were doing in the '90s, but require vastly more powerful machines.


The problem was that the UI of Java apps was really ugly and blurry. They were painful to use.

Modern web apps are non standard but pretty.


The problem is not technical and never was, it's that Apple and Microsoft would do anything to maintain an 'application barrier' that makes genuine cross-platform coding as hard and inconvenient as possible. It's all about developer lock-in and control over the customer, even more today than it used to be in the 90s.


How would you characterize Microsoft's open-sourcing of the .NET stack, support to run it on Windows and RHEL (actually, support to run RHEL on Azure), the VS Code universal Electron app, the whole Office 365 paradigm, and multiple apps like Remote?

It seems to me that the 'new' Microsoft (since Satya Nadella took leadership) is changing their closed/proprietary stance on many topics. Not everything of course, they still have to sell stuff, but as far as "genuine cross-platform" is concerned, they are certainly giving developers all the tools to both make and target all major operating systems.


> How would you characterize Microsoft's open-sourcing of the .NET stack

Giving anyone not on Windows a second-rate experience? Core, as the name says, provides only a subset of the Windows .NET Framework, and most .NET code in the wild is written with the implicit assumption that it runs on the Windows framework.

> the VS Code universal Electron app

Instead of making their main IDE a proof of concept for .NET Core, they wrote a Web 3.50/Node.js IDE. I am very sensitive to high-latency IDEs, so that is something I won't ever touch.

> the whole Office 365 paradigm

Trying to keep up with the competition; does Google Docs ring a bell?

> since Satya Nadella took leadership

Nadella: 2014. Linux on Azure: 2012. Office 365: 2011. Mono, based on Microsoft's promise not to sue: 2004. Open-sourcing parts of .NET is really the only thing you can assign to Nadella; everything else was still done by the good old triple-E leadership.


Webapps in the sense of "fancy JavaScript" are no better than Applets. Google has the infrastructure, money, and business model to put most of the code on their servers and write native clients. Modulo privacy issues, they have found a solution.


Oh I bet they are! Like, anything is better than those slow, buggy and annoying Java applets, and I'm so thankful to god they're over. Did anyone ever notice how slow they were, not only to 'run', but to initiate and start doing just about anything? I always knew it was a Java applet even before it started anything because of its characteristic loading behavior.


Webapps today are the same. Their characteristic behaviour is rendering the skeleton of their UI, and then greeting you with a spinner. They take about as long to start up as Java applets did.

Now, consider the orders of magnitude improvement in computing power over the years, and notice that webapps aren't really doing anything more complicated than old applets did...


Now remember that they ran on machines with 16MB of RAM. Could this page even render on a machine like that?


Not sure about that. When I was at university we used these Java applets. My laptop had 4 GB of RAM at the time, and they were still very bad.


I feel the same way when a browser Window freezes, full of JavaScript assets bigger than Doom.


There is one BIG difference. The Android frameworks are already the default on a major platform. Google's Material Design already looks native on both Android (the majority phone platform) and ChromeOS. Java did not have a major platform where it was the default framework. It was not the default on Windows or Mac, not even on Unices (with the exception maybe of SunOS or Solaris, but not even there). If the Java framework was not the default anywhere, then the primary reason to use Java was because it works everywhere. The primary reason to use the Android frameworks and Material Design is to publish on Android. Having it work everywhere else is a great bonus.

Android and Material Design will not be "write once, mediocre everywhere". It may become "write once, great on Android (majority of phones) and ChromeOS, mediocre elsewhere." But writing for Android does not exclude creating native versions for other platforms. Using Java did exclude creating native versions because that was the reason to use Java, to not have to write native versions.


I thought it was "write once, test everywhere" because the ideal of cross-platform software forgot about developers doing system-specific things (eg. file system paths)


I never understood the need to "emulate" all of them. Linux has always been a mess of tools written against different UI styles ( GNOME, KDE, MOTIF, ... ), Microsoft also reinvents its UI with every release leading to outdated application UIs and I don't know how consistent Apple was. I think the best you can do is choose one and stick with it, its not like most websites look and feel native.


You missed the biggest difference: Chrome is the same everywhere.


So were Java Applets with AWT. Users hated that, so Sun tried PLAF, a half-assed emulation of each platform. That didn't work either, so Applets died.

If Chrome manages to provide better versions of most applications on most platforms, it may win. Otherwise, people who use those applications will hate it with the heat of a thousand Suns, and it will go the way of the Java Applet.


> So were Java Applets with AWT. Users hated that, so Sun tried PLAF

Uh, Java AWT was the native toolkit. Swing was the non-native UI with the ugly Metal default look and feel. There are some nice custom look-and-feel implementations that don't try to emulate a platform; I think MATLAB uses one for its UI.


And the Metal PLAF was the least ugly of the three that were originally shipped with the JVM (the other two tried to match how AWT would look on Windows and Unix). But IIRC, AWT was not native (in the sense of "calls native OS GUI components") but only tried to look native and failed horribly at achieving that.

One thing that strikes me as weird is that almost any widget set that does its own drawing, or even just its own automatic layout, and tries to match the look and feel of a native UI invariably does not match even the basic look, because various UI components use the wrong size, are placed slightly differently, and so on. For example, everything I've ever seen that tried to match how Windows 3.1 Ctl3D looked draws window decorations one pixel narrower than the original, which is plainly visible and ugly. Similarly, things that attempt "looking like Motif" usually use different thicknesses for various lines and borders, and also often mix up the meaning of the focus rectangle (which should move with Tab) and the bevel around the default button (which should stay in the same place irrespective of which control has focus). I see no technical reason why either of these things cannot be done right; is there some legal reason for introducing such small differences, that are small, but big enough to be annoying?


> But IIRC, AWT was not native (in the sense of "calls native OS GUI components") but only tried to look native and failed horribly at achieving that.

A quick check of Wikipedia backs my memory: the Java classes were just a thin wrapper around the native components. AWT was mostly bad because it was limited; it does not even have a table widget.

> even the basic look because various UI components use the wrong size, are placed slightly differently and so on.

The Windows API does not come with a layout manager AFAIK. I vaguely remember setting every bit of relevant size/position data by hand the last time I used it directly. The same could be done with AWT, so this is most likely caused by programmer laziness.


The part about sizing being wrong was not so much about AWT (although IIRC it also has this problem, at least on Unix, where it looks decidedly non-Motif) as about just about any non-native toolkit that tries to look native. Windows does not have a layout manager, but it has an API for getting preferred sizes for various low-level UI parts (in Windows 3.1 a small part of this was even user-configurable).


Oddly you see non-native toolkits these days for many apps, and users don't seem as bothered anymore, unless I am mistaken?

For example, look at Windows 10 and the mishmash of controls (are they flat? or do they have a bevel?) available from the control panel, settings app, old COM dialogs, MMC etc. etc.


> and users don't seem as bothered anymore

What choice do we have? Everyone's doing their own walled garden, so it's not like I can go and find an alternative SaaS / operating system with same features but better UI...


Linux has two options that offer a lot more consistency than you'll find on windows.


> For example, look at Windows 10 and the mishmash of controls (are they flat? or do they have a bevel?) available from the control panel, settings app, old COM dialogs, MMC etc. etc.

One of the last straws for me was the built-in mail app that had this awful background image. Too many flashbacks of shitty Access apps.


Users will say the software looks "different" but what they really mean is "ugly". The UX difference is not always a problem. See: Winamp.


> I remember two issues with Java Applets

I remember issue no.3 :

United States Court of Appeals,Ninth Circuit.

SUN MICROSYSTEMS, INC., a Delaware Corporation, Plaintiff-Appellee, v. MICROSOFT CORPORATION, a Washington corporation, Defendant-Appellant.

No. 99-15046.

Decided: August 23, 1999

http://caselaw.findlaw.com/us-9th-circuit/1260682.html


Don't forget about the security issues that left machines with unpatched Java versions vulnerable to attacks.


They were also completely insecure.


So were Flash and JavaScript and I still don't fully trust either. It helps that blacklisting almost everything with NoScript actually speeds up 90% of the sites I visit.


Chrome is already hanging and crashing on Ubuntu 14.04. I imagine adding applications will make it worse.


Maybe you somehow haven't noticed that even mobile devices now are an order of magnitude faster than desktop computers were back in the days of applets. Desktop computers can run Java applications with no significant overhead.


Not totally. We support a number of java applets at work. The clients still take "way too long" to load and feel bloated. "Way too long" is a subjective measure based on the current hardware/OS. For a JVM to not feel slow it would have to speed up relative to itself...and I haven't seen them do that.


If it's specific apps, usually it is just bad coding. 'Enterprise Java' style coding where performance is not even tested for, let alone designed into the algorithms.

Usually culprits are things like downloading multiple data files in a single threaded block, or insanely deep object graphs with thousands of memory fetches per real operation.

That's not really anything to do with the technique (code in browsers) or the runtime (JVM) or even the programming language (Java) -- and everything to do with poor development.


They're not really enterprise apps. They're just betting odds displays. But I hear you on the criticism of enterprise software still. I just don't think it's the thing here.


Why would betting odds displays escape from Enterprise Java? Seems like the kind of thing that would get it in spades for regulatory and CYA reasons.


They could still be enterprise apps. From what I've seen of them, they appear to be little more than what you'd get off ESPN if ESPN were 100% betting. My point is it's not the system that they use to sync up lines and whatever... I think.


This outlook is why lots of people have to go to work and deal with absurdly slow trash software written to run on absurdly fast hardware.


No. I'm not saying terribly designed, lazily coded enterprise apps are ok, I'm saying if Java can be fast on a $200 mobile phone, it can be fast on a desktop.


> I'm saying if Java can be fast on a $200 mobile phone, it can be fast on a desktop.

It's not fast on a $200 mobile phone though. It's still pathetically slow.


Java should have focused on manipulating the DOM rather than creating its own canvas. That was not obvious at the time, though.


Java did have an API for accessing and manipulating the DOM since (IIRC) JDK 1.4 - org.w3c.dom and sub-packages. It was not at all well known or publicised, though.


Yep, this is what I was referring to. It would have been really interesting had people started doing things with it.


You mean like JavaScript nowadays uses Canvas and WebGL?


I think the key difference is that you get to that incrementally. You start by "making the monkey dance" on an otherwise-static page, and you can work your way up to rendering everything on a fully-programmatically-controlled canvas, a little bit at a time. Forcing you to do everything yourself from the start, as Tcl (remember tklets?) and Java did, makes the barrier to entry much higher.


It's fairly rare to have web UI's built with canvas and WebGL - usually because they need to do something that is impractical using the DOM. They'll still use the DOM for the more conventional parts of the UI.


Sun's execution was poor. Google's doesn't have to be.


Eric Schmidt came from Sun...


Why would that have any relevance at all?


Because he wasn't that good at Sun. (I was there.)


Somehow I doubt that he'd be involved in a hypothetical Android-on-Chrome below maybe a go/no-go decision.


Satya Nadella came from Sun as well ;-)


I agree on all counts, but I have to say that Java Web Start was cool; it was a pretty neat way of installing and running applications directly from the web. Maybe not very secure, though.

One problem I see with Google's ecosystem is that they've bet on the wrong horse - Java is a pain in the ass, and Android's Java foundations are its second-largest weakness. (The first one being the Google-vendor relationship that leaves all but the very latest Android devices unpatched and 100% insecure.)


> ... Short of having a jvm always running on your machine ...

This is basically how Dalvik/Zygote worked on actual Android. From what I know, Chrome too uses always-running background processes even if you don't have a browser window open (for maintenance work and to improve startup times).

So I'd assume the project would do exactly that.


What? The Android runtime is not a JVM; it's been AOT-compiled to native for quite some time now.


It's a mix, actually.


Phones and desktops are completely different form factors with different constraints. Running an Android app on Windows would be a horrible experience.


Now this is a reasonable objection. Small form factor touch screen interfaces just don't work the same way as mouse-based desktop interfaces with acres of space. Trying to use a phone UI on a desktop is heinous (as is pointed out above with Windows 10 as the example.)


I used to regularly remote into a windows box (GUI mode, classic desktop) from my phone. 2560x1440 pixels of 21st century glory. Believe me when I say you do not want to use a desktop UI on a phone either. No, pinch-zooming hasn't proved to be a sustainable workaround.


Just fire up windows 10 and install something from the app store for an example of why the convergence makes no sense.


I don't know. I think many "tablet" apps would work fine on the Surface 4 (if they got scaling right).

I still think touch screens are the future - also on desktops (whether in tablet form or "drafting table" form).

I think editors more like acme and less like vim might rise up. Along with new input types like the power bar and surface wheel.


> I think editors more like acme and less like vim might rise up.

Acme, and anything like it, would be completely horrendous on a touch interface. I use it regularly (although I find that I prefer Sam).

Edit: Then again, I have touchscreens on many of my laptops. I don't find them useful. With the exception of drawing art, I wouldn't miss them if they disappeared.


I'm not so sure about "anything like it". A pen for easily selecting text, a one-two-three-finger tap and/or a wheel/secondary input might work with a similar interface to ACME?


What does acme do to help on touch screens? A while ago I toyed with creating a vim keyboard for Android: instead of a virtual keyboard popping up, it showed commands/motions. I abandoned the idea, but if text editing on a touch screen ever becomes feasible then I think the virtual keyboard has to go.


> What does acme do to help on touch screens?

Unlike emacs or vim, acme leverages the mouse/GUI for powerful editing - and I believe (multi)touch screens have the potential to be better GUIs than screen+mouse. For one thing, mice generally utilise at most three fingers and one hand - and IMNSHO, while e.g. Blender/Photoshop combine mouse and keyboard, the combination is awkward and not very intuitive.

I think (but am far from certain) that acme is a more promising approach.


I'm not saying that the app would have exactly the same UI on a phone and a desktop computer - the developer is free to customise the UI for the device.


Most of them already fail to do so for tablets, why would that change?


'Most' apps do not get used even once [1] so their user experience doesn't really matter.

Most popular apps are designed for both tablet and phone. The ones that haven't specifically been designed for tablet are usually basic things like Guitar Tuner or Flashlight which don't suffer much from having a phone layout on a tablet.

Sure, you will still sometimes encounter a crappy app which has a horrible user experience on your device - similar to web pages and webapps which assume you have a 24-inch display - but not often enough for users to avoid the platform.

[1] http://www.phonearena.com/news/400000-apps-in-the-App-Store-...


At that point it's less complicated to have multiple apps than a single all form factors one.


> have a relatively simple way to develop software which runs on 99% of laptops and 85% of smartphones and tablets.

We already do. It's called a webpage.


You could argue that webpages were never meant to host the kind of rich experiences and workflows native applications are known for. We just made it that way through years and years of momentum.


Would android apps be a good fit for desktop? Most of them do not even work well on a tablet and I think they would scale very badly to even touchscreen laptops.

I doubt that most developers would make any effort to have their apps be "responsive".


Many of them would make great desktop-widget kinds of apps, such as a mini music player or a weather app.


I wish I could run the Trello app on my touchscreen laptop, if only so I could drag cards to the top to archive them (for some reason the web UI doesn't offer that). I wish I could get Google's suggested pages there. And for a lot of Amazon's apps, the Windows version feels like an out-of-date port of the Android version where I'd be better off running the Android version.


I don't think we really need another locked down OS where the vendor will control everything.

The other issue I have is that I don't see Android apps as efficient way of getting my work done, applications that don't need to worry about the mobile form factor will most of the time offer a superior user experience.

You already have a 99% cross platform way to ship an app, you can create a web app.


+100

This would have been the way to build Linux into the next great desktop platform. I don't think people mind a 1GB Chrome runtime if it opens up a billion apps for them.

I think apps can be handled well by both the browser and mobile phones. Considering that the ART runtime also JIT-compiles the Java code to native, performance should not be a worry either.


There are products that already allow the running of Android apps on Windows: http://www.greenbot.com/article/3129740/android/the-best-pro...

I've never tried it or seen anyone do it though.


Well, it's mostly emulation, with some paravirtualization on top. Not wonderful, but recently it's gotten at least bearable.


Google has Chromebooks securely locked down, down to the hardware crypto -- there may be (speculating) inherent security issues with running mobile apps in the browser when you can't lock down the external environment.


I worked with a company that was part of the beta for ARC Welder and it was a very experimental product experience. Things were hit or miss on chromebooks. Support from Google engineers was amazing though.


There was a talk at the X Developers Conference 2016 about the ARC (App Runtime for Chrome). Seems like it is still being developed: https://www.youtube.com/watch?v=4PflCyiULO4&t=2h10m50s


Android is already dominant on mobile. Desktop market is shrinking. "Running" doesn't correlate to a good experience. Google probably thinks about that stuff.


Actually the PC market has stopped shrinking and is now about steady: http://www.gartner.com/newsroom/id/3468817

'Running' can be pretty good, even when it's not a native app - there are plenty of successful web apps, including a few written by Google.


Tell me, why would I want to run an Android app on a Mac?


The fact that it is an android app doesn't really matter. Your question becomes essentially "why would I want to run any app on Mac?" which I think you can figure out the answer to.


In my opinion, Android and its accompanying apps are simply a book Google intends to shelve.

A fresh OS for devices that had Chrome as an app, with its Android hornet behind it, would make a bunch more sense.

Pushing the JVM kart is a bit like Bowser vs. Yoshi.


Does anyone think this is where they are going with Angular?


The amount of additional code required to support Android apps on the Chrome browser would be far too great. No one wants to download a 500MB+ browser.


> The amount of additional code required to support Android apps on the Chrome browser would be far too great. No one wants to download a 500MB+ browser.

So, divide up by services, and download only the services needed by installed apps with the first app that needs them. Adds basically nothing to the browser install.


My Realtek sound drivers for PC were around 450 MB (!!!). Nvidia drivers are around 350 MB and update quite frequently. That's insane. And Chrome has a great update mechanism.


Yeah, but you were also downloading the whole "Experience Apps" on top of the drivers.


Maybe I'm just jaded from downloading video games, but, at least on a landline, I don't think most people would care?


It could make sense as an optional download if you do want Android support. But, Google would never increase the download size of their browser to that extent.


For people who've already committed to Chrome, it might just be a one-time annoyance. But the time when Chrome had a seeming lock on the top seat in the browser wars has come and gone. I can definitely see this hurting new adoption, which would rightly make Google nervous.


With Chrome, people could let Chrome download it for them in the background.


You're forgetting something: This will not work for iOS. Why develop something which excludes a big chunk of the cake? I'm happy Google is not going this way... Apple is doing already enough damage with exclusive features.


> This will not work for iOS. Why develop something which excludes a big chunk of the cake?

By that logic nobody should even create native Android apps today. You're saying that if Android apps could run on even more platforms it would somehow stop being worth it to create them because iOS is excluded? Makes absolutely zero sense.


I would also go even further and say that in MOST cases it makes no sense to develop a purely native app today. From an economic standpoint, most native, single-platform apps make no sense nowadays (many people are also unwilling to install more apps) unless you have deep enough pockets. I'm happy that Chrome embraces the web and pushes web applications even further. I see really no benefit in running Android apps on my Mac (it results in a mixed touchscreen/non-touchscreen experience). So Chrome is pushing technology for those 80-90% of devices, but it is not Android (with its touchscreen focus), and I'm really happy it is the web, which I can use on more devices, with different UX handling, and without installing stuff.


You mean the future holders of 8% of the market across all form factors?


It's awesome that Google is doing this and in public too https://fuchsia.googlesource.com/

Unfortunately, the hard part of an operating system isn't in a cool API and a rendering demo. It's in integrating the fickle whims of myriad hardware devices with amazingly high expectations of reliability and performance consistency under diverse workloads. People don't like dropped frames when they plug in USB :) Writing device drivers for demanding hardware is much harder than saving registers and switching process context. The Linux kernel has an incredible agglomeration of years of effort and experience behind it - and the social ability to scale to support diverse contributors with different agendas.

Microsoft, with its dominant position on the desktop, famously changed the 'preferred' APIs for UI development on a regular cadence. Only Microsoft applications kept up and looked up to date. Now that Google has such a commanding share of the phone market - Android is over 80% and growing http://www.idc.com/promo/smartphone-market-share/os - they have a huge temptation to follow suit. Each time Microsoft introduced a new technology (e.g. WPF: https://en.wikipedia.org/wiki/Windows_Presentation_Foundatio...) they had to walk a fine line between making it simple and making sure that it would be hard for competitors to produce emulation layers for. Otherwise, you could run those apps on your Mac :)

There are many things to improve (and simplify) in the Android APIs. It would be delightful to add first-class support for C++ and Python, etc. A project this large will be a monster to ship, so hopefully we'll soon (in a few years) see the main bits integrated into more mainstream platforms like Android/Linux - hopefully without too much ecosystem churn.


> It's in integrating the fickle whims of myriad hardware devices with amazingly high expectations of reliability and performance consistency under diverse workloads.

So much this; Linux Plumbers conference years ago was bitching about how every gorram vendor wanted to be a special snowflake, so even though the architecture was ARM, you basically had to port the kernel all over again to every new phone. I haven't kept up with it, but I can't imagine it's gotten better. The problems they're listing as reasons to move to a new kernel aren't caused by Linux and they won't go away until you slap the vendors and slap them hard for the bullshit they pull, both on developers and users.

As for kernel ABI, this has been rehashed to death: just release your fucking driver as open source code, and it will be integrated and updated in mainline forever: http://www.kroah.com/log/linux/free_drivers.html


Overall I agree with your sentiment, but it's not just a case of "releasing your drivers" - you also have to get them accepted by maintainers. If you don't have an awareness of this process from the beginning of your development cycle, it can be a massive amount of work.


A product I worked on was a victim of Microsoft changing preferred APIs. Their competing product, which the prior year had less market share, somehow sported the new UI the instant it became publicly available. Monopolistic behavior if you ask me.


It's why I got out of the MS ecosystem. I bought quite heavily into WPF for the Vista launch, and it was obvious that was a dead end within a couple of years. Not only that, but I could see exactly the same thing happening to all the other neat stuff I was planning to look into at the same time. That was not the way to encourage my long-term membership of the Visual Studio clan.


WPF is alive and doing pretty well.

Doing new WPF application development for the last three years for the biotech industry.

It is also the official API for classical desktop applications and shares a lot with UWP.


> and it was obvious that was a dead end within a couple of years

As the other reply probably indicated, it's still not apparent to .net developers. MS really dropped the ball with providing a clear path for desktop development.


I was specifically doing 3D stuff in WPF. That end of things didn't get much love after the initial release (or if it did, it came too late for me).


How were you doing it? It's definitely possible to embed DirectX in a WPF control, and not particularly hard to do, with SlimDX or SharpDX, in my experience.


Declarative 3D data in XAML, not touching DirectX directly at all. Viewport3D at the root, then doing the right magic to get the sort order of the objects right (working around the built-in logic, in other words).


Driver support should not be a problem for Google. They can reuse existing drivers with a rump kernel approach [1].

And on mobile devices, many hardware component vendors provide custom drivers (with binary blobs) anyway. It will not be hard to convince them to support Fuchsia for new hardware releases if they lose access to Android otherwise...

[1] https://en.wikipedia.org/wiki/Rump_kernel


> Driver support should not be a problem for Google. They can reuse existing drivers with a rump kernel approach [1].

"Linux is a free set of buggy device drivers." https://news.ycombinator.com/item?id=8470638 .


More and more it seems to be the prevailing attitude even within the Linux "community".

Just observe how systemd is overruling and countermanding Linux behavior any chance it gets.


Given that it's Google, I wonder if it will support the languages favored by bootstrappers and small startups—Obj C, Ruby, JavaScript and, more recently, Swift and Elixir. I get the distinct impression that they're heavily optimizing for large-team productivity and aren't fans of functional or highly expressive languages.

It's too bad, given how much nicer their app approval process etc. is than Apple's, that the Android dev experience has been so much worse all these years.


Well going off common sense, it seems likely that Obj C and Swift definitely would not be supported on purpose.


How is that common sense? Both are open source and Google is happy to lure iOS devs, presumably.


I wouldn't write Swift off yet.


It doesn't need to be a general-purpose OS and is probably going to target mobile devices and laptops. I don't think supporting a wide range of hardware is even on the roadmap.


The drivers-wifi repository contains a stub for a Qualcomm QCA6174 driver[1] which is found in the Nexus 5X[2], OnePlus 2[3] and meant for smartphones[4]. The drivers-gpu-msd-intel-gen repository contains drivers for Intel 8th and 9th gen integrated graphics[5]. I think it's fair to propose that Google plans on running Fuchsia on both smartphones and laptops…

[1] https://github.com/fuchsia-mirror/drivers-wifi/blob/master/q...

[2] https://www.ifixit.com/Teardown/Nexus+5X+Teardown/51318#s112...

[3] https://www.ifixit.com/Teardown/OnePlus+2+Teardown/45352#s10...

[4] http://www.anandtech.com/show/7921/qualcomm-announces-mumimo...

[5] https://github.com/fuchsia-mirror/drivers-gpu-msd-intel-gen/...


Is this an actual plan of Google as a company, or is this some sort of Microsoft-style war between divisions where the Chrome team has just decided on its own that the future is based on Chrome and Dart?

Also, considering the way that the ARC runtime for Chromebooks was a failure and had to be replaced by a system that apparently essentially runs Android in a container, will it really be possible for a completely different OS to provide reasonable backward compatibility?


I have my doubts. Hiroshi Lockheimer, the SVP of both Android and ChromeOS stated, "There is no point in merging them. They're both successful. We just want to make sure that both sides benefit from each other," referring specifically to rumors at the time that said that ChromeOS and Android were merging.

http://bgr.com/2016/12/13/android-chrome-os-merging-google/


I suspect this is true, in that Fuchsia is simply a potential replacement for both... not a "merge". But bear in mind, the job of a marketing voice like Hiroshi's is to promote and sell people on the existing product... right up until the day they decide to officially announce something else.

So, even if they were presently 100% focused on merging the two OSes, Hiroshi's job would be to convince you they aren't as not to risk impacting the bottom line of their sales and their partnerships with OEMs that are continuing to print money for them.


    But bear in mind, the job of a marketing voice like Hiroshi's is to promote and sell 
    people on the existing product... right up until the day they decide to 
    officially announce something else.
Absolutely. I'm reminded of how Steve Jobs claimed that Apple was absolutely, 100% committed to PPC... right up until he announced the first Intel Macs at Macworld in 2006. And Apple, at that point, didn't even have OEM partners to worry about.


Yup. The Osborne effect[1] is widely known now, so most companies are very careful not to telegraph a future change that may impact current sales.

[1]: https://en.wikipedia.org/wiki/Osborne_effect


Dart is part of Ads, not Chrome. Chrome didn't want them, while Ads Frontend is heavily dependent upon them.

Aside from Google+ (which was pushed directly by Larry and grudgingly integrated with by the rest of the company), Google hasn't really had plans "as a company" since the mid-2000s. Big companies (other than Apple under Steve Jobs) don't actually work like that; once you've got a product-focused org chart and strong executives who push their own focus areas, you will necessarily get product-focused initiatives that respond to resource availability & market opportunity. The executives are not doing their jobs otherwise.


The blog talked about many different things. Not all of them may be related to each other. It is pure speculation from the author.


I think Google just wants to transition to an OS with a micro-kernel architecture with the smallest possible attack surface. Also, user space drivers should help with the update problems they've always had with Android.


I think it's just a speculative effort that probably won't go anywhere.

I googled for "google magenta", and all the top hits are actually about an entirely separate (I assume?) project about AI music: https://magenta.tensorflow.org/welcome-to-magenta. So they didn't think very hard about the name for a start.

I'm also skeptical that a big new effort like this would be done entirely in the open. The Chrome team has something of a history of doing that and then throwing stuff away (e.g. Chromium mods and hardware configuration for a Chrome tablet that never got off the ground).

The Android team, on the other hand, seems to prefer developing stuff in private before open sourcing it. And their stuff seems to have more traction (or maybe we just don't see all the aborted efforts because they're private).

I feel like the Chrome team really believes in open source, and developing in the open, whereas the rest of the company (and especially Android) doesn't care as much and prefers being secretive. But as Sundar Pichai used to run Chrome, maybe he's changing things up a bit?


Like the Plan 9 people working at Google pivoted Plan 9 into Go, this is the BeOS people working at Google re-inventing BeOS into something new.

But Go could be bootstrapped through internal use in incremental steps. Fuchsia/Andromeda, in contrast, have non-code barriers to entry like management approval and industry adoption. My guess is that it will pivot from a full-blown Android replacement into something more focused.


I would say that Google is trying to replace JavaScript with Dart in any way they possibly can. The reason is simple: JavaScript is an open standard; Dart is owned by Google.

Their claim that "Dart is better" is the typical Google Kool-Aid before they attempt a market takeover, as we've seen over and over with Android, Chrome, and AMP especially. Google loves to make glass-house open source projects you can't touch. You're free to look at how great it is, feel its well-refined curves and admire the finish, but God help you if you don't like how the project is going and want to fork it for yourself.

Don't bother trying to commit a new feature to any of Google's software that they don't agree with. It will languish forever. Don't bother forking either, because they'll build a small proprietary bit into it that grows like a tumor until it's impossible to run the "open source" code without it.

Fuck Dart, I don't care how great it is. Microsoft is being the good one in this case by extending JS with TypeScript; Google is trying to upend it into something that they control.


Eh? Dart was trying to replace JavaScript in some fashion, but that obviously failed (they had good intentions, but bad execution). Seeing that Chrome hasn't included it yet, I doubt it will. So that dream is dead.

Dart is a replacement for GWT at this point. See AdWords being written in Dart now[0]. Though it's not clear how Flutter.io will play into all this (that's targeting mobile with no web target).

As for typescript, Google actually embraced that fairly heavily with Angular2 being written in it.

[0] http://news.dartlang.org/2016/03/the-new-adwords-ui-uses-dar...


AFAIK, Angular Dart is used heavily within Google.

https://webdev.dartlang.org/angular


Give me a break. It's just as accurate to say "Dart is an open standard, Javascript is owned by Mozilla". There may be valid technical, pragmatic or moral reasons to prefer Javascript, but this is just FUD.

(See http://www.ecma-international.org/publications/standards/Ecm...)


Really? Google's once-rosy history with open source projects isn't looking too friendly these days.

And yeah ECMA is a totally open standard with committee members from all sorts of companies and backgrounds. Dart is not. I don't care if JS is slightly worse; at least I know that for now and the foreseeable future I won't be paying a Google tax to use it.

After the open source community "stole" MapReduce and HBase, Google has begun offering Maglev and Spanner as "services" rather than giving them to the OSS community. Maglev was supposed to be open sourced a while ago, and Google now offers a DDoS protection service on Google Cloud instead, most famously with their Krebs PR stunt. Maybe they forgot about it? Did I mention they removed "don't be evil" as their motto a while back because it was "immature"?

Google has gone down a decidedly different path since the Alphabet transition a while back. It's no longer the brainchild of Sergey and Larry; it's losing its soul and becoming a shareholder cash machine. Maybe the floundering of some of their moonshot projects is taking a toll on the company's confidence that it can remain a market leader while maintaining its traditional values of openness and its shunning of questionable marketing tactics? I'll admit that's pure speculation, but I really wish I knew what happened to the Google I remember.

Since I'm being accused of FUD I might as well throw a bunch more speculation in for the hell of it. Their most recent papers are conspicuously lacking enough detail to make your own implementation, and read more like marketing whitepapers on how to use their services and how great they are. Their tensorflow library was probably released as truly open only because they couldn't hire enough devs with machine learning experience to meet their needs. They needed to introduce the world to enough of the secret sauce to meet their own demand and they remain completely silent on how their real moneymakers work.

My extreme speculation? They started using machine learning for search a few years back and found out just how easily their previous search algorithms, developed and perfected for years, were utterly outclassed within months. A start-up with these techniques could have been their undoing. This oversight cannot be repeated, they cannot offer too much of their technology back to the world anymore lest they risk being beaten to death by their own weapons. Thus google threw away a lot of what made them google, and rebuilt themselves as a semi monopolistic oligarch that's much more in line with traditional too big to fail companies.

They now spend more on political lobbyists than any tech company by far. They like to release nice things for free when a competitor just happens to be making a decent living charging for the same thing. They engage in a lot of the typical corporate warfare now, which doesn't seem natural for a company with a nice playful exterior and an original motto of "don't be evil".

As far as the FUD accusation, does it count that I don't work for or with any company that has anything to do with google or the other tech giants? These are just my opinions based on observations, and a lot of those opinions are backed by verifiable facts.

You're free to put the same data together and make your own conclusions, which would lead to more interesting discussion than dismissing my points just because.


One addition I would like to make: every corporation is a shareholder cash machine, and Google has always been one; it didn't suddenly become one. The problem is in the institution of corporations itself, which has a lot of flaws.


> Did I mention they removed "don't be evil" as their motto a while back because it was "immature"?

https://abc.xyz/investor/other/google-code-of-conduct.html

"Don't be evil" is the first and last thing stated.

Why do you post stuff that's trivially searchable and trivially called out as bullshit? Why would I bother reading any of your rant if you can't get trivial details right?


> And yeah ECMA is a totally open standard with committee members from all sorts of companies and backgrounds. Dart is not.

Dart is an ECMA standard: https://www.ecma-international.org/publications/standards/Ec...


Spanner is as much about Google's data center, hardware and network architecture as it is about the code.

It is not clear how they would "contribute" that to OSS


JavaScript sucks because it has a weak standard library, ugly syntax, and its monopoly in web development has the industry stuck in a state of mediocrity, in my opinion. I have a VERY hard time believing that the apex of engineering intelligence and ingenuity is found in JavaScript. Also, as much as I love Elm, for instance, languages that transpile to JavaScript are just lipstick on a pig, and do little to solve the underlying problem. I'm not a fan of Dart either, but at least Google made an attempt to solve the JavaScript issue in the best way possible with Dart; by aiming to get rid of it.


I agree with you that JavaScript sucks balls in far more ways than is reasonable for such a widely used language. The design is seriously shit when compared directly to really any popular language, even PHP.

I disagree that transpilers aren't a reasonable answer. Eventually JavaScript will be okay to work with, some day. Until then, transpilers offer nearly unlimited freedom in redesigning the bad parts of the language while maintaining 100% forwards and backwards compatibility. It's really as good as it can get.

Since they compile down to a Turing-complete language, there's really no limit to the heaps of dog shit they can abstract away. Historically, C++ is nothing more than an insanely complicated C preprocessor, and it has more than proven that such a strategy can be viable long term. In fact, the first C++ compiler, Cfront, is still available and literally outputs raw C code from C++.

Typescript is easily my favorite since it's designed to compile down to very human-friendly JS. Getting TypeScript out of your stack requires nothing more than one last compilation with optimizations turned off. Unlike most transpilers (looking at you, Babel), the output JavaScript uses standard JS workarounds like the Crockford privacy pattern for classes. This gives TypeScript fairly practical forwards and backwards compatibility. You can always target output to a newer version of JS or convert your codebase out of TypeScript back to JS at any time.

If it catches enough traction, browsers will begin implementing native typescript parsing since it offers many potential performance optimizations on top of what js is capable of. At this point you just maintain your TypeScript codebase and use some library to give your legacy clients some transpiled JS on the fly.

If typescript gets enough adoption it will fix JavaScript for good, in the same way the original C++ compiler (which just transformed to C) led to native support, so I'm really rooting for it.
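
To make that concrete, here's a minimal sketch (file and class names invented for illustration): everything TypeScript-specific below is a type annotation, and the compiler erases them all, leaving plain JavaScript you could keep maintaining by hand.

    // greeter.ts -- purely illustrative; compile with `tsc greeter.ts --target ES5`
    class Greeter {
      // The annotations below exist only at compile time; tsc erases them.
      constructor(private name: string) {}

      greet(punctuation: string): string {
        return "Hello, " + this.name + punctuation;
      }
    }

    const g = new Greeter("world");
    console.log(g.greet("!"));   // Hello, world!

The emitted greeter.js is just an ordinary prototype-based constructor function, which is what makes the "compile once and walk away" escape hatch credible.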


> since it offers many potential performance optimizations on top of what js is capable of.

It doesn't, unfortunately. TypeScript's type system is unsound, so the VM can't rely on the types for optimization.
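
For readers wondering what "unsound" means in practice, here's a tiny hedged illustration (variable names made up): the snippet compiles cleanly, yet the declared type and the runtime value disagree, which is exactly why an engine can't trust the annotations.

    // unsound.ts -- illustrative only; compiles with no errors.
    const n: any = 42;
    const s: string = n;     // legal: `any` is assignable to anything
    console.log(typeof s);   // "number" -- static type and runtime value disagree

    // Because such mismatches are allowed, a VM cannot specialize code or
    // eliminate checks based on the declared `string` type.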


I can see your point about transpilers. Of all the transpilers I've used, I like Elm the best, due to its functional nature, syntax, strong typing, compiler, and debugger. It isn't fully stable yet, as a language, and there have been breaking changes in each release since I started using it, but it offers the most promising departure from JavaScript. I guess anything that facilitates the de-turding of web development in general is a good thing.


Haha de-turding is a great way to put it. I just don't think a new language is a reasonable option. There's what maybe... 50,000 different versions of the ~500 web browsers from different eras still running out there somewhere?

If having code work almost everywhere is important for a project, that project will be using vanilla ES3-5 JavaScript for the next 10+ years. Maybe not the latest startups but all sorts of enterprisey ancient stuff that needs to run needs some path forward. If typescript can provide that it will become the lowest common denominator at any company that ships both new and legacy codebases.

Typescript to JS transpilation is extremely similar to the strategy that produced C++ from C. We know it will work, and it's been done before to great success. C++ isn't perfect but I think everyone agrees it's definitely a lot nicer to work with than C, and that's exactly how I describe Typescript as well


I do see your point about C++. I was programming when it first came out, back in 1985. However, I've always thought C++ felt "Band-aid-y." It never felt elegant and cohesive to me, the way Objective-C does. C++ is like a chainsaw-hang-glider-shotgun-bat; badass, to say the least, but still a clumpy work of baling wire and duct tape. Typescript feels the same way, only not quite so badass. It's more like the Robin to C++'s Batman.

Having said that, my only exposure to Typescript has been in Angular 2. Having used other tools like Ember, React, and Elm, Angular 2 seems like a magic step backwards to me. I will concede that my opinions on Typescript may be tinted by my experience with Angular 2 though, so I'll give Typescript a stand-alone, honest evaluation, and adjust my opinions as necessary.


Looks like alternative facts have reached the tech world too?

You can take as hard a look at Google as you would like, but choosing Microsoft over Google (one for-profit company over another) while not caring how the technology, the licensing or the workflow compares is a bit hypocritical. (E.g. they are both open, and they both have rules for commits.)

I'm wondering, why do you need a throwaway for such heavily invested FUD? Your other comments here are in similar tone, and I'm surprised to see such hatred without any obvious trigger. Maybe if you would come forward with your story, it would be easier to discuss it?

disclaimer: ex-Googler, worked with Dart for 4+ years, I think it is way ahead of the JS/TS stack in many regards.


>I think it is way ahead of the JS/TS stack in many regards.

In what ways do you consider it ahead of TypeScript? Personally, as someone who's particularly fond of static type systems (Haskell and the like), TypeScript's type system seems way more advanced and powerful than Dart's (union and intersection types in particular, and non-nullable types). Mapped types (introduced in TypeScript 2.1) also seem pretty interesting.
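
For readers who haven't used these, a compressed sketch of the features being compared (all type names are invented for illustration):

    // ts-features.ts -- illustrative only.

    // Union type: the value is one of several alternatives.
    type Id = number | string;

    // Intersection type: the value satisfies several types at once.
    type Named = { name: string };
    type Timestamped = { createdAt: Date };
    type Entry = Named & Timestamped;

    // Non-nullable types (with --strictNullChecks): null must be handled explicitly.
    function len(s: string | null): number {
      return s === null ? 0 : s.length;
    }

    // Mapped type (TypeScript 2.1): derive one type from another, key by key.
    type ReadonlyNamed = { readonly [K in keyof Named]: Named[K] };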


Some of my earlier notes are in this thread (it is more about the day-to-day feature I actually use and like, and less about the fine details of the type system) https://news.ycombinator.com/item?id=13371009

Personally I don't get the hype around union types: at the point where you need to check which type you are working with, you may as well use a generic object (and maybe an assert if you are pedantic).

Intersection types may be a nice subtlety in an API, but I haven't encountered any need for it yet. Definitely not a game-changer.

I longed for non-nullable types, but as soon as Dart had the Elvis-operator (e.g. a?.b?.c evaluates null if any of them is null), it is easy to work with nulls. Also, there is a lot of talk about them (either as an annotation for the dart analyzer or as a language feature), so it may happen.

Mapped types are interesting indeed. In certain cases it really helps if you are operating with immutable objects, and mapping helps with that (although it does not entirely solve it, because the underlying runtime does allow changes to the object).
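
As a small hedged illustration of that last point (the `Point` type is invented): `Readonly<T>` is a mapped type that blocks mutation at compile time, but the restriction is erased at runtime, so the underlying object can still change.

    // readonly.ts -- illustrative only.
    interface Point { x: number; y: number; }

    const p: Readonly<Point> = { x: 1, y: 2 };
    // p.x = 3;            // compile-time error: 'x' is a read-only property

    (p as Point).x = 3;    // the assertion satisfies the checker...
    console.log(p.x);      // ...and the object really does mutate: prints 3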


I agree about union types. They can quickly result in insane variable declaration statements that are hard to understand.

I dislike nulls though, I always wish people would just use a flag or error handling when objects are undefined, instead of "hey this object is the flag and sometimes it's not actually an object!"

You'd think language designers would learn after dealing with null pointers :)


So Dart hasn't really incorporated any lessons from 20 years of Java, has it? Google's answer to Tony Hoare's billion-dollar mistake is... "The Elvis operator"?


TypeScript has some really cool type system features. Union and intersection types are fun and really handy when interacting with dynamically typed code. (If you go back through history, you'll find almost every language with union types also has a mixture of static and dynamic typing. See: Pike, Typed Racket, etc.)

Self types (the "this" in the return type) are handy.

I can see us adding some of those to Dart eventually.
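
For context, the "this in the return type" feature looks roughly like this in TypeScript (the builder classes below are invented for illustration):

    // fluent.ts -- illustrative only.
    class QueryBuilder {
      protected clauses: string[] = [];

      where(clause: string): this {     // polymorphic `this` return type
        this.clauses.push(clause);
        return this;
      }
    }

    class LoggingQueryBuilder extends QueryBuilder {
      explain(): this {
        console.log(this.clauses.join(" AND "));
        return this;
      }
    }

    // Because where() returns `this`, the chain keeps the subclass type,
    // so explain() is still available after the inherited calls:
    new LoggingQueryBuilder().where("a = 1").where("b = 2").explain();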

Non-nullable types are great, which I've said for a very long time[1]. We are finally working to try to add them into Dart[2]. It's early still, but it looks really promising so far. It kills me that I've been saying we should do them for Dart since before TypeScript even existed and still they beat us to the punch, but hopefully we can at least catch up.

The main difference between TypeScript and Dart's type systems (and by the latter I mean strong mode[3], not the original optional type system) is that Dart's type system is actually sound.

This means a Dart compiler using strong mode can safely rely on the types being correct when it comes to dead code elimination, optimization, etc. That is not the case with TypeScript and at this point will likely never be. There is too much extant TypeScript code, and JS interop is too important, for TypeScript to take the jump all the way to soundness. They gain a lot of ease of adoption from forgoing soundness, but they give up some stuff too.

In addition to the above, it means they'll have a hard time hanging new language features on top of static types because the types can be wrong. With Dart, we have the ability to eventually support features like extension methods, conversions, etc. and other things which all require the types to be present and correct.

[1]: http://journal.stuffwithstuff.com/2011/10/29/a-proposal-for-...

[2]: https://github.com/dart-lang/sdk/pull/28619

[3]: https://github.com/dart-archive/dev_compiler/blob/master/STR...


Typescript is definitely an improvement over type-free JS, but it's still wedded to the JS type system so unfortunately, it will still let you shoot yourself in the foot in ways you might not anticipate if you have experience with other languages with a stronger type system.

For example, if you have a string-typed foo and a number-typed bar, "foo + bar" is still a valid statement in TS because they have to maintain backwards compatibility with JS's unfortunate language design choices.
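
Concretely (using the same hypothetical foo and bar):

    // plus.ts -- compiles with no errors.
    const foo: string = "total: ";
    const bar: number = 3;

    const result = foo + bar;    // accepted; inferred as string via JS coercion rules
    console.log(result);         // "total: 3"

    // Mixed arithmetic with other operators *is* rejected, e.g.:
    // const diff = foo - bar;   // error: arithmetic operand must be a number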


Typescript and Dart are completely different animals. I can leave TypeScript for good by compiling to JS once, and it's designed to output human-readable code. The JS it produces will be immediately usable as JavaScript, and I'm totally free from the semi-open language that MS controls.

Dart is a different language; it has no fallback to something familiar. I don't doubt that it's many years ahead of TS in every way, but it's still rather proprietary compared to TS, which I can shut off at any time with minimal effort.

The openness of TypeScript and Dart is comparable. Both are run primarily by their champion companies, with code free to review and fork but with limited ability to commit changes. They both require you to sign over copyright of committed code, which I don't like for my own reasons, but the license is open source.

The big difference to me is that TypeScript offers an escape hatch and Dart does not, because one is pretty much a JavaScript enhancement and the other is completely different. I hate vendor lock-in and loss of the open web in general, and you will see this as a common theme in most of my more flamey (controversial) comments. The web is closing off in so many directions, and as an open source developer in my free time this is of great personal concern. I don't like that Hacker News and Reddit can be an echo chamber, and posting contrary opinions usually makes the discussion more balanced even if a lot of people don't like it.

I'm not ex google, MS, or any of the tech giants. I'm not smart or dedicated enough to work anywhere you've heard of :). Most of my comments on throwaway accounts are unpopular, that's why I don't use my normal account. I'm not some invisible super shill, hackernews knows all the accounts I use and I'm fine with that.

I've just got my own opinions and when they're controversial it's not in my best interest to comment using my normal account. It wouldn't be for anyone. It would be utterly stupid to hurt my open source projects or reputation as a developer just because somebody doesn't like my opinions. My code and my work have no opinions, and I like to keep it that way. Throwaways are my way of keeping my opinions to myself, and I don't see anything wrong with that. Separation of church and state if you will.

I'm not totally against Google or any company in general. Microsoft in particular has an extremely rocky history when it comes to open source projects. They've probably done more harm to Linux than any company in existence. If typescript and dart both had equal migration paths I would choose dart in a heartbeat. I love tsickle and the closure compiler and the fact that the angular team is using typescript. Still, I feel like my criticism of dart has some truth to it at least.

I've taken aim at Google for the past week for what they've done to the openness of Android, AMP, and Dart. Am I wrong? It's hard to argue that any of Google's platforms are as open as they were a few years ago. Some of my really unpopular opinions were posted in response to other posters calling my comments FUD or calling me a shill, and can you blame me? It's one thing to say "I disagree and this is why" but pretty rude to just say "I don't believe you because you're obviously lying or getting paid to say that". To that I say well screw you, I'll post what I want without being polite at all if you're going to be so rude. I'm replying nicely to you because you genuinely asked why I used a throwaway and said that you worked with Dart at Google, way more than most would admit.

Having an unpopular opinion just gets you labelled as a shill or FUD and that's a lot of the reason I use throwaways. I've actually gotten death threats before for disagreeing with people on the internet. It's hard to say I would be better off getting death threats from people that can easily find my name, occupation, and address. Look at more of my post history and you'll see that I'm probably not a shill, or a really sneaky one if you don't want to believe that.

I just bashed Intel for removing ECC support from their desktop lines a few days ago, and Facebook and Microsoft a day before that for their shittastic timeline algorithm and irrational fear of Linux taking over corporate clients respectively. A day before that I trashed PayPal for some of their recent cronyism and praised the quality of Google's Guava libraries.

In my more ancient post history I mention how much C#'s ecosystem sucks compared to Java's and how even open sourcing the language doesn't mean much when it only targets Windows using Visual Studio. I bitched about the baby boomers screwing the millennials and asked how it's possible to start a side business. I said I don't like Python because it's slow and gave people online marketing tips on how I write high-ranking blog articles. I mentioned that .NET Core sounds great but it's alpha quality and it sucks that I have to use the new version of Windows Server to have HTTP/2 support. I talked about how WordPress is absolute shit for anything with even medium traffic. Mentioned a quip about using monotonic Gray codes. Brought up some arguments about Monsanto and GM windblown crops. Mentioned that Uber doesn't give a fuck about their drivers, with some examples of this despite their press releases saying otherwise.

I'm getting a lot of comments about how I'm some full of shit corporate mouthpiece again so I guess it's time to cycle the old throwaway again.

I'm not asking you to agree, but keep in mind that some people will have opinions completely contrary to your own. Sometimes their reasoning will be logical even though you come to a different conclusion; people are just different.


Your worries about being locked in may have been valid 2-3 years ago (1), but things have changed a lot since:

Dart has an ongoing project (the Dart Developer Compiler) which has the goal, among others, of producing readable, idiomatic ECMAScript 6. That is as close to your TypeScript fallback as it can get. (2)

Somebody also demonstrated that Dart-to-LLVM compilation is possible. The language has a decent library for parsing Dart sources; worst case, if you are that heavily invested in your product, you could also write something that transpiles your codebase. I did try it on a small scale with specific examples, and it is actually not _that_ hard to do; if my business relied on it, it would certainly be within reach.

(1) I'm not sure if you can call it lock-in as it is entirely open source, you can fork it, build it for yourself, change it if you have special needs. The same goes for perl, php, python, go, whatever language you prefer. Yeah, most people don't do it. Why? Because most people don't need it. If you become Facebook-size, it may look better to invest in the PHP toolchain and VM than in transpilers. YMMV.

(2) From a purely technical point of view, I wouldn't call it reassuring that the default fallback platform is JavaScript for so many people (even on the server side). It sure is depressing that we are stuck with "1" == 1 and the wrong ordering of [1, 2, 10].sort() for as long as we fall back to JS, and TypeScript does not improve on it.
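
For anyone who hasn't hit those two specifically, here they are (valid TypeScript as well, though the `any` annotation is only there so the loose comparison type-checks):

    // quirks.ts
    const one: any = "1";              // typed `any` so `==` against a number compiles
    console.log(one == 1);             // true: == coerces "1" to a number first
    console.log([1, 2, 10].sort());    // [1, 10, 2]: the default sort compares as strings

    // The usual workarounds: strict equality and an explicit numeric comparator.
    console.log(one === 1);                         // false: no coercion
    console.log([1, 2, 10].sort((a, b) => a - b));  // [1, 2, 10]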


I didn't know of the developer compiler; when that reaches production I won't have any real criticism of Dart.

For now, a JS fallback is the only realistic option for running code on the web. Even if we get native TypeScript or Dart support tomorrow, we will still need to put up with JavaScript for like 7-10 years. For this reason a readable JS fallback seems like a vital feature, to me at least. It's depressing, but it's the reality for the majority of web projects.

Does Dart have a pluggable compiler framework similar to Roslyn or ANTLR ASTs? That would make it a lot easier to write your own conversions.

One more point in TypeScript's favor, though... it would be a lot easier to modify the JS VM in browsers to support native TypeScript than Dart. In my mind it's a lot more likely to happen because of this (less work).


> when that reaches production I won't have any real criticism of Dart.

It's still got bugs, of course, but we have internal customers working on real projects using it on a daily basis.

I agree totally that picking a language is a huge commitment and you want to do that with an organization (company, standards committee, group of open source hackers, whatever) that you trust.

Google is a huge company and has done lots of good and bad things, so it's easy to find enough evidence to support assertions that we should or shouldn't be trusted based on whichever view you want to demonstrate.

One way I look at it is that instead of answering the absolute question "Can I trust Google to shepherd the language well?", consider the relative question "Can I trust it to shepherd the language as well or better than the maintainers of other languages I might choose?"

Assuming you've got some code to write, you have to pick some language, so the relative question is probably the pertinent one. I hope that we on the Dart team are a trustworthy pick, but different reasonable people have different comfort zones.

> Does dart have a pluggable compiler framework similar to Roslyn or Antlr AST's?

All of our stuff is open source[1], including all of our compilers and the libraries they are built on. Most of it isn't explicitly pluggable because plug-in APIs are hard and Dart in particular doesn't do dynamic loading well.

But it's all hackable, and much of it is reusable. In particular, the static analysis package[2] that we use in our IDEs also exposes a set of libraries for scanning, parsing, analyzing, etc. that you can use.

[1]: https://github.com/dart-lang/sdk [2]: https://github.com/dart-lang/sdk/tree/master/pkg/analyzer


Yes, Dart has similar constructs (kernel for low-level parts, and AST and analyzer for high-level parts).


Associating your real name with a controversial opinion is basically just a bad idea.


Exactly. Even if you're right it won't do you any good


But the thing is, you really can fork. There's nothing preventing forking. I think you're complaining about merging.

But every open source project has standards about what they merge. Try getting a patch into Linux and see what they say; it won't be a rubber stamp.


Yeah, you can fork. Take AMP or Android for example.

With AMP, the instant you fork and change a single character of code it becomes incompatible because part of AMP is a verifier that makes sure only the official version is used with Google's cache. Without being able to serve your custom AMP pages from Google's AMP cache the entire point to its existence goes away. The reason? Typical "security". "Tampered" versions of AMP could "do bad things", laughable considering that vanilla web pages allow you to do absolutely anything javascript allows and google has no problem showing those pages in search results or letting them freewheel in a Chrome tab. If Google wanted AMP to be open they would have built it into their chrome browser so the browser could enforce restrictions while allowing users to run whatever customized AMP implementation they want.

And Android. Android used to honor the promise of being open. Years ago. This was before every manufacturer was encouraged to lock bootloaders, and back when platform SDK's and drivers for hardware were generally available even if they were kinda hard to get. This was also before the Android kernel heavily diverged from mainline Linux, and before "google play services" grew from a tiny app to a framework that powers half the OS features.

Nowadays you can only run your own Android on devices specifically built for it. Open distributions like CyanogenMod are dead or dying. Google Play services is closed and proprietary, and probably about 95% of popular apps require it to work. Even if you manage to get your own Android distribution built and running you will need to side load all your apps, and most apps just don't work because they've been built to depend on proprietary bits that Google has snuck in all over the place.

Google is better at the "embrace, extend, extinguish" strategy than Microsoft ever was. So good, in fact, that they have many well intentioned people defending them to the death even as they choke off the very open source projects they created. Virtually every platform that Google runs for more than about 5 years goes from completely open to something impractical to run yourself. If you don't believe me look into any of their older projects that are "open source".

After a certain point it's free software as in "free coupons". Somewhere in the mix, eventually, the price of their "charity" is passed on to you.


Hardware made a big difference with Linux and its growth. It could run on PC/x86, Alpha, and SPARC because those are platforms. ARM is a spec sold to manufacturers that all have their own SoCs that attach random shit to random pins and implement the worst kernel hacks that can never be upstreamed.

http://penguindreams.org/blog/android-fragmentation/

We can't have the 90s Linux revolution for handhelds because they each need customized kernels and drivers. Many fall into disrepair and go unmaintained, even in things like Cyanogen. (On two phones I tried running newer CM images on old hardware and ran into speed and performance issues).

This is why things like Plasma and Ubuntu mobile have such limited phone support. Porting is difficult.

Also notice that I said "PC" above. There are plenty of x86 systems that are just as difficult to port to (PS4, WonderSwan, those old T1 cards with 4x486 processors on them). At least Microsoft forced their ARM manufacturers to use UEFI. Too bad those platforms have locked bootloaders. I'd love to see some Lumia running Plasma.


First up - I'm not arguing with you. Android is not as open as it was in the old days. But I think it's not fair to only blame Google here.

Google making Google Play services was a natural reaction to manufacturers never updating Android on their phones for years, leading to all kinds of vulnerabilities and bugs on Android that kept it far behind iOS in quality and features. Let's face it - Android used to be sneered at, the red-headed problem-child OS that used to be the butt of jokes, till it grew out of puberty and pimples in Ice Cream Sandwich. If manufacturers had truly honoured OS updates, Play Services may never have been built - it allows Google to update Android without updating the OS. And yes, they will retain full control over Play Services - I completely understand the need to fully possess it and ensure a high level of quality assurance.

Also, blaming the fall of CyanogenMod on Google is ridiculous. CM fell because of mistakes made by Kirt McMaster and several others. He attempted to be a dictator, even going to the extent of banning OnePlus phones from selling in India - this was fought and resolved in the courts. All goodwill for CM was destroyed. OnePlus ditched CM and moved to OxygenOS. CM had a stroke and died. Now Lineage is the new shiny OS rising from the cooling corpse of CM.


Interesting info about CM, didn't know there was more to it.

I still disagree with Play services, because it wouldn't be that hard to force manufacturers to support updates when you command such a large part of the market.


Google is 100% to blame.

There are these things called contracts, and if there are clauses for an OEM to be allowed to have access to Google services, Google lawyers could certainly add a few more sentences regarding compulsory updates.


Android is open source, and OEMs see long-term updates as a money-losing proposal. If the "lost sales" (to them) costs outweigh the benefit of shipping Google services, OEMs will fork/ship AOSP and call it a day - they want profits more than they respect/fear Google. Google's negotiating position is not unassailable.


It sounds like you have no idea how deep Google's MADA contracts with OEMs already go. =) Shipping devices with Google services included requires you sign over your entire business decisionmaking to Google: They have approval/veto power over every single device and software update you release that contains their services, and they also forbid you to sell any devices running Android that don't contain their services, just to make sure you don't try to exert any independence on the side.

Android isn't open source, except in the hearts and minds of fanboys everywhere. =)


> OEMs will fork/ship AOSP and call it a day - they want profits more than they respect/fear Google.

Good luck getting all those apps running without Google services, or getting the devs to rewrite them to use alternative APIs.

It only works in countries like China because of the way the government controls everything.

I pretty much doubt anyone cares about Amazon's or Jolla's forks, or cared for BlackBerry's.

It is just that they don't care one second about enforcing updates.


Thank you. I wonder how many open source advocates recognize this.


Thanks so much. I get a lot of downvotes and use throwaways because of comments like this, so it's nice to hear some praise every once in a while.

Google's projects all seem very inviting from a distance. Usually it's not until you're ready to implement something that you find out that you're fucked, and how.

Serious ranting below but something I never get a chance to say:

I'm a born skeptic and avoid the Silicon Valley mindset even though I'm a driven person. I used to find myself in disagreement with others often because they didn't, or refused to, see the truth. Some people don't like to be told they're wrong. Many of those will fight other opinions just to justify their own decision, but will secretly reconsider. Others will hang onto beliefs with every ounce of strength as their mistake builds into a maelstrom that consumes everything they care about.

With some people, after challenging their beliefs, they will end a friendship rather than admit you were right in the first place - especially if you refused to do something their way and it saved them from disaster. Some can't stand to be THAT wrong. As if I were some asshole who saved them from their fate, and now they're a spirit left wandering the earth until they can fulfill their original destiny. It's like I helped them cheat without telling them about it, stealing the joy from victory. This is something I learned the hard way more than once.

In real life I keep my opinions to myself to avoid this nastiness, and offer opinion only when asked. The people open to advice even if they disagree learn to ask my opinion since I always tend to have one. The majority of people I know, including some good friends, have no idea what my personal opinions are on many subjects. It would cause pointless pain and argument with people I care about regardless of their beliefs.

I'm not loyal to any platform or company and I will freely throw a strongly held notion to the wind if I find disturbing evidence that I was mistaken. Most people are not so malleable.

A lot of people take their beliefs too seriously to the detriment of society. At least on the internet I can express my opinion, however "uncool" using throwaways.

In the real world, the best and most meticulously researched advice I've ever given is at exit interviews. The one time you can be open, honest, and politically incorrect with coworkers. Multiple companies made serious operational changes after my exit interview. Others have told me, in nicer words, "that's really fucking great to hear, I'm pretty happy I never have to talk to you again".

The problem is, you never know how somebody will respond. During exit interviews I'm treated more like a person than a subordinate since the boss relationship is formally over, which helps I'm sure.

In real life, the way to influence a strongly held opinion is best described by watching the movie Inception. You introduce nothing more than minor inconsistencies while outwardly expressing little opinion, then wait to see if your clues are enough to lead them towards the promised land.

My other common tactic is to do things without asking any opinions first. You at most come off as insensitive or aloof, rather than as someone who intentionally disregarded their advice. Usually the opinion matters less in practice than if you had asked in the first place. Classic forgiveness is easier than permission.

I've sometimes wondered if this makes me a psychopath or if that's just how some people tick. Anyway, god bless throwaways and the internet.


At the risk of downvotes, I'll be as blunt and honest as you claim to be - after reading this screed, you mostly come off as someone with an inflated view of your own importance and abilities. While I might agree with you that the silicon valley mindset is harmful, anyone who would rather keep a friendship and watch a friend go over a cliff in a barrel isn't really worth keeping around - either as a friend or an employee.

Fighting the good fight, fighting for the things that are just, and true, and good - are nearly always worth it, the key is to back off before it becomes a pyrrhic victory.

That's a lesson I had to learn the hard way.


I think parent poster is using the term "friend" in a more liberal sense. I would counter that anyone who you're afraid would berate you for honest feedback can't honestly be considered a "true" friend. In most cases anyway. Even then, some people just have to learn the hard way.


I'll save somebody but only if it's worth the cost of losing a friend. The better the friend the more I let them learn from their mistakes. The truth is that losing a good friend would hurt us both more than helping them mend the wounds after smaller stuff.

It's not being evil or that I'm always right. The comment was mostly in reference to those that have been calling me a shill the past few days and how they should keep in mind that their opinion is not fact.

I gave up the good fight years ago. The worst was when I helped turn around a failing small business. We all wanted the same goal, the company to be successful. It sucked so bad that I learned that it's better to be nice to your friends than to dedicate yourself to a cause or try to fix all their problems.

If that means letting them fall sometimes that's okay, as long as you don't let them get any deeper than you can reach. If you help pull them out in the end you're still a good friend.

So the company turnaround, it worked in the long run but at great cost. Cutting employees that sucked at their jobs but were friends and helped us with the initial plan. Cutting moochers that I loved but were sucking the company dry with constant unscheduled time off and freebies. Redoing our systems to automate as much as possible made us our first profit in years but a lot of that was from jobs eliminated. Hiring people of a higher caliber than existing employees by raising application requirements above what most of the current employees would meet. Offering our new more qualified people more money than Bob who's been here for 15 years but did our financials on pieces of scrap paper.

By the end of that process a few years later, my lesson was that I made the owners a lot of money at the expense of losing about half my friends. Most of the other half resented me for what I had done and thought I was a traitor, even though I had just helped implement exactly what we had agreed upon a few years back.

We planned to cut dead weight and streamline and automate operations. To add new talent with up to date skills. To cut our benefits slightly to free up money to invest in the company's future. Everyone wanted this until it was their benefits or their job being automated. I followed through with the cause and at the end I felt like a Judas figure and packed up and left in shame.

You could say it was a pyrrhic victory for sure, but after that I'm very wary of setting anything in motion that's too heavy for me to stop on my own.


I agree with a lot of your sentiments and am only responding to rubbish your psychopath claims. I don't think you need to worry, in particular if you don't exhibit cruel or violent tendencies. You demonstrate concrete moral reasoning, even if at odds with others, so not sociopathic. "Psychotic" perhaps, but your reasoning seems lucid enough. The one reservation I would have is about your "omega man" mentality; that you could be suffering unnecessary mental anguish as a result. Also, if I were operating an online community I'd be somewhat concerned with your overt circumvention of moderation checks and balances using throwaways etc. However - I think you're submitting perfectly valid opinions in a respectful way and I share your unease at how there seems to be a groupthink at play shaping the quality of discussion.


Hahaha, psychotic-ish ramblings are fun to write sometimes though :). I'm about to retire this throwaway so IDGAF about what I'm writing as much as usual.

Omega Man is an interesting term, never heard of that before. You're totally right that it's how I try to operate but only when I'm doing controversial things. Perhaps I'm doing it right if I seem to be going about it in the most quiet and passive way possible :) .

You don't have to worry about me running any communities online. I'm a productive member of a bunch of online communities including HN, and I don't use my throwaways to respond to, upvote, or otherwise sockpuppet my regular account, except a couple of times I admittedly may have upvoted the same thread on different accounts by mistake. Most of my less opinionated stuff is under my real name.

The only reason I respond sometimes is because I disagree. Sometimes my controversial opinions prove to be a lot more popular than I thought. And possibly miraculously, all of my throwaways eventually gather substantial positive karma despite the fire and brimstone rained upon some of my comments :)


This reduces to the old argument that the Four Freedoms model of open-source software is basically moot in a world where the value of software is dominated by network effect, not modifiability.

It continues to be a weakness of the Four Freedoms model.


> This was also before the Android kernel heavily diverged from mainline Linux, and before "google play services" grew from a tiny app to a framework that powers half the OS features.

How diverged is it? Would they ever be merged back together?


Last I checked, some devices were running kernels based on mainline versions many years old (2011-era), with zero code contributed back to mainline. One of the other posters mentioned rampant hacks to the kernel to get things to work in stupid ways, which I've heard a lot about as well.

Android is missing a ton of new Linux features on many devices, and the vanilla kernel is becoming increasingly unusable on ARM devices because of these badly done third-party modifications.


Can you name one open source project, where I can commit a change the maintainer doesn't agree with?


WTF are you talking about? Dart is being used to develop apps on their Fuchsia OS.


If Dart ever became the real thing, it would have to be supported not just in Google Chrome, but also in Firefox, Edge, Safari, etc. At that point Google would lose its control.


Google decided long ago not to put the Dart VM in Chrome. Of course, Dart's tooling has very strong web development support and JS conversion.


Microsoft is lobbying to get their favorite syntax into JS6/7. Who wanted the class syntax? etc. TypeScript and WebAssembly are part of their plan. Ultimately, they want to recompile their 27-year-old Office codebase from C/C++ to the web browser.


To be fair, it seems to me like the typical webdev coming from C#/Java really wants the class syntax. I disagree with it, but I don't think it's just MS that's pushing it through, and even if it is, there's definitely an audience for it.


I've used a ton of languages over the years and vastly prefer Java-style syntax when working on larger projects. The forced organization tends to lead towards some level of mandatory code clarity - something greatly lacking in JS land.

OO is a bad word these days and functional is all the rage, even though functional languages were largely superseded by OO languages eons ago, for many reasons people are slowly rediscovering.

There's a huge push to put more structured language concepts into js now that it's being used for substantial projects and it's out of necessity more than convenience.

When I'm hacking together a quick Python script all that stuff gets in the way, but when working on larger systems strong typing and object syntax are practically a necessary evil for maintaining readability.


> strong typing and object syntax are practically a necessary evil for maintaining readability

No, it's not like that.

You can write readable code in any language as long as you can write readable code. It sounds tautological, but what I mean is that the ability to write readable code is a skill separate from writing code or knowing a particular language.

Strong static typing - as just about any tool and language feature - can have both good and bad effects on code readability. In the end, the readability (so also maintainability and other related metrics) depends on the skill of a particular developer in the largest part.

Both OO and FP techniques, as well as all the language features, are the same. You can misuse (or ignore) them all.

What we need is to make an "average developer" better at writing code, not more bondage and discipline in our tools. The latter is (a lot) easier, so that's where we focus our efforts, but - in my opinion - it's not going to solve the problem.


Could you elaborate on how "functional languages were largely superseded by OO languages eons ago for many reasons people are slowly rediscovering"?


Erlang and Common Lisp have been around for a long time, and functional programming is nothing new. The reality is that most business problems map conceptually to communication between objects, and that IDEs, which greatly help developer productivity, work a lot better with objects.

Functional programming has origins in lambda calculus and academia because mathematical problems map more easily from pure math to functional programming. It's really popular in the circles where it's more useful/easier than OO.

Honestly I don't think the people 20 years ago chose OO for most business languages over functional out of ignorance. They had a choice and decided that OO was better for business problem solving languages like Java even though a large majority of programmers from that era were math majors and familiar with functional syntax.

I feel like we're in one of those cycles where a large number of a previous generation have retired and it's time to learn some of these lessons all over again.

Notice how many wood commercial buildings have been going up in the last 15-20 years? A lot - just long enough after the great city fires of WW2 for everyone involved to be too dead to object.


Who "choose" OO 20 years ago and (much more importantly) why?

I'm going to ignore the social component... that said, we work in a wonderful profession where the world is changing completely every decade and many design decisions from the previous generation make no sense anymore. The business case for developing your application in COBOL rather than Common Lisp may have been sound 20 years ago, but today many of the reason why you didn't choose lisp are invalid (e.g., garbage collection takes milliseconds rather than seconds).

Note that this is not the case in more mature fields such as construction.


The idea of factions in Microsoft fighting for control is interesting to me from a historical perspective - do you have any books or sources I can read to get the full story?


I don't have any detailed links handy, but it's definitely common knowledge that this is the case (or used to be, anyway). Manu Cornet's comic captures it well: http://www.bonkersworld.net/organizational-charts/


I'd agree it's common knowledge, but I love reading detailed accounts of that kind of thing. Cheers anyway!


How credible is this source?

I don't understand half the decisions outlined in the article.

> I also have to imagine the Android update problem (a symptom of Linux’s modularity)

I seriously doubt the Linux kernel is anything but a minor contributor to Android's update problem. Handset developers make their money by selling physical phones. In two years, your average consumer probably doesn't care if their device is still receiving software updates. They'll jump onto a new phone plan, with a fresh, cool new mobile, with a better screen, newer software (features!), and a refreshed battery.

Maintaining existing software for customers costs handset manufacturers $$$, and disincentivizes consumers from purchasing new phones (their cash cow). The money is probably better spent (from their POV) on new features and a marketing budget.


> In two years, your average consumer probably doesn't care if their device is still receiving software updates. They'll jump onto a new phone plan, with a fresh, cool new mobile, with a better screen, newer software (features!), and a refreshed battery.

This might be true for the US, where 75% of subscribers are on post-paid (contracts). It's not true for the rest of the world.

* Europe: < 50% post-paid

* Rest of the world: < 22% post-paid

I'd also argue that Android users will be more likely to be pre-paid than post-paid customers (compared to iPhone users) in all of these regions, but I have no data to back it up.

Anyway, I agree that it's probably not very profitable, if at all, for android handset makers to support their devices for > 2 years. But I think many customers would benefit from it ...

[1] http://www.globalrewardsolutions.com/wp-content/uploads/GRS-...


I think the "post paid" connection leading to a 2 year lifecycle is suspect. There's a big after market repair industry in the US. Many people have a singular cell phone as their internet device - and it's often ancient by IT terms. 2+ year old hardware needs to be getting software/OS upgrades.


I favor the proposal of requiring a prominent "Best Before" date for new devices, indicating how long the manufacturer will guarantee the availability of security updates.


That's an SLA, and we should all be getting them.


But it is not really related to the 2-year upgrade plan. In the EU phones have a 2-year warranty, and after that period most people would like to upgrade anyway. Moreover, in parts of the world outside the US the best phones are not always available on contract; you need to buy them on your own (e.g. the Nexus line was mostly like that, the Pixels probably too).

Moreover, cellular companies put a lot of bloat on the phones, some of which would make malware creators proud (e.g. an app that, if you open it after the 1-month trial period, adds a recurring cost to your cell bill).


Also, we usually use our mobile phones until they die or get stolen, which is way more than just 2 years.


> I seriously doubt the Linux kernel is anything but a minor contributor to Android's update problem

The Linux kernel is at the very heart of Android's update problem - not because of "modularity" but because it lacks a stable ABI. Because of this, Android requires handset makers and SoC manufacturers like Qualcomm to provide updated drivers; these parties are perversely disincentivized to do so as they would rather sell more of their latest model/chip. If the Linux kernel's ABI were stable, Google could bypass the manufacturers altogether when sending out updates.
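To make the ABI point concrete, here's a minimal sketch (names like "mydrv" are made up, and the exact cutover version is from memory) of the version-conditional glue an out-of-tree driver accumulates every time an in-kernel interface changes - in this case the old create_proc_entry() being replaced by proc_create():

    /* Sketch only: the kind of #ifdef soup out-of-tree drivers grow
     * for every in-kernel API they touch, because those APIs are free
     * to change between kernel releases. */
    #include <linux/version.h>
    #include <linux/proc_fs.h>
    #include <linux/errno.h>

    static const struct file_operations mydrv_fops; /* .open/.read etc. elided */

    static int mydrv_register_procfs(void)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 10, 0)
            /* Newer kernels: the fops are passed at creation time. */
            if (!proc_create("mydrv", 0444, NULL, &mydrv_fops))
                    return -ENOMEM;
    #else
            /* Older kernels: create the entry, then attach the fops by hand. */
            struct proc_dir_entry *e = create_proc_entry("mydrv", 0444, NULL);
            if (!e)
                    return -ENOMEM;
            e->proc_fops = &mydrv_fops;
    #endif
            return 0;
    }

Multiply that by every internal API a real driver touches and it's easy to see why a closed-source vendor driver ends up pinned to the one exact kernel build it was compiled against.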

The Android ROM community struggles with this - you can update the userspace or overclock the kernel to your heart's content, but you will forever be stuck with whatever kernel version the OEM-supplied drivers support.

Edit: added ROM paragraph


Just like RHEL/CentOS, couldn't Google maintain a stable kernel ABI throughout a given android major release such that manufacturers could write a driver once, and have it continue working for the lifetime of that major android release?


I'm sure they could. Although it seems they would much rather move away from the Linux kernel altogether. Who knows what sort of ABI stability timescales they are aiming for in the replacement kernel.


The pertinent question is how Google think they're going to convince Qualcomm to write drivers for their new OS, and not just have exactly the same problem they have now. Qualcomm probably quite like the current arrangement where they get to deprecate everyone else's gear on their own schedule.


With wall street's love for next-quarter accounting, I'm sure Qualcomm will consider the offer, even if it's a one-time bump.

Qualcomm is dominating the SoC industry, in part due to how it uses its IP. Fortuitously, Qualcomm are being sued by the DoJ and Apple for anti-competitive practices in multiple jurisdictions and this might blow the SoC field wide open, depending on the rulings. If other chip makers can license Qualcomm's patents on a FRAND basis, Google could offer the deal to MediaTek if Qualcomm declines.


> The Linux kernel is at the very heart of Android's update problem - not because of "modularity" but because it lacks a stable ABI.

What are you referring to when you say the Linux kernel ABI is not stable? I ask because the A in ABI means application, and Linux has maintained a consistent ABI for decades.

I have a suspicion that you're trying to suggest that in-kernel interfaces be kept rigid and unchanging to satisfy some unspecified number of out-of-kernel driver developers. The simpler solution would be for out-of-kernel driver developers to get their code up to quality and merged into mainline so that they'd be ported automatically whenever there's an in-kernel api change.


> The simpler solution would be for out-of-kernel driver developers to get their code up to quality and merged into mainline so that they'd be ported automatically whenever there's an in-kernel api change.

Or we could drive linux into irrelevance and not have to worry about that anymore. Windows and macOS users do not have to suffer from this; why should users of an open source operating system? I for one welcome Google cleaning out this mess with a competing kernel. You know your suggestion will never happen, you know some hardware manufacturers will never open their drivers, and some just can't be avoided (people doing GPGPU work will not switch from NVIDIA, most gamers won't switch from NVIDIA, and NVIDIA owns the GPU market for machine learning with their SoCs and is working on a lean Vulkan driver for Linux for things like Tesla's self-driving cars).

During the early days of constant wifi API churn within the kernel, there were many, many out-of-tree drivers that ran perfectly fine but took a very long time to get mainlined, because hey, the kernel devs have """standards""" as they like to call it. I couldn't use a distribution that was prone to upgrading its kernel version, like Fedora, because those out-of-tree drivers broke on a regular basis. One of Ubuntu's major advances in its very early days was providing fresh software every 6 months while keeping a kernel that was in sync with all those drivers, which they integrated into the distribution. That gave a middle ground between using something like Fedora/Debian unstable with constant breakages, or suffering the glacial pace of Debian stable.

... until I discovered NDISWrapper. An implementation of the Windows API for network drivers. Yes, really. It worked so well, and because it was a well maintained project, it quickly got updated to new kernel APIs and allowed anyone with a working Windows driver to be freed from this suffering. It was great. Nowadays the Linux kernel has support for most wifi chips in-tree (Broadcom is still a problem though), but in those days it was truly liberating to be able to use Windows drivers on Linux.

Talking about wifi drivers, the Windows XP driver for my first usb wifi adapter still worked in Vista and 7 despite the manufacturer never updating it. Identical driver binary working on 3 OS generations. Talk about commitment to not breaking things. NDIS 5.1 was "deprecated" for NDIS 6.0 but MS kept the support for it for as long as reasonable for the age of the hardware.


> Or we could drive linux into irrelevance and not have to worry about that anymore

I know this is going to sound flippant, but believe me it's not: who's "we" in your sentence? Google? Because I wouldn't want to see Linux "driven into irrelevance" unless it gets replaced by another free software OS which works and is not completely owned and controlled by a single corporation. Actually, I'd rather see Linux's problems fixed or improved rather than adopting Google's brand new OS.


The problem with lack of stable driver ABI in Linux will not get resolved as long as Linus et al. are calling the shots. They are fundamentally opposed to the idea, and there's nothing you can do about it.


There's not much I can do to influence Google's vision either. At least Linux is Free software (note: I'm equally fine about other pieces of software which are not Linux but are Free software).


The Linux on a standard distribution and Linux on an Android device aren't the same thing.

The one on the Android device is a fork with quite a few APIs trimmed off, like for example System V IPC.


I think he is just making it up as he goes along. If you look at the Fuchsia docs or newsgroups, all they say is that they are working on this OS and intend to combine it with Flutter to make a fast platform.

Nobody says anywhere that it will replace Android. It looks just like a lot of these other Google projects. They put people on it. If it turns into a contender then they might use it, but if in the meantime Android introduces features that make it more competitive then maybe they will throw Fuchsia away. That's my understanding anyway.

Also: "I have very strong reservations about Dart as the language of the primary platform API" and then later... "I am not a programmer"


Also, the customers that do care basically don't have much choice. With the Nexus line gone, you have to spend a lot of money for a Pixel to get a decent chance of having updates for a few years.

For the past year I've been looking every now and then for a mid-level Android tablet, and I've always given up, since they all already ship with an outdated version of Android and slim chances of even getting an update to the current one (let alone future versions).


> Maintaining existing software for customers costs handset manufacturers $$$, and disincentivizes consumers from purchasing new phones (their cash cow).

Maybe Google should split play store/app ad income with manufacturers, adding some incentives (50/50 profit split for phones on latest Android, 20/80 for one trailing major version - and then nothing. Or something along those lines).

That might have the added benefit (for Google) that manufacturers would have a stake in a healthy play ecosystem.

It's the ability to impose a private tax on ads and software that's supposed to be the monetizing strategy for walled gardens, isn't it?

If hw is commoditized - manufacturers need another leg to stand on.


Unintended consequence #1: phones that are too slow to adequately run the latest version will be force-upgraded Windows-10-style even if it degrades the user experience.


Well, maybe be more lenient on trailing versions if, and only if there are timely security updates?

I'm not sure the alternative is much better - lots of unpatched insecure phones in use?


> In two years, your average consumer probably doesn't care if their device is still receiving updates

Well, that's exactly why I've bought an iPhone this time. Exactly €1000 lost for the Android ecosystem, just because I want encryption. I know, Android 6.0 has encryption by default, but 4.4 doesn't, and that's the point. The whole story displays a lack of professionalism, in my opinion.


If nothing else comes out of this, I hope we end up with an Android OS that works better than the current one.

I've been running Android since the Nexus One, so I'm no newbie to the platform, but the ease with which iOS manages to run all UI interactions at imperceptibly smooth FPS, with outstanding battery life, is staggering when you're used to Android. It feels like some really fundamental choices were made badly on the platform that make it incredibly inconsistent and unreliable. A fresh start would be fantastic.


As a fellow Nexus user (I've owned the Nexus One and 5, and currently use a 6P), how much of this is due to the OS versus the hardware? Will Google ever be able to achieve Apple-level battery life or overall UI smoothness, not to mention update support, without having their own custom SoC?

I was very happy with the 5, even with the intermittent lags, especially considering its price at release. I suppose I'm not a very heavy phone user, and I never play mobile games, but I've been very happy with the 6P on Android 6.0-7.1. Battery life could definitely be better, and it does get fairly warm at times, but overall it's been a very good experience for me considering the Snapdragon 810 it's using is generally poorly regarded.


Apple has had fluid UIs since the start, despite off-the-shelf, low-resource Samsung SoCs. They only started making custom ones with the 4S, as I recall.


Yep. Out of the starting gate, Apple forced tight constraints on background and multi-threaded processing---so tight that the first versions of the iPhone OS couldn't support some types of application that the Android OSs could unless Apple wrote the program and could take advantage of the private APIs in the OS. But the advantage to that B&D approach was responsiveness and battery life, relative to an Android OS where any developer could write an app that would spawn an ill-behaved background thread and suck your battery.


Good point, Apple's A series SoCs are so well regarded I sometimes forget they've only been around for half of the iPhone's existence.


exactly. it's not (only) a hardware issue.


Google has been hinting at their own SoC for the third-gen Pixel devices though, so that is something to look forward to.


I have a Nexus 6 and it's running Android 7. It's better than Samsung devices, but it has its own problems. I run the screen at minimum brightness because that's the only way I can get reasonable battery life. My car is parked at level -2, so I lose signal every time I have to go there, and sometimes I have to restart the device to get the signal back (no tricks like airplane mode work). The UI sometimes gets so slow, and I don't know why.


I wonder how much of the performance issue is related to Android being Java vs iOS being C? The old BlackBerry OS was also implemented in Java, and I remember well the spinning hourglass and the odd pauses in its UI.


Not much - modern devices and the modern Android runtime are fast enough that GC and Java aren't a problem. There's a bunch of other issues around though, mostly coming from the fact that devices use problematic and slow I/O controllers, apps do blocking I/O, and OEMs add bloat and misconfigure the behaviour of the OS to prioritize their services at the expense of the apps you're actually using.

But bad I/O is the killer (e.g. fun things like triggering 2 image load requests at once, which then take 2500ms, vs. doing them sequentially, which takes 400ms on some Samsungs. This also happens if several processes collide). Apple side-steps that by throwing money at the problem (good controllers, expensive flash), which probably won't happen in the budget Android market.


Wow, this was a major TIL for me.

I don't read about Android and iOS [hardware] nearly as much as I'd like to, and HN doesn't seem to generally cover the subject too well. What are some sources you could recommend I read to stay updated?


I'd love to see data that concludes that iOS has better battery life than Android. I haven't seen such data and doubt the truth of that statement.


iOS devices routinely ship with batteries ~60% the size of those in flagship Android devices and have a competitive amount of battery life. If Apple chose to ship an iPhone with a 3000mAh battery like what can be found in Android flagships, it would absolutely crush the competition in terms of battery life.


Maybe it would, maybe it wouldn't. Without research into this it's all just conjecture.


LG G4: 3000mAh

HTC One M9: 2840mAh

iPhone 6: 1810mAh

iPhone 6 Plus: 2915mAh

In streaming video playback, the iPhone beats out all of the other devices.

Galaxy S6: 6.3 hours

LG G4: 6 hours

HTC One M9: 5.5 hours

iPhone 6: 8.8 hours

iPhone 6 Plus: 11.1 hours

http://www.trustedreviews.com/opinions/which-phone-has-the-b...
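Doing the division on those numbers (a rough back-of-the-envelope only; it ignores screen size, resolution and SoC differences) makes the efficiency gap clearer:

    /* Battery capacity divided by video playback time, using only the
     * figures quoted above: average drain rate in mAh per hour. */
    #include <stdio.h>

    int main(void)
    {
        printf("LG G4:         %.0f mAh/h\n", 3000 / 6.0);   /* ~500 */
        printf("HTC One M9:    %.0f mAh/h\n", 2840 / 5.5);   /* ~516 */
        printf("iPhone 6:      %.0f mAh/h\n", 1810 / 8.8);   /* ~206 */
        printf("iPhone 6 Plus: %.0f mAh/h\n", 2915 / 11.1);  /* ~263 */
        return 0;
    }

By that crude measure the iPhones drain at roughly half the rate per hour of playback, which supports the earlier point about what a 3000mAh iPhone would look like.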


Bingo. This is precisely what Google needs to pull ahead. Apple's going to commence soiling their collective knickers in 5... 4... 3...


> I don’t see the average garbage-collected language using a virtual machine allowing for a target higher than 60fps realistically.

But... "average garbage-collected language using a virtual machine" doesn't describe any of C/C++, Dart, Go, Java, Python, or Rust. Nor Javascript.

I get greater than 60 fps with my existing Vive three.js WebVR-ish electron/chromium linux stack. Even on an old laptop with integrated graphics (for very simple scenes). Recent chromium claims 90 fps WebVR, and I've no reason to doubt it. So 60 fps "up to 120fps" seems completely plausible, even on mobile.


Some people seem to have skimmed over the discussions about the pros and cons of garbage collection and come away with the idea that all garbage collected languages function by stopping the world for 50-100 millisecond pauses four or five times per second, minimum. There are real performance issues that GC can create, but there's a looooooot of vigorous overstating of the issues.

Slightly in those people's defense, it is true that while GC relieves you of the need to track lifetimes and worry about using dead pointers, it doesn't relieve you of the need to consider performance as one of many factors that go into your code. So while I think the performance issues of GC'd languages are very frequently overstated, it definitely is true that a UI framework written in a GC'd language by someone who isn't giving any thought to performance implications of allocation can very quickly exceed its targets for 60 fps, let alone 120 fps, even on very simple GUI screens. But that's only maybe 10% the fault of garbage collection... 90% is that someone is writing a GUI framework without realizing they have to pay a lot of attention to every aspect of performance because GUI frameworks are very fundamental and their every pathological behavior will not only be discovered, but be encountered quickly by all but the most casual programmers. It doesn't take long before someone is using your text widgets to assemble a multi-dimensional spreadsheet with one text widget per spreadsheet node or something, just as one example.


One way to look at it is:

In a manually managed language, the performance of the application's memory management code is limited by the skill of the application developer. In a managed language, it's limited by the skill of the GC developer.

GCs have gotten a lot better in the past twenty years, in large part because they have the luxury of amortizing their work across a million applications. That makes it financially viable to throw a ton of person-years at your GC. That's not the case for the malloc()s and free()s in a single application.

So just through economies of scale, we should expect to see, and indeed have seen, managed languages catch up to the memory performance of the average manually managed app.
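As a toy illustration of what "memory management limited by the skill of the application developer" looks like on the manual side - a hand-rolled free list a careful C programmer might use to keep a hot path away from the general-purpose allocator (sketch only, all names made up):

    /* Toy free list for one object type: reuse freed nodes instead of
     * going back to malloc()/free() on every allocation in a hot loop.
     * This is per-application, hand-tuned work; a GC'd runtime tries to
     * give comparable behaviour to every program via one shared,
     * heavily engineered allocator/collector. */
    #include <stdlib.h>

    struct node {
        struct node *next;
        int payload[16];
    };

    static struct node *free_list;

    static struct node *node_alloc(void)
    {
        if (free_list) {                        /* fast path: reuse */
            struct node *n = free_list;
            free_list = n->next;
            return n;
        }
        return malloc(sizeof(struct node));     /* slow path: real allocation */
    }

    static void node_free(struct node *n)
    {
        n->next = free_list;                    /* park it for reuse */
        free_list = n;
    }

The point is that this kind of tuning has to be redone (well or badly) in every manually managed application, whereas the effort poured into a shared GC is amortized across all of them.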


> But... "average garbage-collected language using a virtual machine" doesn't describe any of C/C++, Dart, Go, Java, Python, or Rust.

I'm curious; what would be an example of something you would describe as an "average garbage collected language using a virtual machine"? Java would certainly be the first language I'd think of for that description.


In addition to the 2 nice previous answers (though I'd not include CPython myself, as writing high-performance CPython means writing/using C libraries; and note that Lua and LuaJIT are different)...

I italicized "average", and wouldn't include Java, because most languages are very small efforts, and thus are different than the top few. A person-century or person-kiloyear of optimization effort has an impact. Observations that "implementations of strategy X generally have characteristic Q" can be true, but there's a hidden context there of "severely resource-limited implementations of X".

But two caveats:

Sometimes you are trapped. In CPython and PyPy (but not in Jython or IronPython), parallelism remains defined by the GIL.

Language implementation tooling sucks less than it used to. Now even toy languages have JIT and rich compilation infrastructure.

Aside: In the late 80's, before ARPA was hit by Bush I, project managers had a great deal of autonomy. There was discussion of "what more neat things could we do to accelerate progress?" One observation was that JIT expertise was highly localized, and we could either wait many years for it to slowly spread, or pay someone to stand on people's desks and catalyze its being written up. But that kind of micro-grant didn't yet exist, and time ran out on creating it. Society chose option 1, but a human generation has now passed, and we finally have accessible JIT infrastructure. So, yay?

(In fairness, note autonomy and "old boy network"ness was a less happy thing for potential researchers at other than the few main research institutions. Some change was needed, it's just not clear it required societally critical tech to remain largely unfunded for decades. We don't even have a (language) wiki. Though even national science education improvement efforts have failed (but oh so close) to attempt one.)


Java hasn't used a VM on Android since ART was introduced, although on Android 7 it is a bit more complicated, given the mix of pure assembly interpreter, PGO-driven JIT and AOT compiler.


> I'm curious; what would be an example of something you would describe as an "average garbage collected language using a virtual machine"?

Ruby 1.8, Lua, or CPython.


Python was included in the list of things considered not to be in this category; I probably agree with you on that one, but the idea behind my question was that the languages listed as not "average garbage-collected language using a virtual machine" included several that I'd include in that category.

What do you think makes Ruby and Lua more "average" than Java?


Since Fuchsia is a new kernel, it will probably only support Google hardware.

The status quo right now among android hardware vendors is to violate the GPL, and they have faced few if any repercussions for doing so. I wonder if Fuchsia is sort of viewed as the way forward to addressing that.

Anyone care to speculate why there isn't a community version of chromium os? I'd donate to it for sure. It sounds like getting android apps working on it would be pretty easy: https://groups.google.com/a/chromium.org/forum/?hl=en#!topic...


Well, there's Cloudready[1] which is ChromeOS for any computer. I installed it on mom's laptop (ancient device) and it works well.

[1]: (https://www.neverware.com/freedownload/)


>The status quo right now among android hardware vendors is to violate the GPL

No, it's not the status quo. The major OEMs do release their code. Yes, there are some Chinese OEM violators, but that's typical of China.


You can release code and still violate the GPL in other ways. For example, there are binary blobs out there and the GPL is pretty unequivocal on this point: "The source code for a work means the preferred form of the work for making modifications to it."


It depends whether the binary blob is a derived work or not.

http://yarchive.net/comp/linux/gpl_modules.html


Releasing binary blobs is definitionally not releasing the source, though.


Seems like the Free Software that propelled early Internet pioneers served its purpose and those companies are turning their backs on it - first Apple with GCC->LLVM, now Google with Linux->Fuchsia :( I am getting afraid of another dark age on the horizon... I guess it's going to be inevitable, as 90% of SW developers will find themselves redundant when inferring AI capable of composing code blocks and listening to/reading speech/specifications arrives in the upcoming decade, making the creation of typical web/mobile apps trivial.


LLVM is free software. Google spends a huge amount of money on Linux security, and some of the Linux developers are notably unfriendly towards contributions they don't like, especially in the security space. I hope that Google delivers a new kernel using a microkernel architecture that significantly improves on kernel security.

http://unix.stackexchange.com/questions/59020/why-are-the-gr...


I think Google is probably the most Free Software friendly of the new big three (Amazon, Google, Microsoft). They haven't disappointed with Fuchsia, which appears to be entirely copyleft:

https://fuchsia.googlesource.com/magenta/+/master/LICENSE https://fuchsia.googlesource.com/fonts/+/master/LICENSE


That's BSD-licensed though, which doesn't force companies to respect your freedoms. The parent was talking exactly about GPL being dropped in favour of licenses that allow vendor lock-in. It means a philosophical departure from user-first towards corporation-first, and the Free Software world the FSF envisioned getting trampled.

The zeitgeist is moving towards conservatism in general, so it doesn't surprise me, but it's still sad.


Of the big three, MS has definitely been the best lately; unlike the other two, they've embraced open community development, not just open source. I still wouldn't trust them not to pull a bait and switch, but they've been better than the other two.


How are LLVM and potentially Fuchsia anything but great additions to open and free software?


This is either fantastic satire or hopelessly optimistic. What you are describing is a human level artificial general intelligence. Probably more than 10 years out.


Not really; for simple apps you can use what we already have in place, with some meta-programming rules prepared by humans (currently only a few companies possess this capability, though). You can use ML like deep learning variations to learn association between your wishes and corresponding code blocks. Initially apps like that would be simple, i.e. "make a web page", done, "change background color to pink gradient", done, "place gallery of images from my vacation in the center", done, "show me the photos", "remove this one", "make this one smaller", done, "add this title", "add my paintings underneath", "add a buybox next to my paintings", "add checkout to the top", etc. NLP is now there already, but initially you'd need a lot of human "drones" for associating "wishes" with code - doable by companies like Google (they can bake learning into Android Studio) or by scanning GitHub etc. We already have stupid mobile app generators; I don't see any reason why we wouldn't have what I described within the next 10 years.


I read a story years ago about a guy who changed careers from being a Programmer when Visual Basic was launched. He reasoned that anyone would be able to create applications so it wouldn't be a viable career anymore.

> You can use ML like deep learning variations to learn association between your wishes and corresponding code blocks

I suggest you read up on ML.


I am currently working on deep learning as well as generating custom programming languages. Maybe you could consider updating yourself? ;-)


Programming touches both comprehension of the code and the world it interacts with. Any program which can write programs based on comprehending natural language ought to be able to rewrite itself. Can you please explain how what you are describing is distinct from AGI?


I view current deep learning as a cumbersome counterpart to retina-level cells (well, from 30,000ft). Anyway, DL can roughly do what a few specialized biological neurons can do, like in the case of the retina identifying direction, speed etc. It's far, far away even from the whole brain of an insect, but it can do some amazing things already. What you can do here is utilize those things it can do (and they will be getting better), and add some human-made inference/annotation/ontology/optimization system on top of these partial components. The human-made system can be chipped away slowly as we figure out how to automate more and more of its functionality.

So for example, for simple programs you don't need to understand what individual code blocks are doing. All you roughly need are some well-defined procedures/visual components (able to ignore unsupported operations) that can be composed LEGO-style. Here AI can learn to associate certain compositions of code blocks with your sentences, e.g. you can teach it with touch what it means to resize, move to the left, change color etc., and even provide those code blocks. To help it, you have to annotate those code blocks so that you maximize the chance of valid outcomes. ML by itself is not capable of inference, so inference must be done differently. Yet what your AI learns by associating certain sentences with outcomes in your code blocks will persist. And for making associations you can unleash millions of developers who might be working on your goal unknowingly, e.g. by creating a safe language like Go for which you have derived nice rules that you can plug into your system. Initially you could only do pretty silly things, but the level of its capabilities will keep rising, and there is a way forward in front of you, even if a bit dim.


Sounds like something that could be used for customizing a CMS a little bit, but software development is something very different from what you're describing.

Developing software requires understanding completely open-ended natural language. NLP is nowhere near that level of AI, and I doubt it will be in the next 30 years.


Yes, but when you talk to regular people and their needs for web pages, apps, they are often either very trivial or unbearably complex. You can potentially automate away those very trivial with the current state of ML already, and there is a bulk of money there that goes to a lot of independent developers and smaller companies. And once you have such a system built, you can extend it as new advances in ML/GPU come, automating away more and more in the process. Even if you just prepare some vague templates for often performed tasks in business with limiting variations, those can be super helpful.

The point is that only really good SW engineers have any chance of surviving; the low-skilled ones will be gradually replaced by automated reasoning.


>Yes, but when you talk to regular people and their needs for web pages, apps, they are often either very trivial or unbearably complex

Mostly they are unbearably vague and based on tons of false assumptions about how things work. Separating the trivial parts of a user request from the unbearably complex parts is itself often unbearably complex. It requires a conversation with the user to make it clear what is simple or what could be done instead to make it simpler.

The examples of trivial user needs that you have given are all within the realm of what we now use WYSIWYG editors for. Not even that is working well. The problem is that you can't lay out a page without understanding how the layout interacts with the meaning of the content on the page.

The logic capabilities of current ML systems are terrible. It's like, great, we have learned to sort numbers almost always correctly unless the numbers are greater than 100000!

Even in areas where AI has advanced a lot recently, like image recognition, the results are often very poor. I recently uploaded an image I took of a squirrel sitting on tree branch eating an apple to one of the best AIs (it was Microsoft's winning entry to some ImageNet competition).

It labelled my image "tree, grass", because the squirrel is rather small compared to the rest of the picture. Any child would have known right away why that picture was taken. The tree and the grass were visually dominant but completely unremarkable.


Just imagine that you can interactively, by voice or by touch, tell the AI what/how to adjust, and it will use that to improve itself for your future similar tasks. Now project that there will be 1,000,000 users like that, telling the app what exactly they meant and pointing to the proper places in the app. So this will be exactly the conversation you desire: you'd directly tell your app builder what you want, and if it is not doing what you like, you either show the builder with simple gestures or rely on some other user having gone through the same problem before you, and the app builder tapping into that knowledge. Obviously, first for simpler web or mobile apps. This sounded like sci-fi just a decade ago, but we now have the means to build simple app builders like that.

ML by itself is incapable of inference, hence you need some guiding meta-programming framework that could integrate partial ML results from submodules you prepare.

As for the squirrel example, it was probably one of the "under threshold" classifications of ResNet, i.e. tree was 95%, grass was 90%, but squirrel was 79%, so it got cut out of what was presented back to you. Mind you, this area went from "retarded" in 2011 to "better than human in many cases" in 2016. I know there are many low-hanging fruits and plenty of problems will still be out of reach, but some are getting approachable soon, especially if you have 1M ML-capable machines at your disposal.


>Now project that there will be 1,000,000 users like that, telling the app what exactly they meant and pointing to the proper places in the app. So this will be exactly the conversation you desire

That's not a conversation, that's a statistic. A conversation might start with a user showing me visually how they want something done. Then I may point out why that's not such a good idea and I will be asking why the user wanted it done that way so I can come up with an alternative approach to achieve the same goal.

In the course of that conversation we may find that the entire screen is redundant if we redesign the workflow a little bit, which would require some changes to the database schema and other layers of the application. The result could be a simpler, better application instead of a pile of technical debt.

This isn't rocket science. It doesn't take exceptionally talented developers, but it does require understanding the context and purpose of an application.


Sure, but an AI listening to you can be exactly that conversation partner, maybe by utilizing the "General's effect" - i.e. just talking about some topic gets you to the solution, even if the person next to you has no clue and just listens. Here the AI can be that person, and you can immediately see the result of your talk in the form of the changing app you are building, and easily decide something has to be changed. Initially the granularity of your changes will be large, i.e. the pre-baked operations will be simple. Later you can get more and more precise, as the AI develops and more and more people contribute more specialized operations.


Yes, some are very easy to implement, just look at squarespace's customers. One could build an NLP interface (bot?) to configure squarespace sites and this would take you quite far. Not sure I'd call that AI though.


It's a good start ;-) To get better you'd then have to enhance your meta-programming abilities as you see new possible cases opening up in front of you. We'll see how far this goes soon.

It's not general AI but even less-than-general AI can erode our ability to earn money from developing software.


Drag and drop website building is a monumentally easier challenge and hasn't obsoleted programming.


Sure, but very few regular people have the patience even for drag-and-drop websites. Imagine though that you just take your phone, tell it "make a website", then look at it and say "well, change the background to this photo", "hmm, place a gallery from a wedding there", "make a new page linked on the left and call it 'about me'" etc. And you see the changes happening immediately after you say it, and you can even correct it. This is doable today; you just need a set of "code blocks" that would allow you to generate a proper web app, and via engines like PhoneGap even mobile apps anyone can make.

Imagine you run a small business and need just some simple site with your contacts, and you are able to assemble it with voice in 10 minutes. That would be a complete game changer for most regular people.


Article author here. Posted this in the notes, but possibly too buried:

For anyone interested, I intend to write quite often about consumer technology on this blog. Topics will include hardware, software, design, and more. You can follow via RSS or Twitter, and possibly through other platforms soon. Sorry for the self promotion!

Thanks for reading. Please do send any corrections or explanations.


Great first post, hope you keep it up! Only feedback is that I had to reread this sentence several times before I was confident I'd parsed the negations correctly: "I also can’t imagine the Android update problem (a symptom of Linux’s modularity) won’t at last be solved by Andromeda, but one can never be too sure."


Thanks for the kind words. I debated that sentence, but you're right - always avoid double negatives. I'll fix it now.


I have nothing constructive to add aside from that I appreciate the simple uncluttered design of your blog. The typography and glasses logo are very pleasing to my eyes as well. Good luck and I look forward to reading more.


Thanks a lot! Hope you like my future articles.


Very nice article but I found the washed-out text hard to read on my phone, which is a 6+ so it's a decent size. I actually gave up and pulled out my iPad.


Hiroshi Lockheimer has publicly stated several times that there is no merger of Chrome OS and Android.

https://chromeunboxed.com/some-andromeda-perspective-hiroshi...

I think you alluded to this, "cue endless debates over the semantics of that, and what it all entails," but it might be worthwhile to add the official statement.


But there are frequent commits to multiple repositories in the Fuchsia code base[1]. I don't really see where Google is going with this if it's meant to replace neither Chrome OS nor Android.

Maybe a long-term project? I think Google is in a position where they can write a great OS from scratch, learning from the mistakes of others, and it has a chance of becoming the greatest OS that ever was.

With the talent of its engineers, they can bring new ideas that can be better implemented from scratch on a new OS. They already have a bunch of languages, web frameworks, and so many more technologies from Google that can be well integrated into this.

And looks like the project is mostly BSD licensed, which is great! I'm excited for just that alone.

[1]:https://github.com/fuchsia-mirror


This is like Apple saying they are committed to PPC right before they announced the intel transition, or Nintendo stating that "The DS will not replace the GBA". If they become successful in building that OS, that statement will be thrown away as something they had never said. If they stumble upon roadblocks while trying to build this, they will have this statement to back them up.

It's typical market-o-speech.

They are not actively working on this for no reason:

https://github.com/fuchsia-mirror/modular/commits/master


I recommend looking through all the open source commits :)


Wondering what your background is as you state you're not a programmer but you can obviously parse code (at least at a high level) and have familiarity with the OS ecosystem. Market research? Great analysis btw.


That's very kind, thank you. And yeah, you guessed right - I was an industry analyst for four years.


I added an Update section for anyone wondering why it looks like Andromeda really is the same as Fuchsia.


Why can't the pure web replace apps and programs? All the pieces are almost there: hardware acceleration, service workers, notifications, responsive design...

I currently "add to home screen" for most things. I edit my images online, and develop code using cloud9 ide, etc. There are few things I need apps/programs for right now, and that's improving day by day.

iPhone is dropping heavily in worldwide market share, but they still have a lot of the wealthy users. There is a non-zero chance they get niched out of prominence by Android (aka every other manufacturer in the world), at which point network effects start encouraging Android-first or Android-only development. There might come a point where Apple needs to double down on the web, and/or maybe kill off apps, like they did Flash, to still have the latest "apps".


Because it just sucks; browsers were designed for interactive documents, not applications.

No HTML5 UI/UX comes close to what is possible to achieve with native APIs in any platform.

For old dogs like myself, it always seems that younger web dev generations are rediscovering patterns and features we were already doing in native applications during the 90's.

Also, solutions like service workers look like some sort of kludge to sort out the problem of doing offline applications in browsers.

WebOS, ChromeOS (barely used outside US) and FirefoxOS are all proofs that the experience is substandard.


I take photos miles from where there's cell signal. I write code on the bus while heading to doctors appointments. The web is about as far from a panacea as you can get. It's slow, it's bloated, falls apart when you don't have a connection, useful applications die when the company dies. Were some of the midi devices I use for music "web-based" they'd have probably become doorstops decades ago. A web-based IDE would be horrible for trying to develop code with an intermittent connection. The web is not a good time.


The intermittency issues can be fixed but I agree that the dependency on web app providers and their fickle business models is scary.

The way it works is to funnel all the profits into a few huge conglomerates that benefit from exclusive access to all personal data and train users to never depend on anything that isn't a core product of one of these conglomerates.

Using their 80% margins they can afford to at least give us some time before scrapping software that doesn't look like it's ever going to reach 4bn consumers.

The result is stability. Until they all get toppled by the next technology revolution. Years later, regulators will crack down hard on some of the side issues of their former dominance and once again miss the currently relevant issues :)


The main reason I wouldn't want web apps, even if they somehow became as fast and integrated as native apps, to become the standard, is because they automatically update at the developers' whims. Vim won't change unless I make it change.


"Almost there" does not count. It has not just be there but be better at everything. And I will argue it is far far away from even "being here". I am getting tired of repeating this each time, but "web everywhere" folks simply have no idea what native SDKs offer.

> iPhone is dropping heavily in worldwide market share

And taking all the profits. Android being everywhere does not mean that every Android device is being used as a smartphone; quite often they are just replacements for feature phones.


Or on the tablet side, just replacements for televisions.


Because it's a horrible experience.

That's just my anecdotal view, but I have never tried a web based app (electron native app thing or webapp in the browser) that is as great an experience (UX and UI) as the best of the best native apps on Mac and iPhone, and I'm not sure it's possible to push web tech that far without reimplementing everything in the web stack and making it so close to native that we're better off just writing native apps.

EDIT: spelling.


> but I have never tried a web based app (electron native app thing or webapp in the browser) that is as great an experience (UX and UI) as the best of the best native apps on Mac and iPhone

I would say I have never tried a web based app better than average native apps. (Except for GMail, because I don't like the sync feature of mail clients).


What is the "pure web"? In its current state, the web is anything but pure.


This blog post makes waaaay too many assumptions. Reading it, it sounds easy to rip out kernels, OS & software and stack them like layers of a cake on top of a new OS. Even for Google this is crazy complicated. It will not be that easy. For sure not... and I also see no clear strategy for WHY somebody should do that. It's like baking the cake with too many ingredients. ;)


I can see the kernel thing happening. The licensing and the constantly breaking ABI are among the biggest factors in not being able to have an easily upgradable Android.

I only see this as a good thing if this ensures an easier upgrade path than in Android; and if vendor ROMs can easily be replaced by a stock OS (like on Windows).


I definitely cannot see the kernel thing happening. Ever thought of power management and keeping the whole system fluid? These are not easy problems that you solve in 1 or 2 years. It may only work for very specialized hardware... and speaking of hardware: Hardware driver support is also something most other Kernels suffer from in comparison to e.g. Linux.


> Hardware driver support is also something most other Kernels suffer from in comparison to e.g. Linux.

So?

Google doesn't have to support all hardware, they can pick to support only the hardware they want. That's what they already do with ChromeOS. Installing ChromiumOS on unsupported hardware can have its issues. The reverse is true too, installing not-ChromeOS Linux or another OS on Chromebook does not always work well, although it's fine on some specific models.

Android is like that too, and in a much worse way than for Chromebooks. We're not talking about stellar Linux kernel support for all the custom ARM SoCs that are out there. Every manufacturer writes its own closed-source hardware support for Android, and this is how Android ends up with update problems: whenever Google updates the Linux kernel, it breaks the ABI and all the support the manufacturers wrote for the previous version, and manufacturers do not want to spend time on what they see as needless busywork, i.e. keeping up with kernel API churn that exists just to satisfy the dev team's sense of perfection.


> IDEs written in Java are wildly slow…

My favourite IDE to use today is IntelliJ, and I prefer it over my experience with Visual Studio (though to be fair, I did not use VS intensively in the past 3-4 years).

I don't experience IntelliJ as "slow". It launches faster than VS did when I used it, and once it is running I keep it open pretty much the entire work-week without any issues.


Other than NetBeans and Eclipse being faster and not turning my dual core into an airplane the way Android Studio does (which forced me to enable laptop mode), not really.


I don't really understand what you are trying to say, sorry :/


There are steps in the Android tool chain build process that are slow (i.e. much slower than pure Java build) but IntelliJ/AS platform IDE is plenty fast on my (admittedly top-of-the-line) 2012 MacBook Pro.


It's an odd statement to make, given that he states he's not a programmer in the notes.


"Fuchsia" and "magenta" are pretty gutsy names to choose, given how similar it sound to Apple's vaporware "Pink" OS from the 90s (AKA Taligent, AKA Copland). Somebody has a sense of humor!

It's really hard to tell if this is actually something that will ship, or yet another Google boondoggle to be swiftly discarded (like the first attempt at ChromeOS for tablets). Google under Larry Page built and discarded a lot of stuff; I wonder if it's the same under Sundar Pichai.

https://en.wikipedia.org/wiki/Taligent


Sounds like a stretch having to go all the way back to the 90's to get a similar color code name.


It was the first thing those unusual names made me think of. But I'm a long-time Mac developer, so probably pink and purple colors as OS names won't have the same connotations for other people.


This could be the first time Apple needs to truly worry about Google. The one massive lead Apple still has over Google (and the other major players) is the incredible OS they inherited back in 1997 and continue to extend and maintain today.

Neither Android nor Windows nor Chrome OS nor your favorite Linux distro have ever been able to truly compete with the NeXT legacy as it lives on in Apple.

Google is smart enough as a whole to see this, and so it's not surprising that they're attempting to shore up their platform's competence in this particular area. What IS surprising is that it has taken them this long.

Perhaps what's truly surprising is just how much mileage Apple has gotten out of NeXT. It's astounding, and I know Apple realizes this, but I question whether or not they know how to take the next step, whatever that may be. And if Google manages to finally catch up...


> Neither Android nor Windows nor Chrome OS nor your favorite Linux distro have ever been able to truly compete with the NeXT legacy as it lives on in Apple.

I find this a funny statement. Apple has not seen runaway success in terms of market share, not on desktop platforms (where the top OSes are various versions of Windows), not on mobile platforms (where it is a distant second to Android in the worldwide market), not on server or supercomputer platforms (where it's effectively nonexistent).

Nor is it influential in terms of operating system paradigms. The only thing I can see people citing as a Darwin innovation is libdispatch. Solaris, for example, introduced ZFS and DTrace, as well as adopting containers well before most other OSes did (although FreeBSD is I think the first OS to create the concept with BSD jails)--note that Darwin still lacks an analogue.


It's not about market share; it's about profit share. Android/iOS may be 80/20 on market share, but they are 20/80 on profit.

Market share won't feed anybody. Profit is all Apple needs to care about; just look at their market cap and P/E ratio.


> This could be the first time Apple needs to truly worry about Google.

Er... what?

Apple has been worried, and actually threatened, by Google every day since 2008, when the first version of Android came out.

Without Android, Apple would probably have a 90% monopoly on mobile phones today. Saying they might be "worried" is beyond an understatement.

They are absolutely furious at Google, as Jobs was until he passed away.


Android's global market share, even with Apple, is 85%. Bear in mind, the iPhone is only popular in the US, which is responsible for most of the remaining 15%. Outside the US, Android is a completely unquestioned monopoly.


You seem to be responding to a different post than mine.


You know, I think on the second to last line on your comment, I transposed Apple and Android or something like that. :/


I'm in the minority, I know, but I don't like Material Design because it's terrible at "scaling." It looks great, it's beautiful, but you lose too much damn functionality. When I had to redo apps for Material Design we had to completely remove multiple buttons because they didn't fit Material Design standards. I really hope they have some way to alleviate this problem without using 50px icons for all the extra buttons.


I think this OS will be mostly for entertainment so there's no need for lots of UI, only "Play" and "Pause" buttons.


Why not bend the rules a bit before omitting vital components?


They weren't vital components, but they were useful for the user. We moved most of the "excess" buttons to the top bar and overflow menu, but still had to remove a button here and there completely (we still had the functionality in a different part of the app; it was just more tedious to use, from our testing).


So Google is going with the Dart VM on this one. Dart is cool and all, but why the Dart VM? It's the same restrictive model we have with Android (the Dalvik VM), where you can only develop in languages that can compile down to Java bytecode. In this case, however, we will be using languages that can transpile to Dart source instead! Why not JavaScript engine? With the current movement around WebAssembly, I see a lot of potential use cases. The biggest point being the ability to code in any language that compiles to wasm. The engine could be exposed to communicate with the OS directly, or something along those lines. If they are going to consider V8 alongside the Dart VM, that would be cool. I truly hope they don't repeat old mistakes.


> Why not JavaScript engine?

Having your bottom level language semantics be dynamically typed seems to place a real cap on application performance. Given that the underlying machine code is typed, I don't think it makes sense for the lowest level language you can target to be dynamically typed.

(WebAssembly is basically an acknowledgement of that fact.)

> The biggest point being the ability to code in any language that compiles to wasm.

Conversely, I don't think WASM is a great target language either. It doesn't include GC, and I don't think it makes sense for each application to have to ship its own GC and runtime. WASM is a good target for C/C++, but not for higher level languages like Java/C#/JS/Python/Ruby/Dart.

They say they intend to support GC, but they've been saying that for a while and I haven't seen much motion yet. I don't think Fuchsia can afford to wait around for that.
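
To make the "good target for C/C++" point concrete, here's a hypothetical sketch (in Rust rather than C, purely illustrative) of the kind of code that maps onto wasm without dragging a GC or language runtime along with it:

    // add.rs -- hypothetical example, not from any real project.
    // With a toolchain that has the wasm32-unknown-unknown target installed,
    // something like this builds into a tiny module with no GC and no runtime:
    //   rustc --target wasm32-unknown-unknown --crate-type=cdylib -O add.rs
    #[no_mangle]
    pub extern "C" fn add(a: i32, b: i32) -> i32 {
        // Plain typed arithmetic: exactly the kind of code wasm handles well today.
        a + b
    }

A Java/C#/Dart-style language compiled the same way would additionally have to bundle its collector and object model into every module, which is the "ship its own GC and runtime" problem above.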


Dart can compile to JS, so that edge is covered. Dart bytecode is arguably an easier target for other language compilers than, say, assembly. The DartVM's byte code is fully specified, there is some adoption, it's openly accessible, and there's a bold, production-ready reference implementation. For wasm, only half of that is true.


Compiling Dart to JS is a very different problem than compiling JS or other languages to Dart.


I understand what you mean and agree. But then, you don't need to compile JS into Dart. Rather, JS into DartVM byte code.


Conjecture is fun, but the linked piece takes some enormous liberties, crossing massive chasms effortlessly. Not only is Fuchsia not Andromeda (a project), the needs of IoT are massively different from the needs of Android. And the net investment in Android is absolutely colossal; some new APIs or a microkernel does not a replacement make.


Google could have taken Firefox and improved it to make it better, but they created something new.

Now, instead of improving the Linux stack and the GNU stack (the kernel, Wayland, the buses, the drivers), they rewrite everything.

They put millions into this. Imagine what could have been done with that money on existing software.

They say they are good citizens in the FOSS world, but ultimately they just use the label to promote their products. They don't want free software; they want their own software, which they control, and which they let you freely work on.


Everybody's modern Linux stack is built on 40-year-old designs and assumptions. While it's remarkable how well these still work so many years later, I welcome somebody like Google trying something new.


> Google could have taken Firefox and improved it to make it better, but they created something new.

This does not work. They were initially working on WebKit, but the technical decisions they wanted were different from the Apple/WebKit team's, so they had to fork. It is much better that they implement their technical ideas with their own resources.


Isn't Google paying Mozilla for Google searches made from the search bar in Firefox? Isn't that the main money flow for Mozilla? The more people use Chrome, the more money Google saves. It's all about the data: making Firefox better didn't benefit Google as much as creating a new browser. Now they don't have to pay other companies so much for their users' searches, and they have much more data that they can use internally for other products.


No, that ended in 2014. Google no longer gives Mozilla anything.


"The pitch will clearly be that developers can write a Flutter app once and have it run on Andromeda, Android, and iOS with minimal extra work, in theory."

How's that going to work? iOS, specifically? Is Dart a supported language?



From the first link:

"The engine’s C/C++ code is compiled with LLVM, and any Dart code is AOT-compiled into native code. The app runs using the native instruction set (no interpreter is involved)."

Thanks!


Google is just afraid of GPL I think.


Most likely, just like Apple, Google is getting rid of GCC.

https://android.googlesource.com/platform/ndk/+/master/docs/...

<quote>

Remove GCC

GCC is still in the NDK today because some of gnustl's C++11 features were written such that they do not work with Clang (threading and atomics, mostly). Now that libc++ is the best choice of STL, this is no longer blocking, so GCC can be removed.

</quote>


Handset vendors and, especially, mobile network providers, would rather ship their handsets with proprietary unmodifiable software that they can cram with antifeatures that will be that much harder to detect or remove.

Having said that, the reports I've seen contain scant actual evidence that Google actually plan for Andromeda to replace Android, or even that Andromeda is at all important to Google. I take these reports with a mountain of salt. I remember that the "Pixel 3" was definitely just about to be announced back in late 2016, and was definitely going to be running Andromeda OS.


So their strategy is to go full-blown closed source?


The OS Fuchsia, including the kernel Magenta, appears to be open source, mostly under BSD 3-clause, with parts of the kernel under MIT and similar licenses.

https://fuchsia.googlesource.com/fuchsia/+/master/LICENSE

https://fuchsia.googlesource.com/magenta/+/master/LICENSE

https://fuchsia.googlesource.com/


K8s is Apache 2.0. Who's to say they don't open the whole project when ready as they have in other cases?

Source: https://github.com/kubernetes/kubernetes/blob/master/LICENSE


Open-sourcing the product would help the competition catch up quickly; all they'd have to do is take Google's product and change its look. The other half is infrastructure, which companies like Microsoft, Facebook, Amazon and Alibaba all have. Plus, services like AWS will help future versions of Dropbox & Netflix.

A good example of that is Visual Studio Code. I am sure someone at GitHub (Atom's parent) is pissed.


Wikipedia says it's not a fork of atom but based on electron. Does that make a difference?


VS Code is MIT-licensed, though... And a standalone web app in a slightly modified browser window isn't a high moat.


I mean they just hate the GPL, not open source. Just look at the removal of the GPL-licensed BlueZ library in 2012.

[1]https://lwn.net/Articles/597293/


Or they've just had it with Linux and all of its baggage.


There's a ton of good reasons to replace Android beyond licensing. Security being the largest one. Fuchsia is designed with security in mind from the ground up, Android arguably is not (at least not from the sense of what is considered security today).


So in the near future billions of devices will no longer be running Linux? That would be quite a blow to Linux's chances of dominating the operating systems used by end users. Or will they simply fork it and strip it down until only the parts they really like remain?


This is also a concern of mine. What will this mean for rooting devices? Will it still be root, or will it be "root" as in an iOS jailbreak?


Rust as a first-class language?


I believe the article oversells Fuchsia's use of Rust. Raph Levien wrote some bindings to the OS runtime, and he does work at Google, but his Rust work is not official.

(or that's the story as I remember it)


Which is a shame; they are missing an opportunity to ditch C/C++ in favor of a safer language and set a precedent in OS history.

Imagine how much easier contributing would be if you could write the OS parts in fewer lines, guaranteed not to introduce most of the security and concurrency bugs we know about.


Ah, thanks for clarifying. I updated the article to be more conservative about Rust's inclusion.


ANDROid + chROME + DArt = ANDROMEDA?


re: "Flutter was [...]"

A bit weird to use the past tense here since it's not reached 1.0 yet. You can try it out today (tech preview) to create apps in Dart that run on Android and iOS:

https://flutter.io/

(Googler, not on the Flutter team itself, but working on related developer tools.)


Fair point! Just fixed that, thanks. I had only meant that Flutter was not originally intended for Andromeda, as far as I can tell :)


Trying to see the other side of the coin: what economic reason is there for this project?

A company the size of Google, with all its internal politics, doesn't work like a startup. Starting a third operating system project and hoping it will replace two major ones means convincing people inside the company to lose part of their influence. That might happen if Chrome or Android were failing, but they clearly aren't.


I updated the article with the following clarification at the top:

I use Andromeda equivalently with Fuchsia in this article. Andromeda could refer to combining Android and Chrome OS in general, but it's all the same OS for phones, laptops, etc. - Fuchsia. I make no claims about what the final marketing names will be. Andromeda could simply be the first version of Fuchsia, and conveniently starts with "A." Google could also market the PC OS with a different name than for the mobile OS, or any number of alternatives. I have no idea. We'll see.


I hope the userland is POSIX/Linux compliant. There's a TON of useful software reliant on this compliance that will go to waste if it isn't compliant out-of-the-box.


It doesn't seem to affect ChromeOS, iOS or Android.


As a sound/music app person, the inclusion of ASIO for audio is exciting. Google's new OS should be on par with iOS for sound, with ASIO audio drivers at the core.


AFAIK the Flutter UI framework is a React-like framework written in Dart (with C++ as OS glue), including the UI -> graphics rendering layer. It builds upon Skia and Blink. I am not sure how that will allow compatibility with other languages. The only language for UI apps looks to be Dart. Which isn't bad - it's a pretty well-designed language - but I don't see how apps can be written in a wide variety of languages as the author suggests.


>the main UI API is based on, yes, Dart

Won't Dart's single-threaded nature make it hard to take advantage of multi-core processors? Or are they embracing web workers?


Dart, in the context of Fuchsia, isn't really a web based language. So yes, it'll take advantage of multi-core processors.


Dart supports full concurrency via actor-like Isolates. Currently on the web, you need to use a web worker though, yes.

https://lucamezzalira.com/2013/06/11/isolates-how-to-work-wi...
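
For anyone who hasn't used Dart: the isolate model is roughly "no shared memory, only message passing." As a rough analogue (sketched here in Rust with plain threads and channels, not actual Dart or Fuchsia code), each worker owns its own data and only ever communicates by sending values:

    // Illustrative analogue of isolate-style concurrency -- not Dart.
    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel();

        // The "worker isolate": owns its data, runs on its own thread/core,
        // and reports back only via a message.
        let worker = thread::spawn(move || {
            let sum: u64 = (1..=1_000_000u64).sum();
            tx.send(sum).expect("receiver dropped");
        });

        // The "main isolate" just waits for the message.
        println!("sum = {}", rx.recv().expect("worker died"));
        worker.join().unwrap();
    }

The point is only that message passing, rather than shared mutable state, is what lets a runtime spread work across cores safely.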


The author calls it Andromeda OS, but is this really the Andromeda OS we've been hearing about? I'm not so sure about that. What we do know right now is that the OS is currently code named Fuchsia.

Fuchsia repository: https://fuchsia.googlesource.com/?format=HTML


There has been other reporting about this going back to last fall. I don't think Fuchsia is the marketing name.


Link to the source code:

https://fuchsia.googlesource.com/


The article says it's a microkernel; I wonder if it will be a more secure general-purpose OS. Windows NT started as a microkernel, but they changed that with NT 4; let's see if this one will be different. I also wonder about driver support and battery consumption. Good luck to Google.


> The pitch will clearly be that developers can write a Flutter app once and have it run on Andromeda, Android, and iOS with minimal extra work, in theory.

This is worrying for Apple. I can see the following playing out

- Apple continues releasing machines like the TB MBP, much to exasperated developer dismay.

- Other x86 laptop industrial design and build quality continue to improve.

- Fuchsia/Andromeda itself becomes a compelling development environment

- Developers begin switching away from Mac OS to Fuchsia, Linux and Windows

- Google delivers on the promise of a WORA runtime and the biggest objective reason not to abandon Mac OS, i.e. writing apps for iOS, disappears.

- Apps start to look the same on iOS and Android. iOS becomes less compelling.

- iOS devices sales begin to hurt.

Granted, App Store submission requires macOS (Application Loader), and the license agreement requires that you only use Apple software to submit apps to the App Store rather than writing your own, but it seems flimsy to rely on that.


Here is a link to the documentation:

https://fuchsia.googlesource.com/magenta/+/master/docs


I didn't see any explanation in the article of why those particular decisions were taken and not others. On the surface it feels like an OS insufficiently different from others to justify switching to it.


We definitely need a universal OS for all our devices and I really believe Google is in a great position to get us there.

It would really surprise me if Apple got there first. Tim lacks vision and will keep on milking iOS even if the iPad Pro is a failure as a laptop replacement.

Windows is still king in the desktop space, at least as far as user base goes, but it's terrible on tablets and phones. MS has all the tech in place with UWP, but it's still pretty far behind in terms of simplicity and usability.

Chrome OS ticks all the right boxes, and is experiencing huge growth, but it's not universal. If Andromeda is real, and it's able to become a universal OS that merges Chrome OS and Android, it might be the best thing since sliced bread.


You may "definitely need" a universal OS but I'm far from convinced I do.

I don't buy that a finger and a mouse/trackpad pointer are equivalent input devices. One obscures the display and is imprecise. The other, well, isn't.

I'm fine with a different server OS than desktop. I see no compelling reason why I need a single OS for all of my personal devices.


You listed windowing/UI concerns. And, while they may commonly come bundled with the OS, they're not the same thing.


They're a platform concern.

Not only does the OS UI reflect the input method, every single application you run does as well. A touch-optimized application will be clumsy and primitive when using a mouse; a mouse-optimized application will be miserable when using a finger.

If an application can be written once for both, often it'll be a poor compromise or only properly support one of the two input methods.

Look at how awful early Java "write once run everywhere" applications were. Yes, some of that was the Java platform itself, but developers were given the opportunity to ignore platform-specific UX concerns and many eagerly embraced it.


Again, the problem is pretending to use the same UI for all input methods. And I agree that is a mistake.

But what if you could use the same language and frameworks on all devices? The same tools (IDE, code editor, compiler, etc)? That's what I'm referring to.


I think a single platform for all our devices: mobile, tablet, desktop, etc, is a mistake. The word platform means more than just the OS and includes the entire L&F/experience with the UI.

I think a single OS for them all is fine.

Consider gnome+linux vs android+linux. They're both linux, and they're not the same platform.


By that definition, I could argue Apple got there years ago by using the same core kernel and frameworks for OS X and iOS.

That's clearly not what the original poster was arguing.


Well, it's the definition. I also believe that OS X and iOS have diverged somewhat in the kernel/framework space.


iOS and macOS are much more different than a change of UI.


The UI can adapt to the input devices; that's not an OS concern.


The UI is but a small part of an OS or an application.

It's obvious that the UI has to be different depending on the device, input, etc.


As much as I'd like to see it, I don't think Windows will ever quite get there. Not, at least, in any recognisable form. They value backwards compatibility way too much (which is to be commended in some ways, but won't save them here). While they seem to have everything lined up to be able to turn it into something, developers seem too reluctant to get on-board too.

Apple will have to go through all of the growing pains Microsoft has already weathered, so they're miles off, but maybe they can do it faster because they don't care as much about what people think of them, and are willing to do whatever they think is right (whether it is or not remains to be seen).

Google is well placed because, while they have a bunch of platforms, the only one in the desktop space is basically dead, and never really had much investment in it, so not too many would feel put out if they got rid of it. But that's not even the case, because they're able to engineer everything so that it stays relevant.


> developers seem too reluctant to get on-board too

Might be limited to your specific environment though. As a counter-example among the people I know there are far more .NET devs than the total sum of iOS/Mac/Android devs.


I'm not referring to .NET. I know tonnes of people getting on board the .NET train. I'm referring to the UWP apps. Nearly no one seems to be getting on board with those.


> I'm referring to the UWP apps.

Then I have to agree with you; UWP will go the way of Windows RT & the dodo, unless MS forces it as the main dev paradigm (which seems unlikely).


Limited to their specific environment? You mean the world?


Well there is life outside of the SV :-p


Does Tim lack vision, or does he just not share your vision? You might think we "need" this but not everyone agrees. Combining Chrome OS and Android is uninteresting to me, since I wouldn't be able to develop on it or really do any of the things I use a computer for. Unless they plan to really build on it and attempt to compete with more mature desktop operating systems.


He lacks vision, not because he doesn't share my vision, but because of everything he has done in Apple. That's not to say he is a bad CEO, he isn't IMO, but he is only polishing the same ideas from 10-15 years ago. I'm not even arguing that's a bad thing per se, it's just how things are.

Also, I really don't see how a developer could argue against the notion of having a single language, API, IDE, compiler, etc, for working on all devices.

> Combining Chrome OS and Android is uninteresting to me, since I wouldn't be able to develop on it or really do any of the things I use a computer for.

Time will tell.


> We definitely need a universal OS for all our devices

Wait, do we? Why?


I'd say to have a common interface towards hardware that is quickly becoming similar across smartphones, tablets, and even laptops (read Chromebooks)?

I think why they would _not_ have the same interface between similar hardware and similar applications is a better question.

As for the user interface and bundled applications, let's not confuse "operating system" with those, although that conflation is popular for some stupid reason. One and the same OS could of course have, e.g., completely different window managers adapted to different human-device interactions and use cases. But that's far removed from the actual OS, which is pretty much only concerned with how to run the software and expose the hardware to it.


Save man-centuries of platform-specific work?


Terrible on tablets and phone? Phones I can agree with but the surface line is getting to be really solid. I've been an iPad guy for years and my next tablet will be a surface.


A Surface isn't a tablet; it's a laptop with a touch screen. I've tried them and they are really nice as laptops, but they fall back to a finger-hostile UI way too often to be called tablets.


Indeed. The tablet aspect is pretty mediocre.


I had a Surface Pro 4 for a while and it is really great for the first few weeks but when the honeymoon period is over, the warts start to show. Slowly but surely I gravitated back to my iPad and laptop. The day the SP4 went back to Costco was when I realized it had been collecting dust for over a week.


Yes, we do. I want my phone and my laptop to be in total sync. I want to be able to write code on my mobile and just continue it on my laptop without any hindrance. Currently I have a Mac and an Android phone; I do have Go and Python installed on the phone, but it isn't that great to code on, and I have to host the repo on an internal instance of Gogs to get the code synced up, and even then I still have to manually push the code around.

All hail Universal OS!!


And can a universal OS be everything to a power user and still be useful to a beginner?


This is what Smalltalk was intended to be, back in the day. The answer is yes, but you may not be able to get more than a few people to adopt it anyways. (There were no smartphones back then, but it did get to the point where grade school children, professional developers, and researchers were using basically the same programming environment, just with customized interfaces for their particular group.)


Yes - a beginner who does not need the advanced stuff can just ignore it, right?

As of now, I never use Google Drive sync; I use Syncthing to sync folders between my mobile and my Mac.

A universal OS rocks! (I hope it is in line with my vision of what universality means.) I want to be able to just visually ssh between the phone and the machine - or rather, have the OS live on the phone and just connect it to the machine. There was some project like that recently; I don't know if it is available now.

Edit: Why the downvotes?


I don't know why the downvotes, but your "beginners can just ignore advanced stuff" comment strikes me as naive.

Beginners are typically terrified of anything they don't understand. I used to teach computers 101 to university students, and they'd pretty much freeze up every time they saw something they weren't expecting.


Why naive? I was a beginner in web development and Go three years ago. I wanted to learn how to build a webapp (I didn't, and still don't, know enough JS), and I didn't get lost in the big maze of Angular and React. I ignored the complicated stuff, learned how to write a webapp [1] using pure HTML, then learned Go, then wrote a webapp: a todo list manager [2].

Then, only after I was comfortable with everything, did I start learning AJAX. I chose Vue.js, migrated the app I had written in pure HTML [3], and wrote a guide about it [4].

I pissed off a lot of people on the VueJS project when I raised this issue [5]; yes, in hindsight I realize I was rude to them, but their documentation had a problem (and I had already said sorry in case I was rude).

I still don't know how to use websockets and whatnot, or how to write load balancers, distributed databases, web proxies, caches or the other things needed at scale. I choose to ignore them for now, until I build up the ability to understand them or until I need them.

This is what I meant by that statement.

>they'd pretty much freeze up

You know, I still remember my first C programming class in college in 2010 (I had not understood a single word); I froze and for a minute questioned whether I had made the right decision to enter computer science. The point is that newcomers need to know what they can ignore while they learn the basics; it is not possible to learn everything in one go.

Nobody teaches you real life, it just happens, and as a newbie it is my responsibility to learn in the best way I can without getting overwhelmed - and no, that is not a disparaging statement!

Also, this is why I started Multiversity [6], a YouTube channel which teaches by example.

[1] https://github.com/thewhitetulip/web-dev-golang-anti-textboo...

[2] https://github.com/thewhitetulip/Tasks

[3] https://github.com/thewhitetulip/Tasks-vue

[4] https://github.com/thewhitetulip/intro-to-vuejs/

[5] https://github.com/vuejs/vuejs.org/issues/565

[6] https://github.com/thewhitetulip/multiversity


You could always develop remotely, use your phone to ssh into a more powerful machine. Use tmux or screen and pick up right where you were on a laptop or desktop. This is far more compelling IMHO.


To be usable, SSH to a more powerful machine requires a good and stable connection. That's almost never available, in my experience. If I'm on my phone, I'm likely to be on trains/planes/in tunnels/in the countryside/etc.


Use mosh.


Tmux or screen? Wouldn't it be nice if there were a graphics-over-the-network system?


I remember the glee of the first time I got an X-Window application to run over the network. I was so confused though because the "x-server" is the software you run on your client machine.


You and the Unix Hater's Handbook authors.

There is a small terminology issue here: a "server" is a program that offers services to remote "client" programs. The clients make requests and the server responds to them. A client program will make a request like "allocate me a chunk of the screen and put these here bits in it", or "let me know about any of these events that happen". The server manages the screen and notifies the clients about things they're interested in.

IT MAKES PERFECT SENSE, DAMMIT!


This made me laugh (in a good way).

I agree, it actually does make total sense - but that doesn't mean I won't get confused :).

My only prior exposure to "GUIs over the network" were web applications, where the roles are essentially reversed. That is, the part responsible for accepting user input and rendering the UI is the client (the browser), and the part that performs the application logic is the server.

I naively assumed that X would work the same way, but it wasn't too hard to unlearn that misconception.


Are you being sarcastic?


Yes, I have to figure out the typing issue with mobile first!


Yeah you really need an external keyboard to be productive. And at that point the benefits are debatable


Why stop there? I want to dock my phone to the TV and work with a remote control, I want to dock it to the car and control it with steering wheel buttons.


> I want to dock it to the car and control it with steering wheel buttons.

...about that steering wheel...


Of course, that's the natural progression.


No, I don't want it in my car. If it connects to my home then I am fine.


On OS X all my stuff syncs between my iDevices


But I don't have an iPhone, I have an Android phone. I use Syncthing, but this isn't the kind of sync I'm hoping for; I want both OSes to be essentially the same, with built-in sync that goes over my LAN rather than through the internet.


Apple will never outright tell you this, but the answer to that is "you're using OS X wrong". I really think that for best results you have to be vertically integrated and have Apple gear for all the things you want to work together. If not, I guess you have to rely on someone making a third-party app that does what Apple does for its own devices.


What exactly are you trying to sync? For music/files rsync is great.


I have syncthing for syncing files, that isn't an issue. I want a universal OS which allows me to just plug my phone into my machine at home and it'll do magic and I can then continue my work. I have no other alternative than reading when I am on the bus. I want the mobile to be a native extension to the laptop


An operating system is about applications. Even if Google comes up with a universal OS, there will be many applications that only run on Windows, forcing people to continue using it.


And this is why we need an open-source clone of Windows, which is the king of desktop OSes. Geeks aside, average people don't care about operating systems; they care about being able to run their apps.

It's sad that ReactOS doesn't get more support from the community.


Exactly, Google is in the best position. Android is by far the most used OS/shell in terms of worldwide market share.

Apple's CEO lacks vision and is milking the status quo. Their iOS market share is a lot smaller and limited to a few devices.

Microsoft's CEO lacks vision and is aggressively milking the status quo. They aggressively try to enforce a switch to software-as-a-service, and they are now in the gray adware/spyware business, capturing way too much end-user data, which casts them in a bad light. And they killed their QA. Their products since 2010 have been a disaster. That's why the Xbox One tanked in every market besides the US, Windows Phone tanked worldwide, and Win8 and Win10 market share is a lot smaller than Win7's, which remains the major desktop OS - and there is little reason to switch away from Win7. MSFT would profit from a 180-degree U-turn under new management.


Do you really want Google, which already has DNS, fiber, a search engine, email, calendars, phone contacts, ads, deep learning, an app store, a CDN, analytics, a web browser, maps and power plants, to ALSO handle your desktop OS?

That seems very, very dangerous to me.

Actually we are already in the red zone; I just hope we don't go crimson. But I doubt people will care. They haven't up to now - no reason that changes.


What? God no.

We need competition. A single OS controlled by a single corporation with their own conflicting interests is a million miles from what we need.


We don't need a universal OS. I'd go as far as to say that a universal OS is bad. A universal OS leaves no place for experimentation, no place for different UI concepts, no place for people to customize things the way they want. Would you really want every shirt to look the same? Every refrigerator, every door? Style is important.


People usually confuse an OS with a UI.

Obviously different devices need different UIs.


> Windows [...] [is] terrible on tablets and phones.

Surface tablets seem to be doing OK though.


What percent of people regularly use them as tablets without the keyboard?


I think there are several different characteristics that make us categorise a device as a tablet vs. a laptop, input methods being just one of them; there are many others e.g. form-factor, weight, battery life etc.


My point is I don't think people are buying Surface Tablets, I think they're buying Surface Computers (that can operate as Tablets).


How many touch optimized apps are on the Surface? I haven't seen many. The majority are still desktop apps and using desktop apps on a tablet isn't a very good UI experience.


Yes I know where you are coming from but as I said in another comment I don't think that the input method should be the main element that defines the difference between a laptop and a tablet.


guys i hate to tell you this, but it's Feb 15 here in New Zealand, and Google has cancelled Andromeda.



I think it's intended for entertainment and content consumption, just like Android and Chrome OS. And Apple is trying the same by merging iOS and the desktop. How long will it take until all computers are set-top boxes where you can only Netflix and chill, and if you want, for example, to draw something, you have to buy a Professional Grade Computer for $50,000?


Wow! It supports Go, and since it has GLSL, this should have a nice UI.

https://fuchsia-review.googlesource.com/#/q/project:third_pa...


Does anyone have a more detailed explanation of the component called “modular”?


Honestly, I'd rather keep Linux and ditch the JVM.


NIH syndrome, plus large-organization people looking for job security, plus rough, bloated FLOSS full of maintenance and security-vulnerability hell, equals "the emperor's new clothes."


The comment about Java-based IDEs being slow is not entirely objective and fact-based. I'd say it's more of an emotional argument.



Quote: "I am not a programmer, so if anything stated above is incorrect, please, please send me corrections. I would also greatly appreciate any clarifying comments, which were one of my primary motivations for writing this piece." Essentially a bunch of nonsense, in other words.


Microsoft is opening up more and more, and Google is closing down more and more.


It would be a real shame if Google wasted this once in a decade or perhaps once in multiple decades opportunity to not have an OS written in a language other than C++.

Also, it would be mind-boggling if they didn't actually fix the update problem this time, and if it wasn't a top 3 priority for the new OS.


Too many negations, I can't parse. Do you want an OS written in C++ or not? :-)


It's written in C.

There is little beyond syntax that a different language can offer because a modern OS cannot afford features like garbage collection. Indeed, this was one of the research aims of MS's Singularity project.


They could have written it in Rust: no garbage collection, more security guarantees. Easier to contribute to the code properly.


Rust performed 3x slower, and hacking around the language made it somewhat of a mess [1]. Much like Singularity, this is hardly a success story. And although Singularity was interesting from a research perspective, nobody doubted that an OS could be written in Rust.

[1] https://scialex.github.io/reenix.pdf


That paper is very old, from before Rust 1.0. There was also a lot of discussion at the time about ways they could have used Rust better, IIRC.

Today, there is no reason Rust should ever be 3x slower, especially in an OSdev context, where you currently have to use nightly.
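
For what it's worth, the "can't afford garbage collection" objection upthread doesn't really apply to Rust: a freestanding no_std module carries no collector and no mandatory runtime. A minimal hypothetical sketch (not Fuchsia/Magenta code), assuming a current toolchain with a bare-metal target installed:

    // kmod.rs -- hypothetical freestanding module: no std, no allocator, no GC.
    // Built for a bare-metal target, e.g.:
    //   rustc --crate-type=staticlib --target x86_64-unknown-none -C panic=abort -O kmod.rs
    #![no_std]

    use core::panic::PanicInfo;

    // A function a C kernel could call directly; no runtime init required first.
    #[no_mangle]
    pub extern "C" fn kmod_checksum(data: *const u8, len: usize) -> u32 {
        let bytes = unsafe { core::slice::from_raw_parts(data, len) };
        bytes
            .iter()
            .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(u32::from(b)))
    }

    // Required in no_std builds: decide what happens on panic.
    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }

Whether that ends up slower than the equivalent C is then a codegen/optimization question, not a language-model one.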



