I just did an interesting side-by-side test of scrolling within a browser displaying Google on a new iPhone 4 and an older Android phone (HTC Incredible). I didn't actually see much difference in how "smooth" or "choppy" the scrolling was, but there were distinct differences in how they handled fast motions.
The iPhone seemed better at keeping up with a fast moving finger. If you put your finger on a line at the top of the screen, and scrolled quickly until your finger was at the bottom, the same line would still be under your finger. On the Incredible, the line you started on would only end up 2/3 of the way down the screen.
More surprisingly, this seemed to happen to a lesser extent even on very slow scrolls. Put your finger down on a line, drag it to the bottom, and the amount scrolled is always significantly less than the distance your finger moves. Rather than being a rendering problem, this seems more like an event loop that's throwing away some of the events!
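For what it's worth, Android doesn't literally drop intermediate touch samples: it coalesces ACTION_MOVE events between frames and exposes the skipped samples as "historical" points on the delivered event. A minimal sketch of reading them in a custom view (handleMove and the enclosing view are hypothetical); whether the Incredible's browser actually does this is another question.

    // Sketch: recovering the coalesced touch samples Android batches between frames.
    @Override
    public boolean onTouchEvent(MotionEvent ev) {
        if (ev.getActionMasked() == MotionEvent.ACTION_MOVE) {
            for (int i = 0; i < ev.getHistorySize(); i++) {
                // Intermediate samples recorded since the previous delivered event.
                handleMove(ev.getHistoricalX(i), ev.getHistoricalY(i));
            }
            handleMove(ev.getX(), ev.getY()); // the most recent sample
        }
        return true;
    }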
Besides the event flooding mentioned previously, the built-in Android GestureDetectors leave something to be desired. In the revamped Firefox Mobile that's under development we're noticing all sorts of bugs (e.g. http://bugzil.la/706684) that are related to gesture detectors malfunctioning, and we already reinvented the kinetic panning gesture detector because we were unhappy with the responsiveness of the built-in one (it does too much rounding).
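For context, this is roughly what using the stock detector looks like; panView and flingView are hypothetical, and the rounding complaint above is about the internals of the stock kinetic panning rather than these callbacks specifically.

    // The built-in android.view.GestureDetector being discussed.
    GestureDetector detector = new GestureDetector(context,
            new GestureDetector.SimpleOnGestureListener() {
        @Override
        public boolean onScroll(MotionEvent down, MotionEvent move,
                                float distanceX, float distanceY) {
            panView(distanceX, distanceY);   // called as the finger drags
            return true;
        }

        @Override
        public boolean onFling(MotionEvent down, MotionEvent up,
                               float velocityX, float velocityY) {
            flingView(velocityX, velocityY); // kinetic panning starts from this velocity
            return true;
        }
    });
    // In the view: return detector.onTouchEvent(event) from onTouchEvent().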
I have to kind of wonder if Apple is buffering touch events on their controller and then sending them onward to the UI as postprocessed events and gestures.
Maybe I'm the only one left with a boggy 3G running iOS4, but when the system lags down I can usually keep typing and the keystrokes will catch up. Does the same thing happen on Android?
If gesture processing is being done on the controller side, that would take a lot of perceived bog out of the UI.
This post basically amounts to: "There is no problem with graphics on android, it was designed to work this way because of the open nature of android, and other excuses"
Well, the design is broken then. Clearly, it is possible to achieve a smooth UI as demonstrated by iOS and have a solid and stable system. This post is actually worse than saying "we can't fix it" -- it's denying a problem exists in the first place, because "hey, we do it just like iOS." Except it doesn't work like iOS.
I recently went back to iPhone because I got so frustrated with Android's jerky UI on my 1-year-old phone (Galaxy S). I went back to an iPhone 3GS that runs iOS 5 faster than the Android ran Froyo (which, btw, I had to jump through 30 different hoops just to upgrade from 2.1). The takeaway is this: Android lost a customer because of slow, laggy UI, and I'm sure I'm not the only one.
As someone who prefers iOS to Android I found this article very enlightening (more so than the earlier article) and I appreciate the design tradeoffs more.
I don't think the problem is that Android has jerky animations, it's that it had certain design tradeoffs from the beginning that were all about its security model and allowing third party apps but not about the kinds of UX stuff Apple obsesses over. That's fine -- I don't actually think that all the animated UX crap adds that much value. I like it, but I could live without it.
I know there's this cult of "delightful" UX design with animated window transitions and so forth, but every time an app stalls or stutters while trying to animate a transition gratuitously it pisses me off. (Make no mistake, some apps use transitions very skillfully to help the user retain context, but these are usually very simple apps. There are plenty of things I'd fix in Photoshop's UI but adding transitions is not one of them.) I know plenty of Mac OS X users who love Apple to death but switch off every UI animation they can. Let's put it another way -- I love Dropbox and don't think adding "UX" crap to it would improve it in any way.
The problem is that having made these tradeoffs in Android, either Google or the Carriers then tried to layer a bunch of half-assed UX crap on top of it, and surprise it doesn't work so well.
I'd love a mobile or desktop OS that was all about functionality (which means good, attractive UI design but doesn't require UX frippery). But what we get instead is something caught between two worlds.
I initially fell in love with the freedom of Android, all the potential that came along with it, and the apps I could never have on iOS. Here's my post some 500 days ago: https://news.ycombinator.com/item?id=1586373
The funny thing is, I still love the idea of Swype, and using the phone as a USB stick. I just fell out of love with Android because, frankly, the UI lag made it unusable for me. For example, I'd want to SoundHound or Shazam a song: click the icon, then wait _at least_ 30 seconds for the UI to come up (by which time the song could be over), click the "listen" button, and have the UI freeze for another 10 seconds without any indication of what's going on (did I hit the button? Should I tap it again? Is that going to cancel my request if the UI is just slow? I have no idea! Even the spinning activity indicators would freeze). The 2.2 update also broke Swype so that I couldn't delete a word from the dictionary: I would type "don't" and it would spit out "Duong" (some person's last name I typed in once), delete it, only to have the same problem next time (I tried things to fix it, but no luck).
Anyway, I don't expect a fancy UI with animations and transitions, but I do expect a responsive UI -- this is why I went back to 3GS: it works. By the way, I'm one of those people that turns off most of the UI animations (they slow me down).
I haven't used Android (aside from on my Kindle Fire) enough to form an educated position on it. The Kindle Fire's UI is both (a) gratuitously animated and (b) laggy, but I tend to assume that if the UI were more utilitarian it would be less laggy. If this is not the case, then Android is simply broken.
It still seems to me that there are different routes to implementing UI elements in Android and some work better than others. E.g. some aspects of the Kindle Fire's UI are consistently inconsistent as it were, while others seem to be perfectly solid. If I were in charge of Android I would focus on getting the basics right and forget about the eye candy. It's not that Android can _never_ compete with iOS (or whatever) in the eye candy department, but that you should crawl before you walk etc.
My impression of Linux from my various encounters with it (including trying to live with Ubuntu as my primary OS for a couple of months) is that it fails both as a utilitarian OS and an aesthetically pleasing one. (And yes, I'm sure there are plenty of suggestions for alternate UI systems from KDE to Fluxbox that will somehow not have whatever issue I find.)
I agree wholeheartedly. I didn't use a Fire for longer than 10 minutes, but my impression was exactly like yours, gratuitously animated and laggy.
Inconsistency is not just limited to the UI elements on the Fire - for example, Fire doesn't have the pull-down "drawer menu" UI element (from the top) like all other Android devices. Of course, the nature of Android is such that it allows manufacturers to customize, but at this point it is diluting the brand.
Regarding your Ubuntu point: right on. It has come a long way in the last 5 years, but things just seem hacked together. Linux is a great server OS, but it is just not a good, balanced, productive desktop OS. Users need consistency, but it seems like developers focus on making flashy animations (e.g. the rotating desktop cube; exciting stuff) instead of putting emphasis on use-cases, workflows, etc (boring stuff).
Funny, what I read was: "This is how Android graphics work, criticize it or praise it, but don't go off explaining it in your own words using wrong facts or incorrect assumptions and telling people you are an expert on the subject"
That was just one part of the statement, and I agree with that general sentiment. However:
"There are certainly some advantages to how iOS work, but this view is too focused on one specific detail to be useful, and glosses over actual similarities in how they behave...
...This is very different from iOS’s original design constraints, which remember didn’t allow any third party applications at all."
Really, who cares if they allow third party apps or not? I don't have a jailbroken iPhone, but I've seen jailbroken apps run smooth as butter, so this mystical "Apple quality stamp of approval" argument goes out the window.
The post continues:
"In fact it was just not feasible to implement hardware accelerated drawing inside windows until recently. Because Android is designed around having multiple windows on the screen, to have the drawing inside each window be hardware accelerated means requiring that the GPU and driver support multiple active GL contexts in different processes running at the same time."
Translation: we designed the system to behave in a way which didn't utilize system resources the best way possible. I hope that this is fixed now, but I have my doubts.
"I saw an interesting comment from Brent Royal-Gordon on what developers sometimes need to do to achieve 60fps scrolling in iOS lists... I am no expert on iOS, so I’ll take that as as true. These are the exact same recommendations that we have given to Android’s app developers, and based on this statement I don't see any indication that there is something intrinsically flawed about Android in making lists scroll at 60fps, any more than there is in iOS."
Nobody said iOS is perfect; yes, they have their own flaws (e.g. Safari crashes every once in a while, the 3GS sometimes runs out of memory in iOS 5, etc), and finding an (edge?) case in which the UI is not smooth on a competing platform seems petty.
Really, who cares if they allow third party apps or not?
If you paid attention to the security concerns mentioned, it's pretty obvious that Apple was able to sidestep a lot of problems by ensuring a walled garden. They test and approve apps, and they prevent the trivial pushing of broken or malicious things out to the customer (mostly).
Now, I understand that you might not have the background in systems engineering of the folks on the Android team, but at least as an intellectual exercise please consider how allowing unsecured third-party apps might impact the sorts of corners that can be cut in your implementation.
Translation: we designed the system to behave in a way which didn't utilize system resources the best way possible.
Please, by all means, explain to me how lightweight an OpenGL context is. Tell me about the ease of instantiating multiple copies of an oh-so-simple state machine and having them reasonably timeslice on the GPU, especially when you don't control the hardware vendors. Please, tell me more.
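For a sense of scale, here's a rough sketch (EGL 1.0-era Java, the kind of thing GLSurfaceView does internally) of what merely standing up one GL ES 2.0 context involves; the magic numbers are the standard EGL_OPENGL_ES2_BIT and EGL_CONTEXT_CLIENT_VERSION values that the old EGL10 bindings don't name. Every process that wants accelerated drawing needs its own one of these, plus surfaces and GPU memory behind it.

    // import javax.microedition.khronos.egl.*;
    EGL10 egl = (EGL10) EGLContext.getEGL();
    EGLDisplay display = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
    egl.eglInitialize(display, new int[2]);

    int[] configSpec = {
        EGL10.EGL_RENDERABLE_TYPE, 4 /* EGL_OPENGL_ES2_BIT */,
        EGL10.EGL_NONE
    };
    EGLConfig[] configs = new EGLConfig[1];
    int[] numConfigs = new int[1];
    egl.eglChooseConfig(display, configSpec, configs, 1, numConfigs);

    int[] contextAttrs = { 0x3098 /* EGL_CONTEXT_CLIENT_VERSION */, 2, EGL10.EGL_NONE };
    EGLContext context = egl.eglCreateContext(
            display, configs[0], EGL10.EGL_NO_CONTEXT, contextAttrs);
    // ...followed by eglCreateWindowSurface() and eglMakeCurrent() before any
    // drawing happens, and the driver has to juggle this state for every such
    // context it hosts at once.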
finding an (edge?) case in which the UI is not smooth on a competing platform seems petty.
It really does, doesn't it? Especially when such smoothness is a tradeoff in favor of increased security.
> This post basically amounts to: "There is no problem with graphics on android, it was designed to work this way because of the open nature of android, and other excuses"
Java is memory safe, and classloaders have been used since forever to sandbox applications (remember applets?). The whole 'designed to be open' argument is nonsense.
Let's face it, Android was hacked together by a small team on a deadline. It doesn't seem like a lot of thought went into the design and architecture of the system. The code is buggy, there are no tests, and half the tree doesn't even have comments. Bug reports are ignored, and there are about 22k open issues that are not even being triaged. Case in point: http://code.google.com/p/android/issues/detail?id=19078 (not fixed in ICS)
Android is a rush-job, and the team's obvious lack of experience in handling large software projects is biting them in the ass. Rather than making excuses for their botched together software maybe they should just get their shit together.
Your mileage may vary on the quality of the team's job.
From where I stand Android is one of those big corporate projects that lets people see its source code. The ugly parts are on display for all to see, point and rage at, like you just did.
The point is that openness is irrelevant in this discussion. I really don't understand why the author of the post attempts to wage the open-vs-closed war, except to distract from the real issue: Android's UI is laggy.
> Well, except that Android allows apps to run arbitrary native code using the NDK.
I was under the assumption that they can support native code because of the way they designed Android, not the other way around. I highly doubt that supporting native code was the design rationale here. I think they just designed Dalvik to use Linux threads and assume it to run inside its own process. That certainly makes things much simpler for the VM. The fact that you can safely load native code follows naturally.
Still, supporting native code does not have to dictate how you ensure memory protection, or whether each app has to run inside its own process. You can always run a native library (i.e. a video codec) inside a separate process and use memory-safe communication with the process that runs the Java app. You can also go the NACL route.
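As a hypothetical sketch of that suggestion (the service name, library name, and manifest entry are all made up): the untrusted native code gets loaded into its own process, declared with android:process=":codec" in the manifest, and the app only ever talks to it through Binder messages.

    import android.app.Service;
    import android.content.Intent;
    import android.os.Handler;
    import android.os.IBinder;
    import android.os.Message;
    import android.os.Messenger;

    public class CodecService extends Service {
        static { System.loadLibrary("somecodec"); } // the untrusted native code lives here

        private final Handler handler = new Handler() {
            @Override
            public void handleMessage(Message msg) {
                // Decode the buffer from msg.getData() and reply via msg.replyTo.
            }
        };
        private final Messenger messenger = new Messenger(handler);

        @Override
        public IBinder onBind(Intent intent) {
            return messenger.getBinder(); // clients only ever see a message pipe, not our memory
        }
    }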
There's absolutely no reason why supporting native code would result in a choppy UI on Android.
Yes, you can run arbitrary code using the NDK, but you're still sand-boxed into your application's process. Android relies on the Linux security model where each app runs in its own process as its own user. As such, just like with a Linux application, you'd have to find some sort of privilege escalation exploit before you can start mucking around at will.
Devil's tongue:
"This is very different from iOS’s original design constraints, which remember didn’t allow any third party applications at all."
This argument sums up the whole article's intent. I'm not using iOS. I dislike Apple's way of controlling its products. But I know BS when I see it.
iOS's security and sandboxing, while technically different, is practically similar to Android's. Each app is sandboxed.
Oh, and guess what: the App Store has let you install third-party apps since before Android became Google's Android.
And as for the thing that actually matters: Android uses multiple GL contexts by design, iOS doesn't, so Android is slower.
iOS has multiple processes on the system using hardware acceleration at the same time, and always has. A few examples are the system-rendered status bar at the top of the screen, or the "Notification Center" that you pull down from the top, or even the little volume adjustment popups. All of those are hardware accelerated and rendered out-of-process in the system UI manager ("SpringBoard"), not inside the app.
All of those have smooth scrolling and animations at the same time, overlaid on top of each other. I don't think there's anything here in iOS that shows the lack of "multiple GL contexts", and I definitely don't think that is a significant contributor to making anything slower.
Probably the key paragraph, in relation to the recent discussion:
> In fact it was just not feasible to implement hardware accelerated drawing inside windows until recently. Because Android is designed around having multiple windows on the screen, to have the drawing inside each window be hardware accelerated means requiring that the GPU and driver support multiple active GL contexts in different processes running at the same time. The hardware at that time just didn’t support this, even ignoring the additional memory needed for it that was not available. Even today we are in the early stages of this -- most mobile GPUs still have fairly expensive GL context switching.
So an attempt at a brief summary of (the non-thread-priorities portion of) her post:
The Android model of providing rendering isolation between apps/windows has limited hw acceleration of window contents rendering, which is a cause of some of Android's lagginess. That iOS forgoes this isolation and achieves better responsiveness is a reflection of a different security philosophy.
If this is an accurate summary, it's an interesting example of how platform design is a wicked problem with complex interactions between concerns that you might expect to be orthogonal.
That sounds accurate to me. I'm not sure if it's simply that iOS forgoes isolation or if they've implemented isolation via a mechanism that allows the context to be shared without security consequences.
On iOS, all standard rendering is done in a single context by the Core Animation window server, which lives in SpringBoard. Only when an app adds an OpenGL ES layer to the view hierarchy does a separate context need to be created. When that happens, the render graph is split into subgraphs that are rendered to surfaces and displayed as overlays (with SpringBoard rendering all the standard layers and the app rendering the OpenGL layer).
That might have been true before, but my understanding is that the newer mobile GPUs (coming out now, 2012, etc) have pretty much free context switching. Or so the vendors like to claim.
Even if it's not true, shouldn't Google be pushing for the GPU drivers to be fixed? This is a pretty obvious, huge flaw [that has been pretty much solved on the desktop].
There is one thing that can be observed here, imho: iOS was made to be sleek right from the start, and to get there they sacrificed pragmatic system design, which crimped a lot of flexibility; flexibility such as third-party input methods now has to be implemented from the system up.
As an Android user, given my (now) better understanding of the Android architecture design, I can happily and safely say that Android was made for the future, and is only gonna get better from here on.
As someone who spends a significant amount of time reverse engineering how iOS works, that is incorrect. Currently, the iOS keyboard is managed by the app itself (and I agree: this was likely chosen originally for performance). However, iOS does not expose any APIs that would break by moving this out of process, and in fact, many parts of the UI while inside an app are rendered and displayed out of process, by a separate application (in this case, the window manager, "SpringBoard"): the "Notification Center" is one, but also the "multitasking tray" and the little overlays you get when you change the volume.
The reason that the keyboard is required to be implemented "from the system up" is that Apple's idea of an "App Store" is fundamentally opposed to the idea, not because of any technological restriction. Apple only sells "apps" with "icons" (by choice), but this gives them one fundamental advantage: all apps are identical, and completely self-contained. I don't agree with that — I use a jailbroken iPhone, for one. But I definitely do see the value for users who don't know what the idea of "an alternate keyboard" would even mean.
(On a jailbroken iPhone, I could likely move the keyboard out of process with a week of work; Apple could surely do it even faster. They just don't have any need to: there is no possible situation where, to them, the "flexibility" of alternate input methods is superior to the mental simplicity of just selling an "app" with an "icon".)
iOS applications also run in separate processes and are just as secure as Android apps. There's nothing "made for the future" in the poor up front design choices the Android team made early on.
In fact, we can clearly see that the iOS team made all the right decisions because drawing/compositing in iOS on a 3GS is faster and smoother than on a shiny new Android Inspiration XL HD 4G.
Also, Dianne shows a fundamental misunderstanding of how drawing/compositing works on iOS. So this entire series of posts is just moot. The other guy (Windows Phone guy) had no idea how Android or iOS worked, either.
The most annoying thing about all this is that none of these views are based on actual facts -- and we all know what happens when someone is wrong on the Internet.
At the end of the day, who cares which one is better or how it's implemented? I don't care about the "Android has windows" explanation because it's irrelevant to me as an end user. It's all about perception.
This entire Android vs iOS thing is getting long in the tooth.
Edit: also, it's going to take a long, long time to be able to pull off something as awesome as Core Animation. Or Core Text. Or Core Graphics. These things have been stewing in OS X for more than a decade.
First, what poor design choices did they make? The entire post was attempting to debunk the perception that there was some simple decision that leads to the issues people describe. Second, I don't believe Dianne has a fundamental misunderstanding of anything regarding a mobile OS as she is one of the most preeminent people in the field.
It's frustrating and annoying to read posts like this where you try to walk an "I'm impartial" line and then pepper it with biased remarks that are misleading. You are unlikely to get a more factual discussion of the situation than the post she offered, as she's a credible expert who has written a not insignificant portion of the OS under discussion.
It's clear that iOS has some real technical achievements they should be proud of, but your desire for facts about the situation is subverted by your very own use of statements like "drawing/compositing in iOS on a 3GS is faster and smoother than on a shiny new Android Inspiration XL HD 4G" without anything to support your position. It's not a fact just because you state it.
Well, she does (have a fundamental misunderstanding of something regarding a mobile OS): she is an Android developer, and it is pretty clear from this article that she has no clue what she is talking about when it comes to iOS.
Seriously: you should trust Xuzz from this thread much more than Dianne about issues related to iOS surface compositing. Xuzz worked on jailbreakme with comex, did much of the work for the recent Siri-on-non-iPhone-4S ports, and has written numerous modifications to the code of SpringBoard (the main window manager and graphical shell process on iOS) over the last few years he has spent reverse engineering that system, and most of his posts on this thread are corrections to statements that Dianne seems to have simply made up for this article of hers.
(For "people to watch" on this thread, I should also point out ryanpetrich, who is responsible for such modifications as DisplayRecorder (which records the iOS framebuffer to a video file) and DisplayOut (which splits the framebuffer out to a television), both available for jailbroken devices.)
In what way exactly is Skia (the vector graphics library that powers Android and Chrome) any less "awesome" than Core Graphics?
The same goes for Core Text or Core Animation - they are not doing anything especially hard or mind blowing. Android displays text just as well and has animation APIs that are arguably easier to use than the Objective-C APIs.
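For instance (a minimal sketch; view stands in for any android.view.View), a basic fade with the Android 3.0+ property animation API is one chained call:

    // Fade a view in over 300ms using the property animation framework.
    ObjectAnimator.ofFloat(view, "alpha", 0f, 1f).setDuration(300).start();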
It may just be the (in my opinion at least) bad typefaces Android ships with, but you're the first person I've heard say that Android's text rendering is as good as iOS's... Perhaps it's also that most of the Android devices I've used have had worse displays than my iPhone's, but I've not found Android's text to look that good...
Be sure to take a look again with a phone running Android 4.0 when those are more available: the new "Roboto" typeface is pretty good on higher-res screens, especially compared to the "Droid Sans" they were using before.
I completely disagree. As someone else who reverse engineers how iOS works, one of its core principles is in fact flexibility. Almost every key feature in iOS - search, backup, Siri, sharing, etc. is implemented using bundles, which are modules of code. This system provides a convenient interface for Apple and jailbreakers alike to add functionality to the OS.
Your assumptions are based on the fact that no official APIs are provided to modify the OS in a sweeping way. Apple could very easily implement such a thing if they wanted - they would simply have to shift some code around. The reason Apple does not do this is because Apple favors simplicity over flexibility, or rather, maintains a balance between the two.
However, if iOS is able to evolve additional flexibility as it is needed without too much pain, then perhaps they made the right design choices at the start?
I think both Google and Apple made the right design choices at the start. They just made different ones because they had different business goals.
Google wanted Android to be ubiquitous and free to quickly evolve into whatever it needed to be to be in order to capture territory in the mobile device world. That means supporting a wide range of devices, technologies and, most importantly, partners (both hardware, software, carriers, etc.). So Android was designed for flexibility and has extension points everywhere (input frameworks, sharing, alternative browsers, launchers and so on). Google gave up performance along many dimensions in order to maximize Android's adaptability.
Apple is very different. They have a vision and want to build that vision as precisely as they can because that is what has driven their success. That means Apple doesn't want hardware partners. And even at the software level, that means they are very careful about who and what they allow into their user experience.
You can see this in how they built new features for iOS 5: Apple added a split keyboard for the iPad rather than adding support for alternative keyboards. They bought Siri rather than exposing relevant extension points. Apple baked Twitter into iOS 5 as a one-off, rather than riffing off of Android's sharing (as they did with notifications). And so on. These are all business decisions, not technical ones (as Cydia often proves). Instead, iOS's technical architecture flows from Apple's goals in the same way Android's flows from Google's.
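To make the "Android's sharing" extension point concrete (a hypothetical snippet, assumed to run inside an Activity; the URL is made up): sharing is just an implicit intent, and any installed app with a matching intent filter shows up as a target.

    // Offer a piece of text to whatever share targets the user has installed.
    Intent share = new Intent(Intent.ACTION_SEND);
    share.setType("text/plain");
    share.putExtra(Intent.EXTRA_TEXT, "http://example.com/article");
    startActivity(Intent.createChooser(share, "Share via"));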
You're inferring engineering decisions based on product outcomes. You have no idea how the various extensions you describe were implemented, and with what regard to future extension.
For example many people claimed early on that iOS 'didn't support multitasking', which was correct at a feature level, but not at an engineering level.
The part about the graphics contexts and process limitation is key, and an area where iOS is far ahead of Android, since iOS does not have that limitation.
(Neither does Mac OS X, and it took Apple years and years to get to that point -- each year at WWDC you'd see more and more of the UI hardware accelerated.)
The Android sandboxing feature is frequently invoked as providing dramatic security advantages, but there's really no reason to believe it.
Local privilege vulnerabilities [edit: enabling escape from the sandbox] in the Linux kernel are a dime a dozen, and Android phones are notoriously outdated wrt security patches.
There simply hasn't been a lot of OS-level vulnerability exploitation in Android malware so far. Probably the biggest reason is OS fragmentation, plus the fact that malware apps can just request any privileges they need on the Market and users will haplessly click accept; the biggest threat to the malicious apps is Google removing them from the Market when they're found out.
It's easy to understand the point that window management in Android is complex and this complexity hampers graphics performance. On iOS, one process has the entire screen at a time. There are no widgets, no plugin viewports, no 3rd-party UI - but all this stuff is happening on Android. The question becomes whether this is actually a good paradigm to have for a mobile platform and whether this compartmentalization mechanism is really needed.
For example, take this statement: "This [window management technique] is also why Android can safely support third party input methods." Clearly, the perception at Google is that this model makes Android safe, and in a way that's understandable because the platform is way more open than iOS and thus needs more inherent resistance. But look at the concept of "safe support" for third party input methods: does compartmentalization of UI operations in this case actually make, say, 3rd-party keyboards safe to use?
It all depends on what you mean by "safe". It is certainly possible to create input methods that behave as spyware, that's why Android puts a big fat warning telling you so whenever you switch to a third-party input method.
I think what is meant by safe in this context is that an app that is not an input method cannot pretend to be one. Likewise, the interface between the input method and applications means that an application cannot normally wire itself into the input method and monitor everything the user types.
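A hypothetical sketch of that boundary (the class name and layout are made up): a third-party keyboard extends InputMethodService, runs in its own process, and only reaches the focused app through the InputConnection the framework hands it. The manifest declaration also requires android.permission.BIND_INPUT_METHOD, which is what stops a random app from pretending to be an IME.

    import android.inputmethodservice.InputMethodService;
    import android.view.View;
    import android.view.inputmethod.InputConnection;

    public class SketchKeyboard extends InputMethodService {
        @Override
        public View onCreateInputView() {
            // Hypothetical key layout; wire its key handlers to onKeyTapped().
            return getLayoutInflater().inflate(R.layout.keys, null);
        }

        void onKeyTapped(String text) {
            InputConnection ic = getCurrentInputConnection();
            if (ic != null) {
                ic.commitText(text, 1); // the only channel back into the target app
            }
        }
    }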
Yes, exactly, it's very open to debate what safety in this case actually means.
I think what is meant by safe in this context is that an app that is not an input method cannot pretend to be one.
But is that really a concern? If you're putting together a spyware app disguised as a keyboard, why not just include a keyboard and "fool" this protection by actually providing the advertised functionality?
I also believe that the Android people in this specific case define "safe" as "safe from crashes" in the way that a 3rd-party external component cannot easily crash the foreground app because of the windowing and thread-based compartmentalization.
Users: Wow, android is neat, but why is the interface performance so crappy?
Google: silence
Users: Hmm, I bet it's X, because of ____.
Google: No, no, X in android is great, it's awesome, and even if it weren't awesome it totally wouldn't be the cause of anything anyone might think is bad.
Users: Ok, hmm, then I bet the clunkiness and choppiness are due to Y, because of _____ and _____.
Google: No, no, Y is actually great in android; android has had Y since the beginning.
Users: Ok, so why is the android interface universally clunky?
Google: silence
"I am not writing this to make excuses for whatever things people don’t like about Android"
Dianne, don't you dare insult our intelligence by pretending the worst thing about the android experience is a case of personal preference by a small group of loonies.