Android doesn't use the GPU for its UI (code.google.com)
80 points by basil on Sept 12, 2010 | 41 comments



The Innovator's Dilemma guy also talks about integrated vs. modular architectures. By integrated, he means made in one elegant piece, meshing together just so; by modular, he means you can mix and match (e.g. from different suppliers, or to customize attributes easily), with clearly defined standard interfaces. It may sound like modular is always better; while it is more customizable and cheaper, integrated can always outperform it. Which one wins in a market depends on what that market most demands at that time, relative to where the technology is at that moment. Is it fast enough? Does it have enough functions? Is it flexible enough? Is it cheap enough? Markets tend to move back and forth between the two as they evolve over a period of years.

For smartphones, performance is still an issue (and will be for 2-3 years I'd guess, by Moore's law), and so integrated platforms do the best job at this. This is Apple's forte - but they have limited time to make hay and get as firmly established as possible.

It's limited, because once the market's demand for performance is sated ("fast enough"), the market will base decisions on other issues - such as price. And that is not Apple's forte, nor does Apple want it to be. Thus, Apple already has the iPad lined up. After that, they'll be in another market at a stage where Apple's extraordinary ability to integrate can shine.

To be clear: Android is modular.


When were computers fast enough? Each computer I've gotten is exactly as fast as the previous one, from the 100 MHz Pentium I had in '97 to the quad-core I have now. Programs do more things, which is why the perceived speed has stayed the same.

My only hope that this will change is with SSDs. I swapped my laptop's SSD for a HDD and now it's a dog, which reminded me how big the difference in performance is.


My anecdotal experience is different from yours. In the early 2000s I finally reached a point where I generally didn't care if my computer was the newest/fastest kid on the block. But of course your mileage and usage may vary.


I agree with your overall point, but I think your memory might be tricking you a bit. Mostly because the improvements have been gradual, and you've come to assume them.

Your subjective impression of speed probably hasn't changed, but I bet if you dig out your '97 Pentium and fire up Windows 95, you'll be surprised at how it compares to your memories of using it.


Allow me to answer in terms of integrated vs. modular architectures: they were fast enough when Java's virtual machine became viable, when .Net's virtual machine became viable, when interpreted scripting languages (Javascript, Python, Ruby) became viable.

Basically, whenever an extra layer that soaked up performance in return for other benefits (such as faster development time, better error checking, more flexibility, or an easier, more intuitive model) was actually adopted by real people, it meant the computer was, in their opinion, fast enough to trade some of that speed for other benefits.

relevant: http://xkcd.com/676/


Garbage collection (mentioned in the article) is also a great example of integrated vs modular, with the same tradeoffs: when you manage memory manually, it is integrated with the code - and it's possible to optimize/customize its performance much better than an automatic garbage collector.

An automatic garbage collector is a separate module: the details are separated from your code; different garbage collectors (with different strategies/tradeoffs) can be used without affecting your code; and development is cheaper.
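As a rough illustration of the "integrated" end of that tradeoff (a sketch only; the Particle type and pool here are invented for the example): a hand-rolled object pool in Java that recycles instances instead of leaving reclamation to an automatic collector.

    import java.util.ArrayDeque;

    // Hypothetical value type reused across frames.
    final class Particle {
        float x, y, dx, dy;
        void reset() { x = y = dx = dy = 0f; }
    }

    // "Integrated" memory management: the pool is woven into the
    // application code, so its behaviour can be tuned exactly, at the
    // cost of doing the bookkeeping yourself.
    final class ParticlePool {
        private final ArrayDeque<Particle> free = new ArrayDeque<Particle>();

        Particle obtain() {
            Particle p = free.poll();
            return (p != null) ? p : new Particle(); // allocate only on a miss
        }

        void recycle(Particle p) {
            p.reset();
            free.push(p); // hand the instance back instead of leaving it for the GC
        }
    }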


Average programmer skill probably stays constant, but the quality of the average line of code depends on how many programmers a product has versus how many lines of code it has accumulated so far. You can bet, however, that older products will have more features by virtue of that accumulation. With a careless programming team, they become slower. Others, being more careful, stay about the same. Still others gain performance through practices that allow reliable improvement in code quality.

Programs might do more stuff, or not, depending on what software you're looking at. Browsers on the whole do more stuff, but they also do it quickly and efficiently.

The evolution of software is much more dynamic and tricky to figure out. Programs are like corporations: some accumulate bureaucracy, others remain so-so, and others shed bureaucracy as a matter of survival.


In my opinion, one of the main reasons people like the iPhone/iPad is the look and feel including the icons, animations, transitions, etc. The fact that iOS has included hardware acceleration from the very beginning speaks to how important the user experience was for them and using the GPU allowed them to provide a faster, smoother user experience.

I have to admit that the user experience in Android is severely lacking and is nowhere near as good as iOS. Let's hope this gets addressed in Gingerbread both for the core OS and for third-party apps.


This is one reason why I am not counting out Microsoft with their new WP7 platform. Your average U.S. consumer makes their device purchase decisions in the wireless carrier store. If Android and WP7 are shown side-by-side in a showroom (assuming comparable hardware and price), my bet is that Windows Phone will win its fair share of battles given its approachable and responsive interface.


My Motorola Droid doesn't stack up against Apple, but I've played with both Apple store and current-gen Verizon phones, and the current-gen Verizon Droids are just as responsive as Apple hardware. I'm not sure this GPU thing is much of an issue anymore.

To be fair, my experience with them is mostly playing with them in the store, but as you say, that's where it counts. In day-to-day usage it tends to be less of a big deal. You adjust how you use the device.


> In day-to-day usage it tends to be less of a big deal. You adjust how you use the device.

I'd argue the opposite when it comes to UI. If you're tapping on the same link 2-3 times because you're not sure whether Android/iOS registered the tap, that's an intractable problem that gets worse with time. You can't adapt your behavior to that annoyance.


In a sense, you're both correct. It really depends on the user and the UI annoyance. A lot of UI quirks become second nature to power users and they don't even notice them, but something like lag on a keypress is probably going to be a persistent source of annoyance.

iPhone users, for example, don't seem to mind life without a back button. It becomes a fact of life and users adapt their usage of the device accordingly.

On the other hand, when I updated my iPhone 3G to iOS 4, the responsiveness of the UI suffered by orders of magnitude. After more than a month of usage, I still found things like typing a text message excruciating. (Mostly fixed with the 4.1 update, praise god.)


The actual question should be why isn't the UI faster and more responsive? Stating a solution as a requirement is rarely a good idea.

We know that the iPhone and Droid (and its successor, the Droid X) have pretty similar hardware; both license the GPU IP block from Imagination Technologies. http://www.anandtech.com/show/3826/motorola-droid-x-thorough...

However, we do have some details missing in comparison with the iPhone. For example, do we know whether Apple's UI is h/w accelerated? If it is, does the GPU have dedicated graphics memory (and how much?) or could a blitter be used?

AFAIK, most if not all Android phones will have a GPU simply because SurfaceFlinger (the Android window compositor) is an OpenGL implementation. It's only in the emulator and during bring-up that you'd use the software implementation of OpenGL, libagl.

There's also the question of how good a mobile SoC GPU is at 2D. As mentioned, it isn't de rigueur for a mobile GPU to have dedicated (fast) memory, which is completely different from the PC space, where it has copious amounts. In the Droid X's case you're weighing a 1 GHz CPU against a 200 MHz GPU with a small number of cores. Consider, too, that in desktop systems we're fairly used to spending as much on the GPU as on the CPU, if not more.

I do know for a fact that Imagination's 2D interface (PVR2D) has a large setup overhead, which makes blitting small bitmaps very slow, though it's somewhat OK with larger bitmaps. Add alpha and it sucks in any case.

You also shouldn't discount the possibility that the Android UI and app framework is naively implemented compared to Apple's.


It's pretty straightforward to observe how the iPhone's Safari is accelerated while Android's browser is not by using CSS 3D transforms.

Google's original 2D layer (Skia) can only render affine transforms and not perspective transforms in WebKit (largely because it just hasn't been configured in WebCore). Mobile Safari can in fact render perspective transforms, indicating that it is using a different graphics engine than Skia (in fact, Apple's documentation says that it is hardware accelerated).

While this concerns only the browser, it allows for some benchmarks: try any transitions or 3D transforms in the browser and you'll find them much faster on the iPhone. Additionally, Skia, which is used in Android's browser, is used elsewhere in Android as well, so its performance characteristics are relevant beyond just web browsing.


Did anyone else notice how early on in the bug Google's Romain Guy pointed out that "The 'choppiness' and 'lagginess' you are mentioning are more often related to heavy garbage collection than drawing performance", as well as saying that newer devices might allow them to use the GPU?

Did anyone pay any attention to that, and ask sensible questions about tuning that? Of course not...

On the Galaxy S (under Android 2.1), for example, the lag seems to be caused by keeping temporary data on the slow internal SD memory. That can be fixed by moving it to external memory, and there is a one-click fix for it in the Market. That fix triples the phone's performance in some benchmarks, and has completely solved any lagging issues for me.

It's possible the HTC phones suffer from a similar problem, but here everyone assumes it's the GPU so you'd never know.

Never performance tune without benchmarking first. You'll fix the wrong thing.
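For what it's worth, a minimal sketch of what "benchmark first" could look like on Android, assuming the suspicion is GC rather than drawing: time the operation and count allocations around it with android.os.Debug. The ScrollProbe name and the operation being measured are placeholders.

    import android.os.Debug;
    import android.os.SystemClock;
    import android.util.Log;

    // Rough measurement around a suspect operation, e.g. one scroll/draw
    // pass. If allocation counts are high and GC messages show up in
    // logcat, tune the allocations before blaming the GPU.
    public final class ScrollProbe {
        public static void measure(Runnable suspectOperation) {
            Debug.startAllocCounting();
            long start = SystemClock.uptimeMillis();

            suspectOperation.run(); // the code path you think is janky

            long elapsed = SystemClock.uptimeMillis() - start;
            int allocs = Debug.getThreadAllocCount();
            Debug.stopAllocCounting();

            Log.d("ScrollProbe", elapsed + " ms, " + allocs + " allocations on this thread");
        }
    }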


The complete lack of responsiveness in the Android UI, regardless of how difficult it is to fix, needs to be remedied, and needs to be top priority.

The UI is the connection between the user and the device. When it's laggy and unresponsive, the experience of using the device is always going to be sub-par, no matter how many new features and optimizations the developers have added.

Whenever I've had a chance to test out the latest and greatest Android phone, I've always been blown away by how awful the scrolling performance and animations are, to the point where I really don't understand how people put up with it on a day-to-day basis. My iPhone 3G, at least when it's not randomly hanging due to 4.0, feels more responsive than any Android phone I've tried, and it's running on 3+ year-old hardware (it's got the same processor and memory as the iPhone 2G, IIRC).

To the average person, smooth animations, not benchmark results, are what "speed" means, and who can blame them?


Even the -illusion- that it's working quickly will help. Remember those checkerboards that would show up on the iPhone when the screen couldn't paint fast enough? People tolerated it because the -activity- on the screen matched what the user's finger was doing.

I find it a weak defense of Android to say "well, the newer phones don't have that problem". Most people can't afford to change their phones with every new hardware rev, so it does matter that the OS makes every attempt to make all user experience optimizations on any given piece of hardware. You can't rely on Moore's law to improve usability for existing customers.

With the exception of the 4.0 OS upgrade, I can't say that I've had major performance complaints on the iPhone 3G either. Myself, I'd like to use my current phone as long as possible before giving any phone manufacturer more of my money.


It's interesting to realize that one could be holding a phone and using a fraction of its computational capacity because the two companies that collaborated on creating it... don't talk enough?


It can also simply be a matter of available (human) resources. Remember how the iPhone was missing Copy & Paste? Probably not because they did not know how to do it. More likely simply a matter of shipping priorities and resources. I think the same is going on here.


While this is a good case for prioritization, I would argue that Google has made a mistake by allowing the fundamental interaction model of the phone to suffer from unnecessarily poor performance, either by taking less-than-full advantage of the GPU or (less likely?) not dealing with the problem of “heavy garbage collection” that is cited by the first project member to respond on the linked thread.

Copy and paste functionality is nice, but smooth-as-butter scrolling and near-instant responsiveness a la iPhone (or the lack thereof) arguably define the experience.


Scrolling on an Evo or a Droid X is choppy, especially on a large webpage (zooming in and out as well). It's incredibly noticeable when comparing the most high-end Android devices next to an iPhone 4. If this isn't a priority, I don't know what is.


...or just don't care. Also, if the UI is not built with animation/OpenGL support from the ground up, it might be a substantial job to add good support for it later on. OS X did all of this from day one, so making an iPhone without OpenGL support was probably out of the question. I personally feel that the 3D effects in Windows and Linux seem grafted on "just for the fun of it", and I usually turn most of them off. But I've never used the iPhone and felt that I'd rather do without the 3D effects - it will even sometimes drop the 3D effects in favor of responsiveness when it needs to, which I think says a lot about how much work they put into it.


Hardware accelerated window composition wasn't introduced until Jaguar (http://en.wikipedia.org/wiki/Quartz_Compositor#Quartz_Extrem...), which was, admittedly, early on in OS X's life, and the first version that was usable for most people. 2D drawing is still done on the CPU (outside of Core Image).


Not just phones: there are plenty of Apple laptops, for example, that don't use the H.264 acceleration built into their GPUs.

It's also a fact of life for many using Linux.


I find that piece pretty interesting: "The 'choppiness' and 'lagginess' you are mentioning are more often related to heavy garbage collection than drawing performance."

Wasn't that supposed to be a thing of the past with modern garbage collectors?


I also thought so, but maybe not with mobile devices with limited memory? I always thought it was strange to disallow GC in ObjC on iOS, but they might have tried it with less than stellar results.


The results can be observed on just about any android device that I've used: random lags of varying severity.

If I had to pick the single biggest annoyance with my android devices (which I otherwise love) then this would be it.


I stopped experiencing this once my device was upgraded to Android 2.2. It's much smoother. I'm not sure if this is because of the JIT or some other new feature, but I'm guessing you've used older versions of Android.


Sure, but that's just because of faster in-order CPUs and faster execution due to the JIT. Still a "brute force" solution.


> Still a "brute force" solution.

That's Moore's law for you.

I sometimes wonder how much more elegant our solutions to computing problems would become if transistors per integrated circuit stopped increasing exponentially.


It's the downside of all GCs, and I've yet to see one that alleviates it.

It does seem likely that it is GC issues and not whether the GPU is used: the visual choppiness is completely inconsistent, lending itself more to brief thread stops for GCs than to any sort of CPU starvation.

On the bright side, it is far less common on 2.2 than before. It still is by no means a deal breaker, but pretty much every user of both the iPhone and Android makes note of the much more consistent smoothness of the former.


I bet Android has a lot of room to improve not just by improving garbage collection but by decreasing garbage creation. The compiler can do a lot of optimizations to reuse objects instead of allocating new ones. I don't know what the state of the Android JIT is, but Sun's JIT took many years to incorporate some basic-seeming optimizations.

One thing that could be done in the meantime is to rewrite performance-sensitive Android UI code in a way that eliminates object allocation. In Java this would result in horrifying code, but it might be worth it, and they wouldn't necessarily have to use Java.
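For illustration, a sketch of that allocation-free style applied to a custom view's draw path. The MeterView class and its fields are invented for the example, but the pattern (pre-allocate and mutate rather than allocating inside onDraw()) is the one being described.

    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Paint;
    import android.graphics.RectF;
    import android.view.View;

    // onDraw() can run dozens of times per second; allocating here feeds
    // the GC and produces exactly the kind of pause being discussed.
    public class MeterView extends View {
        // Pre-allocate once and mutate, instead of "new Paint()" /
        // "new RectF()" inside onDraw().
        private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        private final RectF bar = new RectF();
        private float level; // 0..1

        public MeterView(Context context) {
            super(context);
        }

        public void setLevel(float l) {
            level = l;
            invalidate(); // schedule a redraw, no new objects created
        }

        @Override
        protected void onDraw(Canvas canvas) {
            bar.set(0f, 0f, getWidth() * level, getHeight());
            canvas.drawRect(bar, paint); // no per-frame allocations
        }
    }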


Android's Dalvik VM has only had a JIT for 4 months (it shipped in 2.2).

Hilariously enough, they started using V8 in 2.0 a year ago — so in 2.0 and 2.1, Javascript is JIT-ed but Java isn't!


Windows Phone 7 seems super smooth and has a GC, but Microsoft has modified the GC's behavior to limit interruptions. There is no reason the GC needs to be firing during UI scrolls; Google should be able to do the same. From http://blogs.msdn.com/b/abhinaba/archive/2010/07/29/windows-... :

1. GC is not run on some timer. So if a process is not allocating any memory and there is no low-memory situation then GC will never be fired irrespective of how long the application is running

2. The phone is never woken up by the CLR to run GC. GC is always in response to an active request OR allocation failure OR low memory notification.

3. In the same lines there is no GC thread or background GC on WP7


Android's GC is very aggressive. Further, pretty much everything is an object, so it's constantly being worked.

I would reserve judgement on Windows Phone 7 until it really is running various apps and doing active day-to-day work. A fresh Android phone is very smooth. Once you have it checking all of your accounts, with Twitter and Facebook feeds and Dropbox running, Rdio in the background, etc., the demands on the GC are much higher.


Which leads into Apple's extreme aversion to background apps...


Which was Windows Mobile 6 and earlier's big downfall. Once it crashed, everything crashed; once it froze, it all froze...


Apple's run loops are more flexible and adaptable than this:

http://android.git.kernel.org/?p=platform/frameworks/base.gi...

via http://stackoverflow.com/questions/3321587/anatomy-of-an-and...

But since Looper is OSS, it should be improved. Here's an idea: make it customizable like NextStep/Apple's architecture. Be able to change run loops on the fly when scrolling, select on different input sources, etc. It could be as simple as that...

Or if you want the long answer, it could be NextStep's key design choices which have paid off over the past decade.... Might as well learn from them. Will really flexible run loops solve single OGL context issues? Probably not. But they might just make things less jerky...
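As a rough sketch of that direction using what Looper already exposes: Android's MessageQueue lets you park non-urgent work behind an IdleHandler so it only runs when the UI thread's queue is empty, which approximates "don't compete with scrolling and drawing". The IdleWork helper and the deferred task here are hypothetical.

    import android.os.Looper;
    import android.os.MessageQueue;

    // Sketch: instead of posting background work that competes with touch
    // and draw messages, park it behind an IdleHandler so it only runs
    // when the UI thread's message queue goes idle.
    public final class IdleWork {
        public static void postWhenIdle(final Runnable task) {
            Looper.myQueue().addIdleHandler(new MessageQueue.IdleHandler() {
                @Override
                public boolean queueIdle() {
                    task.run();   // e.g. refresh a feed, sync, etc.
                    return false; // one-shot: remove this handler afterwards
                }
            });
        }
    }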


If you read between the lines, the reason they have tried and failed to implement this is fragmentation of the market. While more modern devices would likely support GPU usage, the older devices (or even new low end devices?) would not. So they either have to live with the current fragmentation or fragment the market even more.

Fragmentation is, of course, a huge point for Android detractors. This is a very real manifestation of that which shows that Android may stagnate in the future due to the inability to develop features for a significant number of devices.

However, for the most part, Android as it sits today has a low level of fragmentation compared to what could happen. I recently developed an Android app for 1.6-2.2 and only had a couple of fragmentation issues (performance-related and the contacts API). Google and its partners really need to take a long look at the direction of Android and resolve the fragmentation now while it's possible.


Why can't they just wait for the market to move forward? My G1 is almost 2 years old, meaning time for a subsidized upgrade. In a few months I'm going to have Flash on my phone for the same price I paid (about $200) for 1.6.

Google has said before that it is going to break out the user interface portions of the OS so they can be updated without a (carrier in the loop) OS upgrade. To me that says that OS level innovation is going to slow, and so will fragmentation. At that point the only thing stopping you from taking advantage of an Android feature will be horsepower.


Btw, Symbian^3 does actually feature a hardware-accelerated UI layer and should remove much of the "slowness" of S60 5th edition.



