Hacker News
Apple A8X’s GPU – GXA6850, Even Better Than I Thought (anandtech.com)
153 points by shawndumas on Nov 12, 2014 | 6 comments



Seems like they're pulling a GeForce FX in some ways though, using FP16 in shaders in benchmarks. The GFXBench PSNR numbers seem to suggest image quality around 50% worse than Maxwell's.

NVidia was raked over the coals in the GeForce FX era for running FP16 shaders because its FP32 path ran far slower (ATI was using FP24, which was a nice sweet spot at the time). A rough sketch of the FP16/FP32 precision gap is at the end of this comment.

I'd like to see more image-quality comparisons for mobile GPUs, like we had in the desktop era, because there may be cheating going on in the IQ department.

Also, in comparisons with the K1, keep in mind NVidia claims the K1 is DirectX 12-class, so some of its die space is being used for functionality that GFXBench never exercises.
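
To put rough numbers on that precision gap, here's a minimal sketch (my own illustration, not anything from the article or GFXBench): near 1.0, FP16 spaces representable values 2^-10 apart versus 2^-23 for FP32, and snapping a shader intermediate onto the coarser grid shows the kind of error that ends up as lower PSNR.

    #include <math.h>
    #include <stdio.h>

    /* Rough illustration (my own sketch, not from the article): near 1.0,
     * FP16 has 10 mantissa bits, so adjacent representable values are 2^-10
     * apart; FP32 has 23, so the spacing is 2^-23. Snapping a shader
     * intermediate onto the coarser grid shows how much detail FP16 drops. */
    static float quantize(float x, int mantissa_bits) {
        /* Only valid for x in [1, 2), where the exponent is 0 and the
         * spacing between representable values is exactly 2^-mantissa_bits. */
        float step = ldexpf(1.0f, -mantissa_bits);
        return roundf(x / step) * step;
    }

    int main(void) {
        float x = 1.3333333f; /* e.g. an interpolated texcoord or lighting term */
        printf("original : %.7f\n", x);
        printf("FP32-ish : %.7f\n", quantize(x, 23));
        printf("FP16-ish : %.7f\n", quantize(x, 10)); /* off by roughly 3e-4 */
        return 0;
    }

A single error that size is invisible on its own; it's long shader chains, where the rounding compounds, that plausibly produce the banding and PSNR differences a render-quality test can pick up.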


In OpenGL ES there is no default precision in the fragment shader. Applications must explicitly specify the minimum precision as highp (FP32), mediump (FP16), or lowp (roughly 10-bit fixed point).

If the application uses mediump or lowp, the GPU is free to run those computations at a lower precision.

https://developer.apple.com/library/ios/documentation/3DDraw...
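
For reference, here's a minimal sketch of how that looks in an ES 2.0 fragment shader (my own example, not taken from the linked Apple doc); the precision statement and per-variable qualifiers are the only knobs an application has:

    /* Minimal sketch (assumed typical usage, not from the linked doc):
     * ES 2.0 fragment shaders have no default float precision, so the source
     * must declare one; the driver may only take FP16/lowp hardware paths for
     * values declared mediump/lowp. */
    static const char *fragment_src =
        "#ifdef GL_FRAGMENT_PRECISION_HIGH\n"
        "precision highp float;    /* request full FP32 where the GPU supports it */\n"
        "#else\n"
        "precision mediump float;  /* otherwise FP16 is the guaranteed minimum */\n"
        "#endif\n"
        "uniform sampler2D tex;\n"
        "varying vec2 uv;\n"
        "void main() {\n"
        "    /* per-variable override: ~10-bit fixed point is plenty for 8-bit color */\n"
        "    lowp vec4 c = texture2D(tex, uv);\n"
        "    gl_FragColor = c;\n"
        "}\n";

Note the qualifier only sets a floor: a driver may still run mediump math at FP32, but it is equally entitled to use an FP16 path, which is exactly the ambiguity being debated in this thread.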


Yes, but it's still apples-to-oranges. If drivers are defaulting to lower precision, you're essentially comparing two different workloads.

(I'm not sure why people are voting it down; it's a legitimate point to want apples-to-apples benchmarks. GFXBench has a high-precision Render Quality test, but it doesn't appear to have a high-precision framerate benchmark.)


Hey, at least they aren't FP8s (no, I'm not kidding -- I had to get a shader demo working 6 or 7 years ago, and I encountered some verrry strange rounding behavior on an Intel GPU that I eventually traced back to this fantastic little "feature").


Got to wonder where it will be when Maxwell goes 20nm, and maybe others will get new architectures soon. Sure, they can afford to use much bigger SoCs than others since they make their own, but it doesn't seem all that efficient. In the end we measure mobile GPUs by performance in synthetic benchmarks, and that has no real relevance since all we need is for games to run well. I wish mobile benchmarking would get better tools already, since GPU testing is rather misleading and we all look for the best performance, not good-enough performance.


This seems rather like the A5: a slightly custom job, using off-the-shelf core designs. It'll be interesting to see what the A6/Swift equivalent is like in a year or so...



