I've noticed on my SE that everything is running much much faster with iOS 12. It seems like the app store is actually really usable now. Does anyone have any idea what they did?
Good list. On top of that, they made the share sheet loading asynchronous, so it pops up faster and can be used sooner. Made complex workflows a breeze.
I noticed it on my SE as well. The WWDC keynote from this year highlights some of the performance improvements that iOS 12 brings to old phones. Something about ramping up processor activity and quickly ramping it down when you don't need it. I don't remember the details.
The end result is that with iOS 12, apps launch twice as fast, camera app opens 70% faster, and the keyboard comes up twice as fast too.
The biggest change was ramping the processor up instantly for maximum performance, then back down to save power. So for launching an app or scrolling, the CPU jumps to its highest state until that portion of work is over, then immediately drops back to an efficiency setting to preserve battery life. Previously the CPU would ramp up slowly.
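A toy model of that difference, with completely made-up numbers. This just illustrates the race-to-idle idea; it's not Apple's actual scheduler:

```python
def finish_time(work, freq_steps):
    """Time to finish `work` units of CPU work, given (relative_speed, max_seconds) steps."""
    t = 0.0
    for speed, dt in freq_steps:
        if speed * dt >= work:       # task completes during this step
            return t + work / speed
        work -= speed * dt
        t += dt
    raise ValueError("work not finished")

WORK = 100  # abstract units of work for, say, an app launch

# Old behavior: clock ramps up gradually before hitting the top state.
slow_ramp = [(1, 20), (2, 20), (4, 999)]
# New behavior: jump straight to the top state, then drop back to idle when done.
race_to_idle = [(4, 999)]

print(finish_time(WORK, slow_ramp))     # 50.0
print(finish_time(WORK, race_to_idle))  # 25.0
```

Same amount of work either way, but the task feels twice as fast, and the CPU gets back to its efficiency state sooner.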
It's not. My 6S did the same thing with intensive apps, like Instagram. With IG it's probably ALWAYS ramped up: constantly scrolling, playing, downloading. They definitely sacrificed battery life for speed on older phones. My XS lasts like two days though.
I thought so too, but I noticed that the update also increased the display brightness considerably. So maybe that's what's causing the slightly faster battery drain.
It's definitely not. On my 6s, once it gets below 40% or so, it's prone to rapid drain (2-5% all at once) and random shutdowns. None of that happened on iOS 11.
Why would it be paranoia? Waking up the CPU faster and maximizing its performance will definitely drain the battery faster.
I assume the iOS 12 update combines more aggressive waking and maximizing of performance with a reversal of what they did previously to old phones, when they lowered performance to preserve the life of a degraded battery. If you remember, there was quite a bit of scandal about this earlier this year.
My guess is your old iPhone's battery will also last half as long as it did originally within months. Yes, it's a factor of the new changes, but let's not forget it's primarily a factor of Apple using batteries that lose a significant percentage of their capacity after 2 years in the iPhones, and then hiding much of that by slowing your iPhone down. Maybe Apple should put higher quality batteries in their phones - just saying.
>Yes, it's a factor of the new changes, but let's not forget it's primarily a factor of Apple using batteries that lose a significant percentage of their capacity after 2 years in the iPhones, and then hiding much of that by slowing your iPhone down.
Compared to what available and suitable miraculous technology?
> let's not forget it's primarily a factor of Apple using batteries that lose a significant percentage of their capacity after 2 years in the iPhones
As far as I can tell this simply isn't true. My iPhone SE is at 89% of maximum capacity after 2.5 years of very regular use. It's standing up pretty well.
I think in general a lot of batteries in mobile devices from a couple of years ago dropped off dramatically if charged to full regularly (ex: on the charger all night, every night). My Nexus 4 and Nexus 6P were really bad here... my Pixel 2XL is still pretty good on battery life after about a year now.
I think it was mostly immature technology adopted relatively early. There have been advances in battery tech, and it takes time to work kinks out.
Honestly I'm kind of surprised how little buzz there is about the A12's neural engine. ~5 tera-ops in your pocket, and it can run your models. For this particular set of math, that's comparable to a top-of-the-line desktop GPU from 3 years ago.
That's a good point. I mixed up the iPad Pro 2 with the 6th Gen iPad from 2018. The 2018 iPad just squeaked through my threshold for having enough data to be included here, so it's possible that this is just noise. I'll dig into the variance as more data comes in. The article is updated to reflect it. Thanks!
It's interesting to see that ML performance is about equal on iPhone X and older iPhones, considering that iPhone X (and 8) has a neural engine and the other phones do not.
It's my understanding that the neural engine wasn't actually exposed to Core ML until the iPhone XS; the GPU is the important factor for the performance of those models.
Are there any apps I can use that would let me see the difference in performance? Or even just use the Neural Engine for anything outside of the built-in camera app and the Measure app.
EDIT: I see that Heartbeat is mentioned in the article, but I'm also still wondering what else is out there.
Impressive, but with a specialized coprocessor it's not unexpected. It's a bit of a gamble for Apple though, as other ARM SoCs are probably able to bridge the gap in raw single-core and multi-core performance in a year. The whole ML thing is a bit fuzzy for a lot of people, even programmers like me, who are still wondering how their regular applications are going to benefit from all this silicon.
You clearly haven't really kept up with Apple and ARM for the past 4 years, have you? They're cleaning everyone's clock (pun unintended) with no signs of that abating.
Last year's X is still a generation faster than the fastest Qualcomm/Samsung chip today, let alone the generic designs ARM licenses to the cheaper chip companies.
Oh that's why I get all of the downvotes, finally I understand it.
But you haven't really kept up with Apple and ARM for the past 4 weeks, have you? Performance of the A12 is up something like 15% - which is still impressive if you compare it to what Intel does on a yearly basis - but a lot less than it used to be for Apple SoCs.
Meanwhile, the latest Qualcomm SoCs (the 845 is something like 35% faster than the 835) are performing almost as well as the A11, while the performance gap used to be more than two years.
If Qualcomm manages another 35% increase next year, their SoC will be faster at least until the A13.
So:
- Qualcomm is catching up
- While single core and multicore did not improve as much as it used to YoY for Apple
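Putting rough numbers on that, all relative and based only on the figures claimed in this thread (the second 35% jump is purely hypothetical):

```python
a11 = 1.00              # baseline; the thread treats the Snapdragon 845 as ~A11-class
a12 = a11 * 1.15        # "A12 is up something like 15%"
sd845 = 1.00            # "performing almost as well as the A11"
sd_next = sd845 * 1.35  # hypothetical: another 35% generational jump

print(a12, sd_next)  # 1.15 1.35 -> the next Snapdragon would lead until the A13
```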
Apple doesn't use ARM SoCs. They have their own designs from the PA Semi team and are really just using the ARM instruction set at this point. This is one of the reasons there is such a big performance gap between the top android phones vs the top iPhones.
I guess the same can be said of AMD's x86_64 CPUs? It's still fair to compare them to Intel x86_64 CPUs. The big difference is that ARM offers something like a reference design, whereas Intel doesn't want to share anything if they don't have to.
It's actually the other way around, humorously enough.
x86_64 is also known as AMD64, and for good reason. AMD created the core 64-bit x86 design and, more importantly, holds patents on key pieces of the ISA required to legally implement it.
So, long story short, it is actually AMD who didn't have to share anything with Intel if they did not want to - Intel's 64-bit x86 CPUs are fundamentally AMD64-compatible CPUs.
The most interesting part is that it resulted in a bit of an unusual legal relationship between the two. Intel licensed AMD the rights required to produce x86-compatible processors. AMD, in turn, ended up licensing Intel the IP required to create x86_64 chips.
I don't think that's the reason. That's a similar situation with Kryo / Mongoose / Denver CPUs that are the respective companies' own designs and are really just using the ARM instruction set.
It really is the reason. Their CPUs are better than the CPUs that Android phones have access to. You can see this in the benchmarks. Compare Geekbench scores between them:
I think because Apple are dogfooding this ML stuff themselves (camera, face detection, filters, Siri, etc), exposing that API publicly isn't that much more risk to them.
Other ARM SoCs haven't been able to bridge the relative gap in raw single-core performance going on 5+ years, and haven't hit multi-core parity since iDevices went beyond 2 cores. I don't really see a gamble here at all - adding ML blocks is relatively cheap and icing on the cake.
Take Geekbench (and any other benchmarks) with a grain of salt of course, but benchmarks & reviews all corroborate the same story. See these GB4 results for instance:
Android [1] - 3323 single, 8894 multi
iPhone XS [2] - 4794 single, 11151 multi
Especially for single-core, the latest, fastest Android phone (Galaxy S9) is a bit slower than the iPhone 7 [3], and every other Android phone (running Qualcomm's latest Snapdragon 845) is equivalent to the iPhone 6s / SE. The delta is just huge.
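To quantify the delta from those GB4 numbers:

```python
android_single, android_multi = 3323, 8894   # fastest Android phone
iphone_single, iphone_multi = 4794, 11151    # iPhone XS

single_gap = iphone_single / android_single - 1
multi_gap = iphone_multi / android_multi - 1
print(f"iPhone XS lead: +{single_gap:.0%} single-core, +{multi_gap:.0%} multi-core")
# -> iPhone XS lead: +44% single-core, +25% multi-core
```

A 44% single-core lead is bigger than most year-over-year generational jumps, which is what "a generation ahead" means in practice.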
It's actually a structural / incentive problem that keeps Apple at an advantage in this case, so I wouldn't count on the gap being bridged or even reduced anytime soon. Apple has built an incredible flywheel that lets them earn much more $ per chip than Qualcomm or Samsung by bundling them into super high-margin, high-volume iPhones.
All else equal, my wild-ass guess is Apple can probably throw at least 33% more transistors at workloads than the competition, due to being on leading-edge nodes, being comfortable with lower yields, running multiple chip teams in parallel, doing manual layout for maximum efficiency, etc. This means huge caches, fancy pipelines, lots and lots of specialized silicon, etc. They also have arguably the most talented chip team in the industry.
And that's just the chip. Because Apple is so vertically integrated, they're better able to optimize the device as a whole. That means more expensive components like faster RAM & disk, better integration between IP blocks like CPU/GPU/ML, and more optimized OS/drivers that all play into overall performance and differentiated use cases. It's a business structure that is nearly impossible to replicate and will continue to create a lasting advantage.
With regard to how ML is used, it fits the approach Apple is taking to keep things on-device (vs in the cloud) as much as possible. So any use cases where ML is used in the cloud are potential use cases for ML on the device.
I, too, just ended up on the Geekbench browser and hadn't noticed until now that A12 would appear to be roughly at single-core parity with Kaby Lake-U:
i7-8559U: 1901/GHz [0]
A12 Bionic: 1998/GHz [1]
I'm not familiar enough with the actual tests to know how comparable those scores are, plus, as you say, grain of salt, etc., but the rate at which the gap has been closed continues to be remarkable.
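For what it's worth, the per-clock delta implied by those scores:

```python
i7_8559u_per_ghz = 1901   # Kaby Lake-U, GB4 single-core score per GHz
a12_per_ghz = 1998        # A12 Bionic

advantage = a12_per_ghz / i7_8559u_per_ghz - 1
print(f"A12 per-clock lead: {advantage:.1%}")  # -> A12 per-clock lead: 5.1%
```

Per-GHz Geekbench scores are a crude proxy for IPC (the workloads aren't identical across platforms), but even a rough parity with a current Intel U-series part is striking for a phone chip.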
Your point about the integration between software and hardware is spot on. Even on Android devices with powerful GPUs or AI accelerators, that hardware is really difficult to access because the Android APIs (even the NNAPI) are really tough to use. Core ML "just works" with the CPU / GPU / Neural Engine.
Qualcomm can DESIGN a chip and HOPE it'll go in 20 million devices in a year. If they take big enough risks it's possible no one buys the chip and they just eat a huge loss. Maybe the phone manufacturers don't want it, or they don't think it's worth the extra three dollars compared to a previous chip.
Apple has only one customer; they can design a chip and KNOW it's going into 150 million devices within a year. Even if the APIs aren't given to third parties, it may still be a big win for them.