
This article is very strange: on page 2 they spend tons of time lamenting the 360 for having a terrible CPU, then run tests they themselves created which don't really support the level of disdain they're showing.

They then almost completely ignore the results of their own tests but tack on a point about "well floating point sucks, so that explains our criticisms." Except it doesn't. A much more likely candidate (which they themselves hint at) is using poorly performing storage or having software glitches.

So I cannot tell if the author didn't understand the results or just wanted to moan that the 360 had an old CPU and didn't really care what the data actually said (they also provide no source for the power consumption claims).

I won't be buying a 360 simply because it has terrible battery life and costs $250. But this article is a little off. The second page just isn't consistent with itself.




The old http://en.wikipedia.org/wiki/Motoactv smartwatch by Motorola also had an OMAP3 CPU, and I'm somewhat surprised that wasn't mentioned as a possible reason for the company's comfort with such a dated part in contrast to the alternatives.


The CPU benchmarks don't make it a good CPU. It's old and power hungry; that's primarily why it sucks.


The age is not an issue: people thought the iPhone 4 was wicked fast at the time, and the speed seems adequate for a smartwatch. I would contend the software is too bloated - Java is not the fastest tool in the shed. However, the fact that it's power hungry is a deal breaker in a watch.


Cortex A8 was considered power hungry even for its time, among other 40nm chips.

Cortex A9 was both a more efficient and more powerful chip. Now, what most OEMs use in low-end phones, and even smartwatches, is Cortex A7, which is roughly as powerful as Cortex A8 but far more efficient, both because of its design and because it comes at 28nm.

I think this comparison was made by ARM itself:

http://archive.linuxgizmos.com/ldimages/stories/arm_cortexa7...

It's really strange that Motorola would go for the 3 year old Cortex A8 core, when Cortex A7 has been available for more than a year, and several chip makers produce them in different varieties.


Cost and Availability are two big reasons to use an older technology.


The OMAP3 is not THAT power hungry, and power consumption depends a lot on frequency anyway.


Not power hungry for a phone, but for a watch it's a dog. If you have to recharge a watch twice a day, how is that "not THAT power hungry"?

A watch needs an order of magnitude less CPU power than a phone, perhaps several. In 2014 if you see a watch product with a smartphone CPU, you know they blew it.


You mean it /should/ use an order of magnitude less CPU power, but ATM that's not the case - smartwatches seem more like a previous generation of smartphones sized downwards, with a patched OS.


I don't know if it makes much sense comparing a smartwatch with a traditional watch.

They share only the form factor. I would go as far as saying they do not even share the functionality of keeping track of time: nowadays traditional watches are basically used only as fashion accessories (as the article points out), since for most people that function has been carried out by mobile phones since the first Nokias.

A Smartwatch is more like a lightweight (in every sense) smartphone, with the significant drawback of having much less space available for a battery, but I agree completely that having to charge it twice a day is a deal breaker, and not acceptable from a user standpoint, regardless of the feat of engineering the watch actually is.


Most of the current crop of smartwatches trying to be lightweight smartphones are all turkeys and have fizzled. Kind of like the early tablets that tried to be computers, when the actual product demands an 8-10 hour (work/school day) battery life. Battery life is even more critical with a watch, which needs to last at least 16 hours, hopefully quite a bit longer, and you probably want it to keep telling time even if the main battery goes out, unless that battery lasts the better part of a week. So the correct way to design a smartwatch is to, as Apple did with the iPad, figure out the device size and computing capabilities backwards from the battery life, and then go to market when you're happy with what's possible.

The Pebble, to its credit, has a Cortex-M microcontroller instead of a smartphone SoC and gets up to a week of battery life.
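
To put rough numbers on the "design backwards from battery life" point, here's a back-of-envelope sketch in Python. The capacity and current figures are illustrative assumptions, not measurements of any particular watch:

    # Idealized battery budget: runtime at a given average current draw.
    # All numbers are illustrative assumptions, not measured values.
    def runtime_hours(capacity_mah, avg_current_ma):
        # Ignores converter losses, battery aging and temperature effects.
        return capacity_mah / avg_current_ma

    battery_mah = 320                 # assumed watch-sized cell
    for draw_ma in (10, 20, 40):      # hypothetical average draws
        hours = runtime_hours(battery_mah, draw_ma)
        print(f"{draw_ma:>3} mA average -> {hours:.0f} h")

Roughly 10 mA average clears a full day with margin; around 40 mA is the "charge it twice a day" regime, which is why how often the SoC can sleep matters far more than peak benchmark numbers.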


On a side note, people need to stop doing benchmarks on a freaking watch. The spec war on smartphones was ridiculous enough, and now they're bringing it to wearables.


Specs are largely irrelevant for consumers, and benchmarks shouldn't revolve around nonsense metrics like this.

Why not useful metrics like screen lag, or the time it takes to switch between tasks?


That's called progress: http://www.smbc-comics.com/?id=3465


"Typically OMAP 3s were built on a 45nm, which puts it at a huge power-usage disadvantage compared to the 28nm LP (low power) process used to make the Snapdragon 400 in every other smartwatch."

This line is especially suspect -- all else controlled, power usage should go down with line width, not up.

Probably the reason they chose to go with the OMAP 3 is power usage -- it's so old but well-supported that its drivers have been really optimized, and it's quite efficient under Linux.


Die shrinks reduce power consumption. 45nm consumes more power than 28nm.


It's not that simple. Smaller transistors switch faster and consume less dynamic energy in the process. But the smaller the gate, the worse the transistor is at turning off the current, and leakage current starts to become an issue. This is the main issue that is killing Moore's law: we are no longer able to ignore leakage current as it becomes more and more dominant in the overall power budget. This, along with the increased density of the circuits, also causes a risk of thermal runaway. These factors mean that as we shrink geometries, the transistor architecture has to get more and more exotic (while still being manufacturable) to deal with these issues. So yes, the switching power is decreased, but EVERYTHING else gets much more complicated.
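
A toy model of that trade-off, sketched in Python. Every coefficient below is invented purely for illustration; real process characterization data is far messier:

    # Dynamic power shrinks with capacitance and voltage, but leakage grows
    # as gates get thinner. Values are made up for illustration only.
    def dynamic_power(c_eff_f, vdd, freq_hz, activity):
        return activity * c_eff_f * vdd**2 * freq_hz   # P = a*C*V^2*f

    def leakage_power(i_leak_a, vdd):
        return i_leak_a * vdd                          # P = I_leak*V

    for activity in (0.2, 0.01):   # busy workload vs mostly-idle (watch-like)
        old = dynamic_power(1.0e-9, 1.2, 600e6, activity) + leakage_power(5e-3, 1.2)
        new = dynamic_power(0.5e-9, 1.0, 600e6, activity) + leakage_power(50e-3, 1.0)
        print(f"activity {activity}: old node {old*1e3:.0f} mW, new node {new*1e3:.0f} mW")

At high activity the smaller node wins comfortably; in an idle-heavy workload the leakage term can hand the advantage back to the older process, which is the crux of the argument above.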


Historically they did. But leakage is growing faster than dynamic power is shrinking, and the performance of the wires is falling behind meaning all the transistors must be comparatively larger.


That's what I thought, too, but I asked an electrical engineer friend of mine, and he said they would increase power. I just looked it up now, using the term "die shrink" (couldn't think of it earlier), and, sure enough, you're right. Not sure why he said otherwise...


Depending on the context, your friend may be right.

Power dissipation in a CMOS gate is made up of two primary components: static leakage and dynamic losses. Dynamic losses are related to the "dissipation" capacitance (Cdiss or Cpd in CMOS datasheets) and the switching frequency. If you want to understand why, consider charging an RC circuit. How much energy is lost to the resistor and how much is stored in the capacitor at the end of charging? How does the selection of the resistor and capacitor value impact this ratio?

So for minimum dynamic loss you want to minimize Cdiss, which involves making gates as small as possible. However, this makes the static leakage higher. So it may not be universally as simple as "a smaller process is lower power". For a system which spends most of its time in sleep, or clocks slowly, it may actually be better to eat the dynamic loss of a larger process in order to get the lower leakage, which is something I believe TI did with some of the FRAM '430 parts (but I can't find the link now).
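
The RC question above has a neat answer that's easy to check numerically: the energy burned in the resistor per charge cycle equals the 1/2*C*V^2 stored in the capacitor, regardless of R. A small Python sketch (component values arbitrary):

    # Charge C from 0 to V through R and compare energy dissipated in R
    # with energy stored in C. Forward-Euler over ~10 time constants.
    def charge_energies(r_ohm, c_farad, v_volt, steps=200_000):
        dt = 10 * r_ohm * c_farad / steps
        q = 0.0                               # charge on the capacitor
        e_resistor = 0.0
        for _ in range(steps):
            i = (v_volt - q / c_farad) / r_ohm
            e_resistor += i * i * r_ohm * dt  # I^2*R loss this step
            q += i * dt
        e_capacitor = 0.5 * (q ** 2) / c_farad
        return e_resistor, e_capacitor

    for r in (100.0, 10_000.0):               # wildly different resistors
        er, ec = charge_energies(r, 1e-6, 3.3)
        print(f"R={r:>7.0f} ohm: resistor {er*1e6:.2f} uJ, cap {ec*1e6:.2f} uJ")

Both come out near 0.5*C*V^2 = 5.45 uJ no matter what R is, which is why dynamic loss scales with Cdiss*V^2*f and shrinking the gates (smaller Cdiss) reduces it.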


All else being equal, it drops consumption. But generally when you go down in process, you increase what you pack on the die, bringing the power back up.


The A7 doesn't really do that though; it was created in no small part to provide roughly the same performance as the A8 with much better efficiency, and to be paired with an A15 in big.LITTLE for transient high load (which isn't necessary in a smartwatch).


Phone-oriented SoCs have aggressive frequency scaling and power gating, such that the newer chipsets have lower power consumption in ordinary usage and only scale up to similar power usage in high-utilization applications.
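
Some back-of-the-envelope "race to idle" arithmetic behind that point, with invented numbers rather than measurements:

    # A core with a higher peak draw can still average lower power if it
    # finishes the work quickly and power-gates the rest of the time.
    def average_power_mw(active_mw, idle_mw, busy_s, period_s):
        idle_s = period_s - busy_s
        return (busy_s * active_mw + idle_s * idle_mw) / period_s

    # Hypothetical: the same once-per-second job on an older vs newer core.
    old_avg = average_power_mw(300, 20, 0.50, 1.0)   # slower, leakier at idle
    new_avg = average_power_mw(450, 5, 0.15, 1.0)    # faster, gates to ~nothing
    print(f"old: {old_avg:.0f} mW average, new: {new_avg:.0f} mW average")

The newer core only looks as hungry as the old one when it's pinned at full utilization; in bursty, mostly-idle use it averages well under half the power in this made-up example.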


If that's really the reason, then that's pretty much a failure. According to Ars' synthetic battery life test, the Moto 360 is the worst. It has a slightly bigger battery than the Samsung, and only 56% of its battery life.

PS: it's not only Ars' synthetic test; real-life reviews also said the battery life is too low.



