Two things that are relatively surprising for me from the teardown.
One, I'm surprised that they include a USB PHY on board. Given that the machine has no externally accessible USB interface, that seems like a waste of power, BOM cost, and board area. The only thing I could imagine it being used for is through the mystical 5-pin programming header, for initial device programming; even still, I would imagine they'd prefer to cost-reduce and use JTAG (or another glueless interface) to do initial programming.
I could imagine that they'd use it for bringup, but even still, it seems like they'd want to have a non-bringup SKU for mass production that didn't have the USB PHY populated, and would just drop down the part if needed...
The other surprise was the MIPI DSI-DSI bridge. Having a DSI-DSI bridge that supports panel self-refresh allows them to keep the display alive even without having the OMAP (and its DRAM) turned on. The bridge seems to have 28 Mbit of SRAM on it as a frame buffer, though, and that can't be cheap: I suppose the lesson is that on OMAP, even keeping the DRAM powered on for a minimal refresh costs more power than a big external bridge.
Additionally, I'm vaguely curious about the external DSP; I wonder what they're doing with it? It's probably some operation to save power while the AP is turned off, but I wonder just how much power it really can save.
Neat device. It'll be interesting to hear the explanations of how people designed these first gen devices as they show up over the next decades.
The typical battery capacity for Moto 360 is 320 mAh and the minimum is 300 mAh. In the mobile industry, sometimes both the minimum and typical capacities are listed on the battery, with the typical capacity quoted as the official battery size. Both figures are included on the batteries of our Moto X, Moto E and Moto G devices. In the case of smaller devices, we aren't always able to list both figures. For Moto 360 we only had room for one figure and chose to list the minimum capacity of the battery. We see how this can be confusing and we will look into ways to add the typical capacity as well in the future.
For anyone who hasn't paid close attention to the gory details of battery specs, this is pretty normal - most of them have both a typical and a minimum capacity. Which one gets advertised is mostly a marketing decision.
Like any specification, there is a tolerance band on battery capacity specifications. One of the unique things about batteries is that the tolerance band is often stated as asymmetrical, since no one would reject a battery for having too high a capacity (unless it's physically too large or something). So instead of a spec like "200 mAh +/- 10 mAh" and the implied symmetrical distribution of real measured values, you get something like "200 mAh +20 mAh / -10 mAh".
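For illustration, here's a rough sketch (not anyone's actual QA code, and the numbers are just the hypothetical 200 mAh +20/-10 example above) of how an asymmetric spec like that plays out as an accept/reject check:

```java
// Hypothetical accept/reject check for a battery capacity spec with an
// asymmetric tolerance band, e.g. "200 mAh +20 mAh / -10 mAh".
public class CapacitySpec {
    final double nominalMah;   // the figure printed on the label
    final double plusMah;      // allowed overage (usually generous)
    final double minusMah;     // allowed shortfall (usually tight)

    CapacitySpec(double nominalMah, double plusMah, double minusMah) {
        this.nominalMah = nominalMah;
        this.plusMah = plusMah;
        this.minusMah = minusMah;
    }

    boolean accepts(double measuredMah) {
        // Anything between (nominal - minus) and (nominal + plus) passes.
        return measuredMah >= nominalMah - minusMah
            && measuredMah <= nominalMah + plusMah;
    }

    public static void main(String[] args) {
        CapacitySpec spec = new CapacitySpec(200, 20, 10);
        System.out.println(spec.accepts(195)); // true  (within -10)
        System.out.println(spec.accepts(215)); // true  (within +20)
        System.out.println(spec.accepts(185)); // false (more than 10 under)
    }
}
```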
Probably manufacturing differences and legal reasons. If the average battery life is x and someone gets a battery they can prove is x - 50, they might get sued, whereas if they put a generous minimum battery life they can cover their asses.
A guess: Small variations in the composition and microstructure of the various battery components give a certain range of errors that they have to account for lest they get sued for mislabeling.
Did I hear that right? Did they say OMAP 3? I didn't know anyone was still using OMAP3 outside of charity organizations trying to provide very basic tablets to children living in terrible conditions in the third world.
Why they gotta be so cheap on their flagship smartwatch?
Yeah this is a pretty odd choice, it would be fascinating to find out Motorola's reasoning.
Motorola was comfortably TI's biggest OEM back when TI was actively pushing OMAP for smartphones. The original Droid (Milestone outside the US) through the Droid 4 were all OMAP (first OMAP3, then OMAP4) and I believe also used TI WiFi/Bluetooth.
The last "flagship" Android phone to use OMAP was the Galaxy Nexus, which had Android updates stop with 4.3. It's been rumored this was because TI had stopped maintaining up-to-date kernels/drivers for use with Android.
This would make it a bigger chore to bring up the current version of Android than it would be on a current Snapdragon.
Also, the OMAP3630 is just an old chip: 45nm vs. 28nm used by the other (Snapdragon 400) Android Wear watches.
If I had to guess: Motorola had a bunch of OMAP3630s still left in stock (and was quoted a very good price by TI for more)
Why they gotta be so cheap on their flagship smartwatch?
More expensive also tends to imply more processing power, which also implies more power consumption.
More is not better, considering what's going to fit on your wrist.
Also, with an older platform, if they've had time to work out all the issues and greatly refine the power management (so that everything stays active for only as long as necessary), it can end up using less power than a SoC that uses a better process technology but doesn't have as good software support.
The Apple Watch is much slimmer. Part of the reason for that is the SoC that Apple uses is more modern and compact than OMAP.
OMAP3 isn't the best processor to use for a watch. I'm sure if we spend some time going back and forth we will identify at least one or two more modern processors that beat OMAP3 on efficiency, size, AND performance.
I don't think Motorola should be skimping on the processor for their first watch, if they are trying to convince us it's a premium product that we should feel proud to wear on our wrists. People see watches as prestige items, and they will pay more for a well constructed and designed device. There's already enough competition in the plastic trashy wrist-wear segment.
Both the OMAP3 and OMAP4 are 12mm x 12mm BGA packages with PoP memory (package-on-package). This does add to the height, but the alternative is to put the SDRAM on the board, which chews up a lot of board real estate.
OMAP3 isn't the best processor to use for a watch. ... I don't think Motorola should be skimping on the processor for their first watch,...
You still haven't explained what the OMAP3 watch doesn't do now, that it could do with an OMAP4.
There's already enough competition in the plastic trashy wrist-wear segment.
That's actually likely to be a problem with the Apple watch too. If it doesn't prove to be too popular, then will future versions of iOS support it? Sure, 8 will, and possibly 9. But after that? In a couple years there will be a watch with some other whiz-bang feature and everyone will want to upgrade to that.
These tiny wearable computers will suffer just as much obsolescence as our phones do. A Seiko mostly just tells time, and the standards for that don't change much.
My guess would be time to market, if they would otherwise have been blocking on bringup work for a better SoC. Possibly indicates a crash program late in everyone else's smartwatch development cycle?
That would be my guess too.... I hope it means they're working right now on something better for v2 which will improve the battery performance (which seems the biggest issue with most of the current generation of smartwatches)....
How do you program the circular screen on this thing? Is it a radial coordinate system (which would make it hard to draw parallel and perpendicular lines)? Or is it a rectangle, but you have to keep track of when you're inside the circle? Or maybe you're not even allowed to draw on the thing and you have to use higher level API functions?
As the other poster said, it's a square with bits cut off. You can program it per pixel if you want to, but for most typical use-cases developers will be using higher level APIs that handle the details for you.
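Concretely, a minimal sketch of what that looks like (my own example, not from the SDK samples): on Android Wear the framework reports roundness via WindowInsets.isRound(), and you draw on an ordinary rectangular canvas, just keeping anything important inside the inscribed circle, since the corners physically don't exist.

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;
import android.view.WindowInsets;

// Minimal sketch: the canvas is still a normal rectangular pixel grid on a
// round watch; WindowInsets.isRound() (API 20) tells you whether the
// corners are cut off so you can keep content inside the inscribed circle.
public class WatchFaceView extends View {
    private boolean isRound = false;
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public WatchFaceView(Context context) {
        super(context);
        paint.setColor(Color.WHITE);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(4f);
    }

    @Override
    public WindowInsets onApplyWindowInsets(WindowInsets insets) {
        isRound = insets.isRound();   // round vs. square screen
        invalidate();
        return super.onApplyWindowInsets(insets);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        float w = getWidth(), h = getHeight();
        // Ordinary Cartesian drawing works fine: parallel and perpendicular
        // lines are no harder than on a square screen.
        canvas.drawLine(0, h / 2, w, h / 2, paint);
        canvas.drawLine(w / 2, 0, w / 2, h, paint);
        if (isRound) {
            // Anything outside the inscribed circle simply isn't visible.
            canvas.drawCircle(w / 2, h / 2, Math.min(w, h) / 2, paint);
        }
    }
}
```

In practice the support library also shipped helpers like WatchViewStub and BoxInsetLayout for choosing round vs. square layouts in XML, so most apps never touch pixels directly.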
This bit from iFixit about the battery saying 300 mAh vs 320 mAh is about a day or two old, but Ars Technica is the one who got the actual clarification/explanation from Motorola.