One thing that I found counter-intuitive is that building a device that periodically receives data wirelessly is generally more expensive on the battery than a device which periodically transmits.
Naively, I assumed that it must take more power to transmit than to receive, which is true on an instantaneous basis, but false on an average basis.
A device that wakes up every so often, transmits, then goes to sleep can use very little average power as compared to a device that must constantly have the receiver powered up to listen.
>compared to a device that must constantly have the receiver powered up to listen
But there's no need for the receiver to constantly stay awake to listen or poll the transmitter like in wired network systems.
Low-power wireless protocols have used time slots forever: receivers wake up only in their dedicated time slots to check whether any messages are addressed to them. If so, they wake the entire CPU block, process the payload, and reply; if not, they put the receiver back to sleep until their next time slot. Simple and very energy efficient.
So the receivers end up being more efficient than the transmitter, since the transmitter has to operate constantly as a beacon for every time slot. That's what you want: the base station can be powered from AC, while the IoT receivers are usually battery powered and need to last for years.
The only tricky part is building a self-compensation mechanism in firmware for receiver wake-up jitter, since all receivers inevitably drift in time according to their oscillators, and the transmitter drifts too, especially with low-cost oscillators that have horrible drift.
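A minimal sketch of that wake, listen, resync loop; the timing constants are made up and the rtc/radio calls are simulated stand-ins, not any vendor's API:

    /* Sketch of a slotted receiver with crude drift compensation.
       All constants and the rtc_/radio_ helpers are hypothetical stubs. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define SLOT_PERIOD_MS 1000  /* our slot comes around once per second  */
    #define GUARD_MS          2  /* listen margin for residual clock error */

    static void rtc_sleep_ms(int32_t ms) { (void)ms; /* deep sleep */ }
    static bool radio_listen(int32_t window_ms, int32_t *offset_ms)
    {
        (void)window_ms;
        *offset_ms = 2;          /* pretend the beacon arrived 2 ms "late" */
        return true;             /* pretend we were addressed this slot    */
    }

    int main(void)
    {
        int32_t drift_ms = 0;    /* learned offset versus the base station */
        for (int slot = 0; slot < 5; slot++) {
            rtc_sleep_ms(SLOT_PERIOD_MS - GUARD_MS + drift_ms);
            int32_t offset = 0;
            /* Receiver is powered only for a short window around our slot. */
            if (radio_listen(2 * GUARD_MS, &offset)) {
                /* Full CPU wakeup here: process payload, reply, etc. */
            }
            drift_ms += offset / 2;   /* nudge our slot edge toward the beacon */
            printf("slot %d: drift estimate %+d ms\n", slot, drift_ms);
        }
        return 0;
    }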
I do a lot with LoRaWAN, and I like the simplicity of its approach for class A devices, which is to wake up and transmit and then listen for a reply in two defined time windows. These are shortly after the transmission, so clock drift is less of an issue with cheap oscillators.
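Roughly the shape of a class A exchange, assuming the common 1 s / 2 s receive-window delays; the radio and sleep helpers below are invented stand-ins, not a real LoRaWAN stack:

    /* Class-A style flow: transmit, then listen only in two short windows
       shortly after the uplink, then go back to deep sleep. */
    #include <stdint.h>
    #include <stdbool.h>

    static void radio_tx(const uint8_t *buf, uint8_t len) { (void)buf; (void)len; }
    static bool radio_rx(uint32_t window_ms) { (void)window_ms; return false; }
    static void low_power_sleep_ms(uint32_t ms) { (void)ms; }

    void class_a_uplink(const uint8_t *buf, uint8_t len)
    {
        radio_tx(buf, len);

        low_power_sleep_ms(1000);      /* RX1 opens ~1 s after the end of TX   */
        if (radio_rx(50))              /* short listen; replies usually land here */
            return;                    /* got a downlink, back to deep sleep   */

        low_power_sleep_ms(950);       /* RX2 opens ~2 s after the end of TX   */
        radio_rx(50);                  /* second chance, then back to sleep    */
    }

    int main(void) { uint8_t p[1] = {0}; class_a_uplink(p, 1); return 0; }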
I'm not sure if a sensible timeslot-based system could get battery usage down as low as something like Zigbee battery-powered sensors can.
Those are transmitters running on battery, usually with the radio fully powered down, only powering them up to transmit if they need to report a changed value, or to report in at least once a day, so the hub knows they haven't died.
Something like a magnetic reed switch door/window sensor would wake up the MCU via a level-sensitive interrupt, so obviously the "sensor" portion for some sensor types can have negligible power draw.
How could any battery powered receiver possibly get power usage that low? Like even timeslots of only once a minute would seem likely to use considerably more power, and that's likely too much latency for many purposes. But surely more frequent timeslots would only increase power drain?
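For contrast, the transmit-only pattern described above looks roughly like this; every helper name below is an invented stand-in:

    /* Sketch of a transmit-only sensor: radio fully off, MCU asleep, woken
       either by a GPIO interrupt (reed switch) or a daily heartbeat timer. */
    #include <stdint.h>
    #include <stdbool.h>

    static volatile bool door_changed = false;

    /* Hypothetical GPIO interrupt handler: fires on the reed switch edge. */
    void gpio_isr(void) { door_changed = true; }

    static void radio_power_up(void)   {}
    static void radio_power_down(void) {}
    static void radio_tx_report(bool open) { (void)open; }
    static bool gpio_read_door(void)   { return false; }
    static bool sleep_until_event_or_ms(uint32_t ms) { (void)ms; return true; }

    int main(void)
    {
        for (int i = 0; i < 3; i++) {       /* forever, in real firmware     */
            /* Sleep until the reed switch fires or ~24 h pass (heartbeat).  */
            bool timed_out = sleep_until_event_or_ms(24u * 60u * 60u * 1000u);
            if (door_changed || timed_out) {
                radio_power_up();           /* radio is on only to transmit  */
                radio_tx_report(gpio_read_door());
                radio_power_down();
                door_changed = false;
            }
        }
        return 0;
    }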
I think this understates the complexity of getting the time slots right, since you have to factor in drift from the firmware, drift from the hardware (which varies based on temperature), and also the propagation time of the radio signals. Essentially, you want the timeslot to be much larger than the sum of all the jitters, while also making sure there are enough timeslots per system cycle to account for all of the devices, at the data rate you want.
Not to say it's impossible to solve (although infinite precision synchronicity in distributed systems is impossible to solve), just that the more devices you throw at the system, the trickier it gets, in a way that does not scale linearly with the system size.
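To make that concrete, here's a back-of-envelope guard-time budget with made-up but typical numbers (±20 ppm crystals on both ends, a 60 s resync interval, ~1 km range):

    #include <stdio.h>

    int main(void)
    {
        double ppm_node  = 20e-6;        /* node crystal tolerance          */
        double ppm_base  = 20e-6;        /* base station crystal tolerance  */
        double resync_s  = 60.0;         /* worst-case time since last sync */
        double fw_jitter = 0.5e-3;       /* wakeup/scheduling jitter, s     */
        double prop      = 1000.0 / 3e8; /* 1 km of radio propagation, s    */

        double clock_err = (ppm_node + ppm_base) * resync_s;
        double guard     = clock_err + fw_jitter + prop;

        printf("clock error : %.3f ms\n", clock_err * 1e3);  /* 2.400 ms  */
        printf("guard time  : %.3f ms\n", guard * 1e3);      /* ~2.903 ms */
        return 0;
    }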
>I think this understates the complexity of getting the time slots right
It's tricky, but not impossible to solve by any half decent firmware engineer with low level understanding and some battle scars in the industry.
>Essentially, you want the timeslot to be much larger than the sum of all the jitters
Not really. Drift is inevitable, but it's not so bad that you get huge fluctuations so quickly that you need such wide margins. Just sync all your receivers to the drift of the transmitter every few minutes/hours or so and you'll be fine. It depends on environmental conditions of course, which you should know up front when designing your product.
>just that the more devices you throw at the system, the trickier it gets, in a way that does not scale linearly with the system size
Not really, you just sync all receivers to the drift of the base station via the same drift compensation algo. If it works on one device it will work the same on all. Of course you will hit a limit on the number of devices, based on the maximum number of time slots you can have, which depends on the amount of bandwidth you have and the access time you want for each device slot. We got a couple of thousand devices on one base station lasting ~1-2 years on one button cell, so it was good enough.
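For a rough sense of why numbers like that are plausible, a back-of-envelope budget with made-up figures (CR2032-class cell, receiver on for a few milliseconds per 1 s slot cycle):

    #include <stdio.h>

    int main(void)
    {
        double cell_mah = 220.0;   /* typical coin cell capacity        */
        double rx_ma    = 6.0;     /* receiver current while listening  */
        double rx_on_s  = 0.003;   /* listen window per cycle           */
        double period_s = 1.0;     /* slot cycle length                 */
        double sleep_ma = 0.002;   /* ~2 uA deep-sleep current          */

        double avg_ma = rx_ma * (rx_on_s / period_s) + sleep_ma;
        double hours  = cell_mah / avg_ma;

        printf("average current : %.3f mA\n", avg_ma);          /* 0.020 mA   */
        printf("battery life    : %.1f years\n", hours / 8760); /* ~1.3 years */
        return 0;
    }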
If you make the addressing in AM, you can use just the radio waves to power a detector that can wake up your device, and then you do the actual communication in FM.
But the wave can't be too faint, so I guess you do always need a bit more power on one side.
Think in terms of energy per symbol and it makes sense. When waiting in receive mode you're paying energy for zero symbols.
The other thing is that transmitting short packets at high power/data rate is a win versus long packets at low power/low data rate, because your energy per symbol is lower with the former. And people that should know better seem to make that mistake a lot.
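A quick per-packet comparison with made-up radio numbers shows the effect:

    /* Same 32-byte payload sent fast at higher TX current vs slow at lower
       current; charge per packet is what drains the battery. */
    #include <stdio.h>

    int main(void)
    {
        double bits = 32 * 8;

        /* Option A: 1 Mbit/s at 10 mA TX. Option B: 125 kbit/s at 6 mA TX. */
        double uc_fast = 10e-3 * (bits / 1.0e6)   * 1e6;  /* microcoulombs */
        double uc_slow =  6e-3 * (bits / 125.0e3) * 1e6;

        printf("fast/high-power : %.2f uC per packet\n", uc_fast);  /*  2.56 */
        printf("slow/low-power  : %.2f uC per packet\n", uc_slow);  /* 12.29 */
        return 0;
    }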
> Naively, I assumed that it must take more power to transmit than to receive, which is true on an instantaneous basis, but false on an average basis.
Even that is not necessarily always the case in low-power systems, since often receive amplifiers need to be run at a fairly high current to achieve a low noise floor, and power saving tricks like envelope tracking power supplies are harder to implement on the receive side.
For example, I've seen several Bluetooth LE radios where the instantaneous supply current is higher during receive than during transmit.
I'd assume the problem with receiving data on a periodic basis is that you still have to establish the link with the towers. Such that you are always "polling" from the perspective of the device.
That is, treat the times that you wake up to receive information the same way as the ones where you wake up to send, and I'd expect them to be roughly the same? Is that not the case?
Right, I'm assuming the device still has to look at and "find" the data in the received signal? As such, if you are just "spraying the data out there," you can do that with less thought and just transmit.
But the point was still that, if you are checking your time-slot at a greater rate than you would have been waking up to send data, then it makes sense that it would take more battery power. In essence, you are still "polling" based on your timeslot and not transmitting data at a presumably lower pace.
There's no need to "find" the data when the receiving powered IoT devices are in perfect sync with the transmitting base station, because then they can be addressed directly.
Let's say, for example, that you have 256 time slots of 0.1 seconds each, with 256 devices assigned to each time slot. Those 256 IoT devices wake up simultaneously in their slot, in sync with the base station beacon, and listen to check whether the base station is addressing one of them specifically; the ones that aren't being addressed go straight back to sleep.
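Putting numbers on that example (assuming each device keeps its receiver on for its full 0.1 s slot once per cycle):

    #include <stdio.h>

    int main(void)
    {
        double slots = 256, slot_s = 0.1;
        double cycle_s = slots * slot_s;                 /* 25.6 s              */
        double duty    = slot_s / cycle_s;               /* receiver-on fraction */

        printf("cycle length : %.1f s\n", cycle_s);
        printf("rx duty cycle: %.2f %%\n", duty * 100);  /* 0.39 %              */
        return 0;
    }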
I'm assuming you have to inspect the signal to confirm the opening frame of data. Mainly to guard against clock drift.
But, I'm also assuming that the person that talked of waking and sending data was not doing so as often as you would in this scheme. Which is the thrust of my assertion.
Granted, I'd assume waking every 25ish seconds is probably fine? My gut was more that the person seeing bad battery life on receiving data was polling in the sub-second timeframe, but was waking to send data in the seconds timeframe. If both are done in the 25ish-second timeframe, I'd have expected them both to take roughly the same power. Transmitting more if you have to send over longer distances?
>I'm assuming you have to inspect the signal to confirm the opening frame of data.
There's no need for manual inspection. The analog receiver front ends of modern 2.4GHz microcontrollers from the past ~15 years are smart enough to have programmable logic that handles low-level packet inspection of the RF preamble and sync word to check whether they're the ones being addressed, after which it triggers a full CPU wakeup where your code starts executing and processes the message payload.
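As a sketch of that division of labour; every register/function name below is an invented placeholder, not any vendor's actual API:

    /* Let the radio front end do the address match while the CPU sleeps. */
    #include <stdint.h>

    static void radio_set_sync_word(uint32_t sw)   { (void)sw; }
    static void radio_set_match_addr(uint8_t addr) { (void)addr; }
    static void radio_enable_addr_match_irq(void)  {}
    static void cpu_deep_sleep(void)               {}

    /* Interrupt fires only when preamble + sync word + our address matched. */
    void radio_match_isr(void)
    {
        /* Full CPU wakeup happens here; read and process the payload. */
    }

    int main(void)
    {
        radio_set_sync_word(0x8E89BED6u);   /* example sync word / access address */
        radio_set_match_addr(0x42);         /* this node's short address          */
        radio_enable_addr_match_irq();      /* packet engine filters on our behalf */
        cpu_deep_sleep();                   /* CPU stays off until the ISR fires  */
        return 0;
    }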
Sorry, I didn't mean you had to do that in your code, necessarily. I meant that the receiving hardware has to do it.
That said, the main contention was on timelines, still. My assumption being if you use the same sleep/active cycles between transmit or receive, then you should see similar battery life? Is that not the case?
> We appear to be in fairly solid agreement on why a receiving device uses more battery. Has nothing to do with one being more battery per byte sent/received. Has everything to do with whiffing on many more receives than you do on sends. And is in complete control of the person building/operating the device.
Ah I misunderstood your previous comment; yes we agree on all of this.
What you are proposing only works if you have a device that only receives replies. That is not always the case.
Think of it this way: the postal carrier arrives at your house at whatever schedule the post office decides. You don't have control over that. The only way to get your mail is to continuously check your mailbox to see if anything new has arrived. It only takes 3 minutes to check your mailbox. For the sake of this analogy assume if the carrier arrives the next day and finds something in your mailbox they return it to the sender.
Conversely sending mail is different. If you haven't written a letter or packed a box there is by your choice nothing to do. That situation can persist for days, weeks, months, or even years if you have nothing to send outbound. When you have something to send it takes you over an hour to drive to the post office, wait in line, mail the package, and return... but there are no consequences if you don't go to the post office other than delaying your outbound mail.
Yes sending a package is expensive but it doesn't happen often and you can decide to wait until multiple packages have piled up to be sent making your trip more efficient.
Conversely you're paying 3 minutes to check your mailbox every day whether anyone sent you mail or not. If you don't pay the 3 minute price critical mail may be lost forever.
If you send a package once per month, that costs 60 minutes, but checking the mail on an average of 27 days per month costs 81 minutes. Receiving mail costs you more time overall despite each receive attempt being 20x cheaper than each send attempt.
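In code form, the same arithmetic:

    #include <stdio.h>

    int main(void)
    {
        double send_min  = 1 * 60.0;   /* one outbound package per month     */
        double check_min = 27 * 3.0;   /* daily checks, mostly empty-handed  */

        printf("sending  : %.0f min/month\n", send_min);       /* 60 */
        printf("receiving: %.0f min/month\n", check_min);      /* 81 */
        printf("per-attempt cost ratio: %.0fx\n", 60.0 / 3.0); /* 20x cheaper */
        return 0;
    }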
If I was finding that I am spending too much energy checking my mail for a letter, I don't presume that it takes me more energy to check mail than it does for me to send it. Rather, I presume that I'm choosing to go check my mailbox too often. That is literally all I am saying here.
Your logic on why that is, is effectively my assertion on why someone would find that transmitting devices are easier on the battery. People build the device thinking "the message could come at any time, so we always have to check." Stop doing that, and you go easier on the battery.
That is all to say, yes? We appear to be in fairly solid agreement on why a receiving device uses more battery. Has nothing to do with one being more battery per byte sent/received. Has everything to do with whiffing on many more receives than you do on sends. And is in complete control of the person building/operating the device.
>My assumption being if you use the same sleep/active cycles between transmit or receive, then you should see similar battery life? Is that not the case?
I don't get your question. Use the same sleep/active cycles for what? If you're asleep you use almost no power. If you do CPU processing in that time you will use more power. But CPU processing for what? Most battery-powered IoT widgets don't usually run any CPU-heavy application; they just collect regular sensor data and forward it to a base station, which forwards it to some cloud service. And anyway, any modern ARM core consumes much less power in operation than an RF receiver or transmitter, which are the big gas guzzlers in this case.
The opening assertion that got me in this thread was that sending data was cheaper than receiving data. Opening post said that they were surprised to find that is the case in the work they did.
To that end, I was asserting that the problem the poster was seeing was that they were too aggressive in how they "listen" for data.
I’ve been working with IoT for about 8-ish years and the one thing that has rung true across platforms, designs, customers, and use-cases is that you can only squeeze so much performance from a setup that wasn’t properly optimized for low power consumption.
I’ve had customers approach me, desperate for me to make an IoT device survive just one night on a small LiPo battery, enough so the sun in the morning will charge it up again. But their solution was a cobbled-together mess with an ESP32 looking for their administration network to connect to, and a uBlox modem powering up and sending off a 4MB packet every 5 minutes. Turns out it would have been more power efficient to leave the modem powered on and connected to the cell network, as you need to exchange something like 25-50 packets per handshake, versus 2 packets per minute if you’re just idling.
I’ve had the curse of being the guy who just fixes everything, because I have a background in both hardware and software in addition to knowing cell networks at a low level and the TCP/IP stack (usually DTLS, in this case). When I optimize stuff, I attack it from all directions. For example, it costs more power to receive messages, so for anything non-critical (such as a periodic data packet) I use non-confirmable UDP packets, i.e. fire and forget. I try to avoid using heavy RTOSes on my devices and opt for a simple messaging library to properly format data for optimal transfer over the cell network. The devices I build have low-power MCUs with a restart timer to wake up periodically. I managed to make a solar-powered environment sensor with only a supercapacitor as reserve power.
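Putting rough numbers on that comparison (packet counts only, taken from the ballpark figures above):

    #include <stdio.h>

    int main(void)
    {
        double handshake_pkts = 35.0;   /* ~25-50 packets per re-attach  */
        double idle_pkts_min  = 2.0;    /* keepalive traffic while idle  */
        double window_min     = 5.0;    /* one report every 5 minutes    */

        printf("reconnect every 5 min: ~%.0f packets\n", handshake_pkts);
        printf("stay connected, idle : ~%.0f packets\n", idle_pkts_min * window_min);
        return 0;
    }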
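A minimal fire-and-forget example in that spirit, using plain POSIX UDP with placeholder host/port (the real setup described above runs over DTLS on a cell modem, which this doesn't show):

    /* One UDP datagram, no acknowledgement expected, so the radio never
       has to sit in receive mode waiting for a reply. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
            return 1;

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(5683);                    /* placeholder port */
        inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr); /* placeholder host */

        const char payload[] = "temp=21.5";              /* tiny sensor report */
        sendto(fd, payload, strlen(payload), 0,
               (struct sockaddr *)&dst, sizeof dst);     /* send and move on   */

        close(fd);                                       /* no reply awaited   */
        return 0;
    }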
This went on a bit, but I think my point is that for well-architected, low-power devices, you need to start from the ground up, and sometimes that means ditching your IoT platform and spinning your own hardware and firmware. My last observation is that many hardware engineers are not the ones who install or test the solutions they design and are unaware of the power consumption outside of the specs in the data sheet.
Effective capacity also drops with load for many batteries, and there can be subtleties. Read all the data sheets and applications manuals from your suppliers.
Even very good firmware engineers can need reminders that everything you do with a battery operated device drains the battery.
Keysight, R&S, and Tektronix/Keithley all have nice battery test devices and battery test simulators. You can rent one if buying one takes your breath away.
Also IoT devices can require you to use very fast ammeters or sourcemeters to correctly measure net current or power. The RMS reading on your multimeter might not even register fast spin-up and spin-down on a BLE device. That's another use case for the Qiotech tool. Again, the big instrument makers make even nicer stuff. Call an FAE.
Use a CR2032 battery or be prepared for a life of misery. :)
To a first, second, and third approximation: CR2032 is the largest coin cell that exists. Anything else has terribly weird quirks and may not actually be better than a CR2032.
We're building a bicycle product involving load cells, and one of the issues with them is that strain gauges typically have pretty low resistance. As we need readings at a relatively high sample rate (measuring pedalling dynamics), it's a lot of fun waking up the load cells, getting the sigma-delta ADCs to produce nicely-settled 24-bit results across multiple channels, and synchronising the whole thing across 4 separate sensor nodes. Basically we have to pedal and chew minimal joules at the same time.
Latest hardware has coulomb counters to keep track of charge state; you probably already know it but the nPM1100 has some good press.
Agreed, and the idea to make it a message queue (rather than a generic packet protocol) is inspired. I don't need to implement a hundred different protocols for a hundred different devices, I just need to process messages they send.
I've always been curious how the Ring video camera can go "Live" 5 seconds after you click a button on a web browser, but still last for months on a small battery.
A sometimes overlooked resource is your MCU vendor. It may have a power monitoring daughterboard w/ supporting software to help you optimize your battery usage against a development board. The last one I used was Nordic's and it was stellar, free (thanks to the FAE), allowing us to ship a BLE device that would run for at least 1 year on two alkaline AA cells.