Engineer A: Hey, how would we know whether loading this image as a .webp vs. a .jpg performs better? Our lab testing is one thing, but the hardware out there varies vastly from phone to phone -- we need some data on whether the average phone has enough optimizations for one versus the other.
Engineer B: Oh, what if we just run a background task on some of the phones that tries loading an image that we don't show, just to get the metrics?
Engineer A: Hmm, that could work -- we could load just one image across some small % of the user base to get a representative sample and report the data to system X, so we can gather statistics and build heuristics off of that.
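A minimal sketch of the kind of background benchmark this conversation imagines, assuming Android/Kotlin; the 1% bucket, maybeRunFormatBenchmark, and the Telemetry object are invented stand-ins (Telemetry playing the role of "system X"), not anything Facebook has published:

```kotlin
import android.graphics.BitmapFactory
import android.os.Build
import android.os.SystemClock

// Stand-in for "system X": some internal metrics pipeline (hypothetical).
object Telemetry {
    fun report(name: String, fields: Map<String, Any>) {
        // hypothetical: queue the record for upload to the stats backend
    }
}

// Enroll roughly 1% of users deterministically, decode the same asset in both
// formats, and report the timings per device model.
fun maybeRunFormatBenchmark(userId: Long, webpBytes: ByteArray, jpegBytes: ByteArray) {
    if (userId % 100 != 0L) return

    fun timeDecodeMs(bytes: ByteArray): Long {
        val start = SystemClock.elapsedRealtime()
        BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
        return SystemClock.elapsedRealtime() - start
    }

    Telemetry.report(
        "image_format_benchmark",
        mapOf(
            "webp_ms" to timeDecodeMs(webpBytes),
            "jpeg_ms" to timeDecodeMs(jpegBytes),
            "device" to Build.MODEL
        )
    )
}
```

The point of the thread is that nothing in a snippet like this looks alarming in code review -- the cost is paid invisibly by the user's battery and data plan.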
Ethics is not a strong suit/required reading for software engineers, so not every engineer will read the above and detect something wrong. Hell, running experiments on wide swaths of users without consent isn't quite above board ethically either, but we've got whole technology trees and an industry dedicated to it.
Most of the time the person with enough ethics knowledge to realize there's a problem here is more incentivized to alert legal (so they can add it to the EULA/ToS that users don't read) rather than stop the behavior.
I suspect many engineers could read the conversation above and not think there was anything wrong there to begin with. And of course, once X teams across Y companies start using this, we've got a problem.
Note that I use engineers for all the roles on purpose -- the idea that it's only non-engineers doing all the sketchy shit is a deflection of responsibility. I, for one, am pretty proud of George here -- he's turned down what is very likely a lot of money to do the right thing. We can only guess at how many do not take this route.
[EDIT] I want to note that I do not put myself above this conversation -- being able to recognize that this is wrong immediately is not some innate skill that everyone has, it has to be reasoned about, and often where to draw the line comes down to widely-enough-held social mores/morality.
For example, I run ethical ads on my personal blog -- I know that ads incur unnecessary load on the machines of users who visit, and in turn burn unnecessary battery on those visitors' machines. Am I the same as Facebook's engineers who work on this system? Probably not, but explaining that completely (and convincing yourself or anyone else) is more complicated -- maybe it's only a matter of degree.
Well, if you work for facebook, ethics is not your priority in the first place.
On the other hand, if I ever decided to throw mine in the toilet and work for a company that manipulates people and performs mass spying on them, then I would go all the way and just do things like this.
You misunderstood. I gave other examples of good things.
To elaborate, I said “work on removing the negatives”. If you won’t work at fb, others will. But when you work there, you actually have the power to change the place so that they do less of the stuff that you described.
I wouldn’t work for Facebook because I don’t hate myself enough. But I also don’t clutch my pearls and think that any for profit company is feeding starving children.
You can make a negative case for any of the large tech companies.
> You can make a negative case for any of the large tech companies.
Yes you can, but it's a spectrum. Microsoft abusing its monopolistic position to strangle competition and adding adware and spyware to an OS people pay for is ethically and morally bad, but it is much less bad than Facebook profiting from, and doing nothing to stop, an actual active genocide. Facebook is by far the worst offender, ethically and morally, of the big tech companies not actively involved in the military-industrial complex.
> The invisible hand is a metaphor used by the Scottish moral philosopher Adam Smith that describes the unintended greater social impacts brought about by individuals acting in their own self-interests.[1][2] Smith originally mentioned the term in his work Theory of Moral Sentiments in 1759, but it has actually become known from his main work The Wealth of Nations, where the phrase is mentioned only once, in connection with import restrictions.
Can FB even do anything meaningful against "an active genocide"?
It's important to assign some responsibility to those who have/had the ability to prevent it, but it's not clear to me that FB could have prevented it.
In close races like the US presidential election it's plausible that FB was the deciding factor, but places where genocides happen are not known for close races.
And to be clear, it's not like FB couldn't have done a lot more with a lot less money to filter out certain kinds of content, but they intentionally wanted some kind of neutrality, right? And that's probably noble (and the maximally harmless thing) when it comes to - let's say Norwegian politics - but not so great in a lot of other contexts. However engaging in moderation with the intent to prevent social ills is very non-trivial.
To me it looks like they realized they were in a hard place and then basically bailed out from the hard problem and instead went all in on trying to make as much money as they can.
The question is not “how could a social network like Facebook have done any better given small tweaks to content moderation?” It’s “is Facebook fundamentally designed to create these kinds of misinformation bubbles?”
That second question is much more painful to ask if you’re an engineer at a company like this, because it means there’s no changing it from within.
but it's also probably false with a likelihood of at least 99%
Facebook is designed fundamentally to be a social graph, a representation of people's lives, blablabla. the more time people spend representing their lives on FB, the more attached they are to their online persona/profile/connections (and the dopamine rewards through likes and engagement), the more money it makes.
misinformation bubble? who cares. if that is what people want to represent, FB provides it, sure, but it's not what it is designed for.
You wouldn't get the tech-savvy people with that, no. But it's easy to imagine someone barely making it through a day, not quite cottoning on, but hearing how Android phones last longer - and dismissing reports about iPhones lasting longer because their iPhone... doesn't.
They're not replacing their phones over this, but for their next phone they're choosing an Android. For battery reasons. Fast-forward 2-3 years and most of these folks are switching (with renewal of subscription).
My main question regarding such a scenario: could they find the right balance, ie. drain the battery enough to annoy, but not so much as to attract investigation that'll definitively identify facebook's app as the culprit?
"load just one image across some small % of the user base" sounds harmless, especially while the app is open. Different world from intentional draining.
But why not just limit such benchmarks to times when the phone is being charged? If they write internal guides on how to do this "thoughtfully" one would think they thought more about this.
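For what it's worth, that gate would be cheap to add; a hedged sketch, assuming Android/Kotlin (the function name is made up):

```kotlin
import android.content.Context
import android.os.BatteryManager

// Only run opportunistic benchmarks while the device is charging, so the test
// never costs the user battery they might need later.
fun shouldRunBenchmark(context: Context): Boolean {
    val bm = context.getSystemService(Context.BATTERY_SERVICE) as BatteryManager
    return bm.isCharging
}
```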
Sure, they could (maybe they did, I have no idea) -- but note that George had this to say about the doc:
> The document included examples of how to run such tests. After reading the document, Hayward said that it appeared to him that Facebook had used negative testing before. He added, "I have never seen a more horrible document in my career."
I would imagine a company the size of Facebook can look at their internal stats, see what devices most people use, buy 10 units of each of the devices covering the top 80% of users, and create a lab environment to test them, and that includes battery usage.
I would agree that it is not a trivial black-or-white question about ethics for a number of reasons:
1. It wasn't Facebook who invented testing of products on different audiences. This was happening long before invention of computers. A cook altering the ingredients of the soup or a garment producer changing their supplier of fabric, they all altered the features of their product, sometimes genuinely believing that there will be more good than harm in it.
2. Customers may have a right to know if something has changed in the product they love, so they can make informed choices. With complex products like a digital platform or a car, it is no longer feasible to become aware of all the changes and it is absolutely impossible to understand them. Now it is a matter of trust in regulatory supervision more than "do you want to try our new recipe?"
3. Engineers may have a duty to build their systems responsibly and with respect for the needs and rights of society. However, expecting them to exercise the right judgement on complex legal matters is a big stretch, especially given the diversity of modern engineering teams and their cultural backgrounds. What is acceptable in the USA may be completely unacceptable in Germany or Pakistan, and vice versa.
Finding where to draw a line is important and it is not just (and mostly not) on engineers to do it. Engineers solve technical problems. This is a problem of trust and regulation, which must be considered by the whole society.
A better example would be a car manufacturer testing a new production method that reduces the safety of seat belts, or an aircraft manufacturer testing new software that compensates for significant changes in hardware design without airlines knowing about it. Though the question is whether this case is really about seat belts or about the diameter of a cup holder, in terms of safety.
Would you put holdouts of good features in the same bucket as negative testing? Let’s say someone develops an ML model that cuts down spam by 90%. It’s launched to all but 1% of users, so that you can consistently measure that the spam reduction is 90%, that the users are benefiting from it, and that the false positive rate is low.
From a different POV you’re intentionally worsening the experience for 1% of users. Is this a negative test? If so, is it morally different?
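A minimal sketch of how such a launch holdout is typically assigned; the feature name and the 1% split are illustrative, not anything from the article:

```kotlin
import kotlin.math.absoluteValue

// Deterministic, salted bucketing: everyone gets the new spam model except a
// stable ~1% holdout that lets you keep measuring its effect after launch.
fun inSpamModelHoldout(userId: Long): Boolean {
    val bucket = "spam_model_v2:$userId".hashCode().absoluteValue % 100
    return bucket == 0
}

fun shouldUseNewSpamModel(userId: Long): Boolean = !inSpamModelHoldout(userId)
```

Mechanically this is identical to any other experiment bucket; the ethical question raised above is about what the holdout arm is being denied.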
You're right -- this is a mitigation that highly ethical/moral companies will take. It's likely that EULAs/ToS permit experimentation at any time on any version of an application though.
There's also the question of how much of a lab rat people really consent to being -- given that you can't really know what experiments will be run ahead of time.
What are the disclosure rules? What is an experiment versus what is not? What is legal to use to experiment on people with? This is the kind of place regulation is normally present, and it hasn't caught up yet.
Sounds more than plausible, but I don't really see how that would be "horrible" (quote from the ex-employee). Also, I don't think it jibes with the reported name "negative testing".
A potentially simpler explanation is an A/B test of a potentially CPU-intensive feature, and a misinformed data scientist.
Testing a higher-compression image format might effectively be testing relative behavior between an experiment arm with higher data usage and lower battery usage and an arm with lower data usage and higher battery usage. I.e., "testing draining users' batteries."
There are too few details in the blog post or the original article to really figure out what's being discussed.
> I said to the manager, 'This can harm somebody,' and she said by harming a few we can help the greater masses. Any data scientist worth his or her salt will know, 'Don't hurt people.'
Facebook has great utility for billions of users. E.g. keeping in touch with family, friends, relatives, exchanging knowledge in Facebook groups etc. Of course it also has downsides. But I don’t believe it’s reasonable to dismiss the upsides of Facebook.
PS. This does not excuse shitty behaviour of course.
From the article, it sounds like this was part of A/B testing. They want to see how people behave when their battery doesn't last as long... so they drain the battery on 0.1% (or less) of users' phones to collect the data.
Of course, if you ruin battery life for hundreds of thousands of users... you're likely to hit at least one user who really would have needed that battery.
Exactly. I found myself asking "but how would draining batteries help them?".
I think lots of companies already do stuff like this, for example, Microsoft rolls out updates to smaller groups before doing larger deployments. Google Chrome tests some experimental features before a full deployment.
It would be better if there were simply an opt-out of being part of this testing group. If the user goes to the effort of opting out, then they likely have a good reason for it. Maybe Facebook also doesn't want to run this on devices that already have bad battery performance, for example.
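A sketch of the opt-out being asked for, assuming Android/Kotlin; the preference key is hypothetical, and the point is only that the check is trivial:

```kotlin
import android.content.Context

// Respect an explicit user opt-out before enrolling the device in any
// background experiment. Preference names are made up for illustration.
fun eligibleForBackgroundTests(context: Context): Boolean {
    val prefs = context.getSharedPreferences("experiment_prefs", Context.MODE_PRIVATE)
    return !prefs.getBoolean("opted_out_of_experiments", false)
}
```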
They want to know how the app acts when a phone is in a low-power state (because power profiles often kick in only then, and that can lead to bad UX).
Thus the intent seems to be: deliberately put some users' phones into the state desired for the tests (i.e. by draining their battery until they're at that level) and then commence the tests.
In a lab it's all very clinical, but what if the test subject happens to be stalked at the time, desperately needs to make a phone call for help and then the device runs out of power at the critical point? That's obviously a made up, cherry picked example, but with large enough numbers you can expect that something like this would occur.
What's odd to me is that they don't seem to have been comfortable just getting test results from users whose phone battery state is at the desired level naturally (there's no major ethical issue where you, the tester, haven't caused the low battery provided you're not subsequently taking battery down to zero)
But why would you call something like that "negative testing"?
Maybe I'm being too cynical, but my interpretation was that they wanted to know e.g. how many would uninstall the app if it consumed more power. Would make business sense to test this, and would be logical to call it "negative testing": do something negative to the user and see if they react.
So they didn't explicitly ship a feature to drain users' batteries; they seem to have shipped a feature that runs tests (measuring how long different image downloads take, etc.) and that drains users' batteries.
It's not clear to me why they wouldn't run a few tests for a few seconds across many/all of their hundreds of millions of users so the battery drain would be unnoticeable, rather than enough on a single user where it would be noticeable.
I read it as one of the tests being intentionally draining user's batteries, e.g. to see how important it is to optimize for battery ("we see X% lower engagement from users in the battery hungry category, so we should invest more in optimizing the app for that").
The problem is not one suffered by Facebook users, it's a problem suffered by people working at Facebook at the hands of evil management. Noble Facebook employees, fighting with their superiors, trying to do the right thing, whilst collecting a fat paycheck.
I've recently migrated to a Pixel 7 Pro with GrapheneOS and noticed that my battery was lasting about 6 hours. And there was nothing unusual in the battery usage report. Initially I thought that something was wrong with the new Pixels or with GrapheneOS.
So I started reading ADB logs and suspected that something might be wrong with the Meta apps (which I installed in a work profile). The logs were full of various messages and stacktraces from their background services. So I put them (Facebook, Messenger, WhatsApp) into "restricted battery" mode and disabled background data. That was enough to make the battery last 12 hours.
I thought that it might be an issue with Graphene sandbox and bad Meta code, not a deliberately created "feature".
In the end I've also restricted Telegram and now my battery lasts for 17 hours.
So, I went to the extremes and now I do the same (restricted battery + disabled background data) for all the apps by default.
Reading ADB logs is useless for any kind of worthwhile analysis. Might as well fortune tell from coffee sediment.
If you want to figure out what's going on with your battery, grab a bug report (the adb bugreport command is one way). Then upload the resulting ZIP into Battery Historian - https://github.com/google/battery-historian (there's a hosted version somewhere on the web if you're willing to trust it with your debug data).
That will give you a usable graph of what's been keeping your phone awake, using radio and what the state of your signal quality was.
What's wrong? GrapheneOS is the perfect tool for that.
They are installed into a work profile. Created specifically for all the untrusted shit around. Moreover, the networking in that profile is routed through a VPN.
To be fair, I wouldn't be surprised if the cause of this is precisely the purity of GrapheneOS: given it lacks Google services and Firebase Cloud Messaging to centralize app notifications through Google, the apps need to do frequent polling or keep an active connection to Facebook's infra to receive notifications. Same with Telegram; each has a slightly different polling mechanism when Google-powered push is not available (common on phones without Google services), which is most likely far less power efficient.
It doesn't lack Google services or Firebase. They work as regular apps, sandboxed by SELinux (I'm not sure what else is involved, but they are sandboxed and they work flawlessly).
It is surprising that this blogger, Friedman, did not or could not get a response from Meta. Even a simple "a Meta spokesperson said that no such feature exists" would give a tiny bit of balance to an article that otherwise echoes this fired employee's complaint. Journalism 101.
Hard to say from the article but I think the accusation is not that Facebook is deliberately running down batteries (why?) but that they are deliberately running tests in the background which run down batteries, eg. surreptitiously having a user's app run unnecessary image compression or decompression.
Yeah, a better word would be "knowingly" because the intent isn't to drain the battery. The guy's concerns aren't without merit, but it's a weird hill to die on.
I can't shake the feeling of how 'double' this is. On one hand, yes, this is like criminal behavior. On the other hand: the difference in battery drain rate between a phone running stock Android vs. the same phone running, for instance, LineageOS for MicroG is also pretty big, whereas for the user-facing functionality it can do the exact same thing.
In other words: in my view if you get a smartphone these days you are in fact already buying into a device which can save your life, but needs to maintain a charge to do that. And on which the vendor is essentially already doing as they please, and not to your benefit, since you agreed with that somehow, and all of that drains of course battery. This particular operation is just one of many. Just somewhat more direct.
Agreed. As much as it's noble to say technology companies (especially the successful ones) should care more about their users/customers, these sorts of conversations can really get off into the trolley-problem weeds of ethicizing generic tools, and generic entertainment tools going by what users spend most of their time on. At some point we, the public, have to take responsibility for our own security (or whatever else, as the specific case may be) by coming up with a plan slightly more sophisticated than "Hope my angry birds machine is up to the task of literally saving my life".

It feels like back in the bad old days before smartphones, every suburban dad in cargo shorts understood that if he wanted to make sure he could call into work, he had to bring an extra battery along and not waste juice on superfluous calls, and preferred separate pocket computers for time-wasting entertainment or non-critical business applications. Nowadays we've integrated the two together for convenience, but still get this outrage cropping up every so often over not matching the reliability of the older machines that were designed primarily just to be reliable. Something had to give to, erm, give us our angry birds, and customers bought giving up reliability in emergencies for convenience in the 99.9% of the time that's not an emergency.

Not preparing for the off chance that the computer in your pocket will prioritize something other than utility-level uptime for call-making -- by carrying a second phone or extra power, regardless of who, what, or how -- is a naive failure to plan that needs to be addressed more deeply than whatever nonsense Facebook's up to this month. Which doesn't have to be a problem, so long as we're not pretending things are different. Users are perfectly capable of running their battery down all on their own, or installing software (or, as you point out, THE OPERATING SYSTEM) that runs the battery down all on its own "naturally", because everyone involved has different wants until after they've actually had a heart attack at 2% battery.
Another regular reminder: install as few apps as possible on your phone, and treat all apps in an adversarial manner. Every single app you install may come with multiple downsides (usually just tracking and advertising, but apparently also draining your battery during A/B testing.) and there must be clear and necessary benefits in order to justify the cost.
i don't find it too compelling, but this test could potentially help you understand, to some degree, how important battery consumption of an app actually is to users.
so you run a test where you intentionally consume more of some cohort of users' battery power (you could have several cohorts where you consume more and more power, even). and you look to see if the rate at which your app is deleted/force-quit by the cohorts with more power usage goes up. if it does then you can assume that you're overdoing the battery drain. if it doesn't then you can assume users don't really care (or don't care enough; maybe they're unhappy but your app is too important to them to delete).
why this could be useful is when you're deciding what to prioritize - if you've got data saying that users don't care about excessive battery consumption and they'll keep using the app anyway, you can argue against optimizing for battery life in future development, presumably letting your developers do things faster/more lazily. or, it could show that battery life is super important and be a valuable argument to prioritize power optimization work in the name of keeping your users from jumping ship.
personally i'd rather just presume that battery life is important and that optimizing for efficient use of our users' batteries is the right thing to do, regardless of hard data, but i'm sure there are people out there that think differently.
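as a sketch, the dose-response design described above could be as simple as this (cohort sizes and factors are invented for illustration):

```kotlin
// Assign users to cohorts that add increasing amounts of extra background work,
// then compare uninstall/force-quit rates per cohort in downstream analysis.
enum class DrainCohort(val extraWorkFactor: Double) {
    CONTROL(0.0),
    LOW(0.05),
    MEDIUM(0.10),
    HIGH(0.20),
}

fun cohortFor(userId: Long): DrainCohort {
    val buckets = DrainCohort.values()
    return buckets[(userId % buckets.size).toInt()]
}
```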
hard to make that study work: it presupposes that users (a) understand that their battery life is decreasing and (b) understand that this application is responsible for it (and to what degree). Those are two big ifs; it's more likely they'll chalk it up to battery aging than to a malicious application that used to be well-behaved. That doesn't, however, prove that users don't care - it only takes one article with a headline like "Phone draining quickly? Facebook's battery usage has increased by XX% over the past year and is responsible for the majority of its users' battery drain" to swing the pendulum completely to the other side, where more users delete your app because of this new reputation than would be proportional to the actual battery drain; but without that trigger the study is not complete.
Annoyingly easy to discredit this article as a disgruntled former-employee trying to stick it to Facebook/Meta.
I'm no fan of Facebook, seriously.
But it's hard to take an article like this at face value.
Was he fired for not doing his job, and his grievance is a reframed legitimate problem (i.e. doing data analysis in the background will run down the battery)?
Was he just laid off during this most recent round of layoffs and is reframing it as an injustice for him fighting for users?
We only have his word, and there only needs to be a hint of truth to make it easy to digest.
See also: It's hard not to know what you're getting into with Facebook, especially in the role this person took. Data analysis is the name of the game.
I don't think someone would risk being stuck with the "faked the reason for firing" label and becoming harder to employ by making up stuff like that and then going public with it. I mean, sure, it's possible, but lying about stuff like that is on the stupid end of the spectrum.
I understand your point, but the "How to run thoughtful negative tests" document is pretty specific. Now, if that document does not exist, you are 100% right; but if that document exists, you may be wrong, and this would need to be looked at more closely.
I find it hard, despite hating Facebook, to believe the article with the lack of evidence and how easy it would be to explain the symptoms described as part of another (less intentionally malicious) operation.
This angle remains interesting because of resonance with the early days of computer hacking law. The only tangible harm judges could convict on was "theft of electricity". Though we since have decades of cyberlaw on misuse and intrusion, this old concrete and measurable harm is still in the background, but rarely used. I think it should be used more creatively - with the added clout that squandering energy and "wear and tear" inflict a societal harm through climate effects and e-waste.
I get that there was a lawsuit and the ex-employee said that what they were doing might harm people. But what were they actually doing? Draining the battery on purpose for some reason? Or was it just an unfortunate side effect of something?
I’ve read this short article carefully twice now and I still find myself trying to guess why Facebook would do this. I conclude it’s not a good article.
Why? Just like for other telemetry: all they have to do is push a small modification somewhere and they instantly get access to millions of data points, which they can then use to analyse what works best for them. Though I agree it is very, very vague whether the 'battery draining' is really just that, or more a result of something like 'for _ in range(10000000): measurePerfOfAvsB()'. But I assume the original source did not, or could not, disclose that.
I had assumed it was something like “we could cache this locally on the device, but then our telemetry would potentially be delayed, where if we always fetch live from the network then we get to see what they wanted immediately which is better for facebook for (insert reasons)”. The network fetch would be much more expensive in terms of battery life.
Full disclosure: I have had to implement telemetry collection and have faced this kind of trade-off against power budget, but not for social media/personal/phone apps.
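A hedged sketch of that trade-off in Kotlin; everything here (names, the flag) is illustrative, not any real telemetry API:

```kotlin
import java.io.File

// Send each event immediately (fresher data for the backend, more radio wakeups,
// more battery) vs. append to a local batch uploaded at the next natural window.
const val SEND_IMMEDIATELY = false

fun uploadNow(event: String) {
    // hypothetical: POST the event to a telemetry endpoint, waking the radio
}

fun recordEvent(event: String, queueFile: File) {
    if (SEND_IMMEDIATELY) {
        uploadNow(event)
    } else {
        queueFile.appendText(event + "\n") // cheap local append, flushed later in bulk
    }
}
```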
Anecdata: I pretty much never go on Facebook anymore and now my phone battery comfortably lasts all day. The current worst app on my phone is Duolingo, it guzzles 15% in about 15 mins and makes the phone noticeably hot. The most impressive app is Apple Music which feels like it should use a lot of battery but doesn’t.
Both Apple Music and Duolingo are basically just downloading sound files from the internet and playing them. One can do it all day and barely drop 10% battery. The other would drain your phone in an hour. OK, Duolingo sometimes does some speech recognition, but I bet that's done server side, and it's downloading/uploading way less data than Apple Music.
I couldn't get to phonearena.com, and when I checked it's in one of the default lists used by Adblocker on OpenWrt (adaway, adguard, disconnect, yoyo):
:::
::: domain 'phonearena.com' in active blocklist
:::
+ phonearena.com
Any thoughts!!?
Edit: Is there a good site or script to run through the Adblock lists, check them and report the location of matches?
The only thing that I could find was from 2017, where phonearena were apparently re-injecting ads using WebRTC to bypass adblocking. So perhaps you are using an old blocklist such as this: https://s3.amazonaws.com/lists.disconnect.me/simple_ad.txt.
Thanks for the info. I found the raw blocklists on the router and the domain is in the 'disconnect' list, which I believe from a bit of reading is a tracker blocker list:
Z:\adblock-temp\extracted\adb_list.disconnect (1 hit)
Line 1367: com.phonearena
I did check the .json source files at the disconnect github page (about a month old) and the site is not listed. Beyond that I have better things to do with my Sunday!!
So many comments doubting the validity of the claim.
Kinda fishy, but the touchstone is simple:
- Does Facebook have an IRB?
- How are experiments documented and vetted?
We did not know about the Plutonium Files either (for the record, those were the experiments where plutonium and other radioactive compounds were injected into pregnant women, prisoners, mentally disabled children, etc.).
I had the facebook app on my phone 10 or so years ago (maybe less). At first, everything was fine. Then after a while, my phone's performance ground to a halt. Eventually, I figured out it was the FB app and uninstalled it, and my phone was like new again. I buy budget level phones, so I was able to notice the performance impact, whereas a higher-end phone might not have.
I mostly used the app for uploading pictures. Once I stopped using the app, FB started making the mobile site push you into using the app. That's around the time I stopped uploading pictures to facebook.
Not that I disbelieve the allegations, but the battery tracker on my phone consistently tells me that the largest consumer is Chrome, with Facebook Messenger somewhere way, way down the list.
If they are bumping up battery consumption, either I've never been in the test group or they aren't bumping it enough to be a real risk.
But that's probably because you're actually using Chrome a lot? That tracker isn't very accurate, and it's very biased towards apps you're actively using (which makes sense, doesn't it?).
For the A/B testing, I can think of testing whether having a low battery affects messaging to certain people at a given time (or, if the battery is low, whether they respond and how much time they're prepared to spend -- combine it with GPS data and it's even better), and then ranking those people by importance to the user.
Anecdata at best: When I cancelled my FB and uninstalled the app (Android), my battery life at the end of the day rose by 10%-15%. Impossible to say if that was due to hidden app consumption or the fact that I was no longer doom-scrolling.
The claim in TFA is they run a/b style tests where a sample group would have their battery deliberately drained to test performance of the app in low-power situations.
Based on the general fixation on tests at Meta and Alphabet et al, it seems pretty plausible to me that they might try such a thing although:
1) the article is just the employee's claims in the context of employment arbitration, rather than evidence. For example they claim to have been shown a document about running such tests, but it's not clear whether that document has been entered into evidence (and certainly not made public). That document would presumably be subject to discovery in a lawsuit, although in arbitration it's unlikely it would be in the public record unless someone with standing made a freedom of information request (which may or may not reveal it).
2) It seems to me such a test would be unnecessary regardless of the ethics of it. You could just look at instances when a user's battery was low organically without draining it on purpose. The installed base is large enough that this would almost certainly work just as well for most practical purposes.
None of the mentioned tests had anything to do with directly testing power drain or low-power performance. He just thinks they aren't giving enough weight to battery drain as an undesirable side effect of features that don't directly benefit the user. That's it. That's his whole gripe. They're testing how fast an image loads, that uses up the user's battery, and that's bad for the user.
Okay, but the "you" in this sentence doesn't transfer to other readers. It's you when you read it, but when I read it, it's me. And I don't believe him.
> That document would presumably be subject to discovery in a lawsuit, although in arbitration it's unlikely it would be in the public record unless someone with standing made a freedom of information request (which may or may not reveal it).
Facebook doesn't retain documents for any significant period of time to the point where they lobotomize their own memory to avoid stuff like this in lawsuits (and to reduce costs of discovery).
Plus 2) would require additional design, engineering and QA work to implement the code to drain users' batteries. And it would cause plenty of users to uninstall/complain if Messenger was draining huge amounts of battery. It really doesn't make sense considering Messenger has 1.3 billion users, and remaining battery percent would likely be a bell curve across all users regardless of sample size.
You want to test your app in negative scenarios, and one of them is "when and how to fail gracefully when battery is extremely low".
My phone rarely goes below 10%, and maybe it has seen 5% one or two times. If Facebook waits for that, they will never run the tests. But if they detect my battery is at 10%, then drain it to 3%, launch the controlled failure testing, then drain the phone to complete shutdown, on the next boot they have their very valuable data.
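Detecting the trigger state is the easy part; what the app would then do with it is the contested part. A sketch of just the detection, assuming Android/Kotlin:

```kotlin
import android.content.Context
import android.os.BatteryManager

// Read the current charge level; a test harness could wait for (or allegedly
// force) a low level before starting its "fail gracefully" scenario.
fun batteryPercent(context: Context): Int {
    val bm = context.getSystemService(Context.BATTERY_SERVICE) as BatteryManager
    return bm.getIntProperty(BatteryManager.BATTERY_PROPERTY_CAPACITY)
}

fun atLowBatteryTestThreshold(context: Context): Boolean = batteryPercent(context) <= 10
```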
You could buy 100 iphones... Or you could "borrow" a million for free, to get rock solid stats.
Also there are hundreds, if not thousands, of different phone models. The allegedly used way of negative testing is much better than buying phones, and is virtually free.
"Borrowing" a million phones and draining their battery down would make increase the perception of your app draining batteries.
It seems pretty dubious that you would be willing to do that just to get slightly better low-battery data.
Coming from a business whose founder said: "I have over 4,000 emails, pictures, addresses, SMS... People just submitted it. I don't know why. They trust me. Dumb f**s", ethical considerations have zero weight here.
For a random subset of users, when they view one image, the app downloads two images (eg webp and jpeg) and makes a note of which one decodes fastest; the goal being that in the future they can just download the fastest-decoding one, saving CPU load (and thus battery usage) for everybody in the future.
The argument is that downloading two images when you only need one is “deliberately wasting battery”.
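If that reading is right, the downstream use of the measurements might look something like this (purely illustrative; thresholds and names are made up):

```kotlin
// Aggregated decode timings reported for one device model.
data class FormatStats(val webpMs: Double, val jpegMs: Double, val samples: Int)

// Once enough devices of a model have reported, serve only the format that
// decoded fastest on that model; fall back to JPEG until there is enough data.
fun preferredFormat(stats: FormatStats?): String = when {
    stats == null || stats.samples < 1_000 -> "jpeg"
    stats.webpMs < stats.jpegMs -> "webp"
    else -> "jpeg"
}
```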
What I can believe: Engineers at Facebook were micromanaged into dismissing issues such as users' batteries being drained.