Samsung TVs appear less energy efficient in real life than in tests (theguardian.com)
133 points by davidbarker on Oct 1, 2015 | 70 comments



Ok, so this is taking on the feeling of an Internet Meme, "Your <noun> has less/more <property> than the testing said it had."

Is there anyone here who doesn't believe that if you make a testable property a price-affecting feature of a product, the manufacturer will find a way to optimize for the property value they want, at the expense of the consumer or testing agency?


Even that is just a special case of Goodhart's law:

> When a measure becomes a target, it ceases to be a good measure.

https://en.wikipedia.org/wiki/Goodhart's_law


There are big differences in the impact of misleading tests though.

For a TV, meh. It's a little more energy used, and the cost is barely noticeable to the owner. When you're done with it, it's essentially disposable.

For the cars though... oh wow. It hits taxation levels in some countries, and fuel cost everywhere. It hits resale value, which likely hits financing companies. It actually causes additional pollution in crowded cities. It crosses thresholds in which large swathes of the market would choose not to purchase the vehicle based on those numbers (business leases).

Everyone knows that advertised numbers are approximations based on ideal conditions on a very small sample. The question is really how misleading those approximations are, and what the impact is on the consumer and any third parties.


The whole point of testing is to give consumers data, so if the test can be gamed by companies like Samsung, it is a bad test.

There are so many TVs, and they run so many hours a day, that the aggregate energy use is non-negligible.

Consumers should avoid products from firms that game tests such as these. Toyota, for example, was singled out in the fuel consumption report as a company that did not cheat on the tests.


Any kind of improvement motivated by the test is, in a sense, "gaming" the test. There's no clear boundary between optimizing for test conditions and optimizing for something broader. I think the onus is on the designers of the tests to make sure they reflect realistic use. It can't help but be a cat-and-mouse game, but at least the effects of cheating can be reduced to levels small enough that it's OK.

An analogy is a student studying for the exam. That's gaming it somewhat too. The exam is supposed to test what they learnt during the semester and retained over a long period, not what they crammed the night before and will forget by next week.


Interestingly, many auto manufacturers UNDERSTATE horsepower and torque numbers. I can understand that a little, due to variance between actual engines, but these days that variance is low and the spread between specced power and dyno'd power is pretty big on many cars.


Almost no manufacturers do it - historically, BMW and MB have been "caught" doing this, with 335i and C-class AMG engines.

A commonly accepted explanation for this is that the "cheaper" model is made to appear less powerful on paper to entice a buyer to upgrade to the next model up (i.e. the M3).

Relevant tests from 2010 ~ when this meme started: http://www.automobilemag.com/reviews/driven/1009_bmw_335i_33...

In reality, almost all cars "produce" far fewer ponies at the wheels than stated on spec sheets, because HP is "[S]ampled [A]t [E]ngine" - so by the time it reaches your wheels, about 20-30% is lost to the drivetrain.

Edit: formatting


You're correct, but one minor point: SAE is actually an abbreviation for "Society of Automotive Engineers," a standards and certifying body.

The difference you are describing is between "gross" horsepower (SAE J1995 spec.), essentially horsepower measured at an unloaded crankshaft, and "net" horsepower (SAE J1349 spec.), or the horsepower measured at the wheels.[1] The difference is due to drivetrain losses and power used to drive non-engine components.

[1] See http://www.edmunds.com/car-technology/horsepower-gross-vs-re...
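
To put rough numbers on the drivetrain-loss point, here's a quick sketch (illustrative figures only, nothing measured):

    # Rough sketch, not real dyno data: estimate power at the wheels from a
    # spec-sheet crank figure, assuming a given drivetrain loss fraction.
    def wheel_hp(crank_hp, drivetrain_loss=0.20):
        """Return the estimated power at the wheels."""
        return crank_hp * (1 - drivetrain_loss)

    # e.g. a car rated at 300 hp at the crank:
    print(wheel_hp(300, 0.20))  # 240.0 hp with ~20% loss
    print(wheel_hp(300, 0.30))  # 210.0 hp with ~30% loss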


There was the supposed Japanese "Gentleman's agreement" of 276 (286?)HP to keep the government off their backs.

There's also the common saying that "German horses are stronger", because acceleration and dyno tests (long before the turbo 335) have shown them to be faster than they "should" be, particularly compared to certain American muscle cars. I'm not sure how much reality there is behind this, though.


What you might be thinking about is the agreement between manufacturers to not make motorcycle top speeds more than 300KPH (186MPH). There are no limits on horsepower, though, and the top speeds are simply electronically limited. This also has no relation to car models.


No, there WAS an agreement between Japanese car makers to limit the HP of their vehicles. Even the GTRs, Evos and Imprezas left the factory with approximately the same stated HP figures. They could then be reasonably easily tuned to produce ludicrous amounts of power.


Both of you commenting in this chain are correct.

The Japanese "gentleman's agreement" was a limit on power of 207Kw. Towards the end, many of the cars hit by this were making more than that at the wheels, let alone the crank. It was possible to gain 50Kw on certain WRXes and Evos with a different air intake and exhaust, they were so restricted.

Also, for a while there was an "agreement" that sports bikes wouldn't go faster than 300 km/h - they didn't limit them, the end of the speedo was just 300 km/h. Lots of R1s and Hayabusas will be revving somewhere in the middle of their rev range at 300 km/h in top gear, then you can wind on another 4-5-6 thousand rpm while the speedo stays pegged at 300 km/h. You are, of course, going faster than that.


You do realize that speed and revs are linear as long as you're in the same gear? If you're "in the middle of the rev range" at 300 and double your revs, you're going to be going 600 km/h.


Sorry, you're right. I was trying to word that in a way to not say 10,000 rpm to 12,000 rpm, because all bikes rev differently.

Obviously you don't double the revs, but they do for sure increase.


I'd always heard that was to keep insurance premiums lower for those cars, but have no idea whether that's true.


This recent string of news stories makes me wonder: how many other products also cheat on tests like this? Does this mean that our energy usage projections are seriously underestimated, and we're even more fucked than we thought?


And here is the response from Samsung: http://global.samsungtomorrow.com/samsung-firmly-rejects-the... and http://www.samsung.com/global/article/articleDetailView.do?a... "• Motion lighting – Reduces power consumption by reducing screen brightness when the picture on the screen is in motion"


The claim in the article is that the "motion lighting" feature (or something Samsung attributes to it) activates precisely when the content is played "faster than normal", and that it effectively recognizes when the International Electrotechnical Commission (IEC) test is in progress.

If Samsung really documented the exact conditions for activation, and those conditions were repeatable and independently confirmed, then I'd believe it's a real feature. Somehow I don't expect that I'll read that, though.

But it's also true that modern devices don't have a constant power load, that this is a good thing, and that regulatory bodies haven't complained about worse things still in use: my old PC used 30 watts when "powered off" (!), and my cable box still draws the same 30 watts whether powered on or off (!). That's the real sucker. The TV set uses a tenth of a watt when powered off.


I did some testing back in 2009, and having just looked up my spreadsheet, it really is hard to say definitively how much your TV will consume. At the time, my TV was a Samsung LCD A530 (I think).

For example, usage based on input type (Component vs. HDMI):

TV (Wii as input, Component) 48 to 68W

TV (PVR Cable as input, HDMI) 84 to 104W

Then you start getting into different TV settings for games versus TV, day versus night, and the list goes on.

So what should the TV manufacturer put? Should they put the minimum consumption, which is probably the dimmest setting? Or the highest? Or the average? It is an interesting question.
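
For a rough sense of how much the choice matters in money terms, here's a quick sketch using the wattage ranges above, with an assumed 5 hours of viewing a day and an assumed $0.12/kWh (both assumptions, not numbers from this thread):

    # Rough sketch: annual energy and cost for the measured wattage range,
    # under assumed usage (5 h/day) and an assumed electricity price.
    HOURS_PER_DAY = 5
    PRICE_PER_KWH = 0.12

    def annual_cost(watts, hours_per_day=HOURS_PER_DAY, price=PRICE_PER_KWH):
        kwh_per_year = watts * hours_per_day * 365 / 1000.0
        return kwh_per_year, kwh_per_year * price

    for label, watts in [("Wii/Component min", 48), ("Wii/Component max", 68),
                         ("PVR/HDMI min", 84), ("PVR/HDMI max", 104)]:
        kwh, cost = annual_cost(watts)
        print(f"{label}: {kwh:.0f} kWh/yr, ${cost:.2f}/yr")

Depending on which end of the range you pick, the number roughly doubles, which is exactly the manufacturer's incentive problem.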


Samsung's feature from the article is apparently from 2011.


The default setting? The one shown most in the demo mode?


> The one shown most in the demo mode?

Which would lead TV manufacturers to introduce hacks that reduce power consumption solely for the demo mode.


I agree that a good test would have to cover a range of modes, but in this one specific instance demo mode is generally the one that will naturally be the biggest power waster, as demo mode for modern TVs virtually always includes "crank the brightness and backlighting to ridiculous, highlight-crushing levels" as one of the settings.


Wouldn't that mean that the product is showing content at its most pleasing to customers but with reduced power consumption?


Random example from a moment's thought on how to cheat: demo mode typically runs in a store display, and likely has little opportunity for evaluation of audio (unless placed in a separate showroom as some stores do for a home-theater demo, but in that case it'll likely get tuned carefully). So, in demo mode, don't bother turning on the higher-energy portions of the audio system at all, because nobody will notice if the audio sounds tinny. And now you've potentially saved tens of watts in demo mode.

(Also, note that demo mode typically tunes for most attention-grabbing in a showroom, which often does not provide the optimal viewing experience at home.)


Testing for devices should not be done only at the outset. It should be done over time, in the field, through an independent study.

The Volkswagen issue would never have happened. It also makes things much harder to game; anything that can be gamed in cutthroat competition will be.

Over time we could also see which products hold up through wear and tear. Consumers should demand this type of testing now that the gaming has been exposed.


Business idea: Make it easy for consumers to test their own devices / vehicles and aggregate statistics [of users who want to upload them].


It's easy to start - get a Kill A Watt:

http://www.p3international.com/products/p4400.html


I don't understand how the official EU test can be such that it differs from "real world usage". Why don't they just record a couple of hours of real TV shows, play them back and plug in a Watt meter?


> Why don't they just record a couple of hours of real TV shows, play them back and plug in a Watt meter?

That's essentially what they do. The allegation is that this manufacturer's TVs recognize the specific clip used for testing and drop their brightness when they recognize it.
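
Conceptually, the label figure is just an average of watt-meter readings taken while the standard clip plays; a toy sketch with made-up samples (the actual IEC procedure is more involved):

    # Toy sketch: derive a "rated" power figure by averaging watt-meter
    # readings sampled once per second while the test clip plays.
    # (Made-up readings; the real IEC test procedure is more involved.)
    readings_watts = [92, 95, 88, 61, 60, 59, 90, 94]  # hypothetical samples

    avg_power = sum(readings_watts) / len(readings_watts)
    print(f"average power over the clip: {avg_power:.1f} W")

    # A TV that dims itself whenever it recognises the clip drags this
    # average down without changing what consumers see at home.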


That reminds me of tricks used by video card manufacturers to cheat on the WHQL tests: http://blogs.msdn.com/b/oldnewthing/archive/2004/03/05/84469...

Also of how Samsung disabled speed scaling on their smartphone GPUs during benchmarks.


"“The Swedish Energy Agency’s Testlab has come across televisions that clearly recognise the standard film (IEC) used for testing,” says the letter, which the Guardian has seen. “These displays immediately lower their energy use by adjusting the brightness of the display when the standard film is being run. This is a way of avoiding the market surveillance authorities and should be addressed by the commission.” The letter did not name any manufacturers."


That reminds me of a certain car manufacturer. Did they license the "unit under test" heuristics?


Well, part of "real world usage" should also include whether any sensible user would want to use that feature, or if people turn it off because it looks funny.

From their short description of "motion lighting", it isn't obvious to me why this is a feature in the first place.


Dynamic lighting is quite common in most TVs, to adjust for ambient light. In this case they also adjust the lighting in scenes with less motion, as it has less effect on the contrast of the image.

In general, all of these tests can be "cheated" because there is no way around it: you can measure the TV's usage while playing content from the built-in antenna at 50% brightness and 10% audio, or you can test it while streaming 4K video over wifi, recording a show from your set-top box via HDMI-in to USB-attached storage, at full brightness, in 3D 120Hz mode.

The difference in power usage between those two cases will be quite insane.

The problem begins with the whole grading system: people want to see the green A on anything they buy because they assume it will be of higher quality. This, plus moronic testing and standards, is what will push any manufacturer to cheat.

But when you brand TVs, along with ACs, fridges, washing machines, heaters, etc., with exactly the same A++ to G scale, where A itself isn't even green, don't expect manufacturers not to cheat. It's quite a big hit to get a yellow/orange/red marker on your product, and it's not some minor detail hidden in small text; it's marked very clearly, usually bigger than or as big as the image of the product on the details page/sticker they have by it on the shelf.

The power consumption of TVs is pretty much meaningless; the powered-on consumption shouldn't be tested at all, just the standby power, which in some cases is quite insane even for A-rated TVs. This isn't a multi-kW device like air conditioning or heating, which has quite a bit of impact.

Oh, and just to make it clearer: they actually do not measure the standby power consumption of TVs at all for this rating, which is probably far more important, because my Sony Bravia, which is rated A+, barely powers down in standby mode versus lowest brightness, since even in standby it's connected to the network and does some voodoo when it should be well off. And it, like probably every other TV out there, is probably on only 10% of the time at most.


> people want to see the green A on anything they buy because they assume it will be of higher quality

I almost always assume the opposite. I assume the product was designed in one of two ways:

1. Designed fully-featured and then hamstrung to meet arbitrary energy goals, or

2. Designed with energy goals as the first priority

in either case, other useful qualities (performance, longevity, etc) are second-class.

I think my mindset started when low-flow toilets were introduced; those things could barely flush an empty bowl of water. To me, energy-efficient means crap performance.

I had a Prius for a while and it was clear that a lot of things were sacrificed so they could spend the budget on energy efficiency. Compare the interior or ride quality of a Prius with anything else in its price range and you'll see what I mean.


Early low-flow toilets were really poor. Modern ones clog less than ever. I have a 1.2 gallon-per-flush toilet now (Champion 4) and it's never once clogged even under trying conditions.

It's just a design problem to work around.

(Yes, the Prius interior isn't well finished for its price, but they managed to make it roomy inside and small outside even under aerodynamic constraints.)


Well, I don't assume the opposite; I just don't care about the energy rating that much. The only thing I would care about is heating/AC, because those tend to be on for a very, very long duration.

I bought a C-rated fridge because I've read the testing procedure. It involves quite a bit of stuff, like how long it takes to cool down after the door is opened for 2 minutes, how much power it needs to cool down some volume of water, how long it takes to freeze X amount of whatever beef, etc.

And I really don't care about that. I live with my GF, no kids, no one else; the fridge is opened maybe 5 times a day for no longer than 10 seconds. Dinner is cooked daily with "fresh" from-the-store ingredients, or takeaway/delivery is used. So the fridge is used to cool fruit/soda/water/milk and some toppings for snacks.

The freezer will have some emergency ground beef for the "shit, it's too late to order" nights, pasta, some hot dogs for when you really get lazy, some bread, and vodka.

If I had 3 kids and stuck a half-eaten turkey into the fridge 3 times a week, I would probably care more about how much power it yanks to cool it down. But with the use I have, I don't need to care, because while it's less efficient at cooling, it's just as efficient at doing nothing for most of the day, because the insulation is still the same.

But the majority of the public doesn't do this; almost no one does research before buying something. They might read some reviews on CNET at best, but more likely go to Currys or Fry's or Best Buy and buy something they think is good. And since everyone has been conditioned to see green as good and to think a higher grade is better, when you have a mostly meaningless grading system, people will still buy whatever has a huge grade-A+++ green sticker on it rather than an orange/red B or C.


> Compare the interior or ride quality of a Prius with anything else in its price range and you'll see what I mean.

I'm pretty satisfied with my Prius, personally. My mother's 2010 Chevy Impala may be a little roomier, but not enough for me to care.


The Prius interior is shockingly large; it is an impressive feat. Nonetheless, the quality of materials is far inferior to that of other cars in its price range.


I won't disagree with you there, the plastic in particular feels pretty cheap (the molding around the bottom of the front seats in particular is a gripe of mine). Still, a used Prius is a pretty good deal compared to alternatives (I got a 2006 Prius - top trim - with 80K miles for $10K).


This might be a feature I could not stand on Samsung LCD TVs. It was most evident in space scenes where the star field in the background was panned. It would go dim and bright based on the motion. It was a horrible effect and I had no idea how to turn it off.


>The lab studies found that Samsung’s ‘motion lighting’ feature reduced the TV sets’ brightness – and power consumption – under international electrotechnical commission (IEC) test conditions. These involve the playback of fast sequences of varied material, such as recorded TV shows, DVDs and live broadcasts.


I read that, but how is this different from "real world" usage? Are people just displaying stills or what? I just don't get it.


If the test footage changes rapidly, displaying a sequence of very short clips, the TV might keep the "motion brightness" all the time. There wouldn't be any still images where the TV goes back to full brightness.
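
As a purely illustrative sketch (not Samsung's algorithm), a naive frame-difference "motion lighting" heuristic would behave exactly like that on a fast-cut test clip:

    # Illustrative only - not Samsung's implementation. A naive "motion
    # lighting" rule: dim the backlight while consecutive frames differ a lot.
    # A clip made of rapid cuts keeps the difference high, so the backlight
    # never comes back up; a long static shot would restore full brightness.
    def backlight_level(prev_frame, cur_frame, threshold=0.15,
                        dim=0.6, full=1.0):
        # frames are lists of pixel luma values in [0, 1]
        mean_diff = sum(abs(a - b) for a, b in zip(prev_frame, cur_frame)) / len(cur_frame)
        return dim if mean_diff > threshold else full

    still = [0.5] * 4
    cut = [0.9, 0.1, 0.8, 0.2]
    print(backlight_level(still, still))  # 1.0 - static scene, full brightness
    print(backlight_level(still, cut))    # 0.6 - lots of motion, dimmed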


Possibly they are running their TV in retail store mode, which is much brighter (vibrant) and therefore consumes more electricity. I know my set has this setting; neither my Philips LED/LCD nor my LED/DLP has it as the default setting, but the difference between modes is striking and many might run it anyway. There are warnings on the DLP set about using it.


Is it detecting "Big Buck Bunny"? Would be hilarious.


There's no mention of standby power requirements. There was a big push a few years ago to get devices to cut down their standby power substantially. How's that coming along?


It's not measured at all for TVs or most other consumer devices for the rating, which is dumb. The on-power for TVs means little; these are low-power devices to begin with, and even a G-rated TV will draw maybe 200-250W when on.

The Sony TV I have has a stated power-on consumption of 227 watts (rated A/A+; for big TVs it's very easy, because the bigger the screen and the more features they have, the bigger the reference power consumption they are measured against). When you turn everything on it's actually higher by about 20-30W, but its standby power is pretty much the same, because its tuner is on, its wifi is on, all its smart hub features are on, etc.

The 2nd TV I have is a 42" LG Smart TV. It's also rated A+, but its standby power consumption is nothing: on it's 50W, off it's 0.3W.

So the Sony costs me a relative fortune a year to run, while the LG costs me pretty much the same as if it were sitting in its box in the closet.
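
Back-of-the-envelope, with an assumed 4 hours of viewing a day: using the LG's figures, a sub-watt standby is a rounding error in the yearly total, while a tens-of-watts standby dominates it:

    # Rough sketch: yearly energy split between "on" and "standby" for a TV
    # that's on a few hours a day. Assumes 4 h/day on; 50 W on / 0.3 W standby
    # are the LG figures above, 20 W standby is a hypothetical bad case.
    def yearly_kwh(on_watts, standby_watts, hours_on_per_day=4):
        on_kwh = on_watts * hours_on_per_day * 365 / 1000.0
        standby_kwh = standby_watts * (24 - hours_on_per_day) * 365 / 1000.0
        return on_kwh, standby_kwh

    print(yearly_kwh(50, 0.3))  # (73.0, ~2.2)   - standby is negligible
    print(yearly_kwh(50, 20))   # (73.0, 146.0)  - standby dominates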


There has been an EU directive on that since December 2008 (http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:320...)

It basically limits standby or 'power off' power consumption to 0.50 W.

The EU isn't afraid to fine companies heavily, so chances are most equipment follows this rule. If not, I guess a few companies are sweating now that people are starting to question the compliance of their devices.

I also guess this directive has had implications all over the world, just like the EU's ban on lead solder did. Once you have spent the money to design a low-power solution, economies of scale make it not worthwhile to keep a second design around for use outside the EU.

I think 0.5 W is still too much, though. With 20 plugged-in devices (sounds like a lot, but count them in your house: coffee machine, kettle, blender, audio system, televisions, computer, computer monitor, external hard disk, printer, etc.), that is still a quarter of a kWh a day.

Certainly, home automation systems should aim to go way lower per power outlet.


Ugh, this explains that crap pair of Creative speakers I ordered some time back. The speakers themselves are very nice and sound good, but they have this infuriating habit of powering off if no sound is received for ~5 minutes. (And they don't automatically power back up when there's sound detected)

Now it sounds like this was to comply with this inane EU directive. Aren't there some lower-hanging fruit to pick, like major appliances, before hamstringing comparatively small consumers like consumer electronics?


Such devices need a standby power supply, one that delivers a few mA, to drive the circuitry that wakes things up. Or a power supply that goes to an ultra low power mode when its load drops. Both are available.[1][2] They add to the parts count. The cost penalty is minor for a TV, but noticeable for really cheap devices like speakers and wall wart power supplies.

This is one of those problems that yields to regulatory pressure. With some pushing, a solution like [2] below becomes standard and adds about a dime of parts cost to everything that plugs in, probably paying for itself in the first month of usage. Without such pressure, all the crap power supplies don't have it.

[1] http://www.ti.com/lit/an/slua116/slua116.pdf [2] http://www.onsemi.com/pub_link/Collateral/NCP1015-D.PDF


That directive covers about every electronic device, including major appliances such as washing machines, dishwashers, and electric ovens.

Also, one could argue that this is more important for the smaller items. Most people will have way more of them, and having, say, a 5 W power drain, 24 hours a day, on a €30 toaster that you use for, rounding up, 1 hour a week is relatively a much larger waste, and feels way worse, than having that same drain on a €1000 television that you use, rounding down, for 2 hours every day.


Ah, perhaps this is why I haven't found "vampire power draw" to be an issue. There's the trope that you should unplug all of your adapters when not in use, but I had a power strip full of 2x Apple laptop chargers, and various USB chargers from various manufacturers, all plugged into a Kill-A-Watt... which registers, effectively, 0 watts consumed over multiple days when nothing is plugged into the adapters.


The reason you should still unplug, or at least disconnect, wall warts is that most of them are cheaply manufactured and, more often than not, the CE certifications are faked.

End result, happened to a friend: kitty knocks over a powerstrip, an adapter breaks down internally due to impact, causes fire. Thankfully he had a smoke detector.


This isn't very different from the benchmark cheating shenanigans Samsung has been busted doing previously.

https://duckduckgo.com/?q=samsung+galaxy+benchmarks+cheating

Slightly different medium, same idea: check for the testing criteria, vary the output.
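
The generic anti-pattern, stripped of any particular product (purely illustrative, with made-up names):

    # Purely illustrative sketch of the pattern described above: behave
    # differently when the workload looks like a known test.
    KNOWN_TESTS = {"iec_test_clip", "popular_phone_benchmark"}  # hypothetical

    def pick_power_profile(workload_name):
        if workload_name in KNOWN_TESTS:
            return "low_power_high_score"  # special-cased for the test
        return "normal"                    # what users actually get

    print(pick_power_profile("iec_test_clip"))  # low_power_high_score
    print(pick_power_profile("netflix"))        # normal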


Looks like the European Commission (EC) needs all these product test results to be online and independently verifiable.

I'm sure members of the whole political spectrum can agree that uninhibited full-disclosure would benefit us all (except lazy/ malicious/ stupid manufacturers).


At least some Samsung TVs have an option to show information about power consumption. I don't think it's directly measured, because it doesn't change between light and dark scenes, but it does change when picture settings are changed, and at least it gives some estimate: http://support-us.samsung.com/cyber/popup/iframe/pop_trouble...


Coming up next - washing machines, air conditioners and fridges.


Maybe we’ll find out that washing machines eat on average one sock more during normal operation than during testing?

Maybe we’ll even find out that iPhones don’t even contain an Apple?


Is this news? I've always assumed the tests are very biased. My car says it can get 70mpg extra-urban, yet it gets 54 max; I knew 70 was a lie when I bought it. Same with the TV, light bulbs, fridge and so on. None of it reflects real-world efficiency.


"The only statistics you can trust are those you falsified yourself." - Winston Churchill

Seems to apply also to tests...


It all began a long time ago, when 1KB was equated to 1000 bytes.


Yeah well, "K" means "1000", always has. The metric system got there first! We tried to co-opt "K" for years, but we should admit the error and use Kib when we mean 1024 bits.


Soon: "Software appear less efficient in real life than in tests. Test-driven development considered harmful."


It's already happened. Many here probably remember the ATI "Quake/Quack" debacle where the ATI driver was changing the rendering pipeline based on the binary name: http://www.hardocp.com/article/2001/10/23/optimizing_or_chea...


It's no secret that GPU drivers are tuned differently for different games. That's a feature, not a bug. You can't expect every game to conform to some super-tight code conventions that optimize against the GPU driver, so Nvidia and ATI optimize their drivers for the various games.


I'm not sure if this is what you are referring to, but in the Quake/Quack incident the texture quality and mip levels[*] were different from the user-specified values when running a differently named program. This was producing obvious visual artifacts, and I consider it cheating.

If I specify "16x Anisotropic filtering" and the driver changes that to "4x" or some other number behind the scenes, that's an unacceptable optimization.

The current "GeForce Experience" program that selects optimal settings for the current card is an acceptable optimization, I can see which settings are configured and change them if I like.

[*] I don't remember the specific things they were adjusting.


Looks like someone's uncorked the truth and it's spilling everywhere.


Are we going to have a Volkswagen for TVs now?



