Koomey's law (wikipedia.org)
128 points by shpx on March 31, 2017 | 55 comments



I find the idea of power limits to be fascinating. As computing becomes more distributed, more detached, and more mobile, power, not computation, will be the boundary.

It's not just the CPU... it's the screen and the antenna. A screen needs to produce a certain number of lumens of light to be readable. That provides a theoretical limit. An antenna needs to transmit clean signal over a certain range. That takes power, too. And the size of the device limits the size of the battery. How many joules of energy can we store in that physical space?

We may be coming up on limits for mobile devices much harder and faster than we think. Of course, IoT devices can be better off by not supporting a screen, but communication is still a problem.


Modern devices are actually pretty close (within a factor of 10) to the theoretical energy requirements for producing light.

Computation, on the other hand, is still nowhere near the theoretical limit[1]. We literally use more than 10,000,000x the theoretical minimum energy to erase a bit of data.

[1]: https://en.wikipedia.org/wiki/Landauer%27s_principle
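
For a rough sense of scale, here is a minimal back-of-envelope sketch in Python. The per-bit figure for a modern CPU is an illustrative assumption (real numbers vary widely by process and workload), not a measurement.

    import math

    # Landauer limit: minimum energy to erase one bit at temperature T is k*T*ln(2).
    k_B = 1.380649e-23          # Boltzmann constant, J/K
    T = 300.0                   # room temperature, K

    landauer = k_B * T * math.log(2)
    print(f"Landauer limit at {T:.0f} K: {landauer:.2e} J per bit erased")   # ~2.87e-21 J

    # Assumed, illustrative figure for a modern CPU: on the order of 1e-14 J per
    # bit-level switching event. Only meant to show the rough size of the gap;
    # the parent's 10,000,000x figure is the same order of magnitude.
    assumed_cpu_energy_per_bit = 1e-14
    print(f"Ratio to the Landauer limit: {assumed_cpu_energy_per_bit / landauer:.1e}x")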


Nothing against your specific comment, but in general I wish that the Landauer limit had less cultural importance and salience in discussions of computation & energy. It's a simple result, but the assumptions leading to it are quite subtle. First, it relies on the widely misinterpreted second law of thermodynamics, and second, it relies on the subtle notion of 'erasure'. In loose analogy, it reminds me of quantum mechanics - "what's the definition of a measurement?" "It's anything that causes the system to act like it was measured." "What's the definition of information erasure?" "It's anything that causes the system to obey the Landauer limit." In any case, this limit is so far away it's practically theoretical (though experiments purport to demonstrate it), and since the limit can be circumvented in theory (by avoiding erasure), it doesn't seem like a useful principle to guide us.


That's because our computers continually destroy and re-create bits, instead of reusing them. Google "fredkin toffoli conservative logic reversible computing".

Edit: I am aware that claim sounds hilarious when expressed in this simplified way. Don't fret, it's hard science.
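
To make the "reusing bits" idea concrete, here is a minimal Python sketch of the Toffoli (CCNOT) gate: it is universal for classical logic yet maps 3-bit states one-to-one, so it is its own inverse and never erases information.

    def toffoli(a, b, c):
        """Toffoli (CCNOT) gate: flips c when both a and b are 1.
        It is a bijection on 3-bit states, so no information is destroyed."""
        return a, b, c ^ (a & b)

    # AND embedded reversibly: with c = 0, the third output is a AND b,
    # while the inputs a and b are carried along instead of being discarded.
    for a in (0, 1):
        for b in (0, 1):
            print((a, b), "->", toffoli(a, b, 0)[2])

    # Applying the gate twice returns the original state: it is its own inverse.
    state = (1, 1, 0)
    assert toffoli(*toffoli(*state)) == state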


Taking your thought and running with it:

Yes, energy/cycle is the ultimate limit, and imposes bounds on what we can do with computers. There are other factors, including dissipation of waste heat. Both those concepts are joined in Gene Amdahl's observations. He noted that in parallelisation (though you could extend this to any efficiency improvement), your efficiency bound is imposed by the unaddressable task component. For parallelisation, it's the non-parallel component; for energy efficiency, it's the non-CPU power draws. He also noted that everything ultimately comes down to a matter of plumbing (including heat dissipation).
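
A minimal sketch of that bound, with illustrative numbers: if a fraction p of the task can be improved by a factor of n, the untouched 1 - p caps the overall gain.

    def overall_speedup(p, n):
        """Speedup when a fraction p of the task is improved by a factor n;
        the remaining (1 - p) is the 'unaddressable' component."""
        return 1.0 / ((1.0 - p) + p / n)

    # Even with an infinitely fast improvable part, a 5% fixed component caps
    # the gain at 20x. The same shape applies to power: an ultra-efficient CPU
    # can't compensate for a fixed screen or radio power draw.
    print(overall_speedup(0.95, 10))     # ~6.9x
    print(overall_speedup(0.95, 1e9))    # ~20x, the asymptote 1/(1 - p)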

Screen tech has benefited greatly from the transitions from paper => CRT => LCD (fluorescent) => LCD (LED), and increasingly e-ink devices. Applications and devices which can utilise largely-static screens, where only state transitions require energising (as with e-ink and textual interfaces), have far less power draw than any emissive screen tech.

Other peripherals -- input/output, including keyboards, touch devices, antennas, ports, etc. -- also play a role. Physical keyboards have advantages over screens, though audio input/output might address size constraints (and impose numerous others...). The possibility of passive antennas (see "The Thing", the passive Soviet "Great Seal" bug found in the US ambassador's residence in Moscow in 1952) could offer other options, assuming that a fixed station would have a power budget mobile devices wouldn't.

There's been a lot made of small and adhesive solar PV innovations. I see the real use of that as offering the ability for very small computational devices to power and/or recharge themselves opportunistically when and as ambient light is available. Incorporating both battery storage and PV into designs -- say, an adhesive data-tracking and reporting label which could be attached to shipping packages -- as layers of PV, battery, compute, and antenna, would allow for very cheap monitoring of conditions (location, shock, temperature, humidity) for deliveries, at a cost of pennies, and without extensive engineering considerations.

(The hackability of such devices might ... pose interesting considerations.)

We're approaching a state where, theoretically, any device of more than a few dollars' value could have some integrated computational capacity, whether you like it or not.


With mobile devices, a large majority of power is used by the screen and radio. These two parts are already within maybe 50% of theoretical max efficiency.

The only thing that will revolutionize mobile battery life is a better battery.


or improvements in alternative radio or screen tech like e-ink


Unfortunately, due to software bloat, we perform more than double the number of computations.


I look at things like "how long does it take to enter 100 values into a spreadsheet". Back in DOS days, it was as fast as you could type. With Google Sheets, there is some lag sometimes. Many orders of magnitude more computing power, but how have we improved the actual experience and productivity?


I don't know, I would argue that being able to share a document, work on it concurrently with other users, and access it across multiple devices while seeing changes in real time, with really no effort, counts as a pretty significant experience/productivity improvement.


Oddly enough, I was able to do that whilst using an on-campus Unix account, for pretty much the same reasons ("cloud" computing -- shared access to a central compute facility by the users I was interacting with), in the 1980s, at Uni.


Did that work on your cellphone and for simultaneous edit sessions?


What it worked on was numerous terminals accessible at many locations on campus, which were far more viable for creative generative work than a 3-5" smartphone with a soft keyboard, or even a 9" tablet and Bluetooth keyboard (my primary driver of late).

No, not all of today's features were available, but you're moving the goalposts. The basic concepts existed. I used them. They worked. Often better than equivalents today, though not always.


Hum.... a PC-XT could consume 130 watts (https://en.wikipedia.org/wiki/IBM_Personal_Computer_XT) without considering a CRT monitor (100 watts, http://www.tomshardware.com/reviews/lcd-backlight-led-cfl,26...).

With much less power you can run a notebook today. And I don't know how much energy it takes to host one Google Sheets session, but it seems that a search doesn't consume that much (http://techland.time.com/2011/09/09/6-things-youd-never-gues...). Even the typical user consumes just a bit every month (http://inhabitat.com/infographic-how-much-energy-does-google...)


Yes! It infuriates me how entering text into a field in a browser can now have lag and hiccups.


So why aren't you using that thing in DOSBox?


But that's network lag, not computation inefficiency. Compare either a spreadsheet program running on a mainframe back around 1990 to Google Sheets, or else compare a DOS spreadsheet to a spreadsheet running locally on Windows today.


There are many causes of lag. The parent was referring to the case of "I'm typing a bunch of data into a bunch of cells". Network connectivity should not (necessarily) cause lag in that case: the client side should update without interruption, and then pass data over to the server as it is able.

If network syncing is interfering with user text input, something is wrong with that design.

Edit: The historical analogy would be like if you were entering data (offline) into your Lotus cells, but your computer would keep freezing up so the modem could reconnect. Somehow they kept those decoupled back then!
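
As a rough illustration of that decoupling (all names here are hypothetical, not how Google Sheets actually works): edits apply immediately to a local model, and a background worker flushes them to the server whenever it can, so a slow network never blocks typing.

    import queue
    import threading
    import time

    class LocalFirstSheet:
        """Hypothetical sketch: edits land in a local model immediately;
        syncing to the server happens in the background."""

        def __init__(self):
            self.cells = {}                  # local model the UI reads from
            self.pending = queue.Queue()     # edits waiting to be synced
            threading.Thread(target=self._sync_loop, daemon=True).start()

        def set_cell(self, ref, value):
            # Called from the UI thread: must never block on the network.
            self.cells[ref] = value
            self.pending.put((ref, value))

        def _sync_loop(self):
            # Background worker: drains pending edits to the server as it is able.
            while True:
                ref, value = self.pending.get()
                self._send_to_server(ref, value)   # slow or retried; the UI never waits on it

        def _send_to_server(self, ref, value):
            time.sleep(0.5)                  # stand-in for a slow network round trip

    sheet = LocalFirstSheet()
    for i in range(100):
        sheet.set_cell(f"A{i + 1}", i)       # returns instantly regardless of network speed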


>But that's network lag, not computation inefficiency.

It's also huge computation inefficiency, because suddenly, just to enter text, we involve network calls (hence the network lag), drawing in the context of a browser, the DOM, a JS engine, and other crap.


Would still be an interesting comparison, with some Electron apps using 16% CPU to blink the cursor.


Is that true? Google Sheets should work even without an internet connection. I didn't think the syncing would cause a noticeable slowdown.


I get multi-minute lag on interactions and text input all the fucking time on Android with either Firefox or Chrome.

I can generally avoid this by dropping to a dedicated editor app (by preference, now that I've got Termux, Termux-API, and clipboard access: vim from a bash shell).

But one thing I can assure you this isn't is network lag.


You should see the horrible lag in the latest Excel on OS X


But why does the network need to be involved at all?


The Jevons Paradox, and quite probably a bit of Gresham's Law as well.

Jevons says that as the cost of a thing falls, the induced demand rises. This means, computationally, both that we see computers on devices that didn't previously have them (washing machines, dishwashers, toasters, light bulbs), and that the devices on which we have computers and software, most particularly personal computers (and take your pick: Windows, Mac, Linux, Android, whatever), stuff additional features in "because they're cool" (Gresham's Law), consuming the additional cycles.

Given various pathological performance failures (swapping, iowait, network lag, etc.), the performance issues can actually get markedly worse.

Using old software on new hardware is often a dream. That's among the reasons I'm so fond of the WindowMaker desktop -- 1989 software (at heart) on 2017 hardware.


Yeah, I was about to say, useful work performed per computation probably decays about as fast.


I wonder how much speculative execution in modern processors skews the numbers. (Or if you could actually consider those useful computations, since they optimize the overall time performance of the program.)


I found this linked topic even more interesting: https://en.wikipedia.org/wiki/Landauer%27s_principle


Indeed, the idea of information (bits) as entropy is fascinating to me too. And not just Shannon entropy, they're all related somehow. See https://en.wikipedia.org/wiki/Bekenstein_bound

And then there is the fact that life encodes and persists information. There was a recent HN thread that got me interested:

https://news.ycombinator.com/item?id=13496133

I did read Gleick's "The Information" but was disappointed it didn't dig very deep into the concept. I got further following links on Wikipedia.


In that case, you may also enjoy Bennett's 1982 paper on the thermodynamics of computation: www.dna.caltech.edu/courses/cs191/paperscs191/IBMJTheorPhys(21)905.pdf

That said, there are a lot of subtle assumptions wrapped up in the Landauer limit (related to the 2nd law of thermodynamics and the notion of 'erasure') and my personal opinion is that the limit is easily overrated before those subtleties are appreciated.


There's a whole slew of interesting tangents off that article along the lines of computational efficiency, yes. Recommended reading.


The part at the end about the Margolus-Levitin theorem makes no sense to me -- that theorem gives a speed limit, while Koomey's trend is about energy use. Can anyone fill in the citation-needed?


This is incredible. I followed this link down to the Margolus-Levitin theorem, which puts a physical limit on how much computation (entropy?) you can produce per second with a single joule of energy (about 6 x 10^33 operations per second per joule).

It seems that at some point the unit for powering computers would then have to be matter, since there's such an incredible amount of energy stored per newton.
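
For a rough check of where that figure comes from (the mass-energy half is my own illustrative addition): the Margolus-Levitin theorem says a system with average energy E needs at least h/(4E) seconds per state transition, so the maximum rate works out to 4/h operations per second per joule.

    # Margolus-Levitin: minimum time per operation is h / (4E), so the maximum
    # rate is 4E/h operations per second, i.e. about 6e33 ops/s for E = 1 joule.
    h = 6.62607015e-34   # Planck constant, J*s
    print(f"{4.0 / h:.2e} ops/s per joule")                 # ~6.04e33

    # Illustrative addition: via E = m c^2, one gram of matter is ~9e13 J,
    # so the corresponding computational budget is enormous.
    c = 2.99792458e8     # speed of light, m/s
    print(f"{(4.0 / h) * 1e-3 * c**2:.2e} ops/s from one gram of mass-energy")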


>It seems that at some point the unit for powering computers would then have to be matter, since there's such an incredible amount of energy stored per newton.

I'm not sure what you're trying to say but that sentence doesn't make sense.


Has this law still held over the past few years, given that Intel has been unable to transition to 10nm?


A version, perhaps, at least for GPUs - you can tell because we've gotten much faster new chips like the Pascal GPUs, but no one has needed to run new power lines to their rooms in order to use them.


Pascal was indeed a big jump in efficiency, but that by itself doesn't really reflect the longer term trend. Prior to Pascal efficiency had stalled for a couple of years as GPUs were stuck on a 28nm process for much longer than previous node sizes. This pattern is likely to continue as advancements in fabrication continue to slow down and architectural gains become smaller.


This is my guess. I think that we will soon start to hit limits on single-threaded performance, so eventually we will just keep adding support for more threads to increase performance.


That's only because it's cheaper to build a whole new facility


Uh... people aren't 'building whole new facilities' for their gaming desktops. The new GPUs also have the same rated power consumption.


It brings up reversible computing as a way of, in principle, not requiring any energy.

What would a reversible computer look like? Could I run Firefox on it?


It ends in 2048 (maybe before) -- why call it a "law"? Probably it's easier to remember if you give it a name, I guess.


In a lot of scientific and mathematical fields, "law" means "a mathematical relationship between two or more things that seems to hold under common circumstances". Take Newton's Laws. They're not true, as Relativity has replaced them, but they approximate certain regimes of reality, so they remain useful as shortcuts.


The article ends by saying "By the theorem, Koomey's law has the potential to be valid for about 125 years". One imagines that it could continue long after even that, however, if further ways of computing are discovered.


Reversible computation (which that 125 year number refers to) has to be done very slowly. There might be some applications where that would still be relevant, but it's pretty clear this trend won't last even until the 2048 mark, let alone for another 125 years and beyond.

The Feynman lectures on computation have some good sections on reversible computation that I would recommend.


Yet my phone still won't hold a charge for more than three hours.


Pi/2 years; I'm trying to make sense of this fact.


It's a coincidence.


Of course, it must be. My theories explaining this were mind-boggling, at least.


It's because π seconds is a nanocentury.
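
A quick check of both numbers: the doubling period is about 1.57 years, close to pi/2, and a century is about 3.16 x 10^9 seconds, so a nanocentury is about 3.16 seconds, within about half a percent of pi.

    import math

    # Koomey's doubling period is about 1.57 years, close to pi/2 (~1.5708).
    print(math.pi / 2)

    # "pi seconds is a nanocentury": a century is ~3.156e9 seconds,
    # so a nanocentury is ~3.156 seconds vs. pi ~3.1416.
    print(100 * 365.25 * 24 * 3600 * 1e-9)
    print(math.pi)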


I think it's not a fair game if we are going to find a new law every time progress stalls in our industry.


It looks like Koomey's law will outlive the year 2038 problem[0].

0: https://en.wikipedia.org/wiki/Year_2038_problem


What do you mean by this comment? I don't see how the two are linked.


They aren't directly linked. I just like the idea of cataloging all of the time-bounded limits in CS. Reading the wiki entry and having forgotten the exact date when Unix time comes to an end (although knowing it is close), I figured others might enjoy seeing the 2038 date too. Based upon the voting, I was wrong.


Good news for robots.



