How much load can be served from 1m² of sunlight (medium.com/timothy_downs)
186 points by forkfork on Nov 3, 2021 | 138 comments



> With an idle load, this particular laptop draws 14W of power with the screen turned off.

That's an extremely high idle power usage. I also have a laptop with a Ryzen 9 4900HS (the ROG Zephyrus G14, the only laptop with this chip, to my knowledge), and it idles around 9-11W with the screen on. Most of that is actually because of the RTX 2060 that's bundled with it, which won't turn off in Linux because Nvidia doesn't give a shit. I also suspect the author doesn't have a lot of power-saving tunables enabled.

By comparison, another laptop I have with a 4700U (also eight cores) and no discrete GPU idles at 2-3W.

EDIT: If this author is reading this, this [0] is a good page to start from, along with powertop. I'd install and enable TLP, disable boost for efficiency, enable the tunables suggested by powertop, and maybe try nouveau for putting the GPU in the lowest power state.

[0] https://wiki.archlinux.org/title/Power_management
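For a rough idea of what those tools end up doing, here's a sketch of the kind of sysfs knobs powertop/TLP toggle (run as root; the exact paths are assumptions for a typical acpi-cpufreq AMD laptop and vary by kernel, driver, and hardware, which is why the script skips anything that isn't there):

    #!/usr/bin/env python3
    """Sketch of a few common power-saving tunables (run as root).

    Paths are typical of an AMD laptop using acpi-cpufreq; they vary by
    kernel and driver, so treat this as illustrative, not definitive.
    """
    from pathlib import Path

    def write_if_present(path: str, value: str) -> None:
        p = Path(path)
        if p.exists():
            p.write_text(value)
            print(f"set  {path} = {value}")
        else:
            print(f"skip {path} (not present)")

    # Disable CPU boost (acpi-cpufreq exposes one global knob).
    write_if_present("/sys/devices/system/cpu/cpufreq/boost", "0")

    # Let the HDA audio codec power down when idle.
    write_if_present("/sys/module/snd_hda_intel/parameters/power_save", "1")

    # Allow SATA links to enter low-power states.
    for host in Path("/sys/class/scsi_host").glob("host*"):
        write_if_present(str(host / "link_power_management_policy"),
                         "med_power_with_dipm")

    # Enable runtime power management for PCI devices.
    for dev in Path("/sys/bus/pci/devices").glob("*"):
        write_if_present(str(dev / "power" / "control"), "auto")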


Author here, you are correct, not tuned.

Will try your suggestions! Much appreciated.


Meanwhile my basement R710 idles at like 180W :)

(Obviously these two things aren't similar enough to compare directly, but it's fun to see the general progress/trend of power usage over time. The Cray-1 needed what? 100kW?)


You can use nvidia-prime to run on intel only, right?


> run on intel only

How does that work on a Ryzen laptop?


The English doesn't quite work, but "Intel" here refers to the integrated, power-thrifty GPU, so it could be more pedantically stated as "run on the integrated graphics chip and turn off the Nvidia discrete GPU (and save power)" instead.


I think you may have missed the point. It's Ryzen, meaning AMD, not Intel.


fragmede is saying that when aero-glide2 said "Intel" they were actually referring to the laptop's integrated GPU in general. Of course there's no integrated Intel GPU in the Ryzen 9 4900HS, but there is an integrated GPU (a Vega 8).


Check out https://gitlab.com/asus-linux/asusctl. It provides a CLI mechanism for switching between igpu-only, dgpu-only, and hybrid (the Nvidia card sleeps unless called via PRIME). If you use version 3.x, everything is built into asusctl. With 4.x, they've extracted that functionality to supergfxd/supergfxctl (in the same project).


I'm currently using hybrid with PRIME offload and runtime D3 enabled, but no matter what, the runtime D3 status is indicated as "Not Supported".


I'm getting similar results with my G14 (4800HS) running Windows on battery power and silent mode, which is interesting, because I originally gave up trying to have it run Linux because the dGPU would just spin like mad.


For computing-efficiency comparison purposes: a very old ThinkPad X40 laptop with a single-core CPU, 1.5GB RAM, and the screen off idles at about 12.5W, at a Unix load of 0.02.


Adding the obligatory "what if M1" comment.

Anandtech tested the power draw of an M1 Mac Mini and found 4.2W at idle and 26.5W for the average multithreaded workload. That's a third of the laptop's idle power, and the same total draw running multithreaded benchmarks as the laptop uses to serve a single client. Would be interesting to compare.

https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...


I was recently traveling with a M1 MacBook, and had a 20W solar panel + a 24Ah power bank for charging. Worked like a treat for coding.


Can you elaborate a little more on your setup? That sounds interesting.


Basically this: https://voltaicsystems.com/arc20w-kit/. It's a bit on the expensive side, I'm sure you can find cheaper kits. I like that the battery has a USB-C port, so I can charge the MacBook directly. To be truly off-grid, you need something bigger though. 20W is ok for basic work, but if you're compiling or doing other heavy work you need more power. If it's a bit cloudy then the battery won't fully charge with that panel, I'd recommend at least 50-100W. But with that size you're also less mobile. I had a car so mobility wasn't that important for me. The only important thing for me is that I can take the gear onto an airplane when traveling.

I need to investigate if USB-C car charger adapters can charge a MacBook. When traveling with a car and the sun doesn't shine, it would be a nice backup solution. I was basically living in a car + tent for four weeks, traveling and working.


Nekteck sells a car charger with 45W USB-C PD. I've seen others claiming 65W.

https://www.nytimes.com/wirecutter/reviews/best-usb-car-char...



I do similar with gear from https://GoalZero.com, typically using a car-portable 100W panel + 100Wh battery with USB-C support when seriously traveling, and a briefcase-friendly 20W folding panel + same battery when mobile. Wait for specials at https://REI.com to get good prices.



For the load expected for pulling data from a simple redis cache, is an M1 actually the most efficient chip? Isn't the point of the M1 that it supports a bunch of complex workflows while remaining efficient? Aren't there even more power efficient chips out there that focus exclusively on simple integer operations, etc.?


> For the load expected for pulling data from a simple redis cache, is an M1 actually the most efficient chip?

I think that's a good question with no trivial answer. There are certainly boards which consume significantly less energy than that and can still serve traffic (using nginx and static content you can serve quite a lot on a watt or two, per https://solar.lowtechmagazine.com/2020/01/how-sustainable-is...), however if you factor in the need for actual CPU...


Mobile phone SoCs would likely be the way to go for the best processing per watt.

The problem really is one of "how much is enough" more than anything else. Assuming someone wanted to really optimize something like this, a specially built ARM CPU with lots of cores at a low frequency would likely provide the most ability to act as a web server on a small power budget.

Such SoCs, AFAIK, don't really exist. You don't need a particularly fast CPU for web service stuff. You certainly don't need all the mobile extras (AI chips, GPUs, etc). What you need more than anything is core count.


If you wanted to go full data slapstick you'd set up an elaborate protocol on the ISM band between a cluster of Nordic nrf52 or similar where each SoC serves exactly one page from its flash memory. Want to update the about page? Flash it on a spare board, connect it to the battery and disconnect the old version.

It will have awful latency, likely suffer hard from the shared medium if ramped to nontrivial loads, but it will hardly use any power at all.

And it will drive people nuts who know even the tiniest bit about computers, but those who don't will understand it just fine: "I disconnected that new about page and plugged the old version back in"


Sure. "STM32L1 MCUs also feature the industry's lowest power consumption of 170 nA in low-power mode with SRAM retention. ". But I think we're looking for the 'most efficient' general purpose computer.

https://www.st.com/en/microcontrollers-microprocessors/stm32...


Intel also made the Quark, which is a 486 that runs on a button cell battery, the same type that we'd call a CMOS battery.


Rather than inverting 12V DC to 240V AC and back, you could skip a step and use a laptop car charger, converting 12V DC straight to ~20V DC.


You could also use a desktop with one of these ATX power supplies that runs on 12VDC instead of 120/240VAC:

https://www.cartft.com/catalog/il/2302


Wow, I was not expecting that price! Is this just a rare item or is there something particularly expensive inside it?


Definitely are cheaper options too: https://www.mini-box.com/picoPSU-160-XT


It's not a mass-market item so they're expensive to begin with, but I'd bet its current price is a function of the chip shortage we've been hearing about.


How much labor does it save?

How much troubleshooting does it eliminate?

How much experience does it make unnecessary?

Or to put it another way: at the point where a 1500W 12V-input computer power supply makes sense, the price is not unreasonable.


Not at all. At that absurd price it's cheaper to buy more solar panels and ignore the inefficiency.


cheaper

Price sensitive shoppers are not in the product's market segment.


Looking at the size of this thing, I sort of doubt it's much more efficient than doing a DC->AC->DC conversion. In fact, I'll betcha that's exactly what's happening inside this beast to both get the right voltages and stabilize them.


It says 72% efficiency, so over 400W of heat at full load. An inverter feeding a normal PSU would be about the same, if not worse. It could be that it raises the voltage then drops it back down to 12V, but I'd sooner think it's just cheaper MOSFETs.

Anyway, these kinds of converters can usually regulate themselves.


That's true. If both conversions are 90% efficient, we are talking 81% efficiency overall, wasting nearly 1/5 of the energy used. That's not good at all.


Don't DC-DC transformers use AC internally?


Yes but it's kHz-range PWM square-wave AC; there's no 60Hz sinewave involved. Converting to/from a 60Hz sinewave requires extra circuitry that decreases efficiency. And in this application a 60Hz AC sinewave serves no useful purpose.


If you want electrical isolation between the input and the output, they use AC in the middle to achieve that, but if you don't want isolation, you could use a boost DC converter: https://en.wikipedia.org/wiki/Boost_converter
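For a rough sense of the numbers, the ideal (lossless, continuous-conduction) boost relation is Vout = Vin / (1 - D), so stepping a nominal 12V battery up to a ~20V laptop input needs a duty cycle of roughly 0.4. A quick sketch, using the 10.5-14.6V lead-acid swing mentioned elsewhere in the thread as illustrative inputs:

    # Ideal boost converter: Vout = Vin / (1 - D)  =>  D = 1 - Vin / Vout
    def boost_duty_cycle(v_in: float, v_out: float) -> float:
        return 1.0 - v_in / v_out

    for v_batt in (10.5, 12.0, 14.6):        # typical lead-acid swing
        d = boost_duty_cycle(v_batt, 20.0)   # ~20V laptop DC input
        print(f"{v_batt:>4}V in -> 20V out: duty cycle ~ {d:.2f}")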


Which uses an alternating voltage across the inductor, but the current is always (with load) positive. Neat!


In power electronics terms, no. AC is defined as waveforms that have an average value of zero, which you won't find in a DC-DC converter.

That wasn't the point of OP though. Their point was that you could remove some inefficient steps to improve the overall efficiency and energy capture of the system.


The point is to remove the laptop power brick. Let's say the laptop charger is 90% efficient, which is fairly typical. The post claims their 12VDC to 240VAC inverter is 92% efficient. A 12VDC to 20VDC voltage booster would only have to be >83% efficient to beat that setup.
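Spelling out that arithmetic (the 92% inverter figure is from the post; the 90% brick efficiency is a typical guess, not a measurement):

    inverter_eff = 0.92   # 12V DC -> 240V AC inverter, as claimed in the post
    charger_eff = 0.90    # typical laptop power brick, 240V AC -> ~20V DC

    chain_eff = inverter_eff * charger_eff
    print(f"DC -> AC -> DC chain efficiency: {chain_eff:.1%}")   # ~82.8%
    # A single 12V -> ~20V boost stage wins if it beats that figure.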


Which isn’t that hard for a switch mode converter. Those are usually 85% or more IIRC.


Not necessarily. See voltage doubler:

https://en.m.wikipedia.org/wiki/Voltage_doubler


I did this for a year on my previous MacBook and it destroyed the battery. The inverter creates cleaner power. Good luck finding a quality one that can handle the voltage swings from 14.6V down to ~10.5V that the battery is going to spit out with the MPPT attached.


Maintaining a regulated voltage with input voltage changes is a primary feature of a DC-DC converter, like a boost converter. With a buck-boost converter, you can maintain a regulated voltage when the input voltage goes above or below the output voltage.


Wait, you tried to do it without a DC/DC converter to regulate it? Sounds like a bad idea.

A good off-the-shelf isolated DC/DC converter that would clean it all up and give you a nice, consistent, filtered DC voltage should be able to be had for $100 or a bit less.


I wonder how much the efficiency loss from a laptop car charger would compare to just using something like a NUC -- IIRC some of their models have really wide input voltage ranges (12v-20v or something like that, depends on the model). The laptop is really just bringing a battery, compared to a NUC, for this application, and they are going panel->regulator->battery->converter anyway so the laptop battery seems redundant.


The Odroid H1/H2 IIRC can run from something like 10-22VDC; the issue is finding a solar panel with an open circuit voltage under 22V. I found a buck-boost that works with my old laptop and a "car charger" with my admittedly oddball solar panels, so this is possible, but it takes time shopping around and reading specifications.


The design in the article (which does seem to have some redundant parts, so...) instead goes:

Panel -> regulator -> battery -> inverter -> laptop

So, the voltage of the panel shouldn't really matter too much, I think (I mean, you size the regulator input range as appropriate). OTOH, solar panels are a little magical from my point of view, so maybe that regulator ought to be replaced by some solar-panel-specific thing, which might be more constricting.


It isn't a regulator, it's a charger - depending on the design, you can get even higher voltage spikes. 12V batteries charge at 13-14V, but most chargers designed for lead acid can get away with really, really noisy voltage transients because of the way lead acid works. It's a pretty insensitive chemistry and normally dampens them.

Some chargers with ‘equalize’ or even worse ‘desulphation’ can intentionally go even higher in voltage than normal charging voltages.

So basically ‘if you just assume it wouldn’t kill your laptop to directly connect it, you’re playing Russian roulette with your laptop’.

With a decent spec sheet (and oscilloscope) to verify nothing too crazy that the charger is doing, some decent power filtering capacitors, and good DC-DC power supply you’d be fine though.


That's not a way to get good power out of the PV panel, though. An MPPT tracker (https://en.m.wikipedia.org/wiki/Maximum_power_point_tracking) is needed to get the best results, and the room for improvement is indeed substantial. There are good and cheap chinese ones available, such as Epever brand.


A very cheap, bad-practice, but effective solution is to run the inverter to AC and back to DC again with the ATX power supply, but connect the 12V from the battery directly to the motherboard's 12V rail.

The motherboard draws most of its power from the 12V rail, and thanks to the galvanic isolation in the ATX power supply, it should be safe to do.

The 12VDC from the battery may not be in spec for the motherboard, but typically it'll work anyway.


It's also worth noting that many "12 volt" DC batteries do not store and output at exactly 12 volts.

Depending on the tolerances of your board, you might want to throw a buck converter into the circuit with the output voltage set to exactly 12 volts, to account for any potential over-voltage coming from the battery/panel setup.

They are pretty cheap if you want to DIY (7 for $10 on Amazon). Worth the peace of mind, imo.

I imagine this is what most car-laptop chargers are, but with branded packaging.


The efficiency savings with this approach will outweigh any other effort.


I don't think that's true. There are plenty of low-power setups that consume less than 2 watts, for an instant gain of 6/7ths, or about 85%. DC-to-AC-to-DC conversion is less than ideal, but even at a low estimate of only 90% efficiency at each step, that'd still be only a 1.0-(0.9^3) = ~27% loss.
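Putting rough numbers on both effects (the 14W idle figure is from the article; the 2W setup and the 90% per-stage efficiency are assumptions):

    HOURS = 24
    idle_laptop_w = 14.0        # article's measured idle draw
    idle_sbc_w = 2.0            # an assumed low-power single-board setup
    stage_eff = 0.90            # assumed per conversion stage
    chain_eff = stage_eff ** 3  # charge controller -> inverter -> laptop brick

    print(f"conversion chain loss: {1 - chain_eff:.0%}")   # ~27%
    print(f"laptop idle, incl. losses: {idle_laptop_w / chain_eff * HOURS:.0f} Wh/day")
    print(f"2W setup,    incl. losses: {idle_sbc_w / chain_eff * HOURS:.0f} Wh/day")

Dropping the idle draw saves far more energy per day than cleaning up the conversion chain ever could.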


I don't think % efficiency tells the whole story, because those numbers are measured under load. A 12V->240V inverter probably burns a lot of power just sitting idle, so the efficiency curve starts at 0%. You really need to know (or measure) that curve's overall shape.


Or use a 20V MPPT with a higher-voltage panel (or two smaller panels in series) and skip the car charger.


Wouldn't the battery voltage be the controlling factor here? You could probably go with two 12V batteries in series? I assume a 20V laptop charger is fine with that.


You really can't assume much of anything here, frankly. Voltages here are nominal, and open circuit voltages can be quite high, or can drop quite a bit under load. An MPPT charger will attempt to maintain an output voltage within its target range, but voltage transients and ripple can be a problem - they aren't designed as a live power supply, they are designed to charge huge energy sinks with specific chemistries (aka batteries), so they usually cut corners that would matter in this case but don't for batteries.


Better: a DC-DC converter that is designed for solar panels.


I run my static blog on a Raspberry Pi 3B+ powered by solar [0] and it doesn't even flinch when it is hit by Hacker News.

It idles at around 3.1 watts, and that power usage includes a step-down converter from 12 volts to 5 volts.

A bunch of lead acid batteries of various capacities provide backup. Lead acid is a terrible choice because charge times are long, but it is sufficient for now.

Just see it as performance art.

[0]: https://louwrentius.com/this-blog-is-now-running-on-solar-po...


I did this 20 years ago and ran the server for about 10 years. I down-scaled the server hardware when I transitioned to solar, but soon discovered that the low-power server just could not keep up with the demand. I upgraded to something in the middle that could handle a decent number of web users while still not using all the solar. I had 1280W of panels (16x80), a 1500W inverter, and three deep-cycle 12V 50AH batteries.

The remnants of the system can still be seen here: http://jsl.com/solar


What was the demand? 1200W seems like a lot; https://solar.lowtechmagazine.com runs on a 50W solar panel with a 168Wh lead-acid battery (non-deep-cycle, so effectively 84Wh).

However it gets to have fully static content, and benefits from relatively modern hardware: for under 2W it runs on a dual-core 1GHz ARM with 1GB RAM (and a 16GB SD card but it doesn't use anywhere near 16GB).


Keep in mind, 1200W would most likely be the maximum peak power consumption (as you need to size for that, depending on how many seconds the chosen inverter can sustain above peak). A good-sized gaming rig would require that kind of peak consumption, even if 99% of the time its consumption may be close to zero.


1200W is pretty insane even for a gaming rig unless it has dual GPUs in SLI. Any normal rig will be fine with maybe a 750W PSU, and even at full benchmark power it would only draw 600W of that.

My 5800X and 3070 draw ~540W when benchmarking; playing any game draws ~350W, so that's the max I would realistically ever use at a constant rate.

Oh, and "idle" is a disgusting 200W draw. Desktops are crazy.


The normal CPU draw was on the order of 500W. Given that you only get full power from the panels for ~8 hours per day, the rest went into the batteries. I didn't actually run on batteries unless the mains was down. It was built as a "solar UPS" so the batteries were always kept charged when possible.

There is a long story behind this project. The gist is that my power was crap because the last leg was a 40 year old 4KV transformer and the power demands of the neighborhood outpaced capacity. When I first reported this to the power company, they thought I was a loon. None of the neighbors were complaining so I must be some kind of nut job (with a strong math/electronics background and several years experience working in power systems, before switching to satellites). Anyway, after a few months of nagging them, I got their attention and they assigned me a power quality engineer. He came out with a RVM (recording volt meter), but he (deliberately?) misconfigured it so it filled up its recording buffer within a few minutes instead of the one week period upon which he would return and collect it. So of course he found nothing wrong and I must be crazy. I needed reliable power so I built the solar UPS. After building the monitor and control system for it, I had a few spare data acquisition channels so I scaled the AC mains on one of them and made a web page for it.

When all of that was working, I emailed a hyperlink of the AC mains history chart to the power engineer who had ignored me. Within just a few hours I got a call from his boss, who offered to use my house as a load regulation point for their sorry ass power grid. (This was in an affluent suburb of LA so no excuse.) Apparently they did not like the fact that (even though I never published the link anywhere else) I was advertising to the world that the power grid was fluctuating by more than +/- 10% when the NEC says it SHALL not fluctuate by more than +/- 5%. Service was pretty good after that.

A few years later the power company replaced the final leg with a 16KV transformer and service has been excellent since then. Thus, no more need for the solar UPS system, which was taken down about seven years ago to get the roof re-done.


That's a great story and it does flesh out the original requirements, thanks for writing it up.



Probably hugged a bit, it works for me.


Link is dead. So how much did the upgraded server draw?


See above.


The title is a bit misleading, as it's specific to the OP's setup (which is quite inefficient).

Would have liked to see numbers on how much power is actually being output by the panel, and a shunt on the battery to see accurate consumption.


You won't get 200W from a 200W panel in direct sunlight, but it will still generate some watts at other times, so it should work out.

"A 4.6b year old yellow dwarf as a light source" loved that


I thought that was factored in when he talked about 3 full-hours of 200W across the full day? The day in Australia is certainly longer than 3 hours, so I figured that accounted for panel inefficiencies, indirect sunlight, and other such things. But could be wrong. Not sure how the 3 hours was calculated, but I am sure some solar calculators will take your location, average annual sunlight, efficiency and spit out an "effective daily solar hours" value that can be multiplied by the indicated solar panel wattage.


3 hours comes from averaging the daily power generation my Victron MPPT reports. I'm using a 200 watt panel and it has reported an average of 600Wh per day since I set it up a month ago.


Even better than a calculator - it's your actual experience ;)


In power systems there is a number called the capacity factor that relates the average actual output to the nameplate capacity over some time interval. In this article the author notes that they are expecting 600Wh in a day, or an average of about 3 hours at nameplate capacity. That works out to a capacity factor of about 0.13, which sounds about right for a solar installation that wasn't purpose-built in an area of high insolation.
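Spelling that out with the numbers from the thread:

    nameplate_w = 200        # panel nameplate rating
    daily_yield_wh = 600     # reported average daily yield

    capacity_factor = daily_yield_wh / (nameplate_w * 24)   # ~0.125
    full_sun_hours = daily_yield_wh / nameplate_w           # 3.0 equivalent hours
    print(f"capacity factor: {capacity_factor:.3f}, "
          f"full-sun hours: {full_sun_hours:.1f} h/day")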


Between experience and math, my rule is: on average, buffered with a battery, you'll get 10 watts out of a 100 watt panel (i.e.: 10% efficiency, all common conditions considered).


Did you mean "except in direct sunlight"?


The nominal rating is a peak rating, in my experience the sustained output will be somewhat less, even in direct sunlight.


Nod, unless you are in a place on the planet with perfect insolation (which would still only happen a few weeks of the year), it will nearly always be below the nameplate rating. The nameplate rating/testing, if ever done, is done in a test bed with artificial light and a perfect angle.

Edit: I do remember once getting above-nameplate actual power output from a panel - using MPPT, at high altitude, at the maximum insolation time of the year (mid-California in the mountains, right around the summer solstice). It only lasted for 20 minutes though.


The armchairs here are all great. If they used a raspberry pi and rewrote their javascript in ARM assembly and reinvented a more power-efficient compression algorithm and solved your favourite problem of serving static pages instead of the problem they actually want to solve, they could maybe get more instructions per watt.

But really, I'm glad that they're actually doing it instead of talking about doing it and spending all of their time musing about _what if_ they did this other thing instead. This looks great.


I was confused about the "doing it" part. I see that they have all the materials and got some numbers like average power consumption, but they didn't seem to actually set it up and see how it worked?


Author here - I'm learning as I go. I thought of talking about the actual sunlight starting at closer to 1300W and losing efficiency at each step, but I don't yet have the right tools to do that properly.

I’d def love to hear your suggestions though as I continue to iterate this one.


I don't have experience but wishing you best of luck!


To see another attempt at serving a webpage off of solar power, take a look at https://solar.lowtechmagazine.com/about.html -- it has techniques I find pretty unique for conserving power. All the images are dithered to reduce page size, among other things.


Obviously there is infinite room for optimization of this problem, but this was a fun blog post. I’m interested to see where the author goes with the series.


It is an interesting concept for a blog post, but ruined by a very inefficient implementation.

I would expect orders of magnitude more work done on ~20W.

Also, parsing a compressed 10MB JSON seems like an unusual request. It would maybe be more fun to put up a Hello World or a Pet Store and get some numbers that are more relatable to a regular developer.


I mean, everyone's gotta start somewhere. I'll give the author props for coming up with an idea for an experiment, documenting it, and sharing their results. Not a whole lot of people do that out in the open, and it's great to see a bunch of responses with suggestions for further improvements. :)


You must have a very low level of expectations of your fellow HN reader if you think that anything above a Hello World tutorial is relatable to a regular developer.


How frequently do you test a new web framework with "10MB compressed JSONs"?

On the other hand you can find a lot of benchmarks that use basically Hello World just to test your request response or some rather small request/response sizes, because this is what most applications actually do. You can add a simple database query to it for more realistic load.

So, yes, this is more relatable to me as a backend developer because I can compare results more easily.


I see the issue with your premise. I don't test web frameworks. I do real work <ducks>

I very much routinely look at large data sets. They just happen to be wrapped up in a different container than ZIP. Typically, they are delivered in MOV, MP4, WAV, etc. I look at a 10MB file and try to remember the last time I counted that small.


But you do run a minimal, HTTP-free baseline benchmark for comparison purposes, to get better insight into everything layered above it, no?


The test is for a web server, so you're optimizing for throughput on limited power; a static site would seem to be the best test.

If you want to test CPU load, the author should have tried Prime95 or some other similar test.


Years ago someone made a potato-powered web server. It was an 8-bit microcontroller with a custom TCP/IP stack, if I remember correctly.


Intentionally inefficient. I’ve seen a number of my customers doing this pattern, working with ~10 meg JSON for the model layer.

Inefficient but not unrealistic outside of FAANG.


I regularly work outside during summer, powering a MacBook Pro exclusively by sunlight (a panel of about 1m^2). For a few hours I can get a good 50+ watts, buffering through a 100Wh battery. Keeping the load around 10-15 watts isn't hard, so long as I pay attention.

Of note, I persuaded Atlassian to remove the (rather nice) animated clouds from their "you have been logged out" web page because it pulled 30 watts (even when the web page was hidden). Running on solar/battery exclusively, that was a problem; kudos to them for acting on it.


50W seems rather low for a 1m^2 panel. I have some older 250W panels (1.6m^2) and get around 220W in the summer.

Also, how are you measuring a 10-15W load? When plugged in, the charger is going to pull its rated load from your battery bank (I'm seeing ~84W including inverter losses).


System peaks around 57W, with battery charge rate limits and frequent environmental disruptions (clouds, angles, etc).

Battery (Goal Zero Sherpa 100AC) shows output load. Lowest draw on MacBook Pro 15" is 7W, usually idling 10-15W, pulls about 50W when charging.


The point of the piece is at the bottom:

>By increasing our utilization rate, we have increased power efficiency by a factor of 6.

>Economies of scale are more important than intuition would suggest for efficiently serving requests


Cool idea, though the numbers in this blog post are very much a worst-case scenario, short of running the entire process on a 1080 GPU or something.


Of course, you can juice this a bit by doing three things: 1) using concentrating photovoltaics with multijunction solar cells to achieve ~30-40% efficiency (this requires active cooling); 2) using two-axis tracking; 3) putting solar cells on the back of your array (bifacial solar arrays do this in an integrated manner but aren't concentrating).

These three together give you on the order of a factor of 3 greater total energy. (Note: concentrating buys you about 20% greater raw efficiency BUT means capturing diffuse light basically doesn't happen. You might be better off with a non-concentrating bifacial multijunction panel that is still two-axis tracked. Concentrators can still help financially because multijunction solar panels are EXPENSIVE.)

Also, if your laptop already has a beefy battery, you don’t need a separate battery. You’ll need a custom MPPT with the right output voltage, but you could hook it straight into your laptop. That saves money (potentially) and a lot of inefficiency.


This is an absurdly bad result, mostly caused by the questionable 14W base power draw, but also due to the questionable choice of gzip compression, which is not exactly on the frontier of compression technology. Even zlib-deflate would decompress twice as fast at the same size ratios, but something like lz4 or snappy would cost an order of magnitude less CPU time for a similar compressed size.
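A quick way to sanity-check that on your own payload, using only the Python standard library for gzip and zlib-deflate (the lz4 part assumes the third-party lz4 package is installed, and the payload here is just a stand-in for the article's JSON blob):

    import gzip, json, time, zlib

    # Stand-in payload, roughly in the spirit of the article's JSON workload.
    payload = json.dumps([{"id": i, "name": f"item-{i}", "tags": ["a", "b", "c"]}
                          for i in range(100_000)]).encode()

    def bench(name, compress, decompress):
        blob = compress(payload)
        start = time.perf_counter()
        for _ in range(5):
            decompress(blob)
        elapsed = (time.perf_counter() - start) / 5
        print(f"{name:8s} ratio {len(blob) / len(payload):.2f}  "
              f"decompress {elapsed * 1000:.0f} ms")

    bench("gzip", gzip.compress, gzip.decompress)
    bench("deflate", lambda d: zlib.compress(d, 6), zlib.decompress)

    try:
        import lz4.frame  # third-party: pip install lz4
        bench("lz4", lz4.frame.compress, lz4.frame.decompress)
    except ImportError:
        print("lz4 not installed; skipping")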

The real way to have an energy-efficient service is to amortize away your idle usage and all of these inefficient conversion steps by just hosting your junk on App Engine.


I’ll iterate towards a good solution. This was a somewhat realistic but naive solution (I have customers doing largely what I’ve described in the article).

If I could find a way to encourage people to run their work in a power efficient way using economies of scale from this series, I would be terribly happy.

Interestingly I suspect public-cloud FaaS solutions (e.g. AWS Lambda) will achieve highest utilisation rates due to high rate of CPU sharing - but I’m a long way off from showing that with data.


The authors "92%" efficiency calculations will be with the inverter and power supply at high load.

When drawing only 20 watts, I expect you'll see more like 80% efficiency, and maybe as low as 50%.


I’ll try to measure this. I guess I need to purchase a shunt.


If you have a regular $3 multimeter, it should be able to measure current and voltage between the battery and the inverter. Then just multiply. Double-check you're on the right mode and using the right socket on the meter before connecting, or you'll get a big bang!

For the AC side, it's much harder to measure - typical inverters have rather imperfect AC outputs, and unless you have a rather expensive multimeter you won't get an accurate power measurement. A Kill A Watt will probably be okay for a rough measurement, but there could well be a +/- 20% error...


I look forward to your next posts in the series!

I'd start by skipping the inverter and using a car USB-C charger instead, then fussing with power settings to drive down the quiescent power of the laptop. Everything after that starts to feel like actual work, like changing the serving software or actively aiming the solar panel.

You might be able to find some single board computer with a modern CPU and no graphics silicon whatsoever.


I would encourage anyone interested in this to plug example sizes of small PV systems into this calculator for your location:

https://pvwatts.nrel.gov/

You can use a round number like 1000W, which would be the same as four fairly cheap 250W 60-cell polycrystalline PV panels. You will get the cumulative kWh production expected per month.


One question of great relevance is how the statistic implied by the article's title has changed over the past couple of decades, and how it is projected to change in the decades to come. Is this Moore's-law-esque? If so, these concerns will eventually be trivial. Is an asymptote approaching? If so, this metric becomes increasingly relevant and crucial.


It would be interesting to set up a server/PC that throttled the CPUs/GPUs by power use rather than thermal readings. It's analogous to the burst credit system used on some EC2 instances, but it would be a cool kernel or hardware feature for devices that are completely solar powered.
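On Linux, the powercap/RAPL interface already gets part of the way there: you can set a package power limit from userspace and have a small daemon track the available solar input. A sketch, where the intel-rapl:0 path is an assumption (domain names and availability vary by CPU and kernel) and read_available_solar_watts() is a hypothetical placeholder you'd implement against your charge controller:

    #!/usr/bin/env python3
    """Sketch: adjust a RAPL package power limit to track an available budget."""
    import time
    from pathlib import Path

    RAPL = Path("/sys/class/powercap/intel-rapl:0")     # assumed domain path
    LIMIT = RAPL / "constraint_0_power_limit_uw"        # long-term limit, in microwatts

    def read_available_solar_watts() -> float:
        # Hypothetical: would come from the MPPT charge controller's telemetry.
        return 15.0

    OVERHEAD_W = 5.0   # reserve for screen, RAM, NIC, conversion losses (a guess)

    while True:
        budget_w = max(read_available_solar_watts() - OVERHEAD_W, 3.0)
        LIMIT.write_text(str(int(budget_w * 1_000_000)))   # needs root
        time.sleep(10)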


Is there a name for this effect in an abstract sense? I see it pop up a lot where the gains from scaling are super-linear to the scale itself. Or is it just called "economies of scale".


This has very little to do with scale.

He didn't really scale up to get from 1 RPS to 12 RPS. It's just that he was already invested in a scale (full PC build drawing heaps of idle power) that didn't match his requirements.

i.e. it's faster for a Tesla to go 5km than a plane when both start from rest. That doesn't mean that you've scaled up by using a Tesla to get there faster.

You aren't going 12x faster by choosing the Tesla. You're just not going 12x slower. It's hard to write into words, but it's a dumb thing to say.


Marginal cost of production? If we take "energy consumption" to mean "cost", and "serving web requests" to mean "production", then your fixed cost is the idle energy load, and the marginal cost is how much more energy you would need to serve an additional request.
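In those terms, energy per request is just the fixed cost spread over the request rate plus the marginal cost, which is why utilization matters so much. A toy model with made-up numbers (not the article's measurements):

    def joules_per_request(idle_w: float, marginal_j: float, rps: float) -> float:
        """Fixed cost (idle power) amortized over the request rate, plus marginal cost."""
        return idle_w / rps + marginal_j

    IDLE_W = 14.0      # illustrative fixed draw
    MARGINAL_J = 0.6   # illustrative CPU energy per request

    for rps in (1, 4, 12, 50):
        print(f"{rps:>3} req/s -> {joules_per_request(IDLE_W, MARGINAL_J, rps):.1f} J/request")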


What about router/modem power draw? There needs to be some sort of connection to the internet to truly function as a website. (Not being connected would mean it's a LAN site or intranet site.)


Good point. There is: an HFC modem & 2x Google Wifi access points. That totals about 15W of energy use - which only leaves enough energy budget for a Raspberry Pi.


Are smartphones a good option here in terms of energy use?


The curious brain cell in me would like to know if wget||curl + xargs -P $(nproc) + awk would produce more "efficient" results?


> 12v to 240v inverter

Can someone explain this part of his equation? What’s he doing with 240v? Or maybe he meant 24v for charging the laptop?


He's in Australia which uses 240v mains power. He's using an inverter to connect his regular old laptop charger plug.


He's using a household solar inverter designed to convert low voltage DC to AC at standard wall socket voltage. It's not the most efficient approach, as noted in other comments, but it's an easy approach since these inverters are common and a laptop's power supply is already set up to plug into a wall socket.


Yep! Inefficient, but it is what I have, and there is no risk of frying my moderately expensive laptop, which I hope to keep running for many years.


Technically, it should be "200 watts". The losses happen due to all the inversion and conversion (I think... right?)


TL;DR (although it's quite short): he got 21W.

The largest factor is the day-night cycle. He gets the equivalent of 3 hours of full sunlight a day. I really expected Australia to be sunnier than that.
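That 21W falls out of averaging the ~600Wh/day over 24 hours and knocking off some conversion losses (a back-of-the-envelope sketch; the 85% end-to-end efficiency is a guess, not a measured figure):

    daily_yield_wh = 600     # reported average from the 200W panel
    conversion_eff = 0.85    # guessed battery + inverter + laptop brick losses

    continuous_budget_w = daily_yield_wh / 24 * conversion_eff
    print(f"continuous power budget: {continuous_budget_w:.0f} W")   # ~21 W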


Planting trees around your house creates a lot of passive cooling, which can save you a lot of power on making the interior comfortable.

It also makes roof top solar a bit of a joke.


Mostly clouds and rain bring this down to 600Wh/day. Washed the bird poo off the panel to get an extra 2 watts.


I think the title is misleading because the load depends on, well, the workload. But it’s a thought provoking idea.


What is the bottleneck in this system?


The detour of 12 volts -> mains voltage -> back to low voltage is a big one, for starters.


Also what most folks end up doing for convenience. It can be surprisingly inconvenient and expensive to source a power supply at 12V/24V/48V or whatever, and battery voltages are always nominal (think barely approximate), so you can't get clean, regulated power sufficient for running a computer directly off one. For instance, "12V" automotive voltages can vary from 8V to 15V during normal vehicle operation - starting the vehicle, charging at full tilt on the highway, etc. A full battery can provide 13.7-14.8V depending on chemistry too.


Theoretical: Amount of energy in sunlight.

Practical: The creativity of the engineer building it.


What if you ditch the OS and code the server on the bare metal?


The OS is very rarely the bottleneck.


Until it becomes the bottleneck.


This person is running node.js. I seriously doubt the OS is the bottleneck.


The bottleneck is gunzipping & deserializing a huge JSON object using a very slow language (Node.js).

This was attempting to simulate the type of load I see in the wild, rather than serving "Hello world" using C++.



