People (usually installers) also get hung up on having the correct angle for stationary panels.
In winter the solar gain difference on 'correctly' angled panels vs just having them flat on the roof is basically zero, and in summer you should have excess solar gain anyway so again it doesn't matter.
And even if you're losing a small amount due to an inefficient angle, just get more panels!
On a roof it's not worth it; you can even install panels facing the wrong way and get useful power out of them compared to retail electricity rates.
However, at grid scale a few percent difference in cost or output can have a dramatic impact on profitability, so there are a lot of seemingly trivial optimizations going on. Some installs go so far as to aim some non-tracking panels slightly to the east or west, because slightly more valuable kWh beats slightly more kWh.
At grid scale it doesn't matter either, because most installations are saturated anyway. The periods they care about are the ramp up to noon and the ramp down until sunset: if you can keep the installation producing at its stated capacity for as long as possible you make a killing, since middle-of-the-day power often costs nothing or has a negative price.
People think renewables are efficient in some way. They aren't. It's literally the most wasteful way to produce power because it doesn't get here when we need it and grid scale storage is orders of magnitude more expensive than production.
It's not an orders of magnitude difference anymore. Grid-scale energy storage costs about the same as nuclear power per kWh right now, with the added benefit that it doesn't need to operate 24/7 or need massive subsidies to break even. Though only if it can be charged with ultra-cheap solar power.
Which is why so many grid-scale solar installs come with enough batteries to store ~50% of daily output. It's not about nighttime power, it's about reducing demand for peaker power plants. Basically, combined cycle natural gas operates at ~64% efficiency assuming long-term operation, but open cycle has much lower efficiency and thus much higher costs.
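To put rough numbers on that efficiency gap: fuel cost per electrical kWh is just the cost of the heat divided by thermal efficiency. The gas price and open-cycle efficiency below are illustrative assumptions, not figures from the thread.

```python
# Rough fuel cost per electrical kWh at a given thermal efficiency.
# Gas price and the open-cycle efficiency are illustrative assumptions.

GAS_PRICE_PER_MMBTU = 3.50   # $/MMBtu, assumed
KWH_PER_MMBTU = 293.07       # 1 MMBtu of heat is ~293 kWh thermal

def fuel_cost_per_kwh(thermal_efficiency):
    """Fuel cost of one electrical kWh at the given plant efficiency."""
    cost_per_kwh_thermal = GAS_PRICE_PER_MMBTU / KWH_PER_MMBTU
    return cost_per_kwh_thermal / thermal_efficiency

combined_cycle = fuel_cost_per_kwh(0.64)  # ~64%, from the comment
open_cycle = fuel_cost_per_kwh(0.35)      # assumed peaker efficiency

print(f"combined cycle: ${combined_cycle:.4f}/kWh")
print(f"open cycle:     ${open_cycle:.4f}/kWh")
```

Even with identical fuel prices, the open-cycle kWh comes out nearly twice as expensive, which is the gap batteries are displacing.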
It can provide 400MW of power for 4 hours before it goes offline and cost $560m.
You need 6 of them to provide 24 hours of backup. That's $560M * 6 = $3.4B. That's not counting the three-times-as-large solar plant that you will need to build to charge the battery, or that the battery will lose 50% of its output within a decade unless you build an even larger battery installation to prevent full discharge. On top of that, to prevent yearly blackouts you'd need somewhere between 3 to 20 times the capacity above depending on region and climate.
Meanwhile the latest nuclear power plant produces the same power without degradation for less than half the rosiest estimate above, and was built in a country with no history of nuclear power and no indigenous expertise: https://en.wikipedia.org/wiki/Barakah_nuclear_power_plant
You double dipped, used outdated numbers "in 2020", hand waved, and still didn't get an orders of magnitude difference.
Pairing batteries with solar provides actual useful power from solar for most of the day. You don't need both 24h of batteries AND redundant solar farms to get 24h of energy. Extremely redundant solar costs less per kWh. Therefore a combined system will cost less, provide more useful power, and charge the batteries at zero additional cost. I'll stick with batteries alone, but include that zero charge cost as part of a cheaper overall system.
Construction costs vary by country, so you need a country with both battery systems and nuclear to get an apples-to-apples comparison. For the US: "In 2023 costs had increased to $34 billion, with work still to be completed on Vogtle 4." And that's just construction on two 1.1GW reactors. https://en.wikipedia.org/wiki/Vogtle_Electric_Generating_Pla...
400MW * 6 = 2.4 GW, 4h * 6 = 24h. 36 * $560M = $20.16B, so well under that $34+B even before considering the cost savings from solar. We are comparing with nuclear at 2.2GW, which again goes offline for long periods, but we care about orders of magnitude so redundancy is a non-issue.
Now you need to consider operating costs for both systems. In constant dollars without subsidies, construction works out to roughly 1/3 of nuclear's total lifetime costs: ~1000 workers + fuel + insurance + new equipment + decommissioning adds up. I'll be conservative and double that $34 billion instead, so $68B.
Actual studies look into forecasted costs. However, for orders of magnitude, 5% of install costs per year in maintenance would be equivalent to creating redundant facilities to cover 50% degradation in 10 years, so that's conservative, as swapping out batteries would lower costs here. Lifetime costs should therefore be below $20B + $20B * 5% * 50 years = $70B, i.e. roughly the same as nuclear per kWh and far more flexible.
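Spelling out that back-of-envelope calculation, with every figure taken from the thread's own estimates rather than sourced data:

```python
# Back-of-envelope from the thread: 36 battery units vs doubled nuclear cost.
# All inputs are the thread's own estimates, not sourced figures.

battery_unit_cost = 0.56e9   # $560M per 400 MW / 4 h unit (from the comment)
units = 36                   # 6x for power (2.4 GW) * 6x for duration (24 h)

construction = units * battery_unit_cost       # ~$20.16B
maintenance = construction * 0.05 * 50         # 5%/yr over 50 years
battery_lifetime = construction + maintenance  # ~$70.6B

nuclear_lifetime = 34e9 * 2                    # Vogtle construction, doubled

print(f"battery lifetime: ${battery_lifetime / 1e9:.1f}B")
print(f"nuclear lifetime: ${nuclear_lifetime / 1e9:.0f}B")
```

Whatever you think of the individual assumptions, the two totals land within a factor of about 1.04 of each other, which is the "no order of magnitude gap" point.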
Now, you can easily quibble about these numbers, but you're not getting an orders of magnitude difference here.
> You double dipped, use outdated numbers, hand waved, and still didn't get orders of magnitude difference.
Feel free to provide better numbers. Those are the two largest, most recent projects. That those numbers disagree with what studies say they should be isn't a problem with the numbers, but a problem with the studies.
Also you should read up on the difference between power and energy since you're confusing the two in pretty much every line of your post. Watts aren't joules and joules aren't watts.
I did provide a US example for an apples to apples comparison. Providing lower numbers is irrelevant when higher numbers already prove the point.
> Also you should read up on the difference between power and energy since you're confusing the two in pretty much every line of your post. Watts aren't joules and joules aren't watts.
Don't hand wave, pick a single example.
If you don't understand my notation you can simply compare 2 * 1.1 GW nuclear reactors * 24h = 52.8 GWh per day. The battery example you gave was 400MW * 4h = 1.6 GWh per day. 52.8 GWh / 1.6 GWh = 33x. I used 36x for a total of $20.16 billion.
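As a quick sanity check on that arithmetic, keeping power (GW) and energy (GWh) explicit:

```python
# GW * hours = GWh: energy per day for each system, per the comment's figures.
nuclear_gwh_per_day = 2 * 1.1 * 24   # two 1.1 GW reactors running 24 h
battery_gwh_per_cycle = 0.4 * 4      # one 400 MW battery discharging for 4 h

ratio = nuclear_gwh_per_day / battery_gwh_per_cycle
print(ratio)  # ~33 battery cycles to match one day of the nuclear pair
```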
We're already seeing it today but the "saving" part is still expensive. What we'll hopefully see in 10 years is this storage getting substantially cheaper, maybe even cheap enough that it's standard for any PV installation while still being competitive with the alternatives.
The vertical panels are bifacial so they get some of the light being reflected from the other side. This is a huge boost to their productivity.
The other major advantage here is that you can use them with farming and actually do something with the space.
In addition, areas where it snows see a dramatic uptick in solar power because of the reflection off the snow, and from not needing to brush the panels off.
Vertical panel layouts really open up a LOT of options that weren't available before: using them as fences, or putting them on sun-facing walls of homes, spaced off the wall so the pass-through sunlight hits the other side. There are a lot of possibilities here.
The benefit of vertical is that its output tracks the duck curve of demand. The normal cosine curve with a peak at noon is basically the inverse of grid-scale demand, which necessitates grid-scale storage.
So vertical makes a lot of sense on large scale solar farms I think, even more so when you consider the land can be better utilised.
On a domestic scale, where you can more easily shift your load to match your production AND you might already have a roof with the right aspect and pitch, normal pitched north/south-facing installations might make more sense.
The "correct" tilt in the more northern and southern regions as well in higher places is "enough so the snow slides off the panels". I built a barn with a roof at the "correct" 36° angle specifically so that the panels - 36x400W - both produce the maximum amount during the summer months as well as shed snow without the need for manual intervention.
Do flat panels need regular cleaning though? I'm in a rainy location and was told to always have at least some tilt so that you don't have to clean them as frequently
Wow. I've had panels for 3 years now, never cleaned them actively (even though the installers offered me the service for a nice monthly fee), and never noticed anything like that. Well, let's wait 1-2 months to see if it's still the case, at least (April/May are the best months as far as production is concerned).
They're super easy to clean if you can get to them. Just 30 seconds with a regular household mop and they're shining again.
I'd recommend doing it at night, because I got a nasty zap off mine when presumably some of the mopping water got into the electrics and came up the metal mop pole .. (the electrics are all supposedly fully waterproof, but I guess on my 10+ year old panels stuff has degraded)
Having them flat on the wall is even better. You lose production in the summer when you'd usually have excess power anyway, and you gain snow-shedding in the winter, which can be tremendously important.
If you are charging a battery and space isn't an issue, then all you want is maximum solar kWh per day, and more panels (and inverters etc.) wins.
However, the duck curve means many grid-scale solar installations still use east/west solar tracking to collect power when it's more valuable, i.e. mornings and evenings. It's a great example where optimizing for profit produces a counterintuitive result.
I was up cleaning a wasps nest out of our panels yesterday (not because it was affecting the panels, just causing trouble for the people sitting under them). The panels are still working fine but 10 years of accumulation of leaves and organic matter has left them not as shiny as they were.
But if there were any moving parts I couldn't imagine them still being operational after this time.
Depends where you live and the quality of panel you want. I got 16kW of brand new, high-efficiency panels for $10.4k, or about $0.64/watt. In the US, this is considered surplus pricing.
If you're in Australia or something, sure, $0.20/watt is a good price.
For another data point, we bought 19 kW of new panels from SignatureSolar for $6k shipped in the US not too long ago, so a bit more than 0.30/watt. Not top of the line efficiency at 20%, but high quality all-black monocrystalline Canadian Solar HiKu6 panels with thicker than normal glass.
Ironically, the little bits of metal to hold them to the ground cost almost as much, and the inverter and batteries both cost more.
SurplusSolar had prices in that range as well. For my build, I did go with very nice panels (lg445n2w-e6) so that factored into the higher price for sure.
I have a vague feeling India is among the winners. My (oversized for my use) residential system pays back in less than 33 months. I see US systems paid for by loan that pay back in 7 years.
Perhaps the tone (sarcasm?) of your post caused some downvotes, but your point is well taken. I didn't consider that the panels are not made in Australia. They are very likely made in China. I guess that Australia has much lower tariffs to import solar panels compared to the US.
From my Internet goggles, it seems like home solar panels are common for single-family homes in Aus. It might also be that installation is more competitive, thus cheaper. Or planning and fees are lower.
I feel a better way to account for solar panel costs is to look at the cost per kWh over the expected lifetime of 25 years. A system might pay itself off in 5-7 years, which is fairly typical, but the other way of looking at it is that the power is about 1/4 the price of power from the grid. Makes you wonder how it is that power from the grid is so ludicrously above its actual capital cost of production; the systems getting that power to places must be crazy expensive.
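A rough lifetime cost-per-kWh sketch of that framing. Every input here is an assumed illustrative value (system price, size, and capacity factor), not a figure from the comment:

```python
# Lifetime cost per kWh for a home system; all inputs are assumptions.
system_cost = 12000.0    # $ installed, assumed
size_kw = 8.0            # system size in kW, assumed
capacity_factor = 0.18   # typical-ish fixed rooftop, assumed
years = 25               # expected lifetime from the comment

lifetime_kwh = size_kw * capacity_factor * 24 * 365 * years
cost_per_kwh = system_cost / lifetime_kwh
print(f"${cost_per_kwh:.3f}/kWh")  # roughly 4 cents with these inputs
```

Against a typical retail rate of 15-30 cents/kWh, that's where the "about 1/4 the price of the grid" (or better) comparison comes from.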
Tilt systems aren't worth the money. They cost too much to make and with the same money more panels produces more power.
> Makes you wonder how it is that power from the grid is so ludicrously above its actual capital cost of production, the systems getting that power to places must be crazy expensive.
It's an apples to oranges comparison. Solar panels don't provide power overnight for example, but people place a lot of value in being able to use power at night. Likewise the grid scales up and down to meet the demands placed on it which a basic solar setup (i.e. without batteries) can't do. Then there's reliability, transmission etc.
Transmission costs are what people forget. It takes a lot of money to buy, build, and maintain the equipment & land that carries your electricity from its source.
Profits are what people forget. I live in MN; everyone has Xcel Energy. They made a net profit of $1.85 billion in 2022, and that's NET profit. They paid out $3.85 per share.
They've spent an insane amount of money on solar panels - that's all out of that number, those are net profits.
What happens when they have 50% or maybe near that 100% of power coming from renewable power?? They've made a lot of money off of us while we also paid for the costs to attain the renewable system, once it's in place - why should they get my money anymore?
Why are people profiting off of a necessity to life anyways?
Batteries that cover overnight only add about 20% extra cost, and they pay for themselves due to the enormous difference in price compared to the grid.
That won’t get you all the way through the night, every night. At least not with anything close to typical household energy use in the US.
For that you need a properly sized off-grid system, which can easily be 20-30 kW of solar and 50-150 kWh of batteries. Enough to get you through 5-10 cloudy days (depending on your location) with less than 1 kWh per day per kW from solar and 40-60 kWh per day of electricity use.
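The sizing math behind those ranges is just daily deficit versus battery capacity. A sketch using values from the comment (the function name and the cloudy-day yield default are made up):

```python
# Days of autonomy for an off-grid system on consecutive cloudy days.
def days_of_autonomy(battery_kwh, solar_kw, daily_use_kwh,
                     cloudy_kwh_per_kw=1.0):
    """How many cloudy days before the battery runs flat.

    cloudy_kwh_per_kw: solar yield per installed kW on a bad day
    (the comment cites "less than 1 kWh per day per kW").
    """
    daily_deficit = daily_use_kwh - solar_kw * cloudy_kwh_per_kw
    if daily_deficit <= 0:
        return float("inf")  # solar alone covers even a cloudy day
    return battery_kwh / daily_deficit

# Mid-range values from the comment: 100 kWh battery, 25 kW solar, 50 kWh/day
print(days_of_autonomy(battery_kwh=100, solar_kw=25, daily_use_kwh=50))  # 4.0
```

Note the solar still matters on cloudy days: without the 25 kWh of dim-sky production, the same battery would only last two days.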
That’s so wild to me. I have a modest 6.6 kW solar system in Australia and average 4 kWh a day bought from the grid. No battery, no gas, live in a hot climate and run the A/C about 8 months a year.
How many square feet (or meters) are air conditioned? And you mentioned average daily purchases from the grid, but do you know what the maximum is?
In the US, I think the "pinch point" for off-grid systems is usually in the winter for an all-electric home. Shorter days and more likely to have multiple fully cloudy days in a row, at least for a lot of the US.
Just under 100m2 are air conditioned. Largest day on record was importing 23 kWh due to almost no sun, and me choosing to spend the day bulk cooking/catching up on washing. In a grid down/no sun situation I wouldn’t be doing bulk cooking and washing.
I have a 12kw system in Southern California. Recently it's been raining and overcast many days so I've had a shortage. Only in the winter though. I'm usually running a surplus.
> Makes you wonder how it is that power from the grid is so ludicrously above its actual capital cost of production
Delivery charges, mostly -- it costs a lot of money to build and maintain a power grid. Wholesale prices in my state are around 4.5 cents/kWh, but delivered prices are much higher (around 33 cents/kWh depending on rate plan).
Also it's another point of failure. If (or when) the motor fails, then you could be facing in an extremely suboptimal direction and it might take more than a few days to fix.
> Makes you wonder how it is that power from the grid is so ludicrously above its actual capital cost of production, the systems getting that power to places must be crazy expensive.
This is my thinking behind delaying getting some panels for my own home, if they're getting cheaper AND more efficient, surely that's going to have knock-on effects on the suppliers too?
Utilities are cost + profit. They are a cartel of generation, transmission, and distribution companies working together to increase costs, which increases profits.
I don't work in this field (just adjacent to it), but I would start looking at Digi-Key, RS, Mouser, Farnell. It might be that people working in power electronics have other go-to suppliers but I'm not familiar with them.
Current for charge controllers scales really nicely with horizontal scaling, so there isn't a huge reason to push for higher voltage. With that said, there can be various reasons to want a higher Voc in a single string: managing voltage drop, and value engineering on conductor size. The marketing on those is often aimed specifically at max input voltage rather than charge current because of this. Victron is probably the prime example of higher-end standalone MPPT controllers.
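The usual constraint when pushing Voc in a string is the cold-weather check: open-circuit voltage rises as temperature drops, and the coldest morning must stay under the controller's max input voltage. A sketch with illustrative module numbers (pull the real temperature coefficient from your datasheet):

```python
# Max modules per string from cold-weather Voc vs controller input limit.
# Module figures below are illustrative, not from any specific datasheet.

def max_modules_per_string(voc_stc, temp_coeff_pct_per_c, min_temp_c,
                           controller_max_v, stc_temp_c=25.0):
    """Voc rises as temperature drops; size for the coldest expected morning."""
    voc_cold = voc_stc * (1 + temp_coeff_pct_per_c / 100
                          * (min_temp_c - stc_temp_c))
    return int(controller_max_v // voc_cold)

# e.g. 49.5 V module, -0.27 %/degC coefficient, -20 degC record low,
# 250 V max MPPT input
print(max_modules_per_string(49.5, -0.27, -20, 250))  # 4
```

At -20 degC that hypothetical module's Voc climbs from 49.5 V to about 55.5 V, so a string that fits five modules at STC only safely fits four.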
These days, I recommend using DC optimizers that are attached to each module. They provide MPPT tracking on a module level, and make engineering calculations simpler.
They are also much cheaper than microinverters, safer, and more reliable.
Tigo optimizers are nice, and you can have just one optimizer for 2 modules. It's mostly meant for commercial bifacial modules, but it can work just fine for 2 completely separate modules. You can also use their cloud monitoring thingie to remotely track each DC optimizer's health.
Or you can use SolarEdge optimizers, they are simple and cheap (<$100 per module).
Both support automatic shutdown, so you can safely de-energize the entire system if needed. It's a problem for classic string inverters.
Most inverters for this are installed by solar companies, so you tend to find them on specialist websites dedicated to solar gear. It's the same with solar panels: there's a very limited set available in standard retail channels, especially of the 400W size and output.
That's incorrect. California and Arizona require a licensed electrician to hook up inverters to the grid. I have not checked other states, but I'd be surprised if it's not the same.
And most of the more powerful inverters require a grid connection because they can't work in an "islanded" mode.
Did Arduino recently regain their relevance for serious work? I haven't followed this topic for a few years, my previous impression was that Arduino products were only for simpler educational purposes, and you would buy a cheaper clone anyway. More serious hobbyists seemed to have switched to Raspberry products at the time, and now they have quite a range of MCU boards as well.
But now I see products like MKR, Portenta, even proper industrial DIN-mounted controllers that look quite capable. Those in the know, what's the current landscape, what ecosystem is worth buying into?
> Did Arduino recently regain their relevance for serious work?
IMO: No.
Arduino, as a company, wants that to change, but I have yet to see anyone take them seriously. Most of the "industrial" Arduino products are unsuitable for purpose; they lack critical features which would be required for those applications like protected inputs, wide range power input, or field bus interfaces.
> More serious hobbyists seemed to have switched to Raspberry products at the time
Why would you use a Raspberry Pi when an Arduino will do everything required? If you don't need a full Linux OS, why waste one on something that just doesn't need it?
There are RP2040 boards (e.g. Pi Pico) that offer similar functionality to Arduino boards, are more powerful than Arduinos, and are much less expensive than (official) Arduinos. If you are more interested in performance or memory than the I/O options offered by a microcontroller, a Pi Zero is about the same price as Arduino's cheapest offering. You can also develop software for the Pi Zero on the Pi Zero, which is considerably easier than developing for microcontrollers.
I think Arduino boards are best suited for what they were originally designed for: education. While that covers some of the hobbyist market, it doesn't cover all hobbyists.
Sure, anybody can program when there are GBs of memory on a Pi, as evidenced by the fact that I have a job. Those that can do it in the KBs of memory on an Arduino are on a different level.
Depending on all of the things you listed just sounds like lazy to me. /s
Sorry if I was a bit ambiguous. I was not referring to the gigabytes of memory or even the abundant number of libraries when I mentioned that it is easier to program Raspberry Pis. Those are certainly factors, though I doubt that many embedded projects will exploit that much memory. In cases where they do exploit the resources, it is probably needed (albeit, likely at the level of megabytes rather than gigabytes).
I actually meant the Pis are easier to program since development can be self-hosted. Uploading a program to a microcontroller after each build is a royal pain in the bottom. Debugging (with a debugger) is also annoying, both from the perspective of setting up the software and having the necessary hardware.
With respect to laziness, I hold two points of view. I definitely agree that inefficient use of resources should be viewed in a negative way. This is especially true when a solution goes from bytes or kilobytes to megabytes or gigabytes. More critically, I believe that laziness is a negative when the inefficiencies lead to less robust code. On the other hand, one of the chief benefits of computers is to reduce the burden placed upon people. Setting up a toolchain for microcontrollers can be nearly as complex as developing code for the microcontroller itself. That strikes me as wrong.
Aside: I periodically write software for microcontrollers for fun. One of my favourite exercises involves prototyping with high level code, such as using Arduino libraries, and rewriting it using increasingly low level code. It is how I approach learning a microcontroller, but I am also extremely conscious of both volatile and non-volatile memory usage. I am definitely sympathetic with your notion of avoiding laziness.
On the other hand, we should also realize that microcontrollers are often used because we are "lazy". There are many cases where microcontrollers are used even when a basic circuit can accomplish the same task, since it is cheaper to throw an absurd number of transistors at something than it is to pursue more efficient solutions. (Labour costs add to project costs, along with the number/type of components, board size, the relative ease of iterating code versus iterating on board revisions, etc.) There are reasons to consider beyond the challenge of generating the most efficient solution possible, because the efficiency of development is also a factor. As someone who pursues software and hardware development as a hobby, I often bemoan the state of affairs of commercial products. Yet I also realize that commercial development, or even the motivations of other hobbyists, leads to different perspectives.
It's easier to evolve code with a real OS where you can update software with rsync instead of a USB cable, write log files, ssh in and run the code under a debugger, run the same code in a test mode on your laptop, etc.
The Arduino strategy seems to be similar to the Raspberry one - as people who grew up tinkering with Arduinos go into industry and are placed in charge of decisions, an upmarket edition of Arduino is a natural pick.
The benefit of Arduino is that your boss can say "we just need something that switches this water pump on whenever it's above 20C but not if the battery is low or for more than an hour per day", and you can have the whole project coded, soldered and working in an hour.
The advantage of the platform is speed of development, not engineering good practice or mass manufacturing.
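That water-pump spec really is only a few lines of logic. Here's a Python stand-in for the decision an Arduino loop would make each tick (thresholds from the comment above; all names are made up):

```python
# Sketch of the boss's spec: pump on above 20 C, but never when the
# battery is low or the one-hour daily budget is used up.

MAX_RUN_SECONDS = 3600  # at most one hour of pumping per day

def pump_should_run(temp_c, battery_low, seconds_run_today):
    """Decide whether the pump relay should be on right now."""
    if battery_low:
        return False
    if seconds_run_today >= MAX_RUN_SECONDS:
        return False
    return temp_c > 20.0

print(pump_should_run(25.0, False, 0))     # True: warm, battery ok, budget left
print(pump_should_run(25.0, True, 0))      # False: battery low wins
print(pump_should_run(25.0, False, 3600))  # False: daily budget exhausted
```

The real sketch would also need to reset `seconds_run_today` at midnight and debounce the temperature reading, but the core logic is exactly this small, which is the point being made about development speed.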
I just received an Arduino Edge Control unit to run some 9V valves, but it was DOA. Well, it worked when I threw a bench supply at it, but USB doesn't work. FWIW, support has been amazingly responsive, but not a great first go-around with their professional line. Super curious about that WiFi-enabled PNC controller though...
The article says he estimates that the tilt/tracking system will improve power output by 20-25%. Doesn't he already know what it will do? It seems like he should have tested that out by now and be prepared to share that data.
Cool, we also use Home Assistant on a Raspberry Pi to trim our EV charging based on current solar production. It's a great way to be pretty much 100% solar charged, if you can charge while it's sunny.
Take a look at "evcc". Careful: it's MIT-licensed open source, but a set of interfaces (to inverters, wall boxes, car APIs or smart meters) needs to be unlocked with a sponsorship token, or by patching the license check.
-> https://evcc.io/en/
It's a stand alone server with a simple Web UI and takes care of charging your car with solar (or grid, if there is not enough solar). IIRC it offers three modes: 1. charge solar only, 2. charge with at least 6A, boost if there is more solar, 3. charge independent of solar.
You can also configure charging targets, e.g. stop at 80% SOC.
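The solar-only mode described above essentially clamps charge current to the measured surplus. A minimal sketch of that idea (the EVSE limits and voltage are assumptions for illustration, not evcc's actual code):

```python
# Solar-following EV charging: set charge current from the PV surplus.
# EVSE limits and voltage are assumed illustrative values.

MIN_EVSE_AMPS = 6   # typical EVSE minimum charge current
MAX_EVSE_AMPS = 32
VOLTS = 230         # single-phase supply, assumed

def charge_amps(solar_surplus_w):
    """Charge current matching the solar surplus, clamped to the EVSE range.

    Below the minimum the EVSE can't charge at all, so return 0 rather
    than drawing the shortfall from the grid.
    """
    amps = solar_surplus_w / VOLTS
    if amps < MIN_EVSE_AMPS:
        return 0
    return min(int(amps), MAX_EVSE_AMPS)

print(charge_amps(500))   # 0: below the 6 A floor, don't charge
print(charge_amps(3000))  # 13 A
```

The "charge with at least 6A, boost if there is more solar" mode would differ only in returning `MIN_EVSE_AMPS` instead of 0 below the floor.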
I know that from a business or ROI perspective the whole build probably isn't worth it, but it's still a cool project, and I like the amount of detail and pictures; it's just how I write up my own projects. Will bookmark it and get back to it later for any updates.
Few installations bother with tilting panels any more. 20 years ago, when solar panels were expensive, that was a thing. Not so much any more.
That you can get 13 kW of new panels for US$6000 is amazing.