
It is kind of stupid that the penalties were MORE than the cost of the actual electricity. Shouldn't it have just been the cost of the electricity that they failed to use?

That way loopholes like this wouldn't have existed.




No, because disposing of the unused electricity has costs for the provider.

The way this works is that the provider offers a cheaper rate in exchange for a commitment to a constant energy draw, since power plants can't be easily or cheaply scaled back.

As an example, Ontario paid millions[1] to get others to use their surplus energy; MS' provider could have been forced to do the same, hence the penalty.

[1]: http://www.thestar.com/business/2011/08/29/ontario_pays_othe...


It's not just about using the excess generation capacity. In fact, at this scale it probably has nothing to do with generation. Distribution utilities need to install new infrastructure to connect new large loads (new datacenter, new manufacturing plant, etc.) - substation upgrades, new conductor, etc. Because that infrastructure only serves that one customer, the cost is recovered from them through a minimum-connected-load + capital contribution contract rather than from the entire customer base through a rate increase.

Keep in mind that utilities are heavily regulated and typically can't touch their rates at all without regulatory approval. In many jurisdictions, there is an "obligation to connect" for residences - the utility /must/ provide power, at a certain rate, to any residence in their area of operation. They aren't allowed to recover the cost of infrastructure upgrades (e.g. a new distribution transformer when somebody builds an expansion to their house) directly from that customer (up to a point - if you're building a mansion that uses 10x as much power as everyone else, they'll come after you), only from the ratebase as a whole. Otherwise, people who live in rural areas wouldn't be able to afford electricity.

No such obligation exists for businesses, which are typically the largest loads anyhow. If you build a new 50 MW datacenter on my distribution system, all of a sudden I need to put in 5 km x 3 phases x 5 feeders of new conductor, 2 new station transformers, plus civil infrastructure, etc. It could even be special equipment I wouldn't otherwise have on my system - like DC equipment. All of that is additional to what I have to pay the generator for that 50 MW of power on a continuous basis. So, I calculate the payback period on the 50 MW of power I'm going to sell you at a 5% margin, and if it's longer than the lifespan of the equipment I'm going to buy, I make you pay up front to bring the payback down to, say, 25 years. But that only works if you actually buy 50 MW of power from me for 25 years, so if you don't, I charge you a penalty equivalent to the amortized cost of that infrastructure that now sits idle.
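To make the arithmetic concrete, here's a back-of-envelope sketch in Python. Every figure in it (the margin, the infrastructure cost, the equipment life) is a made-up placeholder, not a real tariff number:

    # Back-of-envelope payback sketch; all figures are hypothetical.
    load_mw = 50
    margin_per_mwh = 2.0        # assumed margin on delivered energy, $/MWh
    hours_per_year = 8760
    infra_cost = 40_000_000     # assumed cost of feeders, transformers, civil work
    equipment_life_years = 40   # assumed equipment lifespan

    annual_margin = load_mw * hours_per_year * margin_per_mwh  # $876k/year
    payback_years = infra_cost / annual_margin                 # ~46 years > 40-year life

    # Payback exceeds equipment life, so ask for a capital contribution
    # up front that brings the utility's remaining share down to a
    # 25-year payback.
    target_years = 25
    capital_contribution = infra_cost - target_years * annual_margin

    # If the customer walks away after `year` years, the penalty is
    # roughly the unrecovered balance of the utility's share.
    def penalty(year):
        utility_share = infra_cost - capital_contribution
        return max(0.0, utility_share - year * annual_margin)

With these placeholder numbers, the up-front contribution comes to about $18M, and walking away after 10 years leaves roughly $13M of stranded cost - hence the penalty.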

Hope this helps explain the economics of it from a distribution point of view. I am a power engineer.


I'm curious. So how much power (energy) can a typical residential user use? I'm thinking average lines and transformers.


Here in Toronto the current standard for residential service is 200 A @ 120/240 V (48 kW). Older homes might be 100 A. That's the max that the conductor between the distribution transformer and the home's panel can handle without thermal failure. But you wouldn't size distribution transformers based solely on peak load, because a mineral-oil-immersed transformer has a lot more thermal mass than a cable and takes hours or days of sustained load to actually heat up. Average loads used for planning and distribution transformer sizing are more like 5-10 kW/house.
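A quick sanity check of those figures (the 10-home transformer group below is an assumption for illustration, not a Toronto standard):

    # Peak service capacity vs. planning load; illustrative numbers only.
    service_amps = 200
    service_volts = 240                 # split-phase 120/240 V service
    peak_capacity_kw = service_amps * service_volts / 1000  # 48 kW

    # A shared distribution transformer is sized on diversified average
    # load, not the sum of every panel's rating:
    homes = 10                          # assumed homes per transformer
    planning_kw_per_home = 7            # middle of the 5-10 kW range
    transformer_kva = homes * planning_kw_per_home  # ~70 kVA, not 10 x 48 = 480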


Thanks! I actually own a 2-family house that is very old and I have two 60-AMP panels. If I renovate, it will need to be upgraded.


It seems...unlikely that they actually needed to pay people to take their power (though I'm no EE; it may well be possible). My intuitive sense from that article was that it was a contract issue (and possibly not even that; they also throw in something about "below costs", which would obviously not be a negative price).


Electrical generation and distribution is a tough business. Places where you have generation capacity aren't necessarily accessible via the transmission network, so if NYC isn't using power, you can't just send it to Cleveland. Sometimes you have a certain amount of "fixed" generation capacity that you must use, or must commit to taking offline for days before restarting. 80% of your power may come from a few big, centralized, cheap power plants (nukes, big hydro, coal), with the rest from a bunch of geographically dispersed, small, more expensive plants (natural gas, small hydro) that you spin up to meet peak demand.
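A toy dispatch model makes the must-run problem visible. All the plant names, capacities, and costs below are invented for illustration:

    # Toy dispatch sketch; all plant data is made up. Must-run baseload
    # units can't be cycled off for a few low-demand hours, so at night
    # total forced output can exceed demand and the surplus has to go
    # somewhere - sometimes at a negative price.
    PLANTS = [  # (name, capacity_mw, marginal $/MWh, must_run)
        ("nuke",        1000, 10, True),
        ("big hydro",    500,  5, True),
        ("coal",         800, 20, True),
        ("small hydro",  100, 40, False),
        ("gas peaker",   400, 80, False),
    ]

    def dispatch(demand_mw):
        forced = sum(cap for _, cap, _, must in PLANTS if must)
        surplus = max(0, forced - demand_mw)
        remaining = max(0, demand_mw - forced)
        # Peakers fill any remaining demand, cheapest first.
        for name, cap, cost, must in sorted(PLANTS, key=lambda p: p[2]):
            if must:
                continue
            remaining -= min(cap, remaining)
        return surplus

    # dispatch(1500) -> 800 MW of surplus: 2300 MW is forced on against
    # only 1500 MW of overnight demand.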

Utilities sign contracts with big industrial users who have steady-state energy demands to ensure that they have somewhere to send lots of power. They also have contracts under which, when peak demand is reached, the industrial customers shut down lines in exchange for discounts. (I worked at a place that financed a DR datacenter largely by committing to failing over 50% of our energy workload to another region during a peak-demand emergency.)
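The interruptible-rate idea can be sketched as a billing rule; the rate, discount, and penalty below are assumptions, not real contract terms:

    # Simplified interruptible-rate bill; all rates are assumed.
    STANDARD_RATE = 50.0     # $/MWh
    DISCOUNT = 0.20          # discount for agreeing to curtail on request
    REFUSAL_PENALTY = 500.0  # $/MWh consumed during an ignored curtailment call

    def monthly_bill(mwh_used, mwh_during_ignored_events=0.0):
        energy = mwh_used * STANDARD_RATE * (1 - DISCOUNT)
        penalty = mwh_during_ignored_events * REFUSAL_PENALTY
        return energy + penalty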

I'm not doing the topic justice, but there's a lot of behind-the-scenes stuff there -- I'm sure it wasn't just a contract dispute.


From the article it doesn't sound like there was any sort of dispute...my guess would be that the contract formalized payment as a function of capacity and demand, in such a way that the payment could go negative. This might not even be a mistake or a consequence of needing to dispose of the power; it could just be a low-cost (because rarely triggered) sweetener.
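For instance (purely illustrative - nothing here is from the actual contract), a capacity credit plus a cheap energy rate flips negative when demand is low:

    # Hypothetical payment schedule that can go negative; every rate
    # here is an assumption, not a term from any real contract.
    CAPACITY_CREDIT = 2000.0   # $/MW-month paid for absorbing surplus
    ENERGY_RATE = 15.0         # $/MWh, an assumed below-cost rate
    HOURS_PER_MONTH = 730

    def payment(capacity_mw, avg_demand_mw):
        energy_charge = avg_demand_mw * HOURS_PER_MONTH * ENERGY_RATE
        return energy_charge - capacity_mw * CAPACITY_CREDIT

    # payment(100, 100) is positive, but payment(100, 1) is about
    # -$189k: at low demand, the buyer is effectively paid to take power.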


Depends on the scale.



