Previously we’ve seen that energy and power are related by time.
Capacity Factor is a term you’ll come across regularly in the power generation business. It’s another one that relates these three quantities: energy, power and time.
However, it does so in a slightly different way: it's a metric that relates the energy output of a power plant over a stated period of time to its maximum power (its "capacity").
That's important, because the maximum power output of a power plant also tells us something about its maximum potential to produce energy: if it operated at this maximum power for a defined period of time, maximum energy = maximum power x time.
Since it’s energy that is the revenue source for most power plants, a metric that relates actual energy output to maximum potential output – as capacity factor does – is an important one in terms of the economics of power generation. Simply put, the closer the output of a power plant stays to its maximum potential over a period of time, the more energy it has available to sell relative to the money invested to build it.
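That ratio can be written down directly. Here's a minimal sketch in Python; the plant size and annual output below are illustrative assumptions, not figures from this lesson:

```python
# Capacity factor: actual energy delivered over a period, divided by the
# energy the plant would have delivered running at full power the whole time.

def capacity_factor(energy_mwh: float, capacity_mw: float, hours: float) -> float:
    """Actual energy output / maximum possible output over the period."""
    return energy_mwh / (capacity_mw * hours)

# Hypothetical example: a 100 MW wind farm producing 306,600 MWh
# over one year (8,760 hours).
cf = capacity_factor(energy_mwh=306_600, capacity_mw=100, hours=8_760)
print(f"{cf:.0%}")  # -> 35%
```

The denominator (capacity x hours) is the "maximum potential output" the text describes; the closer the numerator gets to it, the more sellable energy the plant earned per unit of invested capacity.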
By contrast, a low capacity factor means the capacity you've paid for is operating far below its potential. That generally results from a combination of two things:
- Time spent not operating at all (e.g. no wind, no sun, the plant offline for maintenance, or an electricity price too low to justify burning fuel).
- Time spent operating, but at a power output (rate of energy generation) below the maximum possible. Output might be restricted because, for example, the wind (or sun) is present but weak, or because there is insufficient electricity demand to make use of it.
In the video lesson below, we'll briefly define what capacity factor is, including a couple of ways to calculate it and its relation to another term you may hear quite often: "full load hours".
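As a preview of that relation: full load hours re-express the same ratio as a time. They answer "how many hours at full power would have produced the plant's actual output?" The sketch below uses the same illustrative (assumed, not sourced) figures of a 100 MW plant producing 306,600 MWh in a year:

```python
# Full load hours: the hours a plant would need to run at maximum power
# to deliver its actual energy output. Equivalently, capacity factor
# multiplied by the hours in the period. Figures are illustrative.

def full_load_hours(energy_mwh: float, capacity_mw: float) -> float:
    """Hours at maximum power needed to produce the actual energy output."""
    return energy_mwh / capacity_mw

flh = full_load_hours(energy_mwh=306_600, capacity_mw=100)
print(flh)          # -> 3066.0 full load hours
print(flh / 8_760)  # -> 0.35, i.e. the capacity factor over the year
```

Dividing full load hours by the hours in the period recovers the capacity factor, which is why the two terms often appear together.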