How are the financials calculated for a commercial battery storage system?
Battery storage economics are driven by two main value streams — peak demand reduction and energy arbitrage — weighed against the annualized cost of the system.
The financial case for a standalone commercial battery storage system comes down to whether the bill savings it generates exceed its cost on an annual basis. Here's how that math typically works.
Peak demand reduction is usually the primary value driver. Commercial utility bills include demand charges based on your highest power draw during a billing period. A battery system monitors your building's load and discharges during spikes to "shave" the peak. A common rule of thumb for modeling purposes is that a battery achieves roughly 60% peak demand reduction relative to its rated power capacity — meaning a 100 kW battery would reduce peak demand by approximately 60 kW on average across billing periods.
The savings from demand reduction are calculated by multiplying the reduction in peak demand (in kW) by the applicable demand charge rate ($/kW) for each time-of-use period, then summing those savings across the billing periods in a year.
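As a quick sketch of that arithmetic (all figures here are illustrative assumptions, not real tariff data):

```python
# Demand-charge savings sketch. The 60% shave fraction is the rule of
# thumb above; the demand charge rate and billing frequency are assumed.
battery_kw = 100           # rated power capacity of the battery (kW)
shave_fraction = 0.60      # rule-of-thumb peak reduction vs. rated power
demand_charge = 18.00      # assumed demand charge ($/kW per billing period)
billing_periods = 12       # monthly billing assumed

peak_reduction_kw = battery_kw * shave_fraction               # 60 kW shaved
annual_demand_savings = peak_reduction_kw * demand_charge * billing_periods
print(f"${annual_demand_savings:,.0f}/yr")  # → $12,960/yr
```

A real model would apply the tariff's actual per-period rates, since demand charges often differ between on-peak and off-peak windows.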
Energy arbitrage is the second value stream. The battery charges during low-cost off-peak hours and discharges during expensive on-peak hours, capturing the rate differential. The savings depend on how much energy the battery can shift relative to total consumption and the spread between peak and off-peak rates.
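A simple one-cycle-per-day arbitrage model might look like this; the battery size, rates, efficiency, and cycle count are all assumptions for illustration:

```python
# Arbitrage savings sketch: charge off-peak, discharge on-peak.
# Round-trip losses mean more energy is bought than is later delivered.
battery_kwh = 200          # usable energy capacity (kWh), assumed
on_peak_rate = 0.25        # assumed on-peak energy rate ($/kWh)
off_peak_rate = 0.10       # assumed off-peak energy rate ($/kWh)
round_trip_eff = 0.88      # fraction of charged energy actually delivered
cycles_per_year = 250      # roughly one full cycle per weekday, assumed

energy_discharged = battery_kwh                    # kWh avoided at on-peak rates
energy_charged = battery_kwh / round_trip_eff      # kWh bought off-peak, incl. losses
per_cycle_savings = (energy_discharged * on_peak_rate
                     - energy_charged * off_peak_rate)
annual_arbitrage = per_cycle_savings * cycles_per_year
print(f"${annual_arbitrage:,.0f}/yr")
```

Note how the rate spread must be wide enough to cover the efficiency losses baked into `energy_charged`, or each cycle loses money.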
On the cost side, you account for the battery's capital cost amortized annually, plus round-trip efficiency losses (the energy lost in each charge-discharge cycle, typically 10–15%). These losses mean the battery always consumes slightly more energy than it delivers, so energy savings from the battery itself are technically negative — the value comes from shifting when you consume, not from reducing how much.
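The cost side can be sketched the same way. Straight-line amortization is a simplification (a real model would discount cash flows), and the capital cost, lifetime, and throughput figures are assumptions:

```python
# Annualized cost and efficiency-loss sketch (all inputs assumed).
capital_cost = 120_000     # assumed installed cost of the system ($)
lifetime_years = 10        # assumed useful life
annualized_cost = capital_cost / lifetime_years   # straight-line amortization

# The battery always consumes more energy than it delivers:
round_trip_eff = 0.88
energy_delivered = 200 * 250                      # kWh/yr discharged (assumed)
energy_consumed = energy_delivered / round_trip_eff
extra_kwh = energy_consumed - energy_delivered    # net energy lost per year
print(f"${annualized_cost:,.0f}/yr cost, {extra_kwh:,.0f} kWh/yr lost")
```

That `extra_kwh` figure is why the battery's own energy balance is negative: the value comes from when it shifts consumption, not from reducing total consumption.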
The net financial benefit is the demand charge savings plus the arbitrage savings (computed net of efficiency losses, as above), minus the annualized system cost.
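Putting it all together, the bottom-line arithmetic is a single sum. The figures passed in below are illustrative assumptions, not real project data:

```python
def net_annual_benefit(demand_savings: float,
                       arbitrage_savings: float,
                       annualized_cost: float) -> float:
    """Net yearly value of the system: bill savings minus amortized cost.

    Arbitrage savings are assumed to already be net of round-trip
    efficiency losses (charging energy exceeds discharged energy).
    """
    return demand_savings + arbitrage_savings - annualized_cost

# Illustrative inputs (assumed): $12,960 demand savings,
# $6,818 arbitrage savings, $12,000/yr amortized cost.
print(net_annual_benefit(12_960, 6_818, 12_000))  # → 7778
```

If this number is positive across realistic rate and degradation scenarios, the standalone battery pencils out; if not, the project depends on other value streams or incentives not modeled here.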