Weather Forecasting for Utility Companies

Weather-sensitive business operations are primarily reactive to short-term (three
to 36 hours) local conditions (city, county, state) due to the unavailability
of appropriate predictive data at this temporal and spatial scale. Typically,
the optimization that is applied to these processes to enable proactive efforts
utilizes either historical weather data as a predictor of trends or the results
of continental-scale weather models. However, neither source of information
is appropriately matched to the temporal or spatial scale of many such operations.

While near-real-time assessment of observations of current weather conditions
may cover the appropriate geographic locality by its very nature, it is only
directly suitable for reactive response. A potentially more valuable alternative
is using cloud-scale numerical weather models operating at higher resolution
in space and time with more detailed physics. These weather models may offer
greater precision and accuracy within a limited geographic region for problems
with short-term weather sensitivity. Forecasts based on these models can be
used to aid competitive advantage or to help improve operational efficiency
and safety.

Potential Business Value
As former U.S. Commerce Secretary William Daley
stated in 1998, “Weather is not just an environmental issue, it is a major economic
factor. At least $1 trillion of our economy is weather sensitive.”[1] A more
recent study reported in the Bulletin of the American Meteorological Society
estimates that one-third of private industry activities representing approximately
$3 trillion annually have some degree of weather and climate risk.[2] According
to the National Oceanic and Atmospheric Administration, during the period from
1980 through 2005, the United States sustained more than $390 billion in overall
inflation-adjusted damages/costs due to extreme weather events (i.e., more than
$1 billion in damage per event).[3]

Since these costs are across a wide range of geographic and time scales, consider
the more local and short-term impact of weather events. For example, according
to the Air Transport Association, air traffic delays caused by weather cost
about $4.2 billion in 2000, of which an estimated $1.3 billion could have been
avoided.[4] The U.S. Department of Transportation estimates that about 7,000
people are killed and 800,000 are injured each year in weather-related accidents
on U.S. highways. The economic impact of these and other weather-related problems
on the roads is estimated to lead to 544 million vehicle-hours of delay and an
economic impact of about $42 billion annually.[5]

Applications to the Industry
Utility companies and energy producers
rely on weather forecasts provided both by government and private meteorologists.
They use this information to determine whether to power up peaking plants (“peakers”), manage their
assets or buy and sell energy on the world market, which is susceptible to hype
and vulnerable to forecasting errors. It has been estimated that the annual
cost of underpredicting or overpredicting electricity demand due to poor weather
forecasts is several hundred million dollars in the United States alone.[6]

For example, a three-degree Fahrenheit difference between the forecasted and
actual temperature for the Tennessee Valley Authority could result in a 1,350-megawatt
difference in demand. On hot days, that demand must be met by the use of older,
more expensive power plants, which, if used unnecessarily, boost supply costs
by $600,000 per day.[7] Conversely, the local cooling after a thunderstorm can
significantly reduce demand on a hot, humid day, but a utility usually provides
excess electricity based on the conditions before the storm because the forecasts
that they use lack the precision to operate more proactively. Predictions of
precipitation are vital to determining the amount of water available to operate
hydroelectric facilities. Similarly, precise local wind forecasts are critical
to predicting potential power that could be generated by a wind farm and providing
information to determine how equipment should be configured from day to day.
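The TVA figures above imply a roughly linear sensitivity of demand to temperature forecast error. As a rough sketch (the per-degree constant is inferred from the cited three-degree/1,350-megawatt figures, not an official TVA model):

```python
# Back-of-the-envelope sketch of the demand sensitivity described above.
# The linear scaling (~450 MW per degree Fahrenheit) is an assumption
# derived from the cited TVA figures, not an actual utility model.

MW_PER_DEG_F = 1350 / 3.0          # ~450 MW of demand per degree of forecast error
PEAKER_COST_PER_DAY = 600_000      # extra supply cost when peakers run unnecessarily

def demand_error_mw(forecast_f: float, actual_f: float) -> float:
    """Estimated demand misprediction (MW) from a temperature forecast error."""
    return abs(forecast_f - actual_f) * MW_PER_DEG_F

print(demand_error_mw(92.0, 95.0))  # 3-degree error -> 1350.0 MW
```

On a hot day, an error of this size is what forces the dispatch of the older, more expensive plants mentioned above.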

Applications also exist on the distribution side, where forecasts of local
severe weather are important for outage and asset management. These include
preparing for the impact of storm surge, winds and rain on oil-drilling rigs
in the Gulf of Mexico from hurricanes; predicting and repairing transmission
facilities that fail from high demand and high temperatures; and scheduling
crews to repair downed power lines due to high winds or accumulation of snow
and ice.

In addition, there is an emerging industry for weather derivatives (as hedges
against weather-related financial risk), which has grown from nothing in 1997
to tens of billions of U.S. dollars today. Initially, this market was for energy-related
commodities but has expanded to other markets like agriculture and retail. While
it focuses primarily on the seasonal scale, it may evolve to include the dynamics
of the short-term market as the local impact of energy commodities grows.

To evaluate the potential benefits of improved weather predictions for the
energy and other industrial sectors, the IBM Thomas J. Watson Research Center
initiated a project to understand the applications of local high-resolution
short-term forecasting. This effort has been dubbed Deep Thunder.

What Is Deep Thunder?
Deep Thunder produces forecasts that provide detailed
four-dimensional information about temperature, winds, precipitation, etc.,
from the surface of the earth to an altitude of about 15 km. This meteorological
modeling effort is unique in the industry and the academic community. This forecasting
capability is designed to be complementary to that of the National Weather Service
(NWS). In fact, Deep Thunder would not be possible without leveraging the investment
by NWS in making data, both observations and models, available. The idea, however,
is to have highly focused modeling by geography with a greater level of precision
and detail while addressing the needs of specific industries, both of which
are outside the mission of NWS.

While one goal of this effort is to improve the technology, another is to understand
the business, safety and other value that such modeling can provide. In reality,
improving the effectiveness of weather-sensitive operations is not really about
the weather. Rather, it is a problem of optimizing business processes such as
resource allocation, scheduling and routing, which are constrained by specific
weather events. Hence, the value of these predictions will be maximized when
they are integrated into the business processes. Having detailed forecasts of
the right caliber is a critical prerequisite to enable optimization of weather-sensitive
business processes.

This effort began with building a capability sufficient for operational use.
In particular, the goal is to provide weather forecasts at a level of precision
and speed to be able to address specific business problems. Hence, the focus
has been on high-performance computing, visualization and automation while designing,
evaluating and optimizing an integrated system that includes receiving and processing
data, modeling and postprocessing analysis and dissemination.

Part of the rationale for this focus is practicality. Given the time-critical nature
of weather-sensitive business decisions, if the weather prediction cannot be
completed quickly enough, then it has no value. Such predictive simulations
need to be completed at least an order of magnitude faster than real time. But
rapid computation is insufficient if the results cannot be easily and quickly
utilized. Thus, a variety of fixed and highly interactive flexible visualizations
focused on the applications have been implemented to enable timely use and assessment
of the model forecasts.
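The "order of magnitude faster than real time" requirement translates into a simple wall-clock budget, sketched below (the function name and the factor of 10 as a minimum speedup are taken directly from the text):

```python
# Sketch: a forecast is only useful if it finishes well before the weather
# it predicts. "An order of magnitude faster than real time" means simulating
# N hours of weather in at most N/10 hours of wall-clock time.

def max_runtime_hours(forecast_horizon_hours: float, speedup: float = 10.0) -> float:
    """Longest acceptable wall-clock time for a usable predictive run."""
    return forecast_horizon_hours / speedup

print(max_runtime_hours(24))  # a 24-hour forecast must complete in <= 2.4 hours
```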

The initial focus of Deep Thunder was on general forecasting. As the technology
improved and became more practical, other applications were considered. These
include travel, aviation, agriculture, broadcast, communications, energy, insurance,
sports, entertainment, tourism, construction and other industries where weather
is an important factor in making effective business decisions. Essentially,
a further goal of Deep Thunder is to enable proactive decision making affected
by weather, by coupling predictive weather simulations with business processes,
analyses and models. Currently, Deep Thunder forecasts are being produced on
a regular basis for seven major metropolitan areas in the United States at 1-km
to 2-km resolution.

A Mighty Wind
On the distribution side, highly localized weather model forecast data can be
applied to operational decision making in the maintenance, repair and utilization
of the transmission network. Deep Thunder can provide sufficient precision to
enable utilities to plan for power usage, outages and emergency maintenance.
Consider, as an example, the severe wind storm that affected the New York City
metropolitan area on the morning of January 18, 2006. This was caused by a strong
cold front and heavy rain with wind gusts over 60 mph, which led to innumerable
downed trees and power lines. As a result, electricity service was disrupted
to more than 250,000 residences and businesses in the New York, Connecticut
and New Jersey suburbs. In some areas, it took nearly a week for power to be
restored. In addition, there was widespread disruption of transportation systems
(e.g., road and bridge closures, airport delays) and some flooding.

Figures 1 and 2 show just a few aspects of a Deep Thunder forecast for this event. Figure
1 shows a map of predicted sustained wind speeds at 7:00 a.m. on January 18,
2006 for Westchester County, New York, which was severely affected by the storm.
Figure 2 shows both surface and upper-air wind speed and direction as well as
other weather variables that illustrate the onset of the storm at White Plains
Airport in the southeastern part of the county near the Connecticut border over
a 24-hour period.

This operational forecast was available on an internal-to-IBM website before
noon on January 17, 2006, more than 15 hours before the impact of the event
was first observed. Imagine what local utilities and government agencies would
have been able to do if they had access to this detailed and correct prediction
as opposed to other forecasts which did not provide such information. At the
very least, the ability to stage resources in anticipation of this event may
have reduced the lengthy outages that many in the area experienced.

Deep Thunder can also improve generation-side load forecasting by providing
high-resolution weather forecast data for use in electricity-demand forecast
models. Integrating leading-edge data and analytics technology into the operational
decision-making infrastructure of the utilities industry enables a proactive
rather than reactive approach for weather-sensitive business processes. This
idea is illustrated in Figure 3, which shows a screen capture of a prototype
interactive application integrating a Deep Thunder weather forecast with a simple
load-prediction model.

Figure 3 shows a map of Georgia with forecasted heat indices at 8 km resolution.
Major cities and locations of the generators owned and operated by Georgia Power,
the local electric utility, are shown by name. Each power plant location is
also marked with a pin, whose height and color indicate a predicted electricity
demand. A dual encoding is used because the capacities of the power plants range
over five orders of magnitude. Hence, height is a linear mapping while color
bands are scaled logarithmically. The user has the ability to select the type
of power plant (fossil, hydroelectric and/or nuclear), the ability to select
what data to show on the map (e.g., weather, geographic or other customer/demographic)
and the ability to query individual power plants (i.e., by visual selection).
The results of the query include the predicted load at each time step (every
10 minutes) as well as a plot of predicted load over 24 hours with weather data
at that location.
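The dual encoding described above can be sketched as follows. The mapping functions, maximum values and one-band-per-decade scheme are illustrative assumptions, not the actual constants used in the prototype:

```python
import math

# Sketch of the dual encoding: pin height scales linearly with predicted load,
# while color bands are spaced logarithmically so plants whose capacities span
# five orders of magnitude remain visually distinguishable.

MAX_LOAD_MW = 10_000.0   # assumed largest predicted load in the region
MAX_HEIGHT = 100.0       # assumed tallest pin, in arbitrary screen units

def pin_height(load_mw: float) -> float:
    """Linear mapping: pin height proportional to predicted load."""
    return MAX_HEIGHT * load_mw / MAX_LOAD_MW

def color_band(load_mw: float, min_mw: float = 0.1) -> int:
    """Logarithmic banding: one color band per decade of load above the floor."""
    return int(math.floor(math.log10(max(load_mw, min_mw) / min_mw)))

print(pin_height(2500))   # 25.0
print(color_band(2500))   # 4 (2500 MW is four decades above the 0.1 MW floor)
```

Without the logarithmic color scale, a small hydroelectric plant and a large fossil plant would be indistinguishable by height alone on the map.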

Deep Thunder can be a powerful tool for the energy and utility
industry for use in short-term weather forecasting where precision and speed
are critical factors in making effective decisions. Deep Thunder can help companies
avoid being forced to react to weather events. It can enable their weather-sensitive
operations to be proactive, which will have the potential to aid in competitive
advantage and/or help improve efficiency and safety. For more information about
Deep Thunder, visit


  1. Congressional testimony.
  2. Dutton, John A. Opportunities and Priorities in a New Era for Weather and
    Climate Services. Bulletin of the American Meteorological Society. 83, No.
9, 2002, pp. 1303-1311. doi:10.1175/1520-0477(2002)083<1303:OAPIAN>2.3.CO;2
  3. Billion Dollar U.S. Weather Disasters, NOAA/NCDC, 2005,
  4. Air Transport Association. State of the U.S. Airline Industry: A Report
    on Recent Trends for U.S. Carriers. Washington, D.C., 2002.
  5. Lombardo, Louis. Overview of U.S. Crashes & Environment. OFCM WIST II Forum,
    4-6 December 2000. day2/5_panel4a/2_lombardo.ppt
  6. Economic Statistics for NOAA, May 2005,
  7. Patrick Walshe. Role and Impact of Weather at the Tennessee Valley Authority.
    Fourth Annual User’s Forum, 86th Annual Meeting of the American Meteorological
    Society, 29 January–2 February 2006, Atlanta, Georgia.


Preparing for Automated Metering

Over the past year, technology advances and high energy prices have stimulated
interest in advanced metering infrastructure (AMI). Discussions are wide-ranging.
Some focus globally: Australian and Canadian “smart metering”; new European
Union requirements; the results of time-of-use pilots that arose in the wake
of California’s deregulation debacle. Some discussions emphasize AMI benefits
– new products, “instant” outage detection, better load management. Others plunge
immediately into interval billing’s fine points: contracting, hedging and settlements.
Focusing utilities’ attention is the federal mandate that each state examine
the possible expansion of advanced metering within its borders.

Cost Focus

Midsize and smaller industrial and commercial firms are generally eager to
explore AMI benefits. Still, what this discussion lacks is a clear focus on
their primary concern: total energy cost. And despite relatively low U.S. energy
prices, when compared with prices worldwide, there is good reason for businesses’
cost concerns.

U.S. industrial firms spend an annual $53.6 billion on electricity[1] and another
$47 billion on natural gas.[2] Energy costs vary widely among specific industries
but typically comprise between 2 and 20 percent of the value of goods shipped.[3]
In some industries, attempts to reduce energy intensity have been successful.
But not in all. The Industrial Energy Consumers of America, for instance, cite
energy as a major contributing factor to the loss of 2.8 million U.S. manufacturing
jobs since 2000.[4]

While costs per commercial company are generally lower, energy costs loom large
for the sector as a whole. U.S. companies, for instance, spend in excess of
$100 billion on electricity[5] and almost $30 billion on natural gas.[6]

Possibly even more significant is price creep. Over the five years from 1999
to 2004, industrial electricity prices per megawatt hour rose 19 percent, while
commercial prices rose 12 percent.[7] And that’s before exceptional price escalations
of 2005-2006 – especially for natural gas – initiated today’s energy-cost outcry.

Competition and AMI

In the 1990s, continuing concern about energy costs and competitiveness led
many commercial and industrial (C&I) energy users to support competitive energy
retail markets they believed could reduce prices. And even though competition
remains limited in most states, there have been positive effects:

  • A recent report[8] credits competition for reducing the inflation-adjusted
    energy bills of most New York businesses, where competitive energy suppliers
    now service more than half of the commercial electricity load and more than
    75 percent of the industrial load;
  • A 2005 report from the Associated Industries of Massachusetts (AIM)
    Foundation, for instance, credited competition, in combination with restructuring-related
    rate caps, for a seven-year total savings of $2 billion and projected ongoing
    annual savings of $350 million[9]; and
  • While national averages can be no more than suggestive, given the
    difference in populations served, the U.S. Department of Energy shows energy-only
    suppliers as pricing commercial electricity at 6.58 cents per kilowatt hour,
    as opposed to full-service providers’ 7.91 cents.[10]

Few C&Is are as yet voicing such high hopes for advanced metering. In fact,
to some, AMI may look like a solution in search of a problem. Metering does
not address the high price of heating with natural gas, oil or propane. Time-shifting
of electricity use might work for businesses running multiple but changeable
shifts but it means nothing in the context of a retail store whose customers
are unlikely to shift their shopping to the pre-dawn hours. And when oligopolistic
supply markets result in only the smallest of price spreads among suppliers,
even large businesses with dedicated energy managers cannot use hourly price
changes to significantly reduce costs.

In other words, AMI is not an automatic positive for the C&I customers often
seen as its primary supporters.

To improve the atmosphere for the discussion of possible metering changes,
utilities should examine the extent to which existing programs meet C&I customer
needs. Do existing utility programs maximize the customer benefits available
from whatever deregulation currently exists? Do customers understand their choices
among existing utility programs? Have utilities extended business programs to
all who might benefit, or are “special deals” available only to the largest
and most sophisticated industrials? Might existing or new services address the
unmet needs of a broader C&I audience? Answering those questions requires some
careful analysis and communication.

Step One: Analyze C&I Program Costs and Profits

AMI can, of course, be a win/win
proposition for utility and customer alike. But it is more likely to be accepted
as such if C&I customers already work with the utility in an atmosphere in which
their individual concerns are addressed.

Utilities have a long history, of course, in addressing the concerns of the
largest commercial and industrial customers. They frequently play a role in
regional economic development strategies that attempt to keep large local employers
in place and attract new ones. Discounts or special rates for large energy users
are common.

But the number of customers addressed with these special programs has, per
utility, typically been small. And utilities generally handle them as exceptions
to routine IT processes. To facilitate this “exception handling,” more than
half of all utilities – and more than 60 percent of electric utilities – use
key account programs that manage highly individualized billing approaches.

The key account approach is staff-intensive. That means high costs. Salary
and support for an account representative servicing a utility’s 30 largest customers
may be cost-effective when spread across revenues generated by those customers.
It becomes less and less so, however, when smaller C&Is are handled in the same
way.

How small do customers have to be before it becomes cost-ineffective to handle
them as key accounts? To answer that question, utilities need extensive analysis
of the profitability of individual C&I customers, profitability by type of service
and profitability by group. They need to identify usage patterns and chart consumption.

Step Two: View Customers’ Business Drivers

The data analysis in step one is far more useful in the context of customers’
business drivers and trends in their markets. Also important is to understand
changes in customer needs through time. As C&Is’ energy sophistication increases,
the utility/customer conversation is likely to become deeper and more complex.
Among the parameters likely to change are:

  • Cost determinations. C&Is may initially be driven by cost per kilowatt
    hour. As they become more sophisticated, however, they will likely want to
    evaluate energy and related services as a single unit. They will also want
    to evaluate costs in terms of their own output – energy cost per car or per
    billable hour.
  • Risk assessments. Many C&Is are risk-averse. They are willing to
    accept the costs of risk reduction. That is particularly evident in the interest
    in switching – where possible – to competitive suppliers.

    • Switching initially appeared risky to many C&Is. Many new suppliers
      offered contracts that might or might not prove less costly than their
      utilities’ supply. And the penalties some utilities placed on switching
      back, plus the uncertainties of dealing with the new competitive entities,
      added to that perception of risk.
    • Over time, acceptance of retail, energy-only suppliers is growing,
      albeit slowly. Of the 16.6 million commercial customers in 2004, only
      about 445,500 (2.7 percent, representing 11 percent of total commercial
      load) were being served by competitive suppliers. Similarly, only 13,800
      industrials (1.8 percent of the 747,600 U.S. industrials in 2004, for
      about 9 percent of total industrial load) used competitive suppliers.
    • Increasingly, C&Is will want to explore competitive supply. But that
      does not have to mean the loss of a customer. In fact, it can be a spur
      to the utility’s ability to sell new services. Utilities may offer market-rate
      contracts, for instance, in conjunction with hedging strategies tailored
      to the individual customer’s needs.
  • Interest in demand reduction. C&Is avoid participation in demand-side
    management programs when alternatives are too complex for facilities managers
    to readily evaluate. Utilities that need to expand demand response or load-shifting
    strategies with C&I customers may want to ease the burden by, for instance,
    providing basic demand-and-load analysis without charge or by offering free
    consultations aimed at demand reduction.
  • Investment in tools to manage the energy profile. Companies attuned
    to the energy marketplace are more likely to seek facility-manager training,
    decision-support software, on-site or backup generation and outsourcing strategies.
    Ultimately, large users may find they need to analyze their own consumption
    data and develop “what if” scenarios that help them manage their businesses.

Step Three: Lessons From Europe

Predicting C&I enthusiasm for various program options is never easy. To improve
success, it is helpful to look at a market where utilities have already been
forced to respond more fully to C&I demands: the European Union (EU).

Currently, all EU industrial customers and some commercial customers may choose
among suppliers. It’s part of the overall market liberalization taking place
as the EU matures. Many C&Is have already exercised their right to switch. Many
more have renegotiated their utility contracts.

Typically, C&Is have represented the greater share of European utilities’ revenues
– 70 percent is not uncommon. In addition, service to C&Is has generally been
a higher-margin business for them than has mass-market retail. As a consequence,
European utilities have approached renegotiation carefully. Their aim has been to:

  • Optimize portfolios by balancing customers with sourcing. To do so,
    they have been forced to evaluate not only the profitability of individual
    customers to the utility but also the market drivers affecting that profitability.
  • Structure contracts that best suit their purchasing capability. European
    utilities commonly offer incentives for long-term contracts, which offer their
    traders greater leverage in wholesale markets. And they have increased their
    monitoring of trends in their customers’ businesses so as to more accurately
    predict their demand.
  • Segment customers. This helps differentiate the needs of groups and
    suggests the parameters of tailored contracts. Not every group wants quoting
    or risk management services. But some do.
  • Identify commonalities among customers. It is important, for instance,
    to identify factors that result in losses. While European utilities generally
    retain an obligation to serve, they are not forced to do so at a loss. And
    they have every right to develop strategies that turn problem customers into
    profitable ones.
  • Understand customers’ views toward costs and services. Research by Datamonitor
    in the German market,[11] for instance, shows C&I customers have responded
    more favorably to complete solutions than to a price-focused, commodity-only
    view of the utility-client relationship.

Step Four: Address Billing

A number of utilities have as yet failed to provide C&I customers with relatively
simple billing options that can help them analyze their own consumption and
reduce their own costs. Among these options are:

  • Consolidated billing. Companies with multiple sites may want one energy
    bill sent to a central financial office. They may also want copies sent to
    the sites. Or they may want sites to receive only their own consumption statistics.
    And that’s just the start. Some companies want each site to compare its consumption
    against company averages. Or a benchmark of similar businesses in a state.
    Some companies want tables. Some want graphs. The list goes on.
  • Convergent billing, or billing for multiple services on a single bill.
    Here again, requests for different formats abound.
  • Electronic bill presentment and payment. This lets businesses review and
    pay bills online. Large businesses may want raw data on CD so they can run
    their own comparisons.

Step Five: Outreach

As energy prices rise, an increasing number of companies find value in investing
in energy savings and energy “insurance” tools like price stabilization. Utilities
are likely to find increasing interest in such services as:

  • Web-based energy audits tailored to types of installations – offices,
    factories, distribution centers. The Web and email are also valuable tools
    for ongoing help – from the “tip of the week” to fairly technical comparisons
    of office-cooling strategies.
  • Special telephone numbers that connect to business-savvy service representatives.
    Staff in these specialized call centers generally require ready access to
    online help, including scripts and business process assistants that help with
    tough questions.
  • Service and price guarantees tailored to the type of business. A
    retailer may need a guarantee during the holiday shopping season; a farmer
    may need one for the summer irrigation season.
  • On-site energy audits. Possibly for a fee or for some percent of
    the savings generated.

  • Cost management programs. A utility may, for instance, offer to share
    the savings from changes it would undertake to lighting or cooling.
  • Energy-quality guarantees. These can be vital to computer-based businesses.
    And paying for power quality guarantees from the utility can be a welcome
    alternative when it reduces the need for backup power sources.
  • Incentives for backup power supplies. Many utilities have demand-response
    programs that offer incentives to businesses that agree to reduce or cut power
    use during times of tight supply or a distribution emergency. But not all.
    And not all businesses believe the incentives are adequate. Exploring ways
    to expand existing programs may generate new ideas. They may also convince
    skeptical businesses that current programs are justified and deserve support.
  • Net metering. Do all businesses with backup power supplies have the
    opportunity to generate energy for the grid when utility supply runs short?
    Net metering is now a mandatory option in some states, and interest is growing.

Preparing the Foundation for Tomorrow’s Discussions

Over the next decade, many utilities will want to implement at least some elements
of AMI. Support from C&I customers is likely to be essential in winning regulatory
approval.

Utilities will be best able to win that support by linking AMI to C&I concerns
about rising energy costs. But while those links exist, they may not carry the
day if C&Is are skeptical of utilities’ general commitments to meeting their
needs. C&Is are far more likely to support the AMI proposals of utilities that
have previously demonstrated that commitment by putting in place a variety of
cost-cutting services.

For utilities, then, one of the best ways to ensure support for AMI in the
future is to maximize the number and effectiveness of programs to help C&Is
cut costs today.


  1. Electric Power Annual 2004, Energy Information Administration, U.S. Department
    of Energy, epa_sum.html.
  2. and
  3. Bernard A. Gelb, “Industrial Energy Intensiveness and Energy Costs in the
    Context of Climate Change Policy,” a CRS Report for Congress, November 21,
    11.cfm?&CFID=9567575&CFTOKEN=6848150. See also figures from the Energy Information
    Administration, U.S. Department of Energy, at
  5. Electric Power Annual 2004, Energy Information Administration, U.S. Department
    of Energy, epa_sum.html.
  6. and
  7. Electric Power Annual 2004, Energy Information Administration, U.S. Department
    of Energy, epa_sum.html.
  8. New York Public Service Commission Staff Report on the State of Competitive
    Energy Markets: Progress to Date and Future Opportunities.
  9. Electric Industry Restructuring in Massachusetts: Progress in Achieving
    the Goals of the Restructuring Act, Associated Industries of Massachusetts
    Foundation, Inc., Home&TEMPLATE=/CM/ContentDisplay.cfm&CONTENTID=7783
  10. The difference between energy-only and full-service providers for industrial
    electricity is less dramatic – 5.06 cents per kilowatt-hour for energy-only,
    against the full-service providers’ 5.10 cents.
  11. “Competitor Tracking, Customer Acquisition in the German Major Power Users
    Sector,” Datamonitor, Issue 1.

The World’s Hottest Retail Energy Markets

The number of retail energy markets open to competition grows year-on-year
and research carried out by Peace Software and VaasaEmg has provided for the
first time an “apples for apples” comparison of customer switching across competitive
retail energy markets around the world. Great Britain and the state of Victoria
in Australia are revealed to be by far the most active retail energy markets,
at times exceeding the rate of 20 percent customer switching per year.

Customer switch rates in more than 30 competitive retail energy markets have
been monitored on an ongoing basis by the Peace Software and VaasaEmg Utility
Customer Switching Research Project team. Peace Software is a developer of utility
customer information software for regulated utilities and competitive energy
retailers and VaasaEmg is a university-based research center that specializes
in electricity, gas and related utilities marketing to end customers.

Customer switch rates are an important metric of retail energy market competitiveness
and have the advantage of being objective, measurable and comparable between
markets. Eric Cody, retail energy markets consultant and former vice president
at National Grid, said: “Regulators will find this comparative customer switch
rate information essential for benchmarking the success of their own retail
competition initiatives, and energy retailers can apply the insights to their
customer acquisition and retention strategies.”

The research project’s customer switch rate metric is calculated by dividing
the number of customers that switched suppliers in a given period by the total
number of customers in the market, and the result is then converted to an annual
rate. For example, if 1 percent of customers switch suppliers in a given month,
that month has a 12 percent annualized customer switch rate. This approach has
substantial advantages over commonly reported switch rates that measure the
cumulative market share of regulated utility providers versus competitive providers.
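The annualization arithmetic described above can be sketched in a few lines of Python. The function name and the figures in the example are illustrative, not part of the research project's published methodology.

```python
def annualized_switch_rate(switchers: int, total_customers: int,
                           period_months: int = 1) -> float:
    """Annualized customer switch rate, in percent.

    Divide the number of customers who switched supplier in the
    period by the total number of customers in the market, then
    scale the result to a 12-month rate.
    """
    period_rate = switchers / total_customers
    return period_rate * (12 / period_months) * 100

# The article's example: 1 percent of customers switching in a month
# is a 12 percent annualized switch rate.
print(round(annualized_switch_rate(1_000, 100_000), 1))  # 12.0
```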

The comparative switch rate research has enabled the classification of markets into
four categories: Hot, Active, Slow and Dormant. Hot markets demonstrate annualized
switch rates of 15 percent or higher; Active markets at least 5 percent; Slow
markets between 1 and 5 percent; and Dormant markets less than 1 percent switching
per year. Figure 1 compares customer switching trends in a selection of markets
across these categories.
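The four categories amount to simple threshold rules, which can be sketched as follows; the function name is illustrative, and the example rates are taken from markets discussed in the article.

```python
def classify_market(annual_switch_rate_pct: float) -> str:
    """Map an annualized switch rate (percent per year) onto the
    research project's four market categories."""
    if annual_switch_rate_pct >= 15:
        return "Hot"
    if annual_switch_rate_pct >= 5:
        return "Active"
    if annual_switch_rate_pct >= 1:
        return "Slow"
    return "Dormant"

# Rates drawn from the article: Victoria peaked above 20 percent,
# Texas sat around 7 percent, Germany below 1 percent.
print(classify_market(20))   # Hot
print(classify_market(7))    # Active
print(classify_market(0.5))  # Dormant
```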

Hot Markets
Great Britain has consistently been at the forefront of
utility customer switching activity since full market opening in 1999. Rising
energy retail prices in recent years motivated British utility customers to
switch supplier and led the incumbent utility-affiliated suppliers to ramp up
customer win-back campaigns. Price hikes have especially impacted British Gas,
which reputedly lost approximately 800,000 gas accounts between August 2004
and August 2005. The principal market share beneficiaries at this time are thought
to have been Scottish Power and Scottish and Southern Energy. It is believed
that Scottish Power achieved a net gain of around 1 million energy customers
between January 2004 and August 2005.

Meanwhile, down under in Australia, the state of Victoria has fast become a
hot spot of energy retail competition. Victoria introduced full retail competition
for electricity and gas in 2002 and it has exhibited increased customer switching
year-on-year, peaking at over 20 percent in 2005. Strong competition from out-of-state
incumbents and new start-up energy retailers has contributed to this dramatic
level of switch activity, along with the introduction of lifestyle products
cleverly targeted at niche customer segments.

Active Markets
Active markets include Flanders, the Netherlands, New
South Wales, New Zealand, South Australia, Sweden, Norway and Texas.

In Belgium, only the Flanders region is open to full electricity and gas retail
competition. The other Belgian regions of Wallonia and Brussels are introducing
full retail competition starting July 2007. The rate of customer switching in
Flanders slowed to around 5 percent in 2005 after hitting peaks of over 10 percent
in 2004.

The Netherlands introduced full retail competition for both electricity and
gas in July 2004 and today it is one of the most active European retail energy
markets. In the initial months after full market opening, most customer switching
activity related to electricity rather than gas.

New South Wales in Australia has exhibited a steady increase in customer switching
levels since full market opening in 2002. Customer switch rates in 2005 hovered
just above 5 percent, much lower than in its neighboring states, but clearly active.

New Zealand has the longest history of full energy retail competition of any country,
dating back to 1994. As is often characteristic of a mature energy retail market,
New Zealand experienced extremely high peaks of customer switching early on
– around 30 percent per year in mid-2001 – before easing and stabilizing in
later years. In 2005, New Zealand exhibited customer switching around the 8
percent level.

South Australia opened its doors to full retail electricity competition in
2003 and customer switch rates quickly soared. Principal reasons behind this
rapid acceleration include the divestment of the retail customer base by the
state government that removed the incumbent brand advantage, the granting of
switching credits to a portion of the customer base and rising retail prices
that motivated customers to shop around. Customer switching in South Australia
eased in 2005 to an estimated 11 percent rate.

Customer switch rates in Sweden have increased year-on-year since full market
opening in 1999, reaching 10 percent in 2004 before falling back to 6 to 7 percent
in 2005. A winter 2005 survey published by market research agency TEMO highlighted
that a cumulative 32 percent of Swedish energy consumers have switched supplier
at least once.

Norway was one of the most active energy retail markets in the world in 2003
with customer switching around the 20 percent level, following a temporary but
massive hike in wholesale prices and extensive utility marketing activity. Customer
switching levels have since stabilized at the 7 to 10 percent level.

The Texas electricity market opened to full retail competition in January 2002
and is widely considered the most competitive North American retail energy market.
It stands alone in U.S. markets for having separated its utility retail operations
from distribution, a market structure that has more in common with competitive
retail markets in Australia and Europe than with other U.S. states, most of
which employ a hybrid coexistence of regulated and competitive utility operations.
In 2005, Texas exhibited customer switching around the 7 percent level. The
Texas market is notable for the sheer number of participants, with over 40 energy
retailers actively competing for customers.

Slow Markets
In the Slow category for 2005 are the markets of Finland,
Denmark and Spain, with switching levels of less than 5 percent. Switching in
Finland has historically been inhibited by low customer awareness, and a sheer
lack of aggressive acquisition marketing. Denmark suffers from small savings
potential, and in Spain the incumbent utilities remain the dominant force with
little incentive for customers to switch.

Dormant Markets
Dormant markets are those in which all customers are
able to choose their retail energy supplier, but which do not exhibit significant
levels of customer switching. More than half of all markets monitored by the
research project remain Dormant, with switching levels below 1 percent per year.
This includes a number of European markets, such as Germany, which lack a consistent
method for switching and a centralized market registry infrastructure.

Almost all North American markets are classified as Dormant, including New
York, Pennsylvania, Massachusetts and Ohio. Their market structures inhibit
healthy competition through the continued role of the regulated utility as “last
resort” supplier and issuer of the customer bill within their respective distribution
territories.

The research project provides a consistent and objective basis for benchmarking
the competitiveness of retail energy markets around the world. Leading markets
have sustained active levels of competition for many years and this should be
viewed as proof that retail energy competition can thrive in new markets provided
they are appropriately structured.

Fusion of Work Management and Supply Chain Management

Asset life cycle management is an asset management strategy designed to manage the
cradle-to-grave life cycle of key assets, considering all classes, stakeholders
and life cycle stages, in order to reap the maximum value throughout the lifetime
of the assets. Developing and employing an active asset life cycle management
strategy is a crucial step in achieving operational excellence for asset-intensive
industries. This emphasis is echoed by leading analyst groups such as ARC Advisory
Group: “Optimizing asset utilization can cut costs and increase profitability
in tough economic times.”[1]

Effective asset management requires full consideration of managing assets in
the context of the life cycle of the asset: plan, design, build, operate, maintain
and dispose. Throughout the life cycle, critical information is captured regarding
the asset, its characteristics, its costs, its use, its health, its relationship
to other assets and the strategies to employ in conjunction with it (see Figure 1).

The ability to drive value from an asset management strategy requires a consistent
model and intricate coordination between work management, supply chain management
and a common asset information model. In light of the highly collaborative nature
of the processes involved within the asset life cycle, these three areas are
tightly linked, as data flows are highly interdependent and interactive among
the three. So, the marriage of the work management and supply chain management
information and work flows is central to managing the holistic asset life cycle
process (see Figure 2).

Looking at Supply Chain

Given the essential role that the supply chain performs in effective asset
management, it is not surprising that Gartner’s most recent Magic Quadrant report
defines criteria for quality asset management as laden with numerous supply
chain requirements. In fact, more than half of the requirements identified are
clearly related to supply chain functionality, including:

  • Detailed asset registry, combined with detailed parts and support descriptions;
  • Complex inventory relationships for indirect goods (blue-collar maintenance,
    repair and operations) that are associated with forecasts of planned and unplanned
    work on installed assets;
  • Supply chain capability for indirect goods, with demand planning linked
    to maintenance and repair schedules;
  • Probability-based “just in case” rather than “just in time” inventory and
    procurement;

  • Supporting complex logistics for inbound material to remote locations;
  • Serial number tracking and tracing for equipment and parts;
  • Financial support via detailed cost analysis; and
  • Extensive warranty tracking to component levels.[2]

It logically follows that asset life cycle management, with conjoined work
management and supply chain management systems, is a key capability for industries
where asset uptime is a critical factor in keeping the business running. The
organization must capture data for modeling and forecasting repairs, replacements,
failure rates and critical components, while operating in an environment of
unknown timing and unanticipated conditions – all without losing sight of the
goal of minimizing costs. Consequently, the entire life cycle of the physical
asset requires continuous capture, exchange and analysis of both supply chain
and work management information throughout each life cycle stage and the associated
iterations during the asset’s useful life.

When analyzing work management routines in most businesses, it is clear that most
work (and hence asset life cycle management) requires a part, as well as manpower.
The financial impact of this is highlighted when you realize that in the maintenance,
repair and operations environment, spare parts generally constitute about 80
percent of the purchasing department’s transaction volume. The connection is
further emphasized by the fact that a primary reason for delay in completion
of work assignments is the lack of appropriate parts.

Therefore, understanding that inventory can be the single largest budget item
within a maintenance, repair and operations environment, industry leaders recognize
that maintaining appropriate and optimum inventory levels is imperative. Consequently,
well-developed planning and maintenance techniques such as condition-based maintenance,
predictive maintenance, reliability-centered maintenance, critical parts analysis
and service networks are now being called upon to play important roles in better
managing this asset-intensive environment. Each is dependent on the fusion of
information from the supply chain and work management. Without the interaction
of information from these two systems, management must base their maintenance
decisions on little more than guesswork and speculation.

“Clumsy parts and labor management raises total cost of asset ownership. Power
turbine users spend up to $3 on maintenance costs for every dollar invested
up front. The culprits? Poor MRO [maintenance, repair and operations] inventory
management practices like duplicate parts ordering – and inefficient workforce
management that results in dispatching the wrong technicians with inadequate

“Operating assets are the backbone of firms’ business functions. Yet
firms’ reliance on ad hoc processes and siloed apps prevent them from extracting
the most value out of their assets.”[3]
– Navi Radjou with Laurie M. Orlov and Liz Herbert; Forrester

Keeping work management, inventory and purchasing seamlessly connected provides
a smooth flow from work activities to material requests to job fulfillment –
and throughout the maintenance arena. A tightly integrated process streamlines
“real world” events such as jobs being rescheduled or canceled, the need for
more materials, or fewer materials actually being used. It gives the full cost picture
of the work to the work order planner, along with real-time data on whether
the material has been reserved for the job, and if it is not available, when
it will be received from the supplier or other site. Accomplishing these tasks
or work flow – while functioning outside a fused work management / supply chain
environment – is cumbersome, less functional and ultimately more expensive (see
Figure 3).

Doing business in a world where each element brings its own information model poses
problems to the asset-intensive organization. Since “work leads to demand” in
the maintenance, repair and operations environment, an important link is missing.
The “just in time” supply chain mentality predominates – similar to the traditional
manufacturing or retail paradigms – rather than the “just in case” philosophy
that must also be brought into the equation for consideration within the life
cycle of the asset (for items such as critical spare parts). A model without
the binding between supply chain and work management would be ill-suited
for the maintenance, repair and operations environment because it was designed
and built for a different type of business operation.

ERP or EAM Models

It might appear that supply chain functionality (including purchasing, inventory
and contract management) could be provided either by financial ERP systems (like
Peoplesoft, SAP or Oracle) which specialize in discrete manufacturing activities,
or enterprise asset management (EAM) systems, which are oriented to asset life
cycle management activities. This assumes that supply chain models are interchangeable
– “parts are parts.” And although the similar nomenclature (i.e., both worlds
using the “supply chain management” terminology) might indicate otherwise, there
is a distinct “best fit” for each of these types of systems, depending on the
utilization being targeted.

In the maintenance, repair and operations world, the majority of work is unique
and the materials that are eventually used vary, so it is not as easy to predict
the material usage as it is with production work orders. Parts are issued and
charged against budget in the maintenance, repair and operations environment
based on usage. Although a maintenance work order may have a bill of materials,
the work order is only charged with the parts that were actually used. The rest
of the material quantities are released back to inventory and must subsequently
be accounted for.

Estimating and costing of work plays a significant role in whether a maintenance work order
is approved, and based on estimated lead time for materials and resources or
budget, a job’s schedule may move on the calendar. However, the concepts of
service labor, service parts and contractor management are unknown features
within a production work order. The production work order typically only considers
the inventoried raw materials and production equipment needs – insufficient
information in the maintenance, repair and operations context.

So, from the perspective of the financial system, the assumption would be made
that supply chain management is more of a corporate function and should stay
with the financials, because financial management is generally a corporate function.
In this vein, supply chain management is viewed as fairly synonymous with the
purchasing function. However, as we’ve seen, in an asset-intensive environment,
many of the financial transactions are coming from work management, and supply
chain needs are being driven from maintenance work orders. Thus the operational
accounting requirements of an asset-intensive organization are fully contained
within a well-formed EAM solution, combining both work management and supply
chain management (see Figure 4). The subsequent integration to a financial system
is considerably more straightforward and less intrusive into an organization’s
operational performance.


Asset life cycle management needs to encompass work management as a companion
and collaborative system to the supply chain system. This combination ensures
having the right part, at the right place, at the right time, thereby reducing
quantities on hand, which in turn leads to reduced inventory overhead, reduced
warehouse space and better financial control. Solutions should be selected based
on distinct capabilities that match the environment and the required information
flows for effective decision making.


  1. Optimizing Global Asset Management – ARC Advisory Group, Inc., ARCWire Industry
    News, March 1, 2002.
  2. Magic Quadrant for Enterprise Asset Management, 1H06, Gartner, March 29,
    2006, Kristian Steenstrup, Billy Maynard.
  3. How Firms Can Get Value From Physical Assets, Forrester Brief Series: July
    28, 2003, Navi Radjou with Laurie M. Orlov and Liz Herbert.

The Energy Policy Act of 2005

On August 8, 2005, President George W. Bush signed the Energy Policy Act of
2005 into law, following four long years of congressional debates, negotiations
and revisions. The act is the first major federal energy policy legislation
since the Energy Policy Act of 1992. Since it institutes tax incentives and
regulatory changes that will substantially affect U.S. energy markets over the
next few decades, utilities should consider it carefully when developing and
implementing their business strategies and tactics for 2006 and beyond.

Important Areas for Utilities

The Energy Policy Act of 2005 (the act) includes numerous provisions that affect
a range of industries. This article focuses on the following three key areas
that are most important for utilities:

  • Less-restricted utility ownership;
  • Improved infrastructure and reliability; and
  • Increased diversity of fuel supplies.

These areas reflect the primary U.S. energy policy goals of reducing dependence
on foreign energy sources and increasing the competitiveness, efficiency and
reliability of U.S. energy markets. We think that these are important and worthwhile
goals and that the act represents real progress toward achieving them.

Less-Restricted Utility Ownership

The act repeals the Public Utility Holding Company Act (PUHCA) of 1935 as of
February 2006, reducing restrictions on utility ownership, expanding FERC’s
(Federal Energy Regulatory Commission) merger approval authority and removing
some hurdles to mergers. The act shifts federal merger approval authority from
the SEC to FERC and no longer requires utilities considering a merger to have
a network interconnection, so the geographic diversity of utility holdings can
be greatly increased. Related to the shift in regulatory jurisdiction, utilities
will now be required to open their books to both FERC and state utility commissions
during the merger review process.

PUHCA repeal will likely result in increased utility merger and acquisition
activity, and nationwide utility holding companies are likely to emerge over
the next few years. The act also lifts restrictions that had made it difficult
for private equity, foreign and nonutility investors to own utilities. However,
these potential investors will continue to face significant state regulatory
hurdles, and the state regulatory approval process itself will remain onerous.
In fact, state regulatory approval may become more complicated with the increased
powers granted to state regulators. And the shift to FERC approval authority
for mergers, while likely to be a positive development for acquiring utilities,
introduces some new uncertainty into the merger process.

Utility merger and acquisition activity is also likely to increase as a result
of other provisions in the act. Meeting the standards for transmission reliability
and developing new nuclear plants will require substantial capital and deep
expertise, driving utilities to consolidate into larger corporations that have
access to the necessary financial and human capital.

Improved Infrastructure and Reliability

The act establishes mandatory electricity transmission reliability standards,
provides incentives for electric grid and gas distribution system improvements,
increases open transmission access and calls for state consideration of demand-response
programs and time-differentiated rate structures.

Transmission Reliability Standards and Monitoring

The act calls for common nationwide standards for system reliability to which
utilities must conform and it establishes a national authority – the Electric
Reliability Organization (ERO) – to provide unprecedented monitoring of transmission
grid status. To help the industry meet these reliability standards, the act
provides incentives to attract new transmission infrastructure investment. These incentives:

  • Allow for increased rates of return on equity for interstate transmission;
  • Allow accelerated depreciation of qualified transmission facilities;
  • Permit recovery of costs incurred to comply with new reliability standards; and
  • Allow deferred gains on the sale of transmission assets through 2007 to
    FERC-approved transmission organizations.

FERC also gains the authority to site new transmission facilities in transmission
corridors of national interest if state authorities cannot or will not take
action. Although this will not entirely eliminate the often-difficult struggles
required to overcome public and municipal resistance to new transmission lines,
we expect that this new authority will lead to needed investments in the country’s
most supply-constrained regions.

All of these provisions substantially increase the already considerable control
of the transmission grid by the federal government. The creation of the ERO
is likely to result in increased penalties for violations of reliability standards,
increasing the incentives for major new transmission investments where the transmission
grid is constrained. But the new reliability standards will be met through more
than simply investments in new transmission lines. We also expect increased
investments in what we call the intelligent network: the automated metering,
monitoring and switching hardware and software needed to extract more capacity
from existing lines and rights of way.

Real-time Pricing, Smart Metering and Net Metering

The act calls for state regulators to consider adopting the following:

  • Interconnection service – Make interconnection to distributed generation
    facilities a standard service available to all customers;
  • Smart meters – Make time-based meters widely available to allow for time-of-use
    pricing;
  • Time-of-use rates – Offer all customer classes a time-based rate schedule; and
  • Net metering – Provide customers with the option of netting the electricity
    they provide to the grid from on-site distributed generation facilities against
    the electricity they consume from the grid.
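The net metering option described in that last standard amounts to simple arithmetic. The sketch below assumes a tariff that floors the billable amount at zero; whether a surplus is paid out or banked varies by jurisdiction, and the function name is illustrative.

```python
def net_metered_consumption(consumed_kwh: float, exported_kwh: float) -> float:
    """Billable consumption under net metering: electricity fed back to
    the grid from on-site generation is netted against electricity drawn
    from the grid. Assumes the bill is floored at zero; treatment of a
    surplus (payout, banking) varies by tariff."""
    return max(consumed_kwh - exported_kwh, 0.0)

# A customer drawing 900 kWh and exporting 250 kWh from on-site
# generation is billed for 650 kWh.
print(net_metered_consumption(900.0, 250.0))  # 650.0
```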

State regulators must begin considering these standards within the next one
to two years and decide on whether to adopt them within two to three years.
These provisions will further encourage state regulators that are already pushing
for smart metering and time-of-use pricing (e.g., California) and will quickly
begin to drive other states in this direction, even if they have not yet been
active in these areas. Within a few years, we expect that regulators nationwide
will regard time-of-use rates and the supporting infrastructure as utility service
obligations rather than optionally available programs.

Natural Gas Storage and Transmission Incentives

The act provides that the FERC allow natural gas companies to charge market-based
rates for new gas storage capacity, even in cases where the company is perceived
to have market power. It also allows for accelerated depreciation of new natural
gas distribution lines and provides relief from royalties on gas production
from new wells in the Gulf of Mexico. Utilities that build or expand natural
gas storage facilities clearly stand to benefit, and large investors in natural
gas pipeline construction are likely to see increased cash flows resulting from
the accelerated depreciation.

Increased Diversity of Fuel Supplies

The act provides incentives for new nuclear plant development and for gas utilities
to develop distribution pipelines and gas storage facilities; reauthorizes incentives
for clean coal technology and renewable energy sources; and clarifies liquefied
natural gas (LNG) facility siting authority.

Nuclear Plant Investment Incentives

The act provides for a tax credit of 1.8 cents per kWh for electricity produced
at new nuclear facilities over the first eight years of production and creates
insurance for nuclear plant owners/builders to protect them against losses caused
by litigation and/or regulatory approval delays. Liability protection for nuclear
plants is extended to 2025, continuing the limitation on the exposure of plant
owners to financial risk in the case of an accident. Loans for up to 80 percent
of the project cost of advanced nuclear facilities will be guaranteed (contingent
upon Congress appropriating funds for this purpose), and the tax treatment of
funds required to be set aside for safe nuclear plant decommissioning is improved.
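The production credit is straightforward to estimate: output in kilowatt-hours times 1.8 cents. The sketch below ignores any per-facility caps and allocation limits in the act, and the plant size and capacity factor in the example are hypothetical.

```python
def nuclear_ptc_annual_credit(capacity_mw: float, capacity_factor: float,
                              credit_per_kwh: float = 0.018) -> float:
    """Annual production tax credit, in dollars, for a new nuclear plant
    earning 1.8 cents per kWh produced. Ignores any per-facility caps
    and allocation limits in the act."""
    hours_per_year = 8760
    kwh_produced = capacity_mw * 1000 * hours_per_year * capacity_factor
    return kwh_produced * credit_per_kwh

# A hypothetical 1,000-MW unit at a 90 percent capacity factor earns
# roughly $142 million per year over the eight-year credit window.
print(nuclear_ptc_annual_credit(1000, 0.90))
```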

The act could dramatically improve the financial viability of new nuclear plants
and at least partially mitigate several major risks to investors. As a result
of these changes, in combination with the continued pressure to reduce greenhouse
gas emissions and the increasing price of natural gas, we expect to see the
start of substantial new nuclear plant development in the United States over
the next few years. We recognize that considerable hurdles to new nuclear plants
remain, as public distrust of nuclear power is still very high, homeland security
concerns will need to be addressed and facilities for long-term storage of nuclear
waste (at Yucca Mountain or elsewhere) are yet to be established. But overall,
we expect nuclear power to be a primary contributor in moving the United States
toward a more diversified energy base, and we suggest that power generators
consider nuclear generation as they plan for increasing their capacity.

Clean Coal and Renewables Incentives

The act provides for federal investment tax credits for clean coal and integrated
gasification combined cycle (IGCC) generation projects and reduces the cost
recovery period for pollution control equipment on older coal plants from 20
years to seven years. The placed-in-service date to qualify for the tax credit
on renewable energy production is extended through 2007 and covers qualifying
production from wind, biomass, geothermal, small irrigation, landfill gas and
trash combustion facilities.

The act also pushes state regulators to influence fuel diversity by calling
for them to consider standards for:

  • Fuel sources; and
  • Fossil fuel generation efficiency.

Utilities should develop plans to significantly reduce their dependence on
any one fuel source. In addition, utilities should develop plans to increase
the efficiency of their fossil generation.

Other Fuel Diversity Provisions

Provisions in a couple of other areas should help increase fuel diversity:

  • LNG siting authority – The act grants FERC the exclusive authority on siting
    LNG facilities, effectively ending the challenge to this authority set forth
    by the California Public Utilities Commission. Assuming that FERC allows for
    more flexibility for LNG facilities, this should provide the United States
    with an increasing supply of natural gas from international markets, reducing
    dependence on domestic sources.
  • Alternative generation tax credits – The act provides for tax credits for
    solar energy, fuel cells and distributed generation equipment.

LNG facility investors should note that while the FERC has the exclusive authority
on siting LNG facilities, the states will continue to have considerable control
over siting decisions through such vehicles as the Coastal Zone Management Act,
the Clean Air Act and the Clean Water Act.

What It All Means

Overall, we expect that the act will help drive continued utility consolidation,
although the challenges of successfully integrating two utility operations and
realizing merger benefits will remain. We believe that utilities need to consider
mergers and acquisitions in their strategic plans. To this end, utilities should
answer a fundamental question: Are we going to grow through acquisitions, or
are we prepared to be acquired?

The act also should spur investment. In addition, it should help the United
States gain a more diversified and reliable network for natural gas storage
and delivery. We expect the act’s incentives and potential standards to at least
sustain the current rate of investment in renewable energy sources and continue
to help the United States diversify its energy fuel supplies. The new and extended
production incentives may also result in an increased role for private equity
in the industry as private equity groups look at investments in large utilities
as a means to take advantage of the variety of production tax credits.

With the act’s alternative generation tax credit incentives and the continued
advancements in fuel cells, we expect micro-generation activity to increase,
with some combination of central generation farms, substation-located generation
and distributed generation at customer locations.


We believe that the Energy Policy Act of 2005 will have substantial and lasting
impacts that will benefit U.S. energy markets and that utilities will need to
carefully consider the act as they conduct both short-term and long-term planning
activities. In particular, we recommend a careful focus on growth strategies
through acquisitions, consideration of new nuclear generation, targeted investments
in transmission capacity and the technology to manage it, and development of
more sophisticated electricity pricing options to meet both customer and regulatory
needs.

Utilities and investors in utilities are responsible for obtaining appropriate
legal, accounting and tax advice in connection with the Energy Policy Act of
2005 and other relevant laws and regulations.


Shifting Regulatory Oversight of Utility Mergers

Many in the energy industry are anticipating a wave of mergers that will fundamentally
alter and concentrate the energy industry, both from utility consolidation and
mergers between utilities and firms from other business sectors. Warren Buffett,
for example, acquired MidAmerican Energy five years ago, announced the acquisition
of PacifiCorp last year and recently declared, “We’ll be looking for more.”[1]
The number of utilities has been shrinking for years, as those that remain have
gotten larger, and the Energy Policy Act of 2005 (EPAct 2005) has removed one
of the significant obstacles to further consolidation, largely through the repeal
of the Public Utility Holding Company Act (PUHCA) of 1935. The Federal Energy
Regulatory Commission (FERC) has taken on additional merger oversight responsibilities
as a result of these changes and the role of the Securities and Exchange Commission
has been greatly diminished. In addition, EPAct 2005 has eased restrictions against
ownership by firms that are foreign-owned or involved in different industries.

While PUHCA is gone, state legislatures and the various state commissions may
step up to fill the void. The greatest uncertainties over the degree of consolidation
relate to state-level activism. States’ actions will impose costs on the merging
parties and as a result will play a major role in determining just how much
consolidation takes place.

Merger Incentives

Under PUHCA, the geographic scope of businesses that could be owned by holding
companies was limited to those that could be operated as a single integrated
system. Companies were quite often restricted either to operations within a
single state or to interconnected utilities across state lines. While there
had been some success in pushing the geographic envelope – AEP’s Texas-to-Michigan
and Exelon’s Illinois-to-Pennsylvania systems come to mind – gaining regulatory
approval of these kinds of mergers was a challenge. Actions taken to increase
the interconnectedness of the merged entity (which might also be tied to increased
efficiencies) could aggravate market power concerns. Thus, merging parties were
caught between two contradictory regulatory objectives: They had to show they
were part of an interconnected system while also demonstrating that market power
was not an issue.

While the number of publicly traded regulated electric utilities in the United
States has dropped over the last 15 years, it is still remarkably large considering
the maturity and capital intensity of the industry. Benefits from consolidation
may include general scale economies from larger operations, reduced overhead
costs, better access to capital, better access to technology and an ability
to manage risk differently. Some smaller companies may be prime takeover targets
because of higher costs arising from factors such as the requirements imposed
under Sarbanes-Oxley, which disproportionately affect smaller utilities.

In theory, reduced restrictions on mergers may open up opportunities for risk
management that previously had been limited. The combination of utilities in
different geographic locations provides an opportunity to diversify weather
risks (extreme temperatures and storms) as well as regulatory risk. Ownership
affiliations with major oil, gas and coal firms could provide much more sophisticated
risk management of fuel supplies and their costs. Increased scale may provide
benefits in energy supply management and large-scale power plant ownership.

Nevertheless, exploitable efficiency gains through merger are not necessarily
clear with the repeal of PUHCA, especially with the associated regulatory burdens.
The current structure of regulated utilities is a reflection of the balkanized
state of regulation itself. The ultimate effect on the multitude and form of
mergers depends upon the regulatory review of proposed combinations in the post-PUHCA
environment compared to the environment prior to repeal. This comes down to
the way in which FERC and the states will handle utility mergers. FERC seems
poised to view utility mergers in a manner generally consistent with its rulings
in the past. State legislatures and public service commissions, however, are
re-evaluating their role and objectives in merger reviews. There are indications
that states are concerned with the possibility of substantial consolidation,
and may take actions in the interest of protecting consumers.

FERC’s Expanded Role

FERC is preparing to review a larger number of mergers than it has in the past,
expecting to receive eight in both FY 2006 and FY 2007, up from four in 2005.[2]
EPAct 2005 amends Section 203(a) of the Federal Power Act to grant
FERC the authority to approve public utility and holding company transactions
involving a target (including generation facilities) with a value of more than
$10 million. In doing so:

“After notice and opportunity for hearing, the Commission shall
approve the proposed disposition, consolidation, acquisition, or change in control,
if it finds that the proposed transaction will be consistent with the public
interest, and will not result in cross-subsidization of a non-utility associate
company or the pledge or encumbrance of utility assets for the benefit of an
associate company, unless the Commission determines that the cross-subsidization,
pledge, or encumbrance will be consistent with the public interest.”[3] [emphasis
added]

The “public interest” standard is consistent with FERC’s rules in the past
regarding merger approval. In applying this standard, FERC has evaluated a transaction’s
effect on competition, rates and regulation. In reviewing mergers, FERC follows
the Department of Justice (DOJ) and Federal Trade Commission (FTC) Merger Guidelines,
and has used an analytic tool of its own design (i.e., its Appendix A Analysis)
to apply to utility mergers.

In addition to the “public interest” standard, FERC must find that the acquisition
will not result in a cross-subsidization to an associate company unless that
cross-subsidization is in the public interest. Cross-subsidization can occur
when a regulated company shares administrative, capital or operating costs with
an unregulated company. FERC will be tasked with determining when cross-subsidies
are in the public interest and when they are not. Presumably, merging companies
will be pressed to demonstrate how efficiency gains from cross-subsidies will
benefit consumers, or how these cross-subsidies otherwise are in the public
interest.

Under the new legislation, FERC is mandated to act within 180 days of a filed
application, and is to “adopt procedures for the expeditious consideration of
applications.”[4] While this review period is longer than the 60 days typically
required of the DOJ and FTC, specifying a time for review is a positive step toward
ensuring that merger applications are evaluated relatively quickly and companies
are not left in limbo awaiting regulatory approval. In practice, FERC may receive
additional time from the merging parties if the transaction is particularly
large or troublesome, much as merging parties often grant the DOJ or FTC additional
time if requested.

While FERC’s responsibilities have increased, recent changes build on a long
history of merger oversight. That does not mean that everyone will be content
with the scope of FERC’s oversight or its judgment in applying the public interest
standard.

State Merger Activism

The states have long been integral to any merger approval and the recent regulatory
changes have, if anything, increased their role.[5] While each state has its
own requirements, the state commission’s role as the approver of retail rates
ensures involvement in any merger involving state-regulated utilities.

Merger announcements are often coupled with assertions of synergies and efficiencies.
It does not take long for state commissions to ask, “What’s in it for ratepayers?”
Many mergers have been approved once tied to a period of guaranteed rate reductions,
but this increases the cost of the merger. Rate reductions might be paid for
out of operational cost savings, but predicted efficiency gains are uncertain.

Merger benefits could arise from risk sharing between formerly separate entities
but such benefits may be hard to capture when the entities are separately regulated.
After all, some benefits are present only when unexpected losses in one area
can be offset by gains from another. Only a very disciplined and optimistic
state regulator would be willing to allow ratepayers in his state to bear financial
burden from problems elsewhere (outside of his state) in the hopes that the
favor would be returned some day if fortunes were reversed. Such hypothetical
risk sharing is even more unlikely to be approved by state commissions if a
merger is contemplated between regulated and unregulated affiliates, such as
between an electric utility and a fuel company.

State commissions have a long history of concern about the potential consequences
arising from consolidation between utilities and unregulated companies. Commissions
have focused on the possibility that the finances of the regulated utility
would be jeopardized by an unregulated affiliate. Unregulated businesses are
often riskier and in any event are largely outside of the state commissions’
control. In evaluating the risk management issues involved in a potential merger,
commissions also may focus on issues such as transfer pricing, cross-subsidization
and financial abuse arising post-merger.

Transfer pricing involves the setting of prices for any good that passes between
regulated and unregulated (or separately regulated) entities. Consumers can
be harmed if transfer prices are higher than market rates and the utility attempts
to capture these prices in the rate base. Cross-subsidization concerns may arise
if companies share operating, administrative or capital costs. Limitations on
cross-subsidies may mitigate potential efficiencies from size or scope, preventing
a reduction in capital costs for the merged entity, for instance.

The states are showing no indication of backing away from these issues. States
are considering instituting increased authority in reviewing mergers and oversight
of holding companies following the repeal of PUHCA. Laws to increase this state-level
authority are being referred to as “mini-PUHCAs,” and it is possible that they
could have a stifling effect on consolidation similar to that of the original
PUHCA.

While so-called mini-PUHCAs may give states additional access to holding companies,
it is not clear to what extent they will be employed or persist. In a recent
conversation, Robert Burns of the National Regulatory Research Institute said,
“Mini-PUHCAs essentially allow a state to reach into the holding company itself
but potentially violate the intent of Congress to encourage investment in utilities
and their infrastructure.”

States seem to be considering “ring fencing” as a viable tool to mitigate undesirable
consequences from the combination of regulated and unregulated businesses. Ring
fencing is designed to protect a regulated company from unregulated affiliates
via certain restrictions such as capital structure requirements, independent
boards and investment restrictions on the unregulated entity.

Many states may not be able to require ring fencing of merging parties given
their current laws and regulations. The National Regulatory Research Institute
has identified only three states – Wisconsin, Virginia and Oregon – that have
regulatory tools at their disposal sufficient to insulate regulated utilities
from nonutility affiliate undertakings.[6] In the face of new complex mergers,
many states may beef up their ring-fencing authority.

However, overly aggressive ring fencing could erase efficiencies contemplated
by a merger. In a situation where there are no clear, large efficiency gains,
merging entities may not be willing to promise large benefits to state commissions,
especially when facing additional restrictions from states that may erode the
very benefits in which states may seek to share. This combination of circumstances
may dissuade a number of potential mergers.

Looking Forward – Industry Perspectives

The prevailing view seems to be that repeal of PUHCA will increase the number
of mergers and introduce additional regulatory complexities:

  • “A decade from now, the total number of investor-owned utilities will be
    way down, the American marketplace will be far more internationalized, and
    there will almost certainly be more unbundling of assets to minimize negative
    impacts of the market power problems these large mergers will create,” says
    Roger W. Gale, CEO of GF Energy.[7]
  • “The effect will likely be greater consolidation of the electric industry,
    greater concentration of ownership, more complex company structures, and more
    opportunities for the exercise of market power… . Greater concentration in
    ownership of generating assets will only add to the structural problems, increasing
    the potential for market manipulation,” the American Public Power Association
    has stated.[8]
  • “Due to the repeal of the PUHCA discussed above, FERC and state commissions
    can expect more mergers and acquisitions, many of which may involve diversified
    activities within holding company structures,” The National Regulatory Research
    Institute has stated.[9]

Some energy company executives expect further consolidation, but largely as
a result of other pressures in the industry rather than directly from the repeal
of PUHCA. And while some feel consolidation will occur within the regulated
utility sector, they do not see a wider formation of conglomerate mergers, even
within the energy industry.

  • “I believe that there will be further consolidation. I don’t know that the
    repeal will have much to do with it. There will obviously be a positive impact.
    But I think many companies, if you look at Warren Buffett, Duke, and others,
    have been ignoring the holding company act already,” says Warren Robinson,
    Executive VP and CFO of MDU Resources.[10]
  • “I don’t think the problem was PUHCA. Running power plants is really not
    what oil companies do. And you also have – even though you are allowed to
    do it – you do have the rate regulation problem. Most oil companies don’t
    like any more government regulation than they currently have,” says Stephen
    I. Chazen, Senior Executive VP and CFO, Occidental Petroleum.[11]

The impact on consolidation of the repeal of PUHCA, FERC’s increased regulatory
authority and the potential for states to shift their focus is unclear. We might see
some additional combinations designed to diversify geographically, and additional
conglomerate mergers. Much of the impact will depend on the extent to which
states step in to fill the perceived void left by the repeal of PUHCA.

While it might be tempting for companies to wait and see to what extent states
will be proactive in merger review, some more-daring companies might initiate
mergers while state commissions themselves are unsure about what authority they
possess. Of course, no company wants to be a test case resulting in a long drawn-out
fight at the state level. Regardless, merging parties will likely be asked to
provide assurances to states in which they operate, and states will carefully
assess mergers that appear particularly complex. This alone will temper potential
consolidations contemplated as a result of the repeal of PUHCA.

This article does not represent the views of LECG or other experts at LECG.


  1. Berkshire Hathaway Annual Report, Letter to Shareholders, February 28, 2006,
    p. 6.
  2. “FY 2007 Congressional Performance Budget Request,” Federal Energy Regulatory
    Commission, February 2006.
  3. Energy Policy Act of 2005, Section 1289, Merger Review Reform.
  4. Ibid.
  5. EPAct 2005 has explicitly increased the authority of the states by permitting
    them to gain access to holding-company records, including those of associate
    companies and affiliates.
  6. “Briefing Paper: Implications of EPAct 2005 for State Commission,” The National
    Regulatory Research Institute, October 2005, p. 7.
  7. Gale, Roger W., “What the New Mergers Will Mean,” Energybiz Magazine, July/August
  8. “The Electric Utility Industry After PUHCA Repeal: What Happens Next?” American
    Public Power Association, October 2005, p. 1.
  9. “Briefing Paper: Implications of EPAct 2005 for State Commission,” The National
    Regulatory Research Institute, October 2005, p. 8.
  10. Stravos, Richard, “CFOs Speak Out: Looking Beyond Power,” Public Utility
    Fortnightly, October 2005.
  11. Ibid.


Survival Skills for Utility Mergers

The recent surge in merger, acquisition and divestiture activity in the U.S.
utilities industry has come as no surprise to leaders in the industry who have
long anticipated the repeal of the Public Utility Holding Company Act (PUHCA),
a depression-era law that restricted ownership of franchised utilities. With
the passage of the Energy Policy Act of 2005 and the subsequent repeal of PUHCA,
electric utilities can now be owned by non-franchise utility entities and no
longer need to demonstrate interconnection between operating companies.

The clear path toward consolidation was also reinforced by global trends in
the energy industry, such as increasing raw energy prices, more stringent environmental
regulations and the continuous quest for synergies and lower costs to meet the
expectations of Wall Street. Distribution companies with local service requirements
and regulatory scrutiny of cost and service delivery are searching for increased
economies of scale, improved reliability and power quality and enhanced customer
service levels through strategic combinations that cross state and regional
boundaries.

Despite the optimistic projections of value creation for shareholders, surveys
of business leaders, as well as popular sentiment, indicate that the majority
of mergers “failed to achieve financial goals set by top management and roughly
half destroyed shareholder value.”[1] Business consultants focused on the energy
and utilities industry are concerned with the poor financial results as well
as perceptions of decreased service quality and, perhaps most crucially, the
potential consequence of further delaying needed investment in energy infrastructure.
Study of management actions both pre- and post-merger and correlation of characteristics
of both successful and not-so-successful corporate transformations have led
consultants to develop a systematic approach to both identifying and delivering
value.

Doing the Deal Right Versus Doing the Right Deal

Although conventional wisdom holds that the seeds for successful value creation
are planted in boardrooms during the strategic negotiations and financial engineering
of the pre-merger phase, analysis of the results of hundreds of corporate combinations
clearly indicates that the activities that follow on the heels of the deal announcement
are actually more crucial in determining long-term success. In other words,
“doing the right deal” can only succeed by doing the deal right, which means
early and continuous focus on many issues which do not get much attention in
boardroom meetings.

What follows is an analysis of both the key pre- and post-completion activities and the resultant
integrated approach to assist utilities in maintaining focus on both value generation
and realization. This approach harnesses the resources of merging companies
and establishes clear channels of communication for tracking and reporting crucial
integration activities (see Figure 1).

Our experience has shown that the majority of obstacles to successful value
realization arise during the integration planning and implementation phases,
often long after headlines of the deal have disappeared. The reasons are many
and are inherently understandable, but, time and again, companies make decisions,
and in many cases fail to make decisions, crucial for successful organizational
transformation. Some examples include a lack of clear goals and timetables,
inability to devote quality resources to planning and management of the transformation,
delays due to work overload, pervasive resistance to change, lack of a project
management approach and discipline, and lack of experience in driving transformational
value that has an impact on bottom-line results.

Our integrated framework for supporting utility integrations is based on a
relentless focus on driving the benefits envisioned by the executive deal makers
as well as identifying and unlocking additional areas of value often hidden
in legacy organizations. Our experience providing support to thousands of organizations
in integration processes led us to identify the key problems faced by managers
and executives. In turn, we developed methods and tools to assist our clients.

Driving Value Through Integration

The complexity of mergers and acquisitions between utilities is greater than
that of corporate transformations in other economic sectors due to numerous factors
inherent in utility operations and regulation. In addition to common deal-specific
factors such as size, geographic footprint and legacy corporate cultures of
the organizations, utilities also must deal with industry-specific factors,
such as intensive state regulation, a maturing workforce, aging asset base and
varied labor agreements. As managers and executives attempt to manage these
issues, in addition to delivering reliable and safe energy to our homes and
businesses 24/7/365, it is no wonder that executing complex merger integrations
is a challenge.

The top reasons the full potential of expected value from utility mergers is
not realized include:

  • Managers must maintain focus on day-to-day operations and
    respond to unplanned events such as storms and forced outages;
  • Executives have difficulty providing sufficient support to
    drive the organizational and process changes required to
    realize synergies;
  • No formal programs are created for tracking benefits and
    holding people accountable for delivering the planned
    savings; and
  • Performance metrics and incentives for senior and middle
    managers focus on operations/engineering rather than
    financial results.

The solutions to these and other challenges involve the rapid initiation of
an integrated cross-organizational approach, the creation of clear implementation
plans with specific results, definition of roles and assumption of responsibilities,
the adoption of project management discipline and an unwavering focus on driving
value to the bottom line. Lessons learned from both successful and unsuccessful
mergers have contributed to the specific approaches and tools introduced below.

Rapid initiation of cross-organizational involvement and communications

Speed is crucial as organizations lay the groundwork for integration activities.
Quickly establishing a communications strategy, encompassing elements of both
general and targeted messaging, is critical in managing the anxiety that accompanies
times of uncertainty. The lack of specific answers should not deter an active
communications program. Experience has shown that the presence of a communications
program alone reduces the level of anxiety. It is even helpful in the early
stages to communicate what is not known and the plan for issue resolution.

Even when integrating acquisitions of much smaller entities, the inclusion
of professionals from both organizations in a combined integration team sends a
powerful message to the existing workforce. The recognition of cultural strengths,
identity and values in planning and implementation activities can lead to reductions
in the level of attrition due to fear of the unknown and targeted poaching by
competitors during transition.

Transformation, not accumulation

The goal of most mergers and acquisitions is to create a new organization that
is greater than the sum of its parts. But this goal is not achievable right
away and can only be accurately evaluated over several years. Where the envisioned
value is based on synergies, economies of scale and new ways of doing business,
transformational change of parts or entire areas of operations and support becomes
necessary.

A transformational approach to integration does not focus only on short-term
cost reduction but defines a process of continuous change focused on clear strategic
objectives. Components of the transformation may include streamlining, automation,
centralization, decentralization or outsourcing of processes, each with its
pros and cons. Typical post-merger transformation targets include the following
organizational areas:

  • Procurement – Focus on leveraged buying, strategic sourcing, reduction
    of maverick purchases and streamlining of administrative processes;
  • Finance and administration – Reduction of closing cycles by up to
    50 percent;[2]
  • Human resources – Streamlined and improved quality of employee support
    and reduction of investment in systems as percentage of HR budget;
  • Customer care and billing – Reductions of 25 percent in customer interaction
    costs, reduced time from meter to cash and acceleration of recovery and reduction
    of bad debt;[3] and
  • Technology – Optimization of infrastructure, increased flexibility,
    enhanced resilience and improved flow of information throughout the organization.

Definition of Project and Organizational Roles and Responsibilities

The first rule to be observed by business leaders of corporate transformation
is clear: Maintain focus on your customers. They are still No. 1. Although with
the chaos and inherent stress of the transformation, it may not always seem
to be on the critical path, proactively managing customer concerns can ease
anxiety in the community and reduce levels of inquiries and complaints. The community,
regulatory commissions and local government and public services are also customers
and deserve careful attention throughout the process.

Key to effectively leading the transformation “project” is putting someone
in charge. Integration and transformation activities require strategic planning
and detailed actions over many months and, often, years, and should receive
the full-time focus of accomplished managers. The wider transition team often
includes other full-time members as well as a larger group of part-time members
or specialists who assist in addressing specific aspects of the transition.

Involving professionals from diverse areas of the companies is important in
establishing momentum for organizational change. Team members require training
and guidance throughout the course of transformation to assist in handling the
emotional roller coaster that accompanies participation in a merger or acquisition.

Helping to Ensure Project Management Discipline

The cross-functional complexities of integrations span a wide range of technical,
legal, engineering, systems, financial and personnel issues. Constant changes
in requirements, priorities and, often, executive direction require firm guidance
as well as flexibility in adaptation. As such, the traditional engineering approach
of many utilities does not lend itself well to success.

We have successfully introduced goal-directed project management (GDPM) methods
in widely varied integration efforts. These methods are based on risk assessment
tools, clear measures for evaluation and prioritization of initiatives (i.e.,
which activities to continue and which to stop) as well as robust documentation
and performance reporting processes. Methods and tools appropriate for the needs
of utilities at various stages of the merger and integration process assist
in establishing an integrated road map leading to a clearly defined goal.

The essence of GDPM focuses on evaluating hundreds of items that contribute to merger
complexity, developing action plans for each, assigning individual or team responsibilities
and providing managerial support and resources for teams to execute. An important
methodology utilized is called radical prioritization, which provides analysis
and management information on which existing activities and projects to prioritize,
which to continue and which to suspend (see Figure 2).

Radical prioritization provides the continuity between the top-down goals envisioned
in the merger/acquisition and engages and aligns the organization to focus on
attainment of these goals. The complex interaction of transformation teams,
financial analysis and ongoing operational priorities is evaluated across both
organizations, enabling value-based decisions that take into account both internal
and external priorities and maintain a clear focus on achieving business results.

Coordinating the sheer number of moving parts and steering the organization
through both internal and external resistance requires a steady hand and executive
support to make tough decisions necessary to break down barriers and eliminate
obstacles from the path forward. Frequent re-evaluation of the relative priorities
of actions and concise management dashboard reporting enable clear, objective
and consistent communications throughout both organizations.

Relentless Focus on Driving Value

One important key to ensuring the realization of value and contribution of
dollars to the bottom line is a focus we call benefits realization. This method
identifies value enablers at the initial stage of project formation. Traditional
approaches emphasize transformation or integration levers; our focus on
quantifiable results enables early prioritization of bottom-line results
to guide the formation of work teams as well as the correct prioritization
of issues and dependencies linked to dollars.

Benefits realization focuses on three key steps: validating financial projections
and identifying additional potential benefits; developing and implementing plans
for realizing benefits; and producing processes and scorecards for tracking
benefits and correcting under-realization on a regular basis. These steps enable
integration and transformation efforts to be linked to the bottom line and to
clearly present financial impacts to areas of operations over many years in
the future. Benefit scorecards are also often linked to measures of quality
and satisfaction to enable more detailed impact analyses.

As previously mentioned, internal focus on transition and integration efforts
often wanes as the news of the deal fades and employees become adjusted to a
new reality. Adherence to well-planned implementation plans lessens as employees
assume new job roles, managers and priorities. It is at this phase that many
of the projected benefits fail to materialize in the absence of a rigorous benefits
realization program that reflects a long-term company focus (often five to 10
years in the future) despite short-term priorities and changes in leadership.


Lessons learned from hundreds of utilities and thousands of companies globally
that have navigated the turbulent waters of integration and transformation are
assisting utilities in achieving their strategic objectives and in producing
measurable financial returns. The future will be brightest for the fittest of
utilities – those that are able to thrive and grow in an increasingly competitive
market. Driving results from integration planning through execution will be
a tangible measure of their success.


  1. BusinessWeek
  2. IBM Business Consulting Services analysis, 2006.
  3. Ibid.


Setting Fixed Electric Rates and Bills

Today many utilities strive to increase customer satisfaction by offering new
products and services to meet customer demands. One of the most basic benefits
customers want is certainty, as evidenced by their participation in services
that reduce risk and increase certainty such as gas line and appliance protection
plans, surge suppression programs, fixed unit rate programs and fixed-bill programs.

Consumer Demand for Fixed-Rate Products

Fixed Unit Rate Product
Energy suppliers and utilities in many markets have offered fixed unit rates
to their customers. These offerings of a fixed price per therm of natural gas
or per kWh of electricity are popular with customers, offering them the opportunity
to lock in the rate they pay for all the energy they use during a contract period.
A fixed unit rate eliminates consumers’ price risk; however, their bills
will still change as weather causes their usage to fluctuate.

Fixed-Bill Product
Fixed-bill programs guarantee the entire payment amount
of the customer’s energy bill during the contract period, thus eliminating both
price and weather risk for the consumer. In a recent survey, E Source found
that 25 percent of residential customers were very interested in a fixed bill –
a higher ranking than surge suppression, appliance warranties or any other product
in the survey.[1] Fixed-bill programs are currently offered in at least 13 states
by regulated gas and electric utilities as well as by unregulated energy suppliers.
These programs are well-accepted in every market, as evidenced by strong renewal
rates of about 90 percent.

Risks Associated With Fixing an Electric Rate

In order to offer a fixed rate to electric customers, utilities or energy suppliers
must manage the associated risks. Examples of these risks, and how they manifest,
are described in the numbered cases below.

1: Unit-Cost Risk

In this example, a hypothetical utility plans a fixed unit rate program for
an estimated 50,000 consumers. Each consumer is expected to use 10,000 kWh annually.
The rate to be offered to the consumer is $0.08 per kWh, while the estimated
cost is $0.04 per kWh. The utility expects to generate $20 million in margin.
To demonstrate unit-cost risk, we assume that all variables behaved exactly
as planned except the cost to supply the consumer, which jumped from $0.04 to
$0.06 due to a doubling in the price of oil feed fuel (see Figure 1).

This unexpected price spike reduces the unit margin from $0.04 to $0.02. The
risks driven by changes in unit costs are known as “unit cost risks.” These
risks can be due to market pricing, changing fuel costs or unexpected changes
in generation mix such as losing a base-load plant.
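
The arithmetic of this case can be sketched in a few lines, using the planning figures from the example above (the variable names are ours):

```python
# Unit-cost risk: margin before and after the feed-fuel price spike,
# using the planning figures from the example above.
customers = 50_000
kwh_per_customer = 10_000
rate = 0.08          # $/kWh fixed unit rate offered to consumers
planned_cost = 0.04  # $/kWh expected cost to supply
actual_cost = 0.06   # $/kWh after the feed-fuel price spike

volume = customers * kwh_per_customer            # 500 million kWh
planned_margin = volume * (rate - planned_cost)  # about $20 million
actual_margin = volume * (rate - actual_cost)    # about $10 million

print(f"planned margin: ${planned_margin:,.0f}")
print(f"actual margin:  ${actual_margin:,.0f}")
```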

2: Marketing Risk

To demonstrate marketing risk, consider this example of a fixed unit rate program
under the same expectations as those in the previous case. In this case, the
utility actually secures the cost of the energy by purchasing 10,000 kWh of
electricity for each consumer at $0.04 prior to marketing the program.

In this example, only 25,000 consumers signed up for the program. The costs
of the excess secured-energy supply cannot be recovered from nonprogram customers.
The excess supply is sold at a loss of $0.02 per kWh (see Figure 2).

Marketing risk is tied to consumer acceptance of a program and the ability
to execute an appropriate marketing plan. Components of marketing risk include
impacts of press coverage, failure of models predicting consumer purchase behavior
and failures in the marketing process such as bad messaging.
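
Under the stated assumptions, the cost of the marketing shortfall can be computed directly (a sketch; the variable names are ours):

```python
# Marketing risk: supply was secured for 50,000 consumers at $0.04/kWh,
# but only 25,000 enrolled; the excess is resold at a $0.02/kWh loss.
planned_customers = 50_000
actual_customers = 25_000
kwh_per_customer = 10_000
rate, secured_cost = 0.08, 0.04   # $/kWh
resale_loss_per_kwh = 0.02        # loss when dumping excess supply

program_margin = actual_customers * kwh_per_customer * (rate - secured_cost)
excess_kwh = (planned_customers - actual_customers) * kwh_per_customer
resale_loss = excess_kwh * resale_loss_per_kwh
net_margin = program_margin - resale_loss        # about $5 million

print(f"net margin: ${net_margin:,.0f} (vs. $20 million planned)")
```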

3: Consumption Risk

To demonstrate consumption risk, we start with the same planned fixed unit rate
program proposed in Case 2. As in the previous case, the utility actually secures
the cost of the energy by purchasing 10,000 kWh of electricity for each consumer
at $0.04. However, the average consumer used 16,000 kWh. In this example we
assume the additional energy was purchased at market prices averaging $0.10
per kWh (see Figure 3).

In this case, the revenue increased by $24 million and the cost increased by
$30 million, thereby reducing the margin by $6 million. The risks driven by
changes in consumption are called “consumption risks.” For a fixed unit rate
program, the concern is that the price paid for supply will differ from what
was expected. There is no need to differentiate among the types of consumption
risk, however, since the customer will
pay for all the energy used at the fixed unit rate.
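
The figures in this case can be verified with a short calculation (again, the variable names are ours):

```python
# Consumption risk: each consumer uses 16,000 kWh instead of the
# 10,000 kWh secured in advance; the shortfall is bought at market.
customers = 50_000
secured_kwh = 10_000   # per consumer, purchased ahead at $0.04/kWh
actual_kwh = 16_000
rate = 0.08
secured_cost, market_cost = 0.04, 0.10

revenue = customers * actual_kwh * rate          # up $24 million
cost = customers * (secured_kwh * secured_cost
                    + (actual_kwh - secured_kwh) * market_cost)  # up $30 million
margin = revenue - cost                          # about $14 million

print(f"margin: ${margin:,.0f} (vs. $20 million planned)")
```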

4: Cross-Product Risk

Cross-product risk is a mathematical term. It does not refer to a specific risk
related to marketing consumer products, but rather to the interplay of two or
more of the risks already described.

Consider a more complicated example involving both increased consumption and marketing
errors. In this example 60,000 consumers sign up and use an average of 16,000
kWh. As before, the utility arranges supply for the expected 50,000 consumers
at $0.04. Additional power is purchased at an average of $0.10 per kWh (see
Figure 4).

The compounding effect between the marketing risk and the consumption risk
is referred to as “cross-product risk.” In this example all the unexpected additional
customers drive additional supply to be purchased at a loss. The loss on purchasing
for unexpected consumption of planned consumers is consumption risk. The loss
on unexpected consumption of additional unexpected consumers is the cross-product
risk.
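
This decomposition can be made concrete. In the sketch below, every unsecured kWh is bought at $0.10 and sold at the $0.08 fixed rate, and the resulting $0.02/kWh loss is split among the three risks; the split is our illustration of the definitions above, not figures from the article:

```python
# Case 4 decomposition: 60,000 consumers (10,000 unexpected) each use
# 16,000 kWh (6,000 unexpected); supply was secured for 50,000
# consumers at 10,000 kWh each.
loss = 0.10 - 0.08                          # $/kWh lost on unsecured supply
consumption_risk = 50_000 * 6_000 * loss    # planned consumers, extra usage
marketing_risk = 10_000 * 10_000 * loss     # extra consumers, planned usage
cross_product = 10_000 * 6_000 * loss       # extra consumers, extra usage

secured_margin = 50_000 * 10_000 * (0.08 - 0.04)   # $20 million as planned
net_margin = secured_margin - consumption_risk - marketing_risk - cross_product

print(f"net margin: ${net_margin:,.0f}")    # about $10.8 million
```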

Additional Risks of a Fixed-Bill Program

Each of the risks associated with
a fixed unit rate program must also be managed for a fixed-bill program. In
addition, the utility or energy supplier must also consider the volume risks
associated with changes in consumption. Consumption risks are divided into those
risks caused by changes in temperature and those caused by changes in consumer
behavior.

However, these types of consumption risk are handled differently in a fixed-bill
program. In a fixed-bill program, the customer pays a set amount per month for
the contract period. This fixed-bill amount is calculated using a fixed unit
rate and expected use at normal weather. The changes in consumption caused by
changes in temperature are considered weather risk and must be carefully managed
in a fixed-bill program. In many cases, this weather risk is held by the energy
provider and provides earnings stabilization without the purchase of weather
derivatives.

Because each consumer’s fixed-bill amount depends upon their expected use at
normal weather, there are additional risks associated with the quality of the
model used to predict that normal weather usage. A poor-quality model may introduce
selection bias and temperature bias. Using a model that accurately predicts
how the consumer uses energy at different temperatures can minimize these biases
and the risks associated with them.[2]

Impact of Risks on Electric Providers

The impact of these risks on any specific electric provider will depend on
its unique circumstances. We will discuss two distinct scenarios describing
electric providers that bracket the range of possibilities. In practice, each
provider’s situation will fall somewhere between the two scenarios.

Secure Supply Scenario
In this scenario, the provider is generally a base-load heavy generator with
significant net exports of base-load power. The provider usually has a large
capital-cost recovery charge and a low fuel cost built into their normal unit
costs. For example, a utility with a large portion of their generation provided
by nuclear and coal plants will have a $0.10-per-kWh price, of which $0.02 to
$0.03 is fuel cost. Assuming a $0.02-per-unit distribution cost, the capital-recovery
cost is $0.05 to $0.06 per kWh. This type of provider can fix their costs to
provide the exact amount of energy needed with low or no risk. They have little
risk of purchasing either too much or too little energy.

When the provider has the great majority of its supply in inexpensive, price-certain
generation, all risks can be categorized as opportunity costs. If more power
needs to be delivered, it defers external sales that may be more profitable,
but it does not trigger loss conditions.

Since the unit-cost risk of supply impacts most of the other risks in fixed-bill
and fixed-rate programs, it is no surprise that the most prominent early electric
fixed-bill programs were found at utility companies in the southeastern United States.
These companies have large nuclear and coal plants with low unit-cost risk.
In other words they are operating in the Secure Supply Scenario.

Risky Supply Scenario
In the risky supply scenario, the provider is either a net purchaser of power,
or a generator dependent on a price-volatile fuel mix such as oil or gas. The
provider generally has a small capital-cost recovery charge and a high fuel
cost built into its normal unit costs. For example, a utility with a large portion
of its generation provided by natural gas and oil plants will have a $0.10-per-kWh
price, of which $0.04 to $0.05 is fuel cost. Assuming a $0.02-per-unit distribution
cost, the capital-recovery cost is $0.03 to $0.04 per kWh. More importantly,
if the fuel costs double, the total cost of power increases by 50 percent. This
type of provider may be exposed to grid price risk for some or all of its supply
and to the risk associated with changing fuel prices, because most grid supply
is provided by otherwise-unused resources that have higher unit costs than those
used to supply their own customers.

Due to this complexity, energy providers in the Risky Supply Scenario
have been cautious about offering fixed-unit rate or fixed-bill programs.

Techniques exist to minimize risks in the Risky Supply Scenario. Feed-fuel
purchasing can be hedged across the generation mix, and proven grid-simulation
and hedging techniques can be used to build a hedging strategy for purchased energy.
Plant outage risk and storm damage recovery risks can be handled through a combination
of insurance instruments and other hedging techniques.

Managing the Risks

Setting an Appropriate Fixed Unit Rate

Figure 5 shows a sample simulation of unit costs, incorporating net purchasing,
plant outages, fuel-price volatility and weather, for a typical northern utility
with significant purchased power. Using advanced simulation methodology, it
is possible to determine an appropriate fixed price per kWh that covers the
expected costs and risks associated with either a fixed unit rate or a fixed bill.

Figure 6 shows the sample cumulative probability distribution of a hedged fixed-bill
program’s impact on normal earnings versus not having the program. This simulation
shows there is approximately 19 percent probability that in any given year the
utility will achieve a higher margin without a fixed-bill program and approximately
81 percent probability that this utility will achieve a higher margin in any
given year if they run a fixed-bill program.
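
The shape of such a distribution can be explored with a toy Monte Carlo model. The sketch below is purely illustrative and is not the simulation behind Figures 5 and 6: it assumes a 10 percent weather-driven swing in annual usage and a 5 percent risk premium in the fixed-bill price, both invented numbers.

```python
import random

random.seed(1)

RATE, COST = 0.08, 0.04          # $/kWh fixed rate and supply cost (assumed)
NORMAL_KWH = 10_000              # expected annual use at normal weather
PREMIUM = 1.05                   # assumed risk loading in the fixed-bill price
FIXED_ANNUAL = NORMAL_KWH * RATE * PREMIUM   # fixed-bill revenue per customer

def annual_margins():
    """Per-customer margin with and without a fixed-bill program."""
    weather = random.gauss(1.0, 0.10)        # weather multiplier on usage
    kwh = NORMAL_KWH * weather
    variable = kwh * (RATE - COST)           # pay-per-kWh margin
    fixed = FIXED_ANNUAL - kwh * COST        # fixed-bill margin
    return fixed, variable

trials = [annual_margins() for _ in range(10_000)]
p_fixed_wins = sum(f > v for f, v in trials) / len(trials)
print(f"P(fixed-bill margin is higher): {p_fixed_wins:.0%}")
```

With these invented parameters the fixed bill wins whenever weather-driven usage stays below the premium-loaded expectation; a real study would layer in price risk, hedges and selection bias, as the article describes.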

Since the fixed bill carries additional risk beyond price risk, volume modeling
is extremely important. The financial performance and hedging create numerous
cross products with other risks. Hedging of price risks requires extreme precision
in the modeling and risk simulations. Modeling risk can lead to hedging imperfections
and, in the case of a fixed bill, selection bias risks.

In these simulations, dependencies need to be considered and conditional branching
of purchasing logic needs to be included. For these reasons, and the sheer size
of the problem, the use of canned simulation packages or spreadsheets is not
sufficient. In addition, proper calculations are very long-running, requiring
parallel processing and other supercomputing techniques.


With the advanced simulation techniques available today, it is possible to
identify and mitigate risks in order to set fixed electric rates either to be
offered directly to the consumer or to be used in providing a fixed-bill program.


  1. “Residential Marketing Survey 2004,” E Source, December 2004.
  2. “Understanding Selection Bias,” The Energy and Utilities Project, Volume
    5, May 2005.


Innovation Does Matter

Innovation may be one of the most overused terms among strategists, used interchangeably
with technology improvements, process changes and many other tactical notions.
This has devalued the concept of innovation, and made it seem common, not distinctive.
For those of us who thrive on quantitative data to make key decisions, innovation
has become suspect – used too often to justify too little. At best, we see that
results are difficult to measure. At worst, we hear colleagues cloak proposals
with the term “strategic,” often to justify an investment with the hope of not
having to quantify the results.

As a result, developing a corporate commitment to innovation has become difficult
for the electric and gas utilities industry. With the other pressures facing
executives, it is difficult to focus significant resources on innovation. This
is a sad state of affairs, given the industry’s past role as a technology leader.
As the industry tackles its next set of challenges, innovation needs to come
back to the forefront. To do so, the industry must encourage the development
of a culture of innovation, similar to that found in other industries.

Industry Forces

The electric and gas utilities industry faces complex financial, market and
operational challenges. Heightened financial pressures continue to plague the
industry as companies strive to balance adequate shareholder returns with reasonable
rates in a period of intense infrastructure investment and rising fuel prices.
Multibillion-dollar capital requirements over the next decade will put stress
on the traditional financial strength of the industry. Meeting those needs represents
a significant challenge. Companies will continue to struggle to provide investors
with growing returns while keeping their promise to focus on their core business.

The first wave of restructured states are exiting their transition periods,
which will test policymakers’ resolve regarding the benefits of competitive
markets – and their willingness to provide structures that support them. This,
coupled with rising fuel prices, will have a significant impact on consumers’
appetite for competition; restructuring was a good idea while consumers enjoyed
guarantees and competitors offered price reductions. A lack of stability in
the wholesale markets compounds the overall issue, especially as the industry
faces significant increases in electric demand over the next two decades.

Additionally, operational issues loom at the forefront of senior executives’
minds. Reliability has rarely been more important. The blackouts underscored
this issue for senior executives but, even more so, for customers. Increasing
numbers of industrial and commercial customers have determined that they cannot
rely on utilities to provide the type of reliability such businesses require.

Why Innovation Matters

Why should a company care about innovation? Arguably, capital constraints and
higher priority investments might seem more important. Putting innovation into
the equation has the potential to divert limited resources, both financial and
human, from the key issues at hand. While this view seems like a valid short-term
answer, it fails to achieve the long-term vision. In fact, innovation matters
now more than ever as there may be no other way for the industry to find its
way to a long-term solution.

It is important to recognize the results of past innovation. For example, innovation
has resulted in exponential efficiency gains for the generation sector. Manufacturers
have produced equipment with efficiency ratings not thought possible just a
few decades ago. The smart grid has started to move from concept to reality.
Innovations in grid technologies and broadband over power lines (BPL) have begun
to make this possible. These are a few key examples that demonstrate the promise
of innovation.

Why is technological and product innovation a key feature of so many other
industries but lacking in the utility business? The electric and gas utilities
industry has not made enough of a concerted effort to capitalize on the overall
possibilities enabled by innovation. While deregulation has brought more pricing
and product options, there has been little of the innovation seen in other industries.

To change this, the industry needs to make fundamental changes in its view
of innovation and develop a culture of innovation throughout the entire industry
ecosystem. Companies can no longer rely strictly on vendors for the investments.
Vendors can no longer hold back on innovation because the “industry is not asking
for it.” Regulators need to stop punishing utilities for well-thought-out and
sound decisions that did not succeed. Consumer advocates need to consider what
customers really value rather than replay the tired regulatory “evil utility
gouging customers” tune when real electric prices are far lower than they were
20 years ago.

Rewarding, Not Ordering

Utilities must embrace innovation as an integral concept, not one left to a
small group within the company that controls a research and development (R&D)
budget. Companies cannot order innovation, only foster it. Transforming from
an operationally focused, risk-averse culture to one that rewards innovative
ideas represents a dramatic change, uncharacteristic of much of the industry
over the past few years.

The first step in this journey centers on leadership; the “tone at the top”
could be the most important component of this transformation. Senior leaders
need to embrace innovation and continually stress its importance. This means
accepting both successes and failures. This concept is difficult to accept in
an industry that has written off more than $40 billion of unsuccessful investments
over the past two decades, few of which resulted from attempts at technology
or product innovation.

Companies need to adjust the strategic planning process to generate, incubate
and implement innovative ideas. Executives need to debate ideas from a strategic
and quantitative perspective, allowing the best ideas to move forward. This
also will help a company understand which concepts need incubation over the
next three to five years and require specific funding from a financial and human
capital perspective. Adding a specific incubation function to the company will
provide proper focus and attention on the best nascent ideas. Shell’s Game-Changer
process is an excellent example of an innovation-friendly process woven into
the R&D infrastructure that has delivered results.

The industry also needs a fundamental change to the overall capital allocation
practices and processes within the company. Traditional practices do little
to foster innovative ideas and investments. While capital allocation processes
have become more sophisticated in the past decade, they do not adequately account
for the need for innovation investments. Adopting a portfolio approach that
adequately weights the value of innovation is a step in the right direction.
However, companies need to assign a percentage of the capital budget to innovation.
Allocating at least 10 percent of the capital budget to innovative investments
is a good starting point.

This does not mean discarding the financial and economic analysis normally
required for capital investments. While rigorous innovation may sound like an
oxymoron, rigorous quantitative analysis represents a critical factor in not
repeating investment mistakes of the past three decades. This analysis needs
to be sophisticated enough to account for the various technical, operational
and regulatory risks associated with these types of investments. A simple business
case that relies on a single return on investment figure will not be sufficient
to analyze potential investments.

Industry Ecosystem

Other members of the industry need to participate in the drive for innovation.
For example, regulators need to embrace the idea and be willing to modify standard
ratemaking to motivate innovation.

The Energy Policy Act of 2005 has some specific incentives for innovation relating
to real-time pricing, smart meters, transmission infrastructure, nuclear power,
clean coal and renewables. They mainly come in the form of financial incentives
and a push for adoption of standards. While this is a start, state regulatory
agencies play a significant role in the process. Many believe in the concept,
but few have been willing to adopt mechanisms that both motivate and reward
utilities on the cutting edge of innovation. California’s push toward automated
metering is a notable example of how a regulatory commission is willing to push
innovation. Regulators’ current assumption that customers are unwilling to
pay more for innovative services is belied by the panoply of products and services
in telecommunications. Price is important, but it is not the only consideration
that customers factor into purchasing decisions.

Vendors have a wealth of intellectual capital, much of which focuses on industries
more willing to take advantage of innovation. While we wish vendors would believe
in the industry’s appetite for innovation, most have been disappointed. Therefore,
the utility industry must convince vendors to focus as much effort on the utility
industry as they do on other industries.

Future of Innovation

A culture of innovation will fuel the necessary growth and efficiencies. If
the industry players make this a priority, we will see a great exploitation
of process and technological innovations over the next decade. Some of these
are well under way but need further study, while others are at a more nascent
phase of the development cycle.

Key areas of innovation are likely to be:

  • Workforce: Processes and technologies that transform the technical worker
    into a knowledge worker;

  • Generation: Technologies that provide more than 20 percent gains in the
    efficiencies of the existing generation resources;

  • Clean Coal: A “zero emission” coal-generating station;

  • Coal Gasification: Technologies that solve the problem of increased gas-fired
    generation with a decreasing supply and increased price of natural gas;

  • Integrated Home Premise: Technologies that provide seamless integration
    of utility demand management, communication, entertainment, etc., including
    the use of BPL; and

  • Reliability: Distribution and transmission technologies that exponentially
    increase reliability factors in a cost-effective manner.

This is by no means a comprehensive list, but it represents some key
areas of innovation that hold the greatest promise. Business will drive all
of these innovations (Figure 1).


Traditionally, innovation fulfills two critical needs: top-line growth and
improved efficiency. For the utility industry, the latter will be the most significant
over the next decade.

Over the next decade, the utility industry needs to become
known for its innovative developments and leadership in this area, not for being
the laggard that was unwilling to embrace change.

The New (Nuclear Power) Generation

During the relatively quiet period of nuclear power development over the last
15 years, there has been a consolidation of expertise in relation to nuclear
power reactor design and development so that today several significant new designs,
all evolved from previous and mostly well-tested antecedents, are available.

In fact, utilities looking around for up-to-date nuclear power plants have
quite a lot to choose from. Designs have become more international than the last
time most of them had their checkbooks out. There are some innovations, as well
as the designs which have steadily evolved from the 400-plus workhorses of today.

Several generations of reactors are commonly distinguished. Generation I reactors
were developed in the 1950-60s, and outside the United Kingdom, none is still
running today. Generation II reactors are typified by the present U.S. fleet
and most in operation elsewhere. Generation III are the advanced reactors discussed
here. The first of these are in operation in Japan and others are under construction
or ready to be ordered. Generation IV designs are still on the drawing board
and will not be operational before 2020 at the earliest.

About 85 percent of the world’s nuclear electricity is generated by reactors
derived from designs originally developed for naval use (see Figure 1). These
and other second-generation nuclear power units have been found to be safe and
reliable. In the last decade, the capacity of many has been marginally increased
and the actual kilowatt-hour output from that capacity has risen remarkably.
In addition, many have had operating licenses extended to 60 years (see Figure
2). However, they are being superseded by better designs.

Reactor suppliers in North America, Japan, Europe, Russia and South Africa
have a dozen new nuclear reactor designs at advanced stages of planning, while
others are at a research and development stage. Fourth-generation reactors are
at concept stage.

Third-generation reactors have:

  • a standardized design for each type to expedite licensing, reduce capital
    cost and reduce construction time;
  • a simpler and more rugged design, making them easier to operate and less
    vulnerable to operational upsets;
  • higher availability and longer operating life – typically 60 years;
  • reduced possibility of core melt accidents;
  • minimal effect on the environment;
  • increased efficiency through higher burn-up to reduce fuel use and the amount
    of waste; and
  • burnable absorbers (“poisons”) to extend fuel life.

The greatest departure from second-generation designs is that many incorporate
passive or inherent safety features[1] which require no active controls or operational
intervention to avoid accidents in the event of malfunction, and may rely on
gravity, natural convection or resistance to high temperatures.

In the United States, the Department of Energy (DOE) and the commercial nuclear
industry in the 1990s developed four advanced reactor types. Two of them fall
into the category of large “evolutionary” designs which build directly on the
experience of operating light water reactors in the United States, Japan and
Western Europe.

One is an advanced boiling water reactor (ABWR) derived from an earlier General
Electric design. Four 1300-1380 MWe examples are in commercial operation in
Japan, with another under construction there and two in Taiwan. Four more are
planned in Japan and perhaps another in the United States; the design was also
bid for the fifth Finnish reactor in 2003.

The other type, System 80+, is an advanced pressurized water reactor (PWR),
which was ready for commercialization but is not now being promoted for sale.
But eight System 80 reactors in South Korea incorporate many design features
of the System 80+, which is the basis of the Korean Next Generation Reactor
program, specifically the larger APR-1400 which is expected to be in operation
soon after 2010, and marketed worldwide.

The U.S. Nuclear Regulatory Commission (NRC) gave final design certification
for both in 1997, noting that they exceeded NRC “safety goals by several orders
of magnitude.” The ABWR has also been certified as meeting European requirements
for advanced reactors.

Another, more innovative U.S. advanced reactor is smaller – 600 MWe – and has
passive safety features. The Westinghouse AP-600 gained NRC final design certification
in 1999 (AP = Advanced Passive).

These NRC approvals were the first such generic certifications to be issued
and are valid for 15 years. As a result of an exhaustive public process, safety
issues within the scope of the certified designs have been fully resolved and
hence will not be open to legal challenge during licensing for particular plants.
U.S. utilities will be able to obtain a single NRC license to both construct
and operate a reactor before construction begins.

Separate from the NRC process and beyond its immediate requirements, the U.S.
nuclear industry selected one standardized design in each category – the large
ABWR and the medium-sized AP-600 – for detailed first-of-a-kind engineering
work. The $200 million program was half funded by the DOE. It means that prospective
buyers now have firm information on construction costs and schedules.

The Westinghouse AP-1000, scaled up from the AP-600, received final design
certification from the NRC in December 2005 – the first generation 3+ type to
do so. It represents the culmination of a 1,300 man-year and $440 million design
and testing program. Overnight capital costs are projected at $1,200 per kilowatt
and modular design will reduce construction time to 36 months. The 1100 MWe
AP-1000 generating costs are expected to be less than 3.5 cents/kWh and it
has a 60-year operating life. It is under active consideration for building
in China, Europe and the United States.

General Electric has developed the ESBWR of 1390 MWe with passive safety systems,
from its ABWR design. This then grew to 1550 MWe and has been submitted for
NRC design certification in the United States. Design approval is expected by
2007. It is favored for early U.S. construction as the Economic & Simplified
Boiling Water Reactor.

Also Ready for Action

In Europe, designs have been developed to meet the European Utility Requirements
of French and German utilities, which have stringent safety criteria.

Framatome ANP has developed a large (1600 and up to 1750 MWe) European Pressurized
Water Reactor, which was confirmed in 1995 as the new standard design for France
and received French design approval in 2004. It is derived from the French N4
and German Konvoi types and is expected to provide power about 10 percent cheaper
than the N4. It will operate flexibly to follow loads, have high fuel utilization
and the highest thermal efficiency of any light water reactor, at 36 percent.
Availability is expected to be 92 percent over a 60-year service life. The first
unit is being built at Olkiluoto in Finland; the second is planned at Flamanville
in France. The U.S. EPR is also undergoing review in the United States, with
the intention of a design certification application in 2007.

Together with German utilities and safety authorities, Framatome ANP also developed
another evolutionary design, the SWR 1000, a 1000-1290 MWe boiling water reactor
which was bid for Finland in 2003. The design was completed in 1999 and development
continues, with U.S. design certification being sought. As well as many passive
safety features, the reactor is simpler overall and uses efficient fuels, giving
it refueling intervals of up to 24 months. It is ready for commercial deployment.

In Russia, Gidropress’ 1000 MWe V-392 (advanced VVER-1000) units with enhanced
safety are planned for Novovoronezh and are being built in India. A transitional
VVER-91 (1000 MWe) was developed with Western control systems – two are being
built in China at Jiangsu Tianwan (the first started up in December) and it
was bid for Finland.

The larger VVER-1500 V-448 model is being developed by OKBM, and two units
each are planned as replacement plants for Leningrad and Kursk. It will have
high fuel efficiency and enhanced safety. Design is expected to be complete
in 2007 and the first units commissioned in 2012-13.

Canada has had two designs under development which are based on its reliable
CANDU-6 reactors, the most recent of which are operating in China.

The main one is the Advanced Candu Reactor (ACR). While retaining the low-pressure,
heavy water moderator, it incorporates some features of the pressurized water
reactor. Adopting light water cooling and a more compact core reduces capital
cost, and because the reactor is run at higher temperature and coolant pressure,
it has higher thermal efficiency.

The ACR-700 is 750 MWe but is physically much smaller, simpler and more efficient
as well as 40 percent cheaper than the CANDU-6. But the ACR-1000 of 1200 MWe
is now the focus of attention by AECL. It has more fuel channels (each of which
can be regarded as a module of about 2.5 MWe). Projected overnight capital cost
of $1,000/kWe and operating costs of 3 cents/kWh have been claimed. The ACR
will run on low-enriched uranium (about 1.5 to 2.0 percent U-235) with high efficiency,
extending the fuel life by about three times and reducing high-level waste volumes
accordingly. Regulatory confidence in safety is enhanced by changes in the reactor
physics, and it utilizes other passive safety features. Units will be assembled
from prefabricated modules, eventually cutting construction time to three years.
Development is under way and the project is expected to be ready to build soon.
Meanwhile it is moving toward design certification in Canada, with a view to
following in China, the United States and the United Kingdom. The first ACR-1000
unit is expected to be operating in 2014 in Ontario.

In Japan, the first two GE-Hitachi-Toshiba ABWRs have been operating since
1996 and are expected to have a 60-year life. Two more started up in 2004 and
2005 and others are under construction in Japan and Taiwan. Also a large (1500
MWe) advanced PWR (APWR) is being developed by four utilities together with
Westinghouse and Mitsubishi. The first two are planned for Tsuruga. The APWR
is simpler, combines active and passive cooling systems to greater effect, and
offers high fuel efficiency. Design work continues and will be the basis for the next generation
of Japanese PWRs. In addition, Mitsubishi is participating in development of
Westinghouse’s AP-1000 reactor.

All of the above are moderated and cooled by water, but an entirely different
approach is based on pioneering work in the United States and Germany. This
involves using helium cooling and much higher temperatures; hence, greater thermodynamic
efficiency.

South Africa’s Pebble Bed Modular Reactor (PBMR) is being developed by a consortium
led by the utility Eskom, and drawing on German expertise. It aims for a step
change in safety, economics and proliferation resistance. Production units will
be 165 MWe. They will have a direct-cycle gas turbine generator and thermal
efficiency of about 42 percent. Up to 450,000 fuel pebbles recycle through the
reactor continuously (about six times each) until they are expended, giving
an average enrichment in the fuel load of 4 to 5 percent. The pressure vessel
is lined with graphite and there is a central column of graphite as reflector.
Control rods are in the side reflectors and cold shutdown units in the central
column. Performance includes great flexibility in loads (40 to 100 percent),
with rapid change in power settings. Each unit will finally discharge about
19 tonnes/yr of spent pebbles to ventilated, on-site storage bins.
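
The electrical output and thermal efficiency quoted above together fix the reactor's thermal rating; a minimal sketch of that arithmetic (figures from the text, calculation mine):

```python
# Implied thermal rating of a PBMR unit from the numbers quoted in the text.
net_mwe = 165      # net electrical output per unit, MWe
efficiency = 0.42  # quoted thermal efficiency of the direct-cycle gas turbine

thermal_mwt = net_mwe / efficiency  # thermal power needed for that output
print(f"thermal power ~{thermal_mwt:.0f} MWt")  # ~393 MWt
```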

Construction costs (especially when in clusters of four to eight units) and
operating costs are expected to be low. Investors in the PBMR project are Eskom,
the South African Industrial Development Corporation and Westinghouse. A demonstration
plant is due to be built in 2007 (most contracts have already been let) for
commercial operation in 2010.

China’s INET is rapidly progressing with a similar design, of 200 MWe, which
it plans to have running in 2010. It then hopes to build a plant comprising
18 of these at Weihai in Shandong province.

Beyond all these, two major international initiatives have been launched to
define future reactor and fuel-cycle technology, mostly looking further ahead
than what has been discussed so far.

Technology Always Changing

The Generation IV International Forum is a U.S.-led grouping established in
2001 which has identified six reactor concepts for further investigation with
a view to commercial deployment by 2030. Parallel to this, the IAEA’s International
Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO) is focused more
on the needs of developing countries, involving Russia rather than the United
States. It is now funded through the IAEA budget.

Along with development of new reactors in the medium- to long-term time frame
is a renewed focus on reprocessing used fuel from the reactors, both to extract
more energy value from it and to expedite the disposal of high-level wastes.
Since the Carter administration, U.S. policy has turned away from reprocessing, but a major
change is now under way, and under the Advanced Fuel Cycle Initiative the Argonne
National Laboratory is planning an engineering-scale demonstration of a new
process for reprocessing used fuel. It is estimated that using this process,
the effective capacity of the Yucca Mountain repository could be increased fivefold
and much better utilization of uranium achieved. The policy change is strongly
supported by industry and the U.S. professional association.

So, there is a variety of reactor technology available or soon to be available,
and more still after 2020. These will take the world nuclear power industry
into an era of upgraded equipment which is safer, simpler, more economic and
more durable, while playing a major role in limiting world carbon dioxide emissions.
A doubling of nuclear capacity would reduce carbon dioxide emissions from power
generation by almost one-third of present levels.

But beyond simple economics and the related question of climate change is the
looming issue of energy security. Whereas the logistics of fossil fuel supply
are increasingly fraught, it is very easy and inexpensive to store a few years’
supply of uranium or fabricated nuclear fuel.

While it is up to governments to adopt sound policies regarding energy security
and the environment, the ball is then in the court of utilities and financiers
to invest wisely in plants which will perform economically and reliably for a
long time.

Humankind cannot conceivably achieve the clean-energy revolution which is so
obviously necessary today without a rapid expansion of nuclear power. Initially
this is to generate electricity, but let us not lose sight of desalination and
the future supply of hydrogen for transport. I believe that this will require
a twenty-fold increase in nuclear energy during this century, from 440 to some
8,000 reactors. While this is a big challenge, it is certainly achievable given
the maturity of the technology involved.
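
The scale of the expansion asserted here is simple to check against the reactor counts given (both numbers are from the text; the division is mine):

```python
# Scale of the expansion described above: from 440 reactors today to some 8,000.
current_reactors = 440
target_reactors = 8000

factor = target_reactors / current_reactors
print(f"expansion factor ~{factor:.1f}x")  # ~18.2x, i.e. roughly twenty-fold
```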


  1. Traditional reactor safety systems are “active” in the sense that they involve
    electrical or mechanical operation on command. Some engineered systems operate
    passively (e.g., pressure relief valves). Both require parallel redundant
    systems. Inherent or full passive safety depends only on physical phenomena
    such as convection, gravity or resistance to high temperatures, not on functioning
    of engineered components.