Trends in Bill Payment

One of the most significant trends within the payments market is the increasing
substitution of electronic forms of payment for traditional paper-based payment
forms such as cash and checks. The impact of this trend is being felt in all bill
payment industries, and this warrants a response from utilities in both regulated
and deregulated markets.

Regulated players need to broaden payment options to maintain customer satisfaction
levels. Deregulated players need to offer all payment options to acquire and
retain customers.

Over the next few years, electronic payments are likely to surpass checks as
the most popular form of payment. Cash payment transactions will likely stay
fairly constant. Checks’ share of noncash transaction volume has already decreased
over the past decade and will drop even further over the next decade as electronic
payments increase their share of transaction volume.

This change is driven primarily by advances in technology and consumer demand,
specifically the consumer’s desire for convenience and the increase in technology-enabled
payment options.

Electronic payment transactions fall within four distinct categories:

1. General purpose credit cards: Includes co-branded credit cards, charge
cards, co-branded charge cards, secured credit cards, T&E cards, commercial
cards, and new payment technologies that route transactions through card association
networks.

2. Private label cards: Includes those run by individual retailers or
gas companies, fleet cards, and third-party receivable owners. These cards are
typically used only for transactions at the issuing company.

3. Offline/online debit: Offline debit refers to debit transactions
requiring a customer’s signature. Online debit refers to debit transactions
requiring a PIN, with funds settled over EFT networks. Debit payments are
extracted directly from the consumer’s bank account.

4. Automated Clearing House: ACH networks are used to transfer payments
electronically from one account to another.

Consumer Bills

Checks still dominate consumers’ bill payment behavior. However, the use of
electronic payment for paying bills is on the rise. Most billers recognize the
benefits of switching customers to electronic payments. The costs of bill presentment,
check processing, and float are enough to warrant consideration of automatic,
electronic payment options, which often eliminate the need for presentment and
result in significant float savings.

On top of this, offering electronic payments is quite often a customer satisfier,
as many consumers prefer to use these payment options. Companies use a variety
of strategies to drive consumers to use electronic methods:

• Pull strategy: Offering price incentives to use electronic forms of payment,
which replace paper bills. This strategy promotes migration among customers
who are ready to make the transition, while still catering to those who are
more comfortable with checks.

• Push strategy: Charging additional fees to customers who continue to receive
paper bills. This strategy will generate some customer dissatisfaction, which
should be weighed against the cost savings generated by eliminating the paper
bills.

• Providing multiple electronic payment options to cater to diverse consumer
preferences. For example, offering bank deductions only will not address the
segment of the population that is uncomfortable releasing bank account information
to third parties.

• Offering value-added services such as archiving and spend analysis capabilities
for electronic payments.

• Emphasizing trust and security with the electronic payment options and illustrating
the decreased likelihood of late payments and associated fees.

Bill consolidation Web sites are on the rise. However, direct biller sites
are currently in the lead and have the competitive advantage with respect to
customer ownership. As these consolidation Web sites grow and banks increase
their presence in the bill presentment and payment space, direct billers will
face increasing threats to their ability to directly own the customer from a
payment perspective.

Automatic Bill Payments

The use of credit cards as a recurring payment option in bill payment industries
is growing at a significant pace. For example, spending on American Express
cards in bill payment industries has grown at an average annual rate of 21 percent
over the past three years. Credit card recurring billing is growing in many
industries, including cellular, long distance, cable, insurance, subscriptions,
and even residential rent payments.

Credit and charge cards were traditionally used as a borrowing
vehicle or a medium to pay for travel and entertainment expenses. Consumer trends
indicate, however, that a growing number of people increasingly perceive these
cards as a cash management tool.

People are becoming more insistent on using credit/charge cards across many
new industries, and they often desire the convenience and security of paying
this way.

Combined with loyalty rewards programs, credit and charge cards are a powerful
incentive for consumers to be loyal to those companies that accept their payment
option of choice.

Demographic trends are also playing a part in this shift; younger consumers
are more likely to favor electronic payment methods over paper. As this segment
matures, the speed of electronic payment adoption should accelerate.

Consumer Research

In 2001, American Express commissioned AB Research Associates to determine
the level of consumer demand for automatic payments in bill payment industries
and the impact such programs would have on consumer perception.

The results below are based on a population of 300 American Express card holders
who are responsible for bill payment and have paid some bills using methods
other than credit or charge card automatic billing in the prior six months:

How will offering recurring billing on credit cards affect consumer
perception?
• 52 percent say they are much more or somewhat more likely to
stay with the current provider of a service if it offered automatic billing
on their card and others did not.
• 78 percent agree that it is important for merchants to accept the payment
methods that customers prefer.

Will offering recurring credit card billing improve the collections process
and generate incremental float benefit?
• 54 percent agree that using a card for automatic billing of recurring
expenses increases the likelihood of paying the bill on or before it is due.
• 57 percent agree that using a card for automatic billing of recurring
expenses makes one confident that the bills would be paid correctly and on time.

How does the consumer view recurring billing on credit cards?
• 59 percent agree that using the card for automatic billing of
recurring expenses makes it easier to avoid late payment fees and penalties.
• 58 percent agree that using the card for automatic billing of recurring
expenses makes the payment process easier and more convenient.
• 56 percent agree that using the card for automatic billing of recurring
expenses helps to save time paying bills.

How do consumers view paying utility bills on their credit cards?
• 25 percent say they would feel more positive toward a merchant
who gave them the option to use automatic billing on their card to pay electric,
gas, and oil bills.
• 26 percent say they would be likely to use automatic billing on their
card to pay electric, gas, and oil bills.

The key take-away from the research is that consumers value choice and will
reward companies that provide it to them. A driver of consumers’ preference
for credit cards is the collection of loyalty points, since respondents who
collected rewards points tended to respond more favorably than those who did not.

The impact in deregulated markets is clear: offer consumers choice and flexibility
or they will leave. In regulated markets, players who are interested in maintaining
higher levels of customer satisfaction, brand affinity, and perception need
to broaden the offering of payment options.

Areas of Benefit

Credit card automatic bill payment is a solution that can work very effectively
for both the utility company and the consumer. Consumers can sign up on application
forms, over the phone, or through the Internet. The approach helps strengthen
the relationship with customers and reduces the high cost of replacing them.
With increased competition,
building loyalty is the key to success. Some of the benefits from the biller’s
perspective include:

Guaranteed Payment

By utilizing credit card automatic bill payment, the utility company will
have no bounced checks, should have no collection charges, and will be paid
automatically and on time.

Faster Payment

Funds are available much sooner to utility companies offering credit card automatic
bill payment. Most credit cards offer the biller payment within a few days.

Improved Cash Flow

Increased monthly receipts can result in greater financial flexibility, lower
cost of capital, and better business planning.

Lower Operating Costs

Costly, inefficient processes associated with nonpayment, late payment, and
insufficient funds can be reduced, along with billing and statement processing
costs.

Automatic Customer Updates

Certain companies offer automatic customer updates for expired and lost cards,
which reduces calls to customer service centers.

Improved Customer Satisfaction

Offering multiple payment options can only improve customer satisfaction ratings,
especially with those customers wedded to certain payment options.

Improved Customer Acquisition/Retention

In deregulated markets, new entrants will use any advantage they can to acquire
new customers, including offering incentives such as collection of credit card
rewards points. Players that don’t offer similar payment options run the risk
of increased attrition.

American Express is currently commissioning a study to collect qualitative
data on the benefits of automatic credit card recurring billing. Companies across
multiple industries are being surveyed, and the following quotations were generated
from individual respondents:

“Customer service is better for credit card holders because we can address
billing problems faster.”
– Top-10 Internet service provider

“Because of recurring billing, our delinquency rate is at a low rate
of 3 percent.”
– Multi-state health club

“Credit card payments are slightly better because they allow for authentication
which verifies other customer information.”
– Top-10 Internet service provider

Consumers are looking for more convenient, cost-effective ways of paying bills.
Automatic payment methods can leverage this trend by satisfying the desire for
convenience and by helping consumers avoid late fees or other charges due to
late bill payments.

Credit cards already have a critical mass advantage when it comes to capturing
these payments, and consumers are increasingly motivated by the rewards options
available. In deciding how to roll out a variety of payment options, utility
companies need to look at:

• Consumer convenience: Is the payment option easy to use or familiar
to the consumer?
• Consumer security: Does the consumer feel comfortable with the payment
option?
• Consumer benefits: Are there benefits for the consumer to use the payment
option (rewards points, for example)?
• Cost savings: Does the payment option lower the cost of bill presentment,
processing, float expense, etc.?
• Additional biller benefits: What other benefits, such as marketing support,
does the payment facilitator offer to the utility?

Conclusion

In both regulated and deregulated markets, utility companies should consider
automatic bill payment options to improve customer satisfaction and retention
and to lower the cost of processing payments.

Portfolio Value Management

To position their companies for success in the energy industry of the future,
utility executives must have a clear vision of the future of the industry and
the operating model they believe will be a winning strategy.

They must be able to rapidly assess the effect on the enterprise as a whole
of the performance of each asset and the impact of asset acquisitions, mergers,
divestitures, or other major capital decisions. This has led to the need for
portfolio value management (PVM), which recognizes that any company has flexibility
in how it operates, maintains, acquires, and disposes of its assets. PVM recognizes
that even if there are no obvious business connections among assets, investing
in one project or business may affect investment in others.

Executives today face an incredibly complex and constantly evolving industry
structure, with innumerable unknown variables, complexities, and potential variations
of operating models.

How, in the face of all this change and uncertainty, can any executive hope
to effectively manage and optimize the value to shareholders of a portfolio
of assets? Well, reaching into a desk drawer and grabbing a couple of rubber
bands, a couple of pieces of cardboard, and a handful of fasteners would be
an excellent start.

Zeeman’s Machine

These materials are required to build an instructive toy called Zeeman’s Machine,
developed by Dr. E.C. Zeeman in the 1970s. It is nothing more than a cardboard
disk fastened at the center to a large base, and at one point on its periphery
to two rubber bands. One of the rubber bands is secured to the base a fixed
distance from the center of the disk, and the other band is moved freely by
hand from position to position. As the free end is moved steadily, the disk
at the center of the machine responds smoothly and predictably for a while,
then jumps unexpectedly to a new position in response to a very slight additional
movement — not unlike sudden jumps or drops in revenue, stock price, or profitability,
some might say.

While they seem random, these sudden, dramatic changes in the state of the
system can actually be predicted, and their occurrence controlled, if the drivers
of the machine are understood. To fully understand the behavior of Zeeman’s
Machine, one would have to be versed in the principles of a complex branch of
mathematics named (with unintentionally negative connotations) catastrophe theory.

Mercifully, understanding and using catastrophe theory isn’t about to be portrayed
as a quantitative technique for energy company PVM. But the mathematical theory
is symbolically very meaningful. It provides an excellent framework to explain
why PVM is such an effective technique for shareholder value optimization.

System Variables

Contrary to the images it first may bring up, catastrophe theory has nothing
to do with predicting or explaining things like Chernobyl, Enron, or the California
market meltdown. A simplified version of its basic premise can be expressed
as follows:

In most cases, the state of a complex system of interdependent differential
equations can at any time be completely specified by the values of a very large
but finite number of system variables.

However, if a relatively small number of control variables can be defined,
the final configuration determining the performance of the system can be specified
as one of a small number of defined variations, and the ability to predict its
behavior depends not on the huge number of system variables, but on the much
smaller number of control variables for that variation.
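
For readers curious about the underlying mathematics, Zeeman’s Machine is the
canonical illustration of the cusp catastrophe, which has one state variable
and just two control variables. The sketch below uses standard catastrophe-theory
notation; it is background for the analogy, not something drawn from the text.

```latex
% Cusp catastrophe: one state variable x, two control variables a and b.
% The system settles at a local minimum of the potential:
V(x; a, b) = \tfrac{1}{4}x^4 + \tfrac{1}{2}a x^2 + b x
% Equilibrium states satisfy dV/dx = 0:
x^3 + a x + b = 0
% Sudden jumps occur on the bifurcation set, where the number of real
% equilibria changes:
4a^3 + 27b^2 = 0
```

Steady movement of the free rubber band corresponds to a smooth path through
the (a, b) control plane; the disk jumps exactly when that path crosses the
bifurcation set.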

Certainly, it is true that the financial performance of any diversified energy
company is driven by a very large but finite number of system variables (a staggering
array of factors such as heat rates, fuel costs, temperatures, transmission
line losses, regional business growth rates, market pricing, interest rates,
etc.) and the performance of a complex system of interdependent business operations.

Many compelling arguments have been made over the last few years that there
is likely to be a small number of defined variations of end-state structures
for utility companies — Moody’s May 2002 five-business model layout for
the merchant sector and our own four-model Vision of the Energy and Utilities
Industry Circa 2007, published last October, being two recent examples.

So if you accept those premises, then all that is missing to make sense of
the developing business outlook is the identification of the control variables
that will drive the systems under each variation.

Defining the Control Variables

In the context of the type of business model within which the utility is working,
value-based management (VBM) must serve as the underpinning for PVM. Formally,
VBM is a process for establishing performance goals for key metrics (our control
variables) in all business units, evaluating new opportunities, and implementing
them in a way that allows for sustained growth.

The shaping of the control variables begins with the financial markets, which
set the value of each business continually. A company, presumably, has chosen
the type of company it wants to become, whether that be a retail, distribution,
transmission, or merchant energy company, or some viable combination.

Periodically, key strategic objectives are set by executives who recognize
that the value of one business is influenced directly by its own performance
and indirectly by the performance of certain others — interdependencies
similar to those mentioned in the catastrophe theory analogy. They are fully
aware that specific new business units may bring experience, operational flexibility,
or customer base to the entire company that improves the performance of other
units.

Each of the strategic objectives set by the executive team has one or more
critical success factors. Each of these factors, in turn, is dependent on key
value drivers. And it is precisely these key value drivers that become the small
number of control variables on which PVM will focus.

Making the Right Decisions

Utility companies have historically tended to value acquisition targets based
on simple lifecycle performance. For example, a power plant that operates an
average of 3,500 hours a year and produces mid-load power will produce a certain
amount of revenue based on specific price projections. Subtracting the plant’s
operating costs yields its annual value to the company, and extending the same
algorithm over the expected lifespan yields its lifecycle performance. Adjusting
this for inflation and other factors provides a basic appraisal of net present
value.
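
As a rough sketch of this simple lifecycle arithmetic, the following Python
fragment discounts a plant’s annual net value over its expected lifespan. Every
figure in it is a hypothetical illustration, not data from the text.

```python
# A minimal sketch of simple lifecycle valuation: annual revenue minus
# operating cost, discounted over the plant's expected lifespan.
# All inputs are hypothetical.

def plant_npv(hours_per_year, capacity_mw, price_per_mwh,
              annual_operating_cost, lifespan_years, discount_rate):
    """Net present value of a plant valued in isolation."""
    annual_revenue = hours_per_year * capacity_mw * price_per_mwh
    annual_value = annual_revenue - annual_operating_cost
    return sum(annual_value / (1 + discount_rate) ** year
               for year in range(1, lifespan_years + 1))

# Example: a 500 MW mid-load plant running 3,500 hours a year.
npv = plant_npv(hours_per_year=3_500, capacity_mw=500, price_per_mwh=35.0,
                annual_operating_cost=45_000_000, lifespan_years=20,
                discount_rate=0.08)
print(f"Standalone net present value: ${npv:,.0f}")
```

The limitation is that this standalone figure ignores the plant’s interactions
with the rest of the portfolio.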

For the utility or integrated energy company (IEC) of the future, however,
such a simple approach to valuation is inadequate because it fails to account
for the plant’s role as part of the overall strategy for the variation under
which it operates. Under PVM, the executive team works within the proper strategy
for the variation of company it aims to become and implements specific initiatives
that can be measured quantitatively.

Targets for each of the selected value drivers — our control variables
— are set by management. The right control variables of shareholder value
for different companies are those value drivers that give executives and line
management levers they can directly control and that they know ultimately affect
the company’s market valuation.

This process helps ensure that there is a distinct, well-specified path that
guides the decision-making process from the requirements of the financial markets
that set the basis for valuation under VBM through these interdependencies to
the ultimate actions of managers and employees.

Communicating the Variables

In order to make PVM work, however, the information required to assess the
level of the control variable and how specific actions have changed it must
be communicated regularly throughout the organization. That way, the entire
flow of information set into motion by the initial VBM process — from the
financial markets to the individuals who can move the drivers — is complete.

Some companies are already using Web-based portals to communicate this information
on a real-time basis to executives across the company, and tying compensation
incentives to the direction in which these drivers move over time. One large
IEC is using executive portals to deliver strategic performance measures to
top executives. Measures used in each portal are designed to help executives
monitor how the company is performing relative to its targets and how it is
perceived by key constituents.

The variety of measures and display format will enable management to make more
timely and informed decisions. Financial and non-financial performance measures
linked to the company’s strategic initiatives are displayed in a balanced-scorecard
format and are updated monthly. Financial measures include total shareholder
return (TSR) and return on equity (ROE), while non-financial metrics include
measures to gauge leadership development, safety, and customer satisfaction.

For several measures, targets are tied directly to the IEC’s executive incentive
compensation program. These include ROE, TSR, and safety, with incentive targets
presented directly alongside each measure. Analysis and alerts are also provided
for key measures to provide perspective and explanation for recent trends and
to highlight performance issues. In this way, PVM can detect trends earlier,
avoid unexpected problems, and ultimately increase shareholder value by enabling
better-informed portfolio decisions.

How PVM Will Work

Ten years ago, utilities were conglomerates of regulated businesses that could
be evaluated on identical metrics (ROE, TSR, etc.). That’s no longer the case.
Because of the differences in the businesses being combined today and the complexity
of intertwining their goals and performance measures, PVM is emerging as an
important philosophy for integrated energy company executives to embrace.

As companies have spun off, acquired, and merged business units in response
to the shifting requirements of increasingly competitive markets and changing
regulatory structures, the natural reflex has been to pull focus down to the
financial results at business unit level. However, as real changes begin to
take hold, the focus will have to be brought back to the integrated enterprise
level. And that’s exactly what PVM helps utilities and IECs do.

PVM recognizes that nearly every asset will have some optionality. The key
to extracting the greatest value from the portfolio management approach will
be the willingness to value assets holistically and objectively, and to take
action based on how those valuations compare to established criteria and constraints.

The potential for great value in application of PVM is becoming apparent today.
For example, in looking at merchant power asset planning and budgeting, we see
that this is where the commodity is actually being produced, and the assets
themselves are the most mobile — that is, they are easy to buy, sell, and
integrate into a system. Here, an effective PVM program can help decision-makers
view the benefits and drawbacks of ownership and various operational strategy
variations for generation assets, and plan their investment and operational
budgeting activities in a way that exploits the form of that variation.

Distribution System Value

A distribution system, on the other hand, tends to be a very large and
geographically dispersed asset that is not easily parsed into smaller units,
but its management can also benefit from implementation of PVM. For example, to
the degree smaller distribution systems exist as islands within larger ones,
or adjacent systems have overlapping or checkerboard areas, properties may be
swapped to achieve efficiencies for both systems.

As distribution continues evolving horizontally — with regional and global
consolidation to achieve scale economies — distribution companies will
pursue a portfolio approach in acquiring and liquidating properties, as well
as making substantial investments to improve performance. Companies might seek
to acquire poorly performing properties with an eye toward improving them with
new technology and management practices. Then they can either sell such properties
or continue to operate them, as best fits their strategic objectives.

This does not imply that companies will haphazardly buy and sell major fixed
assets as if they were porcelain clowns on eBay, however. Recognizing that power
plants and distribution grids are not exactly liquid assets, they will focus
on a longer horizon and a strategic bigger picture than, say, a financial portfolio
manager might. An integrated energy company might bank certain properties because
they offer site, market, or risk mitigation characteristics that are expected
to be more valuable under alternative scenarios and conditions imposed as part
of the PVM assessment.

On the other hand, if the value of a certain business is less to one company
than what it may be worth in another company’s portfolio, then the portfolio
manager may wish to sell the business and reinvest the proceeds into other assets.

A truly effective portfolio manager should be willing to part with an asset
if the price is greater than the value to the portfolio.

The PVM Toolkit

In order to make such complex decisions, however, the decision-maker must have
the tools to do this consistently and effectively. While Zeeman’s Machine is
certainly nice to play with when stress levels increase, it’s not fundamentally
useful to the company’s steward of PVM activities. However, useful tools do
exist. There is a fairly substantial experience base in other industries in
making PVM work, and the tools used to perform the analysis required have generally
fallen into one of four categories:

Scenario Analysis

This is usually the starting point, as it is the most qualitative and, hence,
conceptually the easiest to grasp. However, even by itself, when done thoroughly,
it can be a powerful tool for evaluating strategic options.

One large company with which IBM has worked held a series of workshops to do
scenario planning in an attempt to assess enterprise performance improvement
opportunities. Their first set of meetings was designed around understanding
their competitive position in the markets in which they operated.

Another set of meetings was set up to brainstorm and develop a wide variety
of scenarios for future industry developments. These were then narrowed down
to a set of the most likely scenarios, and the steps required to make improvements
that would position the company to take advantage of each scenario were laid
out.

By looking at commonalities among the steps and the relative likelihood of
each scenario, a well-defined, economically supportable strategy for moving
forward is now being developed; even though the future is still uncertain, this
plan will position the company to be ready to capitalize on whatever comes.

Financial Portfolio Analysis Techniques

Standard financial portfolio analysis tools can be used successfully to screen
potential investments and measure projects against established strategic objectives.

For example, efficient frontier analysis considers the balance between value
and risk in the selection of optimal portfolios. The theory behind the efficient
frontier is that there is not one optimal portfolio, but many different portfolios
based on different levels of risk. A portfolio is considered efficient if no
other portfolio has greater value for the same or a lower level of risk. Similarly,
a portfolio is efficient if no other portfolio has less risk for the same or
greater value.

This theory is translated into practice by first determining those projects
or initiatives that could potentially be part of a forward-looking capital investment
strategy. The company would evaluate its overall corporate strategy to determine
basic elements such as life extension goals, cash flow requirements, total generating
or transmission capacity growth needs, and so on. Viable groups of projects
within budgetary constraints would be assembled to create individual portfolios,
each of which would be plotted in the context of risk versus total return.

Based on this, an optimal corporate portfolio along the efficient frontier
could be selected based on a chosen level of risk and used as a basis for further
decision-making. The goal is to evaluate the impact on total portfolio value
and overall risk of any major investment proposed, which, due to project interdependencies,
may result in changes greater or less than the value of the investment on its
own.
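
A minimal sketch of the dominance test behind the efficient frontier is shown
below in Python. The candidate portfolios and their value and risk figures are
hypothetical; a real analysis would derive them from the project groupings
described above.

```python
# Efficient-frontier screen: keep only portfolios that no other portfolio
# dominates on both value and risk. All figures are hypothetical.

def efficient_frontier(portfolios):
    """Return portfolios for which no alternative offers greater value
    at the same or lower risk (or equal value at strictly lower risk)."""
    efficient = []
    for name, value, risk in portfolios:
        dominated = any(
            v2 >= value and r2 <= risk and (v2 > value or r2 < risk)
            for _, v2, r2 in portfolios
        )
        if not dominated:
            efficient.append((name, value, risk))
    return efficient

candidates = [                            # (portfolio, value $M, risk score)
    ("A: life-extension heavy", 820, 0.30),
    ("B: balanced",             780, 0.22),
    ("C: growth heavy",         900, 0.55),
    ("D: dominated by B",       700, 0.25),
]
for portfolio in efficient_frontier(candidates):
    print(portfolio)                      # A, B, and C survive; D does not
```

Management would then pick one surviving portfolio whose risk level matches
the corporate appetite.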

Real Options Modeling

Real options modeling techniques look at each asset in the portfolio not as
a static revenue generator with set associated costs, but as a financial instrument
that allows flexibility in its use in terms of the overall portfolio. A single
asset or project in the portfolio can be deferred, abandoned, expanded, contracted,
or otherwise altered in response to changing market conditions.

Real options modeling allows for quantitative assessment of this optionality
and inclusion of its value in the assessment of portfolio change options over
time.
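
As a toy illustration of how optionality adds measurable value, the following
one-period sketch values an option to abandon an asset for a salvage amount.
The probabilities, values, and discount rate are all assumptions for illustration;
a real model would use a multi-period lattice or simulation.

```python
# One-period binomial sketch of an abandonment option.
# All numbers are hypothetical.

def value_with_abandonment(v_up, v_down, p_up, salvage, discount_rate):
    """Expected discounted value when the owner may abandon for salvage."""
    up = max(v_up, salvage)        # abandon only if salvage is worth more
    down = max(v_down, salvage)
    return (p_up * up + (1 - p_up) * down) / (1 + discount_rate)

# Static valuation: no flexibility, hold the asset in both states.
static = (0.5 * 120 + 0.5 * 40) / 1.08                 # ~74.1
flexible = value_with_abandonment(v_up=120, v_down=40,
                                  p_up=0.5, salvage=70,
                                  discount_rate=0.08)   # ~88.0
print(f"Option to abandon adds ~{flexible - static:.1f} in value")
```

The difference between the two figures is the value of the flexibility itself,
which a static NPV ignores.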

Integer Programming Algorithms

Integer programming algorithms dynamically evaluate many different combinations
of operational and project options under specific financial, resource requirement,
regulatory, or other constraints.

Integer programming is typically used because each project or asset can be
included in or excluded from the portfolio of interest (assigned an integer
value of 1 or 0 in the algorithm), and IP algorithms can then be developed to
efficiently iterate over the possible combinations and provide a menu of
best-choice portfolios.
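
A minimal sketch of this 0/1 selection appears below, using brute-force
enumeration so the include/exclude idea is explicit; a real integer programming
solver would prune the search rather than enumerate every combination. Project
names, costs, and values are hypothetical.

```python
# 0/1 portfolio selection under a budget constraint, by brute-force
# enumeration of include/exclude decisions. All figures are hypothetical.
from itertools import product

projects = [                     # (name, cost $M, value $M)
    ("repower unit 3",        120, 180),
    ("new peaking plant",      90, 110),
    ("transmission upgrade",   60,  95),
    ("demand-response pilot",  30,  40),
]
BUDGET = 200

best_choice, best_value = None, float("-inf")
for choice in product((0, 1), repeat=len(projects)):  # 1 = include, 0 = exclude
    cost = sum(flag * c for flag, (_, c, _) in zip(choice, projects))
    value = sum(flag * v for flag, (_, _, v) in zip(choice, projects))
    if cost <= BUDGET and value > best_value:
        best_choice, best_value = choice, value

selected = [name for flag, (name, _, _) in zip(best_choice, projects) if flag]
print(f"Best portfolio: {selected}, value ${best_value}M")
```

With these numbers the search selects the repowering and the transmission
upgrade ($180M cost, $275M value); interdependencies among projects could be
added as further constraints.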

Conclusion

Over the next decade, some utility companies will experience sudden, startling
jumps in profitability and industry position (positive or negative), just as
the disk in Zeeman’s Machine undergoes abrupt changes in position from time
to time. Many executives will continue to see these changes as random, just
like the first-time observer of Zeeman’s Machine tends to do.

By not understanding their company’s variation of business model, they will
spend too much time on analysis of the vast quantity of system variables they
glean from performance reports, and will fail to make decisions that will have
a substantial impact on the direction of the company.

The winners in this new environment, however, will understand their company’s
business model variation, focus on making the changes that positively affect
its own specific control variables, and exploit the interdependent nature of
the portfolio of businesses under their direction. By using PVM tools, they
will have the potential to distinguish between decisions that will lead to ordinary
(or worse) performance and those which will cause quantum leaps in profitability
and industry status.

These executives will be the ones who lead their companies to jumps in performance
that will soon put them among the industry’s elite, to the surprise of many
— but not to themselves.

After all, these seemingly random changes were predictable and controllable,
weren’t they? Dr. Zeeman would agree.

Partial Deregulation and Credit

Deregulation or, as it should be more properly called, restructuring of U.S. power
markets was supposed to induce greater efficiencies and reduce prices to consumers.
Restructuring should also foster greater innovation and investor confidence for
much-needed new infrastructure development.

Yet the U.S. transition to competitive wholesale power markets has stalled
(see Figures 1 and 2). Indeed, many states are retreating from restructuring,
and the Federal Energy Regulatory Commission (FERC), the Securities and Exchange
Commission, and politicians continue their vigorous investigations into trading
and accounting practices.

To some, the goals of restructuring probably remain frustratingly elusive,
and their concern may be justified. This paper will explore why credit risks
are likely to remain high in the power markets long after the current set of
concerns are resolved. We also believe raising capital will remain difficult
over the long term.

Already, widening credit spreads and equity prices that have fallen to multi-year
lows have sent investors fleeing from the sector. And credit risk is continuing
to intensify in the face of continuing regulatory and political uncertainty
and declining market fundamentals.

Investors may find that generation assets, particularly new construction, are
at risk of becoming partially stranded investments if they cannot access their
intended markets or simply cannot transact as intended because of inadequate
transmission.

Two Problems

Partial restructuring has created dysfunctional wholesale electricity markets.
This situation is largely attributable to two problems. First, the United States
has a transmission system that largely isn’t designed to operate in competitive
markets. Many markets allocate transmission usage using non-market-based means
while generation tries to operate competitively. In such a market, the premise
of investments may prove wrong and investors may find themselves at risk of
a credit surprise.

Second, only about one-third of the U.S. generation fleet operates in a competitive
environment; the rest still operates within the complacency of cost-of-service,
rate-of-return environments. That means that where merchant power plants operate
alongside utility-owned generation, an uneven playing field exists.

The competitive generation industry has suffered an unprecedented downward
credit spiral this year, with very few investment-grade players remaining and
some on the verge of bankruptcy. The power industry’s recent boom, which inflated
asset values beyond what normal markets could hope to sustain, led many
lenders to overvalue the collateral upon which they loaned capital.

The resulting cheap credit led energy merchant companies to aggressive borrowing
and high leverage. Most borrowing and lending relied upon two key assumptions
that didn’t materialize: first, competition and deregulation would spread quickly
and widely; and second, older coal-fired and nuclear power plants would permanently
retire.

Now, amidst the industry’s poor fundamentals, weak power prices, and surplus
generation capacity, expectations are growing for a record number of defaults.
Already lenders have suffered defaults of bank loans and capital markets bonds.
Still, billions more in loans have become problem loans — a signal that
likely portends more defaults to come and threatens the entire business model
of competitive power.

Despite transitional problems, expect restructuring to forge ahead. As some
unknown cowboy philosopher said, “It is easier to let the cat out of the bag
than to put it back in.” So it goes for deregulation and restructuring.

Major markets, mainly New England, New York, Pennsylvania-New Jersey-Maryland
(PJM), and Texas, are well into restructuring and aren’t turning back. Moreover,
the industry’s turmoil has neither completely consumed nor paralyzed FERC. FERC
is moving forward with restructuring as evidenced by the tremendous scope of
its standardized wholesale market design (SMD) proposal.

According to FERC Commissioner Nora Brownell, who spoke at a Standard & Poor’s
co-sponsored conference in New York, “The cost of doing nothing is greater than
the cost of doing something.” Brownell also stated that the vulnerability of
the entire market terrifies FERC and that the reality is that the current electricity
infrastructure will not support economic growth in the United States.

As FERC presses ahead with electricity reform, we’ve cautioned investors that
deregulation will not follow the paths of other restructured industries either
in the United States or abroad. Electricity is a unique commodity, if indeed
it can be called a commodity, and because of its differences, credit surprises
could be in the making, particularly if restructuring does not progress beyond
its stalled state. Certainly against the wave of defaults, imminent defaults,
and regulatory and political uncertainty, the industry may be hard-pressed to
raise capital for new investment that may be needed in a few years.

What Distinguishes Electricity

It would surprise few that restructuring the U.S. electricity market and introducing
competition has been difficult. For one, electricity is like no other commodity,
including traded financial securities or oil and gas. That means attempts to
draw analogies to the economic and physical behavior of common commodities,
such as oil and gas, metals, agricultural products, and financial products,
may come up short. Many rules that apply in these markets break down in electricity
markets, making the analysis of competitive electricity investment and credit
a thorny task. Consider the following:

• Electricity is extremely capital-intensive. The generation of the first
electron from a power plant can cost hundreds of millions of dollars.
• Electricity cannot be stored in any meaningful amounts. As a result,
some power plants rarely operate, while some run continuously.
• Electricity has a unique nondirectional aspect of transport. Somewhat
like air pollution, it follows the rules of physics and not those of various
jurisdictions or marketers.
• A powerfully inelastic demand for electricity governs consumer behavior.
There just are not viable substitutes for it.
• The current industry structure largely provides no real-time means to
adjust consumer demand based on price.
• Weather heavily influences day-to-day and hour-by-hour demand and, in
some markets, notably the hydro-rich Pacific Northwest, available supply.
• Electricity prices can exhibit extreme volatility at times because of
the above qualities, because its value to consumers changes hourly, and because
transmission availability can restrict power flows.
• Few commodities can incite the heated national, regional, and local debates
that electricity restructuring seems to inspire.
• The marginal cost of production drives the price of power down, except
during shortages, to levels that often fall short of capital recovery needs.

Such characteristics, particularly volatility, potentially turn what might
be a small credit concern in a functional market into a much bigger credit problem
in a partially deregulated environment. For example, a generator with an all-requirements
contract that cannot access transmission during a congestion period may find
itself paying a fortune in replacement power to satisfy contractual obligations
to its buyer.

Figure 1: Status of Electricity Restructuring — February 2001 (Source: FERC)

Coordination and Reliability

Electricity has another unique characteristic that complicates restructuring:
coordination. At all times, aggregate supply and demand must operate in equilibrium;
a system cannot generate more or less electricity than demand or else the system
shorts out. But the coordination function is not something left to the markets.

Utilities once closely coordinated their own generation and transmission investment
and operations through central dispatch and planning processes. In a competitive
market, utilities don’t necessarily own all, or any, of the generation and transmission
assets that service their franchise areas.

Now, according to the North American Electric Reliability Council, that coordination
and reliability task, which once provided the most reliable system in the world,
is being done by various means and numerous market participants, including energy
marketing and trading companies.

Reliability is a catchall term that the industry uses to measure the performance
of the bulk power system that delivers electricity to consumers when they want
it and in the amounts they need. It considers the performance of generation,
transmission, and distribution systems by measuring the frequency, magnitude,
and duration of adverse effects of the system, such as blackouts. A key component
of reliability is transmission.

An “incomplete transition to fair and efficient competitive wholesale markets”
has exacerbated transmission problems, according to the Department of Energy
in its May 2002 National Transmission Grid Study. The evidence is compelling.
Transmission loading relief actions (TLRs) are up — incidents where transmission
congestion prevents market-based transactions from occurring.

Frequent TLRs indicate that efficient generation may be curtailed and that
load-serving entities (distribution companies) have to pay more than is necessary
for electricity. FERC has observed instances where congestion causes the price
of power in one region to be higher than surrounding regions at the same time.

In two relatively well-functioning markets, New York and PJM, FERC estimates
that transmission-related congestion problems cost consumers about $2 billion
in 2000. In New York alone during the summer of 2000, the estimated cost was
about $700 million. Blackouts in California during 2000 and 2001 and unprecedented
high electricity prices throughout the western states also evidence an incomplete
transition.

During the first half of the 1900s, power engineers designed the current transmission
hub-and-spoke systems for vertically integrated utilities that serviced their
own franchise territories. Central coordination of generation and transmission
worked well in that environment where ratepayers could absorb the costs of inefficient,
albeit effective, operations.

However, competitive trading and marketing of power requires transmission to
move power in varying amounts and in different directions throughout the day.
These patterns significantly differ from patterns contemplated when transmission
was built. Without that capability, it is difficult to see how deep and liquid
power markets will develop, regionally or nationally.

That suggests that trading and marketing companies may be riskier than measures
of value-at-risk suggest. Moreover, calculations of mark-to-market would be
suspect if transmission were not “competitive-capable” (could wash trades be
an unintended consequence of partial restructuring?). Consider the folly of
building a just-in-time manufacturing plant with no ability to store production
and which could not rely on trucks and highways to move goods.

Figure 2: Status of Electricity Restructuring — March 2002 (Source: FERC)

What FERC Is Doing

The California debacle and the Enron bankruptcy have strengthened FERC’s restructuring
resolve. FERC and others have acknowledged that a lack of a standardized wholesale
market design is impeding development of a well-functioning wholesale market.
FERC wants a wholesale market that recognizes real physical differences in regional
markets, but with a single tariff design.

The commission opened the way for competitive wholesale power through FERC
Orders 888 and 889. FERC Order 2000 continued restructuring by encouraging the
transition from more than 140 control areas to the development of four to five
regional transmission organizations. The United States Supreme Court recently
strengthened prior FERC orders by affirming open access for independent power
and traders to utilities’ transmission lines. FERC’s proposed standardized wholesale
market design is the next step.

FERC stated that it wants to establish a common market framework within which
all customers can benefit from an efficient and well-operating competitive wholesale
market, regardless of the state of retail access. FERC seeks to develop standard
rules and a uniform tariff and require that transmission systems operate independently
of the market participants they serve.

A standard market design would provide a system that generates price signals
that reflect the temporal and locational value of electricity — transmission
congestion pricing. And finally, the SMD would reflect regional characteristics,
such as the generation and fuel mix and use patterns.

Gaining consensus among industry participants and stakeholders will be a major
challenge for FERC. In fact, since issuing the SMD Notice of Proposed Rulemaking
(NOPR), FERC has had to acknowledge that there will have to be regional differences
in how markets are designed. That said, many states and their regulators are
firmly opposed to the SMD NOPR.

Unlike electricity industries in other countries, the U.S. electricity industry
operates in a complicated regulatory setting, which is frustrating FERC’s efforts.
In addition to FERC at the national level, 50 states and many municipalities
also regulate the industry. Nonprofit entities alone own about 30 percent of
the industry, while the investor-owned utilities, independent power companies,
and energy merchants own the remaining 70 percent. About 25 percent of the industry
sits outside of FERC’s authority.

Transmission-Related Credit Risks

According to FERC and others, a transmission system that cannot meet competitive
wholesale electricity market needs could be causing any one of a number of problems:

• In markets with bilateral contracts between sellers and buyers (which
includes most markets), a non-price allocation of capacity creates congestion
during peak times, which means that some sellers and buyers are curtailed and
prevented from transacting in the market.
• Systems that manage congestion based upon non-economic criteria hide price
signals that tell investors where to build generation and transmission assets.
Therefore, a risk exists that either investments get built in the wrong locations
or they do not get built at all.
• Market congestion is likely contributing to illiquid markets, which hinders
the development of efficient (and perhaps profitable) trading and marketing.
• In some markets where they own both generation and transmission, utilities
have incentives to discriminate against merchant generators in providing transmission
services.
• Adjoining regions with different regulations, computer systems, market
rules, software, and reliability objectives will impede trade and introduce
economic inefficiencies in a way not dissimilar to international trade barriers
(referred to as a seams problem). In extreme cases they may give rise to gaming
incentives with names such as Fat Boy, Ricochet, and the like, which were popularized
by Enron traders.

FERC’s implementation of a standard market design could drag on for years.
Utilities with low-cost operations will be challenged to find an upside in embracing
competition and connecting with larger markets that could raise costs to their
customers. Their regulators may see little benefit to the local markets in participating
in regional transmission projects where multiple jurisdictions are involved.

Some “low cost” utilities may find that their low costs are transitory
if environmental compliance raises their cost of generation. Meanwhile, politicians
will be quick to resist any restructuring that carries a risk of increased retail
rates for their constituents.

The Dual-Universe of Generation

Except for certain pockets, such as in the New York City area, Boston, San
Francisco, Wisconsin, and Southwest Connecticut, many of the nation’s power
markets have too much capacity, and that will generally pressure credit. Power
prices have largely been driven down to the marginal cost of production. That
means that power plants selling into depressed markets are likely earning little
or nothing toward their fixed costs.

Such a situation should force unprofitable plants into retirement, and that
would be the case in a truly competitive, restructured power market. However,
about two-thirds of U.S. generation still operates in a cost-of-service, rate-of-return
environment, which means that ratepayers carry the fixed costs of generation
owned by vertically integrated utilities.

A recent trend away from deregulation in some states is allowing utilities
to reintegrate vertically. That could exacerbate credit pressures for some merchant
generators, such as in Arizona and New Mexico. Potentially worse is the possibility
that announced retirements, to the extent there are any material ones, may
actually be plants going into a mothballed state, standing by in case power
prices recover.

Merchant power plants, even those with efficient heat rates, will find it difficult
to compete in markets where regulated utility generation operates alongside them
unless demand begins to rise or power plants permanently retire. Utility-owned
generation need only bid its marginal production costs into the market.

Such a bid strategy will tend to drive down energy prices in a way that will
hurt unregulated, independent merchant generation. In markets that have forced
utilities to completely divest, such as New England, this dual universe of generation
ownership should be less of a problem.

As an aside, one of the intended benefits of deregulation has been the
introduction of competition among wholesale generators. Without the complacency
of cost-of-service operating environments, the new owners of older utility generation
have focused on improving operations. A visible sign has been the improvement
in generation availability and load factors for base load plants, particularly
nuclear and coal.

While many independent market studies that accompanied the new gas-fired merchant
power plant construction assumed that older plants would retire, just the opposite
has happened. Many have extended their lives and now produce more low-cost power
than before — a situation that has also contributed to excess generation
and credit pressures in some markets.

Outlook for Credit

What does the restructuring of the U.S. power sector hold for credit? At a
minimum, restructuring is a lengthy work-in-progress that could take many turns
along the way. The U.K.’s electricity regulator, for instance, has made numerous
changes since deregulation began in 1991 to its industry structure that affected,
and continues to affect, the credit of most participants. But the effects were
not uniform; Standard & Poor’s expects that U.S. restructuring also will affect
credit differently for industry participants, as it already has. Much of the
effects will rest with how participants themselves choose to capitalize their
firms, as well as in what parts of the industry they chose to enter and in what
markets they elect to compete.

Nonetheless, the interim state of U.S. restructuring has left a cloud of uncertainty
over investors and lenders. That in and of itself raises credit risk. The absence
of a standardized wholesale market design, which includes an efficiently operating
transmission system, and an uneven playing field in wholesale generation raises
credit concerns because of unintentional results, which include:

• Some buyers and sellers of power cannot enter into some transactions
when power is most expensive because transmission is allocated using non-market
methods.
• Lack of a standardized wholesale market design is masking price signals
that either lead to infrastructure investments in the wrong locations —
or of the wrong type — or investments not being made.
• The effective subsidization of utility-owned generation creates an uneven
playing field for merchant generation in the wholesale market.

Look for FERC and many states to press ahead with restructuring, but expect
progress to be slow. In particular for FERC, restructuring will remain a major initiative
even after the current turmoil and collapse in power prices subside. In the
meantime, however, credit risk will not likely abate, and raising new capital
will be a challenge.

We Need Standard Market Design

No doubt in the future we will look back at this time as a turning point in
the history of America’s $225 billion electricity industry. The past decade’s
overly long transition to competition after a century of hands-on rate regulation
has been uneven and badly tainted by California’s painful market restructuring
missteps.

Competition in the electricity sector has been dealt a one-two punch, with
the stunning financial collapse of Enron Corporation coming close on the heels
of California’s electricity crisis. Wall Street has punished the entire energy
industry for the bad actions of a few. The public and even some policy-makers
have lost confidence in competitive markets as a driver of efficiencies that
lower costs for customers.

Given the vital importance of reliable, affordable electricity — the lifeblood
of our economy — we must not fail in completing the transition to competitive
markets. Competition will promote adequate investment in the electricity sector
to modernize the power grid, assure customers pay the lowest possible price
for electricity, and reduce manufacturing costs and protect jobs.

We must not let the problems of Enron and California allow us to lose sight
of other, highly successful market restructurings elsewhere in the nation and
internationally. Competitive energy market reforms have produced billions of
dollars in economic savings, and completing the transition to competition promises
to bring significant future economic stimulus and technological innovation.

An about-face to cost-of-service ratemaking is not a feasible option and flies
in the face of recognition that more than 100 years of regulation failed to
guarantee customers the lowest costs possible. And halting the transition
in mid-stream would promote market instability, higher prices, less reliability,
and costly litigation. The nation has no other choice but to finally complete
the transition to competition and make wholesale power markets work to the benefit
of customers.

The Proposed Rule

It is with this goal in mind that the Federal Energy Regulatory Commission
unanimously proposed new rules to standardize the structure and operation of
competitive wholesale power markets nationally. Our proposal for a Standard
Market Design represents the agency’s commitment to markets and marks our determination
that costly market dysfunctions such as California’s never happen again.

The rulemaking is a necessary step to restore public confidence in competitive
power markets by assuring adequate generation resources and establishing a
standard platform for the exchange of electricity and transmission services.

Its fundamental elements include active monitoring and mitigation to prevent
market abuses, well-organized spot power markets that complement a decentralized
contracts-based market for long-term power supplies, and price discovery and
market transparency.

The market standardization proposal will set the “rules of the road” and
spur sorely needed investment in electricity generation and transmission by
providing regulatory certainty and earnings opportunity.

History proves this approach works. The commission initiated a similar evolution
away from inefficient regulation to competition in the natural gas sector
more than a decade ago. Those competitive reforms lowered prices and provided
billions of dollars in customer savings. One estimate places the aggregate
customer savings from natural gas restructuring since 1984 at $600 billion,
or about $6,000 in savings for the average family.

The Department of Energy estimates that competitive reforms to date in the
electricity sector have produced $13 billion in annual economic savings. This
offers a key indicator of the potential savings from completing the competitive
transition.

In my home state of Texas, where the retail market opened to competition
in January 2002, customers within a matter of months were paying 10 percent
less for electricity than the previous year despite rising fuel costs. The
Texas restructuring effort also prompted a record upsurge in investment by
merchant power producers.

The Pennsylvania Public Utility Commission found that state’s Electric Choice
program produced more than $4 billion in electricity savings over five years.
In the PJM Interconnection, competition and the anticipation of competition
since 1996 have reduced the forced outage rate for generation by nearly 5 percent.
Competitive pressures have contributed to better operations and greater plant
utilization.

Similarly, on a national basis, wholesale market competitive pressures have
contributed to nuclear power plant operators establishing record capacity
utilization, confounding predictions that competition would spark early nuclear
plant closures.

Failure to complete the transition to competition puts these and many other
benefits at risk and will perpetuate problems that threaten our nation’s ability
to maintain economic growth.

High Costs of the Status Quo

Consider transmission: investment in new lines has not kept pace with electricity
demand growth. In the past decade, transmission investment grew a mere 6 percent
as electricity demand surged 20 percent. With transmission representing only
between 5 and 10 percent of the delivered price of power, we should not let
such comparatively small costs become a barrier to overall customer savings
from lower-cost energy.

FERC staff found the top 10 transmission constraints cost customers more than
$1 billion during the summers of 2000 and 2001. Congestion-based pricing for
transmission helps assign the cost of these bottlenecks to those who cause the
congestion on the grid.

The White House National Energy Plan cited a need for 393,000 megawatts of
new generation by 2020, or the equivalent of 1,300 to 1,900 new power plants.
More than a third of generating plants are 35 years old or older. They will
need to be repowered or otherwise rehabilitated to remain competitive. And the
industry will need to invest billions in coming years to meet new clean air
standards. We need a standard competitive framework that allows rational investment
decisions.

Clearly, there is no alternative to rolling up our sleeves and, learning from
experience, laying down rules that make markets work.

A leading example of the new market design FERC envisions can be found in the
effort to form an integrated wholesale power market in the Midwest and Mid-Atlantic
regions. That market integration is conservatively projected to save customers
in 26 states $7 billion over the next decade.

Making Markets Work

The Standard Market Design is the third chapter in FERC’s electricity restructuring
saga. The first came in 1996, when the commission issued Order No. 888 to open
up transmission lines to competing power providers on a nondiscriminatory basis.

But FERC soon realized that more must be done to truly ensure effective competition
in electricity. In December 1999, the commission issued Order No. 2000, directing
utilities to turn operation of their transmission lines over to independent
management by regional transmission organizations (RTOs).

In addition to eliminating the residual ability of utilities to use their control
of transmission lines for commercial benefit, RTOs promise to bring efficiencies
to the current state of operations of the nation’s interconnected power grid.
With seamless trading within and between regional markets,
transmission customers will avoid so-called “pancaked” rates in which fees are
paid to each individual transmission-system owner.

The RTOs now well on their way to being implemented in various regions of the
country are integral elements of the commission’s Standard Market Design. RTOs
will enhance competitive access and power-grid reliability. These independent
transmission providers will administer competitive spot markets for wholesale
power and help the commission police against potential anticompetitive actions
by market participants. Truly independent governance of RTOs is a bedrock principle
of Order No. 2000 and the proposed Standard Market Design.

The third and final chapter, FERC's Standard Market Design proposal, builds
on the wealth of experience the commission has accumulated over nearly a decade
of overseeing competitive wholesale power markets. The proposal marks an end
to a period of state- and regional-level experimentation with competitive wholesale
electricity markets. FERC's standardization incorporates a "best practices"
framework based on the experience gained from years of experimentation, while
providing necessary flexibility to account for regional differences.

Wholesale Electricity Markets

Under the Standard Market Design proposal, an independent entity will administer
spot markets for wholesale power, ancillary services, and transmission congestion
rights; a real-time balancing market to maintain reliable operations of the
power grid; and a separate day-ahead market. These will complement bilateral
contracts for long- and short-term energy purchases.

The voluntary centralized spot-power market is a “security-constrained, bid-based”
system. “Security-constrained” assures that energy transactions will not jeopardize
grid reliability, while “bid-based” describes the proposed auction for imbalance
energy. Power will be bought and sold through a power auction in which buyers
and sellers bid the price at which they will buy or sell power in any hour.
The market-clearing price will be revealed to all supply-and-demand-reduction
sources to encourage efficient short- and long-run operations and provide market
transparency.

The commission’s proposal also provides a mechanism to curb potential runaway
market volatility, much like the so-called “circuit-breaker” tool used by the
New York Stock Exchange. This proposed mitigation measure would bar power providers
from bidding to supply power at prices higher than $1,000 per megawatt-hour.
Such a $1,000 volatility check is already in place in the Northeast states and
Texas.
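
To make the mechanics concrete, here is a minimal sketch, with invented offer
data, of how such a uniform-price auction clears and how a $1,000 bid cap would
bind. It illustrates the concept only; it is not FERC's actual market software.

```python
# A minimal sketch of a uniform-price ("bid-based") spot auction for one
# hour, including the $1,000/MWh bid cap described above. The offers and
# demand figure are hypothetical, not an actual market case.

BID_CAP = 1000.0  # $/MWh ceiling on supply offers

def clear_market(offers, demand_mw):
    """Stack capped offers from cheapest to costliest until demand is met;
    the last offer accepted (the marginal unit) sets the clearing price."""
    capped = [(name, mw, min(price, BID_CAP)) for name, mw, price in offers]
    capped.sort(key=lambda o: o[2])          # merit order: cheapest first
    served = 0.0
    for name, mw, price in capped:
        served += mw
        if served >= demand_mw:
            return price                     # paid to every accepted seller
    raise RuntimeError("insufficient supply offered")

offers = [("hydro", 300, 12.0), ("coal", 400, 25.0),
          ("gas_cc", 300, 40.0), ("peaker", 200, 90.0)]
print(clear_market(offers, demand_mw=900))   # -> 40.0; the peaker never runs
```

The cap never binds in this example, but any offer above $1,000 would simply
be treated as a $1,000 offer.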

But the vast majority of power transactions will still be made under bilateral
contracts negotiated between buyers and sellers. Energy delivered under these
contracts will have to secure transportation between generator and customer,
which can be assured by obtaining rights to transact between those points. Firm
transmission rights, or FTRs, are tradable financial rights for transmission
between two points on the grid over a particular period of time, and lock in
a fixed price for the transmission, making the FTR holder indifferent to the
cost of congestion over that pathway.
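
A common formulation of this hedge, shown here with hypothetical prices, settles
a point-to-point FTR as the price difference between its two endpoints times
its megawatt size:

```python
# A sketch of the FTR arithmetic, with hypothetical prices. A point-to-point
# FTR is commonly settled as (sink price - source price) x megawatts, which
# exactly offsets the congestion charge on the same path.

mw = 100.0          # size of the transaction, MW for one hour
lmp_source = 30.0   # $/MWh at the generator's end
lmp_sink = 55.0     # $/MWh at the load's end of a congested path

congestion_charge = (lmp_sink - lmp_source) * mw   # cost of moving the power
ftr_payout = (lmp_sink - lmp_source) * mw          # what the FTR pays its holder

print(congestion_charge - ftr_payout)  # -> 0.0: the holder is fully hedged
```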

Transmission Service

The proposal creates a new, universal form of transmission service to replace
the two types of open-access transmission tariffs provided for under Order No.
888. The new form of a network transmission service tariff combines elements
of the existing network and point-to-point services available under Order No.
888, and allows all wholesale power sellers to use the grid much as transmission-owning
utilities do.

An important change in the proposed new transmission service is the commission’s
call for all transmission uses to be scheduled under the network tariff. Thus,
transmission service in support of both wholesale and retail transactions will
fall under a common tariff for the first time. The commission’s assertion of
jurisdiction over all interstate uses of transmission acts on an invitation
from the Supreme Court in its decision upholding Order No. 888.

This change will prevent transmission providers from reserving more capacity
than needed to serve native load retail customers as a means of blunting competitive
entry at wholesale. Remedying undue discrimination in interstate commerce requires
that the commission assert its authority over bundled transmission.

The commission proposes to eliminate point-to-point transmission service as
a standalone service. This addresses a key concern that utilities, after denying
point-to-point service to a competing power provider, use the information provided
by the transmission customer to sell power to their competitor’s intended customer.
Under the Standard Market Design proposal, all transmission users will be able
to schedule power deliveries using multiple receipt and delivery points, providing
the same operational flexibility enjoyed by transmission owners.

A primary difference between the existing form of network service and the new
proposed tariff is the feature creating a market for firm transmission rights
to lock in a fixed price for transmission across power-grid bottlenecks.

By allowing transmission customers to lock in transfer rights across congested
transmission pathways with FTRs, and by promoting a secondary market for those
firm transmission rights, the commission lets the market assign a value to the
congestion, which signals investment needed to relieve the bottleneck.

The proposed market design also incorporates locational marginal pricing (LMP),
a form of congestion pricing, which allows more efficient management of the
transmission grid. LMP provides price signals indicating where investment in
generation and transmission is needed to improve grid operations. LMP has proven
to be the most reliable and efficient market design, and it minimizes opportunities
for market manipulation.

The proposal further provides an incentive for power grid enhancement by allowing
the companies that invest in new transmission to retain the fixed rights to
the added power-transfer capacity. By providing predictable rules and clear
rewards for investment, the commission expects the proposal to speed the necessary
expansion of the nation’s highly interconnected electricity grid.

This congestion pricing and management approach should dramatically reduce,
or even eliminate, the need for curtailment of transactions as a means of preserving
power-grid stability. Recent commission staff studies found the use of transmission
loading relief, or TLR, procedures increased markedly in the Midwest and Southeast
in recent years, suggesting that transmission owners are exercising the option
for more than grid reliability purposes.

Demand Reduction

To promote long overdue investment and avoid over-reliance on the spot market
auction, the commission establishes a mechanism to ensure sufficient resources
will be on the system when needed in the future. This approach will help avoid
undue price volatility in the spot markets.

A watershed feature in the proposal encourages the use of demand reduction
to meet the resource adequacy requirement. Reducing electricity usage shaves
the peaks in high-demand periods when prices typically spike, and thus dampens
high prices and moderates price volatility. The commission proposes that demand
reduction be bid into the spot market in addition to power supply. Demand-reduction
mechanisms can include electing for service interruption or taking steps to
lessen electricity use.

By making certain segments of the market receptive to price signals, the commission
aims to address a key inefficiency in electricity markets: “inelasticity.” Most
electricity customers today buy their power at a fixed retail price. Whether
wholesale prices are high during periods of peak demand or low during slack
demand, the retail price stays the same. Economists have consistently
urged that customers be exposed to price signals that would curb demand for
electricity during periods of peak demand when prices tend to spike as a means
of providing “elasticity” in the market.
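
A short sketch, again with invented numbers, shows how a demand-reduction bid
would interact with the supply stack:

```python
# Hypothetical arithmetic for the proposal above: when demand reduction can
# be bid into the spot auction alongside generation, a curtailment offer can
# displace the costliest plant and lower the clearing price for everyone.

def clearing_price(offers, demand_mw):
    served = 0.0
    for mw, price in sorted(offers, key=lambda o: o[1]):  # cheapest first
        served += mw
        if served >= demand_mw:
            return price
    raise RuntimeError("shortage")

supply = [(500, 20.0), (300, 45.0), (100, 400.0)]  # last unit is a peaker
print(clearing_price(supply, 880))                 # -> 400.0, supply only

# A factory offers to curtail 100 MW at $120/MWh, clearing before the peaker.
print(clearing_price(supply + [(100, 120.0)], 880))  # -> 120.0
```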

Decisions about demand-reduction programs would be left to state and regional
transmission planners. Most of those customers opting for exposure to potentially
volatile real-time pricing are expected to be sophisticated industrial entities
initially, although similar opportunities are expected for average household
and commercial customers over time.

But FERC can’t complete this needed transition from regulation to competition
alone.

State and federal regulators must work together so customers, whether or not
they live in states with retail competition, reap the benefits of a national
marketplace for wholesale electricity. Through a coordinated approach to power
markets, adequate and reliable supplies of electric energy at just and reasonable
prices can be assured while respecting the unique characteristics of regional
power markets.

Without this cooperation, we risk continued under-investment in generation
and transmission, divergent rules that promote discriminatory business practices,
and lost economic gains for customers across the nation.

We have an opportunity now to make markets work in the interest of customers.
Our nation will succeed in this important endeavor.

The Myth of Deregulation

In 1996, California passed the nation’s first and boldest electricity deregulation
law. As we all know now, the state jumped off a cliff with no parachute, and the
landing was a hard one. By the time the Federal Energy Regulatory Commission belatedly
intervened with price caps, Californians had spent $40 billion more for electricity
than they would have under regulation.

Deregulation’s supporters have maintained that it was California’s mistakes
that caused the failure of deregulation. But a closer look at what happened
in California reveals that the fatal flaws are not in California’s form of deregulation,
but in deregulation itself. This paper will challenge the fundamental arguments
underlying ongoing efforts to restructure the electric industry.

California’s catastrophe hasn’t stopped market boosters from trying to re-ignite
enthusiasm for electricity deregulation. Explanations for the crisis included:
glitches in the design of the PX (the California Power Exchange), shortages resulting from droughts in the Pacific
Northwest, and an unprecedented rise in natural gas prices. Some even ventured
to say that the elements of the “perfect storm” had coalesced.

The truth is that California’s regulated system had weathered such storms before
and had managed to keep rates relatively reasonable and stable.

Crisis and Revelation

If 2000 and 2001 were the years of crisis, then 2002 was the year of revelations.
Investigations showed the perfect storm wasn’t a natural market phenomenon but
more a product of human greed and malfeasance. Further, the loss of state oversight
curtailed the ability of regulators to fix the problems.

Where did the $40 billion go? Even the bad guys themselves became market victims.
Like a black hole, this failed policy sucked in even those who had initially
profited. The energy sector has started imploding at a dizzying rate. Approximately
$25 billion in debt by merchant generators will come due during 2003. Surplus
energy with soft prices remains on the near horizon. In one company’s case,
new plants will be turned over to bankers upon completion.

None of this should be a surprise. Sound economic analysis would predict the
dismal failure of deregulation — in California or elsewhere — for
a host of rather obvious reasons.

Competition and Capacity

In industries such as electric power, pure competition cannot result in an
efficient level of capacity, nor can it result in just and reasonable rates.
To get a profitable level of capacity the generators must cooperate to run enough,
but not too much, plant. Greed may have driven this to an extreme in California,
but keeping plants offline to fix prices is necessary for profits everywhere.
Manipulation is, in effect, required under deregulation.

The promise that eager competitors — “new entrants” — would add capacity
and keep prices in line was an empty one, as evidenced by widespread cancellations
of scheduled new plants. Wall Street, rather than system planners or even risk-taking
merchant builders, will decide what capacity will be built. Profits, not reliability,
will drive additions.

Recovery of investment in long-lived facilities like generation and transmission
requires years of useful operation. This requires a predictable customer base,
assured through contracts or through a franchise that protects the base. Merchant
generators, selling into a market, can no longer be financed.

Choice was promoted as a benefit to customers — a powerful force that
would drive demand for superior products, quality, and services. The powerful
and informed customer would not only get the pleasure and satisfaction of having
desires met, but would also discipline and direct the market. There are several
problems with this analysis.

First, customers can’t effectively exercise the most important choice, which
is to not consume at all. Electricity is an essential product; when we need
it, we have to have it. In 2000 and 2001, companies took advantage of that fact
and turned off California’s lights in order to command outrageous prices. In
the end, unless the market functions perfectly, which it cannot, customers must
suffer the consequences.

Second, electricity is a fungible product. Although the largest customers may
demand voltage stability and reliability that small customers do not, there
is no evidence to suggest that this will be any more attainable in a deregulated
environment.

Small customers, with few exceptions, will not put effort into shopping for
products that are by and large indistinguishable from each other. Only where
retail choice promises environmentally sustainable generation have small customers
demonstrated any interest in their choice option. And after five years of experimentation,
it is clear that regulatory commitments to develop sustainable resources are
much more effective and efficient than niche marketing.

Finally, choice cannot, as proponents have argued, discipline the market. Marketing
to small customers represents substantial transaction costs. There is no incentive
for competitors or their agents, assuming they ever emerge, to target individual
small accounts.

Default Customers

For all these reasons, small customers usually default to the utility that
has for decades served in the role of default provider with an obligation to
serve. Market enthusiasts do not adequately protect default customers. Under
a regulatory scheme, these customers are protected by rules requiring that the
utility justify its revenue requirement and the allocation of that revenue requirement
among customer classes. In a deregulated market, default customers are forced
to absorb costs that are shifted from attractive customers who might be able
to negotiate lower rates.

The system envisioned under unregulated electricity markets makes coherent
planning for resource development or procurement impossible. Without long-term
commitments, energy service providers cannot predict how much capital to commit
to the development of new resources. The loss of the ability to pursue integrated
resource planning in the context of reasonable projections for increased demand
is a tragedy of the deregulated system. It could mean that the system would
ricochet between high prices with tight supply and soft prices in surplus supply.

More Problems Ahead

The crowd that pushed for deregulation is undaunted. Some have been indicted
and more have been disgraced, but many prominent advocates still insist that
deregulation can be done right. A brief look at the fixes they propose shows,
however, that the current advice is no better than what came before. Most of
the fixes require regulation to work well, or at all.

The fixes proposed to make deregulation work include excess capacity reserves,
long-term contracts, real-time pricing, vigorous antitrust enforcement, auctioning
of the default market, and independent grid operators. However, only the last
will deliver benefits for the public.

Market proponents have frequently remarked that the lesson to be learned from
California is “don’t start deregulation without excess capacity.” But California
had excess capacity when it started deregulation, and the market failed.

Solutions aimed at fixing the problems in California won’t fix the essential
problems with deregulation.

Capacity Reserves

It makes sense to have capacity reserves for the expected peak, plus a bit
more for the unexpected demand or for the loss of capacity due to equipment
failure. Regulated utilities built facilities to meet that required reserve.

One of the arguments for deregulation was that it would eliminate excess capacity
that existed only, it was claimed, because utilities were regulated. Now the
same voices that argued for the elimination of excess capacity are arguing that
capacity reserves are required to make deregulation work.

Developing and maintaining capacity reserves requires each customer or service
provider to pay the cost. But the generators are to be free to refuse to commit
to supplying reserves unless they get the price they demand. Thus, this “fix”
puts us right back in the danger of supplier manipulation. Because the generators
benefit from tight capacity, they will refuse to build until they get a guaranteed
high price.

Long-Term Contracts

Long-term contracts are the preferred fix of many. A service provider should,
they say, lock up long-term contracts to avoid the volatility of the spot market.
However, if the long-term contracts are at prices above the spot market — as
they might be from time to time — then individual customers can leave
the supplier and shop on the spot market. The supplier then has high-priced
power it can’t sell.

California found itself in this position because of long-term contracts signed
by the Governor. Texas Utilities lost $2 billion bailing out of the United Kingdom
in November as its customers fled to avoid high-priced contracts. Customers
must be locked to a supplier, or the supplier can’t commit to a contract for
power. Essentially, deregulation must be stopped in order to save it. This is
not a fix.

Real-Time Pricing

Economists advocating real-time pricing describe it as “allowing” customers
to see the real cost of electricity. Discouraging consumption when capacity
is tight makes a lot of sense. But real-time pricing can’t work in a deregulated
market.

The problem is this: The peak-period price, under this scheme, is the cost
of power from the most expensive plant dispatched. This is the pricing scheme
that led to manipulation at the California PX. All the lower-cost plants get
the high price paid to the costliest plant. Most of the power, however, will
come from cheap base-load plants, and the average cost will be much lower than
the peak price set by real-time pricing. Profits during the peak hours will
not be connected to the cost of production. What’s worse, this pricing scheme
is a direct invitation to manipulate the market by withholding capacity, an
instant replay of the California crisis of 2000.
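
Some invented numbers make the incentive plain:

```python
# Made-up numbers illustrating the argument above. Under a uniform clearing
# price, every dispatched plant is paid the costliest plant's price, so
# withholding a cheap unit can raise revenue even while selling less power.

def auction(offers, demand_mw):
    """Return (total revenue, clearing price) for a uniform-price auction."""
    served, price = 0.0, 0.0
    for mw, cost in sorted(offers, key=lambda o: o[1]):
        served += mw
        price = cost
        if served >= demand_mw:
            break
    return price * min(served, demand_mw), price

fleet = [(400, 15.0), (300, 30.0), (200, 60.0), (100, 250.0)]
print(auction(fleet, 800))      # price 60, revenue 48,000

# Withhold 200 MW of the 30-dollar unit: the 250-dollar unit now sets price.
withheld = [(400, 15.0), (100, 30.0), (200, 60.0), (100, 250.0)]
print(auction(withheld, 800))   # price 250, revenue 200,000
```

Total payments to generators quadruple even though less of the cheap capacity runs.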

Curiously, peak-load pricing can work effectively, but only under regulation,
where it produces a quite different outcome. High on-peak prices still exceed
costs during the on-peak period. But since regulation imposes a profit constraint,
the resulting profits must be passed back to customers through lower prices in
other periods. Under regulation, the motive to manipulate disappears.

Notice the potential for cross-subsidy here. On-peak prices paid by small businesses
would subsidize process industries like refineries and cement plants running
around the clock. Utilities have historically supported subsidizing large customers
at the expense of smaller ones in order to grow sales. The regulator's art would
be to combine prices and the length of the peak and off-peak periods to produce
the desired outcome — just and reasonable rates.
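
A back-of-the-envelope sketch, with made-up figures, shows how a regulator might
set the off-peak price once the allowed revenue and peak price are fixed:

```python
# A sketch of the regulator's balancing act, with invented numbers: given a
# profit constraint (an allowed total revenue), a chosen peak price, and the
# split of sales between periods, the off-peak price follows by arithmetic.

allowed_revenue = 1_000_000.0        # $ per month under the profit constraint
peak_mwh, offpeak_mwh = 4_000.0, 16_000.0
peak_price = 120.0                   # $/MWh, set high to discourage peak use

offpeak_price = (allowed_revenue - peak_price * peak_mwh) / offpeak_mwh
print(offpeak_price)  # -> 32.5 $/MWh: peak profits flow back off-peak
```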

Default Markets

Like long-term contracts, the default market curtails choice and undoes a basic
premise of retail competition. Individual customers cannot be profitably acquired
or served, but large groups of individual customers are very profitable for
a generator or electric service provider. For these reasons, adherents to the
market solution have come up with the concept of facilitating competition by
holding auctions for the opportunity to serve the default market.

Instead of freedom to choose, the individual is instead auctioned to the lowest
bidder. This solution exposes the fallacy of choice and competitive markets
and makes it abundantly clear that deregulation is promoted to serve market
participants, but not necessarily the customers they gain title to.

Standard Market Design

Looking beyond generation, FERC focuses Standard Market Design on locational
marginal pricing (LMP). This combines two problems. First, it applies marginal-cost
pricing to a segment with large overhead costs, so total costs must still be
recovered through a regulated return on investment, regardless of the actual
electrons traversing the wires. LMP also ignores that transmission lines can
provide essential service to the system even when carrying no current. Second, SMD makes
Wall Street financiers, rather than system planners, responsible for what transmission
gets built. This would be the end of least-cost planning.

Vigorous Antitrust Enforcement

Antitrust enforcement comes after an abuse, rather than preventing one. The
weak record of antitrust enforcement, the rapid consolidation that has already
taken place, and the further mergers inevitable given the financial condition
of the industry suggest that antitrust enforcement won’t be able to control
a very tight oligopoly.

Conclusion

The final tragedy and the best argument for other jurisdictions to scuttle
plans for deregulation is that it is so hard to rebuild the system after it
has been destroyed. The current climate of high-profile revelations, investigations,
and prosecutions proves only how small the remedies cobbled together after the
fact are in proportion to the damages.

California lost close to $40 billion. This is the best we can expect from the
backstop of FERC oversight. It is woefully inadequate. FERC is supposed to be
engaged when the market isn’t working.

If it is true, as this paper argues, that the deregulated market cannot work,
then FERC is going to be very busy from now on providing woefully inadequate
remedies. Deregulation did not live up to its promises, because its promises
were illusions, or delusions.

Deregulation can’t work. The largest experiment in restructuring was a dismal
failure, and deregulation made everything worse. Those who defend it or try
to fix it have had our attention long enough.

The Lessons of Deregulation

As leaders in the utility industry, part of your job is to conjure up a full understanding
of both the perils and opportunities confronting your organizations. Given the
pace of change these days, that’s a risky endeavor. I think you’d agree that most
of us find it hard enough to understand what’s happening today, let alone what
might occur tomorrow.

Much of this effort rests on the importance of good information and on not
assuming that what you see going on today has much to do with what’s going to
happen tomorrow. This is especially good advice, it seems to me, for any business
operating in an industry in the midst of, or on the verge of, deregulation.

While I am certainly no expert when it comes to utilities, the ongoing debate
over the wisdom and methods of deregulating the generation and distribution
of electric power is reminiscent of the debates that swirled around the airline
business 25 years ago.

It’s hard to argue with the notion that in any business, competition tends
to drive prices lower, while promoting both efficiency and innovation. In a
macro sense, that’s certainly been true in the case of the airlines, which have
responded to deregulation by providing consumers with ever-more service at ever-lower
fares. Today, despite the enormous losses airlines are suffering, fares are
lower than they have been for 15 years.

Despite that success, airline deregulation has distributed the benefits of
change less equally, and in less predictable ways, than the theoreticians of
deregulation imagined it would. That reality underscores an important point
— when regulation is removed or modified, what actually happens often bears
little resemblance to what was anticipated.

To illustrate my point, let’s turn back the clock and review some of what has
changed since deregulation swept over the airline industry back in 1978. Given
the behavior of today’s airlines, it’s hard to believe that from 1938 to 1978,
airlines couldn’t fly anywhere, couldn’t change their fares, couldn’t even vary
their menus without the explicit permission of something called the C.A.B. —
the Civil Aeronautics Board.

Quick, Brutal Change

All that changed in April 1978. Suddenly — literally overnight —
every airline could fly wherever it wanted and charge whatever it deemed appropriate.
Suddenly, what had been a carefully regulated semi-utility became a ferociously
competitive, capital- and labor-intensive business, selling a non-differentiated
product in a nearly perfect retail market at very low margins.

The tumultuous years that followed brought about the demise of more than 180
airlines, including such venerable names as Pan Am, Eastern, and Braniff. For
my company, American Airlines, deregulation represented both a dire threat and
a great opportunity. Although the company’s history dates to the mid-1920s,
by the early ’70s it had become a distinct also-ran, falling behind its competitors
on many fronts. The analysts of the time, noting its weaknesses, marked it as
a probable loser in the deregulation game.

We weren’t sure what the deregulated industry was going to look like, but we
were very sure it wasn’t going to look anything like it had before. We also
knew that the analysts were right in believing we were clearly at risk. That
meant we had to very carefully evaluate the strengths and weaknesses of our
past strategies and come up with some dramatically different approaches.

That’s exactly the analytical challenge your business now faces — and
is part one of what I think of as the two-part job description of every business
leader. Part one requires creative, imaginative thinking about what your business
now is and might be in the future, with the objective of producing a business
plan consistent with your opportunities and responsive to your perils. Part
two of the job is execution — making sure your organization does its job
better than your competitors do theirs, day in and day out.

Normally, both halves of the management equation are important. However, during
times of rapid change, the part of your job that is focused on figuring out
what is going on and determining what might happen in the future becomes uniquely
important. As many failed business leaders have discovered, an inaccurate vision
will send you off in the wrong direction. If that happens, good execution will
do little to make things better.

Unforeseen Circumstances

Unfortunately, accurate crystal balls are hard to come by. As airline deregulation
was occurring, there was lots of speculation about how the airlines would respond.
Almost all the speculation was wrong — and in fact, the changes that occurred
are far more radical than anything then foreseen. Let me touch on just a few
of the highlights:

• Since deregulation, each of the major U.S. airlines has restructured
its route system around several major hubs and has discontinued various nonstop
routes mandated by the C.A.B. during the industry’s regulated years. The hubs
have several purposes but are primarily a means of offering many more products
using the same amount of productive capacity. For example, five airplanes, each
flying from point “A” to point “B”, produce just five products. On the other
hand, five airplanes flying from various points east of a hub and then to various
points west of the hub offer service in 35 separate origin-destination city
pairs, or markets. (See Figure 1, and the short tally sketched just after this list.)

• As a consequence of that comprehensive route restructuring, competition
in the airline business is now principally between airline networks, rather
than between airlines serving the same point-to-point route.

• Twenty-five years ago, no one had ever heard of airline loyalty programs,
better known as frequent flier plans.

• Twenty-five years ago, airline computerized reservations systems (CRS)
were in the very early stages of development. Today, access to these powerful
systems is ubiquitous and is fundamentally altering the industry’s distribution
channels.

• And finally, in the airline industry of 25 years ago, pricing was straightforward.
If you wanted to change your price, you applied to the C.A.B., which most often
declined the request. Today, when one airline decides to cut a price, it can
be sure that within 15 minutes — and more likely within 15 seconds —
competitors will follow suit.
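
The tally behind Figure 1 is easy to verify; the following few lines, with
placeholder city names, count the markets a single five-spoke hub bank creates:

```python
# Verifying the Figure 1 tally: five spoke cities east of a hub and five
# west of it, served by the same five airplanes flying through the hub.

east = ["E1", "E2", "E3", "E4", "E5"]
west = ["W1", "W2", "W3", "W4", "W5"]
hub = "HUB"

markets = set()
for e in east:
    markets.add((e, hub))        # east spoke to hub
    for w in west:
        markets.add((e, w))      # east to west, connecting over the hub
for w in west:
    markets.add((hub, w))        # hub to west spoke

print(len(markets))  # -> 35 origin-destination city pairs
```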

None of these changes was predicted by the academic theoreticians behind deregulation,
and only a few were properly anticipated by airline practitioners. Those who
got it wrong — Braniff and Pan Am are excellent examples — failed
rather quickly, as did many lesser contenders. Their experience underscores
the fact that everyone in a business that is in the process of being deregulated
has to think long and hard — not just about what changes are likely in
the first, second, or third year, but about what changes are conceivable across
a longer time frame.

Figure 1: Under the regulated system, five nonstop flights simply allow
for five city pairings. The deregulated route system makes use of a central
hub, enabling 35 possible pairings with the same productive capacity.

Wild Ride Ahead

While you all know much more about it than I do, I think it’s safe to say that
already, in California and elsewhere, the effects of deregulation in your business
have diverged wildly from expectations.

California taxpayers are on the hook for tens of billions of dollars the state
has committed to assure a steady supply of electricity. In the early days, wholesale
prices spiked — driven, some contend, by various highly technical and entirely
unanticipated market-manipulation mechanisms. That outcome has slowed the deregulation
bandwagon considerably, as politicians, academics, and industry theoreticians
debate alternative approaches to the goal of stimulating non-destructive competition.

While California’s disastrous early experience has proven how badly things
can go, there is as yet no broad-based consensus on the best alternative approach
— or even about the wisdom of deregulation as a strategy.

As you know, the Federal Energy Regulatory Commission has issued a proposal
for a major overhaul of the nation’s power distribution rules, calling for a
“standard market design” that would reduce state and local control over individual
power markets and attempt to create a more efficient national grid open to all
generators.

While debate on the specifics of the FERC proposal rages, the more interesting
question is whether, once let out, the deregulation genie can be put back in
its bottle. In my view, it probably cannot, which suggests that the probability
of more dramatic change in your business is high. That means, of course, that
the importance of part one of the leadership challenge — understanding
the most likely outcomes — is higher than ever.

Short-Lived Advantages

But let us not forget part two of the challenge — execution. One of the
realities of life is that your competitors are thinking hard about the same
issues you’re considering. If you do an outstanding job of figuring out what
lies ahead, you’ll have some advantage. But it won’t be long-lived. In fairly
short order, it’s likely that most of your competitors will be following a game
plan quite similar to yours.

In the airline industry of the 1980s, American was a step ahead in lots of
important areas. We were first with a full-featured reservation system. We were
first with the frequent flier idea. We led the way in the black art of yield
management. And we were more aggressive than most in focusing our airplanes
on hub-and-spoke operations. But our competitors moved quickly to catch up,
and each advantage was short-lived.

In most businesses, as in aviation, a given company’s strategy is quite transparent.
So where do you find your competitive advantage? Faced with competitors with
ample access to capital, the same technology, and similar strategies, your best
chance to win is to simply execute better than the other guy. And unless you’re
a very small business, outstanding execution depends entirely on the performance
of your people.

I’m sure you’ve heard it said that people are a company’s greatest resource.
I would amend that slightly to say the right people are a company’s greatest
resource. Talented people — in the right environment and properly motivated
— have better ideas and execute those ideas better. Larry Bossidy, former
CEO of Allied Signal, put it well when he said, “At the end of the day, we bet
on people, not strategies.”

In a fast-changing world, in which no one knows for sure what lurks around
the next corner, good people who are able to help think and help execute are
every leader’s best hope. This, of course, makes the role of the leader even
more important, for it is he or she who will shape employee experiences and
perceptions.

Unfortunately, as we’ve all learned, even if your strategy is first-class and
you hire and motivate an excellent team that fires on all cylinders, disruptive
change remains a constant threat. Aspects of your company’s fate — particularly
in a deregulated environment — will always be, at least partially, beyond
your immediate control. So, without losing sight of who you are or where you
want to go, you must build as much flexibility as possible into your contingency
plans and always be prepared to shift gears in response to changing circumstances.

New Frustrations

Consider my former colleagues in the airline industry. Today, the major carriers
— each a survivor of the deregulation trauma — are confronted with
a new series of challenges, even more threatening than those of 1978.

The first is what the industry calls the hassle factor, which is the impact
on customer service of the various security steps that have been taken since
the dreadful events of 9/11. While no one disagrees with the need for a high
level of aviation security, the unhappy truth is that much of what has been
done to date has little to do with enhancing security, and lots to do with increasing
both costs and customer annoyance.

While there has been some encouraging talk of late about modifying some of
the more bothersome practices, the reality is that today’s aviation security
system is inappropriately intrusive and inordinately expensive. And it’s likely
to get both more intrusive and more expensive in the months to come. Among other
new challenges are:

• The proliferation of sophisticated online travel agencies that make
it easier for consumers to find the cheapest alternative.

• The rapid emergence of low-fare carriers, whose costs are dramatically
lower than those of the established carriers.

• The decline of business travel, coupled with a new tendency of business
travelers to “buy down.”

The impact of these challenges can be far-reaching. For example, the pattern
of frequent and affordable service across connecting hubs was put in place to
cater to business fliers. After deregulation, the airlines sold the system’s
excess capacity to leisure travelers — using mechanisms such as Saturday
night stays and advance purchase requirements to prevent high-fare travelers
from buying down. For reasons I’ve already noted, this plan isn’t working anymore,
and unless the airlines find a way to cut their costs dramatically, the inevitable
result will be many fewer flights and a much less convenient system.

Industry Response

The major airlines have responded vigorously by grounding hundreds of aircraft,
laying off tens of thousands of workers, and taking on billions of dollars of
debt to cover their losses. The carriers are attacking their costs on every
level. In the months ahead, I think you can expect to see their efforts reflected
in fewer flights, less personal service, higher seat density, less food, and
various other changes.

On the other hand, some things will almost certainly remain much the same.
The hub and spoke system will be modified, but won’t disappear, because most
point-to-point markets in this country are simply too small to warrant nonstop
service. As long as the industry remains deregulated, nobody is going to offer
nonstop service between Syracuse and Des Moines. That means anyone who wants
to make that journey will continue to connect over a hub.

In an effort to cut costs, the airlines are restructuring their hubs to maximize
airplane and employee productivity rather than customer convenience. The airlines
are also substituting regional jets for big airplanes in more markets, and are
working hard to increase employee productivity — which will mean lots more
self-service automation at the airports.

My friends in the airline business certainly have their hands full as they
contend with what you might call the second big wave of post-deregulation change.
While some airlines will fail, I think most of the industry will eventually
manage its way through this trauma and return to at least minimal profitability.
We should all hope they do, because the alternative, which would leave the United
States without any viable inter-city transportation system, is simply not acceptable.

Visionaries Needed

Whether any of the big carriers can make it through without going through the
bankruptcy process is an open question. To a large extent, the outcome for each
airline will be determined by the skills and attitudes of its leaders —
with the gold ring going to those best able to formulate, articulate, and merchandise
an accurate vision of the future.

You may take comfort in the fact that you don’t work in the airline industry.
Yet, as deregulation impacts your business more and more, your success will
similarly depend on your ability to anticipate and respond quickly to dramatic
upheaval.

In your business and mine — indeed, in any business — today’s tremendous
pace of change means, among other things, that what got you here won’t keep
you here. In a period of disruptive change — whether it's driven by deregulation
or some other force — it's up to every organization's leadership to determine
which aspects of its plan fit with tomorrow’s reality, and which don’t.

RTO Technology Vision

Perhaps we should not measure the role of technology in realizing the Federal
Energy Regulatory Commission’s Standard Market Design by the volume of text that
it is afforded in the commission’s “Notice of Proposed Rule Making.” A scant five
pages outlines FERC’s technology vision.

Experience in the technology business teaches us that you need to get your
business requirements right if you want the technology to be right. So perhaps
FERC is correct in spending the vast amount of text on the design of the market
instead of the technology.

But FERC has outlined a tall order in its technology blueprint for Standard
Market Design (SMD). We need to manage expectations. SMD is a vision. The reality
will be something less. But the key is to keep pushing the vision.

FERC’s Vision

FERC outlines three foundation attributes of the design’s technology:

• Transparency: the ability to understand what the software does.
• Testability: the ability to understand and compare performance.
• Modularity: the ability to change software modules without changing other
software.

These three attributes are all reasonable, perhaps noble, objectives. FERC,
however, frames them as somewhat equal. The reality is that there is more cause
and effect than appears on the surface. There are also big differences in the
achievability of any of these objectives in the near-term.

First, let's take modularity. As FERC notes, achieving this objective requires
standard interfaces. But actually it takes much more. It requires a common vision
on design and architecture within the components that make up the software systems
in an ISO/RTO. The data constructs and the methods utilized within the components
must also be standardized, or else the data that crosses these “standard interfaces”
will be useless.

Consider picking up the phone and calling France. This is seamless
and easy because long ago the interfaces between the U.S. network and the French
system were standardized. But if the person who answers the phone knows only
French, and you know only English, you are not going to get very far in the
conversation. It is the same for systems. To make these systems really modular
there will need to be a much deeper level of standards and integration. Modularity
is much more intrusive into the design of software components than just interfaces.
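
A small sketch makes the distinction concrete. It assumes nothing about actual
ISO/RTO systems; the interface and data construct below are hypothetical stand-ins:

```python
# A sketch of the point above, not an SMD standard: a shared interface is
# only half of modularity. The modules must also agree on the data construct
# crossing it. All names and fields here are hypothetical.

from dataclasses import dataclass
from typing import List, Protocol

@dataclass
class DispatchPoint:
    """The agreed data construct: both sides must interpret it identically."""
    unit_id: str
    hour: int
    megawatts: float

class MarketEngine(Protocol):
    """The standard interface: any vendor's engine is called the same way."""
    def schedule(self, bids: List[DispatchPoint]) -> List[DispatchPoint]: ...

def run_market(engine: MarketEngine, bids: List[DispatchPoint]):
    # Only when interface AND schema are both standardized can one vendor's
    # engine be swapped for another's without rework.
    return engine.schedule(bids)
```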

So what about testability? Certainly the notion of a standard benchmark test
suite for similar components is a good idea. Testability in the context of SMD
is really a proxy for certainty. That is, a known input will produce a known
output. The proxy for certainty so far in ISO implementations has effectively
been operational audits of the market systems that are part of many ISO/RTO
tariffs. These audits are effective, but they do not provide the degree of cross
market/operator benchmarking at which FERC is driving. FERC is shrewd in pushing
this point since such benchmarking would be a subtle but effective pressure
on vendors to conform and improve systems performance over time. If there is
anything a vendor likes least, it is to see their name last in a published benchmark
“bake off.” It is just too competitive out there.

Which leads us to transparency. This element of SMD is easy to describe yet
difficult to implement. Transparency is really
documentation. Public domain documentation. In an industry standard framework
and at an industry standard level of quality. For us working in this environment
every day, such documentation would be a breath of fresh air.

I will go out on a limb here. One significant act that FERC can take in regard
to technology in SMD implementation is to mandate that all SMD implementations
be documented at an industry standard level and placed in the public domain.
Of course, there will be some screams from the system vendors notwithstanding
their rhetoric in the market about supporting open standards, etc. That is because
they all believe that the “special” elements of their systems are their competitive
differentiator. And indeed in reality they may be right. Why else would so many
ISO/RTO executives feel “locked in” to a vendor?

I outlined FERC's attributes in reverse order for a reason. They are not equal.
You arrive at modularity through good design, be it mandated or market driven.
Either way, you get standardization that then can be tested (compared) as FERC
envisions. To be testable, you must already be transparent. So there is a natural
sequence to these objectives. Modularity is the end point. Transparency is the
starting point.

If Not Now, When?

So just how achievable is FERC's vision in the medium to long term? (You will
understand as you read on that the short-term prospects are dim.)

FERC’s basic vision of the technology is right. But as the saying goes, the
devil is in the details. The details will likely be messy and expensive.

First, one must recognize that the technical environment that SMD targets is
not green field; it is legacy. A sizable number of ISO/RTOs are already operational.
They have already made their first wave investments in market and operational
systems. No matter how visionary SMD is, implementation will be incremental
to their current environment.

Second, green field or not, the ISO/RTOs can only implement this vision if
the system vendors provide a suite of systems as envisioned in SMD.

Which leads to the third element. Vendors are clearly not all that motivated
to provide SMD “vision-like” technology fast or for free.

Figure 1 outlines the continuum of ISO/RTO systems. If you look at currently
operating ISO/RTOs, their systems are, for the most part, unique: unique in the
sense that a single market operator could not, without modifying its systems,
simply pick up and operate in another's jurisdiction. The architectures of
the systems are generally specific to the markets they serve, an artifact of
the history and timing of each currently operating ISO/RTO's start-up.

Figure 1: ISO/RTO Continuum

The next step up on the continuum is open documentation (read FERC transparency).
Documentation of ISO/RTO systems is actually more available than many think.
(Indeed, the amount of ISO/RTO documentation already in the public domain is
remarkable.) But it is also true that there are large holes in
available documentation.

Indeed, a survey would likely reveal that each ISO/RTO has a large degree of
documentation regarding systems, processes, and operations residing in the cerebral
cortices of a few, and consequently very important, individuals at each ISO/RTO
and/or its vendor. As I outlined above, conforming this knowledge
to software industry best practice in terms of form and substance and making
it available in the public domain is a significant enabler for advancing the
state of ISO/RTO transparency.

Further up the continuum is the notion of componentization. This is the FERC
modularity element. It is also the notion that has formed the foundation for
best practice in software design and implementation for some time. But here
is where the Standard Market Design vision starts to be more difficult.

From the incumbent vendor’s perspective, breaking major portions of their offering
into plug-and-play components is not trivial technically or commercially. Most
of these systems have evolved from power engineering algorithms utilized during
pre-deregulation. Without question, the focus has been on the technical elements
of the algorithms rather than the elegance of the integration design. It is
not that these vendors view a more "elegant" architectural design as undesirable.
The question is who is going to pay and how long they have
to complete the migration.

At the extreme end of the continuum is the notion of “plug-and-play.” FERC
uses this language directly in the NOPR. Plug-and-play conjures up notions of
the common RJ11 phone jack. No matter who manufactures your phone, the jack
is the same. When you plug it into the wall — presto — dial tone (the
play part).

Reality Check

This is where a reality check is needed. Plug-and-play in this everyday sense,
within and/or between ISO/RTO systems, will not occur anytime soon unless it
is mandated by law (it is unclear whether FERC has the authority to issue such
a mandate) and subsidized (it is very clear that ratepayers and taxpayers would
be the ones paying).

Why? The answer lies in the current market realities. The fact is that there
is little market gravity pulling vendors in FERC's direction. First, the market
today is quite small relative to the spending such a migration would require. Thus
the return for making a strategic investment by any single vendor is not all
that good. The risk of both time and future market success would likely send
vendor corporate strategic planners looking for greener pastures for their scarce
investment dollars. Second, the rework required to current vendor systems is
extensive when one considers how much structural change is required to migrate
current systems to a technical architecture that supports the objective of Standard
Market Design. Last, the migration will require vendor cooperation and/or standards
(not vision but detailed technical standards). The former is not likely and
the latter will take a long time.

If you review the record for Standard Market Design, the above reality check
is not a surprise. FERC has been told in a number of technical conferences that
there is a gap between vision and reality. It is, however, remarkable that nearly
every presenter on the subject, be it a vendor, a market operator, or a market
participant, points to the same general technical solution. There is a remarkable
degree of convergence on a common view for integration in ISO/RTO systems looking
forward.

There are many names for this integration structure: common information bus,
application integrating bus, and enterprise application integration (EAI), to
name a few. The EAI approach is a predominant strategy in a number of industries.

EAI is best suited to an environment that requires an integration of operational
and business systems into a single system; that requires integration of
established legacy applications with new applications; and that, finally, has the
desire to implement systems that could be easily adapted in the future to new
and changing application requirements.

EAI uses the latest software advances to seamlessly integrate separate systems.
It is an integrating platform that allows use of best-in-class tools for each
operational requirement in an integrated manner.
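
Stripped to its essence, the integrating platform is a message bus. The sketch
below, with hypothetical topic names, shows the pattern; real EAI products add
reliability, transformation, and much else:

```python
# A minimal publish/subscribe bus in the spirit of the EAI approach sketched
# above: applications never call each other directly; they exchange messages
# over one integrating platform. Topic names and payloads are hypothetical.

from collections import defaultdict

class IntegrationBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)

bus = IntegrationBus()
# A settlements system and a grid-operations system listen independently;
# neither knows the other exists.
bus.subscribe("meter.reading", lambda m: print("settlements got", m))
bus.subscribe("meter.reading", lambda m: print("operations got", m))
bus.publish("meter.reading", {"meter": "M1", "kwh": 42.0})
```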

Figure 2 is purposely a different depiction of EAI than you have likely seen
before. Indeed most pictures of EAI opt for the over-simplified “EAI Magic Bus”
(EMB). You just plug the applications into the EMB and you have plug-and-play.
Want to switch stuff around? No problem. Just unplug one component and plug
in a new one. If it were only that simple.

Figure 2: EAI Hierarchy

EAI is a very powerful technology for integration. And indeed as a foundation
for ISO/RTO systems in the future, it has the potential to take center stage.
But it is not magic. Much design, architecture, and new development are required
before the simple “magic bus” picture is anything close to reality.

Conclusion

Looking at the hundreds of pages on market design and the mere five pages on
technology design in the SMD NOPR, one can really get an idea of what FERC is
thinking. Technology is not considered the issue.

Indeed, FERC is right. The technology frameworks and standards to achieve FERC’s
vision are no longer bleeding edge technology. Transparent, testable, and modular
systems have been the design and market norm for quite some time.

Achieving FERC’s technology vision is a market issue. A market issue that starts
with the very premise of SMD in the first place: A single standard design for
electric wholesale markets in the United States. If such a standard did exist,
then all the vendors would be in a race to provide the systems to implement
this design. And operators would be in a race to ensure the new design cured
the issues of the past five to 10 years of implementations. I am sure if that
were the case, we would end up with modular, testable, and transparent systems.

So perhaps FERC was prudent in spending only five pages on technology vision
after all. As is frequently the case, technology is not the issue. Business
design is.

Electricity Reform That Works

The near meltdown of the California electricity markets prompted many, both inside
and outside government, to question whether deregulation can work and, if so,
how. Looking at the successes and failures of reform efforts worldwide, it is
clear that electricity restructuring can work — if it meets the needs of
the local economic and political environments.

During the last decade more than 60 countries restructured their electricity
industry with varying degrees of success, although it is a misnomer to refer
to these efforts as “deregulation.” In all cases, sectors such as generation
or marketing may be deregulated, but the transmission of power remains regulated.

The underlying long-term objective of these reform efforts has been to provide
reliable electricity service at reasonable cost for all consumers. Successful
reforms share two main features: an effective regulatory framework and workable
competitive markets. An effective regulatory framework restrains private firms
from abusing their market power and constrains the political and administrative
actors from directly or indirectly expropriating investors’ assets. Competition
reduces the need for command-and-control forms of regulation, provides incentives
for private investments, and improves the industry’s performance.

The reforms have often involved unbundling vertically integrated utilities
into generation, transmission, distribution, and retail marketing sectors, introducing
competition into the generation and retail marketing sectors, and providing
suppliers and buyers access to the regulated transmission and distribution systems.

In countries where the industry was mostly state-owned, the reform efforts
sought to transform state-owned and centralized electricity sectors into decentralized,
market-oriented industries with private sector participation. Experiences over
the past two decades, however, have shown that achieving the goals of electricity
reform generally is more complex than initially anticipated.

Key Factors

Providing effective regulatory governance is paramount to attracting private investment.
The disappointing experiences with reforms observed in various countries are
generally the result of design flaws in the regulatory governance regime rather
than market design flaws. California’s experience, however, clearly illustrates
that market design is also critical to success. Developing competitive
generation and retail market sectors requires: limiting the potential
for the exercise of market power; allowing parties to enter long-term forward
contracts; providing a workable spot market; and allowing retail customers access
to wholesale prices.

Fundamental to any reform effort is ensuring that an effective regulatory governance
structure with the right incentives is in place. The electricity industry has
traditionally been regulated because of three distinguishing features: large
sunk investments, economies of scale and scope, and universal consumption of
its products. These features imply that firms may be willing to continue operating
even if prices do not recover sunk investments and that market prices will be
politically sensitive.

This combination exposes investors to the potential for governmental opportunism.
An effective regulatory governance structure requires institutional arrangements
that limit the government’s discretionary powers once investments are in place.

A weak regulatory governance structure offers no credible assurance against direct
or indirect expropriation of private property and makes it difficult, if not
impossible, to encourage private investment.

Developing workable competitive generation markets, and to a lesser extent
retail service markets, has been the goal of many restructuring efforts around
the world. The complexity of the electricity industry, however, makes achieving
a workable competitive market a work in progress, with no detailed blueprint
available for all circumstances.

A competitive commodity market requires that no single participant have the
ability to set prices artificially high for a sustained period of time. This
is usually a problem in the generation segment, as a market with few sellers
may not be immune to price manipulation.

The structural and preferred approach to dealing with horizontal market concentration
is to disaggregate the sectors into numerous firms with market share limits.
When not practical, the government will usually take the behavioral approach
and regulate the commodity price through price caps or cost-of-service pricing.
This approach to market power misses a basic point.

Market power exists when buyers have few alternatives. In the electricity market
wholesale buyers are few and relatively large — distribution companies,
large users, or brokers. As long as their purchasing strategies are unrestricted,
including entering into long-term contracts with incumbent generators and new
entrants alike, or vertically integrating, the exercise of market power will
be a difficult undertaking, even in a concentrated capacity market.

Competitive wholesale markets require liquid long-term contract (forward and
futures) markets. Long-term contracts facilitate hedging against price risk
for both buyers and sellers. The most common forward trading mechanism is the
bilateral contract, with its obligation to deliver a physical product at a specified
location.

In electricity markets, there is no way of directly linking the energy put
into the system at one end and taken out at the other end. As a result, many
experts argue that bilateral physical contracts for power are irrelevant unless
the plant is next door to the load. These experts argue that contracts for power
should be strictly financial contracts (contracts for differences) that essentially
assure buyers and sellers a given price for a certain amount of power injected
or actually taken from the system.
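As a rough illustration of the settlement arithmetic, consider the following minimal
Python sketch. The strike price, spot price, and volume are hypothetical, not drawn
from any actual market:

    def cfd_settlement(strike, spot, volume_mwh):
        # Contract-for-differences settlement: the seller pays the buyer
        # (spot - strike) * volume when the spot price exceeds the strike,
        # and the payment reverses when spot is below it. Combined with
        # buying power at the spot price, the buyer's net cost is locked
        # at the strike regardless of where the spot market settles.
        return (spot - strike) * volume_mwh

    # Hypothetical 100 MWh position struck at $30/MWh.
    print(cfd_settlement(strike=30.0, spot=45.0, volume_mwh=100.0))  # 1500.0
    print(cfd_settlement(strike=30.0, spot=20.0, volume_mwh=100.0))  # -1000.0

Whichever way the settlement moves, both parties effectively pay or receive the
strike price on the contracted volume, which is what makes the contract a pure
financial hedge.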

Emphasis is, therefore, placed on developing an “efficient” spot market where
all physical sales will take place, with forward contracts used only to cover
price risk. Heavy reliance on spot markets, however, makes the industry vulnerable
to price manipulation. The alternative is to make contracts physical or dispatchable,
moving trading away from spot or real-time markets.

A third requirement for a workable electricity market is a well-designed wholesale
spot or balancing market that allows market participants to take their forward
contracts to delivery. For the balancing market to work effectively — we
emphasize effectively, not efficiently — there must be rules concerning
trading of energy and capacity in the real-time market, allocating common transmission
costs, pricing of congestion, and allocating responsibility for ensuring system
reliability. Given the complexity of the system, translating theoretically efficient
models into workable market models that are simple to implement everywhere
may not be feasible.

The most common structure chosen is a single-price auction where supply bids
and offers to buy are matched to create a single market-clearing price and where
participation is voluntary.
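A minimal Python sketch shows how such an auction clears; the offers and the demand
level are hypothetical, and a real market adds demand-side bids, transmission
constraints, and multiple trading periods:

    def clearing_price(supply_offers, demand_mw):
        # Sort supply offers by price and accept them in merit order
        # until demand is met; every accepted offer is paid the price
        # of the marginal (last accepted) offer.
        supplied = 0.0
        for price, quantity_mw in sorted(supply_offers):
            supplied += quantity_mw
            if supplied >= demand_mw:
                return price
        raise ValueError("offered supply does not cover demand")

    # Hypothetical offers as (price in $/MWh, quantity in MW).
    offers = [(20.0, 400.0), (35.0, 300.0), (90.0, 200.0)]
    print(clearing_price(offers, demand_mw=600.0))  # 35.0, paid on all 600 MW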

An alternative approach is to minimize the role of the spot or balancing market
as the focus for determining investment decisions by letting the forward contracting
market play that role. Under this model the forward contract prices will reflect
the players’ estimates of the relevant long-run marginal cost. With limited
barriers to entry, these prices will be less susceptible to market power manipulation.

The fourth requirement of a competitive electricity market is providing retail
consumers full access to prices in the wholesale market. If retail customers
remain captive to the distribution utilities, they must rely on the effectiveness
of the regulatory regime to derive benefits from competition. Instead, the retail
market can be deregulated and all customers allowed to negotiate their own supply
contracts directly with suppliers, as has been done successfully in jurisdictions
as varied as Norway, New Zealand, the United Kingdom, and El Salvador.

Successful Market Reforms

There is substantial variability in the nature of the reforms undertaken during
the last decade. Some countries have focused only on attracting private investment
while keeping the system state-owned or tightly regulated. Others have attempted
to vertically separate and privatize the industry in addition to creating competitive
generation and retail service markets. Here we will focus on the much smaller
second group.

The emerging consensus is that unbundling and introducing competition into
the generation and marketing sectors has been successful. Examining a few of
these major reform efforts provides instructive examples of the role the key
factors discussed above played in these markets’ success.

United Kingdom

The United Kingdom was one of the first countries to restructure its electricity
industry, beginning in 1990. The United Kingdom’s initial effort has been proclaimed a
success, having resulted in lower prices, substantial public sector receipts
from the privatization of public assets, and vigorous investment in new power
plants. However, this success did not come right away. Wholesale electricity
prices initially increased from 1990 to 1993, and remained above 1990 levels
(in constant dollar terms) until 1999. This occurred in the face of a 30 to
40 percent decrease in coal and gas prices during this period.

Economists and government regulators attributed the wholesale market outcomes
to capacity market concentration (three companies with 80 percent market share)
and market design flaws (in particular, generators being required to sell the
vast majority of their output through the spot market with long-term contracts
limited to contracts-for-differences, and no demand-side participation). The
combined concentration of market share and the emphasis on the spot market allowed
generators to develop trading strategies that manipulated the spot market price.

The country’s regulatory agency quickly attempted to correct these market flaws
through the introduction of price caps and other regulatory measures. While
government actions and high market prices encouraged new entry, concentration
remained high until the government forced the two largest generators to divest
6,000 megawatts in 1996 and required further divestiture such that no generator
had more than a 20 percent market share by 1999. In 1999, retail service was
also completely deregulated.

The U.K. reform process is still a design work in progress. On March 27, 2001,
the New Electricity Trading Arrangement (NETA) replaced the former power pool.
Under NETA, the prior uniform price auction spot market was replaced by a contract
market and pay-as-bid residual balancing market. The rationale for the change
was to shift most transactions into the bilateral contract market and to minimize
the use and gaming of the balancing market.
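The settlement difference between the two designs is mechanical, as the following
hedged sketch shows (the bids are hypothetical, and the actual NETA arrangements
are far more elaborate):

    def uniform_price_cost(accepted_bids):
        # Old pool design: every accepted MW is paid the marginal
        # (highest accepted) bid price.
        marginal = max(price for price, _ in accepted_bids)
        return sum(marginal * mw for _, mw in accepted_bids)

    def pay_as_bid_cost(accepted_bids):
        # NETA balancing market: each accepted MW is paid its own bid.
        return sum(price * mw for price, mw in accepted_bids)

    bids = [(20.0, 400.0), (35.0, 200.0)]  # hypothetical ($/MWh, MW)
    print(uniform_price_cost(bids))  # 21000.0
    print(pay_as_bid_cost(bids))     # 15000.0

Under pay-as-bid, a generator no longer collects a high marginal price set by
someone else’s bid, which is precisely the gaming the reform sought to curb.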

Since the NETA reforms were proposed in 1998, month-ahead wholesale prices
have fallen by 40 percent (see Figure 1). Since the NETA started operating in
March 2001, residential customers’ prices decreased by 8 to 15 percent, depending
on whether customers stayed or switched suppliers, and industrial and commercial
prices decreased by 20 to 25 percent, according to the Office of Gas and Electricity
Markets.

Figure 1: Spot Prices, United Kingdom

Chile

Chile is an even earlier reformer with the restructuring and privatization
of its industry occurring between 1982 and 1991. Chile’s restructuring effort
has also been proclaimed a success, with a 40 percent reduction in wholesale
prices between 1987 and 1998, large decreases in retail rates (see Figure 2),
more than $6 billion in private investment between 1990 and 1999, an increase
in capacity of about 40 percent, and a reduction of almost 20 percent in retail
consumer prices.

Figure 2: Average Annual Electricity Prices in Chile, 1987-1999 (Source: NCI based on CNE)

However, Chile’s reform effort ran afoul of our rule on a workable balancing
market. Chile’s initial reformers worried about the potential exercise of market
power by a relatively concentrated industry. While Chile’s initial restructured
industry had 11 generating companies, the largest generator controlled 50 percent
of the system capacity, and the three largest generators controlled 67 percent
of capacity (down to 54 percent in 1999). Rather than relying on long-term contracting
to solve whatever market power problem existed, the government regulated,
in a forward-looking manner, the price of spot transactions to distribution
companies.

Although Chile’s reform effort has resulted in a successful privatized electricity
market, it encountered serious problems when a seemingly unexpected drought
limited supply drastically in 1998-1999. Wholesale prices were unable to adjust
to the new reality, so consumer demand did not fall, nor did supply increase
substantially. Severe blackouts followed, encompassing the whole country.

Much of this negative development could have been avoided if Chile had relied
more on freely negotiated contracts rather than regulation for mitigating market
power excesses. Nevertheless, given the stable investment climate and apolitical
regulatory framework in Chile, the reform effort was able to continue attracting
investment successfully.

Australia

The Australian state of Victoria represents another case where restructuring
has been proclaimed a success. Reformers in Victoria attempted to learn from
the experiences in the United Kingdom and Chile and adopted slightly different
approaches as a result. Their restructuring effort met with initial success,
with wholesale prices decreasing by one-third since 1993, while average retail
prices for commercial and industrial users decreased by 22 percent from 1994,
the time of introduction, to 1998. Over this period there was also significant
foreign investment in the Australian electricity market.

Victoria’s initial strategy was to encourage competition by atomizing the generation
segment. Each large-scale thermal plant was sold independently. The distribution
sector was restructured into five regional suppliers with initial monopoly rights
to small customers. Legal limits were imposed on vertical and horizontal cross-ownership,
with no company owning more than one generation or distribution supplier. Retail
competition was phased in, starting with large customers and extending to all
customers by 2000. The wholesale markets were viewed as the main mode of competition,
and in 1998 the federal government created a national wholesale market and merged
the local state markets into the National Electricity Market.

The Australian spot market was fashioned from the original U.K. market, with
its requirement that all generators bid their capacity into the market and all
accepted bids are paid the market-clearing price. Generators were required to provide
long-term contracts to retail companies before they were privatized. In addition
to being a financial hedge for both generators and retail customers, the bilateral
(vesting) contracts significantly decreased the incentives of generators to
increase wholesale prices. Higher prices would benefit a generator only if it
held a net long position in the spot market, while putting it at risk if it
were net short relative to its contract obligations. With high contract coverage,
generators could not benefit from manipulating the spot market.
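The arithmetic behind that incentive is simple. In the following sketch, the vesting
contract is treated as a contract for differences on the contracted volume, and
all numbers are hypothetical:

    def generator_net_revenue(output_mwh, spot, contracted_mwh, strike):
        # Spot revenue on actual output, minus the contract-for-differences
        # settlement the generator owes on its contracted volume.
        return output_mwh * spot - contracted_mwh * (spot - strike)

    # Fully contracted: the spot price cancels out and revenue is fixed.
    print(generator_net_revenue(100.0, spot=40.0, contracted_mwh=100.0, strike=30.0))  # 3000.0
    print(generator_net_revenue(100.0, spot=90.0, contracted_mwh=100.0, strike=30.0))  # 3000.0
    # Net short (output below contract cover): a higher spot price now hurts.
    print(generator_net_revenue(80.0, spot=90.0, contracted_mwh=100.0, strike=30.0))   # 1200.0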

Low wholesale prices in the spot market, however, have been blamed for the
perceived lack of investment in new generation capacity, which has
resulted in a gradual exhaustion of excess capacity as demand has risen. Since
1999 wholesale prices have increased by 20 percent. With the expiration of the
vesting contracts, Australian regulators, mistakenly in our view, decided not
to encourage distribution utilities to enter new contracts, on the assumption
that retail consumers and their service providers would do a better job of negotiating
the price-risk-return tradeoffs with generators.

California’s Failure

Like the Australian markets, California also borrowed heavily from the U.K.
reform effort. California’s reformers put little effort into regulatory governance
issues. Rather, the focus of the reform effort was on developing a workably
competitive market.

California provided incentives to its incumbent utilities to divest their generation
assets. The result was that by the summer of 2000 no company had more than 21
percent of generation capacity. This, however, did not prevent generators from
being able to exercise market power. When increased demand due to hot weather
in the summer of 2000 combined with reduced hydroelectric production and
some unanticipated generation outages to produce an imbalance of supply and
demand, even marginal suppliers had the ability to influence wholesale prices.
The rest is history.

California’s reforms had two basic design flaws: they discouraged long-term contracting
by distribution utilities, and they capped the utilities’ retail prices
while supposedly promoting retail competition. In the regulators’ attempts to
stimulate the spot market, to facilitate retail bypass, and to avoid self-dealing,
they not only encouraged the utilities to divest most of their generation assets,
but also discouraged them from entering into long-term contracts. The lack of
long-term contracts and/or captive supplies exposed the distribution utilities
and their customers to price risk in a volatile spot market.

When spot prices skyrocketed in the summer of 2000, large increases in retail
prices were necessary to keep the distributors whole. Regulators could have
deregulated retail rates; instead they balked. With most sales going through
the spot market, and with almost no demand responsiveness due to fixed retail
rates, generators had few incentives to moderate their spot price bids. Distribution
companies then became insolvent. Eventually, state regulators had to pass
those high costs on to the retail customers. To prevent customers from fleeing
the utilities, the legislature rescinded the direct access provision. Deregulation
and reform for the retail sector was essentially eliminated.

Many critics of the California market have pointed to design flaws in the spot
or balancing market as the main reason for the market collapse. The Federal
Energy Regulatory Commission now proposes a standard spot market design for
the entire country in an effort to avoid future market meltdowns à la California.
We do not see this as a promising avenue of reform. There are multiple ways
of reaching a workable wholesale spot market. The key requirements are that
these balancing markets should be relatively effective in allowing suppliers
and consumers to take their forward contracts to delivery, and they should account
for only a small fraction of market trades.

California: Policy and Reform

In the post-Cold War era, we supposedly all understand the benefits of privatization
and the deregulation of markets. They bring efficiency, innovation, and lower
prices. Yet in California the attempt to form electricity markets — arguably
a move away from regulation — ended up with an embarrassing period of blackouts,
significant price increases, and little in the way of innovation.

It didn’t have to turn out this way. Fundamental flaws in market design resulted
in serious problems when the market drifted into a supply-demand imbalance.

The California experience offers lessons not only about the design of electricity
markets, but about the operation of markets more generally. First, the institutional
structure of markets matters, especially in complicated industries such as electricity.
Second, it is hard if not impossible to capture the benefits of any market unless
prices are allowed to signal scarcity and surplus to buyers and sellers.

Third, policy makers should fully appreciate the reasons why particular industry
structures (e.g., vertical integration between electricity generation and distribution)
have emerged over time, and should disband them only with great care. Fourth,
the intervention of politics, acrimony, and litigation is never propitious for
any industry, but especially not for one in a crisis that is also burdened with
newly formed or inexperienced institutions.

We’ll sketch the origins of the crisis and the major policy errors, then turn
to identifying principles that, if followed, would fix the situation and restore
an investment climate sufficiently positive to support the new investment needed
to achieve an improvement in supply and more responsive demand. Because of the
complexities, our treatment is necessarily incomplete, but we believe we present
an accurate summary of the facts, a correct diagnosis of the policy errors,
and a forward-looking view of the reform opportunities.

Restructuring in California

The deregulation of banking, airlines, trucking, and to some extent telecommunications
preceded electricity, and consumers have been the beneficiaries. Admittedly,
electricity creates a special set of problems of its own — arguably more
challenging than some other industries. In particular, with electricity the
ability to store the product is extremely limited, so generators must be motivated
to supply exactly the amount that customers want at any time.

This “load balancing” requires the active involvement of generators, transmission
companies, local distributors as well as customers, and arguably regulators
too. The conundrum in electricity restructuring is how to achieve the cooperation
necessary to maintain reliability in an interrelated industry, while simultaneously
effectuating the competition that’s required to bring about greater efficiency
and lower prices.

The introduction of competition in the United States has moved at different
speeds and in different manners in different states. California was one of the
pioneers — beginning with a 1993 policy study.

Following authorizing legislation (California’s AB 1890 in 1996), this program
was implemented by the California Public Utilities Commission (CPUC) in cooperation
with the Federal Energy Regulatory Commission. As a result:

• The California Power Exchange (PX) was set up to run an independent centralized
energy auction.
• The California Independent System Operator (ISO) was established to operate
the transmission network owned by three investor-owned utilities.
• The CPUC essentially required the Pacific Gas & Electric (PG&E) and Southern
California Edison to divest 50 percent of their fossil fuel generation capacity.
• Rates for residential and small business customers were frozen at 90
percent of prior levels, with the rate cut financed by bonds that customers
were obliged to repay.
• Up to four years was provided for utilities to recover stranded costs
of prior generation investments, after which retail rates would be set competitively.
• The CPUC required utilities to buy their entire net electricity needs
from the PX at spot or near-spot prices only — not through contracts with
generators.
• Independent power marketers were authorized to sell electricity directly
to all customers for delivery over utility distribution systems.

Twin Crises

Under the new structure, California electricity markets worked reasonably well
from April 1998 through April 2000. Wholesale electricity prices averaged $30/MWh,
customers enjoyed reduced frozen rates, many new power plants were proposed,
utilities progressed on stranded cost recovery, and retail competitors attracted
a substantial share of large customer loads.

However, the situation changed in the summer of 2000, when prices (both peak
and off-peak) spiked to nearly 10 times those of the previous two years and
stayed at elevated levels for an entire year.

Regulatory constraints meant that the utilities could neither protect themselves
against high spot prices through long-term contracts nor pass on the higher
prices to their customers. The resultant financial squeeze forced PG&E and Southern
California Edison into insolvency, led to a 40 percent increase in retail rates,
killed retail electricity competition, began California’s slide into its current
fiscal peril, and led to recriminations and uncertainty from which the state’s
energy investment climate may not recover for many years.

The wholesale price increases that precipitated the crisis have been attributed
to a number of factors, including:

Supply-Demand Imbalances

During the 1990s, capacity to produce and deliver electricity to users had
failed to keep up with growth in demand, a shortfall amplified by:

• Hot weather throughout the Western United States that increased seasonal
demand.
• Reduced electricity imports due to reduced rainfall in the Pacific Northwest.
• Rising, and ultimately skyrocketing, natural gas prices.
• Increasing costs of emissions credits needed for electricity generation
in the Los Angeles basin.

Lack of Demand Elasticity

Frozen retail prices gave customers no economic reason to curtail demand even
as the average wholesale cost of electricity soared.

Absence of Real-Time Metering

There was no mechanism in the market to allow higher relative prices during
periods of peak use — thus foregoing beneficial incentives to conserve
power when it was most valuable, and to reschedule use for time periods when
electricity market costs were lower.

Lack of Long-Term Contracts

The “buy-sell” rule kept utilities from protecting themselves against high
wholesale prices by entering long-term contracts. Put differently, the
natural hedge contained in the industry’s prior level of vertical integration
was undone — and not replaced — when utilities were forced to divest
much of their generation capacity.

Auction Design

The market design adopted for the PX had a single clearing price, in which all
accepted generators/suppliers were paid the bid price required to clear the market.
A tight supply situation created the potential (at least in theory) for market
“manipulation” or supply “withholding” — even by individual sellers acting
alone — to raise the price of power traded in the market. The market design
chosen had important and serious implications for pricing behavior in the market,
and for questions of whether market participants may have withheld output to
cause prices to rise. Because of the importance and complexity of these market
power issues, they are explored in more detail below, albeit in a preliminary
and abridged fashion.

Figure 1: Monthly Costs for Energy and Ancillary Services for the CAISO Control
Area, in Dollars per MWh (Source: CAISO-DMA (2002a); CAISO-DMA (2002b))

Market Power

Claims abound that the high market prices of May 2000 to June 2001 were largely
due to market manipulation and the exercise of market power by some electricity
producers and marketers. However, what seems obvious to some turns out to be
a rather complicated and very technical issue that turns on economic principles
many do not appear to appreciate.

Numerous governmental investigations and analogous lawsuits seem premised on
the belief that if electricity prices rose to levels several times those experienced
in recent years, then producers must have engaged in improper actions designed
to raise prices. As a matter of economic principle, this is quite simply wrong.

A firm has market power when it can price without regard to competition. More
technically, monopoly power is sometimes defined as the ability of a firm to
price above competitive levels and sustain that price for an extended period,
despite the actions of its competitors. In the California electricity market,
with no demand responsiveness, it was belatedly recognized that even a relatively
small generator’s production decisions might have price impacts when market
conditions were tight and most or all available power generation was needed
to avoid blackouts.

Embedded in these pricing issues is a resource allocation function of considerable
importance. In times of scarcity, market prices go up in ways economists recognize
as legitimate, important, and not necessarily an indication of market power.
So-called “scarcity rents” (the profits that result from scarcity) are a rational
way to allocate scarce supplies to those who most value them, and to offer a
strong incentive for entry by additional suppliers able to meet consumers’ demands.

High natural gas (input) costs can also cause high electricity prices. So can
the need for certain electricity generators (“peaking units”) to recover all
their costs during only a relatively few hours of operation each year.

Understanding why wholesale electricity prices rose in California requires
a careful assessment of these (and other) factors. Clearly, high prices alone
are not an indication of the presence of market power; they may simply reflect
fundamental scarcity. Indeed, the existence of prices above even long-run costs
occurs in many industries, and is usually eroded in due course by entry or expansion
of other providers.

Distinguishing market power from scarcity rents is sometimes an analytical
challenge. A key factor is the assessment of “withholding” of output. A producer
who is offering to the market all that is economic to produce is not exercising
market power even if prices are high — like a landlord charging market
rents that far exceed a building’s historical construction cost.

An exercise of market power requires many elements, chief among them a contrived
shortage whose shortfall is not replaced by increased supplies from other
providers. Contriving a shortage requires the firm to leave some of its potential
output unsold in order to sustain an above-market price.
This is why a focus on output — and in particular, whether a firm is using
its available capacity to produce electricity — is important to distinguishing
market power from scarcity rents.
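To make the output test concrete, here is a hedged numeric sketch (all figures
hypothetical) of why a contrived shortage can pay when demand cannot respond
to price:

    def profit(sold_mw, price, marginal_cost=25.0):
        # Single-period profit on energy sold at a uniform clearing price.
        return sold_mw * (price - marginal_cost)

    # Hypothetical: offering all 1,000 MW clears the market at $50/MWh,
    # but withholding 100 MW tightens supply and, with inelastic demand,
    # pushes the clearing price to $80/MWh.
    print(profit(1000.0, 50.0))  # 25000.0 from offering everything
    print(profit(900.0, 80.0))   # 49500.0 from the contrived shortage

The telltale evidence is the 100 MW of economic capacity left unsold, not the
high price itself.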

Problems with the design of the California wholesale market complicate the
analysis of potential market power. The retail rate freeze imposed by the CPUC,
in conjunction with the determination of the ISO to avoid blackouts, created
a market where electricity demand held steady regardless of price. Neither
customers, nor the ISO on their behalf, were able to respond to higher wholesale
prices by cutting back demand, as would occur in a normal market situation.
This made wholesale prices highly sensitive to small variations in available
generation output during high demand periods.

Accordingly, the diagnosis of market power (and the alleged “overcharges” that
might be associated with it) could hinge on why, for example, a relatively small
amount of generating capacity was offline at a given time. Even worse, because
the CPUC’s “buy-sell” requirement forced the
bulk of market purchases to occur at spot prices, volatile prices affected roughly
half the state’s power bill at any given time — volatility from which utilities
(and ultimately customers) could have been protected through the long-term contracts
the CPUC prohibited. In combination, these two market design errors created
an unfortunate situation that amplified the effects of many ordinary day-to-day
decisions by power plant operators.

Not surprisingly some analysts have tried to test for market power in California’s
electricity pricing. However, the observation that prices were above producers’
marginal operating costs (the test many studies have employed) is not very
meaningful, especially where scarcity rents may exist, where opportunity costs
are significant (as when a hydroelectric system can utilize its water resources
now or later), where factors other than operating costs may be relevant, where
other regulatory limitations (such as air quality emission limits or taxes,
or new plant siting delays) restrict output or raise costs, or where a flawed
bidding scheme may have offered incentives for above-cost bids.

The financial crisis caused by insolvency of the distribution companies likewise
created a risk that producers would not be paid for their electricity just as
natural gas prices (the essential input) reached record levels. Simple operating
cost-based offer bids would make no sense for a generator under such circumstances.
Arguments about whether particular power plants should have been running at
particular times have yet to yield persuasive evidence of strategic outages.

Our conclusion is that electricity markets should not be designed in such a
way that performance assessment involves subtle distinctions between acceptable
and unacceptable market behavior. The good news is that it is entirely feasible
to design markets that avoid the problems experienced in California.

Reforms That Work

The need for reforms in California has already been recognized by many market
participants and federal and state regulators, although the CPUC has tended
towards a revisionist command-and-control philosophy that will make matters
worse. California also bears a unique financial challenge due to the inept response
of the state’s officials to the crisis.

Concerns about price volatility, competition, and market power can be greatly
mitigated by adopting the following basic principles:

Open and free contracting between parties. The CPUC has already
recognized the error of its ways in prohibiting utilities from engaging in contracts
with power producers that would have mitigated risk for both parties, and ultimately
for customers as well. However, it is still not clear that the CPUC is willing
to forego the subsequent second-guessing of the merits of such contracts.

Eliminate barriers. California is notorious for its permitting
delays for new power plant construction, a process that extends a typical project
to about four years. The ability of new producers to readily enter the market
(or for existing producers to expand output quickly) is essential to maintaining
competition and limiting the scarcity rents that occur naturally. Free and open
contracting also reinforces rapid entry.

Implement real-time pricing. Real-time pricing is essential to
allow customers to shift their demands in response to what prices tell them
about when electricity is cheap and when it is dear. The wholesale market also needs to
feel the impact of such customer responses.

Minimize government’s role. There is clearly a role for government
oversight of electricity markets, especially in terms of establishing and policing
rules for wholesale markets and the related maintenance of reliable service.
But it would be incorrect to read the California crisis as a justification for
more traditional (and discredited) market intervention by government.

Final Comments

We don’t mean to imply that the implementation of these principles is either
politically or technically easy. It is not. However, there is sufficient
experience from other jurisdictions (in the United States and abroad) to provide
strong guidance on the technical issues. The political challenges are undoubtedly
considerable. But we do believe that one of the biggest problems the State faces
is that it has over-politicized electricity, and tried to obfuscate the reasons
for the crisis.

With the 2002 gubernatorial elections over, one can only hope that political
concerns can be subordinated to the public interest for a period sufficient
to allow reform opportunities to be embraced.

America’s Next Nightmare: SMD

Reliable, low-cost electric energy makes the American economy go. It is the essential
service, so necessary to modern life that its momentary absence defines a disaster.
After every hurricane or earthquake, the critical question always is how many
households are without power.

Electric utilities used to be called public service companies because that
is what they did — provided an essential public service. Enron, Enron wannabes,
and Enron disciples at the Federal Energy Regulatory Commission (FERC) would
turn this on its head. In their view, the public exists to serve the marketer,
which turns out the lights if its economic demands are not met.

In California, we had a contrived cataclysm of epic proportions: Tens of millions
of California residents and businesses were threatened every day with rolling
blackouts and service curtailments while Enron and other merchants of doom offered
to avert disaster for payments at levels that plundered utility bank accounts
and the state treasury.

It was unconscionable. FERC stood by and watched while it happened. California’s
cataclysm was made possible by an ill-considered experiment in deregulation
that opened the door to the rapacious traders and merchant generators. California
has now spent two years and billions of dollars trying to recover from that
cataclysm.

FERC’s Standard Market Design (SMD) obstructs California’s recovery effort
by promoting an illegal and ill-considered federalization of retail service
that would impose the worst features of the California experiment on the nation.
It would bring Enron’s dream — and California’s nightmare — to America’s
power business.

FERC’s Illegal Plan

For 70 years, the statutory division between state and federal jurisdiction
in electricity regulation has been between retail and wholesale service. Federal
regulation is devoted exclusively to wholesale electric service in interstate
commerce, while the states regulate retail service in both interstate and intrastate
commerce. FERC’s SMD proposal obliterates the line carefully drawn by Congress
in the Federal Power Act. It usurps state authority to regulate retail service.

While Congress may choose to give FERC such authority, it has not yet done
so. In proposing SMD, FERC not only has exceeded its statutory authority but
cannot implement its plan. During the years it will take the courts to confirm
the invalidity of SMD, the entire industry — consumers, providers, investors,
regulators, and pundits — will be mired in confusion rather than addressing
real issues and problems.

The Federal Power Act authorizes FERC to regulate wholesale electricity sales
in interstate commerce and electricity transmission in interstate commerce.
Congress has not authorized FERC to regulate electric generation facilities
and facilities used to provide retail service, whether or not they are “in interstate
commerce.”

The United States Supreme Court confirmed this dichotomy in Connecticut Light
and Power v. Federal Power Commission, 324 U.S. 515, 530 (1945):

“Congress is acutely aware of the existence and vitality of these state governments.
It sometimes is moved to respect state rights and local institutions even when
some degree of efficiency of a federal plan is thereby sacrificed. Congress
may think it expedient to avoid clashes between state and federal officials
in administering an act such as we have here …

“Congress may think complete centralization of control of the electric industry
likely to overtax administrative capacity of a federal commission. It may, too,
think it wise to keep the hand of state regulatory bodies in this business,
for the ‘insulated chambers of the states’ are still laboratories where many
lessons in regulation may be learned by trial and error on a small scale without
involving a whole national industry in every experiment.”

Truer words have seldom been spoken. Proponents of deregulated electricity
trading, like Enron, have been trying for years to erase the “bright line” between
wholesale and retail regulation. They have consistently been rebuffed by Congress.

State Jurisdiction

In the 1992 Energy Policy Act, Congress added specific language to the Federal
Power Act that reinforced state retail jurisdiction:

“No order may be issued under this chapter which is inconsistent with any state
law which governs the retail marketing areas of electric utilities.”

This explicit restriction on FERC regulation is reinforced by an original Federal
Power Act provision that denies FERC any ability to encroach upon the states’
reserved authority:

“Federal regulation … of the transmission of electric energy in interstate
commerce and the sale of such energy at wholesale in interstate commerce is
necessary in the public interest, such federal regulation, however, to extend
only to those matters which are not subject to regulation by the states.”

During March 2002, for the first time, the Supreme Court slightly blurred the
wholesale-retail distinction by authorizing FERC to remedy discrimination in
transmission service, including discrimination affecting retail transmission
service, where a state had voluntarily “unbundled” transmission from energy
supply for retail customers. The decisions of some states, such as California,
Pennsylvania, and New York, to permit so-called “retail choice” of electric
suppliers are examples of this unbundling.

The court held that FERC had the authority under the Federal Power Act to issue
its Order 888 regulating transmission in interstate commerce, including voluntarily
unbundled retail transmission. The Supreme Court specifically rejected Enron’s
argument that FERC should take control of the entire national grid to facilitate
trading, whether or not a state had unbundled under state law.

However, FERC has adopted Enron’s suggestion through the creation of an astonishing
fiction that a utility “discriminates” whenever it uses its transmission systems
to serve its own customers with its own power plants (i.e., provide “bundled”
retail service), rather than serve wholesale generators. This notion, which
flies directly in the face of the 1935 Federal Power Act and the 1992 amendments,
supplies the legal underpinnings for SMD.

Ill-Considered

A group of regulators representing 15 states recently signed an “Alliance Statement,”
which opposed SMD and called on FERC to withdraw it. Specifically, the statement
calls on FERC to:

• Refrain from asserting jurisdiction over the transmission component
of bundled retail sales.
• Refrain from asserting jurisdiction over power supply planning functions.
• Refrain from asserting jurisdiction over demand-response functions.
• Refrain from imposing highly complex, untested market mechanisms throughout
the nation.
• Focus on improving the wholesale electricity market through monitoring
and better enforcement.
• Return to regional transmission system discussions.
• Identify, on a regional, cooperative basis, real (not theoretical) problems,
and fashion practical, evidence-based solutions.
• Before implementing any new programs or market designs, subject them
to a rigorous cost-benefit-risk analysis that measures net benefits to consumers.

The statement catalogs both what FERC is doing wrong and what FERC should be
doing but is not.

FERC is attempting to promote investment in electric transmission upgrades
that have significant environmental and cost consequences. Congress has consistently
withheld this authority from FERC, in clear contrast to the authority it has
given FERC to regulate interstate gas pipelines.

FERC is attempting to accomplish by indirection what it cannot do directly,
by mandating the creation of regional transmission organizations (RTOs) that
would have planning authority.

FERC is also pushing a variety of devices to permit electricity day traders
to use deception — such as “virtual bidding” of electricity they do not
actually have — to confuse grid operators and make money. We have had a
terrible experience with this type of deceptive behavior in California, but
FERC would require it nationwide.

Problems Ignored

Equally important is what FERC is failing to do, particularly in the area of
market manipulation and abuse.

Since spring 2002, when FERC proposed SMD, revelations have emerged almost daily
about the way gas and electricity marketers manipulated and abused California
during the energy crisis. Sadly, it appears that in some instances FERC had
information that it ignored or suppressed. FERC took action only when that evidence
was publicly disclosed through other means.

In April 2000, California parties complained that El Paso Natural Gas was using
its market power to raise gas prices in California. In August 2002, 28 months
and many billions of dollars later, FERC’s chief administrative law judge agreed.
(At press time, the case was pending before the full commission.)

In May 2000, fraudulent arrangements to illegally withhold electric supply
were made between two FERC-regulated wholesale generators and marketers. California
authorities informed FERC of this illegal conduct in February 2001.

FERC settled the case and suppressed the information until it was disclosed
in October 2002 in litigation between California agencies and one of the perpetrators.

FERC has subsequently expanded the types of evidence it will consider in calculating
refunds for California consumers, but it waited more than 18 months after receiving
this evidence to do so.

In August 2002, FERC staff issued a report detailing reasons why certain natural
gas price indexes that FERC had been using to calculate refunds were unreliable
and vulnerable to manipulation. Since that time, the Commodity Futures Trading
Commission (CFTC) has issued subpoenas to gas traders, and a number of gas marketers
have terminated employees for erroneous reporting that might have influenced
the indexes and financial statements and skewed arrangements premised on the
indexes. FERC has not even commented on these revelations, much less pursued
them.

“Horrific” Outcomes

The deregulated wholesale market that FERC is promoting with SMD has serious
problems, one of which continues to be FERC’s unwillingness or inability to
police deceptive and fraudulent behavior. Instead of pushing SMD, FERC should
rein in the cowboys in those wholesale markets for which FERC clearly has responsibility.

Deregulation’s consequences for consumers have been horrific. The consequences
for investors and employees have been equally horrific. The intellectual bankruptcy
of FERC’s approach is best understood in the actual bankruptcies, valueless
stock, and business collapses of the electric and gas traders.

An innkeeper in Greek mythology had a bed that fit every traveler — those
who were too tall had their heads and feet chopped off; those who were too short
were pounded with an iron club and stretched out to fill the dimensions. FERC’s
one-size-fits-all SMD is a “bed of Procrustes” that would ruin the American
economy.

Instead of allowing FERC to strap each state into that bed, we should try to
solve real problems and make real improvements both in federal regulation of
wholesale services and in state regulation of retail services. As the alliance
statement from state regulators concludes:

“There may well be problems to solve and improvements to be made both in federal
regulation of wholesale services and in state regulation of retail services.
… We are convinced that consumers will benefit if both the Commission and the
states focus on identifying and solving real problems, whether through RTOs
or other means.

“But the Commission’s unprecedented and aggressive jurisdictional reach, which
shifts the ground rules and undermines trust at many levels, disrupts useful
aspects of the … process and the ability to make informed decisions.

“We urge the Commission to recognize the limits of its own reach and respect
the partnership it must forge with the states, and to withdraw the proposed
rulemaking.”

FERC should withdraw SMD and return to the voluntary, incremental approach
to regional development that has served the nation well for almost 70 years.
The last five years of deregulation are a nightmare that we should never dream
again.