The Changing Face of Utility M&A

Since the enactment of the Energy Policy Act of 1992, there have been
approximately 90 announced mergers between, or acquisitions of, publicly traded
gas and/or electric utility companies. This consolidation activity has reduced
the number of independent, publicly traded utility companies during the past
decade from approximately 160 to 100 (pro forma for pending Exelon-PSEG merger).
The reduction would have been even greater but for the fact that a number of
announced mergers ultimately did not close and that, during this period, there
were a number of new, independent companies, such as Mirant, NRG Energy and
Reliant Energy, formed via spin-offs from utility parents.

Further, during this period there have been approximately 200 additional transactions
involving the purchase of utility transmission and distribution (T&D) or electric
power generation assets. The acquirers of these assets have ranged from traditional
utilities to industrial companies and conglomerates to financial investors.

The Phases of M&A

This 13-year period of consolidation activity can be broadly categorized into
three phases, each predominated by a certain type of merger transaction and/or
participant.

The first phase, covering the period from the end of 1992 through 1997, was
characterized largely by stock-for-stock mergers of neighboring utilities, typically
involving observed merger premiums of 20 percent or less. Most utilities during
this period suffered from an excess of generation capacity and a lack of new
opportunities to invest in their regulated base of assets (rate base). They
viewed consolidation with neighbors as an opportunity to reduce costs, create
scale efficiencies and increase growth in earnings per share (EPS). In addition,
many companies believed that the pending deregulation (and potential forced
divestment) of the electric power generation side of the business would result
in a much smaller residual T&D company. Mergers were viewed as an opportunity
to “bulk up” in advance of any potential separation or spin-off or sale of generation
– in order to ensure that the residual T&D entity would have sufficient critical
mass to compete in a post-deregulation environment. Many of the combinations
entered into during this period were “mergers of equals” (MOE) or “modified
mergers of equals” (MMOE) transactions in which two companies of similar size
joined together to jointly manage the combined business, and the 100 percent
stock consideration that predominated was consistent with – and indeed necessary
for – the mergers not to be characterized as change of control (i.e., sale of
the company) transactions. Few companies during this period, in fact, viewed
themselves as “sellers”; and few, if any, of the transactions involving larger
premiums were “shopped” to multiple potential acquirers, as would be the case
when a company was trying solely to realize the highest short-term value for
shareholders. The pace of consolidation during this period was moderate and
deliberate, averaging roughly six to seven combinations per year.

The second phase of utility industry consolidation commenced in late 1997/early
1998, as industry participants became convinced that the industry was indeed
undergoing a radical transformation – and that simply maintaining the status
quo was not an option. The period from 1998-2000 witnessed a major shift in
both the type and number of transactions announced annually. On the corporate
utility front, the number of annual transaction announcements jumped into the
double digits (peaking at 27 announced transactions in 1999). Equally noteworthy,
however, was the shift observed in the form of consideration being used to effect
these combinations. In the five years (1993-1997) preceding this 1998-2000 feeding
frenzy, there were a total of 34 corporate utility transactions – 30 of these
were all-stock combinations, while just four involved all cash or a cash/stock
mix. Over the three-year period spanning 1998-2000, the industry witnessed no
less than 45 announced utility merger/acquisition transactions (roughly 15 per
year – or more than double the pace of the 1993-1997 phase). Of these 45 transactions,
only eight were 100 percent stock-for-stock combinations; 37 of the 45 involved
some use of cash as part of the merger consideration.

Using Cash

The increased use of cash in utility merger and acquisition transactions resulted
from the increasing acceptance of the inevitability of continuing industry consolidation
– particularly on the part of many smaller companies, as well as certain larger
companies whose CEOs were nearing retirement age or which were otherwise considered
attractive by one or more of their larger neighbors. Anxious not to let the
parade pass them by, and realizing that there weren’t enough (or often any)
neighbors who could be legitimate MOE partner candidates, they elected to enter
into a “sale of the company” transaction with a third party which could offer
the highest price to their shareholders (often representing a significant premium
to the company’s then-current market value). In an industry in which few companies
enjoy much of a P/E multiple advantage relative to their neighbors, potential
buyers, anxious to not be outbid for the rapidly disappearing universe of potential
targets, were very willing to begin using cash as part of the acquisition consideration
mix. Cash was viewed as beneficial by selling companies interested in preserving
a certain (known) sale price per share (and minimizing the risk of value variability
inherent in many stock transactions) and also viewed as necessary by many acquiring
companies who needed to be able to pay a full and competitive premium to the
target company while minimizing the otherwise earnings-dilutive effect of paying
a large premium using a significant amount of common stock.

Indeed, the increased use of cash as part of the mix of merger consideration
resulted in an increase in the premiums paid in acquisitions during the period
from 1998–2000. The average premium paid during this period was 27 percent (and
ranged as high as 120 percent) compared to an average premium of 16 percent
during the period from 1993–1997.

The Unbundling Effect

The period from late 1997 through 2000 also witnessed the onset – in a number
of regions of the country – of the long-awaited “unbundling” of electric generation
assets from their host regulated utilities. Many utilities took advantage of
this one-time opportunity to divest some or all of their generation assets in
order to quantify and secure financial recovery of their “stranded costs” in
this asset class (stranded costs typically being defined as the differential
between the depreciated value of the capital invested in the assets to date
and the then-current – and often much lower – market value of those assets).

While many followers of the industry expected that the auctioning off of so
much of the industry’s generation capacity would attract a number of new buyers
and capital from outside the industry, actual results confounded these expectations.

The first significant divestiture of an entire generation portfolio by an integrated
electric utility was announced in August 1997 and involved the sale by New England
Electric System (NEES) of approximately 4,000 MW of generation and 5,000 MW
of purchased power contracts to an unregulated generating affiliate of California’s
Pacific Gas & Electric (an electric utility) for $1.6 billion in cash, plus
the assumption of a number of high-cost power purchase contracts. From the date
of this watershed announcement through December 2000, there were in excess of
80 sales of generation assets by U.S. utilities involving total (cash) consideration
of over $44 billion. Virtually all of this generation capacity was purchased
by unregulated affiliates of other U.S. utilities that had made the strategic
decision to build a presence in the emerging unregulated generation sector.
The list of acquirers of these divested generation assets reads like a
“Who’s Who” of the U.S. utility industry and includes affiliates of: FPL Group,
Southern Company, PG&E Corp., Edison International, Dominion Resources, Duke
Energy, Entergy, Reliant Resources, Northern States Power (NRG Energy). The
list goes on. Virtually all of these purchases were made for 100 percent cash,
and very few of these acquiring companies chose to issue parent company common
stock to help fund these purchases.

The effect of the shift to more cash consideration in the corporate utility
merger arena, coupled with the massive debt-financed acquisition binge in the
generation sector, caused a significant increase in the consolidated debt leverage
of the industry, and a corresponding decline in average credit rating[1]. Consolidated
debt leverage (as a percentage of total capitalization) in the industry peaked
in 2001 at approximately 63 percent, just as many of these acquisitions – which
were announced in the 1999 and 2000 time frame – closed. The higher balance
sheet leverage and rising business risk associated with nonregulated generation
activities, combined with a worsening overall economy and falling power prices
(deteriorating generation economics), in turn produced a record number of credit
rating downgrades by the rating agencies during the 2002-2003 period as shown
in Figure 1.

Perhaps not surprisingly, as power prices and trading profits plunged, P/E
multiples in the sector – which had risen gradually from 1995 through early
1999 – also began declining and reached single-digit lows in late 2002, as shown
in Figure 2.

Special Effects

With stock prices depressed and balance sheets over-leveraged, many companies
(or their nonregulated generation affiliates) began to suffer the very real
downside consequences of their miscalculations and misfortunes: They had overbought,
overpaid and were experiencing markets with overcapacity and falling prices.
Debt rating downgrades, falling trading profits and large cash collateral calls
all created further balance sheet and liquidity stresses. Further, the Enron
bankruptcy and other U.S. corporate scandals – which prompted the passage of
the Sarbanes-Oxley Act – caused most of corporate America (including the utility
sector) to enter a period of extreme caution and introspection.

The effects of all this on utility industry M&A activity were pronounced and
predictable. There was a virtual cessation of corporate-level utility M&A activity
in 2001 through 2003. But even as the utility industry began eschewing corporate-level
M&A, many companies were nonetheless realizing that they had to take a step
back and reassess where they had come from – and the difficulties they had gotten
themselves into – and do the best they could to retreat gracefully. And although
this largely ruled out corporate-level M&A, many companies were left with no
choice but to pursue the divestiture of many of the assets they had acquired
over the previous four to five years in order to refocus their business strategy
and repair badly damaged balance sheets. (Those that were less fortunate sold
off these assets in the context of bankruptcy proceedings of their unregulated
affiliates.)

Thus was “born” the third phase of the utility industry’s consolidation history.
By any measure, the 2001–2003 period was a slow one, with transaction activity
diminishing significantly. As P/E multiples and debt ratings fell, there was
a severe shortage of buyers capable of maintaining the historical pace of consolidation.
As corporate-level merger activity virtually disappeared, the market came to
be dominated by sales of generation assets. With few strategic utility buyers
aggressively pursuing these assets, unregulated generation asset valuations
became severely challenged. It had become a buyers’ market (see Figure 3).

Many Players

It was in this relative vacuum of strategic buying interest that a number of
new industry entrants (i.e., asset acquirers – primarily financial buyers,
though not all were LBO firms) began to experience some of their first acquisition successes.
Examples of some of these new entrants include ArcLight Capital, GE Commercial
Finance, AIG Highstar, KKR, Matlin Patterson, Brascan and Texas Pacific Group,
as well as other less-well-known names. While most of these new entrants experienced
their successes acquiring generation assets, a few of these names have been
involved in the pursuit of T&D acquisitions as well – namely KKR with UniSource
(announced Nov. 24, 2003) and Texas Pacific Group with Portland General Electric
(announced Nov. 18, 2003). Many of these financial players had been studying
the industry for years; in some cases, they had even managed to make some large
structured investments (e.g., KKR into Dayton Power & Light) but few outright
corporate-level acquisitions.

Indeed, KKR’s attempted acquisition of UniSource was rejected by regulators
Dec. 30, 2004 (prompting a termination of the merger agreement), and TPG’s planned
purchase of Portland General was similarly denied by the Oregon Public Utility
Commission on March 10, 2005, and was also mutually terminated. The reason cited
by regulators for denial of these requested acquisition approvals was similar
– namely, the financial buyers failed to demonstrate that the acquisition would
be in the best interest of utility customers. Financial buyers have fared far
better in their pursuit of generation and FERC-regulated transmission assets
(at least in terms of successfully closing on contracted purchases). It remains
to be seen what level of long-term returns they will ultimately earn on these
acquisitions.

What’s Next

Looking ahead to the next phase in the evolution of the industry’s consolidation,
we expect a gradual resumption of corporate-level M&A activity between and among
the utility industry players. Following the recent unsuccessful experience of
KKR with UniSource and the difficulties being encountered by TPG with Portland
General, financial buyers are unlikely to want to invest the time, money and
energy to attempt to acquire regulated utility T&D companies (or assets). Indeed,
with utility industry companies generally on the mend and healthier financially
than they have been in many years, we expect peer (strategic) utility companies
to be the consolidators of choice for the foreseeable future. Financial buyers
(to the extent that any are still interested after the KKR and TPG experiences)
will find potential sellers skeptical at best, and strategic utility buyers
will be the preferred buyer for a neighboring company, owing typically to a
lower cost of capital and a greater potential to realize and retain synergies.

Healthy, integrated T&D-oriented gas and electric utilities will have the means,
motivation and strategic rationale to prevail over most nonindustry participants
as the next round of (corporate-level) industry consolidation plays out. This
next round of consolidation is likely to be heavily driven by stock-for-stock
combinations, typically involving low to moderate premiums (e.g., in the single-digit
to approximately 20 percent range). Few companies will want to risk impairing
their balance sheets (and credit ratings) when a stock acquisition is feasible.
But we are unlikely to see a return to the “feeding frenzy” mentality of the
late 1990s, as utility management (and their boards) have become far more selective
and disciplined in their approach to potential merger transactions.

Finally, as the ongoing reshuffling of the unregulated generation asset base
continues its slow wind-down, we do expect to see a mix of industry and nonindustry
players competing for these assets, with the nonindustry acquirers winning at
least a fair share of the contestable asset auctions. As strategic industry
players continue to be highly selective and sensitive to EPS dilution in determining
what assets they will (and will not) pursue, there will be buying opportunities
in this asset space for those acquirers who are willing and able to take a longer-term
view regarding power prices and generation asset value recovery.

Endnote

  1. While the transfer of cash from one industry participant to another should
    not create an increase in total industry net debt, generation acquirers were
    concurrently also heavily investing in new (greenfield) generation plants,
    repurchasing common stock or investing in other ventures, thereby further
    contributing to the increase in industry net debt levels.


Investing in Renewable Energy

For the first 50 years following the passage of the Depression-era Public Utility
Holding Company Act, governmental regulation of the electric utility industry
largely restricted investments in electric generating facilities to highly regulated
utilities.[1] But beginning with the enactment of the Public Utility Regulatory
Policies Act of 1978 (PURPA), various measures designed to encourage investments
in electric energy generating facilities by non-utility investors have become
law.[2] Intended to bring an element of competition to the geographic monopolies
granted to utilities as part of the regulatory bargain, PURPA enabled non-utilities
to build, own and operate certain types of electric generating facilities (called
“qualifying facilities”) and sell the output of those facilities to investor-owned
utilities at a price (the “avoided cost”) designed to reflect the cost the utility
would have incurred had it built, owned and operated the resource itself. While
definitely a move in the direction of greater competition, the provisions of
PURPA that forced the incumbent utility to purchase the output of qualifying
facilities at the applicable avoided cost embodied a regulatory mandate that
cut against the competitive grain. Nevertheless, PURPA did provide significant
opportunities for non-utilities to participate in the generation side of the
electric industry.

The move to greater competition in the electric industry advanced further in
1992 with the enactment of Title VII of the Energy Policy Act of 1992, which
allowed non-utilities to gain the status of an exempt wholesale generator (EWG),
enabling them to sell electricity at wholesale at market-based prices – that is,
at prices negotiated at arm’s length between the EWG and the purchaser.[3] (Sales
of electricity by an EWG are limited to wholesale transactions – that is, sales
where the purchaser will resell the output to third parties. Hence, an EWG can
sell to a utility that buys the power for the purpose of reselling it to the
utility’s retail customers. But an EWG cannot sell the power to an ultimate
end user such as a manufacturing plant.) To further facilitate the entry of
non-utilities into the electric generation business, the Federal Energy Regulatory
Commission subsequently adopted orders requiring regulated transmission providers
to allow third party generators access to the transmission system and to interconnect
their facilities to the transmission system under standardized agreements.[4]
These actions, along with various initiatives by state public utility commissions
to increase competition in the industry, combined to make non-utility generators
important players in the quest to meet the nation’s demands for energy.

The Role of Renewable Technologies

While opening up the electric industry to competition was perhaps the prime
motivating factor behind these developments, in many respects they were also
motivated by, or were at least complementary to, efforts to reduce the nation’s
reliance on imported oil. A key focus in this regard has been the development
and implementation of alternate energy technologies such as wind, biomass, solar
and geothermal – renewable energy sources that are not dependent on petrochemicals
as the fuel to generate electricity in commercial quantities. The revamped regulatory
environment opened the door to developers of renewable energy facilities to
demonstrate and prove their technologies as viable elements of a diversified
electric generation system.

But even with the great strides that have been made in recent decades in improving
the efficiency, cost and reliability of renewable energy technologies, many
– such as wind and solar – still cannot compete effectively on a price basis
with fossil fuel technologies. While the gap is closing as oil prices rise,
a significant opportunity to diversify the generation base and gain hands-on
experience with the integration of commercial-size renewable energy facilities
into our electric supply system would have been lost in the absence of special
measures designed to overcome this economic barrier. The federal and state governments
have responded with a variety of such measures, ranging from renewable portfolio
standards to various federal and state subsidies such as grants and tax incentives.[5]

The most significant subsidy in the renewables area is the federal production
tax credit (PTC). By effective utilization of PTCs and other tax benefits (such
as depreciation), the price of energy produced by a qualifying renewable energy
facility can be brought down to a level where it is an economically competitive
alternative to fossil fuel generation.

Creation of Investment Opportunities

Commercial-size renewable energy facilities require significant capital investment
and generate correspondingly significant tax benefits such as depreciation and
PTCs. Some of the non-utility developers of renewable energy resources are entities
that have – or that are parts of larger consolidated groups that have – substantial
taxable income and can therefore efficiently utilize these tax benefits. However,
many renewables developers are economically smaller enterprises that lack the
taxable income to take full advantage of the associated tax benefits. And even
for some larger enterprises that might otherwise be in a position to effectively
utilize these tax benefits, their business strategies may involve the “monetization”
of those tax benefits by effectively transferring them to a third party in exchange
for an immediate cash return.

As a result, the entrance of non-utilities into the electric generation business,
coupled with the tax benefits associated with renewable generation facilities,
has created further opportunities for non-utilities that lack development
and operating expertise to become significant investors in renewable generation
assets. Over the last few years, a growing number of investors (primarily financial
institutions) have turned their attention to renewable generation facilities
as an investment vehicle. This has been driven not only by growing maturity
of the renewables industry that has witnessed an increasing number of renewable
generation facilities being constructed each year, but also by other developments
such as decreasing returns on alternative investment vehicles. For example,
many of these financial institutions have long invested in low income housing
projects, transactions driven by the availability of the low income housing
tax credits.[6] But in recent years the returns on low income housing tax credit
investments have significantly decreased, making the returns available in the
renewable generation area that much more attractive. Furthermore, the structure
of PTC/tax benefit-motivated investments in the renewable area is the same in
many basic respects as the structure used in the low income housing transactions,
thus presenting these investors with a new application of a familiar investment
vehicle.

Equally important, these renewable energy investments do not just serve as
a vehicle for ensuring efficient use of the associated tax benefits. By monetizing
the PTCs and other tax benefits, investors and developers have been able to
significantly improve the economics of qualifying renewable power projects,
using a number of structuring alternatives to tailor the transaction to the
particular project and objectives of the participants.

The term “monetizing” is somewhat of a misnomer, since these transactions do
not involve “sales” of the PTCs and other tax benefits, as was done, for example,
with investment tax credits under the old safe harbor lease transactions in
the early 1980s. Rather, these transactions constitute real investments in wind
power projects by investors with sufficient income from other sources to fully
utilize the PTC and other potentially significant tax benefits generated by
those projects, offering the potential for significantly higher returns. Choosing
the right structure will depend on the particular needs of the investor and
the developer, including the investor’s return requirements. With the right
structure, these transactions can have a tremendously positive impact on the
power market by matching investors that can use tax benefits with developers
looking for favorable financing alternatives.

The Production Tax Credit

The PTC is a nonrefundable credit against federal income tax liability that
is available for electricity produced at a facility owned by the taxpayer from
certain qualified renewable resources, including wind, biomass, solar and geothermal,
and sold to unrelated parties.[7] The amount of the PTC for 2005 is 1.9 cents
per kilowatt-hour ($19 per MWh) of electricity produced and sold during the
taxable year, an increase over the 1.8 cents per kilowatt-hour rate for 2004.[8]
This amount is adjusted annually using an inflation adjustment factor published
by the Internal Revenue Service each spring.[9] The PTC amount is reduced by
one-half for open-loop biomass, small irrigation power, municipal solid waste
and refined coal facilities.[10] For wind and solar projects, the PTC is available
for each taxable year in the 10-year period that begins on the date a project
is placed in service.[11] For other qualifying renewable energy projects such
as biomass and geothermal, this period is reduced to five years.[12] Because
the PTC is a relatively stable amount that is not directly affected by market
conditions or the financial performance of the project, it can help to ensure
a somewhat consistent return on investment regardless of the price of power,
project expenses or other variable factors.
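
To make this arithmetic concrete, the short sketch below applies the 1.5-cent
statutory base rate and the 1.2528 inflation adjustment factor for 2005 (both
cited in the endnotes) to a hypothetical wind project. The 50 MW size and 35
percent capacity factor are illustrative assumptions, not figures from this
article.

```python
# Illustrative PTC arithmetic. The 1.5-cent base rate and the 1.2528 inflation
# adjustment factor for 2005 come from the endnotes; the project size and
# capacity factor are hypothetical.

BASE_RATE = 0.015          # statutory base credit, $/kWh (IRC sec. 45(a)(1))
INFLATION_FACTOR = 1.2528  # IRS inflation adjustment factor for 2005

# 0.015 * 1.2528 = 0.018792 $/kWh, which rounds (to the nearest 0.1 cent)
# to the published 2005 rate of 1.9 cents per kWh.
rate_2005 = round(BASE_RATE * INFLATION_FACTOR, 3)

# Hypothetical 50 MW wind project running at a 35 percent capacity factor.
capacity_mw = 50
capacity_factor = 0.35
annual_kwh = capacity_mw * 1_000 * capacity_factor * 8_760  # kWh sold in the year

annual_credit = rate_2005 * annual_kwh
print(f"2005 PTC rate: {rate_2005 * 100:.1f} cents/kWh")
print(f"Annual credit for the hypothetical project: ${annual_credit:,.0f}")

# Open-loop biomass, small irrigation power, municipal solid waste and refined
# coal facilities receive one-half of this rate.
print(f"Half-rate facilities: {rate_2005 * 100 / 2:.2f} cents/kWh")
```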

To qualify for the PTC, a facility must be “placed in service” on or before
Dec. 31, 2005.[13] A facility generally is considered to be placed in service
for this purpose when it is “placed in a condition or state of readiness and
availability for a specifically assigned function.”[14] This is considered to
have occurred when each of the following requirements is satisfied: (i) the
necessary permits and licenses to operate the facility have been approved, (ii)
the critical tests for the various components of the project have been completed,
(iii) the project is placed in control of the operator by the contractor, (iv)
the project is synchronized into the grid for the purpose of generating electrical
energy for production of income, and (v) daily operation of the project has
begun.[15] In the case of a wind project, each wind turbine that can be operated
and metered separately is treated as a separate facility for purposes of these
rules.[16]

Monetization, Motivation and Structure

As noted, many developers do not have sufficient federal income tax liability
to fully utilize the PTC and other tax benefits generated by a renewable power
project. By bringing in an investor with sufficient federal income tax liability
from other sources to use those benefits, a developer can fund the project while
providing the realistic possibility of a high and relatively stable rate of
return on the investor’s equity investment. Because of the relative certainty
of the availability of PTCs, some equity investors are willing to provide certain
limited credit support for project debt financing, which may increase the leverage
available for a project (reducing the amount of equity needed to fund the project)
and improve the investor’s overall returns.

There are a variety of ways to structure an investment in a renewable power
project to take advantage of the PTC and other tax benefits. The specific structure
of an investment must be highly customized to meet the investor’s, developer’s
and project lender’s particular circumstances and requirements. To qualify for
the PTC, electricity generally must be produced at a facility that is “owned
by” the taxpayer seeking to claim the PTC.[17] This requirement has a significant
impact on how PTC “monetization” transactions may be structured. Whatever the
structure, the parties must be comfortable that the investor will be treated
as the “owner” of the facility (or the entity that owns the facility) for federal
income tax purposes.

One potentially useful structure involves an equity investment in a partnership
(or a limited liability company taxed as a partnership) that owns the renewable
power project (the “owner”). The basic project economics are premised on the
sale of the electricity generated by the project and any associated “green tags”
to utilities or other wholesale power marketers pursuant to one or more long-term
power purchase agreements, generating a relatively stable stream of revenues.
The equity interests in the owner can be structured so that the investor initially
receives a large percentage of the cash distributions and tax benefits from
the project, including the PTC. Once the investor has received an agreed-upon
after-tax return or some other objective standard has been met, the sharing
ratios may flip so that the investor receives a smaller portion of cash distributions
and tax benefits and the developer receives a larger portion. The developer
also may retain an option to purchase the investor’s equity interest upon the
occurrence of an agreed-upon event. The alternatives available for structuring
the relationship between the investor and the developer are quite varied and
these transactions frequently are quite complex. Thus, these transactions require
careful structuring and drafting to meet each developer’s and investor’s unique
requirements.
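
As a rough illustration of how such a “flip” might be modeled, the sketch below
tracks a hypothetical investor’s cumulative cash distributions and tax benefits
until an agreed-upon recovery target is reached. Every figure in it – the
sharing percentages, cash flows, tax benefit amounts and the flip trigger – is
an assumption chosen for illustration; actual transactions are built around
negotiated after-tax return targets and far more detailed project models.

```python
# A highly simplified sketch of a pre-flip/post-flip allocation. All numbers
# are hypothetical.

investment = 40_000_000          # investor's upfront equity contribution ($)
recovery_target = 1.25           # flip once the investor has recovered 125% of its equity
pre_flip_share, post_flip_share = 0.99, 0.05

annual_cash = 3_000_000          # project cash available for distribution ($/yr)
annual_tax_benefits = 4_500_000  # PTCs plus depreciation-related tax savings ($/yr)

cumulative = 0.0
flip_year = None
for year in range(1, 21):
    share = post_flip_share if flip_year else pre_flip_share
    cumulative += share * (annual_cash + annual_tax_benefits)
    if flip_year is None and cumulative >= recovery_target * investment:
        flip_year = year

print(f"Sharing ratios flip after year {flip_year}")
```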

Debt Financing, Credit Support and Project Financing

Renewable electricity projects often are partly financed with project debt,
which can increase the returns offered to an investor. As discussed above, because
of the availability of the PTCs combined with an investor’s desire to increase
the return on investment, an investor may be willing to provide credit support
to cover certain circumstances that may occur, such as lack of wind in a wind
power project, that could prevent a project from generating sufficient revenue
to cover debt service. This credit support may take a variety of forms, such
as an agreement by the investor to make additional capital contributions in
the event of a cash shortfall, to be made up in later years from distributions
when project production is at or above anticipated levels. In addition, products
such as insurance and liquidity facilities may be available from third-party
financial institutions to cover some of these risks. If the investor is willing
to provide credit support, or if insurance or liquidity support is available
at a favorable price, the amount of debt may be increased or the overall cost
of debt financing may be significantly reduced, or some combination of the two,
which may improve the investor’s overall return.

The developer typically manages the wind power project and is responsible for
operation and maintenance, subject to customary rights of the investor to remove
the developer from management and operation functions due to nonperformance.

Conclusion

Private investments in renewable projects should be carefully structured to
enable the investor to take full advantage of the PTC and other tax benefits
available. Because of the complexity in this area, structuring a private investment
in a renewable energy project should involve careful analysis and planning.

Endnotes

  1. 15 U.S.C.A. § 79a et seq.
  2. Public Law 95-617 (92 Stat. 3117); portions were codified at 16 U.S.C.A.
    2601 et seq.; various provisions appear elsewhere in the U.S. Code.
  3. This Act effected various amendments to the Public Utility Holding Company
    Act of 1935, supra, PURPA, supra, and the Federal Power Act, 16 U.S.C.A. § 791a
    et seq.
  4. See FERC Order No. 888, Promoting Wholesale Competition Through Open Access
    Non-Discriminatory Transmission Services by Public Utilities, 61 Fed. Reg.
    21,540 (May 10, 1996), FERC Stats & Regs. ¶ 31,036 (1996), order on reh’g,
    Order No. 888-A, 62 Fed. Reg. 12,274 (March 14, 1997), FERC Stats & Regs.
    ¶ 31,048 (1997); order on reh’g, Order No. 888-B, 81 FERC ¶ 61,248 (1997),
    order on reh’g, Order No. 888-C, 82 FERC ¶ 61,046 (1998), aff’d in relevant
    part sub nom. Transmission Access Policy Study Group v. FERC, 225 F.3d 667
    (D.C. Cir. 2000), aff’d sub nom. New York v. FERC, 535 U.S. 1 (2002); and
    FERC Order No. 2003, Standardization of Generator Interconnection Agreements
    and Procedures, 68 Fed. Reg. 49,845 (August 19, 2003), FERC Stats & Regs.
    ¶ 31,146 (2003), order on reh’g, Order 2003-A, 69 Fed. Reg. 15932 (March 26,
    2004), FERC Stats & Regs ¶ 31,160 (2004).
  5. Programs implemented by some states requiring regulated utilities in those
    states to have a certain percentage of their loads met by renewable energy
    sources.
  6. Available for qualifying low income housing projects under IRC § 42.
  7. See IRC § 45(a), (c)(1)(A). The PTC is allowed as part of the general business
    credit pursuant to IRC § 38. Although the PTC is not refundable, any unused
    credit for a particular tax year can be carried back one year and forward
    20 years. See IRC § 39. The PTC generally does not apply to electricity sold
    to a utility pursuant to an avoided cost contract entered into before January
    1, 1987. See IRC § 45(e)(7).
  8. The PTC is 1.5 cents per kWh, adjusted for inflation. See IRC § 45(a)(1).
  9. See Notice 2005-___, 70 Fed. Reg. 18071-01 (Apr. 8, 2005) (announcing that
    the inflation adjustment factor for 2005 is 1.2528).
  10. See IRC § 45(b)(4)(A).
  11. See IRC § 45(a)(2)(A)(ii).
  12. See IRC § 45(b)(4)(B).
  13. See IRC § 45(d)(1). This sunset date has lapsed and been extended a number
    of times over the years.
  14. See Treas. Reg. § 1.46-3(d)(1)(ii) (applying a similar standard for purposes
    of the investment tax credit); see also Rev. Rul. 76-256, 1976-2 C.B. 46; Oglethorpe
    Power Corp. v. Commissioner, T.C. Memo 1990-505.
  15. See generally Rev. Rul. 76-256, supra.
  16. See Rev. Rul. 94-31, 1994-1 C.B. 16.
  17. See IRC § 45(d)(1). An exception is made for open-loop biomass facilities.
    Under this exception, if the owner of a facility is not the producer of the
    electricity, the lessee or operator of the facility who actually produces
    the electricity can claim the PTC. See IRC § 45(d)(3)(B).

Understanding Selection Bias

Selection bias can be defined as “a nonrandom participation in a program offer
leading to damaging financial results.” It can affect the profitability of any
product line where a group of consumers is offered products or services with
varying profitability among the customers. Selection bias may occur when consumers
have insight into the relative desirability of their individual offer based
on knowledge that the seller does not have or cannot control. For example, if
a randomly selected, unsegmented group of consumers is offered a single-price
furnace warranty, those consumers who own older, inefficient furnaces are more
likely to enroll than those whose furnaces are relatively new. If the seller
bases its price on the average age of furnaces, selection bias will damage the
warranty program’s financial performance.

Although selection bias may occur in virtually any consumer program, this paper
focuses on the fixed utility bill product because it is gaining significant
acceptance in the marketplace and it can be significantly affected by selection
bias. A fixed bill is a predetermined, guaranteed utility bill that provides
the consumer a known monthly payment amount with no adjustments or true-ups.
These offers are generally customized to reflect the usage patterns of the individual
consumers.

Fixed bill providers use mathematical techniques with varying degrees of sophistication
to generate consumer offers. Selection bias is introduced by imperfections in
the data and techniques used to generate the consumer quotes. For the purposes
of this paper, a “perfect quote” is defined as the amount that an average consumer
would expect to be charged for this product, whether or not they are willing
to accept the offer. In almost any offering, some consumers will accept and
others will reject. “Perfect” is therefore defined with respect to average
customer expectations; in this study, unlike in the real world, quotes are
assumed to be the same for all customers.

Case 1: No Selection Bias

Fifty consumers are offered perfect fixed bill quotes of $1,000, each with
a $200 margin. Assuming 100 percent participation, the program revenue is $50,000
with $10,000 of expected margin.

A second set of 50 consumers is offered fixed bill quotes generated with inaccurate
data and/or poor modeling techniques. The quotes are randomly priced but maintain
an average $200 margin per customer. Assuming everyone accepts their offer from
these quotes, the expected margin is still $10,000 and the financial results
do not change.

From this example, we can see that selection bias does not occur either with
perfect quotes or with 100 percent acceptance.

Case 2: Selection Bias

For the two scenarios in Table 1, assume penetration is less than 100 percent.
In the first scenario, penetration is flat – the same level of acceptance occurs
at all quote amounts. In the second scenario, the acceptance rate decreases
as the quote amounts go up. It is reasonable to assume that quotes that are
lower than they should be are more likely to be accepted than quotes that are
higher than they should be. The flat scenario illustrates results with no selection
bias.

The selection bias in the decreasing acceptance rate scenario yields a shortfall
in margin of $1,000 over the flat penetration scenario. This shortfall means
that margins will be $1,000 lower than anticipated in any weather condition.
(Misquoting does not have an impact on the cost to serve the customer.) Selection
bias only occurs with imperfect quoting and penetration that is related to the
price of the quote.
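
A small numerical sketch can make this mechanism concrete. The figures below
are hypothetical stand-ins for Table 1: quotes are mispriced around a $1,000
perfect quote while still carrying a $200 margin on average, and the expected
program margin is compared under flat versus price-sensitive acceptance.

```python
# Hypothetical illustration of Case 2. Fifty imperfect quotes are generated
# around the $1,000 perfect quote; the cost to serve is $800, so the average
# margin is still about $200 per customer.

import random

random.seed(0)
COST_TO_SERVE = 800.0
quotes = [1_000.0 + random.uniform(-100, 100) for _ in range(50)]

def expected_margin(accept_prob):
    """Sum of (quote - cost) weighted by each quote's acceptance probability."""
    return sum(accept_prob(q) * (q - COST_TO_SERVE) for q in quotes)

def flat(quote):
    """Same acceptance probability at every price."""
    return 0.5

def declining(quote):
    """Acceptance falls as the quote rises above the perfect $1,000 price."""
    return max(0.0, min(1.0, 0.5 - (quote - 1_000.0) / 400.0))

print(f"Flat acceptance margin:      ${expected_margin(flat):,.0f}")
print(f"Declining acceptance margin: ${expected_margin(declining):,.0f}")
# The declining case books more of the underpriced quotes and fewer of the
# overpriced ones, so realized margin falls short of the flat-acceptance case.
```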

Accuracy in quotes yields a tighter relationship between price and margin and
a higher likelihood of evenly distributed quote acceptance. Therefore, the more
accurately a quoting model predicts, the lower the selection bias will be.

How do you determine if selection bias has occurred? It can only be detected
after the fact by looking at program results. It cannot be predicted prior to
marketing since, by definition, selection bias results when consumers have knowledge
that sellers do not have.

In a fixed bill program, selection bias can be recognized by studying the difference
between the consumers’ actual energy usage and the usage predicted by the quoting
models, given the actual weather that occurred. The difference between the calculated
quote and the actual bills is called non-weather variance. Non-weather variance
has many components including selection bias, model error, behavior change due
to fixed bill program participation and behavior change in the normal population,
such as the purchase of more efficient equipment. If the non-weather variance
is consistently small over several years, selection bias has been low. Low selection
bias minimizes the adverse financial impact, particularly in a fixed bill program.

Model Quality Scenarios

In the furnace warranty example, modeling can be used to create customized
individual prices offered to the consumers, or to set the overall pricing for
tightly segmented groups of customers equal to each group’s risk.

The critical success factor in implementing a consumer fixed bill program and
ensuring low selection bias is the use of precise predictive models. With precision
models, the total non-weather variance over a long-running fixed bill program
can be near zero. Such precision modeling can eliminate the need for the large
price adders to cover selection bias, which are usually required by less precise
modeling techniques. Without precision modeling, consumers are forced to pay
higher costs to cover the risks associated with selection bias.

Four scenarios follow to illustrate the difference that model quality can make
in reducing selection bias. The results described in each of these scenarios
were developed using a highly simplified simulation. Each scenario was run assuming
5,000 consumers and using Monte Carlo methods to determine if each consumer
accepts or declines the offer. In each scenario, two different predictive models
are used. Model 2 uses a more accurate prediction method, as demonstrated by
the lower standard deviation associated with the individual quotes. In addition,
Model 2 is assumed to have no predictive biases. The results for each scenario
show the impact of selection bias that results from the less accurate Model
1.

Scenario 1: Average Quotes Are Attractively Priced

In this scenario, both models produce quotes that are in a range that is reasonably
attractive to consumers.

Scenario 1 assumptions:

  • The perfect quote is $1,000 for all consumers.
  • Model 1 quotes an average $1,000 with a 5 percent standard deviation ($50).
  • Model 2 quotes an average $1,000 with a 0.5 percent standard deviation ($5).
  • Costs are $800.
  • Acceptance rate is linear.

    – If the quote is 40 percent of a perfect quote, the acceptance rate is
    10 percent.

    – If the quote is 160 percent of a perfect quote, the acceptance rate is
    0 percent.
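
Under these assumptions, a minimal Monte Carlo sketch of such a simulation
might look like the following. It will not reproduce the Table 2 figures
exactly – the published simulation and its random draws are not available –
but, averaged over repeated runs, it produces a selection bias cost on the
order of the $1,074 reported, driven entirely by Model 1’s wider standard
deviation.

```python
# Minimal Monte Carlo sketch of Scenario 1. The acceptance function and
# quoting parameters follow the stated assumptions; everything else (seed,
# number of repetitions) is arbitrary.

import random

random.seed(42)
CONSUMERS = 5_000
REPS = 500                       # repetitions used to smooth simulation noise
PERFECT, COST = 1_000.0, 800.0

def acceptance_rate(quote):
    """Linear acceptance: 10% at 40% of the perfect quote, 0% at 160%."""
    ratio = quote / PERFECT
    return max(0.0, min(1.0, 0.10 - (ratio - 0.40) * (0.10 / 1.20)))

def average_margin(std_dev):
    """Average program margin for a model quoting $1,000 with the given std dev."""
    total = 0.0
    for _ in range(REPS):
        for _ in range(CONSUMERS):
            quote = random.gauss(PERFECT, std_dev)
            if random.random() < acceptance_rate(quote):  # consumer accepts
                total += quote - COST
    return total / REPS

model_1 = average_margin(50.0)   # 5 percent standard deviation
model_2 = average_margin(5.0)    # 0.5 percent standard deviation
print(f"Model 1 margin: ${model_1:,.0f}")
print(f"Model 2 margin: ${model_2:,.0f}")
print(f"Selection bias cost: ${model_2 - model_1:,.0f}")
```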

As shown in Table 2, the cost associated with selection bias is $1,074. The
difference between the two models’ margins is the selection bias cost – in
this scenario, 1.9 percent. A positive difference indicates a cost from using
Model 1 versus Model 2. The percentage is the difference divided by the Model
2 margin. The difference in Scenario 1 results entirely from the higher standard
deviation of Model 1.

Scenario 2: Quotes Are Higher Than Perfect

Scenario 2 assumptions:

  • Perfect quote is $1,000 for all consumers.
  • Model 1 quotes an average $1,300 with a 5 percent standard deviation ($65).
  • Model 2 quotes an average $1,300 with a 0.5 percent standard deviation ($6.50).
  • Costs are $1,100.
  • Acceptance rate is as stated in Scenario 1.

Table 2 shows the cost of selection bias is $3,026. The differences in model
accuracy are the same as in Scenario 1. However, note the significant increase
in selection bias risk to 12.4 percent. The further the price quotes deviate
from what consumers view as expected or fair prices, the more selection bias
is magnified and the greater the adverse impact with a less precise predictive
model.

Scenario 3: Model Bias Predicts Higher Than Perfect

In this scenario, there is a significant modeling bias (a structural problem
in the model) in Model 1 that results in higher quotes than with Model 2, which
has no bias. (Ordinary least squares regression models inherently have such data-induced
modeling biases. There are many other causes of modeling bias; these causes
are beyond the scope of this paper.)

Scenario 3 assumptions:

  • Perfect quote is $1,000 for all consumers.
  • Model 1 quotes an average $1,300 with a 5 percent standard deviation ($65).
  • Model 2 quotes an average $1,000 with a 0.5 percent standard deviation ($5).
  • Costs are $800.
  • Acceptance rate is as stated in Scenario 1.

As shown in Table 2, there is a gain associated with selection bias of $7,599.
In this case, inaccuracy in modeling actually produces favorable results for
the seller. There is a 14 percent gain where the less accurate Model 1 actually
produced more value due to the overpricing of the offers. This year’s bias
worked in the seller’s favor due to a favorable temperature bias. However, as
the next scenario will illustrate, this type of model bias can cause significant
swings from year to year. The swings are not symmetrical, with losses being
significantly larger than gains.

Scenario 4: Model Bias Predicts Lower Than Perfect

Selection bias is most dangerous with wide-standard-deviation models in the
underprediction case illustrated by Scenario 4. In this final scenario, Model 1 has the
same bias as in Scenario 3 but, due to differing data inputs such as temperature
changes, it produces quotes lower than those of the more accurate Model 2.

Scenario 4 assumptions:

  • Perfect quote is $1,000 for all consumers.
  • Model 1 quotes an average $700 with a 5 percent standard deviation ($35).
  • Model 2 quotes an average $1,000 with a 0.5 percent standard deviation ($5).
  • Costs are $800.
  • Acceptance rate is as stated in Scenario 1.

In this scenario, Table 2 shows that Model 1 predictions result in a 180 percent
loss of $95,740. When such biased models underpredict offers, the impact is
significantly greater because selection bias increases. While Scenario 3 shows
that in some years the bias may produce favorable results, the results from
negative years are much greater in magnitude.

These scenarios clearly illustrate how model quality can create selection bias
and ultimately be detrimental to a program’s financial results.

Some consultants suggest that by limiting offers to high R-square (a statistical
measure of goodness-of-fit) quotes, the effects of selection bias can be reduced.
A model R-square of .995 indicates that the model is 99.5 percent descriptive.
However, the .995 R-square does not mean that the model is predictively accurate
to within one-half percent. Correlation is not causation.

For example, you can build a relatively high R-square regression relationship
between Northern Hemisphere gas consumption and hours of sunlight in New Zealand.
This does not indicate that sunlight in the Southern Hemisphere causes consumption
in the Northern Hemisphere. It is just that the hours of sunlight correspond
to northern seasonality. In this case, the R-square of the model is high – the
model is descriptive but has almost no predictive power.
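
The same point can be shown with a small synthetic example. In the sketch
below, monthly “gas consumption” is generated from Northern Hemisphere
seasonality plus noise and regressed on New Zealand daylight hours, which
simply mirror that seasonality. The fit is highly descriptive (R-square near
1), yet when the weather pattern changes in a later year the daylight-based
model mispredicts badly. All of the data are synthetic and purely illustrative.

```python
# Synthetic illustration: high in-sample R-square, poor out-of-sample
# prediction. None of these numbers come from the article.

import numpy as np

rng = np.random.default_rng(1)
months = np.arange(48)                                  # four years of monthly data
nh_cold = np.cos(2 * np.pi * months / 12)               # Northern Hemisphere winter peak
gas_use = 100 + 40 * nh_cold + rng.normal(0, 3, 48)     # consumption follows NH cold
nz_sunlight = 12 + 3 * nh_cold                          # NZ daylight peaks in NH winter

# Fit gas_use ~ nz_sunlight on the first three years.
slope, intercept = np.polyfit(nz_sunlight[:36], gas_use[:36], 1)
fitted = intercept + slope * nz_sunlight[:36]
resid = gas_use[:36] - fitted
r2 = 1 - np.sum(resid ** 2) / np.sum((gas_use[:36] - gas_use[:36].mean()) ** 2)
print(f"In-sample R-square: {r2:.3f}")                  # high: both series are seasonal

# Fourth year: an unusually mild winter. Daylight in New Zealand is unchanged,
# so the daylight-based model keeps predicting a normal winter.
gas_use[36:] = 100 + 15 * nh_cold[36:] + rng.normal(0, 3, 12)
pred = intercept + slope * nz_sunlight[36:]
print(f"Mean prediction error in the mild winter: {np.abs(pred - gas_use[36:]).mean():.1f}")
```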

Predictive accuracy and the standard deviation of blind predictions are a completely
different matter, unrelated to R-square. Predictive accuracy, rather than
the descriptive power of the model, is the important issue in consumer programs
such as fixed bill products.

Conclusion

To revisit the original furnace warranty case, a seller preparing such an offer
would be wise to carefully analyze the age and other characteristics of the
furnaces in the potential consumer base in order to segment customers and
price the offers accordingly.

To mitigate selection bias, the seller should replace poor modeling techniques
with high-quality, predictive models and eliminate data inaccuracies. The seller
may use Monte Carlo simulation to understand selection bias and risk profiles
relating to its specific product.

Selection bias results from margins that vary randomly by customer and from
individual consumers having better knowledge than the seller. If unchecked, selection bias
can significantly impact the financial performance of a consumer program.


Toward a Capacity Demand Curve Market

The market for electric generation facilities has passed through several stages
in the transition to more competitive markets. Questions over the viability
of competitive markets gave way to fundamental market restructuring, investor
optimism, tremendous enthusiasm and investment, overbuilding and price depression
in seemingly rapid succession. Market designers are now looking toward the future,
trying to determine how the markets can be structured to attract investments
when and where they are needed. Problems are caused by both too much and too
little investment, and competitive markets will only succeed if they result
in maintaining a reasonable balance between these alternatives. In this environment,
one approach gathering momentum uses an administratively set demand curve to
set prices based on supply levels. Suppliers still face market risks, but the
risks are tied to the level of installed resources, in the hope that this
predictable market incentive will lead toward better stability and balance.

System reliability requires enough capacity to meet load in every hour during
the year and that requires maintaining resources that will only rarely be needed.
Maintaining reliability through the incentives available in a competitive market
is the challenge. Energy markets alone could provide this incentive if energy
prices were allowed to spike to very high levels. But, such pricing has not
been allowed due to concerns over market power, bidding behavior and consumer
reaction to extreme prices. A market with unrestricted energy prices is not
a practical alternative. The development of significant demand-response from
high prices could also help solve this problem, but such response has been slow
to develop. Nevertheless, enough capacity, but not too much, must be built.
Failing to achieve this delicate balance can either impose the substantial costs
of overbuilding or threaten system reliability.

Too Much, Too Fast

One measure of the success of the competitive wholesale market is its ability
to attract new investment. The burst of new investment in the late 1990s and
early 2000s would seem to suggest that competitive wholesale markets have been
successful. It is now clear, however, that too much capacity was built and came
on line far too fast. Reserve margins that were once considered a problem if
they exceeded 20 percent now reach 30 percent in many areas. Prices have plummeted,
however, and past investor optimism cannot be relied upon to maintain high,
or even adequate levels of capacity.

The present excess capacity situation – along with heightened FERC oversight
of participant behavior – has dampened energy market price spikes. In markets
where energy prices are often correlated with the short-run marginal cost of
the last unit dispatched, the highest-cost units do not make much money. Energy
prices alone are not providing sufficient revenues to cover the total cost of
new investment (see Figure 1).

In the past, regulated utilities were required to maintain adequate capacity
levels, which were frequently tied to a specific reserve margin. This concept
has continued in competitive markets, generally by requiring the payment of
capacity deficiency charges for any load-serving entity that falls short of
its requirement. In a short-term capacity market, this creates the incentive
for a binary price: Prices for capacity equal the deficiency charge when the
market is short, but fall quickly to zero at higher capacity levels. This price signal
is not proving to be adequate to provide the kind of stability investors or
regulators desire. It is compounded by the disincentives many load-serving entities
face in contracting for long-term power supplies in the current environment.

Demand Curve Concept

The solution, or at least the next approach to be tried, is the demand curve
concept. New York, New England and PJM have all introduced capacity markets
based on some sort of a downward sloping demand curve. New York has been operating
its demand curve-based capacity market since the summer of 2003, while New England
and PJM are currently developing capacity markets based on the demand curve
structure. The New England ISO has proposed such a market structure under a
FERC mandate, and the details of that design are currently being litigated at
FERC. A decision is expected in June 2005 and the market is to be operational
in calendar year 2006. In PJM, there is a stakeholder working group process
under way that is collectively determining the specific structure of the future
demand curve based capacity market. PJM anticipates holding auctions under the
new approach for the 2006–2007 planning year.

In a demand curve capacity market, the capacity price is set administratively
as a function of the amount of capacity that is in the market. This price ranges from
substantially above the expected annual carrying and operating (fixed) cost
of a new peaking unit during times of relative shortage, down to zero in times
of substantial excess. Payments are made to all generators in the region, with
allowances for imports and exports. The cost of the program is then assessed
on load on a pro-rata basis. Properly structured, these payments will provide
the right incentive for entry, and units will have the opportunity to make a
reasonable return on investment, as capacity levels in the region generally
trend around the desired level.
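
In stylized form, such a curve is simply an administratively chosen price
schedule over capacity levels. The sketch below illustrates the general shape
using made-up parameters – a hypothetical net cost of new entry, price cap and
zero-crossing point – and is not the curve of any of the three markets
discussed here.

```python
# A stylized capacity demand curve. All parameters are hypothetical and do not
# correspond to the New York, New England or PJM curves.

NET_CONE = 75.0     # annual fixed cost of a new peaking unit, net of energy rents, $/kW-yr
PRICE_CAP = 2.0 * NET_CONE
TARGET = 1.15       # installed capacity / peak load at which the price equals net CONE
ZERO_CROSS = 1.30   # capacity level at which the capacity price reaches zero

def capacity_price(capacity_ratio):
    """Capacity price ($/kW-yr) as a function of installed capacity / peak load."""
    # Linear through (TARGET, NET_CONE) and (ZERO_CROSS, 0), capped when the
    # system is very short and floored at zero in times of substantial excess.
    price = NET_CONE * (ZERO_CROSS - capacity_ratio) / (ZERO_CROSS - TARGET)
    return max(0.0, min(PRICE_CAP, price))

for ratio in (1.00, 1.05, 1.10, 1.15, 1.20, 1.30):
    print(f"capacity at {ratio:.2f} x peak load -> ${capacity_price(ratio):6.2f}/kW-yr")
```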

Figure 2 presents the general form of the demand curves for each of these regions.
This comparison should not be taken too literally, because there are many differences
regarding such issues as adjustments made for expected energy revenues that
would otherwise be earned. But the graph gives a general idea of how these curves
look. While this graph reflects a specific geographic sub-region for each
of the markets, all three systems have recommended different demand curves at
different geographic locations.


Source: ISO-NE, PJM and ISO-NY ICAP Demand Curves

The demand curve provides some predictability and stability to capacity revenues.
There is some debate over the conceptual framework upon which the parameters
of the demand curve should be based. Some look to the pricing dynamics of an
uncapped energy market (UEM) or a value of lost load framework (VOLL). In the
authors’ view, these theoretical constructs can be limiting, because they rely
on hypothetical market designs where prices are allowed to spike to unlimited
levels (UEM), or require controversial calculations of the value of supply interruptions
and the probability of such interruptions at varying levels of supply (VOLL).
Instead, a more straightforward method to maintain a reliable supply and lower
costs to consumers is desirable. Based on these objectives, the demand curve
mechanism provides a way to effectively put one’s “thumb on the scale” in the
supply-demand balance. The additional capacity payments should increase supplies
in support of reliability, and wherever possible the curve parameters should
be designed to lower the cost of that capacity in the market. Lowering investor
risks will ultimately translate to lower costs to consumers. The “thumb on the
scale” metaphor implies some deliberate intervention, yet – in theory – still
allows market participants on the supply side to respond to price signals to
achieve a balance between supply and demand. It also provides the flexibility
to lower costs to consumers, by structuring the payments in ways that make it
easier for new entrants to raise capital and by tying payments to performance
that can increase system reliability.

Unlike the capacity deficiency programs of the past, the downward sloping demand
curve provides a more continuous payment stream to generators across varying
capacity levels – payments are higher at critically low levels of reserves and lower
when the system has sufficient (but not excessive) reserves. Investors will better
understand how prices can be expected to change, but are given no guarantees.
Investment decisions will now focus on the potential for shortages and surpluses
– which matches squarely with the need to maintain reliability without excess
costs. New units are unlikely to enter unless they expect to earn at least equilibrium
returns in their first year of operation. This price level – associated with
the total cost of entry – is critical because the market should equilibrate
around this level. Estimates of the cost of new entry are used in designing
the demand curve, but market performance will ultimately be based on the actions
of participants. Prices will reflect suppliers’ actions, which are based on
their actual costs, not the initial estimates. Market exit decisions will also
be based on market prices, and retirements are likely to occur during times
of relative surplus, ameliorating excess supply conditions. Investors will bear
the risk of their decisions – whether profit or loss – based in part on their
own projections of the future supply/demand balance. This creates the incentive
to keep the system in balance.

Developing Demand Curves

While no demand curve will eliminate the risk of excess capacity eroding revenues,
the slope of the demand curve has significant implications. Keeping the demand
curve steep around the expected equilibrium level of the market – which is centered
on the cost of new entry in each of the proposed ICAP markets – will keep equity
investors focused on targeting new investment for periods when it is needed.
In this way, the signal to attract investment is highly focused on the level
of investment that is needed by the system to maintain reliability, and the
range of deviation around that point is minimized. A steep demand curve also
minimizes the cost of errors associated with the estimate of the cost of new
capacity. On the other hand, flattening the curve provides more price stability.
Within limits, providing some price stability will provide the revenue predictability
that makes it easier to attract capital – particularly low-cost debt.
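
The trade-off can be seen in a back-of-the-envelope calculation: for a given
error in the estimated cost of new entry, the deviation in the capacity level
at which the market settles is roughly the price error divided by the slope of
the curve. The figures below are purely illustrative.

  # Hypothetical figures: a $10,000/MW-yr overestimate of the cost of new entry.
  cone_error = 10_000

  steep_slope = 500.0   # $/MW-yr of price change per MW of capacity (steep curve)
  flat_slope = 100.0    # flatter curve

  print("Capacity deviation, steep curve:", cone_error / steep_slope, "MW")   # 20 MW
  print("Capacity deviation, flat curve: ", cone_error / flat_slope, "MW")    # 100 MW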

The different perspectives of equity and debt investors should be considered
in evaluating demand curve parameters. Equity investors look at expected returns
over the long term. Large risks can be worthwhile, as long as the potential
returns are sufficiently high. Lenders are much more focused on the likelihood
of meeting debt payments on a yearly basis from project cash flows. A great
equity investment can be a very poor opportunity from the lender’s perspective.
Imagine, for example, an investment that hinges on the flip of a coin and pays
three dollars for “heads,” nothing for “tails.” While this is an excellent
investment overall, a lender sees a 50 percent chance of total failure – hardly
the performance of investment-grade commercial paper. The lender’s return in a good year is capped
at the debt payment, and it generally does not have the luxury of offsetting
underperforming years with overperforming years over the course of the project’s
life. This difference in perspective between debt and equity comes in greatest
contrast for projects with the most volatile revenues.
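
A quick numerical version of the coin-flip illustration follows; the
one-dollar stake and the lender’s repayment amount are assumptions added for
the example.

  # The coin-flip illustration worked numerically. The one-dollar stake and the
  # lender's required repayment are assumptions added for this example.

  p_heads = 0.5
  payoff_heads, payoff_tails = 3.0, 0.0
  stake = 1.0

  expected_payoff = p_heads * payoff_heads + (1 - p_heads) * payoff_tails
  print("Expected return to equity:", (expected_payoff - stake) / stake)   # 0.5, i.e. 50 percent

  debt_service = 0.6   # hypothetical amount owed to the lender
  prob_default = (1 - p_heads) if payoff_tails < debt_service else 0.0
  print("Lender's chance of not being repaid:", prob_default)              # 0.5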

A demand curve can be structured to consider the needs of both types of investors.
The equity investor’s money can be placed at risk, with the potential for appropriate
returns, while debt investors’ need for greater certainty can be accommodated.
The risk to equity investors provides proper incentives to maintain system reliability,
while greater certainty provided to debt investors lowers the cost of capital
and ultimately lowers prices for consumers. While not explicit, these considerations
support the concept of a kinked demand curve – one that is relatively steep in the
range of the carrying cost of new capacity, but flatter, and therefore providing
greater payments, at lower price levels. This gives equity investors the incentive
to maintain just the level of capacity needed in the market, but makes it easier
for them to borrow money to build the new units.
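
A minimal sketch of such a kinked curve, under the reading given above (steep
near the cost of new entry, flatter above the capacity target), might look
like the following; the slopes, cap and cost of entry are hypothetical.

  # Sketch of a kinked demand curve (hypothetical parameters): steep in the
  # neighborhood of the cost of new entry, flatter above the capacity target,
  # so some payment persists in mild surplus.

  def kinked_capacity_price(capacity_mw, target_mw, cone,
                            steep_slope=400.0, flat_slope=100.0, cap_multiple=2.0):
      if capacity_mw <= target_mw:
          # Steep segment at or below the target keeps prices near the cost of entry.
          return min(cone + steep_slope * (target_mw - capacity_mw), cap_multiple * cone)
      # Flatter segment above the target lets payments decline gradually.
      return max(cone - flat_slope * (capacity_mw - target_mw), 0.0)

  for capacity in (9_800, 10_000, 10_300):
      print(capacity, round(kinked_capacity_price(capacity, target_mw=10_000, cone=75_000.0)))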

Another key consideration pertains to how capacity payments are adjusted to
reflect energy rents to the hypothetical new generator. Investments are made
based on the expectation of total payments, but some of these will come from
short-term profits in the energy market. These are often called energy rents
and are the difference between energy revenues and the variable costs incurred
in producing the energy sold. The energy rent adjustment is made in all demand
curve markets, but there are differences regarding how the adjustments are made.
The closer the adjustments are tied to actual energy prices, the more accurate
the adjustment and therefore the more accurate the targeting of the desired
capacity level. Energy rent adjustments in New York (and proposed adjustments
in PJM) are based on generic projections, while New England proposes that the
adjustment be based on actual data for 12 months prior to the monthly auction.
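
The mechanics of the adjustment can be sketched as follows; the carrying cost,
variable cost and handful of sample prices are invented for illustration, and
a real calculation would use a full year of hourly data.

  # Sketch of the energy rent adjustment (hypothetical figures). The net
  # capacity requirement of the hypothetical new unit is its total carrying
  # cost less the rents it earns in the energy market; the adjustment can be
  # a generic projection or can be computed from actual prices.

  def energy_rents(hourly_prices, variable_cost):
      # The unit is assumed to run only in hours when price exceeds its
      # variable cost; rents are the sum of those margins.
      return sum(max(p - variable_cost, 0.0) for p in hourly_prices)

  total_carrying_cost = 90_000.0               # $/MW-yr, hypothetical
  sample_prices = [30.0, 45.0, 120.0, 60.0]    # $/MWh, tiny illustrative sample
  rents = energy_rents(sample_prices, variable_cost=45.0)   # 90.0
  net_capacity_requirement = total_carrying_cost - rents
  print(net_capacity_requirement)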

As one digs into the details of these markets, other controversial issues arise.
Adjustments for unit availability and performance are important, and different
approaches can favor different participants. The goal for consumers is to incent
the behavior that provides the greatest reliability at the lowest cost. The
implications for trade between regions must be considered as well. These and
other details matter a great deal. A substantial portion of the total cost of
wholesale energy will be covered, either directly or indirectly, by these capacity
markets.

There are also questions about the effectiveness of demand curve-based capacity
markets in solving reliability problems in small zones. To some uncertain extent,
these designs may help reduce the need for reliability-must-run contracts. The
tradeoffs are likely to be based on the size of the payments needed to ensure
a competitive solution to local reliability problems. At some point the costs
needed to provide for a fully competitive solution to all circumstances can
be unreasonable, and nonmarket solutions are likely to be part of the market
for some time.

Will it Work?

While economic theory suggests that a properly designed demand curve-based ICAP
market should be superior to past attempts to address reliability issues, unfortunately
there is no guarantee of success and many issues have yet to be resolved. For
example, risk created by the potential for regulators to change the program
in the future will continue to exist. If the regulators do not let the market
respond to the price signals and instead intervene or force a regime change,
confidence could falter. In addition, the demand curve-based capacity markets
may not work as planned. The immense dollars at stake in these programs raise
concerns about unintended consequences and gaming. This concern will not dissipate
until the market has been in operation long enough to demonstrate that such behavior
either will not occur or can be identified and addressed as it arises. In any event,
recent FERC actions clearly indicate that the demand curve concept will be tested
in the marketplace.

 

 

Restructuring as Erosion

Throughout 2005, five anniversaries will mark mileposts in public power’s pursuit
to protect electricity consumers and strengthen the value of community ownership
of electric utilities.

The first is the 125th anniversary of the institution of public power. Butler,
Mo., has the longest-serving public power system. It was established in 1880
to provide arc lighting in the village square. As electricity quickly went from
phenomenon to necessity, thousands of communities created their own municipally
owned electric utilities, in many cases because the market wasn’t meeting their
needs quickly or cheaply enough. The number of public power systems reached
its apex in 1923 with a total of more than 3,500. Thereafter, the numbers declined
as hundreds of smaller public power systems and smaller IOUs were acquired by
the ever-expanding utility holding companies controlled by Samuel Insull and
other industry leaders. The vast majority of those that survived – until the
utility holding companies were brought under control – continue to serve their
communities today.

Another anniversary is the creation of the American Public Power Association
65 years ago. Today, the APPA is the national voice for the interests of nearly
2,000 publicly owned, locally controlled electric utilities. Collectively, public
power systems provide electricity to more than 14 percent of the nation’s ultimate
retail customers, yet the systems own only about 10 percent of the nation’s
installed generation capacity (see Figure 1). It is obvious from these figures
that we are net purchasers of wholesale power, and equally obvious why we have
a very keen interest in properly functioning wholesale markets that meet our
needs.

Public power systems range in size from quite large, such as the Los Angeles
Department of Water and Power with 1.5 million metered customers, to Radium
City, Kan., with 23. Some are vertically integrated; some are distribution-only
companies. All were created to serve their communities and all share the characteristics
of public ownership and public service orientation.

There are some legal anniversaries that need noting, too. Seventy years ago,
Congress enacted the Public Utilities Act. Title I became the Public Utility
Holding Company Act of 1935 while Title II contained the Federal Power Act.
Both statutes remain on the books, although the Securities and Exchange Commission,
which has pleaded for PUHCA repeal, now acts as though it has been repealed
despite the lack of congressional action.

These statutes and the principles underlying them have served the public well
over the last seven decades. We have debated the promised benefits of deregulation
and restructuring. Remarks about the failures of regulation have also been prevalent
over the last few years. What is gradually displacing those comments is the
realization of how horrendously complex (and easily corrupted) the restructuring
process has turned out to be. Indeed, the process of deregulation is proving
to be more corruptible than the process of regulation. The “just and reasonable”
wholesale rate requirement embodied in the Federal Power Act continues to be
the law of the land and critical to protecting electric consumers from the abuses
of market power. The act’s prohibition against “undue discrimination” is the
foundation on which open access to transmission facilities has been built.

Finally, the last anniversary is that of the Federal Energy Regulatory Commission’s
Notice of Proposed Rulemaking (NOPR) that 10 years ago resulted in FERC Order
No. 888. This anniversary, and more specifically the lessons learned through
our experiments with open transmission access and industry restructuring over
the last decade, provides an opportunity to notice what has worked (and what
hasn’t) and what needs to be done.

Where We’ve Been

The journey toward restructuring and open access has been both fascinating
and frustrating. We’ve put on a lot of miles. Unfortunately, we haven’t made
much progress toward the goals we embraced at the outset. When it all began,
our goals were lower rates, better service and greater innovation through markets
and competition. New transmission organizations that would provide nondiscriminatory
access, eliminate rate pancaking and engage in regional planning (and possibly
construction) of transmission facilities were a means to these ends. For the
most part, public power systems were enthusiastic supporters of these new transmission
organizations.

While we were perhaps more skeptical (some would say more realistic) about
the probability of successfully restructuring this industry all the way down
to retail choice, we did look forward to the benefits of displacing regulation
with competition where that could occur without losing focus on the real beneficiaries
of the process – the end user or consumer. Sadly, we have yet to see these predictions
come true or the new institutions perform the functions as they were initially
envisioned.

The regional transmission organization (RTO) and independent system operator
(ISO) institutions sanctioned by FERC to date are quite different from the ones
we initially visualized, and all are blemished by spiraling costs, unaccountable
governance, and most important, service offerings that do not meet transmission
customer and end-user needs.

Hiding Behind FTRs

Public power systems are load-serving entities with the sole mission of meeting
the electric service needs of their customers and communities as cheaply and
reliably as possible. Most depend to some extent on the wholesale power market
to serve their retail load. Long-term assured access to transmission at stable
and predictable rates is essential to meet this mission. RTOs and ISOs are not
helping to meet this mission, and are actually impeding us.

In the RTO world of today, APPA members are being forced to exchange their
physical firm long-term transmission rights (often hard-won through litigation)
for financial transmission rights (FTRs) that are inadequate in quantity and
term. An RTO’s idea of a “long-term” FTR is one to five years, while public
power’s idea of “long term” is measured in decades. The ability of public power
systems to plan for and procure long-term generation resources to serve load
is being hindered. Credit rating agencies, which have liked our business model
and consistently given us high ratings despite the financial meltdown of others
in our industry, have taken note of this problem, and that, too, concerns us.

Not only do public power systems need long-term assured access, they need reasonable
stability in pricing. Access to the transmission system in RTO regions is being
rationed by price under the locational marginal pricing (LMP) construct. Rate
pancaking, one of the ills that was to be eliminated through RTOs, has been
replaced by LMP differentials that often have the same (if not worse) economic
effect.

Leave It to the Market

We do need a more robust transmission grid. However, the LMP/FTR system taken
alone does not ensure the construction of adequate transmission infrastructure.
At best, it shows which source and sink pairings create congestion with the
hope that this information will be sufficient for the “market” to develop economically
efficient solutions. For the most part, merchant transmission companies have
not formed. And incumbent transmission owners have reasons of their own for
not being eager to build new transmission facilities.

Complicating this problem is the fact that some RTO transmission-planning regimes
have focused on the artificial distinction between new transmission needed for
“reliability” and that needed for “economic” purposes. Where new transmission
is deemed necessary solely for economic reasons, construction is being left
to the “market” with less than optimal results. As Professor Paul Joskow of
the Massachusetts Institute of Technology noted in a recent analysis of PJM
transmission additions, “‘Economic’ transmission investments can also often
confer ‘reliability’ benefits as well. Thus … reliability and economic transmission
investments are interdependent. At worst, the distinction between them is analytically
arbitrary.” There is no bright-line distinction, and rather than focusing on
transmission to enhance reliability or transmission to enable transactions,
we should be looking for an adequate transmission infrastructure that meets
society’s needs for electricity at reasonable cost.

These developments inevitably lead to the perverse result that many APPA members
are not looking to the wholesale power market but are instead either renewing
power supply contracts with their existing IOU suppliers, or building their
own generation as close to their own loads as possible – all to reduce transmission-related
risk and uncertainty. These decisions may not produce the most economically
efficient generation resource results, but they are being driven toward these
outcomes by the RTO/LMP market construct. What is not developing is a well-functioning
wholesale power market with many healthy competitors.

Worse yet, we are paying for living in this frustratingly complex RTO environment
because RTO administrative costs have climbed with little apparent accountability
for or appreciation of the impact of these outlays on electric consumers. This
problem is additionally aggravating for the mostly small public power systems
within the RTO footprints that must add staff, hardware and software simply
to cope with these new markets, protocols and requirements.

Getting a Good Rate

The other shoe on FERC’s policy foot is its market-based rate regime. FERC’s
market-based rate policy until now has assumed that competitive markets (supplemented
in RTO regions by RTO market monitoring and mitigation regimes) will produce
just and reasonable rates for wholesale power supplies. In many real-world instances,
in the organized markets and elsewhere, this has proven not to be the case.

Prices for generation charged in the organized markets are often substantially
above what would result if cost of service regulation were used. It is clear
that many suppliers in these markets are not bidding their marginal costs, as
the theory underpinning centralized single price clearing markets posits, but
instead are charging what they think the market will bear. What the market will
bear often does not pass the Federal Power Act’s “just and reasonable” smell
test, especially in periods of high demand. Moreover, the price volatility in
these short-term markets does not match up with the steady stream of long-term
revenue that investors in new generation facilities now like to see.

Sale of power at market-based rates is a privilege, not a right. To obtain
FERC permission to sell wholesale power at market-based rates, the seller must
demonstrate that it does not possess, or has mitigated market power. There is,
as well, a continuing obligation on FERC’s part to ensure that market-based
rates remain just and reasonable. If that obligation was not clear before, it
was made crystal clear last fall by the 9th Circuit Court of Appeals decision
in State of California, ex rel. Bill Lockyer v. FERC.

FERC’s assumption that the entire footprint of an RTO constitutes the relevant
geographic market, and that the RTO’s mitigation regime is sufficient to counteract
any generation market power as seller, may have to be examined carefully, as
does the actual performance of market monitors. They are the first line of defense
in ensuring that market-based rates continue to be just and reasonable.

FERC is increasingly relying on market monitors, who are either employees of
or contractors to the RTOs, to assess whether the wholesale energy and transmission
prices reflect those of competitive markets. But the recent decision of the
United States Court of Appeals for the District of Columbia Circuit in Electric
Power Supply Association v. FERC calls this continued reliance into question.
As FERC Commissioner Joseph T. Kelliher has noted, it may not be appropriate
(or legal) for FERC to delegate determination of “just and reasonable” rates
to market monitors.

The APPA has recently commissioned an assessment of RTO and RTO-sponsored empirical
studies on how restructured wholesale power markets in the mid-Atlantic and
Northeastern U.S. are performing. The preliminary results are not comforting.
The analysis is not yet final, but suggests there is good cause to be concerned
over abuses of market power, strategic behavior on the part of suppliers intending
to raise prices and the lack of competitive market forces to constrain anticompetitive
behavior of market participants.

Anecdotally, many APPA members face threats to their viability because of the
lack of availability of long-term firm transmission and increasing generation
consolidation. They get few if any viable bids from suppliers in response to
their RFPs, can’t obtain transmission to reach alternative sources of power
and suffer dramatic price increases from local suppliers with market power.
Little wonder, then, that we look with alarm at more consolidation within the
industry, such as the recently announced gigamerger of Exelon and PSEG, which
will create a company controlling more than 50,000 megawatts of generation.
Industry observers assume this marriage will be blessed by FERC because any
possible anticompetitive problems will be mitigated simply by membership of
the merged utility in PJM. That assumption must be validated.

Calling All Cops

As noted earlier, this is the 10th anniversary of the FERC NOPR that produced
Order No. 888. The Open Access Transmission Tariffs (OATTs) required by Order
No. 888 to ensure transmission access on a nondiscriminatory basis were a behavioral
remedy intended to address the exercise of market power. To work, they require
policing by FERC. Some police work is occurring, and some bad guys are being
caught. We learned at the end of last year, for example, that random audits
by FERC’s Office of Market Oversight and Investigations found that both Arizona
Public Service and Tucson Electric Power had failed to make timely data postings
and had provided favorable treatment to their merchant power subsidiaries. The
existence of these anticompetitive practices should come as no shock. Utilities
have been using their transmission assets to disadvantage competitors for decades.
It seems likely that APS and TEP are not isolated incidents, and in fact FERC
Chairman Pat Wood III recently suggested that we could see additional violations
uncovered as a result of other random audits. However, these two examples alone
suggest that Order No. 888 needs refinement and improvement.

FERC, unfortunately, issued Order No. 888, then quickly shifted its focus to
RTO activities. Unless a transmission customer called the hotline or filed a
complaint, FERC generally assumed all was well with OATT administration. FERC
should undertake a comprehensive look at ways its OATT regime could be improved.

Anniversaries

A lot has happened in 10 years. While things have not turned out as the APPA
and its members had planned, at least we seem to be taking the lessons learned to heart.
The debates on how to move forward are shifting away from blind faith in markets
to consideration of actual facts on the ground.

The proposition that public sentiment will trump economic theory, especially
misguided theory, also appears to be affecting current thinking about restructuring.

In 2005, there will be a new Congress, a new secretary of energy and possibly
changes in personnel at FERC. We have the benefit of a decade of experience
from which we can measure the successes and failures of restructuring against
the successes and failures of regulation. All of this sets the stage for 2005,
which looks to be a watershed year.

This paper was adapted from a speech delivered at the American Antitrust
Institute in January 2005.

Good Energy Policy = Balanced and Diversified

Dear Secretary Bodman:

Thank you for your outreach to Democrats, former Energy secretaries and Western
governors. I represent each of these categories and appreciate the fact that
you are interested in what I have to say. Because of its importance to our national
security, our economy and our environmental future, energy policy must be treated
as a bipartisan issue – and we must work together toward goals that will set
the nation on the pathway to energy security.

About Energy Policy

Over the long run, the decisions we make regarding energy use and energy supply
have proven to have huge implications. They have drawn us into international
disputes – arguably into war and occupation. They affect the very underpinnings
of our nation’s economy and the ability of households and businesses to prosper
or even to survive. And they have enormous impact on the environment – from
oil and gas leasing proposed in treasured Western places to the greenhouse gases
that increase in our global atmosphere and may be threatening the very nature
of life on earth.

Despite the fact that our nation has experienced international difficulty,
price and supply vulnerability and environmental damage as a result of our energy
policies, we don’t seem to have learned our lessons. Instead, opponents of new
energy policies often complain about their potential costs. Yet over the past
four years, according to the Industrial Energy Consumers of America, the price
spike in natural gas alone has cost businesses and consumers an extra $150 billion
or more. The price impacts of easily achievable conservation and clean energy
policies would be far below that number, and these policies would create jobs
instead of killing them.

Americans should stop holding themselves hostage to higher oil, gasoline and
natural gas prices that could be having structural impacts on our economy. We
should also recognize that there are huge changes occurring in international
energy markets, from the reduced production at Iraq’s oilfields to the booming
growth of energy consumption in large, fast-growing nations such as China and
India. Our energy policies need to prepare this nation for the markets that
will exist in coming decades. Because of these changes, it will be worth a small
expenditure to bring on diversified domestic energy sources. And, unfortunately,
the bill now making its way through the House of Representatives includes $7.7
billion of subsidies in the wrong places, ignoring important priorities such
as renewable energy production tax credits and incentives for hybrid vehicles.

As of this writing, the energy bill seems to be protecting special interests,
not advancing the national interest.

Secretary Bodman, I will work with you toward accomplishing comprehensive national
energy policy to:

  1. Create energy diversity and enhance domestic supply. The United States may
    never achieve energy independence, but it must make a high priority of reducing
    its dependence on overseas energy sources subject to price and supply disruption.
    It’s time for a large investment in renewables – one that will kick-start
    clean energy production and new storage technologies, from compressed air
    to hydrogen, to make renewables deliverable and reliable. By diversifying
    and domesticating our energy sources, we will create hundreds of thousands
    of high-quality jobs, reduce the export of oil dollars and allow the conservation
    of places – such as New Mexico’s Otero Mesa and Valle Vidal – that shouldn’t
    be drilled for oil and gas. Congress should quickly act on the renewable energy
    production tax credits extended for just 14 months in October of last year,
    with a 10-year renewal that will encourage rational, planned investment in
    sensible energy alternatives, and it should immediately enact an investment
    tax credit for storage options to help us toward the hydrogen economy. These
    actions will sharply reduce the imbalance between natural gas supply and demand
    in the U.S.
  2. Make energy efficiency our first priority. The nation is ready for strong
    energy efficiency leadership from Congress and the current administration.
    The energy efficiency incentives contained in bipartisan legislation proposed
    by Sen. Olympia Snowe of Maine and Sen. Dianne Feinstein of California in the
    108th Congress will inspire fast, effective energy conservation and efficiency.
    Major industries dependent on natural gas support immediate investment in
    conservation and efficiency that will reduce pressure on gas supplies, in
    particular. Increasing natural gas supply by building the Alaska gasline is
    a great idea; creating a new dependence on overseas natural gas sources by
    vastly increasing LNG imports is not. The administration should give the natural
    gas legislation proposed by Sen. Lamar Alexander of Tennessee and Sen. Tim Johnson
    of South Dakota a good, hard look, and it should recognize that increased automobile
    efficiency and new technologies are a critical part of addressing spiraling
    gasoline prices.
  3. Increase the nation’s attention to its electric grid. Congress has sat too
    long on reliability legislation. The northeast blackout of 2003 was another
    expensive warning that we need to adopt standards for grid management, hold
    grid users accountable and invest in transmission system planning and improvements.
    As a nation, we had the foresight and the common sense to dig deep and build
    an interstate highway system that has become central to the country’s economic
    health. The grid needs similar emphasis. The actions that the Federal Energy
    Regulatory Commission is taking now, to enhance access and affordability of
    renewable energy sources on the grid, are much needed, and the Department
    of Energy should cooperate with FERC on new transmission policies and plans
    throughout the country. Federal support for the development of high-efficiency
    transmission technologies will allow us to make better use of existing transmission
    corridors, as well.
  4. Recognize and regulate the threat of carbon emissions around the world.
    As we diversify our energy sources and create new renewable energy supplies,
    our nation will also make itself a partner with other nations rightly concerned
    about the potential for global warming – resulting in part from greenhouse
    gas emissions. While we build a stronger, more diversified energy economy,
    we will increase our ability to create market-based structures to limit and
    eventually reduce U.S. carbon emissions. The bipartisan cap-and-trade proposal
    put forward by Sens. John McCain of Arizona and Joe Lieberman of Connecticut
    would be the sensible place to start. As the world’s leading emitter, we should
    rejoin global negotiations regarding greenhouse gas emissions – perhaps you
    can make the case for a new emphasis on global climate negotiations and partnerships
    with Secretary of State Condoleezza Rice. A nationwide renewable energy requirement,
    like the ones recently adopted by the New Mexico Legislature and Colorado
    voters, is also needed. More than 20 states have adopted renewables requirements.
    Sen. Pete Domenici of New Mexico, chairing the Senate Energy Committee, recognizes
    the importance of national energy policy leadership when he declares that
    he is open to the concept of a renewables requirement in this year’s energy
    legislation. Just as Congress created standards for auto safety and air pollution,
    it will serve the public interest by setting standards for renewable energy
    delivery. (And as a recent report by the Office of Management and Budget indicated,
    with support from former EPA Administrator Bill Reilly, industry often overestimates
    the cost and price impact of federal environmental requirements, while underestimating
    the public health and economic benefits.)

The Federal-State Partnership

We here in the states stand ready to assist in this new national energy policy.
In fact, in the absence of congressional consensus on energy policy, and in
the face of the administration’s over-emphasis on oil and gas development, the
states have been leading. New Mexico, a longtime energy state, now calls itself
the Clean Energy State because our legislature and I have created a strong
partnership around development of new clean energy policies that we think will
help turn today’s oil- and gas-based economy into a broader and possibly longer-lasting
diversified energy economy.

We have adopted a wide variety of policies intended to increase clean energy
development, from tax credits to net metering to energy efficiency to renewable
energy requirements for utilities. And we are pushing proposed electric generating
facilities to consider gasification options as well as dry cooling that will
save millions of acre-feet of precious Western water.

Here in the Southwest we have significant wind energy potential. We could become
the Persian Gulf of solar energy development, offering a more reliable, price-predictable
and secure energy source. So we are investigating the feasibility of concentrated
solar power and providing incentives for households and businesses to create
distributed electric generation through net metering and other incentives.

The Western states are acting as regional clean energy partners as well. Last
year the Western Governors’ Association (WGA), acting on a resolution co-sponsored
by California Gov. Arnold Schwarzenegger and myself, set ambitious but achievable
clean energy goals for the 18 Western states. We started a process for the West
to produce 30,000 megawatts of clean energy by 2015, with a 20 percent increase
in energy efficiency by 2020. The support for this approach was unanimous, bipartisan
and regionwide. Why? Because Westerners, like other Americans, recognize that
it is critical for this nation to diversify and domesticate its energy sources.
This is not an effort to prevent other types of electric generation, but instead
to ensure that we are mapping out and building the foundation for clean energy
sources – including zero-emission coal – to become a large part of the Western
market.

To achieve the WGA targets, however, the states need the federal government’s
support. In the area of clean coal, as an example, I am encouraged by the support
offered by the White House and the Department of Energy for new zero-emission
coal-gasification technology. But clean coal technologies need to be tested
and implemented not only in the East and Midwest, but also in the West, where
higher elevations and different coal types could significantly affect gasification.

We also need help with transmission. Out West, where the wind blows and the
sun shines, we can produce vast amounts of energy for our fellow Americans.
But we need help getting the energy we produce to the markets that demand energy.
As mentioned, FERC has become the de facto leader in removing obstacles to national
renewable energy development, but it can do more in national transmission planning
and project development. Without this kind of leadership, our vast renewable
energy resources will not be developed.

How to Do It

The nation’s new approach to energy policy will require a new attitude from
the DOE and the administration. The administration should not have spent three
fruitless years arguing against my air conditioner efficiency standards in court.
These standards, though high, represented efficiency equivalent to the removal
of 1 million cars from America’s highways, at reasonable and cost-effective
expense to consumers. Instead of battling measures such as cost-effective appliance
efficiency standards, the administration should recognize that industry and
consumers will benefit from activist energy policy. As energy becomes more and
more international, it is increasingly important for our national leaders to
adopt policy that protects the nation’s economic and environmental interests.

The administration’s energy bill is seeing its third year of absence from the
president’s desk. The bill’s failure to move down Pennsylvania Avenue is a direct
result of the majority’s failure to work the issues with the minority – not,
as too often and unconstructively stated by the president, obstruction by the
minority.

Americans know we need new energy policy. According to polls – including a
recent poll here in New Mexico that showed the vast majority of New Mexicans
identifying energy issues as our most challenging problem, beyond drought and
the economy – the public is also aware that the administration’s approach to
energy has been tilted toward big existing industries rather than toward the
development of new technologies and renewable energy.

The time has come for our congressional leaders, and the administration, to
throw out the most extreme proposals that have prevented the adoption of legislation.
We need to come together around the sensible center.

We also need to recognize that getting ourselves out of our overdependence
on certain energy sources will cost money. The president’s budget is a tight
one, and his targets for deficit reduction will be hard to achieve under any
circumstances. But a nation that fails to invest in its energy future is a nation
whose economy, people, businesses and environment will remain vulnerable and
pay a significant ongoing penalty.

One last word: I hope you can help defuse the hostile relations that persist
between some U.S. energy producers and advocates of clean energy. The existing
energy industry has provided our nation with fuel to grow and become great.
It has the expertise and the resources to help us into a new energy future.

Clean energy advocates hold out promising ideas and policies that will diversify
and strengthen our nation’s energy portfolio, with significant economic and
environmental benefits. It is unfortunate, and certainly unconstructive, for
industry leaders and clean energy advocates to be so loudly and publicly at
loggerheads in the media and in the halls of Congress. The Secretary of Energy
can play the pivotal role in quieting the noise, creating open dialogue, listening
carefully and balancing the nation’s energy policy so that we accomplish great
things, together.

Secretary Bodman, you have enormous influence at this critical juncture in
the nation’s energy history. Your steady leadership and renowned management
skills will be needed, and tested, in the years ahead. You can count on me,
and many other Americans, to help if you call, and to support you in your implementation
of forward-looking policies that build our energy future.

Prioritizing Growth

After years of cost cutting and risk management to bring companies “back to
the basics” in response to the market anomalies, reliability issues and regulatory
uncertainty of the past decade, growth has moved back onto the CEO’s agenda.
In a 2004 IBM survey of more than 450 global CEOs, an overwhelming majority
cited growth as the top priority for driving their companies’ performance in
the coming years.[1] More than 80 percent of the CEOs surveyed highlighted growth
as a key component of their near-term strategy portfolios, more than 15 percentage
points higher than those citing cost reduction as key goals and over twice the
number of those emphasizing asset utilization and risk management.

Though the more pressing short-term priorities took their rightful place at
center stage over the last five years, the case for growth has remained self-evident.
Growth creates shareholder value, advances careers and makes work more rewarding.
From a societal perspective, growth drives the economy, creates jobs and improves
quality of life by bringing new products and services to the world. Thus, it
is important for utility executives to understand how companies achieve consistent,
successful growth – and how those lessons can be used in their own growth efforts.

Survey Says

To investigate what successful companies do to achieve growth and sustain it
over long periods, IBM completed a global study at the end of 2004 that focused
on three questions:

  • Who are the successful growers and what patterns are associated with them?
  • What do successful growers do differently?
  • How can other companies apply what they do?

To answer these questions, the team analyzed the growth and value creation
record of 1,238 companies that had been included in the S&P 1200 for all or
part of the decade spanning 1994 to 2003. The companies included in the study
represent a wide range of sizes, industries and global geographies. Together
they account for 70 percent of the world’s market capitalization. Of these companies,
79 were electric and gas utilities – 41 North American-based companies, 22 based
in Europe, 11 based in the Asia-Pacific region and five based in South America.

The team went on to examine the actions of those companies defined as “successful
growers” – those that grew both revenue and total shareholder return (TSR) faster
than the median for their peer group – to determine what they do differently
from others that do not achieve successful growth. Collectively, this group
recorded median revenue growth of 8.5 percent and median TSR growth of 8.8 percent.
Proving that growth has not been limited to specific sectors or emerging economies,
these 413 companies are an eclectic group. It includes many well-known consumer
products, electronics, telecommunications, financial and other diversified firms
such as Procter & Gamble, Cisco Systems, Vodafone and Capital One, as well as
20 leading electric and gas utilities spanning the globe, such as FirstEnergy,
Exelon, Constellation Energy, National Grid, Centrica and Australian Gas and
Light.
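
Restated in a few lines of code, the classification rule looks roughly like
the sketch below; the peer-group names and figures are invented for
illustration and are not the study’s data.

  # Sketch of the study's classification rule as described above: a
  # "successful grower" has both revenue CAGR and total shareholder return
  # (TSR) above the medians of its peer group over the decade.

  from statistics import median

  def successful_growers(companies):
      rev_median = median(c["revenue_cagr"] for c in companies)
      tsr_median = median(c["tsr_cagr"] for c in companies)
      return [c["name"] for c in companies
              if c["revenue_cagr"] > rev_median and c["tsr_cagr"] > tsr_median]

  peers = [  # hypothetical peer group
      {"name": "Utility A", "revenue_cagr": 0.12, "tsr_cagr": 0.11},
      {"name": "Utility B", "revenue_cagr": 0.05, "tsr_cagr": 0.09},
      {"name": "Utility C", "revenue_cagr": 0.09, "tsr_cagr": 0.04},
  ]
  print(successful_growers(peers))   # ['Utility A']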

Our team formulated hypotheses to explain the variation in outcomes and analyzed
individual companies. We found that successful growers break free of perceived
constraints related to industry boundaries, geographic neighborhood and company
size; use merger and acquisition (M&A) strategies effectively, contrary to the
belief of some that M&A destroys value; and possess the conviction and resilience
they need to recover from setbacks, correcting their course to sustain industry-leading
growth over the long term.

Facts or Urban Legends?

Executives sometimes view their growth potential as limited by a number of
factors: the nature of their industry and geographic “neighborhoods,” the size
of the company, the perceived dangers of M&A activity and the very real difficulty
of sustaining growth year after year. This study strongly suggests that these
perceptions are more likely self-imposed limits than marketplace realities and
as such can be overcome.

Is Neighborhood Really Destiny?

Growth leaders are not limited by industry maturity or geography. The S&P 1200
companies demonstrated a much wider range of growth within each industry and
geography than across them. In each of four primary geographies and across 18
industries, high-growth companies not only outperformed their peers, but did
so by wide margins.

Figure 1 shows the range of compounded annual growth rate (CAGR) and the median
for eight industry groups. Within the utility sample in the study group, the
median growth rate was 8.4 percent – but with the fastestgrowing company sustaining
a CAGR of over 10 times that rate. This was not a wild outlier – seven of the
companies grew at three or more times the median, and nearly a third of the
companies exceeded the median by 50 percent or more. And geography was not the
primary driver of robust, sustained growth; while Europe and South America had
strong showings and the Asia-Pacific region a relatively weak one, no single
geographic region dominated the list of the top 25 fastest growers.
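
For reference, the growth measure used throughout is the compounded annual
growth rate; a minimal calculation with hypothetical revenue figures is shown
below.

  # Compounded annual growth rate (CAGR), the growth measure used in the study.
  # The revenue figures here are hypothetical.

  def cagr(start_value, end_value, years):
      return (end_value / start_value) ** (1.0 / years) - 1.0

  # A utility growing revenue from $2.0 billion to $4.5 billion over nine years:
  print(round(cagr(2.0, 4.5, 9), 3))   # roughly 0.094, i.e. about 9.4 percent a year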

The message is clear: Neighborhood is not destiny. Executives have more room
to be ambitious than they tend to believe. Winning companies set ambitious growth
plans regardless of industry or geographic limits. They aim for targets above
and beyond what they and their peers typically expect.

Can a Company Be Too Big to Grow?

Another common perception is that large companies are slow growing. Our study
suggests otherwise.

As Figure 2 illustrates, companies with more than $10 billion in revenue grew
both revenue and TSR as fast as, if not faster than, their smaller counterparts
within the S&P 1200 sample. Now, admittedly, a billion-dollar company is not
exactly “small.” However, this data is powerful motivation indeed for the leaders
of multibillion-dollar utilities who have been led to believe that they are
too large to envision strong growth.

M&A: Value Destroyer or Growth Engine?

Several widely read articles and white papers have reported that a high percentage
of acquisitions (typically more than 50 percent) destroy value. But the growth
rates observed for the large companies in this study raise the question: Are these
companies using M&A to sustain their growth? The short answer is yes; companies
with revenue greater than $10 billion made 50 percent more acquisitions over
the decade than smaller companies. And contrary to the aforementioned works,
this acquisition-led growth did not come at the cost of value creation. In fact,
large firms grew their value (TSR) at 10.5 percent, versus 7.2 percent for their
smaller counterparts. Furthermore, we found that successful growers, regardless
of size, were more likely to acquire companies than others. In the sample we
studied, successful growers recorded twice as many acquisitions over the decade
as other companies.

Our research suggests that companies that build M&A skills can successfully
leverage acquisitions to drive their growth agenda. Why did we conclude this?
While M&A was not the focus of this study, we do have two hypotheses worth examining
further. First, some M&A studies take a short-term, typically one-year, view
of results. This study takes a longer, 10-year view. For energy and utility
companies, some would even say a decade is short-term planning.) Considering
the pains of integration, acquisitions may yield better results over the long
term than they do in the first few years.

Second, it appears from our research that a successful minority of companies
makes more acquisitions. They are able to find better deals and execute them
more effectively. This suggests that M&A is a game of skill, not chance. We
also noted that the successful growers made acquisitions that seemed to stay
closer to their core. For example, they made fewer (about half as many) international
acquisitions than other companies. They were also about 50 percent more likely
to acquire entire companies rather than business lines, brands, assets or partial
shares of a company.

Cisco Systems is one such company. Over the years, Cisco has built a repeatable
capability to leverage M&A. Its first acquisition, Crescendo Communications,
was met with skepticism when it was announced in 1993. As it turned out, however,
the move was based on a sound strategy, and Cisco’s revenue skyrocketed in the
wake of the deal. The head of Cisco’s acquisition program during the 1990s noted
that the initial Crescendo success made the company’s subsequent acquisitions
easier.[2]

With the Crescendo deal as a foundation, Cisco embarked on a strategy of acquisition
as a growth engine. From 1994 to 2003, it executed over 80 deals, while recording
revenue growth of 40 percent per year and TSR growth of 30 percent
per year. Noting the scrutiny the strategy had to endure from Cisco’s shareholders,
one analyst quipped that the company had “done it backwards in high heels with
the whole world watching.”[3],[4] How was Cisco able to defy the odds? It maintained
clarity on its objectives, built sustaining capability and stayed disciplined
in its execution. Cisco’s deals have consistently focused on acquiring technology
capabilities rather than established revenue streams or an existing customer
base. It has also limited its deals to a manageable size. Since 1994, only one
acquisition exceeded 5 percent of the company’s market capitalization at the
time of the deal, and nearly all were less than 2 percent.

While Cisco’s approach reflects a carefully laid out strategy, during the M&A
explosion of the dot-com years, some of its peers fell into the trap of pursuing
acquisitions “at any cost for any reason.” Between 1998 and 2000, for example,
one of Cisco’s major competitors invested billions of dollars in acquisitions
that proved to be failures. In the end, these acquisitions were either shut
down or sold for a fraction of their purchase price.

Is a Bad Year the Beginning of the End?

Successful growth is sometimes portrayed in the business press as the result
of strategic genius or uncanny foresight. In practice, the vast majority of
companies – even successful growers – stumble at some point. Of the companies
that outgrew their industry median over the 10-year study period, only 6 percent
did so every year of the decade. Fully 94 percent of successful growers experienced
at least one year of below-median growth; 72 percent fell below the median for
three years or more. What distinguishes successful growers is not perfection,
but the courage and conviction to recover from imperfections.

The case of the Wrigley Company provides one example of such resilience. Through
the 1990s, the company drove its growth via steady geographic expansion. But
by the late 1990s, its results were flagging.[5] In 1999, as company leadership
transitioned from one Wrigley generation to the next, the company outlined a
plan to take bold steps into new product markets through innovation and acquisition.
Like all transitions of its scope, Wrigley’s strategic shift was not entirely
seamless and, for a while, its results lagged those of its peers.[6] But by
2001, the company had restored market momentum and confidence. Though the elder
and younger Wrigleys followed markedly different strategic paths as CEO, the
company’s resilience in making changes gave the Wrigley Company the ability
to bounce back and beat its industry peers in growth and value creation over
the entire decade.

Turning Growth Into Shareholder Value

For an individual company, growth is neither risk free, nor a guarantee of
value creation. For the S&P 1200, several companies with above-median growth
delivered below-median value creation. These “unsuccessful growers” saw above-median
revenue increases while delivering below-median, or even negative, shareholder
return.

However, on average, growth is strongly correlated with higher value. We found
that the slowest growth quartile grew its TSR by under 1 percent a year over
the decade. At the other end of the spectrum, the companies in the highest-growth
quartile delivered more than 16 percent TSR growth annually (see Figure 3).

Our research showed that roughly one-third of companies with CAGR below the
median rate sustained above-median growth in shareholder value, while more than
two-thirds with above-median CAGR were able to sustain that high level of shareholder
value growth. A similar growth study conducted in 1998 found the same relationship
– clearly the relationship between growth and value creation is a firm and stable
one. The implications are clear – superior growth doubles the odds of superior
shareholder value creation. Winners understand that over the long haul, growth
– rather than cost cutting – is the lower-risk path. Indeed, the greatest risk
is not betting enough on growth.

Our conclusions show that attaining success requires excellence in several
key disciplines, similar to competing in a triathlon, where excelling in a single
event is not enough. Utilities that wish to embark on a new growth agenda must
begin to ensure they can excel in every one of the following areas of growth
expertise:

Course: The identification and selection of opportunities, the development
of a winning model and the creation and funding of initiatives sufficient for
sustaining growth. The key questions are: Where is the industry headed? Where
do we play in this future? How will we win and keep winning?

Capability: The activities, skills and assets that support the operational
model and enable the successful execution of the growth strategy. Here, executives
must ask: What do we need to win?

Conviction: The creation of organizational belief, momentum and resilience
in moving toward growth goals. The key question here is: How will we generate
action, maintain momentum and bounce back from failure?

While U.S. athletes Lance Armstrong (six-time winner of the Tour de France),
Michael Phelps (winner of six gold medals in swimming at the 2004 Athens Olympics)
and Allyson Felix (silver medalist in the 200-meter sprint at the 2004 Athens
Olympics) are among the best at their individual sport, none have the training
in the other sports to excel in a triathlon the way they do in their specialties.
The message to executives here is that strong and sustained growth is within
your grasp – but you must excel in each of the three areas mentioned above.

Endnotes

  1. “Your Turn: The Global CEO Study 2004,” IBM Business Consulting Services.
    2004.
  2. Paulson, Ed. Inside Cisco: The Real Story of Sustained M&A Growth. John
    Wiley and Sons. 2001.
  3. Cisco Systems. “Acquisition Summary.” www.cisco.com/en/US/about/ac49/ac0/ac1/about_cisco_acquisition_years_list.html.
  4. Paulson, Ed., op cit.
  5. “Wrigley Gives Rivals News to Chew On,” Financial Times (U.K.), Oct. 16,
    1999.
  6. “Wrigley Rebounds with $70B for 3 Brands,” Brandweek, April 21, 2003.

 

 

Re-Evaluating a Core/Noncore Electric Market

It is difficult to discuss or propose a core/noncore market structure without
discussing California’s previous retail market restructuring effort. Many academics
and others have written papers pointing out the flaws in California’s previous
actions. One aspect of those earlier restructuring debates deserves prominent
treatment: customer choice.

In the aftermath of the electricity crisis, we have given comparatively short
shrift to customers served by California’s electric system. What do customers
want? The short, anecdotal answer is that they want three things: lower prices,
high-quality service and options (about where they buy their power and what
type of power they buy). In numerous surveys, customers – especially residential
customers – report that they would actually pay more for green power. Knowing
this, it is fair to conclude that, in many instances, the current market structure
is not serving the needs and wants of customers.

To that end, a different core/noncore structure should be considered under
which customers above a 200-kW size threshold are automatically defined
as noncore, as long as aggregation is allowed and preconditions are in effect.
Those include, primarily, mechanisms to guard against cost-shifting – both of
past costs and of future utility generation investments – between so-called
“captive customers” and those who opt for choice.

Companies are risk-averse, and long-term investment requires a stable environment
that depends on a considerable degree of certainty in the regulatory arena. Merchant
generators, for example, cannot get financing for new generation investments without
the guarantee of a multiyear power purchase agreement from a regulated utility,
which in turn requires certainty of cost recovery from ratepayers – certainty that
can come only from the California Public Utilities Commission.

Despite these uncertainties, we have greater certainty about other aspects
of the system. Certain physical realities remain. There are chiefly two: first,
load levels and load growth are fairly predictable, at least over the short
and medium term. Regardless of who serves that load, it can be counted on to
exist and grow, at a modest 2 percent or so at least, over the next several
years. Second, the amount of electric generation available in the state today
is measurable. Again, this is regardless of who actually owns the capacity or
what entities are proposing to build new generation. So, it is possible to calculate,
within reasonable bounds, what the electric supply and demand balance is likely
to be over the next few years.

Thus, what we are actually addressing with a core/noncore market structure
is purely economic policy. We are struggling with how to allocate costs (and
therefore risk) among a series of actors in the market: customers, utilities,
generators, energy service providers and, for the last three, shareholders.
Each of these actors would like to minimize their risk, and it is the job of
regulators and legislators to balance that risk and ensure that it is shared.

The key element, therefore, becomes the application of uniform resource adequacy
requirements on all load-serving entities (LSEs) in the system. If all LSEs
are required to have under contract sufficient capacity and energy to serve
their customers, plus a reserve margin, there should be ample opportunity for
investment and profit, while spreading the risk of reliability failure among
a number of actors. One option for addressing this problem is the development
of a capacity market. In meeting the resource adequacy requirements, LSEs should
manage a diverse portfolio of types of resources as well as contract terms.
There should be business risk for all LSEs, investor-owned utilities (IOUs)
and energy service providers (ESPs) for prudent portfolio management.
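
At its simplest, a uniform resource adequacy requirement of this kind reduces
to a capacity test against forecast peak load plus a reserve margin; the
sketch below uses hypothetical figures and an assumed 15 percent margin.

  # Sketch of a uniform resource adequacy check for all load-serving entities
  # (hypothetical figures and an assumed 15 percent reserve margin).

  def meets_resource_adequacy(peak_load_mw, contracted_mw, reserve_margin=0.15):
      return contracted_mw >= peak_load_mw * (1.0 + reserve_margin)

  print(meets_resource_adequacy(peak_load_mw=1_000, contracted_mw=1_180))  # True
  print(meets_resource_adequacy(peak_load_mw=1_000, contracted_mw=1_100))  # False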

Cost Issues

Utilities know the costs of their retained generation in the past and on an
ongoing basis. The costs of the power contracts the state has with the Department
of Water Resources are finite and the time period is fixed. Recent and future
investment, either in the form of a physical asset or a contractual commitment,
is also knowable. Legal and regulatory requirements exist, to one degree or
another, that constrain our flexibility in allocating all of these costs. However,
there could be other creative ways of assessing charges to cover these costs,
without creating cost shifts or potential cost shifts among customers. The benefit
of this new assessment structure would be greater customer choice at an earlier
date.

Current and future investments in generation have the same potential to become
future stranded costs. Thus, under any core/noncore model, we will need an ongoing
mechanism to guard against cost shifting.

Utilities argue that until their customer base is reasonably certain, they
are unable to make long-term investments in generation. The same is likely true
for ESPs. So, without certain entry and exit rules, no LSE is going to be willing
to make long-term investments. The scenario that most observers are worried
about is when an IOU invests in a long-term generation resource for a certain
forecasted future load, and then loses that load to a direct access (or noncore)
provider. In this situation, the concern is about remaining customers of the
IOU being required to pick up the cost of the generation investment.

In reality, if an IOU makes an investment that turns out not to be needed to
serve its future retail load, the IOU will sell its excess generation on the
wholesale market. If an ESP needs generation resources, it may buy the excess
IOU generation. This gives rise to the worry that there could be a so-called
“death spiral,” whereby IOUs invest in generation for a decreasing customer
base; that customer base migrates to direct access or noncore status, forcing
IOUs to sell their excess power at a loss on the wholesale market, finally leading
to cost shifts to remaining IOU customers, and further incentive for noncore
exit.

In this situation, it is important to keep in mind that the size of the potential cost shift is not the full cost of the IOU’s investment in generation, but the difference between the wholesale market price and the retail rates received by the IOU. This amount should be coverable by instituting reasonable
market rules for switching and cost responsibility principles. Structuring a
capacity market is another way to address this issue.
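A brief worked example, with purely hypothetical prices and volumes, illustrates why the exposure is the wholesale/retail margin on the departed load rather than the full plant cost.

# Hypothetical illustration of the potential cost shift when load departs.
# All prices and quantities below are assumed for illustration only.

departed_load_mwh = 100_000   # annual energy the departing customers would have taken
retail_rate = 70.0            # $/MWh the IOU would have recovered at retail
wholesale_price = 55.0        # $/MWh received when the excess power is resold

# The exposure is the lost margin on the resold energy, not the plant's full cost.
potential_cost_shift = departed_load_mwh * (retail_rate - wholesale_price)
print(f"Potential cost shift: ${potential_cost_shift:,.0f}")  # $1,500,000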

In discussing this alternative proposal for core/noncore structure, the following
principles are important to consider:

  • Certainty of structure and rules is paramount;
  • Cost causation;
  • Rational rate design;
  • Preserving reliability;
  • Five-year planning horizons (supply and demand);
  • Importance of aggregation as an option; and
  • Customer size threshold for noncore.

Certainty

It is a fairly obvious and often-made point that certainty of market rules
promotes investment. Certainty, in this case, means not only a clear market
structure, but also clear implementation rules and time frames. We need to establish
a definition of which customers are core and which are eligible for noncore status; rules for switching from core to noncore status and back again need to be clear and stable; and cost responsibility needs to be clear and calculable for customers making economic decisions.

Cost Causation

In general, customers should pay for generation costs incurred on their behalf.
If an IOU makes a power plant investment while serving a particular noncore
eligible customer, for example, that customer should be responsible for paying
its fair share of the cost of that investment, even if it later elects service
from an energy service provider. This, in effect, covers the revenue requirement
of a generation investment.

Rational Rate Design

In addition to covering the revenue requirement, a wholesale effort is needed
to rationalize the rate structures for many customer classes to reflect the
true cost of serving those customers. Generally speaking, fixed costs should
be assessed with a fixed charge, while variable costs should vary by usage levels.
Moving toward real-time pricing and other tariff designs that allow rates to fluctuate with costs is not only necessary for a functional core/noncore market structure; it is also likely a reasonable precondition for one.

Preserving Reliability

As discussed, any market structure change should occur only in the context
of a stable resource adequacy requirement for all LSEs. If all entities serving
customers in the market are required to prove resource adequacy, then the system
in general should be resource adequate, regardless of which entity is serving
a particular customer.

This leads to a discussion of the “provider of last resort” issue. IOUs worry
that no matter how the market rules are structured, if some unanticipated situation
occurs and there is a system emergency, all customers will expect that they
will be able to switch back to their IOU provider and be served. The IOUs should
be the provider of last resort, but should be appropriately compensated for
fulfilling that role.

Planning Horizons

Most customers in the market have a one- to two-year planning horizon, while
most power plants cannot be built without at least a 10-year revenue stream.
The need to bridge this gap exists both for IOUs and ESPs, since both want to
be able to serve their customers at the lowest cost, which involves some long-term
commitments. To balance the risk and allow for reasonable planning horizons, we propose requiring a five-year commitment from customers to their core or noncore status.
This would mean that customers wishing to become noncore would pay a cost responsibility
surcharge for generation built or contracted for on their behalf while the IOU
served them. Likewise, a noncore customer who made an initial five-year commitment to noncore status but wishes to switch back to the IOU would have to pay the market rate for the remainder of that five-year commitment. Switching
among non-IOU providers would not create additional cost responsibility, beyond
the five-year commitment, but if there was any IOU service in the interim, the
customer would pay the market rate.
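The sketch below models these switching rules in simplified form; the function names and the flat annual dollar figures are illustrative assumptions, not a codification of any adopted tariff.

# Simplified, hypothetical model of the five-year commitment switching rules.

COMMITMENT_YEARS = 5

def exit_charge(generation_cost_responsibility):
    """A core customer electing noncore status pays its share of generation
    built or contracted on its behalf while the IOU served it."""
    return generation_cost_responsibility

def early_return_charge(years_elapsed, annual_market_rate):
    """A noncore customer returning to the IOU before its commitment ends pays
    the market rate for the remainder of the five-year commitment."""
    years_remaining = max(COMMITMENT_YEARS - years_elapsed, 0)
    return annual_market_rate * years_remaining

# Examples with assumed dollar figures:
print(exit_charge(250_000))             # surcharge owed on departure to noncore status
print(early_return_charge(2, 150_000))  # three remaining years paid at the market rate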

Aggregation

Aggregation of customers under the noncore size threshold – whatever it is
finally resolved to be – is of critical importance to satisfying customer needs.
For a number of noncore customers, the advantages of noncore service will not
be limited to price, but will include such important customer service options
as innovative billing and metering services, more responsive customer service
representatives or the ability to serve statewide chain stores through one provider.
For example, a fast food chain with locations in all service territories could
have one noncore ESP that provides aggregated billing to the corporate headquarters
for all locations. No IOU can offer that service, by definition. Most fast food
chains would not come close to meeting a 200-kW monthly peak demand threshold at each location/meter, but through aggregation, these customers’ needs can be served.

Aggregation is also an important option for smaller customers wishing to choose
green power options. Without allowance for small customer aggregation, retail
ESPs with green portfolios would not be able to serve residential customers.

Customer Size Threshold

If aggregation is allowed, the size threshold required to achieve noncore status
becomes less important. A 500-kW threshold for monthly peak demand would create
an automatic noncore status only for the very largest big box retail stores
and office buildings, plus most industrial customers. A 200-kW monthly peak
demand threshold would capture a much larger portion of the commercial market.
The CPUC’s preference would be for a 200-kW threshold, although 500 kW would
be satisfactory if aggregation of smaller customer loads is allowed.
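The chain-store example above can be made concrete with a short sketch: no single location clears a 200-kW threshold, but the aggregated accounts do. The per-store peaks are assumed, and the aggregation is a simple sum that ignores load coincidence.

# Hypothetical illustration of noncore eligibility with and without aggregation.

THRESHOLD_KW = 200   # monthly peak demand threshold under discussion

store_peaks_kw = [45, 60, 38, 52, 70, 55, 48, 63]   # assumed per-location peaks

individually_eligible = [p for p in store_peaks_kw if p >= THRESHOLD_KW]
aggregated_peak_kw = sum(store_peaks_kw)   # simple sum, ignoring coincidence

print(individually_eligible)               # [] -- no single store qualifies
print(aggregated_peak_kw >= THRESHOLD_KW)  # True -- the aggregated load qualifies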

Meeting the needs of utilities, independent power producers and Wall Street
is important, but should not be the primary function of the CPUC. We exist to
ensure that customers are served. In implementing a core/noncore structure
as outlined, we can give customers the choices they want and also meet the needs
of California and its power providers and generators.

Setting a Course for Growth

Within the next few years, energy and utility companies will be operating in
a more disaggregated, multinational industry. The demands of this new market
will drive companies to transform into larger, more dynamic and better-focused
organizations. Market-driven corporate structures will replace today’s largely
regulatory-driven ones. Companies will need to optimize their use of assets
and resources and meet the diverse demands of competitive and regulated businesses
in multiple geographies. One of the key requirements to be a leader in this
newly forged industry structure will be the ability to focus on and fully exploit
competencies that support growth.

As companies turn their attention to growth, they confront critical questions
about how to successfully execute growth strategies. Earlier in this book we provided an overview of a recent IBM study (see the 2004 IBM survey cited in “Prioritizing Growth,” page 8). From that study emerges a growth framework that emphasizes the best practices of successful growers in other industries. Companies that wish to achieve
strong long-term growth must excel in all three vital strategic disciplines:
course, capability and conviction.

The Winning Formula

Course: The Paths to Growth Are Many

Clear strategic direction is fundamental to success, yet its formulation and
adaptation generates divergent ideas and vigorous debate. To learn what works,
we examined the growth records of more than 60 companies. Our goal was to determine
what strategic principles are associated with their success. This review suggested
that the shaping and adapting of a successful course could be largely attributed
to superior application of four principles.

1. Develop a Point of View on the Future

Successful growers have a clear point of view on where their industry is headed.
Regulations change; customers exhibit new aggregate patterns of behavior; new
or disruptive technologies proliferate and new competitors arise; and successful
growers have a point of view on how to exploit these changes. That said, they
recognize they are not clairvoyant and revisit their point of view periodically.

Successful growers use a number of levers to put this principle into action.
They:

  • Understand the forces that have an impact on the industry and how they shape
    its future;
  • Demand and recognize insights from the business units and senior management
    on where value will be created;
  • Acknowledge areas of uncertainty and continually reassess the point of view;
    and
  • Create internal forums for industry and strategic discussion that are independent
    of operational reviews.

2. Continuously Evolve the Product-Market Portfolio

Successful growers evolve their product-market portfolio on an ongoing basis.
Even seemingly rock-steady firms exhibit a level of restless change behind the
scenes. While they stay grounded by understanding their true strengths, they
seek new opportunities to leverage their capabilities. To bring this principle
to life, they:

  • Take an expansive, customer-based view of markets not limited by current
    definitions of product and service categories;
  • Understand the potential and respect the limits of the company’s capabilities;
  • Realign the portfolio based on the attractiveness of opportunities and their
    fit with capabilities;
  • Consider alliances, acquisitions and divestitures, if necessary, and build
    the ability to execute and integrate transactions; and
  • Develop an external new business network, and over time formulate an internal
    venture capital capability.

3. Develop a Competitive Model

Bold industry-level visions notwithstanding, successful growers keep a sharp
eye on their competitive proposition and how it is working on the ground, market
by market, deal by deal. They:

  • Use customer, competitive and technology insights to create compelling value
    propositions;
  • Define the ways they can beat competitors in delivering and capturing value;
    and
  • Influence the environment through active participation in industry groups,
    regulatory and legislative processes and research consortia to shift the game.

4. Create and Sustain Multiple Growth Initiatives

No matter how successful, every strategy has a limited shelf life.

Successful growers sustain the growth quest by developing multiple growth initiatives,
allowing the company to draw from a portfolio of options. Chosen initiatives
must comprise a consistent, mutually-reinforcing whole and cannot be simply an
aggregation of disconnected projects. Successful growers:

  • Create multiple, mutually-reinforcing growth initiatives sufficient to achieve
    growth goals;
  • Build management systems to nurture initiatives through development stages,
    each with different needs; and
  • Maintain ongoing focus on cost and asset management to create funding for
    growth initiatives.

Capability: The Paths to Growth Rest on Strengths

Whichever growth path a company chooses, it is vital to align the operational
model with the capabilities that are the sustaining foundation of the overall
strategy.

As Table 1 shows, each major growth path requires a different set of capabilities.
Successful growers develop their capabilities methodically, harnessing process,
organizational and technological elements to create an ingrained, repeatable
strength. To develop and align their capabilities, leaders in growth:

  • Define operational models and capabilities against chosen growth strategies;
    identify required changes as strategies evolve; and close gaps;
  • Overcome the inertia of existing power structures to realign the model where
    necessary; and
  • Consider alliances and acquisitions if necessary to develop timely capabilities.

Building effective capabilities can drive growth that outpaces industry performance.
Figure 1 illustrates one example: Companies in this industry that develop strong
innovation capabilities and align them with their product and innovation strategy
consistently outperform their peers.

Conviction: Are You for Real?

Course and capability are necessary, but not sufficient conditions for successful
growth. A company must also demonstrate conviction to growth in both word and
action. Growth requires constant change, but this change can be wrenching and
organizations often resist it.

Successful growers develop a culture that embraces change and identifies leaders
with the passion to make it stick. When setbacks occur, these companies have
the resilience to bounce back. To drive this conviction deep into the organization,
they:

  • Create an inspiring purpose and ambitious goals;
  • Communicate a believable and consistent growth strategy to employees and
    investors;
  • Establish an effective system of metrics and incentives against market and
    capability initiatives;
  • Foster a culture of honest, fact-based debate on strategy and performance,
    and create management forums and processes to support it; and
  • Stay alert to organizational impediments and act quickly to unblock them.

Putting It Together

We have examined each of the three C’s – course, capability, conviction – individually.
How do successful growers put them together in practice? What happens when they
become disjointed or diverge?

Hit the High C’s

In our work, we found that successful growers not only score higher on the
three C’s in aggregate, they score higher on each of the three C’s, with a far
lower variation in performance than other companies (see Figure 2). This finding
suggests one reason why growth is often so difficult: to succeed, businesses
– like triathletes – need to excel in all three areas of the game, all the time.

Course, capability and conviction are equally important to achieving and sustaining
growth over the long term. In the short term, a smart strategy may be able to
compensate for weak capabilities – but any success it achieves will be fleeting.
Neither will a superior combination of strategy and capability lead to growth
if the organization does not believe the commitment is real, or if management
is unable to convert its intent into corresponding action.

Consider the legacy of AT&T Wireless. Early in the wireless industry’s development,
the company set a course for robust growth by aggressively buying spectrum to
establish a broad presence across the U.S. market and build its brand. The company
recognized that its superior network coverage offered the potential for rate
plans without roaming charges. Its launch of the U.S. market’s first “national
one-rate” plan drove rapid share gains.[1] Over time, the strong sense of conviction
that AT&T Wireless had inherited from its founding entity, McCaw Cellular, waned.
Increasing control from AT&T corporate and a series of leadership changes depleted
the company’s entrepreneurial spirit, prompted the departure of employees and,
eventually, undermined the company’s commitment to its course.

It had developed one of the most effective strategies in the industry but would
ultimately fall short in the areas of capability and conviction. Despite positioning
itself for strong growth and building significant market share, it failed to
fully develop corresponding capability (network and customer service capacity)
to support the increased volume. It fell behind competitors in building out
adequate coverage to support a genuinely national service, instead remaining
dependent on other providers for coverage in key markets. When quality began
to suffer and complaints mounted, AT&T Wireless was unprepared to respond.[2]

By 2003, AT&T Wireless, at this point an independent company spun off from
AT&T, had suffered declining growth and negative shareholder return over the
period. As a result, by 2004, AT&T Wireless was an acquisition candidate (and
completed a merger with Cingular Wireless late in 2004).

Align the 3C’s

To be effective, the three C’s must also be developed and aligned in an integrated
manner. Procter and Gamble exemplifies a successful approach to this alignment.

In the late 1990s, based on its strengths in branding, innovation and go-to-market
capability with global scale, P&G decided to pursue a new course. It moved to
steadily shift its portfolio away from staid categories like food and toward
categories like beauty and healthcare that offered higher returns on innovation
and global scale. P&G realized that winning in these markets would require a
fast pace of product introduction and sought to streamline the organization
to strengthen this capability. In 1999, it launched the Organization 2005 initiative,
which shifted authority from country general managers to newly created business
units that would drive new innovations globally.[3] While this realignment was
central to its goal of enabling faster deployment of innovation, it altered
the company’s long-established management structure in fundamental ways. Now,
five years later, the radical shift appears to have been successful.

P&G also realized that to meet its growth goals, it needed to drive a greater
number of product introductions than it could generate in its own labs. To accomplish
this, it adopted an explicit course of driving half its innovations from ideas
sourced outside the company and created a network of external partners that
included contributions from academia and even competitors. Making this happen
required conviction to change a proud R&D culture that valued its unique capability,
most clearly demonstrated by P&G’s new R&D leader, who explicitly stated that
the best ideas need not come from within the company.[4]

Recent developments show the fruits of this conviction. When P&G reinvented
the cleaning category with the introduction of Swiffer in 2001, the success
owed much to technology licensed from a Japanese competitor.[5] P&G management
has since broken new ground in exploiting its own technology in new ways by
forming a joint venture with competitor Clorox to commercialize technologies
it developed. These instances are a far cry from its previous “go-it-alone”
R&D culture.

The results of these changes speak for themselves. Not only did P&G outperform
its industry during the decade, achieving annualized shareholder return of 15
percent over the period, it has also corrected missteps and accelerated its
growth in each of the last three years. While remaining an iconic leader in
its industry, behind the scenes, the P&G of today is very different – in terms
of portfolio and organization – than the P&G of a decade ago.

Evolve the 3C’s

No competitive model is successful forever. Sooner or later, every growth path
runs its course and needs reinvention. Successful growers not only align the
three C’s to begin with, they also evolve them over time.

Another successful grower, global telecom leader Vodafone, demonstrates this
well. In the 1990s, it sought to project its “mobile only” business model on
a global scale, pursuing an aggressive, acquisition-led course by acquiring
established players in mature markets and newer entrants in developing markets.
At the same time, it divested businesses obtained through acquisition that did
not fit with its mobile-only strategy. To execute its strategy, Vodafone developed
strong capabilities to identify and execute large global acquisitions. It successfully
drove financial integration and controlled costs, earning it the respect of
the investment community. This, in turn, translated into higher valuations,
providing currency for subsequent acquisitions.[6] The results? Vodafone enjoyed
a very successful decade, boasting almost 50 percent compound annual growth
rate and 16 percent annualized shareholder return.

But the game has changed and Vodafone is in the midst of evolving its model
significantly around each of the three C’s, adapting its course from conquest
to consolidation. It now seeks to exploit the global scale it has built to create
greater global leverage of technology, procurement, and customers. The new course
requires new capabilities, and Vodafone is integrating its technology platform
across markets as well as converging its phone models to better leverage clout
with suppliers.

Despite struggling to overcome barriers, management is seeking to replace the
company’s deal-making focus with an operations focus and has communicated resolve
to see the changes through.

Conclusion

To successfully grow, utilities will need to escape the geographic and regulatory
boundaries that have constrained – and protected – them in the past, presenting
executives with profound challenges. But these challenges are not without precedent.
The experience of successful growth companies in other industries (some of which,
such as telecommunications, have undergone similar radical regulatory and structural
change) can be instructive and provide guidelines for pursuit of growth initiatives.

Successful growers set the right course – or growth direction – by forming
a clear point of view on the future, evolving their product-market portfolio
without being limited by geographical or historical boundaries, building a competitive
model to win and pursuing reinforcing initiatives to sustain growth. They truly
understand their capability – based on realistic assessments of strengths and
limitations – and evolve the operational model to support the growth strategy.
Successful growers also build organization-wide conviction for growth plans that
translate intent into living, breathing action on the front line.

A new world is opening for energy and utility companies, both more complex
and perilous, but also more exciting and rewarding. This world will cultivate
scale, efficiency and innovation, rewarding the best-prepared and hardest-working
with value beyond anything achievable in past years. Companies that can manage
a balanced portfolio of regulated and unregulated assets under a growth agenda
based on the 3C’s will be well positioned to reap the rewards of competition
– and define a leading role in this new world.

Endnotes

  1. Richman, Dan. “The Fall of AT&T Wireless.” Seattle Post-Intelligencer. Sept.
    21, 2004.
  2. Ibid.
  3. “Moody’s downgrades long-term rating of Procter & Gamble (Senior to Aa3)
    – Confirms short-term rating at prime-1 – Outlook stable.” Moody’s Investor
    Service Press Release. Oct. 19, 2001.
  4. “P & G – How A.G. Lafley is revolutionizing a bastion of corporate conservatism.”
    BusinessWeek. July 7, 2003.
  5. “Japan proves new-product gold mine for P&G.” Nikkei Weekly. July 12, 2004.
  6. “Vodafone posts £4bn profit.” BBC News Online. May 29, 2001.

 

Exploiting Broadband Over Power Lines

Broadband over power lines (BPL) and power line communications (PLC) are sets
of equipment, software and management services that when overlaid on the electric
grid provide users with communication means over existing “power lines” (cables
transmitting electricity). In the U.S., the two terms are used to differentiate
broadband versus narrowband communications. In other parts of the world, PLC
is often used to mean the underlying communications technology and is used to
represent both broadband and narrowband communications.

Utility companies have used their power lines for communications and control
for more than 30 years. Applications have included automated meter reading and
monitoring and control of grid operations. These applications require a very
limited bandwidth, as the data transfer rates are small and communications speeds
have been very slow.

In the early 1990s, several companies, mainly in Europe, began to research
PLC technology and high bandwidth signals. Research continued throughout the
1990s as the technology matured and trials were conducted. Commercial deployments
are now actively taking place globally, and the United States is somewhat behind
Europe in BPL deployment.

The new technologies operate in the 1-to-30 MHz range. The current technology
delivers 45 Mbps and it is anticipated that the next generation will deliver
200 Mbps to the transformer. Capacity on the low-voltage network is shared among individual homes. Integrators are engineering their networks to provide 25 Mbps
on average per home passed. Data transmission rates are symmetrical, so download
and upload speeds are equivalent, unlike asymmetric digital subscriber line (DSL) service.
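A rough back-of-the-envelope calculation shows how a shared link to the transformer can still support an attractive per-user rate. The number of homes, take rate and concurrency below are assumptions for illustration, not engineering data from any deployment.

# Rough, hypothetical sizing of shared BPL capacity behind one transformer.

link_capacity_mbps = 45   # current-generation capacity delivered to the transformer
homes_passed = 10         # assumed homes served behind the transformer
take_rate = 0.30          # assumed fraction of homes that subscribe
concurrency = 0.60        # assumed fraction of subscribers active at once

active_users = max(1, round(homes_passed * take_rate * concurrency))
per_user_mbps = link_capacity_mbps / active_users
print(f"{active_users} concurrent users -> ~{per_user_mbps:.1f} Mbps each")

Under these assumptions, two concurrent users share the 45 Mbps link at roughly 22.5 Mbps each, in the neighborhood of the 25 Mbps per-home-passed engineering target cited above.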

The Alternative Broadband

BPL technology offers an alternative means of providing high-speed Internet
access, voice over Internet protocol (VoIP), video on demand (VOD) and other
broadband services, using medium- and low-voltage lines to access homes and
businesses. BPL’s technical feasibility has been demonstrated in more than a
dozen field tests, and BPL as a business is being tested for the first time
in Manassas, Va., and Cincinnati, where BPL networks are being assembled to
reach thousands of customers.

IBM is working with a large provider in Texas to conduct a pilot of BPL technology
in that utility’s territory. In addition, IBM is testing VoIP, VOD and “utility
side applications” such as automated turn-on/turn-off, BPL-enabled AMR meters and
others in its BPL lab. The City of Manassas Electric Department is making BPL
available to the city’s 35,000 residents. In Cincinnati, Cinergy’s Current Broadband
is creating a network that will extend to 260,000 customers. PPL Telcom is also
deploying an “advanced market trial,” which is relatively large in scope, with
a network that has passed about 16,000 homes, 1,200 of which had already subscribed as of September 2004.

As a means of high-speed Internet access, BPL has a number of appealing features,
including transmission speeds that can be higher than cable and DSL. With BPL,
both uplink and downlink speeds are equally fast, making it a better option
when compared to the slower uplink speeds of cable and DSL.

BPL also offers electric utilities a high-value communications network that
can enhance the power delivery system. BPL can help utilities with activities
such as automated outage detection and restoration confirmation, remote capability
to connect and disconnect electricity service and more efficient demand-side
management programs via automated meter reading. One of the biggest benefits
of BPL for utilities is in providing an intelligent grid.

BPL technology has received endorsements from Federal Energy Regulatory Commission
Chairman Pat Wood and former Federal Communications Commission Chairman Michael
Powell. Given that the technology can function through virtually any electrical
outlet, BPL has the potential to provide a third alternative broadband option
with ubiquitous service to all Americans at affordable rates. In fact, the FCC
says that about 15 percent of all homes capable of getting high-speed Internet
have chosen to buy the service. That’s about 6.2 million residences that have
opted to pay $40 a month, versus about $20 for dial-up service. The FCC is also
actively working to resolve interference concerns between amateur (ham) radio operators and utilities deploying BPL.

How Does It Work?

In the BPL basic architecture, signals are injected into the electric grid
from a head end on the medium- or low-voltage lines at a substation. Fiber or wireless connections are used to backhaul the signal to the head end at the substation. The signals traverse the grid network over medium- and low-voltage
lines to the home or business of the end user. Links between the medium- and
low-voltage lines are facilitated by channeling signals either through the transformers
or by bypassing the transformer via bridges.

There are two alternatives for reaching end users. Vendors such as Amperion provide the interconnect with end users via wireless connections at the transformer, using a WiFi (802.11b for now) connection at two points: at the service injection point on the medium-voltage (12/23 kV) line, and at the customer drop. Repeaters and extractors along the line boost the signal and provide customer access via WiFi. Line-mounted extractors can be powered through induction (requiring a line carrying more than 70 A) and have an internal WiFi antenna. Pole-mounted and enclosure-mounted (for areas with underground wires) installations require a transformer and an external antenna. One clever touch is an antenna hidden inside a light pole. The solution uses off-the-shelf WiFi equipment as customer premises equipment (CPE). Others,
such as Mitsubishi, offer wire line connections, where users can plug a BPL
modem into any electrical outlet. They then connect their PC to the BPL modem
with an Ethernet or USB cable to finish the connection. The process is similar
to that required by users to connect to a cable-based Internet service. BPL
is able to transport data, voice and video at broadband speeds for end-user
customer connections.
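The signal path can be summarized as an ordered chain of segments. The sketch below is only an illustrative model of the architecture described above (backhaul to the substation head end, injection onto the medium-voltage line, repeaters, then either a WiFi extractor or a transformer bypass to the customer drop); the segment names are assumptions for illustration, not vendor software.

# Illustrative model of the BPL signal path described above. Segment names
# and the two last-mile options are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    description: str

def bpl_path(last_mile="wifi"):
    """Return the ordered segments a packet traverses from backhaul to customer."""
    path = [
        Segment("backhaul", "fiber or wireless link to the substation head end"),
        Segment("injection", "head end couples the signal onto medium-voltage lines"),
        Segment("medium-voltage run", "repeaters boost the signal along the line"),
    ]
    if last_mile == "wifi":
        path += [
            Segment("extractor", "line-, pole- or enclosure-mounted unit with WiFi antenna"),
            Segment("customer drop", "off-the-shelf WiFi equipment as CPE"),
        ]
    else:  # wireline option
        path += [
            Segment("transformer bypass", "bridge carries the signal onto low-voltage lines"),
            Segment("customer drop", "BPL modem in any outlet, Ethernet or USB to the PC"),
        ]
    return path

for seg in bpl_path("wireline"):
    print(f"{seg.name}: {seg.description}")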

There are numerous BPL vendors in the marketplace, and the leading U.S. vendors
in terms of installs are Amperion, Ambient, Current Technologies and Main.net
(see Table 1).

In addition, Mitsubishi Electric has several installations in the U.S. and
uses a bridging technology to bypass the transformer. It is the largest company
in the BPL equipment market, with a global reach, and it bears watching as the
market develops.

There is a wide range of energy companies in the U.S. that have shown interest
in BPL or are currently using BPL on a trial basis. Cinergy is the first to
go commercial. At least four other major U.S. utility companies have decided
to go commercial in 2005. These utility companies represent close to 10 million
potential BPL customers.

There are two sets of applications that are enabled through the use of BPL.
They are “revenue-side” and “utility-side” applications. The basic revenue side
applications are Internet service, VoIP services and video services. This is
commonly called the “triple play.” There is a wide range of utility-side applications
enabled through BPL. The utility companies that are using BPL on a trial basis
are coming up with new applications all the time. Utility applications cover
system monitoring and customer-facing applications such as AMR (automated meter
reading), demand-side management, load shedding and others.

BPL Risks

Managing risk is arguably one of the greatest challenges companies face in
harnessing the rewards of new technologies such as BPL. Their ability to understand
and manage operational, business and technical risks is crucial to protect brand
image, develop customer confidence, increase market penetration and achieve
long-term success.

Operationally, BPL is an overlay, not a grid element. There are no physical
electrical path changes. The technology cannot disable the grid. Installation
can be completed by contract personnel or by utility field crews, and crew availability
can have an impact on deployment schedules.

From a business perspective, implementing BPL is often seen as a foreign and
risky concept to utility companies (those burned in the Internet bubble can
attest to this). Expertise is required in broadband network deployment and management
as well as ISP systems. These risks can be mitigated by working with business partners experienced in deploying and operating ISP businesses.

Additionally, BPL will face competition from cable companies and telcos, and they will compete vigorously, because numerous studies indicate there will be a market for BPL-based services.

Technically, BPL works well. Two key issues that need to be managed are interference
and the lack of existing standards. These risks are mitigated through good network
design, testing, deployment and management. Equipment must be FCC Part 15 certified
to mitigate interference. Notching techniques can be used as well to help eliminate
interference.

BPL Trials

The PLCforum estimates that electric utilities have launched more than 80 PLC initiatives in more than 40 countries, and it indicates that pilot sites, technological or commercial trials and deployments are numerous
in Europe. The PLCforum highlights as the most important European initiatives those developed by EDF (www.edf.fr, France), EDP (www.edp.pt, Portugal), EEF (www.eef.ch, Switzerland), Endesa (www.endesa.es, Spain), Iberdrola (www.iberdrola.es, Spain), PPC (www.ppcag.de, Germany) and SSE (www.scottish-southern.co.uk, Scotland).

In Africa, BPL is being used on a trial basis in Ghana and in South Africa.
The City of Tshwane Metropolitan Municipality is leading the country forward
with the deployment of BPL in South Africa.

BPL Associations

In the U.S., the leading associations are the United Power Line Council and
the Power Line Communications Association.

The UPLC is an alliance of electric utilities and technology companies working
together to drive the development of broadband over power lines in a manner
that helps utilities and their partners in North America. The UPLC’s efforts
are focused in three strategic areas: market awareness, regulatory and legislative
advocacy, and technical operability.

The PLCA is a trade association representing the interests of electric utilities
interested in offering power line communications. Associate membership in the
PLCA is open to other parties who have an interest in PLC, such as equipment
manufacturers. The PLCA held its first industry conference on Dec. 12-13, 2001.
The founding membership of the PLCA includes electric utilities that collectively
serve more than 9 million U.S. households and more than 27 million households
globally.

Worldwide, the PLCforum is the leading BPL association. It represents the interests of manufacturers, energy utilities and other organizations (universities, other PLC associations, consultants, etc.) active in the field of access and in-home PLC technologies. The PLCforum was created at the start of 2000, and its membership stands at more than 60.

In Japan, there is the PLC-J; in South America, APTEL leads BPL efforts. In Europe, a regional body called the PLC Utilities Alliance involves several major utility companies, including EDF (France), EDP (Portugal), EEF (Switzerland), EnBW (Germany), Endesa (Spain), ENEL (Italy), Iberdrola (Spain) and Unión Fenosa (Spain). This association represents more than 100 million potential users.

BPL Adoption

The industry is adopting three basic models that vary based on the amount the
utility wants to invest and the level of risk they are willing to accept.

At one end is the landlord model, where the utility company essentially rents its grid to an outside party (the tenant) for a percentage of the profits the tenant obtains. In this role, there is little investment required and low
risk. Companies exploring the landlord model will usually want free use of the
BPL infrastructure for internal utility use. The tenant will generally comply
with this request as utility-side applications use little bandwidth.

At the other end of the spectrum is the developer model. In this model, the
utility company fully funds all the capital needed to enable BPL. In this role,
there is more risk but also more opportunity to generate revenue, both from wholesaling ISP and last-mile access and from offering utility-side applications. Companies
interested in this model have often had some success with Internet offerings
and are seen to have progressive management.

The third model, combining elements of both above models, is the joint venture
model, whereby utilities and ISPs negotiate their partnerships based upon their
capabilities, appetites for investment and responsibilities. Joint ventures
often represent strategic decisions by the parties to refocus, to develop new
capabilities and position their companies in new market spaces, and are often
commitments over longer periods of time.

Conclusion

BPL is a technology that is maturing fast and is about to be deployed in 2005
by several leading U.S. utility companies. BPL has been implemented on every
continent. Europe is the leader and significant deployments in Asia are expected
in 2005. Any utility should seriously consider implementing BPL.