Public Power: Bucking the Trend

As a result of the early failures of deregulation in the United States, we have
learned two critically important lessons about the balance of private enterprise
and public interests.

First, it is now very clear that the process of deregulation is more corruptible
than the process of regulation. California alone has alleged that abusive market
practices under deregulation cost the state nearly $10 billion in electric overcharges. Some estimates exceed
$20 billion.

It is impossible to identify any failures of traditional utility regulation
that have cost consumers even a fraction of that amount. The process of deregulation
is corruptible where policy-makers abdicate their responsibility to establish
rules in the public interest, either through their own inaction or by allowing
market participants to make the rules. And the process is corruptible if regulators
don’t effectively regulate deregulation.

Second, deregulation (at least successful deregulation of existing monopolies
that delivers on its promises of lower rates, better service, and greater innovation
for consumers) is a government project. It doesn’t happen through the simple
elimination of regulatory constraints.

It is, in fact, hard work. Smart people — public servants who act in the
public interest — must design a deregulated environment. That environment
must have very clear rules. Such rules are difficult to devise because of the
unique characteristics of our industry and because smart people will push the
limits and shift the rules to their own financial advantage.

Strong and effective regulators must enforce these rules. Again these regulators
must be public servants employed by the government with a clear vision of what
is in the public interest as well as effective sanctioning powers.

The Role of FERC

In America, I believe we are beginning to understand these lessons. The Federal
Energy Regulatory Commission, for example, has become much more aggressive in
monitoring the market. It is beginning to develop rules that will provide greater
transparency of market information. It appears to want to punish those who have
abused their power in the marketplace in the past, and prevent such abuses in
the future.

It is less clear whether our Congress has learned these lessons. The United
States is very much a capitalist society. There is a very strong resistance
to government interference in business. Many still hold an almost religious
belief in the virtues of free markets totally unrestrained by government regulations.

At the outset of debates on utility deregulation, many industry observers predicted
the rapid demise of public power. It was argued that public power systems simply
couldn’t survive in a deregulated environment because they were too small to
capture economies of scale in generation and distribution, and because they
were too unsophisticated to deal with the complexities of a deregulated environment.

These predictions, of course, fit well with the ideological belief that government
had no business in the electric utility business. As it turns out, none of these
predictions has come to pass, and public power is stronger than ever.

Almost half of our 50 states enacted retail deregulation legislation. In every
case, the self-regulating, publicly owned electric utilities were allowed to
opt out of the deregulated environment. They fought for this right to chart
their own course; it was part of their tradition of local control.

Today, they and their customers are extremely glad that they prevailed. In
some states that enacted retail deregulation, most notably in California but
in others as well, the privately owned, vertically integrated electric utilities
were required to divest their generation resources. Publicly owned utilities
were exempt from this legislation and decided to retain their generation. The
wisdom of these decisions has been borne out by their experience.

California — Public and Private

California’s three private power companies sold 50 percent or more of their
generation. When the energy crisis hit the state in the summer of 2000 and continued
through the first half of 2001, the privately owned utilities had insufficient
generation to meet demand. They, and their customers, were at the mercy of the
dysfunctional wholesale market.

However, the state-restructuring law prohibited these utilities from passing
along these higher costs. (Remember, these utilities helped draft this law,
so they bear much of the blame for this provision.) As a result of this crisis,
California’s largest privately owned utility filed for bankruptcy, and the second
largest private utility was pushed to the brink.

In contrast, California’s public power systems had sufficient reserves to meet
the needs of their communities. They maintained their rates based on cost-of-service
principles. In some cases, they had surplus power that they could sell to others.
As a result, these systems today have fiercely loyal customers and are financially
sound. Their ownership of generation and their control of their own destiny
helped them weather this storm.

Figure 1: Ownership of Electric Utilities in the United States
Courtesy of: American Public Power Association

Public Power Becomes a Goal

Public power’s record of success in California has been noticed. This success
has been a catalyst for community leaders in other California cities and throughout
the country to take a new, hard look at the public power option. In fact, the
interest in public power is higher today than at any time in my 25-year career
with the APPA.

High electricity rates, poor service, and a desire for local control are motivating
factors for cities and towns to explore the public power option. Voters in Clark
County, Nevada, approved a non-binding referendum that gave the Southern Nevada
Water Authority a boost toward its goal of purchasing Nevada Power Company.
At the same time, the city of Corona, California, announced plans to take over
electricity service from Southern California Edison. The city of Portland and
surrounding counties are preparing a proposal to purchase Portland General Electric,
currently owned by the bankrupt Enron. Fifteen towns in Iowa are considering
municipalizing their electric service, as are a number of communities in Florida.

People are gaining a new appreciation for the fact that electricity is not
just another commodity. It is an essential service. More than that, it is a
local service. The way it is generated, transmitted, and distributed can be
subject to local control, and local control can provide protections against
volatility in supply and price.

There is also the recognition that publicly owned electric utilities are more
than able to manage their business affairs in a complicated and rapidly changing
environment. There is nothing anachronistic about today’s public power systems.
They have demonstrated that they can handle the challenges of today and tomorrow.

Adverse Impacts

Despite these success stories, many publicly owned electric utilities have
been adversely affected by deregulation. Most of California’s publicly owned
utilities are part of the integrated utility system within the state. Very few
operate their own control areas. Therefore, when rolling blackouts were instituted
to protect the integrity of the system statewide, they were affected even though
they had adequate generation to meet their own needs.

A few publicly owned utilities in California, and many more in the two states
to the north — Oregon and Washington — were short on generation because
they rely on hydropower, and the region was in the middle of an extended drought.
These systems were forced into transacting in a volatile and dysfunctional wholesale
market. They were forced to pay several times the actual cost of production
for power to meet their loads.

Some incurred huge debts, were required to implement rate increases of 50 percent
or more, and now must deal with these financial and public relations problems
for the next few years.

Even here, however, public power’s virtues are apparent. Those publicly owned
utilities that were adversely affected, for example Seattle City Light, acted
quickly to deal with the crisis. Local control and public support enabled the
utility to borrow money as needed and to raise rates immediately to cover costs.
The credit rating agencies recognized local control as an important attribute
of public power.

Today, these rating agencies are expressing their confidence that publicly
owned utilities are financially sound and can weather whatever storms they might
face in the future. This is a tremendously important vote of confidence in public
enterprise from America’s financial community.

Figure 2: Location of Community-Owned Electric Utilities in the United States
Courtesy of: American Public Power Association

New Generation

Yes, some industry participants and observers predicted deregulation would
be the end of public power. Indeed, many publicly owned utilities felt the best
and safest route was to become nothing more than a wires company, distributing
power to their citizen-owners but disengaging from the power supply business.
The experience in California changed all that.

Today, more and more publicly owned utilities believe that the only true hedge
against market volatility and instability is the ownership of generation. So individually
or collectively, publicly owned utilities are reevaluating their future power
supply plans, and more and more are looking at or actually developing new generation.
As a result, their presence in the market will increase, not decrease. This
is exactly the opposite of what many predicted just a few years ago.

Publicly owned utilities are a minority player in America’s electric utility
industry. We are and will remain dependent on others for our transmission needs,
and we are likely to remain dependent on others for at least a portion of our
power supply needs.

Diversity of power supply resources, some owned and some purchased under power
supply contracts, is a prudent way to manage risk, and publicly owned utilities
quite clearly are prudently managed local enterprises. However, because of our
dependence on others, including private power companies, independent power generators,
and power marketers, it is clear that we cannot isolate ourselves from the rest
of the industry.

We have a vested interest in how the industry will be restructured, because
its structure will affect our future and our ability to continue to provide
low-cost, reliable service to our customers. For that reason, we have been and
will continue to be very active participants in debates over the future of our
industry.

Disruptive Technologies

With the collapse of energy trading and the slowing of retail deregulation, energy
and utility companies are going back to the basics. They are focusing on taking
additional costs out of the business, increasing asset productivity, and shoring
up the balance sheet by selling non-core assets and exiting non-core businesses,
such as energy marketing and trading.

Meanwhile, the stock prices of energy companies are slowly recovering, having
lost more than $300 billion in market capitalization. The current focus by most
energy and utility management teams comes as no surprise. All of these actions
are necessary to position their companies for the next wave of growth through
acquisitions or ventures into unregulated activities.

The traditional focus in the regulated environment was on asset-intensive businesses
involving exploration and production, generation, transmission, pipelines, and
electric distribution. However, two disruptive technologies are gaining momentum
and acceptance: nanotechnology and the hydrogen economy.

These technologies are being promoted and explored primarily by giants from outside
the utility industry and, when fully deployed, will significantly alter the energy
and utilities business as we know it. While the full impact of these technologies
will not be realized for seven to 15 years, it needs to be assessed and acknowledged
today to facilitate long-term decision-making.

Nanotechnology is the science of building previously unfathomably small things.
It is taking pervasive computing to the atomic or subatomic level. Think of
invisible-to-the-naked-eye special-purpose sensors in everything, wirelessly
sending their status and the health of whatever they are watching to a remote
monitoring system.

This monitoring system, using artificial intelligence, watches for trends and
patterns and alerts other systems and operators to the onset of problems far
more quickly than earlier systems could detect the beginning of a system
or component failure.
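
To make the idea concrete, the following minimal sketch in Python flags readings that drift sharply from a rolling baseline. The sensor trace, window size, and threshold are illustrative assumptions, not a description of any vendor's monitoring product.

    # Minimal sketch of trend-based flagging for a stream of sensor readings.
    # The window size and threshold below are illustrative assumptions.
    from collections import deque
    from statistics import mean, stdev

    def detect_drift(readings, window=20, z_threshold=3.0):
        """Flag readings that deviate sharply from the recent rolling baseline."""
        history = deque(maxlen=window)
        alerts = []
        for i, value in enumerate(readings):
            if len(history) == window:
                baseline = mean(history)
                spread = stdev(history) or 1e-9   # guard against a flat baseline
                if abs(value - baseline) / spread > z_threshold:
                    alerts.append((i, value))
            history.append(value)
        return alerts

    # Example: a slowly warming bearing-temperature trace with a sudden excursion.
    trace = [70.0 + 0.1 * i for i in range(40)] + [95.0]
    print(detect_drift(trace))   # -> [(40, 95.0)]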

The hydrogen economy involves replacing our dependence on hydrocarbon-based
fuels with technologies that run on pure hydrogen. Hydrogen is an extremely
abundant energy carrier that can be separated from water, which covers 70 percent
of the earth's surface. Replacing hydrocarbon fuels with hydrogen is less harmful
to the atmosphere (no air pollution), supports energy independence, and could spark
the equivalent of the personal computer revolution in the generation of electricity.

In the remainder of this paper, we look at these technologies in more detail,
including who the market-makers are, when these technologies will begin to significantly
impact the energy and utilities industry, and what some of those impacts will
be.

Nanotechnology

Every day, nanotechnology advances are shrinking the size of motors, machines,
and sensors. Startup companies are commercializing these technologies, discovered
in research laboratories, by building the infrastructure to construct the tools
necessary to assemble these minuscule machines.

Traditional instrumentation and control vendors are pioneering the all-digital
sensor. This sensor is self-calibrating, automatically compensates for set-point
drift, and is self-diagnosing, meaning that it will place a trouble call to
a technician whenever problems are predicted or encountered. Original equipment
manufacturers are finding innovative ways to embed these miniature sensors in
such devices as pumps, motors, transformers, valves, circuit breakers, and compressors.

Think of the applications for this technology in the energy and utility industry:

• New instrumentation that is all-digital, self-calibrating, and set-point
compensating could send its output signal to a digital communications bus. This
will be the end of the 4-20 milliamp and 0-10 volt DC analog control circuits
that have dominated the process control business for the past century. Deployment
of this new generation of instrumentation and control will increase efficiency
by freeing technicians for more valuable activities.

• Instrumentation small enough and specifically designed to fit inside
equipment to monitor the health of the equipment itself or of specific aspects
of the materials used in a piece of equipment, such as metal, bearings, seals,
fouling, insulation, and oil. The dream of self-diagnosing equipment capable
of “calling home” when it is in need of repair can now be realized. This technology
enables the often-elusive, condition-based maintenance model that the industry
has been pursuing for years. Hitachi has been pioneering these capabilities
in its substation of the future since 1997.

• Miniature sensors deployed throughout an entire transmission, pipeline,
or distribution network. Utilities will have access to data and information
previously unavailable to them. Operations decision-making will move into a
realm previously thought unattainable. Real-time status on the integrity of
a pipeline system will be available, making compliance with the recently promulgated
“high consequence area” pipeline integrity rules significantly easier. Real-time
energized status of distribution feeders will virtually eliminate the need for
customers to report outages to the call center and will speed outage restoration.
Further, phase balancing and losses become easier to manage, improving overall
operation of the distribution feeder network.

When you extend this to the utility’s customers, machine-to-machine decision-making
becomes a reality. Imagine a world where you program your preferences into a
machine (e.g., a dishwasher or washing machine), and it looks for the least
expensive time to run while satisfying those preferences (a scheduling sketch
follows the list below). The implications for the industry as we know it today
are considerable:

• Every portion of the value chain (generation, transmission, distribution,
energy marketing and trading) must be capable of sending real-time pricing signals
to end-use machines.
• Sub-metering at the machine level may obviate the need for automatic
meter reading of the premises meter.
• Set-top boxes or gateways must be revamped to include point-of-sale register
information so that each set of transactions for the device can be captured
by an independent arbitrator.
• A new-generation customer-information system must be developed to handle
the billing and customer-care aspects of machine-to-machine decision-making.
For example, imagine a call to a customer service representative where the customer
says he did not request that the dishwasher run on-peak. As a result, he wants
credit for the difference between on- and off-peak run rates on washing dishes.
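
As a sketch of the scheduling idea referred to above, the Python fragment below picks the cheapest contiguous run window from a day of hourly price signals. The prices, cycle length, and time window are illustrative assumptions rather than any actual tariff.

    # Minimal sketch of an appliance choosing its cheapest run window from
    # hourly price signals. Prices, cycle length, and deadline are illustrative.

    def cheapest_start(prices_by_hour, run_hours, earliest, latest_finish):
        """Return (start_hour, cost) minimizing total price for a contiguous run."""
        best_start, best_cost = None, float("inf")
        for start in range(earliest, latest_finish - run_hours + 1):
            cost = sum(prices_by_hour[start:start + run_hours])
            if cost < best_cost:
                best_start, best_cost = start, cost
        return best_start, best_cost

    # Example: a two-hour dishwasher cycle allowed between hour 18 and hour 31.
    prices = [0.10] * 18 + [0.30, 0.28, 0.22, 0.15, 0.09, 0.08] + [0.08] * 8  # $/kWh
    start, cost = cheapest_start(prices, run_hours=2, earliest=18, latest_finish=31)
    print(f"Start at hour {start}; sum of hourly prices is {cost:.2f}")

In practice the appliance, meter, or gateway would also have to record what it did and when, which is exactly the transaction detail that the billing and customer-care points above presuppose.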

The Hydrogen Economy

Major automobile manufacturers have endorsed hydrogen-based fuel cells for
the propulsion system of electric cars. Several major oil companies have committed
to replacing their heavy reliance on oil with hydrogen. According to “The Futurist,”
fuel cells will be generating 90 gigawatts of power by 2007.

In his recent book, The Hydrogen Economy, Jeremy Rifkin presents
the thesis that hydrogen has the potential to end the world’s reliance on oil
and to stop global warming by dramatically reducing carbon dioxide emissions.
Rifkin further asserts that because hydrogen is so plentiful and exists everywhere
on earth, every human being could be “empowered,” leading to the first truly
democratic energy regime in history.

Iceland has made a commitment to be the first country totally powered by hydrogen
and free of dependence on fossil fuels. Everything from cars, trucks, and buses
to power plants and fishing fleets will use hydrogen as the primary fuel. Already,
automakers and oil giants are steaming toward Iceland’s shore to participate
in making this vision a reality.

EPRI has a vision of a continental SuperGrid that delivers electricity and
hydrogen in an integrated energy pipeline. The SuperGrid would use a high-capacity,
superconducting power-transmission cable cooled with liquid hydrogen produced
by advanced nuclear plants, with some hydrogen ultimately used in fuel cell
vehicles and generators.

More recently, in November 2002, the Department of Energy issued the National
Hydrogen Energy Roadmap, its path toward an enhanced and cleaner energy future
for America.

The combination of these trends could make significant contributions to the
gross national product and our lifestyles by 2010. For example, consider the
old utility view of the electric vehicle as contrasted to the new.

In the old model, the owner of a battery-powered car plugged the car into a
battery charger that represented a fairly large residential load. In the new
model, the owner of a fuel cell-powered car plugs the car into the home circuitry.
The car then becomes the non-polluting electricity source for the entire home.

The impacts from this are widespread. Transmission and distribution utilities
may become necessary only for standby and backup service instead of primary
service, resulting in significant changes to current utility business models,
including:

• The creation of stranded costs as the existing infrastructure is bypassed.
• Reliability takes on a different meaning.
• A company’s basis for recovering costs shifts from meter-based charges to more
of a fixed charge, like that of local phone companies.
• Performance-based rates are rethought.

The hydrogen economy is poised to do to the energy and utility business what
the personal computer did to the mainframe computing industry 20 years ago.
We should look to the IT industry for lessons learned for this important industry
trend, and executives should ask themselves what role their company will take
in deploying distributed generation and responding to the challenges it presents.

The Future

The technologies discussed have been around for several years. What is different
today is that major players from outside the industry's traditional suppliers and
participants are taking significant economic stakes and betting their futures on
the successful and early deployment of these technologies. Nanotechnology and the hydrogen
economy are in a dead-heat race at this time. Each will disrupt the energy and
utility industry in its own right.

Consequently, the questions that you and your company should be asking are
not, “Will this affect our company?” but rather, “How will it affect us? When
will we be impacted? And what role do we want to play?”

Wind Energy: Winds of Change

A look at the U.S. wind energy industry’s growth chart shows an upward curve that
takes off at the close of the 1990s after a decade of dormancy.

Even now, a good year may be followed by a lull, which is then followed by
a great year. Consider this: 1999 was a record year for the wind energy industry,
with 732 megawatts of new capacity installed. Growth slowed during 2000, but
soared again in 2001, when newly installed capacity reached nearly 1,700 megawatts.
After another drop-off in 2002, the American Wind Energy Association (AWEA)
is forecasting a new record for 2003 with some 2,000 megawatts of generating
capacity coming online.

The years of rapid growth are easily explained. The efficiency of wind generating
systems continues to rise. A federal wind-energy production tax credit spurred
investment. And wind is a non-polluting energy source. Some 3 million tons of
carbon dioxide and 27,000 tons of noxious pollutants would be emitted into the
air every year if the electricity generated from the new wind farms installed
in 2001 were produced from the average U.S. electricity fuel mix. While wind
supplies only about 0.3 percent of the nation’s electricity today, the AWEA
estimates that 6 percent of U.S. electricity will come from wind by 2020.

But why are there lulls in the growth pattern? In this paper, we’ll review
significant developments in the wind energy industry over the past few years.
As you’ll see, the industry’s variable pace of growth is also easy to understand
when looking at these factors.

On-Again, Off-Again Signals

A federal incentive, the wind energy production tax credit, helps bring down
wind energy’s already declining cost and is spurring development. Unfortunately,
the incentive has repeatedly been allowed to expire before being extended for
short periods, first in 1999 and again in 2001, causing the market to grow in
an on-again, off-again pattern. After its record year in 2001, the industry
lurched to a near-stop when Congress, immersed in a partisan battle over larger
economic issues, failed to extend the wind energy production tax credit before
it expired in December 2001. An extension was signed into law in March 2002.
In the interim, however, the costs of the boom-and-bust cycles caused by the
credit’s expiration were dramatically highlighted, as some $3 billion worth
of proposed projects were put on hold, and hundreds of workers were laid off
pending extension of the incentive.

This sequence of events illustrates how detrimental inconsistent policy can
be. Future policy decisions at the state, regional, and federal levels will
go a long way toward determining how quickly and to what extent the United States
begins to reap its enormous wind energy potential, even though the wind industry
is continuing to drive down the cost of wind-generated electricity, enhancing
the competitiveness and attractiveness of its product to electricity providers.

State Policies Can Make a Difference

More than half of 2001’s new capacity came in a single state, Texas, where
915 megawatts of capacity were installed. That’s more than had ever been installed
in the entire country in any prior year. The remarkable total resulted from
the successful combination of that state’s renewables portfolio standard (RPS)
with the federal tax credit. The Texas RPS was passed in 1999 after “deliberative
polling” by utilities revealed widespread support for renewable energy throughout
the state and from all consumer types (including commercial and industrial customers).
It requires the state’s utilities to acquire 2,000 megawatts of new renewable
generating capacity by Jan. 1, 2009.

The first stage of the law required 400 megawatts of renewable capacity by
Jan. 1, 2003. Instead, 915 megawatts of wind alone — more than double the
required amount for all renewable energy technologies — was in place a
year early. Bids received in response to utility procurement auctions required
by the RPS proved to be cost-competitive with other utility resource options,
and voluntary “overcompliance” resulted.

In 2001, Texas showed us what an effective RPS can do when it is combined with
a production tax credit. The result of those two policies was explosive growth
in wind, far surpassing anything this country has previously seen in the more
than 20 years since wind energy got its start. Clearly, we have found a policy
combination that works incredibly well and should be expanded to the national
level with a federal RPS.

What Happened in California?

Institutional challenges remain. One is in California, scene of a crisis that
saw rolling blackouts and stunningly high power prices. What first appeared
to be a favorable scenario for wind, an abundant resource in California and
neighboring states, quickly turned into a nightmare as electricity prices soared
and state agencies frantically began signing high-cost, long-term contracts
for new natural gas generation, while refusing to buy electricity from lower-cost
“non-firm” (variable) generators like wind plants.

The California Energy Commission has been proceeding for some time on a track
of inviting proposals and awarding cash incentives for new wind and other renewable
projects, using funds collected from a charge on ratepayers. But because of
California utilities’ financial problems and the gridlock in new state procurements
while California swallows the long-term contracts bought at the peak of the
crisis, the vast majority of proposed wind projects — now totaling over
1,500 megawatts in capacity — have never been built, and there is no indication
when they will be. Only a few wind projects are moving forward in the state.

An RPS adopted this year by the state calling for 20 percent of electricity
to come from renewable sources may provide the market with a new impetus. But,
in contrast to the simple, market-friendly, and effective regulations put in
place in Texas to implement that state’s RPS, concerns cloud the implementation
of California’s convoluted RPS legislation. As Arthur O’Donnell, editor of “California
Energy Markets,” observed:

“The many changes in the [California] RPS bills nearly all served to defer
utility commitments and reduce the likelihood of a vibrant green market, as
cost-causing provisions collide against cost-limiting proscriptions. … For
the program to succeed will take a commitment to working together that has
been sorely lacking of late.”

Figure 1: The wind energy market is taking off in the United States.
Source: AWEA

Transmission Is Major Challenge

In order to deliver large amounts of wind power from wind-rich regions such
as the Great Plains to electricity-hungry markets in other parts of the country,
additional transmission lines and upgrades will be needed. Lack of transmission
capacity is already impeding the development of significant amounts of wind
power in the Dakotas. By contrast, Texas, anticipating continued investments
in wind energy, is preparing to expand transmission capacity from western areas
of the state where the best wind resource lies, to load centers further east.
As regional transmission organizations and other relevant entities assess infrastructure
needs across the country, they must realize that expansion plans should include
“high wind” scenarios that could benefit electricity consumers, rural landowners
looking for economic development, and the environmental public interest. Failure
to take wind’s potential contribution into account in the planning process could
shut out development prospects in the windiest parts of the country.

The wind industry also faces a serious hurdle in gaining access to the utility
transmission system for wind-generated electricity at a reasonable price so
that it can be profitably sold in the wholesale electricity market. In the past,
most wind projects sold their output directly to the local utility. But as restructuring
“unbundles” the vertically integrated electric utility industry monopoly, utilities
are required to make their transmission facilities available to all generators
under “open access” fee structures, or “tariffs,” that presume generators will
precisely schedule their transmission usage in advance and precisely control
their output to match those schedules.

These open-access tariffs impose penalties of as much as 2.5 to 3.5 cents per
kilowatt-hour on wind generators for use of the transmission system in addition
to normal transmission access fees. Such a penalty can double the wholesale
cost of wind-generated electricity. The high penalties are exacted because of
wind’s variable operation and the fact that a wind plant cannot guarantee delivery
of a certain amount of electricity at a given scheduled time and date. The penalties
are not based on actual costs that a failure to deliver may impose on the system,
but are self-described punitive penalties to enforce “good behavior” on large
generators who can precisely control their output and have been known to do
so in ways designed to raise their own profits at the expense of others.

As control of the interstate transmission grid evolves toward transmission-specific
entities — independent system operators — and as the Federal Energy
Regulatory Commission works to update its early version of an industry standard
tariff, the impacts of these penalties are being reassessed. The New York ISO
and ERCOT in Texas have special rules for “as available” resources like wind
that exempt them from these penalties. The PJM ISO in the mid-Atlantic region
has a different market design that can accommodate the variable output of wind
projects without penalties.

 

Table 1: Leading States in Wind Capacity (Through 2001)
Table 2: Largest Wind Farms Operating in the United States

 

An agreement has also been reached that will reduce penalties for wind plants
on the California ISO system. However, penalties are still a problem in important
and windy areas of the country such as the Pacific Northwest and the Midwest.
In the Northwest, for example, development of some 830 megawatts of projects
($800 million worth) sought by the Bonneville Power Administration has been
stalled by the penalty issue. Some of the most discriminatory penalties have
recently been removed in that region, and continued, if slow, progress is expected
on this issue.

Attracting Large Players

The wind energy market’s performance is catching the interest of some large
corporations. In February 2002, GE Power Systems acquired Enron Wind Corporation,
the largest remaining domestic manufacturer of commercial utility-scale wind
turbines.

“The acquisition of Enron Wind represents GE Power Systems’ initial investment
into renewable wind power, one of the fastest-growing energy sectors,” said
John Rice, president and CEO of GE Power Systems. The wind energy industry,
the firm said, is expected to grow at an annual rate of about 20 percent, with
principal markets in Europe, the United States, and Latin America. For the parent
company, General Electric, the move was a return to the wind business after
an absence of nearly 20 years. GE’s aerospace division was a major contractor
of research-oriented wind turbines for the U.S. Department of Energy in the
early 1980s.

Dallas-based TXU, already one of the largest purchasers of wind power in the
United States, unveiled in October plans to purchase the power from a large
new wind farm to be built in western Texas. American Electric Power is positioning
itself as a major player in the market through AEP Energy Services, which has
developed a 150-megawatt wind farm and purchased another, both in Texas. FPL
Energy, a subsidiary of FPL Group and the largest wind energy developer and
operator in the United States, has emphasized the growth of wind energy.

Vibrant Market for Small Systems

Consumer interest in small wind systems for homes and small businesses surged
following concerns about high rates and brownouts during the California crisis
and resulted in healthy sales even as the crisis abated later in the year.

The market for small wind systems (less than 100-kilowatt capacity) is also
expanding as a result of policies adopted in a growing number of states. California
and Illinois run well-established rebate programs that help reduce the high
upfront cost of a wind system, and New York, New Jersey, Delaware, and Rhode
Island have followed suit. Also in California, the small wind turbine industry
welcomed a law enacted in 2001 providing relief from overly restrictive local
zoning ordinances on tower height and installations.

Net metering, a policy under which the owner of a small wind or other renewable
energy system is allowed to spin his or her electricity meter backwards if the
system is generating more power than is being consumed, has been adopted in
more than 30 states. Such policies, along with simplified, standardized interconnection
rules, reduce the expense and time required to install a small wind or photovoltaic
system for a home or business.
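
A minimal sketch of how a monthly net-metering settlement might look, assuming a flat retail rate and full retail credit for excess generation; actual state rules vary in how excess energy is credited or carried over.

    # Minimal net-metering settlement sketch. The flat rate and full retail
    # credit for excess generation are simplifying assumptions; state programs
    # differ in how excess kWh are credited or carried forward.

    def net_meter_bill(consumption_kwh, generation_kwh, rate_per_kwh=0.10):
        net_kwh = consumption_kwh - generation_kwh
        if net_kwh >= 0:
            return net_kwh * rate_per_kwh, 0.0    # customer pays for net usage only
        return 0.0, -net_kwh * rate_per_kwh       # excess becomes a credit

    bill, credit = net_meter_bill(consumption_kwh=850, generation_kwh=600)
    print(f"Bill: ${bill:.2f}, carried credit: ${credit:.2f}")   # Bill: $25.00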

The industry recommends a federal tax incentive for small wind systems to help
reduce their high upfront cost, increase sales, help manufacturers to increase
volume, and thus lower costs even further.

Exports of small wind turbines — used in village power, off-grid, water-pumping,
and other applications — have declined in recent years due to fluctuations
in the U.S. dollar, reductions in support from the U.S. Agency for International
Development for renewable energy project activity, and increased bilateral aid
by competing export countries.

Optimistic Long-Term Outlook

On balance, although the institutional issues noted above remain of concern,
there is strong optimism about the wind industry’s long-term future.

Some 10 billion kwh of electricity were generated from wind in 2002 in the
United States, enough to power about 1 million households without emissions
of carbon dioxide and other pollutants. The average electricity fuel mix would
produce about 7.5 million tons of carbon dioxide while producing that much power.
It would take a forest of 4,000 square miles to absorb that much carbon dioxide.
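
The figures above imply an average emission rate of roughly 0.75 kg of CO2 per kWh for the U.S. fuel mix; the quick check below simply back-calculates from the numbers cited and is not an official emission factor.

    # Rough consistency check of the figures cited above. The 0.75 kg CO2/kWh
    # rate is inferred from the text, not an official emission factor.
    wind_generation_kwh = 10e9        # ~10 billion kWh of wind generation in 2002
    avg_emission_kg_per_kwh = 0.75    # implied average U.S. fuel-mix emission rate
    co2_avoided_tons = wind_generation_kwh * avg_emission_kg_per_kwh / 1000
    print(f"{co2_avoided_tons / 1e6:.1f} million tons of CO2 avoided")   # ~7.5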

Federal estimates place the nation’s wind energy potential at 10.7 trillion
kwh annually — more than twice the electricity generated in the United
States today.

A single megawatt-scale wind turbine provides $2,000 a year or more for landowners
who lease their wind rights. Farmers can grow crops right up to the base of
the turbines. Wind energy’s stable and increasingly competitive cost is an attractive
factor for utilities seeking to diversify their energy portfolio.

Even while the level of new construction slowed during 2002, the AWEA remains
confident that wind power’s future prospects are strong. Our forecast for total
wind energy production by 2020 is 6 percent of U.S. electricity supply —
about 20 times the current level.

Carbon Dioxide Strategies

The Kyoto Protocol would have required the United States to commit to a 7 percent
reduction in carbon emissions from the 1990 level by 2010. This paper will examine
the strategies, technologies, and costs that could be employed to reduce emissions
within the power sector.

According to the Energy Information Administration (EIA), the actual emissions
from the power sector in 1990 were 507 million metric tons (mmt) of carbon,
or 1,859 mmt of CO2. The estimated 2000 emissions were 2,352.5 mmt of CO2.

Using the EIA’s projected power growth rate of 1.7 percent per year, and assuming
most new generation plants are natural gas-fired, the estimated 2010 emissions are
712 mmt of carbon, or 2,603 mmt of CO2.

Compliance with Kyoto would require the power sector, which represents about
a third of the total U.S. carbon emissions, to limit emissions to 472 mmt of
carbon, thus yielding a reduction of 240 mmt by 2010.
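
The arithmetic behind the 240 mmt figure is straightforward; the sketch below reproduces it using the standard 44/12 mass ratio to convert carbon to CO2.

    # Worked arithmetic behind the reduction target described above.
    CO2_PER_CARBON = 44.0 / 12.0          # mass ratio of CO2 to elemental carbon

    emissions_1990_c = 507                # mmt of carbon (power sector, EIA)
    emissions_2010_c = 712                # mmt of carbon (projected for 2010)

    kyoto_cap_c = emissions_1990_c * (1 - 0.07)        # 7 percent below 1990
    reduction_needed = emissions_2010_c - kyoto_cap_c

    print(f"1990 emissions: {emissions_1990_c * CO2_PER_CARBON:.0f} mmt CO2")  # ~1,859
    print(f"Kyoto cap: {kyoto_cap_c:.0f} mmt carbon")                          # ~472
    print(f"Reduction needed by 2010: {reduction_needed:.0f} mmt carbon")      # ~240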

It is a daunting challenge. Achieving that result would be equal to replacing
one-third of existing coal-fired plants with a zero-CO2 emission technology
such as nuclear.

We’ll look at a variety of approaches and their costs, including plant replacement
and sequestration.

New Plants

Aging coal-fired power plants could be replaced with new, cleaner plants that
emit less or no CO2, including:

• Pulverized-coal-fired plants with a supercritical steam cycle (PCSC)
• Natural-gas-fired combined cycle plants (NGCC)
• Integrated coal-gasification combined cycle plants (IGCC)
• Nuclear power plants

CO2 scrubbing can be done using post-combustion or pre-combustion technologies.
A pulverized-coal plant would require post-combustion scrubbing. Natural gas
and coal-gasification plants would require pre-combustion scrubbing.

Post-combustion scrubbing of coal-fired boiler flue gas has been performed on a small
scale. In this technology, the flue gas is cooled by water spraying and then
treated in an amine scrubber, where CO2 is stripped. The CO2 collected from
the amine regenerator is compressed and liquefied for export.

Pre-combustion scrubbing can be implemented on a natural gas plant or a coal
gasification plant before the fuel is combusted in the gas turbine. In this
technology, fuel gas is converted to hydrogen and carbon dioxide through steam
reforming (natural gas only) and water shift conversion. Carbon dioxide is then
stripped out by an acid gas removal system (such as amine scrubbing), and hydrogen
is left as the fuel for the gas turbine.
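
For reference, the conversion described above can be summarized by two well-known reactions, shown here in simplified form (for coal gasification, the syngas already contains the CO that feeds the shift step):

    CH4 + H2O  ->  CO  + 3 H2    (steam reforming, natural gas only)
    CO  + H2O  ->  CO2 +   H2    (water-gas shift conversion)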

Slack Demand

Industrial and commercial uses of CO2 are limited compared with the amount
emitted from power plants. The total industrial use of CO2 in the United States
is less than 50 million tons per year, which is less than 1 percent of the approximately
6 billion tons emitted annually from fossil fuel combustion. The largest single
use of CO2 is for enhanced oil recovery (EOR); other uses of CO2 include carbonated
drinks, freeze drying, and food packaging, but these usage rates are minimal.

Potential industrial uses of CO2 are also small compared with man-caused emissions
of CO2. Assuming that CO2 is substituted for fossil fuel feedstock in all plastics
production in the United States (there is no suggestion that this is feasible),
less than 200 million tons of CO2 per year would be required, or about 3 percent
of the total CO2 produced in the United States.

Use of CO2 in fuel making (converting syngas to methanol, for example) has
been suggested. However, methanol or its derivative, MTBE, is not considered
a suitable fuel for motor vehicles. Another potential industrial use involves
creating urea by combining ammonia with carbon dioxide. But this use, too, has
a very limited potential because the annual amount of fertilizers produced is
two orders of magnitude smaller than the amount of CO2 available annually. A
thorough review of beneficial uses of CO2 shows that its potential is very limited.

Figure 1: Economic Analysis
Notes
(1) For each technology, the plant output for cases with CO2 removal is lower
than output without CO2 removal by the amount of energy used for CO2 scrubbing.
(2) For plants without CO2 removal, the number shown is the amount of
CO2 avoided by replacing an existing subcritical plant emitting 850 kg of CO2
per megawatt-hour. For plants with CO2 removal, the amount shown is based on 90 percent
removal on a total tonnage basis, adjusted for reduced net plant output. As
an example, the PCSC plant without CO2 removal emits 800 kg/MWh, so the
CO2 avoided is 50 kg/MWh (850 - 800). However, for the same plant with 90 percent
CO2 removal, the CO2 removed is 960 kg/MWh [0.90 x 800 kg/MWh divided by
(600 MW/800 MW)].
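
The example in note (2) can be reproduced directly; the 800 MW gross and 600 MW net outputs used below are the values implied by the note itself.

    # Reproduces the example in note (2) for the PCSC case.
    baseline_emission = 850    # kg CO2/MWh, existing subcritical plant replaced
    pcsc_emission = 800        # kg CO2/MWh, new PCSC plant without CO2 removal
    gross_output_mw = 800      # PCSC output without scrubbing (from the note)
    net_output_mw = 600        # PCSC output with the scrubbing energy penalty

    avoided_without_removal = baseline_emission - pcsc_emission               # 50
    removed_with_scrubbing = 0.90 * pcsc_emission / (net_output_mw / gross_output_mw)

    print(avoided_without_removal, round(removed_with_scrubbing))             # 50 960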

Sequestration

CO2 sequestration can be accomplished either by an offset (indirect sequestration)
or by reduction in emissions from the generation facility (direct sequestration).

Indirect sequestration can be done by removing CO2 from the atmosphere by terrestrial
uptake (e.g., sequestering CO2 in soil, forests, and other vegetation) or by
ocean uptake (e.g., ocean fertilization). Direct sequestration requires separation
and capture of CO2 emissions from the generation facility, transportation, and
long-term storage in geological formations or in the ocean.

Carbon dioxide can be compressed and/or liquefied as necessary for export.
For long-distance hauling, it is desirable to liquefy the gas; liquid CO2 can
be transported at relatively low pressure in insulated tank trucks at -30°C.

Artificial photosynthesis is another approach for CO2 fixation under ambient
conditions. Microalgae systems present the best biological technology for the
direct capture and utilization of CO2 emitted by power plants. These microscopic
plants would be grown in large, open ponds, wherein power plant flue gas or
pure CO2 would be introduced as small bubbles.

The estimated mitigation cost for this type of process can be as high as $100
per metric ton of CO2 recycled. A pond of about 50 to 100 square kilometers
(12,250 to 24,500 acres) would be needed for a 500 megawatt power plant. The
biomass would be harvested as a fossil fuel replacement. Considerable opportunities
exist for improving these systems.

The estimated cost of afforestation — using forests to absorb the gas
— is in the range of $3 to $10 per metric ton of CO2. This cost does not
include land or forest management.

Isolation

Direct CO2 sequestration technologies include:

• Injection into oil and gas reservoirs
• Injection into deep, unmineable coal seams
• Injection into saline aquifers
• Injection of liquid CO2 into the deep ocean

Geological formations such as oil and gas fields, coal beds, and saline aquifers
are likely to provide the first large-scale opportunity for CO2 sequestration.
During 1998, oil field operations pumped about 43 million tons of CO2 into about
70 wells for oil recovery. Many additional candidates exist for EOR. CO2 disposal
in depleted oil and gas reservoirs is possible today, but the ability to dispose
of the extremely large quantities that may be required is uncertain.

A much greater opportunity for storing large volumes of injected CO2 lies
in geologic formations. Confined saline aquifers, rock caverns, and salt domes
have the potential to sequester millions and possibly billions of tons of CO2,
but many technical, safety, liability, economic, and environmental issues remain
unresolved.

Disposal of CO2 in deep, unmineable coal beds may offer the advantage of displacing
methane from the coal beds for enhanced coal bed methane recovery. A pilot program
of CO2-assisted coal bed methane production has been in operation since 1996.
Four million cubic feet of pipeline-fed CO2 from a natural source is
injected daily. Full field development of the process could boost recovery of
in-place methane by 75 percent.

Sequestration in deep saline formations or in oil and gas reservoirs can be
achieved by a combination of three mechanisms: displacement of the natural fluids
by CO2, dissolution of CO2 into the fluids, and chemical reaction of CO2 with
minerals present in the formation to form stable compounds like carbonates.

Deep aquifers may be the best long-term underground storage option. Such aquifers
are generally saline and are separated hydraulically from more shallow aquifers
and surface water supplies used for drinking water. The spatial match between
storage locations and CO2 sources is better for deep aquifers than for gas and
oil reservoirs. A study has estimated that 65 percent of CO2 captured from U.S.
power plants can be injected into deep aquifers without the need for long pipelines.

The ocean holds an estimated 40,000 gigatons of carbon (compared with 750 gigatons
in the atmosphere) and is considered to be a potential sink for very large-scale
CO2 sequestration. Two basic methods have been proposed for ocean sequestration: direct
injection of CO2 into the deep ocean; and indirect sequestration in which the
net natural CO2 uptake is increased via the use of micronutrients in areas of
the ocean where CO2 could be absorbed by an increase in plankton growth.

For ocean sequestration to be effective, the CO2 must be injected below the
thermocline, the layer of ocean between approximately 100 and 1,000 meters,
in which temperatures decrease dramatically with depth. The thermocline is a
stable stratification; its large temperature and density gradients inhibit
vertical mixing and slow the leakage of CO2. The water beneath
the thermocline may take centuries to mix with surface waters, and CO2 below
the boundary will be effectively trapped. CO2 could be dissolved at moderate
depths (1,000 to 2,000 meters) to form a dilute solution or could be injected
below 3,000 meters (the depth at which CO2 becomes negatively buoyant) to form
a CO2 lake.

Subsea pipelines to 1,000-meter depths are expected to cost on the order of
$1.2 million per kilometer or more. As the material presented above indicates,
considerable research and development will be needed to find cost-effective
ways to sequester CO2; otherwise, sequestration will remain a technology barrier
to implementing CO2 removal technologies.

Direct CO2 sequestration cost estimates vary from as low as $5 to higher than
$50 per metric ton because of limited industry operating experience. Cost varies
with the volume to be disposed of, distance to the sequestration point, and
sequestration technology.

CO2 sequestration in deep wells appears to be a feasible technology. The cost
of transportation and injection is estimated at $10 per metric ton. However,
if some other disposal method is used, such as fixing the CO2 in carbonate material
for disposal, the cost of sequestration is estimated at $30 per metric ton.
For this paper, a sequestration cost of $20 per metric ton has been assumed.

Economics

Table 1 shows assumed values of the major technical parameters (plant size,
heat rate, capital cost, and capacity factor) for the new plants under consideration,
both without and with CO2 removal and sequestration. The major financial assumptions
are:

• Plant economic life is 30 years
• Discount rate is 10 percent
• Equity share is 30 percent
• Debt tenor is 18 years
• Interest on debt is 8 percent
• General inflation is 2.5 percent per year
• Return on equity (after tax) is 16 percent
• First-year COE is stated in 2002 dollars
• Fuel cost is $1.19/GJ ($1.25/million Btu) for coal, $3.80/GJ ($4.0/million
Btu) for gas, and $0.47/GJ ($0.50/million Btu) for nuclear

Other plant data, such as construction duration and plant operations and maintenance
cost, are per industry norms. Note that many of these variables are specific to
site, fuel, ambient conditions, customer, and dispatch, so the assumed numbers can
vary by 10 to 15 percent or more.
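
To illustrate how such assumptions feed a first-year cost of electricity, the sketch below uses a simple capital recovery factor at the 10 percent discount rate rather than the full debt/equity, tax, and inflation model implied above, and the plant inputs are placeholders since Table 1 is not reproduced here.

    # Simplified first-year COE sketch. A single capital recovery factor stands
    # in for the full project-finance model, and the inputs are placeholders.

    def first_year_coe(capital_per_kw, heat_rate_gj_per_mwh, fuel_per_gj,
                       fixed_om_per_kw_yr, capacity_factor,
                       discount_rate=0.10, life_years=30):
        crf = discount_rate / (1 - (1 + discount_rate) ** -life_years)
        mwh_per_kw_yr = 8760 * capacity_factor / 1000     # MWh generated per kW-year
        capital = capital_per_kw * crf / mwh_per_kw_yr    # $/MWh
        fixed_om = fixed_om_per_kw_yr / mwh_per_kw_yr     # $/MWh
        fuel = heat_rate_gj_per_mwh * fuel_per_gj         # $/MWh
        return capital + fixed_om + fuel

    # Placeholder gas-plant inputs: $550/kW, 7.0 GJ/MWh, gas at $3.80/GJ,
    # $15/kW-yr fixed O&M, 85 percent capacity factor.
    print(f"{first_year_coe(550, 7.0, 3.80, 15, 0.85):.1f} $/MWh")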

As stated earlier, nuclear — from a CO2 emissions reduction viewpoint
— is ideal. Public acceptance, however, continues to be a barrier for nuclear.

Kyoto Protocol versus Bush Initiative

The United States has the lowest electricity rates among the developed countries,
predominantly due to inexpensive fossil-fuel-based generation. Because of the
strong link (which may be loosening if not already broken) between economic
growth and electricity demand (and hence greenhouse gas emissions), a rapid
reduction in emissions would be costly and would threaten U.S. economic growth.

Consequently, in February 2002 U.S. President George W. Bush announced a new
initiative, which would reduce the greenhouse gas (GHG) intensity — defined
as metric tons of carbon (mtc) emissions per million dollars of gross domestic
product — by 18 percent in the next 10 years.

In 2002, the GHG intensity stands at 183 mtc per million dollars of GDP. Based
on the current GDP of $10 trillion, total emissions are 1,830 million mtc. Using
the projected GDP growth of 2.9 percent per year, the GDP in 2010 would be $12.55
trillion. With the Bush initiative's average reduction in GHG intensity of 1.8
percent per year, the target intensity in 2010 would be 157.4 mtc per million
dollars (prorated from 151 mtc in 2012). This equals total emissions of 1,975
million mtc in the year 2010.
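
The intensity figures above follow from simple arithmetic; the sketch below reproduces them, taking the 151 mtc intensity for 2012 and the straight-line proration to 2010 from the text.

    # Arithmetic behind the Bush-initiative figures cited above.
    intensity_2002 = 183.0           # mtc per million dollars of GDP
    gdp_2002 = 10e6                  # million dollars ($10 trillion)

    gdp_2010 = gdp_2002 * 1.029 ** 8                  # 2.9 percent annual growth
    intensity_2012 = 151.0                            # after the 18 percent cut
    intensity_2010 = intensity_2002 - (intensity_2002 - intensity_2012) * 8 / 10

    emissions_2010 = intensity_2010 * gdp_2010 / 1e6  # million mtc

    print(f"2010 GDP: ${gdp_2010 / 1e6:.2f} trillion")                   # ~12.6
    print(f"2010 intensity target: {intensity_2010:.1f} mtc/$ million")  # 157.4
    print(f"2010 total emissions: {emissions_2010:.0f} million mtc")     # close to the 1,975 cited above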

Since the power sector accounts for one-third of total emissions, the power
sector CO2 reduction goal by 2010 is 48 million mtc. This goal is 20 percent
of the 240 million mtc reduction under the Kyoto protocol.

Another salient difference of the Bush initiative is that stabilization and,
ultimately, reversal of GHG emissions are anticipated to occur in 25 to 30 years
as opposed to 12 to 15 years under Kyoto.

As expected, the increase in COE is much lower under the Bush initiative than
under the Kyoto Protocol. This is because the bulk of existing coal-fired generation
(which is responsible for lower COE in the United States) is still maintained.

Conclusions

• Strictly from a capacity-addition viewpoint, with the assumed prices of
$1.19/GJ ($1.25/MMBtu) for coal and $3.80/GJ ($4/MMBtu) for gas, a pulverized
coal plant is competitive with a natural gas plant. For a coal gasification plant
and nuclear to be competitive, the gas price must increase beyond $4.4/GJ ($4.6/MMBtu).

• Removal and direct sequestration of CO2 are very expensive, especially
for a pulverized coal plant technology. Considerable R&D is needed to develop
cost-effective technologies. Until then, indirect sequestration is a more pragmatic
approach for the United States.

• If CO2 removal and sequestration become a priority, nuclear and other
CO2 neutral technologies become attractive options.

• The Bush initiative, while less burdensome for the U.S. economy, will
take about 15 years longer than the Kyoto Protocol to stabilize and eventually
reverse the GHG emissions trend.

The Big Shift Change

Let us look at one situation with two very different endings. Reliable operations,
employee safety, and corporate profitability are all at stake. This is not a dinner
theater gimmick, but rather a way to illustrate a utility industry predicament
to which immediate attention must be paid at the highest levels of utility management
to avoid serious problems down the line.

Scenario One: Jim Burton

A fire breaks out in the generator at one of your small generating units. Alarms
go off in the control room indicating that a fire is in progress. Jim Burton,
an operator on duty during that shift, springs to action. He carefully studies
the control panel for a minute to see if there are any signals lit that would
indicate other problems; there are none. Having recently received training on
control room procedures, he consults the emergency procedure manual appropriate
to the situation.

After a few minutes, Jim has the affected systems shut down and isolated as
required. The fire squad is called and arrives at the scene of a spectacular
fire. After spending hours extinguishing the fire and inspecting the damage,
the fire squad radios back to the control room that the fire is out but the
unit is a total loss. By the way, they ask, why wasn’t the fire suppression
system activated before they arrived?

Scenario Two: Rick Wise

A fire breaks out in the generator at one of your small generating units. Alarms
go off in the control room indicating that a fire is in progress. Rick Wise,
an operator on duty during that shift, springs to action. He glances at the
control panels with which he has grown so familiar over the years to find, to
his dismay, that the lights that would indicate that the fire suppression system
has come on are not lit.

He instinctively activates it manually, shuts down the appropriate systems
carefully using the emergency procedure manual appropriate to the situation,
and notifies the fire squad. When the fire squad arrives, they quickly move
to finish the job of putting out the fire. The damage is done, but it is fairly
localized, thanks to Rick’s quick reaction.

The Critical Difference

In our scenarios, both Rick and Jim are bright, capable individuals who have
great pride in and commitment to their job. Both Rick and Jim have received
the mandated levels of training for their job and the proper certifications.
The difference is that in Scenario Two, Rick is a 56-year-old veteran with more
than 25 years’ experience at the plant. Plant operations are practically a reflex
for him now, and years of exposure to various situations have prepared him well
for the fire that broke out.

In Scenario One, Jim is a 27-year-old operator who was brought in to fill Rick’s
position when he opted for early retirement, and he had three months of experience
when the fire broke out. He knew procedurally what to do, but didn’t have the
instincts to diagnose the situation correctly. He looked for lights that would
indicate additional problems, but didn’t realize that the failure of a light
to be on was an indication of a serious problem.

Under a great deal of stress and having never yet experienced a serious problem
at the unit, he made what human reliability experts call an error of omission
— inadvertently skipping a critical step in a written procedure. He had
scanned the emergency procedures and saw the instruction “Check to see if fire
suppression system has been activated,” and assumed it had been because there
was no warning light to indicate that it had not been activated.

In his rush to make the situation safe, he failed to read the statement directly
beneath it, which clearly said “Indicators 355A and 355B should be lit; otherwise,
manually activate fire suppression system using switch directly under indicators.”
Human reliability experts in the nuclear power industry estimate that a novice
operator under high stress is 10 to 50 times more likely to make an error of
this type. Here, the results were devastating and costly.

The Big Shift Change

Right now, many operations, maintenance, administrative, executive, and finance
positions are probably held by people like Rick Wise — sharp functional
experts who have a crystal-clear view of how the world works in the context of
their specific utility. But by 2010, people like Jim Burton will probably be the
norm within your organization at all levels (if you can find these people at all).

A massive transfer of plant and job-specific knowledge out of the industry
is slowly but steadily occurring, and it will take years to rebuild that expertise.
At the same time, the pool of technical and engineering talent available to
fill critical jobs is quietly shrinking, meaning filling slots opened by retirements
is becoming increasingly difficult. These facts may already be having an impact
on safety, reliability, and profitability throughout the industry, in any of
the following ways:

• Increases in the duration of planned outages as the new hires gradually
build the expertise and efficiency in their jobs that the current workforce
possesses.

• Increased frequencies of forced outages and accidents caused by human
error as highly experienced operators retire, since human error rates would
be expected to significantly decrease with experience.

• Falling productivity in areas of maintenance or operations that require
physical strength, agility, and durability. As a transmission company executive
succinctly points out, “Most 47-year-old linemen simply can’t perform the same
work as easily as a 27-year-old one.” Also, when older workers get hurt, their
work time lost in recovery is much greater; from ages 19 to 29, the average
days lost is 10.4, but the average days lost for those age 50 to 59 is 47.5,
according to Stephen G. Minter’s recent report in Occupational Hazards.

The problem is particularly acute in areas such as transmission and distribution
and nuclear generation, where highly specialized skills are required, and the
statistics and demographics highlighting the current situation set off more
alarm bells than a generator fire. The average age of the workforce at nuclear
power plants is approximately 47, according to various sources.

Estimates of potential retirements over the next five to 10 years in some parts
of the utility industry are as high as 50 percent for key operational, engineering,
and technical positions, and as high as 75 percent for key management posts.

What’s worse, the well of potential replacements for these workers is disturbingly
dry. Colleges and universities have seen a 50 percent decline in the number
of graduating engineers over the past 15 years. On top of that, the utility
industry is not one to which young and mid-career technical and managerial employees
are flocking, in part because the industry is seen as neither high-tech nor as
fast-growing as many other fields, and also because utilities tend to lag in
compensation.

For example, information from the 2001 IEEE Salary Survey shows that the median
annual income for an engineer in the energy delivery industry is nearly 20 percent
less than the median income for electrical engineers in their area of technical
expertise, and more than one-third less than that of the most highly compensated engineers.

If that scenario is scary enough to make you consider taking your own company’s
early retirement plan and getting out before the situation becomes unmanageable
(personally exacerbating the flight of expertise out of the organization), you’re
not alone. However, it’s not too late to begin taking steps to mitigate the
problem and oversee a smooth, safe, and profitable change over the next decade.
The key is to make succession planning an area of strategic focus over the next
several years.

Watch the Holes

One of the authors of this paper is a veteran of navigating the crowded streets
and highways of Boston, where the road signs have safety messages like “Death
Before Yielding” and “Please Decrease Speed After Hitting Pedestrians.” This
author has, while driving these streets, explained to many a dashboard-gripping
passenger that the key to making progress while driving on these roads is to
focus more on the holes that develop between the cars than on the cars themselves.

This allows the driver to make the move forward in the safest, most efficient
manner when the opportunity presents itself. Similarly, the utility executive
will need to put a lot of focus on the “holes” which will open in the organization
as retirements accelerate. The time to do this is now, not once the retirement
parties are under way. The most effective way to do this is to develop a formal
succession planning process that looks several years forward in a structured,
strategic way.

Succession planning has been traditionally defined as a process by which one
or more successors are identified for key positions, with career moves and development
carefully planned for these successors. It has traditionally been used in the
context of positions in the executive suite, but with the impending exodus of
technical expertise, it will require a much broader scope of execution across
a much broader segment of the workforce.

Some of the elements of the type of succession planning process necessary to
meet today’s urgent needs will include:

• Well-defined plans for recruiting new staff, particularly those just
starting their careers
• Forecasts of vacancies in the organizational structure as senior staff
at all levels retire
• Identification of successors to current staff at all levels
• Plans for documenting and transferring the existing knowledge base
• Development of formal programs for identifying, mentoring, and rewarding
high-potential employees, providing them with training and clear development
plans, and closely tracking program effectiveness with measurable metrics
• Definition of strategic skills needed to accomplish current operations
as well as future initiatives, how they will be kept in-house if they are already
in place, and how they will be obtained if they are not in place at the levels
needed.

The job of succession planning across the enterprise has often been left to
the human resources department in isolation, but to stave off the impending
crisis, HR, the leaders of each business unit, and the executives at the highest
level of the utility must cooperate to formulate a plan to strategically attack
the “Big Shift Change.”

A cross-organizational leadership team that willingly, openly, and honestly
discusses the capabilities and performance of all staff and develops a career
and succession plan for identifiable high-potential individuals across all segments
of the organization, without regard to functional ownership, will ultimately
develop a succession plan fully capable of keeping critical knowledge capital
within the organization and effectively documenting it and transferring it across
the company.

Engaging in turf wars over high potential staff, failing to identify the key
skills needed to meet the organization’s strategic plans, or failing to pass
knowledge on to the next generation could impose huge costs on a company in
the long term and even, as we have seen above, in the very near term.

The Next Wave

In the past 15 years or so, careers with utilities have not been high-profile
targets of large numbers of college students in the way that, for example, high
tech, biotech, and financial services have. But now that the dot-com dust has
settled and utilities have become more competitive, a creative HR department
can position a utility career as offering significant opportunities for advancement
and an exciting career path.

On the technical and professional side of the company, college internship programs
are often excellent ways to introduce the young workforce to the utility world.
In a tight job market, internship positions will be targeted more and more by
practical undergraduates looking to get a leg up on their peers in the post-graduation
job hunt. A truly effective internship program targets undergraduates from competitive
educational environments, excites interns about the future of the industry,
and provides operational exposure over a broad range of geographies and functions.
With these elements in place, a key pipeline for new talent exists.

The craft labor force will have to be rebuilt as well, and working hand-in-hand
with unions to attract new blood is critical to maintaining craft skills. One
utility, for example, has worked with its local union to create a program in
which temporary help can be brought on part time to fill peaks in workload and
eventually, as they become more experienced, can actually enter the workforce
as journeyman apprentices. This tactic has already resulted in several new hires
into positions vacated by retiring workers.

Planning to bring the new workforce into the company is the obvious need. However,
before this next generation of utility employees is on board, it will be incumbent
on utility management to initiate several actions that may be less obvious.

• Develop training programs that will facilitate the passing of expertise
on from the current generation of employees to the next.
• Develop other processes to fully document existing technical knowledge
in the workforce.
• Develop processes to identify high potential staff and guide them in
shaping their careers.
• Monitor these programs closely and include experiential training for
those whose developmental needs have been identified.
• Determine how these future leaders will be rewarded and on what basis.

Passing the Torch

Ensuring that expertise is retained within the organization will involve extensive
work by and cooperation among the company’s HR, training, engineering, operations,
and maintenance departments. Formal training has long been a staple of utility
HR programs, but for specialized technical, engineering, and management positions,
the knowledge needed is too specific to be transferred effectively through formal
training alone. In an ideal world, mentor/coach programs would be the answer
to this problem. While they can be expensive in terms of labor productivity,
the financial and safety benefits are likely to far exceed the initial costs.

A technique sometimes used successfully is to bring retirees back in the specific
capacity of mentor to a junior technical specialist or executive. If a traditional
mentoring program had begun in earnest prior to Rick’s retirement, the experiential
learning that Rick had built over the years could have been transferred to Jim.
The cost incurred in, for example, having Jim shadow Rick for a year, even at
fully loaded pay, would have been a fraction of the costs of the consequences
of Jim’s inexperience.

While the type of worst case seen in Scenario One isn’t likely to happen everywhere,
it is a near certainty that a mentor/coaching program, on average, will pay
for itself over time by increasing technical knowledge, decreasing errors and
accidents, and ramping up the learning curve for jobs with specialized skills.

However, given the very short time until the retirement of many key staff,
parallel efforts should be undertaken to formally document any knowledge critical
to operations, maintenance, engineering, or administration. Other than in the
nuclear sector, where such rigorous documentation is largely mandated, the level
of critical knowledge that exists only in the expertise of certain individuals
is astonishing. With retirements approaching, managers in all of these areas
should begin identifying and documenting their critical work processes.

The work planning and content management modules of many of today’s enterprise
asset management software packages provide an ideal place for much of this information.
In working with many of our clients, we have found, however, that many of the
capabilities of such software that could be used to retain existing knowledge
are unused, underutilized, or misapplied. With the large investments many companies
have in such information technology systems, a review of how effectively they
have been used to capture this knowledge would be prudent.

High Potential and Performance

Widespread implementation of corporate enterprise management systems has also
made it easier to identify those employees with the skills, competencies, capabilities,
and interests to take on critical roles. Most of the major enterprise management
software packages available to utilities today contain a set of HR modules with
functionality that allows skill-matching database queries of a centralized employee
database.

Using tools like these, the competencies and capabilities required for the
key jobs within the organization can be specifically profiled and the ideal
candidate for these roles identified. When a key position becomes vacant due
to a retirement, promotion, or other reason, the company is easily able to identify
in-house resources that fit the required competencies and capabilities.

However, this is not an off-the-shelf application of information technology.
Serious thought and planning must go into defining the profiles so that the
company can easily find the best individual to fill a vacant position. Database
applications must be developed that capture information about an employee’s
training, education, and career goals, and that match these profiles against
position requirements so the right person can be identified for the right job
at the right time. Now is the time to do this, since
once the need becomes evident, too much time will be required to acquire the
information that will be necessary to achieve this objective.
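
For illustration, the sketch below shows the kind of profile matching described
above in a few lines of Python. The field names, skills, scoring weights, and
records are hypothetical and are not tied to any particular vendor’s HR module.

```python
# Minimal sketch of competency matching for succession planning.
# Field names, weights, and data are hypothetical -- not tied to any vendor's HR module.

employees = [
    {"name": "A. Rivera", "skills": {"protective relaying", "switching", "outage response"},
     "career_goal": "operations supervisor"},
    {"name": "J. Burton", "skills": {"switching", "SCADA"}, "career_goal": "senior operator"},
]

vacancy = {
    "title": "senior operator",
    "required_skills": {"protective relaying", "switching", "outage response", "SCADA"},
}

def match_score(employee, vacancy):
    """Fraction of required skills the employee already holds, plus a small
    bonus if the role matches the employee's stated career goal."""
    overlap = len(employee["skills"] & vacancy["required_skills"])
    score = overlap / len(vacancy["required_skills"])
    if employee["career_goal"] == vacancy["title"]:
        score += 0.1
    return score

ranked = sorted(employees, key=lambda e: match_score(e, vacancy), reverse=True)
for e in ranked:
    print(f'{e["name"]}: {match_score(e, vacancy):.2f}')
```

The real work, as the text notes, is in defining the profiles and keeping the
underlying data current; the matching query itself is the easy part.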

If the activities listed above can be successfully accomplished, the welcome
mat for the high-potential utility employee of the future will have been laid.
The final piece of the preparation is to determine how their performance will
be measured and rewarded once they have walked in the door. An essential part
of the “next wave” planning process, then, is to commit to a planned and objective
incentive, reward, and recognition program. This program should be characterized
by measurable criteria and achievable, time-bound goal-setting that pave the
way for meaningful internal promotions.

The issue of compensation is obviously an important one, but geography, competition
from other key local industries, and competition from other utilities will play
the largest role in determining overall compensation levels required. There
is not a lot that can be done about the first two.

Without a doubt, a draftsman, lineman, or accountant in New York or Los Angeles
will require a considerably higher salary than one in rural Kansas. The key
is to monitor levels of the third factor across the industry (adjusting for
local economic realities) and ensure that compensation for an employee’s
skills compares favorably with similarly skilled employees at other utilities
(particularly nearby ones).

Any effort to be above the median should be a conscious one, and this philosophical
stance should be transparent to the staff and all internal and external stakeholders.
Making this a well-publicized goal of the company is the best way to obtain
the intended motivational effect of keeping compensation levels competitively
high.

Incentive compensation can be another tool for motivating employees to work
at their highest level, but it is a tool that must be used carefully. If high
achievement is not sufficiently differentiated from average performance, the
incentive to improve is unclear, and there is a very real danger of an incentive
pay program quickly evolving into a sort of entitlement.

In heavily unionized operations, where group expectations play a huge role,
this message must be tempered by the realization that a highly motivated employee
may encounter a hostile environment in trying to achieve more if doing so conflicts
with informal work group expectations.

Finally, there is the danger that rewarding incentive pay for above-target financial
performance can send conflicting messages to employees in parts of the industry
where factors other than strict financial performance shape public perception
(e.g., nuclear plants).

Legitimately valuable incentive programs that appear to focus exclusively or
too heavily on financial measures may send poor messages to regulatory bodies,
governmental organizations, and the public as a whole, implying that profit
is being pursued with inappropriate attention to the public good. A well-defined
balanced-scorecard approach to assessment of staff and organizational performance
is often effective in avoiding this trap, as it explicitly shows all stakeholders
how employees are rewarded for breakthrough performance in safety and culture
as well as financial profitability or operating performance.
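
At its simplest, a balanced scorecard reduces to a weighted composite of financial
and nonfinancial measures. The sketch below is a minimal illustration; the categories,
weights, and scores are invented and are not a recommended scheme.

```python
# Hypothetical balanced-scorecard composite: categories, weights, and scores are illustrative only.
weights = {"financial": 0.40, "operations": 0.25, "safety": 0.20, "culture": 0.15}
scores  = {"financial": 0.90, "operations": 0.70, "safety": 0.95, "culture": 0.60}  # 0..1 scale

assert abs(sum(weights.values()) - 1.0) < 1e-9   # weights should sum to 1
composite = sum(weights[k] * scores[k] for k in weights)
print(f"Composite performance score: {composite:.2f}")  # weighted composite of the made-up scores
```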

Long-Term Benefits

By integrating succession planning with the other key elements of an organization’s
strategic plan, a utility will have the potential to:

• Develop effective processes for identifying future competency requirements
for strategic positions.
• Define and use consistent approaches for matching existing high potential
employees to key future vacancies.
• Effectively use in-place information technology resources to allow executives
and managers ready access to succession planning information.
• Determine whether compensation and career packages offered are sufficient
to attract and retain top talent, and make adjustments if necessary.
• Put in place consistent processes for identifying competency requirements
for all new positions and roles.
• Make plans, over and above the traditional executive-level succession
planning process, for filling the intermediate-level positions from which
high-potential employees have been promoted.

When a significant percentage of the workforce — and the specialized knowledge
needed to keep plants and transmission and distribution lines in operation —
is getting ready to retire, failure to plan early could spell disaster. Reliable
and safe operations, continued profitability, and a leadership role in the industry
will be the ultimate payoff for those well-prepared for the Big Shift Change.

Portfolio Approach to Cutting Costs

The renewed and particularly intense focus on performance improvement and cost
reduction in utilities is traceable to three compelling drivers of change:

• Strong capital market demands
• Selective opportunities for growth
• Increasing performance expectations for regulated business

As Wall Street increases its scrutiny of energy and utility companies, and as
those companies become more reliant on raising capital from the financial markets,
short- and medium-term financial performance and shareholder value become
all-important. The fall of
the trading market and continued uncertainty in the marketplace have exacerbated
utilities’ deteriorating financial conditions and have precipitated creditworthiness
challenges throughout the industry.

The capital markets will continue to dissect the industry and its players,
demanding strong balance sheet management and consistent, stable earnings. Top
players are looking for ways to achieve scale economies, both through best-in-class
operations and supplier consolidation. They are also motivated by a strong desire
for earnings growth — internally, through cost management, as well as externally,
through acquisition.

Put simply, utilities are looking for sensible ways to grow and leverage their
businesses, consistent with their overall business strategies. The trend toward
outsourcing — from tree trimming to plant construction, from HR to finance
— provides several avenues for utility performance improvement, improved
resource utilization, and cost reduction.

New enabling technologies, such as customer information systems and risk-management
systems, will assist in enhancing performance in new businesses, enabling the
utility to keep a close eye on and drive efficiencies from the core business.
Taken together, the three drivers of change are generating substantial momentum
for performance improvement and cost reduction in utilities.

The Portfolio Approach

There are many approaches to improving the bottom line. The truth of the matter
is that the approach that works for one company may not get the best results
for another. Why? Because each company has a different strategy, operations,
health, and culture that can drive the decision variables regarding which approach
to use.

This article will focus on where to look for cost reduction opportunities and
the various approaches that have been successful across industries. We have
observed that companies often draw from a mix of approaches to develop the programs
to realize their goals, driven by an overall business strategy. The mix helps
develop a portfolio of projects built from both top-down and bottom-up perspectives.

Before we focus on specific approaches, let us take a quick look at some of
the opportunity levers companies use to drive cost-performance improvement.
These are the areas that typically yield potential large opportunities. Benchmarking
and diagnostics can be used to identify priority areas for your company.

Shared Services or Outsourcing

Each company has core competencies it must do well to be successful in delivering
its products or services to its customers. Non-core areas present opportunities
to consider shared services, or outsourcing, where the transaction-related activities
of the function are the core competency of a new unit or vendor.

Areas to consider include: customer relationship management, finance, human
resources, information technology (infrastructure, network and applications
maintenance), procurement, and facilities management. Each should be evaluated
from both a business case perspective and from a strategic perspective to determine
what makes sense for your organization.

Organization

It is obvious that the success of an organization comes in large part from
its people. Organizing effectively, with the right skills and tools combined
with a sound business model and processes to deliver the strategy, is important.
Many organizations are hampered by a “silo” or “stovepipe” organizational structure,
where processes are suboptimal within the silo.

It is critical to look at processes from a customer point of view, migrating
to best practices in procedures and technology, and to organize effectively
around those processes, with the right customer-driven performance measures and
job functions, so that the organization is aligned and moving in the right direction.
Costs should come down, and revenues and service levels should improve.

Capital and Expenses

Don’t forget the balance sheet and how you spend money. On the balance sheet
side, receivables, payables, inventories, and assets are a source of cash that
is often overlooked. Targeted programs for eliminating revenue leakage, paying
per contract terms, reducing inventories through better supply chain management,
and eliminating unneeded assets can improve your working capital situation and
cash flow. Expenses can be attacked by evaluating expense policies and complying
with them.

Better procurement processes and contracts can lead to significant benefits
by curtailing unneeded spending or improving unit costs through strategic sourcing
initiatives. Often, recent mergers and acquisitions create many of these issues,
where companies fall short on integrating best practices between the two legacy
organizations.

Operations

Operational priorities and a tight rein on cost control govern many of the
current and priority initiatives within utilities. This requires a thorough
examination of strategies, skill sets, and processes. Value chain analysis also
becomes critical as companies prioritize areas ripe for improvement and enhanced
efficiencies.

The final step consists of building the necessary process and technology infrastructure.
Establishing a standard architecture for operational processes, information
technologies, and management competencies optimizes a company’s ability to succeed
in driving meaningful, enduring improvements to operations.

The utility industry today is focused on driving cash flow, balance sheet strength,
and return on invested capital through balanced, integrated operations between
the regulated and merchant businesses.

On the regulated side, utilities are seeking ways to enhance earnings stability
and growth. Low-cost operations and regulatory innovation are key to creating
premier regulated franchises. Operational excellence, best-in-class transmission
and distribution (T&D) cost position, and solid regulatory relationships remain
critical success factors. More than ever, our clients are renewing their focus
on extracting optimum value from regulated operations.

New and improved processes are being implemented to maintain and improve system
reliability and customer satisfaction, and organizational structures are being
transformed to meet emerging requirements of the T&D businesses from state and
FERC regulators. Further efficiencies are being sought from back-office operations
such as HR and finance.

For merchant businesses, low-cost generation, enhanced market share at acceptable
levels of risk, and aggressive risk management remain key to developing and
sustaining best-in-class merchant capabilities. Leading utilities are implementing
unique work management processes and solutions to enhance efficiencies within
their generation businesses. Processes are being redesigned within the generation
and trading businesses to mitigate risk to the organization and decrease potential
adverse impact on earnings.

Improvement Approaches

Now that we’ve covered some areas to examine, how should we go about defining
the specific programs to improve cost performance? We have chosen to focus on
three approaches that have yielded results across various industries:

• Budget cuts
• Business process redesign
• Six Sigma

Old Reliable — Budget Cuts

This is a top-down approach where you know there is fat to be trimmed and you
need a forcing function to do it. Rather than across-the-board cuts, most companies
today conduct benchmarking studies to identify target spending or staffing levels
for various functions and then “bake” them into the budgets. Then, you leave
it up to the organizations responsible to figure out how to get there.

The advantage is that new spending levels are relatively easy to set, although
they are often arrived at through negotiation. Without an underlying structured
method, this approach can get the desired results, but it may suboptimize operations
since it usually takes a functional view rather than a process view. Also, there
is often little consideration for the customer.

Process Re-engineering

Many companies use this method, first made popular in the early 1990s, because
it focuses on how a company conducts business to deliver products and services.
Makes sense. It is important to start with the business model and how the processes
support the business model. Since this method is not bound by organizational structure,
it focuses on tangible improvement and reduced waste in business processes through
process changes, elimination of steps, or improved technology and organizational
alignment.

With the development of industry-tailored software packages that already include
industry best-practice processes, there is an emerging trend to re-engineer to
plain vanilla, industry-standard software solutions, which can reduce ongoing
system maintenance and upgrade costs.

The key is to customize industry-standard solutions only where you need to
for competitive reasons. It is important to ensure that re-engineering teams
have a solid outside perspective, so they can consider breakthrough improvements,
and that they focus on implementation and process measures to drive the changes
into the organization.

Six Sigma

Six Sigma was developed at Motorola in the 1980s and gained broad visibility in
the mid-1990s, most notably through its adoption by General Electric. It is
a highly structured, process-based approach, embedded in the culture, that uses
statistical tools to evaluate process performance. In this approach, certified
Six Sigma black belts address specific improvement projects from start to finish.
It involves a heavy investment in training and usually does not pay back for
two or more years.

However, Six Sigma tends to generate a lot of organizational buy-in and develops
strong process-based and problem-solving skills in the organization.

Successful performance improvement is dependent on developing the right projects
and effectively implementing them. We suggest that you look across your organization
through process-driven approaches, and you should expect to develop a portfolio
of projects.

Which Is Best?

The approach you choose will depend on your specific situation, where you are
in terms of improving your performance, the speed of implementation needed,
and your desire to engage in a top-down or bottom-up approach. Our experience
is that you will likely need more than one approach to achieve your desired
results.

Regardless of the path you take, make sure you engage the organization, create
strong accountability, and measure your progress.

Building an Enterprise GIS

Early on, utilities had no specific cartographic expertise and no standards for
scale, orientation, map sheet size, symbology, or detail, yet they built up a huge
investment in paper maps and records. Whole organizational and operational processes
were linked to maps and map grids. That legacy largely remains today.

Today’s modern geographic information system (GIS) and its associated spatial
database can offer radical improvements over the legacy paper map processes
and thinking. Or the GIS can be compromised to look, feel, and behave like an
automated map machine, retaining the old limitations, processes, organizational
structures, and systems that preceded its implementation.

This paper will describe the evolution of GIS and associated processes at utilities.
It will also provide some examples of how leading utilities have shed and shredded
their legacy processes to produce real business value with an enterprise GIS.

Progress is often defined as improving upon past accomplishments. A breakthrough
is defined as breaking away from past accomplishments. Progress is gradual;
breakthroughs are quantum leaps. Breakthroughs require new knowledge, seeing
things in a different way, and visualizing a new paradigm. Progress solves the
problem of “How can I make this process work better?” A breakthrough revolutionizes
the business.

There is nothing wrong with progress. All businesses must improve their current
operations, gradually tweaking processes, building new tools and facilities.
Occasionally, new technology arrives that has the potential to significantly
alter how a business operates. GIS is one of those technologies. The challenge
for utilities is first to recognize that GIS can enable breakthrough improvement,
and second, to shed the legacy that inhibits breakthrough in favor of just progress.

Why is it so tough to shed legacy processes and thinking? The answer is that
the legacy processes work so well and are so entrenched into the business that
no one can even see that they are obsolete. Henry Ford was the pioneer of mass-produced
vehicles. Mass production was a breakthrough from custom-built production methods.
His processes and methods were emulated throughout the world. As part of the
Ford process, large inventories of spare parts were kept, and large inventories
of completed automobiles were manufactured awaiting customer orders. This worked
for many years.

Ford’s process was improved upon by more sophisticated manufacturing techniques,
including robotics and computers. Toyota questioned Ford’s manufacturing process
in a fundamental way by employing Deming’s statistical process control for
quality and lean methods for minimizing inventory. The Toyota Production System
(TPS) broke from the past to manufacture automobiles that were cheaper to produce
with a quantum leap in quality. Toyota’s breakthrough implementation of TPS
resulted in nothing short of market dominance.

Evolution of Mapping Systems

Maps have been used in utilities for many years. They needed to be. Workers
needed to find their way to customers, underground structures, poles, and equipment.
The maps served three main functions:

• Documentation for what had been built — a “visual facilities inventory”
• Basis for engineering upgrades and new customer connections
• Representation of the network for analysis of electric or gas flow

Often the responsibility for the creation and maintenance of the maps was with
the engineering department. The engineers and technicians created new work orders.
After the new facilities were built, drafters would copy the work-order data,
along with field notes and as-built changes, back onto the maps.

Map sheets were sent to the field offices and trouble centers. Copies of the
maps were marked up by field people and sent back to the mapping group for inclusion
on the maps.

In the beginning someone within each utility made a decision to create maps.
At that time they decided how big they should be and what should be captured
on the maps. Utilities often would hire mapping contractors to draw the maps
or simply copy maps from published map books such as Thomas Bros., from USGS
quads, or from wherever maps could be found. As systems became more complex, new maps
were created to show greater levels of detail.

Over time, mapping became a big deal, with large staffs maintaining an ever-increasing
number of map products with varying degrees of detail and annotation. Some maps
were very accurate, others more schematic-based. Most were developed with no
particular mapping standards, since all maps were used internally. Symbols,
scales, and levels of detail were largely invented by the utility.

Utility mapping groups organized their paper maps by grids. Sometimes they
aligned with standard map grids, like the state plane coordinate systems, and
other times they were just arbitrary. However, once the map gridding system
became established, many other processes began using the map grid as a means
of organizing work. Electrical and gas equipment numbering, pole numbers, valve numbers,
and even meter-reading routes were often adopted or derived from the legacy map
gridding system. These map grids found their way into early plant accounting,
meter inventory, billing, trouble call, and customer systems of the late 1960s
and 1970s.

Operating divisional structures were often divided up along map grid lines.

While the facilities paper maps were used extensively for engineering and operations,
they were largely ignored in other major business units within the utility,
probably because they were hard to understand and contained highly technical
information. They were, in effect, engineering documents. The value of the information
contained on these maps and records was largely hidden from the rest of the
business.

The Wonder Years End

Investor-owned utilities enjoyed steady growth and prosperity, with little
significant regulatory interference from the 1930s breakup of the utility holding
companies (with the passage of the Public Utility Holding Company Act of 1935)
until the 1965 Northeast Blackout. The blackout, the 1970 Clean Air Act, the
1973 oil embargo, the Clean Water Act of 1977, Three Mile Island, the 1978 Natural
Gas Policy Act, the Public Utility Regulatory Policies Act (PURPA) of 1978,
the Energy Policy Act of 1992, FERC Order 636, FERC Order 888, and others caused
huge investments in plant and equipment that did not generate one additional
kilowatt-hour or Btu of additional demand for electric and gas utilities.

In order to preserve shareholder earnings, rates needed to rise. However, with
the breakup of AT&T, the rise of consumerism, and post-Watergate distrust of
large institutions, rate increases were hardly popular. Only state public utility
commissions and the Federal Energy Regulatory Commission could approve rate
increases.

By the late 1970s, it was clear that utilities needed to cut costs to maintain
decent shareholder return. By then, information technology could be applied
to help improve productivity and customer service.

The classic utility IT systems were created at this time: customer information
systems (CIS), trouble call, meter inventory, work order tracking, payroll,
supply chain, and financial. They helped drive down costs. All aspects of the
utility processes needed to be improved. Visionaries saw that the information
trapped on the hand-created maps and records might have additional benefit throughout
the enterprise to help decision-making and improve performance and communication.

Mapping Systems Evolve

Computer-aided design and computer-aided manufacturing (CAD/CAM) were becoming
affordable. By the early 1980s, PC CAD/CAM systems were adapted to engineering
drafting by utilities. CAD systems quickly replaced the manual drafting board
for new design work. CAD was a natural for designing new substations, take-stations,
and buildings. The time to create a new drawing was cut in half, revisions took
seconds not hours, the quality was better, and the ability to print and electronically
communicate was spectacular. CAD was great for new drawings. It automated the
drawing process.

Since it was important to have access to data about facilities, CAD systems
were modified to allow facility data to be isolated and associated with the
graphical representation. Thus the term automated mapping/facility management
(AM/FM) was adopted. These modified CAD systems allowed some degree of storing
and reporting of facility attributes in a database, while still maintaining
the mapping functionality. While some AM/FM systems attempted to piece together
the individual map files to form continuous maps, fundamentally they were managing
drawings and associated data. The paradigm was essentially the same as the old
mapping systems. Maps are drawings. They have a size, a scale, and symbology.

During the same time as AM/FM systems were being deployed, GIS technology was
maturing. GIS, like CAD and AM/FM systems, produced maps, but did so in an entirely
different way. CAD and AM/FM automate the drawing and data capture process to
create maps. GIS manages spatial information about things, places, and features
in a database and displays the results in the form of the map. CAD and AM/FM
replicate known information — a sketch becomes a finished product, whereas
GIS creates new information.

A word processor creates a finished typed page from a hand-written document
(no new information). A database query solves a problem (a query creates new
information). CAD and AM/FM systems created drawings from sketches (providing
no new information). A GIS solves a spatial problem and creates new insight
and information. Progress is automating the mapping function (using a CAD system).
A breakthrough is uncovering the cause of a recurring reliability problem (using
spatial analysis of a GIS).

Whether the asset maps are stored on paper or in CAD files, visionary utilities
have shredded the map sheets and viewed the facility data to solve broad-based
business problems, not just engineering or network analysis problems.



Figure 1: CenterPoint Energy – Know Where Your Assets Are and Are Not

GIS — Adding Value

GIS is about three things, and we’ll take a closer look at each one:

• Decision-making
• Enterprise communication
• Efficiency

Decision-Making

Electric and gas distribution utilities have assets geographically dispersed
throughout the service territory. The infrastructure is aging. It is simply
too expensive to replace equipment just because it’s getting older. The key
is to understand which assets need to be replaced and when. Replacing equipment
too early means money could have been spent in other, more critical areas. Replacing
too late means replacing after the equipment has failed, leaving customers without
gas or electricity. GIS allows disparate information to be integrated.
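
As a rough illustration of what that integration enables, the sketch below joins
asset age with outage history to flag replacement candidates. The thresholds,
identifiers, and records are made up, and a real implementation would typically
run as a spatial query inside the GIS database rather than in application code.

```python
# Hypothetical example: flag aging assets with poor reliability for replacement review.
# Thresholds, asset records, and outage counts are made up for illustration.
assets = [
    {"id": "XFMR-0141", "age_years": 38, "outages_last_5y": 6, "location": (29.76, -95.37)},
    {"id": "XFMR-0212", "age_years": 41, "outages_last_5y": 1, "location": (29.74, -95.41)},
    {"id": "XFMR-0307", "age_years": 12, "outages_last_5y": 4, "location": (29.80, -95.35)},
]

AGE_THRESHOLD = 35       # years in service
OUTAGE_THRESHOLD = 3     # outages in the last five years

replace_candidates = [
    a for a in assets
    if a["age_years"] >= AGE_THRESHOLD and a["outages_last_5y"] >= OUTAGE_THRESHOLD
]
for a in replace_candidates:
    print(f'{a["id"]} at {a["location"]}: age {a["age_years"]}, '
          f'{a["outages_last_5y"]} outages in 5 years')
```

The point is less the filter itself than the fact that age, reliability, and
location live in one queryable place instead of on separate map sheets and spreadsheets.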

CenterPoint Energy of Houston, one of the largest electric and gas distributors
in the nation, uses GIS extensively for managing its assets. Certainly, CenterPoint
uses the GIS to produce facility maps. However, it uses the data underneath
the maps to help manage its assets. Knowing where equipment is so managers can
make intelligent decisions about planning and operational issues is critical.

During 1998-1999, the CenterPoint Energy Underground Locating Division recognized
a rapid increase in requests to locate underground facilities. That growth,
along with the acquisition of a gas company and more facilities to be located,
gave CenterPoint the opportunity to identify economies of scale in its underground
locating business process.

The result of that effort was Underground Locating and Ticket Research Application
(ULTRA), a system that integrated GIS, new technologies, and process re-engineering
to save CenterPoint Energy $1 million in its first year of implementation. Savings
were achieved by using GIS tools and data to produce digital maps and processes
that resulted in fewer “field locate” queries being required and increased efficiencies
in the locates that were performed.

Decision-making using GIS can be applied to these utility tasks:

• Economic development — Encouraging development near existing
underutilized gas mains and power lines.
• Demographic studies — Sub-regional load forecasting, enhancing
in-house data with external data sources such as land-use inventories, zoning
data, and wetland protection areas.
• Vegetation management — Linking reliability data to vegetation
growth characteristics, as well as providing analysis to support just-in-time
tree trimming or weed abatement.
• System planning — Determining patterns in load growth.
• Rate case studies — Determining the cost of service.
• Long-term capital planning — Linking historic weather, environmental,
political, and social data to form long-term capital spending programs.
• Reliability centered maintenance — Linking reliability to
maintenance planning and inspection data.
• Loss evaluation — Seeing where losses are higher or lower.

Enterprise Communication

When things go wrong, or when disasters strike, utilities need to communicate
to customers, employees, the media, local government, and often state and federal
emergency management agencies. The ability to communicate the current state
of work or outages or crew deployments is critical.

A variety of utilities use GIS to display in real time the current status of
outages and the extent of the damage.

In customer satisfaction surveys, customers consistently either praise or criticize
utilities for their ability to communicate the current state of progress of
outage repair, in-progress work for new construction, or for something as simple
as repairing a street light. People generally understand that things can go
wrong. The utility can be heroic in its efforts to restore power, but if it
is unable to communicate progress to customers, then its efforts go largely
unnoticed. GIS is used extensively in communication:

• Work order status — Coordination of work internally and externally
with other agencies
• Weather and how it relates to current work in progress
• Safety — Notification of where crews are in relation to switching
operations
• Crew and vehicle tracking
• Communication of work division performance
• Outage reporting to media
• Bad debt analysis

Efficiency

Given the demands on today’s modern utilities, incremental improvements are
not sufficient. Breakthroughs are required. Returns on investments (ROI) must
be large; paybacks must be short.

Aside from fuel purchases (whether through purchased power contracts or internal
generation), electric and gas distribution utilities’ largest expenses are:

• Labor — internal or contract
• Materials and hardware
• Carrying costs
• Fleet

Utilization of Labor

Southern California Gas, a Sempra company, uses GIS capabilities for its Automated
Meter-reading Information and Geographic Optimization System (AMIGO). The project
provides a meter-reading realignment and optimization system in order to plan
and dispatch routes for meter-reading staff. The system interfaces with Southern
California Gas’ Customer Information System (CIS) and the data from the handheld
field-meter-reading units.

AMIGO is used to visualize the complex relationship between Segments and Sections
(local areas and combinations of street blocks) for planning purposes. Furthermore,
it allows the user to automatically create new, balanced routes using street-level
routing optimization routines. AMIGO’s spatial database stores not only routes
and employee information but also a read history for SoCalGas’ more than
5 million customer locations, totaling more than 60 million records.
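
Street-level route optimization of the kind AMIGO performs is far more involved,
but the balancing idea can be sketched simply: hand out segments so that daily
workloads come out roughly even. The greedy sketch below uses invented segment
data and is not SoCalGas’ algorithm.

```python
import heapq

# Hypothetical meter-reading segments: (segment_id, estimated_read_minutes)
segments = [("S1", 95), ("S2", 80), ("S3", 70), ("S4", 60), ("S5", 55), ("S6", 40)]
readers = ["Reader A", "Reader B", "Reader C"]

# Greedy balancing: always hand the next-largest segment to the least-loaded reader.
heap = [(0, name, []) for name in readers]
heapq.heapify(heap)
for seg_id, minutes in sorted(segments, key=lambda s: -s[1]):
    load, name, assigned = heapq.heappop(heap)
    assigned.append(seg_id)
    heapq.heappush(heap, (load + minutes, name, assigned))

for load, name, assigned in sorted(heap, key=lambda r: r[1]):
    print(f"{name}: {assigned} ({load} min)")
```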

Additionally, meter-reading supervisors can remotely access the routes on a
map display and move route segments between employees in case of employee unavailability.
The system also allows SoCalGas to clean up inconsistencies between the street
data addresses and the current CIS address information and make the improved
data available to other departments in the organization.

SoCalGas leverages geospatial processing for other logistical challenges, like
the routing of customer service personnel. This project is called Advanced Resourcing
Tool (ART). ART interfaces with the dispatching system PACER, allowing dispatch
personnel to identify route assignments and exceptions visually. Routes are
built for approximately 2,500 field technicians on a daily basis.

Other areas where GIS has been applied for efficiencies are:

• Engineering and design (using spatial optimization to minimize costs)
• Operational planning/switching analysis
• Permitting and notification
• Work scheduling
• Crew scheduling
• Crew coordination
• Fault location
• Productivity measurement by location
• Meter reading and service
• Automatic meter reading deployment
• Bad debt collections

Summary

The challenge for utilities implementing true enterprise GIS is to fully understand
that a GIS is not an automation of the former mapping business process (an application)
but a foundation for many business processes, many of them new. The justification
for the GIS is not in the improvement of the mapping process, but as one of
the enablers for fundamental and perhaps radical process improvement and its
associated savings in cycle time, asset utilization, and productivity.

GIS is a breakthrough technology. Utilities can fully leverage the data and
its relationship to other information stored internally or outside the enterprise
to better make decisions, facilitate broad internal and external communication,
and improve work processes, increasing total shareholder return.

Yallourn Energy Powers Maintenance Efforts with MRO Software’s Strategic Asset Management Solution: An MRO Software Case Study

Based in the Latrobe Valley in the Australian state of Victoria, 150 kilometers
east of Melbourne, Yallourn Energy operates Australia’s second largest open cut
coal mine and owns and supplies the Yallourn “W” Power Station. The station produces
about a quarter of the State’s electricity needs – a tenth of the nation’s power
– from four generating units with a combined capacity of 1,450 megawatts.

Until it became the first Victorian generating entity to be privatized in 1996,
Yallourn Energy was a part of the State Government-owned State Electricity Commission,
Victoria (SECV). Over the years, the SECV deployed different purpose-built legacy
systems for maintenance and inventory control at each mine and power station.
At the time of privatization, a service bureau took over all the legacy systems
and reinstalled them offsite. Yallourn then relayed the maintenance information
to the bureau for processing.

In order to streamline operations and reduce IT and inventory costs, Yallourn
Energy realized that it needed to rationalize its business processes around a
single replacement asset management system across the whole operation. It was also
decided to dispense with the service bureau and install and manage the new system
in-house. Management also adopted a “no customization” policy at that time because
the legacy systems being replaced had been heavily customized. These legacy
systems were expensive and difficult to maintain, and the only people who knew
how to upgrade them were the systems’ original developers.

“We needed a system that was not only robust enough to help us maintain the
assets that supply a quarter of the State’s electricity needs, but also one
that needed no customization,” commented Steve Brown, Yallourn Energy’s Production
Services Manager. “MAXIMO® met these requirements.”

More than 300 employees use MAXIMO for all aspects of asset management including
services and materials, contracts, job plans and infrastructure maintenance
for buildings and roads. Logs show that there are usually up to 75 people accessing
the system at any one time, including employees located at Yallourn Energy’s
office in downtown Melbourne. The value of the assets that MAXIMO helps Yallourn
maintain is staggering. For example, there are five bucket wheel and chain dredgers
at Yallourn’s open cut brown coal mine that cost up to A$80 million each.

More than two years after taking MAXIMO live, Yallourn Energy continues to
successfully maintain the pragmatic and economic “no customization” policy.

“We might change a screen here and there, but we don’t go anywhere near the
system flow or code,” continued Brown. “Using MAXIMO without any customization
does not compromise any of the functionality we need to run our business, and
the ease-of-use makes maintenance of the system affordable as we do all our
own upgrades.”

In addition to IT savings on upgrades, MAXIMO also saved the organization millions
of dollars in inventory. “MAXIMO helped us reduce inventory levels from 20,000
to 17,000 items,” stated Brown. “This represented a savings of more than A$7.0
million. In addition, we reduced inventory staffing by 25 percent, and reallocated
those labor resources to other roles within the organization.”

Yallourn Energy also uses MAXIMO to procure the parts it needs to keep its
assets up and running. MAXIMO checks the availability of the required parts
from internal sources first; if a part is not in stock, Yallourn Energy
leverages MAXIMO’s electronic requisitioning capability.
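
That check-then-requisition flow reduces to a simple decision sequence, sketched
below for illustration. The function and data names are hypothetical placeholders,
not MAXIMO API calls.

```python
# Illustrative check-then-requisition flow. The storeroom data and the
# requisition step are hypothetical stand-ins, not MAXIMO's actual API.

storeroom_stock = {"BEARING-6205": 4, "SEAL-KIT-88": 0}

def issue_or_requisition(part_number: str, quantity: int) -> str:
    on_hand = storeroom_stock.get(part_number, 0)
    if on_hand >= quantity:
        storeroom_stock[part_number] = on_hand - quantity
        return f"Issued {quantity} x {part_number} from internal stock"
    # Not enough on hand: raise an electronic requisition to the supplier instead.
    return f"Raised electronic requisition for {quantity} x {part_number}"

print(issue_or_requisition("BEARING-6205", 2))   # served from the storeroom
print(issue_or_requisition("SEAL-KIT-88", 1))    # goes out as a requisition
```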

Yallourn Energy also saved significantly on IT infrastructure costs. “We discontinued
a service bureau that was costing us some A$680,000 a year and replaced it with
an in-house infrastructure that cost us A$220,000 in year one, including setup
costs and the training of 380 people over an eight-week period. Costs are now
flattening out at around A$80,000 per year, a yearly saving of A$600,000.”

Yallourn Energy is now undergoing significant modernization, development and
structural changes. These include a joint venture that will see the development
of a new coal field that will produce coal for Yallourn through the year 2032.

“As we continue to change the business in response to the competitive national
electricity market, we are confident that MAXIMO can accommodate our growing
needs,” concluded Brown. “MAXIMO was the most user-friendly IT implementation
experienced by our company. We’ve had minimal need to call support. We didn’t
need external help with our last upgrade, and we don’t anticipate the next upgrade
being any different.”

Goals:

• Decrease inventory levels

• Minimize IT infrastructure costs

• Implement a system that needs no customization to adapt to their business

Results:

• Reduced inventory levels from 20,000 items to 17,000 items and saved
more than A$7 million

• Decreased yearly IT infrastructure costs from A$680,000 to A$80,000,
a yearly savings of A$600,000

• Implemented MAXIMO with no customization, without compromising any of the
functionality, as the system met Yallourn’s business needs

On Performance Management

As the energy industry evolves, leveraging business intelligence becomes ever
more important. The well-publicized problems of major energy traders over the
last year have spurred a renewed emphasis on steady earnings derived from tangible
assets and have underscored the importance of managing risk across the entire
spectrum of a company’s activities.

In such an environment, the strategic imperatives of reducing costs, controlling
risk, and monitoring the performance of the business portfolio become clear.
Enterprise performance management (EPM) can help organizations leverage information
to optimize value by offering a framework to identify a company’s strategic
imperatives and by developing the best measures for monitoring progress and
enabling decision-making relative to those imperatives.

There is no shortage of enterprise-level performance measures to choose from.
They include relatively recent and well-publicized measures such as economic
value added (EVA), which is the Stern Stewart approach described in “The Quest
for Value,” and the Holt Value Associates method known as cash flow return on
investment (CFROI). More traditional measures include return on invested capital
(ROIC) and earnings per share.

It would not be surprising for companies to feel some degree of confusion over
what the “right” measures are. Add to this the tendency for practitioners to
come up with the next big thing and the tendency for some firms to promote their
own measures, and the issue gets foggier still.

There is no perfect measure or set of measures — all have pros, cons,
and limitations that need to be understood. Assuming the measures a company uses
today possess some fundamental merit, what ultimately matters is that they are
understood and actually used by the organization.

In our experience, laying the groundwork for a successful program involves
the following key steps, which we’ll discuss in more detail:

• Identify the users and the intended use.
• Map the organization’s strategy, goals, and initiatives to develop the
right measures for that organization.
• Set realistic, meaningful targets that can be aligned with individual
compensation.
• Gain buy-in of the performance management initiative throughout the project.
• Keep the performance management framework simple, yet comprehensive.
• Plan for implementation from day one.

Identifying Users and Use

It is important to understand who will be using the measures and what they’ll
be used for. We have found it useful to classify measures into two categories:
enterprise-level and operational-level.

Enterprise-level measures are those used by CXOs and business unit heads in
managing the business portfolio. They provide the information necessary for
capital allocation and investment/divestment decisions and are concerned with
optimizing enterprise performance. The sensitivity of this information is greater
than at the operational level and demands a higher degree of security in its
handling.

Operational-level metrics, on the other hand, are concerned with optimizing
operational performance at the business unit, department/process, and employee
level. At the plant level, an example would be operating and maintenance cost
per megawatt-hour, and at the employee level, the percentage of time maintenance
personnel spend on productive work (often referred to as wrench time). Operational
measures are necessarily related to enterprise-level measures, as top-level
corporate performance will be driven by the performance of the operating units.
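
Both example metrics are straightforward ratios; the figures below are invented
simply to show the calculations.

```python
# Invented figures, used only to illustrate the two example operational metrics.

# O&M cost per megawatt-hour at a plant
annual_om_cost_dollars = 42_000_000
annual_net_generation_mwh = 5_600_000
om_cost_per_mwh = annual_om_cost_dollars / annual_net_generation_mwh
print(f"O&M cost: ${om_cost_per_mwh:.2f}/MWh")          # $7.50/MWh on these figures

# Wrench time: share of a maintenance shift spent on productive work
shift_hours = 10.0
productive_hours = 3.5   # time actually spent on tool-in-hand work
wrench_time = productive_hours / shift_hours
print(f"Wrench time: {wrench_time:.0%}")                 # 35% on these figures
```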

Executives also need to know what the measures will be used for and how to
use them. While this may seem obvious in principle, it is, in our experience,
less obvious in practice. In discussions we’ve had with corporate executives,
a common refrain is, “I get too many measures from too many sources — what
am I supposed to do with all of these?”

Effective measures should enable the decision-making for which they are intended
(for example, portfolio composition decisions and capital allocation decisions)
and should present sufficient information to provide a “pulse” of the enterprise
and trend progress toward achieving strategic objectives.

Presentation and context are particularly important; we have found that an
accompanying analysis describing the recent performance of a particular measure
can be extremely beneficial in helping executives interpret performance and
make (or not make, or delay) decisions.

We recommend that, at the enterprise level, the number of performance measures
be kept to no more than 10; fewer is fine.


Figure 1: Levels of Performance Measures

The Right Measures

For any organization, the right measures to use are those that gauge a company’s
progress toward achieving its strategic objectives. In developing the right
measures for a company or organization, the use of a strategy map greatly facilitates
the discussion necessary to drive a select group of performance measures. The
tool, developed by Robert S. Kaplan and David P. Norton and discussed extensively
in “The Strategy-Focused Organization,” is typically best used as part of interviews
and workshops with executive stakeholders to develop the organization’s strategy,
objectives, and initiatives.

A strategy map can assist in actually developing the company’s strategy, but
is typically used to link a company’s strategy, or strategic themes, with measurable
strategic objectives and appropriate performance measures. For example, an organization’s
strategic theme could be to “deliver shareholder value within the top quartile
of its competitors through a balanced business model.”

Particular strategic objectives to deliver this performance could be to maintain
predictable cash streams or optimize the asset base. The associated performance
measures could include free cash flow and ROIC. Of course, one could question
whether these are the ideal measures for the intended purpose. Perhaps knowledgeable
practitioners would view other metrics, such as EVA, as more useful.

Regardless, the introduction of free cash flow and ROIC represents a step forward
from a strict earnings-per-share mentality and should be viewed as progress
toward driving value-based behavior within the organization.
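
For readers who want the arithmetic, the sketch below applies the common textbook
definitions (free cash flow as cash from operations less capital expenditures,
and ROIC as after-tax operating profit divided by invested capital) to invented figures.

```python
# Illustrative calculations of the two measures using common textbook definitions;
# all dollar figures are invented.

# Free cash flow = cash from operations - capital expenditures
cash_from_operations = 1_250_000_000
capital_expenditures = 900_000_000
free_cash_flow = cash_from_operations - capital_expenditures   # $350M

# ROIC = net operating profit after tax / invested capital
operating_profit = 800_000_000
tax_rate = 0.35
invested_capital = 7_000_000_000   # debt + equity employed in operations
nopat = operating_profit * (1 - tax_rate)
roic = nopat / invested_capital

print(f"Free cash flow: ${free_cash_flow/1e6:.0f}M")
print(f"ROIC: {roic:.1%}")   # 7.4% on these made-up numbers
```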

Good practices related to effective performance measure development and implementation
include:

• Focus on the vital few — keep the number of measures to a minimum.
• Provide a balanced set of measures: leading versus lagging (i.e., predictive
versus backward-looking), financial and nonfinancial, and objective and subjective
(e.g., safety versus corporate perception).
• Ensure measures reflect stakeholder requirements.
• Provide a dashboard view of the measures.

Using these “good practices,” a balanced-scorecard approach comprising financial
and nonfinancial measures works well for developing an effective performance
management framework. Common nonfinancial measures are those used to gauge the
effectiveness of corporate initiatives, but for which information does not come
from the financial statements. In our work, we have helped clients with the
following categories of measures (examples of measures are shown in parentheses):

Customer-Related

• Market share (total megawatt-hour sales in a given region)
• Revenue growth (annualized revenue growth rate, overall and by customer
class)
• Customer profitability (net income per megawatt-hour sold by customer
class)
• Customer satisfaction (index based on weighted average of national percentile
rankings among residential, commercial, and key account customers)

Internal/Process-Related

• Risk management (value at risk, or VaR)
• Asset utilization (return on assets, or ROA)
• Environmental compliance (NOx and SOx emissions)

Employee-Related

• Leadership development (number of successors that possess the skills,
knowledge, and experience required to fill a leadership position)
• Safety (OSHA-recordable case rate)
• Job satisfaction (employee turnover)
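
To show how such a balanced set of measures might be represented in practice,
here is a minimal sketch in Python; the class, its fields, and the sample entries
are illustrative assumptions rather than a prescribed data model.

    from dataclasses import dataclass

    @dataclass
    class Measure:
        name: str          # e.g., "Customer satisfaction"
        definition: str    # how the measure is calculated
        category: str      # Customer, Internal/Process, or Employee

    # A few of the example measures listed above, captured as scorecard entries.
    scorecard = [
        Measure("Market share", "Total MWh sales in a given region", "Customer"),
        Measure("Customer satisfaction", "Weighted national percentile ranking", "Customer"),
        Measure("Environmental compliance", "NOx and SOx emissions", "Internal/Process"),
        Measure("Safety", "OSHA-recordable case rate", "Employee"),
    ]

    for m in scorecard:
        print(f"{m.category:17} {m.name}: {m.definition}")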

While we consider these types of measures as representative and appropriate
within the energy industry, particularly at the senior-executive level, each
company’s individual business strategy and needs should ultimately shape the
final selection of suitable performance measures. For example, enterprise risk
management has become a primary issue for almost every major energy company.

This focus has resulted in the need to develop a scorecard that quantifies
measures and targets, inclusive of risk alerts across every relevant category
applicable to the business. Key categories of risk may include financial, market,
credit, environmental, organizational, and operational. We have worked with
energy companies to develop such a risk scorecard, derived from the overall
scorecard for the business.


Figure 2: Strategy Map

Realistic, Meaningful Targets

Target-setting can be a contentious issue in many organizations. After all,
once a target is set, there are implications to missing it as well as exceeding
it, and there are always questions about what kind of behavior the target may
really be driving. Related to this is the practice of tying targets to incentive
compensation, which is necessary but can be equally problematic. That being
said, it is clear that a company needs to establish targets if it hopes to have
any control over achieving its performance objectives.

For financial measures such as free cash flow and ROIC, some companies use
internally generated targets derived from the company’s annual budget or forecast.
Figures for individual business units may initially be difficult to obtain,
depending on how the company rolls up its numbers, but can usually be generated
for at least the major business units. Internal targets can typically be monitored
and measured on a monthly basis since information comes directly from the financial
statements and company systems. A monthly reporting capability provides a company
with significant information from which to understand and guide performance.

Externally generated targets also play an important role in performance management.
Such targets can be generated by benchmarking performance against competitors
from a peer group comparable in size, business composition, and strategy and
then targeting the top quartile value from the peer group.

These targets add another dimension to performance measurement against
internal targets, especially for measures that by nature are relative versus
absolute (e.g., measures that are expressed as a percentage, such as ROIC, versus
free cash flow or earnings). For these types of measures, the internally generated
target tells the company how well it’s doing relative to its own goals, while
the external target tells it if it could be doing even better in comparison
to the company’s peers.
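
A simple sketch of how such an external target might be derived follows; the peer
ROIC values are hypothetical, and the choice of the 75th percentile as the “top
quartile” (assuming higher is better) is an assumption made for illustration.

    import statistics

    # Hypothetical ROIC values (as fractions) for a peer group comparable in
    # size, business composition, and strategy.
    peer_roic = [0.052, 0.061, 0.066, 0.071, 0.074, 0.079, 0.083, 0.091]

    # statistics.quantiles with n=4 returns the three quartile cut points;
    # the third is the 75th percentile, used here as the external target.
    _, _, top_quartile = statistics.quantiles(peer_roic, n=4)

    print(f"External ROIC target (top quartile of peers): {top_quartile:.1%}")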

For either internal or external targets, it is important to make sure that the
chosen targets do not drive undesired behavior. ROIC, for example, can
be increased in the short term by deferring needed investments, but at the cost
of inefficient plant or factory performance and costly breakdowns. Targets like
this need to be balanced with additional lower-level metrics, such as process
efficiency and cost measures, quality, and capacity utilization measures.

Buy-In and Managing Change

Implementing an enterprise performance management framework can be a significant
change in the way executives and managers evaluate business and employee performance.
As a result, continuous communication and feedback must occur to develop performance
measures that are accepted and embraced. When users are involved in the development
process, they gain a better understanding of the company’s strategy and the
reasons for selecting the particular performance measures.

Figure 3: Sample Financial Measure for Utilities

Be Simple, Yet Comprehensive

The best measures for a given company are those it will actually use and understand.
For this reason, performance measurement and management should be approached
as an evolutionary process, where the introduction of more sophisticated measures
occurs in stages rather than all at once. Using this approach permits flexibility
in establishing and modifying performance measures as necessary over time.

In one case, a large client active in generation, transmission, distribution,
and trading resisted an EVA approach due to the complexities of implementation
and acceptance by users, especially as it applied to the regulated side of the
business. Progress in driving value-based behavior eventually came by achieving
buy-in on ROIC and free cash flow, derived from standard financial statements.

For organizations resistant to such change, it is advisable to derive measures
from readily available accounting information that requires little additional
manipulation. We have developed
a maturity profile that scores companies on various components of their performance
management program, which include the type of performance measures currently
in place, the degree to which measures have been cascaded through the organization,
the integrity and timeliness of the data, the governance structure associated
with the program, and the data available through existing systems.

When used as a diagnostic tool at the beginning of a project, the insights
gained can help determine the receptivity of the company to new measures, its
ability to implement them, and potential opportunities for improvements. In
Figure 4, the outer line represents the “best practice” level of performance
(Level 4), while a company’s actual performance in a given area is plotted at
a level between 1 and 4.

Figure 4: Performance Management Maturity Model
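
As a rough illustration, the sketch below records hypothetical maturity scores for
the components described above and shows the gap to the Level 4 “best practice”
boundary; the scores themselves are invented for the example.

    # Hypothetical maturity scores (1 = basic, 4 = best practice) for the
    # components of a performance management program listed above.
    BEST_PRACTICE = 4
    maturity = {
        "Type of measures in place": 2,
        "Degree of cascading through the organization": 1,
        "Data integrity and timeliness": 3,
        "Governance structure": 2,
        "Data available through existing systems": 3,
    }

    for component, score in maturity.items():
        gap = BEST_PRACTICE - score
        print(f"{component:45} level {score}  (gap to best practice: {gap})")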

Implement from Day One

Measures need to be timely and cost-effective. Most large corporations close
their books monthly and have a well-defined schedule as to what data will be
available when. Knowing this schedule as well as the degree of additional manipulation
required to produce a given measure can be a great help in determining the timeliness
with which measures can be delivered to their audience.

Obtaining the necessary data from existing systems is ideal. In doing so, it
is wise to assess the suitability of installed accounting and information systems
in delivering the required data. In some instances, current systems deliver
only limited information (e.g., earnings by business unit) on a quarterly basis.
Measures such as free cash flow and ROIC may require information on a monthly
basis from the balance sheet, income statement, and cash flow statement, depending
on how they have been defined, which may be beyond the ability of some systems.

For more sophisticated measures such as EVA and cash flow return on investment
(CFROI), additional data manipulation beyond what is available in financial
reporting systems will be required. Having
an owner for each measure with the responsibility for producing the required
calculations by a certain date every month, for example, will foster accountability.
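
A lightweight way to record that ownership and production schedule is sketched
below; the owners, source systems, and due dates are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class MeasureOwner:
        measure: str
        owner: str             # person or group accountable for the calculation
        source: str            # where the underlying data comes from
        due_business_day: int  # working day of the month the figure is due

    production_schedule = [
        MeasureOwner("Free cash flow", "Corporate finance", "General ledger", 5),
        MeasureOwner("ROIC", "Corporate finance", "General ledger", 5),
        MeasureOwner("EVA", "Financial planning and analysis",
                     "General ledger plus off-system adjustments", 8),
    ]

    for entry in production_schedule:
        print(f"{entry.measure}: {entry.owner}, due business day {entry.due_business_day}")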


Figure 5: Displaying Performance Measures

The foregoing considerations point to the need for a gap analysis and a formal
governance model. A simple gap analysis derived from interviews with accounting
and finance personnel is a useful tool for assessing data needs and availability.
A governance model designed with the input of key stakeholders (e.g., those
responsible for producing the measures), detailing at a minimum the timing of
data and measure availability, accountability, data integrity, and required
security features, will clarify roles and responsibilities.

Web technology provides an ideal delivery vehicle. IBM Business Consulting
Services has used a main page showing a view of all of the measures in a scorecard
format, with the possibility for alerts on each measure to inform management
when targets are exceeded or missed. There should also be a separate primary
view for each measure in graphical format and with accompanying analysis. From
this view, “drill downs” can be designed to show the measure at a greater level
of detail (e.g., its constituent components) or to see the relative contribution
of various business units. There are several other ways to deliver the final
information.
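
To illustrate the alerting idea, here is a minimal sketch that compares each
measure with its target and assigns a green, yellow, or red status; the 5 percent
tolerance band, the sample values, and the status convention are assumptions made
only for illustration.

    # Sketch of scorecard-style alerts: flag each measure against its target.
    # A tolerance band (here 5%) decides between green, yellow, and red.

    def alert_status(actual: float, target: float, higher_is_better: bool = True,
                     tolerance: float = 0.05) -> str:
        ratio = actual / target if higher_is_better else target / actual
        if ratio >= 1.0:
            return "green"
        if ratio >= 1.0 - tolerance:
            return "yellow"
        return "red"

    measures = {
        "Free cash flow ($M)": (245.0, 250.0, True),
        "ROIC": (0.078, 0.075, True),
        "OSHA-recordable case rate": (2.4, 2.0, False),  # lower is better
    }

    for name, (actual, target, higher) in measures.items():
        print(f"{name}: {alert_status(actual, target, higher)}")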

To repeat a premise stated at the beginning of this article, the best measures
are those that are understood and get used. To help ensure the right measures
are defined and leveraged, we offer the following guidelines that summarize
several of the key points and practices already discussed:

• Obtain client sponsorship at the highest levels of the organization.
• Keep primary stakeholders involved throughout the process.
• Keep the recommended set of measures to a “vital few.”
• Establish business owners/champions for each measure.
• Use a strategy map as a tool for prompting discussion and identifying
the “right set” of measures for a company.
• Ensure that the performance management framework is simple yet comprehensive,
as well as flexible and adaptable enough to meet evolving information requirements.

By maintaining a focus on simplicity and usability, companies will be well-positioned
for success as they undertake their performance management effort.

Network Asset Maintenance

In many jurisdictions, any negative customer impact is felt by the utility as
a penalty payment, so network performance (system outages, voltage fluctuations)
and customer management (such as missed appointments) must be optimized.

Traditionally, network infrastructure was designed for extremely high reliability
and fail-safe operation in accordance with sound engineering practice. Much
of this was achieved through over-design, and over-maintenance was seen as simply
more of a good thing.

Today, with the introduction of competitive markets, utilities recognize the
need to reduce operational costs while providing for a safe minimum amount of
maintenance. We can no longer simply throw money at a technical problem. This
article outlines a way forward tailored to our unique environment.

Cost Curtailment

Historically, utilities were set up and operated as monopolies with heavy reliance
on regulation to avoid abuses. The generation assets and transmission and distribution
networks were designed to provide highly reliable service to customers. As every
engineer knows, it is possible to support highly reliable service delivery through
the use of installed redundancy; if one asset breaks down, the standby asset
takes over and service disruption is either avoided or reduced, depending on
the speed of switching.

Of course, assets do break down from time to time and they require maintenance
to restore service or backup capability. Maintainers also know that one way
to enhance reliability is to prevent failures from occurring. So, conventional
practice suggests that doing more preventive maintenance might reduce failures
and achieve more uptime or availability. In geographically far-flung networks,
it was convenient to have plenty of people available to respond to trouble calls
and restore service. It was expected that if those people were not responding
to trouble calls, they were doing more preventive maintenance.

All of this was paid for by the ratepayers. Ratepayers were usually protected
by regulators from the full brunt of the costs through rate caps and restrictions
on rate increases. Capital expansion and replacement programs were funded by
the issuance of debt instruments that would one day be paid off by those same
ratepayers. Utilities did what they needed to do knowing it would be paid for
eventually. As businesses, they provided highly reliable service, at any cost.

Today our industry is deregulating. The whole idea behind deregulation is to
allow the markets and costs of service delivery to be driven by their own natural
forces, thus benefiting the ratepayers and taxpayers. One of the biggest impacts
of deregulation is that utilities can no longer pass the costs of operation
on to the customer base. Disaggregation, the splitting of integrated utilities
into smaller arms-length companies, has focused attention on the financial performance
of wires companies as never before.

Because the wires part of the utilities industry is a natural monopoly, operating
rules impose specified performance metrics, as well as penalties if the metrics
are not achieved. In this new environment, the ability to recover costs is constrained,
and the need for increased system reliability is greater. We are now being asked
to manage our assets as a business that must serve competing market demands
for service and delivery, but under market-driven cost ceilings. That requires
a change.

We must accept the premise that we have often over-designed, over-maintained,
and over-staffed to meet demands for service continuity. There must be opportunities
to cut costs while maintaining high service levels and customer satisfaction.
And there are. But, those methods are not well aligned with the traditional
methods we have used for years. The change is in our thinking and where that
thinking leads.

Better Asset Management

To contain or reduce costs we need to find more effective and efficient ways
of managing the asset base.

While maintaining our assets we disturb them. We take them out of service for
a brief time, take them apart, clean, lubricate, reassemble, test, and reinstall
them. Much of this work is done by skilled maintainers in conditions that are
far from ideal — exposed to weather, in awkward locations atop poles, standing
in buckets several meters above street level, with thick gloves on, and so on.

Despite the care they take, they can’t help but make mistakes. The result is
that those assets often don’t work properly after they’ve been maintained. We’ve
all experienced the car that doesn’t run as well after a trip to the garage
— this is the same sort of phenomenon. Statistically, these problems are
the most common type of failure we experience, and in those cases, we would
have been better off leaving things alone.

Using designed-in redundancy is expensive and doesn’t always produce the desired
results. This redundant capability means that we have assets that are bought,
installed, and maintained, but seldom called upon to operate. Sometimes we induce
failures as discussed above. Sometimes when we need them we find they are not
available for use; they are already in a failed state, such as switches that
haven’t moved for a long time and are now stuck closed or stuck open. We then
have expensive failed system components that haven’t added the value we needed
— they didn’t work.

We can observe what has been done in other industries that have lived with
similar market forces for much longer. Air transport, in particular, has learned
to maintain its widely dispersed assets for high levels of reliability and service
while doing so cost-effectively in a highly competitive yet tightly regulated
business environment. Evidence of this is found in Stanley Nowlan and Howard
Heap’s report “Reliability Centered Maintenance” from 1978 and John Moubray’s
“RCM II” in 2002.

The airline industry of the 1950s and ’60s also experienced high maintenance
costs and suffered many asset failures that led to disastrous
consequences. The industry had roughly 60 crashed aircraft for every 1 million
takeoffs, and more than 40 of those were due to equipment failure. Airlines
tried what was then obvious: increasing the maintenance so equipment failures
would be avoided. The situation actually got worse!

The industry realized eventually that traditional thinking wouldn’t work as
the larger wide-bodied all-jet fleets were introduced. Airlines studied how
assets failed and recognized that different failures required different approaches.
They recognized that many failures were maintenance-induced, that many were
entirely random, that many could be detected before they got to the totally
failed state, and that many were hidden under normal circumstances. All of these
can be managed.

The airlines developed a method to determine the amount of maintenance that
reduced the consequences of failure for safety, the environment, and operations.
As a result of widespread application of this method, they experienced a 30-fold
reduction in crashes (today the rate is about two per million takeoffs) and
a 133-fold improvement in those that are equipment failure-related (to less
than 0.3 per million). They actually do less of the fixed interval type of overhaul
work that they used to: In those early days 80 percent of their maintenance
was fixed interval work; today it is less than 20 percent. The work they do
is now less expensive.

Our utilities need to embrace this thinking for one simple reason: It works.
The method is called reliability centered maintenance (RCM). While there are
a variety of RCM methods available today, only a handful are thorough enough
to work well for our industry. In this article, we examine the benefits of RCM
methods applicable to utilities.

Figure 1: Finding the RCM Maintenance Target Zone

In a recent application of RCM in a large municipal electrical distribution
utility, we demonstrated a 34 percent cost reduction for the asset classes to
which it was applied. The total costs to achieve that were small: Internal rate
of return was calculated in excess of 180 percent. It revealed many instances
of hidden failures that were inadequately dealt with in the past. It used risk-based
decision techniques to make the best use of installed redundancy in order to
reduce over-maintenance.
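
For readers unfamiliar with how an internal rate of return figure like that is
computed, the sketch below finds the discount rate at which the net present value
of a project’s cash flows is zero, using simple bisection; the cash flows are
hypothetical and are not drawn from the engagement described above.

    # Internal rate of return by bisection on net present value.
    # Hypothetical RCM business case: an up-front study and implementation
    # cost followed by three years of maintenance savings.

    def npv(rate: float, cash_flows: list) -> float:
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    def irr(cash_flows: list, lo: float = 0.0, hi: float = 10.0) -> float:
        """Bisect for the rate where NPV crosses zero (assumes one sign change)."""
        for _ in range(100):
            mid = (lo + hi) / 2.0
            if npv(mid, cash_flows) > 0:
                lo = mid
            else:
                hi = mid
        return mid

    flows = [-400_000, 700_000, 750_000, 800_000]  # year 0 cost, then savings
    print(f"IRR is roughly {irr(flows):.0%}")       # about 171% for these figures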

The RCM application also favored condition monitoring over fixed interval maintenance.
It revealed many instances of maintenance practices that induced failures; many
of those practices were dropped. And it found several instances where over-design
actually led to failures; design specifications and installed equipment are being
modified to help eliminate those failures. A valuable ancillary benefit was that
knowledge about the assets was captured in a structured and easily accessible
database. When the experienced people who participated retire, their knowledge
won’t leave with them.

The utility above and others that are leading the way in adopting RCM are proving
that this proactive method of determining the right maintenance can produce
substantial cost savings without compromising safety or service. The technology
now exists to capture the full value potential that is there. The money is on
the table and astute utilities are starting to grab it. Their timing is right.

The savings will come from doing the right maintenance at the right time —
no more, no less. It liberates manpower and uses fewer parts and replacements.
By identifying unnecessary designed-in features that lead to failures, it enhances
system performance and strives to eliminate risks. By using risk-based decision
techniques, it helps ensure that money is being spent where it will do the most
good. It provides the reliability and redundant asset availability that delivers
customer satisfaction and it does it for the lowest cost.

Optimizing the Workforce

For reasons mentioned above, utilities have traditionally relied on a large
labor force. The average “tool time” per tradesperson was low, typically only
two to three working hours per day. In a cost-constrained business environment,
there is a new pressure on utility owners to improve productivity. Opportunities
for workforce reduction are therefore welcome, as long as necessary work can
be accomplished to meet reliability and service targets.

When maintenance work is redesigned using an RCM approach to work identification,
the total amount of maintenance work is likely to decline. The short-term result
is that we may find that we have more people
than we need and that can lead to painful choices. Fortunately, however, the
timing coincides with another major change that is taking place in our society:
the early retirement of some of the baby boomers — the oldest are now about
55. As these experienced and valuable people leave the workforce, we will find
fewer to replace them. The baby bust produced far fewer people, and they are
already in the workforce — the youngest are 23.

We already have difficulty finding skilled tradespeople to replace those who
leave. The echo boom is just beginning to enter the workforce, but they are
not embracing the trades; they are the first all-computer generation. We need
to get used to working with fewer skilled tradespeople in our workforce. We
can’t easily correct the supply problem, but RCM can help us prepare by reducing
the demand for those skilled trades.

Trade unions may not like the sound of this but it is as real a problem for
them as it is for those of us managing our utilities. Their membership will
shrink and they won’t be able to meet demand. This can help them too. RCM helps
ensure that the few skilled tradespeople we will have are indeed doing
the work that adds the most value — the safe minimum amount of maintenance
work.

The next benefit to be attained is to more effectively manage the (now reduced)
workforce. The target here is to increase productivity. Design software based
on a geographic information system (GIS) can be used to design new and replacement
distribution infrastructure. The key benefits are that field
trips by designers can be reduced, the design can be automatically routed to
asset and work management systems as compatible units, and completed work can
be automatically settled to the fixed assets subsystems of the utility’s financial
system. All of these benefits translate into significant reductions in manual
processes, with consequent cost savings as well as improvements in data accuracy.

Resource management software can be deployed to optimize the process of allocating
human resources to the work that must be done. These tools help ensure that work
is performed in a sequence such that crew travel time between jobs is reduced,
overtime is reduced, material is staged to be at the work location at the correct
time, and work is “clustered” so that all jobs in a certain geographic area
are given to those field crews in or assigned to that area. This reduces the
need for multiple trips to the same remote substation, for example.

Depending on how frequently the software is updated with actual field data,
work assignments can be automatically adjusted on the fly so that unplanned
field conditions can be worked around. By understanding the skills available
in the resource pool (comprising both human and equipment resources), along with
their location and workload, resource management software can assemble crews
optimally to achieve the same goals: reduced downtime and travel time.
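
The clustering idea can be illustrated with a highly simplified sketch: jobs are
grouped by geographic area and handed to the crew assigned to that area. Commercial
resource management tools apply far richer optimization (skills, travel times,
material staging); the data and the single assignment rule here are assumptions
made for illustration.

    from collections import defaultdict

    # Hypothetical work orders and the crew assigned to each geographic area.
    work_orders = [
        {"id": "WO-101", "area": "North", "hours": 3},
        {"id": "WO-102", "area": "North", "hours": 2},
        {"id": "WO-103", "area": "East", "hours": 4},
        {"id": "WO-104", "area": "East", "hours": 1},
    ]
    area_crews = {"North": "Crew A", "East": "Crew B"}

    # Cluster jobs by area so each crew handles all work in its territory,
    # reducing repeat trips to the same location (e.g., a remote substation).
    assignments = defaultdict(list)
    for order in work_orders:
        crew = area_crews.get(order["area"], "Unassigned")
        assignments[crew].append(order["id"])

    for crew, jobs in sorted(assignments.items()):
        print(f"{crew}: {', '.join(jobs)}")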

Of course, in emergency situations, such as a storm response, all of the productivity-enhancing
and cost-containment rules that resource management software uses for normal
crew management may no longer apply. However, it is precisely in this scenario
that good resource management applications reveal their worth. Work assignment
rules can be changed rapidly to help meet the objective of minimal system down
time. Tight data integration ensures that the status of work, planned and actual,
is available to planners and to customers through the appropriate business processes.

There has always been a need to get accurate and timely data from the field.
With the advent of relatively inexpensive and reliable PDAs, all manner of field
data can be collected and sent to upstream applications. Job status data allows
schedulers to determine with fine-grained accuracy whether a crew can accept new
work, typically on a per-hour basis.

Physical location can be collected via GPS units in trucks, and the location
of poles can be plotted and used to update GIS-based maps. Actual asset condition,
the keystone of best-in-class asset management, can be captured with minimal
error and far greater accuracy. Of course, it was always possible to collect this
data using manual processes. The problem was that data was often incomplete
and the accuracy poor. The quality of decisions made with poor data can never
be more than poor.
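
As an illustration of the kind of structured record a handheld device might send
upstream, the sketch below packages job status, an asset identifier, and a GPS
fix into a single message; the field names and values are assumptions, not a
standard format.

    import json
    from datetime import datetime, timezone

    # Hypothetical field record captured on a handheld device for delivery
    # to upstream work management and GIS systems.
    field_record = {
        "work_order": "WO-103",
        "crew": "Crew B",
        "status": "completed",
        "asset_id": "POLE-48821",
        "gps": {"lat": 43.6532, "lon": -79.3832},  # used to update GIS-based maps
        "condition_code": "B",                     # observed asset condition
        "reported_at": datetime.now(timezone.utc).isoformat(),
    }

    print(json.dumps(field_record, indent=2))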

Current technology allows for bar code reading, “write once” data capture,
and above all, significant improvements to user productivity. Of course, geography,
available infrastructure, and cost are all determinants of how these field devices
are deployed and how successful they will be. Nevertheless, utilities that have
made the investment have achieved some significant successes. As with any investment,
a careful analysis should be done before committing to large expenditures. Practical
deployment decisions matter as well: one utility allows workers to use their
PDAs for personal use, thereby giving workers an incentive to protect the devices.

Outsourcing

Much has been written about the potential for introducing “contestability”
into certain business processes of utilities. Functions such as meter reading,
billing, selected maintenance tasks, and IT infrastructure management were early
candidates for being outsourced to third parties. If the outsourcing arrangement
is set up correctly (and this is not easy), the result is potentially lower
costs to the utility with equal or improved operating performance.