Tackling Utility Customer Cost to Serve

Utilities in the U.S. stand to save
considerable expense by evaluating
their customer service operations
from a customer cost-to-serve perspective.
They spend on average $42.50
per year servicing each customer, according
to research conducted by The Gartner
Group in 2006. Shaving a mere dollar off
this cost-to-serve figure can amount to
millions of dollars in annual savings for a
large utility.

Utility customer cost-to-serve is the
sum of costs associated with retailing
energy to the customer, including labor
and technology costs, divided by the number
of customers. It’s distinct from other
utility value chain costs like wholesale
energy and energy transmission and distribution.
Utility customer cost-to-serve
generally includes all costs related to
metering, billing, payment, collections
and customer service (see Figure 1).
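
To make the definition concrete, the sketch below (Python) computes the figure for a hypothetical large utility. The component dollar amounts and the three-million-customer base are illustrative assumptions chosen to land near the $42.50 Gartner benchmark, not data from the article.

```python
# Rough cost-to-serve arithmetic. Component costs and the customer count
# are hypothetical; only the ~$42.50 total and the "one dollar saved equals
# millions" observation come from the article.

retail_costs = {          # annual retail costs, in dollars
    "metering":         30_000_000,
    "billing":          35_000_000,
    "payment":          15_000_000,
    "collections":      20_000_000,
    "customer_service": 27_500_000,
}
customers = 3_000_000     # assumed size of a "large" utility

cost_to_serve = sum(retail_costs.values()) / customers
print(f"Cost to serve: ${cost_to_serve:.2f} per customer per year")    # $42.50

savings = 1.00 * customers
print(f"Shaving $1 off cost to serve saves ${savings:,.0f} per year")  # $3,000,000
```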

Activity-Based Costs vs. Departmental Cost Structures

Breaking down the utility customer cost-to-serve into individual components
is
the first step to savings. This is easier
said than done, however, and most utilities
struggle with accounting cost structures
that do not relate easily to operational
processes.

Since utility retail costs are typically
concentrated in the call center and in
operating the large-scale IT systems that
automate billing and payment processes,
it is not uncommon for utility accountants
to break down cost to serve along departmental
lines: 1) call center IT systems and
telecommunications; 2) call center staff;
3) billing staff; 4) IT staff; 5) customer
information and billing IT systems; and
6) outsourced service costs.

This breakdown is relatively easy to
measure and is helpful to those departments
managing cost with “blunt instruments,”
such as headcount reduction.
Unfortunately, it is of limited value in
identifying cost-saving opportunities and
implementing innovative process improvements
because it doesn’t address the root
sources of cost. For example, consider
the process of taking meter readings and
sending bills to customers. Part of the cost
is the consequence of inaccurate billing,
which includes the cost of fielding customer
inquiry calls. Applying the accountant-
style cost breakdown above, this part
of the billing cost is buried somewhere in
call center staff and systems costs.

Conversely, when a call center agent
fields an inbound customer service call,
such as a question about moving to a different
pricing structure, the agent relies
on access to the customer information
and billing system. So in this case, part of
the cost of delivering customer service
through the call center actually lies in the
cost of providing the customer information
and billing system. In other words, the
call center systems and staff costs alone
do not reflect the complete cost of delivering
customer service.

A superior approach is to break down
cost to serve into key operational processes:
1) metering; 2) billing; 3) payment;
4) collections; and 5) customer service.

With this operational activity-based
breakdown, underlying cost causes
become more transparent and are more
readily measured. For example, it can be
determined that a portion of customer
calls relate to inaccurate bills, which
should be allocated to the billing component
of customer cost to serve. Deploying
a better, more accurate billing system
would translate into fewer calls about inaccurate
bills, thereby reducing call center
volume and reducing overall billing costs.
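
A minimal sketch of that reallocation, assuming a hypothetical call-center budget and a hypothetical share of calls caused by inaccurate bills; the point is simply that the billing-driven slice of call-center cost moves into the billing activity instead of staying hidden in a departmental line.

```python
# Activity-based view: move the billing-driven share of call-center cost
# out of the departmental "call center" bucket and into the "billing"
# activity. All figures are hypothetical.

call_center_cost = 25_000_000          # annual call-center staff + systems
billing_related_call_share = 0.18      # assumed fraction of inbound calls
billing_driven_cost = call_center_cost * billing_related_call_share

departmental_view = {"call center": call_center_cost, "billing": 35_000_000}

activity_view = {
    "customer service": call_center_cost - billing_driven_cost,
    "billing": departmental_view["billing"] + billing_driven_cost,
}
print(activity_view)   # billing now carries the cost its inaccuracies cause
```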

Is Metering a Retail Cost?

When utilities separate their retail and distribution
operations, the meters often fall
under the distribution business, since it
employs the field personnel who maintain the meters.
Conversely, meter-reading personnel are
often identified with the billing department
as the billing process drives the need for
timely meter reading. For calculating utility
customer cost to serve, is metering a
distribution cost or a retail cost?

Utilities in restructured markets, including
those with competitive metering, have
faced this question and, in most cases,
they have determined that metering costs
fall to the retailers, regardless of which
party owns the meters. Therefore, metering
is a component of customer cost to
serve. The Gartner Group reinforced this
conclusion when it included metering as
one of the top-level customer cost-to-serve components in its groundbreaking
2006 benchmark study of utility cost
to serve in the U.S. Likewise, leading
research firm Datamonitor included
metering costs in its studies of comparative
utility customer cost to serve in Britain,
Australia and Europe.

Bad Debt – Part of Cost to Serve

Figure 1: Utility Value Chain Costs Showing Customer Cost to Serve

Several times a year, as a matter of policy,
a utility may write off debts unpaid for
more than 180 days. This can be considered
a financial transaction irrelevant
to customer service and, as such, could
reasonably be excluded from the operational
costs of running the customer service
department. Nevertheless, bad debt
expense is most definitely part of the cost
of doing business with retail customers
and, therefore, a component of overall
customer cost to serve.

The upside of including bad debt in
the cost-to-serve equation is that collections
managers see how their receivables ledger
fits into the bigger picture. For example,
a collections department manager could
achieve savings by reducing headcount,
but it would be a false economy if the
end result is less effective collection of
overdue debt and an increase in bad-debt
write-off. Similarly, utilizing a new technology-
based service to improve collections
performance might increase collections
department costs but also
yield much greater savings by
reducing bad debt. In both of
these cases, a cost-to-serve
approach linking individual
costs to the overall business
process outcome would provide
a sound basis for decision
making.

Percentage Breakdown of Cost to Serve

Figure 2 represents a typical
breakdown of overall cost
to serve for a large North
American utility. Next, we
will walk through each operational
area shown to see how
some true efficiencies can be
achieved.

Metering Costs Likely to Increase

Metering costs include:

  • Regular field meter reading;
  • Special reads/out-of-cycle reads;
  • Meter service, including meter installation,
    removal, change, maintenance and testing;
  • Amortized cost of the meter asset; and
  • AMR/AMI and interval meter reading.

The widespread rollout of interval meter
reading has the potential to increase
costs substantially and, consequently,
to increase the customer cost to serve.
The objective will be to decrease costs
in other parts of the value chain, such as
transmission cost and wholesale costs at
peak times.

Billing Inaccuracy Compounds Costs

The American Public Power Association’s
2005 Customer Service Benchmarking
Study revealed a wide range of billing
inaccuracy rates among its utility
members, ranging from 0.004 percent
to as high as 8 percent. Inaccurate bills
drive up costs in a number of ways: more
complaints to the call center, repeat field
visits to re-read meters, back-office staff
time spent canceling and rebilling and the
additional special-run printing and postage
to send a second bill.

It makes sense to take advantage of
economies of scale in print and mail. There
are print shops that produce billions of
pieces of mail per year; at those volumes,
they can achieve efficiencies beyond even
the largest utility’s bill-printing operation.
But even more substantial cost savings
can often be found by re-examining the
overall billing process and the relationship
between billing and payment and the
consumer. Billing triggers customer calls,
so it follows that billing less frequently
(by moving to bimonthly or quarterly
billing, for example) can reduce call center
volumes. The lower costs of e-billing are
also well-documented.

A major chunk of the billing cost relates
to the number of staff required to operate
billing systems and resolve billing
exceptions. For example, a business task
such as a rate adjustment might require
several staff to spend several days using
one legacy billing system. But a billing
software package with more sophisticated
automation might only require a 10-minute
configuration and validation.

Low-Cost Payment Methods and Channels

The payment cost component of utility
customer cost to serve is an average
across all of the payment methods offered
by the utility and used by its customers.
Utilities in the U.S. have, in general, been
slow to offer the convenience of wide-ranging
payment options, with regulatory
restrictions often cited as the barrier to
implementing more expensive methods
such as credit card payment. Consider the
high number of utility payments still made
by check in the U.S. – up to 80 percent
(UtiliPoint, 2007) – with associated costs
of return postage and remittance processing.
Contrast that figure with other industries
or, indeed, utilities around the world
where, in some cases, 100 percent of payments
are made electronically, avoiding all
postage and remittance processing costs.

There can be wide variations in actual
payment costs across a customer base,
depending on the type of payment
(check, cash, credit card and electronic)
and the payment channel (over the counter,
call center, post, website, agency,
bank and payment service). Another
factor is payment frequency. The annual
payment processing cost for a customer
that pays four times a year using an
expensive channel can be less than the
total payment processing cost for a
customer that uses a cheaper payment
method but pays weekly.
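
A quick illustration of that point, with hypothetical per-transaction costs (the article gives no unit figures): the frequent payer on the cheap channel can still cost more per year than the infrequent payer on the expensive one.

```python
# Annual processing cost depends on both the channel's unit cost and the
# payment frequency. Per-transaction costs below are hypothetical.

expensive_channel = 2.50   # e.g., assisted, over-the-counter payment
cheap_channel = 0.30       # e.g., automated electronic payment

quarterly_customer = 4 * expensive_channel    # pays four times a year
weekly_customer = 52 * cheap_channel          # pays every week

print(f"Quarterly on the expensive channel: ${quarterly_customer:.2f}/year")  # $10.00
print(f"Weekly on the cheap channel:        ${weekly_customer:.2f}/year")     # $15.60
```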

An additional consideration is the relationship
between payment processing
and collections and bad debt costs. Some
payment methods may seem expensive
on the surface but offer substantial
benefits in reduced collection and bad
debt expense. For example, in the case
of credit card payment, the card issuer
bears most of the bad debt risk.

Combating Collections Costs

Figure 2: Cost-to-Serve Breakdown for a Typical Large Utility

It is estimated by information services
company Chartwell Inc. that more than
$1.7 billion in revenue is written off by utilities
in the U.S. every year, an average of
$8.50 uncollected per customer.

In addition, up to 40 percent of call
center agent time is spent on overdue
payment, payment arrangements or collections
activities. Standard utility
practice is to issue reminder letters when
debt reaches a certain number of days
past due, and these reminders
are a major trigger of calls to the
call center. Legacy billing systems
are often not designed to handle
complex modern scenarios, such as
multijurisdictional regulatory constraints
on collections activity or
tailored collection paths for different
segments of a customer base.
So collections efficiency suffers and
bad debt grows, both of which increase
the cost to serve.

Customer Service Costs Reflect System Usability

Absent more sophisticated cost-to-serve
analysis, a utility might simply
divide its customer base by its number
of call center agents to gauge whether it
has a high or low customer service
cost-to-serve component. It is not unusual for
some incumbent utilities running legacy
systems to have 100 call center agents
for every 100,000 customers while new
start-up entrants with better technology
and processes may have as few as 10 to
20 agents serving the same number of
customers.

This was the case for a European
retailer with 2 million customers. The
government forced the retail department
to cut all ties with its distribution
affiliate, which revealed its high customer
cost to serve of more than EUR100/customer.
The newly separated retailer was
forced to reinvent itself from scratch
with advanced systems and business
processes in order to compete effectively
against lean, aggressive new entrants
with much lower cost to serve.

Poor customer information system
usability will also drive up customer service
labor costs. Some systems require
users to traverse 10 or more screens to
complete a common transaction, such as
signing up a new customer, while others
provide a single screen with relevant customer
information consolidated to respond
to 80 percent of calls. The best systems
place a high priority on usability engineering
and support utility-specific customer
relationship management (CRM) functions,
thereby eliminating the need for separate
billing and CRM systems.

Driving Down Cost to Serve With Meter-to-Cash Outsourcing

Certainly utilities can lower their cost
to serve through system and process
improvements in areas of metering, billing,
payment, collections and customer service,
but it is by outsourcing the full meter-to-cash
transaction cycle that utilities can
radically reduce their cost to serve.

There has been tremendous attention
paid to major utilities outsourcing their
customer service operations, such as TXU
to Capgemini Energy and Nicor to IBM. But these
business process outsourcing (BPO) deals
tend to emphasize the financial engineering
aspects in order to yield short-term
financial benefit. Even though service-level
agreements are in place, the metrics used
typically focus on operational basics and
motivate the BPO vendor to save costs in
the near term rather than seek fundamental,
long-term improvement. They tend to
be motivated to do the same things more
cheaply, rather than to examine whether
the same tasks will actually be necessary
as business objectives, the industry and
technology evolve.

Under a new transaction-based model
of outsourcing, however, the outsourcer
owns the technology assets and intellectual
property underpinning its services.
Thus, the outsourcer can combine the
best technologies and processes in ways
that achieve far greater efficiencies and
cost reduction. This new model also provides
an outsource option for transaction
automation while allowing the utility to
retain control of its vital call center customer
interactions.

If utilities are to determine their customer
cost to serve accurately, they
need to approach the equation by looking
at key operational areas, not by dividing
costs according to department. It is from
this perspective that they can then identify
new ways of achieving cost savings
as well as increasing efficiency.
Information technology and the correct
type of outsourcing model are two fundamentals
in helping utilities realize significant
cost-to-serve savings and start
to make a dent in the $1 billion waiting
to be saved.

A Long-Term View of Technology’s Value to Utilities

The rise of wireless data networks, coupled with the evolution of software solutions, has made advances in field automation, project management and collaboration widely accessible to investor-owned utility (IOU) companies. As in many other industries, utilities need to weigh the cost and perceived value of these solutions over time. And that equation has two sides: the value gained from streamlining operations, and the cost to an organization when the technology it relies upon fails.

Once IOUs become dependent on having computers in the field, the pain – and cost – of downtime becomes very real. For this reason, utilities should take a hard look at return on investment (ROI) and total cost of ownership (TCO) in technology decision making.

Defining ROI and TCO

Total cost of ownership is exactly what the term suggests: the total cost to own a product throughout its lifetime. This includes the purchase price, deployment, maintenance and decommissioning.

TCO cannot truly be calculated until the day the technology is retired. However, it is important to remember that keeping costs in control over that lifetime can be far more imperative than keeping costs down at any single point in the TCO equation, including at the time of initial purchase. The secret to making good capital decisions is to use TCO to manage risk and minimize unexpected lifetime costs.

ROI is the relative value a product will bring to an organization throughout its deployment. Maximizing return on investment means getting the greatest advantage possible from resources. However, ROI does not simply equate to recouping the original purchase price of technology.

If you can figure out how much time will genuinely be saved or which goals can be better met with the right technology, ROI will tell you exactly how much value can be delivered beyond any specific dollar figure, like the purchase price. As long as customers are better served, field teams are more efficient and information is gathered more rapidly or more accurately, ROI will continue to accrue.

While TCO tells us how much a solution will really cost and ROI tells us how much value a given technology will bring to our users, both are fundamental to making good decisions.
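
Reduced to arithmetic, the two measures look something like the sketch below. The ROI expression uses one common formulation (net lifetime value over total cost of ownership); the dollar amounts are placeholders, not figures from the article.

```python
# TCO: everything it costs to own the device over its life.
# ROI (one common formulation): net lifetime value relative to TCO.
# All dollar figures are hypothetical.

purchase, deployment, maintenance, decommission = 3_500, 800, 1_200, 150
tco = purchase + deployment + maintenance + decommission

annual_value = 2_400        # e.g., field labor hours saved, valued in dollars
years_in_service = 4
lifetime_value = annual_value * years_in_service

roi = (lifetime_value - tco) / tco
print(f"TCO ${tco:,}, lifetime value ${lifetime_value:,}, ROI {roi:.0%}")
```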

The Impact of Reliability on Total Cost of Ownership

TCO is especially significant in mobile deployments because of their high risk of failure. The decision to embrace computer technology is based on an expectation that solutions will work whenever and wherever needed. Unfortunately that reliability is not guaranteed.

For example, in 2006, Gartner Inc. published a report benchmarking computer failure rates, encompassing any need for some form of hardware repair. The report stated that business notebooks failed 15 percent of the time within the first year, and that by the third year the estimate escalated to 22 percent. PC Magazine, in its annual reader survey (September 2006), reported a 23 percent annualized failure rate in business notebooks.

It’s not clear if these failures are caused by poor quality or by the way people use or abuse their machines, but it’s obvious that this level of downtime can significantly impact both ROI and TCO. Statistics like these underscore the importance of building solutions appropriate for the user environment, and they are one of the main reasons that IOUs opt to deploy more reliable, rugged computers.
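
To see how those failure rates translate into exposure, the sketch below applies the roughly 23 percent annualized rate cited above to a hypothetical fleet; both the fleet size and the cost per downtime incident are assumptions.

```python
# Expected repair incidents and downtime exposure for a mobile fleet,
# using the ~23 percent annualized failure rate cited above. Fleet size
# and the cost of each downtime incident are assumptions.

fleet_size = 500
annual_failure_rate = 0.23          # PC Magazine reader-survey figure
cost_per_incident = 1_000           # assumed repair logistics + lost field time

expected_failures = fleet_size * annual_failure_rate
print(f"Expected failures per year: {expected_failures:.0f}")                     # ~115
print(f"Expected downtime cost: ${expected_failures * cost_per_incident:,.0f}")   # ~$115,000
```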

Downtime impacts operations and customer perceptions because it directly impacts the ability of technicians to:

  • Respond quickly to outages. Techs cannot restore service if they cannot receive work assignments remotely.
  • Recognize trends and determine the source of an issue. The source of an outage can be identified early on when work assignments are delivered in real time to a mapping system on a mobile computer, thereby speeding resolution.
  • Access customer information systems. Insight into customer history helps technicians avoid repeatedly repairing surface issues rather than getting at the core problem.
  • Access maintenance and repair instructions or manuals in real time. A lack of critical information may lead to time-consuming failed repair attempts and costly follow-up visits.
  • Map territory. Utility technicians are dependent upon GIS (geographical information systems) to display maps of utility grids, street maps, the location of power lines and poles, and other assets.
  • Remain remote. Without digital order processing, technicians must manually submit completed orders. When up and running reliably, mobile devices save substantial travel time and fuel costs.

Factor in the Intangibles

The examples above demonstrate that there are obvious connections between technology, operational efficiency and customer service. Fundamentally, if techs can't restore service, IOUs can't bill the customer and drive revenue and shareholder value.

However, less-concrete factors are also important, and intuition and experience are also critical in making a successful buying decision. Any TCO model is best when it includes intangibles. Some product features that don’t directly increase productivity or efficiency, for example, may have a strong impact on the person using the mobile device or on the experience of the person being serviced. For instance:

  • How much will productivity increase if a field tech uses a touch screen instead of a keyboard?
  • If techs can read screens in full daylight, how much time can they save?
  • How does using technology impact overall morale? Employees value being given the right tools to do their jobs – and being trained effectively.
  • How do customers perceive companies that embrace technology to improve service?

Of course, any technology solution must have clear buy-in from employees to be effective. But people generally don’t like change, and in some cases, IOUs will be automating paper processes for the first time. Adoption rates among first-time users can make or break the success of any long-term project.

Invest Up Front in Operational Analysis

No matter the project, it is critical to establish goals and identify the ways in which you can ensure that your TCO remains low and your ROI high.

This operational analysis should include an honest assessment of technology alternatives based on their relative cost and perceived value. In order to be truly effective, it must include financial data, validated by accounting, that support internal resource allocation processes and look ahead to predict future budget requirements.

By looking at these kinds of issues, project teams will have a much easier time convincing their IT and procurement organizations that a purpose-built rugged computing solution will be more beneficial to their organization than trying to press corporate-standard notebooks into heavy-duty service. In fact, it should help prove that taking a one-size-fits-all computing approach can be detrimental and run counter to the goals of process automation.

The goal of any technology implementation should be to ensure that workers are given the right tools to help them be more effective in support of customer service. With these factors in mind, calculating TCO and ROI becomes much easier, and the business case for adopting technology can be as easy as ABC.

Energy Independence Is Cool(er)?

Figure 1: Energy Flow, 2005, in Quadrillion Btu

There is growing interest in the U.S.
both in achieving some increased
degree of energy independence
and in reducing CO2 emissions to minimize
global climate change. While both
are broad “goals,” no plan yet exists to
accomplish either.

Some believe that the U.S. should make
a comprehensive effort to become completely
energy independent, while others
believe that a significant reduction in our
dependence on foreign energy sources
would be worth accomplishing. Similarly,
some believe the U.S. should ratify the
Kyoto Protocol, while others believe that
CO2 emissions must be far more drastically
reduced to stabilize CO2 concentrations
in the atmosphere.

Those who believe that atmospheric
CO2 concentrations must be stabilized
at or below 500 ppm to avoid irreversible,
catastrophic global climate change
estimate that this would require approximately
a 75 percent reduction in global
annual anthropogenic CO2 emissions by
2050. Achieving this level of global CO2
reductions on a per capita basis would
require a reduction of approximately 95
percent in U.S. CO2 emissions over the
same period.

Following is a conceptual plan that
would meet this reduction and, in the
process, allow the U.S. to become completely
independent from foreign sources
of energy. It is intended merely to demonstrate
the feasibility, expansive scope
and massive costs associated with the
above-stated goals, not to accept or
endorse them. It also does not suggest
that this conceptual plan is the only
approach; merely that it represents one
possibility.

The fundamental assumptions underlying
the development of this conceptual
plan are as follows:

  • U.S. population will continue to grow,
    reaching about 500 million by 2050;
  • U.S. per capita energy consumption
    will remain relatively stable throughout
    the period;
  • All current U.S. electricity generation
    capacity, with the exception of hydroelectric
    generation, will have reached
    the end of its useful life by 2050;
  • All current U.S. energy-consuming systems
    and equipment will have reached
    the end of their useful lives by 2050;
  • Technology that is not on the path to
    achieving the plan goals will not be
    implemented; and,
  • New technologies will continue to be
    developed and implemented throughout
    the period.

Each of these assumptions is subject to
challenge; some are subject to legislative
and regulatory interventions.

Stationary Sectors

Figure 2: Generation Investment

The U.S. currently has an electric generation
capacity of approximately 950 gigawatts,
with a capacity reserve margin of
~17 percent, serving a population of ~ 300
million. Figure 1 illustrates energy flow
in the U.S. All but ~100 gigawatts of this
capacity (hydro and pumped hydro) will
need to be replaced by 2050. In addition,
the U.S. will require additional capacity
of ~ 600 gigawatts by 2050 to meet the
electric demands of its ~500 million population.
This means that a total of ~ 1450
gigawatts of new and replacement generating
capacity must be sited, designed,
approved, financed and constructed over
the next 43 years. This requirement could
be reduced by a combination of demand-side
management, efficiency improvements
and conservation; however, no
national goal quantifies this potential and
no national plan exists to accomplish it.
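
The ~1,450-gigawatt requirement follows directly from the figures above:

```python
# New and replacement generating capacity needed by 2050 (gigawatts),
# using the figures stated in the text.

current_capacity = 950
retained_hydro = 100        # hydro and pumped hydro, not replaced
growth_for_500m_people = 600

new_and_replacement = (current_capacity - retained_hydro) + growth_for_500m_people
print(new_and_replacement)  # ~1,450 GW
```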

No technology has yet been demonstrated
at near-commercial scale that can
accomplish permanent fixation of CO2.
Therefore this plan assumes that all new
central generation, other than renewable
generation, will be nuclear until permanent
fixation of CO2 has been demonstrated.
This plan further assumes that renewable
generation sources will provide 20 percent
of total generating capacity in 2050, or
~ 300 gigawatts. The renewable share of
generating capacity could be substantially
larger if the technology to exploit dry hot
rock geothermal resources, ocean thermal
energy conversion or wave energy generation
becomes economically practical. The
renewable share could also be substantially
greater if large-scale electric storage
technology becomes commercially available
and economically feasible early in the
planning period, thus expanding the applicability
of both solar and wind generation.

Assuming a three-year design and
approval cycle and a 10-year construction
and commissioning cycle for nuclear generation,
this would require the completion
and commissioning of ~40 gigawatts of
new central generation capacity each year,
beginning in 2020. All of this capacity construction
would have to occur, regardless
of any effort to reduce CO2 emissions, to
meet growing demand for electric power.
However, the mix of generating technologies,
including the fraction of renewable
generation, would be affected by the
efforts to control climate change.

In addition, all residential, commercial
and industrial end uses of natural gas,
propane and fuel oil would be replaced
by electricity or renewable fuels, assuming
that biofuels can be produced and
consumed with zero net CO2 emissions
(ZNE). It is highly unlikely that CO2 fixation
technology will be developed sufficiently
within the planning horizon to make
distributed CO2 capture and fixation economic.
Replacing these direct fuel uses would involve the construction
of an additional ~750 gigawatts of electric
generation over the planning period.

Finally, all transportation use of gasoline
and diesel fuel would be replaced
with some combination of electric power,
electrolytic hydrogen and ZNE renewable
fuels. This would require the construction
of an additional ~750 gigawatts of electric
generation, the equivalent useful energy
capacity of ZNE biofuels, or some combination
of both sources.

Therefore, if we assume that a combination
of advanced nuclear generation
technology, increased design commonality,
expedited siting and design approval,
uninterrupted construction and expedited
commissioning results in an installed capital
cost for nuclear generation of $1,500
per kilowatt, the generation investments
required are shown in Figure 2.
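
Figure 2's totals are not reproduced in the text, but the scale can be sketched from the capacity figures and the assumed $1,500-per-kilowatt installed cost. The transportation line below takes the all-electric case (~750 gigawatts) rather than the biofuel alternative, so treat it as an upper-bound illustration.

```python
# Back-of-envelope generation investment at the assumed $1,500/kW
# installed cost. Figure 2's actual totals are not shown in the text;
# this only illustrates the scale implied by the numbers above.

cost_per_kw = 1_500
kw_per_gw = 1_000_000

def investment(gigawatts):
    return gigawatts * kw_per_gw * cost_per_kw

stationary = investment(1_450)           # replacement plus growth, ~$2.2 trillion
fuel_switching = investment(750)         # electrified gas/propane/oil end uses
transport_upper_bound = investment(750)  # all-electric transportation case

total = stationary + fuel_switching + transport_upper_bound
print(f"~${total / 1e12:.1f} trillion for generation alone")   # ~$4.4 trillion
```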

Regulatory delays could easily increase
these investment requirements, as they
did during the construction of the existing
U.S. nuclear fleet. Once the technology for
permanent CO2 fixation has been developed
and demonstrated, new coal generation
facilities could also be included in the
generation mix.

The amounts in Figure 2 do not include
estimates of the investments required in
nuclear fuel processing and reprocessing
to support these nuclear plants. They also
do not include any estimate of the investments
needed to reinforce and expand
the electric transmission and distribution
infrastructure. Assuming that the current
relationship between generation investment
and transmission/distribution investment
persists in the future, these investments
would be on the order of $1 trillion.

The sale of natural gas-, propane- and
fuel-oil-powered residential and commercial
appliances and equipment would be
prohibited after 2020. This would permit
them to serve out their normal useful lives
and be replaced by electric appliances
and equipment, or refitted to burn ZNE
biofuels by 2040. This would then permit
the natural gas pipeline system to be refitted
or replaced to transport compressed
hydrogen and the propane pipeline system
to be refitted for biofuels transportation.

Some gas and liquid pipeline capacity
would still be needed to serve chemical
manufacturing plants and fertilizer industry
needs. It is assumed these industries
would have sufficient scale to apply permanent
CO2 fixation technology similar to
that used on coal-fueled power generation.

No attempt is made here to estimate
the cost of replacing customer-owned
direct natural gas, propane and fuel-oil
burning equipment, since this equipment
would be expected to require replacement
during the planning period. Replacement
with high-efficiency electric appliances
would involve relatively modest incremental
investment in most cases.

Transportation Sectors

Accomplishing a 95 percent reduction
in U.S. CO2 emissions would require the
complete replacement of petroleum-based
fuel systems with ZNE engines and electric
motors, since it is highly unlikely that
permanent CO2 fixation technology will
be applicable to vehicles, at least during
the planning period. The vehicle fleet will
consist of electric vehicles, hybrid and
plug-hybrid vehicles, fuel cell vehicles
and engine-driven vehicles. Vehicle fuels
would include hydrogen and renewables,
such as ethanol and bio-diesel, assuming
that the renewable fuels can achieve ZNE.

Based on a useful life of ~20 years for
transportation vehicles, no new gasoline,
E-85 or petro-diesel vehicles would be
available for sale after 2020, allowing the
vehicle fleet to be retired at the end of its
useful life and replaced with ZNE vehicles
by 2040. The petroleum refining infrastructure
could then be largely retired, and
the pipeline transportation infrastructure
could be refitted for the transportation of
renewable liquid fuels. The petroleum marketing
infrastructure could also be completely
refitted for the sale of renewable
liquid fuels and compressed hydrogen.

The investment in electrolytic hydrogen
production facilities to produce the hydrogen
required for vehicle fuel would range
from $1 trillion to $3 trillion, depending on
the portion of the market using hydrogen
fuel cells or hydrogen-fueled engines and
the types of vehicles in the H2 vehicle mix.
The investment in distribution and marketing
facilities is also estimated to be in the
$1 trillion to $3 trillion range.

No attempt is made here to estimate
the cost of replacing transportation vehicles,
since they would also need replacing
during the planning period. At that time,
they would be replaced with ZNE vehicles.

Related Issues

The U.S. is already experiencing regional
water supply shortages at its current population of ~300 million. These
shortages will be exacerbated by the
anticipated population increase. Nuclear
electric generation facilities located near
the coasts or offshore would have access
to virtually unlimited supplies of salt or
brackish water for power plant cooling, as
well as for desalination and the separation
of hydrogen for use as vehicle fuel. The
incremental investment required to use
surplus off-peak electricity to produce the
additional potable water required by the
U.S. population in 2050 is on the order of
$1 trillion.

Assuming the average load factor on
the U.S. generation fleet does not increase
significantly above current levels, surplus
generation capacity could be used off-peak
to provide both potable water and
hydrogen. The investment required in
water purification and hydrogen production
facilities is difficult to determine,
because it is a function of the cooling
water requirements of the new power
generation facilities and the relative competitiveness
and market acceptability of
hydrogen as a vehicle fuel.

Summary & Conclusions

This cursory analysis suggests that it
would be technically possible for the U.S.
to transition to full energy independence
by 2050, using technology to reduce U.S.
CO2 emissions by ~95 percent. The total
investment cost of this transition would be
on the order of $10 trillion over the period.
This estimate is based on data from a variety
of sources within the U.S. government,
including the Energy Information Administration
and the national laboratories; and,
in many cases, is based not on the current
cost of the approaches identified but on
projections of future costs. The total costs
could be substantially higher if the anticipated
technology improvements and cost
reductions fail to occur or occur later in
the planning horizon. Approximately 75
percent of this investment would be incremental
to a business-as-usual scenario.
This investment is approximately equivalent
to the U.S. annual gross domestic
product; thus it would require investment
of about 2 percent of GDP per year over
the planning period.
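
The "about 2 percent of GDP per year" figure is consistent with simple division, taking mid-2000s U.S. GDP as roughly $13 trillion (an assumption; the article gives only the $10 trillion total and the planning horizon):

```python
# Rough check of the ~2-percent-of-GDP-per-year claim. The GDP figure is
# an assumption; the $10 trillion total and ~43-year horizon come from
# the article.

total_investment = 10e12
years = 43                  # roughly 2007 through 2050
us_gdp = 13e12              # assumed mid-2000s U.S. GDP

annual_share = (total_investment / years) / us_gdp
print(f"{annual_share:.1%} of GDP per year")   # ~1.8%, i.e., about 2 percent
```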

Achieving this transition would require
an immediate decision to proceed toward
these goals. It would also require federal
and state actions to accelerate facility
siting approval and regulatory review
and approval of design, construction and
commissioning. While this is certainly possible
and has been accomplished in other
countries, it would represent a major
shift for the U.S. The timely siting of the
new power generation facilities will be
perhaps the greatest political challenge,
in light of the NIMBY (not in my backyard)
and BANANA (build absolutely nothing
anywhere near anyone) issues that have
historically plagued the power industry in
general and the nuclear power industry in
particular.

One potential approach to siting nuclear
generation, hydrogen production and
water desalination plants is to develop
floating, offshore facilities installed on
barges anchored behind massive artificial
breakwaters. This concept, first
“floated” by Westinghouse and Tenneco
in the 1970s, would involve construction
of “cookie cutter” facilities on the barges,
which would be towed into position and
placed in service. In today’s world, these
floating power parks would require the
protection of anti-ballistic missile and antisubmarine
warfare installations to protect
them from attack. One of the benefits of
this approach is reduction of the costs
associated with the one-off design of these
facilities, as well as reduction in the time
and cost of construction and inspection.
Siting would likely be a major political
issue, as evidenced by the resistance to
liquefied natural gas terminals, oil production
platforms and even wind farms offshore
but within sight of land.

It’s questionable whether the political
will exists to make this massive investment
in this time frame. If stabilizing CO2
concentrations is the goal, however, this
conceptual plan represents at least one
approach to getting there.

INTERVIEW: Alan H. Richardson

Alan H. Richardson, President and CEO, American Public Power Association

Energy & Utilities: Let’s start by taking a look at the
state of the electric utility industry marketplace as it relates to congressional
debates over reducing carbon dioxide emissions.

Alan Richardson: Congressional action on climate change will
have a huge impact on the electric utility industry. Most members of APPA
acknowledge the scientific reality that human activity contributes to climate
change as well as the political reality that Congress is going to deal with
the issue in some fashion. The APPA has endorsed a set of principles that we
think must be included in any legislation that Congress enacts. For example,
we believe that any legislation should apply to all sectors of the economy;
place an enhanced and economywide focus on energy efficiency; consider the financial
impact on consumers; preserve a diverse mix of fuels for the generation of electricity;
and take into account the impact on the economy and jobs within the U.S.

There are a few options that Congress
might use to limit greenhouse gas emissions.
One is the cap-and-trade approach, where the
government would set an allowance of a number
of metric tons of carbon that could be
emitted into the atmosphere, and then if you’re
short you buy allowances from others, and if
you’re long you sell. Another alternative is to
impose a tax on carbon that ramps up over
time until it's more painful to pay the tax than
it is to find alternatives that reduce emissions.
Straw polls of my members suggest they
find a tax approach much easier to administer
and more equitable. And it’s more compatible
with the idea that you need an economywide
program that doesn’t put all the burden on one
sector of the economy, electric utilities, just
because that’s the easiest one to deal with.
The method Congress selects will affect the
industry in a couple of ways. If there’s a carbon
tax, costs will increase to reflect this fact.
It’s going to increase the cost of doing business
for every utility that generates electricity
using fossil fuels. But it will also affect other
segments of the economy, such as individuals
who drive their automobiles and have to pay
a little bit more for gasoline. If Congress
selects a cap-and-trade approach, we need to
create a new market for emission allowances.
For electric utilities in some regions, this market
will be superimposed on top of complex
wholesale electric markets set up by regional
transmission organizations or independent
system operators.

We’ve been very concerned about the
dysfunctionality of those wholesale markets
and the relative ease with which they can be
manipulated. More important, the prices we’re
seeing in those markets are not what we would
expect to find in truly competitive marketplaces.
For example, the last unit of generation
needed to meet electric load at any point
in time sets the market clearing price for all
other generation sold at that time. If emission
allowances must be purchased in order to
generate that last bit of electricity, then the
cost of those allowances will increase the
cost of all other generation. So tying back to
the climate change issue, one of the many
things that concern us about the cap-and-trade
approach is the superimposition of a new market
for carbon emissions on top of these organized
wholesale markets, making the whole
process that much more complex, difficult to
manage and more costly for all consumers.

E&U: What has restructuring achieved in terms of lowering
prices and providing incentives to invest, and how has restructuring affected
American businesses and consumers in terms of prices and access to a reliable
supply of electricity?

AR: There were great promises of lower rates, better service
and more innovation through restructuring the markets and getting the regulators
out of the way. Over the past several years, a number of studies have purported
to show that, indeed, those results have been achieved. Some of these studies
show billions of dollars of savings for consumers. My members haven’t seen those
savings. In fact, if anything, they are experiencing higher costs for wholesale
power. The APPA commissioned Northeastern University Professor of Economics
John Kwoka to look at the validity and accuracy of those studies and the conclusions
reached. His analysis found that none of the studies withstood scrutiny and
that policy makers should not be relying upon those results when they decide
whether consumers are better or worse off under restructuring.

As far as reliability, some observers
have suggested that restructuring has
adversely affected operational reliability
because power generators are
focused almost exclusively on profits.
There may be some truth to that, but
there is another problem, particularly in
these organized markets, and that is the
failure of investors to come forward to
build the new generation or transmission
infrastructure necessary as we move
further into this century. The markets
simply have not been providing the kind
of incentives to investors that were predicted
at the outset, and now we’ve got
concerns about whether there will be
enough generation to meet tomorrow’s
needs. Therefore, public power systems
that historically relied upon the market
are now realizing that it isn’t producing
what they thought it would and what the
economists thought it would, and there
is a tremendous push within my membership
to move forward with building new
generation on their own.

E&U: How did Congress address the electricity marketplace
in its umbrella Energy Policy Act of 2005?

AR: It did a number of things, some of which are perhaps a
little contradictory. You can look at certain parts and say the Energy Policy
Act supported the promotion of competition in the electric utility industry,
at least in the wholesale markets, while other parts of the Act suggest Congress
isn’t ready to rely solely on competition to set rates and protect consumers.

Congress repealed the Public Utility
Holding Company Act on the assumption
that it was an impediment to competition
and investment in new facilities. We’re
not necessarily seeing investment in new
facilities now, but instead, we’re seeing
acquisition activity among entities that
previously would not have been allowed
to enter the electric utility industry in the
first place. Consider the private equity
firms that are attempting to purchase
Texas Utilities (TXU). These venture
capitalists see opportunities to acquire
utilities. The typical VC pattern is “strip
and flip”: buy companies, strip them and
sell them at a profit. This may be profitable,
but it is not suitable for an industry
as critical as the electric utility industry.
Repealing the Holding Company Act has
allowed introduction of new players into
the electric utility industry whose interests
are not likely to be consistent with
the best interests of electric consumers.

E&U: What is FERC doing to meet its congressionally mandated
mission to ensure electricity rates are in fact just and reasonable?

AR: Well, they recognize that they have this responsibility,
but it’s not clear whether they are exercising it to the fullest extent possible.
We would like to see them be more active in examining the relationship between
prices paid for power on the wholesale market and the actual cost of production.
The commission is assuming that competitive wholesale markets will produce results
that meet the statutory definition of “just and reasonable.” If we were dealing
with widgets or oranges, that might well be the case. Competition usually does
produce that kind of result for those kinds of commodities. But the electricity
marketplace is fundamentally different. Electricity isn’t really a commodity;
it’s more of a phenomenon. It is a necessity for which there is no substitute,
and there is a limited number of sellers. It is difficult to rely on competition
under these circumstances. The commission needs to be looking at this more carefully,
and we have been encouraging it to do so.

E&U: Are higher fuel costs the explanation for higher
electricity rates?

AR: That’s another one of the arguments that has been used
to justify the increased prices being charged in RTO- and ISO-managed wholesale
markets. We have taken a close look at this issue. It’s clear that fuel costs
have risen and that rising fuel costs do increase the cost of generating electricity.
But it’s not clear that higher fuel costs alone are responsible for the higher
prices that are being paid, particularly in the organized markets. As I mentioned
earlier, the methodology for setting the price in these organized markets is
a single-bid clearing price auction – the last generator to be dispatched to
meet the load within a particular area sets the price for all other generators.
High-cost natural gas-fired generation is frequently the last generation unit
dispatched and the cost of that generation sets the price for lower-cost generation
from coal or nuclear plants.
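
A minimal sketch of the single-clearing-price mechanism Richardson describes, with made-up bids and load: the last unit dispatched sets the price paid to every dispatched unit, which is why an allowance cost added to the marginal gas-fired bid flows through to all generation.

```python
# Single-clearing-price auction: dispatch the cheapest offers first; the
# marginal (last-dispatched) unit sets the price paid to every unit.
# Bids and load below are made up for illustration.

bids = [  # (generator, capacity in MW, offer in $/MWh)
    ("nuclear", 1_000, 20),
    ("coal", 1_500, 35),
    ("gas_peaker", 800, 90),   # add an allowance cost to this offer and the
]                              # clearing price rises for every unit sold

load_mw = 2_800
dispatched, remaining = [], load_mw
for name, capacity, offer in sorted(bids, key=lambda b: b[2]):
    if remaining <= 0:
        break
    take = min(capacity, remaining)
    dispatched.append((name, take, offer))
    remaining -= take

clearing_price = dispatched[-1][2]
print(f"Clearing price: ${clearing_price}/MWh paid to all dispatched units")
```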

E&U: What lessons are we learning from Europe about its
marketplace approach to CO2 reduction, and how do we apply those lessons to
U.S. policy to hold down electricity costs?

AR: I hope we’re taking time to learn lessons from Europe.
They’ve employed a very complicated cap-and-trade approach, and there’s a great
deal of momentum here in the U.S. behind cap-and-trade too, particularly on
Capitol Hill. There are about 12,000 emitting sources that are subject to the
European Union (EU) cap right now, accounting for about 50 percent of carbon
emissions. That leaves another 50 percent that are not covered by the cap-and-trade
program. It’s hard to slow, stop and reverse emissions of greenhouse gases if
you are only dealing with half of total emissions. We can avoid this problem
in the U.S. if Congress endorses our principle that any legislation must cover
all emitters in all sectors of our economy.

Another problem in the EU is that
there’s no single monitoring or enforcement
source to make sure those with allocations are abiding by the allocations, that
they are emitting no more than allowed
to emit by law or that they are purchasing
allowances when necessary. There were
also too many allowances given out in
the EU when the cap-and-trade approach
was first introduced, meaning they were
allowing more emissions than they should
have and the price fluctuated wildly as a
consequence. And there were some costly
mistakes for consumers. For example,
some of these allowances were given away
free to electric utilities that turned around
and calculated the price of the emissions
in the marketplace and added that price
to the power they were selling to consumers.
They got a windfall at the expense of
consumers. If Congress does embrace a
cap-and-trade approach, it should protect
against behavior such as this.

E&U: It sounds like we can learn from the mistakes of
the EU experience as well as any smart things they’re doing.

AR: Many people say this whole cap-and-trade approach is similar
to what we used for controlling sulphur dioxide several years ago. You’ll hear
the argument that we did it for acid rain, we can do it to reduce greenhouse
gas emissions to address climate change. But with acid rain, we had a relatively
discrete number of emitting sources, mainly stationary power plants. We had
technology available for putting things on the end of the combustion cycle that
would capture the SO2, and we’ve been able to monitor how well that technology
works. Plus, we had the EPA, a federal agency that was able to enforce the emissions
limits. Even under these conditions, however, some companies tried to game the
system.

As mentioned, the EU has a multitude
of emitting sources. The U.S. will have to
deal with the same problem if we go to
a cap-and-trade approach with CO2. We’ll
have thousands and thousands of emitting
sources and no clear way of ensuring that
they’re abiding by the rules. Then we have
the question of how those allowances will
be allocated. You can bet the infighting
will be fierce in Congress as various industries
scramble to make sure they get their
allowances. But since there will be an absolute
limit on the number of allowances, for
every winner, there will also be a loser.

E&U: As it legislates CO2 policy, what kinds of considerations
must Congress take into account in relation to the electric power grid and how
the marketplace is currently organized?

AR: Energy drives our economy. But we don’t yet know how to
capture and store carbon on a commercial scale, and there are many, many questions
that must be answered. Environmental policy and technology must proceed in tandem.
Congress shouldn’t set emission targets that simply cannot be met. Much research,
development and deployment of potentially promising technologies will be required
in our search for answers. Congress should ensure adequate funding for these
activities. Congress should measure alternative means of mandating emission
reductions, based on how simple, efficient and equitable they are. From my personal
perspective, this suggests a greenhouse gas tax rather than a cap-and-trade
approach. And as far as the electric power marketplace is concerned, Congress
should take into account what the consequences for consumers will be if they
do decide to superimpose an emission allowance market on top of the RTO- and
ISO-managed wholesale electric markets.

The Emergence of the Intelligent Utility: A New View of Service Delivery

Utilities encountered a decade of
challenges in the 1990s – utility
diversification, attempts to adjust
to emerging regulatory opportunities and
distractions from extended investment
strategies. Consequently, in the early
2000s, many of these utilities began to
reevaluate their business practices and
followed a “back to basics” approach
to recover from the failed strategies of
the earlier era. While the elimination of
noncore business activities may have
helped to shore up those utilities’ financial
condition, the strategy proved ineffective
in addressing the core challenges of an
aging infrastructure, aging workforce and
much-needed gains in productivity.

Now utilities are searching for and
embracing new strategies for managing
assets and performing work – strategies
that are pragmatic enough to
deliver results today, yet forward-thinking
enough to provide continued leverage in
the future. The aim is to achieve service
transformation, to completely make over
the methodology and infrastructure supporting
the delivery of services to customers.
Many are beginning to look for
new ways to create an intelligent utility
platform – an intelligent infrastructure,
if you will – to drive improved productivity
and achieve better business results.
The results of this strategy validate the
fact that harnessing an emerging and
maturing intelligent infrastructure can
help utilities employ new strategies and
technology and reach the goal of service
transformation.

Back to Basics – It’s Not Enough

The back-to-basics trends of the past
decade were survival tactics deployed
in the face of increasing cost pressures,
which have created a constant struggle
to satisfy escalating market expectations.
Mandates to cut costs, coupled with the
need to improve infrastructure and customer
satisfaction levels, have created
seemingly conflicting objectives. As in
most industries, the utility cost structure
as a whole has continued to increase, adding
fuel to this already volatile equation.
Labor costs, long recognized as a major
portion of the service delivery expense,
have continued to rise while labor resources
have become scarcer, creating
still more concern. Transportation costs
also increased well above historical
highs and must be included in the growing
cost spiral.

In addition to these tangible impacts
on cost, more elusive “relationship” elements
have begun exerting pressure on
utilities. Investor-owned utilities have had
to deal with the high expectations of Wall
Street, demanding 5 to 6 percent profit
growth, while organic growth remained at
less than 3 percent in most areas. Moreover,
the entanglement of the customer
relationship with the regulatory relationship
became critically important as many
utilities found it essential to seek rate
changes.

It has become clear that the need to
meet the regulatory and customer goals
for greater reliability, improved levels
of service and stable rates has further
increased the stakes for transformation. In
short, these changes cannot be achieved
via a simple back-to-basics strategy.

But solving this conundrum requires
new solutions and strategies to transform
the way service is delivered. The outcome
must impact the three most critical
aspects and financial-consequence areas
of the utility business – its assets, customers
and workforce. Fortunately, there is a
solution to drive change and yield results.

The Intelligent Infrastructure

Figure 1: The Intelligent Infrastructure

A new paradigm is changing the way
assets are managed and work is executed.
It is enabled by an emerging and maturing
Intelligent Infrastructure (see Figure 1).
This intelligent infrastructure correlates
with the concept of the intelligent grid.
The intelligent infrastructure links IT software
systems to the tangible assets in the field.
It enables physical tracking of vehicles and people,
provides sensing capabilities that alert the utility
when assets are failing, and includes
communication capabilities that interconnect
the elements and systems within
the enterprise.

Being able to leverage these technologies
is a recent phenomenon, but it is also
a realistic solution. Moreover, the cost
provides a reasonable return on investment.
The data-enabled 800 MHz radio
systems are relics of the utility industry’s
past; complete intelligent infrastructures
are the tools of today’s savvy management
team. We now have extensive tools
that can directly impact the efficiency of
any work process. These tools support a
new strategy for how we manage assets
and deploy our workforce – appreciably
impacting the results we achieve.

Addressing the Challenges – The Business Case for Service Transformation

So what exactly is involved in service
transformation? What must a utility
do to become an “intelligent utility”?
The underlying business case for service
transformation – harnessing the
intelligent infrastructure, embracing a
new strategy and employing solutions
designed to enable execution of this strategy
– can be illustrated by looking
at several interrelated areas:

  • Sweating the Assets – An Aging
    Infrastructure
  • Capturing Business Knowledge –
    An Aging Workforce
  • Creating a Platform for Consolidation
    – The Enterprise View
  • Improving Worker Productivity –
    Enterprise Workforce Management
  • Coupling People and Parts – The
    Fusion of Supply Chain and Work

Sweating the Assets – An Aging Infrastructure

The utility industry is clearly dependent
on a highly distributed yet interdependent
infrastructure to deliver the utility commodity.
The health of that generation,
transmission and distribution network is
a key determinant of the business results
achieved. For example, research shows
that the industry as a whole is based on
an aging infrastructure, where the average
age of transformers is 38 years – a
significant statistic in light of their typical
40-year design life. In fact, the recurring
joke in the industry is that most of the
nation’s power distribution systems are
now eligible for AARP membership. Compound
this with documented studies indicating
that the failure rate on transformers
escalates to 50 percent at 50 years,
and the stage is set for disaster.[1]

While simply replacing this infrastructure
is an obvious answer, it is clearly not
practical and it’s much too expensive.
Modernization must be combined with
strategies for “sweating the assets” in
order to extend their useful life – on an
individual component basis. Specialized
software is at the heart of the new maintenance
strategies and techniques that
will let utilities walk the tightrope. This
approach allows for the network to be
modernized at a robust pace and takes
fullest advantage of components already
in place – while continuing to ensure safe
and reliable power.

Capturing Business Knowledge – An Aging Workforce

Utilities must also acknowledge that
the workforce itself is aging. In fact,
statistics show that the median age
of employees in the utility industry is
higher than in other lines of work, and a
noteworthy spike exists in the 45 to 54
age group.[2] These facts point toward
a looming crisis for labor replacement
and knowledge retention, creating an
urgent need to standardize practices
and capture knowledge in systems. The
classic utility culture relies heavily upon
“lore” passed down through mentors and
trainees. It’s also based on the traditional
expectation of a stable workforce. In
today’s work environment, however, it’s
no longer safe to count on an ongoing
supply of lifetime and intergenerational
employees. New workers must have the
knowledge-bearing tools to guide them
through processes at experienced levels.
As noted by one leading utility expert,
“The system needs the benefit of 20
years of experience, not the worker.” It’s
a daunting prospect, but this forecasted attrition should serve as a catalyst for change, driving technology-enabled productivity gains that offset hiring needs.

Creating a Platform for Consolidation – The Enterprise View

Figure 2: Elements of Effective Enterprise Workforce Management

Consolidation is yet another emerging
trend in the market. Consolidation opportunities
and drivers exist at several levels.
One is the opportunity to expand systems
to break down traditional silos and take
an enterprise view of the service delivery
functions spanning customers, assets and
workforce. The norm of siloed information,
with each organization having its
own systems, is giving way to an enterprisewide
view that yields significantly
improved results and better performance.

For example, a December 2006 Aberdeen
study shows that standardized,
enterprisewide, proactive maintenance
processes not only increase asset uptime
and availability and asset productivity
(as a percentage of capacity) over ad
hoc maintenance processes, they also
decrease service and maintenance costs
as a percentage of revenue. Furthermore, utilities are seeing these results to a greater extent than other asset-intensive
industries because they are twice as likely
to use standardized, enterprisewide maintenance
processes.[3]

These types of results are above and
beyond the typical impact of simply eliminating
redundant systems. However, the
key is having a single platform capable
of being configured to meet the business
needs of multiple users.

Another element of the consolidation
trend is to enable effective post-merger
assimilation. There is a unique source of
business value in having highly scalable
systems that can absorb multiple companies into a single entity. Thus, utilities commonly pursue consolidation in order to:

  • Deploy applications on an enterprise
    basis to handle “any work type, anywhere,
    by anybody”;
  • Standardize business practices and
    streamline processes;
  • Create supply chain efficiencies;
  • Reduce integration complexity; and
  • Focus on fewer, more strategic
    vendors.

Improving Worker Productivity – Enterprise Workforce Management

Improving worker productivity is a common
goal, but it can be tricky to quantify
the gains realized. One place where tangible
benefits can be rapidly achieved is in
the area of enterprise workforce management.
The problem is easy to define: the typical utility field worker is on the job and able to perform productive work only 1.6 to 2.8 hours per day. This represents an enormous opportunity for immediate productivity improvements in the range of 20 to 35 percent (a back-of-the-envelope illustration follows the list below). Realizing these gains requires three interrelated capabilities, shown in Figure 2:

  • A resource management platform capable
    of managing workforce availability;
  • An assignment tool to automate the
    optimal distribution of work across the
    available workforce; and
  • A mobile data infrastructure to feed the
    work to the appropriate technician.
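
As a back-of-the-envelope illustration of the productivity figures above, the short sketch below (in Python, assuming an eight-hour paid day, which is an assumption added for the example) shows how a 20 to 35 percent gain translates into additional productive hours per day.

    # Back-of-the-envelope check of the productivity opportunity cited above.
    # Assumption (illustrative only): an 8-hour paid day, with "productive"
    # hours meaning time actually spent on work orders (wrench time).
    PAID_HOURS = 8.0

    def utilization(productive_hours):
        """Fraction of the paid day spent on productive work."""
        return productive_hours / PAID_HOURS

    for baseline in (1.6, 2.8):            # today's range cited in the text
        for gain in (0.20, 0.35):          # the 20-35 percent improvement range
            improved = baseline * (1 + gain)
            print(f"baseline {baseline:.1f} h/day ({utilization(baseline):.0%}) "
                  f"-> +{gain:.0%} = {improved:.1f} h/day ({utilization(improved):.0%})")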

In the past, these tools have been limited
in several ways.

First, the capabilities have only been
deployed against individual work groups.
Therefore, sharing of resources across
the organization has been hampered, as
technicians cannot be easily redeployed
to respond to the ebb and flow of business
needs – regardless of work type (e.g.,
construction, inspection and maintenance,
repair or service). Work assignment is thus based not on skills and work proximity, but on arbitrary organizational and system silos.

Second, past deployments have not
leveraged significant advancements in the
area of optimization technology, which
facilitates the assignment of work to
the right technician, with the right skills,
parts and tools to do the job. Optimal
assignment yields huge savings, as this
minimizes drive time and increases productivity.
But, it’s a problem that cannot
be easily solved without the aid of technology.
Consider that 10 jobs can be order-sequenced in more than 3 million different
ways. Each yields a different business
result in terms of efficiency and effectiveness.
Utilities deal with hundreds, if not
thousands, of orders per day across tens,
if not hundreds, of technicians; clearly,
the correct assignment and sequencing of
work can yield phenomenal benefits.
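
The combinatorics behind that claim are easy to verify: the number of ways to sequence n jobs is n factorial. A minimal check in Python:

    import math

    # The number of possible orderings (sequences) of n jobs is n! (n factorial).
    for n in (5, 10, 15):
        print(f"{n} jobs can be sequenced in {math.factorial(n):,} different ways")

    # 10 jobs -> 3,628,800 orderings, consistent with the "more than 3 million"
    # figure above; at 15 jobs the count already exceeds one trillion.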

Third, all of this potential benefit is lost
without the ability to communicate reliably
with the field. No plan is immune to
the test of reality once the truck rolls out
of the yard. Thus, it is essential to harness the intelligent infrastructure to react to changes, so that work can be redeployed to the right resource without being constrained by a lack of connectivity in the tools.

Coupling People and Parts – The Fusion of Supply Chain and Work

One of the most dramatic areas for
productivity improvement is often overlooked.
It is the ability to link people and
parts, fusing the supply chain aspects of
the business with the work management
aspects. Most work requires specific
parts matched with skilled labor for the
job to be completed correctly. Moreover,
an effective asset management strategy
must take into consideration both supply
and demand.

Spare parts often comprise 80 percent
of the purchasing department’s transaction volume in a utility maintenance
environment,
yet the primary reason for delay
in completion of work assignments is the
lack of necessary parts. Supply chain
systems designed to be companions to
financial systems, rather than work management
systems, do not have the needed
functionality, process orientation or work
flows for achieving optimal asset life cycle
management results.

A major Tier 1 utility in North America
recently quantified the importance of recognizing
and implementing this strategy.
The utility found that linking people and
parts in its asset life cycle management
enabled the following benefits:

  • Energy delivery headcount reduced
    from 326 to 217 employees;
  • Inventory decreased from $121M to
    $56M;
  • 93 percent fill rate increased to 99.9
    percent;
  • 0 percent material invoice automation
    increased to 90 percent;
  • 0 percent automated purchase orders
    increased to 73 percent; and
  • 49 supply facilities reduced to 28.

A New View of Service Delivery – Leveraging the Intelligent Infrastructure

Figure 3: The Intelligent Infrastructure is critical in connecting the complex array of interrelated business processes

Achieving maximum benefits and savings requires a holistic view of the business, as savings in one area are often affected by activity in another. While each organization will
choose an area to focus on first, a calculated
– and often evolutionary – enterprisewide
strategy must be articulated as
well. Just as the intelligent infrastructure
connects the dispersed and disparate
components of the utility’s network of
assets, service delivery management
connects a complex web of highly interrelated
business processes. Having a “process-
centric” perspective is paramount to
driving change. So the need for an intelligent
infrastructure is even more critical
to success!

A simple way of looking at this is in the
context of work management activities.
One can easily see how work planning,
work allocation and work execution all
impact each other, as shown in Figure 3.

Many organizations fail to implement a
sound strategy – and solutions – for ensuring
that the right work is being performed.
A combination of preventive and predictive
planning strategies can overcome this
liability. Traditional field service solutions
don’t tell you anything about whether
specific work should have been identified
at the project outset. These tools can help
get technicians started working faster, but
they don’t provide any analytics to help
determine if the task was truly needed in
the first place.

The next step focuses on work allocation.
In determining how to optimize work
assignments, organizations must consider
the skill match with technicians, parts
availability and work and schedule prioritization.

Only when you’ve determined the
appropriate work to be done and the optimum
sequence for executing that work,
are you ready to perform the work. And
then, you need mobile support. Plans are
outdated as soon as the technician is given
the day’s schedule, and good intentions
fall apart in the unpredictable real-world
environment. Therefore, the ability to
address, augment and modify that optimized
plan on a real-time basis completes
the process of carrying out work in the
most efficient and effective way.

Summary

The intelligent infrastructure, along
with a new view of service delivery and
appropriate technology tools, enables
pragmatic yet progressive means to
achieve transformational results. Service
delivery management recognizes the
optimum chain of work logistics, leading
utilities to identify and prioritize better;
to improve the match of resources, parts
and work management; and, finally, to
execute with ongoing real-time contact
and continually refine work plans to
reflect changing circumstances. This supports
the planning, allocation and execution
for achieving excellence in service
delivery, effectively delivering on the
promise of the intelligent utility.

Endnotes
1. William H. Bartley, "Life Cycle Management of Utility Transformer Assets," p. 6. (http://www.serveron.com/downloads/dl_files/HSB.Bill%20Bartley.Life%20Cycle%20Management%20of%20Utility%20Transformer%20Assets.pdf)
2. U.S. Bureau of Labor Statistics, 2005.
3. Mark Vigoroso and Michael Israel, "Collaborative Asset Maintenance Strategies," Aberdeen Group Inc., December 2006.

Technology Support for Utility Analytics

Intelligent utility networks (IUNs), also
known in the electric power industry
as intelligent grids, smart grids or
modern grids, make use of large numbers
of sensing points and intelligent devices
to greatly increase the observability of
the grid state, device states and quality
of delivered service. Utilities are learning
to use this massive flood of new data
to make significant improvements in the
three primary functions of the utility:
delivery of reliable, high-quality power,
support for sophisticated customer
services and advanced work and asset
management. The amount of data that
an intelligent utility network may produce
can only be handled by automated
analytics, since there is far too much
data streaming in at high data rates for
humans to comprehend and act upon
directly.

We define analytics as software tools
that transform data into information
that can be acted upon, in the forms of
automatic controls, decision support or
performance indicators that influence
operations or planning. In the past, utilities
have often used stand-alone analytics
systems that had little or no ability to
integrate with business systems and other
utility applications and had limited ability
to expand or scale. However, with the use
of a solid architectural framework and
modern technology support, utilities can
implement flexible and scalable analytics
systems that enable them to realize the
full value of their investments in intelligent
grid infrastructure.

To appreciate the need for these technologies,
we review some key aspects of
intelligent utility networks, starting with
the nature of the utility assets themselves.
We will then look at the infrastructure that
transforms traditional utility infrastructure
into an intelligent utility network,
and then will examine technologies that
support the implementation and operation
of advanced analytics for the intelligent
utility network.

Utility Asset Characteristics and Intelligent Utility Network Structure

Utility assets have several important distinguishing
characteristics that impact
the nature of analytics technology:

  • They should operate continuously
    (24/7/365);
  • They are geographically distributed;
  • They have a definite hierarchical structure;
    and
  • It takes a great many sensing points
    and analytics to make these assets fully
    observable; a few key performance indicators
    (KPIs) are not sufficient.

Figure 1: Analytics Hierarchy for a Transmission and Distribution Utility

Consider an electric transmission and
distribution utility as an example. At the
logical top of the hierarchy we have the
business operations. Below that, we have
the control centers for transmission and
distribution. Below each of these we have
substations and the equipment contained
therein. On the distribution side, the hierarchy
continues downward to the feeder
circuits and associated devices, and to the
customer meters. If we consider the analytics
necessary to characterize fully this
set of assets and associated operations,
we see a matching hierarchy, as shown in
Figure 1. Arrows indicate the flow of analytics
results.

What is not as clear from the diagram,
but is still eminently true, is that operational
time scales for analytics shorten
as we move down the hierarchy. At the
feeder circuit level, we may require analytics
to operate in milliseconds, whereas at
the enterprise level, we may only require
analytics to operate on a weekly, monthly
or quarterly basis. One exception is that billing-related meter functions do not need to operate on millisecond time scales. However, in cases where the meters are used as grid sensors (for, say, outage detection/localization or grid-state monitoring), the more rapid time scales do apply to those analytics.
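
One illustrative way to hold this hierarchy and its time scales side by side is as a simple table in code. The levels follow the discussion above; the example analytics and the cadences for the intermediate levels are assumptions added for illustration, not figures from the text.

    # Illustrative pairing of the analytics hierarchy of Figure 1 with typical
    # operational time scales. Intermediate cadences and example analytics are
    # assumed for the sake of the example.
    ANALYTICS_HIERARCHY = [
        ("business operations",        "financial and reliability KPIs",      "weekly to quarterly"),
        ("control center (T&D)",       "system performance metrics",          "seconds to minutes"),
        ("substation",                 "device health, power quality",        "milliseconds to seconds"),
        ("feeder circuit and devices", "fault detection, volt/VAR analytics", "milliseconds"),
        ("customer meter",             "billing reads; grid sensing",         "minutes for billing, seconds as a sensor"),
    ]

    for level, examples, cadence in ANALYTICS_HIERARCHY:
        print(f"{level:28s} {cadence:42s} e.g., {examples}")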

The implication of such a logical and temporal hierarchy, with geospatial distribution of assets, is that we must use technologies
to support analytics that provide
for distributed sensing, processing and
communications, as well as geospatial,
temporal and topological (grid connectivity)
awareness.

The distribution issue is especially
important. There are certain analytics
that can only be implemented in a
centralized fashion, since they are inherently
global in nature, such as system
performance metrics. Others, however,
are essentially local in nature and can be
computed right at the sensing points with
smart sensors or smart RTUs (remote
terminal units) and then reported out to
applications and repositories as needed.
Examples of local analytics include RMS
voltage, THD, and real and reactive power
flow. In some cases, analytics are better
implemented in a partitioned fashion, with
some elements being computed locally,
and some elements being computed in a
centralized server. This is especially true
for analytics that assemble a global view
of system performance from a number
of localized but complex measurements,
such as for high impedance fault location
via distributed sensors.
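
As a concrete example of an analytic that can run locally at the sensing point, the sketch below computes RMS voltage, RMS current and real power from one cycle of sampled waveforms. This is the generic textbook calculation, shown only to make the idea of local analytics tangible; the sampling rate and signal values are invented.

    import math

    def rms(samples):
        """Root-mean-square of a window of waveform samples."""
        return math.sqrt(sum(s * s for s in samples) / len(samples))

    def real_power(v_samples, i_samples):
        """Average instantaneous power over the window (real power, watts)."""
        return sum(v * i for v, i in zip(v_samples, i_samples)) / len(v_samples)

    # Synthetic example: one 60 Hz cycle sampled at 32 points, 120 V RMS,
    # 10 A RMS, current lagging voltage by 30 degrees.
    N = 32
    v = [120.0 * math.sqrt(2) * math.sin(2 * math.pi * k / N) for k in range(N)]
    i = [10.0 * math.sqrt(2) * math.sin(2 * math.pi * k / N - math.radians(30)) for k in range(N)]

    print(f"RMS voltage: {rms(v):.1f} V")            # ~120 V
    print(f"RMS current: {rms(i):.1f} A")            # ~10 A
    print(f"Real power:  {real_power(v, i):.0f} W")  # ~120*10*cos(30 deg) = 1039 W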

Below we review a number of advanced
technologies supporting the implementation
of analytics solutions for IUNs.

IUN Component Technologies

Data Sources
A wide variety of sensing devices is available, increasingly with embedded intelligence. From sensors connected
to microprocessor relays in substations
(also known as intelligent electronic
devices or IEDs) to smart grid devices
such as intelligent reclosers and capacitor
bank controllers to line monitors with or
without smart RTUs, there are many ways
to obtain measurements on service delivery
and on device status and health. For
electric distribution grids, these devices
generally provide data on grid state (voltage,
current, real and reactive power, etc.),
device state and device stress history,
power quality and power reliability, faults
and failures, and safety conditions. Key
technologies here are device-monitoring
tools, software tools that support remote
programming and application download
for flexible distributed intelligence, digital
communications interfaces and IEEE
1451-based transducer electronic data
sheet (TEDS) services.

Data Transport
There are over two dozen communications technologies that can be and are used by utilities, and it is not unusual to see a utility use six or more simultaneously. The
key issue here is that utility data communications
networks that support advanced
utility analytics must be TCP/IP-enabled.
This provides the necessary flexibility
and interoperability to support sensor
data transport, network management,
data security services support and smart
device management.

Data Storage for Analytics
The nature of utility analytics is that they
are sensor-data-driven, and ordinary relational
databases are not good at handling
such data streams. For utility analytics
data, there are three primary data storage
technologies: data historians, meter
databases and CIM-structured data
warehouses. Data historians use special
formats to store sensor data and are
capable of keeping years’ worth of such
data (up to multiple terabytes) online and
rapidly accessible. Data historians may
be either centralized or distributed. CIM-structured databases use the utility Common
Information Model as the basis for a
data warehouse that contains data from
a variety of utility sources and provides a
store against which analytics may be run
without loading down other utility databases
or applications. CIM also provides
an open-standard data model schema for
utilities, which avoids proprietary database
formats and goes far toward guaranteeing
interoperability with newer utility
control systems.

Meter databases have typically been
siloed in the past and have been managed
by meter data management systems.
However, both the meter databases and
data historians can be federated to CIM
data warehouses via middleware tools
made specifically for such database
integration. In this way, analytics can be
built to access only the data warehouse,
with data being automatically fetched
from the historian or meter database as
needed without the need to copy large
volumes of data from either of these
specialized databases into a relational
database (something that would overload
the relational database system easily).
In addition, some meter data collection
systems support multiple-event subscribers,
thus allowing a meter data management
system to get usage and event data,
while also allowing other systems, such
as outage intelligence systems, to have
simultaneous near-real-time access to
event messages.
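
A minimal sketch of the federation idea follows. The adapter and warehouse classes are hypothetical stand-ins for real middleware: analytics query only the warehouse-style interface, which fetches from the historian or the meter database on demand instead of bulk-copying their data into a relational store.

    # Sketch of federated access: analytics see one warehouse-style interface;
    # the adapters below are hypothetical stand-ins for real middleware.
    class HistorianAdapter:
        """Fetches time-series sensor data from a data historian on demand."""
        def series(self, point_id, start, end):
            return [(start, 0.0), (end, 0.0)]  # placeholder data

    class MeterDbAdapter:
        """Fetches interval usage data from a meter data management system."""
        def usage(self, meter_id, start, end):
            return [(start, 0.0), (end, 0.0)]  # placeholder data

    class CimWarehouse:
        """CIM-structured warehouse that federates the specialized stores."""
        def __init__(self, historian, meter_db):
            self.historian = historian
            self.meter_db = meter_db

        def query(self, kind, obj_id, start, end):
            # Route the request; no bulk copy into a relational database.
            if kind == "sensor":
                return self.historian.series(obj_id, start, end)
            if kind == "meter":
                return self.meter_db.usage(obj_id, start, end)
            raise ValueError(f"unknown data kind: {kind}")

    warehouse = CimWarehouse(HistorianAdapter(), MeterDbAdapter())
    print(warehouse.query("sensor", "feeder_12/voltage", "2007-01-01", "2007-01-02"))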

Integration Buses
In the past, many utility analytics have been created to operate in stand-alone
fashion, and any integration among them
has been “swivel-chair integration,” where
the user manually transfers data or commands
from one screen to another. The
enterprise software integration bus, especially
in the context of a services-oriented
architecture (SOA), provides a basis for
integrating analytics with utility applications and back-office applications in a way that
preserves performance and modularity,
and provides vendor independence by isolating
the effects of changing or replacing
any particular application or analytic tool.

For utility analytics systems, we recommend
a dual bus arrangement, where
a standard enterprise integration bus
provides connection among analytics
and enterprise systems, and a second
event-processing bus handles the higher
bandwidth data transport and rapid event
response traffic. This extended SOA
approach for utility analytics systems
supports both centralized and distributed
analytics processing, as well as providing
mechanisms for machine-to-machine
(M2M) communication for automatic use
of analytics outputs in control applications
and in support of composable (compound)
analytics services.
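
The dual-bus arrangement can be pictured as two independent publish/subscribe channels: a high-rate event-processing bus for sensor-driven traffic and machine-to-machine use, and an enterprise bus for slower, business-level results. The toy Bus class and the topic names below are illustrative only, not a product API.

    # Illustrative dual-bus arrangement: raw, high-rate analytics events go to
    # the event-processing bus for M2M use, while summarized results go to the
    # enterprise bus. Class and topic names are hypothetical.
    from collections import defaultdict

    class Bus:
        def __init__(self, name):
            self.name = name
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)

        def publish(self, topic, message):
            for handler in self.subscribers[topic]:
                handler(message)

    event_bus = Bus("event-processing bus")     # high-bandwidth, low-latency
    enterprise_bus = Bus("enterprise SOA bus")  # business services integration

    # A control application reacts to raw analytics in near real time ...
    event_bus.subscribe("feeder12/overvoltage", lambda m: print("control action:", m))
    # ... while a dashboard service consumes a summarized KPI.
    enterprise_bus.subscribe("kpi/voltage_violations", lambda m: print("dashboard:", m))

    event_bus.publish("feeder12/overvoltage", {"rms_v": 127.3, "limit": 126.0})
    enterprise_bus.publish("kpi/voltage_violations", {"feeder": 12, "count_today": 3})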

Event Correlation and Notification
Several technologies support analysis
of events as represented in sensor data.
Generally speaking, sensor and event
data must be time-correlated, so data
should be time-stamped. Newer utility
devices make use of GPS timing information
to provide precise and accurate time
correlation. Many analytics also require
geospatial correlation (to determine if a
lightning strike near a substation caused
a circuit breaker trip, for example).
Geographic information systems (GIS)
are used by most utilities to track asset
locations. With proper integration, GIS
databases can augment both real-time
and post-event analytics. Connectivity
models (which are inherent in CIM-structured
databases and also exist in various
forms in energy management and distribution
management systems) provide
the necessary topological information
for event correlation. Combined, these
technologies yield the ability to analyze
grid events through three search criteria:
a time window, a geospatial window and a
connectedness window. Event correlation
tools that perform in these three search
dimensions greatly ease the problem
of analyzing complex events in a utility
transmission or distribution system.
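
A simplified sketch of correlation in these three search dimensions: an event is kept only if it falls within a time window, a geospatial radius and a maximum number of hops in the connectivity model. Representing events as dictionaries and measuring connectedness as hop count are simplifying assumptions made for the example.

    from datetime import datetime, timedelta
    from math import hypot
    from collections import deque

    # Connectivity model as an adjacency list (node -> connected nodes).
    GRID = {"sub_A": ["feeder_1"], "feeder_1": ["sub_A", "feeder_2"], "feeder_2": ["feeder_1"]}

    def hops(start, end, graph):
        """Breadth-first search distance in the connectivity model."""
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            node, d = queue.popleft()
            if node == end:
                return d
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
        return float("inf")

    def correlated(reference, candidates, graph, time_window, radius_km, max_hops):
        """Keep candidate events inside the time, geospatial and connectedness windows."""
        out = []
        for ev in candidates:
            in_time = abs(ev["time"] - reference["time"]) <= time_window
            in_space = hypot(ev["x_km"] - reference["x_km"], ev["y_km"] - reference["y_km"]) <= radius_km
            in_grid = hops(reference["node"], ev["node"], graph) <= max_hops
            if in_time and in_space and in_grid:
                out.append(ev)
        return out

    breaker_trip = {"time": datetime(2007, 6, 1, 14, 0, 5), "x_km": 0.0, "y_km": 0.0, "node": "sub_A"}
    lightning = {"time": datetime(2007, 6, 1, 14, 0, 3), "x_km": 1.2, "y_km": 0.4, "node": "feeder_1"}
    print(correlated(breaker_trip, [lightning], GRID,
                     time_window=timedelta(seconds=10), radius_km=5.0, max_hops=2))
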
In addition to post-event correlation
analysis, utilities must monitor a great
many data points and a great many analytics
that derive from measured data.
It is impractical and ineffective to have
people monitor screens full of streaming
numbers, so automated configurable notification
engines must be used to scan the
data and analytics outputs continually and
then generate notifications to the right
parties when events occur. Event-processing
technology can supply tools that perform
such monitoring and notification on
a subscription/configuration basis. Such
event notification can be implemented in
data historians or in separate event-processing
services.
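
A minimal sketch of such a notification engine: each subscription pairs a data point and a condition with a recipient, and the engine scans incoming values and notifies only when a condition is met. The point names, threshold and delivery mechanism are invented for illustration.

    # Minimal subscription-based notification engine (illustrative only).
    class NotificationEngine:
        def __init__(self):
            self.subscriptions = []   # (point_id, condition, recipient)

        def subscribe(self, point_id, condition, recipient):
            """condition is a callable taking the measured value -> bool."""
            self.subscriptions.append((point_id, condition, recipient))

        def ingest(self, point_id, value):
            """Scan a new reading and notify matching subscribers."""
            for pid, condition, recipient in self.subscriptions:
                if pid == point_id and condition(value):
                    # Stand-in for email/pager/portal delivery.
                    print(f"notify {recipient}: {point_id} = {value}")

    engine = NotificationEngine()
    engine.subscribe("xfmr_17/top_oil_temp_C", lambda t: t > 95.0, "substation.ops@example.com")
    engine.ingest("xfmr_17/top_oil_temp_C", 92.1)   # no notification
    engine.ingest("xfmr_17/top_oil_temp_C", 97.4)   # triggers notification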

General Analytics Tools
Several general-purpose software
technologies have proven useful in analytics
systems in other industries and
fit into the context of utility analytics
as well. These include online analytical
processing, known as OLAP, and its ancillary
tool, the cubing engine. These tools
provide the means to rapidly examine a
multidimensional data set from various,
possibly rapidly evolving viewpoints so as
to obtain a clear visualization of the inherent
meaning of the data. Separately, data
mining technology provides the means to
sift large volumes of data automatically to
identify patterns and trends that a person
might never recognize in a mountain of
data. Data mining technology is typically
used to assist offline analysis in support
of strategic planning and long-term trend
analysis or event correlation.
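
As a small illustration of the OLAP idea, the pandas pivot below aggregates a multidimensional data set (region by month by outage cause) so it can be examined from different viewpoints; the records are invented purely for the example.

    import pandas as pd

    # Invented sample records, purely to illustrate OLAP-style slicing.
    outages = pd.DataFrame([
        {"region": "North", "month": "Jan", "cause": "weather",   "customer_minutes": 120_000},
        {"region": "North", "month": "Jan", "cause": "equipment", "customer_minutes": 45_000},
        {"region": "South", "month": "Jan", "cause": "weather",   "customer_minutes": 30_000},
        {"region": "South", "month": "Feb", "cause": "equipment", "customer_minutes": 80_000},
    ])

    # One "view" of the cube: total customer-minutes by region and cause.
    print(outages.pivot_table(index="region", columns="cause",
                              values="customer_minutes", aggfunc="sum", fill_value=0))

    # A different viewpoint of the same data: monthly totals per region.
    print(outages.pivot_table(index="month", columns="region",
                              values="customer_minutes", aggfunc="sum", fill_value=0))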

Analytics Management
In a modern utility analytics environment,
there are so many rapidly updating
analytics, metrics and key performance
indicators that it is necessary to provide
tools to support analytics management.
Analytics management entails three functions:
control of access to analytics based
on job roles; subscription to analytics by
users on an ad hoc basis; and configuration
of subscribed analytics on a user-by-user basis. Analytics management tools
provide the means for a user to subscribe
to a particular analytic and have it delivered
to the desktop or to email or a pager
service as needed and then unsubscribe
when the need for that particular analytic
is past. This becomes especially important
when tens of thousands of data points are
being measured and analytics are being
derived from these data points.
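
A toy sketch of the three analytics-management functions named above, i.e., role-based access control, ad hoc subscription and per-user configuration. The roles, analytic names and configuration options are hypothetical.

    # Toy analytics management: role-based access, subscription, per-user config.
    ROLE_ACCESS = {
        "planner":  {"feeder_loading_forecast", "asset_health_index"},
        "operator": {"feeder_loading_now", "outage_events"},
    }

    class AnalyticsManager:
        def __init__(self):
            self.subscriptions = {}   # user -> {analytic: config}

        def subscribe(self, user, role, analytic, **config):
            if analytic not in ROLE_ACCESS.get(role, set()):
                raise PermissionError(f"{role} may not access {analytic}")
            self.subscriptions.setdefault(user, {})[analytic] = config

        def unsubscribe(self, user, analytic):
            self.subscriptions.get(user, {}).pop(analytic, None)

    mgr = AnalyticsManager()
    mgr.subscribe("jdoe", "planner", "asset_health_index",
                  delivery="email", refresh="weekly")   # per-user configuration
    mgr.unsubscribe("jdoe", "asset_health_index")       # drop it when no longer needed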

Results Distribution to Humans
Many of the analytics results must be presented to people to support various decisions and actions. Appropriate technology
for such presentations includes portals,
dashboards and notifications via email,
pager or cell phone. Portal, dashboard
and related technologies have become
quite common in advanced information
systems and in most back-office systems.
Their use in utility analytics systems is
more recent but is growing rapidly and
represents a de facto standard approach
to distribution of analytics and KPIs.
Distribution of notifications via email
and pager is also common in monitoring
systems – we extend the concept to monitoring
of advanced analytics in addition to
basic variable threshold crossings.

Analytics Architectural Framework

Figure 2: Example Analytics Architectural Framework

It is not enough to have a selection of
technologies available for use in intelligent
grid analytics implementations.
These components must fit into a framework
that provides the environment to
integrate existing and future applications
and analytics into the operating
and business environment of the utility.
The architectural framework defines the
integration schema, the relevant communications
standards, how analytics are
managed, how multiple vendor analytics
tools are integrated and how analytics
results are distributed. Figure 2 shows an
example of an architectural framework
for utility analytics.

The architectural framework seems
complex at first glance, but its overall
structure adheres to three primary principles:
use of the SOA with an enterprise
bus for integration at the business services
level; use of a second event-processing
bus for integration of the real-time
event data; and integration of data from
a variety of sources into a CIM-structured
data warehouse. The framework has provisions
for data management, analytics
management, and network and device
management, as well as data security
services. The framework supports both
centralized and distributed analytics
and allows for variable trade-offs in the
degree of distribution.

Note that this reference architecture is
a starting point for the development of a
utility analytics solution. It does not represent
a shrink-wrapped, out-of-the-box
solution. Each utility must be prepared to
customize any such reference architecture
to its unique infrastructure and needs.
It does, however, represent an excellent
starting point in developing an appropriate
end-to-end utility analytics solution.
The SOA approach inherently implies use
of a delimited set of open standards for
communications, and this is extremely
important in creating scalable, modular
M2M and process-to-process or service-to-service communications.

Many commercial products are
available to support the middleware
and database functions implied in this
framework, and the framework is
designed to support the integration of
utility and third-party analytics tools
and functions in a vendor-independent
fashion. This provides the utility with
the ability to protect its investments
and not be locked in to a single vendor,
protocol or tool set.

Conclusions

Utility analytics are becoming more
sophisticated and at the same time
more widely used throughout the utility.
As data volumes increase from intelligent
utility networks, smart grids, etc., so does the need for technology to manage
the data flood and the analytics that
convert the data flood into usable information.
Key technologies, such as IP-enabled
digital communications, software integration
buses, CIM-structured data warehouses,
data historians, event-processing
tools, networked device management
tools, machine-to-machine communications,
portals and dashboards for human
interfaces, and even analytics management
tools are crucial elements of a
successful utility analytics system implementation.
All of these technologies benefit
from an analytics architectural framework
that provides scalability, variable
distribution and modularity, thus ensuring
flexibility and therefore protection of the
utility’s investment.

Leveraging Operational Systems Assets to Realize Their Full Value

The electric utility industry finds
itself confronted by a dynamic
regulatory and market environment
that is necessitating changes at
all levels of these organizations. Whether
a generation utility is participating in a
commercialized or regulated market,
the successful producer must be cost-effective, environmentally compliant and flexible.

Realizing such a position is no easy
task. All of the obvious measures have
been taken – utilities have made improvements
in their supply chain, reduced
staff and outsourced a variety of functions.
Success requires that generation
utilities dissect their core business processes
in search of quantum improvements
in operations. This doesn’t have
to require significant investment; rather,
that utilities more fully leverage the
investments they’ve already made,
including those in a spectrum of computer-
based information and control
technologies. These technologies can be
organized into a three-tiered hierarchy
that, working together, will provide the
data and automation engines necessary
to identify, implement and sustain necessary
changes to operational processes.

The following classes of computing technologies
are included in this hierarchy:

  • Enterprise-level Systems: These systems provide the basic
    computing infrastructure for the entire organization and include applications,
    services and resources such as financial analysis, ERP, desktop platforms,
    security, Internet presence, business continuance and LAN/WAN networking.
    Given the corporatewide expanse of enterprise systems, the staff levels needed
    to manage and support them, and the large costs involved, they receive significant
    attention at the executive level and have highly refined governance models.
    Enterprise systems can be thought of as enabling more specialized applications
    by providing the services and infrastructure they need. Most utilities have
    long since standardized enterprise applications across their organizations.
    Having done so, they have been able to reduce costs, obtain procurement leverage,
    avail themselves of outsource opportunities and better utilize internal technical
    resources. Enterprise systems are often models of IT governance best practices.
  • Embedded Systems: At the other end of the hierarchy reside
    embedded systems. This category includes highly specialized hardware/software
    such as intelligent sensors, SCADA remotes, control devices, specialized diagnostic
    handhelds, etc. Embedded systems are rarely of any concern to the IT department
    except if they utilize connectivity and security resources provided by IT
    or source operational data to IT-hosted applications, scorecards and performance
    dashboards.
  • Operational Systems: The final category of computer systems
    falls in the middle of the hierarchy. Utilities employ specialized control
    and information applications to operate, maintain and manage their generating
    assets. These specialized applications – collectively referred to as “operational
    software” – typically include distributed control systems, maintenance management
    systems and a variety of software applications to analyze and optimize unit
    performance. While software at this level may make extensive use of enterprise
    resources and platforms, the specialized applications themselves are not widespread
    within a utility; they are largely the concern of engineers and other specialists.
    As a result, they have historically been below the radar of IT management.

It has been noted that utilities often model
best practice behavior at the enterprise
level of their organizations. Unfortunately, such practices have not typically extended to operational software and systems, which constitute a smaller but potentially more critical class of applications. Systems at the
operational level deserve the same C-level
governance attention.

Operational Systems Issues

While some operational improvements
may involve costly and time-consuming
mechanical modifications to generating
plants, others can be realized more
quickly and inexpensively by leveraging
the information available from plant control
and information systems.

Challenges facing generation utilities include responding to newly imposed emissions mandates, improving heat rates, decreasing the number of forced outages and reducing plant staffing and operating costs. Operational systems are critical to these goals, since they generate the data needed for assessment and response, execute the necessary actions, or both.

Many utilities are not fully leveraging
the operational systems assets in their
generating fleets; others have operational
software and hardware infrastructures
that do not readily lend themselves to
higher-order capabilities. Historically,
operational systems and software have
been procured by individual plants, on
limited budgets, to meet local concerns
often driven by the need to replace obsolescent
systems. Even in situations where
advanced capabilities have been implemented,
they are not used consistently.

In brief, operational systems are not
deployed using the same cross-competency
approaches used for enterprise-level systems, which may be hindering the
realization of their full value.
Of specific note are the following problem
areas:

Corporate Governance
Despite investments exceeding $4 million
per unit, operational systems seem to fly
below the radar of many utility executives.
When compared with enterprise
applications, an individual operational
system represents a relatively small dollar
amount and has significantly fewer users.
A different perspective emerges if one considers the collective investment a utility has in operational systems across all units in its fleet, which can easily extend to tens of millions of dollars and encompass thousands of users.

The most significant impact of this lack of visibility may be that many utilities have not standardized on a specific plant control platform. In these instances, control and information systems are bought locally with minimal corporate input and standards guidance. What is often lacking is not only provision for a common hardware/software platform but also standards for functionality. Functionality should
relate directly to the challenges utilities
face in improving their unit operations.

Overemphasis on Technology
The focus on technology encompasses
the entire project life cycle, from project
inception through implementation and
rollout. This was a major theme that
emerged from a series of in-depth interviews
with persons experienced in
architecting, deploying and managing
operational systems at mid-to-large-sized
investor-owned electric utilities. The
research found that in initiating these
projects, technical obsolescence of an
existing software implementation was
cited as the reason for embarking on
an upgrade or replacement more than
50 percent of the time. The need for
enhanced functionality was the driver in
only 33 percent of replacements and was
not cited at all as a reason for an upgrade.

Once an operational systems project
is approved, it moves into the requirements
definition phase. Specifications for
operational systems are largely technical
in nature, and operational objectives or business processes are seldom mentioned. Business process
improvements are often used to justify
software procurement, but they are
apparently not being integrated into the
specifications for operational software.
One participant in the writer’s research
said that only a small minority, “certainly
no more than 20 percent," of request-for-proposal
documents include any discussion
of operational objectives.

Lack of User Involvement
There is a general lack of user participation
across the entire project life cycle
from initial project inception and planning
through final rollout of the application.
Many organizations say they involve users
throughout software implementation
projects, but a closer look shows this may
be limited to a few “super users” – those
most technically astute who are involved
with the most sophisticated application of
the system in question.

Even super-user participation is mostly
limited to the development of specifications
at the project start. The vast majority
don’t jump back in until the software is
ready to be rolled out for testing and training.
Users participated minimally during
the design phase, leaving their needs to
the interpretation of software developers
who may not be fully cog-nizant of user
job tasks. As a result, user needs are often
not fully realized in the final software
implementation, when it is often not feasible
financially or schedule-wise to make
corrections to anything but the most egregious
issues. The remaining user issues
are left to “future releases” of the software
and often are not addressed at all.

Training and Change Management
The general population of users doesn’t
see an operational software application
until it’s time for training. Usually this
covers the functions of the software
package, as opposed to training people
on how to use the software to perform
specific work-related functions. This
approach, though prevalent, is unfortunate
and significantly raises the risk that
the utility won’t realize its ultimate business
goals for implementing the system.
For example, inappropriately trained
operators tend to turn off optimization
software applications they do not fully
understand. Often these are the same
applications that generators are counting
on to address efficiency improvement
and regulatory issues associated with
their fleets.

Overemphasizing Implementation Costs
Generally speaking, many organizations
obsess over initial capital outlays for software
and associated implementation costs
with little attention paid to life cycle costs
– despite the fact that total cost of ownership can be significant and is not necessarily reflected in initial purchase and implementation
costs. Depending on the complexity of the
software, its administrative burden, scalability
and the duration of the expected
upgrade cycle, the need for technical
personnel to maintain software can vary
considerably. In addition, a cost-centric
approach can impact project elements
beyond technology. Training is often the
first thing cut when budgetary issues are
encountered. Similarly, process re-engineering
and user representation are frequently
eliminated to save money.

New Directions

The previous sections have identified
operational systems as a distinct and vital
class of technology within utilities, one
that warrants the same governance approach already in place at the enterprise IT level.
Moving forward, utilities should consider
several calls to action.

Develop Overall Governance Models
Utilities would be well-served to elevate
operational systems to the purview of
executive management:

  • Standardize on system platforms to
    more fully leverage procurement advantages
    and outsource opportunities and
    to more effectively concentrate increasingly
    scarce human resources on operational
    issues. Technology management
    is not a utility core competency; leveraging
    it to operational advantage is.
  • Develop standard control and optimization
    applications that directly reflect
    operational and regulatory needs,
    reduce busbar costs, etc. Utilities are
    advised to develop systems specifications
    centered on operational objectives,
    with technology assessed only to
    ensure efficient execution of needed
    functionality.
  • Manage the project and the business
    case. Research shows that utilities justify
    the procurement of an operational
    system based on a business case but
    often fall short of achieving it. Projects
    are managed to be on time, on budget
    and compliant with technical specifications,
    but it is rare for a utility to revisit
    an implementation to validate whether it
    met its financial goals.
  • Elevate operational systems to the
    status of enterprise applications. Operational
    applications often languish in
    obsolescence because of the rigid business-
    case criteria that must be met to
    justify replacement or upgrade. Control
    and information systems are critical to
    the business and should be elevated to
    the same infrastructure status as enterprise
    systems. Risk avoidance alone is a
    strong justification for this.

De-emphasize Technology
Technology is just one competency needed
for successful software/systems implementation.
Three others are coordination
with strategy, change management and
process re-engineering. All too often technology
is chosen based on a bits-and-bytes
assessment, when in fact it is how that technology
is deployed that is critical to success.
A control provider that integrates logic
with continuous control; has algorithms
and optimization software appropriate
to generation, for example; and has
credible experience in the industry is
far more valuable than any specific
underlying technology.

Promote User Involvement
Lack of user participation is the most
significant cause of project failure, a lesson
that’s not been lost on enterprise IT.
User involvement promotes more effective
use of system features, avoids undue
complexity and ensures timely retirement
of costly legacy systems. Change management
is often equated with “training,” but
traditional classroom training is a small,
albeit important, component of the overall
picture. Equally critical to success is user
participation in developing requirements,
consultation with an appropriate cross
section of users during the design process
and an effective rollout strategy.

Effective training is also critical,
especially as plant systems grow more
sophisticated. At the same time, there is
the need to improve operating efficiencies,
meet emissions requirements and
manipulate a unit in response to varying
market demands. If we consider EPRI’s
findings, as reported by Power Magazine,
that 30 percent of the utility workforce
is over 50 years old and will be gone in
10 years, training becomes an even more
urgent need. Operators must know how to
operate the unit competently – not simply
access the features of a particular control
platform. And depending on the specifics
of their situation, they must be periodically
retrained. Fortunately, simulator-based
training tools are available and enhanced
versions are being developed.

Recognize Value, Not Costs
Finally, when one considers the efficiency
impacts of marginally designed systems
and inadequately trained operators,
appropriate funding for a cross-competency
approach to operational systems is
justifiable. Many of the cost issues encountered
stem from operational systems not
being viewed as on par with enterprise
applications. Elevating operational systems
to enterprise status would establish
standards, implementation methodologies
and funding regimes, thus mitigating
financial obstacles that have a detrimental
effect on final results.

Often it is not even the costs associated
with operational systems, but rather
the justification of those costs that is
the real issue. Control and information
systems providers recognize the need
to architect, implement and manage
systems in line with business strategies.
While no single source of expertise can
provide a complete picture, control and
information systems providers have a
unique perspective on the industry – one
that utility executives would be wise to
consult in making decisions regarding
operational systems.

Operational systems are critical to the
core business of utilities: the safe and
efficient generation of electricity, in compliance
with environmental regulations.
These systems, long outside executive
purview, are critical to the success of utilities
in a changing market environment.

Best Practices in Distribution Asset and Work Management

Utilities face escalating operational
expenses due to rising fuel costs
and increasing investment costs
for environmental regulation while simultaneously
being limited in their ability to
improve revenues through rate cases. As
a result, utility companies are challenged
to continuously reduce operating costs
while simultaneously maintaining reliability
standards. Installing technology
alone will not deliver the cost savings;
the investment must be accompanied
by a change in the way we think about,
organize, and execute our business. This
paper explores the concepts of asset and
work management that complement technology
in achieving the goal of reduced,
consistent, predictable operational costs.

Background

Historically, utility owners established
capital and O&M budgets, and utility
workers were responsible for spending
the budgets. Performance was defined as
spending exactly the amount budgeted.
Variance was measured throughout the
organization on a monthly basis and was
treated as a performance problem that
had to be managed. First-line supervision
was more concerned with “budget variance”
reconciliation than unit costs or
volume production variances, if these factors
were considered at all. When determining
the budget for the following year,
momentum played the largest role in the
decision. For many of the business units,
last year’s budget was simply escalated
for inflation, and the cycle of "budget-based performance" was repeated. This is particularly problematic for routine, short-cycle
operations and maintenance work,
where budgetary performance has little
correlation with results achieved.

At senior leadership levels, little consideration
was given to unit cost performance,
focusing instead on “how much
can we get back from our next rate case”
and “how much capital do we need to
spend to improve our position in the rate
case.” Field work performance was unimportant,
unless the expense became too
great, affecting cash flows and, thereby,
financing.

The Asset Management Model

Utilities, like most asset-intensive industries,
can be thought of as having four
primary internal stakeholders:

  • Asset Owners – represent the shareholder.
    They obtain funding for operations
    and capital investment and establish
    the required return on investment
    for the owned assets and investments.
    They also have primary responsibility
    for business performance management,
    tracking objective measures of
    performance (unit cost, total volume,
    customer satisfaction, etc.) and reporting
    these metrics to foster a discussion
    between the other stakeholders.
  • Asset Managers – analyze business and
    operating environmental conditions
    and direct investment to deliver the
    required return defined by the asset
    owner.
  • Utility Operations – execute the business
    plans of the asset managers in the
    most cost-effective manner. They are
    accountable to the asset managers.
  • Relationship Managers – manage communications
    and information shared
    within the utility, and between the
    internal and the primary external stakeholders
    (i.e., customers and regulators).
    They are equally accountable to three
    internal stakeholders (asset owner,
    asset manager, utility operations) and
    the external stakeholders (customers
    and regulators).

Asset Management

Figure 1: Key Players and Their Roles

As is done with airlines, chemical companies, large manufacturing and other capital- and asset-intensive industries, capital
assets should be considered investments
and be justified through expected
returns. The “obligation to serve” and
commensurate right to a return through
rate case adjustment (though less certain
as we go forward) modify the equation by reducing some investment risk and, thereby, the effective cost of capital.
However, the overall business remains
the same – invest in that which returns
net value for the business. As such,
investments must be focused to meet the
objectives of a strategic plan. The plan
must recognize and operate within the
constraints of the prevailing business
reality, and risks to operations must be
considered and mitigated. The business
must know the details of the assets it currently
owns (no small task for some traditional
utilities) and be able to access this
information as needed for plan development
or reaction to emerging conditions
such as environmental regulation.

To this end, the asset manager develops
a systemwide strategy to meet the goals
established by the asset owners, taking
the form of a system master plan. This
plan defines funding constraints for both
the capital investment and routine operations
and maintenance expenses. The risk-adjusted benefits of both types of work are evaluated, and only the work scope with the highest return on investment receives funding (as constrained by the financial limitations of the organization, e.g., debt-to-equity ratio, cost of capital). The
subtle conclusion inherent in this statement
is that not all O&M should be considered
worthwhile, and the long-term risk-adjusted
benefit of routine maintenance
needs to be evaluated and consciously
decided upon. Momentum-based decisions
have no place here.
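
A simplified sketch of that funding logic: candidate capital and O&M work is scored by risk-adjusted benefit per dollar and funded top-down until the budget constraint is reached, with work that returns no net value left unfunded. All names, dollar figures and the scoring formula are invented for illustration.

    # Illustrative ranking of candidate work by risk-adjusted benefit per dollar,
    # funded top-down within a budget constraint. All numbers are invented.
    candidates = [
        {"work": "replace transformer T-14", "cost": 900_000, "benefit": 2_100_000, "risk_factor": 0.9},
        {"work": "routine pole inspections", "cost": 250_000, "benefit":   400_000, "risk_factor": 0.8},
        {"work": "repaint substation fence", "cost":  60_000, "benefit":    40_000, "risk_factor": 1.0},
        {"work": "feeder automation pilot",  "cost": 500_000, "benefit": 1_200_000, "risk_factor": 0.7},
    ]

    budget = 1_500_000
    for c in candidates:
        # Risk-adjusted benefit per dollar spent.
        c["score"] = (c["benefit"] * c["risk_factor"]) / c["cost"]

    funded, remaining = [], budget
    for c in sorted(candidates, key=lambda c: c["score"], reverse=True):
        if c["cost"] <= remaining and c["score"] > 1.0:   # only work returning net value
            funded.append(c["work"])
            remaining -= c["cost"]

    print("funded:", funded, "| unspent:", remaining)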

Work Management

Operations are an expense. The science
of management focuses on removing the
waste created by complexity to get the
defined task completed with the desired
quality for the least cost on an ongoing
basis. For utilities, this comes down to
taking all of the moving parts associated
with operating resources and making
them dance together. In the past, we could
thrive with individual business units doing
things their own way; it was not uncommon
for overhead and underground crews
to have little in common beyond the logo
on their company badges. The name of the
game now is consistency and standardization,
and that comes from having a plan.

One important discipline of work
management must be followed: All work
must be recorded. This is a critical link to
measuring and controlling the resources
planned for and used in performing work.
Without this information, the resources
cannot be accurately planned and performance
accurately measured.

Job Initiation

Given that all work must be recorded, a
system for intentionally and correctly initiating
work orders for all types of work to
be performed must be created. Broadly,
work orders fall into one of two categories:
planned work and unplanned work.

The enterprise initiates planned work.
That includes all routine maintenance and
capital construction. The asset manager
governs this scope of work to meet the
goals defined by the asset owners.

Unplanned work is that which is in
response to a customer request or a
system disruption, such as requests for
meter inspections, temporary disconnection
of service for construction and restoration
following weather-related service
interruption. This work is reactive and
typically initiated from within the service
provider organization.

At best-practice utilities, the asset management
organization initiates all capital
projects and determines what routine
maintenance and compliance maintenance
will be performed. Utility operations
initiates emergent (unplanned) work.

The key to managing new customer
connections and routine repair and
replacements is standardization. Best
practices include standard work plans that
are applied to the majority of jobs and
rigorous performance management.

Job Design

As stated previously, the name of the
game is consistency. A well-planned,
compatible unit hierarchy with a modest
number of individual elements combined
to form larger standard assemblies will
not only minimize design efforts throughout
a franchise, but will also help to
standardize maintenance and operation
plans that can be executed consistently
at minimum cost.

In an example of pushing this concept
to the limit, one utility company limited its
new high-voltage substation options to a
single, scalable design, reducing design
and construction time for any new substation
from months to weeks, while substantially
reducing construction and routine
maintenance costs. The utility noted that any
operator could go to any of the new substations
and operate it with only a quick
reference to feeder alignment diagrams
because all substations are the same, thus
minimizing training costs and the potential
for operator error.

Job Planning

Job planning brings together resources
(materials, equipment and labor) to
accomplish the work required by the
work order. Again, the goal is standardization.
To the greatest extent possible,
planned work (routine maintenance and
capital construction) should be repetitive
according to a standard plan. This job plan
should be “assembled” from standard
plan components tried and proven many
times before. Valves are tested using the
same tools and according to the same procedures
throughout the franchise. Capital
construction crews can put together a
high-voltage substation quickly and easily,
as they have done the same thing with the
same equipment a hundred times.

This also applies largely to unplanned
work. Although the timing and details
of the customer request to relight a gas
appliance or remove and reinstall an overhead service for home improvement are not known, the required resources (material,
equipment, labor) are nearly identical
for each repetition of the task. One need
only look at the trend in volume of this
work over the past few years to realize
the volume can be predicted based on
environmental events, such as the first
time overnight temperatures drop below
freezing for three consecutive days. As
such, template work plans should be created
and available for application to this
work, minimizing planning and maximizing
standardization.
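
The kind of environmental trigger mentioned above, i.e., the first run of three consecutive nights below freezing, is straightforward to detect from a series of daily low temperatures. A minimal sketch with invented readings:

    # Detect the first date on which overnight lows have been below freezing
    # (32 degrees F) for three consecutive days. Temperatures are invented.
    def first_freeze_trigger(daily_lows, threshold_f=32.0, run_length=3):
        run = 0
        for date, low in daily_lows:
            run = run + 1 if low < threshold_f else 0
            if run >= run_length:
                return date
        return None

    lows = [("Nov-12", 38), ("Nov-13", 31), ("Nov-14", 30), ("Nov-15", 29), ("Nov-16", 35)]
    print("apply template relight work plans starting:", first_freeze_trigger(lows))  # Nov-15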

Job Scheduling

Scheduling work brings all the pieces
together at the right time. That sounds
simple enough, but anyone who has tried
to orchestrate scarce resources within a
framework of competing priorities recognizes
the inherent complexity. There
are many ways to perform the scheduling
function. Here are a few best practices
worth considering.

  • Optimize work schedules of time-insensitive
    work over a multimonth period,
    smoothing peaks and valleys of labor
    and equipment demand, and minimizing
    the impact of constrained resources (a
    simple leveling sketch follows this list);
  • Schedule activities and resources over
    larger geographic areas. This improves
    the flexibility for resource sharing
    rather than artificially constraining
    labor and equipment to a single “operations
    center” or territory; and
  • Improve material staging to eliminate
    impact on other resources. The “tail”
    of supply chain should never “wag the
    dog” of operations.
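
As a simple way to picture the first practice, the greedy sketch below spreads time-insensitive jobs across a multimonth window by always placing the next-largest job in the least-loaded month. Real scheduling tools are far more sophisticated; the job names and crew-hour figures are invented.

    # Greedy leveling of time-insensitive jobs (crew-hours) across a window of
    # months: always place the next-largest job in the least-loaded month.
    jobs = [("pole replacements - area 7", 420), ("valve testing batch", 160),
            ("capacitor bank maintenance", 300), ("URD cable inspections", 240),
            ("substation painting", 90)]
    months = {"Apr": 0, "May": 0, "Jun": 0}

    schedule = {m: [] for m in months}
    for name, hours in sorted(jobs, key=lambda j: j[1], reverse=True):
        target = min(months, key=months.get)      # least-loaded month so far
        months[target] += hours
        schedule[target].append(name)

    for month in months:
        print(f"{month}: {months[month]:4d} crew-hours  {schedule[month]}")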

Emergent work can be the spoiler as it
robs the organization of the efficiency
and cost-effectiveness of well-planned
and executed work, replacing it with the
haphazard returns of reactionary “last
minute” execution. The success of scheduling
depends upon the ability to “lock
in” the work in time to orchestrate it with
crew, material and operational considerations.
Best-in-class work managers
monitor and anticipate problems, minimize
“sponsored work” and utilize sophisticated
scheduling tools and processes.
With trouble calls staffed by dedicated
resources, virtually all other work should
be “fixed and scheduled” 48 to 60 hours
in advance.

Work Execution and Completion

Once the standard job plans are scheduled
to be performed with available
resources, and there is little variation
as the work execution date approaches,
execution translates to field crews carrying
out the plan. Here are a few more
best practices worth incorporating into
the way of doing business.

  • Consider all resource alternatives (i.e.,
    contractors or employees) based on
    availability and cost. When the volume
    of work associated with a task is constant,
    it is an ideal task to outsource on
    a fixed-fee basis;
  • Utilize remote dispatch, job site reporting
    and home dispatching to reduce
    travel time and improve response to
    emergent work;
  • Reduce nonproductive time (late out,
    early return, extended breaks) to
    improve field force utilization;
  • Utilize field forces to close work plans at
    the time of work completion, eliminating
    additional administrative cost; and
  • Identify/process follow-on work immediately
    – eliminating delays and costs.

Performance Management

Asset management focuses on investments
to ensure the proper return is
achieved. Work management organizes the performance of work so as to track and ultimately optimize the resources expended in executing
operations. Performance management
is the method of ensuring the successful
completion of these programs. The
term performance management has been
loosely applied for a long time, and it now
has multiple meanings. For our purposes,
we define performance management as
the mechanism for measuring and communicating
results of asset and work management
efforts. Consider this question:
Who is responsible for performance management
in your organization? Whether
the answer is “everyone” or “no one,” the
results are probably the same – mediocre
at best. In the asset management model,
the asset owner is responsible for:

  • Extracting the primary data of operations
    (from work management systems,
    financial and revenue systems, outage
    management systems, etc.);
  • Analyzing this data against stated
    goals, formatting the results into easily
    digested tables and charts (often called
    “dashboards”); and
  • Conveying the information to leadership
    for communication with their
    direct reports.

In the best circumstances, the asset
owner is present at routine status meetings
between the C-level executive and
his lieutenants and fosters the discussion
about performance.
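
To make the extract-analyze-convey cycle above concrete, the sketch below
rolls a few operational metrics into a dashboard-style table of actuals
versus goals. The metric names, values and the higher-is-better rule are
hypothetical stand-ins for what would actually come out of the work
management, financial and outage management systems.

    # Hypothetical dashboard roll-up: compare operational actuals with stated goals.

    records = [
        {"metric": "bill_accuracy_pct", "actual": 98.2, "goal": 99.0},
        {"metric": "emergent_work_pct", "actual": 14.0, "goal": 10.0},
        {"metric": "unit_cost_per_job", "actual": 412.0, "goal": 395.0},
    ]

    def dashboard(rows):
        lines = [f"{'Metric':<20}{'Actual':>10}{'Goal':>10}  Status"]
        for r in rows:
            # Accuracy should rise toward its goal; emergent-work share and unit
            # cost should fall toward theirs, so the comparison direction differs.
            higher_is_better = r["metric"].endswith("accuracy_pct")
            on_track = (r["actual"] >= r["goal"]) if higher_is_better \
                       else (r["actual"] <= r["goal"])
            status = "on track" if on_track else "behind"
            lines.append(f"{r['metric']:<20}{r['actual']:>10.1f}{r['goal']:>10.1f}  {status}")
        return "\n".join(lines)

    print(dashboard(records))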

There is an important but subtle point
here: performance management is key to
success. It should not be a part-time or
spare-time job. It includes distributing data
that clearly indicates how the enterprise
is performing against goals, and looking
at the rest of the data, comparing it with
benchmark organizations and determining
what the enterprise should be striving
for after it achieves its current goals. This
should include areas such as reducing
routine maintenance on noncritical
components, something utilities are loath
to do even though they expend inordinate
resources on it. It also includes establishing
unit cost and volume targets for routine
work instead of budgetary targets, and
determining where activity-based management
should be focused next to capture the greatest savings
from improvement. These activities cannot
be performed while putting out the
daily fires associated with operating a live
system. The asset owner is normally not
a large group, but should be the individuals
who both know the business (including
where the “skeletons” are) and know the
industry (benchmarks and best practices).
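
To show what unit cost and volume targets mean in practice, the minimal
sketch below derives a unit cost from spend and completed volume for each
routine work category and reports the variance against a target. The
categories, dollar amounts and targets are invented for illustration.

    # Hypothetical unit-cost tracking for routine work categories.

    work = {
        "pole_replacement": {"spend": 1_240_000, "units": 310, "target_unit_cost": 3_800},
        "meter_exchange": {"spend": 96_000, "units": 1_200, "target_unit_cost": 75},
    }

    for category, w in work.items():
        unit_cost = w["spend"] / w["units"]
        variance_pct = 100 * (unit_cost - w["target_unit_cost"]) / w["target_unit_cost"]
        print(f"{category}: ${unit_cost:,.0f}/unit ({variance_pct:+.1f}% vs. target)")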

The Parts Move in Harmony

Expectations of investors must be met with
results, which means performance at cost.
These expectations are carefully captured
and communicated by the asset owner.
The asset manager translates them into a
business strategy. Strategies are executed
by the utility operators through standard,
repeatable, and cost-effective tasks. The
relationship manager represents customers
and regulators, and the cycle repeats.

A strategic network of information is
leveraged to enable the process. An
asset register containing details of the
infrastructure is integrated with work management,
analytical and risk assessment
tools. Design standards are aligned with
master plans and strategies, and appropriate
compliance programs are established.
Forecasting capability is enhanced to
assure reliability and prevent/reduce
emergent work. Maintenance programs
utilize a reliability-centered maintenance
approach to optimize identification and
continual improvement of maintenance
tactics. Materials procurement for projects
is integrated and optimized to reduce
investment and operating cost.
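
As a rough sketch of what a shared asset register record might carry so
that work management, analytical and risk assessment tools can all draw
on it, the data class below is one possible shape built on assumptions;
the fields, scales and risk proxy are not a prescribed schema.

    # Hypothetical asset register entry shared by work management and risk tools.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AssetRecord:
        asset_id: str
        asset_class: str            # e.g., "distribution_transformer"
        location: str               # operations center / grid reference
        install_year: int
        criticality: int            # 1 (low) to 5 (high), feeds risk assessment
        condition_score: float      # 0.0 (failed) to 1.0 (new), from inspections
        open_work_orders: List[str] = field(default_factory=list)

        def risk_index(self) -> float:
            """Illustrative risk proxy: criticality weighted by degraded condition."""
            return self.criticality * (1.0 - self.condition_score)

    xfmr = AssetRecord("TX-20481", "distribution_transformer", "OC-North/F12",
                       1998, criticality=4, condition_score=0.55)
    print(xfmr.risk_index())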

Finally, best practices are captured,
communicated and capitalized on to facilitate
better scheduling and cost estimating.
Individual jobs are measured and tracked
– holding field forces accountable for completing
work as planned/scheduled. Actual
parameter data are captured and made
available to anyone with access to the
work management system. Performance
metrics are evaluated centrally in a consistent
manner across all OCs. The result is
a coordinated system of people and tools
that can respond to any threat or opportunity
while simultaneously performing
routine operations in a consistent, predictable
and cost-effective manner.

Understanding and Driving Consumer Adoption of E-Bills

This is an excerpt. To read this paper in its entirety: Understanding and Driving Consumer Adoption of E-Bills

Introduction

Online bill payment is an American mainstream practice. Three out of four U.S. online households have paid at least one monthly bill online, according to a 2007 survey titled, Consumer Billing and Payment Trends 2002-2007; The Volume of Electronic Bill Payments Exceeds Check Bill Payments for the First Time, conducted by CheckFree Research Services. The same study indicated that the percentage of electronic bill payments has surpassed the percentage of check bill payments among online households. But as paperless bill payment has briskly moved through the stages of consumer adoption, its counterpart, paperless bill presentment, has been slower to gain mainstream momentum.

An electronic bill (e-bill) is defined as an electronic version of a company’s bill that is delivered to a consumer through the website of the company that issued the bill (a “biller”), a financial institution Internet site or a web portal. An e-bill contains the same information as a paper bill and has the same due date. Paperless billing occurs when consumers replace a paper bill with an e-bill. Today, many firms require customers to “shut off” their paper bill, either immediately, or after some period of time, in order to receive an e-bill.

Trends, leading indicators and precursors to e-bill adoption indicate that the potential exists for stronger growth. Laying the groundwork for mainstream adoption are: broadband penetration, generational usage, recent press coverage (regarding “going green”), and growth in online banking, bill payment and online bill viewing (without the paper shutoff).

To date, the adoption rate for e-bills lies squarely in the middle of the critical early adopter stage. To “cross the chasm” into the mainstream, electronic billing will require focused investment in terms of product and promotion to overcome ingrained human habit, lack of education and, at times, a perceived lack of comparative advantage to paper methods.

This document provides a comprehensive analysis of paperless billing utilizing several research inputs and models to help biller organizations understand the challenges and opportunities associated with consumer adoption of e-bills. Its purpose is also to assist organizations in implementing marketing strategies and tactics that will drive paperless bill conversion rates to optimal levels.

Critical Call – The Emerging Criticality of Cellular Services

The advancements in cellular
services that have been witnessed
in recent years have had a dramatic
effect on the decision-making processes
of many field organizations within
the utilities industry. Current emergency
voice systems, primarily based on land
mobile radio (LMR), are reaching or have
exceeded their traditionally long life
cycle. The cost of replacement or upgrade
is enormous, considering the scale of the
systems and the cost of replacing mobile
and dispatch units in the field. Even the
greatest advancements in LMR, from
trunking to digital IP voice and data systems,
are no match for the advancements
witnessed in cellular. From critical voice
to the emergence of mobile data, large
enterprises, such as utilities, are finding
themselves in need of more than LMR
– yet they are culturally attached to it.

Some of the key questions coming out
of the utilities industry today are, “Where
does cellular fit within my long-term strategy?” and “What do I do about the
near-term requirements for emergency
voice services?”

The Growing Criticality of Cellular

Utilities are finding that they have
a growing dependence on cellular
services.

  • The widespread use of cellular for noncritical
    applications has subtly become
    critical as people and processes become
    more integrated;
  • Anytime, anywhere access to key
    resources and decision makers is
    already critical;
  • Service technician “call ahead” supports
    customer satisfaction objectives
    and workforce efficiencies;
  • The use of mobile data dispatch will
    dramatically increase the criticality
    of cellular; and
  • The increase in real-time data value,
    from scheduling changes to emergency
    response, is raising the bar on data
    communication criticality.

The Workforce Mobility Effect – A Catalyst for Change

The nature of field communications
is clearly changing with mobile data.
The adoption of mobile computing,
changing from clipboard to computer,
is enabling more efficient execution
and accounting of workforce activities,
resulting in real business improvements.
A key to realizing these improvements
is the real or near-real-time data communications
between the dispatch and
field crew applications. For this discussion,
the term “near-real-time” describes
data synchronization between field and
dispatch within a one- or two-hour window.
These applications can (but don’t
have to) communicate real-time changes
in priority, crew location information, and
routing and mapping. These “everyday”
processes are even more critical when
widespread service disruption occurs.
The vast majority of utilities that have
moved toward workforce mobility have
done so by relying heavily on cellular
carrier networks.
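
To make the near-real-time definition above concrete, the small sketch
below checks whether field status updates reached dispatch within a
two-hour window; the record format, timestamps and the choice of two
hours as the threshold are assumptions for illustration.

    # Hypothetical check of field-to-dispatch synchronization lag against a
    # near-real-time window (two hours here, per the definition above).
    from datetime import datetime, timedelta

    SYNC_WINDOW = timedelta(hours=2)

    updates = [
        {"crew": "crew-07", "field_time": datetime(2008, 5, 14, 9, 5),
         "dispatch_time": datetime(2008, 5, 14, 9, 40)},
        {"crew": "crew-12", "field_time": datetime(2008, 5, 14, 9, 10),
         "dispatch_time": datetime(2008, 5, 14, 12, 0)},
    ]

    for u in updates:
        lag = u["dispatch_time"] - u["field_time"]
        within = lag <= SYNC_WINDOW
        print(f"{u['crew']}: lag {lag}, within near-real-time window: {within}")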

As the workforce moves toward a communications
model that includes data
as a key aspect, utilities are also facing
increasing communications costs and a
new dependence on cellular providers.

Radio System Challenges and Drivers

As the dependence on cellular increases,
the traditional radio systems are seeing
fewer and fewer users during normal
operating hours. Some key issues confronting large-scale field force operations
with respect to field communications are:

  • LMR upgrade or replacement costs are
    simply out of reach for systems that can
    span 10,000 to 20,000 square miles;
  • Spectrum availability and suitability limits
    the ability to harmonize the network
    using a common platform;
  • Land mobile radio OEMs have consolidated
    or left the business, typically leaving a
    choice of only one or two vendors; and
  • The user base is shrinking as utilities
    outsource construction and other field
    services.

LMR upgrade and replacement activity is
tempered by the realization that even the
best LMR today provides much slower data
speeds than what cellular providers already
offer throughout much of a utility's service
territory. Additionally, an LMR system must
be sized for a minimum emergency-response
capacity that will be significantly
underutilized during normal operations.
Furthermore, the ecosystem that
supports field operations will extend well
beyond the enterprise and into the partner
community. Providing mobile radio and/or
mobile data equipment to that community
becomes an even greater challenge.

The shift from voice to data services is
also producing significantly less voice
traffic, driving down capacity requirements
even though the geographic scale that
must be covered remains the same. Many
utilities have already
shifted noncritical voice communications
to carriers and have adopted carrier-provided
mobile data services.

Tactical vs. Strategic Response

Utilities are being forced to confront
a very basic question in regard to
field communications: “Will my response
to an aging infrastructure be tactical
and focus on like-for-like replacement of
current systems, or will the
investment align with a broader strategy
that includes everything from electronic
meter reading to asset monitoring and
intelligent grid initiatives?”

The Tactical Response

The tactical response leads down a narrow
path. For example:

  • Fewer and fewer vendors offer land
    mobile radio equipment;
  • Spectrum availability limits common
    system architectures;
  • Transition from analog to digital
    requires a new network design, coverage
    analysis, site requirements and a
    likelihood of additional sites; and
  • Minimum capacity requirements for
    emergency response will result in significant
    underutilization during normal
    operations as more field workers use
    cellular as their primary means of communicating.

The fact is that utilities must have communications
during critical moments
throughout the day. Safety is critical. A
renewed investment in reliable communications
is inevitable; your crew’s life may
depend on it. But in many cases, real analysis
needs to be done before you invest;
you need to know what you are getting
and not getting; and you often need a
new strategic approach to maximizing the
value of the investment.

The Strategic Response

Utilities that have a strategic response
take a structured view of current systems,
assets and capabilities as well as current
and future field requirements.

Some of the key strategic challenges
are:

  • How do we create the most business
    value for the least cost?
  • How do we leverage as few solutions
    as possible to meet business requirements?
  • Is a single solution the best overall
    answer?
  • Does the solution provide for future
    communication requirements?

The Optimal Comparative Communications
Architecture Methodology (OCCAM),
a methodology created by IBM, provides
a structured approach to building design
templates and business case templates
for studying the optimal communications
architecture for a given environment
and business need. This architecture is
based on matching the most appropriate
communication technology with a given
geographic area and business need. The
OCCAM approach is broken down into the
following phases:

  • Phase 1: Technology Identification
    – During the technology identification
    phase, the study will focus on identifying
    relevant communications technologies,
    understanding each technology’s
    strengths and weaknesses, and documenting
    the high-level costs associated
    with each technology.
  • Phase 2: Environment Stratification
    – During the environment stratification
    phase, the study will focus on identifying
    the attributes that make up a unique
    environment. These environments
    should be general enough to be replicated
    in multiple geographic locations
    through the utility service territory but
    specific enough so as to best meet the
    financial and business needs for a
    given area.
  • Phase 3: Design & Business Case
    Template Development – During the
    design & business case template development
    phase, the study will leverage
    the information gathered during the
    previous phases to develop architectural
    design templates and business
    case templates for each of the unique
    environments identified within the
    utility’s service territory.
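
As a rough, non-authoritative sketch of the idea behind OCCAM's later
phases (matching a communication technology to a stratified environment),
the code below scores hypothetical technology profiles against weighted
environment attributes and picks the best fit. The attribute names,
weights and profiles are invented for illustration and do not reproduce
IBM's actual design or business case templates.

    # Hypothetical scoring of communication technologies against environment strata.
    # Technology profiles and environment weights are illustrative assumptions.

    technologies = {
        "cellular_carrier": {"coverage": 0.9, "data_speed": 0.9, "hardening": 0.5, "cost": 0.8},
        "digital_lmr": {"coverage": 0.8, "data_speed": 0.3, "hardening": 0.9, "cost": 0.4},
        "licensed_wimax": {"coverage": 0.6, "data_speed": 0.8, "hardening": 0.6, "cost": 0.5},
    }

    # Weights express what matters most in a given environment stratum,
    # e.g., a dense urban district versus a remote rural service area.
    dense_urban = {"coverage": 0.3, "data_speed": 0.4, "hardening": 0.1, "cost": 0.2}
    remote_rural = {"coverage": 0.4, "data_speed": 0.1, "hardening": 0.3, "cost": 0.2}

    def best_fit(weights):
        def score(profile):
            return sum(weights[k] * profile[k] for k in weights)
        return max(technologies, key=lambda name: score(technologies[name]))

    print("dense urban :", best_fit(dense_urban))
    print("remote rural:", best_fit(remote_rural))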

Conclusion

In applying the OCCAM principles to several
utilities that are in need of investing
in aging field radio systems, a case can
be made for shifting to a cellular-service-based
model. In order for utilities to realize
the benefits of this model, investment
and cooperation will be vital.

Collaboration between cellular carriers
and utilities has the potential to drive
significant improvement in carrier networks
and to attract unlikely investment in
carrier infrastructure from enterprises
that can spend less on hardening cellular
sites and establishing priority, coverage
and availability requirements than they
would spend on a wholesale replacement
of their existing LMR environment. In addition,
as carriers advance, whether through
licensed WiMAX or future 4G technologies,
collaboration will allow both carrier
and utility to better understand the needs
and capabilities of each other. In this
increasingly globally integrated world, the
integration of interests between cellular
service providers and large-scale mobile
workforces has the potential to benefit
much more than just the carriers and
the workforces. Improvements in backup
power facilities, cooperative priority
restoration services and coverage
enhancements that reach every power
meter would translate into greater coverage
and availability for us all.

Regardless of the final outcome, utilities
must have a plan for the future that meets
the field requirements of today
and tomorrow.

Utilities Should:

  • Develop a comprehensive communication
    road map defined by business
    goals;
  • Develop detailed communications
    requirements for achieving business
    goals;
  • Collaborate with those that have intersecting
    interests; and
  • Develop a clear path to closing the gap
    between current state and future state.

Utilities Should Not:

  • Always wait for something better;
  • Ignore obsolescence and an aging infrastructure;
  • Develop technology road maps that fail
    to map to business goals; and
  • Fail to collaborate within their business
    ecosystem.

In IBM’s quest to drive innovation that
matters, we often encounter the effects of
culture and tradition. Moving beyond that
is the most difficult step a utility will make.
It is also potentially the most beneficial.