Transforming Communication Infrastructure

In the grid of the past, a standard point-to-point analog line to each substation
was all that was needed. It worked well for many years, kept data requirements
simple and minimized application integration. When a new data requirement arose,
such as a dial-up circuit for a C&I meter, one simply added another phone line,
and communication costs crept upward. As equipment and applications advanced,
the result was a spaghetti factory of dedicated circuits feeding silos of
technology at the substation.

These legacy dedicated circuits carry high communication and maintenance costs
while providing little integration and no foundation for future requirements.
The architecture hampers new technology, which cannot leverage the existing
network platform, and becomes a budgetary hurdle for implementing new practices.
It is also a ticking time bomb of availability, as communication providers
eliminate analog circuits and continually raise recurring costs.

The substation is rapidly becoming a distinct network of information not limited
to just operations. Applications and employees need real-time connectivity to
the data assets for constant evaluation and maintenance. Resisting change and
maintaining the status quo is quickly becoming the wrong answer for distribution
companies. A strategic investment in network architecture will enable the use
of information and provide a lasting benefit for the foreseeable future.


Any change or investment in infrastructure advancement must improve service
levels and network performance while minimizing regulatory pressure on price.
Most distribution companies' objections to changing network architecture have
come down to two things: cost, and resistance to change itself.

Not updating the network architecture may keep prices stable, but it is likely
to increase service interruptions. Any change must be price sensitive and provide
a real return on investment. It has to address the concerns of operations, make
use of existing capital infrastructure, create a secure environment and meet
new regulatory standards.

A major consideration for the chosen network platform is to support multiple
applications and data assets within the substation and the enterprise. The use
of new technology will assist in a positive ROI scenario, enabling better asset
management and utilization techniques. Figure 1 lists several components and
use cases possible on the network.

There is only one real choice of platform that effectively meets the considerations
outlined above, and that is a TCP/IP network. Implementation of a TCP/IP network
will not only be cost effective but also provide far greater functionality to
distribution companies than today’s legacy network. For instance, with the migration
of SCADA and control traffic to an IP-based solution, single use and high-cost
analog circuits can be decommissioned. The data can now be delivered over a
single digital circuit. With an IP-controlled network, engineers will now be
able to monitor and make changes to the site remotely, reducing the expense
and delay involved in making physical site visits.

An IP-based solution enables redundancies to be put in place that cannot be
achieved with today’s analog network. For sites where connectivity is imperative,
an IP solution provides the ability to dial up over a backup circuit or reroute
traffic to a secondary data center in the case of a communications failure with
the primary data center. It can also allow duplication of the information generated
so multiple data centers can simultaneously monitor the networks.
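
The failover and duplication behavior described above can be sketched in a few
lines. The following Python fragment is purely illustrative (the link names,
structure and routing policy are invented for the example, not drawn from any
specific SCADA product): it tries the primary circuit first, falls back to a
backup circuit, and mirrors each frame to any secondary data centers that are
monitoring.

```python
# Hypothetical sketch of the failover logic described above: try the
# primary data center, fall back to a backup circuit, and mirror
# telemetry to every reachable monitoring data center.

def route_telemetry(frame: bytes, links: dict) -> list:
    """Deliver a telemetry frame over the first healthy link and
    return the list of destinations that received a copy."""
    delivered = []
    # Preference order: primary circuit, then dial-up backup.
    for name in ("primary", "backup"):
        link = links.get(name)
        if link and link["up"]:
            link["buffer"].append(frame)
            delivered.append(name)
            break  # control traffic needs only one active path
    # Duplicate to any secondary data centers that are monitoring.
    for name, link in links.items():
        if name.startswith("monitor") and link["up"]:
            link["buffer"].append(frame)
            delivered.append(name)
    return delivered

links = {
    "primary":  {"up": False, "buffer": []},   # simulated WAN outage
    "backup":   {"up": True,  "buffer": []},
    "monitor1": {"up": True,  "buffer": []},
}
print(route_telemetry(b"\x05\x64", links))   # ['backup', 'monitor1']
```

In a real deployment the "links" would be managed by routing protocols rather
than application code, but the decision logic is the same: one active path for
control traffic, plus duplication for monitoring.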

The IP solution will also allow the control of SCADA information to be encrypted
before it is sent across the network, providing an additional layer of operational
security and meeting potential future regulatory requirements. Once the IP migration
is complete, significant cost savings will be achieved with the decommissioning
of legacy infrastructure required by today’s network.

Network Considerations

There are many choices of TCP/IP platforms to evaluate for the communication
infrastructure. Most commonly used are satellite, dedicated fiber, frame relay,
cellular, radio and PSTN (public switched telephone network). Each has its
advantages and disadvantages, and all can be part of the total solution.

Satellite networks are easy to integrate and highly portable for installation,
but their relative data throughput and cost are unattractive. They also introduce
a transmission delay that not all applications can handle, especially older ones.

Cellular is also easy to install but has coverage gaps that limit network
availability, making it suitable only for redundancy or noncritical data assets
such as AMR, C&I metering and remote diagnostics. One must also take care to
secure a cellular network with the proper components, as these networks are
typically open to the Internet. With proper security, it is a credible low-cost
platform for redundancy. The recurring cost is tied to bandwidth utilization
and can run as low as a few dollars per month; the cellular link is used only
when a critical failure occurs in the primary network. Its portability and ease
of installation make cellular's total cost of ownership the primary choice for
redundancy.

Dedicated fiber fits all the requirements but is very expensive to install
and maintain once in place. Radio networks will work for most applications but
throughput is a major concern as well as interference from other sources. Only
a few types of radio networks in the non-licensed spectrum will support TCP/IP.
PSTN networks will work for low-bandwidth applications – similar to cellular,
except that the costs are higher. PSTN is a good choice where cellular is not
available.

Frame relay is the best choice for the primary network. It provides the high
availability and throughput performance required for mission-critical applications.
Frame relay is a copper connection from the communication provider's local point
of presence and is purchased in segments sized to each substation environment.
A key advantage: when bandwidth demand exceeds the current segment, throughput
over the same copper connection can be ratcheted up by the service provider as
needed, from a 56Kbps connection up to a full T-1.

A multi-protocol label switching (MPLS) solution is recommended as the core of
the network, connecting all of the circuits into an MPLS cloud. The MPLS
functionality rides on top of the basic TCP/IP network structure, providing
advanced features that have real relevance in the utility industry. Benefits
of an MPLS network are:

  • Any-to-any connectivity;
  • IP convergence;
  • Redundancy and availability;
  • High degree of bandwidth scalability; and
  • Performance.

MPLS will provide a core network that is more resilient because, unlike current
ATM clouds and other platforms, MPLS can route around failures within the service
provider's network. If the service provider experiences congestion or hardware
failures within its network, the utility will not lose connectivity between
its sites.

The MPLS network has the intelligence to find a different path between the
endpoints and will automatically reroute the traffic around the network failure.
Service level agreements are obtainable from the service providers covering
availability and performance.

MPLS provides the ability for any site connected to the cloud to communicate
directly with any other site connected to the cloud. By removing the aggregation
points within the network, any-to-any connectivity helps support future projects
such as the consolidation of servers or call centers into a centralized facility.
Any-to-any connectivity also supports the migration of analog voice to a voice
over IP solution in the future. The removal of the dependencies on aggregation
sites enables shorter circuits to be installed and the future removal of legacy
communication infrastructure. Shorter circuits are less expensive and the removal
of the legacy links will greatly reduce operating and maintenance expenses and
reduce future capital expenditures.

MPLS also allows prioritization of data and configuration within each network
endpoint. This is critical to the operation of the communication infrastructure
within the utility marketplace. The classification of each data component enables
the end user to reduce the segment size of the pipe and allocate portions of
the segment for critical applications. Examples of the classification are in
Figure 2.

Class A data is the highest priority, reserved for real-time control and operations;
Class B is the next level, and so on. The benefit of classification is that each
class is configured with a standard and a maximum bandwidth allocation, allowing
the higher classes to acquire bandwidth as needed. In avalanche mode, the frame
relay network can thus be focused on the applications critical to operations
and services; during steady-state operation, a larger percentage of the bandwidth
is available to the lower classes, which are then throttled back in critical
situations.
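
A simple model makes the standard-versus-maximum mechanics concrete. In the
sketch below, the class names, percentages and two-pass policy are invented for
illustration (real QoS configuration lives in the router, not application code):
each class is first granted its guaranteed share, then higher-priority classes
borrow unused bandwidth up to their cap.

```python
# Illustrative model of class-based allocation: each class has a
# guaranteed (standard) share and a cap (maximum), and higher classes
# may borrow bandwidth left unused by lower ones. The percentages are
# example values, not utility or vendor standards.

CLASSES = [  # (name, standard share, maximum share), highest priority first
    ("A", 0.40, 0.80),   # real-time control and operations
    ("B", 0.30, 0.50),
    ("C", 0.30, 0.40),
]

def allocate(capacity_kbps: float, demand: dict) -> dict:
    """Grant each class at least min(demand, standard), then let
    classes in priority order borrow leftover capacity up to max."""
    grant = {}
    # Pass 1: guaranteed minimums.
    for name, std, _ in CLASSES:
        grant[name] = min(demand.get(name, 0.0), std * capacity_kbps)
    leftover = capacity_kbps - sum(grant.values())
    # Pass 2: priority borrowing up to each class's maximum.
    for name, _, mx in CLASSES:
        extra = min(demand.get(name, 0.0) - grant[name],
                    mx * capacity_kbps - grant[name],
                    leftover)
        if extra > 0:
            grant[name] += extra
            leftover -= extra
    return grant

# Avalanche mode on a 56 kbps segment: Class A floods, others idle low.
print(allocate(56.0, {"A": 60.0, "B": 5.0, "C": 2.0}))
```

Under this policy, Class A expands from its 40 percent guarantee toward its 80
percent cap during an avalanche, while the lower classes keep whatever they
actually need up to their own guarantees.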


New regulatory standards are requiring that all routable protocols linked to
critical assets be cyber secure, preventing unauthorized access. Also, physical
access to the critical assets in the substation is to be logged and monitored
during operation, adding another data requirement to the network architecture.

Cyber security is achieved through industry-standard techniques such as secure
shell and IPsec, which provide both authentication and encryption for the required
network connections within the substation. The physical requirement can be met
through techniques such as cameras, RFID or bar-code scanners paired with
off-the-shelf applications that provide real-time notification and log the
identity of any personnel entering each critical area.
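
The article names secure shell and IPsec as the techniques of choice; as a
stdlib-only stand-in for the same "authentication plus encryption" requirement,
the hedged Python sketch below builds a TLS client context that refuses
unencrypted or unauthenticated connections. The function name and defaults are
illustrative, not part of any substation product.

```python
# Hedged illustration of "authentication plus encryption" for a
# substation link, using Python's stdlib TLS as a stand-in for the
# SSH/IPsec tunnels named above.
import ssl

def substation_tls_context(ca_file=None) -> ssl.SSLContext:
    """Build a client-side context that refuses unencrypted or
    unauthenticated connections to a substation gateway."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # no legacy protocols
    ctx.check_hostname = True                     # authenticate the peer
    ctx.verify_mode = ssl.CERT_REQUIRED           # reject unverified certs
    return ctx

ctx = substation_tls_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True
```

The same posture – mandatory peer authentication, no fallback to cleartext –
is what an IPsec or SSH policy would enforce at the network layer.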


There are several tangible benefits for distribution companies that implement
a TCP/IP communication platform. Among them:


  • Enhanced information allows faster and cheaper recovery from faults.
  • Remote asset monitoring allows faults to be anticipated and avoided.
  • Real-time information provides detailed data on faults, keeping blackouts
    as short as possible.
  • Higher availability of data in network performance and redundancy can increase
    service levels.
  • Reduces outages and network downtime.
  • Meets new regulatory requirements for cyber security of critical assets.


  • Lower communication costs achievable.
  • Load balancing of demand to better manage and reduce peaks.
  • Custom analytics enhances asset life prediction and network planning.
  • Provides low cost availability of data to back-end applications.
  • Responds to opportunities derived from deregulation.
  • Remote asset monitoring supports “just in time” replacement of failing assets.
  • Optimizes usage of aging network infrastructure.


Utilities require a strategic investment in communication infrastructure to
provide a platform that lets business units operate and grow effectively within
the enterprise. The legacy approach of building point solutions for each
communication need is a failure looking forward. An investment in a TCP/IP
platform, such as an MPLS frame relay network, is the only cost-effective
approach for remaining competitive in an industry ready to enable growth.

Toward Intelligent Grids

The promise of the intelligent energy grid remains elusive; however, a steady
wave of innovation in communication and information technologies, combined with
advancement in emerging energy technologies, is bringing it closer to reality.
Wider adoption of control-theory principles in a business environment (e.g.,
business process management, adaptability, real-time enterprise) and the convergence
of communication, information and energy technologies are transforming traditional
network management and coalescing business systems, field personnel and local
control devices into one connected environment (see Figure 1).

As some semblance of normalcy has returned to the industry (e.g., return of
restructuring comparability for large commercial and industrial [C&I] customers,
upturn in the economy, restoration of utility financial health), one of the
issues we see as turning the back-to-basics corner is the continuing emergence
of the intelligent-grid concept. Interestingly enough, two major intelligent-grid
enablers – communications and software – are progressing steadily irrespective
of energy player spending. The third intelligent-grid enabler – energy technologies
– remains dependent on energy player investments but represents one of the areas
where spending is increasing in the face of reliability challenges, wholesale
market evolution, customer (large C&I) expectations and regulator demands.

Communication Technologies

Utilities in the run-up to competitive retail markets made significant
communications plays, from extreme electricity-to-telecommunications flips, to
common-carrier models, to simply trenching and construction. Although most
utilities have stepped back from aggressive telecom plays, communication advances
continue to proliferate, and the energy industry is a free rider.

IP Ubiquity

Standardization on the Ethernet protocol across a wide range of energy networks
(e.g., business, control, mobile, AMR, LAN, WAN, SCADA, customer gateways, substation
automation, weather stations) is finally solving the decades-old problem of
a lack of universal communication infrastructure among disparate islands of
automation in energy. The first attempt to direct coherence of communication
networks in energy in the late 1980s, through the Electric Power Research Institute
utility communication architecture (UCA) initiative, failed to include customer
networks and encourage protocol standardization among business systems, process
control and customer communication networks. The latest initiatives (e.g., UCA
2.0, E2I CEIDS) and the activities of the IEEE and IEC standards groups (e.g.,
IEC 870) have established Ethernet as a dominant communication method and promoted
IP ubiquity. The proliferation of IP-enabled sensors, devices and automata owned
by energy companies and end-customers is creating a communication environment
that, leveraging Metcalfe’s Law, can significantly amplify the value of the
energy communication network and considerably increase the number of business
processes that can be automated, monitored and optimized through the achieved
information ubiquity and the resultant controllability and observability of the
energy grid.
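
The Metcalfe's Law point above can be made concrete in one line: the number of
potential point-to-point links among n IP-enabled endpoints grows as n(n-1)/2,
so the network's potential value scales roughly with the square of the number
of connected sensors, devices and automata.

```python
# Potential pairwise links among n IP-enabled endpoints: n(n-1)/2.
# This is the combinatorial fact behind the Metcalfe's Law argument.

def potential_links(n: int) -> int:
    return n * (n - 1) // 2

# Ten devices versus a hundred: 10x the endpoints, ~100x the links.
print(potential_links(10))    # 45
print(potential_links(100))   # 4950
```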

Pervasive Communication

In addition to IP ubiquity and creation of an enterprisewide communication
environment that integrates business systems (including partners’), utility
personnel, customer networks, sensors, devices and automata, the buildup and
technological advancement of public and private networks (e.g., WAN: 2.5/3G,
LAN: 802.11g, PAN: Bluetooth) have created much better and more affordable communication
coverage. This effect (resulting from Butter’s Law) makes information not only
financially affordable, but also pervasive and persistent. This affordable and
always-on access to information encourages adoption of sense-and-respond business
paradigms (e.g., on demand, adaptive, real time), creating energy companies
that are more agile and ready to react to technical process variation or business
environment changes as they occur.

PLC and BoPL Accessibility

Utilities are once again emerging in the communication space, this time leveraging
the regulated wire system for the transport of information and data that will
bridge the last mile to premises and that will provide the communication transport
for grid information ubiquity. Utilities are using two plays: the energy management
play and the communication play. Regarding the energy management play, utilities
have used PLC at very low speed to capture energy data from remote locations.
This utility play is now using faster-speed two-way transport to move information
throughout the system, linking from the consumer to the generator and back.
Typically, this service is offered by the wires’ line of business (LOB) for
meter data collection, dynamic pricing, controlling devices, etc. The more robust
play in this space by utilities, the communication play, is broadband over power
lines as a communication, ISP and entertainment service offering that can also
provide energy management services. This is typically offered by the communication
subsidiary of the utility; however, in the case of the offering being provisioned
by a vendor/third party, the utility simply may be providing a landlord service
over the utility’s wires.

Information Technologies Trends

IT and software, much like communications, are advancing largely as a function
of global economic drivers rather than the energy industry. Here are the key
trends that impact intelligent-grid transformation.

Integration Through Service-Oriented Architecture

The major IT architectural trend is the introduction of the service-oriented
architecture (SOA), based on the principle of the object-oriented modularization
and interoperability through XML-based technologies such as Web services. Realizing
inflexibility, high total cost of ownership and upgradeability issues of the
currently dominant monolithic enterprise applications (e.g., multi-tiered, transactional,
single relational database management system), vendors and leading IT organizations
are moving toward an enterprise IT environment where a requested unit of work
(a service) could be accomplished by a service provider (e.g., an application
module, an external BPO vendor, a device) invoked by a service broker. Using
Using an enterprise service bus as the integration and service-orchestration
backbone, new composite applications extend over components of legacy systems
wrapped in a Web services envelope, commercial off-the-shelf vendor-procured
modules and external vendor-provided services. SOA also facilitates the inclusion
of personnel (e.g., field crews), devices and automata (e.g., sensors, programmable
logic controllers, intelligent protection units) using XML-based technology and
industry-specific vocabularies (e.g., CIM).
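
The broker pattern at the heart of SOA can be reduced to a small sketch. The
class and service names below are invented for illustration (a production ESB
would add transport, schemas and orchestration): providers register under a
service name, and composite applications request work through the broker rather
than calling providers directly.

```python
# Minimal sketch of the service-broker pattern described above:
# providers (an application module, a device, an outsourced process)
# register under a service name; consumers invoke by name only.
# All names here are illustrative, not a specific vendor ESB API.

class ServiceBroker:
    def __init__(self):
        self._registry = {}   # service name -> callable provider

    def register(self, name: str, provider):
        self._registry[name] = provider

    def invoke(self, name: str, **request):
        provider = self._registry.get(name)
        if provider is None:
            raise LookupError(f"no provider registered for {name!r}")
        return provider(**request)

broker = ServiceBroker()
# A legacy module wrapped as a service, and a notification service.
broker.register("meter.read", lambda meter_id: {"meter": meter_id, "kwh": 42.0})
broker.register("crm.notify", lambda account, msg: f"queued for {account}: {msg}")

# A composite application composes both without knowing the providers.
reading = broker.invoke("meter.read", meter_id="M-17")
print(broker.invoke("crm.notify", account="A-9", msg=f"{reading['kwh']} kWh"))
```

The point of the indirection is exactly the one made above: a provider can be
swapped for an external BPO vendor or a field device without touching the
composite application.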

Software Extensibility

After harvesting the low-hanging fruit of incremental process improvement enabled
by compartmentalized applications, energy companies are now focusing on specific
energy cross-functional end-to-end business processes (life cycles) covering
several LOBs and extending across the back office, middle office and front office
into the service points (e.g., field service, IP-enabled sensor, automata) and,
in some cases, even including partner and customer systems. This vertical trend
is a business counterpart to the IT enterprise integration architecture movement
(enabled through SOA and enterprise service bus). It also resembles horizontal
back-office process integration initiatives created by the emergence of the
ERP systems and, later, CRM systems that were focused on the entire life cycle
processing. Optimization/orchestration of the complex end-to-end business processes
(extended across applications into the field and monitored/executed by sensors
and automata) is enabled by convergence of the communication, information and
energy technologies into an intelligent grid.


The trend toward an adaptive real-time enterprise is based on the premise that
an organization can achieve a new level of operational excellence by reducing
latency and improving the visibility and analysis of the data across systems,
both within an enterprise and throughout its supply chain. It is essentially
a business instantiation of the fundamental control theory concept of a negative
feedback control and resembles the Darwinian sense-and-respond paradigm. The
promise of shortening the information creation and delivery cycle and providing
an ability to monitor business process in real time through a closed-loop decision
support system is alluring for energy companies focused on cost reduction and
achieving operational efficiency. In addition to business key performance indicators
provided through advanced analytics that sniff the transaction processing environment,
the operational process indicators (e.g., transformer loading) acquired through
SCADA and local monitoring devices can provide additional insights into the
state of the energy system. This can enable better asset utilization and tighter
operation of the energy delivery infrastructure, resulting in higher reliability,
better efficiency and faster response to environmental changes.
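
The negative-feedback analogy above can be shown as a toy control step: a
monitored indicator (transformer loading, as in the text) drifts from a target,
and the controller issues an adjustment proportional to, and opposing, the
deviation. The gain and target values are invented for illustration.

```python
# Toy instantiation of the closed-loop idea: a negative feedback step
# that nudges an operational setpoint back toward target whenever the
# monitored indicator drifts. Gain and target are example values.

def feedback_step(loading_pct: float, target_pct: float = 80.0,
                  gain: float = 0.5) -> float:
    """Return a load-transfer adjustment (in % of rating) that is
    proportional to, and opposes, the deviation from target."""
    error = loading_pct - target_pct
    return -gain * error   # negative feedback: act against the error

print(feedback_step(96.0))   # -8.0 -> shift ~8% of load elsewhere
print(feedback_step(80.0))   #  0.0 -> at target, no action
```

The business analogue is identical in shape: measure a key performance
indicator, compare it with a target, and trigger a corrective process whose
magnitude tracks the deviation.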


Field forces are becoming a component of the intelligent grid, fueled by ubiquitous
communications, extensibility and mobile devices. As applications and back-office
services are extended into the field (forward deployment), the field crew is
empowered to resolve issues in real time, for the benefit of the grid, the consumer
and the utility. Similarly, the ability to dispatch and redispatch field forces,
contractors and outsourcers on an almost real-time basis is the key for the
wires’ LOBs to achieve operational efficiency. As back-office applications and
field force dispatchability are implemented, the field force can be enabled
to validate and renew distribution system asset data in real time. Finally,
as mobile device functionality and diversity improve, costs continue to drop
and communication ubiquity is achieved, the increases in numbers of devices
(outpacing traditional PCs) will further enable mobility and make the field
force a true component of the intelligent grid.

The Impact of Emerging Energy Technologies

Unlike communications and IT/software advances that will fuel the intelligent
grid transformation regardless of the energy sector involvement, the third prong
to advance the grid – energy technologies – is the province of the energy sector.
Key technologies that enable innovative business processes, serve customers and
benefit the pool include sensors, power electronics, modeling and simulation,
intelligent control systems, distributed generation and storage technologies.
Once integrated, they will enhance the quality of the energy delivery and maintenance
of the supply/demand equilibrium through increased reliability, consumer gateways,
automated megawatts and distributed energy resources delivering atomistic megawatts.


Radial feeder electric distribution systems deliver reliability levels typically
in the range of 99.98 percent (approximately 100 minutes of annual outage time,
as indicated by SAIDI). New technology in C&I processes has created an environment
where some customers have a zero tolerance of any type of power interruption.
Numerous initiatives (e.g., DV 2010 consortium) are focused on increasing system
reliability of the existing network by changing the operating mode from radial
to looped and investing into new multi-tiered protection schemata enabled by
local automation control. The high-speed networked primary voltage distribution
system implements a combination of traditional directional overcurrent protection,
distribution automation and high-speed communications to accomplish a high level
of reliability to the customer, creating a premium operating district. The system
integrates equipment on both sides of the substation fence into a single system
using peer-to-peer communication and control logic to autonomously perform remote
switching to provide high-speed interruption, system restoration, voltage control,
remote monitoring and control, and outage detection.
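
The reliability figure quoted above is easy to verify: 99.98 percent
availability corresponds to roughly 100 minutes of outage per year, which is
why it is the number typically cited alongside SAIDI for radial feeders.

```python
# Checking the figure in the text: 99.98 percent availability is
# about 100 minutes of annual outage time.

MINUTES_PER_YEAR = 365 * 24 * 60        # 525,600

def annual_outage_minutes(availability: float) -> float:
    return (1.0 - availability) * MINUTES_PER_YEAR

print(round(annual_outage_minutes(0.9998), 1))   # 105.1 minutes
```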


Once the promise of commoditized energy markets, consumer gateways were in
vogue throughout the early days of energy market restructuring. As the markets
halted, the promise of gateways eroded. Now, however, gateways are re-emerging
as a tool that will enable end users to become a component of the intelligent
grid. Gateways enable consumers to provision self-service on their home turf,
not just through energy company portals, linking and integrating energy management
into the consumer’s total quality of life (the hot in the water, cold in the
beer and drive in the motor, with apologies to RMI CEO Amory Lovins), enabling
early adopters to think and act as an energy generator, and making energy consumption
choices on the consumer’s own terms rather than on the utility’s terms.

Distributed Energy Resources

Distributed energy resources (DER) (e.g., distributed generation sets, energy
storage, power parks) enable a wide variety of energy services for both the
restructured energy markets requiring high energy to enable profitability and
grid stability and the digital economy requiring high power through reliability,
criticality and productivity. DER operation requires local and distributed control
different from conventional centralized EMS controls. DER-induced reversed energy
flows require new multi-tiered protection schema. Finally, DER interconnection
standards are emerging through IEEE 1547 and through regulatory proceedings.
As cost curves decline, interconnection barriers crumble and focus on energy
security expands, penetration of DER is fueled and the grid becomes smarter
– benefiting consumers, the pool and the utility.

Demand Response

Demand Response (DR) programs are examples of integration of several energy
technologies enabled by communication and IT that link the retail and wholesale
markets and promote market maturity. Through these programs, consumers become
part of the equation and make the grid smarter as economic signals are delivered
and knowledgeable/intelligent reaction is enabled, if not automated. These programs
can be structured to meet emergency conditions on the grid or to provide economic
alternatives to consumers. Communication is a key component of DR programs,
with advanced two-way communication coupling usage and pricing information in
real time. Sophisticated metering is the core enabling technology.
Advanced billing and revenue management systems are also critical for DR deployment.
Price-transparent DR programs will promote disruptive technology (DG, energy
storage, power parks) deployment by customers. Although cyclical in nature even
in supply boom periods, DR serves a valuable system optimization function.
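
The "knowledgeable/intelligent reaction ... if not automated" above can be
sketched as the simplest possible automated response: a participant receives a
real-time price signal and sheds discretionary load whenever the price crosses
a contracted trigger. The strike price and load figures below are hypothetical
program parameters, not values from any actual DR tariff.

```python
# Simplified sketch of an automated DR response: shed discretionary
# load when the real-time price reaches a contracted strike price.
# The threshold and loads are hypothetical program parameters.

STRIKE_PRICE = 120.0   # $/MWh trigger for curtailment (example value)

def respond(price: float, discretionary_kw: float) -> float:
    """Return kW to curtail this interval: all discretionary load
    at or above the strike price, none below it."""
    return discretionary_kw if price >= STRIKE_PRICE else 0.0

print(respond(price=45.0, discretionary_kw=500.0))    # 0.0   (normal)
print(respond(price=310.0, discretionary_kw=500.0))   # 500.0 (scarcity)
```

Real programs layer notice periods, baselines and settlement on top, but the
economic signal-to-action link is the core of what makes the grid "smarter"
here.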

Leading energy companies must develop a vision for their play in the emerging
energy industry. Industry players that have best weathered the past half-decade
storm are the ones that have had a physical presence and linkage with the consumer.
The grid will continue to play an increasingly important role in the market
and will drive decent, if not handsome, returns.

The New New Grid

Under pressure from four distinct sources – aging assets, growing peak demand,
the emergence of new power generation technologies and revenue constraints from
regulation and theft – distribution companies around the world are seeking a
new, smarter approach to operating their networks.

Intelligent networks are based on advanced network analytics, automated meter
management, remote asset monitoring and control, mobile workforce management
and Internet-enabled supervisory control and data acquisition (SCADA). Distribution
companies operating intelligent networks have a much stronger business case
to present when they seek regulatory approval for asset investments since intelligent
networks are designed to enable electricity grids to deliver better service
without sudden price increases.

The Older Generation

In much of the electrified world, modern grids were built in the 1950s, 1960s
and 1970s. Now, many of the assets critical to running these networks (such
as power transformers and substations) are approaching the end of their expected
life spans. Yet with regulators reluctant to approve capital-intensive infrastructure
upgrades (due to the price increases they may trigger), distribution companies
find themselves operating assets beyond those design limits.

The average accounting age of assets is declining, but this can be misleading:
For accounting purposes, that age does not include the fully depreciated assets
that remain in operation. While the average accounting age of assets in the
U.S. has declined from 24.1 years in 1999 to 15.8 years in 2003, many assets
are fast approaching the end of their design life (see Figure 1). A similar
situation exists in the United Kingdom and Australia, where investment in distribution
assets peaked in the late 1960s and early 1970s.

Many Small Generators

The economics of the electricity industry show some signs of changing to favor
small-scale power generation connected to the distribution system. Two trends
push this shift. First, concern over emissions is sparking interest in new electricity
generation technologies. Second, the quest for efficiency is driving onsite
use of small-scale, gas-fired generators. New technologies, such as fuel cells,
will also be used in buildings and homes to generate electricity and heat water.

When producing electricity with a greater number of smaller generators, it
makes more economic sense to place the generator closer to the customer so that
less power is lost over the network. As a result, a myriad of small power sources
is being embedded in grids originally designed for large, centralized power
production. These grids are designed to adjust automatically to keep voltage
within a small tolerance, and the presence of many small generators can wreak
havoc with those controls. Moreover, while central transmission networks
are designed to handle power flows with sufficient flexibility to prevent a
failure, peripheral distribution networks – where distributed power generators
are being added – can handle only the maximum flow required by customers. These
networks are simply not built to handle the complex power flow management issues
that come with distributed generation, such as sudden reverse flows when customers
disconnect generators.

Consequently, distribution companies face a choice in how to handle the complexities:
either passively, by upgrading wires and other components to handle the maximum
flow from each generator, or actively, by building in sensors and switches to
monitor and control the output of generators, avoid bottlenecks, keep fault
currents within safe levels and keep voltages within statutory limits.
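
The "active" option can be illustrated with a minimal control rule: curtail a
distributed generator when local voltage rises above the statutory band, and
release the curtailment when headroom returns. The voltage band and curtailment
step below are invented for the example, not a specific standard's limits.

```python
# Sketch of the active-management option above: a controller that
# curtails a distributed generator whenever local voltage leaves an
# assumed statutory band. Band and step size are illustrative.

V_LOW, V_HIGH = 0.94, 1.06    # assumed statutory band, per-unit

def control_dg(voltage_pu: float, dg_output_kw: float,
               step_kw: float = 50.0) -> float:
    """Return the generator's new output setpoint."""
    if voltage_pu > V_HIGH:            # export is pushing voltage up
        return max(0.0, dg_output_kw - step_kw)
    if voltage_pu < V_LOW:             # headroom: allow more output
        return dg_output_kw + step_kw
    return dg_output_kw                # in band: leave it alone

print(control_dg(1.08, 400.0))   # 350.0 -> curtail to pull voltage down
print(control_dg(1.00, 400.0))   # 400.0 -> no action needed
```

The passive alternative, by contrast, sizes the wires for the worst case so
that no such controller is ever needed, at considerably higher capital cost.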

Under Pressure

Added to this, revenue pressures from regulation and theft are constraining
distribution companies' ability to invest in new infrastructure. Despite key
differences in regulatory
regimes globally, in most markets changes in network pricing and rates of return
will continue to require regulatory clearance. Regulators must protect the
interests of customers by ensuring a continuous, high-quality supply of
electricity, but they also seek to avoid the political ramifications of rate
hikes, and so are often reluctant to authorize investment in distribution
assets. This combination
of motives gives officials an acute cost-benefit sensitivity. In this climate,
distribution companies must demonstrate the business case that the money they
propose for renewing the network is money well invested.

Revenue lost to theft also represents a constraint on investment. Theft of
electricity is a major issue affecting distribution company balance sheets worldwide.
In 2002, the estimated range of power theft in the United Kingdom was $72 million
to $541 million; in 1998, companies in the U.S. experienced a whopping $1.6
billion to $10.9 billion loss. (In the context of electricity, “loss” is taken
to mean how much power is “lost” between the power station and the paying customer
– which includes theft, but also includes electrical losses.) To reduce theft,
distribution companies need a much more detailed view of where and how electricity
exits their networks.
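The distinction drawn above between theft and electrical losses suggests a simple accounting: whatever is injected but never billed is total loss, and subtracting an assumed technical-loss rate leaves the non-technical remainder, which includes theft. The sketch below illustrates that arithmetic; the 6 percent technical-loss rate is an illustrative assumption, not a figure from the article.

```python
def estimate_non_technical_loss(energy_in_mwh, energy_billed_mwh,
                                technical_loss_rate=0.06):
    """Estimate theft and other non-technical losses on a network.

    Total loss is everything injected but never billed. Subtracting an
    assumed technical-loss rate (resistive line losses; 6% here is an
    illustrative figure) leaves the non-technical remainder, which
    includes theft.
    """
    total_loss = energy_in_mwh - energy_billed_mwh
    technical_loss = energy_in_mwh * technical_loss_rate
    return max(total_loss - technical_loss, 0.0)

# A feeder injecting 1,000 MWh but billing only 900 MWh, at 6% assumed
# technical loss: 100 MWh lost in total, 60 MWh technical, leaving
# roughly 40 MWh unaccounted for.
```

Granular meter data turns this from a network-wide estimate into a per-feeder view, which is what pinpointing theft actually requires.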

Keeping Up

Today, in almost every electricity market, peak demand is growing, creating
a need to augment the capacity of aging networks. Peak demand for electricity
generally grows as a function of gross domestic product. So unless GDP stagnation
is a permanent fixture in a country’s economy, that country’s grid can be expected
to face a near-continuous need to increase capacity.

Demand growth boosts the overall yearly capital costs of operating the network.
In a regulatory climate where rate hikes are problematic at best, the options
are clear: Keep up with growth or risk letting service levels slide. If left
unaddressed, growing demand can leave the electricity distribution company with
significant problems.

Using What You Have

These growing pressures are forcing electricity distribution companies to make
difficult choices. By avoiding investment in network upgrades and by operating
transformers and other capital-intensive network components beyond their design
life, they keep costs low in the short term.

Historically, technological constraints have forced network designers to plan
around worst-case scenarios. This approach requires distribution companies to
build components larger than needed and replace them earlier than necessary.
But in today’s cost-conscious regulatory environment, erring on the side of
caution is an expensive strategy.

However, as sensor technologies decline in price and the industry develops
advanced network analytics and real-time monitoring, reconfiguration of the
network is a growing possibility.

The intelligent network offers a more granular, real-time view of its status.
It does away with point-to-point communications in favor of standardized, packet-based
networking (like the Internet). Intelligent networks provide not only data that
predict and help prevent faults, but also a real-time picture of what is happening
when a fault does occur, allowing network operators to dispatch engineers to
the right location with the right equipment.

Traditional network operators respond to growing peak demand by adding equipment.
With limited ability to monitor spikes, these networks must build in extra capacity
to cope with periods of peak load. With this approach, both the nominal and
short-rated capacity of assets must grow along with peak demand, and every kilowatt
of peak demand growth costs networks $120 to $180 per year in perpetuity.
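The article's $120-to-$180-per-kilowatt-year figure can be turned into a rough capital equivalent by treating it as a perpetuity. The sketch below does so using the midpoint of that range and an assumed 8 percent discount rate (the rate is illustrative, not from the article).

```python
def peak_growth_cost(kw_growth, annual_cost_per_kw=150.0, discount_rate=0.08):
    """Annual and capitalized cost of peak demand growth.

    The article cites $120-$180 per kW-year in perpetuity; $150 is the
    midpoint. Capitalized as a perpetuity, present value is simply the
    annual cost divided by the discount rate (8% is an assumed figure).
    """
    annual = kw_growth * annual_cost_per_kw
    present_value = annual / discount_rate
    return annual, present_value

# 1 MW (1,000 kW) of peak growth: $150,000 per year, or roughly
# $1.9 million capitalized at an 8% discount rate.
```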

In handling distributed generation, the traditional approach is capital-intensive:
Build dedicated wires and upgrade components. The intelligent network approach
enables the existing network to accommodate distributed generation while avoiding
costly upgrades. To identify demand spikes on distributed generators, network
operators can run the worst-case scenario against real-time data on the system’s
actual capacity and estimates of near-term demand – say, when a weather forecast
predicts a cold snap.
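The decision logic described here reduces to a headroom check: compare the worst-case flow from a generator against measured real-time capacity less near-term demand, keeping a safety margin. A minimal sketch, with an assumed 10 percent margin (the parameter names and margin are illustrative):

```python
def can_accommodate(worst_case_kw, real_time_capacity_kw,
                    near_term_demand_kw, margin=0.1):
    """Decide whether the existing network can absorb a distributed generator.

    Rather than upgrading for the theoretical worst case, compare the
    worst-case flow against measured real-time capacity less near-term
    demand (e.g., a cold-snap forecast), keeping a safety margin
    (10% here is an assumed value).
    """
    headroom = real_time_capacity_kw * (1.0 - margin) - near_term_demand_kw
    return worst_case_kw <= headroom
```

A generator whose worst-case output fits within that headroom can be connected without dedicated wires or component upgrades.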

Building the Networks

With the growing ability of sensors and smart meters to monitor the status
of the intelligent electricity network continuously, distribution companies
can store the constant stream of data they provide in a data warehouse, where
advanced network analytics can be applied to boost operational efficiency.

With advanced network analytics, sensor and meter data can be mined to support
strategic actions:

  • Targeting investment at components that are about to fail or are running
    near full capacity;
  • Enabling real-time reconfiguration in the event of a blackout (reducing
    downtime, revenue loss and public ill will);
  • Optimizing the configuration of the network (keeping components within operating
    tolerances); and
  • Satisfying regulators that prudent investment decisions are being made.
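The first of these actions, targeting investment at at-risk components, is essentially a filtering pass over sensor data. A minimal sketch, in which the reading fields (load fraction and a normalized health score) and thresholds are illustrative assumptions:

```python
def flag_for_investment(readings, capacity_threshold=0.9, health_floor=0.5):
    """Scan component readings and flag candidates for targeted investment.

    `readings` maps component id -> (load_fraction, health_score); both
    fields and both thresholds are illustrative. Components running near
    full capacity or showing degraded health are returned for review.
    """
    flagged = []
    for component, (load_fraction, health_score) in readings.items():
        if load_fraction >= capacity_threshold or health_score < health_floor:
            flagged.append(component)
    return sorted(flagged)
```

In practice this sits on top of the data warehouse mentioned above, with the analytics supplying the load and health figures.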

Asset life analytics, for example, focus on the life span of network components.
Network components (e.g., transformer insulation) deteriorate with use. Because
similar assets fail in similar ways, their life spans can be analyzed based
on historic usage patterns. As assets begin to fail, detailed analytics can
suggest how to adjust the network to protect the asset.
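Transformer insulation, the example given, is a well-studied case: aging accelerates with hot-spot temperature, so life consumed can be tallied from historic usage. The sketch below uses a simplified doubling-per-6-degrees rule of thumb; the constants are illustrative, and standards such as IEEE C57.91 give the full model.

```python
def insulation_aging_factor(hotspot_temp_c, reference_temp_c=110.0):
    """Relative aging rate of transformer insulation at a hot-spot temperature.

    A simplified Arrhenius-style rule of thumb: aging roughly doubles for
    every ~6 degrees C above the reference hot-spot temperature. The
    constants are illustrative simplifications of loading-guide models.
    """
    return 2.0 ** ((hotspot_temp_c - reference_temp_c) / 6.0)

def remaining_life_years(design_life_years, hourly_temps_c):
    """Deduct consumed life from a history of hourly hot-spot temperatures."""
    hours_consumed = sum(insulation_aging_factor(t) for t in hourly_temps_c)
    years_consumed = hours_consumed / 8760.0  # hours per year
    return max(design_life_years - years_consumed, 0.0)
```

Run against each asset's usage history, this is the kind of analysis that tells the operator which transformers to relieve before they fail.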

Also, network design optimization can lower the cost of operating networks
and help reduce capital expenditures. Without granular information from the
intelligent network, distribution companies must respond to growing demand by
upgrading the network across the board, as if every customer is the hypothetical
“biggest consumer.” Analysis of individual customer load patterns, on the other
hand, can enable distribution companies to avoid upgrading circuits where upgrades
are not actually needed.
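The difference between sizing for the hypothetical "biggest consumer" and sizing for actual behavior is load diversity: customers do not all peak at the same hour, so the coincident circuit peak is usually well below the sum of individual maxima. A minimal sketch of that comparison (hourly kW profiles are assumed inputs):

```python
def circuit_peak_kw(customer_profiles):
    """Compare coincident and naive peak demand for a circuit.

    `customer_profiles` is a list of equal-length hourly demand series
    (kW), one per customer. Sizing for the sum of individual maxima
    treats every customer as the hypothetical "biggest consumer"; the
    coincident peak -- the maximum of the hour-by-hour totals -- is
    usually much lower, and is the figure that decides whether an
    upgrade is actually needed.
    """
    hourly_totals = [sum(hour) for hour in zip(*customer_profiles)]
    coincident_peak = max(hourly_totals)
    naive_peak = sum(max(profile) for profile in customer_profiles)
    return coincident_peak, naive_peak

# Two customers peaking at 5 kW in different hours: naive sizing says
# 10 kW; the circuit never actually carries more than 6 kW.
coincident, naive = circuit_peak_kw([[5, 1], [1, 5]])
```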

And network operations analytics can focus on power flows within the network,
helping improve reliability and reduce or defer capital expenditures. With real-time
monitoring of contingent fault currents, operators can keep fault currents from
overloading critical components, for instance. Data from smart meters allow
engineers to be dispatched to fault zones with the right equipment, enabling
quicker recovery from network failures. Real-time control of power flows also
enables networks to handle distributed generation.

Four Technology Enablers

Automated meter management. Automated meters can mitigate demand growth and
curtail theft. Smart meters placed in homes and businesses also enable time-of-use
pricing. Peak-sensitive pricing has been proven to lower demand in markets where
it has been implemented. Southern Company’s “Good Cents” time-of-use program
cut consumption by nearly 45 percent during peak hours.

Time-of-use pricing is also popular with regulators, as it mitigates peak demand
growth and allows distribution companies to defer network upgrades, keeping
prices stable for consumers.
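The mechanics of time-of-use pricing are simple enough to show directly: each period's consumption is billed at that period's rate, and off-peak is priced below a flat rate so shifting load saves money. The period names and rates below are illustrative, not an actual tariff.

```python
def tou_bill(usage_by_period, rates):
    """Compute a time-of-use bill from per-period consumption.

    `usage_by_period` maps period name -> kWh; `rates` maps period
    name -> $/kWh. Both are illustrative, not an actual tariff.
    """
    return sum(kwh * rates[period] for period, kwh in usage_by_period.items())

# Shifting 100 kWh of usage from peak ($0.12) to off-peak ($0.06)
# trims the bill by $6 -- the incentive that drives peak reduction.
rates = {"peak": 0.12, "off_peak": 0.06}
before = tou_bill({"peak": 300, "off_peak": 200}, rates)
after = tou_bill({"peak": 200, "off_peak": 300}, rates)
```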

Remote asset monitoring and control. Remote sensors can detect whether events
on the network are consistent with the network’s capacity and warn operators
when a component begins to operate outside of optimum ranges. With the ability
to monitor whether power flows are within optimum range, operators can load
components higher than otherwise possible. Sensors can detect when parts of
the network begin to fail. Based on the feedback from these sensors, the control
center can adjust network configurations to reduce the load on compromised assets
and warn field engineers when deterioration creates a probability (albeit low)
that an asset will be unsafe. Data from sensors can also be used to optimize
the maintenance and replacement of assets.
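The escalation described here, from normal operation through load reduction to warning field engineers, amounts to classifying each reading against bands around its optimum. A minimal sketch; the band widths are illustrative parameters that would be set per component type:

```python
def classify_reading(value, optimum, tolerance, alarm):
    """Map a sensor reading onto alert levels around its optimum range.

    A reading within `tolerance` of the optimum is normal; within the
    wider `alarm` band, the control center should reconfigure to reduce
    load on the asset; beyond that, field engineers are warned that the
    asset may be unsafe. Band widths are illustrative parameters.
    """
    deviation = abs(value - optimum)
    if deviation <= tolerance:
        return "normal"
    if deviation <= alarm:
        return "reduce_load"
    return "warn_engineers"
```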

Sensors on transmission wires can also yield improved customer service by warning
when trees or other foliage grow too close to power lines.

Mobile workforce management. This boosts the speed and accuracy of maintenance
and repairs by electronically streamlining the flow of data from sensors through
the central control center to field crews equipped with handheld computers.

Internet-enabled SCADA. This replaces cost-intensive, proprietary SCADA systems
with the standard Internet communications protocol. It can also cut telecommunications
costs by 20 percent or more and offers a robust, fault-tolerant architecture
that scales easily to support the deployment of sensors, smart meters and remote
PDAs across the network.

Internet-enabled SCADA can release utilities from reliance on the proprietary
communications protocols of equipment manufacturers and offers the higher fault
tolerance of a packet-based network. The Internet technology can also provide
a communications platform for future services.

More Than the Sum of the Parts

In addition to the benefits conferred by individual intelligent network components,
implementation of the intelligent network yields synergies in scalability; an
Internet-enabled SCADA network can lower the cost of implementing the systems
and devices that make up the larger intelligent network. Once the digital SCADA
network is built, the incremental cost of adding components is small.

Other synergies are realized via the combination of automated meter management
and remote asset monitoring and control: because network currents can be inferred
from meter data, electricity distributors can monitor assets with fewer dedicated
sensors.
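Inferring an upstream asset's loading from downstream meters follows from conservation of energy: the feeder carries roughly the sum of the demands metered beneath it, plus line losses. A minimal sketch, in which the 5 percent loss rate is an assumed figure:

```python
def inferred_feeder_load_kw(meter_readings_kw, technical_loss_rate=0.05):
    """Infer the load on an upstream feeder from downstream smart meters.

    By conservation of energy, the feeder carries approximately the sum
    of the metered demands beneath it plus line losses (the 5% loss rate
    is an assumed figure), so a dedicated sensor on that asset can often
    be skipped.
    """
    metered = sum(meter_readings_kw)
    return metered * (1.0 + technical_loss_rate)
```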

Synergies are also found in the combination of automated meter management and
mobile workforce management. In the event of a fault, this combination can lower
the time and cost of restoring service, as data can be gathered to pre-diagnose
the fault before dispatching an engineer.

Finally, the combination of remote asset monitoring and control and mobile
workforce management can help distribution companies defer the replacement of
failing assets and even avoid upgrading assets where fault current limits are
exceeded only briefly each year. By enabling the distribution company to set
up exclusion zones around affected assets, these combined capabilities allow
at-risk assets to continue to operate.

Caught between a need to renew and upgrade aging networks and a customer base
accustomed to steady rates, distribution companies are fast approaching a point
where they need to make choices. New technological capabilities are becoming
a reality, but only by leveraging them effectively – to operate networks more
intelligently – will electricity distribution companies be able to navigate
the challenges of the 21st century.

Q&A: Meter Reading – Past and Future

Q: In 1997, as Puget Sound Power and Light, you merged with Washington Natural
Gas and became Puget Sound Energy. Was it then that automated meter reading
made the most sense?

A: We were piloting different technologies in 1997, communications technologies
that would actually deliver the metering information to us. At that time, most
of the automated meter-reading technology that was available was really just
drive-by metering. And so you were able to make meter reading more efficient,
you didn’t have people running around all over the place, you could drive by
in a car, and you could remotely read the meters from a van or a vehicle.

Q: You would capture the radio frequency signals?

A: Right, exactly. It was a more efficient way of reading meters. We were really
looking at a way to get information off of the metering system in a manner in
which you could do more than just read meters with it. You could actually provide
that information to customers. They could manage the energy usage. We would
be more familiar with usage patterns and could manage our own resources better.
Customer information people would have that information available to them when
talking to customers. And our planners would be able to use it in the system
design. Simultaneously, we set about to develop a new customer information system.
We also recognized that the customer information systems that were available
at that time only had the capacity to manage the meter-reading information from
a once-a-month read and produce a bill. And we wanted a system, and then ultimately
installed a system, that would give us daily reads; and if we wanted to we could
have even done hourly reads. But even daily reads over a million-and-a-half
meters, that’s a lot of information, so we had to have a new CIS system that
was capable of managing that kind of information and then actually producing
a bill from it.

Q: What is the primary technology that is used to feed the information to
the home office?

A: The system that we installed was a Cellnet system. At the time it was a
state-of-the-art system. We had designs on making it a two-way communications
system, but it just wasn’t available then. So we installed a one-way system.
We could get the reads off it, but it was not designed specifically for the
utility to be able to control loads off of that system as well. So it was primarily
an information-gathering system, and it worked very well for that. And that’s
the one that’s in place at Puget Sound Energy today.

Q: And it’s still only one-way today?

A: Puget Sound Energy’s AMR deployment was the largest in the country when
it was installed, and we did some programs with it that were the first of their
kind. And at the time it was state of the art and we really had some fun with
it and we did differentiate ourselves. We ran some pretty important programs
on the West Coast during that energy crisis that helped the utility by the fact
that we had that system installed.

Q: Who built the CIS to go with it?

A: We developed it ourselves. We put a team of people together internally and
charged them with the responsibility of developing a system. And it went relatively
well. We completed the development through a subsidiary company we formed called
Connext. We formed Connext because when the development was complete it was
going to be one of very few systems that had this kind of capability, and we
wanted to be able to sell it to others.

Q: Who else had a hand in implementing the system?

A: There were basically two technologies at the time that were competing then,
and we piloted both of them. We put two 10,000-meter pilots in place. We tested
most of the systems and the technologies that were available at the time to
give us the kind of communications capability that we sought. We did not want
to do drive-by. We wanted it to be able to do more than just read meters. That
was the No. 1 criterion for us. So that limited us to two possible suppliers at
that time.

Q: What was the first big test for the system?

A: We were probably about a little over halfway through our deployment in 2000
when energy markets on the West Coast really began to heat up. And in the fall
of 2000 we said, look, we’ve got enough of these meters installed now and operating
– you can incrementally put them in and start using them immediately – that
we could actually use this system to do some pricing options to see if we could
get customers to reduce demand. We had our CIS system installed, so we had the
ability to take the information in and actually produce a bill based upon time
of use. Customers would get price signals for four different time periods a day
and would be able to move their usage around to different times; and if they
moved it into the off-peak hours, we’d give them a rate break below what they
would pay under a flat-rate scenario. So the Washington State Utilities and
Transportation Commission noodled over that – not for very long, only about
27 days – and said OK, do it. So for 150,000 customers, they immediately got
put on this time-of-use program. If they had a computer, they could bring it
up on a daily basis and see what their usage was in each of those time periods.
A survey later showed a very high percentage of people said they were actually
utilizing the information.

Q: Was there confusion going from a flat rate to time of use?

A: We put people on a pricing mechanism. We sent information to them because
they didn’t have a choice. And all of a sudden the bills they were going to
be paying were going to be based upon time of use. We had interveners in the
hearings who didn’t like that. They thought it was too much, too soon, and it
would not be well-received by customers. That summer – we put it in place April
1, 2001 – we did some survey work with the customers, because the commission
wanted us to come back in the fall to check our results and see if the program
should be expanded. The results were in the high 80s for the percentage of customers
who said they were actually doing things to shift their usage from on-peak to
off-peak. The feedback we got from the customer surveys clearly showed they
liked it, they didn’t feel like they had been put upon, they liked the idea
that they could actually save money by shifting their usage from on-peak, and
they thought they were doing something for the environment and to help the West
Coast energy crisis.

Q: Is now the time for utilities to install AMR?

A: The energy markets have stayed pretty stable. At least stable enough that
utilities haven’t had the kind of changes in retail rates that utilities had
to put in place to pass those costs through to customers in the past. This is
the opportunity to get the technology in place so that when we get volatile
markets or an energy crisis because of high temperatures in California or low
snow packs in Washington or whatever it is, the mechanisms are in place to deal
with it. The long-term real key will be to demonstrate not just time-of-use but
real-time pricing. My opinion is, that’s where this needs to go: where customers
can see, in near-real time, changes in the market, and if markets are low they
can use as much energy as they want, and if markets are high they can choose
to cut back wherever they want to cut back. So I’m a real advocate for real-time
pricing, and the good news is there is technology available today capable of
delivering it.

Q: Is AMR on the verge of expanding dramatically?

A: I believe it is, but few utilities are even looking at this as a strategic
asset. They still look at their generation investments as strategic, because
there’s so much money involved. Meter-reading systems are not inexpensive, but
compared to generation they are relatively small. While I love that the name
has changed from AMR to advanced metering infrastructure, I think most executives
aren’t that interested in meter reading. But I believe that just the change
in nomenclature itself to AMI will be helpful in getting people to think more
about it and its capability beyond just meter reading. But it’s going to take
regulators and policy makers to make this expand dramatically. The good news
is, in many parts of the country, they are paying close attention and getting
involved with their utilities to make it happen.

Q: What will hinder expansion?

A: Several things. There are states that are regulated and deregulated with
no indication of which way we’re going. There are still questions about who
is going to be responsible for the metering long term. Are utilities going to
stay in the metering business, or is some third-party provider going to come
in and take over the metering business? As long as those kinds of questions
are hanging around, it’s going to be difficult for utilities given that kind
of risk potential to invest tens of millions of dollars in a new system. I’ve
believed that it makes no sense for utilities to get out of the metering business.
The metering information can be shared, but the responsibility needs to be with
the utilities. I suppose a case can be made that this is really a non-issue.
After all, even if somebody else took it over, they’d have to take over the
utility’s meter-reading system, whatever it is. So the utility has someone to
buy it. Unfortunately, it’s still a question mark.

Q: Will technological advances help sell AMR?

A: Without a doubt the answer is yes. State-of-the-art systems not only remotely
gather the metering information from customers over a variety of systems, including
fixed wireless networks and power line carriers; they now have two-way capability,
allowing the utility to control utility system devices as well as customer end-use
devices. These capabilities expand the value of the systems to the utilities.
In addition, some companies are designing programs that assist utilities and
their customers in using the information obtained over these systems to drive
costs down and manage usage. When Puget Sound Energy was providing its customers
with time-of-use pricing and daily usage information, customers were pretty
much left to shift their usage on their own. Today, energy management hardware
and software exist for use inside the home or business. There are companies
designing systems that will respond to utility price signals and automatically
change home or business usage patterns to accommodate whatever price signal
is being delivered by the utility. From a consumer perspective, there’s some
really neat stuff coming together now that will include things other than just
energy management. A customer will be able to buy systems that incorporate Internet
access, TV, CD, DVD, radio and telephone in addition to energy management. When
this happens, we’ll have consumers helping to drive this market and we will
no longer have to rely solely on the utility.

Q: Do the cost benefits also help sell the concept?

A: When state commissions take a look at a utility’s investment in this, they
look at it through the window of utility only. That utility has to kind of stand
on its own, either from internal efficiency or the price it’s paying for power,
to cost-justify it. While we were implementing our system, a study was done
that showed the economic benefit to U.S. customers, just from their ability
to manage wholesale prices, if all homes in the country had AMI. It was like
$15 billion a year in savings that would accrue across the country to customers
if they had the tools in place to manage usage during times of peak pricing.
Nobody incorporates that kind of economic benefit into their cost-effectiveness
analysis. And it’s probably one of the biggest benefits that will ultimately
happen when you have a broad deployment of these systems.



On Better Business Intelligence

Utilities are working hard to address new financial control and reporting requirements,
making progress on significant cost reduction and revenue enhancement initiatives
and improving the management of financial and operational risks. But some key
performance management process, technology and organizational hurdles are making
these efforts far more difficult than is necessary. To clear the hurdles, companies,
including IBM, have developed and successfully implemented a comprehensive business
performance management (BPM) approach. It helps utilities avoid many typical problems
with performance reporting and analysis, sharpens the focus on improving financial
and operational performance, and drives bottom-line benefits for both customers
and shareholders.

BPM Challenges for Utilities

More than ever, utilities require access to accurate, timely data and the ability
to perform powerful financial and operational analyses. One obvious driver of
these changes is the increased reporting requirements resulting from Sarbanes-Oxley
legislation. Utilities are also facing pressure to manage costs more effectively,
enhance revenues and increase profitability, all of which require close monitoring
and analysis of operational and financial performance. At the same time, financial
and operational risk measurement and management remain a high priority for both
regulated and unregulated utility businesses. Utilities that excel in each of
these areas will be among the industry leaders over the next decade, but many
utilities must first face some fundamental challenges in BPM data collection,
reporting and analysis.

One BPM challenge is the need to better align all activities in the utility
with the strategies and performance goals of the company as a whole. While many
utilities understand what drives their business performance at a high level,
they have not built this understanding into their detailed reporting systems,
analysis tools and performance metrics. The result is that incentives don’t
necessarily drive staff to contribute to better bottom-line performance, and
reports don’t focus on the most important financial and operational performance
measures. Additionally, analysis of the linkages between financial and operational
performance measures is limited.

Another BPM challenge is the excessive effort currently required to access,
validate, analyze and report performance data. Many utilities still have groups
of analysts that provide support to executive, finance and operations groups
by building and modifying custom spreadsheets and other ad hoc reporting and
analysis tools. These manual processes require substantial resources and take
those resources away from the far more valuable work of providing historical
analysis, performing scenario planning and developing professional insights
on what is happening and how to deal with it.

A related BPM challenge is the lack of integration among corporate planning,
functional groups and operational business units with respect to budgeting,
forecasting, analysis and reporting tasks. This results in the need for additional
analyst resources to pull together disparate data and to manually link unconnected
processes, creating yet another distraction from more important and valuable
planning and analysis activities.

The final BPM challenge is that utilities simply have an overload of data.
Due to regulatory requirements and a historical emphasis on engineering analysis,
utilities typically have more financial and operational data than companies
in other industries, resulting in utility staff, managers and executives who
are simply bludgeoned with raw data. Utility management is unable to effectively
sort through data, extract the information they need, analyze performance in
their areas of responsibility and use this analysis to efficiently decide on
actions required to improve that performance.

Comprehensive BPM Vision

BPM addresses all of these challenges. As presented in Figure 1, BPM integrates
three process areas that are fundamental for utility success:

  • Business planning and simulation;
  • Performance analysis and improvement; and
  • Portfolio optimization.

Although these three process areas are fundamental, it’s the BPM enablers that
are the keys to the BPM vision, since they are designed to overcome the BPM challenges
that most utilities face.

Each of the BPM enablers shown in Figure 1 is necessary, but three are particularly important.

Alignment of Strategy and Measures

Utilities effectively have dual goals: deliver value to shareholders and meet
regulatory requirements. Managing a utility involves striking a careful balance
between these two sometimes competing goals. To be successful, utilities must
achieve measurement balance, enabling performance in all business units and
at all staff levels to drive toward success. With this balance, each employee
understands how to contribute to all of the company’s strategic goals, and how
that contribution will be measured.

Consistent Processes and Information Delivery

This is at the center of Figure 1, and it is at the center of successful BPM
as well. Substantial utility resources are currently devoted to overcoming BPM
process disconnects, and closing these gaps will help enable managers and executives
to spend more time on analyzing the important performance issues and developing
action plans to address them. Consistent information will help eliminate confusion
and reduce the challenge of sorting through mounds of data to find what is important
and useful.

Support for Infrastructure

The infrastructure for performance data collection, reporting and analysis
at many utilities is fragmented, heavily labor-intensive and difficult to maintain,
creating the need to revamp much of their infrastructure to better enable the
three BPM process areas. The good news is that investments in this area can
actually pay for themselves directly through reduced BPM costs by eliminating
manual and ad hoc operations and the heavy data correction and filtering processes
that they currently require.

Four Elements of BPM Execution

With a clear vision for comprehensive BPM capabilities, we can turn to how
to make BPM a reality in today’s utility environment. For utilities, there are
four critical elements for executing a BPM program:

Relevant Content

Different utility stakeholders require different BPM content to understand
performance in their business area and to make effective decisions within their
area of focus. It is important that the content of their reports, analysis and
personal performance measures be relevant to that focus. For example, the CEO
and other C-level executives are focused on strategic issues, and should be
receiving reports and analyses that address those strategic issues rather than
distracting them with unimportant details and an irrelevant volume of information.
In contrast, line managers are focused on day-to-day operations, and should be
presented with operational performance measures and information on how their
part of company operations is contributing to the company’s overall bottom line.

The need for relevant content may seem rather obvious, but we have found it
to be a considerable challenge for utilities. Instead, utilities often follow
the “more is better” approach, resulting in the presentation of far too much
information, personal performance evaluations based on too many factors and
an overall lack of performance focus.

Efficient and Integrated Processes

Installed BPM processes in utilities are often very inefficient, resulting
in substantial waste of resources and difficulty in obtaining and analyzing
information on a timely basis. Successful BPM execution depends in part on the
application of numerous best practices for the three BPM processes, and the
careful integration of those processes across the company. Table 1 highlights
some of the best practices found in each process area.

Enabling Technology

Technology supports nearly every aspect of BPM, ranging from data collection,
transformation and warehousing to information analytics and report presentation.
Given this importance, utilities need to assess their existing technology strategy,
tools and standards and to understand the problems they may be causing with
BPM activities. Such an assessment typically leads to a future state blueprint
for BPM technology that is founded on the BPM vision, leverages current assets
and employs standardized tools and protocols to facilitate reuse and scalability.

The data architecture supporting BPM is a key part of the enabling technology,
and needs to clearly determine the logical and physical data stores needed to
support the BPM environment, including operational data stores, data marts,
data warehouses and online analytical processing. Another vital part of BPM
enabling technology is the analytical tools, including both generalized planning
and forecasting systems and the specialized models used by utilities for generation
dispatch, portfolio optimization and long-term system planning. The technology
framework needs to support access to and operation of these analytical tools,
and allow for flexibility and scalability, as utilities are adopting sophisticated
analytical tools and techniques more common in the financial services industry.

Organization and Culture Change

Organizational and cultural issues are often overlooked when executing BPM
improvements in any industry, but utilities in particular need to carefully
address these concerns. With its strong traditions in engineering excellence
and long history of regulation, the utility culture is very good at focusing
on operational performance. But this focus often distracts managers, and sometimes
even executives, from attention to financial results. This is especially the
case with middle managers in operations, making one success factor of BPM execution
essential: changing the cultural mindset to strike a better balance between
financial success and operational excellence.

Fundamental culture change takes time, but it can be accelerated by helping
to ensure that managers and executives have a balanced scorecard that shifts
the performance weighting toward financial performance, and by basing the performance
weights on a rational evaluation of the contribution of operational excellence
to the bottom line. While these accelerators will help, they will not guarantee
the change to a performance-based culture. Senior executives need to recognize
that culture change in utilities requires a strong and sustained push from the
highest levels of the company.
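The scorecard rebalancing described here can be made concrete with a small example: the same operational results score differently once weight shifts toward the financial dimension. The dimension names, scores and weights below are all illustrative assumptions.

```python
def scorecard_score(measures, weights):
    """Weighted balanced-scorecard score for a manager.

    `measures` maps dimension -> achievement (0-1); `weights` must sum
    to 1. Shifting weight toward the financial dimension changes how the
    same results are rewarded. Names and values are illustrative.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(measures[dim] * weights[dim] for dim in weights)

# Same results, rebalanced weights: an operations-heavy scorecard
# rewards this manager more than a financially balanced one does,
# illustrating why the weighting itself drives behavior.
measures = {"financial": 0.6, "operational": 0.9}
ops_heavy = scorecard_score(measures, {"financial": 0.3, "operational": 0.7})
balanced = scorecard_score(measures, {"financial": 0.5, "operational": 0.5})
```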

Getting Started on Building Better BPM Capabilities

The first step in building BPM capabilities for a utility is to perform an
assessment of the current situation, minding the four elements of BPM execution.
Typical initial questions for each BPM execution element are presented below:

Relevant Content

  • Is your code block being used consistently and efficiently across the company?
  • Do you have the right financial measures at a meaningful level and do they
    match your business organization and hierarchy?
  • Are you capturing and analyzing the external and internal drivers of your
    financial performance?
  • Are all of the information needs of your internal and external stakeholders
    being addressed?

Efficient and Integrated Processes

  • Is your budget cycle too long and/or too resource-intensive?
  • Are measures and targets being applied consistently across the planning,
    budgeting and reporting activities?
  • Is there consistency and alignment between long-term planning and short-term
    forecasting activities?
  • Is your close cycle too long, and can you get estimates of financial performance
    prior to the regular close date (i.e., an “early close”)?
  • Does report development require excessive analyst and/or IT support?

Enabling Technology

  • Are your business and reporting applications scalable and flexible enough
    to meet your changing business demands?
  • Are you using the right tools for the job, such as relational reporting
    tools versus multidimensional reporting tools?
  • Do you have the tools needed to transform your data for analysis?
  • Have you integrated financial and operational performance measures with
    customer data using an enterprise data warehouse?

Organization and Culture Change

  • Have you aligned individual performance measures and compensation structures
    with your enterprise performance measures?
  • Has a formal owner of BPM been assigned within your company, and do you
    have the right skills in-house to fully leverage valuable BPM information?
  • Does your organization resist change, and how will you ensure that your
    BPM vision is accepted and integrated throughout the company?

The answers to these initial questions will help identify the problem areas
in BPM for the utility and should be followed by development of a long-term
vision for BPM that addresses those problems. The next steps are to build on
this vision to develop a conceptual design for the BPM capabilities with details
on each of the four elements of BPM execution and then to plan for and implement
the new BPM capabilities. Typically, the recommended implementation method is
a staged approach, especially when there are significant technology or cultural
challenges to overcome.


Executing a comprehensive BPM vision can help utilities reduce costs, increase
earnings and shift to a culture that is more focused on financial performance.
Several utilities are now strengthening their BPM capabilities, and having solid
BPM capabilities is an increasingly important part of being a successful utility.


Goodbye AMR, Hello AMM

Not long ago, the decision to pursue an automated meter reading (AMR) project
was straightforward to describe and quantify. The business case was essentially
one of justifying the cost based on the potential labor savings (the classic
“feet off the street” argument), and the limited technologies available were,
for the most part, one-way in nature, with data flowing from the meters back
to the utility data center.

Over the past few years, there has been a clear transition from the classic
AMR approach to advanced meter management (AMM). The transition has been driven
by both technological and data management considerations. The biggest single
technological change that has enabled this transition is the added functionality
made possible by bidirectional communication to the meter endpoints. While receiving
reads from the meters still provides the largest perceived benefit for the utility,
significant additional benefits come from the new functionality this advance makes
possible. Additionally,
in many cases the tools to perform analyses using information from the meter
data warehouse were never implemented or did not meet the vision of what was
intended. This has led to a number of companies that had previously implemented
AMR to now look at implementing a fully functional meter data management (MDM)
system to realize that vision.

An AMM approach allows these projects to incorporate into their business cases
transformational benefits from multiple parts of the business, including customer
service, operations, finance and technology. Additionally,
geography-specific factors can also play a role in the business cases that drive
the need for the implementations. In Central and South America, for example,
theft detection and prevention is one of the single biggest aspects driving
the move toward AMM. Significant losses are seen every year due to increasing
theft, and an AMM solution inclusive of distribution asset management and threshold
monitoring for usage provides the utilities with a valuable tool for eliminating,
or at a minimum greatly reducing, these losses. In parts of Europe, where late
payment or nonpayment is a significant problem, some utilities are including
in their AMM systems the ability to directly control consumption on a premise-by-premise
basis by setting thresholds that cannot be exceeded without tripping a breaker
at the premise. This approach will allow utilities to avoid the huge revenue
losses incurred in the past. The key in both these examples is that the value
is in areas other than the classic “feet off the street” approach. The functionality
available to utilities today is much more robust, and the people shaping the
up-front business case and requirements need to look well beyond how a traditional
AMR project would have been justified and planned.
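The threshold-monitoring idea behind theft detection can be sketched simply: compare each premise's current usage against its own history and flag outsized drops. The function name, thresholds and figures below are illustrative assumptions, not any AMM vendor's actual logic.

```python
# Illustrative sketch of usage-threshold monitoring for theft detection.
# All names and thresholds here are hypothetical assumptions.

def flag_possible_diversion(history_kwh, current_kwh,
                            drop_threshold=0.5, min_history=6):
    """Flag a premise whose current usage has dropped well below its
    historical average -- a common symptom of tampering or bypass."""
    if len(history_kwh) < min_history:
        return False  # not enough history to establish a baseline
    baseline = sum(history_kwh) / len(history_kwh)
    if baseline == 0:
        return False  # vacant premise; nothing to compare against
    return current_kwh < drop_threshold * baseline

# A premise averaging ~900 kWh/month that suddenly reports 200 kWh is flagged.
readings = [880, 910, 905, 890, 920, 900]
print(flag_possible_diversion(readings, 200))   # True
print(flag_possible_diversion(readings, 850))   # False
```

In practice the flagged accounts would feed a field-investigation queue rather than trigger action automatically.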

Once the argument for an AMM implementation has been successfully made, the
next set of decisions is how to bring the implementation to fruition. Overall,
there are four key components of an AMM implementation:

  • Meter data management;
  • Communications and collection;
  • Installation; and
  • Program management.

While major transformational benefits of an AMM implementation can be achieved,
each of these four components has inherent risks that must be managed in order
to ultimately call the project a success.

Meter Data Management

MDM functionality comprises the central integration hub for today’s AMM environments.
Because it will be the single point of management, processing and integration
to back-end legacy systems within an AMM environment, the selection and proper
implementation of the MDM system is critical. All business rules regarding validation,
editing and estimation (VEE) should be done within the MDM to allow for consistency
and efficiency in ongoing operations and maintenance. MDM vendors today offer
robust business functionality within their products, including:

  • Configurable VEE rules engines;
  • Advanced exception processing capabilities;
  • Integration with asset management systems (meter tracking);
  • Integration with service order and work management systems;
  • Flexible data aggregation capabilities;
  • Tailored functionality for both commercial and industrial as well as mass
    market customers;
  • Direct management of vendor head-end systems for endpoint communications;
  • Integration with key back-end systems (outage management system, customer
    information system, energy management system and geographical information
    system);
  • Security and data partitioning across internal users/groups and external
    consumers; and
  • Reporting engines.
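As a rough illustration of what a configurable VEE rules engine does, the sketch below validates daily reads against a simple limit and estimates missing or failing intervals from neighboring good reads. The rule set, limits and function names are assumptions for the example, not any product's actual API.

```python
# Minimal sketch of a VEE (validation, editing and estimation) pass of the
# kind MDM systems centralize. Rules and limits are illustrative only.

def vee_process(reads, high_limit=500.0):
    """Run each daily read through validation, then estimation.
    `reads` maps day index to kWh; None marks a missing interval.
    Missing or invalid reads are estimated from adjacent good reads."""
    good = {d: r for d, r in reads.items()
            if r is not None and 0 <= r <= high_limit}
    out = {}
    for d, r in sorted(reads.items()):
        if d in good:
            out[d] = (r, "actual")
        else:
            neighbors = [good[n] for n in (d - 1, d + 1) if n in good]
            if neighbors:
                out[d] = (sum(neighbors) / len(neighbors), "estimated")
            else:
                out[d] = (None, "exception")  # route to exception processing
    return out

daily = {1: 32.0, 2: None, 3: 30.0, 4: -5.0, 5: 31.0}
result = vee_process(daily)
# Day 2 is estimated from days 1 and 3; day 4 fails validation (negative)
# and is estimated from days 3 and 5.
```

Centralizing this logic in the MDM, rather than in each head-end interface, is exactly the consistency argument made above.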

These capabilities position the MDM system as a new mission-critical system
within the utility, providing increased efficiencies where much of the functionality
provided would have previously been done with point-to-point interfaces and
duplicative operational processes. Under the scenario where multiple communications
technologies are implemented, building the business logic into the MDM provides
the utility with a much more efficient ongoing operation, as common business
logic is maintained in one place and not replicated multiple times within the
interfaces for each communication system. The selection of the MDM vendor should
therefore be treated as seriously as that of any other key operational system
the utility would make. It’s not simply a data warehouse, and making
a poor decision will adversely affect the ability to meet anticipated returns
on investment in the future.

Communications and Collection

The implementation of the communication, collection and endpoint technologies
aspect of an AMM project should ideally be conducted in parallel with the implementation
of the MDM system. For larger utilities embarking on the path of an AMM project
today, there needs to be an approach that includes multiple technologies within
the environment. We do not believe that a “one size fits all” approach will
provide the coverage and functionality that is needed. Differences in geography,
terrain, customer density and requirements for customer functionality are best
addressed through implementation of a suite of communication technologies that
will differ from one company to the next.

For large utilities with multiple services (e.g., electric, gas, water), we
envision a combination of power line carrier, fixed network radio frequency
and mobile/drive-by solutions to address the differences in territory and requirements.
For example, the potential for time-of-use tiered pricing requirements could
heavily influence the technology selected. In areas where time of use is envisioned,
certain technologies are not feasible, such as drive-by or some of the slower
power line carrier options. Another possible influencing factor is the potential
transient customer nature of some geographic areas, such as dense university
environments. The number of connects and disconnects in these areas during certain
periods of the year is a large cost and management issue for the utility. Having
a communications and metering solution in these areas that supports remote connect/disconnect
functionality is a major advantage over other options available. There are many
other specific situations that will ultimately determine the optimal technology
mix to best support the complete AMM vision for the utility. Therefore, up-front
planning and analysis must occur for the project to ultimately be successful;
jumping quickly to a decision on a single technology without having conducted
a sound requirements and analysis phase will lead to many issues down the road
that could have been easily avoided.

Installation and Program Management

The installation period for the project itself will be multiple years in duration
by the time the last meter is installed or retrofitted. The complexity of the
planning and managing of the rollout is one of the biggest risks to project
success. With larger utilities, the rollout will most likely involve thousands
of installs a day. Managing the overall supply chain (including the meter manufacturers,
communications technology manufacturers, cross-dock processing and installer
scheduling) can quickly overwhelm even the most efficient utility.

Many of the large systems integrators are bringing their experience from other
industries to address the complexity of rolling out a large number of endpoint
devices. The good news is that the lessons learned and best practices from other
industries (such as those learned from rolling out point-of-sale devices, kiosks,
ATMs, etc.) can be directly leveraged to mitigate risk and manage success for
an AMM rollout. Combined with the specific device installation services of key
players in the utility engineering services area, a systems integrator will
be able to take responsibility for the rollout along with the other key integration
and program management components of such a complex project.

A major factor contributing to the complexity and risks of an AMM project is
that, unlike IT implementations in which the effects are confined to the back
office, AMM directly touches the end customers. Utilities must not only keep
their customers informed about potential upcoming pricing changes (e.g., for
time of use or critical peak pricing), but they must also be prepared for the
increased number of billing-related calls that will come in during the implementation.
As meters are replaced in the field and billing is performed, many customers
could see significant changes to their bills due to slow meters that may have
been in place for years and are now reporting correctly, estimations that were
low in the past and are now eliminated with actual reads and other potential
customer confusion. A great deal of planning must be done up front to help ensure
that all of these scenarios are anticipated to avoid the potential for increased
complaints or other issues.


The change in the marketplace over the past few years from AMR to AMM has been
exciting. This transformation is not yet complete, and over the next five years
the industry will continue to experience significant advances and changes. What
was once an easy decision to make for a traditional AMR system, now requires
in-depth analysis and research in order to design the best overall AMM solution.
However, utilities should not be afraid to move down this path, because experienced
integrators can work jointly with utilities to design and implement a holistic
solution that is designed to meet the vision of the utility, sharing both the
risks and rewards throughout the project life cycle.

Perhaps some people will wish for the “good old days” of simple AMR implementations
after hearing about the complexities. However, the real message is that, while
complex and risky, the potential benefits of today’s AMM projects can be realized
through an execution plan that is well thought out and proven. After years of
stagnant technology in the AMR space, there is at last a dramatic increase in
capabilities available to utilities. Finally, it’s time for all
those shelved AMR plans to be dusted off, updated to reflect current AMM approaches
and executed.

Defining Customer Valuation

Most utilities view their interactions with customers – mainly monthly billing
and payment – as highly routine, with neither party giving a second thought
to the value at stake in the relationship. Deregulation and new service offerings
increased the opportunities for utilities to enhance their performance by determining
customer valuations and using them for strategy development and operations.

Customer valuation is defined as the analytical process of increasing knowledge
of customers, at a segment or even individual/household level, in order to determine
and improve the value of customer relationships, interactions with customers
or corporate programs. Customer valuation can utilize any attribute of current
or potential customers necessary to meet strategic objectives.

Consider a brief litmus test to determine if customer valuation may enhance
your company’s performance:

  • Has an initiative fallen short of its objectives because of a poor understanding
    of how customers would respond?
  • Have executives held back from investing in a strategy or project because
    of uncertainty about what the “take rate,” marketing cost per customer or
    revenue per customer would be?
  • Could the strategy have been more successful if it had first been piloted,
    with hypotheses on customer values and preferences then validated?

Customer valuation can aid in strategy development and execution by helping
utilities understand customer preferences in detail, and how their actions drive
performance at an individual customer or household level. Customer preferences
might include receptiveness to certain offers, bill payment and energy conservation
habits, or power reliability demands.

Opportunities for Using Customer Valuation

In the utility industry of the next several years, there are three opportunities
for utilities to apply knowledge of customer value: improving customer acquisition
and retention; optimizing customer service quality while reducing costs; and
enhancing the valuations of projects, such as infrastructure changes, that impact
customers.

Although retail competition has remained dormant for residential customers,
acquisition and retention capabilities are still important for any utility that
sells value-added services such as warranty services, or whose larger customers
have choice. For warranty services, utilities can use customer valuation to
answer questions such as:

  • Which customers are likely to respond to an offer and be profitable?
  • Which offers’ attributes (e.g., annual versus monthly pricing, rebates on
    new appliances), messages (safety, cost advantages) and touchpoints (e.g.,
    direct mail, email, billing message, combination) will elicit the best response
    or conversion rates? What is most important to the target customer?
  • Does warranty usage increase renewal rates? Which customers have excessive
    warranty usage (e.g., are not profitable)?

Amid increasing regulatory pressure, and an expanding set of customer relationship
management capabilities, customer valuation can be a tool to improve on the
dual goals of increasing service quality and reducing costs while limiting the
constraints they place on each other. Asking and answering the right questions
about customer needs and preferences can provide direction for which capabilities
different customers will use or are using (especially the most valuable customers),
and how investments should be prioritized. For example, many customers follow
seasonal patterns of high-bill complaints or nonpayment that significantly increase
utilities’ cost to serve. Identifying and proactively approaching the right
customers to offer level payment plans or access to energy assistance programs
such as LIHEAP can increase customer valuations through cost avoidance.
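The level payment plan mentioned above is simple arithmetic: spread a customer's trailing twelve months of charges evenly to smooth seasonal peaks. The candidate-screening rule below is an assumed heuristic for illustration, not an industry-standard test.

```python
# Sketch of a level (budget) payment amount plus a screen for customers
# whose seasonal peaks make them good candidates. The peak-ratio rule is
# an illustrative assumption.

def level_payment(monthly_bills):
    """Flat monthly amount: the average of the trailing twelve bills."""
    return round(sum(monthly_bills) / len(monthly_bills), 2)

def is_level_pay_candidate(monthly_bills, peak_ratio=1.5):
    """Flag customers whose peak bill far exceeds their average bill."""
    avg = sum(monthly_bills) / len(monthly_bills)
    return max(monthly_bills) > peak_ratio * avg

# A summer-peaking customer: $210 peak against a $120 average.
bills = [80, 75, 70, 90, 120, 180, 210, 200, 140, 95, 85, 95]
print(level_payment(bills))            # 120.0
print(is_level_pay_candidate(bills))   # True
```

Running the screen across the customer base is one way to identify proactively the "right customers" to approach, as described above.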

Advancing changes in meters, sensors, software and analytical capabilities
for distribution networks allow utilities to create “intelligent” networks with
sophisticated performance management capabilities down to the neighborhood or
even customer premises level. But this intelligence requires knowledge of what
the performance should be, and for which neighborhoods or premises. Accurate
customer valuation and knowledge of customers’ specific needs and preferences
is critical input into the planning process for intelligent networks, as well
as other infrastructure projects requiring choices about customer-impacting
performance.

Utility Experience With Customer Valuation

Utility companies use a variety of approaches to understand the value of customers.
In the regulated environment, customers are commonly viewed through a meter-centric
lens that connects them to the company through their meter. Any resulting segmentation
is product-based as determined by which product an account uses and the quantity
and patterns of use. These segments have been used to target programs or offers
and to establish rates, making them very similar to the customer’s rate classification.

In moving toward an environment with more choice, many utilities have developed
profile-based segments to reach customers by examining other aspects of the
relationship, including buying habits, demographic profiles, industry, service
preferences and the like. This involves building a profile of an attractive
customer for a given expansion strategy or program. One of the biggest challenges
often is to segment current customers by profile, and using that designation
to determine customer values and drive specific actions. These profile-based
approaches employ a more complete understanding of the customer, but most utilities
have limited their use to specific programs or expansion strategies, with little
ongoing segment maintenance or updating.

In the broader view of customer valuation, it often is best to incorporate
a mix of product-based segmentation, profile-based segmentation and other customer
valuation concepts as required. One approach does not fit all situations and
there are myriad ways to leverage data sources, data manipulation techniques
and marketing and analytical tools. The types of decisions required and the
variability of the product or service being offered (e.g., offer content, components,
price, terms) influence choosing a segmentation approach. Project decisions
that are largely financial (e.g., driven by project NPV or ROI) will depend
more on product-based valuation and have more static customer data needs. Decisions
on marketing campaigns or customer service initiatives, managed over time, will
be driven more by profile-based valuation with more complex data needs (e.g.,
update frequency, data volumes). Reaching the most attractive prospects with
the right offer, message or change in performance is the key.
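One way to picture the mix of product-based and profile-based segmentation is a sketch like the following, where a usage-only segment is refined with profile attributes. The segment names, kWh thresholds and attributes are illustrative assumptions.

```python
# Illustrative combination of product-based and profile-based segmentation.
# All names and thresholds are assumptions for the sketch.

def product_segment(annual_kwh):
    """Product-based: classify purely on quantity of use."""
    if annual_kwh >= 50_000:
        return "large"
    if annual_kwh >= 10_000:
        return "medium"
    return "mass-market"

def profile_segment(customer):
    """Profile-based: refine the product segment with demographic and
    behavioral attributes, e.g. to target a warranty-services offer."""
    base = product_segment(customer["annual_kwh"])
    if base == "mass-market" and customer.get("owns_home") and customer.get("ebill"):
        return "mass-market/engaged-homeowner"
    return base

print(profile_segment({"annual_kwh": 8_500, "owns_home": True, "ebill": True}))
# -> mass-market/engaged-homeowner
```

The product-based tiers alone would put this customer in the same bucket as every other small account; the profile attributes are what make a targeted offer possible.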

An Approach for Valuing Customers

We posit that utilities can increase profitability by more carefully considering
customer value during the planning process. Improved decision-making for spending
scarce resources can have significant impacts on growth, risk and profitability.

The right method for determining customer value depends on a utility’s business
rationale and strategy for using customer value – real-time updates on customer
website usage would be inappropriate for customer valuation to aid infrastructure
planning, for example. We found that the successful development and use of any
customer valuation capability is based on a simple, four-step approach.

Customer Valuation Approach

Step 1: Set the Strategy

This most critical step in the process requires that stakeholders are clear
and agree about the strategic drivers for customer valuation, be it for marketing
campaigns or prioritizing capital investment options. The challenge is to clearly
articulate which of these drivers are most important at any given time.

Setting the strategy is the responsibility of senior management and requires
the involvement of operations, customer service, marketing and finance. A simple
rule is to involve senior managers from every area touched by the driver(s)
and potential uses of the customer valuation being developed. The senior management
team establishes and communicates commitment across all relevant areas, steers
the project and stays closely involved through key team members. Customer valuation
goals should be simple, measurable, have clear linkages between business strategy
and expected results, and they may build on each other over time. A project
charter can answer the full range of questions, including: What are we doing?
Why are we doing it? How will we use the information? How much time and funding
do we have? How will we measure success?

We needed people who were familiar with the financial systems of the company
so we could get to the data we had. We were fortunate – we had a good financial
director who was deeply interested in the project. Because she was responsible
for consolidation of our annual operating budgets, her operating insights
were indispensable.

—Sandy Bean, Marketing Director, Alabama Gas Corp.

The asset evaluation provides a realistic look at the available information
and skills within the company to get the job done. Many utilities underestimate
how much customer information they have, but overestimate the difficulty of
bringing this information into a usable, scalable format. The most readily available
data – energy usage, payment history and usage patterns for large customers
– will support only product-based segmentation, which is not customer valuation,
but is the extent of what most utility companies do for segmentation. A profile-based
segmentation requires a combination of demographic, aggregate behavioral patterns
and psychographic information. (Demographics consist of basic personal information
like age, marital status, and gender and are the most basic type of profile
data. Behavioral data focuses on buying habits, preferences for newspapers over
TV, etc. Psychographics depict consumers’ beliefs and interests in areas like
musical tastes and political opinions.) And interpreting this information often
requires a skill set not readily available in the utility business.

Step 2: Design the Approach

In this step, a designated project team develops the valuation hypotheses and
plan details. This team should represent all stakeholder areas identified in
Step 1, and it should also include financial analysis and IT support. It is
up to this team to convert the team charter into an executable plan and to provide
the sponsors with detailed information about how they will proceed.

In a customer valuation project, often the information is difficult to retrieve
in a usable format. We suggest that teams start with clearly stated hypotheses
that they hope the data will answer. Combining hypotheses and business knowledge
with IT and process expertise can aid in designing the overall approach, clarifying
data needs and maintaining a manageable flow of data for analysis. It also helps
to provide continuity for the employees who will be implementing and using customer
valuation tools.

If information is needed from an external vendor, define as specifically as
possible the information or service needed and what you plan to do with it once
you get it (think about proving or disproving hypotheses, as you did with your
own data). Specificity often allows vendors to suggest helpful and often less
expensive alternatives.

Then define what the valuation output will look like. Examples include: expected
customer lifetime values of all customers in a segment, relative value of every
current customer, or a listing of target customers. Each of these deliverables
requires a very different level of preparation and analysis. Design the approach
to fit the defined deliverable, while also supporting continuity and usefulness
for subsequent projects. Make sure the executive sponsors understand this output
beforehand, particularly if it was not fully defined in the team charter.
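For the expected-lifetime-value deliverable, one commonly used simplified formulation discounts expected future margins by a retention probability. The margin, retention and discount figures below are illustrative assumptions, not benchmarks.

```python
# Simplified expected customer lifetime value: the sum of future margins,
# each weighted by the probability the customer is still retained and
# discounted back to present value. Inputs are illustrative assumptions.

def lifetime_value(annual_margin, retention_rate, discount_rate,
                   horizon_years=20):
    """Discounted, retention-weighted margin over a finite horizon."""
    return sum(
        annual_margin * (retention_rate ** t) / ((1 + discount_rate) ** t)
        for t in range(1, horizon_years + 1)
    )

# e.g., $60/year margin, 90% annual retention, 10% discount rate
value = lifetime_value(60.0, 0.90, 0.10)   # roughly $265 per customer
```

Even this basic version makes the trade-offs visible: a few points of retention move the valuation far more than the same change in margin.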

Step 3: Perform the Valuation

This step involves execution of the approach from Step 2. Success in this step
depends on an ability to support or disprove the team’s hypotheses with flawless,
fact-based logic.

Start with the initial hypotheses that drove the data requirements developed
in Step 2. Conduct the analysis by rigorously applying the valuation approach.
Then refine and test your valuation conclusions within the team and with other
stakeholders as needed. Has the team thoroughly proved or disproved each hypothesis
and developed other important insights? If an outcome is counterintuitive, is
the team certain that it is correct and can be clearly explained?

We thought that allocating costs to customer segments would be our most time-consuming
task because it required sifting through so many cost-generating activities
and deciding the basis on which to allocate it to customer groups. This allocation
actually turned out to be one of the easiest portions of the project. What
took the most time was managing our scope and agreeing on how far we wanted
to go with this step, particularly as the work progressed.

—Sandy Bean, Marketing Director, Alabama Gas Corp.

Once the team has finalized its conclusions, it should turn them into recommendations
and action plans. The recommendations and actions should be synchronized with
the strategic drivers and clearly articulated with the project sponsors. While
some rework is customary, this is where a thorough analysis and approach begins
to pay off in the form of vital customer insights, targeted recommendations
and well-documented benefit statements.

Step 4: Implement and Manage the Process

This is where the analysis ends and the application begins. The customer valuations
just completed are used to make decisions on customer-impacting projects, processes
and systems. It is likely that some members of the valuation team will lead
the implementation efforts. In fact, it may be essential to building understanding
and buy-in.

A customer valuation project is rarely a one-time exercise. A customer valuation
for acquisition, retention or customer service capabilities is used frequently
and requires maintenance, refreshes and new functionality. A utility that creates
a customer valuation for a specific project has an opportunity to develop this
customer knowledge into a corporate asset that can be enhanced over time, providing
a common view of customers for any project. Such enhancements and further development
are opportunities for the “measure and refine” loop and can enter the process
at any step.

How to Get Started

Look at some of the following opportunities in your own company to determine
where to start:

  • Evaluate your current operating budget and categorize the investments you
    are making. Does this spending support the services most valued by your customers?
    Which services do your customers value most?
  • Conduct an audit of your current capital projects. Can you identify the
    impact on customers and revenue benefits from these projects? Has that revenue
    potential eroded or could it be improved over initial estimates?
  • Examine the profitability of the six largest marketing programs currently
    under way. What effect would a 5 percent increase in customer conversion rates
    have on the bottom line? What are the biggest barriers to this improvement?
  • Go on marketing and sales calls or review customer service calls. Do marketing
    and sales representatives have the information they need to vary how they
    handle or sell to customers?
  • Benchmark your customer satisfaction scores against a utility peer group
    of similar size, services and geography. Do you know what these other companies
    do or what drives their customer satisfaction? What specific actions could
    you take to match or surpass their satisfaction rates?

If the answers to any of these questions are not what you hoped or the answer
is “I don’t know,” it is time to implement a strategy for customer valuation
and define drivers that are most critical to the success of the business.




Collections Best Practice

Utility bad debt is increasing year-on-year according to market research group
Chartwell in its 2004 report, Credit and Collections in the Utility Industry.
Chartwell estimates that more than $1.7 billion in revenue was written off by
North American utilities in 2003, an average of $8.50 uncollected per customer.
Bad debt ratios for individual utilities can vary widely, with some utilities
having net bad debt as low as 0.15 percent of revenue, while others have seen
their debt rise by millions of dollars per year to well over 2 percent.

In a regulated marketplace the cost of utility bad debt is borne by ratepayers,
but regulators’ tolerance is wearing thin. Many utilities are finding themselves
in the spotlight, having to reduce their bad debt, and it is not uncommon to
see improvements in collections performance listed in the top five corporate
objectives.

Utility debt is expected to continue its rise amid a sluggish economy and an
environment of rising energy commodity prices, forcing utility collection managers
to reevaluate their collections assumptions and processes if they are to make
progress in improving collections performance. Old-school one-size-fits-all
approaches are no longer adequate in today’s challenging economic and regulatory
environment, and collections best-practice concepts are catching on fast.

“Utility collections best practice is about applying collections processes
appropriately to each customer account,” says Bob Cooke, principal of risk strategy
at specialist utility collections consultancy Bass & Company. “It is founded
on a deeper understanding of the credit risk of individual customers and customer
segments and implementing tailored treatment paths that focus resources on customers
most at risk.”

Collections Best Practices

  • Multi-Jurisdiction Compliance – Utilities operating in multiple states or
    countries require wide-ranging collections configurability to meet the varied
    stipulations laid out by each state or country. For example, in some states
    a utility is required to attempt customer contact twice before disconnection.
    In other states, disconnecting certain customer groups during the winter months
    is prohibited.
  • Customer Segmentation – Segmentation of a utility customer base for collections
    activities must extend beyond a traditional credit score or an acknowledgement
    that a customer is either “good” or “bad.” These categories, the basis for
    traditional utility collections practice, are too broad to be effective when
    measured against best-practice concepts. For example, there are customers
    who generally pay on time and occasionally late; others who find it difficult
    to pay and require payment plans; business or government customers who pay
    late but are a very low credit risk; and those who are continually being issued
    termination notices. To classify all four groups as “bad” and treat them equally
    will mean actions are too aggressive for some customers and not aggressive
    enough for others. Best-practice collections will classify customers into
    appropriate segments. In addition, some regulatory jurisdictions require detailed
    customer segmentation. In Pennsylvania, for example, regulations require specific
    collections treatments for “Level 1 and 2” low-income customers during the winter months.
  • Configurable Credit Scoring – Scoring credit risk based on customer attributes,
    historical payment patterns and other factors affecting credit risk as opposed
    to traditional methods that primarily measure payment timeliness. Utilities
    may choose their own business-defined parameters for calculating credit scores
    and may choose to score customers from 1 to 5, 1 to 10, or 1 to 1,000, depending
    upon what best supports their business practice. Credit scores may be adjusted
    based not just on whether customers have historically paid on time, paid late
    or received reminder notices, but also on other factors affecting credit risk
    such as whether the customer provided identification numbers or telephone
    numbers at the time of sign up.
  • Dynamic Action and Reaction – Traditional utility collections operate on
    a cycle basis: Each customer account is checked when its cycle comes around
    each month. Each business day, perhaps 5 percent of the customer base is checked
    for credit and collection status. But the real world is not cycle-based. Circumstances
    can change daily. Customers may make payments or have payments dishonored.
    Customers may provide additional security, their credit ratings may improve
    or worsen, collections policies may be adjusted, or the customer account may
    be reclassified to a different segment. Any of these factors can dictate an
    immediate requirement for collection action. Best-practice collections require
    that collections processes not be bound by billing cycles but by a more dynamic
    action: to act early to collect to minimize credit risk exposure.
  • Tailored Treatment Paths – These are designed to suit the anticipated profile
    of each customer. This might include printed letters, email, text messages,
    automated phone messages, outbound call center calls, account manager escalations,
    disconnection work orders and website messages, when appropriate. These can
    be configured to meet individual customer requirements such as agreed payment
    structures for those struggling to pay, polite reminders for customers who
    normally pay on time but on occasion forget, email reminders for customers
    who have chosen that as their preferred communication channel, and stronger
    warnings and accelerated cycles for delinquent customers who regularly default
    on outstanding debt. Different treatment paths are required to support each
    classified customer segment, within the general requirements for differing
    treatments for types of customer for each regulatory jurisdiction.
  • Outsourcing to Limit Exposure – As the collections process for each customer
    moves further along each treatment path, and as arrears age, the probability
    of recovery decreases – so does the financial value of the receivable (see
    Figure 1). As a customer’s debt ages a utility might look to “factor its debt,”
    meaning sell the receivable to an outside agency or consider whether it should
    simply be written off. Bass & Company research indicates that debt factoring
    can be applied more widely than just for very late-stage delinquencies. For
    example, for higher-risk customers, terminating the account earlier and outsourcing
    it to an agency, depending on the regulatory constraints, can reduce overall
    write-offs as the bad debt exposure is limited by the factoring process. It
    is also important to identify which groups of customers should be outsourced
    to which agencies – some agencies might have more skill and success with business
    customer debt recovery than residential.

  • Manage Cost to Collect – Best-practice collections focus not just on amount
    collected but on optimizing the cost incurred in making those collections.
    Collections costs are incurred in the call center, in letter printing and
    mailing, in working capital costs through delayed receipt of monies, and in
    bad debt written off. For every dollar spent on collections activities, more
    than one dollar should be recovered. By managing collections with an understanding
    of cost-to-collect and options such as debt factoring, collections costs can
    be optimized.
  • Collections Intelligence, Analytics and KPIs – Best-practice collections
    require continuous improvement through an interactive approach. Utility collection
    managers should use targeted collections campaigns and adjustments to collections
    treatment paths, analyzing the impact and regularly refining collection treatment
    paths and customer segmentation based on the results. This continuous improvement
    process requires the ability to monitor collections performance by customer
    segment, and to analyze metrics, key performance indicators (KPIs) and trends,
    and relate them to collections campaign dates. Interactive, what-if analysis
    will allow a utility to model and determine the effectiveness of alternative
    collections policies, monitor such metrics as “bad debt percent to revenue”
    and “days sales outstanding” and gauge their impact on operational costs.
    In addition, advanced data analysis using metrics and trend displays, and
    collection control dashboards that illustrate KPIs such as status of overdue
    receivables and corresponding collections actions, will provide the necessary
    business intelligence to adapt and extend collections activities.
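A minimal sketch of how the segmentation and configurable-scoring ideas above might look in code. The segment names, weights and score range are illustrative assumptions, not any vendor's actual model.

```python
# Hypothetical sketch: segment names, weights and score range are
# illustrative assumptions, not a real collections product's model.
from dataclasses import dataclass

@dataclass
class Account:
    on_time_rate: float          # fraction of bills historically paid on time
    termination_notices: int     # notices issued in the last 12 months
    on_payment_plan: bool
    provided_contact_info: bool  # ID and phone numbers given at sign-up

def credit_score(acct: Account, lo: int = 1, hi: int = 10) -> int:
    """Business-configurable score: the range (lo, hi) and weights are parameters."""
    raw = acct.on_time_rate                      # payment timeliness
    raw -= 0.1 * min(acct.termination_notices, 5)
    raw += 0.1 if acct.provided_contact_info else 0.0
    raw = max(0.0, min(1.0, raw))
    return lo + round(raw * (hi - lo))

def segment(acct: Account) -> str:
    if acct.termination_notices >= 3:
        return "chronic-delinquent"    # stronger warnings, accelerated cycles
    if acct.on_payment_plan:
        return "payment-plan"          # agreed payment structures
    if acct.on_time_rate >= 0.9:
        return "occasional-forgetter"  # polite reminder only
    return "standard"

good = Account(0.95, 0, False, True)
print(credit_score(good), segment(good))  # 10 occasional-forgetter
```

The point of the parameterization is that the same function can score 1 to 5, 1 to 10 or 1 to 1,000 simply by changing `lo` and `hi`, matching the configurability described above.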


A forward-thinking utility responding to the industrywide best-practice collections
challenge is Xcel Energy, the fourth-largest electricity and gas utility in
the United States. Xcel Energy required parameter-configured collections software
that would allow it to move away from static customer collections profiles that
pigeonhole customers into oversimplified “good” or “bad” categories in order
to deliver targeted credit and collections treatments. Today, Xcel Energy is
able to advance beyond the typical utility collection model of standard reminder
letters, customer calls and disconnection notices generated for all overdue
customers on the same number of days after due date.

Xcel Energy’s service territory covers 10 states, requiring wide-ranging collections
configurability to meet the varied regulatory stipulations. In New Mexico, for
example, collection cycles are allowed to be far more aggressive than in Colorado,
and it makes little business sense to implement the same collection cycles in
both states. Figure 2 contrasts credit treatment processes in Colorado and New Mexico.

In Colorado those customers with a “good” credit score receive a reminder letter
33 days after their due date, a notice of disconnection on day 64 and the disconnection
process starts on day 74.

Customers in the “others” category do not receive a reminder letter at all,
only a notice of disconnection on day 33, and the disconnection initiates on
day 41. In New Mexico, by comparison, collections treatments differ for commercial
and residential customer segments. The residential customer has 10 days more
between their invoice date and due date, and the timeframe for disconnection
initiation is much shorter for a commercial versus a residential customer: 10
and 39 days, respectively.
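Treatment schedules like these lend themselves to plain configuration data. The sketch below uses the Colorado day counts from the passage above; the dictionary structure and lookup function are illustrative assumptions, not Xcel Energy's actual system.

```python
# Hypothetical configuration of jurisdiction- and segment-specific
# treatment schedules; day counts are the Colorado figures cited above.
SCHEDULES = {
    ("CO", "good"):   {"reminder": 33, "disconnect_notice": 64, "disconnect": 74},
    ("CO", "others"): {"disconnect_notice": 33, "disconnect": 41},
}

def current_stage(state: str, segment: str, days_past_due: int):
    """Return the latest treatment stage an account has reached, if any."""
    sched = SCHEDULES.get((state, segment), {})
    due = [(day, action) for action, day in sched.items() if days_past_due >= day]
    return max(due)[1] if due else None

# A "good" Colorado customer 70 days past due has passed the day-64
# disconnection notice but not yet the day-74 disconnection.
print(current_stage("CO", "good", 70))  # disconnect_notice
```

Adding New Mexico's commercial and residential timelines is then a data change, not a code change, which is the essence of multi-jurisdiction configurability.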

Removing the Systems Barrier

Cooke at Bass & Company believes there is a basic understanding in the utility
industry of the concepts of collections best practices, but almost all utilities
lack the technology to be able to adopt best practices. “Everywhere we have
taken our collections models, technology has been the limiting factor,” he says.
“Without the right systems in place, collections best practice is an unobtainable goal.”

Traditionally, collections has been the technology domain of the utility’s
customer information system (CIS). Many utilities struggle with weak and restrictive
collections functionality in an aging CIS. Others may have a relatively new
CIS in place that boasts adequate collections support but is missing key features
to support today’s best-practice collections processes.

Utility collection managers, dependent upon the CIS platform to manage collections
processes, have been resigned to waiting years, perhaps decades, for a full-scale
CIS replacement opportunity so they can implement improved collection processes.

One might suppose that good, paying utility customers are doomed to subsidize
bad debt as utilities are resigned to absorbing write-offs above regulatory
allowances until the day they can do a full-scale CIS replacement. But this
is no longer so with the advent of new software products that can integrate
with an existing CIS and deliver much-needed collections capability to support
best-practice approaches.

Utility debt is not going away. There will always be customers who default
on payment. But technology has caught up with the problem, and utilities now
have the option to implement specialist collection software products that enable
best-practice approaches.


Asset Management: Practical Path to Success

The term asset management has taken center stage for most utility transmission
and distribution organizations, primarily because of cost pressures resulting from limited
capital and O&M funds, and customer/regulatory pressure to improve network reliability.
While definitions may vary, asset management is commonly characterized as maintaining
the “health” and “wealth” of the assets. Health is typically described as the
traditional maintenance management, performance monitoring, replacement and
investment planning based on asset operating performance and related operating
and maintenance costs. Wealth takes into consideration the revenue associated
with the asset along with the cost of maintaining its health and is used in
making enterprise investment decisions. Asset “health and wealth,” taken together,
must be managed through the entire network asset life cycle. This life cycle
spans planning, designing, building and commissioning the network, then operating,
monitoring, maintaining and refurbishing all parts of it (from equipment to
software to infrastructure).

So what makes it so difficult for the utility to implement an effective asset
management program that measures and makes informed decisions on asset health
and wealth? There are instances where the utility has been successful in implementing
asset management for a particular class of assets, or a specific phase of the
life cycle, such as maintenance. Given the emphasis placed on implementing information
technology solutions over the past 10 years to support the utility transmission
and distribution business, one might believe that the right tools
are in place to effectively, if not efficiently, manage assets. Clearly, many
of these technologies have been labeled “asset management” solutions but have
fallen far short of providing the capability to manage assets. So has technology
failed us in our endeavor to manage assets? A few utility technology-related
observations are worth looking at.

Utility Technology-Related Observations

It’s no secret that when utilities are forced into cost-cutting measures (which
has happened significantly over the last five years), the first strategy considered
is personnel reduction. During this same period, however, utilities have procured
and deployed more commercial applications (replacing legacy and in-house developed),
allowing for automation, productivity improvements and extended vendor support
to help justify the reduction in personnel. Figure 1 shows the impact that the
degree of commercial application implementation/integration has on the line
of business (LOB) and IT personnel.

This indicates a greater percentage reduction in IT staff than in LOB staff,
primarily for two reasons. First, software vendors, through more comprehensive
annual service agreements, are providing increased levels of application and
maintenance support. Second, in-house developed legacy systems often require
more IT personnel for administration and maintenance. Also, internally developed
software is generally susceptible to near-continuous customization requests
by the LOB.

Overall, at first, this seems to present a supportive picture in a utility’s
efforts to cut costs. However, as the degree of commercial application implementation
and integration continues, another phenomenon is taking place. This is best
illustrated through example. A typical distribution utility has 10 to 12 applications
for operations and engineering, each with a minimum of two to three interfaces,
provided by six to eight unique vendors. As these applications become more tightly
integrated, with most vendors providing annual upgrade releases, Figure 2 examines
the scenario that can occur.

Assume that a typical application upgrade is $500,000, and each interface upgrade
is $25,000. Those are probably conservative estimates. For a utility early on
in its commercial application implementation and integration efforts (e.g.,
three applications), it can be looking at nearly $2 million in upgrade costs.
For the completely integrated utility, these costs approach $12 million, more
than the internal IT staff can support and likely more than has been budgeted.
The question is whether the utility can afford to continue on this path, and
more importantly, what is the expected benefit in doing so? The primary benefit
received in application integration is improved business process efficiency,
but it results in minimal positive impact in managing assets. Furthermore, this
improvement in business process efficiency is only temporary if the utility
doesn’t have the financial or technical resources to maintain the technology
integration. It’s a snapshot in time, becoming out of date once the first vendor
provides an upgrade to its application and the utility makes a decision to implement
it. Clearly, fewer vendors providing an integrated solution can help alleviate
the substantial cost and resource requirements needed to keep the solution current.
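The upgrade-cost arithmetic above can be expressed as a back-of-the-envelope model. It uses the article's assumed unit costs ($500,000 per application upgrade, $25,000 per interface upgrade); the interface counts are illustrative.

```python
# Back-of-the-envelope model of annual upgrade cost, using the assumed
# unit costs from the text; interface counts are illustrative.
def annual_upgrade_cost(n_apps: int, interfaces_per_app: int,
                        app_cost: int = 500_000, iface_cost: int = 25_000) -> int:
    n_interfaces = n_apps * interfaces_per_app
    return n_apps * app_cost + n_interfaces * iface_cost

# Early in the integration effort: three applications, three interfaces each.
print(annual_upgrade_cost(3, 3))  # 1725000, the article's "nearly $2 million"
```

By this simple sum, a fully integrated estate of 12 applications with three interfaces each comes to $6.9 million; the roughly $12 million figure cited above suggests additional regression-testing and integration work beyond the per-interface fee, which this sketch deliberately omits.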

Another alternative may be to outsource the technology or contract to make
a single vendor responsible for keeping the technology current. Because many
applications (e.g., outage management, engineering design, planned maintenance)
are built around one or two primary business functions, the business units using
these applications can directly benefit from the upgrades. As a whole, the individual
business units may see a benefit from the upgrade (e.g., improved maintenance
optimization, better crew utilization in outage restoration); however, the business
process as a whole most likely suffers if the integrated solution isn’t maintained.

Why does asset management suffer under this scenario? Asset management is not
simply a sum of the parts in an asset’s life cycle. For example, the engineering
department can optimally design an extension to the utility’s physical network,
appropriately sizing the necessary transformers and other assets. However, if
the network is configured such that these transformers are typically operating
with excessive loads, or the manufacturer’s recommended maintenance isn’t performed
(or at least considered) or similar transformer failure history isn’t considered
in the maintenance plan, etc., the results are higher costs and lower reliability.
That is decreased asset health and wealth. The synergy of asset management is
that the whole of the asset life cycle is much greater than the sum of the individual
phases, even if, individually, the phases have been optimized. So, what should
a utility consider in implementing asset management?

Picking the Right Asset

Undertaking asset management across the entire utility can be a daunting task,
and even those who believed they had the right resources and finances have been
overwhelmed. A practical approach needs to be taken in the successful implementation
of asset management. First and foremost, the utility needs to recognize that
the expertise and much of the data, 80 percent or more, resides within the utility.
Second, although it’s important to implement an overall asset management strategy,
don’t become so engrossed in it that it keeps you from ever implementing. Some
utilities have become paralyzed by the intensive efforts they’ve taken to ensure
that their asset management strategy is just right. Third, start small by implementing
asset management for a specific type of asset, over the entire asset life cycle.

Although utilities have expended considerable resources on technology implementation
and integration to the benefit of the efficiency and effectiveness of business
processes, all is not lost in benefiting asset management. Through application
functional enhancements that technologically enable the associated business
processes, the collection, retention and availability of pertinent asset data
has been dramatically improved. The key is to be able to easily and readily
extract, transform and load the appropriate data from a number of disparate
application data sources so meaningful information can be presented and used
to answer questions and provide decision support for asset management.

This technology, combining a transmission and distribution business intelligence
model with data warehousing, is available and affordable today. Unlike the many
problems associated with keeping complex interfaces between integrated applications
current, this technology depends on a data model that seldom changes even
though the application may undergo frequent upgrades. Also, this technology
easily lends itself to utility-specific customizations, through the separation
of data tables, and readily supports new software releases and upgrades. Some
solution vendors have developed a robust utility transmission and distribution
data model with accompanying dashboards, key performance indicators, metrics
and reports that reduce this effort to one of a data-mapping exercise to the
utility’s data sources.

With the technology and much of the data available to support asset management,
the holistic view of the asset life cycle must be considered. As noted earlier,
much of the recent efforts to implement technology solutions have focused on
specific business functions, with the integration of these applications primarily
supporting the enhancement of the overall business processes. An approach to
embedding the asset life cycle in asset management is to implement a structured
process that looks at each phase of the asset life cycle by identifying the
questions that need to be answered, determining what data is needed to answer
the questions, and where that data resides. An abbreviated example of this approach
is shown in Figure 3.
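One hypothetical way to capture that structure in data: for each life-cycle phase, the questions to answer, the data needed, and where that data resides. The entries below are illustrative, not the contents of Figure 3.

```python
# Illustrative rendering of the structured approach: phase -> questions,
# each with the data needed and its source systems. Entries are made up.
LIFECYCLE = {
    "maintenance": [
        {"question": "Which transformers are overdue for inspection?",
         "data": ["last inspection date", "manufacturer interval"],
         "source": ["work management system", "asset registry"]},
    ],
    "operation": [
        {"question": "Which assets routinely run above rated load?",
         "data": ["load history", "nameplate rating"],
         "source": ["SCADA historian", "asset registry"]},
    ],
}

def data_gaps(lifecycle: dict, available_sources: set) -> list:
    """List questions whose source systems are not yet available."""
    return [q["question"]
            for phase in lifecycle.values() for q in phase
            if not set(q["source"]) <= set(available_sources)]

print(data_gaps(LIFECYCLE, {"asset registry", "work management system"}))
```

Walking the structure this way surfaces exactly the two observations made below: most questions are independent of asset class, and the data gaps become apparent as soon as sources are mapped.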

As the detail of this structured approach is fleshed out, it becomes apparent
that the majority of the questions asked are asset-type or class-independent.
Also, any gaps that may exist in the needed data become apparent.

By selecting a specific asset type or class to apply this approach, the utility
can refrain from being paralyzed by spending too much time on developing just
the right asset management strategy. More importantly, it can learn a lot about
what its overall strategy needs to be in managing assets. Another common
fate of those who initially strive for the “perfect” strategy is that they are
overwhelmed with data and can no longer derive meaningful information from it,
eroding the ability to make informed decisions. That is an expensive outcome,
not only in terms of dollars but also in the lost opportunity for effective asset
management.

An Asset Management Road Map

This approach to asset management:

  • Benefits from and leverages the investments the utility has made in technology,
    enabling utility transmission and distribution business functions;
  • Builds upon the functional enhancements and availability of data that has
    resulted as transmission and distribution applications have matured;
  • Supports the recent application integration efforts to improve overall business
    process efficiency;
  • Recognizes that proven and affordable technology exists today to technologically
    enable asset management;
  • Embeds the asset life cycle in making asset management decisions;
  • Provides the utility with an early success in asset management;
  • Provides the utility with specific asset management results that can be
    used to shape its overall strategy.

This practical approach to implementing asset management will provide the utility
with a road map that achieves measurable successes early on and places it
on a manageable path to an enterprise asset management solution.


Using the Sarbanes-Oxley Act

Who would have predicted that significant help in managing operational risks
would come from such an unlikely source: Section 404 of the Sarbanes-Oxley Act?
But, by voluntarily extending the Sarbanes-Oxley model, stretching management’s
study and evaluation of controls beyond financial reporting and control to operational
reporting and control, forward-thinking utilities may more effectively uncover
and manage regulatory and business risks, resulting in increased investor confidence.

So the scramble is on: Many public companies have staffed up internally and
hired outside resources to comply with Section 404 of the Sarbanes-Oxley Act.
These companies are consuming much in the way of time, manpower and money to
document, test and evaluate their internal control structures and procedures
for financial reporting. For example, an Ernst & Young survey of 100 major businesses
found that 70 percent of companies anticipate investing at least 10,000 hours
to comply with Section 404. Moreover, a survey by Financial Executives International
found that responding companies expect to pay an average of $732,100 for external
consulting, software and other assistance to comply. (Section 404 requires that
companies document and test controls three months before the end of the fiscal
year.)

For most of these companies, the end product of these significant investments
may result in an internal control report from management that accomplishes two
key objectives. First, it will help confirm management’s responsibility for
establishing and maintaining an adequate internal control structure and procedures
for financial reporting. Second, it will help assess how well the structure
and procedures work. The company’s external auditors must attest to, and report
on, management’s assessment as well as the effectiveness of internal controls
over financial reporting as of the assessment date.

When it’s all said and done, management at most companies will likely breathe
a sigh of relief, be glad it’s over, put Section 404 activities on the shelf
for awhile and move on to more invigorating activities.

It becomes clear, however, that forward-looking companies should consider taking
the logical next step of extending the Section 404 framework from the financial
side of the house to the operational side.

Yes, that’s right, voluntarily extend the Section 404 mindset to operations,
proactively immersing the entire organization in the identification, documentation
and testing of operational controls.

Changing a Mindset

Many companies view elements of Sarbanes-Oxley as time-consuming and expensive.
However, beyond the direct improvements resulting in financial reporting, such
efforts have additional potential upside as well. The most important upside
is: taking the expertise developed in documenting, measuring and testing financial
controls and applying it to operations.

In many ways, it is no more than the logical next step. Those who translate
everything they’ve learned on the financial side of the house to the operations
side will emerge from the 404 process the strongest.

The need for better operational controls and reporting has never been greater
for utilities. Over the past decade, new players, new regulations and new market
forces have shaken up the once-stable utility industry. The landscape is littered
with ill-fated attempts at deregulation, high-profile business failures and
scandals at energy trading organizations. Consumers are angry and confused,
investors are wary and state and federal legislators and regulators are determined
to return order to an industry that they believe is in desperate need of it.
All of these factors – plus the capital-intensive nature of the business – point
to increased need for vigorous risk management practices, policies and procedures.

The dawning of Section 404 financial reporting presents a golden opportunity
for management teams to put processes in place that manage risk from its origin
– operations. Without thorough operational control, true financial control may
be difficult to attain.

Focus on Financial

In all but the most sophisticated companies, operational controls traditionally
have been underdeveloped compared with the financial side. The group most often
charged with operational controls – the internal audit function – is usually
focused on compliance, instead of the operational issues that ultimately affect
financials. This focus on financial controls will only grow as compliance with
Section 404 is institutionalized.

In many companies, oversight of operational processes and controls is scattered
throughout the organization. More often than not, the job of mandatory operational
reporting – to various state and federal energy, health and safety, and environmental
regulatory agencies – falls on different functions within the organization,
such as human resources. By default, these departments within a company become
the keepers of various data and information, reporting them to regulatory agencies
on a routine basis. Because it lacks systemic control, standardized processes
and oversight, this ad hoc reporting style can create significant risks.

As a result, internally driven challenges to what organizations are doing operationally
are often nonexistent. A lot of upward reporting occurs, but not a lot of peer
review or systematic checks and balances. And it’s only through those approaches
that gaps can be uncovered and addressed before they create problems.

Sarbanes-Oxley is forcing companies to be more forthcoming and expose their
financial control risks. To truly control financial risks, however, companies
should think about looking even deeper and identify the operational risks from
which their financial risks emanate. The principles of Section 404 can give
them the tools to do so.

The Payoff: Fewer Surprises

Having stringent operational controls and reporting processes in place can
alert management to potential trouble spots long before their effects show up
in financial statements. This is because there is a very clear and close link
between operational and financial reporting. Financial reports are simply a
reflection of what has happened in operations. Unfortunately, management’s first
look at what happened operationally frequently occurs when the financial statements
are produced. When the results aren’t what they’d expected or hoped for, they
then try to fix the problem, but by then it’s too late. In essence, management
works backward from the financial statements.

Having operational controls and processes in place, documented and monitored
could serve as an early warning system and allow a company to address what is
happening operationally on a more real-time basis. Long before financial statements
are produced, management would recognize that things aren’t going as anticipated
and take corrective action. Many of the recent headline-grabbing corporate scandals
and failures might have been prevented or mitigated with better operational
controls and oversight. With better operational controls, management at these
companies could have known early on that they had a fundamental business problem
brewing and could have acted on it, avoiding all of the tragedy that followed.
Unfortunately, they didn’t understand what was happening in their companies
operationally, and by the time they discovered it, they didn’t have enough time
to manage it.

What to Do

Companies that determine to extend assessment of controls beyond those dealing
with financial reporting will immediately be faced with a daunting task: deciding
where to start.

Most utilities face no shortage of operational risks, and there are essentially
four ways to address them:

  • Insuring them;
  • Hedging them;
  • Learning to live with them because they don’t pose a threat for significant
    loss; and
  • Mitigating them through improved operational and financial controls.

Because it’s impossible, and probably unnecessary, to address every risk, we
suggest employing two criteria for identifying those to tackle immediately.
First, try to zero in on the critical “handfuls” that pose the most significant
potential for loss. Second, consider addressing those that lend themselves to
mitigation by improved internal controls. This would include defining and mapping
the process as it currently exists, defining what it ideally should look like,
and implementing a plan to fill in the gaps.

Internal audit functions are the likely candidates to make suggestions to management
for implementing 404-like practices in operations. But to do so, they will need
to create a new model for developing, implementing and monitoring operational
controls – a revamp that is long overdue in most companies. In the future, the
internal audit groups that deliver the most value to their organizations will
be those that offer innovative solutions to operational control problems.

Though each company faces a unique set of challenges, there are a handful of
likely candidates for Section 404-like documentation, review and testing at
most utilities.

Near the top of the list are energy procurement and commodity trading activities.
In the mid- to late-1990s, most utility companies took advantage of price-risk
protection offered by the use of derivatives by creating their own trading groups
to interact and transact with major trading houses. These groups quickly gained
tremendous market power – constrained only in limited fashion by their regulatory
authorities – and developed significant expertise and sophistication in negotiating
transactions using complicated derivatives.

But as these trading groups grew in sophistication, their operational controls
did not keep up. As a result, they present significant risk from lack of, or
inappropriate, controls over the segregation of duties, as well as the authorization,
validation and confirmation of transactions. Even after numerous high-profile
collapses within the industry, many utilities today still need to increase the
level of scrutiny over controls within their trading departments. While 404
has caused them to focus on controls regarding segregation of duties between
the front-, middle- and back-office functions, the initiation of transactions
and the ensuing confirmation, there are still major operational control issues
that utilities should address, including:

  • Determination of appropriate authority limits, including spending limits,
    contract length limits, capital at risk and other parameters in line with the
    corporate governance objectives of the organization;
  • Credit policies, including collateral requirements and credit limits for counterparties;
  • Document retention and price reporting policies to provide safe harbor relative
    to regulatory requirements;
  • Segregation of transactions between regulated and unregulated entities under
    the same corporate umbrella;
  • Realignment of compensation models vis-à-vis the trading strategies of the
    organization; and
  • Reassessment of the controls surrounding the physical movement of the commodity,
    including scheduling to minimize imbalances.

Another likely candidate for operational control overhaul in many utilities
is the process surrounding approval, initiation and management of capital construction
projects. Long-term, tangible assets remain the backbone of utilities, and their
integrity must be maintained through frequent upgrades, expansion, repair and replacement.

Often, utilities lack appropriate controls over project approval and initiation.
Because field personnel and operations crews are tasked with completing capital
projects within tight deadlines, projects can be established and executed with
minimal scrutiny from an operational standpoint.

A utility’s ability to navigate the new regulatory landscape is another operational
aspect worthy of close examination. During the 1990s and early 2000s, many utilities
dramatically altered their strategic growth plans to focus on nonregulated energy
entities that held the allure of higher, market-based returns. Stable, cash-generating
regulated activities, while still at the core of the utility business, were
often labeled as “slow growth” or “no growth” enterprises. As a result, the
in-house regulatory affairs capabilities of many utilities, once large and well-staffed,
changed focus and shrank dramatically.

Fast forward to 2005. Because of recent spectacular deregulation failures,
many nonregulated energy service offerings have declined or disappeared over
the last few years. Many utilities have gone back to the basics – safely maintaining
their assets, reliably delivering energy and collecting revenue in a timely
manner – with regulated activities once again accounting for the lion’s share
of utilities’ growth prospects.

During the period of infatuation with nonregulated businesses, there was little
major utility rate case activity. Now, with the focus shifting back to regulated
activities, many utilities likely no longer have the in-house capabilities,
skill sets or resources to mount any significant rate case work.

As energy utilities develop their strategies beyond the first year of 404,
they must find resources capable of reviving their own industry expertise,
market intelligence and regulatory insight. As these resources are added,
implementing a framework in which operational controls are present provides
the best opportunity to manage risk and prevent many of the mistakes of the past.

Who Will Lead?

Because recent business failures and scandals have created an environment that
puts internal operational controls under the microscope, it’s logical to think
that many companies will embrace this idea. But that’s not likely to happen.

Management teams are so focused on 404 implementation that, given time and
resource constraints, many may fail to recognize that assessing financial
reporting controls is only half the battle. To date, the investment community has not placed a
great deal of emphasis on strong operational controls. But it reacts negatively
to financial reporting control failures, often not realizing that these failures
usually result from the lack of operational controls. Investors and other constituents
of these companies will come to that realization when another scandal occurs
despite Section 404 compliance.

If Section 404-weary management does not lead the charge to extend this concept
to operational controls and reporting, the call for stronger operational controls
will likely come from regulators, boards of directors, audit committees, investors,
rating agencies and other stakeholders, as they realize that Section 404 compliance
does not address all ills a company faces. When this happens, investors and
other constituents will then place a premium on organizations that are well-managed
and have best-in-class operational controls.

The irony is that, for companies consumed with the Section 404 compliance flurry,
a promising solution for addressing their operational risks could be right in
front of them.