SmartGridNet Architecture for Utilities

UTILITY NETWORK BUSINESS DRIVERS

With the accelerating movement toward distributed generation and the rapid shift in energy consumption patterns, today’s power utilities are facing growing requirements for improved management, capacity planning, control, security and administration of their infrastructure and services.

These requirements are driving a need for greater automation and control throughout the power infrastructure, from generation through the customer site. Utilities are also interested in providing end-customers with new applications, such as advanced metering infrastructure (AMI), online usage reports and outage status. Beyond meeting these requirements, utilities are under pressure to reduce costs and automate operations, as well as protect their infrastructures from service disruption in compliance with homeland security requirements.

To succeed, utilities must meet these demands while continuing to support an embedded infrastructure of traditional devices and technologies. This will allow them to provide a smooth evolution to next-generation capabilities, manage life cycle issues for aging equipment and devices, maintain service continuity, minimize capital investment, and ensure scalability and future-proofing for new applications, such as smart metering.

By adopting an evolutionary approach to an intelligent communications network (SmartGridNet), utilities can maximize their ability to leverage the existing asset base and minimize capital and operations expenses.

THE NEED FOR AN INTELLIGENT UTILITY NETWORK

As a first step toward implementing a SmartGridNet, utilities must deploy intelligent electronic devices (IEDs) throughout the infrastructure – from generation and transmission through distribution directly to customer premises – if they are to effectively monitor and manage facilities, load and usage. A sophisticated operational communications network then interconnects these devices with control centers, providing support for supervisory control and data acquisition (SCADA), teleprotection, remote meter reading, and operational voice and video. This network also enables new applications such as field personnel management and dispatch, safety and localization. In addition, the utility’s corporate communications network increases employee productivity and improves customer service by providing multimedia; voice, video, and data communications; worker mobility; and contact center capabilities.

These two network types – operational and corporate – and the applications they support may leverage common network facilities; however, they have very different requirements for availability, service assurance, bandwidth, security and performance.

SMARTGRIDNET REQUIREMENTS

Network technology is critical to the evolution of the next-generation utility. The SmartGridNet must support the following key requirements:

  • Virtualization. Enables operation of multiple virtual networks over common infrastructure and facilities while maintaining mutual isolation and distinct levels of service.
  • Quality of service (QoS). Allows priority treatment of critical traffic on a “per-network, per-service, per-user basis” (illustrated in the sketch following this list).
  • High availability. Ensures constant availability of critical communications, transparent restoration and “always on” service – even when the public switched telephony network (PSTN) or local power supply suffers outages.
  • Multipoint-to-multipoint communications. Provides integrated control and data collection across multiple sensors and regulators via synchronized, redundant control centers for disaster recovery.
  • Two-way communications. Supports increasingly sophisticated interactions between control centers and end-customers or field forces to enable new capabilities, such as customer sellback, return or credit allocation for locally stored power; improved field service dispatch; information sharing; and reporting.
  • Mobile services. Improves employee efficiency, both within company facilities and in the field.
  • Security. Protects the infrastructure from malicious and inadvertent compromise from both internal and external sources, ensures service reliability and continuity, and complies with critical security regulations such as North American Electric Reliability Corp. (NERC).
  • Legacy service integration. Accommodates the continued presence of legacy remote terminal units (RTUs), meters, sensors and regulators, supporting circuit, X.25, frame relay (FR), and asynchronous transfer mode (ATM) interfaces and communications.
  • Future-proofing. Capability and scalability to meet not just today’s applications, but tomorrow’s, as driven by regulatory requirements (such as smart metering) and new revenue opportunities, such as utility delivery of business and residential telecommunications (U-Telco) services.
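To make the virtualization and QoS requirements above concrete, the following minimal sketch classifies traffic per network and per service. The virtual-network names and DSCP code points are invented assumptions for illustration; this is not any vendor's actual implementation.

    # Minimal sketch: mapping (network, service) pairs to a DSCP marking.
    # Virtual-network names and code points below are illustrative
    # assumptions, not values mandated by any smart grid standard.

    DSCP = {
        ("operational", "teleprotection"): 46,  # expedited forwarding
        ("operational", "scada"): 34,
        ("operational", "metering"): 26,
        ("corporate", "voice"): 46,
        ("corporate", "video"): 34,
        ("corporate", "data"): 0,               # best effort
    }

    def classify(network: str, service: str) -> int:
        """Return the DSCP marking for a flow, defaulting to best effort."""
        return DSCP.get((network, service), 0)

    if __name__ == "__main__":
        for flow in [("operational", "teleprotection"), ("corporate", "data")]:
            print(flow, "->", classify(*flow))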

SMARTGRIDNET EVOLUTION

A number of network technologies – both wire-line and wireless – work together to achieve these requirements in a SmartGridNet. Utilities must leverage a range of network integration disciplines to engineer a smooth transformation of their existing infrastructure to a SmartGridNet.

The remainder of this paper describes an evolutionary scenario, in which:

  • Next-generation synchronous optical network (SONET)-based multiservice provisioning platforms (MSPPs), with native QoS-enabled Ethernet capabilities, are seamlessly introduced at the transport layer to switch traffic from both embedded sensors and next-generation IEDs.
  • Cost-effective wavelength division multiplexing (WDM) is used to increase communications network capacity for new traffic while leveraging embedded fiber assets.
  • Multiprotocol label switching (MPLS)/IP routing infrastructure is introduced as an overlay on the transport layer only for traffic requiring higher-layer services that cannot be addressed more efficiently by the transport layer MSPPs.
  • Circuit emulation over IP virtual private networks (VPNs) is supported as a means for carrying sensor traffic over shared or leased network facilities.
  • A variety of communications applications are delivered over this integrated infrastructure to enhance operational efficiency, reliability, employee productivity and customer satisfaction.
  • A toolbox of access technologies is appropriately applied, per specific area characteristics and requirements, to extend power service monitoring and management all the way to the end-customer’s premises.
  • A smart home network offers new capabilities to the end-customer, such as Advanced Metering Infrastructure (AMI), appliance control and flexible billing models.
  • The availability, security, performance and regulatory compliance of the communications network are managed and assured.

THE SMARTGRIDNET ARCHITECTURE

Figure 1 provides an architectural framework that we may use to illustrate and map the relevant communications technologies and protocols.

The backbone network in Figure 1 interconnects corporate sites and data centers, control centers, generation facilities, transmission and distribution substations, and other core facilities. It can isolate the distinct operational and corporate communications networks and subnetworks while enforcing the critical network requirements outlined in the section above.

The underlying transport network for this intelligent backbone is made up of both fiber and wireless (for example, microwave) technologies. The backbone also employs ring and mesh architectures to provide high availability and rapid restoration.

INTELLIGENT CORE TRANSPORT

As alluring as pure packet networks may be, SONET remains a key technology for operational backbones. Only SONET can support the range of new and legacy traffic types while meeting the stringent absolute delay, differential delay and 50-millisecond restoration requirements of real-time traffic.

SONET transport for legacy traffic may be provided in MSPPs, which interoperate with embedded SONET elements to implement ring and mesh protection over fiber facilities and time division multiplexing (TDM)-based microwave. Full-featured Ethernet switch modules in these MSPPs enable next-generation traffic via Ethernet over SONET (EOS) and/or packet over SONET (POS). Appropriate, cost-effective WDM solutions – for example, coarse, passive and dense WDM – may also be applied to guarantee sufficient capacity while leveraging existing fiber assets.

BACKBONE SWITCHING/ROUTING

From a switching and routing perspective, a significant amount of traffic in the backbone may be managed at the transport layer – for example, via QoS-enabled Ethernet switching capabilities embedded in the SONET-based MSPPs. This is a key capability for supporting expedited delivery of critical traffic types, enabling utilities to migrate in the future to generic object-oriented substation event (GOOSE)-based inter-substation communications for SCADA and teleprotection, in accordance with standards such as IEC 61850.
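As a rough illustration of the GOOSE publication model referenced above, the sketch below mimics the repeat-transmission behavior of an IEC 61850 GOOSE publisher: a state change increments the state number and restarts fast retransmission, which backs off toward a steady heartbeat. Field names and timing values are simplified assumptions, not a conformant implementation.

    from dataclasses import dataclass

    # Simplified GOOSE-style publisher: a state change increments stNum and
    # resets sqNum; repeats of the same state increment sqNum while the gap
    # between repeats backs off toward a steady heartbeat. Timing values
    # are illustrative assumptions.

    @dataclass
    class GoosePublisher:
        st_num: int = 0
        sq_num: int = 0
        min_interval_ms: int = 4      # first fast retransmission gap
        max_interval_ms: int = 1000   # steady-state heartbeat
        interval_ms: int = 1000

        def state_change(self) -> dict:
            """Publish a new state: bump stNum, reset sqNum, go fast."""
            self.st_num += 1
            self.sq_num = 0
            self.interval_ms = self.min_interval_ms
            return self._frame()

        def retransmit(self) -> dict:
            """Repeat the current state, doubling the gap up to the cap."""
            self.sq_num += 1
            self.interval_ms = min(self.interval_ms * 2, self.max_interval_ms)
            return self._frame()

        def _frame(self) -> dict:
            return {"stNum": self.st_num, "sqNum": self.sq_num,
                    "nextIntervalMs": self.interval_ms}

    if __name__ == "__main__":
        pub = GoosePublisher()
        print(pub.state_change())    # e.g., a breaker trip event
        for _ in range(4):
            print(pub.retransmit())  # fast repeats backing off: 8, 16, ...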

Where higher-layer services – for example, IP VPN, multicast, ATM and FR – are required, however, utilities can introduce a multiservice switching/routing infrastructure incrementally on top of the transport infrastructure. The switching infrastructure is based on MPLS, implementing Layer 2 transport encapsulation and/or IP VPNs per the relevant Internet Engineering Task Force (IETF) requests for comments (RFCs).

This type of unified infrastructure reduces operations costs by sharing switching and restoration capabilities across multiple services. Current IP/MPLS switching technology is consistent with the network requirements summarized above for service traffic requiring higher-layer services, and may be combined with additional advanced services such as Layer 3 VPNs and unified threat management (UTM) devices/firewalls for further protection and isolation of traffic.

CORE COMMUNICATIONS APPLICATIONS

Operational services such as teleprotection and SCADA represent key categories of applications driving the requirements for a robust, secure, cost-effective network as described. Beyond these, a number of communications applications enable improved operational efficiency for the utility, as well as mechanisms to enhance employee productivity and customer service. These include, but are not limited to:

  • Active network controls. Improves capacity and utilization of the electricity network.
  • Voice over IP (VoIP). Leverages common network infrastructure to reduce the cost of operational and corporate voice communications – for example, eliminating costly channel banks for individual lines required at remote substations.
  • Closed circuit TV (CCTV)/Video Over IP. Improves surveillance of remote assets and secure automated facilities.
  • Multimedia collaboration. Combines voice, video and data traffic in a rich application suite to enhance communication and worker productivity, giving employees direct access to centralized expertise and online resources (for example, standards and diagrams).
  • IED interconnection. Enables better measurement and management of the electricity network.
  • Mobility. Leverages in-plant and field worker mobility – via cellular, land mobile radio (LMR) and WiFi – to improve efficiency of key work processes.
  • Contact center. Employs next-generation communications and best-in-class customer service business processes to improve customer satisfaction.

DISTRIBUTION AND ACCESS NETWORKS

The intelligent utility distribution and access networks are subtending networks from the backbone, accommodating traffic between backbone switches/applications and devices in the distribution infrastructure all the way to the customer premises. IEDs on customer premises include automated meters and device regulators to detect and manage customer power usage.

These new devices are primarily packet-based and are therefore best supported by packet-based access network technologies, although TDM may still be warranted for select rings. The packet-based access network technology chosen depends on the specifics of the sites to be connected and the economics associated with that area (for example, right of way, customer densities and embedded infrastructure).

Regardless of the access and last-mile network designs, traffic ultimately arrives at the network via an IP/MPLS edge switch/router with connectivity to the backbone IP/MPLS infrastructure. This switching/routing infrastructure ensures connectivity among the intelligent edge devices, core capabilities and control applications.

THE SMART HOME NETWORK

A futuristic home can support many remotely controlled and managed appliances centered on lifestyle improvements of security, entertainment, health and comfort (see Figure 2). In such a home, applications like smart meters and appliance control could be provided by application service providers (ASPs) (such as smart meter operators or utilities), using a home service manager and appropriate service gateways. This architecture differentiates between the access provider – that is, the utility/U-Telco or other public carrier – and the multiple ASPs who may provide applications to a home via the access provider.

FLEXIBLE CHARGING

By employing smart meters and developing the ability to retrieve electricity usage data at regular intervals – potentially several readings per hour – retailers could make billing a significant competitive differentiator. Detailed usage information has already enabled value-added billing in the telecommunications world, and AMI can do likewise for electricity services. In time, electricity users will come to expect the same degree of flexible charging in their electricity bills that they already experience with their telephone bills, including, for example, prepaid and post-paid options, time-based tariffs, automated billing for house rental (vacation), family or group tariffs, budget tariffs and messaging.
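As a sketch of what time-based tariffs could look like once interval reads are available, the fragment below prices half-hourly readings against peak and off-peak rates. The rates and the peak window are invented for illustration.

    # Illustrative time-of-use billing over half-hourly interval reads.
    # The rates and the 07:00-23:00 peak window are example assumptions.

    PEAK_RATE = 0.22       # currency units per kWh
    OFF_PEAK_RATE = 0.09

    def bill(reads):
        """reads: list of (hour_of_day, kwh) tuples from an interval meter."""
        total = 0.0
        for hour, kwh in reads:
            rate = PEAK_RATE if 7 <= hour < 23 else OFF_PEAK_RATE
            total += kwh * rate
        return round(total, 2)

    if __name__ == "__main__":
        # One day of 48 half-hourly reads at a flat 0.6 kWh per interval.
        sample_day = [(h, 0.6) for h in range(24)] + \
                     [(h + 0.5, 0.6) for h in range(24)]
        print("daily charge:", bill(sample_day))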

MANAGING THE COMMUNICATIONS NETWORK

For utilities to leverage the communications network described above to meet key business requirements, they must intelligently manage that network’s facilities and services. This includes:

  • Configuration management. Provisioning services to ensure that underlying switching/routing and transport requirements are met.
  • Fault and performance management. Monitoring, correlating and isolating fault and performance data so that proactive, preventative and reactive corrective actions can be initiated.
  • Maintenance management. Planning of maintenance activities, including material management and logistics, and geographic information management.
  • Restoration management. Creating trouble tickets, dispatching and managing the workforce, and carrying out associated tracking and reporting.
  • Security management. Assuring the security of the infrastructure, managing access to authorized users, responding to security events, and identifying and remediating vulnerabilities per key security requirements such as NERC.

Utilities can integrate these capabilities into their existing network management infrastructures, or they can fully or partially outsource them to managed network service providers.

Figure 3 shows how key technologies are mapped to the architectural framework described previously. Being able to evolve into an intelligent utilities network in a cost-effective manner requires trusted support throughout planning, design, deployment, operations and maintenance.

CONCLUSION

Utilities can evolve their existing infrastructures to meet key SmartGridNet requirements by effectively leveraging a range of technologies and approaches. Through careful planning, design, engineering and application of this technology, such firms can achieve the business objectives of SmartGridNet while protecting their current investments in infrastructure. Ultimately, by taking an evolutionary approach to SmartGridNet, utilities can maximize their ability to leverage the existing asset base as well as minimize capital and operations expenses.

Building a Vision for the Future

The United States and the world are facing two preeminent energy challenges: the rising cost of energy and the impact of increasing energy use on the environment. As a regulated public utility and one of the largest energy delivery companies in the Mid-Atlantic region, Pepco Holdings Inc. (PHI) recognized that it was uniquely positioned to play a leadership role in helping meet both of these challenges.

PHI calls the plan it developed to meet these challenges the Blueprint for the Future (Blueprint). The plan builds on work already begun through PHI’s Utility of the Future initiative, as well as other programs. The Blueprint focuses on implementing advanced technologies and energy efficiency programs to improve service to its customers and enable them to manage their energy use and costs. By providing tools for nearly 2 million customers across three states and the District of Columbia to better control their electricity use, PHI believes it can make a major contribution to meeting the nation’s energy and environmental challenges, and at the same time help customers keep their electric and natural gas bills as low as possible.

The PHI Blueprint is designed to give customers what they want: reasonable and stable energy costs, responsive customer service, power reliability and environmental stewardship.

PHI is deploying a number of innovative technologies. Some, such as its automated distribution system, help to improve reliability and workforce productivity. Other systems, including an advanced metering infrastructure (AMI), will enable customers to monitor and control their electricity use, reduce their energy costs and gain access to innovative rate options.

PHI’s Blueprint is both ambitious and complex. Over the next five years PHI will be deploying new technologies, modifying and/or creating numerous information systems, redefining customer and operating work processes, restructuring organizations, and managing relationships with customers and regulators in four jurisdictions. PHI intends to do all of this while continuing to provide safe and reliable energy service to its customers.

To assist in developing and executing this plan, PHI reached out to peer utilities and vendors. One significant “partner” group is the Global Intelligent Utility Network Coalition (GIUNC), established by IBM, which currently includes CenterPoint Energy (Texas), Country Energy (New South Wales, Australia) and PHI.

Leveraging these resources and others, PHI managers spent much of 2007 compiling detailed plans for realizing the Blueprint. Several aspects of these planning efforts are described below.

VISION AND DESIGN

In 2007, multiple initiatives were launched to flesh out the many aspects of the Blueprint. As Figure 1 illustrates, all of the initiatives were related and designed to generate a deployment plan based on a comprehensive review of the business and technical aspects of the project.

At this early stage, PHI does not yet have all the answers. Indeed, prematurely committing to specific technologies or designs for work that will not be completed for five years can raise the risk of obsolescence and lost investment. The deployment plan and system map, discussed in more detail below, are intended to serve as a guide. They will be updated and modified as decision points are reached and new information becomes available.

BUSINESS CASE VALIDATION

One of the first tasks was to review and define in detail the business case analyses for the project components. Both benefit assumptions and implementation costs were tested. Reference information (benchmarks) for this review came from a variety of sources: IBM’s experience in projects of similar scope and type; PHI materials and analysis; experiences reported by other GIUNC members and other utilities; and other publicly available sources. This information was compiled, and a present value analysis was conducted on discounted cash flow and rate of return, as shown in Figure 2.
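The present value mechanics behind such a review are standard discounted cash flow. A minimal sketch follows, with the cash flows and discount rate invented for illustration rather than taken from PHI's actual business case.

    # Minimal discounted cash flow sketch for a business case review.
    # Cash flows and the 8 percent discount rate are invented assumptions.

    def npv(rate, cash_flows):
        """Present value of yearly cash flows; year 0 is the first element."""
        return sum(cf / (1 + rate) ** year
                   for year, cf in enumerate(cash_flows))

    if __name__ == "__main__":
        # Year 0: AMI capital outlay; years 1-5: net operational benefits.
        flows = [-120.0, 20.0, 30.0, 35.0, 40.0, 40.0]   # millions
        print("NPV at 8%:", round(npv(0.08, flows), 1), "million")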

In addition to an “operational benefits” analysis, PHI and the Brattle Group developed value assessments associated with demand response offerings such as critical peak pricing. With demand response, peak consumption can be reduced and capacity costs avoided. This means lower total energy prices for customers and fewer new capacity additions in the market. As Figure 2 shows, even in the worst-case scenario for demand response savings, operational and customer benefits will offset the cost of PHI’s AMI investment.

The information from these various cases has since been integrated into a single program management tool. Additional capabilities for optimizing results based on value, cost and schedule were developed. Finally, dynamic relationships between variables were modeled and added to the tool, recognizing that assumptions don’t always remain constant as plans are changed. One example of this would be the likely increase in call center cost per meter when deployment accelerates and customer inquiries increase.
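One way such a dynamic relationship could be modeled is to make unit cost a function of deployment pace rather than a constant. The toy sketch below uses invented coefficients purely to illustrate the idea.

    # Toy model of one dynamic relationship from the planning tool: call
    # center cost per installed meter rises as deployment accelerates and
    # customer inquiries increase. The baseline cost and loading factor
    # are invented assumptions.

    BASE_COST_PER_METER = 1.50        # currency units at the nominal pace
    NOMINAL_METERS_PER_MONTH = 20_000

    def call_center_cost_per_meter(meters_per_month: int) -> float:
        """Scale unit cost with deployment pace (linear, for illustration)."""
        loading = meters_per_month / NOMINAL_METERS_PER_MONTH
        return round(BASE_COST_PER_METER * (0.5 + 0.5 * loading), 2)

    if __name__ == "__main__":
        for pace in (10_000, 20_000, 40_000):
            print(pace, "meters/month ->", call_center_cost_per_meter(pace))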

HIGH-LEVEL COMMUNICATIONS ARCHITECTURE DESIGN

To define and develop the communications architecture, PHI deployed a structured approach built around IBM’s proprietary Optimal Comparative Communications Architecture Methodology (OCCAM). This methodology established the communications requirements for AMI, data architecture and the other technologies considered in the Blueprint. Next, the existing communications infrastructure and capabilities that could be leveraged in support of the new technologies were evaluated. Then, alternative solutions to “close the gap” were reviewed. Finally, all of this information was incorporated into an analytical tool that matched the most appropriate communications technology to each specified geographic area and business need.

SYSTEM MAP AND INFORMATION MODEL

Defining the data framework and the approach to overall data integration elements across the program areas is essential if companies are to effectively and efficiently implement AMI systems and realize their identified benefits.

To help PHI understand what changes are needed to get from the current state to a shared vision of the future, the project team reviewed and documented the “current state” of the systems affected by its plans. Then, subject matter experts in metering, billing, outage management, system design, work and workforce management, and business data analysis were engaged to expand on the data architecture information, including information on systems, functions and the process flows that tie them all together. Finally, the information gathered was used to develop a shared vision of how PHI processes, functions, systems and data will fit together in the future.

By comparing the design of as-is systems with the to-be architecture of information management and information flows, PHI identified information gaps and developed a set of next steps. One key step establishes an “enterprise architecture” model for development. The first objective would be to establish and enforce governance policies. With these in place, PHI will define, draft and ratify detailed enterprise architecture and enforce priorities, standards, procedures and processes.

PHASE 2 DEPLOYMENT PLAN

Based on the planning conducted over the last half of the year, a high-level project plan for Phase 2 deployment was compiled. The focus was mainly on Blueprint initiatives, while considering dependencies and constraints reported in other transformation initiatives. PHI subject matter experts, project team leads and experience gathered from other utilities were all leveraged to develop the Blueprint deployment plan.

The deployment plan includes multiple types of tasks, spanning process, organizational, technical and project management office-related activities, and covers a period of five to six years. Initiatives will be deployed in multiple releases, phased across jurisdictions (Delaware, District of Columbia, Maryland, New Jersey) and coordinated between meter installation and communications infrastructure buildout schedules.

The plan incorporates several initiatives, including process design, system development, communications infrastructure and AMI, and various customer initiatives. Because these initiatives are interrelated and complex, some programmatic initiatives are also called for, including change management, benefits realization and program management. From this deployment plan, more detailed project plans and dependencies are being developed to provide PHI with an end-to-end view of implementation.

As part of the planning effort, key risk areas for the Blueprint program were also defined, as shown in Figure 3. Input from interviews and knowledge leveraged from similar projects were included to ensure a comprehensive understanding of program risks and to begin developing mitigation strategies.

CONCLUSION

As PHI moves forward with implementation of its AMI systems, new issues and challenges are certain to arise, and programmatic elements are being established to respond. A program management office has been established and continues to drive more detail into plans while tracking and reporting progress against active elements. AMI process development is providing the details for business requirements, and system architecture discussions are resolving interface issues.

Deployment is still in its early stages, and much work lies ahead. However, with the effort grounded in a clear vision, the journey ahead looks promising.

The Smart Grid: A Balanced View

Energy systems in both mature and developing economies around the world are undergoing fundamental changes. There are early signs of a physical transition from the current centralized energy generation infrastructure toward a distributed generation model, where active network management throughout the system creates a responsive and manageable alignment of supply and demand. At the same time, the desire for market liquidity and transparency is driving the world toward larger trading areas – from national to regional – and providing end-users with new incentives to consume energy more wisely.

CHALLENGES RELATED TO A LOW-CARBON ENERGY MIX

The structure of current energy systems is changing. As load and demand for energy continue to grow, many current-generation assets – particularly coal and nuclear systems – are aging and reaching the end of their useful lives. The increasing public awareness of sustainability is simultaneously driving the international community and national governments alike to accelerate the adoption of low-carbon generation methods. Complicating matters, public acceptance of nuclear energy varies widely from region to region.

Public expectations of what distributed renewable energy sources can deliver – for example, wind, photovoltaic (PV) or micro-combined heat and power (micro-CHP) – are increasing. But unlike conventional sources of generation, the output of many of these sources is not based on electricity load but on weather conditions or heat. From a system perspective, this raises new challenges for balancing supply and demand.

In addition, these new distributed generation technologies require system-dispatching tools to effectively control the low-voltage side of electrical grids. Moreover, they indirectly create a scarcity of “regulating energy” – the energy necessary for transmission operators to maintain the real-time balance of their grids. This forces the industry to try to harness the power of conventional central generation technologies, such as nuclear power, in new ways.

A European Union-funded consortium named Fenix is identifying innovative network and market services that distributed energy resources can potentially deliver, once the grid becomes “smart” enough to integrate all energy resources.

In Figure 1, the Status Quo Future represents how system development would play out under the traditional system operation paradigm characterized by today’s centralized control and passive distribution networks. The alternative, Fenix Future, represents the system capacities with distributed energy resources (DER) and demand-side generation fully integrated into system operation, under a decentralized operating paradigm.

CHALLENGES RELATED TO NETWORK OPERATIONAL SECURITY

The regulatory push toward larger trading areas is increasing the number of market participants. This trend is in turn driving the need for increased network dispatch and control capabilities. Simultaneously, grid operators are expanding their responsibilities across new and complex geographic regions. Combine these factors with an aging workforce (particularly when trying to staff strategic processes such as dispatching), and it’s easy to see why utilities are becoming increasingly dependent on information technology to automate processes that were once performed manually.

Moreover, the stochastic nature of energy sources significantly increases uncertainty regarding supply. Researchers are trying to improve the accuracy of the information captured in substations, but this requires new online dispatching stability tools. Additionally, as grid expansion remains politically controversial, current efforts are mostly focused on optimizing energy flow in existing physical assets, and on trying to feed asset data into systems calculating operational limits in real time.

Last but not least, this enables the extension of generation dispatch and congestion management into low-voltage distribution grids. Although these grids traditionally carried energy one way – from generation to transmission to end-users – the increasing penetration of distributed resources creates a new need to coordinate the dispatch of these resources locally, and to minimize transportation costs.

CHALLENGES RELATED TO PARTICIPATING DEMAND

Recent events have shown that decentralized energy markets are vulnerable to price volatility. This poses potentially significant economic threats for some nations, because large industrial companies may leave deregulated markets when they lack visibility into long-term energy price trends.

One potential solution is to improve market liquidity in the shorter term by providing end-users with incentives to conserve energy when demand exceeds supply. The growing public awareness of energy efficiency is already leading end-users to be much more receptive to using sustainable energy; many utilities are adding economic incentives to further motivate end-users.

These trends are expected to create radical shifts in transmission and distribution (T&D) investment activities. After all, traditional centralized system designs, investments and operations are based on the premise that demand is passive and uncontrollable, and that it makes no active contribution to system operations.

However, the extensive rollout of intelligent metering capabilities has the potential to reverse this, and to enable demand to respond to market signals, so that end-users can interact with system operators in real or near real time. The widening availability of smart metering thus has the potential to bring with it unprecedented levels of demand response that will completely change the way power systems are planned, developed and operated.
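A minimal sketch of the kind of demand response this enables: a customer-side controller checks a near-real-time price signal and defers flexible loads above a threshold. The device names and the threshold are assumptions for illustration.

    # Illustrative demand-response rule: defer flexible loads when the
    # real-time price crosses a customer-set threshold. Device names and
    # the threshold are invented for the example.

    PRICE_THRESHOLD = 0.30  # currency units per kWh

    FLEXIBLE_LOADS = ["water_heater", "ev_charger"]
    CRITICAL_LOADS = ["refrigerator"]

    def dispatch(price: float) -> dict:
        """Return an on/off decision per load for the current price signal."""
        decisions = {load: True for load in CRITICAL_LOADS}
        for load in FLEXIBLE_LOADS:
            decisions[load] = price <= PRICE_THRESHOLD
        return decisions

    if __name__ == "__main__":
        for price in (0.12, 0.45):
            print(price, dispatch(price))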

CHALLENGES RELATED TO REGULATION

Parallel with these changes to the physical system structure, the market and regulatory frameworks supporting energy systems are likewise evolving. Numerous energy directives have established the foundation for a decentralized electricity supply industry that spans formerly disparate markets. This evolution is changing the structure of the industry from vertically integrated, state-owned monopolies into an environment in which unbundled generation, transmission, distribution and retail organizations interact freely via competitive, accessible marketplaces to procure and sell system services and contracts for energy on an ad hoc basis.

Competition and increased market access seem to be working at the transmission level in markets where there are just a handful of large generators. However, this approach has yet to be proven at the distribution level, where it could facilitate thousands and potentially millions of participants offering energy and systems services in a truly competitive marketplace.

MEETING THE CHALLENGES

As a result, despite all the promise of distributed generation, the current decentralized system will become increasingly unstable without the corresponding development of technical, market and regulatory frameworks over the next three to five years.

System management costs are increasing, and threats to system security are a growing concern as installed distributed generating capacity in some areas exceeds local peak demand. The amount of “regulating energy” that must be provisioned rises as stress on the system increases; meanwhile, governments continue to push for distributed resource penetration and launch new energy efficiency ideas.

At the same time, most of the large T&D utilities intend to launch new smart grid prototypes that, once stabilized, will be scalable to millions of connection points. The majority of these rollouts are expected to occur between 2010 and 2012.

From a functionality standpoint, the majority of these associated challenges are related to IT system scalability. The process will require applying existing algorithms and processes to generation activities, but in an expanded and more distributed manner.

The following new functions will be required to build a smart grid infrastructure that enables all of this:

New generation dispatch. This will enable utilities to expand their portfolios of current generation-dispatching tools to schedule generation assets at both the transmission and distribution levels. Utilities could thus better manage the growing number of parameters affecting the decision, including fuel options, maintenance strategies, the generation unit’s physical capability, weather, network constraints, load models, emissions (modeling, rights, trading) and market dynamics (indices, liquidity, volatility).
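Stripped to its core, the dispatch decision ranks available units by marginal cost until demand is met. The sketch below shows a bare merit-order dispatch with invented unit data; the many other parameters listed above (network constraints, emissions, market dynamics) are deliberately omitted.

    # Bare merit-order dispatch: meet demand from the cheapest units first.
    # Unit names, costs and capacities are invented; a real dispatch tool
    # also weighs network constraints, emissions, maintenance and markets.

    UNITS = [  # (name, marginal cost per MWh, capacity MW)
        ("wind", 0.0, 300),
        ("nuclear", 12.0, 900),
        ("ccgt", 55.0, 400),
        ("peaker", 130.0, 150),
    ]

    def dispatch(demand_mw: float):
        schedule, remaining = [], demand_mw
        for name, cost, cap in sorted(UNITS, key=lambda u: u[1]):
            mw = min(cap, remaining)
            if mw > 0:
                schedule.append((name, mw))
                remaining -= mw
        if remaining > 0:
            raise RuntimeError("demand exceeds available capacity")
        return schedule

    if __name__ == "__main__":
        print(dispatch(1400))  # wind and nuclear fully loaded, CCGT partial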

Renewable and demand-side dispatching systems. By expanding current energy management systems (EMS) capability and architecture, utilities should be able to scale to include millions of active producers and consumers. Resources will be distributed in real time by energy service companies, promoting the most eco-friendly portfolio dispatch methods based on contractual arrangements between the energy service providers and these distributed producers and consumers.

Integrated online asset management systems. New technology tools that help transmission grid operators assess the condition of their overall assets in real time will not only maximize asset usage, but will also lead to better leveraging of utilities’ field forces. New standards such as IEC 61850 offer opportunities to manage such models more centrally and more consistently.

Online stability and defense plans. The increasing penetration of renewable generation into grids combined with deregulation increases the need for flow control into interconnections between several transmission system operators (TSOs). Additionally, the industry requires improved “situation awareness” tools to be installed in the control centers of utilities operating in larger geographical markets. Although conventional transmission security steady-state indicators have improved, utilities still need better early warning applications and adaptable defense plan systems.

MOVING TOWARDS A DISTRIBUTED FUTURE

As concerns about energy supply have increased worldwide, the focus on curbing demand has intensified. Regulatory bodies around the world are thus actively investigating smart meter options. But despite the benefits that smart meters promise, they also raise new challenges on the IT infrastructure side. Before each end-user is able to flexibly interact with the market and the distribution network operator, massive infrastructure re-engineering will be required.

Nonetheless, energy systems throughout the world are already evolving from a centralized to a decentralized model. But to successfully complete this transition, utilities must implement active network management through their systems to enable a responsive and manageable alignment of supply and demand. By accomplishing this, energy producers and consumers alike can better match supply and demand, and drive the world toward sustainable energy conservation.

How Intelligent Is Your Grid?

Many people in the utility industry see the intelligent grid — an electric transmission and distribution network that uses information technology to predict and adjust to network changes — as a long-term goal that utilities are still far from achieving. Energy Insights research, however, indicates that today’s grid is more intelligent than people think. In fact, utilities can begin having the network of the future today by better leveraging their existing resources and focusing on the intelligent-grid backbone.

DRIVERS FOR THE INTELLIGENT GRID

Before discussing the intelligent grid backbone, it’s important to understand the drivers directing the intelligent grid’s progress. While many groups — such as government, utilities and technology companies — may be pushing the intelligent grid forward, they are also slowing it down. Here’s how:

  • Government. With the 2005 U.S. Energy Policy Act and the more recent 2007 Energy Independence and Security Act, the federal government has acknowledged the intelligent grid’s importance and is supporting investment in the area. Furthermore, public utility commissions (PUCs) have begun supporting intelligent grid investments like smart metering. At the same time, however, PUCs have a duty to maintain reasonable prices. Since utilities have not extensively tested the benefits of some intelligent grid technologies, such as distribution line sensors, many regulators hesitate to support utilities investing in intelligent grid technologies beyond smart metering.
  • Utilities. Energy Insights research indicates that information technology, in general, enables utilities to increase operational efficiency and reduce costs. For this reason, utilities are open to information technology; however, they’re often looking for quick cost recovery and benefits. Many intelligent grid technologies provide longer-term benefits, making them difficult to cost-justify over the short term. Since utilities are risk-aware, this can make intelligent grid investments look riskier than traditional information technology investments.
  • Technology. Although advanced enough to function on the grid today, many intelligent grid technologies could become quickly outdated thanks to the rapidly developing marketplace. What’s more, the life span of many intelligent grid technologies is not as long as those of traditional grid assets. For example, a smart meter’s typical life span is about 10 to 15 years, compared with 20 to 30 years for an electro-mechanical meter.

With strong drivers and competing pressures like these, it’s not a question of whether the intelligent grid will happen but when utilities will implement new technologies. Given the challenges facing the intelligent grid, the transition will likely be more of an evolution than a revolution. As a result, utilities are making their grids more intelligent today by focusing on the basics, or the intelligent grid backbone.

THE INTELLIGENT GRID BACKBONE

What comprises this backbone? Answering this question requires a closer look at how intelligence changes the grid. Typically, a utility has good visibility into the operation of its generation and transmission infrastructure but poor visibility into its distribution network. As a result, the utility must respond to a changing distribution network based on very limited information. Furthermore, if a grid event requires attention — such as in the case of a transformer failure — people must review information, decide to act and then manually dispatch field crews. This type of approach translates to slower, less informed reactions to grid events.

The intelligent grid changes these reactions through a backbone of technologies — sensors, communication networks and advanced analytics — especially developed for distribution networks. To better understand these changes, we can imagine a scenario where a utility has an outage on its distribution network. As shown in Figure 1, additional grid sensors collect more information, making it easier to detect problems. Communications networks then allow sensors to convey the problem to the utility. Advanced analytics can efficiently process this information and determine more precisely where the fault is located, as well as automatically respond to the problem and dispatch field crews. These components not only enable faster, better-informed reactions to grid problems; they can also support real-time pricing, improve demand response and better handle distributed and renewable energy sources.
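In code, that outage scenario reduces to a short pipeline: sensor events arrive, analytics isolate the likely faulted section, and a crew dispatch is triggered. The sketch below compresses the flow into a few functions; the feeder model and isolation rule are simplified assumptions.

    # Compressed sketch of the outage flow described above: sensors report
    # loss of voltage, analytics isolate the faulted section, and a crew is
    # dispatched. The feeder model and rule are simplified assumptions.

    # Feeder sections in order from the substation to the end of the line.
    SECTIONS = ["S1", "S2", "S3", "S4"]

    def locate_fault(dead_sections):
        """The first de-energized section is the likely fault location,
        since everything downstream of a fault also loses voltage."""
        for section in SECTIONS:
            if section in dead_sections:
                return section
        return None

    def handle_sensor_events(events):
        dead = {e["section"] for e in events if e["type"] == "loss_of_voltage"}
        fault = locate_fault(dead)
        if fault:
            return {"action": "dispatch_crew", "section": fault}
        return {"action": "none"}

    if __name__ == "__main__":
        events = [{"section": "S3", "type": "loss_of_voltage"},
                  {"section": "S4", "type": "loss_of_voltage"}]
        print(handle_sensor_events(events))  # crew dispatched to S3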

A CLOSER LOOK AT BACKBONE COMPONENTS

A deeper dive into each of these intelligent grid backbone technologies reveals how utilities are gaining more intelligence about their grid today.

Network sensors are important not only for real-time operations — such as locating faults and connecting distributed energy sources to the grid — but also for providing a rich historical data source to improve asset maintenance and load research and forecasting. Today, more utilities are using sensors to better monitor their distribution networks; however, they’re focused primarily on smart meters. The reason for this is that smart meters have immediate operational benefits that make them attractive for many utilities today, including reducing meter reader costs, offering accurate billing information, providing theft control and satisfying regulatory requirements. Yet this focus on smart meters has created a monitoring gap between the transmission network and the smart meter.

A slew of sensors from companies such as General Electric, ABB, PowerSense, GridSense and Serveron are available to fill this monitoring gap. These sensors track everything from load balancing and transformer status to circuit breakers and tap changers, energized downed lines, high-impedance faults and stray voltage. Yet utilities hesitate to invest in them because they lack the immediate operational benefits of smart meters.

By monitoring this gap, however, utilities can realize longer-term grid benefits, such as a reduced need for new generation capacity. Utilities have found they can begin monitoring this gap by:

  • Prioritizing sensor investments. Customer complaints and regulatory pressure have pushed some utilities to take action for particular parts of their service territory. For example, one utility Energy Insights studied received numerous customer complaints about a particular feeder’s reliability, so the utility invested in line sensors for that area. Another utility began considering sensor investments in troubled areas of its distribution network when regulators demanded that the utility improve its System Average Interruption Frequency Index (SAIFI) and System Average Interruption Duration Index (SAIDI) rankings from the bottom 50 percent to the top 25 percent of benchmarked utilities (see the sketch after this list for how these indices are computed). By focusing on such areas, utilities can achieve “quick wins” with sensors and build confidence in deploying additional sensors on the distribution grid.
  • Realizing it’s all about compromise. Even in high-priority areas, it may not make financial sense for a utility to deploy the full range of sensors for every possible asset. In some situations, utilities may target a particular area of the service territory with a higher density of sensors. For example, a large U.S. investor-owned utility with a medium voltage-sensing program placed a high density of sensors along a specific section of its service territory. On the other hand, utilities might cover a broader area of service territory with fewer sensors, similar to the approach taken by a large investor-owned utility Energy Insights looked at that monitored only transformers across its service territory.
  • Rolling in sensors with other intelligent grid initiatives. Some utilities find ways to combine their smart metering projects with other distribution network sensors or to leverage existing investments that could support additional sensors. One utility that Energy Insights looked at installed transformer sensors along with a smart meter initiative and leveraged the communications networks it used for smart metering.
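For reference, the SAIFI and SAIDI indices cited in the first bullet are simple ratios over interruption records, per IEEE 1366. The sketch below computes both from invented data.

    # Standard reliability indices, computed from interruption records.
    # Data values are invented for illustration.
    #   SAIFI = total customer interruptions / total customers served
    #   SAIDI = total customer interruption minutes / total customers served

    TOTAL_CUSTOMERS = 50_000

    # Each record: (customers affected, outage duration in minutes)
    INTERRUPTIONS = [(1_200, 90), (300, 45), (4_000, 15)]

    def saifi(records, customers):
        return sum(n for n, _ in records) / customers

    def saidi(records, customers):
        return sum(n * minutes for n, minutes in records) / customers

    if __name__ == "__main__":
        print("SAIFI:", round(saifi(INTERRUPTIONS, TOTAL_CUSTOMERS), 3))
        print("SAIDI:", round(saidi(INTERRUPTIONS, TOTAL_CUSTOMERS), 2), "min")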

While sensors provide an important means of capturing information about the grid, communication networks are critical to moving that information throughout the intelligent grid — whether between sensors or field crews. Typically, to enable intelligent grid communications, utilities must either build new communications networks to bring intelligence to the existing grid or incorporate communication networks into new construction. Yet utilities today are also leveraging existing or recently installed communications networks to facilitate more sophisticated intelligent grid initiatives such as the following:

  • Smart metering and automated meter-reading (AMR) initiatives. With the current drive to install smart meters, many utilities are covering their distribution networks with communications infrastructure. Furthermore, existing AMR deployments may include communications networks that can bring data back to the utility. Some utilities are taking advantage of these networks to begin plugging other sensors into their distribution networks.
  • Mobile workforce. The deployment of mobile technologies for field crews is another hot area for utilities right now. Utilities are deploying cellular networks for field crew voice and data communications. Although utilities have typically been hesitant to work with third-party communications providers, they’ve become more comfortable with outside providers after using them for their mobile technologies. Since most of the cellular networks can provide data coverage as well, some utilities are beginning to use these providers to transmit sensor information across their distribution networks.

Since smart metering and mobile communications networks are already in place, the incremental cost of installing sensors on these networks is relatively low. The key is making sure that different sensors and components can plug into these networks easily (for example, using a standard communications protocol).

The last key piece of the intelligent grid backbone is advanced analytics. Utilities must make quick decisions every day to maintain a safe and reliable grid, and the key to making such decisions is being well informed. Intelligent grid analytics can help utilities quickly process large amounts of sensor data so that they can make those informed decisions. How quickly a decision needs to be made, however, depends on the situation. Intelligent grid analytics assist with two types of decisions: very quick decisions (veQuids) and quick decisions (Quids). Computers and intelligent devices make veQuids in milliseconds by analyzing complex, real-time data – an intelligent grid vision that’s still in the future for most utilities.

Fortunately, many proactive decisions about the grid don’t have to be made in milliseconds. Many utilities today can make Quids — often manual decisions — to predict and adjust to network changes within a time frame of minutes, days or even months.

No matter how quick the decision, however, all predictive efforts are based on access to good-quality data. In putting their Quid capabilities to use today — in particular for predictive maintenance and smart metering — utilities are building not only intelligence about their grids but also a foundation for providing more advanced veQuids analytics in the future through the following:

  • The information foundation. Smart metering and predictive maintenance require utilities to collect not only more data but also more real-time data. Smart metering also helps break down barriers between retail and operational data sources, which in turn creates better visibility across many data sources to provide a better understanding of a complex grid.
  • The automation transition. To make the leap between Quids and veQuids requires more than just better access to more information — it also requires automation. While fully automated decision-making is still a thing of the future, many utilities are taking steps to compile and display data automatically as well as do some basic analysis, using dashboards from providers such as OSIsoft and Obvient Strategies to display high-level information customized for individual users. The user then further analyzes the data, and makes decisions and takes action based on that analysis. Many utilities today use the dashboard model to monitor critical assets based on both real-time and historical data, as in the sketch following this list.
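A minimal sketch of that Quid-style dashboard pattern: compile asset readings, apply a basic threshold analysis, and flag exceptions for an operator to review. Asset names and the temperature limit are invented assumptions.

    # Quid-style sketch: automatically compile asset readings and flag
    # exceptions for an operator's dashboard; the human still decides.
    # Asset names and the temperature limit are invented assumptions.

    TRANSFORMER_TEMP_LIMIT_C = 95.0

    READINGS = [
        {"asset": "TX-101", "top_oil_temp_c": 71.0},
        {"asset": "TX-102", "top_oil_temp_c": 98.5},
    ]

    def build_dashboard(readings):
        """Return each asset with a display status, hottest first."""
        rows = []
        for r in readings:
            over = r["top_oil_temp_c"] > TRANSFORMER_TEMP_LIMIT_C
            rows.append({**r, "status": "REVIEW" if over else "OK"})
        return sorted(rows, key=lambda r: r["top_oil_temp_c"], reverse=True)

    if __name__ == "__main__":
        for row in build_dashboard(READINGS):
            print(row)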

ENSURING A MORE INTELLIGENT GRID TODAY AND TOMORROW

As these backbone components show, utilities already have some intelligence on their grids. Now they’re building on that intelligence by leveraging existing infrastructure and resources — whether it’s voice communications providers for data transmission or Quid resources to build a foundation for the veQuids of tomorrow. In particular, utilities need to look at:

  • Scalability. Utilities need to make sure that whatever technologies they put on the grid today can grow to accommodate larger portions of the grid in the future.
  • Flexibility. Given rapid technology changes in the marketplace, utilities need to make sure their technology is flexible and adaptable. For example, utilities should consider smart meters that have the ability to change out communications cards to allow for new technologies.
  • Integration. Due to the evolutionary nature of the grid, and with so many intelligent grid components that must work together (intelligent sensors at substations, transformers and power lines; smart meters; and distributed and renewable energy sources), utilities need to make sure these disparate components can work with one another. Utilities need to consider how to introduce more flexibility into their intelligent grids to accommodate the increasingly complex network of devices.

As today’s utilities employ targeted efforts to build intelligence about the grid, they must keep in mind that whatever action they take today – no matter how small – must ultimately help them meet the demands of tomorrow.

Collaborative Policy Making And the Smart Grid

A search on Google for the keywords smart grid returns millions of results. A list of organizations talking about or working on smart grid initiatives would likely yield similar numbers. Though the observation is made half in jest, it illustrates the proliferation of groups interested in redesigning and rebuilding the varied power infrastructure to support the future economy. Since building a smart infrastructure is clearly in the public’s interest, it’s important that all affected stakeholders – from utilities and legislators to consumers and regulators – participate in creating the vision, policies and framework for these critical investments.

One organization, the GridWise Alliance, was formed specifically to promote a broad collaborative effort for all interest groups shaping this agenda. Representing a consortium of more than 60 public organizations and private companies, GridWise Alliance members are aligned around a shared vision of a transformed and modern electric system that integrates infrastructure, processes, devices, information and market structure so that energy can be generated, distributed and consumed more reliably and cost-effectively.

Since its creation in 2003, the GridWise Alliance has focused on the federal legislative process to ensure that smart grid programs and policies were included in the priorities of the various federal agencies. The Alliance continues to focus on articulating to elected officials, public policy agencies and the private sector the urgent need to build a smarter 21st-century utility infrastructure. Last year, the Alliance provided significant input into the development of smart grid legislation, which was passed by both houses of Congress and signed into law by the President at the end of 2007. The Alliance has evolved to become one of the “go-to” parties for members of Congress and their staffs as they prepare new legislation aimed at the transformation to a modern, intelligent electricity grid.

The Alliance continues to demonstrate its effectiveness in various ways. The chair of the Alliance, Guido Bartels, and representatives from seven other Alliance member companies were recently named to the U.S. Department of Energy’s Electricity Advisory Committee (EAC). This organization is being established to “enhance leadership in electricity delivery modernization and provide senior-level counsel to DOE on ways that the nation can meet the many barriers to moving forward, including the deployment of smart grid technologies.” Another major area of focus is the national GridWeek conference. This year’s GridWeek 2008 is focused on “Delivering Sustainable Energy.” The Alliance expects more than 800 participants to discuss and debate topics such as Enabling Energy Efficiency, Smart Grid in a Carbon Economy and Securing the Smart Grid.

Going forward, the Alliance will expand its reach by continuing to broaden its membership and by working with other U.S. stakeholder organizations to provide a richer understanding of the value and impacts of a smart grid. The Alliance is already working with organizations such as the NARUC-FERC Smart Grid Collaborative, the National Conference of State Legislatures (NCSL), the National Governors’ Association (NGA), the American Public Power Association (APPA) and others. Beyond U.S. borders, the Alliance will continue to strengthen its relations with other smart grid organizations, like those in the European Union and Australia, to ensure that we’re gaining insight and best practices from other markets.

Collaboration such as that exemplified by the Alliance is critical for making effective and impactful public policy. The future of our nation’s electricity infrastructure, economy and, ultimately, health and safety depends on the leadership of organizations such as the GridWise Alliance.

Policy and Regulatory Initiatives And the Smart Grid

Public policy is commonly defined as a plan of action designed to guide decisions for achieving a targeted outcome. In the case of smart grids, new policies are needed if smart grids are actually to become a reality. This statement may sound dire, given the recent signing into law of the 2007 Energy Independence and Security Act (EISA) in the United States. And in fact, work is underway in several countries to encourage smart grids and smart grid components such as smart metering. However, the risk still exists that unless stronger policies are enacted, grid modernization investments will fail to leverage the newer and better technologies now emerging, and smart grid efforts will never move beyond demonstration projects. This would be an unfortunate result when you consider the many benefits of a true smart grid: cost savings for the utility, reduced bills for customers, improved reliability and better environmental stewardship.

REGIONAL AND NATIONAL EFFORTS

As mentioned above, several regions are experimenting with smart grid provisions. At the national level, the U.S. federal government has enacted two pieces of legislation that support advanced metering and smart grids. The Energy Policy Act of 2005 directed U.S. utility regulators to consider time-of-use meters for their states. The 2007 EISA legislation has several provisions, including a list of smart grid goals to encourage two-way, real-time digital networks that stretch from a consumer’s home to the distribution network. The law also provides monies for regional demonstration projects and matching grants for smart grid investments. The EISA legislation also mandates the development of an “interoperability framework.”

In Europe, the European Union (E.U.) introduced a strategic energy technology plan in 2006 for the development of a smart electricity system over the next 30 years. The European Technology Platform organization includes representatives from industry, transmission and distribution system operators, research bodies and regulators. The organization has identified objectives and proposes a strategy to make the smart grid vision a reality.

Regionally, several U.S. states and Canadian provinces are focused on smart grid investments. In Canada, the Ontario Energy Board has mandated smart meters, with meter installation completion anticipated by 2010. In Texas, the Public Utilities Commission of Texas (PUCT) has finalized advanced metering legislation that authorizes metering cost recovery through surcharges. The PUCT also stipulated key components of an advanced metering system: two-way communications, time-date stamp, remote connect/disconnect, and access to consumer usage for both the consumer and the retail energy provider. The Massachusetts State Senate approved an energy bill that includes smart grid and time-of-use pricing. The bill requires that utilities submit a plan by Sept. 1, 2008, to the Massachusetts Public Utilities Commission, establishing a six-month pilot program for a smart grid. Most recently, California, Washington state and Maryland all introduced smart grid legislation.

AN ENCOMPASSING VISION

While these national and regional examples represent just a portion of the ongoing activity in this area, the issue remains that smart grid and advanced metering pilot programs do not guarantee a truly integrated, interoperable, scalable smart grid. Granted, a smart grid is not achieved overnight, but an encompassing smart grid vision should be in place as modernization and metering decisions are made, so that investments remain consistent with the overall plan. Obviously, challenges – such as financing, system integration and customer education – exist in moving from pilot to full grid deployment. However, many utility and regulatory personnel perceive these challenges to be ones of cost and technology readiness.

The costs are considerable. KEMA, the global energy consulting firm, estimates that the average cost of a smart meter project (representing just a portion of a smart grid project) is $775 million. The E.U.’s Strategic Energy Technology Plan estimates that the total smart grid investment required could be as much as $750 billion. These amounts are staggering when you consider that the market capitalization of all U.S. investor-owned electric utilities is roughly $550 billion. However, they are far less daunting once you net out what would be spent fixing the grid through business-as-usual methods: transmission and distribution expenditures are occurring with or without added intelligence. The Energy Information Administration (EIA) estimates that between now and 2020 more than $200 billion will be spent to maintain and expand electricity transmission and distribution infrastructures in the United States alone.

Technology readiness will always be a concern in large system projects. Advances are being made in communication, sensor and security technologies, and IT. The Federal Communications Commission is pushing for auctions to accelerate adoption of different communication protocols. Price points are decreasing for pervasive cellular communication networks. Electric power equipment manufacturers are utilizing the new IEC 61850 standard to ensure interoperability among sensor devices. Vendors are developing security products that will enable North American Electric Reliability Corp. (NERC) critical infrastructure protection (CIP) compliance.

In addition, IT providers are using event-driven architecture to ensure responsiveness to external events, rather than merely processing transactions, and are reaching new levels with high-speed computer analytics. Leading service-oriented architecture companies are working with utilities to establish the underlying infrastructure critical to system integration. Finally, work is occurring in the standards community by the E.U., the GridWise Architecture Council (GAC), IntelliGrid, the National Energy Technology Laboratory (NETL) and others to create frameworks for linking communication and electricity interoperability among devices, systems and data flows.

THE TIME IS NOW

These challenges should not halt progress – especially when one considers the societal benefits. Time stops for no one, and certainly in the case of the energy sector that statement could not be more accurate. Energy demand is increasing. The Energy Information Administration estimates that annual energy demand will increase roughly 50 percent over the next 25 years. Meanwhile, the debate over global warming seems to have waned. Few authorities are disputing the escalating concentrations of several greenhouse gases due to the burning of fossil fuels. The E.U. is attempting to decrease emissions through its 2006 Energy Efficiency Directive. Many industry observers in the United States believe that there will likely be federal regulation of greenhouse gases within the next three years.

A smart grid would address many of these issues by giving consumers options to manage their usage and costs. By optimizing asset utilization, the smart grid reduces the need to build more power plants to meet increased electricity demand. And as a self-healing grid that detects, responds and restores, the smart grid can greatly reduce the economic impact of blackouts and power interruptions.

A smart grid that provides the needed power quality can ensure the strong and resilient energy infrastructure necessary for the 21st-century economy. Lastly, a smart grid will enable plug-and-play integration of renewables, distributed resources and control systems.

INCENTIVES FOR MODERNIZATION

Despite all of these potential benefits, more incentives are needed to drive grid modernization efforts. Several mechanisms are available to encourage investment. Some utilities are already using or evaluating alternative rate structures, such as net metering and revenue decoupling, that give utilities and consumers incentives to use less energy. Net metering awards energy incentives or credits for consumer-based renewables, while revenue decoupling is a mechanism designed to eliminate or reduce the dependence of a utility’s revenues on sales. Other programs – such as energy-efficiency or demand-reduction incentives – motivate consumers and businesses to adopt long-term energy-efficient behaviors (such as using programmable thermostats) and to consider energy efficiency when using appliances and computers, and even when operating their homes.

Policy and regulatory strategy should incorporate these means and include others, such as accelerated depreciation and tax incentives. Accelerated depreciation encourages businesses to purchase new assets, since depreciation is steeper in the earlier years of an asset’s life and taxes are deferred to a later period. Tax incentives could be put in place for purchasing smart grid components. Utility commissions could require utilities to consider all societal benefits, rather than just rate impacts, when crafting the business case. Utilities could take federal income tax credits for the investments. And leaders could include smart grid technologies as a critical component of their overall energy policy.

Only when all of these policies and incentives are put in place will smart grids truly become a reality.

Madison Gas and Electric Markets Online Billing and Achieves Unexpected Results

As a renewable energy leader, Madison Gas and Electric (MGE) was seeking ways to better serve its customers while reducing the costs and environmental effects associated with customer billings. The dilemma was how best to convince customers to transition from paper-based to electronic delivery and payment of their bills, while maintaining or improving customer satisfaction.

The overarching goal was twofold: to encourage current online bill payers to stop receiving a paper bill, and to persuade non-online customers to try electronic billing. In addition, any paperless adoption effort needed to be simple to conduct and not require a large capital investment.

The Solution

After conducting market research in the MGE service area, the company worked with CheckFree to establish a targeted marketing campaign to support paperless bill adoption goals and promote paperless bill payment to customers. The goal was to achieve a three percent electronic bill-pay adoption rate in the first year of the campaign and increase the adoption rate two percent each year for the next three years.
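
Read one way, those targets compound simply. Here is a minimal sketch of the arithmetic, assuming (as an interpretation, not a statement from the case study) that “two percent each year” means two percentage points added to the adoption rate annually:

    # Illustrative projection of MGE's adoption targets, assuming
    # "two percent each year" means two percentage points added annually.
    first_year_rate = 3.0      # percent adoption targeted in year 1
    annual_increase = 2.0      # assumed percentage-point increase per year
    for year in range(1, 5):   # year 1 plus the next three years
        target = first_year_rate + annual_increase * (year - 1)
        print(f"Year {year} target: {target:.0f}% of customers on e-bill pay")

Under that reading, the campaign targets 3, 5, 7 and 9 percent adoption across its first four years.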

CheckFree provided detailed industry research and recommended early in the campaign that MGE market the benefits of electronic bill payment throughout the bill package: the envelope, return envelope, statement and inserts. Additionally, MGE leveraged other media outlets to promote its electronic billing offering. The company developed targeted television ads, placed print and newspaper advertisements, and communicated the “paperless” message in monthly customer newsletters. Each method effectively augmented the core bill package campaign and helped maintain message consistency.

For those “early adopter” customers who had signed up for online billing when the service was first introduced, MGE did not automatically eliminate paper bills for the first eighteen months. As MGE learned more about why customers participated, the decision was made to eliminate paper bills. They re-contacted the “early adopter” customers and reminded them of the benefits of electronic bill payment. Once an “early adopter” signed up for permanent online billing, they received one final paper bill.

The Results

The CheckFree online billing and payments solution, along with MGE’s targeted paper suppression campaign, has resulted in one of the highest penetration rates in the utility industry: a 29.1 percent online billing penetration rate versus the industry average of approximately 10 percent. The program has resulted in:

  • A popular and cost-effective EBP solution with minimal capital investment or IT expense
  • Easy integration with MGE’s existing website
  • Reduction in the costs associated with paper bill handling, payment processing and environmental impacts
  • Faster payment processing and receipt of funds
  • Positive perception of MGE as an environmentally responsible community partner

The homepage link to MGE’s Online Bill Payment page is the top requested link on the company’s website. MGE’s aggressive approach to suppress paper in the billing cycle yielded both savings and consumer goodwill.


Company Realizes 65 percent Increase in E-Bill Adoption Rate with Comprehensive “Green” Marketing Campaign

Consolidated Edison Company of New York (Con Edison) is a regulated utility serving 3.2 million electric customers in New York City and Westchester County. The company recognized that it could realize significant cost savings if more customers adopted electronic billing, in which bills are delivered electronically without a paper version, eliminating the printing, postage, labor and equipment costs associated with paper billing.

In addition to operational cost savings, further positive results could be gained from driving e-bill adoption, including improved customer relationships and fewer billing-related service calls. According to a Harris Interactive study conducted for CheckFree Research Services, customers who receive e-bills at a biller organization’s website show higher satisfaction levels, with 30 percent of them reporting an improved relationship with their biller as a result of receiving e-bills.

The challenge was how to attract more customers to the low-cost, high-impact online channel for billing activities and persuade them to turn off their paper bills. To convince customers to change their behavior, Con Edison had to find a way to cost-effectively generate widespread awareness of electronic billing and explain how its benefits – such as saving time, reducing clutter and helping the environment – outweigh any concerns customers may have about giving up their paper bills.

The Advantages of Electronic Billing

Together, Con Edison and CheckFree developed a comprehensive marketing campaign designed to communicate the advantages of electronic billing to as many customers as possible. As a critical first step, Con Edison gained cross-organizational alignment regarding the campaign strategy. Drawing from a long-standing commitment to the environment, the company made a strategic decision to implement an ongoing campaign that conveyed a “go green with e-bills” message across numerous channels in order to maximize reach within its customer base. Research has shown that when attempting to change consumer behavior, a comprehensive, consistent and widespread marketing campaign is far more effective than “one-off” campaigns using only minimal tactics.

In May 2007, Con Edison launched the integrated marketing campaign, capitalizing on the wave of consumer awareness of environmental issues. Con Edison promoted paperless billing and electronic payment through a variety of methods and channels, including an animated ad on the 100-foot Yankee Stadium scoreboard, customer promotions for popular technology products and ENERGY STAR® appliances, and:

  • Customer e-mails
  • Direct mail postcards
  • On-hold messaging
  • Radio advertising
  • Invoice messaging
  • Press releases
  • Con Edison website messaging
  • My CheckFree® website messaging
  • Customer newsletters
  • Internal employee newsletters

Each communication featured the company’s environmental incentive – for every customer choosing the paper-saving option of viewing and paying their bill online, Con Edison would donate $1 to a local, non-profit tree-planting fund to help the environment in New York.

To aid in driving awareness, Con Edison made a deliberate decision to create an extended campaign designed to consistently reinforce the safety, security, simplicity and environmental benefits of electronic billing. Based on the success of the marketing activities seen thus far, Con Edison plans to include the “go green with e-bills” theme in every consumer communication going forward.

The Results

Con Edison showed persistence and enthusiasm in pursuing a multi-channel marketing campaign, and it was well worth the effort. In the first year since the campaign was launched, Con Edison generated impressive results:

  • More than 75,000 e-bills activated
  • 65 percent increase in e-bill activations over the prior twelve months
  • 20 percent increase in online e-bill payments over the prior twelve months

Con Edison has also benefited from the positive press and goodwill it has created in the community. By providing its customers with a better, more environmentally friendly choice for paying and receiving their utility bills, Con Edison is minimizing costs, maintaining operational control, optimizing growth for its business and turning customer interactions into profitable relationships.

Utility Mergers and Acquisitions: Beating the Odds

Merger and acquisition activity in the U.S. electric utility industry has increased following the 2005 repeal of the Public Utility Holding Company Act (PUHCA). A key question for the industry is not whether M&A will continue, but whether utility executives are prepared to effectively manage the complex regulatory challenges that have evolved.

M&A activity is (and always has been) the most potent, visible and (often) irreversible option available to utility CEOs who wish to reshape their portfolios and meet their shareholders’ expectations for returns. However, M&A has too often been applied reflexively – much like the hammer that sees everything as a nail.

The American utility industry is likely to undergo significant consolidation over the next five years. There are several compelling rationales for consolidation. First, M&A has the potential to offer real economic value. Second, capital-market and competitive pressures favor larger companies. Third, the changing regulatory landscape favors larger entities with the balance sheet depth to weather the uncertainties on the horizon.

LEARNING FROM THE PAST

Historically, however, acquirers have found it difficult to derive value from merged utilities. With the exception of some vertically integrated deals, most M&A deals have been value-neutral or value-diluting. This track record can be explained by a combination of factors: steep acquisition premiums, harsh regulatory givebacks, anemic cost reduction targets and (in more than half of the deals) a failure to achieve targets quickly enough to make a difference. In fact, over an eight-year period, less than half the utility mergers actually met or exceeded the announced cost reduction levels resulting from the synergies of the merged utilities (Figure 1).

The lessons learned from these transactions can be summarized as follows: Don’t overpay; negotiate a good regulatory deal; aim high on synergies; and deliver on them.

In trying to deliver value-creating deals, CEOs often bump up against the following realities:

  • The need to win approval from the target’s shareholders drives up acquisition premiums.
  • The need to receive regulatory approval for the deal and to alleviate organizational uncertainty leads to compromises.
  • Conservative estimates of the cost reductions resulting from synergies are made to reduce the risk of giving away too much in regulatory negotiations.
  • Delivering on synergies proves tougher than anticipated because of restrictions agreed to in regulatory deals or because of the organizational inertia that builds up during the 12- to 18-month approval process.

LOOKING AT PERFORMANCE

Total shareholder return (TSR) is significantly affected by two external deal negotiation levers – acquisition premiums and regulatory givebacks – and two internal levers – synergies estimated and synergies delivered. Between 1997 and 2004, mergers in all U.S. industries created an average TSR of 2 to 3 percent relative to the market index two years after closing. In contrast, utility mergers typically underperformed the utility index by about 2 to 3 percent three years after the transaction announcement. T&D mergers underperformed the index by about 4 percent, whereas mergers of vertically integrated utilities beat the index by about 1 percent three years after the announcement (Figure 2).

For 10 recent mergers, the lower the share of the merger savings retained by the utilities and the higher the premium paid for the acquisition, the greater the likelihood that the deal destroyed shareholder value, resulting in negative TSR.
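
To see how these two levers interact, consider a stylized calculation. All of the figures below are hypothetical illustrations, not numbers drawn from the deals studied:

    # Stylized trade-off between acquisition premium and retained synergies.
    # All figures are hypothetical, in millions of dollars.
    target_value = 10_000    # standalone market value of the target
    premium_pct = 0.25       # premium paid over standalone value
    synergies_pv = 2_000     # present value of expected merger synergies
    share_retained = 0.40    # share of synergies kept after regulatory givebacks

    premium_paid = target_value * premium_pct
    synergies_kept = synergies_pv * share_retained
    net_value = synergies_kept - premium_paid
    print(f"Premium paid:   ${premium_paid:,.0f}M")
    print(f"Synergies kept: ${synergies_kept:,.0f}M")
    print(f"Net value to acquirer: ${net_value:,.0f}M")  # negative means value destroyed

In this example the acquirer pays $2,500 million of premium for $800 million of retained synergies; unless the premium falls or synergy retention rises, the deal destroys value.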

Although these appear to be obvious pitfalls that a seasoned management team should be able to recognize and overcome, translating this knowledge into tangible actions and results has been difficult.

So how can utility boards and executives avoid being trapped in a cycle of doing the same thing again and again while expecting different results (Einstein’s definition of insanity)? We suggest that a disciplined end-to-end M&A approach will (if well-executed) tilt the balance in the acquirer’s favor and generate long-term shareholder value. That approach should include the four following broad objectives:

  • Establishment of compelling strategic logic and rationale for the deal;
  • A carefully managed regulatory approval process;
  • Integration that takes place early and aggressively; and
  • A top-down approach for designing realistic but ambitious economic targets.

GETTING IT RIGHT: FOUR BROAD OBJECTIVES THAT ENHANCE M&A VALUE CREATION

To complete successful M&As, utilities must develop a more disciplined approach that incorporates the lessons learned from both utilities and other industrial sectors. At the highest level, adopting a framework with four broad objectives will enhance value creation before the announcement of the deal and through post-merger integration. To do this, utilities must:

1. Establish a compelling strategic logic and rationale for the deal. A critical first step is asking the question, why do the merger? To answer this question, deal participants must:

  • Determine the strategic logic for long-term value creation with and without M&A. Too often, executives are optimistic about the opportunity to improve other utilities, but they overlook the performance potential in their current portfolio. For example, without M&A, a utility might be able to invest and grow its rate base, reduce the cost of operations and maintenance, optimize power generation and assets, explore more aggressive rate increases and changes to the regulatory framework, and develop the potential for growth in an unregulated environment. Regardless of whether a utility is an acquirer or a target, a quick (yet comprehensive) assessment will provide a clear perspective on potential shareholder returns (and risks) with and without M&A.
  • Conduct a value-oriented assessment of the target. Utility executives typically have an intuitive feel for the status of potential M&A targets adjacent to their service territories and in the broader subregion. However, when considering M&A, they should go beyond the obvious criteria (size and geography) and candidates (contiguous regional players) to consider specific elements that expose the target’s value potential for the acquirer. Such value drivers could include an enhanced power generation and asset mix, improvements in plant availability and performance, better cost structures, an ability to respond to the regulatory environment, and a positive organizational and cultural fit. Also critical to the assessment are the noneconomic aspects of the deal, such as headquarters sharing, potential loss of key personnel and potential paralysis of the company (for example, when a merger or acquisition freezes a company’s ability to pursue M&A and other large initiatives for two years).
  • Assess internal appetites and capabilities for M&A. Successful M&A requires a broad commitment from the executive team, enough capable people for diligence and integration, and an appetite for making the tough decisions essential to achieving aggressive targets. Acquirers should hold pragmatic executive-level discussions with potential targets to investigate such aspects as cultural fit and congruence of vision. Utility executives should conduct an honest assessment of their own management teams’ M&A capabilities and depth of talent and commitment. Among historic M&A deals, those that involved fewer than three states and those in which the acquirer was twice as big as the target were easier to complete and realized more value.

2. Carefully manage the regulatory approval process. State regulatory approvals present the largest uncertainty and risk in utility M&A, clearly affecting the economics of any deal. However, too often, these discussions start and end with rate reductions so that the utility can secure approvals. The regulatory approval process should be similar to the rigorous due diligence that’s performed before the deal’s announcement. This means that when considering M&A, utilities should:

  • Consider regulatory benefits beyond the typical rate reductions. The regulatory approval process can be used to create many benefits that share rewards and risks, and to provide advantages tailored to the specific merger’s conditions. Such benefits include a stronger combined balance sheet and a potential equity infusion into the target’s subsidiaries; an ability to better manage and hedge a larger combined fuel portfolio; the capacity to improve customer satisfaction; a commitment to specific rate-based investment levels; and a dedication to relieving customer liability on pending litigation. For example, to respond to regulatory policies that mandate reduced emissions, merged companies can benefit not only from larger balance sheets but also from equity infusions to invest in new technology or proven technologies. Merged entities are also afforded the opportunity to leverage combined emissions reduction portfolios.
  • Systematically price out a full range of regulatory benefits. The range should include the timing of “gives” (that is, the sharing of synergy gains with customers in the form of lower rates) as a key value lever; dedicated valuations of potential plans and sensitivities from all stakeholders’ perspectives; and a determination of the features most valued by regulators so that they can be included in a strategy for getting M&A approvals. Executives should be wary of settlements tied to performance metrics that are vaguely defined or inadequately tracked. They should also avoid deals that require new state-level legislation, because too much time will be required to negotiate and close these complex deals. Finally, executives should be wary of plans that put shareholder benefits at the end of the process, because current PUC decisions may not bind future ones. (The brief calculation after this list illustrates why the timing of gives matters.)
  • Be prepared to walk away if the settlement conditions imposed by the regulators dilute the economics of the deal. This contingency plan requires that participating executives agree on the economic and timing triggers that could lead to an unattractive deal.
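
As a minimal sketch of why the timing of gives is a value lever, the hypothetical cash flows below compare rate reductions that begin immediately with reductions deferred to year 3; every number, including the discount rate, is an illustrative assumption:

    # Hypothetical: $100M/yr of synergies for 5 years; regulators receive
    # $40M/yr in rate reductions either from year 1 or starting in year 3.
    def npv(cash_flows, rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

    rate = 0.08
    synergies   = [100, 100, 100, 100, 100]
    gives_now   = [40, 40, 40, 40, 40]
    gives_later = [0, 0, 40, 40, 40]
    for label, gives in [("gives from year 1", gives_now),
                         ("gives from year 3", gives_later)]:
        net = [s - g for s, g in zip(synergies, gives)]
        print(f"Acquirer NPV, {label}: ${npv(net, rate):,.1f}M")

Deferring the start of the giveback leaves more of the synergy stream with shareholders in present-value terms, which is why the timing of gives is negotiated so hard.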

3. Integrate early and aggressively. Historically, utility transactions have taken an average of 15 months from announcement to closing, given the required regulatory approvals. With such a lengthy time lag, it’s been easy for executives to fall into the trap of putting off important decisions related to the integration and post-merger organization. This delay often leads to organizational inertia as employees in the companies dig in their heels on key issues and decisions rather than begin to work together. To avoid such inertia, early momentum in the integration effort, embodied in the steps outlined below, is critical.

  • Announce the executive team’s organization early on. Optimally, announcements should be made within the first 90 days, and three or four well-structured senior-management workshops with the two CEOs and key executives should occur within the first two months. The decisions announced should be based on such considerations as the specific business unit and organizational options, available leadership talent and alignment with synergy targets by area.
  • Make top-down decisions about integration approach according to business and function. Many utility mergers appear to adopt a “template” approach to integration that leads to a false sense of comfort regarding the process. Instead, managers should segment decision making for each business unit and function. For example, when the acquirer has a best-practice model for fossil operations, the target’s plants and organization should simply be absorbed into the acquirer’s model. When both companies have strong practices, a more careful integration will be required. And when both companies need to transform a particular function, the integration approach should be tailored to achieve a change in collective performance.
  • Set clear guidelines and expectations for the integration. A critical part of jump-starting the integration process is appointing an integration officer with true decision-making authority, and articulating the guidelines that will serve as a road map for the integration teams. These guidelines should clearly describe the roles of the corporation and individual operating teams, as well as provide specific directions about control and organizational layers and review and approval mechanisms for major decisions.
  • Systematically address legal and organizational bottlenecks. The integration’s progress can be impeded by legal or organizational constraints on the sharing of sensitive information. In such situations, significant progress can be achieved by using clean teams – neutral people who haven’t worked in the area before – to ensure data is exchanged and sanitized analytical results are shared. Improved information sharing can aid executive-level decision making when it comes to commercially sensitive areas such as commercial marketing-and-trading portfolios, performance improvements, and other unregulated business-planning and organizational decisions.

4. Use a top-down approach to design realistic but ambitious economic targets. Synergies from utility mergers have short shelf lives. With limits on a post-merger rate freeze or rate-case filing, the time to achieve the targets is short. To achieve their economic targets, merged utilities should:

  • Construct the top five to 10 synergy initiatives to capture value and translate them into road maps with milestones and accountabilities. Identifying and promoting clear targets early in the integration effort lead to a focus on the merger’s synergy goals.
  • Identify the links between synergy outcomes and organizational decisions early on, and manage those decisions from the top. Such top-down decisions should specify which business units or functional areas are to be consolidated. Integration teams often become gridlocked over such decisions because of conflicts of interest and a lack of objectivity.
  • Control the human resources policies related to the merger. Important top-down decisions include retention and severance packages and the appointment process. Alternative severance, retirement and retention plans should be priced explicitly to ensure a tight yet fair balance between the plans’ costs and benefits.
  • Exploit the merger to create opportunities for significant reductions in the acquirer’s cost base. Typical merger processes tend to focus on reductions in the target’s cost base. However, in many cases the acquirer’s cost base can also be reduced. Such reductions can be a significant source of value, making the difference between success and failure. They also communicate to the target’s employees that the playing field is level.
  • Avoid the tendency to declare victory too soon. Most synergies are related to standardization and rationalization of practices, consolidation of line functions and optimization of processes and systems. These initiatives require discipline in tracking progress against key milestones and cost targets. They also require a tough-minded assessment of red flags and cost increases over a sustained time frame – often two to three years after the closing.

RECOMMENDATIONS: A DISCIPLINED PROCESS IS KEY

Despite the inherent difficulties, M&A should remain a strategic option for most utilities. If they can avoid the pitfalls of previous rounds of mergers, executives have an opportunity to create shareholder value, but a disciplined and comprehensive approach to both the M&A process and the subsequent integration is essential.

Such an approach begins with executives who insist on a clear rationale for value creation with and without M&A. Their teams must make pragmatic assessments of a deal’s economics relative to its potential for improving base business. If they determine the deal has a strong rationale, they must then orchestrate a regulatory process that considers broad options beyond rate reductions. Having the discipline to walk away if the settlement conditions dilute the deal’s economics is a key part of this process. A disciplined approach also requires that an aggressive integration effort begin as soon as the deal has been announced – an effort that entails a modular approach with clear, fast, top-down decisions on critical issues. Finally, a disciplined process requires relentless follow-through by executives if the deal is to achieve ambitious yet realistic synergy targets.

Weathering the Perfect Storm

A “perfect storm” of daunting proportions is bearing down on utility companies: assets are aging; the workforce is aging; and legacy information technology (IT) systems are becoming an impediment to efficiency improvements. This article suggests a three-pronged strategy to meet the challenges posed by this triple threat. By implementing best practices in the areas of business process management (BPM), system consolidation and IT service management (ITSM), utilities can operate more efficiently and profitably while addressing their aging infrastructure and staff.

BUSINESS PROCESS MANAGEMENT

In a recent speech before the Utilities Technology Conference, the CIO of one of North America’s largest integrated gas and electric utilities commented that “information technology is a key to future growth and will provide us with a sustainable competitive advantage.” The quest by utilities to improve shareholder and customer satisfaction has led many CIOs to reach this same conclusion: nearly all of their efforts to reduce the costs of managing assets depend on information management.

Echoing this observation, a survey of utility CIOs showed that the top business issue in the industry was the need to improve business process management (BPM).[1] It’s easy to see why.

BPM enables utilities to capture, propagate and evolve asset management best practices while maintaining alignment between work processes and business goals. For most companies, the standardized business processes associated with BPM drive work and asset management activities and bring a host of competitive advantages, including improvements in risk management, revenue generation and customer satisfaction. Standardized business processes also allow management to more successfully implement business transformation in an environment that may include workers acquired in a merger, workers nearing retirement and new workers of any age.

BPM also helps enforce a desirable culture change by creating an adaptive enterprise where agility, flexibility and top-to-bottom alignment of work processes with business goals drive the utility’s operations. These work processes need to be flexible so management can quickly respond to the next bump in the competitive landscape. Using standard work processes drives desired behavior across the organization while promoting the capture of asset-related knowledge held by many long-term employees.

Utility executives also depend on technology-based BPM to improve processes for managing assets. This allows them to reduce staffing levels without affecting worker safety, system reliability or customer satisfaction. These processes, when standardized and enforced, result in common work practices throughout the organization, regardless of region or business unit. BPM can thus yield an integrated set of applications that can be deployed in a pragmatic manner to improve work processes, meet regulatory requirements and reduce total cost of ownership (TCO) of assets.

BPM Capabilities

Although the terms business process management and work flow are often used synonymously – and are indeed related – they refer to distinctly different things. BPM is a strategic activity undertaken by an organization looking to standardize and optimize business processes, whereas work flow refers to IT solutions that automate processes – for example, solutions that support the execution phase of BPM.

There are a number of core BPM capabilities that, although individually important, are even more powerful than the sum of their parts when leveraged together. Combined, they provide a powerful solution to standardize, execute, enforce, test and continuously improve asset management business processes. These capabilities, illustrated in the brief sketch that follows the list, include:

  • Support for local process variations within a common process model;
  • Visual design tools;
  • Revision management of process definitions;
  • Web services interaction with other solutions;
  • XML-based process and escalation definitions;
  • Event-driven user interface interactions;
  • Component-based definition of processes and subprocesses; and
  • Single engine supporting push-based (work flow) and polling-based (escalation) processes.
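
As a minimal, hypothetical sketch of that last capability, a single engine handling both push-based work flow and polling-based escalation, consider the following; the class and field names are illustrative assumptions, not any vendor’s product:

    # Toy single-engine sketch: push-based dispatch (work flow) plus
    # polling-based escalation. Names and thresholds are illustrative only.
    from dataclasses import dataclass, field
    import time

    @dataclass
    class WorkItem:
        task: str
        assignee: str
        created_at: float = field(default_factory=time.time)
        done: bool = False

    class ProcessEngine:
        def __init__(self, escalate_after_s: float):
            self.escalate_after_s = escalate_after_s
            self.queue = []

        def dispatch(self, task, assignee):
            item = WorkItem(task, assignee)   # push: work is sent to the assignee
            self.queue.append(item)
            print(f"Pushed '{task}' to {assignee}")
            return item

        def poll_escalations(self, supervisor):
            now = time.time()                 # poll: scan for overdue items
            for item in self.queue:
                if not item.done and now - item.created_at > self.escalate_after_s:
                    print(f"Escalating '{item.task}' to {supervisor}")
                    item.assignee = supervisor

    engine = ProcessEngine(escalate_after_s=0.1)
    engine.dispatch("Inspect feeder 12 breaker", "crew-7")
    time.sleep(0.2)                           # simulate the item going overdue
    engine.poll_escalations("maintenance-supervisor")

A production BPM engine would layer revision-managed process definitions, Web services interfaces and event-driven user interactions on top of this core dispatch-and-escalate loop.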

Since BPM supports knowledge capture from experienced employees, what is the relationship between BPM and knowledge management? Research has shown that the best way to capture the knowledge residing in workers’ heads is to transfer it into the systems they already use. Work and asset management systems hold job plans, operational steps, procedures, images, drawings and other documents. These systems are also the best place to put information required to perform a task that an experienced worker “just knows” how to do.

By creating appropriate work flows in support of BPM, workers can be guided through a “debriefing” stage, where they can review existing job plans and procedures, and look for tasks not sufficiently defined to be performed without the tacit knowledge learned through experience. Then, the procedure can be flagged for additional input by a knowledgeable craftsperson. This same approach can even help ensure the success of the “debriefing” application itself, since BPM tools by definition allow guidance to be built in by creating online help or by enhancing screen text to explain the next step.

SYSTEM CONSOLIDATION

System consolidation needs to involve more than simply combining applications. For utilities, consolidation efforts ought to focus on making systems agile enough to support near real-time visibility into critical asset data. This agility yields transparency across lines of business on the one hand and satisfies regulators and customers on the other. To achieve this level of transparency, utilities must enforce a modern enterprise architecture that supports both service-oriented architectures (SOAs) and BPM.

Done right, system consolidation allows utilities to create a framework supporting three key business areas:

  • Optimization of both human and physical assets;
  • Standardization of processes, data and accountability; and
  • Flexibility to change and adapt to what’s next.

The Need for Consolidation

Many utility transmission and distribution (T&D) divisions exhibit this need for consolidation. Over time, the business operations of many of these divisions have introduced different systems to support a perceived immediate need – without considering similar systems that may already be implemented within the utility. Eventually, the business finds it owns three different “stacks” of systems managing assets, work assignments and mobile workers – one for short-cycle service work, one for construction and still another for maintenance and inspection work.

With these systems in place, it’s nearly impossible to implement productivity programs – such as cross-training field crews in both construction and service work – or to take advantage of a “common work queue” that would allow workers to fill open time slots without returning to their regional service center. In addition, owning and operating these “siloed” systems adds significant IT costs, as each one has annual maintenance fees, integration costs, yearly application upgrades and retraining requirements.

In such cases, using one system for all work and asset management would eliminate multiple applications and deliver bottom-line operational benefits: more productive workers, more reliable assets and technology cost savings. One large Midwestern utility adopting the system consolidation approach was able to standardize on six core applications: work and asset management, financials, document management, geographic information systems (GIS), scheduling and mobile workforce management. The asset management system alone was able to consolidate more than 60 legacy applications. In addition to the obvious cost savings, these consolidated asset management systems are better able to address operational risk, worker health and safety and regulatory compliance – both operational and financial – making utilities more competitive.

A related benefit of system consolidation concerns the elimination of rogue “pop-up” applications. These are niche applications, often spreadsheets or standalone databases, which “pop up” throughout an organization on engineers’ desktops. Many of these applications perform critical roles in regulatory compliance yet are unlikely to pass muster at any Sarbanes-Oxley review. Typically, these pop-up applications are built to fill a “functionality gap” in existing legacy systems. Using an asset management system with a standards-based platform allows utilities to roll these pop-up applications directly into their standard supported work and asset management system.

Employees must interact with many systems in a typical day. How productive is the maintenance electrician who uses one system for work management, one for ordering parts and yet another for reporting his or her time at the end of a shift? Think of the time wasted navigating three distinct systems with different user interfaces, and the duplication of data that unavoidably occurs. How much more efficient would it be if the electrician were able to use one system that supported all of his or her work requirements? A logical grouping of systems clearly enables all workers to leverage information technology to be more efficient and effective.

Today, using modern, standards-based technologies like SOAs, utilities can eliminate the counterproductive mix of disparate commercial and “home-grown” systems. Automated processes can be delivered as Web services, allowing asset and service management to be included in the enterprise application portfolio, joining the ranks of human resource (HR), finance and other business-critical applications.
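
As a minimal sketch of the idea, the following exposes a hypothetical consolidated asset registry as a simple read-only web service using only the Python standard library; the endpoint layout, asset IDs and record fields are all illustrative assumptions:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical in-memory registry standing in for a consolidated
    # work and asset management system.
    ASSETS = {
        "XFMR-0042": {"type": "transformer", "status": "in-service"},
        "MTR-91833": {"type": "smart-meter", "status": "firmware-update-pending"},
    }

    class AssetServiceHandler(BaseHTTPRequestHandler):
        """Serves asset status as a simple read-only web service."""
        def do_GET(self):
            record = ASSETS.get(self.path.strip("/"))
            body = json.dumps(record or {"error": "unknown asset"}).encode()
            self.send_response(200 if record else 404)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # GET http://localhost:8080/XFMR-0042 returns the transformer record.
        HTTPServer(("localhost", 8080), AssetServiceHandler).serve_forever()

In a real SOA deployment, the same pattern – a well-defined service contract in front of the consolidated system – is what lets HR, finance and operational applications consume asset data without point-to-point integrations.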

But although system consolidation in general is a good thing, there is a “tipping point” where consolidating simply for the sake of consolidation no longer provides a meaningful return and can actually erode savings and productivity gains. A system consolidation strategy should center on core competencies. For example, accountants and doctors are both skilled service professionals, but their similarity at that high level doesn’t mean you would trade one for the other just to “consolidate” the bills you receive and the checks you have to write. You don’t want accountants reading your X-rays. The same is true for your systems’ needs. Your organization’s accounting or human resource software does not possess the unique capabilities to help you manage your mission-critical transmission and distribution, facilities, vehicle fleet or IT assets. Hence it is unwise to consolidate these mission-critical systems.

System consolidation strategically aligned with business requirements offers huge opportunities for improving productivity and eliminating IT costs. It also improves an organization’s agility and reverses the historical drift toward stovepipe or niche systems by providing appropriate systems for critical roles and stakeholders within the organization.

IT SERVICE MANAGEMENT

IT Service Management (ITSM) is critical to helping utilities deal with aging assets, infrastructure and employees primarily because ITSM enables companies to surf the accelerating trend of asset management convergence instead of falling behind more nimble competitors. Used in combination with pragmatic BPM and system consolidation strategies, ITSM can help utilities exploit the opportunities that this trend presents.

Three key factors are driving the convergence of management processes across IT assets (PCs, servers and the like) and operational assets (the systems and equipment through which utilities deliver service). The first concerns corporate governance, whereby corporate-wide standards and policies are forcing operational units to rethink their use of “siloed” technologies and are paving the way for new, more integrated investments. Second, utilities are realizing that to deal with their aging assets, workforce and systems dilemmas, they must increase their investments in advanced information and engineering technologies. Finally, the functional boundaries between the IT and operational assets themselves are blurring beyond recognition as more and more equipment utilizes on-board computational systems and is linked over the network via IP addresses.

Utilities need to understand this growing interdependency among assets, including the way individual assets affect service to the business and the requirement to provide visibility into asset status in order to properly address questions relating to risk management and compliance.

Corporate Governance Fuels a Cultural Shift

The convergence of IT and operational technology is changing the relationship between the formerly separate operational and IT groups. The operational units are increasingly relying on IT to help deal with their “aging trilogy” problem, as well as to meet escalating regulatory compliance demands and customers’ reliability expectations. In the past, operating units purchased advanced technology (such as advanced metering or substation automation systems) on an as-needed basis, unfettered by corporate IT policies and standards. In the process, they created multiple silos of nonstandard, non-integrated systems. But now, as their dependence on IT grows, corporate governance policies are forcing operating units to work within IT’s framework. Utilities can’t afford the liability and maintenance costs of nonstandard, disparate systems scattered across their operational and IT efforts. This growing dependence on IT has thus created a new cultural challenge.

A study by Gartner of the interactions among IT and operational technology highlights this challenge. It found that “to improve agility and achieve the next level of efficiencies, utilities must embrace technologies that will enable enterprise application access to real-time information for dynamic optimization of business processes. On the other hand, lines of business (LOBs) will increasingly rely on IT organizations because IT is pervasively embedded in operational and energy technologies, and because standard IT platforms, application architectures and communication protocols are getting wider acceptance by OT [operational technology] vendors.”[2]

In fact, an InformationWeek article (“Changes at C-Level,” August 1, 2006) warned that this cultural shift could result in operational conflict if not dealt with. In that article, Nathan Bennett and Stephen Miles wrote, “Companies that look to the IT department to bring a competitive edge and drive revenue growth may find themselves facing an unexpected roadblock: their CIO and COO are butting heads.” As IT assumes more responsibility for running a utility’s operations, the roles of CIO and COO will increasingly converge.

What Is an IT Asset, Anyhow?

An important reason for this shift is the changing nature of the assets themselves, as mentioned previously. Consider the question “What is an IT asset?” In the past, most people would say that this referred to things like PCs, servers, networks and software. But what about a smart meter? It has firmware that needs updates; it resides on a wired or wireless network; and it has an IP address. In an intelligent utility network (IUN), this is true of substation automation equipment and other field-located equipment. The same is true for plant-based monitoring and control equipment. So today, if a smart device fails, do you send a mechanic or an IT technician?

This question underscores why IT asset and service management will play an increasingly important role in a utility’s operations. Utilities will certainly be using more complex technology to operate and maintain assets in the future. Electronic monitoring of asset health and performance based on conditions such as meter or sensor readings and state changes can dramatically improve asset reliability. Remote monitoring agents – from third-party condition monitoring vendors or original equipment manufacturers (OEMs) of highly specialized assets – can help analyze the increasingly complex assets being installed today as well as optimize preventive maintenance and resource planning.
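
A minimal sketch of such condition-based monitoring logic follows; the metrics, thresholds and work-order wording are illustrative assumptions rather than any particular monitoring vendor’s rules:

    # Toy condition-based monitoring: compare sensor readings against
    # per-metric limits and emit a work-order request on a breach.
    from dataclasses import dataclass
    from typing import Optional

    # Hypothetical thresholds; a real system would load these from the
    # asset management system's preventive-maintenance rules.
    THRESHOLDS = {"oil_temp_c": 95.0, "vibration_mm_s": 7.1}

    @dataclass
    class Reading:
        asset_id: str
        metric: str
        value: float

    def evaluate(reading: Reading) -> Optional[str]:
        """Return a work-order description when a reading breaches its limit."""
        limit = THRESHOLDS.get(reading.metric)
        if limit is not None and reading.value > limit:
            return (f"Inspect {reading.asset_id}: {reading.metric}="
                    f"{reading.value} exceeds limit {limit}")
        return None

    for r in (Reading("XFMR-0042", "oil_temp_c", 101.3),
              Reading("XFMR-0042", "vibration_mm_s", 2.4)):
        order = evaluate(r)
        if order:
            print("WORK ORDER:", order)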

Moreover, utilities will increasingly rely on advanced technology to help them overcome the challenges of their aging assets, workers and systems. For example, as noted above, advanced information technology will be needed to capture the tacit knowledge of experienced workers as well as replace some manual functions with automated systems. Inevitably, operational units will become technology-driven organizations, heavily dependent on the automated systems and processes associated with IT asset and service management.

The good news for utilities is that a playbook of sorts is available that can help them chart the ITSM waters in the future. The de facto global standard for best practices process guidance in ITSM is the IT Infrastructure Library (ITIL), which IT organizations can adopt to support their utility’s business goals. ITIL-based processes can help utilities better manage IT changes, assets, staff and service levels. ITIL extends beyond simple management of asset and service desk activities, creating a more proactive organization that can reduce asset failures, improve customer satisfaction and cut costs. Key components of ITIL best practices include configuration, problem, incident, change and service-level management activities.
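
To make the process guidance concrete, here is a minimal, hypothetical sketch of incident lifecycle enforcement in the spirit of ITIL incident management; the states and transitions shown are a common simplification, not the normative ITIL definitions:

    # Simplified incident lifecycle in the spirit of ITIL incident management.
    # States and allowed transitions are an illustrative simplification.
    TRANSITIONS = {
        "new":         {"assigned"},
        "assigned":    {"in_progress"},
        "in_progress": {"resolved"},
        "resolved":    {"closed", "in_progress"},  # reopen if the fix fails
        "closed":      set(),
    }

    class Incident:
        def __init__(self, summary: str):
            self.summary = summary
            self.state = "new"

        def move_to(self, new_state: str) -> None:
            if new_state not in TRANSITIONS[self.state]:
                raise ValueError(f"Illegal transition: {self.state} -> {new_state}")
            self.state = new_state
            print(f"{self.summary}: -> {self.state}")

    incident = Incident("Smart meter head-end unreachable")
    for step in ("assigned", "in_progress", "resolved", "closed"):
        incident.move_to(step)

Encoding the lifecycle this way is what lets a service desk enforce change and service-level management consistently, rather than relying on staff to remember the process.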

Implemented together, ITSM best practices as embodied in ITIL can help utilities:

  • Better align asset health and performance with the needs of the business;
  • Improve risk and compliance management;
  • Improve operational excellence;
  • Reduce the cost of infrastructure support services;
  • Capture tacit knowledge from an aging workforce;
  • Utilize business process management concepts; and
  • More effectively leverage their intelligent assets.

CONCLUSION

The “perfect storm” brought about by aging assets, an aging workforce and legacy IT systems is challenging utilities in ways many have never experienced. The current, fragmented approach to managing assets and services has been a “good enough” solution for most utilities until now. But good enough isn’t good enough anymore, since this fragmentation often has led to siloed systems and organizational “blind spots” that compromise business operations and could lead to regulatory compliance risks.

The convergence of IT and operational technology (with its attendant convergence of asset management processes) represents a challenging cultural change; however, it’s a change that can ultimately confer benefits for utilities. These benefits include not only improvements to the bottom line but also improvements in the agility of the operation and its ability to control risks and meet compliance requirements associated with asset and service management activity.

To help weather the coming perfect storm, utilities can implement best practices in three key areas:

  • BPM technology can help utilities capture and propagate asset management best practices to mitigate the looming “brain drain” and improve operational processes.
  • Judicious system consolidation can improve operational efficiency and eliminate legacy systems that are burdening the business.
  • ITSM best practices as exemplified by ITIL can streamline the convergence of IT and operational assets while supporting a positive cultural shift to help operational business units integrate with IT activities and standards.

Best-practices management of all critical assets based on these guidelines will help utilities facilitate the visibility, control and standardization required to continuously improve today’s power generation and delivery environment.

ENDNOTES

1. Gartner’s 2006 CIO Agenda survey.

2. Bradley Williams, Zarko Sumic, James Spiers, Kristian Steenstrup, “IT and OT Interaction: Why Conflict Resolution Is Important,” Gartner Industry Research, Sept. 15, 2006.